\section{Introduction}
Automatically generated fake reviews have only recently become natural enough to
fool human readers. Yao et al. \cite{yao2017automated} use a deep neural network (a so-called 2-layer LSTM \cite{murphy2012machine}) to generate fake reviews, and conclude that these fake reviews look sufficiently genuine to fool native English speakers.
They train their model using real restaurant reviews from yelp.com \cite{challenge2013yelp}.
Once trained, the model is used to generate reviews character-by-character.
Due to the generation methodology, it cannot be easily targeted for a specific \emph{context} (meaningful side information).
Consequently, the review generation process may stray \emph{off-topic}. For instance, when generating a review for a Japanese restaurant in Las Vegas, the review generation process may include references to an Italian restaurant in Baltimore. The authors of \cite{yao2017automated} apply a post-processing step (\textit{customization}), which replaces food-related words with
more suitable ones (sampled from the targeted restaurant). The word replacement strategy has drawbacks: it can miss certain words and replace others independently of their surrounding words, which may alert savvy readers.
As an example: when we applied the customization technique described in \cite{yao2017automated} to a review for a Japanese restaurant, it replaced the snippet \textit{garlic knots for \underline{breakfast}} with \textit{garlic knots for \underline{sushi}}.
We propose a methodology based on neural machine translation (NMT) that improves the generation process by defining a context for each generated fake review.
Our context is a clear-text sequence of: the review rating, restaurant name, city, state and food tags (e.g. Japanese, Italian).
We show that our technique generates reviews that \emph{stay on topic}.
We can instantiate our basic technique into several variants.
We vet them on Amazon Mechanical Turk and find that native English speakers are very poor at recognizing our fake generated reviews.
For one variant,
the participants' performance is close to random: the class-averaged F-score of detection is $47\%$ (whereas random would be $42\%$ given the 1:6 imbalance in the test).
Via a user study with experienced, highly educated participants, we compare this variant (which we will henceforth refer to as \emph{NMT-Fake* reviews}) with fake reviews generated using the char-LSTM-based technique from \cite{yao2017automated}.
We demonstrate that NMT-Fake* reviews constitute a new category of fake reviews that \emph{cannot be detected} by classifiers trained only using previously known categories of fake reviews \cite{yao2017automated,mukherjee2013yelp,rayana2015collective}. Therefore, NMT-Fake* reviews may go undetected in existing online review sites.
To meet this challenge, we develop a classifier that detects NMT-Fake* reviews effectively (97\% F-score). Our main contributions are:
\begin{itemize}
\item We present a novel method for creating machine-generated fake user reviews that \textbf{generates content based on specific context}: venue name, user rating, city etc (Sections \ref{sec:model} to \ref{sec:generating}).
We demonstrate that our model can be trained faster (90\% reduction in training time compared to \cite{yao2017automated}, Section~\ref{sec:generating}) and resulting NMT-Fake* reviews are \textbf{highly effective in fooling native English speakers} (class-averaged F-score 47\%, Section~\ref{sec:amt}).
\item We \textbf{reproduce} a previously proposed \textbf{fake review generation method} \cite{yao2017automated} (Section~\ref{sec:repl}) and show that NMT-Fake* reviews are \textbf{statistically different} from previous fake reviews, and that classifiers trained on previous fake review types do \textbf{not detect} NMT-Fake* reviews (Section~\ref{sec:automated}).
\item We compare NMT-Fake* reviews with char-LSTM reviews in a user study.
We show that our reviews are \textbf{significantly better at evading detection} with statistical significance ($\alpha = 1\%$) (Section~\ref{sec:comparison}).
\item We develop \textbf{highly efficient statistical detection tools} to recognize NMT-Fake* reviews with 97\% F-score (Section~\ref{sec:detection}). We plan to share the implementation of our detector and generative model with other researchers to facilitate transparency and reproducibility.
\end{itemize}
\section{Background}
\noindent{\bf Fake reviews}
User-generated content \cite{o2008user} is an integral part of the contemporary user experience on the web.
Sites like \emph{tripadvisor.com}, \emph{yelp.com} and \emph{Google Play} use user-written reviews to provide rich information that helps other users choose where to spend money and time. User reviews are used for rating services or products, and for providing qualitative opinions. User reviews and ratings may be used to rank services in recommendations. Ratings affect the outward appearance of a business. As early as 2010, researchers estimated that a one-star rating increase affects business revenue by 5 -- 9\% on yelp.com \cite{luca2010reviews}.
Due to the monetary impact of user-generated content, some businesses have relied on so-called crowd-turfing \emph{agents} \cite{wang2012serf} that promise to deliver positive ratings written by \emph{workers} to a \emph{customer} in exchange for monetary compensation. Crowd-turfing ethics are complicated. For example, Amazon community guidelines prohibit buying content relating to promotions, but the act of writing fabricated content is not considered illegal, nor is matching workers to customers \cite{rinta2017understanding}.
In 2015, approximately 20\% of online reviews on yelp.com were suspected of being fake \cite{luca2016fake}.
Nowadays, user-generated review sites like yelp.com use filters and fraudulent review detection techniques.
These factors have raised the quality requirements for crowd-turfed reviews provided to review sites, which in turn has increased the cost of high-quality reviews. Due to this cost increase, researchers hypothesize the emergence of neural network-generated fake reviews. These neural-network-based fake reviews are statistically different from human-written fake reviews, and are not caught by classifiers trained on the latter \cite{yao2017automated}.
Detecting fake reviews can be done either at the individual level or via system-wide detection tools (i.e. regulation). Detecting fake online content at a personal level requires knowledge and skills in critical reading.
In 2017, the National Literacy Trust assessed that young people in the UK do not have the skillset to differentiate fake news from real news \cite{national2017commission}.
For example, 20\% of children aged 12--15 who use online news sites believe that all information on news sites is true.
\noindent{\bf Neural Networks}
Neural networks are function compositions that map input data through $k$ subsequent layers:
\begin{equation}
F(x) = (f_k \circ f_{k-1} \circ \dots \circ f_{2} \circ f_{1})(x),
\end{equation}
where the functions $f_k$ are typically non-linear and chosen by experts partly for known good performance on datasets and partly for simplicity of computational evaluation. Language models (LMs) \cite{jurafsky2014speech} are generative probability distributions that assign probabilities to sequences of tokens ($t_i$):
\begin{equation}
p(t_k | t_{<k}) = p(t_k | t_{k-1}, t_{k-2}, \dots, t_{2}, t_{1}),
\end{equation}
such that the language model can be used to predict how likely a specific token at time step $k$ is, based on the $k-1$ previous tokens. Tokens are typically either words or characters.
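The chain-rule scoring that a language model performs can be sketched as follows; `lm` is a stand-in (an assumption for illustration) for any model returning $p(t_k \mid t_{<k})$, here a toy uniform model:

```python
import math

def sequence_log_prob(tokens, lm):
    # Sum log p(t_k | t_1 .. t_{k-1}) over the sequence (chain rule).
    total = 0.0
    for k, t in enumerate(tokens):
        total += math.log(lm(t, tokens[:k]))
    return total

# Toy uniform "model" over a 4-token vocabulary.
uniform_lm = lambda token, history: 0.25
```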
For decades, deep neural networks were thought to be computationally too difficult to train. However, advances in optimization, hardware and the availability of frameworks have shown otherwise \cite{murphy2012machine}, \cite{kingma2014adam}.
Neural language models (NLMs) have been one of the promising application areas.
NLMs are typically various forms of recurrent neural networks (RNNs), which pass through the data sequentially and maintain a memory representation of the past tokens with a hidden context vector. There are many RNN architectures that focus on different ways of updating and maintaining context vectors: Long Short-Term Memory units (LSTM) and Gated Recurrent Units (GRUs) are perhaps most popular. Neural LMs have been used for free-form text generation.
In certain application areas, the quality has been high enough to sometimes fool human readers \cite{yao2017automated}.
Encoder-decoder (seq2seq) models \cite{cho2014learning} are architectures of stacked RNNs, which have the ability to generate output sequences based on input sequences. The encoder network reads in a sequence of tokens and passes it to a decoder network (an LM). In contrast to simpler NLMs, encoder-decoder networks can use additional context for generating text, which enables more accurate text generation.
Encoder-decoder models are integral in \emph{Neural Machine Translation} (\emph{NMT}) \cite{klein2017opennmt}, where the task is to translate a source text from one language to another language. NMT models additionally use beam search strategies to heuristically search the set of possible translations. Training datasets are parallel corpora; large sets of paired sentences in the source and target languages.
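The beam search heuristic can be sketched as follows; `step_fn`, the per-step next-token distribution, is a hypothetical stand-in for the decoder, not an openNMT-py API:

```python
import math

def beam_search(step_fn, beam_width=2, max_len=3):
    # step_fn(prefix) -> {token: prob}. Keep only the beam_width most
    # probable prefixes at each step; return the best full hypothesis.
    beams = [([], 0.0)]  # (token list, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for toks, lp in beams:
            for tok, p in step_fn(toks).items():
                candidates.append((toks + [tok], lp + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]
```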
The application of NMT techniques for online machine translation has significantly improved the quality of translations, bringing it closer to human performance \cite{wu2016google}.
Neural machine translation models are efficient at mapping one expression to another (one-to-one mapping). Researchers have evaluated these models for conversation generation \cite{mei2017coherent}, with mixed results. Some researchers attribute poor performance to the use of the negative log likelihood cost function during training, which emphasizes generation of high-confidence phrases rather than diverse phrases \cite{li2016diversity}. The results are often generic text, which lacks variation.
Li et al. have suggested various augmentations to this, among others suppressing typical responses in the decoder language model to promote response diversity \cite{li2016diversity}.
\section{System Model}
\graphicspath{ {figures/}}
We discuss the attack model, our generative machine learning method and controlling the generative process in this section.
\subsection{Attack Model}
Wang et al. \cite{wang2012serf} described a model of crowd-turfing attacks consisting of three entities: \textbf{customers} who desire to have fake reviews for a particular target (e.g. their restaurant) on a particular platform (e.g. Yelp), \textbf{agents} who offer fake review services to customers, and \textbf{workers} who are orchestrated by the agent to compose and post fake reviews.
Automated crowd-turfing attacks (ACA) replace workers by a \textbf{generative model}. This has several benefits including better economy and scalability (human workers are more expensive and slower) and reduced detectability (agent can better control the rate at which fake reviews are generated and posted).
We assume that the agent has access to public reviews on the review platform, by which it can train its generative model. We also assume that it is easy for the agent to create a large number of accounts on the review platform so that account-based detection or rate-limiting techniques are ineffective against fake reviews.
The quality of the generative model plays a crucial role in the attack. Yao et al. \cite{yao2017automated} propose the use of a character-based LSTM as the base generative model. LSTMs are not conditioned to generate reviews for a specific target \cite{murphy2012machine}, and may mix up concepts from different \emph{contexts} during free-form generation. Mixing contextually separate words is one of the key criteria that humans use to identify fake reviews.
Such mix-ups may violate known indicators of fake content \cite{rubin2006assessing}. For example, the review content may match neither the reader's prior expectations nor their information need.
We improve the attack model by considering a more capable generative model that produces more appropriate reviews: a neural machine translation (NMT) model.
\subsection{Generative Model}
\label{sec:model}
\subsubsection{Architecture}
We propose the use of NMT models for fake review generation. The method has several benefits: 1) the ability to \emph{learn} how to associate context (keywords) to reviews, 2) \emph{fast} training time, and 3) a high degree of \emph{customization} during production time, e.g. the introduction of specific waiter or food item names into reviews.
NMT models are constructions of stacked recurrent neural networks (RNNs). They include an \emph{encoder} network and a \emph{decoder} network, which are jointly optimized to produce a \emph{translation} of one sequence to another. The encoder rolls over the input data in sequence and produces \emph{one} $n$-dimensional context vector representation for the sentence. The decoder then generates output sequences based on the embedding vector and an \emph{attention module}, which is taught to associate output words with certain input words. The generation typically continues until a specific \emph{EOS} (end of sentence) token is encountered. The review length can be controlled in many ways, e.g. by setting the probability of generating the EOS token to zero until the required length is reached.
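The length-control trick mentioned above can be sketched as follows; the function and parameter names are illustrative assumptions, not the openNMT-py API:

```python
def mask_eos(log_probs, eos_id, n_generated, min_len):
    # Force a minimum review length: set the EOS token's log-probability
    # to -inf until min_len tokens have been generated, so EOS cannot
    # be selected by the (beam) search before then.
    if n_generated < min_len:
        log_probs = list(log_probs)
        log_probs[eos_id] = float("-inf")
    return log_probs
```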
NMT models often also include beam search \cite{klein2017opennmt}, which generates several hypotheses and chooses the best one amongst them. In our work, we use the greedy (single-beam) search technique. We forgo wider beam searches, as we found the output quality already adequate, and translation time increases linearly with each additional beam.
\subsubsection{Dataset}
We use the Yelp Challenge dataset \cite{challenge2013yelp} for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1 \textendash 5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks are not yet reported (Sep 2017) \cite{zhao2017news}.
As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words.
We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training.
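The preprocessing steps above could be sketched as follows (a minimal sketch; the exact punctuation set and rules of the original pipeline are assumptions):

```python
import re
import string

PRINTABLE = set(string.printable)

def preprocess(review):
    # Drop non-printable (non-ASCII) characters, separate punctuation
    # from words, and collapse excessive whitespace.
    text = "".join(c for c in review if c in PRINTABLE)
    text = re.sub(r"([.,!?;:()])", r" \1 ", text)
    return " ".join(text.split())
```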
NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs.
We set up a parallel corpus by constructing (context, review)-pairs from the dataset.
Next, we describe how we created our input context.
\subsubsection{Context}
The Yelp Challenge dataset includes metadata about restaurants, including their names, food tags, cities and states these restaurants are located in. For each restaurant review, we fetch this metadata and use it as our input context in the NMT model. The corresponding restaurant review is similarly set as the target sentence. This method produced 2.9 million pairs of sentences in our parallel corpus. We show one example of the parallel training corpus in Example 1 below:
\noindent\begin{verbatim} Example 1.
5 Public House Las Vegas NV Gastropubs Restaurants > Excellent
food and service . Pricey , but well worth it . I would recommend
the bone marrow and sampler platter for appetizers . \end{verbatim}
\noindent The order {\textbf{[rating name city state tags]}} is kept constant.
Training the model conditions it to associate certain sequences of words in the input sentence with others in the output.
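Constructing a (context, review) pair for the parallel corpus could be sketched as follows, using the metadata fields and fixed ordering described above:

```python
def make_pair(rating, name, city, state, tags, review):
    # Context order [rating name city state tags] is kept constant,
    # as in Example 1; the review is the target sentence.
    context = " ".join([str(rating), name, city, state] + list(tags))
    return (context, review)
```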
\subsubsection{Training Settings}
We train our NMT model on a commodity PC with an i7-4790k CPU (4.00GHz), 32GB RAM and one NVidia GeForce GTX 980 GPU. Our system processes approximately 1,300 \textendash 1,500 source tokens/s and approximately 5,730 \textendash 5,830 output tokens/s. Training one epoch takes 72 minutes on average. The model is trained for 8 epochs, i.e. overnight. We call fake reviews generated by this model \emph{NMT-Fake reviews}. We only need to train one model to produce reviews of different ratings.
We train with the Adam optimizer \cite{kingma2014adam} using the suggested learning rate of 0.001 \cite{klein2017opennmt}. Most parameters are kept at their default values; notably, the maximum sentence length of input and output is 50 tokens by default.
We leverage the framework openNMT-py \cite{klein2017opennmt} to train our NMT model.
We list used openNMT-py commands in Appendix Table~\ref{table:openNMT-py_commands}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ | l | }
\hline
Example 2. Greedy NMT \\
Great food, \underline{great} service, \underline{great} \textit{beer selection}. I had the \textit{Gastropubs burger} and it
\\
was delicious. The \underline{\textit{beer selection}} was also \underline{great}. \\
\\
Example 3. NMT-Fake* \\
I love this restaurant. Great food, great service. It's \textit{a little pricy} but worth\\
it for the \textit{quality} of the \textit{beer} and atmosphere you can see in \textit{Vegas}
\\
\hline
\end{tabular}
\label{table:output_comparison}
\end{center}
\caption{Na\"{i}ve text generation with NMT vs. generation using our NMT model. Repetitive patterns are \underline{underlined}. Contextual words are \emph{italicized}. Both examples here are generated based on the context given in Example~1.}
\label{fig:comparison}
\end{figure}
\subsection{Controlling generation of fake reviews}
\label{sec:generating}
Greedy NMT beam searches are practical in many NMT cases. However, when naively applied to fake review generation, the results are repetitive (see Example~2 in Figure~\ref{fig:comparison}).
The NMT model produces many \emph{high-confidence} word predictions, which are repetitive and obviously fake. In fact, we found that 43\% of the generated sentences started with the phrase ``Great food''. The lack of diversity in greedy use of NMTs for text generation is clear.
\begin{algorithm}[!b]
\KwData{Desired review context $C_\mathrm{input}$ (given as cleartext), NMT model}
\KwResult{Generated review $out$ for input context $C_\mathrm{input}$}
set $b=0.3$, $\lambda=-5$, $\alpha=\frac{2}{3}$, $p_\mathrm{typo}$, $p_\mathrm{spell}$ \\
$\log p \leftarrow \text{NMT.decode(NMT.encode(}C_\mathrm{input}\text{))}$ \\
out $\leftarrow$ [~] \\
$i \leftarrow 0$ \\
$\log p \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $1$, $[~]$, 0)~~~~~~~~~~~~~~~ |~random penalty~\\
\While{$i=0$ or $o_i$ not EOS}{
$\log \Tilde{p} \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$)~~~~~~~~~~~ |~start \& memory penalty~\\
$o_i \leftarrow$ \text{NMT.beam}($\log \Tilde{p}$, out) \\
out.append($o_i$) \\
$i \leftarrow i+1$
}
\text{return}~$\text{Obfuscate}$(out,~$p_\mathrm{typo}$,~$p_\mathrm{spell}$)
\caption{Generation of NMT-Fake* reviews.}
\label{alg:base}
\end{algorithm}
In this work, we describe how we succeeded in creating more diverse and less repetitive generated reviews, such as Example 3 in Figure~\ref{fig:comparison}.
We outline pseudocode for our methodology of generating fake reviews in Algorithm~\ref{alg:base}. The algorithm has several parameters, whose details we explain below.
We modify the openNMT-py translation phase by changing log-probabilities before passing them to the beam search.
We notice that reviews generated with openNMT-py contain almost no language errors. As an optional post-processing step, we obfuscate reviews by introducing natural typos/misspellings randomly. In the next sections, we describe how we succeeded in generating more natural sentences from our NMT model, i.e. generating reviews like Example~3 instead of reviews like Example~2.
\subsubsection{Variation in word content}
Example 2 in Figure~\ref{fig:comparison} repeats commonly occurring words given for a specific context (e.g. \textit{great, food, service, beer, selection, burger} for Example~1). Generic review generation can be avoided by decreasing probabilities (log-likelihoods \cite{murphy2012machine}) of the generator's LM, the decoder.
We constrain the generation of sentences by randomly \emph{imposing penalties to words}.
We tried several forms of added randomness, and found that adding constant penalties to a \emph{random subset} of the target words resulted in the most natural sentence flow. We call these penalties \emph{Bernoulli penalties}, since the random variables are chosen as either 1 or 0 (on or off).
\paragraph{Bernoulli penalties to language model}
To avoid generic sentence components, we augment the default language model $p(\cdot)$ of the decoder by
\begin{equation}
\log \Tilde{p}(t_k) = \log p(t_k | t_{k-1}, \dots, t_{1}) + \lambda q,
\end{equation}
where $q \in R^{V}$ is a vector of Bernoulli-distributed random values that obtain the value $1$ with probability $b$ and the value $0$ with probability $1-b$, and $\lambda < 0$. Parameter $b$ controls how much of the vocabulary is forgotten and $\lambda$ is a soft penalty of including ``forgotten'' words in a review.
$\lambda q_k$ emphasizes sentence forming with non-penalized words. The randomness is reset at the start of generating a new review.
Using Bernoulli penalties in the language model, we can ``forget'' a certain proportion of words and essentially ``force'' the creation of less typical sentences. We will test the effect of these two parameters, the Bernoulli probability $b$ and log-likelihood penalty of including ``forgotten'' words $\lambda$, with a user study in Section~\ref{sec:varying}.
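The Bernoulli penalty could be sketched as follows (a minimal sketch over a list-based log-probability vector; names are illustrative assumptions):

```python
import random

def bernoulli_penalty(log_probs, b=0.3, lam=-5.0, rng=random):
    # One Bernoulli(b) draw per vocabulary entry; "forgotten" words
    # (q_i = 1) receive the soft penalty lambda < 0.
    q = [1 if rng.random() < b else 0 for _ in log_probs]
    return [lp + lam * qi for lp, qi in zip(log_probs, q)]
```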
\paragraph{Start penalty}
We introduce start penalties to avoid generic sentence starts (e.g. ``Great food, great service''). Inspired by \cite{li2016diversity}, we add a random start penalty $\lambda \alpha^{i}$ to our language model, which decreases monotonically for each generated token. We set $\alpha = 0.66$, so that its effect decreases by roughly 90\% every 5 generated words.
\paragraph{Penalty for reusing words}
Bernoulli penalties do not prevent excessive use of certain words in a sentence (such as \textit{great} in Example~2).
To avoid excessive reuse of words, we included a memory penalty for previously used words in each translation.
Concretely, we add the penalty $\lambda$ to each word that has been generated by the greedy search.
\subsubsection{Improving sentence coherence}
\label{sec:grammar}
We visually analyzed reviews after applying these penalties to our NMT model. While the reviews were clearly diverse, they were \emph{incoherent}: the introduction of random penalties had degraded the grammaticality of the sentences. Amongst others, the use of punctuation was erratic, and pronouns were used semantically incorrectly (e.g. \emph{he} and \emph{she} might be swapped, as might ``and''/``but''). To improve the authenticity of our reviews, we added several \emph{grammar-based rules}.
The English language has several classes of words that are important for the natural flow of sentences.
We built a list of common pronouns (e.g. I, them, our), conjunctions (e.g. and, thus, if) and punctuation (e.g. commas and periods), and apply only half of the penalties to these words. We found that this change made the reviews more coherent. The pseudocode for this and the previous step is shown in Algorithm~\ref{alg:aug}.
The combined effect of grammar-based rules and LM augmentation is visible in Example~3, Figure~\ref{fig:comparison}.
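A minimal Python rendering of the augmentation step could look as follows; the dict-based log-probability table and token-level Bernoulli draws are simplifying assumptions, not our actual openNMT-py modification:

```python
import random

def discount(log_p, tokens, lam, grammar):
    # Grammar words (pronouns, conjunctions, punctuation) receive only
    # half of the penalty; all other listed tokens receive it in full.
    out = dict(log_p)
    for tok in tokens:
        if tok in out:
            out[tok] += lam / 2 if tok in grammar else lam
    return out

def augment(log_p, b, lam, alpha, last_token, step, grammar, rng=random):
    # Start penalty: Bernoulli-selected words, decaying as alpha**step.
    penalized = [tok for tok in log_p if rng.random() < b]
    log_p = discount(log_p, penalized, lam * alpha ** step, grammar)
    # Memory penalty: the token just generated is discouraged.
    return discount(log_p, [last_token], lam, grammar)
```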
\begin{algorithm}[!t]
\KwData{Initial log LM $\log p$, Bernoulli probability $b$, soft-penalty $\lambda$, monotonic factor $\alpha$, last generated token $o_i$, grammar rules set $G$}
\KwResult{Augmented log LM $\log \Tilde{p}$}
\begin{algorithmic}[1]
\Procedure {Augment}{$\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$}{ \\
generate $P_{\mathrm{1:N}} \leftarrow Bernoulli(b)$~~~~~~~~~~~~~~~|~$\text{One value} \in \{0,1\}~\text{per token}$~ \\
$I \leftarrow P>0$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~Select positive indices~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log p$, $I$, $\lambda \cdot \alpha^i$,$G$) ~~~~~~ |~start penalty~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log \Tilde{p}$, $[o_i]$, $\lambda$,$G$) ~~~~~~~~~ |~memory penalty~\\
\textbf{return}~$\log \Tilde{p}$
}
\EndProcedure
\\
\Procedure {Discount}{$\log p$, $I$, $\lambda$, $G$}{
\State{\For{$i \in I$}{
\eIf{$o_i \in G$}{
$\log p_{i} \leftarrow \log p_{i} + \lambda/2$
}{
$\log p_{i} \leftarrow \log p_{i} + \lambda$}
}}
\textbf{return}~$\log p$
\EndProcedure
}
\end{algorithmic}
\caption{Pseudocode for augmenting language model. }
\label{alg:aug}
\end{algorithm}
\subsubsection{Human-like errors}
\label{sec:obfuscation}
We notice that our NMT model produces reviews without
grammar mistakes.
This is unlike real human writers, whose sentences contain two types of language mistakes: 1) \emph{typos} that are caused by mistakes in human motoric input, and 2) \emph{common spelling mistakes}.
We scraped a list of common English language spelling mistakes from Oxford dictionary\footnote{\url{https://en.oxforddictionaries.com/spelling/common-misspellings}} and created 80 rules for randomly \emph{re-introducing spelling mistakes}.
Similarly, typos are randomly reintroduced based on the weighted edit distance\footnote{\url{https://pypi.python.org/pypi/weighted-levenshtein/0.1}}, such that typos resulting in real English words with small perturbations are emphasized.
We use autocorrection tools\footnote{\url{https://pypi.python.org/pypi/autocorrect/0.1.0}} for finding these words.
We call these augmentations \emph{obfuscations}, since they aim to confound the reader to think a human has written them. We omit the pseudocode description for brevity.
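The spelling-mistake part of the obfuscation step could be sketched as follows; the two-entry rule table is a hypothetical stand-in for the 80 rules scraped from the Oxford misspellings list:

```python
import random

# Illustrative stand-in for the scraped misspelling rules.
MISSPELLINGS = {"definitely": "definately", "restaurant": "restaraunt"}

def obfuscate(review, p_spell=0.1, rng=random):
    # Each eligible word is misspelled independently with prob. p_spell.
    out = []
    for word in review.split():
        if word in MISSPELLINGS and rng.random() < p_spell:
            word = MISSPELLINGS[word]
        out.append(word)
    return " ".join(out)
```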
\subsection{Experiment: Varying generation parameters in our NMT model}
\label{sec:varying}
Parameters $b$ and $\lambda$ control different aspects in fake reviews.
We show six different examples of generated fake reviews in Table~\ref{table:categories}.
Here, the largest differences occur with increasing values of $b$: visibly, the restaurant reviews become more extreme.
This occurs because a larger portion of the vocabulary is ``forgotten''. Reviews with $b \geq 0.7$ contain more rare word combinations, e.g. ``!!!!!'' as punctuation, and they occasionally break grammaticality (``experience was awesome'').
Reviews with lower $b$ are more generic: they contain safe word combinations like ``Great place, good service'' that occur in many reviews. The effect of parameter $\lambda$ is more subtle: it affects how random review starts are and, to a degree, the discontinuity between statements within the review.
We conducted an Amazon Mechanical Turk (MTurk) survey in order to determine what kind of NMT-Fake reviews are convincing to native English speakers. We describe the survey and results in the next section.
\begin{table}[!b]
\caption{Six different parametrizations of our NMT reviews and one example for each. The context is ``5 P~.~F~.~Chang ' s Scottsdale AZ'' in all examples.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
$(b, \lambda)$ & Example review for context \\ \hline
\hline
$(0.3, -3)$ & I love this location! Great service, great food and the best drinks in Scottsdale. \\
& The staff is very friendly and always remembers u when we come in\\\hline
$(0.3, -5)$ & Love love the food here! I always go for lunch. They have a great menu and \\
& they make it fresh to order. Great place, good service and nice staff\\\hline
$(0.5, -4)$ & I love their chicken lettuce wraps and fried rice!! The service is good, they are\\
& always so polite. They have great happy hour specials and they have a lot\\
& of options.\\\hline
$(0.7, -3)$ & Great place to go with friends! They always make sure your dining \\
& experience was awesome.\\ \hline
$(0.7, -5)$ & Still haven't ordered an entree before but today we tried them once..\\
& both of us love this restaurant....\\\hline
$(0.9, -4)$ & AMAZING!!!!! Food was awesome with excellent service. Loved the lettuce \\
& wraps. Great drinks and wine! Can't wait to go back so soon!!\\ \hline
\end{tabular}
\label{table:categories}
\end{center}
\end{table}
\subsubsection{MTurk study}
\label{sec:amt}
We created 20 jobs, each with 100 questions, and requested master workers in MTurk to complete the jobs.
We randomly generated each survey for the participants. Each review had a 50\% chance of being real or fake. The fake reviews were further drawn from the six (6) categories of fake reviews (Table~\ref{table:categories}).
The restaurant and the city were given as contextual information to the participants. Our aim was to use this survey to understand how well English speakers react to different parametrizations of NMT-Fake reviews.
Table~\ref{table:amt_pop} in Appendix summarizes the statistics for respondents in the survey. All participants were native English speakers from America. The base rate (50\%) was revealed to the participants prior to the study.
We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had great difficulty detecting our fake reviews. On average, the reviews were detected with a class-averaged \emph{F-score of only 56\%}, with a 53\% F-score for fake review detection and a 59\% F-score for real review detection. The results are very close to \emph{random detection}, where precision, recall and F-score would each be 50\%. Results are recorded in Table~\ref{table:MTurk_super}. Overall, the fake review generation is very successful, since the human detection rate across categories is close to random.
\begin{table}[t]
\caption{Effectiveness of Mechanical Turkers in distinguishing human-written reviews from fake reviews generated by our NMT model (all variants).}
\begin{center}
\begin{tabular}{ | c | c |c |c | c | }
\hline
\multicolumn{5}{|c|}{Classification report}
\\ \hline
Review Type & Precision & Recall & F-score & Support \\ \hline
\hline
Human & 55\% & 63\% & 59\% & 994\\
NMT-Fake & 57\% & 50\% & 53\% & 1006 \\
\hline
\end{tabular}
\label{table:MTurk_super}
\end{center}
\end{table}
We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had the most difficulty recognizing reviews of category $(b=0.3, \lambda=-5)$, where the true positive rate was $40.4\%$, while the true negative rate of the real class was $62.7\%$. The precisions were $16\%$ and $86\%$, respectively. The class-averaged F-score is $47.6\%$, which is close to random. Detailed classification reports are shown in Table~\ref{table:MTurk_sub} in the Appendix. Our MTurk study shows that \emph{our NMT-Fake reviews pose a significant threat to review systems}, since \emph{ordinary native English speakers have great difficulty separating real reviews from fake reviews}. We use the review category $(b=0.3, \lambda=-5)$ for further user tests in this paper, since MTurk participants had the most difficulty detecting these reviews. We refer to this category as NMT-Fake* in this paper.
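The class-averaged F-score above follows directly from the per-class precision and recall figures; the arithmetic can be checked as:

```python
def f_score(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Reported rates for category (b=0.3, lambda=-5): fake reviews detected
# with 40.4% recall at 16% precision; real reviews with 62.7% recall at
# 86% precision.
f_fake = f_score(0.16, 0.404)
f_real = f_score(0.86, 0.627)
class_averaged = (f_fake + f_real) / 2  # close to the reported 47.6%
```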
\section{Evaluation}
\graphicspath{ {figures/}}
We evaluate our fake reviews by first comparing them statistically to previously proposed types of fake reviews, and proceed with a user study with experienced participants. We demonstrate the statistical difference to existing fake review types \cite{yao2017automated,mukherjee2013yelp,rayana2015collective} by training classifiers to detect previous types and investigate classification performance.
\subsection{Replication of state-of-the-art model: LSTM}
\label{sec:repl}
Yao et al. \cite{yao2017automated} presented the current state-of-the-art generative model for fake reviews. The model is trained over the Yelp Challenge dataset using a two-layer character-based LSTM model.
We asked the authors of \cite{yao2017automated} for access to their LSTM model or a fake review dataset generated by their model. Unfortunately, they were not able to share either of these with us.
We therefore replicated their model as closely as we could, based on their paper and e-mail correspondence\footnote{We are committed to sharing our code with bonafide researchers for the sake of reproducibility.}.
We used the same graphics card (GeForce GTX) and trained using the same framework (torch-rnn in Lua). We downloaded the reviews from the Yelp Challenge dataset, preprocessed the data to contain only printable ASCII characters, and filtered out non-restaurant reviews. We trained the model for approximately 72 hours. We post-processed the reviews using the customization methodology described in \cite{yao2017automated} and e-mail correspondence. We call the fake reviews generated by this model LSTM-Fake reviews.
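The ASCII-filtering step of the preprocessing can be sketched as follows (a minimal illustration; our actual pipeline also removes non-restaurant reviews):

```python
def printable_ascii(text):
    """Keep only printable ASCII characters (codes 32..126),
    as in our review preprocessing step."""
    return "".join(ch for ch in text if 32 <= ord(ch) <= 126)

print(printable_ascii("Great sushi\u2014really!"))  # non-ASCII dash dropped
```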
\subsection{Similarity to existing fake reviews}
\label{sec:automated}
We now want to understand how NMT-Fake* reviews compare to a) LSTM fake reviews and b) human-generated fake reviews. We do this by comparing the statistical similarity between these classes.
For `a' (Figure~\ref{fig:lstm}), we use the Yelp Challenge dataset. We trained a classifier using 5,000 random reviews from the Yelp Challenge dataset (``human'') and 5,000 fake reviews generated by LSTM-Fake. Yao et al. \cite{yao2017automated} found that character features are essential in identifying LSTM-Fake reviews. Consequently, we use character features (n-grams up to 3).
For `b' (Figure~\ref{fig:shill}), we use the ``Yelp Shills'' dataset (a combination of YelpZip \cite{mukherjee2013yelp}, YelpNYC \cite{mukherjee2013yelp}, and YelpChi \cite{rayana2015collective}). This dataset labels entries that are identified as fraudulent by Yelp's filtering mechanism (``shill reviews'')\footnote{Note that shill reviews are probably generated by human shills \cite{zhao2017news}.}. The rest are treated as genuine reviews from human users (``genuine''). We use 100,000 reviews from each category to train a classifier. We use the commercial psychometric tool LIWC2015 \cite{pennebaker2015development} to generate features.
In both cases, we use AdaBoost (with 200 shallow decision trees) for training. For testing each classifier, we use a held out test set of 1,000 reviews from both classes in each case. In addition, we test 1,000 NMT-Fake* reviews. Figures~\ref{fig:lstm} and~\ref{fig:shill} show the results. The classification threshold of 50\% is marked with a dashed line.
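The character features can be illustrated with a minimal pure-Python sketch of n-gram counting (lengths 1--3, the feature family used above); the vectorization and AdaBoost training themselves are omitted:

```python
from collections import Counter

def char_ngrams(text, max_n=3):
    """Count character n-grams of length 1..max_n, the feature
    family used for the Human-vs-LSTM-Fake classifier."""
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return counts

feats = char_ngrams("great food", max_n=3)
print(feats["ea"], feats["foo"])  # bigram and trigram counts
```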
\begin{figure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/lstm.png}
\caption{Human--LSTM reviews.}
\label{fig:lstm}
\end{subfigure}
%
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/distribution_shill.png}
\caption{Genuine--Shill reviews.}
\label{fig:shill}
\end{subfigure}
\caption{
Histogram comparison of NMT-Fake* reviews with LSTM-Fake reviews and human-generated (\emph{genuine} and \emph{shill}) reviews. Figure~\ref{fig:lstm} shows that a classifier trained to distinguish ``human'' vs. LSTM-Fake cannot distinguish ``human'' vs NMT-Fake* reviews. Figure~\ref{fig:shill} shows NMT-Fake* reviews are more similar to \emph{genuine} reviews than \emph{shill} reviews.
}
\label{fig:statistical_similarity}
\end{figure}
We can see that our new generated reviews do not share strong attributes with previous known categories of fake reviews. If anything, our fake reviews are more similar to genuine reviews than previous fake reviews. We thus conjecture that our NMT-Fake* fake reviews present a category of fake reviews that may go undetected on online review sites.
\subsection{Comparative user study}
\label{sec:comparison}
We wanted to evaluate the effectiveness of fake reviews against tech-savvy users who understand and know to expect machine-generated fake reviews. We conducted a user study with 20 participants, all with computer science education and at least one university degree. Participant demographics are shown in Table~\ref{table:amt_pop} in the Appendix. Each participant first attended a training session where they were asked to label reviews (fake and genuine) and could later compare them to the correct answers -- we call these participants \emph{experienced participants}.
No personal data was collected during the user study.
Each person was given two randomly selected sets of 30 reviews (a total of 60 reviews per person), with reviews containing 10\textendash50 words each.
Each set contained 26 (87\%) real reviews from Yelp and 4 (13\%) machine-generated reviews,
numbers chosen based on suspicious review prevalence on Yelp~\cite{mukherjee2013yelp,rayana2015collective}.
One set contained machine-generated reviews from one of the two models (NMT ($b=0.3, \lambda=-5$) or LSTM),
and the other set reviews from the other in randomized order.
The number of fake reviews was revealed to each participant in the study description. Each participant was requested to mark four (4) reviews as fake.
Each review targeted a real restaurant. A screenshot of that restaurant's Yelp page was shown to each participant prior to the study. Each participant evaluated reviews for one specific, randomly selected restaurant. An example of the first page of the user study is shown in Figure~\ref{fig:screenshot} in the Appendix.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\columnwidth]{detection2.png}
\caption{Violin plots of detection rate in comparative study. Mean and standard deviations for number of detected fakes are $0.8\pm0.7$ for NMT-Fake* and $2.5\pm1.0$ for LSTM-Fake. $n=20$. A sample of random detection is shown as comparison.}
\label{fig:aalto}
\end{figure}
Figure~\ref{fig:aalto} shows the distribution of detected reviews of both types. A hypothetical random detector is shown for comparison.
NMT-Fake* reviews are significantly more difficult to detect for our experienced participants. On average, the detection rate (recall) is $20\%$ for NMT-Fake* reviews, compared to $61\%$ for LSTM-based reviews.
The precision (and F-score) is the same as the recall in our study, since participants labeled 4 fakes in each set of 30 reviews \cite{murphy2012machine}.
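The equality holds because each participant labels exactly as many reviews fake as there actually are, so false positives always equal false negatives. A small sketch (the review positions are illustrative):

```python
def scores(true_fakes, labeled_fakes):
    """Precision and recall when the number of labeled fakes is
    fixed to the number of actual fakes (4 per set of 30)."""
    tp = len(true_fakes & labeled_fakes)
    fp = len(labeled_fakes - true_fakes)  # equals fn whenever
    fn = len(true_fakes - labeled_fakes)  # |labeled| == |true|
    return tp / (tp + fp), tp / (tp + fn)

true_fakes = {2, 9, 17, 25}   # positions of the 4 actual fakes
labeled = {2, 9, 5, 12}       # a participant marks exactly 4
precision, recall = scores(true_fakes, labeled)
print(precision, recall)      # always identical in this setting
```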
\emph{The difference is statistically significant with confidence level $99\%$} (Welch's t-test).
We compared the detection rate of NMT-Fake* reviews to a random detector,
and found that \emph{our participants' detection rate of NMT-Fake* reviews is not statistically different from random predictions at the 95\% confidence level} (Welch's t-test).
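Welch's t-test is appropriate here because the two groups have unequal variances. A minimal pure-Python sketch of the test statistic; the detection counts below are illustrative placeholders, not the actual study data:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

# Illustrative per-participant detection counts (0..4 fakes found);
# NOT the actual study data.
nmt = [1, 0, 1, 0, 2, 1, 0, 1, 1, 1, 0, 1, 0, 1, 2, 0, 1, 1, 1, 1]
lstm = [3, 2, 2, 3, 1, 4, 2, 3, 2, 2, 3, 1, 2, 4, 3, 2, 3, 2, 2, 3]
print(welch_t(nmt, lstm))  # large |t| indicates a significant difference
```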
\section{Defenses}
\label{sec:detection}
We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). The features we used are recorded in Table~\ref{table:features_adaboost} (Appendix).
We used word-level features based on spaCy-tokenization \cite{honnibal-johnson:2015:EMNLP} and constructed n-gram representation of POS-tags and dependency tree tags. We added readability features from NLTK~\cite{bird2004nltk}.
\begin{figure}[ht]
\centering
\includegraphics[width=.7\columnwidth]{obf_score_fair_2.png}
\caption{
Adaboost-based classification of NMT-Fake and human-written reviews.
Effect of varying $b$ and $\lambda$ in fake review generation.
The variant that native speakers had the most difficulty detecting is well detectable by AdaBoost (97\%).}
\label{fig:adaboost_matrix_b_lambda}
\end{figure}
Figure~\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kinds of fake reviews. The classifier is very effective in detecting reviews that humans have difficulties detecting. For example, the fake reviews MTurk users had the most difficulty detecting ($b=0.3, \lambda=-5$) are detected with an excellent 97\% F-score.
The most important features for the classification were counts for frequently occurring words in fake reviews (such as punctuation, pronouns, articles) as well as the readability feature ``Automated Readability Index''. We thus conclude that while NMT-Fake reviews are difficult to detect for humans, they can be well detected with the right tools.
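The Automated Readability Index itself is a simple surface statistic; a sketch of the standard formula (our actual features come from NLTK, so this is only an illustration):

```python
def automated_readability_index(chars, words, sentences):
    """Standard ARI formula; higher scores indicate harder text."""
    return 4.71 * (chars / words) + 0.5 * (words / sentences) - 21.43

# E.g., a 3-sentence review of 40 words and 180 characters:
print(round(automated_readability_index(180, 40, 3), 2))
```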
\section{Related Work}
Kumar and Shah~\cite{kumar2018false} survey and categorize false information research. Automatically generated fake reviews are a form of \emph{opinion-based false information}, where the creator of the review may influence reader's opinions or decisions.
Yao et al. \cite{yao2017automated} presented their study on machine-generated fake reviews. Contrary to us, they investigated character-level language models, without specifying a specific context before generation. We leverage existing NMT tools to encode a specific context to the restaurant before generating reviews.
Supporting our study, Everett et al~\cite{Everett2016Automated} found that security researchers were less likely to be fooled by Markov chain-generated Reddit comments compared to ordinary Internet users.
Diversification of NMT model outputs has been studied in \cite{li2016diversity}. The authors proposed the use of a penalty to commonly occurring sentences (\emph{n-grams}) in order to emphasize maximum mutual information-based generation.
The authors investigated the use of NMT models in chatbot systems.
We found that unigram penalties applied to random tokens (Algorithm~\ref{alg:aug}) were easy to implement and produced sufficiently diverse responses.
\section {Discussion and Future Work}
\paragraph{What makes NMT-Fake* reviews difficult to detect?} First, NMT models allow the encoding of a relevant context for each review, which narrows down the possible choices of words that the model has to choose from. Our NMT model had a perplexity of approximately $25$, while the model of \cite{yao2017automated} had a perplexity of approximately $90$ \footnote{Personal communication with the authors}. Second, the beam search in NMT models narrows down choices to natural-looking sentences. Third, we observed that the NMT model produced \emph{better structure} in the generated sentences (i.e. a more coherent story).
\paragraph{Cost of generating reviews} With our setup, generating one review took less than one second. The cost of generation stems mainly from the overnight training. Assuming an electricity cost of 16 cents / kWh (California) and 8 hours of training, training the NMT model requires approximately 1.30 USD. This is a 90\% reduction in time compared to the state-of-the-art \cite{yao2017automated}. Furthermore, it is possible to generate both positive and negative reviews with the same model.
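The cost estimate can be reproduced with simple arithmetic; the 1 kW power draw below is an illustrative assumption (the paper reports only the total):

```python
def training_cost_usd(power_kw, hours, usd_per_kwh):
    """Electricity cost of model training."""
    return power_kw * hours * usd_per_kwh

# Assuming a roughly 1 kW machine (hypothetical), 8 h of overnight
# training, and the Californian rate of 0.16 USD/kWh:
print(round(training_cost_usd(1.0, 8, 0.16), 2))  # about 1.30 USD
```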
\paragraph{Ease of customization} We experimented with inserting specific words into the text by increasing their log likelihoods in the beam search. We noticed that the success depended on the prevalence of the word in the training set. For example, adding a +5 to \emph{Mike} in the log-likelihood resulted in approximately 10\% prevalence of this word in the reviews. An attacker can therefore easily insert specific keywords to reviews, which can increase evasion probability.
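The keyword-insertion trick amounts to adding a constant to a token's log-likelihood before ranking beam candidates; a toy sketch (the token distribution is illustrative, while the +5 bias matches the experiment above):

```python
def bias_scores(log_probs, keyword, bias=5.0):
    """Add a constant bias to one token's log-likelihood before
    beam-search ranking (illustrative values)."""
    boosted = dict(log_probs)
    if keyword in boosted:
        boosted[keyword] += bias
    return boosted

# Toy next-token log-likelihoods:
scores = {"the": -1.0, "Mike": -8.0, "food": -2.0}
boosted = bias_scores(scores, "Mike")   # -8.0 becomes -3.0
print(max(boosted, key=boosted.get))    # a larger bias would flip the winner
```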
\paragraph{Ease of testing} Our diversification scheme is applicable during \emph{generation phase}, and does not affect the training setup of the network in any way. Once the NMT model is obtained, it is easy to obtain several different variants of NMT-Fake reviews by varying parameters $b$ and $\lambda$.
\paragraph{Languages} The generation methodology is not per se language-dependent. The requirement for successful generation is that sufficient training data exists in the targeted language. However, our language model modifications require some knowledge of the target language's grammar to produce high-quality reviews.
\paragraph{Generalizability of detection techniques} Currently, fake reviews are not universally detectable. Our results highlight that it is difficult to claim detection performance on unseen types of fake reviews (Section~\ref{sec:automated}). We see this as an open problem that deserves more attention in fake review research.
\paragraph{Generalizability to other types of datasets} Our technique can be applied to any dataset, as long as there is sufficient training data for the NMT model. We used approximately 2.9 million reviews for this work.
\section{Conclusion}
In this paper, we showed that neural machine translation models can be used to generate fake reviews that are very effective in deceiving even experienced, tech-savvy users.
This supports anecdotal evidence \cite{national2017commission}.
Our technique is more effective than the state of the art \cite{yao2017automated}.
We conclude that machine-aided fake review detection is necessary since human users are ineffective in identifying fake reviews.
We also showed that detectors trained using one type of fake reviews are not effective in identifying other types of fake reviews.
Robust detection of fake reviews is thus still an open problem.
\section*{Acknowledgments}
We thank Tommi Gr\"{o}ndahl for assistance in planning user studies and the
participants of the user study for their time and feedback. We also thank
Luiza Sayfullina for comments that improved the manuscript.
We thank the authors of \cite{yao2017automated} for answering questions about
their work.
\bibliographystyle{splncs}
\section{Introduction}
One promising way to gain a better grasp of the effects of gravitation
is the investigation of test objects in curved spacetimes,
such as test particles, strings, or membranes.
Among these, the motion of a test particle is described by the
geodesic equations consisting of ordinary differential equations, which are easily accessible.
Accordingly, the analysis of the test particles has been worked out
in various background gravitational fields.
A natural generalization of the geodesic equation is given by
the Nambu-Goto equation for test strings or membranes describing such
test objects~\cite{Got71,Nam74}.
While the motion of geodesic particles is described by the world line
with the extremal spacetime interval, the Nambu-Goto strings or membranes
are characterized by their extremal area or volume for world sheets.
In a unified point of view, these geodesic particles, Nambu-Goto strings/membranes are regarded as
the harmonic mappings for isometric embeddings
from lower-dimensional spacetimes
into the spacetime.
The Nambu-Goto strings or membranes are considered to arise naturally
in our Universe. They correspond to the thin wall approximation
of the topological defect
produced via the symmetry breaking of the gauge interactions in the standard model
of elementary particles.
Few of their analytic solutions are known,
since they are subject to partial differential
equations. In cosmological applications, the main approach relies on numerical
simulations, which are very useful for getting insight into the scenario for
the structure formation in our Universe~\cite{VS95}.
It is known that considerable simplification occurs for the equation of motion
of Nambu-Goto strings when the isometry of the background spacetime
acts on the string world sheet, in which case the system reduces to that of
particle motion in a certain lower-dimensional manifold~\cite{CFH91,KF08,KKI15}.
In this direction, many interesting analytic solutions for the Nambu-Goto equation have been
found. In particular, stationary string solutions
in stationary black hole spacetimes are extensively studied~\cite{FSZH89,LS95,dVE96,FHDV97}.
These are regarded as
final equilibrium configurations of Nambu-Goto
strings in the presence of a black hole~\cite{OIKNS08}.
Initially dynamical strings would radiate their
energy due to dissipative processes such as the emission of Nambu-Goldstone bosons; these effects are, however, neglected in the test-string approximation. Namely,
some strings would
fall into the black hole and others,
via such dissipative processes,
would settle down to final stable configurations, which would be described by
stationary solutions.
Hence, we are interested in the stability of these stationary string configurations in curved background spacetimes.
Since the Nambu-Goto equation reduces to the linear wave equation in flat backgrounds, its linear perturbation is also subject to the linear wave equation, so any stationary string in a flat background would be stable under small fluctuations. In the presence of a black hole,
however, we cannot expect such stability for stationary strings. Hence,
we have to check the stability of stationary solutions separately.
The linear perturbations of the Nambu-Goto strings are formulated by
Guven~\cite{Guv93},
and the stability problem for Nambu-Goto strings has been analyzed
by many authors~\cite{Lar95,LF94,KL06,BKP17}.
We also follow Guven's approach in order to study the stability problem of
stationary strings.
The subject of this paper is the stability problem of
a closed string winding around a flat torus embedded in a five-dimensional nonrotating
black hole background, which is a nontrivial solution of the Nambu-Goto equation recently discussed in Ref.~\cite{BEI08}.
This string solution is characterized by a
parameter corresponding to the distance between
the string and the event horizon.
We show that this class of string solutions allows
a fully analytic treatment of the linear perturbations and that
these configurations can be proven to be
always unstable. This means that the presence of
a black hole may switch on the instability of strings even if they are localized in an almost
flat region distant from the event horizon,
which might be unexpected with naive considerations.
This paper is organized as follows: Section 2 reviews the linear perturbation formalism developed by Guven, to keep the paper self-contained.
Section 3 studies the stability of corresponding closed string solutions in the five-dimensional flat spacetime as a reference.
Section 4, which includes our main result,
investigates the closed string solutions
in five-dimensional Schwarzschild spacetime,
and gives a proof that they are all unstable.
Finally, Section 5 presents our conclusions.
\section{Review of linear perturbations of Nambu-Goto membranes}
First, we review the formulation of linear perturbation for Nambu-Goto
membranes in general settings developed by Guven~\cite{Guv93}.
Let $M$ be an $m$-dimensional spacetime, and let $g$ be a Lorentzian
metric on $M$ with the signature $(-,+,\dots,+)$.
We consider the isometric embedding of an $n$-dimensional differentiable
manifold $N$
into $M$, where $2\le n\le m-1$. In terms of the local coordinates,
this embedding would be expressed as
\begin{align*}
x^\mu=X^\mu(\xi^a)
\end{align*}
where $x^\mu$'s $(\mu=1,\dots,m)$ are local coordinates on $M$,
while $\xi^a$'s $(a=1,\dots,n)$ are those on $N$.
We consider only the timelike embeddings so that
the spacetime metric $g$ induces the Lorentzian metric
\begin{align}
G_{ab}=g_{\mu\nu}(D_aX^\mu)D_bX^\nu
\end{align}
on $N$,
where $D_a$ denotes the $G$ connection on $N$.
The Nambu-Goto membranes are defined as the embeddings that extremize the
action
\begin{align*}
S[X^\mu]=-T\int d^n\xi \sqrt{|G|},
\end{align*}
where $T$ is a positive constant that does not play any special role in the
present argument, and $G$ is an abbreviation for $\operatorname{det}G_{ab}$.
This leads to the Euler-Lagrange equation,
\begin{align*}
D_aD^a X^\mu+\Gamma^\mu_{\nu\lambda}(D_aX^\nu)(D_bX^\lambda)G^{ab}=0,
\end{align*}
where $\Gamma^\mu_{\nu\lambda}$
denotes the restriction of the Christoffel symbol for the $g$ connection on $N$.
Note that $D_aX^\mu$ corresponds to the $x^\mu$ component of the coordinate basis
$\partial/\partial \xi^a$ of the tangent space of $N$.
The extrinsic curvature of the embedding $N\xhookrightarrow{} M$ is measured in terms of
the second fundamental form
\begin{align}
K^\mu{}_{ab}=D_a D_b X^\mu+\Gamma^\mu_{\nu\lambda}(D_aX^\nu)D_bX^\lambda,
\label{DDX}
\end{align}
defined on $N$.
This is a symmetric tensor field on $N$, i.e.,
\begin{align*}
K^\mu{}_{ab}=K^\mu{}_{(ab)}
\end{align*}
holds, and
this can also be seen as a normal vector of $N$ parametrized by $(a,b)$
in the sense that
\begin{align*}
g_{\mu\nu}K^\mu{}_{ab}D_cX^\nu=0
\end{align*}
holds on $N$, as is easily confirmed.
The Nambu-Goto equation just says that the trace of the extrinsic curvature
vectors is zero:
\begin{align*}
K^\mu{}_{ab}G^{ab}=0.
\end{align*}
Denote by $X^\mu$ the unperturbed solution to the Nambu-Goto equation.
Our purpose here is to write down the linearized Nambu-Goto equation for
small deviation $\delta X^\mu$ from $X^\mu$.
Obviously, it is sufficient to assume that $\delta X^\mu$ should be normal
to $N$, since the tangential component of $\delta X^\mu$ with respect to $N$
corresponds to diffeomorphism on $N$, which is uninteresting.
Hence, we set
\begin{align*}
\delta X^\mu=f^A n_A{}^\mu,
\end{align*}
in terms of $m-n$ differentiable functions $f^A$ $(A=1,\dots,m-n)$ on $N$,
where $n_A{}^\mu$'s constitute an orthonormal frame field for the normal bundle on $N$, i.e. such that
\begin{align*}
g_{\mu\nu}n_A{}^\mu D_aX^\nu&=0,\\
g_{\mu\nu}n_A{}^\mu n_B{}^\nu&=\eta_{AB},\\
\eta_{AB}&=\operatorname{diag}(1,\dots,1).
\end{align*}
Note that there is an $O_{m-n}$ gauge freedom in choosing the $n_A$'s, so that
the resultant equations for the $f^A$'s should be covariant under the $O_{m-n}$ action.
Here, we need to introduce some more
geometric quantities associated with the embedding, which do not appear
in the case of codimension-1 embeddings (see, e.g., Ref.~\cite{Eis26}).
Define the covariant vector field on $N$, parametrized by $(A,B)$, by
\begin{align*}
\mu_{ABa}=g_{\mu\nu}n_A{}^\mu D_a n_B{}^\nu+\Gamma_{\alpha\beta\gamma}
n_A{}^\alpha n_B{}^\beta D_a X^\gamma,
\end{align*}
which is essentially the projection of $n_A{}^\mu\nabla_\nu n_{B\mu}$ onto the
cotangent space on $N$, i.e. a part of the Cartan's connection coefficients
appearing in the structure equations for the orthonormal frame in $M$.
Note that $n_A{}^\mu$ is regarded as a scalar function on $N$ in the above expression so that $D_a$ denotes just a partial derivative with respect to $\xi^a$.
These $\mu_{ABa}$'s are not independent but subject to
\begin{align*}
\mu_{ABa}=\mu_{[AB]a}.
\end{align*}
These coefficients appear in the orthogonal decomposition of
$D_an_A{}^\mu$ as
\begin{align}
D_a n_A{}^\mu=-K_A{}^b{}_a D_bX^\mu-\mu_A{}^B{}_a n_B{}^\mu-\Gamma^\mu_{\nu\lambda}
(D_aX^\nu)n_A{}^\lambda
\label{Dn},
\end{align}
as readily confirmed (see, e.g., Ref.~\cite{Eis26}),
where we have defined
\begin{align*}
K_{Aab}=n_{A\mu}K^\mu{}_{ab}.
\end{align*}
According to the small displacement $X^\mu\mapsto X^\mu+f^An_A{}^\mu$,
the induced metric on the membrane undergoes a variation $\delta G_{ab}$,
which in the first order of $f^A$'s is given by
\begin{align*}
\delta G_{ab}=-2K_{Aab}f^A.
\end{align*}
The variation of $\sqrt{|G|}D_aD^a X^\mu$ becomes
\begin{align}
\delta (\sqrt{|G|}D_a D^a X^\mu)
&=\sqrt{|G|}\biggl\{
n_A{}^\mu D_aD^a f^A
+\left[2K_A{}^{ab}(D_bX^\mu)
+2(D^an_A{}^\mu)\right]D_af^A\nonumber\\
&+\left[2(D_a K_A{}^{ab})(D_bX^\mu)
+2K_A{}^{ab}(D_aD_b X^\mu)+(D_aD^an_A{}^\mu)\right ]
f^A
\biggr\}.
\label{1st}
\end{align}
Thus, we need the expression for the d'Alembertian of $n_A{}^\mu$,
which becomes
\begin{align}
D_aD^an_A{}^\mu&=
\left[-(D_aK_A{}^{ab})
+\mu_A{}^B{}_aK_B{}^{ab}\right]D_bX^\mu
-K_A{}^{ab}K^\mu{}_{ab}
\nonumber\\
&+2\Gamma^\mu_{\alpha\beta}(D_aX^\alpha)[
K_A{}^{ab}D_bX^\beta+\mu_A{}^{Ba}n_B{}^\beta]\nonumber\\
&+\left[-D_a\mu_A{}^{Ca}+\mu_A{}^B{}_a\mu_B{}^{Ca}
\right]n_C{}^\mu\nonumber\\
&+(-\Gamma^\mu_{\alpha\gamma,\beta}+\Gamma^\mu_{\gamma\delta}\Gamma^\delta_{\alpha\beta}
+\Gamma^\mu_{\beta\delta}\Gamma^\delta_{\alpha\gamma})G^{\alpha\beta}n_A{}^\gamma,
\label{DDn}
\end{align}
where
\begin{align*}
G^{\alpha\beta}=G^{ab}(D_aX^\alpha)D_bX^\beta
\end{align*}
is the component of $G^{ab}$ written in terms of the spacetime coordinate
system.
Next, the first variation of the second terms of the Nambu-Goto equation
results in
\begin{align}
\delta (\sqrt{|G|}\Gamma^\mu_{\alpha\beta}G^{\alpha\beta})
&=\sqrt{|G|}\biggl\{
2\Gamma^\mu_{\alpha\beta}(D^aX^\alpha)n_A{}^\beta D_a f^A\nonumber\\
&+\Gamma^\mu_{\alpha\beta}\left[K_A{}^{ab}(D_a X^\alpha)(D_bX^\beta)
-2\mu_A{}^{Ba}(D_aX^\alpha)n_B{}^\beta\right]f^A\nonumber\\
&+\left[
(\Gamma^\mu_{\alpha\beta,\gamma}-2\Gamma^\mu_{\alpha\delta}\Gamma^\delta_{\beta\gamma})
G^{\alpha\beta}n_A{}^\gamma
\right]f^A
\biggr\}.
\label{2nd}
\end{align}
Finally, from Eqs.~(\ref{1st}) and (\ref{2nd}),
using Eqs.~(\ref{DDX}), (\ref{Dn}), and (\ref{DDn}),
we obtain
\begin{align}
\delta\bigl(\sqrt{|G|}(D_aD^aX^\mu+\Gamma^\mu_{\alpha\beta}G^{\alpha\beta})\bigr)
&=\sqrt{|G|}( L f^C)n_C{}^\mu,
\end{align}
where
\begin{align}
Lf^C&=D_aD^a f^C-2\mu_A{}^{Ca}D_af^A
+\biggl[
K_A{}^{ab}K^C{}_{ab}
-D_a\mu_A{}^{Ca}+\mu_A{}^B{}_a\mu_B{}^{Ca}\nonumber\\
&+R_{\alpha\beta}n_A{}^\alpha n^{C\beta}
-R_{\alpha\beta\gamma\delta}n_A{}^\alpha n_D{}^\beta n^{C\gamma}n^{D\delta}
\biggr]f^A,
\end{align}
which involves the spacetime Riemann and Ricci curvatures
defined by
\begin{align*}
R^\mu{}_{\nu\lambda\rho}&=\Gamma^\mu_{\nu\rho,\lambda}-
\Gamma^\mu_{\nu\lambda,\rho}
+\Gamma^\mu_{\lambda\delta}\Gamma^\delta_{\nu\rho}
-\Gamma^\mu_{\rho\delta}\Gamma^\delta_{\nu\lambda},\\
R_{\nu\rho}&=R^\mu{}_{\nu\mu\rho}.
\end{align*}
Hence, the linear perturbations of the Nambu-Goto membranes are
governed by
\begin{align*}
Lf^C=0.
\end{align*}
For consistency, let us confirm the covariance of $Lf^C=0$ under the
local $O_{m-n}$ transformation
\begin{align*}
n_A{}^\mu\mapsto O_A{}^Bn_B{}^\mu
\end{align*}
in terms of an orthogonal matrix field $O_A{}^B$ on $N$.
This is regarded as the gauge transformation in the principal
$O_{m-n}$ bundle over $N$.
The quantities $f^A$ and $K^A{}_{ab}$ transform like tensors as
\begin{align*}
(f^A,K^A{}_{ab})\mapsto O^A{}_B(f^B,K^B{}_{ab}),
\end{align*}
while $\mu_A{}^B{}_a$ transforms like the principal $O_{m-n}$ connection as
\begin{align*}
\mu_{A}{}^B{}_a\mapsto
O_A{}^CD_a O^B{}_C
+O_A{}^C\mu_C{}^D{}_a O^B{}_D,
\end{align*}
where the first term can be regarded as the pure gauge.
These guarantee the covariance of the perturbation equation:
\begin{align*}
Lf^C\mapsto O^C{}_D Lf^D.
\end{align*}
With this interpretation of $\mu_A{}^B{}_a$ as the connection on the
principal $O_{m-n}$-bundle, which is an $\frak{o}_{m-n}$-valued 1-form on $N$,
it turns out that the perturbation equation can be written more
compactly as
\begin{align}
L f^B=\mathscr{D}_a\mathscr{D}^a f^B
+\left(K_A{}^{ab}K^B{}_{ab}
+R_{\alpha\beta}n_A{}^\alpha n^{B\beta}
-R_{\alpha\beta\gamma\delta}n_A{}^\alpha n_C{}^\beta n^{B\gamma}n^{C\delta}
\right)f^A=0,
\label{p-eq}
\end{align}
where the covariant derivative
\begin{align*}
\mathscr{D}_af^B=D_a f^B-\mu_A{}^B{}_a f^A
\end{align*}
acting on sections of the associated vector bundle is defined.
This expression for the perturbed Nambu-Goto equation
is manifestly covariant under the $O_{m-n}$ gauge
transformation and the diffeomorphism on $N$.
\section{Stationary closed strings in 5-dimensional flat spacetimes}
Here, we consider stationary Nambu-Goto strings winding around a flat torus in the five-dimensional flat spacetime as a reference for a later section.
Starting with the line element
\begin{align*}
g=-(dx^0)^2+(dx^1)^2+(dx^2)^2+(dx^3)^2+(dx^4)^2,
\end{align*}
we take a coordinate system $(t,r,\phi,\rho,\psi)$ determined by
\begin{align*}
x^0=t,~~ x^1=r\cos\phi,~~
x^2=r\sin\phi,~~
x^3=\rho\cos\psi,~~
x^4=\rho\sin\psi.
\end{align*}
Here and in what follows, we set the speed of light to unity.
Then, the line element takes the form
\begin{align*}
g=-dt^2+dr^2+r^2d\phi^2+d\rho^2+\rho^2 d\psi^2
\end{align*}
with these coordinates.
It admits a simple solution to the Nambu-Goto equation
\begin{align*}
t=\dfrac{pq}{\sqrt{p^2+q^2}}R\tau,~~
r=\dfrac{q}{\sqrt{p^2+q^2}}R,~~
\rho=\dfrac{p}{\sqrt{p^2+q^2}}R,~~
\phi=p \sigma,~~
\psi=q(\sigma+\tau),
\end{align*}
where $\tau\in\boldsymbol{R}$ and $\sigma\in \boldsymbol{R}/2\pi\boldsymbol{Z}$ are world sheet coordinates,
$p$ and $q$ are coprime integers,
and $R$ is a positive real number.
This describes a closed string winding around a torus at $(r,\rho)={\rm const}.$
with the winding number characterized by the coprime pair $(p,q)$,
and the closed string is stationarily scrolling on the torus.
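One can check directly that this ansatz solves the Nambu-Goto equation. Since the induced metric components are constant here, $\sqrt{|G|}G^{ab}$ is a constant matrix, and a short computation (sketched here for convenience) gives
\begin{align*}
\sqrt{|G|}\,G^{ab}=\left(
\begin{array}{cc}
-2&1\\
1&0
\end{array}
\right),\qquad
\partial_a\bigl(\sqrt{|G|}G^{ab}\partial_bX^\mu\bigr)
=-2\partial_\tau(\partial_\tau-\partial_\sigma)X^\mu=0,
\end{align*}
where the last equality holds because each Cartesian component $X^\mu$ is either linear in $\tau$, a function of $p\sigma$ alone, or a function of $q(\sigma+\tau)$, which is annihilated by $\partial_\tau-\partial_\sigma$.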
We consider the linear perturbation of this solution.
Since the Nambu-Goto equation in flat space reduces to
a linear wave equation,
\begin{align*}
D_a D^a X^\mu=0,
\end{align*}
its perturbation $\delta X^\mu$ is also subject to the same equation:
\begin{align*}
D_a D^a\delta X^\mu=0.
\end{align*}
Its solution generally contains diffeomorphism on the world sheet,
which is unphysical.
On the other hand, Eq.~(\ref{p-eq}) describes only physical modes
contained in $\delta X^\mu$.
Now, we set
\begin{align*}
n_1&=\partial_r,\\
n_2&=\partial_\rho,\\
n_3&=\partial_t-\dfrac{\sqrt{p^2+q^2}}{pqR}(p\partial_\phi-q\partial_\psi),
\end{align*}
as an orthonormal frame $(n_1,n_2,n_3)$ for the normal space to the world sheet.
We need the following geometric quantities:
\begin{align*}
G_{ab}&=\dfrac{p^2q^2R^2}{p^2+q^2}\left(
\begin{array}{cc}
0&1\\
1&2
\end{array}
\right),\\
%
K^1{}_{ab}&=-\dfrac{p^2 q R}{\sqrt{p^2+q^2}}
\left(\begin{array}{cc}0&0\\0&1 \end{array} \right),~~
K^2{}_{ab}=-\dfrac{pq^2 R}{\sqrt{p^2+q^2}}
\left(\begin{array}{cc}1&1\\1&1 \end{array}\right),~~
K^3{}_{ab}=0, \\
%
\mu_A{}^B{}_\tau&=\left(
\begin{array}{ccc}
0&0&0\\
0&0&-q\\
0&q&0
\end{array}
\right),~~
\mu_A{}^B{}_\sigma=\left(
\begin{array}{ccc}
0&0&p\\
0&0&-q\\
-p&q&0
\end{array}
\right).
\end{align*}
Then, assuming $f^A\propto e^{-i\omega\tau+ik\sigma}$ $(k\in\boldsymbol{Z})$,
the perturbation equation (\ref{p-eq}) reduces to
the algebraic equation
\begin{align*}
A\left(
\begin{array}{c}
f^1\\f^2\\f^3
\end{array}
\right)=0,~~~
A=\left(
\begin{array}{ccc}
\omega(\omega+k)&pq&-ip\omega\\
pq&\omega(\omega+k)&-iq(\omega+k)\\
ip\omega&iq(\omega+k)&\omega(\omega+k)
\end{array}
\right).
\end{align*}
The condition that this equation admits a nontrivial solution for
$f^A$'s is determined by
\begin{align*}
\operatorname{det}A=\omega(\omega+k)(\omega+k+p)(\omega+k-p)(\omega+q)(\omega-q)=0,
\end{align*}
and hence, it is given by
\begin{align*}
\omega=0,~~-k,~~-k\pm p,~~\pm q.
\end{align*}
All these frequencies are real, showing the stability of the Nambu-Goto strings in the flat
background, as expected.
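The factorization of $\operatorname{det}A$ can be double-checked numerically; a short pure-Python sketch comparing the determinant with the factored form at sample points:

```python
def det3(M):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def det_A(w, k, p, q):
    """det A for the flat-space perturbation matrix."""
    a = w * (w + k)
    A = [[a, p * q, -1j * p * w],
         [p * q, a, -1j * q * (w + k)],
         [1j * p * w, 1j * q * (w + k), a]]
    return det3(A)

def factored(w, k, p, q):
    """Claimed factorization of det A."""
    return w * (w + k) * (w + k + p) * (w + k - p) * (w + q) * (w - q)

# Agreement at a few sample points:
for (w, k, p, q) in [(0.7, 3, 2, 5), (-1.3, 1, 3, 4), (2.2, -2, 5, 7)]:
    f = factored(w, k, p, q)
    assert abs(det_A(w, k, p, q) - f) < 1e-8 * max(1.0, abs(f))
print("determinant factorization verified")
```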
\section{Stationary closed strings in 5-dimensional black-hole space-times}
As a straightforward extension to the example given in the previous section,
we consider the stationary closed strings winding around a flat torus
in the five-dimensional Schwarzschild spacetime.
This turns out to be one of the simplest cases allowing an analytic treatment of the perturbation
equation, which is one of the reasons why
we describe it in this paper.
Here, we show that this type of closed string is generally unstable under
small perturbations in the presence of the black hole.
The line element of the five-dimensional Schwarzschild spacetime is
given by
\begin{align*}
g=-\left(1-\dfrac{r_0^2}{r^2}\right)dt^2
+\left(1-\dfrac{r_0^2}{r^2}\right)^{-1}dr^2
+r^2[d\theta^2+(\sin\theta)^2d\phi^2+(\cos\theta)^2d\psi^2],
\end{align*}
where $r_0>0$ corresponds to the radius
of the event horizon,
$\theta\in (0,\pi/2)$, $\phi\in \boldsymbol{R}/2\pi\boldsymbol{Z}$, $\psi\in \boldsymbol{R}/2\pi\boldsymbol{Z}$
are the coordinates on the 3-sphere given by $t,r={\rm const}.$
This admits a Nambu-Goto string solution,
\begin{align}
t=\dfrac{sr_0\tau}{\sqrt{2}},~~
r=sr_0,~~
\theta=\dfrac{\pi}{4},~~
\phi=\sigma,~~
\psi=\sigma+\tau,
\label{ng-sol}
\end{align}
where $\tau\in\boldsymbol{R}$ and $\sigma\in\boldsymbol{R}/2\pi\boldsymbol{Z}$ are world sheet coordinates and
$s>\sqrt{2}$ is the unique parameter of this solution,
characterizing the distance between
the string and the black hole.
This describes a stationarily scrolling closed string
winding around a flat torus embedded in the Schwarzschild spacetime,
with the winding number $(p,q)=(1,1)$.
We note that this Nambu-Goto string is just a special case of the solutions
considered by Igata and Ishihara~\cite{II10a,II10b}.
We choose
\begin{align*}
n_1&=\dfrac{\sqrt{s^2-1}}{s}\partial_r,\\
n_2&=\dfrac{1}{sr_0}\partial_\theta,\\
n_3&=\dfrac{s^2}{\sqrt{(s^2-2)(s^2-1)}}\partial_t
-\dfrac{\sqrt{2(s^2-1)}}{sr_0\sqrt{s^2-2}}(\partial_\phi-\partial_\psi),
\end{align*}
as an orthonormal frame $(n_1,n_2,n_3)$ for the normal space of the
world sheet.
The geometric quantities required to compute the perturbation equation are
\begin{align*}
G_{ab}&=\dfrac{r_0^2}{2}\left(\begin{array}{cc}1&s^2\\s^2&2s^2
\end{array}\right),\\
K^1{}_{ab}&=-\dfrac{\sqrt{s^2-1}r_0}{2s^2}\left(
\begin{array}{cc}s^2-1&s^2\\s^2&2s^2
\end{array}\right),~~
K^2{}_{ab}=\dfrac{s r_0}{2}\left(
\begin{array}{cc}1&1\\1&0
\end{array}\right),~~
K^3{}_{ab}=0,\\
\mu_A{}^B{}_\tau&=\left(
\begin{array}{ccc}
0&0&-\dfrac{\sqrt{s^2-2}}{\sqrt{2}s}\\
0&0&\dfrac{\sqrt{s^2-1}}{\sqrt{2(s^2-2)}}\\
\dfrac{\sqrt{s^2-2}}{\sqrt{2}s}&-\dfrac{\sqrt{s^2-1}}{\sqrt{2(s^2-2)}}&0
\end{array}
\right),\\
\mu_A{}^B{}_\sigma&=\left(
\begin{array}{ccc}
0&0&0\\
0&0&\dfrac{\sqrt{2(s^2-1)}}{\sqrt{s^2-2}}\\
0&-\dfrac{\sqrt{2(s^2-1)}}{\sqrt{s^2-2}}&0
\end{array}
\right),\\
& R_{\alpha\beta}n_A{}^\alpha n^{B\beta}=0,\\
& R_{\alpha\beta\gamma\delta}n_A{}^\alpha n_C{}^\beta n^{B\gamma}n^{C\delta}
=\dfrac{2}{s^4(s^2-2)r_0^2}\left(
\begin{array}{ccc}
2-3s^2&0&0\\
0&s^2&0\\
0&0&-s^2
\end{array}\right).
\end{align*}
Assuming $f^A\propto e^{-i\omega \tau+ik\sigma}$ $(k\in\boldsymbol{Z})$,
Eq.~(\ref{p-eq}) becomes
\begin{align*}
& A\left(\begin{array}{c}f^1\\f^2\\f^3
\end{array}\right)=0,\\
& A=\left(\begin{array}{ccc}
2s^2\omega^2+2s^2 k\omega+k^2+2s^2-2&0&-is\sqrt{2(s^2-2)}(2\omega+k)\\
0&2s^2\omega^2+2s^2 k\omega+k^2-2s^2&ik\sqrt{2(s^2-2)(s^2-1)}\\
is\sqrt{2(s^2-2)}(2\omega+k)&-ik\sqrt{2(s^2-2)(s^2-1)}&
2s^2\omega^2+2s^2k\omega+k^2
\end{array}
\right).
\end{align*}
Then, $f^A$'s have nontrivial solutions when
$\omega$ solves the polynomial equation
\begin{align*}
\operatorname{det}A&=
8s^6\omega^6+24s^6 k\omega^5
+[12s^4(2s^2+1)k^2 -8s^4(2s^2-3)]\omega^4\\
& +[8s^4(s^2+3)k^3-16s^4(2s^2-3)k]\omega^3\\
& +[6s^2(2s^2+1)k^4-12s^4(2s^2-3)k^2+8s^4(s^2-3)]\omega^2\\
& +[6s^2 k^5-4s^4(2s^2-3)k^3+8s^4(s^2-3)k]\omega\\
& +[k^6-(4s^4-10s^2+6)k^4+(4s^4-16s^2+8)k^2]=0.
\end{align*}
Substituting $\omega=\sqrt{x}-k/2$, this reduces to
\begin{align}
\nonumber p(x)&= 8s^6 x^3+[6s^4(2-s^2)k^2+8s^4(3-2s^2)]x^2\\
\nonumber& +\left[\dfrac{3}{2}s^2(s^2-2)^2k^4+8s^4(s^2-3)\right]x\\
& +
\dfrac{1}{8}(2-s^2)^3k^6+\dfrac{1}{2}(s^2-2)^2(2s^2-3)k^4
-2(s^2-2)^2(s^2-1)k^2
=0.\label{px}
\end{align}
This is a cubic polynomial equation; hence, it is solvable via
Cardano's method.
Therefore, the present Nambu-Goto string is unstable if and only if
the polynomial $p(x)$ has two complex-conjugate roots or a negative root,
which depends on the parameters $s$ and $k$.
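Since the coefficients of Eq.~(\ref{px}) are fully explicit, this criterion is easy to check numerically. The following sketch is our own illustration (all function names are ours, and a simple Durand-Kerner root iteration is used in place of Cardano's closed form); it classifies a mode as unstable when $p(x)$ acquires a nonreal or a negative root:

```python
def cubic_p_coeffs(s, k):
    """Coefficients [a3, a2, a1, a0] of the cubic p(x), copied term by term."""
    s2 = s * s
    a3 = 8 * s2**3
    a2 = 6 * s2**2 * (2 - s2) * k**2 + 8 * s2**2 * (3 - 2 * s2)
    a1 = 1.5 * s2 * (s2 - 2)**2 * k**4 + 8 * s2**2 * (s2 - 3)
    a0 = ((2 - s2)**3 * k**6 / 8
          + (s2 - 2)**2 * (2 * s2 - 3) * k**4 / 2
          - 2 * (s2 - 2)**2 * (s2 - 1) * k**2)
    return [a3, a2, a1, a0]

def poly_roots(coeffs, iters=500):
    """Durand-Kerner iteration; coeffs are listed from highest degree down."""
    a = [c / coeffs[0] for c in coeffs]          # make the polynomial monic
    n = len(a) - 1
    roots = [(0.4 + 0.9j) ** i for i in range(n)]  # standard distinct seeds
    for _ in range(iters):
        for i in range(n):
            num = sum(a[j] * roots[i] ** (n - j) for j in range(n + 1))
            den = 1.0 + 0j
            for j in range(n):
                if j != i:
                    den *= roots[i] - roots[j]
            roots[i] -= num / den
    return roots

def mode_is_unstable(s, k, tol=1e-6):
    """A mode is unstable iff p(x) has a nonreal root or a negative real root."""
    return any(abs(r.imag) > tol or r.real < -tol
               for r in poly_roots(cubic_p_coeffs(s, k)))
```

For instance, this reproduces the claims derived below: the $k=\pm1$ modes are unstable both close to the horizon (small $s$) and at large $s$, while the $k=0$ and $|k|\ge 2$ modes are stable at large $s$.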
It is easily seen that $p(x)$ has a negative root when the string
is sufficiently close to the $r=\sqrt{2}r_0$ surface.
Setting $s^2=2+\epsilon$ $(\epsilon>0)$,
Eq.~(\ref{px}) becomes
\begin{align*}
p(x)=8x[(8+12\epsilon)x^2-(4+3k^2\epsilon+12\epsilon)x-4]+O(\epsilon^2)=0,
\end{align*}
hence $p(x)$ has roots
\begin{align*}
x=0,~~1+\dfrac{k^2}{4}\epsilon,~~-\dfrac{1}{2}+\dfrac{k^2+6}{8}\epsilon,
\end{align*}
up to first order in $\epsilon$.
The third root corresponds to the unstable mode behaving like
\begin{align*}
f^A\propto \exp\left[\left(\dfrac{1}{\sqrt{2}}-\dfrac{6+k^2}{8\sqrt{2}}\epsilon+i\dfrac{k}{2}\right)\tau\right]e^{ik\sigma}.
\end{align*}
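The growth rate quoted here follows directly from substituting the third root into $\omega=\sqrt{x}-k/2$:
\begin{align*}
\omega=\sqrt{-\dfrac{1}{2}+\dfrac{k^2+6}{8}\epsilon}-\dfrac{k}{2}
\simeq\dfrac{i}{\sqrt{2}}\left(1-\dfrac{k^2+6}{8}\epsilon\right)-\dfrac{k}{2},
\end{align*}
so that $e^{-i\omega\tau}$ contains the exponentially growing factor shown above.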
For $s\gg 1$, since the geometry around the string approaches the flat
spacetime, one might expect such strings to be always stable.
We show, however, that this is not the case.
To see this, consider the expansion
\begin{align*}
\nonumber p(x)&= 8s^6
\left(x-\dfrac{k^2}{4}\right)\left[x-\dfrac{(k-2)^2}{4}\right]
\left[x-\dfrac{(k+2)^2}{4}\right]
+O(s^{4})
=0.
\end{align*}
From this expression, the approximate roots for $p(x)$ can be read off
from the leading $O(s^6)$ term. It can be seen that for $|k|\ge 2$ they are given by
three distinct positive roots $k^2/4$ and $(k\pm 2)^2/4$, which are consistent with
the results in the previous section. Although the exact roots may differ slightly
from these under small corrections, they remain positive, showing the stability
of the strings under these modes.
The cases $k=0,\pm 1$ should be considered separately,
since the approximate roots then include a multiple root, which may
become a pair of nonreal roots under small corrections.
The corrections coming from the $O(s^{4})$ terms are readily
obtained thanks to Cardano's formula. For $k=\pm 1$,
we can see that $p(x)$ has two complex-conjugate roots,
\begin{align*}
x=\dfrac{1}{4}-\dfrac{3}{8}s^{-2}\pm i\dfrac{\sqrt{15}}{8}s^{-2}+O(s^{-4})
\end{align*}
which correspond to the unstable modes
\begin{align*}
f^A&\propto \exp\left\{
\left[
\dfrac{\sqrt{15}+3i}{8}s^{-2}+O(s^{-4})
\right]
\tau\right\}
e^{\pm i\sigma},\\
f^A&\propto \exp\left\{
\left[i+
\dfrac{\sqrt{15}-3i}{8}s^{-2}+O(s^{-4})
\right]
\tau\right\}
e^{\pm i\sigma}.
\end{align*}
These instabilities, however, set in only after a relatively long latent period
$\tau\sim s^2$, so the strings might be stabilized once dissipative effects,
such as the emission of Nambu-Goldstone bosons, which are not considered in the
present analysis of test strings, are taken into account. We also find that the
uniform $k=0$ modes do not show such instabilities; in this case $p(x)$ has the
exact roots $0$, $1$, and $1-3s^{-2}$.
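Indeed, for $k=0$ the constant term of Eq.~(\ref{px}) vanishes and the cubic factorizes exactly,
\begin{align*}
p(x)\big|_{k=0}=8s^4\,x\,(x-1)\,(s^2x-s^2+3),
\end{align*}
which immediately gives the roots $0$, $1$, and $1-3s^{-2}$.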
We finally show that $k=\pm 1$ modes are always unstable.
\begin{thm}
The Nambu-Goto string solutions given by Eqs.~(\ref{ng-sol})
are unstable under the linear perturbation.
\end{thm}
\noindent{\em Proof.}
The statement of Theorem 1 is proven by showing that, for $k=\pm 1$, the cubic
polynomial $p(x)$ either has two complex-conjugate roots
or otherwise has at least one negative root.
We can easily see that $p(x)$ always has
a local maximum at
\begin{align*}
x=x_1:=\dfrac{11}{12}-\dfrac{3}{2s^2}-\dfrac{\sqrt{16s^4-54s^2+72}}{6s^2},
\end{align*}
when $k=\pm 1$.
This is an increasing function of $s$ for $s>\sqrt{2}$,
so it can be shown that $x_1$ is negative for $\sqrt{2}<s<s_1$,
where $s_1$ is given by
\begin{align*}
s_1=\sqrt{\dfrac{30+4\sqrt{42}}{19}}\approx 1.7.
\end{align*}
On the other hand, the corresponding local maximum value of $p(x)$ is given by
\begin{align*}
p(x_1)=-\dfrac{128}{27}s^6+24s^4-56s^2+48
+\left(\dfrac{32}{27}s^4-4s^2+\dfrac{16}{3}\right)\sqrt{16s^4-54s^2+72}.
\end{align*}
As a function of $s$, the zeros of $p(x_1)$ can be determined
by solving a quartic equation for $s^2$.
Then, we find that $p(x_1)$ has
only one zero for $s>\sqrt{2}$ at
\begin{align*}
s=s_2=\sqrt{2+\left(\dfrac{2}{5}\right)^{2/3}(5+3\sqrt{5})^{1/3}
-2\left(\dfrac{2}{5}\right)^{1/3}(5+3\sqrt{5})^{-1/3}}\approx 1.6.
\end{align*}
It turns out that
the local maximum value $p(x_1)$ is negative for $s>s_2$.
In particular, $s_2$ is less than $s_1$.
Then, it can be concluded that
$p(x)$ has two complex-conjugate roots for $s>s_2$ and
that $p(x)$ has at least one negative root (in fact, it always has exactly
two negative roots) for $\sqrt{2}<s\le s_2$, when $k=\pm 1$.
Therefore, the Nambu-Goto strings given by Eqs.~(\ref{ng-sol})
are always unstable.\\
{\flushright$\Box$\par\medskip}
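The quantities appearing in the proof can be double-checked numerically. The sketch below is our own consistency check (all function names are ours): it evaluates $p(x)$ at $x_1$ for $k=1$, compares the result with the closed-form expression for $p(x_1)$, and verifies the sign change at $s_2$:

```python
import math

def p_k1(s, x):
    """The cubic p(x) specialized to k = 1, written from its coefficients."""
    s2 = s * s
    return (8 * s2**3 * x**3
            + (6 * s2**2 * (2 - s2) + 8 * s2**2 * (3 - 2 * s2)) * x**2
            + (1.5 * s2 * (s2 - 2)**2 + 8 * s2**2 * (s2 - 3)) * x
            + (2 - s2)**3 / 8 + (s2 - 2)**2 * (2 * s2 - 3) / 2
            - 2 * (s2 - 2)**2 * (s2 - 1))

def x1(s):
    """Location of the local maximum of p(x) for k = 1."""
    s2 = s * s
    return 11/12 - 3/(2*s2) - math.sqrt(16*s2**2 - 54*s2 + 72)/(6*s2)

def p_at_x1(s):
    """Closed-form local maximum value p(x1)."""
    s2 = s * s
    root = math.sqrt(16*s2**2 - 54*s2 + 72)
    return (-128/27*s2**3 + 24*s2**2 - 56*s2 + 48
            + (32/27*s2**2 - 4*s2 + 16/3) * root)

# The critical value s2 separating the two instability mechanisms.
s2_crit = math.sqrt(2 + (2/5)**(2/3) * (5 + 3*math.sqrt(5))**(1/3)
                    - 2 * (2/5)**(1/3) * (5 + 3*math.sqrt(5))**(-1/3))
```

In particular, $p(x_1)$ changes sign from positive to negative as $s$ crosses $s_2\approx 1.6$, in agreement with the proof.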
\section{Conclusions}
We have studied the stability of stationary closed strings
winding around a flat torus embedded in the five-dimensional Schwarzschild
spacetime.
The Nambu-Goto strings belonging to this class are characterized by a real
parameter $s>\sqrt{2}$, with which the location of the closed string is
written as $r=sr_0$, where $r_0$ denotes the Schwarzschild radius.
We have shown that the perturbation modes are calculable
by purely algebraic manipulations.
We have proven that all the solutions belonging to this class
are unstable under linear perturbations.
\section{Introduction}
Visual recognition is one of the hottest topics in the fields of computer vision and machine learning. In recent years, many deep learning models have been built to set new state-of-the-art results in image classification, object detection and many other visual recognition tasks \cite{ILSVRC2015, girshick2016region, szegedy2013deep}. Among these tasks, most of the breakthroughs are achieved with deep Convolutional Neural Networks (CNN) \cite{lecun1989backpropagation}.
The CNN was first proposed in the late 1990s by LeCun \textit{et al.} \cite{lecun1989backpropagation, lecunhandwritten}. It was soon overtaken by the combination of shallow descriptors (such as SIFT \cite{lowe2004distinctive}, HOG \cite{mikolajczyk2005performance}, and bag of words \cite{csurka2004visual}) with the Support Vector Machine (SVM) \cite{vapnik1998statistical}. In recent years, with the increase in image recognition data size and computational power, CNNs have become increasingly popular and dominant. Krizhevsky \textit{et al.} \cite{Alex2012} proposed the classic eight-layer CNN model (AlexNet) with five convolutional and three fully connected layers. The model is trained via back-propagation through the layers and performs extremely well in domains with a large amount of training data. Since then, many new CNN models have been constructed with larger sizes and different architectures to improve performance. A series of improvements were achieved by VGG \cite{simonyan2014very}, GoogLeNet \cite{szegedy2015going}, ResNet \cite{He_2016_CVPR, he2016identity2}, and so on. However, a larger model entails a larger number of parameters and higher computational complexity. Methods for compressing networks and accelerating training and testing computation have also been developed \cite{han2015deep, dean2012large, denton2014exploiting, zhang2016accelerating}.
Overall, the previous deep face networks have two limitations: (a) data size: training a larger global model requires more training data, which can be costly and is not available in certain applications; (b) local information: one deep neural network built over all the class labels may ignore the pairwise local correlations between different labels, which can be used to improve overall performance.
\begin{figure}
\centering
\begin{center}
\includegraphics[width=1\linewidth]{example1}
\end{center}
\caption{An example of the proposed LCC-CNN matcher is depicted with the combination of global model and local models. The matching paths of the two testing images are indicated in red and cyan, respectively.}
\label{f1}
\end{figure}
This paper overcomes limitations (a) and (b) by introducing a hierarchical matcher that builds chains of local binary CNN classifiers after the global CNN classifier over all the class labels. Moreover, it is a method to improve face recognition performance without changing the architecture of the CNN network. Hereafter, these two types of classifiers are referred to as the local model and the global model. The motivation behind this is that a global model focuses more on the global discriminative features over all the class labels and tends to misclassify samples from visually similar classes. With fewer labels, a local model can exploit more local discriminative features for the related labels and can be used to correct the matching result of the global model. In particular, when the same training data and network architecture are used for both the global and local models, each local model converges quickly and achieves better accuracy than the global model. Also, local models can be trained in parallel, which avoids an excessive increase in computational complexity. In addition, when data size is limited, a local model can explore more pairwise label correlations than the global model. To limit the complexity of the proposed matcher, only binary local models are built in this paper. Take the CIFAR-10 dataset \cite{krizhevsky2009learning} as an example. Figure~\ref{f1} depicts the intuition of the proposed matcher. It can be observed that for the ``dog'' image, the binary local model between the ``cat'' label and the ``dog'' label is used to correct the mistake of the global model. Importantly, local models can be built one after another, which leads to a chain of local models that boosts performance and avoids error propagation. For the ``deer'' image, a chain of two local models (between the ``dog'' and ``horse'' labels, and between the ``dog'' and ``deer'' labels) is built to improve the matching of the global model.
The contributions of this paper are to improve recognition performance by the following two techniques: (i) fully exploiting the training data by proposing a hierarchical matcher in which the contributions of the global model and the local models are combined; (ii) introducing pairwise label information into local model chains, which adaptively select a small set of label pairs for building local models. The pairwise correlations between different labels are learned based on their relationships in the score matrices. These correlations are not well explored in global-model-based methods.
Parts of this work on face recognition have been published in Zhang \textit{et al.} \cite{zhang2017ijcb}. In this paper, it is extended by providing: (i) a signature to store image information with global model and local model components; (ii) the fine-tuning process of local models; (iii) more general applications of image recognition and character recognition; (iv) the evaluation on the pose-invariant 3D-aided 2D face recognition system (UR2D) \cite{xiang2017ijcb}.
The rest of this paper is organized as follows: Section \ref{sec2} presents related work. Section \ref{sec3} and Section \ref{sec4} describe the signature and the hierarchical matcher. The experimental design, results, and analysis are presented in Section \ref{sec5}. Section \ref{sec6} concludes the paper.
\section{Related work}
\label{sec2}
In this section, the most recent work related to the proposed method is reviewed. It is also illustrated how the proposed matcher differs from previous research.
\textbf{Image recognition with CNNs:} the previous methods are introduced based on several key factors in CNN.
(1) \textit{Filter size and stride}: Based on the visualization of feature maps with deconvnet, Zeiler \textit{et al.} \cite{zeiler2014visualizing} utilized a receptive window with size of $7 \times 7$ and a stride of 2 in the first convolutional layer and achieved better performance than AlexNet on the ImageNet dataset. Sermanet \textit{et al.} \cite{sermanet2014overfeat} proposed an integrated framework for classification, localization and detection. Different tasks are learned simultaneously using a single shared network. Their model has larger first and second layer feature maps ($11 \times 11$ and $5 \times 5$) with stride of $4 \times 4$ and $1 \times 1$. Simonyan \textit{et al.} \cite{simonyan2014very} used a small $3 \times 3$ receptive field, which is the smallest size to capture the notation of left/right, up/down, and center. The convolution stride is fixed to 1 pixel. The reason behind this is that a stack of three $3 \times 3$ convolutional layers with spatial pooling in between has the effect of a receptive field of $7 \times 7$. Three non-linear rectification layers are incorporated instead of one, which makes the decision function more discriminative.
(2) \textit{Multi-scale and multi-view}: In visual recognition, objects of interest sometimes vary significantly in size and position within an image. The general idea to address this is to apply the CNN at multiple locations in the image. Krizhevsky \textit{et al.} \cite{Alex2012} utilized multi-view voting to boost performance, where 10 views (four corners and the center, with horizontal flips) are averaged. However, this approach ignores many regions of the image and is computationally redundant when views overlap. Also, it is applied at a single scale. Sermanet \textit{et al.} \cite{sermanet2014overfeat} proposed an integrated framework for classification, localization and detection, where different tasks are learned simultaneously using a single shared network. Their model has larger first and second layer feature maps ($11 \times 11$ and $5 \times 5$) with strides of $4 \times 4$ and $1 \times 1$, and explores the entire image by densely running the network at each location and at multiple scales. The approach yields significantly more views for voting without increasing the computation too much, and uses six scales of the input image. Howard \textit{et al.} \cite{howard2013some} applied a combination of 5 translations, 2 flips, 3 scales and 3 views, leading to 90 predictions, which slows prediction down by almost an order of magnitude. To rectify this, the authors applied a greedy algorithm that chooses a subset of transforms giving competitive performance.
(3) \textit{Data augmentation}: Data augmentation is the easiest and most common method to reduce over-fitting on image data. Simonyan \textit{et al.} \cite{simonyan2014very} extracted random $224 \times 224$ patches (and their horizontal reflections) from $256 \times 256$ rescaled images. The $256 \times 256$ image is generated by rescaling the smallest image dimension to 256 and then cropping the other side to 256. This increases the size of the training set by a factor of 2,048, but results in a loss of roughly 30\% of the pixels. Howard \textit{et al.} first scaled the smallest side to 256 and then selected a random crop of $224 \times 224$ as a training image, which yields a large number of additional training images and helps the network learn more extensive translation invariance. Besides image cropping, color manipulation is also used in data augmentation. Krizhevsky \textit{et al.} \cite{Alex2012} performed PCA on the RGB pixel values to alter the intensities of the RGB channels. Their scheme approximately captures the changes in intensity and illumination. In addition to the random lighting, other manipulations of contrast, brightness and color are applied to generate training examples covering the span of image variations \cite{howard2013some}.
(4) \textit{Depth}: Increasing the size of a deep neural network includes increasing both the number of layers and the number of units in each layer \cite{simonyan2014very}. Simonyan \textit{et al.} \cite{simonyan2014very} explored the influence of CNN depth using an architecture with small ($3 \times 3$) convolutional filters. They achieved a significant improvement by pushing the depth to 16--19 layers in the VGG network. Szegedy \textit{et al.} \cite{szegedy2015going} introduced GoogLeNet as a 22-layer Inception network, which achieved impressive results in both classification and detection tasks. He \textit{et al.} \cite{He_2016_CVPR} proposed Residual Networks (ResNet) with a depth of up to 152 layers, which set new records for many visual recognition tasks. Furthermore, the authors released a residual network of 1K layers with a new residual unit that makes training easier and improves generalization.
(5) \textit{Loss function}: The softmax loss is the most common loss function used in CNNs \cite{Alex2012, szegedy2015going, han2015deep}. It takes the output of a fully connected layer and produces a vector of real values between 0 and 1 that add up to 1, where each value represents the predicted probability of one label. To enhance the discriminative power of CNN features, other loss functions have been developed. The contrastive loss constrains the distance between the deep features of two training samples based on whether they are from the same class. Sun \textit{et al.} \cite{sun2014deep2} proposed to learn a deep face representation with a joint identification loss and verification loss. The identification loss increases the inter-personal variations while the verification loss reduces the intra-personal variations. Wen \textit{et al.} \cite{wen2016latent} proposed to use the contrastive loss and the softmax loss together to learn age-invariant deep features. The triplet loss has also been introduced to deep face recognition by minimizing the distance between an anchor and a positive sample of the same identity and maximizing the distance between the anchor and a negative sample of a different identity \cite{schroff2015facenet}. Both the contrastive loss and the triplet loss require dramatic data expansion when constructing sample pairs or sample triplets from the training set. To overcome this problem, the center loss was developed to increase the intra-class compactness without re-combination of training samples \cite{wen2016discriminative}. The center loss learns a center for the deep features of each class and penalizes the distances between the deep features and their corresponding class centers.
(6) \textit{Speed up}: Larger models dramatically increase computational complexity. The training process is accelerated with multiple GPUs and distributed deep network techniques \cite{dean2012large}. Methods for accelerating the test-time computation of CNNs have also been developed. Denton \textit{et al.} \cite{denton2014exploiting} proposed to use low-rank approximations and clustering of filters, which achieves a 1.6$\times$ speedup of a single convolutional layer with a 1\% drop in classification accuracy on the OverFeat network. Lebedev \textit{et al.} \cite{lebedev2014speeding} introduced a two-step approach for speeding up the convolutional layers within a large CNN, based on tensor decomposition and discriminative fine-tuning, which achieves higher CPU speedups with lower accuracy drops. These two methods only focus on the decomposition of one or a few linear layers of a CNN. Zhang \textit{et al.} \cite{zhang2016accelerating} proposed a low-rank decomposition method for very deep models in which the non-linear neurons are taken into account. The method achieves a $4 \times$ speedup on the VGG-16 model with graceful accuracy degradation.
\textbf{Face recognition with CNNs:} Face recognition is one of the major topics in visual recognition. Building on the success of CNNs in image recognition, many networks have been introduced in face recognition and achieved a series of breakthroughs. As in image recognition, effective CNNs require large amounts of training images and large network sizes. Taigman \textit{et al.} \cite{taigman2014deepface} trained the DeepFace system with a standard eight-layer CNN using 4.4M labeled face images. Sun \textit{et al.} \cite{sun2014deep, sun2014deep2, sun2015deepid3} developed the Deep-ID systems with more elaborate network architectures and fewer training face images, which achieved better performance than the DeepFace system. FaceNet \cite{schroff2015facenet} was introduced with 22 layers based on the Inception network \cite{szegedy2015going, zeiler2014visualizing}. It was trained on 200M face images and achieved further improvement. Parkhi \textit{et al.} \cite{parkhi2015deep} introduced the VGG-Face network with up to 19 layers adapted from the VGG network \cite{simonyan2014very}, trained on 2.6M images. This network also achieved comparable results and has been extended to other applications. To alleviate the massive requirement of labeled training data, Masi \textit{et al.} \cite{masi16dowe} proposed domain-specific data augmentation, which generates synthesized images for the CASIA WebFace collection \cite{yi2014learning} based on different facial appearance variations. Their results, trained with ResNet, match the state-of-the-art results reported by networks trained on millions of images. Most of these methods also focus on increasing the network size to improve performance. Xiang \textit{et al.} \cite{xiang2017ijcb} presented an evaluation of the UR2D face recognition system, which is robust to pose variations as large as 90\textdegree{}.
Different CNNs are integrated in face detection, landmark detection, 3D reconstruction and signature extraction.
\textbf{Hierarchical CNNs:} Hierarchical ideas have been introduced to deep neural networks in previous works \cite{tousch2012semantic}. Deng \textit{et al.} \cite{deng2014large} replaced the flat softmax classification layer with a probabilistic graphical model that embeds given relationships between labels. Yan \textit{et al.} \cite{yan2015hd} proposed a hierarchical deep CNN, where easily distinguished classes are predicted in higher layers while visually similar classes are predicted in lower layers. Murdock \textit{et al.} \cite{murdock2015blockout} introduced blockout layers to learn the cluster membership of each output node of the fully connected layer; the weight matrix of a blockout layer can be learned from back-propagation. These methods modify the global neural network to learn feature representations by embedding clustering information. Local information is introduced either by multi-task learning or by weight-matrix restriction. However, they still rely on one global model to make predictions for all of the class labels. The pairwise label correlations are not explored separately.
\textbf{Hierarchical multi-label classification:} The proposed matcher is related to the Hierarchical Multi-label Classification (HMC) problem, where each sample has more than one label and all of these labels are organized hierarchically in a tree or a Directed Acyclic Graph (DAG) \cite{silla2011survey, Wei2014, zhang2014fully}. The hierarchical information in tree and DAG structures is used to improve classification performance \cite{valentini2011true,Zhang201789}. In visual recognition, each sample has only one label. If a meta-class label hierarchy is built as an HMC problem, the matching errors of high-level nodes will propagate to the matching of low-level nodes. Another way is to build a local model for each node separately and combine the matching results of all local models, but this leads to a heavy computational burden depending on the size of the hierarchy. Thus, this paper attempts to build chains of local models after one global model, which possesses the merits of both global and local models.
\textbf{Classifier chains:} The classifier chains for multi-label classification \cite{read2011classifier} are also related to the proposed work. Similarly, the proposed matcher uses the result of the current local model to select the next one. The major difference between Read \textit{et al.} \cite{read2011classifier} and LCC-CNN lies in how a local classifier is used: in multi-label classification, the classifiers are learned to predict multiple labels, whereas in the proposed matcher, local classifiers are applied to update a single matching label.
\section{Signature}
\label{sec3}
To make the matching process separate from network computation, a signature is used to store the image information. The signature of each image $\mathbb{S}$ has two components: global model component $\mathbb{S}^{G}$ and local model component $\mathbb{S}^{L}$. The global model component is extracted from a global model built over all the class labels while the local component is built based on label pairs.
\subsection{Global model component: $\mathbb{S}^{G}$}
Consider a visual recognition dataset \mbox{$\mathcal{S}=\{(x_i,y_i)\}_{i=1}^N$}, where $x_i$ and $y_i$ represent the $i^{th}$ sample and the corresponding class label, respectively. The label set is denoted by \mbox{$\mathcal{C}=\{c_1,c_2,...,c_l\}$}, so $y_i\in \mathcal{C}$.
First, the global model is defined as $g(x_i)$, and the global matching vector of each sample is defined as $V_i=\{v_{i1},v_{i2},...,v_{il}\}$, of size $1 \times l$. Each value $v_{ij}$ represents the probability of assigning label $c_j$ to the $i^{th}$ sample.
Different global models create global model features of different sizes and rely on different methods to obtain the global matching vector \cite{zhang2015icb, zhang2018attribute}. For a non-patch-based CNN model, the feature size is the same as that of the label set, $1 \times l$. The softmax function is applied to obtain the global matching vector based on the signature. For the patch-based CNN model in the UR2D system, the global model feature contains two parts: a feature matrix and an occlusion encoding. The feature size is $8 \times 512 + 8$ based on the DPRFS signature \cite{dou2015pose, xiang2017ijcb,zhang2018icb}. Cosine similarity is applied to compute the global matching vector.
From $V_i$, the global matching result $h_i$ is obtained easily, and $h_i\in \mathcal{C}$. All the global matching results of the dataset are represented by \mbox{$H=\{h_1,h_2,...,h_N\}$}, thus,
\begin{equation}\label{e1}
V_i=g(x_i), \\
\end{equation}
\begin{equation}\label{e2}
h_i = \operatorname{argmax}_{c_j\in \mathcal{C}}\,(v_{ij}).
\end{equation}
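For a non-patch-based CNN, Eqs.~(\ref{e1}) and (\ref{e2}) amount to a softmax over the class scores followed by an arg-max. A minimal sketch (our own illustration; the helper names \texttt{softmax} and \texttt{global\_match} are hypothetical, not part of any system described here):

```python
import math

def softmax(scores):
    """Turn raw class scores into a global matching vector V_i."""
    m = max(scores)                     # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in scores]
    total = sum(exps)
    return [e / total for e in exps]

def global_match(scores, labels):
    """Return (V_i, h_i): the matching vector and the arg-max matching label."""
    v = softmax(scores)
    h = labels[max(range(len(v)), key=v.__getitem__)]
    return v, h
```

For example, `global_match([2.0, 1.0, 0.1], ["cat", "dog", "car"])` returns the matching vector together with the matching label `"cat"`.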
\subsection{Local model component: $\mathbb{S}^{L}$}
The goal of the proposed matcher is to improve the global matching results with local models built on a set of label pairs. Let $\mathcal{P} = \{ p_{ij} \}$ and $\mathcal{F} = \{ f_{ij}(x) \}$ represent the label pair set and the local binary model set, respectively, where $p_{ij}$ and $f_{ij}(x)$ represent the label pair and the local model between label $c_i$ and label $c_j$, respectively. For each sample, the local matching vector of $p_{ij}$ is stored as $B_{ij}=\{b_{ij}^{1},b_{ij}^{2}\}$ in the local model component. Thus, the size of the local model signature component $\mathbb{S}^{L}$ is $2\times|\mathcal{P}|$.
First, local models are built between visually less discriminative labels. The assumption behind this is that the global model is more likely to confuse visually similar labels than visually discriminative ones. Taking Figure~\ref{f1} as an example, an animal image is more likely to be mismatched as another animal than as a ``car''. Given a global model, the global matching vector of each sample in the validation set is used as its visual feature vector. Based on these feature vectors, the averaged feature vector of each label can be computed. Then, the distance between the averaged feature vectors of two different labels can be used to represent their label similarity. Let \mbox{$\mathcal{\hat{S}}=\{(\hat{x}_i,\hat{y}_i)\}_{i=1}^{\hat{N}}$} represent the validation set; the visual feature vector of each sample is computed from the global model as $\hat{V}_i=g(\hat{x}_i)$. Let \mbox{$\hat{U}_i=\{\hat{u}_{i1},\hat{u}_{i2},...,\hat{u}_{il}\}$} represent the mean sample feature vector of the $i^{th}$ label. Then a similarity matrix based on the distance between each pair of mean samples can be defined as $W = \{w_{ij}\}^{l \times l}$, where $w_{ij}$ represents the averaged squared Euclidean distance between the mean samples of $c_i$ and $c_j$. Thus,
\begin{equation}\label{sm1}
w_{ij}=\frac{1}{l}\sum\limits_{k=1}^{l}(\hat{U}_{ik} - \hat{U}_{jk})^2. \\
\end{equation}
To convert the elements in $W$ to a uniform scale, each element in $W$ is normalized by dividing it by the maximum element of $W$. Then, each value is subtracted from 1 to represent similarity instead of distance. Let $Q = \{q_{ij}\}^{l \times l}$ represent the normalized similarity matrix, so that,
\begin{equation}\label{sm2}
q_{ij}= 1-\frac{w_{i,j}}{\max(W)}.
\end{equation}
For each label, the most similar labels can be obtained based on the score matrix and a pre-defined threshold $t_s$. If $q_{ij}$ is larger than $t_s$, a label pair $p_{ij} = \{c_i,c_j\}$ is added to the label pair set $\mathcal{P}$. In this way, label pair sets of different sizes can be obtained for different values of $t_s$. The pseudo-code of LCC-CNN trained with the similarity matrix is summarized in Algorithm \ref{a1}.
\begin{algorithm}
\caption{Local models for image recognition: Building with similarity matrix}
\label{a1}
\KwIn{global model $g(x)$, validation set \mbox{$\mathcal{\hat{S}}=\{(\hat{x}_i,\hat{y}_i)\}_{i=1}^{\hat{N}}$} and threshold $t_s$}
\KwOut{label pair set $\mathcal{P}$ and local binary model set $\mathcal{F} = \{ f_{ij}(x) \}$}
Compute global matching vector of each sample in validation set $\hat{V}_i=g(\hat{x}_i)$ by (\ref{e1})\\
Compute mean sample vectors $\hat{U}_i$ \\
Compute similarity matrix $W$ by (\ref{sm1})\\
Compute normalized similarity matrix $Q$ by (\ref{sm2})\\
\For { $i \leftarrow 1$ \textbf{to} $l$ } {
\For { $j \leftarrow 1$ \textbf{to} $l$ } {
\If { $q_{ij} > t_s$ } {
$\mathcal{P} = \mathcal{P} \cup \{p_{ij}\}$ \\
train local model $f_{ij}(x)$ \\
$\mathcal{F} = \mathcal{F} \cup \{ f_{ij}(x) \}$ \\
}
}
}
\Return{$\mathcal{P}$, $\mathcal{F}$} ;
\end{algorithm}
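The construction in Algorithm~\ref{a1} can be condensed into a short NumPy sketch. This is an illustrative fragment, not the reference implementation: the function name, the data layout (one mean global matching vector per label, as in Eq.~(\ref{sm1})), and the omission of local model training are all assumptions.

```python
import numpy as np

def similarity_pairs(mean_feats, t_s):
    """Build the label pair set from the similarity matrix of mean vectors.

    mean_feats: (l, d) array; row i is the mean global matching vector
    U_i of label i (in the paper d = l). Local model training is omitted.
    """
    l = mean_feats.shape[0]
    # Eq. (sm1): averaged squared difference between mean vectors i and j
    diff = mean_feats[:, None, :] - mean_feats[None, :, :]
    W = (diff ** 2).mean(axis=2)
    # Eq. (sm2): scale by the maximum and convert distance to similarity
    Q = 1.0 - W / W.max()
    # keep unordered pairs (i < j) whose similarity exceeds the threshold
    pairs = [(i, j) for i in range(l) for j in range(i + 1, l) if Q[i, j] > t_s]
    return Q, pairs
```

A lower threshold admits more label pairs, so the size of $\mathcal{P}$ can be controlled through $t_s$.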
Similarity correlation does not directly measure the matching performance of the global model, especially in face recognition, where the differences between identities are much smaller than those between general image classes. Hence, the confusion matrix of the validation set is introduced to create the label pair set. Based on the label set $\hat{Y}$ and the global matching set $\hat{H}$ of the validation set, the confusion matrix $Z = \{z_{ij}\}$ can be computed easily, where $z_{ij}$ represents the number of samples with true label $c_i$ ($\hat{y} = c_i$) that are matched to label $c_j$ ($\hat{h} = c_j$). The confusion matrix directly gives pairwise information about the performance of the global model, which matches the motivation of the proposed matcher naturally. To obtain a uniform scale, each element of $Z$ is normalized by the number of samples with true label $c_i$; the normalized values are defined as a matrix $R = \{r_{ij}\}$, where
\begin{equation}\label{cm1}
r_{ij}= \frac{z_{ij}}{\sum_{k=1}^{l} z_{ik}}.
\end{equation}
If $r_{ij}$ is larger than a pre-defined threshold $t_c$, many samples with true label $c_i$ are predicted as $c_j$; labels $c_i$ and $c_j$ are therefore ambiguous, and a local model is required for further evaluation. To make the label pair set robust and to mitigate over-fitting, the direction information is ignored in each label pair, so $p_{ij} = p_{ji}$. The pseudo-code of LCC-CNN trained with the confusion matrix is summarized in Algorithm \ref{a2}. The pseudo-code of signature generation for a given image is summarized in Algorithm \ref{a3}.
The procedure for building a label pair set in the proposed matcher is depicted in Figure \ref{f2}. If the global and local models use the same network structure, the weights of the global model are used to fine-tune the local models, which reduces the computational cost of building them. Otherwise, state-of-the-art pre-trained weights are used to fine-tune the local models. Figure~\ref{h1} depicts the normalized similarity matrix and the normalized confusion matrix on the CIFAR-10 dataset. Figure~\ref{sm_ex1} shows the top-3 label pairs extracted from the UHDB31 dataset \cite{ha2017uhdb31} based on the VGG-Face network. Note that if many label pairs share the same label, a multi-class local model could also be built instead of a set of binary local models. To limit the model complexity, chains of binary local models are built: the matching result of one local model directs the selection of the next.
\begin{algorithm}
\caption{Local models for face recognition: Building with confusion matrix}
\label{a2}
\KwIn{global model $g(x)$, validation set \mbox{$\mathcal{\hat{S}}=\{(\hat{x}_i,\hat{y}_i)\}_{i=1}^{\hat{N}}$} and threshold $t_c$}
\KwOut{label pair set $\mathcal{P}$ and local binary model set $\mathcal{F} = \{ f_{ij}(x) \}$}
Compute global matching vector of each sample in validation set $\hat{V}_i=g(\hat{x}_i)$ by (\ref{e1})\\
Compute global matching result of each sample $\hat{h}_i= {argmax}(\hat{V}_i)$ by (\ref{e2})\\
Compute confusion matrix of validation set $Z = \{z_{ij}\}$ \\
Compute normalized confusion matrix $R = \{r_{ij}\}$ by (\ref{cm1}) \\
\For { $i \leftarrow 1$ \textbf{to} $l$ } {
\For { $j \leftarrow 1$ \textbf{to} $l$ } {
\If { $r_{ij} > t_c$ } {
$\mathcal{P} = \mathcal{P} \cup \{p_{ij}\}$ \\
train local model $f_{ij}(x)$ \\
$\mathcal{F} = \mathcal{F} \cup \{ f_{ij}(x) \}$ \\
}
}
}
\Return{$\mathcal{P}$, $\mathcal{F}$} ;
\end{algorithm}
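Algorithm~\ref{a2} differs from Algorithm~\ref{a1} only in how the score matrix is produced. A minimal sketch follows, again with illustrative names, without the local model training step, and assuming every label occurs at least once in the validation set.

```python
import numpy as np

def confusion_pairs(y_true, y_pred, l, t_c):
    """Build the label pair set from the normalized confusion matrix.

    y_true / y_pred: true labels and global matching results on the
    validation set; l: number of labels; t_c: threshold on Eq. (cm1).
    """
    Z = np.zeros((l, l))
    for t, p in zip(y_true, y_pred):
        Z[t, p] += 1
    # Eq. (cm1): divide each row by the number of samples with true label c_i
    R = Z / Z.sum(axis=1, keepdims=True)
    pairs = set()
    for i in range(l):
        for j in range(l):
            if i != j and R[i, j] > t_c:
                # direction is ignored: p_ij = p_ji
                pairs.add((min(i, j), max(i, j)))
    return R, sorted(pairs)
```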
\begin{algorithm}
\caption{Signature generation: $\mathbb{S} = \{\mathbb{S}^{G}, \mathbb{S}^{L}\}$}
\label{a3}
\KwIn{input image $\hat{x}$, global model $g(x)$, label pair set $\mathcal{P} $ and local binary model set $\mathcal{F}$}
\KwOut{Signature $\mathbb{S} = \{\mathbb{S}^{G}, \mathbb{S}^{L}\}$}
Compute global matching vector $\hat{V}=g(\hat{x})$\\
$\mathbb{S}^{G} = \mathbb{S}^{G} \cup \{ \hat{V} \}$ \\
\For {$p_{ij} \in \mathcal{P}$} {
compute local matching vector $\hat{B}_{ij} = f_{ij}(\hat{x})$ \\
$\mathbb{S}^{L} = \mathbb{S}^{L} \cup \{ \hat{B}_{ij} \}$ \\
}
\Return{$\mathbb{S} = \{\mathbb{S}^{G}, \mathbb{S}^{L}\}$} ;
\end{algorithm}
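Algorithm~\ref{a3} simply stacks the global matching vector with one local matching vector per selected pair. A sketch under assumed interfaces (callables for the global and local models):

```python
def build_signature(x, g, local_models):
    """Assemble the signature S = {S_G, S_L} for one image.

    g: global model, returning the global matching vector V = g(x);
    local_models: dict mapping a label pair (i, j) to its binary model f_ij.
    """
    signature = {"global": g(x), "local": {}}
    for pair, f in local_models.items():
        signature["local"][pair] = f(x)  # local matching vector B_ij
    return signature
```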
\begin{figure*}
\centering
\begin{center}
\includegraphics[width=1\linewidth]{pipeline21}
\end{center}
\caption{The workflow of the validation-based local model generation.}
\label{f2}
\end{figure*}
\begin{figure*}[htp]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{h12}
\caption{ }
\end{subfigure} %
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{h22}
\caption{ }
\end{subfigure}
\caption{The score matrices generated on the CIFAR-10 dataset. (a) Normalized similarity matrix (b) Normalized confusion matrix. It can be observed from (a) that the three most similar label pairs are $\{$``cat'', ``dog'' $\}$ (0.80), $\{$``cat'', ``bird'' $\}$ (0.77) and $\{$``deer'', ``bird'' $\}$ (0.74). It can be observed from (b) that the three most error-prone label pairs are $\{$``cat'', ``dog'' $\}$(0.27), $\{$``truck'', ``car'' $\}$ (0.14) and $\{$``cat'', ``frog'' $\}$ (0.13).}
\label{h1}
\end{figure*}
\begin{figure}[htp]
\centering
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=1\linewidth]{sm_top3}
\caption{ }
\end{subfigure} %
\qquad
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=1\linewidth]{cm_top3}
\caption{ }
\end{subfigure}
\caption{The Top-3 label pairs generated on the UHDB31 dataset. (a) From the similarity matrix; the scores from top to bottom are 0.716, 0.712 and 0.703, respectively. (b) From the confusion matrix; the scores from top to bottom are 0.302, 0.286 and 0.238, respectively. It can be observed that each label pair shares visual similarities that can lead the global model to mismatch them.}
\label{sm_ex1}
\end{figure}
\section{Hierarchical matcher}
\label{sec4}
Based on the matching information of both the global and local models in the signature, a top-down strategy is used to obtain the final matching for each testing sample. Starting from the global matching, each sample goes through a chain of local models until none of the related local models gives a better matching. Initially, the current matching label $o$ is set to the global matching result. Then, each link of the model chain is built in two steps. First, all label pairs containing the current label $o$ in the label pair set $\mathcal{P}$ are added to a current matching label set $\mathcal{P}'$. Second, the matching label with the largest matching value in the signature is selected as the next current label. Taking face recognition as an example, the pseudo-code of LCC-CNN matching is summarized in Algorithm \ref{a4}.
The motivation is as follows: if there is only one matching label and the corresponding local model gives this matching label a higher matching value than the current label, the testing sample is considered mismatched and the current label is updated to the matching label. If there are multiple matching labels, the matcher finds the best possible next current label by comparing their matching values. In this process, the next current label does not necessarily have to be the correct label. The local model chain is designed to overcome error propagation in the top-down matching: if one local model in the chain gives a wrong matching, the following local models still have a chance to correct the error, as long as the related label pairs have been added to the label pair set.
\begin{algorithm}
\caption{LCC-CNN: Hierarchical matcher}
\label{a4}
\KwIn{Probe image signature $\mathbb{S}^p = \{\mathbb{S}^{Gp}, \mathbb{S}^{Lp}\}$, Gallery image signature list $\{\mathbb{S}^g = \{\mathbb{S}^{Gg}, \mathbb{S}^{Lg}\}\}$ and label pair set $\mathcal{P}$}
\KwOut{ Final matching of probe image $\bar{o}$}
Compute global matching result $ \bar{h}$ based on $\mathbb{S}^{Gp}$ and $\{\mathbb{S}^{Gg}\}$\\
Set current matching label $o = \bar{h}$ \\
\While { True }{
Set current matching label set $\mathcal{P}' = \emptyset$ \\
Add every pair $p_{od} \in \mathcal{P}$ to $\mathcal{P}'$ \\
\If {$ \mathcal{P}' \neq \emptyset $}{
Set current matching value $a = 0$ \\
Set next current matching label $o_n = o$\\
\For { $d $ \bf{in} $ \mathcal{P}' $ }{
\If { $ {argmax}(B_{od}) \neq o$ and $\max(B_{od}) > a$ } {
$ o_n = {argmax}(B_{od})$ \\
$ a = \max(B_{od})$ \\
}
}
\If { $o_n = o$ } {
\textbf{break}
}
\Else
{
$o = o_n$
}
}
\Else
{
\textbf{break}
}
}
Set $\bar{o} = o$
\Return{$\bar{o}$ };
\end{algorithm}
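The chain traversal in Algorithm~\ref{a4} can be sketched as follows. Representing the local matching vectors as per-pair dictionaries of \{label: matching value\} is an assumption made for readability; the loop terminates because each local model's preferred label is fixed.

```python
def hierarchical_match(global_label, local_scores, pair_set):
    """Follow the chain of local models from the global matching label.

    local_scores: dict mapping a pair (i, j) to {label: matching value},
    i.e., the local matching vector B_ij stored in the signature.
    """
    o = global_label
    while True:
        # all pairs in P that contain the current label o
        candidates = [p for p in pair_set if o in p]
        if not candidates:
            break
        a, o_next = 0.0, o
        for p in candidates:
            scores = local_scores[p]
            best = max(scores, key=scores.get)
            # move only if the local model prefers the other label of the pair
            if best != o and scores[best] > a:
                o_next, a = best, scores[best]
        if o_next == o:
            break
        o = o_next
    return o
```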
\section{Experiments}
\label{sec5}
This section first presents the evaluation of the proposed matcher on three types of recognition tasks: image recognition, character recognition and face recognition. Then, the proposed matcher is evaluated on the UR2D system. Different baseline methods are used to build the global and local models. It is assumed that the global model has converged when the accuracy on the validation set is stable and begins to drop. The local models are built with the samples of the two corresponding labels from the training set. These local models usually converge faster than the global model, and the weights at 3,000 iterations are used during matching. To observe the overall performance change, the proposed LCC-CNN matcher based on the Similarity Matrix (LCC-CNN-SM) and the Confusion Matrix (LCC-CNN-CM) is tested using different threshold sets and different maximum iteration stages. The parameter settings are selected based on the performance on the validation set. All networks are trained with Caffe \cite{jia2014caffe}.
\subsection{Image recognition}
Two image recognition datasets, the CIFAR-10 dataset \cite{krizhevsky2009learning} and the Flower dataset \cite{nilsback2006visual}, are evaluated. The CIFAR-10 dataset contains 10 classes, each with 6,000 images of size $32 \times 32$. For each class, 600 images are randomly selected for evaluation. The Flower dataset contains 17 classes, each with 80 images of different sizes. The two datasets are divided into a training set, a validation set and a testing set, which contain 50\%, 25\% and 25\% of the images per class, respectively. GoogLeNet and VGG are chosen as the baseline methods to build the models. To explore the performance of the proposed matcher under different networks and settings, GoogLeNet is trained from scratch, while the VGG model is fine-tuned from the pre-trained VGG weights on ImageNet. On the CIFAR-10 dataset with GoogLeNet, the settings are $t_s=\{0.40, 0.50, 0.60, 0.70\}$ and $t_c=\{0.01, 0.05, 0.10, 0.15\}$ with global models at different iteration stages from 6,000 to 11,000. On the CIFAR-10 dataset with VGG, the settings are $t_s=\{0.30, 0.40, 0.50, 0.60\}$ and $t_c=\{0.03, 0.05, 0.10, 0.15\}$ with global models at different iteration stages from 5,000 to 9,000. On the Flower dataset with GoogLeNet, the settings are $t_s=\{0.40, 0.50, 0.60, 0.70\}$ and $t_c=\{0.01, 0.05, 0.10, 0.20\}$ with global models at different iteration stages from 1,000 to 5,000. On the Flower dataset with VGG, the settings are $t_s=\{0.35, 0.40, 0.45, 0.50\}$ and $t_c=\{0.01, 0.02, 0.03, 0.04\}$ with global models at different iteration stages from 1,000 to 5,000. The results are shown in Figures~\ref{CIF_f1}-\ref{FLW_f2}. Tables~\ref{CIF_t1}-\ref{FLW_t2} depict the sizes of the local model sets under different iterations and thresholds.
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{dnn_f_5}
\caption{ }
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{dnn_f_6}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of GoogLeNet computed on the CIFAR-10 dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{CIF_f1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{CIF_3}
\caption{ }
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{CIF_4}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of VGG computed on the CIFAR-10 dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{CIF_f2}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{dnn_f_1}
\caption{ }
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{dnn_f_2}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of GoogLeNet computed on the Flower dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{FLW_f1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{FLW_3}
\caption{ }
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{FLW_4}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of VGG computed on the Flower dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{FLW_f2}
\end{figure}
\begin{table}
\begin{center}
\caption{ The size of local model set of GoogLeNet on the CIFAR-10 dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.40 & SM-0.50 &SM-0.60 &SM-0.70 & CM-0.01 & CM-0.05 &CM-0.10 &CM-0.15 \\ \hline
6,000&18 &14 &7 &4 & 44 &23 &11&4 \\ \hline
7,000 &18 &14 &7 &4 &37 &20&9&5\\ \hline
8,000 &18 &14 &7 &4 &36 &19&10&4\\ \hline
9,000&18 &14 &7 &4 &39 &18&7&2\\ \hline
10,000&18 &14 &7 &4 &36 &16&5&1\\ \hline
11,000 &18 &14 &7 &4 &36 &18&6&2\\ \hline
\end{tabular}}
\label{CIF_t1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ The size of local model set of VGG on the CIFAR-10 dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.30 & SM-0.40 &SM-0.50 &SM-0.60 & CM-0.03 & CM-0.05 &CM-0.10 &CM-0.15 \\ \hline
5,000 &22 &11 &6 &1 &18 &13 &5 &2 \\ \hline
6,000 &23 &12 &6 &1 &22 &16 &5 &4\\ \hline
7,000 &23 &11 &6 &1 &21 &16 &4 &2\\ \hline
8,000 &22 &11 &6 &1 &19 &13 &4 &2\\ \hline
9,000 &18 &10 &6 &1 &21 &15 &3 &2\\ \hline
\end{tabular}}
\label{CIF_t2}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ The size of local model set of GoogLeNet on the Flower dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.40 & SM-0.50 &SM-0.60 &SM-0.70 & CM-0.01 & CM-0.05 &CM-0.10 &CM-0.20 \\ \hline
1,000&88 &60 &27 &7 & 41 &41 &16&6 \\ \hline
2,000 &88 &60 &27 &7 &33 &33&9&4\\ \hline
3,000&88 &60 &27 &7 &32 &32&11&4\\ \hline
4,000&88 &60 &27 &7 &33 &33&9&1\\ \hline
5,000&88 &60 &27 &7 &31 &31&9&2\\ \hline
\end{tabular}}
\label{FLW_t1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ The size of local model set of VGG on the Flower dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.35 & SM-0.40 &SM-0.45 &SM-0.50 & CM-0.01 & CM-0.02 &CM-0.03 &CM-0.04 \\ \hline
1,000 &34 &21 &11 &6 &8 &8 &8&8 \\ \hline
2,000 &37 &26 &12 &7 &10 &10 &10&10\\ \hline
3,000 &33 &23 &9 &6 &10 &10 &10&10\\ \hline
4,000 &35 &25 &9 &7 &10 &10 &10&10\\ \hline
5,000 &38 &23 &10 &6 &8 &8 &8&8\\ \hline
\end{tabular}}
\label{FLW_t2}
\end{center}
\end{table}
From Figure~\ref{CIF_f1} it can be observed that on the CIFAR-10 dataset, the baseline GoogLeNet model obtains its best performance at 10,000 iterations. With different sizes of label pair sets from the similarity matrix and the confusion matrix, the proposed matcher achieves better performance. The best performance of LCC-CNN-SM is obtained with $t_s = 0.40$ and 18 label pairs. LCC-CNN-CM with $t_c = 0.01$ and with $t_c = 0.05$ achieves similar top performance with label pair sets of different sizes, 36 and 16 respectively. This also indicates that the proposed matcher is robust to over-fitting: it can adaptively select useful label pairs from the label pair set. From Figure~\ref{CIF_f2}, similar improvements can be observed, even though the performance is more sensitive to the iteration stage. From Table~\ref{CIF_t1} and Table~\ref{CIF_t2} it can be observed that the size of the label pair set in LCC-CNN-SM is not sensitive to the iteration stage; it is mainly affected by the threshold value $t_s$. In LCC-CNN-CM, on the other hand, the size of the label pair set is influenced by both the iteration number and the threshold $t_c$. This is because the similarity matrix is based on an averaged visual feature vector of each class label, and these similarity correlations are stable across iteration stages, whereas the confusion matrix is very sensitive to an increase in the iteration number. With fixed $t_c$, the closer the model is to converging, the fewer label pairs are extracted. In Figures~\ref{FLW_f1}-\ref{FLW_f2} and Tables~\ref{FLW_t1}-\ref{FLW_t2}, similar results can be observed on the Flower dataset. Both LCC-CNN-SM and LCC-CNN-CM achieve better performance at different training stages. Note that under the VGG baseline model, LCC-CNN-CM achieves the same performance under different threshold values of $t_c$; this is because all the non-diagonal elements of the confusion matrix have the same value.
\subsection{Character recognition}
One character recognition dataset, the EnglishFnt dataset from the Chars74K collection \cite{deCampos09}, is evaluated. The dataset contains 62 classes (0-9, A-Z, a-z), each with 1,016 images of different fonts. The dataset is divided into a training set, a validation set and a testing set, which contain 50\%, 25\% and 25\% of the images per class, respectively. As with the image recognition datasets, GoogLeNet and VGG are chosen as baseline methods to build the models. For GoogLeNet, the settings are $t_s=\{0.70, 0.80, 0.90, 0.95\}$ and $t_c=\{0.05, 0.10, 0.20, 0.30\}$ with global models at different iteration stages from 2,000 to 6,000. For VGG, the settings are $t_s=\{0.65, 0.68, 0.70, 0.73\}$ and $t_c=\{0.003, 0.005, 0.01, 0.02\}$ with global models at different iteration stages from 3,000 to 7,000. The results are shown in Figures~\ref{fnt_f1} and \ref{fnt_f2}. Note that the lower the threshold, the more local models are extracted. Tables~\ref{fnt_t1} and \ref{fnt_t2} depict the sizes of the local model sets under different iterations and thresholds.
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Fnt_1}
\caption{ }
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Fnt_2}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of GoogLeNet computed on the EnglishFnt dataset. (a) LCC-CNN-SM (b) LCC-CNN-CM.}
\label{fnt_f1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Fnt_3}
\caption{ }
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Fnt_4}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of VGG computed on the EnglishFnt dataset. (a) LCC-CNN-SM (b) LCC-CNN-CM.}
\label{fnt_f2}
\end{figure}
\begin{table}
\begin{center}
\caption{ The size of local model set of GoogLeNet on the EnglishFnt dataset.}
\scalebox{0.7}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.70 & SM-0.80 &SM-0.90 &SM-0.95 & CM-0.05 & CM-0.10 &CM-0.20 &CM-0.30 \\ \hline
2000&100 &19 &10 &7 &43&17&12&11 \\ \hline
3000&100 &19 &10 &7 &25&14&10&8\\ \hline
4000&100 &19 &10 &7 &23&14&9&6 \\ \hline
5000&100 &19 &10 &7 &23&12&8&6 \\ \hline
6000&100 &19 &10 &7 &24&15&11&7 \\ \hline
\end{tabular}}
\label{fnt_t1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ The size of local model set of VGG on the EnglishFnt dataset.}
\scalebox{0.7}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.65 & SM-0.68 &SM-0.70 &SM-0.73 & CM-0.003 & CM-0.005 &CM-0.01 &CM-0.02 \\ \hline
3000&148 &88 &57 &35 &277&126&76&39 \\ \hline
4000&152 &90 &57 &31 &263&110&63&37\\ \hline
5000&141 &71 &48 &25 &231&101&65&34 \\ \hline
6000&205 &112 &69 &33 &209&97&52&33 \\ \hline
7000&244 &140 &81 &41 &201&93&58&33 \\ \hline
\end{tabular}}
\label{fnt_t2}
\end{center}
\end{table}
From the results, consistent improvements can be observed, as on the previous datasets. Both LCC-CNN-SM and LCC-CNN-CM improve the performance at different training stages. The sizes of the label pair sets from LCC-CNN-SM and LCC-CNN-CM also confirm the previous analysis.
\subsection{Face recognition}
Two face recognition datasets, the UHDB31 dataset \cite{ha2017uhdb31} and the CASIA-WebFace dataset \cite{yi2014learning}, are tested. The UHDB31 dataset contains 29,106 color face images of 77 subjects with 21 poses and 18 illuminations. To evaluate the performance of cross-pose face recognition, images from the 7 near-frontal poses are used for training. The images from the remaining 14 poses are split equally for validation and testing. The CASIA-WebFace dataset contains 494,414 wild face images of 10,575 subjects. 100 subjects, each with more than 100 images, are randomly selected to build a face identification subset. Then, the subset is divided into a training set, a validation set and a testing set, which contain 50\%, 25\% and 25\% of the images per subject, respectively. The pre-trained VGG-Face model \cite{parkhi2015deep} and the pre-trained ResNet-Face model \cite{yi2014learning} are used as baseline methods to fine-tune the global model. Since the pre-trained ResNet-Face model is trained on CASIA-WebFace, the weights of the pre-trained ResNet101 model from ImageNet \cite{He_2016_CVPR} are used on the CASIA-WebFace dataset. The patch size is set to 24 for VGG-Face and 12 for ResNet-Face. On the UHDB31 dataset, LCC-CNN-SM and LCC-CNN-CM are tested using different threshold sets (\mbox{$t_s=\{0.50, 0.60, 0.70, 0.80\}$} and \mbox{$t_c=\{0.05, 0.10, 0.15, 0.20\}$}) with global models at different iteration stages from 4,000 to 8,000 for both baselines. On the CASIA-WebFace dataset, LCC-CNN-SM and LCC-CNN-CM are tested using different threshold sets ($t_s=\{0.55, 0.57, 0.60, 0.62\}$ and $t_c=\{0.01, 0.03, 0.05, 0.07\}$) with the VGG-Face model at different iteration stages from 5,000 to 9,000, and using different threshold sets ($t_s=\{0.67, 0.70, 0.72, 0.75\}$ and \mbox{$t_c=\{0.01, 0.03, 0.05, 0.07\}$}) with the ResNet-Face model at different iteration stages from 23,000 to 27,000.
The results are shown in Figures~\ref{UH_f1}-\ref{Web_f2} and Tables~\ref{UH_t1}-\ref{Web_t2}.
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{UH_1}
\caption{ }
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{UH_2}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of VGG-Face computed on the UHDB31 dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{UH_f1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{UH_3}
\caption{ }
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{UH_4}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of ResNet-Face computed on the UHDB31 dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{UH_f2}
\end{figure}
\begin{table}
\begin{center}
\caption{ The size of local model set of VGG-Face on the UHDB31 dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.50 & SM-0.60 &SM-0.70 &SM-0.80 & CM-0.05 & CM-0.10 &CM-0.15 &CM-0.20 \\ \hline
5,000&105 &62 &23 &10 &127 &47 &25 &6 \\ \hline
6,000&75 &49 &21 &9 &95 &36 &17 &6 \\ \hline
7,000&70 &47 &20 &8 &76 &36 &13 &6 \\ \hline
8,000&71 &41 &17 &8 &88 &36 &14 &6 \\ \hline
9,000&70 &47 &18 &7 &88 &38 &14 &4 \\ \hline
\end{tabular}}
\label{UH_t1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ The size of local model set of ResNet-Face computed on the UHDB31 dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.50 & SM-0.60 &SM-0.70 &SM-0.80 & CM-0.05 & CM-0.10 &CM-0.15 &CM-0.20 \\ \hline
5,000&151 &55 &19 &10 &177 &66 &28 &11 \\ \hline
6,000&134 &48 &19 &14 &160 &56 &28 &11 \\ \hline
7,000&96 &32 &11 &9 &117 &39 &13 &6 \\ \hline
8,000&117 &47 &9 &8 &139 &53 &16 &7 \\ \hline
9,000&132 &59 &19 &14 &153 &66 &28 &10 \\ \hline
\end{tabular}}
\label{UH_t2}
\end{center}
\end{table}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Web_1}
\caption{ }
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Web_2}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of VGG-Face computed on the CASIA-WebFace dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{Web_f1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Web_3}
\caption{ }
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Web_4}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of ResNet-Face computed on the CASIA-WebFace dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{Web_f2}
\end{figure}
\begin{table}
\begin{center}
\caption{ The size of local model set of VGG-Face on the CASIA-WebFace dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.55 & SM-0.57 &SM-0.60 &SM-0.62 & CM-0.01 & CM-0.03 &CM-0.05 &CM-0.07 \\ \hline
5,000&175 &108 &43 &14 &391 &60 &17 &4 \\ \hline
6,000&171 &94 &35 &17 &418 &62 &19 &8 \\ \hline
7,000&155 &87 &22 &8 &372 &63 &13 &4 \\ \hline
8,000&173 &108 &46 &23 &383 &48 &12 &4 \\ \hline
9,000&335 &198 &88 &46 &416 &57 &17 &10 \\ \hline
\end{tabular}}
\label{Web_t1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ The size of local model set of ResNet-Face on the CASIA-WebFace dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.67 & SM-0.70 &SM-0.72 &SM-0.75 & CM-0.01 & CM-0.03 &CM-0.05 &CM-0.07 \\ \hline
23,000&229 &107 &59 &21 &601 &78 &12 &2 \\ \hline
24,000&218 &103 &57 &19 &605 &82 &11 &1 \\ \hline
25,000&235 &108 &61 &22 &605 &82 &14 &3 \\ \hline
26,000&232 &109 &61 &20 &593 &82 &15 &4 \\ \hline
27,000&230 &107 &59 &20 &598 &87 &14 &1 \\ \hline
\end{tabular}}
\label{Web_t2}
\end{center}
\end{table}
From Figure~\ref{UH_f1}, it can be observed that the VGG-Face global model achieves the best performance (77.2\%) at 8,000 iterations. Both LCC-CNN-SM and LCC-CNN-CM improve the performance at different training stages. LCC-CNN-SM and LCC-CNN-CM achieve the best performance of 79.1\% and 79.3\% when $t_s = 0.5$ and $t_c = 0.05$, respectively. From Figure~\ref{UH_f2}, it can be observed that the best performance of the ResNet-Face model (68.8\%) is lower than that of the VGG-Face model. Both the proposed models perform better than the baseline at different training stages. The best results of LCC-CNN-SM (71.3\%) and LCC-CNN-CM (71.5\%) are also achieved when $t_s = 0.5$ and $t_c = 0.05$, respectively. From Figure~\ref{Web_f1} and Figure~\ref{Web_f2}, similar improvements can be observed on both baselines. From Figure~\ref{Web_f1}, it can be observed that the VGG-Face global model achieves the best performance (90.7\%) at 8,000 iterations. Both LCC-CNN-SM and LCC-CNN-CM improve the performance at different training stages. LCC-CNN-SM and LCC-CNN-CM achieve the best performance of 90.9\% and 91.4\% when $t_s = 0.55$ and $t_c = 0.01$, respectively. From Figure~\ref{Web_f2}, it can be observed that the best performance of the ResNet-Face model (84.6\%) is lower than that of the VGG-Face model. This is because the pre-trained model is trained on ImageNet rather than a face dataset. Both the proposed models perform better than the baseline at different training stages. The best results of LCC-CNN-SM (84.9\%) and LCC-CNN-CM (85.0\%) are achieved when $t_s = 0.67$ and $t_c = 0.01$, respectively. From Tables~\ref{UH_t1}-\ref{Web_t2}, it can be observed that as $t_s$ and $t_c$ decrease, more label pairs are selected at different training stages.
Figure~\ref{f4} and Figure~\ref{f3} depict several matching examples of the proposed matcher on different datasets. From Figure~\ref{f4}, some confusions in the global model matching can be observed when there are similar backgrounds (the images in (a)), occlusions (the ``bee'' on the flower image in (d)), and multiple objects (the images in (c) and (d)). From Figure~\ref{f3}, it can be observed that the global model fails to distinguish visually similar faces, especially when there are similar facial attributes: ``bald head'' in (b), ``black skin'' in (c), and ``wavy hair'' in (d). On the other hand, local models explore more pairwise correlations between labels and can extract more locally discriminative features, especially for the related labels. Also, the proposed matcher can explore chains of similar labels, like the flowers in Figure~\ref{f4}(d) and the characters in Figure~\ref{f4}(f). This information can help understand the pairwise correlations between different labels.
\begin{figure}
\centering
\begin{center}
\includegraphics[width=1\linewidth]{ex_image_all}
\end{center}
\caption{The matching examples of the proposed hierarchical matcher on the image recognition and character recognition datasets. The text on the top of each image represents its label, where black, red and green indicate the ground truth label, a mistaken label and a correct label, respectively.}
\label{f4}
\end{figure}
\begin{figure}
\centering
\begin{center}
\includegraphics[width=1\linewidth]{ex_face_all}
\end{center}
\caption{The matching examples of the proposed hierarchical matcher on face recognition datasets. The text on the top of each image represents its identity, where black, red and green indicate ground truth identity, mistaken identity and correct identity, respectively.}
\label{f3}
\end{figure}
\subsection{UR2D evaluation}
The UR2D system is evaluated on two types of face recognition scenarios: a constrained environment and an unconstrained environment. The datasets used for evaluation are the UHDB31 dataset \cite{ha2017uhdb31, ZHANG201828} and the IJB-A dataset \cite{klare2015pushing}. The same setting as \cite{klare2015pushing} is followed on the UHDB31 dataset. To exclude illumination changes, a subset with natural illumination is selected. To evaluate the performance of cross-pose face recognition, the frontal pose (pose-11) face images are used as the gallery and the remaining images from 20 poses are used as probes. An example of face images from different poses of the same subject is depicted in Figure~\ref{uhdb31_ex}. The IJB-A dataset \cite{klare2015pushing} contains images and videos of 500 subjects captured in ``in the wild'' environments. This dataset merges images and frames and provides evaluations at the template level, where a template contains one or several images/frames of one subject. The IJB-A protocol divides the galleries and probes into 10 splits. In this experiment, the same modification as \cite{xiang2017ijcb} is followed for use in closed-set face recognition.
\begin{figure}
\centering
\begin{center}
\includegraphics[width=1\linewidth]{uhd31_214.png}
\end{center}
\caption{An example depicting the face images of different poses in the UHDB31 dataset.}
\label{uhdb31_ex}
\end{figure}
To create a validation set, synthetic images are generated for the UHDB31 dataset. Each gallery image is rotated, masked and cropped to create 150 images. Then, half of them are used as a sub-gallery and the other half as a sub-probe. In the IJB-A dataset, sub-gallery and sub-probe sets are also created from the gallery images. The UR2D system with the Deep Pose Robust Face Signature (DPRFS) is used as the baseline method. Table~\ref{re-1-p-t1} and Table~\ref{re-1-p-t2} show the results on the two datasets. The parameters $t_s$ and $t_c$ are set to 0.6 and 0.1, learned from the sub-sets. Figure~\ref{re-1-p-f1} and Figure~\ref{re-1-p-f2} show the sensitivity of $t_s$ and $t_c$, respectively. The local models are learned based on the VGG-Face network, which provided better performance in the previous evaluations. Table~\ref{re-1-p-t3} shows the sizes of the local model sets.
\begin{table*}
\begin{center}
\caption{The Rank-1 performance of LCC-CNN computed on the UHDB31 dataset (\%). The three values in each cell correspond to UR2D, LCC-CNN-SM, and LCC-CNN-CM, respectively.}
\vspace{-7px}
\scalebox{0.7}{
\begin{tabular}{| c|c |c| c |c |c| c| c|}
\hline
\backslashbox{Pitch}{Yaw}
& -90\textdegree{} &-60\textdegree{} &-30\textdegree{} &0\textdegree{} & +30\textdegree{} &+60\textdegree{} &+90\textdegree{} \\
\hline
+30\textdegree{} &
\makecell{82,82,{\bf 83}} &
\makecell{{\bf 99},{\bf 99},{\bf 99}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{{\bf 99},{\bf 99},{\bf 99}} &
\makecell{99,{\bf 100},{\bf 100}} &
\makecell{{\bf 75},{\bf 75},{\bf 75}} \\
\hline
0\textdegree{} &
\makecell{{\bf 96},{\bf 96},{\bf 96}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}}&
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
- &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{{\bf 96},{\bf 96},{\bf 96}} \\
\hline
-30\textdegree{} &
\makecell{75,75,{\bf 76}} &
\makecell{{\bf 97},{\bf 97},{\bf 97}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{96,{\bf 97},{\bf 97}} &
\makecell{{\bf 79},{\bf 79},{\bf 79}} \\
\hline
\end{tabular}}
\label{re-1-p-t1}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\caption{The Rank-1 performance of LCC-CNN computed on the IJB-A dataset (\%).}
\vspace{-7px}
\scalebox{0.6}{
\begin{tabular}{ l |c c c c c c c c c c c}
\hline
Methods &split-1 & split-2 &split-3 &split-4 &split-5 & split-6 & split-7 &split-8 & split-9 & split-10 & Average\\
\hline
UR2D &78.78&77.60& 77.94& 79.88& 78.44 & 80.57&81.78& 79.00& 75.94& 79.22& 78.92 \\
LCC-CNN-SM & 78.78 & 77.74 & 77.94 & 80.09&{\bf 78.51}& 80.74 &{\bf 82.09}& 79.17 & 76.13 & 79.22 & 79.04 \\
LCC-CNN-CM &{\bf 79.20} &{\bf 77.78}&{\bf 77.96}&{\bf 80.66}& 78.06&{\bf 80.77}& 81.39&{\bf 79.23}& {\bf 76.47}&{\bf 79.39}&{\bf 79.09} \\
\hline
\end{tabular}}
\label{re-1-p-t2}
\end{center}
\end{table*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{lcc-bar1.png}
\caption{ }
\label{fig:ts-uhdb31}
\end{subfigure}%
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{lcc-bar2.png}
\caption{ }
\label{fig:ts-ijba}
\end{subfigure}
\caption{ The sensitivity of $t_s$ computed on LCC-CNN-SM. (a) UHDB31. (b) IJB-A.}\label{re-1-p-f1}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{lcc-bar3.png}
\caption{ }
\label{fig:tc-uhdb31}
\end{subfigure}%
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{lcc-bar4.png}
\caption{ }
\label{fig:tc-ijba}
\end{subfigure}
\caption{ The sensitivity of $t_c$ computed on LCC-CNN-CM. (a) UHDB31. (b) IJB-A.}\label{re-1-p-f2}
\end{figure*}
\begin{table}
\begin{center}
\caption{The sizes of the local model sets based on UR2D for different values of $t_s$ (SM) and $t_c$ (CM).}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline
Datasets &SM-0.40& SM-0.50 & SM-0.60 &SM-0.70 &SM-0.80 &CM-0.05 &CM-0.08 & CM-0.10 &CM-0.13 &CM-0.15 \\ \hline
UHDB31&70&60 &45 &20 &10 &88 &50 &36 &12 &6 \\ \hline
IJB-A&170&120 &90 &35 &17 &319& 220&60 &17 &4 \\ \hline
\end{tabular}}
\label{re-1-p-t3}
\end{center}
\end{table}
From the results in Table~\ref{re-1-p-t1}, it can be observed that the LCC-CNN-CM model improves the performance on 4 poses and maintains excellent performance on the other poses. The performance of LCC-CNN-SM is limited by the similarity between different faces. From Table~\ref{re-1-p-t2}, improvements can also be observed on most of the splits. These results also confirm that LCC-CNN-CM performs better on face recognition. From Figure~\ref{re-1-p-f1} and Figure~\ref{re-1-p-f2}, it can be observed that different parameter values lead to different performance. Table~\ref{re-1-p-t3} indicates that different numbers of local models are created for different parameter values. The limitation of the proposed models is that they require re-training when new gallery images are added.
\subsection{Statistical Analysis}
\label{SA}
In this section, statistical analysis is performed for the three methods (UR2D, LCC-CNN-SM, LCC-CNN-CM) over the 30 data splits (20 from UHDB31 and 10 from IJB-A). Following Dem\v{s}ar \cite{demvsar2006statistical}, the Friedman test \cite{friedman1937use,friedman1940comparison} and the two-tailed Bonferroni-Dunn test \cite{dunn1961multiple} are applied to compare multiple methods over multiple datasets. Let $r_i^j$ represent the rank of the $j^{th}$ of $k$ algorithms on the $i^{th}$ of $N$ datasets. The Friedman test compares the average ranks of different methods, $R_j = \frac{1}{N} \sum_i r_i^j$. The null hypothesis is that all the methods are equivalent, so their ranks $R_j$ should be equal. The original Friedman statistic \cite{friedman1937use,friedman1940comparison},
\begin{equation}\label{st1}
\mathcal{X}_F^2 = \frac{12N}{k(k+1)}[\sum_j R_j^2 - \frac{k(k+1)^2}{4}],
\end{equation}
is distributed according to $\mathcal{X}_F^2$ with $k-1$ degrees of freedom. Because of its undesirably conservative behavior, Iman and Davenport \cite{iman1980approximations} derived a better statistic
\begin{equation}\label{st2}
F_F = \frac{(N-1)\mathcal{X}_F^2}{N(k-1)-\mathcal{X}_F^2},
\end{equation}
which is distributed according to the F-distribution with $k-1$ and $(k-1) \times (N-1)$ degrees of freedom. First, the average ranks of each method are computed: 2.43, 1.93, and 1.63 for UR2D, LCC-CNN-SM, and LCC-CNN-CM, respectively. The $F_F$ statistical value of the Rank-1 accuracy based on (\ref{st2}) is computed as $5.02$. With three methods and 30 data splits, $F_F$ is distributed with $3-1 = 2$ and $(3-1) \times (30-1) = 58$ degrees of freedom. The critical value of $F(2, 58)$ for $\alpha = 0.10$ is $2.18 < 5.02$, so the null hypothesis is rejected. Then, the two-tailed Bonferroni-Dunn test is applied to compare each pair of methods based on the critical difference:
\begin{equation}\label{st3}
CD = q_{\alpha} \sqrt{\frac{k(k+1)}{6N}},
\end{equation}
where $q_{\alpha}$ is the critical value. If the difference between the average ranks of two methods is larger than the critical difference, the two methods are significantly different; otherwise, they are statistically the same. According to Table 5 in \cite{demvsar2006statistical}, the critical value for three methods at $p = 0.10$ is 1.96. The critical difference is computed as $CD = 1.96 \sqrt{\frac{3 \times 4}{6 \times 30}} = 0.51$. Thus, under Rank-1 accuracy, LCC-CNN-CM performs significantly better than UR2D (the difference between ranks is $2.43 - 1.63 = 0.8 > 0.51$). The difference between LCC-CNN-SM and UR2D, $2.43 - 1.93 = 0.50$, is slightly smaller than the critical difference $0.51$, so they are not significantly different. Similar tests were also performed for the previous experiments, confirming the significance of the improvements.
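As a sanity check, the three statistics above can be recomputed directly from the reported average ranks. This is a sketch in Python; the function name is illustrative, and with the rounded ranks 2.43, 1.93, and 1.63 the recomputed $F_F$ is close to, but not exactly, the reported value.

```python
import math

def friedman_tests(avg_ranks, n_datasets, q_alpha=1.96):
    """Compute the Friedman statistic, the Iman-Davenport correction,
    and the Bonferroni-Dunn critical difference for k methods
    compared over N datasets."""
    k, n = len(avg_ranks), n_datasets
    # Friedman statistic, Eq. (st1)
    chi2 = 12.0 * n / (k * (k + 1)) * (
        sum(r * r for r in avg_ranks) - k * (k + 1) ** 2 / 4.0)
    # Iman-Davenport statistic, Eq. (st2), ~ F(k-1, (k-1)(N-1))
    f_f = (n - 1) * chi2 / (n * (k - 1) - chi2)
    # Bonferroni-Dunn critical difference, Eq. (st3)
    cd = q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))
    return chi2, f_f, cd

# Average ranks reported in the text (UR2D, LCC-CNN-SM, LCC-CNN-CM)
chi2, f_f, cd = friedman_tests([2.43, 1.93, 1.63], n_datasets=30)
```

Running this yields $CD \approx 0.51$ and an $F_F$ well above the critical value 2.18, consistent with rejecting the null hypothesis.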
\section{Conclusion}
\label{sec6}
This paper proposed a hierarchical matcher that combines the merits of both global and local models. Chains of local models, built based on a similarity matrix and a confusion matrix, are used to improve the matching of the global model. The experimental results confirmed the assumption that local models explore more pairwise discriminative features and can be used to improve global matching performance. Compared with the UR2D system, the accuracy is improved significantly, by 1\% and 0.17\% on the UHDB31 dataset and the IJB-A dataset, respectively. An interesting observation is the relationship between the performance of the global model and that of the final matcher. Looking at the results in Figure~\ref{fnt_f1}(a), for example, the global model accuracy reaches 0.8268 and 0.8499 at iterations 4,000 and 5,000, respectively. The matcher then improves the performance to 0.8661 and 0.8704, narrowing the gap from 0.0231 to 0.0043. This observation shows that it is possible to build local models on top of an unconverged, early-stage global model and still achieve performance comparable to the current matcher, which leads to a better balance between the global and local models.
\section*{Acknowledgements}
This material is based upon work supported by the U.S. Department of Homeland Security under Grant Award Number 2015-ST-061-BSH001. This grant is awarded to the Borders, Trade, and Immigration (BTI) Institute: A DHS Center of Excellence led by the University of Houston, and includes support for the project ``Image and Video Person Identification in an Operational Environment: Phase I'' awarded to the University of Houston. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security.
\section{References}
\bibliographystyle{elsarticle-num}
\section{Introduction}
Visual recognition is one of the most active topics in computer vision and machine learning. In recent years, many deep learning models have been built that set new state-of-the-art results in image classification, object detection, and many other visual recognition tasks \cite{ILSVRC2015, girshick2016region, szegedy2013deep}. Among these tasks, most of the breakthroughs have been achieved with deep Convolutional Neural Networks (CNNs) \cite{lecun1989backpropagation}.
The CNN was first proposed in the late 1980s by LeCun \textit{et al.} \cite{lecun1989backpropagation, lecunhandwritten}. It was quickly overtaken by combinations of shallow descriptors (such as SIFT \cite{lowe2004distinctive}, HOG \cite{mikolajczyk2005performance}, and bags of words \cite{csurka2004visual}) with Support Vector Machines (SVM) \cite{vapnik1998statistical}. In recent years, with the increase in image recognition data size and computational power, CNNs have become increasingly popular and dominant. Krizhevsky \textit{et al.} \cite{Alex2012} proposed the classic eight-layer CNN model (AlexNet) with five convolutional and three fully connected layers. The model is trained via back-propagation and performs extremely well in domains with a large amount of training data. Since then, many new CNN models have been constructed with larger sizes and different architectures to improve performance. A series of improvements were achieved by VGG \cite{simonyan2014very}, GoogLeNet \cite{szegedy2015going}, ResNet \cite{He_2016_CVPR, he2016identity2}, and others. However, a larger model entails more parameters and greater computational complexity. Methods for compressing networks and accelerating training and testing computation have also been developed \cite{han2015deep, dean2012large, denton2014exploiting, zhang2016accelerating}.
Overall, previous deep networks have two limitations: (a) data size: training a larger global model requires more training data, which can be costly or unavailable in certain applications; (b) local information: one deep neural network built over all the class labels may ignore the pairwise local correlations between different labels, which could be used to improve overall performance.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{example1}
\caption{An example of the proposed LCC-CNN matcher is depicted with the combination of global model and local models. The matching paths of the two testing images are indicated in red and cyan, respectively.}
\label{f1}
\end{figure}
This paper overcomes limitations (a) and (b) by introducing a hierarchical matcher that builds chains of local binary CNN classifiers after the global CNN classifier over all the class labels. Moreover, it is a method to improve face recognition performance without changing the architecture of the CNN network. Hereafter, these two types of classifiers are referred to as local models and the global model. The motivation is that a global model focuses more on the globally discriminative features over all the class labels and tends to misclassify samples from visually similar classes. With fewer labels, a local model can exploit more locally discriminative features for the related labels and can be used to correct the matching result of the global model. Especially when the same training data and network architecture are used for both global and local models, each local model converges fast and achieves better accuracy than the global model. Also, local models can be trained in parallel, which avoids an excessive increase in computational complexity. In addition, when data size is limited, a local model can explore more pairwise label correlations than the global model. To limit the complexity of the proposed matcher, only binary local models are built in this paper. Take the CIFAR-10 dataset \cite{krizhevsky2009learning} for example. Figure~\ref{f1} depicts the intuition of the proposed matcher. It can be observed that for the ``dog'' image, the binary local model between the ``cat'' and ``dog'' labels is used to correct the mistake of the global model. Importantly, local models can be built one after another, which leads to a chain of local models that boosts performance and avoids error propagation. For the ``deer'' image, a chain of two local models (between the ``dog'' and ``horse'' labels, and between the ``dog'' and ``deer'' labels) is built to improve the matching of the global model.
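The chain-correction idea above can be sketched in a few lines of Python. The binary local models here are hypothetical stand-ins (hand-set lookups) for the fine-tuned binary CNNs, and all function and variable names are illustrative only:

```python
def chain_correct(global_pred, rival_of, local_models, max_steps=3):
    """Follow a chain of binary local models, starting from the global
    model's prediction, until no local model changes the label."""
    label = global_pred
    for _ in range(max_steps):
        rival = rival_of(label)              # most confusable rival label
        if rival is None:
            break
        key = tuple(sorted((label, rival)))  # label pairs are undirected
        model = local_models.get(key)
        if model is None:
            break                            # no local model for this pair
        new_label = model()                  # binary decision on the sample
        if new_label == label:
            break                            # chain has converged
        label = new_label
    return label

# Toy version of the "dog" example: the global model says "cat", and the
# cat-vs-dog local model corrects the label to "dog".
rivals = {"cat": "dog", "dog": "horse"}
models = {("cat", "dog"): lambda: "dog"}
corrected = chain_correct("cat", rivals.get, models)
```

Because no dog-vs-horse model exists in this toy setup, the chain stops after one correction.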
The contributions of this paper are improving recognition performance through the following two techniques: (i) fully exploiting the training data with a hierarchical matcher, in which the contributions of the global model and local models are combined; (ii) introducing pairwise label information into local model chains, which adaptively select a small set of label pairs to build local models. The pairwise correlations between different labels are learned based on their relationships in the score matrices. These correlations are not well explored in global-model-based methods.
Parts of this work on face recognition have been published in Zhang \textit{et al.} \cite{zhang2017ijcb}. In this paper, it is extended by providing: (i) a signature to store image information with global model and local model components; (ii) the fine-tuning process of local models; (iii) more general applications of image recognition and character recognition; (iv) the evaluation on the pose-invariant 3D-aided 2D face recognition system (UR2D) \cite{xiang2017ijcb}.
The rest of this paper is organized as follows: Section \ref{sec2} presents related work. Section \ref{sec3} and Section \ref{sec4} describe the signature and the hierarchical matcher. The experimental design, results, and analysis are presented in Section \ref{sec5}. Section \ref{sec6} concludes the paper.
\section{Related work}
\label{sec2}
In this section, the most recent works related to the proposed method are reviewed, and it is illustrated how the proposed matcher differs from previous research.
\textbf{Image recognition with CNNs:} the previous methods are introduced based on several key factors in CNN.
(1) \textit{Filter size and stride}: Based on the visualization of feature maps with a deconvnet, Zeiler \textit{et al.} \cite{zeiler2014visualizing} utilized a receptive window of size $7 \times 7$ and a stride of 2 in the first convolutional layer and achieved better performance than AlexNet on the ImageNet dataset. Sermanet \textit{et al.} \cite{sermanet2014overfeat} proposed an integrated framework for classification, localization, and detection, in which the different tasks are learned simultaneously using a single shared network. Their model has larger first- and second-layer feature maps ($11 \times 11$ and $5 \times 5$) with strides of $4 \times 4$ and $1 \times 1$. Simonyan \textit{et al.} \cite{simonyan2014very} used a small $3 \times 3$ receptive field, which is the smallest size that captures the notion of left/right, up/down, and center, with the convolution stride fixed to 1 pixel. The reason behind this is that a stack of three $3 \times 3$ convolutional layers (without spatial pooling in between) has an effective receptive field of $7 \times 7$, while incorporating three non-linear rectification layers instead of one, which makes the decision function more discriminative.
(2) \textit{Multi-scale and multi-view}: In visual recognition, objects of interest sometimes vary significantly in size and position within an image. The general idea to address this is to apply the CNN at multiple locations in the image. Krizhevsky \textit{et al.} utilized multi-view voting to boost performance, where 10 views (the 4 corner patches and the center patch, together with their horizontal reflections) are averaged. However, this approach ignores many regions of the image, is computationally redundant when views overlap, and is applied at a single scale. Sermanet \textit{et al.} \cite{sermanet2014overfeat} explored the entire image by densely running the network at each location and at multiple scales, which yields significantly more views for voting without increasing the computation too much; the approach uses 6 scales of the input image. Howard \textit{et al.} \cite{howard2013some} applied a combination of 5 translations, 2 flips, 3 scales, and 3 views, leading to 90 predictions, which slows prediction down by almost an order of magnitude. To rectify this, the authors applied a greedy algorithm that chooses a subset of transforms giving competitive performance.
(3) \textit{Data augmentation}: Data augmentation is the easiest and most common method to reduce over-fitting on image data. Simonyan \textit{et al.} \cite{simonyan2014very} extracted random $224 \times 224$ patches (and their horizontal reflections) from $256 \times 256$ rescaled images. The $256 \times 256$ image is generated by rescaling the largest image dimension to 256 and then cropping the other side to 256. This increases the size of the training set by a factor of 2,048, but results in a loss of roughly 30\% of the pixels. Howard \textit{et al.} first scaled the smallest side to 256 and then selected a random crop of $224 \times 224$ as a training image, which yields a large number of additional training images and helps the network learn more extensive translation invariance. Besides image cropping, color manipulation is also used in data augmentation. Krizhevsky \textit{et al.} \cite{Alex2012} performed PCA on the RGB pixel values to alter the intensities of the RGB channels; their scheme approximately captures changes in intensity and illumination. In addition to random lighting, other manipulations of contrast, brightness, and color are applied to generate training examples covering the span of image variations \cite{howard2013some}.
(4) \textit{Depth}: Increasing deep neural network size includes increasing both the number of layers and the number of units in each layer \cite{simonyan2014very}. Simonyan \textit{et al.} \cite{simonyan2014very} explored the influence of CNN depth using an architecture with small ($3 \times 3$) convolutional filters and achieved a significant improvement by pushing the depth to 16--19 layers in the VGG network. Szegedy \textit{et al.} \cite{szegedy2015going} introduced GoogLeNet as a 22-layer Inception network, which achieved impressive results in both classification and detection tasks. He \textit{et al.} \cite{He_2016_CVPR} proposed Residual Networks (ResNet) with a depth of up to 152 layers, which set new records for many visual recognition tasks. Furthermore, the authors released a residual network of 1K layers with a new residual unit that makes training easier and improves generalization.
(5) \textit{Loss function}: The softmax loss function is the most common loss function used in CNNs \cite{Alex2012, szegedy2015going, han2015deep}. It takes the output of a fully connected layer and produces a vector of real values in the range of 0 to 1 that sum to 1, where each value represents the predicted probability of one label. To enhance the discriminative power of CNN features, other loss functions have been developed. Contrastive loss constrains the distance between two training samples' deep features based on whether they are from the same class. Sun \textit{et al.} \cite{sun2014deep2} proposed learning deep face representations with a joint identification loss and verification loss: the identification loss increases the inter-personal variations, while the verification loss reduces the intra-personal variations. Wen \textit{et al.} \cite{wen2016latent} proposed using contrastive loss and softmax loss together to learn age-invariant deep features. Triplet loss has also been introduced to deep face recognition, minimizing the distance between an anchor and a positive sample of the same identity while maximizing the distance between the anchor and a negative sample of a different identity \cite{schroff2015facenet}. Both contrastive loss and triplet loss require dramatic data expansion when constructing sample pairs or sample triplets from the training set. To overcome this problem, center loss was developed to increase the intra-class compactness without re-combination of training samples \cite{wen2016discriminative}. The center loss learns a center for the deep features of each class and penalizes the distances between the deep features and their corresponding class centers.
(6) \textit{Speed up}: Larger models dramatically increase computational complexity. The training process can be accelerated with multiple GPUs and distributed deep network techniques \cite{dean2012large}. Methods for accelerating test-time computation of CNNs have also been developed. Denton \textit{et al.} \cite{denton2014exploiting} proposed using low-rank approximations and clustering of filters, achieving a 1.6$\times$ speedup of a single convolutional layer with a 1\% drop in classification accuracy for the Overfeat network. Lebedev \textit{et al.} \cite{lebedev2014speeding} introduced a two-step approach for speeding up convolutional layers within large CNNs based on tensor decomposition and discriminative fine-tuning, which achieves higher CPU speedups with small accuracy drops. These two methods only focus on the decomposition of one or a few linear layers of a CNN. Zhang \textit{et al.} \cite{zhang2016accelerating} proposed a low-rank decomposition method for very deep models that takes the non-linear neurons into account. The method achieves a $4 \times$ speedup on the VGG-16 model with graceful accuracy degradation.
\textbf{Face recognition with CNNs:} Face recognition is one of the major topics in visual recognition. Building on the success of CNNs in image recognition, many networks have been introduced in face recognition and achieved a series of breakthroughs. As in image recognition, effective CNNs require a large amount of training images and large network sizes. Taigman \textit{et al.} \cite{taigman2014deepface} trained the DeepFace system with a standard eight-layer CNN using 4.4M labeled face images. Sun \textit{et al.} \cite{sun2014deep, sun2014deep2, sun2015deepid3} developed the Deep-ID systems with more elaborate network architectures and fewer training face images, which achieved better performance than the DeepFace system. FaceNet \cite{schroff2015facenet} was introduced with 22 layers based on the Inception network \cite{szegedy2015going, zeiler2014visualizing}; it was trained on 200M face images and achieved further improvement. Parkhi \textit{et al.} \cite{parkhi2015deep} introduced the VGG-Face network with up to 19 layers, adapted from the VGG network \cite{simonyan2014very} and trained on 2.6M images. This network also achieved comparable results and has been extended to other applications. To overcome the massive requirement for labeled training data, Masi \textit{et al.} \cite{masi16dowe} proposed domain-specific data augmentation, which generates synthetic images for the CASIA WebFace collection \cite{yi2014learning} based on different facial appearance variations. Their results, trained with ResNet, match the state-of-the-art results reported by networks trained on millions of images. Most of these methods also focus on increasing the network size to improve performance. Xiang \textit{et al.} \cite{xiang2017ijcb} presented an evaluation of the UR2D face recognition system, which is robust to pose variations as large as 90\textdegree{}.
In the UR2D system, different CNNs are integrated for face detection, landmark detection, 3D reconstruction, and signature extraction.
\textbf{Hierarchical CNNs:} Hierarchical ideas have been introduced into deep neural networks in previous work \cite{tousch2012semantic}. Deng \textit{et al.} \cite{deng2014large} replaced the flat softmax classification layer with a probabilistic graphical model that embeds given relationships between labels. Yan \textit{et al.} \cite{yan2015hd} proposed a hierarchical deep CNN, where easily distinguished classes are predicted in higher layers while visually similar classes are predicted in lower layers. Murdock \textit{et al.} \cite{murdock2015blockout} introduced blockout layers to learn the cluster membership of each output node of a fully connected layer; the weight matrix of a blockout layer can be learned via back-propagation. These methods modify the global neural network to learn feature representations by embedding clustering information, and local information is introduced either by multi-task learning or by weight matrix restriction. However, they still rely on one global model to make predictions for all of the class labels, and the pairwise label correlations are not explored separately.
\textbf{Hierarchical multi-label classification:} The proposed matcher is related to the Hierarchical Multi-label Classification (HMC) problem, where each sample has more than one label and all of these labels are organized hierarchically in a tree or Directed Acyclic Graph (DAG) \cite{silla2011survey, Wei2014, zhang2014fully}. Hierarchical information in tree and DAG structures is used to improve classification performance \cite{valentini2011true,Zhang201789}. In visual recognition, however, each sample has only one label. If a meta-class label hierarchy is built as in an HMC problem, the matching errors at high-level nodes propagate to the matching at low-level nodes. Another approach is to build a local model for each node separately and combine the matching results of all local models, but this leads to a heavy computational burden that depends on the size of the hierarchy. Thus, this paper builds chains of local models after one global model, which possesses the merits of both global and local models.
\textbf{Classifier chains:} The classifier chains approach for multi-label classification \cite{read2011classifier} is also related to the proposed work; similarly, the proposed matcher uses the current matching result to select the next local model. The major difference between Read \textit{et al.} \cite{read2011classifier} and LCC-CNN lies in how a local classifier is used: in multi-label classification, the classifiers are learned to predict multiple labels, whereas in the proposed matcher, local classifiers are applied to update a single matching label.
\section{Signature}
\label{sec3}
To make the matching process separate from network computation, a signature is used to store the image information. The signature of each image $\mathbb{S}$ has two components: global model component $\mathbb{S}^{G}$ and local model component $\mathbb{S}^{L}$. The global model component is extracted from a global model built over all the class labels while the local component is built based on label pairs.
\subsection{Global model component: $\mathbb{S}^{G}$}
Given a visual recognition dataset \mbox{$\mathcal{S}=\{(x_i,y_i)\}_{i=1}^N$}, let $x_i$ and $y_i$ represent the $i^{th}$ sample and the corresponding class label, respectively. The label set is denoted by \mbox{$\mathcal{C}=\{c_1,c_2,...,c_l\}$}, so $y_i\in \mathcal{C}$.
First, the global model is defined as $g(x_i)$ and the global matching vector of each sample is defined as $V_i=\{v_{i1},v_{i2},...,v_{il}\}$ with size $1 \times l$. Each value $v_{ij}$ represents the probability of assigning label $c_j$ to the $i^{th}$ sample.
Different global models create global model features of different sizes and rely on different methods to obtain the global matching vector \cite{zhang2015icb, zhang2018attribute}. For a non-patch-based CNN model, the feature size is the same as the label set, $1 \times l$, and the softmax function is applied to obtain the global matching vector from the signature. For the patch-based CNN model in the UR2D system, the global model feature contains two parts: a feature matrix and an occlusion encoding. The feature size is $8 \times 512 + 8$ based on the DPRFS signature \cite{dou2015pose, xiang2017ijcb,zhang2018icb}, and cosine similarity is applied to compute the global matching vector.
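For the patch-based DPRFS signature, the cosine-similarity step can be sketched as follows. The exact way UR2D combines per-patch scores and occlusion flags is not specified here, so the averaging rule and the flag convention (0 = visible) in this sketch are assumptions for illustration, and the function name is hypothetical:

```python
import numpy as np

def patch_cosine_score(feat_p, occ_p, feat_g, occ_g):
    """Hypothetical sketch: average cosine similarity over patches that
    are visible (occlusion flag 0) in both probe and gallery signatures.
    Shapes: feat (8, 512), occ (8,)."""
    visible = (occ_p == 0) & (occ_g == 0)
    if not visible.any():
        return 0.0                       # no patch visible in both images
    a, b = feat_p[visible], feat_g[visible]
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12)
    return float(cos.mean())

# A signature compared with itself (all patches visible) scores ~1.
feat = np.random.randn(8, 512)
vis = np.zeros(8)
score = patch_cosine_score(feat, vis, feat, vis)
```

Repeating this against every gallery signature yields one entry of the global matching vector per gallery identity.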
From $V_i$, the global matching result $h_i \in \mathcal{C}$ is obtained easily. All the global matching results of the dataset are represented by \mbox{$H=\{h_1,h_2,...,h_N\}$}; thus,
\begin{equation}\label{e1}
V_i=g(x_i), \\
\end{equation}
\begin{equation}\label{e2}
h_i = \mathrm{argmax}_{c_j \in \mathcal{C}}\,(v_{ij}).
\end{equation}
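For a non-patch-based global model, (\ref{e1}) and (\ref{e2}) amount to a softmax over the network output followed by an argmax. A minimal Python sketch (function names are illustrative):

```python
import numpy as np

def softmax(z):
    """Map raw network outputs to probabilities that sum to 1."""
    z = np.asarray(z, dtype=float)
    z = z - z.max()                    # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def global_match(logits, labels):
    """Eqs. (e1)-(e2): matching vector V_i from the global model's
    output, then the matched label h_i = argmax_j v_ij."""
    v = softmax(logits)
    return v, labels[int(np.argmax(v))]

# The label with the largest logit receives the largest probability.
v, h = global_match([2.0, 0.5, 0.1], ["cat", "dog", "car"])
```

The matching vector `v` doubles as the sample's visual feature vector used to build the similarity matrix in the next subsection.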
\subsection{Local model component: $\mathbb{S}^{L}$}
The goal of the proposed matcher is to improve the global matching results with local models built on a set of label pairs. Let $\mathcal{P} = \{ p_{ij} \}$ and $\mathcal{F} = \{ f_{ij}(x) \}$ represent the label pair set and the local binary model set, respectively, where $p_{ij}$ and $f_{ij}(x)$ represent the label pair and the local model between label $c_i$ and label $c_j$, respectively. For each sample, the local matching vector for $p_{ij}$ is stored as $B_{ij}=\{b_{ij}^{1},b_{ij}^{2}\}$ in the local model component. Thus, the size of the local model signature component $\mathbb{S}^{L}$ is $2 \times |\mathcal{P}|$.
First, local models are built between visually less discriminative labels. The assumption behind this is that the global model is more likely to confuse visually similar labels than visually discriminative ones. Taking Figure~\ref{f1} as an example, an animal image is more likely to be mismatched as another animal than as a ``car''. Given a global model, the global matching vector of each sample in the validation set is used as its visual feature vector. Based on this feature representation, the averaged feature vector of each label can be computed. Then, the distance between the averaged feature vectors of two different labels can be used to represent their label similarity. Let \mbox{$\mathcal{\hat{S}}=\{(\hat{x}_i,\hat{y}_i)\}_{i=1}^{\hat{N}}$} represent the validation set; the visual feature vector of each sample is computed from the global model as $\hat{V}_i=g(\hat{x}_i)$. Let \mbox{$\hat{U}_i=\{\hat{u}_{i1},\hat{u}_{i2},...,\hat{u}_{il}\}$} represent the mean feature vector of the $i^{th}$ label. Then, a similarity matrix based on the distance between each pair of mean vectors can be defined as $W = \{w_{ij}\}^{l \times l}$, where $w_{ij}$ represents the averaged squared Euclidean distance between the mean vectors of $c_i$ and $c_j$. Thus,
\begin{equation}\label{sm1}
w_{ij}=\frac{1}{l}\sum\limits_{k=1}^{l}(\hat{U}_{ik} - \hat{U}_{jk})^2. \\
\end{equation}
To convert the elements of $W$ to a uniform scale, each element is normalized by dividing by the maximum element of $W$. Then, each value is subtracted from 1 to represent similarity instead of distance. Let $Q = \{q_{ij}\}^{l \times l}$ represent the normalized similarity matrix, so that,
\begin{equation}\label{sm2}
q_{ij}= 1-\frac{w_{i,j}}{\max(W)}.
\end{equation}
For each label, the most similar labels can be obtained based on the similarity matrix and a pre-defined threshold $t_s$: if $q_{ij}$ is larger than $t_s$, a label pair $p_{ij} = \{c_i,c_j\}$ is added to the label pair set $\mathcal{P}$. In this way, label pair sets of different sizes can be obtained for different values of $t_s$. The pseudo-code of LCC-CNN trained with the similarity matrix is summarized in Algorithm \ref{a1}.
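Equations (\ref{sm1}) and (\ref{sm2}) together with the thresholding step can be sketched as follows (the training of each local model is omitted, and names are illustrative). Each row of `mean_feats` is the mean global matching vector $\hat{U}_i$ of one label:

```python
import numpy as np

def similar_label_pairs(mean_feats, t_s):
    """Select label pairs whose normalized similarity q_ij exceeds t_s.
    Assumes at least two distinct mean vectors (so max(W) > 0)."""
    U = np.asarray(mean_feats, dtype=float)
    l = U.shape[0]
    # Eq. (sm1): averaged squared distance between mean vectors
    W = ((U[:, None, :] - U[None, :, :]) ** 2).mean(axis=2)
    # Eq. (sm2): normalize to [0, 1] and flip distance into similarity
    Q = 1.0 - W / W.max()
    return {(i, j) for i in range(l) for j in range(i + 1, l)
            if Q[i, j] > t_s}

# Labels 0 and 1 have similar matching behavior; label 2 does not.
U = [[0.8, 0.2, 0.0],
     [0.7, 0.3, 0.0],
     [0.0, 0.1, 0.9]]
pairs = similar_label_pairs(U, t_s=0.6)
```

Only the (0, 1) pair survives the threshold here, so only one binary local model would be trained.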
\begin{algorithm}
\caption{Local models for image recognition: Building with similarity matrix}
\label{a1}
\KwIn{global model $g(x)$, validation set \mbox{$\mathcal{\hat{S}}=\{(\hat{x}_i,\hat{y}_i)\}_{i=1}^{\hat{N}}$} and threshold $t_s$}
\KwOut{label pair set $\mathcal{P}$ and local binary model set $\mathcal{F} = \{ f_{ij}(x) \}$}
Compute global matching vector of each sample in validation set $\hat{V}_i=g(\hat{x}_i)$ by (\ref{e1})\\
Compute mean sample vectors $\hat{U}_i$ \\
Compute similarity matrix $W$ by (\ref{sm1})\\
Compute normalized similarity matrix $Q$ by (\ref{sm2})\\
\For { $i \leftarrow 1$ \textbf{to} $l$ } {
\For { $j \leftarrow 1$ \textbf{to} $l$ } {
\If { $q_{ij} > t_s$ } {
$\mathcal{P} = \mathcal{P} \cup \{p_{ij}\}$ \\
train local model $f_{ij}(x)$ \\
$\mathcal{F} = \mathcal{F} \cup \{ f_{ij}(x) \}$ \\
}
}
}
\Return{$\mathcal{P}$, $\mathcal{F}$} ;
\end{algorithm}
Similarity correlation is not a direct way to measure the matching performance of the global model, especially in face recognition, where the differences between identities are much smaller than between general object classes. Hence, the confusion matrix of the validation set is introduced to create the label pair set. Based on the label set $\hat{Y}$ and the global matching set $\hat{H}$ of the validation set, the confusion matrix $Z = \{z_{ij}\}$ can be computed easily, where $z_{ij}$ is the number of samples with true label $c_i$ ($\hat{y} = c_i$) and matching label $c_j$ ($\hat{h} = c_j$). The confusion matrix directly gives pairwise information about the performance of the global model, which matches the motivation of the proposed matcher naturally. To obtain a common scale, each element of $Z$ is normalized by the number of samples with true label $c_i$; the normalized values form the matrix $R = \{r_{ij}\}$, where
\begin{equation}\label{cm1}
r_{ij}= \frac{z_{ij}}{\sum_{k=1}^{l} z_{ik}}.
\end{equation}
If $r_{ij}$ is larger than a pre-defined threshold $t_c$, many samples with true label $c_i$ are predicted as $c_j$. Labels $c_i$ and $c_j$ are therefore ambiguous, and a local model is required for further evaluation. To make the label pair set robust and to avoid over-fitting, the direction information is ignored in each label pair, so $p_{ij} = p_{ji}$. The pseudo-code of LCC-CNN trained with the confusion matrix is summarized in Algorithm \ref{a2}, and the pseudo-code of signature generation for a given image in Algorithm \ref{a3}.
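The confusion-matrix variant can be sketched similarly; the function name and the integer-label encoding are assumptions for illustration. Note how the direction of each pair is discarded, matching $p_{ij} = p_{ji}$:

```python
import numpy as np

def label_pairs_from_confusion(y_true, y_pred, l, t_c):
    """Select ambiguous label pairs from the validation confusion matrix.

    y_true, y_pred : integer labels of the validation samples (true / global match)
    l              : number of labels
    t_c            : confusion-rate threshold
    Returns the row-normalized confusion matrix R and the unordered pair set.
    """
    Z = np.zeros((l, l))
    for t, p in zip(y_true, y_pred):
        Z[t, p] += 1
    # r_ij: fraction of samples of true label i matched as label j (Eq. cm1);
    # clip avoids division by zero for labels absent from the validation set
    R = Z / Z.sum(axis=1, keepdims=True).clip(min=1)
    pairs = set()
    for i in range(l):
        for j in range(l):
            if i != j and R[i, j] > t_c:
                pairs.add((min(i, j), max(i, j)))  # direction ignored: p_ij = p_ji
    return R, pairs
```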
The procedure for building a label pair set in the proposed matcher is depicted in Figure \ref{f2}. If the global and local models use the same network structure, the weights of the global model are used to fine-tune the local models, reducing the computational cost of building them. Otherwise, state-of-the-art pre-trained weights are used for fine-tuning. Figure~\ref{h1} depicts the normalized similarity matrix and the normalized confusion matrix on the CIFAR-10 dataset. Figure~\ref{sm_ex1} shows the top-3 label pairs extracted from the UHDB31 dataset \cite{ha2017uhdb31} based on the VGG-Face network. Note that if many label pairs share the same label, a multi-class local model could be built instead of a set of binary local models. To limit the model complexity, chains of binary local models are built instead; the matching result of one local model directs the selection of the next.
\begin{algorithm}
\caption{Local models for face recognition: Building with confusion matrix}
\label{a2}
\KwIn{global model $g(x)$, validation set \mbox{$\mathcal{\hat{S}}=\{(\hat{x}_i,\hat{y}_i)\}_{i=1}^{\hat{N}}$} and threshold $t_c$}
\KwOut{label pair set $\mathcal{P}$ and local binary model set $\mathcal{F} = \{ f_{ij}(x) \}$}
Compute global matching vector of each sample in validation set $\hat{V}_i=g(\hat{x}_i)$ by (\ref{e1})\\
Compute global matching result of each sample $\hat{h}_i= \arg\max(\hat{V}_i)$ by (\ref{e2})\\
Compute confusion matrix of validation set $Z = \{z_{ij}\}$ \\
Compute normalized confusion matrix $R = \{r_{ij}\}$ by (\ref{cm1}) \\
\For { $i \leftarrow 1$ \textbf{to} $l$ } {
\For { $j \leftarrow 1$ \textbf{to} $l$ } {
\If { $r_{ij} > t_c$ } {
$\mathcal{P} = \mathcal{P} \cup \{p_{ij}\}$ \\
train local model $f_{ij}(x)$ \\
$\mathcal{F} = \mathcal{F} \cup \{ f_{ij}(x) \}$ \\
}
}
}
\Return{$\mathcal{P}$, $\mathcal{F}$} ;
\end{algorithm}
\begin{algorithm}
\caption{Signature $\mathbb{S} = \{\mathbb{S}^{G}, \mathbb{S}^{L}\}$}
\label{a3}
\KwIn{input image $\hat{x}$, global model $g(x)$, label pair set $\mathcal{P} $ and local binary model set $\mathcal{F}$}
\KwOut{Signature $\mathbb{S} = \{\mathbb{S}^{G}, \mathbb{S}^{L}\}$}
Compute global matching vector of $\hat{V}=g(\hat{x})$\\
$\mathbb{S}^{G} = \mathbb{S}^{G} \cup \{ \hat{V} \}$ \\
\For {$p_{ij} \in \mathcal{P}$} {
compute local matching vector $\hat{B}_{ij} = f_{ij}(\hat{x})$ \\
$\mathbb{S}^{L} = \mathbb{S}^{L} \cup \{ \hat{B}_{ij} \}$ \\
}
\Return{$\mathbb{S} = \{\mathbb{S}^{G}, \mathbb{S}^{L}\}$} ;
\end{algorithm}
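Algorithm \ref{a3} can be condensed into a short sketch; the dictionary layout of the signature and the callable-model interface are illustrative assumptions:

```python
def compute_signature(x, g, local_models):
    """Build the signature S = {S_G, S_L} for one image (Algorithm 3 sketch).

    x            : input image, in whatever form the models accept
    g            : global model, returns a global matching vector
    local_models : dict mapping a label pair (i, j) to its local model f_ij
    """
    S_G = [g(x)]                                            # global matching vector
    S_L = {pair: f(x) for pair, f in local_models.items()}  # local matching vectors
    return {"global": S_G, "local": S_L}
```

Every local model in $\mathcal{F}$ is evaluated once per image, so the signature grows linearly with the size of the label pair set; this is why the thresholds $t_s$ and $t_c$ also control matching cost.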
\begin{figure*}
\centering
\begin{center}
\includegraphics[width=1\linewidth]{pipeline21}
\end{center}
\caption{The workflow of validation-based local model generation.}
\label{f2}
\end{figure*}
\begin{figure*}[htp]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{h12}
\caption{ }
\end{subfigure} %
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{h22}
\caption{ }
\end{subfigure}
\caption{The score matrices generated on the CIFAR-10 dataset. (a) Normalized similarity matrix (b) Normalized confusion matrix. It can be observed from (a) that the three most similar label pairs are $\{$``cat'', ``dog'' $\}$ (0.80), $\{$``cat'', ``bird'' $\}$ (0.77) and $\{$``deer'', ``bird'' $\}$ (0.74). It can be observed from (b) that the three most error-prone label pairs are $\{$``cat'', ``dog'' $\}$(0.27), $\{$``truck'', ``car'' $\}$ (0.14) and $\{$``cat'', ``frog'' $\}$ (0.13).}
\label{h1}
\end{figure*}
\begin{figure}[htp]
\centering
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=1\linewidth]{sm_top3}
\caption{ }
\end{subfigure} %
\qquad
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=1\linewidth]{cm_top3}
\caption{ }
\end{subfigure}
\caption{The top-3 label pairs generated on the UHDB31 dataset. (a) From the similarity matrix; the scores from top to bottom are 0.716, 0.712 and 0.703, respectively. (b) From the confusion matrix; the scores from top to bottom are 0.302, 0.286 and 0.238, respectively. It can be observed that the labels in each pair share visual similarities that can be confused by the global model.}
\label{sm_ex1}
\end{figure}
\section{Hierarchical matcher}
\label{sec4}
Based on the matching information of both global and local models in the signature, a top-down strategy is used to obtain the final matching for each testing sample. Starting from the global matching, each sample goes through a chain of local models until none of the related local models gives a better matching. Initially, the current matching label $o$ is set to the global matching result. Then, each model link is built in two steps. First, all label pairs containing the current label $o$ in the label pair set $\mathcal{P}$ are added to a current matching label set $\mathcal{P}'$. Second, the matching label with the largest matching value in the signature is selected as the next current label. Taking face recognition as an example, the pseudo-code of LCC-CNN matching is summarized in Algorithm \ref{a4}.
The motivation is that if there is only one matching label and the corresponding local model gives a higher matching value to this label than to the current label, the testing sample is believed to be mismatched and the current label is updated to the matching label. If there are multiple matching labels, the matcher finds the best possible next current label by comparing their matching values. In this process, the next current label does not necessarily have to be the correct label: the local model chain is designed to overcome error propagation in the top-down matching. If one local model in the chain gives a wrong matching, the following local models still have a chance to correct the error, as long as the related label pairs have been added to the label pair set.
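The chain traversal can be sketched as follows. Here each binary local model is represented by a single probability for the larger-indexed label of its pair; this scalar encoding and the cycle guard are simplifying assumptions added for the sketch, not part of Algorithm \ref{a4}:

```python
def rival_score(scores, o, d):
    """Probability the local binary model assigns to label d in pair {o, d}.

    scores maps each sorted pair (i, j) to P(label == j) for the probe.
    """
    i, j = (o, d) if o < d else (d, o)
    p = scores[(i, j)]
    return p if d == j else 1.0 - p

def hierarchical_match(global_label, scores, pairs):
    """Top-down matching over a chain of local models (Algorithm 4 sketch)."""
    o, seen = global_label, {global_label}
    while True:
        # labels paired with the current label o in the label pair set
        candidates = {d for p in pairs if o in p for d in p if d != o}
        best, best_score = o, 0.0
        for d in candidates:
            s = rival_score(scores, o, d)
            if s > 0.5 and s > best_score:  # local model prefers d over o
                best, best_score = d, s
        if best == o:
            return o
        if best in seen:                    # cycle guard, an addition for safety
            return best
        seen.add(best)
        o = best
```

For example, if the global model outputs label 0 but the local models for pairs $\{0,1\}$ and $\{1,2\}$ each prefer the rival label, the chain walks $0 \rightarrow 1 \rightarrow 2$ before stopping.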
\begin{algorithm}
\caption{LCC-CNN: Hierarchical matcher}
\label{a4}
\KwIn{Probe image signature $\mathbb{S}^p = \{\mathbb{S}^{Gp}, \mathbb{S}^{Lp}\}$, Gallery image signature list $\{\mathbb{S}^g = \{\mathbb{S}^{Gg}, \mathbb{S}^{Lg}\}\}$ and label pair set $\mathcal{P}$}
\KwOut{ Final matching of probe image $\bar{o}$}
Compute global matching result $ \bar{h}$ based on $\mathbb{S}^{Gp}$ and $\{\mathbb{S}^{Gg}\}$\\
Set current matching label $o = \bar{h}$ \\
\While { True }{
Set current matching label set $\mathcal{P}' = \emptyset$ \\
Add every pair $p_{od} \in \mathcal{P}$ to $\mathcal{P}'$ \\
\If {$ \mathcal{P}' \neq \emptyset $}{
Set current matching value $a = 0$ \\
Set next current matching label $o_n = o$\\
\For { $d $ \bf{in} $ \mathcal{P}' $ }{
\If { $ \arg\max(B_{od}) \neq o$ and $\max(B_{od}) > a$ } {
$ o_n = \arg\max(B_{od})$ \\
$ a = \max(B_{od})$ \\
}
}
\If { $o_n == o$ } {
\textbf{break}
}
\Else
{
$o = o_n$
}
}
\Else
{
\textbf{break}
}
}
Set $\bar{o} = o$
\Return{$\bar{o}$ };
\end{algorithm}
\section{Experiments}
\label{sec5}
This section first presents the evaluation of the proposed matcher on different types of recognition tasks: image recognition, character recognition and face recognition. Then, the proposed matcher is evaluated on the UR2D system. Different baseline methods are used to build the global and local models. It is assumed that the global model converges when the accuracy on the validation set is stable and begins to drop. The local models are built with the samples of the two corresponding labels from the training set. These local models usually converge faster than the global model, and their weights at 3,000 iterations are used during matching. To observe the overall performance change, the proposed LCC-CNN matcher based on the Similarity Matrix (LCC-CNN-SM) and the Confusion Matrix (LCC-CNN-CM) is tested using different threshold sets and different maximum iteration stages. The parameter settings are selected based on performance on the validation set. All networks are trained with Caffe \cite{jia2014caffe}.
\subsection{Image recognition}
Two image recognition datasets are evaluated: the CIFAR-10 dataset \cite{krizhevsky2009learning} and the Flower dataset \cite{nilsback2006visual}. The CIFAR-10 dataset has 10 classes, each containing 6,000 images of size $32 \times 32$; for each class, 600 images are randomly selected for evaluation. The Flower dataset has 17 classes, each containing 80 images of different sizes. Both datasets are divided into a training set, a validation set and a testing set, which contain 50\%, 25\% and 25\% of the images per class, respectively. GoogLeNet and VGG are chosen as the baseline methods to build the models. To explore the performance of the proposed matcher under different networks and settings, GoogLeNet is trained from scratch, while the VGG model is fine-tuned from the pre-trained VGG weights on ImageNet. On the CIFAR-10 dataset with GoogLeNet, the settings are $t_s=\{0.40, 0.50, 0.60, 0.70\}$ and $t_c=\{0.01, 0.05, 0.10, 0.15\}$ with global models at different iteration stages from 6,000 to 11,000. On the CIFAR-10 dataset with VGG, the settings are $t_s=\{0.30, 0.40, 0.50, 0.60\}$ and $t_c=\{0.03, 0.05, 0.10, 0.15\}$ with global models at different iteration stages from 5,000 to 9,000. On the Flower dataset with GoogLeNet, the settings are $t_s=\{0.40, 0.50, 0.60, 0.70\}$ and $t_c=\{0.01, 0.05, 0.10, 0.20\}$ with global models at different iteration stages from 1,000 to 5,000. On the Flower dataset with VGG, the settings are $t_s=\{0.35, 0.40, 0.45, 0.50\}$ and $t_c=\{0.01, 0.02, 0.03, 0.04\}$ with global models at different iteration stages from 1,000 to 5,000. The results are shown in Figures~\ref{CIF_f1}-\ref{FLW_f2}. Tables~\ref{CIF_t1}-\ref{FLW_t2} depict the sizes of the local model sets under different iterations and thresholds.
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{dnn_f_5}
\caption{ }
\end{center}
\end{subfigure}%
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{dnn_f_6}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of GoogLeNet computed on the CIFAR-10 dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{CIF_f1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{CIF_3}
\caption{ }
\end{center}
\end{subfigure}%
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{CIF_4}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of VGG computed on the CIFAR-10 dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{CIF_f2}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{dnn_f_1}
\caption{ }
\end{center}
\end{subfigure}%
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{dnn_f_2}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of GoogLeNet computed on the Flower dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{FLW_f1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{FLW_3}
\caption{ }
\end{center}
\end{subfigure}%
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{FLW_4}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of VGG computed on the Flower dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{FLW_f2}
\end{figure}
\begin{table}
\begin{center}
\caption{ The size of local model set of GoogLeNet on the CIFAR-10 dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.40 & SM-0.50 &SM-0.60 &SM-0.70 & CM-0.01 & CM-0.05 &CM-0.10 &CM-0.15 \\ \hline
6,000&18 &14 &7 &4 & 44 &23 &11&4 \\ \hline
7,000 &18 &14 &7 &4 &37 &20&9&5\\ \hline
8,000 &18 &14 &7 &4 &36 &19&10&4\\ \hline
9,000&18 &14 &7 &4 &39 &18&7&2\\ \hline
10,000&18 &14 &7 &4 &36 &16&5&1\\ \hline
11,000 &18 &14 &7 &4 &36 &18&6&2\\ \hline
\end{tabular}}
\label{CIF_t1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ The size of local model set of VGG on the CIFAR-10 dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.30 & SM-0.40 &SM-0.50 &SM-0.60 & CM-0.03 & CM-0.05 &CM-0.10 &CM-0.15 \\ \hline
5,000 &22 &11 &6 &1 &18 &13 &5 &2 \\ \hline
6,000 &23 &12 &6 &1 &22 &16 &5 &4\\ \hline
7,000 &23 &11 &6 &1 &21 &16 &4 &2\\ \hline
8,000 &22 &11 &6 &1 &19 &13 &4 &2\\ \hline
9,000 &18 &10 &6 &1 &21 &15 &3 &2\\ \hline
\end{tabular}}
\label{CIF_t2}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ The size of local model set of GoogLeNet on the Flower dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.40 & SM-0.50 &SM-0.60 &SM-0.70 & CM-0.01 & CM-0.05 &CM-0.10 &CM-0.20 \\ \hline
1,000&88 &60 &27 &7 & 41 &41 &16&6 \\ \hline
2,000 &88 &60 &27 &7 &33 &33&9&4\\ \hline
3,000&88 &60 &27 &7 &32 &32&11&4\\ \hline
4,000&88 &60 &27 &7 &33 &33&9&1\\ \hline
5,000&88 &60 &27 &7 &31 &31&9&2\\ \hline
\end{tabular}}
\label{FLW_t1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ The size of local model set of VGG on the Flower dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.35 & SM-0.40 &SM-0.45 &SM-0.50 & CM-0.01 & CM-0.02 &CM-0.03 &CM-0.04 \\ \hline
1,000 &34 &21 &11 &6 &8 &8 &8&8 \\ \hline
2,000 &37 &26 &12 &7 &10 &10 &10&10\\ \hline
3,000 &33 &23 &9 &6 &10 &10 &10&10\\ \hline
4,000 &35 &25 &9 &7 &10 &10 &10&10\\ \hline
5,000 &38 &23 &10 &6 &8 &8 &8&8\\ \hline
\end{tabular}}
\label{FLW_t2}
\end{center}
\end{table}
From Figure~\ref{CIF_f1} it can be observed that on the CIFAR-10 dataset, the GoogLeNet baseline model obtains its best performance at 10,000 iterations. With different sizes of label pair sets from the similarity matrix and the confusion matrix, the proposed matcher achieves better performance. The best performance of LCC-CNN-SM is obtained with $t_s = 0.40$ and 18 label pairs. LCC-CNN-CM with $t_c = 0.01$ and $t_c = 0.05$ achieves similar top performance with label pair sets of different sizes, 36 and 16 respectively. This also indicates that the proposed matcher is robust to over-fitting: it can adaptively select useful label pairs from the label pair set. From Figure~\ref{CIF_f2}, similar improvements can be observed, even though the performance is more sensitive to the iteration stage. From Table~\ref{CIF_t1} and Table~\ref{CIF_t2} it can be observed that the size of the label pair set in LCC-CNN-SM is not sensitive to the iteration stage; it is mainly affected by the threshold value $t_s$. In LCC-CNN-CM, on the other hand, the size of the label pair set is influenced by both the iteration number and the threshold $t_c$. This is because the similarity matrix is based on an averaged visual feature vector of each class label, and the similarity correlations are stable across iteration stages, whereas the confusion matrix is very sensitive to an increase in the iteration number. With fixed $t_c$, the closer the model is to converging, the fewer label pairs can be extracted. In Figures~\ref{FLW_f1}-\ref{FLW_f2} and Tables~\ref{FLW_t1}-\ref{FLW_t2}, similar results can be observed on the Flower dataset: both LCC-CNN-SM and LCC-CNN-CM achieve better performance at different training stages. Note that under the VGG baseline model, LCC-CNN-CM achieves the same performance under different threshold values of $t_c$, because all the non-diagonal elements of the confusion matrix have the same value.
\subsection{Character recognition}
One character recognition dataset is evaluated: the EnglishFnt dataset from the Chars74K collection \cite{deCampos09}. It contains 62 classes (0-9, A-Z, a-z), each with 1,016 images from different fonts. The dataset is divided into a training set, a validation set and a testing set, which contain 50\%, 25\% and 25\% of the images per class, respectively. As with the image recognition datasets, GoogLeNet and VGG are chosen as baseline methods to build the models. For GoogLeNet, the settings are $t_s=\{0.70, 0.80, 0.90, 0.95\}$ and $t_c=\{0.05, 0.10, 0.20, 0.30\}$ with global models at different iteration stages from 2,000 to 6,000. For VGG, the settings are $t_s=\{0.65, 0.68, 0.70, 0.73\}$ and $t_c=\{0.003, 0.005, 0.01, 0.02\}$ with global models at different iteration stages from 3,000 to 7,000. The results are shown in Figures~\ref{fnt_f1} and \ref{fnt_f2}. Note that the lower the threshold, the more local models are extracted. Tables~\ref{fnt_t1} and \ref{fnt_t2} depict the sizes of the local model sets under different iterations and thresholds.
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Fnt_1}
\caption{ }
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Fnt_2}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of GoogLeNet computed on the EnglishFnt dataset. (a) LCC-CNN-SM (b) LCC-CNN-CM.}
\label{fnt_f1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Fnt_3}
\caption{ }
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Fnt_4}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of VGG computed on the EnglishFnt dataset. (a) LCC-CNN-SM (b) LCC-CNN-CM.}
\label{fnt_f2}
\end{figure}
\begin{table}
\begin{center}
\caption{ The size of local model set of GoogLeNet on the EnglishFnt dataset.}
\scalebox{0.7}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.70 & SM-0.80 &SM-0.90 &SM-0.95 & CM-0.05 & CM-0.10 &CM-0.20 &CM-0.30 \\ \hline
2000&100 &19 &10 &7 &43&17&12&11 \\ \hline
3000&100 &19 &10 &7 &25&14&10&8\\ \hline
4000&100 &19 &10 &7 &23&14&9&6 \\ \hline
5000&100 &19 &10 &7 &23&12&8&6 \\ \hline
6000&100 &19 &10 &7 &24&15&11&7 \\ \hline
\end{tabular}}
\label{fnt_t1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ The size of local model set of VGG on the EnglishFnt dataset.}
\scalebox{0.7}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.65 & SM-0.68 &SM-0.70 &SM-0.73 & CM-0.003 & CM-0.005 &CM-0.01 &CM-0.02 \\ \hline
3000&148 &88 &57 &35 &277&126&76&39 \\ \hline
4000&152 &90 &57 &31 &263&110&63&37\\ \hline
5000&141 &71 &48 &25 &231&101&65&34 \\ \hline
6000&205 &112 &69 &33 &209&97&52&33 \\ \hline
7000&244 &140 &81 &41 &201&93&58&33 \\ \hline
\end{tabular}}
\label{fnt_t2}
\end{center}
\end{table}
From the results, improvements consistent with those on the previous datasets can be observed. Both LCC-CNN-SM and LCC-CNN-CM improve the performance at different training stages. The sizes of the label pair sets from LCC-CNN-SM and LCC-CNN-CM are also consistent with the previous analysis.
\subsection{Face recognition}
Two face recognition datasets are tested: the UHDB31 dataset \cite{ha2017uhdb31} and the CASIA-WebFace dataset \cite{yi2014learning}. The UHDB31 dataset contains 29,106 color face images of 77 subjects with 21 poses and 18 illuminations. To evaluate the performance of cross-pose face recognition, images from the 7 near-frontal poses are used for training, and the images from the remaining 14 poses are split equally for validation and testing. The CASIA-WebFace dataset contains 494,414 wild face images of 10,575 subjects. To build a face identification subset, 100 subjects, each with more than 100 images, are randomly selected. The subset is then divided into a training set, a validation set and a testing set, which contain 50\%, 25\% and 25\% of the images per subject, respectively. The pre-trained VGG-Face model \cite{parkhi2015deep} and the pre-trained ResNet-Face model \cite{yi2014learning} are used as baseline methods to fine-tune the global model. Since the pre-trained ResNet-Face model is trained on CASIA-WebFace, the weights of the pre-trained ResNet101 model from ImageNet \cite{He_2016_CVPR} are used on the CASIA-WebFace dataset instead. The patch size is set to 24 for VGG-Face and 12 for ResNet-Face. On the UHDB31 dataset, LCC-CNN-SM and LCC-CNN-CM are tested using different threshold sets (\mbox{$t_s=\{0.50, 0.60, 0.70, 0.80\}$} and \mbox{$t_c=\{0.05, 0.10, 0.15, 0.20\}$}) with global models at different iteration stages from 4,000 to 8,000 for both baselines. On the CASIA-WebFace dataset, LCC-CNN-SM and LCC-CNN-CM are tested using threshold sets $t_s=\{0.55, 0.57, 0.60, 0.62\}$ and $t_c=\{0.01, 0.03, 0.05, 0.07\}$ with the VGG-Face model at iteration stages from 5,000 to 9,000, and threshold sets $t_s=\{0.67, 0.70, 0.72, 0.75\}$ and \mbox{$t_c=\{0.01, 0.03, 0.05, 0.07\}$} with the ResNet-Face model at iteration stages from 23,000 to 27,000.
The results are shown in Figures~\ref{UH_f1}-\ref{Web_f2} and Tables~\ref{UH_t1}-\ref{Web_t2}.
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{UH_1}
\caption{ }
\end{center}
\end{subfigure}%
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{UH_2}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of VGG-Face computed on the UHDB31 dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{UH_f1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{UH_3}
\caption{ }
\end{center}
\end{subfigure}%
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{UH_4}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of ResNet-Face computed on the UHDB31 dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{UH_f2}
\end{figure}
\begin{table}
\begin{center}
\caption{ The size of local model set of VGG-Face on the UHDB31 dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.50 & SM-0.60 &SM-0.70 &SM-0.80 & CM-0.05 & CM-0.10 &CM-0.15 &CM-0.20 \\ \hline
5,000&105 &62 &23 &10 &127 &47 &25 &6 \\ \hline
6,000&75 &49 &21 &9 &95 &36 &17 &6 \\ \hline
7,000&70 &47 &20 &8 &76 &36 &13 &6 \\ \hline
8,000&71 &41 &17 &8 &88 &36 &14 &6 \\ \hline
9,000&70 &47 &18 &7 &88 &38 &14 &4 \\ \hline
\end{tabular}}
\label{UH_t1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ The size of local model set of ResNet-Face computed on the UHDB31 dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.50 & SM-0.60 &SM-0.70 &SM-0.80 & CM-0.05 & CM-0.10 &CM-0.15 &CM-0.20 \\ \hline
5,000&151 &55 &19 &10 &177 &66 &28 &11 \\ \hline
6,000&134 &48 &19 &14 &160 &56 &28 &11 \\ \hline
7,000&96 &32 &11 &9 &117 &39 &13 &6 \\ \hline
8,000&117 &47 &9 &8 &139 &53 &16 &7 \\ \hline
9,000&132 &59 &19 &14 &153 &66 &28 &10 \\ \hline
\end{tabular}}
\label{UH_t2}
\end{center}
\end{table}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Web_1}
\caption{ }
\end{center}
\end{subfigure}%
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Web_2}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of VGG-Face computed on the CASIA-WebFace dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{Web_f1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Web_3}
\caption{ }
\end{center}
\end{subfigure}%
\begin{subfigure}[b]{0.45\textwidth}
\begin{center}
\includegraphics[width=1\linewidth]{Web_4}
\caption{ }
\end{center}
\end{subfigure}
\caption{The performance of ResNet-Face computed on the CASIA-WebFace dataset. (a) LCC-CNN-SM. (b) LCC-CNN-CM.}
\label{Web_f2}
\end{figure}
\begin{table}
\begin{center}
\caption{ The size of local model set of VGG-Face on the CASIA-WebFace dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.55 & SM-0.57 &SM-0.60 &SM-0.62 & CM-0.01 & CM-0.03 &CM-0.05 &CM-0.07 \\ \hline
5,000&175 &108 &43 &14 &391 &60 &17 &4 \\ \hline
6,000&171 &94 &35 &17 &418 &62 &19 &8 \\ \hline
7,000&155 &87 &22 &8 &372 &63 &13 &4 \\ \hline
8,000&173 &108 &46 &23 &383 &48 &12 &4 \\ \hline
9,000&335 &198 &88 &46 &416 &57 &17 &10 \\ \hline
\end{tabular}}
\label{Web_t1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{ The size of local model set of ResNet-Face on the CASIA-WebFace dataset.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Iterations & SM-0.67 & SM-0.70 &SM-0.72 &SM-0.75 & CM-0.01 & CM-0.03 &CM-0.05 &CM-0.07 \\ \hline
23,000&229 &107 &59 &21 &601 &78 &12 &2 \\ \hline
24,000&218 &103 &57 &19 &605 &82 &11 &1 \\ \hline
25,000&235 &108 &61 &22 &605 &82 &14 &3 \\ \hline
26,000&232 &109 &61 &20 &593 &82 &15 &4 \\ \hline
27,000&230 &107 &59 &20 &598 &87 &14 &1 \\ \hline
\end{tabular}}
\label{Web_t2}
\end{center}
\end{table}
From Figure~\ref{UH_f1}, it can be observed that the VGG-Face global model achieves the best performance (77.2\%) at 8,000 iterations. Both LCC-CNN-SM and LCC-CNN-CM improve the performance at different training stages. LCC-CNN-SM and LCC-CNN-CM achieve the best performance of 79.1\% and 79.3\% when $t_s = 0.5$ and $t_c = 0.05$, respectively. From Figure~\ref{UH_f2}, it can be observed that the best performance of the ResNet-Face model (68.8\%) is lower than that of the VGG-Face model. Both the proposed models perform better than the baseline at different training stages. The best results of LCC-CNN-SM (71.3\%) and LCC-CNN-CM (71.5\%) are also achieved when $t_s = 0.5$ and $t_c = 0.05$, respectively. From Figure~\ref{Web_f1} and Figure~\ref{Web_f2}, similar improvements can be observed on both baselines. From Figure~\ref{Web_f1}, it can be observed that the VGG-Face global model achieves the best performance (90.7\%) at 8,000 iterations. Both LCC-CNN-SM and LCC-CNN-CM improve the performance at different training stages. LCC-CNN-SM and LCC-CNN-CM achieve the best performance of 90.9\% and 91.4\% when $t_s = 0.55$ and $t_c = 0.01$, respectively. From Figure~\ref{Web_f2}, it can be observed that the best performance of the ResNet-Face model (84.6\%) is lower than that of the VGG-Face model. This is because the pre-trained model is trained on ImageNet rather than a face dataset. Both the proposed models perform better than the baseline at different training stages. The best results of LCC-CNN-SM (84.9\%) and LCC-CNN-CM (85.0\%) are achieved when $t_s = 0.67$ and $t_c = 0.01$, respectively. From Tables~\ref{UH_t1}-\ref{Web_t2}, it can be observed that as $t_s$ and $t_c$ decrease, more label pairs are selected at different training stages.
Figure~\ref{f4} and Figure~\ref{f3} depict several matching examples of the proposed matcher on different datasets. From Figure~\ref{f4}, some confusions of the global model can be observed when there are similar backgrounds (the images in (a)), occlusions (the ``bee'' on the flower image in (d)), and multiple objects (the images in (c) and (d)). From Figure~\ref{f3}, it can be observed that the global model fails to distinguish visually similar faces, especially when they share facial attributes: ``bald head'' in (b), ``black skin'' in (c), and ``wavy hair'' in (d). In contrast, the local models explore more pairwise correlations between labels and can extract more locally discriminative features, especially for the related labels. The proposed matcher can also explore chains of similar labels, such as the flowers in Figure~\ref{f4}(d) and the characters in Figure~\ref{f4}(f). This information helps in understanding the pairwise correlations between different labels.
\begin{figure}
\centering
\begin{center}
\includegraphics[width=1\linewidth]{ex_image_all}
\end{center}
\caption{The matching examples of the proposed hierarchical matcher on image recognition and character recognition datasets. The text on the top of each image represents its label, where black, red and green indicate ground truth label, mistaken label and correct label, respectively.}
\label{f4}
\end{figure}
\begin{figure}
\centering
\begin{center}
\includegraphics[width=1\linewidth]{ex_face_all}
\end{center}
\caption{The matching examples of the proposed hierarchical matcher on face recognition datasets. The text on the top of each image represents its identity, where black, red and green indicate ground truth identity, mistaken identity and correct identity, respectively.}
\label{f3}
\end{figure}
\subsection{UR2D evaluation}
The UR2D system is evaluated in two face recognition scenarios: a constrained environment and an unconstrained environment. The datasets used for evaluation are the UHDB31 dataset \cite{ha2017uhdb31, ZHANG201828} and the IJB-A dataset \cite{klare2015pushing}. On the UHDB31 dataset, the same setting as in \cite{klare2015pushing} is followed. To exclude illumination changes, a subset with natural illumination is selected. To evaluate the performance of cross-pose face recognition, the frontal-pose (pose-11) face images are used as the gallery and the remaining images from 20 poses are used as probes. An example of face images from different poses of the same subject is depicted in Figure~\ref{uhdb31_ex}. The IJB-A dataset \cite{klare2015pushing} contains images and videos of 500 subjects captured in ``in the wild'' environments. This dataset merges images and frames and provides evaluations at the template level, where a template contains one or several images/frames of one subject. The IJB-A protocol splits galleries and probes into 10 splits. In this experiment, the same modification as in \cite{xiang2017ijcb} is followed for use in closed-set face recognition.
\begin{figure}
\centering
\begin{center}
\includegraphics[width=1\linewidth]{uhd31_214.png}
\end{center}
\caption{An example depicting the face images of different poses in the UHDB31 dataset.}
\label{uhdb31_ex}
\end{figure}
To create a validation set, synthetic images are generated for the UHDB31 dataset. Each gallery image is rotated, masked, and cropped to create 150 images. Then, half of them are used as a sub-gallery and the other half as a sub-probe. In the IJB-A dataset, sub-gallery and sub-probe sets are also created based on gallery images. The UR2D system with the Deep Pose Robust Face Signature (DPRFS) is used as the baseline method. Table~\ref{re-1-p-t1} and Table~\ref{re-1-p-t2} show the results on the two datasets. The parameters $t_s$ and $t_c$ are set to 0.6 and 0.1, learned from the sub-sets. Figure~\ref{re-1-p-f1} and Figure~\ref{re-1-p-f2} show the sensitivity of $t_s$ and $t_c$, respectively. The local models are learned based on the VGG-Face network, which provided better performance in the previous evaluation. Table~\ref{re-1-p-t3} shows the sizes of the local model sets.
\begin{table*}
\begin{center}
\caption{The Rank-1 performance of LCC-CNN computed on the UHDB31 dataset (\%). The methods are ordered as UR2D, LCC-CNN-SM and LCC-CNN-CM.}
\vspace{-7px}
\scalebox{0.7}{
\begin{tabular}{| c|c |c| c |c |c| c| c|}
\hline
\backslashbox{Pitch}{Yaw}
& -90\textdegree{} &-60\textdegree{} &-30\textdegree{} &0\textdegree{} & +30\textdegree{} &+60\textdegree{} &+90\textdegree{} \\
\hline
+30\textdegree{} &
\makecell{82,82,{\bf 83}} &
\makecell{{\bf 99},{\bf 99},{\bf 99}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{{\bf 99},{\bf 99},{\bf 99}} &
\makecell{99,{\bf 100},{\bf 100}} &
\makecell{{\bf 75},{\bf 75},{\bf 75}} \\
\hline
0\textdegree{} &
\makecell{{\bf 96},{\bf 96},{\bf 96}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}}&
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
- &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{{\bf 96},{\bf 96},{\bf 96}} \\
\hline
-30\textdegree{} &
\makecell{75,75,{\bf 76}} &
\makecell{{\bf 97},{\bf 97},{\bf 97}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{{\bf 100},{\bf 100},{\bf 100}} &
\makecell{96,{\bf 97},{\bf 97}} &
\makecell{{\bf 79},{\bf 79},{\bf 79}} \\
\hline
\end{tabular}}
\label{re-1-p-t1}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\caption{The Rank-1 performance of LCC-CNN computed on the IJB-A dataset (\%).}
\vspace{-7px}
\scalebox{0.6}{
\begin{tabular}{ l |c c c c c c c c c c c}
\hline
Methods &split-1 & split-2 &split-3 &split-4 &split-5 & split-6 & split-7 &split-8 & split-9 & split-10 & Average\\
\hline
UR2D &78.78&77.60& 77.94& 79.88& 78.44 & 80.57&81.78& 79.00& 75.94& 79.22& 78.92 \\
LCC-CNN-SM & 78.78 & 77.74 & 77.94 & 80.09&{\bf 78.51}& 80.74 &{\bf 82.09}& 79.17 & 76.13 & 79.22 & 79.04 \\
LCC-CNN-CM &{\bf 79.20} &{\bf 77.78}&{\bf 77.96}&{\bf 80.66}& 78.06&{\bf 80.77}& 81.39&{\bf 79.23}& {\bf 76.47}&{\bf 79.39}&{\bf 79.09} \\
\hline
\end{tabular}}
\label{re-1-p-t2}
\end{center}
\end{table*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{lcc-bar1.png}
\caption{ }
\label{fig:gull}
\end{subfigure}%
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{lcc-bar2.png}
\caption{ }
\label{fig:tiger}
\end{subfigure}
\caption{ The sensitivity of $t_s$ computed on LCC-CNN-SM. (a) UHDB31. (b) IJB-A.}\label{re-1-p-f1}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{lcc-bar3.png}
\caption{ }
\label{fig:gull2}
\end{subfigure}%
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{lcc-bar4.png}
\caption{ }
\label{fig:tiger2}
\end{subfigure}
\caption{ The sensitivity of $t_c$ computed on LCC-CNN-CM. (a) UHDB31. (b) IJB-A.}\label{re-1-p-f2}
\end{figure*}
\begin{table}
\begin{center}
\caption{The sizes of the local model sets based on UR2D.}
\scalebox{0.55}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline
Datasets &SM-0.4& SM-0.5 & SM-0.60 &SM-0.70 &SM-0.80 &CM-0.05 &CM-0.08 & CM-0.10 &CM-0.13 &CM-0.15 \\ \hline
UHDB-31&70&60 &45 &20 &10 &88 &50 &36 &12 &6 \\ \hline
IJB-A&170&120 &90 &35 &17 &319& 220&60 &17 &4 \\ \hline
\end{tabular}}
\label{re-1-p-t3}
\end{center}
\end{table}
From the results in Table~\ref{re-1-p-t1}, it can be observed that the LCC-CNN-CM model improves the performance on 4 poses and maintains excellent performance on the other poses. The performance of LCC-CNN-SM is limited by the similarity between different faces. From Table~\ref{re-1-p-t2}, improvements can also be observed on most of the splits. The above results also confirm that LCC-CNN-CM performs better on face recognition. From Figure~\ref{re-1-p-f1} and Figure~\ref{re-1-p-f2} it can be observed that different parameter values lead to different performance. Table~\ref{re-1-p-t3} indicates that different numbers of local models are created for different parameter values. A limitation of the proposed models is that they require re-training when new gallery images are added.
\subsection{Statistical Analysis}
\label{SA}
In this section, statistical analysis is performed for the three methods (UR2D, LCC-CNN-SM, LCC-CNN-CM) over the 30 data splits (20 from UHDB31 and 10 from IJB-A). Following Dem\v{s}ar \cite{demvsar2006statistical}, the Friedman test \cite{friedman1937use,friedman1940comparison} and the two-tailed Bonferroni-Dunn test \cite{dunn1961multiple} are applied to compare multiple methods over multiple datasets. Let $r_i^j$ represent the rank of the $j^{th}$ of $k$ algorithms on the $i^{th}$ of $N$ datasets. The Friedman test compares the average ranks of the different methods, $R_j = \frac{1}{N} \sum_i r_i^j$. The null hypothesis is that all the methods are equivalent, so their ranks $R_j$ should be equal. The original Friedman statistic \cite{friedman1937use,friedman1940comparison},
\begin{equation}\label{st1}
\mathcal{X}_F^2 = \frac{12N}{k(k+1)}[\sum_j R_j^2 - \frac{k(k+1)^2}{4}],
\end{equation}
is distributed according to a $\mathcal{X}^2$ distribution with $k-1$ degrees of freedom. Because of its undesirably conservative behavior, Iman and Davenport \cite{iman1980approximations} derived a better statistic,
\begin{equation}\label{st2}
F_F = \frac{(N-1)\mathcal{X}_F^2}{N(k-1)-\mathcal{X}_F^2},
\end{equation}
which is distributed according to the F-distribution with $k-1$ and $(k-1) \times (N-1)$ degrees of freedom. First, the average ranks of each method are computed: 2.43, 1.93, and 1.63 for UR2D, LCC-CNN-SM, and LCC-CNN-CM, respectively. The $F_F$ statistic for Rank-1 accuracy based on (\ref{st2}) is computed as $5.02$. With three methods and 30 data splits, $F_F$ is distributed with $3-1 = 2$ and $(3-1) \times (30-1) = 58$ degrees of freedom. The critical value of $F(2, 58)$ for $\alpha = 0.10$ is $2.18 < 5.02$, so the null hypothesis is rejected. Then, the two-tailed Bonferroni-Dunn test is applied to compare each pair of methods based on the critical difference:
\begin{equation}\label{st3}
CD = q_{\alpha} \sqrt{\frac{k(k+1)}{6N}},
\end{equation}
where $q_{\alpha}$ is the critical value. If the difference in average rank between two methods is larger than the critical difference, the two methods are significantly different; otherwise, they are statistically equivalent. According to Table 5 in \cite{demvsar2006statistical}, the critical value for three methods at $p = 0.10$ is 1.96. The critical difference is computed as $CD = 1.96 \sqrt{\frac{3 \times 4}{6 \times 30}} = 0.51$. Thus, under Rank-1 accuracy, LCC-CNN-CM performs significantly better than UR2D (the difference between ranks is $2.43 - 1.63 = 0.8 > 0.51$). The difference between LCC-CNN-SM and UR2D, $2.43 - 1.93 = 0.50$, is slightly smaller than the critical difference $0.51$, so they are not significantly different. Similar tests were also performed for the previous experiments, confirming the significance of the improvements.
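The statistics above can be checked numerically; the following sketch (Python assumed) uses the rounded average ranks quoted in the text, so the resulting $F_F$ is only approximate, while the critical difference reproduces the quoted $0.51$:

```python
import math

def friedman_iman_cd(avg_ranks, n_datasets, q_alpha):
    """Friedman chi^2 (eq. st1), Iman-Davenport F_F (eq. st2), and the
    two-tailed Bonferroni-Dunn critical difference (eq. st3)."""
    k, N = len(avg_ranks), n_datasets
    chi2_f = 12 * N / (k * (k + 1)) * (
        sum(r * r for r in avg_ranks) - k * (k + 1) ** 2 / 4)
    f_f = (N - 1) * chi2_f / (N * (k - 1) - chi2_f)
    cd = q_alpha * math.sqrt(k * (k + 1) / (6 * N))
    return chi2_f, f_f, cd

# rounded average Rank-1 ranks for UR2D, LCC-CNN-SM, LCC-CNN-CM over 30 splits
chi2_f, f_f, cd = friedman_iman_cd([2.43, 1.93, 1.63], 30, 1.96)
# cd evaluates to about 0.51, matching the critical difference in the text
```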
\section{Conclusion}
\label{sec6}
This paper proposed a hierarchical matcher that combines the merits of both global and local models. Chains of local models, built from a similarity matrix and a confusion matrix, are used to improve the matching of the global model. The experimental results confirmed the assumption that local models capture more pairwise discriminative features and can be used to improve global matching performance. Compared with the UR2D system, the accuracy is improved significantly, by 1\% and 0.17\% on the UHDB31 dataset and the IJB-A dataset, respectively. An interesting observation is the relationship between the performance of the global model and that of the final matching. Looking at the results in Figure~\ref{fnt_f1}(a), for example, the global model achieves accuracies of 0.8268 and 0.8499 at iterations 4,000 and 5,000, respectively. The matcher then improves the performance to 0.8661 and 0.8704, narrowing the gap from 0.0231 to 0.0043. This observation shows that it is possible to build local models on an unconverged, early-stage global model and still achieve performance comparable to the current matcher, which leads to a better balance between the global and local models.
\section*{Acknowledgements}
This material is based upon work supported by the U.S. Department of Homeland Security under Grant Award Number 2015-ST-061-BSH001. This grant is awarded to the Borders, Trade, and Immigration (BTI) Institute: A DHS Center of Excellence led by the University of Houston, and includes support for the project ``Image and Video Person Identification in an Operational Environment: Phase I'' awarded to the University of Houston. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security.
\section{References}
\bibliographystyle{elsarticle-num}
\section{Introduction}
Understanding the uncertainties in redshift-independent extragalactic distance measurements is absolutely necessary before reporting statistically sound conclusions regarding the structure of the local universe \citep{void,locunivcf,nongauss,6df,localunipv,said,gg3500}, large scale structure \citep{anishub,gallargescale,morphanis,tecciencia,bayesh}, and events like transient gravitational wave detections \citep{gwgallist}. Estimates of the Hubble constant have used increasingly sophisticated statistical tools for primary distance determination methods, such as SNIa \citep{ridsn,unity,hubsn2018}, Cepheids \citep{hubngc}, or both \citep{riess}. Although most estimates of the Hubble constant use Cepheid calibration for calibrating secondary methods \citep{hubunc,huborig,hub2010}, \citet{noceph} have explored changes in Hubble constant estimation using the Tully-Fisher relation (TF) without Cepheid calibration. Secondary methods for extragalactic distance determination, like the TF relation or the Fundamental Plane (FP), have recently become more precise thanks to increasing volumes of data from surveys like 6dF \citep{6df} and 2MASS \citep{2mass,tf07dist} together with Spitzer data \citep{sorce}, along with improved statistical methods \citep{precisetf}. \\
As of 2018, three multi-measurement catalogs including a substantial number of redshift-independent extragalactic distance measurements have been released: HyperLEDA \citep{hyperleda}, NED-D \citep{ned07,ned}, and Cosmicflows-3 \citep{cosmicflows}. HyperLEDA includes a homogenized catalog of extragalactic distances in the nearby universe, with 12866 distance measurements for 518 galaxies to date. NED-D is the NASA/IPAC Extragalactic Distance catalog of Redshift-Independent Distances, which compiles 326850 distance measurements for 183062 galaxies in its 2018 version. Here, $\sim1800$ galaxies ($\sim1$\%) have more than 13 distance measurements, and $\sim180$ galaxies ($\sim0.1$\%) have distance measurements using more than 6 different methods. Cosmicflows-3 is the most up-to-date catalog, which reports distance measurements for 10616 galaxies (all of which include errors) using up to four distance determination methods, calibrated with supernova luminosities. However, unlike HyperLEDA or NED-D, Cosmicflows-3 only reports the latest distance measurement for each method. In HyperLEDA, NED-D, and Cosmicflows-3, errors are reported as one standard deviation of the reported distance modulus. Treatment of errors for combining distance moduli across methods or across measurements is suggested by \citet{ned07} and \citet{cosmicflows} to be based on weighted estimates such as the uncertainty of the weighted mean, albeit with caution, partly due to the heterogeneous origin of the compiled data and partly due to Malmquist bias. In the case of NED-D, this is additionally complicated by the fact that many errors are not reported or are reported as zero. In fact, the TF relation method has the largest number of galaxies with non-reported distance modulus errors (884 to date).
Even though extragalactic distances measured using the TF relation were originally reported to have a relative error in distance modulus of $10-20$\% \citep{tforig}, we consider that this conservative estimate can be improved upon by using a predictive model based on the distance error of galaxies that use the same distance determination method. This requires a robust estimation of the variance of extragalactic distances based on the available data.\\
For many galaxies in all three catalogs, the random error for each distance modulus measurement $\epsilon_i$ (for $i=1,\ldots,N$, with $N$ distance measurements per galaxy) is not representative of the scatter across measurements, even when considering the same method for determining distances. In addition, distance modulus distributions for each measurement (which are assumed to be Gaussian) are transformed to log-normal distributions in metric distance space. This can introduce a significant bias in peculiar velocity studies of large-scale structure \citep{lognormal}. We improve upon previous methods by robustly estimating the underlying variance across measurements and distance determination methods. To do this, we measure the 84th and 16th percentiles, and the median absolute deviation, of the bootstrap-sampled posterior probability distribution of each extragalactic distance \citep{chaparro18}. We compare our results to other more commonly used frequentist methods, such as the weighted estimates mentioned above, and we produce pre-computed data tables for the three catalogs mentioned above, which can be found in the repository for this work at \texttt{https://github.com/saint-germain/errorprediction}. We then perform a Bayesian analysis of the systematics and randomness of the computed errors in the NED-D catalog for TF relation derived distances. From this analysis we build predictive models for the estimation of errors and evaluate them by performing posterior predictive checks using a discrepancy measure-derived Bayesian ``$p$-value'' \citep{gelmanppd}. Furthermore, we make predictions for the 884 galaxies in the NED-D catalog and the 203 galaxies in the HyperLEDA catalog whose distances were measured using the TF relation but have non-reported errors. Inference based on Bayesian posterior predictive checks has been advocated for in \citet{gelman2003} and \citet{ppcinf}.\\
We organize this paper as follows. In Section~\ref{sec:post} we describe the posterior distribution of the distance for individual galaxies and set up methods for measuring its variance. In Section~\ref{sec:comp} we compare the proposed variance estimation methods, and in Section~\ref{sec:predbay} we propose and evaluate predictive Bayesian models for two robust methods of error estimation. We summarize our work in the Conclusions section. The appendix includes a description and brief analysis of the extragalactic distance error data tables pre-computed with the methods described in this paper for the HyperLEDA, Cosmicflows-3, and NED-D catalogs.
\section{Estimation of extragalactic distance errors}
\label{sec:post}
The best approach to consider the effects of random and systematic errors in catalog-wide, multi-method distance analyses is to directly sample the posterior probability distribution of each extragalactic distance. This can be achieved by drawing distance modulus samples from $P(\mu)$, which is the unweighted mixture of normal distributions corresponding to each distance modulus measurement $\mu_i$,
\[\mu\sim\sum_i^N \mathcal{N}(\mu_i,\epsilon_i^2)\ ,\]
and then converting to metric distance,
\[D=10^{\frac{\mu}{5}+1}\ .\]
Therefore,
\begin{equation}\label{eqn:mix}
D_G\sim\sum_i^N\mathrm{lognormal}(M_i,\sigma_{M_i}^2)\ .
\end{equation}
Here $M_i=\ln D_i$ and $\sigma_{M_i}=\frac{\ln 10}{5}\,\epsilon_i\approx0.461\,\epsilon_i$.
\subsection{Estimating the variance of $P(D_G)$}
\label{sec:meth}
Although directly sampling the distribution of $D_G$ is the most transparent way to acknowledge the true variance of the distance measurements, it is not a very efficient way to achieve a standardized treatment of errors. One simple measure of the variance of $D_G$ that acknowledges the possible skewness of the distribution is to take the 16th and 84th percentiles of 10k bootstrap samples of the distribution of $D_G$, where one bootstrap sample corresponds to $N$ draws, one from each reported measurement. In our pre-computed error tables we report these quantities as \texttt{Dmin} and \texttt{Dmax}, respectively.\\
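This bootstrap procedure can be sketched as follows (a minimal NumPy version; the measurement arrays \texttt{mu} and \texttt{eps} are hypothetical inputs, and distances come out in parsecs from $D=10^{\mu/5+1}$):

```python
import numpy as np

def bootstrap_distance_percentiles(mu, eps, n_boot=10_000, rng=None):
    """Sample the unweighted mixture of normals in distance-modulus space,
    convert to metric distance, and return the 16th, 50th, and 84th
    percentiles (Dmin, median D, Dmax) of the pooled bootstrap samples."""
    rng = np.random.default_rng() if rng is None else rng
    mu, eps = np.asarray(mu, float), np.asarray(eps, float)
    # one bootstrap sample = N draws, one from each reported measurement
    draws = rng.normal(mu, eps, size=(n_boot, mu.size))
    d = 10 ** (draws / 5 + 1)  # distance modulus -> metric distance (pc)
    return np.percentile(d, [16, 50, 84])

# hypothetical galaxy with three measurements near mu = 33 mag (~40 Mpc)
dmin, dmed, dmax = bootstrap_distance_percentiles([32.9, 33.0, 33.1],
                                                  [0.2, 0.3, 0.2])
```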
It can be even more convenient to treat each extragalactic metric distance $D_G$ as a normal random variable with a single-valued $\sigma_D$ as a measure of the uncertainty in the estimation of an extragalactic distance,
\begin{equation}\label{eqn:norm}
D_G\sim \mathcal{N}(D,\sigma_D^2)
\end{equation}
For this reason we compare four methods for estimating the $D,\,\sigma_D$ pair. Two of these methods (H, M) use robust measures of the distribution of each extragalactic distance, and the other two (P, Q) use measures based on propagation of errors.\\
Methods H and M, the methods we propose to robustly estimate $\sigma_D$ in equation~\ref{eqn:norm}, are based on measuring the median and variance of repeated bootstrap samples from the distribution of $D_G$ (equation~\ref{eqn:mix}), as mentioned in the previous section. Method H takes $D$ as the median of the bootstrap samples and $\sigma_D$ as the half-distance (H) between the 84th and 16th percentiles of 10k bootstrap samples. We consider this to be the method that most faithfully measures the variance regardless of the shape of the posterior distribution. Method M takes $D$ as the median of the bootstrap samples and $\sigma_D$ as the median absolute deviation (MAD) of the bootstrap samples. This method is better suited for avoiding the effects of outliers.\\
The other two methods (P, Q) considered here are based on commonly used frequentist estimates of the distance error. Method P consists of calculating $D$ from the weighted mean distance modulus $\bar{\mu}^*$ with weights $w_i=\epsilon_i^{-2}$; $\sigma_D$ is calculated by propagation (P) of measurement errors, i.e., from the uncertainty of the weighted mean \citep{cosmicflows},
\begin{equation}
\sigma_D^P=0.461\,\bar{D}^*\,\left(\sum_i^Nw_i\right)^{-1/2} \ .
\end{equation}
Method P does not take into account the scatter of distance measurements for a single galaxy, which is why it can be convenient to calculate $\sigma_D$ as the sum in quadrature (Q) of the propagated uncertainty of the weighted mean and the propagated unbiased weighted sample standard deviation $\sigma_D^*$:
\begin{equation}
\sigma_D^Q=\left[ \left(\sigma_D^P\right)^2+\Big(\sigma_D^*\Big)^2\right]^{1/2} \ .
\end{equation}
Here $\sigma^*_D$ is calculated as \citep{wstdev},
\begin{equation}
\sigma^*_D=0.461\,\bar{D}^*\,\sqrt{\frac{N}{N-1.5}\frac{\sum_i^Nw_i(\mu_i-\bar{\mu}^*)^2}{\sum_i^Nw_i}\vphantom{\Biggl(}}\ .
\end{equation}
If the non-robust P and Q methods were truly representative of the variance of the distribution of $D_G$, they should yield results similar to those of the H or M methods. The following section shows that this is not the case.
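For reference, the two frequentist estimates can be written compactly (a sketch under the notation above; the example moduli and errors are hypothetical):

```python
import numpy as np

def pq_errors(mu, eps):
    """Method-P and method-Q error estimates from distance moduli mu
    with reported errors eps; returns (D, sigma_P, sigma_Q) in parsecs."""
    mu, w = np.asarray(mu, float), np.asarray(eps, float) ** -2.0
    N = mu.size
    mu_bar = np.sum(w * mu) / np.sum(w)           # weighted mean modulus
    d_bar = 10 ** (mu_bar / 5 + 1)                # weighted mean distance
    sigma_p = 0.461 * d_bar / np.sqrt(np.sum(w))  # uncertainty of weighted mean
    # unbiased weighted sample scatter, propagated to metric distance
    sigma_star = 0.461 * d_bar * np.sqrt(
        N / (N - 1.5) * np.sum(w * (mu - mu_bar) ** 2) / np.sum(w))
    return d_bar, sigma_p, np.hypot(sigma_p, sigma_star)

d, sigma_p, sigma_q = pq_errors([32.9, 33.0, 33.1], [0.2, 0.3, 0.2])
```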
\section{Comparison of distance error estimation methods}
\label{sec:comp}
In this section we focus on NED-D distance measurements, since it is the largest of the three catalogs considered here. A full discussion of our error estimation method applied to multi-method measurements in the HyperLEDA, NED-D, and Cosmicflows-3 catalogs is given in the appendix. A repository for this work, including the pre-computed error tables for HyperLEDA, NED-D, and Cosmicflows-3, is located at \texttt{https://github.com/saint-germain/errorprediction}. From here on, when we mention distance measurements in the NED-D catalog, we exclude from our analysis measurements that require the target redshift to calculate the distance, as indicated in the \texttt{redshift (z)} field.\\
For galaxies with between 2 and 5 distance measurements (Fig.~\ref{fig:NED}, left), errors estimated with the quadrature (Q) and median absolute deviation (M) methods show a linear trend with similar slopes that over-predicts the variance with respect to the half 84th--16th percentile distance (H) method, whereas the propagation (P) method tends to under-predict the errors. Furthermore, errors estimated using the Q method show a larger dispersion around the linear trend than those from the H and M methods. Fig.~\ref{fig:NED} (right) shows that the P and Q methods under-predict errors for galaxies with more than 5 distance measurements.
\subsection{Distance errors in Tully-Fisher relation derived measurements}
Even though our analysis for error estimation can be used to combine distance measurements using different methods for single galaxies, we think that due to method-intrinsic systematics it is more appropriate to separate the analysis by method. Without loss of generality, we now focus on galaxies whose distances have been measured using the Tully-Fisher method in the NED-D catalog because it is the method with the largest number of galaxies without reported measurement errors (884) in the database. \\
Fig.~\ref{fig:comp} shows that for a small but representative sample of galaxies with more than 7 distance measurements, the center and variance of the posterior distribution of each extragalactic distance is best explained using the H method, whereas the less robust P and Q methods under-predict the variance. On the other hand, the M method also under-predicts the variance because it is a robust measure, and thus not as sensitive to outliers as methods P and Q, as seen in the case of NGC 1558 in Fig.~\ref{fig:comp}. For the more symmetrical posterior distribution of UGC 12792, the M and Q methods predict the same center and variance.\\
Fig.~\ref{fig:hqp-qm} shows that the Q and P methods under-predict distance errors for galaxies with more than 5 TF distance measurements. On the other hand, method Q under-predicts distance errors with respect to the M method, which again shows a tighter linear correlation due to the robustness of the M measure. However, the scale of H and M errors (relative errors) does not depend strongly on the limiting number of measurements for $N_\mathrm{TF}>3$, as Fig.~\ref{fig:relerr} shows. \\
The general correlation between distance and distance error (Figs.~\ref{fig:NED} and \ref{fig:hqp-qm}) means that there is a strong systematic component in the variance of $P(D_G)$, which is expected from the conversion of distance modulus to metric distance. To improve visualization, only errors for galaxies with more than 5 TF distance measurements are shown in Figs.~\ref{fig:hqp-qm}, \ref{fig:drawsee}, and \ref{fig:predl1}.\\
\begin{figure*}
\includegraphics[scale=0.69]{f01Nlow.png}
\includegraphics[scale=0.69]{f02Nhigh.png}
\caption{Estimated extragalactic distance errors vs. median extragalactic distance for galaxies with $N<6$ (left) and $N>5$ (right) redshift-independent distance measurements in NED-D according to the H, M, Q, P error models (explained in the text), showing linear regressions and confidence intervals computed using the \texttt{seaborn.regplot} Python function.}
\label{fig:NED}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.69]{f03comp}
\caption{Comparison of four examples of extragalactic distance posterior distribution draws (10000 per measurement) and modeled distributions for UGC 06667, NGC 1558, UGC 08186, and UGC 12792 using the Tully-Fisher Method for distance determination in NED-D. The methods used for approximating the posterior distribution (H, M, P, and Q) are described in the text. }
\label{fig:comp}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.69]{f04hqp.png}
\includegraphics[scale=0.69]{f05qm.png}
\caption{Estimated extragalactic distance errors vs. median extragalactic distance for galaxies with more than 5 TF distance measurements in NED-D according to the H, Q, P (left) and Q, M (right) error models, showing linear regressions and confidence intervals computed using the \texttt{seaborn.regplot} Python function.}
\label{fig:hqp-qm}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.69]{f06HrelerrTF.png}
\includegraphics[scale=0.69]{f07MrelerrTF.png}
\caption{Estimated extragalactic distance errors vs. median extragalactic distance using the Tully-Fisher method for distance determination in NED-D, according to the H error model (left) and the M error model (right) showing linear regressions and confidence intervals computed using the \texttt{seaborn.regplot} Python function.}
\label{fig:relerr}
\end{figure*}
Given that each $\sigma_D$ calculated using the H and M methods is obtained from many realizations from the distribution of extragalactic distances, it is also possible to calculate its variance as the half-distance between the 84th and 16th percentile of bootstrap $\sigma_D$ realizations. Fig.~\ref{fig:drawsee} (left) shows that the variance of the estimated error is proportional to the error for the H and M methods. This will be relevant in Section~\ref{sec:predbay} when we construct a predictive model for non-reported errors.
\section{Predictive Bayesian Models for TF missing errors}
\label{sec:predbay}
In the multi-measurement catalogs considered here, we observe that the scatter of the reported distance measurements and the reported individual measurement errors do not match in most cases. This happens because there are hidden systematic sources intrinsic to each distance estimation method. These systematics cannot be removed, but they can be marginalized over in order to estimate the true variance of a distance estimation method based on multiple measurements. The central limit theorem indicates that as the number of measurements increases, the distribution of the distance errors should settle toward normality. Thus, if a correlation trend between distance measurements and estimated errors can be explained from a Bayesian viewpoint, then it should be possible to use a Bayesian model to predict missing distance errors for a distance determination method, given enough data. Since more measurements can increase our knowledge of the systematic uncertainties in distance measurements, we explore and validate our Bayesian models by partitioning our data using different lower thresholds for $N$, the number of measurements per galaxy.\\
As seen in Fig.~\ref{fig:hqp-qm}, TF distance errors estimated using the robust methods H and M grow roughly linearly with distance and seem to be randomly distributed around this trend line. For this reason we try simple linear and quadratic Bayesian models in order to predict the values of the missing distance errors. For this, we use the \texttt{emcee} affine-invariant Markov chain Monte Carlo (MCMC) ensemble sampler \citep{emcee}. Recently, \texttt{emcee} has proven useful in obtaining probabilistic estimates of photometric redshifts \citep{photred1,photred2}. Since we want to be able to predict non-reported errors, our model selection is based on posterior predictive checks, i.e., we rely on models that can create synthetic datasets similar to the original dataset \citep{gelmanppd}. This allows us to reproduce the original variance of the error (Fig.~\ref{fig:drawsee}, left). Many Bayesian analyses do not use posterior predictive checks; for example, \citet{propprob2018} and \citet{bayesh} used \texttt{emcee} for posterior sampling but relied on the Bayesian and Akaike Information Criteria along with Bayes factors for model assessment, without attempting to reproduce the original variance of the data. This is also the case for other Bayesian tools like Linmix \citep{gmastro}, which is widely used in astronomy for approximating unobserved data. \\
First we assume that for any galaxy $j$ the distance error $\sigma_{Dj}$ is a random normal variable, with variance $\sigma_{\sigma j}$ and mean $\hat{\sigma}_{Dj}$,
\begin{equation}
P(\sigma_{Dj}|\hat{\sigma}_{Dj},\sigma_{\sigma j})=\mathcal{N}(\hat{\sigma}_{Dj},\sigma_{\sigma j}^2)\ .
\label{eq:prob}
\end{equation}
Our likelihood function is the joint probability that each of the $\sigma_D=\{\sigma_{Dj}\}$ in the original dataset of $m$ galaxies is generated by the above probability,
\begin{equation}
P(\sigma_{D}|\hat{\sigma}_{D},\sigma_{\sigma})=\prod_j^mP(\sigma_{Dj}|\hat{\sigma}_{Dj},\sigma_{\sigma j})
\end{equation}
We want to test the hypothesis mentioned above that all errors and their variances $(\hat{\sigma}_D=\{\hat{\sigma}_{Dj}\},\ \sigma_\sigma=\{\sigma_{\sigma j}\})$ can be estimated from a single model depending on the extragalactic distances $D_G=\{D_{Gj}\}$ and a set of distance-independent parameters $\pmb{\theta}$. Thus the likelihood can be expressed as,
\[P(\sigma_D|D_G,\pmb{\theta})=\prod_j^mP(\sigma_{Dj}|D_{Gj},\pmb{\theta})\ .\]
Following Bayes' theorem we can compute the posterior probability up to a constant,
\begin{equation}
P(\pmb{\theta}|D_G,\sigma_D)\propto P(\pmb{\theta})P(\sigma_D|D_G,\pmb{\theta})\ .
\label{eq:ppd}
\end{equation}
Due to the simplicity of the models used here, we only use reasonably conservative (flat) priors on all model parameters, which are described in the next subsection.\\
A common feature across our models is that $\sigma_\sigma=f\hat{\sigma}_D$, where the error variance scale factor $f$ is one of the parameters in $\pmb{\theta}$. This model choice is supported by Fig.~\ref{fig:drawsee} (left), which shows a roughly linear correlation between estimated errors and their variances. On the other hand, our models will differ by the proposed functional forms of $\hat{\sigma}_D(D_G,\pmb{\theta})$.\\
We obtain computationally credible samplings of the posterior probability (equation~\ref{eq:ppd}) by removing the burn-in steps of the random walk according to the autocorrelation time. We can then create synthetic datasets by drawing a parameter sample $\pmb{\theta}_k$ from the posterior and using it to draw from the likelihood, i.e., drawing new $\sigma_{Dj}$ for all galaxies in the original dataset using equation~\ref{eq:prob}. We then assess the validity of the model by comparing the synthetic data with the observed (i.e., original) data. This comparison uses a discrepancy measure $\mathcal{D}(\sigma_D|\pmb{\theta}_k)$ between the data and the model-derived expected values for the same data, $e=\{e_j(\pmb{\theta}_k)\}$, where $\pmb{\theta}_k$ is drawn from the posterior distribution and $\sigma_D$ can be the observed errors or the model-generated synthetic errors. The discrepancy can be calculated using a statistic like $\chi^2$ \citep{chi2ms,otherdisc}, but here we work with the Freeman-Tukey discrepancy since it is weight independent \citep{bishopft,brooks},
\[\mathcal{D}(\sigma_D|\pmb{\theta}_k)=\sum_j^m(\sqrt{\sigma_{Dj}\vphantom{e_j(\pmb{\theta}_k)}}-\sqrt{e_j(\pmb{\theta}_k)})^2\ .\]
For each parameter draw $k$, it is possible to compare the simulated discrepancy with the observed discrepancy. If the model is representative of the data, then for many parameter draws, the simulated and observed discrepancies should be similar. We can then calculate a Bayesian ``$p$-value'' as the ratio of ``draws when the observed discrepancies are larger than the synthetic discrepancies'' to ``total draws''. If this Bayesian $p$-value is too close to 0 or to 1 we can reject the model, otherwise it is generating synthetic data that is similar to the original data. This is better visualized using a discrepancy plot, where for each draw $k$, a synthetic discrepancy is paired with its corresponding observed discrepancy. If the discrepancy points are roughly equally distributed about the $\mathcal{D}_\mathrm{obs}=\mathcal{D}_\mathrm{sim}$ line, then we cannot reject the model. As mentioned above, we expect that galaxies with the largest number of measurements sample the ``true'' distribution of the distance more completely. Therefore we need to find the minimum number of measurements per galaxy for which the Bayesian $p$-value shows an agreement between the partitioned dataset and the model predictions.
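The posterior predictive check described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's actual pipeline: the array names, the toy Gaussian likelihood draw, and the `expected_fn` callback are assumptions made for the example.

```python
import numpy as np

def freeman_tukey(sigma, expected):
    """Freeman-Tukey discrepancy between errors and model expectations."""
    return np.sum((np.sqrt(sigma) - np.sqrt(expected)) ** 2)

def bayesian_p_value(obs_errors, posterior_draws, expected_fn, rng=None):
    """Fraction of posterior draws for which the observed discrepancy
    exceeds the discrepancy of a synthetic dataset drawn from the model."""
    rng = np.random.default_rng(rng)
    larger = 0
    for theta in posterior_draws:
        e = expected_fn(theta)                     # model expectation e_j(theta)
        synthetic = rng.normal(e, 0.1 * e)         # illustrative likelihood draw
        synthetic = np.clip(synthetic, 0.0, None)  # errors must stay non-negative
        d_obs = freeman_tukey(obs_errors, e)
        d_sim = freeman_tukey(synthetic, e)
        larger += d_obs > d_sim
    return larger / len(posterior_draws)
```

A model would be rejected when the returned value is very close to 0 or 1; intermediate values mean the synthetic and observed discrepancies scatter around the $\mathcal{D}_\mathrm{obs}=\mathcal{D}_\mathrm{sim}$ line.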
\subsection{Bayesian Quadrature Model}
\label{sec:bqm}
Our first model is based on the hypothesis that there are distinct systematic and random contributions to the distance measurement error, both of which are normally distributed. For this reason they are added in quadrature,
\begin{equation}
\sigma_D^2=\sigma_s^2+\sigma_r^2\ .
\label{eq:bayq}
\end{equation}
Here $\sigma_r$ is a random (constant) error and the systematic error is modeled allowing for scale factor ($s$) and zero setting ($a$) errors, i.e. $\sigma_s=sD+a$, as Fig.~\ref{fig:hqp-qm} suggests. We set our prior to be symmetrical around $\sigma_r=0$ in order to better visualize its behavior near this point, so
\begin{equation}
P(s,a,\sigma_r,f)\propto\left\{
\begin{aligned}
1,\ \ \ \ &\mathrm{if}\ \ \ 0<s<1\ \mathrm{and}\\
& \ \ \ \ \ 0<a<10\ \mathrm{Mpc}\ \mathrm{and}\\
&-10<\sigma_r<10\ \mathrm{Mpc}\ \mathrm{and}\\
& \ \ \ \ \ 0<f<1\\
0,\ \ \ \ &\ \mathrm{otherwise.}
\end{aligned}
\right.
\label{eq:priorq}
\end{equation}
We now use \texttt{emcee} to sample the posterior over the parameter set $\pmb{\theta}=(s,\sigma_r,f,a)$ using 100 walkers and 20000 steps ($\bar{t}_\mathrm{autocorr} \lesssim 90$ steps). According to the discrepancy plot in Fig.~\ref{fig:discq} (left), this model is able to replicate method H errors for the 31 galaxies with $N>25$ measurements (866 measurements in total). The corner plot of the posterior sampling made by \texttt{emcee} is shown in Fig.~\ref{fig:cornerq}, which displays the 16th, 50th, and 84th percentiles of the marginalized posterior distributions for the systematic scale factor $s$, the random error component $\sigma_r$, the error variance scale factor $f$, and the zero offset systematic error $a$. From the large variance in the marginalized posterior distributions for $\sigma_r$ and $a$, we see that there is a significant degeneracy between those parameters. However, it should be noted that the marginalized posterior distribution of $\sigma_r$ is symmetric around zero (because of its own degeneracy), while the distribution of $a$ can only take positive values. The working distance range and overall fit of this model are shown in Fig.~\ref{fig:drawsq} (left), where method H errors corresponding to galaxies with more than 25 TF distance measurements are plotted alongside the expected values $e=\{e_j(\pmb{\theta}_k)\}$ for parameter sets $\pmb{\theta}_k$ drawn from the posterior probability distribution.
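The log-posterior of the quadrature model follows directly from the likelihood and the flat prior above. The sketch below uses synthetic toy data in place of the real NED-D-derived arrays, and a bare-bones random-walk Metropolis sampler stands in for \texttt{emcee} to keep the example dependency-free; the toy data, starting point, and step size are assumptions of the illustration.

```python
import numpy as np

def log_posterior(theta, D, sigma_D):
    """Log-posterior of the Bayesian quadrature model with a flat prior box."""
    s, sigma_r, f, a = theta
    if not (0 < s < 1 and 0 < a < 10 and -10 < sigma_r < 10 and 0 < f < 1):
        return -np.inf                                   # outside the prior box
    expected = np.sqrt((s * D + a) ** 2 + sigma_r ** 2)  # sigma_s, sigma_r in quadrature
    var = (f * expected) ** 2                            # sigma_sigma = f * expected
    return -0.5 * np.sum((sigma_D - expected) ** 2 / var + np.log(2 * np.pi * var))

def metropolis(logp, theta0, nsteps, step, rng):
    """Bare-bones random-walk Metropolis, standing in for emcee here."""
    chain, lp = [np.asarray(theta0, float)], logp(theta0)
    for _ in range(nsteps):
        prop = chain[-1] + step * rng.standard_normal(len(theta0))
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

rng = np.random.default_rng(1)
D = rng.uniform(3.0, 140.0, 500)                                      # toy distances [Mpc]
sigma_D = np.abs(rng.normal(0.1 * D + 0.5, 0.05 * (0.1 * D + 0.5)))   # toy errors
chain = metropolis(lambda t: log_posterior(t, D, sigma_D),
                   [0.1, 0.1, 0.1, 0.5], 5000, 0.01, rng)
chain = chain[1000:]                                                  # discard burn-in
```

In practice the affine-invariant ensemble sampler of \texttt{emcee} converges far more efficiently than this single-walker sketch, which is why the paper uses it.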
\begin{figure*}
\includegraphics[scale=0.69]{f08discq.png}
\includegraphics[scale=0.69]{f09discq2.png}
\caption{Discrepancy plot for the Bayesian quadrature model (equation~\ref{eq:bayq}) based on errors estimated using method H for $N_\mathrm{TF}>23,24,25$ (left) and using method M for $N_\mathrm{TF}>12,13$ (right).}
\label{fig:discq}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.69]{f10cornerq}
\caption{Corner plot showing the \texttt{emcee} sampling of the posterior probability distribution (equation \ref{eq:ppd}) for the quadrature Bayesian model parameters $\pmb{\theta}=(s,\sigma_r,f,a)$ based on errors estimated using method H for galaxies with more than 25 TF distance measurements. The dashed lines indicate the 16th, 50th, and 84th percentile of the marginalized distribution of each parameter (shown at the top of each column), and the blue solid lines indicate the mean. This plot was made using the \texttt{corner} Python module.}
\label{fig:cornerq}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.69]{f11cornerq2}
\caption{Corner plot showing the \texttt{emcee} sampling of the posterior probability distribution (equation \ref{eq:ppd}) for the quadrature Bayesian model parameters $\pmb{\theta}=(s,\sigma_r,f,a)$ based on errors estimated using method M for galaxies with more than 13 TF distance measurements. The dashed lines indicate the 16th, 50th, and 84th percentile of the marginalized distribution of each parameter (shown at the top of each column), and the blue solid lines indicate the mean. This plot was made using the \texttt{corner} Python module.}
\label{fig:cornerq2}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.69]{f12drawsq}
\includegraphics[scale=0.69]{f13drawsq2}
\caption{Projection of parameter set samples from the posterior probability distribution of the Bayesian quadrature model onto the $\sigma_D$ vs. $D_G$ scatter plot for errors estimated using method H for galaxies with more than 25 TF distance measurements (left) and using method M for galaxies with more than 13 TF distance measurements (right).}
\label{fig:drawsq}
\end{figure*}
Now we sample the posterior distribution for the Bayesian quadrature model with method M errors using \texttt{emcee} with 100 walkers and 20000 steps ($\bar{t}_\mathrm{autocorr} \lesssim 50$ steps). The discrepancy plot for method M errors in Fig.~\ref{fig:discq} (right) shows that the quadrature model also replicates method M errors, but for the 732 galaxies with more than 13 measurements (13054 measurements in total). Fig.~\ref{fig:cornerq2} shows that values for the random error component $\sigma_r$ are so low that the model draws are almost indistinguishable from straight lines in Fig.~\ref{fig:drawsq} (right). Additionally, and just as for the quadrature model for H errors above, the symmetry of the marginalized posterior distribution of $\sigma_r$ leads us to set this parameter to zero in our next (linear) model.
\subsection{Bayesian Linear Model}
\label{sec:blm}
In Section~\ref{sec:bqm} above we conclude that we can ignore the random error component in equation~\ref{eq:bayq} in order to work with a simpler, numerically stable, linear model that only considers a systematic error with scale factor and zero setting error components,
\begin{equation}
\sigma_D=\sigma_s=sD+a\ .
\label{eq:bayl}
\end{equation}
We also update our prior considering that the quadratic model yielded lower values for the zero setting error $a$ than previously considered in equation~\ref{eq:priorq},
\begin{equation}
P(s,a,f)\propto\left\{
\begin{aligned}
1,\ \ \ \ &\mathrm{if}\ \ \ 0<s<1\ \mathrm{and}\\
& \ \ \ \ \ 0<a<2\ \mathrm{Mpc}\ \mathrm{and}\\
& \ \ \ \ \ 0<f<1\\
0,\ \ \ \ &\ \mathrm{otherwise.}
\end{aligned}
\right.
\end{equation}
We use \texttt{emcee} to sample the posterior over $\pmb{\theta}=(s,a,f)$ using 100 walkers and 10000 steps ($\bar{t}_\mathrm{autocorr} < 50$ steps) for the linear Bayesian model applied to H errors. The discrepancy plot (Fig.~\ref{fig:discl}, left) shows a significant improvement over the quadratic model, as it shows an acceptable Bayesian $p$-value for the 477 galaxies with $N>15$ measurements (9361 in total), whereas the quadratic model replicated errors only for galaxies with $N>25$ measurements. Fig.~\ref{fig:cornerl} shows the 16th, 50th, and 84th percentiles of the marginalized posterior distributions for the systematic scale factor $s$, the error variance scale factor $f$, and the zero offset systematic error $a$ for the linear Bayesian model using H errors for galaxies with more than 15 measurements.
\begin{figure*}
\includegraphics[scale=0.69]{f14discl.png}
\includegraphics[scale=0.69]{f15discl2.png}
\caption{Discrepancy plot for the Bayesian linear model (equation~\ref{eq:bayl}) based on errors estimated using method H for $N_\mathrm{TF}>13,14,15$ (left) and using method M for $N_\mathrm{TF}>11,12,13$ (right). }
\label{fig:discl}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.69]{f16cornerl}
\caption{Corner plot showing the \texttt{emcee} sampling of the posterior probability distribution (equation \ref{eq:ppd}) for the linear Bayesian model parameters $\pmb{\theta}=(s,a,f)$ based on errors estimated using method H for galaxies with more than 15 TF distance measurements. The dashed lines indicate the 16th, 50th, and 84th percentile of the marginalized distribution of each parameter (shown at the top of each column), and the blue solid lines indicate the mean. This plot was made using the \texttt{corner} Python module.}
\label{fig:cornerl}
\end{figure*}
We sample the posterior for the linear model applied to M errors using \texttt{emcee} with 100 walkers and 10000 steps ($\bar{t}_\mathrm{autocorr} < 50$ steps). Fig.~\ref{fig:discl} (right) shows the corresponding discrepancy plot, which does not show a significant improvement of the linear over the quadratic model for M errors, as it also works for galaxies with $N>13$ measurements. This happens because the sampling of the posterior for the quadratic model (Fig.~\ref{fig:cornerq2}) does not show a degeneracy between $\sigma_r$ and $a$, and also because the marginalized posterior distribution for $\sigma_r$ is a near-zero-centered distribution with a variance of $\sim0.2$ Mpc. The 16th, 50th, and 84th percentiles of the marginalized posterior distributions for the systematic scale factor $s$, the error variance scale factor $f$, and the zero offset systematic error $a$ according to the linear model for M errors are shown in Fig.~\ref{fig:cornerl2}.
\begin{figure*}
\includegraphics[scale=0.69]{f17cornerl2}
\caption{Corner plot showing the \texttt{emcee} sampling of the posterior probability distribution (equation \ref{eq:ppd}) for the linear Bayesian model parameters $\pmb{\theta}=(s,a,f)$ based on errors estimated using method M for galaxies with more than 13 TF distance measurements. The dashed lines indicate the 16th, 50th, and 84th percentile of the marginalized distribution of each parameter (shown at the top of each column), and the blue solid lines indicate the mean. This plot was made using the \texttt{corner} Python module.}
\label{fig:cornerl2}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.69]{f18drawsl}
\includegraphics[scale=0.69]{f19drawsl2}
\caption{Projection of parameter set samples from the posterior probability distribution of the Bayesian linear model onto the $\sigma_D$ vs. $D_G$ scatter plot for errors estimated using method H for galaxies with more than 15 TF distance measurements (left) and using method M for galaxies with more than 13 TF distance measurements (right).}
\label{fig:drawsl}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.69]{f20ee.png}
\includegraphics[scale=0.69]{f21drawsee.png}
\caption{Variance of distance error estimates vs. estimated extragalactic distance errors, showing linear regressions and confidence intervals computed using the \texttt{seaborn.regplot} Python function (left), and showing a projection of parameter set samples from the posterior probability distribution of the Bayesian linear model (right) as determined by the H and M methods.}
\label{fig:drawsee}
\end{figure*}
\subsection{Predictions for missing errors}
\label{sec:pred}
Our linear Bayesian model is able to predict the intrinsic variance of TF H and M distance errors in NED-D by considering systematic zero setting and scale factor error components. The lower limit on the number of distance measurements for which the model works is 15 for H errors and 13 for M errors\footnote{Our model validation has also worked with the two 2017 versions of the NED-D extragalactic distance catalog, albeit with different thresholds for the number of measurements per galaxy.}. Fig.~\ref{fig:drawsl} shows the linear model draws for H and M errors, for which the working range is approximately $D_G\in[3,140]$ Mpc. We also show in Fig.~\ref{fig:drawsee} that the model draws for $f$, the scale factor for the variance of $\sigma_D$, fit the bootstrap variance of H and M errors well, which means that our model choice for the variance of the error ($\sigma_\sigma$) was appropriate.\\
Now, galaxies for which the models shown above work are not intrinsically different from other galaxies, as long as they are within the same distance range. Thus, we use the posterior predictive distribution of the linear Bayesian model for predicting H and M errors for the 884 galaxies in NED-D for which all TF measurements lack a reported error. Fig.~\ref{fig:predl1} shows synthetic errors generated from the posterior predictive distribution for the $\sigma_D$ linear model, along with the expected values of $\sigma_D$ using the median of the posterior probability distribution in equation~\ref{eq:ppd}, and the $D_G$ vs. $\sigma_D$ points for galaxies with more than 5 TF distance measurements (for contrast) for methods H and M, respectively. The median expected values are only drawn for points within the predictive range of each model, and synthetic predicted errors for galaxies outside of this range are plotted in black. The distance was calculated using the median of the reported distances whenever there was more than one TF distance measurement.\\
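Generating synthetic errors from the posterior predictive distribution amounts to drawing a parameter sample and then drawing from the likelihood. The sketch below illustrates this for the linear model; the toy posterior chain and the per-galaxy sampling scheme are assumptions of the example, standing in for the actual \texttt{emcee} chain.

```python
import numpy as np

def predict_errors(D_new, posterior_samples, rng=None):
    """Posterior predictive draws of sigma_D under the linear model:
    pick a posterior sample (s, a, f), then draw from the likelihood."""
    rng = np.random.default_rng(rng)
    k = rng.integers(len(posterior_samples), size=len(D_new))
    s, a, f = posterior_samples[k].T             # one (s, a, f) per galaxy
    expected = s * D_new + a                     # sigma_D = s * D + a
    scale = np.maximum(f * expected, 1e-12)      # guard against non-positive scales
    return np.clip(rng.normal(expected, scale), 0.0, None)  # errors are non-negative

# Toy posterior chain standing in for the emcee samples (columns: s, a, f).
rng = np.random.default_rng(0)
post = rng.normal([0.1, 0.5, 0.3], [0.01, 0.05, 0.03], size=(1000, 3))
D_new = rng.uniform(3.0, 140.0, 50)              # distances lacking reported errors
sigma_pred = predict_errors(D_new, post, rng=1)
```

Because each galaxy receives its own parameter draw, the scatter of the synthetic errors reflects both the likelihood width and the posterior uncertainty, as in the predictive figures.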
The HyperLEDA catalog has distance measurements for 4224 galaxies, of which 1064 galaxies have reported measurements without errors. Of these galaxies with unreported distance errors, 203 report measurements obtained with the TF method. We create synthetic errors for these using our Bayesian predictive models for H and M TF errors. Fig.~\ref{fig:predhl1} (left) shows that predicted H errors are somewhat higher than those estimated for HyperLEDA, although acceptably within the range. Fig.~\ref{fig:predhl1} (right) shows that predicted M errors are even closer to the HyperLEDA M error estimates. This outstanding result is an independent validation of our linear Bayesian model for predicting TF distance errors, and its capacity to estimate systematic effects of the TF distance determination method.\\
This predictive model may work for other distance determination methods, but a cursory overview of methods which require error prediction due to missing errors (e.g. TRGB, CMD, Eclipsing Binary, Red Clump, PNLF, SZ effect, Brightest Stars, Horizontal Branch in NED-D) suggests that such attempts need to be evaluated on a case-by-case basis. For instance, in NED-D, Fundamental Plane (FP) measurements are by far the most numerous ($\sim130k$ galaxies), but only 28 of those have more than 3 FP distance measurements. We attempted to create a model similar to the one we built for TF, but we were only able to find a working predictive model (i.e. yielding a good Bayesian $p$-value) for the 16 galaxies with more than 4 distance measurements. The comparatively low number of galaxies for which this model works makes us wary of predicting FP errors, therefore we do not report these results.
\begin{figure*}
\includegraphics[scale=0.69]{f22predl1.png}
\includegraphics[scale=0.69]{f23predl2.png}
\caption{Synthetic H-method (left) and M-Method (right) $\sigma_D$ and their median expected values vs. $D_G$ for the 884 galaxies in NED-D for which no TF distance measurements report an error, generated using the corresponding Bayesian linear model. Predicted errors for galaxies outside of the working distance range of the model are plotted in black. H (left) and M (right) errors for galaxies with more than 5 TF measurements are also plotted for comparison.}
\label{fig:predl1}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.69]{f24predhl1.png}
\includegraphics[scale=0.69]{f25predhl2.png}
\caption{Synthetic H-method (left) and M-method (right) $\sigma_D$ and their median expected values vs. $D_G$ for the 71 galaxies in HyperLEDA for which no TF distance measurements report an error, generated using the corresponding Bayesian linear model. Predicted errors for galaxies outside of the working distance range of the model are plotted in black. H errors for galaxies with more than 2 distance measurements are also plotted for comparison.}
\label{fig:predhl1}
\end{figure*}
\section{Conclusions}
We propose methods for robustly estimating the uncertainty in extragalactic distances in multi-measurement, multi-method catalogs. First we propose to report 16th, 50th, and 84th percentiles of the bootstrap-sampled distance distribution for each galaxy. We also propose the use of the half-distance between the 84th and 16th percentiles (method H), and the median absolute deviation (method M) of the bootstrap-sampled distance distribution for each galaxy, as straightforward measures of the uncertainty in extragalactic distances. Method H gives errors that faithfully measure the variance of the distance probability distribution, whereas traditional frequentist propagation-of-error methods fail to match this variance measure. On the other hand, method M should be used whenever a specific application requires ignoring outdated or possibly erroneous outliers.\\
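The two robust error measures can be sketched as follows. This is an illustrative reconstruction: it assumes the bootstrap statistic is the median distance per resample, with the exact resampling details as described earlier in the paper.

```python
import numpy as np

def robust_distance_errors(measurements, n_boot=10000, rng=None):
    """Method H and M error estimates from the bootstrap-sampled distance
    distribution of a single galaxy's measurements."""
    rng = np.random.default_rng(rng)
    m = np.asarray(measurements, float)
    idx = rng.integers(len(m), size=(n_boot, len(m)))  # bootstrap resamples
    boot = np.median(m[idx], axis=1)                   # assumed statistic: median
    p16, p50, p84 = np.percentile(boot, [16, 50, 84])
    h = 0.5 * (p84 - p16)                              # method H: half 16th-84th spread
    mad = np.median(np.abs(boot - np.median(boot)))    # method M: MAD, downweights outliers
    return p50, h, mad
```

For a galaxy whose measurements all agree, both H and M vanish; a single discrepant measurement inflates H more than M, which is why method M is preferred when outliers should be ignored.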
We produce error data tables using the robust (H, M) and frequentist (P, Q) methods for NED-D, HyperLEDA, and Cosmicflows-3, along with the 16th, 50th, and 84th percentiles of the bootstrap-sampled distance distribution for each galaxy in those catalogs. These tables can be found in the repository for this paper, located at \texttt{http://github.com/saint-germain/errorprediction}. A description and analysis for each catalog can be found in the appendices. We consider that these error tables should be a fundamental tool for future catalog-wide precision-cosmology studies, as it should be possible to quote errors according to the method that the reader considers most relevant for specific applications.\\
We create a Bayesian predictive model for TF distance errors in the NED-D catalog based on a Bayesian analysis of the systematic and random components in distance errors. We perform a posterior predictive check in the form of the computation of a Bayesian $p$-value based on simulated vs. observed discrepancies measured with the Freeman-Tukey statistic. Thus we create models which can reproduce the intrinsic variance of distance errors along with systematic zero-setting and scale factor components from the posterior predictive distribution of the models, using NED-D estimated H and M errors.\\
We use these models to predict H and M errors for 884 galaxies in NED-D which report TF distance measurements but do not report measurement errors. Our predictive models are independently validated against the HyperLEDA catalog by the agreement between our pre-computed H and M errors and our predictions for 203 galaxies in HyperLEDA with non-reported TF errors. Similar Bayesian predictive methods can be set up for other distance determination methods but with caveats, as model validation works better for methods for which there are many galaxies with a high number of distance measurements.\\
Finally, we want to advocate for the widespread use of discrepancy plots and their derived Bayesian $p$-values for Bayesian model checking in astronomy, as inference is based on the model's ability to reproduce the original distribution of the data and not only on a relative comparison to other models.
\section*{Acknowledgements}
The authors would like to thank O. L. Ram\'irez-Su\'arez and J. E. Forero-Romero for their valuable input during the early stages of this work. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\bibliographystyle{mnras}
\section{Introduction}
\input{inctex/introduction}
\section{Adjoint Approach \label{secAdjointTheory}}
\input{inctex/theory_adjoint}
\section{The Arnoldi Method \label{sec_arnoldi}}
\input{./inctex/arnoldi}
\section{The numerical Adjoint \label{secEldidec}}
\input{./inctex/numericalAdjoint}
\section{Results and Validation \label{secResults} }
\input{./inctex/results}
\section{Conclusion}
\input{./inctex/conclusion}
\subsubsection*{Acknowledgments.}
The authors gratefully acknowledge support by the Deutsche Forschungsgemeinschaft (DFG) as part of collaborative research center SFB 1029 "Substantial efficiency increase in gas turbines through direct use of coupled unsteady combustion and flow dynamics" on project C02.
\subsection{\textsc{Burgers} Equation}
The Burgers equation is given by
\begin{equation}
\partial_t u + \partial_x \left( \frac{u^2}{2} \right) = \mu \partial_x^2 u, \label{eq_burgers}
\end{equation}
with a scalar transported quantity $u$ and a friction constant $\mu$, which is set to zero for the friction-less cases.
The equation is spatially discretized by central finite differences.
A standard fourth-order central derivative is used; the second derivative is obtained by applying the first-derivative operator twice.
The periodic computational area of length $2\pi$ is resolved by 128 equidistantly distributed points, if not stated otherwise.
The time integration is realized by a standard Runge-Kutta scheme of fourth order.
A total number of $256$ time steps is used to resolve the time span from $t_0$ to $t_{\mathrm{end}}$, which corresponds to about one convective length for all setups using a CFL condition of $0.5$.
The numerical, Arnoldi-based adjoint and the reference adjoint system are discretized in the same manner.
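The discretization just described can be reproduced in a few lines. The sketch below implements setup B1 (constant base flow, $\mu = 0$) in Python rather than the authors' own code; the fourth-order stencil, RK4 stepping, and CFL bookkeeping follow the text, while variable names are assumptions of the example.

```python
import numpy as np

def ddx(u, dx):
    """Fourth-order central first derivative on a periodic grid."""
    return (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)

def rhs(u, dx, mu):
    """Burgers RHS: -d/dx(u^2/2) + mu * d2u/dx2, with the second
    derivative realized by applying the first-derivative stencil twice."""
    return -ddx(0.5 * u ** 2, dx) + mu * ddx(ddx(u, dx), dx)

def rk4_step(u, dt, dx, mu):
    """Classical fourth-order Runge-Kutta step."""
    k1 = rhs(u, dx, mu)
    k2 = rhs(u + 0.5 * dt * k1, dx, mu)
    k3 = rhs(u + 0.5 * dt * k2, dx, mu)
    k4 = rhs(u + dt * k3, dx, mu)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

n = 128
x = 2 * np.pi * np.arange(n) / n   # periodic domain of length 2*pi
dx = x[1] - x[0]
dt = 0.5 * dx / 0.5                # CFL = 0.5 with convection speed u = 1/2
u = np.full(n, 0.5)                # setup B1: constant base flow
for _ in range(256):               # 256 steps cover about one convective length
    u = rk4_step(u, dt, dx, mu=0.0)
```

For the constant base flow of setup B1 the spatial derivatives vanish, so the solution stays exactly $u = 1/2$ for all time, as stated in the text.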
To aid the discussion of the results, the corresponding adjoint of \eqref{eq_burgers} is analytically derived as
\begin{equation}
\partial_t u^* + u_0 \partial_x u^* = -\mu \partial_x^2 u^* \label{eq_burgersAdjoint}
\end{equation}
with $u^*$ as adjoint variable.
All adjoint computations, either based on the proposed method or on an analytical derivation, are initialized by means of a Gaussian disturbance of form
\begin{equation}
u^*(x,t_{\mathrm{end}}) = \frac{1}{2} \cdot \exp\left(-\frac{(x - x_0)^2}{(15 \Delta x)^2} \right),
\end{equation}
with $x_0$ as center of the computational domain and the grid spacing $\Delta x$.
This initial condition is set at the end of the computational time $t_{\mathrm{end}}$, as the adjoint is integrated backwards in time.
This mimics an action of a non-zero source term $g$ in the adjoint equation at the last time step ($t_{\mathrm{end}}$).
The calculation plans for the different setups, required for the mode-based adjoint method, can be found in Tbl.~\ref{app_tbl_calcplans_burgers} found in the appendix.
\paragraph{B1 - Constant base flow}
The first setup of the Burgers equation is given by the initial condition of the primal problem $u(x,t_0) = 1/2$, which is thereby the solution for all time.
Thus, the primal and the adjoint equation reduce to a simple, constant transport, as can be seen from \eqref{eq_burgers} and \eqref{eq_burgersAdjoint}.
Likewise, the calculation plan consists only of the current state and the obtained result, since both are thereby represented by $V$.
The error is of the order of the linearization step used for the reference solution and within the dynamic Arnoldi method, so that the difference is likely caused by the numerical linearization, see Fig.~\ref{fig_results_c1}.
\begin{figure}
\centering
\includegraphics[width = .49\textwidth]{./figures/_matlab_01_direct}
\includegraphics[width = .49\textwidth]{./figures/_matlab_01_adjoint_analytical}\\
\includegraphics[width = .49\textwidth]{./figures/_matlab_01_adjoint_lazy}
\includegraphics[width = .49\textwidth]{./figures/_matlab_01_adjoint_diff}
\caption{B1 - constant primal flow solution with $u(x,t) = 1/2$ (top-left) and the corresponding analytical adjoint solution $u^*_{\mathrm{analytical}}$ (top-right). Mode-based adjoint solution $u^*_{\mathrm{mode-based}}$ (bottom-left) and difference $\Delta u^* = u^*_{\mathrm{mode-based}} - u^*_{\mathrm{analytical}}$ (bottom-right).
\label{fig_results_c1}
}
\end{figure}
\paragraph{B2 - Unsteady base flow}
For this setup the primal initial condition is chosen as $u(x,t_0) = 1/2 + 1/20 \sin(x)$, leading to an unsteady flow solution.
The calculation plan is slightly longer and includes the right-hand-side (RHS) of the adjoint at the previous step $t_{i+1}$, calculated before.
This is hardly surprising, since the RHS changes only little from time step to time step and helps to construct the new RHS for time step $t_i$.
However, a larger deviation between mode-based and analytical solution is found.
The relative error is about 1\% with respect to the analytical reference solution, see Fig.~\ref{fig_results_c2}.
\begin{figure}
\centering
\includegraphics[width = .49\textwidth]{./figures/_matlab_02_direct}
\includegraphics[width = .49\textwidth]{./figures/_matlab_02_adjoint_analytical}\\
\includegraphics[width = .49\textwidth]{./figures/_matlab_02_adjoint_lazy}
\includegraphics[width = .49\textwidth]{./figures/_matlab_02_adjoint_diff}
\caption{B2 - unsteady primal flow solution (top-left) and the corresponding analytical adjoint solution $u^*_{\mathrm{analytical}}$ (top-right). Mode-based adjoint solution $u^*_{\mathrm{mode-based}}$ (bottom-left) and difference $\Delta u^* = u^*_{\mathrm{mode-based}} - u^*_{\mathrm{analytical}}$ (bottom-right).
\label{fig_results_c2}
}
\end{figure}
Note that for the first time step of the adjoint computation, where no previously computed RHS is available, another calculation plan is needed.
However, this is a minor problem since it is possible to find a larger plan, which is sufficient.
The increased demand on computational time is negligible as this plan is just needed once for the first adjoint time step.
It is also important to note that the procedure is independent of the used spatial resolution.
Using the same calculation plan as before, essentially the same errors are found for double and quadruple the spatial resolution, see Fig.~\ref{fig_results_c2_2x_4x}.
\begin{figure}
\centering
\includegraphics[width = .49\textwidth]{./figures/_matlab_02_adjoint_diff_2x}
\includegraphics[width = .49\textwidth]{./figures/_matlab_02_adjoint_diff_4x}
\caption{B2 -
Difference between mode-based and analytical adjoint solution $\Delta u^* = u^*_{\mathrm{mode-based}} - u^*_{\mathrm{analytical}}$ for setup B2 using $2 \times 128$ (left) and $4 \times 128$ (right) grid points for the spatial discretization.
\label{fig_results_c2_2x_4x}
}
\end{figure}
\paragraph{B3 - Unsteady base flow with friction}
This setup is dedicated to show the applicability of the mode-based adjoint method if the considered equation includes a second derivative $\partial_x^2$, e.g.~the friction term in \eqref{eq_burgers}.
Here, the friction constant is chosen as $\mu = 7.5\cdot 10^{-3}$.
The principal problem is that the corresponding friction part of the operator keeps its sign in the analytical adjoint equation, while the transport term changes sign.
Thus, the dynamic Arnoldi needs to separate the actions of the transport and the friction part.
If this is not possible with an acceptable number of modes, the approximation of the adjoint operator is poor.
A suitable alternative to handle such a case is to split the equation and treat the transport term and the friction term individually, using the same calculation plan.
Both resulting adjoint operators are simply added in order to construct the complete adjoint operator.
This procedure leads to errors on the level of case (B2), see Fig.~\ref{fig_results_c3}.
\begin{figure}
\centering
\includegraphics[width = .49\textwidth]{./figures/_matlab_03_direct}
\includegraphics[width = .49\textwidth]{./figures/_matlab_03_adjoint_analytical}\\
\includegraphics[width = .49\textwidth]{./figures/_matlab_03_adjoint_lazy}
\includegraphics[width = .49\textwidth]{./figures/_matlab_03_adjoint_diff}
\caption{B3 - unsteady primal flow solution with friction (top-left) and the corresponding analytical adjoint solution $u^*_{\mathrm{analytical}}$ (top-right). Mode-based adjoint solution $u^*_{\mathrm{mode-based}}$ (bottom-left) and difference $\Delta u^* = u^*_{\mathrm{mode-based}} - u^*_{\mathrm{analytical}}$ (bottom-right).
\label{fig_results_c3}
}
\end{figure}
\subsection{\textsc{Euler} Equations \label{ResultEuler}}
In the following, the more complex problem of coupled equations is discussed on the basis of the one-dimensional Euler equations
\begin{eqnarray}
\partial_t \rho + \partial_x (\rho u ) &=& 0 \no\\
\partial_t \rho u + \partial_x (\rho u u ) + \partial_x p &=& 0 \no\\
\partial_t p + \gamma \partial_x (p u ) - (\gamma -1) u \partial_x p &=& 0. \label{eq_euler}
\end{eqnarray}
Therein $\rho$ denotes the density, $u$ the velocity, $p$ the pressure and $\gamma$ the adiabatic exponent which is assumed to be $1.4$.
The equations are discretized again by means of a finite difference approach in space.
The computational domain of length $2\pi$ is resolved by $128$ equidistantly distributed points with periodic boundary conditions, if not stated otherwise.
Again fourth order differentiation schemes are employed.
The computational time span $t_0$ to $t_{\mathrm{end}}$ is separated into $171$ time steps using a CFL condition of $0.75$, based on the base flow velocity plus the speed of sound.
A fourth order Runge-Kutta scheme is employed for the time-wise integration.
Again, the Arnoldi-based adjoint and the reference adjoint are using the same discretization.
The corresponding adjoint equations of \eqref{eq_euler} are derived and discussed in \cite{Lemke2015}.
All adjoint computations, based on the proposed method or on an analytical derivation, are initialized by means of a Gaussian disturbance in $p^*$
\begin{equation}
p^*(x,t_{\mathrm{end}}) = 5 \cdot \exp\left(-\frac{(x - x_0)^2}{(10 \Delta x)^2} \right),
\end{equation}
with $x_0$ as center of the computational domain and the grid spacing $\Delta x$, at the end of the computational time $t_{\mathrm{end}}$.
For all setups the employed calculation plans, required for the mode-based adjoint method, can be found in Tbl.~\ref{app_tbl_calcplans_euler} in the appendix.
Within these calculation plans the RHS of the previous time step was not incorporated, since this produced smaller training plans; see the discussion in Sec.~\ref{sec_training_of_the_method}.
To allow a discussion of the calculation plans a linearized version of \eqref{eq_euler} is derived as
\begin{equation}
\partial_t \begin{pmatrix} \delta \rho \\ \delta u \\\delta p \end{pmatrix}
+
\partial_x \left(
\underbrace{
\begin{pmatrix}
u_0 & \rho_0 & 0 \\
0 & u_0 & 1/\rho_0 \\
0 & \gamma p_0 & u_0
\end{pmatrix}}_{=\mathcal{A}}
\begin{pmatrix} \delta \rho \\\delta u \\\delta p \end{pmatrix}
\right)
=
0, \label{eq_lin_euler}
\end{equation}
under the assumption of spatially constant base flows ($\partial_x \rho_0 = \partial_x u_0 = \partial_x p_0 = 0$).
Note that $\mathcal{A}$ is not the desired operator $A$, which includes spatial discretization and possible boundary treatment.
\paragraph{E1 - No base flow}
For the first setup a flow at rest condition with $\rho(x,t_0) = 1$, $u(x,t_0) = 0$ and $p(x,t) = 1.5$ is chosen.
Thus, the Euler equations reduce to purely acoustic equations.
One might expect that this is trivial, since the acoustic equations are known to be (with an additional rescaling) self-adjoint.
However, the structure of $\mathcal{A}$
\begin{equation}
\mathcal{A} =
\begin{pmatrix}
0 & \rho_0 & 0 \\
0 & 0 & 1/\rho_0 \\
0 & \gamma p_0 & 0
\end{pmatrix}
\qquad
\mathcal{A}^T =
\begin{pmatrix}
0 & 0 & 0 \\
\rho_0 & 0 & \gamma p_0 \\
0 & 1/\rho_0 & 0
\end{pmatrix}
\label{eq_lin_euler_no_base}
\end{equation}
reveals that $\delta \rho$ is driven by $\delta u$, but the opposite does not hold.
The adjoint equation of \eqref{eq_lin_euler_no_base} is given by
\begin{equation}
\partial_t q^* + \partial_x \mathcal{A}^T q^* = \partial_t q^* + \mathcal{A}^T \partial_x q^* = 0
\end{equation}
with $q^* = \left[\rho^*, u^*, p^* \right]^T $ as adjoint variable and using that $q_0 = [\rho_0, u_0, p_0]$ is constant in space.
The matrix $\mathcal{A}^T$ has a zero first row, reflecting that changes in $\rho$ have no effect on the other quantities whatsoever.
This reveals an important structural feature of systems with multiple coupled equations: one variable influences a second one while there is no (or a structurally different) feedback.
This poses a severe difficulty for the classical Arnoldi or block-Arnoldi.
As can easily be observed, the vectors created by iterating the matrix $\mathcal{A}$ reproduce the same entries for $\rho$ and $u$ up to a factor.
Applying the Arnoldi directly to $\mathcal{A}^T$ would create structurally different vectors with a zero entry for $\rho$.
Therefore, it is in general not possible to span the Krylov space of the adjoint operator by Krylov vectors\footnote{Neither the Krylov space of $A$ nor the one of $A^T$ span the full discrete vector space, at least for exact arithmetic.} of $\mathcal{A}$.
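The structural mismatch between the Krylov vectors of $\mathcal{A}$ and those of $\mathcal{A}^T$ can be illustrated numerically. The sketch below uses the flow-at-rest matrix from \eqref{eq_lin_euler_no_base} with placeholder base-state values:

```python
import numpy as np

rho0, p0, gamma = 1.0, 1.5, 1.4          # assumed base state, u0 = 0

A = np.array([[0.0, rho0,       0.0],
              [0.0, 0.0,        1.0 / rho0],
              [0.0, gamma * p0, 0.0]])

q = np.array([0.3, -1.2, 0.7])           # generic (delta rho, delta u, delta p)

# Iterating A: the rho- and p-entries stay proportional (factor gamma*p0/rho0),
# so the Krylov vectors of A share a fixed structure.
v = q.copy()
for _ in range(4):
    v = A @ v
    print(v[2], gamma * p0 / rho0 * v[0])    # the two values coincide

# Iterating A^T instead annihilates the rho-entry after one application:
print((A.T @ q)[0])                          # 0.0
```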
To circumvent this problem we allow the vectors in the dynamic Arnoldi to be modified, as already discussed in Sec.~\ref{sec_dam}.
For this case, the calculation plan employs the actual adjoint state vector and three modified vectors, which result from applications of the primal right-hand side, see Tbl.~\ref{app_tbl_calcplans_euler}.
It can easily be checked that, for this simple example, the calculation plan is able to reproduce the analytical solution.
The numerical results of the adjoint and the deviation with respect to the analytical solution are shown in Fig.~\ref{fig_results_c4}.
The adjoint solution is defined by acoustic characteristics in $u^*$ and $p^*$, while $\rho^*$ remains zero, as expected from the analytical solution.
The error is similar to the cases related to the Burgers equation.
\begin{figure}
\centering
\includegraphics[width = .49\textwidth]{./figures/_matlab_04_adjoint_lazy}
\includegraphics[width = .49\textwidth]{./figures/_matlab_04_adjoint_diff}
\caption{E1 - No base flow condition. Mode-based (mb) adjoint solution $u^*_{\mathrm{mb}}$ (left) and difference $\Delta q^* = q^*_{\mathrm{mb}} - q^*_{\mathrm{analytical}}$ (right) for all quantities.
\label{fig_results_c4}
}
\end{figure}
\paragraph{E2 - No base flow - high pressure}
The primal computational variables are usually of very different magnitude \cite{SesterhennMuellerThomann1999}, which can be problematic for the error control of iterative procedures and linearization.
This setup repeats the former case with physical units, leading to adjoint variables of different magnitudes.
The initial pressure is $p(x,t_0) = 10^5$ while the density is $\rho(x,t_0) = 1$.
A slightly different calculation plan was found by the training, see Tbl.~\ref{app_tbl_calcplans_euler}.
The results match those of the previous case in terms of accuracy, see Fig.~\ref{fig_results_c5}.
If the calculation plan of (E1) is used, the resulting adjoint solution is generally consistent with the analytical solution, but characterized by high-frequency fluctuations.
\begin{figure}
\centering
\includegraphics[width = .49\textwidth]{./figures/_matlab_05_adjoint_lazy}
\includegraphics[width = .49\textwidth]{./figures/_matlab_05_adjoint_diff}
\caption{E2 - No base flow condition with $p_0=10^5$. Mode-based (mb) adjoint solution $u^*_{\mathrm{mb}}$ (left) and difference $\Delta q^* = q^*_{\mathrm{mb}} - q^*_{\mathrm{analytical}}$ (right) for all quantities. Please note the different magnitudes of the adjoint quantities.
\label{fig_results_c5}
}
\end{figure}
\paragraph{E3 - Steady base flow - intermediate Ma number}
The primal initial conditions are defined by $\rho(x,t_0) = 1$, $p(x,t_0) = 1.5$ and $u(x,t_0) = c/3$, with $c = \sqrt{\gamma p_0/\rho_0}$ the speed of sound.
Within this setup a steady non-zero base flow velocity, which breaks the self-adjoint structure of the governing system, is analyzed.
Despite the base flow, the analytic solution yields $\rho^*(x,t) = 0$ for all time steps.
Considering the operator $\mathcal{A}$ in \eqref{eq_lin_euler}, this is difficult to realize, as $\delta \rho$, previously used solely for the representation of $u^*$ and $p^*$, now acts on its own.
In order to remove this dependency and allow for a zero solution in $\rho^*$ a modified input vector based on the current adjoint state is employed.
Using only the pressure part of $q^*$ as an additional vector for the dynamic Arnoldi enables the method to remove the entanglement, as the unwanted extra part in the modes can be represented as a difference; see Tbl.~\ref{app_tbl_calcplans_euler} for the calculation plan.
Using six base vectors, a similar quality of the adjoint solution, which is characterized by a skewness of the characteristics due to the base flow, is found, see Fig.~\ref{fig_results_c6}.
\begin{figure}
\centering
\includegraphics[width = .49\textwidth]{./figures/_matlab_06_adjoint_lazy}
\includegraphics[width = .49\textwidth]{./figures/_matlab_06_adjoint_diff}
\caption{E3 - Steady base flow - intermediate Ma number. Mode-based (mb) adjoint solution $u^*_{\mathrm{mb}}$ (left) and difference $\Delta q^* = q^*_{\mathrm{mb}} - q^*_{\mathrm{analytical}}$ (right) for all quantities.
Please note the skewness of the characteristics due to the presence of a base flow.
\label{fig_results_c6}
}
\end{figure}
\paragraph{E4 - No base flow - open boundaries}
For this setup the periodic boundary conditions are replaced by non-reflecting open boundaries, and the spatial discretization is of second order.
In more detail, characteristic boundary conditions \cite{Lemke2015,PoinsotLele1992} are combined with a quadratic sponge layer \cite{Mani2012} which acts on 10\% of the computational domain on both sides.
All other parameters of this setup correspond to (E2).
The presence of the damping sponge requires an additional number of modes.
The calculation plan results in $10$ calls of the primal right-hand side, see Tbl.~\ref{app_tbl_calcplans_euler}.
Figure \ref{fig_results_c7} shows that the quality of the resulting adjoint solution is similar to case (E2), at least before the pulses reach the boundaries.
Thereafter, only slight reflections are found in contrast to the analytic solution, see \cite{Lemke2015} for details on the adjoint sponge layer.
Please note that other boundary conditions, for example (no-)slip walls, might have to be treated individually.
As the present approach approximates the discrete adjoint operator of the governing equations (corresponding to the automatic differentiation approach), the same problems are expected to arise.
A similar treatment should be possible.
Further discussion can be found in \cite{GilesDutaMuellerPierce2003,GilesPierce1997}.
\begin{figure}
\centering
\includegraphics[width = .49\textwidth]{./figures/_matlab_07_adjoint_lazy}
\includegraphics[width = .49\textwidth]{./figures/_matlab_07_adjoint_diff}
\caption{E4 - No base flow - open boundaries. Mode-based (mb) adjoint solution $u^*_{\mathrm{mb}}$ (left) and difference $\Delta q^* = q^*_{\mathrm{mb}} - q^*_{\mathrm{analytical}}$ (right) for all quantities.
\label{fig_results_c7}
}
\end{figure}
\subsection{O1 - Noise Cancellation}
In order to demonstrate the applicability of the mode-based adjoint approach for optimization tasks, the previously discussed setup (E4) is modified.
The computational domain is extended to a total length of $L = 4\pi$ resolved by 256 equidistantly distributed points.
In total, 384 time steps are simulated with a CFL number of $0.75$.
The system is excited by means of a harmonic pressure source with a frequency of $f = 0.75$ Hz located at $x_s = L/4$.
The resulting flow field is characterized by acoustic waves as shown in Fig.~\ref{fig_results_c8} (top-left).
The overall target of the optimization is to minimize the integral objective
\begin{equation}
J = \iint \left( p(x,t) - p_{\mathrm{target}}\right)^2 \sigma_x ~\mathrm{d}x \, \mathrm{d}t
\end{equation}
by means of an adjoint-based adaptation of a source term $f_p$ in the right-hand-side of the pressure equation in \eqref{eq_euler}, with $p_{\mathrm{target}} = 1.5$ as target pressure and $\sigma_x$ a spatial weight, defined by a Gauss-smoothed step function located at $3/4 L$ of the computational domain length, see Fig.~\ref{fig_results_c8}.
According to \eqref{eq_adjoint_sensitivity} the adjoint of \eqref{eq_euler} provides the gradient of $J$ with respect to $f_p$.
From a human perspective the solution is trivial: in the middle region $\theta_x$ a source is created which annihilates the left-running acoustic wave.
However, several thousand degrees of freedom are adapted.
The objective is minimized iteratively.
Starting from an initial guess for $f_p(x,t) = 0$ the primal system is solved.
The adjoint system, driven by the term $g$, see \eqref{eq_adjoint_system}, is solved subsequently.
Based on the adjoint solution the force $f_p$ is adapted according to
\begin{align}
f_p^{n+1} & = f_p^n + \alpha \left( \frac{\delta J }{\delta f }\right) \theta_x \nonumber \\
&= f_p^n + \alpha p^* \theta_x
\end{align}
with a suitable fixed step width of $\alpha = 2.5$ and the Gauss-smoothed weight $\theta_x$ around $L/2$, which controls the location of the anti-sound source and reduces the adaptation to a source in $p$, see Fig.~\ref{fig_results_c8}.
With the updated forcing the primal system is solved again.
The procedure is repeated five times, once using the analytical adjoint solution and once using the mode-based approach.
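The iteration described above can be condensed into a short sketch. The following toy replaces the Euler solver by a generic linear primal map; \texttt{solve\_primal} and \texttt{solve\_adjoint} are hypothetical stand-ins, and the sign conventions are chosen such that the update descends on $J$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
S = rng.normal(size=(n, n)) / np.sqrt(n)        # toy linear primal map f -> p
p_target = np.full(n, 1.5)
theta = np.exp(-np.linspace(-3.0, 3.0, n) ** 2) # Gauss-smoothed weight theta_x

def solve_primal(f):
    # hypothetical stand-in for the forward solve
    return S @ f

def solve_adjoint(p):
    # hypothetical stand-in for the adjoint solve, driven by the
    # objective source g = -2 (p - p_target); returns p*
    return S.T @ (-2.0 * (p - p_target))

alpha, f = 0.05, np.zeros(n)                    # fixed step width, f_p = 0
J = []
for _ in range(50):                             # more iterations than the five
    p = solve_primal(f)                         # in the text, for a clear trend
    J.append(np.sum((p - p_target) ** 2))
    f = f + alpha * solve_adjoint(p) * theta    # f^{n+1} = f^n + alpha p* theta
print(J[0], J[-1])                              # the objective decreases
```

The step-width and weight play exactly the roles of $\alpha$ and $\theta_x$ in the update formula above.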
The resulting primal solutions, in which the noise is canceled out by the adapted $f_p$, are shown in Fig.~\ref{fig_results_c8} (right).
There are no identifiable differences between the two approaches.
Also the progress of the objective function with respect to the iteration is almost identical.
In both cases the objective is reduced by more than two orders of magnitude.
The barely visible deterioration of the convergence for the mode-based approach is negligible.
Please note that for the dynamic Arnoldi the same calculation plan as in (E4) is employed.
This is particularly remarkable because here, for the first time, a source term $g$ is present in the adjoint equation, and the plan was not trained for this particular optimization task.
\begin{figure}
\centering
\includegraphics[width = .49\textwidth]{./figures/_matlab_08_direct_no_opt}
\includegraphics[width = .49\textwidth]{./figures/_matlab_08_direct_analytical}\\
\includegraphics[width = .49\textwidth]{./figures/_matlab_08_setup_objective}
\includegraphics[width = .49\textwidth]{./figures/_matlab_08_direct_lazy}
\caption{O1 - Noise Cancellation. Solution of the primal system without optimization ($f_p(x,t) = 0$) (top-left) and primal solution after five iterations using the analytical adjoint (top-right).
Optimization setup and progress of the objective function normalized with respect to the first iteration (bottom-left) and primal solution after five iterations using the mode-based adjoint (bottom-right).
\label{fig_results_c8}
}
\end{figure}
\section{Calculation Plans \label{secCalcPlan}}
\begin{table}[h]
\paragraph{\textsc{Burgers} Equation \hfill \vspace{1em}}
\begin{tabular}[]{cccc}
\textbf{Case} & \textbf{input} & \textbf{item} & \textbf{map} \\
\hline
B1 & I & 1 & 1 \\
& P & 1 & 1 \\[.5em]
B2 & I & 1 & 1 \\
& I & 3 & 1 \\
& P & 2 & 1 \\
& P & 3 & 1 \\[.5em]
B3 & I & 1 & 1 \\
& I & 3 & 1 \\
& P & 1 & 1 \\
& P & 3 & 1 \\
& P & 2 & 1 \\
& P & 5 & 1 \\
& P & 6 & 1 \\
\end{tabular}
\caption{Calculation plans for the Burgers equation tests using a quality criterion of $10^{-5}$ evaluated over the full time span using every fifth step.\label{app_tbl_calcplans_burgers}}
\end{table}
\vfill\eject
~\vspace{1em}
\begin{table}[h]
\paragraph{\textsc{Euler} Equations \hfill \vspace{1em}}
\begin{tabular}[]{cccc}
\textbf{Case} & \textbf{input} & \textbf{item} & \textbf{map} \\
\hline
E1 & I & 1 & 1 2 3 \\
& P & 1 & 0 2 0 \\
& P & 1 & 0 0 3 \\
& P & 1 & 0 0 1 \\[.5em]
E2 & I & 1 & 1 2 3 \\
& I & 1 & 0 2 0 \\
& P & 1 & 0 0 1 \\
& P & 2 & 0 2 0 \\[.5em]
E3 & I & 1 & 1 2 3 \\
& P & 1 & 0 2 0 \\
& P & 1 & 0 0 3 \\
& P & 1 & 0 1 0 \\
& P & 1 & 0 0 1 \\
& I & 1 & 0 0 3 \\[.5em]
E4/O1 & I & 1 & 1 2 3 \\
& P & 1 & 0 0 1 \\
& P & 1 & 0 2 0 \\
& P & 1 & 0 0 3 \\
& P & 2 & 0 2 0 \\
& I & 1 & 0 0 3 \\
& P & 3 & 0 2 0 \\
& P & 4 & 0 2 0 \\
& P & 5 & 0 2 0 \\
& P & 6 & 0 2 0 \\
\end{tabular}
\caption{Calculation plans for the Euler equations tests using a quality criterion of $10^{-5}$ evaluated over the full time span using every fifth step.\label{app_tbl_calcplans_euler}}
\end{table}
\subsection{Dynamic Arnoldi Method}
\label{sec_dam}
We find further down that neither the Arnoldi nor the block-Arnoldi method is flexible enough for the application in mind.
To allow a direct intervention, we now define the so-called \emph{Dynamic Arnoldi Method} (DAM).
It allows new vectors to be chosen freely in each step to expand the mode set $V$.
These vectors are chosen from some initial set or taken from previously calculated applications of $A$, and can also be modified before the application of $A$.
This choice is governed by a set of predefined rules termed the \emph{calculation plan}.
By selecting suitable {calculation plans} one recovers the classical Arnoldi or the block-Arnoldi method.
At the core of the DAM is an update step of the relation
\begin{equation}
P^m + V^m \bar H^m = A V^m.
\end{equation}
The operator $A$ has dimension $(n,n)$, the matrices $P^m, V^m$ have dimension $(n,m)$, and $\bar H^m$ has dimension $(m,m)$.
The matrix $\bar H$ is \emph{not} the same as $H$ introduced above; the connection is provided further down.
The $(n,m)$ matrix $P$ was introduced to act as a pile for results which, at the time of the update, cannot be described by $V$.
It extends the residuum $r$ defined above.
The update is done by adding a vector $v$ to $V$ and expanding $P$ and $\bar H$:\\
\paragraph{DAM Update Routine}~\newline
\begin{algorithm}[H]
\KwData{ $q^{m+1} , V^m,\, \bar H^m, \, P^m ;~A$ }
\KwResult{ $V^{m+1}$, $\bar H^{m+1}$, $P^{m+1}$ }
\# orthogonalize input \;
$\alpha = (V^{m})^T\cdot q^{m+1} $ \;
$ v^{m+1} = q^{m+1} - (V^{m})\cdot\alpha $\;
\eIf{ $|v^{m+1}|_2 > \epsilon $ }{
\# input has a linearly independent part, append to $V$, apply $A$\;
$V^{m+1} = \mathrm{appendColumn} (V^m, v^{m+1} )$\;
$ w = A \cdot v^{m+1} $ \;
$ \beta = (V^{m+1})^T\cdot w $ \;
$ w = w - (V^{m+1})\cdot\beta $\;
}{
\# input is linearly dependent, append zero column \;
$V^{m+1} = \mathrm{appendColumn} (V^m, 0 )$\;
$w = 0,\; \beta = 0 $ \# no need to do the matrix multiplication\;
}
\vspace{1em}
$\bar H ^{m+1} = \mathrm{appendColumn} ( \bar H^m, \beta )$\;
$ P ^{m+1} = \mathrm{appendColumn} ( P^m, w )$\;
~\\
\caption{The update routine part of the dynamic Arnoldi.}
\end{algorithm}~\\
This algorithm makes it possible to prescribe an arbitrary sequence of test vectors $q^{m}$ to be multiplied with the matrix $A$, and to store the results in $V^m$, the pile $P^m$ and $\bar H^m$.
The vector $q^{m+1}$ is termed the \emph{test vector} in the following.
\noindent Some remarks are worthwhile:
The operator $A$ is later defined purely in terms of applications of the RHS on a vector.
The method is a matrix-free method building on the application $A\cdot q$ only.
The pile $P$ is in general not orthogonal to $V$.
If, for example, a vector from $P$ is used as a new test vector, it is added to $V$, but no action is taken to remove it from the pile $P$ in the update step.
It could be included in each step, but for simplicity we remove the part of $P$ resolved by $V$ (if necessary) only in a final step of the main routine, see below.
Further, zero modes are added to $V$, which might seem unnecessary, since they do not enlarge the space spanned by $V$.
However, dropping those entries would change the position of results in $P$, which would complicate the calculation plan discussed below.
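A minimal NumPy sketch of the update routine in Algorithm 1 could look as follows. The pseudo-code leaves the normalization of the appended mode implicit; keeping the modes orthonormal is an assumption of this sketch:

```python
import numpy as np

def dam_update(q, V, Hbar, P, A, eps=1e-12):
    """One DAM update step; V and P are lists of length-n vectors, Hbar a
    list of coefficient columns, and A any callable implementing v -> A v."""
    # orthogonalize the test vector against the current mode set
    v = q - sum(np.dot(b, q) * b for b in V) if V else q.copy()
    nrm = np.linalg.norm(v)
    if nrm > eps:
        # input has a linearly independent part: append to V, apply A
        v = v / nrm                          # keep the modes orthonormal
        V.append(v)
        w = A(v)
        beta = np.array([np.dot(b, w) for b in V])
        w = w - sum(bi * b for bi, b in zip(beta, V))
    else:
        # input is linearly dependent: append a zero column, skip A
        V.append(np.zeros_like(q))
        w, beta = np.zeros_like(q), np.zeros(len(V))
    Hbar.append(beta)                        # new column of H-bar
    P.append(w)                              # pile: parts not described by V
    return V, Hbar, P

# example with a small matrix: the pile receives the part of A v
# that is not yet described by the mode set V
A_mat = np.array([[2.0, 1.0], [0.0, 3.0]])
V, Hbar, P = dam_update(np.array([0.0, 1.0]), [], [], [], lambda x: A_mat @ x)
print(P[0])    # [1. 0.], orthogonal to the single mode [0, 1]
```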
The \emph{DAM Update Routine} is at the heart of the dynamic Arnoldi method, which prescribes a sequence of test vectors $q^m$ based on the input vectors and the pile vectors.
A set of $l$ input vectors is provided as an $(n,l)$ matrix $U$.
To control the DAM we define the calculation plan $\mathcal{C} $.
It consists of lines of the form
\begin{equation}
\mathcal{C}^l = ( source ,\; index ,\; mod ).
\end{equation}
In this work the field $source$ contains the value 'I' for taking the next test vector $q^{m+1}$ from the input $U$ and 'P' for taking it from the pile $P$.
The $index$ is simply the number of the vector within the input or the pile.
The modification identifier $mod$ is specific to the application of this report.
It is a mask which allows to shuffle the different fields within a test vector.
For example, a compressible flow in one dimension may be described by the three fields density, velocity and pressure $(\rho,\, u,\, p)$;
a modifier entry $M: (\rho,\, u,\, p) { \overset {(2,1,3)} \longrightarrow } (u, \rho , p ) $ would exchange the first and the second field, i.e.~the density and the velocity.
This will prove useful later.
An entry '0' will simply set the corresponding field to zero. As an example, $(\rho,\, u,\, p) { \overset {(2,0,0)} \longrightarrow } (u, 0 , 0 )$ would write the second field to the first and set the others to zero.
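Such a modification mask can be sketched for a stacked state vector, with an assumed layout of three equally sized fields:

```python
import numpy as np

def apply_mod(q, mod, nfields=3):
    """Apply a modification mask to a state vector holding stacked fields
    (rho, u, p): mod[i] = j copies field j into slot i, and 0 zeroes it."""
    fields = np.split(q, nfields)
    out = [np.zeros_like(fields[0]) if j == 0 else fields[j - 1] for j in mod]
    return np.concatenate(out)

# toy state with 4 grid points per field
q = np.concatenate([np.full(4, 1.0),      # rho
                    np.full(4, 2.0),      # u
                    np.full(4, 3.0)])     # p

print(apply_mod(q, (2, 1, 3)))  # exchanges the density and velocity fields
print(apply_mod(q, (2, 0, 0)))  # u into the first field, the rest zeroed
```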
These modifications are discussed further in Sec.~\ref{ResultEuler}.
The DAM is given by
\paragraph{Dynamic Arnoldi Method}~\newline
\begin{algorithm}[H]
\KwData{ $ U ;\, \mathcal{C}, \, A$ }
\KwResult{ $V^{m}$, $\bar H^{m}$, $P^{m}$ }
\# initialize
$V^{0},\, \bar H^{0}, \, P^{0}$ = empty \;
\For{ $C^l = l^{th}$ line in all lines of $\mathcal{C}$ }{
\# unpack information \;
$ source, index, mod = C^l$ \;
\# choose source \;
\Switch{source}{
\Case{I}{
$q = U^{index}$ \;
}
\Case{P}{
$q = P^{index}$ \;
}
}
\# modify as described in text \;
$ q { \overset {mod} \longrightarrow } q^m $\;
\# update by Algorithm 1
$ [ V^{m}, \bar H^{m} , P^{m} ]= \mathrm{DAM\_Update}(q^m , V^{m-1},\, \bar H^{m-1}, \, P^{m-1} ; A ) $
}
\# calculate $H$ matrix by adding parts of $P$ described by $V$
$H^m = \bar H^m + (V^m)^T P^m $ ;\\[1em]
\caption{ The dynamic Arnoldi Method }
\end{algorithm}~\\
For a given calculation plan this allows a specific approximation of the matrix $A$ to be created.
The standard Arnoldi is obtained by simply using the last result of the pile as the next test vector $q^{m+1}$.
The block-Arnoldi is recovered by providing a set of input vectors in sequence, followed by using the resulting pile vectors repeatedly as the next test vectors.
\subsection{Linearization of the RHS \label{subsecLinOfRHS}}
All equations considered further down are non-linear.
The linearization is derived as a Fr\'{e}chet derivative,
\begin{equation}
A \cdot q \approx \dfrac{\rhs \left(q_0 + \epsilon q \right) - \rhs \left(q_0 \right)}{\epsilon}. \label{eq_frechet_of_f}
\end{equation}
The choice of $\epsilon$ influences the approximation quality; here, we choose $\epsilon = \sqrt{\epsilon_{\mathrm{machine}}}$, where $\epsilon_{\mathrm{machine}}$ is the machine precision of the computer used.
This formula can in principle be also used to calculate all elements of the desired matrix $A$ by setting $q$ to all unit vectors of the discrete space dimension \cite{LemkeCaiReissPitschSesterhenn2018}.
However, this becomes prohibitively expensive for large systems.
It is, however, used in the numerical examples to obtain a reference solution.
It is also used in the current version of the training of the mode-based adjoint method.
We refer to it as the expensive method.
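The finite-difference linearization \eqref{eq_frechet_of_f} and the expensive column-by-column assembly can be sketched for an assumed toy nonlinear RHS (a Burgers-like periodic stencil, chosen only for illustration):

```python
import numpy as np

eps = np.sqrt(np.finfo(float).eps)   # epsilon = sqrt(machine precision)

def rhs(q):
    # assumed toy nonlinear RHS (Burgers-like, periodic central difference);
    # any nonlinear rhs(q) could stand here
    return -q * (np.roll(q, -1) - np.roll(q, 1))

def apply_A(q, q0):
    # matrix-free linearization A . q via the Frechet finite difference
    return (rhs(q0 + eps * q) - rhs(q0)) / eps

# the "expensive method": build all of A column by column from unit vectors
q0 = np.sin(np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False))
n = q0.size
A = np.column_stack([apply_A(e, q0) for e in np.eye(n)])

# the dense matrix agrees with the matrix-free application
q = np.cos(np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False))
print(np.max(np.abs(A @ q - apply_A(q, q0))))   # small, O(sqrt(machine eps))
```

The dense assembly needs $n$ RHS evaluations, which is exactly what makes it prohibitive for large systems.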
\subsection{Training of the method \label{sec_training_of_the_method}}
The calculation plan is created by a training procedure.
It aims at approximating the adjoint operator $A^T$ with a minimal number of evaluations of the primal RHS.
For this, the result of the adjoint RHS (\ref{adjointRhsDAM}) for different training-plan modifications is compared with a reference result.
The latter is, for now, created by the expensive method described in the previous section.
The comparison is done for a representative set of $\qs$, since using only one, the calculation plan might reflect a specific property of that state which is not valid in general.
Here, we use a set of simulation snapshots created with the expensive method.
In detail, the training plan is initialized with one line applying $A$ to an unmodified $\qs$, which is a necessary mode as discussed at the start of Section~\ref{secEldidec}.
Then, in a greedy approach, all possible next training-plan lines are tried, and the one which reduces the combined error for the whole $\qs$-set the most is appended.
The last step is repeated until the error is below a prescribed value.
Different resulting calculation plans for the test cases are given in appendix \ref{secCalcPlan}.
The order of steps prescribed by the calculation plan has an impact on the error reduction of a given step, since parts which are represented by previous test vectors do not contribute to this error reduction anymore.
Also the entries in $P$ change, since each new entry is orthogonalized against $V$ at the step of its calculation.
This introduces a strong non-linearity by which the greedy strategy can become suboptimal.
Indeed, we observed for simple cases, where the calculation plan can be constructed by hand, that the training delivers an inferior solution, i.e.~one with more than the minimal number of evaluations of the RHS\footnote{A simple example is given by a case where one mode approximates $\dotqs$ very well and two other modes less well, but the latter two combine to a perfect representation.
The strategy will first pick the first mode and then the next two, which would suffice alone.}.
To reduce this non-linearity it was chosen not to orthogonalize the pile vectors (by introducing $P=\bar P R$) or to remove the parts contained in $V$, so that the pile vectors are less dependent on the history of the updates.
We thereby observed a much more robust (albeit not optimal) training outcome.
Other strategies like genetic algorithms or Monte Carlo tree search \cite{BrownePowleyWhitehouseLucasCowlingRohlfshagenTavenerPerezSamothrakisColton2012} are likely to improve this but are out of scope of this paper.
The reference used in the training is derived in an expensive manner.
This might still be possible for practical problems, since the training can be done for very small systems which still capture the system dynamics, yielding a training plan which is assumed to work independently of the discretization size.
In the end, the training plan should reflect the mathematical structure of the problem.
If the smallest system is too big for this approach one could try, in a Monte Carlo fashion, small disturbances at a set of random locations and integrate the original equation forward in time.
The influence of these disturbances should be correctly predicted by the numerical adjoint, which then replaces the full reference solution in the training.
This, again, is out of the scope of this report, which is dedicated to the principal idea of the numerical adjoint.
\section{Introduction}
The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), {\it e.g.}, Freebase~\cite{bollacker2008:FreeBase}, DBpedia \cite{lehmann2014:DBpedia}, and Google's Knowledge Vault~\cite{dong2014:KnowledgeVault}. A typical KG is a multi-relational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form ({\it head entity}, {\it relation}, {\it tail entity}). Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks~\cite{wasserman2015:WSD,hoffmann2011:IE-Freebase,yang2017:KBLSTM}.
Recently, the concept of {\it knowledge graph embedding} has been presented and quickly become a hot research topic. The key idea there is to embed components of a KG ({\it i.e.}, entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG. Early works on this topic learned such vectorial representations ({\it i.e.}, embeddings) via just simple models developed over KG triples \cite{bordes2011:SE,bordes2013:TransE,jenatton2012:LFM,nickel2011:RESCAL}. Recent attempts focused on either designing more complicated triple scoring models
\cite{socher2013:NTN,bordes2014:SME,wang2014:TransH,lin2015:TransR,xiao2016:TransG,nickel2016:Hole,trouillon2016:ComplEx,liu2017:ANALOGY}, or incorporating extra information beyond KG triples \cite{chang2014:TRESCAL,zhong2015:text,lin2015:PTransE,neelakantan2015:CompositionalVSM,guo2015:SSE,luo2015:path,xie2016:DKRL,xie2016:TKRL,xiao2017:SSP}. See \cite{wang2017:review} for a thorough review.
This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task. Specifically, we examine two types of constraints: (i) {\it non-negativity constraints} on entity representations and (ii) {\it approximate entailment constraints} over relation representations. By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability~\cite{murphy2012:NNSE}. By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction \cite{rocktaschel2015:EmbedLogic,guo2016:KALE}. These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.
Our work has some similarities to those which integrate logical background knowledge into KG embedding~\cite{rocktaschel2015:EmbedLogic,wang2015:ERInfer,guo2016:KALE,guo2018:RUGE}. Most of such works, however, need grounding of first-order logic rules. The grounding process could be time and space inefficient especially for complicated rules. To avoid grounding, \citet{demeester2016:LiftedRule} tried to model rules using only relation representations. But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities. Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create. \citet{minervini2017:ASR} proposed adversarial training which can integrate first-order logic rules without grounding. But their work, again, focuses on strict, hard rules. \citet{minervini2017:EquivalenceInversion} tried to handle uncertainty of rules. But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.
Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, {\it i.e.}, non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.
We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well. Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability. The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.
The remainder of this paper is organized as follows. We first review related work in Section~\ref{sec:RelatedWork}, and then detail our approach in Section~\ref{sec:Approach}. Experiments and results are reported in Section~\ref{sec:Experiments}, followed by concluding remarks in Section~\ref{sec:Conclusion}.
\section{Related Work}\label{sec:RelatedWork}
Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a. KG embedding. Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, {\it e.g.}, TransE which takes relations as translating operations between head and tail entities \cite{bordes2013:TransE}, and RESCAL which models triples through bilinear operations over entity and relation representations \cite{nickel2011:RESCAL}. Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, {\it e.g.}, the TransE extensions~\cite{wang2014:TransH,lin2015:TransR,ji2015:TransD}, the RESCAL extensions \cite{yang2015:Bilinear,nickel2016:Hole,trouillon2016:ComplEx,liu2017:ANALOGY}, and the (deep) neural network models \cite{socher2013:NTN,bordes2014:SME,shi2017:ProjE,schlichtkrull2017:R-GCN,dettmers2017:ConvE}; (ii) those which tried to integrate extra information beyond triples, {\it e.g.}, entity types~\cite{guo2015:SSE,xie2016:TKRL}, relation paths \cite{neelakantan2015:CompositionalVSM,lin2015:PTransE}, and textual descriptions \cite{xie2016:DKRL,xiao2017:SSP}. Please refer to \cite{nickel2016:review,wang2017:review} for a thorough review of these techniques. In this paper, we show the potential of using very simple constraints ({\it i.e.}, non-negativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.
A line of research related to ours is KG embedding with logical background knowledge incorporated \cite{rocktaschel2015:EmbedLogic,wang2015:ERInfer,guo2016:KALE,guo2018:RUGE}. But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules. To avoid grounding, \citet{demeester2016:LiftedRule} proposed lifted rule injection, and \citet{minervini2017:ASR} investigated adversarial training. Both works, however, can only handle strict, hard rules which usually require extensive effort to create. \citet{minervini2017:EquivalenceInversion} tried to handle uncertainty of background knowledge. But their work considers only equivalence and inversion between relations, which might not always be available in a given KG. Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding. And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.
Non-negativity has long been a subject studied in various research fields. Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability \cite{lee1999:NMF}. In many NLP-related tasks, non-negativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition \cite{murphy2012:NNSE,luo2015:NN,fyshe2015:NN}. In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.
\section{Our Approach}\label{sec:Approach}
This section presents our approach. We first introduce a basic embedding technique to model triples in a given KG (\S~\ref{sebsec:ComplEx}). Then we discuss the non-negativity constraints over entity representations (\S~\ref{sebsec:Non-negativity}) and the approximate entailment constraints over relation representations (\S~\ref{subsec:Entailment}). And finally we present the overall model (\S~\ref{subsec:OverallModel}).
\subsection{A Basic Embedding Model}\label{sebsec:ComplEx}
We choose ComplEx~\cite{trouillon2016:ComplEx} as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance. Specifically, suppose we are given a KG containing a set of triples $\mathcal{O}=\{(e_i,r_k,e_j)\}$, with each triple composed of two entities $e_i, e_j \in \mathcal{E}$ and their relation $r_k \in \mathcal{R}$. Here $\mathcal{E}$ is the set of entities and $\mathcal{R}$ the set of relations. ComplEx then represents each entity $e \in \mathcal{E}$ as a complex-valued vector $\mathbf{e} \in \mathbb{C}^d$, and each relation $r \in \mathcal{R}$ as a complex-valued vector $\mathbf{r} \in \mathbb{C}^d$, where $d$ is the dimensionality of the embedding space. Each $\mathbf{x} \in \mathbb{C}^d$ consists of a real vector component $\textrm{Re}(\mathbf{x})$ and an imaginary vector component $\textrm{Im}(\mathbf{x})$, {\it i.e.}, $\mathbf{x}=\textrm{Re}(\mathbf{x}) + i\textrm{Im}(\mathbf{x})$. For any given triple $(e_i,r_k,e_j) \in \mathcal{E} \times \mathcal{R} \times \mathcal{E}$, a multi-linear dot product is used to score that triple, {\it i.e.},
\begin{align}\label{eq:ComplEx}
\phi(e_i,r_k,e_j) &\triangleq \textrm{Re}(\langle \mathbf{e}_i, \mathbf{r}_k, \bar{\mathbf{e}}_j\rangle) \notag \\
&\triangleq \textrm{Re}(\sum\nolimits_\ell [\mathbf{e}_i]_\ell [\mathbf{r}_k]_\ell [\bar{\mathbf{e}}_j]_\ell),
\end{align}
where $\mathbf{e}_i, \mathbf{r}_k, \mathbf{e}_j \in \mathbb{C}^d$ are the vectorial representations associated with $e_i, r_k, e_j$, respectively; $\bar{\mathbf{e}}_j$ is the conjugate of $\mathbf{e}_j$; $[\cdot]_\ell$ is the $\ell$-th entry of a vector; and $\textrm{Re}(\cdot)$ means taking the real part of a complex value. Triples with higher $\phi(\cdot,\cdot,\cdot)$ scores are more likely to be true. Owing to the asymmetry of this scoring function, {\it i.e.}, $\phi(e_i,r_k,e_j) \neq \phi(e_j,r_k,e_i)$, ComplEx can effectively handle asymmetric relations~\cite{trouillon2016:ComplEx}.
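As a concrete illustration, the scoring function of Eq.~(\ref{eq:ComplEx}) can be sketched in a few lines of NumPy. The function and variable names are ours, and the embeddings below are arbitrary toy values, not learned ones:

```python
import numpy as np

def phi(e_i, r_k, e_j):
    """Multi-linear dot product of Eq. (1): Re(<e_i, r_k, conj(e_j)>)."""
    return float(np.real(np.sum(e_i * r_k * np.conj(e_j))))

# Toy embeddings with d = 2 (arbitrary values, for illustration only).
e_i = np.array([0.9 + 0.1j, 0.2 + 0.8j])
r_k = np.array([1.0 + 0.5j, 0.3 - 0.2j])
e_j = np.array([0.7 - 0.3j, 0.1 + 0.6j])

# The score is asymmetric in general: swapping head and tail changes it,
# which is what lets ComplEx handle asymmetric relations.
assert not np.isclose(phi(e_i, r_k, e_j), phi(e_j, r_k, e_i))
```

Note that when a relation embedding is purely real ($\textrm{Im}(\mathbf{r}_k)=\mathbf{0}$), the score becomes symmetric, so the imaginary components are exactly what encodes directionality.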
\subsection{Non-negativity of Entity Representations}\label{sebsec:Non-negativity}
On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations. In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions. In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation. However, as pointed out by \citet{murphy2012:NNSE}, it would be uneconomical to store all negative properties of an entity or a concept. For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fish, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.
Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, so that only positive properties are stored in these representations. To better compare different entities on the same scale, we further require entity representations to stay within the hypercube $[0,1]^d$, as approximately Boolean embeddings~\cite{kruszewski2015:BooleanEmbedding}, {\it i.e.},
\begin{equation}\label{eq:Non-negativity}
\mathbf{0} \leq \textrm{Re}(\mathbf{e}), \textrm{Im}(\mathbf{e}) \leq \mathbf{1}, \quad \forall e \in \mathcal{E},
\end{equation}
where $\mathbf{e}\in\mathbb{C}^d$ is the representation for entity $e\in\mathcal{E}$, with its real and imaginary components denoted by $\textrm{Re}(\mathbf{e}), \textrm{Im}(\mathbf{e}) \in \mathbb{R}^d$; $\mathbf{0}$ and $\mathbf{1}$ are $d$-dimensional vectors with all their entries being $0$ or $1$; and $\geq, \leq, =$ denote the entry-wise comparisons throughout the paper whenever applicable. As shown by~\citet{lee1999:NMF}, non-negativity, in most cases, will further induce sparsity and interpretability.
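In practice, these box constraints can be enforced by a simple entry-wise truncation of each entity embedding (as we do after every gradient step; see \S~\ref{subsec:OverallModel}). A minimal sketch, with toy values of our own choosing:

```python
import numpy as np

def project_entity(e):
    """Clip real and imaginary parts into [0, 1], enforcing Eq. (2)."""
    return np.clip(e.real, 0.0, 1.0) + 1j * np.clip(e.imag, 0.0, 1.0)

e = np.array([1.3 - 0.2j, -0.5 + 0.7j])  # violates the constraints
p = project_entity(e)
assert np.allclose(p, [1.0 + 0.0j, 0.0 + 0.7j])
assert (p.real >= 0).all() and (p.real <= 1).all()
assert (p.imag >= 0).all() and (p.imag <= 1).all()
```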
\subsection{Approximate Entailment for Relations}\label{subsec:Entailment}
Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations. By approximate entailment, we mean an ordered pair of relations in which the former approximately entails the latter, {\it e.g.}, \texttt{\small BornInCountry} and \texttt{\small Nationality}, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country. Each such relation pair is associated with a weight to indicate the confidence level of entailment. A larger weight stands for a higher level of confidence. We denote by $r_p \xrightarrow{\lambda} r_q$ the approximate entailment between relations $r_p$ and $r_q$, with confidence level $\lambda$. Such entailments can be derived automatically from a KG by modern rule mining systems \cite{galarraga2015:AMIE+}. Let $\mathcal{T}$ denote the set of all such approximate entailments derived beforehand.
Before diving into approximate entailment, we first explore the modeling of strict entailment, {\it i.e.}, entailment with infinite confidence level $\lambda=+\infty$. The strict entailment $r_p \rightarrow r_q$ states that if relation $r_p$ holds then relation $r_q$ must also hold. This entailment can be roughly modelled by requiring
\begin{equation}\label{eq:Implication}
\phi(e_i,r_p,e_j) \leq \phi(e_i,r_q,e_j), \quad \forall e_i,e_j\in\mathcal{E},
\end{equation}
where $\phi(\cdot,\cdot,\cdot)$ is the score for a triple predicted by the embedding model, defined by Eq.~(\ref{eq:ComplEx}). Eq.~(\ref{eq:Implication}) can be interpreted as follows: for any two entities $e_i$ and $e_j$, if $(e_i,r_p,e_j)$ is a true fact with a high score $\phi(e_i,r_p,e_j)$, then the triple $(e_i,r_q,e_j)$ with an even higher score should also be predicted as a true fact by the embedding model. Note that given the non-negativity constraints defined by Eq.~(\ref{eq:Non-negativity}), a sufficient condition for Eq.~(\ref{eq:Implication}) to hold, is to further impose
\begin{equation}\label{eq:StrictEntailment}
\textrm{Re}(\mathbf{r}_p) \leq \textrm{Re}(\mathbf{r}_q), \;\; \textrm{Im}(\mathbf{r}_p) = \textrm{Im}(\mathbf{r}_q),
\end{equation}
where $\mathbf{r}_p$ and $\mathbf{r}_q$ are the complex-valued representations of $r_p$ and $r_q$ respectively, with real and imaginary components denoted by $\textrm{Re}(\cdot),\textrm{Im}(\cdot) \in \mathbb{R}^d$. That is, when the constraints of Eq.~(\ref{eq:StrictEntailment}) (along with those of Eq.~(\ref{eq:Non-negativity})) are satisfied, the requirement of Eq.~(\ref{eq:Implication}) (or in other words $r_p \rightarrow r_q$) will always hold. We provide a proof of sufficiency in Appendix~\ref{subsec:Sufficiency}.
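The sufficiency can also be checked numerically: sampling entities from the hypercube of Eq.~(\ref{eq:Non-negativity}) and relation pairs satisfying Eq.~(\ref{eq:StrictEntailment}), the score inequality of Eq.~(\ref{eq:Implication}) always holds. A small self-check with random toy values (names are ours):

```python
import numpy as np

rng = np.random.default_rng(42)

def phi(e_i, r, e_j):
    """ComplEx score of Eq. (1)."""
    return float(np.real(np.sum(e_i * r * np.conj(e_j))))

d = 8
for _ in range(100):
    # Entities satisfy Eq. (2): real and imaginary parts in [0, 1].
    e_i = rng.random(d) + 1j * rng.random(d)
    e_j = rng.random(d) + 1j * rng.random(d)
    # Relations satisfy Eq. (4): Re(r_p) <= Re(r_q), Im(r_p) = Im(r_q).
    im = rng.standard_normal(d)
    r_q = rng.standard_normal(d) + 1j * im
    r_p = (r_q.real - rng.random(d)) + 1j * im
    # Eq. (3) holds for every sampled entity pair.
    assert phi(e_i, r_p, e_j) <= phi(e_i, r_q, e_j) + 1e-12
```

The check passes because, with shared imaginary components, the score difference reduces to $\sum_\ell (\textrm{Re}([\mathbf{r}_q]_\ell) - \textrm{Re}([\mathbf{r}_p]_\ell)) \, \textrm{Re}([\mathbf{e}_i]_\ell \overline{[\mathbf{e}_j]}_\ell)$, and every factor is non-negative under Eqs.~(2) and (4).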
Next we examine the modeling of approximate entailment. To this end, we further introduce the confidence level $\lambda$ and allow slackness in Eq.~(\ref{eq:StrictEntailment}), which yields
\begin{align}
\lambda \big(\textrm{Re}(\mathbf{r}_p) - \textrm{Re}(\mathbf{r}_q)\big) \leq \boldsymbol{\alpha}, \label{eq:ApproximateEntailment-1} \\
\lambda \big(\textrm{Im}(\mathbf{r}_p) - \textrm{Im}(\mathbf{r}_q)\big)^2 \leq \boldsymbol{\beta}. \label{eq:ApproximateEntailment-2}
\end{align}
Here $\boldsymbol{\alpha}, \boldsymbol{\beta} \geq \mathbf{0}$ are slack variables, and $(\cdot)^2$ means an entry-wise operation. Entailments with higher confidence levels show less tolerance for violating the constraints. When $\lambda=+\infty$, Eqs.~(\ref{eq:ApproximateEntailment-1})~--~(\ref{eq:ApproximateEntailment-2}) degenerate to Eq.~(\ref{eq:StrictEntailment}). The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible $(e_i, e_j)$ entity pairs ({\it i.e.}, grounding). In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.
\subsection{The Overall Model}\label{subsec:OverallModel}
Finally, we combine the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations. The overall model is presented as follows:
\begin{align}\label{eq:ConstrainedKGE}
\min_{\Theta, \{\boldsymbol{\alpha}, \boldsymbol{\beta}\}} \;\; & \sum_{\mathcal{D}^+\cup\mathcal{D}^-} \!\! \log \big( 1 + \exp (\!-y_{ijk} \phi(e_i,r_k,e_j)) \big) \notag \\
& + \mu \sum\nolimits_{\mathcal{T}} \boldsymbol{1}^\top (\boldsymbol{\alpha} + \boldsymbol{\beta}) + \eta \|\Theta\|_2^2, \notag \\
\textrm{s.t.} \;\; & \lambda \big(\textrm{Re}(\mathbf{r}_p) - \textrm{Re}(\mathbf{r}_q)\big) \leq \boldsymbol{\alpha}, \notag \\
\;\; & \lambda \big(\textrm{Im}(\mathbf{r}_p) - \textrm{Im}(\mathbf{r}_q)\big)^2 \leq \boldsymbol{\beta}, \notag \\
\;\; & \boldsymbol{\alpha}, \boldsymbol{\beta} \geq \mathbf{0}, \quad \forall r_p \xrightarrow{\lambda} r_q \in \mathcal{T}, \notag \\
\;\; & \mathbf{0} \leq \textrm{Re}(\mathbf{e}), \textrm{Im}(\mathbf{e}) \leq \mathbf{1}, \quad \forall e \in \mathcal{E}.
\end{align}
Here, $\Theta \triangleq \{\mathbf{e}: e\in\mathcal{E}\}\cup\{\mathbf{r}: r\in\mathcal{R}\}$ is the set of all entity and relation representations; $\mathcal{D}^+$ and $\mathcal{D}^-$ are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, {\it i.e.}, $(e_i,r_k,e_j)\in\mathcal{O}$; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, {\it i.e.}, $(e_i',r_k,e_j)$ or $(e_i,r_k,e_j')$; and $y_{ijk}=\pm1$ is the label (positive or negative) of triple $(e_i,r_k,e_j)$. In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels. The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient $\mu \geq 0$. The motivation is that, although we allow slackness in those constraints, we hope the total slackness will be small, so that the constraints are better satisfied. The last term is $L_2$ regularization to avoid over-fitting, with $\eta \geq 0$ the regularization coefficient.
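For intuition about the first term, a high-scoring triple incurs a small logistic loss when labeled positive and a large loss when labeled negative. A minimal sketch with made-up embeddings (names are ours):

```python
import numpy as np

def phi(e_i, r, e_j):
    """ComplEx score of Eq. (1)."""
    return float(np.real(np.sum(e_i * r * np.conj(e_j))))

def logistic_loss(score, y):
    """Per-triple term of Eq. (6): log(1 + exp(-y * phi))."""
    return float(np.log1p(np.exp(-y * score)))

# Toy triple with a fairly high score.
e_i = np.array([0.9 + 0.2j])
r_k = np.array([1.0 + 0.0j])
e_j = np.array([0.8 + 0.3j])
s = phi(e_i, r_k, e_j)

assert s > 0
# Labeling this high-scoring triple positive costs little; negative costs much.
assert logistic_loss(s, +1) < logistic_loss(s, -1)
```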
To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are. As such, the optimization problem of Eq.~(\ref{eq:ConstrainedKGE}) can be rewritten as:
\begin{align}\label{eq:RegularizedKGE}
\min_\Theta \;\; & \sum_{\mathcal{D}^+\cup\mathcal{D}^-} \!\! \log \big( 1 + \exp (\!-y_{ijk} \phi(e_i,r_k,e_j)) \big) \notag \\
& + \mu \!\sum\nolimits_{\mathcal{T}} \lambda \boldsymbol{1}^\top \!\big[\textrm{Re}(\mathbf{r}_p) \!-\! \textrm{Re}(\mathbf{r}_q)\big]_+ \notag \\
& + \mu \!\sum\nolimits_{\mathcal{T}} \lambda \boldsymbol{1}^\top \!\big(\textrm{Im}(\mathbf{r}_p) \!-\! \textrm{Im}(\mathbf{r}_q)\big)^2 + \eta \|\Theta\|_2^2, \notag \\
\textrm{s.t.} \;\; & \mathbf{0} \leq \textrm{Re}(\mathbf{e}), \textrm{Im}(\mathbf{e}) \leq \mathbf{1}, \quad \forall e \in \mathcal{E},
\end{align}
where $[\mathbf{x}]_+ = \max(\mathbf{0}, \mathbf{x})$ with $\max(\cdot,\cdot)$ being an entry-wise operation. The equivalence between Eq.~(\ref{eq:ConstrainedKGE}) and Eq.~(\ref{eq:RegularizedKGE}) is shown in Appendix~\ref{subsec:Equivalence}. We use SGD in mini-batch mode as our optimizer, with AdaGrad~\cite{duchi2011:AdaGrad} to tune the learning rate. After each gradient descent step, we project (by truncation) the real and imaginary components of entity representations back into the hypercube $[0,1]^d$, so as to satisfy the non-negativity constraints.
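The two penalty terms that one entailment $r_p \xrightarrow{\lambda} r_q$ contributes to Eq.~(\ref{eq:RegularizedKGE}) can be sketched as follows (toy relation vectors of our own choosing):

```python
import numpy as np

def entailment_penalty(r_p, r_q, lam):
    """Penalty terms of Eq. (7) for one entailment r_p -> r_q:
    lam * 1^T [Re(r_p) - Re(r_q)]_+  +  lam * 1^T (Im(r_p) - Im(r_q))^2."""
    real_part = lam * np.sum(np.maximum(0.0, r_p.real - r_q.real))
    imag_part = lam * np.sum((r_p.imag - r_q.imag) ** 2)
    return real_part + imag_part

r_p = np.array([0.8 + 0.3j, 0.2 + 0.1j])
r_q = np.array([0.5 + 0.3j, 0.4 + 0.1j])
# Only dimension 0 violates Re(r_p) <= Re(r_q); imaginary parts match.
assert np.isclose(entailment_penalty(r_p, r_q, 0.9), 0.9 * 0.3)
# Once Eq. (4) holds exactly, the penalty vanishes.
assert entailment_penalty(r_q, r_q, 0.9) == 0.0
```

Note how higher $\lambda$ scales the penalty up, so high-confidence entailments tolerate less violation, matching the discussion of Eqs.~(\ref{eq:ApproximateEntailment-1})~--~(\ref{eq:ApproximateEntailment-2}).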
While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity. Our approach has a space complexity of $O(nd+md)$, the same as that of ComplEx. Here, $n$ is the number of entities and $m$ the number of relations; $O(nd+md)$ suffices to store a $d$-dimensional complex-valued vector for each entity and each relation. The time complexity (per iteration) of our approach is $O(sd+td+\bar{n}d)$, where $s$ is the average number of triples in a mini-batch, $\bar{n}$ the average number of entities in a mini-batch, and $t$ the total number of approximate entailments in $\mathcal{T}$. Specifically, $O(sd)$ is required to handle the triples in a mini-batch, $O(td)$ to compute the penalty terms introduced by the approximate entailments, and $O(\bar{n}d)$ to enforce the non-negativity constraints on entity representations. Usually there are far fewer entailments than triples, {\it i.e.}, $t \ll s$, and also $\bar{n} \leq 2s$.\footnote{There will be at most $2s$ entities contained in $s$ triples.} So the time complexity of our approach is on a par with $O(sd)$, {\it i.e.}, the time complexity of ComplEx.
\section{Experiments and Results}\label{sec:Experiments}
This section presents our experiments and results. We first introduce the datasets used in our experiments (\S~\ref{subsec:Datasets}). Then we empirically evaluate our approach in the link prediction task (\S~\ref{subsec:LinkPrediction}). After that, we conduct extensive analysis on both entity representations (\S~\ref{subsec:EntityRepresentations}) and relation representations (\S~\ref{subsec:RelationRepresentations}) to show the interpretability of our model. Code and data used in the experiments are available at \url{https://github.com/iieir-km/ComplEx-NNE_AER}.
\subsection{Datasets}\label{subsec:Datasets}
The first two datasets we used are WN18 and FB15K, released by \citet{bordes2013:TransE}.\footnote{\url{https://everest.hds.utc.fr/doku.php?id=en:smemlj12}} WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities. We create our third dataset from the mapping-based objects of core DBpedia.\footnote{\url{http://downloads.dbpedia.org/2016-10/core/}} We eliminate relations not included within the DBpedia ontology such as \texttt{\small HomePage} and \texttt{\small Logo}, and discard entities appearing fewer than 20 times. The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities. Triples in each dataset are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation, respectively. We follow the original split for WN18 and FB15K, and draw a split of 597,572/50,000/50,000 triples for DB100K.
We further use AMIE+ \cite{galarraga2015:AMIE+}\footnote{\url{https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/amie/}} to extract approximate entailments automatically from the \textit{training} set of each dataset. As suggested by \citet{guo2018:RUGE}, we consider entailments with PCA confidence higher than 0.8.\footnote{PCA confidence is the confidence under the partial completeness assumption. See \cite{galarraga2015:AMIE+} for details.} As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K. Table~\ref{tab:Rules} gives some examples of these approximate entailments, along with their confidence levels. Table~\ref{tab:Dataset} further summarizes the statistics of the datasets.
\begin{table}[!t]
\begin{center}\footnotesize\setlength{\tabcolsep}{3pt}
\begin{tabular*}{0.48 \textwidth}{@{\extracolsep{\fill}}@{}l@{}}
\toprule
$\textrm{hypernym}^{-1} \xrightarrow{1.00} \textrm{hyponym}$ \\
$\textrm{synset\_domain\_topic\_of}^{-1} \xrightarrow{0.99} \textrm{member\_of\_domain\_topic}$ \\
$\textrm{instance\_hypernym}^{-1} \xrightarrow{0.98} \textrm{instance\_hyponym}$ \\
\midrule
$\textrm{/people/place\_of\_birth}^{-1} \xrightarrow{1.00} \textrm{/location/people\_born\_here}$ \\
$\textrm{/film/directed\_by}^{-1} \xrightarrow{0.98} \textrm{/director/film}$ \\
$\textrm{/country/admin\_divisions} \xrightarrow{0.91} \textrm{/country/1st\_level\_divisions}$ \\
\midrule
$\textrm{owner} \xrightarrow{0.95} \textrm{owning\_company}$ \\
$\textrm{child}^{-1} \xrightarrow{0.92} \textrm{parent}$ \\
$\textrm{distributing\_company} \xrightarrow{0.92} \textrm{distributing\_label}$ \\
\bottomrule
\end{tabular*}
\end{center}
\caption{\label{tab:Rules} Approximate entailments extracted from WN18 (top), FB15K (middle), and DB100K (bottom), where $r^{-1}$ means the inverse of relation $r$. }
\end{table}
\begin{table}[!t]
\centering\footnotesize\setlength{\tabcolsep}{3pt}
\begin{tabular*}{0.48 \textwidth}{@{\extracolsep{\fill}}@{}l|rrrrrr@{}}
\toprule
Dataset & \#~Ent & \#~Rel & \multicolumn{3}{c}{\#~Train/Valid/Test} & \#~Cons \\
\midrule
WN18 & 40,943 & 18 & 141,442 & 5,000 & 5,000 & 17 \\
FB15K & 14,951 & 1,345 & 483,142 & 50,000 & 59,071 & 535 \\
DB100K & 99,604 & 470 & 597,572 & 50,000 & 50,000 & 56 \\
\bottomrule
\end{tabular*}
\caption{\label{tab:Dataset} Statistics of datasets, where the columns respectively indicate the number of entities, relations, training/validation/test triples, and approximate entailments.}
\end{table}
\subsection{Link Prediction}\label{subsec:LinkPrediction}
We first evaluate our approach in the link prediction task, which aims to predict a triple $(e_i, r_k, e_j)$ with $e_i$ or $e_j$ missing, {\it i.e.}, predict $e_i$ given $(r_k, e_j)$ or predict $e_j$ given $(e_i, r_k)$.
\smallskip
\textbf{Evaluation Protocol:} We follow the protocol introduced by \citet{bordes2013:TransE}. For each test triple $(e_i, r_k, e_j)$, we replace its head entity $e_i$ with every entity $e_i' \in \mathcal{E}$, and calculate a score for the corrupted triple $(e_i', r_k, e_j)$, {\it e.g.}, $\phi(e_i', r_k, e_j)$ defined by Eq.~(\ref{eq:ComplEx}). Then we sort these scores in descending order, and get the rank of the correct entity $e_i$. During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, {\it i.e.}, the {\it filtered} setting as described in \cite{bordes2013:TransE}. This whole procedure is repeated while replacing the tail entity $e_j$. We report on the {\it test} set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top $n$ (HITS@N), with $n=1,3,10$.
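A sketch of the filtered ranking procedure for one corrupted position follows; the entity indices and scores are invented for illustration, and the helper name is ours:

```python
import numpy as np

def filtered_rank(scores, correct, known_true):
    """Rank (1-based) of `correct` after filtering other known true entities.

    scores: one score per candidate entity (higher = more plausible).
    known_true: indices of entities already forming true triples."""
    keep = np.ones(len(scores), dtype=bool)
    keep[list(known_true - {correct})] = False
    remaining = np.flatnonzero(keep)
    order = remaining[np.argsort(-scores[keep])]  # descending scores
    return int(np.where(order == correct)[0][0]) + 1

scores = np.array([0.9, 0.8, 0.7, 0.6])
# Entity 1 is the test answer; entity 0 also forms a known true triple,
# so it is removed under the filtered setting.
rank = filtered_rank(scores, correct=1, known_true={0, 1})
assert rank == 1
assert 1.0 / rank == 1.0      # this triple's contribution to MRR
assert int(rank <= 10) == 1   # this triple's contribution to HITS@10
```

Without filtering, entity 0 would outrank the correct answer, illustrating why the filtered setting gives a fairer picture.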
\smallskip
\textbf{Comparison Settings:} We compare the performance of our approach against a variety of KG embedding models developed in recent years. These models can be categorized into three groups:
\begin{itemize}
\item Simple embedding models that utilize triples alone without integrating extra information, including TransE \cite{bordes2013:TransE}, DistMult \cite{yang2015:Bilinear}, HolE \cite{nickel2016:Hole}, ComplEx \cite{trouillon2016:ComplEx}, and ANALOGY \cite{liu2017:ANALOGY}. Our approach is developed on the basis of ComplEx.
\item Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE \cite{guo2018:RUGE} and ComplEx$^\textrm{R}$ \cite{minervini2017:EquivalenceInversion}. The former requires grounding of first-order logic rules. The latter is restricted to relation equivalence and inversion, and assigns an identical confidence level to all different rules.
\item Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and FB15K, including R-GCN \cite{schlichtkrull2017:R-GCN}, ConvE \cite{dettmers2017:ConvE}, and Single DistMult \cite{kadlec2017:Baselines}.\footnote{We do not consider Ensemble DistMult \cite{dettmers2017:ConvE} which combines several different models together, to facilitate a fair comparison.} The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models. The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.
\end{itemize}
We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the \underline{N}on-\underline{N}egativity constraints on \underline{E}ntity representations, {\it i.e.}, optimization Eq.~(\ref{eq:RegularizedKGE}) with $\mu=0$; and (ii) ComplEx-NNE+AER that further imposes the \underline{A}pproximate \underline{E}ntailment constraints over \underline{R}elation representations besides those non-negativity ones, {\it i.e.}, optimization Eq.~(\ref{eq:RegularizedKGE}) with $\mu>0$.
\smallskip
\textbf{Implementation Details:} We compare our approach against all three groups of baselines on the benchmarks of WN18 and FB15K. We directly report their original results on these two datasets to avoid re-implementation bias. On DB100K, the newly created dataset, we take the first two groups of baselines, {\it i.e.}, the simple embedding models and the ComplEx extensions with logical background knowledge incorporated. We do not use the third group of baselines due to efficiency and complexity issues. We use the code provided by \citet{trouillon2016:ComplEx}\footnote{\url{https://github.com/ttrouill/complex}} for TransE, DistMult, and ComplEx, and the code released by their authors for ANALOGY\footnote{\url{https://github.com/quark0/ANALOGY}} and RUGE\footnote{\url{https://github.com/iieir-km/RUGE}}. We re-implement HolE and ComplEx$^\textrm{R}$ so that all the baselines (as well as our approach) share the same optimization mode, {\it i.e.}, SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.\footnote{An exception here is that ANALOGY uses asynchronous SGD with AdaGrad~\cite{liu2017:ANALOGY}.} We follow \citet{trouillon2016:ComplEx} to adopt a ranking loss for TransE and a logistic loss for all the other methods.
\begin{table*}[!t]
\centering\footnotesize\setlength{\tabcolsep}{5pt}
\begin{tabular*}{1 \textwidth}{@{\extracolsep{\fill}}@{}llllllllll@{}}
\toprule
& \multicolumn{4}{c}{WN18} && \multicolumn{4}{c}{FB15K} \\\cmidrule{2-5}\cmidrule{7-10}
& & \multicolumn{3}{c}{HITS@N} & & & \multicolumn{3}{c}{HITS@N} \\\cmidrule{3-5}\cmidrule{8-10}
&MRR &\tc{1} &\tc{3} &\tc{10} &&MRR &\tc{1} &\tc{3} &\tc{10} \\
\midrule
TransE~\cite{bordes2013:TransE} &0.454 &0.089 &0.823 &0.934 &&0.380 &0.231 &0.472 &0.641 \\
DistMult~\cite{yang2015:Bilinear} &0.822 &0.728 &0.914 &0.936 &&0.654 &0.546 &0.733 &0.824 \\
HolE~\cite{nickel2016:Hole} &0.938 &0.930 &0.945 &0.949 &&0.524 &0.402 &0.613 &0.739 \\
ComplEx~\cite{trouillon2016:ComplEx} &0.941 &0.936 &0.945 &0.947 &&0.692 &0.599 &0.759 &0.840 \\
ANALOGY~\cite{liu2017:ANALOGY} &0.942 &0.939 &0.944 &0.947 &&0.725 &0.646 &0.785 &0.854 \\
\midrule
RUGE~\cite{guo2018:RUGE} &\tc{---} &\tc{---} &\tc{---} &\tc{---} &&0.768 &0.703 &0.815 &0.865 \\
ComplEx$^\textrm{R}$~\cite{minervini2017:EquivalenceInversion}
&0.940 &\tc{---} &0.943 &0.947 &&\tc{---} &\tc{---} &\tc{---} &\tc{---} \\
\midrule
R-GCN~\cite{schlichtkrull2017:R-GCN} &0.814 &0.686 &0.928 &0.955 &&0.651 &0.541 &0.736 &0.825 \\
R-GCN+~\cite{schlichtkrull2017:R-GCN} &0.819 &0.697 &0.929 &{\bf 0.964} &&0.696 &0.601 &0.760 &0.842 \\
ConvE~\cite{dettmers2017:ConvE} &0.942 &0.935 &{\bf 0.947} &0.955 &&0.745 &0.670 &0.801 &0.873 \\
Single DistMult~\cite{kadlec2017:Baselines} &0.797 &\tc{---} &\tc{---} &0.946 &&0.798 &\tc{---} &\tc{---} &{\bf 0.893} \\
\midrule
ComplEx-NNE (this work) &0.941 &0.937 &0.944 &0.948 &&0.727$^*$ &0.659$^*$ &0.772$^*$ &0.845$^*$ \\
ComplEx-NNE+AER (this work) &{\bf 0.943} &{\bf 0.940} &0.945 &0.948 &&{\bf 0.803}$^*$ &{\bf 0.761}$^*$ &{\bf 0.831}$^*$ &0.874$^*$ \\
\bottomrule
\end{tabular*}
\caption{\label{tab:LinkPrediction} Link prediction results on the test sets of WN18 and FB15K. Results for TransE and DistMult are taken from \cite{trouillon2016:ComplEx}. Results for the other baselines are taken from the original papers. Missing scores not reported in the literature are indicated by ``---''. Best scores are highlighted in bold, and ``$*$" indicates statistically significant improvements over ComplEx.}
\end{table*}
\begin{table}[!t]
\centering\footnotesize\setlength{\tabcolsep}{5pt}
\begin{tabular*}{0.48 \textwidth}{@{\extracolsep{\fill}}@{}lllll@{}}
\toprule
& & \multicolumn{3}{c}{HITS@N} \\\cmidrule{3-5}
& MRR & \tc{1} & \tc{3} & \tc{10} \\
\midrule
TransE & 0.111 & 0.016 & 0.164 & 0.270 \\
DistMult & 0.233 & 0.115 & 0.301 & {\bf 0.448} \\
HolE & 0.260 & 0.182 & 0.309 & 0.411 \\
ComplEx & 0.242 & 0.126 & 0.312 & 0.440 \\
ANALOGY & 0.252 & 0.143 & 0.323 & 0.427 \\
\midrule
RUGE & 0.246 & 0.129 & 0.325 & 0.433 \\
ComplEx$^\textrm{R}$ & 0.253 & 0.167 & 0.294 & 0.420 \\
\midrule
ComplEx-NNE & 0.298$^*$ & 0.229$^*$ & 0.330$^*$ & 0.426 \\
ComplEx-NNE+AER & {\bf 0.306}$^*$ & {\bf 0.244}$^*$ & {\bf 0.334}$^*$ & 0.418 \\
\bottomrule
\end{tabular*}
\caption{\label{tab:LinkPrediction-DB100K} Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by ``$*$".}
\end{table}
Among those baselines, RUGE and ComplEx$^\textrm{R}$ require additional logical background knowledge. RUGE makes use of soft rules, which are extracted by AMIE+ from the {\it training} sets. As suggested by \citet{guo2018:RUGE}, length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized. Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8, but it considers only entailments between a pair of relations, {\it i.e.}, length-1 rules. ComplEx$^\textrm{R}$ takes into account equivalence and inversion between relations. We derive such axioms directly from our approximate entailments. If $r_p \xrightarrow{\lambda_1} r_q$ and $r_q \xrightarrow{\lambda_2} r_p$ with $\lambda_1,\lambda_2>0.8$, we regard relations $r_p$ and $r_q$ as equivalent. Similarly, if $r_p^{-1} \xrightarrow{\lambda_1} r_q$ and $r_q^{-1} \xrightarrow{\lambda_2} r_p$ with $\lambda_1,\lambda_2>0.8$, we consider $r_p$ an inverse of $r_q$.
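The heuristic for deriving equivalence axioms from mutual entailments can be sketched as follows; the relation names are hypothetical, and inversion detection works analogously on inverse-marked relations:

```python
def equivalences(entailments, threshold=0.8):
    """entailments maps (r_p, r_q) -> confidence of r_p -> r_q.
    Two relations are deemed equivalent if they entail each other,
    each with confidence above the threshold."""
    found = set()
    for (p, q), c1 in entailments.items():
        c2 = entailments.get((q, p), 0.0)
        if c1 > threshold and c2 > threshold:
            found.add(frozenset((p, q)))
    return found

rules = {("bornInCountry", "nationality"): 0.90,
         ("nationality", "bornInCountry"): 0.85,
         ("child", "parent"): 0.92}  # one-directional only
assert equivalences(rules) == {frozenset({"bornInCountry", "nationality"})}
```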
For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set. Specifically, we tune the embedding size $d \in \{100, 150, 200\}$, the $L_2$ regularization coefficient $\eta \in \{0.001, 0.003, 0.01, 0.03, 0.1\}$, the ratio of negative over positive training examples $\alpha \in \{2, 10\}$, and the initial learning rate $\gamma \in \{0.01, 0.05, 0.1, 0.5, 1.0\}$. For TransE, we tune the margin of the ranking loss $\delta \in \{0.1, 0.2, 0.5, 1, 2, 5, 10\}$. Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors \cite{liu2017:ANALOGY,guo2018:RUGE}. After obtaining the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER ($\mu$ in Eq.~(\ref{eq:RegularizedKGE})) in the range of $\{10^{-5}, 10^{-4}, \cdots, 10^4, 10^5\}$, with all its other hyperparameters fixed to their optimal configurations. We then directly set $\mu=0$ to get the optimal ComplEx-NNE model. The weight of soft constraints in ComplEx$^\textrm{R}$ is tuned in the same range as $\mu$. The optimal configurations for our approach are: $d=200$, $\eta=0.03$, $\alpha=10$, $\gamma=1.0$, $\mu=10$ on WN18; $d=200$, $\eta=0.01$, $\alpha=10$, $\gamma=0.5$, $\mu=10^{-3}$ on FB15K; and $d=150$, $\eta=0.03$, $\alpha=10$, $\gamma=0.1$, $\mu=10^{-5}$ on DB100K.
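The grid search itself is an exhaustive loop over the Cartesian product of the ranges above. The sketch below uses a stand-in for validation MRR, since in reality each configuration requires training a full model; the scoring function and selected optimum are therefore purely illustrative:

```python
from itertools import product

# Hyperparameter ranges mirroring those in the text.
ds = [100, 150, 200]
etas = [0.001, 0.003, 0.01, 0.03, 0.1]
alphas = [2, 10]
gammas = [0.01, 0.05, 0.1, 0.5, 1.0]

def validation_mrr(d, eta, alpha, gamma):
    # Stand-in: in reality, train a model with this configuration and
    # compute MRR on the validation set (the expensive part).
    return -(abs(d - 200) + eta + 0.001 * alpha + abs(gamma - 1.0))

best = max(product(ds, etas, alphas, gammas),
           key=lambda cfg: validation_mrr(*cfg))
assert best == (200, 0.001, 2, 1.0)
```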
\smallskip
\textbf{Experimental Results:} Table~\ref{tab:LinkPrediction} presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature. Table~\ref{tab:LinkPrediction-DB100K} further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting. On all the datasets, we test the statistical significance of the improvements achieved by ComplEx-NNE and ComplEx-NNE+AER over ComplEx, using a paired t-test. The reciprocal rank or HITS@N value with $n=1,3,10$ for each test triple is used as paired data. The symbol ``$*$'' indicates a significance level of $p<0.05$.
The results demonstrate that imposing the non-negativity and approximate entailment constraints indeed improves KG embedding. ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least as well as) ComplEx in almost all the metrics on all three datasets, and most of the improvements are statistically significant (except those on WN18). More interestingly, just by introducing these simple constraints, ComplEx-NNE+AER can beat very strong baselines, including the best performing basic models like ANALOGY, previous extensions of ComplEx like RUGE and ComplEx$^\textrm{R}$, and even the complicated developments or implementations like ConvE and Single DistMult. This demonstrates the superiority of our approach.
\subsection{Analysis on Entity Representations}\label{subsec:EntityRepresentations}
This section inspects how the structure of the entity embedding space changes when the constraints are imposed. We first provide a visualization of entity representations on DB100K. On this dataset each entity is associated with a single type label.\footnote{\url{http://downloads.dbpedia.org/2016-10/core-i18n/en/instance_types_wkd_uris_en.ttl.bz2}} We pick 4 types \texttt{\small reptile}, \texttt{\small wine\_region}, \texttt{\small species}, and \texttt{\small programming\_language}, and randomly select 30 entities from each type. Figure~\ref{fig:Entity-Rep} visualizes the representations of these entities learned by ComplEx and ComplEx-NNE+AER (real components only), with the optimal configurations determined by link prediction (see \S~\ref{subsec:LinkPrediction} for details, applicable to all analyses hereafter). During visualization, we normalize the real component of each entity by $[\tilde{\mathbf{x}}]_\ell \!=\! \frac{[\mathbf{x}]_\ell - \min(\mathbf{x})}{\max(\mathbf{x}) - \min(\mathbf{x})}$, where $\min(\mathbf{x})$ and $\max(\mathbf{x})$ are the minimum and maximum entries of $\mathbf{x}$ respectively. We observe that after imposing the non-negativity constraints, ComplEx-NNE+AER indeed obtains compact and interpretable representations for entities. Each entity is represented by only a relatively small number of ``active'' dimensions. And entities with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.
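The normalization applied before visualization is plain entry-wise min-max scaling; a minimal sketch (function name is ours):

```python
import numpy as np

def minmax(x):
    """[x_tilde]_l = ([x]_l - min(x)) / (max(x) - min(x))."""
    return (x - x.min()) / (x.max() - x.min())

x = np.array([0.2, 0.6, 1.0])
assert np.allclose(minmax(x), [0.0, 0.5, 1.0])
```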
\begin{figure}[!t]
\centering
\includegraphics[width=0.48\textwidth]{distribution.pdf}
\caption{Visualization of real components of entity representations (rows) learned by ComplEx-NNE+AER (left) and ComplEx (right). From top to bottom, entities belong to type \texttt{\small reptile}, \texttt{\small wine\_} \texttt{\small region}, \texttt{\small species}, and \texttt{\small programming\_language} in turn. Values range from 0 (white) via 0.5 (orange) to 1 (black). Best viewed in color.}\label{fig:Entity-Rep}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.48\textwidth]{purity.pdf}
\caption{Average entropy over all dimensions of real components of entity representations learned by ComplEx (circles), ComplEx-NNE (squares), and ComplEx-NNE+AER (triangles) as $K$ varies.}\label{fig:Entropy}
\end{figure}
Then we investigate the semantic purity of these dimensions. Specifically, we collect the representations of all the entities on DB100K (real components only). For each dimension of these representations, the top $K$ percent of entities with the highest activation values on this dimension are picked, and we calculate the entropy of the type distribution of the selected entities. This entropy reflects the diversity of entity types, or in other words, semantic purity. If all the $K$ percent of entities have the same type, we get the lowest entropy of zero (the highest semantic purity). On the contrary, if each of them has a distinct type, we get the highest entropy (the lowest semantic purity). Figure~\ref{fig:Entropy} shows the average entropy over all dimensions of entity representations (real components only) learned by ComplEx, ComplEx-NNE, and ComplEx-NNE+AER, as $K$ varies. We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER learn entity representations whose latent dimensions have consistently higher semantic purity. We have conducted the same analyses on the imaginary components of entity representations and observed similar phenomena. The results are given in Appendix~\ref{subsec:Imaginary}.
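The semantic-purity measure is the Shannon entropy of the empirical type distribution of the selected entities; a minimal sketch (names and toy type labels are ours):

```python
import numpy as np
from collections import Counter

def type_entropy(types):
    """Shannon entropy of the type distribution; 0 = highest purity."""
    counts = np.array(list(Counter(types).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

# All selected entities share one type: zero entropy, highest purity.
assert type_entropy(["reptile"] * 5) == 0.0
# Four distinct types: entropy log(4), the maximum for four entities.
assert np.isclose(type_entropy(["a", "b", "c", "d"]), np.log(4))
```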
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{relation.pdf}
\caption{Visualization of relation representations learned by ComplEx-NNE+AER, with the top 4 relations from the equivalence class, the middle 4 the inversion class, and the bottom 4 others.}\label{fig:Relation-Rep}
\end{figure}
\subsection{Analysis on Relation Representations}\label{subsec:RelationRepresentations}
This section further provides a visual inspection of the relation embedding space when the constraints are imposed. To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.\footnote{Equivalence and inversion are detected using heuristics introduced in \S~\ref{subsec:LinkPrediction} (implementation details). See Appendix~\ref{subsec:Properties} for detailed properties of these three classes.} We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure~\ref{fig:Relation-Rep}, where for each relation we randomly pick 5 dimensions from both its real and imaginary components. By imposing the approximate entailment constraints, these relation representations encode logical regularities quite well. Pairs of relations from the first class (equivalence) tend to have identical representations $\mathbf{r}_p \approx \mathbf{r}_q$; those from the second class (inversion) complex conjugate representations $\mathbf{r}_p \approx \bar{\mathbf{r}}_q$; and those from the last class representations satisfying $\textrm{Re}(\mathbf{r}_p) \leq \textrm{Re}(\mathbf{r}_q)$ and $\textrm{Im}(\mathbf{r}_p) \approx \textrm{Im}(\mathbf{r}_q)$.
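The three regularities can be checked numerically; a minimal sketch (our illustration, with hypothetical embeddings, tolerances, and function name):

```python
import numpy as np

def classify_pair(r_p, r_q, tol=1e-2):
    """Heuristically classify a relation pair from its complex embeddings,
    following the regularities observed for ComplEx-NNE+AER."""
    if np.allclose(r_p, r_q, atol=tol):
        return "equivalence"                 # r_p ~ r_q
    if np.allclose(r_p, np.conj(r_q), atol=tol):
        return "inversion"                   # r_p ~ conj(r_q)
    if np.all(r_p.real <= r_q.real + tol) and np.allclose(r_p.imag, r_q.imag, atol=tol):
        return "entailment"                  # Re(r_p) <= Re(r_q), Im(r_p) ~ Im(r_q)
    return "other"
```

Equivalence is tested first, since identical representations trivially satisfy the entailment condition as well.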
\section{Conclusion}\label{sec:Conclusion}
This paper investigates the potential of using very simple constraints to improve KG embedding. Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations. Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity. Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines. The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.
\section*{Acknowledgments}
We would like to thank all the anonymous reviewers for their insightful and valuable suggestions, which helped improve the quality of this paper. This work is supported by the National Key Research and Development Program of China (No. 2016QY03D0503) and the Fundamental Theory and Cutting Edge Technology Research Program of the Institute of Information Engineering, Chinese Academy of Sciences (No. Y7Z0261101).
\section{Introduction}
Advances in microfluidics and nanotechnology have driven a rapid development of surface engineering over the last two decades. Among the different effects of micro-/nano-patterned surfaces, often inspired by observations in nature, one remarkable finding is that the introduction of micro-/nano-scale roughness on an otherwise smooth hydrophobic surface can sometimes significantly reduce the resistance to an external liquid flow. This slippery effect, due to the entrapment of gas or vapor pockets under the surface asperities (superhydrophobic Cassie state), was first observed in the experiment of a water flow through a water-repellent pipe \cite{Watanabe}. Subsequently, a number of studies have demonstrated various levels of drag reduction \cite{Ou,Choi_Kim,Schaffel,Lee}, but in some cases also drag enhancement \cite{Steinberger,Karatay}. Despite the discrepancies in the literature, a common technological challenge for the application of superhydrophobic materials is their fragility \cite{Bocquet}. Under high pressures or external forcing, such as turbulent fluctuations or phase change, the surface texture can be partially or fully impregnated by the outer fluid (Cassie-to-Wenzel transition), causing the system to lose the features it was designed for \cite{Gentili,Giacomello, Seo_etal_18}.
Liquid-infused surfaces (LIS) are an alternative when aiming for drag reduction. They are more robust against pressure-induced failure, while displaying the same useful properties as conventional gas-cushioned superhydrophobic surfaces \cite{Wexler}. Two recent experiments have demonstrated, using microfabricated oil-impregnated pillars and grooves separately, up to 16\% drag reduction in laminar flows \cite{Solomon} and up to 14\% drag reduction in turbulent flows \cite{Rosenberg}. In the case of the turbulent flow, the authors also tested superhydrophobic surfaces and measured approximately 10\% drag reduction \cite{Rosenberg}.
\Ge{The values cited above, obtained at small lubricant-to-external-fluid viscosity ratios, can eventually decrease to nearly zero as the lubricant becomes more viscous. However, hybrid designs have been devised to maintain the performance, see \eg a recent proof-of-concept study \cite{Hemeda}.}
Analytically, the slippage over a superhydrophobic or liquid-infused surface can be characterized by an effective slip length. Analogous to the definition of the Navier slip, the effective slip length is an \textit{averaged} quantity equal to the distance below the surface at which the velocity would extrapolate to zero (to be distinguished from the \textit{intrinsic} slip of molecular nature \cite{Gentili}). Extensive studies have been devoted to obtaining theoretical expressions of the effective slip for two-dimensional longitudinal or transverse grooves \cite{Lauga_Stone, Sbragalia_Prosperetti, Davis_Lauga, Ng_Wang, Schonecker, Nizkaya, Crowdy_long, Crowdy_tran}.
\Ge{Among these, \cite{Lauga_Stone, Sbragalia_Prosperetti, Davis_Lauga, Ng_Wang, Crowdy_tran} assume perfect slip along the liquid-gas interface, \cite{Ng_Wang, Schonecker, Nizkaya} assume flat menisci, while the meniscus deformation, if considered, is either small \cite{Crowdy_long, Crowdy_tran} or treated in the dilute limit (\ie the surface is mostly solid) \cite{Lauga_Stone}. Furthermore, for the purpose of calculation, the shape of the interface is always assumed symmetric (\ie flat or circular) even under shear. In practice, this limits the application of the analytical results to the zero-capillary limit, which provides an upper/lower bound on the drag reduction depending on the specific conditions.}
Understanding the dependence of the slip length on the imposed shear and the lubricant viscosity in more realistic conditions may require a numerical approach.
\Ge{There are, as yet, surprisingly few fully resolved hydrodynamic simulations able to resolve the details of the flow while relaxing the underlying assumptions. Most prior numerical studies still consider flat/circular menisci with zero subphase viscosity \cite{Davies_etal, Martell_Perot_Rothstein, Cheng_Teo_Khoo, Wang_Teo_Khoo,Teo_Khoo_14, Seo_etal_18}; however, they extend analytical solutions to more complex surface patterns or to the finite-Reynolds-number regime. Flexible bubble shapes were first considered in \cite{finland} for a uniform gas mattress, and later for a non-uniform distribution \cite{finland2}. Using a two-phase Lattice Boltzmann method, \cite{finland, finland2} show that increasing the capillary number reduces the effective slip, even below zero (\ie more friction than a solid plate). Specifically, their nanobubbles protrude strongly into the flow and remain trapped in the pores. Indeed, for very large protrusion angles, negative slip has been both observed experimentally \cite{Steinberger} and verified analytically \cite{Davis_Lauga}. On the other hand, when the protrusion angle is smaller and the bubbles are allowed to slide on the substrate, the phase-field simulation of \cite{Gao_Peng} shows the opposite behavior: the effective slip is nearly shear-independent for relatively low capillary numbers, while it can increase dramatically once the capillary number exceeds a threshold. This threshold is not a single, universal value but depends on the spacing of the grooves and the initial filling of the gas; the enhancement of the slip is, however, clearly due to depinning of the liquid-gas-solid contact line. We note that the depinning process considered in \cite{Gao_Peng} might be an idealization, since realistic solid surfaces may not be smooth/chemically homogeneous near the edge.
Furthermore, both studies consider gas bubbles submerged in water under unrealistically large shear rates ($10^6 \sim 10^7~\mathrm{s}^{-1}$)\footnote{In \cite{finland}, the shear rates were reported as $10^{-6} \sim 10^{-7}~\mathrm{s}^{-1}$. This must be a typo.}. Whether this is stable or can be physically realized without generating significant heat remains an open question.}
Here, we explore a slightly different flow configuration: planar shear flows over a micro-rough wall \textit{partially} impregnated by a lubricant fluid. Using the newly developed multiscale numerical framework in \cite{Hanna}, simulations at separate scales are performed to obtain the steady drag reduction, while capturing the dynamic wetting behavior in detail.
\Ge{As we investigate the various effects of the viscosity ratio, the capillary number, and the static contact angle, we find that the filling fraction has the largest impact on drag reduction. It weakens the effects of the other parameters, which are generally intertwined in a number of non-trivial ways. Moreover, for a given initial filling fraction (94\%), our results show that the viscosity of the lubricant influences not only the effective slip length, but also the robustness of the substrate under external shear. Shear-driven failure of LIS has recently been reported in \cite{Wexler, Jacobi, Liu} for the longitudinal case. Our study predicts that a similar drainage, though with a different viscosity dependence, may also occur in the transverse case.}
Understanding this drainage failure is instructive for improving the robustness of the surface design.
\section{Microcavities partially filled with lubricants} \label{definition}
\subsection{Problem setup} \label{setup}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{schematic5.pdf}
\end{center}
\caption{(color online) Schematic of the problem definition and setup for the two separate simulations. The substrate is patterned with an array of square cavities. The unit geometry shows the cross-section of the partially filled microcavity. The boundary condition at $y=H$ is equivalent to unit tangential stress and zero normal stress, while the arrows near the contact lines represent the slip boundary. The local box depicts the computational domain of the moving contact line model (the variables are denoted by a prime). Its velocity boundary conditions correspond to a moving wall in the bending interface reference frame.}
\label{fig: schematic}
\end{figure}
We consider the transverse flow over an array of regularly spaced square cavities illustrated in Fig.\ \ref{fig: schematic}. The outer fluid of viscosity $\mu_1$ is driven by a constant shear $\dot{\gamma}$ in the $x$ direction, imposed at distance $H$ above the floor. The cavities of length $L/2$ and depth $H/2$ are partially filled with a lubricant fluid of viscosity $\mu_2$. When the number of the microcavities is large, the system is equivalent to a single cavity with periodic boundary conditions in the front and back. The solution at the (quasi-)steady state is determined by the incompressible Stokes equations, written in the non-dimensional form
\begin{equation} \label{Stokes}
\begin{aligned}
\nabla \cdot {\bm u} = 0, \quad
-\nabla p + \nabla \cdot \big[ \mu_i ( \nabla {\bm u} + \nabla {\bm u}^T ) \big] = 0,
\end{aligned}
\end{equation}
where ${\bm u}=(u,v)$ is the velocity, $p=p(x,y)$ the pressure, and $\mu_i=\tilde{\mu}_i/\tilde{\mu}_1$ ($i=$ 1 or 2) the dimensionless viscosity, using $\tilde{H}$ and $\dot{\tilde{\gamma}} \tilde{H}$ as the reference length and velocity respectively\Ge{\footnote{Dimensional values are denoted with a tilde throughout the manuscript.}}. For viscous flows, the velocity and its tangential derivatives are continuous along the fluid interface \cite{Batchelor}. The normal stress is discontinuous due to the surface tension $\tilde{\sigma}$ and the viscosity difference, giving the pressure jump (denoted as $[A]_\Gamma=A_2-A_1$)
\begin{equation} \label{p jump}
[p]_\Gamma = \frac{\kappa}{\textrm{Ca}} + 2[\mu]_\Gamma {\bm n}^T \cdot \nabla {\bm u} \cdot {\bm n}
\quad \textrm{on} \quad \Gamma,
\end{equation}
where ${\bm n}$ is the outward-pointing normal at the interface $\Gamma$, $\kappa$ its curvature, and Ca $=\tilde{\mu}_1 \dot{\tilde{\gamma}} \tilde{H}/\tilde{\sigma}$ the capillary number.
\Ge{As the lubricant only partially fills the cavity initially, it may become distorted or splatter under the external shear. The associated contact line motion can be described by a second capillary number,}
Ca$_c=\tilde{\mu}_2 \tilde{U}_c/\tilde{\sigma}$, where $\tilde{U}_c$ is the characteristic contact line velocity related to the liquid and solid surface energies. The ratio between this velocity and the shear, $\chi=\tilde{U}_c/ (\dot{\tilde{\gamma}} \tilde{H})$, measures the magnitude of the local slip in the sheared dynamical system. It also scales the slip velocity near the contact line,
\begin{equation} \label{slip bc}
\begin{aligned}
{\bm u} = \chi u_c (\theta) {\bm \Theta} (y) \quad \textrm{on} \quad \partial \Omega_\Gamma,
\end{aligned}
\end{equation}
where $\chi u_c(\theta)$ is the renormalized nanoscale contact line velocity depending on the apparent contact angle $\theta$, and ${\bm \Theta}(y)$ provides the self-similar slip velocity function of the wall-parallel coordinate $y$ that is imposed in the vicinity of the contact line on the boundary $\partial \Omega_\Gamma$ (see Fig.\ \ref{fig: schematic}). Further details of $\chi u_c(\theta)$ and ${\bm \Theta}(y)$ will be provided in Sec.\ \ref{model}.
In summary, Eqs.\ (\ref{Stokes}--\ref{slip bc}) are determined, neglecting the fluid inertia and fixing the substrate geometry, by the following non-dimensional parameters:
\textit{(i)} the viscosity ratio, $\tilde{\mu}_2/\tilde{\mu}_1$,
\textit{(ii)} the static contact angle, $\theta_s$,
\textit{(iii)} the \Ge{initial filling fraction} of the cavity \Ge{$\delta=2d_0/H$} (where \Ge{$d_0$ is the initial depth of the lubricant measured from the contact point to the bottom of the cavity}),
\textit{(iv)} the capillary number based on the imposed shear Ca, and
\textit{(v)} the ratio between the characteristic contact line velocity and the shear, $\chi$.
The effect of the presence of the lubricating cavity and the corresponding apparent slip can be readily quantified by an effective slip length $\lambda_e$, defined as
\begin{equation} \label{eff slip}
\lambda_e = \frac{\bar{u}(H)}{\dot{\gamma}H}-1,
\end{equation}
with $\bar{u}(H)$ being the average streamwise velocity at distance $H$ above the floor (averaged over the $x$ direction).
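Equation \eqref{eff slip} is straightforward to evaluate from simulation output; a minimal sketch (our own, not part of the solver; uniform sampling of $u$ along $x$ is assumed):

```python
import numpy as np

def effective_slip(u_top, gamma_dot, H):
    """Effective slip length lambda_e = u_bar(H)/(gamma_dot*H) - 1,
    with u_bar the streamwise velocity at y = H averaged over x
    (uniform sampling over one period assumed)."""
    u_bar = float(np.mean(u_top))
    return u_bar / (gamma_dot * H) - 1.0
```

For a plain Couette flow over a flat no-slip wall, $\bar{u}(H)=\dot{\gamma}H$ and the formula returns $\lambda_e=0$, as it should.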
\Ge{In the following, we will consider various combinations of the governing parameters \textit{(i--v)} and evaluate $\lambda_e$ for each configuration. As the result will clearly depend on the motion of the impregnated lubricant, the multiscale modelling approach that we adopt is described next. The objective here is to provide an overall description of our methodology, rather than deriving the full mathematical/numerical details. For the latter, including validations, we refer to our previous work \cite{Martin, Hanna}.}
\subsection{Modelling of the moving contact lines} \label{model}
\Ge{We model the contact line dynamics in two steps.} First, we solve the Cahn-Hilliard equations within a Stokes system
\begin{equation} \label{C-H}
\frac{\partial c}{\partial t} + \tilde{{\bm u}}\cdot \nabla c -\tilde{m}\nabla ^2 \tilde{\psi}=0,
\quad \tilde{\psi} - \frac{3\tilde{\sigma} \tilde{\epsilon}}{4}
\bigg(\frac{2}{\tilde{\epsilon}^2}(c^3-c) -\nabla ^2 c \bigg) =0,
\end{equation}
\begin{equation} \label{C-H Stokes}
\nabla \cdot \tilde{{\bm u}} = 0, \quad
-\nabla \tilde{p} + \nabla \cdot
\big[ \tilde{\mu} ( \nabla \tilde{{\bm u}} + \nabla \tilde{{\bm u}}^T ) \big] +\tilde{\psi}\nabla c = 0.
\end{equation}
In the above, $c$ is a non-dimensional phase parameter smoothly varying from $+1$ in one fluid to $-1$ in the other within a thickness of $\tilde{\epsilon}$, $\tilde{\psi}$ is the fluid chemical potential, $\tilde{m}$ is the mobility, and $\tilde{\sigma}$, again, is the surface tension. The chemical potential $\tilde{\psi}$ measures the variation of the system free energy with respect to $c$. Its gradient determines the interfacial diffusion flux $-\tilde{m}\nabla \tilde{\psi}$, which together with the convective flux $\tilde{{\bm u}} c$, models the creation, movement, and dissolution of phase interfaces \cite{Jacqmin2000}.
Technically, Eqs.\ (\ref{C-H}--\ref{C-H Stokes}) are solved in a rectangular box in the vicinity of a contact line using methods presented in \cite{Martin} (see Fig.\ \ref{fig: schematic}, the local box and its velocity boundary condition).
\Ge{They are determined solely by the viscosity ratio, the surface tension, and the static contact angle (the rest are fixed choosing the proper non-dimensionalization);} hence, the moving contact line can be simulated separately from the cavity flow.
Inherently, we assume the length and time scales of the local box are much smaller than those of the cavity, \ie $\tilde{H}'/\tilde{H} \ll 1$ and $\tilde{\tau}'/\tilde{\tau} \ll 1$ respectively. The first condition holds by definition and is enforced by providing enough resolution. The second condition is automatically satisfied by noting that $\tilde{\tau}'/\tilde{\tau} = (\tilde{H}'/\tilde{U}_c)\,\dot{\tilde{\gamma}} = (\tilde{H}'/\tilde{H})/\chi$. We will show in Sec.\ \ref{results} that $\chi$ in our case is indeed much greater than 1.
\Ge{The steady-state solutions of Eqs.\ (\ref{C-H}--\ref{C-H Stokes}) give the contact line velocity, $\chi u_c(\theta)$, function of the apparent contact angle only. It is typically nonlinear, and is valid down to the nanometer scale. To impose this slip velocity in the micrometer cavity flow, as the second step, we modify the velocity boundary condition near the contact line using asymptotic matching \cite{Hanna}. Here, the self-similarity of the local velocity field is invoked and the singularity of the viscous stress is avoided \cite{Huh_Scriven}. The end result is an algebraic operator, ${\bm \Theta}(y)$, applied to $\chi u_c(\theta)$ on the boundary $\partial \Omega_\Gamma$.}
\Ge{
We comment that our multiscale modelling approach is not limited to the phase-field model for the nanoscale; in principle, any model able to describe the contact line dynamics, \eg the molecular dynamics (MD) \cite{Johansson} or the Lattice-Boltzmann (LB) \cite{Sbragaglia_etal}, can be used. We also note that, by solving Eqs.\ (\ref{C-H}--\ref{C-H Stokes}) in a square domain, we implicitly assume the solid surface is nanosmooth.}
Consequently, any deviation from the static contact angle will result in an interface displacement, bringing the phase field back to its local equilibrium. In practice, a real surface may have random roughness or defects smaller than the scale of the printed patterns, causing the interface to be pinned (\ie contact angle hysteresis). Such effects can be included by modifying the geometry of the computational domain, or simply by modifying the relation $u_c=u_c(\theta)$ so that $u_c=0$ for a range of $\theta$'s.
\Ge{In Sec.\ \ref{results}, we will take this second approach to account for a small contact angle hysteresis.}
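The second approach amounts to clamping the tabulated contact line velocity to zero inside the hysteresis window; a minimal sketch (our illustration, with hypothetical arrays and function name):

```python
import numpy as np

def apply_hysteresis(theta, u_c, theta_min, theta_max):
    """Zero the contact line velocity inside the hysteresis window
    [theta_min, theta_max], pinning the contact line for apparent
    angles within that range; u_c is left unchanged elsewhere."""
    u = np.asarray(u_c, dtype=float).copy()
    u[(theta >= theta_min) & (theta <= theta_max)] = 0.0
    return u
```

Applied to the precomputed $u_c(\theta)$ tables, this reproduces $\chi u_c(\theta)=0$ for $\theta \in [76\degree, 84\degree]$, \ie a hysteresis of $8\degree$.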
\subsection{Numerical methods} \label{numm}
The governing equations, together with the boundary conditions, Eqs.\ (\ref{Stokes}--\ref{slip bc}), are solved numerically using the two-phase flow solver described in \cite{MartinHPC}, with suitable modifications for moving contact lines. The equations are discretized in space using the finite element method and the solver is implemented in the C++ based finite element open source library \texttt{deal.II} \cite{DEAL1, DEAL2}.
\Ge{The interface between the two fluids is evolved using the conservative level set method \cite{CONSLS}, so that only one fixed mesh is required. Specifically, we use uniformly distributed quadrilaterals (\ie squares) with grid spacing $\Delta x=1/160$, and time steps restricted by the stability condition $\Delta t_{\max} = c_0\,\textrm{Ca}\,\Delta x$ ($c_0$ is a constant) \cite{MartinHPC}. This leads to $\Delta t =10^{-4} \sim 10^{-3}$ depending on the capillary number Ca.}
\Ge{The moving contact-line velocities are pre-computed by solving Eqs.\ (\ref{C-H}--\ref{C-H Stokes}) and used as tabulated inputs.
In the simulations, additional numerical parameters include the frequency of the reinitialization (a technical procedure in the level set method, see \cite{CONSLS}), the size of the local box in the contact line model (\ie $L'$ and $H'$, see Fig.\ \ref{fig: schematic}), and the size of a so-called bump function (related to $\partial \Omega_\Gamma$, see \cite{Hanna}). These are chosen to yield numerically-independent results
as in \cite{Martin, Hanna} where validations are presented.}
\section{Results} \label{results}
We study the effective slip over microcavities partially filled with a second fluid using the parameters summarized in Tab.\ \ref{tab: param}.
Here, six pairs of fluids are considered as in the experiment \cite{Solomon}, leading to a wide range of $\tilde{\mu}_2/\tilde{\mu}_1$ from $31.7$ to $3.83\e{-3}$. \Ge{The filling fraction is initialized to $\delta=0.94$ (corresponding to a depth $d_0=0.47H$) to allow for some sloshing of the lubricant.} The velocity ratio $\chi=\tilde{U}_c/ (\dot{\tilde{\gamma}} \tilde{H})$ is set constant for all the fluid pairs and shear rates to reduce the number of parameters and focus on the individual physical effects mentioned above. This also implies that the capillary number is varied by changing the outer fluid viscosity. As an example, for $\dot{\tilde{\gamma}}=800$ s$^{-1}$, $\tilde{H}=20$ $\mu$m, and $\tilde{U}_c=2.63$ m/s (see \cite{dimension} for the detailed estimation), the velocity ratio $\chi \approx 164$ (which is indeed much greater than 1), and the corresponding Ca increases from $1.92\e{-3}$ to $1.59$ for the different viscosities considered. We further modify Ca at a fixed $\chi$ to study the effect of interface deformation. Finally, the effect of the static contact angle is investigated by considering $\theta_s=80\degree$ (leading to a convex meniscus), $\theta_s=105\degree$ (concave meniscus), \Ge{and $\chi u_c(\theta)=0$ for $\theta_s \in [76\degree, 84\degree ]$ to mimic some contact angle hysteresis.}
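The dimensional estimate above can be reproduced directly from the definitions $\chi=\tilde{U}_c/(\dot{\tilde{\gamma}}\tilde{H})$ and $\textrm{Ca}=\tilde{\mu}_1\dot{\tilde{\gamma}}\tilde{H}/\tilde{\sigma}$ (our sketch; the surface tension is not quoted in this passage, and $\tilde{\sigma}\approx 0.02$ N/m is an inferred value consistent with the quoted Ca range):

```python
# Dimensional estimate of the velocity ratio chi and capillary number Ca.
gamma_dot = 800.0   # shear rate [1/s]
H = 20e-6           # distance of imposed shear above the floor [m]
U_c = 2.63          # characteristic contact line velocity [m/s]
sigma = 0.02        # surface tension [N/m] (assumed, inferred from Ca range)

chi = U_c / (gamma_dot * H)                   # velocity ratio, ~164
Ca = lambda mu1: mu1 * gamma_dot * H / sigma  # capillary number per mu_1

ca_low = Ca(0.0024)   # least viscous outer fluid in Tab. 1, ~1.92e-3
ca_high = Ca(1.985)   # most viscous outer fluid in Tab. 1, ~1.59
```

With the tabulated outer-fluid viscosities, this recovers $\chi \approx 164$ and the quoted Ca range of $1.92\e{-3}$ to $1.59$.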
\begin{table}[t]
\centering
\caption{Parameters for the outer (subscript 1) and lubricant (subscript 2) fluids in the present study.}
\bgroup
\def\arraystretch{1.05}
{\setlength{\tabcolsep}{0.9em}
\begin{tabular}{l l l c c c c}
\hline
$\tilde{\mu}_1 [\nicefrac{kg}{ms}]$
&$\tilde{\mu}_2 [\nicefrac{kg}{ms}]$
&$\nicefrac{\tilde{\mu}_2}{\tilde{\mu}_1}$
&$\delta$
&$\chi$
&Ca
&$\theta_s$ (deg)\\
\hline
0.0024 &0.0760 &$31.7$ &0.94 &164 &$0.02 \sim 5$ &80 or 105 or $76 \sim 84$\\
0.0024 &0.0076 &$3.17$ &0.94 &164 &$0.02 \sim 5$ &80 or 105 or $76 \sim 84$\\
0.0152 &0.0076 &$0.5$ &0.94 &164 &$0.02 \sim 5$ &80 or 105 or $76 \sim 84$\\
0.1504 &0.0076 &$5.05\e{-2}$ &0.94 &164 &$0.02 \sim 5$ &80 or 105 or $76 \sim 84$\\
0.8942 &0.0076 &$8.50\e{-3}$ &0.94 &164 &$0.02 \sim 5$ &80 or 105 or $76 \sim 84$\\
1.9850 &0.0076 &$3.83\e{-3}$ &0.94 &164 &$0.02 \sim 5$ &80 or 105 or $76 \sim 84$\\
\hline
\end{tabular}}
\egroup
\label{tab: param}
\end{table}
\subsection{Motions at the contact line}
We precompute the contact line velocity $\chi u_c$ as a function of the contact angle $\theta$ for the range of parameters listed in Tab.\ \ref{tab: param}, using the nanoscale phase-field model described in Sec.\ \ref{model}.
Numerically, the non-dimensional height of the local box is $H'=36$, with grid size $h = 36/128$ and time step $\Delta t=0.5$. Steady state results obtained after $4000$ time steps are plotted in Fig.\ \ref{fig:tabulated}.
Here, the solid lines correspond to a static contact angle $\theta_s=80\degree$ measured from the outer fluid side. The non-zero contact line velocity at $\theta \neq \theta_s$ shows the tendency of the contact line to reach its equilibrium position. For the results presented next, we have also used static contact angles $\theta_s=105\degree$, corresponding to menisci protruding into the cavity, and $\theta_s \in[76\degree , 84\degree]$, modelling a contact angle hysteresis of $8\degree$. Keeping the rest of the parameters unchanged, the contact line velocities for $\theta_s=105\degree$ and the case with hysteresis are obtained by shifting the curves pertaining to each viscosity ratio horizontally to the modified static angles, see Fig.\ \ref{fig:tabulated} (right) for an example.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.9\columnwidth]{phase-field22.pdf}
\end{center}
\caption{(color online) Relations between the apparent contact angles and the contact line velocities for $\theta_s=80\degree$ under various viscosity ratios, precomputed using the contact line model described in Sec.\ \ref{model}. \Ge{Inset shows a close-up at small angle deviations, whereas the panel on the right illustrates how we model different static angles and contact angle hysteresis.}}
\label{fig:tabulated}
\end{figure}
\Ge{As we vary $\tilde{\mu}_2/\tilde{\mu}_1$ over four orders of magnitude, Fig.\ \ref{fig:tabulated} reveals a non-trivial dependence of the contact line dynamics. At one extreme, $\chi u_c$ changes rapidly with $\theta$ for very viscous lubricants, almost diverging for $\theta < 60\degree$ in the case of $\tilde{\mu}_2/\tilde{\mu}_1=31.7$; at the other, as the lubricant becomes less and less viscous, the $\chi u_c (\theta)$ relations eventually collapse onto one curve. Qualitatively, the reduction of $\chi u_c$ with decreasing $\tilde{\mu}_2/\tilde{\mu}_1$ is expected, as we normalize the flow using the shear in the outer fluid; in other words, it is easier (hence requires a smaller velocity) to displace a less viscous fluid (\ie to deviate $\theta$ from $\theta_s$). In addition, the change of the curvature of the function $\chi u_c ( \theta)$ can be inferred from the reciprocity of the two fluids (\ie $-\chi u_c$ instead of $\chi u_c$ and $180\degree-\theta$ instead of $\theta$ for the same $\tilde{\mu}_1/\tilde{\mu}_2$)\footnote{This is merely a qualitative argument, as it does not preserve the static angle unless $\theta_s=90\degree$.}. Quantitatively, the present model has been compared favorably with Cox's law \cite{COX}, especially for small angle deviations \cite{Martin}. Since this is the regime in which the fluids normally operate, we expect our model to accurately capture the small-scale contact line motions.}
\Ge{Finally, we note that the slope of the contact line velocity profiles near the static contact angle, $\theta_s$, (cf.\ Fig.\ \ref{fig:tabulated} inset) plays an important role in the wetting of the cavity under external shear.} As we will discuss later, the variation of the contact line velocity with the viscosity ratio completely alters the robustness of lubricant infused cavities.
\subsection{Effective slip above the cavities} \label{eslip}
Now, we present steady-state results of the effective slip length, defined in Eq.\ \eqref{eff slip}, obtained by solving the governing Eqs.\ (\ref{Stokes}--\ref{slip bc}) for the setup depicted in Fig.\ \ref{fig: schematic}, using the two-phase Stokes solver described in Sec.\ \ref{numm}. \Ge{The overall results are compiled and plotted in Fig.\ \ref{fig: slip-visc}, divided into the following two categories. First, we discuss the results obtained fixing the interface shape and pinning the contact point at the cavity corner, and compare with existing theories (denoted as ``fix./pin."). In a later section, we present results with depinned interface, \ie contact line not at the cavity corner, obtained both fixing the interface (``fix./depin.") and letting it move according to the multiscale model presented above (``depin.").}
\begin{figure}[t]
\begin{center}
\includegraphics[width=.6\columnwidth]{e-slip2.pdf}
\begin{picture}(0,0)
\put(-80,70){\includegraphics[height=1.8cm]{half-filled-cavity-inset0.pdf}}
\put(-30,70){\includegraphics[height=1.8cm]{half-filled-cavity-inset1.pdf}}
\put(-80,62){fix./pin.}
\put(-30,62){ depin.}
\end{picture}
\end{center}
\caption{(color online) Effective slip as function of viscosity ratio under various static contact angles, filling fractions, and capillary numbers. The bars represent flat and fixed interfaces fully covering the cavity ($\delta=1$), where the analytical result from \cite{Schonecker} is also plotted (red line). The filled symbols, upper blue and lower green triangles, stand for convex ($\theta_s=80\degree$) or concave ($\theta_s=105\degree$) interfaces pinned at the cavity tip in the zero capillary limit (\ie fixed interface). The open symbols are the steady state solutions at $\delta=0.94$ for different $\theta_s$ and Ca. \Ge{The capillary number Ca is not indicated as it does not affect the results noticeably.}
\label{fig: slip-visc}}
\end{figure}
\subsubsection{Fixed interfaces pinned at the corners.}
\Ge{Partly as a validation of our numerical methods, we first consider interfaces of fixed shape pinned at the cavity tips.}
These are obtained by imposing in the simulation flat/circular menisci fully covering the cavities, indicated by bars/filled triangles in Fig.\ \ref{fig: slip-visc}. Compared with the analytical model taking into account finite dissipation in the cavity \cite{Schonecker}, close agreement is observed over the broad $\tilde{\mu}_2/\tilde{\mu}_1$ spectrum examined.
Specifically, the results show a continuous decrease of the effective slip as the viscosity ratio increases; the rate of variation is logarithmic for $0.1 < \tilde{\mu}_2/\tilde{\mu}_1 < 10$, it begins to saturate for $\tilde{\mu}_2/\tilde{\mu}_1 \lesssim 0.1$, and it is practically zero for $\tilde{\mu}_2/\tilde{\mu}_1 > 10$.
We further examine the curvature dependence of the effective slip length, using $\theta_s = 80\degree$ and $105\degree$ as two representative curvatures for weakly convex and concave interfaces respectively.
As shown in Fig.\ \ref{fig: slip-visc},
for $\tilde{\mu}_2/\tilde{\mu}_1 < 1$, weakly convex interfaces have larger slip than weakly concave ones, consistent with previous analytical and experimental studies \cite{Davis_Lauga, Karatay}.
The difference in effective slip between convex and concave menisci increases in the limit of zero viscosity ratio, where the slip is approximately 25\% larger in the convex case.
\Ge{On the other hand, when $\tilde{\mu}_2/\tilde{\mu}_1 > 1$, the relative magnitude flips: the concave interfaces have a larger slip length than the flat ones, while the convex interfaces can even have negative slip, adding more drag to the flow.}
The reason for this asymmetry is rather straightforward.
Similar to the reasoning in \cite{Sbragalia_Prosperetti}, the shear stress increased by a more viscous fluid reduces the local slip, even more so when the interface bows into the channel, hence a smaller $\lambda_e$ for the convex meniscus than for the concave one.
\Ge{Lastly, we remark that the dependence of the effective slip length on the curvature is non-trivial. Previous studies have shown the existence of a critical contact angle beyond which the effective slip becomes negative ($\theta_s \lesssim 30\degree$ by our definition) \cite{Davis_Lauga, Karatay, finland}. The angles we consider here are far from that range.}
\subsubsection{Interfaces depinned from the corners.}
\Ge{Next, we allow the interface to deform and slide on the cavity walls under external shear, removing the constraint of edge pinning considered earlier. The data pertain to the steady-state configuration, reached in shorter times at smaller Ca and verified to be unaffected by numerical perturbations. Specifically, the effective slip lengths obtained by initializing the filling ratio of the cavity to $\delta=0.94$ are displayed with open or round symbols in Fig.\ \ref{fig: slip-visc}.}
\begin{figure}[t]
\begin{center}
\includegraphics[width=.6\columnwidth]{filling3.pdf}
\end{center}
\caption{Effective slip of partially filled cavities for convex ($80\degree$) and concave ($105\degree$) interfaces for $\tilde{\mu}_2/\tilde{\mu}_1=0.05$ (blue triangles) and 3.17 (green triangles) in the zero capillary limit. The main figure shows power law relations of the slip length when plotted against the inverse ``void fraction'' $(1-\delta)^{-1}$, indicated by the dashed lines (linear least squares fits for $(1-\delta)^{-1}>5$, or equivalently $\delta > 0.8$). The inset shows the sharp reduction/increase of the slip length as the meniscus recedes.}
\label{fig: slip-height}
\end{figure}
\Ge{First, we note that the effective slip length of partially filled cavities differs appreciably from that of fully covered ones, regardless of the contact angle and the capillary number. For very low ($\tilde{\mu}_2/\tilde{\mu}_1 < 0.1$) and very high ($\tilde{\mu}_2/\tilde{\mu}_1 > 10$) viscosity ratios, the difference is at least a factor of 2. Meanwhile, among the cases considered for depinned interfaces, the effect of the viscosity ratio on the slip length is weaker than it is for pinned interfaces: the overall variation of $\lambda_e$ is reduced.}
These observations suggest that the filling fraction of the cavity may be the main factor determining the effective slip.
To examine possible relationships between $\lambda_e$ and $\delta$, we display in Fig.\ \ref{fig: slip-height} the effective slip under various filling fractions, for both convex ($\theta_s=80\degree$) and concave ($\theta_s=105\degree$) interfaces, at $\tilde{\mu}_2/\tilde{\mu}_1=0.05$ and 3.17. For this extensive parameter study, fixed interface shapes are imposed, corresponding to the zero-capillary limit (minimum-energy interface), to speed up the simulations. Indeed, a small capillary number does not affect the result, as shown in Fig.\ \ref{fig: slip-visc}, where the slip length for the ``fix./depin.'' cases is not significantly different from that of the depinned cases at low Ca.
\Ge{As shown in Fig.\ \ref{fig: slip-height}, the effective slip length clearly depends on the filling fraction: as the meniscus recedes from the cavity tip, $\lambda_e$ quickly decreases or increases depending on $\tilde{\mu}_2/\tilde{\mu}_1$; the variation is sharpest in the early stage ($0.8 \lesssim \delta < 1$), while it becomes nearly negligible as $\delta$ decreases further.
Plotting $\lambda_e$ against $(1-\delta)^{-1}$, which may be interpreted as an inverse ``void fraction'' of the cavity, we find a power-law relation between the effective slip and the filling fraction. As indicated by the dashed lines in Fig.\ \ref{fig: slip-height}, the effective slip behaves as $\lambda_e \sim (1-\delta)^{-c}$ for $\delta > 0.8$, where $c$ is a constant related to the viscosity ratio and the overall geometry. Specifically, using linear least squares, we find $c \approx 0.38$ (convex) and $\approx 0.19$ (concave) for $\tilde{\mu}_2/\tilde{\mu}_1=0.05$, and $c \approx -0.40$ (convex) and $\approx -0.09$ (concave) for $\tilde{\mu}_2/\tilde{\mu}_1=3.17$. At equal viscosities, the $\lambda_e$--$(1-\delta)^{-1}$ relations display cone-like patterns, with the spreading angle a function of both the viscosity ratio and the meniscus curvature. At lower filling ratios, all the points converge to the value of the slip length of the single-phase cavity.
We remark that a theoretical determination of $c$ is likely difficult, as the governing equation here is biharmonic \cite{Crowdy_tran, Ng_Wang}.}
Nevertheless, our results clearly illustrate the pronounced dependence of the effective slip on the interface displacement, even when small, for transverse grooves.
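As an aside, the least-squares extraction of the exponent $c$ amounts to a linear fit in log-log coordinates. The following is a minimal sketch of such a fit (the function name and the synthetic data, generated from an imposed exponent, are our own illustration, not data from the paper):

```python
import numpy as np

def fit_slip_exponent(delta, lam_e):
    """Fit lam_e ~ (1 - delta)^(-c) by linear least squares in log-log
    coordinates, restricted to delta > 0.8 as done in the text."""
    mask = delta > 0.8
    x = np.log(1.0 / (1.0 - delta[mask]))  # log of inverse "void fraction"
    y = np.log(lam_e[mask])
    c, intercept = np.polyfit(x, y, 1)     # slope is the exponent c
    return c, np.exp(intercept)

# Synthetic data with a known exponent (illustrative only)
delta = np.linspace(0.81, 0.99, 20)
lam_e = 0.05 * (1.0 - delta) ** (-0.38)
c, prefactor = fit_slip_exponent(delta, lam_e)
print(round(c, 2))  # recovers the imposed exponent 0.38
```

The same one-line `np.polyfit` call applied to the simulated $(\delta, \lambda_e)$ pairs yields the exponents quoted above.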
\Ge{We note from above that the slip $\lambda_e$ varies in opposite directions depending on $\tilde{\mu}_2/\tilde{\mu}_1$. This is also shown in Fig.\ \ref{fig: slip-visc}, where the effective slip of the depinned interfaces intersects the red line (\ie the results for a flat interface) at $\tilde{\mu}_2/\tilde{\mu}_1 =1$ for both contact angles under consideration. \Ge{Specifically, the effective slip $\lambda_e$ is the same for $\delta=0.94$ and $\delta=1$ if $\tilde{\mu}_2 = \tilde{\mu}_1$;} when $\tilde{\mu}_2 < \tilde{\mu}_1$, \Ge{the slip is larger for $\delta=1$; when $\tilde{\mu}_2 > \tilde{\mu}_1$, on the contrary, it is larger for} $\delta=0.94$. This crossover thus suggests an additional viscosity dependence of the effective slip, coupled with the filling fraction of the cavities.}
\begin{figure}[t]
\begin{center}
\subfigure[\quad Flat (tip)]
{\includegraphics[width=.42\columnwidth]{tang_stress_tip.pdf}}
\subfigure[\quad Flat (meniscus)]
{\includegraphics[width=.42\columnwidth]{tang_stress_int.pdf}}
\subfigure[\quad Convex (tip)]
{\includegraphics[width=.42\columnwidth]{tang_stress_tip_80.pdf}}
\subfigure[\quad Concave (tip)]
{\includegraphics[width=.42\columnwidth]{tang_stress_tip_105.pdf}}
\end{center}
\caption{Normalized tangential stress evaluated on the plane of the cavity tip (a, c, d) or along the menisci (b) for various viscosity ratios. In all the cases, the menisci are fixed and depinned from the corners, corresponding to filling fraction $\delta=0.94$.}
\label{fig: stress}
\end{figure}
\Ge{To quantitatively compare the effect of the viscosity ratio on $\lambda_e$ at $\delta=0.94$, we display the tangential shear stress $\tau_{xy}$ for flat, convex, and concave menisci in Fig.\ \ref{fig: stress}. Here, $\tau_{xy}$ is evaluated either on the cavity tip (at $y=0$) or along the fluid-fluid interface (at $y=-0.03H$), and it is normalized by the unit tangential shear stress $\tau_\infty$ imposed above the floor (at $y=H$).
As shown in Fig.\ \ref{fig: stress}, the normalized shear stress decreases as we reduce $\tilde{\mu}_2/\tilde{\mu}_1$ for all the cases, consistent with enhanced slip at lower lubricant viscosities; however, $\tau_{xy}/\tau_\infty$ does not converge to zero, as it would if the cavities were fully covered.
Close comparison of Fig.\ \ref{fig: stress}(c) and (d) also explains the flipping of the relative magnitude of $\lambda_e$ between convex and concave interfaces noted above: the shear stress is lower for convex interfaces when $\tilde{\mu}_2/\tilde{\mu}_1 <1$, while it is lower (on average) for the concave ones at $\tilde{\mu}_2/\tilde{\mu}_1 >1$.
Moreover, Fig.\ \ref{fig: stress} reveals that the distribution of the local shear for partially filled cavities is non-uniform. When $\tilde{\mu}_2/\tilde{\mu}_1 <1$, $\tau_{xy}/\tau_\infty$ always attains its minimum at $x=0$ and increases gradually towards the walls (at $x=\pm 0.25$); when $\tilde{\mu}_2/\tilde{\mu}_1 > 1$, the shear stress profiles can have several local minima/maxima depending on the protrusion angle. Such non-uniformity is most prominent when the interface is convex. In general, both the viscosities of the two fluids and the geometry of the liquid-infused cavities appear to influence $\tau_{xy}/\tau_\infty$.}
\Ge{We remark that constant shear stress along substrate surfaces is sometimes assumed in theoretical models to obtain analytical solutions \cite{Schonecker}. Although it is verified for fully-covered flat cavities (see Fig.\ 4 in \cite{Schonecker}), our results suggest that it is inaccurate for partially filled ones, even along the fluid-fluid interface, see Fig.\ \ref{fig: stress}(b). Since liquid-infused substrates are not always fully-covered in practice \cite{Wexler}, our simulations suggest this assumption be relaxed when developing more comprehensive models.}
Finally, we discuss the role of capillarity and hysteresis, referring back to the circular symbols in Fig.\ \ref{fig: slip-visc}. These data are obtained via the multiscale contact line model for capillary numbers Ca $=0.02\sim 5$ and contact angles $\theta_s=80\degree$, $105\degree$, or $76\degree \sim 84\degree$, at initial filling fraction $\delta=0.94$.
\Ge{Surprisingly, we find virtually no influence of the contact angle hysteresis on the effective slip length over the entire range of viscosity ratios considered. The filled circles, corresponding to $\theta_s \in [76\degree, 84\degree]$, lie closely on top of the blue open circles denoting $\theta_s=80\degree$.
Our results thus provide evidence that small-scale roughness on the substrate surface, due to the material itself or to the fabrication precision, does not necessarily increase the overall drag over the cavities. Indeed, as discussed in \cite{Schonecker}, the effective slip length is a far-field effect determined by the mean velocity above the substrate. Since a small contact angle hysteresis does not significantly alter the interface profile or its wetting behavior, these changes are expected to be quickly smeared out away from the substrate.}
The above reasoning applies only when the capillary number is small.
Further increasing the capillary number, hence the shear, can eventually deform the fluid interface to an extent that a stable configuration may not be attainable. In the remainder, we consider the lubricant-infused surface under extreme shear rates. As we examine the possible consequences under various conditions, a seemingly counter-intuitive technical solution will emerge.
\subsection{Possible drainage of the lubricant}
\begin{figure}[t]
\centering
\subfigure[$\quad \theta_s=80\degree$]
{\includegraphics[width=.35\columnwidth]{case2a_Ca2e-2.pdf}}
\subfigure[$\quad \theta_s=105\degree$]
{\includegraphics[width=.35\columnwidth]{case2a_Ca2e-2_105.pdf}}
\caption{(color online) Typical shapes of stable interfaces and streamlines for the flow over a partially filled cavity. These examples correspond to the steady state configurations, for (a) $\theta_s=80\degree$ and (b) $\theta_s=105\degree$, at $\tilde{\mu}_2/\tilde{\mu}_1=0.5$ and Ca $=0.02$.}
\label{fig: shape}
\end{figure}
First, we examine typical interface profiles, both convex and concave, under moderate shear levels, see Fig.\ \ref{fig: shape}. Specifically, we consider the viscosity ratio $\tilde{\mu}_2/\tilde{\mu}_1=0.5$, the capillary number Ca $=0.02$, and the initial contact angle $\theta_0 = \theta_s$. The steady-state solutions are taken at $t=5$ in units of $1/\tilde{\dot{\gamma}}$.
\Ge{As illustrated in the figure, the flow, while circulating inside the cavity, is already parallel at $y \approx 0.5H$. The deformation of the interfaces is almost negligible compared to the initial conditions, with only the contact points displacing slightly in opposite directions due to the shear. These two configurations are examples of lubricant-infused cavities in working condition. The overall small change of the interface shapes explains the weak shear dependence of the effective slip length discussed in Sec.\ \ref{eslip}.}
\begin{figure}[t]
\begin{center}
\subfigure[$\quad \theta_s=80 \degree$]
{\includegraphics[width=.4\columnwidth]{menis_2_80.pdf}}
\subfigure[$\quad \theta_s=105\degree$]
{\includegraphics[width=.4\columnwidth]{menis_2_105.pdf}}
\end{center}
\caption{Interface profiles under increasing capillary numbers for viscosity ratio $\tilde{\mu}_2/\tilde{\mu}_1=5.05\e{-2}$ at $t=5$.}
\label{fig: meniscus 2}
\end{figure}
\begin{figure}[t]
\begin{center}
\subfigure[$\quad \theta_s=80 \degree$]
{\includegraphics[width=.4\columnwidth]{menis_2a_80.pdf}}
\subfigure[$\quad \theta_s=105\degree$]
{\includegraphics[width=.4\columnwidth]{menis_2a_105.pdf}}
\end{center}
\caption{Interface profiles under increasing capillary numbers for viscosity ratio $\tilde{\mu}_2/\tilde{\mu}_1=0.5$ at $t=5$.}
\label{fig: meniscus 2a}
\end{figure}
\begin{figure}[t]
\begin{center}
\subfigure[$\quad \theta_s=80 \degree$]
{\includegraphics[width=.4\columnwidth]{menis_1_80.pdf}}
\subfigure[$\quad \theta_s=105\degree$]
{\includegraphics[width=.4\columnwidth]{menis_1_105.pdf}}
\end{center}
\caption{Interface profiles under increasing capillary numbers for viscosity ratio $\tilde{\mu}_2/\tilde{\mu}_1=3.17$ at $t=5$.}
\label{fig: meniscus 1}
\end{figure}
\Ge{To test the robustness of the LIS under stronger shear, we successively increase the capillary number from 0.02 to 5, keeping the other parameters unchanged (see Tab.\ \ref{tab: param}). The resulting interface profiles are visualized in Figs.\ \ref{fig: meniscus 2}--\ref{fig: meniscus 1}.
As expected, increasing Ca generally leads to larger deformations of the interface. For $\tilde{\mu}_2/\tilde{\mu}_1=5.05\e{-2}$ (Fig.\ \ref{fig: meniscus 2}), the upstream contact point continuously moves towards the tip of the cavity, indicating a draining motion of the lubricant driven by the shear. For $\tilde{\mu}_2/\tilde{\mu}_1=0.5$ (Fig.\ \ref{fig: meniscus 2a}), it is instead the downstream contact point that responds more, almost leading to interface rupture at Ca $=0.5$.
However, as we further increase the viscosity ratio, increasing the shear has no visible effect on the interface.
For $\tilde{\mu}_2/\tilde{\mu}_1=3.17$ (Fig.\ \ref{fig: meniscus 1}), we barely observe any additional deformation even when increasing Ca by two orders of magnitude. The lubricant stays firmly in the cavity regardless of the external shear.}
\begin{figure}[t]
\begin{center}
\subfigure[$\quad \theta_s=80 \degree$]{\includegraphics[width=.49\columnwidth]{drain2_80.pdf}}
\subfigure[$\quad \theta_s=105\degree$]{\includegraphics[width=.49\columnwidth]{drain2_105.pdf}}
\end{center}
\begin{picture}(0,0)
\put(-55,75){\includegraphics[height=1.8cm]{half-filled-cavity-inset2.pdf}}
\end{picture}
\caption{Phase diagram in the viscosity ratio--cavity capillary number plane ($\tilde{\mu}_2/\tilde{\mu}_1$, $\tilde{\mu}_2/\tilde{\mu}_1$Ca), showing the robustness of the lubricant-infused cavities under various capillary numbers and viscosity ratios.
The inset reports the same data as a function of the outer capillary number Ca.}
\label{fig: drain}
\end{figure}
\Ge{Following these observations, we map the results for all the viscosity ratios and capillary numbers considered onto the phase diagram in Fig.\ \ref{fig: drain}. Here, three regimes are defined, which we label as stable, marginal, and unstable. For partially filled cavities with initial filling fraction $\delta=0.94$ (\ie initial depth $d_0/H=94\%$ measured from the contact points), we consider cases where the final depth remains within $94 \pm 2\%$ as \textit{stable}; if the final depth varies within $94 \pm 4\%$, which is very close to the cavity tip but still below it, we consider the configuration as \textit{marginal}; lastly, if one contact point has already or nearly hit the cavity tip, or if the interface is clearly disrupted (see \eg Fig.\ \ref{fig: meniscus 2a}), we consider the case as \textit{unstable}. A similar, but simplified, criterion was also chosen in \cite{Seo_etal_18} for the onset of gas-pocket instability in turbulent flows. We evaluate the final depths either in the steady states, or at $t=5$ if a steady state has not been reached.}
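The three-regime criterion can be stated compactly as a function of the final interface depth. A minimal sketch (our own restatement of the thresholds quoted above; the function name and default arguments are ours):

```python
def classify_cavity(final_depth, d0=0.94, tol_stable=0.02, tol_marginal=0.04):
    """Classify a cavity configuration from its final interface depth
    (fraction of the cavity height, measured from the contact points),
    following the three-regime criterion used for the phase diagram."""
    dev = abs(final_depth - d0)
    if dev <= tol_stable:
        return "stable"
    if dev <= tol_marginal:
        return "marginal"
    return "unstable"  # contact point at/near the tip, or interface disrupted

print(classify_cavity(0.95))  # stable   (within 94 +/- 2%)
print(classify_cavity(0.91))  # marginal (within 94 +/- 4%)
print(classify_cavity(1.00))  # unstable (contact point at the tip)
```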
\Ge{As shown in Fig.\ \ref{fig: drain}, the robustness of the LIS exhibits a rather complex dependence on the capillary number and the viscosity ratio.
On the lower-viscosity side, \ie $\tilde{\mu}_2/\tilde{\mu}_1 \lesssim 0.1$, lubricants with both convex and concave interfaces become unstable above a critical capillary number. Mapping the data onto the ($\tilde{\mu}_2/\tilde{\mu}_1$, $\tilde{\mu}_2/\tilde{\mu}_1$Ca) plane, our results suggest Ca$_{crit} \approx 0.01 \tilde{\mu}_1/\tilde{\mu}_2$. That is, the critical capillary number, defined with the outer fluid, is inversely proportional to the viscosity ratio $\tilde{\mu}_2/\tilde{\mu}_1$; for a fixed outer fluid, it is harder to drain a less viscous lubricant out of the cavity. Note that similar results were also observed experimentally for longitudinal grooves, where less viscous lubricants are found to remain over a longer distance within the grooves \cite{Liu}. Our simulations thus point in the same direction for the design of transverse LIS against shear-driven failures.}
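Under the scaling Ca$_{crit} \approx 0.01\,\tilde{\mu}_1/\tilde{\mu}_2$, a back-of-the-envelope stability check reads as follows (a sketch of the fitted scaling only, restricted to $\tilde{\mu}_2/\tilde{\mu}_1 \lesssim 0.1$ where it was established; function names and the hard-coded prefactor are our own illustration):

```python
def critical_capillary(mu_ratio):
    """Estimated critical outer capillary number for drainage of a partially
    filled transverse cavity (delta = 0.94), from Ca_crit ~ 0.01 * mu1/mu2."""
    if mu_ratio > 0.1:
        raise ValueError("scaling only established for mu2/mu1 <~ 0.1")
    return 0.01 / mu_ratio

def is_stable(mu_ratio, Ca):
    """True if the lubricant is expected to remain in the cavity."""
    return Ca < critical_capillary(mu_ratio)

print(is_stable(0.05, 0.1))  # True:  Ca_crit = 0.2
print(is_stable(0.05, 0.5))  # False: above the critical value
```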
\Ge{As the viscosity ratio increases further, however, the existence of a critical capillary number Ca$_{crit}$ is no longer clear: at $\tilde{\mu}_2/\tilde{\mu}_1 =0.5$, the equivalent Ca$_{crit} \approx 0.1 \tilde{\mu}_1/\tilde{\mu}_2$, an order of magnitude higher than before, while for $\tilde{\mu}_2/\tilde{\mu}_1 > 1$ no Ca$_{crit}$ has been found within a reasonable range of capillary numbers.
This shear-induced failure can be associated with draining of the cavities, and we propose that its mechanism is linked to the dewetting of the lubricant and to its viscosity.}
\Ge{Recalling the $\chi u_c ( \theta)$ relations for various viscosity ratios in Fig.\ \ref{fig:tabulated}, the profiles collapse onto one curve for $\tilde{\mu}_2/\tilde{\mu}_1 < 0.05$, suggesting that it is the cavity capillary number, $\tilde{\mu}_2/\tilde{\mu}_1$Ca, that determines the onset of failure. This is confirmed by testing one additional case with $\tilde{\mu}_2/\tilde{\mu}_1 = 0.02$ and the same dependency $\chi u_c ( \theta)$ as in the other cases (thus not extracted from the phase-field simulations); for this case, we indeed obtain the same Ca$_{crit} \approx 0.01 \tilde{\mu}_1/\tilde{\mu}_2$, consistent with the other viscosity ratios in the same range (see Fig.\ \ref{fig: drain}).
When $\tilde{\mu}_2/\tilde{\mu}_1$ increases above 0.5, Fig.\ \ref{fig:tabulated} shows a continuous deviation of the wetting relations. The slope of the $\chi u_c (\theta)$ curve near the static angle $\theta_s$ increases (in magnitude) rapidly with $\tilde{\mu}_2/\tilde{\mu}_1$, making it more difficult for the contact line to deform and hence reducing the drainage of the lubricant towards the cavity corner. Since the capillary number is limited by the shear rates and proportional to the scale of the micron-scale texture, substrates impregnated with very viscous lubricants are, in practice, very difficult to fail.}
\Ge{The above phenomenological mechanism suggests that, taking the lubricant viscosity as a design parameter, intermediate viscosity ratios (\eg $\tilde{\mu}_2/\tilde{\mu}_1 = 0.01 \sim 1$, depending on the specific conditions) are to be avoided in the application of LIS. This is consistent with previous experiments on longitudinal grooves for the lower-viscosity branch \cite{Liu};
more viscous lubricants, on the other hand, seem to ensure higher robustness.}
\Ge{Finally, we note that the critical capillary numbers reported here should be considered as estimates, since, in practice, drainage of the lubricant will also depend on the physical and chemical conditions near the cavity corner. However, we do not expect these practical limitations to affect the qualitative insight obtained from our simulations.}
\section{Conclusions}
In this paper, motivated by applications of micro-engineered liquid-infused surfaces, we study the drag reduction and robustness of the flow over an array of two-dimensional transverse grooves partially filled with an immiscible lubricant.
We use a multiscale numerical framework to model the wetting of the two fluids at the cavity walls as well as the deformation of the interface under shear. In particular, we combine two separate simulation methods at different scales: \textit{(i)} nanoscale phase-field simulations for the contact line dynamics, and \textit{(ii)} micron-scale Stokes flow simulations using information from \textit{(i)} as a modified boundary condition, assuming self-similarity of the velocity field in the vicinity of the moving contact line. We believe, however, that the approach is more general and could be extended to include molecular dynamics simulations modelling the surface chemistry and roughness at the nanoscale.
We examine the effective slip $\lambda_e$ in order to quantify the steady-state drag reduction of the LIS. Specifically, we fix the geometry of the cavity and vary the lubricant-to-outer-fluid viscosity ratio $\tilde{\mu}_2/\tilde{\mu}_1$, the capillary number Ca, the static contact angle $\theta_s$, and the filling fraction of the cavity $\delta$.
\Ge{The main results are summarized as follows.
\begin{enumerate}
\item \Ge{$\lambda_e$ depends primarily on $\delta$; the filling fraction is therefore the main factor determining the effective slip.}
\item \Ge{Lower $\tilde{\mu}_2/\tilde{\mu}_1$ leads to reduced drag; the reduction is, however, less pronounced compared to fully covered cavities.
We relate this effect to the shear stress profiles $\tau_{xy}$ along the cavity tip, and show that $\tau_{xy}$ is non-uniform (contrary to the fully covered cases).}
\item \Ge{The effect of the contact angle on the effective slip length differs across viscosity ratios and filling fractions.}
\item \Ge{The effect of the contact angle hysteresis and of the capillary number on $\lambda_e$ is negligible, as long as Ca remains below a critical value.}
\item \Ge{When Ca increases above a critical value Ca$_{crit}$, the LIS can fail. For an initial filling fraction $\delta =0.94$, the critical capillary number is Ca$_{crit} \approx 0.01 \tilde{\mu}_1/\tilde{\mu}_2$ for $\tilde{\mu}_2/\tilde{\mu}_1 \lesssim 0.1$. For very viscous lubricants (\eg $\tilde{\mu}_2/\tilde{\mu}_1 >1$), on the other hand, the cavity remains impregnated due to their generally larger contact line velocity.}
\end{enumerate}}
As a final remark, we note that this problem is characterised by a large number of control parameters, including \eg the geometry, the static contact angle, and the surface chemistry, so that this study can be extended in a number of non-trivial ways. In addition, from a purely hydrodynamic point of view, the flow above the cavity may affect the contact line motion: it may therefore be relevant to study the response of the flow in the cavity to temporally varying shear and to vortices relatively far from it.
\section*{Acknowledgments}
We thank Shervin Bagheri and U\'{g}is L\={a}cis for calling our attention to the problem of LIS. We also thank Mauro Chinappi and Pengyu Lv for many helpful discussions. This project is funded by the European Union Horizon 2020 research and innovation programme under Grant Agreement No.\ 664823 and by the Swedish Research Council (No.\ 621-2012-2360).
\section{Introduction}
It is a classical result that the solution to the standard heat equation $\partial_tu=\Delta u$, $u(0)=\phi_0$ allows the stochastic representation $u(t,x)=\mathbf E[\phi_0(X^{x,2}(t))]$, where $X^{x,2}$ is a Brownian motion started at $x\in\mathbb R^d$.
Space-time fractional evolution equations (EEs) extend the heat equation by introducing space-time heterogeneity. This is often done by considering the Caputo EE $D^\beta_0u=-(-\Delta)^{\frac\alpha 2}u$, where one substitutes the local operators $\partial_t$ and $\Delta$ with fractional analogues: respectively, the Caputo derivative $ D^\beta_0u(t) = c_\beta\int_0^tu'(r)(t-r)^{-\beta}\,dr $ and the fractional Laplacian $(-\Delta)^{\frac\alpha 2}u(x)= \mathcal F^{-1}(|\xi|^\alpha \mathcal F u(\xi))(x)$, where $\beta\in(0,1)$, $\alpha\in(0,2)$, $c_\beta=\Gamma(1-\beta)^{-1}$ and $\mathcal F$ is the Fourier transform (for standard references see \cite{kai,Bogdan}). It is well known that the fundamental solution to the Caputo EE is the law of the non-Markovian anomalous diffusion $Y^x(t)=X^{x,\alpha}(\tau_0(t))$ (see, e.g., \cite{Meerschaert2012}). Here $X^{x,\alpha}$ is the rotationally symmetric $\alpha$-stable L\'evy process started at $x\in\mathbb R^d$, and $\tau_0(t)$ is the inverse process of the $\beta$-stable subordinator $X^\beta(t)$. This density formula was first observed in \cite{Zas97}. The time-change interpretation first appeared in \cite{meershe,schef04}, based on \cite{BM01}. The process $Y^x$ displays space-heterogeneity due to the jump nature of $X^{x,\alpha}$. Time-heterogeneity also features in $Y^x$, as the time change $t\mapsto\tau_0(t)$ is constant precisely when the subordinator $t\mapsto X^\beta(t)$ jumps, so that $t\mapsto Y^x(t)$ is trapped on such time intervals. This trapping phenomenon leads to the process $Y^x$ spreading at a slower rate than $X^{x,\alpha}$. Indeed, in the physics literature the anomalous diffusion $Y^x$ is often referred to as a sub-diffusion when $\alpha=2$ (see, e.g., \cite{Zas94,Sai05,Mag07}). See \cite{schef04} for a characterisation of $Y^x$ as the scaling limit of continuous time random walks with heavy-tailed waiting times.
See \cite{BC11} for a characterisation of $Y^x$ as the scaling limit of random conductance models or asymmetric Bouchaud's trap models ($\alpha=2$). See \cite{Mag10,Mag15} for sample path properties of $Y^x$, and \cite{DS18,CK18} for heat kernel asymptotic formulas. Existence of classical solutions for Caputo EEs is generally a subtle problem. The works \cite{Ko03,BMN09,Caff16} tackle classical solutions on unbounded domains, while \cite{MeerChen,Vel09,Vel11,Leon} consider bounded domains; the proofs in the latter all rely on the spectral decomposition of the spatial operator. Stochastic representations for solutions to time-nonlocal equations are an active area of theoretical research (see, e.g., \cite{MeerB,Chen17,HKT17,CK18}), partly because they provide formulas in the general absence of closed forms and suggest probabilistic proof methods. Moreover, such representations can be useful for particle tracking codes (see, e.g., \cite{ZMB08}). Let us remark that Caputo EEs are applied in a variety of fields, such as physics, finance, economics, biology and hydrogeology (see, e.g., \cite{Zas97,Scal06,Scal00,MeeB13,Fed}).
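The time-changed construction $Y^x(t)=X^{x,\alpha}(\tau_0(t))$ lends itself to direct Monte Carlo simulation, which is what makes such representations attractive for particle tracking. The following is a minimal sketch (our illustration, not taken from the cited references) for the tractable special case $\beta=1/2$, $\alpha=2$, where the subordinator increment over a grid step $h$ has the exact representation $h^2/(2Z^2)$ with $Z$ standard normal; the step size and sample counts are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def inverse_subordinator_sample(t, h=1e-3):
    """One sample of tau_0(t), the inverse of the beta-stable subordinator,
    for beta = 1/2: the subordinator increment over a grid step h is exactly
    h**2 / (2 Z**2), Z standard normal (first passage time of Brownian motion)."""
    elapsed, total = 0.0, 0.0
    while True:
        z = rng.standard_normal(2048)
        path = total + np.cumsum(h**2 / (2.0 * z**2))
        idx = np.searchsorted(path, t)
        if idx < path.size:
            return elapsed + (idx + 1) * h  # first grid time with X^beta > t
        total = path[-1]
        elapsed += path.size * h

# The mean of tau_0(1) should approach t**beta / Gamma(1 + beta) = 2/sqrt(pi)
samples = np.array([inverse_subordinator_sample(1.0) for _ in range(2000)])
print(samples.mean())  # close to 1.128

# One sample of the subdiffusion Y^x(t) = x + B(tau_0(t)) (the case alpha = 2):
tau = inverse_subordinator_sample(1.0)
y = 0.0 + np.sqrt(tau) * rng.standard_normal()
```

Note how the trapping of $Y^x$ appears automatically: large subordinator jumps make $\tau_0$ flat, freezing the diffusion.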
In this work we focus on the following extension of the Caputo EE: the inhomogeneous space-time fractional EE on a bounded domain with Dirichlet boundary conditions and a time-nonlocal initial condition
\begin{equation}\label{preRL}
\left\{
\begin{split}
D_{\infty}^{\beta} \tilde u(t,x)&=\flap_\Omega \tilde u(t,x)+g(t,x), & \text{in }& (0,T]\times \Omega,\\
\tilde u(t,x)&=0, & \text{in }&[0,T]\times \partial\Omega, \\
\tilde u(t,x)&=\phi(t,x), &\text{in }&(-\infty,0]\times\Omega,
\end{split}
\right.
\end{equation}
where $\Omega\subset\mathbb R^d$ is a regular domain, $\flap_\Omega $ is the restricted fractional Laplacian\footnote{We define $\flap_\Omega$ on functions on $\Omega$, so that the Euclidean boundary $\partial\Omega$ makes sense in (\ref{preRL}). In the literature the operator $\flap_\Omega$ is often defined through the application of the singular integral definition of $-(-\Delta)^{\frac\alpha 2}$ to functions vanishing outside $\Omega$ (see, e.g., \cite{Bon16}).}, and the time operator $-D_{\infty}^{\beta}$ is the generator of the inverted $\beta$-stable subordinator\footnote{The operator $D_{\infty}^{\beta}$ is often referred to as the Marchaud derivative in the fractional calculus literature (see, e.g., \cite{kilbas}).}
\begin{equation}
D_{\infty}^{\beta} f(t)=\int_0^{\infty}(f(t-r)-f(t))\,\frac{\Gamma(-\beta)^{-1}dr}{r^{1+\beta}},\quad t\in\mathbb R.
\label{gen}
\end{equation}
As the main result of this work we prove existence and uniqueness of classical solutions to problem (\ref{preRL}) along with the stochastic representation for the solution
\begin{equation}\label{SR}
\begin{split}
\tilde u(t,x)=&\ \mathbf E\left[\phi\left(-X^{t,\beta}(\tau_0(t)),X^{x,\alpha}(\tau_0(t))\right)\mathbf 1_{\{\tau_0(t)<\tau_\Omega(x)\}}\right] \\
&\ +\mathbf E \left[\int_0^{\tau_0(t)\wedge \tau_\Omega(x)}g \left(-X^{t,\beta}(s),X^{x,\alpha}(s)\right)ds\right],
\end{split}
\end{equation}
where the processes $-X^{t,\beta}=t-X^\beta$ and $X^{x,\alpha}$ are independent, and $\tau_{\Omega}(x)$ is the first exit time of $X^{x,\alpha}$ from $\Omega$. To see why problem (\ref{preRL}) extends the Caputo EE, let $\phi(t)=\phi(0)$ for every $t\in(-\infty,0)$ and $g=0$ in both (\ref{preRL}) and (\ref{SR}). Then
\begin{equation*}
D_{\infty}^{\beta} \tilde u(t)=\int_0^t (\tilde u(t-r)-\tilde u(t))\,\frac{\Gamma(-\beta)^{-1}dr}{r^{1+\beta}}-\frac{\phi(0)-\tilde u(t)}{\Gamma(1-\beta)}t^{-\beta}=D^\beta_0u(t),
\end{equation*}
where $u$ is the restriction of $\tilde u$ to $t\ge0$, and one obtains the homogeneous Caputo EE and its solution, respectively. The recent works \cite{Du17,DYZ17} introduced a class of EEs that formally includes (\ref{preRL}). They are motivated by the success of related nonlocal EEs arising in image processing, peridynamics and heat conduction (see, e.g., \cite{Gil08,Bob10,Sil10,Gu12}), and the general lack of alternatives to Caputo-type time-nonlocal models.
Part of their intent is to introduce initial conditions on the `past' ($\phi$ on $(-\infty,0)\times\Omega$). Our stochastic solution (\ref{SR}) appears to be new, and it provides an interesting interpretation for the time-nonlocal initial condition $\phi$. This is because the overshoot $W(t)=X^{t,\beta}(\tau_0(t))$ is the waiting/trapping time of the anomalous diffusion $X^{x,\alpha}(\tau_0(t))$. We discuss an interpretation where the values of $\phi$ on $(-\infty,0)\times\Omega$ describe the initial condition at time $0$ with respect to the `depth' of $\Omega$, rather than the `past' of $\Omega$. To the best of our knowledge, there are no classical-wellposedness results for the EE (\ref{preRL}). Related weak-wellposedness results can be found in \cite{Du17,DYZ17} (for certain general L\'evy kernels in (\ref{gen})) and indirectly in \cite{miao} (for abstract Markovian generators), meanwhile \cite{Allen17} considers uniqueness of weak solutions. It is worth mentioning that our simple Lemma \ref{lem_equivsol} allows one to obtain wellposedness and regularity results for EEs such as (\ref{preRL}) as corollaries of theorems concerning inhomogeneous Caputo EEs (see, e.g., \cite{Ko03,Caff16}). To see why the stochastic representation (\ref{SR}) is natural, one can formally apply the classical probabilistic intuition for elliptic boundary value problems (see, e.g., \cite[Introduction, \S 3]{dynkin1965}) to problem (\ref{preRL}) rewritten as
\begin{equation}\label{bvpp}
\left\{
\begin{split}
\mathcal L\tilde u&=-g, & \text{in }&\Gamma,\\
\tilde u&=\phi, & \text{in }&\partial\Gamma,
\end{split}
\right.
\end{equation}
where $\mathcal L= (-D_{\infty}^{\beta}+\flap_\Omega)$ is the generator of the process $ \{(-X^{t,\beta}(s),X^{x,\alpha}(s))\mathbf1_{\{s<\tau_\Omega(x)\}}\}_{s\ge0} $ taking values in $(-\infty,T]\times \Omega$, $ \Gamma=(0,T]\times \Omega$, and $\partial \Gamma:=(-\infty,0]\times\Omega\cup [0,T]\times\partial \Omega,$ with $\phi=0$ on $ (0,T]\times\partial \Omega$.
To prove our main result, Theorem \ref{thm_main}, we derive two results of independent interest. Namely:
\begin{itemize}
\item Theorem \ref{thm_sspostRL}: the stochastic representation
\begin{equation}\label{SRpostRL}\begin{split}
u(t,x)=&\ \mathbf E\left[\phi_0\left(X^{x,\alpha}(\tau_0(t))\right)\mathbf 1_{\{\tau_0(t)<\tau_\Omega(x)\}}\right]\\
&\ +\mathbf E \left[\int_0^{\tau_0(t)\wedge \tau_\Omega(x)}f \left(-X^{t,\beta}(s),X^{x,\alpha}(s)\right)ds\right],
\end{split}
\end{equation}
is the unique classical solution to the inhomogeneous Caputo EE on bounded domain
\begin{equation}\label{postRL}
\left\{
\begin{split}
D_{0}^{\beta} u(t,x)&=\flap_\Omega u(t,x)+f(t,x), & \text{in }& (0,T]\times \Omega,\\
u(t,x)&=0, & \text{in }&[0,T]\times \partial\Omega, \\
u(t,x)&=\phi_0(x), &\text{in }&\{0\}\times\Omega;
\end{split}
\right.
\end{equation}
\item Theorem \ref{thm_wspostRL}: the stochastic representation (\ref{SRpostRL}) is a weak solution to problem (\ref{postRL}).
\end{itemize}
Let us outline our proof strategy for Theorem \ref{thm_main}. By plugging the values of $\phi$ into $\tilde u$, it is not hard to show the equivalence of classical solutions to problem (\ref{preRL}) and to problem (\ref{postRL}) with forcing term $f=g-D^\beta_\infty\phi$ and initial condition $\phi_0=\phi(0)$ (see Lemma \ref{lem_equivsol}). Moreover, a Dynkin formula argument proves that the respective stochastic representations (\ref{SR}) and (\ref{SRpostRL}) agree (see Lemma \ref{lem_SRs}). Hence, it is enough to prove Theorem \ref{thm_sspostRL}. We do so by proving Theorem \ref{thm_wspostRL} and then showing the required regularity of the candidate solution (\ref{SRpostRL}). The main feature of our regularity assumption on the data $\phi$ and $g$ is the differentiability in time. This is a consequence of the regularity assumption on $f$ in Theorem \ref{thm_sspostRL}, which we discuss now. Theorem \ref{thm_sspostRL} extends the proof of \cite[Theorem 5.1]{MeerChen}, where problem (\ref{postRL}) is treated for $f=0$. This proof uses separation of variables, combining eigenfunction expansions of $\flap_\Omega$ with Mittag-Leffler solutions to the Caputo initial value problem. Our separation of variables formula for the second term in (\ref{SRpostRL}) reads
\begin{equation*}
\sum_{n=1}^\infty\psi_n(x) u_n(t) = \sum_{n=1}^\infty\psi_n(x) \int_0^t \langle f(s),\psi_n\rangle (t-s)^{\beta-1} \beta E_\beta'(-\lambda_n (t-s)^\beta)\,ds,
\label{eigenex}
\end{equation*}
where $E_\beta(t)=\sum_{k=0}^\infty t^k\Gamma(k\beta+1)^{-1}$ is a Mittag-Leffler function, $\{\lambda_n,\psi_n\}_{n\in\mathbb N}$ is the system of eigenvalues-eigenfunctions of $\flap_\Omega$ and $\langle \cdot,\cdot\rangle$ is the inner product on $\Omega$. Unsurprisingly, each $u_n$ is the solution to the inhomogeneous Caputo initial value problem $D^{\beta}_0 u_n(t)=-\lambda_nu_n(t)+\langle f(t),\psi_n\rangle$, $u_n(0)=0$ (see \cite[Theorem 7.2]{kai}). As we require differentiability of $t\mapsto u(t)$, we want to differentiate each $t\mapsto u_n(t)$. To compensate for the singularity of the Mittag-Leffler kernel $t^{\beta-1}E_\beta'(-\lambda_n t^\beta)$ we require differentiability of $t\mapsto f(t)$. Note that for the space fractional heat equation ($\beta=1$) the Mittag-Leffler kernel is an exponential, and so continuity of $f$ is enough to differentiate the $u_n$'s. Related results in the literature also require differentiability of $f$ (see, e.g., \cite[Theorem 7.3]{Caff16}). Briefly, the arguments for Theorem \ref{thm_wspostRL} reduce the Caputo EE (\ref{postRL}) to a Poisson equation with zero boundary conditions on $\{0\}\times\Omega\cup [0,T]\times\partial\Omega$ by constructing space-time sub-Feller semigroups. We rely on the fact that the generator $-D_{0}^{\beta}$ only requires boundary conditions on the trivial set $\{0\}$. These arguments are an extension of the ideas in \cite{HKT17}, and they appear to be versatile. For example, they can be used to prove stochastic weak solutions for problem (\ref{preRL}) with general nonlocal operators in both space and time (ongoing work with the authors in \cite{DYZ17}). As far as we know, stochastic representations for solutions such as (\ref{SRpostRL}) for time-nonlocal EEs appear in \cite{HKT17}, while in \cite{Meer05} the solution is represented via the superposition principle.
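To make the $\beta=1$ remark above explicit (a routine check, included only for orientation): since $E_1(z)=e^z$, the kernel and the coefficients $u_n$ collapse to the classical Duhamel formula,
\begin{equation*}
t^{\beta-1}\beta E_\beta'(-\lambda_n t^\beta)\Big|_{\beta=1}=e^{-\lambda_n t},\qquad u_n(t)=\int_0^t \langle f(s),\psi_n\rangle e^{-\lambda_n(t-s)}\,ds,
\end{equation*}
and one verifies directly that this $u_n$ solves $u_n'(t)=-\lambda_n u_n(t)+\langle f(t),\psi_n\rangle$ with $u_n(0)=0$.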
It is perhaps worth mentioning that we do not invoke \cite[Theorem 3.1]{BM01}, and that all our methods work for the standard Laplacian case $\alpha=2$.
This work is structured as follows: in Section \ref{sec_preliminaries} we provide general notation and basic results about several stochastic processes obtained from $-X^{t,\beta}$ and $X^{x,\alpha}$, with a focus on semigroup results. In Section \ref{sec_ws} we prove Theorem \ref{thm_wspostRL}. In Section \ref{sec_existence} we prove Theorem \ref{thm_sspostRL}. In Section \ref{sec_sspre} we prove that the stochastic representation (\ref{SR}) is the unique classical solution to the EE (\ref{preRL}). In Section \ref{sec_int} we discuss an interpretation of the stochastic representation (\ref{SR}).
\section{Preliminaries}\label{sec_preliminaries}
\subsection{General notation}
We denote by $\mathbb N,$ $ \mathbb R^d,$ $ \Gamma(\cdot),$ $ \mathbf 1_{A}(\cdot)$, $a\wedge b$, a.e., lhs and rhs, the set of natural numbers, the $d$-dimensional Euclidean space, the gamma function, the indicator function of the set $A$, the minimum of $a,b\in\mathbb R$, the statement almost everywhere with respect to Lebesgue measure, the left-hand side and the right-hand side, respectively. We define the one parameter Mittag-Leffler function for $\beta\in(0,1)$ as $E_\beta(t)=\sum_{k=0}^\infty t^k\Gamma(k\beta+1)^{-1}$, $t\ge0$. We define the Banach spaces
\begin{align*}
B(A)=&\ \{f:A\to\mathbb R\text{ is bounded and measurable}\},\\
C(K)=&\ \{f\in B(K): f\text{ is continuous}\},\\
C_{\partial\Omega}(\Omega)=&\ \{f\in C(\overbar \Omega): f=0\text{ on }\partial \Omega\},\\
C_{0}([0,T])=&\ \{f\in C([0,T]): f(0)=0\}, \\
C_{\infty}((-\infty, T])=&\ \{f\in B((-\infty, T]): f\text{ is continuous and vanishes at infinity}\}, \\
C_{\partial\Omega}([0,T]\times \Omega)=&\ \{f \in C([0,T]\times \overbar\Omega): f=0\text{ on }\partial\Omega\},\\
C_{0,\partial\Omega}([0,T]\times \Omega)=&\ \{f \in C_{\partial\Omega}([0,T]\times \Omega): f(0)=0\},\\
C_{\infty,\partial\Omega}((-\infty, T]\times\Omega)=&\ \{f\in B((-\infty, T]\times\Omega):f\text{ is continuous and vanishes at infinity} \},\\
C_{b,\partial\Omega}((-\infty, T]\times\Omega)=&\ \{f\in B((-\infty, T]\times\overbar\Omega):f\text{ is continuous and } f=0\text{ on }\partial\Omega \},
\end{align*}
all equipped with the supremum norm, where $A$ is any subset of $\mathbb R^d$, the set $K\subset \mathbb R^d$ is compact, the set $\Omega\subset \mathbb R^d$ is bounded and open, $ T\ge0$. For a function $f:A\to \mathbb R$ we denote its supremum norm by either $\|f\|_\infty$ or $\|f\|_{C(A)}$. We define the spaces
\begin{align*}
C(O)=&\ \{f:O\to \mathbb R \text{ is continuous}\},\\
C^{k}(\Omega)=&\ \{f\in C(\Omega):f\text{ is $k$-times continuously differentiable} \}, \\
C^{k}_c(\Omega)=&\ \{f\in C(\Omega):f\in C^{k}(\Omega)\text{ and compactly supported} \}, \\
C^{\infty}_c(\Omega)=&\ \{f\in C(\Omega):f\text{ is smooth and compactly supported} \}, \\
C^1([0,T])=&\ \{f,f'\in C([0,T])\},\\
C^1_0([0,T])=&\ \{f,f'\in C_{0}([0,T])\},\\
C^{1}_\infty((-\infty, T])=&\ \{f,f'\in C_\infty((-\infty, T])\},\\
C^{1,k}((0,T)\times\Omega)=&\ \{f\in C((0,T)\times\Omega): f\text{ is once and $k$-times continuously }\\
& \quad\quad\text{differentiable in time and space, respectively}\},\\
C^{1,k}_c((0,T)\times\Omega)=&\ \{f\in C^{1,k}((0,T)\times\Omega): f\text{ is compactly supported}\},\\
C_{\partial\Omega}^1([0,T]\times \Omega)=& \ \{f\in C_{\partial\Omega}([0,T]\times \Omega):f\in C^{1,0}((0,T)\times\Omega), f'\in C_{\partial\Omega}([0,T]\times \Omega) \},\\
C_{\infty,\partial\Omega}^{n,k}((-\infty,T]\times \Omega)=& \ \{f\in C_{\infty,\partial\Omega}((-\infty,T]\times \Omega):\text{all derivatives up to order $n$ in time}\\
& \quad\quad\text{and $k$ in space exist and belong to }C_{\infty,\partial\Omega}((-\infty,T]\times \Omega) \},
\end{align*}
where the set $O\subset \mathbb R^d$ is open. We write $C_{\infty,\partial\Omega}^{1,0}((-\infty,T]\times \Omega)=C_{\infty,\partial\Omega}^1((-\infty,T]\times \Omega)$ and $C_{b,\partial\Omega}^1((-\infty, T]\times\Omega)= \{f,\partial_tf\in C_{b,\partial\Omega}((-\infty, T]\times\Omega)\}$. By $(L^1(O),\|\cdot\|_{L^1(O)})$, $(L^2(O),\|\cdot\|_{L^2(O)})$ and $(L^\infty(O),\|\cdot\|_{L^{\infty}(O)})$ we mean the standard Banach spaces of real-valued Lebesgue integrable, square-integrable and essentially bounded functions on $O$, respectively. Without risk of confusion we write $\|\cdot\|_{L^{\infty}(O)}=\|\cdot\|_{\infty}$. We denote by $\|L\|$ the operator norm of a bounded linear operator $L$ between Banach spaces. Given two sets of real-valued functions $F$ and $\tilde F$, we define $F\cdot\tilde F:=\{f\tilde f: f\in F,\ \tilde f\in\tilde F\}$, and by $\text{Span}\{F\}$ we mean the set of all linear combinations of functions in $F$. The notation we use for an $E$-valued stochastic process started at $x\in E$ is $X^x=\{X^x(s)\}_{s\ge0}$. Note that the symbol $t$ will often be used to denote the starting point of a stochastic process with state space $E\subset \mathbb R$. By a \emph{ strongly continuous contraction semigroup} $P$ we mean a collection of linear operators $P_s:B\to B$, $s\ge0$, where $B$ is a Banach space, such that $P_{s+r}=P_sP_r$, for every $s,r\ge 0$, $P_0$ is the identity operator, $\lim_{s\downarrow 0}P_sf=f$ in $B$, for every $f\in B$, and $\sup_s\|P_s\|\le1$. The generator of the semigroup $P$ is defined as the pair $(\mathcal L,\text{Dom}(\mathcal L))$, where $\text{Dom}(\mathcal L):=\{f\in B: \mathcal L f:=\lim_{s\downarrow0}s^{-1}(P_sf-f) \text{ exists in }B\}$. We say that a set $C\subset \text{Dom}(\mathcal L)$ is a \emph{core for} $(\mathcal L,\text{Dom}(\mathcal L))$ if the generator equals the closure of the restriction of $\mathcal L$ to $C$. 
We say that a set $C\subset B$ is \emph{invariant under} $P$ if $P_sC\subset C$ for every $s>0$. If a set $C$ is invariant under $P$ and a core for $(\mathcal L,\text{Dom}(\mathcal L))$, then we say that $C$ is an \emph{invariant core for} $(\mathcal L,\text{Dom}(\mathcal L))$. For a given $\lambda\ge0$ we define the \emph{resolvent of} $P$ by $(\lambda-\mathcal L)^{-1}:=\int_0^\infty e^{-\lambda s}P_s ds$, and recall that for $\lambda >0$, $(\lambda-\mathcal L)^{-1}:B\to \text{Dom}(\mathcal L)$ is a bijection and it solves the abstract resolvent equation
\begin{equation*}
\mathcal L(\lambda-\mathcal L)^{-1}f=\lambda (\lambda-\mathcal L)^{-1}f-f,\quad f\in B,
\end{equation*}
see for example \cite[Theorem 1.1]{dynkin1965}. By a \emph{sub-Feller semigroup} we mean a strongly continuous contraction semigroup on any of the Banach spaces of continuous functions defined above such that $P$ preserves non-negative functions. A \emph{Feller semigroup} is a sub-Feller semigroup such that its extension to bounded measurable functions preserves constants.
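Although no numerics are used in this paper, the Mittag-Leffler function defined above is easy to evaluate by truncating its series, which gives a quick sanity check of the definition. The following sketch is ours (the function name and truncation level are illustrative choices); plain summation is adequate only for moderate $|t|$.

```python
import math

def mittag_leffler(beta, t, n_terms=120):
    """Truncated series E_beta(t) = sum_k t^k / Gamma(k*beta + 1).

    Plain summation: fine for moderate |t|, but the series suffers
    cancellation and slow convergence for large negative arguments.
    """
    return sum(t ** k / math.gamma(k * beta + 1) for k in range(n_terms))

# Two classical special cases:
# E_1(t) = e^t, and E_{1/2}(z) = exp(z^2) * erfc(-z).
assert abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-12
assert abs(mittag_leffler(0.5, 1.0) - math.exp(1.0) * math.erfc(-1.0)) < 1e-9
```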
\subsection{Fractional derivatives, stable processes and related space-time semigroups}
\begin{definition}\label{def_operators}
For parameters $\beta\in(0,1)$ and $\alpha\in(0,2)$, we define: the \emph{Marchaud derivative} $D^{\beta}_{\infty}$ by formula (\ref{gen}); the \emph{Caputo derivative} $D^\beta_0$ by
\begin{equation*}
D^\beta_0f(t)=\int_0^{t} (f(t-r)-f(t))\,\frac{\Gamma(-\beta)^{-1}dr}{r^{1+\beta}}+(f(0)-f(t))\int_t^\infty \frac{\Gamma(-\beta)^{-1}dr}{r^{1+\beta}},\quad t>0,
\end{equation*}
and $D^\beta_0f(0)=\lim_{t\downarrow 0}D^\beta_0f(t)$; the \emph{restricted fractional Laplacian} $\flap_\Omega$ by
\begin{equation*}
\flap_\Omega f(x)=\lim_{\varepsilon\downarrow 0}\int_{ \Omega\backslash B_\varepsilon(x)}(f(y)-f(x))\,\frac{c_{\alpha,d}\,dy}{|x-y|^{d+\alpha}}-f(x)\int_{\mathbb R^d\backslash \Omega}\frac{c_{\alpha,d}\,dy}{|x-y|^{d+\alpha}},\quad x\in\Omega,
\end{equation*}
and $\flap_\Omega f(z)=\lim_{x\to z}\flap_\Omega f(x)$ for $z\in\partial\Omega$, where $c_{\alpha,d}^{-1}=\int_{\mathbb R^d}\frac{1-\cos y_1}{|y|^{d+\alpha}}\,dy$, $|\cdot|$ denotes the Euclidean norm on $\mathbb R^d$ and $B_\varepsilon(x)$ denotes the Euclidean ball of radius $\varepsilon>0$ around $x\in\Omega$.
\end{definition}
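As a concrete illustration of Definition \ref{def_operators} (a numerical sketch only; the helper name and discretisation parameters below are our own), one can evaluate the Caputo derivative of a smooth monomial through the equivalent form $D^\beta_0u(t)=\int_0^t u'(r)(t-r)^{-\beta}\Gamma(1-\beta)^{-1}\,dr$ recorded in Proposition \ref{prop_csg}-(iv) below. The substitution $t-r=w^{1/(1-\beta)}$ removes the kernel singularity, and for $u(t)=t^2$ the exact value is $D^\beta_0 t^2=2t^{2-\beta}/\Gamma(3-\beta)$.

```python
import math

def caputo_monomial2(beta, t, n=200_000):
    """Numerically evaluate D^beta_0 u(t) for u(t) = t^2 via
    D^beta_0 u(t) = int_0^t u'(r) (t-r)^{-beta} dr / Gamma(1-beta).

    Substituting t - r = w^{1/(1-beta)} turns the singular integral into
    (1/(1-beta)) * int_0^{t^{1-beta}} u'(t - w^{1/(1-beta)}) dw, which is
    non-singular and handled well by the midpoint rule.
    """
    a = 1.0 - beta
    upper = t ** a
    h = upper / n
    s = 0.0
    for i in range(n):
        w = (i + 0.5) * h
        r = t - w ** (1.0 / a)   # original integration variable
        s += 2.0 * r             # u'(r) = 2r
    return s * h / (a * math.gamma(1.0 - beta))

beta, t = 0.5, 1.0
exact = 2.0 * t ** (2.0 - beta) / math.gamma(3.0 - beta)
assert abs(caputo_monomial2(beta, t) - exact) < 1e-6
```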
We now define several sub-Feller semigroups related to the fractional derivatives in Definition \ref{def_operators}, and we collect some results relevant to us. For $\beta\in(0,1)$, we denote by $X^\beta=\{X^\beta(s)\}_{s\ge0}$ the standard $\beta$-stable subordinator, and by $p_s^\beta$ the smooth density of $X^\beta(s)$, $s>0$.
\begin{definition}
For $\beta\in (0,1)$, we denote by $-X^{t,\beta}=\{-X^{t,\beta}(s):=t-X^\beta(s)\}_{s\ge0}$ the \emph{inverted $\beta$-stable subordinator started at} $t\in\mathbb R$, characterised by the Laplace transforms $\mathbf E[e^{- X^{0,\beta}(s)k}]=e^{-k^{\beta}s}$, $k,s> 0$. We define the first exit/passage times $\tau_0(t)=\inf\{s>0: t-X^{\beta}(s)\le0\}$, $t\in\mathbb R$.
\end{definition}
\begin{definition}
For $\alpha\in (0,2)$, $d\in\mathbb N$, we denote by $X^{x,\alpha}=\{X^{x,\alpha}(s)\}_{s\ge0}$ the \emph{rotationally symmetric $\alpha$-stable L\'evy process with values in $\mathbb R^d$, started at }$x\in\mathbb R^d$, with characteristic functions $\mathbf E[e^{ik\cdot X^{0,\alpha}(s)}]=e^{-s|k|^{\alpha}}$, $k\in\mathbb R^d$, $s> 0$. We define the first exit times $\tau_\Omega(x)=\inf\{s>0:X^{x,\alpha}(s)\notin \Omega\}$, $x\in\mathbb R^d$.
\end{definition}
Recall that the smooth density of $-X^{t,\beta}(s)$, $s>0$, is supported on $(-\infty,t)$ and equals $p_s^\beta(t-\cdot)$, and that the law of $X^{x,\alpha}(s)$ is smooth for each $s>0$ (see for example \cite[page 10]{Bogdan}).
\begin{proposition}\label{prop_csg} Fix $T>0$. For the inverted $\beta$-stable subordinator $-X^{t,\beta}$, define the Feller semigroup $P^{\beta,\infty}=\{P^{\beta,\infty}_s\}_{s\ge0}$ on $C_\infty((-\infty,T])$ by $P^{\beta,\infty}_sf(t):=\mathbf E[f(-X^{t,\beta}(s))]$, $s\ge 0$, denote by $(\mathcal L_\beta^\infty,\text{Dom}(\mathcal L_\beta^\infty))$ the generator of $P^{\beta,\infty}$, and recall that $C^1_\infty((-\infty,T])$ is an invariant core for $(\mathcal L_\beta^\infty,\text{Dom}(\mathcal L_\beta^\infty))$ with $\mathcal L_\beta^\infty=-D^\beta_{\infty}$ on $C^1_\infty((-\infty,T])$.
\begin{enumerate}[(i)]
\item
Define the absorbed process $-X^{t,\beta}_0$ by
\begin{equation}
-X^{t,\beta}_0(s):=
\left\{ \begin{split}
-X^{t,\beta}(s)&,&\text{if }s<\tau_0(t),\\
0&,&\text{if }s\ge\tau_0(t).
\end{split}
\right.
\label{def_abs}
\end{equation}
Then the process $-X^{t,\beta}_0$ induces a Feller semigroup on $C([0,T])$, denoted by $P^\beta=\{P^{\beta}_s\}_{s\ge 0}$, with generator $(\mathcal L_\beta,\text{Dom}(\mathcal L_\beta))$. Moreover, $C^1([0,T])$ is an invariant core for $(\mathcal L_\beta,\text{Dom}(\mathcal L_\beta))$ and
\begin{equation*}
\mathcal L_\beta= -D^\beta_0\quad\text{on}\quad C^1([0,T]).
\end{equation*}
\item The sub-Feller semigroup $P^{\beta,\text{kill}}:=P^\beta$ on $C_{0}([0,T])$ is the sub-Feller semigroup induced by the killed version of the process (\ref{def_abs}), and its generator is $(\mathcal L_\beta^{\text{kill}}, \text{Dom}(\mathcal L_\beta^{\text{kill}}))=(\mathcal L_\beta, \text{Dom}(\mathcal L_\beta)\cap\{f(0)=0\})$. Moreover, $C^{1}_0([0,T])$ is an invariant core for
$(\mathcal L_\beta^{\text{kill}}, \text{Dom}(\mathcal L_\beta^{\text{kill}}))$ and
\begin{equation*}
\mathcal L_{\beta}^{\text{kill}}= -D^\beta_0\quad\text{on}\quad C^1_0([0,T]).
\end{equation*}
\item The following three identities hold
\begin{equation}
\mathbf E\left[\tau_0(t)\right]=\frac{t^\beta}{\Gamma(\beta+1)},\quad \mathbf E\left[e^{-\lambda \tau_0(t)}\right]=E_\beta(-\lambda t^\beta),\quad t,\lambda\ge0, \text{ and}
\label{ident1}
\end{equation}
\begin{equation}
\int_0^\infty p^\beta_s(t-r)\,ds=\frac{(t-r)^{\beta-1}}{\Gamma(\beta)},\quad t> r.
\label{ident3}
\end{equation}
\item The alternative representation of the Caputo derivative
\begin{equation*}
D^{\beta}_0u(t)=\int_0^tu'(r)\,\frac{(t-r)^{-\beta}dr}{\Gamma(1-\beta)}, \quad \text{for $0<t<T$,}
\end{equation*}
holds if $u\in C([0,T])\cap C^1((0,T))$ and $u'\in L^1((0,T))$.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}[(i)]
\item It is easy to prove that $P^\beta_s f(t):=\int^t_0 f(r) p^\beta_s(t-r)\,dr+ f(0)\int^0_{-\infty}p^\beta_s(t-r)\,dr$ is a Feller semigroup on $C([0,T])$, and the corresponding process is indeed $-X_0^{t,\beta}$. The proof of \cite[Proposition 14]{Vio10}\footnote{We select $c_+ = \Gamma(-\alpha)^{-1}$ and $c_-=0$ in \cite[Proposition 14]{Vio10}. In the statement of \cite[Proposition 14]{Vio10} it is required that $F\in C^2([0,\infty))$, but $F\in C^1([0,\infty))$ is enough.} shows that $C^1([0,T])\subset \text{Dom}(\mathcal L_\beta)$, and that $\mathcal L_\beta=-D_0^\beta$ on $C^1([0, T])$. To prove that $C^1([0, T])$ is invariant under $P^\beta$, we directly compute, for $g\in C^1([0, T])$, $t\in (0, T)$ and $s > 0$,
\begin{align*}
\partial_t P_s^\beta g(t)&=\partial_t\left(\int_0^tg(t-r)p^\beta_s(r)\,dr+g(0)\int_{-\infty}^{-t} p^\beta_s(-r)\,dr\right)\\
&=\int_0^tg'(t-r)p^\beta_s(r)\,dr + g(0)p^\beta_s(t)-g(0)p^\beta_s(t)=\int_0^tg'(t-r)p^\beta_s(r)\,dr.
\end{align*}
Then $C^1([0,T])$ is a dense subspace of $\text{Dom}(\mathcal L_\beta)$ which is invariant under $P^\beta$, and so it is a core for $(\mathcal L_\beta,\text{Dom}(\mathcal L_\beta))$ by \cite[Lemma 1.34]{Schilling}.
\item Similarly to part (i), it can be shown that $P^{\beta,\text{kill}}_s f(t)=\int^t_0 f(r) p^\beta_s(t-r)\,dr$. To show $\text{Dom}(\mathcal L_\beta)\cap\{f(0)=0\}\subset \text{Dom}(\mathcal L_\beta^{\text{kill}})$, let $f\in \text{Dom}(\mathcal L_\beta)\cap\{f(0)=0\}$; then, for some $\lambda>0$, there exists $g\in C([0,T])$ such that
\begin{equation*}
f(t)=\int_0^\infty e^{-\lambda s}P^{\beta}_s g(t)\,ds,\quad\text{and}\quad g(0)\frac{1}{\lambda}=\int_0^\infty e^{-\lambda s}P^{\beta}_s g(0)\,ds=f(0)=0,
\end{equation*}
and so $g\in C_0([0,T])$. As $P^{\beta}_s=P^{\beta,\text{kill}}_s$ on $ C_0([0,T])$, it follows that $f\in \text{Dom}(\mathcal L_\beta^{\text{kill}})$. The inclusion $\text{Dom}(\mathcal L_\beta)\cap\{f(0)=0\}\supset \text{Dom}(\mathcal L_\beta^{\text{kill}})$ is immediate using $P^{\beta}_s=P^{\beta,\text{kill}}_s$ on $ C_0([0,T])$. By equating a resolvent equation, it follows that $\mathcal L_\beta^{\text{kill}}=\mathcal L_\beta$ on $\text{Dom}(\mathcal L_\beta^{\text{kill}})$. Invariance of $C^1_0([0,T])$ can be proven as in part (i). The last statement now follows from part (i).
\item The first identity follows from the third identity (\ref{ident3}), since $\mathbf E\left[\tau_0(t)\right]=\int_0^\infty \mathbf P[X^\beta(s)<t]\,ds=\int_0^t\int_0^\infty p_s^\beta(t-r)\,ds\,dr=\int_0^t\frac{(t-r)^{\beta-1}}{\Gamma(\beta)}\,dr=\frac{t^\beta}{\Gamma(\beta+1)}$. The second identity follows by \cite[Theorem 2.10.2]{zolotarev}. To prove the third identity (\ref{ident3}), recall that
\begin{equation*} \label{E:ps-beta}
p_s^{\beta} (t-r) = s^{-1/\beta} p^{\beta}_1 (s^{-1/\beta} (t-r)), \quad t>r,
\end{equation*}
and then substitute $u=s(t-r)^{-\beta}$ to compute
\begin{align*}
\int_0^{\infty} p_s^{\beta}(t-r)\,ds &= (t-r)^{\beta-1} \int_0^{\infty} u^{-1/\beta} p^{\beta}_1 (u^{-1/\beta})\,du = (t-r)^{\beta-1}\frac{1}{\Gamma(\beta)}, \label{beta-density}
\end{align*}
using the Mellin transform of the $\beta$-stable density $p^{\beta}_1$ for the last equality (see for example \cite[Theorem 2.6.3]{zolotarev}).
\item This is a standard computation and we omit it.
\end{enumerate}
\end{proof}
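The identities (\ref{ident1}) can be tested by simulation in the one case where the law of $\tau_0(t)$ is elementary: for $\beta=1/2$ the inverse stable subordinator has the half-normal density $(\pi t)^{-1/2}e^{-s^2/(4t)}$ on $(0,\infty)$, i.e.\ $\tau_0(t)$ has the law of $|Z|\sqrt{2t}$ with $Z$ standard normal, and $E_{1/2}(-\lambda\sqrt{t})=e^{\lambda^2 t}\operatorname{erfc}(\lambda\sqrt{t})$. The following Monte Carlo sketch (sample size, seed and tolerances are our own choices) checks both the mean and the Laplace transform.

```python
import math
import random

def check_hitting_time_identities(t=1.0, lam=1.0, n=400_000, seed=7):
    """Monte Carlo check of the two identities in (ident1) for beta = 1/2,
    using tau_0(t) =_d |Z| * sqrt(2t) with Z ~ N(0,1): the beta = 1/2
    inverse stable subordinator has half-normal density
    (pi*t)^{-1/2} * exp(-s^2/(4t)) on (0, infinity)."""
    rng = random.Random(seed)
    samples = [abs(rng.gauss(0.0, 1.0)) * math.sqrt(2.0 * t) for _ in range(n)]
    mean_mc = sum(samples) / n
    lap_mc = sum(math.exp(-lam * s) for s in samples) / n
    mean_exact = t ** 0.5 / math.gamma(1.5)                              # t^b / Gamma(b+1)
    lap_exact = math.exp(lam * lam * t) * math.erfc(lam * math.sqrt(t))  # E_b(-lam * t^b)
    return mean_mc, mean_exact, lap_mc, lap_exact

m_mc, m_ex, l_mc, l_ex = check_hitting_time_identities()
assert abs(m_mc - m_ex) < 1e-2 and abs(l_mc - l_ex) < 1e-2
```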
We say that a bounded open set $\Omega\subset\mathbb R^d$ is a \emph{regular set} if $\Omega$ satisfies the exterior cone condition at every point of $\partial\Omega$, i.e.\ for each $x\in\partial\Omega$ there exists a finite right circular open cone $V_x$ with vertex $x$ such that $V_x\subset \Omega^c$ (see \cite[end of Section 4]{MeerChen}). From now on, $\Omega$ is always a regular set.
\begin{proposition}\label{prop_fl}
Define the sub-process $X^{x,\alpha}_\Omega$ started at $x\in\Omega$ by
\begin{equation*}
X^{x,\alpha}_\Omega(s):=
\left\{
\begin{split}
X^{x,\alpha}(s),&\quad s<\tau_\Omega(x),\\
\text{cemetery},&\quad s\ge\tau_\Omega(x),
\end{split}
\right.
\end{equation*}
\begin{enumerate}[(i)]
\item Then $X^{x,\alpha}_\Omega$ induces a sub-Feller semigroup on $C_{\partial\Omega}(\Omega)$, which we denote by $P^\Omega =\{P^{\Omega}_s\}_{s\ge0}$, and we denote its generator by $(\mathcal L_\Omega, \text{Dom}(\mathcal L_\Omega))$. Moreover if $u\in \text{Dom}(\mathcal L_\Omega)$ then there exists a sequence $u_n\in C_{\partial\Omega}(\Omega)\cap C^2(\Omega)$ such that $u_n\to u$ uniformly and $\flap_\Omega u_n\to \mathcal L_\Omega u$ uniformly on compact subsets of $\Omega$. The transition density of $X^{x,\alpha}_\Omega(s)$, denoted by $p^\Omega_s(x,y),$ is jointly continuous in $x$ and $y$, for every $s>0$.
\item For every $u\in \text{Dom}(\mathcal L_\Omega)$ and $\varphi\in C_c^2(\Omega)$ it holds
\begin{equation}
\int_\Omega \mathcal L_\Omega u \varphi\, dx= \int_\Omega u \flap_\Omega \varphi \,dx.
\label{dual_dflg}
\end{equation}
\item The semigroup $P^\Omega$ induces a strongly continuous contraction semigroup on $L^2(\Omega)$, and we denote its generator by $(\mathcal L_{\Omega,2},\text{Dom}(\mathcal L_{\Omega,2}))$. Moreover, there exists a sequence of positive numbers $0 < \lambda_1 < \lambda_2 \le \lambda_3 \le \dots$ and an orthonormal basis $\{\psi_n \}_{n\in\mathbb N}$ of $L^2(\Omega)$ such that $P^\Omega_s\psi_n = e^{-\lambda_n s }\psi_n$ in $L^2(\Omega)$, for every $n\in\mathbb N,\ s>0$. For $k\ge1$, we denote by $ \text{Dom}(\mathcal L_{\Omega,2}^k)$ the subset of functions $f\in L^2(\Omega)$ such that $\|f\|_{\mathcal L_{\Omega,2}^k}:=\left(\sum_{n=1}^\infty \lambda_n^{2k}\langle f,\psi_n\rangle^2\right)^{1/2}<\infty$. Moreover, $P^\Omega$ on $C_{\partial\Omega}(\Omega)$ has the same set of eigenvalues and eigenfunctions as $P^\Omega$ on $L^2(\Omega)$.
\end{enumerate}
\end{proposition}
\begin{proof}
(i) The first two statements are a consequence of \cite[Lemma 2.2 and Theorem 2.7]{MeerB}. The last statement follows by the strong Markov property along with joint continuity of the transition densities of $X^{x,\alpha}$ (see for example \cite[Section 4]{MeerChen}).
(ii) The operator $\flap_\Omega$ is self-adjoint in the sense that
\begin{equation}
\int_\Omega \flap_\Omega u \varphi\, dx= \int_\Omega u \flap_\Omega \varphi\, dx,
\label{dual_dfl}
\end{equation}
if $\varphi\in C_c^2(\Omega)$ and $u\in C_{\partial\Omega}(\Omega)\cap C^2(\Omega)$. Now use the approximating sequence from part (i) of the current proposition to conclude.
(iii) These results can be found in \cite[Section 4]{MeerChen} and references therein.
\end{proof}
In the next lemma we construct three sub-Feller semigroups by combining in space-time the sub-Feller semigroups defined so far. We combine them in a way that allows us to describe the newly constructed space-time generator as the closure of the sum of the time and space generators. This is how we give meaning to the boundary value problem viewpoint formally presented in (\ref{bvpp}).
\begin{lemma}\label{thm_semigroups} Consider the four tuples
\begin{align*}
&(P^{\beta,\infty}, C_\infty((-\infty,T]), \mathcal L_\beta^\infty, \text{Dom}(\mathcal L_\beta^\infty)), &(&P^\beta, C([0,T]), \mathcal L_\beta, \text{Dom}(\mathcal L_\beta)),\\
&(P^{\beta,\text{kill}}, C_{0}([0,T]), \mathcal L_\beta^{\text{kill}}, \text{Dom}(\mathcal L_\beta^{\text{kill}})), &(&P^\Omega, C_{\partial\Omega}(\Omega), \mathcal L_\Omega, \text{Dom}(\mathcal L_\Omega)),
\end{align*}
defined in Proposition \ref{prop_csg}, Proposition \ref{prop_csg}-(i), Proposition \ref{prop_csg}-(ii) and Proposition \ref{prop_fl}-(i), respectively. Let $\mathcal C_\beta^{\infty}$, $\mathcal C_\beta $, $\mathcal C_\beta^{\text{kill}} $ and $\mathcal C_\Omega $ be invariant cores for $(\mathcal L_\beta^\infty, \text{Dom}(\mathcal L_\beta^\infty))$, $(\mathcal L_\beta, \text{Dom}(\mathcal L_\beta))$, $(\mathcal L_\beta^{\text{kill}}, \text{Dom}(\mathcal L_\beta^{\text{kill}}))$ and $(\mathcal L_\Omega, \text{Dom}(\mathcal L_\Omega))$, respectively.
\begin{enumerate}[(i)]
\item
Then $P^{\beta,\Omega}=\{P^\beta_sP^\Omega_s\}_{s\ge 0}$ is a sub-Feller semigroup on $C_{\partial\Omega}([0,T]\times\Omega)$. The generator $(\mathcal L_{\beta,\Omega},\text{Dom}(\mathcal L_{\beta,\Omega}))$ of $P^{\beta,\Omega}$ is the closure of
$$
(\mathcal L_\beta+\mathcal L_\Omega, \text{Span}\left\{\mathcal C_\beta\cdot \mathcal C_\Omega\right\} )\quad\text{in}\quad C_{\partial\Omega}([0,T]\times\Omega),
$$
where $P^\beta$ and $\mathcal L_\beta$ act on the $[0,T]$-variable, and $P^\Omega$ and $\mathcal L_\Omega$ act on the $\Omega$-variable.
\item
Then $P^{\beta,\Omega,\text{kill}}=\{P^{\beta,\text{kill}}_sP^\Omega_s\}_{s\ge 0}$ is a sub-Feller semigroup on $C_{0,\partial\Omega}([0,T]\times\Omega)$. The generator $(\mathcal L_{\beta,\Omega}^{\text{kill}},\text{Dom}(\mathcal L_{\beta,\Omega}^{\text{kill}}))$ of $P^{\beta,\Omega,\text{kill}}$ is the closure of
$$
(\mathcal L_\beta^{\text{kill}}+\mathcal L_\Omega, \text{Span}\{\mathcal C_\beta^{\text{kill}}\cdot \mathcal C_\Omega\} )\quad\text{in}\quad C_{0,\partial\Omega}([0,T]\times\Omega),
$$
where $P^{\beta,\text{kill}}$ and $\mathcal L_\beta^{\text{kill}}$ act on the $[0,T]$-variable, and $P^\Omega$ and $\mathcal L_\Omega$ act on the $\Omega$-variable.
\item
Then $P^{\beta,\Omega,\infty}=\{P^{\beta,\infty}_sP^\Omega_s\}_{s\ge 0}$ is a sub-Feller semigroup on $C_{\infty,\partial\Omega}((-\infty,T]\times\Omega)$. The generator $(\mathcal L_{\beta,\Omega}^{\infty},\text{Dom}(\mathcal L_{\beta,\Omega}^{\infty}))$ of $P^{\beta,\Omega,\infty}$ is the closure of
$$
(\mathcal L_\beta^{\infty}+\mathcal L_\Omega, \text{Span}\{\mathcal C_\beta^{\infty}\cdot \mathcal C_\Omega\} )\quad\text{in}\quad C_{\infty,\partial\Omega}((-\infty,T]\times\Omega),
$$
where $P^{\beta,\infty}$ and $\mathcal L_\beta^{\infty}$ act on the $(-\infty,T]$-variable, and $P^\Omega$ and $\mathcal L_\Omega$ act on the $\Omega$-variable.
\item It holds that $P_s^{\beta,\Omega}=P_s^{\beta,\Omega,\text{kill}}$ $\text{on }C_{0,\partial\Omega}([0,T]\times \Omega),$ $
\mathcal L_{\beta,\Omega}=\mathcal L_{\beta,\Omega}^{\text{kill}}$ $\text{on }\text{Dom}(\mathcal L_{\beta,\Omega}^{\text{kill}}),$ and $
\text{Dom}(\mathcal L_{\beta,\Omega}^{\text{kill}})=\text{Dom}(\mathcal L_{\beta,\Omega})\cap \{f(0)=0\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proofs of (i), (ii) and (iii) can be found in Appendix \ref{proof-i-ii-iii}.
(iv) The first claim is an immediate consequence of $P^{\beta,\text{kill}}=P^\beta$ on $C_{0}([0,T])$.
The second claim follows from the third by considering a resolvent equation. To prove the third claim, we show the equivalent statement
\[
\text{Dom}(\mathcal L_{\beta,\Omega}^{\text{kill}})\subset \text{Dom}(\mathcal L_{\beta,\Omega}),\quad\text{and}\quad\text{if }u\in \text{Dom}(\mathcal L_{\beta,\Omega}),\text{ then }u-u(0)\in \text{Dom}(\mathcal L_{\beta,\Omega}^{\text{kill}}).
\]
The first inclusion is immediate using $P_s^{\beta,\Omega}=P_s^{\beta,\Omega,\text{kill}}, \text{ on }C_{0,\partial\Omega}([0,T]\times \Omega)$. For the second part, let $u\in \text{Dom}(\mathcal L_{\beta,\Omega})$ and consider its resolvent representation for some $\lambda>0$ and $g\in C_{\partial\Omega}([0,T]\times \Omega)$. Then
\[
u(0,x)=\int_0^\infty e^{-\lambda s}P^\beta_sP^\Omega_sg(0,x)\,ds=\int_0^\infty e^{-\lambda s}P^\beta_sP^\Omega_s(g(0))(t,x)\,ds,
\]
as $P^\beta_sg(0,x)=P^\beta_s(g(0))(t,x)$. Now consider
\begin{align*}
u(t,x)-u(0,x)&=\int_0^\infty e^{-\lambda s}P^\Omega_sP^\beta_s(g-g(0))(t,x)\,ds\\
&=\int_0^\infty e^{-\lambda s}P^\Omega_sP^{\beta,\text{kill}}_s(g-g(0))(t,x)\,ds\in \text{Dom}(\mathcal L_{\beta,\Omega}^{\text{kill}}),
\end{align*}
where we use the fact that $P^{\beta,\text{kill}}=P^{\beta}$ on $C_{0,\partial\Omega}([0,T]\times \Omega)$ and that $g-g(0)\in C_{0,\partial\Omega}([0,T]\times \Omega)$.
\end{proof}
\begin{remark}
Note that
\[
(-\mathcal L_{\beta,\Omega}^{\text{kill}})^{-1}g(t,x)=\int_0^\infty P_s^{\beta,\Omega}g(t,x)\, ds=\mathbf E\left[\int_0^{\tau_0(t) \wedge \tau_\Omega(x)}g\left(-X^{t,\beta}(s),X^{x,\alpha}(s)\right)ds\right],
\]
for $g\in C_{0,\partial\Omega}([0,T]\times \Omega)$. Also, from now on we may write $\tau_{t,x}$ for $\tau_0(t) \wedge \tau_\Omega(x)$.
\end{remark}
\section{Stochastic weak solution for problem (\ref{postRL})}\label{sec_ws}
\subsection{Definition of weak solution}
Define the operator
\begin{equation*}
-D^{\beta,*}_0\varphi (s):= \partial_sI^{1-\beta}_{T}\varphi(s) +\delta_0(ds) I^{1-\beta}_{T}\varphi(0),
\end{equation*}
where $\delta_0$ is the delta-measure at $0$, and the Riemann-Liouville integral $I^{1-\beta}_{T}$ is defined as
\[
I^{1-\beta}_{T}f(s):=\int_s^Tf(t)\,\frac{(t-s)^{-\beta}dt}{\Gamma(1-\beta)},\quad s<T.
\]
In the current section only, the pairing $\langle\cdot ,\cdot\rangle$ is defined as
\begin{equation*}
\langle f, g\rangle := \int_0^T\int_\Omega f(t,x)g(t,x) \,dx\,dt.
\end{equation*}
\begin{definition}\label{def_weaksolution}
Let $f\in L^\infty((0,T)\times\Omega)$ and $\phi_0\in C_{\partial\Omega}(\Omega)$. A function $u\in L^{2}((0,T)\times \Omega)$ is said to be a \emph{weak solution to problem }(\ref{postRL}) if
\begin{equation}
\langle u, (-D^{\beta,*}_0+\flap_\Omega)\varphi\rangle =\langle -f, \varphi\rangle,\quad\text{for every }\varphi\in C^{1,2}_c((0,T)\times \Omega),
\label{wsdual}
\end{equation}
and $u(t)\to \phi_0$ a.e. as $t\downarrow 0$.
\end{definition}
The next proposition motivates Definition \ref{def_weaksolution}.
\begin{proposition}\label{prop_Caputodual}
Let $\varphi\in C_c^1((0,T))$ and $u\in C([0,T]) \cap C^1((0,T))$ such that $u'\in L^1((0,T))$. Then
\begin{equation*}
\int_0^TD^{\beta}_0u(t)\varphi(t)\,dt =- \int_0^Tu(t)\left(\partial_t I^{1-\beta}_{T}\varphi(t)\right)\,dt -u(0) I^{1-\beta}_{T}\varphi(0).
\end{equation*}
\end{proposition}
\begin{proof}
Using Proposition \ref{prop_csg}-(iv), Fubini's Theorem and integration by parts, compute
\begin{align*}
\int_0^TD^\beta_0u(t)\varphi(t)\,dt&=\int_{\mathbb R} \int_{\mathbb R} u'(s)\frac{(t-s)^{-\beta}}{\Gamma(1-\beta)}\varphi(t)\mathbf1_{\{0< t< T\}}\mathbf1_{\{0< s< t\}}\,ds\,dt\\
&=\int_{\mathbb R} u'(s) \mathbf1_{\{0< s<T\}}\left(\int_s^T\frac{(t-s)^{-\beta}}{\Gamma(1-\beta)}\varphi(t)\,dt\right)\,ds\\
&=\int_0^T u'(s) I^{1-\beta}_{T}\varphi(s)\,ds\\
&=-\int_0^Tu(s) \partial_sI^{1-\beta}_{T}\varphi(s)\,ds- u(0) I^{1-\beta}_{T}\varphi(0).
\end{align*}
\end{proof}
From Proposition \ref{prop_Caputodual} and the identity in (\ref{dual_dfl}), it is straightforward to prove the following lemma.
\begin{lemma}\label{lem_dualonsmooth}
Let $\varphi\in C_c^{1,2}((0,T)\times \Omega)$ and $u\in C_{\partial\Omega}([0,T]\times \Omega) \cap C^{1,2}((0,T)\times \Omega)$ such that $\partial_tu\in L^1((0,T)\times\Omega)$. Then
\begin{equation*}
\langle u, (-D^{\beta,*}_0+\flap_\Omega)\varphi\rangle = \langle (-D^{\beta}_0+\flap_\Omega) u, \varphi\rangle.
\end{equation*}
\end{lemma}
\subsection{Existence of a weak solution}
Following \cite{HKT17}, we define two auxiliary notions of solution for problem (\ref{postRL}), starting from the abstract evolution equation
\begin{equation}
\mathcal L_{\beta,\Omega} u = -f \text{ on }(0,T]\times\overbar\Omega,\quad u=\phi_0 \text{ on }\{0\}\times\overbar\Omega,\quad u\in \text{Dom}(\mathcal L_{\beta,\Omega}).
\label{abstractee}
\end{equation}
\begin{definition}\label{def:sdog}
Let $f\in C_{\partial\Omega}([0,T]\times \Omega)$ and $\phi_0\in \text{Dom}(\mathcal L_\Omega)$ such that $f(0)=-\mathcal L_\Omega \phi_0$. We say that a function $u\in C_{\partial\Omega}([0,T]\times \Omega)$ is a \emph{solution in the domain of the generator to problem} (\ref{postRL}) if $u$ satisfies (\ref{abstractee}).
\end{definition}
The next solution concept for problem (\ref{postRL}) is defined as a pointwise approximation by solutions in the domain of the generator $\{u_n\}_{n\in\mathbb N}$ whose forcing terms $\{f_n\}_{n\in\mathbb N}$ satisfy a dominated-convergence-type condition.
\begin{definition}\label{def_gs}
Let $f\in B([0,T]\times \overbar\Omega)$ and $\phi_0\in \text{Dom}(\mathcal L_\Omega)$. We say that a function $u\in B([0,T]\times \overbar\Omega)$ is a \emph{generalised solution to problem} (\ref{postRL}) if
$$
u = \lim_{n\to\infty}u_n \quad\text{pointwise},
$$ where each $u_n$ is the solution in the domain of the generator for a corresponding forcing term $f_n\in C_{\partial\Omega}([0,T]\times \Omega)$ such that
$$
f_n\to f\text{ a.e. on }(0,T]\times \Omega,\quad\sup_n \|f_n\|_\infty<\infty,\quad\text{and}\quad f_n(0)=-\mathcal L_\Omega \phi_0\text{ for each }n\in\mathbb N.
$$
\end{definition}
\begin{remark} Any generalised solution must satisfy the boundary conditions $u=0$ on $[0,T]\times\partial\Omega$ and $u=\phi_0$ on $\{0\}\times\Omega$.
\end{remark}
\begin{lemma}\label{lem_sdoggs} Let $\phi_0\in \text{Dom}(\mathcal L_\Omega)$. Then
\begin{enumerate}[(i)]
\item If $f+\mathcal L_\Omega \phi_0\in C_{0,\partial\Omega}([0,T]\times \Omega)$, then there exists a unique solution in the domain of the generator to problem (\ref{postRL}).
\item If $f\in B([0,T]\times \overbar\Omega)$, then there exists a unique generalised solution to problem (\ref{postRL}).
\item The solutions in parts (i) and (ii) both admit the stochastic representation (\ref{SRpostRL}).
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Observe that the potential $(-\mathcal L_{\beta,\Omega}^{\text{kill}})^{-1}$ maps $C_{0,\partial\Omega}([0,T]\times \Omega)$ to itself. This follows from $P^{\beta,\Omega,\text{kill}}_sg\in C_{0,\partial\Omega}([0,T]\times \Omega)$ for $g\in C_{0,\partial\Omega}([0,T]\times \Omega)$, $s\ge0$, and the Dominated Convergence Theorem (DCT) with dominating function $G(s):=\|g\|_\infty\mathbf P[s<\tau_0(T)]$. Note that we use the first identity in (\ref{ident1}) to prove that $G\in L^1((0,\infty))$. The potential $(-\mathcal L_{\beta,\Omega}^{\text{kill}})^{-1}$ is also bounded, by the inequality
\begin{equation*}
\left|(-\mathcal L_{\beta,\Omega}^{\text{kill}})^{-1}g(t,x)\right|\le \|g\|_\infty\mathbf E\left[\tau_0(T)\right],\quad g\in C_{0,\partial\Omega}([0,T]\times \Omega).
\end{equation*}
It then follows by \cite[Theorem 1.1']{dynkin1965} that $\bar u:=(-\mathcal L_{\beta,\Omega}^{\text{kill}})^{-1}(f+\mathcal L_\Omega \phi_0)$ is the unique solution to the abstract evolution equation
\begin{equation}
\mathcal L_{\beta,\Omega}^{\text{kill}} \bar u = -(f+\mathcal L_\Omega \phi_0) \text{ on }(0,T]\times\overbar\Omega,\quad \bar u=0 \text{ on }\{0\}\times\overbar\Omega,\quad\text{and $\bar u\in \text{Dom}(\mathcal L_{\beta,\Omega}^{\text{kill}})$}.
\label{aaa}
\end{equation}
It is now enough to show that $\bar u$ satisfies (\ref{aaa}) if and only if $u=\bar u+\phi_0$ satisfies (\ref{abstractee}). For the `if' direction, let $u\in \text{Dom}(\mathcal L_{\beta,\Omega})$ satisfy (\ref{abstractee}). Note that $u(0)=\phi_0$. Then $\bar u:= u-\phi_0\in \text{Dom}(\mathcal L_{\beta,\Omega}^{\text{kill}})$, and $\mathcal L_{\beta,\Omega} \bar u=\mathcal L_{\beta,\Omega}^{\text{kill}} \bar u$, by Lemma \ref{thm_semigroups}-(iv). So we can compute
\begin{align*}
\mathcal L_{\beta,\Omega}^{\text{kill}}\bar u = \mathcal L_{\beta,\Omega}( u-\phi_0)= \mathcal L_{\beta,\Omega}u - \mathcal L_\Omega \phi_0=-f - \mathcal L_\Omega \phi_0,
\end{align*}
where we use
$$
\mathcal L_{\beta,\Omega}(1\cdot\phi_0)=(\mathcal L_\beta+\mathcal L_\Omega)(1\cdot\phi_0)=\mathcal L_\Omega \phi_0,
$$
from Lemma \ref{thm_semigroups}-(i) taking the invariant cores $\mathcal C_\beta =\text{Dom}(\mathcal L_\beta)$ and $\mathcal C_\Omega=\text{Dom}(\mathcal L_\Omega)$ (recalling that $\mathcal L_\beta1=0$). For the `only if' direction, let $\bar u$ satisfy (\ref{aaa}), and define $u:=\bar u +\phi_0$. Then with the same justifications as just above, compute
\begin{align*}
\mathcal L_{\beta,\Omega} u =\mathcal L_{\beta,\Omega}^{\text{kill}}\bar u+\mathcal L_{\beta,\Omega}\phi_0=-(f+\mathcal L_\Omega \phi_0)+\mathcal L_{\Omega}\phi_0=-f.
\end{align*}
It follows that $$
u = (-\mathcal L_{\beta,\Omega}^{\text{kill}})^{-1}(f+\mathcal L_\Omega \phi_0)+ \phi_0.
$$
(ii) Let $f\in B([0,T]\times\overbar\Omega)$. Then $f+\mathcal L_\Omega \phi_0\in B([0,T]\times\overbar\Omega)$. Now take a sequence $\{\tilde f_n\}_{n\in\mathbb N}\subset C_{0,\partial\Omega}([0,T]\times \Omega) $ such that $\tilde f_n\to f+\mathcal L_\Omega \phi_0$ a.e. and $\sup_n\|\tilde f_n\|_\infty<\infty$. Define $f_n:=\tilde f_n -\mathcal L_\Omega \phi_0$ for each $n\in\mathbb N$ and note that $f_n\to f$ a.e., $\sup_n\|f_n\|_\infty<\infty$ and $f_n(0)=-\mathcal L_\Omega \phi_0$, as required by Definition \ref{def_gs}. Now, for each $f_n$ consider the stochastic representation of the respective solution in the domain of the generator
\[
u_n(t,x) =\mathbf E\left[\int_0^{\tau_{t,x}} f_n\left(-X^{t,\beta}(s),X^{x,\alpha}(s)\right) ds\right]+\mathbf E\left[\int_0^{\tau_{t,x}}\mathcal L_\Omega \phi_0\left(X^{x,\alpha}(s)\right) ds\right]+\phi_0(x).
\]
Fix $(t,x)\in (0,T]\times\Omega$. Using absolute continuity with respect to Lebesgue measure of the laws of $-X^{t,\beta}(s)$ and $X^{x,\alpha}_\Omega(s)$ for each $s>0$, and the bound $\mathbf E\left[{\tau_{t,x}}\right]\le \mathbf E\left[{\tau_0(t)}\right]<\infty$, we can apply DCT twice to obtain as $n\to\infty$
\begin{align*}
\mathbf E\left[\int_0^{\tau_{t,x}} f_n\left(-X^{t,\beta}(s),X^{x,\alpha}(s)\right) ds\right]&= \int_0^\infty P_s^{\beta,\text{kill}}P_s^\Omega f_n(t,x)\,ds\\
&\to \int_0^\infty P_s^{\beta,\text{kill}}P_s^\Omega f(t,x)\,ds\\
&= \mathbf E\left[\int_0^{\tau_{t,x}}f\left(-X^{t,\beta}(s),X^{x,\alpha}(s)\right) ds\right],
\end{align*}
using the constant $\sup_n\|f_n\|_\infty$ as a dominating function to show that for each $s>0$
\[
F_n(s):=P_s^{\beta,\text{kill}}P_s^\Omega f_n(t,x)\to P_s^{\beta,\text{kill}}P_s^\Omega f(t,x)=:F(s),
\]
and the dominating function $G(s):=\sup_n\|f_n\|_\infty\mathbf P[s<\tau_{t,x}]$ to show that
\[
\int_0^\infty F_n(s)\,ds\to \int_0^\infty F(s)\,ds.
\]
The convergence on $([0,T]\times\partial\Omega)\cup(\{0\}\times\overbar\Omega)$ is trivial. It follows that a generalised solution $u$ exists and is given by
$$
u = (-\mathcal L_{\beta,\Omega}^{\text{kill}})^{-1}(f+\mathcal L_\Omega \phi_0)+ \phi_0.
$$
Finally, uniqueness follows since the limit does not depend on the choice of the approximating sequence.\\
(iii) This is a standard application of the Dynkin formula (\cite[Theorem 5.1]{dynkin1965}) using the finite stopping times $\tau_{t,x}$, $(t,x)\in(0,T]\times\Omega$, namely
\begin{align*}
(-\mathcal L_{\beta,\Omega}^{\text{kill}})^{-1}(\mathcal L_\Omega \phi_0)(t,x) = \mathbf E\left[\int_0^{\tau_{t,x}}\mathcal L_{\beta,\Omega} \phi_0\left(X^{x,\alpha}(s)\right) ds\right]
=\mathbf E\left[\phi_0(X^{x,\alpha}(\tau_{t,x})) \right]-\phi_0(x).
\end{align*}
\end{proof}
We now show that the dual of $\mathcal L_{\beta,\Omega}$ is $(-D^{\beta,*}_0+\flap_\Omega)$.
\begin{lemma}\label{lem_sdogandws}
Let $u\in \text{Dom}(\mathcal L_{\beta,\Omega})$. Then
\begin{equation*}
\langle \mathcal L_{\beta,\Omega}u ,\varphi \rangle =\langle u,(-D^{\beta,*}_0+\flap_\Omega)\varphi\rangle,\quad \text{for every }\varphi\in C_c^{1,2}((0,T)\times\Omega).
\end{equation*}
\end{lemma}
\begin{proof}
By Lemma \ref{thm_semigroups}-(i) and Proposition \ref{prop_csg}-(i) we can pick a sequence
$$
\{u_n\}_{n\in\mathbb N}\subset \text{Span}\left\{C^1([0,T])\cdot \text{Dom}(\mathcal L_\Omega)\right\},
$$ such that $u_n\to u$ and $\mathcal L_{\beta,\Omega}u_n\to \mathcal L_{\beta,\Omega}u$ in $C_{\partial\Omega}([0,T]\times \Omega)$, with the additional property
\begin{equation}
\mathcal L_{\beta,\Omega}u_n =(-D^\beta_0+\mathcal L_\Omega)u_n,\quad\text{for every }n\in\mathbb N.
\label{ident}
\end{equation}
Hence, for every $\varphi\in C_c^{1,2}((0,T)\times\Omega)$, as $n\to\infty$
\begin{equation*}
\langle \mathcal L_{\beta,\Omega}u ,\varphi \rangle \leftarrow \langle \mathcal L_{\beta,\Omega}u_n ,\varphi \rangle =\langle u_n,(-D^{\beta,*}_0+\flap_\Omega)\varphi\rangle\to \langle u,(-D^{\beta,*}_0+\flap_\Omega)\varphi\rangle,
\end{equation*}
where we use DCT for both limits, and for the equality we use the identity (\ref{ident}) along with Proposition \ref{prop_Caputodual} and the dual identity in Proposition \ref{prop_fl}-(ii).
\end{proof}
We now combine Lemma \ref{lem_sdogandws} with the notion of generalised solution to obtain the main theorem of this section.
\begin{theorem}\label{thm_wspostRL}
Let $f\in L^\infty((0,T)\times \Omega)$ and $\phi_0\in C_{\partial\Omega}(\Omega)$. Then the function $u\in B([0,T]\times \overbar\Omega)$ defined in (\ref{SRpostRL}) is a weak solution to problem (\ref{postRL}).
\end{theorem}
\begin{proof}
Assume for the moment that $\phi_0\in \text{Dom}(\mathcal L_\Omega)$. By the definition of a generalised solution we can take an approximating sequence of forcing terms $\{f_n\}_{n\in\mathbb N}\subset C_{\partial\Omega}([0,T]\times\Omega)$ such that $f_n\to f$ a.e., $\sup_n\|f_n\|_\infty<\infty$, and the respective solutions in the domain of the generator $\{u_n\}_{n\in\mathbb N}$ satisfy
\[
u_n(0)=\phi_0\text{ for all }n\in\mathbb N,\quad u_n\to u\text{ pointwise on }[0,T]\times \Omega,\quad \sup_n\|u_n\|_\infty<\infty,
\]
where the last property is an immediate consequence of the stochastic representation (\ref{SRpostRL}).
Hence, we obtain for every $\varphi \in C^{1,2}_c((0,T)\times\Omega)$, as $n\to\infty$
\[
\langle -f,\varphi \rangle\leftarrow\langle -f_n,\varphi \rangle = \langle \mathcal L_{\beta,\Omega}u_n,\varphi \rangle = \langle u_n,(-D^{\beta,*}_0+\flap_\Omega)\varphi \rangle \to \langle u,(-D^{\beta,*}_0+\flap_\Omega)\varphi\rangle,
\]
where we applied DCT for both limits, the first equality is due to the $u_n$'s being solutions in the domain of the generator, and the second equality holds as a consequence of Lemma \ref{lem_sdogandws}. \\
Now, for $\phi_{0}\in C_{\partial\Omega}(\Omega)$, let $\{\phi_{0,n}\}_{n\in\mathbb N}\subset\text{Dom}(\mathcal L_\Omega)$ be such that $\phi_{0,n}\to \phi_{0}$ in $C_{\partial\Omega}(\Omega)$. Let $u_n$ be the generalised solution to problem (\ref{postRL}) for $f\in B([0,T]\times \overbar\Omega)$ and $\phi_{0,n}\in \text{Dom}( \mathcal L_\Omega)$, and let $u$ be defined as in (\ref{SRpostRL}). Then $u_n\to u$ pointwise and $\sup_n\|u_n\|_\infty<\infty$, which in turn implies by DCT
\[
\langle -f,\varphi \rangle=\lim_{n\to\infty} \langle u_n,(-D^{\beta,*}_0+\flap_\Omega)\varphi \rangle =\langle u,(-D^{\beta,*}_0+\flap_\Omega)\varphi \rangle.
\]
It is clear that the result holds for $f\in L^\infty((0,T)\times \Omega)$. Finally, the required convergence of $u$ to the initial condition $\phi_0$ follows by the argument in Remark \ref{rmk_cont}, using the stochastic representation (\ref{SRpostRL}).
\end{proof}
\section{Stochastic classical solution for problem (\ref{postRL})}\label{sec_existence}
\begin{definition}Let $f\in C((0,T]\times\Omega)$ and $\phi_0\in C(\Omega)$. A function $u\in C_{\partial\Omega}([0,T]\times\Omega)\cap C^{1,2}((0,T)\times\Omega)$, such that $|\partial_tu(t,x)|\le Ct^{-\gamma},$ for every $(t,x)\in(0,T]\times\Omega, \text{ for some }\gamma\in(0,1),\ C>0$, is said to be a \emph{classical solution to problem} (\ref{postRL}) if $u$ satisfies the identities in (\ref{postRL}), and for every $x\in\Omega$
$$
\lim_{t\downarrow 0}|u(t,x)-\phi_0(x)|=0.
$$
\end{definition}
In this section the pairing $\langle \cdot,\cdot\rangle$ is defined as
\[
\langle f,g\rangle:=\int_\Omega f(x)g(x)\,dx.
\]
The proof of the main theorem of this section (Theorem \ref{thm_sspostRL}) extends the eigenfunction expansion argument in \cite[Theorem 5.1]{MeerChen}, using the next lemma as the key extra ingredient. Define for $\lambda\in\mathbb R\backslash\{0\}$ and $f\in C([0,T])$
\begin{equation*}
\myF{f}{\lambda}(t):=(-\lambda)^{-1}\int_0^t f(r)\partial_t E_\beta(-\lambda (t-r)^\beta)\,dr,\quad t>0.
\end{equation*}
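As a consistency check (an illustrative computation, not needed in the sequel): for the constant function $f\equiv 1$, the substitution $s=t-r$ gives
\begin{equation*}
\myF{1}{\lambda}(t)=(-\lambda)^{-1}\int_0^t \partial_t E_\beta(-\lambda (t-r)^\beta)\,dr=\frac{1-E_\beta(-\lambda t^\beta)}{\lambda},\quad t>0,
\end{equation*}
so that part (i) of the next lemma reduces to the identity $\mathbf E[e^{-\lambda\tau_0(t)}]=E_\beta(-\lambda t^\beta)$, after rewriting the left-hand side as $\mathbf E[\int_0^{\tau_0(t)}e^{-\lambda s}\,ds]=\lambda^{-1}(1-\mathbf E[e^{-\lambda \tau_0(t)}])$.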
\begin{lemma}\label{lem_identity}
Let $\lambda>0$ and $f\in C([0,T])$. Then
\begin{enumerate}[(i)]
\item
\begin{equation*}
\mathbf E\left[\int_0^{\tau_0(t)}e^{-\lambda s} f(-X^{t,\beta}(s))\,ds \right]=\myF{f}{\lambda}(t),\quad t>0.
\end{equation*}
\item The bound
\begin{equation}
\left|\myF{f}{\lambda}(t)\right|\le \frac{c}{\lambda}\|f\|_\infty, \quad t>0,
\label{first}
\end{equation}
holds, and if $f\in C^{1}([0,T])$ then
\begin{equation}
\left|\partial_t \myF{f}{\lambda}(t)\right|\le\frac{c}{\lambda}\left( \|f'\|_\infty +|f(0)|\frac{ \lambda t^{\beta-1}}{1+\lambda t^\beta}\right),\quad t>0,
\label{second}
\end{equation}
for some positive constant $c$.
\end{enumerate}
\end{lemma}
\begin{proof} (i) Given the second identity in (\ref{ident1}), it is enough to prove the equivalent identity
\begin{equation}
\mathbf E\left[\int_0^{\tau_0(t)}e^{-\lambda s} f(-X^{t,\beta}(s))\,ds \right]+u_0\mathbf E\left[ e^{-\lambda \tau_0(t)} \right]=\myF{f}{\lambda}(t) + u_0E_\beta(-\lambda t^\beta),
\label{whatweprove}
\end{equation}
where $u_0$ is some constant.
We show that the lhs of (\ref{whatweprove}) is the unique continuous solution to the Caputo initial value problem solved by the rhs of (\ref{whatweprove}).
Let $w\in C_0([0,T])$ be such that $w'\in C([0,T])$. Then $u(t):=(\lambda-\mathcal L_\beta)^{-1}w(t) = \mathbf E[\int_0^{\tau_0(t)} e^{-\lambda s}w(-X^{t,\beta}(s))\,ds]$ solves the resolvent equation
$$
\mathcal L_\beta u =\lambda u - w,\quad u(0)=0,
$$
and $u\in \text{Dom}(\mathcal L_\beta)$, by Proposition \ref{prop_csg}-(i). By the following computation
\begin{align*}
\partial_t u(t)&=\partial_t \int_0^t w(t-y)\left(\int_0^\infty e^{-\lambda s} p^\beta_s(y)\,ds\right)dy\\
&= w(0)\int_0^\infty e^{-\lambda s} p^\beta_s(t)\,ds+\int_0^t w'(t-y)\int_0^\infty e^{-\lambda s} p^\beta_s(y)\,ds\,dy,&t>0,
\end{align*}
it follows that
$u\in C^1_0([0,T])$, and so $\mathcal L_\beta u =-D^{\beta}_0 u$ by Proposition \ref{prop_csg}-(i). Let $u_0\in\mathbb R$. Then $\bar u:= u+u_0 $ is a continuous solution to the Caputo initial value problem
\begin{align*}
-D^{\beta}_0\bar u=\mathcal L_\beta u-D^{\beta}_0u_0 =\lambda u-w=\lambda \bar u-(w+\lambda u_0),
\end{align*}
with initial value $\bar u(0)=u_0$. By \cite[Theorem 6.5 and Theorem 7.2]{kai} we obtain $
\bar u =\text{rhs of (\ref{whatweprove}) for $f=w+\lambda u_0$}.$
Now compute
\begin{align*}
\bar u(t)&=\mathbf E\left[\int_0^{\tau_0(t)} e^{-\lambda s}\left(w(-X^{t,\beta}(s))\pm\lambda u_0\right)ds\right] +u_0\\
&=\mathbf E\left[\int_0^{\tau_0(t)} e^{-\lambda s}\left (w(-X^{t,\beta}(s))+\lambda u_0\right)ds\right]-\lambda u_0\frac{\mathbf E\left[ e^{-\lambda \tau_0(t)}\right]-1}{-\lambda} +u_0\\
&=\mathbf E\left[\int_0^{\tau_0(t)} e^{-\lambda s}\left (w(-X^{t,\beta}(s))+\lambda u_0\right)ds\right]+ u_0\mathbf E\left[ e^{-\lambda \tau_0(t)}\right].
\end{align*}
Now, for an arbitrary $f\in C^1([0,T])$, by picking $w\equiv f-f(0)$ and $u_0 \equiv f(0)\lambda^{-1}$, we obtain the equality (\ref{whatweprove}). A straightforward application of DCT proves the claim for $f\in C([0,T]).$\\
(ii)
Recall that there exists a constant $c>0$ such that $0\le-\partial_t E_\beta(-\lambda t^\beta)\le c\frac{\lambda t^{\beta-1}}{1+\lambda t^\beta}$ by \cite[Theorem 7.3]{kai} and \cite[Equation (17)]{Kra03}, and $E_\beta(-\lambda t^\beta)\le \frac{ c}{1+\lambda t^\beta}$. Then
\begin{align*}
\left| (-\lambda)^{-1}\int_0^t f(r) \partial_t E_\beta(-\lambda (t-r)^\beta)\, dr\right|\le&\ \|f\|_{\infty}\frac{1-E_\beta(-\lambda t^\beta)}{\lambda}\le \|f\|_{\infty} \frac{1+ c}{\lambda}.
\end{align*}
For the second inequality we exploit the smoothness of $f$, computing for $t>0$
\begin{align*}
\partial_t \myF{f}{\lambda}(t)=&\ (-\lambda)^{-1} \partial_t \left(-\int_0^t f(r) \partial_rE_\beta(-\lambda (t-r)^\beta)\, dr\right)\\
=&\ (-\lambda)^{-1} \partial_t \left(\int_0^t f'(r) E_\beta(-\lambda (t-r)^\beta)\, dr-f(t) +f(0) E_\beta(-\lambda t^\beta) \right)\\
=&\ (-\lambda)^{-1}\left(\int_0^t f'(r) \partial_t E_\beta(-\lambda (t-r)^\beta)\, dr\pm f'(t) +f(0) \partial_t E_\beta(-\lambda t^\beta) \right)\\
=&\ \myF{f'}{\lambda}(t)-\lambda^{-1}f(0) \partial_t E_\beta(-\lambda t^\beta).
\end{align*}
Then
\begin{align*}
\left| \partial_t \myF{f}{\lambda}(t)\right|\le &\ \|f'\|_\infty \frac{1+c}{\lambda}+|f(0)|c\frac{ t^{\beta-1}}{1+\lambda t^\beta}.
\end{align*}
\end{proof}
From the proof of \cite[Theorem 5.1]{MeerChen}, we infer the following lemma.
\begin{lemma}\label{lem_chen_bounds} Working with the notation of Proposition \ref{prop_fl}-(iii):
\begin{enumerate}[(i)]
\item the system of eigenvectors $\{\psi_n\}_{n\in\mathbb N}\subset \text{Dom}(\mathcal L_{\Omega,2}^k)$ forms an orthonormal basis of $L^2(\Omega)$. The corresponding eigenvalues can be ordered so that $\lambda_{n} \leq \lambda_{n+1}$, with the Weyl-type bounds $\tilde c_1^{-1}n^{\alpha/d}\le\lambda_n\le \tilde c_1n^{\alpha/d}$ for some constant $\tilde c_1>0$. Also, for any compact subset $K$ of $\Omega$ and $j=0,1,2$, there are constants $c_1=c_1(K,j,d,\alpha)$ such that
\begin{align}
\label{eq_eigenvector_bound} |\nabla^j \psi_n(x) | &\leq c_1 \lambda_n ^{(d+2j)/(2\alpha)},
\end{align}
where $c_1(K,0,d,\alpha)$ is independent of $K$.
\item Suppose $\phi_0 \in \operatorname{Dom} (\mathcal L_{\Omega,2}^k)$ for $k > -1 + (3d+4)/(2\alpha)$. Then
$ N := \sum_{n=1}^\infty \lambda_n^{2k} \langle \phi_0, \psi_n\rangle ^2 < \infty $, and the series
\[
\sum_{n=1}^\infty E_\beta (-\lambda_n t^\beta)\langle \phi_0, \psi_n \rangle \psi_n(x)=\mathbf E\left[\phi_0(X^{x,\alpha}(\tau_0(t)))\mathbf 1_{\{\tau_0(t)<\tau_\Omega(x)\}}\right],
\]
defines a function in $C_{\partial\Omega}([0,T]\times \Omega)\cap C^{1,2}((0,T)\times \Omega)$, with bounds for $j=1,2$,
\begin{alignat}{3}
& \nonumber\sum_{n=1}^\infty \abs{E_\beta (-\lambda_n t^\beta)\langle \phi_0, \psi_n \rangle \nabla^j \psi_n(x)}
&&\le( c_2 \sqrt{N})t^{-\beta} \sum_{n=1}^\infty \lambda_n^{(d+4)/(2\alpha) -1-k} < \infty,
&&\quad t>0,\\
&\nonumber \sum_{n=1}^\infty \abs{\partial_t E_\beta (-\lambda_n t^\beta)\langle \phi_0, \psi_n \rangle \psi_n(x)}
&&\leq c_3 t^{\gamma \beta - 1},
&& \quad x\in \Omega,
\end{alignat}
where $c_2 = c_2(K,j,d,\alpha), c_3 = c_3(\Omega,\alpha)$, and $0\le \gamma \le 1\wedge (4/(2\alpha)-1) $.
\end{enumerate}
\end{lemma}
We will assume that the forcing term $f$ in (\ref{postRL}) belongs to the space of functions
\begin{equation}\label{spaceH}
C^1([0,T];\text{Dom}(\mathcal L_{\Omega,2}^k)):=\left\{f\in C_{\partial\Omega}^1([0,T]\times\overbar \Omega): \sup_t\| f(t)\|_{\mathcal L_{\Omega,2}^k}+ \sup_t\left\| \partial_t f(t)\right\|_{\mathcal L_{\Omega,2}^k}<\infty\right\}.
\end{equation}
Note that if $f\in C^1([0,T];\text{Dom}(\mathcal L_{\Omega,2}^k))$, then there exists $M>0$ such that for every $n\in\mathbb N$
\begin{equation}
\sup_{t\in[0,T]}|\langle f(t),\psi_n \rangle|\le M\lambda_n^{-k},\quad \text{and}\quad \sup_{t\in[0,T]}\left| \langle \partial_tf(t),\psi_n \rangle\right|\le M\lambda_n^{-k}.
\label{bounds_pair}
\end{equation}
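The bounds in (\ref{bounds_pair}) follow, for instance, from the eigenfunction characterisation used in Lemma \ref{lem_chen_bounds}-(ii): for $g\in\text{Dom}(\mathcal L_{\Omega,2}^k)$ and each $n\in\mathbb N$,
\begin{equation*}
\lambda_n^{2k}\langle g,\psi_n\rangle^2\le \sum_{m=1}^\infty \lambda_m^{2k}\langle g,\psi_m\rangle^2<\infty,
\end{equation*}
so that $|\langle g,\psi_n\rangle|\le \lambda_n^{-k}\big(\sum_{m=1}^\infty \lambda_m^{2k}\langle g,\psi_m\rangle^2\big)^{1/2}$. Applying this to $g=f(t)$ and $g=\partial_t f(t)$, and assuming the norms in (\ref{spaceH}) control the right-hand side uniformly in $t$, one may take $M$ to be the resulting supremum over $t\in[0,T]$.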
\begin{remark}\label{rmk_HTk}
The inclusion $\text{Span}\{C^1([0,T])\cdot\text{Dom}(\mathcal L_{\Omega,2}^k)\}\subset C^1([0,T];\text{Dom}(\mathcal L_{\Omega,2}^k))$ is clear. Moreover, if $k\in \mathbb N$, then the inclusion $ C^{1,2k}_{c}([0,T]\times\Omega)\subset C^1([0,T];\text{Dom}(\mathcal L_{\Omega,2}^k))$ holds\footnote{We define $C^{1,2k}_{c}([0,T]\times\Omega)= C^{1,2k}((0,T)\times\Omega)\cap\{f,\partial_tf \in C([0,T]\times\Omega), \text{supp}\{ f\}\subset [0,T]\times\Omega \text{ is compact} \} $.}. To see this, let $f\in C^{1,2k}_c([0,T]\times\Omega)$ and compute for each $t\in[0,T]$
\begin{align*}
\sum_{n=1}^\infty \lambda^{2k}_n\langle f(t),\psi_n\rangle^2&=\sum_{n=1}^\infty \langle f(t),\mathcal L_{\Omega,2}^k\psi_n\rangle^2=\sum_{n=1}^\infty \langle (\flap_\Omega)^kf(t),\psi_n\rangle^2=\|(\flap_\Omega)^kf(t)\|_{L^2(\Omega)}^2<\infty,
\end{align*}
where the second equality holds by the same argument as at the end of the proof of Theorem \ref{thm_sspostRL}, using $(\flap_\Omega)^mf(t)\in L^2(\Omega)$ for each $t\in[0,T]$ and $m\le k$. Now observe that by DCT the function $t\mapsto\|(\flap_\Omega)^kf(t)\|_{L^2(\Omega)}$ is continuous on $[0,T]$, because $(\flap_\Omega)^kf\in C([0,T]\times\overbar\Omega)$. Repeat the argument for $\partial_t f$ to conclude.
\end{remark}
\begin{lemma}\label{lem_nvs}
If $f\in C_{\partial\Omega}([0,T]\times\Omega)$ and $f(t)\in \text{Dom}(\mathcal L_{\Omega,2}^k)$ for every $t\in[0,T]$, with $k>-1+(3d+4)/(2\alpha)$, then
\begin{equation*}
\mathbf E\left[\int_0^{\tau_{t,x}}f\left(-X^{t,\beta}(s), X^{x,\alpha}(s)\right)ds\right]=\sum_{n=1}^\infty \psi_n(x)\myF{\langle f(\cdot),\psi_n \rangle}{\lambda_n}(t).
\end{equation*}
If in addition $f\in C^1([0,T];\text{Dom}(\mathcal L_{\Omega,2}^k))$, then there exists a constant $C$ such that for $t\in(0,T]$
\begin{align}\label{second2}
\sum_{n=1}^\infty \left| \psi_n(x)\partial_t \myF{\langle f(\cdot),\psi_n \rangle}{\lambda_n}(t)\right|
&\le Ct^{\beta - 1}.
\end{align}
\end{lemma}
\begin{proof}
We justify the following equalities
\begin{align*}
\mathbf E\left[\int_0^{\tau_{t,x}}f\left(-X^{t,\beta}(s), X^{x,\alpha}(s)\right)ds\right]&=\int_0^\infty P_s^{\beta,\text{kill}}P_s^{\Omega} f(t,x)\,ds\\
&=\int_0^\infty P_s^{\beta,\text{kill}}\left(\sum_{n=1}^\infty \langle f(t),\psi_n \rangle \psi_n(x) e^{-s\lambda_n}\right)\,ds\\
&=\sum_{n=1}^\infty \psi_n(x)\int_0^\infty P_s^{\beta,\text{kill}} \langle f(t),\psi_n \rangle e^{-s\lambda_n}\,ds\\
&=\sum_{n=1}^\infty \psi_n(x)\mathbf E\left[\int_0^{\tau_0(t)} \langle f(-X^{t,\beta}(s)),\psi_n \rangle e^{-s\lambda_n}\,ds\right]\\
&=\sum_{n=1}^\infty \psi_n(x)\myF{\langle f(\cdot),\psi_n \rangle}{\lambda_n}(t).
\end{align*}
We can apply Fubini's Theorem in the third equality as $${\sum_{n=1}^\infty |\langle f(t),\psi_n\rangle|\|\psi_n\|_\infty}\le C\sum_{n=1}^\infty n^{(\alpha/d)\left(d/(2\alpha)-k\right)}<\infty,$$ for some constant $C>0$, each $t\ge 0$ and any $ k > 3d/(2\alpha)$, using the bounds in Lemma \ref{lem_chen_bounds}-(i) and in (\ref{bounds_pair}). We apply Lemma \ref{lem_identity}-(i) in the fifth equality as $r\mapsto\langle f(r),\psi_n \rangle\in C([0,T])$ for each $n\in\mathbb N$. The other equalities are clear.\\
For the last claim we use the bounds in (\ref{second}), (\ref{bounds_pair}) and Lemma \ref{lem_chen_bounds}-(i) to obtain
\begin{align*}
\sum_{n=1}^\infty \left| \psi_n(x)\partial_t \myF{\langle f(\cdot),\psi_n \rangle}{\lambda_n}(t)\right|&\le\sum_{n=1}^\infty | \psi_n(x)|\frac{c }{\lambda_n}\left(\sup_{r\in[0,T]}\left|\langle \partial_rf(r),\psi_n \rangle\right|+\frac{ \lambda_n t^{\beta-1}}{1+\lambda_n t^\beta} |\langle f(0),\psi_n \rangle|\right)\\
&\le\sum_{n=1}^\infty | \psi_n(x)| \frac{cM\lambda_n^{-k} }{\lambda_n}\left(1+\frac{ \lambda_n t^{\beta-1}}{1+\lambda_n t^\beta}\right)\\
&\le (c_1cM)\sum_{n=1}^\infty \frac{\lambda_n^{d/( 2\alpha)}\lambda_n^{-k} }{\lambda_n}\left(1+\frac{ \lambda_n t^{\beta-1}}{1+\lambda_n t^\beta}\right)\\
& \leq ( c_1c M ) t^{\beta - 1} \sum_{n=1}^\infty \lambda_n^{ d/(2\alpha)-k}\\
& \leq (\tilde c_1c_1c M ) t^{\beta - 1} \sum_{n=1}^\infty n^{( \alpha /d )\left( d/(2\alpha)-k\right)}<\infty,
\end{align*}
for any $ k > 3d/(2\alpha)$, where the constants $\tilde c_1,c_1,c$ and $ M$ follow the notation of the referenced inequalities, and a constant depending on $T$ is omitted in the fourth inequality.
\end{proof}
\begin{theorem}\label{thm_sspostRL} Let $\Omega\subset \mathbb R^d$ be a regular set. Assume that $\phi_0\in \text{Dom}(\mathcal L_{\Omega,2}^k)$, and $f\in C^1([0,T];\text{Dom}(\mathcal L_{\Omega,2}^k))$ for some $k>-1+(3d+4)/(2\alpha)$, where $C^1([0,T];\text{Dom}(\mathcal L_{\Omega,2}^k))$ is defined in (\ref{spaceH}). Then
\begin{equation}\label{regprop}
\begin{split}
& u\in C_{\partial\Omega}([0,T]\times \Omega)\cap C^{1,2}((0,T)\times \Omega),\quad \text{and}\\
&|\partial_tu(t,x)|\le Ct^{-\gamma}, \text{ for every }(t,x)\in(0,T]\times\Omega, \text{ for some }\gamma\in(0,1),\ C>0,
\end{split}
\end{equation}
where $u$ is defined in (\ref{SRpostRL}). Moreover, $ u$ is the unique classical solution to problem (\ref{postRL}).
\end{theorem}
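To illustrate the hypothesis on $k$ (a purely numerical example, assuming the choice $d=1$, $\alpha=1$ is admissible): the condition reads
\begin{equation*}
k>-1+\frac{3\cdot 1+4}{2\cdot 1}=\frac{5}{2},
\end{equation*}
which agrees with the equivalent form $k>(3d+4-2\alpha)/(2\alpha)$ appearing in the proof.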
\begin{proof} (The notation for constants is consistent with the referenced inequalities.)\\
By Lemma \ref{lem_chen_bounds}-(ii) and Lemma \ref{lem_nvs} we can write our candidate solution (\ref{SRpostRL}) as
\[
u(t,x)=\sum_{n=1}^\infty E_\beta (-\lambda_n t^\beta) \langle \phi_0,\psi_n\rangle \psi_n(x)+ \sum_{n=1}^\infty \myF{\langle f(\cdot),\psi_n \rangle}{\lambda_n}(t)\psi_n(x),
\]
and the first series enjoys the regularity properties stated in (\ref{regprop}). We now prove the same regularity for the second series. Observe that $\sum_{n=1}^\infty \myF{\langle f(\cdot),\psi_n \rangle}{\lambda_n}(t)\psi_n(x) $ converges uniformly to a function in $C_{\partial\Omega}([0,T]\times \Omega)$, since we have the uniform bound
\begin{align*}
\sum_{n=1}^\infty
| \myF{\langle f(\cdot),\psi_n \rangle}{\lambda_n}(t)\psi_n(x)| & \leq \sum_{n=1}^\infty c\lambda_n^{-1}\|\langle f(\cdot),\psi_n \rangle\|_{C([0,T])} c_1\lambda_n^{d/(2\alpha)} \\
& \le( cc_1M)\sum_{n=1}^\infty \lambda_n^{-1-k+d/(2\alpha)}\\
& \leq (\tilde c_1c_1cM)\sum_{n=1}^\infty n^{(\alpha /d)(d/(2\alpha)-k-1)}<\infty,
\end{align*}
for any $ k > -1+3d/(2\alpha)$, using the bounds in (\ref{bounds_pair}), (\ref{first}) and Lemma \ref{lem_chen_bounds}-(i). Further, for $j=1,2$, and for any $x$ in a compact subset $K$ of $\Omega$, the term-wise space derivative of $u$ can be bounded as follows,
\begin{equation}
\begin{split}
\sum_{n=1}^\infty | \myF{\langle f(\cdot),\psi_n \rangle}{\lambda_n}(t) | \|\nabla^j \psi_n \|_\infty
&\le \sum_{n=1}^\infty c\lambda_n^{-1}\|\langle f(\cdot),\psi_n \rangle\|_{C([0,T])} c_1\lambda_n^{(d+4)/(2\alpha)} \\
&\le(\tilde c_1 c_1cM)\sum_{n=1}^\infty n^{(\alpha/d)\left((d+4)/(2\alpha) - k - 1\right)}<\infty,
\end{split}
\end{equation}
as
\[ \frac\alpha{d} \left(\frac{d+4}{2\alpha} - k - 1\right) < -1 \iff k > \frac{3d+4-2\alpha }{2\alpha}, \]
where we use the bounds in (\ref{bounds_pair}), (\ref{first}) and Lemma \ref{lem_chen_bounds}-(i). Thus, the Weierstrass M-test implies that for any $t>0$, $u(t)$ is a $C^2$ function on every compact set $K\subset\Omega$. For the time regularity we use the inequality (\ref{second2}) from Lemma \ref{lem_nvs}\footnote{From the proof of Lemma \ref{lem_nvs} it follows that if $\phi_0=f(0)=0$, then $\partial_t u$ is bounded.}.\\
By Theorem \ref{thm_wspostRL}, $u$ is also a weak solution to problem (\ref{postRL}), and by Lemma \ref{lem_dualonsmooth} and standard approximation arguments, $u$ satisfies the equalities in (\ref{postRL}). Continuity at $t=0$ can be proved as in Remark \ref{rmk_cont}.
To prove uniqueness, consider two classical solutions to problem (\ref{postRL}), denoted by $u,v$. Then $w := u-v$ is a classical solution to problem (\ref{postRL}) with $f=0$, $\phi_0=0$. Consider the continuous functions on $[0,T]$, $t\mapsto\langle w(t),\psi_n\rangle$, $n\in\mathbb N$. If we can justify
\begin{equation}
D^{\beta}_0\langle w(t),\psi_n\rangle= \langle D^{\beta}_0w(t),\psi_n\rangle=\langle \flap_\Omega w(t),\psi_n\rangle=\langle w(t),\mathcal L_{\Omega,2}\psi_n\rangle=-\lambda_n \langle w(t),\psi_n\rangle,
\label{eqqq}
\end{equation}
for $t>0$, it follows by \cite[Theorem 6.5 and Theorem 7.2]{kai} that $\langle w(t),\psi_n\rangle= 0$ for every $t\in [0,T],\ n\in\mathbb N$, and we are done. The first equality is a consequence of $|\partial_r w(r,y)|\le Cr^{-\gamma}$, for some $\gamma\in(0,1)$. The second and fourth equalities in (\ref{eqqq}) are clear. Now, as $\psi_n\in \text{Dom}(\mathcal L_{\Omega,2}) $, there exists a sequence $\{\psi_{n,j}\}_{j\in\mathbb N}\subset C_c^\infty(\Omega)$, such that as $j\to\infty$
\begin{equation}
\psi_{n,j}\to\psi_n,\quad \text{and}\quad\flap_\Omega\psi_{n,j} =\mathcal L_{\Omega,2}\psi_{n,j}\to\mathcal L_{\Omega,2}\psi_n,\quad \text{in }L^2(\Omega),
\label{equaa}
\end{equation}
where the equality in (\ref{equaa}) holds by \cite[Lemma 4.1]{MeerChen}.
Combining (\ref{equaa}) with the equality (\ref{dual_dfl}) and $\flap_\Omega w(t)\in L^2(\Omega)$ for each $t>0$, we obtain the third equality in (\ref{eqqq}).
\end{proof}
\section{Stochastic classical solution for problem (\ref{preRL})}\label{sec_sspre}
\subsection{Stochastic representation and continuity at $t=0$}\label{sec_SR}
\begin{lemma}\label{lem_SRs}
Define the function $f_\phi:(0,T]\times\Omega\to\mathbb R$ as
\begin{equation}
f_\phi(t,x):=\int_t^\infty (\phi(t-r,x)-\phi(t,x))\frac{-\Gamma(-\beta)^{-1}dr}{r^{1+\beta}},
\label{w_p}
\end{equation}
assuming that $\phi\in C_{\infty,\partial \Omega}((-\infty,0]\times \Omega)$, $\phi(0)\in \text{Dom}(\mathcal L_\Omega)$, and the extension of $\phi$ to $\phi(0)$ on $(0,T]\times\overbar\Omega$ is such that
\begin{equation}
\phi\in \text{Dom}(\mathcal L_{\beta,\Omega}^\infty),\quad\text{and}\quad\mathcal L_{\beta,\Omega}^\infty\phi= (-D^\beta_{\infty}+\mathcal L_\Omega)\phi.
\label{conditionsonphi}
\end{equation}
Then $f_\phi\in C([0, T]\times\overbar\Omega)$ and the function $u$ defined in (\ref{SRpostRL}) for $f=f_\phi$ and $\phi_0=\phi(0)$, equals the function $\tilde u$ defined in (\ref{SR}) for $g=0$, on $(0,T]\times \Omega$.
\end{lemma}
\begin{proof} The first claim follows from $f_\phi = -D^\beta_{\infty} \phi\in C([0, T]\times\overbar\Omega)$, using (\ref{conditionsonphi}) and $\mathcal L_\Omega\phi(t,x)=\mathcal L_\Omega\phi(0,x)$ for all $(t,x)\in[0,T]\times \overbar\Omega$. Recall that we write $\tau_{t,x}= \tau_0(t)\wedge\tau_\Omega(x)$. Fix $(t,x)\in (0,T]\times \Omega$. It is enough to justify the following equalities
\begin{align*}
u(t,x)&=\mathbf E\left[\phi(0,X^{x,\alpha}(\tau_0(t)))\mathbf 1_{\{\tau_0(t)<\tau_\Omega(x)\}} +\int_0^{\tau_{t,x}} f_\phi\left(-X^{t,\beta}(s),X^{x,\alpha}(s)\right)ds\right]\\
&=\mathbf E\left[\phi(0,x)+\int_0^{\tau_{t,x}} \mathcal L_\Omega \phi\left(0,X^{x,\alpha}(s)\right)ds +\int_0^{\tau_{t,x}} f_\phi\left(-X^{t,\beta}(s),X^{x,\alpha}(s)\right)ds\right]\\
&=\mathbf E\left[ \int_0^{\tau_{t,x}} \mathcal L_\Omega \phi\left(-X^{t,\beta}(s),X^{x,\alpha}(s)\right) -D^{\beta}_{\infty} \phi\left(-X^{t,\beta}(s),X^{x,\alpha}(s)\right)ds\right]+\phi(0,x)\\
&=\mathbf E\left[ \int_0^{\tau_{t,x}} \mathcal L_{\beta,\Omega}^\infty \phi\left(-X^{t,\beta}(s),X^{x,\alpha}(s)\right)ds\right]+\phi(0,x)\\
&=\mathbf E\left[ \phi\left(-X^{t,\beta}(\tau_{t,x}),X^{x,\alpha}(\tau_{t,x})\right)\right]-\phi(0,x)+\phi(0,x)\\
&=\mathbf E\left[ \phi\left(-X^{t,\beta}(\tau_{t,x}),X^{x,\alpha}(\tau_{t,x})\right)\right].
\end{align*}
For the second equality we use the Dynkin formula with Lemma \ref{thm_semigroups}-(i) and $\phi(0)\in \text{Dom}(\mathcal L_\Omega)$; for the third equality, as we extended $\phi(t,x)=\phi(0,x)$ on $[0,T]\times\Omega$, we use the identities $f_\phi(t,x)=-D^\beta_{\infty}\phi(t,x)$ and $\mathcal L_\Omega \phi(0,x)=\mathcal L_\Omega \phi(t,x)$ on $(0,T]\times\Omega$; in the fourth equality we use assumption (\ref{conditionsonphi}); the fifth equality is again an application of the Dynkin formula with Lemma \ref{thm_semigroups}-(iii) and $\phi(t,x)=\phi(0,x)$ on $(0,T]\times \Omega$.
\end{proof}
\begin{corollary}\label{cor_SRs}
If $\phi\in C_{b,\partial\Omega}^1((-\infty,0]\times\Omega)$, then for $(t,x)\in(0,T]\times\Omega$
\begin{align}\label{eqdyn}
\mathbf E\left[\phi\big(0,X^{x,\alpha}(\tau_{t,x})\big) +\int_0^{\tau_{t,x}} f_\phi\left(-X^{t,\beta}(s),X^{x,\alpha}(s)\right)ds\right]=\mathbf E\left[ \phi\left(-X^{t,\beta}(\tau_{t,x}),X^{x,\alpha}(\tau_{t,x})\right)\right].
\end{align}
\end{corollary}
\begin{proof} \emph{Step 1.}
We prove (\ref{eqdyn}) for $\phi\in C_{\infty,\partial\Omega}^1((-\infty,0]\times\Omega)\cap \{\partial_t f(0)=0\}$ with compact support in $(-\infty,0]\times\overbar\Omega$. For such $\phi$, let $K>0$ be such that $\phi$ is supported in $(-K,0]\times\overbar\Omega$. By the same arguments as in the proof of Lemma \ref{thm_semigroups}-(ii), it follows that $\text{Span}\{C([-K,0])\cap\{f(-K)=f(0)=0\}\cdot C_{\partial\Omega}(\Omega)\}$ is dense in $C_{\partial\Omega}([-K,0]\times\Omega)\cap\{f(-K)=f(0)=0\}$ with respect to the supremum norm. We can use this fact to construct a sequence $\{\phi_n\}_{n\in\mathbb N}\subset \text{Span}\{C_{\infty}^1((-\infty,0])\cap\{f'(0)=0\}\cdot C_{\partial\Omega}(\Omega)\}$ such that
\[
\|\phi_n-\phi\|_{C((-\infty,0]\times\overbar\Omega)}+\|\partial_t(\phi_n-\phi)\|_{C((-\infty,0]\times\overbar\Omega)}\to 0,\quad\text{as }n\to\infty.
\]
Moreover, it follows that $f_{\phi_n}\to f_\phi$ as $n\to\infty$ pointwise on $[0,T]\times\Omega$ and $\sup_n\|f_{\phi_n}\|_{C([0,T]\times\overbar\Omega)}$ is finite. It remains to show that (\ref{eqdyn}) holds for functions in $\text{Span}\{C_{\infty}^1((-\infty,0])\cap\{f'(0)=0\}\cdot C_{\partial\Omega}(\Omega)\}$, as DCT applied to the sequences above then yields the claim. By Lemma \ref{thm_semigroups}-(iii) with $C_\beta^\infty=C_\infty^1((-\infty,T])$, Proposition \ref{prop_csg} and Lemma \ref{lem_SRs}, equality (\ref{eqdyn}) holds for $\phi\in\text{Span}\{C_{\infty}^1((-\infty,0])\cap\{f'(0)=0\}\cdot\text{Dom}(\mathcal L_\Omega)\} $. As $\text{Dom}(\mathcal L_\Omega)$ is dense in $C_{\partial\Omega}(\Omega)$, equality (\ref{eqdyn}) holds for $\phi\in \text{Span}\{C_{\infty}^1((-\infty,0])\cap\{f'(0)=0\}\cdot C_{\partial\Omega}(\Omega)\} $ by DCT. \\
\emph{Step 2.} For $\phi\in C^1_{b,\partial\Omega}((-\infty,0]\times\Omega)$, take a sequence $\{\phi_n\}_{n\in\mathbb N}\subset C_{\infty,\partial\Omega}^1((-\infty,0]\times\Omega)\cap \{\partial_t f(0)=0\}$ compactly supported in $(-\infty,0]\times\overbar\Omega$, such that $\phi_n\to\phi$ pointwise on $(-\infty,0]\times\Omega$, and $\sup_n\|\phi_n\|_{C((-\infty,0]\times\overbar\Omega)}+\sup_n\|\partial_t\phi_n\|_{C((-\infty,0]\times\overbar\Omega)}<\infty$. Then $f_{\phi_n}\to f_\phi$ pointwise on $[0,T]\times\Omega$ and $\sup_n\|f_{\phi_n}\|_{C([0,T]\times\overbar\Omega)}<\infty$. Finally, apply DCT to both sides of (\ref{eqdyn}).
\end{proof}
\begin{remark}\label{rmk_cont}
If we can apply Corollary \ref{cor_SRs}, then we can prove continuity at $t=0$ for the solution (\ref{SR}) via the following argument
\begin{align*}
|\text{Formula (\ref{SRpostRL})}-\phi_0(x)|&\le|\mathbf E\left[\phi_0\left(X^{x,\alpha}(\tau_0(t)\wedge \tau_\Omega(x))\right)-\phi_0(x)\right]|+\|f\|_\infty \mathbf E\left[\tau_0(t)\right]\\
&= o_{t\downarrow 0}(1)\ +\|f\|_\infty\frac{t^\beta}{\Gamma(\beta+1)},
\end{align*}
for each $x\in\Omega$, using stochastic continuity of the process\footnote{This follows as $X^{x,\alpha}(s)$ is right continuous and $\tau_0(t)$ is right continuous, non-decreasing with $\tau_0(0)=0$.} $t\mapsto X^{x,\alpha}(\tau_0(t))$ at $t=0$. One could also use stochastic continuity at $t=0$ of $-X^{t,\beta}(\tau_0(t))=t-X^{\beta}(\tau_0(t))$, bypassing Corollary \ref{cor_SRs}. In Proposition \ref{thm_ctsat0} in the Appendix we prove continuity at $t=0$ by bounding the probability of large overshoots $-X^{t,\beta}(\tau_0(t))$ at small times.
\end{remark}
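The identity $\mathbf E[\tau_0(t)]=t^\beta/\Gamma(\beta+1)$ used in the remark above can be checked by simulation. The following sketch (Python with NumPy; an illustration added here, not code from the paper) takes $\beta=1/2$, the one case where the increments of the stable subordinator over a step $h$ can be sampled exactly, as $h^2/(2Z^2)$ with $Z$ standard normal, and reads the first exit times $\tau_0(t)$ off the discretised paths.

```python
import numpy as np
from math import gamma

def first_passage_times(t, n_paths, h, rng, max_steps=2000):
    """Monte Carlo approximation of tau_0(t) = inf{s : X^beta(s) >= t}
    for the 1/2-stable subordinator: its increment over a step h is
    distributed as h^2/(2 Z^2) with Z standard normal (Levy law), so
    the paths can be sampled exactly on the grid {k h : k = 1, 2, ...}."""
    z = rng.standard_normal((n_paths, max_steps))
    paths = np.cumsum(h**2 / (2.0 * z**2), axis=1)
    # first grid index at which each path passes the barrier t
    idx = np.argmax(paths >= t, axis=1)
    assert np.all(paths[np.arange(n_paths), idx] >= t), "increase max_steps"
    return (idx + 1) * h

rng = np.random.default_rng(0)
t, beta = 1.0, 0.5
tau = first_passage_times(t, n_paths=2000, h=0.01, rng=rng)
empirical = tau.mean()
exact = t**beta / gamma(beta + 1.0)  # E[tau_0(t)] = t^beta / Gamma(beta+1)
```

With these parameters the empirical mean matches $t^{\beta}/\Gamma(\beta+1)\approx 1.128$ up to Monte Carlo and grid-discretisation error.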
\subsection{Equivalence of the classical solutions to problems (\ref{preRL}) and (\ref{postRL})}\label{sec_eoss}
\begin{definition} Let $\phi \in C_{b,\partial\Omega}((-\infty,0]\times\Omega)$ and $g\in C((0,T]\times\Omega)$. A function $\tilde u\in C_{b,\partial\Omega}((-\infty,T]\times \Omega)\cap C^{1,2}((0,T)\times \Omega)$ such that $|\partial_t\tilde u(t,x)|\le Ct^{-\gamma}, \text{ for every }(t,x)\in(0,T]\times\Omega, \text{ for some }\gamma\in(0,1),\ C>0$, is said to be a \emph{classical solution to problem} (\ref{preRL}) if $\tilde u$ satisfies the identities in (\ref{preRL}), and for every $x\in \Omega$
$$
\lim_{t\downarrow 0}|\tilde u(t,x)-\phi(0,x)|=0.
$$
\end{definition}
\begin{lemma}\label{lem_equivsol}
Let $\phi\in C_{b,\partial\Omega}((-\infty,0]\times\Omega)$ be such that $f_\phi\in C((0,T]\times\Omega)$, where $f_\phi$ is defined in (\ref{w_p}), and let $g \in C((0,T]\times\Omega)$. If $u$ is a classical solution to problem (\ref{postRL}) with $f=f_\phi+g$ and $\phi_0=\phi(0)$, then the extension
\begin{equation*}
\tilde u:=\left\{\begin{split}
u,\quad\text{in }& (0,T]\times\overbar\Omega,\\
\phi,\quad\text{in }& (-\infty,0]\times\Omega,
\end{split}
\right.
\label{ext}
\end{equation*}
is a classical solution to problem (\ref{preRL}). Conversely, if $\tilde u$ is a classical solution to problem (\ref{preRL}), then the restriction of $\tilde u$ to $[0,T]\times \overbar\Omega$ is a classical solution to problem (\ref{postRL}) with $f=f_\phi+g$ and $\phi_0=\phi(0)$.
\end{lemma}
\begin{proof}
The equivalence of convergence to initial data and the required regularities are clear. It is also immediate that $\flap_\Omega u = \flap_\Omega \tilde u$ on $(0,T]\times\Omega$. Write $\nu(r)=-\Gamma(-\beta)^{-1}r^{-1-\beta}$. On $(0,T]\times\Omega$ we have the equality
\begin{align*}
-D_{\infty}^{\beta} \tilde u(t,x)=&\ \int_0^{\infty}(\tilde u(t-r,x)-\tilde u(t,x))\,\nu(r)dr\\
=&\ \int_0^{t}(\tilde u(t-r,x)-\tilde u(t,x))\,\nu(r)dr+\int_t^{\infty}\phi(t-r,x)\,\nu(r)dr\\
&\quad-\tilde u(t,x)\int_t^\infty\nu(r)dr \pm \phi(0,x)\int_t^\infty\,\nu(r)dr\\
=&\ -D_{0}^{\beta} \tilde u(t,x)+f_\phi(t,x).
\end{align*}
This is enough to prove both directions.
\end{proof}
\subsection{Main result}
\begin{theorem}\label{thm_main} Let $\Omega\subset \mathbb R^d$ be a regular set. Assume that $\phi\in C_{b,\partial\Omega}^1((-\infty,0]\times\Omega)$ with $\phi(0)\in \text{Dom}(\mathcal L_{\Omega,2}^k)$ and $f_\phi,g\in C^1([0,T];\text{Dom}(\mathcal L_{\Omega,2}^k))$, for some $k>-1+(3d+4)/(2\alpha)$, where $f_\phi$ is defined in (\ref{w_p}) and $C^1([0,T];\text{Dom}(\mathcal L_{\Omega,2}^k))$ is defined in (\ref{spaceH}). Then
\begin{align*}
&\tilde u\in C_{b,\partial\Omega}((-\infty,T]\times \Omega)\cap C^{1,2}((0,T)\times \Omega),\quad \text{and}\\
&|\partial_t\tilde u(t,x)|\le Ct^{-\gamma}, \text{ for every }(t,x)\in(0,T]\times\Omega, \text{ for some }\gamma\in(0,1),\ C>0,
\end{align*}
where $\tilde u$ is defined as in (\ref{SR}).
Moreover, $\tilde u$ is the unique classical solution to problem (\ref{preRL}).
\end{theorem}
\begin{proof}
By the assumptions on $\phi$ and $g$, and Lemma \ref{lem_equivsol}, existence and uniqueness of classical solutions follows by Theorem \ref{thm_sspostRL} with $\phi_0=\phi(0)$ and $f=f_\phi+g$. Now apply Corollary \ref{cor_SRs} to obtain the stochastic representation (\ref{SR}) from the stochastic representation (\ref{SRpostRL}).
\end{proof}
\begin{remark}\label{rmk_HK}
Using Corollary \ref{cor_SRs} (or \cite[Theorem 1 for $\lambda=0$]{IkeWa62}), the fact that $\mathbf P[-X^{t}(\tau_0(t))\in\{0\}]=0$ for every $t>0$ (see \cite[III, Theorem 4]{ber}), and the independence of $X^{x,\alpha}$ and $-X^{t,\beta}$, one can show that for $(t,x)\in(0,T]\times\Omega$
\begin{equation*}
\mathbf E\left[\phi\left(-X^{t,\beta}(\tau_0(t)),X^{x,\alpha}(\tau_0(t))\right)\mathbf 1_{\{\tau_0(t)<\tau_\Omega(x)\}}\right]=\int_{-\infty}^0\int_\Omega \phi(r,y)H_{\beta,\alpha}^{t,x}(r,y)\,dr\,dy,
\end{equation*}
where
\begin{equation*}\begin{split}
H_{\beta,\alpha}^{t,x}(r,y)&=\int_0^t\frac{-\Gamma(-\beta)^{-1}}{(z-r)^{1+\beta}}\left(\int_0^\infty p^\Omega_s(x,y)p^\beta_s(t-z)\,ds\right)\,dz.
\end{split}
\end{equation*}
It is straightforward to compute for $(t,x)\in(0,T]\times\Omega$
\begin{equation*}
\mathbf E\left[\int_0^{\tau_0(t)\wedge\tau_\Omega(x)} g\left(-X^{t,\beta}(s),X^{x,\alpha}(s)\right)ds\right]= \int_0^t\int_\Omega g(z,y)\left(\int_0^\infty p^\Omega_s(x,y)p^\beta_s(t-z)\,ds\right)\,dz\,dy.
\end{equation*}
\end{remark}
\begin{remark}
Notice that the value $\phi(0)$ does not contribute to the solution (\ref{SR}) because $\mathbf P[-X^{t}(\tau_0(t))\in\{0\}]=0$ for all $t>0$. However, $u(t)\to \phi(0)$ as $t\downarrow 0$. We discuss the continuity of the solution at $t=0$ in more detail in Appendix \ref{app_cont}.
\end{remark}
\begin{remark}
We could drop the condition $\|\partial_t\phi\|_\infty<\infty$ in Theorem \ref{thm_main}, by weakening Corollary \ref{cor_SRs}, for example to $\phi$ being $\beta^*$-H\"older continuous at $t=0$, for some $ \beta^*>\beta$ and $\phi\in L^\infty((-\infty,0)\times\Omega)$. This is essentially because $\lim_{t\downarrow 0}f_\phi(t)$ remains well-defined. However, in order to apply Theorem \ref{thm_sspostRL} in the proof of Theorem \ref{thm_main} we need to assume $f_\phi\in C^1([0,T];\text{Dom}(\mathcal L_{\Omega,2}^k))$. Hence, a minimal requirement is that $\phi$ is continuously differentiable in time and both $\phi$ and $\partial_t\phi$ are $\mathcal O(|r|^{\beta_*})$ at $-\infty$ and $\beta^*$-H\"older continuous at $0$, for some $\beta_*<\beta< \beta^*$, as we need $f_\phi$ and $\partial_tf_\phi$ to be continuous on $[0,T]\times\overline\Omega$.
\end{remark}
\begin{remark}
Suppose that $\phi\in C_{\infty,\partial\Omega}^{2,2 k}((-\infty,0]\times\Omega)$ and $\phi(t)$ along with its partial derivatives in space are compactly supported in $\Omega$, for each $t\in (-\infty,0]$, where $k\in\mathbb N$ and $k>-1+(3d+4)/(2\alpha)$. Then, an application of Remark \ref{rmk_HTk} implies that $f_\phi\in C^1([0,T];\text{Dom}(\mathcal L_{\Omega,2}^k))$.
\end{remark}
\section{Intuition for the stochastic solution (\ref{SR})}\label{sec_int}
We discuss the intuition for the stochastic representation (\ref{SR}) as the solution to the EE (\ref{preRL}). Let us write $-W(t)=t - X^{\beta}(\tau_0 (t))=-X^{t,\beta}(\tau_0 (t))$. Then $W (t)$ is the overshoot of the subordinator $ X^{\beta}$ with respect to the barrier $t$, recalling that the first exit time/inverse subordinator is given by $\tau_0 (t)=\inf\{s>0:\ t\le X^{\beta}(s)\}$. To ease notation we write $ Y^x:= \{X^{x,\alpha}(\tau_0(t))\mathbf 1_{\{\tau_0(t)<\tau_\Omega(x)\}}\}_{t\ge 0}$. Let us start with the intuition for Caputo EEs: if $\phi(t,x)=\phi(0,x)=:\phi_0(x)$ for every $(t,x)\in(-\infty,0]\times\Omega$, then the solution (\ref{SR}) reads
\begin{equation}
u(t,x)=\mathbf E\left[\phi_0(Y^{x}(t))\right],
\label{intro_SRC}
\end{equation}
and the EE (\ref{preRL}) equals the Caputo EE (\ref{postRL}) (for $g=f=0$). The probabilistic object defining the solution (\ref{intro_SRC}) is the anomalous diffusion $Y^x$. Recall that the particle $Y^x$ is either trapped or diffusing.\\
\textbf{Key observation}: reasoning path-wise, for some $\bar x\in \Omega$
\[
\text{the interval $(t_1,t_2)$ is the maximal open interval on which $t\mapsto Y^x(t)$ is constant, equal to $\bar x$}
\]
$$\iff$$
\[
\text{the interval $(t_1,t_2)$ is the maximal open interval on which $t\mapsto\tau_0(t)$ is constant}
\]
$$\iff$$
\[
\text{the interval $(t_1,t_2)$ is the maximal open interval on which $t\mapsto X^{\beta}(\tau_0(t))$ is constant}
\]
$$\iff$$
\[
\text{$X^{\beta}(\tau_0(t)-)=t_1$ and $X^{\beta}(\tau_0(t))=t_2$ (i.e., $X^\beta$ jumped from $t_1$ to $t_2$).}
\]
The last statement implies that
\[
\text{ $W (t)= X^{\beta}(\tau_0(t))-t=t_2-t\in (0,t_2-t_1)$ for every $t\in (t_1,t_2)$},
\]
which is the remaining trapping/waiting time of $Y^x$ at time $t$. In words: the event of the diffusion $ Y^x$ being trapped at a point $\bar x\in\Omega$ at time $t$ until time $t+s$ happens precisely when $W (t)=s$. Hence the law of $-W (t)$ provides a weighting of the initial condition $\phi(\cdot,\bar x)$ depending on the trapping/waiting time of $Y^x$. Notice that the process $t\mapsto -W(t)$ is self-similar with index $1$ and consists of right-continuous increasing segments of slope $1$ ($45$ degrees) with leftmost limit $0$ (see Figure~1).
\begin{figure}[h!]
\centering
\includegraphics[trim = 0cm 0cm 0cm .2cm, clip=true,height=6cm]{Wplot10.eps}
\caption{\label{fig1}A typical path of the overshoot $t\mapsto -W(t)=-X^{t,\beta}(\tau_0(t))$, $\beta=0.9$. }
\end{figure}
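The path structure described above (unit-slope segments interrupted by the jumps of the subordinator) can be reproduced numerically. The sketch below (Python with NumPy; an illustration added here, not code from the paper) simulates one discretised subordinator path for $\beta=1/2$, where increments over a step $h$ are exactly $h^2/(2Z^2)$ with $Z$ standard normal, and evaluates the overshoot $W(t)=X^{\beta}(\tau_0(t))-t$ on a grid of barriers $t$.

```python
import numpy as np

rng = np.random.default_rng(1)
h, n_steps = 1e-4, 200_000
z = rng.standard_normal(n_steps)
jumps = h**2 / (2.0 * z**2)                     # exact 1/2-stable increments
X = np.concatenate(([0.0], np.cumsum(jumps)))   # X^beta on the grid {k h}

barriers = np.linspace(0.0, 1.0, 2001)[1:]      # barrier levels t > 0
# tau_0(t) = inf{s : X(s) >= t}: searchsorted returns the first grid
# index at which the (non-decreasing) path passes the barrier t
passage = np.searchsorted(X, barriers, side='left')
overshoot = X[passage] - barriers               # W(t) = X(tau_0(t)) - t

# barriers sharing a passage index lie inside one jump of X^beta; there
# -W(t) = t - X(tau_0(t)) increases with unit slope in t
same = passage[1:] == passage[:-1]
dt = barriers[1] - barriers[0]
```

Each flat stretch of $\tau_0$ shows up as a run of barriers with the same passage index, along which $-W$ increases with slope $1$, as in Figure 1.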
\subsection{A non-memory interpretation}
It is appealing to think of the values of the initial condition $\phi$ on $(-\infty,0)\times \Omega$ as the `depth' underneath the surface $\{0\}\times \Omega$ where the particle $Y^x$ moves. One can then think of the particle $Y^x(t)$ as falling instantaneously to the bottom of a hole/trap of depth $|t_2-t_1|$, and then taking time $|t_2-t_1|$ to climb back up to the surface. At time $t$ one then observes the particle $|t_2-t|$ depth-units down in the hole. From this viewpoint, once the particle is in the hole it just drifts upward with unit speed. As a quick example, consider the separable initial condition $\phi(t,x)= p(t)q(x)$ where $p(t)=\mathbf 1_{\{t< -1\}}$. Then, for $t>0$, the solution reads
\begin{align*}
u(t,x)&=\mathbf E\left[ q(Y^{x}(t))\mathbf 1_{\{ W(t)>1\}}\right]\\
&=\mathbf E\left[ q(Y^{x}(t)) | Y^x(t)\text{ is more than 1 unit deep in a trap}\right] \\
\Big(&=\mathbf E\left[ q(Y^{x}(t)) | Y^x(t)\text{ is trapped for more than 1 time-unit}\right]\Big).
\end{align*}
Hence, in this example the diffusive particle $Y^x$ has to be at least a unit deep in a hole (trapped for at least a unit of time) for the past values of $\phi$ at its trapping point to contribute to the solution.
\renewcommand{\thesection}{A}
\renewcommand{\thesubsection}{\thesection.\Roman{subsection}}
\section{Appendix}
\subsection{Continuity of solution (\ref{SR}) at $t=0$}\label{app_cont}
\begin{proposition}\label{prop:bound}
For every $p,\varepsilon>0$, the following bound on small overshoots holds:
\begin{equation*}
\mathbf P[X^{t,\beta}(\tau_0(t))\le \varepsilon]\ge(1-p), \quad \text{for every } t\le \varepsilon p^{\frac{1}{\beta}}.
\end{equation*}
\end{proposition}
\begin{proof}
With the first equality holding by \cite[Theorem 1 for $\lambda=0$]{IkeWa62} along with the identity (\ref{ident3}), compute
\begin{align*}
\mathbf P[X^{t,\beta}(\tau_0(t))\le \varepsilon]&=\int_{-\varepsilon}^0 \left(\frac{1}{\Gamma(\beta)}\int_{0}^{ t} (-\partial_y(y-r)^{-\beta})\frac{ (t-y)^{\beta-1}}{\Gamma(1-\beta)}\, d y\right)\,dr\\
&=\int_{-\varepsilon}^0 \left(\frac{\beta}{\Gamma(\beta)\Gamma(1-\beta)}\int_{0}^{ t} (y-r)^{-\beta-1} (t-y)^{\beta-1}\, d y\right)\,dr\\
&=\frac{-\Gamma(\beta)^{-1}}{\Gamma(-\beta)}\int_{0}^{t} (t-y)^{\beta-1} \left( \int_{-\varepsilon}^0(y-r)^{-\beta-1} d r\right)\,dy\\
&=\frac{-\Gamma(\beta)^{-1}\beta^{-1}}{\Gamma(-\beta)}(a-a_\varepsilon(t)),
\end{align*}
where $a_\varepsilon(t):= \int_{0}^{t} (t-y)^{\beta-1}(y+\varepsilon)^{-\beta}\,dy$ and $a:=\int_{0}^{t} (t-y)^{\beta-1}y^{-\beta}\,dy=\Gamma(\beta)\Gamma(1-\beta)$ for every $t>0$. Now pick $\tilde t=\varepsilon p^{1/\beta}$. Then for every $0\le y\le\tilde t$
\begin{equation*}
(y+\varepsilon)^{-\beta}= (y+p^{-1/\beta}\tilde t)^{-\beta}\le p\tilde t^{-\beta}\le py^{-\beta},
\end{equation*}
hence for every $t\le \tilde t$
\[
\frac{a_\varepsilon(t)}{a}=\frac{\int_{0}^{t} (t-y)^{\beta-1}(y+\varepsilon)^{-\beta}dy}{\int_{0}^{t} (t-y)^{\beta-1}y^{-\beta}\,dy}\le p.
\]
Then $a_\varepsilon(t)\le p a$ for every $t\le \tilde t$, which is equivalent to $a-a_\varepsilon(t)\ge (1-p)a$ for every $t\le \tilde t$. And so we obtain
\[
\mathbf P[X^{t,\beta}(\tau_0(t))\le \varepsilon]\ge(1-p) \frac{-\Gamma(\beta)^{-1}}{\Gamma(-\beta)}\beta^{-1}\Gamma(\beta)\Gamma(1-\beta)=(1-p).
\]
\end{proof}
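The overshoot of a $\beta$-stable subordinator over a level $t$ has the explicit (Dynkin--Lamperti type) density $\frac{\sin(\pi\beta)}{\pi}\,t^{\beta}y^{-\beta}(t+y)^{-1}$, $y>0$, so the bound of Proposition \ref{prop:bound} can also be checked by direct quadrature. A minimal sketch in Python (assuming NumPy and SciPy; an illustration added here, not code from the paper):

```python
import numpy as np
from scipy.integrate import quad

def overshoot_cdf(t, eps, beta):
    """P[X^{t,beta}(tau_0(t)) <= eps]: integral over (0, eps) of the
    explicit overshoot density sin(pi beta)/pi * t^beta * y^(-beta)/(t+y)."""
    c = np.sin(np.pi * beta) / np.pi
    val, _ = quad(lambda y: c * t**beta * y**(-beta) / (t + y), 0.0, eps)
    return val

beta, eps, p = 0.7, 1.0, 0.3
t_star = eps * p**(1.0 / beta)        # threshold from the proposition
P = overshoot_cdf(t_star, eps, beta)  # should be at least 1 - p
```

The quadrature value at $t=\varepsilon p^{1/\beta}$ satisfies $P\ge 1-p$, and shrinking $t$ pushes it further towards $1$, in line with the self-similar scaling of the overshoot.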
We now use the bound in Proposition \ref{prop:bound} to prove the following continuity result.
\begin{proposition}\label{thm_ctsat0}
Consider the function $\tilde u$ defined in (\ref{SR}), with an arbitrary $\Omega$-valued stochastic (sub-)process $X^x$ in place of $X^{x,\alpha}$, such that $t\mapsto X^{x}(\tau_0(t))$ is stochastically continuous at $t=0$. Also assume $\phi\in B((-\infty,0]\times\Omega)$ and $\phi$ is continuous at every point in $\{0\}\times\Omega$. Then for every $x\in\Omega$
\[
\lim_{t\downarrow 0}|\tilde u(t,x)- \phi(0,x)|=0.
\]
\end{proposition}
\begin{proof}
Let $x\in\Omega$. Let $\delta>0$ be arbitrary. Pick $\varepsilon,\varepsilon'>0$ such that
\[
\sup_{(s,y)\in(-\varepsilon,0]\times B_{\varepsilon '}(x)}|\phi(s,y)-\phi(0,x)|\le \delta.
\]
Then
\begin{align*}
|\tilde u(t,x)-\phi(0,x)|\le&\ \left|\mathbf E\left[(\phi(-X^{t,\beta}(\tau_0(t)),X^{x}(\tau_0(t)))-\phi(0,x))\mathbf 1_{\{X^{t,\beta}(\tau_0(t))>\varepsilon\}}\right]\right|\\
&\ +\left|\mathbf E\left[(\phi(-X^{t,\beta}(\tau_0(t)),X^{x}(\tau_0(t)))-\phi(0,x))\mathbf1_{\{X^{t,\beta}(\tau_0(t))\le\varepsilon\}}\right]\right|\\
\le&\ 2\|\phi\|_{\infty}\mathbf P[X^{t,\beta}(\tau_0(t))>\varepsilon]\\
&\ +\mathbf E\left[|\phi(-X^{t,\beta}(\tau_0(t)),X^{x}(\tau_0(t)))-\phi(0,x)|\mathbf1_{\{X^{t,\beta}(\tau_0(t))\le\varepsilon, |X^{x}(\tau_0(t))-x|\le\varepsilon'\}}\right]\\
&\ +\mathbf E\left[|\phi(-X^{t,\beta}(\tau_0(t)),X^{x}(\tau_0(t)))-\phi(0,x)|\mathbf1_{\{X^{t,\beta}(\tau_0(t))\le\varepsilon,|X^{x}(\tau_0(t))-x|>\varepsilon'\}}\right]\\
\le&\ 2\|\phi\|_{\infty}\mathbf P[X^{t,\beta}(\tau_0(t))>\varepsilon] +\delta +2\|\phi\|_{\infty}\mathbf P[|X^{x}(\tau_0(t))-x|>\varepsilon'].
\end{align*}
Now, by Proposition \ref{prop:bound}, for all $ t\le \delta^{\frac 1 \beta}\varepsilon$ it holds that $\mathbf P[X^{t,\beta}(\tau_0(t))>\varepsilon]\le \delta$. Then the estimate above reads
\begin{align*}
|\tilde u(t,x)-\phi(0,x)|\le&\ 2\|\phi\|_{\infty}\delta+\delta+ 2\|\phi\|_{\infty}\mathbf P[|X^{x}(\tau_0(t))-x|>\varepsilon'],\quad \text{for every }t\le \delta^{\frac 1 \beta}\varepsilon.
\end{align*}
To conclude, by stochastic continuity, pick a possibly smaller threshold $\bar t$ to obtain
\[
\mathbf P[|X^{x}(\tau_0(t))-x|>\varepsilon']\le \delta\quad \text{for every }t\le \bar t.
\]
\end{proof}
\begin{remark}
The continuity at $t=0$ in Proposition \ref{thm_ctsat0} is not obvious. For example, it is clear that Proposition \ref{thm_ctsat0} fails if we replace $-X^{t,\beta}$ with a decreasing Poisson process. In fact, Proposition \ref{thm_ctsat0} fails in general if we replace $-X^{t,\beta}$ with a decreasing compound Poisson process $-N^t(s)$ with generator
$$
-D _\infty^{(\nu)} f(t):=\int_0^\infty (f(t-r)-f(t))\,\nu(dr),\quad \text{where}\quad0<\lambda:=\int_0^\infty \nu(dr)<\infty.
$$
To see this, observe that for every $\varepsilon,t>0$
\begin{align*}
\mathbf P\left[N^t\left(\tau_0 (t)\right)> \varepsilon\right] \ge \mathbf P\left[\text{first jump of $N^t$ is greater than $t+\varepsilon$}\right]=\int_{t+\varepsilon}^\infty \frac{\nu(dr)}{\lambda},
\end{align*}
and note that the right hand side is non-decreasing as $t\downarrow 0$, where $\tau_0 $ is the left continuous inverse of $N^0$. As $\int_0^\infty\nu(dr)>0 $ we can choose $\varepsilon_0>0$ and $\bar t>0$ so that
$$
\inf_{t\le\bar t}\mathbf P\left[N^t\left(\tau_0 (t)\right)> \varepsilon_0\right]\ge\int_{\bar t+\varepsilon_0}^\infty\frac{\nu(dr)}{\lambda}=:c>0.
$$
Now, consider a continuous non-negative $\phi$ with $\phi(0)=0$, such that $\inf_{r\in (-\infty,-\varepsilon_0]}\phi(r)>0$. Then for every $t\le \bar t$
\begin{align*}
|\tilde u(t)-\phi(0)|&=\mathbf E\left[\phi\left(-N^t\left(\tau_0(t)\right)\right)\left(\mathbf1_{\{N^t(\tau_0(t))> \varepsilon_0\}}+\mathbf1_{\{N^t(\tau_0(t))\le \varepsilon_0\}}\right)\right]\\
&\ge \mathbf E\left[\phi(-N^t\left(\tau_0(t)\right))\mathbf1_{\{N^t(\tau_0(t))> \varepsilon_0\}}\right]\\
&\ge \inf_{r\in (-\infty,-\varepsilon_0]}\phi(r)\mathbf P\left[N^t\left(\tau_0(t)\right)> \varepsilon_0\right]\\
&\ge \inf_{r\in (-\infty,-\varepsilon_0]}\phi(r)c>0.
\end{align*}
\end{remark}
\subsection{Proof of Lemma \ref{thm_semigroups}-(i)-(ii)-(iii)}\label{proof-i-ii-iii}
The three proofs are essentially the same, hence we prove only (ii).
Note that $P^{\beta,\text{kill}}_sP^\Omega_r=P^\Omega_rP^{\beta,\text{kill}}_s$ for every $s,r\ge 0$, and that
\[
\|P^\Omega_s f\|_{C([0,T]\times\overbar\Omega)},\ \|P^{\beta,\text{kill}}_s f\|_{C([0,T]\times\overbar\Omega)}\le \|f\|_{C([0,T]\times\overbar\Omega)},
\] for every $f\in C_{0,\partial\Omega}([0,T]\times\Omega)$, $s\ge0$. It is then easy to prove that $P^{\beta,\Omega,\text{kill}}$ is a sub-Feller semigroup on $ C_{0,\partial\Omega}([0,T]\times\Omega)$. We denote the generator of $P^{\beta,\Omega,\text{kill}}$ by $(\mathcal L_{\beta,\Omega}^{\text{kill}},\text{Dom}(\mathcal L_{\beta,\Omega}^{\text{kill}}))$. Let $f=pq $, where $p\in \mathcal C_\beta^{\text{kill}}$ and $q\in\mathcal C_\Omega $. Then, by a standard triangle inequality argument, we obtain
\begin{align*}
\Big|\frac{P^{\beta,\text{kill}}_hP^\Omega_h f(t,x)-f(t,x)}{h}&-(\mathcal L_\beta^{\text{kill}} +\mathcal L_\Omega )f(t,x)\Big|\\
\le&\ \|p\|_{C([0,T])}\left\|\frac{P^\Omega_hq-q}{h} -\mathcal L_\Omega q\right\|_{C(\overbar\Omega)} + \|\mathcal L_\Omega q\|_{C(\overbar\Omega)}\left\| P^{\beta,\text{kill}}_h p-p\right\|_{C([0,T])} \\
&\ +\|q\|_{C(\overbar\Omega)}\left\| \frac{P^{\beta,\text{kill}}_h p-p}{h}-\mathcal L_\beta^{\text{kill}}p\right\|_{C([0,T])}\to 0,
\end{align*}
as $h\downarrow 0$. An induction argument proves that $ \text{Span} \{\mathcal C_\beta^{\text{kill}}\cdot \mathcal C_\Omega \}\subset \text{Dom}(\mathcal L_{\beta,\Omega}^{\text{kill}})$ and $\mathcal L_{\beta,\Omega}^{\text{kill}}= (\mathcal L_\beta^{\text{kill}} +\mathcal L_\Omega)$ on $\text{Span} \{\mathcal C_\beta^{\text{kill}}\cdot \mathcal C_\Omega \}$. Observing that $\text{Span} \{\mathcal C_\beta^{\text{kill}}\cdot \mathcal C_\Omega \}$ is invariant under $P^{\beta,\Omega,\text{kill}}$ and is a subspace of $\text{Dom}(\mathcal L_{\beta,\Omega}^{\text{kill}})$, if we can prove that $\text{Span} \{\mathcal C_\beta^{\text{kill}}\cdot \mathcal C_\Omega\}$ is dense in $ C_{0,\partial\Omega}([0,T]\times\Omega)$, we are done by \cite[Lemma 1.34]{Schilling}. We proceed by noting that the set $\text{Span} \{C^\infty([0 ,T])\cdot C^\infty(\overbar\Omega)\}$ is a sub-algebra of $C([0,T]\times\overbar\Omega)$ that contains constant functions and separates points. Hence $\text{Span} \{C^\infty([0 ,T])\cdot C^\infty(\overbar\Omega)\}$ is dense in $C([0,T]\times\overbar\Omega)$ by the Stone--Weierstrass Theorem for compact\footnote{In the case of unbounded domains (part (iii) of the current lemma) use the Stone--Weierstrass Theorem for locally compact Hausdorff spaces.} Hausdorff spaces. We now prove density of the following set
\begin{align*}
\text{Span} \{C_c^\infty((0 ,T])\cdot C_c^\infty(\Omega)\}&\subset C_{0,\partial\Omega}([0,T]\times\Omega).
\end{align*}
For $f\in C_{0,\partial\Omega}([0,T]\times\Omega)$ we take a sequence $\{f_n\}_{n\in\mathbb N}\subset \text{Span} \{C^\infty([0 ,T])\cdot C^\infty(\overbar\Omega)\} $ such that $f_n\to f$, where $f_n(t,x)=\sum_{i=1}^{N_n}p_{i,n}(t)q_{i,n}(x)$, for some $N_n\in\mathbb N$ depending on $n\in\mathbb N$. Let $ 1_{T,n}\in C_c^\infty((0,T])$ and $ 1_{\Omega,n}\in C_c^\infty(\Omega)$ be smooth functions for each $n\in\mathbb N$, such that $ 0\le 1_{T,n}, 1_{\Omega,n}\le 1 $, $ 1_{T,n}(t)= 1_{\Omega,n}(x)=1$ for $t\in (\frac1n,T]$ and $x\in K_n$, and $ 1_{T,n}(t)= 1_{\Omega,n}(x)=0$ for $t\in (0,\frac1{n+1}]$ and $x\in\Omega\backslash K_{n+1}$, where $K_n$ is compact, $K_n\subset K_{n+1}\subset \Omega$ for each $n$, and $\cup_n K_n=\Omega$. Define for each $n\in\mathbb N$,
$$(t,x)\mapsto\tilde f_n(t,x):=\sum_{i=1}^{N_n}p_{i,n}(t)1_{T,n}(t)q_{i,n}(x)1_{\Omega,n}(x)\in \text{Span} \{C_c^\infty((0 ,T])\cdot C_c^\infty(\Omega)\}.$$
Then, as $n\to\infty$
\begin{align*}
\|\tilde f_n -f\|_{C ([0,T]\times \Omega)}\le&\ \|f_n -f\|_{C ([\frac1n,T]\times K_n)} + \|\tilde f_n -f\|_{C \left((\frac{1}{n+1},\frac1n]\times \overbar\Omega\cup [0,T]\times K_{n+1}\backslash K_n\right)}\\
&\ +\|f\|_{C \left([0,T]\times \overbar\Omega \backslash K_{n+1}\cup [0,\frac{1}{n+1}]\times \overbar\Omega\right)}\to 0.
\end{align*}
As $C_c^\infty(\Omega) \not\subset \text{Dom}(\mathcal L_\Omega)$ we need to work a bit more. For any $u\in C_{0,\partial\Omega}([0,T]\times\Omega) $ we can now take a uniformly approximating sequence $\{u_n\}_{n\in\mathbb N}\subset \text{Span} \{C_c^\infty((0 ,T])\cdot C_c^\infty(\Omega)\}$. Denote $u_n(t,x)=\sum_{i=1}^{N_n}p_{i,n}(t)q_{i,n}(x)$, for some $N_n\in\mathbb N$ depending on $n\in\mathbb N$, where $p_{i,n}\in C_c^\infty((0 ,T]),\ q_{i,n}\in C_c^\infty(\Omega)$ are non-zero, for each $i\in \{1,...,N_n\}$, $n\in\mathbb N$. As $\mathcal C_\beta$ and $\mathcal C_\Omega$ are dense in $C_0([0 ,T])\supset C_c^\infty((0 ,T])$ and $ C_{\partial\Omega}(\Omega)\supset C_c^\infty(\Omega)$, respectively, we can pick $\{(\tilde p_{i,n}, \tilde q_{i,n}):i\in \{1,...,N_n\},n\in\mathbb N\}\subset \mathcal C_\beta\times \mathcal C_\Omega$, in the following fashion: for each triplet $(N_n, p_{i,n}, q_{i,n})$, first pick $\tilde p_{i,n}$ so that
\[
\| p_{i,n}-\tilde p_{i,n}\|_{C[0,T]}\le\frac{1}{ nN_n\|q_{i,n}\|_{C(\overbar\Omega)}},
\]
secondly pick $\tilde q_{i,n}$ so that
\[
\| q_{i,n}-\tilde q_{i,n}\|_{C(\overbar\Omega)}\le \frac{1}{nN_n\|\tilde p_{i,n}\|_{C[0,T]}}.
\]
Then, after defining $\tilde u_n(t,x):=\sum_{i=1}^{N_n}\tilde p_{i,n}(t)\tilde q_{i,n}(x)$, we obtain
\begin{align*}
\|u-\tilde u_n\|_\infty&\le \|u- u_n\|_\infty+\|u_n-\tilde u_n\|_\infty\\
&\le \|u- u_n\|_\infty+\sum_{i=1}^{N_n}\| p_{i,n} q_{i,n}-\tilde p_{i,n}\tilde q_{i,n}\|_\infty\\
&\le \|u- u_n\|_\infty+\sum_{i=1}^{N_n}\left(\| q_{i,n}\|_\infty\|p_{i,n}-\tilde p_{i,n}\|_\infty+\| \tilde p_{i,n}\|_\infty\|q_{i,n}-\tilde q_{i,n}\|_\infty\right)\\
&\le \|u- u_n\|_\infty+\sum_{i=1}^{N_n}\left(\frac{\| q_{i,n}\|_\infty}{ nN_n\|q_{i,n}\|_{C(\overbar\Omega)}}+\frac{\| \tilde p_{i,n}\|_\infty}{nN_n\|\tilde p_{i,n}\|_{C[0,T]}}\right)\\
&= \|u- u_n\|_\infty+\sum_{i=1}^{N_n}\frac{2}{nN_n}\\
&\le \|u- u_n\|_\infty+\frac{2}{n}\to 0,\quad \quad \text{as }n\to\infty.
\end{align*}
\let\oldbibliography\thebibliography
\renewcommand{\thebibliography}[1]{\oldbibliography{#1}
\setlength{\itemsep}{0pt}}
\section*{Main text}
Single-layer molybdenum disulphide (MoS$_2$) is a widely-studied candidate for future optoelectronics, where energy conversion is achieved with far less matter than with traditional three-dimensional materials, and a wealth of functionalities emerge from flexibility and transparency.\cite{Wang} The direct band-gap in the electronic band structure\cite{Li} is the key to light emission\cite{Mak,Splendiani} and conversion\cite{Sundaram} in single-layer MoS$_2$. Due to the reduced dimensionality, Coulomb interactions play a key role in this system and lead to a large exciton binding energy. More generally, this system is a playground for testing many-body Coulomb interaction theories, which should be able to describe excitonic complexes.\cite{Ugeda,Kidd,Efimkin} The current understanding is that the excitons in monolayer transition metal dichalcogenides have fast radiative lifetimes\cite{Poellmann,Wang_b,Jakubczyk} (sub-picosecond) and that the interlayer excitons in type-II junctions are long-lived\cite{Rivera,Palummo} (in the nanosecond range at low temperature). Such excitons could be used for light-emission or light-harvesting devices, respectively. In both cases the presence of defects will induce non-radiative decay, reducing device efficiency, and/or modify the exciton emission energy. A chemical treatment eliminating defects made it possible to demonstrate a photoluminescence quantum yield close to unity, together with long lifetimes.\cite{Amani} Nevertheless, the nature of defects in non-treated samples and their role in radiative and non-radiative recombination remain open questions. To bring clear-cut answers to this pressing question, well-characterised defects need to be investigated and their influence on the physical properties needs to be understood. 
They could be intrinsic to the single layer, \textit{e.g.} sulfur vacancies and substitutional atoms,\cite{Dolui,Kim,Tongay,Qiu,Lu,McDonnell,Addou,Noh,Komsa,Gonzalez} or extrinsic to it, in the form of charged impurities, either trapped in the MoS$_2$ substrate\cite{Lu} or adsorbed on it.\cite{Late,Li_b,Lembke,Jariwala} The latter are thought to limit the electronic mobility of MoS$_2$-based transistors below the phonon-limited value.\cite{Kaasbjerg}
\begin{figure*}[hbt]
\begin{center}
\includegraphics[width=142.9mm]{./Figure2-01.pdf}
\caption{\label{fig1}(a) Schematics of the process for exfoliation with scotch-tape towards a SiO$_2$ surface and PDMS. (b) Occurrence of MoS$_2$ flakes with fewer than four layers in thickness, obtained by scotch-tape exfoliation on SiO$_2$ (light pink-shaded regions) and stamping on PDMS (blue-shaded regions), in the case of a natural source of MoS$_2$ (top) and the HP/HT-MoS$_2$ (bottom), for both scotch-tape and PDMS. A total of 45 and 52 flakes were measured for the two sources of MoS$_2$. (c) Optical micrograph of a large bi-layer MoS$_2$ flake obtained by exfoliation on PDMS.}
\end{center}
\end{figure*}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=122.2mm]{./Figure1-01.pdf}
\caption{\label{fig2}(a) Optical micrographs of (from left to right) single- (1L), bi- (2L), and tri- (3L) layer MoS$_2$ flakes exfoliated on SiO$_2$ with scotch-tape. (b) Raman spectra of the inter-layer shear (S1) and breathing (B1) modes measured at the locations marked with a cross in (a). The three spectra are vertically-shifted for clarity; the grey-shaded area corresponds to the stop band of the notch filter (within which the measured intensity is not informative). (c) Occurrence of all MoS$_2$ 1L flakes (green bars) showing a characteristic optical contrast relative to SiO$_2$ in the red channel of digital images (see Methods and Materials). For comparison, some flakes with more than one layer in thickness (black bars), showing a higher optical contrast, are shown. 75 flakes have been measured in total. Green bars signal single-layer flakes, as ascertained with Raman spectroscopy (see Methods and Materials), and black bars signal flakes with more than one layer in thickness.}
\end{center}
\end{figure*}
Here, we report on point defects that induce defect-bound excitons in single-layers of MoS$_2$. The MoS$_2$ is obtained from two different sources of bulk crystals. We use a natural crystal and a synthetic crystal, prepared at high pressure and high temperature (denoted HP/HT in the following). We find that the latter kind of bulk MoS$_2$ hosts specific defects and holds promise for improved control of the structure of MoS$_2$ in the future. We disentangle electronic doping, mechanical strain, and defect-induced exciton localisation by combining Raman spectroscopy, photoluminescence mapping, scanning tunneling microscopy (STM), and density functional theory (DFT) calculations. We also discriminate the influence of extrinsic and intrinsic effects by addressing samples transferred onto silica and onto hexagonal boron nitride (\textit{h}-BN).\\
\textbf{Preparation of MoS$_2$ few- and single-layers.} The natural MoS$_2$ bulk crystals (typical size, 2~mm) used in this study are provided by SPI-supplies. The second kind of bulk MoS$_2$ we used is prepared at NIMS, Tsukuba, following a slow cooling process from a molten state attained under high pressure. Here, MoS$_2$ (99.9\% pure), supplied by Kojundo Chemical Laboratory Co., Ltd., is encapsulated in a \textit{h}-BN capsule and brought to 5~GPa and 1800$^\circ$C for 20~min by using a belt-type high pressure apparatus. The sample is then cooled down to room temperature at a rate of 0.8$^\circ$C/min. After releasing the pressure, the MoS$_2$ crystal is recovered by crushing the \textit{h}-BN capsule. The crystal size after this process is typically 1~mm.
Mechanical exfoliation of MoS$_2$ was achieved with two processes (Figure~\ref{fig1}a). In the first one, a macroscopic MoS$_2$ grain attached onto scotch-tape is thinned down with repeated scotch-tape exfoliation. Next, the surface of the MoS$_2$-covered tape is stamped onto a SiO$_2$ wafer.\cite{Novoselov} Irrespective of the source of MoS$_2$, the typical flake area is of the order of 1 to 10~$\mu$m$^2$ (Figure~\ref{fig1}b). The other process, using a polydimethylsiloxane (PDMS) host support\cite{Castellanos} instead of SiO$_2$, substantially increases the area of the exfoliated flakes, to the 100 to 1,000~$\mu$m$^2$ range (Figure~\ref{fig1}b). Figure~\ref{fig1}c displays a photograph of one of the largest flakes (a bi-layer one) that we exfoliated, among the several tens we have prepared. We note that the transfer processes we used are dry processes that are not expected to alter the atomic structure of the individual MoS$_2$ layers. The use of PDMS limits the amount of contaminants left on MoS$_2$,\cite{Castellanos} as observed by atomic force microscopy and electron energy loss spectroscopy performed in a scanning transmission electron microscope (see Supporting Information, Figures~S1,S2).
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=165.0mm]{./Figure3_new-01.pdf}
\caption{\label{fig3}(a) Raman spectra (532~nm-wavelength laser) for MoS$_2$ single-layers exfoliated from a natural crystal (black) and from a HP/HT source (red), on SiO$_2$ and \textit{h}-BN. (b,c,d) Optical micrographs for MoS$_2$ exfoliated from the two kinds of crystals. The cartoons (c) clarify the stacking of MoS$_2$ on \textit{h}-BN and SiO$_2$. (e,f) Raman maps of the position of the A$_{1g}$ mode for the region corresponding to (b,d), for the two kinds of MoS$_2$ samples. The distribution of the mode position is shown for the top and bottom maps. The thick black and red frames in (b-f) refer to two MoS$_2$ sources, natural and HP/HT.}
\end{center}
\end{figure*}
As a preliminary step, we present the way we determine the thickness of the flakes. Our approach is to establish the correspondence between optical contrast, which varies with the number of layers and the optical wavelength in a rather complex manner\cite{Castellanos_b,Benameur} (Figure~S3), and an unambiguous independent determination of the number of layers with Raman spectroscopy. We track the occurrence and position of the shear (S1) and breathing (B1) interlayer vibrational modes.\cite{Plechinger,Zhao} Figure~\ref{fig2}a shows three MoS$_2$ flakes exfoliated with scotch-tape onto SiO$_2$. Their Raman spectra are different, with the single-layer MoS$_2$ readily identified by the absence of the B1 and S1 modes, while these two modes are found in bi- and tri-layers, and are stiffer in the latter case (Figure~\ref{fig2}b). We find that in the red channel of the digital optical images, the contrast is lowest for a single layer, at 0.22$\pm$0.08 (Figure~\ref{fig2}c). This characteristic contrast value is used as a criterion for fast identification of single layers. The remainder of the paper is focused on single layers.
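As an illustration of this identification criterion, the red-channel contrast of a flake relative to the bare substrate can be computed as in the sketch below (Python with NumPy). The contrast definition $C=(I_{\mathrm{sub}}-I_{\mathrm{flake}})/I_{\mathrm{sub}}$ and the synthetic intensities are assumptions of this sketch; the paper's exact convention is given in its Methods and Materials.

```python
import numpy as np

def red_channel_contrast(image, flake_mask, substrate_mask):
    """Optical contrast of a flake relative to the bare substrate,
    computed on the red channel of an RGB image (H x W x 3 array).
    The definition C = (I_sub - I_flake) / I_sub is an assumption of
    this sketch, not necessarily the paper's exact convention."""
    red = image[..., 0].astype(float)
    i_flake = red[flake_mask].mean()
    i_sub = red[substrate_mask].mean()
    return (i_sub - i_flake) / i_sub

# synthetic example: substrate at red intensity 180, flake at 140
img = np.full((64, 64, 3), 180, dtype=np.uint8)
img[16:48, 16:48, 0] = 140
flake = np.zeros((64, 64), bool)
flake[16:48, 16:48] = True
contrast = red_channel_contrast(img, flake, ~flake)
```

With these synthetic intensities the result, $40/180\approx 0.22$, falls in the single-layer window quoted above.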
Using PDMS stamps where single-layers are first identified, we then transferred MoS$_2$ onto two kinds of substrates. The first one is SiO$_2$, and the second one is \textit{h}-BN, which has been exfoliated onto SiO$_2$ beforehand. In both processes, the interface between MoS$_2$ and the substrate has not been exposed to the polymer (PDMS), hence pristine MoS$_2$/support interfaces are formed.
\textbf{Strain and electronic doping from vibrational spectroscopy.} Figure~\ref{fig3}a shows characteristic Raman spectra zoomed in on the low-wavenumber region, featuring the intralayer shear (E$^1_{2g}$) and breathing (A$_{1g}$) modes (which are stiffer than the interlayer modes addressed above, since the interlayer bonds are much weaker), for the two sources of MoS$_2$ on the two substrates. To better highlight the differences between the four possible stacks (on the two substrates, for each of the two sources), we mapped the position of the A$_{1g}$ mode (determined by Lorentzian fits of the corresponding peak), which is especially sensitive to electron-phonon coupling effects,\cite{Chakraborty} across an area corresponding to the optical micrographs shown in Figures~\ref{fig3}b-d. The result is shown in Figures~\ref{fig3}e,f. The most obvious difference is the correlation between the position of the A$_{1g}$ mode and the nature of the substrate: a blue-shift, of 1.0$\pm$0.1 and 0.8$\pm$0.1~cm$^{-1}$ from the SiO$_2$ to the \textit{h}-BN substrate, is observed for the natural and HP/HT sources respectively.
These blue shifts may be caused by mechanical strain\cite{Conley,Castellanos_c,Parkin} and/or by electron doping,\cite{Chakraborty} reflecting the anharmonicity of the interatomic potentials and the effect of the electron-phonon interaction respectively. The energies of the A$_{1g}$ and E$^1_{2g}$ modes have characteristic variations with each of these effects. A strain \textit{vs} doping graph can hence be extracted from the maps of the A$_{1g}$ (Figures~\ref{fig3}e,f) and E$^1_{2g}$ Raman shifts --- a two-dimensional space is constructed with the positions of the A$_{1g}$ and E$^1_{2g}$ modes as principal axes.\cite{Michail} Figure~\ref{fig3bis} shows such a graph for the two samples. Disregarding at this stage the colours of the points (which will be discussed later in the light of the photoluminescence measurements), for both samples we find two groups of points, each corresponding to MoS$_2$ on \textit{h}-BN (greater A$_{1g}$ positions) and on SiO$_2$. The trend is similar for both sources of MoS$_2$, suggesting that the observations mostly point to an extrinsic effect, namely the nature of the substrate. The electron doping level is larger on SiO$_2$, by 2.5 and 2.0$\times 10^{12}$~electrons per cm$^2$ for the natural and HP/HT sources respectively. Substrate-induced doping is a known phenomenon, which was ascribed, in other two-dimensional materials,\cite{Lu_b,Bao,Kretinin} to charged impurities in SiO$_2$ that are absent in \textit{h}-BN. These charged impurities effectively dope MoS$_2$ with electrons, and the observed doping level is consistent with a previous observation\cite{Buscema} --- they hence represent extrinsic defects.
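The conversion from the measured A$_{1g}$ and E$^1_{2g}$ positions to the strain \textit{vs} doping plane amounts to inverting a 2$\times$2 linear model. The following Python sketch illustrates the inversion; the shift-rate coefficients are hypothetical placeholders, not the calibration values used in this work:

```python
# Hypothetical shift rates (NOT the calibration used in this work):
# how much each Raman mode shifts per unit strain and per unit doping.
DE_DSTRAIN = -2.1   # cm^-1 per % strain, E^1_2g mode (placeholder)
DA_DSTRAIN = -0.7   # cm^-1 per % strain, A_1g mode   (placeholder)
DE_DDOPING = -0.3   # cm^-1 per 10^13 e/cm^2, E^1_2g  (placeholder)
DA_DDOPING = -2.2   # cm^-1 per 10^13 e/cm^2, A_1g    (placeholder)

def strain_doping(d_e2g, d_a1g):
    """Invert the 2x2 linear model by Cramer's rule.

    d_e2g, d_a1g: mode shifts (cm^-1) relative to an unstrained,
    undoped reference point.
    Returns (strain in %, doping in 10^13 electrons/cm^2).
    """
    det = DE_DSTRAIN * DA_DDOPING - DE_DDOPING * DA_DSTRAIN
    strain = (d_e2g * DA_DDOPING - DE_DDOPING * d_a1g) / det
    doping = (DE_DSTRAIN * d_a1g - d_e2g * DA_DSTRAIN) / det
    return strain, doping
```

Applied pixel-by-pixel to the Raman maps, such an inversion yields the clusters of points plotted in Figure~\ref{fig3bis}.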
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=110.6mm]{./Figure3bis-01.pdf}
\caption{\label{fig3bis}Positions of the A$_{1g}$ and E$^1_{2g}$ modes of the Raman spectra (532~nm-wavelength laser), each point corresponding to a point in the maps shown in Figures~\ref{fig3}e,f. The grid of the strain \textit{vs} electronic doping has increments of 0.05\% and 5$\times 10^{11}$~cm$^{-2}$. (a) and (b) correspond to the two MoS$_2$ sources, natural and HP/HT respectively. Each point is coded with a colour corresponding to the ratio of areas of the two contributions to the main excitonic feature in the photoluminescence spectra, shown in the insets (see Figure~\ref{fig4}). Inset: Spatial dispersion of area ratio.}
\end{center}
\end{figure*}
Figures~\ref{fig3bis}a,b also reveal that the transfer process can generate non-uniform strain to a small extent. The two clusters of points in both panels are scattered within typically 0.05 to 0.1\%. Besides, in the case of Figure~\ref{fig3bis}a (natural source of MoS$_2$), a strain difference of 0.05 to 0.1\% is found between the two clusters of points, corresponding to the top and bottom parts of the optical image (on \textit{h}-BN and SiO$_2$ respectively). We do not find such a difference in Figure~\ref{fig3bis}b (HP/HT MoS$_2$). The observed differences are not systematic, and we believe that they point to slightly different mechanical efforts exerted during the preparation and/or different \textit{h}-BN thicknesses for the two samples, rather than, \textit{e.g.}, internal strain induced by defects.\cite{Parkin}\\
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=144.9mm]{./Figure4_new-01.pdf}
\caption{\label{fig4}(a) Room temperature photoluminescence spectra for single-layer MoS$_2$ prepared by exfoliation from natural (black) and HP/HT (red) crystals, deposited on the surface of SiO$_2$ (top) and \textit{h}-BN (bottom). The spectra have been corrected for interference effects associated with the presence of the MoS$_2$/\textit{h}-BN, MoS$_2$/SiO$_2$, \textit{h}-BN/SiO$_2$, and SiO$_2$/Si interfaces. The spectra are fitted with three Lorentzian components, respectively corresponding to the lowest energy direct transition, for the exciton (A), the trion for natural MoS$_2$ (A$^-$) or the defect-bound exciton for HP/HT MoS$_2$ (A'), and the second lowest energy direct transition for the exciton (B). The dotted lines are the best fits to the data. (b-e) Maps of energy of the A component, $\omega_\mathrm{A}$ (b,c), and of the difference in energy $\Delta\omega$ of the A$^-$ and A (d) and A' and A (e) components, for the same area as in Figures~\ref{fig3}b,d, for natural (thick black frames, b,d) and HP/HT (thick red frames, c,e) MoS$_2$ single layers.}
\end{center}
\end{figure*}
\textbf{Excitonic complexes in presence of strain, electronic doping, and defects.} Both the electronic doping level and strain influence the excitonic properties of MoS$_2$.\cite{Mak_b,Mouri,Nan,Pei,Conley} To address these effects we performed photoluminescence measurements at room temperature with a 532~nm laser excitation and low power (see Materials and Methods) for both sources of MoS$_2$ and both substrates. Figure~\ref{fig4}a displays characteristic spectra corrected for optical interference effects (special care needs to be paid to these corrections, see Supporting Information, Figures~S3,S4). As expected with an excitation wavelength of 532~nm, two main excitonic peaks are observed, each corresponding to a different transition involving one or the other spin-polarized valence band.\cite{Steinhoff} In the following, we will focus on the lowest energy peak, and to start with, we address the natural source of MoS$_2$. This peak actually comprises two components. They are separated by typically 40~meV and have a full width at half maximum of several tens of meV, dominated by electron-phonon coupling effects.\cite{Dey} They correspond to a neutral (A) exciton and a charged (A$^-$) exciton --- a trion.\cite{Mak_b,Sercombe} The latter is more prominent when the electronic doping is higher.\cite{Mak_b}
As discussed in Ref.~\citenum{Mak_b}, the ratio of areas of the two peaks inferred from photoluminescence (Figure~\ref{fig3bis}a) characterises the level of electron doping; we estimate it to be typically of the order of several $10^{12}$~electrons per cm$^2$. The inset of Figure~\ref{fig3bis}a reveals a distinctive trion \textit{vs} exciton population depending on whether MoS$_2$ lies on SiO$_2$ or \textit{h}-BN. The ratios of A$^-$ to A areas, typically 0.01-0.02 and 0.1 respectively on these two substrates, are consistent with the changes of electronic doping levels, due to charged impurities in SiO$_2$, found in the analysis of the Raman data (of the order of a few $10^{12}$~cm$^{-2}$).
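The spectral-weight ratio underlying this analysis follows directly from the fitted Lorentzian parameters, since the area of a Lorentzian is analytic. A minimal Python sketch (the fitting itself is not shown, and the parameter values fed in would come from the fits of Figure~\ref{fig4}a):

```python
import math

def lorentzian_area(amplitude, fwhm):
    # Area of a Lorentzian I(w) = amplitude / (1 + ((w - w0) / (fwhm / 2))**2),
    # which integrates to amplitude * pi * fwhm / 2.
    return amplitude * math.pi * fwhm / 2.0

def spectral_weight_ratio(amp_low, fwhm_low, amp_a, fwhm_a):
    # Ratio of the low-energy component (A^- or A') area to the neutral
    # exciton (A) area, as used for the colour coding of the data points.
    return lorentzian_area(amp_low, fwhm_low) / lorentzian_area(amp_a, fwhm_a)
```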
The position of the two peaks (Figures~\ref{fig4}b,S5) changes by 12~meV depending on whether MoS$_2$ lies on SiO$_2$ or \textit{h}-BN. As Raman spectroscopy suggests, this results from the preparation process causing a spatial strain variation, a compression on SiO$_2$ relative to the case on \textit{h}-BN, of about 0.1\%. The magnitude of the strain-induced energy shift is consistent with the previously reported strain-induced electronic band-gap change.\cite{Conley}
Let us now turn to the photoluminescence signatures in the case of HP/HT MoS$_2$. In this case also, we find that the main excitonic feature does not consist of a single component. While the above-discussed energy difference between the two components was about 40~meV for natural MoS$_2$, consistent with the expected trion binding energy corresponding to electron doping levels of the order of a few $10^{12}$~electrons per cm$^2$,\cite{Mak_b} here the two components are separated by a substantially lower energy difference (20~meV), regardless of the substrate (Figure~\ref{fig4}c). Such an energy difference cannot correspond to a trion under the influence of strain or electronic doping: the variations of strain and electronic doping in our samples are in the range of a few 0.1\% and a few $10^{12}$~cm$^{-2}$ respectively, which have only a marginal influence on the binding energy of the trion (a few meV or below).\cite{Wang_c,Mak_b} What, then, is the nature of this low-energy emission?\cite{note_on_components} Its spectral weight is globally high and, strikingly, unlike the A$^-$ feature for natural MoS$_2$, it does not correlate with the kind of substrate and the corresponding doping level revealed by Raman spectroscopy (Figure~\ref{fig3bis}b). This is at variance with the behaviour expected for trions.
A rational explanation for this low-energy feature (in the case of HP/HT MoS$_2$) is that it relates to a defect-bound exciton. Defect-bound excitons were previously invoked in MoS$_2$ and attributed to sulfur vacancies, di-vacancies, and metal vacancies.\cite{Tongay,Chow} While they were found to be associated with a binding energy of the order of 100~meV, here we find a binding energy of 20~meV. The limited variations of strain or electronic doping in our samples do not allow us to reveal a possibly different influence of these effects on the A', A$^-$ and A features. As we will see, samples from the HP/HT source comprise a larger amount of defects. In the following, we investigate the nature and density of these defects using additional probes.\\
\textbf{The nature of the point defects.} A large variety of defects has been considered in MoS$_2$, including sulfur vacancies,\cite{Kim,Tongay,Qiu,Komsa} substitutional atoms replacing either the metal or the sulfur atom,\cite{Dolui,Kim,Tongay,Qiu,Noh,Komsa} and individual atoms (the electrodonor alkali atoms) adsorbed onto the surface.\cite{Dolui} Only the latter kind has been reported to be associated with shallow donor levels, which could account for the commonly reported $n$-doping in single-layer MoS$_2$ at room temperature. The chemical analysis of the starting material in the HP/HT process does not seem compatible with the presence of alkali atoms, though. On the contrary, based on this analysis, the prominent candidate impurities are iron and carbon, or boron and nitrogen from the capsule used to seal the MoS$_2$ during the HP/HT treatment.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=117.5mm]{./Figure_STM-01.pdf}
\caption{\label{figSTM}(a) Optical micrograph of the five-layer HP/HT MoS$_2$ deposited on graphene/SiC, with gold markers. (b) STM image measured with a bias voltage $V_\mathrm{b}$=-2~V and a tunneling current of $I_\mathrm{t}$=0.2~nA. The image shows the derivative of the apparent height as a function of the horizontal spatial coordinate, to enhance the contrast of the atomic-size defects. The red arrow points to an atomic step edge of the substrate, with a height of 0.75~nm. (c) STM topograph ($V_\mathrm{b}$=-2~V, $I_\mathrm{t}$=0.2~nA), close-up view of some defects. The green and pink arrows point to two kinds of defects.}
\end{center}
\end{figure*}
The impurity levels in MoS$_2$ are too low to be reliably assessed with standard macroscopic chemically-sensitive probes such as X-ray photoelectron spectroscopy.\cite{Addou} High-resolution microscopy circumvents this issue by addressing the defects individually. We used STM for this purpose, as implemented in an ultra-high vacuum environment that limits spurious interactions of the defects with \textit{e.g.} small molecules. Very few reports in the literature have in fact been devoted to STM measurements on single- or few-layer flakes. Mostly, this is due to the small size (1 to 10~$\mu$m$^2$) of the exfoliated flakes, which, if deposited on a non-conductive substrate, must be electrically contacted with finely designed electrodes. The observation of such small features with a small-field-of-view technique such as STM is very laborious. This is probably why most reports over the past 20 years rely on cleaved bulk MoS$_2$.\cite{Abe,Murata,Park,Addou,Addou_b,McDonnell,Lu} Two workarounds have recently been implemented: one taking advantage of large-area growth of MoS$_2$ on graphite,\cite{Huang,Zhou} and the other exploiting the strong adhesion of MoS$_2$ exfoliated on a gold surface.\cite{Magda} We chose an alternative strategy and once more exploited PDMS exfoliation (Figure~\ref{fig1}a), which yields large flakes with sizes approaching 100~$\mu$m, using as a host substrate a (conductive) graphene-covered silicon carbide surface. To ease the localisation of the (few-layer) flake we further deposited micrometer-sized gold markers (Figure~\ref{figSTM}a, see Materials and Methods).
This rather advanced sample preparation allows single defects to be imaged with STM (though the measurements remain far from straightforward). A high density of defects is observed (Figure~\ref{figSTM}b), of the order of 1$\times$10$^{12}$~cm$^{-2}$, varying from 0.6 to 4$\times$10$^{12}$~cm$^{-2}$ from one place to another. A spatially inhomogeneous distribution of defects was already reported in previous STM analyses of MoS$_2$ samples.\cite{Addou_b,Addou,Vancso} The density we find on the HP/HT sample is larger than that observed on samples prepared by exfoliation of natural molybdenite, which is in the few 10$^{11}$~cm$^{-2}$ range\cite{Addou_b,Addou} or less\cite{Lu} (3.5$\times$10$^{10}$~cm$^{-2}$). Conversely, a much larger density of defects (from 5$\times$10$^{12}$~cm$^{-2}$ to 5$\times$10$^{13}$~cm$^{-2}$) has been reported for MoS$_2$ prepared by exfoliating synthetic crystals.\cite{Vancso}
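Converting a defect count in an STM image into an areal density is simple arithmetic; a minimal sketch, assuming a square field of view:

```python
def defect_density_cm2(n_defects, side_nm):
    # Areal defect density (cm^-2) from a count over a square STM image
    # of side side_nm nanometers; 1 nm = 1e-7 cm.
    area_cm2 = (side_nm * 1e-7) ** 2
    return n_defects / area_cm2
```

For instance, 100 counted defects in a 100~nm $\times$ 100~nm image correspond to 10$^{12}$~cm$^{-2}$, the order of magnitude quoted above.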
In the HP/HT MoS$_2$ we find two prominent populations of point defects, which appear as a bright feature and a depression respectively (Figure~\ref{figSTM}c). Depression-like defects of the same extension (1-2~nm) or slightly larger (a few nanometers) have been reported previously.\cite{Inoue,Abe,Addou,Addou_b,Lu} Among them, one appears as a depression at negative tip-sample bias, as in our observations, and is a characteristic defect in natural MoS$_2$ that is ascribed to missing S-Mo-S fragments located either in the top or in a buried MoS$_2$ layer.\cite{Addou_b} The second kind of defect (the bright one) has not been observed in natural MoS$_2$ samples, and is hence generated during the preparation of the HP/HT sample. It has a characteristic shape resolved with sharp STM tips, consisting of a ring with three pairs of radial legs. The size of the ring is typically 0.7~nm. Defects featuring a ring shape in STM have also been reported previously\cite{Abe,Murata} and were ascribed to alkali atoms adsorbed on the surface. However, neither the HP/HT process nor the ultra-high vacuum chamber where the STM measurements were performed seems likely to yield such adsorbates.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=82.2mm]{./Figure_DFT-01.pdf}
\caption{\label{figDFT}(a) Electronic density of states as a function of electron energy with respect to the Fermi level, for a nitrogen atom substituting a sulfur atom in single-layer MoS$_2$, and for defect-free MoS$_2$ (red). The spectra have been shifted horizontally to match the bottoms of the conduction band. The arrow points to the position of a very sharp defect state. (b) Corresponding simulated STM image with the Fermi level located at the bottom of the conduction band and a -2~V tip-sample bias, with the STM tip 4~\AA\, from the surface, with (right) and without (left) the atomic structure overlaid.}
\end{center}
\end{figure}
Both electronic and structural information contribute to STM images. To elucidate the nature of the defects, DFT simulations provide key insights for interpreting the observed STM contrasts. The comparison of experimental STM images with spatially-resolved information provided by DFT is a well-established approach to study defects. To our knowledge such a comparison has not been made in the case of defects in MoS$_2$ beyond the case of lattice vacancies.\cite{Inoue,Vancso} We computed the stable configurations of five defects corresponding to the impurities that are detected in the chemical analysis of the raw MoS$_2$ or present in the \textit{h}-BN capsule used in the HP/HT process. This includes a sulfur vacancy, a molybdenum atom substituted by an iron atom, and a sulfur atom substituted by a carbon, a nitrogen, or a boron atom. Each of these defects is associated with electronic states inside the bandgap of MoS$_2$ (here expectedly close to the bulk value of 1.3~eV) or close to the bandgap edges (see Figures~\ref{figDFT}a,S7). The sample bias of -2.0~V corresponds to electrons tunneling from the sample to the tip, in an energy window of 2.0~eV below the MoS$_2$ Fermi level, which is presumably located close to the conduction band minimum. It is thus expected that the defect electronic states within the bandgap have a significant contribution compared to the valence band, given that they correspond to a lower tunnel barrier. We simulated STM images by taking into account the STM tip (see Materials and Methods) in the presence of the different defects. The results are shown in Figure~S8, and for one specific defect (nitrogen atom substituting a sulfur atom) in Figure~\ref{figDFT}b. For the latter defect, the simulated STM image is in rather good agreement with the experimental one, despite the significant difference in spatial resolution, which is higher in the simulations.
Indeed in the experiment, an {\AA}ngstr\"{o}m-scale instability of the scanning tip is observed (as shown by the occurrence of horizontal stripes at the defect location in Figure~\ref{figSTM}c), and the tip's shape presumably deviates from the ideal pyramidal shape assumed in the calculations. We consider this to be the reason why the three bright lobes observed in the simulated image appear as a circle in the experimental image. Beyond this, the main features compare very well for the N substitutional defect: the size of the lower-intensity central feature matches, and the three pairs of legs appearing in the experimental image seem reminiscent of the three lobes found in the simulations. Based on this comparison we propose that the ring-shaped defects we observed correspond to nitrogen atoms having replaced sulfur ones during the HP/HT sample preparation (and originating \textit{e.g.} from the \textit{h}-BN capsule used in this process).\\
\textbf{Field-effect transistor based on HP/HT MoS$_2$.} Single-layer MoS$_2$ prepared under HP/HT conditions was finally integrated into a field-effect transistor with electrostatic gating from the back side, in which direct contact with SiO$_2$ was avoided by a \textit{h}-BN buffer layer (Figure~\ref{fig5}). Accordingly, a low amount of charged impurities is expected in the vicinity of MoS$_2$. Consistent with previous reports, we find that the conduction properties are improved under vacuum (compared to ambient pressure), presumably due to the desorption of species acting as charged impurities.\cite{Lembke} We only observe the blocked state of the transistor and the regime of electron conduction (and not the hole conduction regime) in the source-drain current \textit{vs} gate-voltage characteristic (Figure~\ref{fig5}).
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=105.6mm]{./Figure5-01.pdf}
\caption{\label{fig5}Room temperature conductance measured under vacuum in a single-layer transistor based on HP/HT MoS$_2$, as a function of gate voltage, with a source-drain bias of 0.2~V. The inset shows an optical micrograph of a single-layer device, which is transferred on \textit{h}-BN (exfoliated on SiO$_2$) and contacted with Au electrodes. The two contacts used for measuring the conductance are shown in the cartoon.}
\end{center}
\end{figure*}
The transport properties overall show very typical semiconducting behaviour, matching that found in similar devices based on natural MoS$_2$. We estimate the threshold voltage to be at a gate voltage of 10~V. The mobility estimated from the gating curve is 2~cm$^2$V$^{-1}$s$^{-1}$ (the device in the on-state has not reached saturation in the range of applied gate voltages, so this value is a lower estimate). These two values are similar to those found in devices with the same geometry based on natural MoS$_2$, and more recently in a similar field-effect transistor architecture featuring single-layer MoS$_2$ synthesised by chemical vapor deposition and transferred onto \textit{h}-BN.\cite{Joo,Joo_b} In all these works and ours, we stress that the Schottky barriers at the Au/MoS$_2$ junctions under the source and drain electrodes play a dominant role in the low mobility values obtained from two-probe measurements; in other words, the defects present in the MoS$_2$ channel are not what limits transport in this configuration.
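The mobility estimate quoted above follows the standard linear-regime two-probe formula, $\mu = [L / (W C_\mathrm{ox} V_\mathrm{ds})]\,\mathrm{d}I/\mathrm{d}V_\mathrm{g}$. A minimal Python sketch with hypothetical numbers (our device's actual dimensions and transconductance are not restated here; $C_\mathrm{ox} \approx 1.2\times 10^{-8}$~F/cm$^2$ corresponds to 285~nm of SiO$_2$, neglecting the thin \textit{h}-BN buffer):

```python
def two_probe_mobility(di_dvg, length_cm, width_cm, c_ox, v_ds):
    """Linear-regime two-probe field-effect mobility:
    mu = (L / (W * C_ox * V_ds)) * dI/dVg.
    di_dvg in A/V, lengths in cm, c_ox in F/cm^2, v_ds in V;
    returns mu in cm^2 V^-1 s^-1."""
    return (length_cm / (width_cm * c_ox * v_ds)) * di_dvg

# Hypothetical numbers (NOT our device's parameters): a square channel
# (L = W), C_ox = 1.2e-8 F/cm^2 for 285 nm SiO2, V_ds = 0.2 V.
mu = two_probe_mobility(4.8e-9, 1.0, 1.0, 1.2e-8, 0.2)
```

Because contact (Schottky) resistance is lumped into the channel in a two-probe measurement, this formula systematically underestimates the intrinsic mobility, consistent with the lower-estimate caveat above.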
\section*{Conclusions}
Using Raman spectroscopy, photoluminescence spectroscopy, scanning tunneling microscopy, density functional theory, and electronic transport measurements, we addressed the optoelectronic properties of MoS$_2$ single-layers prepared by exfoliation from two different sources of bulk material --- a natural one, and another one prepared under high-pressure and high-temperature (HP/HT) conditions. The latter preparation process opens the route to the control of the structure of MoS$_2$, in terms of intentional generation of otherwise inaccessible defects and possibly, in the future, in terms of superior quality as well, namely increased single-crystal size and lower defect (\textit{e.g.} vacancy) concentration, as was achieved with \textit{h}-BN.\cite{Watanabe} This holds promise for close-to-ideal supports for other two-dimensional materials\cite{Lu_c} and high-performance optoelectronic devices. Natural and HP/HT MoS$_2$ both show substantial electron-type doping, of the order of 10$^{12}$~cm$^{-2}$, which is stronger on SiO$_2$ substrates than on \textit{h}-BN, due to a lower amount of extrinsic charged impurities in the latter case. Additional defects are present in HP/HT MoS$_2$. We argue that they lead to defect-bound excitons, with a binding energy of a few tens of meV. We propose that these defects are nitrogen atoms substituting sulfur atoms. Exploring the nature of the localisation potential associated with the defects, and its effects on the coupling to the electromagnetic field, will provide valuable insights to understand light-matter coupling in transition metal dichalcogenides. In addition, the defect-bound exciton we discover may couple coherently with the neutral exciton. Stronger coupling and longer coherence times than reported in MoSe$_2$ at low temperature between excitons and trions\cite{Singh} might result from the weak trapping of the defect-bound exciton.
On a general note, our work also sheds light on the influence of defects on the optoelectronic properties of these two-dimensional materials and their interplay with internal and external (force, electric, optical) fields.
\section*{Materials and methods}
\textbf{Mechanical exfoliation.} Si/SiO$_2$ substrates with 285~nm-thick oxide were cleaved into 1~cm $\times$ 1~cm pieces. They were cleaned using acetone and isopropyl alcohol, followed by a dry nitrogen blow. The substrates were subjected to an oxygen plasma for 3 to 4 minutes. For mechanical exfoliation, small pieces of MoS$_2$ crystal were placed on the scotch tape, followed by repeated exfoliation. The tape was then stamped onto the Si/SiO$_2$ substrate and heated for 15-30~s at 80-100$^\circ$C on a hot plate. The tape was then gently removed from the substrate at an angle of 60 to 70$^\circ$. A similar process was followed on the PDMS substrate, without the application of heat. For the MoS$_2$/\textit{h}-BN heterostructures, a PDMS stamping method was used.\cite{Castellanos}
\textbf{Raman spectroscopy and photoluminescence measurements.} Raman and photoluminescence spectra were acquired with a 532~nm Nd:YAG laser using a commercial confocal WITEC spectrometer at room temperature under ambient conditions. The laser spot size was $\sim$1~$\mu$m. The signal was collected through a 50$\times$ objective with a numerical aperture of 0.75. For the Raman spectra, the power was kept at 300~$\mu$W to avoid damage due to laser-induced heating in the MoS$_2$ flakes. The signal was integrated for 2~s after being dispersed by a 1800~lines/mm grating. For photoluminescence measurements, a low power of 8~$\mu$W (see Supporting Information and Figure~S6) was used with a grating of 600~lines/mm. The photoluminescence spectra were taken with an integration time of 30~s to improve the signal-to-noise ratio, and the spatially-resolved photoluminescence maps were taken with an integration time of 5~s.
Optical images were acquired with a Zeiss microscope equipped with a digital camera ("Axiocam 105 color"), a tungsten halogen light source, and a magnification of 100$\times$. The white balance was set to the value expected for a halogen lamp at 3200~K.
To allow for fast determination of the number of layers of the flakes (fewer than four layers) across square-centimeter-scale surfaces, we determined the value of a representative optical quantity for flakes of known (from Raman spectroscopy) thickness. We chose the contrast of the RGB images with respect to the surrounding SiO$_2$ surface, in the red channel (red channel contrast, RCC), as the relevant quantity, defined as the difference between the signal from SiO$_2$ ($I^\mathrm{R}_\mathrm{silica}$) and from the flake ($I^\mathrm{R}_\mathrm{flake}$), normalised by $I^\mathrm{R}_\mathrm{silica}$: $RCC = (I^\mathrm{R}_\mathrm{silica}-I^\mathrm{R}_\mathrm{flake})/I^\mathrm{R}_\mathrm{silica}$.
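The RCC defined above reduces to one line of code; the intensity values in the example below are hypothetical, chosen only to reproduce the characteristic single-layer value:

```python
def red_channel_contrast(i_r_silica, i_r_flake):
    # RCC = (I_silica - I_flake) / I_silica, evaluated on the red channel
    # of the RGB optical image. Single-layer MoS2 gives RCC ~ 0.22 +/- 0.08.
    return (i_r_silica - i_r_flake) / i_r_silica

# Hypothetical 8-bit red-channel intensities: bare SiO2 at 200, flake at 156.
rcc = red_channel_contrast(200.0, 156.0)
```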
\textbf{Fabrication and measurement of MoS$_2$ FET device.} The substrate for FET was degenerately-doped silicon with 285~nm of SiO$_2$ on which \textit{h}-BN flakes were exfoliated. The single-layer MoS$_2$ flake was deterministically transferred on \textit{h}-BN using PDMS. Two-step electron-beam lithography followed by metal deposition was used next: in the first step 5~nm of Ti and 65~nm of Au were deposited for the outer pads, and in the second step, 100~nm of Au was used to contact MoS$_2$. The FET measurement was performed at room temperature under vacuum in a probe station.
\textbf{Scanning tunneling microscopy.} The 5-layer MoS$_2$ flake (typically 100$\times$80~$\mu$m$^2$) prepared from the HP/HT source was transferred by PDMS stamping onto graphene grown on a SiC substrate. The average graphene coverage on 6H-SiC(0001), as deduced from Auger electron spectroscopy and STM images, was between one and two layers.\cite{Mallet} Gold markers were evaporated on this substrate and served as alignment marks, further helping to locate the MoS$_2$ flake in the STM experiments.
STM measurements were performed in an ultra high vacuum (UHV) environment at 300~K using a home-made microscope. The samples were gently outgassed in UHV (typically at 300$^\circ$C for 1~h) before being loaded in the STM setup. The tips were made from mechanically-cut PtIr wires. The data were analysed using the \texttt{WsXM} software.\cite{Horcas}
\textbf{Density functional theory calculations.} Density functional theory calculations were carried out using the Vienna ab initio simulation package \texttt{VASP}, with the projector augmented wave (PAW) approach.\cite{Kresse,Kresse_b} The exchange-correlation interaction is treated within the generalized gradient approximation parametrized by Perdew, Burke and Ernzerhof (PBE)\cite{Perdrew}. Relaxation was performed with a 1$\times$1$\times$1 $k$-point sampling. The energy and forces were converged to 10$^{-4}$~eV and 0.01~eV/\AA, respectively. Supercells of size (6$\times$6) were used to limit the interaction between image defects associated with the use of periodic boundary conditions. To avoid interactions in the direction perpendicular to the plane of MoS$_2$, a 10~\AA-thick vacuum slab was used.
\textbf{Simulation of scanning tunneling microscopy images.} The DFT localized-orbital molecular-dynamics code as implemented in \texttt{FIREBALL}\cite{Lewis,Jelinek,Sankey} has been used for the structural relaxation of the different defects in MoS$_2$ considered for the STM image calculations. The \texttt{FIREBALL} simulation package uses a localised, optimised minimal basis set,\cite{Basanta} and the local density approximation (LDA) for the exchange and correlation energy following the McWEEDA methodology.\cite{Jelinek} We used a hexagonal (10$\times$10) unit cell for each simulation, in order to reduce the interactions between defects in neighboring cells associated with the periodic boundary conditions. The convergence of the system was achieved using a set of 8 $k$-points in the Brillouin zone, until the forces were lower than 0.05~eV/\AA. Theoretical simulations of the STM current between the metal tip (placed 4~\AA\, away from the surface) and the sample were based on the non-equilibrium Green's functions technique developed by Keldysh.\cite{Keldysh,Mingo} Within this methodology, the electronic current for an applied voltage $V_\textrm{b}$ at standard tunneling distances can be written as:\cite{Gonzalez}
\begin{equation} \label{stm}
I=\frac{4\pi e^{2}}{h} \int_{E_\mathrm{F}}^{E_\mathrm{F}+eV_\mathrm{b}}\textrm{Tr}[T_{TS}\rho_{SS}(\omega)T_{ST}\rho_{TT}(\omega-eV_\mathrm{b})]\,d\omega.
\end{equation}
\noindent where $E_\mathrm{F}$ is the Fermi level, here set at the bottom of the conduction band, $\rho_{TT}$ and $\rho_{SS}$ are the density matrices associated with the tip and sample subsystems, and $T_{TS/ST}$ is the tip-sample interaction (a detailed discussion can be found elsewhere\cite{Sanchez,Gonzalez}). The $\rho_{TT}$ and $\rho_{SS}$ matrices have been obtained using the Hamiltonian obtained after atomic relaxation. This methodology has already proved to give good results on MoS$_2$-based systems.\cite{Gonzalez} We stress that the relaxed structure and electronic density of states obtained with \texttt{FIREBALL} are in good agreement with those obtained using the \texttt{VASP} code also used in this work.
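To make the structure of this current formula concrete, a single-orbital caricature can be written down, in which the trace collapses to a product of scalar densities of states and the integral over the bias window is evaluated by the trapezoidal rule. This is an illustration only (prefactor dropped, no self-consistency), not the \texttt{FIREBALL}-based implementation used in this work:

```python
def stm_current_scalar(rho_s, rho_t, t2, e_f, ev_b, n=200):
    """Single-orbital caricature of the tunneling-current formula above:
    Tr[T rho_SS T rho_TT] reduces to t2 * rho_s(w) * rho_t(w - ev_b),
    integrated over the bias window [e_f, e_f + ev_b] with the
    trapezoidal rule. The physical prefactor 4*pi*e^2/h is dropped.
    rho_s, rho_t: callables returning scalar densities of states."""
    h = ev_b / n
    total = 0.0
    for i in range(n + 1):
        w = e_f + i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * t2 * rho_s(w) * rho_t(w - ev_b)
    return total * h
```

In the full calculation, $\rho_{SS}$ and $\rho_{TT}$ are matrices in the localized-orbital basis and the trace couples tip and sample orbitals through $T_{TS/ST}$; the scalar version only conveys how the current accumulates defect-state contributions within the bias window.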
\begin{suppinfo}
Supporting Information includes a discussion of the corrections of the optical spectroscopy data from optical interference effects, atomic force microscopy measurements, electron energy loss spectroscopy, power-dependent photoluminescence measurements, and DFT simulations of electronic density of states and STM images for various defects.
\end{suppinfo}
\section{Author Information}
*E-mail: johann.coraux@neel.cnrs.fr
\section{Associated Content}
The authors declare no competing financial interest.
\begin{acknowledgement}
This work was supported by the European Union H2020 Graphene Flagship program (grants no. 604391 and 696656) and the 2DTransformers project under OH-RISQUE program (ANR-14-OHRI-0004) and J2D (ANR-15-CE24-0017) and DIRACFORMAG (ANR-14-CE32-0003) projects of Agence Nationale de la Recherche (ANR). G.N. and V.B. thank support from CEFIPRA. The STEM (imaging and EELS) studies were conducted at the Laboratorio de Microscop\'{i}as Avanzadas, Instituto de Nanociencia de Arag\'{o}n, Universidad de Zaragoza, Spain. R.A. gratefully acknowledges the support from the Spanish Ministry of Economy and Competitiveness (MINECO) through project grant MAT2016-79776-P (AEI/FEDER, UE) and from the Government of Aragon and the European Social Fund under the project `Construyendo Europa desde Aragon' 2014-2020 (grant number E/26). We thank Jacek Kasprzak, Tomasz Jakubczyk, Maxime Richard and Le Si Dang for insightful discussions. C.G. acknowledges financial support from Spanish Ministry of Economy and Competitiveness, through the Mar\'{i}a de Maeztu Program for Units of Excellence in R\&D (Grant No. MDM-2014-0377).
\end{acknowledgement}
\newpage
\providecommand{\latin}[1]{#1}
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{81}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Wang \latin{et~al.}(2012)Wang, Kalantar-Zadeh, Kis, Coleman, and
Strano]{Wang}
Wang,~Q.~H.; Kalantar-Zadeh,~K.; Kis,~A.; Coleman,~J.~N.; Strano,~M.~S.
Electronics and Optoelectronics of Two-Dimensional Transition Metal
Dichalcogenides. \emph{Nat. Nanotechnol.} \textbf{2012}, \emph{7},
699--712\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Li and Galli(2007)Li, and Galli]{Li}
Li,~T.; Galli,~G. Electronic Properties of MoS$_2$ Nanoparticles. \emph{J.
Phys. Chem. C} \textbf{2007}, \emph{111}, 16192--16196\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mak \latin{et~al.}(2010)Mak, Lee, Hone, Shan, and Heinz]{Mak}
Mak,~K.~F.; Lee,~C.; Hone,~J.; Shan,~J.; Heinz,~T.~F. Atomically Thin MoS$_2$:
a New Direct-Gap Semiconductor. \emph{Phys. Rev. Lett.} \textbf{2010},
\emph{105}, 136805\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Splendiani \latin{et~al.}(2010)Splendiani, Sun, Zhang, Li, Kim, Chim,
Galli, and Wang]{Splendiani}
Splendiani,~A.; Sun,~L.; Zhang,~Y.; Li,~T.; Kim,~J.; Chim,~C.-Y.; Galli,~G.;
Wang,~F. Emerging Photoluminescence in Monolayer MoS$_2$. \emph{Nano Lett.}
\textbf{2010}, \emph{10}, 1271--1275\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sundaram \latin{et~al.}(2013)Sundaram, Engel, Lombardo, Krupke,
Ferrari, Avouris, and Steiner]{Sundaram}
Sundaram,~R.; Engel,~M.; Lombardo,~A.; Krupke,~R.; Ferrari,~A.; Avouris,~P.;
Steiner,~M. Electroluminescence in Single Layer MoS$_2$. \emph{Nano Lett.}
\textbf{2013}, \emph{13}, 1416--1421\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ugeda \latin{et~al.}(2014)Ugeda, Bradley, Shi, Felipe, Zhang, Qiu,
Ruan, Mo, Hussain, Shen, Wang, Louie, and Crommie]{Ugeda}
Ugeda,~M.~M.; Bradley,~A.~J.; Shi,~S.-F.; Felipe,~H.; Zhang,~Y.; Qiu,~D.~Y.;
Ruan,~W.; Mo,~S.-K.; Hussain,~Z.; Shen,~Z.-X.; Wang,~F.; Louie,~S.~G.;
Crommie,~M.~F. Giant Bandgap Renormalization and Excitonic Effects in a
Monolayer Transition Metal Dichalcogenide Semiconductor. \emph{Nat. Mater.}
\textbf{2014}, \emph{13}, 1091--1095\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kidd \latin{et~al.}(2016)Kidd, Zhang, and Varga]{Kidd}
Kidd,~D.~W.; Zhang,~D.~K.; Varga,~K. Binding Energies and Structures of
Two-Dimensional Excitonic Complexes in Transition Metal Dichalcogenides.
\emph{Phys. Rev. B} \textbf{2016}, \emph{93}, 125423\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Efimkin and MacDonald(2017)Efimkin, and MacDonald]{Efimkin}
Efimkin,~D.~K.; MacDonald,~A.~H. Many-Body Theory of Trion Absorption Features
in Two-Dimensional Semiconductors. \emph{Phys. Rev. B} \textbf{2017},
\emph{95}, 035417\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Poellmann \latin{et~al.}()Poellmann, Steinleitner, Leierseder, Nagler,
Plechinger, Porer, Bratschitsch, Sch{\"u}ller, Korn, and Huber]{Poellmann}
Poellmann,~C.; Steinleitner,~P.; Leierseder,~U.; Nagler,~P.; Plechinger,~G.;
Porer,~M.; Bratschitsch,~R.; Sch{\"u}ller,~C.; Korn,~T.; Huber,~R. Direct
Observation of Internal Quantum Transitions and Femtosecond Radiative Decay
of Excitons in Monolayer WSe$_2$. arXiv:1605.01164\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wang \latin{et~al.}(2015)Wang, Zhang, and Rana]{Wang_b}
Wang,~H.; Zhang,~C.; Rana,~F. Surface Recombination Limited Lifetimes of
Photoexcited Carriers in Few-Layer Transition Metal Dichalcogenide MoS$_2$.
\emph{Nano Lett.} \textbf{2015}, \emph{15}, 8204\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jakubczyk \latin{et~al.}(2016)Jakubczyk, Delmonte, Koperski,
Nogajewski, Faugeras, Langbein, Potemski, and Kasprzak]{Jakubczyk}
Jakubczyk,~T.; Delmonte,~V.; Koperski,~M.; Nogajewski,~K.; Faugeras,~C.;
Langbein,~W.; Potemski,~M.; Kasprzak,~J. Radiatively Limited Dephasing and
Exciton Dynamics in MoSe$_2$ Monolayers Revealed with Four-Wave Mixing
Microscopy. \emph{Nano Lett.} \textbf{2016}, \relax
\mciteBstWouldAddEndPunctfalse
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rivera \latin{et~al.}(2015)Rivera, Schaibley, Jones, Ross, Wu,
Aivazian, Klement, Seyler, Clark, Ghimire, Yan, Mandrus, Yao, and Xu]{Rivera}
Rivera,~P.; Schaibley,~J.~R.; Jones,~A.~M.; Ross,~J.~S.; Wu,~S.; Aivazian,~G.;
Klement,~P.; Seyler,~K.; Clark,~G.; Ghimire,~N.~J.; Yan,~J.; Mandrus,~D.~G.;
Yao,~W.; Xu,~X. Observation of Long-Lived Interlayer Excitons in Monolayer
MoSe$_2$--WSe$_2$ Heterostructures. \emph{Nat. Commun.} \textbf{2015},
\emph{6}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Palummo \latin{et~al.}(2015)Palummo, Bernardi, and Grossman]{Palummo}
Palummo,~M.; Bernardi,~M.; Grossman,~J.~C. Exciton Radiative Lifetimes in
Two-Dimensional Transition Metal Dichalcogenides. \emph{Nano Lett.}
\textbf{2015}, \emph{15}, 2794--2800\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Amani \latin{et~al.}(2015)Amani, Lien, Kiriya, Xiao, Azcatl, Noh,
Madhvapathy, Addou, Santosh, Dubey, Cho, Wallace, Lee, He, Ager~III, Zhang,
Yablonovitch, and Javey]{Amani}
Amani,~M.; Lien,~D.-H.; Kiriya,~D.; Xiao,~J.; Azcatl,~A.; Noh,~J.;
Madhvapathy,~S.~R.; Addou,~R.; Santosh,~K.; Dubey,~M.; Cho,~K.;
Wallace,~R.~M.; Lee,~S.-C.; He,~J.-H.; Ager~III,~J.~W.; Zhang,~X.;
Yablonovitch,~E.; Javey,~A. Near-Unity Photoluminescence Quantum Yield in
MoS$_2$. \emph{Science} \textbf{2015}, \emph{350}, 1065--1068\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Dolui \latin{et~al.}(2013)Dolui, Rungger, Pemmaraju, and
Sanvito]{Dolui}
Dolui,~K.; Rungger,~I.; Pemmaraju,~C.~D.; Sanvito,~S. Possible Doping
Strategies for MoS$_2$ Monolayers: An Ab Initio Study. \emph{Phys. Rev. B}
\textbf{2013}, \emph{88}, 075420\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kim \latin{et~al.}(2013)Kim, Park, Lee, Baek, Jeong, Choi, Chang,
Hong, Kim, Moon, Park, Park, and Jun]{Kim}
Kim,~B.~H.; Park,~M.; Lee,~M.; Baek,~S.~J.; Jeong,~H.~Y.; Choi,~M.;
Chang,~S.~J.; Hong,~W.~G.; Kim,~T.~K.; Moon,~H.~R.; Park,~Y.~W.; Park,~N.;
Jun,~Y. Effect of Sulphur Vacancy on Geometric and Electronic Structure of
MoS$_2$ Induced by Molecular Hydrogen Treatment at Room Temperature.
\emph{RSC Advances} \textbf{2013}, \emph{3}, 18424--18429\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Tongay \latin{et~al.}(2013)Tongay, Suh, Ataca, Fan, Luce, Kang, Liu,
Ko, Raghunathanan, Zhou, Ogletree, Li, Grossman, and Wu]{Tongay}
Tongay,~S.; Suh,~J.; Ataca,~C.; Fan,~W.; Luce,~A.; Kang,~J.~S.; Liu,~J.;
Ko,~C.; Raghunathanan,~R.; Zhou,~J.; Ogletree,~F.; Li,~J.; Grossman,~J.~C.;
Wu,~J. Defects Activated Photoluminescence in Two-Dimensional Semiconductors:
  Interplay Between Bound, Charged, and Free Excitons. \emph{Sci. Rep.}
\textbf{2013}, \emph{3}, 2657\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Qiu \latin{et~al.}(2013)Qiu, Xu, Wang, Ren, Nan, Ni, Chen, Yuan, Miao,
Song, Long, Shi, Litao, Wang, and Wang]{Qiu}
Qiu,~H.; Xu,~T.; Wang,~Z.; Ren,~W.; Nan,~H.; Ni,~Z.; Chen,~Q.; Yuan,~S.;
Miao,~F.; Song,~F.; Long,~G.; Shi,~Y.; Litao,~S.; Wang,~J.; Wang,~W. Hopping
Transport Through Defect-Induced Localized States in Molybdenum Disulphide.
\emph{Nat. Commun.} \textbf{2013}, \emph{4}, 2642\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lu \latin{et~al.}(2014)Lu, Li, Mao, Wang, and Andrei]{Lu}
Lu,~C.-P.; Li,~G.; Mao,~J.; Wang,~L.-M.; Andrei,~E.~Y. Bandgap, Mid-Gap States,
and Gating Effects in MoS$_2$. \emph{Nano Lett.} \textbf{2014}, \emph{14},
4628--4633\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[McDonnell \latin{et~al.}(2014)McDonnell, Addou, Buie, Wallace, and
Hinkle]{McDonnell}
McDonnell,~S.; Addou,~R.; Buie,~C.; Wallace,~R.~M.; Hinkle,~C.~L.
Defect-Dominated Doping and Contact Resistance in MoS$_2$. \emph{ACS Nano}
\textbf{2014}, \emph{8}, 2880--2888\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Addou \latin{et~al.}(2015)Addou, McDonnell, Barrera, Guo, Azcatl,
Wang, Zhu, Hinkle, Quevedo-Lopez, Alshareef, Colombo, Hsu, and
Wallace]{Addou}
Addou,~R.; McDonnell,~S.; Barrera,~D.; Guo,~Z.; Azcatl,~A.; Wang,~J.; Zhu,~H.;
Hinkle,~C.~L.; Quevedo-Lopez,~M.; Alshareef,~H.~N.; Colombo,~L.; Hsu,~J.
W.~P.; Wallace,~R.~M. Impurities and Electronic Property Variations of
Natural MoS$_2$ Crystal Surfaces. \emph{ACS Nano} \textbf{2015}, \emph{9},
9124--9133\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Noh \latin{et~al.}(2015)Noh, Kim, Park, and Kim]{Noh}
Noh,~J.-Y.; Kim,~H.; Park,~M.; Kim,~Y.-S. Deep-to-Shallow Level Transition of
Re and Nb Dopants in Monolayer MoS$_2$ with Dielectric Environments.
\emph{Phys. Rev. B} \textbf{2015}, \emph{92}, 115431\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Komsa and Krasheninnikov(2015)Komsa, and Krasheninnikov]{Komsa}
Komsa,~H.-P.; Krasheninnikov,~A.~V. Native Defects in Bulk and Monolayer
MoS$_2$ from First Principles. \emph{Phys. Rev. B} \textbf{2015}, \emph{91},
125304\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Gonz{\'a}lez \latin{et~al.}(2016)Gonz{\'a}lez, Biel, and
Dappe]{Gonzalez}
Gonz{\'a}lez,~C.; Biel,~B.; Dappe,~Y. Theoretical Characterisation of Point
Defects on A MoS$_2$ Monolayer by Scanning Tunnelling Microscopy.
\emph{Nanotechnology} \textbf{2016}, \emph{27}, 105702\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Late \latin{et~al.}(2012)Late, Liu, Matte, Dravid, and Rao]{Late}
Late,~D.~J.; Liu,~B.; Matte,~H.~R.; Dravid,~V.~P.; Rao,~C. Hysteresis in
Single-Layer MoS$_2$ Field Effect Transistors. \emph{ACS Nano} \textbf{2012},
\emph{6}, 5635--5641\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Li \latin{et~al.}(2013)Li, Wakabayashi, Xu, Nakaharai, Komatsu, Li,
Lin, Aparecido-Ferreira, and Tsukagoshi]{Li_b}
Li,~S.-L.; Wakabayashi,~K.; Xu,~Y.; Nakaharai,~S.; Komatsu,~K.; Li,~W.-W.;
Lin,~Y.-F.; Aparecido-Ferreira,~A.; Tsukagoshi,~K. Thickness-Dependent
Interfacial Coulomb Scattering in Atomically Thin Field-Effect Transistors.
\emph{Nano Lett.} \textbf{2013}, \emph{13}, 3546--3552\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lembke \latin{et~al.}(2015)Lembke, Allain, and Kis]{Lembke}
Lembke,~D.; Allain,~A.; Kis,~A. Thickness-Dependent Mobility in Two-Dimensional
MoS$_2$ Transistors. \emph{Nanoscale} \textbf{2015}, \emph{7},
6255--6260\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jariwala \latin{et~al.}(2013)Jariwala, Sangwan, Late, Johns, Dravid,
Marks, Lauhon, and Hersam]{Jariwala}
Jariwala,~D.; Sangwan,~V.~K.; Late,~D.~J.; Johns,~J.~E.; Dravid,~V.~P.;
Marks,~T.~J.; Lauhon,~L.~J.; Hersam,~M.~C. Band-Like Transport in High
Mobility Unencapsulated Single-Layer MoS$_2$ Transistors. \emph{Appl. Phys.
Lett.} \textbf{2013}, \emph{102}, 173107\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kaasbjerg \latin{et~al.}(2012)Kaasbjerg, Thygesen, and
Jacobsen]{Kaasbjerg}
Kaasbjerg,~K.; Thygesen,~K.~S.; Jacobsen,~K.~W. Phonon-Limited Mobility in
n-Type Single-Layer MoS$_2$ from First Principles. \emph{Phys. Rev. B}
\textbf{2012}, \emph{85}, 115317\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Novoselov \latin{et~al.}(2005)Novoselov, Jiang, Schedin, Booth,
Khotkevich, Morozov, and Geim]{Novoselov}
Novoselov,~K.; Jiang,~D.; Schedin,~F.; Booth,~T.; Khotkevich,~V.; Morozov,~S.;
  Geim,~A. Two-Dimensional Atomic Crystals. \emph{Proc. Natl. Acad. Sci.}
\textbf{2005}, \emph{102}, 10451--10453\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Castellanos-Gomez \latin{et~al.}(2014)Castellanos-Gomez, Buscema,
Molenaar, Singh, Janssen, van~der Zant, and Steele]{Castellanos}
Castellanos-Gomez,~A.; Buscema,~M.; Molenaar,~R.; Singh,~V.; Janssen,~L.;
van~der Zant,~H.~S.; Steele,~G.~A. Deterministic Transfer of Two-Dimensional
Materials by All-Dry Viscoelastic Stamping. \emph{2D Mater.} \textbf{2014},
\emph{1}, 011002\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Castellanos-Gomez \latin{et~al.}(2010)Castellanos-Gomez, Agra{\"\i}t,
and Rubio-Bollinger]{Castellanos_b}
Castellanos-Gomez,~A.; Agra{\"\i}t,~N.; Rubio-Bollinger,~G. Optical
Identification of Atomically Thin Dichalcogenide Crystals. \emph{Appl. Phys.
Lett.} \textbf{2010}, \emph{96}, 213116\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Benameur \latin{et~al.}(2011)Benameur, Radisavljevic, Heron, Sahoo,
Berger, and Kis]{Benameur}
Benameur,~M.; Radisavljevic,~B.; Heron,~J.; Sahoo,~S.; Berger,~H.; Kis,~A.
Visibility of Dichalcogenide Nanolayers. \emph{Nanotechnology} \textbf{2011},
\emph{22}, 125706\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Plechinger \latin{et~al.}(2012)Plechinger, Heydrich, Eroms, Weiss,
Sch{\"u}ller, and Korn]{Plechinger}
Plechinger,~G.; Heydrich,~S.; Eroms,~J.; Weiss,~D.; Sch{\"u}ller,~C.; Korn,~T.
Raman Spectroscopy of the Interlayer Shear Mode in Few-Layer MoS$_2$ Flakes.
\emph{Appl. Phys. Lett.} \textbf{2012}, \emph{101}, 101906\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zhao \latin{et~al.}(2013)Zhao, Luo, Li, Zhang, Araujo, Gan, Wu, Zhang,
Quek, Dresselhaus, and Xiong]{Zhao}
Zhao,~Y.; Luo,~X.; Li,~H.; Zhang,~J.; Araujo,~P.~T.; Gan,~C.~K.; Wu,~J.;
Zhang,~H.; Quek,~S.~Y.; Dresselhaus,~M.~S.; Xiong,~Q. Interlayer Breathing
and Shear Modes in Few-Trilayer MoS$_2$ and WSe$_2$. \emph{Nano Lett.}
\textbf{2013}, \emph{13}, 1007--1015\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Chakraborty \latin{et~al.}(2012)Chakraborty, Bera, Muthu, Bhowmick,
Waghmare, and Sood]{Chakraborty}
Chakraborty,~B.; Bera,~A.; Muthu,~D.; Bhowmick,~S.; Waghmare,~U.~V.; Sood,~A.
Symmetry-Dependent Phonon Renormalization in Monolayer MoS$_2$ Transistor.
\emph{Phys. Rev. B} \textbf{2012}, \emph{85}, 161403\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Conley \latin{et~al.}(2013)Conley, Wang, Ziegler, Haglund~Jr,
Pantelides, and Bolotin]{Conley}
Conley,~H.~J.; Wang,~B.; Ziegler,~J.~I.; Haglund~Jr,~R.~F.; Pantelides,~S.~T.;
Bolotin,~K.~I. Bandgap Engineering of Strained Monolayer and Bilayer MoS$_2$.
\emph{Nano Lett.} \textbf{2013}, \emph{13}, 3626--3630\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Castellanos-Gomez \latin{et~al.}(2013)Castellanos-Gomez, Rold{\'a}n,
Cappelluti, Buscema, Guinea, van~der Zant, and Steele]{Castellanos_c}
Castellanos-Gomez,~A.; Rold{\'a}n,~R.; Cappelluti,~E.; Buscema,~M.; Guinea,~F.;
van~der Zant,~H.~S.; Steele,~G.~A. Local Strain Engineering in Atomically
Thin MoS$_2$. \emph{Nano Lett.} \textbf{2013}, \emph{13}, 5361--5366\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Parkin \latin{et~al.}(2016)Parkin, Balan, Liang, Das, Lamparski,
Naylor, Rodr{\'\i}guez-Manzo, Johnson, Meunier, and Drndic]{Parkin}
Parkin,~W.~M.; Balan,~A.; Liang,~L.; Das,~P.~M.; Lamparski,~M.; Naylor,~C.~H.;
Rodr{\'\i}guez-Manzo,~J.~A.; Johnson,~A.~C.; Meunier,~V.; Drndic,~M. Raman
Shifts in Electron-Irradiated Monolayer MoS$_2$. \emph{ACS Nano}
\textbf{2016}, \emph{10}, 4134--4142\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Michail \latin{et~al.}(2016)Michail, Delikoukos, Parthenios, Galiotis,
and Papagelis]{Michail}
Michail,~A.; Delikoukos,~N.; Parthenios,~J.; Galiotis,~C.; Papagelis,~K.
Optical Detection of Strain and Doping Inhomogeneities in Single Layer
MoS$_2$. \emph{Appl. Phys. Lett.} \textbf{2016}, \emph{108}, 173102\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lu and Leburton(2014)Lu, and Leburton]{Lu_b}
Lu,~S.-C.; Leburton,~J.-P. Electronic Structures of Defects and Magnetic
Impurities in MoS$_2$ Monolayers. \emph{Nanoscale Res. Lett.} \textbf{2014},
\emph{9}, 676\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bao \latin{et~al.}(2013)Bao, Cai, Kim, Sridhara, and Fuhrer]{Bao}
Bao,~W.; Cai,~X.; Kim,~D.; Sridhara,~K.; Fuhrer,~M.~S. High Mobility Ambipolar
MoS$_2$ Field-Effect Transistors: Substrate and Dielectric Effects.
\emph{Appl. Phys. Lett.} \textbf{2013}, \emph{102}, 042104\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kretinin \latin{et~al.}(2014)Kretinin, Cao, Tu, Yu, Jalil, Novoselov,
Haigh, Gholinia, Mishchenko, Lozada, Georgiou, Woods, Withers, Blake, Eda,
Wirsig, Hucho, Watanabe, Taniguchi, Geim, and Gorbachev]{Kretinin}
Kretinin,~A.; Cao,~Y.; Tu,~J.; Yu,~G.; Jalil,~R.; Novoselov,~K.; Haigh,~S.;
Gholinia,~A.; Mishchenko,~A.; Lozada,~M.; Georgiou,~T.; Woods,~C.;
Withers,~F.; Blake,~P.; Eda,~G.; Wirsig,~A.; Hucho,~C.; Watanabe,~K.;
Taniguchi,~T.; Geim,~A. \latin{et~al.} Electronic Properties of Graphene
Encapsulated with Different Two-Dimensional Atomic Crystals. \emph{Nano
Lett.} \textbf{2014}, \emph{14}, 3270--3276\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Buscema \latin{et~al.}(2014)Buscema, Steele, van~der Zant, and
Castellanos-Gomez]{Buscema}
Buscema,~M.; Steele,~G.~A.; van~der Zant,~H.~S.; Castellanos-Gomez,~A. The
Effect of the Substrate on the Raman and Photoluminescence Emission of
  Single-Layer MoS$_2$. \emph{Nano Research} \textbf{2014}, \emph{7},
1--11\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mak \latin{et~al.}(2013)Mak, He, Lee, Lee, Hone, Heinz, and
Shan]{Mak_b}
Mak,~K.~F.; He,~K.; Lee,~C.; Lee,~G.~H.; Hone,~J.; Heinz,~T.~F.; Shan,~J.
Tightly Bound Trions in Monolayer MoS$_2$. \emph{Nat. Mater.} \textbf{2013},
\emph{12}, 207--211\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mouri \latin{et~al.}(2013)Mouri, Miyauchi, and Matsuda]{Mouri}
Mouri,~S.; Miyauchi,~Y.; Matsuda,~K. Tunable Photoluminescence of Monolayer
MoS$_2$ \textit{via} Chemical Doping. \emph{Nano Lett.} \textbf{2013},
\emph{13}, 5944--5948\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Nan \latin{et~al.}(2014)Nan, Wang, Wang, Liang, Lu, Chen, He, Tan,
Miao, Wang, and Ni]{Nan}
Nan,~H.; Wang,~Z.; Wang,~W.; Liang,~Z.; Lu,~Y.; Chen,~Q.; He,~D.; Tan,~P.;
Miao,~F.; Wang,~X.; Ni,~Z. Strong Photoluminescence Enhancement of MoS$_2$
Through Defect Engineering and Oxygen Bonding. \emph{ACS Nano} \textbf{2014},
\emph{8}, 5738\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Pei \latin{et~al.}(2015)Pei, Yang, Xu, Zeng, Myint, Zhang, Zheng, Qin,
Wang, Jiang, and Lu]{Pei}
Pei,~J.; Yang,~J.; Xu,~R.; Zeng,~Y.-H.; Myint,~Y.~W.; Zhang,~S.; Zheng,~J.-C.;
Qin,~Q.; Wang,~X.; Jiang,~W.; Lu,~Y. Exciton and Trion Dynamics in Bilayer
MoS$_2$. \emph{Small} \textbf{2015}, \emph{11}, 6384--6390\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Steinhoff \latin{et~al.}(2015)Steinhoff, Kim, Jahnke, Rosner, Kim,
Lee, Han, Jeong, Wehling, and Gies]{Steinhoff}
Steinhoff,~A.; Kim,~J.-H.; Jahnke,~F.; Rosner,~M.; Kim,~D.-S.; Lee,~C.;
Han,~G.; Jeong,~M.; Wehling,~T.; Gies,~C. Efficient Excitonic
Photoluminescence in Direct and Indirect Band Gap Monolayer MoS$_2$.
\emph{Nano Lett.} \textbf{2015}, \emph{15}, 6841--6847\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Dey \latin{et~al.}(2016)Dey, Paul, Wang, Stevens, Liu, Romero, Shan,
Hilton, and Karaiskaj]{Dey}
Dey,~P.; Paul,~J.; Wang,~Z.; Stevens,~C.; Liu,~C.; Romero,~A.; Shan,~J.;
Hilton,~D.; Karaiskaj,~D. Optical Coherence in Atomic-Monolayer
Transition-Metal Dichalcogenides Limited by Electron-Phonon Interactions.
\emph{Phys. Rev. Lett.} \textbf{2016}, \emph{116}, 127402\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sercombe \latin{et~al.}(2012)Sercombe, Schwarz, Liu, Robinson,
Chekhovich, Tartakovskii, Kolosov, and Tartakovskii]{Sercombe}
Sercombe,~D.; Schwarz,~S.; Liu,~F.; Robinson,~B.; Chekhovich,~E.;
Tartakovskii,~I.; Kolosov,~O.; Tartakovskii,~A. Optical Investigation of the
Natural Electron Doping in Thin MoS$_2$ Films Deposited on Dielectric
  Substrates. \emph{Sci. Rep.} \textbf{2012}, \emph{3}, 3489\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wang \latin{et~al.}(2014)Wang, Kutana, and Yakobson]{Wang_c}
Wang,~L.; Kutana,~A.; Yakobson,~B.~I. Many-Body and Spin-Orbit Effects on
Direct-Indirect Band Gap Transition of Strained Monolayer MoS$_2$ and WS$_2$.
  \emph{Ann. Phys.} \textbf{2014}, \emph{526}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[not()]{note_on_components}
The corresponding component is not the A$^-$ found in natural MoS$_2$. In fact,
  such a component is also expected (the main excitonic feature would hence
  consist of three components), but it appears to be negligible in the case of
  HP/HT MoS$_2$.\relax
\mciteBstWouldAddEndPunctfalse
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Chow \latin{et~al.}(2015)Chow, Jacobs-Gedrim, Gao, Lu, Yu, Terrones,
and Koratkar]{Chow}
Chow,~P.~K.; Jacobs-Gedrim,~R.~B.; Gao,~J.; Lu,~T.-M.; Yu,~B.; Terrones,~H.;
Koratkar,~N. Defect-Induced Photoluminescence in Monolayer Semiconducting
Transition Metal Dichalcogenides. \emph{ACS Nano} \textbf{2015}, \emph{9},
1520--1527\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Abe \latin{et~al.}(1995)Abe, Kataoka, Ueno, and Koma]{Abe}
Abe,~H.; Kataoka,~K.; Ueno,~K.; Koma,~A. Scanning Tunneling Microscope
  Observation of the Metal-Adsorbed Layered Semiconductor Surfaces. \emph{Jpn.
  J. Appl. Phys.} \textbf{1995}, \emph{34}, 3342\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Murata \latin{et~al.}(2001)Murata, Kataoka, and Koma]{Murata}
Murata,~H.; Kataoka,~K.; Koma,~A. Scanning Tunneling Microscope Images of
Locally Modulated Structures in Layered Materials, {MoS}$_2$ (0001) and
{MoSe}$_2$ (0001), Induced by Impurity Atoms. \emph{Surf. Sci.}
\textbf{2001}, \emph{478}, 131--144\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Park \latin{et~al.}(2005)Park, France, and Parkinson]{Park}
Park,~J.; France,~C.~B.; Parkinson,~B. Scanning Tunneling Microscopy
  Investigation of Nanostructures Produced by Ar$^+$ and He$^+$ Bombardment of
MoS$_2$ Surfaces. \emph{J. Vac. Sci. Technol. B} \textbf{2005}, \emph{23},
1532--1542\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Addou \latin{et~al.}(2015)Addou, Colombo, and Wallace]{Addou_b}
Addou,~R.; Colombo,~L.; Wallace,~R.~M. Surface Defects on Natural {MoS$_2$}.
\emph{ACS Appl. Mater. Interfaces} \textbf{2015}, \emph{7},
11921--11929\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Huang \latin{et~al.}(2015)Huang, Chen, Zhang, Quek, Chen, Li, Hsu,
Chang, Zheng, Chen, and Wee]{Huang}
Huang,~Y.~L.; Chen,~Y.; Zhang,~W.; Quek,~S.~Y.; Chen,~C.-H.; Li,~L.-J.;
Hsu,~W.-T.; Chang,~W.-H.; Zheng,~Y.~J.; Chen,~W.; Wee,~A. T.~S. Bandgap
Tunability at Single-Layer Molybdenum Disulphide Grain Boundaries. \emph{Nat.
Commun.} \textbf{2015}, \emph{6}, 6298\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zhou \latin{et~al.}(2016)Zhou, Kang, Xie, Dadgar, Monahan, Zhu, Park,
and Pasupathy]{Zhou}
Zhou,~X.; Kang,~K.; Xie,~S.; Dadgar,~A.; Monahan,~N.~R.; Zhu,~X.-Y.; Park,~J.;
Pasupathy,~A.~N. Atomic-Scale Spectroscopy of Gated Monolayer MoS$_2$.
\emph{Nano Lett.} \textbf{2016}, \emph{16}, 3148--3154\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Magda \latin{et~al.}(2015)Magda, Pet{\H{o}}, Dobrik, Hwang, Bir{\'o},
and Tapaszt{\'o}]{Magda}
Magda,~G.~Z.; Pet{\H{o}},~J.; Dobrik,~G.; Hwang,~C.; Bir{\'o},~L.~P.;
Tapaszt{\'o},~L. Exfoliation of Large-Area Transition Metal Chalcogenide
Single Layers. \emph{Sci. Rep.} \textbf{2015}, \emph{5}, 14714\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Vancs{\'o} \latin{et~al.}(2016)Vancs{\'o}, Magda, Pet{\H{o}}, Noh,
Kim, Hwang, Bir{\'o}, and Tapaszt{\'o}]{Vancso}
Vancs{\'o},~P.; Magda,~G.~Z.; Pet{\H{o}},~J.; Noh,~J.-Y.; Kim,~Y.-S.;
\end{mcitethebibliography}
\end{document}
\section*{AFFILIATIONS}
\label{sec:affiliations}
$^{1}$ Centro de Investigaciones Energ\'eticas, Medioambientales y Tecnol\'ogicas (CIEMAT), Madrid, Spain\\
$^{2}$ Max Planck Institute for Extraterrestrial Physics, Giessenbachstrasse, 85748 Garching, Germany\\
$^{3}$ Universit\"ats-Sternwarte, Fakult\"at f\"ur Physik, Ludwig-Maximilians Universit\"at M\"unchen, Scheinerstr. 1, 81679 M\"unchen, Germany\\
$^{4}$ Department of Physics \& Astronomy, University College London, Gower Street, London, WC1E 6BT, UK\\
$^{5}$ Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot 76100, Israel\\
$^{6}$ LSST, 933 North Cherry Avenue, Tucson, AZ 85721, USA\\
$^{7}$ Fermi National Accelerator Laboratory, P. O. Box 500, Batavia, IL 60510, USA\\
$^{8}$ Department of Physics and Electronics, Rhodes University, PO Box 94, Grahamstown, 6140, South Africa\\
$^{9}$ Institut de F\'{\i}sica d'Altes Energies (IFAE), The Barcelona Institute of Science and Technology, Campus UAB, 08193 Bellaterra (Barcelona), Spain \\
$^{10}$Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637, USA\\
$^{11}$ Department of Physics, University of Surrey, Guildford GU2 7XH, UK\\
$^{12}$ Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK\\
$^{13}$ Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK\\
$^{14}$ CNRS, UMR 7095, Institut d'Astrophysique de Paris, F-75014, Paris, France\\
$^{15}$ Sorbonne Universit\'es, UPMC Univ Paris 06, UMR 7095, Institut d'Astrophysique de Paris, F-75014, Paris, France\\
$^{16}$ Department of Astronomy, University of Illinois at Urbana-Champaign, 1002 W. Green Street, Urbana, IL 61801, USA\\
$^{17}$ National Center for Supercomputing Applications, 1205 West Clark St., Urbana, IL 61801, USA\\
$^{18}$ Center for Cosmology and Astro-Particle Physics, The Ohio State University, Columbus, OH 43210, USA\\
$^{19}$ Kavli Institute for Particle Astrophysics \& Cosmology, P. O. Box 2450, Stanford University, Stanford, CA 94305, USA\\
$^{20}$ SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA\\
$^{21}$Instituto de F\'\i sica, UFRGS, Caixa Postal 15051, Porto Alegre, RS - 91501-970, Brazil\\
$^{22}$Laborat\'orio Interinstitucional de e-Astronomia - LIneA, Rua Gal. Jos\'e Cristino 77, Rio de Janeiro, RJ - 20921-400, Brazil\\
$^{23}$Brookhaven National Laboratory, Bldg 510, Upton, NY 11973, USA\\
$^{24}$Department of Physics, University of Chicago, Chicago, Illinois 60637, USA\\
$^{25}$Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena, Chile\\
$^{26}$Observat\'orio Nacional, Rua Gal. Jos\'e Cristino 77, Rio de Janeiro, RJ - 20921-400, Brazil\\
$^{27}$Department of Physics, IIT Hyderabad, Kandi, Telangana 502285, India\\
$^{28}$Instituto de Fisica Teorica UAM/CSIC, Universidad Autonoma de Madrid, 28049 Madrid, Spain\\
$^{29}$Institut d'Estudis Espacials de Catalunya (IEEC), 08193 Barcelona, Spain\\
$^{30}$Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans, s/n, 08193 Barcelona, Spain\\
$^{31}$Santa Cruz Institute for Particle Physics, Santa Cruz, CA 95064, USA\\
$^{32}$Department of Physics, The Ohio State University, Columbus, OH 43210, USA\\
$^{33}$Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138, USA\\
$^{34}$Department of Astronomy/Steward Observatory, 933 North Cherry Avenue, Tucson, AZ 85721-0065, USA\\
$^{35}$Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., Pasadena, CA 91109, USA\\
$^{36}$Australian Astronomical Observatory, North Ryde, NSW 2113, Australia\\
$^{37}$Departamento de F\'isica Matem\'atica, Instituto de F\'isica, Universidade de S\~ao Paulo, CP 66318, S\~ao Paulo, SP, 05314-970, Brazil\\
$^{38}$Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104, USA\\
$^{39}$Instituci\'o Catalana de Recerca i Estudis Avan\c{c}ats, E-08010 Barcelona, Spain\\
$^{40}$Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA\\
$^{41}$School of Physics and Astronomy, University of Southampton, Southampton, SO17 1BJ, UK\\
$^{42}$Brandeis University, Physics Department, 415 South Street, Waltham MA 02453, USA\\
$^{43}$Instituto de F\'isica Gleb Wataghin, Universidade Estadual de Campinas, 13083-859, Campinas, SP, Brazil\\
$^{44}$Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA \\
$^{45}$Institute of Cosmology \& Gravitation, University of Portsmouth, Portsmouth, PO1 3FX, UK\\
\section{Introduction}
Accurate classification of astrophysical sources is essential for interpreting photometric surveys. Specifically, separating foreground stars from background galaxies is important for many astronomical research topics, from Galactic science to cosmology. Conventional morphological classification techniques separate point sources (mostly stars) from resolved sources (galaxies) using selections in magnitude-radius space or similar variables \citep{macg76, kron80, heyd89, yee91}. For bright sources, morphology has proven to be a sufficient metric for classification. In this regime, for weak lensing applications, a very pure, but also abundant, star sample is vital for deriving the correct point spread function in the images, which is later used to infer cosmic shear \citep{soumagnac,jarvis,zuntz}. At fainter magnitudes, unresolved galaxies begin to contaminate catalogues of point-like sources and noisy measurements of stars contaminate the galaxy sample. Blended sources also become an issue, because distant and/or faint sources start to merge into single detected objects with spurious shapes. Mis-classification of stars and galaxies at faint magnitudes can introduce spurious correlations in galaxy surveys \citep{ross} and will hamper the study of stellar distributions \citep{y2satellites}.
The advent of CCD detectors provided larger, more reliable data sets which became an obvious target for machine learning classification algorithms \citep[e.g.,][]{odewahn,sextractor,bdts,mlexplore,kimdcnn}. In addition, many large, multi-band imaging surveys \NEW{use morphology \citep[such as for SDSS,][]{stoughton} and/or} have incorporated colour information into their classifiers (see \citealt{ball} for SDSS \NEW{as well}, \citealt{hildebrandt} for CFHTLS or \citealt{panstarrs} for Pan-STARRS). Adopting a Bayesian approach to incorporate fits to stellar and galaxy templates has been shown to be a promising avenue \citep{fadely}, as well as the use of infrared data to complement the optical band observations \citep{malek,kovacs,banerji}.
In this paper we test different strategies for classifying objects as point-like or extended sources in the Dark Energy Survey (DES) Year 1 data (Y1). We subsequently analyze the impact on two broad science cases and discuss possible developments to improve object classification in future analyses of these data. Throughout this paper, `extended' will be used as a synonym for `galaxy', whereas `point-like' includes both stars and quasi-stellar objects (QSOs) to first approximation, and we will collectively call them `stars' in this work. For the case studies considered here and the general catalogue, the contamination of QSOs in the large-scale stellar and galactic catalogues is not deemed important. However, a good star-QSO separation is needed for quasar science, as studied in detail in \citet{tiess} for DES data.
After a description of the dataset in Section \ref{sec:dataset} and the classifiers we are considering here in Section \ref{sec:classifiers}, we compare the classifiers in calibration fields (Section \ref{sec:calib}) and then analyze the response in the complete Y1 dataset for a few selected ones (Section \ref{sec:y1valid}). Then we study the impact on large-scale structure and Milky Way studies (Section \ref{sec:discussion}). Finally, Section \ref{sec:conclusions} presents the conclusions and discusses possible additional developments.
\section{Dark Energy Survey datasets}
\label{sec:dataset}
The DES consists of a 5000 square-degree ``wide'' survey using the \textit{grizY} photometric bands to AB 10$\sigma$ magnitude limits of (24.6, 24.4, 23.7, 22.7, 21.5), respectively, for 2 arc-second apertures, together with a ${\sim}27$ square degree supernova survey observed in the \textit{griz} bands with an approximately weekly cadence. In February 2018, the project completed the original five planned observing seasons (Years 1 through 5, Y1-Y5). Additional science-quality data was collected during an earlier Science Verification (SV) season. The core goal of DES is a multi-probe study of dark energy at different cosmological epochs using the same DECam instrument \citep{decam} and DES Data Management (DESDM) pipeline \citep{desdm}, as showcased with its first results in \citet{desy1cosmo}. However, the richness of this dataset allows astronomers and cosmologists to go beyond this initial objective \citep{morethande}.
For this study, we use the subset of highest quality data from DES SV\footnote{\url{https://des.ncsa.illinois.edu/releases/sva1}} and Y1\NEW{\footnote{\url{https://des.ncsa.illinois.edu/releases/y1a1}}} \citep{y1gold}, comprising the ``Gold'' catalogue.
We note the following features that are relevant for the present study:
\begin{itemize}
\item The object catalogues are obtained by applying \texttt{SExtractor} \citep{sextractor} to coadded images with typically 2 to 4 overlapping exposures in each band in the case of Y1, or $\sim 10$ for SV data, with object detection performed on a combined \textit{riz} image.
\item \texttt{SExtractor} magnitudes have been calibrated through a global calibration module \citep{tucker} and subsequently adjusted through a fit to the stellar locus \citep{slr} anchored to the \textit{i} band\footnote{This calibration approach was eventually superseded in Y3 data products by the Forward Global Photometric Calibration approach described in \cite{fgcm}.}. This procedure also corrects for Galactic extinction. In general, \texttt{MAG\_AUTO} is used for photometry (for binning purposes and as inputs for the template based method described below), as it behaves more robustly for these coadded catalogues. \texttt{MAG\_MODEL}, \texttt{MAG\_DETMODEL}\footnote{In this case the exponential model used in \texttt{SExtractor} is fitted on the detection image and scaled in the measurement images of each band.} and \texttt{MAG\_PSF} are used as inputs for the machine learning methods as well. \NEW{Shape measurements in this code include \texttt{FLUX\_RADIUS}, \texttt{CLASS\_STAR} and \texttt{SPREAD\_MODEL}, some of which will be specifically studied here.}
\item In addition, a multi-object, multi-epoch fitting pipeline (\texttt{MOF}) has been run on the single-epoch image counterparts for each coadd catalogue detection to obtain more precise photometric measurements for the objects. \NEW{It simultaneously fits a Gaussian mixture model to} the individual images, also modelling light from nearby neighbours for each object (more details in \citealt{y1gold}). \NEW{The main flux measurements used for the methods described here are the fluxes using this composite Gaussian mixture model (\texttt{CM\_MAG}) and the PSF magnitudes derived from the same \texttt{MOF} pipeline (\texttt{PSF\_MAG}). \texttt{CM\_T} is a size estimator from the code before PSF convolution, which will be studied here in detail.}
\item All objects are required to be in areas for which there is at least one exposure in each of the \textit{griz} bands.
\end{itemize}
We define two distinct regions in which we will perform our tests:
\begin{enumerate}
\item A \textbf{calibration field}: defined by those areas that overlap external datasets that we can use to train, validate and test our methods. These are the supernova (SN) fields from the DES SN survey, which overlap specific spectroscopic surveys and miscellaneous Hubble Space Telescope (HST) datasets; and the area of the survey overlapping the Sloan Digital Sky Survey \citep[SDSS;][]{sdss} Stripe 82 region \citep{sloansn}. In addition, the COSMOS field\footnote{\url{http://cosmos.astro.caltech.edu/}} has been imaged with DECam, providing a very useful dataset given the richness of multi-band imaging and spectroscopy available. Table \ref{tab:external_datasets} summarises the numbers of objects matched to various external datasets (details in Section \ref{sec:train_test_fields}).
Some of these fields have a large number of DES exposures, due to their application for SN searches, so special coadds were made from $\sim4$ exposures in each band in order to resemble the Y1 depth. The selection of these exposures was made so that their coaddition would provide similar characteristics in terms of sky brightness and seeing as the wide survey coadds \citep{y1gold,neilsen}. This procedure is not needed in forthcoming releases as the wide survey extends to cover the supernova regions.
\item An \textbf{application field}: the remaining area of the DES footprint for which suitable external datasets for training are not presently available. This includes the so-called `SPT' region due to the overlap with the South Pole Telescope\footnote{\url{https://pole.uchicago.edu/}} \citep{spt} observations, in which we can make some quality assessment as well, though limited by the lack of external references.
\end{enumerate}
\begin{table*}
\centering
\caption{External datasets used in this work. \NEW{SDSS-stripe 82 data shows two numbers, corresponding to simultaneous 2MASS and WISE matches and to VHS matches, respectively. More details are provided in Appendix \ref{sec:external_datasets}}.}
\label{tab:external_datasets}
\begin{tabular}{ccccc}
\hline
Catalogue & Type & Usage in this work & Nb. matched objects & Reference \\
\hline
ACS-COSMOS & Space optical imaging & Truth table & 116017 & \citet{leauthaud} \\
Hubble-SC & Space optical imaging & Truth table & 12927 & \citet{hsc} \\
SDSS-stripe 82 & Ground optical spectroscopy & Truth table & 18984/46700 & \citet{sdssdr13} \\
VVDS & Ground optical spectroscopy & Truth table & 4442 & \citet{vvds} \\
WISE & Space NIR imaging & Complementary data & 18984 & \citet{wise} \\
2MASS & Ground NIR imaging & Complementary data & 18984 & \citet{2mass} \\
VHS & Ground NIR imaging & Complementary data & 46700 & \citet{vhs} \\
\hline
\end{tabular}
\end{table*}
\section{Description of the object classifiers}
\label{sec:classifiers}
\begin{table*}
\centering
\caption{Summary of classification methods. \NEW{Type of data denotes whether measurements or direct pixel data are used, and in the first case whether it is based on morphological and/or flux measurements. The specific algorithmic approach is named in the third column.}}
\label{tab:classifiers}
\begin{tabular}{ccc}
\hline
Name & Type of data used & Algorithm \\
\hline
CLASS\_STAR & Isophotal level measurements, morphological & Neural Network \\
SPREAD\_MODEL & Pixel-level & Normalised Linear Discriminant \\
CM\_T & Measurements on fitted shape, morphological & Second moments of Gaussian mixture fit (object)\\
MCAL\_RATIO & Measurements on fitted shape, morphological & Second moments of Gaussian mixture fit (noisified object and PSF) \\
ADA\_PROB & \NEW{\makecell{Most discriminating features \\ from a combination of simple functions \\ used over all catalogue columns.}} & Boosted Decision Trees \\
GALSIFT\_PROB & All catalogue columns (PCA) & Random Forests \\
SVM & \makecell{\NEW{\texttt{MAG\_AUTO},\texttt{FLUX\_RADIUS},\texttt{SPREAD\_MODEL}}, \\ \NEW{flux and morphology}} & Support Vector Machine \\
CONCENTRATION & Catalogue information, morphological & Direct subtraction of magnitudes measured with model and PSF\\
W1-J, J-K & Catalogue information, fluxes & Colour cut \\
HB\_PROB & Catalogue information, fluxes & Template fitting of \NEW{spectral energy distributions} \\
\hline
\end{tabular}
\end{table*}
Table \ref{tab:classifiers} summarises the methods explored in this paper to perform object classification. These include a variety of algorithms using machine learning methods (training on morphological and/or colour information), pixel-level flux measurements and template-fitting. For the sake of clarity and conciseness, not all algorithms are subjected to every test in this paper, but usually a selection is made in each case. Additional details and references are given below:
\subsection{CLASS\_STAR}
This is the standard \texttt{SExtractor} star-galaxy classifier, a neural network that outputs a real number (a `stellarity' index from 0 to 1), trained on a large set of simulated galaxy and star images on CCDs.
\paragraph*{\textit{Input data:}} For every object, eight isophotal areas above the background are measured, plus the value of the intensity at the peak pixel in the object and the value of the FWHM for the image.
\paragraph*{\textit{Method:}} It uses a backpropagation model \citep{werbos} for learning, based on simulations that include a wide range of PSF profiles and sizes, though it is optimised to work best in the intermediate magnitude range (in the DES magnitude scale) of $V\sim 18-22$ due to the types of galaxies simulated and the relative star-galaxy mixture.
\subsection{SPREAD\_MODEL}
This quantity is a linear discriminant-based algorithm available with the \texttt{SExtractor} package. The \texttt{SPREAD\_MODEL} estimator was originally developed as a star-galaxy classifier for the DESDM pipeline, and has also been used in other surveys \citep[e.g.,][]{desai,bouy}.
\paragraph*{\textit{Input data:}} the image data at pixel level is used for each detected object in \texttt{SExtractor}.
\paragraph*{\textit{Method:}} \texttt{SPREAD\_MODEL} indicates whether the best-fitting local PSF model $\vec{\phi}$ (representing a point source) or a slightly more extended model $\vec{G}$ (representing a galaxy) better matches the image data. $\vec{G}$ is obtained by convolving the local PSF model with a circular exponential model with scale length equal to 1/16 of the FWHM (Full-Width at Half-Maximum). \texttt{SPREAD\_MODEL} is normalised to allow comparing sources with different PSFs throughout the field:
\begin{equation}
{\tt SPREAD\_MODEL} = \frac{\vec{G}^T {\bf W}\,\vec{p}}{\vec{\phi}^T {\bf W}\,\vec{p}}
- \frac{\vec{G}^T {\bf W}\,\vec{\phi}}{\vec{\phi}^T {\bf W}\,\vec{\phi}},
\end{equation}
\noindent where $\vec{p}$ is the image vector centered on the source\footnote{This definition of {\tt SPREAD\_MODEL} differs from the one given in previous papers \citep{desai,bouy}, which was incorrect. In practice both estimators give very similar results.}. ${\bf W}$ is a weight matrix constant along the diagonal except for bad pixels where the weight is 0. By construction, \texttt{SPREAD\_MODEL} is close to zero for point sources, positive for extended sources (galaxies), and negative for detections smaller than the PSF, such as cosmic ray hits. The RMS error on \texttt{SPREAD\_MODEL} is estimated by propagating the uncertainties on individual pixel values:
\begin{eqnarray}
{\tt SPREADERR\_MODEL} & = & \frac{1}{(\vec{\phi}^T {\bf W}\,\vec{p})^2} \left(\vec{G}^T {\bf V}\,\vec{G}\,(\vec{\phi}^T {\bf W}\,\vec{p})^2\right.\nonumber \\
& & + \vec{\phi}^T {\bf V}\,\vec{\phi}\,(\vec{G}^T {\bf W}\,\vec{p})^2\nonumber \\
& & \left. - 2 \vec{G}^T {\bf V}\,\vec{\phi}\,(\vec{G}^T {\bf W}\,\vec{p}\, \vec{\phi}^T {\bf W}\,\vec{p}) \right)^{1/2}
\end{eqnarray}
where ${\bf V}$ is the noise covariance matrix, which is assumed to be diagonal.
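As an illustrative sketch (not part of the \texttt{SExtractor} implementation), the discriminant above and its behaviour for a point source can be reproduced with NumPy, treating $\vec{p}$, $\vec{\phi}$ and $\vec{G}$ as pixel vectors and the diagonal of ${\bf W}$ as a weight vector; the toy Gaussian profiles below are purely for demonstration:

```python
import numpy as np

def spread_model(p, phi, G, w):
    """Sketch of the SPREAD_MODEL discriminant defined above.

    p   -- image pixel vector centred on the source
    phi -- local PSF model vector (point source)
    G   -- PSF convolved with a circular exponential (extended model)
    w   -- per-pixel weights (diagonal of W; zero for bad pixels)
    """
    return (np.dot(G * w, p) / np.dot(phi * w, p)
            - np.dot(G * w, phi) / np.dot(phi * w, phi))

x = np.arange(-5, 6)
phi = np.exp(-0.5 * (x / 1.5) ** 2)   # toy Gaussian PSF
G = np.exp(-0.5 * (x / 1.8) ** 2)     # slightly broader profile
w = np.ones_like(phi)

# A pure point source (p proportional to phi) yields ~0,
# while a more extended profile yields a positive value.
print(spread_model(2.0 * phi, phi, G, w), spread_model(G, phi, G, w))
```

By construction the first term cancels the second exactly whenever $\vec{p}$ is proportional to $\vec{\phi}$, which is why point sources cluster around zero.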
An example of a classifier derived from {\tt SPREAD\_MODEL} is the default classification scheme (\texttt{MODEST\_CLASS}) used in the Y1 Gold catalogue, which includes the following criteria:
\begin{equation}
\begin{split}
\text{galaxies} \iff \\
&\texttt{SPREAD\_MODEL\_I} + \\
&(5/3) \times \texttt{SPREADERR\_MODEL\_I} > 0.005 \\
&\texttt{AND NOT} \\
&(|\texttt{WAVG\_SPREAD\_MODEL\_I}| < 0.002 \\
&\texttt{AND}\\
&\texttt{MAG\_AUTO\_I} < 21.5)
\end{split}
\label{eq:modgal}
\end{equation}
\begin{equation}
\begin{split}
\text{stars} \iff \\
&|\texttt{SPREAD\_MODEL\_I} + \\
&(5/3) \times \texttt{SPREADERR\_MODEL\_I}| < 0.002 \\
\end{split}
\label{eq:modsta}
\end{equation}
where \texttt{WAVG\_SPREAD\_MODEL} has been computed from a weighted average of the \texttt{SPREAD\_MODEL} values of single-epoch shapes corresponding to that coadd object. These provide a better separation \citep{dr1} with respect to the standard \texttt{SPREAD\_MODEL} on coadd images, albeit with a limited depth reach, as not all coadd objects have single-epoch detections from which a weighted average can be computed (a faint object could be detected \textit{only} in the coadded image and not in the individual epochs contributing to the image). The weights come from the weight map of the Data Management processing outputs, and the band chosen is the \textit{i} band, where the images have a higher signal-to-noise ratio and have also demonstrated the best performance in detailed simulations. Objects which do not fall into the categories expressed by Equations \ref{eq:modgal} and \ref{eq:modsta} are grouped into either a `fringe' category between both or an `artifact' category (approximately 5\% of the catalogue considered here).
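The \texttt{MODEST\_CLASS} selection of Equations \ref{eq:modgal} and \ref{eq:modsta} amounts to simple vectorised cuts over catalogue columns; a minimal NumPy sketch (the array names are hypothetical stand-ins for the \textit{i}-band Gold quantities) is:

```python
import numpy as np

def modest_class(spread, spread_err, wavg_spread, mag_auto):
    """Sketch of the MODEST_CLASS selection.

    Returns 1 for galaxies, 0 for stars, and -1 for the remaining
    fringe/artifact categories, following the cuts in the text.
    """
    corrected = spread + (5.0 / 3.0) * spread_err
    bright_star = (np.abs(wavg_spread) < 0.002) & (mag_auto < 21.5)
    is_galaxy = (corrected > 0.005) & ~bright_star
    is_star = np.abs(corrected) < 0.002
    out = np.full(spread.shape, -1)
    out[is_galaxy] = 1
    out[is_star & ~is_galaxy] = 0
    return out

# A clear galaxy, a bright star, and an artifact-like negative value:
spread = np.array([0.02, 0.0005, -0.01])
err = np.array([0.001, 0.0003, 0.001])
wavg = np.array([0.02, 0.0005, -0.01])
mag = np.array([20.0, 19.0, 22.0])
print(modest_class(spread, err, wavg, mag))  # -> [ 1  0 -1]
```

Because the galaxy and star cuts on the error-padded quantity are mutually exclusive, each object falls in at most one of the two classes.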
\subsection{CM\_T}
\texttt{CM\_T} is an intrinsic size estimator for the object from the image fitting provided by the \texttt{MOF} pipeline.
\paragraph*{\textit{Input data:}} The fitted Gaussian mixture model using the shapes across the images composing the coadd detection.
\paragraph*{\textit{Method:}}
The \texttt{MOF} code estimates the shapes and fluxes of objects detected in the coadd catalogues, using a mixture of Gaussians\footnote{\url{https://github.com/esheldon/ngmix}}\footnote{\url{https://github.com/esheldon/ngmixer}} to model the PSF light profile, which is then convolved with assumed bulge and disk models (fitted independently for each object, finding the best linear combination), likewise approximated using Gaussian mixtures \citep{hogg}. This is done by fitting across several images of the same object in multiple epochs and bands and then accurately subtracting the flux of neighbours. Concretely, \texttt{CM\_T} is defined as:
\begin{equation}
{\tt CM\_T} = \langle x^2 \rangle + \langle y^2 \rangle
\label{eq:tsize}
\end{equation}
where $x$ and $y$ denote the distance from the object's centre \NEW{determined by the model fit. The value $\langle x^2 \rangle + \langle y^2 \rangle$ can be obtained analytically from the individual component Gaussians.} Since the model is convolved with the PSF during the fit, these moments refer to the pre-PSF (intrinsic) profile. An associated uncertainty is computed as well, and our best performing classifier, as tested\footnote{Technically, a different, \textit{validation} set would be required to tune this classifier in terms of the quantity multiplying {\tt CM\_T\_ERR}, to avoid a bias towards a specific value, though in practice the differences between different choices are small.} in the COSMOS field, is based on the quantity ${\tt CM\_T} + 2\times{\tt CM\_T\_ERR}$. Typical values are in the range between -0.5 and 0.5.
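A minimal sketch of the resulting cut follows; the threshold of zero is illustrative only, since the actual value is tuned on the COSMOS field:

```python
import numpy as np

def cm_t_galaxy(cm_t, cm_t_err, threshold=0.0):
    """Illustrative CM_T-based classifier: flag as a galaxy any object
    whose error-padded intrinsic size exceeds a threshold.

    The classifier in the text uses CM_T + 2 * CM_T_ERR; the default
    threshold here is a hypothetical placeholder.
    """
    return cm_t + 2.0 * cm_t_err > threshold

# An extended object, a point-like object, and a marginal case:
cm_t = np.array([0.3, -0.05, 0.01])
cm_t_err = np.array([0.02, 0.01, 0.03])
print(cm_t_galaxy(cm_t, cm_t_err))  # -> [ True False  True]
```

Padding by twice the uncertainty deliberately errs on the side of calling noisy small objects galaxies, favouring stellar-sample purity.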
\subsection{MCAL\_RATIO}
This measurement is derived from the size estimates obtained by the \textit{metacalibration} technique, developed for shear measurement in weak lensing studies \citep{metacal}, \NEW{in which the single-epoch objects are artificially sheared to quantify the image's response to such an effect.}
\paragraph*{\textit{Input data:}} The object size and PSF model size obtained using this technique.
\paragraph*{\textit{Method:}} This approach uses the same \texttt{ngmix} code as \texttt{MOF} above. However this measurement is much noisier as the metacalibration technique \citep{huffmandelbaum} adds extra noise as part of the correlated noise correction. This is part of the procedure to correct for selection effects in shear inference, as detailed in \citet{metacal}. \NEW{The discriminating quantity used is}:
\begin{equation}
\texttt{MCAL\_RATIO} = \frac{\texttt{T}_{mcal}}{\texttt{T}_{PSF}}
\end{equation}
\noindent where $T_{mcal}$ and $T_{PSF}$ are the sizes of the object and the PSF, respectively, as defined in \NEW{Equation} \ref{eq:tsize}. \NEW{In this case the size is obtained from a single Gaussian fit, so the suffix \texttt{CM} (composite model) is not used.} Values are not constrained, but typical ranges explored for star-galaxy separation are between 0 and 1.
\subsection{ADA\_PROB}
This is the name given to a machine learning framework using the {\tt scikit-learn} package \citep{scikit-learn}.
\paragraph*{\textit{Input data:}} This method uses feature generation \NEW{(using various simple mathematical functions of various catalogue variables)}, and feature pre-selection \NEW{(selecting the most informative variables).}
\paragraph*{\textit{Method:}} The selected quantities are fed into several machine learning algorithms (including AdaBoost), which are drawn from {\tt scikit-learn}, with an additional probability recalibration step. The details of the framework are described in Appendix \ref{sec:ada_prob_desc}. Two variants of this approach have been used: {\tt ADA\_PROB}, based on {\tt SExtractor} quantities, and {\tt ADA\_PROB\_MOF}, based on {\tt MOF} quantities.
\subsection{GALSIFT\_PROB}
A probabilistic estimate based on a machine learning approach over principal components, as used in the `Multi\_class' algorithm in \citet{soumagnac}.
\paragraph*{\textit{Input data:}} A principal component analysis (PCA) \NEW{over the catalogue quantities} is performed to outline the correlations between the object parameters and extract the most relevant information. We compute the Fisher discriminant \citep{fisher} for each of the new parameters to quantify its power to separate the classes:
\begin{equation}
\mathcal{F}_i = \frac{(\overline{X_{G,i}}-\overline{X_{S,i}})^2}{\sigma^2_{G,i}+\sigma^2_{S,i}}
\end{equation}
\noindent where $\overline{X}$ and $\sigma^2$ are the mean and variance of parameter $i$, and $G$ and $S$ correspond to the galaxy and star classes, respectively.
\paragraph*{\textit{Method:}}
We select the parameters with the highest Fisher discriminant (hence the highest `separation power' of the classes) and use them as input to a machine learning classification algorithm. Whereas in \citet{soumagnac} the authors used {\tt ANNz} \citep{collister}, in this application we have replaced it by a Random Forest classification algorithm implemented as part of the \texttt{scikit-learn} package for Python \citep{scikit-learn}. The output is a probability of the object being a star or a galaxy. In this case, we have used a classifier based only on {\tt MOF} quantities, {\tt GALSIFT\_PROB\_MOF}.
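The pipeline above can be sketched with synthetic data (the real inputs are {\tt MOF} catalogue quantities): PCA, Fisher-discriminant ranking of the components, then a Random Forest from \texttt{scikit-learn}.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

# Toy data: two classes separated along the first raw feature.
rng = np.random.default_rng(1)
n = 500
X_gal = rng.normal([0, 0, 0, 0], [1, 1, 1, 1], size=(n, 4))
X_star = rng.normal([2, 0, 0, 0], [1, 1, 1, 1], size=(n, 4))
X = np.vstack([X_gal, X_star])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = galaxy

Xp = PCA(n_components=4).fit_transform(X)

# Fisher discriminant for each principal component (equation above).
g, s = Xp[y == 1], Xp[y == 0]
fisher = (g.mean(0) - s.mean(0)) ** 2 / (g.var(0) + s.var(0))

# Keep the components with the highest separating power and classify.
best = np.argsort(fisher)[::-1][:2]
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xp[:, best], y)
prob_galaxy = rf.predict_proba(Xp[:, best])[:, 1]
```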
\subsection{SVM}
Following Wei et al. (in prep), the support vector machine ({\tt SVM}) is a single-band, purely morphological and magnitude-based classifier.
\paragraph*{\textit{Input data:}} The input features used by the SVM are \texttt{MAG\_AUTO\_I}, \texttt{FLUX\_RADIUS\_I}, and \texttt{SPREAD\_MODEL\_I}.
\paragraph*{\textit{Method:}} {\tt SVM} is a supervised machine learning algorithm that constructs a separating hyperplane in an $n$-dimensional feature space, maximising the margins of the objects to the hyperplane. To make the {\tt SVM} robust across data sets with intrinsic variations in observing conditions, the algorithm linearly transforms the three input features to remove their means and scale their standard deviations across all objects to one. This preprocessing also gives all three features equal levels of feature importance, preventing features with particularly large numerical values from dominating the {\tt SVM} classification decision. The {\tt SVM} uses a Gaussian radial basis function (rbf) kernel, whose hyperparameters, $\gamma = 0.01$ and $C = 46.4$, are selected while training the {\tt SVM} through an exhaustive cross-validated grid search. The {\tt SVM} outputs the distance of each object to the hyperplane, where a high positive (negative) value corresponds to a high-confidence star (galaxy) classification.
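This can be sketched in \texttt{scikit-learn} with synthetic stand-ins for \texttt{MAG\_AUTO\_I}, \texttt{FLUX\_RADIUS\_I} and \texttt{SPREAD\_MODEL\_I} (the true training data and labels differ; the quoted hyperparameters are used as given):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy stand-ins: magnitude (uninformative here), half-light radius, spread.
rng = np.random.default_rng(2)
n = 400
stars = np.column_stack([rng.uniform(17, 24, n),
                         rng.normal(2.0, 0.2, n),
                         rng.normal(0.0, 0.002, n)])
gals = np.column_stack([rng.uniform(17, 24, n),
                        rng.normal(4.0, 1.0, n),
                        rng.normal(0.02, 0.01, n)])
X = np.vstack([stars, gals])
y = np.concatenate([np.ones(n), -np.ones(n)])  # +1 = star, -1 = galaxy

# Standardise each feature (zero mean, unit variance) so no feature
# dominates, then fit an rbf-kernel SVM with the quoted hyperparameters.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=0.01, C=46.4))
svm.fit(X, y)
distance = svm.decision_function(X)  # >0: star-like, <0: galaxy-like
```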
\subsection{CONCENTRATION}
A parameter similar to the one used as a star-galaxy classifier for SDSS \citep{sdssDR2}.
\paragraph*{\textit{Input data:}} The PSF and model magnitudes for each object.
\paragraph*{\textit{Method:}} In the case of DES, this translates to the use of the difference between the \texttt{MOF} PSF magnitude and a bulge + disk, or composite, model magnitude computed by the \texttt{MOF} pipeline:
\begin{equation}
\texttt{CONCENTRATION} = {\tt PSF\_MAG\_I} - {\tt CM\_MAG\_I}
\end{equation}
\subsection{W1-J, J-K infrared bands}
In the Stripe 82 region, we will compare with the information provided by the VISTA Hemisphere Survey DR3 \citep{vhs}, as proposed in \citet{banerji}, up to the available depth. We will also estimate the classification power of a cut in the infrared bands from WISE \citep{wise} and 2MASS \citep{2mass}, as described in \citet{kovacs}.
\paragraph*{\textit{Input data:}} Magnitudes W1 (WISE), J (2MASS, VHS) and K (VHS).
\paragraph*{\textit{Method:}} Colour cuts in W1-J and J-K.
\COMMENT{\item \textbf{HB\_PROB} makes use of several templates for stars and galaxies (as developed and described in \citet{kim}), and uses a Hierarchical Bayesian approach to the estimation of the probability from the fluxes at different bands (\citet{fadely,kim}). A posterior probability is calculated using the standard expression (for a given class $A$):
\begin{equation}
P(A|\mathbf{x,\theta}) = P(\mathbf{x}|A,\mathbf{\theta})P(A|\mathbf{\theta})
\end{equation}
where the likelihood $P(\mathbf{x}|A,\mathbf{\theta})$ is computed by marginalizing over all star and galaxy templates. This requires specifying a prior probability $P(t|A,\mathbf{\theta})$ for every template $t$ that is obtained from the complete application sample itself.
The templates used follow the application described in \citet{kim} and include the stellar SEDs from \citet{pickles,chabrier,bohlin} and galaxy spectra from \citet{coleman,kinney}.
}
\subsection{HB\_PROB}
Additionally, we implemented a Hierarchical Bayesian method (\texttt{HB\_PROB}) developed and explored by \citet{fadely,kim} with CFHTLS data. The lack of $u$-band in our case severely impacted the performance of this method, so it was not pursued further in our analysis.
Table \ref{tab:cuts} shows the specific selection methods used with respect to a varying threshold $t$ for each of the algorithms used in this work.
\begin{table*}
\centering
\caption{Selection methods.}
\label{tab:cuts}
\begin{tabular}{cc}
\hline
Name & Selection method for galaxies using threshold \textit{t} \\
\hline
CLASS\_STAR & $CLASS\_STAR < t$ \\
SPREAD\_MODEL & $SPREAD\_MODEL + 1.67*SPREADERR\_MODEL > t$\\
CM\_T & $CM\_T + 2*CM\_T\_ERR > t$ \\
MCAL\_RATIO & $MCAL\_RATIO > t$ \\
ADA\_PROB & $ADA\_PROB > t$ \\
GALSIFT\_PROB & $GALSIFT\_PROB > t$ \\
SVM & $SVM\_PROB > t$ \\
CONCENTRATION & $PSF\_MAG\_I - CM\_MAG\_I > t$ \\
WISE J-K & $(J-K-0.6)/(MAG\_AUTO\_G-MAG\_AUTO\_I) > t$ \\
\hline
\end{tabular}
\end{table*}
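The selections of Table \ref{tab:cuts} are simple vectorised cuts over catalogue columns. As an illustration (hypothetical array values; only two of the rows are shown):

```python
import numpy as np

# Hypothetical catalogue columns (arrays of equal length, values made up).
spread_model = np.array([0.004, 0.0005, 0.02])
spreaderr_model = np.array([0.001, 0.001, 0.002])
cm_t = np.array([0.5, -0.01, 1.2])
cm_t_err = np.array([0.05, 0.002, 0.1])

# SPREAD_MODEL row: the threshold t is varied to trace out the ROC curve.
t = 0.005
gal_spread = spread_model + 1.67 * spreaderr_model > t

# CM_T row, with its own example threshold.
t = 0.0
gal_cmt = cm_t + 2.0 * cm_t_err > t
```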
\section{Performance on calibration fields}
\label{sec:calib}
\COMMENT{We study the performance of the classifiers by separating point-like versus extended sources. The former will include both stars and QSOs on first approximation and we will collectively call them `stars' in this work. For the case studies considered here and the general catalogue, the contamination of QSOs in the large-scale stellar or galactic catalogues is not deemed important. However, a good star-QSO separation is needed for quasar science, as studied in detail in \citet{tiess} for DES data.QSO
and analyze the impact in two broad science cases, and possible developments to improve object classification in future analyses of this data.}
In this section, we will look first at the metrics used to compare classifiers using the calibration fields, describe the datasets (including training and validation) and finally analyze the results.
\subsection{Receiver Operating Characteristic (ROC) curves}
We compare the performance of the different classification techniques using the calibration fields, by calculating Receiver Operating Characteristic \citep[ROC;][]{rocprimer,bradley} curves which compare the \textit{True Positive Rate (TPR)} of galaxy or star detection, given a specific threshold for the classifier, versus the \textit{False Positive Rate (FPR)}, as defined by:
\begin{equation}
TPR=\frac{TP}{TP+FN}
\label{eq:tpr}
\end{equation}
\begin{equation}
FPR=\frac{FP}{FP+TN}
\label{eq:fpr}
\end{equation}
\noindent where $TP$ is the number of correctly identified galaxies, given a cut for a specific classifier; $FN$ the number of galaxies incorrectly classified as stars; $FP$ the number of stars incorrectly classified as galaxies; and $TN$ the number of correctly identified stars (taking `galaxy' as the positive class). See Table \ref{tab:confusion_matrix} for a reference on these concepts. The ROC curve is therefore confined by construction to the area spanning 0 to 1 in both FPR and TPR. As we vary the threshold $t$ for a given classifier (Table \ref{tab:cuts}), a curve is drawn across this area from (0,0) to (1,1). A completely random `classifier' would show as a diagonal line.
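The construction of the ROC curve can be sketched with a toy classifier score (any of the outputs in Table \ref{tab:cuts} could play this role; the score and labels below are synthetic):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Toy truth labels (1 = galaxy) and a noisy score separating the classes.
rng = np.random.default_rng(3)
truth = np.concatenate([np.ones(1000), np.zeros(1000)])
score = np.concatenate([rng.normal(1.0, 0.5, 1000),
                        rng.normal(0.0, 0.5, 1000)])

# Sweep the threshold t over the score: one (FPR, TPR) point per threshold.
fpr, tpr, thresholds = roc_curve(truth, score)
auc = roc_auc_score(truth, score)
# A random classifier gives auc ~ 0.5; perfect separation gives auc = 1.
```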
\begin{table*}
\centering
\caption{Definitions of different figures of merit for classifiers, according to the outcome of the classification using a `truth' reference (also termed `confusion matrix'). The term `positive' can refer to `galaxy' or `star' classes depending on the use case. The metrics examined in this work are emphasised in bold. \NEW{`Purity' can be used as a synonym for the Positive Predictive Value, PPV, whereas `completeness' can be interchanged with the True Positive Rate, TPR.}}
\label{tab:confusion_matrix}
\begin{tabular}{l|l||c|c||c|c|}
\multicolumn{2}{c}{} & \multicolumn{2}{c}{\textbf{Prediction}}& \multicolumn{2}{c}{} \\
\cline{3-4}
\multicolumn{2}{c|}{} & Positive & \multicolumn{1}{c|}{Negative} & \multicolumn{2}{c}{}\\
\hhline{~-::==:t:--}
\multirow{2}{*}{\textbf{Truth}}& Positive & \makecell{True Positive \\ (TP)} & \makecell{False Negative \\ (FN)} & \makecell{\textbf{True positive rate} \\ (TPR) = TP/(TP+FN)} & \makecell{False negative rate \\ (FNR) = FN/(TP+FN)} \\
\hhline{~-||--||--}
& Negative & \makecell{False Positive \\ (FP)} & \makecell{True Negative \\ (TN)} & \makecell{\textbf{False positive rate} \\ (FPR) = FP/(FP+TN)} & \makecell{True negative rate \\ (TNR) = TN/(FP+TN)} \\
\hhline{~-||~~||--}
\hhline{~~::==:b:~~}
\multicolumn{2}{c}{} & \multicolumn{1}{|c|}{\makecell{\textbf{Positive predictive value} \\ (PPV) = TP/(TP+FP)}} & \multicolumn{1}{|c|}{\makecell{False omission rate \\ (FOR) = FN/(FN+TN)}} & \multicolumn{2}{c}{}\\
\cline{3-4}
\multicolumn{2}{c}{} & \multicolumn{1}{|c|}{\makecell{False discovery rate \\ (FDR) = FP/(TP+FP)}} & \multicolumn{1}{|c|}{ \makecell{Negative predictive value \\ (NPV) = TN/(FN+TN)}} & \multicolumn{2}{c}{}\\
\cline{3-4}
\end{tabular}
\end{table*}
In particular, the AUC (area under the ROC curve) has classically been used as a threshold-independent metric to compare the performance of classifiers, and it is relatively insensitive to the specific positive-to-negative class composition (as long as sufficient statistics are available). The closer the AUC is to unity, the better the discriminating power of the classifier associated with that particular curve. Again, a random classifier would show an AUC value of 0.5.
There are, however, some caveats to be aware of, namely the possibility of misleading results when ROC curves cross each other \citep{hand2009}, and the fact that misclassification costs can differ according to the scientific case, which is not reflected in ROC curves. We address this by extending the range of metrics used for the different classifiers, in order to have a broader view of the performance for our particular needs.
\subsection{Purity and completeness}
\label{sec:purity_completeness}
In astronomy, we are interested in evaluating the performance of classifiers in terms of their impact on measurable parameters of interest. It is common to find the requirements for a survey defined in terms of \textit{purity} and \textit{completeness}. In \citet{soumagnac}, for example, the authors formulate the scientific requirements for weak lensing and large-scale structure studies in terms of these two observables.
`Purity' is a measurement of the contamination of a sample by misclassified objects, which can also be called \textit{precision} or \textit{positive predictive value (PPV)}:
\begin{equation}
PPV=\frac{TP}{TP+FP}
\label{eq:ppv}
\end{equation}
`Completeness' (also known as \textit{recall}) is another name for the TPR defined in Equation \ref{eq:tpr}. A convenient way to compare the performances of several classifiers is the precision-recall (PR) curve, where both quantities are visualised simultaneously.
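The PR curve is built in the same way as the ROC curve, sweeping a threshold over a classifier score; a toy sketch with synthetic labels and scores:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Toy truth labels (1 = galaxy) and a noisy classifier score.
rng = np.random.default_rng(4)
truth = np.concatenate([np.ones(1000), np.zeros(1000)])
score = np.concatenate([rng.normal(1.0, 0.5, 1000),
                        rng.normal(0.0, 0.5, 1000)])

# Each point on the PR curve is (completeness, purity) at one threshold t.
precision, recall, thresholds = precision_recall_curve(truth, score)
```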
\subsection{Training and testing fields}
\label{sec:train_test_fields}
The dataset on which we train the machine learning (ML) codes is the weak lensing catalogue from HST ACS in the COSMOS field \citep{leauthaud}, as this provides a largely unbiased measurement of all extended and point-like sources from DES (although the star-galaxy mixture is affected by the specific position in the sky with respect to the Galactic plane). In particular, the \texttt{MU\_CLASS} parameter is used for this reference, defined in the peak surface brightness vs. \texttt{MAG\_AUTO} plane, which in space-based imaging shows very distinct loci with respect to the same objects viewed through the atmosphere. This has been used previously in star-galaxy separation assessments in, e.g., \citet{crocce} and \citet{hypersuprime}.
This training set, after a 1" positional match with DES sources, contains $\sim114$k extended and $\sim12$k point-like sources. \COMMENT{It can be subdivided further into a training set and a validation set, the latter being used to tune each classifier's internal parameters.} The COSMOS dataset will also be used for some tests only with the non-ML codes in order to avoid biased conclusions based on their training in that same area.
Even when using unbiased imaging data, the particular position of the field on the sky strongly conditions the relative mixture of stars and galaxies. Therefore we add some extra imaging data extracted from the Hubble Source Catalog\footnote{\url{https://archive.stsci.edu/hst/hsc/}} (Hubble-SC) \citep{hsc} where it overlaps the DES survey. Most of it is either too inhomogeneous or targets specific objects (nearby, large galaxies or globular clusters), but a few deep fields can be matched with some of the SN fields from DES. In this case we use the Hubble-SC catalogues' concentration index with a cut of 1.2, which appears optimal in the concentration-magnitude plane.
Spectroscopy is also a valuable resource to provide a one-to-one truth table for our classifications. However, the spectroscopic targeting and measurement efficiency is not complete in a statistical sense relative to the DES catalogue, as certain types of sources were given higher priority and some types are more difficult to classify spectroscopically; therefore the testing of purity/completeness can be strongly biased. The photometric properties of the selected stars and galaxies can also be highly skewed towards particular types, introducing additional biases. This limits the usefulness of any purity metric we try to derive from these fields. For this reason, the spectroscopic datasets have been limited to those that provide a relatively unbiased sample by construction, which includes the VVDS-DEEP and VVDS-CDFS \citep{vvds} data releases. The SDSS DR13 \citep{sdssdr13} updated spectro-photometric sample over Stripe 82 is also used due to the relative variety of spectra available, and the possibility to test our classification methods against `true' spectroscopic typing. We use redshifts (a cut of $z < 0.001$) as the method to identify stars. However, we also consider a selection based on the SDSS spectroscopic {\tt CLASS}, \NEW{obtaining similar conclusions. For the VVDS data, we require the redshifts to be `reliable' according to the classification in \citet{vvds}, that is, values of 2, 3, 4 or 9 in their redshift quality estimate.}
Both the COSMOS catalogues and the ones recovered from the Hubble-SC have been cross-tested against spectroscopic catalogues (VIMOS-Ultra Deep Survey DR1 \citep{vuds}, zCOSMOS DR3 \citep{zcosmos}, and VVDS-CDFS \citep{vvds}) to check the robustness of their morphological classifications against a `true' type based on their spectra. In both cases, around $5\%$ of spectroscopically classified stars are misclassified as galaxies \NEW{when using these space-imaging based measurements}, whereas around $2\%$ of spectroscopically classified galaxies are misclassified as stars. This \NEW{misclassification happens at faint magnitudes (F814W from ACS-HST > 24 for COSMOS, F814W > 23 for the other Hubble fields used), denoting possible compact galaxies unresolved by HST, or errors in the spectroscopic measurement or matching.} These corrections are not considered for the purity estimates derived here as they concern fainter fluxes than the truth tables used in our tests.
See Table \ref{tab:external_datasets} and Appendix \ref{sec:external_datasets} for details on the reference data in different fields including the database queries used to create these datasets.
\subsection{Results}
\subsubsection{Using HST imaging}
\label{sec:imaging_results}
We compare here the results for the classifiers used on the COSMOS field (excluding the ML codes that were trained on this field) and the supernova fields for which we have found publicly available deep HST data from the Hubble-SC.
\begin{itemize}
\item The results for the ROC comparison are shown in Figure \ref{fig:roc_cosmos} for the COSMOS field and Figure \ref{fig:roc_snfields} for the SN fields with Hubble-SC data. The AUC of the respective curves are tabulated in Table \ref{tab:aucs}.
From these plots it can be readily seen that, among the morphological classifiers, the algorithm based on a linear discriminant over coadded images, {\tt SPREAD\_MODEL}, and the one based on the intrinsic size from {\tt MOF} estimates, {\tt CM\_T}, are the best performing ones.
It is also seen that the ML classifiers (in Figures \ref{fig:roc_snfields} and \ref{fig:pr_snfields}) perform better, even when tested on a different field from the one used for training, as in the case of the Hubble-SC test\COMMENT{, though admittedly the truth samples have been collected using the same instrument (the ACS camera with the F814W filter) in both training and testing (though the objects in these samples are different)}.
It is worth pointing out that most of the differences showcased in Figures \ref{fig:roc_cosmos} and \ref{fig:roc_snfields} become more evident when we restrict ourselves to faint objects ($i>22$). The {\tt SPREAD\_MODEL}-based cut does a good job at avoiding stellar contamination but suffers from decreased galaxy completeness. This is a result of the galaxy locus merging with the stellar locus in the magnitude-{\tt SPREAD\_MODEL} space, where noisier measurements increase the effect even further. {\tt CM\_T} fares better in this respect, but a conservative cut will provide a purer galaxy sample using {\tt SPREAD\_MODEL}. On the other hand, the metacalibration size ratio does not perform as well as the other morphological classifiers, though this measurement is noisier than the direct assessment of sizes and shapes from the {\tt MOF} pipeline.
\item Figure \ref{fig:roc_snfields} shows that ML classifiers are able to take advantage of ancillary information for very faint objects where shape measurements are uncertain. Results with {\tt SVM} in the SN fields show that a ML approach based exclusively on morphological and magnitude information can provide some advantage over simple cuts on morphological variables. {\tt SVM} is shown to be robust outside of its training field; however, other machine learning algorithms provide an extra edge in performance, as shown by their higher AUC values, since {\tt SVM} forgoes the additional information encoded in the rest of the variables available in the catalogue. Nevertheless, this approach could provide a middle-ground solution to the issues one might encounter when incorporating colour-based information, which can encode interesting physics that we would not like to entangle with our star-galaxy sample selection (see Section \ref{sec:discussion}). Further developments of this approach are explored in Wei et al. (in prep).
The comparison between the COSMOS and Hubble-SC fields reveals that the {\tt CM\_T} classification is more robust as we switch between fields. {\tt SPREAD\_MODEL} and {\tt CLASS\_STAR}, which are derived from coadded PSFs, are more vulnerable to the contribution of bad exposures and PSF inhomogeneities in the coadded image. It is worth noting here that preliminary tests on Y3 data \citep{dr1} using Hyper Suprime-Cam deep data \citep{hypersuprime} reinforce this idea, which will be explored further in a future publication, therefore favouring in general the use of a multi-epoch classifier (such as {\tt CM\_T}, based on the {\tt MOF} pipeline). Both the COSMOS field dataset and the SN field coadds have a much smaller dithering than the wide-field exposures. This might artificially bias classifications based on the coadded PSF towards somewhat better performances than actually present in the wide-field data.
\item Figures \ref{fig:pr_snfields} and \ref{fig:pr_snfields_stars} show the precision-recall metric, for galaxies and stars respectively (COSMOS plots not shown for conciseness, but provide similar conclusions).
These plots provide a similar conclusion as the ROC curves, though in terms of quantities more directly tied to scientific requirements, such as recall (i.e. completeness) and precision (i.e. purity). Again, the {\tt CM\_T} morphological classifier and the ML codes provide the best results, even more strongly when selecting a star sample (these results motivate the choice of stellar classification based on multi-epoch pipelines in \citealt{shipp}). It is worth adding that the ML classifiers using {\tt MOF} quantities do not add much over a straight cut in {\tt CM\_T} itself, due to the large information content this classifier already carries with regard to star-galaxy classification. On the other hand, the ML classifiers based on \texttt{SExtractor} quantities are able to extract more value from the different outputs of this code, with respect to a simple {\tt SPREAD\_MODEL} cut.
\item Figures \ref{fig:efficiency_mag_cosmos} and \ref{fig:efficiency_mag_cosmos_stars} show the dependence of completeness on magnitude towards the faint end of the sample, for galaxies and stars respectively.
Unlike in the previous plots, in this case a choice of threshold has to be made. We pick cuts in the variables in question so as to obtain a similar galaxy purity ($99\%$) in each magnitude bin, so we can compare completeness appropriately, and similarly for stars ($80\%$). We chose the COSMOS field, which has good statistics to faint magnitudes, though this precludes using the ML codes in the comparison. This example shows a case where a classifier such as the concentration estimate from the {\tt MOF} pipeline, not necessarily favoured at first sight by the area under the ROC curve, works better in this regime due to its good selection of very pure samples. The ROC curve only informs about \textit{overall} classifier performance (i.e. considering all possible thresholds), and different classifiers have to be tested for the specific science case at hand.
For stars, a similar behaviour is seen for {\tt CM\_T}, {\tt CONCENTRATION} and {\tt SPREAD\_MODEL}. {\tt CLASS\_STAR}, for instance, suffers from poor completeness near the faint end, as a high threshold cut removes most of the objects, whose neural network outputs tend to cluster towards intermediate values when the classification is uncertain. {\tt MCAL\_RATIO} incorporates noisier measurements and additional cuts to the sample that make the classified sample less complete.
\item In addition, in Figure \ref{fig:purity_pz_hsc} a similar comparison is shown as a function of a realization of the photometric redshift from the probability distribution function obtained from the algorithm BPZ \citep{bpz}, this time also adding the ML classifiers (again over the SN fields with Hubble-SC). A similar conclusion is drawn from these plots; {\tt MOF} fitting methods and ML classifiers perform best, as indicated by the ROC curves. Note the stability of the purity of the galaxy sample with respect to photo-z, suggesting that a photo-z selected sample would not be biased by the star-galaxy separation classifiers analyzed here (however, see Section \ref{sec:lss} for an important caveat to this conclusion).
\end{itemize}
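The fixed-purity threshold choice used for the completeness comparisons above can be sketched as follows (toy labels and scores; in practice this is repeated per magnitude or photo-z bin):

```python
import numpy as np

def threshold_for_purity(score, is_galaxy, target=0.99):
    """Smallest threshold whose selected galaxy sample has purity >= target;
    returns (threshold, purity, completeness) or None if unreachable."""
    for t in np.unique(score):  # unique() returns sorted values
        sel = score > t
        if sel.sum() == 0:
            break
        purity = is_galaxy[sel].mean()
        if purity >= target:
            completeness = (sel & is_galaxy).sum() / is_galaxy.sum()
            return t, purity, completeness
    return None

# Toy sample: 2000 galaxies and 200 stars with overlapping scores.
rng = np.random.default_rng(5)
is_galaxy = np.concatenate([np.ones(2000, bool), np.zeros(200, bool)])
score = np.concatenate([rng.normal(1.0, 0.4, 2000),
                        rng.normal(0.0, 0.4, 200)])
t, purity, completeness = threshold_for_purity(score, is_galaxy, target=0.99)
```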
\begin{table*}
\centering
\caption{Area under the ROC curves for different classifiers. Dashes indicate tests that have not been run for that specific code and dataset combination.}
\label{tab:aucs}
\begin{tabular}{ccccc}
\hline
Name & COSMOS, imaging & SN fields, imaging & SN fields, spectroscopy & Stripe 82, spectroscopy\\
\hline
CLASS\_STAR & 0.898 & 0.885 & 0.950 & \NEW{0.976} \\%0.997 \\
SPREAD\_MODEL & 0.954 & 0.956 & 0.975 & \NEW{0.962} \\%0.998 \\
CM\_T (MOF) & 0.957 & 0.959 & 0.971 & \NEW{0.972} \\%0.997 \\
CONCENTRATION (MOF) & 0.938 & \NEW{0.953} & 0.950 & \NEW{0.967} \\%0.997 \\
MCAL\_RATIO & 0.910 & 0.924 & -- & -- \\
VHS J-K vs G-I & -- & -- & -- & \NEW{0.993} \\%W1-J 0.985 \\
ADA\_PROB & -- & 0.978 & 0.983 & \NEW{0.967} \\%0.998 \\
ADA\_PROB (MOF) & -- & \NEW{0.967} & 0.980 & \NEW{0.967} \\%0.998 \\
GALSIFT\_PROB (MOF) & -- & \NEW{0.969} & 0.981 & \NEW{0.962} \\%0.998 \\
SVM & -- & 0.962 & -- & -- \\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\includegraphics[width=\columnwidth]{roc_cosmos_v5.png}
\caption{ROC plot for classifiers tested on the COSMOS field. Only non-ML codes are shown, as \NEW{the machine-learning ones were trained in this dataset}. Magnitude range is given by \texttt{MAG\_AUTO\_I} = (17,24). The {\tt SPREAD\_MODEL}-based cut is similar to {\tt MODEST\_CLASS} used in Y1 analyses. \NEW{The ROC curve is obtained by varying the threshold at which the classification divides the galaxy and star sample.}}
\label{fig:roc_cosmos}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{roc_hsc_v8.png}
\caption{ROC plot for classifiers tested on the SN fields over the Hubble-SC catalogue. Magnitude range is given by \texttt{MAG\_AUTO\_I} = (17,24). The {\tt SPREAD\_MODEL}-based cut is similar to {\tt MODEST\_CLASS} used in Y1 analyses. \NEW{The ROC curve is obtained by varying the threshold at which the classification divides the galaxy and star sample.}}
\label{fig:roc_snfields}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{pr_hsc_galaxies_v8.png}
\caption{Precision-Recall (or completeness-purity) plot for classifiers tested on the SN fields over the Hubble-SC catalogue, using \textbf{galaxies} as truth. Magnitude range is given by \texttt{MAG\_AUTO\_I} = (17,24). The {\tt SPREAD\_MODEL}-based cut is similar to the {\tt MODEST\_CLASS} used in DES Y1 analyses.}
\label{fig:pr_snfields}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{pr_hsc_stars_v8.png}
\caption{Precision-Recall (or completeness-purity) plot for classifiers tested on the SN fields over the Hubble-SC catalogue, using \textbf{stars} as truth. Magnitude range is given by \texttt{MAG\_AUTO\_I} = (17,24). The {\tt SPREAD\_MODEL}-based cut is similar to the {\tt MODEST\_CLASS} used in DES Y1 analyses.}
\label{fig:pr_snfields_stars}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{efficiency_comparison_cosmos_galaxies_v6.png}
\caption{Completeness of a {\bf galaxy} sample as a function of magnitude for classifiers tested on the COSMOS field, for a fixed galaxy purity of 99\%.}
\label{fig:efficiency_mag_cosmos}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{efficiency_comparison_cosmos_stars_v6.png}
\caption{Completeness of {\bf stellar} sample as a function of magnitude for classifiers tested on the COSMOS field, for a fixed 80\% purity.}
\label{fig:efficiency_mag_cosmos_stars}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{purity_comparison_hsc_pz_v7.png}
\caption{Purity of the galaxy sample as a function of photo-z for classifiers tested on Hubble-SC matches over the SN fields, for a fixed 90\% completeness. We use a random Monte Carlo sampling of the probability distribution function of redshift predicted by BPZ for each object as an estimate of its photo-z.}
\label{fig:purity_pz_hsc}
\end{figure}
\subsubsection{Using ground-based spectroscopy}
\label{sec:spectralcalib}
Turning now to tests on the overlapping spectroscopic data, we show ROC plots to demonstrate the consistency with the results from the previous section and add a comparison with external infrared information.
Figure \ref{fig:roc_vvds} shows the ROC for the VVDS test and Figure \ref{fig:roc_s82} shows the ROC for the Stripe 82 test. The former does not add much to the conclusions mentioned above, but provides assurance that they are consistent with a different class of `truth' typing. We also add here a test on the SN fields, computing the ROC curves and their areas as a function of the signal-to-noise of the detected objects, to demonstrate that the classifiers, including the ML codes, behave as expected (see Figure \ref{fig:aucs_sn}).
The Stripe 82 dataset is shallower and therefore does not allow a clear distinction between the performance of most of the algorithms described here. The comparison with external infrared colour cuts, on the other hand, shows an important increase in performance, specifically when attempting to select a very pure stellar sample, as already anticipated in \citet{baldry} and \citet{banerji}. It is important to note again that the nature of this test is different with respect to the ones based on space imaging: here we use spectroscopic redshifts to determine the nature of the object (Galactic or extragalactic), not its extendedness. What we see is that infrared information selects out the stars from the population of galaxies and QSOs (which are generally point-like). We have also attempted to add the W1-J colour from 2MASS and WISE (as suggested in \citealt{kovacs}), but the matches proved too shallow to be of any interest for these samples.
Unfortunately, the current VHS data does not cover the full breadth and depth of the survey, and a careful combined catalogue with adequate matching (overcoming the less precise infrared astrometry) is needed beyond what was done here for comparison purposes. Cross-matching with bright sources will be explored in more detail with DES Y3 data, with the goals of enhancing star selection for creating PSF models and reference catalogues for large-scale structure. A combination of classifiers, as done for instance in \citet{kim} or \citet{molino}, seems an appropriate option in this case, even more so if \NEW{matched-aperture} photometry of VHS data can be performed survey-wide for DES \citep{banerji}. This would also have important applications for photometric redshift determination \citep{banerjipz}.
\begin{figure}
\includegraphics[width=\columnwidth]{roc_vvds_v7.png}
\caption{ROC plot for classifiers tested on the SN fields over the VVDS catalogues. Magnitude range is given by \texttt{MAG\_AUTO\_I} = (17,24). \NEW{The ROC curve is obtained by varying the threshold at which the classification divides the galaxy and star sample.}}
\label{fig:roc_vvds}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{roc_s82_v9.png}
\caption{ROC plot for classifiers tested on the Stripe 82 region overlapping SDSS and VHS data. Magnitude range is given by \texttt{MAG\_AUTO\_I} = (17,21). Note the logarithmic scale in the x-axis in this instance. \NEW{The ROC curve is obtained by varying the threshold at which the classification divides the galaxy and star sample.}}
\label{fig:roc_s82}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{aucs_sn_v2.png}
\caption{Area under the curve measured for the same classifiers as Figure \ref{fig:roc_vvds}, for different signal-to-noise thresholds, using the \texttt{MAGERR\_AUTO} quantity.}
\label{fig:aucs_sn}
\end{figure}
\section{Performance on application field}
\label{sec:y1valid}
It has been shown by \citet{fadely} that machine learning techniques in star-galaxy classification perform better if a representative training dataset is found. We have studied the impact of this effect by testing ML algorithms over fields other than the training set in Section \ref{sec:calib}. However, all these additional areas are quite limited in either depth or area when compared to the complete DES volume.
In this section, we extend the scope of the performance tests in classification to have a broader picture, by making the following checks on the application field (see Section \ref{sec:dataset}):
\begin{enumerate}
\item General distribution of the classifier-flux space to qualitatively analyze the algorithms' outputs.
\item Number count distributions of stars against a well-tested simulation, both as a function of magnitude and as a function of Galactic latitude.
\item Galaxy versus star density profiles in search of correlations, using different proxies for the true stellar distribution.
\item Density of classified galaxies as a function of proximity to the Large Magellanic Cloud.
\item Consistency of classified stars with the expected stellar locus \citep{covey}.
\end{enumerate}
Except where noted, the sample sizes for each of these cases are approximately 1 million objects, limited by the size of tested region, magnitude range or photo-z binning.
\subsection{Classifier outputs}
A first step towards understanding the quality of classification for different algorithms in the application field of DES is to study the outputs as a function of magnitude and the number counts of classified objects.
In Figure \ref{fig:magclassifier_distributions} several density plots show how objects are distributed in the classifier-magnitude space. These distributions are based on a $1\%$ sample of the Y1 Gold catalogue. Direct morphological outputs from the DESDM pipeline ({\tt CLASS\_STAR}, {\tt SPREAD\_MODEL} and {\tt CM\_T}) show two loci that merge in the faint end. {\tt CLASS\_STAR} outputs merge into a region of 50\% probability by construction of its base neural network. This uncertainty region appears at shallower magnitudes than for other classifiers, as shown previously, due to the characteristics of the simulations used for its training. \COMMENT{A similar effect is seen with \texttt{GALSIFT\_PROB}.} However, a classifier using a feature importance selection\footnote{A preselection of the input variables which provide the most predictive power for the task at hand, e.g. star-galaxy separation.} yields a more `clear-cut' classification of objects, with a large predominance of galaxies at the faint end, as expected. This can be attributed to the fact that galaxies vastly outnumber stars in raw numbers (a very imbalanced dataset) at faint magnitudes, so the algorithms will `learn' that the most probable classification for a given object in this range is a galaxy.
\begin{figure*}
\centering
\subfigure[CLASS\_STAR]{\includegraphics[width=0.3\textwidth]{CLASS_STAR_I_vs_mag_auto_heatmap_v3.png}\label{fig:a}}
\subfigure[SPREAD\_MODEL]{\includegraphics[width=0.3\textwidth]{SPREAD_MODEL_I_vs_mag_auto_heatmap_v3.png}\label{fig:b}}
\subfigure[CM\_T]{\includegraphics[width=0.3\textwidth]{CM_T_vs_mag_auto_heatmap_v3.png}\label{fig:c}}
\\
\subfigure[ADA\_PROB]{\includegraphics[width=0.3\textwidth]{ADABOOST_PGAL_vs_mag_auto_heatmap_v3.png}\label{fig:d}}
\subfigure[ADA\_PROB\_MOF]{\includegraphics[width=0.3\textwidth]{ADABOOST_PGAL_MOF_vs_mag_auto_heatmap_v3.png}\label{fig:e}}
\subfigure[GALSIFT\_PROB\_MOF]{\includegraphics[width=0.3\textwidth]{GALSIFT_PGAL_MOF_vs_mag_auto_heatmap_v3.png}\label{fig:f}}
\caption{Object classification heatmaps as a function of magnitude for different classifiers. The black line represents the cut for which a 99\% galaxy purity is obtained in the Hubble-SC sample in the i=(17,24) magnitude range. With the exception of CLASS\_STAR, all classifiers assign higher values to extended sources.} \label{fig:magclassifier_distributions}
\end{figure*}
\subsection{Number counts of classified stars}
\label{sec:starcounts}
If we limit our study to the regime in which Y1 data are fairly complete over a large area ($r\sim22.5$), we can assess the similarity of the stellar distribution in magnitude against a detailed simulation such as {\tt Galaxia} \citep{galaxia}, which has been tested against Gaia DR1 data (\citealt{gaiadr1}; Koposov, private communication). This is shown in Figure \ref{fig:nm_stars} for a few selected classifiers in the DES \textit{r} band, spanning a varied range of those mentioned in Section \ref{sec:classifiers}. Thresholds were chosen to provide a similar number of stars as {\tt MODEST\_CLASS}, the default DES Y1 Gold star-galaxy classifier based on {\tt SPREAD\_MODEL}. Up to $r\sim21$, the behaviour of most of them with respect to the simulation is similar. Two machine learning classifiers based on {\tt MOF} quantities show a significant lack of bright objects ($r<19$) due to failures of the Y1 version of the {\tt MOF} pipeline in fitting stars in this regime\footnote{Y3 Gold {\tt MOF} photometry has solved this issue.}. This has been identified as failures of the galaxy fits for which {\tt MOF} was designed when applied to moderately bright stars. A consistent overestimation of stars by {\tt Galaxia} with respect to DES stars is apparent for all classifiers, as was seen in \citet{tingli}. On the other hand, other simulations such as those described in \citet{besancon} and \citet{trilegal} show discrepancies of this size as well at this latitude and longitude.
This overestimation disappears at the faint end, as compact galaxies start to leak into the stellar sample. Beyond that, a completeness drop sets in as we reach the survey's magnitude limit. At the faint end, {\tt CLASS\_STAR} shows a drop in completeness sooner than the other classifiers. The nature of this classifier, which assigns an intermediate probability to `uncertain' sources, is such that a fixed threshold cut tends to `lose' stars at the faint end if we adjust all classifiers to the same number of stars. \COMMENT{For {\tt GALSIFT\_PROB} the effect comes from an anomalous behaviour at $i\sim21$, also seen in Figure \ref{fig:f}, so that a more or less pure cut in the selection will make this kink appear. A somewhat looser cut will have the effect of smoothing these features, though admitting a few galaxies at the faintest end.}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{star_counts_spt_v10.png}
\caption{Counts for stars as classified by different algorithms compared to a {\tt Galaxia} simulation \citep{galaxia} using DES photometry, in the patch of the Y1 DES footprint with $45<\mathrm{RA}<50$, $-50<\mathrm{Dec}<-45$.}\label{fig:nm_stars}
\end{figure}
\subsection{Stellar density as a function of Galactic latitude}
As a complementary measure of the goodness of stellar identification, we compare the number of stars as a function of Galactic latitude (Figure \ref{fig:gallat_counts_stars}). We limit the comparison to the range in which any possible issues deriving from the current {\tt MOF} processing are avoided (see Section \ref{sec:starcounts}). A slight deficit is nonetheless seen, as verified before, but all these different approaches are qualitatively consistent, without any preferred or outstanding behaviour from any of the classifiers tested here.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{gallat_counts_spt_v5.png}
\caption{Counts for stars as classified by different algorithms compared to a Galaxia simulation \citep{galaxia} for the application field (SPT region of the DES-Y1 footprint) for the magnitude range $r$ = (19,21.5).}\label{fig:gallat_counts_stars}
\end{figure}
\subsection{Galaxy vs stellar density}
\label{sec:densvsdens}
As mentioned in Section \ref{sec:purity_completeness}, we do not have a large-scale `truth' table available that we could use as reference to check the precision of our classification on an object-by-object basis. However, several studies of large-scale structure \citep[e.g.,][]{ross} have devised an estimate of the purity of the galaxy sample, for a given classification scheme, by measuring correlations of classified galaxy density against some reliable measurement of the relative stellar distribution (using a very pure cut for stars, a model, or an external catalogue). This is done via the pixelisation of the field using the \texttt{HEALPix} software \citep{healpix} and fitting a linear relation between the galaxy overdensity and the stellar density in these pixels. For this study we used a pixelisation parameter \texttt{NSIDE}=512, which corresponds to a pixel size of approximately 0.01 square degrees.
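The pixel-based estimator described above can be sketched in a few lines of Python. This is a toy example with simulated counts: the flat pixel array stands in for the \texttt{HEALPix} map, and all numbers are illustrative rather than DES values.

```python
import numpy as np

rng = np.random.default_rng(1)
npix = 20000                       # stand-in for HEALPix pixels (NSIDE=512 in the text)
n_star = rng.poisson(50, npix)     # toy stellar counts per pixel
n_gal_true = rng.poisson(200, npix)
f_contam = 0.03                    # 3% of the 'galaxy' sample are misclassified stars
# contaminating objects scale with the local stellar density
n_gal_obs = n_gal_true + f_contam * n_gal_true.mean() * (n_star / n_star.mean())

delta_gal = n_gal_obs / n_gal_obs.mean()            # normalised galaxy density
slope, intercept = np.polyfit(n_star / n_star.mean(), delta_gal, 1)
# for a purely additive, unclustered contaminant the fitted slope recovers the impurity
print(slope, intercept)
```

In this idealised case the slope approximates the impurity $I$ and the intercept approaches $1-I$; in real data the occultation effect around bright stars discussed below distorts the relation at high stellar density, which is why only the low-density part is fit.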
In Figure \ref{fig:classcomp_densdens} we show a comparison \NEW{of the galaxy density as a function of stellar density for} several classifiers, tested on the application field for the galaxy sample with the magnitude cuts shown in Table \ref{tab:contamination}. \NEW{Errors for each point are computed using the jackknife method \citep{jackknife}, whereas the ones in the table correspond to the estimated error from the fit.}
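The jackknife error estimate used for these points can be illustrated with a minimal leave-one-out sketch (toy numbers; a real spatial analysis would typically delete whole sky regions rather than single measurements):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(5.0, 1.0, 400)      # toy per-pixel measurements
n = len(x)
# leave-one-out resamples: the statistic (here, the mean) recomputed without point i
loo = np.array([np.delete(x, i).mean() for i in range(n)])
# jackknife variance: (n-1)/n times the summed squared deviations of the resamples
jk_err = np.sqrt((n - 1) / n * ((loo - loo.mean())**2).sum())
print(jk_err)                      # for the mean, this matches the standard error s/sqrt(n)
```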
\NEW{The galaxy density over samples of increasing stellar density would theoretically increase with a linear relationship, if stellar contamination was the only effect that a dense star field would introduce. However, as seen already in \citet{ross} (in their Figure 3), moderately bright stars can also induce an `occultation' effect which makes detection around them more difficult. This effect is more predominant for fainter sources. This will create an inverse, possibly non-linear, relationship between galaxy density and stellar density. The overall effect is to create a proportionality relationship at low to moderate stellar densities, which may or may not change in slope and even decrease, depending on the separation power of the classifier, as galaxies get removed from the catalog due to the presence of foreground bright stars. For our purposes here, i.e., to understand the star-galaxy separation power for different classifiers, we use
the intercept value of the linear fit to the first part of the plot, in order to} estimate the purity of the galaxy sample. We adjusted the cuts for the classifiers to provide a similar number of detected `galaxies' (i.e. a similar completeness) as {\tt MODEST\_CLASS}, in order to compare purities on an equal footing, similarly to what we did in Section \ref{sec:imaging_results}.
We note that using the application sample in bulk shows no strong contamination component for the {\tt SPREAD\_MODEL}- or \texttt{MOF}-based quantities or for the machine learning approaches using magnitude and colour information. Slightly better performance is found using \texttt{MOF} quantities and the \texttt{ADABOOST} code, especially for fainter objects.
This is explained by the more accurate shape measurement of the \texttt{MOF} code and by how additional information is captured by {\tt ADA\_PROB\_MOF}.
\begin{table*}
\centering
\caption{Contamination for different classification methods for the galaxy vs stellar density tests. Threshold cuts were selected to adjust to the same number of detected galaxies as provided by {\tt MODEST\_CLASS}.}
\label{tab:contamination}
\begin{tabular}{ccccccc}
\hline
Sample & MODEST\_CLASS & CLASS\_STAR & ADA\_PROB & ADA\_PROB\_MOF & GALSIFT\_PROB\_MOF & CM\_T\\
\hline
i$<22$ & \NEW{$2.7\pm0.4\%$} & \NEW{$2.1\pm0.5\%$} & \NEW{$2.2\pm0.4\%$} & \NEW{$2.2\pm0.4\%$} & \NEW{$2.3\pm0.4\%$} & \NEW{$2.3\pm0.4\%$}\\
i$<23$ & \NEW{$3.2\pm0.4\%$} & \NEW{$4.6\pm0.2\%$} & \NEW{$2.4\pm0.4\%$} & \NEW{$2.1\pm0.4\%$} & \NEW{$2.8\pm0.3\%$} & \NEW{$2.4\pm0.4\%$}\\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\includegraphics[width=0.45\textwidth]{gs1d1d_lt22_v5.png}
\includegraphics[width=0.45\textwidth]{gs1d1d_lt23_v5.png}
\caption{Galaxy vs star density plot for several classifiers, \NEW{for $i<22$ (\textit{top}) and $i<23$ (\textit{bottom}). Star density is traced by an external map of `secure' moderately bright stars.}}
\label{fig:classcomp_densdens}
\end{figure}
One of the components of these calculations is the choice of a star map to establish the density relationships. We have derived a $\sim1\%$ systematic uncertainty in the estimation of the impurity by comparing brighter and fainter stellar samples (Figure \ref{fig:starmap_comparison}). The 2MASS and Tycho-2 \citep{2mass,tycho2} stellar maps are included for completeness, but their magnitude range does not accurately track the range of brightness needed to account for the Milky Way distribution in DES. Gaia DR2 corresponds to the data described in \citet{gaiadr2}.
\begin{figure}
\includegraphics[width=\columnwidth]{starmap_comparison_v4.png}
\caption{Star contamination levels for different stellar maps. A $\sim1\%$ systematic uncertainty, estimated by comparing the {\tt MODEST\_CLASS} moderate and bright stellar samples, is derived from this plot. Tycho-2 and 2MASS stars are added for comparison, but their magnitude ranges (much brighter than the stellar sample considered as contaminants) do not make them good candidates for deriving this uncertainty.}
\label{fig:starmap_comparison}
\end{figure}
\subsection{Galaxy ratio near the Large Magellanic Cloud}
Using the same pixelisation as above, we also compare the different classifiers using a figure of merit based on the identified galaxy density in each of these pixels, relative to the density found at a given distance from the centre of the Large Magellanic Cloud (LMC), set at ($\alpha$,$\delta$) = (5h23m34.5s, $-69^\circ$45'11"). This value is normalised to one at 30 degrees from the centre of the LMC (Figure \ref{fig:lmc_test}). Here we use a flux-limited sample with $i<23$. In this case, we can see a clear advantage in using a classifier with multiple input attributes (including colour), possibly helped by the fact that in a crowded field such as the periphery of the LMC, morphology has smaller discriminating power. On the other hand, the LMC has a bluer population, but this does not seem to offset the ML classification significantly, though this aspect is worth studying further in future work.
Using a metric such as this at a fixed distance from the LMC could be useful as a figure of merit. In this case 10 degrees seems convenient, but we must remark that this could be due to the particular geometry available around the LMC, so other photometric surveys might find other ranges more valuable for comparison.
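A toy version of this figure of merit can be built from the angular separations of classified galaxies alone. In the sketch below the separations are simulated (a uniform background plus an artificial excess near the centre mimicking LMC contamination), and a flat-sky approximation is used for the annulus areas; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
# toy angular separations (deg) from the LMC centre for objects classified as galaxies:
# a uniform surface-density background, plus extra objects inside 8 deg mimicking
# misclassified LMC stars
background = 35 * np.sqrt(rng.uniform(0, 1, 40000))
contamination = 8 * np.sqrt(rng.uniform(0, 1, 4000))
dist = np.concatenate([background, contamination])

edges = np.arange(0, 36, 2)
counts, _ = np.histogram(dist, bins=edges)
centres = 0.5 * (edges[:-1] + edges[1:])
annulus_area = np.pi * (edges[1:]**2 - edges[:-1]**2)   # flat-sky approximation
density = counts / annulus_area
ratio = density / density[np.argmin(np.abs(centres - 30))]  # normalised near 30 deg
print(ratio)
```

A classifier leaking LMC stars into the galaxy sample shows `ratio` rising well above one towards the centre, while a clean classification stays flat.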
\begin{figure}
\includegraphics[width=\columnwidth]{lmc_test_v4.png}
\caption{Galaxy ratio (with respect to the galaxy density at 30 degrees from the LMC) as a function of angular distance from the LMC centre.}
\label{fig:lmc_test}
\end{figure}
\subsection{Stellar locus of classified stars}
Finally, we tested the consistency of the stellar locus derived in $r-i$ vs. $g-r$ colour space against a similar fit to stars in the COSMOS field. The stellar locus was fit with a fifth-order polynomial, as shown in Figure \ref{fig:slocus_cosmos}, following the approach of \citet{covey}.
The same fit curve from Figure \ref{fig:slocus_cosmos} is shown again versus several classifiers in Figure \ref{fig:slocus}. In general a good agreement is seen except for the faintest end, where classified stars seem to deviate from the expected stellar locus for {\tt CLASS\_STAR}.
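The polynomial fit itself is a one-liner with \texttt{numpy}. The sketch below uses synthetic colours standing in for the COSMOS stars; the locus shape and photometric scatter are illustrative assumptions, not the fitted DES coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)
# toy stellar locus: r-i as a smooth function of g-r, plus photometric scatter
g_r = rng.uniform(0.2, 1.6, 2000)
r_i_true = 0.5 * g_r - 0.1 * g_r**2           # illustrative locus shape
r_i = r_i_true + rng.normal(0, 0.02, 2000)    # 0.02 mag scatter (assumed)
coeffs = np.polyfit(g_r, r_i, deg=5)          # fifth-order polynomial, as in the text
resid = r_i - np.polyval(coeffs, g_r)
print(resid.std())
```

Deviations of a classified star sample from such a fit (as at the faint end for {\tt CLASS\_STAR}) then show up directly in the residual distribution.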
\begin{figure}
\includegraphics[width=\columnwidth]{stellar_locus_reference_v2.png}
\caption{Fit to the stellar locus using a fifth-order polynomial.}
\label{fig:slocus_cosmos}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{stellar_locus_test_i_lt_21_v5.png}
\includegraphics[width=\columnwidth]{stellar_locus_test_i_lt_24_v5.png}
\caption{Stellar locus for star samples from various classifiers, \NEW{for a bright sample ($i<21$, \textit{top}) and a fainter one ($i<24$, \textit{bottom})}.}
\label{fig:slocus}
\end{figure}
\section{Discussion: implications for large-scale structure and Milky Way studies}
\label{sec:discussion}
In the previous section we explored a variety of tests both with and without truth information assessing the relative performance of a wide range of star-galaxy classifiers in DES Y1 data. We now turn to the impact of making different selections on scientific analyses of interest to astronomers and cosmologists. Though it is beyond the scope of this work to define specific choices for any arbitrary study, in this section we sketch out the general implications of the results shown here for two broad ranging topics of interest, namely the large-scale structure (LSS) of galaxies and Milky Way analyses within DES. With regards to weak lensing shear catalogues, \citet{zuntz} have shown that star-galaxy contamination is at most a second-order contaminant when either {\tt MODEST\_CLASS} or {\tt MCAL\_RATIO} are used
for the DES Y1 cosmology analyses.
For a thorough discussion on LSS and weak lensing requirements for star-galaxy separation, \citet{soumagnac} provides an in-depth review.
\subsection{Large-Scale Structure}
\label{sec:lss}
The impact of stellar contamination on studies of clustering amplitude has been well studied for several years now \citep[e.g.,][]{ross,crocce}, with an impact of order $(1-I)^2$ on the angular correlation function $\omega(\theta)$ if we assume an unclustered component that contaminates the galaxy population with impurity fraction $I$. A large contamination can severely dilute the signal (reducing the significance of the BAO peak as shown by \citet{carnero}), or even create a large-scale component if unaccounted for, thus mimicking an effect such as primordial non-Gaussianities \citep{giannantonio}. However, in the range $I\sim\mathcal{O}(2\%)$, the accuracy with which we determine $I$ becomes much more relevant, as this is the systematic that will dominate the uncertainty in galaxy bias measurements and multiple-probe analyses.
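The $(1-I)^2$ scaling can be recovered in one line: an unclustered contaminant contributes to the mean density but not to its fluctuations, so the observed overdensity is $\delta_{\rm obs}=(1-I)\,\delta_{\rm gal}$ and
\begin{equation*}
\omega_{\rm obs}(\theta)=\left\langle\delta_{\rm obs}(\hat{\mathbf{n}})\,\delta_{\rm obs}(\hat{\mathbf{n}}')\right\rangle=(1-I)^{2}\,\omega_{\rm gal}(\theta),
\end{equation*}
assuming the contaminant is uncorrelated with the galaxy field.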
Figure \ref{fig:classcomp_densdens} implies that the choice of classifier does not matter too much for cosmology analyses in the broadest sense. However, moving to a more realistic sample for large-scale structure studies, for example a selection of red galaxies with better-estimated photo-z and galaxy bias \citep{y1baosample} for BAO analysis, some evident differences appear at the highest redshifts, where, due to their colours, many faint stars are misclassified into those photo-z bins. This is the main photo-z region of interest for BAO in DES. Differences between the classifiers also become more evident when the flux cut is pushed to fainter magnitudes, as shown before. See Figure \ref{fig:lss_sample_test}.
\begin{figure}
\includegraphics[width=\columnwidth]{lss_sample_test_bright_v3}
\includegraphics[width=\columnwidth]{lss_sample_test_faint_v3}
\caption{Stellar contamination level as a function of redshift \NEW{for a bright sample (\textit{top}, i$<22$) and a faint sample (\textit{bottom}, i$<23$), derived with the method described in Section \ref{sec:densvsdens} for different samples classified by photometric redshift.}}
\label{fig:lss_sample_test}
\end{figure}
These results show that a realistic LSS sample is more severely affected by stellar contamination, with impurity levels up to $5-6\%$ in some redshift bins. This is seen more clearly in Figure \ref{fig:bpz_hsc}, where photo-zs are shown for the true stars in the fields overlapping the COSMOS region, for a general selection and an LSS-like red galaxy selection. One way to drive down this impurity is to apply more stringent star-galaxy thresholds, sacrificing a percentage of true galaxies along the way. For {\tt MODEST\_CLASS} and {\tt ADA\_PROB\_MOF}, we can push the impurity down to 2\% by removing ${\sim}9\%$ and ${\sim}4\%$ of galaxies respectively. Though a ML approach seems more convenient in this case, the use of colour and magnitude information may lead to potential correlations between object classification and photo-z determination that must be investigated in more detail. As for the uncertainty in determining $I$ from the density plots, Figure \ref{fig:starmap_comparison} shows that using fainter stellar maps to derive the impurity via this method yields a different contamination rate. This can be due to the tracing of different components of the Galaxy, but for maps built upon possibly contaminated data it could well be that the star maps themselves are not ideal (e.g. the bright {\tt MODEST\_CLASS} stars could have a small component of misclassified compact galaxies). An improved understanding of the underlying Galactic stellar structure through simulations, or an adequate culling of the reference stellar maps, would reduce this limitation in the determination of the impurity level $I$.
\begin{figure}
\includegraphics[width=\columnwidth]{bpz_hsc_galaxies_stars_v3.png}
\caption{Normalised distribution of BPZ redshifts for a typical red galaxy sample that would be used for LSS studies, over a region with known identification of stars and galaxies through Hubble Space Telescope imaging.}
\label{fig:bpz_hsc}
\end{figure}
\subsection{Milky Way}
\label{sec:mw}
In the case of Milky Way studies, in broad terms we are interested in obtaining a more complete and pure stellar sample, down to faint magnitudes. Studies such as \citet{fadely} show that this can currently become a major systematic effect in deriving the Galactic structure. Additionally, misclassified galaxies become a limiting factor for discovering faint resolved stellar overdensities \citep[e.g.,][]{willman,bechtol,y2satellites,pieres}. This problem is evidenced by returning to the COSMOS ACS catalogue used in Section \ref{sec:calib}, which can be used to understand the ratio of stars to galaxies down to a very faint limit (shown in Figure \ref{fig:sgratio}).
\begin{figure}
\includegraphics[width=\columnwidth]{sgratio_v3.png}
\caption{Star-galaxy ratio in differential {\tt MAG\_AUTO} bins, taken from the COSMOS ACS catalogue. Point-sources are overwhelmed by extended sources in the faint end.}
\label{fig:sgratio}
\end{figure}
In this sense, the results in \citet{pieres} or \citet{shipp}, for example, show that very good results can be obtained with a multi-epoch classifier such as the weighted-average {\tt SPREAD\_MODEL} quantity or the {\tt MOF} pipeline.
The use of machine learning codes in this case is limited by the fact that if we want to study the distribution of specific types of stars, or search for Milky Way neighbours with a particular range of colours and magnitudes, we have to be very careful with introducing biases or complex selection functions in our application sample, much like what happens with photometric redshifts for the LSS case.
What the results of the current study show (e.g. Figure \ref{fig:pr_snfields_stars}) is that the {\tt MOF} technique is potentially the best candidate for selecting stellar candidates, thanks to its very tight morphological stellar locus and its capacity to reach deeper in the separation of extended and point-like sources, increasing by $\sim20\%$ the number of stars in the sample for a given purity and magnitude cut versus a `classical' {\tt SPREAD\_MODEL} cut (in this plot, at 0.8 purity we go from 0.70 to 0.84 completeness). However, additional fine-tuning of the algorithm is needed to reach a good completeness at the bright end, where the model fit is not especially attuned to stellar shapes. This is an open line of development for the algorithm in DES.
\section{Conclusions}
\label{sec:conclusions}
In this paper, we have compiled a wide variety of tests over a diverse array of star-galaxy classifiers for the DES Y1 dataset. These tests can be ported to, or used as examples for, any other photometric dataset. The classifiers range from well-tested algorithms in the literature to new developments using morphological and/or flux information, using priors for stars/galaxies or training sets for machine learning codes based on space imaging from the Hubble Space Telescope. We have studied their relative performance both using accurate truth information from spectroscopic and space imaging external datasets, and devised tests over the broad DES Y1 footprint that do not require this information. In the light of these results, we have analyzed the impact of using these algorithms on two broad science cases of interest to users of the DES data, namely large-scale structure analyses and Milky Way studies. Star-galaxy classification remains a non-dominant but important source of systematic error for cosmology, and a critical one for Milky Way structure measurements and discoveries. These are the specific items highlighted in this work:
\begin{itemize}
\item Machine learning methods perform very well in tests on the calibration fields (Figures \ref{fig:roc_snfields} to \ref{fig:pr_snfields_stars} and Table \ref{tab:aucs}). In the application field the results are slightly better than for non-ML classification, especially at the faint end (Figure \ref{fig:lss_sample_test}). Optical-colour-based classifiers could, however, potentially introduce biases in sample selection.
\item Although {\tt CLASS\_STAR} has been used in the past to good effect, its lack of performance in the faint end (see e.g. Figures \ref{fig:roc_cosmos} and \ref{fig:nm_stars}) leads us to recommend alternative classification methods such as {\tt SExtractor}'s {\tt SPREAD\_MODEL} or a multi-epoch fit to the shape. In this sense, using multi-epoch, multi-object fitting instead of directly using coadded information is the preferred option for object classification in optical wavelengths (as shown in Section \ref{sec:calib}).
\item As has been demonstrated in the past, the addition of infrared data is very valuable, albeit limited currently by the depth and extension of such surveys (Section \ref{sec:spectralcalib}).
\item Photometric redshift binning will affect stellar contamination of specific galaxy samples (Figure \ref{fig:bpz_hsc}).
\end{itemize}
\subsection{Expected improvements for Y3 and beyond}
Considering these results, we have identified very clear future directions to expand and improve star-galaxy classification in forthcoming DES science analyses (Y3 and beyond).
\begin{itemize}
\item Improvement of the \texttt{MOF} quantities to better fit stellar shapes and prevention of fitting failures.
\item Understanding the impact of using colour information on specific science cases (photo-z, stellar-type selections), to ascertain whether the usage of this information in ML codes hampers their utility for star-galaxy separation in extragalactic and Milky Way studies respectively, in exchange for an additional 2-5\% in purity depending on the case.
\item The combination of information from different approaches, as done in \citet{kim}, especially by adding external infrared colours, could greatly benefit the performance of some classifiers. Once an adequate template set is studied for the DES data, trying to overcome the lack of $u$-band information, template-based codes could be considered as well to complement this impact study. In addition, this would provide a truly probabilistic output that could be employed in statistical studies of large-scale structure, removing the need to eliminate a subsample of galaxies according to an arbitrary threshold.
\item Besides VHS data, the addition of Gaia's DR2 information \citep{gaiadr2} will provide a robust and broad complement to these tests at magnitudes $r<21$.
\end{itemize}
\subsection{Ideas for further study}
Finally, we call attention to other approaches and tests that we have not specifically investigated here which could be relevant for future studies:
\begin{itemize}
\item Adding available $u$-band and \NEW{especially infrared-band information using matched-aperture photometry as part of the algorithms used here}.
\item Regarding a template-fitting approach, the characteristics of this dataset (lack of $u$-band or infrared information) severely limit its usability. However, expanding the dataset, together with an accurate understanding of the template range to be used, makes this a promising component of a joint probabilistic method.
\item Including very detailed image-based simulations for training, such as {\tt Balrog} \citep{balrog} or {\tt UFIG} \citep{ufig}, to understand the failure modes of different classifiers.
\item Adding seeing as part of the features of the machine learning classifiers, as well as for characterization of the performance of the different approaches.
\item Usage of the object position in the sky can also provide an additional lever for a probabilistic approach, as a prior to be added to the overall posterior estimation. This should be approached with care for certain analysis (e.g. Milky Way structure).
\item PSF homogenisation will improve the \texttt{SExtractor} estimates, as shown in \citet{desai}. However, using {\tt MOF}-based photometry is a more promising alternative that avoids some of the problems associated with homogenisation.
\item Convolutional Neural Networks \citep[e.g.,][]{kimdcnn} can be applied directly to the images to provide a new and complementary approach to ML at the catalogue level. Image-level analyses may benefit from using information from multiple ($>10$) bands \citep[e.g.,][]{cabayol}.
\end{itemize}
The data used in this paper are provided at \NEW{\url{http://des.ncsa.illinois.edu/releases/y1a1}}.
\section*{Acknowledgements}
ISN would like to thank \v{Z}.Ivezi\'c and A.Robin for useful discussions and insights; R.Gonz\'alez-G\'erboles for help in carrying out the HB tests; S.Koposov for providing useful insights into the expected stellar distributions; F.Ostrovsky for expert opinion on star-QSO classification and possible impact and A.Kov\`acs for suggestions on using infrared datasets.
E.Balbinot acknowledges financial support from the European Research Council (StG-335936).
Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain,
the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing
Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University, Financiadora de Estudos e Projetos, Funda{\c c}{\~a}o Carlos Chagas Filho de Amparo {\`a} Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cient{\'i}fico e Tecnol{\'o}gico and
the Minist{\'e}rio da Ci{\^e}ncia, Tecnologia e Inova{\c c}{\~a}o, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energ{\'e}ticas,
Medioambientales y Tecnol{\'o}gicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh,
the Eidgen{\"o}ssische Technische Hochschule (ETH) Z{\"u}rich,
Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ci{\`e}ncies de l'Espai (IEEC/CSIC), the Institut de F{\'i}sica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universit{\"a}t M{\"u}nchen and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, The Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, Texas A\&M University, and the OzDES Membership Consortium.
Based in part on observations at Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, which is operated by the Association of
Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
The DES data management system is supported by the National Science Foundation under Grant Numbers AST-1138766 and AST-1536171. The DES participants from Spanish institutions are partially supported by MINECO under grants AYA2015-71825, ESP2015-66861, FPA2015-68048, SEV-2016-0588, SEV-2016-0597, and MDM-2015-0509, some of which include ERDF funds from the European Union. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. Research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program (FP7/2007-2013) including ERC grant agreements 240672, 291329, and 306478.
We acknowledge support from the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020, and the Brazilian Instituto Nacional de Ci\^encia e Tecnologia (INCT) e-Universe (CNPq grant 465376/2014-2).
This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide licence to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
This research uses data from SDSS-III. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is \url{http://www.sdss3.org/}. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
This research uses data from the VIMOS VLT Deep Survey, obtained from the VVDS database operated by Cesam, Laboratoire d'Astrophysique de Marseille, France.
This research uses data based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESAC/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA).
This research uses data based on zCOSMOS observations carried out using the Very Large Telescope at the ESO Paranal Observatory under Programme ID: LP175.A-0839.
The VISTA Data Flow System pipeline processing and science archive are described in \citet{irwin}, \citet{hambly} and \citet{cross}.
This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC,\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC
has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
CosmoHub has been developed by the Port d'Informaci\'o Cient\'ifica (PIC), maintained through a collaboration of the Institut de F\'isica d'Altes Energies (IFAE) and the Centro de Investigaciones Energ\'eticas, Medioambientales y Tecnol\'ogicas (CIEMAT). The work was partially funded by the "Plan Estatal de Investigaci\'on Cient\'ifica y T\'ecnica y de Innovaci\'on" program of the Spanish government.
\bibliographystyle{mnras}
\section{Introduction}
\label{IES}
\subsection{IMAGINE aims}
In many areas of astrophysics, magnetic fields are the least understood component and yet one of the most important. From the cosmological question of structure formation to the shapes of dusty filaments in the interstellar medium (ISM), magnetic fields on all scales play a critical role that has often been neglected. In the Milky Way, the Galactic magnetic fields (GMFs) affect all phases of the ISM from the propagation of relativistic cosmic rays to the collapse of cold dust clouds. Beyond our galaxy, the detection of primordial magnetic fields, the role of magnetic feedback in galaxy formation, and the structure of extragalactic large-scale magnetic fields are outstanding questions in cosmology and large-scale structure formation. But magnetic fields are not only important in their dynamical role; for many research areas they also turn out to be a nuisance. The polarised Galactic foregrounds are the main challenge for the detection of primordial gravitational waves with the cosmic microwave background (CMB), and for ultra-high energy cosmic rays (UHECRs), Galactic and extragalactic magnetic fields blur our view of their sources.
In nearly every context, the properties of the magnetic fields and particles as well as the properties and dynamics of the emergent structures are entangled and interdependent. Studies of any one piece of the puzzle alone are bound to be limited in scope and likely to be biased in their results. Until recently, that was the best we could do to make the problem tractable. But we now have the observational, computational, and mathematical tools to start to bring all these threads together. The different contexts and different effects provide complementary information that we can now combine.
The IMAGINE Consortium\ was conceived to join the experts in these disparate topics into a single team for a comprehensive study of the Galactic magnetic field. We will use all available data, from traditional observables such as synchrotron emission and Faraday rotation measures, to newer tracers such as polarised thermal dust emission and UHECR deflections. To model the components of the magnetised ISM, we will use not only heuristic models of the Galaxy motivated by observations, e.g., with spiral arm segments and field reversals, but also theoretically motivated parametric models and finally non-parametric models constrained by fundamental magnetohydrodynamics (MHD). We will further include constraints from dynamo theory and knowledge gained from observations of nearby galaxies.
\begin{figure*}[tb]
\includegraphics[width=\textwidth]{IMAGINE_mindmap.png}
\caption{\label{fig:imagine_mindmap} Structure of the \textit{IMAGINE}\ project. See text for further explanation.}
\end{figure*}
The \textit{IMAGINE}\ project is based on Bayesian methods, which are particularly important for solving interdependent problems, e.g., the origin \textit{and} deflection of UHECRs. They also provide a robust quantification of what we have learned with each additional piece of the analysis puzzle. Our goal is to provide a standard framework for updating our knowledge of the structure of the GMF with new data from next generation observational facilities, new analysis techniques, and advances in theoretical understanding. The core of the project is the recently released \textsw{IMAGINE}\ pipeline \citep{steininger2018}, a publicly available and modular software infrastructure. An overview of the \textit{IMAGINE}\ project is depicted in \ref{fig:imagine_mindmap}, and its science goals are briefly outlined in the following paragraphs. This paper serves as a white paper for the \textit{IMAGINE}\ project, summarising the various research fields involved and the impact \textit{IMAGINE}\ will have.
\subsection{IMAGINE science}
Members of the IMAGINE Consortium\ are actively researching in different areas of Galactic science that are important for understanding the GMF. These include but are not limited to the following:
\begin{itemize}
\item The multi-phase ISM (\ref{ss:MPISM}): magnetic fields are a crucial component of the ISM ecosystem and have a complex interdependence on the thermal gas and dust, the relativistic particles, the star formation, supernovae, etc.
\item Galactic cosmic rays (\ref{ss:gCR}): cosmic rays diffuse through the Galaxy, couple to the magnetic fields, and affect the dynamics of the ISM and of the entire Galaxy.
\item The Galactic magnetic field (\ref{ss:OBSGMF}): a variety of complementary observational tracers tell us about the morphology of the large-scale coherent magnetic field, the statistical and morphological properties of its small-scale turbulent structures, and the interplay between them.
\item Dynamo and MHD theory (\ref{ss:DTGMF}): MHD theory constrains the morphology of the GMF on both large and small scales and is crucial for understanding the turbulence in the ISM.
\end{itemize}
All these research areas are described in more detail throughout the paper in the sections referenced above. Beyond this, there are also various extragalactic and cosmological topics that contribute to, or profit from, our understanding of the GMF, such as:
\begin{itemize}
\item Large-scale structure formation (\ref{ss:SF}): the distribution of matter in our local Universe and its formation history affect most extragalactic tracers important for \textit{IMAGINE}, such as UHECR sources or extragalactic magnetic fields.
\item Galaxy formation and evolution (\ref{sec:gal_form}): feedback processes are important in galaxy formation and evolution, and the role of magnetic fields and cosmic rays is central but not yet well understood.
\item Ultra-high energy cosmic rays (\ref{ss:UHECR}): an improved understanding of the GMF will help us trace UHECRs back to their sources, and likewise, UHECR deflections help us trace the GMF.
\end{itemize}
Finally, there are various Galactic and extragalactic backgrounds that are of interest for fundamental physics, which will greatly profit from an improved understanding of large-scale structure, the GMF, MHD turbulence, etc. These include cosmological topics from primordial magnetic fields and gravitational waves to the epoch of reionisation (\ref{ss:exbkg}), as well as the potential signal of dark matter annihilation from our own galaxy (\ref{ss:DM}).
\subsection{IMAGINE methods}
To join all these different research fields in the common goal of modelling the GMF,
we have developed the \textit{Interstellar MAGnetic field INference Engine} (also referred to as the \textsw{IMAGINE}\ pipeline), a framework with the power and flexibility to include information from all of these different fields. The \textsw{IMAGINE}\ pipeline is a Bayesian platform that takes advantage of robust statistical methods to explore the multi-dimensional likelihood space using any number of modular inputs (\ref{s:AM}). These inputs include:
\begin{itemize}
\item {\it data}, \ie\ all possible tracers of magnetic fields, from Faraday rotation measures (RMs) through submillimeter dust polarisation to UHECR deflections (\ref{ss:OI});
\item {\it Galaxy simulations}, \ie\ formulations of models for the magnetic fields (parametric, \ref{ss:Para}, or non-parametric, \ref{ss:non-para-models}) and other Galaxy components, and the simulated observables generated from them;
\item independent {\it likelihood} evaluations (\ref{sec:Bayes:likelihood}) to compare the mock data to the observed data taking full account of any uncertainties;
\item the mathematical expression of theoretical constraints or knowledge from observations of external galaxies, etc., in the form of Bayesian {\it priors} (\ref{sec:Bayes:prior});
\item efficient {\it samplers} for exploring the multi-dimensional likelihood space (\ref{sec:Bayes:num_bayes});
\item the computation of the Bayesian {\it evidence} to quantify what we have learned from different models (\ref{sec:Bayes:evidence}).
\end{itemize}
The infrastructure is summarised in \ref{s:SD} and described in detail in \citet{steininger2018} along with its first results. In that paper, we demonstrate the pipeline on both simulated and real data using a simple GMF model and a subset of the available observables. That first application demonstrates not only the technical success of the pipeline but also the complexity of the task. Our preliminary results underline the need for a comprehensive Bayesian approach with physically motivated priors, \ie\ precisely the challenge that we describe here.
\subsection{Structure of the paper}
This white paper describes the \textit{IMAGINE}\ project in detail. In \ref{s:BGMW}, we give an overview of the various topics in Galactic astrophysics that will contribute to and/or benefit from \textit{IMAGINE}, while in \ref{s:BGEG} we discuss extragalactic astrophysics and the cosmological connections. In \ref{s:AM}, we discuss the information theoretical framework of \textit{IMAGINE}\ and in particular how we harness the power of Bayesian statistics to tackle this challenge. In \ref{s:SD}, we briefly describe the software package we have developed and how it can be used to add inputs (whether these are data, models, or priors) from all of these different fields of astrophysics as well as others we may not yet have foreseen.
\begin{table}[t]
\caption{\label{acrotab} Acronyms introduced and used in the text, software packages and experiments excluded.}
\centering
\begin{tabular}{r@{\qquad}l@{\qquad\qquad}r@{\qquad}l}
\hline
CMB & cosmic microwave background & IFT & information field theory\\
DM & dark matter & ISM & interstellar medium\\
EoR & epoch of reionisation & MCMC & Markov chain Monte Carlo\\
GCR & Galactic cosmic ray & MHD & magnetohydrodynamics\\
GMF & Galactic magnetic field & RM & rotation measure\\
EGMF & extragalactic magnetic field & UHECR & ultra-high energy cosmic ray \\ \hline
\end{tabular}
\end{table}
\section{Galactic science}
\label{s:BGMW}
The motivation for the \textit{IMAGINE}\ project comes from several directions simultaneously, and the most immediate is the need to understand our own galaxy. Here, we review the topics in Galactic astrophysics that will be advanced by \textit{IMAGINE}.
\subsection{The multi-phase interstellar medium}
\label{ss:MPISM}
The ISM is conventionally separated into thermal and non-thermal parts, the
latter represented by magnetic fields and cosmic rays
and too often neglected. With the development of
modern observational and numerical techniques, this division is becoming
increasingly inappropriate. The ISM of a spiral galaxy is a complex system whose constituent
parts may be explored in isolation only as a preliminary investigation, often with little
confidence in qualitative or quantitative fidelity.
Magnetic fields and cosmic rays contribute significantly to the structure and
evolution of the ISM and the host galaxy. They affect the accretion of gas by dark
matter (DM) haloes \citep{RdSO10} as well as the outflows and inflows in
galaxies that have already formed \citep{B2009}. Magnetic fields and cosmic rays
can significantly affect galactic outflows (fountains and winds)
through their effect on the multi-phase structure of
the interstellar gas, as they confine hot gas bubbles produced by
supernovae \citep{FMLZ91,HT06}. According to numerical
simulations, a magnetised multi-phase ISM can be more homogeneous than
a purely thermal one \citep{EGSFB17}. Non-thermal effects modify
the self-regulation of star formation in spiral galaxies and its effects on the intergalactic
gas (\citealt{SSS2010,WPP17}, and references therein).
The magnetic contribution to the overall structure of a galactic
gaseous disc is at least as important as that from other sources of interstellar
pressure (\ie\ thermal, turbulent, and cosmic ray pressure terms), as all of
them are of comparable magnitude \citep{P1979,C2005}. Half of the total
interstellar pressure is thus due to non-thermal contributions, and
magnetic fields therefore directly affect the mean gas density. In turn,
this significantly affects the star formation rate. It is therefore
surprising that the role of magnetic fields and cosmic rays in galaxy
evolution has received so little attention for so long \citep{BBT15,RSFB15}. Magnetic
fields also regulate star formation locally
by controlling the collapse and fragmentation of molecular clouds
\citep{ML2009,C2012}. Magnetic fields
contribute to interstellar gas dynamics not only directly but also by
confining cosmic rays \citep{G90,S2002,Shalchi09}. The latter are effectively
weightless and so are capable of driving galactic outflows
\citep{BK00,EEZBMRG08,UPSNES12,BAKG13,Girichidis2016,pakmoretal2016,Simpson2016},
thus providing negative feedback on star formation in galactic discs
\citep{VCB-H05,PPJ12}.
The crucial role that magnetism plays in the ecosystem of a galaxy
has been known for years, but only recently have advances in detection methods,
technology, and computer power made major leaps forward and allowed comprehensive studies of
galactic magnetic fields.
As a result, it has become clear that
simple models of large-scale galactic magnetic fields following galactic
spiral arms are utterly inadequate. Recent data from large multi-wavelength
radio-polarimetric surveys have allowed refinement of these models including,
e.g., anisotropic turbulence and/or vertical field components. However, these
models are still data-starved, and inclusion of magnetic field information
through other sources than radio polarimetry is therefore needed.
\subsection{Galactic cosmic rays}
\label{ss:gCR}
Cosmic rays are ionised atomic nuclei, electrons, and positrons, and their elemental abundances roughly resemble the average abundances in the Solar System. They have an energy density of about $1\,\eV/\cm^3$, comparable to that of the Galactic magnetic field. They are
believed to be accelerated mostly by first-order Fermi processes at strong shocks in sources such as supernova remnants, and they then propagate through the Galaxy.
The propagation of Galactic cosmic rays (GCRs) is a combination of advection with the plasma as well as streaming and diffusion. In the ideal MHD approximation, magnetic fields are flux-frozen into the plasma and thus advected with the flow. Cosmic rays are bound to gyrate along individual field lines and are advected alongside the moving plasma. As they propagate, they resonantly excite Alfv\'en waves, which scatter the cosmic rays. As a result, the GCR distribution (partially) isotropises in the reference frame of Alfv\'en waves, \ie\ low-energy GCRs stream down their gradient \citep{Zweibel:2013}. MHD turbulence maintained at larger scales by other sources can also scatter cosmic rays, redistributing their pitch angles, but leaving their energy unchanged. This can be described as anisotropic diffusion.
From our local perspective in the Milky Way, the GCR propagation is dominated by streaming and diffusion, since the Sun is almost perfectly co-moving with the orbiting ISM around the Galactic centre. Note that this is not the case if cosmic rays travel large distances from their sources to us due to differential rotation of the ISM, which makes advection more important. As mentioned above, GCR protons with energies ${\lesssim} 100\,\GeV$ are dynamically coupled to the ISM via self-generation of and scattering at Alfv\'en waves. Hence, they cannot be treated as ``test particles'' in a static background, as their back-reaction is important for the dynamics of the ISM and the turbulent magnetic field at scales comparable to the Larmor (gyration)
radius. At significantly larger scales, the MHD turbulence is
likely to be affected more weakly and the test particle approximation remains a useful tool in studies of cosmic ray propagation. Inferences of the cosmic ray pressure distribution via frameworks such as \textsw{IMAGINE}\ thus hold the promise to quantify the dynamical impact of cosmic rays on the ISM and, by extension, on important physical processes that are relevant for galaxy formation (see \ref{sec:gal_form}).
\afterpage{
\addtocounter{footnote}{-1}
\begin{figure}[t]
\centering
\includegraphics[width=0.98\columnwidth]{GCR_anisotropy.jpg}
\caption[Combined cosmic ray anisotropy of the Tibet-AS and IceCube experiments in the equatorial coordinate system.]{Combined cosmic ray anisotropy of the Tibet-AS and IceCube experiments in the equatorial coordinate system. See \cite{ahlers:2017} for detailed information.\footnotemark}
\label{fig:CR_anis}
\end{figure}
\footnotetext{Reprinted from Progress in Particle and Nuclear Physics, Vol. 94, M.~Ahlers and P.~Mertsch, \textit{Origin of small-scale anisotropies in Galactic cosmic rays}, figure~2, pg.~187, \textcopyright\ (2017), with permission from Elsevier}
}
At GCR proton energies ${\gtrsim} 100\,\GeV$, the decreasing cosmic ray energy density implies a significantly smaller growth rate of the resonantly excited Alfv\'en waves. Turbulent and non-linear Landau damping leads to a low amplitude of magnetic fluctuations that are not sufficient to self-confine cosmic rays to the Alfv\'en frame, and their propagation becomes mostly diffusive, until the diffusion picture breaks down for cosmic rays with energies ${\gtrsim} 10^{14}\,\eV$. These changes in the mode of propagation manifest themselves in the form of breaks and deviations from the power-law spectrum in rigidity of GCR nuclei. Most importantly in the context of \textit{IMAGINE}, the diffusion tensor is connected to the local orientation of the GMF. In an alternative approach, one numerically calculates the trajectories of individual GCR particles, solving the Lorentz equation in the turbulent and regular GMF. It has been demonstrated \citep{escape} that this method allows one to derive global constraints on the properties of the GMF. In particular, the deduced density $n(\vec{x},E)$ of GCR electrons is an important input in the determination of the GMF via synchrotron radiation, while the GMF in turn determines the propagation of cosmic rays. These simple examples illustrate that the properties of the GMF and of GCRs are entangled: deducing these properties therefore requires a joint analysis with a comprehensive approach like \textsw{IMAGINE}.
During propagation, GCRs inelastically interact with nuclei of the ISM, producing (radioactive) nuclei, (anti-)protons, electrons, positrons, and neutrinos as secondary particles in hadronic interactions as well as gamma-ray emission from decaying pions. The secondary electrons/positrons produce secondary radio synchrotron and inverse Compton gamma-ray emission. We can learn about the GCR sources and the diffusion process of cosmic rays in the Milky Way by comparing the modelled primary and the calculated secondary fluxes to observations \citep{SMP}. Observational data from, e.g., Fermi-LAT, CTA and IceCube, will extend this information to yet higher cosmic ray energies.
These non-thermal radiative GCR and GMF tracers throughout the Galaxy are complemented by cosmic ray measurements at the Earth: the ratio of primary (accelerated at the sources) to secondary (produced through spallation during the propagation processes) cosmic ray nuclei, like the boron-to-carbon ratio, is a measure for the matter traversed by cosmic rays (see e.g., \citealt{obermeier:2012}).
The resulting column density amounts to about $10\,\g/\cm^2$ at $\GeV$ energies, decreasing as a function of energy ${\propto}\, E^{-1/3}$.
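The quoted numbers correspond to a simple power-law parametrization of the traversed grammage, sketched below; the normalization and index are the values stated in the text, used here purely for illustration.

```python
def grammage(energy_gev, x0=10.0, index=-1.0 / 3.0):
    """Column density (g/cm^2) traversed by cosmic rays as a function
    of energy, using the simple power law quoted in the text: about
    10 g/cm^2 at GeV energies, falling as E^(-1/3). The normalization
    x0 and the index are the stated values, used illustratively."""
    return x0 * energy_gev ** index

x_1gev = grammage(1.0)     # 10 g/cm^2 at 1 GeV
x_1tev = grammage(1000.0)  # one order of magnitude lower at 1 TeV
```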
At energies around $E=3\cdot 10^{15}\,\eV$, the all-particle energy spectrum of cosmic rays exhibits a change in the spectral index. Measurements indicate that the individual elements in the cosmic ray chemical composition exhibit a fall-off roughly proportional to the rigidity
\citep{antoni:2005,hoerandel:2008}.
The rigidity-dependent fall-off of individual elements is most likely due to a combination of the maximum energy attained in the accelerators and leakage from the Galaxy during the propagation processes
\citep{hoerandel:2004}.
Using gamma-ray astronomy, the fall-off of the energy spectrum can be observed at the sources directly (e.g., \citealt{aharonian:2006}). This provides an opportunity to infer the spectral behaviour of the (hadronic) cosmic rays at the sources. The differences in the fall-offs observed for cosmic rays at the Earth are due to the propagation processes (leakage from the Galaxy), which are in turn connected to the GMF. Recently, several experiments have detected small-scale anisotropies in the arrival directions of $\TeV$ cosmic rays at the level of $10^{-3}$ (\cite{2010ApJ...711..119A, 2016ApJ...826..220A},
see \ref{fig:CR_anis}). Explaining these anisotropies requires a detailed understanding of the GMF structure at a very wide range of scales.
\subsection{The Galactic magnetic field}
\label{ss:OBSGMF}
In this section, we will briefly review the main observational tracers of the GMF and the knowledge gained from them. For more extensive reviews, see e.g., \citet{haverkorn2015,kleinfletcher2015,Beck16}.
\subsubsection{Observational tracers}
\label{sss:oot}
There is a rich set of observational tracers that depend on magnetic fields. However, each of them probes only certain features of the
magnetic field, e.g., only the strength or the direction/orientation,
or only the component parallel or perpendicular to the line-of-sight. In addition, these tracers are sensitive to magnetic fields in
a specific medium, such as cold clouds, diffuse ionized gas, or the
non-thermal synchrotron-emitting plasma. This means that there is no
ideal tracer of GMFs and that {\it all} relevant
tracers should be included to obtain a complete picture. Below, we
briefly discuss the main known observational tracers of the large-scale and
small-scale GMF components.\footnote{We discuss only those tracers for which we have enough data to
probe the diffuse interstellar medium. Tracers
of dense, cold gas and specific (star forming or circumstellar)
environments, such as Zeeman splitting or masers, will not be
discussed here.} In \ref{ss:OI}, we will discuss which specific data sets of these observational tracers will be used as input for \textsw{IMAGINE}.
\afterpage{
\addtocounter{footnote}{-1}
\begin{figure}[t]
\centering
\includegraphics[width=0.85\columnwidth]{FaradaySky.png}
\caption[All-sky map of Faraday rotation revealing the line-of-sight component of the Galactic magnetic field.]{All-sky map of Faraday rotation by \citet{2012A&A...542A..93O} revealing the line-of-sight component of the Galactic magnetic field.\footnotemark}
\label{fig:Oppermann}
\end{figure}
\footnotetext{Excerpt from figure~3 in N.~Oppermann et al., A\&A, Vol.~542, A93 (2012), reproduced with permission \textcopyright\ ESO}
}
\paragraph{Faraday rotation}
\texttt{ }\\
Faraday rotation is the birefringence of left-circularly and right-circularly polarised radiation travelling through a magnetised
plasma. As a result, the polarisation angle of linearly
polarised radiation $\chi$ rotates as a function of wavelength $\lambda$ as
\begin{align*}
\chi &= \chi_0 + \RM\, \lambda^2,
\end{align*}
where $\chi_0$ is the intrinsic polarisation angle with which the
radiation is emitted, and $\RM$ is the rotation measure defined as
\begin{align}
\label{e:rm}
\left(
\frac{\RM}{\rad\,\m^2}
\right)
&= 0.812
\int_\text{source}^\text{observer}\left(
\frac{n_e(s)}{\cm^{-3}}\right)
\left(\frac{\vec{B}(s)\cdot\dd\vec{s}}{\muG\cdot\pc}\right) ,
\end{align}
where $n_e(s)$ and $\vec{B}(s)$ are the thermal electron density and magnetic vector field in the intervening medium, respectively, at distance $s$ away from the observer.
Extragalactic radio sources therefore allow the construction of images of the total projected GMF (see \ref{fig:Oppermann} and \citealt{2012A&A...542A..93O}), whereas RMs from pulsars at different distances provide tomographic information.
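To make \ref{e:rm} concrete, the sketch below numerically evaluates the rotation measure integral for toy profiles of $n_e(s)$ and $B_\parallel(s)$; the exponential density scale and the uniform $2\,\muG$ field are illustrative assumptions, not measured values.

```python
import math

def rotation_measure(n_e, b_par, length_pc, steps=10000):
    """Trapezoidal estimate of RM = 0.812 * integral of n_e(s) B_par(s) ds,
    with n_e in cm^-3, B_par in microgauss, and s in parsec; the result
    is in rad m^-2, following the convention of the text."""
    ds = length_pc / steps
    total = 0.0
    for i in range(steps + 1):
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * n_e(i * ds) * b_par(i * ds)
    return 0.812 * total * ds

# Illustrative toy profiles (assumed, not fitted): an exponential
# electron density with a 1 kpc scale and a uniform 2 microgauss
# line-of-sight field, integrated over a 1 kpc path.
n_e = lambda s: 0.03 * math.exp(-s / 1000.0)   # cm^-3
b_par = lambda s: 2.0                          # microgauss
rm = rotation_measure(n_e, b_par, 1000.0)      # about 31 rad m^-2
```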
However, if the synchrotron emitting and Faraday rotating media are
mixed, $\RM$ is no longer given by \ref{e:rm}. Instead we derive a
Faraday spectrum, which is defined as the polarised intensity as a function of
{\it Faraday depth}, $\phi$. Faraday depth is obtained by \ref{e:rm}, with the distinction that now the integration
boundaries vary as a function of distance along the line-of-sight \citep{burn1966,brentjensdebruyn2005}.
Estimates of magnetic field strength from Faraday rotation measure rely on the
assumption that $B_\parallel$ and $n_e$ are uncorrelated, so
$B_\parallel\propto\RM/(\langle n_e\rangle L)$, where $L$ is the line-of-sight extent of the magneto-ionic medium and $\langle n_e\rangle$ is the average electron number density.
However, this assumption is questionable wherever the magnetic field is strong enough to
affect the gas distribution. Assuming that the ISM is in pressure equilibrium, $n_e$ and $B$ are shown to be anti-correlated \citep{BSSW03}, which can significantly affect estimates of $B_\parallel$ from $\RM$. The relation between GMF strength and gas density in
the multi-phase ISM needs to be understood in detail in order to better
interpret the $\RM$ observations.
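As a minimal sketch of this estimate, assuming $B_\parallel$ and $n_e$ are uncorrelated, one can invert the RM relation directly; all numbers below are illustrative.

```python
def b_parallel_estimate(rm, mean_n_e, length_pc):
    """Mean line-of-sight field strength in microgauss inferred from a
    rotation measure (rad m^-2), a mean electron density (cm^-3), and
    a path length (pc), assuming B_par and n_e are uncorrelated so
    that RM = 0.812 * mean_n_e * B_par * length_pc."""
    return rm / (0.812 * mean_n_e * length_pc)

# Illustrative numbers (assumed, not measurements): RM = 30 rad/m^2
# through 1 kpc of gas with <n_e> = 0.03 cm^-3.
b = b_parallel_estimate(30.0, 0.03, 1000.0)   # about 1.2 microgauss
```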
\begin{figure*}
$\begin{array}{cc}
\includegraphics[width=0.49\columnwidth,clip=true]{b2_12_15.pdf} &
\includegraphics[width=0.49\columnwidth,clip=true]{int_ncr.pdf} \\
\end{array}$
\caption{Test particle simulations of the propagation of cosmic rays in a random magnetic field
(figures 2 and 8 in \citealt{SSWBS18}); the particles' Larmor radius is $6\%$ of the correlation length of the magnetic field. \textit{Left-hand panel:} isosurfaces of the strength of a random magnetic
field $\vec{B}$ produced by the fluctuation dynamo, $B^2/B_0^2=12$ (blue) and $15$
(yellow), with $B_0$ the root-mean square field strength. The magnetic field is intermittent, its correlation length is about $40$ times smaller than the correlation length of the chaotic flow that generates it.
\textit{Right-hand panel:} isosurfaces of the cosmic ray number density in the magnetic field of
the left-hand panel at $n_\text{CR}/n_0=3.5$, where $n_0$ is the mean value of $n_\text{CR}$. The cosmic ray and magnetic field strength distributions are statistically
independent. The number density of cosmic ray particles is larger in random magnetic traps between
magnetic mirrors. Reproduced from \citet{SSWBS18}. \textcopyright\
2017 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.}
\label{CR-B2}
\end{figure*}
\paragraph{Synchrotron intensity}
\texttt{ }\\
Synchrotron radiation emitted by cosmic ray electrons interacting with
the GMF has an intensity $I(\nu)$ defined as
\begin{align*}
I(\nu) &\propto \int n_{\mathrm{CR}}(\nu, s)\, B_{\perp}^{1+\alpha}(s)\,\nu^{-\alpha}\, \dd s ,
\end{align*}
where $n_{\mathrm{CR}}(\nu,s)$ is the cosmic ray number density as a function of
frequency $\nu$ and path length $s$. Here, the cosmic ray number density per
energy interval $\dd E$ is assumed to follow a power law
\begin{align*}
n_{\mathrm{CR}}(E) &\propto E^{-\gamma}\dd E
\end{align*}
with $\gamma = 2\alpha + 1$.
A typical value of the cosmic ray spectral index in the Galactic ISM
is $\gamma \approx 2.4$, which leads to the widely observed $\alpha =
0.7$. However, at low frequencies (typically below a few $100\,\MHz$), one
has to start taking into account effects like free-free absorption and
synchrotron self-absorption, which will alter the spectrum.
Since the synchrotron emissivity only
depends on the magnetic field component perpendicular to the line-of-sight, it is complementary to the diagnostic of
Faraday rotation.
A significant complication in the interpretation of the observed synchrotron
intensity is the uncertain knowledge of the number density of cosmic ray electrons. An assumption
almost universally adopted to estimate magnetic field strength from synchrotron intensity
is that of energy equipartition between cosmic rays and the GMF (or its variant), combined with a further
assumption that cosmic ray electrons contain $1\%$ of the total cosmic ray energy or number density
(\citealt{BK05}, and references therein). This assumption is applied to the synchrotron
intensity observed locally, at the working resolution of the observations; in other words,
$n_\mathrm{CR}$ and $|\vec{B}|^2$ are assumed to be perfectly correlated at all scales.
Direct verifications of the CR--GMF equipartition based on gamma-ray observations are rare
and inconclusive, and its physical justification is not entirely convincing. Analyses of the
synchrotron fluctuations in the Milky Way and M33 are not consistent with this assumption
at scales of order $100\,\pc$ \citep{Iacobelli:2013a,Stepanov:2014}. Using test particle
simulations of cosmic ray propagation in random magnetic fields (either intermittent or Gaussian),
the distributions of $n_\mathrm{CR}$ and $|\vec{B}|^2$
are found to be not just uncorrelated but statistically independent \citep{SSWBS18}. Nevertheless, the two
distributions are related, as shown in \ref{CR-B2}, but in a more subtle manner: cosmic rays
are trapped between random magnetic mirrors whose occurrence is controlled not by magnetic
field strength but rather by its geometry. These results strongly suggest that the CR--GMF
equipartition does not occur at the turbulent scales even though it may be
relevant at $\kpc$ scales \citep{Stepanov:2014}. Understanding the relationship between the cosmic rays and the GMF requires
careful further analysis in order to convincingly interpret the synchrotron
observations in terms of the magnetic field strength.
\paragraph{Synchrotron polarisation}
\texttt{ }\\
Synchrotron emission is intrinsically highly linearly polarised, with
a polarisation degree given by
\begin{align*}
\Pi &= \frac{3\gamma+3}{3\gamma+7},
\end{align*}
which means $\Pi = 72\%$ for $\gamma = 2.4$. However, this high \done{polarisation degree} is almost never observed, because the radiation is partially
depolarised when travelling from the source to the observer. The depolarisation
can be wavelength independent, due to small-scale tangling of the
magnetic field at the emission site, and/or wavelength dependent, due
to magnetic field tangling including Faraday rotation. For a
synchrotron emissivity $\epsilon(\vec{r}, \lambda)$, the observed
polarisation vector is
\begin{align*}
P\left(\lambda^2\right) &= \frac{\int\int \epsilon(\vec{r},\lambda) \Pi(\vec{r})\ee^{2\ii(\chi_0+\phi(\vec{r})\lambda^2)}\,\dd s\, \dd\Omega}{\int\int \epsilon(\vec{r},\lambda)
\,\dd s\, \dd\Omega } ,
\end{align*}
where integration over the solid angle $\dd\Omega$ defines the telescope beam.
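The intrinsic polarisation degree follows directly from the cosmic ray spectral index; a minimal numerical check of the value quoted above (the helper name is our own):

```python
def intrinsic_polarisation_degree(gamma):
    """Intrinsic linear polarisation degree of synchrotron emission,
    Pi = (3*gamma + 3) / (3*gamma + 7), for CR spectral index gamma."""
    return (3.0 * gamma + 3.0) / (3.0 * gamma + 7.0)

pi_deg = intrinsic_polarisation_degree(2.4)
print(round(100 * pi_deg))  # 72 (per cent), as quoted in the text
```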
The polarised synchrotron emission traced by the \textit{Planck}\ mission at $30\,\GHz$ is shown in the top panel of \ref{fig:planck}.
\afterpage{
\addtocounter{footnote}{-1}
\begin{figure}[t]
\begin{centering}
\includegraphics[width=0.97\columnwidth]{maps_lowres.pdf}\end{centering}
\caption[Synchrotron emission at $30\,\GHz$ and dust emission at $353\,\GHz$ ]{Synchrotron emission at $30\,\GHz$ ({\it top}) and dust emission at $353\,\GHz$ ({\it bottom}).
The colour indicates the total intensity, while the texture applied shows the inferred plane-of-sky magnetic field direction, \ie\ the polarisation direction rotated by $90\degree$.
See \citet{planck15_I} \jpr{for details.\footnotemark\ \sout{Image credit: ESA and the Planck Collaboration.}}}
\label{fig:planck}
\end{figure}
\footnotetext{\jpr{From \url{https://www.cosmos.esa.int/web/planck/picture-gallery}, reproduced with permission from Astronomy \& Astrophysics, \textcopyright\ ESO;
original source ESA and the Planck Collaboration.
}}
}
\paragraph{Polarised dust emission}
\texttt{ }\\
The spin axis of a non-spherical dust grain is both perpendicular to its long axis and aligned, statistically, with the orientation of the local GMF \citep{andersson2015,hoang2016}. Microwave, sub-millimetre, and far-infrared emission from these dust grains will therefore be polarised along the long axis of the grain, \ie\ perpendicular to the local magnetic field component projected onto the plane of the sky. The amount of alignment depends on local physical conditions and on
the properties of dust grains (mainly their size and composition), but the analysis of \textit{Planck}\ dust polarisation data indicates that the degree of grain alignment is high and homogeneous in the diffuse ISM including molecular cloud envelopes \citep{PIRXIX2015,PIRXX2015}.
The polarised dust emission traced by the \textit{Planck}\ mission at $353\,\GHz$ is shown in the bottom panel of \ref{fig:planck}.
The combination of \textit{Planck}\ with higher frequency data from the BLASTPol balloon-borne experiment \citep{ashton2017}
shows no dependence of the dust polarisation fraction $p$ on frequency.
This constraint, along with the ratio of polarised dust emission to the polarisation fraction of optical interstellar polarisation \citep{PIRXXI2015}, has been used to update dust models \citep{guillet2018}.
It suggests that the emission from a single grain type, which is efficiently aligned \citep{hoang2016}, dominates the long-wavelength emission of dust in both polarisation and total intensity.
\paragraph{Polarised optical/IR absorption by dust}
\texttt{ }\\
The same dust grains that emit linearly polarised emission absorb the optical and near infrared light from stars behind, causing a linear polarisation of the starlight parallel to the local magnetic field component projected onto the plane of the sky. This linear polarisation, like the dust emission polarisation, depends on the physical properties of the dust grains.
Stellar polarisation data have been compared with sub-millimetre dust polarisation measured by \textit{Planck}\ on the same line-of-sight. The two polarisation measurements have comparable sensitivity and are closely correlated \citep{PIRXXI2015,soler2016}. They offer complementary means to map the GMF, and to study its correlation with the structure and dynamics of interstellar matter.
Stellar data are limited to a discrete set of sight-lines but they offer unique 3D information on the GMF
using stellar distances. Optical polarisation data (e.g., \citealt{heiles2000}) are best suited to map the GMF over the diffuse ISM in the Solar neighbourhood \citep{berdyugin2014}, while near-IR polarimetric observations (e.g., \citealt{clemens2012}) probe the GMF within molecular clouds \citep{chapman2011} and the Galactic plane \citep{paveletal2012}.
\afterpage{\FloatBarrier}
\subsubsection{Large-scale magnetic field components}
The observable tracers summarised above make it convenient to describe the magnetic field as consisting of three components: a large-scale coherent or mean-field component (sometimes ambiguously referred to as the regular component); a small-scale random or turbulent component whose statistics are isotropic; and a third component that changes direction stochastically on small scales but whose orientation remains aligned over large scales, variously referred to as the anisotropic random component, the ordered random component, or the striated component. A description of how various observables trace these three components can be found in \citet{jaffe10}. First, we discuss the coherent component.
As in external spiral galaxies, the magnetic field in the Milky Way is predominantly concentrated in the Galactic plane and approximately follows the spiral arms, as confirmed directly by starlight polarisation measurements and indirectly by modelling synchrotron radiation and RMs. A magnetic field reversal on scales of $\kpc$ has been unambiguously detected from pulsar RMs. However, observations cannot determine yet whether this reversal is azimuthal or follows the spiral arms, or whether this is a local feature or global along the entire spiral arm.
In current GMF models, spiral arms are commonly modelled as logarithmic spirals. However, there are multiple, heterogeneous, indications that this picture is oversimplified. GMF modelling of log-spiral arms independently in separate regions of the Galactic disk shows that spiral arm locations and pitch angles seem to vary \citep{VBS2011}. The pitch angle of the magnetic field towards the anti-centre seems to be around zero \citep{RB2010}. In addition, in the nearby face-on spiral M51, spiral arm pitch angles are variable with radius and azimuth and depend on their tracer: gas, dust or magnetic field \citep{PFS2006}. Also in the Milky Way, there are indications that GMF models with off-set dust and magnetic field spiral arms give better fits to synchrotron and thermal dust emission data \citep{Jaffe:2013}.
The difficulties associated with the complexity of
the observed spiral patterns, in either gas or magnetic
field, are compounded by the lack of a comprehensive theory
for the interaction of galactic dynamos with the spiral
pattern and the incompleteness of the theory of the spiral
patterns themselves \citep{CSQ14,CSS15}.
In particular, the 3D tracer of starlight polarisation will be essential for constraining meso-scale deviations from logarithmic spirals, arm off-sets and/or varying pitch angles.
\afterpage{
\addtocounter{footnote}{-1}
\begin{figure}[tb]
\begin{centering}
\includegraphics[width=\columnwidth]{views_lowres.pdf}
\end{centering}
\caption[Three example models for the coherent magnetic field component in the Milky Way]{\jpr{Graphical representation\footnotemark\ of three} example models for the coherent magnetic field component in the Milky Way: on the {\it left} from \citet{sun08}; in the {\it middle} from \citet{jansson12b}; on the {\it right} from \citet{Jaffe:2013}. The
colour represents the strength of the coherent magnetic component, the white arrows show its direction. The top panel of each shows a cut through the Galactic plane at $z=0$ with the Sun position marked by the black plus, while the bottom panel of each shows a vertical cut intersecting the Sun and the Galactic centre. }
\label{fig:bcoh}
\end{figure}
\footnotetext{\jpr{Excerpt from figure~3 in Planck Collaboration, A\&A, Vol.~596, A103 (2016), reproduced with permission \textcopyright\ ESO.}}
}
The Milky Way has an out-of-plane magnetic field component, probably similar to X-shaped halo fields detected in nearby edge-on spirals. Its direction in the Solar neighbourhood is still debated \citep{hanqiao1994, maoetal2010} partly due to insufficient data but primarily due to uncertainties in local structures in the Northern Galactic hemisphere.
The strength of the coherent magnetic field as derived from pulsar RMs is ${\sim} 2\,\muG$ (e.g., \citealt{hanetal2006}), generally consistent with modelling of synchrotron radiation. The total magnetic field strength is ${\sim} 6\,\muG$ at the Solar radius \citep{strongetal2000}, increasing to ${\sim} 10\,\muG$ at a Galacto-centric radius of $3\,\kpc$ \citep{beck2001}. Both the strength of the halo field and its scale height are very uncertain, with estimates in the literature of strengths between $2\,\muG$ and $12\,\muG$ and scale heights of ${\sim} 1.5\,\kpc$ from pulsar RMs, to $5\texttt{-}6\,\kpc$ from synchrotron emissivities, assuming equipartition with cosmic rays (for an overview, see \citealt{haverkorn2015}).
However, comparison with nearby spirals makes clear that many uncertainties remain in our knowledge of the large-scale magnetic field in the Milky Way: a variety of symmetries and dynamo modes are observed (which are as yet unobservable in the Milky Way); magnetic fields follow spiral arms in general but not in detail \citep{PFS2006}; the behaviour close to the Galactic centre (e.g., Galactic winds) is not understood; and large-scale reversals are not yet convincingly observed in nearby spirals (see \citealt{Beck16} for a review). Three example models from the literature are shown in \ref{fig:bcoh}; all were fit to synchrotron emission and Faraday RM data but clearly have very different morphologies, which illustrates the modelling challenge.
\subsubsection{Small-scale magnetic field components and interstellar turbulence}
\label{sss:turbulence}
One of the main challenges of ISM research is to account for
the interconnectivity of all its components and their
interactions. Since the energy densities of the cosmic rays, turbulent
gas, radiation, and magnetic fields are comparable \citep{C2005, HH2012}, there are no passive tracers; instead, all ISM components deliver significant feedback to the system.
Several studies show to what
degree these components are connected. The \textit{Planck}\ dust polarisation maps
showed that filaments in the diffuse cold neutral medium are
predominantly aligned with the local magnetic field \citep{PIRXXXII2016}, whereas filaments
in dense molecular clouds are mostly perpendicular to the
field \citep{PIRXXXV2016}.
This correlation between the structure of matter and that of the GMF is also observed
for filamentary structures identified in spectroscopic H\textsc{i} data cubes
\citep{mccluregriffithsetal2006, clarketal2014}. More surprisingly, in some -- but
not all -- observations, Faraday depth filaments of polarised synchrotron emission seem to be aligned
with the local magnetic field in the dusty neutral medium
\citep{ZJD2015} or with H\textsc{i} filaments \citep{kalberla2016, kalberla2017}.
For decades, the power spectrum of the small-scale component of the magnetic field was approximated by a power law with a certain coherence length, assuming Gaussianity. Observational estimates for the coherence length vary: averages of extragalactic source RMs over large parts of the sky suggest coherence scales of about $100\,\pc$, while detailed studies in the Galactic plane suggest maximum scales of order $1\texttt{-}10\,\pc$ in spiral arms \citep{Haverkorn:2008} and in the highly-polarised Fan Region \citep{Iacobelli:2013a}.
However, the properties of magnetised interstellar turbulence \textit{cannot} be captured
using power spectra (and similar measures), because they are produced by random
variations in the components of the ISM that are not Gaussian. \done{A first phenomenological attempt to improve on a simple power law to denote the small-scale magnetic field in numerical simulations is to introduce two components: an isotropic random component generated by turbulence in the ionized gas, and an anisotropic random\footnote{also called `ordered' or `striated' in the literature} component, which is random in direction but uniform in orientation. This anisotropic random component can be created, e.g., by a shock wave compressing an isotropic random field, or by Galactic shear. This division is motivated by the observational tracers, as described in the previous section, and is a first approximation of the predicted non-Gaussianity of magnetised turbulence
\done{(see also \ref{sss:MHDturb}).}
Global models show that the strengths of these random components are slightly higher than that of the coherent component but that the ratio of isotropic and anisotropic random fields varies across the Milky Way \citep{jaffe10,jansson12b,Jaffe:2013}.}
For this reason, synthetic observables that use Gaussian random fields to simulate fluctuations in the ISM do
not look the same as those generated from numerical simulations. Tools to
characterise shape and connectivity should be based on the mathematical theories of
morphology and topology. Measures using rigorously defined properties, such as the
Minkowski functionals of morphology and the Betti numbers of topology, allow for
comparisons between different datasets (both observed and simulated) that better take
into account similarities and differences in their structure across different scales.
Applications of such measures to the ISM and MHD simulations have already been made
in morphology (e.g., \citealt{WBS07,MFS15}) and
topology (e.g., \citealt{Kowal:2007,Chepurnov:2008,Burkhart:2012,Makarenko:2017}).
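As an illustration of such measures, the three 2D Minkowski functionals (area, perimeter and Euler characteristic) of a binary excursion set can be computed directly on a pixel grid. The following sketch and its test pattern are our own illustration, not an implementation from the cited works:

```python
import numpy as np

def minkowski_2d(mask):
    """Area, perimeter and Euler characteristic of a binary pixel set.

    Each True pixel is treated as a closed unit square; the Euler
    characteristic is computed as V - E + F on the resulting cell complex.
    """
    faces, edges, verts = set(), set(), set()
    for i, j in zip(*np.nonzero(mask)):
        i, j = int(i), int(j)
        faces.add((i, j))
        c = [(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)]
        verts.update(c)
        edges.update({frozenset({c[0], c[1]}), frozenset({c[0], c[2]}),
                      frozenset({c[1], c[3]}), frozenset({c[2], c[3]})})
    area = len(faces)
    # Each internal edge is shared by an adjacent pixel pair; subtract twice.
    neighbours = sum((i + 1, j) in faces for (i, j) in faces) \
               + sum((i, j + 1) in faces for (i, j) in faces)
    perimeter = 4 * area - 2 * neighbours
    euler = len(verts) - len(edges) + len(faces)
    return area, perimeter, euler

# Two disconnected blobs: Euler characteristic 2.
m = np.zeros((6, 6), dtype=bool)
m[1:3, 1:3] = True   # 2x2 blob
m[4, 4] = True       # isolated pixel
print(minkowski_2d(m))  # (5, 12, 2)
```

The Euler characteristic counts connected components minus holes, which is exactly the kind of scale-free structural information a power spectrum discards.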
A recent statistical study \citep{Henderson:2017} identified which combinations
of topological measures have the strongest discriminatory power, illustrating new,
effective ways in which these techniques can be used in practice. Methods from
mathematical morphology and topology can be incorporated into the \textsw{IMAGINE}\ framework to
allow for a more thorough and physically meaningful comparison between the different
real and simulated data related to the GMF and the ISM. The complexity of modern simulations of the supernova-driven ISM that include cosmic rays and allow for the mean-field dynamo action is illustrated in
\ref{CR_box}.
\begin{figure}[b]
\centering
\includegraphics[width=0.8\columnwidth,height=0.42\textheight]{CR_box_low-res.png}
\caption{\label{CR_box}Isosurfaces of gas density $n$ (blue), magnetic field $\vec{B}$ lines (red), gas
velocity $\vec{u}$ streamlines (black) and cosmic ray energy density $\epsilon_{\mathrm{cr}}$ (contours
on the right-hand face), in MHD simulations of the supernova-driven, multi-phase ISM
that extend the simulations of
Gent et\,al. \cite{Gent:2013b}
by the inclusion
of cosmic rays. Holes in the density distributions are supernova remnants;
$z=0$ at the \textit{bottom}, $z\approx 2.2\,\kpc$ at the \textit{top}.
(Simulation data: courtesy of G.~R.~Sarson, Newcastle University, UK.)}
\end{figure}
\subsection{Dynamo theory as applied to the GMF}
\label{ss:DTGMF}
Turbulent dynamo
theory appears to be successful in explaining both the origin and
the basic observed structure of the GMF at both galactic and turbulent scales, \ie\ the mean (`coherent' or `large-scale') and random (`fluctuating' or `small-scale') magnetic fields. The discussion in this section relies on a combination of insights from observations
and dynamo theory wherever they agree and, more importantly, where they do not.
A condensed but systematic
presentation of the current understanding of the galactic mean-field dynamo theory, with a compendium of useful qualitative results, can be found in \citet{CSSS14}. Here, we provide a brief summary.
The generation of the small-scale magnetic fields by the fluctuation
(or small-scale) dynamo relies on the random nature of
the interstellar gas flows at scales ${\lesssim} 100\,\pc$. If the electrical
conductivity of the plasma is sufficiently high,
so that the magnetic Reynolds number based on the flow correlation length exceeds a critical value of order $10^2$, a random plasma flow maintains a random magnetic field at scales smaller than the correlation scale of the flow
(\citealt{ZRS90,BS05}, and references therein).
According to dynamo theory, large-scale magnetic fields arise spontaneously because of the overall galactic rotation and stratification of the turbulent ISM.
In the galactic context, they can be isolated in the total, partially ordered magnetic
field via spatial averaging, hence for this theoretical section, we will refer to them as the mean-field component.
\subsubsection{Mean-field dynamo}
The mean-field dynamo equations that govern the mean magnetic field
$\vec{B}$ can be given in the following simplest form:
\begin{align}
\label{MFDeq}
\deriv{\vec{B}}{t}=\nabla\times\left(\vec{V}\times\vec{B}+\alpha\vec{B}\right)
-\eta_\mathrm{t}\nabla^2\vec{B}\,,
\qquad \nabla\cdot\vec{B}=0\,,
\end{align}
where $\vec{V}=\left(V_r, r\Omega(r,z), V_z\right)$ is the large-scale (mean)
velocity field written in the cylindrical polar coordinates
$(r,\phi,z)$ with the origin at the galactic centre and the $z$-axis
aligned with the angular velocity $\vec{\Omega}$. The factor $\alpha$ is a
measure of the deviations of the interstellar turbulence from
mirror symmetry,
\[
\alpha\simeq\frac{l_0^2\Omega}{h}\,,
\]
with $l_0$ the turbulent correlation length and $h$ the scale height
of the layer that hosts the dynamo, and
$\eta_{\mathrm{t}} \simeq\frac{1}{3} l_0v_0\simeq 10^{26}\,\cm^2\s^{-1}$
is
the turbulent magnetic diffusivity with $v_0$ the root-mean-square (RMS) turbulent
velocity. This equation can be rewritten to allow for anisotropy
and inhomogeneity of the turbulence.
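The order-of-magnitude estimates quoted above can be reproduced directly. A minimal sketch with fiducial ISM values; $l_0$ and $v_0$ follow the text, while the angular velocity and scale height are our illustrative choices for the Solar neighbourhood:

```python
PC = 3.086e18            # parsec in cm
KPC = 1e3 * PC
KMS = 1e5                # km/s in cm/s

l0 = 100 * PC            # turbulent correlation length
v0 = 10 * KMS            # rms turbulent velocity
Omega = 25 * KMS / KPC   # galactic angular velocity (assumed fiducial value)
h = 0.5 * KPC            # scale height of the dynamo-active layer (assumed)

eta_t = l0 * v0 / 3.0            # turbulent magnetic diffusivity [cm^2/s]
alpha = l0**2 * Omega / h        # alpha effect [cm/s]

print(f"eta_t ~ {eta_t:.1e} cm^2/s")   # ~1e26, as quoted in the text
print(f"alpha ~ {alpha / KMS:.1f} km/s")
```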
The single most important feature of a spiral galaxy that defines
the form of its large-scale magnetic field is the thinness of its gaseous disc, which constrains the mean
magnetic field to be oriented predominantly in the
plane of the disc.
Strong galactic differential rotation enhances
the azimuthal component $B_\phi$ at the expense of the radial one $B_r$.
As a result, the horizontal magnetic field assumes
the shape of a spiral with a relatively small pitch angle, typically
$p_B=\arctan\left(B_r/B_\phi\right)\simeq-(10\degree\texttt{-}20\degree)$.
The vertical magnetic field is weaker than the horizontal
components, but only \textit{on average}. Locally, and most importantly
near radial reversals of the magnetic field direction where
$B_\phi=B_r=0$, the mean magnetic field should be dominated by $B_z$.
Another location where $B_z$ is expected to dominate is a region within
about $1\,\kpc$ of the galactic centre.
Furthermore, because galactic discs are thin, the global structure of the mean
magnetic field within the disc is \textit{quadrupolar} in its spatial
symmetry: the horizontal components of the mean magnetic field have
the same sign above and below the mid-plane $z=0$, whereas the vertical
component is antisymmetric:
$B_{r,\phi}(-z)=B_{r,\phi}(z)$ and $B_z(-z)=-B_z(z)$.
Mean-field dynamo theory predicts that (quasi-)spherical bodies, unlike
thin discs, generate mean fields
of a \textit{dipolar} parity, where
$B_{r,\phi}(-z)=-B_{r,\phi}(z)$ and $B_z(-z)=B_z(z)$.
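The two parities amount to a simple symmetry test on a vertical field profile; a minimal sketch (the helper and the example profiles are our own illustration):

```python
import numpy as np

def parity(Br, Bz, z):
    """Classify a mean-field profile as quadrupolar or dipolar.

    Quadrupolar: B_r(-z) = +B_r(z), B_z(-z) = -B_z(z);
    dipolar:     B_r(-z) = -B_r(z), B_z(-z) = +B_z(z).
    z must be an array symmetric about z = 0.
    """
    if np.allclose(Br(-z), Br(z)) and np.allclose(Bz(-z), -Bz(z)):
        return "quadrupolar"
    if np.allclose(Br(-z), -Br(z)) and np.allclose(Bz(-z), Bz(z)):
        return "dipolar"
    return "mixed"

z = np.linspace(-1.0, 1.0, 101)
# Even horizontal + odd vertical component -> disc-like quadrupole:
print(parity(np.cos, np.sin, z))  # quadrupolar
# Odd horizontal + even vertical component -> dipole:
print(parity(np.sin, np.cos, z))  # dipolar
```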
Such magnetic fields are observed in the Sun and the Earth. It is plausible that the mean-field dynamo operates in galactic haloes. If the two parts of the
halo are weakly magnetically connected, the mean magnetic field in the
halo can have dipolar symmetry, opposite to that in the
disc. Otherwise, the symmetry of the halo field may
be controlled by the disc. Unfortunately, detailed
and systematic observational information on the overall
symmetry of the large-scale magnetic fields in galactic
haloes is still lacking, while theoretical models remain
oversimplified and do not allow for definite predictions.
Interstellar magnetic fields are induced by
interstellar gas flows, and random (turbulent) flows are at
the heart of both small-scale and large-scale dynamos. It is
therefore not surprising that magnetic energy density scales
with the kinetic energy density of interstellar turbulence,
and a convenient measure of the magnetic field strength is that
corresponding to this equipartition,
$B_0=(4\pi\rho v_0^2)^{\nicefrac{1}{2}}\simeq 5\,\muG$,
where $\rho\simeq 1.7\cdot10^{-24}\,\g/\cm^3$ and $v_0\simeq 10\,\kms$
are the gas mass density and the RMS random velocity, respectively.
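The equipartition estimate can be reproduced from the quoted density and velocity; a minimal check in CGS units:

```python
import math

rho = 1.7e-24   # gas mass density [g/cm^3], as quoted in the text
v0 = 10e5       # rms random velocity, 10 km/s in cm/s

# Equipartition field strength B0 = (4*pi*rho*v0^2)^(1/2), in Gauss:
B0 = math.sqrt(4.0 * math.pi * rho * v0**2)
print(f"B0 ~ {B0 * 1e6:.1f} microGauss")  # ~4.6 muG, i.e. ~5 muG as quoted
```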
\subsubsection{Turbulent magnetic fields}
\label{sss:MHDturb}
Random magnetic fields in the ISM are produced and shaped by three
dominant processes: the tangling of the mean field by the random flows,
the fluctuation dynamo action and the compression by random shocks.
The latter two mechanisms produce spatially intermittent magnetic
fields where intense magnetic filaments, ribbons and sheets occur
in the background of weaker fluctuations (e.g., \citealt{WBS07,momferratos2014}).
Such magnetic fields have strongly non-Gaussian statistical properties
\citep{SSSBW17}. If the turbulent velocity has Gaussian statistics, so
does the magnetic field produced by the tangling of the large-scale field.
Estimates of the RMS magnetic field strength $b$ from these mechanisms
are rather vague, and $b\simeq B_0$ is the best available option.
\subsubsection{Beyond the basic theory}
The large-scale magnetic field produced during the kinematic (or exponentially growing) phase of a mean-field galactic dynamo is predicted to have rather specific symmetry properties. In particular, field patterns that are symmetric with respect to rotation about the axis of galactic rotation and that are also symmetric with respect to reflection in the mid-plane of the disc are predicted to grow fastest~\citep{Ruzmaikin:1988}. This preference for axial symmetry is supported by observations of a dozen nearby galaxies~\citep{Fletcher:2010}. However, the well established presence of a reversal in the direction of the Milky Way's mean magnetic field complicates the picture; the extent of the problem for the theory critically depends on whether the reversal is local (e.g., \citealt{Sh2005,VBS2011}) or
global (e.g., \citealt{hanetal2006}). As well as providing the best possible answer to this question given the available data, \textit{IMAGINE}\
will also supply robust information about the strength and symmetry of the vertical component of the mean field.
The symmetry properties discussed above are only strictly relevant during the kinematic phase of the dynamo. Once the field is amplified sufficiently it may become strong enough to modify the flow (at least its turbulent component) and the resulting non-linear feedback may affect the observed properties of the field. Alternatively, the saturation process may involve the transport of the magnetic field out of the Galactic disc by a wind or fountain flow \citep{Shukurov:2006}. Recent attempts to use observations of nearby galaxies to identify the saturation mechanism were inconclusive \citep{VanEck:2015}, possibly because the available observables, in particular the magnetic pitch angle, are not sufficiently sensitive. A more detailed determination of the properties of the Milky Way's magnetic field, which will not be possible for other galaxies in the medium term, could help to solve the problem. In particular, recent results from numerical simulations \citep{EGSFB16,EGSFB17} show that the saturated mean-field can have a maximum that is displaced from the mid-plane by a few hundred parsecs and that the ISM phase structure is modified as the field saturates. \textit{IMAGINE}\
offers the prospect of determining the vertical profile of the magnetic field and thus makes a vital connection between observations and simulations of a mean-field dynamo in its non-linear state; this will enable the theory to move to a more advanced level.
The dynamo theory in its present form largely neglects the multi-phase structure of the ISM. Numerical simulations suggest that the large-scale magnetic field is maintained in the warm gas whereas the turbulent field is less sensitive to the multi-phase structure \citep{EGSFB16}. It remains unclear whether the mean-field dynamo acts within the complex and erratically evolving volume occupied by the warm phase alone or whether it responds to an `average' ISM whose parameters can be obtained by spatial and temporal averaging. This problem can only be solved by the coordinated observational, theoretical and numerical efforts envisaged by \textit{IMAGINE}.
\subsubsection{Plasma processes}
\label{PP}
The spectrum of interstellar magnetic fields is known to
extend from scales of order $100\,\pc$ down to Earth diameter scales (${\sim} 10^{9}\,\cm$) and less \citep{ARS95,ChLa10}.
It is argued in \citet{SCDHHQT09} that
the fluctuations extend down to the ion and
electron Larmor radii where the turbulent energy is
converted into heat. These are of order $10^{8}\,\cm$
and $3\cdot10^{6}\,\cm$, respectively, in the warm ionized
interstellar gas. The dynamics of the turbulence in the inertial
ranges, formed by non-linear interactions that vary widely
in their physical nature across this broad range of scales,
is universal and independent of the properties of the magnetic
field at the larger scales of order $100\,\pc$ where the turbulence is driven.
However, the energy input into the turbulent cascade is
controlled by the properties of the GMF at scales larger
than $100\,\pc$. Moreover, intermittency can destroy the
scale invariance of the turbulent cascade \citep{CSM15}.
Essentially, interstellar turbulence is compressible,
intermittent, MHD
turbulence whose nature is complicated by the multi-phase
ISM structure that leads to vastly different physical
conditions at different positions and, at a fixed position,
at different times. MHD turbulence theory has lately
been advanced significantly
(\citealt{SG94,GS95,BL05,BL06,Bol06,MCB06,LGS07}, and references therein), albeit not
without controversy. Further insights into the nature and
significance of the ISM turbulence are likely to follow
from a recent extension of the theory of magnetised
turbulence from an MHD approximation to kinetic plasma
theory approaches \citep{SCDHHQT09,BHXP13,BCXZ15}.
The astrophysical significance of the theoretical
progress
remains to be understood, and
appropriate observational techniques are waiting to be
fully developed and implemented. The forthcoming
observations with next-generation radio telescopes should
permit a critical assessment of the theory.
Apart from insights into the large-scale magnetic fields, \textit{IMAGINE}\
would advance our understanding of magnetic fields at the turbulent scales and reveal the magnetohydrodynamic and plasma processes that control them.
Neither the MHD theory nor the observational modelling
alone will make much progress on Galactic magnetic fields. \textit{IMAGINE}\
will join the two in order to leverage their combined power and
meet this important astrophysical challenge.
\subsection{Indirect dark matter detection}
\label{ss:DM}
We know that dark matter is the main driving force in structure formation, and thus crucial
for the creation and evolution of galaxies such as our own, but we have not pinpointed
its identity yet. There are three complementary ways of elucidating the nature of
the DM particle and its properties \citep{DMind}: (1) the production of DM in accelerator
experiments; (2) direct detection by observing nuclear recoils from interactions of DM in a detector;
and (3) indirect detection, \ie\
experiments that search for stable
secondaries,
produced by annihilation or decay, that accumulate, e.g., in
the Milky Way or its surrounding dwarf galaxies.
In this last approach, experimental efforts focus mostly on neutral messengers such as gamma-rays or neutrinos, which are particularly relevant to exploring the parameter space of heavy (${\sim} \PeV$) dark matter, and which are investigated with
state-of-the-art or upcoming instruments like H.E.S.S., VERITAS, MAGIC, and CTA for
gamma-rays as well as IceCube and KM3NeT for neutrinos.
\done{A relevant charged particle channel for indirect detection of DM would be an excess of antimatter} relative to the predictions for astrophysical backgrounds\done{, as matter and antimatter are produced in equal amounts in DM annihilations or decays.} Clearly, the identification of any \done{DM signal in, e.g., the positron fraction in GeV cosmic rays seen by PAMELA
and AMS \cite{pam1, 2013PhRvL.110n1102A}}, requires
an adequate understanding of the corresponding backgrounds. This requires improved
modelling of the propagation of astrophysical cosmic rays, which in turn relies on a better
understanding of the GMF. Moreover, predicting the secondary fluxes from
DM annihilations or decays depends strongly on the properties of the GMF inside the
Galactic halo, because DM resides in an extended, approximately spherical halo.
Even in the case of a non-detection of DM, a more precise knowledge of the GMF would
allow one to improve limits on DM properties.
\section{Extragalactic science}
\label{s:BGEG}
The impact of the \textit{IMAGINE}\ project reaches far beyond our own galaxy. Here we summarise the intimate connections between the goals of \textit{IMAGINE}\ and three of the most active areas of extragalactic astrophysics and cosmology.
\subsection{Structure formation}
\label{ss:SF}
According to the current paradigm of cosmological structure formation, all observable structures in the Universe originated from weak primordial quantum fluctuations generated during the early epoch of inflation. This model connects the large-scale dynamics of the Universe and the growth of inhomogeneous structures with an underlying gravitational world model. The currently favoured $\Lambda$CDM model assumes that the Universe is governed by general relativity and that its homogeneous, large-scale dynamics can be described by a special case of a Friedmann-Lema\^{i}tre-Robertson-Walker metric.
Most of our current cosmological knowledge originates from observations of the homogeneous expansion of the Universe: type Ia supernovae, primordial temperature fluctuations in the CMB, and the linear fluctuations of matter at the largest scales probed by galaxy surveys (see, e.g., \citealt{2004mmu..symp..270F,2006astro.ph..1168L} and references therein).
The $\Lambda$CDM model has been demonstrated to fit all these data to astonishing accuracy, in particular the high-precision CMB observations of the \textit{Planck}\ mission (see, e.g., \citealt{2016A&A...594A..13P}). These results suggest
that the gravitational evolution of the present Universe is governed by enigmatic dark matter and dark energy, together constituting about $95\%$ of the total cosmic energy budget.
Although required to explain the formation of all observable structures within the standard picture of Einstein's gravity, so far dark matter and dark energy elude direct observation and have not yet been identified as particles within more fundamental theories \citep{2017IJMPD..2630012F}.
New challenges arise from studying the non-linear and inhomogeneous distribution of matter in our Universe in greater detail. Due to non-linear gravitational interactions, dark matter aggregates into massive clusters and cosmic filaments, forming the so-called cosmic web. This three-dimensional configuration of matter is believed to be a unique result of the primordial initial conditions, set by inflation, and the subsequent hierarchical structure formation history in a cold dark matter scenario. These filaments form the pathways along which matter accretes onto the massive galaxy clusters, as observed in cosmological surveys.
To study this filamentary distribution of matter in the Universe, novel Bayesian methods infer accurate and detailed maps of the 3D DM distribution from galaxy observations in the nearby Universe (see e.g., \citealt{2013MNRAS.432..894J,2015JCAP...01..036J,2016MNRAS.455.3169L}).
These methods fit 3D numerical models of non-linear gravitational structure formation to the data. In doing so they simultaneously reconstruct the initial conditions from which present cosmic structures originate as well as non-linear density and velocity fields in the present Universe including their respective dynamic formation histories (see e.g., \citealt{2016MNRAS.455.3169L}). This provides a completely new view of the dynamical large-scale structure formation in the nearby Universe and enables detailed studies of the non-linear DM distribution in nearby structures, such as the Coma or Shapley cluster. For the first time, these methods provide us with a conclusive statistical model of the large-scale 3D structure surrounding us and a detailed characterisation of its origins and non-linear formation histories.
The understanding of the large-scale structure of our local Universe and its formation history affects the extragalactic science done within \textit{IMAGINE}\ in multiple ways. In particular, it serves as a fundamental prior for the structure of extragalactic magnetic fields and the distribution of UHECR sources and thus constrains the expected distribution of UHECRs outside the Galaxy. This information is required to use UHECRs as tracers for the large-scale GMF structure, as explained in \ref{sec:UHECR_tracer}.
\subsection{Galaxy formation and evolution}
\label{sec:gal_form}
One of the most important aspects of cosmological structure formation in the context of \textit{IMAGINE}\ is the formation and evolution of galaxies, with an immediate focus on the growth and evolution of their magnetic fields. A similarly interesting aspect is the potential of \textsw{IMAGINE}\ to aid in understanding some of the physical processes underlying galaxy formation. There are two possible routes: (a) \textsw{IMAGINE}\ analyses of the GMF in the Milky Way that can probe (non-thermal) feedback processes at exquisite angular resolution, but limited to low-to-moderate star formation rates in our Galaxy; and (b) adapting \textsw{IMAGINE}-inspired tools to observations of nearby starburst galaxies, which in some respects resemble high-redshift galaxies that form stars at high rates at the peak of the galaxy formation epoch. Conversely, data obtained from observations of other galaxies can be used in \textsw{IMAGINE}\ as prior information for modelling the GMF, as discussed in \ref{sss:Galaxies}.
Within the $\Lambda$CDM paradigm, many aspects of galaxy formation and evolution are not well understood and appear to be in conflict with the data. Most prominently, the observed galaxy luminosity and H{\sc i}-mass functions show much shallower faint-end slopes than predicted; this is locally known as the ``missing satellites problem'' of the Milky Way \citep{Klypin1999,Moore1999}. At the same time, simulations predict an inner DM density cusp in galaxies seemingly at odds with the cored profiles observed in low surface brightness galaxies and dwarf satellites. While these problems may point to an incomplete understanding of the underlying theory of DM (e.g., \citealt{vandenAarssen2012}), they also highlight our inadequate understanding of galaxy formation, particularly the effects of cosmic rays and magnetic fields.
It is believed that the problem can be resolved by the inclusion of \textit{feedback} processes by stellar winds, supernovae, and active galactic nuclei, which drive outflows of gas from the galaxy and/or interrupt the accretion of new gas via a self-regulated heating mechanism. These processes lead to the suppression of star formation activity in both small mass halos ($M\lesssim 10^{11}\,\Msun$) and large mass halos ($M\gtrsim 10^{13}\,\Msun$). The physical processes underlying these ideas include: energy and/or momentum input by cosmic rays; magnetic fields; radiation fields; and mechanical energy in the form of shock waves and turbulence. The details, however, remain largely unclear. Numerical simulations of the present-day ISM (e.g., \citealt{WalchEtAl2015, GirichidisEtAl2016, Simpson2016, GattoEtAl2017}) indicate that the details of the density structures as well as the positioning of the supernovae are crucial for understanding the dynamical cycle of gas in the ISM and the launching of outflows. In recent years, cosmic ray feedback has been rediscovered and has attracted a growing amount of work \citep{UPSNES12,BAKG13,Hanasz2013,Salem2014,Girichidis2016,pakmoretal2016,Simpson2016}.
The formation of galaxies follows two main phases: an early violent phase at $z\gtrsim1$ with large mass accretion rates and high gas-mass fractions, and a late epoch ($z\lesssim1$), which is characterised by lower gas accretion rates and some (rarer) galaxy merger events. Understanding the non-thermal components of the present-day Milky Way will not directly teach us about feedback processes that have been active at the high noon of galaxy formation at $z\sim2$. However, the combination of non-thermal tracers from the most star-forming regions in the Milky Way with resolved observations of local starburst galaxies (such as Arp 220 and NGC 253) has the power to improve our understanding of the physical feedback processes at work during the galaxy formation epoch. Thus, understanding non-thermal components of the ISM via Bayesian inference methods such as \textsw{IMAGINE}\ promises to be an essential step towards better motivated feedback processes.
Modern models of galaxy formation are now starting to include MHD. The turbulent fluctuation dynamo can quickly (on a time-scale of order $10\,\Myr$) amplify a small-scale magnetic field, even in a very young galaxy. In later stages when galaxies grow large disks, the field is further amplified by a large-scale dynamo driven by differential rotation and turbulence, and this process preferentially grows a toroidal disk field \citep{pakmoretal2016, Pakmor2017, Pfrommer2017}. Numerical simulations show that magnetic fields can grow in about $1\,\Gyr$ to a saturation value where magnetic pressure is comparable to thermal pressure if turbulence is sufficiently strong \citep{wangabel2009, riederteyssier2016}. While this magnetic field can partially suppress and delay the onset of star formation \citep{PS2013}, overall, magnetic fields have only a weak effect on the global evolution of the galaxies \citep{Pakmor2017}. However, their presence is critical to correctly model cosmic ray feedback. This is because a strong starburst injects cosmic rays that accumulate to the point where their buoyancy force overcomes the magnetic tension of the dominant toroidal magnetic field, causing it to bend and open up \citep{Rodrigues2016}. Cosmic rays stream and diffuse ahead of the gas into the halo and accelerate the gas, thereby driving a strong galactic outflow \citep{pakmoretal2016,Pfrommer2017b}.
These simulations predict a local magnetic field strength of a few $\muG$ in the galactic disc, increasing to $10\texttt{-}100\,\muG$ close to the galactic centre, in magnetically dominated galactic winds \citep{pakmoretal2016,Pfrommer2017b}, or in the X-shaped magnetic fields out of the disc \citep{hanaszetal2009}. To test those simulations against observables, we need to capture the characteristics of these meso-scale outflows in a GMF model, or even to develop a non-parametric reconstruction of the GMF that can avoid our morphological preconceptions for what they should look like. This is where the advanced methodology of \textsw{IMAGINE}\ can have an impact (see \ref{ss:non-para-models}) and conversely, where \textsw{IMAGINE}\ can use this information about galaxy evolution for improvements in the modelling of the GMF (see also \ref{sss:Galaxies}).
\subsection{Ultra-high energy cosmic rays}
\label{ss:UHECR}
Cosmic rays with energies exceeding $E \sim 10^{18}\,\eV = 1\,\EeV$ are called ultra-high energy cosmic rays and are assumed to be mostly of extragalactic origin \citep{Kotera:2011cp,2018PrPNP..98...85M}. Though more than $20\,000$ events have been observed in the highest energy decade \citep{ThePierreAuger:2015rha,Jui:2016amg,Fenu:2017sdj},
the identification of their sources still requires the solution of three problems: (a) understanding the structure and strength of cosmic magnetic fields (both Galactic and extragalactic), which determine the path of the particles on their way from their sources to us; (b) clearly identifying the mass and charge of \textit{individual} UHECR particles, and thus the magnitude of their magnetic deflections; and (c) identifying statistically significant anisotropy in UHECR arrival directions, which is required to compare potential source distributions in the sky with measurements. In all three areas, significant experimental progress has been reported recently, as laid out below.
\afterpage{
\addtocounter{footnote}{-1}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{auger_lowres.pdf}
\caption[Sky map
in galactic coordinates showing the cosmic ray flux as measured by the Pierre Auger Observatory for $E>8\,\EeV$]{Sky map
in galactic coordinates showing the cosmic ray flux as measured by the Pierre Auger Observatory for $E>8\,\EeV$ smoothed with
a $45\degree$ top-hat function \citep{2017Sci...357.1266P}. The Galactic centre is at the origin. The cross
indicates the measured dipole direction; the contours denote the $68\%$ and $95\%$
confidence level regions. The dipole in the 2MRS galaxy distribution is
indicated. Arrows show the deflections expected for the JF12 GMF model on particles with $E/Z = 2\text{ or }5\,\EeV$.\footnotemark
\label{auger-skymap}}
\end{figure}
\footnotetext{Figure~3 in \textit{Observation of a large-scale anisotropy in the arrival directions of cosmic rays above $8\,{\times}\,10^{18}\,$eV}, Pierre Auger Collaboration, Science, Vol. 357, Issue 6357, pp. 1266-1270, \textcopyright\ (2017), reprinted with permission from AAAS.}
}
While past attempts to associate UHECRs with known extragalactic structures \citep{Stanev:1995my, Abraham:2007si} could not be confirmed with sufficient significance, the analysis of a much larger dataset obtained by the southern Pierre Auger Observatory found a large-scale dipole anisotropy with an amplitude of $6.5\%$ for energies above $E=8\,\EeV$ and a significance of $5.2\sigma$ (\citealt{Aab:2016ban} and \ref{auger-skymap}), similar to what was found in the combined analysis with the northern Telescope Array (TA) experiment \citep{Aab:2014ila}. Moreover, a recent analysis of the cumulative dataset from the Pierre Auger Observatory provides evidence for anisotropy in the arrival directions of UHECRs on an intermediate angular scale, which is indicative of excess arrivals from strong, nearby sources \citep{AugerStarburst}, in line with previous results of the Pierre Auger Observatory \citep{PierreAuger:2014yba} and TA \citep{Abbasi:2014lda}.
\afterpage{
\addtocounter{footnote}{-1}
\begin{figure}
\centering
\includegraphics[width=0.93\columnwidth]{auger-massspec.pdf}
\caption[Relative abundance of four mass groups as a function of energy in cosmic rays as measured by the Pierre Auger Observatory.]{Relative abundance of four mass groups as a function of energy in cosmic rays as measured by the Pierre Auger Observatory \citep{Bellido2017}. The upper four panels show the best-fit mass fractions and the goodness of fit is displayed in the lowest panel. Thick error bars denote the statistical uncertainties, thin error bars the systematic ones.\footnotemark
\label{auger-massspec}}
\end{figure}
\footnotetext{Figure~6 in J.~Bellido et al., PoS (ICRC2017) 506, reproduced with permission \textcopyright\ by the Pierre Auger Collaboration.}
}
The problem of identifying the charge of UHECR nuclei arises from the fact that it has to be inferred from the observed properties of air showers, the cascades of secondary particles that the UHECRs initiate in the Earth's atmosphere. While the energy is quite well constrained by shower calorimetry, the mass and charge of the primary particle reveal themselves mostly through the atmospheric column depth at which the shower reaches its maximum, $X_{\rm max}$, which is subject to large fluctuations between individual showers. This will be brought under control by collecting additional information on individual showers; efforts to achieve this are under way,
such as the upgrade of the Pierre Auger Observatory \citep{Aab:2016vlz}, including a new method to measure air showers by radio detection \citep{Buitink:2016nkf, Schulz:2015mah}. Currently, the primary mass is only measured for a fraction of UHECRs.
There is evidence that the \textit{average} composition of UHECR nuclei is rather light at ${\sim}\,1\,\EeV$ (\citealt{kampert2012,Thoudam:2016}, and references therein), becoming heavier towards the highest energies. Recent results from the Pierre Auger Observatory are illustrated in \ref{auger-massspec}, depicting the relative abundance of four elemental groups in cosmic rays.
These results indicate that \textit{most} UHECRs have rigidities in the range $E/eZ \sim 3\texttt{-}10\,\EV$, which means that they would be deflected in the Galactic magnetic field by ${\gtrsim} 10\degree$, considering both the large-scale and the random component \citep{Giacinti:2011uj,Keivani:2014zja,Farrar:2017lhm}. This is in addition to the deflection in extragalactic magnetic fields, which is considerably more difficult to quantify but which, unlike the GMF, only \textit{diminishes} the directional information by blurring the UHECR sources over cones of varying size, an effect that can be controlled by statistics. The systematic shift of the centre of the cone away from the source can then be used to constrain \textit{both} GMF properties and UHECR source scenarios in the Bayesian analysis of \textsw{IMAGINE}.
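The ${\gtrsim} 10\degree$ deflection scale can be checked with a simple order-of-magnitude sketch based on the relativistic Larmor radius, $r_{\rm L} \approx 1.1$\,kpc\,$\times\,(E/\EeV)/(Z\,B/\muG)$, and the small-angle deflection $\theta \approx L/r_{\rm L}$ accumulated over a coherent-field path length $L$. The field strength, rigidity, and path length below are purely illustrative numbers, not a GMF model:

```python
import math

# Relativistic Larmor radius: r_L ~ 1.08 kpc * (E / EeV) / (Z * B / microgauss).
KPC_PER_EEV_PER_MUG = 1.08

def larmor_radius_kpc(rigidity_EV, b_mug):
    """Larmor radius in kpc for a particle of rigidity E/Z (in units of EV)
    in a coherent magnetic field of strength b_mug (in microgauss)."""
    return KPC_PER_EEV_PER_MUG * rigidity_EV / b_mug

def deflection_deg(rigidity_EV, b_mug, path_kpc):
    """Small-angle deflection theta ~ L / r_L across a coherent field."""
    return math.degrees(path_kpc / larmor_radius_kpc(rigidity_EV, b_mug))

# Illustrative case: E/Z = 10 EV crossing ~1 kpc of a ~2 microgauss coherent field.
print(round(deflection_deg(10.0, 2.0, 1.0), 1))  # 10.6 (degrees)
```

A single coherent cell of roughly kpc scale and a few $\muG$ thus already produces a deflection of order $10\degree$ at these rigidities; the random field component adds further, statistically accumulated, deflection.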
An improved understanding of the GMF structure would be the first step toward a more rigorous UHECR astronomy. Reducing the uncertainty in relating the measured arrival directions of UHECR events to their extragalactic arrival directions would allow for the comparison of different scenarios for the large-scale extragalactic structure and
distribution of UHECR sources and magnetic fields. This would not only enable crucial progress
in identifying UHECR sources,
but also allow us to constrain extragalactic magnetic fields (EGMFs) by deducing the residual extragalactic contribution to magnetic deflection once the Galactic contribution has been subtracted. Comparing this with constrained EGMF simulations (e.g., \citealt{2018MNRAS.475.2519H} and \ref{fig:egmf_Hackstein}) will contribute to the understanding of
the astrophysical processes relevant for MHD at large cosmological scales, such as large-scale dynamo processes
and the development of MHD turbulence at galaxy cluster scales and beyond. At present, many such EGMF simulations exist (e.g., \citealt{PhysRevD.70.043007, 1475-7516-2005-01-009, 0004-637X-682-1-29, PhysRevD.77.023005, 2016MNRAS.462.3660H, 2018MNRAS.475.2519H}), varying widely in their predictions for magnetic field strengths on various scales and effects on UHECR propagation \citep{PhysRevD.96.023010}.
\subsection{Extragalactic backgrounds}
\label{ss:exbkg}
\subsubsection{Cosmic microwave background}
\label{ss:cmb}
Studies of the cosmic microwave background (CMB) and its anisotropies have ushered in a new, high-precision era for modern cosmology.
Whether alone or combined with other cosmological probes, measurements of the CMB total intensity anisotropies have established the current cosmological model, setting the stage for further, more profound investigations with direct implications not only for cosmology but also for fundamental physics.
Today, the search for primordial gravitational waves from the inflationary phase of the expanding Universe is the paramount goal of CMB experiments. The signal imprinted on the polarised CMB, the so-called primordial $B$-modes\footnote{Cosmologists decompose the polarised emission into $E$ (gradient-like) and $B$ (curl-like) modes (e.g., \citealt{Caldwell16}). These correspond to signals of distinct physical origin within the polarisation of the CMB.}, is directly related to physics beyond the Standard Model of particle physics and to energy scales twelve orders of magnitude higher than those accessible to the Large Hadron Collider.
The power spectrum analysis of the \textit{Planck}\ $353\,\GHz$ polarisation maps decomposes the dust polarisation into $E$ and $B$ modes \citep{PIRXXX2016}. It led to two unexpected results: a positive $TE$ correlation and a ratio of about $2$ between the $E$ and $B$ dust powers. More recently, the $TE$ correlation and $E/B$ power asymmetry were shown to extend to low multipoles that were not analysed in the first \textit{Planck}\ polarisation papers \citep{PIRLIV2018}. This latter study
also reports evidence for a positive $TB$ dust signal. The
$E/B$ asymmetry and the $TE$ correlation have been related empirically to the alignment of the magnetic field with the filamentary structure of the diffuse ISM \citep{planck2015-XXXVIII}. Theoretically, these results have been interpreted as signatures
of turbulence in the magnetised ISM \citep{Caldwell16,kandel18,Kritsuk17}.
The $TB$ signal indicates that dust polarisation maps do not satisfy parity invariance, a surprising
result that has not yet been explained.
After the \textit{Planck}\ mission, a new generation of experiments, on the ground and balloon-borne, is measuring the polarisation of the CMB with increased precision. The primordial $B$-mode signal may well be within reach of these experiments' sensitivities, but it is much weaker than the polarised foreground emission from the magnetised ISM of our Galaxy.
What we learn in modelling the GMF and its interactions with the various components of the ISM will be a useful input to the CMB component separation challenge. A better understanding of the polarised emission from dust and the relationship between the field and the dust emission structures will lead to more realistic foreground simulations for testing component separation algorithms. It is also vital for computing realistic errors and eventually claiming a detection of the cosmological signal with confidence \citep{vansyngel2017}. At low frequencies, a better characterisation of the synchrotron emission would be crucial for properly modelling the anomalous microwave emission for component separation purposes.
Furthermore, detailed knowledge of these Galactic emission mechanisms at large scales is also key for detecting other sources of $B$-modes, such as those coming from primordial magnetic fields \citep{planck19} (see also \ref{sss:primordialB}).
The CMB measurements and the detection of primordial gravitational waves are therefore intimately linked to the inference of the GMF: the measurements in the microwave bands are crucial inputs to the \textsw{IMAGINE}\ framework, and the resulting knowledge of the thermal and non-thermal components of the magnetised ISM is a crucial input to the challenge of cosmological component separation.
\subsubsection{Epoch of reionisation}
One of the greatest outstanding challenges of observational astronomy at the moment is the detection of neutral hydrogen from the era when the first galaxies started to form and reionise the Universe, the so-called epoch of reionisation (EoR). The $21\texttt{-}\cm$ neutral hydrogen line will be redshifted to low radio frequencies, where it is expected to be detectable statistically with the current generation of low-frequency radio telescopes \citep{asadetal2017}. However, as in the case of the CMB, the signal will be much weaker than several foreground components, so extreme care needs to be taken to remove them.
One of the most difficult foregrounds to remove may be spurious small-scale spectral structure caused by polarisation leakage (e.g., \citealt{jelicetal2008}). If ideal radio telescopes existed, EoR measurements would not be connected in any way with the Galactic magnetic field. However, in reality, leakage of diffuse polarised synchrotron emission from the Milky Way into total intensity will result in a frequency-dependent signal in total intensity that mimics the EoR signal itself. If the polarised foreground is known, this leakage can in principle be calculated as a power spectrum and its contribution subtracted from the observed signal \citep{asadetal2017}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.98\columnwidth]{Bfield_hsv.png}
\caption{
Map of the projected mean magnetic field along the line of sight obtained from an MHD simulation of the local Universe as used by \citet{2018MNRAS.475.2519H}. The simulation started from constrained initial conditions provided by a method summarised in \citet{2016MNRAS.455.2078S}. The catalogue of constraints is fully described in \citet{2013AJ....146...86T}. The magnetic field is shown in $\muG$. Colours are on a logarithmic scale. The additional contours show the temperature on a logarithmic scale. The panel has a side length of $200\,\Mpc/\mathrm{h}$; the projection axes are $X$ and $Y$ in supergalactic coordinates. The position of the Milky Way is at the centre of the box, indicated by a white circle. The additional circles show the location of the simulated counterparts of real objects in the local Universe. \label{fig:egmf_Hackstein}}
\end{figure}
\subsubsection{Large-scale extragalactic and primordial magnetic fields}
\label{sss:primordialB}
In recent years, a large effort has been devoted to studying cosmic
magnetism (e.g., \citealt{Johnston-Hollitt2015}), but numerous
questions about the origin and evolution of magnetic fields in the
Universe are still unanswered. Important information could be derived from
the observation of magnetic fields in the large-scale structure of the
Universe, beyond galaxy clusters. In these environments, the properties of
the magnetic fields reflect those of the seed field, since they have been less affected by processes of structure formation (see \ref{fig:PMF}). However, even in the filamentary structures of the cosmic web, magnetic fields are still expected to be very weak ($B\sim 1\,\nG$, see e.g., \citealt{Vazza2014}). The detection of magnetic fields outside galaxy clusters is therefore challenging, but would prove invaluable in understanding the origin of primordial fields.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{B_schlieren_plank.png}
\caption{All-sky map of the extragalactic primordial magnetic field as it would have been generated by the Harrison mechanism in the early Universe and would still be present today. The field strength is shown in colour and the field direction as a texture, where each pixel represents the average over the line of sight to a distance of $60\,\Mpc/\mathrm{h}$ from the Earth.
The 3D field was reconstructed from the observed galaxy distribution in the local Universe \citep{2018arXiv180302629H}. Galaxy clusters (like Virgo on the top right) with compressed primordial fields appear in red, galaxy voids with mostly pristine fields in blue.}
\label{fig:PMF}
\end{figure}
As pointed out in \ref{ss:cmb}, an imprint of these primordial magnetic fields must be present in the CMB. The current experimental upper limit on primordial magnetic fields from CMB observations is $B < 1\,\nG$ (for a nearly scale-invariant stochastic distribution), but most theoretical scenarios to produce primordial fields predict much smaller field strengths (\citealt{planck19}, and references therein).
A mechanism to measure such weak fields has been suggested on the basis of pair cascades that are expected to develop in the traversal of $\TeV$ photons through cosmic radiation backgrounds. These would cause an excess of $\GeV$ photons in the spectrum of $\TeV$ blazars. As sufficiently strong EGMFs can dilute the effects of the cascade along the line of sight, non-observation of this excess could be used to place lower limits on the EGMF strength (e.g., \citealt{Oikonomou17}, and references therein). A rigorous analysis of Fermi LAT data for known $\TeV$ blazars has led to a conservative limit of $B \gtrsim 10^{-10}\,\nG$ \citep{2015ApJ...814...20F}; the authors also discuss how additional processes like beam-plasma instabilities \citep{Broderick2012} could suppress the cascade development and render these limits inapplicable. A direct discovery of pair cascades could be possible by observing the spatial and temporal widening of the (potentially transient) blazar emission, known as \textit{pair-halos} and \textit{pair-echoes}, respectively.
If the EGMF strength turns out to be ${\lesssim 10^{-6}\,\nG}$, future Cherenkov telescopes like CTA, AGIS, or HAWC
would be able to detect such pair-halos or echoes from $\TeV$ blazars and thus to infer further properties of the large-scale EGMFs (e.g., \citealt{Neronov09}). EGMFs with a strength significantly above $10^{-6}\,\nG$ cannot be constrained by this method.
For stronger EGMFs, such as are expected in the denser regions of the cosmic web in particular, Faraday RMs of extragalactic sources provide a powerful tool. Recently, statistical
algorithms based on Bayesian inference have been developed to
separate the Galactic and extragalactic contribution \citep{2012A&A...542A..93O}
and to further disentangle the latter into the terms associated with
galaxy clusters, filaments, voids, and sheets
(e.g., \citealt{Vacca2015,2016A&A...591A..13V}). Yet again, in order to detect and constrain the
properties of magnetic fields in the cosmic web, the detailed knowledge of
the Galactic contribution gained by the \textit{IMAGINE}\ project will be essential.
\section{Approaches and methods}
\label{s:AM}
We have summarised the scientific background and goals of the \textit{IMAGINE}\ project. But connecting the many disparate strands of theoretical and observational knowledge involved requires a statistically powerful and computationally flexible framework. Here, we describe the approach we take to the \textit{IMAGINE}\ project. More details as well as a first demonstration are provided in \citet{steininger2018}.
\subsection{Introduction to Bayesian methods}
\textsw{IMAGINE}\ is designed to provide a flexible and modular analytic platform based on Bayes' theorem that allows the explicit mathematical characterisation of: the constraining data and their uncertainties; the models and their parameters; the posterior likelihood of a given model; any prior information about the model not included in that likelihood; and the quantitative evidence for a given model compared to another.
\subsubsection{Bayesian inference}
\label{sec:Bayes:inference}
The most important thing to understand about Bayesian methods is that they do not introduce a different way of doing statistics, but rather provide a method of inference in a multi-valued logic appropriate for scientific problems. This goes back to a proof by \citet{Cox1946} that the desiderata of a logic of plausibilities are fulfilled by the calculus of probability theory, so one can use the terms `plausibility' and `probability' in the same sense.
Thus Bayes' theorem, commonly written as
\begin{align}
\label{eq:bayes_law}
P\left({\cal M}|{\cal D},{\cal I}\right) &= \frac{P\left({\cal D}|{\cal M}, {\cal I}\right)}{P\left({\cal D}|{\cal I}\right)}
\cdot P\left({\cal M}|{\cal I}\right),
\end{align}
becomes the rule of inference on the \textit{plausibility}
$P \in [0,1]$, assigned to a physical entity of interest
${\cal M}$, given a (new) set of empirical data
${\cal D}$ and some background information
${\cal I}$. The meaning of all these quantities in various contexts will be explained in the remainder of this section.
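As a minimal numerical illustration of Bayes' theorem (with invented numbers, not tied to any GMF inference), consider two competing model hypotheses with equal priors and a dataset that is four times more probable under the first:

```python
# Minimal numerical illustration of Bayes' theorem with invented numbers:
# two hypothetical model hypotheses M1 and M2, equal priors, and data D
# that is four times more probable under M1 than under M2.
prior = {"M1": 0.5, "M2": 0.5}        # P(M | I)
likelihood = {"M1": 0.8, "M2": 0.2}   # P(D | M, I)

# Evidence P(D | I): marginalise the likelihood over the hypotheses.
evidence = sum(likelihood[m] * prior[m] for m in prior)

# Posterior P(M | D, I) = P(D | M, I) P(M | I) / P(D | I)
posterior = {m: likelihood[m] * prior[m] / evidence for m in prior}
print(posterior["M1"], posterior["M2"])  # 0.8 0.2
```

The evidence in the denominator simply renormalises the plausibilities so that they sum to one over the hypotheses under consideration; it becomes important in its own right when comparing models.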
\subsubsection{Parametric vs.\ non-parametric methods
\label{sec:Bayes:para-non-para}}
In \textit{parametric Bayesian methods}, $\mathcal{M}$ is understood as a physical or phenomenological model. Such model prescriptions can be based on purely heuristic assumptions or be the result of a fundamental theory, as described in \ref{ss:Para_phys}.
Their value lies in physical abstraction, which may be useful to transfer knowledge between related problems, but they are usually oversimplified when it comes to the exact description of a complex structure like the Galactic magnetic field.
In contrast to this, \textit{non-parametric Bayesian methods} aim to reconstruct reality to the highest possible accuracy. Here, $\mathcal{M}$ takes the role of a physical field, e.g., a direct representation
of the Galactic magnetic field.
The main problem here is to recover this field from
the data, which is subject to limitations of measurement and noise.
\done{A field has virtually an infinite number of degrees of freedom, whereas the data provide only a finite number of constraints. Thus the reconstruction problem is heavily under-constrained and ill-defined. This can only be cured by the inclusion of prior information into the inference, which relates the many degrees of freedom.}
To this end, \textit{information field theory} (IFT, \citealt{2009PhRvD..80j5005E,2013AIPC.1553..184E}), the information theory for fields, will be used. Information field theory is a probabilistic formulation of the inverse problem to estimate an unknown field that exploits the rich variety of mathematical methods developed for quantum field theory.
Parametric models describe a subset of the possible field configurations, and therefore parametric models can implicitly be regarded as having strong priors. Nevertheless, their lower dimensionalities can have huge computational and conceptual advantages. For these reasons, the IMAGINE Consortium\ will investigate both parametric and non-parametric GMF models. In fact, the existing \textsw{IMAGINE}\ pipeline recently presented in \citet{steininger2018} combines a parametric representation of the mean GMF with additional non-parametric realisations of magnetic fluctuations in the inference. The mean field parameters as well as the parameters controlling the magnetic fluctuations can be inferred simultaneously.
\subsubsection{Likelihood}
\label{sec:Bayes:likelihood}
The likelihood, $P\left({\cal D}|{\cal M},{\cal I}\right)$, is the most crucial element in Bayesian inference.
It is the probability of obtaining the (observed) data ${\cal D}$ out of all possible data configurations given a specific model realisation $\cal M$ and background information $\cal I$. Knowing the uncertainty of the available data is essential for calculating a likelihood, which acts as a distance measure between measurement and model prediction. The simplest approach is to treat the uncertainties communicated by the experiment as part of the data, although strictly speaking they are not: they are themselves the result of an inference performed by the experimentalists. Non-parametric Bayesian approaches provide sophisticated methods to estimate uncertainties directly from the measurements (see \ref{sss:field_priors}).
\subsubsection{Prior, posterior and knowledge update}
\label{sec:Bayes:prior}
The leftmost and rightmost terms in \ref{eq:bayes_law}, $P\left({\cal M}|{\cal D},{\cal I}\right)$ and $P\left({\cal M}|{\cal I}\right)$, are called the \textit{posterior} and the \textit{prior}, respectively. Both are closely related, as they represent plausibility values assigned to the same entity, $\cal M$.
The prior is the assignment before the data are obtained or taken into account, and the posterior is the assignment afterwards. The change from prior to posterior reflects the information the data provide through the likelihood, which is simply multiplied by the prior in Bayes' theorem.
In practice, there exist several approaches to construct prior distributions. For example, priors may be constructed empirically via a Bayesian knowledge update from previously obtained experimental results. Another possibility is to construct least-informative priors via the maximum entropy methodology.
Particularly in non-parametric approaches, informative priors are necessary. The simplest non-parametric priors discourage strong gradients or curvature in the solutions to ensure continuity of the physical fields. More sophisticated priors incorporate the notion of field correlation functions, which either can be calculated from theory (as in CMB science) or have to be determined simultaneously from the data. Thus, in the ideal case, the features introduced to construct sensible non-parametric priors reveal highly interesting physical quantities. The natural by-products of non-parametric inference are therefore interpretable scientific results that provide insight into, e.g., Galactic magnetogenesis.
\subsubsection{Evidence and background information}
\label{sec:Bayes:evidence}
The last term in Bayes' theorem, given in \ref{eq:bayes_law}, is the denominator on the right-hand side, $P\left({\cal D}|{\cal I}\right)$, which is called the \textit{evidence}.
Its most important role is to renormalise the joint probability of data and model
\[
P\left(\mathcal{D}, \mathcal{M}|\mathcal{I}\right) = P\left(\mathcal{D}| \mathcal{M},\mathcal{I}\right)\cdot P\left(\mathcal{M}|\mathcal{I}\right)
\]
that arises from multiplying the prior with the likelihood, such that
\[
\int P\left(\mathcal{M}|\mathcal{D}, \mathcal{I}\right)\,\dd\mathcal{M}=1.
\]
In practice, the evidence allows us to penalise model complexity and choose the simplest model parametrisation able to explain the data. It therefore naturally implements the common understanding of Occam's razor in Bayesian model comparisons \citep{2003prth.book.....J}.
Formally, the evidence, $P\left({\cal D}|{\cal I}\right)$, is the plausibility of the data ${\cal D}$ in the light of the background information $\mathcal{I}$, marginalised over all model parameters. It might be seen as a measure of the quality of the data, \ie\ their ability to constrain the fundamental approach (or meta-model) contained in ${\cal I}$. Another way to see it is as the likelihood of ${\cal I}$ in the light of the data ${\cal D}$. In this sense, it represents information about the plausibility of $\mathcal{I}$ and permits comparisons between different formulations of this background information.
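The Occam penalty implemented by the evidence can be made concrete with a toy comparison (all numbers invented): a model with a parameter fixed at zero versus one with the same parameter left free under a wide uniform prior. For data consistent with zero, the free parameter's wasted prior volume suppresses the evidence of the more complex model.

```python
import numpy as np

# Toy data, drawn near zero (invented values for illustration).
data = np.array([0.1, -0.2, 0.05])

def log_likelihood(mu):
    # Product of unit-variance Gaussians, evaluated in log space.
    return np.sum(-0.5 * (data - mu) ** 2 - 0.5 * np.log(2 * np.pi))

# Model A: mu fixed at 0, no free parameter -> evidence is just the likelihood.
evidence_A = np.exp(log_likelihood(0.0))

# Model B: mu free, uniform prior on [-10, 10] with density 1/20;
# marginalise the likelihood over the prior by trapezoidal integration.
mu_grid = np.linspace(-10.0, 10.0, 2001)
like = np.exp([log_likelihood(m) for m in mu_grid])
dmu = mu_grid[1] - mu_grid[0]
evidence_B = np.sum(0.5 * (like[1:] + like[:-1]) * dmu) / 20.0

# Bayes factor > 1: the simpler model is preferred (Occam's razor).
bayes_factor = evidence_A / evidence_B
```

Narrowing Model B's prior range would shrink the penalty, illustrating that the evidence weighs prior volume against goodness of fit.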
\subsubsection{Numerical approaches to Bayesian inference}
\label{sec:Bayes:num_bayes}
A numerical approach to full Bayesian inference always aims at mapping the entire posterior distribution. This often requires performing numerical searches and integration in very high-dimensional parameter spaces, ranging from a few to several thousand dimensions, depending on the parametrisation of the models.
In such situations, Markov Chain Monte Carlo (MCMC) techniques are among the most efficient numerical methods for approximating high-dimensional posterior distributions (see e.g., \citealt{brooks2011handbook}).
MCMC denotes a class of algorithms generating representative samples of probability distributions (see e.g., \citealt{gelmanbda04}). This is achieved by constructing a so-called Markov chain that sequentially transitions from one point in parameter space to another. If transitions depend only on the current point and obey the requirement of detailed balance, then the equilibrium state of such a chain consists of ergodic samples of the desired target probability distribution. More explicitly, an MCMC algorithm performs a sequential local exploration of parameter space and provides a numerical approximation to the target posterior distribution in terms of a multi-dimensional point cloud:
\begin{align}
\label{eq:MCMC_approx}
P\left({\cal M}|{\cal D},{\cal I}\right) &\approx \frac{1}{N} \sum_{i=0}^{N-1} \delta^D\left({\cal M}-{\cal M}_i\right) ,
\end{align}
where $\delta^D(x)$ denotes the Dirac-delta distribution, ${\cal M}_i$ are sequential model realisations generated by the MCMC process and $N$ is the total number of generated posterior samples.
It can be shown that for $N \to \infty$, the right-hand side of \ref{eq:MCMC_approx} converges to the desired target posterior distribution (see e.g., \citealt{gelmanbda04,brooks2011handbook}). Given such a set of posterior samples, one may then perform in post-processing any desired Bayesian operation, such as determining credibility intervals, marginalising over a nuisance parameter or computing statistical summaries such as mean, mode or variance. In particular, one can easily approximate the posterior-weighted integral of any desired function, $f({\cal M})$, of the model and its parameters by the unbiased estimator
\begin{align*}
E\left(f({\cal M})\right) &\approx \frac{1}{N} \sum_{i=0}^{N-1} f({\cal M}_i) .
\end{align*}
A particularly important aspect of the MCMC approach is that the errors of such estimators are independent of dimension and scale simply as $1/\sqrt{N}$. These features make the MCMC approach well suited to Bayesian parameter inference.
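The core of the MCMC approach can be sketched in a few lines. The following random-walk Metropolis sampler for a one-parameter toy posterior is our illustration only (not the \textsw{IMAGINE}\ implementation); the sample mean at the end is an instance of the posterior-weighted estimator discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_posterior(m):
    # Toy target: a unit-variance Gaussian posterior centred on m = 3.
    return -0.5 * (m - 3.0) ** 2

samples = []
m = 0.0                                    # arbitrary starting point
for _ in range(20000):
    proposal = m + rng.normal(scale=1.0)   # symmetric random-walk proposal
    # Accept with probability min(1, P(proposal)/P(current));
    # this acceptance rule enforces detailed balance.
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(m):
        m = proposal
    samples.append(m)

# Discard burn-in, then estimate E(M) as the simple sample mean (1/N) sum M_i.
posterior_mean = np.mean(samples[5000:])
```

With enough samples the estimate approaches the target mean of 3, with a Monte Carlo error scaling as $1/\sqrt{N}$.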
\subsection{Parametric models of the GMF \label{sec:GMFpar}}
\label{ss:Para}
\subsubsection{Current heuristic models of the GMF}
\label{ss:Para_heuristic}
There are many parametric models for the large-scale GMF in the literature (e.g., \citealt{sun08,jaffe10,pshirkov11,vaneck11,jansson12b,terral16}) motivated by observations of the Milky Way and of external galaxies. Most are a combination of a toroidal field (\ie\ no vertical component) and a poloidal field (\ie\ no azimuthal component) to reproduce the observed features described in \ref{ss:OBSGMF}. Most models also parametrise the amplitude as a form of exponential disk, in some cases with spiral arm structures \citep{jaffe10,pshirkov11,vaneck11,jansson12b} or annular regions \citep{sun08} where both the field strength and direction can vary independently. Further divisions into thin and thick disks, halo, inner and outer Galaxy, a molecular ring, etc.\ attempt to model different regions of the ISM.
The small-scale turbulent component of the field is usually also parametrised statistically, e.g., as a single-scale or 3D Gaussian random field with a parametrised power spectrum. The amplitude of this component usually also follows some sort of exponential disk profile, possibly with spiral arms etc., as in the case of the regular field, but not necessarily the identical morphology. This can then roughly double the number of parameters required (e.g., \citealt{jansson12b}).
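As a sketch of how such a statistically parametrised turbulent component can be realised numerically (shown in 2D for brevity; the spectral slope is an illustrative assumption), white noise can be filtered in Fourier space with a power-law spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
alpha = 11.0 / 3.0                  # assumed Kolmogorov-like spectral slope

# Isotropic wavenumber grid for an n x n periodic box.
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.sqrt(kx**2 + ky**2)
k[0, 0] = np.inf                    # suppress the zero mode (no mean field)

# Filter white Gaussian noise so that P(k) ~ k^(-alpha).
white = np.fft.fft2(rng.normal(size=(n, n)))
field = np.real(np.fft.ifft2(white * k ** (-alpha / 2.0)))
field /= field.std()                # normalise to unit rms
```

The resulting realisation has zero mean and the prescribed power-law correlation structure; an amplitude profile (e.g., an exponential disk) would then modulate it in configuration space.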
The complexity of the parametrisations of such models can clearly vary depending on how many assumptions are made, with several dozen parameters the most that are usually feasible to explore (e.g., \citealt{jansson12b, ungerfarrar2017}). As always, the risk of using large numbers of parameters is overfitting and drawing conclusions about the large-scale structure of the field based on a fit that may be perturbed by a small-scale local feature. In principle, the statistics of how the expected small-scale fluctuations affect the fits can be taken into account in the Bayesian framework. In practice, this can be difficult, as discussed in \citet{pipXLII}.
Many of the methods in the \textit{IMAGINE}\ framework have been applied in previous analyses of the GMF.
\citet{ruizgranados:2010} explored several complementary parametrised models with different morphologies and a Bayesian likelihood exploration.
\citet{jansson12b} chose a single parametrised model, but one with many physically motivated components and a large number of degrees of freedom. Both of these analyses used the high-resolution data to estimate the variations due to the turbulent field component and to include them in the likelihood.
\citet{jaffe10} instead explicitly modelled the random field components with an isotropic Gaussian random field and a stochastic anisotropic component.
\citet{pipXLII} compared those two approaches of handling the random component.
\citet{ungerfarrar2017} explored an ensemble of physically motivated updates to the \citet{jansson12b} model (based in part on \citealt{FerriereTerral:2014}). They conclude that the data do not currently discriminate among these model variations, which therefore indicates the degree of uncertainty in our knowledge of the GMF.
We will make use of all of the lessons learned in these analyses by including the ``Galactic variance'' as an observable; comparing with high-resolution simulations of random fields, both Gaussian and non-Gaussian; comparing the ability of different parametrised models to constrain the large-scale features of the GMF; and using the Bayesian information framework to robustly characterise the knowledge gained from the analysis given the assumed parametric forms and priors.
\subsubsection{Physical parametric models}
\label{ss:Para_phys}
However, these models are all heuristic and their topology is based on observed shapes instead of physics. \citet{FerriereTerral:2014} were the first to develop analytical models for both disk and halo components of GMFs, and in \citet{terral16} these were applied to the Milky Way. Although these models still include only regular field components and are clearly oversimplified, they do present a significant step forward, as the field topology is based on physics, in contrast to the published heuristic models, for which the field topology consists of geometrical components based on observations.
The next advance is building parametric GMF models based on dynamo physics. This theory aims mainly at explaining the origin of GMFs and their typical properties. To obtain a parametrised model, a kinematic (linear) solution of \ref{MFDeq} is derived for $\vec{V}$, $\alpha$ and $\eta_\mathrm{t}$ independent of $\vec{B}$. The linear nature of the solution is not restrictive in the present context since its aim is just to provide a convenient functional basis for any magnetic configuration. On the other hand, \citet{CSSS14} show that a wide class of non-linear solutions is well approximated by the marginally stable eigenfunction (\ie\ that obtained for $\partial\vec{B}/\partial t=0$).
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth,height=0.42\textheight]{GMF_3D.png}
\caption{A GMF model based on the expansion, in
free-decay magnetic modes, of axially symmetric solutions of kinematic
mean-field dynamo equations for the Galactic disc and halo. For
illustration, we have selected a solution that has a symmetric
(quadrupolar) halo field combined with a quadrupolar disc field
that has two reversals at arbitrarily chosen
Galacto-centric distances $7$ and $12\,\kpc$. The domain
shown is $17\,\kpc$ in size. Magnetic field lines (shown in grey)
are seeded uniformly along a major diagonal through the box. Arrows show
the magnetic field at positions randomly sampled within a $2.5\,\kpc$-thick
slice around the Galactic mid-plane (indicated by the semi-transparent
surface); their length is proportional to the magnitude of the magnetic field.}
\label{GMF_3D}
\end{figure}
In a thin disc with radius $r$ and height $h$,
a series of successive approximations in aspect ratio $\epsilon = h/r$
can be used to represent \ref{MFDeq} as: a system of local partial differential equations in $z$ at a fixed Galacto-centric radius $r$, whose coefficients are functions of $r$; and a scalar equation for the magnetic field strength at a given $r$. Solutions to each approximation are obtained in an explicit form as an infinite series of orthogonal functions. The series can be truncated in order to achieve the desired level of detail in the result. In particular, such solutions can include:
deviations from axial symmetry due to spiral arms and other factors; radial and vertical large-scale velocity
components; inhomogeneity and anisotropy of the turbulent transport coefficients; magnetic field reversals; etc.
For the quasi-spherical Galactic halo, a similar solution to \ref{MFDeq} is obtained in spherical coordinates
in the form of expansion over the free-decay modes
obtained for $\vec{V}=0$ and $\alpha=0$. Analytical
expressions are available for these modes,
expressed in spherical harmonics, so that the solution is again represented by an explicit form with a configurable
level of detail.
An example of a global (large-scale) magnetic configuration
in the Galactic disc and halo produced with this model is shown in \ref{GMF_3D}.
In order to provide specific models of the GMF, this solution should be extended to include a
variety of complex physical effects,
such as the multi-phase structure of the ISM,
spiral arms, the effects of cosmic rays on the gas
dynamics, etc.
Some of these effects have already
been included, e.g., the effects of galactic
outflows \citep{BvRDBS01},
and there is some progress towards a better
understanding of the role of the multi-phase ISM structure in
galactic dynamos \citep{EGSFB16}, but much more needs to be done.
Another difficulty, of a more fundamental nature, is that any theory of the random magnetic
fields can only be statistical in nature, predicting
their ensemble-averaged properties, whereas
the GMF is just one realisation of the ensemble.
However, the existing models of the GMF,
especially those based on the mean-field dynamo theory,
can provide a physically motivated field prior for a non-parametric Bayesian
analysis of observations.
\subsubsection{Challenges for parametric modelling}
We have discussed the variety of models for the large-scale GMF that have been published, each having been optimised to match a subset of the available data (\ref{ss:Para_heuristic}). Though there are other suitable observables, the most commonly used tracers of the large-scale fields are: the Faraday RMs of Galactic pulsars and extragalactic radio sources; diffuse synchrotron emission in total and polarised intensity from radio to microwave frequencies; and diffuse dust polarisation in the microwave and sub-millimetre bands. As discussed in detail in \citet{pipXLII}, the problem
of determining the magnetic structure of the Milky Way in sufficient detail
remains under-determined due to
degeneracies in the parameter space. The ill-constrained distribution of thermal electrons and the paucity of pulsars with reliable distance measures make it difficult to study the 3D structure of the fields with RMs. Confusion from additional emission components in the microwave bands such as free-free or anomalous microwave emission (aka ``spinning dust'') make it impossible to use the synchrotron total intensity at high frequencies. Faraday depolarisation effects cut off the visible polarisation in the lower frequency radio bands at a polarisation ``horizon'' of a few $\kpc$ \citep{uyaniker:2003}. Unknown variations in the synchrotron spectral energy distribution make it difficult to combine radio and microwave observations.
Models of dust polarisation must account for its correlation with synchrotron polarisation \citep{choi15,PIRXXII2015}, as well as the correlation of the GMF with the filamentary structure of matter \citep{PIRXXXII2016}.
The structure of the local magnetic field on scales of a few
$100\,\pc$, e.g., the Local Bubble \citep{lallementetal2014}
is currently not taken into account in existing GMF models. However,
the analysis of the \textit{Planck}\ dust polarisation maps stresses the need
to consider this local contribution in order to model polarisation observations away from the Galactic plane \citep{pipXLII,PIRXLIV2016,alves2018}.
Widely used models such as that of Jansson \& Farrar \citep{jansson12b} are often taken at face value without an understanding of how these systematic uncertainties affect the deceptively good fit obtained for each parameter. More data on the current observables, such as diffuse synchrotron polarisation at intermediate frequencies and better pulsar sampling, as well as new observables such as CR deflection information, will help resolve some of these degeneracies, but it will remain necessary to build into any model fitting an understanding of the remaining uncertainties. The Bayesian framework of \textsw{IMAGINE}\ will allow us to do this explicitly and thereby to gain a better understanding of which parameters can be meaningfully constrained with the available data.
\subsection{Non-parametric reconstruction of the GMF}
\label{ss:non-para-models}
Non-parametric 3D GMF models do not yet exist, and are a challenge due to the large number of degrees of freedom involved compared to the number of constraining data points. It is a second goal of the IMAGINE Consortium\ to enable non-parametric reconstructions of the GMF. All data and instrument descriptions collected by the IMAGINE Consortium\ will also be directly applicable to this more challenging goal.
A non-parametric reconstruction of the GMF requires that basically every point in 3D space carries its own magnetic field vector. As this is computationally infeasible to handle, only a pixelised version of the 3D GMF configuration can be stored. However, even such a finite resolution can easily mean that a billion field values not only have to be handled but also need to be determined from a much smaller set of observables. This is a so-called ill-posed inverse problem, which requires additional constraints or regularisation best provided by prior information such as: the GMF's solenoidality; the typical scaling relations of magnetic fluctuations in MHD turbulence; and the fact that the GMF is the result of an operating galactic dynamo that obeys the MHD equations and is driven by the kinetic energy of the ISM.
The Bayesian framework of \textsw{IMAGINE}\ will allow us to test the ability of these priors to constrain non-parametric models to provide information that will be complementary to the parametric studies.
\subsection{Observational input}
\label{ss:OI}
In \ref{sss:oot}, we described the physics of the observational tracers of Galactic magnetic fields. Here, we specifically list the available data sets for each tracer that we will use in \textsw{IMAGINE}.
\subsubsection{Rotation measure and Faraday depth}
The first and most straightforward data set to compare the GMF models against is that of RMs of extragalactic point sources. For this, we use a map of Galactic Faraday rotation reconstructed based on catalogues of extragalactic point source RMs \citep{2012A&A...542A..93O}. Pulsar RMs have additional value due to their measurable distances within the Galaxy, however, there are relatively few (${\sim}\,200$) pulsars with accurately known distances and RMs \citep{yaoetal2017}. In principle, the Faraday depth of diffuse Galactic synchrotron emission (e.g., \citealt{Iacobelli:2013b, jelicetal2014, vanecketal2017}) gives magnetic field information as well. However, due to local, small-scale structure and observational biases, this tracer may not be very useful in practice. RM and Faraday depth data depend on a reliable thermal electron density model (e.g., \citealt{cordeslazio2002, gaensleretal2008, schnitzeler2012}) and a relativistic electron density model (e.g., \citealt{strongmoskalenko1998, kissmann2014, evolietal2017}).
\subsubsection{Synchrotron emission}
With a relativistic electron density model in place, it is straightforward to calculate the synchrotron emission expected from the Milky Way given a certain GMF model. The modelled synchrotron maps at various frequencies can be directly compared to existing observational maps (see e.g., the compilation by \citealt{deoliveiracostaetal2008}). The complication of local structures needs to be dealt with, e.g., by including them in a model or masking them out.
At very low frequencies, Faraday rotation by small-scale, local magnetic field structures will dominate, but at higher frequencies the global field should be constrainable. The all-sky polarisation surveys from the WMAP \citep{pageetal2007} and \textit{Planck}\ satellites \citep{planck2015XXV} are effectively free of Faraday rotation. Lower-frequency surveys that can be used that include Faraday rotation are, e.g., the all-sky survey at $1.4\,\GHz$ \citep{wollebenetal2006, testorietal2008}, the Southern sky survey S-PASS at $2.3\,\GHz$ \citep{carrettietal2013}, or the full-sky C-BASS survey at $5\,\GHz$ \citep{kingetal2010}.
\subsubsection{Polarisation of starlight}
The optical and near-infrared polarisation of stars is a potentially powerful constraint on the Galactic magnetic field, as shown in pilot studies by \citet{paveletal2012} and \citet{pavel2014}. The catalogue of \citet{heiles2000} and the Galactic Plane Infrared Polarisation Survey (GPIPS, \citealt{clemens2012}), combined with stellar distances from Gaia, can be used as a starting point. A huge increase in stellar polarisation data is expected from ongoing optical stellar polarisation surveys: IPS \citep{magalhaesetal2005}, SOUTH-POL \citep{magalhaes2014} and Pasiphae, which will be combined with stellar distances and extinction measurements from Gaia \citep{lindegrenetal2016}. The modelling of stellar polarisation will go together with that of the dust
distribution and extinction properties in 3D. Such models based on extinction measurements \citep{lallement2018,green2018}, and
on spectroscopy of diffuse interstellar bands \citep{zasowski2015}, are already available.
\subsubsection{Polarised dust emission}
The same elongated dust grains that selectively absorb optical and near-infrared starlight re-emit the absorbed energy as polarised radiation in the far-infrared and sub-millimetre regimes. Therefore, the expected polarised emission from the chosen 3D dust model can be calculated and compared to observed all-sky maps of dust emission at various wavelengths from \textit{Planck}\ \citep{PIRXXI2015}. These maps provide sensitive, full-sky observations of the total and polarised emission up to $353\,\GHz$. Again, fully exploiting these data requires an accurate 3D model of the dust grain distribution.
The observed asymmetry between $E$ and $B$ mode polarisation and their correlation with the total dust intensity
\citep{PIRXXX2016,PIRLIV2018} provide specific constraints and challenges to the GMF modelling.
\subsubsection{Ultra-high energy cosmic ray deflections}
\label{sec:UHECR_tracer}
\afterpage{
\addtocounter{footnote}{-1}
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\columnwidth]{Defl_Comp_Mollerach_2.jpg}
\caption[Comparison of deflection angles of UHECRs with rigidity $E/eZ = 10\,\EV$ predicted by two published models of the GMF.]{Comparison of deflection angles of UHECRs with rigidity
$E/eZ = 10\,\EV$ predicted by two published models of the GMF: Pshirkov et~al. \citep{pshirkov11} and Jansson \& Farrar (JF12) \citep{jansson12b}.\footnotemark
\label{fig:defl_diff}}
\end{figure}
\footnotetext{Reprinted from Progress in Particle and Nuclear Physics, Vol. 98, S.~Mollerach and E.~Roulet, \textit{Progress in high-energy cosmic ray physics}, figure~15, p.~107, \textcopyright\ (2018), with permission from Elsevier.}
}
UHECRs can act as test particles to probe the 3D GMF structure in a
unique way: unlike many other tracers, they probe the orientation of
the transverse field component, and their total deflection is not
simply a line-of-sight integral. This uniqueness becomes clear from the inconsistency between the systematic UHECR deflections predicted by different GMF parametrisations, both of which were optimised against standard GMF tracers (\ref{fig:defl_diff}, see also \citet{Farrar:2012gm}). Including UHECRs in the GMF likelihood would require connecting various UHECR anisotropy predictions outside the Galaxy with the observed arrival direction distribution at Earth, and promising techniques to do this efficiently have been developed \citep{Winchen:2013}. Including information on the particle rigidity of each air shower would increase the discriminatory power of the likelihood, and experimental developments give hope that detailed $X_{\rm max}$ information will soon be available for a large number of air showers \citep{2017PhRvD..96l2003A}.
UHECR deflections in magnetic fields inside and outside of the Galaxy can be calculated by numerical codes like \textsw{CRPropa\,3}\ \citep{Batista:2016yrx}, which can take any given GMF and EGMF structure as a 3D grid. To probe the GMF structure, UHECRs with moderate deflection angles ${\sim} 10\degree$ will be most useful, provided that the original anisotropy is significant on intermediate scales. First hints in this direction have been delivered from anisotropy analysis at the highest measured energies \citep{Abbasi:2014lda,AugerStarburst}, which suggests that indeed UHECRs with $E/eZ > 10\,\EV$ exist. Together with theoretical UHECR priors and constraints from multi-messenger signals (see \ref{UHECR:prior}), it is therefore the variety of the information contained in the UHECR spectrum that can help to break degeneracies in GMF model optimisations.
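The magnitude of such deflections can be checked with a back-of-envelope estimate (our sketch, not taken from the cited propagation codes; the field strength and coherent path length are assumed values):

```python
import math

c = 2.99792458e8           # speed of light [m/s]
kpc = 3.0857e19            # kiloparsec [m]

rigidity_V = 1.0e19        # rigidity E/eZ = 10 EV, expressed in volts
B = 3.0e-10                # assumed field strength: 3 microgauss in tesla

# Larmor radius r_L = E / (Z e B c); in terms of rigidity R = E/(Z e),
# this is simply R / (B c).
r_larmor = rigidity_V / (B * c)                  # [m], a few kpc

# Small-angle deflection accumulated over an assumed 1 kpc coherent path.
deflection_deg = math.degrees(kpc / r_larmor)    # theta ~ L / r_L
```

The resulting Larmor radius of a few kpc and deflection of order $10\degree$ are consistent with the moderate deflection angles quoted above for $E/eZ = 10\,\EV$.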
\subsection{Galactic priors}
\label{ss:galactic_priors}
As discussed in \ref{sec:Bayes:prior}, a prior reflects our knowledge before the observation of the data and is particularly important for non-parametric GMF modelling (see \ref{ss:non-para-models}). Here we describe how we will formulate such priors for our galaxy.
\subsubsection{Parameter priors \label{sec:GMF_par_priors}}
Parametrised models for the magnetic field structure already encode the constraints applied to describe the problem, e.g., by assuming a spiral disk morphology or requiring $\nabla\cdot\vec{B} = 0$.
In heuristic GMF models, priors on the values of individual parameters can be either non-informative or constrained; the latter can make the analysis computationally feasible or reduce degeneracies in parameter space. They may also be used to include empirical constraints obtained from earlier optimisations, potentially also with different parametrisations. Because the results depend on the choices made at the start regarding both analytic model formulations and parameter priors, it will be important to test a number of different possibilities and to explore their mutual consistency.
A somewhat different situation occurs in dynamo models of the GMF, which are primarily based on an underlying physical process rather than a set of morphological assumptions. As described in \ref{ss:Para_phys}, these are based on
the expansion of the large-scale magnetic field over a basis of
orthogonal eigenfunctions of the mean-field dynamo equation.
Since the functional basis is complete,
any magnetic field, whether or not produced by the dynamo, can be
represented as a superposition of the eigenfunctions. Therefore,
an alternative use of these functions is to represent
any magnetic configuration of interest in terms of a tractably small
number of parameters. The resulting magnetic field will be physically
realisable, as it is by construction a solution of the induction equation. This then permits us to construct a prior for it. Different modes have different growth or decay rates. It is much more likely to observe a strong excitation of a growing mode than of a decaying mode for the simple reason that the latter are eliminated by the dynamics. Thus the eigenvalues of the dynamo modes can be turned into probabilistic statements of GMF field configurations.
\subsubsection{Field priors}
\label{sss:field_priors}
For a non-parametric GMF reconstruction, priors are essential to tame
the (in principle) unbounded number of degrees of freedom beyond
the finite number of constraints provided by the data. Here, certain
field smoothness assumptions can be applied, where the field
roughness can be learned from the data themselves, as was shown to
work well theoretically
\citep{2010PhRvE..82e1112E, 2011PhRvD..83j5014E,2011PhRvE..84d1118O,2013PhRvE..87c2136O,2016arXiv161208406E} and in practice
\citep{2011A&A...530A..89O, 2012A&A...542A..93O, 2015A&A...581A..59J, 2015A&A...575A.118O, 2015A&A...581A.126S, 2016A&A...590A..59G, 2016arXiv160504317G,2016A&A...586A..76J,2016A&A...591A..13V}. Furthermore, magnetic fields are solenoidal,
$\vec{\nabla} \cdot \vec{B} = 0$, which removes a third of the
field's degrees of freedom in Fourier space. Finally, the GMF is a
result of an MHD dynamo, obeying the MHD equations and being driven
by ISM gas flows.
One can thus use the dynamo equation to quantify, for each possible field configuration, how transient it is. As it is much more likely to observe at a given instant a long-lasting configuration than a very transient one, the former should be assigned a larger prior probability.
Given the current theoretical understanding, configurations of short duration must nevertheless be considered. The unknown correlation structure or, equivalently, the unknown field power spectrum then represents a latent, or hidden, variable that should be inferred simultaneously but that also guides the field reconstruction. For example, the GMF power spectrum is shaped by the dynamo equation and the mean velocity field. The properties of these can also be regarded as latent variables that could and should be inferred as well.
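The solenoidality constraint mentioned above is straightforward to impose in Fourier space. The following sketch (in 2D for brevity; the counting of one third of the degrees of freedom in the text applies to the 3D case) projects a random vector field onto its divergence-free part via $\vec{B}_k \to \vec{B}_k - \vec{k}\,(\vec{k}\cdot\vec{B}_k)/k^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Random (non-solenoidal) vector field components in Fourier space.
Bx = np.fft.fft2(rng.normal(size=(n, n)))
By = np.fft.fft2(rng.normal(size=(n, n)))

# Wavevector components on the same grid.
kx = np.fft.fftfreq(n)[:, None] * np.ones((1, n))
ky = np.fft.fftfreq(n)[None, :] * np.ones((n, 1))
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                  # avoid division by zero at k = 0

# Remove the longitudinal (compressive) part: B_k -> B_k - k (k.B_k)/k^2.
kdotB = (kx * Bx + ky * By) / k2
Bx_sol = Bx - kx * kdotB
By_sol = By - ky * kdotB

# The Fourier-space divergence, proportional to k.B, should now vanish.
div = kx * Bx_sol + ky * By_sol
max_div = np.abs(div).max()
```

The projection removes exactly one scalar degree of freedom per Fourier mode, which in 3D corresponds to the one-third reduction quoted in the text.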
Power spectra estimation and the exploitation of such spectra to improve
reconstructions are by now standard operations of IFT algorithms,
as used for constructing all-sky maps from gamma-ray data (D$^3$PO; \citealt{2015A&A...574A..74S,2015A&A...581A.126S}), for radio
interferometry (RESOLVE; \citealt{2015A&A...581A..59J, 2016A&A...586A..76J,
2016arXiv160504317G}), and for non-parametric
Galactic tomography \citep{2016A&A...590A..59G}.
\subsection{Extragalactic priors}
\label{ss:extragalactic_priors}
\done{Information gathered from outside our galaxy can also be important to formulate Bayesian priors to be used in \textit{IMAGINE}, as outlined in the following.}
\subsubsection{Galaxies}
\label{sss:Galaxies}
Idealised cosmological simulations of
forming galaxies will never {\it exactly} reproduce the phases and amplitudes of
the CR and magnetic field distributions in our Galaxy. However, they will be invaluable
tools that enable predicting the (higher-order) statistics of these
distributions, which can then be used to provide useful Bayesian priors for
inference frameworks such as \textit{IMAGINE}\ presented here.
An important aspect of a useful Galaxy model is the mass distribution, both near the Galactic centre and within the DM halo.
In particular, such questions as the plausible or admissible form of the rotation curve at small Galacto-centric distances depend on the
inner DM density profile and the presence of a cusp. The structure of the DM halo and its substructures (e.g., the abundance and properties of sub-halos) can affect the large-scale properties of the interstellar gas flows and distribution. Another aspect of galaxy evolution that affects the formulation of the priors is related to magnetic field estimates in Mg\textsc{ii} absorbers at high redshift \citep{Bernet2008}.
In order to translate them into useful constraints on the Galactic magnetic field (in particular, in the Galactic halo), a clear understanding of galactic evolution is required.
Another rich source of information is observations of nearby galaxies. Detailed observations of synchrotron emission and Faraday rotation are available for dozens of spiral galaxies. The spatial distributions of these tracers along and across galactic discs and in their haloes (\citealt{2015A&ARv..24....4B} and references therein) provide rich constraints on the likely and unlikely forms of the Galactic magnetic field.
\subsubsection{UHECR source and arrival direction distribution}
\label{UHECR:prior}
Although the origin of UHECRs is still an unsolved problem, the enormous energies of these particles constrain the options to a few physically reasonable scenarios. All of them make unique predictions for the relation between the spectrum, chemical composition, and arrival directions; these predictions can be used as priors for UHECR deflection studies. The key lies in the so-called Hillas plot,
which summarises the possible sources of UHECRs in a diagram of size $R$ vs.\ magnetic field strength $B$ through the relation
$BR \ge E_{\rm max}/eZ$ \citep{1984ARA&A..22..425H}.
The traditional way to interpret this plot is to see it as an empirical collection of distinct source classes. Listed from smallest to largest linear sizes, these are mainly: pulsar wind shocks \citep{Kotera:2015pya,Lemoine:2015ala}, tidal disruption events \citep{AlvesBatista:2017shr,Biehl:2017hnb,Guepin:2017abw}, gamma-ray bursts \citep{1995PhRvL..75..386W,1998AIPC..428..776R,Globus:2014fka,2017arXiv171209984Z}, radio galaxies and AGN \citep{1993A&A...272..161R,1993A&A...273..377R,2008arXiv0808.0349R,Farrar:2008ex,2018JCAP...02..036E}, and large-scale accretion shocks \citep{1995ApJ...454...60N,Kang:1995xw,Kang:1996rp}. All these models make predictions for the spectrum, composition and anisotropy of UHECRs depending on parameters constrained by other astronomical observations and can be compared in our analysis via their Bayesian evidence in view of all available constraints.
Another way is to see the Hillas plot as the result of a general, underlying non-thermal process responsible for the production of UHECRs and to relate the contribution of various source types to the properties of cosmological structure formation. This approach allows us to understand some general features of UHECRs, in particular their observed maximum energies as well as the structure of the Hillas plot \citep{Rachen:2015Texas}, but also allows us to construct a parametrised prior for UHECR origin if combined with reconstructions of DM-driven large-scale structure formation.
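The Hillas criterion quoted above can be evaluated numerically. The following is a minimal sketch in Gaussian cgs units; the function name and the example numbers are ours, chosen for illustration only and not part of the \textsw{IMAGINE}\ codebase.

```python
def hillas_emax_eV(B_gauss, R_cm, Z=1):
    """Maximum attainable energy E_max = Z e B R for a source of size R
    and magnetic field strength B (Hillas criterion, Gaussian cgs units).
    Returns E_max in eV."""
    e_esu = 4.803e-10        # elementary charge [esu]
    erg_per_eV = 1.602e-12   # 1 eV expressed in erg
    return Z * e_esu * B_gauss * R_cm / erg_per_eV

# Illustrative example: a radio-galaxy hot spot with B ~ 100 microgauss
# and R ~ 1 kpc confines protons up to roughly 10^20 eV.
kpc_cm = 3.086e21
emax = hillas_emax_eV(B_gauss=1e-4, R_cm=kpc_cm)
```

Note that $E_{\rm max}$ scales linearly with the charge number $Z$, which is why heavy nuclei can reach higher energies in the same source class.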
Observational constraints on extragalactic UHECR priors may be constructed from multi-messenger information like the arrival directions of cosmic neutrinos \citep{Aartsen:2014gkd} or gamma-rays \citep{Acero:2015hja}.
It should be noted, however, that the relation between UHECR emission and the production of secondaries is highly non-trivial and requires detailed modelling of source properties like acceleration sites, mechanisms and target densities, as well as orientation effects. \done{Independent of such ambiguities regarding direct gamma-ray emission from the sources, future $\TeV$ gamma-ray observatories may be able to trace UHECR production in the Universe via the cosmogenic gamma-ray halos that arise from interactions of these particles with cosmic backgrounds \citep{GA05,KAL11}.}
\section{IMAGINE software design}
\label{s:SD}
We have discussed extensively the scientific background and goals of the \textit{IMAGINE}\ project and the mathematical and scientific tools we will use. Here, we give an overview of the software components of the \textsw{IMAGINE}\ pipeline we have developed. For further details, \done{tests, and} its first application, see \citet{velden2017thesis,steininger2018}.
\subsection{Design overview}
\input{imagine_structure_fig}
The structure of the \textsw{IMAGINE}\ pipeline is shown in \ref{fig:imagine_structure}. The observables (\ref{ss:OI}) are compared to model predictions (\ref{ss:Para} and \ref{ss:non-para-models}), and the likelihood (\ref{sec:Bayes:likelihood}) is assessed, including prior information (\ref{ss:galactic_priors} and \ref{ss:extragalactic_priors}), using Bayesian statistics (\ref{sec:Bayes:inference}).
\textsw{IMAGINE}\ is a pipeline framework rather than a software application, since one of its core concepts is extensive modularity. In the context of \textsw{IMAGINE}, the Galaxy is described by a set of physical fields, e.g., the Galactic magnetic field, the thermal electron density, or the dust density.
Each of these fields is fully described by its own independent degrees of freedom.
For a certain parametric GMF model, for example, a few dozen parameters might be sufficient.
In contrast, a non-parametric GMF model has as many degrees of freedom as voxels, e.g., $2^{24}$ for a box with a resolution of $256 \times 256 \times 256$.
From the pipeline's point of view, it is irrelevant which constituents make up this abstract Galaxy or how many dimensions the parameter space of the constituents has.
The individual parts of the \textsw{IMAGINE}\ framework, shown in \ref{fig:imagine_structure}, are described in the following.
The central object for the parameter space exploration is the \texttt{pipeline} object that
coordinates the likelihood calculations and evaluations.
Its most important counterpart is the \texttt{sampler}, which the \texttt{pipeline} provides with an abstract \texttt{likelihood} and a \texttt{prior} functional.
Currently, \textsw{PyMultiNest}\ \citep{2014A&A...564A.125B}, based on the nested sampling algorithm of \citet{skilling2006}, is used as the default sampler.
The \texttt{sampler} triggers the likelihood evaluation for each point in a normalised parameter space through the following steps.
\begin{itemize}
\item First, the parameter set given by the \texttt{sampler} is used to generate a certain Galaxy model realisation.
In practice this means creating the dynamic constituents that our abstract Galaxy model consists of, e.g., a Galactic magnetic field, a thermal electron density field, a dust density field, etc.;
\item The \texttt{Galaxy-instance} is then processed by an \texttt{observable-generator} that simulates physical \texttt{observables} like rotation measures or maps of the thermal dust emission;
\item That model is then assessed by a \texttt{likelihood} functional that compares the simulated \texttt{ob\-servables} to measured \texttt{data};
\item The \texttt{pipeline} can store the resulting likelihood value in a \texttt{repository} to allow for caching and post-processing of the calculated data before it is forwarded to the \texttt{sampler}.
\end{itemize}
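The likelihood-evaluation cycle in the steps above can be sketched as follows. All names here are illustrative stand-ins rather than the actual \textsw{IMAGINE}\ API, and a Gaussian likelihood with diagonal noise variance is assumed for simplicity.

```python
import numpy as np

def log_likelihood(params, generate_galaxy, simulate_observables, data, noise_var):
    """One likelihood evaluation as triggered by the sampler:
    parameters -> Galaxy realisation -> simulated observables -> comparison
    to measured data (Gaussian, with diagonal noise variance)."""
    galaxy = generate_galaxy(params)            # Galaxy-generator step
    model_obs = simulate_observables(galaxy)    # observable-generator step
    residual = data - model_obs
    return -0.5 * np.sum(residual**2 / noise_var)

# Toy stand-ins for the pipeline modules:
gen = lambda params: {"B0": params[0]}              # "Galaxy" with one field amplitude
sim = lambda galaxy: galaxy["B0"] * np.ones(3)      # trivial observable generator
data = np.ones(3)
ll_good = log_likelihood([1.0], gen, sim, data, noise_var=np.ones(3))
ll_bad = log_likelihood([2.0], gen, sim, data, noise_var=np.ones(3))
```

The separation of generator, simulator and likelihood mirrors the modularity of the pipeline: any of the three callables can be swapped without touching the others.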
Although a \texttt{Galaxy-instance} is basically determined by the parameters of its constituents, the constituents can possess random components.
For example, the Galactic magnetic field has a large stochastic component.
Because of this, instead of just creating one \texttt{Galaxy-instance}, the \texttt{Galaxy-generator} actually creates a whole ensemble of instances that share the same parametrisation.
After being processed by the \texttt{observable-generator}, the ensemble is then used by the \texttt{likelihood} functional to estimate the Galactic covariance.
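The ensemble-based covariance estimate described above amounts to a plain sample covariance over the Galaxy realisations. A sketch with \textsw{NumPy}\ follows; the actual \textsw{IMAGINE}\ estimator may differ in detail.

```python
import numpy as np

def ensemble_covariance(observables):
    """Estimate the Galactic covariance from an ensemble of simulated
    observables that share the same parametrisation.
    observables: array of shape (n_ensemble, n_data)."""
    obs = np.asarray(observables)
    dev = obs - obs.mean(axis=0)
    # unbiased sample covariance over the ensemble members
    return dev.T @ dev / (obs.shape[0] - 1)
```

For small ensembles this estimator is noisy, which is one reason the ensemble size (dozens to hundreds of instances) matters in practice.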
\begin{figure}[tbp] \centering
\begin{tikzpicture}[>=stealth,thick,every node/.style={font=\relsize{0.9}, draw, shape=rectangle,rounded corners,align=center, anchor=center},scale=0.966]
\node at (3, 2) (imagine) {\LARGE IMAGINE};
\node at (0, 0) (nifty) {NIFTy};
\node at (3, 0) (hammu) {Hammurabi};
\node at (6.5, 0) (pymultinest) {PyMultiNest};
\node at (7, -2) (multinest) {MultiNest};
\node at (5, -3) (healpix) {HEALPix};
\node at (-2.5, -2) (numpy) {NumPy};
\node at (2.5, -2.5) (fftw) {FFTW3};
\node at (0, -3) (d2o) {D2O};
\draw[->] (healpix) to[out=90,in=-60] (nifty);
\draw[->] (numpy) to[out=90,in=-120] (nifty);
\draw[->] (fftw) to[out=90,in=-80] (nifty);
\draw[->] (d2o) to[out=90,in=-100] (nifty);
\draw[->] (fftw) to[out=90,in=-100] (hammu);
\draw[->] (healpix) to[out=90,in=-80] (hammu);
\draw[->] (multinest) to[out=90,in=-85] (pymultinest);
\draw[->] (nifty) to[out=90,in=-120] (imagine);
\draw[->] (hammu) to[out=90,in=-90] (imagine);
\draw[->] (pymultinest) to[out=90,in=-60] (imagine);
\end{tikzpicture}
\caption{The building blocks of the \textsw{IMAGINE}\ pipeline framework.}
\label{fig:imagine_building_blocks}
\end{figure}
The computational effort to process a whole ensemble of dozens or even hundreds of \texttt{Galaxy-instance}s for each position in parameter space can be mitigated via parallelisation.
For convenient data processing in general and efficient data parallelisation in particular, the \textsw{IMAGINE}\ framework is built on the software packages \textsw{NIFTy\,3}\ \citep{2017arXiv170801073S} and \textsw{D2O}\ \citep{SGBE16}, respectively.
An overview of the framework's building blocks is given in \ref{fig:imagine_building_blocks}.
\subsection{\done{The hammurabi code}}
The \textsw{hammurabi}\ code\footnote{\url{http://sourceforge.net/projects/hammurabicode/}} \citep{waelkens:2009} is an astrophysical simulator based on 3D models of the components of the magnetised ISM such as magnetic fields, thermal electrons, relativistic electrons, and dust grains. It performs an efficient line-of-sight integration through the simulated Galaxy model using a \textsw{HEALPix}\footnote{\url{http://healpix.sourceforge.net}}-based nested grid to produce observables such as Faraday rotation measure and diffuse synchrotron and thermal dust emission in full Stokes $I$, $Q$ and $U$, while taking into account beam and depth depolarisation as well as Faraday effects. This modular code allows new analytic models for each component to be added easily, or alternatively an external file can be given that specifies the model in a binary grid. The public version already includes relatively simple field models such as the axisymmetric spiral with reversed ring of Sun et~al. \citep{sun08} and more complicated models such as Jansson \& Farrar \citep{jansson12b}, which includes spiral arm segments and an X-shaped halo field. For small-scale turbulence, the code includes a Gaussian random field simulator to add a simple random component to any analytic mean-field model. This code is the basis of one of the engines of the \textsw{IMAGINE}\ machinery, converting parametric models for the magnetised ISM into observables that can then be compared to data in the likelihood evaluation.
An updated version, \textsw{hammurabi\,X}\ (Wang et al., in preparation), is currently under development in order to achieve the higher computing performance required by \textsw{IMAGINE}.
Previously in \textsw{hammurabi}, the generation of the anisotropic component of the random field, as well as the rescaling of the field strength following various parametric forms, led to an unphysical (non-zero) divergence.
Now we propose two novel solutions for simulating random/turbulent magnetic fields.
On Galactic scales, a triple Fourier transform scheme is proposed to restore the divergence-free condition via a cleaning method.
Alternatively, in the local Solar neighbourhood,
a vector-field decomposition scheme is capable of simulating a more detailed turbulent field power-spectrum.
In addition to these new field generators, the simulation accuracy has been improved with a calibrated trilinear interpolation algorithm and the implementation of Simpson's $1/3$ rule in line-of-sight integration.
The input and output control and \textsw{Python}\ wrapper are further developed to ensure an efficient interface with \textsw{IMAGINE}.
Furthermore, in the future, \textsw{hammurabi\,X}\ will be extended for application to IFT tomographic GMF reconstructions, \ie\ non-parametric modelling as described in \ref{ss:non-para-models}. For this purpose, the derivative of the simulated observables with respect to the field values has to be computed, \ie\ the linearised response of the simulated data to small model changes. For the reverse inference from the data to the field configuration, we also require the adjoint matrix to express how a mismatch between the real and simulated data sets tends to pull the model. Since parametric models can be regarded as being embedded within the space of all non-parametric models, an efficient gradient-based optimisation of parametric models will also become feasible using this extension; the linearised response operator then uses the differential relation between the 3D field configuration and the model parameters.
\subsection{Sampler}
An important part of the \textit{IMAGINE}\ project is drawing robust and accurate conclusions from observations subject to a variety of systematic as well as stochastic uncertainties. Depending on the quality of the real data and the complexity of the respective data models, we expect to encounter counter-intuitive interdependencies and degeneracies among inference quantities, nuisance parameters and noise properties. Quantifying these effects requires the joint and fully self-consistent treatment of all these quantities within a rigorous information theoretical inference approach.
Various MCMC techniques (discussed generically in \ref{sec:Bayes:num_bayes}) are known to efficiently explore high-dimensional parameter spaces (see e.g., \citealt{brooks2011handbook}). The numerical and statistical efficiency of this algorithm crucially depends on the design of proper transition kernels. Optimal transitions, obeying the detailed balance criterion, can be designed with the Metropolis-Hastings procedure \citep{txt:Metropolis,hastings1970}, the basis of almost any modern MCMC algorithm. Different MCMC approaches mostly only differ in the design of transition kernels that are optimal for different target posterior distributions. The \texttt{sampler} module of the \textsw{IMAGINE}\ pipeline can easily exploit any state-of-the-art MCMC technique by simply connecting it with the corresponding software library. In particular the \textsw{IMAGINE}\ pipeline can exploit available inference packages such as \textsw{PyMC}\ \citep{pymc2015}, \textsw{PyMC\,3}\ \citep{pymc3_2016} and \textsw{STAN}\ \citep{stan:2017}. This immediately enables a user of the \textsw{IMAGINE}\ pipeline to access various sampling algorithms ranging from random walk Metropolis-Hastings and Gibbs sampling to Hamiltonian Monte Carlo sampling \citep{txt:Metropolis,hastings1970,Geman_1984,1987PhLB..195..216D,Neal2012}.
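As a concrete illustration of the Metropolis-Hastings transition kernel referred to above, the following is a minimal random-walk sampler. It is illustrative only; in practice the \texttt{sampler} module delegates this to libraries such as \textsw{PyMC}\ or \textsw{STAN}.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal,
    which satisfies detailed balance with respect to exp(log_post)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)
```

Run on a standard normal log-posterior, the chain's sample mean and standard deviation converge to 0 and 1, respectively.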
As discussed above, the literature provides a plenitude of rival models that need to be compared and judged with respect to the data. In Bayesian parlance, we will have the data decide among models. This task is different from parameter inference, since it requires a judgement of the validity of models independently of their respective model parameters. This task therefore amounts to performing Bayesian model comparison.
A technical challenge of Bayesian model comparison is the numerical determination of the evidence. This is an active field of research and has not yet been conclusively solved. Nevertheless, for moderate dimensionality the task of numerically estimating the evidence of the posterior distribution can be solved by performing nested sampling (see e.g., \citealt{Skilling04}). The \textsw{IMAGINE}\ pipeline can readily use several publicly available software packages implementing the nested sampling algorithm \citep{2014A&A...564A.125B,2015MNRAS.450L..61H,2015MNRAS.453.4384H}. In particular the present implementation of the \textsw{IMAGINE}\ framework uses the \textsw{PyMultiNest}\ library for the evidence calculation \citep{steininger2018}. From a users' perspective, Bayesian model comparison can now easily be performed by running the \textsw{IMAGINE}\ pipeline with several different models of the magnetic field and comparing estimated evidence values for respective models, as discussed in \ref{s:AM}.
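The quantity that nested sampling estimates, the evidence $Z=\int \mathcal{L}(\theta)\,\pi(\theta)\,\mathrm{d}\theta$, can be made explicit with a brute-force Monte Carlo sketch. This is vastly less efficient than \textsw{PyMultiNest}, but it shows what is being computed; all names below are ours.

```python
import numpy as np

def log_evidence_mc(log_like, prior_sample, n=200000, rng=None):
    """Brute-force Monte Carlo estimate of the Bayesian evidence
    Z = integral of L(theta) pi(theta) d theta, via Z ~ mean of L over
    prior draws. Nested sampling does this far more efficiently."""
    if rng is None:
        rng = np.random.default_rng(1)
    theta = prior_sample(rng, n)
    ll = log_like(theta)
    # log-mean-exp for numerical stability
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))
```

For a toy model with a uniform prior on $[-5,5]$ and a unit-variance Gaussian likelihood centred on a datum at zero, the analytic evidence is very close to $1/10$, which the estimator recovers.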
\subsection{NIFTy}
\label{ss:nifty}
\done{Information field theory \citep{2009PhRvD..80j5005E} is information theory for fields and therefore the ideal language to phrase and solve non-parametric field inference problems
(see e.g., \citealt{2010PhRvE..82e1112E, 2011PhRvD..83j5014E,2011PhRvE..84d1118O, 2013PhRvE..87c2136O, 2016arXiv161208406E}) that performs well in practice (e.g., \citealt{2011A&A...530A..89O, 2012A&A...542A..93O, 2015A&A...581A..59J,2015A&A...575A.118O, 2015A&A...581A.126S, 2016A&A...590A..59G, 2016arXiv160504317G,2016A&A...586A..76J,2016A&A...591A..13V}).
}
These numerous applications of IFT to real world problems were possible thanks to the \textit{numerical information field theory} (\textsw{NIFTy}, \citealt{2013A&A...554A..26S, 2013ascl.soft02013S}) package. \textsw{NIFTy}\ permits the direct implementation of IFT equations by providing the concepts of spaces, fields living over the spaces, and operators acting on the fields in a transparent, abstract and object oriented manner to the programmer, thereby alleviating the need to think about the properties of a chosen space pixelisation.
In order to be prepared for the task of reconstructing the 3D GMF, \textsw{NIFTy}\ was recently parallelised \citep{SGBE16} and rewritten \citep{2017arXiv170801073S} such that it can comfortably deal with vector-valued fields.
For a non-parametric GMF reconstruction, both the handling of large data sets and the vector nature of the magnetic fields need to be mastered. For both challenges, preparations are under way.
\section{IMAGINE Consortium}
\label{s:IC}
The IMAGINE Consortium\ was conceived as an informal collaboration of astronomers, astrophysicists, experimental and theoretical physicists and applied mathematicians who share
interest and expertise in the broad range of problems related to
Galactic magnetic fields.
The collaboration that has developed into the IMAGINE Consortium\ started with the
International Team Meeting, \textit{``Bayesian modelling of the Galactic Magnetic Field
Constrained by Space and Ground-Based Radio-Millimetre and Ultra-High Energy Cosmic Ray Data''} hosted in 2014/15 by the International Space Science Institute in Bern, Switzerland\footnote{\url{http://www.issibern.ch/teams/bayesianmodel}}. The next milestone was a
workshop hosted in 2017 by the Lorentz Center in Leiden, the Netherlands\footnote{\url{http://www.lorentzcenter.nl/lc/web/2017/880//info.php3?wsid=880&venue=Snellius}}, titled \textit{``A Bayesian
View on the Galactic Magnetic Field: From Faraday Rotation to Ultra-High Energy Cosmic Ray Deflections''}, where the IMAGINE Consortium\ was formally founded. The goals of the consortium are:
\begin{itemize}
\item \done{to reveal the 3D structure of the Galactic magnetic field;}
\item \done{to achieve a comprehensive understanding of the non-thermal ISM joining the observational and theoretical efforts
of the participants in the studies of the ISM, GMF and cosmic rays;}
\item \done{to support theoretical and experimental efforts to identify the sources of UHECRs;}
\item \done{to develop novel methods to infer the GMF and other ISM components that exploit and fuse information from observations, theory, and simulations;}
\item \done{to develop, maintain and promote the \textsw{IMAGINE}\ pipeline as a standard Bayesian framework;}
\item to encourage interaction between distinct research areas and cross-disciplinary knowledge transfer.
\end{itemize}
\done{The IMAGINE Consortium\ is led by a PI team consisting of Fran\c{c}ois Boulanger, Torsten En{\ss}lin, Marijke Haverkorn, J\"org H\"orandel, Tess Jaffe, Jens Jasche, J\"org Paul Rachen and Anvar Shukurov, and} is open to new members.
Information about the consortium activities and information on how to become a member will soon be available at the \textit{IMAGINE}\ webpage.\footnote{\url{https://www.astro.ru.nl/imagine/}}
\acknowledgments
We thank the International Space Science Institute in Bern, Switzerland, and the Lorentz Center in Leiden, the Netherlands, for the hospitality and financial support \done{that has led to the founding of the IMAGINE Consortium.} We thank Sebastian Hutschenreuter for providing \ref{fig:PMF}\jpr{, and Franco Vazza for providing \ref{fig:egmf_Hackstein} based on simulations done with the ENZO code\footnote{\url{http://cosmosimfrazza.myfreesites.net/erc-magcow}} \citep{0264-9381-34-23-234001}.} TE and TS acknowledge partial support by the DFG Cluster of Excellence ``Origin and Structure of the Universe'' and by the DFG Research Unit 1254 ``Magnetisation of Interstellar and Intergalactic Media -- The Prospects of Low-Frequency Radio Observations''. AF, LFSR and AS thank STFC (ST/N00900/1) and The Leverhulme Trust (RPG-2014-427) for funding. PG and CP acknowledge support by the European Research Council under ERC-CoG grant CRAGSMAN-646955.
BR-G acknowledges support from the European Union's Horizon 2020 project RADIOFOREGROUNDS under grant agreement number 687312. GS is supported by the DFG through collaborative research centre SFB 676 ``Particles, Strings and the Early Universe'' and by the Bundesministerium
für Bildung und Forschung (BMBF) through grant 05 A17GU1. TS was supported by the Studienstiftung des deutschen Volkes. AvV acknowledges financial support from the NWO astroparticle physics grant WARP.
\bibliographystyle{JHEP_IMAGINE}
\section*{Introduction}
Let $F\subset T\subset V$ be the usual Thompson's groups acting on the unit interval, see \cite{Cannon-Floyd-Parry96}.
In \cite{Jones16-Thompson} it was shown that certain categories $\mathcal C$ with a privileged object $1\in \mathcal C$ give rise to a group of fractions $G_\mathcal C$ and that a functor $\Phi:\mathcal C\to\mathcal D$ provides an action of the group $G_\mathcal C$.
Similar ideas in the context of semigroups were developed by Ore, see for instance \cite{Maltsev53}.
Thompson's groups $F,T,V$ can be constructed in this way using various categories of forests written $\mathcal F,\mathcal{AF},\mathcal{SF}$.
If $\mathcal D=\Hilb$ is the category of Hilbert spaces with isometries for morphisms, then the functor $\Phi$ gives us a unitary representation of $G_\mathcal C$.
In this article, we consider functors $\Phi:\mathcal F\to\Hilb$ such that $\Phi(n) = \mathfrak H^{\otimes n}$ and $\Phi(f_{i,n})=\id^{\otimes i-1}\otimes R\otimes \id^{\otimes n-i}$ where $R:\mathfrak H\to\mathfrak H\otimes \mathfrak H$ is a fixed isometry and $f_{i,n}$ is the forest with $n$ trees all of which are trivial except the $i$th one that has two leaves.
Since any tree is a composition of $f_{i,n}$ we obtain a well defined functor and thus a unitary representation of $F$ that we extend to $V$ via permutations of the tensors, see Section \ref{sec:def} for more details.
In Section \ref{sec:T}, we construct a unitary representation of $V$ having an almost invariant vector but no nonzero $[F,F]$-invariant vectors, showing that no intermediate subgroup between $[F,F]$ and $V$ has Kazhdan's property (T) \cite{Kazhdan67-T}.
Note that $T$ was first proved not to have property (T) by Reznikov \cite{Reznikov01}; this also follows from the work of Ghys-Sergiescu and Navas \cite{Ghys-Sergiescu87,Navas02}.
In Section \ref{sec:H}, we give a one-parameter family $(\pi_\alpha, 0\leq \alpha\leq 1)$ of unitary representations of $V$ interpolating between the trivial and the left regular representations.
From these representations we define positive definite maps $\varphi_\alpha$ whose values on $T$ we can compute explicitly.
They actually coincide on $T$ with the family of maps constructed from the proper cocycle of Farley, see Remark \ref{rem:Farley}, but differ on the larger group $V$.
Hence, they provide a $C_0$ positive definite approximation of the identity for $T$, thus recovering Farley's result that $T$ has the Haagerup property (though Farley actually proved it for all of $V$) \cite{Akemann-Walter81, Farley03-H}.
\subsection*{Acknowledgement}
We thank the generous support of the New Zealand Mathematics Research Institute and the warm hospitality we received in Raglan which made this work possible.
\section{Definitions and notations}\label{sec:def}
We briefly recall the construction of actions of groups of fractions for the particular cases of $F,T,$ and $V$ and refer to \cite{Jones16-Thompson} and \cite{Cannon-Floyd-Parry96} for more details.
Let $\mathcal F$ be the category of (binary planar) forests whose objects are the natural numbers $\mathbf{N}:=\{1,2,\cdots\}$ and morphisms $\mathcal F(n,m)$ the set of binary planar forests with $n$ roots and $m$ leaves.
We think of them as planar diagrams in the plane $\mathbf R^2$ whose roots and leaves are distinct points in $\mathbf R\times\{0\}$ and $\mathbf R\times \{1\}$ respectively, counted from left to right.
We compose forests by stacking them vertically so that $p\circ q$ is the forest obtained by stacking on top of $q$ the forest $p$ where the $i$th root of $p$ is attached to the $i$th leaf of $q$.
We obtain a diagram in the strip $\mathbf R\times [0,2]$ that we rescale in $\mathbf R\times [0,1].$
For any $n\in \mathbf{N}$ and $1\leq i\leq n$ we consider the forest $f_{i,n}$, or simply $f_i$ if the context is clear, with $n$ roots and $n+1$ leaves, whose trees are all trivial except the $i$th one, which has two leaves.
For example $$f_{2,4}=\forestftwo\ .$$
Consider the set of pairs of \emph{trees} $(t,s)$ with the same number of leaves, that we quotient by the relation generated by $(t,s)\sim (p\circ t , p\circ s)$ for any forest $p$.
We write $\frac{t}{s}$ for the equivalence class of $(t,s)$.
These form the group of fractions of the category $\mathcal F$ with the multiplication $\frac{t}{s}\cdot \frac{s}{r} = \frac{t}{r}$ and inverse $(\frac{t}{s})^{-1}=\frac{s}{t}$.
It is isomorphic to Thompson's group $F$.
Now consider the category of symmetric forests $\mathcal{SF}$ with objects $\mathbf{N}$ and morphisms $\mathcal{SF}(n,m)=\mathcal F(n,m)\times S_m$ where $S_m$ is the symmetric group of $m$ elements.
Graphically we interpret a morphism $(p,\tau)\in\mathcal{SF}(n,m)$ as the concatenation of two diagrams.
On the bottom we have the diagram explained above for the forest $p$ in the strip $\mathbf R\times [0,1].$
The diagram of $\tau$ is the union of $m$ segments $[x_i , x_{\tau(i)}+(0,1)], i=1,\cdots,m$ in $\mathbf R\times [1,2]$ where the $x_i$ are $m$ distinct points in $\mathbf R\times \{1\}$ such that $x_i$ is on the left of $x_{i+1}.$
The full diagram of $(p,\tau)$ is obtained by stacking the diagram of $\tau$ on top of the diagram of $p$ such that $x_i$ is the $i$th leaf of $p$.
Given symmetric forests $(q,\tau)\in\mathcal{SF}(n,m), (p, \sigma )\in \mathcal{SF}(m,l)$, let $l_i$ be the number of leaves of the $i$th tree of $p$, then we define the composition of morphisms as follows:
$$(p,\sigma)\circ (q,\tau) := ( \tau(p) \circ q , \sigma S( p , \tau ) ),$$
where $\tau(p)$ is the forest obtained from $p$ by permuting its trees such that the $i$th tree of $\tau(p)$ is the $\tau(i)$th tree of $p$ and $S(p,\tau)$ is the permutation corresponding to the diagram obtained from $\tau$ where the $i$th segment $[x_i , x_{\tau(i)} + (0,1)]$ is replaced by $l_{\tau(i)}$ parallel segments.
Thompson's group $V$ is isomorphic to the group of fractions of the category $\mathcal{SF}$.
Hence, any element of $V$ is an equivalence class of a pair of symmetric \emph{trees}.
Consider $g=\frac{(t,\tau)}{(s,\sigma)} \in V$ and the standard dyadic partitions $(I_1,\cdots,I_n)$ and $(J_1,\cdots,J_n)$ of $[0,1]$ associated to the trees $s$ and $t$ respectively. The element $g$ acting on $[0,1]$ is the unique piecewise linear function with constant slope on each $I_k$ that maps $I_{\sigma^{-1}(i)}$ onto $J_{\tau^{-1}(i)}$ for any $1\leq i\leq n.$
Consider the cyclic group $\mathbf{Z}/m\mathbf{Z}$ as a subgroup of the symmetric group $S_m$ and the subcategory $\mathcal{AF}\subset \mathcal{SF}$ of \emph{affine} forests where $\mathcal{AF}(n,m)=\mathcal F(n,m)\times \mathbf{Z}/m\mathbf{Z}$.
The group of fractions of $\mathcal{AF}$ is isomorphic to Thompson's group $T$.
We will often identify $\mathcal F$ and $\mathcal{AF}$ as subcategories of $\mathcal{SF}$ giving embeddings at the group level $F\subset T\subset V$.
We say that a pair of symmetric trees $( (t , \tau ) , (s, \sigma) )$ is reduced if there is no pair $( (t',\tau') , (s',\sigma'))$ such that $t'$ has strictly fewer leaves than $t$ and $\frac{(t,\tau)}{(s,\sigma)} = \frac{(t',\tau')}{(s',\sigma')}.$
Let $\Hilb$ be the category of complex Hilbert spaces with isometries for morphisms.
Given an isometry $R:\mathfrak H\to\mathfrak H\otimes\mathfrak H$ we construct a functor
$\Phi=\Phi_R:\mathcal F\to \Hilb$ such that $\Phi(n) = \mathfrak H^{\otimes n}$ and $\Phi(f_{i,n})=\id^{\otimes i-1}\otimes R\otimes \id^{\otimes n-i}$ for $i=1,\cdots , n$.
Consider the quotient space
$$\{ (t , \xi) : t \text{ tree} , \xi\in \Phi(\target(t)) \}/\sim \text{ generated by } (t,\xi)\sim (p\circ t, \Phi(p)\xi), \forall p\in\mathcal F.$$
This quotient space has a pre-Hilbert structure given by $\langle (t,\xi) , (t,\eta)\rangle : = \langle \xi , \eta\rangle$ that we complete into a Hilbert space $\mathscr H$.
Note that $\mathscr H$ is the inductive limit of the system of Hilbert spaces $\mathfrak H_t:=\{(t,\xi) : \xi \in \Phi(\target(t)) \}$ for trees $t$ such that the embedding $\mathfrak H_t\to \mathfrak H_{p\circ t}$ is given by $\Phi(p)$.
We denote by $(t,\xi)$ or $\frac{t}{\xi}$ the equivalence class of $(t,\xi)$ inside $\mathscr H$ and identify $\mathfrak H$ and $\mathfrak H_t$ as subspaces of $\mathscr H$.
We have a unitary representation $\pi:F\to\mathcal U(\mathscr H)$ given by the formula $\pi(\frac{t}{s}) \frac{s}{\xi}:=\frac{t}{\xi}$ that we extend to the group $V$ as follows:
$$\pi \left( \frac{(t,\tau)}{(s,\sigma)} \right) \frac{s}{\xi} := \frac{t}{ \theta(\tau^{-1}\sigma) \xi}, \text{ where } \theta(\kappa)(\eta_1\otimes\cdots\otimes \eta_n):=\eta_{\kappa^{-1}(1)}\otimes\cdots\otimes \eta_{\kappa^{-1}(n)}.$$
Note that if $\xi,\eta$ are in the small Hilbert space $\mathfrak H$ and $g=\frac{(t,\tau)}{(s,\sigma)} $, then
\begin{equation}\label{equa:vacuum}
\langle \pi \left( \frac{(t,\tau)}{(s,\sigma)} \right) \xi , \eta \rangle = \langle \theta(\sigma)\Phi(s) \xi , \theta(\tau) \Phi(t) \eta \rangle.\end{equation}
Consider an orthonormal basis $\{\xi_i : i\in I\}$ of the Hilbert space $\mathfrak H$.
The isometry $R$ can be thought of as a possibly infinite matrix with three indices $R_i^{j,k}$ such that $R \xi_i=\sum_{j,k} R_i^{j,k} \xi_j\otimes \xi_k$.
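As a numerical sanity check (our illustration, not part of the paper's argument), one can realize such an isometry $R:\mathfrak H\to\mathfrak H\otimes\mathfrak H$ in finite dimension as a $d^2\times d$ matrix with orthonormal columns and read off the coefficient tensor $R_i^{j,k}$; the variable names below are ours.

```python
import numpy as np

# Sketch (our illustration, not from the paper): realize an isometry
# R : H -> H (x) H for dim(H) = d as a d^2 x d matrix with R^* R = id,
# and read off the coefficient tensor R_i^{j,k}.
d = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((d * d, d)) + 1j * rng.standard_normal((d * d, d))
R, _ = np.linalg.qr(A)                         # orthonormal columns: R^* R = id_d
coef = R.reshape(d, d, d).transpose(2, 0, 1)   # coef[i, j, k] = R_i^{j,k}
# Isometry in coefficients: sum_{j,k} conj(R_i^{j,k}) R_{i'}^{j,k} = delta_{i,i'},
# which is the reason the state sums defining Phi(f) below converge.
gram = np.einsum('ijk,ljk->il', coef.conj(), coef)
assert np.allclose(gram, np.eye(d))
```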
We can reinterpret $\Phi(f)$ for a forest $f$ as a partition function.
Given a forest $f$ with $n$ roots and $m$ leaves, we define the set of states $\Omega(f)$ on $f$ as maps $\omega$ from the edges of $f$ to the set of indices $I$.
By a \emph{vertex} of $f$ we mean a trivalent vertex; in particular, roots and leaves are not vertices.
If $\omega$ is a state on $f$ and $v$ a vertex, then we write $R^\omega_v$ for the scalar $R_{\omega(e_-)}^{\omega(e_l),\omega(e_r)}$ where $e_-$ is the edge with target $v$ and $e_l,e_r$ are the edges with source $v$ going to the left and to the right, respectively.
Consider some multi-indices $\underline i:=(i_1,\cdots,i_n)\in I^n$ and $\underline j:=(j_1,\cdots,j_m)\in I^m$ and say that a state $\omega\in\Omega(f)$ is compatible with $(\underline i, \underline j)$ if $\omega(a_k)=i_k$ and $\omega(b_\ell)=j_\ell$ for all $1\leq k\leq n,1\leq \ell\leq m$ where $a_k$ is the edge with source the $k$th root of $f$ and $b_\ell$ is the edge with target the $\ell$th leaf of $f$.
We can now define the operator $\Phi(f)$ as follows:
$$\langle \Phi(f)\xi_{i_1}\otimes\cdots\otimes \xi_{i_n} , \xi_{j_1}\otimes\cdots\otimes\xi_{j_m} \rangle = \sum_{\begin{subarray}{c}\omega\in \Omega(f)\\ \text{compatible with } (\underline i, \underline j) \end{subarray}} \prod_{v \text{ a vertex of } f} R_v^\omega$$
with the convention that a product (resp. a sum) over an empty set is equal to one (resp. zero).
The infinite sum converges since the scalars $R_i^{j,k}$ are matrix coefficients of an isometry.
This formula can be proved by induction on the number of leaves and using the fact that any forest is the composition of some elementary forests $f_{i,n}$.
For example, if we have the following state $$\omega(f)=\fonefone$$ that we represent with the spin of an edge (i.e. the image by $\omega$ of this edge) next to it, then $$\prod_{v \text{ a vertex of } f} R_v^\omega = R^{l,m}_j R^{j,k}_i.$$
\section{Kazhdan's property (T)}\label{sec:T}
Recall that a countable discrete group $G$ has Kazhdan's property (T) if any unitary representation having an almost invariant vector has in fact a nonzero invariant vector \cite{Kazhdan67-T}.
If $u\in\mathcal U(\mathfrak H)$ is a unitary and $\zeta\in\mathfrak H$ is a unit vector, then the map
$$R:\mathfrak H\to \mathfrak H\otimes\mathfrak H, \xi\mapsto u(\xi)\otimes \zeta$$
is an isometry which provides us a unitary representation $\pi:V\to \mathcal U(\mathscr H)$ as described in Section \ref{sec:def}.
We claim that if $ |\langle \zeta , u\zeta\rangle| \neq 1$, then $\pi$ has no nonzero $[F,F]$-invariant vectors.
Consider the following four trees
$$a=f_3f_3f_1f_1 , b = f_4f_3f_2f_1 , c = f_1f_1 , d =f_2f_1$$ and let $t_n$ be the complete binary tree with $2^n$ leaves.
Put $g=\frac{a}{b}, h=\frac{c}{d},$ and $k:=ghg^{-1}h^{-1}$.
Note that
$$k=\frac{a}{q} \text{ with } q= f_2f_3f_1f_1.$$
Define the element $k_n:=\frac{(a)_n\circ t_n}{(q)_n\circ t_n}$ where $(a)_n$ is the forest with $2^n$ roots each of whose trees is a copy of $a$.
Similarly define $g_n:=\frac{(a)_n\circ t_n}{(b)_n\circ t_n}$ and $h_n=\frac{(c)_n\circ t_n}{(d)_n\circ t_n}$ and observe that $k_n=g_nh_n(g_n)^{-1}(h_n)^{-1}$.
Therefore $k_n$ is in the commutator subgroup $[F,F]$.
Observe that
\begin{align*}
\langle \Phi(q) \xi ,\Phi(a) \eta \rangle & = \langle u^2 \xi \otimes u\zeta \otimes \zeta \otimes u\zeta \otimes \zeta , u^2 \eta \otimes \zeta \otimes u^2\zeta \otimes \zeta\otimes \zeta \rangle\\
& = \langle \xi , \eta \rangle \langle u\zeta , \zeta \rangle^2 \langle \zeta , u^2\zeta\rangle, \ \forall \xi , \eta \in\mathfrak H
\end{align*}
and thus $\Phi(a)^*\Phi(q) = C\cdot \id\in B(\mathfrak H)$ with $C = \langle u\zeta , \zeta \rangle^2 \langle \zeta , u^2\zeta\rangle.$
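Since $R\xi=u\xi\otimes\zeta$ sends elementary tensors to elementary tensors, $\Phi(t)\xi$ can be tracked as a list of tensor factors, and the identity $\Phi(a)^*\Phi(q)=C\cdot\id$ can be checked numerically; the following sketch is our illustration (helper names `apply_f`, `phi` are ours), with the convention that in a word $f_{i_1}\cdots f_{i_k}$ the rightmost elementary forest acts first.

```python
import numpy as np

# Sketch (our illustration): for R(xi) = u(xi) (x) zeta, Phi(t) maps a vector
# to an elementary tensor, so we track the list of tensor factors and verify
# Phi(a)^* Phi(q) = C . id numerically.
d = 4
rng = np.random.default_rng(1)
u, _ = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
zeta = rng.standard_normal(d) + 1j * rng.standard_normal(d)
zeta /= np.linalg.norm(zeta)

def apply_f(factors, i):
    """Phi(f_{i,n}) on an elementary tensor: apply R to the i-th factor."""
    return factors[:i - 1] + [u @ factors[i - 1], zeta] + factors[i:]

def phi(word, xi):
    """Compose elementary forests, rightmost letter acting first."""
    factors = [xi]
    for i in reversed(word):
        factors = apply_f(factors, i)
    return factors

xi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
eta = rng.standard_normal(d) + 1j * rng.standard_normal(d)
a_tree = [3, 3, 1, 1]      # a = f_3 f_3 f_1 f_1
q_tree = [2, 3, 1, 1]      # q = f_2 f_3 f_1 f_1

# <Phi(q) xi, Phi(a) eta> factorizes over the five tensor slots.
lhs = np.prod([np.vdot(x, y) for x, y in zip(phi(q_tree, xi), phi(a_tree, eta))])
C = np.vdot(u @ zeta, zeta) ** 2 * np.vdot(zeta, u @ (u @ zeta))
assert np.isclose(lhs, C * np.vdot(xi, eta))
```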
If $\xi:=\otimes_{i=1}^{2^n} \xi_i,\eta:=\otimes_{i=1}^{2^n} \eta_i$ are elementary tensors of $\mathfrak H_{t_n}$, then
$$\langle \pi(k_n) \xi , \eta \rangle = \langle \Phi((q)_n)\xi,\Phi((a)_n)\eta\rangle
= \prod_{i=1}^{2^n} \langle \Phi(q)\xi_i,\Phi(a)\eta_i\rangle
=C^{2^n} \langle \xi, \eta \rangle.$$
By linearity and density we obtain that $p_n\pi(k_n)p_n=C^{2^n} p_n$ where $p_n$ is the orthogonal projection onto $\mathfrak H_{t_n}$.
Assume that $\xi\in\mathscr H$ is a $[F,F]$-invariant unit vector and that $|\langle \zeta , u\zeta\rangle|<1$.
This implies that $| C |<1$.
By density, there exists $n$ and a unit vector $\xi'\in \mathfrak H_{t_n}$ such that $\Vert \xi - \xi'\Vert<1/4$.
We obtain that
$$|C|^{2^n} = |\langle\pi(k_n)\xi',\xi'\rangle| \geq |\langle\pi(k_n)\xi,\xi\rangle|-|\langle\pi(k_n)(\xi'-\xi),\xi'\rangle|-|\langle\pi(k_n)\xi',(\xi'-\xi)\rangle|\geq \frac{1}{2},$$
for arbitrarily large $n$, which is a contradiction since $|C|<1$. This proves the claim.
Consider $\mathfrak H:=\ell^2(\mathbf{Z})$, the shift operator $u\in\mathcal U(\ell^2(\mathbf{Z}))$, and $\zeta_m\in\ell^2(\mathbf{Z})$ the characteristic function of $\{1,2,\cdots, h(m)\}$ divided by $\sqrt{h(m)}$ where $h(m)=2m8^m, m\geq 1$.
Let $(\pi(m),\mathscr H(m))$ be the associated sequence of unitary representations of $V$ and put $(\pi,\mathscr H):=(\oplus_m\pi(m), \oplus_m\mathscr H(m))$ their direct sum.
Note that $0<\langle \zeta_m , u\zeta_m\rangle<1$ for any $m\geq 1$ and thus the claim implies that $\pi$ does not have any nonzero $[F,F]$-invariant vectors.
Let $\eta_m$ be the elementary tensor of $\mathfrak H^{\otimes 2^m}$ where each entry is equal to $\zeta_m$, which we identify with the fraction $\frac{t_m}{\eta_m}\in\mathfrak H_{t_m}$ viewed as an element of $\mathscr H(m)$.
Define $\xi_m$ to be the unit vector of $\mathscr H:=\oplus_n \mathscr H(n)$ which is equal to $\frac{t_m}{\eta_m}$ in the $m$th slot and zero elsewhere.
We claim that $(\xi_m)_{m\geq 1}$ is a sequence of almost $V$-invariant vectors.
Fix $g\in V$ and note that for $m$ big enough there exists $s\in\mathcal F(1,2^m), \kappa,\rho \in S_{2^m}$ such that $g=\frac{(s,\kappa)}{(t_m,\rho)}$.
Moreover, by increasing $m$, we can assume that there exists $p$ such that $p\circ s = t_{2m}$.
We obtain that
\begin{align*}
\langle \pi(g)\xi_m,\xi_m\rangle & =\langle \pi_m \left( \frac{(s,\kappa)}{(t_m,\rho)} \right) \frac{t_m}{\eta_m} , \frac{t_m}{\eta_m} \rangle \\
& = \langle\frac{s}{\theta(\kappa^{-1}\rho)(\eta_m)}, \frac{t_m}{\eta_m} \rangle = \langle \frac{s}{\eta_m} , \frac{t_m}{ \eta_m} \rangle\\
& = \langle \Phi(p)\eta_m , \Phi(q)\eta_m \rangle \text{ where } p\circ s =t_{2m} = q\circ t_m \\
& = \prod_{i \text{ a leaf of } t_{2m} } \langle ( \Phi(p)\eta_m)_i , (\Phi(q)\eta_m)_i \rangle \text{ where } ( \Phi(p)\eta_m)_i \text{ is the $i$th tensor.}
\end{align*}
Since $p\circ s = t_{2m} = q\circ t_m$ we have that any branch of $p$ (resp. $q$) has length at most $2m$ (resp. $m$).
This implies that any component of $\Phi(p)\eta_m$ and $\Phi(q)\eta_m$ is equal to $u^a\zeta_m$ for some $0\leq a\leq 2m$.
Since the map $a\in\mathbf{N}\mapsto \langle u^a\zeta_m , \zeta_m\rangle\in \mathbf R_+$ is decreasing we obtain
$$\langle ( \Phi(p)\eta_m)_i , (\Phi(q)\eta_m)_i \rangle\geq \langle u^{2m}\zeta_m,\zeta_m\rangle=\frac{h(m)-2m}{h(m)}.$$
Therefore,
$$\langle \pi(g)\xi_m,\xi_m\rangle\geq \left(\frac{h(m)-2m}{h(m)}\right)^{2^{2m}}=(1-8^{-m})^{4^{m}}\longrightarrow_{m\to \infty} 1,$$
and thus $(\xi_m)_m$ is a sequence of almost $V$-invariant vectors.
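The two quantitative facts used above can be checked directly: $\langle u^a\zeta_m,\zeta_m\rangle=(h(m)-a)/h(m)$ for the shift $u$ and the normalized indicator of a window of size $h(m)$, and $(1-8^{-m})^{4^m}\to 1$. The sketch below is our illustration (the helper `overlap` is ours).

```python
import numpy as np

# Sketch (our illustration): zeta_m is the normalized indicator of
# {1, ..., h(m)} and u the shift on l^2(Z), so <u^a zeta_m, zeta_m>
# equals (h(m) - a)/h(m) for 0 <= a <= h(m); with h(m) = 2m 8^m the
# lower bound (1 - 8^{-m})^{4^m} then tends to 1.
def overlap(h, a):
    """<u^a zeta, zeta> for the normalized indicator of a window of size h."""
    v = np.ones(h) / np.sqrt(h)
    shifted = np.zeros(h)
    shifted[a:] = v[:h - a]         # shift truncated to the window
    return float(v @ shifted)

m = 2
h = 2 * m * 8 ** m                  # h(m) = 2m 8^m
assert np.isclose(overlap(h, 2 * m), (h - 2 * m) / h)

bounds = [(1 - 8.0 ** -m) ** (4 ** m) for m in range(1, 8)]
assert all(x < y for x, y in zip(bounds, bounds[1:]))   # increasing towards 1
assert bounds[-1] > 0.99
```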
Since $\pi$ does not have any nonzero $[F,F]$-invariant vectors we obtain that any intermediate group between $[F,F]$ and $V$ does not have property (T).
\section{Haagerup property}\label{sec:H}
Recall that a countable discrete group $G$ has the Haagerup property if there exists a sequence $\varphi_n$ of positive definite functions which vanish at infinity on $G$ and such that $\lim_n\varphi_n(g)=1,\ \forall g\in G$ \cite{Akemann-Walter81}.
Consider the free group $\mathbb F_2$ freely generated by $a,b$ and let $\{\delta_g:g\in \mathbb F_2\}$ be its classical orthonormal basis.
Identify $\ell^2(\mathbb F_2)^{\otimes n}$ with $\ell^2(\mathbb F_2^n)$ and $\delta_{g_1}\otimes\cdots\otimes\delta_{g_n}$ with $\delta_{g_1,\cdots, g_n}$.
Set $\mathfrak H:=\ell^2(\mathbb F_2)$ and define for $0\leq \alpha\leq 1$ the isometry
$$R_\alpha:\ell^2(\mathbb F_2) \to \ell^2(\mathbb F_2\times \mathbb F_2), \delta_e\mapsto \alpha\delta_{e,e} + \sqrt{1-\alpha^2} \delta_{a,b}, \delta_g\mapsto \delta_{ag,bg}, \forall g\in\mathbb F_2,g\neq e.$$
This defines a functor $\Phi_\alpha : \mathcal F \to \Hilb$ and a unitary representation $\pi_\alpha:V\to \mathcal U(\mathscr H_\alpha)$ as described in Section \ref{sec:def}.
The associated infinite matrix $(R_g^{h,k}:=\langle R\delta_g,\delta_h\otimes \delta_k\rangle)_{g}^{h,k}$ is particularly simple since
$$R_e^{e,e}=\alpha, R_e^{a,b}=\sqrt{1-\alpha^2},R_g^{ag,bg}=1, \forall e\neq g\in \mathbb F_2$$
and zero elsewhere.
Consider the case when $\alpha=0$ and thus $R_g^{ag,bg}=1$ for any $g\in \mathbb F_2$ and zero elsewhere.
Let $f$ be a forest with $n$ roots and $m$ leaves and observe that $\Phi_0(f)\delta_{e,\cdots ,e}=\delta_{P(f)}$ where $P(f)\in \mathbb F_2^m$.
The $i$th component $P(f)_i$ is the word in $a,b$, written from right to left, corresponding to the path from a root of $f$ to its $i$th leaf, where a left turn (resp. a right turn) adds the letter '$a$' (resp. the letter '$b$').
For example, if $f=f_{1,4} \circ f_{3,3} \circ f_{1,2}$, then $ \Phi_0(f)\delta_{e,e} = \delta_{aa} \otimes \delta_{ba} \otimes \delta_{b} \otimes \delta_a \otimes \delta_b = \delta_{aa , ba , b , a , b }$.
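The path words $P(f)$ can be computed mechanically: splitting an edge carrying the word $w$ produces the pair $(aw, bw)$, mirroring $R\delta_g=\delta_{ag,bg}$. The following sketch is our illustration (the function name `path_words` is ours), with the rightmost elementary forest acting first.

```python
# Sketch (our illustration): path words P(f) for a composition of elementary
# forests, rightmost factor applied first. Splitting the edge word w produces
# ('a' + w, 'b' + w), mirroring R delta_g = delta_{ag, bg} for Phi_0.
def path_words(word, roots):
    labels = [''] * roots              # every root edge carries e (empty word)
    for i in reversed(word):           # rightmost elementary forest acts first
        w = labels[i - 1]
        labels[i - 1:i] = ['a' + w, 'b' + w]
    return labels

# The example f = f_{1,4} o f_{3,3} o f_{1,2} acting on delta_{e,e}:
assert path_words([1, 3, 1], roots=2) == ['aa', 'ba', 'b', 'a', 'b']
```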
The next lemma proves that if $t$ is a tree, then the set of words in the tuple $P(t)$ completely determines $t$.
\begin{lemma}\label{lem:perm}
Consider two trees $s,t$ with $n$ leaves.
Assume that there exists a permutation $\sigma\in S_n$ acting on the leaves such that $\sigma(P(s)) = P(t)$.
Then $\sigma=\id$ and $s=t$.
\end{lemma}
\begin{proof}
We prove the lemma by induction on the number of leaves $n\geq 1$.
The result is immediate for $n=1$ and is also clear for $n=2$ since there is only one tree with two leaves and thus $P(s) = P(t) = (a,b)$. The permutation $\sigma$ is necessarily trivial.
Suppose the result is true for any $k$ between $1$ and $n$ and consider $s,t$ trees with $n+1$ leaves and a permutation $\sigma$ such that $\sigma(P(s)) = P(t)$.
Note that there exist trees $s_1,s_2,t_1,t_2$ such that $s = (s_1\bullet s_2)\circ f_1$ and $t=(t_1\bullet t_2)\circ f_1$ where $s_1\bullet s_2$ is the forest with two roots whose first tree is $s_1$ and second $s_2$.
Note that the word $P(s)_i$ finishes by the letter $a$ (resp. the letter $b$) if and only if $i$ is a leaf of $s_1$ (resp. a leaf of $s_2$).
We have the same characterization for the leaves of $t$ and thus necessarily $\sigma$ realizes a bijection from the leaves of $s_j$ onto the leaves of $t_j$ for $j=1,2$.
Observe that $P(s)_i = P(s_1)_ia$ for any leaf $i$ of $s_1$.
This implies that $\sigma(P(s_1)) = P(t_1)$.
Similarly we have that $\sigma(P(s_2)) = P(t_2)$ and thus by the induction hypothesis we have that $s_1=t_1, s_2=t_2$ and $\sigma$ is the identity on the leaves of $s_1$ and on the leaves of $s_2$ implying that $\sigma=\id$ and $s=t$.
\end{proof}
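For small trees the lemma can be verified by brute force: the entries of $P(t)$ are pairwise distinct and the word set determines the tree, so $\sigma(P(s))=P(t)$ forces $s=t$ and $\sigma=\id$. The sketch below is our illustration (function names `trees` and `P` are ours), using the convention $P(s)_i=P(s_1)_i a$ from the proof above.

```python
# Sketch (our illustration): brute-force check of the lemma for small n.
# P(t) is computed by appending 'a' over the left subtree and 'b' over the
# right subtree; its entries are pairwise distinct, and distinct trees give
# distinct word sets, so sigma(P(s)) = P(t) forces s = t and sigma = id.
def trees(n):
    """All rooted binary trees with n leaves, as nested pairs."""
    if n == 1:
        return [None]
    return [(l, r) for k in range(1, n) for l in trees(k) for r in trees(n - k)]

def P(t):
    if t is None:
        return ['']
    left, right = t
    return [w + 'a' for w in P(left)] + [w + 'b' for w in P(right)]

for n in range(1, 7):
    word_sets = [tuple(sorted(P(t))) for t in trees(n)]
    assert all(len(set(ws)) == n for ws in word_sets)   # entries distinct
    assert len(set(word_sets)) == len(word_sets)        # word set determines tree
```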
The lemma implies that $\pi_0$ contains the left regular representation of $V$.
Indeed, consider some symmetric trees $(t,\tau) , (s,\sigma)$ in $\mathcal{SF}$ with the same number of leaves.
We have that
\begin{equation}\label{innerproduct}\langle \pi_0 \left( \frac{(t,\tau)}{(s,\sigma)} \right) \delta_e , \delta_e \rangle = \langle \theta(\sigma)\Phi(s)\delta_e , \theta(\tau)\Phi(t)\delta_e\rangle = \langle \delta_{P(s)} , \delta_{\sigma^{-1}\tau(P(t)) } \rangle.\end{equation}
This is nonzero (and then equal to one) if and only if $P(s)=\sigma^{-1}\tau(P(t))$.
In that case Lemma \ref{lem:perm} implies that $s=t$ and $\sigma^{-1}\tau=\id$ and thus $\frac{(t,\tau)}{(s,\sigma)}$ is the trivial group element.
We obtain that the cyclic representation generated by $\delta_e$ for $\alpha=0$ is the left regular representation of $V$.
If $\alpha=1$, then the cyclic representation generated by $\delta_e$ is the trivial one.
Indeed, if $t$ is a tree with $n$ leaves, then $\Phi(t)\delta_e = \delta_{e}\otimes\cdots\otimes\delta_e$ with $n$ tensors and thus the coefficient \eqref{innerproduct} is always equal to one for any choice of $g=\frac{(t,\tau)}{(s,\sigma)}\in V.$
Our family $\pi_\alpha$ of representations of $V$ provides an interpolation between the trivial and the left regular representations.
Consider the family of positive definite maps $\varphi_\alpha(g):=\langle\pi_\alpha(g)\delta_e,\delta_e\rangle$ with $0\leq \alpha<1$.
We will show that they vanish at infinity on $T$ and tend to one pointwise when $\alpha$ tends to one.
Fix $0\leq \alpha<1$ and write $R$ instead of $R_\alpha$.
Section \ref{sec:def} tells us that given a tree $t$ with $n$ leaves we have the formula
$$\langle \Phi_\alpha(t)\delta_e,\delta_{\underline j}\rangle = \sum_{\begin{subarray}{c}\omega\in\Omega(t)\\ \text{compatible with } (e,\underline j) \end{subarray}} \prod_{v \text{ a vertex of } t}R_v^\omega,$$
for any multi-index $\underline j=(j_1,\cdots,j_n)$.
Fix $\omega\in\Omega(t)$ and assume that the coefficient of $\omega$ above is nonzero for a certain $\underline j$.
Then there exists a maximal subrooted tree $z_\omega$ of $t$ such that the spin (i.e. the value of $\omega$) at each of its edges is the trivial group element $e\in\mathbb F_2$.
Any vertex $v$ of the tree $z_\omega$ satisfies that $R_v^\omega=\alpha$.
If $f_\omega$ is the unique forest satisfying that $t=f_\omega\circ z_\omega$, then we have that any root $w$ of $f_\omega$ that is not a leaf of $f_\omega$ satisfies that $R_w^\omega=\sqrt{1-\alpha^2}$ since spins around it are necessarily $e,a,b$.
Then any other vertex $u$ of $f_\omega$ has its spins around it equal to $g,ga,gb$ for some $e\neq g\in \mathbb F_2$, and thus $R_u^\omega=1$.
We obtain that
$$\prod_{v \text{ a vertex of } t}R_v^\omega=\alpha^{\target(z_\omega)-1} (1-\alpha^2)^{m(t,z_\omega)/2},$$
where $m(t,z_\omega)$ is the number of leaves of $z_\omega$ that are not leaves of $t$ (i.e. the number of nontrivial trees of $f_\omega$).
The spin of the edge with target the $\ell$th leaf of $t$ is the word in $a,b$ corresponding to the path in the forest $f_\omega$ starting at the root connected to $\ell$ and finishing at the leaf $\ell$.
Write $P(t,z_\omega)_\ell$ this word and $P(t,z_\omega)$ the corresponding $n$-tuple and note that by definition the multi-index $\underline j$ is equal to $P(t,z_\omega)$.
Therefore, given a multi-index $\underline j$, there is at most one state $\omega\in\Omega(t)$ compatible with $(e,\underline j)$ and having a nonzero coefficient $\prod_v R_v^\omega$.
Moreover, $\underline j$ has to be of the form $P(t,z)$ for some subrooted tree $z\leq t$.
Conversely, any subrooted tree $z$ of $t$ provides a unique state $\omega_z\in\Omega(t)$ that is compatible with $(e, P(t,z))$ defined inductively as follows:
$$\omega_z(c) = \begin{cases} e & \text{ if $c$ is an edge of $z$};\\
a . \omega_z(d) & \text{ if $c$ goes to the left, its source is the target of an edge $d$ and } c\notin z;\\
b . \omega_z(d) & \text{ if $c$ goes to the right, its source is the target of an edge $d$ and } c\notin z.\\
\end{cases}
$$
We obtain the following formula
$$\Phi_\alpha(t) \delta_e = \sum_{z\in E_t} \alpha^{\target(z)-1} (1-\alpha^2)^{m(t,z)/2} \delta_{P(t,z)},$$
where $E_t$ is the set of subrooted trees (including the trivial subtree) of $t$.
Consider a pair of symmetric trees $( (t,\tau) , (s,\sigma) )$ and $g=\frac{(t,\tau)}{(s,\sigma)}\in V$.
We have that
\begin{equation}\label{equa:vara}\varphi_\alpha(g)= \sum_{z\in E_t, r\in E_s} \alpha^{\target(z)+\target(r)-2} (1- \alpha^2)^{(m(t,z)+m(s,r))/2} \langle \theta( \sigma ) \delta_{ P(s,r) } , \theta( \tau) \delta_{ P(t,z) } \rangle . \end{equation}
Fix a \emph{reduced} pair of \emph{affine} trees $( (t,\tau) , (s,\sigma) )$ and the group element $g=\frac{(t,\tau)}{ (s,\sigma)}$ that is in Thompson's group $T$ since our trees are affine.
We will show that all the terms in the sum \eqref{equa:vara} are equal to zero but one.
\begin{lemma}\label{lem:permtwo}
Consider some forests $p,q$ both of them having $m$ leaves and let $\sigma\in\mathbf{Z}/m\mathbf{Z} < S_m$ be a cyclic rotation.
If $\sigma(P(p)) = P(q)$, then $p$ and $q$ have the same number of roots $n$ and there exists $a\geq 0$ such that the $j$th tree of $p$ is equal to the $(j+a)$th modulo $n$ tree of $q$ for any $1\leq j\leq n$.
\end{lemma}
\begin{proof}
Observe that if $f$ is a forest then the word $P(f)_i$ is a power of $a$ (resp. a power of $b$) if and only if $i$ corresponds to the first leaf (resp. the last leaf) of a tree of $f$.
Consider $p,q,\sigma$ as above and write $p_j$ and $q_k$ the $j$th and $k$th trees of $p$ and $q$ respectively.
Fix $j$ and note that the observation implies that there exists some natural numbers $a,b$ such that the first and the last leaves of $p_j$ are sent to the first leaf of $q_{j+a}$ and the last leaf of $q_{j+b}.$
If $a\neq b$ modulo the number of roots of $q$, then $P(p_j)$ would be equal to a tuple $P(f)$ of a forest $f$ having at least two trees and thus having at least two words that are a power of $b$ which is impossible.
Therefore, $\sigma$ realizes a bijection from the leaves of $p_j$ onto the leaves of $q_{j+a}$ for a certain $a$.
Since $\sigma$ is cyclic we obtain that the number $a$ does not depend on $j$.
We obtain that $P(p_j)=P(q_{j+a})$ for any $j$ which, by Lemma \ref{lem:perm}, implies that the trees $p_j$ and $q_{j+a}$ are equal.
\end{proof}
Assume that the $(z,r)$-term of the equality \eqref{equa:vara} is nonzero, then $P(s,r) = \sigma^{-1}\tau (P(t,z))$.
If $f(s,r), f(t,z)$ are the forests satisfying that $s = f(s,r)\circ r, t = f(t,z)\circ z$, then Lemma \ref{lem:permtwo} implies that there exists a cyclic permutation $\rho$ on the roots of $f(t,z)$ such that the $i$th tree of $f(s,r)$ is equal to the $\rho(i)$th tree of $f(t,z)$ and thus $s=\rho(f(t,z))\circ r$.
This implies that $g$ can be reduced as a fraction $\frac{(z,\tilde\tau)}{(r,\tilde \sigma)}$ for some permutations $\tilde\tau,\tilde\sigma$.
Since the pair $( (t,\tau) , (s,\sigma) )$ is already reduced we obtain that $z=t$ and $r=s$ and thus all the terms in the equality \eqref{equa:vara} are equal to zero except one.
Therefore,
\begin{equation}\label{equa:varatwo}
\varphi_\alpha\left(\frac{(t,\tau)}{(s,\sigma)} \right)=\alpha^{2\target(t)-2} \text{ for any reduced pair of affine trees } ((t,\tau) , (s,\sigma)).\end{equation}
This implies that
\begin{align*}
\lim_{g\to\infty}\varphi_\alpha(g) = 0 & \text{ inside } T \text{ for any } 0\leq \alpha <1 ; \\
\lim_{\alpha\to 1} \varphi_\alpha(h) = 1 & \text{ for any } h\in T.
\end{align*}
Therefore, Thompson's group $T$ has the Haagerup property.
\begin{remark}\label{rem:Farley}
Farley constructed a proper cocycle $c:V\to H$ with values in a Hilbert space and showed in the proof of \cite[Theorem 2.4]{Farley03-H} that if $g\in V$ is described by a reduced pair of symmetric trees with $n$ leaves, then $\Vert c(g)\Vert^2 = 2n-2.$
This provides a family of positive definite functions $\phi_\beta(g):= \exp(-\beta \Vert c(g)\Vert^2)=\exp(-\beta(2n-2)), \beta\geq 0$ by Schoenberg's theorem.
Note that if $g\in T$, then Formula \eqref{equa:varatwo} implies $\varphi_\alpha(g) = \phi_{\beta}(g)$ whenever $\alpha=\exp(-\beta)$.
However, this equality is no longer true for certain elements of $V$.
Consider $g=\frac{(t,\id)}{(t,(13))}\in V$ where $t=f_3f_1f_1$ is the full binary tree with four leaves. We have that $$\Phi_\alpha(t)\delta_e = \alpha^3 \delta_{e,e,e,e} + \alpha^2\sqrt{1-\alpha^2} ( \delta_{e,e,a,b} + \delta_{a,b,e,e} ) + \alpha(1-\alpha^2)\delta_{a,b,a,b} + \sqrt{1-\alpha^2}\delta_{aa,ba,ab, bb}.$$ Then $\varphi_\alpha(g) = \alpha^6 + \alpha^2(1-\alpha^2)^2\neq \alpha^6= \phi_\beta(g)$ for $\alpha=\exp(-\beta)$.
\end{remark}
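The expansion of $\Phi_\alpha(t)\delta_e$ in the remark can be double-checked numerically: the squared coefficients sum to one (as they must, $\Phi_\alpha(t)$ being an isometry), and pairing the expansion against its image under $\theta((13))$ recovers $\varphi_\alpha(g)=\alpha^6+\alpha^2(1-\alpha^2)^2$. The sketch below is our illustration (helper names `expansion` and `swap13` are ours).

```python
import numpy as np

# Sketch (our illustration): the expansion of Phi_alpha(t) delta_e for
# t = f_3 f_1 f_1 from the remark. The squared coefficients sum to 1
# (Phi_alpha(t) is an isometry), and pairing the expansion with its image
# under theta((13)) gives phi_alpha(g) = alpha^6 + alpha^2 (1 - alpha^2)^2.
def expansion(alpha):
    s = np.sqrt(1 - alpha ** 2)
    return {('e', 'e', 'e', 'e'): alpha ** 3,
            ('e', 'e', 'a', 'b'): alpha ** 2 * s,
            ('a', 'b', 'e', 'e'): alpha ** 2 * s,
            ('a', 'b', 'a', 'b'): alpha * s ** 2,
            ('aa', 'ba', 'ab', 'bb'): s}

def swap13(w):
    """theta((13)) on an elementary tensor: swap slots 1 and 3."""
    return (w[2], w[1], w[0], w[3])

for alpha in np.linspace(0.0, 1.0, 11):
    terms = expansion(alpha)
    assert np.isclose(sum(c ** 2 for c in terms.values()), 1.0)
    phi_g = sum(c * terms.get(swap13(w), 0.0) for w, c in terms.items())
    assert np.isclose(phi_g, alpha ** 6 + alpha ** 2 * (1 - alpha ** 2) ** 2)
```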
We proved that $T$ has the Haagerup property by using the net of maps $\varphi_\alpha, 0<\alpha<1.$
One could hope to extend our proof using the same approximation of the identity for the larger group $V$.
Unfortunately, the maps $\varphi_\alpha$ with $0<\alpha<1$ are no longer vanishing at infinity if we consider them as functions on $V$.
Indeed, consider the sequence of trees $x_n$ such that $x_2=f_1$ is the tree with two leaves and $x_{n+1}=f_1 x_n$ for $n\geq 2.$
Note that $x_n$ is a tree with $n$ leaves.
Let $s_n:=(x_n\bullet x_n)\circ f_1$ be the tree equal to the composition of $f_1$ with a copy of $x_n$ attached to each leaf of $f_1$.
Define the permutation $\sigma_n \in S_{2n}$ that is an involution and such that $\sigma_n(2i+1)=2i+1 + n$ and $\sigma_n(2j) =2j$ for any $1 \leq 2i+1 , 2 j\leq n$.
Hence, $\sigma_n$ sends any odd leaf of the first copy of $x_n$ to the same leaf in the second copy of $x_n$ and fixes the other leaves.
We set $g_n:=\frac{(s_n,\sigma_n)}{(s_n,\id)}$ and note that this fraction is reduced.
If we consider $\varphi_\alpha(g_n)$ we observe that the term corresponding to $z=r=f_1$ in formula \eqref{equa:vara} is nonzero and equal to $\alpha^2 (1-\alpha^2)^2$.
Since all the terms of $\varphi_\alpha(g_n)$ are positive, we obtain that $\varphi_\alpha(g_n)\geq \alpha^2 (1-\alpha^2)^2 >0$ for any $n\geq 2$, and thus $\varphi_\alpha(g)$ does not tend to zero when $g$ tends to infinity in $V$.
\section{Introduction}
The fifth generation of mobile communication (5G) using millimeter-wave (mmWave) technology will be the first generation to integrate location information in the network design and optimization \cite{Taranto2014, Akyildiz2016}, for example through beamforming \cite{Aviles2016}, pilot assignment \cite{Akbar2016}, and resource allocation \cite{Muppirisetty2016}.
Localization error in mmWave 5G has been shown to be on the order of centimeters, making location-aware applications in 5G much more attractive than ever before. Such applications include targeted content delivery \cite{Ma2014}, vehicular communication \cite{Garcia2016}, and assisted living systems \cite{Witrisal2016}. Of particular interest are systems of connected autonomous vehicles (CAVs) \cite{WhitePaper}, which are a typical use case of 5G communication \cite{Ericsson}, and air-ground communication with unmanned aerial vehicles (UAVs) \cite{Qiu2019}.
Due to the deployment of arrays with a high number of antennas at the transmitter and the receiver, and the utilization of {large} bandwidth \cite{Andrews2014, Pi2011, Rappaport2013, Heath2016, Orhan2015}, localization with a single base station (BS) can be seen as the ultimate localization strategy for 5G. With the high number of antennas, the directions of arrival (DOA) and departure (DOD) can be estimated with {a} very low error \cite{Larsen2009}, while the large bandwidth enables a highly accurate estimation of the time of arrival (TOA) \cite{Shen2007,Shen2010,Shen2010_2,Shen2010_3}, i.e., {a} low-error range estimate. Subsequently, combining the spatial and temporal estimates, the user equipment (UE) location\footnote{In this paper, we use the terms \textit{location/localization} and \textit{position/positioning} interchangeably.} can be estimated. {On the other hand, some papers consider mmWave channel estimation in the beamspace \cite{Huang2019,Gao2019, Fan2017}, so in principle, the DOA and DOD can be deduced directly from the channel estimate. However, estimation in the beamspace does not show how to estimate the TOA.}
Recently, the accuracy of single-anchor\footnote{In mobile networks, anchor refers to the BS, whose position and orientation are known.} localization for 5G mmWave systems has been studied in several papers in terms of position (PEB) and orientation error bounds (OEB). {PEB and OEB are theoretical bounds that are used to benchmark location estimation techniques, and hence they are measures of the optimality of such techniques.} In \cite{Shahmansoori2017}, the UE PEB and OEB of 2D localization were investigated using {uniform linear arrays} in 5G mmWave systems. Moreover, \cite{Guerra2017} and \cite{Zohair2017} derived, with different approaches, the PEB and OEB for mmWave 3D localization using arrays with arbitrary geometry. The results in \cite{Shahmansoori2017, Guerra2017, Zohair2017} showed a 5G mmWave localization performance with an error in the order of centimeters. However, one important, yet usually overlooked, requirement for localization is the synchronization of BS and UE. For example, \cite{Shahmansoori2017} and \cite{Zohair2017} assume that the BS and UE are perfectly synchronized, while \cite{Guerra2017} assumes coarse synchronization, and includes a residual synchronization error in their localization model. Synchronization can be avoided by the use of two-way ranging methods \cite{Sahinoglu2008, Pelka2017, Joon2002}, where the time-of-flight is utilized to estimate the range and clock bias, or three-way ranging \cite{Sahinoglu2008} and multi-way ranging \cite{Duisterwinkel2017, Sark2015} to additionally estimate higher-order artifacts such as clock drift and skew. However, such methods have not been evaluated for mmWave systems. {Such systems possess different features, including highly sparse channels and directional transmission, making the estimation of the angles of arrival and departure as relevant to localization as the time of arrival. 
Our work is the first to consider such a scenario and investigate the associated two-way positioning performance that is a function of the spatio-temporal properties of the channel.}
{In this paper, we propose two-way localization (TWL), {whereby a known signal is transmitted from the first device, the BS or the UE, to the second device that, in turn, responds by sending another known signal, after which the relative location and orientation of the devices can be estimated.} We study the PEB and OEB under line-of-sight (LOS) communication for two protocols: (i) \textit{Round-trip Localization Protocol (RLP),} where the second device waits for a pre-agreed interval, from the time the first signal is received, before sending another signal to the first device, upon which localization is based; and (ii) \textit{Collaborative Localization Protocol (CLP),} where the second device sends back the received signal to the first device, and localization is based on both signals. {By their nature, these bounds are theoretical and serve as a means to determine performance benchmarks to assess location estimation techniques, to design localization systems, and to determine when the location and orientation can be potentially estimated.} Our main contributions are:}
{\begin{itemize}
\item Introducing RLP and CLP for LOS 5G mmWave signals and their analysis in terms of the localization bounds.
\item For the two protocols, we derive the Fisher information matrices (FIMs) of the position and orientation, and consequently the PEB and OEB, with the timing bias between the BS and UE as a nuisance parameter.
\item We investigate the impact of the number of antennas at BS and UE, as well as the bandwidth, and show that, in contrast to the standard two-way ranging methods \cite{Sahinoglu2008, Pelka2017, Joon2002}, the TWL performance in mmWave multiple-input multiple-output (MIMO) systems depends on the device that initiates the protocol.
\end{itemize}
}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.8]{Figures/fig1.pdf}
\caption{Two-step rotation: {The first rotation is around the $z$-axis, creating $x'$ and $y'$ axes. The second rotation is around $x'$, creating $y''$ and $z''$ axes.}}
\label{fig:rotation}
\end{figure}
The initial results of the RLP were presented briefly in \cite{Zohair_DLP}, while in this paper, we i) discuss RLP in more detail, ii) provide CLP as an additional protocol, and iii) present a more in-depth performance analysis and insightful results on both protocols.
The rest of the paper is organized as follows. The system model, including the considered geometry, channel model and beamforming, is described in Section II, while the proposed protocols are outlined in Section III. Subsequently, FIM basics are introduced at the outset of Section IV, before proceeding to derive the PEB and OEB for both protocols. The numerical simulation results are given in Section V, while the conclusions are highlighted in Section VI.
\section{System Model}\label{sec:sys_model}
\subsection{System Geometry}
Consider a BS located at the origin of the 3D space with zero-orientation angles, and a UE located at a fixed unknown position $\mathbf{p}\triangleq{[p_x,p_y,p_z]^\mathrm{T}}$ with unknown orientation angles $\mathbf{o}\triangleq{[\zeta_0,\chi_0]^\mathrm{T}}$. As illustrated in Fig.~\ref{fig:rotation}, we define $\zeta_0$ as the rotation angle around the $z$-axis, which yields new coordinate axes $x'$, $y'$ and $z$. Similarly, $\chi_0$ is defined as the rotation angle around the $x'$-axis. Both BS and UE are equipped with antenna arrays of arbitrary but known geometries and communicate through a mmWave channel.
Although a device may have up to three rotation angles, we consider two angles because the estimation of three orientation angles is not possible with only LOS communication. Hence, our formulation is representative of practical applications characterized by two rotation angles\footnote{This corresponds for instance to a vehicle that can turn left and right ($\zeta_0$) or ascend and descend ($\chi_0$), but not slip or flip.}, such as near-static scenarios\footnote{{We study positioning with a short signal snapshot, during which the UE moves by a negligible distance. Subsequently, there has to be another layer where the snapshot positions are filtered through tracking techniques and mobility models, but this is out of the scope of this paper.}}.
Our objective is to derive the performance bounds of estimating $\mathbf{p}$ and $\mathbf{o}$ via TOA, DOA, and DOD estimation, in the presence of the unknown nuisance parameters, i.e., the timing offset between the BS and UE clocks, ${B}$, and the unknown channel. This is done for the RLP and CLP protocols described in Section \ref{sec:protcols}. Our analysis considers the effect of all these unknown parameters. If a subset of the parameters is known, the bounds become lower and can be easily derived as special cases.
\subsection{Channel Model}
We consider protocols initiated by either the BS or UE. The device initiating the protocol is denoted by D$_1$, and the responding device by D$_2$. {In the presence of multipath, mmWave paths are orthogonal and information-additive \cite{Witrisal2016,Leitinger2015,Rico2018}, and hence do not interfere with one another. Moreover, the LOS path is stronger than the NLOS paths and hence provides the highest useful information in terms of positioning, while also being easy to isolate based on the signal power profile. Therefore, although we assume that the exchange of signals occurs via the LOS path, our analysis is valid even when there are NLOS paths. In any case, the presence of NLOS paths would assist localization, unlike in other systems, e.g., GPS, where multipath can limit the performance \cite{Witrisal2016,Leitinger2015,Rico2018}.}
\begin{rem}[Notation] All parameters related to D$_1$ and D$_2$ are denoted by the subscripts ``1" and ``2", respectively. Moreover, the superscripts ``f" and ``b" are used to relate the parameters to the forward and backward transmissions, respectively. Also, unless otherwise stated, all the provided times are with respect to the clock of D$_1$, which is considered a global clock. See Fig.~\ref{fig:model}.
\end{rem}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.70]{Figures/fig2.pdf}
\caption{Summary of parameters at D$_1$ and D$_2$. Although D$_1$ and D$_2$ in the figure are BS and UE, this can be reversed.}
\label{fig:model}
\end{figure}
Let $h^\mathrm{f}\triangleq\beta^\mathrm{f}\exp(j\psi^\mathrm{f})$ be the complex LOS path gain in the forward direction, $N_{\mathrm{1}}$ and $N_\mathrm{2}$ be the number of antennas at D$_1$ and D$_2$, respectively, and $(\theta_{\mathrm{1}},\phi_{\mathrm{1}})$ and $(\theta_{\mathrm{2}},\phi_{\mathrm{2}})$ be the forward DOD and DOA at D$_1$ and D$_2$, respectively. Also, define
$\boldsymbol\vartheta\triangleq[\theta_{\mathrm{1}},\phi_{\mathrm{1}},\theta_{\mathrm{2}},\phi_{\mathrm{2}}]^\mathrm{T}$.
The \textit{forward} signal, from D$_1$ to D$_2$, undergoes a forward channel {given by \cite{Heath2016}}
\begin{align}
\mathbf{H}^\mathrm{f}(\boldsymbol\vartheta, \tau^\mathrm{f},h^\mathrm{f})&\triangleq\mathbf{H}^\mathrm{f}_\mathrm{s}(\boldsymbol\vartheta,h^\mathrm{f})\delta (t-{\tau^\mathrm{f}})\in\mathbb{C}^{N_{\mathrm{2}}\times{N_\mathrm{1}}}, \label{eq1}
\end{align}
where $\delta(t)$ is the Dirac delta function, $t=\tau^\mathrm{f}$ is the perceived TOA at D$_2$, and
\begin{align}
\mathbf{H}^\mathrm{f}_\mathrm{s}(\boldsymbol\vartheta,h^\mathrm{f})&\triangleq\sqrt{N_{\mathrm{1}}N_\mathrm{2}}h^\mathrm{f}\mathbf{a}_{\mathrm{2}}(\theta_{\mathrm{2}},\phi_{\mathrm{2}})\mathbf{a}^\mathrm{T}_{\mathrm{1}}(\theta_{\mathrm{1}},\phi_{\mathrm{1}}).\label{eq:channel_model_f}
\end{align}
Here, $\mathbf{a}_{i}$, $i\in\{1,2\}$, is the array response vector at D$_i$, given by
\begin{align}
\mathbf{a}_{i}(\theta_{i},\phi_{i})&\triangleq\frac{1}{\sqrt{N_{i}}}e^{-j\boldsymbol{\Delta}_{i}^\mathrm{T}\mathbf{k}(\theta_{i},\phi_{i})},\qquad\in\mathbb{C}^{N_i}\label{eq:a_t}
\end{align}
where $\mathbf{k}(\theta,\phi)=\frac{2\pi}{\lambda}[\cos\phi\sin\theta, \sin\phi\sin\theta, \cos\theta]^\mathrm{T}$ is the wavenumber vector, $\lambda$ is the wavelength, and $\boldsymbol\Delta_{i}\in\mathbb{R}^{3\times{N_i}}$ is a matrix whose columns contain the 3D Cartesian coordinates of the array elements of D$_i$ in meters. For brevity, we drop the angle parameters from the notation of $\mathbf{a}_{i}$.
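To make the steering-vector construction concrete, the following minimal sketch evaluates $\mathbf{k}(\theta,\phi)$ and $\mathbf{a}_i$ for a hypothetical half-wavelength uniform linear array; the carrier frequency, array size, and angles are illustrative assumptions, not values used in the paper:

```python
import numpy as np

def wavenumber(theta, phi, lam):
    """k(theta, phi) = (2*pi/lam) [cos(phi)sin(theta), sin(phi)sin(theta), cos(theta)]^T."""
    return (2 * np.pi / lam) * np.array([
        np.cos(phi) * np.sin(theta),
        np.sin(phi) * np.sin(theta),
        np.cos(theta),
    ])

def response_vector(Delta, theta, phi, lam):
    """a_i = exp(-j Delta^T k) / sqrt(N_i); Delta is 3 x N_i (element positions in meters)."""
    N = Delta.shape[1]
    return np.exp(-1j * Delta.T @ wavenumber(theta, phi, lam)) / np.sqrt(N)

# Hypothetical half-wavelength ULA of N = 8 elements along the x-axis at 28 GHz.
lam = 3e8 / 28e9
N = 8
Delta = np.zeros((3, N))
Delta[0, :] = np.arange(N) * lam / 2
a = response_vector(Delta, theta=np.pi / 3, phi=0.0, lam=lam)
```

By construction every entry has magnitude $1/\sqrt{N}$, so the vector has unit norm regardless of the angles.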
Similarly, the backward channel from D$_2$ to D$_1$ is defined as
\begin{align}
\mathbf{H}^\mathrm{b}(\boldsymbol\vartheta,\tau^\mathrm{b},h^\mathrm{b})&\triangleq\mathbf{H}^\mathrm{b}_\mathrm{s}(\boldsymbol\vartheta,h^\mathrm{b})\delta (t-{\tau^\mathrm{b}})\in\mathbb{C}^{N_{\mathrm{1}}\times{N_\mathrm{2}}},
\end{align}
where $h^\mathrm{b}\triangleq\beta^\mathrm{b}\exp(j\psi^\mathrm{b})$ and
\begin{align}
\mathbf{H}^\mathrm{b}_\mathrm{s}(\boldsymbol\vartheta,h^\mathrm{b})&\triangleq\sqrt{N_{\mathrm{1}}N_\mathrm{2}}h^\mathrm{b}\mathbf{a}_{\mathrm{1}}(\theta_{\mathrm{1}},\phi_{\mathrm{1}})\mathbf{a}^\mathrm{T}_{\mathrm{2}}(\theta_{\mathrm{2}},\phi_{\mathrm{2}}),\label{eq:channel_model_b}
\end{align}
and $\tau^\mathrm{b}$ denotes the perceived TOA at D$_1$.
{Note that \eqref{eq1}--\eqref{eq:channel_model_b} represent an accepted model for mmWave channels \cite{Heath2016}. Unlike cmWave channels, which experience rich scattering and relatively low propagation losses, mmWave channels are sparse and have high propagation losses, leading to weaker NLOS paths than LOS. Furthermore, due to the large temporal and spatial resolution of mmWave massive MIMO systems, reflections can be resolved if there are NLOS paths, and the parameters of the LOS can be estimated without noticeable impact from the NLOS \cite{Zohair2017}. Thus, for the sake of analysis, one can consider that the LOS-only situation is representative of scenarios where the reflections are resolvable, if present at all. For the cases where the LOS path is blocked, it has been shown recently that the probability of localization via NLOS paths alone is only about 12\% \cite{Lone2019}.}
\begin{figure*}[!t]
\centering
\includegraphics[scale=0.7]{Figures/Fig3.pdf}
\caption{The timeline of the studied TWL protocols.}
\label{fig:protocols}
\end{figure*}
\subsection{Precoding and Combining}
The signal transmitted from D$_1$ is modeled by $\sqrt{E_\mathrm{t}} \mathbf{F}_1\mathbf{s}_1(t)$, where $E_\mathrm{t}$ is the transmitted energy per symbol, and $\mathbf{F}_1\in\mathbb{C}^{N_\mathrm{1}\times{N_{\mathrm{B}_1}}}$ is the transmit beamforming matrix at D$_1$ containing $N_{\mathrm{B}_1}$ analog beamforming vectors. The pilot signal $\mathbf{s}_1(t)\triangleq[s_{1,1}(t),s_{1,2}(t),\ldots,s_{1,N_{\mathrm{B}_1}}(t)]^\mathrm{T}$ is written as
\begin{align}
s_{1,b}(t)=\sum_{\ell=0}^{N_{\mathrm{s}}-1}a^{(b)}_{1,\ell}{g(t-\ell T_\mathrm{s})},\ 1\leq b\leq{N_{\mathrm{B_1}}}, \label{eq:txsignal}
\end{align}
where $a^{(b)}_{1,\ell}$ are known unit-energy pilot symbols transmitted over the $b^{\mathrm{th}}$ beam from D$_1$, and $g(t)$ is a unit-energy pulse with a {symmetric} power spectral density (PSD), denoted by $|{G(f)}|^2$. In \eqref{eq:txsignal}, $N_{\mathrm{s}}$ is the number of pilot symbols and $T_{\mathrm{s}}$ is the symbol duration, leading to a total observation time of $T_{\mathrm{o}} \approx N_{\mathrm{s}}T_{\mathrm{s}}$. Note that we keep the transmitted power fixed, independently of $N_1$, by setting $\mathrm{Tr}\left(\mathbf{F}_1^\mathrm{H}\mathbf{F}_1\right)=1$ and $\mathbf{s}_1(t)\mathbf{s}_1^\mathrm{H}(t)=\mathbf{I}_{N_{\mathrm{B}_1}}$, where $\mathrm{Tr}\left(\cdot\right)$ denotes the matrix trace, and $\mathbf{I}_{N_{\mathrm{B}_1}}$ is the $N_{\mathrm{B}_1}$-dimensional identity matrix. Similarly, $\mathbf{W}_2\in\mathbb{C}^{N_\mathrm{2}\times{N_{\mathrm{B}_2}}}$ denotes the receive beamforming matrix at D$_2$ containing $N_{\mathrm{B}_2}$ analog beamforming vectors.
In the backward transmission, D$_2$ transmits $\mathbf{s}_2(t)$ via a beamforming matrix $\mathbf{F}_2$ containing $N_{\mathrm{B}_2}$ beams, while D$_1$ receives via a beamforming matrix $\mathbf{W}_1$ containing $N_{\mathrm{B}_1}$ beams. Both $\mathbf{F}_2$ and $\mathbf{W}_1$ are defined similarly to $\mathbf{W}_2$ and $\mathbf{F}_1$, respectively, but with possibly different beam directions.
\section{{Synchronization and Localization Protocols}}\label{sec:protcols}
In this section, {we discuss how clock synchronization can be addressed in 5G mmWave}. We start by presenting a general formulation of two-way localization, which we then specify for two different protocols with the aid of Fig. \ref{fig:protocols}.
\subsection{General Operation}
We take the clock at D$_1$ as a reference and assume that D$_2$ has a \textit{clock bias}\footnote{{Bias is modeled as an unknown constant, as we consider a snapshot observation over which it is assumed to remain unchanged.}}, ${B}$, with respect to it. We also denote the nominal TOA by $\tau=\|\mathbf{p}\|/c$, where $c$ is the speed of light.
During the \textbf{forward transmission}, the signal received after beamforming at D$_2$ is given by
\begin{align}
\mathbf{y}_2(t)&=\sqrt{E_\mathrm{t}}\mathbf{W}_2^\mathrm{H}\mathbf{H}^\mathrm{f}_\mathrm{s}(h^\mathrm{f},\boldsymbol\vartheta)\mathbf{F}_1\mathbf{s}_1(t-\tau^\mathrm{f})+\mathbf{n}_2(t),\label{eq:y2}
\end{align}
where $\mathbf{n}_2(t)$ is zero-mean additive \textit{spatially-correlated} Gaussian noise, since the received signals are observed at the beamformer output. Therefore, the corresponding noise auto-covariance matrix is $\mathbf{R}_\mathrm{n2}=N_0\mathbf{W}_2^\mathrm{H}\mathbf{W}_2$, where $N_0$ is the noise PSD. We assume that $N_0$ is identical at BS and UE. Moreover, the delay at D$_2$ is
\begin{align}
\tau^\mathrm{f}=\tau+B.\label{eq:tauf}
\end{align}
Similarly, in the \textbf{backward transmission}, the signal received after beamforming at D$_1$ is
\begin{align}
\mathbf{y}_1(t)&=\sqrt{E_\mathrm{t}}\mathbf{W}_1^\mathrm{H}\mathbf{H}^\mathrm{b}_\mathrm{s}(\boldsymbol\vartheta,h^\mathrm{b})\mathbf{F}_2\mathbf{s}_2(t-\tau^\mathrm{b})+\mathbf{n}_1(t)\label{eq:y1},
\end{align}
where $\mathbf{n}_1(t)$ has an auto-covariance matrix $\mathbf{R}_\mathrm{n1}=N_0\mathbf{W}_1^\mathrm{H}\mathbf{W}_1$. {Note that the backward transmission is initiated by D$_2$ at time $t=t^\mathrm{b}$, and that the clock bias of D$_2$ observed at D$_1$ is $-B$}. Hence, the delay at D$_1$ is
\begin{align}
\tau^\mathrm{b}=t^\mathrm{b}+\tau-{B}.\label{eq:taub}
\end{align}
{There are different ways by which the timing of the response message from D$_2$ can be coordinated. In the following, we specify our formulation for two localization protocols, round-trip (RLP) and collaborative (CLP). While $\tau^\mathrm{f}$ is the same for CLP and RLP, their essential difference lies in how each defines $t^\mathrm{b}$, the instant at which D$_2$ sends the reply (backward) message. Under RLP, D$_2$ starts transmission a pre-defined interval $\tau^\mathrm{D}$ after its estimate of the forward TOA, while under CLP it starts transmission at a pre-agreed instant $t^\mathrm{b}$; in both cases the timing is taken with reference to the local clock of D$_2$.}
\subsection{Round-Trip Localization Protocol (RLP)}\label{sec:proptocols}
Under RLP, D$_2$ estimates ${\tau}^\mathrm{f}$ and waits for a pre-agreed delay $\tau^\mathrm{D}$ before transmitting back the signal $\mathbf{s}_2(t)$. In other words,
\begin{align}\label{eq:tb}
t^\mathrm{b}=\hat{\tau}^\mathrm{f}+\tau^\mathrm{D},
\end{align}
{where $(\hat\cdot)$ denotes the estimated value of a parameter.} See Fig.~\ref{fig:protocols}(a). We now introduce $e^\mathrm{f}\triangleq\hat{\tau}^\mathrm{f}-\tau^\mathrm{f}$ (and similarly $e^\mathrm{b}\triangleq\hat{\tau}^\mathrm{b}-\tau^\mathrm{b}$). Substituting \eqref{eq:tb} in \eqref{eq:taub}, then using \eqref{eq:tauf}, it can be shown that D$_1$ receives the signal $\mathbf{y}_1(t)$ at time
\begin{align}
{\tau}^\mathrm{b}&=\hat{\tau}^\mathrm{f}+\tau^\mathrm{D}+\tau-{B}=2\tau+e^\mathrm{f}+\tau^\mathrm{D}.\label{eq:tau_b}
\end{align}
Finally, based on $\mathbf{y}_1(t)$, D$_1$ estimates $\hat{\tau}^\mathrm{b}$ and eventually determines $\mathbf{p}$ and $\mathbf{o}$. Note that $B$ cancels out between the forward and backward transmissions and hence need not be estimated.
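The bias cancellation can also be checked numerically. The sketch below traces \eqref{eq:tauf}, \eqref{eq:tb}, and \eqref{eq:taub} with purely illustrative values for $\tau$, $B$, $\tau^\mathrm{D}$, and $e^\mathrm{f}$, and confirms that $B$ drops out of the round-trip measurement:

```python
# Numeric check (hypothetical values) that the clock bias B cancels under RLP.
tau, B, tau_D, e_f = 100e-9, 37e-9, 1e-6, 0.2e-9  # true TOA, bias, agreed delay, fwd. error

tau_f = tau + B                  # perceived forward TOA at D2
tau_f_hat = tau_f + e_f          # D2's estimate of the forward TOA
t_b = tau_f_hat + tau_D          # D2 replies after the pre-agreed delay
tau_b = t_b + tau - B            # TOA measured at D1 (D2's bias observed at D1 is -B)

# B has cancelled: tau_b = 2*tau + e_f + tau_D, independent of B.
assert abs(tau_b - (2 * tau + e_f + tau_D)) < 1e-18
```

Repeating the check with any other value of `B` leaves `tau_b` unchanged, which is exactly why $B$ need not be estimated under RLP.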
\subsection{Collaborative Localization Protocol (CLP)}
Unlike RLP, {under CLP D$_2$ sends back the signal $\mathbf{s}_2(t)$ at a pre-agreed time instant $t=t^\mathrm{b}$. The value of $t^\mathrm{b}$ can be chosen large enough to avoid overlapping with the preceding transmission of $\mathbf{s}_1(t)$. Since D$_2$ decides that the instant $t=t^\mathrm{b}$ has occurred based on its own clock, the TOA measured by D$_1$ in its own time scale is given by \eqref{eq:taub}.}
In parallel, D$_1$ also receives $\mathbf{y}_2(t)$ via an error-free feedback\footnote{{To give a general exposition, we assume that the second signal is sent back entirely. However, there are some alternatives that facilitate obtaining the same bounds in a more practical way, like feeding back the parameters estimated from $\mathbf{y}_2(t)$ instead of the actual $\mathbf{y}_2(t)$.} {Any errors introduced in the transmission are assumed to be corrected via layers of coding and ARQ.}} link that can possibly be established using a microwave channel. Finally, based on $\mathbf{y}_1(t)$ \textit{and} $\mathbf{y}_2(t)$, D$_1$ estimates $\mathbf{p}$ and $\mathbf{o}$. Comparing \eqref{eq:tau_b} and \eqref{eq:taub}, it can be seen that $B$ needs to be estimated under CLP, unlike RLP.
\section{Derivation of the Two-Way Position and Orientation Error Bounds } \label{sec:peb_oeb}
{After defining the system model and the communication protocols that govern the observations collection, we now proceed to define and derive PEB and OEB as performance metrics for the two protocols. These metrics are lower bounds on the performance of any estimator and can thus be used to benchmark localization algorithms. In fact, these bounds are tight for the problem under investigation. That is, the performance of well-designed practical algorithms approaches these bounds in the localization scenarios of interest \cite{Shahmansoori2017}. Therefore, analyzing the protocols in terms of the PEB and the OEB has the advantage of being representative of practical designs without the need for proposing detailed estimation algorithms. Moreover, since the PEB and OEB can often be computed in closed forms, another advantage is that they provide fundamental insights into the localization problem.}
{The PEB and OEB are derived from the FIM, a notion we discuss first in Section \ref{sec:basicFIM}. Then, we apply the FIM to the estimation of channel parameters in the forward and backward transmissions in Section \ref{sec:channelFIM}. This allows us to compute the PEB and OEB of RLP and CLP in Sections \ref{sec:RLP} and \ref{sec:CLP}, respectively, and make a quantitative performance comparison in Section \ref{sec:comparison}.
}
\subsection{Basic FIM Concepts}\label{sec:basicFIM}
{In this section, we digress to provide a brief introduction to the notion of FIM and Equivalent FIM (EFIM), useful in the analysis of the TWL protocols.} {For more background on Fisher information, the reader is referred to \cite{kay1993}.}
{Given a vector observation $\mathbf{y}$ and an unknown deterministic vector parameter $\boldsymbol{\theta}$, related by $\mathbf{y} = \mathbf{h}(\boldsymbol{\theta}) + \mathbf{n}$, where $\mathbf{n} \sim \mathcal{N}(\mathbf{0},\boldsymbol{\Sigma})$ with $\boldsymbol{\Sigma}$ independent of $\boldsymbol{\theta}$, the FIM $\mathbf{J}_{\boldsymbol{\theta}}$ is a positive semi-definite matrix defined as $\mathbf{J}_{\boldsymbol{\theta}} = \left(\nabla_{\boldsymbol{\theta}}\mathbf{h}(\boldsymbol{\theta})\right)^{\mathrm{T}}\boldsymbol{\Sigma}^{-1} \nabla_{\boldsymbol{\theta}}\mathbf{h}(\boldsymbol{\theta})$. Under certain regularity conditions, the inverse of the FIM (provided it exists) serves as a lower bound on the estimation error covariance of any unbiased estimator:
\begin{align}
\mathbb{E}\{ (\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})^{\mathrm{T}}\} \succeq \mathbf{J}^{-1}_{\boldsymbol{\theta}},
\end{align}
where the expectation is over the noise and $\mathbf{A}\succeq \mathbf{B}$ means that $\mathbf{A}-\mathbf{B}$ is a positive semidefinite matrix. The Cram\'er-Rao lower bound (CRLB) on each parameter is given by the corresponding diagonal entry of the inverse FIM. }
{If instead of $\boldsymbol{\theta}$ we need the FIM of a reparameterization $\boldsymbol{\phi}$, with $\boldsymbol{\theta}=f(\boldsymbol{\phi})$, we can apply a transformation on the FIM.
\begin{definition}[FIM Transformation]\label{def:tfim}
Given the FIM $\mathbf{J}_{\boldsymbol{\theta}}$ and an injective mapping $\boldsymbol{\theta}=f(\boldsymbol{\phi})$, the FIM $\mathbf{J}_{\boldsymbol{\phi}}$ is given by {\cite{kay1993}}
\begin{align} \label{eq:tfim}
\mathbf{J}_{\boldsymbol{\phi}} = \boldsymbol{\Upsilon}\mathbf{J}_{\boldsymbol{\theta}}\boldsymbol{\Upsilon}^{\mathrm{T}},
\end{align}
where $\boldsymbol{\Upsilon}$ is a Jacobian matrix with $[\boldsymbol{\Upsilon}]_{i,j}=\partial \boldsymbol{\theta}_j / \partial \boldsymbol{\phi}_i = \partial [f(\boldsymbol{\phi})]_j / \partial \boldsymbol{\phi}_i$.
\end{definition}
}
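As a toy instance of Definition \ref{def:tfim}, the sketch below transforms an assumed FIM of range and bearing, $\boldsymbol{\theta}=[r,\alpha]^\mathrm{T}$, into the FIM of a 2D position $\boldsymbol{\phi}=[x,y]^\mathrm{T}$, with $r=\sqrt{x^2+y^2}$ and $\alpha=\mathrm{atan2}(y,x)$; the position and the FIM entries are illustrative, not from the paper:

```python
import numpy as np

# Toy FIM transformation: theta = [range r, bearing alpha], phi = [x, y].
x, y = 3.0, 4.0
r = np.hypot(x, y)

# [Upsilon]_{ij} = d theta_j / d phi_i  (rows: x, y; columns: r, alpha).
Upsilon = np.array([
    [x / r, -y / r**2],   # dr/dx, dalpha/dx
    [y / r,  x / r**2],   # dr/dy, dalpha/dy
])

J_theta = np.diag([1e4, 1e6])          # assumed FIM of [r, alpha]
J_phi = Upsilon @ J_theta @ Upsilon.T  # FIM of [x, y] via the transformation rule

# The transformed FIM stays symmetric and positive definite.
assert np.allclose(J_phi, J_phi.T)
assert np.all(np.linalg.eigvalsh(J_phi) > 0)
```

The same mechanics underlie the position/orientation transformation used later, where $\boldsymbol{\Upsilon}$ maps channel parameters $(\boldsymbol{\vartheta},\tau)$ to $(\mathbf{o},\mathbf{p})$.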
{The EFIM is derived from the FIM, when we are interested only in part of the vector $\boldsymbol{\theta}$.
\begin{definition}[Equivalent FIM]\label{def:efim}
Given a parameter vector $\boldsymbol{\theta}\triangleq[\boldsymbol{\theta}_1^{\mathrm{T}},\boldsymbol{\theta}_2^{\mathrm{T}}]^{\mathrm{T}}$ with associated FIM
\begin{align}
\mathbf{J}_{\boldsymbol{\theta}}=\begin{bmatrix}\mathbf{J}_{\boldsymbol{\theta}_1}&\mathbf{J}_{\boldsymbol{\theta}_1\boldsymbol{\theta}_2}\\
\mathbf{J}^\mathrm{T}_{\boldsymbol{\theta}_1\boldsymbol{\theta}_2}&\mathbf{J}_{\boldsymbol{\theta}_2}
\end{bmatrix},
\end{align}
the EFIM of $\boldsymbol{\theta}_1$ is given by the Schur complement as {\cite{Shen2010_2}}
\begin{align}
\mathbf{J}^{\mathrm{e}}_{\boldsymbol{\theta}_1}=\mathbf{J}_{\boldsymbol{\theta}_1}-\mathbf{J}_{\boldsymbol{\theta}_1\boldsymbol{\theta}_2}\mathbf{J}^{-1}_{\boldsymbol{\theta}_2}\mathbf{J}^\mathrm{T}_{\boldsymbol{\theta}_1\boldsymbol{\theta}_2}.
\end{align}
\end{definition}
Note that according to this definition, $\mathbf{J}_{\boldsymbol{\theta}_1}$ is the FIM of $\boldsymbol{\theta}_1$ if $\boldsymbol{\theta}_2$ were known, and $\mathbf{J}_{\boldsymbol{\theta}_1\boldsymbol{\theta}_2}\mathbf{J}^{-1}_{\boldsymbol{\theta}_2}\mathbf{J}^\mathrm{T}_{\boldsymbol{\theta}_1\boldsymbol{\theta}_2}$ is the loss of information due to the uncertainty of $\boldsymbol{\theta}_2$.}
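The interpretation above can be verified numerically. The sketch below builds an arbitrary positive-definite FIM (illustrative, randomly generated), computes the EFIM of the first block via the Schur complement, and checks both that it reproduces the corresponding block of the full CRLB and that the information loss is positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
J = A @ A.T + 5 * np.eye(5)   # assumed full FIM of [theta1 (3 params), theta2 (2 params)]

J11, J12, J22 = J[:3, :3], J[:3, 3:], J[3:, 3:]
J_e = J11 - J12 @ np.linalg.inv(J22) @ J12.T   # EFIM of theta1 (Definition 2)

# The inverse EFIM equals the theta1 block of the full inverse FIM ...
assert np.allclose(np.linalg.inv(J_e), np.linalg.inv(J)[:3, :3])
# ... and the loss J11 - J_e due to unknown theta2 is positive semidefinite.
assert np.all(np.linalg.eigvalsh(J11 - J_e) >= -1e-10)
```

The first assertion is why the EFIM suffices for bound computations: inverting the small EFIM gives exactly the CRLB block of interest.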
\begin{definition}[PEB and OEB]
Given the equivalent Fisher information matrix of the orientation and the position, {$\mathbf{J}_{\mathbf{o,p}}^\mathrm{e}\triangleq\mathbf{C}^{-1}\in\mathbb{R}^{5\times{5}}$}, the OEB and PEB are defined as \cite{Shen2010_2}
\begin{subequations}
\begin{align}
\mathrm{OEB}&\triangleq\sqrt{{\left[\mathbf{C}\right]_{1,1}+\left[\mathbf{C}\right]_{2,2}}},\\
\mathrm{PEB}&\triangleq\sqrt{{\left[\mathbf{C}\right]_{3,3}+\left[\mathbf{C}\right]_{4,4}+\left[\mathbf{C}\right]_{5,5}}}.
\end{align}
\end{subequations}
\end{definition}
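The definition translates directly into a short computation; in the sketch below the $5\times 5$ EFIM is a purely illustrative diagonal matrix (angle entries in $\mathrm{rad}^{-2}$, position entries in $\mathrm{m}^{-2}$):

```python
import numpy as np

def peb_oeb(J_op):
    """PEB and OEB from the 5x5 EFIM of [o (2 angles), p (3 coordinates)]."""
    C = np.linalg.inv(J_op)
    oeb = np.sqrt(C[0, 0] + C[1, 1])
    peb = np.sqrt(C[2, 2] + C[3, 3] + C[4, 4])
    return peb, oeb

# Illustrative diagonal EFIM: 1e4 rad^-2 per angle, 1e2 m^-2 per coordinate.
J = np.diag([1e4, 1e4, 1e2, 1e2, 1e2])
peb, oeb = peb_oeb(J)   # peb = sqrt(3/1e2) m, oeb = sqrt(2/1e4) rad
```

For a diagonal EFIM the bounds reduce to root sums of the reciprocal diagonal entries, which makes the units of PEB (meters) and OEB (radians) explicit.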
\subsection{General FIM for Channel Parameters}\label{sec:channelFIM}
For either the forward or the backward transmission, we can compute the FIM of the channel parameters. Focusing on the backward transmission, the FIM of the channel parameters $\boldsymbol{\varphi}^\mathrm{b}\triangleq\left[\boldsymbol{\vartheta}^\mathrm{T},\psi^\mathrm{b}, \beta^\mathrm{b},\tau^\mathrm{b}\right]^\mathrm{T}$ from the observation $\mathbf{y}_1(t)$ is derived in Appendix \ref{sec:app_fim_dl}
and is shown to be
\begin{align}
\mathbf{J}_{\boldsymbol{\varphi}^\mathrm{b}}&\triangleq\begin{bmatrix}
\mathbf{J}^\mathrm{b}_{\mathrm{SS}}&\mathbf{0}_{5\times{2}}\\
\mathbf{0}_{2\times{5}}&\left[\begin{array}{cc} J_{\beta^\mathrm{b}}^\mathrm{b}&0\\0&J_\tau^\mathrm{b} \end{array}\right]
\end{bmatrix},\label{eq:partition_fim}
\end{align}
where
\begin{align}
\mathbf{J}^\mathrm{b}_{\mathrm{SS}}=\begin{bmatrix}
\mathbf{J}_{\boldsymbol{\vartheta}}^\mathrm{b}&\mathbf{J}^\mathrm{b}_{\boldsymbol{\vartheta}\beta^\mathrm{b}}\\
\left(\mathbf{J}_{\boldsymbol{\vartheta}\beta^\mathrm{b}}^\mathrm{b}\right)^\mathrm{T}&J^\mathrm{b}_{\beta^\mathrm{b}}
\end{bmatrix},\label{eq:Jss}
\end{align}
is the FIM corresponding to the spatial parameters of $\mathbf{J}_{\boldsymbol{\varphi}^\mathrm{b}}$, such that
\begin{align}
\mathbf{J}_{\boldsymbol{\vartheta}}^\mathrm{b}&\triangleq
\begin{bmatrix}
J^\mathrm{b}_{\theta_1}&J^\mathrm{b}_{\theta_1\phi_1}&J^\mathrm{b}_{\theta_1\theta_2}&J^\mathrm{b}_{\theta_1\phi_2}\\
J^\mathrm{b}_{\theta_1\phi_1}&J^\mathrm{b}_{\phi_1}&J^\mathrm{b}_{\phi_1\theta_2}&J^\mathrm{b}_{\phi_1\phi_2}\\
J^\mathrm{b}_{\theta_1\theta_2}&J^\mathrm{b}_{\phi_1\theta_2}&J^\mathrm{b}_{\theta_2}&J^\mathrm{b}_{\theta_2\phi_2}\\
J^\mathrm{b}_{\theta_1\phi_2}&J^\mathrm{b}_{\phi_1\phi_2}&J^\mathrm{b}_{\theta_2\phi_2}&J^\mathrm{b}_{\phi_2}\\
\end{bmatrix},\label{eq:jthetatheta}\\
\left(\mathbf{J}_{\boldsymbol{\vartheta}\beta^\mathrm{b}}^\mathrm{b}\right)^\mathrm{T}&\triangleq
\begin{bmatrix}
J^\mathrm{b}_{\theta_1\beta^\mathrm{b}}&J^\mathrm{b}_{\phi_1\beta^\mathrm{b}}&J^\mathrm{b}_{\theta_2\beta^\mathrm{b}}&J^\mathrm{b}_{\phi_2\beta^\mathrm{b}}
\end{bmatrix}.
\end{align}
The FIM of $\boldsymbol{\varphi}^\mathrm{f}\triangleq\left[{\boldsymbol{\vartheta}}^\mathrm{T},\psi^\mathrm{f}, \beta^\mathrm{f},\tau^\mathrm{f}\right]^\mathrm{T}$ is obtained from the observation $\mathbf{y}_2(t)$ in the same way and exhibits the same structure, as highlighted at the end of Appendix \ref{app:proof_theo}.
{
\begin{rem}Due to the structure of \eqref{eq:partition_fim}, the delay is always independent of the other channel parameters and can thus be treated separately. It will be convenient to introduce the EFIM of the delay in forward and backward transmissions: we denote by $J_{\tau^\mathrm{f}}$ the EFIM of $\tau^\mathrm{f}$, obtained from applying Definition \ref{def:efim} to the FIM of $\left[{\boldsymbol{\vartheta}}^\mathrm{T},\psi^\mathrm{f}, \beta^\mathrm{f},\tau^\mathrm{f} \right]^\mathrm{T}$ based on the measurement $\mathbf{y}_2(t)$. Similarly,
we denote by $J_{\tau^\mathrm{b}}$ the EFIM of $\tau^\mathrm{b}$, obtained from applying Definition \ref{def:efim} to the FIM of $\left[\boldsymbol\vartheta^\mathrm{T},\psi^\mathrm{b}, \beta^\mathrm{b},\tau^\mathrm{b} \right]^\mathrm{T}$ based on the measurement $\mathbf{y}_1(t)$. Note that, by definition,
\begin{align}
\quad\mathbb{E}\{\left(e^\mathrm{f}\right)^2\}\geq{J_{\tau^\mathrm{f}}^{-1}},\qquad &\mathbb{E}\{\left(e^\mathrm{b}\right)^2\}\geq{J_{\tau^\mathrm{b}}^{-1}}.\label{eq:fim_eb}
\end{align}
\end{rem}
}
\subsection{PEB and OEB for RLP}\label{sec:RLP}
To compute the PEB and OEB, we first need the EFIM of the position and orientation, $\mathbf{J}_{\mathbf{o,p}}^{\mathrm{e,b}}$. However, $\mathbf{p}$ and $\mathbf{o}$ are functions of $\boldsymbol{\vartheta}$ and $\tau$; hence, $\mathbf{J}_{\mathbf{o,p}}^{\mathrm{e,b}}$ can be obtained as a transformation of the EFIM of $\boldsymbol{\vartheta}$ and $\tau$, as outlined in Definition \ref{def:tfim}. Since the temporal and spatial parts in \eqref{eq:partition_fim} are independent, the EFIM of $\boldsymbol{\vartheta}$ and $\tau$ is given by
\begin{align}
\mathbf{J}_{\boldsymbol{\vartheta}\tau}^\mathrm{e,b} =\begin{bmatrix}
\mathbf{J}_{\boldsymbol{\vartheta}}^\mathrm{e,b}&\mathbf{0}_4\\
\mathbf{0}_4^\mathrm{T}&J_{\tau}
\end{bmatrix}.\label{eq:schur}
\end{align}
We now outline how to obtain $\mathbf{J}_{\boldsymbol{\vartheta}\tau}^\mathrm{e,b}$ before transforming it to obtain $\mathbf{J}_{\mathbf{o,p}}^{\mathrm{e,b}}$.
It follows directly from \eqref{eq:Jss} that the EFIM of $\boldsymbol{\vartheta}$ based on the backward transmission is obtained using the Schur complement as
\begin{align}
\mathbf{J}_{\boldsymbol{\vartheta}}^\mathrm{e,b}=\mathbf{J}_{\boldsymbol{\vartheta}}^\mathrm{b}-\frac{1}{{J}^\mathrm{b}_{\beta^\mathrm{b}}}\mathbf{J}_{\boldsymbol{\vartheta}\beta^\mathrm{b}}^\mathrm{b}\left(\mathbf{J}_{\boldsymbol{\vartheta}\beta^\mathrm{b}}^\mathrm{b}\right)^\mathrm{T}.\label{eq:fim_angles_e}
\end{align}
According to \eqref{eq:tau_b}, $\tau$ depends on the estimate of $\tau^\mathrm{f}$ as well as on $\tau^\mathrm{b}$. While $J_{\tau^\mathrm{f}}$ is determined from $\mathbf{y}_2(t)$, $J_{\tau^\mathrm{b}}$ is determined from $\mathbf{y}_1(t)$. Therefore, to obtain the FIM of $\tau$, rather than that of $\tau^\mathrm{b}$ or $\tau^\mathrm{f}$, we exploit the fact that, under RLP, the delays are independent of all other channel parameters. To that end, recall that $\hat\tau^\mathrm{b}=2\tau+e^\mathrm{f}+e^\mathrm{b}+\tau^\mathrm{D}$, and define
\begin{align}
\tau'\triangleq\frac{\hat\tau^\mathrm{b}-\tau^\mathrm{D}}{2}=\tau+\frac{e^\mathrm{f}+e^\mathrm{b}}{2}.
\end{align}
Consequently, using \eqref{eq:fim_eb} yields
\begin{align}
\mathbb{E}\Big\{\left(\tau'-\tau\right)^2\Big\}\geq \frac{1}{4}\left(J_{\tau^\mathrm{f}}^{-1}+J_{\tau^\mathrm{b}}^{-1}\right),
\end{align}
that is,
\begin{align}
{J_{\tau}=4\left(J_{\tau^\mathrm{f}}^{-1}+J_{\tau^\mathrm{b}}^{-1}\right)^{-1}}.\label{eq:jatua_dlp}
\end{align}
Note that in this scenario, the estimation of $\tau$ is less accurate than that of $\tau^\mathrm{b}$ alone, due to the additional dependence on the estimate of $\tau^\mathrm{f}$.
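A Monte Carlo sketch makes \eqref{eq:jatua_dlp} tangible: modeling the forward and backward TOA errors as independent zero-mean Gaussians at the bound (standard deviations $1/\sqrt{J_{\tau^\mathrm{f}}}$ and $1/\sqrt{J_{\tau^\mathrm{b}}}$, with illustrative values), the empirical variance of $\tau'$ matches $J_\tau^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
tau, tau_D = 100e-9, 1e-6               # illustrative true TOA and agreed delay
sig_f, sig_b = 0.5e-9, 0.3e-9           # assumed efficient-estimator std devs

e_f = rng.normal(0.0, sig_f, 200_000)
e_b = rng.normal(0.0, sig_b, 200_000)
tau_b_hat = 2 * tau + e_f + e_b + tau_D  # noisy round-trip TOA estimates
tau_prime = (tau_b_hat - tau_D) / 2      # RLP range estimate tau'

J_f, J_b = sig_f**-2, sig_b**-2
J_tau = 4 / (1 / J_f + 1 / J_b)          # RLP temporal information

# Empirical variance of tau' matches 1/J_tau to within sampling error.
assert abs(np.var(tau_prime) - 1 / J_tau) / (1 / J_tau) < 0.02
```

The factor of 4 reflects the halving in $\tau'$: the round trip averages two noisy one-way measurements, so the error variance is a quarter of their sum.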
Applying Definition \ref{def:tfim} to \eqref{eq:schur}, it can be shown that
\begin{align}
\mathbf{J}_{\mathbf{o,p}}^{\mathrm{e,b}}{|_\mathrm{RLP}}=\boldsymbol\Upsilon\mathbf{J}_{\boldsymbol{\vartheta}\tau}^\mathrm{e,b}\boldsymbol\Upsilon^{\mathrm{T}},
\end{align}
where
\begin{align}\label{eq:transformation_matrix}
\boldsymbol{\Upsilon}&\triangleq\left[\begin{array}{cccc;{2pt/2pt}c}
\frac{\partial\theta_1}{\partial\mathbf{o}}&\frac{\partial\phi_1}{\partial\mathbf{o}}&\frac{\partial\theta_2}{\partial\mathbf{o}}&\frac{\partial\phi_2}{\partial\mathbf{o}}&\frac{\partial\tau}{\partial\mathbf{o}}\\
\frac{\partial\theta_1}{\partial\mathbf{p}}&\frac{\partial\phi_1}{\partial\mathbf{p}}&\frac{\partial\theta_2}{\partial\mathbf{p}}&\frac{\partial\phi_2}{\partial\mathbf{p}}&\frac{\partial\tau}{\partial\mathbf{p}}\\
\end{array}\right]=\left[\begin{array}{c;{2pt/2pt}c}
\boldsymbol{\Upsilon}_\mathrm{s}&\boldsymbol{\Upsilon}_\tau\end{array}\right].
\end{align}
Consequently, for RLP, we can isolate the spatial and temporal parts and write,
\begin{align}\label{eq:FIM_OP_RLP}
\mathbf{J}_{\mathbf{o,p}}^{\mathrm{e,b}}{|_\mathrm{RLP}}&=\underbrace{\boldsymbol{\Upsilon}_\mathrm{s}\mathbf{J}_{\boldsymbol{\vartheta}}^\mathrm{e,b}\boldsymbol{\Upsilon}_\mathrm{s}^\mathrm{T}}_\text{Spatial Part}+\underbrace{{J}_{\tau}\boldsymbol{\Upsilon}_\tau\boldsymbol{\Upsilon}_\tau^\mathrm{T}}_\text{Temporal Part}.
\end{align}
{The entries of $\boldsymbol{\Upsilon}_\tau$ and $\boldsymbol{\Upsilon}_\mathrm{s}^{\mathrm{b}}$ are easily obtained from the relations mapping from location parameters to channel parameters and can be found in \cite{Zohair2017}, where it was concluded that $\boldsymbol{\Upsilon}_\tau$ is identical for the uplink and downlink, while $\boldsymbol{\Upsilon}_\mathrm{s}^{\mathrm{b}}$ is not. This results in an asymmetry in the spatial part of the FIM. To understand the implication of this asymmetry, we note that the UE position in the uplink is a function of the DOA and TOA, while in the downlink, it is a function of the DOD and TOA. However, DOD and DOA have different CRLBs, which means that the RLP localization performance in \eqref{eq:FIM_OP_RLP} depends on whether the localization is executed in the uplink (at BS) or downlink (at UE).}
\subsection{PEB and OEB for CLP}\label{sec:CLP}
As can be inferred from \eqref{eq:taub}, we have to retrieve ${B}$ under CLP, in contrast to the RLP case. Therefore, the vector of unknown parameters is
\begin{align}
\boldsymbol{\varphi}_\mathrm{C}\triangleq\left[\boldsymbol{\vartheta}^\mathrm{T},\psi^\mathrm{b},\beta^\mathrm{b},\psi^\mathrm{f},\beta^\mathrm{f},\tau,{B}\right]^\mathrm{T}.
\end{align}
To simplify the derivation, we treat the temporal parameters ($\tau$ and $B$) in isolation from the spatial parameters ($\boldsymbol{\vartheta}^\mathrm{T},\psi^\mathrm{b},\beta^\mathrm{b},\psi^\mathrm{f},\beta^\mathrm{f}$), since the two sets are independent. Similar to \eqref{eq:schur}, we seek to compute
\begin{align}\label{eq:efim_clp}
\mathbf{J}^\mathrm{e}_{\boldsymbol\vartheta\tau}\triangleq\begin{bmatrix}
\mathbf{J}^\mathrm{e}_{\boldsymbol\vartheta}& \mathbf{0}_4\\
\mathbf{0}_4^\mathrm{T}&J^\mathrm{e}_\tau
\end{bmatrix},
\end{align}
where $\mathbf{J}^\mathrm{e}_{\boldsymbol\vartheta}$ is to be computed from FIM of the spatial parameters and $J^\mathrm{e}_\tau$ is to be computed from the temporal ones.
{Since the transmission time of D$_2$ is independent of the TOA of $\mathbf{y}_2(t)$, the two transmissions occur in non-overlapping time slots, and the noise is independent at the two sides, the forward and backward transmissions can be considered independent. Therefore, the FIMs can be added according to the following theorem.}
\begin{theorem}
{Consider a random process that observes the unknown parameter $\mathbf{x}$ along with the unknown nuisance parameter $\mathbf{z}_1$, and another random process that observes $\mathbf{x}$ along with the unknown nuisance parameter $\mathbf{z}_2$. If the two processes are independent, and $\mathbf{z}_1$ and $\mathbf{z}_2$ are independent, then the total EFIM of $\mathbf{x}$ is
\begin{align}
\mathbf{J}^\mathrm{e}_\mathbf{x}=\mathbf{J}^\mathrm{e,1}_\mathbf{x}+\mathbf{J}^\mathrm{e,2}_\mathbf{x},
\end{align}
where $\mathbf{J}^\mathrm{e,i}_\mathbf{x}$ is the EFIM of $\mathbf{x}$ obtained from the $i$-th process.}
\end{theorem}
\begin{proof}
See Appendix \ref{app:proof_theo}.
\end{proof}
In other words, the EFIM of $\boldsymbol{\vartheta}$ can be written as $\mathbf{J}^\mathrm{e}_{\boldsymbol\vartheta}=\mathbf{J}^\mathrm{e,b}_{\boldsymbol\vartheta}+\mathbf{J}^\mathrm{e,f}_{\boldsymbol\vartheta}$ by summing the EFIMs of $\boldsymbol\vartheta$ computed from $\mathbf{y}_1(t)$ and $\mathbf{y}_2(t)$. From \eqref{eq:Jss}, it follows that
\begin{align*}
\mathbf{J}^\mathrm{e}_{\boldsymbol\vartheta}=\mathbf{J}_{\boldsymbol{\vartheta}}^\mathrm{b}+\mathbf{J}_{\boldsymbol{\vartheta}}^\mathrm{f}-\frac{1}{{J}^\mathrm{b}_{\beta^\mathrm{b}}}\mathbf{J}_{\boldsymbol{\vartheta}\beta^\mathrm{b}}^\mathrm{b}\left(\mathbf{J}_{\boldsymbol{\vartheta}\beta^\mathrm{b}}^\mathrm{b}\right)^\mathrm{T}-\frac{1}{{J}^\mathrm{f}_{\beta^\mathrm{f}}}\mathbf{J}_{\boldsymbol{\vartheta}\beta^\mathrm{f}}^\mathrm{f}\left(\mathbf{J}_{\boldsymbol{\vartheta}\beta^\mathrm{f}}^\mathrm{f}\right)^\mathrm{T}.
\end{align*}
Moreover, $J^\mathrm{e}_\tau$ can be obtained noting that in the backward transmission $\tau^\mathrm{b} = t^\mathrm{b} + \tau -B$, while in the forward transmission $\tau^\mathrm{f} = \tau + B$, and that $\tau$ is independent of any other parameters. Hence, using the transformation of parameters and the fact that the two transmissions are independent, we can write the FIM of $\left[\tau,B\right]^\mathrm{T}$ as
\begin{align}
\mathbf{J}_{\tau B}=J_{{\tau}^\mathrm{b}}\left[\begin{array}{cc} 1&-1\\-1&1 \end{array}\right]+J_{{\tau}^\mathrm{f}} \left[\begin{array}{cc} 1&1\\1&1 \end{array}\right]
\end{align}
from which EFIM of $\tau$ is obtained by Schur complement as
\begin{align}
J^{\mathrm{e}}_{\tau}&=4\left(J^{-1}_{{\tau}^\mathrm{f}}+J^{-1}_{{\tau}^\mathrm{b}}\right)^{-1}.
\label{eq:fim_tau_e}
\end{align}
{It is interesting to see that the temporal information represented by $J_{\tau}$ is identical for both CLP \eqref{eq:fim_tau_e} and RLP \eqref{eq:jatua_dlp}. Therefore, any performance difference between these two protocols is attributed to the spatial information only. }
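The reduction from the $2\times 2$ FIM of $[\tau, B]^\mathrm{T}$ to \eqref{eq:fim_tau_e} is a one-line Schur complement, checked below with illustrative delay FIMs:

```python
import numpy as np

J_f, J_b = 4.0e18, 9.0e18   # assumed forward/backward delay FIMs (s^-2)

# FIM of [tau, B]: tau^b = t^b + tau - B contributes J_b * [[1,-1],[-1,1]],
# tau^f = tau + B contributes J_f * [[1,1],[1,1]].
J_tauB = J_b * np.array([[1.0, -1.0], [-1.0, 1.0]]) + J_f * np.array([[1.0, 1.0], [1.0, 1.0]])

# EFIM of tau by taking the Schur complement over B:
J_tau_e = J_tauB[0, 0] - J_tauB[0, 1] ** 2 / J_tauB[1, 1]

# Matches the closed form 4 (J_f^-1 + J_b^-1)^-1, i.e., the RLP temporal information.
assert np.isclose(J_tau_e, 4 / (1 / J_f + 1 / J_b), rtol=1e-12)
```

The match with the RLP expression confirms numerically that estimating $B$ under CLP costs no temporal information relative to RLP's bias cancellation.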
We now derive the EFIM of the position and orientation. Based on \eqref{eq:efim_clp} and Definition \ref{def:tfim},
\begin{align}
\mathbf{J}_{\mathbf{o,p}}^{\mathrm{e}}{|_\mathrm{CLP}}&=\boldsymbol\Upsilon\mathbf{J}_{\boldsymbol{\vartheta}\tau}^\mathrm{e}\boldsymbol\Upsilon^{\mathrm{T}},\notag\\
&=\underbrace{\boldsymbol{\Upsilon}_\mathrm{s}\mathbf{J}_{\boldsymbol{\vartheta}}^\mathrm{e,f}\boldsymbol{\Upsilon}_{\mathrm{s}}^\mathrm{T}}_\text{Forward Spatial Part}+\underbrace{\boldsymbol{\Upsilon}_\mathrm{s}\mathbf{J}_{\boldsymbol{\vartheta}}^\mathrm{e,b}\boldsymbol{\Upsilon}_{\mathrm{s}}^\mathrm{T}}_\text{Backward Spatial Part}+\underbrace{{J}_{\tau}\boldsymbol{\Upsilon}_\tau\boldsymbol{\Upsilon}_\tau^\mathrm{T}}_\text{Temporal Part}.\label{eq:FIM_OP_CLP}
\end{align}
Note that \eqref{eq:FIM_OP_CLP} comprises three terms: two terms related to the spatial information in the forward and backward transmissions, and one term related to the temporal information.
\subsection{Performance Comparison of RLP, CLP and OWL}\label{sec:comparison}
{The EFIM of position and orientation under RLP is given in \eqref{eq:FIM_OP_RLP}, while that under CLP is given in \eqref{eq:FIM_OP_CLP}. In this section, we compare the performance of these two protocols with the standard one-way localization (OWL) from \cite{Zohair2017}, where it was shown that
\begin{align}\label{eq:FIM_OP_OWL}
{\mathbf{J}^{\mathrm{e},\mathrm{b}}_\mathbf{o,p}|_\mathrm{OWL}=\underbrace{\boldsymbol{\Upsilon}_\mathrm{s}\mathbf{J}_{\boldsymbol{\vartheta}}^\mathrm{e,b}\boldsymbol{\Upsilon}_{\mathrm{s}}^\mathrm{T}}_{\text{Spatial Part}}+\underbrace{{J}_{\tau^\mathrm{b}}\boldsymbol{\Upsilon}_\tau\boldsymbol{\Upsilon}_\tau^\mathrm{T}}_{\text{Temporal Part}},}
\end{align}
under the assumption of perfect synchronization between the two devices (i.e., $B=0$).
}
\subsubsection{RLP vs. CLP}
{Comparing RLP to CLP, we note that \eqref{eq:FIM_OP_RLP} contains only one spatial information term, related to the backward transmission, and another temporal information term. These two terms are equal to their counterparts in \eqref{eq:FIM_OP_CLP}. Hence, $ \mathbf{J}_{\mathbf{o,p}}^\mathrm{e}|_\mathrm{CLP} \succ \mathbf{J}_{\mathbf{o,p}}^{\mathrm{e,b}}{|_\mathrm{RLP}}$, meaning that CLP will always outperform RLP. Nevertheless, CLP requires additional overhead, as it involves sending back the waveform $\mathbf{y}_2(t)$ to D$_1$ (or estimated parameters with uncertainty) and thus requires an additional data transmission.}
\subsubsection{RLP vs. OWL} Inspecting \eqref{eq:FIM_OP_OWL}, it can be seen that $\mathbf{J}^{\mathrm{e},\mathrm{b}}_\mathbf{o,p}|_\mathrm{OWL}$ has the same expression as \eqref{eq:FIM_OP_RLP}, but with
\begin{align}
{J_{\tau}=J_{\tau^\mathrm{b}}.}\label{eq:fimOWL}
\end{align}
{This means that both RLP and OWL have the same spatial information but differ in the temporal information. However, it is not clear which protocol is superior. Therefore,} we provide the following proposition.
\begin{proposition}\label{prop:1}
RLP outperforms OWL if $ J_{\tau^\mathrm{f}}>\frac{1}{3}J_{\tau^\mathrm{b}}$.
\end{proposition}
\begin{proof}
Comparing RLP with OWL, it can be seen that they have equal spatial, but different temporal information. Comparing \eqref{eq:jatua_dlp} with \eqref{eq:fimOWL}, for RLP to outperform OWL, we should have
\begin{align*}
J_{\tau^\mathrm{b}}&<4\left(J_{\tau^\mathrm{f}}^{-1}+J_{\tau^\mathrm{b}}^{-1}\right)^{-1}=J_{\tau^\mathrm{b}}\frac{4J_{\tau^\mathrm{f}}}{J_{\tau^\mathrm{f}}+J_{\tau^\mathrm{b}}},
\end{align*}
which leads to $J_{\tau^\mathrm{f}}>\frac{1}{3}J_{\tau^\mathrm{b}}$.
\end{proof}
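The condition in Proposition~\ref{prop:1} can be sanity-checked numerically. The sketch below (with arbitrary illustrative values for $J_{\tau^\mathrm{f}}$ and $J_{\tau^\mathrm{b}}$, which are not taken from the paper's simulations) uses the harmonic-combination form of the RLP temporal information that appears in the proof:

```python
def rlp_temporal_info(j_f, j_b):
    # J_tau under RLP: 4 times the harmonic combination of the forward
    # and backward delay information (form taken from the proof above).
    return 4.0 / (1.0 / j_f + 1.0 / j_b)

def rlp_beats_owl(j_f, j_b):
    # RLP outperforms OWL when its temporal information exceeds J_{tau^b},
    # which reduces to the condition J_{tau^f} > J_{tau^b} / 3.
    return rlp_temporal_info(j_f, j_b) > j_b
```

At exactly $J_{\tau^\mathrm{f}}=J_{\tau^\mathrm{b}}/3$ the two temporal terms coincide, and above this threshold RLP wins, in agreement with the proposition.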
This means that, when the bandwidth is equal in both directions, the forward link should have at least one third the signal-to-noise ratio (SNR) of the backward link for RLP to outperform OWL. From
\eqref{eq:tau_b_tau_f}, it can be seen that this mainly depends on the transmit and receive beamforming. However, under the general case of non-identical bandwidth allocation, \eqref{eq:tau_b_tau_f} can be used to determine the values of bandwidth and SNR that satisfy the condition in Proposition 1.
\subsection{{Relationship of RLP and CLP with Channel Parameters}}\label{sec:RLP_CLP_channel_parameters}
{Since the derived position and orientation bounds are obtained by transforming the channel parameter bounds, the localization performance is ultimately affected by the channel parameter estimation accuracy. In \cite{Zohair2017}, it was concluded that the squared PEB is the sum of the CRLBs of TOA and the BS angle (DOA in the uplink or DOD in the downlink). It was also concluded that the CRLB of DOA is lower than the CRLB of DOD. Extending these results to the RLP and CLP in this paper, it can be seen from \eqref{eq:FIM_OP_RLP} that the RLP performance is governed by the backward transmission from D$_2$ to D$_1$. In other words, if the backward transmission is uplink, D$_1$ is a BS, whose angle is a DOA, which leads to a better PEB. Note that, in such a case, the DOD estimation error does not affect the localization performance. The opposite is true if D$_1$ is a UE.}
For CLP, it can be seen from \eqref{eq:FIM_OP_CLP} that the squared PEB is the sum of the CRLBs of DOA, DOD and TOA, meaning that, regardless of the BS and UE roles, the PEB is affected by the errors in estimating all three of these parameters.
{The squared OEB is the sum of the CRLBs of DOA and DOD \cite{Zohair2017}, and hence it is not affected by the accuracy of the TOA estimation.}
\subsection{{Insights on NLOS}}
When the signal propagation occurs in a mixed propagation environment (LOS and NLOS), the delay of the LOS path, being the first and strongest path, can still be separated and identified, while the delays of the NLOS paths must be subsequently estimated. For RLP, the NLOS paths in the backward transmission can assist the positioning, as shown in \cite{Zohair2017}. On the other hand, for CLP, the parameters of the NLOS paths in the forward and backward transmissions can be estimated separately. However, this gives rise to a path association problem, whereby the set of parameters estimated from the forward transmission has to be paired with its counterparts in the backward transmission and with the scatterers or reflectors in the environment to re-establish the different paths. Moreover, when the LOS is obstructed, the localization performance is severely degraded \cite{Lone2019}.
\section{Simulation Results and Discussion}\label{sec:sims}
\subsection{Simulation Environment}
\subsubsection{System Layout and Channel} In our simulations, we investigate and compare RLP and CLP using the position and orientation error bounds to quantify the estimation accuracy. Since both protocols involve forward and backward transmissions, we select an equal number of antennas at the BS and the UE to make the comparison of these protocols fair\footnote{{It is understood that BSs can accommodate more complexity, and their arrays can be larger, such as that in \cite{Fan2017} where 10,000 antennas are used. However, we use an equal number of antennas on D$_1$ and D$_2$ in order to explore the intrinsic differences between the protocols. These differences result from the lack of symmetry between the two links, because the orientation is known for only one of the two devices. Using the same number of antennas thus prevents masking our protocol analysis.}}. Towards that, we consider a BS and a UE, both with a $12\times{12}$ uniform rectangular antenna array (URA), communicating via a LOS path. Moreover, we assume that the BS array is located in the $xz$-plane, centered about the origin $[0,0,0]^\mathrm{T}$, and thus has orientation angles of $[0^\circ,0^\circ]^\mathrm{T}$. On the other hand, the UE moves freely within a $120^\circ$ diamond-shaped region defined by the vertices $\{(0,0,-10),(25\sqrt{3},25,-10),(0,50,-10),(-25\sqrt{3},25,-10)\}$. That is, the BS height is 10 meters. We focus on two cases of orientation angles with respect to the $z$-axis and $x$-axis: $\mathbf{o}=[\chi_0,\zeta_0]^\mathrm{T}=[0^\circ,0^\circ]^\mathrm{T}$ and $\mathbf{o}=[30^\circ,30^\circ]^\mathrm{T}$, as specified in each experiment. Finally, at a distance $d_1$, the channel gain is modeled as $\beta^\mathrm{b}=\beta^\mathrm{f}=\frac{\lambda}{4{\pi}d_1}$ \cite{Goldsmith2005}.
\subsubsection{Transmit-Receive Model} We select the mmWave frequency $f=38~$GHz and bandwidth\footnote{{At these frequency and bandwidth values, beam squint is negligible. From \cite{beqmsquint}, the pointing error due to beam squint is proportional to $\left(1+f/W\right)^{-1}\approx1$.}} $W=125~$MHz. We assume an ideal $\mathrm{sinc}$ pulse-shaping filter such that $W^2_\mathrm{eff}=W^2/3$. The transmitted power is $E_\mathrm{t}/T_\mathrm{s}=0~$dBm, and $N_0=-170$ dBm/Hz. Furthermore, we set the number of pilot symbols to $N_\mathrm{s}=64$. This yields a location-dependent SNR of
\begin{align}
\mathrm{SNR}~[\mathrm{dB}] =150.26+20\log_{10}\left(\beta^\mathrm{b}\|\mathbf{a}^\mathrm{T}_{i}\mathbf{F}^\mathrm{H}_{i}\|\|\mathbf{W}^\mathrm{H}_j\mathbf{a}_j\|\right),
\end{align}
where $i,j\in\{1,2\}, i\neq{j}$, specified depending on the communication direction being forward or backward. {Note that this SNR results from beamforming gain of all the $N_\mathrm{B_1}$ and $N_\mathrm{B_2}$ beams combined.} {Although our formulation holds for any type of analog beamforming, in our numerical simulations,} we adopt \textit{fixed} directional beamforming {as an example scheme} similar to \cite{Zohair2017}. We consider $ 1\leq b\leq{N_{\mathrm{B}_1}}=N_{\mathrm{B}_2}=25$ beams at both the UE and BS, such that
\begin{figure}[!t]
\centering
\includegraphics[scale=0.60]{Figures/fig4.pdf}
\caption{Beamforming example with 4 beams. The rightmost device has orientation angles of $30^\circ$, while the other two have $0^\circ$. }
\label{fig:rotated_beams}
\end{figure}
\begin{align*}
\mathbf{f}_{1,b}&=\frac{1}{\sqrt{N_{\mathrm{B}_1}}}\mathbf{a}_{\mathrm{1}}(\theta^\mathrm{f}_{1,b},\phi^\mathrm{f}_{1,b}),\\
\mathbf{w}_{1,b}&=\frac{1}{\sqrt{N_{\mathrm{B}_1}}}\mathbf{a}_{\mathrm{1}}(\theta^\mathrm{w}_{1,b},\phi^\mathrm{w}_{1,b}),
\end{align*}
are D$_1$'s transmit and receive beams pointing towards $(\theta^\mathrm{f}_{1,b},\phi^\mathrm{f}_{1,b})$ and $(\theta^\mathrm{w}_{1,b},\phi^\mathrm{w}_{1,b})$, respectively. The directions of the beams at the BS are chosen to be equispaced over the sector. At the UE, these directions are reversed to point upwards and rotated with respect to the UE frame of reference by the orientation angles specified in the studied experiment. This setting provides 90\% of the locations with an SNR of at least 17 dB. Fig.~\ref{fig:rotated_beams} provides three examples of beamforming configurations: a BS at $(0,0,0)$ with beams pointing downwards, a UE at $(25,25,-10)$ with zero orientation angles, and another UE at $(-25,25,-10)$ with $\mathbf{o}=[30^\circ,30^\circ]^\mathrm{T}$. The black rectangles denote the array frame of reference of each device. Note that the first UE has reversed beam directions compared to the BS, while the second UE has beam directions that are reversed and rotated by $[30^\circ,30^\circ]^\mathrm{T}$, so that the beam directions remain constant with respect to the UE local frame of reference.
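The transmit-receive model above can be illustrated with a short sketch. The URA steering-vector convention below is an assumption (the paper does not spell out its element indexing), and \texttt{snr\_const\_db} is only a plausible decomposition of the 150.26~dB constant from the stated system parameters ($N_1=N_2=144$ antennas, $N_\mathrm{s}=64$ pilots, 0~dBm transmit power, $N_0=-170$~dBm/Hz, $W=125$~MHz):

```python
import numpy as np

def ura_steering(theta, phi, n=12, d_over_lambda=0.5):
    # Illustrative response of an n x n URA in the xz-plane; the exact
    # phase convention is an assumption, not taken from the paper.
    p = np.arange(n) - (n - 1) / 2
    kx = 2 * np.pi * d_over_lambda * np.sin(theta) * np.cos(phi)
    kz = 2 * np.pi * d_over_lambda * np.cos(theta)
    return np.kron(np.exp(1j * kx * p), np.exp(1j * kz * p)) / n  # unit norm

def fixed_beam(theta, phi, n_beams=25, n=12):
    # One of the N_B fixed directional beams: f_{1,b} = a_1(theta, phi) / sqrt(N_B).
    return ura_steering(theta, phi, n) / np.sqrt(n_beams)

def snr_const_db(n1=144, n2=144, n_s=64, p_dbm=0.0, n0_dbm_hz=-170.0, w_hz=125e6):
    # Candidate decomposition of the constant in the SNR expression:
    # array gains and pilots, plus transmit power over noise in the bandwidth.
    return 10 * np.log10(n1 * n2 * n_s) + (p_dbm - n0_dbm_hz) - 10 * np.log10(w_hz)
```

With these parameters, \texttt{snr\_const\_db()} evaluates to approximately 150.26, matching the constant in the SNR expression, and the beamforming gain $|\mathbf{a}^\mathrm{H}\mathbf{f}|$ is maximal when the beam points at the true direction.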
\subsubsection{Scenarios Studied} We study the PEB and OEB under RLP and CLP and compare these bounds to those obtained for OWL in \cite{Zohair2017}. Each of these three protocols is studied when localization is performed in the uplink (at BS) and in the downlink (at UE). Recall that CLP is symmetric in both cases, hence only one curve is given.
\subsection{PEB and OEB with $0^\circ$ UE Orientation}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.60]{Figures/fig5.pdf}
\caption{CDF of PEB with UE orientation angles of $0^\circ$, and $N_\mathrm{UE}=N_\mathrm{BS}=144$, $N_\mathrm{B}=25$.}
\label{fig:peb0}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.60]{Figures/fig6.pdf}
\caption{CDF of OEB with UE orientation angles of $0^\circ$, and $N_\mathrm{UE}=N_\mathrm{BS}=144$, $N_\mathrm{B}=25$.}
\label{fig:oeb0}
\end{figure}
The {cumulative distribution function} (CDF) of the PEB with zero orientation angles is provided in Fig.~\ref{fig:peb0} for all the considered protocols. To keep the comparison fair, we first compare the three solid curves corresponding to uplink localization, and then those related to downlink localization (dash-dot lines). It can be seen that RLP provides a negligible improvement over OWL. Despite that, RLP is still a better approach, since it alleviates the need for high-accuracy synchronization at the cost of UE-BS coordination. As discussed in Section \ref{sec:comparison}, RLP and OWL have the same spatial component, but RLP has a higher temporal information content. However, Fig.~\ref{fig:peb0} shows almost identical results for both protocols, which means that the additional temporal information in RLP is of little importance, and thus the localization performance is limited by angle estimation rather than by the time delay. To understand this phenomenon better, we study the impact of the bandwidth on the performance in Section \ref{sec:W}. On the other hand, as expected, CLP is the best approach among the three studied, since it captures more useful information. However, it requires a more complex implementation due to the need for a feedback channel.
{Comparing the dash-dotted curves with the solid curves in Fig.~\ref{fig:peb0}, it can be seen that the three protocols behave in the downlink in a manner similar to the uplink. It can also be seen that, while OWL and RLP are almost identical, CLP is superior to both. The reasons why the performance of RLP and OWL is worse in the downlink are beyond the scope of this paper and were extensively studied in \cite{Zohair2017}. Briefly,} it was concluded that, under matched orientation between the BS and UE, the uplink PEB is better than the downlink PEB. This is because 1) the PEB is a function of the CRLB of the BS angles, and 2) the CRLB of DOA is lower than the CRLB of DOD. Therefore, when the BS angles are DOAs (uplink), the PEB will be lower.
Considering the CDF of the OEB with zero orientation angles in Fig.~\ref{fig:oeb0}, it can be seen that RLP and OWL exhibit identical performance. Note that OEB depends on DOA and DOD, while the enhancement of RLP over OWL is in the temporal domain. Furthermore, in line with the results in \cite{Zohair2017} with zero orientation angles, the uplink and downlink OEB are the same. Therefore, the four curves of RLP and OWL with uplink and downlink localization coincide. Moreover, in terms of OEB, CLP is also better than RLP and OWL due to the fourth term in \eqref{eq:FIM_OP_CLP}, which accounts for the coupling between the path gain and the transmission angles, providing more spatial information on the orientation angles. {Intuitively, this higher information is a result of estimating the path gain in both transmissions.}
\subsection{PEB and OEB with $30^\circ$ UE Orientation}\label{sec:sim_peb_oeb}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.60]{Figures/fig7.pdf}
\caption{CDF of PEB with UE orientation angles of $30^\circ$, and $N_\mathrm{UE}=N_\mathrm{BS}=144$, $N_\mathrm{B}=25$.}
\label{fig:peb30}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.60]{Figures/fig8.pdf}
\caption{CDF of OEB with UE orientation angles of $30^\circ$, and $N_\mathrm{UE}=N_\mathrm{BS}=144$, $N_\mathrm{B}=25$.}
\label{fig:oeb30}
\end{figure}
The CDF of the PEB with orientation angles $\mathbf{o}=[30^\circ,30^\circ]^\mathrm{T}$ is shown in Fig.~\ref{fig:peb30} for all the considered protocols. The overall observation from this figure, in comparison to Fig.~\ref{fig:peb0}, is that the performance worsens when the beams are steered away, i.e., when the orientation angles are non-zero. This can result in a loss of beamforming gain that depends non-linearly on the UE location and orientation angles. Nevertheless, CLP performance remains superior to RLP and OWL. In this example, performance losses of $42$ cm, $54$ cm, and $80$ cm were observed at a PEB CDF of 90\%, under CLP, uplink RLP, and downlink RLP, respectively. On the other hand, comparing Fig.~\ref{fig:oeb30} with Fig.~\ref{fig:oeb0}, it can be seen that, at a CDF of 90\%, there is an OEB performance loss of $6.8^\circ$, $8.8^\circ$, and $11.5^\circ$ under CLP, uplink RLP, and downlink RLP, respectively. Considering the PEB and OEB losses, it can be concluded that, among the studied approaches, CLP is the most robust to UE mis-orientation. Finally, we note that, in comparison to the case of matched orientation, under $30^\circ$ mis-orientation the system can still provide sub-meter PEB, while providing significantly higher OEB. This means that orientation estimation is more challenging than position estimation. {Recall that orientation changes the beamforming angles, which impacts localization performance. Hence, the study of orientation in this context is meaningful, despite this degraded performance.}
\subsection{Impact of the System Bandwidth on PEB}\label{sec:W}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.60]{Figures/fig9.pdf}
\caption{PEB at 0.9 CDF with respect to the bandwidth $W$.}
\label{fig:W_sat}
\end{figure}
In Section \ref{sec:sim_peb_oeb}, we concluded that the system is limited by the estimation of the angles rather than the time delay. To investigate this phenomenon further, we now look closer into the impact of the bandwidth. {In the context of localization and ranging, higher bandwidths provide a more accurate estimation of the TOA, which leads to better localization bounds in general. Towards that,} the results in Fig.~\ref{fig:W_sat} indicate that as the bandwidth increases, the PEB decreases, until it reaches a floor at around $100$ MHz when $\mathbf{o}=[0^\circ,0^\circ]^\mathrm{T}$, and $60$ MHz when $\mathbf{o}=[30^\circ,30^\circ]^\mathrm{T}$. Based on these results, we make the following observations:
\begin{enumerate}
\item At the higher bandwidths that are more relevant in mmWave, the temporal information is very high compared to the spatial information, and the performance becomes insensitive to $W$, i.e., the system is spatially-limited.
\item Under mis-orientation, the accuracy of the spatial information degrades, and the system becomes spatially-limited at an even lower bandwidth. Hence, the improved temporal information does not provide any benefit beyond the performance achieved at lower bandwidths.
\item On the contrary, at lower bandwidths, the amount of temporal information decreases and becomes comparable to the spatial information. Therefore, the weight of the temporal information in the forward transmission becomes more significant, and the difference between OWL and RLP becomes more pronounced.
\end{enumerate}
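The saturation effect behind these observations can be reproduced with a toy model. The sketch below is not the paper's exact bound; it only combines a delay-limited term that shrinks as $1/W_\mathrm{eff}^2$ (with $W_\mathrm{eff}^2=W^2/3$ for the ideal sinc pulse assumed in the paper) with a fixed spatial term, using illustrative constants:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def toy_peb(w_hz, delay_info_scale=50.0, spatial_var=1e-4):
    # Toy squared PEB: a TOA term whose variance falls as 1/W_eff^2,
    # plus a bandwidth-independent spatial (angle) term. Both constants
    # are illustrative, not taken from the paper's simulations.
    w_eff_sq = w_hz**2 / 3  # ideal sinc pulse, as in the paper
    toa_var = 1.0 / (8 * np.pi**2 * w_eff_sq * delay_info_scale)
    return np.sqrt(C**2 * toa_var + spatial_var)
```

As $W$ grows, the TOA term vanishes and the toy PEB flattens at the floor set by the spatial term, mirroring the saturation seen in Fig.~\ref{fig:W_sat}.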
\subsection{Impact of $N_\mathrm{BS}$ and $N_\mathrm{UE}$ on PEB}\label{sec:Nbs_Nue}
We now study the effect of the number of antennas at BS and UE on the PEB under CLP and RLP. Since this number can be $N_1$ or $N_2$ depending on the device role, we use $N_\mathrm{BS}$ and $N_\mathrm{UE}$ to unify the notation of the number of antennas at BS and UE, respectively.
Fig.~\ref{fig:N_ue} illustrates the effect of $N_\mathrm{UE}$ on the PEB with $N_\mathrm{B}=25$ and $N_\mathrm{BS}=144$. It can be seen that, at matched orientation ($0^\circ$, $0^\circ$), the performance tends to improve slightly for low to moderate $N_\mathrm{UE}$ values. However, a higher $N_\mathrm{UE}$ generally results in worse performance. This is because, with a higher $N_\mathrm{UE}$, the UE beams become narrower, and more beams are required to provide full area coverage. It can also be noticed that, with an orientation of ($30^\circ$, $30^\circ$), the rate of performance deterioration is higher. Interestingly, this rate is almost the same for the three protocols, which means that the performance loss is mainly due to SNR loss.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.60]{Figures/fig10.pdf}
\caption{PEB at 0.9 CDF as a function of the UE number of antennas, with $N_\mathrm{B}=25$, with orientation angles $0^\circ$ and $30^\circ$, and $N_\mathrm{BS}=144$.}
\label{fig:N_ue}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.60]{Figures/fig11.pdf}
\caption{PEB at 0.9 CDF as a function of the BS number of antennas, with $N_\mathrm{B}=25$, with orientation angles $0^\circ$ and $30^\circ$, and $N_\mathrm{UE}=144$.}
\label{fig:N_bs}
\end{figure}
On the other hand, the impact of $N_\mathrm{BS}$ is shown in Fig.~\ref{fig:N_bs} with $N_\mathrm{B}=25$ and $N_\mathrm{UE}=144$. It can be seen that a higher $N_\mathrm{BS}$ slightly improves the PEB in general. Similar to the case in Fig.~\ref{fig:N_ue}, it is understood that the PEB will eventually increase when $N_\mathrm{BS}$ grows arbitrarily large, albeit at $N_\mathrm{BS}$ values well beyond those displayed in Fig.~\ref{fig:N_bs}, and to a lesser extent than for higher $N_\mathrm{UE}$. Therefore, adding more antennas at the BS will not degrade the localization performance, as adding UE antennas potentially would, at least within the studied range of array sizes. Finally, notice that both Figs.~\ref{fig:N_ue} and \ref{fig:N_bs} exhibit some non-monotonic trends. This is due to the nature of directional beamforming, whereby the beamforming gain depends on the user location, the number of antennas, and the beam directions, as detailed in \cite{Zohair2016}. {In other words, varying the number of antennas results in a different sidelobe pattern that non-linearly varies the PEB and OEB.}
\section{Conclusions}\label{sec:conclusions}
Many publications on localization assume that the BS and UE are tightly synchronized. In practice, however, communication systems are not synchronized to a level of accuracy useful for localization. Focusing on this issue, in this paper we considered two two-way localization protocols, referred to as the {round-trip localization protocol (RLP) and the collaborative localization protocol (CLP)}. We investigated the PEB and OEB under these two protocols, and showed mathematically that CLP outperforms RLP by a significant margin. However, this comes at the cost of requiring a feedback channel, unlike RLP, where no synchronization or feedback is required, although dedicated hardware may be needed to trigger the response. In our derivations, we considered beamforming at the transmitter and the receiver and accounted for spatially-correlated receive noise. Considering the numerical simulation results, the enhancement observed for RLP over the traditional OWL was limited; that is, localization was angle-limited rather than delay-limited. Moreover, our numerical results also showed that it is more beneficial to have more antennas at the BS than at the UE.
Future work based on this paper includes considering adaptive beamforming, whereby the beam directions are modified in the second round of transmission. Moreover, multipath propagation would be a relevant extension, since scatterers may differ in the uplink and downlink, depending on the beam directions.
\section*{Acknowledgment}
The authors would like to thank Dr. Xiangyun Zhou of the Research School of Engineering at the Australian National University for his valuable feedback on this work.
\appendices
\section{Derivation of the Elements of the FIM of the Channel Parameters}\label{sec:app_fim_dl}
Consider the backward transmission round. In this case, D$_1$ has the following observation:
\begin{align}
\mathbf{y}_1(t)=\sqrt{N_\mathrm{1}N_\mathrm{2}E_\mathrm{t}}h^\mathrm{b}\mathbf{W}_\mathrm{1}^\mathrm{H}\mathbf{a}_\mathrm{1}\mathbf{a}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{s}_2(t-\tau^\mathrm{b})+\mathbf{n}_1(t).
\end{align}
For the case of zero-mean additive correlated Gaussian noise, the FIM of $\boldsymbol{\varphi}^\mathrm{b}$ is given by \cite{kay1993}
\begin{align}
J^\mathrm{b}_{xy}\triangleq&\frac{1}{N_0}\int_0^{T_\mathrm{o}}\Re\left\{\frac{\partial\boldsymbol{\mu}^\mathrm{H}(t)}{\partial x}\left(\mathbf{W}_1^\mathrm{H}\mathbf{W}_1\right)^{-1}\frac{\partial\boldsymbol{\mu}(t)}{\partial y}\right\}\mathrm{d}t,\\ &x,y\in\{\theta_{{1}},\phi_{{1}},\theta_{{2}},\phi_{{2}},\psi^\mathrm{b},\beta^\mathrm{b},\tau^\mathrm{b}\}\notag
\end{align}
where $\boldsymbol{\mu}(t)$ is the mean of the observation vector, and $T_\mathrm{o}$ is assumed to be long enough to receive the entire pilot signal. Consequently, we write
\begin{align}
\boldsymbol{\mu}(t)=\sqrt{N_\mathrm{1}N_\mathrm{2}E_\mathrm{t}}\beta^\mathrm{b}\mathrm{e}^{j\psi^\mathrm{b}}\mathbf{W}_\mathrm{1}^\mathrm{H}\mathbf{a}_\mathrm{1}\mathbf{a}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{s}_2(t-\tau^\mathrm{b}).
\end{align}
Defining $\mathbf{\dot{s}}({t})\triangleq\frac{\partial\mathbf{{s}}({t})}{\partial t}, \mathbf{k}_\mathrm{i}=\frac{\partial}{\partial \theta_{i}}\mathbf{a}_\mathrm{i}, \mathbf{p}_\mathrm{i}=\frac{\partial}{\partial \phi_{i}}\mathbf{a}_\mathrm{i}$, $i\in\{1,2\}$, and the operator $\mathbf{P}_\mathbf{A}\triangleq\mathbf{A}\left(\mathbf{A}^\mathrm{H}\mathbf{A}\right)^{-1}\mathbf{A}^\mathrm{H}$, and $\gamma\triangleq{N_1N_2N_\mathrm{s}E_\mathrm{t}}/{N_0}$, we can write the following
\begin{subequations}\label{eq:SpatialFIM1}
\begin{align}
J^\mathrm{b}_{\theta_1}&=\gamma{\beta^\mathrm{b}}^2\left(\mathbf{a}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{a}_\mathrm{2}^{*}\right)\left(\mathbf{k}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{k}_\mathrm{1}\right)\\
J^\mathrm{b}_{\phi_1}&=\gamma{\beta^\mathrm{b}}^2\left(\mathbf{a}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{a}_\mathrm{2}^{*}\right)\left(\mathbf{p}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{p}_\mathrm{1}\right)\\
J^\mathrm{b}_{\theta_2}&=\gamma{\beta^\mathrm{b}}^2\left(\mathbf{k}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{k}_\mathrm{2}^{*}\right)\left(\mathbf{a}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right)\\
J^\mathrm{b}_{\phi_2}&=\gamma{\beta^\mathrm{b}}^2\left(\mathbf{p}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{p}_\mathrm{2}^{*}\right)\left(\mathbf{a}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right)\\
J^\mathrm{b}_{\beta^\mathrm{b}}&=\gamma\left(\mathbf{a}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{a}_\mathrm{2}^{*}\right)\left(\mathbf{a}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right),\\
J^\mathrm{b}_{\psi^\mathrm{b}}&=\gamma{\beta^\mathrm{b}}^2\left(\mathbf{a}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{a}_\mathrm{2}^{*}\right)\left(\mathbf{a}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right),\\
J^\mathrm{b}_{\theta_1\phi_1}&=\gamma{\beta^\mathrm{b}}^2\left(\mathbf{a}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{a}_\mathrm{2}^{*}\right)\left(\mathbf{p}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{k}_\mathrm{1}\right),\label{eq:example}\\
J^\mathrm{b}_{\theta_1\theta_2}&=\gamma{\beta^\mathrm{b}}^2\left(\mathbf{k}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{a}_\mathrm{2}^{*}\right)\left(\mathbf{k}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right),\\
J^\mathrm{b}_{\theta_1\phi_2}&=\gamma{\beta^\mathrm{b}}^2\left(\mathbf{p}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{a}_\mathrm{2}^{*}\right)\left(\mathbf{k}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right),\\
J^\mathrm{b}_{\theta_1\beta^\mathrm{b}}&=\gamma{\beta^\mathrm{b}}\left(\mathbf{a}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{a}_\mathrm{2}^{*}\right)\left(\mathbf{k}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right)\\
J^\mathrm{b}_{\phi_1\theta_2}&=\gamma{\beta^\mathrm{b}}^2\left(\mathbf{k}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{a}_\mathrm{2}^{*}\right)\left(\mathbf{p}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right),\\
J^\mathrm{b}_{\phi_1\phi_2}&=\gamma{\beta^\mathrm{b}}^2\left(\mathbf{p}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{a}_\mathrm{2}^{*}\right)\left(\mathbf{p}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right),\\
J^\mathrm{b}_{\phi_1\beta^\mathrm{b}}&={\gamma}{\beta^\mathrm{b}}\left(\mathbf{a}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{a}_\mathrm{2}^{*}\right)\left(\mathbf{p}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right),\\
J^\mathrm{b}_{\theta_2\phi_2}&=\gamma{\beta^\mathrm{b}}^2\left(\mathbf{p}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{k}_\mathrm{2}^{*}\right)\left(\mathbf{a}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right),\\
J^\mathrm{b}_{\theta_2\beta^\mathrm{b}}&={\gamma}{\beta^\mathrm{b}}\left(\mathbf{a}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{k}_\mathrm{2}^{*}\right)\left(\mathbf{a}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right),\\
J^\mathrm{b}_{\phi_2\beta^\mathrm{b}}&={\gamma}{\beta^\mathrm{b}}\left(\mathbf{a}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{p}_2^{*}\right)\left(\mathbf{a}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right),\\
J_{\tau^\mathrm{b}}&=4\gamma{\beta^\mathrm{b}}^2\pi^2W_\mathrm{eff}^2\left(\mathbf{a}_\mathrm{2}^\mathrm{T}\mathbf{F}_\mathrm{2}\mathbf{F}_\mathrm{2}^\mathrm{H}\mathbf{a}_\mathrm{2}^{*}\right)\left(\mathbf{a}_\mathrm{1}^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_\mathrm{1}\right). \label{eq:tau_b_tau_f}
\end{align}
\end{subequations}
\normalsize
where
\begin{align*}
W_\mathrm{eff}^2&=\int_{-W/2}^{W/2}f^2 |G(f)|^2\mathrm{d}f.\notag
\end{align*}
Other entries in $\mathbf{J}_{\boldsymbol{\varphi}}^{\mathrm{b}}$ are zero because
\begin{align}
\int_0^{T_\mathrm{o}}\mathbf{{s}}_2^\mathrm{H}(t-\tau^\mathrm{b})\mathbf{\dot{s}}_2(t-\tau^\mathrm{b})\mathrm{d}t&=0,\\
\int_0^{T_\mathrm{o}}\mathbf{{s}}_2(t-\tau^\mathrm{b})\mathbf{{s}}_2^\mathrm{H}(t-\tau^\mathrm{b})\mathrm{d}t&=N_\mathrm{s}\mathbf{I}_{N_\mathrm{B}}.
\end{align}
In forward transmission, the subscripts $``1"$ and $``2"$ should be interchanged in \eqref{eq:SpatialFIM1} and the superscript $``\mathrm{b}"$ replaced by $``\mathrm{f}"$. For example, from $J^\mathrm{b}_{\theta_1\phi_1}$ in \eqref{eq:example}, we can calculate $J^\mathrm{f}_{\theta_2\phi_2}=\gamma{\beta^\mathrm{f}}^2\left(\mathbf{a}_\mathrm{1}^\mathrm{T}\mathbf{F}_\mathrm{1}\mathbf{F}_\mathrm{1}^\mathrm{H}\mathbf{a}_\mathrm{1}^{*}\right)\left(\mathbf{p}_\mathrm{2}^\mathrm{H}\mathbf{P}_{\mathbf{W}_2}\mathbf{k}_\mathrm{2}\right)$, which goes in row 3, column 4 in the forward-transmission counterpart of \eqref{eq:jthetatheta}.
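The operator $\mathbf{P}_\mathbf{A}=\mathbf{A}\left(\mathbf{A}^\mathrm{H}\mathbf{A}\right)^{-1}\mathbf{A}^\mathrm{H}$ used throughout the FIM entries above is the orthogonal projector onto the column space of $\mathbf{A}$, which can be verified numerically with a short sketch (random complex matrix, illustrative dimensions):

```python
import numpy as np

def projector(a_mat):
    # P_A = A (A^H A)^{-1} A^H, the operator defined above: the
    # orthogonal projector onto the column space of A.
    return a_mat @ np.linalg.inv(a_mat.conj().T @ a_mat) @ a_mat.conj().T

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
P = projector(A)
```

$\mathbf{P}_\mathbf{A}$ is idempotent and Hermitian, and leaves the columns of $\mathbf{A}$ unchanged, which is what makes terms such as $\mathbf{a}_1^\mathrm{H}\mathbf{P}_{\mathbf{W}_1}\mathbf{a}_1$ act as effective receive-beamforming gains.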
\section{Proof of Theorem 1}\label{app:proof_theo}
Define the vector of all unknown parameters as $\mathbf{v}=[\mathbf{x}^\mathrm{T},\mathbf{z}_1^\mathrm{T},\mathbf{z}_2^\mathrm{T}]^\mathrm{T}$, then the FIM of $\mathbf{v}$ based on the first and second observations are, respectively,
\begin{align*}
\mathbf{J}_\mathbf{v}^{(1)}&=\begin{bmatrix}
\mathbf{J}_\mathbf{x}^{(1)}&\mathbf{J}_{\mathbf{x},\mathbf{z}_1}^{(1)}&\mathbf{0}\\
\mathbf{J}_{\mathbf{x},\mathbf{z}_1}^{\mathrm{T}(1)}&\mathbf{J}_{\mathbf{z}_1}^{(1)}&\mathbf{0}\\
\mathbf{0}&\mathbf{0}&\mathbf{0}
\end{bmatrix},\
\mathbf{J}_\mathbf{v}^{(2)}=\begin{bmatrix}
\mathbf{J}_\mathbf{x}^{(2)}&\mathbf{0}&\mathbf{J}_{\mathbf{x},\mathbf{z}_2}^{(2)}\\
\mathbf{0}&\mathbf{0}&\mathbf{0}\\
\mathbf{J}_{\mathbf{x},\mathbf{z}_2}^{\mathrm{T}(2)}&\mathbf{0}&\mathbf{J}_{\mathbf{z}_2}^{(2)}
\end{bmatrix}
\end{align*}
Since the two observations are independent,
\begin{align}
\mathbf{J}_\mathbf{v}&=\begin{bmatrix}
\mathbf{J}_\mathbf{x}^{(1)}+\mathbf{J}_\mathbf{x}^{(2)}&\mathbf{J}_{\mathbf{x},\mathbf{z}_1}^{(1)}&\mathbf{J}_{\mathbf{x},\mathbf{z}_2}^{(2)}\\
\mathbf{J}_{\mathbf{x},\mathbf{z}_1}^{\mathrm{T}(1)}&\mathbf{J}_{\mathbf{z}_1}^{(1)}&\mathbf{0}\\
\mathbf{J}_{\mathbf{x},\mathbf{z}_2}^{\mathrm{T}(2)}&\mathbf{0}&\mathbf{J}_{\mathbf{z}_2}^{(2)}
\end{bmatrix}
\end{align}
Consequently, the EFIM of $\mathbf{x}$ is given by the Schur complement as
\begin{align}
\mathbf{J}_\mathbf{x}^\mathrm{e}=& \mathbf{J}_\mathbf{x}^{(1)}+\mathbf{J}_\mathbf{x}^{(2)}\notag\\&-\mathbf{J}_{\mathbf{x},\mathbf{z}_1}^{(1)}\left(\mathbf{J}_{\mathbf{z}_1}^{(1)}\right)^{-1}\mathbf{J}_{\mathbf{x},\mathbf{z}_1}^{\mathrm{T}(1)}-\mathbf{J}_{\mathbf{x},\mathbf{z}_2}^{(2)}\left(\mathbf{J}_{\mathbf{z}_2}^{(2)}\right)^{-1}\mathbf{J}_{\mathbf{x},\mathbf{z}_2}^{\mathrm{T}(2)}\label{eq:proof}
\end{align}
Note that the first and third terms in \eqref{eq:proof} represent the Schur complement of $\mathbf{x}$ with respect to $\mathbf{z}_1$ obtained from the first observation, while the second and fourth terms represent the Schur complement of $\mathbf{x}$ with respect to $\mathbf{z}_2$ obtained from the second observation. In other words,
\begin{align}
\mathbf{J}^\mathrm{e}_\mathbf{x}=\mathbf{J}^\mathrm{e,1}_\mathbf{x}+\mathbf{J}^\mathrm{e,2}_\mathbf{x}.
\end{align}
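The additivity of the two per-observation EFIMs can be checked numerically. The sketch below builds random positive semidefinite block FIMs with the structure used in the proof (the $\mathbf{z}_1$-$\mathbf{z}_2$ cross block is zero, since each nuisance vector appears in only one observation) and compares the Schur complement of the joint nuisance block with the sum of the individual Schur complements; all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_psd_blocks(dx=2, dz=2):
    # Random PSD FIM with blocks [[J_x, C], [C^T, J_z]] for one observation.
    g = rng.standard_normal((dx + dz + 2, dx + dz))
    b = g.T @ g
    return b[:dx, :dx], b[:dx, dx:], b[dx:, dx:]

jx1, c1, jz1 = random_psd_blocks()
jx2, c2, jz2 = random_psd_blocks()

# Joint FIM J_v for v = [x, z1, z2]; z1 and z2 never couple directly.
jv = np.block([[jx1 + jx2, c1, c2],
               [c1.T, jz1, np.zeros((2, 2))],
               [c2.T, np.zeros((2, 2)), jz2]])

# EFIM of x via the Schur complement of the joint nuisance block...
efim_joint = jv[:2, :2] - jv[:2, 2:] @ np.linalg.inv(jv[2:, 2:]) @ jv[2:, :2]
# ...equals the sum of the per-observation EFIMs, as in the proof.
efim_sum = (jx1 - c1 @ np.linalg.inv(jz1) @ c1.T
            + jx2 - c2 @ np.linalg.inv(jz2) @ c2.T)
```

The equality holds exactly because the block-diagonal nuisance FIM inverts block by block, which is the mechanism behind $\mathbf{J}^\mathrm{e}_\mathbf{x}=\mathbf{J}^\mathrm{e,1}_\mathbf{x}+\mathbf{J}^\mathrm{e,2}_\mathbf{x}$.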
\bibliographystyle{IEEEtran}
\section{Data}
The organizers specially collected and annotated the One-Minute Gradual-Emotional Behavior dataset (OMG-Emotion dataset) for the challenge. The dataset is composed of YouTube videos chosen through keywords relating to long-term emotional behaviors, such as ``monologues'', ``auditions'', ``dialogues'' and ``emotional scenes''. An annotator has to watch a whole video in sequence, so that contextual information is taken into consideration before annotating the arousal and valence of each utterance. The dataset provided by the organizers contains a train split of 231 videos composed of 2442 utterances and a validation split of 60 videos composed of 617 utterances. For each utterance, the gold arousal and valence levels are given.
\section{Architecture}
Because context is taken into account during annotation, we propose a context-dependent architecture \cite{poria2017context} in which the arousal and valence of an utterance are predicted according to the surrounding context. Our model consists of three successive stages:
\begin{itemize}
\item A context-independent unimodal stage to extract linguistic, visual and acoustic features per utterance
\item A context-dependent unimodal stage to extract linguistic, visual and acoustic features per video
\item A context-dependent multimodal stage to make a final prediction per video
\end{itemize}
\subsection{Context-independent Unimodal stage}
Firstly, the unimodal features are extracted from each utterance separately. We use the mean square error as loss function:
$$\mathcal{L}_{mse} = \frac{1}{N} \lVert \vx - \vy \rVert_2^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - y_i)^2$$
where $N$ is the number of utterances predicted, $\vx$ is the prediction vector for arousal or valence and $\vy$ is the ground-truth vector.
Below, we explain the linguistic, visual and acoustic feature extraction methods.
\subsubsection{Convolutional Neural Networks for Sentences}
For each utterance, a transcription is given as a written sentence. We train a simple CNN with one layer of convolution \cite{kim2014convolutional} on top of word vectors obtained from an unsupervised neural language model \cite{mikolov2013distributed}. More precisely, we represent an utterance (here, a sentence) as a sequence of concatenated $k$-dimensional word2vec vectors. Each sentence is padded to a window of 50 words, which serves as the input to the CNN. Our model has one convolutional layer with three kernels of size 3, 4 and 2 and 30, 30 and 60 feature maps respectively. We then apply a max-over-time pooling operation over each feature map and capture the most important feature, the one with the highest value, for each feature map. Each convolution and max-pooling operation is interleaved with a ReLU activation function. Finally, a fully connected layer $\text{FC}_{out}$ of size $[120 \rightarrow 2]$ predicts both the arousal and valence of the utterance. We extract the 120-dimensional features of an utterance before the $\text{FC}_{out}$ operation.
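The convolution-plus-max-over-time step can be sketched in plain NumPy (random, untrained kernels; the word2vec dimension $k$ is set to 300 here as an assumption):

```python
import numpy as np

def conv1d_valid(x, kernel):
    """1-D 'valid' convolution of a (T, k) word-vector sequence with a
    (w, k) kernel, returning a feature map of length T - w + 1."""
    T, k = x.shape
    w = kernel.shape[0]
    return np.array([np.sum(x[t:t + w] * kernel) for t in range(T - w + 1)])

rng = np.random.default_rng(0)
T, k = 50, 300                      # 50-word window; word2vec dimension (assumed)
sentence = rng.normal(size=(T, k))  # stand-in for concatenated word2vec vectors

# kernel widths 3, 4 and 2 with 30, 30 and 60 feature maps, as in the text
features = []
for width, n_maps in [(3, 30), (4, 30), (2, 60)]:
    for _ in range(n_maps):
        kernel = rng.normal(size=(width, k))
        fmap = np.maximum(conv1d_valid(sentence, kernel), 0.0)  # ReLU
        features.append(fmap.max())  # max-over-time: keep the strongest response
utterance_vec = np.array(features)
print(utterance_vec.shape)          # the 120-dimensional utterance feature
```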
\subsubsection{3D-CNN for visual input}
In this section, we explain how we extract features from each utterance's video with a 3D-CNN \cite{ji20133d}. A video is a sequence of frames of size $W\times H \times 3$. The 3D convolution is achieved by convolving a 3D kernel with the cube formed by stacking multiple successive video frames together. By this construction, each feature map in the convolutional layer is connected to multiple frames in the previous layer and is therefore able to capture temporal information. In our experiments, we sample 32 equally inter-spaced frames of size $32 \times 32$ per video, so that each video in the dataset $\in \mathbb{R}^{32 \times 32 \times 32 \times 3}$. Our CNN consists of 2 convolutional layers with 32 filters of size $5 \times 5 \times 5$. The two layers are followed by max-pooling layers of size $4 \times 4 \times 4$ and $3 \times 3 \times 3$ respectively. Afterwards, two fully connected layers $\text{FC}_{out_1}$ $[864 \rightarrow 128]$ and $\text{FC}_{out_2}$ $[128 \rightarrow 2]$ map the CNN outputs to predicted arousal and valence levels. We extract the 128-dimensional features of an utterance before the $\text{FC}_{out_2}$ operation.
\subsubsection{OpenSmile for audio input}
For every utterance's video, we sample a Waveform Audio file at 16\,kHz and use OpenSmile \cite{eyben2010opensmile} to extract 6373 features from the IS13-ComParE configuration file. To reduce this number, we only select the $k$ best features based on univariate statistical regression tests where the arousal and valence levels are the targets. We pick $k=80$ for both the arousal and valence tests and merge the feature indices together, ending up with 121 unique features per utterance.
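The $k$-best selection can be sketched with a plain univariate correlation test (a stand-in for the statistical regression tests; feature count reduced for brevity, and the two predictive features are synthetic):

```python
import numpy as np

def k_best_indices(X, y, k):
    """Rank features by squared Pearson correlation with the target
    (a univariate regression test) and return the indices of the top k."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12)
    return set(np.argsort(r ** 2)[-k:].tolist())

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))                  # stand-in for 6373 OpenSmile features
arousal = X[:, 0] + 0.1 * rng.normal(size=200)   # feature 0 predicts arousal
valence = X[:, 1] + 0.1 * rng.normal(size=200)   # feature 1 predicts valence

# k = 80 per target; merging the two index sets keeps at most 160 features
selected = sorted(k_best_indices(X, arousal, 80) | k_best_indices(X, valence, 80))
X_reduced = X[:, selected]
print(0 in selected and 1 in selected)  # the truly predictive features survive
```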
\subsection{Context-dependent Unimodal stage}
In this section, we stack the utterances video-wise for each modality. Let us consider a modality $m$ with utterance feature size $k$: a video $\mV_i$ is the sequence of utterance vectors $(\vx_1, \vx_2, \hdots, \vx_T)_i$ where $\vx_j \in \mathbb{R}^k$ and $T$ is the number of utterances in $\mV_i$. We now have a set of modality videos $\mathcal{V}_m = (\mV_1, \mV_2, \hdots, \mV_N)_m$ where $N$ is the number of videos in the dataset.\\
In previous similar work \cite{poria2017context}, the video matrix $\mV_i$ was the input of a bi-directional LSTM network that captures the previous and following context. We argue that, especially if the video has many utterances, the context captured this way might be incomplete or inaccurate for a specific utterance. We tackle this problem by using self-attention (sometimes called intra-attention). This attention mechanism relates different positions of a single sequence in order to compute a representation of the sequence, and has been successfully used in a variety of tasks \cite{parikh2016decomposable,lin2017structured,vaswani2017attention}. More specifically, we use the ``transformer'' encoder with multi-head self-attention to compute our context-dependent unimodal video features.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.40]{dddc.jpeg}
\caption{Overview of the Context-dependent Unimodal stage. Each utterance's arousal and valence levels are predicted in the context of the whole video}
\end{figure}
\subsubsection{Transformer encoder}
The encoder is composed of a stack of $N$ identical blocks. Each block has two layers. The first layer is a multi-head self-attention mechanism, and the second is a fully connected feed-forward network. Each layer is followed by a normalization layer and employs a residual connection. The output of each layer can thus be written as
$$\text{LayerNorm}(x + \text{layer}(x))$$
where $\text{layer}(x)$ is the function implemented by the layer itself (multi-head attention or feed-forward).
\subsubsection{Multi-Head attention}
Let $d_k$ be the dimension of the queries and keys and $d_v$ the dimension of the values. The attention function computes the dot products of the query with all keys, divides each by $\sqrt[]{d_k}$, and applies a softmax function to obtain the weights on the values:
$$\text{Attention}(Q, K, V) = \text{softmax}(\frac{QK^T}{\sqrt[]{d_k}})V$$
The authors of \cite{vaswani2017attention} found it beneficial to linearly project the queries, keys and values $h$ times with different learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions. The output of the multi-head attention is the concatenation of the $h$ resulting $d_v$-dimensional values. \\
We pick $d_k = d_v = 64, h = 8, N = 2$.
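A NumPy sketch of one encoder sub-layer with these settings (random, untrained projections; $T$ utterances with an assumed model dimension of 128):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
T, d_model, h, d_k = 10, 128, 8, 64   # T utterances in one video (d_model assumed)
x = rng.normal(size=(T, d_model))

# multi-head self-attention: h independent projections to d_k = d_v = 64
heads = []
for _ in range(h):
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) / np.sqrt(d_model) for _ in range(3))
    heads.append(attention(x @ Wq, x @ Wk, x @ Wv))
Wo = rng.normal(size=(h * d_k, d_model)) / np.sqrt(h * d_k)
mha = np.concatenate(heads, axis=-1) @ Wo

# residual connection + layer normalization: LayerNorm(x + layer(x))
out = layer_norm(x + mha)
print(out.shape)  # (10, 128)
```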
\subsubsection{Dense output layer}
The output of each utterance's transformer goes through a last fully connected layer $\text{FC}_{out}$ of size $[m \rightarrow 2]$ to predict both arousal and valence levels. Because we make our predictions per video, we propose to include the concordance correlation coefficient in our loss function. We define $\mathcal{L}_\text{ccc} = 1-p_c$ where
$$p_c = \frac{2\sigma_{xy}}{\sigma^2_x + \sigma^2_y + (\mu_x - \mu_y)^2}$$
with $\sigma_{xy}$ the covariance between predictions and ground truth.
We now want to minimize
$$\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{mse}}+0.25 \times \mathcal{L}_{\text{ccc}}$$
for both the arousal and valence values. In addition to leading to better results, we found this loss to make the model more stable across evaluations and more reproducible between runs.
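The combined loss follows directly from the two definitions above; a minimal sketch (the 0.25 weight is the one stated in the text):

```python
import numpy as np

def ccc_loss(x, y):
    """L_ccc = 1 - p_c, with p_c = 2*sigma_xy / (var_x + var_y + (mu_x - mu_y)^2)."""
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()   # covariance sigma_xy
    return 1.0 - 2.0 * sxy / (x.var() + y.var() + (mx - my) ** 2)

def total_loss(pred, gold):
    """L_total = L_mse + 0.25 * L_ccc."""
    return ((pred - gold) ** 2).mean() + 0.25 * ccc_loss(pred, gold)

gold = np.array([0.10, -0.30, 0.50, 0.20])
print(total_loss(gold, gold))             # perfect agreement gives zero loss
print(total_loss(gold + 0.4, gold) > 0)   # a constant bias is penalized by both terms
```

Note that unlike the plain MSE, the CCC term also penalizes predictions that collapse to a constant, which matches the per-video evaluation metric.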
\section{Context-dependent Multimodal stage}
This section is similar to the previous one, except that we now have a single set of videos $\mathcal{V} = (\mV_1, \mV_2, \hdots, \mV_N)$ where each video $\mV_i$ is composed of multimodal utterances $\vx_j = (\vx_{\text{linguistic}}, \vx_{\text{visual}}, \vx_{\text{audio}})$. In our experiments, we tried two types of fusion.
\begin{enumerate}[label=\textbf{\Alph*}]
\item \textbf{Concatenation} \\
We simply concatenate each modality utterance-wise. The utterance $\vx_j$ can be rewritten $\vx_j = \vx_{\text{linguistic}} \oplus \vx_{\text{visual}} \oplus \vx_{\text{audio}}$ where $ \oplus$ denotes concatenation.
\item \textbf{Multimodal Compact Bilinear Pooling} \\
We would like each feature of each modality to interact with the others. We would learn a model $\mW$ (here linear), i.e. $\vc = \mW [ [ \vx \otimes \vy ] \otimes \vz]$ where $\otimes$ is the outer-product operation and $[\,]$ denotes linearizing the matrix into a vector. In our experiments, the modality feature sizes are 120, 128 and 121. If we want $\vc \in \mathbb{R}^{512}$, $\mW$ would have 951 million parameters. A multimodal compact bilinear pooling model \cite{fukui2016multimodal} can be learned by relying on the Count Sketch projection function \cite{charikar2002finding} to project the outer product to a lower-dimensional space, which reduces the number of parameters in $\mW$.
\end{enumerate}
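A minimal sketch of Count-Sketch-based compact bilinear pooling for the three modality sizes above, using the property that the count sketch of an outer product equals the circular convolution of the individual sketches (random projections only; no learned $\mW$):

```python
import numpy as np

def count_sketch(x, h, s, d):
    """Project x to d dimensions: psi(x)[h[i]] += s[i] * x[i]."""
    out = np.zeros(d)
    np.add.at(out, h, s * x)
    return out

def mcb(vectors, d=512, seed=0):
    """Compact bilinear pooling of several modality vectors. The count
    sketch of their outer product is the circular convolution of the
    individual sketches, computed here in the Fourier domain."""
    rng = np.random.default_rng(seed)
    prod = np.ones(d, dtype=complex)
    for x in vectors:
        n = x.shape[0]
        h = rng.integers(0, d, size=n)          # random bucket per input index
        s = rng.choice([-1.0, 1.0], size=n)     # random sign per input index
        prod *= np.fft.fft(count_sketch(x, h, s, d))
    return np.fft.ifft(prod).real

rng = np.random.default_rng(1)
text, video, audio = rng.normal(size=120), rng.normal(size=128), rng.normal(size=121)
c = mcb([text, video, audio])
print(c.shape)  # (512,) — instead of learning a ~951-million-parameter W
```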
\section{Results}
We report our preliminary results in terms of the concordance correlation coefficient metric. \\
\begin{tabular}{lc}
\multicolumn{1}{c}{\bf Model} &\multicolumn{1}{c}{\bf Mean CCC}
\\ \hline \\
\textbf{Monomodal feature extraction} \\
Text - CNN & 0.165\\
Audio - OpenSmile & 0.150\\
Video - 3DCNN & 0.186\\
\textbf{Contextual monomodal} \\
Text & 0.220\\
Audio & 0.223\\
Video & 0.227\\
\textbf{Contextual multimodal} \\
T + A + V & 0.274\\
T + A + V + CBP & 0.301
\end{tabular}
\bibliographystyle{acl}
\section{Introduction}
\begin{figure}[htb!]
\centering
\includegraphics[width=.46\textwidth]{Fig1.pdf}
\caption{
\label{fig1}
(a) Schematic principle of the originally proposed Dynamical Casimir effect.
The system includes two perfectly conducting plates which allow for a discrete spectrum of modes which may be populated with photons.
Changing the distance of plates non-adiabatically will result in an energetic shift of the light modes (black arrows).
If the frequency of the modulation (green arrow) is resonant to a certain mode, a pair of virtual photons is turned into real photons (green circles).
(b) Our analogue setup:
By manipulating the energy difference of the Zeeman levels, we can excite spins out of our spinor Bose-Einstein condensate into initially empty modes.
}
\end{figure}
The quantized vacuum contains pairs of virtual quanta in all available modes of the physical system.
The static Casimir effect \cite{Casimir1948} is a measurable consequence of this process.
In the original setting, two mirrors experience an attractive interaction due to the reduced number of electromagnetic modes in the volume between them.
First attempted in 1956 \cite{Lifshitz1956}, the Casimir force was precisely measured between a plane and a sphere in the late 1990s \cite{Lamoreaux1997, Mohideen1998} and between parallel plates in 2002 \cite{Bressi2002}.
If the mode density is not varied in space but in time, for example by a modulation of the mirror's distance \cite{Moore1970} or by changing the refractive index \cite{Yablonovitch1989}, the virtual quanta can be turned into macroscopic numbers of real excitations (see Fig.~\ref{fig1}a).
This so-called Dynamical Casimir effect (DCE) \cite{Schwinger1992} has been observed in the microwave regime \cite{Wilson2011,Laehteenmaeki2013}.
The excitations are created by nonadiabatic changes of the boundary conditions \cite{Dodonov2010,Dalvit2011}.
Moreover, this process creates pairs, which leads to the generation of entangled many-particle states out of the vacuum.
Bose-Einstein condensates (BECs) of dilute gases offer the possibility to study this effect either by modulating the magnetic field \cite{Saito2008} or the atomic scattering length \cite{Carusotto2010}.
In this case, the atomic BEC acts as a background field out of which atoms can be transferred to different, initially unoccupied modes.
These unpopulated modes represent the empty electromagnetic modes of the original setting.
These modes can be excited with either spatial (scalar Bogoliubov modes) or spin excitations.
The external modulation must now alter the energy of these unpopulated modes, which can be realized by a modulation of the external trapping potential or the magnetic field.
Both schemes have been realized experimentally by driving spatial excitations~\cite{Jaskula2012} or spin excitations~\cite{Hoang2016}.
In Ref.~\cite{Jaskula2012}, the effect has been demonstrated for the first time, but the pure quantum character could not be proved.
While Ref.~\cite{Hoang2016} demonstrates the suppression of thermal excitations and a squeezing of spin-nematic observables, a proof that the DCE creates entanglement is missing.
In this article, we demonstrate the excitation of both spin and spatial degrees of freedom in a spinor BEC by the DCE.
We prove that the spin excitations are created in the form of entangled pairs by a violation of a continuous-variable entanglement criterion~\cite{Simon2000,Duan2000a}.
The experimental data is supported by a theoretical description of the system in terms of a numerical Bogoliubov analysis.
\section{Theoretical description}
\label{theory}
We consider an $F=1$ spinor BEC initially prepared in $m=0$. The system is described by the bosonic operators $\hat\psi_{m}(\vec r)$ that annihilate atoms with spin $m$ at position $\vec r$.
We use Bogoliubov's approximation $\hat\psi_0(\vec r)=\psi_0(\vec r)+\delta{\hat\psi_0(\vec r)}$, where $\psi_0(\vec r)$ is the mean field and ${\delta\hat\psi_0(\vec r)}$ denotes the scalar fluctuations of the $m=0$ condensate.
Up to second order in the fluctuations, the spin fluctuations ${\hat\psi_{\pm 1}({\vec r})}$ decouple from the scalar fluctuations and are governed by the Hamiltonian:
\begin{eqnarray}
\hat H_1&=&\sum_{m=\pm1 }\int d^3 r\, \hat \psi_m^\dag (\vec r) \left ( \hat H_{\rm eff}+q \right ) {\hat\psi}_m(\vec r) \nonumber \\
&+& U_1\int d^3r\, n_0(\vec r)\left (\hat\psi_1^\dag(\vec r) \hat\psi_{-1}^\dag(\vec r) + \mathrm {H.c.} \right )
\label{eq:H1}
\end{eqnarray}
where $\hat H_{\rm eff}=-{\hbar^2\nabla^2}/{2M}+V(\vec r)+(U_0+U_1)n_0({\vec r})-\mu$, with $M$ the atomic mass, $n_0(\vec r)=|\psi_0(\vec r)|^2$,
$V({\vec r})$ the external trapping potential, $\mu$ the chemical potential, $U_0=(g_0+2g_2)/3$, $U_1=(g_2-g_0)/3$, and $g_F=4\pi{\hbar^2 a_F}/{M}$, with $a_F$ the scattering-length for the
collisional channel with total spin $F$. In Eq.~(\ref{eq:H1}), $q$ denotes the quadratic Zeeman energy~(QZE) term.
This energy may be externally modified using microwave fields.
In particular, $q$ may be modified in time, which is a key feature in the following (see Fig.~\ref{fig1}b).
\subsection{Homogeneous case}
The connection between the spin dynamics in the presence of a time-dependent QZE and the DCE becomes particularly evident
when considering a homogeneous BEC, i.e. $V({\vec r})=0$. In that case, $n_0({\vec r})=n$ is a constant and $\mu=U_0n$.
With this, Eq.~(\ref{eq:H1}) may be rewritten in momentum ($k$) space:
\begin{eqnarray}
\hat H_1&=&\sum_k \bigg[ \sum_{m=\pm 1}\left ( \frac{\hbar^2k^2}{2M}+nU_1+q \right ) \hat a_{k,m}^\dag \hat a_{k,m} \nonumber \\
&+& U_1{n}\left ( \hat a_{k,1}^\dag {\hat a_{-k,-1}^\dag} + \mathrm{H.c.} \right ) \bigg],
\label{eq:H1k}
\end{eqnarray}
where $\hat a_{k,m}$ denotes the bosonic operator for particles with spin $m$ and momentum $k$.
Employing the operators ${\hat a_{k,\pm}}=(\hat a_{{k,}1}\pm\hat a_{{k,}-1})/\sqrt{2}$, we introduce the Bogoliubov transformation $\hat a_{k,\pm}=\cosh \alpha_{k,\pm} \hat b_{k,\pm} + \sinh \alpha_{k,\pm}\hat b_{-k,\pm}^\dag$, with {$\sinh 2\alpha_{k,\pm}=\mp U_1 n/\xi(k)$}, where $\xi(k)=\sqrt{\left (\hbar^2k^2/2M+q\right )\left (\hbar^2k^2/2M+q + 2 U_1n\right )}$ is the Bogoliubov spectrum of spin excitations. Using this transformation,
$\hat H_1=\sum_k \xi(k) \left (\hat b_{k,+}^\dag \hat b_{k,+} + \hat b_{k,-}^\dag \hat b_{k,-} \right )$.
For the case of a sudden quench of the QZE from an initial value $q_i$ to a final one $q_f$, we introduce the Bogoliubov modes
$\hat a_{k,\pm}=\cosh \alpha_{k,\pm} \hat b_{k,\pm} + \sinh \alpha_{k,\pm}\hat b_{-k,\pm}^\dag$, evaluated for $q_i$, and $\hat a_{k,\pm}=\cosh \tilde\alpha_{k,\pm} \hat c_{k,\pm} + \sinh \tilde\alpha_{k,\pm}\hat c_{-k,\pm}^\dag$, evaluated for $q_f$.
The initial Bogoliubov modes fulfill the vacuum statistics $\langle \hat b_{k,\pm}^\dag \hat b_{k,\pm}\rangle =0$.
When quenching $q$, the initial Bogoliubov modes project into the new ones: ${\hat c_{k,\pm}}=\cosh\Delta\alpha_{k,\pm} {\hat b_{k,\pm}} - \sinh\Delta\alpha_{k,\pm} {\hat b_{-k,\pm}^\dag}$, with $\Delta\alpha_{k,\pm}=\tilde\alpha_{k,\pm}-\alpha_{k,\pm}$.
As a result, as for the traditional Casimir effect, the quench of the QZE results right after the quench in non-zero occupations of the new Bogoliubov modes, $\langle \hat c_{k,\pm}^\dag (0) \hat c_{k,\pm}(0) \rangle= \sinh^2\Delta\alpha_{k,\pm}$.
In addition, $\langle {\hat c_{k,\pm}(0) \hat c_{-k,\pm}(0)}\rangle = -\frac{1}{2}\sinh 2\Delta\alpha_{k,\pm}$.
After the quench, the evolution of the Bogoliubov modes is trivial $\hat c_{k,\pm}(t)=e^{-i\xi_f(k)t/\hbar} \hat c_{k,\pm}(0)$, where $\xi_f(k)$ is calculated for $q_f$.
This occupation of the spin Bogoliubov modes results in the creation of particles in $\ket{F,m}$=$\ket{1,\pm1}$. Indeed, using the
relation between $\hat a_{k,\pm}$ and $\hat c_{k,\pm}$, we may obtain the population in $\ket{1,1}$, $n_{k,1}(t)=\langle \hat a_{k,1}^\dag \hat a_{k,1} \rangle$, which is
at any time equal to that in $\ket{1,-1}$:
\begin{eqnarray}
n_{k,1}(t) &=&\cosh^2\tilde\alpha_{k,\pm}\sinh^2\Delta\alpha_{k,\pm}\nonumber\\
&+&\sinh^2\tilde\alpha_{k,\pm}\cosh^2\Delta\alpha_{k,\pm} \nonumber \\
&-&\frac{1}{2}\cos\left ( 2 \xi(k)t/\hbar \right ) \sinh2\tilde\alpha_{k,\pm}\sinh 2\Delta\alpha_{k,\pm}.
\label{eq:quench1}
\end{eqnarray}
Assuming a large $q_i\gg nU_1$, we may approximate $\alpha_{k,\pm}\simeq 0$, and hence
\begin{equation}
n_{k, 1}(t)=\left ( 1-\cos \left (\frac{2\xi_f(k)t}{\hbar}\right ) \right ) \left ( \frac{U_1n}{\xi_f(k)}\right )^2.
\label{eq:quench2}
\end{equation}
This creation of particles in the levels $\ket{1,\pm 1}$ constitutes the spin analogue of the recently reported Sakharov oscillations observed in scalar BECs when quenching the interactions \cite{Hung2013}.
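Eq.~(\ref{eq:quench2}) is straightforward to evaluate numerically. A sketch in dimensionless units ($\hbar = M = 1$; the values of $q_f$ and $U_1 n$ are illustrative, with $U_1 < 0$ assumed as for ferromagnetic interactions):

```python
import numpy as np

def xi(k, q, U1n, hbar=1.0, M=1.0):
    """Bogoliubov spectrum of spin excitations (dimensionless units)."""
    ek = (hbar * k) ** 2 / (2 * M)
    return np.sqrt((ek + q) * (ek + q + 2 * U1n))

def n_k1(t, k, qf, U1n, hbar=1.0):
    """Population of |1,+1> after a sudden quench to q_f, Eq. (quench2),
    valid in the large-q_i limit where alpha_{k,+-} ~ 0."""
    xf = xi(k, qf, U1n, hbar)
    return (1 - np.cos(2 * xf * t / hbar)) * (U1n / xf) ** 2

t = np.linspace(0, 20, 200)
n = n_k1(t, k=0.5, qf=0.2, U1n=-0.1)  # illustrative parameters, stable branch
print(n[0])  # no population right at the quench; oscillations develop afterwards
```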
On the other hand, if $q(t)$ is periodically modulated in time, the resulting spinor dynamics resembles the DCE.
The Heisenberg equations for the Bogoliubov modes are of the form:
\begin{equation}
\frac{d}{dt}\hat b_{k,\pm}{(t)}=-\mathrm{i} f_k \hat b_{k,\pm}{(t)} \pm g_k{(t)} \hat b_{-k,\pm}^\dag{(t)},
\end{equation}
where $f_k\equiv {\xi(k)}/{\hbar}$ and $g_k{(t)}=- {\dot q(t)} {U_1n}/{2\xi(k)^2}$.
We introduce the expected values
$P_{k{,\pm}}{(t)}\equiv \langle \hat b_{k,\pm}^\dag {(t)}\hat b_{k,\pm} {(t)}\rangle$,
$S_{k{,\pm}}{(t)}\equiv \pm \langle \hat b_{k,\pm}^\dag{(t)} \hat b_{-k,\pm}^\dag{(t)} \rangle + \mathrm{H.c.}$, and
$A_{k{,\pm}}{(t)}\equiv \pm {\mathrm i} \left ( \langle \hat b_{k,\pm}^\dag{(t)} \hat b_{-k,\pm}^\dag {(t)}\rangle - \mathrm{H.c.} \right )$.
{These two sets of equations can be summarized into one set by defining $P_k=P_{k,\pm}$, $S_k=\pm S_{k,\pm}$ and $A_k=\pm A_{k,\pm}$.}
The dynamics of these expected values is given by the equations:
\begin{eqnarray}
\dot P_{k}{(t)}&=&g_k{(t)} S_{k}{(t)}, \\
\dot S_{k}{(t)}&=&4g_k{(t)} P_{k}{(t)}+2g_k{(t)}+2f_k A_{k}{(t)}, \\
\dot A_{k}{(t)}&=&-2f_k S_{k}{(t)}.
\end{eqnarray}
The population in $m=\pm 1$ is given by:
\begin{eqnarray}
n_{k,\pm 1}(t)&=&\frac{\hbar^2k^2/2M+q{(t)}+nU_1}{\xi(k)}\left (P_{k}(t)+\frac{1}{2}\right )\nonumber\\
&-&\frac{1}{2}-\frac{U_1n}{\xi(k)}S_{k}(t),
\end{eqnarray}
which generalizes Eqs.~(\ref{eq:quench1}) and (\ref{eq:quench2}). Hence, as for the quench, the time-dependent QZE results in a DCE, where the number of particles may be significantly enhanced by employing a periodically modulated $q(t)$ whose frequency resonantly matches twice the energy of a Bogoliubov mode.
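The equations of motion for $P_k$, $S_k$ and $A_k$ can be integrated directly. A forward-Euler sketch in dimensionless units (illustrative parameters, $\hbar = 1$), showing the parametric enhancement when the drive frequency matches twice the Bogoliubov energy:

```python
import numpy as np

def evolve(xi_k, U1n, q_amp, omega, t_end, dt=1e-3, hbar=1.0):
    """Integrate dP/dt = g S, dS/dt = 4 g P + 2 g + 2 f A, dA/dt = -2 f S,
    with q(t) = q0 + q_amp*sin(omega t), hence g(t) = -qdot U1 n / (2 xi^2)."""
    f = xi_k / hbar
    P = S = A = 0.0                      # Bogoliubov vacuum at t = 0
    for i in range(int(t_end / dt)):
        t = i * dt
        g = -(q_amp * omega * np.cos(omega * t)) * U1n / (2 * xi_k ** 2)
        dP = g * S
        dS = 4 * g * P + 2 * g + 2 * f * A
        dA = -2 * f * S
        P, S, A = P + dP * dt, S + dS * dt, A + dA * dt
    return P, S, A

xi_k, U1n = 1.0, -0.1                    # illustrative dimensionless values
# drive at twice the Bogoliubov frequency (resonant) vs. far off resonance
P_res, _, _ = evolve(xi_k, U1n, q_amp=0.2, omega=2 * xi_k, t_end=50.0)
P_off, _, _ = evolve(xi_k, U1n, q_amp=0.2, omega=0.5 * xi_k, t_end=50.0)
print(P_res > P_off)  # resonant modulation amplifies the mode occupation
```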
\subsection{Trapped case}
\label{trappedcase}
The analysis of the experimental realization of the Casimir effect demands a careful consideration of the trapping potential.
In order to determine $\hat H_{\rm eff}$, we first obtain the initial density profile $n_0(\vec r)$ from the corresponding scalar
Gross-Pitaevskii equation:
\begin{equation}
\mu \psi_0(\vec r)=\left [ -\frac{\hbar^2\nabla^2}{2M}+V(\vec r)+U_0 n_0(\vec r) \right ] \psi_0(\vec r).
\end{equation}
We then evaluate the eigenfunctions $\varphi_j({\vec r})$ of $\hat H_{\rm eff}$, such that $\hat H_{\rm eff} \varphi_j({\vec r}) = E_j \varphi_j({\vec r})$.
Expressing $\hat \psi_m({\vec r})=\sum_j \varphi_j({\vec r}) \hat a_{j,m}$, we may re-express:
\begin{eqnarray}
\hat H_1&=&\sum_{j,m=\pm 1} (E_j+q) \hat a_{j,m}^\dag \hat a_{j,m} \nonumber\\
&+&U_1\sum_{i,j}\chi_{i,j}\left ( {\hat a_{i,1}^\dag \hat a_{j,-1}^\dag }+ \mathrm{H.c.} \right ), \label{eq:final_H}
\end{eqnarray}
with $\chi_{i,j}=\int d^3 r\, n_0(\vec r)\varphi_i(\vec r)\varphi_j(\vec r)$.
For a sufficiently tight confinement, we may assume $\chi_{i,j\neq i}\ll \chi_{i,i}$, and $U_1\chi_{i,j\neq i}\ll |E_i-E_j|$~(this is indeed the case for our experimental parameters).
In that case, $\hat H_1\simeq \sum_j \hat h_j$, with $\hat h_j = (E_j+{q})\sum_{m=\pm 1}\hat a_{j,m}^\dag \hat a_{j,m} + U_1 \chi_{jj} \left ( {\hat a_{j,1}^\dag\hat a_{j,-1}^\dag} + \mathrm{H.c.} \right )$.
{We may then introduce the Bogoliubov transformation $\hat a_{j,\pm 1}=\cosh\alpha_{j,\pm} \hat b_{j,\pm}+\sinh\alpha_{j,\pm} \hat b_{j,\mp}^\dag$, with $\sinh 2\alpha_{j,\pm}=\mp U_1 \chi_{jj} / \xi_j$, where $\xi_j=\sqrt{(E_j+q)^2-(U_1\chi_{jj})^2}$ are the corresponding Bogoliubov energies.
Then, apart from constant terms, $\hat h_j=\xi_j(\hat b_{j,+}^\dag \hat b_{j,+}+\hat b_{j,-}^\dag \hat b_{j,-})$.}
{A maximal transfer rate to the excited spin states is thus reached for a large and imaginary Bogoliubov energy $\xi_j$, which is obtained for specific resonance conditions for $q$, where }
\begin{equation}
q_j=-E_j.
\label{eq:resonancecondition}
\end{equation}
We may proceed at this point as for the free-space case, obtaining the equations for the dynamics of the Bogoliubov modes:
$\frac{d}{dt}\hat b_{j,\pm}{(t)}= \mathrm{i} f_j\hat b_{j,\pm}{(t)}+g_j{(t)}\hat b_{j,\mp}^\dag{(t)}$, with $f_j=\xi_j/\hbar$, and $g_j{(t)}=\dot q{(t)} U_1 \chi_{jj}/2\xi_j{(t)}$.
We introduce $P_{j}{(t)}\equiv\langle\hat b_{j,\pm}^\dag{(t)}\hat b_{j,\pm} {(t)}\rangle$, $S_{j}{(t)}\equiv\langle\hat b_{j,+}^\dag{(t)}\hat b_{j,-}^\dag{(t)}\rangle +\mathrm {c.c.}$, and $A_{j,+}{(t)}\equiv \mathrm{i} \left ( \langle\hat b_{j,+}^\dag{(t)}\hat b_{j,-}^\dag{(t)}\rangle -\mathrm {c.c.} \right )$.
Thus, the results for the trapped case resemble the free-space results by replacing the momentum states by eigenstates of the effective potential.
\section{Experimental observation}
\begin{figure}[htb!]
\centering
\defr{r}
\includegraphics[width=.46\textwidth]{Fig2.pdf}
\caption{The relative number of atoms in the $m=\pm1$ levels is shown as function of the modulation frequency $f$ of the quadratic Zeeman energy.
(a) Experimental results with one clearly visible resonance corresponding to the ground mode.
The full orange line is a Gaussian fit, yielding a resonance frequency of $\unit[145]{Hz}$.
The maximum transfer corresponds to an excitation creation rate of $\Omega/2\pi=\unit[1.06]{Hz}$.
(b) The theoretical calculations show one clearly visible resonance at $\unit[147]{Hz}$ with a corresponding excitation creation rate of $\Omega/2\pi=\unit[1.01]{Hz}$.
Both the position and the creation rates match the experimental results.
Two smaller resonances are also visible at the frequencies $\unit[187]{Hz}$ and $\unit[192]{Hz}$, corresponding to spatially excited modes.}
\label{fig2}
\end{figure}
We employ almost pure $^{87}$Rb BECs in a crossed-beam optical dipole trap with trapping frequencies $\unit[2\pi\times(150,160,220)]{Hz}$.
The 22,000 atoms in the BEC are prepared in the hyperfine level $\ket{1,0}$.
At our applied magnetic field of $B = \unit[2.6] {G}$, the magnetic field-induced QZE is $q_{B}=\unit[487]{Hz}$.
Before initiating the dynamics, we empty the levels $\ket{1,\pm1}$ with two microwave pulses from $\ket{1,\pm1}$ to $\ket{2,\pm2}$ followed by a light push resonant to the F=2 manifold to ensure that there are no excitations present in these levels.
In our experiments, we apply an effective shift of the QZE $q_d$ by a microwave dressing field that couples the levels $\ket{1, -1}$ and $\ket{2, -2}$.
Atoms are transferred from the level $\ket{1,0}$ to the levels $\ket{1,\pm 1}$ in the trap's ground mode if the sum of dressing field and the magnetic-field-induced QZE matches the resonance condition~(\ref{eq:resonancecondition}), $q\equiv q_B+q_d=q_0$~\cite{Klempt2009,Scherer2010,Scherer2013}.
Fig.~\ref{fig3}a shows this transition from the stable into the unstable region with the according resonance in the number of transferred atoms at the boundary of these regions (in orange).
There are further resonances (blue in Fig.~\ref{fig3}a) at $q=q_j<q_0$, when the difference $q_0-q_j$ is approximately equal to the energy difference $E_j-E_0$ to the $j$th excited mode of the effective potential.
Otherwise, the BEC remains in the state $\ket{1,0}$ and no atoms are transferred.
In our experiments, the first two excited spatial modes are seen as one resonance, because two trap frequencies are close to degeneracy.
Nevertheless, as shown with the absorption images, they can be individually addressed by choosing the correct QZE.
The analogue DCE is realized in the regime $q>q_0$, where the BEC is stable.
Here, the intensity of the microwave field is modulated sinusoidally, yielding a corresponding oscillation of the QZE.
If the frequency of the QZE oscillation is resonant to approximately twice the QZE difference to a specific resonance, $f= 2 (q-q_j)/h$, atoms are parametrically excited to the respective mode.
This process can be described as a parametric amplification of vacuum fluctuations in the $\ket{1,\pm 1}$ modes.
The number of the transferred atoms is detected by state-selective absorption imaging.
We will show that the amplification of vacuum fluctuations leads to measurable populations in the levels $\ket{1,\pm 1}$ in spin and spatial degrees of freedom and confirm the quantum origin of the dynamics by quantifying the created continuous-variable entanglement.
\subsection{Dynamical Casimir ground-mode resonance}
\label{DynamicalCasimirgroundresonance}
In our experiments, we observe the analogue DCE by setting the QZE to a value of $(q - q_0)/h = \unit[71]{Hz}$, far in the stable regime.
We modulate the QZE $q/h$ for $\unit[700]{ms}$ with an amplitude of $\unit[48]{Hz}$ by controlling the intensity of the microwave dressing field.
Figure~\ref{fig2}a shows the fraction of transferred atoms as a function of the modulation frequency $f$.
The data shows a resonance at $\unit[145]{Hz}$, which is approximately twice the QZE difference to the ground mode, $2 (q-q_0)/h=\unit[142]{Hz}$.
The data may be compared to the theoretical prediction in Fig.~\ref{fig2}b.
Here, the frequency of the ground-mode resonance at $\unit[147]{Hz}$ is in good agreement with the experimental results.
The difference between the theoretical prediction and twice the QZE difference $2 (q-q_0)/h$ may be explained by slight inaccuracies in the determination of the modulated QZE from dc measurements, as well as drifts and anharmonicities in the trapping potential.
On resonance, the transferred fraction of atoms follows an exponential growth.
We calculate a theoretical spin excitation rate of $\Omega/2\pi=\unit[1.01]{Hz}$ that matches well the experimental rate of $\Omega/2\pi=\unit[1.06]{Hz}$, as it is obtained from the maximally transferred fraction on resonance.
In contrast to the theoretical calculations, the experimental resonance width of $\unit[2.7]{Hz}$ is four times larger than the width of the theoretical resonance of $\unit[0.7]{Hz}$.
This is a result of the varying total number of atoms in our BECs as discussed below.
The excited state resonances at the frequencies $\unit[187]{Hz}$ and $\unit[192]{Hz}$ are not visible in our experimental data.
We will address this issue in the next paragraph.
\subsection{Excited resonance}
\begin{figure}[htb!]
\centering
\includegraphics[width=.46\textwidth]{Fig3.pdf}
\caption{(a) For specific values of the QZE $q=q_j$, we observe well resolved resonances in the unstable region of the BEC, where distinct spatial modes are populated via spin dynamics towards the levels $\ket{1,\pm 1}$.
The spatial profile of the modes is observed in our absorption images (see inset).
The colored lines are Gaussian fits to guide the eye.
(b) Parametric excitations populate distinct resonances by varying the modulation frequency of the quadratic Zeeman energy after generating seed atoms with collisional interactions.
The gray shaded area indicates the mean transfer due to spin-changing collisions.
Same colors indicate the same modes.
(c) Theoretically obtained resonances for different total numbers of atoms.
The brightness corresponds to different total number of atoms and different colors indicate different modes.}
\label{fig3}
\end{figure}
We further study the analogue DCE on an excited spatial mode.
As seen in Fig.~\ref{fig2}b, the creation rate of excitations is smaller and narrower for excited spatial modes due to the reduced mode overlap $\chi_{jj}$.
For such reduced spin excitation rates, our system is dominated by additional loss processes, such as atomic collisions that transfer the atoms from the excited trap mode to the ground mode.
As the exponential growth rate depends on the bosonic enhancement of initially transferred atoms, a significant loss can lead to a complete inhibition of the growth process.
Furthermore, fluctuations of the total number of atoms lead to a further suppression of the resonance, as discussed below.
To mitigate the influence of the loss processes, we enhance the creation rate on the first excited spatial mode by an initial transfer of seed atoms to the chosen mode.
Prior to our DCE protocol, we deliberately transfer seed atoms to the first excited spatial mode by choosing a resonant QZE $q=q_1$ (see Fig.~\ref{fig3}a).
The creation of seed atoms is facilitated by enabling spin-changing collisions via our microwave dressing for $\unit[150]{ms}$ on the first excited spatial mode at $(q-q_0)/h=\unit[-16.9]{Hz}$.
In the mean, $\unit[1.7]{\%}$ of the atoms are transferred to the excited spatial mode (gray shaded area in Fig.~\ref{fig3}b).
We further increase the signal by increasing the modulation amplitude.
We oscillate $q/h$ from $\unit[45]{Hz}$ to $\unit[363]{Hz}$ for a modulation time of $\unit[650]{ms}$.
Due to a nonlinearity, the oscillation is slightly distorted from a pure sinusoidal shape and is centered around $\unit[214]{Hz}$.
For these experimental parameters, we observe not only the population of the ground mode of the effective potential, but also of the seeded first excited mode (see Fig.~\ref{fig3}b).
The frequencies of the ground-mode and the excited-mode resonances are determined by Gaussian fits yielding $\unit[422]{Hz}$ and $\unit[462]{Hz}$.
The inset in Fig.~\ref{fig3}b shows the spatial mode profile of the excited atoms.
Resonances corresponding to a certain mode display the respective spatial profile.
The theoretical calculations in Fig.~\ref{fig3}c agree qualitatively with the experimental results.
The results are displayed for three different total numbers of atoms.
The positions of the excited-mode resonances shift several resonance widths depending on the number of atoms.
This number-dependent shift of the narrow lines, combined with our experimental fluctuations of the total atom number of 1800 atoms, explains why we were unable to observe the excited resonances without seed atoms.
Furthermore, the theoretical results show a systematic shift to higher modulation frequencies compared to the experimental results.
Again, this effect may be explained by slight inaccuracies in the determination of the modulated QZE from dc measurements, as well as drifts and anharmonicities in the trapping potential.
Our results show the parametric excitation of atoms into different spin and spatial modes by an analogue of the DCE.
\section{Entanglement characterization}
\begin{figure}[htb!]
\centering
\includegraphics[width=.46\textwidth]{Fig4.pdf}
\caption{(a) One-mode variances $V_{-1}$ and $V_{+1}$ and two-mode variances $V_d$ and $V_s$ as a function of the local oscillator phase $\theta$. The variances of the individual modes show no phase evolution and fluctuations greater than shot noise. The two-mode variances are squeezed below shot noise at $\theta=0.75\,\pi$ for $V_{d}$ and at $\theta=1.25\,\pi$ for $V_{s}$. (b) Inseparability parameter $V_d{(\theta)}+V_s{(\theta+\pi/2)}$ as a function of the local oscillator phase $\theta$. The dashed line indicates the inseparability boundary of the underlying quantum state. We violate this boundary by 2.3 standard deviations, which proves continuous-variable entanglement.}
\label{fig4}
\end{figure}
In this section, we prove the quantum nature of the DCE by demonstrating the quantum correlations between the excitations created in the two modes.
For these experiments, we employ the sequence of section \ref{DynamicalCasimirgroundresonance}, with a shorter modulation time of $\unit[110]{ms}$.
Following our previous work \cite{Peise2015a}, we demonstrate the quantum correlations between the quadratures of the levels $\ket{1,\pm1}$, defined as $\hat x_{\pm1}={1}/{\sqrt{2}}\,(\hat a^\dag_{\pm1}+\hat a_{\pm1})$ and $\hat p_{\pm1}={i}/{\sqrt{2}}\,(\hat a^\dag_{\pm1}-\hat a_{\pm1})$.
In our experiments, we detect either the $x$ or the $p$ quadratures of both levels $\ket{1,\pm 1}$ by unbalanced atomic homodyne detection.
Our BEC acts as the local oscillator for the homodyne detection.
A radio-frequency pulse couples $15\%$ of the local oscillator with the levels $\ket{1,\pm1}$.
The local oscillator phase $\theta$ can be adjusted via a variable holding time with deactivated microwave dressing.
For each holding time, we obtain a linear combination of both quadratures $\hat X_{\pm1}(\theta)=\hat x_{\pm1}\cos(\theta-\pi/4)+\hat p_{\pm1}\sin(\theta-\pi/4)$, with the corresponding variances $V_{\pm 1} \equiv \mathrm{Var}[\hat X_{\pm1}(\theta)]$.
For $\theta=3\pi/4$, the variance of the difference $V_d \equiv \mathrm{Var}[\hat X_{+1}(\theta)-\hat X_{-1}(\theta)]$ is squeezed, while for $\theta=5\pi/4$, the variance of the sum $V_s \equiv \mathrm{Var}[\hat X_{+1}(\theta)+\hat X_{-1}(\theta)]$ is squeezed.
Consequently, the local oscillator phases $3\pi/4$ and $5\pi/4$ can be associated with the $x$ and $p$ quadratures.
These two quadratures show sub-shot-noise fluctuations (blue and purple dots in Fig.~\ref{fig4}a), which indicates two-mode squeezing.
Additionally, no phase dependence is visible for the quadrature correlations of the individual modes (red and orange dots in Fig.~\ref{fig4}a).
As a consequence, there is no one-mode squeezing, in agreement with our predictions.
We prove entanglement with the inseparability criterion $V_d + V_s < 2$ \cite{Simon2000,Duan2000a} for two collective atomic modes (see Fig.~\ref{fig4}b).
The strongest violation is $V_d + V_s = 1.51\pm0.17$ proving entanglement with 2.9 standard deviations.
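As a consistency check of the criterion, one can verify numerically that an ideal two-mode squeezed vacuum violates the separability bound $V_d + V_s \ge 2$. The short sketch below (with a hypothetical squeezing parameter $r$, not our measured value) samples Gaussian quadratures in vacuum units in which the shot-noise level of each two-mode combination is 1:

```python
import numpy as np

rng = np.random.default_rng(0)
r, n = 0.15, 200_000   # assumed squeezing parameter and sample size

# Normal modes of an ideal two-mode squeezed vacuum (vacuum units, Var = 1/2):
# the difference quadrature (x_+1 - x_-1)/sqrt(2) and the sum quadrature
# (p_+1 + p_-1)/sqrt(2) are squeezed, their partners anti-squeezed.
d_x = rng.normal(0.0, np.exp(-r) / np.sqrt(2), n)   # squeezed
s_x = rng.normal(0.0, np.exp(+r) / np.sqrt(2), n)   # anti-squeezed
s_p = rng.normal(0.0, np.exp(-r) / np.sqrt(2), n)   # squeezed
d_p = rng.normal(0.0, np.exp(+r) / np.sqrt(2), n)   # anti-squeezed

x_p1, x_m1 = (s_x + d_x) / np.sqrt(2), (s_x - d_x) / np.sqrt(2)
p_p1, p_m1 = (s_p + d_p) / np.sqrt(2), (s_p - d_p) / np.sqrt(2)

V_d = np.var(x_p1 - x_m1)   # shot-noise level of each combination is 1
V_s = np.var(p_p1 + p_m1)
print(V_d + V_s)            # ~ 2 exp(-2r) < 2: the state is inseparable
```

The single-mode variances of this model, $\mathrm{Var}[x_{\pm1}]=\cosh(2r)/2$, lie above shot noise, mirroring the absence of one-mode squeezing discussed above.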
\section{Conclusion}
In conclusion, we have demonstrated that spin dynamics in spinor BECs resembles the DCE.
We have observed the generation of atom pairs in initially empty excited states of the system by a resonant modulation of the energy of the excited states.
The created pairs carry entanglement, which we have proven by detecting the non-classical correlations between the quadratures.
This central finding unveils the deep connection between the Casimir Effect and the generation of non-classical states.
In the future, the parametric generation of entangled atom pairs can be employed as a versatile tool for the generation of entangled atomic ensembles.
The modulation method offers a fast initialization of the pair generation process compared to conventional methods, where the resonance condition is reached by ramping the QZE to the unstable regime.
\ack{
We acknowledge support from the Centre for Quantum Engineering and Space-Time Research (QUEST), Generalitat de Catalunya through grant 2014SGR-401, the Ministerio de Economia (Spain) through grant FIS2014-54672-P and the Deutsche Forschungsgemeinschaft (DFG) through project KL2421/2-1, RTG 1729, and CRC 1227 (DQ-mat), project A02.
}
\section*{References}
\section{I. Introduction}
In the classical formulation, a polaron is a charged carrier localised in a potential well created via self-induced polarisation in a polar crystal \cite{Landau,Pekar}. The polaron concept has been extended to systems with magnetic interactions, which leads to a new phenomenon, the so-called spin-polaron. Similarly to the classical polaron, the spin-polaron describes a localised charge carrier. However, in this case, the quasiparticle is stabilised by the strong magnetic interaction between an impurity spin and the spin states of the host material \cite{Emin1}. In the literature this quasiparticle appears under a variety of terms: spin-polaron \cite{Emin1}, magnetic polaron \cite{Mauger} and ferron \cite{Nagaev1,Nagaev2}, to give some examples.
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.34]{fig1.pdf}
\end{center}
\caption{\label{structure}(Color online) G-AFM supercell with one immersed seven-site spin-polaron (marked with red bars, upper left corner). Blue and red sites refer to atoms with magnetisation pointing up and down, respectively. In the insets (right-hand part of the plot), we illustrate magnetic clouds (spin-polarons) in a free-standing two-dimensional layer, on the surface and in the bulk. Nearest-neighbour Mn atoms lie along the $<001>$ direction and next-nearest neighbours along $<101>$. The polarons are ferromagnetically polarized along the $<101>$ direction, according to Ref.~\cite{Ling1}.}
\label{fig1}
\end{figure}
The seminal work \cite{Gennes} studying the spin-polaron quasiparticle triggered great interest and has been followed by an extensive number of publications employing several theoretical methods \cite{Nagaev2,Emin2,Mauger,Umehara,Kuzemsky}. These studies are not only of academic interest but are of major importance for understanding e.g. magneto-transport and magneto-optical effects \cite{Landwehr}, high-temperature superconductivity \cite{Mott1} and colossal magnetoresistance phenomena \cite{Nagaev3}.
Magnetic polaron formation is a complex quantum-mechanical phenomenon which encompasses electronic, lattice and magnetic effects and their correlations. A carrier that binds to a self-induced magnetisation can lower the energy of the system compared to the unbound configuration. The gain in magnetic energy due to carrier localization therefore outweighs the cost of e.g. temperature fluctuations, the kinetic energy of electron hopping and possibly a local lattice deformation. In an antiferromagnetic (AFM) lattice, spin-polaron stabilisation locally breaks the magnetic symmetry, suppressing long-range antiferromagnetic interactions such that local ferromagnetic interactions are preferred instead. In a one-dimensional AFM spin chain, the polaron forms over an FM alignment of N spins \cite{Gonzalez}. In 2D and 3D systems, magnetic polarons can form in a number of spatial configurations \cite{Meskine1,Meskine2}.
Studying spin-polaron dynamics increases the complexity of the physical picture compared to regular polarons. As discussed above, a spin-polaron in an AFM lattice is associated with a local region that carries an FM spin alignment over a few atomic sites. Spin-polaron motion is therefore reflected in how this FM region propagates through the spin subsystem, a fact we rely on in this investigation. According to the theory of rate processes, the transition rate of this quasiparticle depends on e.g. its size, the carrier propagation regime, adiabatic or non-adiabatic behaviour and intrinsic material parameters of the system \cite{Liu,Emin3,Kemeny}.
In this study, different aspects of spin-polaron physics are explored in La-doped $CaMnO_3$, an orthorhombic (Pnma) semiconductor which stabilises in a bipartite (G-AFM) \cite{Wollan} magnetic structure ($T_N=125$~K). The spin-polaron formation in this compound can be described by the following mechanism. Excess electrons, injected via La doping, accumulate on Mn($e_g$) orbitals, locally changing the nominal charge of Mn from $4+$ to $3+$. Due to the strong Hund's coupling, carrier localization on Mn$^{3+}$ ions is possible only via spin-flip events, where the entire atomic moment reverses, changing its nearest-neighbour alignment from antiferromagnetic to ferromagnetic, such that a local FM region is formed (see Fig.~\ref{fig1}) \cite{Meskine1,Meskine2,Allen}. In other words, the carrier localisation is suggested to lead to magnetic phase separation governed by an interplay of the AFM superexchange of the host and the FM double exchange of the FM region \cite{Anderson1,Anderson2,Zener,Goodenough}.
Experimentally, FM droplets have been observed in La-doped $CaMnO_3$ in the La concentration range of $0.01-0.10$ \cite{Ling1,Ling2,Wang,Neumeier,Cohn1,Cornelius,Chiorescu,Granado1,Granado2}. Neutron powder diffraction and DC-magnetization techniques reported an average FM-droplet size of about 10~\AA~\cite{Ling2,Granado2}, a magnitude that corresponds to a 7-13 site spin-polaron \cite{Meskine1,Meskine2,Allen}. Interestingly, the experimentally observed dynamical properties of magnetic polarons agree with different theoretical models. For instance, Hall-mobility and thermopower analyses reported a large polaron in the intermediate coupling regime \cite{Cohn2}. Measurements of the electric conductivity of the $CaMnO_3-LaMnO_3$ system \cite{Worledge,Ohtaki,Lan} are in good agreement with the adiabatic small-polaron model \cite{Mott2,Emin4}.
Theoretically, there have been several attempts to study magnetic polarons in $La_xCa_{1-x}MnO{_3}$. In a pioneering ab-initio work it was shown that the spin-polaron sites exhibit $e_g$ character \cite{Meskine1}. Recently, an ab-initio study proposed a detailed microscopic description of magnetic polarons using DFT+U as well as a hybrid functional for the electronic subsystem \cite{Bondarenko3}. This investigation demonstrated that interactions beyond conventional parametrizations of DFT are needed to obtain $e_g$ localization on the polaronic (Mn) sites. Moreover, the authors of Ref.~\cite{Bondarenko3} found that the excess charges mainly localize in a double-exchange active (101) plane. It was also shown in this work that the size and intrinsic characteristics of the magnetic polarons strongly depend on the La concentration. At lower concentrations ($x<0.02$), spin-polaron formation is driven mainly by magnetic effects. With increasing La concentration, both lattice and magnetic effects start to play a significant role.
The present study is a multiscale approach that involves ab-initio theory, magnetisation dynamics and kinetic Monte Carlo simulations to address the dynamical properties of spin-polarons in two and three dimensions (2D and 3D, respectively). We start by introducing an effective Heisenberg model for the spin-polaron, corresponding to the $La_x Ca_{1-x}MnO_3$ antiferromagnetic lattice. The material-specific exchange interaction parameters incorporated in the Heisenberg model have been extracted from DFT+U calculations combined with the Liechtenstein-Katsnelson-Antropov-Gubanov (LKAG) formalism\cite{Liechtenstein}. To study spin-polaron dynamics, we have evaluated the energy barriers of single-polaron hopping, using the framework of Marcus-Emin-Holstein-Austin-Mott (MEHAM) theory, previously introduced for polarons in Refs.~\onlinecite{Deskins,Bondarenko1,Bondarenko2}. With these barriers and an effective spin-Hamiltonian that is extended to allow for polaron jumps, via kinetic Monte Carlo (KMC) simulations, we have evaluated single- and multi-polaron dynamics for a range of temperatures and external electric fields (E-fields). Detailed information about polaron dynamics is presented, both concerning its stability with respect to thermal fluctuations and the possibility to utilize these objects in nano-technology as carriers of information.
\section{Details of Calculations}
In order to gain insight into the microscopic magnetic properties of the doped CMO, a series of calculations were performed using the ``RSPt'' code\cite{rspt-book}, based on a full-potential realisation of the linear-muffin-tin-orbital method.
The calculations were done on a grid of 14$\times$10$\times$14 k-points to ensure the convergence of the exchange integrals.
The effective exchange integrals ($J_{ij}$) between Mn magnetic moments were extracted by means of the magnetic force theorem\cite{licht-exch,licht-exch2} as implemented in RSPt\cite{Kvashnin}.
We employed a basis set containing three types of basis functions, characterised by different kinetic-energy tails ($\{-0.3, -2.3, -1.5\}$~Ry), all of which were used to describe each of the Mn-$3d$ orbitals.
In order to perform DFT+$U$ and, consequently, the $J_{ij}$ calculation, these states were projected onto the muffin-tin head of Mn (for details, see e.g. Ref.~\onlinecite{Kvashnin}).
To have a clearer picture of the super-exchange versus double-exchange competition, we have performed the orbital decomposition of the exchange parameters as was done in Refs.~\onlinecite{korotin-jijs-Wannier,Fe-PRL}.
We have also performed an additional set of calculations in a supercell containing an actual spin-polaron and obtained qualitatively similar trends for the exchange parameters as in the case of a homogeneous distribution of the additional charge density. However, the accuracy of these calculations was limited by the low number of $k$-points, a parameter that is crucial for the calculation of the $J_{ij}$'s, and therefore these results are not reported here. The most important outcome of these simulations is that the effective exchange interactions within the FM region were found to be substantially anisotropic compared with the VCA-derived results.
The main reason for the anisotropy around the $Mn^{3+}$ ions is the Jahn-Teller distortion of the local octahedral environment, which is not taken into account in a VCA calculation.
The distortion results in different populations of the $x^2-y^2$ and $3z^2$ orbitals and hence in anisotropic magnetic couplings.
This is also consistent with the anisotropy of the transition barriers obtained in the series of calculations described in Sec.~III of the main text.
Apart from the more pronounced anisotropy, the obtained $J_{ij}$'s were qualitatively similar to the ones obtained with the simplified VCA calculations.
The thermal effects of the magnetic moments in the AFM system were studied by solving the Landau-Lifshitz-Gilbert equation, as implemented in the UppASD package~\cite{skubic}, where each atomic magnetic moment, $\mathbf{m}_i$, is considered to be a three-dimensional (3D) vector with constant magnitude
\begin{equation}
\frac{\partial \mathbf{m_i}}{\partial t}=-\frac{\gamma}{1+\alpha^2} \mathbf{m_i} \times \mathbf{B}^\text{eff}_i-\frac{\gamma}{1+\alpha^2} \frac{\alpha}{m}\left[\mathbf{m_i} \times \left[ \mathbf{m_i} \times \mathbf{B}^\text{eff}_i\right]\right]
\label{eq:LL}
\end{equation}
where $\gamma$ is the gyromagnetic ratio, $\alpha$ is the Gilbert damping parameter and $\mathbf{B}^\text{eff}_i$ is the effective field at the $i$-th site, which is defined as
\begin{equation}
\mathbf{B}^{eff}_i=-\frac{\partial \mathcal{H}_\text{Heis}}{\partial \mathbf{m}_i}+ \mathbf{B}^\text{therm}_i(T).
\end{equation}
In this expression $\mathcal{H}_\text{Heis}$ is the Heisenberg Hamiltonian, containing all the interactions which model the system, and the second term is a stochastic field which introduces temperature effects.
The thermal field is modeled by using Gaussian white noise, which fulfills the following properties:
\begin{equation}
\langle B^{\text{therm},k}_i(t) \rangle = 0 \nonumber
\end{equation}
\begin{equation}
\langle B^{\text{therm},k}_i(t) B^{\text{therm},l}_j(t') \rangle=2D\delta_{ij}\delta_{kl}\delta(t-t'),
\end{equation}
where $D=\frac{\alpha}{1+\alpha^2}\frac{k_B T}{\mu_B m}$ is the amplitude of the field, $i$ and $j$ are lattice sites, $k$ and $l$ are the Cartesian components and $T$ is the temperature.
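For illustration, a minimal explicit-Euler integrator of the stochastic LLG equation above for a single unit moment can be sketched as follows (the values of $\gamma$, $\alpha$, the field and the time step are illustrative only; UppASD itself employs more careful stochastic integration schemes):

```python
import numpy as np

def llg_step(m, b_eff, dt, alpha, gamma=1.76e11, D=0.0, rng=None):
    """One explicit-Euler step of the stochastic LLG equation for a unit
    moment m (illustrative sketch, not the UppASD integrator)."""
    if rng is not None and D > 0.0:
        # Gaussian white noise with <B^k B^l> = 2 D delta_kl delta(t - t')
        b_eff = b_eff + rng.normal(0.0, np.sqrt(2.0 * D / dt), 3)
    pre = -gamma / (1.0 + alpha**2)
    m_x_b = np.cross(m, b_eff)
    dm = pre * (m_x_b + alpha * np.cross(m, m_x_b))   # |m| = 1 assumed
    m = m + dt * dm
    return m / np.linalg.norm(m)   # keep the magnitude constant

# zero temperature: the moment relaxes towards the field direction
m = np.array([1.0, 0.0, 0.0])
for _ in range(20_000):
    m = llg_step(m, np.array([0.0, 0.0, 1.0]), 1e-14, alpha=0.1)
print(m)   # close to [0, 0, 1]
```

At finite temperature one would pass $D$ and a random generator, so that the thermal field with the white-noise correlator above competes with the damping towards $\mathbf{B}^\text{eff}$.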
The calculations to estimate spin-polaron (SP) hopping barriers were carried out in the DFT framework using the VASP code \cite{Kresse1,Kresse2}.
We used the DFT+U methodology employing Lichtenstein's approach\cite{lichtenstein} with $U$=3.9 eV and $J$=0.9 eV for the Mn $3d$ states. More details regarding our choice of Hubbard $U$ for this system can be found in Ref.~\cite{Bondarenko3}.
The calculations of the barriers were done for three cases: 1) a bulk supercell; 2) a 9-layer surface slab and 3) a diatomic layer consisting of 17 Ca, 18 Mn, 54 O and 1 La atoms ($x$=0.055). For 1) and 2) the unit cells consisted of 71 Ca, 72 Mn, 216 O and 1 La atoms, which corresponds to a La atomic fraction of $x$=0.013. For the layer and slab calculations the vacuum spacing was $\sim$18~\AA, large enough to isolate atoms from their periodic images. We used the PBE functional \cite{Perdew}, a cutoff energy of 550 eV and a 2$\times$2$\times$2 k-point mesh for the bulk calculations and a 2$\times$2$\times$1 mesh for the slab and layer, respectively.
The obtained equilibrium Pnma lattice parameters (a=5.29 \AA, b=7.44 \AA, c= 5.26 \AA) are in good agreement with the experimental data (a=5.28 \AA, b=7.46 \AA, c=5.27 \AA) \cite{lattice}.
The polaron motion is simulated using a hybrid ASD-KMC algorithm. A typical simulation works by performing a standard ASD simulation, i.e. at each time step the LLG equation is solved for a given magnetic configuration, with the \textit{polaron center}, i.e. the site that defines the ferromagnetic exchange cloud, located at a given site. The motion of the \textit{polaron center} is determined by a KMC algorithm, which calculates how long a time, $\Delta t$, it takes for the \textit{polaron center} to move to another site. After a time $\Delta t$ has passed, the \textit{polaron center} is instantaneously moved to the new site, with the exchange interactions changing from the AFM background to the FM polaron region.
In this way one can move the \textit{polaron center} making use of the energy barriers calculated from \textit{ab-initio} theory, while the magnetic texture evolves via the LLG equation and the time evolution of the exchange interactions, as given by the motion of the \textit{polaron center}, is tracked.
The simulation of the \textit{polaron center} motion was performed according to the following KMC algorithm, starting at time $t_\text{ASD}=0$:
\begin{itemize}
\item[1] Calculate the set $\{ r_{ij} \}$ of all $N_i$ possible transition rates from the initial state $i$ to a state $j$. We assume that the transition rate to a NN or NNN spin-up (spin-down) site (see Fig.~\ref{fig5}a) follows the Arrhenius law:
\begin{equation}
r_{ij}=\nu_0 e^{-\frac{E_a}{k_B T}}
\end{equation}
and $r_{ij}=0$ for all other sites ($T$ is the temperature and $E_{a}$ is the activation energy of the hopping event).
The cumulative function is then given by $R_{i}=\sum_{j=1}^{N_i} r_{ij}$.
\item[2] Take a random number $\rho \in (0,R_{i}]$. Select the process $j$ for which the cumulative function satisfies $\sum_{k=1}^{j-1} r_{ik}< \rho \le \sum_{k=1}^{j} r_{ik}$.
\item[3] Take a random number $\rho' \in (0,1]$. Calculate the time needed for the system to evolve to the new state, $\Delta t=-\frac{\log\left(\rho'\right)}{R_{i}}$.
\item[4] Propagate the LLG equation from $t_\text{ASD}=t'$ until $t_\text{ASD}=t'+\Delta t$ and then move the polaron to state $j$. If $t_\text{ASD}=t_{max}$, end the simulation; otherwise set $t'=t_\text{ASD}$ and go to step 1.
\end{itemize}
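A minimal sketch of one iteration of this KMC scheme (the rate calculation, process selection and residence time), with purely illustrative hopping barriers, could read:

```python
import numpy as np

KB = 8.617333e-5    # Boltzmann constant, eV/K
NU0 = 1.0e12        # attempt frequency, Hz (the value used in the main text)

def kmc_step(barriers_eV, T, rng):
    """One KMC move: Arrhenius rates, roulette selection of process j and
    the residence time Delta t = -ln(rho') / R_i (illustrative sketch)."""
    rates = NU0 * np.exp(-np.asarray(barriers_eV) / (KB * T))       # step 1
    R = rates.sum()
    j = int(np.searchsorted(np.cumsum(rates), rng.uniform(0.0, R))) # step 2
    dt = -np.log(1.0 - rng.uniform(0.0, 1.0)) / R                   # step 3
    return j, dt

# four NN hops with an assumed bulk barrier of 16 meV, at T = 125 K
rng = np.random.default_rng(1)
hops = [kmc_step([0.016] * 4, 125.0, rng) for _ in range(20_000)]
print(np.mean([dt for _, dt in hops]))   # mean residence time in seconds
```

The sampled residence times are exponentially distributed with mean $1/R_i$, as required by the algorithm above.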
\section{II. The parametrized spin-polaron Hamiltonian}
For materials with large magnetic moments, it is usually possible to map the magnetic configuration onto the Heisenberg Hamiltonian
\begin{equation}
\mathcal{H}=-{1 \over 2}\sum_{i,j}J_{ij}\hat{s}_i\cdot\hat{s}_j
\label{heisenbergham}
\end{equation}
where the $J_{ij}$'s are the Heisenberg exchange coupling parameters, which determine the strength of the magnetic interaction between the $i$-th and $j$-th magnetic moments $\hat{s}_i$ and $\hat{s}_j$. For spin-polarons there is a twist to this description, since electrons become localized over a few atomic sites, which in turn modifies the local inter-atomic exchange interactions.
The model of the exchange interactions used here is schematically shown in Fig.~\ref{fig-ASD}, where $J_{bb}$ denotes the coupling between spins of the AFM background of the CaMnO$_3$ lattice, $J_{pp}$ the coupling within the spin-polaron body and $J_{pb}$ the coupling between the boundary of the spin-polaron and the AFM background. Starting from the G-AFM reference state, which is the ground state for undoped CaMnO$_3$, we have computed the exchange integrals as a function of La doping. Details of these calculations were presented above.
A critical evaluation of the accuracy of the obtained interactions is presented in the Appendix. There, a symmetry-resolved analysis of the interactions shows that the $t_{2g}-t_{2g}$ interactions are short-ranged, antiferromagnetic and rather insensitive to La doping. In contrast, the $e_{g}-e_{g}$ contribution is shown to be ferromagnetic in nature and strongly dependent on the La concentration. For undoped $CaMnO_3$ the calculated exchange interactions, when combined with Monte Carlo simulations, reproduce the observed ordering temperature with good accuracy (see the Appendix). This establishes the level of accuracy of the simulations and gives credence to using Eq.~\ref{heisenbergham} for the investigation of the spin-polarons observed in La-doped $CaMnO_3$. As the Appendix shows, the thermal stability of these polarons is significant, of the same order as that of the underlying AFM order of $CaMnO_3$.
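To make the role of the three couplings concrete, a minimal one-dimensional sketch of Eq.~\ref{heisenbergham} (collinear spins, each bond counted once, and purely hypothetical $J$ values) shows that, once $J_{pp}$ and $J_{pb}$ replace $J_{bb}$ on the polaron bonds, the locally ferromagnetic texture becomes energetically preferred over the N\'eel state:

```python
import numpy as np

J_BB, J_PP, J_PB = -1.0, 1.5, 0.5   # hypothetical background/polaron/boundary J's

def energy(spins, polaron_sites):
    """E = -sum_bonds J_ij s_i s_j for a 1D chain; counting each bond once
    absorbs the factor 1/2 of the double sum in the Hamiltonian."""
    e = 0.0
    for i in range(len(spins) - 1):
        both = i in polaron_sites and i + 1 in polaron_sites
        one = (i in polaron_sites) != (i + 1 in polaron_sites)
        j_ij = J_PP if both else (J_PB if one else J_BB)
        e -= j_ij * spins[i] * spins[i + 1]
    return e

neel = np.array([(-1.0)**i for i in range(12)])   # Neel (G-AFM-like) chain
pol = neel.copy()
pol[5] *= -1.0                                    # flip centre spin: sites 4-6 FM
print(energy(neel, {4, 5, 6}), energy(pol, {4, 5, 6}))
```

With the polaron couplings present on the bonds of sites 4-6, the configuration with the flipped central spin has the lower energy, mimicking the stabilisation of the FM cloud discussed above.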
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.32]{fig2.pdf}
\end{center}
\caption{(Color online) Schematic picture of the spin-configuration of ferromagnetic Mn atoms in the AFM background, defining the spin-polaron, with three types of exchange interaction parameters. Mn moments corresponding to the spin-polaron are given in red colour.}
\label{fig-ASD}
\end{figure}
\section{III. Magnetic polaron hopping barriers}
As Fig.~\ref{fig-ASD} illustrates, the spin-polaron is a region of ferromagnetic coupling in an AFM matrix, which is connected to the excess of electrons in this region. To study the dynamics of such polarons we adopt a concept where the spin-polaron can jump between two sites by overcoming a certain energy barrier, $E_a$. This idea goes back to the well-known Marcus-Emin-Holstein-Austin-Mott (MEHAM) theory for polaron transfer \cite{Deskins}. Previously the method, albeit without magnetism, has successfully been applied to study lattice-polaron mobility at ionic interfaces and in band-gap semiconductors with low crystal symmetry \cite{Bondarenko1,Bondarenko2}.
Studies based on ab-initio theory of the static properties of spin-polarons in $La_xCa_{1-x}MnO_3$ have provided a theoretical description of the magnetic phase diagram in the La range of $0<x<0.10$, in good agreement with experimental data \cite{Bondarenko3}. These studies have shown that spin-polarons are stabilised mostly by the magnetic interaction at lower La concentrations and by the lattice contribution at larger concentrations. To reduce the influence of spin-lattice correlations we chose here to calculate barriers for polaron hopping in the low La concentration limit, namely for $x_{La}$=0.013. The barriers were estimated for hopping from the initial site to both nearest neighbor (NN) and next-nearest neighbor (NNN) sites, in a process that is described in detail below.
We calculated the energy barriers for polaron hopping both for 3D and 2D geometry.
In practice the polaron is formed from the antiferromagnetic matrix by flipping the magnetisation direction of one Mn atom and then allowing the atomic positions and electronic structure to fully relax (for an illustration see Fig.~\ref{fig5}a). This creates a local region of ferromagnetically coupled Mn atoms on which one extra electron becomes localized, thus forming a spin-polaron.
Spin-polaron hopping from an initial position to an adjacent location was controlled via the spin configuration. Spins at two neighboring sites were simultaneously rotated by an angle $\gamma$ in clockwise and anti-clockwise directions, respectively, and for each such configuration the electronic structure, magnetic moment and total energy were calculated using first-principles theory (as described in the section with details of the calculations). This coordinated rotation of two Mn moments allows the ferromagnetic cloud to move through the lattice, and the extra charge associated with the spin-polaron was found to follow this cloud. Hence a simultaneous rotation of two Mn moments was found to provide an excellent way to study the energy landscape of spin-polaron motion. For illustration we show in Fig.~\ref{fig5}a how a ferromagnetic region (indicated by thick bonds) is moved along the $<101>$ direction of the lattice. We refer to this movement as nearest neighbor (NN) hopping, since there are no other Mn atoms located between the two Mn atoms that are rotated. We also investigated movement in the $<100>$ direction; in this case the two Mn atoms that have their moments rotated have a third Mn atom between them, with a fixed moment. For this reason we refer to motion along the $<100>$ direction as next nearest neighbor (NNN) hopping.
The energy barrier (or activation energy, $E_a$) of the polaron hopping is determined by the maximum of the total-energy curve along the transition path (Fig.~\ref{fig5}b). Our results show that at each point of the transition path lattice relaxation lowers the energy by a non-negligible amount compared to the energy of the unrelaxed lattice (Fig.~\ref{fig5}b).
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.33]{fig4.pdf}
\end{center}
\caption{\label{structure2}(Color online) Schematic illustration of the spin-polaron hopping mechanism.
a) Spin rotation angle of the initial Mn$^{3+}$ (solid red bonds) and final Mn$^{4+}$ (thin red bonds) sites, varied in the range $\gamma=0^{\circ}-180^{\circ}$. Note that the moments are rotated clockwise (anti-clockwise) for the initial (final) sites. (b) Calculated transition barriers with suppressed (dashed line) and allowed (solid line) lattice relaxation. The energies of the initial and final spin-polaron configurations differ due to the different distances between the spin-polarons and the La impurity. All energies along the transition path are given with respect to the ground-state configuration (see also text).}
\label{fig5}
\end{figure}
The corresponding barriers were also calculated for a free-standing 2D $La_xCa_{1-x}MnO_3$ layer and for the $La_xCa_{1-x}MnO_3$ surface (modeled as a slab). The maximum values of these energy barriers, together with the bulk data, are collected in Table I. Where a comparison can be made, the obtained energies agree well with experimental data and previous theoretical results. One can notice that the spin-polarons situated at the surface have a slightly higher mobility (lower energy barrier) than those in the bulk. It is also clear from Table I that the energy barriers for polaron hopping in different directions (labeled NN and NNN in Table I) are quite different. The lowest barrier was found for hopping to the NN site. Thus, we found that it is energetically favorable for the spin-polaron to move in the double-exchange active (001) plane. We also found that the transition barrier varied by a few meV depending on the position of the La atom with respect to the polaron. For this reason, we list two values for the bulk barrier in Table I.
\begin{table}[]
\centering
\caption{Spin-polaron hopping barriers obtained using DFT+U calculations. Data for a purely two-dimensional layer (Layer), the surface (Slab) and the bulk (Bulk) of La-doped $CaMnO_3$ are shown, both for nearest-neighbour (NN) and next-nearest-neighbour (NNN) hopping. The lattice structure of the layer did not remain stable during relaxation, so we show energies only for the unrelaxed case. Note that for the bulk there are two sets of NN and NNN data, depending on the distance between the spin-polaron and the La doping atom. Earlier theoretical results from a t-J model are also listed. The experimental data correspond to hopping barriers obtained from conductivity and resistivity measurements of $La_xCa_{1-x}MnO_3$ fitted to the adiabatic small-polaron model.}
\label{Tb2}
\begin{tabular}{|l|c|c|l|}
\hline
\multicolumn{2}{|l|}{Unrelaxed lattice (meV)} & \multicolumn{2}{l|}{Relaxed lattice (meV)} \\ \hline
Layer, NN & 18 & \multicolumn{2}{c|}{-} \\ \hline
Layer, NNN & 20 & \multicolumn{2}{c|}{-} \\ \hline
Slab, NN & 21 & \multicolumn{2}{c|}{10} \\ \hline
Slab, NNN & 23 & \multicolumn{2}{c|}{14} \\ \hline
Bulk, NN & 24-27 & \multicolumn{2}{c|}{14-18} \\ \hline
Bulk, NNN & 44 & \multicolumn{2}{c|}{33} \\ \hline\hline
\multicolumn{2}{|l|}{t-J model, NNN \cite{Meskine2}} & \multicolumn{2}{c|}{40} \\ \hline
\multicolumn{2}{|l|}{Experiment, \cite{Ohtaki, Lan, Worledge} (x=0.02)} & \multicolumn{2}{c|}{20} \\ \hline
\end{tabular}
\end{table}
\section{IV. Magnetic polaron motion}
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.23]{fig6.pdf}
\end{center}
\caption{\label{structure3}(Color online) Average times between spin-polaron jumps of La doped $CaMnO_3$, obtained from the simulations (see text) for different temperatures and strengths of an applied E-field.}
\label{fig6}
\end{figure}
\begin{figure*}[htb]
\begin{center}
\includegraphics[scale=1.40]{fig7.pdf}
\end{center}
\caption{\label{structure4}(Color online) Probability density of single spin-polaron propagation at variable temperatures and E-fields. The probability densities were obtained during a simulation time of 1 ns. The probability is given both as a color code (illustrated by the color bar shown on the right of each figure) and as a histogram projected on positions in x- and y-direction (shown to the right and the top of each figure).}
\label{fig7}
\end{figure*}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.75\textwidth]{fig8.pdf}
\end{center}
\caption{\label{structure5}(Color online) a) Cartoon showing schematically how electrons can be introduced into a substrate of $CaMnO_3$, by an STM tip, to store information via the formation of polarons that are immobile if no external stimuli, like an E-field, are present (picture kindly provided by Ms Ella Kurland). Simulation for multipolaron system with NN and NNN hoppings allowed in the system for b) E=$0~V/cm$, c) E-field=$10^5$~V/cm in $\left [100\right ]$ direction and d) E-field=$10^5$~V/cm in $\left [101\right ]$ direction.}
\label{fig8}
\end{figure*}
In this section, we discuss the temperature-induced motion of the spin-polaron. The hopping rate was obtained from an Arrhenius-type process with the activation barrier $E_a$. The magnetic propagation from site to site was modeled using the hybrid ASD-KMC method. In this algorithm, the polaron is characterized by two quantities: the \textit{polaron center}, i.e. the site where the electron is self-localized, and the \textit{polaron texture}, i.e. the actual ferromagnetic texture resulting from the local ferromagnetic interactions. At each time step the Landau-Lifshitz-Gilbert equation is used to determine the dynamics of the magnetic moments (AFM background and \textit{polaron texture}), while the KMC part of the algorithm calculates the time it takes for the \textit{polaron center} to move to a neighboring site, at which point the exchange interactions change instantaneously.
For each temperature, we have performed 100 different simulations with different seeds for the random number generator to obtain statistically relevant results. The energy barrier of the hopping process to the NN and NNN sites has been set as the average values obtained by the first principle calculations for the 3D case, as reported above (see Table I).
The attempt frequency, $\omega_0$, was set to be $\omega_0=1\times10^{12} \text{ Hz}$, a value that is expected if we assume spin-polaron motion driven by magnonic and phononic processes \cite{Dolling,Kaplyanski}.
We evaluated the spin-polaron hopping process using the hybrid ASD-KMC algorithm and tracked the average times between \textit{polaron-center} jumps, both in the case of random motion (no electric field applied) and in the presence of an applied external E-field. The effect of an applied E-field was introduced by adding a term $q\mathbf{E}\cdot\mathbf{r}$ ($q$ is the electron charge and $\mathbf{r}$ the position of the \textit{polaron center}) to the energy barrier used in the simulations, taking into account both electronic and magnetic contributions to the energy.
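A minimal sketch of how such a field term tilts the activation barrier of a hop, making forward hops (along the field) faster than backward hops. The barrier height, hop length and field geometry below are illustrative placeholders, not the simulation parameters:

```python
import math

E_CHARGE = 1.0   # electron charge in units of e
K_B = 8.617e-5   # Boltzmann constant in eV/K

def tilted_barrier(e_a, e_field, displacement):
    """Activation barrier (eV) modified by the work q E . r done by the
    field over the hop vector (field in V/cm, displacement in cm)."""
    work = E_CHARGE * sum(f * r for f, r in zip(e_field, displacement))
    return e_a - work

def biased_rate(e_a, e_field, displacement, temperature, omega0=1e12):
    """Arrhenius rate with the field-tilted barrier (clamped at zero)."""
    barrier = max(tilted_barrier(e_a, e_field, displacement), 0.0)
    return omega0 * math.exp(-barrier / (K_B * temperature))

# Illustrative: 1e5 V/cm along [100], ~0.4 nm hop, 0.1 eV barrier, 25 K
hop = (0.4e-7, 0.0, 0.0)  # cm
forward = biased_rate(0.1, (1e5, 0.0, 0.0), hop, 25.0)
backward = biased_rate(0.1, (1e5, 0.0, 0.0), tuple(-x for x in hop), 25.0)
```

The forward/backward rate asymmetry is what drives the directed drift of the polaron center along the applied field.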
We observed from the simulations that both the temperature and the strength of the E-field influence the mobility of the polaronic center; the resulting average times between jumps of the \textit{polaron center} from one site to the next are shown in Fig.4. It may be seen in this figure that the average time between polaron jumps decreases by about one order of magnitude, from $\sim$100 ps to $\sim$10 ps, as the temperature increases from 25 K to 125 K. From this figure it is also clear that the hopping rate of the spin-polarons changes when an external electric field is applied. For instance, at 25 K the jump time changes by about one order of magnitude when the field strength reaches $\sim 2 \cdot 10^5$ V/cm.
We have furthermore followed the position of a \textit{polaron center}, as it moves through the lattice during the KMC simulation. The normalized probability density of detecting a \textit{polaron center} at a particular position of the lattice is plotted in Fig. 5, as a function of both temperature and applied E-field. Note that the probability density shown in Fig.5 represents all the events that occurred for the average of 100 simulations, each one covering dynamics over a total time of 1 ns.
In the absence of an E-field, the spin-polaron performs a random-walk motion that becomes more diffuse the higher the temperature is (Figs.5a,b).
Using the obtained maximum average distances at different temperatures, we have estimated the polaron diffusivity coefficient according to the random-walk diffusion model as $D=\frac{\left \langle R^2 \right \rangle}{4t}$, where $\left \langle R^2 \right \rangle$ is the average maximum squared distance that the \textit{polaron center} has traveled, and $t$ is the simulation time \cite{Bressloff}. The obtained values of $D$ lie in the range of $0.5\cdot10^{-8}-1.5\cdot 10^{-6}~\mathrm{m^2/s}$ (25-125 K), and can be compared to characteristic values for polaron diffusion processes \cite{Coehoorn}. When an external E-field is applied, the polaron moves along this field (Fig.5c-f). For a given strength of the E-field, an increased temperature makes the polaron movement faster, which is natural since the spin-polaron movement is an activated process. It is notable that a lowering of the activation energy by external electric fields has recently been demonstrated experimentally in manganese oxide heterostructures \cite{Kuang}.
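The diffusivity estimate amounts to a one-line formula; the helper below sketches it (the input values are illustrative, chosen so the result falls at the upper end of the quoted range):

```python
def random_walk_diffusivity(r_squared_avg, sim_time, dim=2):
    """Diffusion coefficient D = <R^2> / (2 d t) for a d-dimensional
    random walk; the text uses the 2D form D = <R^2> / (4 t)."""
    return r_squared_avg / (2 * dim * sim_time)

# Illustrative input: <R^2> = 6e-15 m^2 accumulated over a 1 ns trajectory
d_coeff = random_walk_diffusivity(6e-15, 1e-9)  # ~1.5e-6 m^2/s
```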
Note that in the chosen parameter range of temperatures and E-fields, it is quite possible to control the position and the speed of the polaron. For instance, in the absence of an applied E-field and at a temperature of 25 K or 75 K, the polaron does not move significant distances, and stays instead at its original position (Figs.5a and 5b). However, an applied E-field of $10^5$ V/cm can easily move the polaron over significant distances. For instance, an E-field of $10^5$ V/cm in the [100] direction moves (over a time of 1 ns) the spin-polaron a distance of some 10 nm when T=25 K (Fig.5c) and over 30 nm when T=75 K (Fig.5d). An E-field in the [101] direction is seen to move the spin-polaron over even larger distances (Figs.5e and 5f). This means that the drift velocity of spin-polarons is of the order of 10-30 m/s. Note that this speed should be distinguished from the speed of the actual individual spin-polaron motion, i.e. the spin-polaron mobility, which is two to three orders of magnitude larger.
\section{Spin-polaronics as a technology}
The section above shows that the thermal stability of the spin-polaron in La-doped $CaMnO_3$ provides an excellent opportunity to use these quasiparticles in technological applications. As the results above show, a polaron can both be kept in a fixed position for a given period of time and made to move in desired directions by a rather weak applied E-field. Hence we propose that storing and erasing information by means of spin-polarons in $CaMnO_3$ should be rather straightforward, something this section outlines. The basic idea is that electrons added to a substrate of $CaMnO_3$ form polarons that stay put indefinitely, and move in a desired direction only when an electric field is applied. By dressing the electron with a magnetic cloud, this procedure makes the motion of a charged quasiparticle more 'classical' compared to that of undressed free charges.
The extra electrons can either be introduced via doping with a trivalent atom (e.g. La) on the Ca site of $CaMnO_3$ or, as Fig. 6 schematically shows, via an external source, e.g. an STM tip. Since these extra charges are associated with a magnetic contrast (ferromagnetic) that is different from that of the background material (antiferromagnetic), their detection is straightforward with AFM or STM techniques \cite{Wiesendanger}. Hence it is possible to store information, e.g. as Fig.6 shows, in the form of text written with nano-sized letters, simply by adding electrons in an appropriate pattern. This information can be stored or erased at any time, simply by application of an external E-field.
To illustrate the possibility outlined above, we have performed a KMC evolution of the multipolaron system shown schematically in Fig. 6. We started by introducing extra electrons into a $CaMnO_3$ substrate in a controlled way, so that the polarons that formed traced out, with magnetic contrast, the text ``\texttt{NANO}''.
If the temperature is sufficiently low, this text stays intact for a period longer than 100 ps at 10 K (see Fig.6b). However, as Figs.6c and 6d show, an applied E-field erases this text without difficulty. Supplementary movies illustrate in real time how the text ``\texttt{NANO}'' is robust against thermal fluctuations, but can be erased when an external E-field is applied \cite{Supplementary}. This form of writing and erasing text is a natural, but extreme, miniaturization of information storage, and it is conceivable that further condensation of information is technologically impossible. It should be noted that writing and erasing text in nano-sized letters has by now been realized many times, starting with the pioneering results of Ref.\cite{Eigler}, which used Xe atoms to spell out the name ``\texttt{IBM}''. However, the technology proposed here does not suffer from extremely long writing and detection times, in contrast to Ref.\cite{Eigler}.
\section {Conclusion}
In this work we have investigated theoretically the static and dynamic properties of spin-polarons in La-doped $CaMnO_3$. In order to do this, we constructed an effective low-energy Hamiltonian, in which all parameters were calculated from first-principles theory. This low-energy Hamiltonian was used to investigate the temperature stability of the spin-polaron, as well as its response to an externally applied electric field. Technically, this involves ab-initio electronic structure theory and atomistic spin-dynamics simulations in combination with kinetic Monte Carlo simulations. In our study we compared results for different geometries, such as spin-polarons in the bulk, at surfaces and in single two-dimensional layers, and observed significant differences. Where a comparison can be made, primarily for bulk geometries, the results presented here compare well with experimental data and previous theory.
We demonstrated a remarkable control of the mobility of spin-polarons in this material, and showed that the critical parameters deciding this are the temperature and the strength of the applied electric field. This opens up for technologies using spin-polarons, and our simulations demonstrate that storing and erasing information magnetically, by introduction and control of electrical charge, is possible even for rather low strengths of the external E-field. We demonstrate that it is possible to write text in atomic-sized letters, similar to the pioneering text written by atoms that were moved around with an STM tip \cite{Eigler}. The advantage of the technology proposed here is the vast speed with which information can be stored and erased. It is tempting to contrast the information density, and writing speed, proposed in Fig.6, to that of the over 2000-year-old technology behind the Rosetta stone.
The technological implications of electrical control of spin-polarons are only touched upon in the present work. It is foreseeable that other technologies may be just as relevant. We propose that transistor functionality of spin-polarons, e.g. in $CaMnO_3$, is possible, given their stability and the ease with which one may control their movement with an electric field. Hence charges injected at one end of a device built from $CaMnO_3$ can be moved from a source to a drain, controlled by an E-field provided by a gate. Such studies are underway. Also, the exploration of a wider selection of materials that can host spin-polarons is interesting, both to establish functionality at even higher temperatures, and to find materials where a significant Dzyaloshinskii-Moriya interaction plays a role, so that spin-polarons with unique chirality can potentially be stabilized. It is foreseeable that, in light of the results shown here, spin-polaronics may enter an era with many new and exciting results in basic science and technology.
\begin{acknowledgments}
We acknowledge the financial support by the eSSENCE, the Swedish Research Council and the KAW foundation (projects 2012.0031 and 2013.0020). The computer simulations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Centre (NSC) and High Performance Computing Center North (HPC2N).
\end{acknowledgments}
\section {Appendix: Static properties and thermal stability of the spin-polaron}
Computing the exchange parameters for the system containing a localised spin-polaron is a computationally demanding procedure, due to the large size of the simulation cell.
Here, in order to model the La doping of $Ca_{1-x}La_xMnO_3$, we have used the Virtual Crystal Approximation (VCA). This approach naturally introduces free charge carriers into the valence band and is computationally efficient. In addition, it was shown for doped LaMnO$_3$ \cite{solovyev-LMO-VCA} that the VCA is able to reproduce the physics of the double-exchange mechanism in a relatively wide concentration range. We expect that it is particularly applicable in the low-doping regime, since the modification of the electronic structure is expected to be small.
The obtained results for the exchange interactions (Fig.~\ref{jijs}) show that the $t_{2g}-t_{2g}$ contribution is rather short-ranged and of AFM nature, which is typical for the super-exchange mechanism with Mn-O-Mn bonds that have angles close to 180 degrees. Moreover, this channel of the exchange interaction is independent of the La concentration. As Fig.~\ref{jijs} shows, there is a small $e_g-t_{2g}$ term, which appears due to the mutual tilting of the MnO$_6$ octahedra (for undistorted octahedra this channel would be zero by symmetry \cite{Cardias}). The $e_g-e_g$ contribution to the exchange integrals is zero for $x=0$, since these orbitals are empty in this case. However, upon doping these states become populated and start to participate in the magnetic interactions, predominantly with an FM character, due to their double-exchange nature. As Fig.~\ref{jijs} shows, the $J^{e_g-e_g}$ exchange is relatively long-ranged, and a close scrutiny of these results shows that the nearest-neighbor interaction is almost linearly proportional to the La doping.
Fig.~\ref{jijs} shows that $J_1$ and $J_4$ (defined in Fig.~\ref{jijs}) are the exchange parameters that are most strongly affected by La doping.
This agrees with previously reported results for La-doped CaMnO$_3$ ~\cite{solovyev-LMO-VCA}.
These are the interactions between the atoms belonging to the same Mn-O-Mn-chains. Along these directions, the lobes of the $e_g$ orbitals are oriented towards O-p states, that facilitates the electron hopping process and the double exchange mechanism.
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=1.00]{fig3.pdf}
\end{center}
\caption{(Color online) Computed exchange parameters between Mn atoms in La doped CaMnO$_3$ for different La concentrations, $x$, and as function of inter-atomic distance (in units of lattice constant a). The definition of $J_1$, $J_2$, $J_3$ and $J_4$ is illustrated in the top of the figure.}
\label{jijs}
\end{figure}
The calculated $J_{ij}$'s together with the magnetic moments (that we calculated to be 2.51 $\pm$ 0.05 $\mu_B$/Mn atom, for all La concentrations), allow us to parameterize the inter-atomic exchange interactions of the spin-polaron, via an effective spin-Hamiltonian. When combined with atomistic spin dynamic (ASD) simulations, this opens up for an investigation of the dynamics of these objects.
In order to model the magnetic interactions of the spin-polaron, one has to take into account that charge gets localized in space, and we made the following treatment to simulate this:
(i) First of all, we assume that the $e_g$ electron is localized within the body of the polaron and hence does not participate in long-range interactions that extend outside the spin-polaron. This implies that we considered only NN exchange couplings for the ASD simulations.
This approximation was tested against the ab-initio calculated distribution of the excess charge associated with the seven-site spin-polaron in bulk. We found that the electron associated with the spin-polaron is distributed essentially only over a central atom and its nearest neighbours, which justifies a short ranged double exchange within the spin-polaron only.
(ii) Second, we approximate the distribution of the excess $e_g$ electron to be homogeneous over the central site and its neighbours in the magnetic polaron.
In this way, the interactions in the inner part of a spin-polaron with $N_s$ sites
are described by the set of $J_{ij}$'s from Fig.~\ref{jijs} corresponding to $x=1/N_s$. On the boundary of the spin-polaron, the exchange coupling occurs between ions with $t_{2g}^3$ and $t_{2g}^3e_g^{1/N_s}$ configurations.
In this case, the $e_g-t_{2g}$ contribution can be approximated as $J^{e_g-t_{2g}}_1(x=1/(2N_s))$, due to the linear dependence of this coupling on $x$.
Summarizing, we arrived at an interpolation scheme of exchange interactions of spin-polarons of any size, defined from the calculated nearest neighbor parameters shown in Fig.~\ref{jijs}, in the following way:
\begin{gather}
J_{pp} = J_1(x) \\
J_{pb} = J^{t_{2g}-t_{2g}}_1 (x/2) + J^{e_g-t_{2g}}_1(x/2) \\
J_{bb} = J_1(x=0).
\end{gather}
We have used the relationship $x={1 \over {N_s}}$, since one electron is associated with any $N_s$-site polaron.
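The interpolation scheme above can be written down compactly. In the sketch below, the channel-resolved model `j1_model` is a crude placeholder for the computed curves of Fig.~3 (its numbers and sign conventions are illustrative only, not fitted values):

```python
def polaron_exchanges(n_sites, j1):
    """Couplings for an N_s-site polaron with x = 1/N_s, following the
    interpolation scheme in the text: J_pp = J_1(x),
    J_pb = J_1^{t2g-t2g}(x/2) + J_1^{eg-t2g}(x/2), J_bb = J_1(x=0).
    `j1` maps a doping x to a dict of channel-resolved NN couplings (meV)."""
    x = 1.0 / n_sites
    full = lambda xx: sum(j1(xx).values())
    channels = j1(x / 2.0)
    j_pp = full(x)
    j_pb = channels["t2g-t2g"] + channels["eg-t2g"]  # eg-eg excluded at the boundary
    j_bb = full(0.0)
    return j_pp, j_pb, j_bb

def j1_model(x):
    """Placeholder linear-in-x channel model (illustrative numbers only)."""
    return {"t2g-t2g": -5.35,        # doping-independent AFM superexchange
            "eg-t2g": -2.0 * x,      # small tilting-induced term, ~linear in x
            "eg-eg": 60.0 * x}       # FM double exchange, ~linear in x

j_pp, j_pb, j_bb = polaron_exchanges(5, j1_model)  # 5-site polaron, x = 0.2
```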
According to the formalism introduced above, the parameters listed in Table~\ref{Tb1} have been used for the dynamics of the spin-polaron.
\begin{table}[thb]
\caption{Parameters used for the ASD simulations for $N_s$-site polarons. $M_p$ refers to the values of the Mn magnetic moments belonging to the polaron. $M_b$, corresponding to the background spins, was calculated to be 2.45 $\mu_B$. The coupling $J_{bb}$ was calculated to be 5.35 meV. The exchange parameters $J_{pb}$ and $J_{pp}$ are defined in the text.}
\centering
\begin{tabular}{cccc}
\hline
$N_s$ & $M_p$ ($\mu_B$) & $J_{pp}$ (meV) & $J_{pb}$ (meV) \\
\hline
5 & 2.56 & 9.40 & -6.77 \\
7 & 2.53 & 6.61 & -6.54 \\
8 & 2.52 & 4.99 & -6.31 \\
11 & 2.51 & 3.36 & -6.09 \\
13 & 2.50 & 1.99 & -5.85 \\
\hline
\hline
\end{tabular}
\label{Tb1}
\end{table}
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.55] {fig5.pdf}
\end{center}
\caption{\label{structure6}(Color online) Top panel. Crossed points indicate experimentally measured magnetisations in the bulk sample, for lanthanum doping levels of x=0.01-0.03 \cite{Cortes}. Note that experimental magnetisation curves are normalised to 1 at 0 K. Bottom panel. Computed average magnetisation for a variety of spin-polaron configurations in 2D and 3D lattices (schematically shown to the right), using an effective spin-Hamiltonian and Monte Carlo simulations (see text). Results for the 3D case are shown with dashed lines while the results of the 2D case are shown as solid lines. Note that different sizes of polarons are investigated. The experimental ordering temperature of bulk $CaMnO_3$ ($\sim$125 K) is indicated by a vertical dashed line.}
\label{fig4}
\end{figure}
Next we used the extracted exchange parameters to study the stability of the magnetic polarons, with respect to temperature fluctuations. The investigation is done both for a 2D and 3D lattice, making use of an extended Heisenberg Hamiltonian
\begin{eqnarray}
\label{HH}
\mathcal H = -{1 \over 2}\sum_{i,j} J_{ij} \hat{m}_i \cdot \hat{m}_j-K_\text{ani}\sum_{i}\left( \mathbf{e}_i\cdot \mathbf{e}_\textrm{K}\right)^2 ,
\end{eqnarray}
with the exchange parameters listed in Table
\ref{Tb1}. Here $\hat{m}_i$ denotes the direction of the Mn-projected atomic magnetic moment at site $i$. Furthermore, $K_\text{ani}$ is the parameter characterizing the magnetocrystalline anisotropy, while $\mathbf{e}_i$ and $\mathbf{e}_\textrm{K}$ are the direction of the moment of atom $i$ and the easy-axis direction, respectively. The magnitude of $K_\text{ani}$ is set to $0.01\text{ mRy}$, and it is used to have a well-defined quantization axis, mostly for visualization purposes. In general, the presented results are not strongly dependent on the value of the anisotropy. In these calculations only the thermal stability of the magnetic sublattice is of interest, something we investigated by means of Monte Carlo simulations of the spin system, using the Metropolis-Hastings algorithm, where the polarons were considered to be fixed at a given crystal site and were not allowed to move from site to site. We considered magnetic polarons of varying size, in the range of 5-13 sites (7-13 sites) for the 2D (3D) case, with the polaron geometries outlined on the right-hand side of Fig.\ref{fig4}. To improve the statistics, one hundred realizations with different random-number seeds of the Monte Carlo simulations were performed for each spin-polaron configuration.
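The Metropolis-Hastings sampling described above can be sketched as a minimal single-spin-flip Monte Carlo for a nearest-neighbor Heisenberg model with easy-axis anisotropy. Lattice size, coupling and temperature below are illustrative, not the parameters of Table I:

```python
import math
import random

def random_unit_vector(rng):
    """Uniform direction on the unit sphere (Marsaglia's method)."""
    while True:
        a, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
        s = a * a + b * b
        if s < 1.0:
            break
    h = 2.0 * math.sqrt(1.0 - s)
    return (a * h, b * h, 1.0 - 2.0 * s)

def local_energy(spins, i, j, J, k_ani, L):
    """Energy of site (i, j) on an L x L periodic lattice:
    -J sum_NN m_i . m_j minus an easy-axis (z) anisotropy term."""
    m = spins[i][j]
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        n = spins[(i + di) % L][(j + dj) % L]
        e -= J * (m[0] * n[0] + m[1] * n[1] + m[2] * n[2])
    e -= k_ani * m[2] * m[2]
    return e

def metropolis_sweep(spins, J, k_ani, kT, L, rng):
    """One Metropolis sweep: propose a fresh direction per attempted site
    and accept with probability min(1, exp(-dE / kT))."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        old = spins[i][j]
        e_old = local_energy(spins, i, j, J, k_ani, L)
        spins[i][j] = random_unit_vector(rng)
        e_new = local_energy(spins, i, j, J, k_ani, L)
        if e_new > e_old and rng.random() >= math.exp(-(e_new - e_old) / kT):
            spins[i][j] = old  # reject the move
```

A ferromagnetic coupling at low temperature keeps an initially aligned lattice close to saturation, mimicking the stability of the polaron region.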
The average magnetisation of the polaron region, $\langle \mathbf{M}_\text{SP}\rangle$, is defined as the magnetization averaged over the sites of the polaron region, $SP$, and over the different realizations:
\begin{equation}
\langle \mathbf{M}_\text{SP}\rangle=\frac{1}{N_\text{ens}}\sum_{e=1}^{N_\text{ens}}\frac{1}{N_\text{max}}\sum_{n\in SP}\mathbf{m}^n
\label{eq:pol_stab}
\end{equation}
For the treatment of 2D systems, a $100\times100$ square lattice was used as the basis for the simulations, with periodic boundary conditions in the in-plane directions (for 3D systems we considered $20\times20\times20$ supercells, with periodic boundary conditions in all directions).
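The double average defining $\langle \mathbf{M}_\text{SP}\rangle$ above can be sketched directly (a sketch, assuming the moments are supplied as 3-vectors per realization):

```python
def polaron_magnetisation(realizations):
    """Ensemble- and site-averaged polaron magnetisation vector.
    `realizations` is a list with one entry per Monte Carlo realization;
    each entry lists the 3-vector moments of the polaron (SP) sites."""
    n_ens = len(realizations)
    acc = [0.0, 0.0, 0.0]
    for sites in realizations:
        n_max = len(sites)           # number of polaron sites
        for m in sites:
            for c in range(3):
                acc[c] += m[c] / n_max
    return [c / n_ens for c in acc]

# Two toy realizations: one fully aligned, one with a flipped moment
m_sp = polaron_magnetisation([[(0, 0, 1), (0, 0, 1)],
                              [(0, 0, -1), (0, 0, 1)]])
```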
In Fig.\ref{fig4} we show the average magnetisation for different spin-polaron configurations. It is interesting to first note that calculations performed for impurity-free $CaMnO_3$ indicate a critical temperature of $\sim$ 135 K, in excellent agreement with the experimental data (Fig.\ref{fig4}, top panel). This establishes the accuracy of the exchange parameters and of the spin model used to investigate temperature effects. The values of the magnetisation of the spin-polaron region are seen to be non-zero well above the ordering temperature for all choices of spin-polaron geometry. This demonstrates the robustness of the spin-polaron with respect to thermal fluctuations. It is, however, not a reflection of exchange interactions being stronger in this region, but rather a consequence of short-range ordering that survives in most magnets well above the bulk ordering temperature.
It can also be seen in Fig.\ref{fig4} that for the 2D case, $\langle M_{SP} \rangle$ decreases more quickly as a function of temperature (Fig.\ref{fig4}, bottom panel) than the magnetisation curve of bulk $CaMnO_3$, as expected for a system with reduced coordination number. We also notice that for the 2D case, the temperature dependence of the spin-polaron is almost independent of its size.
As for the 3D magnetisation (see Fig.\ref{fig4}, bottom panel, dashed lines), we observe trends similar to those in the 2D lattice. However, one may notice that the magnetization for all magnetic polaron configurations saturates at higher temperatures than in the 2D case, as expected. We also note that for the 3D case, the size dependence of the behaviour of $\langle M_{SP} \rangle$ with respect to temperature is weak. Interestingly, for the 3D polarons, $\langle M_{SP} \rangle$ remains finite well above the ordering temperature of undoped $CaMnO_3$ ($\sim$135 K), as is shown in Fig.\ref{fig4}.
The result agrees with experimental results reporting stability of the polaronic centres in the paramagnetic phase beyond the Neel temperature \cite{Chiorescu}.
The results presented above indicate the importance of the coordination number for spin-polaron stability. As already mentioned in the introduction, previous theoretical studies on spin-polaron formation in lanthanum-doped $CaMnO_3$ found that charge localisation occurs preferably in the double-exchange-active (101) plane \cite{Bondarenko3}. However, spin-polarons with a higher coordination number (in the bulk as opposed to at surfaces, for example) seem to remain stable up to higher temperatures.
\section*{Acknowledgments}
I am grateful to Daniel Hsu for suggesting this research direction to me, and many insightful discussions along this line. I would also like to thank Pranjal Awasthi, Jie Shen and Hongyang Zhang for helpful initial conversations about the results in this paper. I thank the anonymous COLT reviewers for their thoughtful comments. Special thanks to Yue Liu, who provided unconditional support throughout this research project.
\section{Algorithm}
We present our main algorithm, namely Algorithm~\ref{alg:ae_al} in this section. We defer the exact settings of constants $c_1, c_2, c_3$ to Appendix~\ref{sec:params}.
Our algorithm uses the margin-based active learning framework, initially proposed by~\cite{BBZ07}.
Specifically, it proceeds in epochs, where at each epoch $k$, it draws a sample $S_k$ from distribution $D_X|_{B_k}$, queries their labels, and updates its iterate $w_k$ based on $S_k$. Due to technical reasons, at the first epoch ($k=0$), the sampling region $B_0$ and the constraint set $W_0$ are different from those in subsequent epochs. Throughout the process, the algorithm maintains the invariant that at each epoch $k$, $w_k$ is a $t$-sparse unit vector.
At each epoch $k \geq 1$, the sampling region $B_k$ is a ``small-margin'' band $\{x: |w_{k-1} \cdot x| \leq b_k \}$, with bandwidth $b_k$ decreasing exponentially in $k$.
Then it performs constrained empirical hinge loss minimization over $S_k$, getting a linear classifier $w_k'$.
The constraint set $W_k$ is the intersection of an $\ell_1$ ball and an $\ell_2$ ball, both centered at $w_{k-1}$, with different radii ($\rho_k$ and $r_k$, respectively). This is similar to the approach in~\cite{PV13b} for tackling the symmetric noise setting, where a linear optimization problem with a similarly shaped constraint set is proposed. The construction of the $W_k$'s is inspired by version space constructions in the PAC active learning literature~\citep{CAL94,BBL09,H14}.
Throughout the algorithm, we ensure that $W_k$ satisfies the following two properties with high probability: first, $u$ lies in all the $W_k$'s; second, the $W_k$'s are shrinking in size.\footnote{We refer the reader to Lemma~\ref{lem:induct} for a formal statement.}
In addition, the hinge loss used at epoch $k$ is parameterized by $\tau_k$, which also decreases exponentially in $k$.
Observe that $w_k'$ may not be a sparse vector; therefore, we perform a hard thresholding step (applying $\HT_t$) to ensure that our learned halfspace at the end of epoch $k$ is $t$-sparse. Hard thresholding has been widely used in the (unquantized) compressed sensing literature~\citep[See e.g.][]{BD09,GK09}; however, its utility in one-bit compressed sensing is not yet well understood. For example, \cite{JLBB13} proposes an algorithm named BIHT (binary iterative hard thresholding) that has strong empirical performance, but its convergence properties are unknown.
To the best of our knowledge, our work is the first that establishes convergence guarantees for iterative hard thresholding style algorithms for one-bit compressed sensing. We then perform an $\ell_2$ normalization step to ensure that our iterate $w_k$ is a unit vector, whose scale is thus comparable to that of $u$.
Finally, we remark that Algorithm~\ref{alg:ae_al} admits a computationally efficient implementation. First, the sampling regions $B_k$'s can be shown to have probability masses at least $\Omega(\epsilon)$ in $D_X$ for all $k$ in $\{0,1,\ldots,k_0\}$, which makes rejection sampling from $D_X|_{B_k}$ take $O(\frac 1 \epsilon)$ time per example.
Second, optimization problem~\eqref{eqn:opt} is convex, and can be approximately solved by e.g. stochastic gradient descent~\citep[See e.g.][Theorem 2]{SZ13} efficiently.
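The first point, rejection sampling from $D_X|_{B_k}$, can be made concrete. In this sketch a standard Gaussian stands in for the isotropic log-concave marginal $D_X$ (an illustrative assumption), and draws are kept only when they fall in the band:

```python
import random

def sample_from_band(draw_x, w_prev, b_k, n):
    """Rejection sampling from D_X restricted to the band
    {x : |w_prev . x| <= b_k}: draw from D_X via `draw_x` and keep
    points whose margin with respect to w_prev is at most b_k."""
    out = []
    while len(out) < n:
        x = draw_x()
        if abs(sum(wi * xi for wi, xi in zip(w_prev, x))) <= b_k:
            out.append(x)
    return out

# Illustrative: D_X = standard Gaussian in R^3, band around w = e_1
rng = random.Random(1)
draw = lambda: [rng.gauss(0.0, 1.0) for _ in range(3)]
band_sample = sample_from_band(draw, [1.0, 0.0, 0.0], 0.5, 20)
```

Since the band has probability mass $\Omega(\epsilon)$ under $D_X$, the expected number of raw draws per accepted example is $O(\frac{1}{\epsilon})$, matching the cost claimed above.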
\begin{algorithm}[t]
\caption{Attribute and computationally efficient active learning of halfspaces}
\begin{algorithmic}[1]
\REQUIRE sparsity parameter $t$, target error $\epsilon$, failure probability $\delta$.
\ENSURE learned halfspace $\hat{w}$.
\STATE Initialization: $k_0 \gets \lceil \log_2 \frac {1} {C_1 \epsilon} \rceil$, where $C_1$ is defined in Equation~\eqref{eqn:angdis}.
\FOR{$k = 0, 1, 2 \ldots,k_0$}
\STATE $S_k \gets $ sample $n_k = c_1 t (\ln d + \ln \frac{1}{\epsilon} + \ln\frac1{\delta_k})^3$ examples from $D_X|_{B_k}$ and query their labels, where
\[ B_k := \begin{cases} \mathbb{R}^d, & k = 0, \\ \{x: |w_{k-1} \cdot x| \leq b_k \}, & k \geq 1, \end{cases}\]
$\delta_k = \frac{\delta}{(k+1)(k+2)}$ and $b_k = c_2 \cdot 2^{-k}$.
\STATE Solve the following optimization problem:
\begin{equation}
w_k' \gets \argmin_{w \in W_k} \sum_{(x,y) \in S_k} \ell_{\tau_k}(w, (x, y)),
\label{eqn:opt}
\end{equation}
where
\[ W_k = \begin{cases} \{ w \in \mathbb{R}^d: \| w \|_2 \leq 1 \text{ and } \| w \|_1 \leq \sqrt{t} \}, & k = 0, \\ \{ w \in \mathbb{R}^d: \| w - w_{k-1} \|_2 \leq r_k \text{ and } \| w - w_{k-1} \|_1 \leq \rho_k \}, & k \geq 1, \end{cases}
\]
$r_k = 2^{-k-3}$, $\rho_k = \sqrt{2t} \cdot 2^{-k-3}$, and $\tau_k = c_3 \cdot 2^{-k}$.
\label{line:hlm}
\STATE Let $w_k \gets \frac{\HT_t(w_k')}{\| \HT_t(w_k') \|_2}$.
\label{line:ht}
\ENDFOR
\RETURN $w_{k_0}$.
\end{algorithmic}
\label{alg:ae_al}
\end{algorithm}
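The per-epoch step $w_k \gets \HT_t(w_k') / \| \HT_t(w_k') \|_2$ is simple enough to sketch directly; $\HT_t$ keeps the $t$ largest-magnitude coordinates and zeroes the rest (a minimal sketch, not the paper's implementation):

```python
import math

def hard_threshold(w, t):
    """HT_t: keep the t largest-magnitude coordinates of w, zero the rest."""
    if t >= len(w):
        return list(w)
    keep = set(sorted(range(len(w)), key=lambda i: abs(w[i]), reverse=True)[:t])
    return [wi if i in keep else 0.0 for i, wi in enumerate(w)]

def threshold_and_normalize(w, t):
    """The per-epoch update: hard threshold, then project to the unit sphere."""
    v = hard_threshold(w, t)
    norm = math.sqrt(sum(vi * vi for vi in v))
    return [vi / norm for vi in v]

# Example: a dense iterate reduced to a 2-sparse unit vector
w_k = threshold_and_normalize([0.9, -0.1, 0.05, 0.4], 2)
```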
\section{Detailed choices of learning and problem parameters}
\label{sec:params}
In this section, we give the exact settings of $c_1, c_2, c_3$ that appear in Algorithm~\ref{alg:ae_al}, and of $\mu_1, \mu_2$, the noise rates
that can be tolerated by Algorithm~\ref{alg:ae_al} under the two noise conditions.
Define $D_k$ as
the distribution $D$ over $(x,y)$ conditioned on $x$ lying in $B_k$.
Although $\tilde{D}$ cannot be sampled from directly, for analysis purposes we define it as the joint distribution of $(x, \sign(u \cdot x))$, and
$\tilde{D}_k$ as the distribution $\tilde{D}$ conditioned on $x$ lying in $B_k$.
Let $\lambda > 0$ be a constant, which will be specified at the end of this section.
Given $\lambda$, we define $c_2:=c_2(\lambda)$ such that:
\begin{enumerate}
\item $c_2(\lambda) = O(\ln \frac{1}{\lambda})$,
\item For all $w$ such that $\theta(w,w_{k-1}) \leq 2^{-k-3} \pi$,
\begin{equation}
\mathbb{P}_D( \sign(w \cdot x) \neq \sign(w_{k-1} \cdot x), |w_{k-1} \cdot x| \geq c_2(\lambda) \cdot 2^{-k} ) \leq \lambda \cdot 2^{-k}.
\label{eqn:margin}
\end{equation}
\end{enumerate}
The existence of such function $c_2(\cdot)$ is guaranteed by Theorem 21 of \cite{BL13}, along with the fact that $D_X$ is isotropic log-concave.
In addition, given $\lambda > 0$, define $c_3(\lambda) := \lambda \min(C_3 /81, C_3 c_2(\lambda)/9)$ (where $C_3 $ is a numerical constant defined in Lemma~\ref{lem:bandmass}), such that $\tau_k = c_3 2^{-k}$. Under this setting of $\tau_k$, we have that for all $k$ in $\{0,1,\ldots,k_0\}$:
\begin{equation}
\mathbb{E}_{D_k} \ell_{\tau_k}(u, x, y) \leq \mathbb{P}_{D_k}(|u \cdot x| \leq \tau_k) \leq \frac{\mathbb{P}_{D_X}(| u \cdot x | \leq \tau_k)}{\mathbb{P}_{D_X}( x \in B_k )} \leq \frac{9 \tau_k}{\min(C_3 /9, C_3 b_k)} \leq \lambda.
\label{eqn:lowerr}
\end{equation}
where the first inequality is from that $\ell_{\tau_k}(u, (x, \sign(u \cdot x))) \in [0,1]$, and $\ell_{\tau_k}(u, (x, \sign(u \cdot x))) = 0$ if $|u \cdot x| \geq \tau_k$;
the second inequality uses the fact that $\mathbb{P}(A|B) \leq \frac{\mathbb{P}(A)}{\mathbb{P}(B)}$ for any two events $A,B$; the third inequality uses Lemma~\ref{lem:bandmass} to upper bound (resp. lower bound) the numerator (resp. the denominator).
Recall that $n_k := c_1 t (\ln d + \ln \frac 1 \epsilon + \ln \frac{1}{\delta_k})^3$. Given $\lambda > 0$ and $c_2(\lambda)$, $c_3(\lambda)$, we set $c_1:=c_1(\lambda)$ such that by Lemmas~\ref{lem:conc}, for all $k$ in $\{0,1,\ldots,k_0\}$, for all $w$ in $W_k$,
\begin{equation}
|\mathbb{E}_{S_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y))| \leq \lambda.
\label{eqn:conc}
\end{equation}
Given $\lambda$ and $c_2(\lambda)$, $c_3(\lambda)$, we also choose $\mu_1 = \mu_1(\lambda), \mu_2 = \mu_2(\lambda) \in (0,\frac 1 2)$ such that under the respective noise condition, for all $k$ in $\{0,1,\ldots,k_0\}$, for all $w$ in $W_k$,
\begin{equation}
| \mathbb{E}_{\tilde{D}_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y))| \leq \lambda.
\label{eqn:corrupt}
\end{equation}
The existences of $\mu_1(\lambda)$ and $\mu_2(\lambda)$ are guaranteed in light of Lemmas~\ref{lem:tv-an} and~\ref{lem:tv-bn}.
Define $f(\lambda') = C_2(45 c_2(\lambda') \lambda' + 5\lambda')$. Observe that by the definition of $c_2(\cdot)$, $f(\lambda')$ goes to zero as $\lambda'$ goes down to zero. Therefore, we can select a value of $\lambda > 0$, such that $f(\lambda) \leq 2^{-8} \pi$.
Note that our selection of $\lambda$ also determines the value of $c_1$, $c_2$, $c_3$ and $\mu_1$, $\mu_2$.
\section{Learning guarantee at each epoch}
\label{sec:epoch}
In this section, we prove two key lemmas, namely
Lemmas~\ref{lem:hlm} and~\ref{lem:truncate}, both of which serve as the basis for Lemma~\ref{lem:induct}.
\subsection{Proof of Lemma~\ref{lem:hlm}}
The proof of Lemma~\ref{lem:hlm} is based on
a uniform concentration bound on the $\tau_k$-hinge loss over $W_k$ in the sampling region $B_k$, namely Lemma~\ref{lem:conc}.
Specifically, Lemma~\ref{lem:conc} implies that the
difference between the empirical hinge losses and the expected hinge losses for all $w$ in $W_k$ with respect to $D_k$ is uniformly bounded by $\tilde{O}(\sqrt{\frac{t (\ln d + \ln \frac 1 \epsilon)^3}{n_k}})$.
As will be seen in the analysis, only a constant concentration error $\lambda$ is required in the hinge loss minimization step (see Equation~\eqref{eqn:conc}). Therefore, the setting of $n_k = O(t (\ln d + \ln \frac 1 \epsilon)^3)$ fulfills this requirement.
\begin{proof}[Proof of Lemma~\ref{lem:hlm}]
We consider the cases of $k = 0$ and $k \geq 1$ separately.
\paragraph{Case 1: $k = 0$.} By Lemma~\ref{lem:hlm-grt} below and the fact that $D = D_0$, $\mathbb{P}_D(\sign(w_0' \cdot x) \neq \sign(u \cdot x)) \leq 5\lambda$ holds.
In addition, by the second inequality of Equation~\eqref{eqn:angdis},
we have that $\theta(w_0',u) \leq 5 C_2 \lambda$.
By the definition of $\lambda$, this is at most $2^{-8} \pi$.
\paragraph{Case 2: $k \geq 1$.} By Lemma~\ref{lem:hlm-grt} below, $\mathbb{P}_{D_k}(\sign(w_k' \cdot x) \neq \sign(u \cdot x)) \leq 5\lambda$ holds. We now show that the above fact implies that the angle between $w_k'$ and $u$ is
at most $2^{-k-8}\pi$. This implication is well known in the margin-based
active learning literature~\citep{BBZ07, BL13}; we provide the proof here for completeness.
By Lemma~\ref{lem:bandmass}, $\mathbb{P}_{D_k}(\sign(w_k' \cdot x) \neq \sign(u \cdot x)) \leq 5\lambda$ implies that
\begin{eqnarray}
&&\mathbb{P}_{D}(\sign(w_k' \cdot x) \neq \sign(u \cdot x), x \in B_k) \nonumber \\
&=& \mathbb{P}_{D_k}(\sign(w_k' \cdot x) \neq \sign(u \cdot x)) \cdot \mathbb{P}_{D_X}(x \in B_k) \leq 5 \lambda \cdot 9 c_2(\lambda) 2^{-k} = 45 \lambda c_2(\lambda) 2^{-k}.
\label{eqn:inband}
\end{eqnarray}
On the other hand, observe that for all $w$ in $W_k$, $\| w - w_{k-1} \|_2 \leq 2^{-k-3}$.
Using Lemma~\ref{lem:distangle} and the fact that $w_{k-1}$ is a unit vector, we get that for all $w$ in $W_k$,
$ \theta(w,w_{k-1}) \leq 2^{-k-3}\pi. $
In particular, by Equation~\eqref{eqn:margin}, we have that
\[ \mathbb{P}_D(\sign(w \cdot x) \neq \sign(w_{k-1} \cdot x), x \notin B_k) \leq \lambda 2^{-k} \]
holds for each $w \in \{u, w_k'\} \subset W_k$. Therefore, by the triangle inequality,
\begin{eqnarray}
&& \mathbb{P}_D(\sign(w_k' \cdot x) \neq \sign(u \cdot x), x \notin B_k) \nonumber \\
&\leq& \mathbb{P}_D(\sign(w_k' \cdot x) \neq \sign(w_{k-1} \cdot x), x \notin B_k) + \mathbb{P}_D(\sign(u \cdot x) \neq \sign(w_{k-1} \cdot x), x \notin B_k) \nonumber \\
&\leq& 2 \lambda 2^{-k}. \label{eqn:outband}
\end{eqnarray}
Combining Equations~\eqref{eqn:inband} and~\eqref{eqn:outband}, we have that
\[ \mathbb{P}_D(\sign(w_k' \cdot x) \neq \sign(u \cdot x) ) \leq (45 c_2(\lambda) \lambda + 2 \lambda) 2^{-k}. \]
Applying the second inequality of Equation~\eqref{eqn:angdis} gives that
\[ \theta(w_k', u) \leq C_2 (45 c_2(\lambda) + 2) \lambda 2^{-k}. \]
By the definition of $\lambda$, the above is at most $2^{-k-8} \pi$.
Combining the above two cases, the lemma follows.
\end{proof}
\begin{lemma}
For every $k$ in $\{0,1,\ldots,k_0\}$, if $u$ is in $W_k$, then
\[ \mathbb{P}_{D_k}(\sign(w_k' \cdot x) \neq \sign(u \cdot x)) \leq 5\lambda. \]
\label{lem:hlm-grt}
\end{lemma}
\begin{proof}
If $u$ is in $W_k$, then we have the following chain of inequalities:
\begin{eqnarray*}
\mathbb{P}_{D_k}(\sign(w_k' \cdot x) \neq \sign(u \cdot x))
&=& \mathbb{P}_{\tilde{D}_k}(\sign(w_k' \cdot x) \neq y) \\
&\leq& \mathbb{E}_{\tilde{D}_k} \ell_{\tau_k}(w_k', (x, y)) \\
&\leq& \mathbb{E}_{D_k} \ell_{\tau_k}(w_k', (x, y)) + \lambda \\
&\leq& \mathbb{E}_{S_k} \ell_{\tau_k}(w_k', (x, y)) + 2\lambda \\
&\leq& \mathbb{E}_{S_k} \ell_{\tau_k}(u, (x, y)) + 2\lambda \\
&\leq& \mathbb{E}_{D_k} \ell_{\tau_k}(u, (x, y)) + 3\lambda \\
&\leq& \mathbb{E}_{\tilde{D}_k} \ell_{\tau_k}(u, (x, y)) + 4\lambda \\
&\leq& \lambda + 4\lambda = 5\lambda,
\end{eqnarray*}
where the equality holds because an example $(x,y)$ drawn from $\tilde{D}_k$ satisfies $y = \sign(u \cdot x)$ with probability 1; the first inequality is from the fact that the $\tau_k$-hinge loss is an upper bound on the 0-1 loss; the second inequality is from
Equation~\eqref{eqn:corrupt} and that $w_k' \in W_k$; the third inequality is from Equation~\eqref{eqn:conc} and that $w_k' \in W_k$;
the fourth inequality is by the optimality of $w_k'$ in optimization problem~\eqref{eqn:opt} and that $u \in W_k$; the fifth inequality is from Equation~\eqref{eqn:conc} and that $u \in W_k$;
the sixth inequality is from Equation~\eqref{eqn:corrupt} and that $u \in W_k$; the last inequality is from Equation~\eqref{eqn:lowerr}.
\end{proof}
The following lemma is used in the proof of Lemma~\ref{lem:hlm}; it establishes a connection between the angle and the $\ell_2$ distance of two vectors, when one of the vectors has unit $\ell_2$ norm.
\begin{lemma}
Suppose $v$ is a unit vector in $\mathbb{R}^d$ (that is, $\|v\|_2 = 1$). Then, for any vector $w$ in $\mathbb{R}^d$,
$\theta(w,v) \leq \pi \| w - v \|_2$.
\label{lem:distangle}
\end{lemma}
\begin{proof}
Denote by $\hat{w}$ the $\ell_2$ normalized version of $w$, i.e. $\hat{w} = \frac{w}{\|w\|_2}$.
Lemma~\ref{lem:normalize} below implies that
\begin{equation}
\| \hat{w} - v \|_2 \leq 2\| w - v \|_2.
\label{eqn:normalizedl2}
\end{equation}
Consequently,
\[ \theta(w, v) \leq \frac{\pi}{2} \cdot 2\sin\frac{\theta(w, v)}{2} = \frac{\pi}{2} \cdot 2\sin\frac{\theta(\hat{w}, v)}{2} = \frac{\pi}{2} \| \hat{w} - v \|_2 \leq \pi \| w - v \|_2,\]
where the first inequality is from the elementary inequality $\phi \leq \frac \pi 2 \sin \phi$ for $\phi \in [0,\frac \pi 2]$ (taking $\phi = \frac{\theta(w, v)}{2}$),
the two equalities use the fact that $\theta(w, v) = \theta(\hat{w}, v)$ and the identity $\| \hat{w} - v\|_2 = 2\sin\frac{\theta(\hat{w}, v)}2$ (as both $\hat{w}$ and $v$ are unit vectors), and the last inequality is
from Equation~\eqref{eqn:normalizedl2}.
\end{proof}
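As a quick numerical sanity check of Lemma~\ref{lem:distangle} (illustrative only, not part of the formal argument), the following Python snippet verifies $\theta(w,v) \leq \pi \| w - v \|_2$ on random instances; the helper \texttt{angle} is our own.

```python
import numpy as np

def angle(w, v):
    """theta(w, v): the angle between nonzero vectors w and v, in [0, pi]."""
    c = np.dot(w, v) / (np.linalg.norm(w) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

rng = np.random.default_rng(0)
for _ in range(1000):
    d = int(rng.integers(2, 20))
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)          # v is a unit vector, as the lemma requires
    w = rng.normal(size=d)          # w is an arbitrary (nonzero w.h.p.) vector
    assert angle(w, v) <= np.pi * np.linalg.norm(w - v) + 1e-9
```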
The following lemma is used in the proof of Lemma~\ref{lem:distangle}; it uses the fact that $\ell_2$ normalization is an $\ell_2$ projection onto the unit sphere.
\begin{lemma}
Suppose $v$ is a unit vector in $\mathbb{R}^d$ (that is, $\|v\|_2 = 1$). Then, for any vector $w$ in $\mathbb{R}^d$,
\[ \| \frac{w}{\|w\|_2} - v \|_2 \leq 2 \| w - v \|_2. \]
\label{lem:normalize}
\end{lemma}
\begin{proof}
Denote by $\hat{w}$ the $\ell_2$ normalized version of $w$, i.e. $\hat{w} = \frac{w}{\|w\|_2}$.
By the reverse triangle inequality,
\[ \| \hat{w} - w \|_2 = \| (\frac{1}{\|w\|_2}-1) w \|_2 = | \| w \|_2 - 1 | = | \| w \|_2 - \| v \|_2 | \leq \| w - v \|_2. \]
By the triangle inequality,
\[ \| \hat{w} - v \|_2 \leq \| \hat{w} - w \|_2 + \| w - v \|_2 \leq 2\| w - v \|_2. \]
The lemma follows.
\end{proof}
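A quick numerical spot check of Lemma~\ref{lem:normalize} (illustrative only; the helper \texttt{normalize} is ours):

```python
import numpy as np

def normalize(w):
    """Project a nonzero vector onto the unit sphere."""
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
for _ in range(1000):
    d = int(rng.integers(2, 20))
    v = normalize(rng.normal(size=d))   # unit vector v
    w = rng.normal(size=d)              # arbitrary (nonzero w.h.p.) vector w
    assert np.linalg.norm(normalize(w) - v) <= 2 * np.linalg.norm(w - v) + 1e-9
```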
\subsection{Proof of Lemma~\ref{lem:truncate}}
The proof of Lemma~\ref{lem:truncate} is based on the key insight that the hard thresholding operation $\HT_t$
is effectively a projection onto the $\ell_0$-ball $\{w \in \mathbb{R}^d: \| w \|_0 \leq t\}$; see Lemma~\ref{lem:ht}
for a formal description.
\begin{proof}[Proof of Lemma~\ref{lem:truncate}]
Denote by $\hat{w}_k'$ the $\ell_2$ normalized version of $w_k'$: $\hat{w}_k':=\frac{w_k'}{\| w_k' \|_2}$.
Under the condition that $\theta(w_k', u) \leq 2^{-k-8} \pi$, as $\hat{w}_k'$ and $u$ are both unit vectors, we have
\begin{equation*}
\| \hat{w}_k' - u \|_2 = 2 \sin \frac{\theta(w_k', u)}{2} \leq \theta(w_k', u) \leq 2^{-k-8} \pi \leq 2^{-k-6}.
\end{equation*}
Now, by Lemma~\ref{lem:ht} below, we have that
$\| \hat{w}_k' - \HT_t(\hat{w}_k') \|_2 \leq \| \hat{w}_k' - u \|_2 \leq 2^{-k-6}$. By the triangle inequality, we have that
\begin{equation*}
\| \HT_t(\hat{w}_k') - u \|_2 \leq \| \HT_t(\hat{w}_k') - \hat{w}_k' \|_2 + \| \hat{w}_k' - u \|_2 \leq 2^{-k-5}.
\end{equation*}
Observe that as
$w_k'$ and $\hat{w}_k'$ are equal up to scaling, $w_k := \frac{\HT_t(w_k')}{\| \HT_t(w_k') \|_2}$ is identically $\frac{\HT_t(\hat{w}_k')}{\| \HT_t(\hat{w}_k') \|_2}$.
Applying Lemma~\ref{lem:normalize} with $w = \HT_t(\hat{w}_k')$ and $v = u$, we get that
\[ \| w_k - u \|_2 \leq 2 \| \HT_t(\hat{w}_k') - u \|_2 \leq 2^{-k-4} = r_{k+1}. \]
In addition, as $w_k$ and $u$ are both $t$-sparse, $w_k - u$ is $2t$-sparse. Therefore, by Cauchy-Schwarz, $ \| w_k - u \|_1 \leq \sqrt{2t} \| w_k - u \|_2 \leq \sqrt{2t} r_{k+1} = \rho_{k+1}$.
Hence, $u$ is in the set $\{ w \in \mathbb{R}^d: \| w - w_k \|_2 \leq r_{k+1} \text{ and } \| w - w_k \|_1 \leq \rho_{k+1} \}$, namely $W_{k+1}$.
\end{proof}
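To make the normalize--threshold--renormalize update concrete, the following Python sketch (with our own illustrative helper \texttt{hard\_threshold}; not the paper's implementation) carries out one such update and checks that the result is $t$-sparse and unit-norm, and that, as observed above, applying $\HT_t$ before or after $\ell_2$ normalization yields the same direction.

```python
import numpy as np

def hard_threshold(w, t):
    """HT_t(w): zero out all but the t entries of w largest in magnitude."""
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-t:]
    out[keep] = w[keep]
    return out

rng = np.random.default_rng(1)
d, t = 100, 8
w_prime = rng.normal(size=d)                  # stand-in for w_k'
w_hat = w_prime / np.linalg.norm(w_prime)     # \hat{w}_k'
w_next = hard_threshold(w_hat, t)
w_next = w_next / np.linalg.norm(w_next)      # w_k: t-sparse, unit norm
assert np.count_nonzero(w_next) <= t
assert np.isclose(np.linalg.norm(w_next), 1.0)
# HT_t commutes with positive rescaling, so both orders give the same w_k.
alt = hard_threshold(w_prime, t)
assert np.allclose(alt / np.linalg.norm(alt), w_next)
```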
\begin{lemma}
Suppose $w$ is a vector in $\mathbb{R}^d$. Then, for any $t$-sparse vector $v$ in $\mathbb{R}^d$,
\[ \| \HT_t(w) - w \|_2 \leq \| v - w \|_2. \]
In other words, $\HT_t(w)$ is the best $t$-sparse approximation to $w$, measured in $\ell_2$ distance.
\label{lem:ht}
\end{lemma}
\begin{proof}
Denote by $w_{(1)}, w_{(2)}, \ldots, w_{(d)}$ the $d$ entries of $w$ in descending
order in magnitude. We have that
\[ \| \HT_t(w) - w \|_2^2 = \sum_{i=t+1}^d w_{(i)}^2. \]
On the other hand, for any $t$-sparse vector $v$, denote by $S$ its support ($|S| \leq t$). We have that
\[ \| v - w \|_2^2 \geq \sum_{i \in \{1,\ldots,d\} \setminus S} w_i^2 \geq \sum_{i=t+1}^d w_{(i)}^2, \]
where the second inequality is from the fact that the sum of squares of any $d-t$ entries of $w$ is at least the sum of squares of its $d-t$ entries smallest in magnitude.
The lemma follows.
\end{proof}
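Lemma~\ref{lem:ht} can be spot-checked numerically; the sketch below (helper names are our own, illustrative only) compares $\HT_t(w)$ against random $t$-sparse competitors in $\ell_2$ distance.

```python
import numpy as np

def hard_threshold(w, t):
    """HT_t(w): keep the t entries of w largest in magnitude, zero the rest."""
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-t:]
    out[keep] = w[keep]
    return out

rng = np.random.default_rng(0)
for _ in range(500):
    d, t = 30, 5
    w = rng.normal(size=d)
    ht = hard_threshold(w, t)
    # HT_t(w) should beat any t-sparse competitor v in l2 distance to w.
    v = np.zeros(d)
    support = rng.choice(d, size=t, replace=False)
    v[support] = rng.normal(size=t)
    assert np.linalg.norm(ht - w) <= np.linalg.norm(v - w) + 1e-12
```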
\section{The uniform concentration of hinge losses in label query regions}
\label{sec:conc}
In contrast to~\cite{ABHZ16} where the constraint set of the hinge loss minimization problem at epoch $k$ is the intersection of an $\ell_2$ ball of $O(2^{-k})$ radius and an $\ell_1$ ball of $O(\sqrt{t})$ radius,
Algorithm~\ref{alg:ae_al} defines the constraint set $W_k$ to be the intersection of an $\ell_2$ ball of $O(2^{-k})$ radius and an $\ell_1$ ball of $O(\sqrt{t} 2^{-k})$ radius. The following key lemma, namely Lemma~\ref{lem:conc}, shows the advantage of our construction of $W_k$.
Specifically, it establishes a sharp uniform concentration of hinge losses $\ell_{\tau_k}$ over $W_k$, with respect to sample $S_k$ drawn from distribution $D_k$. Observe that the concentration bound is $\tilde{O}(\sqrt{\frac{t (\ln d + \ln \frac 1 \epsilon)^3}{n_k}})$; if one were to use the constraint set in~\cite{ABHZ16},
one would get concentration bounds of order $\tilde{O}(\sqrt{\frac{ (t \ln d) \cdot 2^k}{n_k}})$, which has an exponential dependence on $k$.
\begin{lemma}
For any $c_2, c_3 > 0$, there exists a constant $C_6 > 0$ such that the following holds.
Given $k$ in $\{0, 1,\ldots,k_0\}$,
suppose $S_k$ is a sample of size $n_k$ drawn from distribution $D_k$. Then with probability
$1-\delta_k$, for all $w \in W_k$, we have:
\[
|\mathbb{E}_{S_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y))|
\leq C_6 \ln\frac{n_k d}{\epsilon \delta_k} \cdot \sqrt{\frac{t(\ln d + \ln \frac 2 {\delta_k})}{n_k}}.
\]
\label{lem:conc}
\end{lemma}
Before going into the proof of the lemma, let us define some notation. For every $k$ in $\{0,1,\ldots,k_0\}$, let $R_k := C_7 \ln(\frac{2n_k d}{\delta_k} \max(\frac9{C_3 }, \frac{1}{C_3 b_k}))$ for some large enough positive constant $C_7$ such that
$\mathbb{P}_{D_X} (\| x \|_\infty > R_k) \leq \min(C_3 /9, C_3 b_k) \delta_k / 2n_k$ holds.
The existence of such $C_7$ is guaranteed by Lemma 20 of~\citet{ABHZ16}.
In addition, define $T_k:= \{(x,y): \| x \|_\infty \leq R_k \}$.
The proof of Lemma~\ref{lem:conc} relies on the following observation: as the marginal distribution of $D_k$ over $\mathcal{X}$ has a light tail, the probability that $(x,y) \notin T_k$ is extremely small, therefore $D_k|_{T_k}$ is ``close'' to $D_k$.
The subsequent reasoning is composed of two parts: first, we show that $|\mathbb{E}_{S_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k |_{T_k}} \ell_{\tau_k}(w, (x, y))|$ is small (Lemma~\ref{lem:sktk}). To this end, we argue that $S_k$ is almost a sample iid from $D_k |_{T_k}$, and then carefully apply Rademacher complexity
bounds for $\ell_1$ bounded linear predictors on $\ell_\infty$ bounded examples~\citep{KST09}. Second, we show that $|\mathbb{E}_{D_k |_{T_k}} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) |$ is small for all $w$ in $W_k$ (Lemma~\ref{lem:tk}).
\begin{proof}
First we show that there is an event $E$ that has probability at least $1-\delta_k/2$,
conditioned on which all the unlabeled examples in $S_k$ have $\ell_\infty$ norms uniformly bounded by $R_k$.
Define:
\begin{equation}
E := \{ \text{ for all } (x,y) \text{ in } S_k, (x,y) \text{ is in } T_k \}.
\label{eqn:e}
\end{equation}
Observe that for each individual $(x,y)$ in $S_k$ drawn from $D_k$,
\[ \mathbb{P}_{D_k} ((x,y) \notin T_k) \leq \frac{\mathbb{P}_{D} ((x,y) \notin T_k)}{\mathbb{P}_{D_X}(x \in B_k)} \leq \frac{\min(C_3 /9, C_3 b_k) \delta_k / 2n_k}{\min(C_3 /9, C_3 b_k)} \leq \frac{\delta_k}{2n_k}, \]
therefore, by union bound,
$\mathbb{P}(E) \geq 1-\delta_k/2$.
By Lemma~\ref{lem:sktk}, there is an event $F$ such that
$\mathbb{P}[F|E] \geq 1-\delta_k/2$, and on event $F$,
\begin{eqnarray}
\left| \mathbb{E}_{S_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k |_{T_k}} \ell_{\tau_k}(w, (x, y)) \right|
&\leq& C_8 \cdot \ln\frac{n_k d}{\epsilon \delta_k} \cdot \sqrt{\frac{t(2\ln d + \ln \frac 2 {\delta_k})}{n_k}},
\label{eqn:sktk}
\end{eqnarray}
for some constant $C_8 $ defined in Lemma~\ref{lem:sktk}.
Note that $\mathbb{P}(E \cap F) \geq (1 - \delta_k/2)^2 \geq 1-\delta_k$. We henceforth condition on $E \cap F$ happening.
Using Lemma~\ref{lem:tk}, we get that for all $w$ in $W_k$,
\begin{equation}
|\mathbb{E}_{D_k|_{T_k}} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y))| \leq C_9 \sqrt{\frac{1}{n_k}},
\label{eqn:tk}
\end{equation}
for some constant $C_9 $ defined in Lemma~\ref{lem:tk}.
Combining Equations~\eqref{eqn:sktk} and~\eqref{eqn:tk}, we conclude that there is a constant $C_6 $ such that on event $E \cap F$,
\[
|\mathbb{E}_{S_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y))|
\leq C_6 \ln\frac{n_k d}{\epsilon \delta_k} \cdot \sqrt{\frac{t(\ln d + \ln \frac 2 {\delta_k})}{n_k}}.
\]
This proves the lemma.
\end{proof}
\begin{lemma}
For every $k$ in $\{0,1,\ldots,k_0\}$, suppose event $E$ is defined as in Equation~\eqref{eqn:e}. Then there is an event $F$ such that
$\mathbb{P}[F|E] \geq 1-\delta_k/2$, and on event $F$, for all $w$ in $W_k$,
\begin{eqnarray*}
\left| \mathbb{E}_{S_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k |_{T_k}} \ell_{\tau_k}(w, (x, y)) \right|
&\leq& C_8 \cdot \ln\frac{n_k d}{\epsilon \delta_k} \cdot \sqrt{\frac{t(2\ln d + \ln \frac 2 {\delta_k})}{n_k}},
\end{eqnarray*}
\label{lem:sktk}
for some constant $C_8 > 0$ that depends on $c_2$ and $c_3$.
\end{lemma}
\begin{proof}
Conditioned on event $E$, sample $S_k$ can be seen as drawn iid from $D_k|_{T_k}$.
We consider the cases of $k = 0$ and $k \geq 1$ separately.
\paragraph{Case 1: $k = 0$.} Using Corollary 4 of~\cite{KST09} with $\ell = \pm \ell_{\tau_0}$, $L_\ell = \frac{1}{\tau_0}$, $X = R_0$ and $W_1 = \sqrt{t}$ in the notations therein, we
get that there is an event $F$, such that $\mathbb{P}[F|E] \geq 1-\delta_0/2$, on which for all $w$ in $W_k$,
\begin{equation}
\left| \mathbb{E}_{S_0} \ell_{\tau_0}(w, (x, y)) - \mathbb{E}_{D |_{T_0}} \ell_{\tau_0}(w, (x, y)) \right| \leq \frac{C_7}{\tau_0} \ln(\frac{2n_0 d}{\delta_0} \max(\frac9{C_3}, \frac{1}{C_3 b_0})) \cdot \sqrt{\frac{32 t(\ln d + \ln \frac 4 {\delta_0})}{n_0}}. \nonumber
\end{equation}
\paragraph{Case 2: $k \geq 1$.} By Lemma~\ref{lem:rad} below, we have that there is an event $F$, such that $\mathbb{P}[F|E] \geq 1-\delta_k/2$, on which for some constant $C_{10} > 0$ and for all $w$ in $W_k$,
\begin{eqnarray}
&& \left| \mathbb{E}_{S_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k |_{T_k}} \ell_{\tau_k}(w, (x, y)) \right| \nonumber \\
&\leq& (1 + \frac{b_k}{\tau_k} + \frac{\rho_k R_k}{\tau_k}) \sqrt{\frac{2 \ln d + \ln \frac 2 {\delta_k}}{n_k}} \nonumber \\
&\leq& C_{10} \cdot \ln\frac{n_k d}{\epsilon \delta_k} \cdot \sqrt{\frac{t(2\ln d + \ln \frac 2 {\delta_k})}{n_k}}, \nonumber
\end{eqnarray}
where the second inequality is by observing that $\frac{b_k}{\tau_k} = \frac{c_2}{c_3}$ and $\frac{\rho_k}{\tau_k} = \frac{\sqrt{2t}}{8c_3}$ and recalling that $R_k = C_7 \ln(\frac{2n_k d}{\delta_k} \max(\frac9{C_3 }, \frac{1}{C_3 b_k}))$.
Combining the above two cases, we can find a large enough constant $C_8 >0$ such that the lemma statement holds.
\end{proof}
We next show Lemma~\ref{lem:rad}, a key concentration result used in the proof of Lemma~\ref{lem:sktk}.
\begin{lemma}
Given $k$ in $\{1,\ldots,k_0\}$, suppose $S_k$ is a set of $n_k$ iid samples drawn from $D_k |_{T_k}$. We have that with probability $1-\delta_k/2$,
for all $w$ in $W_k$,
\[
|\mathbb{E}_{S_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k |_{T_k}} \ell_{\tau_k}(w, (x, y))|
\leq
(1 + \frac{b_k}{\tau_k} + \frac{\rho_k R_k}{\tau_k}) \sqrt{\frac{2 \ln d + \ln \frac 2 {\delta_k}}{n_k}}.
\]
\label{lem:rad}
\end{lemma}
\begin{proof}
First, for all $w$ in $W_k$, $(x,y) \in T_k$, the instantaneous hinge loss $\ell_{\tau_k}(w, (x, y))$ is at most $1+\frac{|w \cdot x|}{\tau_k} \leq 1+\frac{|w_{k-1} \cdot x|}{\tau_k}+\frac{|(w-w_{k-1}) \cdot x|}{\tau_k} \leq 1 +\frac{b_k}{\tau_k} + \frac{\rho_k R_k}{\tau_k}$.
By standard symmetrization arguments (see Theorem 8 of \cite{BM02}), we have that with probability $1-\delta_k/2$, for all $w$ in $W_k$,
\begin{equation}
|\mathbb{E}_{S_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k |_{T_k}} \ell_{\tau_k}(w, (x, y))| \leq (1 + \frac{b_k}{\tau_k} + \frac{\rho_k R_k}{\tau_k})\sqrt{\frac{\ln \frac 2 {\delta_k}}{2 n_k}} + R_{n_k}(\mathcal{F}),
\label{eqn:hoeff}
\end{equation}
where $R_{n_k}(\cdot)$ denotes the Rademacher complexity over the examples in $S_k$,
$\mathcal{F}$ is the set of functions $\{(x,y) \mapsto (1- \frac{y w \cdot x}{\tau_k})_+: w \in W_k \}$.
Note that $\mathcal{F}$ can be written as the composition of $\phi(a):= (1- \frac{a}{\tau_k})_+$ and function class
$\mathcal{G} := \{(x,y) \mapsto y w \cdot x: w \in W_k \}$.
By the contraction inequality of Rademacher complexity (see Theorem 12 of \cite{BM02}) and the $\frac{1}{\tau_k}$-Lipschitzness of $\phi$,
$R_{n_k}(\mathcal{F})$ is at most $\frac{1}{\tau_k} R_{n_k}(\mathcal{G})$. We now focus on bounding $R_{n_k}(\mathcal{G})$. First,
denote by $(x_i, y_i)$, $i=1,\ldots,n_k$ the elements of $S_k$. By the definition of Rademacher complexity,
\[ R_{n_k}(\mathcal{G}) = \frac 1 {n_k} \mathbb{E}_\sigma \sup_{w \in W_k} \sum_{i=1}^{n_k} \sigma_i y_i w \cdot x_i, \]
where $\sigma = (\sigma_1, \ldots, \sigma_{n_k})$, $\sigma_i$'s are iid random variables that take values uniformly in $\{-1,+1\}$.
It can be easily seen that $\sigma$ has the same distribution as $(\sigma_1 y_1, \ldots, \sigma_{n_k} y_{n_k})$. Hence, $R_{n_k}(\mathcal{G})$ can be simplified to
\[ R_{n_k}(\mathcal{G}) = \frac 1 {n_k} \mathbb{E}_\sigma \sup_{w \in W_k} \sum_{i=1}^{n_k} \sigma_i w \cdot x_i. \]
We bound $R_{n_k}(\mathcal{G})$ as follows:
\begin{eqnarray*}
R_{n_k}(\mathcal{G}) &\leq& \frac 1 {n_k} \mathbb{E}_\sigma \sup_{w: \| w - w_{k-1} \|_1 \leq \rho_k} \sum_{i=1}^{n_k} \sigma_i w \cdot x_i \\
&=& \frac 1 {n_k} \mathbb{E}_\sigma \sup_{v: \| v \|_1 \leq \rho_k} \sum_{i=1}^{n_k} \sigma_i (w_{k-1} \cdot x_i + v \cdot x_i) \\
&=& \frac 1 {n_k} \mathbb{E}_\sigma \sup_{v: \| v \|_1 \leq \rho_k} \sum_{i=1}^{n_k} \sigma_i v \cdot x_i + \frac 1 {n_k} \mathbb{E}_\sigma \sum_{i=1}^{n_k} \sigma_i w_{k-1} \cdot x_i,
\end{eqnarray*}
where the inequality uses the fact that all $w$'s in $W_k$ satisfy that $\| w - w_{k-1} \|_1 \leq \rho_k$.
As all $x_i$'s have $\ell_\infty$ norm at most $R_k$, by Theorem 1, Example 2 of ~\cite{KST09}, the first term is bounded by
$\rho_k \cdot R_k \cdot \sqrt{\frac{2 \ln d}{n_k}}$. In addition, as all $(x_i, y_i)$'s are sampled from $D_k$, for all $i$, $|w_{k-1} \cdot x_i| \leq b_k$. Therefore, the second term can be bounded by:
\[ \frac 1 {n_k} \mathbb{E}_\sigma \sum_{i=1}^{n_k} \sigma_i w_{k-1} \cdot x_i \leq \frac 1 {n_k} \sqrt{\mathbb{E}_\sigma \left(\sum_{i=1}^{n_k} \sigma_i w_{k-1} \cdot x_i \right)^2} \leq b_k \sqrt{\frac 1 {n_k}}. \]
Summing the two bounds up, we have that $R_{n_k}(\mathcal{G}) \leq (b_k + \rho_{k} R_k)\sqrt{\frac{2 \ln d}{n_k}}$. Therefore,
\[ R_{n_k}(\mathcal{F}) \leq (\frac{b_k}{\tau_k} + \frac{\rho_{k}}{\tau_k} R_k)\sqrt{\frac{2 \ln d}{n_k}}. \]
Combining this inequality with Equation~\eqref{eqn:hoeff}, along with some algebraic calculations, we get the lemma as stated.
\end{proof}
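The bound on the first term rests on the dual-norm fact that $\sup_{\|v\|_1 \leq \rho} \sum_i \sigma_i v \cdot x_i = \rho \, \| \sum_i \sigma_i x_i \|_\infty$, i.e. the supremum over the $\ell_1$ ball is attained at a signed, scaled coordinate vector. A quick Python illustration of this fact (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, rho = 50, 10, 0.7
X = rng.normal(size=(n, d))                 # rows play the role of the x_i
sigma = rng.choice([-1.0, 1.0], size=n)     # Rademacher signs
s = sigma @ X                               # s = sum_i sigma_i x_i
# The linear objective v -> <v, s> over the l1 ball of radius rho is
# maximized at v* = rho * sign(s_j) * e_j, with j the index of largest |s_j|.
j = int(np.argmax(np.abs(s)))
v_star = np.zeros(d)
v_star[j] = rho * np.sign(s[j])
assert np.isclose(v_star @ s, rho * np.linalg.norm(s, np.inf))
```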
\begin{lemma}
For any $c_2, c_3 > 0$, there is a constant $C_9 > 0$ such that for all
$k$ in $\{0,1,\ldots,k_0\}$, $w$ in $W_k$,
\begin{equation*}
|\mathbb{E}_{D_k|_{T_k}} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y))| \leq C_9 \sqrt{\frac{1}{n_k}}.
\end{equation*}
\label{lem:tk}
\end{lemma}
\begin{proof}
We consider the cases of $k = 0$ and $k \geq 1$ separately.
\paragraph{Case 1: $k = 0$.} Observe that $\mathbb{P}_{D}((x,y) \notin T_0) \leq \frac{\delta_0}{2n_0} \leq \frac{1}{n_0}$, and $\mathbb{E}_{D}(w \cdot x)^2 \leq 1$ for $w$ in $W_0$ as $D$ is isotropic.
Using Lemma~\ref{lem:conditional}, this implies that
\begin{equation*}
\left|\mathbb{E}_{D |_{T_0}} \ell_{\tau_0}(w, (x, y)) - \mathbb{E}_{D} \ell_{\tau_0}(w, (x, y)) \right| \leq 6\sqrt{\frac{1}{n_0}\left(1 + \frac{1}{c_3^2}\right)}.
\end{equation*}
\paragraph{Case 2: $k \geq 1$.} Observe that by Lemma~\ref{lem:variance}, there is a constant $C_4 $ such that for all $w$ in $W_k \subset \{w \in \mathbb{R}^d: \| w - w_{k-1}\|_2 \leq r_k\}$,
$\mathbb{E}_{D_k}(w \cdot x)^2 \leq C_4 (b_k^2 + r_k^2)$. In addition, $\mathbb{P}_{D_k}((x,y) \notin T_k) \leq \frac 1 {n_k}$. Therefore, by Lemma~\ref{lem:conditional} and the definitions of $b_k$, $r_k$ and $\tau_k$, we have
\begin{equation*}
|\mathbb{E}_{D_k|_{T_k}} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y))| \leq 6 \sqrt{\frac{1}{n_k} \left(1 + \frac{C_4 (b_k^2 + r_k^2)}{\tau_k^2}\right)} = 6 \sqrt{\frac{1}{n_k} \left(1 + \frac{C_4}{c_3^2}(\frac 1 {64} + c_2^2)\right)}.
\end{equation*}
Combining the above two cases, we can find a large enough constant $C_9 >0$ such that the lemma statement holds.
\end{proof}
In the proof of Lemma~\ref{lem:tk}, we use the following lemma to bound the difference between $\mathbb{E}_{D_k |_{T_k}} \ell_{\tau_k}(w, (x, y))$ and
$\mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y))$ in terms of $T_k$'s probability mass in $D_k$ and $D_k$'s second moments.
\begin{lemma}
For $k$ in $\{0,1,\ldots,k_0\}$, if $\mathbb{P}_{D_k}((x,y) \notin T_k) \leq \frac {\delta_k} {2n_k}$, then the following inequality holds for all $w$ in $\mathbb{R}^d$:
\begin{eqnarray*}
&&\left|\mathbb{E}_{D_k |_{T_k}} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) \right|
\leq
6 \sqrt{\mathbb{P}_{D_k}((x,y) \notin T_k)} \cdot \sqrt{1 + \frac{\mathbb{E}_{D_k}(w \cdot x)^2}{\tau_k^2}}.
\end{eqnarray*}
\label{lem:conditional}
\end{lemma}
\begin{proof}
First, observe that
\begin{equation}
\mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) = \mathbb{E}_{D_k |_{T_k}} \ell_{\tau_k}(w, (x, y)) \mathbb{P}_{D_k}((x,y) \in T_k) + \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) I((x,y) \notin T_k).
\label{eqn:decomp}
\end{equation}
Therefore,
\begin{eqnarray}
&& \left|\mathbb{E}_{D_k |_{T_k}} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) \right| \nonumber \\
&=& \left|\frac{\mathbb{P}_{D_k}((x,y) \notin T_k)}{\mathbb{P}_{D_k}((x,y) \in T_k)} \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) - \frac{\mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) I((x,y) \notin T_k)}{\mathbb{P}_{D_k}((x,y) \in T_k)} \right| \nonumber \\
&\leq& 2 \mathbb{P}_{D_k}((x,y) \notin T_k) \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) + 2 \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) I((x,y) \notin T_k) \nonumber \\
&\leq& 2 \mathbb{P}_{D_k}((x,y) \notin T_k) \mathbb{E}_{D_k} (1 + \frac{|w \cdot x|}{\tau_k}) + 2 \mathbb{E}_{D_k} (1 + \frac{|w \cdot x|}{\tau_k}) I((x,y) \notin T_k) \nonumber \\
&\leq& 2 \mathbb{P}_{D_k}((x,y) \notin T_k) (1 + \sqrt{\frac{\mathbb{E}_{D_k}(w \cdot x)^2}{\tau_k^2}}) + 2 \sqrt{\mathbb{P}_{D_k}((x,y) \notin T_k) \mathbb{E}_{D_k}(1 + \frac{|w \cdot x|}{\tau_k})^2 } \nonumber\\
&\leq& 6 \sqrt{\mathbb{P}_{D_k}((x,y) \notin T_k)} \cdot \sqrt{1 + \frac{\mathbb{E}_{D_k}(w \cdot x)^2}{\tau_k^2}}, \nonumber
\end{eqnarray}
where the equality is from Equation~\eqref{eqn:decomp} and algebra; the first inequality uses $\mathbb{P}_{D_k}((x,y) \in T_k) \geq 1 - \frac{\delta_k}{2n_k} \geq \frac 1 2$ and the elementary inequality $|a+b| \leq |a|+|b|$;
the second inequality uses $\ell_{\tau_k}(w,(x,y)) \leq 1 + \frac{|w \cdot x|}{\tau_k}$;
the third inequality applies Cauchy-Schwarz to both terms, and the last inequality is from algebra (using the elementary inequalities $\sqrt{a} + \sqrt{b} \leq \sqrt{2(a+b)}$, $\mathbb{P}_{D_k}((x,y) \notin T_k) \leq 1$ and $(a+b)^2 \leq 2(a^2+b^2)$).
\end{proof}
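The decomposition in Equation~\eqref{eqn:decomp} is a generic identity about conditioning, and is easy to confirm empirically; the following Python sketch (with synthetic losses and a synthetic event $T$, names ours) checks it on a finite sample, where expectations become averages.

```python
import numpy as np

rng = np.random.default_rng(0)
# Empirical analogue of the decomposition
#   E[loss] = E[loss | T] * P(T) + E[loss * 1{not T}].
loss = rng.uniform(0.0, 2.0, size=10000)      # instantaneous losses
in_T = rng.uniform(size=10000) < 0.95         # indicator of the event T
lhs = loss.mean()
rhs = loss[in_T].mean() * in_T.mean() + (loss * ~in_T).mean()
assert np.isclose(lhs, rhs)
```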
\section{Auxiliary lemmas}
The lemmas in this section are known and used in previous works on efficient halfspace learning under isotropic log-concave distributions~\citep[See e.g.][]{ABL17, ABHZ16}; we collect them here for completeness.
The following lemma characterizes one-dimensional projections of isotropic log-concave distributions (which are in fact also isotropic log-concave).
\begin{lemma}[\citet{LV07}]
There exists a numerical constant $C_3 \in (0,1)$ such that the following holds.
Given
a unit vector $v$ and a positive real number $b$,
\[ \min(C_3 /9, C_3 b) \leq \mathbb{P}_{D_X}(|v \cdot x| \leq b ) \leq 9 b. \]
\label{lem:bandmass}
\end{lemma}
Suppose $w$ is a unit vector, and $B = \{x: |w \cdot x| \leq b \}$ is a band of width
$b > 0$ along the $w$ direction.
The following technical lemma bounds the second moments of
$D_X|_B$, along directions close to $w$.
\begin{lemma}[\citet{ABL17}]
Suppose $w, b, B$ are defined as above. Then there is a numerical constant $C_4 > 0$, such that for all $w' \in \{v: \| v - w \|_2 \leq r \}$, we have
\begin{equation*}
\mathbb{E}_{D_X|_B} (w' \cdot x)^2 \leq C_4 (r^2 + b^2).
\end{equation*}
\label{lem:variance}
\end{lemma}
Recall that $D$ (resp. $\tilde{D}$) is the joint distribution over $(x,y)$ (resp. $(x,\sign(u \cdot x))$). In addition, recall that $b_k = c_2 2^{-k}$, $\tau_k = c_3 2^{-k}$ and $B_k = \{ x: |w_{k-1} \cdot x| \leq b_k \}$.
The following lemma shows that under certain ``local low noise'' conditions on $D$, for every halfspace $w$ in $W_k$,
its expected $\tau_k$-hinge loss on $D_k$ is close to that on $\tilde{D}_k$. With the help of this result, in Lemmas~\ref{lem:tv-an} and~\ref{lem:tv-bn}, we will show that under the $t$-sparse $\mu_1 \epsilon$-adversarial noise condition and $t$-sparse $\mu_2$-bounded noise condition for sufficiently small $\mu_1$ and $\mu_2$,
the hinge loss of $w$ on $D_k$ is at most a constant away from the hinge loss of $w$ on $\tilde{D}_k$,
for all $w$ in $W_k$.
\begin{lemma}
For any choice of $c_2, c_3 > 0$, there exists a constant
$C_5 > 0$ such that the following holds. For every $k$ in $\{0,1,\ldots,k_0\}$, if $\mathbb{P}_{D_k}(y \neq \sign(u \cdot x)) \leq \xi_k$, then for every
$w \in W_k$,
\begin{equation}
| \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{\tilde{D}_k} \ell_{\tau_k}(w, (x, y)) |
\leq
\sqrt{C_5 \xi_k}.
\label{eqn:lossdiff}
\end{equation}
\label{lem:noisy-hinge}
\end{lemma}
\begin{proof}
We first bound the difference as follows:
\begin{eqnarray}
&& |\mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{\tilde{D}_k} \ell_{\tau_k}(w, (x, y)) | \nonumber \\
&=& |\mathbb{E}_{D_k} [\ell_{\tau_k}(w, (x, y)) - \ell_{\tau_k}(w, (x, \sign(u \cdot x)))] | \nonumber \\
&=& | \mathbb{E}_{D_k} I(y \neq \sign(u \cdot x)) \cdot (\ell_{\tau_k}(w, (x, y)) - \ell_{\tau_k}(w, (x, \sign(u \cdot x)))) | \nonumber \\
&\leq& \mathbb{E}_{D_k} I(y \neq \sign(u \cdot x)) \cdot 2\frac{|w \cdot x|}{\tau_k} \nonumber \\
&\leq& 2 \sqrt{ \mathbb{P}_{D_k}(y \neq \sign(u \cdot x)) \frac{\mathbb{E}_{D_k} (w \cdot x)^2 }{\tau_k^2} }
\label{eqn:cs}
\end{eqnarray}
where the first equality is from the fact that an example $(x,y)$ drawn from $\tilde{D}_k$ satisfies
$y = \sign(u \cdot x)$ with probability 1; the second equality is by decomposing $1$ as
$I(y \neq \sign(u \cdot x)) + I(y = \sign(u \cdot x))$; the first inequality is from the fact that
$|(1+\frac{|w \cdot x|}{\tau_k})_+ - (1-\frac{|w \cdot x|}{\tau_k})_+| \leq 2\frac{|w \cdot x|}{\tau_k}$;
the second inequality is from Cauchy-Schwarz. We now consider the cases of $k=0$ and $k \geq 1$ respectively.
\paragraph{Case 1: $k = 0$.} In this case, $W_0$ is a subset of $\{w \in \mathbb{R}^d: \| w \|_2 \leq 1\}$ and $D_0 = D$.
Therefore, for all $w$ in $W_0$,
\[ \mathbb{E}_{D_0} (w \cdot x)^2 \leq 1 \]
as $D$ is isotropic log-concave. Continuing Equation~\eqref{eqn:cs}, we get that
\[ |\mathbb{E}_{D_0} \ell_{\tau_0}(w, (x, y)) - \mathbb{E}_{\tilde{D}_0} \ell_{\tau_0}(w, (x, y)) | \leq 2\sqrt{\frac{\xi_0}{\tau_0^2}} = 2\sqrt{\frac{\xi_0}{c_3^2}}. \]
\paragraph{Case 2: $k \geq 1$.} In this case, $W_k$ is a subset of $\{w \in \mathbb{R}^d: \| w - w_{k-1} \|_2 \leq r_k \}$.
By Lemma~\ref{lem:variance}, and the choices of $b_k$ and $r_k$, we have that for all $w$ in $W_k$,
\begin{equation}
\mathbb{E}_{D_k} (w \cdot x)^2 \leq C_4 (r_k^2 + b_k^2).
\label{eqn:variance}
\end{equation}
By the definitions of $r_k$, $b_k$, and $\tau_k$ and Equation~\eqref{eqn:cs}, we have
\[
|\mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{\tilde{D}_k} \ell_{\tau_k}(w, (x, y)) |
\leq
2 \sqrt{\xi_k \frac{C_4(r_k^2 + b_k^2)}{\tau_k^2}} \leq 2 \sqrt{\xi_k C_4 \frac{1+c_2^2}{c_3^2}}.
\]
Now, choose $C_5 = 4\max\left(C_4 \cdot \frac{1+c_2^2}{c_3^2}, \frac{1}{c_3^2}\right)$. Combining the above two cases, with this choice of $C_5$ we conclude that Equation~\eqref{eqn:lossdiff} holds for all $k$ in $\{0,1,\ldots,k_0\}$.
\end{proof}
Applying the above lemma to the two noise settings respectively, we have:
\begin{lemma}
For any $\lambda > 0$ and $c_2, c_3 > 0$, there exists a constant $\mu_1 > 0$ such that the following holds. Suppose $D$ satisfies the $t$-sparse $\mu_1 \epsilon$-adversarial
noise condition. For every $k \in \{0,\ldots,k_0\}$, and $w$ in $W_k$,
\[
| \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{\tilde{D}_k} \ell_{\tau_k}(w, (x, y)) |
\leq
\lambda.
\]
\label{lem:tv-an}
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:noisy-hinge}, it suffices to let $\mu_1$ be such that for all $k \in \{0,1,\ldots,k_0\}$, $\mathbb{P}_{D_k}(y \neq \sign(u \cdot x)) \leq \frac{\lambda^2}{C_5 }$ for the $C_5$ defined therein.
Observe that
\[ \mathbb{P}_{D_k}(y \neq \sign(u \cdot x)) \leq \frac{\mathbb{P}_D(y \neq \sign(u \cdot x))}{\mathbb{P}_D(x \in B_k)}, \]
by the fact that $\mathbb{P}(A|B) \leq \frac{\mathbb{P}(A)}{\mathbb{P}(B)}$ for any two events $A$, $B$.
We now consider the cases of $k=0$ and $k \geq 1$ respectively.
\paragraph{Case 1: $k = 0$.} In this case, $B_k = \mathbb{R}^d$, hence $\mathbb{P}_D(x \in B_k) = 1$. It suffices to set $\mu_1 \leq \frac{\lambda^2}{C_5 }$.
\paragraph{Case 2: $k \geq 1$.} In this case, $\mathbb{P}_D(x \in B_k) \geq \min(C_3 /9, C_3 b_k) \geq \min(C_3 /9, C_3 c_2 C_1 \epsilon / 2)$, where the first inequality is from Lemma~\ref{lem:bandmass}; the second inequality is from the definition of $b_k$ and $k \leq k_0$.
Therefore, for sufficiently small $\mu_1$, if $\mathbb{P}_D(y \neq \sign(u \cdot x)) \leq \mu_1 \epsilon$, then $\mathbb{P}_{D_k}(y \neq \sign(u \cdot x)) \leq \frac{2 \mu_1}{\min(2C_3 /9, C_3 C_1 c_2)} \leq \frac{\lambda^2}{C_5 }$.
Combining the above two cases, we can pick a sufficiently small $\mu_1$ such that the requirements on $\mu_1$ in both cases are satisfied. This completes the proof.
\end{proof}
\begin{lemma}
For any $\lambda > 0$ and $c_2, c_3 > 0$, there exists a constant $\mu_2 > 0$ such that the following holds. Suppose $D$ satisfies the $t$-sparse $\mu_2$-bounded
noise condition. For every $k \in \{0,\ldots,k_0\}$, and $w$ in $W_k$,
\[
| \mathbb{E}_{D_k} \ell_{\tau_k}(w, (x, y)) - \mathbb{E}_{\tilde{D}_k} \ell_{\tau_k}(w, (x, y)) |
\leq
\lambda.
\]
\label{lem:tv-bn}
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:noisy-hinge}, it suffices to let $\mu_2$ be such that for all $k \in \{0,1,\ldots,k_0\}$, $\mathbb{P}_{D_k}(y \neq \sign(u \cdot x)) \leq \frac{\lambda^2}{C_5 }$ for the $C_5$ defined therein.
This can indeed be satisfied by setting $\mu_2 = \frac{\lambda^2}{C_5 }$, which immediately implies that $\mathbb{P}_{D_k}(y \neq \sign(u \cdot x)) \leq \mu_2 \leq \frac{\lambda^2}{C_5 }$.
\end{proof}
\section{Conclusions and future work}
We give a computationally efficient PAC active halfspace learning algorithm that enjoys a sharp, attribute-efficient label complexity bound.
It combines the margin-based framework of~\cite{BBZ07,BL13} with iterative hard thresholding~\citep{BD09, GK09}.
The main novel technical component in our analysis is a uniform concentration bound of hinge losses over shrinking $\ell_1$ balls in the sampling regions.
We outline several promising directions of future research:
\begin{itemize}
\item Can we extend our algorithm to work under $\eta$-bounded noise, when $\eta$ is arbitrarily close to $\frac 1 2$? Recall that the results of \cite{ZC14}
imply a computationally inefficient algorithm with a label complexity of $O(\frac{t \ln d}{(1-2\eta)^2} \ln \frac 1 \epsilon)$ in this setting,
which state-of-the-art computationally efficient algorithms~\citep[e.g.][]{ABHZ16} cannot achieve.
\item Can we design attribute and computationally efficient active learning algorithms that work under broader distributions? Existing results in the active learning and one-bit compressed sensing literature have made substantial progress on settings when the unlabeled distribution is $\alpha$-stable~\citep{L16}, subgaussian~\citep{ALPV14, CB15}, or $s$-concave~\citep{BZ17}; an attribute and computationally efficient, statistically consistent recovery algorithm under any of the above settings would be a step forward.
\item In one-bit compressed sensing, under the symmetric noise condition~\citep{PV13b}, algorithms with sample complexity polynomial in $\frac 1 \epsilon$ have been proposed~\citep{PV13b, ZYJ14, ZG15}.
Can we develop adaptive one-bit compressed sensing algorithms with $O(t \polylog(d,\frac 1 \epsilon))$ measurement complexity in this setting?
\end{itemize}
\section{Performance guarantees}
In this section, we prove Theorem~\ref{thm:main}, the main result of this paper.
\begin{theorem}
There exist numerical constants $\mu_1, \mu_2 \in (0, \frac 1 2)$ such that the following holds.
Suppose $D_X$ is isotropic log-concave, and one of the following two conditions hold:
\begin{enumerate}
\item $D$ satisfies the $t$-sparse $\mu_1\epsilon$-adversarial noise condition;
\item $D$ satisfies the $t$-sparse $\mu_2$-bounded noise condition.
\end{enumerate}
In addition, Algorithm~\ref{alg:ae_al} is run with sparsity parameter $t$, target error $\epsilon$ and failure probability $\delta$.
Then, with probability $1-\delta$, the output halfspace $\hat{w}$ is such that
$\err(h_{\hat{w}}) - \err(h^*) \leq \epsilon$,
and the total number of label queries is $O( t \cdot (\ln d + \ln \frac 1 \epsilon)^3 \cdot \ln \frac 1 \epsilon )$.
\label{thm:main}
\end{theorem}
As the $t$-sparse realizable setting is a special case of the $t$-sparse adversarial noise setting (by setting $\nu = 0$), Theorem~\ref{thm:main} immediately implies the
following corollary:
\begin{corollary}
Suppose $D_X$ is isotropic log-concave, and the $t$-sparse realizable condition holds for $D$.
In addition, Algorithm~\ref{alg:ae_al} is run with sparsity parameter $t$, target error $\epsilon$ and failure probability $\delta$.
Then, with probability $1-\delta$, the output halfspace $\hat{w}$ is such that
$\err(h_{\hat{w}}) - \err(h^*) \leq \epsilon$,
and the total number of label queries is $O( t \cdot (\ln d + \ln \frac 1 \epsilon)^3 \cdot \ln \frac 1 \epsilon )$.
\label{cor:main}
\end{corollary}
Theorem~\ref{thm:main} and Corollary~\ref{cor:main} imply that, under the respective noise conditions defined above, Algorithm~\ref{alg:ae_al}
has a label complexity of $O( t \polylog(d, \frac 1 \epsilon) )$. To the best of our knowledge, this
is the first efficient PAC active learning algorithm that has a label complexity linear in $t$, and polylogarithmic
in $d$ and $\frac 1 \epsilon$. Previous works either need to sacrifice computational efficiency to achieve such a guarantee~\citep{D05,ZC14}, or have label complexities polynomial in $d$ or $\frac 1 \epsilon$~\citep{ABL17, ABHZ16}. We remark that in the membership query model~\citep{A88, BB08}, efficient algorithms with $O( t \polylog(d, \frac 1 \epsilon) )$ label complexities are implicit in the literature (e.g. by combining \cite{HB11}'s support recovery algorithm with efficient full-dimensional active halfspace learning algorithms~\citep{DKM05,ABL17,CHK17,YZ17}). In contrast, the focus of this paper is on the more challenging PAC setting, and it is unclear how to modify a membership query algorithm to make it work in the PAC setting.
\subsection{Proof of Theorem~\ref{thm:main}}
Recall that $\delta_k = \frac{\delta}{(k+1)(k+2)}$; note that $\sum_{k=0}^{k_0} \delta_k = \delta \sum_{k=0}^{k_0} \left( \frac{1}{k+1} - \frac{1}{k+2} \right) \leq \delta$ by telescoping. To prove Theorem~\ref{thm:main}, we give exact settings of constants $\mu_1, \mu_2 \in (0, \frac 1 2)$ in Appendix~\ref{sec:params},
such that under either the $t$-sparse $\mu_1\epsilon$-adversarial noise condition or the $t$-sparse
$\mu_2$-bounded noise condition, the following lemma holds:
\begin{lemma}
For every $k \in \{ 0,1,\ldots,k_0 \}$, there is an event $E_k$ with probability $1-\sum_{l=0}^k \delta_l$, on which $u$ is in $W_{k+1}$.
\label{lem:induct}
\end{lemma}
The proof of Lemma~\ref{lem:induct} relies on the following two supporting lemmas. The first lemma (Lemma~\ref{lem:hlm}) shows that $w_k'$, produced in the hinge loss minimization step (line~\ref{line:hlm}), has a small angle with $u$. Specifically, the upper bound on $\theta(w_k',u)$
is halved at each iteration $k$, with the help of constrained hinge loss minimization over a fresh set of $n_k = O(t \polylog(d,\frac1\epsilon))$ labeled examples. This relies on two ideas: first, as is standard
in the margin-based active learning framework~\citep[see e.g.][]{BBZ07,BL13}, it suffices to let $w_k'$ achieve a constant error with respect to the sampling distribution at epoch $k$; second, to guarantee that our choice of $n_k$ ensures that $w_k'$ indeed has a constant error under the sampling distribution, we use a novel uniform concentration bound on the hinge losses of $W_k$ over $S_k$, tighter than those in all prior works~\citep{ABL17,ABHZ16}. Thanks to our construction of $W_k$, our concentration bound on hinge losses is of order $\tilde{O}(\sqrt{\frac{t\ln d}{n_k}})$, which can be substantially tighter than
$\tilde{O}(\sqrt{\frac{d}{n_k}})$ used in~\citet{ABL17,HKY15} and $\tilde{O}(\sqrt{\frac{(t \ln d) \cdot 2^k}{n_k}})$ used in~\citet{ABHZ16}. We refer the reader to Appendix~\ref{sec:conc} for a formal statement.
\begin{lemma}
For every $k \in \{ 0, 1,\ldots,k_0 \}$, if $u$ is in $W_k$, then with probability $1-\delta_k$, $\theta(w_k', u) \leq 2^{-k-8} \pi$.
\label{lem:hlm}
\end{lemma}
The second lemma (Lemma~\ref{lem:truncate}) shows that performing a hard thresholding operation followed by $\ell_2$ normalization on $w_k'$ (line~\ref{line:ht}) yields a $t$-sparse unit vector $w_k$ that is close to $u$ in both $\ell_1$ and $\ell_2$ distances. This ensures that $W_{k+1}$, the constraint set of the optimization problem at the next epoch, contains $u$. A key fact used in the proof of the lemma is that the hard thresholding operator $\HT_t$ is effectively an $\ell_2$-projection onto the $\ell_0$ ball $\{w \in \mathbb{R}^d: \| w \|_0 \leq t \}$.
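To make this projection fact concrete, the following sketch checks numerically that hard thresholding coincides with a brute-force $\ell_2$-projection onto the $\ell_0$ ball; the helper names are ours and the brute-force projector is purely illustrative.

```python
import itertools
import numpy as np

def hard_threshold(v, t):
    """Keep the t largest-magnitude entries of v (ties broken by lower index),
    zeroing out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(-np.abs(v), kind="stable")[:t]
    out[keep] = v[keep]
    return out

def project_l0_ball(v, t):
    """Brute-force l2-projection of v onto {w : ||w||_0 <= t}."""
    d = len(v)
    best, best_dist = None, np.inf
    for support in itertools.combinations(range(d), t):
        w = np.zeros(d)
        w[list(support)] = v[list(support)]
        dist = np.linalg.norm(v - w)
        if dist < best_dist:
            best, best_dist = w, dist
    return best

v = np.array([0.3, -1.2, 0.05, 2.0, -0.7])
for t in (1, 2, 3):
    assert np.allclose(hard_threshold(v, t), project_l0_ball(v, t))
```

Keeping the $t$ largest-magnitude entries minimizes the $\ell_2$ distance over every choice of support, which is exactly the projection property the lemma exploits.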
\begin{lemma}
For every $k \in \{ 0,1,\ldots,k_0 \}$, if $\theta(w_k', u) \leq 2^{-k-8} \pi$, then $u$ is in $W_{k+1}$.
\label{lem:truncate}
\end{lemma}
We are now ready to prove Lemma~\ref{lem:induct}.
\begin{proof}[Proof of Lemma~\ref{lem:induct}]
We prove the lemma by induction.
\paragraph{Base case.} In the case of $k = 0$, observe that as $u$ has unit $\ell_2$ norm and $u$ is $t$-sparse, by the Cauchy-Schwarz inequality, $\| u \|_1 \leq \sqrt{t} \| u\|_2 = \sqrt{t}$. Therefore, $u$ belongs to the set $W_0$ deterministically.
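The Cauchy-Schwarz step above is easy to sanity-check numerically; the sampler below is an illustrative sketch with assumed dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, t = 50, 5
for _ in range(100):
    u = np.zeros(d)
    support = rng.choice(d, size=t, replace=False)
    u[support] = rng.standard_normal(t)
    u /= np.linalg.norm(u)                      # t-sparse unit vector
    # Cauchy-Schwarz over the t nonzero entries: ||u||_1 <= sqrt(t) * ||u||_2
    assert np.abs(u).sum() <= np.sqrt(t) * np.linalg.norm(u) + 1e-12
```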
Lemma~\ref{lem:hlm} with $k=0$ shows that there is an event $E_0$ with probability $1-\delta_0$, conditioned on which $\theta(w_0',u) \leq 2^{-8}\pi$. By Lemma~\ref{lem:truncate}, we get that $u$ is in $W_1$.
\paragraph{Inductive case.} For $k \geq 1$, suppose the inductive hypothesis holds. That is, there is an event $E_{k-1}$ with probability $1-\sum_{l=0}^{k-1} \delta_l$, such that
on $E_{k-1}$, $u$ is in $W_k$. By Lemma~\ref{lem:hlm}, there is an event $F_k$ such that $\mathbb{P}(F_k | E_{k-1}) \geq 1 - \delta_k$,
conditioned on which $\theta(w_k', u) \leq 2^{-k-8} \pi$.
Define event $E_k:= E_{k-1} \cap F_k$. Observe that $\mathbb{P}(E_k) = \mathbb{P}(E_{k-1})\mathbb{P}(F_k | E_{k-1}) \geq 1-\sum_{l=0}^{k} \delta_l$.
Now, on event $E_k$, Lemma~\ref{lem:truncate} implies that $u$ is in $W_{k+1}$.
This completes the induction.
\end{proof}
Theorem~\ref{thm:main} is now a direct consequence of Lemma~\ref{lem:induct}; we give its proof below.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
From Lemma~\ref{lem:induct} and the fact that the output $\hat{w}$ is $w_{k_0}$, we have that with probability $1-\sum_{l=0}^{k_0}\delta_l \geq 1-\delta$, $u$ is in $W_{k_0+1}$. By the definition of $W_k$,
\[ \| u - w_{k_0} \|_2 \leq r_{k_0+1} = 2^{-k_0 - 4}. \]
By Lemma~\ref{lem:distangle} in the Appendix and the fact that $\|u\|_2 = 1$, we know that
$\theta(w_{k_0}, u) \leq \pi \| u - w_{k_0} \|_2 \leq 2^{-k_0 - 2} \leq \frac{C_1\epsilon}{2}$.
By the first inequality of Equation~\eqref{eqn:angdis}, we have that
$ \mathbb{P}_D (h_{w_{k_0}}(x) \neq h_{u}(x) ) \leq \frac \epsilon 2. $
Therefore, by the triangle inequality and the fact that the output $\hat{w}$ is $w_{k_0}$,
\[ \err(h_{\hat{w}}) - \err(h_u) \leq \frac \epsilon 2. \]
We now consider two separate cases regarding the two different noise conditions:
\begin{enumerate}
\item In the $\mu_1 \epsilon$-adversarial noise setting, we know that $\err(h_u) \leq \mu_1 \epsilon \leq \frac \epsilon 2$.
Therefore,
\[ \err(h_{\hat{w}}) - \err(h^*) \leq \err(h_{\hat{w}}) \leq \err(h_u) + \frac \epsilon 2 \leq \frac \epsilon 2 + \frac \epsilon 2 \leq \epsilon. \]
\item In the $\mu_2$-bounded noise setting, as $h_u$ and $h^*$ are identical,
it immediately follows that $\err(h_{\hat{w}}) - \err(h^*) \leq \frac \epsilon 2 \leq \epsilon$.
\end{enumerate}
We now bound the label complexity of Algorithm~\ref{alg:ae_al}. The total number of labels queried is $\sum_{k=0}^{k_0} n_k$,
where $n_k \leq c_1 \cdot t (\ln d + \ln \frac 1 \epsilon + \ln \frac{(k+1)(k+2)}{\delta})^3$, and $k_0 = O(\ln\frac1\epsilon)$.
As a consequence, the total number of label queries is $O(t \cdot (\ln d + \ln \frac 1 \epsilon)^3 \cdot \ln \frac 1 \epsilon)$ in terms of $t, d$ and $\epsilon$.
The theorem follows.
\end{proof}
\section{Introduction}
Active learning is a machine learning paradigm that aims to reduce label requirements by interacting with labeling oracles~\citep{S10}. The learner is given a distribution from which it can draw unlabeled examples,
and a labeling oracle from which it can query labels interactively. This is in contrast with passive learning, where labeled examples are drawn from the distribution directly.
Using its ability to query labels adaptively, an active learning
algorithm can avoid querying labels it already knows, thus substantially reducing label requirements.
In the PAC active learning model~\citep{V84,KSS94,BBL09,H14}, the performance of an active learner is measured by its label complexity, i.e. the number of label queries needed to meet an error requirement $\epsilon$ with high probability.
There have been many exciting works on active halfspace learning in the literature.
In this setting, the instances are in $\mathbb{R}^d$, and the labels are from $\{-1,+1\}$. The goal is to learn a classifier from $\mathcal{H} = \{\sign(w \cdot x): w \in \mathbb{R}^d \}$, the class of homogeneous linear classifiers, to predict labels from instances.
Efficient active halfspace learning algorithms that work under different distributional assumptions have been proposed. Some of these algorithms are computationally efficient and enjoy
information-theoretically optimal label complexities~\citep{DKM05, BBZ07, ABL17, HKY15, ABHU15, YZ17}, that is,
$O(d \ln\frac 1 \epsilon)$ in terms of $d$ and $\epsilon$ \citep[See e.g.][for an $\Omega(d \ln\frac 1 \epsilon)$ lower bound]{KMT93}.
On the other hand, a line of work on attribute efficient learning \citep{B90} shows that one can in fact learn faster
when the target classifier is {\em sparse}, i.e. it depends only on a few of the input features.
In the problem of active halfspace learning, one can straightforwardly apply existing results to achieve attribute efficiency.
For instance, consider running the algorithm of~\cite{ZC14} with concept class $\mathcal{H}_t$, the set of $t$-sparse linear classifiers. Under certain distributional assumptions, \cite{ZC14}'s algorithm achieves label complexities of order $O(t \ln d \ln\frac 1 \epsilon)$. However, such algorithms are computationally inefficient: they require solving empirical 0-1 loss minimization with respect to $\mathcal{H}_t$, which is NP-hard even in the realizable setting~\citep{N95}.
The results above raise the following question: are there active learning algorithms that learn linear classifiers in an attribute and computationally efficient manner?
A line of work on one-bit compressed sensing~\citep{BB08} partially answers this question. It shows that when the learning algorithm is allowed to synthesize instances to query their labels (also known as the membership query model~\citep{A88}, abbrev.\ MQ), it is possible to approximately recover the target halfspace using a near-optimal number of $\tilde{O}(t (\ln d + \ln \frac 1 \epsilon))$ queries~\citep{HB11}.
However, when applied to active learning in the PAC model, these results have strong distributional
requirements.
For instance, the algorithm of~\cite{HB11} requires the unlabeled distribution to place constant probability mass on elements of the discrete set $\{-1,0,+1\}^d$.
In the PAC setting, recent work of~\cite{ABHZ16} proposes attribute and computationally efficient active halfspace learning algorithms, under the assumption that the unlabeled distribution is isotropic log-concave~\citep{LV07}. In the $t$-sparse $\Omega(\epsilon)$-adversarial noise setting, where all but an $\Omega(\epsilon)$ fraction of examples agree with some $t$-sparse linear classifier (see also Definition~\ref{def:an}), their algorithm has a label complexity of $\tilde{O}(\frac{t}{\epsilon^2} )$.
In the $t$-sparse $\eta$-bounded noise setting, where each label is generated by some underlying $t$-sparse linear classifier and then flipped with probability at most a constant $\eta \in [0,\frac12)$ (see also Definition~\ref{def:bn}), their algorithm has a label complexity of $\tilde{O}((\frac{t}{\epsilon})^{O(1)} )$. Compared to those achieved by computationally inefficient algorithms (e.g. ~\cite{ZC14} discussed above), these label complexity bounds are suboptimal, in that they do not have a logarithmic dependence on $\frac 1 \epsilon$.
In this paper, we give an algorithm that combines the advantages of~\cite{ZC14} and~\cite{ABHZ16}, achieving computational efficiency and $\tilde{O}(t \polylog(d, \frac 1 \epsilon))$ label complexity simultaneously, under certain distributional assumptions on the data.
Specifically, our algorithm works if the unlabeled distribution is isotropic log-concave, and has the following guarantee.
If one of the two conditions below is true:
\begin{enumerate}
\item the $t$-sparse $\mu_1\epsilon$-adversarial noise condition holds (see Definition~\ref{def:an}), where $\mu_1 > 0$ is some numerical constant;
\item the $t$-sparse $\mu_2$-bounded noise condition holds (see Definition~\ref{def:bn}), where $\mu_2 > 0$ is some numerical constant,
\end{enumerate}
then, with high probability, the algorithm outputs a halfspace with excess error at most $\epsilon$, and queries at most $O(t (\ln d + \ln \frac 1 \epsilon)^3 \ln \frac 1 \epsilon )$ labels. As a corollary, if there is a $t$-sparse linear classifier that
agrees with all the labeled examples drawn from the distribution (see Definition~\ref{def:r}), the algorithm also achieves a label complexity of $O(t (\ln d + \ln \frac 1 \epsilon)^3 \ln \frac 1 \epsilon )$. In the next section, we give a detailed comparison between
our results and related results in the literature.
From a technical perspective, our algorithm combines the margin-based
framework of~\cite{BBZ07, BL13} with iterative hard thresholding~\citep{BD09, GK09}, a technique well-studied in compressed sensing~\citep{CT06,D06}. Our analysis is based on sharp uniform concentration bounds of hinge losses over linear predictors in $\ell_1$ balls in the label query regions, which is in turn built upon classical Rademacher complexity bounds for linear prediction~\citep{KST09}.
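The interplay between these two components can be sketched as follows. This is a simplified illustration under assumed conventions, not the paper's algorithm: it runs plain subgradient descent on the hinge loss (omitting the $\ell_1$ constraint and the band-based sampling region) before the hard thresholding and normalization steps.

```python
import numpy as np

def hard_threshold(v, t):
    """Zero out all but the t largest-magnitude entries of v."""
    out = np.zeros_like(v)
    keep = np.argsort(-np.abs(v))[:t]
    out[keep] = v[keep]
    return out

def epoch_update(w0, X, Y, t, tau, n_steps=300, lr=0.05):
    """One simplified epoch: subgradient descent on the average tau-hinge loss,
    followed by hard thresholding and l2-normalization."""
    w = w0.copy()
    n = len(Y)
    for _ in range(n_steps):
        active = Y * (X @ w) / tau < 1              # examples with nonzero hinge loss
        grad = -(Y[active][:, None] * X[active]).sum(axis=0) / (tau * n)
        w -= lr * grad
    w = hard_threshold(w, t)                        # sparsify
    return w / np.linalg.norm(w)                    # back to the unit sphere

# Labels given by the 1-sparse halfspace u = e_1 over Gaussian data.
rng = np.random.default_rng(0)
d = 10
X = rng.standard_normal((2000, d))
Y = np.sign(X[:, 0])
w = epoch_update(np.zeros(d), X, Y, t=1, tau=1.0)
assert np.argmax(np.abs(w)) == 0 and np.count_nonzero(w) == 1
```

Even this stripped-down epoch recovers the support of the target halfspace on clean Gaussian data; the paper's analysis controls how the constrained version of this step halves the angle to $u$ at every epoch.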
\section{Related work}
\paragraph{Attribute efficient active learning of halfspaces.}
There is a rich body of theoretical literature on active learning of general concept classes
in the PAC setting~\citep{D11, H14}. For the problem of active halfspace learning, sharp distribution-dependent label complexity results are known,
in terms of e.g. the splitting index~\citep{D05}, or the disagreement coefficient~\citep{H07}.
Direct applications of these results (without taking advantage of sparsity assumptions)
yield algorithms with label complexities at least $\Omega(d \ln \frac 1 \epsilon)$~\citep{KMT93}.
To make these algorithms attribute efficient, a natural modification is to consider concept class
$\mathcal{H}_t$, the set of $t$-sparse linear classifiers.
It is well known that $\mathcal{H}_t$ has VC dimension
$O(t \ln d)$. In conjunction with existing results in the active learning
literature, this observation immediately yields attribute efficient active
learning algorithms. For example, when the unlabeled distribution is isotropic log-concave,
an application of~\cite{ZC14}'s algorithm with $\mathcal{H}_t$ yields a label complexity
of $O(t \ln d \ln \frac 1 \epsilon)$ in the $t$-sparse realizable setting, and gives
$O(t \ln d \cdot (\ln \frac 1 \epsilon+\frac{\nu^2}{\epsilon^2}))$ and
$O(\frac{t\ln d}{(1-2\eta)^2} \ln \frac 1 \epsilon)$
label complexities in the $t$-sparse $\nu$-adversarial noise and $t$-sparse $\eta$-bounded noise settings.\footnote{To see this, note that the $\phi(\cdot,\cdot)$ function
defined in~\cite{ZC14} with respect to $\mathcal{H}_t$ can be bounded as: $\phi(r,\xi) \leq O(r \ln \frac{r}{\xi})$, as $\mathcal{H}_t$ is a subset of $\mathcal{H}$. Theorem 4 of \cite{ZC14} now applies.}
However, these algorithms require solving
empirical 0-1 loss minimization subject to sparsity constraints, which is computationally intractable in general~\citep{N95}.
The only attribute and computationally efficient PAC active learning algorithms we are aware of are in~\cite{ABHZ16}. Specifically, under the $t$-sparse $\Omega(\epsilon)$-adversarial noise setting, \cite{ABHZ16} gives an efficient algorithm with label complexity $\tilde{O}(\frac{t}{\epsilon^2}\polylog(d,\frac 1 \epsilon))$. Under the $t$-sparse $\eta$-bounded noise setting, ~\cite{ABHZ16} gives an efficient algorithm with label complexity $\tilde{O}((\frac t \epsilon)^{O(\frac 1 {(1-2\eta)^2})})$.
The notion of attribute efficient learning algorithms was initially studied in the pioneering works of~\cite{L87,B90}.
\cite{L87} considers attribute efficient online
learning of linear classifiers, with an application to learning disjunctions that depend on only $t$ attributes.
The algorithm incurs a mistake bound of $O(t \ln d)$, which can be of substantially lower order than $O(d)$ when $t$ is small.
\cite{B90} considers an online learning model where the feature space is infinite dimensional,
and each instance shown has a bounded number of nonzero attributes.
It gives efficient algorithms that learn $k$-CNFs and disjunctions
with finite mistake bounds in this setting.
\cite{S00, KS06, STT12} study attribute efficient learning of decision lists and analyze the
tradeoff between running time and mistake bound.
\cite{LS07} shows that, if the unlabeled distribution is unconcentrated over $\{-1,1\}^d$, then there
is an algorithm that learns $t$-sparse linear classifiers with a sample complexity of $\poly(t, \ln d, 2^{O(\epsilon^{-2})})$. \cite{F07} gives algorithms for attribute efficient learning parity and DNFs
in the membership query model.
\paragraph{One-bit compressed sensing.} The line of work on one-bit compressed sensing~\citep{BB08} is closely related to our problem setup. In this setting,
there is an unknown $t$-sparse vector $u \in \mathbb{R}^d$, and the algorithm can make measurements of $u$ using vectors $x \in \mathbb{R}^d$, receiving (possibly noisy) values of $\sign(u \cdot x)$.
Note that different from standard compressed sensing~\citep{CT06,D06}, the measurement results of one-bit compressed sensing are {\em quantized} versions of $(u \cdot x)$'s (i.e. they lie in $\{-1,+1\}$ as opposed to $\mathbb{R}$).
The goal is to approximately recover $u$ up to scaling with a few (ideally, $O(t \ln d)$) measurements.
In the non-adaptive setting, the measurement vectors
$x$ are chosen at the beginning, while in the adaptive setting, the measurement vectors can be chosen sequentially,
based on past observations.
The problem of adaptive one-bit compressed sensing is therefore equivalent to attribute efficient
active halfspace learning in the membership query model~\citep{A88}.
We remark that active learning in the PAC model is more challenging than in the membership query model, in that the learner has to query the labels of the unlabeled examples it has drawn.
\cite{JLBB13} gives an algorithm with robust recovery guarantees; however, it is based on computationally intractable $\ell_0$ minimization. Inspired by the count sketch data structure~\citep{CCF02}, \cite{HB11} proposes an efficient procedure that recovers the support of $u$ using $O(t \ln d)$ queries, and has strong noise tolerance properties. In conjunction with efficient full-dimensional active halfspace learning algorithms~\citep{DKM05,ABL17,CHK17,YZ17}, this procedure
yields efficient algorithms that have label complexities of $O(t (\ln d + \ln \frac 1 \epsilon ))$
(resp. $O(t (\ln d + \ln \frac 1 \epsilon))$, $O(\frac{t}{(1-2\eta)^2} (\ln d + \ln \frac 1 \epsilon ))$) in the $t$-sparse realizable setting (resp. $t$-sparse $\Omega(\epsilon)$-adversarial noise setting, $t$-sparse $\eta$-bounded noise setting).
\cite{GNJN13, ABK17} give upper and lower bounds for {\em universal} one-bit compressed sensing, that is, the setting where the same set of measurements can be used to approximately recover {\em any} underlying $t$-sparse signal. In this setting, \cite{ABK17} shows that, perhaps surprisingly, the number of measurements necessary and sufficient for support recovery is $\tilde{\Theta}(t^2 \ln d)$, as opposed to $\Theta(t \ln d)$ in the non-universal setting.
\cite{PV13a} proposes a linear programming based algorithm that works in the $t$-sparse realizable setting, and has a measurement complexity of $\tilde{O}(\frac{t}{\epsilon^5})$,
based on a new tool called random hyperplane tessellations. \cite{L16} gives a support recovery algorithm that tolerates bounded noise,
using $\alpha$-stable random projections.
\cite{PV13b} proposes a convex programming based algorithm that works in the $t$-sparse $\Omega(\epsilon^2)$-adversarial noise model,
and has a measurement complexity of $\tilde{O}(\frac{t}{\epsilon^{12}})$.
One-bit compressed sensing under the symmetric noise condition has also been studied in the literature~\citep{PV13b, ZYJ14, CB15, ZG15}. In this model, it is assumed that there is a known function $g$, such that for all $x$, $\mathbb{E}[y|x] = g(u \cdot x)$. This assumption captures some realistic scenarios, but is nevertheless strong: it requires any two examples that have the same projection on $u$ to have the same conditional label distribution. In contrast, the $t$-sparse adversarial noise and the $t$-sparse bounded noise conditions allow heterogeneous noise levels, even among examples that have the same projection on $u$.
In this setting, the state-of-the-art result of \cite{ZYJ14} gives a nonadaptive algorithm with a measurement complexity of $O(\frac{t \ln d}{\epsilon^2})$. It also proposes an adaptive algorithm for the same setting, achieving a measurement complexity bound of $O(\min(\frac{t \ln d}{\epsilon^2}, \frac{t\sqrt{d} \ln d}{\epsilon}))$, which is sometimes lower than that of the nonadaptive algorithm.
The special case of Gaussian noise before quantization has been studied extensively, i.e. given $x$, the label $y$ is generated by the formula $y = \sign(u \cdot x + n)$, where $n$ is a Gaussian random variable. \cite{GNR10} shows that when $u$ has a large dynamic range (the absolute value of the ratio between $u$'s largest and smallest nonzero elements in magnitude), adaptive approaches require fewer measurements to identify the support of $u$ than nonadaptive approaches.
We provide a detailed comparison between our work and the results most closely related to ours in Tables~\ref{tab:comp-r}, \ref{tab:comp-an}, and \ref{tab:comp-bn}.
\begin{table}[t]
\centering
\begin{tabular}{llll}
\toprule
Algorithm & Model & Label complexity & Efficient? \\
\midrule
\begin{tabular}{@{}c@{}}\cite{HB11}\\ with \cite{DKM05} \end{tabular} & MQ & $\tilde{O}(t (\ln d + \ln \frac 1 \epsilon))$ & Yes \\
\cite{D05} & PAC & $\tilde{O}(t (\ln d + \ln \frac 1 \epsilon))$ & No \\
\cite{ABHZ16} & PAC & $\tilde{O}(\frac{t}{\epsilon^2} \polylog(d,\frac 1 \epsilon) )$ & Yes \\
Our work & PAC & $\tilde{O}(t \polylog(d,\frac 1 \epsilon) )$ & Yes \\
\bottomrule
\end{tabular}
\caption{A comparison of algorithms for active learning of halfspaces in the $t$-sparse realizable setting (Definition~\ref{def:r}); all the PAC algorithms above work under isotropic log-concave distributions.}
\label{tab:comp-r}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{lllll}
\toprule
Algorithm & Model & Noise tolerance & Label complexity & Efficient? \\
\midrule
\begin{tabular}{@{}c@{}}\cite{HB11}\\ with \cite{ABL17} \end{tabular}& MQ & $\nu = \Omega(\epsilon)$ & $\tilde{O}(t (\ln d + \ln \frac 1 \epsilon))$ & Yes \\
\cite{ZC14} & PAC & $\nu = \Omega(\epsilon)$ & $\tilde{O}(t \ln d \ln \frac 1 \epsilon)$ & No \\
\cite{PV13b} & PAC & $\nu = \Omega(\epsilon^2)$ & $\tilde{O}(\frac{t\ln d}{\epsilon^{12}})$ & Yes \\
\cite{ABHZ16} & PAC & $\nu = \Omega(\epsilon)$ & $\tilde{O}(\frac{t}{\epsilon^2} \polylog(d,\frac 1 \epsilon) )$ & Yes \\
Our work & PAC & $\nu = \Omega(\epsilon)$ & $\tilde{O}(t \polylog(d,\frac 1 \epsilon) )$ & Yes \\
\bottomrule
\end{tabular}
\caption{A comparison of algorithms for active learning of halfspaces in the $t$-sparse $\nu$-adversarial noise setting (Definition~\ref{def:an}); all the PAC algorithms above work under isotropic log-concave distributions.}
\label{tab:comp-an}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{lllll}
\toprule
Algorithm & Model & Noise tolerance & Label complexity & Efficient? \\
\midrule
\begin{tabular}{@{}c@{}}\cite{HB11}\\ with \cite{CHK17} \end{tabular} & MQ & $\eta \in [0,\frac 1 2)$ & $\tilde{O}(\frac{t}{(1-2\eta)^2} (\ln d + \ln \frac 1 \epsilon))$ & Yes \\
\cite{ZC14} & PAC & $\eta \in [0,\frac 1 2)$ & $\tilde{O}(\frac{t}{(1-2\eta)^2} \ln d \ln \frac 1 \epsilon)$ & No \\
\cite{ABHZ16} & PAC & $\eta \in [0,\frac 1 2)$ & $\tilde{O}((\frac{t}{\epsilon})^{O(\frac{1}{(1-2\eta)^2})} )$ & Yes \\
Our work & PAC & $\eta \in [0, \Omega(1))$ & $\tilde{O}(t \polylog(d,\frac 1 \epsilon) )$ & Yes \\
\bottomrule
\end{tabular}
\caption{A comparison of algorithms for active learning of halfspaces in the $t$-sparse $\eta$-bounded noise setting (Definition~\ref{def:bn}); all the PAC algorithms above work under isotropic log-concave distributions.}
\label{tab:comp-bn}
\end{table}
\section{Preliminaries}
We consider active learning in the PAC model~\citep{V84, KSS94}.
Denote by $\mathcal{X} := \mathbb{R}^d$ the instance space, and $\mathcal{Y} := \{-1,+1\}$ the label space.
The learning algorithm is given a data distribution $D$ over $\mathcal{X} \times \mathcal{Y}$.
Denote by $D_X$ the marginal distribution of $D$ over $\mathcal{X}$, and $D_{Y|X}$ the conditional distribution of label given instance.
The learning algorithm is also given a concept class, the set of homogeneous linear classifiers (halfspaces) $\mathcal{H}:=\{\sign(w \cdot x): w \in \mathbb{R}^d \}$.
For any classifier $h: \mathcal{X} \to \mathcal{Y}$, we denote by $\err(h):=\mathbb{P}_D(h(x) \neq y)$ the error rate of $h$.
Denote by $h^*$ the optimal classifier in $\mathcal{H}$: $h^*:=\argmin_{h' \in \mathcal{H}} \err(h')$.
The excess error of classifier $h$ is defined as $\err(h) - \err(h^*)$; in words, it is
the difference between $h$'s error and the best error in $\mathcal{H}$. A vector $w$ corresponds to a
linear classifier $h_w := \sign(w \cdot x)$ whose decision boundary has $w$ as its normal; define $w^*$ as the unit vector $w$ such that $h_w = h^*$. We define the angle between two vectors $w, w'$ in $\mathbb{R}^d$ as $\theta(w,w') = \arccos(\frac{w \cdot w'}{\|w\|_2 \| w'\|_2})$. \cite{BL13} shows that there exist numerical constants $C_1, C_2 > 0$, such that if $D_X$ is isotropic log-concave, then for all $w, w'$ in $\mathbb{R}^d$,
\begin{equation}
C_1 \mathbb{P}_D(h_w(x) \neq h_{w'}(x)) \leq \theta(w,w') \leq C_2 \mathbb{P}_D(h_w(x) \neq h_{w'}(x)).
\label{eqn:angdis}
\end{equation}
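For the special case where $D_X$ is standard Gaussian (one instance of an isotropic log-concave distribution), the disagreement probability equals $\theta(w,w')/\pi$ exactly, consistent with Equation~\eqref{eqn:angdis}. The Monte Carlo sketch below, with assumed example vectors, illustrates this.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
w = np.zeros(d); w[0] = 1.0
wp = np.zeros(d); wp[:2] = 1.0 / np.sqrt(2)      # theta(w, wp) = pi / 4
theta = np.arccos(np.clip(w @ wp, -1.0, 1.0))

# Under x ~ N(0, I_d): P(sign(w . x) != sign(wp . x)) = theta / pi
X = rng.standard_normal((200_000, d))
disagree = np.mean(np.sign(X @ w) != np.sign(X @ wp))
assert abs(disagree - theta / np.pi) < 0.01
```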
In active learning, the algorithm has the ability to draw unlabeled examples from $D_X$ and perform adaptive label queries to a labeling oracle $\mathcal{O}$.
The oracle $\mathcal{O}$ takes as input an unlabeled example $x$, and returns a label $y \sim D_{Y|X=x}$.
Given a random variable $z$ whose distribution is $\Delta$ over $\mathcal{Z}$ and a set $T \subset \mathcal{Z}$, denote by $\Delta|_T$ the conditional distribution of $z$ given that $z$ is in $T$.
An active learning algorithm is said to $(\epsilon,\delta)$-PAC
learn $\mathcal{H}$ and $D$ with label complexity $n(\epsilon,\delta)$, if with probability $1-\delta$, it performs at most $n(\epsilon,\delta)$ label queries to $\mathcal{O}$,
and returns a classifier $\hat{h}$ that has excess error at most $\epsilon$.
Given a vector $w$ and example $(x,y)$, the $\tau$-hinge loss $\ell_{\tau}(w, (x, y))$ is defined as $(1 - \frac{y w \cdot x}{\tau})_+$, where $(z)_+:=\max(0, z)$. Denote by $I(\cdot)$ the indicator function, that is, $I(A)$ is $1$ if predicate $A$ is true, is $0$ if $A$ is false.
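The $\tau$-hinge loss admits a direct transcription (helper names are ours):

```python
import numpy as np

def hinge_loss(w, x, y, tau):
    """tau-hinge loss: (1 - y * (w . x) / tau)_+ ."""
    return max(0.0, 1.0 - y * float(np.dot(w, x)) / tau)

w = np.array([1.0, 0.0])
# margin y * (w . x) >= tau  ->  zero loss
assert hinge_loss(w, np.array([2.0, 0.0]), +1, tau=1.0) == 0.0
# misclassified example -> loss exceeds 1; here 1 - (-1)(1)/0.5 = 3
assert hinge_loss(w, np.array([1.0, 0.0]), -1, tau=0.5) == 3.0
```

Shrinking $\tau$ sharpens the loss around the decision boundary, which is why the algorithm pairs it with the shrinking sampling bands.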
A vector $v$ in $\mathbb{R}^d$ is said to be $s$-sparse, if it has at most $s$ nonzero entries.
For an integer $s \in \{1,2,\ldots,d\}$, define $\HT_s(\cdot)$ as the hard thresholding operation that takes a vector $v$ in $\mathbb{R}^d$ as input, and outputs a vector that keeps $v$'s $s$ largest entries in absolute value (breaking ties lexicographically), and sets all its other entries to zero~\citep{BD09}.
In this paper, we focus on the setting where there is a sparse halfspace that performs well under $D$.
Specifically, denote by $\mathcal{H}_t := \{\sign(w \cdot x): w \in \mathbb{R}^d, \| w \|_0 \leq t \}$ the set of $t$-sparse halfspaces.
We consider the following two conditions on $D$:
\begin{definition}
A distribution $D$ over $\mathcal{X} \times \mathcal{Y}$ is said to satisfy the {\em $t$-sparse $\nu$-adversarial noise} condition for $\nu \in (0,1)$ and $t \in \{1,\ldots,d\}$, if there is a $t$-sparse unit vector $u$, such that $\mathbb{P}_D(\sign(u \cdot x) \neq y) \leq \nu$.
\label{def:an}
\end{definition}
Observe that under this condition, $h_u$ is not necessarily the optimal classifier in $\mathcal{H}$; in fact, it may not even be the optimal classifier in $\mathcal{H}_t$. Nevertheless, by the triangle inequality and Equation~\eqref{eqn:angdis}, the angle between $u$ and $w^*$ is at most $O(\nu)$.
It can be readily seen that if $t$ and $\nu$ are larger, the learning problem becomes more difficult. When $t = d$, the condition becomes the $\nu$-adversarial noise condition with respect to $\mathcal{H}$~\citep{ABL17}.
\begin{definition}
A distribution $D$ over $\mathcal{X} \times \mathcal{Y}$ is said to satisfy the {\em $t$-sparse $\eta$-bounded noise} condition for $\eta \in [0,\frac 1 2)$ and $t \in \{1,\ldots,d\}$, if there is a $t$-sparse unit vector $u$, such that
for every $x \in \mathcal{X}$, $\mathbb{P}_D(\sign(u \cdot x) \neq y | x) \leq \eta$.
\label{def:bn}
\end{definition}
Under this condition, it can be seen that $h_u$ is the Bayes optimal classifier, therefore $u$ coincides with $w^*$.
It can be readily seen that if $t$ and $\eta$ are larger, the learning problem becomes more difficult. When $t = d$, the condition becomes the $\eta$-bounded noise condition with respect to $\mathcal{H}$~\citep{MN06}.
Note that the above two conditions characterize different aspects of the data distribution $D$. The $t$-sparse $\nu$-adversarial noise condition only requires an upper bound on the total label flipping probability. On the other hand, the $t$-sparse $\eta$-bounded noise condition characterizes $D_{Y|X}$ everywhere in $\mathcal{X}$: for every instance $x$, the expected label $\mathbb{E}[y|x]$ has the same sign as $u \cdot x$.
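To make the $t$-sparse $\eta$-bounded noise condition concrete, the following simulation (ours, not from the paper) draws Gaussian examples, labels them with a $t$-sparse halfspace $u$, and flips each label independently with probability $\eta$; the resulting distribution satisfies the condition, and the empirical error of $\sign(u \cdot x)$ concentrates near $\eta$:

```python
import numpy as np

def sample_bounded_noise(n, d, t, eta, seed=0):
    """Draw n Gaussian examples in R^d labeled by a t-sparse halfspace u,
    flipping each label independently with probability eta."""
    rng = np.random.default_rng(seed)
    u = np.zeros(d)
    u[:t] = rng.normal(size=t)        # support on the first t coordinates
    u /= np.linalg.norm(u)            # unit norm, as in the definition
    X = rng.normal(size=(n, d))
    clean = np.sign(X @ u)
    flip = rng.random(n) < eta        # eta-bounded label noise
    y = np.where(flip, -clean, clean)
    return X, y, u

X, y, u = sample_bounded_noise(n=5000, d=20, t=3, eta=0.1)
err = np.mean(np.sign(X @ u) != y)    # concentrates near eta
```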
The following condition is a special case of the above two conditions, obtained by setting $\nu = 0$ or $\eta = 0$:
\begin{definition}
A distribution $D$ over $\mathcal{X} \times \mathcal{Y}$ is said to satisfy the {\em $t$-sparse realizable} condition, for $t \in \{1,2,\ldots,d\}$, if there is a $t$-sparse unit vector $u$, such that $\mathbb{P}_D(\sign(u \cdot x) \neq y) = 0$.
\label{def:r}
\end{definition}
\section{Introduction}
The task of text line segmentation arises as a particular case of physical layout analysis where the entities to segment are text lines of a text region.
Its importance in the document analysis field stems from the fact that many other tasks, such as word spotting or handwriting recognition, depend on the text line segmentation results.
The problem of detecting text lines was stated decades ago in the context of machine-printed text \cite{Nagy2000}. Since then, many methods have been proposed with remarkable results, to the point of being considered a solved problem for machine-printed text \cite{OGorman1993, Liang1999, Nagy1992, Plamondon2000}.
Printed text lines are expected to be uniform throughout the document, as well as to be free of line overlapping and warping effects. However, if these conditions are not satisfied, these methods can fail.
The segmentation of freestyle handwritten documents is still a challenging problem. The large variability in writing styles and possible document layouts generates a set of challenges to overcome.
First, text line orientation can vary along the document or within the same paragraph. Besides, it is also possible to find curved or broken text lines as a result of the writer's style.
Second, text lines can overlap with each other. This is caused by contact between ascenders and descenders of characters, or simply by cramped text. This effect is a problem for many methods, which expect a certain separation between lines.
Third, regarding the document layout, text can be located in any part of the document. For instance, text in letters is usually located at the center of the document. However, handwritten annotations in administrative documents can appear at any location on the page.
Many of the methods proposed in recent years focus on particular kinds of document collections. Other methods focus on specific problems, such as touching or curved lines, while others are tailored to particular scripts or document layouts, which makes them hard to generalise to other collections \cite{Sulem2007}.
Statistical approaches are less commonly applied to this task, and they are often limited to modeling local features or to post-processing steps. Markov Random Fields (MRF) have proved to be a good choice for many computer vision tasks, since they provide a strong statistical framework to model prior information about the problem and the relationships between the set of variables \cite{Wang2013}.
However, inference and parameter learning are intractable for certain model topologies with a large number of variables and high-order relationships. In these cases approximate methods are required to efficiently learn model parameters and perform inference tasks~\cite{Komodakis2015,Liu2015,Schwing2016}.
In this paper we propose a general method for handwritten text line segmentation based on the estimation of a set of regression lines. We successfully combine the Expectation-Maximization (EM) algorithm and variational approaches for parameter learning and inference on the model. We summarize the main contributions of this paper as follows:
\begin{enumerate}
\item It is a general method devised to be script, layout, and language independent. Besides, it can be applied on documents with complex layouts.
\item It can be easily extended with any prior knowledge of the task through the inclusion of new feature functions.
\item It performs parameter learning in an algorithm that combines MRF parameter learning within an EM process.
\end{enumerate}
The rest of the paper is organized as follows:
In Section 2 we review some of the main works and techniques proposed for the handwritten line segmentation task.
In Section 3 we describe the proposed model and learning algorithm.
In Section 4 we describe the initialization and post-process steps.
In Section 5 we describe an exhaustive evaluation and the obtained results. Finally, in Section 6 we present the conclusions of this work.
\section{Related Work}
In recent years there have been many attempts to tackle the task of text line segmentation from different perspectives. The variety of methods has motivated several contests and benchmark datasets \cite{Gatos2007, Gatos2009, Gatos2013}. As a particular case of physical layout analysis, common approaches are based on the bottom-up and top-down paradigms. However, hybrid approaches have also emerged, using a wide range of techniques.
Bottom-up approaches are based on analysis at the pixel level or at the connected component (CC) level. These methods group pixels, or CCs, first into characters, then into words and, ultimately, into lines. They usually obtain good results when there is a clear separation between lines and characters [19,20,21]. However, in conditions of crowded text they may result in text line overlapping. In some cases, these methods are complemented with a post-processing step where the overlapping is detected and treated apart \cite{Li2008}.
Different works usually differ in the grouping mechanism. Geometric relationships such as distance, angle, or similarity are common criteria \cite{Simon1997,Jaeger2006}. Clustering methods \cite{Yin2009} or the optimization of a fitting function \cite{Koo2012} have also been proposed. In \cite{Li2008} the level set method is used in combination with a probabilistic function to find line boundaries.
Top-down approaches analyze top-level entities, such as text blocks, and split them successively into lines and words.
Projection profile-based methods are the most representative of this type \cite{Nagy1992}. The idea is to project text pixels on the vertical axis and analyze the resulting histogram. In the ideal case, maximum and minimum peaks represent the location of the text lines and the line spacing, respectively \cite{Manmatha2005}.
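A minimal projection-profile baseline (our sketch, not a specific cited method) illustrates the idea: sum the text pixels per row and read line bands off the resulting histogram, merging bands separated by very small gaps:

```python
import numpy as np

def projection_profile_lines(binary_img: np.ndarray, min_gap: int = 3):
    """Locate text line bands from the horizontal projection profile.

    binary_img: 2-D array with text pixels as 1 and background as 0.
    Returns (row_start, row_end) intervals where the profile is nonzero.
    """
    profile = binary_img.sum(axis=1)          # text pixels per row
    ink = profile > 0
    bands, start = [], None
    for r, on in enumerate(ink):
        if on and start is None:
            start = r
        elif not on and start is not None:
            bands.append((start, r - 1))
            start = None
    if start is not None:
        bands.append((start, len(ink) - 1))
    # merge bands separated by gaps smaller than min_gap rows
    merged = [bands[0]] if bands else []
    for s, e in bands[1:]:
        if s - merged[-1][1] - 1 < min_gap:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    return merged
```

This simple form already exposes the weaknesses discussed next: a skewed or curved line smears its band into neighboring ones, which motivates processing the page in vertical strips.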
Sensitivity to orientation changes or curved lines is usually tackled by dividing the document into vertical strips and processing them separately \cite{Bruzzone1999}. The results on each of the strips are then aligned by means of geometrical properties \cite{kavallieratou2002, Pal2004} or probabilistic features \cite{Arivazhagan07,Papavassiliou2010}.
In addition, it is common to use simple top-down approaches to find initial text line locations, and then run a more sophisticated method to refine them \cite{Shi2009}.
These methods usually fail on freestyle handwritten documents where text is randomly spread over the whole document, or text lines have a high overlapping degree or curvature.
Hybrid methods combine bottom-up and top-down methodologies with other techniques.
The Hough transform is used to locate text lines by extracting a set of key points of the image and computing the lines that best fit these sets of points. These lines are then combined according to different criteria, such as contextual information \cite{Fletcher1988} or an exhaustive search approach \cite{Likforman1995}.
In general, Hough-based methods are highly affected by touching text lines and crowded text \cite{Louloudis2008, Pu1998, Shi2009}.
Morphology-based operators have also produced good results \cite{roy2008, Nicolaou2009, Alaei2011}. These methods analyze morphological properties of the documents to infer text line locations. The run-length smearing algorithm (RLSA) is a representative example of this approach \cite{Wong1982, Shi2004}. These methods obtain good results on skewed and curved lines. However, touching text lines still negatively affect performance.
Graph-based approaches, where lines are represented by minimum cost paths, and active contours (snakes) are other examples of methodologies applied \cite{Fernandez2014, Kumar2011, Liwicki2007, Bukhari2009, Bukhari2009b, Bukhari2013}.
The use of probabilistic graphical models has mainly focused on the tasks of document segmentation and text extraction \cite{Nicolas2007}. There, an MRF is defined according to the grid-like structure of the pixels, considering pairwise relationships between neighbors. The main challenge lies in the inference process. Exact inference is an NP-hard problem in general, and it becomes intractable for most loopy MRF configurations. Approximate algorithms such as belief propagation \cite{Pearl1982} and its extensions, like Generalized Belief Propagation (GBP), have been widely used for many segmentation tasks. However, these algorithms are not always guaranteed to converge. Variational methods based on the minimization of different kinds of convex free energies~\cite{Heskes2006} provide convergent extensions of the GBP algorithm~\cite{Yedidia05}. However, the convergence rate of these methods is still low, and in practice they cannot be applied to models with high-order cliques. Some approaches take advantage of distributed architectures to speed up learning and inference tasks~\cite{Schwing2016}. More recently, weighted mini-bucket (WMB) methods have gained relevance as a trade-off between inference accuracy and time complexity~\cite{Dechter2003,Liu2011,Flerova2016}. Hybrid methods, which combine sampling-based methods such as importance sampling (IS) with variational methods, have also been developed to increase both the accuracy and the efficiency of inference and parameter learning \cite{Liu2015}. However, there is still room for improvement in both inference and learning methods for MRF models.
\section{Model}
\label{sec:model}
In this section we describe the model proposed for the task of handwritten text line segmentation. For a given text line, our hypothesis is that, if we know the set of pixels that compose it, we can fit a regression line through these pixels that is a good estimate of the original line position. Moreover, each of these pixels will have a higher probability of being assigned to this line than to any other.
We select a random set of $N$ text pixels, ensuring a uniform distribution over the document image in order to cover all the textual components. The use of a random sample reduces the complexity of the overall method and, according to previous work, does not significantly affect the final result as long as the sample covers all the data~\cite{Cruz2013}.
We define an MRF model composed of two kinds of random variables. On the one hand, we have random variables $e=(x,y)$, which correspond to pixel coordinates; on the other hand, we have hidden variables, $h$, which denote the labels of text lines. The topology of our model is given by the Delaunay triangulation computed from the set of random pixels, as we show in~\figurename~\ref{fig:crfwim}. The result is an undirected graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ where the vertices in $\mathcal{V}$ are the variables $h$ and $e$. The set $\mathcal{E}$ is composed of two kinds of edges. First, we have edges between pixel coordinates $e$ and the corresponding text line label. Second, we have edges between adjacent hidden variables $h$.
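The graph construction described above can be sketched with `scipy.spatial.Delaunay` (a minimal sketch under the assumption that the sampled pixel coordinates are given as an $N \times 2$ array; the function name is ours):

```python
import numpy as np
from scipy.spatial import Delaunay

def build_mrf_edges(points):
    """Pairwise MRF edges (i, j), i < j, from the Delaunay triangulation
    of the sampled text-pixel coordinates (one hidden variable per pixel)."""
    tri = Delaunay(np.asarray(points))
    edges = set()
    for i, j, k in tri.simplices:            # each simplex is a triangle
        for a, b in ((i, j), (j, k), (i, k)):
            edges.add((int(min(a, b)), int(max(a, b))))
    return sorted(edges)
```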
\begin{figure}[ht]
\centering
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.7\linewidth]{crfwim.png}
\caption{Undirected graph: $\mathcal{G}=(\mathcal{V},\mathcal{E})$}
\label{fig:crfwim}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.6\linewidth]{crffactors2.png}
\caption{Factor graph}
\label{fig:crffactors}
\end{subfigure}
\caption{Illustration of a region of the proposed MRF. (a) Variables in green represent the {\em observed} pixels, $e$. In red, hidden variables $h$ representing the text line labels. (b) Illustration of the two types of factors. Green factors are the $v$ factors that relate the observed and the hidden variables. Red factors are the $u$ factors composed only of hidden variables.}
\label{fig:CRF}
\end{figure}
We represent our MRF model by a factor graph composed of two types of factor functions, in agreement with the two kinds of edges described above; see~\figurename~\ref{fig:crffactors}.
First, we have factor functions modeling dependencies between observed pixels, $e$, and hidden variables, $h$. These are third-order factors, since the pixel coordinates comprise two random variables, and we denote them by $\Psi_v$, with $v\in[1,\,N]$.
Second, we have factor functions modeling dependencies between pairs of hidden variables, which we denote by $\Psi_u$, where $u=\{i,j\}$ runs over the edges of the Delaunay triangulation. Thus, the MRF factorizes as a product of $\Psi_u$ and $\Psi_v$ as follows: %
\begin{equation}
\label{eq:ProbDistr}
p(e,h|\Theta) = \frac{1}{Z(\Theta)} \prod_{v} \Psi_{v} (e_{v},h_{v}|\Theta_a) \prod_{u} \Psi_{u}(h_{u}|\Theta_b) = \prod_v p_v(e_v|h_v,\Theta_a) p(h|\Theta_b)
\end{equation}
\noindent where $\Theta=( \Theta_a,\Theta_b)$ is the set of shared parameters, i.e. all factors $\Psi_v$ share the same parameters $\Theta_a$ and, similarly, all factors $\Psi_u$ share parameters $\Theta_b$. Note that the topology of $\mathcal{G}$ allows us to factorize the MRF model as a product of the conditional likelihood probabilities $p_v(e_v|h_v,\Theta_a)$ of pixels $e_v = (x_v,y_v)$ and the {\em prior} probability of the hidden variables $h$, $p(h|\Theta_b)$.
Our method relies on the classic EM algorithm~\cite{Dempster1977}. This algorithm is based on the definition of a function $Q$, which is the conditional expectation of the likelihood function of a probability density function:%
\begin{equation}
\label{EqQ}
Q(\Theta|\Theta') = \mathrm{E}_{h}(\log p(h,e|\Theta)|e,\Theta')
\end{equation}
\noindent thus, in the Expectation (E) step, $Q$ is evaluated given the current set of parameters $\Theta'$. Then, in the Maximization (M) step, new parameters $\Theta$ are computed. These new parameters are obtained by computing the partial derivatives of $Q$ with respect to each single model parameter $\theta_k \in \Theta$. This scheme is repeated until the two parameter sets $\Theta'$ and $\Theta$ coincide.
Our method essentially follows the same scheme. The main difference concerns the parameter learning step of the MRF model. First, in the E-step, we update the parameters of the {\em prior} probability $p(h|\Theta_b)$. We update these parameters using our proposed extension of GBP that allows parameter learning, which we explain in section~\ref{sec:GMP}.
With the learned parameters we can approximate the posterior probability of each single hidden variable $h_v$ given the coordinates $e_v$. Then, in the M-step, we update the parameters $\Theta_a$, which correspond to the regression lines. In summary, our proposed scheme is given in Algorithm~\ref{alg:EM}:
\begin{algorithm}
\begin{enumerate}
\item $\Theta=(\Theta_a,\Theta_b)$ initialization
\item E-step: parameter learning of {\em prior} probability
\begin{enumerate}
\item Update $\Theta_b$: $\Theta_b \leftarrow \Theta_b'$
\item Estimate $p(h_v|e_v,\Theta_a',\Theta_b)$
\end{enumerate}
\item M-step: estimation of regression lines
\begin{enumerate}
\item Update $\Theta_a$: $\Theta_a \leftarrow \Theta_a'$
\end{enumerate}
\item Repeat steps 2-3 until convergence
\item End
\end{enumerate}
\caption{EM algorithm for MRF models}
\label{alg:EM}
\end{algorithm}
In the remainder of this section we explain the linear regression scheme and how to estimate the new updates of its parameters $\Theta_a$. Then we explain how to learn model parameters linked to the prior probability $p(h|\Theta_b)$. We will conclude this section with the definition of the feature functions used for the handwritten text line segmentation task.
\subsection{EM algorithm for linear regression}
\label{sec:EM}
We defined a set of factor functions that encode the information within the MRF. Each factor function is composed of a set of feature functions $f_k$ and $g_k$, where $k$ runs in $I_{u}$ or $I_{v}$ depending on whether the feature function is defined on $\Psi_u$ or $\Psi_v$, respectively. These feature functions are embedded in the factors as:
\begin{equation}
\label{ff}
\begin{split}
\log \Psi_{u} &= \sum_{k \in I_{u}} f_k(h_{u} | \Theta_b) \\
\log \Psi_{v} &= \sum_{k \in I_{v}} g_k(h_{v},e_{v}| \Theta_a)
\end{split}
\end{equation}
\noindent Substituting the above definitions and the MRF model of Eq.~\eqref{eq:ProbDistr} into $Q$, we have:%
\begin{equation}
\label{EqQ2}
\begin{split}
Q(\Theta | \Theta') = \sum_v \sum_{h_v} \left[\sum_{k \in I_v}g_k(h_v,e_v | \Theta_a) - \log Z_v(h_v,\Theta_a)\right]p_v(h_v|e_v,\Theta_a',\Theta_b') + \\
+ \sum_u \sum_{k \in I_u} \left[ \sum_{h_u} f_k(h_u | \Theta_b) p_u(h_u|\Theta_b') \right] - \log Z_0(\Theta_b)
\end{split}
\end{equation}
\noindent where $Z_v(h_v,\Theta_a)$ and $Z_0(\Theta_b)$ denote, respectively, the partition function of the conditional likelihood probabilities and that of the {\em prior} probability. With this expression we find the new parameter updates by computing the local maximum of $Q$, which corresponds to the M-step.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\paperwidth, height=150pt]{crflines.png}
\caption{Hypothetical region of our graphical model that relates the pixels from words from consecutive lines. Messages sent through the dashed lines are supposed to favor a different label for each connected pixel. }
\label{fig:crflines}
\end{center}
\end{figure}
We use a linear regression model to fit the text lines in the document. The goal is to estimate a set of $L$ lines of the form $y_v = a_l x_v + b_l$, with vertical variance $\sigma^2_{l,t}$, from the set of pixels that compose each line. Besides, in order to fit the length of text lines, we define a pair of bounds that delimit a segment of $l$. These bounds are given with respect to the center of the segment $c_l$ by the horizontal variance $\sigma^2_{l,s}$. Therefore, a line $l$ is defined by the following five parameters: $\theta_a = \{a_l,b_l,c_l,\sigma_{l,t},\sigma_{l,s}\}$, which define two Gaussian density functions linked to the horizontal and vertical variances. The associated likelihood probabilities are: %
\begin{equation}\begin{split}
p_t(x_v,y_v | h_v=l,\theta_a) &\propto \exp \left\{ -\frac{(y_v - a_l x_v - b_l)^2}{2 \sigma_{l,t}^2} \right\} \\
p_s(x_v,y_v | h_v=l,\theta_a) &\propto \exp \left\{ -\frac{(x_v - c_l)^2}{2 \sigma_{l,s}^2} \right\}
\end{split}\label{Gauss1}
\end{equation}
\noindent for a pixel $e_v=(x_v,y_v)$ and a line $l$. These densities provide a measure of how well a particular pixel fits a line. \figurename~\ref{fig:crflines} shows an example of an MRF region with two regression lines across two hypothetical words from consecutive text lines $l_1$ and $l_2$. The vertical Gaussian is perpendicular to the regression line, since its purpose is to account for line residuals. The horizontal Gaussian, in contrast, is defined parallel to the x-axis, since it only controls the line length.
The update equations for each parameter are found by computing the partial derivatives of Eq.~\eqref{EqQ2} with respect to each parameter. The update expressions for $\Theta_a$ are similar to those in our previous work \cite{Cruz2013}, although in this case the posterior $p_v(h_v=l|e_v,\Theta_a',\Theta_b)$ is given by the inference algorithm explained later in section~\ref{sec:GMP}. We provide all details of their derivation in the supplementary material of this paper.
For a given document the number of parameters to estimate is $|\Theta_a| = 5L $. Note that only parameters $\sigma^2_{l,t}$ and $\sigma^2_{l,s}$ appear on the partition function $\log Z_v(h_v,\Theta_a)$:
\begin{equation}\begin{split}
a_l^{new} &= \frac{\sum_v (x_v - \bar{x}) (y_v - \bar{y}) p_v(h_v=l|e_v,\Theta_a',\Theta_b)} {\sum_v (x_v - \bar{x})^2 p_v(h_v=l|e_v,\Theta_a',\Theta_b)} \\
b_l^{new} &= \frac{\sum_{v} (y_v - a_l^{new} x_v) p_v(h_v=l|e_v,\Theta_a',\Theta_b)} {\sum_{v} p_v(h_v=l|e_v,\Theta_a',\Theta_b)} \\
c_l^{new} &= \frac{\sum_{v} x_v p_v(h_v=l|e_v,\Theta_a',\Theta_b)} {\sum_{v} p_v(h_v=l|e_v,\Theta_a',\Theta_b)} \\
\sigma^2_{l,t} &= \frac{\sum_{v} (y_v - a_l^{new} x_v -b_l^{new})^2 p_v(h_v=l|e_v,\Theta_a',\Theta_b)} {\sum_{v} p_v(h_v=l|e_v,\Theta_a',\Theta_b)} \\
\sigma^2_{l,s} &= \frac{\sum_{v} (x_v - c_l^{new})^2 p_v(h_v=l|e_v,\Theta_a',\Theta_b)} {\sum_{v} p_v(h_v=l|e_v,\Theta_a',\Theta_b)}
\end{split}\label{update1}
\end{equation}
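The update equations above amount to a per-line weighted least-squares fit. A NumPy sketch (variable names are ours, with `gamma[v, l]` standing for the posterior $p_v(h_v{=}l|e_v,\Theta_a',\Theta_b)$):

```python
import numpy as np

def m_step(x, y, gamma):
    """M-step updates: weighted least-squares fit of each regression line,
    with weights given by the posteriors gamma[v, l].

    x, y: pixel coordinates, shape (N,); gamma: shape (N, L).
    Returns a, b, c, var_t, var_s, each of shape (L,).
    """
    w = gamma / gamma.sum(axis=0)            # normalized responsibilities
    xb = w.T @ x                             # weighted mean of x per line
    yb = w.T @ y                             # weighted mean of y per line
    num = ((x[:, None] - xb) * (y[:, None] - yb) * gamma).sum(axis=0)
    den = ((x[:, None] - xb) ** 2 * gamma).sum(axis=0)
    a = num / den                            # slope
    b = (gamma * (y[:, None] - a * x[:, None])).sum(axis=0) / gamma.sum(axis=0)
    c = xb                                   # segment center
    var_t = (gamma * (y[:, None] - a * x[:, None] - b) ** 2).sum(axis=0) / gamma.sum(axis=0)
    var_s = (gamma * (x[:, None] - c) ** 2).sum(axis=0) / gamma.sum(axis=0)
    return a, b, c, var_t, var_s
```

With uniform responsibilities and a single line, this reduces to an ordinary least-squares fit, which is a quick sanity check of the update equations.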
In addition, we also estimate the \textit{prior} probability of each line $l$ given the updated parameters $\Theta_a$ as: %
\begin{equation}
\label{eq:pline}
p_v(h_v=l|\Theta_a,\Theta_b) = \frac{1}{N}{\sum_v p_v(h_v=l|e_v,\Theta_a,\Theta_b)}
\end{equation}
The key point is that the posterior probabilities $p_v(h_v=l|e_v,\Theta_a',\Theta_b)$ are unknown and, consequently, we cannot update the parameters of the regression lines. To overcome this problem, we run an approximate inference algorithm that allows us to learn the MRF parameters and estimate $p_v(h_v=l|e_v,\Theta_a',\Theta_b)$.
\subsection{Inference and Learning}
\label{sec:GMP}
In the previous section, we described how to estimate the parameters $\Theta_a$ linked to regression lines. However, parameters $\Theta_b$ remain unknown and still have to be learned.
Many parameter learning methods for MRF models rely on free energy methods. These are variational methods that approximate the true marginals by belief functions that satisfy a set of constraints. Free energies are quite close to the $Q(\Theta,\Theta')$ used within the EM algorithm and defined in Eq.~\eqref{EqQ2}, since both are defined in terms of the Kullback-Leibler divergence (KLD). For instance, the free energy associated with the Belief Propagation (BP) algorithm is the Bethe energy: %
\begin{equation}\begin{split}
F_{Bethe}( p(h|e,\Theta_a',\Theta_b) ) &= \sum_u p_u(h_u|\Theta_b)\log p_u(h_u|\Theta_b) + \\
&+ \sum_{v} n_v p_v(h_v|e_v,\Theta_a',\Theta_b) \log p_v(h_v|e_v,\Theta_a',\Theta_b)
\end{split}
\end{equation}
\noindent where $n_v$ are related to the number of neighbors of $h_v$, and can be negative. In our case, we have to include the information given by the likelihood functions of regression lines. So, we define the free energy as: %
\begin{equation}\begin{split}
F( p(h|e,\Theta_a',\Theta_b) ) &= \sum_u p_u(h_u|\Theta_b)\log p_u(h_u|\Theta_b) + \\
&+ \sum_{v} c_v p_v(h_v|e_v,\Theta_a',\Theta_b) \log p_v(h_v|e_v,\Theta_a',\Theta_b) + \\
&+ \sum_v\sum_{k\in I_v}\sum_{h_v} g_{k}(h_v,e_v|\Theta_a') p_v(h_v|e_v,\Theta_a',\Theta_b)
\end{split}\label{eq:FreeEnergy2}
\end{equation}
\noindent where the parameters $c_v>0$ are arbitrary positive real values. The approximate marginals and conditional marginals have to satisfy the usual constraints used in message-passing methods. We summarize them in Table~\ref{tab:constraint}.
First, since $p_u$ and $p_v$ are marginal approximations, they have to be {\em normalized}.
Second, we have to impose the {\em sum-normalization} constraint between $p_u(h_u|\Theta_b)$ and $p_v(h_v|e_v,\Theta_a',\Theta_b)$ to ensure consistency between marginal estimates.
Unlike usual message-passing algorithms, and in order to tie the prior probabilities estimated by the model to the observed data, we impose consistency between the prior probability of single variables $h_v$, $p_u(h_v|\Theta_b)$, and the posterior probability $p_v(h_v|e_v,\Theta_a',\Theta_b)$.
Finally, we have to ensure coherence between the observations, encoded in the empirical moments $\mu_k$, and the model predictions.
This last set of constraints is called the {\em moment-matching} constraint, and it provides the parameter learning step for the pairwise parameters and the global prior probability, Eq.~\eqref{eq:pline}.
Thus, the minimization of Eq.~\eqref{eq:FreeEnergy2} results in a constrained minimization problem that can be solved by means of Lagrange multipliers.
\begin{table}
\begin{center}
\begin{tabular}{c|c|c}
\hline
Constraint & Formula & L. Multiplier \\
\hline \hline
\textit{normalization} & $\sum_{h_u} p_u(h_u|\Theta_b) = 1$ & $\nu_u$ \\
& $\sum_{h_v} p_v(h_v|e_v,\Theta_a',\Theta_b) = 1$ & $\nu_v$ \\
\textit{sum-normalization} &$ \sum_{h_{u \setminus v }} p_u( h_{u}|\Theta_b) = p_v(h_v|e_v,\Theta_a',\Theta_b)$ & $\lambda$\\
\textit{moment-matching} & $\sum_u f_k(h_u) p_u(h_u|\Theta_b) = \mu_k$ & $\theta_k$ \\
\hline
\end{tabular}
\end{center}
\caption{Set of constraints and its corresponding Lagrange multipliers for the optimization problem of Eq.~\eqref{eq:FreeEnergy2}.}
\label{tab:constraint}
\end{table}
Algorithm~\ref{alg:MP} is the numerical implementation of the block gradient descent method applied to the dual problem obtained from the previous minimization problem.
We provide details of this algorithm in the supplementary material of this paper. Basically, the partial derivative with respect to $\theta_k$ provides the parameter learning equation, with a step length chosen according to the Armijo conditions. The partial derivative with respect to $\lambda_{v \rightarrow u}(h_v)$ leads to the usual message-passing equations.
After convergence of the algorithm, we are able to get the final value of $p_v(h_v|e_v,\Theta_a',\Theta_b)$ required for the estimation of the new parameters $\Theta_a$.
\begin{algorithm}
\KwData{ $\{ \mu_k\}$: empirical moments.}
\KwResult{$\Theta_b$, $\Lambda$: model parameters, $\{p_v(h_v|e_v,\Theta_a',\Theta_b), p_u(h_u|\Theta_b)\}$ marginals.}
Initialize: $\theta_k= 0$, $\theta_k\in\Theta_b$, $m_{v \to u}(h_v)=1$;\\
\While{ not converged}
{
\For{ $\forall k \in \{I_u\}$ }{
$$\theta_k \leftarrow \theta_k' + \eta \left(\sum_{h_u} f_k(h_u) p_u(h_u|\Theta_b') - \mu_{k} \right )$$
}
\For{ $\forall v$ }
{
\For{ $\forall u \supset v$ }
{
$$ p_u(h_v|\Theta_b)=\sum_{h_{u \setminus v}} p_u(h_u|\Theta_b) $$
$$m_{v \leftarrow u}(h_u)=\frac{p_u(h_v|\Theta_b)}{m_{v \to u}(h_v)}$$
}
$$p_v(h_v|e_v,\Theta_a',\Theta_b) = \frac{1}{Z_v} \left(e^{\sum_k g_k(h_v,e_v|\Theta_a') }\prod_{u \supset v} m_{v \leftarrow u}(h_u) \right )^{\!\!\frac{1}{c_v+A_v}}$$
\For{ $\forall u \supset v$ }
{
$$m_{v \to u}(h_v) = \frac{p_v(h_v|e_v,\Theta_a',\Theta_b)}{m_{v \leftarrow u}(h_u)}$$
$$p_u(h_u|\Theta_b) = \frac{1}{Z_u}\left(e^{ -\sum_k \theta_kf_k(h_u) }\prod_{v \subset u} m_{v \to u}(h_v)\right)^{\!\!\frac{1}{c_v}}$$
}
}
}
\caption{Message passing algorithm for the constrained minimization of the free energy in Eq.~\eqref{eq:FreeEnergy2}. $Z_{u}$ and $Z_{v}$ are the partition functions and $\eta$ is a step length satisfying the Armijo condition.}
\label{alg:MP}
\end{algorithm}
\subsection{Feature functions}
\label{sec:featurefunctions}
In previous sections we defined a general pairwise MRF model adapted to the detection of an unknown number of text lines. This model allows a wide range of unary feature functions to estimate text line positions and pairwise feature functions to model text line labels between adjacent pixels. We now describe the set of feature functions $f_k$ and $g_k$ defined in \eqref{ff} used for the task of handwritten text line segmentation.
\paragraph{\textbf{Local fitting}}
This function uses the information provided by the two Gaussian distributions defined in Eq.~\eqref{Gauss1}, with a slight modification inspired by \cite{Koo2012}: the Gaussian is flattened at its maximum value. The width of this Gaussian plateau is controlled by a threshold $S_l$, which is computed as in \cite{Koo2012} and estimates the interline space above and below line $l$. In summary, we define this function as:
\begin{equation}
g_k(h_v = l, e_v|\Theta_a) \triangleq \left\{ \begin{array}{cclc}
- \frac{(x - c_l)^2}{2 \sigma_{l,s}^2} & & if & d_l \leq r S_l \\
- \frac{(y - a_l x - b_l)^2}{2 \sigma_{l,t}^2} - \frac{(x - c_l)^2}{2 \sigma_{l,s}^2} & & if & d_l > r S_l
\end{array}
\right.
\label{ff2}
\end{equation}
\noindent where $d_l$ is the residual of the regression line and $r \in [0,\,1]$. This flattening procedure slightly modifies the computation of the partition function of the likelihood probabilities $p_v(e_v|h_v,\Theta_a',\Theta_b)$, but it still depends only on the variances $\sigma^2$; therefore, the update equations in Eq.~\eqref{update1} remain valid.
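A sketch of this flattened-Gaussian feature (our naming; `S` stands for the interline estimate $S_l$, and the plateau test uses the residual $d_l$ of the regression line):

```python
def local_fitting(x: float, y: float, line: dict, r: float = 0.5) -> float:
    """Flattened-Gaussian local fitting feature.

    line: dict with keys a, b, c, var_t, var_s and interline estimate S.
    Inside the plateau (residual d <= r*S) only the horizontal term applies;
    outside it, the vertical term is added as well.
    """
    d = abs(y - line["a"] * x - line["b"])   # residual w.r.t. the line
    g = -(x - line["c"]) ** 2 / (2 * line["var_s"])
    if d > r * line["S"]:
        g -= d ** 2 / (2 * line["var_t"])
    return g
```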
\paragraph{\textbf{Line probability}}
This function integrates the {\em prior} probability computed in Eq.~\eqref{eq:pline} into the learning process described in Algorithm~\ref{alg:MP}. This prior probability can be seen as a moment of the indicator function $[h_v=l]$ and consequently we can learn its associated parameter $\theta_l$. We update the corresponding empirical moment $\mu_k$ with the line probability estimated in each iteration. In our case, for each line the empirical moment is $\mu_l=p_v(h_v=l|\Theta_a,\Theta_b)$. Thus, the function is defined as:
\begin{equation}
f_k(h_v=l | \Theta_b) \triangleq \theta_l [h_v = l]
\label{ff3}
\end{equation}
With this function we expect to avoid assigning variables to surplus lines and to reinforce the regression lines with higher probabilities.
\paragraph{\textbf{Pairwise function}}
Pairwise functions encode the probability of assigning a set of labels to neighboring variables.
In our task this function encodes some assumptions about the configuration of the lines. For example, in a given document, two connected variables are more likely to belong to the same text line, i.e. share the same label, or, at most, to consecutive lines. Besides, some documents may have two connected variables from non-consecutive text lines, although these represent few cases with respect to the most common layouts.
We define our pairwise function according to these three possible scenarios.
The function is defined on $\Psi_u$ and returns the parameter associated with each possible case:
\begin{equation}
f_k(h_i,h_j|\Theta_b) \triangleq \left\{ \begin{array}{llc}
\theta_0 & if & |h_{i} - h_{j}| = 0 \\
\theta_1 & if & |h_{i} - h_{j}| = 1 \\
\theta_2 & if & |h_{i} - h_{j}| \geq 2
\end{array}
\right.
\label{ff1}
\end{equation}
\noindent where ${\theta_0,\theta_1, \theta_2}$ are parameters in $\Theta_b$ shared by all pairs of hidden variables in $u$ and learned with Algorithm~\ref{alg:MP}.
The empirical moments $\mu_k$ for this function are learned from the training set by analyzing the frequency of each considered case.
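This case analysis reduces to indexing by the clipped label distance (a one-line sketch; names are ours):

```python
def pairwise_feature(h_i: int, h_j: int, theta: tuple) -> float:
    """Pairwise feature: theta = (theta0, theta1, theta2) for same line,
    consecutive lines, and non-consecutive lines, respectively."""
    return theta[min(abs(h_i - h_j), 2)]
```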
In summary, we have $5L$ parameters to estimate during the M-step and $3+L$ parameters in $\Theta_b$ to learn.
\section{Initialization and final labeling}
In this section we describe the steps required to configure our method for the task of handwritten text line segmentation.
First, we define the initialization step, which is crucial for the good performance of the EM algorithm. Second, we describe the post-processing and final labeling.
\subsection{Initialization}
\label{sec:Initialization}
The initialization of our method for handwritten line segmentation consists of two steps. First, we detect the different text regions that compose the document image. Then, for each of them, we initialize the parameters of the regression lines.
\paragraph{\textbf{Text region segmentation}}
Our region segmentation process is based on the segmentation method from \cite{Xiao2003}.
According to the Delaunay triangulation that defined the MRF structure, we analyze the lengths of the sides of the triangles in order to find an image-dependent threshold that identifies the edges connecting different regions. Once it is computed, we remove the triangles whose longest side is above this value. In this way the different regions are isolated. More details of this process and the computation of the threshold can be found in the referenced paper. This step provides flexibility to our method, since it is able to work on documents with complex layouts by dividing the problem into smaller and simpler ones.
An example of this process is shown in \figurename~\ref{fig:Delaunay}.
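A minimal sketch of this edge-pruning idea, assuming a fixed length threshold instead of the image-dependent one computed as in \cite{Xiao2003}:

```python
import numpy as np
from scipy.spatial import Delaunay

def split_regions(points, max_edge_len):
    """Triangulate the sampled text pixels and drop every edge longer
    than max_edge_len; the connected components of the remaining graph
    are the candidate text regions."""
    tri = Delaunay(points)
    n = len(points)
    adj = [[] for _ in range(n)]          # adjacency over short edges only
    for simplex in tri.simplices:
        for a, b in [(0, 1), (1, 2), (0, 2)]:
            i, j = simplex[a], simplex[b]
            if np.linalg.norm(points[i] - points[j]) <= max_edge_len:
                adj[i].append(j)
                adj[j].append(i)
    # label connected components with a simple depth-first traversal
    labels = -np.ones(n, dtype=int)
    current = 0
    for s in range(n):
        if labels[s] != -1:
            continue
        stack, labels[s] = [s], current
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if labels[v] == -1:
                    labels[v] = current
                    stack.append(v)
        current += 1
    return labels
```

Two clusters of points farther apart than the threshold end up with different region labels.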
\begin{figure}[t]
\centering \begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=1\linewidth]{orig.png}
\caption{}
\end{subfigure}\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=1\linewidth]{draw_1.png}
\caption{}
\end{subfigure}\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=1\linewidth]{draw2_1.png}
\caption{}
\end{subfigure}
\caption{Illustration of the text region segmentation process: a) Original image. b) Delaunay triangulation computed on the set of selected random pixels. c) Result of the process after removing the selected triangles isolating several text regions.}
\label{fig:Delaunay}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{orig_1.png}
\caption{}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{blurredcrop_1.png}
\caption{}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{blobscrop_1.png}
\caption{}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{initial_1.png}
\caption{}
\end{subfigure}
\caption{Example of an inaccurate initialization in a document image with crowded text: a) Original image. b) Result of Gaussian filtering. c) Resulting candidate blobs. d) Initial candidate regression lines.}
\label{fig:Initialization}
\end{figure}
\paragraph{\textbf{Initial line hypothesis}}
It is known that the EM algorithm is often sensitive to the initial choice of parameters. An inaccurate initialization of line parameters may lead the method to a local maximum that does not correspond to the best text line fitting.
We combine several common techniques to propose an initial set of regression lines.
\begin{itemize}
\item Blob estimation: We apply several steps based on the work in \cite{Ziaratban2010} for skew correction and blob identification. We apply a bank of anisotropic 2D Gaussian filters of size $W\times H$
over a range of orientations $\alpha$ and select the one with the best response on the projection profile. A similar approach was previously proposed in~\cite{Bukhari2009, Bukhari2009b}. Then, we apply Otsu binarization to the filtered image in order to obtain a set of blobs that represent approximate line locations.
\item Overlap detection: We analyze the obtained blobs in order to detect overlaps caused by touching or curved lines in the document.
To do so, we compute the mean connected-component height and split the blobs proportionally to a threshold $t_o$ of this value.
Besides, we identify residual blobs resulting from filtered diacritics or noise components. We compute the ratio of text within each blob and remove those under a threshold $t_r$ learned from the training set.
\item Line estimation: The number of resulting blobs defines the initial number of candidate lines. For each blob, we estimate the regression line parameters with ordinary least squares on the set of pixels that compose it.
\end{itemize}
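The last step above can be sketched as follows, assuming the candidate blobs are already available as arrays of pixel coordinates:

```python
import numpy as np

def init_line_params(blobs):
    """For each candidate blob, estimate the regression line y = a*x + b
    with ordinary least squares over the blob's pixel coordinates.
    `blobs` is a list of (N_i, 2) arrays of (x, y) pixels per blob."""
    lines = []
    for pix in blobs:
        x, y = pix[:, 0], pix[:, 1]
        a, b = np.polyfit(x, y, deg=1)   # slope and intercept
        lines.append((a, b))
    return lines
```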
The initialization step by itself can already yield a good segmentation in documents with simple layouts where lines are properly separated. In these cases, our subsequent inference process converges in a few iterations.
However, in complex documents with crowded or slightly curved text the process is more challenging and the initialization is usually not accurate enough, producing over-segmented text lines and incorrect initial line locations. \figurename~\ref{fig:Initialization} shows an example of a challenging image where only a few initial lines fit the correct text lines exactly.
A direct consequence of the initialization step is a possible over-estimation of the number of lines.
A text line may be approximated by two or more initial line segments. Besides, some diacritics from non-Romance scripts may also be approximated by short line segments.
This effect is not a drawback for our method, but the opposite. An initial over-segmentation is recommended, since we need to ensure that we fit enough lines to cover all the text lines. If fewer than the correct number are initialized, some textual components will likely be assigned to the wrong line, producing several missed detections.
\subsection{Post-process and final labeling}
In the post-processing step we analyze the obtained result in order to detect and merge possibly fragmented lines and remove surplus ones. After that, we label each textual connected component according to the probability given by the MRF model.
\paragraph{\textbf{Surplus line removal}} We remove the extra lines remaining after the algorithm converges.
Extra lines are characterized by a probability close to zero. We detect and remove them by identifying the lines whose probability is below a value $\varepsilon$ fixed beforehand.
\paragraph{\textbf{Fragmented lines}}
The over-segmentation from the initialization step may lead to a fragmentation of a text line. Since our model is linear, the method deals with curved lines by splitting the line into two or more segments.
We analyze the relative position between the lines in order to identify these cases and unify the fragments into a single line.
\paragraph{\textbf{Final labeling}}
For each variable $v$ we select the line $l$ that maximizes the probability $p_v(h_v = l|e_v,\Theta)$. We assign the connected component that contains the variable to that line only if all the variables within the component share the same label. Multiple labels in one component usually correspond to touching characters. In that case we label each pixel of the component by its distance to the closest regression line.
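A sketch of this labeling rule, assuming the per-variable posteriors are already computed and, for simplicity, measuring distance as the vertical residual to each regression line:

```python
import numpy as np

def label_component(var_probs, pixels, lines):
    """Assign a line label to one connected component.
    var_probs: (V, L) posteriors p(h_v = l | e_v) for the V variables
               sampled inside the component.
    pixels:    (P, 2) array of all (x, y) pixels of the component.
    lines:     list of (a, b) regression line parameters.
    Returns one label per pixel."""
    votes = var_probs.argmax(axis=1)
    if len(set(votes)) == 1:
        # all variables agree: the whole component gets that line
        return np.full(len(pixels), votes[0])
    # disagreement (typically touching characters): label each pixel
    # by distance to the closest regression line
    x, y = pixels[:, 0], pixels[:, 1]
    dists = np.stack([np.abs(y - (a * x + b)) for a, b in lines], axis=1)
    return dists.argmin(axis=1)
```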
\section{Experiments}
\label{Exp}
In this section, we describe the experiments performed for the task of handwritten line segmentation.
We carry out a thorough evaluation on multiple benchmark datasets in order to demonstrate the generality of our method on documents with different types of layouts and characteristics.
In an additional experiment, we also show the impact of the random pixel selection under different configurations.
\subsection{Parameters and settings}
Throughout the previous sections we defined a set of parameters that we fix beforehand.
In the initialization step we apply a set of Gaussian filters with orientations in the range $\alpha = [-40,40]$ degrees, and a filter size of $H = \frac{1}{3} H_{cc}$ and $W = 10 W_{cc}$ with vertical and horizontal standard deviations of $\frac{1}{3} H_{cc}$ and $\frac{10}{3} W_{cc}$, respectively. To identify overlapped and residual blobs we experimentally set $t_o = 2\bar{H}_{cc}$ and $t_r = 0.08$.
We fix the ratio $r=0.3$, see Eq.~\eqref{ff2}, and the prior text line probability threshold $\varepsilon=10^{-3}$ for extra text line removal. We fix the maximum number of iterations to $50$ and set the KLD criterion to $K=10^{-4}$.
We learn the pairwise moments $\mu_k$, see Eq.~\eqref{ff1}, from the training set of ICDAR 2013. In addition, we set $c_v =1$. We use this parameter configuration for all the experiments, since the training set represents an accurate sample of common handwritten script.
\subsection{Metrics}
We report results according to the same metrics used in the ICDAR segmentation contests. The metric is based on counting the number of matches between the detected text lines and the ground-truth text lines by computing the MatchScore table at pixel level \cite{Phillips1999}. It consists of: detected lines (M), one-to-one matches (o2o), Detection Rate (DR\%), Recognition Accuracy (RA\%) and F-measure (FM\%).
For the datasets on which the ICDAR evaluation tool cannot be used, we provide results in terms of precision, recall and F-measure computed at pixel level. When possible, we compute confidence intervals with confidence value $\alpha=0.05$.
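For the datasets evaluated at pixel level, the precision/recall/F-measure computation can be sketched as follows (the one-to-one MatchScore matching of \cite{Phillips1999} is not reproduced here):

```python
import numpy as np

def pixel_prf(pred_mask, gt_mask):
    """Pixel-level precision, recall, and F-measure between a predicted
    text line mask and the ground-truth mask (boolean arrays)."""
    tp = np.logical_and(pred_mask, gt_mask).sum()
    precision = tp / max(pred_mask.sum(), 1)
    recall = tp / max(gt_mask.sum(), 1)
    fm = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, fm
```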
\subsection{Datasets}
We evaluate our method on several benchmark datasets.
On the one hand, we evaluate it on the ICDAR 2009 and 2013 handwriting segmentation contest datasets. These datasets contain regular text documents where the text is the main part of the page. In general the documents are free of graphical or non-text elements, although some of them may contain small amounts of noise.
The ICDAR 2009 dataset is composed of 200 test images with 4043 text lines. The documents contain the same extract of text written by several writers in several languages (English, German, Greek and French).
The ICDAR 2013 dataset is an update of the previous one. It contains 150 test images with 2649 text lines, also written by different writers and in several languages. New features comprise the addition of more complex scripts, such as Bangla, and new layouts, such as multi-paragraph documents and complex skewed and cramped documents. Figure~\ref{fig:ICDAR} shows some examples of documents from this dataset.
\begin{figure} [t]
\centering
\begin{subfigure}{.33\textwidth}
\centering
\fbox{\includegraphics[width=.9\linewidth]{1.png}}
\caption{}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\fbox{\includegraphics[width=.9\linewidth]{2.png}}
\caption{}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\fbox{\includegraphics[width=.9\linewidth]{3.png}}
\caption{}
\end{subfigure}
\caption{Three examples of document images from the ICDAR 2013 segmentation contest. This dataset contains documents written in several languages and free of graphical or non-text elements.}
\label{fig:ICDAR}
\end{figure}
On the other hand, we evaluate on the documents of the George Washington database~\cite{Fischer2012}. This database is composed of 20 gray-scale images from the George Washington Papers at the Library of Congress, dated from the 18th century. The documents are written in English in a longhand script. This database adds a set of different challenges with respect to the previous ones due to the old script style, overlapping lines and a more complex layout. Also, documents may contain non-text elements such as stamps or line separators. We show several examples in \figurename~\ref{fig:GW}.
We use the ground truth introduced for this task in \cite{Fernandez2014}, since there is no public ground truth for line segmentation on this database. For this reason, it is not possible to compare with methods other than our previous works and \cite{Fernandez2014}. We present the results as an indicator of the adaptability of our method to historical documents.
\begin{figure}[t]
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.9\linewidth]{1_1.png}
\caption{}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.9\linewidth]{2_1.png}
\caption{}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=.9\linewidth]{3_1.png}
\caption{}
\end{subfigure}
\caption{Three examples of document images from the George Washington dataset. The dataset contains gray-scale document images including some non-text elements such as stamps or separator lines.}
\label{fig:GW}
\end{figure}
Last, we test our method on a collection of administrative documents with handwritten annotations. This is a more heterogeneous and complex dataset, since it contains documents with multiple text regions, each with different characteristics such as orientation and writing style.
The collection includes letter-type documents, annotations in machine-printed documents, information from bank checks and other documents with complex layouts.
The documents in the dataset are the result of a previous machine-printed text separation step \cite{Belaid2014}, applied in order to remove all possible non-handwritten components. We apply the line segmentation algorithm on the handwritten layer without any further filtering.
The dataset is written in English and French and is composed of 433 document images. We show some examples in \figurename~\ref{fig:DOD}.
\begin{figure} [t]
\centering
\begin{subfigure}{.33\textwidth}
\centering
\fbox{\includegraphics[width=.8\linewidth]{d1.png}}
\caption{}
\label{fig:DODa}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\fbox{\includegraphics[width=.8\linewidth]{d2.png}}
\caption{}
\label{fig:DODb}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\fbox{\includegraphics[width=.8\linewidth]{d5.png}}
\caption{}
\label{fig:DODc}
\end{subfigure}
\caption{Three documents from the dataset of administrative annotated documents. In b) we observe some residues from the machine-printed separation step that could be interpreted as text components. The dataset contains documents with multiple layouts and text configurations.}
\label{fig:DOD}
\end{figure}
\subsection{Random pixel selection}
We aim at analyzing the impact of the density of random text pixels selected for the construction of the graphical model. We conduct this experiment on the ICDAR 2013 dataset for pixel ratios of $1\%, 3\%, 5\%, 10\%$ and $15\%$ of the total amount of text pixels. Table~\ref{tab:randompoints} shows the obtained results in terms of F-measure and mean processing time, with their corresponding confidence intervals.
We see that values above $5\%$ do not produce significant improvements in the results, while the computational cost increases considerably due to the large number of variables and connections in the MRF model.
With $1\%$ of the pixels, we obtain a $95.52\%$ FM in almost a quarter of the computational time required for $5\%$. However, the reduction in the number of pixels may leave some text regions uncovered, which can lead to an incorrect segmentation.
Besides, we observe that the confidence interval widens for higher numbers of pixels, which implies that the method becomes less stable.
For the rest of the experiments we select $5\%$ of the text pixels as the standard value, since it provides a good trade-off between data representation and time complexity.
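The sampling step itself is straightforward; a sketch with a fixed random seed for reproducibility:

```python
import numpy as np

def sample_text_pixels(binary_img, ratio, seed=0):
    """Select a random subset of the foreground (text) pixels of a
    binarized image; ratio=0.05 keeps 5% of the text pixels, the
    trade-off chosen in the experiments. Returns (x, y) coordinates."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(binary_img)
    n = max(1, int(round(ratio * len(xs))))
    idx = rng.choice(len(xs), size=n, replace=False)
    return np.stack([xs[idx], ys[idx]], axis=1)
```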
\begin{table}[h]
\begin{center}
\begin{tabular}{c|c|c}
\hline
(\%) of points & FM(\%) & Time(mean) \\
\hline \hline
1\% & 95.52 $\pm$ 1.46 & 12.6s $\pm$ 1.1 \\
3\% & 96.95 $\pm$ 1.24 & 28.4s $\pm$ 2.7 \\
5\% & 97.05 $\pm$ 1.17 & 42.4s $\pm$ 3.3 \\
10\% & 97.05 $\pm$ 1.18 & 81.3s $\pm$ 7.7 \\
15\% & 97.05 $\pm$ 1.25 & 115.1s $\pm$ 11.2 \\
\hline
\end{tabular}
\end{center}
\caption{Results for different percentages of random points selected for the construction of the graphical model. Values above $5\%$ do not produce significant improvements in the results, while the computational cost increases considerably.}
\label{tab:randompoints}
\end{table}
\subsection{ICDAR segmentation contests}
We show in Table~\ref{tab:ICDAR2009} the results obtained on the ICDAR 2009 Handwriting Segmentation dataset.
We obtain a $98.68\%$ FM value, with a confidence interval of $[98.23, 99.13]$. This result is on par with the top methods of the competition and surpasses the result obtained by our previous work using a simpler probabilistic model \cite{Cruz2013}.
In addition, the analysis of the results shows that $166/200$ images reach $100\%$ FM, while the main errors are concentrated in a few cases.
The first type of error is related to extra lines not removed in the post-processing step, which end up fitting diacritics or small isolated text components.
This type of error has a large impact on the numerical results, since it implies an extra detection and may affect several one-to-one text line associations. However, it has no impact on subsequent text recognition tasks, since the text lines themselves are usually well segmented. A severe case of an extra line is shown in \figurename~\ref{fig:extralines}.
The second type of error occurs in areas where several touching characters converge. In this case, the high connectivity within the MRF in this area may favor the same labeling for all the text components instead of separating them into distinct text lines. An example of this error can be seen in \figurename~\ref{fig:touchingwords}.
\begin{table}[t]
\begin{center}
\begin{tabular}{ c | c | c | c | c | c }
\hline
Method & M & o2o & {DR (\%)} & {RA (\%)} & {FM (\%)} \\ \hline \hline
CUBS & 4036 & 4016 & 99.55 & 99.50 & 99.53 \\
ILSP-LWSeg-09 & 4043 & 4000 & 99.16 & 98.94 & 99.05 \\
HandwritingPAIS & 4031 & 3973 & 98.49 & 98.56 & 98.52 \\
CMM & 4044 & 3975 & 98.54 & 98.29 & 98.42 \\
Fernandez \textit{et al.} \cite{Fernandez2014} & 4176 & 3971 & 98.40 & 95.00 & 96.67 \\
CASIA-MSTSeg & 4049 & 3867 & 95.86 & 95.51 & 95.68 \\
Cruz \textit{et al}. \cite{Cruz2013} & 4061 & 3858 & 95.60 & 95.00 & 95.20 \\
PortoUniv & 4028 & 3811 & 94.47 & 94.61 & 94.54 \\
PPSL & 4084 & 3792 & 94.00 & 92.85 & 93.42 \\
LRDE & 4423 & 3901 & 96.70 & 88.20 & 92.25 \\
Jadavpur Univ & 4075 & 3541 & 87.78 & 86.90 & 87.34 \\
ETS & 4033 & 3496 & 86.66 & 86.68 & 86.67 \\
AegeanUniv & 4054 & 3130 & 77.59 & 77.21 & 77.40 \\
REGIM & 4563 & 1629 & 40.38 & 35.70 & 37.20 \\
\hline
\hline
{Proposed} & {4044} & {3986} & {98.81} & {98.56} & {98.68} \\
\hline
\end{tabular}
\end{center}
\caption{Results on the ICDAR 2009 handwriting segmentation contest~\cite{Gatos2009}. We improve our previously reported results and obtain FM values on par with other state-of-the-art methods.}
\label{tab:ICDAR2009}
\end{table}
In Table~\ref{tab:ICDAR2013} we show the results obtained on the ICDAR 2013 Handwriting Segmentation dataset. The additional complexity of this dataset is reflected in the results, where we obtain a $97.05\%$ FM value with a confidence interval of $[95.80, 98.30]$.
In comparison with the rest of the methods, ours is slightly below the top performers in quantitative terms, although it surpasses many of them. Moreover, we report a total of $125/150$ images labeled with $100\%$ FM, and according to the confidence interval we can say that our method is stable across the whole dataset.
As in the previous experiment, $80\%$ of the errors are related to extra lines fitting isolated components. The remaining $20\%$ is related to the new characteristics of this dataset, for instance crowded images where our overlap detector is unable to split the merged lines (\figurename~\ref{fig:severalerrors}). Nevertheless, in practice our method is able to deal with the majority of these situations, as seen in \figurename~\ref{fig:challenges}.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|c|c|c|c|c}
\hline
Method & M & o2o & DR(\%) & RA(\%) & FM(\%) \\
\hline \hline
INMC & 2614 & 2614 & 98.68 & 98.64 & 98.66 \\
NUS & 2645 & 2605 & 98.34 & 98.49 & 98.41 \\
GOLESTAN-a & 2646 & 2602 & 98.23 & 98.34 & 98.28 \\
CUBS & 2677 & 2595 & 97.96 & 96.94 & 97.45 \\
IRISA & 2674 & 2592 & 97.85 & 96.93 & 97.39 \\
LRDE & 2632 & 2568 & 96.94 & 97.57 & 97.25 \\
Fernandez \textit{et al.} \cite{Fernandez2014} & 2697 & 2551 & 96.30 & 94.58 & 95.43 \\
QATAR-b & 2609 & 2430 & 91.73 & 93.14 & 92.43 \\
MSHK & 2696 & 2428 & 91.66 & 90.06 & 90.85 \\
CVC & 2715 & 2418 & 91.28 & 89.06 & 90.16 \\
\hline
\hline
{Proposed} & {2647} & {2570} & {97.01} & {97.09} & {97.05} \\
\hline
\end{tabular}
\end{center}
\caption{Results on the ICDAR 2013 handwriting segmentation contest~\cite{Gatos2013}. Our method obtains results on par with the contestant methods and improves our previously reported results.}
\label{tab:ICDAR2013}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=.95\linewidth]{extralines.png}
\caption{Example of the impact of an extra regression line not removed in the post-processing step. In this severe case, several lines are affected by the incorrect assignment of some of their text pixels to the extra regression line (text line in green).}
\label{fig:extralines}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=.95\linewidth]{2009.png}
\caption{Segmentation error in a region with several touching characters. This is an example of how the messages sent between variables from the involved words favor the same labeling for every component in conflict.}
\label{fig:touchingwords}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=.95\linewidth]{overlap.png}
\caption{Example of several overlapped lines that were not detected in the initialization step. This produces an incorrect number of initial candidates which is difficult to overcome in the rest of the process.}
\label{fig:severalerrors}
\end{center}
\end{figure}
\subsection{George Washington dataset}
Table~\ref{tab:GW} shows the results obtained on the George Washington dataset. We improve our previous results with a $90.06\%$ FM and an increase in DR of almost $10\%$. The lower RA values are caused by non-text components, since we have not modeled some specific features of this dataset that could otherwise be integrated for better performance. However, we deliberately use the same model configuration for all the experiments, in order to evaluate the adaptability to other datasets without parameter tuning or additional training.
For instance, the detection of non-text elements such as stamps, text line separators or underlines affects the numerical results. Underlines are labeled as part of the text line in the ground truth, while our method detects them separately. The same effect occurs with some arbitrary separator lines.
As in the case of extra text line detection, this error has an impact on the numerical results, although the final set of detected text lines is usually correct. Again, our method obtains better results in view of subsequent text recognition tasks, since it is able to separate text from other non-textual components.
The results on this dataset prove the capability of our method to segment text lines in historical documents without reconfiguration.
\begin{table} [t]
\begin{center}
\begin{tabular}{ c | c | c | c | c | c }
\hline
Method & M & o2o & {DR (\%)} & {RA (\%)} & {FM (\%)} \\
\hline \hline
Fernandez \textit{et al.} \cite{Fernandez2014} & 693 & 653 & 91.30 & 94.20 & 92.70 \\
Cruz \textit{et al.} \cite{Cruz2013} & 631 & 551 & 82.60 & 87.30 & 84.80 \\
Base line \cite{Fernandez2014} & 727 & 338 & 47.20 & 46.40 & 46.70 \\
\hline
\hline
Proposed & 702 & 614 & 92.05 & 88.16 & 90.06 \\
\hline
\end{tabular}
\end{center} \caption{Results on the George Washington dataset. Lower o2o and RA rates are produced by the identification of separator lines as independent text lines. However, qualitative results show that our method produces a better detection of text lines with respect to the other compared methods.}
\label{tab:GW}
\end{table}
\subsection{Administrative annotated documents}
As for the GW dataset, we use the same parameter configuration as in the ICDAR experiments.
One of the challenges in this dataset is to detect the different text regions in order to process them separately. For instance, in \figurename~\ref{fig:DODa} we can see, at the bottom of the central block of text, three text lines in a different font that have to be labeled separately. We can see another example in \figurename~\ref{fig:DODb}, where several lines at the bottom of the document might be merged into the same line if the full page were processed as a whole.
Our method handles these cases in two possible ways. In most cases the different text regions are detected in the initialization step and then processed separately.
However, in some documents where the region segmentation is not achieved, text lines are approximated by several regression lines due to the initial over-segmentation.
In this experiment we are not able to compare with other works, since this is an unpublished collection of documents; however, we can compare against our previous work in order to validate the new model.
Table~\ref{tab:DOD} shows the results obtained. The results are significantly improved with the new proposed approach, which confirms the contribution of the proposed model.
In addition, the result on this dataset proves the versatility of our method on complex layouts.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|c|c|c}
\hline
& Precision & Recall & FM(\%) \\
\hline \hline
Cruz \textit{et al}. \cite{Cruz2013} & 69.45 & 72.38 & 70.88 \\
{Proposed} & {79.75} & {82.70} & {81.19} \\
\hline
\end{tabular}
\end{center}
\caption{Results on the administrative handwritten annotation dataset. We significantly improve the results obtained by our previous model.}
\label{tab:DOD}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=.80\linewidth]{challenges.png}
\caption{Results of four fragments of images from the ICDAR 2013 dataset that include crowded text and light curvatures. We can see several challenging scenarios where our method performs correctly.}
\label{fig:challenges}
\end{center}
\end{figure}
\newpage
\section{Conclusion}
\label{Concl}
In this paper we present a general method for handwritten text line segmentation based on the estimation of a set of regression lines.
We propose a probabilistic framework that relies on the EM algorithm for the estimation of the regression parameters, and on an MRF model for encoding the interactions between neighboring pixels. We implement a message-passing-based algorithm to compute approximate inference and learn the model parameters.
Our method can be applied to documents with different layouts and features. Besides, our framework makes it easy to extend the model with prior information by means of new feature functions.
We conduct several experiments, with promising results on four collections of documents without model reconfiguration. The selected datasets include several types of layouts, and historical and contemporary documents from several writers and scripts. Besides, our method is able to deal with most situations involving touching text lines and light text curvature, which demonstrates the contributions of the proposed model.
The results validate our initial hypothesis, since we prove that a set of regression lines can fit the actual text line locations with high accuracy.
As future work, we consider using higher-order regression models that could lead to better approximations of curved and complex lines. Besides, we think that some of the current errors may be corrected with the inclusion of more informative feature functions. Note that we use a reduced, basic set of feature functions based on pixel location and common pairwise interactions. We believe that the inclusion of more specific knowledge could improve the overall results. For instance, we could incorporate improved pairwise features that analyze the edge length and relative position between connected variables.
In addition, we plan to integrate new discriminative features in order to perform machine-printed/handwritten text separation and line segmentation within the same process.
In this way we will be able to process annotated administrative documents directly, without the need for previous steps.
\section*{Acknowledgements}
This work has been partially supported by the Spanish project TIN2015-70924-C2-2-R.
\section*{Bibliography}
\bibliographystyle{elsarticle-num}
\section{Update equations for linear regression parameters}
In this section we provide a complete derivation of the linear regression parameter updates for the model introduced in Section 3.
\begin{prop}[Joint partition function]
Given the factorization of Eq. (1) of paper:
\begin{equation}
p(e,h|\Theta) = \prod_vp_v(e_v|h_v,\Theta_a)p(h|\Theta_b)
\end{equation}
The partition function $Z(\Theta)$ of the joint distribution $p(e,h|\Theta)$ is the partition function of the {\em a priori} distribution of hidden variables $h$, $Z_0(\Theta_b)$.
\end{prop}
\begin{proof}
Straightforward from the definitions of partition functions:
\begin{equation}
Z(\Theta) = \sum_h \int \prod_vp_v(e_v|h_v,\Theta_a)p(h|\Theta_b) de
\end{equation}
\noindent where the sum runs over the values of all hidden variables and the integration domain is $\mathbb{R}^{2N}$, corresponding to the coordinates of the $N$ observed values $e_v$. A simple reordering of the integral operations leads to the final result, since $p_v(e_v|h_v,\Theta_a)$ is a pdf and its integral is 1:
\begin{equation}
Z(\Theta) \triangleq \sum_h \prod_v \int p_v(e_v|h_v,\Theta_a)de_v p(h|\Theta_b) = \sum_h p(h|\Theta_b) \triangleq Z_0(\Theta_b)
\end{equation}
\end{proof}
We defined the conditional likelihood probability $p_v(e_v|h_v,\Theta_a)=\frac{\Psi_v(h_{v},e_{v}| \Theta_a)}{Z_v(h_v,\Theta_a)}$, where $Z_v(h_v,\Theta_a)$ is its corresponding partition function defined as:
\begin{equation}
Z_v(h_v,\Theta_a) = \int \exp\left\{ \sum_{k \in I_{v}} g_k(h_{v},e_{v}| \Theta_a) \right\}de_v
\end{equation}
Given the above definitions the conditional expectation $Q$ used to derive the EM algorithm becomes:
\begin{equation}
\label{eq:sup_Q}
\begin{split}
Q(\Theta | \Theta') &= \sum_u \sum_{k \in I_u} \left[ \sum_{h_u} f_k(h_u | \Theta_b) p_u(h_u|\Theta_b') \right] - \log Z_0(\Theta_b) + \\
&+ \sum_v \sum_{h_v}\log p_v(e_v|h_v,\Theta_a) p_v(h_v|e_v,\Theta_a',\Theta_b') \\
&= \sum_v \sum_{h_v} \left[\sum_{k \in I_v}g_k(h_v,e_v | \Theta_a) - \log Z_v(h_v,\Theta_a)\right]p_v(h_v|e_v,\Theta_a',\Theta_b') + \\
&+ \sum_u \sum_{k \in I_u} \left[ \sum_{h_u} f_k(h_u | \Theta_b) p_u(h_u|\Theta_b') \right] - \log Z_0(\Theta_b)
\end{split}
\end{equation}
We adapt the EM algorithm to update the parameters in $\Theta_a$, while keeping the parameters in $\Theta_b$ fixed. We update $\Theta_b$ later by means of Algorithm~\ref{alg:MP}. The partial derivative of $Q$ with respect to a parameter $\theta_k \in \Theta_a$ is:
\begin{equation}
\label{EqPartial}
\begin{split}
\frac{\partial }{\partial\theta_k}Q(\Theta | \Theta') &= \sum_v \sum_{k \in I_v} \sum_{h_v} \frac{\partial }{\partial\theta_k} g_k(h_v,e_v | \Theta_a) p_v(h_v|e_v,\Theta_a',\Theta_b) - \\
&- \sum_v \sum_{h_v}\frac{\partial}{\partial\theta_k}\log Z_v(h_v,\Theta_a)p_v(h_v|e_v,\Theta_a',\Theta_b)= 0
\end{split}
\end{equation}
If we model text lines with the likelihood probabilities below:
\begin{equation}
\label{eq:Gauss1}
\begin{split}
p_t(x_v,y_v | h_v=l,\theta_a) &\propto \exp \left\{ -\frac{(y_v - a_l x_v - b_l)^2}{2 \sigma_{l,t}^2} \right\} \\
p_s(x_v,y_v | h_v=l,\theta_a) &\propto \exp \left\{ -\frac{(x_v - c_l)^2}{2 \sigma_{l,s}^2} \right\}
\end{split}
\end{equation}
then the $v$-terms of Eq.~\eqref{eq:sup_Q}, after some manipulations and ignoring terms that will disappear after taking partial derivatives, can be written as:
\begin{equation}\label{eq:Q_v}
\sum_v \sum_{h_v} \left( -\frac{1}{2}(A_le_v-\mu_l)^t\Sigma_l^{-1}(A_le_v-\mu_l) - \frac{1}{2}\log | \Sigma_l|\right)p(h_v=l|e_v,\Theta'_a,\Theta_b)
\end{equation}
\noindent where $A_l=\left(\begin{array}{cc}1 & 0 \\ -a_l & 1\end{array}\right)$, $\Sigma_l=\left(\begin{array}{cc}\sigma^2_{l,t} & 0 \\0 & \sigma^2_{l,s} \end{array}\right) $ and $\mu_l=\left(\begin{array}{c}c_l \\ b_l\end{array}\right)$. Moreover, the term $\frac{1}{2}\log | \Sigma_l|$ comes from the partition function of the conditional likelihood probability, given by the next Proposition:
\begin{prop}[Partition function of conditional likelihood probability]
The partition function $Z_v(h_v=l,\Theta_a)$ is:
\begin{equation}
Z_v(h_v=l,\Theta_a) = 2\pi| \Sigma_l|^{1/2}
\end{equation}
\end{prop}
\begin{proof}
The result follows from basic calculus, taking into account that the partition function of a multivariate normal distribution with covariance matrix $\Sigma$ is $(2\pi)^{k/2}|\Sigma|^{1/2}$, where $k$ is the dimension of the distribution.
We recall that, in Eq.~\eqref{eq:Gauss1}, we defined $g_k$ as:
\begin{equation}\label{eq:gk}
g_k(h_v,e_v|\Theta_a) = -\frac{(y_v-a_lx_v-b_l)^2}{2\sigma^2_{l,t}}-\frac{(x_v-c_l)^2}{2\sigma^2_{l,s}}
\end{equation}
This can be written, for $h_v=l$, in matrix form:
\begin{equation}
g_k(l,e_v|\Theta_a) = -\frac{1}{2}(A_le_v-\mu_l)^t\Sigma_l^{-1}(A_le_v-\mu_l)
\end{equation}
\noindent where $A_l$, $\mu_l$ and $\Sigma_l$ are defined as above. The partition function for $h_v=l$ is computed as:
\begin{equation}
Z_v(h_v=l,\Theta_a) = \int \exp\left\{ -\frac{1}{2}(A_le-\mu_l)^t\Sigma_l^{-1}(A_le-\mu_l) \right\}de
\end{equation}
A simple change of coordinates, taking into account that $|A_l|=1$ for all $l$, and setting $S_l^{-1}=A_l^t\Sigma_l^{-1}A_l$, leads us to:
\begin{equation}
Z_v(h_v=l,\Theta_a) = \int \exp\left\{ -\frac{1}{2}(u-A_l^{-1}\mu_l)^tS_l^{-1}(u-A_l^{-1}\mu_l) \right\}du
\end{equation}
\noindent which is the integrand of a 2-dimensional multivariate normal distribution. The result follows from the properties of the matrix determinant:
\begin{equation}
Z_v(h_v=l,\Theta_a) = 2\pi|S_l|^{1/2} = 2\pi|\Sigma_l|^{1/2}
\end{equation}
\end{proof}
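The Proposition can also be checked numerically. The sketch below is a sanity check, not code from the paper: all parameter values are arbitrary examples. It integrates the unnormalised likelihood over a wide grid and compares the result with the closed form $2\pi|\Sigma_l|^{1/2}$.

```python
import numpy as np

# Numerical sanity check of the Proposition: the partition function of the
# shear-transformed Gaussian likelihood equals 2*pi*|Sigma_l|^(1/2).
# The parameter values below (a_l, b_l, c_l, variances) are arbitrary.
a_l, b_l, c_l = 0.3, 1.0, 2.0
sig_t2, sig_s2 = 0.5, 1.5

A = np.array([[1.0, 0.0], [-a_l, 1.0]])           # shear transform A_l
Sigma = np.diag([sig_t2, sig_s2])                 # diagonal covariance Sigma_l
mu = np.array([c_l, b_l])                         # mean vector mu_l
Sinv = np.linalg.inv(Sigma)

# Integrate exp{-1/2 (A e - mu)^T Sigma^{-1} (A e - mu)} over a wide grid.
xs = np.linspace(-30.0, 30.0, 1201)
X, Y = np.meshgrid(xs, xs, indexing="ij")
E = np.stack([X.ravel(), Y.ravel()], axis=1)
D = E @ A.T - mu                                   # rows are A_l e - mu_l
quad = np.einsum("ni,ij,nj->n", D, Sinv, D)
dx = xs[1] - xs[0]
Z_numeric = np.exp(-0.5 * quad).sum() * dx * dx

Z_closed = 2.0 * np.pi * np.sqrt(np.linalg.det(Sigma))
print(Z_numeric, Z_closed)                         # the two values agree
```

Since $|A_l|=1$, the shear does not change the volume element, so the numeric integral matches the closed form to within the grid discretisation error.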
The authors of~\cite{Bilmes1997} provide the derivation of the parameter update formulas for a Gaussian mixture model. The update formulas introduced in this paper are essentially the same, but adapted to regression lines and our model. Moreover, notice that Eq.~\eqref{eq:Q_v} is almost equal to Eq.~(7) in that work, which means that the arguments introduced there also apply to our method.
More specifically, the updates for the variances $\sigma^2_{l,t}$ and $\sigma^2_{l,s}$ and the {\em mean} parameter $c_l$ follow straightforwardly from~\cite{Bilmes1997}. To illustrate this, we show in what follows that $c_l$ is equal to $\mu^{new}_l$ in that paper. $c_l$ only appears in the second term of $g_k$ in Eq.~\eqref{eq:gk}. The partial derivative with respect to $c_l$ is:
\begin{equation}
\frac{\partial}{\partial c_l}Q(\Theta|\Theta')=\sum_v \frac{(x_v-c_l)}{\sigma^2_{l,s}}p(h_v=l|e_v,\Theta_a',\Theta_b) = 0
\end{equation}
After rearranging the terms, we can isolate $c_l$ and find the update formula:
\begin{equation}
c_l = \frac{\sum_v x_vp(h_v=l|e_v,\Theta_a',\Theta_b)}{\sum_v p(h_v=l|e_v,\Theta_a',\Theta_b)}
\end{equation}
To find the update formulas for $\sigma^2_{l,t}$ and $\sigma^2_{l,s}$, we apply the matrix algebra results recalled in~\cite{Bilmes1997} to the covariance matrix $S_l^{-1}=A_l^t\Sigma_l^{-1}A_l$ and the {\em mean} vector $A_l^{-1}\mu_l$. Thus, the matrix $M_{l,1}$ there becomes $M_{l,v}$, defined as:
\begin{equation}
M_{l,v} = \Sigma_l - (A_le_v-\mu_l)(A_le_v-\mu_l)^t
\end{equation}
which leads to the update formula:
\begin{equation}\label{eq:Sigma}
\Sigma_l = \frac{\sum_v(A_le_v-\mu_l)(A_le_v-\mu_l)^tp(h_v=l|e_v,\Theta_a',\Theta_b)}{\sum_vp(h_v=l|e_v,\Theta_a',\Theta_b)}
\end{equation}
To conclude, it remains to find the update formulas for the regression line parameters $a_l$ and $b_l$. The derivation of such formulas can be found in any statistics textbook, and we follow the same ideas: we first compute the independent term $b_l$ and then the slope $a_l$. The partial derivative of $Q$ with respect to $b_l$ is:
\begin{equation}\label{eq:bl}
\frac{\partial}{\partial b_l}Q(\Theta|\Theta')=\sum_v \frac{(y_v-a_lx_v-b_l)}{\sigma^2_{l,t}}p(h_v=l|e_v,\Theta_a',\Theta_b) = 0
\end{equation}
Proceeding exactly as we did for $c_l$, we find:
\begin{equation}
b_l = \frac{\sum_v (y_v-a_lx_v)p(h_v=l|e_v,\Theta_a',\Theta_b)}{\sum_v p(h_v=l|e_v,\Theta_a',\Theta_b)}
\end{equation}
\noindent which can easily be computed once $a_l$ is known. The partial derivative with respect to $a_l$ is:
\begin{equation}
\frac{\partial}{\partial a_l}Q(\Theta|\Theta')=\sum_v \frac{x_v(y_v-a_lx_v-b_l)}{\sigma^2_{l,t}}p(l|e_v,\Theta_a',\Theta_b) = 0
\end{equation}
We replace $b_l$ in the above expression by the update formula derived from Eq.~\eqref{eq:bl} to obtain:
\begin{align}\label{eq:a1}
\sum_v x_v(y_v-a_lx_v)p(l|e_v,\Theta_a',\Theta_b) - \sum_v (y_v-a_lx_v)p(l|e_v,\Theta_a',\Theta_b)\bar{x}=&0
\end{align}
\noindent where we define the {\em mean} of the horizontal coordinates $\bar{x}$ as:
\begin{equation}
\bar{x} = \frac{\sum_v x_vp(l|e_v,\Theta_a',\Theta_b)}{\sum_v p(l|e_v,\Theta_a',\Theta_b)}
\end{equation}
We rearrange Eq.~\eqref{eq:a1} and find:
\begin{equation}
\sum_v (x_v-\bar{x})(y_v-a_lx_v)p(h_v=l|e_v,\Theta_a',\Theta_b) = 0
\end{equation}
The remainder is straightforward, taking into account that $\bar{y}$ is defined analogously to $\bar{x}$:
\begin{equation}
\bar{y} = \frac{\sum_v y_vp(h_v=l|e_v,\Theta_a',\Theta_b)}{\sum_v p(h_v=l|e_v,\Theta_a',\Theta_b)}
\end{equation}
Therefore, $a_l$ is:
\begin{equation}
a_l = \frac{\sum_v (x_v-\bar{x})(y_v-\bar{y})p(h_v=l|e_v,\Theta_a',\Theta_b)}{\sum_v (x_v-\bar{x})^2p(h_v=l|e_v,\Theta_a',\Theta_b) }
\end{equation}
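Collecting the results, the M-step for a single line $l$ reduces to weighted means and a weighted least-squares slope. The following sketch is illustrative only (it is not the authors' code); \texttt{p\_vl} stands in for the responsibilities $p(h_v=l|e_v,\Theta_a',\Theta_b)$ and all variable names are our own.

```python
import numpy as np

# Illustrative sketch of the closed-form M-step updates for one line l.
# `p_vl` plays the role of p(h_v = l | e_v, Theta_a', Theta_b).
def m_step_line(x, y, p_vl):
    """Weighted updates for a_l, b_l, c_l and the two variances."""
    w = p_vl / p_vl.sum()
    x_bar = np.sum(w * x)                  # weighted mean of x_v
    y_bar = np.sum(w * y)                  # weighted mean of y_v
    c_l = x_bar                            # c_l coincides with x_bar here
    # slope a_l: weighted least-squares regression slope
    a_l = np.sum(w * (x - x_bar) * (y - y_bar)) / np.sum(w * (x - x_bar) ** 2)
    b_l = np.sum(w * (y - a_l * x))        # independent term b_l
    # weighted residual variances sigma_{l,t}^2 and sigma_{l,s}^2
    sig_t2 = np.sum(w * (y - a_l * x - b_l) ** 2)
    sig_s2 = np.sum(w * (x - c_l) ** 2)
    return a_l, b_l, c_l, sig_t2, sig_s2
```

With uniform responsibilities and noiseless collinear points, the update recovers the line exactly, as expected for a degenerate one-component mixture.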
\subsection{Rotation invariant updates}
The regression model proposed in Eq.~\eqref{eq:Gauss1} is not fully rotation invariant, since we apply a shear transform $A_l$ instead of a rotation transform. In this section we introduce a rotation invariant model along with its corresponding update formulas. As we will see, it only changes the update formulas for the slope parameter $a_l$ and the independent term $c_l$, while the update formulas for the other parameters remain the same.
\newpage
We define the new feature function $g_k$ as follows:
\begin{equation}\label{eq:gk_invariant}
g_k(l,e_v|\Theta_a) = -\frac{(y_v-a_lx_v-b_l)^2}{2\sigma^2_{l,t}}-\frac{(x_v+a_ly_v-c_l)^2}{2\sigma^2_{l,s}}
\end{equation}
This can be written, for $h_v=l$, in matrix form:
\begin{equation}
g_k(l,e_v|\Theta_a) = -\frac{1}{2}(R_le_v-\mu_l)^t\Sigma_l^{-1}(R_le_v-\mu_l)
\end{equation}
\noindent where $R_l=\left(\begin{array}{cc}1 & a_l \\ -a_l & 1\end{array}\right)$; and $\mu_l$ and $\Sigma_l$ are defined as above. Recall that $a_l$ is the slope of the regression line, $a_l=\frac{\sin \beta}{\cos \beta}$, where $\beta$ is the angle of the line. It becomes clear that $R_l$ is, up to the scale factor $1/\cos\beta$, a rotation matrix of $-\beta$ radians, and the model given by Eq.~\eqref{eq:gk_invariant} is rotation invariant.
The update formula is obtained similarly to before, but a new term appears. The partial derivative with respect to $a_l$ is:
\begin{equation}
\frac{\partial}{\partial a_l}Q=\sum_v \left(\frac{x_v(y_v-a_lx_v-b_l)}{\sigma^2_{l,t}}- \frac{y_v(x_v+a_ly_v-c_l)}{\sigma^2_{l,s}}\right)p(l|e_v,\Theta_a',\Theta_b) = 0
\end{equation}
After some calculations the final update formula is:
\begin{equation}
a_l = 2\frac{\sum_v (x_v-\bar{x})(y_v-\bar{y})p(h_v=l|e_v,\Theta_a',\Theta_b)}{\sum_v \left((x_v-\bar{x})^2 + (y_v-\bar{y})^2\right)p(h_v=l|e_v,\Theta_a',\Theta_b)}
\end{equation}
\noindent and $c_l$ is:
\begin{equation}
c_l = \frac{\sum_v (x_v+a_ly_v)p(h_v=l|e_v,\Theta_a',\Theta_b)}{\sum_v p(h_v=l|e_v,\Theta_a',\Theta_b)}
\end{equation}
The covariance matrix $\Sigma_l$ is computed as in Eq.~\eqref{eq:Sigma}, but replacing $A_l$ by $R_l$.
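The rotation-invariant slope update can be written compactly; the sketch below (illustrative names, not released code) implements the formula for $a_l$ above: compared with the shear model, the denominator also accumulates the vertical scatter and the ratio is doubled.

```python
import numpy as np

# Sketch of the rotation-invariant slope update. `p_vl` stands in for the
# responsibilities p(h_v = l | e_v, Theta_a', Theta_b); names are ours.
def slope_rotation_invariant(x, y, p_vl):
    w = p_vl / p_vl.sum()
    x_bar, y_bar = np.sum(w * x), np.sum(w * y)
    num = np.sum(w * (x - x_bar) * (y - y_bar))
    den = np.sum(w * ((x - x_bar) ** 2 + (y - y_bar) ** 2))
    return 2.0 * num / den
```

For points on the diagonal $y=x$, the formula returns a slope of 1, which is a fixed point of the update.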
\newpage
\section{Derivation of Algorithm~\ref{alg:MP} }
The optimization problem formulated in Eq.~(9) results in a constrained minimization problem that can be solved by means of Lagrange multipliers:
\begin{equation}\label{lagrangian}
\begin{split}
L(p_u, p_v, \Lambda, \Theta, N) = \\
\sum_u \sum_{h_u} p_u(h_u|\Theta_b) \log p_u(h_u|\Theta_b) + \\
+ \sum_v c_v\sum_{h_v} p_v(h_v|e_v,\Theta_a',\Theta_b) \log p_v(h_v|e_v,\Theta_a',\Theta_b) + \\
+ \sum_k \sum_{h_v} g_k(h_v,e_v|\Theta_a')p_v(h_v|e_v,\Theta_a',\Theta_b) + \\
+ \sum_k \theta_k \Bigg[ \sum_u f_k(h_u) p_u(h_u|\Theta_b) - \mu_{k} \Bigg] + \\
+ \sum_v \sum_{u \supset v} \sum_{h_v} \lambda_{v \rightarrow u} (h_v) \Bigg[ \sum_{h_{u \setminus v }} p_u(h_u|\Theta_b) -p_v(h_v|e_v,\Theta_a',\Theta_b) \Bigg] + \\
+ \sum_u \nu_u \Bigg[ \sum_{h_u} p_u(h_u|\Theta_b) - 1 \Bigg] +\sum_v \nu_v \Bigg[ \sum_{h_v} p_v(h_v|e_v,\Theta_a',\Theta_b) - 1 \Bigg]
\end{split}
\end{equation}
The Lagrangian is convex for all positive $c_v$ over the sets of defined constraints~\cite{Heskes2006}. Computing the partial derivatives of $L$ with respect to $p_u(h_u|\Theta_b)$ and $p_v(h_v|e_v,\Theta_a',\Theta_b)$ leads us to the expressions:
\begin{equation}\label{logqalpha}
\begin{split}
\log p_u(h_u|\Theta_b) &= - \nu_u - 1 - \sum_k \theta_k f_k(h_u) - \sum_{v \subset u} \lambda_{v \rightarrow u} (h_v) \\
\log p_v(h_v|e_v,\Theta_a',\Theta_b) &= - \frac{\nu_v}{c_v}- 1 + \frac{1}{c_v} \sum_k g_k(h_v,e_v|\Theta_a') + \frac{1}{c_v}\sum_{v \subset u} \lambda_{v \rightarrow u} (h_v)
\end{split}
\end{equation}
\noindent where $\theta_k f_k(h_u)$ refers herein to the feature function $f_k(h_u|\Theta_b)$ defined in Eq.~(12) of the main paper.
Now, we know that $\sum_{h_u} p_u(h_u|\Theta_b) = 1$, and similarly for $p_v(h_v|e_v,\Theta_a',\Theta_b)$; therefore, taking exponentials on both sides of the equations and summing over $h_u$ and $h_v$, we obtain the values of $\nu_u$ and $\nu_v$: %
\begin{equation}\label{nuAlpha}
\begin{split}
\nu_u &= - 1 + \log \sum_{h_u} \exp \left\lbrace - \sum_k \theta_k f_k (h_u) - \sum_{v \subset u} \lambda_{v \rightarrow u} (h_v) \right\rbrace \\
\nu_v &= - c_v + c_v\log \sum_{h_v} \exp \left\lbrace \frac{1}{c_v}\sum_k g_k(h_v,e_v|\Theta_a') + \sum_{v \subset u} \frac{1}{c_v} \lambda_{v \rightarrow u} (h_v) \right\rbrace
\end{split}
\end{equation}
The above are the corresponding partition functions for the approximate marginals $p_u(h_u|\Theta_b)$ and conditional marginals $p_v(h_v|e_v,\Theta_a',\Theta_b)$. Plugging the expressions for $p_u(h_u|\Theta_b)$ and $p_v(h_v|e_v,\Theta_a',\Theta_b)$ into the primal problem, we find the dual problem $L^*$: %
\begin{equation}\label{Dual}
L^*(\Lambda,\theta_b) = -\sum_u \log Z_u(\Theta_b,\Lambda) - \sum_{v} c_v\log Z_v(e_v,\Theta'_a,\Theta_b,\Lambda) - \sum_k \theta_k \mu_k
\end{equation}
\noindent which is convex~\cite{Bertsekas2003}.
Algorithm~\ref{alg:MP} is the numerical implementation of a block gradient descent method applied to the dual problem.
We find the optimal prior probabilities $p_u(h_u|\Theta_b)$ and posterior probabilities $p_v(h_v|e_v,\Theta'_a,\Theta_b)$ by finding the optimal parameters $\Lambda$ and $\Theta$ that minimize the dual problem $L^*$, which is convex in $\lambda_{v\to u}(h_v)\in \Lambda$ and $\theta_k\in \Theta_b$. The partial derivative with respect to $\theta_k$ is: %
\begin{equation}
\frac{\partial L^* }{\partial \theta_k} = \sum_{h_u} f_k(h_u)p_u(h_u|\Theta_b) - \mu_k = 0
\end{equation}
Observe that the gradient of the dual problem with respect to $\theta_k$ is 0 when the {\em moment-matching} constraints are satisfied. Since the prior $p_u(h_u|\Theta_b)$ depends on the parameter $\theta_k$, we apply a line search strategy to find better updates of $\theta_k$. We fix the length $\eta$ of each step according to the Armijo conditions, and the update step is:%
\begin{equation}
\theta_k \leftarrow \theta_k' + \eta \left(\sum_{h_u} f_k(h_u) p_u(h_u|\Theta_b') - \mu_{k} \right )
\end{equation}
The parameters $\Theta_b$ are shared by all $\Psi_u$, and we could perform several update iterations before starting to send messages. In practice, this neither improves accuracy nor speeds up convergence. Therefore, we update once between each message-passing round.
The message-passing process consists of computing the partial derivatives with respect to $\lambda_{v\to u}(h_v)$, keeping fixed the model parameters $\Theta_b$ and $\Theta'_a$ found in the previous gradient descent and EM iterations, respectively. This block-gradient descent strategy has been successfully applied before in many other numerical schemes, such as~\cite{Heskes2006,Schwing2011}; Algorithm~\ref{alg:MP} follows the same ideas appearing in those papers. The partial derivative with respect to $\lambda_{v\to u}(h_v)$ leads to the {\em sum-marginalization} constraint:
\begin{equation}
\frac{\partial L^* }{\partial \lambda_{v \rightarrow u}(h_v)} = \sum_{h_{u \setminus v }} p_u( h_u|\Theta_b) - p_v(h_v|e_v,\Theta_a',\Theta_b) = 0
\end{equation}
For each hidden variable $h_v$, we fix $\Theta_b$ ($\Theta'_a$ is always fixed in this algorithm), consider the partial derivative with respect to $\lambda_{v\to u }(h_v)$, and let $\lambda_{v \to u }^{(new)}(h_v)$ denote the value where the gradient is 0. Then, from the {\em sum-marginalization consistency} constraint, we have: %
\begin{equation}\begin{split}
\frac{\partial L^*}{\partial \lambda_{v\to u }(h_v) } &= \sum_{h_{u\setminus v}} p_{u}(h_u|\Theta_b) -p_{v}(h_v|e_v,\Theta_a',\Theta_b) =0\\
p_{v}(h_v|e_v,\Theta_a',\Theta_b) &= \frac{e^{\lambda_{v\to u }^{(new)}(h_v)}}{e^{\lambda_{v \to u}(h_v) } } p_{u}(h_v|\Theta_b)\\
\log p_{v}(h_v| e_v,\Theta_a',\Theta_b) &= \lambda_{v \to u }^{(new)}(h_v) - \lambda_{v \to u }(h_v) + \\
&+\log p_{u}(h_v|\Theta_b) \\
\lambda_{v \to u }^{(new)}(h_v) &= \lambda_{v \to u}(h_v) + \log p_{v}(h_v|e_v,\Theta_a',\Theta_b) -\\
&- \log p_{u}(h_v|\Theta_b) \end{split}\label{eq:lambda_new}
\end{equation}
\noindent where we express the updated message $\lambda_{v \to u }^{(new)}(h_v)$ in terms of the old messages. To estimate $p_{v}(h_v|e_v,\Theta'_a,\Theta_b)$, we sum the last row of~\eqref{eq:lambda_new} over all the pairs $u$ of hidden variables that include $h_{v}$:%
\begin{equation}\begin{split}
\sum_{u \supset v }&\lambda_{v \to u}^{(new)}(h_v) = \sum_{u \supset v}\lambda_{v \to u }(h_v) + \\
+&\sum_{u\supset v}\log p_{v}(h_v|e_v,\Theta_a',\Theta_b) - \sum_{u\supset v}\log p_{u}(h_v|\Theta_b)
\end{split}
\end{equation}
Then, defining $A_v$ as the number of pairs $u$ containing $v$, and rearranging the terms: %
\begin{equation}
\begin{split}
A_{v}\log p_{v}(h_v|e_v,\Theta'_a,\Theta_b) &= \sum_{u \supset v}\lambda_{v \to u }^{(new)}(h_v) + \sum_{ u \supset v}[\log p_{u}(h_v|\Theta_b) - \lambda_{v \to u }(h_v)]\\
c_{v}\log p_{v}(h_v|e_v,\Theta_a',\Theta_b) &=\nu_{v}-c_{v} - \sum_k g_{k}(h_v,e_v|\Theta_a') -\sum_{u \supset v} \lambda_{v\to u}^{(new)}(h_v)
\end{split}
\end{equation}
\noindent and adding the two equations side by side, we obtain the update for \\
$\log p_{v}(h_v|e_v,\Theta_a',\Theta_b) $: %
\begin{equation} \begin{split}
\log &\,p_{v}(h_v|e_v,\Theta_a',\Theta_b) = \frac{1}{c_{v}+A_{v}}\log \psi_{v}(h_v|e_v,\Theta'_a) + \\
+& \frac{1}{c_{v}+A_{v}}\sum_{u \supset v}\left[\log p_{u}(h_v|\Theta_b) - \lambda_{v \to u }(h_v)\right]
\end{split}
\end{equation}
\noindent where $\log \psi_{v}(h_v|e_v,\Theta'_a) = \frac{\nu_{v}}{c_{v}} -1 -\frac{1}{c_{v}}\sum_k g_k(h_v,e_v|\Theta'_a) $ and%
\begin{equation}
p_{v}(h_v|e_v,\Theta_a',\Theta_b) = \left(\psi_{v}(h_v|e_v)^{c_{v}}\prod_{u \supset v }\frac{p_{u}(h_v|\Theta_b)}{e^{\lambda_{v\to u }(h_v)}}\right )^{\frac{1}{c_{v}+A_{v}}}
\end{equation}
We can now compute the update $\lambda_{v \to u }^{(new)}(h_v)$:%
\begin{equation}\begin{split}
\lambda_{v \to u }^{(new)}(h_v)&= \lambda_{v \to u }(h_v ) + \log p_{v}(h_v|e_v,\Theta_a',\Theta_b) - \\
&- \log p_{u}(h_v|\Theta_b)
\end{split}
\end{equation}
The above formulas are expressed more compactly if we define the message functions $m_{v \to u}(h_v)=e^{\lambda_{v \to u}(h_v)}$; the update rules then become:
\begin{equation}\begin{split}
m_{v \leftarrow u }(h_v) &=\frac{p_{u}(h_v|\Theta_b)}{m_{v \to u }(h_v)} \\
m_{v \to u}(h_v) & = \frac{p_{v}(h_v|e_v,\Theta_a',\Theta_b)}{m_{v \leftarrow u }(h_v)}
\end{split}
\end{equation}
In summary, we have the update expression for the parameters of the feature functions, and the expression for the messages sent between $u$ and $v$. These update formulas, arranged as shown in Algorithm~\ref{alg:MP}, provide a block gradient descent method that can be parallelized. At each iteration we find new updates of the prior model parameters $\Theta_b$; then we send messages, first from hidden variables $h_v$ to pairs of hidden variables $u$, and then we combine them, obtaining the new messages that update the posterior probability $p_v(h_v|e_v,\Theta'_a,\Theta_b)$ for the next EM iteration of the main algorithm.
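To make the message updates concrete, the following toy sketch runs the inner message-passing loop for a single pair $u$ containing two hidden variables with $K$ discrete states. A fixed pair potential stands in for $e^{-\sum_k \theta_k f_k(h_u)}$ (no $\theta$ updates), and all sizes, potentials and names are illustrative placeholders, not the paper's implementation. At convergence, the sum-marginals of $p_u$ agree with the singleton marginals $p_v$, as required by the sum-marginalization constraint.

```python
import numpy as np

# Toy sketch of the inner message-passing loop for one pair u = (v0, v1)
# with K discrete states. psi_v stands in for exp{sum_k g_k}; Psi_u stands
# in for the pair potential exp{-sum_k theta_k f_k}. Illustrative only.
K = 3
rng = np.random.default_rng(0)
psi = rng.random((2, K)) + 0.1        # unary potentials psi_v(h_v)
Psi_u = rng.random((K, K)) + 0.1      # fixed pair potential (theta frozen)
c_v, A_v = 1.0, 1.0                   # counting number and #pairs containing v

m_to = np.ones((2, K))                # messages m_{v -> u}(h_v)
p_v = np.ones((2, K)) / K
p_u = Psi_u / Psi_u.sum()
for _ in range(200):
    for v in range(2):
        marg = p_u.sum(axis=1 - v)    # sum-marginal p_u(h_v)
        m_from = marg / m_to[v]       # m_{v <- u}(h_v)
        q = (psi[v] ** c_v * m_from) ** (1.0 / (c_v + A_v))
        p_v[v] = q / q.sum()          # normalise by Z_v
        m_to[v] = p_v[v] / m_from     # m_{v -> u}(h_v)
    p_u = Psi_u * np.outer(m_to[0], m_to[1])
    p_u /= p_u.sum()                  # normalise by Z_u

# consistency gap between pair marginals and singleton marginals
gap = max(np.abs(p_u.sum(axis=1) - p_v[0]).max(),
          np.abs(p_u.sum(axis=0) - p_v[1]).max())
```

The exponent $1/(c_v+A_v)$ acts as a damping factor on the message updates, which is why this fixed-point iteration converges quickly in practice on such a small model.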
\begin{algorithm}
\KwData{ $\{ \mu_k\}$: empirical moments.}
\KwResult{$\Theta_b$, $\Lambda$: model parameters, $\{p_v(h_v|e_v,\Theta_a',\Theta_b), p_u(h_u|\Theta_b)\}$ marginals.}
Initialize: $\theta_k= 0$, $\theta_k\in\Theta_b$, $m_{v \to u}(h_v)=1$;\\
\While{ not converged}
{
\For{ $\forall k \in \{I_u\}$ }{
$$\theta_k \leftarrow \theta_k' + \eta \left(\sum_{h_u} f_k(h_u) p_u(h_u|\Theta_b') - \mu_{k} \right )$$
}
\For{ $\forall v$ }
{
\For{ $\forall u \supset v$ }
{
$$ p_u(h_v|\Theta_b)=\sum_{h_{u \setminus v}} p_u(h_u|\Theta_b) $$
$$m_{v \leftarrow u}(h_u)=\frac{p_u(h_v|\Theta_b)}{m_{v \to u}(h_v)}$$
}
$$p_v(h_v|e_v,\Theta_a',\Theta_b) = \frac{1}{Z_v} \left(e^{\sum_k g_k(h_v,e_v|\Theta_a') }\prod_{u \supset v} m_{v \leftarrow u}(h_u) \right )^{\!\!\frac{1}{c_v+A_v}}$$
\For{ $\forall u \supset v$ }
{
$$m_{v \to u}(h_v) = \frac{p_v(h_v|e_v,\Theta_a',\Theta_b)}{m_{v \leftarrow u}(h_u)}$$
$$p_u(h_u|\Theta_b) = \frac{1}{Z_u}\left(e^{ -\sum_k \theta_kf_k(h_u) }\prod_{v \subset u} m_{v \to u}(h_v)\right)^{\!\!\frac{1}{c_v}}$$
}
}
}
\caption{Message passing algorithm for constrained minimization of free energies. $Z_{u}$ and $Z_{v}$ are the partition functions and $\eta$ is a step length satisfying the Armijo condition.}
\label{alg:MP}
\end{algorithm}
\endinput
\section{Introduction}
\label{sec:intro}
The ability to reconstruct the star-formation histories of galaxies, by characterising their stellar populations, allows one to trace their individual evolution through time, and thereby directly connect their descendants to their progenitors at higher redshifts. Thus far, high-redshift galaxy surveys have produced snapshots of the galaxy population at different points in cosmic time, which provide tight boundary conditions for galaxy formation models. However, the importance of the many physical processes included in these models is not directly constrained. We still do not know individual star-formation histories (SFHs) and how these are related to global galaxy properties. To constrain galaxy formation theories more directly, `archaeological' reconstruction can be used to trace the evolution of individual galaxies over time, and then the dependence of individual SFHs on stellar mass, stellar velocity dispersion and star-formation (SF) activity can be explored.
Reconstructing SFHs requires high resolution spectra of galaxies. Ideally, individual stars would be resolved, as they are for local dwarf galaxies \citep[e.g.,][]{weisz2011}. However, in most cases we have to rely on integrated stellar light, though if a galaxy's main star formation (SF) epoch lies at $z>1$, we cannot temporally resolve its stellar age distribution, even with the highest-quality spectra. While there is a plethora of high resolution spectra of galaxies in the local universe, most of these galaxies are too old \citep[$>5$\,Gyrs,][]{gallazzi2005} to resolve their star-formation histories (SFHs) due to the similarity of stellar spectra in the age range $>5$\,Gyrs. The general insight gained from the `archaeological' studies of these galaxies is that low-mass galaxies have more extended SFHs that peak at later cosmic times compared to high-mass galaxies \citep[`downsizing', e.g.,][]{gallazzi2005,thomas2005,thomas2010}. \added{Many of these studies involved the use of fossil record methods on SDSS \citep[Sloan Digital Sky Survey,][]{york2000} spectra of local galaxies \citep[e.g. ][]{juneau2005, thomas2005, fernandes2007, tojeiro2009, mcdermid2015, ibarra2016}. However, downsizing has also been seen in other studies, such as studies by \cite{cimatti2006}, who corrected luminosity function data of early-type galaxies by adopting the empirical luminosity dimming rate derived from the evolution of the Fundamental Plane of field and cluster massive early-type galaxies, as well as \cite{leitner2012}, who derived the average growth of stellar mass in local star-forming galaxies using a Main Sequence Integration approach.}
One approach to probe the high-redshift regime is to obtain an integrated view of galaxy evolution. Thus far, this has been the focus of spectroscopic observations of distant galaxies: the evolution of the star-formation rate density (SFRD) in the universe has been extensively studied \citep[e.g.,][]{karim2011, madau2014, khostovan2015, abramson2016}. The majority of these studies indicate that the SFRD increased from high redshift to $z\sim2$, and has since been decreasing steeply. Coupled with this are number-density evolution studies, which show an increasingly dominant population of quiescent galaxies \citep[e.g.,][]{pozzetti2010, bram2011, moustakas2013, muzzin2013g}.
Another approach is to use photometric measurements to trace SFHs; however, individual galaxy evolution is not easily traced with this method due to high uncertainties. In this case, one can investigate average SFHs of galaxies, as \citet{pacifici2016} have done by applying spectral energy distribution models to compute the median SFHs of 845 quiescent galaxies at $0.2< z< 2.1$. They found that galaxy stellar mass is a driving factor in determining how evolved galaxies are, with high-mass galaxies being the most evolved at any time. The limitation with these approaches is that we cannot connect progenitors to descendants: studies from mass-matched samples have resulted in multiple solutions \citep[e.g.,][]{torrey2017}. To understand the mechanics of how galaxies evolve, it is crucial to expand our view from focusing on the population of galaxies as a whole, to investigating how the star-formation rate (SFR) of individual galaxies varies with time.
Probing the SFHs of individual galaxies, however, still requires high-resolution, high-quality stellar continuum spectra, which are expensive to obtain. Consequently, high-redshift samples are small and often selected with criteria to optimize data quality and sample size rather than represent the full galaxy population. \citet{jorgensen2013} obtained spectra for $\sim80$ cluster galaxies at $z=0.5-0.9$ and found ages of $3-6$~Gyrs, consistent with passive evolution between $z\sim2$ and the present. Stellar population measurements of $\sim70$ $z\sim0.7$ galaxies with stellar masses $>10^{10}$\,M$_{\odot}$ were performed by \cite{gallazzi2014}; they found that passive galaxies have ages and metallicities consistent with those of present-day galaxies, and that star-forming galaxies require further star-formation and metal enrichment to evolve into present-day descendants. \citet{choi2014} analysed stacked spectra of thousands of passive galaxies in the redshift range $0.1<z<0.7$ and also found age evolution consistent with mostly passive evolution, with little dependence on mass at $z>0.5$. \citet{belli2015} measured ages of $1-4$~Gyrs for several dozen passive galaxies at redshifts $1<z<1.6$, indicating that we are approaching the cosmic epoch at which massive, passive galaxies stopped forming stars. Finally, at $z>1.5$, measurements are limited to stacked spectra \citep[e.g.,][]{whitaker2013,onodera2015} or sample sizes ranging from single objects to a handful \citep[e.g.,][]{vandokkum2010,toft2012,vandesande2013,kriek2016}. The typical age of massive, passive galaxies at those redshifts is found to be 1 Gyr or less, with short formation time scales. From this brief review, it is evident that samples at large lookback time are generally small and/or stacked. Furthermore, ages are usually estimated by assuming a single stellar population, which is arguably justified for very massive galaxies at late cosmic epochs, but not in general.
The LEGA-C \citep[Large Early Galaxy Astrophysics Census,][]{vdw2016} survey is collecting high $S/N$ spectra of $\sim3000$ galaxies in the redshift range $0.6<$ z $<1$, selected only by their $K$-band magnitude (a proxy for stellar mass). The data, which are comparable in quality to those obtained in the nearby universe, probe the internal kinematics of stars and gas, and the ages and metallicities of stellar populations. This enables us, for the first time, to reconstruct the SFHs of individual galaxies at large look-back time that are representative of the population. \deleted{The goal of this paper is to resolve the main formation phase of massive galaxies of all types and identify the evolutionary paths of individual galaxies through cosmic time. This provides } \added{These reconstructed SFHs can provide } a direct connection between progenitors and descendants, and \deleted{allows } \added{allow } us to constrain when, and how quickly, galaxies formed their stars.
Over the past decade, there have been several algorithms developed to recover SFHs, viz. \textit{MOPED}, \textit{STARLIGHT}, \textit{STECMAP}, \textit{VESPA}, \textit{ULYSS} and \textit{FIREFLY} \citep{heavens2000, fernandes2005, ocvirk2006, tojeiro2007, koleva2009, wilkinson2015}. We develop our own approach in this study to tailor the problem for galaxies at $z\sim1$. The main differences between our algorithm and some of those listed above are the use of composite stellar populations (a group of stars which range in age within a given interval) instead of simple stellar populations (stars born from a single burst in star formation); using a defined set of template spectra which allow for direct comparisons of the SFHs; as well as the assumption of constant star formation within a given time interval. The galaxy spectra are also not continuum-normalised in the fitting process, but photometry is used to calibrate the fluxes.
The \deleted{paper is } \added{goal of this paper is to reconstruct the SFHs of galaxies in the LEGA-C sample and investigate the dependence of individual SFHs on stellar mass, stellar velocity dispersion and star-formation (SF) activity. The paper is } outlined as follows. In Section \ref{sec:data} we give a brief overview of the LEGA-C dataset. In Section \ref{sec:method} we introduce the model for reconstructing the SFHs of the galaxies \deleted{, we present some of the resultant fits, } as well as tests of the model. In Section \added{\ref{fitres} we present a sample of the resultant fits and general trends of measured parameters. In Section } \ref{sec:mr} we investigate the SFH as a function of stellar velocity dispersion and stellar mass. We demonstrate that we can verify the relation between the evolution of SFHs and mass, and we investigate the \deleted{scatter } \added{variation } in the reconstructed SFHs, at fixed stellar velocity dispersion. Finally, in Section \ref{sec:mr} we \deleted{discuss } \added{summarise } the results. A $\Lambda{CDM}$ model is assumed with $H_0=67.7$\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_m=0.3$ and $\Omega_{\Lambda}=0.7$.
\section{Data}
\label{sec:data}
LEGA-C \citep{vdw2016} is an ongoing ESO Public Spectroscopic survey with VLT/VIMOS of $\sim3000$ galaxies in the COSMOS field ($R.A. = 10^h00^m$; $Dec. = +2^{\circ}12'$). The galaxies were selected from the Ultra-VISTA catalog \citep{muzzin2013f}, with redshifts in the range $0.6<z<1.0$. The galaxies were $K$-band selected with a magnitude limit ranging from $K(AB) = 21.08$ at $z=0.6$ to $K(AB) = 20.7$ at $z=0.8$ to $K(AB) = 20.36$ at $z=1.0$ (stellar masses $M_* > 10^{10}$\,M$_{\odot}$). These criteria were chosen to reduce the dependence on variations in age, SF activity and extinction, as well as to ensure that the targets were bright enough in the observed wavelength range ($0.6\,\mu{m}-0.9\,\mu{m}$) to obtain high quality, high resolution spectra (R\,$\sim3000$). Each galaxy is observed for $\sim20$\,h, which results in spectra with $S/N\sim20$\,\AA$^{-1}$.
The analyses in this work are based on the first-year data release\footnote{http://www.eso.org/qi/catalogQuery/index/93}, which contains spectra of 892 galaxies, 678 of which are in the primary sample and have a $S/N>5$\,\AA$^{-1}$ between rest-frame wavelengths 4000\,{\AA} and 4300\,{\AA} (typically, $S/N\sim20$\,\AA$^{-1}$). Emission line subtracted spectra are used in the fitting algorithm; therefore, the emission line spectrum of each galaxy, computed using the Penalized Pixel-Fitting method \cite[pPXF, ][]{cappellari2004}, is subtracted from the observed spectrum. For details of the emission line fitting procedure, see \deleted{Bezanson et al. (2017)}\added{\citet[][accepted in \apj]{bezanson2018}}. As part of the analysis of the model, we use the following measured quantities: stellar velocity dispersions ($\sigma_*$), 4000\,{\AA} break (D$_n$4000) and H$\delta$ equivalent width \deleted{(EW) } indices [\added{EW(H$\delta$)}], U-V colours, stellar masses ($M_{*,{FAST}}$), UV+IR SFRs, and UV+IR specific SFRs (sSFR$_{UV+IR}$). Stellar masses are determined using FAST \citep{kriek2009} based on photometric measurements, \cite{bruzual2003} stellar population libraries, adopting a \cite{chabrier2003} Initial Mass Function (IMF), \cite{calzetti2000} dust extinction, and exponentially declining SFRs. The UV+IR SFRs are estimated from the UV and IR luminosities, following \cite{whitaker2012}. For details of the data reduction procedure, see \cite{vdw2016}.
\added{\section{Spectral Fitting Technique}}
\label{sec:method}
\begin{deluxetable}{LRRR}[t]
\tablecaption{Properties of the \textit{FSPS} template spectra.\label{tab:bins}}
\tablewidth{0pt}
\tablehead{
\colhead{Age Bin\tablenotemark{a}} & \colhead{SFR\tablenotemark{b}} & \colhead{M$_*$\tablenotemark{c}} & \colhead{L$_{bol}$\tablenotemark{d}} \\
\colhead{log(yr)} & \colhead{M$_{\odot}$/yr} & \colhead{M$_{\odot}$} & \colhead{log(L$_{\odot}$)}
}
\startdata
0.000-8.000 & 1.000$\times10^{-8}$ & 0.837 & 1.964 \\
8.000-8.300 & 1.005$\times10^{-8}$ & 0.711 & 0.885 \\
8.300-8.475 & 1.010$\times10^{-8}$ & 0.748 & 0.650 \\
8.475-8.650 & 6.750$\times10^{-9}$ & 0.731 & 0.497 \\
8.650-8.750 & 8.646$\times10^{-9}$ & 0.718 & 0.382 \\
8.750-8.875 & 5.332$\times10^{-9}$ & 0.707 & 0.285 \\
8.875-9.000 & 3.998$\times10^{-9}$ & 0.695 & 0.187 \\
9.000-9.075 & 5.305$\times10^{-9}$ & 0.685 & 0.127 \\
9.075-9.225 & 2.040$\times10^{-9}$ & 0.671 & 0.099 \\
9.225-9.375 & 1.444$\times10^{-9}$ & 0.652 & -0.043 \\
9.375-9.525 & 1.022$\times10^{-9}$ & 0.639 & -0.161 \\
9.525-9.845 & 2.681$\times10^{-10}$ & 0.618 & -0.347 \\
\enddata
\tablenotetext{a}{Age interval of CSP templates.}
\tablenotetext{b}{SFR s.t. 1\,M$_\odot$ of stars formed within the interval.}
\tablenotetext{c}{Stellar mass (including stellar remnants) with mass loss accounted for.}
\tablenotetext{d}{Bolometric luminosity.}
\end{deluxetable}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{f1.pdf}
\caption{Template CSP spectra used to fit LEGA-C galaxies. They were generated from \textit{FSPS}, using the time intervals listed in Table \ref{tab:bins}, with solar metallicity and arbitrary velocity dispersion; and they have been normalised and shifted here for comparison purposes.}
\label{fig:tempbins}
\end{figure}
\added{\subsection{Stellar Population Model}}
\label{sec:model}
To reconstruct the SFHs of galaxies, one needs to gauge the various ages of stellar populations within these galaxies. This is done using stellar population spectra generated with the Python implementation of the Flexible Stellar Population Synthesis package \citep[\textit{FSPS v3.0};][]{conroy2010, conroy2009, foreman2014}, using the MILES spectral library \citep{sanchez2006}, Padova isochrones \citep{girardi2000, marigo2007, marigo2008} and a Kroupa initial mass function \citep{kroupa2001}.
A galaxy spectrum is approximated as a linear combination of template spectra at varying ages, attenuated by dust:
\begin{equation}
f_{\lambda} = \sum^n_{i=1}{m_i{T_{\lambda,i}10^{-0.4k_{\lambda}E(B-V)_i}}},
\label{eq:a1}
\end{equation}
\begin{displaymath}
{k_{\lambda} = 2.659\Big{(}-2.156 + \frac{1.509}{\lambda} - \frac{0.198}{\lambda^2} + \frac{0.011}{\lambda^3}\Big{)} + 4.05}
\end{displaymath}
\noindent{where $n$ is the number of stellar population spectra to fit to the galaxy, $T_{\lambda,i}$ are the template spectra, $m_i$ are the weights that scale the templates to match the spectra of the galaxy, $k_\lambda$ is the reddening curve \citep{calzetti2000}, and $E(B-V)_i$ are the dust reddening values.}
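As an illustration, Equation \ref{eq:a1} can be evaluated directly. The sketch below (hypothetical arrays, not the pipeline's actual code) combines weighted templates with the Calzetti reddening curve written above; `model_flux` and `calzetti_k` are names introduced here for illustration:

```python
import numpy as np

def calzetti_k(lam_um):
    """Calzetti (2000) reddening curve as written in the text
    (this polynomial branch is valid for ~0.12-0.63 micron)."""
    x = 1.0 / lam_um
    return 2.659 * (-2.156 + 1.509 * x - 0.198 * x**2 + 0.011 * x**3) + 4.05

def model_flux(weights, templates, k_lambda, ebv_young, ebv_old, young=0):
    """Eq. (1): linear combination of CSP templates attenuated by dust.

    weights   : (n,) template weights m_i
    templates : (n, n_wave) template spectra T_{lambda,i}
    k_lambda  : (n_wave,) reddening curve evaluated on the wavelength grid
    ebv_young : E(B-V)_1 for the youngest bin (index `young`)
    ebv_old   : E(B-V)_2 for all other bins
    """
    ebv = np.full(len(weights), ebv_old)
    ebv[young] = ebv_young
    atten = 10.0 ** (-0.4 * k_lambda[None, :] * ebv[:, None])
    return (weights[:, None] * templates * atten).sum(axis=0)
```

With all extinctions set to zero the model reduces to a plain weighted sum of the templates, which is a useful sanity check.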
We generate 12 composite stellar population spectra (CSPs), with solar metallicity \added{(see Section \ref{sec:robust})}, covering ages from $0$ to about $7$\,Gyrs, the age of the Universe in LEGA-C's redshift range. To determine the intervals of the 12 age bins of the CSPs, simple stellar population spectra (SSPs) were generated and the cumulative absolute difference from one spectrum to the next was calculated as the age increased; this cumulative difference was then divided into 12 equal-width percentiles (see Table \ref{tab:bins} and Figure \ref{fig:tempbins} for the properties of the CSPs in each age bin). This method of determining the time intervals generates template spectra that optimise the temporal sampling of an evolving stellar population. In practice, the age bins are $\sim0.15$\,dex wide over the age range $0-7$\,Gyrs.
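The bin-selection scheme described above can be sketched as follows (a minimal sketch; `ages` and `ssp_spectra` are hypothetical inputs on a common wavelength grid, not LEGA-C data):

```python
import numpy as np

def age_bin_edges(ages, ssp_spectra, n_bins=12):
    """Split an SSP age grid into bins of equal cumulative spectral change.

    ages        : (m,) SSP ages, monotonically increasing
    ssp_spectra : (m, n_wave) SSP spectra on a common wavelength grid
    """
    # absolute change between consecutive spectra, summed over wavelength
    step = np.abs(np.diff(ssp_spectra, axis=0)).sum(axis=1)
    cum = np.concatenate([[0.0], np.cumsum(step)])
    # cut the cumulative change into n_bins equal-width percentiles
    targets = np.linspace(0.0, cum[-1], n_bins + 1)
    return np.interp(targets, cum, ages)
```

Ages where the spectra evolve quickly receive narrow bins, and slowly evolving old populations receive wide ones, which is the intent of the scheme.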
The template spectra are generated with a constant SFR and are normalised such that $1$\,M$_{\odot}$ of stars is formed within each time bin (stellar masses include stellar remnants). Note that there is mass loss in each bin as massive stars die off (see Table \ref{tab:bins}). \added{We assume a constant SFR within each time bin as it presents a more realistic evolution of a galaxy's star formation with time, and can take into account rapid changes in the SFH. Choosing SSP templates would not lead to significantly different SFHs; it would, however, lead to aliasing effects when reconstructing the SFHs for samples of galaxies. } The templates are also broadened to the velocity dispersion of the galaxies \deleted{. } \added{\citep[][accepted in \apj]{bezanson2018}.} It is assumed that dust reddening is the same for all populations except for the youngest population (age $<100$\,Myrs). Dust extinction is expected to be different for young stellar populations as they are usually observed to be nested in the dust of their molecular birth clouds \citep{charlot2000}. Therefore, two dust reddening values are fit for: $E(B-V)_{1}$, for the age range $0-100$\,Myrs, and $E(B-V)_{2}$, for the rest of the age ranges.
\deleted{Solar metallicity was used to generate all the CSPs because according to
\cite{gallazzi2005, gallazzi2014}
, the metallicity-mass relation flattens out to solar metallicity in LEGA-C's mass range (log(M)\,$\gtrsim 10.5$), for $z\sim0.7$ galaxies. We also found that using sub-solar or super-solar metallicities for the templates, instead of solar metallicity, generally results in no significant differences in the $\chi^2$ values of the fits. Nevertheless, if we assign the implausibly high ($2.5$\,Z$_\odot$) or low ($0.4$\,Z$_\odot$) metallicities for the galaxies in our sample, the derived ages do not systematically change by more than $0.1$\,dex (less than the width of one age bin).
}
\subsection{Fitting Algorithm}
\label{sec:fit_alg}
To find the optimal values for the 14 parameters, viz. the 12 weight factors ($m_i$) \added{for the 12 CSP templates } and 2 dust reddening values ($E(B-V)_i$), we used \textit{emcee}, a Python implementation of an affine invariant ensemble sampler for MCMC \citep{foreman2013} which was proposed by \cite{goodman2010}. It uses MCMC `walkers' which randomly explore the parameter space, where each proposed step for a given walker depends on the positions of all the other walkers in the ensemble, with the aim of converging to the most likely parameter values.
The priors for the 14 parameters were set such that all parameter values were always greater or equal to 0, and the upper limit for $E(B-V)_i$ was set to 3. The parameter value for the youngest bin was initially set to be equal to the measured SFR from UV+IR measurements, but it was allowed to vary between 1/3 and 3 times that value during the fitting process, allowing for measurement errors. For the other bins, the best fitting single template, computed using least-squares fitting, was assigned all the stellar mass, with all other parameter values set to $10^{-6}$. Starting with equal SFRs in all bins also recovers the SFHs, however, the algorithm may take longer to converge to the optimal values.
For each galaxy, 100 MCMC walkers were used, initiated in a small region around the starting values mentioned above. A total of $20000$ samples were taken and 1000 steps were kept after burn-in. The mean acceptance fraction was $\gtrsim0.2$ and the typical autocorrelation time was $\sim95$ iterations. The optimal values for the parameters are taken as the 50$^{th}$ percentile of the list of samples of the converged walkers, and the lower and upper uncertainties are the 16$^{th}$ and 84$^{th}$ percentiles, respectively. The fitting algorithm resulted in 607 good fits based on their normalised $\chi^2$ values ($<5$, from visual inspection of the fits)\deleted{.
}\added{, and these were used in the analyses. The spectra that were not well-fit were mainly due to low S/N and AGN.}
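The priors and likelihood described above can be sketched as a single log-probability function (illustrative only, not the authors' exact code; `templates`, `data` and `ivar` are placeholder arrays, and dust attenuation is omitted for brevity):

```python
import numpy as np

def log_prob(theta, templates, data, ivar):
    """theta = 12 weights m_i followed by E(B-V)_1 and E(B-V)_2."""
    m, ebv = theta[:12], theta[12:]
    # priors from the text: all parameters >= 0, E(B-V) capped at 3
    if np.any(m < 0) or np.any(ebv < 0) or np.any(ebv > 3):
        return -np.inf
    model = m @ templates              # weighted sum of CSP templates
    return -0.5 * np.sum((data - model) ** 2 * ivar)

# With emcee (as in the text), 100 walkers would explore this posterior:
#   sampler = emcee.EnsembleSampler(100, 14, log_prob,
#                                   args=(templates, data, ivar))
#   sampler.run_mcmc(p0, 20000)
# and parameter estimates are the 16th/50th/84th percentiles of the
# flattened post-burn-in chain.
```

Returning `-np.inf` outside the prior volume is the standard way to encode hard bounds for an emcee-style sampler.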
\begin{figure*}[t]
\gridline{\fig{f2a.pdf}{0.45\textwidth}{}
\fig{f2b.pdf}{0.45\textwidth}{}}
\caption{\added{Reconstructed SFH (black) of a synthetic galaxy (green) with S/N\,$=10$\,\AA$^{-1}$ (left) and S/N\,$=30$\,\AA$^{-1}$ (right). The converged walkers are shown in grey and the lower and upper uncertainties are based on the 16$^{th}$ and 84$^{th}$ percentiles, respectively, as explained in Section \ref{sec:fit_alg}. By S/N\,$=30$\,\AA$^{-1}$, the recovered SFHs predict the stellar mass, age and luminosity with precision $\leq0.1$\,dex.}}
\label{fig:synsfhs}
\end{figure*}
\added{\subsection{Robustness of Fitting Results}}
\label{sec:robust}
\added{To assess the robustness of the model, we performed the following tests: generate and fit synthetic spectra; compare model stellar mass measurements of the LEGA-C population with those obtained from broad-band photometry (see Section \ref{sec:data}); fit a sample of SDSS spectra and compare model stellar masses with literature measurements; and test the assumption of solar metallicity.}
\added{Synthetic galaxy spectra were generated with varying SFHs using the CSPs in Section \ref{sec:model}, including simulated noise that mimics LEGA-C variance spectra, to compare how well the algorithm recovered the SFHs. 20 SFHs were generated for each S/N (5, 10, 20, 30, 40, 50 and 60\,\AA$^{-1}$), and the average deviations of the true a$_{<MW>}$, stellar mass and luminosity from the best-fitting model parameters were computed. In general, the model sufficiently recovered the SFHs, however, we note that the quality of the results depends on the noise introduced into a spectrum (see Figure \ref{fig:synsfhs} for two examples). Stellar mass and luminosity are recovered with precision $\leq0.1$\,dex for S/N$\geq20$, while a$_{<MW>}$\, only requires S/N$\geq10$ to reach the same level of precision. We note that these tests only constrain the purely random uncertainties due to the noise in the spectra while they do not include systematic errors in the data (e.g., sky subtraction, flux calibration) and systematic uncertainties in the FSPS model spectra.}
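The noise test above can be sketched as follows (illustrative: the actual test draws noise from the LEGA-C variance spectra rather than a constant per-pixel S/N, and `true`/`fit` values come from the MCMC fits; both helper names are introduced here):

```python
import numpy as np

def add_noise(flux, snr, rng):
    """Perturb a noiseless spectrum to an approximately constant per-pixel S/N."""
    sigma = flux / snr
    return flux + rng.normal(0.0, sigma)

def dex_deviation(true, fit):
    """Average |log10(fit/true)| deviation, as quoted for mass, age and L."""
    return np.mean(np.abs(np.log10(np.asarray(fit) / np.asarray(true))))
```

Repeating the fit over many noise realisations at each S/N and averaging `dex_deviation` gives the precision figures quoted in the text.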
\added{Imposing the MCMC model on the LEGA-C dataset and comparing the stellar masses measured from the model to those measured from FAST (using photometric measurements), resulted in very good agreement between the two methods, with a scatter of $\sim0.2$\,dex and an offset of $\sim0.03$\,dex. This scatter is larger than the formal uncertainty on our mass measurements ($\sim0.15$\,dex).}
\added{We used the fitting algorithm on a sample of 20 SDSS spectra of massive local galaxies ($z\sim0.1$), selected by stellar mass (M\,$_*>10^{10}$), to determine whether the model could recover the stellar masses measured in the literature. We compared the model stellar masses to measurements from the Portsmouth method \citep{maraston2009} and found satisfactory agreement, with a $\sim0.2$\,dex scatter. The maximum age of the templates was increased to $\sim12$\,Gyrs to account for the low redshift ($\sim0.1$) of the SDSS galaxies. The $0.2$\,dex random uncertainty is an indication of how results vary as a consequence of using a different SPS model (here, \cite{maraston2009} vs. }\textit{\added{FSPS}}\added{) and fitting algorithm. }
\added{Solar metallicity was used to generate all the CSPs because according to
\cite{gallazzi2005, gallazzi2014}, the metallicity-mass relation flattens out to solar metallicity in LEGA-C's mass range (log(M)\,$\gtrsim 10.5$), for $z\sim0.7$ galaxies. On the other hand, \cite{jorgensen2017} find evidence for evolution in the metallicity for cluster galaxies, as well as a trend of increasing metallicity with increasing velocity dispersion. We test our approach by repeating our fits with implausibly low metallicity ($0.4$\,Z$_\odot$, sub-solar) and high metallicity ($2.5$\,Z$_\odot$, super-solar) CSPs. We find no significant differences in the $\chi^2$ values of the fits, but, naturally, the inferred ages depend on the chosen metallicity. If we assign sub-solar metallicity for galaxies in our sample, the derived mass-weighted and light-weighted ages are older by $0.05$ and $0.08$\,dex, respectively, with a standard deviation of $0.16$ and $0.24$\,dex, respectively. If we assign super-solar metallicity for the sample, the light-weighted ages are younger by $0.03$\,dex, with a standard deviation of $0.20$\,dex. The mass-weighted age changes from solar to high metallicity are not normally distributed: 80\% of the galaxies have the same age to within $0.20$\,dex, while for the remaining 20\% the change in age ranges from $0.2$ to $0.9$\,dex. However, only 10 of these galaxies' mass-weighted ages change by $\geq$\,$0.5$\,dex and they have a mean light-weighted age of $\sim0.4$\,Gyr. The age changes do not depend on the measured stellar mass or stellar velocity dispersion.}
\added{The velocity dispersion-metallicity trend presented by \citet{jorgensen2017} implies that our assumption of solar metallicity for all galaxies may introduce a correlation between velocity dispersion and age. Our tests show that across the velocity dispersion range $\sigma_*$\,$=100-250$\,km\,s$^{-1}$, the magnitude of this effect would be at most $0.15$\,dex and likely less. This potential bias is insufficient to explain the $\sigma_*$-age relation we find in Section \ref{sec:insfh}. Follow-up studies that explore the interdependence of age, metallicity and other galaxy properties will need to take metallicity variations into account.}
\begin{figure*}[t]
\gridline{\fig{f3.pdf}{0.95\textwidth}{}}
\caption{Distributions of M$_{*,FIT}$\,(left), a$_{<MW>}$\,(middle) and a$_{<LW>}$\,(right) of the LEGA-C sample. The quiescent and star-forming populations (as defined in Section \ref{sec:corr}) are shown in red and blue, respectively. The distribution of the uncertainties for each parameter are shown at the top of each figure.}
\label{fig:hist}
\end{figure*}
\added{\section{Fitting Results}}
\label{fitres}
\subsection{Model Outputs}
Figure \ref{fig:hist} shows the distribution of the model-measured stellar masses (M$_{*,FIT}$, left panel), mean mass-weighted ages\footnote{Mean mass-weighted and light-weighted ages are obtained by averaging the midpoint ages of the CSPs weighted by mass and luminosity, respectively.\label{n:amlw}} (a$_{<MW>}$, middle panel) and mean light-weighted ages\textsuperscript{\ref{n:amlw}} (a$_{<LW>}$, right panel) of the LEGA-C sample, along with the distribution of uncertainties for each parameter. The distributions are separated into the quiescent (red) and star-forming (blue) populations to show the differences in the distributions based on current SF activity (see Section \ref{sec:mr}). The galaxies in the LEGA-C sample span a broad range of ages: a$_{<LW>}$\, can be as young as $60$\,Myrs and as old as $4.8$\,Gyrs, and has a median value of $1.2$\,Gyrs (see Figure \ref{fig:hist}). a$_{<MW>}$\, ranges from about $400$\,Myrs to about $5.2$\,Gyrs, with a median value of $3.8$\,Gyrs. However, most of these galaxies are old, with about 60\% being older than $3$\,Gyrs. The M$_{*,FIT}$\, of the galaxies ranges from $\sim2\times10^{9}$\,M$_{\odot}$ to $\sim4\times10^{11}$\,M$_{\odot}$, with a median value of about $6\times10^{10}$\,M$_{\odot}$. The \added{formal } age and mass uncertainties lie in the ranges 1-60\% and 1-40\%, respectively. \deleted{We note that these uncertainties are underestimated as they are computed from the MCMC model fit and they } \added{As stated in Section \ref{sec:robust}, these uncertainties } do not include systematic errors\deleted{ on the data as well as the template spectra (stellar population synthesis model)}.
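The footnote's definitions translate directly into code; a minimal sketch, where `t_mid` and the per-unit-mass luminosities are hypothetical inputs (cf. Table \ref{tab:bins}):

```python
import numpy as np

def weighted_ages(m, t_mid, lum_per_mass):
    """Mean mass- and light-weighted ages from the fitted weights.

    m            : (12,) fitted stellar mass per age bin
    t_mid        : (12,) midpoint ages of the CSP bins
    lum_per_mass : (12,) bolometric luminosity per unit mass in each bin
    """
    a_mw = np.sum(m * t_mid) / np.sum(m)          # mass-weighted mean age
    lw = m * lum_per_mass                         # luminosity per bin
    a_lw = np.sum(lw * t_mid) / np.sum(lw)        # light-weighted mean age
    return a_mw, a_lw
```

Because young populations are far more luminous per unit mass, a$_{<LW>}$\, is systematically younger than a$_{<MW>}$\, whenever recent SF is present, consistent with the distributions in Figure \ref{fig:hist}.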
\addtocounter{subsubsection}{-1}
\begin{figure*}
\gridline{\fig{f4a.pdf}{0.95\textwidth}{}}
\caption{\added{Sample of emission line subtracted spectra of 12 LEGA-C galaxies with the best fitting model obtained from combining the 12 template spectra using MCMC. The bottom-right figure, in each plot, is the reconstructed star formation history (the converged walkers are shown in grey). The MCMC resultant mass, luminosity, mass-weighted age and dust reddening values are shown in red. The spectra are ordered by a$_{<MW>}$.}}
\label{fig:fit1}
\end{figure*}
\renewcommand{\thefigure}{\arabic{figure} (Continued)}
\addtocounter{figure}{-1}
\begin{figure*}
\gridline{\fig{f4b.pdf}{0.95\textwidth}{}}
\caption{}
\label{fig:fit2}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\gridline{\fig{f4c.pdf}{0.95\textwidth}{}}
\caption{}
\label{fig:fit3}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}[t]
\gridline{\fig{f4d.pdf}{0.95\textwidth}{}}
\caption{}
\label{fig:fit4}
\end{figure*}
\renewcommand{\thefigure}{\arabic{figure}}
\begin{figure*}[t]
\gridline{\fig{f5a.pdf}{0.9\textwidth}{(a)}}
\label{fig:hdd4000_a}
\gridline{\fig{f5b.pdf}{0.9\textwidth}{(b)}}
\label{fig:hdd4000_b}
\caption{\added{EW(H$\delta$) versus D$_n$4000 (upper panel) and U-V colour versus V-J colour (lower panel) colour-coded by the time after which the final 10\% of stars were formed (left), the mean light-weighted age (middle), and the mean mass-weighted age (right). Typical error bars are indicated in grey.}}
\label{fig:hdd4000}
\end{figure*}
\added{\subsection{Sample SFHs}}
\label{sec:insfh}
Figure \ref{fig:fit1} shows the spectra of a sample of LEGA-C galaxies \added{(in a$_{<MW>}$\, order) } along with the best-fitting model spectra as described by Equation \ref{eq:a1} using the optimal parameter values from \textit{emcee}. The weight factors, $m_i$, represent the star formation histories of these galaxies (shown on the bottom-right of each figure). The resultant normalised $\chi^2$, dust reddening values ($E(B-V)_i$), stellar masses (M$_{*,FIT}$), luminosities (L$_{FIT}$) and mean mass-weighted ages (a$_{<MW>}$) from the model are shown in red. The sample was selected to display the wide range of SFHs recovered.
The reconstructed SFHs reveal that although most galaxies at $z\sim1$ have a$_{<MW>}$\,$>3$\,Gyrs, the sample \deleted{has a good variety } \added{spans a wide range } of histories. For the older massive galaxies, the oldest template (stars in the age range $\sim$3-7\,Gyrs) contributes the majority of their mass. Some of these galaxies only contain the oldest stars and have since been quiescent, i.e. for the past $\sim3$\,Gyrs (see the SFHs of 108361, 211736 and 130052 in Figure \ref{fig:fit1}). However, some galaxies were quiescent for several Gyrs and then had a renewed period of growth, either due to SF rejuvenation, or merging with a younger population. A merger could either result in an integration of the younger population with no further activity, or trigger bursts of star formation. This young population of stars accounts for $\sim10$\,\% of the mass of these galaxies (e.g. 206042, 131869 and 131393 in Figure \ref{fig:fit1}). We will explore the frequency of such rejuvenation events in more detail in a follow-up study.
\addtocounter{subsubsection}{-1}
\added{\subsection{General Trends}}
\label{sec:trends}
\deleted{Comparison between total stellar masses measured by the fitting algorithm ($M_{*,{FIT}}$) to those obtained by FAST based on photometric measurements ($M_{*,{FAST}}$). The MCMC mass and its upper and lower uncertainties are based on the 50$^{th}$, 16$^{th}$ and 84$^{th}$ percentiles, respectively, as explained in Section \ref{sec:fit_alg}.}
\deleted{H$\delta$ EW versus D$_n$4000 (upper panel) and U-V colour versus V-J colour (lower panel) colour-coded by the time after which the final 10\% of stars were formed (left), the mean light-weighted age (middle), and the median stellar age (right). Typical error bars are indicated in grey.}
\deleted{To assess the robustness of the model, we performed the following tests: generate and fit synthetic spectra; fit a sample of SDSS spectra and compare model stellar masses with literature measurements; compare model stellar mass measurements of the LEGA-C population with those obtained from FAST; and analyse trends of familiar spectral features and colours with model masses and ages.}
\deleted{Synthetic galaxy spectra were generated with varying SFHs, including simulated noise that mimics LEGA-C variance spectra, to compare how well the algorithm recovered the SFHs. 20 SFHs were generated for each S/N, and the average deviations of the true a$_{<MW>}$, stellar mass and luminosity from the best-fitting model parameters were computed. In general, the model sufficiently recovered the SFHs, however, we note that the quality of the results depends on the noise introduced into a spectrum. Stellar mass and luminosity are recovered with precision $\leq0.1$\,dex for S/N$\geq20$, while a$_{<MW>}$\, only requires S/N$\geq10$ to reach the same level of precision. As stated in Section \ref{sec:insfh}, the uncertainties computed here are underestimated as they are only based on the fitting algorithm.}
\deleted{We used the fitting algorithm on a sample of 20 SDSS spectra of massive local galaxies ($z\sim0.1$), selected by stellar mass (M\,$_*>10^{10}$), to determine whether the model could recover the stellar masses measured in the literature. We compared the model stellar masses to measurements from the Portsmouth method \citep{maraston2009} and found satisfactory agreement, with a $\sim0.2$\,dex scatter. The maximum age of the templates was increased to $\sim12$\,Gyrs to account for the low redshift ($\sim0.1$) of the SDSS galaxies. The $0.2$\,dex random uncertainty is an indication of how results vary as a consequence of using a different SPS model (here, \cite{maraston2009} vs. }\textit{\deleted{FSPS}}\deleted{). Imposing the MCMC model on the LEGA-C dataset and comparing the stellar masses measured from the model to those measured from FAST (using photometric measurements), resulted in very good agreement between the two methods, with a scatter of $\sim0.2$\,dex and an offset of $\sim0.03$\,dex. This scatter is larger than the formal uncertainty on our mass measurements ($\sim0.15$\,dex).}
\deleted{The upper panel of} \added{Figure \ref{fig:hdd4000}(a)} presents the distribution of \deleted{the }\added{EW(}H$\delta$\deleted{EW}\added{)} as a function of the D$_n$4000 break colour-coded by the time after which the final 10\% of stars were formed (a$_{10}$, left panel), \deleted{a$_{<LW>}$ } {\added{a$_{<LW>}$}} (middle panel) \deleted{, and the median stellar age}\footnote{\deleted{Median ages are obtained by computing the time after which 50\% of the stellar population was formed.}}
\addtocounter{footnote}{-1}
\deleted{(a$_{50}$, } \added{and }{\added{a$_{<MW>}$}} \added{(}right panel), estimated from the model. \added{The EW(H$\delta$)-D$_n$4000 distribution is analysed in depth in
\cite{wu2018}. } As expected, for all three age parameters, galaxies generally evolve from the upper-left region (high EW(H$\delta$) and low D$_n$4000) to the lower-right region (low EW(H$\delta$) and high D$_n$4000) as they age. \deleted{a$_{10}$ and a$_{<LW>}$ } {\added{a$_{10}$}} \added{and }{\added{a$_{<LW>}$}} are more correlated with each other than \deleted{a$_{50}$ } {\added{a$_{<MW>}$}} because they track young stars; they also have smoother transitions in the EW(H$\delta$)-D$_n$4000 plane because those features primarily track recent SF activity ($\lesssim1$\,Gyr). \deleted{The lower panel of} \added{Figure \ref{fig:hdd4000}(b)} shows the rest-frame U-V colour as a function of restframe V-J colour-coded by the same 3 age parameters as above. Once again, expected trends are seen: \deleted{a$_{10}$, a$_{<LW>}$ and a$_{50}$ } {\added{a$_{10}$}}\added{, }{\added{a$_{<LW>}$}} \added{and }{\added{a$_{<MW>}$}} correlate with the restframe colours as U-V and V-J primarily reflect recent star formation ($\sim1$\,Gyr). There is a notable population of old galaxies (\deleted{a$_{50}$}\added{a$_{<MW>}$} \,$>3.5$\,Gyrs) with relatively blue colours, which indicates that these galaxies have extended SFHs.
To demonstrate the validity of the old ages (\deleted{a$_{50}$,}\added{a$_{<MW>}$\,} $> 3.5$\,Gyrs) assigned to galaxies in the young region of the EW(H$\delta$)-D$_n$4000 plane, i.e. galaxies shown in red in the right panel of Figure \ref{fig:hdd4000}, with D$_n$4000 $<1.3$ and EW(H$\delta$) $>2$, we refer to their SFHs. These galaxies formed most of their stars early on, but also have significant recent star formation. While some seem to have been quiescent at some point in their history before they were possibly rejuvenated or merged with another galaxy (e.g. 206042 in Figure \ref{fig:fit1}), others formed stars throughout their history (e.g. 210003 in Figure \ref{fig:fit1}). Moreover, the presence of young and old populations can be seen in their spectra: they have clearly visible Balmer lines, characteristic of younger galaxies; but they also have H and K absorption lines of singly ionized Calcium with similar strengths, which is typical of older galaxies, in addition to the presence of the G-band (absorption lines of the CH molecule) around 4300\,{\AA}. As a test, we reran the fits of these galaxies excluding the 3 oldest templates and found that the spectra cannot be well fit.
\deleted{Velocity dispersion versus the model-measured stellar mass colour-coded by a$_{<MW>}$. The star-forming and quiescent populations are shown in the middle and right panels, respectively. Typical error bars are indicated in grey. The clear separation between young and old galaxies at $\sigma_*\sim170$\,km\,s$^{-1}$ shows a stronger correlation between a$_{<MW>}$\, and $\sigma_*$\, over M$_{*,FIT}$, which also depends on the current SF activity.}
There is also a population of galaxies that seem to contain only young stars (e.g. 111932 and 116791 in Figure \ref{fig:fit1}), which would imply that these galaxies formed more than 90\% of their mass recently (\deleted{lookback}\added{when the Universe was $>6$} \,\deleted{$<8$\,} Gyrs). To test if young populations are `outshining' the rest of these galaxies, i.e. if there are hidden populations of old stars, we reran the fits of these galaxies allowing only the oldest template parameter to vary. We found that the contribution in mass of the old population can increase by $\sim5-10$\% before the \deleted{fits are visually degraded} \added{normalised $\chi^2$ changes by more than $0.08$\,dex. The change in $\chi^2$ is mainly due to the continuum shape of the spectra. Therefore, } \deleted{, therefore,} these galaxies do not harbour significant populations of old stars that are concealed by the light of very young stars.
\begin{figure*}[t]
\gridline{\fig{f6.pdf}{0.85\textwidth}{}}
\caption{\added{a$_{<MW>}$\, as a function of M$_{*,FIT}$\, (left) and $\sigma_*$\, (right). The star-forming and quiescent populations are indicated in blue and red, respectively, and typical error bars are indicated in grey. Galaxies with $\sigma_*$\,$\gtrsim200$\,km\,s$^{-1}$ are almost exclusively old ($>4$Gyrs) and quiescent, which indicates that $\sigma_*$\, is a stronger predictor of age and SF activity.}}
\label{fig:age_vs_v_m}
\end{figure*}
\begin{figure*}[t]
\gridline{\fig{f7.pdf}{1.0\textwidth}{}}
\caption{$\sigma_*$\, versus M$_{*,FIT}$, colour-coded by a$_{<MW>}$. The star-forming and quiescent populations are shown in the middle and right panels, respectively. Typical error bars are indicated in grey. The clear separation between young and old galaxies at $\sigma_*\sim170$\,km\,s$^{-1}$ shows a stronger correlation between a$_{<MW>}$\, and $\sigma_*$\, over M$_{*,FIT}$, which also depends on the current SF activity.}
\label{fig:vdisp_vs_mass}
\end{figure*}
\begin{figure*}[t]
\gridline{\fig{f8a.pdf}{0.95\textwidth}{(a)}}
\label{fig:sfr_lb_mbins_a}
\gridline{\fig{f8b.pdf}{0.95\textwidth}{(b)}}
\label{fig:sfr_lb_mbins_b}
\caption{Ensemble-averaged SFHs of LEGA-C galaxies, normalised by stellar mass and separated into various \deleted{velocity dispersion} \added{$\sigma_*$\,} (top) and \deleted{stellar mass} \added{M$_{*,FIT}$,} bins (bottom). The histories are divided into the star-forming and quiescent populations in the middle and right panels, respectively. The stellar content in massive galaxies formed earlier and faster, regardless of current SF activity.}
\label{fig:sfr_lb_mbins}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{f9.pdf}
\caption{SFHs of the LEGA-C sample (normalised by stellar mass) as a function of the age of the Universe separated into four \deleted{velocity dispersion} \added{$\sigma_*$\,} bins indicated by the labels. The colours differentiate between the star-forming and quiescent populations at the observed redshift.}
\label{fig:sfr_q_sf_vdisp}
\end{figure*}
\section{SFHs of the Galaxy Population}
\label{sec:mr}
\added{\subsection{Correlations between Age, M$_{*,FIT}$ and $\sigma_*$}}
\label{sec:corr}
\added{Figure \ref{fig:age_vs_v_m} shows a$_{<MW>}$\, } as a function of \deleted{M$_{*,FIT}$ colour-coded by a$_{<MW>}$\,(} \added{stellar mass (M$_{*,FIT}$\,, } left panel) and \deleted{divided } \added{stellar velocity dispersion ($\sigma_*$\,, right panel) colour-coded } by current SF activity, i.e. whether the galaxy is quiescent (log(sSFR$_{UV+IR}$\,[Gyr$^{-1}$])\,$<-1$) or star-forming. Selecting quiescent galaxies by their U-V and V-J colours would result in similar trends. \added{a$_{<MW>}$\, generally correlates more strongly with M$_{*,FIT}$\, than $\sigma_*$. However, there is a $\sigma_*$\, threshold above which galaxies are almost exclusively old and quiescent: galaxies with $\sigma_*$\,$>200-250$\,km\,s$^{-1}$ and a$_{<MW>}$\,$<4$\,Gyrs are very rare. Such a clear threshold does not exist for M$_{*,FIT}$: high-mass galaxies (M$_{*,FIT}$\,$\gtrsim10^{11}$\,M$_{\odot}$) show a variety of ages.}
\added{To further illustrate these trends, we show $\sigma_*$\, as a function of M$_{*,FIT}$\, colour-coded by a$_{<MW>}$\, (left panel) and divided by current SF activity (middle and right panels) in Figure \ref{fig:vdisp_vs_mass}.} There is a discernible separation between old \added{($>4$\,Gyrs)} and young \added{($<4$\,Gyrs) galaxies at a velocity dispersion of $\sigma_*$\,$\sim170$\,km\,s$^{-1}$.}\deleted{with velocity dispersion ($\sigma_*\sim170$\,km\,s$^{-1}$) that spans a broad range of M$_{*,FIT}$.} \added{Taken together with the trends seen in Figure \ref{fig:age_vs_v_m}, we can conclude that $\sigma_*$\,$>250$\,km\,s$^{-1}$ is a sufficient requirement for having an old age and $\sigma_*$\,$\sim170$\,km\,s$^{-1}$ is a necessary requirement for old age.} This extends the properties of present-day early-type galaxies, for which a correlation between \deleted{structure} \added{velocity dispersion (and closely related quantities such as surface mass density and central mass density)} and stellar age has been shown to be more fundamental than age trends with stellar mass \citep{kauffmann2003,vanderwel2009,graves2009}, to higher redshift. Our results also extend the widely reported correlation between velocity dispersion \deleted{(and closely related quantities such as} \added{(as well as} surface mass density and central mass density) and \deleted{SFR} \added{SF activity} \citep[e.g.,][]{franx2008,mosleh2017,barro2017} to an underlying correlation with overall stellar age.
The \added{scaling} relation between $\sigma_*$\, and black hole (BH) mass implies that large BH mass is correlated with early SF and its subsequent cessation. Such a scenario is supported by the direct correlation between BH mass and SF activity \citep[e.g.,][]{Terrazas2016} and the large fraction of radio AGN among galaxies with large velocity dispersions both at low and high redshift \citep[e.g.,][]{best2005, barisic2017}.
It is interesting to note that \deleted{a$_{<MW>}$\, does not correlate with $\sigma_*$\,} \added{the correlation between a$_{<MW>}$\, and $\sigma_*$\, seen in Figure \ref{fig:vdisp_vs_mass} is significantly weakened} after dividing the population by current SF activity\deleted{ (Figure \ref{fig:vdisp_vs_mass})}. Instead, \added{for} the star-forming population\added{, galaxy age is better} \deleted{is more} correlated with M$_{*,FIT}$\,\deleted{ and the quiescent population has no preference for either parameter} \added{(also seen in Figure \ref{fig:age_vs_v_m})}. \deleted{The stronger correlation between a$_{<MW>}$\, and M$_{*,FIT}$\, for star-forming galaxies indicates that when SF starts in a galaxy, its SFH is to first order constant such that M$_{*,FIT}$\, simply traces age (SFH). On the other hand, most quiescent galaxies in our sample are very old (see Figure \ref{fig:hist} and \ref{fig:fit1}), therefore, they form most of their stars in our oldest age bin, which potentially hides an existing correlation between $\sigma_*$\, or M$_{*,FIT}$\, with SFH. We still don't fully resolve the SFHs of these galaxies; whether that is the limitation of the data or a choice of age bins remains to be seen.} \added{A straightforward interpretation is that when galaxies are growing rapidly through SF--that is, when they are located on or near the SF `Main Sequence'--then M$_*$ mostly traces how long this main SF phase has lasted so far. In other words, M$_*$ simply traces the build-up of the stellar population over time, while $\sigma_*$\, is related to the end of this main SF phase, i.e. to the regulation and cessation of SF, presumably through AGN feedback.}
\subsection{Evolution of the average SFHs}
\label{sec:sfh1}
The \deleted{(completeness corrected) } average SFHs of galaxies, normalised by stellar mass, as a function of $\sigma_*$ \deleted{(upper panel)} and M$_{*,FIT}$\, \deleted{(lower panel)} are shown in Figure \ref{fig:sfr_lb_mbins}(a) and (b), respectively. The \added{average SFHs were corrected for completeness by weighting each galaxy by a completeness correction factor to create a volume-limited quantity
\cite[see ][]{wu2018}. The } population is divided by its current star-formation activity in order to disentangle the effects from these two populations as well as compare them. The velocity dispersion and mass ranges were selected such that each bin contains enough galaxies ($\geq10$) in both the quiescent and star-forming populations. These relations are used to determine whether $z\sim1$ galaxies also show a downsizing trend in their SFHs, as many studies of local galaxies have suggested. However, the SFHs seen at $z\sim1$ would not be resolved at $z\sim0.1$, as the stellar populations would be too old.
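The completeness-weighted ensemble average can be sketched as follows (a minimal sketch; the array names and the per-galaxy weights are placeholders for the quantities described in the text):

```python
import numpy as np

def ensemble_sfh(sfr, mass, completeness_weight):
    """Completeness-corrected ensemble average of mass-normalised SFHs.

    sfr                 : (n_gal, n_bins) SFR of each galaxy in each age bin
    mass                : (n_gal,) total stellar masses
    completeness_weight : (n_gal,) volume-correction weights
    """
    specific = sfr / mass[:, None]         # normalise each SFH by stellar mass
    w = completeness_weight[:, None]
    return (w * specific).sum(axis=0) / w.sum()
```

Applying this per $\sigma_*$\, or M$_{*,FIT}$\, bin, and separately to the quiescent and star-forming subsets, yields curves of the kind shown in Figure \ref{fig:sfr_lb_mbins}.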
On average, high-$\sigma_*$ galaxies ($\sigma_* \geq 170$\,km\,s$^{-1}$) had higher SFRs at earlier epochs which started to decline rapidly, at a rate that increases with $\sigma_*$ and stellar mass, when the Universe was $\sim3$\,Gyrs old. Most galaxies with lower velocity dispersions ($\sigma_* < 170$\,km\,s$^{-1}$) gradually build their stellar mass as the Universe evolves; however, the SFR of a minority, i.e. the quiescent population, began to decline when the Universe was $\sim5$\,Gyrs old. Higher-mass star-forming galaxies (M$_{*,FIT}$\,$\geq10^{10.5} M_{\odot}$) have SFHs that are consistent with constant star formation with time, while the lower mass galaxies (M$_{*,FIT}$\,$<10^{10.5}$) still have rising SFRs. The star-forming population is still undergoing its main formation phase. For the star-forming population, the SFH trend is clear with M$_{*,FIT}$\, but not with $\sigma_*$, echoing the finding in Section \ref{sec:corr} that M$_{*,FIT}$\, is better correlated with the SFHs of star-forming galaxies (see Figures \added{\ref{fig:age_vs_v_m} and} \ref{fig:vdisp_vs_mass}).
Figure \ref{fig:sfr_lb_mbins} reveals that, on average, most galaxies in the sample were forming stars quite early on; however, the SFRs were systematically higher, and the eventual decline systematically more rapid, with increasing $\sigma_*$\, (M$_{*,FIT}$\, for the star-forming population). This is clear evidence for the top-down scenario, where galaxies downsize in their star formation with time (more massive galaxies have older stars). This is seen in the overall population, and more strongly so in the quiescent population. While this result is in alignment with previous studies for the local universe \citep[e.g. ][]{juneau2005, thomas2005, tojeiro2009, mcdermid2015, ibarra2016}, our work establishes this trend at $z\sim1$ (half the current age of the Universe) using full spectrum fitting. \cite{wu2018}'s study of the D$_n$4000 and H$\delta$ spectral features also supports the downsizing scenario.
\subsection{The variety of SFHs}
\label{sec:sfh2}
In Figure \ref{fig:sfr_q_sf_vdisp}, we show all the stellar mass normalised SFHs in the LEGA-C sample, separated into four velocity dispersion bins and divided into the quiescent and star-forming population (at the observed redshift) as defined in Section \ref{sec:corr}. This reveals the large scatter in the SFHs at fixed mass, in addition to discerning the differences in the histories based on the current star-formation activity of the galaxies.
The SFHs of quiescent galaxies peaked early on in the Universe and thereafter, their activity generally decreases with time; while star-forming galaxies gradually grow in SFR, which peaked at later epochs. The quiescent population has consistently higher SFRs at early epochs, whereas its star-forming counterpart has higher SFRs at later epochs. This indicates that star-forming galaxies assemble their mass more slowly than the quiescent population. The dominance of the quiescent population increases from low to high-mass galaxies, and vice versa for the star-forming population.
The SFRs of low-mass galaxies ($\sigma_* < 115$\,km\,s$^{-1}$) have been gradually increasing, with large scatter at all epochs. These galaxies are currently undergoing the main stages of their star formation. Note that the lowest dispersion bin suffers from incompleteness, due to the survey sample selection approach: at fixed mass, quiescent galaxies are fainter in the K band than star-forming galaxies, which causes an under-representation in the LEGA-C sample. However, it is well known that low-mass star-forming galaxies outnumber quiescent galaxies of the same mass; therefore, the SFHs in Figure \ref{fig:sfr_q_sf_vdisp} can be considered illustrative.
The quiescent and star-forming populations are more evenly distributed (in number and variation of SFHs) in the intermediate-$\sigma_*$ regime (between $160$ and $205$\,km\,s$^{-1}$), while the high-$\sigma_*$ population ($\sigma_* \geq 205$\,km\,s$^{-1}$) is dominated by quiescent galaxies. The disparity between the SFHs of the quiescent and star-forming populations in the high-$\sigma_*$ regime indicates that galaxies `remember' their past. There is a strong coherence among the SFHs of quiescent and star-forming galaxies, respectively. This behaviour extends to the peak of cosmic SF activity at $z\sim$\,2-3. This implies that SF activity at the moment of observation is strongly correlated with the SF activity $\sim3$\,Gyrs prior. The results of this work indicate that many evolutionary paths can lead to galaxies at a given velocity dispersion. This illustrates the difficulty of connecting progenitor and descendant populations at different cosmic epochs.
\subsection{Comparisons to Literature Measurements}
\label{sec:sfh3}
As stated in Section \ref{sec:sfh1}, the deconstructed SFHs in this study support the galaxy downsizing scenario, which has long been studied (see Section \ref{sec:intro}). \cite{leitner2012}'s finding that star-forming galaxies formed only $\sim15$\,\% of their mass before $z=$\,1-2 (mass dependent), suggesting that present-day star-forming galaxies are not the descendants of massive star-forming galaxies at $z>2$, is in line with our results, since the peak in star formation occurs at $z<1.5$ for almost all star-forming galaxies in the LEGA-C sample.
Intermediate-redshift stellar population studies are sparse, due to the high S/N required to undertake such studies (see Section \ref{sec:intro}). Pertaining to this work, there are a few studies we can draw comparisons from, viz. \cite{choi2014} and \cite{gallazzi2014}. Measurements by \cite{choi2014} and \cite{gallazzi2014}, indicating that passive galaxies have ages consistent with mostly passive evolution, are also in alignment with this study, as the reconstructed SFHs indicate that galaxies stay quiescent, barring some histories that showed low-level star formation after quiescence. \cite{gallazzi2014} reported an average light-weighted age of $\sim5$\,Gyrs for a $4\times10^{10}$\,M$_{\odot}$ galaxy, consistent with our value of $4.8$\,Gyrs for a galaxy of the same mass.
\cite{diemer2017} tested \cite{gladders2013}'s hypothesis that the SFHs of individual galaxies are characterised by a log-normal function in time, which implies a slow decline in SFRs rather than rapid quenching. They did this by comparing the log-normal parameter space of total stellar mass, peak time, and full width at half maximum of simulated galaxies from Illustris \citep{vogelsberger2014} and \cite{gladders2013}, as well as the SFHs of a sample of quiescent galaxies derived by \cite{pacifici2016} using a large library of computed theoretical SFHs. They found good agreement between all three studies; however, Illustris predicted more extended SFHs on average. LEGA-C galaxies support the slow-quenching picture of galaxy evolution that \cite{gladders2013} have suggested, with a rate of decline that is mass dependent, as we have seen. More comparisons will be performed in later papers.
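The log-normal form tested by \cite{diemer2017} can be made concrete; below is a minimal numerical sketch of such an SFH. The peak time and width are illustrative values only, not parameters fit to any LEGA-C galaxy:

```python
import math

def lognormal_sfr(t, t0, tau):
    """Log-normal SFH: SFR(t) proportional to
    (1/t) * exp(-(ln t - t0)^2 / (2 tau^2)),
    with t the time since the Big Bang in Gyr."""
    return (1.0 / (t * tau * math.sqrt(2.0 * math.pi))) * \
        math.exp(-((math.log(t) - t0) ** 2) / (2.0 * tau ** 2))

# Illustrative parameters: logarithmic peak time of 3 Gyr, width 0.5 in ln t.
t0, tau = math.log(3.0), 0.5
times = [1.0 + 0.25 * i for i in range(40)]   # 1 to ~11 Gyr after the Big Bang
sfh = [lognormal_sfr(t, t0, tau) for t in times]
```

The slow post-peak decline of this curve, as opposed to a sharp cut-off, is what distinguishes the slow-quenching picture from rapid quenching.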
\section{Summary}
\label{sec:summ}
We have reconstructed the SFHs of galaxies in the current LEGA-C sample, which contains 678 primary sample galaxies with S/N\,$\sim20$\,\AA$^{-1}$ in the redshift range $0.6<z<1$. We have done this by implementing an algorithm to fit flexible SFHs to the full spectrum, using \textit{FSPS} and \textit{emcee}. The galaxy spectra were fit to a linear combination of a defined set of 12 CSPs, with solar metallicity and constant star formation within the time interval of the templates. In 90\% of the cases the algorithm produced good fits based on the normalised $\chi^2$ values. We found a wide variety of SFHs, although 60\% of the galaxies have a$_{<MW>}$\,$>3$\,Gyrs by the time we observe them (Figures \ref{fig:hist} and \ref{fig:fit1}). We note, however, that age estimates from spectral fits suffer from the increasing degeneracy of spectral features as stellar populations age. Most of the old galaxies (a$_{<MW>}$\,$\gtrsim3$\,Gyrs) had very low SFRs early on ($\gtrsim6$\,Gyrs after the Big Bang, Figure \ref{fig:fit1}), although some exhibit subsequent peaks in star formation, which could be an indication of rejuvenated star formation, or a merger with a younger population. The mass formed from this more recent star formation activity is, however, only about 10\% of the mass formed throughout the galaxies' histories. The median a$_{<LW>}$, a$_{<MW>}$\, and M$_{*,FIT}$\, were found to be $1.2$\,Gyrs, $3.8$\,Gyrs and $10^{10.8}$\,M$_{\odot}$, respectively.
The main objective of this work was to investigate how our reconstructed SFHs behave as a function of stellar mass, stellar velocity dispersion and star-formation (SF) activity, as well as the variation they show at fixed velocity dispersion. We found that galaxies at $z\sim1$ have similar trends in their SFHs compared to local galaxies, i.e. the stellar content in massive galaxies formed earlier and faster (Figure \ref{fig:sfr_lb_mbins}). This top-down scenario is a known trend from fossil record inferences using SDSS spectra; however, in this study, it is shown for $z\sim1$ galaxies for the first time using full spectrum fitting. We found that the scatter between the quiescent and star-forming populations increases towards lower redshift (Figure \ref{fig:sfr_q_sf_vdisp}), which indicates that current SF activity is strongly correlated with past SF activity. High-dispersion quiescent galaxies had their star formation peak early, $>9.5$\,Gyrs ago, and exhibit decreasing SFRs throughout the rest of their history, for the most part. We found that the lowest dispersion galaxies in our sample are undergoing the main stage of their star formation as we observe them ($7$\,Gyrs ago).
The results of the spectral fits were used to measure a number of galaxy properties, viz. ages (a$_{<LW>}$, a$_{<MW>}$, etc.) and stellar mass, in order to test the model by investigating how these properties relate to one another as well as to other properties measured from the galaxy spectra, e.g. velocity dispersion, $H{\delta}$, D$_n$4000, etc. We showed that galaxies evolve from the top-left to the bottom-right of the EW(H$\delta$)-D$_n$4000 plane as they age, as would be expected (Figure \ref{fig:hdd4000}).
Recovering the full SFHs of intermediate-redshift galaxies opens up a multitude of avenues of research. In this work we have shown the clear differences between the SFHs of quiescent and star-forming galaxies and how these SFHs scatter at fixed velocity dispersion. We have also shown that velocity dispersion is a better indicator of the age and current SF activity of galaxies as a whole than stellar mass, while stellar mass is a better indicator of the age of star-forming galaxies (Figures \ref{fig:age_vs_v_m} and \ref{fig:vdisp_vs_mass}). In future studies, we will use the reconstructed SFHs to constrain the quenching speed and rate, as well as to investigate the relationship between galactic structure and SFHs. These constraints will become valuable for future facilities like JWST, which will investigate the properties of galaxies beyond $z\sim2$ and will need $z\sim1$ measurements as a benchmark to connect those populations to the local Universe.
\section{Acknowledgements }
\label{sec:ack}
Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 194-A.2005 (The LEGA-C Public Spectroscopy Survey). PC gratefully acknowledges financial support through a DAAD-Stipendium. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 683184). KN and CS acknowledge support from the Deutsche Forschungsgemeinschaft (GZ: WE 4755/4-1). We gratefully acknowledge the NWO Spinoza grant.
\section{Introduction}
\hspace{5mm} In this project, we examine numerical methods for solving forward backward stochastic differential equations (FBSDEs) of McKean-Vlasov type. We are particularly interested in equations of this type as they can be used to represent, from the probabilistic viewpoint, solutions to mean field games or, more generally, to mean field stochastic control problems.
Mean field games were developed independently and at about the same time by Lasry and Lions \cite{Lasry_Lions}, and Huang, Caines, and Malham\'{e} \cite{Huang}. The goal of this theory is to understand the limit as $N \rightarrow \infty$ of the Nash equilibria for an $N$ player stochastic dynamic game with mean field interaction, meaning that players interact with one another through their collective state. Equivalently, mean field games can be regarded as the continuum limit of games with a large number of symmetric players, each
of them having a small effect on the dynamics of the whole group. The applications of mean field games are numerous and spread across many disciplines, including social science (congestion \cite{congestion1,congestion2,congestion3}, cyber attacks \cite{cyber}), biology (flocking \cite{Nourian,flocking2}), and economics (systemic risk \cite{systemic_risk}, production of exhaustible resources \cite{sircar,chan}), to name a few.
As explained in
\cite{MFGP,FBSDEs}, solutions to mean field games can be characterized through a coupled system of two forward and backward stochastic differential equations of mean field type, like those we address in this note. The forward equation provides the dynamics of one typical player in the population at equilibrium. Generally speaking, the backward equation accounts for the optimality condition in the definition of an equilibrium, and the McKean-Vlasov structure is precisely there to stress the fact that the player at hand is representative of the others.
As exemplified in
\cite{CDL}, other forms of equilibria can be addressed by means of this kind of equations. This includes optimal mean field control problems, which can be regarded as the continuum limit of a control problem involving a large number of symmetric players obeying a central planner. Below, we mostly focus on examples arising in the theory of mean field games.
Whilst deterministic numerical methods, based upon finite differences or variational approaches, are also conceivable for handling mean field games, see
\cite{AchdouCapuzzo-Dolcetta,AchdouCamilliCapuzzo-Dolcetta2,AchdouPorretta}
and
\cite{BenamouCarlier,LachapelleSalomonTurinici,Gueant_numerical}, we here focus on the approach based on FBSDEs. In this regard, we
implement (and compare) two different algorithms. The first algorithm, which is based on the paper of Chassagneux, Crisan, and Delarue \cite{Solver}, relies on a tree structure to represent the pathwise law of the solution. The second algorithm, and the main contribution of this report, takes the algorithm presented in the paper of Delarue and Menozzi \cite{Grid} for solving FBSDEs and extends it to the mean field framework at hand. In this algorithm, a grid structure is used to represent the marginal laws of the solution.
A serious issue that we face in this note is that both methods are based upon a Picard scheme: the first method involves a global Picard scheme on the whole process, while the second involves a Picard scheme on the sole marginal laws of the process.
It is indeed a well-known fact that, because of the strong coupling between the forward and backward equations, Picard schemes for FBSDEs may just converge in small time, even in the classical case without mean field interaction. For sure, this limitation should persist
in the mean field setting for the global Picard method; as exemplified below, it turns out that it persists as well when the Picard scheme is applied to the marginal laws.
One of our main contributions in this report is to apply the time continuation approach presented in \cite{Solver} to the grid algorithm and to compare the results with the tree algorithm, for which the time continuation approach was originally designed in \cite{Solver}. In brief, time continuation makes it possible to extend, by a continuation argument, the time interval on which the Picard scheme converges. We illustrate both algorithms on a handful of example problems.
Section \ref{sec:review} provides a review of Nash equilibria in $N$ player stochastic differential games, and their continuum mean field game counterparts. We review two probabilistic approaches to formulate the solutions of mean field games and provide the general FBSDE system which we would like to solve. In Section \ref{Algorithms}, we describe the algorithms that we implement in the report. Some benchmark examples and the corresponding numerical results are presented in Section \ref{Examples}. We conclude in Section \ref{Conclusion}.
\section{Overview of Mean Field Games and FBSDEs}\label{sec:review}
The purpose of this section is to introduce the theoretical material that is needed for our numerical analysis.
The objective is purely pedagogical and the text does not contain any new result.
\subsection{$N$ Player Stochastic Differential Games} \label{N-Player}
\hspace{5mm}
We start with the description of the prototype of a finite player game in the theory of mean field games. We consider $N \in \mathbb{Z}^+$ players indexed by $i \in \{1,\dots, N \}$. The dynamic game occurs over a fixed time horizon $[0,T]$ for some $T>0$. We have $N$ independent $m$-dimensional Brownian motions $(W^i_t)_{0\leq t \leq T}$ which are supported by a filtered probability space $(\Omega,\mathcal{F},\mathbb{F}=(\mathcal{F}_t)_{0 \leq t \leq T},\mathbb{P})$. Each player chooses its control $\alpha^i=(\alpha^i_t)_{0\leq t\leq T}$ from the set $\mathbb{A}$, defined as the set of square-integrable $\mathbb{F}$-adapted processes with values in a given set $A$ (typically $A$ is a closed convex subset of a Euclidean space). Each player $i$ has a state $X^i$ which evolves according to the stochastic differential equation:
\begin{equation*}
dX^i_t=b^i(t,X^i_t,\bar{\mu}_t,\alpha^i_t)dt+\sigma^i(t,X^i_t,\bar{\mu}_t,\alpha^i_t) dW^i_t,
\end{equation*}
where $\bar{\mu}_t$ denotes the empirical distribution of the players' states: $\bar{\mu}_t=\frac{1}{N}\sum_{j=1}^N \delta_{X^j_t} \in \mathcal{P}_2(\mathbb{R}^d)$. Here $\mathcal{P}_2(\mathbb{R}^d)$ is the space of probability measures with a finite second moment, which we equip with the 2-Wasserstein distance, denoted by $W_2$. For $\mu, \nu \in \mathcal{P}_2(\mathbb{R}^d)$, we call $\Gamma(\mu, \nu)$ the
set of all the joint laws with marginals $\mu$ and $\nu$. Then, the 2-Wasserstein distance is defined by:
\begin{equation*}
W_2(\mu,\nu) = \left(\inf_{\gamma \in \Gamma(\mu, \nu)} \int |x - y|^2 d \gamma(x,y)\right)^{1/2}.
\end{equation*}
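For empirical measures on the real line with equally many atoms (the situation met when comparing simulated particle systems in one dimension), the infimum above is attained by the monotone coupling, i.e. by matching sorted samples; a minimal sketch:

```python
def wasserstein2_1d(xs, ys):
    """2-Wasserstein distance between two empirical measures on the real
    line with the same number of atoms: in one dimension the optimal
    coupling is monotone, so it suffices to match sorted samples."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return (sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

# Two atoms at 0 vs two atoms at 2: every unit of mass moves a distance 2.
print(wasserstein2_1d([0.0, 0.0], [2.0, 2.0]))  # → 2.0
```

In higher dimensions no such shortcut exists and the infimum becomes an optimal transport problem, which is one reason numerical schemes prefer to manipulate discretised marginal laws directly.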
The drift and volatility functions, $b^i$ and $\sigma^i$, respectively, are deterministic functions $(b^i,\sigma^i): [0,T] \cross \mathbb{R}^d \cross \mathcal{P}_2(\mathbb{R}^d) \cross {A} \rightarrow \mathbb{R}^d \cross \mathbb{R}^{d \cross m}$. Most of the time, they are assumed to be bounded in time and to be Lipschitz continuous with respect to all the arguments, the Lipschitz property in the measure argument being understood with respect to $W_{2}$. This ensures that, for a given $\bm{\alpha}=(\alpha^1,\cdots,\alpha^N)$, the state dynamics
$(X^1,\cdots,X^N)$ is well defined.
Given a tuple of controls $\bm{\alpha}=(\alpha^1,\dots, \alpha^N)$, we associate with player $i$ a cost objective which we take to be of the form:
\begin{equation*}
J^i(\bm{\alpha})=\mathbb{E} \left[\int_0^T f^i(t,X^i_t,\bar{\mu}_t,\alpha^i_t) dt+ g^i(X^i_T,\bar{\mu}_T) \right].
\end{equation*}
Thus each player considers a deterministic running cost $f^i:[0,T] \cross \mathbb{R}^d \cross \mathcal{P}_2(\mathbb{R}^d) \cross {A} \rightarrow \mathbb{R}$, and deterministic terminal cost $g^i:\mathbb{R}^d \cross \mathcal{P}_2(\mathbb{R}^d) \rightarrow \mathbb{R}$. Of course, each of them wishes to minimize its own cost by tuning its own control in the most relevant way.
Note that we only allow the interaction of the players through their empirical measure, as this will be needed in our formulation of the continuum limit. Still, extensions exist, in which players also interact through the controls, see Subsection \ref{extended}.
The players are in a \textit{Nash equilibrium} if no player can lower its cost by switching its own strategy while the other players' strategies are held fixed. More precisely, the set of strategies $\bm{\alpha}$ is a Nash equilibrium if
\begin{equation*}
J^i(\bm{\alpha}) \leq J^i(\alpha^1,\dots, \alpha^{i-1},\alpha,\alpha^{i+1},\dots, \alpha^N), \forall \alpha \in \mathbb{A}, \forall i \in \{1,\dots,N\}.
\end{equation*}
\subsection{Mean Field Games} \label{MFG}
\hspace{5mm} For games where $N$ is large, the problem quickly becomes intractable. Thus we turn to the continuum limit by considering the limit as $N$ tends to infinity.
In order for this limit to make sense, we require the players to be symmetric. Precisely, we require $b=b^i$, $\sigma=\sigma^i$, $f=f^i$, and $g=g^i$ $\forall i \in \{1, \dots, N \}$. As the number of players increases, the impact of each player on the empirical distribution decreases, and we expect a propagation of chaos such that the players become asymptotically independent of each other. This is the rationale for passing to the limit: asymptotically, the influence of any single player on the group should be null, and the statistical structure of the whole population should be relatively simple.
We wish to formulate the analogue of a Nash equilibrium when there is a continuum of players. To this end, we consider the states and actions of the other players to be fixed, and consider the best response for a representative player (as we expect equilibria to inherit the symmetric structure of the game). Thus, the first step is to solve an optimization problem. The next step is to find a fixed point, providing an analogue of a Nash equilibrium for the mean field game.
We again have a filtered probability space $(\Omega,\mathcal{F},\mathbb{F}=(\mathcal{F}_t)_{0 \leq t \leq T},\mathbb{P})$ where the filtration supports an $m$-dimensional Brownian motion $W=(W_t)_{0\leq t \leq T}$ and an initial condition $\xi \in L^2(\Omega,\mathcal{F}_0,\mathbb{P};\mathbb{R}^d)$.
The strategy for solving the asymptotic game is the following:
\begin{enumerate}
\item For a fixed deterministic flow of probability measures $\mu=(\mu_t)_{0\leq t \leq T} \in \mathcal{C}([0,T] , \mathcal{P}_2(\mathbb{R}^d))$, solve the standard stochastic control problem:
\begin{equation}
\label{SSCP}
\inf_{\alpha \in \mathbb{A}}J^{\mu}(\alpha)=\mathbb{E}\left[\int_0^T f(t,X^{\alpha}_t,\mu_t,\alpha_t)dt+g(X^{\alpha}_T,\mu_T) \right],
\end{equation}
subject to
\begin{equation*}
\begin{split}
dX^{\alpha}_t&=b(t,X^{\alpha}_t,\mu_t,\alpha_t)dt+\sigma(t,X^{\alpha}_t,\mu_t,\alpha_t) dW_t \\
X^{\alpha}_0&=\xi.
\end{split}
\end{equation*}
\item Find a fixed point, $\mu$, such that $\mathcal{L}(X^{\alpha}_t)=\mu_t$ for all $0\leq t \leq T$.
\end{enumerate}
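The two-step strategy above can be sketched as a Picard iteration on the flow of marginal laws. The toy below is not one of the algorithms of Section \ref{Algorithms}: it assumes the best response to a flow of means $(m_t)_t$ is already known in feedback form $\hat{\alpha}(t,x,m)=\lambda(m-x)$ (as happens in linear-quadratic models), tracks only the marginal means, and iterates steps 1 and 2 until the flow is a fixed point; the function and parameter names are illustrative.

```python
import random

def simulate_means(m_flow, lam=1.0, sigma=0.3, T=1.0,
                   n_steps=20, n_paths=2000, seed=0):
    """Euler-Maruyama simulation of dX_t = lam*(m_t - X_t) dt + sigma dW_t
    for a frozen flow of means m_flow; returns the realised flow of
    sample means (length n_steps + 1)."""
    rng = random.Random(seed)
    dt = T / n_steps
    xs = [1.0] * n_paths                 # deterministic initial condition
    means = [1.0]
    for k in range(n_steps):
        m = m_flow[k]
        xs = [x + lam * (m - x) * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
              for x in xs]
        means.append(sum(xs) / n_paths)
    return means

# Picard iteration on the flow of marginal means (step 1 + step 2):
# freeze the flow, simulate the best-response dynamics, read off the new flow.
m_flow = [0.0] * 21                      # arbitrary initial guess
for _ in range(10):
    m_flow = simulate_means(m_flow)
# Here the fixed point is the constant flow m_t = 1 (the initial mean),
# since the assumed feedback lam*(m_t - x) preserves the population mean.
```

Even in this toy the structure is visible: each Picard sweep requires a full resolution of the frozen-measure problem, and nothing guarantees the sweep contracts on long horizons, which is the difficulty the time continuation approach addresses.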
This strategy can be tackled from either the PDE viewpoint (leading to a coupled Hamilton-Jacobi-Bellman and Kolmogorov/Fokker-Plank equations, known as the MFG system in the literature) \cite{Lasry_Lions} \cite{Huang} or the probabilistic viewpoint \cite{MFGP} \cite{FBSDEs}, which is the focus of this project. Within the probabilistic viewpoint, there are two approaches, both of which are formulated with FBSDEs. See Chapters $3$ and $4$ of the manuscript of Carmona and Delarue \cite{MFGP} for reference on the two probabilistic approaches.
For simplicity, from now on we assume that $m$, the dimension of the Brownian motion, matches $d$, the dimension of the state variable. We also assume the diffusion coefficient, $\sigma$, is a constant matrix $\sigma \in \mathbb{R}^{d \cross d}$. For both approaches, we will utilize the Hamiltonian deriving from the aforementioned stochastic control problem \eqref{SSCP}. In fact, since the volatility is uncontrolled, we can just write the reduced Hamiltonian:
\begin{equation*}
H(t,x,\mu,\alpha,y)=b(t,x,\mu,\alpha) \cdot y+f(t,x,\mu,\alpha),
\end{equation*}
for $t \in [0,T]$, $x \in \mathbb{R}^d$, $\mu \in \mathcal{P}_2(\mathbb{R}^d)$, $\alpha \in A$, and an adjoint variable $y \in \mathbb{R}^d$. Then, a key object in order to formulate the solution to
\eqref{SSCP}
is (whenever it exists):
\begin{equation*}
\hat{\alpha}(t,x,\mu,y)=\arg \inf_{\alpha \in A} H(t,x,\mu,\alpha,y).
\end{equation*}
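As an elementary illustration (with hypothetical scalar coefficients, not those of any example below): if $b(t,x,\mu,\alpha)=\alpha$ and $f(t,x,\mu,\alpha)=\frac{1}{2}\alpha^{2}+f_{0}(t,x,\mu)$ with $A=[-a_{\max},a_{\max}]$, then $H$ is strictly convex in $\alpha$ and $\hat{\alpha}(t,x,\mu,y)$ is the projection of the unconstrained minimiser $-y$ onto $A$:

```python
def alpha_hat(y, a_max=1.0):
    """Minimiser over A = [-a_max, a_max] of the reduced Hamiltonian
    H(alpha) = alpha * y + 0.5 * alpha**2 (+ terms free of alpha):
    the unconstrained minimiser -y projected onto the closed convex set A."""
    return max(-a_max, min(a_max, -y))

print(alpha_hat(0.5))   # interior minimiser: -0.5
print(alpha_hat(3.0))   # clipped to the boundary of A: -1.0
```
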
We will provide below explicit examples for $\hat{\alpha}(t,x,\mu,y)$. We now give a brief introduction to the two probabilistic approaches.
\subsubsection{Weak approach}
\hspace{5mm} In the first probabilistic approach, which we will refer to as the weak approach, the optimization problem is solved using a backward SDE for the probabilistic representation of the value function. For a fixed flow of measures $\mu=(\mu_{t})_{0 \le t \le T}$, let $u:[0,T] \times \mathbb{R}^d \rightarrow \mathbb{R}$ denote the value function:
\begin{equation*}
u(t,x):=\inf_{(\alpha_s)_{t \leq s \leq T} \in \mathbb{A}} \mathbb{E} \left[\int_t^T f(s,X_s,\mu_s,\alpha_s) ds+g(X_T,\mu_T) \mid X_t=x \right].
\end{equation*}
The strategy is to evaluate the value function along the solution of the state process $(X_{t})_{0 \leq t \leq T}$, namely we
let $Y_t=u(t,X_t)$. The weak formulation of the stochastic control problem underpinning $u$ says that, under suitable assumptions
that are exemplified below,
the pair $(X_{t},Y_{t})_{0 \leq t \leq T}$
has to solve the following FBSDE:
\begin{equation}
\begin{split}
dX_t&=b\left(t,X_t,\mu_{t},\hat{\alpha}\left(t,X_t,\mu_{t},\sigma^{-1}Z_t\right)\right)dt+\sigma dW_t \\
X_0&=\xi ,\\
dY_t&=-f\left(t,X_t,\mu_{t},\hat{\alpha}\left(t,X_t,\mu_{t},\sigma^{-1}Z_t\right)\right)dt+Z_t dW_t \\
Y_T&=g(X_T,\mu_{T}),
\end{split}
\label{weak0}
\end{equation}
where we assume $\sigma$ to be invertible. For instance, we take the following set of assumptions from Chapter 3 of the manuscript by Carmona and Delarue \cite{MFGP}: We may assume that the set $A$ for the values of the controls is a bounded closed convex subset of a Euclidean space, the deterministic functions $b$, $f$, and $g$ are defined on $[0,T] \cross \mathbb{R}^d \cross \mathcal{P}_2(\mathbb{R}^d) \cross A$, $[0,T] \cross \mathbb{R}^d \cross \mathcal{P}_2(\mathbb{R}^d) \cross A$, and $\mathbb{R}^d \cross \mathcal{P}_2(\mathbb{R}^d)$, respectively, and there exists a constant $C_0>1$ such that:
\begin{itemize}
\item For any $t\in [0,T]$, $x$, $x'$ $\in \mathbb{R}^d$, $\alpha$, $\alpha '$ $\in A$ and $\mu \in \mathcal{P}_2(\mathbb{R}^d)$ :
\begin{equation*}
\begin{split}
\abs{(b,f)(t,x',\mu,\alpha')-(b,f)(t,x,\mu,\alpha)}&+\abs{\sigma(t,x')-\sigma(t,x)}\\
&+\abs{g(x',\mu)-g(x,\mu)}\leq C_0\abs{(x,\alpha)-(x',\alpha')}.
\end{split}
\end{equation*}
\item The functions $b$, $f$, $\sigma$ and $g$ are bounded by $C_0$.
\item There exists a function
\begin{equation*}
\hat{\alpha}:[0,T] \cross \mathbb{R}^d \cross \mathcal{P}_2(\mathbb{R}^d) \cross \mathbb{R}^d \ni (t,x,\mu,y) \rightarrow \hat{\alpha}(t,x,\mu,y)
\end{equation*}
which is $C_0$-Lipschitz continuous in $(x,y)$ such that, for each $(t,x,\mu,y) \in [0,T] \cross \mathbb{R}^d \cross \mathcal{P}_2(\mathbb{R}^d) \cross \mathbb{R}^d$, $\hat{\alpha}(t,x,\mu,y)$ is the unique minimizer of the Hamiltonian $H(t,x,\mu,\alpha,y)$.
\end{itemize}
Under this set of assumptions, it is shown in Chapter 3 of the manuscript by Carmona and Delarue \cite{MFGP} that a flow of measures $\mu=(\mu_{t})_{0 \leq t \leq T}$ is a mean field game equilibrium if and only if $\mu_t=\mathcal{L}(X_t), \forall t \in[0,T]$, where $(X,Y,Z)$ is a solution of the weak approach FBSDE system in Equation (\ref{weak0}), in which case
(\ref{weak0}) becomes
\begin{equation}
\begin{split}
dX_t&=b\left(t,X_t,\mathcal{L}(X_t),\hat{\alpha}\left(t,X_t,\mathcal{L}(X_t),\sigma^{-1}Z_t\right)\right)dt+\sigma dW_t \\
X_0&=\xi ,\\
dY_t&=-f\left(t,X_t,\mathcal{L}(X_t),\hat{\alpha}\left(t,X_t,\mathcal{L}(X_t),\sigma^{-1}Z_t\right)\right)dt+Z_t dW_t \\
Y_T&=g(X_T,\mathcal{L}(X_T)),
\end{split}
\label{weak}
\end{equation}
where we use the generic notation ${\mathcal L}(\cdot)$ for the law of a random variable.
This approach is developed further in the papers and manuscript of Carmona and Delarue \cite{MFGP,FBSDEs}.
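To give an idea of how a backward equation like the one in (\ref{weak0}) is attacked numerically before any mean field coupling enters, the sketch below discretises a toy decoupled case with zero driver ($f \equiv 0$, $b \equiv 0$, $\sigma=1$, $g(x)=x$, so that $Y_t=\mathbb{E}[X_T \mid X_t]$) and replaces the conditional expectations by least-squares regression on the basis $\{1,x\}$ along simulated forward paths. This only illustrates the backward-in-time regression idea, not the tree or grid algorithm of Section \ref{Algorithms}; all parameter choices are illustrative.

```python
import random

def lsmc_bsde_y0(x0=0.5, T=1.0, n_steps=10, n_paths=4000, seed=1):
    """Backward discretisation of dY_t = Z_t dW_t, Y_T = g(X_T), along the
    forward paths dX_t = dW_t (zero driver, g(x) = x).  Conditional
    expectations are replaced by least-squares regression of Y_{k+1}
    on the basis {1, x} evaluated at X_{t_k}."""
    rng = random.Random(seed)
    dt = T / n_steps
    paths = [[x0] for _ in range(n_paths)]          # forward Euler paths
    for _ in range(n_steps):
        for p in paths:
            p.append(p[-1] + dt ** 0.5 * rng.gauss(0.0, 1.0))
    ys = [p[-1] for p in paths]                     # terminal condition g(X_T)
    for k in range(n_steps - 1, 0, -1):             # backward in time
        xs = [p[k] for p in paths]
        # normal equations for the least-squares fit ys ~ a + b * xs
        n, sx, sy = float(n_paths), sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        a = (sy - b * sx) / n
        ys = [a + b * x for x in xs]                # Y along the paths at t_k
    return sum(ys) / n_paths                        # Y_0, here E[X_T] = x0
```

With a nonzero driver, a $-f(\cdot)\,dt$ term is added at each backward step, and in the mean field setting the frozen flow of measures enters the coefficients and is then updated by an outer Picard iteration.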
\subsubsection{Pontryagin approach}
\hspace{5mm} The second probabilistic approach, which we will refer to as the Pontryagin approach, is based on the Pontryagin stochastic maximum principle. In this formulation, the optimization problem is solved using a backward SDE for the probabilistic representation of the spatial derivative of the value function $u$. Formally,
the strategy is thus to evaluate $\nabla_{x} u$ along the state process $(X_{t})_{0 \leq t \leq T}$. Hence we let $Y_t=\nabla_x u(t,X_t)$,
which makes sense when $\nabla_{x} u$ is well-defined. In fact the Pontryagin system may be formulated without any further reference to the regularity of $u$, the Pontryagin formulation having the following general form:
\begin{equation}
\begin{split}
dX_t=&b\left(t,X_t,\mu_t,\hat{\alpha}\left(t,X_t,\mu_t,Y_t \right)\right)dt+\sigma dW_t \\
X_0=&\xi, \\
dY_t=&-[\nabla_x b\left(t,X_t,\mu_t,\hat{\alpha}\left(t,X_t,\mu_t,Y_t \right)\right)\cdot Y_t \\
&+\nabla_xf\left(t,X_t,\mu_t,\hat{\alpha}\left(t,X_t,\mu_t,Y_t\right)\right)]dt +Z_t dW_t \\
Y_T=&\nabla_xg(X_T,\mu_{T}),
\end{split}
\label{Pontryagin0}
\end{equation}
where we assume $b$, $f$ and $g$ to be differentiable with respect to $x$. We may use the following set of assumptions taken from Chapter 3 of the manuscript by Carmona and Delarue \cite{MFGP} to guarantee that the Pontryagin system is both a necessary and a sufficient condition of optimality: We assume the coefficients $b$, $f$, and $g$ are defined on $[0,T] \cross \mathbb{R}^d \cross \mathcal{P}_2(\mathbb{R}^d) \cross A$, $[0,T] \cross \mathbb{R}^d \cross \mathcal{P}_2(\mathbb{R}^d) \cross A$, and $\mathbb{R}^d \cross \mathcal{P}_2(\mathbb{R}^d)$, respectively. We also assume that they satisfy:
\begin{itemize}
\item The drift $b$ is an affine function of $(x,\alpha)$ of the form:
\begin{equation*}
b(t, x,\mu,\alpha) =b_0(t,\mu)+b_1(t)x+b_2(t)\alpha,
\end{equation*}
where $b_0: [0,T] \cross \mathcal{P}_2(\mathbb{R}^d) \ni (t,\mu) \mapsto b_0(t,\mu)$, $b_1 : [0,T] \ni t \mapsto b_1(t)$ and $b_2 : [0,T] \ni t \mapsto b_2(t)$ are $\mathbb{R}^d$, $\mathbb{R}^{d \cross d}$ and $\mathbb{R}^{d\cross d}$ valued, respectively, and are measurable and bounded on bounded subsets of their respective domains.
\item There exist two constants $C_1 >0$ and $C_2 \geq 1$ such that the function $\mathbb{R}^d \cross A \ni (x,\alpha)
\mapsto f(t,x,\mu,\alpha) \in \mathbb{R}$ is once continuously differentiable with Lipschitz-continuous derivatives (so that $f(t,\cdot,\mu,\cdot)$ is $C^{1,1}$), the Lipschitz constant in $x$ and $\alpha$ being bounded by $C_2$ (so that it is uniform in $t$ and $\mu$). Moreover, it satisfies the following strong form of convexity:
\begin{equation*}
f(t,x', \mu, \alpha')-f(t,x, \mu, \alpha)-(x'-x,\alpha' - \alpha) \cdot \partial_{(x,\alpha)}f(t,x,\mu,\alpha)\geq C_1 \abs{\alpha' - \alpha}^2.
\end{equation*}
The notation $\partial_{(x,\alpha)}f$ stands for the gradient in the joint variables $(x,\alpha)$. Finally, $f$, $\partial_x f$ and $\partial_{\alpha}f$ are locally bounded over $[0,T] \cross \mathbb{R}^d \cross \mathcal{P}_2(\mathbb{R}^d) \cross A$.
\item The function $\mathbb{R}^d \cross \mathcal{P}_2(\mathbb{R}^d) \ni (x,\mu) \mapsto g(x,\mu)$ is locally bounded. Moreover, for any $\mu \in \mathcal{P}_2(\mathbb{R}^d)$, the function $\mathbb{R}^d \ni x \mapsto g(x,\mu)$ is once continuously differentiable and convex, and has a $C_2$-Lipschitz continuous first order derivative.
\end{itemize}
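For instance (our own illustrative example, not taken from \cite{MFGP}), the linear-quadratic model
\begin{equation*}
b(t,x,\mu,\alpha)=\alpha, \qquad f(t,x,\mu,\alpha)=\frac{1}{2}\abs{\alpha}^2+\frac{1}{2}\abs{x-\bar{\mu}}^2, \qquad g(x,\mu)=\frac{1}{2}\abs{x}^2,
\end{equation*}
where $\bar{\mu}:=\int_{\mathbb{R}^d} y \, d\mu(y)$, satisfies all three assumptions: $b$ is affine with $b_0 \equiv 0$, $b_1 \equiv 0$ and $b_2 \equiv I_d$; since $f$ is quadratic, a direct computation gives
\begin{equation*}
f(t,x',\mu,\alpha')-f(t,x,\mu,\alpha)-(x'-x,\alpha'-\alpha)\cdot \partial_{(x,\alpha)}f(t,x,\mu,\alpha)=\frac{1}{2}\abs{x'-x}^2+\frac{1}{2}\abs{\alpha'-\alpha}^2,
\end{equation*}
so the convexity condition holds with $C_1=\frac{1}{2}$; and all the first order derivatives involved are $1$-Lipschitz, so that $C_2=1$ works.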
Under these assumptions, it is shown in Chapter 3 of the manuscript by Carmona and Delarue \cite{MFGP} that a flow of measures $\mu=(\mu_{t})_{0 \le t \le T}$ is a mean field game equilibrium if and only if $\mu_t=\mathcal{L}(X_t), \forall t \in[0,T]$, where $(X,Y,Z)$ is a solution of the Pontryagin approach FBSDE system in Equation (\ref{Pontryagin0}), in which case
(\ref{Pontryagin0}) becomes
\begin{equation}
\begin{split}
dX_t&=b\left(t,X_t,\mathcal{L}(X_t),\hat{\alpha}\left(t,X_t,\mathcal{L}(X_t),Y_t \right)\right)dt+\sigma dW_t \\
X_0&=\xi,
\\
dY_t&=-[\nabla_x b\left(t,X_t,\mathcal{L}(X_t),\hat{\alpha}\left(t,X_t,\mathcal{L}(X_t),Y_t \right)\right)\cdot Y_t \\
&+\nabla_xf\left(t,X_t,\mathcal{L}(X_t),\hat{\alpha}\left(t,X_t,\mathcal{L}(X_t),Y_t\right)\right)]dt +Z_t dW_t \\
Y_T&=\nabla_xg(X_T,\mathcal{L}(X_T)).
\end{split}
\label{Pontryagin}
\end{equation}
This approach is also developed further in the papers and manuscript of Carmona and Delarue \cite{MFGP,FBSDEs}.
\subsubsection{Mean Field Games of Control}
\label{extended}
\hspace{5mm} In many applications, individuals may interact through their controls, instead of their states. One example is the trade crowding application tackled with a mean field game approach in the paper of Cardaliaguet and Lehalle \cite{trader}. There is also a model for price impact in the book of Carmona and Delarue which we take as one of our example problems in Section \ref{trader}. Mean field games where players interact through the law of their controls are sometimes referred to as extended mean field games \cite{extended}, see also Chapter 4 in \cite{MFGP}. We are interested in testing our numerical methods on a certain class of mean field games of control: those in which the interaction is through the marginal distributions, $\mathcal{L}(X_t)$ and $\mathcal{L}(\alpha_t)$. To design our algorithms to handle such a general framework of mean field interaction, we study numerical methods for solving a general FBSDE system which includes the two approaches detailed above, as well as this class of mean field games of control.
\subsection{General System}
\hspace{5mm} We can address both probabilistic formulations for mean field games and a class of mean field games of control simultaneously by considering the following general FBSDE system. We now take the dimension of the state space to be $d=1$ since, for simplicity, our algorithms will only be applied to this case. Let $[X]=\mathcal{L}(X)$ denote the law of a process $X$. With an abuse of notation, let $[X,Y,Z]:=(\mathcal{L}(X),\mathcal{L}(Y),\mathcal{L}(Z))$ denote the laws of the individual processes (and not their joint law). The general system is the following:
\begin{equation}
\begin{split}
dX_t &= B(t,X_t,Y_t,Z_t,[X_t,Y_t,Z_t])dt + \sigma dW_t \\
X_0&=\xi \in L^2(\Omega,\mathcal{F}_0,\mathbb{P};\mathbb{R}), \\
dY_t &= -F(t,X_t,Y_t,Z_t,[X_t,Y_t,Z_t])dt + Z_t dW_t \\
Y_T &= G(X_T,[X_T]). \\
\end{split}
\label{general_equation}
\end{equation}
The assumption that the coefficients depend, at most, upon the marginal laws ${\mathcal L}(X_{t})$, ${\mathcal L}(Y_{t})$ and ${\mathcal L}(Z_{t})$ of the triple $(X_{t},Y_{t},Z_{t})$ (and not upon the
full joint law) is tailor-made to the applications we have in mind: As explained in the previous paragraph, we want to handle games
in which
players interact with one another through the law of the control. In order to guess the impact this may have on the FBSDE representation, it is
worth recalling that, in the problem
\eqref{SSCP},
the optimal control may be represented in a quite generic way through the function $\hat{\alpha}$ with
the adjoint variable $y$ therein taken as the gradient of the value function.
Under the weak formulation approach, this turns the FBSDE system
\eqref{weak} into an FBSDE system depending on the marginal laws of $Z$, as $Z_{t}$ is known to have the representation
$Z_{t} = \nabla_{x} u(t,X_{t}) \sigma$. Under the Pontryagin approach, it turns the FBSDE system
\eqref{Pontryagin} into an FBSDE system depending upon the marginal laws of $Y$. Hence our choice to handle
systems of the form
\eqref{general_equation}. Still the reader should observe that, in order to handle the more general case when the players interact through the joint law
of the state and of the control, it is necessary to address systems of the same type
as \eqref{general_equation} but with the convention that $[X_{t},Y_{t},Z_{t}]$ is understood as the joint law of the triple $(X_{t},Y_{t},Z_{t})$; as we just mentioned, we do not address this level of generality in the report.
In \eqref{general_equation}, the diffusion process $X$ is coupled to the diffusion process $(Y,Z)$ through the functions $B$ and $F$, representing the drift of the forward process and the driver of the backward process, respectively. The functions $B$ and $F$ are assumed to be Lipschitz in each of their arguments on $([0,T], \mathbb{R}^{3}, (\mathcal{P}_2(\mathbb{R}))^{3})$ and $G$ Lipschitz on $(\mathbb{R}, \mathcal{P}_2(\mathbb{R}))$, namely, for $(x,y,z,x',y',z') \in \mathbb{R}^6$ and $(\mu,\nu,\lambda,\mu',\nu',\lambda') \in (\mathcal{P}_2(\mathbb{R}))^6$, we have:
\begin{equation*}
\begin{split}
|B(t',x',y',z',\mu',\nu',\lambda') - B(t,x,y,z,\mu,\nu,\lambda)|\leq\ & K_B \bigl(|t'-t|+|x'-x|+|y'-y|+|z'-z| \\
&+ W_{2}(\mu',\mu)+W_{2}(\nu',\nu) + W_{2}(\lambda',\lambda)\bigr) \\
|F(t',x',y',z',\mu',\nu',\lambda') - F(t,x,y,z,\mu,\nu,\lambda)|\leq\ & K_F\bigl(|t'-t|+|x'-x|+|y'-y|+|z'-z| \\ &+W_{2}(\mu',\mu)+W_{2}(\nu',\nu)+W_{2}(\lambda',\lambda)\bigr) \\
|G(x',\mu') - G(x,\mu)| \leq\ & K_G\bigl(|x'-x|+W_{2}(\mu',\mu)\bigr).
\end{split}
\end{equation*}
The goal of this project is to study numerical methods for solving this general FBSDE system. At some point in the report, we will
relax the Lipschitz condition of $F$ in the variables $z$ and $\lambda$ and address an FBSDE with a quadratic driver $F$,
see our examples in Section \ref{Examples}.
\section{Algorithms}\label{Algorithms}
\hspace{5mm} We implement two algorithms for numerically solving the FBSDE system in Equation (\ref{general_equation}). In the first algorithm, we represent paths of the stochastic processes $(X,Y,Z)$ using a tree structure, where branches of the tree represent quantization of the Brownian motion. In the second algorithm, we no longer represent the paths of the process, but the marginal laws of the process. In this case, the law of the process is discretized on a fixed temporal and spatial grid.
In both cases, the representation serves as a basis for a Picard scheme for approaching the solution. For the first algorithm,
the Picard scheme is implemented in the form of a global Picard scheme on the whole process; for the second one, iterations are just performed on
the marginal laws of the process. Although Picard's method sounds very natural,
this strategy suffers, whatever the algorithm, from a serious drawback: Picard iterations for forward-backward systems are only expected to converge in small time. Basically, the value of $T$ for which the algorithm will converge depends on the
coupling strength of the system; we make this fact clear for the global method by showing how $T$ depends on the Lipschitz coefficients $K_B$, $K_F$ and $K_G$. In any case, bifurcations can be observed when $T$ is increased, or equivalently, when the coupling strength between the two equations is increased. For our convenience (since it can be costly to increase $T$), we will fix $T$ and explore the convergence of the algorithms as we vary the coupling strength.
This is a first step in our report: comparing how the two algorithms suffer from the coupling strength between
the forward and backward equations.
The second key feature of our report is to use, for both algorithms, a continuation in time, which allows us to extend the value of the coupling parameter for which the algorithms converge.
In the following sections, we detail our Picard approaches
for the two algorithms and the continuation in time method.
\subsection{Global Picard Iteration on a Small Time Interval}
\hspace{5mm} The main difficulty in numerically solving the FBSDE system is that not only are the forward component $X=(X_t)_{0 \leq t \leq T}$ and the backward component $(Y,Z)=(Y_t, Z_t)_{0 \leq t \leq T}$ coupled, but they also run in opposite directions. Thus, neither equation can be solved independently of the other, seemingly requiring us to manage both time directions simultaneously.
Several strategies are conceivable to sort out this issue. A first one is to make use of the \textit{decoupling field} of the system in order to work with one time direction instead of two (roughly speaking, the decoupling field is the value function $u$ in the weak approach and its derivative
$\nabla_{x} u$ in the Pontryagin one). We will investigate this method for our second algorithm;
its implementation is indeed pretty subtle in the mean field setting and it leads to the aforementioned
\textit{Picard method on the marginal laws}.
For our first algorithm, however, we limit ourselves to a \textit{brute force} approach. To decouple the equations, we propose a
\textit{global} Picard iteration scheme, whose definition is as follows. For the initial and terminal data of the problem ($\xi$ and $G$), we want to define a mapping $\Phi_{\xi,G}$ that will take the $(j-1)$-th Picard iterate and produce the $j$-th Picard iterate:
\begin{equation*}
\Phi_{\xi,G}: (X^{j-1},Y^{j-1},Z^{j-1},[X^{j-1},Y^{j-1},Z^{j-1}]) \mapsto (X^{j},Y^{j},Z^{j},[X^{j},Y^{j},Z^{j}]).
\end{equation*}
We define the decoupled Picard scheme $\Phi_{\xi,G}$ as the following:
\begin{enumerate}
\item First, solve
\begin{equation*}
\begin{split}
dX^{j}_t &= B(t,X^{j-1}_t,Y^{j-1}_t,Z^{j-1}_t,[X^{j-1}_t,Y^{j-1}_t,Z^{j-1}_t])dt + \sigma dW_t \\
X^j_0&=\xi \in L^2(\Omega,\mathcal{F}_0,\mathbb{P};\mathbb{R}), \\
\end{split}
\end{equation*}
for $X^j$ which gives us $[X^j]$.
\item Next, solve
\begin{equation*}
\begin{split}
dY^j_t &= -F(t,X^{j}_t,Y^{j-1}_t,Z^{j-1}_t,[X^{j}_t,Y^{j-1}_t,Z^{j-1}_t])dt + Z^{j}_t dW_t \\
Y^j_T &= G(X^{j}_T,[X^{j}_T]), \\
\end{split}
\end{equation*}
for $Y^j$ and $Z^j$ which gives us $[Y^j]$ and $[Z^j]$.
\item Return $(X^{j},Y^{j},Z^{j},[X^{j},Y^{j},Z^{j}])$.
\end{enumerate}
After initializing $(X^{0},Y^{0},Z^{0},[X^{0},Y^{0},Z^{0}])$, we can define a sequence by:
\begin{equation*}
(X^{j},Y^{j},Z^{j},[X^{j},Y^{j},Z^{j}])=\Phi_{\xi,G}(X^{j-1},Y^{j-1},Z^{j-1},[X^{j-1},Y^{j-1},Z^{j-1}]).
\end{equation*}
If the sequence $(X^j,Y^j,Z^j,[X^j,Y^j,Z^j])_{j=1,\dots}$ converges to some $(X,Y,Z, [X,Y,Z])$, then:
\begin{equation*}
(X,Y,Z,[X,Y,Z])=\Phi_{\xi,G}(X,Y,Z,[X,Y,Z]),
\end{equation*}
and thus, $(X,Y,Z)$ is a solution to the original FBSDE system in Equation (\ref{general_equation}).
Picard schemes such as this one are only guaranteed to converge for a small time horizon, $T$, depending on the Lipschitz coefficients of the functions $B$, $F$ and $G$ (and in fact not only the convergence of the Picard sequence but also the solvability of the equation
may fail on an arbitrary time interval). We illustrate this idea through the following example
(the reader may find the general case in
\cite{Delarue02}): Let the driver function $F = 0$ and define the common Lipschitz coefficient $K = \max (K_B, K_G)$. The FBSDE system becomes:
\begin{equation*}
\begin{split}
dX_t &= B(Y_t)dt + \sigma dW_t, \ X_0 = \xi \in \mathbb{R} \\
dY_t &= Z_t dW_t, \ Y_T = G(X_T, [X_T]).
\end{split}
\end{equation*}
We write the equation above in the integral form and take the expectation conditional to the filtration $\mathcal{F}_t$ in the backward equation:
\begin{equation*}
\begin{split}
&X_t = \xi + \int_0^t B(Y_s)ds + \sigma W_t \\
&Y_t = G(X_T, [X_T]) - \int_{t}^{T} Z_s dW_s, \quad \textrm{i.e.}, \ Y_t = \mathbb{E}(G(X_T, [X_T]) | \mathcal{F}_t).
\end{split}
\end{equation*}
Instead of one single $X$,
let us now consider two (initial) processes denoted by $\hat{X}$ and $\tilde{X}$. After one Picard iteration, we arrive at $\hat{X}'$ and $\tilde{X}'$. The Picard iteration is given by:
\begin{equation*}
\begin{split}
\hat{Y}_t &= \mathbb{E}(G(\hat{X}_T, [\hat{X}_T]) | \mathcal{F}_t) \\
\hat{X}'_t &= \xi + \int_0^t B(\hat{Y}_s)ds + \sigma W_t,
\end{split}
\end{equation*}
and similarly for $\tilde{X}'$.
From the forward component, we have the following estimate for the difference between $\hat{X}'$ and $\tilde{X}'$:
\begin{equation*}
\begin{split}
& |\hat{X}'_t - \tilde{X}'_t|^2 \leq \biggl|\int_0^t (B(\hat{Y}_s) - B(\tilde{Y}_s)) ds\biggr|^2 \leq t K^2 \int_0^t |\hat{Y}_s - \tilde{Y}_s|^2 ds,
\\
& \mathbb{E} \Bigl[ \sup_{0 \leq t \leq T} |\hat{X}'_t - \tilde{X}'_t|^2 \Bigr] \leq T^2 K^2 \mathbb{E}
\Bigl[
\sup_{0 \leq t \leq T} |\hat{Y}_t - \tilde{Y}_t|^2
\Bigr].
\end{split}
\end{equation*}
Then we write for the backward component:
\begin{equation*}
\begin{split}
\mathbb{E}\Bigl[ \sup_{0 \leq t \leq T} |\hat{Y}_t - \tilde{Y}_t|^2
\Bigr] &= \mathbb{E} \Bigl[ \sup_{0 \leq t \leq T} | \mathbb{E} (G(\hat{X}_T, [\hat{X}_T]) - G(\tilde{X}_T, [\tilde{X}_T]) | \mathcal{F}_t) |^2 \Bigr]
\\
&\leq 4 \mathbb{E}\bigl[ |G(\hat{X}_T, [\hat{X}_T]) - G(\tilde{X}_T, [\tilde{X}_T])| ^2 \bigr]
\\
&\leq 8 K^2 \mathbb{E} \Bigl[
|\hat{X}_T - \tilde{X}_T|^2 + W_{2}\bigl( [\hat{X}_T] , [\tilde{X}_T]\bigr)^2 \Bigr].
\end{split}
\end{equation*}
In the second-to-last inequality, we used Doob's martingale inequality for the martingale $(\mathbb{E} (G(\hat{X}_T, [\hat{X}_T]) - G(\tilde{X}_T, [\tilde{X}_T]) | \mathcal{F}_t))_{0 \leq t \leq T}$. In the last one, we used the fact that $
W_{2} ([\hat{X}_t], [\tilde{X}_t] )^2 \leq \mathbb{E} [ |\hat{X}_t - \tilde{X}_t|^2 ]$.
Combining the inequalities above for both forward and backward components, we obtain the following estimate:
\begin{equation*}
\begin{split}
\mathbb{E}\Bigl[ \sup_{0 \leq t \leq T} |\hat{X}'_t - \tilde{X}'_t|^2 \Bigr] \leq 16 T^2 K^4 \mathbb{E}
\bigl[ |\hat{X}_T - \tilde{X}_T|^2 \bigr] \leq 16 T^2 K^4 \mathbb{E}\Bigl[ \sup_{0 \leq t \leq T} |\hat{X}_t - \tilde{X}_t|^2
\Bigr].
\end{split}
\end{equation*}
And then,
\begin{equation*}
\begin{split}
\sup_{0 \leq t \leq T} W_{2}\bigl( [\hat{X}'_t], [\tilde{X}'_t] \bigr)^2 \leq \sup_{0 \leq t \leq T} \mathbb{E} \bigl[ |\hat{X}'_t - \tilde{X}'_t|^2 \bigr] \leq \mathbb{E} \Bigl[ \sup_{0 \leq t \leq T} |\hat{X}'_t - \tilde{X}'_t|^2
\Bigr].
\end{split}
\end{equation*}
Finally, when $16 T^2 K^4 < 1$, i.e. $T < 1/(4K^2)$, the mapping $\Phi_{\xi,G}$ realizes a contraction on the forward component and the Picard iteration defined above will converge to the fixed point, providing a solution to the original FBSDE. Thus, we have illustrated that Picard schemes are only expected to converge in small time or for small coupling (i.e. smaller Lipschitz coefficients).
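The contraction threshold can be made concrete with a minimal numerical sketch (our own toy example, not part of the scheme above): take $\sigma=0$ and $F=0$, with $B(y)=Ky$ and $G(x)=Kx$, so that everything is deterministic and one Picard sweep maps $X_T \mapsto \xi + TK^2 X_T$, a contraction precisely when $TK^2<1$. The probabilistic estimate above is cruder (it yields $16T^2K^4<1$), but the mechanism is the same: small $T$ or small coupling.

```python
def picard_toy(xi, K, T, n_iter):
    # Deterministic toy FBSDE (sigma = 0, F = 0) with B(y) = K*y, G(x) = K*x.
    # Y_t is constant equal to G(X_T), so one Picard sweep maps
    # X_T  ->  xi + T * K^2 * X_T.
    xT = xi
    for _ in range(n_iter):
        yT = K * xT               # Y_t = G(X_T) for all t, since F = 0
        xT = xi + T * K * yT      # X'_T = xi + integral_0^T B(Y_s) ds
    return xT

# T*K^2 = 0.5 < 1: converges to the fixed point xi / (1 - T*K^2) = 2.
converged = picard_toy(1.0, 1.0, 0.5, 60)
# T*K^2 = 2 > 1: the iterates blow up.
diverged = picard_toy(1.0, 1.0, 2.0, 30)
```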
Keeping in mind that we eventually want to describe numerical schemes, we define a solver \textit{picard}, that will implement a finite number, $J_p \in \mathbb{Z}^+$, of Picard iterations. Thus, we define:
\vspace{5pt}
\noindent \hspace{5pt} $\bullet$ \textit{picard}$(\xi,G)$:
\begin{enumerate}
\item Initialize $X_{t}=\xi$, $Y_{t}=0$, and $Z_{t}=0, \forall t\in[0,T]$.
\item For $1\leq j \leq J_p$
\begin{enumerate}
\item[] $(X,Y,Z,[X,Y,Z]) =\Phi_{\xi,G}(X,Y,Z,[X,Y,Z])$
\end{enumerate}
\item Return $(X,Y,Z,[X,Y,Z])$.
\end{enumerate}
\subsection{Continuation in Time of the Global Method for Arbitrary Interval/Coupling}
\hspace{5mm}
Of course, we would like the mapping $\Phi_{\xi,G}$ to realize a contraction to make sure that the Picard iteration converges. As we just
explained, this is the case when the forward and backward processes have a small enough coupling strength or for small time horizon $T$. But in practice, this is not always the case: It may happen that the FBSDE system is uniquely solvable, but that we are unable to prove that the Picard sequence converges. In order to overcome this issue, we follow the approach introduced in Chassagneux, Crisan, Delarue \cite{Solver}. Basically, the point is to divide the time interval into smaller intervals, called \textit{levels}, and to apply a Picard solver recursively between the levels.
To define the levels, we fix a time mesh: $\{0=T_0 < T_1, \dots,T_k,\dots T_{N_\ell-1}< T_{N_\ell}=T\}$. We would like to use the \textit{picard} solver introduced in the previous section to apply the Picard iteration on a given level. Notice that for an arbitrary level $[T_k,T_{k+1}]$, we do not know the initial condition for $X_{T_k}$ or the terminal condition for $Y_{T_{k+1}}$. Thus, our current approximation of these values will be inputs to the \textit{picard} solver in place of $\xi$ and $G$. We would also like to modify
the solver \textit{picard} so that it also takes the current estimate of $(X,[X])$, sparing us from starting from scratch every time we wish to use \textit{picard}. Thus, we define for level $k$:
\vspace{5pt}
\noindent \hspace{5pt} $\bullet$ \textit{picard}$[k](X,Y_{T_{k+1}})$:
\begin{enumerate}
\item Initialize $Y_{t}=0$, and $Z_{t}=0, \forall t\in[T_k,T_{k+1})$.
\item For $1\leq j \leq J_p$
\begin{enumerate}
\item[] $(X,Y,Z,[X,Y,Z]) =\Phi_{X_{T_k},Y_{T_{k+1}}}(X,Y,Z,[X,Y,Z])$
\end{enumerate}
\item Return $(X,Y,Z,[X,Y,Z])$,
\end{enumerate}
where $X=(X_{t})_{T_k \leq t\leq T_{k+1}}$, and similarly for $Y$ and $Z$ and their laws. Note that $\Phi_{X_{T_k},Y_{T_{k+1}}}$ is the same as $\Phi_{\xi,G}$ defined earlier except the time horizon is $[T_k,T_{k+1}]$ instead of $[0,T]$ and the initial and terminal conditions are given by $X_{T_k}$ and $Y_{T_{k+1}}$ instead of $\xi$ and $G$, respectively.
In particular, pay attention that we no longer consider the terminal condition in the form of a mapping (like $G$)
but in the form of a random variable (like $Y_{T_{k+1}}$). Implicitly, this requires storing, from one step to another, the full random variable $Y_{T_{k+1}}$; hence our choice below to use a tree.
Now that we have a solver \textit{picard} which will implement the Picard iteration for a given level, next, we want to define a global solver to apply a continuation in time. The global solver, called \textit{solver}, is recursively defined as follows for some $J_s \in \mathbb{Z}^+$. For a given level $k$, define:
\vspace{5pt}
\noindent \hspace{5pt} $\bullet$ \textit{solver}$[k](X_{T_k},[X_{T_k}]):$
\begin{enumerate}
\item Initialize $X_t=X_{T_k}$, $Y_t=0$ , and $Z_t=0$, $\forall t \in [T_k,T_{k+1}]$.
\item For $1\leq j \leq J_s$
\begin{enumerate}
\item $(Y_{T_{k+1}},[Y_{T_{k+1}}])=$\textit{solver}$[k+1](X_{T_{k+1}},[X_{T_{k+1}}])$
\item $(X,Y,Z,[X,Y,Z])=$\textit{picard}$[k](X,Y_{T_{k+1}})$
\end{enumerate}
\item Return $(Y_{T_k},[Y_{T_k}])$.
\end{enumerate}
As before, it is important to notice that the entry $X_{T_{k}}$ in \textit{solver}
is a random variable; $[X_{T_{k}}]$ is its law. Having the two in our notations is a bit redundant, but
we feel it is more transparent for the reader. The break condition of the recursion is given by the terminal condition:
\begin{center}
\textit{solver}$[N_\ell](X_{T_{N_\ell}},[X_{T_{N_\ell}}])=(Y_{T_{N_\ell}},[Y_{T_{N_\ell}}])=(G(X_{T_{N_\ell}},[X_{T_{N_\ell}}]),\mathcal{L}(G(X_{T_{N_\ell}},[X_{T_{N_\ell}}])))$.
\end{center}
The goal of the continuation in time is for a Picard iteration scheme to converge even for large coupling parameters or large time horizon. We will see in Section \ref{Examples} that the continuation in time successfully achieves this goal for our benchmark examples.
We refer to
\cite{Solver} for its theoretical analysis.
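To illustrate the recursive structure of \textit{solver}, the following minimal Python sketch (our own illustration) runs the continuation in time on a deterministic toy problem: $\sigma=0$, $F=0$, $B(y)=-y$, $G(x)=x$, for which $Y$ is constant in time and $Y_0 = X_T = \xi/(1+T)$. The per-level \textit{picard} call is reduced here to a single Euler step, since $Y$ is constant on each level.

```python
def make_solver(picard_level, G, n_levels, J_s):
    """Recursive continuation in time (a sketch of solver[k], not the full
    stochastic version): each level re-solves all later levels J_s times,
    alternating steps (a) and (b) of the definition in the text."""
    def solver(k, x_k):
        if k == n_levels:          # break condition: terminal level
            return G(x_k)
        x_next, y_k = x_k, 0.0     # crude initialization on [T_k, T_{k+1}]
        for _ in range(J_s):
            y_next = solver(k + 1, x_next)              # step (a)
            x_next, y_k = picard_level(k, x_k, y_next)  # step (b)
        return y_k
    return solver

# Deterministic toy problem (sigma = 0, F = 0): dX = -Y dt, Y_T = X_T on [0, T].
# Exact solution: Y constant in time, Y_0 = X_T = xi / (1 + T).
T, n_levels = 1.0, 4
h = T / n_levels
def picard_level(k, x_k, y_next):
    # F = 0 makes Y constant on the level; one Euler step for X.
    return x_k + h * (-y_next), y_next

y0 = make_solver(picard_level, lambda x: x, n_levels, J_s=20)(0, 1.0)
# With xi = 1 and T = 1, y0 should approach 1 / (1 + T) = 0.5.
```

Note that this naive recursion makes of the order of $J_s^{N_\ell}$ calls to the lowest level; the sketch is meant only to show the control flow of the continuation in time.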
Thus far, we have described our Picard approach and a continuation in time method. Now we need to provide a scheme for discretizing the Picard iteration mapping $\Phi_{\xi,G}$. In this report, we implement a tree algorithm;
we also give a variant of it, in the form of a grid algorithm. For both algorithms, we consider the uniform time mesh with time step $h = T/N_t > 0$ with $N_\ell<N_t \in \mathbb{Z^+}$ and $ t_i=ih, i=0,...,N_t$. For convenience, we will assume the coarse time mesh used to define the levels, $\{0=T_0 < T_1, \dots,T_k,\dots T_{N_\ell-1}< T_{N_\ell}=T\}$, is a subset of the fine time mesh $\{0=t_0 < t_1, \dots, t_{N_t-1} <t_{N_t}=T\}$.
\subsection{Tree Algorithm for the Global Method}
\hspace{5mm} The first implementation of the Picard iteration, $\Phi_{\xi,G}$, is the tree algorithm presented in Chassagneux, Crisan, Delarue \cite{Solver}. We now provide a brief presentation of this algorithm.
\subsubsection{Time Discretization}
\hspace{5mm} Our first step in developing a discretization for the Picard iteration $\Phi_{\xi,G}$ is to discretize the problem in the time domain. We use the decoupled scheme derived in Chassagneux, Crisan, Delarue \cite{Solver}, and repeated below for convenience. As noted above, we consider the uniform time mesh with time step $h = T/N_t > 0$ with $N_\ell<N_t \in \mathbb{Z^+}$ and $ t_i=ih, i=0,...,N_t$.
Step 1) in defining $\Phi_{\xi,G}$ requires solving the forward equation for $X^j$. We use the classical Euler scheme:
\begin{equation*}
\begin{split}
X_{t_{i+1}}^j &= X_{t_{i}}^j + h \, B(t_i,X^{j-1}_{t_i},Y^{j-1}_{t_i},Z^{j-1}_{t_i},[X^{j-1}_{t_i},Y^{j-1}_{t_i},Z^{j-1}_{t_i}]) + \sigma \Delta W_i , \ X^j_0 = \xi.
\end{split}
\end{equation*}
Note that when we calculate $X_{t_{i+1}}^j$, the value of $X_{t_{i}}^j$ and its law is known and could be substituted for $X_{t_{i}}^{j-1}$ and its law in the drift function, $B$.
Step 2) in defining $\Phi_{\xi,G}$ requires solving the backward equation for $Y^j$ and $Z^j$. We derive the discrete-time scheme to approximate the backward component, see e.g. \cite{bouchard2004discrete}. The $Y^j$ component at time $t_i$ in the backward scheme is obtained by taking the expectation conditional to $\mathcal{F}_{t_i}$, denoted as $\mathbb{E}_{t_i}$, of the backward equation between $t_i$ and $t_{i+1}$. Let $\Delta W_i = W_{t_{i+1}} - W_{t_{i}}$ denote the forward Brownian increment between $t_{i}$ and $t_{i+1}$. The driver function $F$ is approximated by its value at time $t_i$ and we use the fact that $F$ at $t_i$ and $Z_{t_i}$ are $\mathcal{F}_{t_i}$-measurable, and $\mathbb{E}_{t_i}(\Delta W_i) =\mathbb{E}_{t_i}(W_{t_{i+1}}-W_{t_{i}}) = 0$:
\begin{equation*}
\begin{split}
Y^j_{t_{i}} &= Y^j_{t_{i+1}} + \int_{t_i}^{t_{i+1}} F(t,X^{j}_t,Y^{j-1}_t,Z^{j-1}_t,[X^{j}_t,Y^{j-1}_t,Z^{j-1}_t])dt - \int_{t_i}^{t_{i+1}}Z^{j}_t dW_t \\
& \approx Y^{j}_{t_{i+1}} + h \ F(t_i,X^{j}_{t_i},Y^{j-1}_{t_i},Z^{j-1}_{t_i},[X^{j}_{t_i},Y^{j-1}_{t_i},Z^{j-1}_{t_i}]) - Z^{j}_{t_i} \Delta W_{i} \\
Y^j_{t_i} &= \mathbb{E}_{t_i} (Y^j_{t_{i+1}}) + h \ F(t_i,X^{j}_{t_{i}},Y^{j-1}_{t_{i}},Z^{j-1}_{t_{i}},[X^{j}_{t_{i}},Y^{j-1}_{t_{i}},Z^{j-1}_{t_{i}}]) \\
Y^j_{T} &= G(X^{j}_T,[X^{j}_T]). \\
\end{split}
\end{equation*}
As for the $Z^j$ component, we multiply the approximation of $Y^j_{t_i}$ by the Brownian increment $\Delta W_{i}$, take the conditional expectation $\mathbb{E}_{t_i}$, and use $\mathbb{E}_{t_i}((\Delta W_i)^2) = h$. By noticing that our scheme will never make use of $Z^j_T$, we can simply set the terminal condition for the $Z^j$ component to $0$.
\begin{equation*}
\begin{split}
Z^j_{t_i} (\Delta W_{i})^2 &\approx Y^j_{t_{i+1}} \Delta W_{i} + (-Y^j_{t_{i}} + h \ F(t_i,X^{j}_{t_{i}},Y^{j-1}_{t_{i}},Z^{j-1}_{t_{i}},[X^{j}_{t_{i}},Y^{j-1}_{t_{i}},Z^{j-1}_{t_{i}}])) \Delta W_{i} \\
Z_{t_i}^j &= h^{-1} \mathbb{E}_{t_i}(Y_{t_{i+1}}^j \Delta W_{i}), \ Z_T^j = 0.
\end{split}
\end{equation*}
Putting this together, the time-discretized decoupled forward-backward scheme for Picard iteration of our general FBSDE system (\ref{general_equation}) is the following:
\begin{equation}
\label{scheme0}
\left\{
\begin{aligned}
& X_{t_{i+1}}^j = X_{t_{i}}^j + h \ B(t_i,X^j_{t_i},Y^{j-1}_{t_i},Z^{j-1}_{t_i},[X^j_{t_i},Y^{j-1}_{t_i},Z^{j-1}_{t_i}]) + \sigma \Delta W_i \\
& X^j_0 = \xi, \\
& Y^j_{t_i} = \mathbb{E}_{t_i} (Y^j_{t_{i+1}}) + h \ F(t_i,X^{j}_{t_{i}},Y^{j-1}_{t_{i}},Z^{j-1}_{t_{i}},[X^{j}_{t_{i}},Y^{j-1}_{t_{i}},Z^{j-1}_{t_{i}}]) \\
& Y^j_{T} = G(X^{j}_T,[X^{j}_T])\\
& Z_{t_i}^j = h^{-1} \mathbb{E}_{t_i}(Y_{t_{i+1}}^j \Delta W_i) \\
& Z^j_T =0.
\end{aligned}
\right.
\end{equation}
Other variants of this forward-backward scheme are possible. For example, in the forward scheme we could change $j$ to $j-1$ and $t_{i}$ to $t_{i+1}$ on the right hand side. It is easy to observe that the forward-backward system is decoupled because of the lagged Picard indices $j-1$ and $j$. Thus, given $(X^{j-1},Y^{j-1},Z^{j-1},[X^{j-1},Y^{j-1},Z^{j-1}])$ where $X^{j-1}=(X^{j-1}_{t_i})_{0 \leq i \leq N_t}$, and similarly for $Y$ and $Z$, we can solve the forward scheme autonomously and then the backward scheme to obtain $(X^{j},Y^{j},Z^{j},[X^{j},Y^{j},Z^{j}])$. We have not yet fully provided a discrete scheme for $\Phi_{\xi,G}$, however, because for a given $t_i$, we have not discretized $(X_{t_i},Y_{t_i},Z_{t_i})$. This is the goal of the next section.
\subsubsection{Spatial Discretization via Tree Structure}
\hspace{5mm} The forward-backward decoupled scheme (\ref{scheme0}) above looks quite simple and explicit. However, it still presents some difficulties for the numerical computation. Firstly, it is difficult to compute the conditional expectation in the backward scheme. Secondly, it is nontrivial to compute the law of $X_{t_{i+1}}$ forward in time. Even if there were no drift, the computation would involve the convolution of the law of $X_{t_{i}}$ and the Gaussian law of the Brownian increment. Ultimately, we will need a spatial discretization.
The approach of this algorithm is to approximate the Brownian increments using a simple binomial approximation: $\Delta W_i = \pm \sqrt{h}$ with probability $1/2$. This gives rise to a binomial tree for the forward scheme. Each node on the tree at depth $i$ represents a value of $X_{t_i}$, and has two children nodes representing the two possible values of $X_{t_{i+1}}$ (the ``up $\uparrow$" and the ``down $\downarrow$" value), conditioned on the value of $X_{t_i}$. The two values are computed as follows:
\begin{equation}
X^j_{t_{i+1}}(\uparrow \downarrow) = X_{t_{i}}^j + h \ B(t_i,X^j_{t_i},Y^{j-1}_{t_i},Z^{j-1}_{t_i},[X^j_{t_i},Y^{j-1}_{t_i},Z^{j-1}_{t_i}]) \pm \sigma \sqrt{h}.
\label{binEuler}
\end{equation}
Suppose that we use $M$ points $x_1,..., x_{M}$ for the approximation of the law $\xi$ of the forward process at the initial time, i.e. $[X_0] = \xi \approx \sum_{k=1}^M p^0_k \delta_{x_k}(\cdot)$. Then we have $M$ parallel binomial trees at each Picard iteration. For Picard iterate $j$ and time $t_i$, the number of nodes at depth $i$ is $M \times 2^i$ with values of $(X^j_{t_i},Y^j_{t_i},Z^j_{t_i})$ saved on the nodes of the tree at depth $i$. The marginal law of each process at time $t_i$ can be determined by looking at all the values on the nodes at depth $i$. The backward scheme can be easily computed on the binomial tree. At the last time step, $T=t_{N_t}$, we have $Y^j_{T} = G(X^j_{T},[X^j_{T}])$ for each of the $M \times 2^{N_t}$ nodes. The conditional expectation in the backward scheme at $t_i$ is simply the average of the ``up" and ``down" branches at $t_{i+1}$.
To initialize the $j=0$ Picard iterate as in the definition of the solver \textit{picard}, we want to set $X_{t_i}=\xi, \forall i \in \{0, \dots, N_t\}$. This amounts to taking each initial value $x_k$ and initializing its entire tree to this value, meaning that $X_{t_i}^0=x_k$ for all nodes at depth $i$ and for all $i\in \{1,\dots,N_t\}$ of the $k$-th binomial tree. We then begin the Picard iteration by applying the mapping $\Phi_{\xi,G}$ as detailed above. Using the binomial tree and approximation of Brownian increments, the forward-backward decoupled scheme becomes fully implementable.
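One forward/backward pass on the binomial tree can be sketched in a few lines of Python (our own illustration, with $M=1$, and with $B$ and $F$ depending only on $(t,x)$, so that no lagged Picard iterate or mean-field argument is needed; the general case adds those arguments exactly as in the scheme above):

```python
import math

def tree_pass(x0, sigma, T, Nt, B, F, G):
    """One forward/backward sweep on a binomial tree (Delta W_i = +/- sqrt(h),
    each with probability 1/2). B and F take (t, x) only in this sketch.
    Returns (Y_0, Z_0) at the root."""
    h = T / Nt
    sqh = math.sqrt(h)
    # Forward Euler: level i holds 2**i node values; node n has children 2n, 2n+1.
    levels = [[x0]]
    for i in range(Nt):
        nxt = []
        for x in levels[i]:
            drift = x + h * B(i * h, x)
            nxt += [drift + sigma * sqh, drift - sigma * sqh]
        levels.append(nxt)
    # Backward sweep: Y_T = G(X_T); the conditional expectation at a node is
    # the average of its "up" and "down" children, and
    # Z_{t_i} = h^{-1} E_{t_i}[Y_{t_{i+1}} Delta W_i] = (Y_up - Y_down)/(2 sqrt(h)).
    Y = [G(x) for x in levels[Nt]]
    for i in range(Nt - 1, -1, -1):
        Z = [(Y[2 * n] - Y[2 * n + 1]) / (2.0 * sqh) for n in range(2 ** i)]
        Y = [0.5 * (Y[2 * n] + Y[2 * n + 1]) + h * F(i * h, x)
             for n, x in enumerate(levels[i])]
    return Y[0], Z[0]

# Sanity check: with B = F = 0 and G(x) = x, Y_t is the martingale E[X_T | F_t],
# so Y_0 = x0, and Z_0 recovers the volatility sigma.
y0, z0 = tree_pass(1.0, 0.3, 1.0, 8,
                   lambda t, x: 0.0, lambda t, x: 0.0, lambda x: x)
```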
\subsection{Picard Iteration on the Marginal
Laws:
a Grid Algorithm}
\hspace{5mm} The complexity of the tree algorithm is exponential with respect to the number of time steps $N_t$, since the size of the binomial tree is of order $2^{N_t}$. The exponential complexity becomes problematic and makes continuation in time much slower when we deal with large time horizons. In order to reduce the size of the tree, a natural idea is to make some ``recombination'' of the binomial tree. But since the drift function $B$ depends on the value of the process, the two branches ``up-down'' and ``down-up'' from the same node at time $t_i$ will not coincide at time $t_{i+2}$, in general. Instead of recombination, we may fix a spatial grid of a controllable size that the binomial tree can be projected onto, in order to avoid exponential complexity. This will be
the first ingredient of our grid algorithm, which is the main novelty in our report.
Inspired by the paper of Delarue and Menozzi \cite{Grid}, where the authors used a spatial grid for the approximation of FBSDEs without mean-field interaction, we make intensive use of the notion of \textit{decoupling field}, which is the second key ingredient of our strategy. Indeed, using the representation result in Proposition 2.2 in \cite{PDEforUV}, we know that there exist deterministic feedback functions $(u,v): [0,T] \cross \mathbb{R} \cross \mathcal{P}_2(\mathbb{R}) \rightarrow \mathbb{R}^2$ with $u$ a solution to a nonlinear PDE (on the space of measures) such that for a solution $(X,Y,Z)$ to the general FBSDE system in Equation (\ref{general_equation}):
\begin{equation*}
Y_{t}=u({t},X_{t},[X_{t}]) \quad \text{ and } \quad
Z_{t}=v({t},X_{t},[X_{t}]).
\end{equation*}
Generally speaking, $u$ is called the decoupling field of (\ref{general_equation}). Here comes the main observation: The time marginals of the solution to the general FBSDE system in Equation (\ref{general_equation}), $([X_t,Y_t,Z_t])_{0\leq t \leq T}$, can be equivalently characterized by $(\mu_t,u(t,\cdot),v(t,\cdot))_{0 \leq t \leq T}$ where $[X_t]=\mu_t$. As in \cite{Grid}, our strategy is to thus approximate $(u,v)$ instead of $(X,Y,Z)$, but this is not so easy as
$u$ and $v$ are defined on a space of infinite dimension (because the mean field component lives in
${\mathcal P}({\mathbb R})$). In order to overcome this difficulty, we propose to freeze the mean field argument in
$(u,v)$. This allows us to regard $u$ and $v$ as functions of a finite-dimensional variable and thus to approximate both
along the underlying spatial grid. Once an approximation of $u$ and $v$ has been computed for the given proxy of the marginal laws,
we can update the value of this proxy by using a Picard method. Hence the name of this subsection.
So, as opposed to the tree algorithm, we will no longer keep track of the pathwise laws of the processes. Instead, we will only compute the marginal laws at each time step of the time mesh by means of a Picard iteration.
\subsubsection{Picard Iteration without Grid}
We first give the inputs and outputs of our new Picard mapping without any grid approximation:
\begin{equation*}
\Psi_{[\xi],G}:(\mu^{j-1}_{t},u^{j-1}(t,\cdot),v^{j-1}(t,\cdot))_{0 \leq t \leq T} \mapsto (\mu^{j}_{t},u^{j}(t,\cdot),v^{j}(t,\cdot))_{0 \leq t \leq T},
\end{equation*}
with $\mu_0^{j-1}=\mu_0^{j}=[\xi]$. Denoting by $\varphi\sharp\nu$ the push-forward of the measure $\nu$ by the function $\varphi$, $\Psi_{[\xi],G}((\mu^{j-1}_{t},u^{j-1}(t,\cdot),v^{j-1}(t,\cdot))_{0 \leq t \leq T})$ is defined by:
\begin{enumerate}
\item Solve the following SDE for $X^j$:
\begin{equation*}
\begin{split}
dX^{j}_t &= B(t,X^{j}_t,u^{j-1}(t,X^{j}_t),v^{j-1}(t,X^{j}_t),\mu^{j-1}_t,(u^{j-1}(t,\cdot),v^{j-1}(t,\cdot))\sharp \mu^{j-1}_t)dt + \sigma dW_t \\
X^j_0&=\xi \in L^2(\Omega,\mathcal{F}_0,\mathbb{P};\mathbb{R}), \\
\end{split}
\end{equation*}
Set $\mu^j_t:=[X^j_t]$.
\item Next, find $(u^{j},v^j)(\cdot,\cdot) :[0,T] \cross \mathbb{R} \rightarrow \mathbb{R}^2$ through $(u^{j},v^j)(t,X^j_t):=(Y^j_t,Z^j_t)$ by solving:
\begin{equation*}
\begin{split}
d Y^j_t &= -F(t,X^{j}_t,Y^{j}_t,Z^{j}_t,\mu^j_t,(u^{j-1}(t,\cdot),v^{j-1}(t,\cdot))\sharp \mu^{j}_t)dt
+ Z^j_t dW_t \;, \\
Y^j_T &= G(X^{j}_T,\mu^j_T)\;. \\
\end{split}
\end{equation*}
\item Return $((\mu^{j}_{t},u^{j}(t,\cdot),v^{j}(t,\cdot))_{0 \leq t \leq T})$.
\end{enumerate}
Given $\Psi_{[\xi],G}$, we can go through the construction
of the global Picard method and define similar routines \textit{picard} and \textit{solver}
for our new Picard approach
by formally replacing
$\Phi_{\xi,G}$ by $\Psi_{[\xi],G}$. The various inputs and outputs
in the new routines should be clear.
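In outline, the shared fixed-point loop can be sketched in Python as follows; the function name, the Euclidean stopping rule, and the tolerance are illustrative choices of ours, not the concrete \textit{picard} routine of the paper:

```python
import numpy as np

def picard_iterate(psi, iterate0, tol=1e-8, max_iter=200):
    """Apply a Picard mapping psi until successive iterates are tol-close.

    psi      -- callable implementing one application of the mapping
    iterate0 -- initial iterate (e.g. mu^0 = mu_0 and u^0 = v^0 = 0)
    """
    prev = np.asarray(iterate0, dtype=float)
    for _ in range(max_iter):
        curr = np.asarray(psi(prev), dtype=float)
        if np.linalg.norm(curr - prev) < tol:   # crude stopping rule
            return curr
        prev = curr
    raise RuntimeError("Picard iteration did not converge")
```

For a contraction such as $x \mapsto x/2 + 1$, the loop converges to the fixed point $2$; in our setting the iterate gathers the discretized $(\mu, u, v)$.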
The next step to make the whole algorithm entirely tractable is to show how to compute $\mu_{t}^j$,
$u^j$, and $v^j$ explicitly in the mapping $\Psi_{[\xi],G}$. As for $\Phi_{\xi,G}$, this mapping will be defined explicitly for a discretized scheme on a temporal and spatial grid in the next two sections. As in the tree algorithm, we consider the uniform time mesh with time step $h = T/N_t > 0$, with $N_\ell<N_t \in \mathbb{Z}^+$ and $t_i=ih$, $i=0,...,N_t$.
\subsubsection{Grid Approximation of the Forward Component and its Law}
\hspace{5mm} We begin by fixing a spatial discretization grid. This grid could in principle be defined differently for each time step $t_i$, but for simplicity, we consider a homogeneous grid $\Gamma$, fixed for all time steps $t_i, \ i \in \{ 0, ..., N_t \}$, with constant spatial step size $\Delta x$. Let $\Pi$ be the projection function onto the grid
$$ \Gamma = \{ x_k = x_1 + (k-1) \Delta x, \ k = 1, ..., N_x \}. $$
Precisely, $\Pi$ is given by
\begin{align*}
\Pi(x) = x_k \ \text{ if } x \in [x_k-\Delta x/2,\,x_k+\Delta x/2)\,, \quad
\Pi(x) = x_1 \ \text{ if } x < x_1-\Delta x/2\,,\quad
\Pi(x) = x_{N_x} \ \text{ if } x \ge x_{N_x}+\Delta x/2\,.
\end{align*}
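As an illustration, the projection $\Pi$ onto a uniform grid takes only a few lines; the function names are ours, and the half-open cell convention follows the definition above:

```python
import numpy as np

def make_grid(x1, dx, nx):
    """Uniform grid Gamma = {x_k = x1 + (k-1)*dx, k = 1, ..., nx}."""
    return x1 + dx * np.arange(nx)

def project(x, x1, dx, nx):
    """Projection Pi: nearest grid node, with points outside the grid clamped.

    Cells are half-open, [x_k - dx/2, x_k + dx/2), matching the definition of Pi.
    """
    k = np.floor((np.asarray(x, dtype=float) - x1) / dx + 0.5).astype(int)
    k = np.clip(k, 0, nx - 1)   # clamp points that fall outside the grid
    return x1 + dx * k
```

For instance, with $x_1=0$, $\Delta x = 0.5$ and $N_x = 5$, the point $0.26$ projects to the node $0.5$, while points below $-0.25$ and above $2.25$ are clamped to the end nodes.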
The initial law $[\xi]$ of the forward process is approximated as $[\xi] \approx \mu_0(\cdot)$ on the grid $\Gamma$ with $N_x$ points. Recall that in the tree algorithm, we cannot choose $M$, the number of points for the approximation of the initial law, to be too large, as we need $M$ parallel binomial trees. Since the grid algorithm avoids the exponential complexity of the tree algorithm, we are able to choose $N_x$ to be much larger than $M$; in turn, the approximation of the initial law is more accurate in the grid algorithm than in the tree algorithm.
We can initialize the Picard iterate $(\mu^0_i,u^0_i,v^0_i)_{0 \leq i \leq N_t}$ as before, by letting $\mu^0_{i}=\mu_0$ and $(u^0_i,v^0_i)=(0,0)$ for all $i \leq N_t$.
We follow the definition of Step 1) for $\Psi_{[\xi],G}$,
but we use the Euler scheme for the forward process between $t_i$ and $t_{i+1}$:
\begin{equation} \label{eq euler scheme}
X_{t_{i+1}}^{j} = X_{t_{i}}^{j} + h \ B(X_{t_i}^{j}, (u^{j-1}_i,v^{j-1}_i)(X_{t_{i}}^{j}), \mu^{j-1}_i,
(u^{j-1}_i,v^{j-1}_i)\sharp\mu^{j-1}_i ) + \sigma \Delta W_{i}.
\end{equation}
Suppose the $j$-th Picard iterate at time $t_i$ is given by $\mu^j_i$, with $\mu^j_0 = \mu_0$:
\begin{equation*}
\mu^j_i(\cdot) = \sum_{k = 1}^{N_x} p^j_{i,k} \delta_{x_k}(\cdot ), \ p^j_{i,k} \geq 0 \ \forall k \in \{ 1, ..., N_x \} \text{ and } \sum_{k = 1}^{N_x} p^j_{i,k} = 1.
\end{equation*}
The law of $X^j_{t_i}$ in the Euler scheme of the forward process is $\mu^j_i$. We would then like to define $\mu^j_{i+1}$ as the law of $X_{t_{i+1}}^{j}$ in the Euler scheme above, but this quantity may not belong to the grid. The natural idea is then to replace the right-hand side of \eqref{eq euler scheme} by its projection onto the grid $\Gamma$:
\begin{equation}
\label{euler_fwd}
\begin{split}
X_{t_{i+1}}^{j} &= \Pi \left(X_{t_{i}}^{j} + h \ B(X_{t_i}^{j}, (u^{j-1}_i,v^{j-1}_i)(X_{t_{i}}^{j}), \mu^{j-1}_i,
(u^{j-1}_i,v^{j-1}_i)\sharp\mu^{j-1}_i ) + \sigma \Delta W_{i} \right) \\
[X_{t_{i}}^{j}] &= \mu^j_i(\cdot) \text{ and } \mu^j_{i+1}(\cdot) = [X_{t_{i+1}}^{j}].
\end{split}
\end{equation}
The law of $X_{t_{i+1}}^{j}$ is the convolution of $\mu^j_{i}$ with the transition kernel $q^j(t_i, t_{i+1}; x_k, x_n)$, where $x_k, x_n$ are two points of the grid $\Gamma$ and $k,n \in \{ 1,..., N_x \}$. The transition probability is the conditional probability that the process $X^j$, starting at time $t_i$ from the point $x_k$, arrives at the point $x_n$ at time $t_{i+1}$, i.e.
\begin{equation*}
q^j(t_i, t_{i+1}; x_k, x_n) = \mathbb{P} (X^{j}_{t_{i+1}} = x_n | X_{t_{i}}^{j} = x_k).
\end{equation*}
The discretized law $\mu^j_{i+1}$ is then written as:
\begin{equation*}
\begin{split}
\mu^j_{i+1}(\cdot) = (\mu_i * q^j(t_{i}, t_{i+1}))(\cdot) &= \sum_{n = 1}^{N_x} p^{j}_{i+1,n} \delta_{x_n}(\cdot) \\
p^{j}_{i+1,n} = \sum_{k=1}^{N_x} \Bigl( p^{j}_{i,k} \times q^j(t_i, t_{i+1}; x_k, x_n)
\Bigr)
&= \sum_{k=1}^{N_x} \Bigl( p^{j}_{i,k} \times \mathbb{P} (X^{j}_{t_{i+1}} = x_n | X_{t_{i}}^{j} = x_k) \Bigr).\\
\end{split}
\end{equation*}
It is worth noticing that if we did not take the projection when computing $X_{t_{i+1}}^{j}$ in the scheme (\ref{euler_fwd}), then its law would be given by the convolution of the law $\mu^j_{i}$ with the Gaussian transition density $\bar{q}^j(t_i, t_{i+1}; x_k, y)$ associated with the Euler scheme. The kernel $q^j$ and the density $\bar{q}^j$ are related as follows:
\begin{equation*}
\mathbb{P} (X^{j}_{t_{i+1}} = x_n | X_{t_{i}}^{j} = x_k) = q^j(t_i, t_{i+1}; x_k, x_n) = \int_{\beta(x_n, \Delta x/2)} \bar{q}^j(t_i, t_{i+1}; x_k, y) dy.
\end{equation*}
In fact, for a more tractable implementation of the forward scheme, we use the binomial approximation for the Brownian increments (\ref{binEuler}) introduced in the previous section. Note that quantization with more points can be easily applied. In this binomial case, with $(\uparrow)/(\downarrow)$ representing the ``up'' and ``down'' branches, respectively, the transition probabilities on the grid can be easily computed:
\begin{equation*}
\begin{split}
& \mathbb{P} (X^{j}_{t_{i+1}} = x_n | X_{t_{i}}^{j} = x_k) \\
&= \frac{1}{2} \left(\boldsymbol{1} (X^{j}_{t_{i+1}}(\uparrow) = x_n | X_{t_{i}}^{j} = x_k) + \boldsymbol{1}( X^{j}_{t_{i+1}}(\downarrow) = x_n | X_{t_{i}}^{j} = x_k )\right).
\end{split}
\end{equation*}
Then we can write $\mu^j_{i+1}$ by computing the probabilities:
\begin{equation*}
\label{fwd_prob}
p^j_{i+1,n} = \sum_{k=1}^{N_x} \frac{p^j_{i,k}}{2} \cdot \left(\boldsymbol{1} (X^{j}_{t_{i+1}}(\uparrow) = x_n | X_{t_{i}}^{j} = x_k) + \boldsymbol{1}( X^{j}_{t_{i+1}}(\downarrow) = x_n | X_{t_{i}}^{j} = x_k )\right)
\end{equation*}
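To make the forward pass concrete, here is a sketch of one measure-update step under the binomial scheme; the drift function `b` stands for $B$ with the mean-field and $(u^{j-1},v^{j-1})$ arguments frozen at the previous Picard iterate (names and simplifications are ours):

```python
import numpy as np

def forward_step(p, grid, b, sigma, h):
    """One step of the projected binomial Euler scheme for the marginal law.

    p    -- weights (p_{i,k}) of mu_i on the grid
    b    -- drift x -> B(x, ...) with the (u, v) and mean-field arguments frozen
    Returns the weights (p_{i+1,n}) of mu_{i+1} on the same grid.
    """
    x1, dx, nx = grid[0], grid[1] - grid[0], len(grid)
    p_next = np.zeros(nx)
    for k, xk in enumerate(grid):
        if p[k] == 0.0:
            continue                                   # no mass to transport
        for branch in (+1.0, -1.0):                    # "up"/"down", probability 1/2 each
            x_new = xk + h * b(xk) + branch * sigma * np.sqrt(h)
            n = int(np.clip(np.floor((x_new - x1) / dx + 0.5), 0, nx - 1))  # projection Pi
            p_next[n] += 0.5 * p[k]
    return p_next
```

By construction the weights stay non-negative and sum to one, so each step produces a bona fide probability measure on $\Gamma$.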
At Picard iteration $j \geq 1$, the forward scheme finally gives the flow of discretized measures $(\mu^j_i)_{i=0}^{N_t}$, defined on the grid $\Gamma$, at the discrete time steps $(t_i)_{i=0}^{N_t}$. Thus, we have described an implementation of Step 1) in the definition of the Picard mapping $\Psi_{[\xi],G}$. In the next section, we detail the implementation of the backward component in Step 2).
\subsubsection{Grid Approximation of the Backward Component}
\hspace{5mm} Given the current Picard iterates $(\mu^{j}_{i}, u^{j-1}_i(\cdot), v^{j-1}_i(\cdot))$, $i = 0, \cdots, N_t$, we would like to detail Step 2) in the definition of $\Psi_{[\xi],G}$. Since we have a discrete spatial grid $\Gamma$, we wish to compute the values of $u^j_i(x)$ and $v^j_i(x)$ for $x \in \Gamma$.
By replacing $Y^j_{t_i}$ and $Z^j_{t_i}$ with their respective feedback functions in the backward component in Equation (\ref{scheme0}), for $x \in \Gamma$ and $i \le N_t -1$, we have the following backward scheme, with terminal condition $(u^j_{N_t},v^j_{N_t})=(G(\cdot,\mu^j_{N_t}),0)$ at time $T = t_{N_t}$:
\begin{equation*}
\begin{split}
u^j_i(x) &= \mathbb{E} \left( u^j_{i+1}(X^{j}_{t_{i+1}}) + h \cdot F(X^j_{t_{i}}, u^{j-1}_i(X_{t_{i}}^j), v^{j}_{i}(X_{t_{i}}^j),\mu^j_i, (u^{j-1}_i,v^{j-1}_i)\sharp \mu^j_i)
\, \vert \, X^j_{t_{i}}=x
\right), \\
v^j_i(x) &= \mathbb{E} \left( u^j_{i+1}( X^{j}_{t_{i+1}}) \cdot \Delta W_i/h
\, \vert \, X^j_{t_{i}} = x
\right).\\
\end{split}
\end{equation*}
The variable $X^{j}_{t_{i+1}}$ with law $[X^{j}_{t_{i+1}}] = \mu^j_{i+1}$ is given by the forward scheme presented in the previous section with starting point $X^j_{t_i}=x$. Notice that, by construction of the forward scheme, $X^{j}_{t_{i+1}} \in \Gamma$ and $\mathrm{supp}(\mu^j_{i+1}) \subseteq \Gamma$, so $u^j_{i+1}(\cdot)$ has been calculated and saved at time $t_{i+1}$.
The scheme becomes more explicit in the case of the binomial approximation of the Brownian increments, which we use for the numerical results in our examples, with $X^{j}_{t_{i+1}}(\uparrow)$ and $X^{j}_{t_{i+1}}(\downarrow)$ as defined in the forward scheme, always conditionally on $X^j_{t_i} = x$ and given $\mu^j_i$ at time $t_i$:
\begin{equation*}
\begin{split}
u^j_i(x) &= \frac{1}{2} \left( u^j_{i+1}( X^{j}_{t_{i+1}}(\uparrow)) + u^j_{i+1}( X^{j}_{t_{i+1}}(\downarrow)) \right)
+ h \cdot F(x, u^{j-1}_i(x), v^j_i(x),\mu^j_i, (u^{j-1}_i,v^{j-1}_i)\sharp \mu^j_i), \\
v^j_i(x)
&= \frac{h^{-1/2}}{2} \left( u^j_{i+1}(X^{j}_{t_{i+1}}(\uparrow)) - u^j_{i+1}( X^{j}_{t_{i+1}}(\downarrow)) \right).
\end{split}
\end{equation*}
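One backward step of this binomial scheme can be sketched as follows; as in the forward sketch, `b` and `F` stand for the drift and driver with their mean-field and previous-iterate arguments frozen (an illustrative simplification, with names of ours):

```python
import numpy as np

def backward_step(u_next, grid, b, sigma, h, F):
    """Compute (u_i, v_i) on the grid from u_{i+1} under the binomial scheme.

    u_next -- values u_{i+1}(x_n) on the grid
    F      -- frozen driver, here reduced to a function of (x, v) for simplicity
    """
    x1, dx, nx = grid[0], grid[1] - grid[0], len(grid)
    proj = lambda x: int(np.clip(np.floor((x - x1) / dx + 0.5), 0, nx - 1))
    u_i, v_i = np.empty(nx), np.empty(nx)
    for k, xk in enumerate(grid):
        up = u_next[proj(xk + h * b(xk) + sigma * np.sqrt(h))]   # "up" successor
        dn = u_next[proj(xk + h * b(xk) - sigma * np.sqrt(h))]   # "down" successor
        v_i[k] = (up - dn) / (2.0 * np.sqrt(h))
        u_i[k] = 0.5 * (up + dn) + h * F(xk, v_i[k])             # driver uses the fresh v
    return u_i, v_i
```

Note that $v^j_i$ is computed first from the up/down difference and then fed into the driver, mirroring the displayed equations.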
We have now described a scheme for Step 2) in the definition of $\Psi_{[\xi],G}$. Putting this together with the previous section, we have described a fully implementable scheme for $\Psi_{[\xi],G}$. Note that we can use this Picard iteration mapping to define the analogue of the solvers \textit{picard} and \textit{solver}. Importantly, we can also apply the continuation in time method to the grid algorithm as well. In the next section, we apply these two methods, the tree and grid algorithms, to five example problems.
\section{Examples}\label{Examples}
\hspace{5mm} We have collected five example problems to test the algorithms presented in Section \ref{Algorithms}.
\subsection{Linear Example}
\hspace{5mm} The first example is a linear model which comes directly from Chassagneux, Crisan, and Delarue \cite{Solver}, in which they implemented the tree algorithm. The system of interest is the following:
\begin{equation*}
\begin{split}
dX_t&=-\rho \mathbb{E}\left(Y_t\right) dt+\sigma dW_t \\
X_0&=x_0 \\
dY_t&=-a Y_t dt+Z_t dW_t \\
Y_T&=X_T. \\
\end{split}
\end{equation*}
For this problem, the solution is known explicitly:
\begin{equation*}
Y_0=\frac{x_0e^{aT}}{1+\frac{\rho}{a}(e^{aT}-1)}
\end{equation*}
For the numerical results, we let $\rho=0.1$, $a=0.25$, $\sigma=1$, $T=1$, and $x_0=2$. We vary $h$, the time step size, and for the grid algorithm, we use $\Delta x=h^2$. Figure \ref{fig_1_1} shows the log of the error between the numerical and true values of $Y_0$ as a function of the log of the number of time steps for the tree and grid algorithms with one level (i.e. without using continuation in time). The left plot (tree algorithm) reproduces the results in \cite{Solver} and, as expected, the error decreases linearly. The error for the grid algorithm also exhibits a decreasing trend. Thus, both algorithms appear to converge to the true solution.
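For reference, the closed-form value of $Y_0$ with these parameters can be computed directly:

```python
import math

# Parameters of the linear example used in the text
rho, a, T, x0 = 0.1, 0.25, 1.0, 2.0

# Closed-form solution Y_0 = x0 * e^{aT} / (1 + (rho/a)(e^{aT} - 1))
Y0 = x0 * math.exp(a * T) / (1.0 + (rho / a) * (math.exp(a * T) - 1.0))
print(Y0)  # approximately 2.3061
```

This is the benchmark value against which the numerical errors in Figure \ref{fig_1_1} are measured.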
\begin{figure}[!htb]
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[scale=0.45]{fig1a.eps}
\caption{Tree Algorithm}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[scale=0.45]{fig1b.eps}
\caption{Grid Algorithm}
\end{subfigure}
\caption{Linear Example: Convergence of the algorithms with one level as the number of time steps increases.}
\label{fig_1_1}
\end{figure}
\subsection{Trigonometric Drivers Example}
\hspace{5mm} The second example also comes directly from Chassagneux, Crisan, and Delarue \cite{Solver}. The system of interest is the following:
\begin{equation*}
\begin{split}
dX_t&=\rho \cos\left(Y_t \right)dt+\sigma dW_t \\
X_0&=x_0 \\
Y_t&=\mathbb{E}_t\left(\sin(X_T)\right)
\end{split}
\end{equation*}
For the numerical results, we let $\sigma=1$, $T=1$, $x_0=0$, $h=1/6$ for the tree algorithm, and $h=1/12$ and $\Delta x=h^2$ for the grid algorithm. For this problem, we observe a bifurcation when using the tree algorithm as we increase the coupling parameter, $\rho$. Figure \ref{fig_72_1} shows the values of $Y_0$ from the last $5$ Picard iterations. Starting at about $\rho=3.5$, the tree algorithm without continuation in time bifurcates. If the continuation in time method is used for the tree algorithm with two levels, there is no bifurcation for the range of values of $\rho$ shown in the plot. Note that the results from the tree algorithm reproduce those in \cite{Solver}. The grid algorithm performs quite well in the sense that even with only one level, the algorithm converges for all of the values of $\rho$ in the plot. In particular, it avoids the exponential growth of the data structure characterizing the tree algorithm. Note that even though both the tree method with two levels and the grid method with one level converge, they produce different values of $Y_0$ for larger values of $\rho$. We believe this is because the tree algorithm is less accurate, since its time step is larger.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{fig2.eps}
\caption{Trigonometric Drivers Example: Bifurcations in the values of $Y_0$ appear as the coupling parameter, $\rho$, increases. The tree algorithm with one level is shown in blue circles and with two levels is shown in red squares. The grid algorithm with one level is shown in black asterisks.}
\label{fig_72_1}
\end{figure}
For values of $\rho$ for which the tree algorithm bifurcates, we were interested in the effect of changing $\sigma$ on the convergence. Figure \ref{fig_72_2} shows the last 5 values of $Y_0$ from the Picard iteration as $\sigma$ varies for $\rho=5$. Surprisingly, the value of $\sigma$ plays no role in the bifurcation for this value of $\rho$. To see if $\sigma$ would play a role when $\rho$ is closer to the bifurcation point, we show analogous plots for $\rho=3.5$ and $\rho=4$ (see Figure \ref{fig_72_3}). In these plots, however, it is clear that $\sigma$ does affect the bifurcation. Understanding the role of $\sigma$ in the bifurcation is an open question. It is interesting to note that even though the tree algorithm does not bifurcate for $\rho=3.5$ when $\sigma=1$, we still observe a bifurcation when we fix $\rho=3.5$ and vary $\sigma$. This suggests checking whether the grid algorithm bifurcates as we vary $\sigma$ for a fixed value of $\rho$ where there is no bifurcation when $\sigma=1$, such as $\rho=5$. Figure \ref{fig_72_4} shows that the grid algorithm does not bifurcate as we change $\sigma$ for fixed $\rho=5$.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{fig3.eps}
\caption{Trigonometric Drivers Example: Tree algorithm with one level for $\rho=5$. Changing $\sigma$ has no effect on the bifurcation.}
\label{fig_72_2}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[scale=0.45]{fig4a.eps}
\caption{$\rho=3.5$}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[scale=0.45]{fig4b.eps}
\caption{$\rho=4$}
\end{subfigure}
\caption{Trigonometric Drivers Example: Tree algorithm with one level for $\rho=3.5$ and $\rho=4$. Changing $\sigma$ produces unexplained results.}
\label{fig_72_3}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{fig5.eps}
\caption{Trigonometric Drivers Example: Grid algorithm with one level for $\rho=5$. There is no bifurcation as we vary $\sigma$.}
\label{fig_72_4}
\end{figure}
\subsection{Mixed Model}\label{example_3}
\hspace{5mm} The third example also comes directly from Chassagneux, Crisan, and Delarue \cite{Solver}. The system of interest is the following:
\begin{equation*}
\begin{split}
dX_t&=- \rho Y_t dt+\sigma dW_t \\
X_0&=x_0 \\
dY_t&=\arctan \left( \mathbb{E}(X_t) \right) dt+Z_t dW_t \\
Y_T&= \arctan \left( X_T \right).
\end{split}
\end{equation*}
For the numerical results, we let $\sigma=1$, $T=1$, $x_0=2$, $h=1/6$ for the tree algorithm, and $h=1/12$ and $\Delta x=h^2$ for the grid algorithm. We also observe a bifurcation for this problem as we increase the coupling parameter, $\rho$. Figure \ref{fig_73_1} shows the values of $Y_0$ from the last $5$ Picard iterations. Starting at about $\rho=1.5$, the tree algorithm without continuation in time bifurcates. If the continuation in time method is used for the tree algorithm with two levels, the bifurcation point is pushed back to about $\rho=3$, and pushed back further, to about $\rho=5$, when using three levels. Note that these results for the tree algorithm reproduce those in \cite{Solver}. The grid algorithm performs quite well again in the sense that it converges for all of the values of $\rho$ shown, even when using only one level. Further, its major attraction is its lower complexity compared to the tree algorithm.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{fig6.eps}
\caption{Mixed Model: Bifurcations in the values of $Y_0$ appear as the coupling parameter, $\rho$, increases. The tree algorithm with one, two, and three levels is shown in blue circles, red squares, and green triangles, respectively. The grid algorithm with one level is shown in black asterisks.}
\label{fig_73_1}
\end{figure}
As in the trigonometric drivers example, we can also investigate the effect of changing $\sigma$ for a value of $\rho$ where the tree algorithm without continuation in time bifurcates. For $\rho=2$, Figure \ref{fig_73_2} shows that the tree algorithm converges for large enough values of $\sigma$.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{fig7.eps}
\caption{Mixed Model: Tree algorithm with one level for $\rho=2$. The algorithm converges for large enough values of $\sigma$.}
\label{fig_73_2}
\end{figure}
Since we have observed that the tree algorithm converges for small values of $\rho$ and large values of $\sigma$, this suggests trying a continuation in $\rho$ and/or $\sigma$ instead of the continuation in time. Rather than implementing a full continuation method, we used a simpler method of incrementing the parameter of interest.
The incrementation in $\rho$ is performed by starting the algorithm with a small value of $\rho$ and letting it converge. Then $\rho$ is increased by some fixed $\Delta \rho$, and the algorithm is initialized with the solution from the previous value of $\rho$. Figure \ref{fig_73_3} shows the results from the incrementation in $\rho$. The incrementation in $\rho$ only increases the bifurcation point by a small amount.
The incrementation in $\sigma$ is similar, except for each value of $\rho$, we start the algorithm with a sufficiently large value of $\sigma$ such that it will converge. Then $\sigma$ is decreased by a fixed $\Delta \sigma$. Figure \ref{fig_73_4} shows the results from the incrementation in $\sigma$. The incrementation in $\sigma$ also only increases the bifurcation point by a small amount.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{fig8.eps}
\caption{Mixed Model: Tree algorithm with one level. Without continuation or incrementation is shown in black. Incrementation in $\rho$ with $\Delta \rho=0.1$, $0.01$, and $0.001$ are shown in blue circles, red squares, and green triangles, respectively.}
\label{fig_73_3}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{fig9.eps}
\caption{Mixed Model: Tree algorithm with one level. Without continuation or incrementation is shown in black asterisks. Incrementation in $\sigma$ with $\Delta \sigma=1$, $0.5$, and $0.1$ are shown in blue circles, red squares, and green triangles, respectively.}
\label{fig_73_4}
\end{figure}
If we take the previous example but change the drift and driver functions to also be in terms of the mean of the processes, then we have the following system, which we will refer to as the mixed model of means:
\begin{equation*}
\begin{split}
dX_t&=- \rho \mathbb{E}(Y_t) dt+\sigma dW_t \\
X_0&=x_0=2 \\
dY_t&=\arctan \left( \mathbb{E}(X_t) \right) dt+Z_t dW_t \\
Y_T&= \arctan \left(\mathbb{E} (X_T) \right).
\end{split}
\end{equation*}
In the previous example, we noticed that increasing $\sigma$ allows the tree algorithm to converge. We are thus interested in seeing whether replacing the dynamics with the mean of the process removes the effect of changing $\sigma$ on the convergence. We use the same values of the parameters as before. Figure \ref{fig_73_E_1} shows the bifurcation with one level of the tree algorithm (i.e. without using continuation in time). The effect of changing $\sigma$ is shown in Figure \ref{fig_73_E_2}. Our prediction is confirmed: $\sigma$ no longer affects the convergence when the dynamics are replaced with the mean of the process.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{fig10.eps}
\caption{Mixed Model of Means: Bifurcation for the tree algorithm with one level.}
\label{fig_73_E_1}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{fig11.eps}
\caption{Mixed Model of Means: Tree algorithm with one level with $\rho=3$. Changing $\sigma$ has no effect on the bifurcation.}
\label{fig_73_E_2}
\end{figure}
\subsection{Examples: Linear Quadratic Mean Field Games}\label{Truncation}
\hspace{5mm} The last two examples belong to the family of linear quadratic (LQ) games. In these models, the dynamics of the state are linear in the sense that the drift is defined by an affine function:
\begin{equation*}
b(t,x,\alpha)=A_t x + B_t \alpha + \beta_t.
\end{equation*}
Furthermore, the running and terminal costs are quadratic in the state and control variables. For the sake of simplicity, we choose not to include the cross terms, so that we define $f$ and $g$ as follows:
\begin{equation*}
\begin{split}
f(t,x,\alpha)&=\frac{1}{2} P_t x^2 + \frac{1}{2} Q_t \alpha^2 \\
g(x)&=\frac{1}{2} S x^2. \\
\end{split}
\end{equation*}
The FBSDE system derived from the LQ mean field game via the weak approach becomes:
\begin{equation}
\begin{split}
dX_t &= \big[A_t X_t + B_t \alpha_t + \beta_t\big] dt +
\sigma dW_t ,\quad X_0=\xi \\
dY_t &= \bigg[-\frac{1}{2} P_t X_t^2 + \big( A_t X_t + \beta_t\big) \frac{Z_t}{\sigma}+\frac{1}{2} \frac{B_t^2}{Q_t} \frac{Z_t^2}{\sigma^2} \bigg] dt + Z_t dW_t,\quad Y_T = \frac{1}{2} S X_T^2.\\
\end{split}
\label{quadratic}
\end{equation}
We present two examples of linear quadratic games: a problem of flocking, and a price impact model of a trader.
\begin{remark}
We observe that the driver of the BSDE is quadratic in $Z_t$, which in general makes it harder to solve. On the other hand, it is possible to obtain explicit solutions for LQ models. If we were to numerically solve Equation (\ref{quadratic}), we might observe blow-up from the quadratic terms in the backward equation, and consequently the algorithm might fail to converge. Luckily, we do not observe such blow-up in the examples we consider. But if blow-up does occur, one could consider adapting the method presented by Chassagneux and Richou in \cite{Quadratic} to numerically approximate a quadratic BSDE.
\label{remark_trunc}
\end{remark}
\subsubsection{Flocking Problem}
\hspace{5mm} The next example problem models flocking. As in the paper of Nourian, Caines, and Malham\'{e} \cite{Nourian}, we consider the spatially homogeneous case where the state, $X_t$, represents the velocity of a representative player, or bird. Each bird controls its velocity through the process:
\begin{equation*}
dX_t=\alpha_t dt+\sigma dW_t,
\end{equation*}
where the control is chosen to minimize:
\begin{equation*}
J(\alpha)=\mathbb{E} \left[ \int_0^T \left(\frac{1}{2} \alpha_t^2+\frac{\rho}{2}(X_t-\bar{\mu}_t)^2 \right) dt\right]
\end{equation*}
over $\alpha \in \mathbb{A}$. Above, we let $\bar{\mu}_t$ denote the mean of the distribution $\mu_{t}$ (of the velocities of the birds) at time $t$. The running cost consists of two components. The first term encourages the birds to minimize their kinetic energy by not choosing a large control. The second term encourages the birds to align their velocities with the mean velocity of the group.
This model falls into the class of linear quadratic games. Assume that the initial condition is given by a constant, $X_0=x_0$. It can be shown that the solution is Gaussian with mean and variance:
\begin{equation*}
\begin{split}
\mathbb{E}(X_t)&=x_0 \\
Var(X_t)&=\sigma^2 \int_0^t \exp \left(-2 \int_s^t \eta_u du\right)ds,
\end{split}
\end{equation*}
where:
\begin{equation*}
\eta_t=\sqrt{\rho}\frac{e^{2\sqrt{\rho}(T-t)}-1}{e^{2\sqrt{\rho}(T-t)}+1}.
\end{equation*}
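Noting that $\eta_t = \sqrt{\rho}\,\tanh(\sqrt{\rho}(T-t))$ and that the inner integral evaluates to $\int_s^t \eta_u\,du = \log\cosh(\sqrt{\rho}(T-s)) - \log\cosh(\sqrt{\rho}(T-t))$, the variance can be computed with a single quadrature; the following sketch (function name ours) uses a composite midpoint rule:

```python
import math

def flocking_variance(t, rho, sigma, T, n=1000):
    """Var(X_t) = sigma^2 * int_0^t exp(-2 * int_s^t eta_u du) ds.

    The inner integral is evaluated in closed form via log-cosh; only the
    outer integral is approximated (composite midpoint rule with n cells).
    """
    sr = math.sqrt(rho)
    lc_t = math.log(math.cosh(sr * (T - t)))
    hs = t / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * hs
        inner = math.log(math.cosh(sr * (T - s))) - lc_t
        total += math.exp(-2.0 * inner) * hs
    return sigma ** 2 * total
```

For $\rho=\sigma=T=1$ this gives $\mathrm{Var}(X_T)=\tanh(1)\approx 0.76$, which follows from the formula above.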
Using the weak formulation, the FBSDE system of interest is the following:
\begin{equation*}
\begin{split}
dX_t&=-\frac{Z_t}{\sigma} dt+\sigma dW_t \\
X_0&=x_0 \\
dY_t&=-\left(\frac{Z_t^2}{2\sigma^2}+\frac{\rho}{2}(X_t-\mathbb{E}X_t)^2\right) dt+Z_t dW_t \\
Y_T&=0.
\end{split}
\end{equation*}
If we use the Pontryagin formulation, the FBSDE system becomes:
\begin{equation*}
\begin{split}
dX_t&=-Y_tdt+\sigma dW_t \\
X_0&=x_0 \\
dY_t&=-\rho\left(X_t-\mathbb{E}(X_t)\right) dt+Z_t dW_t \\
Y_T&=0.
\end{split}
\end{equation*}
The numerical results are presented for $\rho=1$, $\sigma=1$, $T=1$, $x_0=0$, $h=1/20$ for the tree algorithm and $h=1/130$ and $\Delta x=h^2$ for the grid algorithm. Figure \ref{fig_flocking_grid}(a) shows the results for the grid algorithm for both the weak and Pontryagin approaches. The plot shows the weights of the distribution $\mathcal{L}(X_{T})$. The results are similar between both approaches and coincide with the true solution.
We can look at the convergence rate by calculating the 2-Wasserstein distance between the numerical results and the true solution as we change the number of time steps. Since our state space is in one dimension, we can calculate the Wasserstein distance explicitly using the representation provided by Prokhorov \cite{W2}:
\begin{equation*}
W_p(\mu,\nu)=\left(\int_0^1 | F_\mu^{-1}(u)-F_\nu^{-1}(u)|^pdu\right)^{1/p},
\end{equation*}
where $F_\mu(x)=\mu((-\infty,x])$ denotes the cumulative distribution function. Figure \ref{fig_flocking_grid}(b) presents the convergence rate of the grid algorithm in terms of the 2-Wasserstein distance, calculated between the true solution and the numerical results, with respect to the number of time steps. As expected, the 2-Wasserstein distance decreases towards 0 as we increase the number of time steps, for both the Pontryagin and weak approaches.
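For the discrete measures produced by the grid algorithm, this quantile representation can be evaluated exactly; a sketch (the function name is ours):

```python
import numpy as np

def wasserstein_1d(x, p, y, q, order=2):
    """W_p distance between discrete measures sum_k p_k delta_{x_k}, sum_n q_n delta_{y_n}.

    Integrates |F_mu^{-1}(u) - F_nu^{-1}(u)|^p over the partition of (0, 1]
    induced by the jumps of both CDFs, then takes the p-th root.
    """
    x, p, y, q = (np.asarray(a, dtype=float) for a in (x, p, y, q))
    ix, iy = np.argsort(x), np.argsort(y)
    x, p, y, q = x[ix], p[ix], y[iy], q[iy]
    cp, cq = np.cumsum(p), np.cumsum(q)
    levels = np.unique(np.concatenate([cp, cq]))          # merged jump locations
    widths = np.diff(np.concatenate([[0.0], levels]))     # cell lengths in (0, 1]
    # quantile on each cell: first support point whose CDF reaches the cell top
    qx = x[np.minimum(np.searchsorted(cp, levels - 1e-12), len(x) - 1)]
    qy = y[np.minimum(np.searchsorted(cq, levels - 1e-12), len(y) - 1)]
    return float(np.sum(widths * np.abs(qx - qy) ** order) ** (1.0 / order))
```

With one-point measures $\delta_0$ and $\delta_1$ this returns $1$, as expected.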
\begin{figure}[!htb]
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[scale=0.45]{fig12a.eps}
\caption{}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[scale=0.45]{fig12b.eps}
\caption{}
\end{subfigure}
\caption{Flocking Problem: (a) Distribution $\mu_T$ of the players' states at time $T$ for the grid algorithm with one level. Pontryagin is in blue circles, weak is in red squares, true solution is shown in black asterisks. (b) 2-Wasserstein distance between true solution and numerical solution for grid algorithm with one level as we increase the number of time steps, plotted as a log-log plot. Pontryagin approach is in blue circles and weak approach is in red squares.}
\label{fig_flocking_grid}
\end{figure}
The results for the tree algorithm are shown in Figure \ref{fig_flocking_tree} for the weak and Pontryagin approaches. As with the grid algorithm, the weak and Pontryagin solutions are similar to each other and coincide with the true solution.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{fig13.eps}
\caption{Flocking Problem: Distribution $\mu_T$ of the players' states at time $T$ for the tree algorithm with one level. Pontryagin is shown in blue circles, weak is shown in red squares, and true solution is shown in black asterisks.}
\label{fig_flocking_tree}
\end{figure}
\subsubsection{Trader Problem}\label{trader}
\hspace{5mm} The last example is an application of mean field games to finance, such as the trader congestion model in the paper of Cardaliaguet and Lehalle \cite{trader}. We focus on the \textit{Price Impact Model} presented in the book of Carmona and Delarue \cite{MFGP}. Interest in this kind of model is motivated by its use in optimal execution problems for high-frequency trading. Furthermore, it is an instance of an extended mean field game, also known as a mean field game of controls, in which the representative agent interacts with the law of the control instead of the law of the state. The problem concerns a group of traders who have to buy or sell a large amount of shares in a given interval of time $[0,T]$. If they trade too fast, they will suffer from market impact. On the other hand, if they trade too slowly, they will incur a large risk penalization. Approaching this problem as a mean field game, the inventory of the representative trader is modeled by a stochastic process $(X_t)_{0 \leq t \leq T}$ such that
\begin{equation*}
dX_t = \alpha_t dt +\sigma dW_t, \quad t\in [0,T],
\end{equation*}
where $\alpha_t$ corresponds to the trading rate. The price of the asset $(S_t)_{0 \leq t \leq T}$ is influenced by the trading strategies of all the traders through the law of the controls $(\theta_t=\mathcal{L}(\alpha_t))_{0 \le t \le T}$ as follows:
\begin{equation*}
dS_t =\gamma \biggl( \int_{\mathbb{R}}a d\theta_t(a) \biggr) dt + \sigma_0 dW_t^0, \quad t\in [0,T],
\end{equation*}
where $\gamma$ and $\sigma_0$ are constants and the Brownian motion $W^0$ is independent of $W$.
The amount of cash held by the trader at time $t$ is denoted by the process $(K_t)_{0 \le t \le T}$. The dynamics of $K$ are modeled by
\begin{equation*}
dK_t=-[\alpha_t S_t +c_{\alpha}(\alpha_t)]dt,
\end{equation*}
where the function $\alpha \mapsto c_{\alpha}(\alpha)$ is a non-negative convex function satisfying $c_{\alpha}(0)=0$, representing the cost for trading at rate $\alpha$. The wealth $V_t$ of the trader at time $t$ is defined as the sum of the cash held by the trader and the value of the inventory with respect to the price $S_t$:
\begin{equation*}
V_t=K_t+X_t S_t.
\end{equation*}
Applying the self-financing condition of Black-Scholes theory, the change over time of the wealth $V$ is given by the equation:
\begin{equation}
\begin{split}
dV_t&=dK_t+X_tdS_t+S_t dX_t
\\
& =\Big[ -c_{\alpha}(\alpha_t)+\gamma X_t \int_{\mathbb{R}} a d\theta_t(a) \Big]dt + \sigma S_t dW_t + \sigma_0 X_t dW_t^0.
\end{split}
\label{wealth}
\end{equation}
We assume that the trader is subject to a running liquidation constraint modeled by a function $c_X$ of the shares they hold, and to a terminal liquidation constraint at maturity $T$ represented by a scalar function $g$. Thus, the cost function is defined by:
\begin{equation*}
J(\alpha)=\mathbb{E}\Big[\int_0^T c_X(X_t) dt +g(X_T) - V_T\Big].
\end{equation*}
Applying Equation (\ref{wealth}), it follows that
\begin{equation*}
J(\alpha)=\mathbb{E}\Big[ \int_0^T f(t,X_t,\theta_t,\alpha_t)dt +g(X_T)\Big],
\end{equation*}
where the running cost is defined by
\begin{equation*}
f(t,x,\theta,\alpha)=c_{\alpha}(\alpha)+c_X(x)-\gamma x \int_{\mathbb{R}} a d\theta(a),
\end{equation*}
for $0\leq t \leq T$, $x \in \mathbb{R}$, $\theta \in \mathcal{P}(A)$ and $\alpha \in A = \mathbb{R}$. We assume that the functions $c_X$ and $g$ are quadratic and that the function $c_{\alpha}$ is strongly convex in the sense that its second derivative is bounded away from $0$. This particular case is known as the Almgren-Chriss linear price impact model. Thus, the control is chosen to minimize:
\begin{equation*}
J(\alpha)=\mathbb{E}\left[ \int_0^T \left( \frac{c_{\alpha}}{2}{\alpha_t}^2+\frac{c_X}{2}X_t^2-\gamma X_t\int_{\mathbb{R}} a d\theta_t(a) \right)dt + \frac{c_g}{2}X_T^2\right],
\end{equation*}
over $\alpha \in \mathbb{A}$.
To summarize, the running cost consists of three components. The first term represents the cost for trading at rate $\alpha$. The second term takes into consideration the running liquidation constraint in order to penalize unwanted inventories. The third term defines the actual price impact. Finally, the terminal cost represents the terminal liquidation constraint.
As with the flocking example, this model falls into the class of linear-quadratic games. Assume that the initial condition is given by a constant, $X_0=x_0$. The solution is Gaussian with mean and variance given by:
\begin{equation*}
\begin{split}
\mathbb{E}(X_t)&=x_0 e^{-\int_0^t \frac{\bar{\eta}_s}{c_{\alpha}}ds}\\
Var(X_t)&=\sigma^2 \int_0^t e^{-\frac{2}{c_{\alpha}}\int_s^t \eta_r dr} ds\\
\end{split}
\end{equation*}
where:
\begin{equation*}
\begin{split}
\bar{\eta}_t&= \frac{-C (e^{(\delta^+-\delta^-)(T-t)}-1)-c_g(\delta^+e^{(\delta^+-\delta^-)(T-t)}-\delta^-)}{(\delta^-e^{(\delta^+-\delta^-)(T-t)}-\delta^+)-c_gB(e^{(\delta^+-\delta^-)(T-t)}-1)} \\
\eta_t&=-c_{\alpha}\sqrt{c_X/c_\alpha}\frac{c_{\alpha}\sqrt{c_X/c_\alpha}-c_g-(c_{\alpha}\sqrt{c_X/c_\alpha}+c_g)e^{2\sqrt{c_X/c_\alpha}(T-t)}}{c_{\alpha}\sqrt{c_X/c_\alpha}-c_g+(c_{\alpha}\sqrt{c_X/c_\alpha}+c_g)e^{2\sqrt{c_X/c_\alpha}(T-t)}}, \\
\end{split}
\end{equation*}
for $t\in[0,T]$, where $B=1/c_{\alpha}$, $C=c_X$, $\delta^\pm=-D \pm \sqrt{R}$, $D = -\gamma /(2c_{\alpha})$ and $R=D^2+BC$. \\
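As a sanity check on these closed-form expressions (this check is ours, not part of the algorithms studied in the paper), one can verify numerically that both coefficients satisfy the terminal condition $\bar{\eta}_T=\eta_T=c_g$, and that $\bar{\eta}$ and $\eta$ coincide for all $t$ when the price-impact parameter $\gamma$ vanishes, so that the mean-field term drops out. A minimal Python sketch; the parameter values below are arbitrary:

```python
import math

def trader_etas(t, T, c_alpha, c_X, c_g, gamma):
    """Evaluate the closed-form coefficients (eta_bar(t), eta(t))."""
    B, C = 1.0 / c_alpha, c_X
    D = -gamma / (2.0 * c_alpha)
    R = D * D + B * C
    dp, dm = -D + math.sqrt(R), -D - math.sqrt(R)
    E = math.exp((dp - dm) * (T - t))
    eta_bar = (-C * (E - 1.0) - c_g * (dp * E - dm)) \
        / ((dm * E - dp) - c_g * B * (E - 1.0))
    s = math.sqrt(c_X / c_alpha)      # appears throughout the eta formula
    a = c_alpha * s
    E2 = math.exp(2.0 * s * (T - t))
    eta = -a * (a - c_g - (a + c_g) * E2) / (a - c_g + (a + c_g) * E2)
    return eta_bar, eta
```

At $t=T$ both formulas collapse to $c_g$, matching the terminal liquidation constraint.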
Using the weak approach yields the following FBSDEs system:
\begin{equation*}
\begin{split}
dX_t &= -\frac{1}{c_{\alpha}}\frac{Z_t}{\sigma} dt + \sigma dW_t,\quad X_0=x_0 \\
dY_t &= -\left[\frac{c_X}{2} X_t^2 + \frac{\gamma }{c_{\alpha}}\frac{\mathbb{E}[Z_t]}{\sigma} X_t +\frac{1}{2c_{\alpha}} \left(\frac{Z_t}{\sigma}\right)^2\right] dt + Z_t dW_t,\quad Y_T = c_g \frac{X_T^2}{2}. \\
\end{split}
\end{equation*}
Alternatively, the FBSDE system obtained via the Pontryagin approach is:
\begin{equation*}
\begin{split}
dX_t &= -\frac{1}{c_{\alpha}}Y_t dt + \sigma dW_t, \quad X_0=x_0 \\
dY_t &= -\left(c_X X_t + \frac{\gamma }{c_{\alpha}} \mathbb{E} [Y_t]\right) dt + Z_t dW_t,\quad Y_T = c_g X_T. \\
\end{split}
\end{equation*}
The numerical results focus on the effect of the continuation method for the grid algorithm. In contrast with the previous examples, we show that the grid algorithm is also affected by bifurcation. Figure \ref{fig_trader_bif} shows the last five Picard iterations of $Y_0$ for the Pontryagin approach when the number of levels ranges from $1$ to $3$. Fixing the parameters $x_0=1$, $\sigma=0.7$, $1/c_{\alpha}=1.5$, $c_g=0.3$, $\gamma =2$, $T=1$, $h=1/12$, $\Delta x=h^2$ and increasing the coupling parameter, $c_X$, we observe that the bifurcation effect can be corrected by increasing the number of levels. In fact, Figure \ref{fig_trader_bif} shows that the true value of $Y_0$ matches the value computed numerically when using three levels.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{fig14.eps}
\caption{Trader Problem: Bifurcations in the values of $Y_0$ depending on the coupling parameter $c_X$ for different number of levels in the grid algorithm. One, two, and three levels are shown in blue circles, red squares, and green triangles, respectively. The true value of $Y_0$ is shown in black asterisks.}
\label{fig_trader_bif}
\end{figure}
Furthermore, Figure \ref{fig_trader_2}(a) compares the distribution $\mathcal{L}(X_{T})$ obtained by the Pontryagin and the weak approaches using the grid algorithm with parameters $x_0=1$, $\sigma=0.7$, $1/c_{\alpha}=0.3$, $c_g=0.3$, $\gamma =2$, $c_X=2$, $T=1$, $h=1/130$ and $\Delta x=h^2$. The two approaches produce similar results that coincide with the true solution.
Figure \ref{fig_trader_2}(b) presents the convergence rate in terms of the 2-Wasserstein distance calculated between the true solution and numerical results with respect to the number of time steps. We again make use of the explicit representation of the Wasserstein distance \cite{W2}. The numerical solution is obtained using the grid algorithm with parameters $x_0=1$, $\sigma=0.7$, $1/c_{\alpha}=0.3$, $c_g=0.3$, $\gamma =2$, $c_X=2$, $T=1$, and $\Delta x=h^{2}$. As expected, the 2-Wasserstein distance decreases towards 0 as we increase the number of time steps.
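The explicit representation in question is the one for one-dimensional Gaussian marginals: the monotone (quantile) coupling is optimal in dimension one, and $W_2^2$ splits into a squared mean difference plus a squared standard-deviation difference. A small helper, assuming (as here) that both marginals are one-dimensional Gaussians:

```python
import math

def w2_gauss_1d(m1, s1, m2, s2):
    """2-Wasserstein distance between N(m1, s1^2) and N(m2, s2^2).

    The monotone rearrangement is the optimal coupling in dimension one,
    giving the closed form W2^2 = (m1 - m2)^2 + (s1 - s2)^2."""
    return math.hypot(m1 - m2, s1 - s2)
```

In the experiments above, the true marginal law at each time is Gaussian with known mean and variance, so this formula gives the distance to the numerical (moment-matched) solution directly.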
\begin{figure}[!htb]
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[scale=0.45]{fig15a.eps}
\label{fig_trader_mu}
\caption{}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[scale=0.45]{fig15b.eps}
\label{fig_trader_W2}
\caption{}
\end{subfigure}
\caption{Trader Problem: (a) Distribution $\mu_T$ of the players' states at time $T$ for the grid algorithm with one level. Pontryagin is shown in blue circles, weak is shown in red squares, and the true solution is shown in black asterisks. (b) 2-Wasserstein distance between true solution and numerical solution for grid algorithm with one level as we increase the number of time steps, plotted as a log-log plot. Pontryagin approach is shown in blue circles and weak approach is shown in red squares.}
\label{fig_trader_2}
\end{figure}
The last plot, Figure \ref{fig_trader_error_control}, shows the error from the true solution of the control at time 0, $\alpha_0$, as we increase the number of time steps. This value is given by $\alpha_0=-Y_0/c_{\alpha}$ for the Pontryagin approach and $\alpha_0=-Z_0/(c_{\alpha}\sigma)$ for the weak approach. The true value is given by $\alpha_0=-\bar{\eta}_0 x_0/c_{\alpha}$. As for the 2-Wasserstein distance, the error in the control decreases towards 0.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{fig16.eps}
\caption{Trader Problem: Error in the control at time 0, $\alpha_0$, as we increase the number of time steps, plotted as a log-log plot.}
\label{fig_trader_error_control}
\end{figure}
\section{Conclusion}\label{Conclusion}
\hspace{5mm} In conclusion, we have provided two algorithms for numerically solving FBSDEs of McKean-Vlasov type, which can be used to formulate the solutions to mean field game problems. The first algorithm is based on a pathwise tree structure. The second algorithm is based on a marginal grid structure. We have also proposed various refinements to the algorithms, including a continuation in time and an incrementation of a coupling parameter or the diffusion coefficient. The different numerical methods were illustrated on five benchmark examples.
The tree algorithm's main advantage is that we do not need to project the values of $X_t$ onto a discretized spatial grid, which potentially makes the algorithm more accurate. However, a significant disadvantage of the tree algorithm is the exponential growth of the data structure as the number of time steps is increased. This exponential growth would be even worse if a higher order of quantization were used for approximating the Brownian increments.
The grid algorithm's main advantage is that it avoids the exponential growth of the data structure. A higher order of quantization may be used without drastically changing the algorithm's complexity. A disadvantage of the grid algorithm is its sensitivity to the spatial step size relative to the time step size. For the algorithm to be stable, the two step sizes need to be well adjusted to each other.
For both the tree and grid algorithms, we have observed that the continuation in time is able to extend the range of values of the coupling parameters for which the algorithms will converge. The incrementation methods proposed in Subsection \ref{example_3}, however, were not very successful at avoiding the bifurcations.
This report has touched on many topics that could be explored more deeply. First of all, the extension of the grid algorithm in \cite{Grid} to the mean field setting has not been studied from a theoretical standpoint. It is an open question to determine if this algorithm converges (meaning that the error decreases as the grid size decreases). The effect of the continuation in time or incrementation of the coupling parameter and/or diffusion coefficient has also yet to be studied. The numerical results also raised questions on the influence of the diffusion coefficient in the bifurcations.
\section{Acknowledgments}
\hspace{5mm} The authors would like to thank the organizers of the 2017 CEMRACS for the opportunity to study at CIRM. The student authors would also like to thank Professors Chassagneux and Delarue for mentoring us during the summer school. Finally, we would like to thank the funding sources that supported us during CEMRACS, including our respective universities, the NSF, and SMAI.
\section{Introduction}
\subsection{Background and motivation}
Our title hints at Lieb's article ``Gaussian kernels have only Gaussian maximizers'' \cite{lieb-1990}. This remarkable work studies operators with Gaussian kernels between Lebesgue spaces $L^p(\mathbb{R}^n,\mathbb C)$, with norm $\|f\|_p=\left(\int_{\mathbb{R}^n}
|f(y)|^p \, dy \right)^{1/p}$. These operators are of the following form: for any integrable $f \colon \mathbb{R}^n\to\mathbb C$,
$$ Gf(x)=\int_{\mathbb{R}^n} e^{-\mathcal Q(x,y)} f(y)\, dy, \qquad x\in\mathbb{R}^m,$$
where $\mathcal Q \colon \mathbb{R}^m\times \mathbb{R}^n\to\mathbb C$ is such that $\re(\mathcal Q) \colon \mathbb{R}^m\times \mathbb{R}^n\to\mathbb{R}$
is a semi-definite positive quadratic form and $\im(\mathcal Q)\colon\mathbb{R}^m\times \mathbb{R}^n\to\mathbb{R}$ is a quadratic form. Lieb has given conditions which ensure that the operator norm of $G \colon L^p(\mathbb{R}^n,\mathbb C)\to
L^q(\mathbb{R}^m,\mathbb C)$ can be computed by inspecting centered Gaussian functions only, i.e. functions of the form $f(y)=e^{-q(y)}$ where $\re(q)$ is a positive definite quadratic form and $\im(q)$ is a quadratic form (when $q$ is real-valued, $f$ is a real-valued Gaussian function).
Here is a simplified version of his result:
\begin{theorem}[\cite{lieb-1990}]
With the above notation, the relationship
$$ \|G\|_{L_p\to L_q}=\sup\left\{\frac{\|Gf\|_q}{\|f\|_p};\; f \; \mathrm{centered}\; \mathrm{Gaussian} \right\} $$
holds in any of the following cases
\begin{itemize}
\item $1<p\le q<+\infty$,
\item $1<p, q<+\infty$ and the Gaussian kernel is real (i.e. $\mathcal{Q}$ is a quadratic form). In this case it is enough to consider real-valued Gaussian functions.
\end{itemize}
\end{theorem}
An important step of the proof consists in the study of the non-degenerate case when $\re(\mathcal Q)$ is positive definite: the operator is shown to be compact, weak topology arguments yield the existence of maximizers of the ratio $\|Gf\|_q/\|f\|_p$, and a careful study of the product
operator $G\otimes G$ with kernel $e^{-\mathcal Q(x,y)}e^{-\mathcal Q(x',y')}$ allows one to show that they are Gaussian. For further comparison, let us emphasize that
these arguments use Banach space techniques, which only apply when $p\ge 1$.
Lieb's theorem extends and unifies many important analytic results.
By considering the kernel $e^{i\langle x,y\rangle}$,
it recovers the calculation of the norm of the Fourier transform from $L^p$ to $L^{p'}$ for $p\in(1,2)$, which was first achieved by Beckner \cite{beckner-1975}. It also encompasses Nelson's theorem for the Ornstein-Uhlenbeck semigroup:
$$ P_tf(x)=\int_{\mathbb R^n} f\big( e^{-t}x+\sqrt{1-e^{-2t}}y\big) \, d\gamma_n(y),$$
where $\gamma_n$ is the standard Gaussian measure on $\mathbb{R}^n$, $d\gamma_n(x)= (2\pi)^{-n/2}\exp(-|x|^2/2) \, dx$. Nelson's theorem \cite{N} asserts that this operator is hypercontractive: for $p,q>1$ satisfying $e^{2t}\ge \frac{q-1}{p-1}$, and every measurable function $f$,
$$\|P_tf \|_{L^q(\gamma_n)}\le \|f\|_{L^p(\gamma_n)}.$$
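Nelson's bound is easy to probe numerically; for exponential functions $f(x)=e^{ax}$ it holds with equality exactly at the critical time $e^{2t}=\frac{q-1}{p-1}$ (these are the extremizers). The following sketch is our own check, using Gauss-Hermite quadrature in dimension $n=1$; the choices of $a$, $p$, $q$ and the number of nodes are arbitrary:

```python
import numpy as np

# Quadrature against the standard Gaussian measure gamma:
# int g dgamma  ~  (1/sqrt(pi)) * sum_i w_i g(sqrt(2) x_i).
nodes, weights = np.polynomial.hermite.hermgauss(80)
x = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

def gauss_lp_norm(values, p):
    # L^p(gamma) norm from values of a function at the quadrature nodes
    return float(w @ np.abs(values) ** p) ** (1.0 / p)

def ornstein_uhlenbeck(f, t):
    # P_t f(x) = E[f(e^{-t} x + sqrt(1 - e^{-2t}) Z)], Z standard Gaussian,
    # evaluated at each outer node by an inner quadrature.
    r = np.exp(-t)
    return np.array([w @ f(r * xi + np.sqrt(1.0 - r * r) * x) for xi in x])

a, p, q = 0.5, 2.0, 4.0
t_crit = 0.5 * np.log((q - 1.0) / (p - 1.0))   # Nelson's critical time
f = lambda y: np.exp(a * y)
lhs = gauss_lp_norm(ornstein_uhlenbeck(f, t_crit), q)
rhs = gauss_lp_norm(f(x), p)
```

At $t=t_{\mathrm{crit}}$ both norms agree; for larger $t$ the left-hand side drops strictly below the right-hand side.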
Lieb's article also addresses multilinear operators with Gaussian kernels, and features an extension of the latter theorem in the case of real-valued kernels and functions.
(From now on we only consider real-valued functions).
\begin{theorem}[\cite{lieb-1990}]\label{theo:lieb-multi}
Let $m\ge 1$ and for $i=1,\ldots m$ let $p_i\ge 1$ and let $B_i \colon \mathbb{R}^n \to \mathbb{R}^{n_i}$
be a linear surjective map. Let $\mathcal Q$ be a semi-definite positive quadratic form on $\mathbb R^n$. For non-identically zero functions $f_i\in L^{p_i} (\mathbb{R}^{n_i},\mathbb{R})$, let
$$H(f_1,\ldots, f_m)=\frac{\int_{\mathbb{R}^n} e^{-\mathcal Q(x)}\prod_{i=1}^m f_i(B_i x) \, dx}{\prod_{i=1}^m \|f_i\|_{p_i} }.$$
Then the supremum of $H$ over all such functions is equal to its supremum over centered Gaussian functions only.
\end{theorem}
Setting $c_i=1/p_i$ and replacing $f_i$ with $f_i^{c_i}$ gives an analogous result
for the following functional on integrable functions:
$$ I(f_1,\ldots, f_m)=\frac{\int_{\mathbb{R}^n} e^{-\mathcal Q(x)}\prod_{i=1}^m f_i(B_i x)^{c_i} \, dx}{\prod_{i=1}^m \left( \int_{\mathbb{R}^{n_i}}f_i\right)^{c_i}}.$$
The above theorem is a far-reaching extension of H\"older's inequalities.
In the case when $\mathcal Q=0$ and the maps $B_i$ are linear forms (i.e. $n_i=1$),
it recovers a celebrated inequality of Brascamp and Lieb \cite{B-L-1976}, which allowed these authors to compute the optimal constants in Young's convolution inequality, independently of Beckner \cite{beckner-1975}. Indeed using duality
$$ \|f*g\|_r \le C \|f\|_p \|g\|_q $$
can be rewritten as
$$ \int_{\mathbb{R}^{2n}} f(x-y)g(y)h(x) \, dx dy\le C \|f\|_p \|g\|_q \|h\|_{r'}.$$
The classical Loomis-Whitney inequality \cite{L-W-1949} and its extension by Finner \cite{finner} are also particular cases of Theorem \ref{theo:lieb-multi}.
The Brascamp-Lieb inequalities found striking applications in convex geometry thanks to the work
of K. Ball, see e.g. \cite{ball-1,ball-3,ball-handbook}. He put forward a situation where the optimal constant is 1, and could use it in order to derive various sharp upper bounds on volumes of convex sets. The ``geometric'' Brascamp-Lieb inequality reads as follows: if $u_1,\ldots, u_m$ are unit vectors in $\mathbb{R}^n$ and if $c_1,\ldots, c_m\ge 0$ verify
\begin{equation}\label{eq:decomposition-identity-1d}
\sum_{i=1}^m c_i u_i\otimes u_i=\textup{Id}_{\mathbb{R}^n},
\end{equation}
where $u_i\otimes u_i$ is the orthogonal projection onto $\mathbb{R} u_i$ and $\textup{Id}_{\mathbb{R}^n}$ is the identity of $\mathbb{R}^n$, then for all integrable functions $f_i \colon \mathbb{R}\to \mathbb{R}^+$,
$$
\int_{\mathbb{R}^n} \prod_{i=1}^m f_i\big(\langle x,u_i\rangle \big)^{c_i} dx \le \prod_{i=1}^m \left( \int_{\mathbb{R}} f_i\right)^{c_i}.
$$
Observe that when $m=n$ and $(u_i)_{i=1}^n$ is an orthonormal basis, the inequality becomes an equality by Fubini's theorem. So, in some sense,
the geometric Brascamp-Lieb
inequality describes an extremal property of orthonormal bases among sets of vectors which decompose the identity as in \eqref{eq:decomposition-identity-1d}.
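A quick numerical illustration of this (ours, in the plane): three equispaced unit vectors with weights $c_i=2/3$ decompose the identity, and for standard Gaussian inputs $f_i(t)=e^{-\pi t^2}$ (so $\int f_i=1$) the left-hand side is $\int_{\mathbb{R}^2}e^{-\pi|x|^2}dx=1$, matching the right-hand side exactly. A sketch; the grid step and truncation radius are arbitrary:

```python
import numpy as np

# Three equispaced unit vectors in R^2 with c_i = 2/3 decompose the
# identity: sum_i c_i u_i u_i^T = Id.
U = np.array([[np.cos(th), np.sin(th)] for th in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)])
c = 2.0 / 3.0
decomposition = sum(c * np.outer(u, u) for u in U)

# Left-hand side of the geometric Brascamp-Lieb inequality with
# f_i(t) = exp(-pi t^2), integrated over a truncated grid.
h, L = 0.05, 6.0
grid = np.arange(-L, L + h / 2, h)
X, Y = np.meshgrid(grid, grid)
pts = np.stack([X, Y], axis=-1)
integrand = np.ones_like(X)
for u in U:
    integrand = integrand * np.exp(-np.pi * (pts @ u) ** 2) ** c
lhs = float(integrand.sum()) * h * h   # Riemann sum over the plane
rhs = 1.0                              # prod_i (int f_i)^{c_i} = 1
```

Because the weighted projections reconstruct $|x|^2$, the product of the three one-dimensional Gaussians is exactly the standard two-dimensional Gaussian, which makes the equality visible numerically.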
An extension to functions of several variables appears in \cite{barthe-inventiones}.
\medskip
Over the years, several related inverse inequalities appeared in the literature.
A first and very simple example is the inverse H\"older inequality (obviously, H\"older's inequalities are a particular case of Lieb's theorem): if $\lambda\ge 1$
and $f,g\colon\mathbb{R}^n\to \mathbb{R}^+$ then
$$ \int_{\mathbb{R}^n} f^\lambda g^{1-\lambda} \ge \left(\int_{\mathbb{R}^n} f\right)^\lambda
\left(\int_{\mathbb{R}^n} g\right)^{1-\lambda},$$
provided $\int g>0$ and with the convention that $0 \cdot \infty=0$.
By rearranging the terms, this is easily deduced from the usual inequality.
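For instance, with $\lambda=2$ the inverse inequality reads $\int f^2/g \ge (\int f)^2/\int g$, which is the Cauchy-Schwarz bound applied to $f=(f/\sqrt g)\cdot\sqrt g$, with equality when $g$ is proportional to $f$. A small numerical check of ours, with arbitrary positive functions on $[0,1]$:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)

def integral(h):
    # trapezoidal rule on [0, 1]
    return float(np.sum((h[:-1] + h[1:]) * np.diff(x)) / 2.0)

f = 1.0 + x ** 2            # arbitrary positive functions
g = np.exp(-x)
lam = 2.0                   # any lambda >= 1 works

lhs = integral(f ** lam * g ** (1.0 - lam))
rhs = integral(f) ** lam * integral(g) ** (1.0 - lam)
```

Replacing $g$ by a constant multiple of $f$ turns the inequality into an equality, as expected from the Cauchy-Schwarz equality case.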
The inverse H\"older inequality can also be rewritten as a sort of duality for the (non-normed) spaces $L^p$ when $p\in (-\infty,0)\cup(0,1)$: for $f,g\colon\mathbb{R}^n\to \mathbb{R}^+$,
$$ \int_{\mathbb{R}^n} fg \ge \|f\|_p \|g\|_{p'},$$
where $p'\in (-\infty,0)\cup(0,1)$ is still defined by $p^{-1}+(p')^{-1}=1$. Since
the latter inequality is sharp, it follows that for $f\ge 0$,
\begin{equation}\label{eq:dual-p<1}
\|f\|_p=\inf_{g\ge 0} \frac{\int fg}{\|g\|_{p'}} .
\end{equation}
Another instance appears in the article of Brascamp and Lieb \cite{B-L-1976}, where a sharp inverse Young inequality is proved: given $p,q,r\in (0,1]$ with $1+1/r=1/p+1/q$, the best constant $C$ such that for all positive functions $f,g$
$$ \|f*g\|_{r}\ge C \|f\|_p \|g\|_q $$
is described, and is achieved by Gaussian functions. Observe that thanks to \eqref{eq:dual-p<1}, the latter can be rewritten as
$$ \int_{\mathbb{R}^{2n}} f(x-y)g(y)h(x) \, dx dy\ge C \|f\|_p \|g\|_q \|h\|_{r'}.$$
Later on, Borell \cite{borell-1982} proved a reverse form of Nelson's hypercontractivity: if $p,q\in (-\infty,1)$ and $e^{2t}\ge \frac{1-q}{1-p}$ then for all positive functions $f$ on $\mathbb{R}^n$:
$$\|P_tf \|_{L^q(\gamma_n)}\ge \|f\|_{L^p(\gamma_n)}.$$
This bound shows that the Ornstein-Uhlenbeck semigroup improves the positivity
of functions (for $p<0$, $\|f\|_p=1/\|1/f\|_{|p|}$ and $q\le p$).
Among the examples of reverse inequalities are the Pr\'ekopa-Leindler inequalities
\cite{prekopa,leindler}: for all $\lambda\in (0,1)$ and all $f,g\colon\mathbb R^n\to \mathbb{R}^+$,
\[ \int^*_{\mathbb{R}^n} \sup_{z=\lambda x+(1-\lambda)y} f(x)^{\lambda}g(y) ^{1-\lambda} \; dz\ge
\left(\int_{\mathbb{R}^n} f\right)^\lambda
\left(\int_{\mathbb{R}^n} g\right)^{1-\lambda},\]
where the left hand side term is an outer integral and the supremum is over all $(x,y)\in (\mathbb R^n)^2$ verifying $z=\lambda x+(1-\lambda)y$. This functional version of the Brunn-Minkowski inequality is actually a particular case of the more general reverse Brascamp-Lieb inequalities proved by the first-named author \cite{barthe-cras,barthe-inventiones}. For brevity, we only state the rank one geometric version of the reverse Brascamp-Lieb inequalities (and refer to Section \ref{sec:dual-inverse} for more details): given unit vectors $u_1,\ldots, u_m$ in $\mathbb{R}^n$ and $c_1,\ldots,c_m\ge 0$ satisfying the decomposition of identity \eqref{eq:decomposition-identity-1d}, for all integrable functions $f_i\colon\mathbb{R} \to \mathbb{R}^+$, it holds
$$ \int_{\mathbb{R}^n}^* \sup_{x=\sum_i c_i \theta_i u_i }
\prod_{i=1}^m f_i(\theta_i)^{c_i} \, dx \ge \prod_{i=1}^m \left( \int_{\mathbb{R}} f_i\right)^{c_i} .$$
This inequality allows one to derive geometric properties which are dual to the ones that can be proved using the Brascamp-Lieb inequality (which was the motivation for K. Ball's conjecture of the reverse inequality). See \cite{barthe-inventiones,barthe-simplex} for the first results in this direction.
Again, centered Gaussian functions achieve (or almost achieve) the optimal constant.
The reader may object that the latter inequality seems rather different from the other ones.
Nevertheless, the supremum, being an $L^\infty$ norm, can be viewed as a limit of $L^p$ norms. Building on this idea, Brascamp and Lieb \cite{B-L-1976} were able to deduce the Pr\'ekopa-Leindler inequality as a limit case of their inverse Young inequality. In the same way, the first-named author proved in \cite{bart-these} an extension of the inverse Young inequality which recovers as a limit the geometric reverse Brascamp-Lieb inequalities. See \cite{chen-dafnis-paouris} for further results in this direction. Actually,
in view of the content of the present paper and of the dual features of their applications, a better name for reverse Brascamp-Lieb inequalities could be dual Brascamp-Lieb inequalities.
\medskip It is natural to ask for a general principle that would unify and explain these reverse inequalities. They definitely share some common features: they provide lower bounds on integrals involving products of positive functions in terms of their $L^p$ norms, often with $p<1$, and Gaussian functions play a special role. A significant progress in this direction was recently made by
Chen, Dafnis and Paouris.
One of their main results is stated in probabilistic terms:
\begin{theorem}[\cite{chen-dafnis-paouris}]\label{theo:CDP}
Let $n = n_1 + n_2 + \ldots + n_m$ be positive integers and $(X_1,\ldots,X_m)$ be a Gaussian random vector in $\mathbb{R}^n$ (where $X_i \in \mathbb{R}^{n_i}$), with covariance matrix $\Sigma$. For each $i$, let $\Sigma_i$ denote the covariance matrix of $X_i$
(which is a diagonal block of $\Sigma$). Let $p_1,\ldots,p_m \in \mathbb{R}\setminus\{0\}$ and consider the block diagonal matrix $P=\mathrm{diag}(p_1\Sigma_1,\ldots,p_m\Sigma_m)$. Then for all positive functions $f_1, \ldots, f_m$,
\begin{align*}
\mathrm{if} \; \Sigma\le P, \;\mathrm{then} \quad &\mathbb E\left(
\prod_{i=1}^m f_i(X_i)\right) \le \prod_{i=1}^m\mathbb E \left( f_i(X_i)^{p_i} \right)^{\frac{1}{p_i}},\\
\mathrm{if} \; \Sigma\ge P, \;\mathrm{then} \quad &\mathbb E\left(
\prod_{i=1}^m f_i(X_i)\right) \ge \prod_{i=1}^m\mathbb E \left( f_i(X_i)^{p_i} \right)^{\frac{1}{p_i}}.
\end{align*}
Here the order on matrices is the one induced by the cone of positive semi-definite matrices.
\end{theorem}
The first part of the theorem is actually a direct consequence of the general Brascamp-Lieb inequality. The second part is a very neat reverse inequality.
Observe that the condition $\Sigma\ge P$ implies, by restriction to diagonal
blocks, that $1\ge p_i$. The functional inequality can be rewritten as a lower bound on a multilinear
operator with a generalized Gaussian kernel (i.e. the exponential of a quadratic form,
without sign condition). Chen, Dafnis and Paouris use
transformations of this inequality by linear changes of variables in order to get more,
and in doing so they succeed in recovering most of the above-mentioned reverse inequalities. Nevertheless, their results do not have full generality and involve conditions which are sometimes difficult to check.
In the note \cite{barthe-wolff-cras},
we have announced a general result on the optimal constant in inequalities
of the form
$$
\int_H e^{-\mathcal Q(x)} \prod_{k=1}^m f_k^{c_k}(B_k x) \, dx\ge C \prod_{k=1}^m \left(\int_{H_k} f_k \right)^{c_k}
$$
for all positive functions $f_i$. The goal of the present paper is to give a full proof of the results of \cite{barthe-wolff-cras}, and to provide an extensive answer to the following questions: when is it possible to calculate the best constant $C$ by considering only Gaussian functions, or only centered Gaussian functions? When is there a non-trivial inequality ($C>0$)?
\subsection{Notation and main results}
Here is a description of the setup of this article.
Let $0 \le m^+ \le m$ be integers and $H$, $H_1, \ldots, H_m$ be Euclidean spaces endowed with the usual Lebesgue measure. For $k=1,\ldots, m$ let $c_k$ be a real number satisfying $c_k > 0$ for $k \le m^+$ and $c_k < 0$ for $k > m^+$,
and let $B_k \colon H \to H_k$ be a surjective linear map.
Further, let $\mathcal Q\colon H\to \mathbb{R}$ be a quadratic form with signature $\big(s^+(\mathcal Q), s^-(\mathcal Q)\big)$.
For measurable functions $f_k \colon H_k \to [0,+\infty]$ satisfying $0 < \int_{H_k} f_k < +\infty$ define
\begin{align}\label{def:J}
J(f_1, \ldots, f_m) = \frac{\int_H e^{-\mathcal Q(x)} \prod_{k=1}^m f_k^{c_k}(B_k x) \, dx}{\prod_{k=1}^m \left(\int_{H_k} f_k \right)^{c_k}}
\end{align}
and assume the convention $0 \cdot \infty = 0$ for the product $\prod_{k=1}^m f_k^{c_k}(B_k x)$.
Our goal is to study the minimization problem for the functional $J$. It turns out that centered Gaussian functions, i.e. the functions of the form $e^{- q(x)}$ for a positive definite quadratic form $q$, play a pivotal role. One of our main results is the following counterpart to Lieb's Theorem~\ref{theo:lieb-multi}:
\begin{theorem}\label{thm:main-result}
Let $c_1, \ldots, c_{m^+} > 0$, $c_{m^++1}, \ldots, c_{m} < 0$ with $0 \le m^+ \le m$. Assume that the map
$x\mapsto (B_1x,\ldots,B_{m^+}x)$ from $H$ to $H_1\times\cdots\times H_{m^+}$ is onto and that
\[
\dim H \ge s^+(\mathcal Q) + \dim H_1 + \cdots + \dim H_{m^+}.
\]
Then $\inf J = \inf_{\mathcal{CG}} J$, where the right-hand side stands for the infimum of $J$ over all choices of centered Gaussian functions $g_k$.
\end{theorem}
Hence a Gaussian minimizers principle holds under some hypotheses. It may fail when they are not fulfilled, but this happens only in degenerate situations.
The purpose of Section \ref{sec:well-posedness} is to give a full picture of these degenerate cases. This is done via a careful inspection of the values of the functional $J$ on centered
as well as general Gaussian functions (i.e. not necessarily centered) of the form $e^{-q+\ell}$,
where $q$ is a positive definite quadratic form and $\ell$ is a linear form. This allows us to put forward a natural and convenient non-degeneracy condition:
\begin{equation}
\label{eq:non-degeneracy2}
\mathcal Q_{|\bigcap_{i=1}^{m^+}\ker B_i} \mbox{ is positive definite and }\dim H \ge s^+(\mathcal Q) + \dim H_1 + \cdots + \dim H_{m^+}.
\end{equation}
Section \ref{sec:proof-main} gives a proof of the Gaussian minimizers principle under the above
condition \eqref{eq:non-degeneracy2}. The main tool is monotone transportation as in the proof of Brascamp-Lieb inequalities of \cite{barthe-inventiones}, see also \cite{barthe-young,valdimarsson}. The presence of negative exponents introduces substantial additional difficulties.
A crucial technical step is to use a particular decomposition of the quadratic form $\mathcal{Q}$
which is adapted to the geometric structure of the problem, and is inspired by \eqref{eq:non-degeneracy2}.
One could have tried to follow other techniques which apply to Brascamp-Lieb inequalities, such as semigroup interpolation or stochastic representations \cite{carlen-lieb-loss,BCCT-structure,borell,barthe-cordero,bennett-bez,lehec,chen-dafnis-paouris,neeman,ledoux}. Nevertheless the transportation technique has the advantage that it does not require any a priori structural study of extremizers.
In Section \ref{sec:geometric}, we establish the analogue in our setting of the geometric Brascamp-Lieb inequality,
and show that it is equivalent to the correlation inequality of Chen, Dafnis and Paouris presented here as Theorem~\ref{theo:CDP}. Our structural study allows a better analysis of equality conditions. Note however that their semigroup proof of the geometric inequality is simpler than ours (somehow for the transportation approach the geometric situation is not easier than the general case).
Section \ref{sec:dual-inverse} presents a dual form of the inverse Brascamp-Lieb
inequalities, which can be obtained from the very same proof. A brief summary of the various types of inequalities is provided.
The next sections are devoted to the question of existence of a non-trivial inverse Brascamp-Lieb
inequality. In other words, when is it true that $\inf J>0$? For Brascamp-Lieb inequalities, the
analogous question (of boundedness of multilinear Gaussian operators) was settled in the general case by Bennett, Carbery, Christ and Tao \cite{BCCT-structure,BCCT-finiteness} after other contributions in the rank one case \cite{barthe-inventiones,carlen-lieb-loss}.
Section \ref{sec:interpolation} establishes, for fixed geometric data $(\mathcal{Q},(B_k)_{k=1}^m)$,
a convexity property of the set of exponents $(c_k)_{k=1}^m$ for which $\inf J>0$. We call this set the positivity domain of $J$.
Section \ref{sec:finiteness-1} gives a description of the positivity domain in the rank one case, i.e. when the maps $B_k$ are linear forms and when $s^+(\mathcal Q), \, s^-(\mathcal Q)\le 1$.
In this case, the proof is simple and based on explicit calculations on Gaussian functions.
The positivity domain is a polyhedral convex cone which we can describe as an intersection of half-spaces or by generating vectors.
Section \ref{sec:finiteness-n} deals with the general case. We follow the inductive approach of Bennett, Carbery, Christ and Tao \cite{BCCT-finiteness}. In our setting, the fact that
the quadratic form can have positive and negative parts (and thus corresponds to fixing two Gaussian
functions instead of one) makes the analysis more delicate.
For simplicity, we state here our characterization in the case when no kernel is involved (the general result is formulated as Theorem~\ref{th:characterization-in-general-case}).
\begin{theorem}\label{th:positivity-general-rank-no-kernel}
Let $c_1, \ldots, c_{m^+} > 0$, $c_{m^++1}, \ldots, c_{m} < 0$ with $0 \le m^+ \le m$. Assume that the map
$x\mapsto (B_1x,\ldots,B_{m^+}x)$ from $H$ to $H_1\times\cdots\times H_{m^+}$ is a bijection. For any integrable functions $f_k \colon H_k\to [0,+\infty]$ with $\int f_k>0$, let
\[
J(f_1, \ldots, f_m) = \frac{\int_H \prod_{k=1}^m f_k^{c_k}(B_k x) \, dx}{\prod_{k=1}^m \left(\int_{H_k} f_k \right)^{c_k}}\cdot
\]
Then $\inf J>0$ if and only if the following two conditions are verified:
\begin{enumerate}
\item $\dim H=\sum_{k=1}^m c_k \dim H_k$,
\item For every linear subspace $V\subset H$ such that $\dim V=\sum_{i=1}^{m^+} \dim B_iV$, it holds
\[ \dim V\ge \sum_{k=1}^m c_k \dim B_kV .\]
\end{enumerate}
If $x\mapsto (B_1x,\ldots,B_{m^+}x)$ is not surjective then $\min J=0$. If it is surjective but not injective then $\inf J=+\infty$.
\end{theorem}
Let us emphasize that in the positivity domain, $c_i\ge 1$ for $i=1,2,\ldots,m^+$ (see Proposition~\ref{prop:c-i-ge-1}), which means that our results can be stated in terms of $L^{p_k}$-spaces with $p_k=1/c_k\le 1$ and possibly negative.
\medskip
Let us conclude this introduction with some more notation and comments on the setting.
In the rest of the paper, we use the notation $\inf_{\mathcal{G}} J$ for the infimum
of $J$ on $m$-tuples of Gaussian functions (not necessarily centered).
We could only require the sets $H_k$ to be finite dimensional vector spaces equipped with a Lebesgue measure. Euclidean structures are not relevant for our problems, but working in Euclidean spaces is convenient for explicit calculations for Gaussian functions, as quadratic functions are represented by symmetric linear maps. Also, Euclidean structures induce a canonical choice of Lebesgue measure. In the context of a Euclidean space, we will use $\scalar{\cdot,\cdot}$ for the inner product, $|\cdot|$ for the Euclidean norm and for a linear map $L$ between Euclidean spaces, $L^*$ will stand for its adjoint. We denote by $Q$ a self-adjoint map on $H$ such that for all $x \in H$,
\[
\mathcal{Q}(x) = \pi \scalar{x, Qx}.
\]
Finally, let us mention that we allow $H_k=\{0\}$ and choose by convention
the Lebesgue measure to be the Dirac mass at $0$. For such a $k$, the terms involving
$f_k$ can be canceled out from \eqref{def:J}.
\subsection{Acknowledgments} This work has benefited from discussions on related topics with several colleagues. We would like to thank in particular J\'er\^ome Bertrand, Max Fathi, Piotr Mi{\l}o{\'s}, Krzysztof Oleszkiewicz, Grigoris Paouris.
\section{Well-posedness of the minimization problem and the minimum value}
\label{sec:well-posedness}
We denote by $B_+$ the linear map $(B_1,\ldots,B_{m^+})$, that is
\begin{equation}\label{def:B+}
\begin{array}{rcc}
B_+: \; H & \to & H_1\times\cdots\times H_{m^+} \\
x & \mapsto & (B_1x,\ldots,B_{m^+} x).
\end{array}
\end{equation}
\subsection{A non-degeneracy condition}
We put forward a simple condition on the above map $B_+$; when it fails, the functional $J$ can vanish.
\begin{lemma}\label{lem:nonsurjective1}
\begin{enumerate}
\item[(i)] If the map $B_+$ from $H$ to $H_1\times\cdots\times H_{m^+}$
is not onto, then $\inf J=\min J=0$.
\item[(ii)] Conversely, if the map $B_+$ is onto, then for all $f_k \colon H_k \to [0, +\infty]$ with $0 < \int_{H_k} f_k < +\infty$ one has $J((f_k)) \in (0, +\infty]$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Since $B_+$ is not surjective, its range is a proper linear subspace of $H_1\times\cdots\times H_{m^+}$. Hence we may pick a point $(a_1,\ldots,a_{m^+})\in H_1\times\cdots\times H_{m^+}$ whose Euclidean
distance to this range is at least $\sqrt{m^+}$.
If we denote by $B_H(x,r)$ the open ball of center $x$ and radius $r$ in $H$, then
$$ \big(B_{H_1}(a_1,1)\times \cdots \times B_{H_{m^+}}(a_{m^+},1) \big) \cap \big\{ (B_1x, \ldots, B_{m^+}x); \; x\in H\big\}=\emptyset.$$
For $1\le i\le m^+$ consider the function $f_i\colon H_i\to \mathbb{R}^+$ defined as the characteristic function
of $B_{H_i}(a_i,1)$. Then the latter empty intersection ensures that for all $x\in H$,
$$\prod_{i=1}^{m^+} f_i^{c_i}(B_ix)=0.$$
Therefore, for any choice of functions $(f_j)_{j>m^+}$, it holds $J(f_1,\ldots,f_m)=0$.
(ii) Since $\prod_{i\le m^+} \int_{H_i} f_i > 0$, the measure of points $(z_1,\ldots,z_{m^+})\in H_1\times\cdots\times H_{m^+}$ for which $\prod_{i\le m^+} f_i^{c_i} (z_i)>0$ is positive. From the hypothesis that $B_+$ is onto it follows that the measure of
$$\{x\in H; \prod_{1\le i\le m^+} f_i^{c_i} (B_ix)>0\}$$
is positive. To conclude, it is enough to notice that for $m^+ < j \le m$, integrability of $f_j$ implies that $f_j < +\infty$ a.e. in $H_j$, hence (by surjectivity of $B_j$) $f_j(B_jx) < +\infty$ for a.e. $x \in H$; since $c_j < 0$, it follows that $\prod_{j = 1+m^+}^m f_j^{c_j}(B_j x) > 0$ a.e. in $H$.
\end{proof}
As a consequence of the previous lemma, we will often work under the non-degeneracy assumption that
$B_+$ is surjective.
\subsection{Calculations for centered Gaussian functions}\label{subsec:centered-gaussians}
Recall the classical formula $\int_{\mathbb R} e^{-\pi t^2} dt=1$. From this, it follows that
for any self-adjoint operator $A$ on $\mathbb R^d$ (or a $d$-dimensional Euclidean space),
$$ \int_{\mathbb{R}^d} e^{-\pi\scalar{x,Ax}} dx=\left\{
\begin{array}{ll} \det(A)^{-1/2} & \mbox{if $A$ is positive definite},\\
+\infty & \mbox{otherwise}.
\end{array}
\right. $$
When $A$ is positive definite, we denote $g_A$ the centered Gaussian function defined by
\[
g_A(x) = e^{-\pi \scalar{x, Ax}}.
\]
An elementary computation shows that
\begin{align}\label{eq:J-on-Gaussian-input}
J(g_{A_1}, \ldots, g_{A_m}) = \begin{cases}
\left(\frac{\det(Q + \sum_{k=1}^m c_k B_k^\ast A_k B_k)}{\prod_{k = 1}^m (\det A_k)^{c_k}}\right)^{-1/2} & \text{if $(A_1, \ldots, A_m) \in \Lambda$,} \\
\infty & \text{otherwise,}
\end{cases}
\end{align}
where the set $\Lambda$ is defined as follows
\[
\Lambda = \Big\{ (A_1, \ldots, A_m) \colon A_k \colon H_k \to H_k \text{ and } Q + \sum_{k=1}^m c_k B_k^\ast A_k B_k \colon H \to H \text{ are positive definite} \Big\}.
\]
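Indeed, $\int_{H_k} g_{A_k} = \det(A_k)^{-1/2}$, while the numerator in the definition of $J(g_{A_1}, \ldots, g_{A_m})$ equals
\[
\int_H e^{-\pi\scalar{x,\,(Q+\sum_{k=1}^m c_k B_k^\ast A_k B_k)x}}\,dx,
\]
so both cases of the formula follow from the Gaussian integral recalled above.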
Therefore the infimum of $J$ over centered Gaussian functions equals $D^{-1/2}$, where
\begin{align}\label{def:BL-constant}
D = \sup\left\{\frac{\det(Q + \sum_{k=1}^m c_k B_k^\ast A_k B_k)}{\prod_{k=1}^m (\det A_k)^{c_k}} \colon (A_1, \ldots, A_m) \in \Lambda \right\},
\end{align}
with the convention that $D = 0$ for $\Lambda = \emptyset$ (and thus $D^{-1/2} = \infty$).
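As a simple illustration, consider the one-dimensional case $H=H_1=\mathbb{R}$, $m=m^+=1$, $B_1=\mathrm{id}$, $c_1=c>1$ and $\mathcal{Q}(x)=-\pi \kappa x^2$ with $\kappa>0$ (so that $Q=-\kappa$). Then \eqref{eq:J-on-Gaussian-input} gives $J(g_a)=\big((ca-\kappa)/a^c\big)^{-1/2}$ for $ca>\kappa$, and the function $a\mapsto (ca-\kappa)a^{-c}$ attains its maximum on $(\kappa/c,+\infty)$ at $a=\kappa/(c-1)$, so that
\[
D=\Big(\frac{c-1}{\kappa}\Big)^{c-1}
\qquad\text{and}\qquad
\inf_{\mathcal{CG}} J = D^{-1/2}=\Big(\frac{\kappa}{c-1}\Big)^{\frac{c-1}{2}}.
\]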
\subsection{Ensuring finiteness for some functions}
We investigate the existence of non-zero functions for which $J$
takes a finite value.
The right setup for the functional $J$ to be non-degenerate on centered Gaussian functions is the following condition:
\begin{equation}\label{eq:injectivity-condition}
\mathcal Q \textup{ is positive definite on } \ker B_+.
\end{equation}
\begin{prop}\label{prop:lamba-emptyset}
The following assertions are equivalent:
\begin{enumerate}
\item[(i)] there exist centered Gaussian functions $g_1, \ldots, g_m$ with $J(g_1, \ldots, g_m) <+\infty$,
\item[(ii)] $\Lambda \neq \emptyset$,
\item[(iii)] $\mathcal Q_{|\ker B_+}$ is positive definite.
\end{enumerate}
\end{prop}
\begin{proof}
The equivalence of $(i)$ and $(ii)$ is a direct consequence of Formula \eqref{eq:J-on-Gaussian-input}.
Assertion $(ii)$ is equivalent to the existence of positive maps $(A_k)_{k=1}^m$ such that
$Q+\sum_{k=1}^m c_k B_k^*A_k B_k$ is positive definite.
This can be rewritten as
\[Q+\sum_{i=1}^{m^+} c_i B_i^*A_i B_i>\sum_{j=1+m^+}^m |c_j| B_j^*A_j B_j. \]
Since one may choose the matrices $A_j>0$ arbitrarily small, $(ii)$ is equivalent
to the existence of positive maps $(A_i)_{i=1}^{m^+}$ such that
\[Q+\sum_{i=1}^{m^+} c_i B_i^*A_i B_i>0. \]
Similarly, one may choose each matrix $A_i$ as an arbitrarily large multiple of the identity on $H_i$. Hence $(ii)$ is equivalent to the existence of $D>0$ such
that
$Q+\sum_{i=1}^{m^+} D B_i^* B_i>0$,
or in terms of quadratic forms
\[x\mapsto \langle x,Qx\rangle + D \sum_{i=1}^{m^+} | B_ix|^2=
\langle x,Qx\rangle + D |B_+x|^2\]
is positive definite. We may conclude thanks to
Lemma \ref{lem:defpos-subspace} below for $L = B_+$.
\end{proof}
\begin{lemma}\label{lem:defpos-subspace}
Let $\mathcal R$ be a quadratic form on $\mathbb R^d$. Let $L \colon \mathbb R^d\to \mathbb R^k$ be a linear map between Euclidean spaces. Then the following assertions
are equivalent:
\begin{enumerate}
\item[(i)] There exists $D>0$ such that the quadratic form $x\mapsto \mathcal R(x)+D |Lx|^2$ is positive definite,
\item[(ii)] $\mathcal R_{|\mathrm{ker}L}$ is positive definite.
\end{enumerate}
\end{lemma}
\begin{proof}
If $(i)$ holds, then for some $D>0$ the form $x\mapsto \mathcal R(x)+D |Lx|^2$ is positive definite. Its restriction to $\ker L$ coincides with
$\mathcal R_{|\mathrm{ker}L}$, which is therefore positive definite.
Next, let us show that $(ii)$ implies $(i)$, by contradiction. If $(i)$ is not true then for every
integer $N$ there exists a unit vector $x_N\in S^{d-1}\subset \mathbb R^d$ such that
$$ \mathcal R(x_N)+N |Lx_N|^2\le 0.$$
By compactness of the unit sphere, one can find a converging subsequence $(x_{N_k})$. Let
$x\in S^{d-1}$ denote its limit.
Since $\mathcal R(x_{N_k})\le -N_k |Lx_{N_k}|^2\le 0$, passing to the limit gives $\mathcal R(x)\le 0$.
Moreover
$$ |Lx_{N_k}|^2\le -\frac{\mathcal R(x_{N_k})}{N_k}\le -\frac{\min_{S^{d-1}}\mathcal R}{N_k},$$
so by continuity, letting $k$ go to infinity $|Lx|=0$. Hence $x\in \mathrm{ker} L\setminus\{0\}$
verifies $\mathcal R(x)\le 0$, meaning that the restriction of $\mathcal R$ to $\mathrm{ker}L$
is not positive definite.
\end{proof}
Combined with the above proposition, the forthcoming one shows that if the map $B_+$ defined in \eqref{def:B+}
is surjective, then the following holds: there exist functions for which $J$ is finite
if and only if there exist centered Gaussian functions for which $J$ is finite.
\begin{prop}\label{prop:finitevalues2}
Assume that the map $B_+$ is surjective.
If $\mathcal Q_{|\ker B_+}$ is not positive definite, then for all functions $f_k \colon H_k\to \mathbb{R}^+$
with $\int f_k \in (0,+\infty)$, the quantity $J((f_k)_{1\le k\le m})$ is $+\infty$.
\end{prop}
\begin{proof}
Without loss of generality, we may assume that $\int f_k=1$ for every $k$.
By our hypothesis, there exists a unit vector $v\in H$ such that
$\langle v,Qv\rangle\le 0$ and for all $1\le i\le m^+$, $B_iv =0$. Let $S\subset H$ be any linear complement of $\mathbb{R} v$. Then there is a positive constant $c_S$ such that,
decomposing each element of $H$ as $x=y+tv$ with $y\in S$ and $t\in\mathbb{R}$
\begin{eqnarray*}
J((f_k))&=& c_S \int e^{-\pi\big(\langle y,Qy\rangle+ 2 t \langle Q y,v\rangle+t^2 \langle v,Qv\rangle\big)} \prod_{1\le i\le m^+} f_i^{c_i} (B_iy)
\prod_{m^+ < j \le m} f_j^{c_j} (B_jy+tB_jv) \, dt dy \\
&\ge & c_S \int_S e^{-\pi\langle y,Qy\rangle} \prod_{1\le i\le m^+} f_i^{c_i} (B_iy)
\left( \int_{\mathbb{R}} e^{-2\pi t \langle Q y,v\rangle} \prod_{m^+ < j \le m} f_j^{c_j} (B_jy+tB_jv) \, dt\right) dy
\end{eqnarray*}
Let us prove that $y$-almost everywhere in $S$, the inner integral equals $+\infty$. To do this, we prove that $y$-a.e. in $S$, the non-negative function $t\mapsto \prod_{m^+ < j \le m} f_j^{c_j} (B_jy+tB_jv)$ is bounded from below by a positive constant, except maybe on a set of finite Lebesgue measure. Here are the details:
If $m^+ < j \le m$ is such that $B_jv=0$ then for all $t$, $f_j^{c_j} (B_jy+tB_jv)= f_j^{c_j} (B_jy)$. Since $B_j \colon H \to H_j$ is surjective
and $B_jv=0$ it follows that the restriction of $B_j$ to $S$ is also surjective.
Since $\int_{H_j} f_j<+\infty$, we know that $f_j<+\infty$ a.e. in $H_j$.
As the preimage of a Lebesgue negligible set by a linear surjection is also Lebesgue negligible,
we deduce that $y$-a.e in $S$, $f_j(B_jy)<+\infty$. Using that $c_j$ is negative, we get that $y$-a.e. in $S$, $f_j^{c_j}(B_jy)>0$.
If $m^+ < j \le m$ is such that $B_jv \neq 0$ we proceed differently. First, for each $y$, one can decompose $B_jy$ using orthogonal projections as follows
$$B_j y=P_{(\mathbb{R} B_j v) ^\bot} B_j y + P_{\mathbb{R} B_j v} B_jy=L_j y+ t_j(y) B_j v$$
where $L_j= P_{(\mathbb{R} B_jv) ^\bot} B_j$.
By translation invariance of Lebesgue's measure
\begin{eqnarray*}
\mathrm{vol}_1\left( \{t\in \mathbb{R}; f_j^{c_j} (B_jy+tB_jv) \le 1 \}\right)&=&
\mathrm{vol}_1\left( \{t\in \mathbb{R}; f_j (L_jy+(t+t_j(y))B_jv) \ge 1 \}\right)\\
&=& \mathrm{vol}_1\left( \{t\in \mathbb{R}; f_j (L_jy+ tB_jv)\ge 1 \}\right).
\end{eqnarray*}
Next
$$ 1=\int_{H_j} f_j= |B_jv| \int_{(\mathbb{R} B_jv)^\bot} \int_\mathbb{R} f_j(z+tB_jv) dt dz,$$
hence $z$-a.e. in $(\mathbb{R} B_jv)^\bot$, the inner integral is finite and therefore
$$\mathrm{vol}_1\left( \{t\in \mathbb{R}; f_j (z+ tB_jv)\ge 1 \}\right)<+\infty.$$
Since by construction the above map $L_j \colon S \to (\mathbb{R} B_jv)^\bot$ is linear and onto,
it follows that $y$-a.e. in $S$, $\{t\in \mathbb{R}; f_j^{c_j} (B_jy+tB_jv) \le 1 \}$
has finite Lebesgue measure.
Putting everything together, we obtain as claimed that $y$-a.e. in $S$,
$t\mapsto \prod_{j> m^+} f_j^{c_j} (B_jy+tB_jv)$ is bounded from below by $1$, except for a set of finite Lebesgue measure. Lemma \ref{lem:exp} below then yields that
$y$-a.e. in $S$ the inner integral in the latter expression for $J((f_k))$ is infinite.
Consequently
$$J((f_k))\ge c_S \int_S e^{-\pi\langle y,Qy\rangle} \prod_{1\le i\le m^+} f_i^{c_i} (B_iy) \times
(+\infty)\, dy.$$
So $J((f_k))=+\infty$ provided the set of elements $y\in S$ for which
$\prod_{i\le m^+} f_i^{c_i} (B_iy)>0$ has positive measure, for at least one choice of $S$. To find such an $S$, we use the hypothesis that $B_+$ is surjective and Lemma~\ref{lem:nonsurjective1}(ii) to obtain that $J((f_k)) > 0$, which readily implies that the set of $x \in H$ for which $\prod_{i\le m^+} f_i^{c_i} (B_ix) > 0$ has positive measure. Integrating over the Grassmannian of hyperplanes in $H$, we deduce that there exists a non-negligible set $\mathcal{S}$ of hyperplanes such that for each $S \in \mathcal{S}$, the set of $y \in S$ for which $\prod_{i\le m^+} f_i^{c_i} (B_iy)>0$ has positive measure. Since the set of hyperplanes of $H$ containing $v$ is negligible, $\mathcal{S}$ must contain a hyperplane not containing $v$, and such a hyperplane is an admissible choice of $S$.
\end{proof}
\begin{lemma}\label{lem:exp}
Let $A\subset \mathbb{R}$ be a Borel set.
If $A^c$ has finite Lebesgue measure then $\int_A e^t dt=+\infty$.
\end{lemma}
\begin{proof}
Assume on the contrary that $\int_A e^t dt=C<+\infty$. Then for every $N\in \mathbb N$,
$e^N \mathrm{vol}_1(A\cap [N,N+1)) \le C$. Hence $\mathrm{vol}_1(A^c\cap [N,N+1))\ge 1-C e^{-N}$. Summing over $N\in \mathbb N$ gives that $A^c$ has infinite measure.
\end{proof}
\subsection{On the effect of translating Gaussian functions and consequences of positivity}\label{subsec:translated-gaussians}
In order to explain the relevance of the hypothesis
\begin{equation}\label{eq:surjectivity-condition}
\dim H \ge s^+(\mathcal Q) + \dim H_1 + \cdots + \dim H_{m^+}
\end{equation}
which appears in Theorem~\ref{thm:main-result}, we study the value of the functional $J$ on non-centered Gaussian functions.
In order to handle the Gaussian kernel $\exp(-\mathcal{Q})$ as two additional (fixed) Gaussian functions (one function corresponding to a positive exponent and the other corresponding to a negative exponent), we will decompose the quadratic form $\mathcal{Q}$ into a positive and negative part. To this end, note the following simple fact:
\begin{lemma}\label{lem:kerS-kerT-H}
Let $S \colon H \to X$ and $T \colon H \to Y$ be linear maps. The map $(S, T) \colon H \to X \times Y$ is surjective if and only if $S$ and $T$ are surjective and
\begin{equation}\label{eq:kerS-kerT-H}
\ker S + \ker T = H.
\end{equation}
It is a linear isomorphism if and only if $S$ and $T$ are surjective and $\ker S \oplus \ker T=H$.
\end{lemma}
\begin{proof}
Assume that the map $(S, T)$ is surjective. Then $S$ and $T$ are surjective too. For~\eqref{eq:kerS-kerT-H}, consider any $x \in H$; we aim to decompose it into a sum of elements of $\ker S$ and $\ker T$. By surjectivity of the map $(S, T)$, there exists $y \in H$ such that $(Sy, Ty) = (Sx, 0)$.
Therefore $x - y \in \ker S$ and $y \in \ker T$, hence
\[
x = (x - y) + y \in \ker S + \ker T.
\]
For the other implication, take any $x, y \in H$. From the hypothesis~\eqref{eq:kerS-kerT-H} it follows that there exist $v \in \ker S$ and $w \in \ker T$ such that $x - y = v + w$, i.e.
\[
x - v = y + w.
\]
Denote $z = x - v$. We clearly have
\[
S z = S (x - v) = S x \quad \textup{and} \quad T z = T (y + w) = T y,
\]
i.e. $(S, T)z = (Sx, Ty)$. This means that $(S, T)$ is surjective, since $(Sx, Ty)$ is arbitrary in $S H \times T H = X \times Y$,
by surjectivity of $S$ and $T$.
The second part of the lemma follows from $\ker (S,T)=\ker S\cap \ker T$.
\end{proof}
In what follows we consider any decomposition of $\mathcal{Q}$ of the form
\begin{equation}\label{eq:any-decomposition-of-Q}
\mathcal{Q}(x) = c_0 \mathcal{Q}_+(B_0 x) + c_{m+1} \mathcal{Q}_-(B_{m+1} x)
\end{equation}
where $c_0 > 0 > c_{m+1}$ are real numbers, $B_0 \colon H \to H_0$ and $B_{m+1} \colon H \to H_{m+1}$ are surjective linear maps onto Euclidean spaces $H_0$ and $H_{m+1}$ (respectively), such that the map $(B_0, B_{m+1})$ is surjective, or equivalently, by Lemma~\ref{lem:kerS-kerT-H},
\begin{equation}\label{eq:ker-B0-Bm1-H}
\ker B_0 + \ker B_{m+1} = H
\end{equation}
and $\mathcal{Q}_+$, $\mathcal{Q}_-$ are positive definite quadratic forms on $H_0$ and $H_{m+1}$ (respectively).
The existence of such a decomposition is clear by considering an eigenvalue decomposition of the self-adjoint map $Q$. Indeed, $B_0$ can be taken as the orthogonal projection of $H$ onto the subspace $H_0$ spanned by eigenvectors corresponding to positive eigenvalues of $Q$, and similarly for $B_{m+1}$; one can then take $c_0 = 1$ and $c_{m+1} = -1$. Condition~\eqref{eq:ker-B0-Bm1-H} follows from the orthogonality of $H_0$ and $H_{m+1}$ in $H$. Moreover, we clearly have
\begin{equation}\label{eq:signature-dim-H0-Hm1}
s^+(\mathcal{Q}) = \dim H_0, \quad s^-(\mathcal{Q}) = \dim H_{m+1}.
\end{equation}
Conversely, any decomposition of $\mathcal{Q}$ as in~\eqref{eq:any-decomposition-of-Q} satisfies~\eqref{eq:signature-dim-H0-Hm1}. Indeed, by~\eqref{eq:ker-B0-Bm1-H}, one can find a complement subspace $V$ of $\ker B_0$ in $H$ which satisfies $V \subseteq \ker B_{m+1}$ and hence $\mathcal{Q}$ is positive definite on $V$. This yields $s^+(\mathcal{Q}) \ge \dim V = \dim H - \dim \ker B_0 = \dim H_0$. On the other hand, $\mathcal{Q}$ is negative semi-definite on $\ker B_0$, hence $s^+(\mathcal{Q}) \le \dim H - \dim \ker B_0 = \dim H_0$. The same argument shows the second assertion of~\eqref{eq:signature-dim-H0-Hm1}.
\medskip
The starting point of the forthcoming calculations is that for any self-adjoint map $A$ on $\mathbb R^d $ and any vector $b\in \mathbb R^d$,
\begin{equation}\label{eq:integral-of-shifted-gaussian}
\int_{\mathbb{R}^d} e^{-\pi\scalar{x,Ax}+2\pi \scalar{b,x}} dx=\left\{
\begin{array}{ll} e^{\pi\scalar{A^{-1}b,b}}\det(A)^{-1/2} & \mbox{if $A$ is positive definite},\\
+\infty & \mbox{otherwise}.
\end{array}
\right.
\end{equation}
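This identity follows from the centered case by completing the square: when $A$ is positive definite,
\[
-\pi\scalar{x,Ax}+2\pi \scalar{b,x}
= -\pi\scalar{x-A^{-1}b,\,A(x-A^{-1}b)}+\pi\scalar{A^{-1}b,b},
\]
and the translation $x\mapsto x+A^{-1}b$ leaves the Lebesgue measure invariant.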
For $k=1,\ldots, m$, let $A_k$ be a positive definite map on $H_k$. Moreover, let $A_0$ be a positive definite map on $H_0$ such that $\mathcal{Q}_+(x) = \pi \scalar{x, A_0 x}$, and similarly define $A_{m+1}$ for $\mathcal{Q}_-$. With this notation~\eqref{eq:any-decomposition-of-Q} becomes
\begin{equation}\label{eq:any-decomposition-of-Q-as-sefl-adj-map}
Q = \sum_{k\in \{0,m+1\}} c_k B_k^\ast A_k B_k.
\end{equation}
For $k=0, \ldots, m+1$ fix any $b_k\in H_k$. Since the map $(B_0, B_{m+1})$ is surjective, we can find a vector $b \in H$ such that $B_0 b = b_0$ and $B_{m+1} b = b_{m+1}$.
We calculate the value of $J$ on the translated Gaussian functions $g_{A_k}(\cdot+b_k)$. By translation invariance of Lebesgue's measure, $\int g_{A_k}(\cdot+b_k) = \det(A_k)^{-1/2}$. In order to introduce a translation also in the Gaussian kernel, we perform a change of variable $y=x+b$ in the integral
\begin{eqnarray*}
J\big((g_{A_k}(\cdot+b_k))\big) \prod_{k=1}^m \det(A_k)^{-c_k/2}
&=& \int_H e^{-\pi\scalar{y,Qy}} \prod_{k=1}^m g_{A_k}^{c_k}(B_k y + b_k) \, dy \\
&=& \int_H e^{-\pi\scalar{x+b,Q(x+b)}} \prod_{k=1}^m g_{A_k}^{c_k}(B_k x+B_k b+b_k) \, dx \\
&=& \int_H e^{-\pi\sum_{k \in \{0, m+1\}} \scalar{B_k x+b_k,A_k(B_k x+b_k)}} \prod_{k=1}^m g_{A_k}^{c_k}(B_k x+B_k b+b_k) \, dx.
\end{eqnarray*}
Here it is convenient to set $u_k=B_k b+b_k$ for $k=1,2,\ldots,m$ and $u_k = b_k$ for $k \in \{0,m+1\}$ (for the sake of consistency of notation). Developing all the quadratic terms shows that the
latter integral is equal to
\begin{eqnarray*}
&&\int_H e^{-\pi\big(\sum_{k=0}^{m+1} c_k \scalar{A_k(B_k x+u_k),B_k x+u_k} \big)} \, dx \\
&=& \int_H e^{-\pi\big(\scalar{x,Ax}+2\scalar{x,v}+\sum_{k=0}^{m+1} c_k\scalar{A_k u_k,u_k} \big)} \, dx,
\end{eqnarray*}
where we have set $A=\sum_{k=0}^{m+1} c_k B_k^*A_k B_k$ and $v = \sum_{k=0}^{m+1} c_k B_k^*A_k u_k$.
From the above calculations and~\eqref{eq:integral-of-shifted-gaussian} it follows that
\begin{equation}\label{eq:J-finite-on-shifted-gaussians}
J\big((g_{A_k}(\cdot+b_k))\big) < +\infty \quad \iff A \textup{ is positive definite (i.e. $(A_k)_{k=1}^m\in \Lambda$)}
\end{equation}
and in case $A$ is positive definite,
\begin{equation} \label{eq:J-translated-gaussians}
J\big((g_{A_k}(\cdot-B_kb+u_k))\big) = \left( \frac{\det(A)}{\prod_{k=1}^m \det(A_k)^{c_k}}\right)^{-\frac12}
e^{\pi\big(\scalar{A^{-1}v,v} - \sum_{k=0}^{m+1} c_k\scalar{A_k u_k,u_k}\big)}.
\end{equation}
In terms of the
translation parameters $u_k$ (for $k=0,\ldots,m+1$), the term inside the exponential is a quadratic form. Hence
its infimum is 0 if the quadratic form is positive semi-definite and $-\infty$ else.
In the latter case, the infimum of $J$ over Gaussian functions is zero, as witnessed by suitable non-centered Gaussian
functions, while in the former case $J$ takes smaller values on the centered Gaussians $(g_{A_k})$
than on their translates. In short,
$$ \inf_{\mathcal G} J\in \big\{ 0, \inf_{\mathcal{CG}}J \big\}.$$
\begin{prop}\label{prop:nonzeroinf}
Suppose that $\mathcal Q$ is positive definite on $\ker B_+$ (which guarantees that $J$ is finite
for some centered Gaussian functions) and that $\inf_{\mathcal{G}} J>0$. Assuming the notation that is involved in~\eqref{eq:any-decomposition-of-Q} and~\eqref{eq:any-decomposition-of-Q-as-sefl-adj-map}, the following assertions hold true:
\begin{enumerate}
\item If $(A_k)_{k=1}^m\in \Lambda$ then for all $v_k\in H_k$, $k=0,\ldots, m+1$, setting $A=\sum_{k=0}^{m+1} c_k B_k^*A_k B_k$ and $v=\sum_{k=0}^{m+1} c_k B_k^* v_k$, it holds
$$\scalar{A^{-1}v,v}\ge \sum_{k=0}^{m+1} c_k\scalar{A_k^{-1}v_k,v_k}.$$
\item The map $x \mapsto (B_0 x,B_1 x,\ldots,B_{m^+}x)$ from $H$ to $H_0\times \cdots\times H_{m^+}$ is onto.
\item $\dim H \ge s^+(\mathcal Q)+\sum_{i=1}^{m^+} \dim H_i$.
\end{enumerate}
\end{prop}
\begin{proof}
Since $\inf_{\mathcal{G}} J>0$, reasoning as above on the argument of the exponential term in \eqref{eq:J-translated-gaussians} shows that if $v=\sum_{k=0}^{m+1} c_k B_k^*A_k u_k$ then
$$\scalar{A^{-1}v,v} \ge \sum_{k=0}^{m+1} c_k\scalar{A_k u_k,u_k}.$$
Applying this to $u_k=A_k^{-1}v_k$ ($k=0,1,\ldots,m+1$) concludes the proof of the first item.
Let us address the second part of the claim.
By duality, our goal is to show that the map $(v_0,\ldots,v_{m^+})\mapsto \sum_{i=0}^{m^+}
c_i B_i^* v_i$ is injective. So we assume that $\sum_{i=0}^{m^+} c_i
B_i^* v_i=0$, and we want to prove that $v_0=\cdots=v_{m^+}=0$ (recall that $c_i\neq 0$).
If we set $v_j=0$ for $m^+ < j \le m+1$, it holds that $0=\sum_{k=0}^{m+1} c_k B_k^*v_k$.
Thanks to Proposition \ref{prop:lamba-emptyset}, we may find $(A_k)_{k=1}^m\in \Lambda$ and apply the first item of the present Proposition \ref{prop:nonzeroinf}; it gives
that
$$0=\scalar{A^{-1}0,0}\ge \sum_{k=0}^{m+1} c_k\scalar{A_k^{-1}v_k,v_k}= \sum_{i=0}^{m^+} c_i\scalar{A_i^{-1}v_i,v_i}.$$
Since $c_i>0$ for $0 \le i \le m^+$, we deduce that $\scalar{A_i^{-1}v_i,v_i}=0$, thus $v_i=0$.
Finally, the third point of the claim is a direct consequence of the second one (surjectivity implies that the dimension of the target space does not exceed that of the source space, and $s^+(\mathcal Q)=\dim H_0$ by~\eqref{eq:signature-dim-H0-Hm1}).
\end{proof}
\begin{prop}\label{prop:nonzeroinf-for-centered-gaussians}
Assume that $\mathcal Q$ is positive definite on $\ker B_+$ and that
$\inf_{\mathcal{CG}} J>0$. Then $B_+$ is surjective.
\end{prop}
\begin{proof}
We proceed by contradiction. Assume that $B_+$ is not onto. Then for some $i \in \{1,2,\ldots, m^+\}$ and $v \in H_i \setminus\{0\}$, the vector
$$(0,\ldots, 0, \underbrace{v}_{\textup{$i$-th component}}, 0, \ldots, 0) \in H_1 \times \cdots \times H_{m^+}$$
is not in the image of $B_+=(B_1,\ldots, B_{m^+}) \colon H \to H_1 \times \cdots \times H_{m^+}$. Fix such $i$ and $v$, and let $P_{(\mathbb{R} v)^\bot} \colon H_i \to H_i \cap (\mathbb{R} v)^\bot$ be the orthogonal projection. Put $\tilde{H}_i = H_i \cap (\mathbb{R} v)^\bot$ and
\[
\tilde{B}_i = P_{(\mathbb{R} v)^\bot} B_i \colon H \to \tilde{H}_i.
\]
From the surjective maps $B_1, \ldots, B_{i-1}, \tilde{B}_i, B_{i+1}, \ldots, B_{m^+}$,
we construct a map $\tilde B_+=(B_1, \ldots, B_{i-1}, \tilde{B}_i, B_{i+1}, \ldots, B_{m^+})$ from $H$ to $H_1 \times \cdots \times H_{i-1} \times \tilde{H}_i \times H_{i+1} \times\cdots \times H_{m^+}$.
Now we show that $Q$ is positive definite on $\ker \tilde{B}_+$. To this end, take any $x \in H$ for which $(B_1 x, \ldots B_{i-1} x, \tilde{B}_i x, B_{i+1} x, \ldots, B_{m^+} x) = (0, \ldots, 0)$. Hence
$$B_+x=(B_1 x, \ldots, B_{m^+} x) \in (0, \ldots, 0, \underbrace{\mathbb{R} v}_{\textup{$i$-th component}}, 0, \ldots, 0),$$
but since $(0, \ldots, 0,
v, 0, \ldots, 0)$
is not in the image of $B_+=(B_1, \ldots, B_{m^+})$, we must have $B_+x =0$.
By assumption, $Q$ is positive definite on $\ker B_+$, which gives $\scalar{Qx, x} > 0$ if $x\neq 0$.
Applying Proposition~\ref{prop:lamba-emptyset} to $Q$ and the maps $B_1, \ldots, B_{i-1}, \tilde{B}_i, B_{i+1}, \ldots, B_m$, we obtain positive definite maps $A_k \colon H_k \to H_k$ for $k\neq i$ and $\tilde{A}_i \colon \tilde{H}_i \to \tilde{H}_i$ such that the map
\[
Q + c_i \tilde{B}_i^\ast \tilde{A}_i \tilde{B}_i + \sum_{\stackrel{1 \le k \le m}{k \neq i}} c_k B_k^\ast A_k B_k \quad \textup{is positive definite.}
\]
For $t > 0$ define a positive map $A^{(t)}_i = P_{(\mathbb{R} v)^\bot}^\ast \tilde{A}_i P_{(\mathbb{R} v)^\bot} + t v v^\ast \colon H_i \to H_i$. Note that
\[
\lim_{t \to 0^+} \det \Big(Q + c_i B_i^\ast A^{(t)}_i B_i + \sum_{\stackrel{1 \le k \le m}{k \neq i}} c_k B_k^\ast A_k B_k \Big) =
\det\Big(Q + c_i \tilde{B}_i^\ast \tilde{A}_i \tilde{B}_i + \sum_{\stackrel{1 \le k \le m}{k \neq i}} c_k B_k^\ast A_k B_k \Big) > 0
\]
while $\lim_{t \to 0^+} \det A^{(t)}_i = 0$. Therefore, using the formula~\eqref{eq:J-on-Gaussian-input}, we see that
\[
\lim_{t \to 0^+} J(g_{A_1}, \ldots, g_{A_{i-1}}, g_{A^{(t)}_i}, g_{A_{i+1}}, \ldots, g_{A_m}) = 0,
\]
where, as before, $g_A$ denotes the centered Gaussian function $g_A(x)=e^{-\pi\scalar{Ax,x}}$. This contradicts the assumption that $\inf_{\mathcal{CG}} J>0$.
\end{proof}
\subsection{Case analysis and non-degeneracy hypotheses}\label{subsection:case-analysis}
The goal of this section is to give a complete picture of the cases in which the best
constant in inverse Brascamp-Lieb inequalities can be computed with Gaussian
functions only.
\smallskip
Case 0.0: The restriction of $\mathcal Q$ to $\ker B_+$ is not positive definite and $B_+$ is not surjective.
In this case, Lemma~\ref{lem:nonsurjective1}(i) implies that $\min J=0$. On the other hand, Proposition~\ref{prop:lamba-emptyset}
implies that $\inf_{\mathcal{CG}}J= +\infty$, or equivalently $\Lambda = \emptyset$, which combined with~\eqref{eq:J-finite-on-shifted-gaussians} implies that also $\inf_{\mathcal G} J=+\infty$.
Thus Gaussian functions do not allow one to compute the infimum of $J$.
\smallskip
Case 0.1: The restriction of $\mathcal Q$ to $\ker B_+$ is not positive definite and $B_+$ is surjective.
Proposition \ref{prop:finitevalues2} ensures that $\inf J=+\infty$.
The functional is always infinite. In a very degenerate sense, centered Gaussian
functions allow one to compute the infimum of $J$.
\smallskip
Case 1.0.0: The restriction of $\mathcal Q$ to $\ker B_+$ is positive definite, $\dim H< s^+(\mathcal Q)+\sum_{i=1}^{m^+} \dim H_i$ and $B_+$ is not surjective.
By Lemma \ref{lem:nonsurjective1}(i), $\min J=0$. Proposition \ref{prop:nonzeroinf-for-centered-gaussians} ensures that
$\inf_{\mathcal{CG}}J=0$.
\smallskip
Case 1.0.1: $\mathcal Q$ is positive definite on $\ker B_+$, $\dim H< s^+(\mathcal Q)+\sum_{i=1}^{m^+} \dim H_i$ and $B_+$ is surjective. Proposition \ref{prop:nonzeroinf} gives $\inf_{\mathcal G} J=0$.
However, in this case the value of $\inf_{\mathcal CG} J$ is not always 0. We will give examples later.
\smallskip
Case 1.1: $\mathcal Q$ is positive definite on $\ker B_+$ and $\dim H\ge s^+(\mathcal Q)+\sum_{i=1}^{m^+} \dim H_i$.
This is our last case, and in a sense the only non-degenerate one. Dealing with it is the main part of the work.
We postpone the proof of the following statement to the next section, in order to discuss its consequences first.
\begin{theorem}\label{theo:gauss-mini}
If $\mathcal Q$ is positive definite on $\ker B_+$ and
$$ \dim H\ge s^+(\mathcal Q)+\sum_{i=1}^{m^+} \dim H_i,$$
then $\inf J=\inf_{\mathcal{CG}} J$.
\end{theorem}
So under the above hypothesis, centered Gaussian functions allow one to compute the optimal constant in
inverse Brascamp-Lieb inequalities.
When the hypothesis of the theorem is not satisfied, $\inf J$ can only be $0$ or $+\infty$.
\begin{remark} \label{rem:dec-Q}
Assume \eqref{eq:any-decomposition-of-Q}, \eqref{eq:ker-B0-Bm1-H}
and the notation \eqref{eq:any-decomposition-of-Q-as-sefl-adj-map}. Then
\[\mathcal Q(x)=\pi c_0\scalar{A_0 B_0 x, B_0 x} + \pi c_{m+1}\scalar{A_{m+1} B_{m+1} x, B_{m+1} x},\] which ensures that
\[ \mathrm{ker}(B_0,\ldots,B_{m^+})\subset \big\{ x \in H \colon \mathcal Q(x) \le 0\big\} \cap \bigcap_{i=1}^{m^+} \ker B_i.\]
Hence $(B_0,\ldots,B_{m^+})$ is injective when $\mathcal Q$ is positive definite on $\ker B_+$. Together with \eqref{eq:signature-dim-H0-Hm1}, this implies
that $ \dim H \le s^+(\mathcal Q) + \dim H_1 + \cdots + \dim H_{m^+} $. Since the hypotheses of the above theorem provide the converse inequality, they imply that $ \dim H =s^+(\mathcal Q) + \dim H_1 + \cdots + \dim H_{m^+}$, and that $(B_0,\ldots,B_{m^+})$ is a bijection.
\end{remark}
\begin{figure}
\begin{tikzpicture}
\tikzstyle{question}=[rectangle,draw,rounded corners=4pt,fill=blue!25]
\tikzstyle{conclusion-}=[rectangle,text=red]
\tikzstyle{conclusion}=[rectangle]
\node[question] (Q) at (0,4) {$\mathcal Q_{|\ker B_+}>0$?};
\node[question] (B1) at (-3,2) {$B_+$ onto?};
\node[question] (D) at (3,2) {$\dim H\ge s^+(\mathcal{Q}) +\sum_{i=1}^{m^+} \dim H_i$?};
\node[question] (B2) at (1,0) {$B_+$ onto?};
\node[conclusion-] (00a) at (-5,0) { $\;\; \min J=0 \;\; $ };
\node[conclusion-] (00b) at (-5,-0.6) {$\displaystyle \inf_{\mathcal G}J=+\infty$};
\node[conclusion] (01) at (-2,0) { $\inf J=+\infty$ };
\node[conclusion] (100a) at (-0.5,-2) { $ \min J=0$ };
\node[conclusion] (100b) at (-0.5,-2.6) { $\displaystyle \inf_{\mathcal{CG}}J=0$ };
\node[conclusion] (101a) at (2.5,-2) { $\displaystyle \inf_{\mathcal{G}}J=0$ };
\node[conclusion-] (101b) at (2.5,-2.6) { $\displaystyle \inf_{\mathcal{CG}}J\in [0,+\infty)$ };
\node[conclusion] (11) at (5,0) {$\displaystyle \inf J=\inf_{\mathcal{CG}}J<+\infty$};
\tikzstyle{suite}=[->]
\draw[suite] (Q) -- (B1) node[midway,fill=white]{no};
\draw[suite] (Q) -- (D) node[midway,fill=white]{yes};
\draw[suite] (B1) -- (00a) node[midway,fill=white]{no};
\draw[suite] (B1) -- (01) node[midway,fill=white]{yes};
\draw[suite] (D) -- (B2) node[midway,fill=white]{no};
\draw[suite] (D) -- (11) node[midway,fill=white]{yes};
\draw[suite] (B2) -- (100a) node[midway,fill=white]{no};
\draw[suite] (B2) -- (101a) node[midway,fill=white]{yes};
\end{tikzpicture}
\caption{Summary of the case analysis}
\end{figure}
Let us mention variants of the above theorem, which consist in grouping the various possible cases slightly differently.
A first variant is Theorem \ref{thm:main-result}, as stated in the introduction. Another one is given next. It means
that under the assumption that the functional $J$ is finite for some Gaussian functions, the optimal constant
can be computed using Gaussian functions (not necessarily centered) only.
\begin{theorem}\label{theo:gauss-mini-noncentered}
If $\mathcal Q$ is positive definite on $\ker B_+$
then $\inf J=\inf_{\mathcal{G}} J$.
\end{theorem}
Next we provide examples of the cases in which the Gaussian minimizers principle fails.
\begin{example}
Consider the very simple case of the functional
$$J(f,g):=\frac{\int_{\mathbb{R}^2} f(x)g(x) \,dx\,dy}{ \int_{\mathbb{R}} f \times \int_{\mathbb{R}} g}\cdot$$
Here $m^+=m=2$, $c_1=c_2=1$, $\mathcal Q=0$ and
$B_1(x,y)=B_2(x,y)=x$.
The map $B_+\colon \mathbb{R}^2\to \mathbb{R}^2$ is given by $B_+(x,y)=(x,x)$. It is not surjective, and
$\mathcal{Q}$ is not positive definite on $\ker B_+=\{0\}\times \mathbb{R}$. So we are in the setting
of Case 0.0 above.
By Fubini $J(f,g)=+\infty \times \int_{\mathbb{R}} f(x)g(x)\, dx/ (\int f \times \int g)$ which is equal
to 0 if the supports of $f$ and $g$ are disjoint, and is equal to $+\infty$ for Gaussian functions.
\end{example}
\begin{example}[Reversed hypercontractivity]
Borell's reverse Gaussian hypercontractivity~\cite{borell-1982} states that for any $p,q\in(-\infty,1)$ the operators of the Ornstein-Uhlenbeck semigroup $P_t f(x) = \int_{\mathbb R} f(e^{-t} x + \sqrt{1-e^{-2t}} y) \gamma(dy)$, where $\gamma$ is a standard Gaussian measure, satisfy
\[
\| P_t f \|_{L^q(\gamma)} \ge \|f\|_{L^p(\gamma)}
\]
for all \emph{positive} functions $f \in L^1(\gamma)$ if and only if $e^{-2t} \le \frac{1-p}{1-q}$.
Excluding the case when either $p$, $q$ or $t$ is $0$ and using the fact that for $q\in (-\infty,1)$ and
$h \in L^q$, $\|h\|_{L^q} = \inf \{ \int h k \colon k > 0, \int k^{q'} = 1\}$ where $q'=q/(q-1)$, the above estimate can be restated as follows: let $n=2$, $m=2$, $n_1 = n_2 = 1$, $B_1(x_1,x_2) = x_1$, $B_2(x_1,x_2)=x_2$, $c_1 = 1/p\in \mathbb{R}\setminus [0,1]$, $ c_2 = 1/q' \in \mathbb{R}\setminus [0,1]$, $t > 0$ and
\[
Q = \frac{1}{2\pi(1-e^{-2t})} \left( \begin{array}{cc} 1 - (1-e^{-2t}) c_1 & -e^{-t} \\ -e^{-t} & 1 - (1-e^{-2t}) c_2 \end{array} \right).
\]
Then for the corresponding functional
\[
J(f,g) = \int_{\mathbb{R}^2} e^{-\pi \scalar{Qx,x}} f^{c_1}(x_1) g^{c_2}(x_2) \, dx_1 dx_2 \, \Big(\int f\Big)^{-c_1} \Big(\int g\Big)^{-c_2},
\]
we have $\inf J = (2\pi)^{1 - \frac{c_1+c_2}{2}} \sqrt{1-e^{-2t}}$ if and only if $c_1 c_2 \det Q \ge 0$.
Now, let us focus on the specific example $c_1 = c_2 = 2$. In this case $B_+(x_1,x_2)=(x_1,x_2)$. Hence $B_+$ is surjective
and $Q$ is positive definite on $\ker B_+=\{0\}$.
Condition~\eqref{eq:surjectivity-condition} is violated if and only if $s^+(Q) > 0$, which is equivalent to
\[ \textup{tr} (Q) > 0 \quad \mathrm{or} \quad \det( Q )< 0.\]
Actually, in our case $\textup{tr} (Q) > 0$ implies $\det (Q) < 0$. A simple calculation shows that $s^+(Q) > 0$ holds if and only if $e^{-2t} > 1/4$.
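Indeed, up to the positive factor $\big(2\pi(1-e^{-2t})\big)^{-1}$, the eigenvalues of $Q$ are
\[
2e^{-2t}-1\pm e^{-t},
\]
and $2e^{-2t}-1+e^{-t}=(2e^{-t}-1)(e^{-t}+1)$ is positive exactly when $e^{-t}>1/2$, that is, when $e^{-2t}>1/4$.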
Thus, whenever $e^{-2t} > 1/4$, we are in Case 1.0.1 above, and $\inf J = \inf_{\mathcal G} J = 0$. Besides, Borell's result asserts that $\inf J = (2\pi)^{-1} \sqrt{1-e^{-2t}}$ provided $e^{-2t} \le 1/4$. Next we claim that
\[
\inf_{\mathcal{CG}} J =
\begin{cases}
(2\pi)^{-1} \sqrt{1-e^{-2t}} &\text{if } e^{-2t} \in \big(\frac14, \frac12\big],
\\
0 &\text{if } e^{-2t} \in \big(\frac12, 1\big).
\end{cases}
\]
This is an illustration of Case 1.0.1 above: $\inf J=\inf_{\mathcal{G}}J=0$ but $\inf_{\mathcal{CG}}J$ can be 0 in some cases,
and positive in some other cases.
It remains to prove the claim. Put $f(x) = e^{-a x^2 /2}$ and $g(x) = e^{-b x^2 /2}$ for some $a,b>0$. Then
\[
J(f,g)^2 = \begin{cases}
\frac{(2\pi)^{2-(c_1+c_2)} a^{c_1} b^{c_2} (1 - e^{-2t})^2}
{\det \left( \begin{array}{cc} 1+(1-e^{-2t}) c_1 (a-1) & -e^{-t} \\[1ex]
-e^{-t} & 1+(1-e^{-2t}) c_2 (b-1) \end{array} \right)} & \text{if $\det > 0,$} \\
+\infty & \text{otherwise.}
\end{cases}
\]
Restricting our attention to the case $c_1 = c_2 =2$,
\[
J(f,g)^2 = \begin{cases}
(2\pi)^{-2} (1-e^{-2t}) \frac{a^2 b^2}{4(1+ab)(1-e^{-2t}) - 3 + 2(2e^{-2t}-1) (a+b)} &
\text{if the denominator is positive} \\[1ex]
+\infty & \text{otherwise.}
\end{cases}
\]
In the case $e^{-2t} \in (1/2,1)$ we show that $\inf_{a,b>0} J(f,g)^2 = 0$ by checking that
\begin{align*}
\sup_{a,b>0} \frac{4(1+ab)(1-e^{-2t}) - 3 + 2(2e^{-2t}-1) (a+b)}{a^2 b^2} \\[1ex]
\ge
\sup_{a>0, b=1/a} 8(1-e^{-2t}) - 3 + 2(2e^{-2t}-1) (a+1/a) = +\infty.
\end{align*}
In the case $e^{-2t} \in (1/4, 1/2]$ we will have $\inf_{a,b>0} J(f,g) = (2\pi)^{-1} \sqrt{1-e^{-2t}}$ if we show
\begin{equation}\label{eq:sup1}
\sup_{a,b>0} \frac{4(1+ab)(1-e^{-2t}) - 3 + 2(2e^{-2t}-1) (a+b)}{a^2 b^2} = 1.
\end{equation}
Put $\lambda = 2 - 4e^{-2t} \in [0,1)$. Since $a+b$ is multiplied by the coefficient $2(2e^{-2t}-1) = -\lambda\le 0$, we can use the inequality $a+b \ge 2 \sqrt{ab}$ to calculate the supremum in~\eqref{eq:sup1} as follows:
\begin{align*}
\sup_{a,b>0} \frac{\lambda-1 - \lambda(a+b) + (\lambda+2)ab}{a^2 b^2}
= \sup_{x = (ab)^{-1/2} > 0} (\lambda-1)x^4 - 2\lambda x^3 + (\lambda+2) x^2 =: \sup_{x>0} \varphi(x).
\end{align*}
Since
\[
\varphi'(x) = -4(1-\lambda) x(x-1)\Big(x - \frac{\lambda + 2}{2(\lambda-1)} \Big),
\]
$\varphi$ is increasing on $(0,1]$ and decreasing on $[1,\infty)$ and hence $\sup_{x>0} \varphi(x) = \varphi(1) = 1$.
\end{example}
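The one-variable reduction closing the example above can also be checked numerically. The sketch below (ours, outside the formal argument) verifies on a grid that $\varphi(x) = (\lambda-1)x^4 - 2\lambda x^3 + (\lambda+2)x^2$ attains its supremum over $x>0$ at $x=1$ with value $1$, for several $\lambda \in [0,1)$:

```python
# Grid check that sup_{x>0} phi(x) = phi(1) = 1 for lambda in [0, 1).
def phi(lam, x):
    return (lam - 1) * x**4 - 2 * lam * x**3 + (lam + 2) * x**2

for lam in [0.0, 0.25, 0.5, 0.75, 0.99]:
    # since lam < 1, phi -> -infinity as x -> infinity, so x in (0, 20) suffices
    grid_max = max(phi(lam, k / 1000.0) for k in range(1, 20000))
    assert abs(phi(lam, 1.0) - 1.0) < 1e-12   # phi(1) = 1 identically in lambda
    assert grid_max <= 1.0 + 1e-9             # no grid point exceeds 1
```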
We conclude this section with the analysis of degenerate and non-degenerate cases for the inverse convolution inequality.
\begin{example}[Reverse Young inequality]
As mentioned in the introduction: for $p,q,r\in (0,1]$ such that $1+1/r=1/p+1/q$, and positive functions on $\mathbb R^n$, Brascamp and Lieb have proved that
\[ \|f*g\|_r\ge \left(\frac{C_pC_q}{C_r}\right)^n \|f\|_p \|g\|_q\] holds where $C_t=|t|^{1/t}/|t'|^{1/t'}$, and the constant is optimal. Our goal here is to discuss extensions to negative exponents.
Using a duality type argument, a change of functions and the fact that $C_{p'}=1/C_p$ we can reformulate the above result as follows:
if $p,q\in (0,1]$ and $r'\in (-\infty,0)$ verify $1/p+1/q+1/r'=2$ then for all
positive integrable functions $f,g,h$,
\[\int_{(\mathbb R^n)^2} f(x-y)^{\frac1p} g(y)^{\frac1q} h(x)^{\frac{1}{r'}} \,dx\,dy\ge (C_pC_qC_{r'})^n \left(\int f\right)^{\frac1p} \left(\int g\right)^{\frac1q} \left(\int h\right)^{\frac{1}{r'}}.\]
Simple changes of variables as
$\int F(x-y)G(y)H(x) \,dx\,dy= \int F(z)G(x-z)H(x) \,dx\,dz$ show that $p,q$ and $r'$
play symmetric roles. Therefore the convolution inequality is also true when
$1/p+1/q+1/r'=2$ and among the three numbers $p,q,r'$, two are in $(0,1]$ and one is in $(-\infty,0)$
(which is more general than $p,q,r\in (0,1]$).
However no non-trivial inequality holds beyond this range of indices, as we show next.
The condition $1/p+1/q+1/r'=2$ is necessary: applying the inequality to $f(\lambda \cdot)$,
$g(\lambda \cdot)$, $h(\lambda \cdot)$ for $\lambda> 0$ and changing variables $(x,y)=\lambda^{-1}(X,Y)$ forces it.
Consider the three surjective maps from $\mathbb R^{2n}$ to $\mathbb R^n$ defined by $B_1(x,y)=x-y$, $B_2(x,y)=y$, $B_3(x,y)=x$, and the numbers
$c_1=1/p$, $c_2=1/q$ and $c_3=1/r'$.
The above analysis of degenerate cases shows that one should focus on
the map $B_+=(B_i)_{i\colon c_i>0}$.
If $p,q,r'$ are positive then $B_+$ is not surjective and the only possible constant
in the convolution inequality is 0. If only one among the three numbers $p,q,r'$ is
positive, then $B_+$ is surjective but not injective and we are in Case 0.1, meaning
that the functional under study never takes finite values.
\end{example}
\section{Proof of Theorem \ref{theo:gauss-mini}}\label{sec:proof-main}
\subsection{Decomposition of the kernel $\exp(-\mathcal Q)$}\label{sec:decomposition-of-Q}
The positive and negative parts of a quadratic form $\mathcal Q$ play different roles, as do the functions $f_i$ with $i \le m^+$ and the functions $f_j$ with $j > m^+$. Although there is no canonical decomposition of $H$ into subspaces on which $\mathcal Q$ is, respectively, positive and negative definite, Condition~\eqref{eq:injectivity-condition} provides a natural candidate for a subspace on which $\mathcal Q$ is positive definite.
This leads to the following result:
\begin{lemma}\label{lem:decompose-Q}
The following two assertions are equivalent:
\begin{enumerate}
\item \textup{(i)} $\mathcal Q$ is positive definite on $\ker B_+$ and \textup{(ii)} $\dim H \ge s^+(\mathcal Q)+ \sum_{i=1}^{m^+} \dim H_i $.
\item There exist vector spaces $H_0$, $H_{m+1}$, surjective linear maps $B_0 \colon H\to H_0$ and $B_{m+1} \colon H\to H_{m+1}$, and positive definite quadratic forms $\mathcal Q_+$ on $H_0$ and $\mathcal Q_-$ on $H_{m+1}$ such that:
\begin{align}
\bullet\; & (B_0,B_+) \colon H\to H_0\times\cdots \times H_{m^+} \mbox{ is bijective}, \label{eq:isomorphism-condition} \\
\bullet\; & \ker B_+\subset \ker B_{m+1}, \label{eq:kerBplus-contained-in-kerBm1} \\
\bullet\; & \mbox{for all } x\in H, \quad \mathcal Q(x)=\mathcal Q_+(B_0x)-\mathcal Q_-(B_{m+1}x). \nonumber
\end{align}
\end{enumerate}
\end{lemma}
\begin{remark} The above decomposition of $\mathcal Q$ is more specific than the ones introduced in Subsection~\ref{subsec:translated-gaussians} in~\eqref{eq:any-decomposition-of-Q}. Although we have used for convenience the same notation $B_0$ and $B_{m+1}$ there, they do not necessarily represent the same maps as in the above lemma. However no confusion will be possible, since from now on we will only use the decomposition of Lemma~\ref{lem:decompose-Q}.
\end{remark}
\begin{proof}
We start with $(2)\Longrightarrow (1)$:
For any $x\in \ker B_+\subset \ker B_{m+1}$, it holds
$$ \mathcal Q(x)=\mathcal Q_+(B_0x)-\mathcal Q_-(B_{m+1}x)=\mathcal Q_+(B_0x)\ge 0.$$
Moreover, if $\mathcal Q(x)=0$, using that $\mathcal Q_+$ is positive definite, we deduce that
$B_0x=0$. It follows that $x$ belongs to $\ker B_0\cap \ker B_+$, which is equal to $\{0\}$ by hypothesis.
Thus we have shown that $\mathcal Q$ is positive definite on $\ker B_+$.
It remains to prove (1)(ii). By hypothesis, $(B_0,B_+)$ is
a linear isomorphism, which implies that
$$\dim H=\sum_{i=0}^{m^+} \dim H_i.$$
Therefore, it is enough to show that $s^+(\mathcal Q)\le \dim H_0$.
Since $\mathcal Q_-$ is positive definite, $\mathcal Q$ is negative semi-definite on $\ker B_0$, and hence
\[s^+(\mathcal Q)\le \dim H-\dim \ker B_0=\dim H_0. \]
\medskip
Now we prove that $(1)\Longrightarrow(2)$. Set
\[
H_0 = \ker B_+ = \bigcap_{i=1}^{m^+} \ker B_i.
\]
Note that (1) implies $\dim H_0 = s^+(\mathcal Q)$. Indeed,
\[
\dim H_0 = \dim \ker B_+ = \dim H - \dim \im B_+ \ge \dim H - \sum_{i=1}^{m^+} \dim H_i \ge s^+(\mathcal Q),
\]
where the last inequality follows from~(1)(ii). The converse inequality follows from Sylvester's theorem since $\mathcal Q$ is positive definite on $H_0$.
Now consider the subspace
\[
H_0^{\perp_{\mathcal Q}} = \{ x \in H \colon \forall {y \in H_0}, \ \mathcal Q(x,y) = 0 \},
\]
where we also denote by $\mathcal{Q}(\cdot, \cdot)$ the symmetric bilinear form associated with the quadratic form $\mathcal Q$.
Since $\mathcal Q$ is positive definite on $H_0$, we have
\[
H_0 \cap H_0^{\perp_{\mathcal Q}}
\subseteq \{ x \in H_0 \colon \mathcal Q(x, x) = 0 \} = \{0\}.
\]
As a general fact, $\dim H_0^{\perp_{\mathcal Q}} \ge \dim H - \dim H_0$, therefore
\begin{equation}\label{eq:direct-product-H0}
H_0 \oplus H_0^{\perp_{\mathcal Q}} = H.
\end{equation}
Consider the projection $P \colon H \to H$ onto $H_0$ with kernel $H_0^{\perp_{\mathcal Q}}$. Then $\textup{Id} - P \colon H \to H$ is the projection onto $H_0^{\perp_{\mathcal Q}}$ with $\ker (\textup{Id} - P) = H_0$ and
\begin{equation}\label{eq:decomposition-Q-prep-step}
\mathcal Q(x) = \mathcal Q(Px) + \mathcal Q((\textup{Id} - P)x).
\end{equation}
Next, note that $\mathcal Q$ is negative semi-definite on $H_0^{\perp_{\mathcal Q}}$. Indeed, suppose that for some $0 \neq x \in H_0^{\perp_{\mathcal Q}}$, $\mathcal Q(x) > 0$. Then for all $\lambda \in \mathbb{R}$ and $y \in H_0$,
\[
\mathcal Q(\lambda x + y) = \lambda^2 \mathcal Q(x) + 2\lambda \mathcal Q(x,y) + \mathcal Q(y) = \lambda^2 \mathcal Q(x) + \mathcal Q(y) > 0
\]
whenever $\lambda \neq 0$ or $y \neq 0$, which thanks to~\eqref{eq:direct-product-H0} is equivalent to $\lambda x + y \neq 0$. In this way $\mathcal Q$ would be positive definite on the subspace $H_0 \oplus \textup{span}\{x\}$, which has dimension strictly larger than $s^+(\mathcal Q)$, thus contradicting Sylvester's theorem.
Further on, as a general fact, the \emph{radical} of $\mathcal Q$
\[
\rad \mathcal Q = \{ x \in H \colon \forall{y \in H}, \ \mathcal Q(x,y) = 0\}
\]
is a subspace of $H_0^{\perp_{\mathcal Q}}$. Consider
\[
H_{m+1} = \sfrac{H_0^{\perp_{\mathcal Q}}}{\rad \mathcal Q}
\]
and the maps $ B_0 \colon H \to H_0$ and $ B_{m+1} \colon H \to H_{m+1} $ defined for $x\in H$ by
\begin{eqnarray*}
& & B_0(x) = P(x), \\
& & B_{m+1}(x) = \pi_{H_0^{\perp_{\mathcal Q}} \to H_0^{\perp_ {\mathcal Q}}/{\rad \mathcal Q} } \big((\textup{Id} - P)(x)\big),
\end{eqnarray*}
where $\pi_{H_0^{\perp_{\mathcal Q}} \to {H_0^{\perp_{\mathcal Q}}}/{\rad \mathcal Q}}$ is the natural quotient map from $H_0^{\perp_{\mathcal Q}}$ to $\sfrac{H_0^{\perp_{\mathcal Q}}}{\rad \mathcal Q}$.
Finally, consider the following positive definite quadratic forms
\[ \begin{split}
& \mathcal Q_+ = \mathcal Q|_{H_0} \colon H_0 \to \mathbb{R}, \\
& \mathcal Q_- \colon H_{m+1} \to \mathbb{R}, \quad \mathcal Q_-(x + \rad \mathcal Q) = -\mathcal Q(x) \textup{ for $x \in H_0^{\perp_{\mathcal Q}}$}.
\end{split} \]
Then the decomposition~\eqref{eq:decomposition-Q-prep-step} becomes
\[
\mathcal Q(x) = \mathcal Q_+(B_0 x) - \mathcal Q_-(B_{m+1} x).
\]
Next, let us establish the claimed properties of the linear maps which appear
in the above decomposition of $\mathcal Q$.
The non-degeneracy conditions~(1)(i) and~(1)(ii) imply that the map
\[
B_{0+} := (B_0, B_1, \ldots, B_{m^+}) \colon H \to H_0 \times H_1 \times \cdots \times H_{m^+} \textup{ is a linear isomorphism.}
\]
Indeed $\ker B_0 \cap \ker B_+=H_0^{\perp_{\mathcal Q}}\cap H_0=\{0\}$, hence $B_{0+}$ is injective. The dimension condition (1)(ii),
once rewritten as $\dim H\ge \sum_{i=0}^{m^+} \dim H_i$, shows that $B_{0+}$
is an isomorphism.
Note also that $\ker B_{m+1} = H_0 + \rad \mathcal Q$ and therefore
\[
\ker B_+ =H_0\subseteq \ker B_{m+1}.
\]
This concludes the proof of the claimed properties.
\end{proof}
The simple lemma stated below establishes the following consequence of~\eqref{eq:kerBplus-contained-in-kerBm1}:
\begin{equation}\label{eq:compact-image}
\textup{if $F \subseteq H_1 \times \cdots \times H_{m^+}$ is compact, then $B_{m+1}(B_+^{-1}(F))$ is compact.}
\end{equation}
\begin{lemma}\label{lem:compact-image}
Let $S \colon X \to Y$ and $T \colon X \to Z$ be linear maps between finite dimensional linear spaces $X, Y, Z$ and $F \subseteq Y$ be a compact set. If $\ker S \subseteq \ker T$ then $T(S^{-1}(F))$ is a compact subset of $Z$.
\end{lemma}
\begin{proof}
Put $V = \ker S$ and let $\pi \colon X \to \sfrac{X}{V}$ be the natural quotient map. Define $\tilde{S} \colon \sfrac{X}{V} \to Y$ and $\tilde{T} \colon \sfrac{X}{V} \to Z$ as
\[
\tilde{S}(x + V) = S x, \qquad \tilde{T}(x + V) = T x
\]
(these definitions are correct since the kernels of $S$ and $T$ contain $V$), i.e. $\tilde S \circ \pi = S$ and $\tilde{T} \circ \pi = T$. Note that $\ker \tilde{S}$ is trivial, hence $\tilde{S}$ is a linear isomorphism onto its range, and thus $G = \tilde{S}^{-1}(F)$ is a compact subset of $\sfrac{X}{V}$. Finally write
\[
T(S^{-1}(F)) = \tilde{T}(\pi(\pi^{-1}(\tilde{S}^{-1}(F)))) = \tilde{T}(\tilde{S}^{-1}(F)) = \tilde{T}(G)
\]
and use that $\tilde{T}(G)$ is also compact.
\end{proof}
\subsection{More on quadratic forms}
We start by recalling a simple inequality, which appears in the transportation proof of the Brascamp-Lieb inequalities.
\begin{lemma}\label{lem:quad-direct}
Let $I$ be a finite set, and for each $i\in I$ let $d_i>0$, $L_i\colon H\to H_i$ be linear and onto
and $K_i\colon H_i\to H_i$ be a linear symmetric positive definite map.
Assume that $K:=\sum_i d_i L_i^*K_iL_i>0$. Let $w\in H$; then for all $y_i\in H_i$ verifying
$w=\sum_i d_i L_i^* y_i$, the following holds
$$\langle K^{-1}w,w \rangle \le \sum_i d_i \langle K_i^{-1} y_i,y_i\rangle.$$
There is equality if one chooses $y_i:=K_iL_iK^{-1}w$.
\end{lemma}
\begin{proof}
This is a direct application of the Cauchy-Schwarz inequality:
\begin{eqnarray*}
\langle K^{-1}w,w \rangle &=& \sum_i d_i \langle K^{-1}w,L_i^*y_i\rangle
= \sum_i d_i \langle K_i^{1/2}L_i K^{-1}w,K_i^{-1/2} y_i\rangle\\
&\le & \left( \sum_i d_i \langle K_i^{1/2}L_i K^{-1}w, K_i^{1/2}L_i K^{-1}w \rangle \right)^{\frac12}\left( \sum_i d_i \langle K_i^{-1/2} y_i,K_i^{-1/2} y_i\rangle\right)^{\frac12}\\
&=& \left( \Big\langle (\sum_i d_i L_i^* K_iL_i) K^{-1}w, K^{-1}w \Big\rangle\right)^{\frac12}\left( \sum_i d_i \langle K_i^{-1} y_i, y_i\rangle\right)^{\frac12}\\
&=& \langle K^{-1}w,w\rangle ^{\frac12}\Big( \sum_i d_i \langle K_i^{-1} y_i, y_i\rangle\Big)^{\frac12}
\end{eqnarray*}
Dividing both sides by $\langle K^{-1}w,w\rangle^{\frac12}$ (the case $w=0$ being trivial) yields the claimed inequality. The equality case follows by direct substitution of $y_i=K_iL_iK^{-1}w$.
\end{proof}
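To illustrate Lemma~\ref{lem:quad-direct} concretely, here is a small numerical check with toy data of our own choosing: $H=\mathbb{R}^2$, three rank-one summands with $d_i = K_i = 1$, so that $K = \sum_i L_i^* L_i$, comparing $\langle K^{-1}w,w\rangle$ with $\sum_i y_i^2$ over all representations of a fixed $w$:

```python
# Toy instance of Lemma quad-direct: H = R^2, L_i : R^2 -> R given by rows l_i,
# d_i = K_i = 1.  Check <K^{-1} w, w> <= sum_i y_i^2 over representations
# w = sum_i y_i l_i^T, with equality at y_i = l_i . K^{-1} w.
rows = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
w = (1.0, 2.0)

# K = sum_i l_i^T l_i and its inverse (2x2 cofactor formula)
K = [[sum(l[a] * l[b] for l in rows) for b in range(2)] for a in range(2)]
det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
Kinv = [[K[1][1] / det, -K[0][1] / det], [-K[1][0] / det, K[0][0] / det]]
Kinv_w = (Kinv[0][0] * w[0] + Kinv[0][1] * w[1],
          Kinv[1][0] * w[0] + Kinv[1][1] * w[1])
lhs = Kinv_w[0] * w[0] + Kinv_w[1] * w[1]          # <K^{-1} w, w>

def rhs(y3):
    # general representation: y3 free, then y1 = w1 - y3, y2 = w2 - y3
    y1, y2 = w[0] - y3, w[1] - y3
    return y1**2 + y2**2 + y3**2

assert all(lhs <= rhs(0.1 * k) + 1e-12 for k in range(-30, 31))
y_opt = [l[0] * Kinv_w[0] + l[1] * Kinv_w[1] for l in rows]  # y_i = l_i . K^{-1} w
assert abs(rhs(y_opt[2]) - lhs) < 1e-12            # equality at the optimal choice
```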
When proving Proposition \ref{prop:nonzeroinf}, we have shown that the quadratic inequality stated as its first conclusion implies that the map $(B_0,\ldots,B_{m^+})$ is onto. Our next task is to prove a converse statement, for further use.
\begin{lemma}\label{lem:quad-inverse}
Let $c_1,\ldots,c_{m^+}>0>c_{m^++1},\ldots, c_m$. For $k=1,\ldots,m$, let
$B_k \colon H\to H_k$ be a linear surjective map, and let $A_k \colon H_k\to H_k$ be symmetric definite positive operator.
Assume that $(B_1,\ldots,B_{m^+})\colon H\to H_1\times\cdots\times H_{m^+}$ is onto.
If $A:=\sum_{k=1}^m c_k B_k^*A_kB_k>0$ and $y=\sum_{k=1}^m c_k B_k^* y_k$ for some $y_k\in H_k$, then
$$\langle A^{-1}y,y \rangle \ge \sum_{k=1}^m c_k \langle A_k^{-1} y_k,y_k\rangle.$$
There is equality if one chooses $y_k:=A_kB_kA^{-1}y$.
\end{lemma}
\begin{proof}
The statement is derived from the previous lemma, after rearranging the terms.
By the surjectivity hypothesis, there exists $z\in H$ such that for all $i\le m^+$, $y_i=A_iB_iz$.
The relationship $y=\sum_{k=1}^m c_k B_k^* y_k$ can be rewritten as $$y+\sum_{m^+<j\le m} |c_j| B_j^* y_j=
\Big(\sum_{i\le m^+} c_iB_i^*A_iB_i\Big) z.$$
If we set $K:=\sum_{i\le m^+} c_i B_i^*A_iB_i$, $w:=Kz$, $H_{m+1}:=H$, $B_{m+1}:=\textup{Id}_H$, $A_{m+1}:=A$, $y_{m+1}:=y$ and $c_{m+1}=1$, we obtain that
\begin{equation} \label{eq:quad1}
w=\sum_{m^+<j\le m+1} |c_j| B_j^* y_j.
\end{equation}
With this notation, we may also rewrite the relationship
$A=\sum_{k=1}^m c_k B_k^*A_kB_k$ as
\begin{equation}\label{eq:quad2}
K=\sum_{i\le m^+} c_i B_i^*A_iB_i=A+\sum_{m^+<j\le m} |c_j| B_j^*A_jB_j
=\sum_{m^+<j\le m+1} |c_j| B_j^*A_jB_j.
\end{equation}
Observe that $K>0$ holds, as a consequence of $A>0$. Therefore \eqref{eq:quad1} and \eqref{eq:quad2} allow us to apply Lemma~\ref{lem:quad-direct} and to get
\begin{equation}\label{eq:quad3}
\sum_{m^+<j\le m+1} |c_j| \langle A_j^{-1} y_j,y_j\rangle \ge \langle K^{-1} w,w\rangle.
\end{equation}
By our definitions for $w$, $K$ and $z$,
$$ \langle K^{-1} w,w\rangle= \langle K z,z\rangle=\Big\langle \sum_{i\le m^+} c_iB_i^*A_iB_iz,z\Big\rangle= \sum_{i\le m^+} c_i \langle A_iB_iz, B_i z\rangle
=\sum_{i\le m^+} c_i \langle y_i, A_i^{-1} y_i\rangle.$$
Since $|c_{m+1}| \langle A_{m+1}^{-1} y_{m+1},y_{m+1}\rangle=\langle A^{-1}y,y\rangle$
and $c_j<0$ for $m^+<j\le m$, the statement of~\eqref{eq:quad3} gives the claimed inequality. The case of equality is easily verified.
\end{proof}
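A scalar instance of Lemma~\ref{lem:quad-inverse} can likewise be verified numerically (toy data of our own choosing, not from the text): $H=\mathbb{R}$, $B_1=B_2=\textup{Id}$, $A_1=A_2=1$, exponents $c_1 = 1 > 0 > c_2 = -1/2$, so that $A = c_1 + c_2 = 1/2 > 0$:

```python
# Toy instance of Lemma quad-inverse on H = R: check that
# <A^{-1} y, y> >= c1*y1^2 + c2*y2^2 over all representations
# y = c1*y1 + c2*y2, with equality at y_k = A_k B_k A^{-1} y = y / A.
c1, c2 = 1.0, -0.5
A = c1 + c2                                  # A = 1/2 > 0
y = 3.0
lhs = y * y / A                              # <A^{-1} y, y> = 2 y^2

def rhs(y2):
    y1 = (y - c2 * y2) / c1                  # enforce y = c1*y1 + c2*y2
    return c1 * y1**2 + c2 * y2**2

assert all(lhs >= rhs(0.5 * k) - 1e-9 for k in range(-40, 41))
assert abs(rhs(y / A) - lhs) < 1e-9          # equality at y_2 = y / A
```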
\subsection{Preliminaries and general strategy of the proof of Theorem~\ref{theo:gauss-mini}}
The very first step in the proof of Theorem~\ref{theo:gauss-mini} is to consider a decomposition of the Gaussian kernel $\exp(-\mathcal Q)$ as explained before. Namely, thanks to the hypothesis of Theorem~\ref{theo:gauss-mini} the assertion (2) from Lemma~\ref{lem:decompose-Q} holds true. Therefore we will consider the quadratic forms $\mathcal{Q}_+$ and $\mathcal{Q}_-$ together with the maps $B_0 \colon H \to H_0$ and $B_{m+1} \colon H \to H_{m+1}$ whose existence is ensured by that assertion. Further consider self-adjoint maps $Q_+ \colon H_0 \to H_0$ and $Q_- \colon H_{m+1} \to H_{m+1}$ which represent the respective quadratic forms, i.e.
\[ \begin{split}
\mathcal{Q}_+(x) &= \pi \scalar{Q_+ x, x} \quad \textup{for $x \in H_0$,} \\
\mathcal{Q}_-(x) &= \pi \scalar{Q_- x, x} \quad \textup{for $x \in H_{m+1}$.}
\end{split} \]
For $k = 1,2,\ldots,m$ fix measurable functions $f_k \colon H_k \to [0, \infty]$ of integral one. We will treat the Gaussian kernel $\exp(-\mathcal Q)$ as a combination of two additional functions, writing
\[
\exp(-\mathcal{Q}(x)) = \sqrt{\frac{\det Q_-}{\det Q_+}} f_0^{c_0}(B_0 x) f_{m+1}^{c_{m+1}}(B_{m+1} x),
\]
where $f_0 \colon H_0 \to [0, \infty]$ and $f_{m+1} \colon H_{m+1} \to [0, \infty]$ are defined as
\[ \begin{split}
f_0(x) &= \sqrt{\det Q_+} \exp(-\pi \scalar{Q_+ x, x}), \\
f_{m+1}(x) &= \sqrt{\det Q_-} \exp(-\pi \scalar{Q_- x, x})
\end{split} \]
and $c_0 = 1$, $c_{m+1} = -1$. With this notation we have
\begin{equation}\label{eq:J-as-product-of-f}
J(f_1, \ldots, f_m) = \sqrt{\frac{\det Q_-}{\det Q_+}} \int_H \prod_{k=0}^{m+1} f_k^{c_k}(B_k x) \, dx.
\end{equation}
The general strategy is similar to the one of the proof of the direct and reverse Brascamp-Lieb inequality from~\cite{barthe-inventiones}. Namely we will consider a tuple of centered Gaussian functions $g_k$ on $H_k$ ($k = 0,1,\ldots,m+1$) of integral one and optimal transport maps $H_k \ni x \mapsto y = T_k(x) \in H_k$ which push forward the density $f_k(x) \, dx$ onto the density $g_k(y) \, dy$. Starting from the maps $T_k$, we will build a change of variable map $\theta \colon H \to H$ which will allow us to pass from $J(f_1, \ldots, f_m)$ in the form of~\eqref{eq:J-as-product-of-f} to the integral over $H$ involving a Gaussian function only. However, since we aim to bound~\eqref{eq:J-as-product-of-f} from below, it is crucial that the map $\theta$ is \emph{surjective}. This point is a substantial technical difficulty which is not present in the transportation proof of the Brascamp-Lieb inequality with positive exponents.
In order to make the above strategy work, we need to restrict $f_k$ to carefully chosen classes of functions. There are two reasons behind this. The first is that we need to ensure the existence of optimal transport maps; moreover, it will be convenient to have some regularity of these maps and to have the Monge-Amp{\`e}re equation satisfied in the classical sense.
The second reason is that we need to ensure surjectivity of the map $\theta$, which can be achieved by an appropriate choice of the supports of the test functions. Finally, the inequality for arbitrary integrable functions $f_k$ will be obtained via an approximation argument.
\bigskip
We close this subsection with notation and facts concerning convex functions on Euclidean spaces. A standard reference is~\cite{rockafellar-convex-analysis}. In the sequel we write $\interior A$, $\cl A$, $\bd A$ for the interior, closure and boundary of $A$.
Let $\varphi \colon \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ be a convex function. The \emph{domain} of $\varphi$ is
\[
\dom \varphi = \{ x \in \mathbb{R}^n \colon \varphi(x) < +\infty \}.
\]
We say that $\varphi$ is \emph{proper} if $\dom \varphi \neq \emptyset$. We say that $\varphi$ is \emph{closed} if the epigraph of $\varphi$, i.e. the set $\{ (x, y) \in \mathbb{R}^{n+1} \colon x \in \dom \varphi, y \ge \varphi(x) \}$, is a closed subset of $\mathbb{R}^{n+1}$. A convex function $\varphi$ is closed if and only if it is lower semi-continuous (or, equivalently, for every $\alpha \in \mathbb{R}$, $\{ x \in \mathbb{R}^n \colon \varphi(x) \le \alpha \}$ is a closed subset of $\mathbb{R}^n$).
For $x \in \mathbb{R}^n$, the \emph{subdifferential} $\partial \varphi(x)$ is the set of all vectors $x^\ast \in \mathbb{R}^n$, called \emph{subgradients}, which satisfy
\[
\varphi(y) \ge \varphi(x) + \scalar{x^\ast, y-x} \quad \textup{for all $y \in \mathbb{R}^n$.}
\]
For $A \subseteq \mathbb{R}^n$,
\[
\partial \varphi(A) = \bigcup_{x \in A} \partial \varphi(x).
\]
Note that $\partial \varphi(x) \neq \emptyset$ for all $x \in \interior \dom \varphi$ (actually for all $x$ in the relative interior of $\dom \varphi$) and that if $\varphi$ is proper then $\partial \varphi(x) = \emptyset$ for all $x \not\in \dom \varphi$. If $\varphi$ is differentiable at $x \in \mathbb{R}^n$ then $\partial \varphi(x)$ contains exactly one vector $\nabla \varphi(x)$. The converse is also true: if a convex function has a unique subgradient at a given point, then it is differentiable at that point (see~\cite[Theorem 25.1]{rockafellar-convex-analysis}).
If $\varphi$ is proper, we define the Legendre conjugate of $\varphi$ as
\[
\varphi^\ast(y) = \sup_{x \in \dom \varphi} \scalar{x, y} - \varphi(x),
\]
which is a proper closed convex function on $\mathbb{R}^n$. If $\varphi$ is proper and closed then $(\varphi^\ast)^\ast$ coincides with $\varphi$.
If $\varphi$ is proper and closed then the multi-valued maps $\partial \varphi$ and $\partial \varphi^\ast$ are inverses of each other, i.e.
\begin{equation}\label{eq:subgrad-and-legendre}
y \in \partial \varphi(x) \quad \iff \quad x \in \partial \varphi^\ast(y)
\end{equation}
(see~\cite[Corollary 23.5.1]{rockafellar-convex-analysis}). In particular, $y \in \partial \varphi(\mathbb{R}^n)$ if and only if $\partial \varphi^\ast(y) \neq \emptyset$ which readily implies that if $\varphi$ is proper and closed then
\begin{equation}\label{eq:inclusion-of-domains}
\interior \dom \varphi^\ast \subseteq \partial \varphi(\mathbb{R}^n) \subseteq \dom \varphi^\ast.
\end{equation}
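As a concrete one-dimensional illustration of these conjugacy facts (our own example), take $\varphi(x) = e^x$: then $\varphi^\ast(y) = y \log y - y$ for $y > 0$, $\dom \varphi^\ast = [0,\infty)$ and $\partial \varphi(\mathbb{R}) = (0,\infty) = \interior \dom \varphi^\ast$, so both inclusions in~\eqref{eq:inclusion-of-domains} can be strict at the boundary. The sketch below checks the conjugate numerically, together with the inversion of the gradients:

```python
# Legendre conjugate of phi(x) = exp(x): phi*(y) = y log y - y for y > 0,
# and nabla phi = exp, nabla phi* = log are mutually inverse on (0, inf).
import math

def phi_star(y):
    # numerical Legendre transform: sup_x (x*y - exp(x)) over a fine grid
    grid = [k / 100.0 for k in range(-2000, 2000)]
    return max(x * y - math.exp(x) for x in grid)

for y in [0.5, 1.0, 2.0, 5.0]:
    exact = y * math.log(y) - y
    assert abs(phi_star(y) - exact) < 1e-3
    # subgradient inversion: y = phi'(x)  <=>  x = (phi*)'(y) = log y
    x = math.log(y)
    assert abs(math.exp(x) - y) < 1e-12
```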
\subsection{Optimal transport map}
Here we present a result (formulated as Corollary~\ref{cor:transport-maps}) on the existence of smooth solutions to the Monge-Amp{\`e}re equation related to a certain class of optimal transport problems. Although the result is most probably well known to specialists in the theory of the Monge-Amp{\`e}re equation, we were not able to find a reference where it is explicitly stated. For this reason we explain below how the result can be derived from well-established results in optimal transport and regularity theory of the Monge-Amp{\`e}re equation.
Let us begin our discussion with the following result of McCann~\cite{McCann-1995}, which is a refinement of an earlier result of Brenier~\cite{Brenier-1987} (see also references in~\cite{McCann-1995} for related developments):
\begin{theorem}[Brenier, McCann]\label{thm:McCann}
Let $\mu$ and $\nu$ be Borel probability measures on $\mathbb{R}^n$.
\begin{enumerate}
\item[(i)] There exist a Borel probability measure $\gamma$ on $\mathbb{R}^n \times \mathbb{R}^n$ whose marginals are $\mu$ and $\nu$ (namely, for any Borel set $A \subseteq \mathbb{R}^n$, $\mu(A) = \gamma(A \times \mathbb{R}^n)$ and $\nu(A) = \gamma(\mathbb{R}^n \times A)$) and a closed convex function $\varphi \colon \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ such that the measure $\gamma$ is supported on the graph of the subdifferential of $\varphi$, i.e. the set
\[
\{(x, y) \in \mathbb{R}^n \times \mathbb{R}^n \colon y \in \partial \varphi(x)\}.
\]
\item[(ii)] In addition to (i), if $\mu$ vanishes on Borel subsets of $\mathbb{R}^n$ of Hausdorff dimension $n-1$, then $\varphi$ is differentiable $\mu$-a.e. and the map $\nabla \varphi$ (defined at points where $\varphi$ is differentiable) pushes $\mu$ forward to $\nu$, i.e. for every Borel subset $B \subseteq \mathbb{R}^n$,
\begin{equation}\label{eq:push-forward-by-gradient}
\nu(B) = \mu\big((\nabla \varphi)^{-1}(B)\big)
\end{equation}
(and in fact, the map $(\textup{id}, \nabla \varphi) \colon \mathbb{R}^n \to \mathbb{R}^n \times \mathbb{R}^n$ pushes $\mu$ forward to $\gamma$). Moreover, the map $\nabla \varphi$ satisfying~\eqref{eq:push-forward-by-gradient} is uniquely determined $\mu$-a.e. among gradients of convex functions on $\mathbb{R}^n$.
\end{enumerate}
\end{theorem}
The map $\nabla \varphi$ from Theorem~\ref{thm:McCann}(ii) is called the \emph{Brenier map}.
\begin{remark}
\begin{enumerate}
\item[(i)] For a closed convex function the graph of its subdifferential is a closed subset of $\mathbb{R}^n \times \mathbb{R}^n$.
\item[(ii)] From Theorem~\ref{thm:McCann}(i) it follows that
\begin{equation}\label{eq:supp-mu-in-cl-dom-varphi}
\textup{supp}(\mu) \subseteq \cl \{ x \in \mathbb{R}^n \colon \partial \varphi(x) \neq \emptyset \} = \cl \dom \varphi
\end{equation}
and
\begin{equation}\label{eq:supp-nu-in-cl-partial-varphi}
\textup{supp}(\nu) \subseteq \cl \partial \varphi(\mathbb{R}^n).
\end{equation}
\item[(iii)] The assumption on $\mu$ in Theorem~\ref{thm:McCann}(ii) is satisfied if $\mu$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^n$.
\item[(iv)] Using~\eqref{eq:push-forward-by-gradient} and continuity of the (sub)gradient of a convex function (see e.g.~\cite[Corollary 24.5.1]{rockafellar-convex-analysis}) one can show that
\begin{equation}\label{eq:supports-mu-nu}
\textup{supp}(\nu) = \cl \nabla \varphi(\textup{supp}(\mu))
\end{equation}
(see e.g. the proof of Theorem 2.12 in~\cite{villani-tot} for details).
\item[(v)] By the above item (ii), the exterior of the domain of $\varphi$ has $\mu$-measure zero, and the boundary of $\dom \varphi$ (as the boundary of a convex set in $\mathbb{R}^n$) has Hausdorff dimension at most $n-1$. Therefore, $\mu$-a.e. differentiability of $\varphi$ follows from the result of Anderson and Klee~\cite{Anderson-Klee} which says that a convex function on $\mathbb{R}^n$ is differentiable everywhere in its domain except for a set of Hausdorff dimension at most $n-1$.
\end{enumerate}
\end{remark}
From now on we assume that $\mu$ is a probability measure on $\mathbb{R}^n$ with a density $f > 0$, and $\nu$ is a probability measure on $\mathbb{R}^n$ with a density $g$ which is positive in an open bounded convex set $\Omega$ and $g \equiv 0$ in $\Omega^c$. Thanks to Theorem~\ref{thm:McCann} we consider a closed convex function $\varphi$ for which $\nabla \varphi$ is the Brenier map which pushes $\mu$ forward to $\nu$. Let us discuss some basic properties of $\varphi$ in this context:
\begin{enumerate}
\item[(i)] Since $\textup{supp}(\mu) = \mathbb{R}^n$, it follows from~\eqref{eq:supp-mu-in-cl-dom-varphi} that $\dom \varphi = \mathbb{R}^n$.
\item[(ii)] By the hypothesis that $f > 0$, the Lebesgue measure on $\mathbb{R}^n$ is absolutely continuous with respect to $\mu$ and hence $\varphi$ is differentiable a.e. in $\mathbb{R}^n$.
\item[(iii)] By~\eqref{eq:supports-mu-nu} it is clear that
\begin{equation}\label{eq:nabla-varphi-in-cl-omega}
\nabla \varphi(x) \in \cl \Omega \textup{ for all $x \in \mathbb{R}^n$ for which $\nabla \varphi(x)$ is defined.}
\end{equation}
Moreover, by~\eqref{eq:push-forward-by-gradient}, $\mu((\nabla \varphi)^{-1}(\bd \Omega)) = \nu(\bd \Omega) = 0$, hence the set $(\nabla \varphi)^{-1}(\bd \Omega)$ has zero Lebesgue measure. Therefore
\begin{equation}\label{eq:nabla-varphi-in-omega}
\textup{$x$-a.e. the following holds: $\varphi$ is differentiable at $x$ and $\nabla \varphi(x) \in \Omega.$}
\end{equation}
\end{enumerate}
Thanks to the regularity theory of the Monge-Amp{\`e}re equation it is known that under some additional assumptions on the densities $f$ and $g$, the Brenier map $\nabla \varphi$ is defined everywhere on $\mathbb{R}^n$ and is a $\mathcal{C}^1$ diffeomorphism onto $\Omega$. In such case, the change of variable formula justifies that~\eqref{eq:push-forward-by-gradient} is equivalent to the fact that $\varphi$ is a solution to the Monge-Amp{\`e}re equation
\begin{equation}\label{eq:monge-ampere-general}
\det \Hess \varphi(x) = \frac{f(x)}{g(\nabla \varphi(x))}.
\end{equation}
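In dimension one the situation is transparent and can be checked by hand. In the following example of ours (not taken from the regularity theory discussed here), transporting the standard Gaussian density $f$ onto the uniform density $g$ on $\Omega = (0,1)$, the Brenier map is the Gaussian distribution function $F$, and~\eqref{eq:monge-ampere-general} reduces to $F' = f$:

```python
# 1-D Monge-Ampere sanity check: f = standard Gaussian density, g = Unif(0,1).
# The Brenier map is nabla phi = F (the Gaussian CDF), and phi'' = f / (g o phi')
# reduces to F' = f, verified here by a central finite difference.
import math

def f(x):                      # standard Gaussian density (f > 0 on R)
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def F(x):                      # its CDF = the Brenier map onto Unif(0,1)
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

g = lambda y: 1.0              # uniform density on Omega = (0, 1)

h = 1e-5
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    phi_second = (F(x + h) - F(x - h)) / (2 * h)   # phi'' by central difference
    assert abs(phi_second - f(x) / g(F(x))) < 1e-8
```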
To justify this we follow the argument of Caffarelli as presented in the paper~\cite{alesker-dar-milman}.
First note that due to~\eqref{eq:nabla-varphi-in-omega} the right hand side of~\eqref{eq:monge-ampere-general} is defined $x$-a.e. Second, we use the result of Caffarelli~\cite{caffarelli-1992} (see also Theorems 4.8 and 4.10 in~\cite{villani-tot}): since $\mu$ and $\nu$ are absolutely continuous with respect to the Lebesgue measure and the support of $\nu$ is convex, we have that $\varphi$ satisfies~\eqref{eq:monge-ampere-general} in the Aleksandrov sense, i.e. the \emph{Hessian measure} $ {\det}_H \Hess \varphi$ associated to $\varphi$, defined by
\[
{\det}_H \Hess \varphi(A) = \textup{vol}_n(\partial \varphi(A)) \quad \textup{for any Borel set $A \subseteq \mathbb{R}^n,$}
\]
is absolutely continuous with respect to the Lebesgue measure and its density coincides almost everywhere with the right-hand side of~\eqref{eq:monge-ampere-general}. For a proof of this result the following relation is crucial:
\begin{equation}\label{eq:partial-varphi-cl-omega}
\partial \varphi(\mathbb{R}^n) \subseteq \cl \Omega.
\end{equation}
To see~\eqref{eq:partial-varphi-cl-omega}, use~\eqref{eq:nabla-varphi-in-cl-omega} and a general result on a subdifferential of a (closed) convex function~\cite[Theorem 25.6]{rockafellar-convex-analysis} which in our case says that for any $x \in \mathbb{R}^n$, $\partial \varphi(x)$ lies in the closure of the convex hull of limits of $\nabla \varphi(x_k)$, where $(x_k)$ runs through all sequences of points of differentiability of $\varphi$ which converge to $x$. Since $\cl \Omega$ is already convex and closed, $\partial \varphi(x) \subseteq \cl \Omega$ for any $x \in \mathbb{R}^n$.
Further on, we assume additionally that the functions $f$ and $1/f$ are bounded on compact subsets of $\mathbb{R}^n$ and that $g$ is bounded and bounded away from zero on $\Omega$. Then for any $R > 0$ there exist constants $0 < c(R) < C(R) < \infty$ such that the right hand side of~\eqref{eq:monge-ampere-general} lies between $c(R)$ and $C(R)$ almost everywhere in the ball $B(0,R)$. Since $\varphi$ satisfies~\eqref{eq:monge-ampere-general} in the Aleksandrov sense, it follows that ${\det}_H \Hess \varphi$ has a density which is bounded and bounded away from zero on compact sets. With these a priori bounds on ${\det}_H \Hess \varphi$ we apply a geometric lemma of Caffarelli~\cite[Theorem 1]{caffarelli-localization-1990} (see also~\cite[Chapter 5]{guttierez-book}) which will allow us to prove that $\varphi$ is strictly convex.
\begin{lemma}[{Caffarelli~\cite[Theorem 1]{caffarelli-localization-1990}}]\label{lem:caffarelli-strict-convexity}
Let $\Gamma \subseteq \mathbb{R}^n$ be an open bounded convex set and $\psi \colon \Gamma \to \mathbb{R}$ be a non-negative convex function. Suppose for some constants $0 < c < C < \infty$,
\[
c \,\textup{vol}_n(A) \le \textup{vol}_n(\partial \psi(A)) \le C \,\textup{vol}_n(A) \quad \textup{for any Borel set $A \subseteq \Gamma.$}
\]
If the (convex) set $\{x \in \Gamma \colon \psi(x) = 0 \}$ contains more than one point, then it has no extreme points.
\end{lemma}
\begin{corollary}
$\varphi$ is strictly convex.
\end{corollary}
\begin{proof}
Assume the opposite. Then there exist $x_0 \in \mathbb{R}^n$ and a supporting hyperplane $l$ of $\varphi$ at $x_0$ such that the closed convex set $F = \{ x \in \mathbb{R}^n \colon \varphi(x) = l(x) \}$ contains some other point $z \neq x_0$.
First suppose that $F$ contains an extreme point, say $x_1$ (not necessarily distinct from $x_0$), and take $\Gamma$ to be an (open) ball $B(0,R)$ large enough to contain $x_0, z$ and the extreme point $x_1$. Let $\psi$ be the non-negative convex function $\varphi - l$ restricted to $\Gamma$. By translation invariance of the Lebesgue measure, $\textup{vol}_n(\partial \psi(A)) = \textup{vol}_n(\partial \varphi(A))$ for any Borel set $A \subset \Gamma$ and thus we can apply Lemma~\ref{lem:caffarelli-strict-convexity}. The set $\{ x \in \Gamma \colon \psi(x) = 0 \}$ coincides with $F \cap \Gamma$ and since $F \cap \Gamma$ contains two distinct points $x_0$ and $z$, it follows from the lemma that $x_1$ cannot be an extreme point of $F \cap \Gamma$. This clearly contradicts the fact that $x_1$ is an extreme point of $F$.
Therefore $F$ is a non-empty closed convex set which has no extreme points. Hence it must contain a line (see e.g.~\cite[Corollary 18.5.3]{rockafellar-convex-analysis}). In consequence, the graph of $\varphi$ contains a line and hence all hyperplanes supporting $\varphi$ are parallel to that line. This means that $\partial \varphi(\mathbb{R}^n)$ is contained in an affine subspace of $\mathbb{R}^n$ of dimension at most $n-1$, which clearly contradicts~\eqref{eq:supp-nu-in-cl-partial-varphi}.
\end{proof}
Having established strict convexity of $\varphi$, we can conclude with a statement stronger than~\eqref{eq:partial-varphi-cl-omega}, namely that $\partial \varphi(\mathbb{R}^n) = \Omega$. The proof is based on a convexity argument.
\begin{lemma}
Let $\psi$ be a convex function on $\mathbb{R}^n$ with $\dom \psi = \mathbb{R}^n$. If $\psi$ is strictly convex then $\partial \psi(\mathbb{R}^n) = \interior \dom \psi^\ast$.
\end{lemma}
\begin{proof}
In view of~\eqref{eq:inclusion-of-domains} it is enough to prove that $\partial \psi(\mathbb{R}^n)$ is disjoint from $\bd \dom \psi^\ast$. To this end suppose that $x \in \mathbb{R}^n$ and $y \in \partial \psi(x) \cap \bd \dom \psi^\ast$. Since $y \in \bd \dom \psi^\ast$, by convexity of $\dom \psi^\ast$ there exists $u \in \mathbb{R}^n$ such that
\begin{equation}\label{eq:outer-normal-to-dom-varphi-ast}
\scalar{u, v - y} \le 0 \quad \textup{for all $v \in \dom \psi^\ast.$}
\end{equation}
On the other hand, consider $z = x + \lambda u$ with any $\lambda > 0$. By strict convexity of $\psi$,
\[ \begin{split}
\psi(z) - \psi(x) &> \scalar{z - x, y} \\
\psi(x) - \psi(z) &> \scalar{x - z, v},
\end{split}
\]
where $v$ is an arbitrary vector from $\partial \psi(z)$. Adding up the two above inequalities yields $\scalar{z - x, y - v} = \lambda \scalar{u, y - v} < 0$ which means that $\scalar{u, v - y} > 0$. Since by~\eqref{eq:inclusion-of-domains} the vector $v \in \partial \psi(z)$ belongs to $\dom \psi^\ast$, it contradicts~\eqref{eq:outer-normal-to-dom-varphi-ast}.
\end{proof}
Applying the above lemma to $\varphi$ we get that $\partial \varphi(\mathbb{R}^n)$ is an open convex subset of $\mathbb{R}^n$. The fact that $\partial \varphi(\mathbb{R}^n)$ is open combined with~\eqref{eq:partial-varphi-cl-omega} implies that $\partial \varphi(\mathbb{R}^n) \subseteq \interior \cl \Omega = \Omega$.
On the other hand, since $\partial \varphi(\mathbb{R}^n)$ is open and convex, then \eqref{eq:supp-nu-in-cl-partial-varphi} yields $\Omega \subseteq \interior \cl \partial \varphi(\mathbb{R}^n) = \partial \varphi(\mathbb{R}^n)$. Therefore the three open convex sets
\begin{equation}\label{eq:partial-varphi-equals-int-dom-varphi-ast-equals-omega}
\partial \varphi(\mathbb{R}^n) = \interior \dom \varphi^\ast = \Omega
\end{equation}
coincide.
Finally, we use the following
\begin{theorem}[Caffarelli {\cite[Theorem 1.3]{alesker-dar-milman}}]\label{thm:caffarelli}
Let $\mu(dx)=f(x) dx$ and $\nu(dx)=g(x)dx$ be two probability measures on $\mathbb{R}^n$.
Assume that $f$ is locally H{\"o}lder and strictly positive on $\mathbb{R}^n$. Assume that the restriction of $g$ to an open bounded convex set $\Omega$ is locally H{\"o}lder, bounded and bounded away from zero, and that $g \equiv 0$ in $\Omega^c$. Then any convex function $\varphi$ on $\mathbb{R}^n$ that induces the Brenier map $\nabla \varphi$ which pushes $\mu$ forward to $\nu$ belongs locally to the H{\"o}lder class $\mathcal{C}^{2,\alpha}$ for some $\alpha > 0$ and satisfies~\eqref{eq:monge-ampere-general} for all $x \in \mathbb{R}^n$.
\end{theorem}
We will also need a $\mathcal{C}^2$ convex function whose gradient pushes forward $\nu$ to $\mu$. Clearly, a natural candidate is $\varphi^\ast$. In the corollary below we state the final result we will use in the sequel.
\begin{corollary}\label{cor:transport-maps}
Assume $f$ and $g$ are as in Theorem~\ref{thm:caffarelli}.
\begin{enumerate}
\item[(i)]
There exists a strictly convex function $\varphi \in \mathcal{C}^2(\mathbb{R}^n)$ with $\Hess \varphi$ positive definite everywhere whose gradient $\nabla \varphi$ maps $\mathbb{R}^n$ onto $\Omega$, pushes $\mu$ forward to $\nu$ and thus satisfies the Monge-Amp{\`e}re equation~\eqref{eq:monge-ampere-general}.
\item[(ii)] For $\varphi$ as in (i), the Legendre conjugate $\varphi^\ast$ has $\interior \dom \varphi^\ast = \Omega$, belongs to $\mathcal{C}^2(\Omega)$, $\partial \varphi^\ast(y) = \emptyset$ for all $y \not\in \Omega$. Moreover $\nabla \varphi^\ast$ pushes $\nu$ forward to $\mu$, $\Hess \varphi^\ast$ is positive definite everywhere in $\Omega$ and $\varphi^\ast$ satisfies
\begin{equation}\label{eq:monge-ampere-general-inversed}
\det \Hess \varphi^\ast(y) = \frac{g(y)}{f(\nabla \varphi^\ast(y))} \quad \textup{for all $y \in \Omega.$}
\end{equation}
\end{enumerate}
\end{corollary}
\begin{proof}
Consider $\varphi$ as in Theorem~\ref{thm:caffarelli}. In order to complete the proof of (i) it is enough to note that \eqref{eq:partial-varphi-equals-int-dom-varphi-ast-equals-omega} becomes $\nabla \varphi(\mathbb{R}^n) = \interior \dom \varphi^\ast = \Omega$.
(ii) Strict convexity of $\varphi$ allows us to conclude that $\nabla \varphi$ is a $\mathcal{C}^1$ bijection from $\mathbb{R}^n$ onto $\Omega$. Combining this with~\eqref{eq:subgrad-and-legendre} shows that $\partial \varphi^\ast(y) = \{ (\nabla \varphi)^{-1}(y) \}$ for $y \in \Omega$ and $\partial \varphi^\ast(y) = \emptyset$ for $y \not\in \Omega$. The uniqueness of the subgradient of $\varphi^\ast$ at each point of $\Omega$ implies that $\varphi^\ast$ is differentiable everywhere in $\interior \dom \varphi^\ast = \Omega$ and that the map $\nabla \varphi^\ast \colon \Omega \to \mathbb{R}^n$ is the inverse map of $\nabla \varphi$. This is already sufficient to justify that $\nabla \varphi^\ast$ pushes $\nu$ forward to $\mu$. In order to show that $\varphi^\ast$ is in fact $\mathcal{C}^2(\Omega)$ and satisfies~\eqref{eq:monge-ampere-general-inversed}, it is enough to use that the Jacobian of the map $\nabla \varphi$ (i.e. $\det \Hess \varphi$) does not vanish and then apply the inverse function theorem to obtain that the map $\nabla \varphi^\ast$ is $\mathcal{C}^1(\Omega)$ and that its derivative $\Hess \varphi^\ast(y)$ equals $(\Hess \varphi(x))^{-1}$, where $y \in \Omega$ and $x = \nabla \varphi^\ast(y) \in \mathbb{R}^n$. Thus~\eqref{eq:monge-ampere-general-inversed} follows from~\eqref{eq:monge-ampere-general}.
\end{proof}
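The determinant identity at the end of the proof can be spelled out as follows (this merely unpacks the inverse function theorem step; $x$ and $y$ are related by $x = \nabla \varphi^\ast(y)$, i.e. $y = \nabla \varphi(x)$):

```latex
\[
\det \Hess \varphi^\ast(y)
= \det \big(\Hess \varphi(x)\big)^{-1}
= \frac{1}{\det \Hess \varphi(x)}
= \frac{g(\nabla \varphi(x))}{f(x)}
= \frac{g(y)}{f(\nabla \varphi^\ast(y))},
\]
where the third equality is~\eqref{eq:monge-ampere-general} written in the form $\det \Hess \varphi(x) = f(x)/g(\nabla \varphi(x))$.
```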
\subsection{Classes of test functions}
For each $k=1,2,\ldots,m$ fix a measurable function $f_k \colon H_k \to [0, \infty]$ of integral one such that
\begin{itemize}
\item for $1 \le i \le m^+$, $f_i$ is locally Lipschitz, bounded and bounded away from zero on some bounded open convex subset of $H_i$, and vanishes outside this set;
\item for $m^+ < j \le m$, $f_j$ is locally Lipschitz and strictly positive in its whole domain $H_j$.
\end{itemize}
The target functions are chosen as follows. Fix $R > 0$ and consider any $m$-tuple $(A_1, \ldots, A_m) \in \Lambda$ (for the definition of $\Lambda$ refer to Subsection~\ref{subsec:centered-gaussians}) and also put $A_0 = Q_+$ and $A_{m+1} = Q_-$. For $k = 0,1,\ldots, m+1$ define
\[
g_k(y) = (\det A_k)^{-1/2} \exp(-\pi \scalar{A_k^{-1} y, y}).
\]
The target functions will be $\tilde{g}_k$ defined as
\begin{align*}
\tilde{g}_i &= g_i \quad \textup{for $1 \le i \le m^+,$} \\
\tilde{g}_0 &= g_0, \\
\tilde{g}_{m+1} &= g_{m+1}, \\
\tilde{g}_j &= \lambda_j g_j \Ind{B_{H_j}(0,R)} \quad \textup{for $m^+ < j \le m,$}
\end{align*}
where $\lambda_j > 1$ is a normalizing constant (such that $\int_{H_j} \tilde{g}_j = 1$).
\subsection{Transportation argument}
For each $i=1,2,\ldots,m^+$ let $\varphi_i \colon H_i \to \mathbb{R} \cup \{ +\infty \}$ be the function $\varphi^\ast$ from Corollary~\ref{cor:transport-maps}(ii) for probability measures $\mu$ and $\nu$ on $H_i$ having the densities $\tilde{g}_i$ and $f_i$ respectively. Each $\varphi_i$ belongs to $\mathcal{C}^2(\interior \dom \varphi_i)$, is lower semi-continuous (as the Legendre transform of a convex function) and satisfies
\begin{equation}\label{eq:vp-i-has-empty-subdifferential-outside-int-dom}
\partial \varphi_i(x) = \emptyset \quad \textup{for all $x \not\in \interior \dom \varphi_i$.}
\end{equation}
For each $j=1+m^+, \ldots, m$ let $\varphi_j \colon H_j \to \mathbb{R}$ be the function $\varphi$ from Corollary~\ref{cor:transport-maps}(i) for probability measures $\mu$ and $\nu$ on $H_j$ having the densities $f_j$ and $\tilde{g}_j$ respectively. Each $\varphi_j$ belongs to $\mathcal{C}^2(H_j)$ and
\begin{equation}\label{eq:vp-j-to-B-0-R}
\nabla \varphi_j(x) \in B_{H_j}(0,R) \quad \textup{for all $x \in H_j.$}
\end{equation}
Additionally put $\varphi_0(x) = \frac12 \scalar{Q_+ x, x}$ and $\varphi_{m+1}(x) = \frac12 \scalar{Q_- x, x}$.
For $k=0,1,2,\ldots, m+1$, put $T_k = \nabla \varphi_k$ and note that by Corollary~\ref{cor:transport-maps},
\begin{equation}\label{eq:monge-ampere}
f_k(x) = \tilde{g}_k(T_k(x)) \det dT_k(x)
\end{equation}
and
\begin{equation}\label{eq:dTk-symmetric-positive-definite}
dT_k(x) = \Hess \varphi_k(x) \textup{ is symmetric positive definite}
\end{equation}
for all $x \in \interior \dom \varphi_k$.
For $x \in H$ put
\begin{align}
\varphi_+(x) &= \sum_{1\le i\le m^+} c_i \varphi_i(B_i x), \nonumber\\
\varphi_-(x) &= \sum_{m^+ < j \le m} (-c_j) \varphi_j(B_j x), \nonumber \\
\varphi(x) &= \varphi_0(B_0 x) + \varphi_+(x) - \varphi_-(x) - \varphi_{m+1}(B_{m+1} x)=\sum_{k=0}^{m+1} c_k \varphi_k(B_k x). \label{def:phi}
\end{align}
On the open domain $S = \bigcap_{i=1}^{m^+} B_i^{-1}(\interior \dom \varphi_i) \subset H$, which is non-empty thanks to surjectivity of $B_+$, define the change of variable map $\theta \colon S \to H$ by
\begin{align*}
\theta(x) = \nabla \varphi(x) = \sum_{k=0}^{m+1} c_k B_k^\ast T_k(B_k x).
\end{align*}
This map is $\mathcal{C}^1$ and its differential equals
\begin{equation}\label{eq:dtheta}
\begin{split}
d\theta(x) &= \Hess \varphi(x) = \sum_{k=0}^{m+1} c_k B_k^\ast dT_k(B_k x) B_k \\
&= B_0^\ast Q_+ B_0 - B_{m+1}^\ast Q_- B_{m+1} + \sum_{k=1}^{m} c_k B_k^\ast dT_k(B_k x) B_k \\
&= Q + \sum_{k=1}^{m} c_k B_k^\ast dT_k(B_k x) B_k.
\end{split}
\end{equation}
Combining the above with~\eqref{def:BL-constant} and~\eqref{eq:dTk-symmetric-positive-definite} we obtain that for $x \in S$,
\begin{equation}\label{eq:det-dtheta-bound}
\det d\theta(x) \le D \prod_{k=1}^m \big(\det dT_k(B_k x)\big)^{c_k}
\end{equation}
whenever $d\theta(x)$ is positive definite. Since~\eqref{eq:det-dtheta-bound} remains true when $d\theta(x)$ is merely positive semi-definite, we will consider the subdomain of $S$,
\[
S_+ = \{ x \in S \colon d\theta(x) \textup{ is positive semi-definite} \}
\]
on which~\eqref{eq:det-dtheta-bound} is valid. Note that by continuity of the map $S \ni x \mapsto d\theta(x)$, $S_+$ is a closed subset of $S$ and in particular $S_+$ is a measurable subset of $H$.
As announced above, the following lemma is crucial in our argument. We defer its proof to Subsection~\ref{subsec:surjectivity-of-theta}.
\begin{lemma}\label{lem:theta-is-surjective}
The map $\theta_{| S_+} \colon S_+ \to H$ is surjective.
\end{lemma}
We are now in a position to establish a sharp lower bound on $J(f_1, \ldots, f_m)$.
Starting from~\eqref{eq:J-as-product-of-f} and using the Monge-Amp\`ere equations \eqref{eq:monge-ampere} we get
\begin{align*}
J(f_1, \ldots, f_m) &\ge \sqrt{\frac{\det Q_+}{\det Q_-}} \int_{S_+} \prod_{k=0}^{m+1} f_k^{c_k}(B_k x) \, dx
\\
&= \sqrt{\frac{\det Q_+}{\det Q_-}} \int_{S_+} \prod_{k=0}^{m+1} \big( \tilde{g}_k(T_k(B_k x)) \det dT_k(B_k x)\big)^{c_k} \, dx \\
&= \sqrt{\frac{\det Q_+}{\det Q_-}} \int_{S_+} \left( \prod_{k=0}^{m+1} \tilde{g}_k^{c_k}(T_k(B_k x))\right) \left(\prod_{k=1}^m (\det dT_k(B_k x))^{c_k} \right) \, dx \\
&\ge D^{-1} \sqrt{\frac{\det Q_+}{\det Q_-}} \int_{S_+} \left(\prod_{k=0}^{m+1} \tilde{g}_k^{c_k}(T_k(B_k x))\right) \det d\theta(x) \, dx \\
&= (*),
\end{align*}
where the latter inequality comes from~\eqref{eq:det-dtheta-bound}.
Setting $\lambda = \prod_{j=m^+ + 1}^m \lambda_j^{c_j}$ and using the point-wise estimate $\tilde{g}_j^{c_j} \ge \lambda_j^{c_j} g_j^{c_j}$ for $m^+ < j \le m$ we continue with the bound
\begin{align}
(*) &\ge
\lambda D^{-1} \sqrt{\frac{\det Q_+}{\det Q_-}}
\int_{S_+} \left(\prod_{k=0}^{m+1} g_k^{c_k}(T_k(B_k x))\right) \det d\theta(x) \, dx \nonumber\\
&\ge \lambda D^{-1} \sqrt{\frac{\det Q_+}{\det Q_-}} \int_{S_+} \left(\inf_{\theta(x) = \sum_{k=0}^{m+1} c_k B_k^\ast y_k} \prod_{k=0}^{m+1} g_k^{c_k}(y_k)\right) \det d\theta(x) \, dx \nonumber\\
&\ge \lambda D^{-1} \sqrt{\frac{\det Q_+}{\det Q_-}} \int_H \inf_{z = \sum_{k=0}^{m+1} c_k B_k^\ast y_k} \prod_{k=0}^{m+1} g_k^{c_k}(y_k) \, dz \label{eq:dual-inverse-proof}\\
&= \lambda D^{-1} \left( \prod_{k=1}^m (\det A_k)^{-c_k/2} \right)
\int_H \exp\left(-\pi \sup_{z = \sum_{k=0}^{m+1} c_k B_k^\ast y_k} \sum_{k=0}^{m+1} c_k \scalar{A_k^{-1} y_k, y_k}\right) \, dz\nonumber\\
&= (**), \nonumber
\end{align}
where the last inequality follows from the area formula for $\mathcal{C}^1$ maps \cite[Theorem 3.2.5]{federer}
and the fact that the map $\theta_{| S_+} \colon S_+ \to H$ is surjective (Lemma~\ref{lem:theta-is-surjective}). Using Lemma~\ref{lem:quad-inverse}, we finish the above estimate with
\begin{align*}
(**) &=
\lambda D^{-1} \left(\prod_{k=1}^m (\det A_k)^{-c_k/2} \right) \int_H \exp\left(-\pi \biggscalar{\Big(\sum_{k=0}^{m+1} c_k B_k^\ast A_k B_k \Big)^{-1} z, z} \right) \, dz \\
&= \lambda D^{-1} \left( \frac{\det\big(Q + \sum_{k=1}^{m} c_k B_k^\ast A_k B_k \big)}{\prod_{k=1}^m (\det A_k)^{c_k}} \right)^{1/2}.
\end{align*}
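The Gaussian integral evaluated in the first equality above is the standard formula, which we recall for convenience (here $M$ is assumed to be a symmetric positive definite operator on $H$ with eigenvalues $\mu_l > 0$; integrate in an orthonormal eigenbasis):

```latex
\[
\int_H \exp\left(-\pi \scalar{M^{-1} z, z}\right) dz
= \prod_{l=1}^{\dim H} \int_{\mathbb{R}} e^{-\pi t^2/\mu_l} \, dt
= \prod_{l=1}^{\dim H} \mu_l^{1/2}
= (\det M)^{1/2},
\]
applied with $M = Q + \sum_{k=1}^{m} c_k B_k^\ast A_k B_k$.
```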
Thanks to \eqref{def:BL-constant}, taking the supremum over $(A_k)_{k=1}^m \in \Lambda$ and letting $R \to \infty$ (which results in $\lambda \uparrow 1$) yields the desired inequality $J(f_1, \ldots, f_m) \ge D^{-1/2}$.
\subsection{Surjectivity of the map $\theta$}\label{subsec:surjectivity-of-theta}
\begin{lemma}\label{lem:difference-of-convex-attains-infimum}
Let $f \colon H \to \mathbb{R} \cup \{+\infty\}$ be convex, lower semi-continuous and $g \colon H \to \mathbb{R}$ be convex. Assume that $\dom f \neq \emptyset$ and $\partial f(x) = \emptyset$ at every $x \in \bd \dom f$. If $f(x) - g(x) \to + \infty$ as $|x| \to +\infty$ then $f-g$ attains its infimum at a point in $\interior \dom f$.
\end{lemma}
\begin{proof}
Note that $f-g$ is lower semi-continuous which combined with the hypothesis that $f-g \to +\infty$ at infinity implies that the sets $A_r = \{ x \in H \colon f(x)-g(x) \le r\}$ are compact for all $r \in \mathbb{R}$. Since $\dom f \neq \emptyset$, $A_r$ is non-empty for $r$ large enough. Therefore $f-g$ attains its infimum at some point $x \in \dom f$. Suppose $x \in \bd \dom f$. Take any $x^\ast \in \partial g(x)$ (note that the subdifferential of $g$ is everywhere non-empty). By hypothesis, $\partial f(x) = \emptyset$ hence we can find $y \in H$ such that
\begin{align}\label{ineq:empty-differential}
f(y) < f(x) + \scalar{x^\ast, y-x}.
\end{align}
On the other hand we have
\[
g(y) \ge g(x) + \scalar{x^\ast, y-x},
\]
which combined with~\eqref{ineq:empty-differential} gives
\[
f(y) - g(y) < f(x) - g(x) = \inf (f-g)
\]
which is absurd. Hence $x \not\in \bd \dom f$, i.e. $f-g$ attains its infimum at a point of $\interior \dom f$.
\end{proof}
We will need one more lemma about the function $\varphi$ defined in~\eqref{def:phi}.
\begin{lemma}\label{lem:vp-superlinear}
The function $\varphi$ is superlinear, i.e.
\[
\lim_{|x| \to \infty} \frac{\varphi(x)}{|x|} = +\infty.
\]
\end{lemma}
\begin{proof}
Consider the compact set
\[
F = \prod_{i=1}^{m^+} \cl \dom \varphi_i.
\]
Obviously
\[
\dom \varphi_+ = \bigcap_{i=1}^{m^+} B_i^{-1}(\dom \varphi_i) \subseteq B_+^{-1}(F).
\]
Using~\eqref{eq:compact-image} we get
\begin{equation}\label{eq:vp-mplus1-bounded}
\sup_{x \in \dom \varphi_+} \varphi_{m+1}(B_{m+1} x) = C_1 < \infty.
\end{equation}
For each $1 \le i \le m^+$, since $\dom \varphi_i$ is bounded, we have $\inf_{H_i} \varphi_i > -\infty$ and thus $\inf_{H} \varphi_+ > -\infty$. Therefore for some constant $C_2 < \infty$ we have
\begin{equation}\label{eq:vp-plus-at-least-quadratic}
\varphi_+(x) \ge |B_+ x|^2 - C_2
\end{equation}
for all $x \in H$: outside the set $B_+^{-1}(F)$ the left hand side equals $+\infty$, while on $B_+^{-1}(F)$ the function $|B_+ x|^2$ is bounded, so it suffices to take $C_2$ large enough.
Combining~\eqref{eq:vp-plus-at-least-quadratic} with the fact that $\varphi_0$ is a positive definite quadratic function on $H_0$, we get
\[
\varphi_0(B_0 x) + \varphi_+(x) \ge \varepsilon |B_0 x|^2 + |B_+ x|^2 - C_2 \ge \varepsilon' |x|^2 - C_2,
\]
for some $\varepsilon, \varepsilon' > 0$, where in the last inequality we used injectivity of the map $B_{0+} = (B_0, B_+)$. Combining the above estimate with~\eqref{eq:vp-mplus1-bounded} we get
\[
\varphi(x)+\varphi_-(x)=\varphi_0(B_0 x) + \varphi_+(x) - \varphi_{m+1}(B_{m+1} x) \ge \varepsilon' |x|^2 - C_1 - C_2,
\]
hence the function $\varphi + \varphi_-$ is superlinear.
Due to~\eqref{eq:vp-j-to-B-0-R} the function $\varphi_-$ is Lipschitz. Therefore $\varphi$ is also superlinear.
\end{proof}
Now we are ready to establish our claim about the map $\theta$.
\begin{proof}[Proof of Lemma~\ref{lem:theta-is-surjective}]
Consider the function $f \colon H \to \mathbb{R} \cup \{+\infty\}$ defined by
\begin{equation}\label{eq:def-function-f}
f(x) = \varphi_0(B_0 x) + \varphi_+(x)=\sum_{i=0}^{m^+} c_i \varphi_i(B_ix).
\end{equation}
Clearly $f$ is convex and lower semi-continuous. Note also that
\[
\dom f = \bigcap_{1 \le i \le m^+} \dom(\varphi_i \circ B_i) = \bigcap_{1 \le i \le m^+} B_i^{-1}(\dom \varphi_i)
\]
and
\[
\interior \dom f = \bigcap_{1 \le i \le m^+} B_i^{-1}(\interior \dom \varphi_i),
\]
i.e. $\interior \dom f$ coincides with the domain $S$.
Using Theorems 23.8 and 23.9 from~\cite{rockafellar-convex-analysis} we express the subdifferential of $f$ in terms of subdifferentials of $\varphi_0$ and $\varphi_i$ with $1 \le i \le m^+$. Namely for all $x \in H$ we have
\[
\partial f(x) = \sum_{0 \le i \le m^+} c_i B_i^\ast \partial \varphi_i(B_i x)
\]
where the summation means the Minkowski sum of sets (in $H$). (The above formula holds with equality, rather than merely with the inclusion $\supseteq$, provided e.g. that $\bigcap_{i=0}^{m^+} \interior \dom (\varphi_i \circ B_i) \neq \emptyset$; this condition is satisfied here because the domain $S$ is non-empty.)
Combining the above with~\eqref{eq:vp-i-has-empty-subdifferential-outside-int-dom} we obtain that if $x \not\in S$ (i.e. for some $1 \le i \le m^+$, $B_i x \not\in \interior \dom \varphi_i$) then $\partial f(x) = \emptyset$.
Fix any $y_0 \in H$. We claim that the $\mathcal{C}^2$ function $S \ni x \mapsto \varphi(x) - \scalar{x, y_0}$ attains a local minimum at some point $x_0 \in S$. This will allow us to establish the lemma. Indeed, since $S$ is open, the gradient of this function vanishes at $x_0$, i.e. $\nabla \varphi(x_0) - y_0 = 0$, and $\Hess \varphi(x_0)$ is positive semi-definite, which means that $\theta(x_0) = y_0$ and $x_0 \in S_+$.
Finally, let us prove the claim that the function $S \ni x \mapsto \varphi(x) - \scalar{x, y_0}$ attains its infimum. Besides the function $f$ defined in~\eqref{eq:def-function-f}, consider the convex function $g \colon H \to \mathbb{R}$,
\[
g(x) = \varphi_-(x) + \varphi_{m+1}(B_{m+1} x) + \scalar{x, y_0}.
\]
Obviously
\begin{equation}\label{eq:f-minus-g-as-vp-and-scalar-prod}
f(x) - g(x) = \varphi(x) - \scalar{x, y_0}.
\end{equation}
By Lemma~\ref{lem:vp-superlinear} and the fact that $\scalar{\cdot, y_0}$ is Lipschitz we obtain that $f - g$ is superlinear at infinity, so in particular $f(x) - g(x) \to \infty$ as $|x| \to \infty$. Since the subdifferential of $f$ is empty outside $S = \interior \dom f$, we can use Lemma~\ref{lem:difference-of-convex-attains-infimum} in order to conclude.
\end{proof}
\subsection{Approximation argument}
For non-negative, integrable functions $f_i$ ($1 \le i \le m^+$) and $f_j$ ($m^+ < j \le m$) denote
\[
\mathcal{I}((f_i), (f_j))(x) = e^{-\mathcal{Q_+}(B_0 x)} e^{\mathcal{Q_-}(B_{m+1} x)} \prod_{i=1}^{m^+} f_i^{c_i}(B_i x) \prod_{j=1+m^+}^m f_j^{c_j}(B_j x) \quad\textup{for $x \in H$.}
\]
We proved that under the hypothesis of Theorem~\ref{theo:gauss-mini},
\begin{equation}\label{eq:gauss-mini-for-class-of-functions}
\int_H \mathcal{I}((f_i), (f_j)) \ge K \prod_{1\le i\le m^+} \left(\int_{H_i} f_i\right)^{c_i} \prod_{m^+<j\le m} \left(\int_{H_j} f_j\right)^{c_j},
\end{equation}
for all $f_i \in \mathcal{F}_i^0$ ($1 \le i \le m^+$) and for all $f_j \in \mathcal{F}_j^0$ ($m^+ < j \le m$), where $K = \inf_{\mathcal{CG}} J$ and
\begin{itemize}
\item $\mathcal{F}_i^0$ is the class of non-negative functions on $H_i$ which are locally Lipschitz, bounded and bounded away from zero on an open bounded convex subset of $H_i$, and vanish outside this set,
\item $\mathcal{F}_j^0$ is the class of strictly positive and locally Lipschitz functions on $H_j$.
\end{itemize}
We proceed in three steps $s = 1,2,3$. In each step we consider different classes of functions $\mathcal{F}_i^s$ and $\mathcal{F}_j^s$ for which we prove~\eqref{eq:gauss-mini-for-class-of-functions} to be valid. At the final step $s=3$, the classes $\mathcal{F}_i^3$, $\mathcal{F}_j^3$ will consist of all non-negative, integrable functions.
\paragraph{\bf{Step 1.}} Fix $f_i \in \mathcal{F}_i^1$ ($1 \le i \le m^+$) and $f_j \in \mathcal{F}_j^1$ ($m^+ < j \le m$) where
\begin{itemize}
\item $\mathcal{F}_i^1$ is the class of non-negative bounded measurable functions on $H_i$ with compact support,
\item $\mathcal{F}_j^1$ is the class of positive bounded Lipschitz functions $f_j$ on $H_j$ for which $f_j(y)^{-1}$ is bounded from above by a polynomial in $|y|$.
\end{itemize}
Note that $f_j$ belongs to $\mathcal{F}_j^0$ as well. For each $i$, take $R_i > 0$ such that the ball $B_{H_i}(0,R_i)$ contains the support of $f_i$. Consider the sequence of functions
\[
f_{i,n} = f_i \ast \phi_{i,n} + \frac{1}{n} \Ind{B_{H_i}(0,R_i + 1)},
\]
where $\phi_{i,n}(x) = \kappa_{i,n} \, \textup{dist}(x, H_i \setminus B_{H_i}(0, 1/n))$ and $\kappa_{i,n} > 0$ is a normalizing constant chosen so that $\int_{H_i} \phi_{i,n} = 1$.
Since $\phi_{i,n}$ are bounded and Lipschitz and $f_i$ are non-negative, measurable and compactly supported, we have $f_{i,n} \in \mathcal{F}_i^0$. Moreover $\int_{H_i} f_{i,n} \to \int_{H_i} f_i$ and, by the Lebesgue differentiation theorem, $f_{i,n} \to f_i$ a.e. In other words, the set $\Omega_i \subset H_i$ where the latter convergence holds has a negligible complement. Consequently, the convergence $\mathcal{I}((f_{i,n}), (f_j)) \to \mathcal{I}((f_i), (f_j))$ holds at every point of the set $\Omega := \bigcap_{i=1}^{m^+} B_i^{-1}(\Omega_i)$. Since the maps $B_i$ are surjective, the complement of $\Omega$ is negligible. Hence $\mathcal{I}((f_{i,n}), (f_j)) \to \mathcal{I}((f_i), (f_j))$ a.e.
In order to verify~\eqref{eq:gauss-mini-for-class-of-functions} it is enough to ensure that
\begin{equation}\label{eq:convergence-of-J}
\lim_{n \to \infty} \int_H \mathcal{I}((f_{i,n}), (f_j)) = \int_H \mathcal{I}((f_i), (f_j)).
\end{equation}
To this end we find an integrable function on $H$ which dominates $\mathcal{I}((f_{i,n}), (f_j))$ uniformly in $n$ and then apply the Lebesgue dominated convergence theorem.
First, since $f_i$ are bounded, all $f_{i,n}$ are bounded uniformly in $n$ and thus for some constant $C > 0$ and a compact set $F \subseteq H_1 \times \cdots \times H_{m^+}$,
\[
\prod_{i=1}^{m^+} f_{i,n}^{c_i}(B_i x) \le C \Ind{F}(B_+ x)\quad\textup{for all $x \in H$.}
\]
Second, by~\eqref{eq:compact-image}, $B_{m+1}(B_+^{-1}(F))$ is compact, hence after adjusting the constant $C$, we also have
\begin{equation}\label{eq:domination-of-integrand-1}
e^{\mathcal{Q}_-(B_{m+1} x)} \prod_{i=1}^{m^+} f_{i,n}^{c_i}(B_i x) \le C \Ind{F}(B_+ x)\quad\textup{for all $x \in H$.}
\end{equation}
Since the map $(B_0, B_+)$ is a linear isomorphism, $\mathcal{I}((f_{i,n}), (f_j))$ would be compactly supported if only the function $e^{-\mathcal{Q_+}}$ were compactly supported. Obviously it is not (unless $\mathcal{Q_+}$ is trivial), but we can still use a compactness argument by decomposing $e^{-\mathcal{Q_+}}$ into slices, namely
\[ \begin{split}
e^{-\mathcal{Q_+}(y)} &= \int_0^1 \Ind{\{e^{-\mathcal{Q_+}} \ge u\}}(y) \, du
= \int_0^\infty 2t e^{-t^2} \Ind{\{\mathcal{Q_+} \le t^2\}}(y) \, dt \\
&= \int_0^\infty 2 t e^{-t^2} \Ind{t \{\mathcal{Q_+} \le 1\}}(y) \, dt.
\end{split} \]
Combining the above with~\eqref{eq:domination-of-integrand-1}, we can bound $\mathcal{I}((f_{i,n}), (f_j))(x)$ pointwise and uniformly in $n$ by a constant times
\[ \begin{split}
\int_0^\infty t e^{-t^2} \Ind{t \{\mathcal{Q_+} \le 1\}}(B_0 x) \Ind{F}(B_+ x) \, dt
\prod_{j = 1+m^+}^m f_j^{c_j}(B_j x) \\
\le
\int_0^\infty t e^{-t^2} \Ind{(t+1)B_H(0,R)}(x) \, dt
\prod_{j = 1+m^+}^m f_j^{c_j}(B_j x)
\end{split} \]
with $R > 0$ large enough. Since $f_j(y)^{-1}$ is bounded from above by a polynomial in $|y|$, we finally obtain a pointwise upper bound
\[
\mathcal{I}((f_{i,n}), (f_j))(x) \le \int_0^\infty (C_1 t^\alpha + C_2) e^{-t^2} \Ind{(t+1)B_H(0,R)}(x) \, dt
\]
with some constants $C_1, C_2, \alpha > 0$, which is clearly an integrable function of $x \in H$.
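The slicing decomposition of $e^{-\mathcal{Q_+}}$ used above is the layer cake representation followed by the substitution $u = e^{-t^2}$ (with $du = -2te^{-t^2}\,dt$ and the limits of integration reversed); spelled out:

```latex
\[
\int_0^1 \Ind{\{e^{-\mathcal{Q_+}} \ge u\}}(y) \, du
= \int_0^\infty \Ind{\{e^{-\mathcal{Q_+}} \ge e^{-t^2}\}}(y) \, 2t e^{-t^2} \, dt
= \int_0^\infty 2t e^{-t^2} \Ind{\{\mathcal{Q_+} \le t^2\}}(y) \, dt,
\]
and $\{\mathcal{Q_+} \le t^2\} = t\{\mathcal{Q_+} \le 1\}$ because $\mathcal{Q_+}$ is $2$-homogeneous.
```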
\medskip
\paragraph{\bf{Step 2.}} Fix $f_i \in \mathcal{F}_i^2$ ($1\le i \le m^+$) and $f_j \in \mathcal{F}_j^2$ ($m^+ < j \le m$) where
\begin{itemize}
\item $\mathcal{F}_i^2 = \mathcal{F}_i^1$,
\item $\mathcal{F}_j^2$ is the class of non-negative integrable functions on $H_j$.
\end{itemize}
Let $\phi(u) = \frac{1}{\pi(1+u^2)}$ be the density of the standard Cauchy distribution and for each $j = 1+m^+, \ldots, m$ and $\lambda > 0$ put
\[
\phi_{j,\lambda}(x) = \lambda^{\dim H_j} \prod_{l=1}^{\dim H_j} \phi(\lambda x_l).
\]
Fix $\varepsilon > 0$ and for each $j$ and $n$ put
\[
f_{j,n} = (f_j + \varepsilon \phi_{j,1}) \ast \phi_{j,n}.
\]
For each $j$ and $n$, $\phi_{j,n}$ is bounded and Lipschitz and $f_j$ is integrable, hence $f_{j,n}$ is also bounded and Lipschitz. From the estimate
\[
f_{j,n} \ge \varepsilon \phi_{j,1} \ast \phi_{j,n} = \varepsilon \phi_{j,\frac{n}{n+1}}
\ge \varepsilon 2^{-\dim H_j} \phi_{j,1}
\]
(the equality above follows from the fact that the Cauchy distribution is $1$-stable), we obtain that
\begin{equation}\label{eq:polynomial-decay}
f_{j,n}(y)^{-1} \quad\text{is bounded from above by a polynomial in $|y|$ uniformly in $n$.}
\end{equation}
Hence we proved that $f_{j,n} \in \mathcal{F}_j^1$.
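The $1$-stability identity $\phi_{j,1} \ast \phi_{j,n} = \phi_{j,\frac{n}{n+1}}$ used above can be verified coordinate-wise; the following is a sketch, using only the standard fact that scale parameters of Cauchy densities add under convolution:

```latex
\[
\big(\lambda\phi(\lambda\,\cdot\,)\big) \ast \big(\mu\phi(\mu\,\cdot\,)\big)(u)
= \sigma\phi(\sigma u),
\qquad \frac{1}{\sigma} = \frac{1}{\lambda} + \frac{1}{\mu},
\]
so taking $\lambda = 1$ and $\mu = n$ in each coordinate gives $\sigma = \frac{n}{n+1}$. The subsequent lower bound follows from the elementary estimate
\[
\sigma\phi(\sigma u) = \frac{\sigma}{\pi(1+\sigma^2 u^2)} \ge \frac{\sigma}{\pi(1+u^2)} \ge \frac12 \phi(u)
\qquad \textup{for $\sigma \in [\tfrac12, 1]$ and $u \in \mathbb{R}$.}
\]
```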
By the result of Step 1 (i.e.~\eqref{eq:gauss-mini-for-class-of-functions} for $(f_i), (f_{j,n})$),
\[ \begin{split}
\int_H \mathcal{I}((f_i), (f_{j,n})) &\ge K \prod_{1\le i\le m^+} \left( \int_{H_i} f_i \right)^{c_i} \prod_{m^+ < j\le m} \left( \int_{H_j} f_{j,n} \right)^{c_j} \\
&= K \prod_{1\le i\le m^+} \left( \int_{H_i} f_i \right)^{c_i} \prod_{m^+ < j\le m} \left( \varepsilon + \int_{H_j} f_j\right)^{c_j},
\end{split} \]
where the last equality follows from $\int_{H_j} f_{j,n} = \varepsilon + \int_{H_j} f_j$. Obviously
\[
\int_H \mathcal{I}((f_i), (f_j)) \ge \int_H \mathcal{I}((f_i), (f_j + \varepsilon \phi_{j,1})),
\]
so proving
\begin{equation}\label{eq:convergence-of-J-2}
\lim_{n \to \infty} \int_H \mathcal{I}((f_i), (f_{j,n})) = \int_H \mathcal{I}((f_i), (f_j + \varepsilon \phi_{j,1}))
\end{equation}
would yield
\[
\int_H \mathcal{I}((f_i), (f_j)) \ge K \prod_{1\le i\le m^+} \left( \int_{H_i} f_i \right)^{c_i} \prod_{m^+ < j\le m} \left( \int_{H_j} f_j + \varepsilon \right)^{c_j}
\]
and in consequence~\eqref{eq:gauss-mini-for-class-of-functions} by letting $\varepsilon \to 0$.
Since $f_{j,n} \to f_j + \varepsilon \phi_{j,1}$ a.e., we have $\mathcal{I}((f_i), (f_{j,n})) \to \mathcal{I}((f_i), (f_j + \varepsilon \phi_{j,1}))$ a.e. In view of~\eqref{eq:polynomial-decay} and the fact that $f_i$ are bounded with compact support, we can proceed as in Step 1 to find an integrable function on $H$ which dominates $\mathcal{I}((f_i), (f_{j,n}))$ for all $n$ and conclude with~\eqref{eq:convergence-of-J-2}.
\medskip
\paragraph{\bf{Step 3.}} Fix $f_i \in \mathcal{F}_i^3$ ($1\le i \le m^+$) and $f_j \in \mathcal{F}_j^3$ ($m^+ < j \le m$) where
\begin{itemize}
\item $\mathcal{F}_i^3$ is the class of non-negative integrable functions on $H_i$,
\item $\mathcal{F}_j^3 = \mathcal{F}_j^2$.
\end{itemize}
We approximate $f_i$ with $f_{i,n} = \min(f_i, n) \Ind{B_{H_i}(0,n)}$ which belong to $\mathcal{F}_i^2$. The convergence as in~\eqref{eq:convergence-of-J} follows from the monotone convergence theorem. We conclude with~\eqref{eq:gauss-mini-for-class-of-functions} by using the result of Step 2 for the functions $(f_{i,n}), (f_j)$.
\section{Geometric Brascamp-Lieb inequality}\label{sec:geometric}
We study specific non-degenerate situations for which $\inf J=1$ and some extremizing functions can be identified. They are related to
geometric Brascamp-Lieb inequalities and the decomposition of the identity~\eqref{eq:decomposition-identity-1d}. More precisely, they are characterized by the following conditions:
\begin{align}
\label{cond:geometric-condition-0}
B_k B_k^\ast &= \textup{Id}_{H_k} \quad \text{for $k = 1,\ldots, m,$} \\
\label{cond:geometric-condition}
Q + \sum_{k=1}^m c_k B_k^\ast B_k &= \textup{Id}_{H}.
\end{align}
\subsection{Finding $\inf_{\mathcal{CG}} J$}
The aim of this subsection is to prove that if the non-degeneracy condition~\eqref{eq:non-degeneracy2} and the geometric conditions~\eqref{cond:geometric-condition-0} and~\eqref{cond:geometric-condition} hold, then the infimum of $J$ on centered Gaussian functions is equal to 1 and is achieved when $f_k(\cdot)=\exp(-\pi|\,\cdot\,|^2)$ for all $k$. The crucial result here is Proposition~\ref{prop:concavity-of-log-det}, which establishes a concavity property of a function related to Formula~\eqref{eq:J-on-Gaussian-input}.
First, let us put forward two useful facts and a lemma.
\begin{fact}[{see e.g.~\cite[Theorem 7.7.6]{horn_johnson_matrix_analysis}}]\label{fact:schur-complement}
Let $A$ be an $n \times n$ real symmetric matrix, $C$ be an $m \times m$ real symmetric matrix and $B$ be an $n \times m$ real matrix. Let $X = \left( \begin{array}{cc} A & B \\ B^\ast & C \end{array} \right)$. If $C > 0$ then
\begin{enumerate}
\item[(i)] $X \ge 0$ if and only if $A - BC^{-1}B^\ast \ge 0$;
\item[(ii)] $X > 0$ if and only if $A - BC^{-1}B^\ast > 0$.
\end{enumerate}
\end{fact}
Below $\textup{I}_n$ denotes the $n \times n$ identity matrix.
\begin{fact}[Woodbury formula]\label{fact:woodbury-formula}
For an $m \times n$ matrix $A$,
\[
A^\ast (\textup{I}_m + A A^\ast)^{-1} A = \textup{I}_n - (\textup{I}_n + A^\ast A)^{-1}.
\]
\end{fact}
\begin{proof}
Direct calculation.
\end{proof}
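For the reader's convenience, the direct calculation can be written out via the elementary push-through identity:

```latex
\[
A^\ast(\textup{I}_m + A A^\ast) = (\textup{I}_n + A^\ast A) A^\ast
\quad\Longrightarrow\quad
(\textup{I}_n + A^\ast A)^{-1} A^\ast = A^\ast (\textup{I}_m + A A^\ast)^{-1},
\]
hence
\[
A^\ast (\textup{I}_m + A A^\ast)^{-1} A
= (\textup{I}_n + A^\ast A)^{-1} A^\ast A
= (\textup{I}_n + A^\ast A)^{-1}\big((\textup{I}_n + A^\ast A) - \textup{I}_n\big)
= \textup{I}_n - (\textup{I}_n + A^\ast A)^{-1}.
\]
```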
\begin{lemma}\label{lem:equivalence-of-positive-definiteness}
Let $R$ be a $p \times n$ real matrix and $S$ be an $r \times n$ real matrix. Consider the $(p+r) \times (p+r)$ matrix
\[
M = \left( \begin{array}{cc}
-\textup{I}_p + R R^\ast & R S^\ast \\
S R^\ast & \textup{I}_r + S S^\ast
\end{array} \right).
\]
\begin{enumerate}
\item[(i)] $M \ge 0$ if and only if $R(\textup{I}_n + S^\ast S)^{-1} R^\ast \ge \textup{I}_p$.
\item[(ii)] If $n = p$ and $R$ is invertible then $M \ge 0$ if and only if $R^\ast R - S^\ast S \ge \textup{I}_n$.
\end{enumerate}
\end{lemma}
\begin{proof}
Applying Fact~\ref{fact:schur-complement}(i) we obtain that $M \ge 0$ is equivalent to
\[
-\textup{I}_p + R R^\ast - R S^\ast (\textup{I}_r + S S^\ast)^{-1} S R^\ast \ge 0.
\]
Using Fact~\ref{fact:woodbury-formula} for $A = S$ the above can be rephrased as
\[
-\textup{I}_p + R R^\ast - R \big(\textup{I}_n - (\textup{I}_n + S^\ast S)^{-1}\big) R^\ast \ge 0,
\]
which finishes the proof of (i).
If $n=p$ and $R$ is invertible, then $M \ge 0$ is also equivalent to
\[
(\textup{I}_n + S^\ast S)^{-1} \ge R^{-1} R^{-\ast} = (R^\ast R)^{-1}
\]
which in turn is equivalent to
\[
\textup{I}_n + S^\ast S \le R^\ast R.
\]
\end{proof}
In the context of Brascamp-Lieb inequalities the following easy consequence of the Cauchy-Binet formula is useful, see Proposition 6 of \cite{barthe-inventiones}:
if $d\le n$ and $U$ is an $n\times d$ matrix, then the map
$$x\in \mathbb R^n \mapsto \log \det\big(U^\ast \textup{diag}((e^{x_i})_{i \le n}) U\big)$$
is convex. The next property is a counterpart for inverse Brascamp-Lieb inequalities.
\begin{prop}\label{prop:concavity-of-log-det}
Let $m \ge n \ge 1$ and $U$ be an invertible $n \times n$ real matrix and $V$ be a real $(m-n) \times n$ matrix. Let
\[
\Omega = \big\{(x_1, \ldots, x_m) \in \mathbb{R}^m \colon U^\ast \textup{diag}((e^{x_i})_{i \le n}) U - V^\ast \textup{diag}((e^{x_j})_{j > n}) V > 0 \big\}.
\]
Then $\Omega$ is convex and the map $\phi \colon \Omega \to \mathbb{R}$,
\[
\phi(x_1, \ldots, x_m) = \log \det\big(U^\ast \textup{diag}((e^{x_i})_{i \le n}) U - V^\ast \textup{diag}((e^{x_j})_{j > n}) V\big)
\]
is concave.
\end{prop}
\begin{proof}
First we show that $\Omega$ is convex. Take any $x = (x_1, \ldots, x_m) \in \Omega$, $y = (y_1, \ldots, y_m) \in \Omega$ and $\lambda \in (0,1)$. Let $X = \textup{diag}((e^{x_k/2})_{1 \le k \le m})$, $Y = \textup{diag}((e^{y_k/2})_{1 \le k \le m})$ and let $X_+, X_-$ be the diagonal blocks of $X$ of size $n \times n$ and $(m-n) \times (m-n)$ respectively, and similarly $Y_+$, $Y_-$.
From $x \in \Omega$ it follows that $U^\ast X_+^2 U > V^\ast X_-^2 V$. By invertibility of $U$,
\begin{align*}
\textup{I}_n > X_+^{-1} U^{-\ast} V^\ast X_-^2 V U^{-1} X_+^{-1} = (X_- V U^{-1} X_+^{-1})^\ast (X_- V U^{-1} X_+^{-1}),
\end{align*}
which is equivalent to $\| X_- V U^{-1} X_+^{-1} \| < 1$ ($\|\cdot\|$ denotes the operator norm).
Similarly, $y \in \Omega$ implies $\| Y_- V U^{-1} Y_+^{-1} \| < 1$.
Put $A = Y_- V U^{-1} X_+^{-1}$ and $B = X_- Y_-^{-1}$ and $C = X_+ Y_+^{-1}$. Then we have
\begin{align*}
\|B A\| < 1, \qquad \|A C\| < 1.
\end{align*}
Now use~\cite[Corollary IX.5.3]{Bhatia-Matrix-analysis} which asserts that
\[
\|B^\lambda A C^{1-\lambda}\| \le \|B A\|^\lambda \|A C\|^{1-\lambda}
\]
to obtain
\begin{align*}
\textup{I}_n > (B^\lambda A C^{1-\lambda})^\ast (B^\lambda A C^{1-\lambda}) = Y_+^{\lambda-1} X_+^{-\lambda} U^{-\ast} V^\ast X_-^{2\lambda} Y_-^{2(1-\lambda)} V U^{-1} X_+^{-\lambda} Y_+^{\lambda-1},
\end{align*}
which is equivalent to
\begin{align}\label{ineq:to-be-in-omega}
U^\ast (X_+^2)^\lambda (Y_+^2)^{1-\lambda} U > V^\ast (X_-^2)^\lambda (Y_-^2)^{1-\lambda} V.
\end{align}
Since $(X^2)^\lambda (Y^2)^{1-\lambda}$ is the diagonal matrix with the entries $e^{\lambda x_k + (1-\lambda) y_k}$, \eqref{ineq:to-be-in-omega} ensures that $\lambda x + (1-\lambda) y \in \Omega$.
Next we establish concavity of $\phi$ on $\Omega$. For $x = (x_1, \ldots, x_m) \in \mathbb{R}^m$, set $x_+ = (x_1, \ldots, x_n) \in \mathbb{R}^n$ and $x_- = (x_{n+1}, \ldots, x_m) \in \mathbb{R}^{m-n}$. Let $A \colon \mathbb{R}^m \to \mathbb{R}^{n \times n}$ be defined as
\[
A(x) = U^\ast e^{\textup{diag}(x_+)} U - V^\ast e^{\textup{diag}(x_-)} V.
\]
Then $\phi(x) = \log \det A(x)$ for $x \in \Omega$.
Since $\phi$ is a smooth function, we can analyze the Hessian of $\phi$. To this end, we will use the following formulas:
\begin{align}
\label{eq:d-log-det}
\partial \log \det X &= \textup{tr}(X^{-1} \partial X), \quad \textup{for $X > 0,$} \\
\label{eq:d-x-inverse}
\partial X^{-1} &= -X^{-1} (\partial X) X^{-1}, \quad \textup{for $X > 0,$} \\
\label{eq:d-A-of-x}
\partial A(x) &= U^\ast e^{\textup{diag}(x_+)} \textup{diag}((\partial x)_+) U - V^\ast e^{\textup{diag}(x_-)} \textup{diag}((\partial x)_-) V.
\end{align}
Specialization of~\eqref{eq:d-A-of-x} to partial derivatives gives
\begin{align}
\label{eq:di-A-of-x}
\partial_i A(x) &= e^{x_i} U^\ast e_i e_i^\ast U
\quad \textup{for $i \le n,$} \\
\label{eq:dj-A-of-x}
\partial_j A(x) &= -e^{x_j} V^\ast f_{j-n} f_{j-n}^\ast V
\quad \textup{for $j > n,$}
\end{align}
where for $i\le n$, $e_i$ is a column matrix with $n$ rows, a coefficient $1$ in the $i$-th row and all other coefficients equal to 0. Similarly, for $\ell\le m-n$, $f_\ell$
is a column matrix with $m-n$ rows, with a 1 in its $\ell$-th row and zeroes elsewhere.
Fix $x \in \Omega$. Using~\eqref{eq:d-log-det} and~\eqref{eq:di-A-of-x}, for $i \le n$ we obtain
\[
\partial_i \phi(x) = \textup{tr} \big( A^{-1}(x) \partial_i A(x) \big) = e^{x_i} e_i^\ast U A^{-1}(x) U^\ast e_i.
\]
Similarly, for $j > n$,
\[
\partial_j \phi(x) = -e^{x_j} f_{j-n}^\ast V A^{-1}(x) V^\ast f_{j-n}.
\]
In order to calculate second order partial derivatives, we use~\eqref{eq:d-x-inverse} combined with~\eqref{eq:di-A-of-x} or~\eqref{eq:dj-A-of-x}. For $i_1, i_2 \le n$ and $i_1 \neq i_2$ we have
\[ \begin{split}
\partial^2_{i_1 i_2} \phi(x) &= -e^{x_{i_1}} e_{i_1}^\ast U A^{-1}(x) \partial_{i_2} A(x) A^{-1}(x) U^\ast e_{i_1} \\
&= -e^{x_{i_1} + x_{i_2}} e_{i_1}^\ast U A^{-1}(x) U^\ast e_{i_2} e_{i_2}^\ast U A^{-1}(x) U^\ast e_{i_1}.
\end{split} \]
Denoting $$R = e^{\textup{diag}(x_+)/2} U A^{-1/2}(x),$$ we can write the above mixed second-order partial derivative in a more compact way:
\[
\partial^2_{i_1 i_2} \phi(x) = -(R R^\ast)_{i_1 i_2} (R R^\ast)_{i_2 i_1} = -(R R^\ast)^2_{i_1 i_2},
\]
and for $i \le n$ we have
\[
\partial^2_{ii} \phi(x) = \partial_i \phi(x) - (R R^\ast)^2_{ii} = (R R^\ast)_{ii} - (R R^\ast)^2_{ii}.
\]
Combining the two above formulas we can write that for any $i_1, i_2 \le n$,
\[
\partial^2_{i_1 i_2} \phi(x) = (R R^\ast)_{i_1 i_2} (\textup{I}_n - R R^\ast)_{i_1 i_2}.
\]
If we denote $$S = e^{\textup{diag}(x_-)/2} V A^{-1/2}(x),$$ then by similar calculations we get that for $j_1, j_2 > n$,
\[
\partial^2_{j_1, j_2} \phi(x) = -(S S^\ast)_{j_1-n, j_2-n} (\textup{I}_{m-n} + S S^\ast)_{j_1-n, j_2-n}.
\]
Lastly, for $i \le n$ and $j > n$,
\[ \begin{split}
\partial^2_{ij} \phi(x) &= -e^{x_i} e_i^\ast U A^{-1}(x) \partial_j A(x) A^{-1}(x) U^\ast e_i \\
&= e^{x_i + x_j} e_i^\ast U A^{-1}(x) V^\ast f_{j-n} f_{j-n}^\ast V A^{-1}(x) U^\ast e_i \\
&= (R S^\ast)_{i, j-n} (S R^\ast)_{j-n, i} = (R S^\ast)_{i, j-n}^2 = (S R^\ast)^2_{j-n, i}.
\end{split} \]
As a result,
\[ \begin{split}
\Hess \phi(x) &= -\left( \begin{array}{cc}
(R R^\ast) \circ (-\textup{I}_n + R R^\ast) & -(R S^\ast) \circ (R S^\ast) \\
-(S R^\ast) \circ (S R^\ast) & (S S^\ast) \circ (\textup{I}_{m-n} + S S^\ast)
\end{array} \right) \\
&= -\underbrace{\left( \begin{array}{cc}
R R^\ast & -R S^\ast \\
-S R^\ast & S S^\ast
\end{array} \right)}_{M} \circ
\underbrace{\left( \begin{array}{cc}
-\textup{I}_n + R R^\ast & R S^\ast \\
S R^\ast & \textup{I}_{m-n} + S S^\ast
\end{array} \right)}_{N},
\end{split} \]
where $A \circ B$ denotes the Hadamard product (i.e. entry-wise product) of $A$ and $B$.
Note that $M$ is positive semi-definite. Indeed, $M = \left( \begin{array}{c} R \\ -S \end{array} \right) (R^\ast, -S^\ast) \ge 0.$ Now we argue that $N \ge 0$ as well. From the definitions of the matrices $A(x)$, $R$ and $S$ it follows immediately that
\[
R^\ast R - S^\ast S = \textup{I}_n.
\]
Since $U$ is invertible, so is $R$, and Lemma~\ref{lem:equivalence-of-positive-definiteness}(ii) implies $N \ge 0$.
Now it is enough to apply the Schur product theorem (see e.g.~\cite[Theorem 7.5.3]{horn_johnson_matrix_analysis}) which asserts that if $M$ and $N$ are positive semi-definite then $M \circ N$ is also positive semi-definite. Therefore $\Hess \phi(x)$ is negative semi-definite at any $x \in \Omega$ and hence $\phi$ is concave.
\end{proof}
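As a quick consistency check (not needed in the sequel), consider the extreme case $m=n$, in which the matrix $V$ is absent: then $\Omega = \mathbb{R}^n$ and
\[
\phi(x_1,\ldots,x_n) = \log \det\big(U^\ast \textup{diag}((e^{x_i})_{i \le n}) U\big) = 2\log|\det U| + \sum_{i=1}^n x_i,
\]
an affine function of $x$, so that both the convexity property of \cite{barthe-inventiones} (for $d=n$) and the concavity asserted in Proposition~\ref{prop:concavity-of-log-det} hold with equality.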
The next theorem uses the notation from Subsection~\ref{subsec:centered-gaussians}.
\begin{theorem}\label{thm:D-in-extremisable-case}
Assume the non-degeneracy condition~\eqref{eq:non-degeneracy2} holds. Let $(A_1, \ldots, A_m) \in \Lambda$. Put $A = Q + \sum_{k=1}^m c_k B_k^\ast A_k B_k>0$.
Then the supremum in~\eqref{def:BL-constant} is attained at $(A_1, \ldots, A_m)$, i.e.
\begin{align}\label{eq:gaussian-extremisable}
D = \frac{\det A}{\prod_{k=1}^m (\det A_k)^{c_k}}
\end{align}
if and only if
\begin{align}\label{eq:generalized-geometric-condition}
A_k^{-1} - B_k A^{-1} B_k^\ast = 0 \quad\text{for all $k=1,\ldots,m$ for which $c_k \neq 0.$}
\end{align}
In particular, if~\eqref{cond:geometric-condition-0} and~\eqref{cond:geometric-condition} hold then $D=1$.
\end{theorem}
\begin{remark}
If \eqref{eq:generalized-geometric-condition} holds, then $\tilde Q:=A^{-1/2}QA^{-1/2}$ and $\tilde{B}_k:=A_k^{1/2}B_kA^{-1/2}$ satisfy the generalized geometric conditions \eqref{cond:geometric-condition-0}
and \eqref{cond:geometric-condition}. This allows one to show that, up to linear isomorphisms (by $A^{1/2}$ on $H$ and $A_k^{1/2}$ on $H_k$),
the situations where $\inf_{\mathcal{CG}}J$ is achieved are equivalent to the geometric situations. This parallels exactly what happens for direct Brascamp-Lieb inequalities, see \cite{BCCT-structure}.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:D-in-extremisable-case}]
Consider the function $\Phi \colon \Lambda \to \mathbb{R}$,
\[
\Phi(M_1, \ldots, M_m) = \log \det \left(Q + \sum_{k=1}^m c_k B_k^\ast M_k B_k\right) - \sum_{k=1}^m c_k \log \det M_k.
\]
Note that $\Phi$ is smooth and $\sup_\Lambda \Phi = \log D$.
Fix any self-adjoint operators $X_k \colon H_k \to H_k$ (for $k=1,\ldots,m$). Using Formula~\eqref{eq:d-log-det} the directional derivative of $\Phi$ at $(A_1,\ldots,A_m)$ in the direction of $(X_1, \ldots, X_m)$ is
\begin{align*}
& \partial_{(X_1, \ldots, X_m)} \Phi(A_1, \ldots, A_m) \\
&=
\lim_{t \to 0} \frac{1}{t} \big(\Phi(A_1 + tX_1, \ldots, A_m + tX_m) - \Phi(A_1,\ldots,A_m)\big) \\
&= \textup{tr}\left(A^{-1} \Big(\sum_{k=1}^m c_k B_k^\ast X_k B_k\Big)\right) - \sum_{k=1}^m c_k \textup{tr}(A_k^{-1} X_k) \\
&= \sum_{k=1}^m c_k \textup{tr}(A^{-1} B_k^\ast X_k B_k) - \sum_{k=1}^m c_k \textup{tr}(A_k^{-1} X_k) =
\sum_{k=1}^m c_k \textup{tr}\big((B_k A^{-1} B_k^\ast - A_k^{-1}) X_k \big).
\end{align*}
The condition~\eqref{eq:gaussian-extremisable} implies that the derivative must be $0$. Using the fact that a self-adjoint operator $Y$ is zero if and only if $\textup{tr} (Y X) = 0$ for all self-adjoint operators $X$, \eqref{eq:generalized-geometric-condition} follows by considering $X_1, \ldots, X_{k-1}, X_{k+1}, \ldots, X_m$ being zero and $X_k$ being arbitrary for each $k=1,\ldots,m$ such that $c_k \neq 0$.
For the converse implication assume that~\eqref{eq:generalized-geometric-condition} holds. Then the above calculation shows that the derivative of $\Phi$ at $(A_1,\ldots,A_m)$ is zero. In order to conclude that $\Phi$ has a global maximum at this point, we prove below that $\Phi$ enjoys a concavity type property along well chosen curves.
Fix any self-adjoint operators $Y_k \colon H_k \to H_k$ (for $k=1,\ldots,m$) and for any real $t$ put
\begin{equation}\label{eq:exponential-parametrization}
\mathbf{A}(t) = \big(A_1^{1/2} \exp(t Y_1) A_1^{1/2}, \ldots, A_m^{1/2} \exp(t Y_m) A_m^{1/2}\big).
\end{equation}
For $t \in \mathbb{R}$ for which $\mathbf{A}(t) \in \Lambda$ consider the function
\[
\varphi(t) = \Phi\big(\mathbf{A}(t)\big).
\]
Since $\Lambda$ is an open set and the function $t \mapsto \mathbf{A}(t)$ is continuous, the domain of $\varphi$ is an open subset of $\mathbb{R}$. The domain contains $0$ and, as $\varphi$ is smooth, $\varphi'(0)$ vanishes.
For each $k = 1,\ldots,m$ take an orthogonal transformation $U_k \in O(H_k)$ such that $U_k^\ast Y_k U_k$, when identified with its matrix in the standard basis $(e_l^{H_k})_l$ in $H_k$, is a diagonal matrix. Denote the diagonal entries by $y_{k 1}, \ldots, y_{k n_k}$, where $n_k = \dim H_k$. Then
\begin{equation}\label{eq:rep-expY}
B_k^\ast A_k^{1/2} \exp(tY_k) A_k^{1/2} B_k = (U_k A_k^{1/2} B_k)^\ast \textup{diag}\big((e^{t y_{k l}})_{l \le n_k}\big) U_k A_k^{1/2} B_k.
\end{equation}
Thanks to the non-degeneracy condition~\eqref{eq:non-degeneracy2} we can use the decomposition of the Gaussian kernel $\exp(-\mathcal{Q})$ asserted by Lemma~\ref{lem:decompose-Q}. Besides the maps $B_0 \colon H \to H_0$ and $B_{m+1} \colon H \to H_{m+1}$, consider also $A_0 > 0$ on $H_0$ and $A_{m+1} > 0$ on $H_{m+1}$ such that
\[
Q = \sum_{k \in \{0, m+1\}} c_k B^\ast_k A_k B_k,
\]
with $c_0 = 1$ and $c_{m+1} = -1$. For the sake of consistency with~\eqref{eq:rep-expY}, for $k \in \{0, m+1\}$ put $Y_k = 0$ (a zero map on $H_k$), $U_k = \textup{Id}_{H_k}$ and $y_{kl} = 0$ for all $l \le n_k = \dim H_k$.
Let
\[ \begin{split}
U &= (\sqrt{c_i} U_i A_i^{1/2} B_i)_{0 \le i \le m^+} \colon H \to H_0 \times \cdots \times H_{m^+}, \\
V &= (\sqrt{-c_j} U_j A_j^{1/2} B_j)_{m^+ < j \le m+1} \colon H \to H_{m^+ +1} \times \cdots \times H_{m+1}.
\end{split} \]
Considering the diagonal matrix $D_+(t) = \textup{diag}\big((e^{t y_{i l}})_{0 \le i \le m^+, l \le n_i}\big)$ as an operator acting on $H_0 \times \cdots \times H_{m^+}$ and the diagonal matrix $D_-(t) = \textup{diag}\big((e^{t y_{j l}})_{m^+ < j \le m+1, l \le n_j}\big)$ as an operator acting on $H_{m^+ +1} \times \cdots \times H_{m+1}$ we can write
\begin{align*}
\varphi(t) = \log\det\left(U^\ast D_+(t) U - V^\ast D_-(t) V \right) - \sum_{k=1}^m c_k \log\det A_k - t \sum_{k=1}^m c_k \textup{tr} Y_k,
\end{align*}
where we used the formula $\log\det(\exp(Y)) = \textup{tr}\, Y$ for a self-adjoint $Y$.
It follows from Assertion~\eqref{eq:isomorphism-condition} that $U$ is a linear isomorphism. Therefore we can apply Proposition~\ref{prop:concavity-of-log-det}, which tells us that the domain of $\varphi$ must be an open interval and that $\varphi$ is concave. Since $\varphi'(0) = 0$, $\varphi$ attains its global maximum at $t=0$.
Since for any $(X_1, \ldots, X_m) \in \Lambda$ there exist $t$ and self-adjoint operators $Y_k$ on $H_k$ (for $k=1,\ldots,m$) such that $(X_1, \ldots, X_m)$ is of the form~\eqref{eq:exponential-parametrization} (e.g. take $t=1$ and $Y_k = \log(A_k^{-1/2} X_k A_k^{-1/2})$), we have actually shown that $(A_1,\ldots,A_m)$ is a global maximum of $\Phi$ and thus~\eqref{eq:gaussian-extremisable} holds.
\end{proof}
\subsection{Geometric version of Inverse Brascamp-Lieb inequalities}
\begin{theorem}\label{th:geometric-IBL}
For $k=1,\ldots,m$, let $c_k\in \mathbb R$ and let $B_k:H\to H_k$ be surjective linear maps such that
$B_kB_k^*=\mathrm{Id}_{H_k}$. Let $Q:H\to H$ be a symmetric operator.
Assume that
\[
Q+\sum_{k=1}^m c_k B_k^*B_k=\mathrm{Id}_H \quad \mathrm{and}\quad
\dim H\ge s^+(Q)+\sum_{k:\, c_k>0} \dim H_k.
\]
Then for all non-negative integrable functions $h_k:H_k \to [0,+\infty]$ with
$\int h_k >0$, we have
\[
\int_{H} \exp(-\pi \scalar{x, Qx}) \prod_{k=1}^m h_k^{c_k}(B_k x) \, dx \ge \prod_{k=1}^m \Big( \int_{H_k} h_k \Big)^{c_k}.
\]
There is equality when $h_k(y)=\exp(-\pi|y|^2)$ for all $k$ and all $y\in H_k$.
\end{theorem}
\begin{proof}
We may assume without loss of generality that $c_1,\ldots, c_{m^+}>0>c_{1+m^+},\ldots, c_m$. The above decomposition of the identity implies that
\[ Q+\sum_{i=1}^{m^+} c_i B_i^*B_i=\mathrm{Id}_H+\sum_{j>m^+} |c_j| B_j^*B_j>0.\]
Hence the restriction of $Q$ to $\ker B_+=\bigcap_{i=1}^{m^+} \ker B_i$ is positive definite. The non-degeneracy conditions \eqref{eq:non-degeneracy2} are verified and we may apply Theorem \ref{theo:gauss-mini} to conclude $\inf J=\inf_{\mathcal {CG}} J$. Then Theorem \ref{thm:D-in-extremisable-case} ensures that $\inf_{\mathcal {CG}} J=D^{-\frac12}=1$.
\end{proof}
\subsection{Relation with the results of Chen, Dafnis and Paouris}
The reverse Gaussian correlation inequality by Chen, Dafnis and Paouris, presented here in Theorem~\ref{theo:CDP}, turns out to be the geometric version of our main result. To see this, let us consider a slight reformulation of the second inequality from Theorem~\ref{theo:CDP}, which appears explicitly in~\cite{chen-dafnis-paouris}:
\begin{theorem}[{\cite[Theorem 3(ii)]{chen-dafnis-paouris}}]\label{thm:CDP-thm-3}
Let $\gamma_E$ stand for the standard Gaussian measure on a Euclidean space $E$.
Let $B_k \colon H \to H_k$ (for $k=1,\ldots,m$) be linear maps satisfying $B_kB_k^*=\mathrm{Id}_{H_k}$.
Denote
\[
B = (B_1, \ldots, B_m) \colon H \to H_1 \times \cdots \times H_m
\]
and let $C \colon H_1 \times \cdots \times H_m \to H_1 \times \cdots \times H_m$ be the block diagonal operator defined as
\[
C = \textup{diag}\big(c_1 \textup{Id}_{H_1}, \ldots, c_m \textup{Id}_{H_m}\big).
\]
If
\begin{equation}\label{cond:CDP-algebraic-condition}
B B^\ast \ge C^{-1}
\end{equation}
then for any non-negative functions $f_k \in L^1(H_k, \gamma_{H_k})$ ($k=1,\ldots,m$),
\[
\int_{H} \prod_{k=1}^m f_k^{c_k}(B_k x) \, d\gamma_H(x) \ge \prod_{k=1}^m \Big( \int_{H_k} f_k \, d\gamma_{H_k} \Big)^{c_k}.
\]
\end{theorem}
By setting $h_k(x) = f_k(\sqrt{2\pi} x) e^{-\pi |x|^2}$ we can rewrite the above inequality in terms of integrals with respect to the Lebesgue measure:
\[
\int_{H} \exp(-\pi \scalar{x, Qx}) \prod_{k=1}^m h_k^{c_k}(B_k x) \, dx \ge \prod_{k=1}^m \Big( \int_{H_k} h_k \Big)^{c_k},
\]
where $Q = \textup{Id}_H - \sum_{k=1}^m c_k B_k^\ast B_k$. Hence the geometric condition~\eqref{cond:geometric-condition} is obviously satisfied.
In order to deduce Theorem~\ref{thm:CDP-thm-3} from Theorem~\ref{th:geometric-IBL}, we need to establish the dimension condition
$\dim H\ge s^+(Q)+\sum_{k:\, c_k>0} \dim H_k$. This is what we do next.
Assume as usual that $c_1, \ldots, c_{m^+} > 0$ and $c_{m^+ +1}, \ldots, c_m < 0$. Recall that $B_+ = (B_1, \ldots, B_{m^+})$ and set $B_- = (B_{m^+ +1}, \ldots, B_m)$ and $H_+ = H_1 \times \cdots \times H_{m^+}$, $H_- = H_{m^+ +1} \times \cdots \times H_m$. The condition~\eqref{cond:CDP-algebraic-condition} is equivalent to
\[
|C|^{1/2} B B^\ast |C|^{1/2} \ge \left( \begin{array}{cc} \textup{Id}_{H_+} & \\ & -\textup{Id}_{H_-} \end{array} \right).
\]
Introducing $\tilde{B}_k = |c_k|^{1/2} B_k$ for $k=1,\ldots,m$ and defining $\tilde{B}_+$ and $\tilde{B}_-$ correspondingly, the above condition can be rewritten as
\begin{equation}\label{cond:CDP-algebraic-condition-2}
\left( \begin{array}{cc}
-\textup{Id}_{H_+} + \tilde{B}_+ \tilde{B}_+^\ast & \tilde{B}_+ \tilde{B}_-^\ast \\[1ex]
\tilde{B}_- \tilde{B}_+^\ast & \textup{Id}_{H_-} + \tilde{B}_- \tilde{B}_-^\ast
\end{array} \right) \ge 0.
\end{equation}
Since the upper-left block of the above matrix is positive semi-definite, we have $\tilde{B}_+ \tilde{B}_+^\ast \ge \textup{Id}_{H_+}$; in particular $\tilde{B}_+ \tilde{B}_+^\ast$ is positive definite, hence $\tilde{B}_+$ is surjective, and so is $B_+$.
Moreover, from Lemma~\ref{lem:equivalence-of-positive-definiteness}(i) we get that~\eqref{cond:CDP-algebraic-condition-2} is equivalent to
\[
\tilde{B}_+ \big(\textup{Id}_H + \tilde{B}_-^\ast \tilde{B}_-\big)^{-1} \tilde{B}_+^\ast \ge \textup{Id}_{H_+},
\]
which in view of the identity $Q = \textup{Id}_H - \tilde{B}_+^\ast \tilde{B}_+ + \tilde{B}_-^\ast \tilde{B}_-$ implies
\[
\tilde{B}_+^\ast \tilde{B}_+ \big(\tilde{B}_+^\ast \tilde{B}_+ + Q\big)^{-1} \tilde{B}_+^\ast \tilde{B}_+ \ge \tilde{B}_+^\ast \tilde{B}_+.
\]
Thanks to the lemma below we conclude that $s^+(Q) \le \dim \ker \tilde{B}_+$, which coincides with the dimension condition~\eqref{eq:surjectivity-condition} since $\dim \ker \tilde{B}_+ = \dim H - \dim H_+$ due to surjectivity of $\tilde{B}_+$.
\begin{lemma}
Let $A, B$ be real symmetric matrices of size $d$ such that $A \ge 0$ and $A + B > 0$. If $A (A+B)^{-1} A \ge A$ then $s^+(B) \le \dim \ker A$.
\end{lemma}
\begin{proof}
Observe that the statement of this lemma is invariant under congruence (i.e. under replacing $A$ with $C^\ast A C$ and $B$ with $C^\ast B C$).
Since $A+B>0$, there exists an invertible matrix $C$ such that $C^*(A+B)C$ and $C^*AC$ are both diagonal. By subtraction, $C^*BC$ is diagonal too.
Hence, we may assume without loss of generality that $A=\textup{diag}\big((a_i)_{i=1}^d\big)$ and $B=\textup{diag}\big((b_i)_{i=1}^d\big)$ with $a_i\ge 0$ and $a_i+b_i>0$ for all $i$. The hypothesis $A (A+B)^{-1} A \ge A$ reads as $a_i^2/(a_i+b_i)\ge a_i$,
which is equivalent to $0\ge a_ib_i$, for all $i$. Since $a_i\ge 0$, we may deduce that for all $i$
$$ b_i>0 \Longrightarrow a_i=0.$$
The matrices being diagonal, this implication means that $s^+(B)\le \dim \ker A$.
\end{proof}
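A minimal instance illustrating the lemma (with the bound attained): in dimension $d=2$, take $A = \textup{diag}(1,0)$ and $B = \textup{diag}(0,2)$. Then $A+B = \textup{diag}(1,2) > 0$,
\[
A(A+B)^{-1}A = \textup{diag}(1,0) \ge A,
\]
and indeed $s^+(B) = 1 = \dim \ker A$.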
Let us also comment on the Lebesgue version of the inverse Brascamp-Lieb inequalities presented in~\cite{chen-dafnis-paouris} as Theorem 2(ii). Applying suitable linear transformations in the Euclidean spaces $H$ and $H_1, \ldots, H_m$, one can formulate that result as follows:
\begin{theorem}[{\cite[Theorem 2(ii)]{chen-dafnis-paouris}}]
In the settings of Theorem~\ref{thm:CDP-thm-3}, if
\begin{equation}\label{cond:homogeneity}
\dim H = \sum_{k=1}^m c_k \dim H_k
\end{equation}
and $BB^*\ge C^{-1}$, then for any non-negative integrable functions $f_k \colon H_k \to [0,\infty)$,
\begin{equation}\label{eq:CDP-Lebesgue-BL}
\int_{H} \prod_{k=1}^m f_k^{c_k}(B_k x) \, dx \ge \prod_{k=1}^m \Big( \int_{H_k} f_k \Big)^{c_k}.
\end{equation}
\end{theorem}
Let us explain how to reprove this result from what we have already done, and settle a question on the existence of equality cases that
was left open in \cite{chen-dafnis-paouris}.
Recall that $BB^*\ge C^{-1}$ is equivalent to~\eqref{cond:CDP-algebraic-condition-2} and implies that $B_+$ (equivalently $\tilde{B}_+$) is surjective. There are two possible cases:
\begin{enumerate}
\item[Case 1:] $B_+$ (equivalently $\tilde{B}_+$) is injective. In this case we can apply Lemma~\ref{lem:equivalence-of-positive-definiteness}(ii) to get that \eqref{cond:CDP-algebraic-condition-2} is equivalent to
$\tilde{B}_+^\ast \tilde{B}_+ - \tilde{B}_-^\ast \tilde{B}_- \ge \textup{Id}_H$ or simply
\begin{equation}\label{cond:CDP-algebraic-condition-3}
\sum_{k=1}^m c_k B_k^\ast B_k \ge \textup{Id}_H.
\end{equation}
Using $B_kB_k^*=\mathrm{Id}_{H_k}$ and~\eqref{cond:homogeneity} we see that the maps on both sides of the above inequality have the same trace. Hence there must be equality in~\eqref{cond:CDP-algebraic-condition-3}, i.e. the geometric condition~\eqref{cond:geometric-condition} holds. In particular, for the functions $f_k(x) = \exp(-\pi |x|^2)$ we get equality in~\eqref{eq:CDP-Lebesgue-BL}. The decomposition of the identity also allows one to deduce \eqref{eq:CDP-Lebesgue-BL} from Theorem \ref{th:geometric-IBL} applied to the functions $f_k(\cdot)\exp(|\cdot|^2/2)$.
\item[Case 2:] $B_+$ has a non-trivial kernel. Since $B_+$ is surjective and $Q=0$, we are in the degenerate case 0.1 of the case analysis made in Subsection~\ref{subsection:case-analysis}, in which the left-hand side of~\eqref{eq:CDP-Lebesgue-BL} is always infinite and thus~\eqref{eq:CDP-Lebesgue-BL} does not admit extremizers.
\end{enumerate}
\section{Dual form of inverse Brascamp-Lieb inequalities}\label{sec:dual-inverse}
The transportation technique that we used in Section \ref{sec:proof-main} in order to prove inverse Brascamp-Lieb inequalities follows the one used by the first named author in \cite{barthe-inventiones}. In this reference, the method actually establishes two inequalities:
\begin{itemize}
\item The classical multilinear Brascamp-Lieb inequality, of the form
\[ \int_{ H} \prod_{i=0}^m f_i(B_ix)^{c_i} dx \le C_{BL} \prod_{i=0}^m \left( \int_{H_i} f_i \right) ^{c_i},\]
\item The ``dual'' Brascamp-Lieb inequality,
\[ \int_{ H}^* \sup_{\sum_i c_i B_i^*x_i=x}\prod_{i=0}^m f_i(x_i)^{c_i} dx \ge C_{DBL} \prod_{i=0}^m \left( \int_{H_i} f_i \right) ^{c_i}.\]
\end{itemize}
For both inequalities, the optimal constant is obtained by inspecting centered Gaussian functions. Also, the only relevant exponents are $c_i\in (0,1]$. Moreover, it is possible to introduce a kernel by fixing $c_0=1$ and $f_0$ to be a specific Gaussian function, and then to consider the best constant for arbitrary non-negative integrable functions $f_1,\ldots,f_m$.
Let us reproduce the proof of Theorem \ref{theo:gauss-mini} of the inverse Brascamp Lieb inequality
\[ \int_{ H} \prod_{i=0}^{m+1} f_i(B_ix)^{c_i} dx \ge C_{IBL} \prod_{i=0}^{m+1} \left( \int_{H_i} f_i \right) ^{c_i},\]
but choosing the functions $g_1,\ldots, g_m$ to be arbitrary (we do not repeat here the regularity and support assumptions, which can be achieved by approximation; recall that $c_0=1=-c_{m+1}$ and that $f_0, g_0, f_{m+1},g_{m+1}$ are specific Gaussian functions, which model Gaussian kernels).
With our notation,
\[J(f_1,\ldots,f_m)=\frac{\int_H e^{-\pi \langle Q_+B_0x,B_0x\rangle+\pi \langle Q_-B_{m+1}x,B_{m+1}x\rangle} \prod_{k=1}^m f_k(B_kx)^{c_k}dx}{ \prod_{k=1}^m \left( \int_{H_k} f_k\right)^{c_k}}.\]
In view of \eqref{eq:dual-inverse-proof} we also set (the $*$ indicating an inner integral):
\[K(g_1,\ldots,g_m)=\frac{ \int_{*,H} \inf_{\sum c_k B_k^*y_k=y} e^{-\pi \langle Q_+^{-1}y_0,y_0\rangle+\pi \langle Q_-^{-1}y_{m+1},y_{m+1}\rangle} \prod_{k=1}^m g_k(y_k)^{c_k}dy}{ \prod_{k=1}^m \left( \int_{H_k} g_k\right)^{c_k}}.\]
The above transportation argument, up to \eqref{eq:dual-inverse-proof}
yields $ J(f_1,\ldots,f_m)\ge D^{-1} K(g_1,\ldots,g_m)$ for all functions, hence
$\inf J\ge D^{-1} \sup K$.
However, we have seen in \eqref{def:BL-constant} that $\inf_{\mathcal{CG}}J=D^{-\frac12}$. The conclusion of the argument after \eqref{eq:dual-inverse-proof} can be rephrased as $\sup_{\mathcal{CG}}K=D^{\frac12}$. Therefore
\[ \sqrt D = D \inf_{\mathcal{CG}}J\ge D \inf J \ge \sup K\ge \sup_{\mathcal{CG}}K=\sqrt D.\]
In particular $\sup K =\sup_{\mathcal{CG}}K$. This means that under the non-degeneracy hypothesis of Theorem \ref{theo:gauss-mini}, the best constant
in the following inequality (which can be called dual inverse Brascamp-Lieb)
is obtained by inspecting centered Gaussian functions only: for all $g_1,\ldots,g_m$,
\[ \int_{*,H} \inf_{\sum c_k B_k^*y_k=y} \prod_{k=0}^{m+1} g_k(y_k)^{c_k}dy\le C_{DIBL} \prod_{k=1}^m \left( \int_{H_k} g_k\right)^{c_k}.\]
Let us state the simplest examples of these four inequalities: no kernel, two functions, all maps being the identity. For all $f,g:\mathbb R^n\to \mathbb R^+$ with $\int f\in (0,+\infty)$:
If $\lambda\in (0,1)$,
\[ \int f(x)^\lambda g(x)^{1-\lambda}dx \le \left(\int f\right)^\lambda \left(\int g\right)^{1-\lambda}\le \int^* \sup_{\lambda a+(1-\lambda) b=x} f(a)^\lambda g(b)^{1-\lambda}\;dx \]
If $\lambda\in \mathbb R\setminus [0,1]$,
\[\int f(x)^\lambda g(x)^{1-\lambda}dx \ge \left(\int f\right)^\lambda \left(\int g\right)^{1-\lambda}\ge \int_* \inf_{\lambda a+(1-\lambda) b=x} f(a)^\lambda g(b)^{1-\lambda}\; dx
\]
The reader has recognized the inequalities of H\"older, Pr\'ekopa-Leindler and the inverse H\"older inequality. The fourth inequality seems novel. In this very simple situation, the inequalities for $\lambda \in \mathbb R\setminus [0,1]$ can be deduced from the ones for $\lambda\in (0,1)$ by rearranging the terms.
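To illustrate this rearrangement in the simplest case (assuming $g>0$), the inverse H\"older inequality with $\lambda=2$ reads $\int f^2 g^{-1} \ge \big(\int f\big)^2 \big(\int g\big)^{-1}$, and it follows from the Cauchy-Schwarz inequality after rewriting $f = (f^2 g^{-1})^{1/2} g^{1/2}$:
\[
\int f = \int (f^2 g^{-1})^{1/2}\, g^{1/2} \le \Big(\int f^2 g^{-1}\Big)^{1/2} \Big(\int g\Big)^{1/2},
\]
and it remains to square and divide by $\int g$.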
\section{Interpolation}\label{sec:interpolation}
We have proved that the best constant in inverse Brascamp-Lieb inequalities can be computed using centered Gaussian functions, apart from some degenerate situations. In the rest of the paper, we address the question of positivity
of this optimal constant. More precisely, given a quadratic form $\mathcal Q$ and the geometric data $B=(B_k)_{k=1}^m$, our aim is to characterize the exponents $c=(c_k)_{k=1}^m$ for which a non-trivial inverse Brascamp-Lieb inequality holds, meaning $\inf J_{\mathcal Q,B,c}>0$ where
$$
J_{\mathcal Q,B,c}(f_1, \ldots, f_m) = \frac{\int_H e^{-\mathcal Q (x)} \prod_{k=1}^m f_k^{c_k}(B_k x) \, dx}{\prod_{k=1}^m \left(\int_{H_k} f_k \right)^{c_k}}.
$$
The analogous question for direct Brascamp-Lieb inequalities was solved in full generality by Bennett, Carbery, Christ and Tao \cite{BCCT-structure,BCCT-finiteness}. They gave a description of the set $\mathcal F$ of exponents $c$ for which $\sup J_{\mathcal Q,B,c}<+\infty$. It turns out that this set $\mathcal F$ is convex, which is a simple instance of interpolation of Lebesgue spaces. Actually, this may be proved by a mere application of H\"older's inequality (with exponents $1/t$ and $1/(1-t)$): if $t\in [0,1]$,
\begin{align*}
\int e^{-\mathcal Q } \prod_{k=1}^m f_k^{tc_k+(1-t)d_k}\circ B_k
& \le \left(\int e^{-\mathcal Q} \prod_{k=1}^m f_k^{c_k}\circ B_k \right)^t \left(\int e^{-\mathcal Q } \prod_{k=1}^m f_k^{d_k}\circ B_k \right)^{1-t} \\
&\le (\sup J_{\mathcal Q,B,c})^t (\sup J_{\mathcal Q,B,d})^{1-t} \prod_{k=1}^m \left(\int_{H_k} f_k \right)^{tc_k+(1-t)d_k}.
\end{align*}
In the setting of inverse inequalities, we did not find a simple interpolation argument as above. Nevertheless
the convexity of the set of non-trivial exponents is still valid, provided one prescribes their signs.
\begin{prop}\label{prop:interpolation}
Let $0\le m^+\le m$, let $B_k \colon H\to H_k$, $1\le k\le m$, be surjective linear maps, and let $\mathcal Q \colon H\to \mathbb{R}$ be a quadratic form.
Assume that $\mathcal Q$ is positive definite on $\ker B_+$ and
$$ \dim H\ge s^+(\mathcal Q)+\sum_{i=1}^{m^+} \dim H_i.$$
Let $c,d\in (0,+\infty)^{m^+}\times (-\infty,0]^{m-m^+}$ satisfy $\inf J_{\mathcal Q,B,c}>0$ and $\inf J_{\mathcal Q,B,d}>0$.
Then for any $t\in [0,1]$,
$$\inf J_{\mathcal Q,B,tc+(1-t) d}>0.$$
\end{prop}
\begin{proof}
We use Theorem \ref{theo:gauss-mini} (the infimum of $J$ can be computed on centered Gaussians) and the
explicit calculations on centered Gaussian functions of Subsection \ref{subsec:centered-gaussians}. Let $Q$
be a self-adjoint linear map such that for all $x\in H$, $\mathcal Q(x)=\pi \langle x,Qx \rangle$.
Let
$$\mathcal P=\mathcal P_{\mathcal Q,B,m^+}=\big\{ x\in (0,+\infty)^{m^+}\times (-\infty,0]^{m-m^+}; \inf J_{\mathcal Q,B,x}>0\big\}$$
denote the set of exponents with prescribed signs, for which a non-trivial inequality holds.
Given $c\in (0,+\infty)^{m^+}\times (-\infty,0]^{m-m^+}$,
Equation \eqref{def:BL-constant} ensures that $c\in \mathcal P$
if and only if
$$ \sup_{A_k>0} \frac{\det \left( \Big(Q+\sum_k c_k B_k^* A_k B_k \Big)_+ \right)}{\prod_k (\det A_k)^{c_k}} <+\infty,$$
where the supremum is over $m$-tuples of positive definite self-adjoint operators $A_k$ on $H_k$, and for a self-adjoint operator $A$ we denote
\begin{equation}\label{eq:def-A-plus}
(A)_+ = \begin{cases}
A & \textup{if $A$ is positive semi-definite,} \\
0 & \textup{otherwise.}
\end{cases}
\end{equation}
When $c_k\neq 0$ (which is true at least for $k\le m^+$), we make a change of variables $M_k=|c_k| A_k$ (which is still positive definite). Hence $c\in \mathcal P$ is equivalent to
\begin{equation}\label{eq:finite-sup1}
\sup_{M_k>0} \frac{\det \left( \Big(Q+\sum_{i\le m^+} B_i^* M_iB_i-\sum_{j>m^+; \, c_j\neq 0} B_j^* M_j B_j \Big)_+ \right)}{\prod_k (\det M_k)^{c_k}}<+\infty.
\end{equation}
We claim that the latter is equivalent to
\begin{equation}\label{eq:finite-sup2}
\sup_{M_k>0} \frac{\det \left( \Big(Q+\sum_{i\le m^+} B_i^* M_iB_i-\sum_{j>m^+} B_j^* M_j B_j \Big)_+ \right)}{\prod_k (\det M_k)^{c_k}}<+\infty.
\end{equation}
The fact that \eqref{eq:finite-sup1} implies \eqref{eq:finite-sup2} is easy: $A\mapsto (A)_+$
is a non-decreasing map on self-adjoint operators and the operator in the determinant of \eqref{eq:finite-sup2} differs from the one of \eqref{eq:finite-sup1} by additional
negative definite terms.
To show that \eqref{eq:finite-sup2} implies \eqref{eq:finite-sup1}, it is sufficient to let $M_j>0$ tend to $0$ for all indices $j>m^+$ such that $c_j=0$ (hence $(\det M_j)^{c_j}=1$).
This requires continuity properties of the numerator. Observe that $A\mapsto (A)_+$ is not
continuous (if $A$ is positive semi-definite but not positive definite, then $(A)_+=A$,
but for $\varepsilon>0$, $A - \varepsilon \textup{Id}$ is not positive semi-definite, so that $\lim_{\varepsilon \to 0^+} (A - \varepsilon \textup{Id})_+=0$).
Fortunately, we may conclude by using the continuity of $A\mapsto \det(A_+)$ (which is
easy to verify: let $(A_n)$ be self-adjoint operators tending to $A$. If $A$ is positive definite, then so is $A_n$ for $n$ large enough, and continuity of the determinant allows to conclude.
If there is $v\neq 0$ with $\langle Av,v\rangle <0$ then this is eventually true for $A_n$
and $(A)_+=(A_n)_+=0$. If $A$ is semi-definite positive but not definite positive, then
$\det A=0$. If $A_n$ is not positive
definite then $\det(A_n)_+ = 0$, while if $A_n$ is positive definite then $\det (A_n)_+=
\det A_n$ tends to $\det A=0$ when $n$ increases).
\medskip
We are ready to show the convexity of $\mathcal P$. Let $c,d\in \mathcal P$. Using that \eqref{eq:finite-sup2} characterizes membership in $\mathcal P$, we know that there exist $K_c$ and $K_d$ in $\mathbb{R}^+$ such that
for all $M_k>0$,
$$\det \left( \Big(Q + \sum_{i\le m^+} B_i^*M_iB_i -\sum_{j>m^+} B_j^*M_jB_j \Big)_+ \right)$$
is upper bounded by $K_c \prod_k (\det M_k)^{c_k}$ and by $K_d \prod_k (\det M_k)^{d_k}$.
Therefore, for any $t\in[0,1]$,
$$
\det \left( \Big(Q + \sum_{i\le m^+} B_i^*M_iB_i -\sum_{j>m^+} B_j^*M_jB_j \Big)_+ \right)
\le K_c^{t}K_d^{1-t} \prod_k (\det M_k)^{tc_k+(1-t)d_k}.$$
The above arguments show that this implies that $tc+(1-t)d\in \mathcal P$. Hence $\mathcal P$ is convex.
\end{proof}
\section{Positivity in the rank one case}
\label{sec:finiteness-1}
We study the positivity of the optimal constant in inverse Brascamp-Lieb inequalities, when for all $k=1,\ldots,m$, $\dim H_k=1$, in terms of the coefficients $(c_k)_{k=1}^m$. In this case a very complete solution can be given, based on a rather straightforward argument.
For concreteness, we may identify each $H_k$ with $\mathbb{R}$,
and we may find non-zero vectors $u_k\in H$ such that $B_k x=\langle x, u_k\rangle$ for all $x\in H$.
As before, we consider
exponents with prescribed signs: given $m^+\le m$ the first $m^+$ coefficients are positive, while the other ones are non-positive.
In addition, we work under the hypotheses of Theorem \ref{theo:gauss-mini}.
For two integers $k, l$ let $[\![k,l]\!] = \{ k, k+1, \ldots, l\}$ and $]\!]k,l]\!] = \{ k+1, k+2, \ldots, l\}$.
\subsection{No kernel}
We start with the case when $\mathcal Q=0$. In the present setting, the hypotheses of Theorem \ref{theo:gauss-mini} simply mean that the map $B_+$ is one-to-one, that is, $(u_1,\ldots, u_{m^+})$ is a basis of $H$ (which can be identified with $\mathbb{R}^{m^+}$).
For every family $(f_k)_{k=1}^m$ of non-negative integrable functions on $\mathbb{R}$ with
positive integrals, the functional of interest is
$$J_{u,c}(f_1,\ldots,f_m)= \frac{\int_{H} \prod_{k=1}^m f_k\big(\langle x, u_k\rangle\big)^{c_k} dx}{ \prod_{k=1}^m \left( \int_{\mathbb R} f_k\right)^{c_k}} \cdot$$
\begin{theorem}\label{theo:positivity-rank1}
Let $m\ge m^+\ge 1$ and $u_1,\ldots, u_m$ be non-zero vectors in $H$. Assume also that $(u_1,\ldots, u_{m^+})$ is
a basis of $H$. For $i\le m^+<j$, we write $i\sim j$ if $u_j$ has a non-zero $i$-th coordinate in this
basis.
Consider the positivity domain
$$\mathcal P_{m^+}((u_k)_{k=1}^m)=\big\{c\in (0,+\infty)^{m^+}\times (-\infty,0]^{m-m^+}; \; \inf J_{u,c}>0\big\}.$$
For any set $S \subseteq [\![1,m]\!]$, denote by $\mathbf{1}_S$ the vector in $\{0,1\}^m$ with $i$-th coordinate equal to 1 if and only if $i\in S$.
Then
\begin{eqnarray*}
\mathcal P_{m^+}((u_k)_{k=1}^m)&=& \mathbf{1}_{[\![1,m^+]\!]}+\mathrm{Pos}\Big(\big\{\mathbf{1}_{\{i\}}-\mathbf{1}_{\{j\}};\; i\sim j \big\}\Big)\\
&=& \Big\{c\in [1,+\infty)^{m^+}\times (-\infty,0]^{m-m^+};\quad \sum_k c_k=m^+ \; \mathrm{and} \\
&& \quad \mathrm{ \, for\, all} \;
S\subset [\![1,m^+]\!], \quad \sum_{i\in S} (c_i-1) \le
\sum_{j;\; S\sim j} |c_j|\Big\}\\
&=& \Big\{c\in [1,+\infty)^{m^+}\times (-\infty,0]^{m-m^+}; \quad \sum_k c_k=m^+ \; \mathrm{and} \\
&& \quad \mathrm{ \, for\, all} \;
T\subset ]\!]m^+,m]\!], \quad \sum_{j\in T} |c_j| \le
\sum_{i;\; i\sim T} (c_i-1)\Big\},
\end{eqnarray*}
where $S\sim j$ means that there exists $i\in S$ with $i\sim j$, and $i \sim T$ means that there exists $j \in T$ with $i \sim j$, and $\mathrm{Pos}(A)$ is the positive hull of $A$.
\end{theorem}
The above description of $\mathcal{P}_{m^+}((u_k))$ as a positive hull can be phrased in terms of mass transportation. The following interpretation will be justified and applied in the course of the proof: consider a bipartite graph $G$ on the sets $I=[\![1, m^+]\!]$
and $J=]\!]m^+,m]\!]$, with an edge between $i\in I$ and $j\in J$ if
$i\sim j$ (i.e. $u_j$ has a non-zero coordinate on $u_i$, when decomposed
in the basis $(u_1,\ldots, u_{m^+})$). Then $c\in \mathcal{P}_{m^+}((u_k))$ if and only if
one can transport the measure $\sum_{i=1}^{m^+} (c_i-1)\delta_i$ onto
$\sum_{j=1+m^+}^m |c_j| \delta_j$ by moving the mass along the graph $G$.
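To make the transport description concrete, here is a small worked example on a hypothetical instance (this toy configuration is ours, not taken from the text):

```latex
% Toy instance: H = R^2, m^+ = 2, m = 3,
% u_1 = e_1, u_2 = e_2, u_3 = e_1 + e_2, so that 1 ~ 3 and 2 ~ 3.
% The theorem imposes c_1, c_2 >= 1, c_3 <= 0, homogeneity
% c_1 + c_2 + c_3 = 2, and the transport condition for T = {3} reads
% |c_3| <= (c_1 - 1) + (c_2 - 1), which under homogeneity is an equality.
% The conditions for S = {1} and S = {2} then hold automatically, so
\[
\mathcal P_2(u_1,u_2,u_3)
= \big\{ (c_1,\, c_2,\, 2-c_1-c_2) ;\; c_1\ge 1,\ c_2\ge 1 \big\}.
\]
% For instance c = (2, 3/2, -3/2) lies in the domain: one transports
% mass c_1 - 1 = 1 from vertex 1 and mass c_2 - 1 = 1/2 from vertex 2
% onto vertex 3 (gamma_{1,3} = 1, gamma_{2,3} = 1/2).
```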
\begin{proof}
For brevity, we write $\mathcal P$ for the positivity domain $\mathcal{P}_{m^+}((u_k)_{k=1}^m)$.
First part: Let us start with $c\in \mathcal P$ and draw consequences of this fact.
Since $\inf J_{u,c}>0$, in particular the infimum over centered Gaussian functions
is positive. Using the calculations of the previous section, this can be stated as follows:
there exists $D<+\infty$ such that for all $\lambda_1,\ldots,\lambda_m>0$, it holds
\begin{equation}\label{eq:ineg-gauss-rank1}
D \prod_k \lambda_k^{c_k} \ge \det \left( \Big(\sum_k c_k \lambda_k \, u_k \otimes u_k \Big)_+ \right),
\end{equation}
where $(\cdot)_+$ is defined as in~\eqref{eq:def-A-plus}. Recall that $(u\otimes u)(x)=\langle x,u\rangle u$.
Our goal is to extract information on $c$ from \eqref{eq:ineg-gauss-rank1}. Since
the inequality is trivial when the operator inside the determinant is not positive definite, we first
look for values $(\lambda_k)$ for which the inequality has a non-trivial content.
It is convenient to work with $(\lambda_k)$ satisfying the following inequality
\begin{equation}\label{eq:strong-pos-rank1}
\sum_{i\le m^+} c_i \lambda_i \, u_i \otimes u_i \ge 2 \sum_{j>m^+} |c_j| \lambda_j \, u_j \otimes u_j.
\end{equation}
Indeed, when the above is satisfied and since $A\ge 2B$ implies $A-B\ge A/2$, it follows that
$$\sum_k c_k \lambda_k \, u_k \otimes u_k \ge \frac12 \sum_{i\le m^+} c_i \lambda_i u_i\otimes u_i.$$
The map on the right-hand side is positive definite, therefore we may deduce from \eqref{eq:ineg-gauss-rank1} that
\begin{equation}\label{eq:ineg-gauss2-rank1}
2^{m^+} D \prod_k \lambda_k^{c_k} \ge \det \Big(\sum_{i\le m^+} c_i \lambda_i \, u_i \otimes u_i \Big) = \det(u_1,\ldots,u_{m^+})^2 \prod_{i\le m^+} c_i \lambda_i.
\end{equation}
Keeping in mind that \eqref{eq:strong-pos-rank1}$\Longrightarrow\eqref{eq:ineg-gauss2-rank1}$, let us provide numbers $(\lambda_k)$ for which \eqref{eq:strong-pos-rank1} is verified. For $j>m^+$, we denote by $\alpha_i(j)$ the $i$-th coordinate of $u_j$ in the basis
$(u_1,\ldots, u_{m^+})$. Observe that by definition $i\sim j$ means $\alpha_i(j)\neq 0$. Hence for all $j>m^+$,
$$ u_j=\sum_{i;\; i\sim j} \alpha_i(j)u_i.$$
For any vector $v\in H$, by Cauchy-Schwarz,
$$ \langle v,u_j\rangle^2 \le \left(\sum_{i;\; i\sim j} \alpha_i(j)^2 \right) \left(\sum_{i;\; i\sim j} \langle v,u_i\rangle^2\right).$$
Set $K:=\max_j\sum_{i;\; i\sim j} \alpha_i(j)^2 $.
We have proved that for $j>m^+$,
$$u_j\otimes u_j \le K \sum_{i;\; i\sim j} u_i\otimes u_i.$$
Summing over $j>m^+$ and interchanging the summations over $j>m^+$ and $i\le m^+$ yield
\begin{eqnarray}\label{eq:ineq-tensor-rank1}
2\sum_{j>m^+} |c_j| \lambda_j u_j\otimes u_j &\le & 2K \sum_{i\le m^+} \left( \sum_{j; \; i\sim j} |c_j|\lambda_j \right) u_i\otimes u_i.
\end{eqnarray}
Let $q\in \mathbb{R}^+$ and let $a\in \mathbb{R}^m$ satisfy
\begin{equation}\label{eq:condition-proof-rank1}
i\sim j \Longrightarrow a_i\ge a_j\, .
\end{equation}
Set $K':= 2K \max_i (\sum_{j; \; i\sim j} |c_j|)/c_i$, and
$$ \lambda_j:= e^{qa_j} \quad\mathrm{for}\quad j>m^+ \qquad\mathrm{and}\qquad
\lambda_i:= K' e^{qa_i} \quad\mathrm{for}\quad i\le m^+.$$
Then Inequality \eqref{eq:ineq-tensor-rank1} readily implies
that for this choice of $\lambda$, \eqref{eq:strong-pos-rank1} is verified. As we have already seen,
this implies that \eqref{eq:ineg-gauss2-rank1} applies and gives
$$ 2^{m^+} D \Big(\prod_{i\le m^+} (K')^{c_i}\Big) e^{q \sum_k a_kc_k} \ge \det(u_1,\ldots,u_{m^+})^2 \Big(\prod_{i\le m^+} (c_iK')\Big) e^{q\sum_{i\le m^+}a_i}.$$
For $q$ tending to $+\infty$ the last inequality implies that
$
\sum_k a_kc_k \ge \sum_{i\le m^+}a_i.
$
Recall that this was proved assuming \eqref{eq:condition-proof-rank1} (and that $c$ is in the positivity domain of the functional $J$). Summarizing, for $c$ in the positivity domain, and all
$a\in \mathbb{R}^m$,
\begin{equation}\label{eq:forms-rank1}
\left(i\sim j \Longrightarrow a_i\ge a_j \right) \Longrightarrow \sum_{i\le m^+} (c_i-1)a_i\ge \sum_{j>m^+} |c_j|a_j.
\end{equation}
Let us draw some consequences of this property of coefficients $c$ in the positivity domain,
by making appropriate choices for $a$:
\begin{itemize}
\item Choosing vectors $a$ with all equal coordinates gives $\sum_k c_k=m^+$, known as the homogeneity condition.
\item For any $i\in [\![1,m^+]\!]$, we may choose $a=\mathbf{1}_{\{i\}}$ and get $c_i\ge 1$.
\item For $T\subset ]\!]m^+,m]\!]$, we may define a vector $a$ as follows:
$a_j=1$ for $j\in T$, $a_j=0$ for $j \in ]\!]m^+,m]\!]\setminus T $,
and for $i\in [\![1,m^+]\!]$, set $a_i=1$ if $i\sim T$ and $a_i=0$ otherwise. It is plain
that $i\sim j \Longrightarrow a_i\ge a_j$, so this vector is admissible and we can deduce
that
$ \sum_{j\in T} |c_j| \le
\sum_{i;\; i\sim T} (c_i-1)$.
\item In a symmetric way, for $S\subset [\![1,m^+]\!]$, we may define an admissible
vector $a$ as follows: for $i\le m^+$, $a_i=-1$ if $i\in S$ and $a_i=0$ otherwise;
for $j>m^+$, $a_j=-1$ if $S\sim j$ and $a_j=0$ otherwise.
Plugging this vector in \eqref{eq:forms-rank1} yields
$\sum_{i\in S} (c_i-1) \le
\sum_{j;\; S\sim j} |c_j|$.
\end{itemize}
Next we give a dual interpretation of \eqref{eq:forms-rank1} (where the right-hand side
inequality is taken in the form $\sum_k c_k a_k \ge \sum_{i\le m^+} a_i$): for every
vector $a\in \mathbb{R}^m$, if for all $i\le m^+<j$ such that $i\sim j$, it holds $\langle a, \mathbf{1}_{\{i\}}-
\mathbf{1}_{\{j\}}\rangle \ge 0$, then $\langle a, c-\mathbf{1}_{ [\![1,m^+]\!]}\rangle\ge 0$. Equivalently,
no linear hyperplane can separate the vector $c-\mathbf{1}_{ [\![1,m^+]\!]} $ from the family
of vectors $ (\mathbf{1}_{\{i\}}- \mathbf{1}_{\{j\}})_{i\sim j}$. By the Hahn-Banach theorem, this implies
that $c-\mathbf{1}_{ [\![1,m^+]\!]}$ belongs to the convex cone generated by the vectors $ (\mathbf{1}_{\{i\}}- \mathbf{1}_{\{j\}})_{i\sim j}$. This concludes the first part of the proof.
\bigskip
Second part: let us show that $\mathbf{1}_{ [\![1,m^+]\!]}+\mathrm{Pos}( (\mathbf{1}_{\{i\}}- \mathbf{1}_{\{j\}})_{i\sim j})$ is included in the positivity domain $\mathcal{P}$ of the functional $J$. Since Proposition~\ref{prop:interpolation} ensures that $\mathcal{P}$ is convex, it is enough to show that for all $i\le m^+<j$ such that $i\sim j$, it contains the half-line $\mathbf{1}_{ [\![1,m^+]\!]}+\mathbb{R}^+ (\mathbf{1}_{\{i\}}- \mathbf{1}_{\{j\}})$.
Let us start by observing that $\mathbf{1}_{[\![1,m^+]\!]}\in \mathcal P$. Indeed, for all measurable
$f_k:\mathbb{R}\to \mathbb{R}^+$, by the change of variables $y_i:=\langle x,u_i\rangle$, $i\in [\![1,m^+]\!]$
and Fubini's theorem
$$\int_{H} \prod_{i=1}^{m^+} f_i(\langle x,u_i\rangle) \,dx =
|\det((u_i)_{i=1}^{m^+}) |^{-1} \int_{\mathbb{R}^{m^+}} \prod_{i=1}^{m^+} f_i(y_i) \,dy
= |\det((u_i)_{i=1}^{m^+}) |^{-1} \prod_{i=1}^{m^+} \int_{\mathbb{R}} f_i. $$
Another basic ingredient, which is actually the simplest instance of the reverse inequalities
we are investigating, is the reverse H\"older inequality: for $\varepsilon\ge 0$ and $f,g$ non-negative measurable functions on $\mathbb{R}$,
$$\int f^{1+\varepsilon} g^{-\varepsilon} \ge \left( \int f\right)^{1+\varepsilon}\left( \int g \right)^{-\varepsilon}.$$
We are ready to show
that for $i_0\sim j$, $\mathbf{1}_{ [\![1,m^+]\!]}+\mathbb{R}^+ (\mathbf{1}_{\{i_0\}}- \mathbf{1}_{\{j\}})\subset \mathcal P$.
Let $\varepsilon\ge 0$. Then, using that $u_j=\sum_i \alpha_i(j) u_i$ with $\alpha_{i_0}(j)\neq 0$, and changing variables by $y_i:=\langle x,u_i\rangle$, $i\in [\![1,m^+]\!]$ as above, we obtain
\begin{eqnarray*}
&& \int_{H} \Big(\prod_{i\in [\![1,m^+]\!]\setminus \{i_0\} } f_i(\langle x,u_i\rangle) \Big) f_{i_0}(\langle x,u_{i_0}\rangle)^{1+\varepsilon} f_j(\langle x,u_j\rangle)^{-\varepsilon} dx \\
&=& \int_{H} \Big(\prod_{i\in [\![1,m^+]\!]\setminus \{i_0\} } f_i(\langle x,u_i\rangle) \Big) f_{i_0}(\langle x,u_{i_0}\rangle)^{1+\varepsilon} f_j\big(\sum_{i\le m^+} \alpha_i(j)\langle x,u_i\rangle\big)^{-\varepsilon} dx \\
&=& |\det((u_i)_{i=1}^{m^+}) |^{-1} \int_{\mathbb{R}^{m^+}} \Big(\prod_{i\in [\![1,m^+]\!]\setminus \{i_0\} } f_i(y_i) \Big) f_{i_0}(y_{i_0})^{1+\varepsilon} f_j\big(\sum_{i\le m^+} \alpha_i(j)y_i\big)^{-\varepsilon} dy.
\end{eqnarray*}
Applying the reverse H\"older inequality in the variable $y_{i_0}$ and using $\alpha_{i_0}(j)\neq 0$, we deduce that for any $(y_i)_{i\in [\![1,m^+]\!]\setminus \{i_0\} }$,
$$ \int_{\mathbb{R}} f_{i_0}(y_{i_0})^{1+\varepsilon} f_j\big(\sum_{i\le m^+} \alpha_i(j)y_i\big)^{-\varepsilon} dy_{i_0} \ge \left( \int f_{i_0}\right)^{1+\varepsilon} \left(\frac{1}{|\alpha_{i_0}(j)|} \int f_{j} \right)^{-\varepsilon}. $$
Plugging this estimate in the latter integral over $\mathbb{R}^{m^+}$, we arrive at
\begin{eqnarray*}
&& \int_{H} \Big(\prod_{i\in [\![1,m^+]\!]\setminus \{i_0\} } f_i(\langle x,u_i\rangle) \Big) f_{i_0}(\langle x,u_{i_0}\rangle)^{1+\varepsilon} f_j(\langle x,u_j\rangle)^{-\varepsilon} dx \\
&\ge & |\det((u_i)_{i=1}^{m^+}) |^{-1} |\alpha_{i_0}(j)|^{\varepsilon} \prod_{i\in [\![1,m^+]\!]\setminus \{i_0\} } \left(\int f_i\right) \times \left( \int f_{i_0}\right)^{1+\varepsilon} \left(\int f_{j} \right)^{-\varepsilon} .
\end{eqnarray*}
This inequality proves that $\mathbf{1}_{ [\![1,m^+]\!]}+\varepsilon (\mathbf{1}_{\{i_0\}}- \mathbf{1}_{\{j\}})\in \mathcal P$.
\bigskip
Third part: Our final task is to show that the description of $\mathcal P$ in terms of inequalities coincides with the one in terms of a positive hull. This could be done ``by hand,'' but
we present a neat argument in terms of transportation plans.
We have shown that $c\in \mathcal P$ is equivalent to $c-\mathbf{1}_{[\![1,m^+]\!]}\in \mathrm{Pos}((\mathbf{1}_{\{i\}}-\mathbf{1}_{\{j\}})_{i\sim j}).$ The latter is equivalent to the existence of non-negative
coefficients $(\gamma_{i,j})_{i\sim j}$ such that $c-\mathbf{1}_{[\![1,m^+]\!]}=\sum_{i\sim j}\gamma_{i,j}
(\mathbf{1}_{\{i\}}-\mathbf{1}_{\{j\}})$, or in coordinates:
\begin{eqnarray*}
\mathrm{for} \; i\le m^+, && c_i-1=\sum_{j;\; i\sim j} \gamma_{i,j},\\
\mathrm{for} \; j> m^+, && |c_j|=\sum_{i;\; i\sim j} \gamma_{i,j}.
\end{eqnarray*}
This can be interpreted as a coupling, or transportation plan between measures: $\gamma_{i,j}$ represents
the amount of mass which is transported from $i$ to $j$, and such a shipping is allowed
only if $i\sim j$. Therefore $c$ belongs to $\mathcal P$ if and only if it is possible to transport
the measure $\sum_{i\le m^+} (c_i-1)\delta_i$ to the measure $\sum_{j>m^+} |c_j| \delta_j$ (or vice-versa), while
carrying mass only between points which are in relation for $\sim$. This question of existence of a transport with constraints is well known. Its solution is given in the following classical lemma, which allows us to complete the proof. Observe that the indices $i\le m^+$ and
$j>m^+$ play symmetric roles in the transportation problem, which leads to two different
descriptions of $\mathcal P$ in terms of inequalities.
\end{proof}
\begin{lemma}\label{lem:transport}
Let $I$ and $J$ be disjoint finite sets. Let $E\subset I\times J$ and consider the bipartite graph $(I,J;E)$. Let $(\alpha_i)_{i\in I}$ and $(\beta_j)_{j\in J}$ be non-negative numbers.
Then there exists a transportation plan along the graph between $\sum_{i\in I} \alpha_i \delta_i$ and
$\sum_{j\in J} \beta_j \delta_j$ if and only if:
\begin{equation}\label{eq:transport-constraint}
\sum_{i\in I} \alpha_i=\sum_{j\in J} \beta_j \mbox{ and for all } S\subset I, \; \sum_{i\in S} \alpha_i\le \sum_{j; \; S\sim j} \beta_j\, .
\end{equation}
\end{lemma}
\begin{proof}
The condition means that the origin and target measures have same total mass, and that the mass of any subset of the origin set is not larger than the mass for the target measure of the
set of its neighbors.
Showing that the existence of a transport plan implies the above inequalities is straightforward, and actually not the direction we need for the previous theorem, so we omit it.
Assume that \eqref{eq:transport-constraint} is verified. Let us build a weighted graph
$G$ by enriching $(I,J;E)$ as follows: we assign to every existing edge ($i\sim j$) a
weight $w:=1 + \sum_{i \in I} \alpha_i$;
we also add a vertex $A$ and connect it to each $i\in I$ with a weight $\alpha_i$
on the edge; finally we add another vertex $B$ and connect it to each $j\in J$ with a weight $\beta_j$.
Our goal is to show
that the maximal flow between $A$ and $B$ is equal to $\sum_{i\in I} \alpha_i$ (which means that all the mass from $I$ can be transported to $B$ along the graph, and since
$\sum_i \alpha_i=\sum_j \beta_j$ all the target mass is reached).
By the Max flow-Min cut theorem (see e.g. \cite{schrijver}), it is enough to show that the minimal weight of a cut
separating $A$ and $B$ is equal to $\sum_{i\in I} \alpha_i$ (which corresponds to cutting
all the edges incident to $A$).
Let us study a minimal cut. First, since the edges between $I$ and $J$ have weight $w>\sum_{i\in I} \alpha_i$, they are not in a minimal cut.
Such a cut is thus as follows: there are subsets $S\subset I$ and $T\subset J$ such that
$S^c=I\setminus S$ and $T^c=J\setminus T$ are not connected, and one cuts the edges
between $A$ and $S$ and the ones between $B$ and $T$.
The weight of this cut is
$$\sum_{i\in S} \alpha_i+\sum_{j\in T}\beta_j.$$
Our goal is to bound this weight from below as follows,
$$ \sum_{i\in S} \alpha_i+\sum_{j\in T}\beta_j\ge \sum_{i\in I} \alpha_i.$$
This is equivalent, after canceling the terms appearing twice, to
$\sum_{i\in S^c} \alpha_i\le \sum_{j\in T}\beta_j.$
This is indeed true: by hypothesis $\sum_{i\in S^c} \alpha_i\le \sum_{j; \;S^c\sim j }\beta_j.$
But since $S^c$ and $T^c$ are not connected, $S^c\sim j$ implies that $j\in T$, and the latter sum is at most $\sum_{j\in T}\beta_j$, as claimed.
\end{proof}
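The feasibility criterion of Lemma~\ref{lem:transport} can be tested mechanically by the very max-flow construction used in its proof. The following is a minimal self-contained sketch; the function names (`max_flow`, `transport_exists`) are ours and purely illustrative:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a dense residual-capacity matrix (list of lists)."""
    n = len(cap)
    flow = 0.0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        # bottleneck capacity along the path found
        bottleneck, v = float("inf"), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        # push flow: update residual capacities in both directions
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

def transport_exists(alpha, beta, edges):
    """Feasibility of a transport along a bipartite graph, via max flow.

    alpha: masses on I = {0..len(alpha)-1}; beta: masses on J;
    edges: set of pairs (i, j) with i in I, j in J."""
    if abs(sum(alpha) - sum(beta)) > 1e-9:
        return False
    nI, nJ = len(alpha), len(beta)
    n = nI + nJ + 2                  # vertices: A, I, J, B
    A, B = 0, n - 1
    big = 1 + sum(alpha)             # weight w from the proof: never in a min cut
    cap = [[0.0] * n for _ in range(n)]
    for i, a in enumerate(alpha):
        cap[A][1 + i] = a            # source edges carry the origin masses
    for j, b in enumerate(beta):
        cap[1 + nI + j][B] = b       # sink edges carry the target masses
    for i, j in edges:
        cap[1 + i][1 + nI + j] = big
    # all mass is shipped iff the max flow saturates every source edge
    return abs(max_flow(cap, A, B) - sum(alpha)) < 1e-9
```

On the bipartite graph with $I=\{1,2\}$, $J=\{3\}$ and both edges present, the masses $(1,\tfrac12)$ and $\tfrac32$ can be matched; removing the edge from the second vertex violates the Hall-type condition \eqref{eq:transport-constraint} for $S=\{2\}$, and the flow no longer saturates.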
\subsection{With a kernel}
Here we consider in addition a kernel $e^{-\mathcal Q}$, with the restriction that $s^+(\mathcal Q)$, $s^-(\mathcal Q)\le 1$. As above we work under the assumptions $\mathcal Q_{|\ker B_+}$ positive definite and $\dim H\ge s^+(\mathcal Q)+\sum_{i=1}^{m^+} \dim H_i$, for which a convenient equivalent form is given in Lemma~\ref{lem:decompose-Q}.
In our setting, they can be rephrased as follows (we introduce a small twist with respect to the decomposition
in the lemma, namely a dilation which allows for a more concrete decomposition):
there are vectors $u_0, u_{m+1}\in H$ such that for all $x\in H$,
$$\mathcal{Q}(x)=\pi\langle x,u_0\rangle^2-\pi \langle x,u_{m+1}\rangle^2.$$
Note that these two vectors may be equal to zero (e.g. if $\mathcal Q$ is non-positive, $u_0=0$).
Moreover setting $B_0x=\langle x,u_0\rangle$ and $B_{m+1}x=\langle x,u_{m+1}\rangle$,
we know that $\ker B_+\subset \ker B_{m+1}$ and that
$B_{0+}:H\to B_0H\times B_1H\times \cdots \times B_{m^+}H$ is one-to-one.
The former is equivalent to $\bigcap_{i=1}^{m^+} u_i^\bot \subset u_{m+1}^\bot$, that is
\begin{equation}\label{hyp:m+1}
u_{m+1}\in \mathrm{vect}\{u_1,\ldots,u_{m^+}\},
\end{equation}
while the latter means that:
\begin{itemize}
\item either $u_0=0$ and $(u_1, \ldots, u_{m^+})$ is a basis of $H$,
\item or $(u_0, u_1, \ldots, u_{m^+})$ is a basis of $H$.
\end{itemize}
In any of the above cases, we denote by $\mathbb U$ the corresponding basis of $H$.
Given $i\in I=[\![0,m^+]\!]$ and $j\in J = ]\!] m^+, m+1]\!]$, we write $i\sim j$
if $u_j$, once decomposed in the basis $\mathbb U$, has a non-zero coordinate on the vector $u_i$ of the basis.
This relation creates a bipartite graph $G$ on $I$ and $J$. Note that $m+1$ is an isolated
vertex of the graph when $u_{m+1}=0$, and so is $0$ when $u_0=0$.
The functional of interest is
$$J_{\mathcal Q,(u_k)_{k=1}^m,c}(f_1,\ldots,f_m)= \frac{\int_{H} e^{-\pi\langle x,u_0\rangle^2+\pi \langle x,u_{m+1}\rangle^2} \prod_{k=1}^m f_k\big(\langle x, u_k\rangle\big)^{c_k} dx}{ \prod_{k=1}^m \left( \int_{\mathbb R} f_k\right)^{c_k}} \cdot$$
Now comes a description of its positivity domain
$$\mathcal P_{m^+}(\mathcal Q,(u_k)_{k=1}^m)=\big\{c\in (0,+\infty)^{m^+}\times (-\infty,0]^{m-m^+}; \; \inf J_{\mathcal Q, (u_k)_{k=1}^m,c}>0\big\}.$$
\begin{theorem}\label{theo:positivity-rank1-Q}
With the above notation and hypotheses,
\begin{eqnarray*}
\mathcal P_{m^+}(\mathcal Q,(u_k)_{k=1}^m)
&=& \mathbf{1}_{[\![1,m^+]\!]}+\mathrm{Pos}\Big(
\big\{\mathbf{1}_{\{i\}};\; i\sim m+1 \big\}\cup \big\{-\mathbf{1}_{\{j\}};\; 0\sim j \big\} \\
&&\qquad \qquad \qquad \cup \big\{\mathbf{1}_{\{i\}}-\mathbf{1}_{\{j\}};\; 1\le i\sim j\le m \big\}
\Big)\\
&=& \Big\{c\in [1,+\infty)^{m^+}\times (-\infty,0]^{m-m^+}; \\
&& \quad \mathrm{ \, for\, all} \;
S\subset [\![1,m^+]\!] \mathrm{ \, with \, } S\not\sim m+1, \quad \sum_{i\in S} (c_i-1) \le
\sum_{j;\; S\sim j} |c_j|, \\
&& \quad \mathrm{and \, for\, all} \;
T\subset ]\!]m^+,m]\!] \mathrm{\, with \, } 0\not\sim T, \quad \sum_{j\in T} |c_j| \le
\sum_{i;\; i\sim T} (c_i-1)\Big\},\\
&=& \mathrm{Proj}_{\mathbb{R}^{[\![1,m]\!]}} \left( \mathcal P_{1 + m^+}\big(u_0,u_1,\ldots,u_m,u_{m+1}\big) \right).
\end{eqnarray*}
\end{theorem}
Let us comment on this statement before proving it. The notation of the last line, involving a projection
and an extended use of the notation of the positivity domain in the case of no kernel (if $u_0$ or $u_{m+1}$
is zero, just discard it), means the following: $\inf J_{\mathcal Q,(u_k)_{k=1}^m,c}>0$ if and only if there
exist $c_0\ge 1$, $c_{m+1}\le 0$ and $\varepsilon>0$ such that for all $f_k:\mathbb{R}\to \mathbb{R}^+$ ($k = 0,1,\ldots,m+1$)
integrable and with positive integral:
$$ \int_H f_0(\langle x,u_0\rangle)^{c_0} f_{m+1}(\langle x,u_{m+1}\rangle)^{c_{m+1}} \prod_{k=1}^m f_k\big(\langle x, u_k\rangle\big)^{c_k} dx \ge \varepsilon
\prod_{k=0}^{m+1} \left( \int_{H_k} f_k\right)^{c_k} \cdot$$
Here $H_k$ is the image of $H$ by $x\mapsto \langle x,u_k\rangle$. If for instance $u_0=0$ then $H_0=\{0\}$
and the term $f_0(0)=\int_{H_0} f_0>0$ appears on both sides, and can be discarded.
In other words, the positivity of the constant in the inequality with kernel can be deduced from an inequality
without kernel, by specifying one or two functions to be Gaussian.
The description of the positivity domain as a positive convex hull can also be interpreted in terms of a transportation problem: $c$ is in the positivity domain if and only if one can transport the measure
$\sum_{i=1}^{m^+} (c_i-1) \delta_i$ to $\sum_{j=1+m^+}^m |c_j|\delta_j$ along the bipartite graph $G$ defined
above, with the help of a source at 0 and of a sink at $m+1$.
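The role of the sink can be seen on a small hypothetical instance (again a toy configuration of ours, not taken from the text):

```latex
% Toy instance: H = R^2, m^+ = m = 2, u_1 = e_1, u_2 = e_2,
% u_0 = 0 and u_{m+1} = u_3 = e_1 + e_2, i.e. Q(x) = -pi <x, u_3>^2.
% The graph has edges 1 ~ 3 and 2 ~ 3, and the vertex 0 is isolated.
% Every non-empty S in {1, 2} satisfies S ~ m+1, so no S-condition
% applies, and there is no T-condition (J \cap ]]m^+, m]] is empty):
% the sink at m+1 absorbs the excess masses c_i - 1, and
\[
\mathcal P_2(\mathcal Q, u_1, u_2) = [1,+\infty)^2 .
\]
% In the projection description, e.g. c = (2, 3) is the projection of
% (c_1, c_2, c_3) = (2, 3, -3), which lies in P_2(u_1, u_2, u_3)
% since |c_3| = 3 = (c_1 - 1) + (c_2 - 1).
```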
\begin{proof}[Proof of Theorem \ref{theo:positivity-rank1-Q}]
The strategy is the same as for Theorem \ref{theo:positivity-rank1}, so we only explain the changes. We simply
write $\mathcal P$ for the positivity domain. Let us denote by $\mathcal P_1$, $\mathcal P_2$ and $\mathcal P_3$
the three sets appearing in the claim (in the same order).
\smallskip
Let $c\in \mathcal P$. By Theorem \ref{theo:gauss-mini} and Gaussian calculations, there exists $D>0$ such that for all $\lambda_1,\ldots,\lambda_m>0$,
\begin{equation}\label{eq:ineg-gauss-rank1-Q}
D \prod_k \lambda_k^{c_k} \ge \det \left( \Big(\sum_{k=0}^{m+1} c_k \lambda_k \, u_k \otimes u_k \Big)_+ \right),
\end{equation}
where we have set $c_0=1, c_{m+1}=-1, \lambda_0=\lambda_{m+1}=1$ in order to include the terms coming from the kernel.
Our first task is to infer that for every $a\in\mathbb{R}^{[\![0,m+1]\!]}$ satisfying $a_0=a_{m+1}=0$,
\begin{equation}\label{eq:forms-rank1-Q}
\left(i\sim j \Longrightarrow a_i\ge a_j \right) \Longrightarrow \sum_{i\le m^+} (c_i-1)a_i\ge \sum_{j>m^+} |c_j|a_j.
\end{equation}
To do this we look for numbers $b_k$ such that for all $q\ge 0$, Inequality \eqref{eq:strong-pos-rank1}
is verified for $\lambda_k=\lambda_k(q)=b_ke^{qa_k}$. Letting $q$ tend to infinity in the determinant inequality
then yields \eqref{eq:forms-rank1-Q}. The main changes in the argument come from the ``boundary'' conditions
$\lambda_0(q)=\lambda_{m+1}(q)=1$ which force $a_0=a_{m+1}=0$ and $b_0=b_{m+1}=1$.
The strategy is again to choose the $\lambda_k$ such that \eqref{eq:ineq-tensor-rank1} holds. Observe that
\eqref{eq:ineq-tensor-rank1} is verified when for all $i\le m^+$,
$2K \sum_{j;\; i\sim j} |c_j|\lambda_j \le c_i\lambda_i$. Setting $M:=\max\big(1,2K(m+1)\big)$, we get that a sufficient condition
to ensure the latter is to have:
\begin{equation*}
i\sim j \Longrightarrow M|c_j|\lambda_j \le c_i\lambda_i.
\end{equation*}
As already mentioned, we look for $\lambda_k=b_ke^{q a_k}$ and $a$ verifies $i\sim j\Longrightarrow a_i\ge a_j$.
Hence it is enough to choose $b$ such that
\begin{equation}\label{cond:proof-positivity-rank1-Q}
i\sim j \Longrightarrow M|c_j|b_j \le c_i b_i.
\end{equation}
Recall that $c_0b_0=|c_{m+1}|b_{m+1}=1$, so the latter inequality may fail; however, thanks to \eqref{hyp:m+1}, $0\not\sim m+1$. Finally, if we choose $(b_k)$ such that $b_i=M/c_i$ for $i\in [\![1,m^+]\!]$ and $|c_j|b_j\le 1/M$ for
$j\in ]\!]m^+,m]\!]$, then \eqref{cond:proof-positivity-rank1-Q} is verified.
Thus $c\in \mathcal P$ implies \eqref{eq:forms-rank1-Q}. Using the Hahn-Banach Theorem, \eqref{eq:forms-rank1-Q} means $c\in \mathcal P_1$. So we have proved that $\mathcal P\subset \mathcal P_1$.
\medskip
Next we show that $\mathcal P_1\subset \mathcal P_2$ by drawing consequences of \eqref{eq:forms-rank1-Q}:
\begin{itemize}
\item For any $i\in [\![1,m^+]\!]$, we may choose $a=\mathbf{1}_{\{i\}}$ and get $c_i\ge 1$.
\item For $T\subset ]\!]m^+,m]\!]$ with $0\not\sim T$ we may define a vector $a$ as follows:
$a_j=1$ for $j\in T$, $a_i=1$ if $i\sim T$ and $a_k=0$ otherwise.
It readily verifies the hypothesis of \eqref{eq:forms-rank1-Q}, so we can deduce
that
$ \sum_{j\in T} |c_j| \le
\sum_{i;\; i\sim T} (c_i-1)$.
\item In a symmetric way, for $S\subset [\![1,m^+]\!]$ with $S\not \sim m+1$ we may define an admissible
vector $a$ as follows: $a_i=-1$ if $i\in S$;
$a_j=-1$ if $S\sim j$ and $a_k=0$ otherwise. This implies
$\sum_{i\in S} (c_i-1) \le
\sum_{j;\; S\sim j} |c_j|$.
\end{itemize}
\medskip
Now we prove that $\mathcal{P}_2\subset \mathcal{P}_3$. Let $c\in\mathcal{P}_2$, and set $\alpha_i=c_i-1$
for $1\le i\le m^+$ and $\beta_j=|c_j|$ for $m^+<j\le m$. Let us consider the bipartite graph $\tilde G$ on
$I=[\![0,m^+]\!]$ and $J=]\!]m^+,m+1]\!]$ obtained
by adding to $G$ an edge between $0$ and $m+1$.
Let us choose two numbers $\alpha_0$ and $\beta_{m+1}$ such that $\sum_{i\in I} \alpha_i=\sum_{j\in J} \beta_j$
and $\alpha_0, \beta_{m+1}> \sum_{i\in [\![1,m^+]\!]} \alpha_i+\sum_{j\in ]\!]m^+,m]\!]} \beta_j$.
Let us show that it is possible to transport along $\tilde G$ the measure $\sum_{i\in I} \alpha_i \delta_i$
to $\sum_{j\in J} \beta_j \delta_j$, by application of Lemma~\ref{lem:transport}.
The equality of masses holds by construction. It remains to prove that for every $S\subset I$,
$\sum_{i\in S} \alpha_i \le \sum_{j\in N(S)} \beta_j$, where $N(S)$ denotes the set of vertices which are connected
to $S$ in $\tilde G$. Let us consider several cases:
\begin{itemize}
\item If $0\not\in S$ and $S\not \sim m+1$ then the inequality comes from the hypothesis that $c\in \mathcal P_2$.
\item If $0\not\in S$ and $S\sim m+1$, the inequality holds simply because the term $\beta_{m+1}$ is larger than
$\sum_{i=1}^{m^+} \alpha_i\ge \sum_{i\in S}\alpha_i$.
\item If $0\in S$, then by construction $m+1\in N(S)$. Our aim is to show that
$\sum_{i\in S} \alpha_i \le \sum_{j\in N(S)} \beta_j$. Subtracting this from the equality of total masses shows that the inequality is equivalent to $\sum_{i\in I\setminus S} \alpha_i \ge \sum_{j\in J\setminus N(S)} \beta_j$.
Define $T:=J\setminus N(S)$. Observe that $T\subset ]\!]m^+,m]\!]$ since $m+1\in N(S)$. Moreover, by construction,
$N(T)\subset I\setminus S$, which does not contain $0$ since $0\in S$. So the fact that $c\in \mathcal P_2$ ensures that
$\sum_{j\in T} \beta_j \le \sum_{i\in N(T)} \alpha_i$. Since $N(T)\subset I\setminus S$ we obtain
$\sum_{i\in I\setminus S} \alpha_i \ge \sum_{j\in J\setminus N(S)} \beta_j$ as needed.
\end{itemize}
By Lemma~\ref{lem:transport} there is a transport along $\tilde G$. It may ship an amount $\gamma$ of mass
between the vertices $0$ and $m+1$. If we remove this amount from the initial mass at these two points,
we get two distributions which admit a transport which does not use the edge between $0$ and $m+1$. In other
words if we set $c_0=1+\alpha_0-\gamma\ge 1$ and $c_{m+1}=-(\beta_{m+1}-\gamma)\le 0$, we have shown that
there is a transportation plan along $G$ from $\sum_{i=0}^{m^+} (c_i-1)\delta_i$ to
$\sum_{j=1+m^+}^{m+1} |c_j|\delta_j$. According to Theorem~\ref{theo:positivity-rank1} and its interpretation in terms of transport, this means that $(c_0,c_1,\ldots,c_{m+1})$ is in the positivity domain $\mathcal P_{1 + m^+}(u_0,u_1,\ldots,u_{m+1})$ of an inverse Brascamp-Lieb inequality without kernel (but with more functions).
So starting from $c=(c_1,\ldots,c_m)\in \mathcal P_2$ we have expressed it as the projection of a vector
$(c_0,c_1,\ldots,c_{m+1})$ in $\mathcal P_{1 + m^+}(u_0,u_1,\ldots,u_{m+1})$. This concludes the proof of $\mathcal{P}_2\subset \mathcal{P}_3$. The particular cases when $u_0$ (or $u_{m+1}$) is zero are also covered
by this argument, because then the corresponding vertex ($0$ or $m+1$) is isolated in $G$ and thus not involved in the transport.
\medskip
The inclusion $\mathcal P_3\subset \mathcal P$ is immediate. If $c\in \mathcal P_3$, then by Theorem~\ref{theo:positivity-rank1} there exist $c_0\ge 1$, $c_{m+1}\le 0$ and $\varepsilon>0$ such that for all non-negative integrable functions $f_k$, $k=0,\ldots,m+1$,
$$ \int_H f_0(\langle x,u_0\rangle)^{c_0} f_{m+1}(\langle x,u_{m+1}\rangle)^{c_{m+1}} \prod_{k=1}^m f_k\big(\langle x, u_k\rangle\big)^{c_k} dx \ge \varepsilon
\prod_{k=0}^{m+1} \left( \int_{H_k} f_k\right)^{c_k} \cdot$$
It remains to choose adequate Gaussian functions $f_0$ and $f_{m+1}$ so that
$$f_0(\langle x,u_0\rangle)^{c_0} f_{m+1}(\langle x,u_{m+1}\rangle)^{c_{m+1}}\le e^{-\pi \langle x,u_0\rangle^2 +\pi \langle x,u_{m+1}\rangle^2} =e^{-\mathcal Q(x)}$$ to get a non-trivial inequality for the initial functional.
Equality can be achieved when $c_{m+1}<0$; when $c_{m+1}=0$, we only obtain an inequality, which is sufficient.
\end{proof}
\section{Positivity condition in the general case}
\label{sec:finiteness-n}
We turn to a positivity condition in the general case.
Let $0\le m^+\le m$ and for $k=0,\ldots,m+1$, let $B_k:H\to H_k$ be a surjective linear map.
Recall that $B_+$ denotes the map $(B_1, \ldots, B_{m^+}) \colon H \to H_1 \times \cdots \times H_{m^+}$. With this notation, $\ker B_+ = \bigcap_{k=1}^{m^+} \ker B_k$. Similarly we define $B_{0+}=(B_0,B_1,\ldots, B_{m^+})\colon H \to H_0 \times \cdots \times H_{m^+}$.
Recall also the non-degeneracy conditions~\eqref{eq:injectivity-condition} and~\eqref{eq:surjectivity-condition}, which we assume from now on.
\subsection{Recursive structure of the problem}
Any linear subspace $V \subseteq H$, together with the quotient space $\sfrac{H}{V}$, yields a split of $H$, i.e. the following sequence is exact
\[
\begin{CD} 0 @>>> V @>{i}>> H @>{\pi}>> \sfrac{H}{V} @>>> 0, \end{CD}
\]
where $i \colon V \to H$ is the natural embedding and $\pi \colon H \to \sfrac{H}{V}$ is the natural quotient map.
Next, for each $k = 0, \ldots, m+1$ denote
\[
V_k = B_k V.
\]
We consider a split of $H_k$ induced from the split of $H$ by the map $B_k$, namely
\[
\begin{CD} 0 @>>> V_k @>{i_k}>> H_k @>{\pi_k}>> \sfrac{H_k}{V_k} @>>> 0, \end{CD}
\]
together with surjective linear maps $b_k \colon V \to V_k $ and $\beta_k \colon \sfrac{H}{V} \to \sfrac{H_k}{V_k}$ defined such that the diagram
\[
\begin{CD}
0 @>>> V @>{i}>> H @>{\pi}>> \sfrac{H}{V} @>>> 0 \\
@. @VV{b_k}V @VV{B_k}V @VV{\beta_k}V @.\\
0 @>>> V_k @>{i_k}>> H_k @>{\pi_k}>> \sfrac{H_k}{V_k} @>>> 0
\end{CD}
\]
commutes.
In other words, $b_k$ is the restriction of $B_k$ to $V$, while $\beta_k$ is the quotient of $B_k$ by $V$, which can be defined explicitly by
\[
\beta_k (x + V) = B_k x + V_k.
\]
Similarly as for the maps $B_k$ we consider the maps
\[ \begin{split}
b_+ &= (b_1, \ldots, b_{m^+}) \colon V \to V_1 \times \cdots \times V_{m^+}, \\
b_{0+} &= (b_0, b_1, \ldots, b_{m^+}) \colon V \to V_0 \times V_1 \times \cdots \times V_{m^+}, \\
\beta_+ &= (\beta_1, \ldots, \beta_{m^+}) \colon \sfrac{H}{V} \to \sfrac{H_1}{V_1} \times \cdots \times \sfrac{H_{m^+}}{V_{m^+}}, \\
\beta_{0+} &= (\beta_0, \beta_1, \ldots, \beta_{m^+}) \colon \sfrac{H}{V} \to \sfrac{H_0}{V_0} \times \sfrac{H_1}{V_1} \times \cdots \times \sfrac{H_{m^+}}{V_{m^+}}.
\end{split} \]
In the sequel, the above construction of restriction and quotient of maps will be applied recursively to $V$ and the maps $b_k \colon V \to V_k$ as well as to $\sfrac{H}{V}$ and the maps $\beta_k \colon \sfrac{H}{V} \to \sfrac{H_k}{V_k}$.
Note a simple fact concerning the kernel of a quotient map.
\begin{lemma}\label{lem:kernel-of-beta}
Let $B \colon H \to H'$ be a linear map (not necessarily surjective), let $V \subseteq H$ be a subspace and denote $V' = B V$. Consider the map $\beta \colon \sfrac{H}{V} \to \sfrac{H'}{V'}$ defined as the quotient of $B$, i.e. $\beta (x + V) = B x + V'$. Then its kernel satisfies
\begin{equation}\label{eq:kernel-of-beta}
\pi^{-1}(\ker \beta) = V + \ker B,
\end{equation}
where $\pi:H\to \sfrac{H}{V}$ is the canonical projection.
\end{lemma}
\begin{proof}
The inclusions $V \subseteq \pi^{-1}(\ker \beta)$ and $\ker B \subseteq \pi^{-1}(\ker \beta)$ are obvious. For the other inclusion, if $x \in \pi^{-1}(\ker \beta)$, i.e. $B x \in V' = B V$, then there exists $v \in V$ such that $B x = B v$ and hence $x = v + (x-v) \in V + \ker B$.
\end{proof}
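For a concrete illustration of~\eqref{eq:kernel-of-beta} (a toy example recorded for the reader's convenience only, and not used in the sequel), take $H=\mathbb{R}^2$, $H'=\mathbb{R}$, $B(x_1,x_2)=x_1$ (so $\ker B = \{0\}\times\mathbb{R}$) and $V=\mathrm{span}\{(1,1)\}$. Then $V'=BV=\mathbb{R}$, so $\sfrac{H'}{V'}=\{0\}$ and $\beta\equiv 0$, whence
\[
\pi^{-1}(\ker\beta)=\pi^{-1}\big(\sfrac{H}{V}\big)=\mathbb{R}^2 = \mathrm{span}\{(1,1)\} + \big(\{0\}\times\mathbb{R}\big) = V+\ker B.
\]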
\medskip
The following notion will be crucial:
\begin{dfn}
Given the maps $B_k \colon H \to H_k$ for $k = 0, 1, \ldots, m^+$ and a linear subspace $V\subset H$, we call the split $\begin{CD} 0 @>>> V @>>> H @>>> \sfrac{H}{V} @>>> 0\end{CD}$ \emph{admissible} for $(H,B)$ if the map $b_{0+}$ is a linear isomorphism. For brevity we also say that $V$ is an admissible subspace, omitting $(H,B)$ when there is no ambiguity.
\end{dfn}
The next lemma shows that Condition~\eqref{eq:isomorphism-condition} is inherited by the subspaces and quotients induced by admissible splits.
\begin{lemma}
\label{lem:admissibility-of-subspace-and-quotient}
Suppose here that the map $B_{0+}$ is a linear isomorphism and consider a split $\begin{CD} 0 @>>> V @>>> H @>>> \sfrac{H}{V} @>>> 0\end{CD}$. Then the map $b_{0+}$ is injective and the map $\beta_{0+}$ is surjective. Moreover, the following assertions are equivalent:
\begin{enumerate}
\item[(i)] the split is admissible (i.e. the map $b_{0+}$ is a linear isomorphism),
\item[(i')] the map $b_{0+}$ is surjective,
\item[(ii)] $\dim V = \sum_{i=0}^{m^+} \dim B_i V$,
\item[(iii)] the map $\beta_{0+}$ is a linear isomorphism,
\item[(iii')] the map $\beta_{0+}$ is injective,
\item[(iv)] $\bigcap_{i=0}^{m^+} (V + \ker B_i) = V$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $B_{0+}:H\to H_0\times\cdots\times H_{m^+}$ is a linear isomorphism,
it is clear that its restriction $b_{0+}:V\to V_0\times\cdots\times V_{m^+}$ is injective (recall the notation $V_k=B_kV$). Hence (i) $\iff$ (i').
The fact that $B_{0+}$ is onto also ensures that
its quotient $\beta_{0+}:\sfrac{H}{V}\to \sfrac{H_0}{V_0} \times \cdots \times \sfrac{H_{m^+}}{V_{m^+}}$ is surjective. Hence (iii) $\iff$ (iii').
Next we use the basic fact that a linear map between finite dimensional vector spaces $L:X\to Y$ is bijective if and only if $\dim X=\dim Y$ and $L$ is injective (which is also equivalent to $\dim X=\dim Y$ and $L$ is surjective). This directly yields (i) $\iff$ (ii).
Since $B_{0+}$ is an isomorphism, it holds $\dim H=\sum_{i=0}^{m^+} \dim H_i$.
Therefore, by subtraction, (ii) is equivalent to
$$ \dim \sfrac{H}{V} = \sum_{i=0}^{m^+} \dim \sfrac{H_i}{V_i}.$$
Since $\beta_{0+}$ is automatically surjective, we deduce that (ii)$\iff$(iii).
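For the reader's convenience, the subtraction step above reads in full (using $\dim H=\sum_{i=0}^{m^+}\dim H_i$ and assuming (ii)):
\[
\dim \sfrac{H}{V} = \dim H - \dim V = \sum_{i=0}^{m^+} \dim H_i - \sum_{i=0}^{m^+} \dim V_i = \sum_{i=0}^{m^+} \dim \sfrac{H_i}{V_i},
\]
and each step is reversible.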
It remains to show that (iii')$\iff$(iv). To do this, we start with observing
that
\[
\pi^{-1}(\ker \beta_{0+})=\pi^{-1}\Big(\bigcap_{i=0}^{m^+} \ker \beta_i \Big) =
\bigcap_{i=0}^{m^+} \pi^{-1}(\ker \beta_i)= \bigcap_{i=0}^{m^+} (V+\ker B_i),
\]
where the last equality comes from~\eqref{eq:kernel-of-beta}. Since the map $\pi$ is surjective, $\ker \beta_{0+}=\{0\}$ is equivalent to
$ \pi^{-1}(\ker \beta_{0+} ) = V$. The equivalence (iii')$\iff$(iv) then follows from the above formula.
\end{proof}
Finally, we show that Condition~\eqref{eq:kerBplus-contained-in-kerBm1} is inherited by the maps induced by admissible splits.
\begin{lemma}\label{lem:kerBplus-inside-kerBm1}
Consider a linear subspace $V\subset H$ and the corresponding split. Suppose $\ker B_+ \subseteq \ker B_{m+1}$. Then
\begin{enumerate}
\item[(i)] $\ker b_+ \subseteq \ker b_{m+1}$,
\item[(ii)] if $b_+$ is surjective then $\ker \beta_+ \subseteq \ker \beta_{m+1}$.
\end{enumerate}
\end{lemma}
\begin{proof}
\par{(i) This part is obvious since $\ker b_+ = V \cap \ker B_+$ and similarly for $\ker b_{m+1}$.}
\par{(ii) By Lemma~\ref{lem:kernel-of-beta}
\[
\pi^{-1}(\ker \beta_{m+1}) = V + \ker B_{m+1}.
\]
Applying Lemma~\ref{lem:kernel-of-beta} once again, this time to the map $B := B_+$ and the subspaces $V \subseteq H$ and $V' = B_+ V \subseteq H' := H_1 \times \cdots \times H_{m^+}$ we obtain that the map $\beta \colon \sfrac{H}{V} \to \sfrac{H'}{V'}$ being the quotient of $B$ satisfies
\[
\pi^{-1}(\ker \beta) = V + \ker B.
\]
Since $b_+$ is surjective, i.e. $V' = B_1 V \times \cdots \times B_{m^+} V$, the map $\beta_+$ coincides with $\varphi \circ \beta$, where $\varphi \colon \sfrac{H'}{V'} = \sfrac{H_1 \times \cdots \times H_{m^+}}{B_1 V \times \cdots \times B_{m^+} V} \to \sfrac{H_1}{B_1V} \times \cdots \times \sfrac{H_{m^+}}{B_{m^+}V}$ is the natural isomorphism, hence $\ker \beta$ and $\ker \beta_+$ coincide. Therefore,
\[
\pi^{-1}(\ker \beta_+) = V + \ker B_+
\]
and using the hypothesis $\ker B_+ \subseteq \ker B_{m+1}$ we obtain
\[
\pi^{-1}(\ker \beta_+) \subseteq V + \ker B_{m+1} = \pi^{-1}(\ker \beta_{m+1}).
\]
To conclude, it remains to apply the surjective map $\pi$.}
\end{proof}
\begin{corollary}\label{cor:inheritance-of-kernel-conditions}
Suppose the map $B_{0+}$ is a linear isomorphism and $\ker B_+ \subseteq \ker B_{m+1}$. If a split $\begin{CD} 0 @>>> V @>>> H @>>> \sfrac{H}{V} @>>> 0\end{CD}$ is admissible then
\begin{enumerate}
\item[(i)] $b_{0+}$ is a linear isomorphism and $\ker b_+ \subseteq \ker b_{m+1}$,
\item[(ii)] $\beta_{0+}$ is a linear isomorphism and $\ker \beta_+ \subseteq \ker \beta_{m+1}$.
\end{enumerate}
\end{corollary}
\begin{proof} This is a direct consequence of Lemma~\ref{lem:kerBplus-inside-kerBm1}
and of the equivalent forms of the admissibility property given in Lemma~\ref{lem:admissibility-of-subspace-and-quotient}.
\end{proof}
\subsection{Formulation of the characterization result}
Recall $0\le m^+\le m$. In addition to the linear surjective maps $B_k \colon H \to H_k$, $k = 0, \ldots, m+1$, we consider real numbers $c_0 = 1$, $c_1, \ldots, c_{m^+} > 0$, $c_{m^+ +1}, \ldots, c_m \le 0$ and $c_{m+1} = -1$. In this context, for any positive definite quadratic forms $\mathcal Q_+ \colon H_0 \to \mathbb{R}$ and $\mathcal Q_- \colon H_{m+1} \to \mathbb{R}$, define a functional $J_{\mathcal Q_+, \mathcal Q_-}$ acting on non-negative integrable functions $f_k \colon H_k \to \mathbb{R}$ ($k = 1, \ldots, m$) satisfying $\int_{H_k} f_k > 0$:
\begin{equation}\label{eq:J-Qplus-Qminus}
J_{\mathcal Q_+, \mathcal Q_-}(f_1, \ldots, f_m) = \frac{\int_H \prod_{k=0}^{m+1} f_k^{c_k} (B_k x) \,dx}{\prod_{k=1}^m \Big( \int_{H_k} f_k \Big)^{c_k}},
\end{equation}
where
\begin{equation}\label{eq:def-f0-fm1}
f_0 = e^{-\mathcal Q_+} \quad \text{and} \quad f_{m+1} = e^{-\mathcal Q_-}.
\end{equation}
Assuming~\eqref{eq:isomorphism-condition} and~\eqref{eq:kerBplus-contained-in-kerBm1}, the condition defined below turns out to be equivalent to the positivity of the infimum of $J_{\mathcal Q_+, \mathcal Q_-}$ over all functions $f_1, \ldots, f_m$.
\begin{dfn}
We say that $H$ together with the maps $B_k$ and the exponents $c_k$ ($k=0, \ldots, m+1$) satisfies \emph{Condition (C)} if for every admissible split
\[
\begin{CD} 0 @>>> V @>>> H @>>> \sfrac{H}{V} @>>> 0\end{CD},
\]
the following two conditions are satisfied:
\begin{enumerate}
\item[(i)] if $b_{m+1}$ is a trivial map (i.e. $V \subseteq \ker B_{m+1}$ thus $V_{m+1} = \{0\}$) then $V$ is a \emph{supercritical subspace} of $H$, i.e.
\[ \dim V \ge \sum_{k=1}^m c_k \dim V_k; \]
\item[(ii)] if $\beta_0$ is a trivial map (i.e. $B_0 V = B_0 H$ thus $\sfrac{H_0}{V_0} = \{0\}$) then $\sfrac{H}{V}$ is a \emph{subcritical quotient} of $H$, i.e.
\[ \dim \sfrac{H}{V} \le \sum_{k=1}^m c_k \dim \sfrac{H_k}{V_k}. \]
\end{enumerate}
\end{dfn}
Later on we will also use a similar notion which we call \emph{criticality}:
\begin{dfn}
Suppose $V\subset H$ induces an admissible split. We say that $V$ is a \emph{critical subspace} of $H$ if
\[ b_{m+1} \text{ is trivial and } \dim V = \sum_{k=1}^m c_k \dim V_k. \]
Similarly, we say that $\sfrac{H}{V}$ is a \emph{critical quotient} of $H$ if
\[ \beta_0 \text{ is trivial and } \dim \sfrac{H}{V} = \sum_{k=1}^m c_k \dim \sfrac{H_k}{V_k}. \]
\end{dfn}
\begin{theorem}\label{thm:positivity-of-constant-general-case}
In the setting described above, suppose that~\eqref{eq:isomorphism-condition} and~\eqref{eq:kerBplus-contained-in-kerBm1} hold.
\begin{enumerate}
\item[(i)] If for some positive definite quadratic forms $\mathcal Q_+ \colon H_0 \to \mathbb{R}$ and $\mathcal Q_- \colon H_{m+1} \to \mathbb{R}$,
\[
\inf_{f_1, \ldots, f_m} J_{\mathcal Q_+, \mathcal Q_-}(f_1, \ldots, f_m) > 0
\]
then $(H,B,c)$ satisfies Condition (C).
\item[(ii)] If $(H,B,c)$ satisfies Condition (C) then for all positive definite quadratic forms $\mathcal Q_+$ and $\mathcal Q_-$,
\[
\inf_{f_1, \ldots, f_m} J_{\mathcal Q_+, \mathcal Q_-}(f_1, \ldots, f_m) > 0.
\]
\end{enumerate}
\end{theorem}
The above theorem readily implies the characterization of positivity of the functional $J$, for which we now present an intrinsic formulation (in terms of $\mathcal Q$ only, and not
of its decomposition involving $B_0$ and $B_{m+1}$).
\begin{theorem}\label{th:characterization-in-general-case}
Consider the functional $J$ as defined in~\eqref{def:J} along with a quadratic form $\mathcal Q \colon H \to \mathbb{R}$, surjective linear maps $B_k \colon H \to H_k$ ($k=1,\ldots,m$) and exponents $c_1, \ldots, c_{m^+} > 0, c_{m^+ +1}, \ldots, c_m \le 0$. Suppose the non-degeneracy conditions~\eqref{eq:injectivity-condition} and~\eqref{eq:surjectivity-condition} hold. Then $\inf J > 0$ if and only if for every subspace $V \subseteq H$ such that
\begin{equation}\label{eq:admissibility-in-terms-of-Q-via-surjectivity-of-b0plus}
\dim\big(V \cap (\ker B_+)^{\perp_{\mathcal Q}}\big) = \sum_{i=1}^{m^+} \dim B_i V,
\end{equation}
the following two implications hold true:
\begin{enumerate}
\item[(i)] if $V \subseteq \rad \mathcal Q + \ker B_+$ then $$\dim V \ge \sum_{k=1}^m c_k \dim B_k V;$$
\item[(ii)] if $V + (\ker B_+)^{\perp_{\mathcal Q}} = H$ then $$\dim H - \dim V \le \sum_{k=1}^m c_k (\dim H_k - \dim B_k V).$$
\end{enumerate}
\end{theorem}
\begin{remark}
When no kernel is involved (i.e. $\mathcal Q=0$) we recover, in a slightly different form, the condition of Theorem~\ref{th:positivity-general-rank-no-kernel} from the introduction. To see the connection, observe that $(i)$ for $V=H$ and $(ii)$ for $V=\{0\}$ yield $\dim H=\sum_{k=1}^m c_k \dim H_k$. Then, it is clear that the inequalities in $(i)$ and $(ii)$ are equivalent.
\end{remark}
\begin{proof}[Proof of Theorem \ref{th:characterization-in-general-case}]
First we construct the maps $B_0 \colon H \to H_0$ and $B_{m+1} \colon H \to H_{m+1}$ as in the proof of the implication (1) $\implies$ (2) from Lemma~\ref{lem:decompose-Q}. Recall from that proof that $$\ker B_0=H_0^{\perp_{\mathcal Q}} = (\ker B_+)^{\perp_{\mathcal Q}}$$ and $$\ker B_{m+1} = \rad \mathcal Q + \ker B_+.$$
Next, we apply Theorem~\ref{thm:positivity-of-constant-general-case}, and reformulate it in terms of the quadratic form $\mathcal Q$ as follows. By~\eqref{eq:isomorphism-condition} and Lemma~\ref{lem:admissibility-of-subspace-and-quotient}, a subspace $V\subset H$ is admissible if and only if
\[
\dim V = \sum_{i=0}^{m^+} \dim B_i V.
\]
The last equation is equivalent to~\eqref{eq:admissibility-in-terms-of-Q-via-surjectivity-of-b0plus}, thanks to the following relation
\[
\dim B_0 V = \dim V - \dim (V \cap \ker B_0) = \dim V - \dim \big(V \cap (\ker B_+)^{\perp_{\mathcal Q}}\big).
\]
Finally note that the following equivalences hold true:
\[ \begin{split}
V \subseteq \ker B_{m+1} \quad &\iff \quad V \subseteq \rad \mathcal Q + \ker B_+, \\
B_0 V = B_0 H \quad &\iff \quad V + \ker B_0 = H \quad \iff \quad V + (\ker B_+)^{\perp_{\mathcal Q}} = H.
\end{split} \]
\end{proof}
Finally, let us note that the equivalence (i) $\iff$ (iv) of Lemma~\ref{lem:admissibility-of-subspace-and-quotient} shows that \eqref{eq:admissibility-in-terms-of-Q-via-surjectivity-of-b0plus} is equivalent to
\[
\big(V + (\ker B_+)^{\perp_{\mathcal Q}}\big) \cap \bigcap_{i=1}^{m^+} (V + \ker B_i) = V.
\]
\subsection{Useful notation for the proof of Theorem~\ref{thm:positivity-of-constant-general-case}}\label{sec:prelim-results-characterization-theorem}
Consider any split
\[
\begin{CD} 0 @>>> V @>>> H @>>> \sfrac{H}{V} @>>> 0\end{CD}.
\]
We fix any linear injective maps
\[ \begin{split}
j &\colon \sfrac{H}{V} \to H, \\
j_k &\colon \sfrac{H_k}{V_k} \to H_k \quad \textup{for $k = 0, \ldots, m+1$}
\end{split} \]
such that $j$ (respectively $j_k$) composed with the canonical quotient map $H \to \sfrac{H}{V}$ (resp. $H_k \to \sfrac{H_k}{V_k}$) is the identity on $\sfrac{H}{V}$ (resp. $\sfrac{H_k}{V_k}$). For example, $j$ can be chosen so that its range is the orthogonal complement of $V$ in $H$ (and similarly $j_k$).
For any $x \in V$ and $y \in \sfrac{H}{V}$ we write
\[
B_k (x + j(y)) = B_k x + B_k j(y) = b_k x + \rho_k y + j_k(\beta_k y),
\]
where
\[
\rho_k y = B_k j(y) - j_k(\beta_k y) \colon \sfrac{H}{V} \to V_k.
\]
To see that $\rho_k$ has range in $V_k$, compose it with the quotient map $\pi_k \colon H_k \to \sfrac{H_k}{V_k}$: then $\pi_k \rho_k(y) = \pi_k B_k j(y) - \beta_k \pi j(y) = (\pi_k B_k - \beta_k \pi)(j(y)) = 0$ by the definition of $\beta_k$.
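In other words (a reformulation recorded for intuition only), under the identifications $H \cong V \times \sfrac{H}{V}$ and $H_k \cong V_k \times \sfrac{H_k}{V_k}$ given by $j$ and $j_k$, the map $B_k$ acts in block upper triangular form:
\[
B_k \colon (x,y) \longmapsto \big(b_k x + \rho_k y,\; \beta_k y\big), \qquad \text{i.e.} \qquad B_k \cong \begin{pmatrix} b_k & \rho_k \\ 0 & \beta_k \end{pmatrix}.
\]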
Fix any positive definite quadratic forms $\mathcal Q_+$ on $H_0$ and $\mathcal Q_-$ on $H_{m+1}$ and thus fix $f_0$ and $f_{m+1}$ as in~\eqref{eq:def-f0-fm1}. Next,
let $f_k \colon H_k \to \mathbb{R}$ ($k = 1, 2, \ldots, m$) be non-negative, integrable functions with $\int_{H_k} f_k > 0$. By identifying each function $f_k \colon H_k \to \mathbb{R}$ (for $k = 0, 1, \ldots, m + 1$) with a function $f_k \colon V_k \times \sfrac{H_k}{V_k} \to \mathbb{R}$ and using Fubini's theorem we rewrite~\eqref{eq:J-Qplus-Qminus} as
\begin{equation}\label{eq:J-iterative-integral}
J_{\mathcal Q_+, \mathcal Q_-}(f_1, \ldots, f_m) = C \frac{\int_{\sfrac{H}{V}} \int_V \prod_{k=0}^{m+1} f_k^{c_k}(b_k x + \rho_k y, \beta_k y) \, dx \, dy}{\prod_{k=1}^m \left(\int_{\sfrac{H_k}{V_k}} \int_{V_k} f_k(x, y) \, dx \, dy \right)^{c_k}},
\end{equation}
where $C \in (0, +\infty)$ is a constant resulting from the changes of variables $V \times \sfrac{H}{V} \ni (x, y) \mapsto x + j(y) \in H$ and $V_k \times \sfrac{H_k}{V_k} \ni (x, y) \mapsto x + j_k(y) \in H_k$ (for $k=1,\ldots,m$). (If $j$ and $j_k$ are chosen according to the Euclidean structures of $H$ and $H_k$ as in the example mentioned above, then $C = 1$; however, in what follows the exact value of $C$ is of no importance.)
\subsection{Necessity of Condition (C)}
Here we prove the first assertion of Theorem~\ref{thm:positivity-of-constant-general-case}.
\begin{proof}[Proof of Theorem~\ref{thm:positivity-of-constant-general-case}, part (i)]
Recall the discussion from Section~\ref{sec:prelim-results-characterization-theorem}. Assume that the subspace $V$ is admissible, and choose the functions $f_1, \ldots, f_{m^+}$ bounded with compact support, and the functions $f_{m^+ +1}, \ldots, f_m$ strictly positive with polynomial decay at infinity.
First consider the case $V \subseteq \ker B_{m+1}$ (i.e. $V_{m+1} = \{0\}$ and thus $b_{m+1}$ and $\rho_{m+1}$ are trivial). We aim at showing that $V$ is a supercritical subspace of $H$.
To this end, for any $R \in [1, \infty)$, set
\[
f^{(R)}_k(x,y) = f_k(x/R,y) \quad \textup{for $k=1,\ldots,m$.}
\]
By the hypothesis, for all $R \ge 1$, $J_{\mathcal Q_+, \mathcal Q_-}(f^{(R)}_1, \ldots, f^{(R)}_m)$ is uniformly bounded from below by a positive constant. On the other hand, using \eqref{eq:J-iterative-integral} for $(f^{(R)}_1,\ldots,f^{(R)}_m) $ and rescaling the variables of integration $x$ in the numerator and the denominator (i.e. replacing $x$ with $Rx$) gives
\begin{equation}\label{eq:J-interative-integral-with-R-big}
\begin{split}
& J_{\mathcal Q_+, \mathcal Q_-}(f^{(R)}_1, \ldots, f^{(R)}_m) = C \times R^{\dim V - \sum_{k=1}^m c_k \dim V_k} \\
& \times \frac{\int_{\sfrac{H}{V}} \int_V f_0(R b_0 x + \rho_0 y, \beta_0 y) f_{m+1}^{-1}(0, \beta_{m+1} y) \prod_{k=1}^m f_k^{c_k}(b_k x + \frac{1}{R} \rho_k y, \beta_k y) \, dx \, dy}{\prod_{k=1}^m \left(\int_{\sfrac{H_k}{V_k}} \int_{V_k} f_k(x, y) \, dx \, dy \right)^{c_k}}.
\end{split}
\end{equation}
Now it is enough to show that the double integral in the numerator is uniformly bounded from above as $R \to \infty$. Once this is done, the positive lower bound on the l.h.s. of~\eqref{eq:J-interative-integral-with-R-big} implies that $R^{\dim V - \sum_{k=1}^m c_k \dim V_k}$ is bounded away from $0$ as $R \to \infty$, and thus $\dim V - \sum_{k=1}^m c_k \dim V_k \ge 0$.
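Schematically (we record this elementary step for the reader's convenience): writing $\delta>0$ for the uniform lower bound, $M$ for a uniform upper bound on the double integral in the numerator, and $D$ for the fixed denominator, we get
\[
0 < \delta \le J_{\mathcal Q_+, \mathcal Q_-}(f^{(R)}_1, \ldots, f^{(R)}_m) \le C\, \frac{M}{D}\, R^{\dim V - \sum_{k=1}^m c_k \dim V_k} \quad \text{for all } R \ge 1,
\]
which indeed forces the exponent of $R$ to be non-negative.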
Due to our choice of the functions $f_k$, for $k=1, \ldots, m^+$,
\[
\textup{supp} f_k \subseteq F_k \times G_k,
\]
for some compact, star-shaped sets $F_k \subseteq V_k$ and $G_k \subseteq \sfrac{H_k}{V_k}$ (by star-shaped we mean that if $x$ is in the set then so is $\lambda x$ for any $\lambda \in [0, 1]$).
Thanks to the assumption~\eqref{eq:isomorphism-condition} and the admissibility of the split of $H$, the maps $b_{0+}$ and $\beta_{0+}$ are linear isomorphisms (see Lemma~\ref{lem:admissibility-of-subspace-and-quotient}).
Observe that we can restrict the domain of the outer integral in the numerator of~\eqref{eq:J-interative-integral-with-R-big} to the set
\[
G := \beta_+^{-1}(G_1 \times \cdots \times G_{m^+}) \subseteq \sfrac{H}{V},
\]
because outside $G$ the terms $f_k^{c_k}$ with $k \in \{1, \ldots, m^+\}$ make the integrand vanish.
Although $G$ is not necessarily compact, this allows us to bound the exponentially large term $f_{m+1}^{-1}$ in~\eqref{eq:J-interative-integral-with-R-big}. Indeed, Corollary~\ref{cor:inheritance-of-kernel-conditions}(ii) gives
\[
\ker \beta_+ \subseteq \ker \beta_{m+1},
\]
hence by Lemma~\ref{lem:compact-image}, $\beta_{m+1}(G)$ is a compact subset of $\sfrac{H_{m+1}}{V_{m+1}}$ and thus we can bound from above the integrand by replacing $f_{m+1}^{-1}$ with
\[
\sigma:=\sup_{\{0\} \times \beta_{m+1}(G)} f_{m+1}^{-1} < \infty.
\]
In order to deal with the terms $f_k^{c_k}$ for $k \in \{ m^+ + 1, \ldots, m\}$ that grow (at most) polynomially at infinity, we take advantage of the exponential decay of $f_0$. In order to use a compactness argument we decompose $f_0$ into slices. Namely, note that for some compact, star-shaped sets $F_0 \subseteq V_0$, $G_0 \subseteq \sfrac{H_0}{V_0}$, which depend on $\mathcal Q_+$ and the map $j_0$ only, we have
\[ \begin{split}
f_0(x_0, y_0) &= \int_0^1 \ind{(x, y) \in V_0 \times \sfrac{H_0}{V_0} \colon \exp(-\mathcal Q_+(x + j_0 y)) \ge u}(x_0, y_0) \, du \\
&= \int_0^\infty t e^{-t^2/2} \ind{(x, y) \in V_0 \times \sfrac{H_0}{V_0} \colon \exp(-\mathcal Q_+(x + j_0 y)) \ge \exp(-t^2/2)}(x_0, y_0) \, dt \\
&= \int_0^\infty t e^{-t^2/2} \ind{(x, y) \in V_0 \times \sfrac{H_0}{V_0} \colon \mathcal Q_+(x + j_0 y) \le t^2/2}(x_0, y_0) \, dt \\
&\le \int_0^\infty t e^{-t^2/2} \Ind{t F_0}(x_0) \Ind{t G_0}(y_0) \, dt
\end{split} \]
for all $(x_0, y_0) \in V_0 \times \sfrac{H_0}{V_0}$. Using Fubini, we can thus bound the numerator of~\eqref{eq:J-interative-integral-with-R-big} by
\begin{equation}\label{eq:J-interative-integral-with-R-big-numerator}
\sigma \int_0^\infty t e^{-t^2/2} \int_{\sfrac{H}{V}} \int_V \Ind{t F_0}(R b_0 x + \rho_0 y) \Ind{t G_0}(\beta_0 y) \prod_{k=1}^m f_k^{c_k}\Big(b_k x + \frac{1}{R} \rho_k y, \beta_k y\Big) \, dx \, dy \, dt.
\end{equation}
\medskip
Now we argue that for some polynomials $p$ and $q$, for any $t > 0$ and all $R\ge 1$, the integrand of the double integral w.r.t. $x$ and $y$ in~\eqref{eq:J-interative-integral-with-R-big-numerator} is bounded from above by $q(t)$ and is supported in a compact set of measure at most $p(t)$.
To this end, fix any $R \ge 1$ and $t > 0$. The integrand in question vanishes if $y$ is outside the set $\beta_{0+}^{-1}(tG_0 \times G_1 \times \cdots \times G_{m^+})$. Clearly we have
\[
\beta_{0+}^{-1}(tG_0 \times G_1 \times \cdots \times G_{m^+}) \subseteq (t+1) \beta_{0+}^{-1}(G_0 \times G_1 \times \cdots \times G_{m^+}).
\]
Since $\beta_{0+}$ is an isomorphism, the set
\[
\mathbf{G} = \beta_{0+}^{-1}(G_0 \times G_1 \times \cdots \times G_{m^+})
\]
is a compact (and star-shaped) subset of $\sfrac{H}{V}$. Thus we can restrict the domain of integration w.r.t. $y$ to $(t+1)\mathbf{G}$.
Next, fix $y \in (t+1) \mathbf{G}$ and $R \ge 1$. Take any $x \in V$ such that
\[ \begin{split}
R b_0 x + \rho_0 y &\in t F_0, \\
b_k x + \frac{1}{R} \rho_k y &\in F_k \quad \text{for all $k = 1,\ldots,m^+$}
\end{split} \]
(otherwise the integrand is zero). Then we have
\[ \begin{split}
b_0 x &\in \frac{t}{R} F_0 + \Big(- \frac{1}{R} \rho_0((t+1)\mathbf{G}) \Big) \subseteq (t+1)(F_0 + \rho_0(-\mathbf{G})), \\
b_k x &\in F_k + \Big(-\frac{1}{R} \rho_k((t+1) \mathbf{G})\Big) \subseteq (t+1)(F_k + \rho_k(-\mathbf{G})) \quad \textup{for $k = 1,\ldots,m^+$,}
\end{split} \]
where the inclusion follows from the fact that $F_0, F_1, \ldots, F_{m^+}$ and $-\mathbf{G}$ are star-shaped. Consider compact sets
\[
\tilde{F}_k = F_k + \rho_k(-\mathbf{G}) \subseteq V_k, \quad \textup{for $k = 0,1,\ldots,m^+$}.
\]
Put
\[
\mathbf{F} = b_{0+}^{-1}(\tilde{F_0} \times \tilde{F}_1 \times \cdots \times \tilde{F}_{m^+}).
\]
Clearly $x \in (t+1) \mathbf{F}$ for all $y \in (t+1) \mathbf{G}$ and all $R \ge 1$ and hence one can restrict the integral w.r.t. $x$ to the domain $(t+1) \mathbf{F}$ which is compact, because $b_{0+}$ is an isomorphism. Therefore we have shown that for all $R \ge 1$, the domain of the double integral in~\eqref{eq:J-interative-integral-with-R-big-numerator} can be restricted to the compact set $(t+1) (\mathbf{F} \times \mathbf{G})$. Moreover, the measure of this set is a polynomial
function of $t$.
\smallskip
Now we proceed with bounding the integrand inside $(t+1) (\mathbf{F} \times \mathbf{G})$. The functions $f_1, \ldots, f_{m^+}$ are bounded, so we may focus on the terms involving $f_k$ for $k \in \{m^+ + 1, \ldots m\}$. Set
\begin{equation}
\label{eq:def-F-k-G-k}
\begin{split}
F_k &= b_k(\mathbf{F}) + \rho_k(\mathbf{G}), \\
G_k &= \beta_k(\mathbf{G})
\end{split}
\end{equation}
for $k = m^+ +1, \ldots, m$. Then for all $(x,y) \in (t+1) (\mathbf{F} \times \mathbf{G})$, all $R \ge 1$ and $k = m^+ +1, \ldots, m$,
\[
b_k x + \frac{1}{R} \rho_k y \in b_k\big((t+1) \mathbf{F}\big) + \frac{1}{R} \rho_k\big((t+1) \mathbf{G}\big) \subseteq (t+1) F_k
\]
since $\mathbf{G}$ is star-shaped, and, of course,
\[
\beta_k y \in (t+1) G_k.
\]
Therefore the integrand can be bounded from above by
\[
\prod_{k=1}^{m^+} \Big( \sup_{H_k} f_k \Big)^{c_k} \times \prod_{k=m^+ +1}^m \Big( \sup_{(t+1)(F_k \times G_k)} f_k^{-1} \Big)^{-c_k},
\]
where the first product is finite by boundedness of the functions $f_1, \ldots, f_{m^+}$ and the second product is bounded by a polynomial in $t$ due to the polynomial decay of the functions $f_{m^+ +1}, \ldots, f_m$. Consequently, the double integral in the numerator of~\eqref{eq:J-interative-integral-with-R-big} is bounded from above independently of $R$, as claimed.
\bigskip
Now we pass to the case when $B_0 V = B_0 H$ (i.e. $\sfrac{H_0}{V_0} = \{0\}$ and thus $\beta_0$ is trivial). By reasoning similar to the first case, we will show that $\sfrac{H}{V}$ is a subcritical quotient of $H$. For any $r \in (0, 1]$ set
\[
f_k^{(r)}(x,y) = f_k(x,y/r) \quad \textup{for $k=1, \ldots, m$}.
\]
We apply~\eqref{eq:J-iterative-integral} to $(f_1^{(r)},\ldots,f_m^{(r)})$ and then
we rescale the variables $y$ in the numerator and the denominator (i.e. we replace
$y$ with $ry$ in all integrals with respect to $y$). We get
\begin{equation}\label{eq:J-interative-integral-with-r-small}
\begin{split}
& J_{\mathcal Q_+,\mathcal Q_-}(f^{(r)}_1, \ldots, f^{(r)}_m) = C \times r^{\dim \sfrac{H}{V} - \sum_{k=1}^m c_k \dim \sfrac{H_k}{V_k}} \\
& \times \frac{\int_{\sfrac{H}{V}} \int_V f_0(b_0 x + r \rho_0 y, 0) f_{m+1}^{-1}(b_{m+1} x + r \rho_{m+1} y, r \beta_{m+1} y) \prod_{k=1}^m f_k^{c_k}(b_k x + r \rho_k y, \beta_k y) \, dx \, dy}{\prod_{k=1}^m \left(\int_{\sfrac{H_k}{V_k}} \int_{V_k} f_k(x, y) \, dx \, dy \right)^{c_k}}.
\end{split}
\end{equation}
As before, it is enough to show that the double integral in the numerator is uniformly bounded from above as $r \to 0$.
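Schematically, as in the first case: with $\delta>0$ the uniform lower bound, $M$ a uniform upper bound on the double integral in the numerator and $D$ the fixed denominator,
\[
0 < \delta \le J_{\mathcal Q_+, \mathcal Q_-}(f^{(r)}_1, \ldots, f^{(r)}_m) \le C\, \frac{M}{D}\, r^{\dim \sfrac{H}{V} - \sum_{k=1}^m c_k \dim \sfrac{H_k}{V_k}} \quad \text{for all } r \in (0,1],
\]
and letting $r \to 0$ forces the exponent of $r$ to be non-positive, i.e. $\dim \sfrac{H}{V} \le \sum_{k=1}^m c_k \dim \sfrac{H_k}{V_k}$.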
First we deal with the term $f_{m+1}^{-1}$. The map $\beta_{0+}$ is a linear isomorphism, and since the map $\beta_0$ is trivial, the map $\beta_+$ is an isomorphism as well. Therefore, the set
\[
\mathbf{G} := \beta_+^{-1}(G_1 \times \cdots \times G_{m^+}) \subseteq \sfrac{H}{V}
\]
to which we can restrict the integral w.r.t. $y$ in~\eqref{eq:J-interative-integral-with-r-small} is compact (and star-shaped). Now, fix any $y \in \mathbf{G}$ and take any $x \in V$ such that $b_k x + r \rho_k y \in F_k$ for all $k = 1, \ldots, m^+$. Then we have
\[
b_k x \in F_k + \big(-r \rho_k(\mathbf{G})\big) \subseteq F_k + \rho_k(-\mathbf{G}) := \tilde{F}_k
\]
and the sets $\tilde{F}_k \subseteq V_k$ are compact and star-shaped. Put
\[
F = b_+^{-1}(\tilde{F}_1 \times \cdots \times \tilde{F}_{m^+}) \subseteq V.
\]
The set $F$ is not necessarily compact, but Corollary~\ref{cor:inheritance-of-kernel-conditions}(i) says that $\ker b_+ \subseteq \ker b_{m+1}$ and hence, by Lemma~\ref{lem:compact-image}, $b_{m+1}(F)$ is a compact subset of $V_{m+1}$. Therefore for all $r \in (0,1]$ we have the bound
\[
f_{m+1}^{-1}(b_{m+1} x + r \rho_{m+1} y, r \beta_{m+1} y) \le
\sup_{(b_{m+1}(F) + \rho_{m+1}(\mathbf{G})) \times \mathbf{G}} f_{m+1}^{-1} < \infty.
\]
\medskip
In order to deal with the terms $f_k^{c_k}$ for $k \in \{ m^+ + 1, \ldots, m\}$ we decompose $f_0(\cdot, 0)$ into slices. Namely, defining the compact, star-shaped set $F_0=\{ x\in V_0 \colon \mathcal Q_+(x)\le 1/2\} $, we have that for all $x_0 \in V_0 = H_0$,
\[ \begin{split}
f_0(x_0, 0) &= \int_0^1 \ind{x \in V_0 \colon \exp(-\mathcal Q_+(x)) \ge u}(x_0) \, du \\
&= \int_0^\infty t e^{-t^2/2} \ind{x \in V_0 \colon \exp(-\mathcal Q_+(x)) \ge \exp(-t^2/2)}(x_0) \, dt \\
&= \int_0^\infty t e^{-t^2/2} \Ind{t F_0}(x_0) \, dt.
\end{split} \]
Using Fubini, we can thus bound the numerator of~\eqref{eq:J-interative-integral-with-r-small} by a constant (our bound on the terms involving $f_{m+1}^{-1}$) times
\begin{equation}\label{eq:J-interative-integral-with-r-small-numerator}
\int_0^\infty t e^{-t^2/2} \int_{\sfrac{H}{V}} \int_V \Ind{t F_0}(b_0 x + r \rho_0 y) \prod_{k=1}^m f_k^{c_k}(b_k x + r \rho_k y, \beta_k y) \, dx \, dy \, dt.
\end{equation}
As discussed above, the domain of the integration w.r.t. $y$ can be restricted to the compact set $\mathbf{G} \subseteq \sfrac{H}{V}$. Now fix $t > 0$, $y \in \mathbf{G}$ and $r \in (0, 1]$. Suppose that $x \in V$ is such that the integrand in~\eqref{eq:J-interative-integral-with-r-small-numerator} does not vanish. Then we must have
\[ \begin{split}
b_0 x + r \rho_0 y &\in t F_0, \\
b_k x + r \rho_k y &\in F_k \quad \textup{for $k = 1, \ldots, m^+$,}
\end{split} \]
which implies
\[ \begin{split}
b_0 x &\in t F_0 + \big(-r \rho_0(\mathbf{G})\big) \subseteq (t+1) (F_0 + \rho_0(-\mathbf{G})) =: (t+1) \tilde{F}_0, \\
b_k x &\in F_k + \big(-r \rho_k(\mathbf{G})\big) \subseteq F_k + \rho_k(-\mathbf{G}) =: \tilde{F}_k \quad \textup{for $k = 1, \ldots, m^+$.}
\end{split} \]
Set
\[
\mathbf{F} = b_{0+}^{-1}(\tilde{F}_0 \times \tilde{F}_1 \times \cdots \times \tilde{F}_{m^+}).
\]
Clearly $\mathbf{F}$ is a compact ($b_{0+}$ is an isomorphism), star-shaped subset of $V$ and $x \in (t+1) \mathbf{F}$.
Finally define the sets $F_k$ and $G_k$ for $k = m^+ + 1, \ldots, m$ as in~\eqref{eq:def-F-k-G-k}. Then for all $(x, y) \in ((t+1) \mathbf{F}) \times \mathbf{G}$ and all $r \in (0,1]$, the arguments of the functions $f_k$ for $k = m^+ + 1, \ldots, m$ are in $\big((t+1) F_k\big) \times G_k$ and therefore the integrand can be bounded from above by
\[
\prod_{k=1}^{m^+} \Big( \sup_{H_k} f_k \Big)^{c_k} \times \prod_{k=m^+ +1}^m \Big( \sup_{((t+1) F_k) \times G_k} f_k^{-1} \Big)^{-c_k}.
\]
We conclude as in the first case.
\end{proof}
\subsection{Sufficiency of Condition (C)}
The inductive proof of the second part of Theorem~\ref{thm:positivity-of-constant-general-case} relies on the following lemma. It shows that under \eqref{eq:isomorphism-condition}, if one of the components of an admissible split (the subspace or the quotient) is critical, then Condition (C) is inherited by both components of the split.
\begin{lemma}[Inheritance of Condition (C) through a critical split]
\label{lem:inheritence-of-condition-C}
Suppose $H$ together with the maps $B_k$ and the exponents $c_k$ satisfy Condition (C) and
\[
\begin{CD} 0 @>>> V @>>> H @>>> \sfrac{H}{V} @>>> 0\end{CD}
\]
is an admissible split. If $V$ is a critical subspace of $H$ or $\sfrac{H}{V}$ is a critical quotient of $H$ then $V$ with the maps $b_k$ and the exponents $c_k$, as well as $\sfrac{H}{V}$ with the maps $\beta_k$ and the exponents $c_k$ satisfy Condition (C).
\end{lemma}
\begin{proof}
First we present the scheme of the proof:
\begin{enumerate}
\item[\emph{Part I.}] We suppose that $V$ is a critical subspace of $H$.
\begin{itemize}
\item Condition (C) for $V$:
\begin{itemize}
\item \emph{Subspace of $V$:} supercriticality of a subspace $U$ of $V$ is inherited directly from supercriticality of $U$ as a subspace of $H$.
\item \emph{Quotient of $V$:} subcriticality of the quotient $\sfrac{V}{U}$ of $V$ follows from criticality of $V$ in $H$ and supercriticality of $U$ in $V$ just proved.
\end{itemize}
\item Condition (C) for $\sfrac{H}{V}$:
\begin{itemize}
\item \emph{Subspace of $\sfrac{H}{V}$:} supercriticality of a subspace $U$ of $\sfrac{H}{V}$ follows from supercriticality of a subspace $\tilde{U} = \pi^{-1}(U)$ (where $\pi \colon H \to \sfrac{H}{V}$ is a natural quotient map) in $H$ and criticality of $V$ in $H$.
\item \emph{Quotient of $\sfrac{H}{V}$:} subcriticality of the quotient $\sfrac{\sfrac{H}{V}}{U}$ of $\sfrac{H}{V}$ follows from subcriticality of the quotient $\sfrac{H}{\tilde{U}}$ of $H$.
\end{itemize}
\end{itemize}
\item[\emph{Part II.}] We suppose that $\sfrac{H}{V}$ is a critical quotient of $H$. After dualizing, i.e. interchanging subspaces with quotients and supercriticality with subcriticality, the arguments are analogous to those of Part I.
\begin{itemize}
\item Condition (C) for $\sfrac{H}{V}$:
\begin{itemize}
\item \emph{Quotient of $\sfrac{H}{V}$:} subcriticality of a quotient $\sfrac{\sfrac{H}{V}}{U}$ of $\sfrac{H}{V}$ is inherited directly from subcriticality of $\sfrac{H}{\tilde{U}}$ as a quotient of $H$, where $\tilde{U} = \pi^{-1}(U)$.
\item \emph{Subspace of $\sfrac{H}{V}$:} supercriticality of the subspace $U$ of $\sfrac{H}{V}$ follows from criticality of $\sfrac{H}{V}$ in $H$ and subcriticality of $\sfrac{\sfrac{H}{V}}{U}$ in $\sfrac{H}{V}$ just proved.
\end{itemize}
\item Condition (C) for $V$:
\begin{itemize}
\item \emph{Quotient of $V$:} subcriticality of a quotient $\sfrac{V}{U}$ of $V$ follows from subcriticality of a quotient $\sfrac{H}{U}$ in $H$ and criticality of $\sfrac{H}{V}$ in $H$.
\item \emph{Subspace of $V$:} supercriticality of the subspace $U$ of $V$ follows from supercriticality of the subspace $U$ of $H$.
\end{itemize}
\end{itemize}
\end{enumerate}
\medskip
Next we give the arguments in detail.
In the first part we assume that $V$ is a critical subspace: $V \subseteq \ker B_{m+1}$, and thus the map $b_{m+1}$ is trivial, and the following equality holds
\begin{equation}\label{eq:proof-inheritenceC-critical-subspace}
\dim V = \sum_{k=1}^m c_k \dim B_kV.
\end{equation}
Let us check Condition (C) for $V$ equipped with the maps $(b_k)$ and
the coefficients $(c_k)$ (for brevity, we will write $(V,b)$, since the coefficients
$c$ are the same for all sub-structures).
Suppose that
\[ \begin{CD} 0 @>>> U @>>> V @>>> \sfrac{V}{U} @>>> 0\end{CD} \]
is an admissible split of $(V,b)$. Since $U \subseteq V=\ker b_{m+1} $, we must check supercriticality of $U$ as a subspace of $(V,b)$.
The admissible split of $(V,b)$ induced by $U$ obviously leads to an admissible
split of $(H,B)$. Moreover,
$U \subseteq V \subseteq \ker B_{m+1}$, so that Condition (C) for $(H,B)$
yields
$$ \dim U\ge \sum_{k=1}^m c_k \dim B_k U.$$
Using $B_iU=b_iU$, we conclude that $U$ is a supercritical subspace of $(V,b)$.
For the same $U$, $\sfrac{V}{U}$ is a subcritical quotient of $(V,b)$ (regardless of whether $b_0(U) = b_0(V)$ or not), because $V$ is a critical subspace of $H$ and $U$ is a supercritical subspace of $V$ (subtract the inequality $\dim U\ge \sum_{k=1}^m c_k\dim b_kU$ from \eqref{eq:proof-inheritenceC-critical-subspace}).
\medskip
Now we check Condition (C) for $(\sfrac{H}{V},\beta)$. Suppose
\[ \begin{CD} 0 @>>> U @>>> \sfrac{H}{V} @>>> \sfrac{\sfrac{H}{V}}{U} @>>> 0\end{CD} \]
is an admissible split of $(\sfrac{H}{V},\beta)$, which by Lemma~\ref{lem:admissibility-of-subspace-and-quotient} (i) $\iff$ (iv) means that
\[
\bigcap_{i=0}^{m^+} (U + \ker \beta_i) = U.
\]
Taking the preimage w.r.t. $\pi \colon H \to \sfrac{H}{V}$ we get
\[
\bigcap_{i=0}^{m^+} (\pi^{-1}(U) + V + \ker B_i) = \pi^{-1}(U),
\]
where we used~\eqref{eq:kernel-of-beta} and the relation $\pi^{-1}(A+B)=\pi^{-1}(A)+\pi^{-1}(B)+\ker \pi$ which is valid for any linear surjective map. Denote $\tilde{U} = \pi^{-1}(U)$. Of course $\tilde{U}$ contains $V$, hence the above assertion means that
\[ \begin{CD}
0 @>>> \tilde{U} @>>> H @>>> \sfrac{H}{\tilde{U}} @>>> 0
\end{CD} \]
is an admissible split of $(H,B)$ (again use Lemma~\ref{lem:admissibility-of-subspace-and-quotient} (i) $\iff$ (iv)).
We need to check supercriticality of $U$ as a subspace of $\sfrac{H}{V}$ whenever $U \subseteq \ker \beta_{m+1}$.
By criticality of $V$ in $H$ we have $V \subseteq \ker B_{m+1}$, which combined with the assertion $U \subseteq \ker \beta_{m+1}$ and~\eqref{eq:kernel-of-beta} yields
\[
\tilde{U} = \pi^{-1}(U) \subseteq \pi^{-1}(\ker \beta_{m+1}) = V + \ker B_{m+1} = \ker B_{m+1}.
\]
Applying Condition (C) for $(H,B)$, we know that:
\[\dim \tilde{U}\ge \sum_{k=1}^m c_k \dim B_k\tilde{U}.\]
If we subtract from the last inequality the relationship \eqref{eq:proof-inheritenceC-critical-subspace} corresponding to criticality of $V$ in $(H,B)$,
we get the supercriticality of $U$ in $(\sfrac{H}{V},\beta)$.
Indeed, up to isomorphism
\begin{equation}\label{eq:identification-quotients}
U\approx \sfrac{\tilde U}{V} \quad \mathrm{and} \quad\beta_k U\approx \sfrac{B_k\tilde U}{B_kV},
\end{equation}
as one readily checks by considering the ranges and kernels of the maps $\pi:\tilde{U}\to U$
and $\phi: B_k\tilde{U} \to \sfrac{B_kH}{B_kV}$ defined by $\phi(x)=x+B_kV$.
\medskip
Now we check subcriticality of the quotient $\sfrac{\sfrac{H}{V}}{U}$ of $\sfrac{H}{V}$ whenever $\beta_0(U) = \beta_0(\sfrac{H}{V})$, i.e. $U + \ker \beta_0 = \sfrac{H}{V}$.
Notice that using~\eqref{eq:kernel-of-beta}, we have
\[
H = \pi^{-1}(U + \ker \beta_0) = \tilde{U} + V + \ker B_0 = \tilde{U} + \ker B_0
\]
(the last equality follows from the fact that $V \subseteq \tilde{U}$), that is $B_0(\tilde{U}) = B_0(H)$. Therefore, by Condition (C) for $(H,B)$, the quotient $\sfrac{H}{\tilde{U}}$ must be subcritical in $(H,B)$, that is
\[
\dim \sfrac{H}{\tilde{U}} \le \sum_{k=1}^m c_k \dim \sfrac{B_k H}{B_k \tilde U}.
\]
Using again \eqref{eq:identification-quotients} together with the relation $\beta_k \sfrac{H}{V}=\sfrac{B_kH}{B_kV}$, we may rewrite the above inequality as
\[
\dim \sfrac{\sfrac{H}{V}}{U} \le \sum_{k=1}^m c_k \dim \sfrac{\beta_k\sfrac{H}{V}}{\beta_k U}.
\]
In other words, $\sfrac{\sfrac{H}{V}}{U}$ is a subcritical quotient of $(\sfrac{H}{V},\beta)$.
\bigskip
In the second part, suppose $\sfrac{H}{V}$ is a critical quotient of $(H,B)$. More specifically, $B_0(V) = B_0(H)$, i.e. $\beta_0$ is trivial, and
\begin{equation}\label{eq:proof-inheritenceC-critical-quotient}
\dim \sfrac{H}{V} = \sum_{k=1}^m c_k \dim \sfrac{B_kH}{B_kV}.
\end{equation}
The reasoning below is analogous to the first part after interchanging subspaces with quotients and supercriticality with subcriticality.
We check Condition (C) for $(\sfrac{H}{V},\beta)$. Consider a split of $\sfrac{H}{V}$ via its subspace $U$ and suppose this split is admissible. We need to check subcriticality of $\sfrac{\sfrac{H}{V}}{U}$ in $\sfrac{H}{V}$ whatever $U$ is, because $\beta_0$ is trivial and so $\beta_0(U) = \beta_0(\sfrac{H}{V})$ always holds. To this end, consider a split of $H$ via $\tilde{U} = \pi^{-1}(U)$, which is admissible, as we already showed. Moreover,
\[
B_0(\tilde{U}) \supseteq B_0(V) = B_0(H),
\]
so we can use subcriticality of $\sfrac{H}{\tilde{U}}$ in $(H,B)$. From the latter, subcriticality of
$\sfrac{\sfrac{H}{V}}{U}$ in $(\sfrac{H}{V},\beta)$ follows, as we have already explained.
For the same $U$, $U$ is a supercritical subspace of $\sfrac{H}{V}$ (regardless of whether $U \subseteq \ker \beta_{m+1}$ or not), because $\sfrac{H}{V}$ is a critical quotient of $H$ and $\sfrac{\sfrac{H}{V}}{U}$ is a subcritical quotient of $\sfrac{H}{V}$ (write the corresponding equality and inequality relations and subtract them).
\medskip
Next, we check Condition (C) for $(V,b)$. Consider a split of $V$ via a subspace $U$ of $V$ which is admissible. We use an induced split of $H$ via $U$, which is also admissible.
We need to check subcriticality of $\sfrac{V}{U}$ in $V$ whenever $b_0(U) = b_0(V)$. Since we know that
\[
B_0(U) = b_0(U) = b_0(V) = B_0(V) = B_0(H)
\]
(the last equality is due to criticality of $\sfrac{H}{V}$ in $H$), we can use subcriticality of $\sfrac{H}{U}$ in $H$. Writing the corresponding inequality for dimensions and subtracting from it the equality \eqref{eq:proof-inheritenceC-critical-quotient} related to criticality of $\sfrac{H}{V}$ in $(H,B)$, we get subcriticality of $\sfrac{V}{U}$ in $(V,b)$.
Finally we check supercriticality of $U$ in $(V,b)$ whenever $U \subseteq \ker b_{m+1}$. Since the latter implies $U \subseteq \ker B_{m+1}$, we can invoke the fact that $U$ is a supercritical subspace of $H$ to conclude.
\end{proof}
Our next result is about tensorization through a split. We say that $(H,B,c)=\big(H,(B_k)_{k=0}^{m+1},(c_k)_{k=1}^{m}\big)$ admits the \emph{strong positivity property} if for all positive definite quadratic forms $\mathcal Q_+$ on $H_0$ and $\mathcal Q_-$ on $H_{m+1}$,
\[
\inf_{(f_1,\ldots,f_m)} \frac{\int_H e^{-\mathcal Q_+(B_0x)+\mathcal Q_-(B_{m+1}x)} \prod_{k=1}^m f_k^{c_k}(B_kx)\, dx}{\prod_{k=1}^m \left(\int_{H_k} f_k\right)^{c_k}}>0.
\]
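To fix ideas, consider the simplest (and admittedly degenerate) configuration, chosen purely for illustration: $H=H_1=\mathbb{R}^d$, $m=m^+=1$, $B_1=\mathrm{Id}$, $c_1=1$ and $B_0=B_{m+1}=0$, so that $H_0=H_{m+1}=\{0\}$ and the Gaussian factors are constant. Then
\[
\inf_{f_1} \frac{\int_H f_1^{c_1}(B_1x)\, dx}{\left(\int_{H_1} f_1\right)^{c_1}} = \inf_{f_1} \frac{\int_{\mathbb{R}^d} f_1}{\int_{\mathbb{R}^d} f_1} = 1,
\]
so the strong positivity property holds with constant $1$. The content of the next lemma is that strong positivity can be propagated to genuinely coupled data through an admissible split.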
\begin{lemma}[Tensorization]\label{lem:tensorization}
Assume that $(H,B)$ satisfies~\eqref{eq:isomorphism-condition} and~\eqref{eq:kerBplus-contained-in-kerBm1}.
Let $V$ be a linear subspace of $H$ which induces an admissible split.
If $(V,b,c)$ and $(\sfrac{H}{V},\beta,c)$ have the strong positivity property, then $(H,B,c)$ has it too.
\end{lemma}
\begin{proof}
Fix any non-negative, integrable functions $f_k \colon H_k \to \mathbb{R}$ ($k=1, \ldots, m$) satisfying $\int_{H_k} f_k > 0$.
Set $f_0=e^{-\mathcal Q_+}$, $f_{m+1}=e^{-\mathcal Q_-}$, $c_0=1$ and $c_{m+1}=-1$.
Our goal is to bound from below the quantity $J_{\mathcal Q_+,\mathcal Q_-}(f_1,\ldots,f_m)$ defined in \eqref{eq:J-Qplus-Qminus} by a positive
constant (not depending on $(f_1,\ldots,f_m)$). Recall the discussion from Section~\ref{sec:prelim-results-characterization-theorem} and in particular Formula~\eqref{eq:J-iterative-integral}. Equivalently, it suffices to bound from below the quantity
\begin{equation}\label{eq:proof-tensorization1}
I:=\int_{\sfrac{H}{V}} \int_V \prod_{k=0}^{m+1} f_k^{c_k}(b_k x + \rho_k y, \beta_k y) \, dx \, dy,
\end{equation}
by application of an inequality of inverse Brascamp-Lieb type on $V$ and on $\sfrac{H}{V}$.
By Corollary \ref{cor:inheritance-of-kernel-conditions}, $b_{0+}=(b_0,b_+)$ is surjective and $\ker b_+\subseteq \ker b_{m+1}$.
Hence, by Lemma~\ref{lem:kerS-kerT-H}
\[ V=\ker b_0+\ker b_+\subseteq \ker b_0+\ker b_{m+1} \subseteq V.\]
Using Lemma~\ref{lem:kerS-kerT-H} once again, we obtain that $(b_0,b_{m+1})$ is surjective. This allows us to remove the cross-terms
from the Gaussian kernel: indeed for every $y\in \sfrac{H}{V}$, there exists $v_y\in V$ such that $b_0v_y=\rho_0 y$ and
$b_{m+1}v_y=\rho_{m+1}y$. Using the translation invariance of the Lebesgue measure on $V$, we apply the change of variable $V \ni x \mapsto x - v_y \in V$ to the inner integral of \eqref{eq:proof-tensorization1}, and get that it is equal to
\begin{equation}\label{eq:proof-tensorization2}
\int_{\sfrac{H}{V}} \int_V f_0(b_0 x, \beta_0 y) f_{m+1}^{-1}(b_{m+1}x, \beta_{m+1} y) \prod_{k=1}^m f_k^{c_k}(b_k x - b_k v_y + \rho_k y, \beta_k y) \, dx \, dy.
\end{equation}
Next, we bound from below the Gaussian kernel by a product kernel.
Since $f_0=\exp(-\mathcal Q_+)$ where $\mathcal Q_+$ is viewed as a quadratic form on $V_0 \times \sfrac{H_0}{V_0}$, we can bound $f_0$ from below by
\[
f_0(x_0, y_0) \ge f_{0, V}(x_0) f_{0, \sfrac{H}{V}}(y_0),
\]
where $f_{0, V} = \exp(-\mathcal Q_{+, V})$ and $f_{0, \sfrac{H}{V}} = \exp(-\mathcal Q_{+, \sfrac{H}{V}})$ for some positive definite quadratic forms $\mathcal Q_{+, V} \colon V_0 \to \mathbb{R}$ and $\mathcal Q_{+, \sfrac{H}{V}} \colon \sfrac{H_0}{V_0} \to \mathbb{R}$.
For $f_{m+1}$ we use a reverse bound, namely for some positive definite quadratic forms $\mathcal Q_{-, V} \colon V_{m+1} \to \mathbb{R}$ and $\mathcal Q_{-, \sfrac{H}{V}} \colon \sfrac{H_{m+1}}{V_{m+1}} \to \mathbb{R}$ we have
\[
f_{m+1}^{-1}(x_0, y_0) \ge f_{m+1, V}^{-1}(x_0) f_{m+1, \sfrac{H}{V}}^{-1}(y_0),
\]
where $f_{m+1, V} = \exp(-\mathcal Q_{-, V})$ and $f_{m+1, \sfrac{H}{V}} = \exp(-\mathcal Q_{-, \sfrac{H}{V}})$.
Observe that we have used here the fact that $\mathcal Q_-$ is positive definite. We get that $I$ from \eqref{eq:proof-tensorization1} is at least
\[
\int_{\sfrac{H}{V}} f_{0,\sfrac{H}{V}}(\beta_0 y) f_{m+1,\sfrac{H}{V}}^{-1}( \beta_{m+1} y)\int_V f_{0,V}(b_0x) f_{m+1,V}^{-1}(b_{m+1}x)\prod_{k=1}^m f_k^{c_k}(b_k x - b_k v_y + \rho_k y, \beta_k y) \, dx \, dy.
\]
By the strong positivity property for $(V,b,c)$ there exists a constant $C_V>0$ such that for all $y \in \sfrac{H}{V}$,
\[ \begin{split}
\int_V f_{0, V}(b_0 x) & f_{m+1,V}^{-1}(b_{m+1}x)\prod_{k=1}^m f_k^{c_k}(b_k x - b_k v_y + \rho_k y, \beta_k y) \, dx\\
&\ge
C_V \prod_{k=1}^m \left(\int_{V_k} f_k(\cdot - b_k v_y + \rho_k y, \beta_k y) \right)^{c_k} \\
&= C_V \prod_{k=1}^m \left(\int_{V_k} f_k(\cdot, \beta_k y) \right)^{c_k},
\end{split} \]
where the equality follows from translation invariance of the Lebesgue measure on each $V_k$.
Denoting $f_{k, \sfrac{H}{V}}(y) := \int_{V_k} f_k(\cdot, y)$ for $y \in \sfrac{H_k}{V_k}$ ($k=1, \ldots, m$), we obtain
\begin{equation}\label{eq:J-iterative-integral-V-critical-2}
I \ge
C_V \int_{\sfrac{H}{V}} f_{0, \sfrac{H}{V}}(\beta_0 y) f_{m+1, \sfrac{H}{V}}^{-1}(\beta_{m+1} y) \prod_{k=1}^m f_{k, \sfrac{H}{V}}^{c_k}(\beta_k y) \,dy.
\end{equation}
Now it remains to apply the strong positivity property for $(\sfrac{H}{V}, \beta, c)$ and the functions $f_{k, \sfrac{H}{V}}$ in order to get
\[
I \ge
C_V C_{\sfrac{H}{V}} \prod_{k=1}^m \left( \int_{\sfrac{H_k}{V_k}} f_{k, \sfrac{H}{V}} \right)^{c_k} =
C_V C_{\sfrac{H}{V}} \prod_{k=1}^m \left(\int_{\sfrac{H_k}{V_k}} \int_{V_k} f_k(x,y) \,dx\,dy \right)^{c_k}
\]
for some constant $C_{\sfrac{H}{V}} > 0$ (which depends on $\mathcal Q_{+, \sfrac{H}{V}}$ and $\mathcal Q_{-, \sfrac{H}{V}}$).
\end{proof}
\begin{prop}\label{prop:c-i-ge-1}
Let $0 \le m^+ \le m$ be integers and consider surjective maps $B_k \colon H \to H_k$ for $k = 0, 1, \ldots, m+1$ and real numbers $c_k$ such that $c_k > 0$ for $k = 1, \ldots, m^+$ and $c_k \le 0$ for $k = m^+ + 1, \ldots, m$. Assume that Conditions~\eqref{eq:isomorphism-condition} and (C) hold. Then
\begin{itemize}
\item for all $i= 1, \ldots, m^+$, $\dim H_i > 0 \implies c_i \ge 1$;
\item if $H$ is a critical subspace, then $B_0=B_{m+1}=0$.
\end{itemize}
\end{prop}
\begin{proof}
Fix $1 \le i\le m^+$ such that $\dim H_i > 0$, i.e. $\ker B_i \neq H$. Consider $V = \ker B_i$ and a related split of $H$ by $V$. Since clearly
\[
\bigcap_{k=0}^{m^+} (V + \ker B_k) = V,
\]
by Lemma~\ref{lem:admissibility-of-subspace-and-quotient}, the split is admissible. Moreover, since $B_{0+}$ is surjective, the map $(B_0, B_i)$ is surjective too, hence Lemma~\ref{lem:kerS-kerT-H} yields
\[
H = \ker B_0 + \ker B_i = \ker B_0 + V,
\]
i.e. $B_0 V = B_0 H$. Therefore we can use the fact that the quotient $\sfrac{H}{V}$ of $H$ is subcritical, from which it follows that
\begin{equation}\label{eq:consequence-of-subcriticality}
\dim \sfrac{H}{V} \le \sum_{k=1}^m c_k \dim \sfrac{H_k}{V_k} \le \sum_{k=1}^{m^+} c_k \dim \sfrac{H_k}{V_k}.
\end{equation}
For all $1 \le k \le m^+$ with $k \neq i$, the map $(B_i, B_k)$ is surjective, hence again by Lemma~\ref{lem:kerS-kerT-H},
\[
H = \ker B_i + \ker B_k = V + \ker B_k,
\]
which means that $B_k V = B_k H$, i.e. $\sfrac{H_k}{V_k} = \{0\}$. Thus \eqref{eq:consequence-of-subcriticality} boils down to
$$\dim \sfrac{H}{V} \le c_i
\dim \sfrac{H_i}{V_i}.$$
Recall that $V_i=B_iV$ is reduced to $\{0\}$ since by definition $V=\ker B_i$. Moreover
\[
\dim \sfrac{H}{V}= \dim H- \dim \ker B_i = \dim B_iH=\dim H_i,
\]
so the last inequality can be rewritten as $\dim H_i\le c_i \dim H_i$. Therefore
$c_i\ge 1$ whenever $\dim H_i > 0$.
\medskip
The proof of the second item follows the same lines. Firstly, $H$ is admissible by hypothesis. Since it is assumed to be a critical subspace, we know that $H\subset \ker B_{m+1}$ hence $B_{m+1}=0$, and that
\[ \dim H=\sum_{k=1}^m c_k \dim B_kH .\]
We set $V=\ker B_0$. As above, we can check that $V$ is admissible. Since $V\subset H=\ker B_{m+1}$, it is a supercritical subspace thanks to Condition (C). Therefore, using the above dimension equality, we get after subtraction
\begin{equation}\label{eq:consequence-of-subcriticality2}
\dim \sfrac{H}{V} \le \sum_{k=1}^m c_k \dim \sfrac{B_kH}{B_kV} \le \sum_{k=1}^{m^+} c_k \dim \sfrac{B_kH}{B_kV}
\end{equation}
again. Since $B_{0+}$ is a bijection and $V$ is admissible, we know by Lemma~\ref{lem:admissibility-of-subspace-and-quotient} that
$\dim H=\sum_{i=0}^{m^+} \dim B_iH$ and $\dim V=\sum_{i=0}^{m^+} \dim B_iV$. Hence $\dim \sfrac{H}{V}=\sum_{i=0}^{m^+} \dim \sfrac{B_iH}{B_iV}$.
Plugging this equality into \eqref{eq:consequence-of-subcriticality2} yields after rearranging
\begin{equation}\label{eq:consequence-of-subcriticality3}
\dim \sfrac{B_0H}{B_0V}\le \sum_{i=1}^{m^+} (c_i-1) \dim \sfrac{B_iH}{B_iV}.
\end{equation}
Since $B_{0+}$ is surjective, the map $(B_0, B_i)$ is surjective too for any $1\le i\le m^+$. Hence Lemma~\ref{lem:kerS-kerT-H} yields
$ H = \ker B_0 + \ker B_i = V+\ker B_i$, which ensures that $B_iH=B_iV$. Therefore, \eqref{eq:consequence-of-subcriticality3} becomes
$ \dim \sfrac{B_0H}{B_0V}\le 0$. Recall that by definition $B_0V=\{0\}$. We can conclude that $\dim B_0H=0$, that is $B_0=0$.
\end{proof}
The next statements will help to initialize the inductive proof of Theorem~\ref{thm:positivity-of-constant-general-case} (ii).
\begin{lemma}\label{lem:initialization-dimension1}
Assertion (ii) of Theorem \ref{thm:positivity-of-constant-general-case}
is true when $\dim H=1$.
\end{lemma}
\begin{proof}
The main tool here is the reverse H\"older inequality for several functions:
Let $c_1\ge 0\ge c_2,\ldots, c_m$ with $\sum_k c_k=1$; then
\begin{equation}\label{eq:inverse-Holder-m-functions}
\int_{\mathbb{R}^d} \prod_k f_k^{c_k}\ge \prod_k \left( \int_{\mathbb{R}^d} f_k\right)^{c_k}
\end{equation}
holds for all integrable non-negative functions with $\int_{\mathbb{R}^d} f_k\in (0,+\infty)$.
This inequality follows from its version for two functions applied with $\lambda=c_1\ge 1$:
$$ \int_{\mathbb{R}^d} \prod_k f_k^{c_k}\ge \left( \int f_1\right)^{c_1} \left( \int \prod_{j=2}^m f_j^{\frac{c_j}{c_2+\cdots+c_m}}\right)^{c_2+\cdots+c_m},$$
and from the classical H\"older inequality applied to the second integral
(observe that the inner exponents sum up to 1 and are all non-negative, while the
outer exponent $c_2+\cdots+c_m$ is non-positive).
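Although not needed for the proof, inequality \eqref{eq:inverse-Holder-m-functions} is easy to test numerically for step functions, for which the integrals reduce to finite sums; the exponents $c=(2,-\tfrac12,-\tfrac12)$ and the random data below are an arbitrary illustrative choice satisfying $c_1\ge 0\ge c_2,c_3$ and $\sum_k c_k=1$.

```python
import numpy as np

# Discrete sanity check of the reverse Hölder inequality for m = 3 functions:
# for step functions on [0, n), integration reduces to summation.
rng = np.random.default_rng(0)
c = np.array([2.0, -0.5, -0.5])            # c_1 >= 0 >= c_2, c_3 and sum(c) = 1
f = rng.uniform(0.1, 5.0, size=(3, 1000))  # three positive step functions

# Integral of the product f_1^{c_1} f_2^{c_2} f_3^{c_3}.
lhs = np.sum(np.prod(f ** c[:, None], axis=0))
# Product of the integrals raised to the exponents c_k.
rhs = np.prod(np.sum(f, axis=1) ** c)
assert lhs >= rhs                          # reverse Hölder inequality
```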
\medskip
Since $\dim H=1$ and for $1\le k\le m$, $B_k:H\to H_k$ is surjective and $H_k$ is non-trivial, it follows that the maps $B_k$, $k\ge 1$ are bijections.
Therefore we may reduce to the case $H=H_k=\mathbb{R}$ and $B_k=\mathrm{Id}$ for $1\le k\le m$. In this simple setting, the only possible subspaces $V$ are $\{0\}$ and $\mathbb{R}$. The former is trivially admissible, while the latter is admissible by hypothesis.
Hence Condition (C) rewrites as:
\begin{itemize}
\item if $\mathbb{R} \subset \ker B_{m+1}$ (i.e. $B_{m+1}=0$), then $ 1\ge \sum_{k=1}^m c_k$,
\item if $B_0\{0\}=B_0\mathbb{R}$ (i.e. $B_0=0$), then $ 1\le \sum_{k=1}^m c_k$.
\end{itemize}
Also the hypothesis of bijectivity of $(B_0,B_+)$ reduces to two cases:
either $B_0=0$, $B_+=B_1$ and $c_1\ge0\ge c_2,\ldots, c_m$,
or $B_0\neq 0$, $B_+=0$ and $0\ge c_1,\ldots,c_m$.
In order to prove the lemma, we consider several cases:
Case 1: If $B_0=B_{m+1}=0$, then Condition (C) rewrites as $\sum_{k=1}^m c_k=1$.
Moreover there is no kernel and, as explained above $c_1\ge0\ge c_2,\ldots, c_m$.
The positivity of the Brascamp-Lieb functional is a direct consequence of the reverse H\"older inequality \eqref{eq:inverse-Holder-m-functions}.
Case 2: if $B_0\neq 0$ and $B_{m+1}=0$, then $0\ge c_1,\ldots, c_m$ and Condition (C) amounts to $1\ge \sum_{k=1}^m c_k$. We define $c_0:=1- \sum_{k=1}^m c_k\ge 1$ and we are ready to apply the inverse H\"older inequality
with $m+1$ functions:
$$ \int e^{-\mathcal Q_+(B_0x)}\prod_{j=1}^m f_j(x)^{c_j} dx
\ge \left(\int e^{-\frac{1}{c_0}\mathcal Q_+(B_0x)}\right)^{c_0} \prod_{j=1}^m \left(\int f_j\right)^{c_j}.$$
Case 3: if $B_0=0$ and $B_{m+1}\neq 0$, then $c_1\ge0\ge c_2,\ldots, c_m$ and
Condition (C) reads as $1\le \sum_{k=1}^m c_k$ (actually the inequality is strict: if it were an equality, then $H=\mathbb{R}$ would be a critical space, which is not compatible with $B_{m+1}\neq 0$, as explained by Proposition \ref{prop:c-i-ge-1}). We define $c_{m+1}:=1- \sum_{k=1}^m c_k<0$ and we apply the inverse H\"older inequality
with $m+1$ functions:
$$ \int e^{\mathcal Q_-(B_{m+1}x)}\prod_{k=1}^m f_k(x)^{c_k} dx
\ge \left(\int e^{\frac{1}{c_{m+1}}\mathcal Q_-(B_{m+1}x)}\right)^{c_{m+1}} \prod_{k=1}^m \left(\int f_k\right)^{c_k}.$$
Since $c_{m+1}<0$ and $\mathcal Q_-$ is positive definite, the first integral of the right-hand side term is finite.
Case 4: $B_0\neq 0$ and $B_{m+1}\neq 0$ cannot happen. Indeed it would imply that
$B_+=0$, but then the condition $\ker B_+\subset \ker B_{m+1}$ is violated.
A more conceptual explanation is that a quadratic form on $\mathbb{R}$ is either zero, positive definite or negative definite, so that the above three cases cover all possibilities.
\end{proof}
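Although not needed for the argument, the inequality of Case 2 in the preceding proof can be tested numerically in a concrete one-dimensional instance of our own choosing: $H=H_1=\mathbb{R}$, $B_0=B_1=\mathrm{Id}$, $\mathcal Q_+(x)=x^2$, $m=1$, $c_1=-1$ (hence $c_0=2$) and $f_1(x)=e^{-x^2/4}$.

```python
import numpy as np

# Numerical check of the Case 2 inequality in dimension one, with
# Q_+(x) = x^2, m = 1, c_1 = -1 (hence c_0 = 2) and f_1(x) = exp(-x^2/4).
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]

def integrate(g):
    # Left Riemann sum; all integrands decay like Gaussians, so this is
    # accurate far beyond the tolerance needed here.
    return g.sum() * dx

f1 = np.exp(-x**2 / 4.0)
c1 = -1.0
c0 = 1.0 - c1                              # c_0 = 2 >= 1

lhs = integrate(np.exp(-x**2) * f1**c1)    # int e^{-Q_+(x)} f_1(x)^{c_1} dx
rhs = integrate(np.exp(-x**2 / c0))**c0 * integrate(f1)**c1
assert lhs >= rhs                          # inverse Hölder with m+1 = 2 functions
```

Here the left-hand side equals $\sqrt{4\pi/3}$ and the right-hand side $2\pi/\sqrt{4\pi}=\sqrt{\pi}$, so the inequality holds with room to spare.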
\begin{lemma}\label{lem:initialization-2functions-no-kernel}
Assertion (ii) of Theorem \ref{thm:positivity-of-constant-general-case}
is true when $m=2$ and $B_0=B_{m+1}=0$.
\end{lemma}
\begin{proof}
Our goal is to prove the positivity of the Brascamp-Lieb functional for two functions and no kernel. Our hypothesis is that $B_+$ is bijective and that Condition (C) holds. Since $1\le m^+\le m=2$ we can consider two cases:
Case 1: $m=m^+=2$. Proposition \ref{prop:c-i-ge-1} yields $c_1, c_2\ge 1$.
Condition (C) ensures that $H$ is a critical space, hence
$$ \dim H=c_1\dim H_1+c_2\dim H_2\ge \dim H_1+\dim H_2=\dim H,$$
where the inequality uses $c_1, c_2\ge 1$ and the last equality comes from the fact that $B_+=(B_1,B_2):H\to H_1\times H_2$ is a linear isomorphism. The inequality cannot be strict, therefore $c_1=c_2=1$. The inverse Brascamp-Lieb inequality in this case follows from Fubini's theorem, after changing variables:
\begin{eqnarray*}
\int_H f_1(B_1x)f_2(B_2x) \,dx&=& |\det ((B_1,B_2))|^{-1} \int_{H_1\times H_2}
f_1(y)f_2(z) \,dy\,dz\\
&= &
|\det ((B_1,B_2))|^{-1} \int_{H_1} f_1 \int_{H_2} f_2.
\end{eqnarray*}
Case 2: $m^+=1$ and therefore $B_1$ is bijective. Any linear subspace is admissible in this case ($\dim V=\dim B_1V$). Thus, for any subspace $V$, Condition (C) yields
$$ \dim V\ge c_1 \dim B_1V+c_2 \dim B_2 V= c_1 \dim V+ c_2 \dim B_2 V,$$
and after rearranging the terms
$$ (c_1-1) \dim V\le |c_2| \dim B_2V.$$
Choosing $V=\ker B_2$, we get that $(c_1-1) \dim \ker B_2=0$.
Subcase 1: If $\ker B_2=\{0\}$, then $B_2$ is an isomorphism. Since $B_1$ is also
an isomorphism, the relation $\dim H=c_1\dim H_1+c_2\dim H_2$ implies
$c_1+c_2=1$. Recall that $c_1\ge 0\ge c_2$. We can conclude with the inverse H\"older inequality:
\begin{eqnarray*} \int f_1(B_1x)^{c_1} f_2(B_2 x)^{c_2} dx &\ge& \left(\int f_1(B_1x) dx \right)^{c_1}
\left(\int f_2(B_2x) dx \right)^{c_2}\\
& =& \left( |\det B_1|^{-1}\int_{H_1} f_1 \right)^{c_1} \left( |\det B_2|^{-1}\int_{H_2} f_2 \right)^{c_2}.
\end{eqnarray*}
Subcase 2: $c_1=1$. Using also that $\dim H=\dim H_1$, the equality $\dim H=c_1\dim H_1+c_2\dim H_2$ implies that $c_2 \dim H_2=0$, hence $c_2=0$. The inverse Brascamp-Lieb inequality is trivial in this case:
$\int f_1(B_1x) dx= |\det B_1|^{-1} \int f_1$.
\end{proof}
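The change-of-variables identity used in Case 1 of the preceding proof can be illustrated numerically with data of our own choosing: $H=\mathbb{R}^2$, $B_1x=2x_1+x_2$, $B_2x=x_2$, so $\det((B_1,B_2))=2$, and Gaussian $f_1=f_2$.

```python
import numpy as np

# Numerical check of the Fubini identity in Case 1: H = R^2,
# B_1 x = 2 x_1 + x_2, B_2 x = x_2, so det((B_1, B_2)) = 2.
t = np.linspace(-10.0, 10.0, 2001)
dt = t[1] - t[0]
X, Y = np.meshgrid(t, t, indexing="ij")

def f(u):
    # Gaussian profile used for both f_1 and f_2.
    return np.exp(-u**2)

lhs = (f(2*X + Y) * f(Y)).sum() * dt**2            # int_H f_1(B_1 x) f_2(B_2 x) dx
rhs = 0.5 * (f(t).sum() * dt) * (f(t).sum() * dt)  # |det|^{-1} int f_1 int f_2
assert abs(lhs - rhs) < 1e-8 * rhs
```

Both sides equal $\pi/2$ in this instance, up to the (negligible) quadrature error.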
\begin{lemma}\label{lem:initialization-1function}
Assertion (ii) of Theorem \ref{thm:positivity-of-constant-general-case}
is true when $m=0$ and when $m=1$.
\end{lemma}
\begin{proof}
Since $0\le m^+\le m\le 1$, we consider three cases.
Case 0: when $m^+=m=0$, i.e. there are no functions $f_k$. Moreover, $\ker B_+ = H \subseteq \ker B_{m+1}$ forces $B_{m+1}=0$. Condition (C) is empty, and since $\mathcal Q_+$ is positive definite and $B_0$ is bijective, the conclusion holds as $\int_H e^{-\mathcal Q_+(B_0x)}\,dx \in (0,+\infty)$.
Case 1: when $m^+=0$, $c_1\le 0$, $B_+=0$ and by hypothesis $B_0$ is a linear isomorphism.
Moreover the condition $\ker B_+\subset \ker B_{m+1}$ implies that $B_{m+1}=0$.
In this setting, Condition (C) is empty. Indeed the inequality $\dim V\ge c_1 \dim B_1V$ is valid for every subspace since $c_1\le 0$. In addition, $B_0$ being an isomorphism, the only subspace $V$ such that $B_0V=B_0H$ is $H$ (and the quotient dimension condition is empty). Consequently, our task is to show that
$$\inf_{f_1} \frac{\int_H e^{-\mathcal Q_+(B_0x)} f_1(B_1x)^{c_1} dx}{\left(\int_{H_1}f_1\right)^{c_1}}>0.$$
The case $c_1=0$ is obvious since $\mathcal Q_+\circ B_0$ is positive definite. Next, we assume that $c_1<0$. By definition $B_1:H\to H_1$ is surjective. We complete it
to a bijective map $\Phi:H\to H_1\times \tilde{H}$ of the form $\Phi(x)=(B_1(x),\tilde B(x))$. Using the bijective change of variables $x=\Phi^{-1}(y,\tilde y)$, there exists $\alpha\in (0,+\infty)$ such that for any $f_1$,
$$
\int_H e^{-\mathcal Q_+(B_0x)} f_1(B_1x)^{c_1} dx = \alpha \int_{H_1\times \tilde H} e^{-\mathcal Q_+\circ B_0 \circ \Phi^{-1}(y,\tilde y)} f_1(y)^{c_1} dy d\tilde y.
$$
There exist positive definite quadratic forms $\mathcal Q_1$ on $H_1$ and $\tilde{\mathcal Q}$ on $\tilde H$ such that for all $(y,\tilde y)$,
$$\mathcal Q_+\circ B_0 \circ \Phi^{-1}(y,\tilde y) \le \mathcal Q_1(y)+\tilde{\mathcal Q}(\tilde y).$$
Therefore the latter integral is at least
$$ \int_{\tilde H} e^{-\tilde{\mathcal Q}(\tilde y)} d\tilde y \int_{H_1} e^{-\mathcal Q_1(y)} f_1^{c_1}(y) dy
\ge \left(\int_{\tilde H} e^{-\tilde{\mathcal Q}(\tilde y)} d\tilde y\right)
\times \left( \int_{H_1} e^{-\frac{1}{1-c_1}\mathcal Q_1(y)}dy\right)^{1-c_1}
\left( \int_{H_1} f_1\right) ^{c_1},
$$
where the latter inequality is a consequence of the inverse H\"older inequality.
\medskip
Case 2: $m^+=m=1$. In this case $c_1\ge 0$, $(B_0,B_1)$ is bijective and $\ker B_+=\ker B_1\subset \ker B_{m+1}$.
We may assume that $\ker B_1\neq H$; otherwise $H_1=\{0\}$, we can discard the function $f_1$, and we are back to Case 0.
Condition (C) asserts that every admissible subspace $V$ verifies:
\begin{itemize}
\item if $V\subset \ker B_{m+1}$, then $\dim V\ge c_1\dim B_1V$;
\item if $B_0V=B_0H$, then $\dim \sfrac{H}{V}\le c_1 \dim \sfrac{B_1H}{B_1V}$.
\end{itemize}
Observe that the latter ``quotient condition'' boils down to $c_1\ge 1$:
Indeed by hypothesis $\dim H= \dim B_0H+ \dim B_1H$, and $V$ is admissible
if and only if $\dim V=\dim B_0V+\dim B_1V$. Therefore, when $V$ also satisfies
that $B_0V=B_0H$, taking the difference of the latter two dimension equalities yields
$ \dim \sfrac{H}{V}= \dim \sfrac{B_1H}{B_1V}$. Hence the condition
$\dim \sfrac{H}{V}\le c_1 \dim \sfrac{B_1H}{B_1V}$ becomes $(c_1-1)\dim\sfrac{H}{V}\ge 0$, which is either vacuous (when $V=H$) or equivalent to
$c_1\ge 1$, e.g. for $V=\ker B_1$ (see the argument of Proposition \ref{prop:c-i-ge-1}).
Given $(B_0, B_1, B_{m+1}=B_2)$, the set $\mathcal C_1$ of exponents $c_1\ge 0$ satisfying Condition
(C) is clearly a closed convex subset of $[1,+\infty)$. Indeed, it is defined by the inequality $c_1\ge 1$ and conditions of the form $\dim V\ge c_1\dim B_1V$ (and there are finitely many of them since the dimensions are bounded).
Obviously $1\in \mathcal C_1$. For $c_1=1$, the corresponding Brascamp-Lieb inequality holds with a positive constant.
Indeed for every non-negative function $f_1$,
\begin{eqnarray*}
\int_H e^{-\mathcal Q_+(B_0x)+\mathcal Q_-(B_2 x)} f_1(B_1x)\, dx&\ge& \int_H e^{-\mathcal Q_+(B_0x)} f_1(B_1x) \, dx\\
&=& |\det((B_0,B_1))|^{-1} \int_{H_0} e^{-\mathcal Q_+} \int_{H_1} f_1,
\end{eqnarray*}
where we have used the bijection $(B_0,B_1)$ in order to change variables.
Consider the subspace $V=\ker B_0\cap \ker B_{m+1}$. Using our hypothesis $\ker B_1\subset \ker B_{m+1}$, we get that
$$ V\subset (V+\ker B_0)\cap (V+\ker B_1)\subset \ker B_0\cap \ker B_{m+1}=V.$$
Hence, by Lemma~\ref{lem:admissibility-of-subspace-and-quotient}, $V\subset \ker B_{m+1}$ is admissible. Consequently $\dim V=\dim B_0V+\dim B_1V=\dim B_1V$
and any $c_1\in \mathcal C_1$ verifies $\dim V\ge c_1 \dim B_1V$ which can be rewritten as $0\ge (c_1-1)\dim B_1V$.
Subcase 1: if $B_1V\neq\{0\}$ then the latter inequality implies that $c_1\le 1$. We have shown that $\mathcal C_1=\{1\}$
and we have established a non-trivial inverse Brascamp-Lieb inequality for $c_1=1$.
Subcase 2: if $B_1V=\{0\}$. This condition can be rephrased as $V\subset \ker B_1$. From this, we deduce that
$$ V=\ker B_0\cap \ker B_{m+1} \subset \ker B_0\cap \ker B_1=\{0\},$$
where the last equality comes from the injectivity of $(B_0,B_1)$. The latter is actually bijective so that
$$ \ker B_0 \oplus \ker B_1=H.$$
Since $\ker B_1\subset \ker B_{m+1}$, and $V=\ker B_0\cap \ker B_{m+1}=\{0\}$,
we also have
$$ \ker B_0 \oplus \ker B_{m+1}=H.$$
The previous two decompositions of $H$ into direct sums, and the inclusion $\ker B_1\subset \ker B_{m+1}$ imply that $\ker B_1=\ker B_{m+1}$. Because of this equality, the subspace constraint in Condition (C) is empty: indeed, if $V\subset \ker B_{m+1}=\ker B_1$ then $B_1V=0$ and $\dim V\ge c_1 \dim B_1V=0$ is true.
Therefore the set of numbers $c_1$ verifying Condition (C) is $[1,+\infty)$ and
our task is to prove a non-trivial inverse Brascamp-Lieb inequality for all exponents
$c_1\ge 1$. We have already dealt with $c_1=1$, so we may restrict our attention to $c_1>1$.
Observe that the equality $\ker B_1=\ker B_{m+1}$ ensures the existence of a linear isomorphism $\Psi:H_1\to H_{m+1}$ such that $B_{m+1}=\Psi\circ B_1$. Thus for every non-negative function $f_1$:
\begin{eqnarray*}
\lefteqn{\int_H e^{-\mathcal Q_+(B_0x)+\mathcal Q_-(B_{m+1}x)} f_1^{c_1}(B_1 x)\, dx }\\
&=& \int_H e^{-\mathcal Q_+(B_0x)}e^{\mathcal Q_-\circ \Psi (B_1x)} f_1^{c_1}(B_1 x)\, dx \\
&=& |\det((B_0,B_1))|^{-1} \int_{H_0} e^{-\mathcal Q_+(y)} \int_{H_1} e^{\mathcal Q_-\circ \Psi( z)} f_1^{c_1}(z) dz\\
&\ge& |\det((B_0,B_1))|^{-1} \int_{H_0} e^{-\mathcal Q_+(y)}
\left(\int_{H_1} e^{\frac{1}{1-c_1}\mathcal Q_-\circ \Psi( z)} dz\right)^{1-c_1}\left( \int_{H_1} f_1\right)^{c_1},
\end{eqnarray*}
where we have used the change of variables $(y,z)=(B_0x,B_1x)$ and the inverse H\"older inequality. The proof is complete.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:positivity-of-constant-general-case} (ii)]
First of all, we can assume $\dim H_k \ge 1$ for all $k \in \{1, \ldots, m\}$: otherwise one can reduce the problem by discarding all functions $f_k$ for which $\dim H_k = 0$, and Condition (C) and the strong positivity property for the reduced problem remain equivalent to those for the original problem.
Let $D_1 := [1,+\infty)^{m^+} \times (-\infty, 0]^{m-m^+} \subseteq \mathbb{R}^m$.
Our main interest is in the following set:
\begin{align*}
\mathcal C :&= \big\{ c\in (0,+\infty)^{m^+}\times (-\infty,0]^{m-m^+};\; (H,B,c)\, \mathrm{satisfies}\, \mathrm{Condition\, (C)} \big\}\\
&= \big\{ c\in D_1;\; (H,B,c)\, \mathrm{satisfies}\, \mathrm{Condition\, (C)} \big\},
\end{align*}
where the latter equality comes from Proposition~\ref{prop:c-i-ge-1}.
Since Condition (C) means that the vector $c$ verifies several closed linear inequalities, the second expression of $\mathcal C$ proves that it
is closed and convex. More specifically, the triplet $(H,B,c)$ satisfies Condition (C) if and only if $c$ belongs to all sets in the following two families:
\begin{itemize}
\item
For any non-trivial subspace $V \subseteq H$ which induces an admissible split and satisfies $V \subseteq \ker B_{m+1}$ consider
\[
S_V = \Big\{ x \in \mathbb{R}^m \colon \sum_{k=1}^m x_k \dim B_k V \le \dim V \Big\}.
\]
Typically, $S_V$ is a closed half-space of $\mathbb{R}^m$, but it can happen that $S_V$ is the whole $\mathbb{R}^m$.
\item
Similarly, for any proper subspace $V \subsetneq H$ which induces an admissible split and satisfies $B_0 V = B_0 H$ consider
\[
S^{\sfrac{H}{V}} = \Big\{ x \in \mathbb{R}^m \colon \sum_{k=1}^m x_k \dim \sfrac{B_k H}{B_k V} \ge \dim \sfrac{H}{V} \Big\}.
\]
The set $S^{\sfrac{H}{V}}$ is always a half-space of $\mathbb{R}^m$, since at least for some $1 \le k \le m^+$, $B_k V \neq B_k H$ (otherwise for each $k = 0, 1, \ldots, m^+$, $B_k V = B_k H$, i.e. $V + \ker B_k = H$, which by Lemma~\ref{lem:admissibility-of-subspace-and-quotient} would contradict admissibility of the split).
\end{itemize}
Even though there are infinitely many subspaces $V$, the coefficients $\dim B_kV$ and $\dim \sfrac{B_kH}{B_kV}$ take finitely many values. Hence there are finitely many different half-spaces in the above families, and the set $\mathcal C$ is a closed convex polyhedron (which may be unbounded).
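To illustrate the two families above on a toy configuration (chosen by us for illustration only), take $H=\mathbb{R}^2$, $m=m^+=2$, $B_1, B_2$ the two coordinate projections and $B_0=B_{m+1}=0$. The only subspaces inducing admissible splits are $\{0\}$, $\ker B_1$, $\ker B_2$ and $H$, and the corresponding constraints read
\begin{gather*}
S_{\ker B_1}=\{c_2\le 1\},\qquad S_{\ker B_2}=\{c_1\le 1\},\qquad S_H=\{c_1+c_2\le 2\},\\
S^{\sfrac{H}{\ker B_1}}=\{c_1\ge 1\},\qquad S^{\sfrac{H}{\ker B_2}}=\{c_2\ge 1\},\qquad S^{\sfrac{H}{\{0\}}}=\{c_1+c_2\ge 2\},
\end{gather*}
so that $\mathcal C=\{(1,1)\}$, in accordance with Case 1 in the proof of Lemma~\ref{lem:initialization-2functions-no-kernel}.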
Since $\mathcal C$ is closed, it follows from its original definition that its boundary is covered by the union of the affine hyperplanes in the following three families:
\[ \begin{split}
\mathcal{P} &= \{ \partial S_V \colon \{0\} \neq V \subsetneq H \textup{ induces an admissible split and } V \subseteq \ker B_{m+1} \} \\
&\cup
\{ \partial S^{\sfrac{H}{V}} \colon \{0\} \neq V \subsetneq H \textup{ induces an admissible split and } B_0 V = B_0 H \},
\end{split} \]
$\mathcal{P}_0 = \{ \partial S_H \}$ if $B_{m+1}=0$ or $B_0=0$ (note that $\partial S_H = \partial S^H$), and $\mathcal{P}_0 = \emptyset$ otherwise, and
\[
\mathcal{B} = \Big\{ \{ x \in \mathbb{R}^m \colon x_k = 0 \} \colon k = m^+ + 1, \ldots, m \Big\}.
\]
Our aim is to show that $c \in \mathcal{C}$ (i.e. Condition (C)) implies the strong positivity property for $(H, B, c)$.
We proceed by induction on $(\dim H, m)$ with the partial order on $(n, m) \in \mathbb{Z}_+^2$ given by $(n_1, m_1) \preceq (n_2, m_2)$ if and only if $n_1 \le n_2$ and $m_1 \le m_2$. The base cases are $m\in\{0,1\}$ with arbitrary $\dim H$ (treated in Lemma~\ref{lem:initialization-1function}) and $\dim H = 1$ with arbitrary $m \ge 1$ (see Lemma~\ref{lem:initialization-dimension1}).
The induction step, in which $\dim H \ge 2$ and $m \ge 2$, goes as follows. Take any vector $p = (p_1, \ldots, p_m) \in \mathbb{R}^m$ such that $p_1, \ldots, p_{m^+} > 0$ and $p_{m^+ + 1}, \ldots, p_m < 0$ and let $b \in \mathbb{R}$ be a constant such that the affine hyperplane
\[
P = \Big\{ x \in \mathbb{R}^m \colon \sum_{k=1}^m p_k x_k = b \Big\}
\]
contains $c$.
Then $D_1 \cap P$ is a compact subset of $\mathbb{R}^m$ and thus $\mathcal{C}_P := \mathcal{C} \cap P$ is a compact convex set (actually it is a compact convex polytope). In what follows, we restrict our considerations to the hyperplane $P$.
By the convexity property established in Proposition~\ref{prop:interpolation}, it is enough to prove the strong positivity property when the vector of exponents is a vertex of $\mathcal{C}_P$. Therefore, assume that $c$ is a vertex of $\mathcal{C}_P$. Consequently, $c$ belongs to an intersection of $P$ and some $m-1$ distinct affine hyperplanes from the family $\mathcal{B} \cup \mathcal{P} \cup \mathcal{P}_0$. Consider three cases:
\begin{itemize}
\item[Case 1.] Among these $m-1$ affine hyperplanes there is at least one which belongs to $\mathcal{B}$. This means that for some $k \in \{ m^+ + 1, \ldots, m\}$, $c_k = 0$. Then we can discard the function $f_k$ and in this way reduce the number of functions considered from $m$ to $m-1$. Since neither Condition (C) nor the strong positivity property is affected by this reduction (both assertions remain equivalent for the original and the reduced problem), we are done by the induction hypothesis.
\item[Case 2.] Among these $m-1$ affine hyperplanes there is at least one which belongs to $\mathcal{P}$. Hence for some $\{0\} \neq V \subsetneq H$, $V$ is a critical subspace or $\sfrac{H}{V}$ is a critical quotient (this may happen only when $\dim H\ge 2$). Since both $V$ and $\sfrac{H}{V}$ have dimension strictly smaller than $\dim H$, and since by Lemma~\ref{lem:inheritence-of-condition-C} Condition (C) is satisfied for $(V, b, c)$ and $(\sfrac{H}{V}, \beta, c)$, we can apply the induction hypothesis and obtain the strong positivity property for $(V, b, c)$ and $(\sfrac{H}{V}, \beta, c)$. Now the strong positivity property for $(H, B, c)$ follows from Lemma~\ref{lem:tensorization}.
\item[Case 3.] Neither Case 1 nor Case 2 holds, i.e. all $m-1$ distinct affine hyperplanes are in $\mathcal{P}_0$. This is possible only when $m = 2$ and $\mathcal{P}_0 = \{ \partial S_H \}$, i.e. $B_0$ or $B_{m+1}$ is trivial. Then $c \in \partial S_H \cap P$. If $B_{m+1}$ is trivial then $H$ is a critical subspace and Proposition~\ref{prop:c-i-ge-1} implies that $B_0$ is also trivial and we can conclude using Lemma~\ref{lem:initialization-2functions-no-kernel}. If $B_0$ is trivial then $\sfrac{H}{\{0\}}$ is a critical quotient, i.e.
\begin{equation}\label{eq:criticality-of-quotient-H}
\dim H = \sum_{k=1}^m c_k \dim B_k H.
\end{equation}
Condition (C) tells us that for every subspace $V \subseteq H$ which induces an admissible split, the quotient $\sfrac{H}{V}$ is subcritical. Subtracting the corresponding inequality from~\eqref{eq:criticality-of-quotient-H} gives that $V$ satisfies
\[
\dim V \ge \sum_{k=1}^m c_k \dim B_k V,
\]
regardless of whether $V \subseteq \ker B_{m+1}$ or not. Therefore Condition (C) for our problem implies Condition (C) for the problem with the same vector of exponents $c$, the same maps $B_0, \ldots, B_m$, and with $B_{m+1} = 0$. Lemma~\ref{lem:initialization-2functions-no-kernel} ensures the strong positivity property for the modified problem, which in turn clearly implies the strong positivity property for the original problem.
\end{itemize}
\end{proof}
\begin{document}
\begin{center}
{\Large{\bf Singular subalgebroids}}
{\large{\sc by Marco Zambon}}\\
{\sc with an appendix by Iakovos Androulidakis}
\end{center}
{\footnotesize
\vskip 2pt KU Leuven
\vskip-4pt Department of Mathematics
\vskip-4pt Celestijnenlaan 200B box 2400
\vskip-4pt BE-3001 Leuven, Belgium.
\vskip-4pt e-mail: \texttt{marco.zambon@kuleuven.be}
}
\bigskip
\everymath={\displaystyle}
\date{\today}
\begin{abstract}\noindent
We introduce singular subalgebroids of an integrable Lie algebroid,
{extending the notion of Lie subalgebroid by dropping the constant rank requirement}.
We lay the bases of a Lie theory for singular subalgebroids: we construct the associated holonomy groupoids, {adapting the procedure of Androulidakis-Skandalis for singular foliations}, in a way that keeps track of the choice of Lie groupoid integrating the ambient Lie algebroid. The {holonomy groupoids} are topological groupoids,
and are
suitable for noncommutative geometry as they allow for the construction of the associated convolution algebras.
{Further, we carry out the construction for morphisms in a functorial way.}
\end{abstract}
\setcounter{tocdepth}{2}
\tableofcontents
\section*{Introduction}
\addcontentsline{toc}{section}{Introduction}
{Lie algebroids arise in differential geometry, mathematical physics and control theory. The standard viewpoint is to declare their sub-objects to be (wide) Lie subalgebroids, i.e. involutive constant-rank subbundles. However, there is a multitude of interesting
``singular'' examples that violate the constant-rank requirement.
This leads us to introduce a new class of subalgebroids, considerably more singular than
the usual Lie subalgebroids: we call them \emph{singular subalgebroids}.
}
{Our aim is to build a Lie theory for singular subalgebroids. In this paper we}
construct a topological groupoid canonically associated to them, called \emph{holonomy groupoid}, which depends on a choice of integration $G$ of the ambient Lie algebroid.
The construction
parallels the one of \cite{AndrSk}, and here too the holonomy groupoid is a topological groupoid.
This construction encompasses the integration of wide Lie subalgebroids by Moerdijk-Mr{\v{c}}un \cite{MMRC} and the holonomy groupoids of singular foliations of Androulidakis-Skandalis \cite{AndrSk}. A novel feature is the presence of many interesting morphisms. We prove a version of Lie's second theorem in this context (integration of morphisms),
showing that our holonomy groupoid construction is functorial.
{Building on the present work,} in a follow-up paper with Androulidakis \cite{AZ4} we will provide a version of Lie's third theorem, making precise how one can view the holonomy groupoid as an ``integration'' of the singular subalgebroid. {This will require us to work in the realm of diffeological groupoids.}
In that paper we will also show that although the holonomy groupoid is not smooth, it is still possible to do differential geometry on it.
{Finally here, using the holonomy groupoid we attach a $C^{\ast}$-algebra to a singular subalgebroid, paving the way to the development of pseudodifferential calculus, index theory, and other noncommutative geometry constructions for such structures.}
{Recently Laurent-Gengoux-Lavau-Strobl \cite{LavauThesis}\cite{LLG} showed that
singular foliations are tightly connected with higher algebraic structures:
under reasonable assumptions, a singular foliation admits a canonical $L_{\infty}$-algebroid which ``resolves'' it and which provides fine invariants. We expect their construction to extend to singular subalgebroids.}
\subsection*{Singular subalgebroids}
Fix a Lie algebroid $A$ over a manifold $M$.
A {\bf singular subalgebroid}
is a $C^{\infty}(M)$-submodule $\mathcal{B}$ of $\Gamma_c (A)$ (the module of compactly supported sections of $A$), which is locally finitely generated and closed w.r.t. the Lie bracket.
Let us display two obvious classes of singular subalgebroids, whose intersection consists exactly of the regular foliations.
\begin{ex*}[Wide Lie subalgebroids] Let $B$ be a wide Lie subalgebroid of $A$, {{\it i.e.}\; a Lie subalgebroid supported on the whole of $M$.} Then
$\Gamma_c(B)$ is a singular subalgebroid.
\end{ex*}
\begin{ex*}[Singular foliations]
The singular subalgebroids of $A=TM$ are exactly the singular foliations on $M$. {Here singular foliation is meant in the sense of \cite{AndrSk}, a notion inspired by the work of
Stefan and Sussman in the 1970's.}
\end{ex*}
There is an interesting class that strictly contains the first one: the singular subalgebroids $\mathcal{B}$ which are \emph{projective}, i.e. such that there exists a vector bundle over $M$ whose module of compactly supported sections is $\mathcal{B}$. Notice that such a vector bundle is then a Lie algebroid (but not necessarily a Lie subalgebroid of $A$).
In turn, projective subalgebroids are contained in a larger class, that of singular subalgebroids which are images of Lie groupoid morphisms covering the identity.
Other examples of singular subalgebroids will be given in \S \ref{subsec:ex}.
{Singular subalgebroids can also be viewed as a nice class of Lie-Rinehart algebras \cite{Rinehart}, more general than Lie algebroids.}
\subsection*{Main results}
For singular foliations $\mathcal{F}$ on $M$, which as we saw are exactly the singular subalgebroids of $TM$, the holonomy groupoid was constructed by Androulidakis-Skandalis \cite{AndrSk}.
There the crucial idea was that of a bisubmersion.
Bisubmersions are manifolds $U$ endowed with two submersive maps to $M$, and are defined locally from the data provided by the singular foliation. Their dimension is variable, and the holonomy groupoid $H(\mathcal{F})$ is a quotient of a disjoint union of bisubmersions.
For singular {subalgebroids $\mathcal{B}$} of an integrable Lie algebroid $A$, after choosing an integrating Lie groupoid $G$, {taking a new point of view} we extend the notion of \cite{AndrSk} by defining
bisubmersions to be smooth maps $U\to G$ satisfying certain conditions.
With this notion we can construct the {\bf holonomy groupoid} $H^G(\mathcal{B})$ in a way analogous to \cite{AndrSk}.
A feature of the construction we give here is that it keeps track of the choice of Lie groupoid $G$ integrating $A$. More precisely, the holonomy groupoid $H^G(\mathcal{B})$ comes together with a canonical morphism to $G$.
For instance the groupoid $H(\mathcal{F})$ given in \cite{AndrSk}, in the current context, is the holonomy groupoid associated to $\mathcal{F}$ (viewed as a singular subalgebroid) when we choose $G$ to be the pair groupoid $M \times M$. The canonical morphism to
$G$ is just the target-source map.
The main result of the paper is Thm. \ref{thm:holgroidconstr}, which can be paraphrased in a simplified way as follows:
\begin{thmx}\label{thmx:a}
{Let $\mathcal{B}$ be a singular subalgebroid of an integrable Lie algebroid $A$, and $\cG$ a Lie groupoid integrating $A$.}
There exists a map $$\Phi\colon H^{\cG}(\mathcal{B}) \to \cG$$ where
\begin{itemize}
\item[1)] $H^{\cG}(\mathcal{B})$ is a topological groupoid which is ``nice''
and ``integrates $\mathcal{B}$'',
\item[2)] $\Phi$ is a topological groupoid morphism ``integrating'' the inclusion $\iota\colon \mathcal{B} \hookrightarrow \Gamma_c(A)$.
\end{itemize}
\end{thmx}
In joint work with Androulidakis \cite{AZ4}
\begin{itemize}
\item we will show that $H^{\cG}(\mathcal{B})$ has some smoothness properties: first, its restrictions to the leaves of $\mathcal{B}$ are Lie groupoids, and second, it has a structure of diffeological groupoid that allows one to recover $\mathcal{B}$. This is what we mean by
``nice''
and ``integrates $\mathcal{B}$'' in 1) above.
\item we will consider certain diffeological groupoids endowed with maps to Lie groupoids, and
show that a morphism of such objects always induces a morphism of singular subalgebroids. Together with Ex. \ref{ex:Phi}, this explains ``integrating'' in 2) above. \end{itemize}
{Further, for singular subalgebroids which are images of Lie groupoid morphisms (this includes singular foliations), we give a minimality property for $H^{\cG}(\mathcal{B})$ in Prop. \ref{prop:lift}.
This minimality property was postulated first by Moerdijk-Mr{\v{c}}un for wide Lie subalgebroids and is satisfied by the holonomy groupoid of a singular foliation.
At present we are not able to extend this property to arbitrary singular subalgebroids.}
{
The construction of the holonomy groupoid $H^{\cG}(\mathcal{B})$ allows one to transfer almost verbatim the construction of the convolution $*$-algebra of \cite{AndrSk}, see Appendix \ref{section:convsing} by Iakovos Androulidakis. Recall \cite{AndrSk,IakAnal,PseudodiffCalcSingFol} that
in the case of singular foliations
the $K$-theory of the corresponding $C^{\ast}$-algebra is the recipient of the analytic index for longitudinal elliptic pseudodifferential operators.}
{A feature of singular subalgebroids compared to singular foliations is that morphisms abound.
In Thm. \ref{thm:morph} we show that the holonomy groupoid construction extends to morphisms covering the identity. (For more general morphisms we refer to Appendix \ref{sec:appmorsub}.) This provides new statements even for singular foliations.
\begin{thmx}\label{thmx:B}
Let $F\colon \cG_1\to \cG_2$ be a morphism of Lie groupoids covering $Id_M$. Let $\mathcal{B}_i$ be a singular subalgebroid of $Lie(\cG_i)$ for $i=1,2$,
such that
$F_*(\mathcal{B}_1)\subset \mathcal{B}_2$.
Then there is a canonical morphism of topological groupoids $$\Xi \colon H^{\cG_1}(\mathcal{B}_1)\to H^{\cG_2}(\mathcal{B}_2)$$ covering $Id_M$ and
making the following diagram commute:
\begin{equation*}
\xymatrix{
H^{\cG_1}( {\mathcal{B}}_1) \ar[d]^{\Phi_1} \ar@{-->}[r]^{\Xi} &H^{\cG_2}(\mathcal{B}_2) \ar[d]^{\Phi_2} \\
\cG_1 \ar[r]^{F} & \cG_2 }
\end{equation*}
\end{thmx}
}
\subsection*{The Moerdijk-Mr{\v{c}}un integration of wide Lie subalgebroids}
Theorem \ref{thmx:a} extends and unifies previous results by Moerdijk-Mr{\v{c}}un \cite{MMRC} and Androulidakis-Skandalis \cite{AndrSk}
for the two obvious classes of singular subalgebroids displayed above (wide Lie subalgebroids and singular foliations). We now elaborate on the first class, as
it is instructive to compare Theorem \ref{thmx:a} with the results of Moerdijk-Mr{\v{c}}un in \cite{MMRC}.
Let $A\to M$ be a Lie algebroid, and fix a Lie groupoid $\cG$ integrating $A$. Let $B\to M$ be a wide Lie subalgebroid of $A$. Moerdijk-Mr{\v{c}}un show:
\begin{theorem*}{\bf (\cite[Thm. 2.3]{MMRC})}
There exists a unique map $$\Phi\colon H_{min}\to \cG$$ where
\begin{itemize}
\item[1)] $H_{min}$ is a Lie groupoid integrating $B$,
\item[2)] $\Phi$ is a Lie groupoid morphism integrating the inclusion $\iota\colon B\hookrightarrow A$,
\item[3)] minimality property: for any Lie groupoid morphism
$\tilde{H}\to \cG$ integrating\footnote{So
$\tilde{H}$ is necessarily a Lie groupoid integrating $B$, and
the morphism is an immersion.} $\iota$, there exists a surjective Lie groupoid morphism $\tilde{H}\to H_{min}$ integrating $Id_B$ and making this diagram commute:
\begin{equation*}
\xymatrix{
\tilde{H} \ar[rd]_{} \ar@{-->}[rr] & & H_{min} \ar[ld]^{\Phi} \\
&\cG & }
\end{equation*}
\end{itemize}
\end{theorem*}
Moerdijk-Mr{\v{c}}un refer to $H_{min}$ as the
\emph{minimal integral of $B$ over $\cG$}. By 3) above, $H_{min}$ is unique up to isomorphism.
To put this result into context, recall that the wide Lie algebroid $B$ is integrable (because $A$ is), and that the inclusion $\iota$ integrates to a morphism $H_{max}\to \cG$, where $H_{max}$ is the source simply connected Lie groupoid integrating $B$. All other Lie groupoids integrating $B$ are quotients of $H_{max}$. In general they do not admit a morphism to $\cG$ integrating $\iota$, {and $H_{min}$ is the ``smallest'' integration admitting such a morphism.} {We want to stress that the result of Moerdijk-Mr{\v{c}}un, just as our Theorem \ref{thmx:a}, does \emph{not} contain as a special case the integration of Lie algebroids (indeed, an integration $G$ of the Lie algebroid $A$ is part of the hypotheses).}
As an example of the above theorem, take the case $A=TM$. Then a wide Lie subalgebroid is just an involutive distribution, which by the Frobenius theorem corresponds to a (regular) foliation on $M$. The Lie groupoids $H_{max}$ and $H_{min}$ are nothing else than the monodromy and holonomy groupoids of this foliation.
Notice that the above theorem of Moerdijk-Mr{\v{c}}un starts with a Lie subalgebroid $B$ of $A$ (rather than with an abstract Lie algebroid $B$), and that it produces a Lie groupoid morphism to $\cG$ integrating the inclusion $\iota\colon B\hookrightarrow A$ (rather than only a Lie groupoid integrating $B$). In other words, in the above theorem $H_{min}$ is {naturally} endowed with a \emph{morphism} {$\Phi \colon H_{min} \to \cG$}. {Notice that it would not be wise to disregard this morphism and consider only its image $\Phi(H_{min})$. First, the latter is a set-theoretic subgroupoid of $\cG$, which usually fails to be smooth. (When $B$ is an involutive distribution on $M$, $\Phi(H_{min})$ is the graph of the equivalence relation given by the associated regular foliation, and its failure to be smooth was one of the reasons to introduce the holonomy groupoid in the first place, see the remarks in \cite{Phillipsholonomicimperative}). Second, the morphism of Lie groupoids $\Phi$
is usually not injective and hence contains more information than its image.}
\subsection*{{Androulidakis-Skandalis' holonomy groupoids of singular foliations}}
We highlight the aspects of this work that represent the main novelties in comparison with the work of Androulidakis-Skandalis \cite{AndrSk}. Let $G$ be a Lie groupoid and $\mathcal{B}$ be a singular subalgebroid of $A:=Lie(G)$.
\begin{itemize}
\item Our definition of bisubmersion for $\mathcal{B}$ (Def. \ref{subsec:defbisub})
is not a straightforward generalization of the one of \cite{AndrSk}.
Ours is given by a smooth map to $G$, which typically fails to be a submersion. In the case that $\mathcal{B}$ is a singular foliation, our definition does not recover on the nose the notion of bisubmersion from \cite{AndrSk}, but it corresponds bijectively to it once one assembles the two submersions to $M$ into a single map to the pair groupoid $M\times M$
(Prop. \ref{prop:equivbi}).
Just as in \cite{AndrSk}, bisubmersions for $\mathcal{B}$ have the following features:
their bisections induce automorphisms of the singular subalgebroid $\mathcal{B}$ (see Remark \ref{rem:rephrasea}), and they
allow for the construction of the holonomy groupoid by providing its ``building blocks'' (Thm. \ref{thm:holgroidconstr}).
\item
The holonomy groupoid depends on the choice of Lie groupoid $G$ integrating $A$.
In \S \ref{section:vary} we display how the holonomy groupoid changes if we replace $G$ by another Lie groupoid of which $G$ is a quotient.
For singular foliations, {\it i.e.}\; when $A=TM$, there is a canonical choice for $G$, namely the pair groupoid $M\times M$. With this choice we recover the holonomy groupoid of a singular foliation of \cite{AndrSk}.
\item Morphisms between Lie algebroids abound, even in the special case of morphisms covering the identity on the base (take for instance the anchor map). We show
that when such a morphism maps a singular subalgebroid into another, there is a canonically induced morphism between the corresponding holonomy groupoids (Theorem
\ref{thmx:B}), and that this assignment is functorial. When one restricts to singular foliations,
there are not as many morphisms, and the natural ones are given by smooth maps between manifolds with singular foliations. Their effect at the level of holonomy groupoids is
not considered in \cite{AndrSk} and plays an important role in \cite{GZQuotiens}.\end{itemize}
\noindent{\bf Conventions:} All Lie groupoids are {assumed to be source connected}, not necessarily Hausdorff, but with Hausdorff source-fibers. Given a Lie groupoid $\cG\rightrightarrows M$, we denote by $\bt$ and $\bs$ its target and source maps, and by $i\colon \cG\to \cG$ the inversion map. We denote by $1_x\in \cG$ the identity
element corresponding to a point $x\in M$, and by $1_M\subset \cG$ the submanifold of identity elements.
Two elements $g,h\in \cG$ are composable if $\bs(g)=\bt(h)$. We identify the Lie algebroid of $\cG$, which we sometimes denote by $Lie(\cG)$, with $\ker (d\bs)|_M$.
\noindent{\bf Acknowledgements:}
{M.Z. thanks Iakovos Androulidakis -- who collaborated in this project in its early stages and is the author of Appendix \ref{section:convsing} -- for fruitful discussions and constructive advice, and Ivan Struchiner for inspiring comments on this work.}
This work was partially supported
by grants
MTM2011-22612 and ICMAT Severo Ochoa SEV-2011-0087 (Spain), Pesquisador Visitante Especial grant 88881.030367/2013-01 (CAPES/Brazil), IAP Dygest,
the long term structural funding -- Methusalem grant of the Flemish Government,
the FWO under EOS project G0H4518N, the FWO research project G083118N (Belgium).
\section{Singular subalgebroids}\label{section:SingSubalgd}
In this section we introduce the notion of singular subalgebroid, give several examples,
{and in \S \ref{sec:rivf} make some observations for later use.} Throughout this section we are going to consider a Lie algebroid $A\to M$ with anchor $\rho\colon A \to TM$ (see for instance \cite{CW}\cite{MK2}\cite{LecturesIntegrabilty}).
\subsection{Definition of singular subalgebroid}
\label{section:singdef}
We define the main object of interest of this paper:
\begin{definition}\label{dfn:singsubalg}
A {\bf singular subalgebroid} of $A$ is an involutive, locally finitely generated $C^{\infty}(M)$-submodule $\mathcal{B}$ of ${\Gamma_c(A)}$.
\end{definition}
The notion of singular subalgebroid is obtained from the notion of wide Lie subalgebroid, by dropping the requirement of being a (constant rank) subbundle of $A$. This is achieved by focusing on the $C^{\infty}(M)$-module ${\Gamma_c(A)}$ of compactly supported sections of $A$, rather than on the Lie algebroid $A$ itself.
{For any $C^{\infty}(M)$-submodule $\mathcal{B}$ of $\Gamma_c(A)$, we define
its {\bf global hull} \cite[\S 1.1]{AndrSk}\cite{AZ4} to be $$\widehat{\mathcal{B}}:=\{Z\in \Gamma(A): fZ\in \mathcal{B} \text{ for all } f\in C_c^{\infty}(M)\}.$$ It is a $C^{\infty}(M)$-submodule of $\Gamma(A)$ containing $\mathcal{B}$.}
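As a sanity check on the definition, the following small computation (our illustration, with $A=T\mathbb{R}$ and using Hadamard's lemma) shows that the global hull is in general strictly larger than $\mathcal{B}$:

```latex
% Illustration (not from the text): A = TR, with B generated by x d/dx.
Let $\mathcal{B}=C^{\infty}_c(\mathbb{R})\,x\partial_x\subset\Gamma_c(T\mathbb{R})$.
A vector field $g\partial_x$ belongs to $\widehat{\mathcal{B}}$ if and only if
$fg\in C^{\infty}_c(\mathbb{R})\,x$ for every $f\in C^{\infty}_c(\mathbb{R})$;
taking $f\equiv 1$ near $0$ forces $g(0)=0$, so $g=xu$ with $u$ smooth by
Hadamard's lemma. Hence
\[
\widehat{\mathcal{B}}=C^{\infty}(\mathbb{R})\,x\partial_x,
\]
which contains the non-compactly supported section
$x\partial_x\in\widehat{\mathcal{B}}\setminus\mathcal{B}$.
```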
{
A subset $\mathcal{G}$ of $\widehat{\mathcal{B}}$ is said to be a {\bf generating set for $\mathcal{B}$}
if
$\mathcal{B}=Span_{C^{\infty}_{c}(M)}(\mathcal{G})$, where the latter is the set of finite $C^{\infty}_{c}(M)$-linear combinations of elements of $\mathcal{G}$.}
{Now we can explain the meaning of $\mathcal{B}$ being ``locally finitely generated'' in Def. \ref{dfn:singsubalg}: it means that for every point of $M$ there is an open neighbourhood $i\colon U \hookrightarrow M$ such that the submodule
$$i^*\mathcal{B}:=\{Z|_U:Z\in \mathcal{B} \text{ has support in }U\}$$
of $\Gamma_c(A|_U)$ admits a finite generating set. In other words, there are finitely many
$Y_1,\dots,Y_n\in\widehat{i^*\mathcal{B}}$ such that\footnote{Explicitly, $\widehat{i^*\mathcal{B}}=\{{Y\in \Gamma(A|_U): fY\in i^*\mathcal{B} \text{ for all } f\in C^{\infty}_c(U)}\}$.}
every element of $i^*\mathcal{B}$ is a $C^{\infty}_c(U)$-linear combination of the $Y_j$'s.}
\subsection{{Motivating examples}}\label{subsec:motivex}
{This notion of singular subalgebroid is motivated by the following two special cases (whose intersection consists exactly of the regular foliations).
\begin{ex}[{\bf Singular foliations}]\label{ex:singfol} Recall from \cite{AndrSk} that a {singular foliation} on a manifold $M$ is an involutive, locally finitely generated submodule of the $C^{\infty}(M)$-module of vector fields {with compact support} $\mathfrak{X}_c(M)$. The singular subalgebroids of $A=TM$ are exactly the singular foliations on $M$.
\end{ex}
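The singular-foliation example above can be checked symbolically. The following snippet (our illustration, using sympy) verifies that the module on $\mathbb{R}^2$ spanned over $C^{\infty}_c(\mathbb{R}^2)$ by the Euler and rotation vector fields is involutive; indeed the two generators commute:

```python
import sympy as sp

x, y = sp.symbols("x y")

def lie_bracket(X, Y, coords):
    """[X, Y]^i = sum_j (X^j dY^i/dx_j - Y^j dX^i/dx_j) for vector fields
    given as lists of components."""
    return [
        sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
            for j in range(len(coords)))
        for i in range(len(coords))
    ]

euler = [x, y]        # x d/dx + y d/dy
rotation = [-y, x]    # -y d/dx + x d/dy

bracket = [sp.simplify(c) for c in lie_bracket(euler, rotation, [x, y])]
# bracket == [0, 0]: the generators commute, so the module they span
# over the compactly supported functions is involutive.
```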
\begin{ex}[{\bf Wide Lie subalgebroids}]\label{ex:wideliesub} Recall from \cite[Def. 3.3.21]{MK2} that a wide Lie subalgebroid of $A$ is a vector subbundle $B\to M$, whose sections are closed with respect to the Lie bracket. In this case, $\Gamma_c(B)$ is a singular subalgebroid.
\end{ex}
{{Ex. \ref{ex:wideliesub} belongs} to the larger class of singular subalgebroids arising from Lie groupoid morphisms, which we introduce in \S \ref{subsec:morph}. In this paper we will focus mainly on these three classes, which at present are the only ones for which we are able to describe the holonomy groupoid in an explicit way.}
\begin{center}
\hspace{2cm}
\includegraphics[scale=.28]{pichsing3arising.pdf}
\end{center}
\subsection{Further examples}\label{subsec:ex}
Let us now display four geometric contexts in which singular subalgebroids arise.
{Both \S \ref{subsec:morph} and \S \ref{subsec:Liesubalgoids} contain as a special case wide Lie subalgebroids (Example \ref{ex:wideliesub} above).}
\subsubsection{Arising from Lie algebroid morphisms} \label{subsec:morph}
Let {$\psi \colon E\to A$} be a morphism of Lie algebroids covering the identity on the base manifold. Then the image of the induced map of compactly supported sections,
$$\mathcal{B}:=\psi(\Gamma_c(E)),$$
is a singular subalgebroid of $A$. We say that $\mathcal{B}$
{\bf arises from the Lie {algebroid} morphism} $\psi$. }
(The above can be vastly generalized, by replacing $\Gamma_c(E)$ with any singular subalgebroid of $E$, and by allowing $\psi$ to cover a diffeomorphism of the base or even a surjective submersion (see Lemma \ref{lem:FB2}).)
\begin{remark}
Given two Lie algebroids $A_1\to M_1$ and $A_2\to M_2$, there is a notion of \emph{comorphism}\footnote{That is, a pair $(\Phi,
f)$ where $f\colon M_1\to M_2$ is any differentiable map and
$\Phi\colon f^{!}A_2\to A_1$ is a vector bundle map over $Id_{M_1}$, where $f^{!}A_2$ denotes the pullback of the vector bundle $A_2$ via $f$, such that the induced map of sections $\Gamma(A_2)\to \Gamma(A_1)$ preserves the Lie bracket and the anchor maps satisfy $f_*\circ \rho_{A_1}\circ \Phi=\rho_{A_2}$.} from $A_1$ to $A_2$
(see \cite[Def. 4.3.16]{MK2}).
{It induces a linear map} $\Gamma(A_2)\to \Gamma(A_1)$.
{The $C^{\infty}_c(M_1)$-module generated by its image is a singular subalgebroid of $A_1$}. Example \ref{ex:Poissonmap} below is of this kind, since a Poisson map between Poisson manifolds $M_1\to M_2$ induces a comorphism from $T^*M_1$ to $T^*M_2$. \end{remark}
\begin{exs}\label{ex:proj}
\begin{enumerate}
\item {A singular subalgebroid $\mathcal{B}$ of $A$ is called {\bf projective} if there exists a vector bundle $B$ over $M$ such that $\Gamma_c(B)\cong \mathcal{B}$ as $C^{\infty}(M)$-modules. In that case, there is \cite{AZ4} a Lie algebroid structure on $B$ and an almost injective Lie algebroid morphism $\tau \colon B\to A$ inducing the isomorphism $\Gamma_c(B)\cong \mathcal{B}$, and these data are unique. In particular, $\mathcal{B}$ arises from the Lie algebroid morphism $\tau$.
}
{A special case occurs when $\mathcal{B}$ is the space of compactly supported sections of a wide Lie subalgebroid $B$ of $A$.}
{In that case $\tau \colon B\to A$ is the inclusion.}
\item Given any Lie algebroid $A$, the anchor map $\rho\colon A\to TM$ is a Lie algebroid morphism. In this case $\mathcal{B}:=\rho(\Gamma_c(A))$ is the singular foliation underlying $A$. {Further, any Lie algebroid morphism (covering the identity) giving rise to $\mathcal{B}$ must be the anchor map of a Lie algebroid. }
\item Let $A$ be a Lie algebroid. A \emph{Nijenhuis operator} is an endomorphism of vector bundles $N \colon A\to A$ over $Id_M$ whose Nijenhuis torsion $T_N(X, Y) := [NX, NY] - N[X, Y]_N$ vanishes. Here $[X,Y]_N := [NX, Y] + [X, NY] - N[X, Y]$.
In this case $N$ is a Lie algebroid morphism from $(A,[\cdot,\cdot]_N)$ to $(A,[\cdot,\cdot])$, so $\mathcal{B}:=N(\Gamma_c(A))$ is a singular subalgebroid of the original Lie algebroid $(A,[\cdot,\cdot])$.
\end{enumerate}
\end{exs}
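As a toy check of item 3 (our illustration, using sympy): on the line, with $A=T\mathbb{R}$, the bundle endomorphism $N$ given by multiplication by $x$ has vanishing Nijenhuis torsion for arbitrary coefficient functions $f$ and $g$:

```python
import sympy as sp

x = sp.symbols("x")
f, g = sp.Function("f")(x), sp.Function("g")(x)  # arbitrary smooth coefficients

def bracket(a, b):
    # Lie bracket of a d/dx and b d/dx on the real line: (a b' - b a') d/dx
    return sp.expand(a * sp.diff(b, x) - b * sp.diff(a, x))

def N(a):
    # hypothetical Nijenhuis operator: multiplication by x
    return x * a

lhs = bracket(N(f), N(g))                                       # [NX, NY]
bracket_N = bracket(N(f), g) + bracket(f, N(g)) - N(bracket(f, g))
torsion = sp.simplify(sp.expand(lhs - N(bracket_N)))            # T_N(X, Y)
# torsion == 0: this N is a Nijenhuis operator
```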
{For later use we make the following definition.
\begin{definition}\label{def:arises}
Let $\mathcal{B}$ be a singular subalgebroid of an integrable Lie algebroid $A$ over $M$.
We say that $\mathcal{B}$ {\bf arises from a Lie groupoid morphism} (covering the identity) if there is a Lie groupoid morphism $\Psi\colon \cK\to\cG$ over $Id_M$, where $\cG$ is any Lie groupoid integrating $A$, such that $$\mathcal{B}=\Psi_*(\Gamma_c(Lie(\cK))).$$
\end{definition}
Examples include compactly supported sections of wide Lie subalgebroids of $A$, since the latter are integrable.
Clearly {Def. \ref{def:arises}} implies that $\mathcal{B}$ arises from a Lie algebroid morphism, namely $\Psi_*\colon Lie(\cK)\to A$. Conversely, if a singular subalgebroid arises from a Lie algebroid morphism $\psi \colon E\to A$ with $E$ an integrable Lie algebroid, then this singular subalgebroid arises from a Lie groupoid morphism (namely,
the Lie groupoid morphism $\Psi\colon \cK\to\cG$ integrating $\psi$, where $\cK$ is the source simply connected Lie groupoid integrating $E$).
}
\subsubsection{Globally finitely generated singular subalgebroids}\label{subsec:generated1}
Let $\boldsymbol{\alpha}_1,\ldots,\boldsymbol{\alpha}_n \in \Gamma(A)$ be sections satisfying the following involutivity condition: for every $1\leq i,j \leq n$ there exist smooth functions $f_{ij}^1,\ldots,f_{ij}^n \in C^{\infty}(M)$ such that $[\boldsymbol{\alpha}_i,\boldsymbol{\alpha}_j]=\sum_{k=1}^n f_{ij}^k \boldsymbol{\alpha}_k$. Then the $C^{\infty}(M)$-submodule of $\Gamma_c(A)$ $$\mathcal{B}:=C^{\infty}_c(M)\boldsymbol{\alpha}_1 + \ldots + C^{\infty}_c(M)\boldsymbol{\alpha}_n$$ is a singular subalgebroid.
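A minimal symbolic illustration of the involutivity condition with a non-constant structure function (ours, using sympy): on $M=\mathbb{R}$ with $A=TM$, take $\boldsymbol{\alpha}_1=\partial_x$ and $\boldsymbol{\alpha}_2=x^2\partial_x$, so that $[\boldsymbol{\alpha}_1,\boldsymbol{\alpha}_2]=2x\,\boldsymbol{\alpha}_1$, i.e. $f_{12}^1=2x$ and $f_{12}^2=0$:

```python
import sympy as sp

x = sp.symbols("x")

def bracket_1d(a, b):
    # [a d/dx, b d/dx] = (a b' - b a') d/dx on the real line
    return sp.expand(a * sp.diff(b, x) - b * sp.diff(a, x))

a1, a2 = sp.Integer(1), x**2      # generators d/dx and x^2 d/dx
b = bracket_1d(a1, a2)            # = 2x, i.e. 2x * a1 + 0 * a2
# The bracket of the generators lies back in the module they span,
# with (non-constant) structure functions f_12^1 = 2x and f_12^2 = 0.
```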
\begin{exs}\label{ex:momapsingsub}
\begin{enumerate}
\item Given a single section $\boldsymbol{\alpha} \in \Gamma(A)$, the $C^{\infty}(M)$-module $\mathcal{B} = C^{\infty}_c(M)\boldsymbol{\alpha}$ is a singular subalgebroid.
\item {Let $\ensuremath{\mathfrak{g}}$ be a Lie algebra and $\varphi \colon \ensuremath{\mathfrak{g}} \to \Gamma(A)$ a Lie algebra morphism. Defining
$\mathcal{B}$ as the $C^{\infty}_c(M)$-{span} of $\{\varphi(v):v\in \ensuremath{\mathfrak{g}}\}$ we obtain a singular subalgebroid of the kind above.
As $\boldsymbol{\alpha}_1,\ldots,\boldsymbol{\alpha}_n$ we can take the image of a basis of $\ensuremath{\mathfrak{g}}$; notice that in this case the functions $f_{ij}^k$ are constant. This example also falls\footnote{Indeed
$\mathcal{B}$ is the image of the Lie algebroid morphism $\ensuremath{\mathfrak{g}}\times M\to A, (v,x)\mapsto (\varphi(v))|_x$ over $Id_M$, where
$\ensuremath{\mathfrak{g}}\times M$ is the transformation Lie algebroid of the infinitesimal action $\ensuremath{\mathfrak{g}}\to \mathfrak{X}(M), v\mapsto \rho(\varphi(v))$ induced by $\varphi$ and the anchor of $A$.} into the class considered in \S\ref{subsec:morph}.}
{A concrete example is the following.} Let $(M,\omega)$ be a symplectic manifold, $\ensuremath{\mathfrak{g}}$ a Lie algebra, and $J \colon M\to \ensuremath{\mathfrak{g}}^*$ the moment map of a Hamiltonian action on $M$. Then the comoment map (pullback of functions)
$J^*\colon \ensuremath{\mathfrak{g}} \to C^{\infty}(M)$ delivers a Lie algebra morphism into the central extension of $TM$ by the trivial vector bundle $M \times \ensuremath{\mathbb R}$ twisted by $\omega$\footnote{Recall that $TM\oplus_{\omega} (M \times \ensuremath{\mathbb R})$ is a Lie algebroid with the bracket $[X\oplus V, Y\oplus W]=[X,Y]\oplus\{X(W)-Y(V)-\omega(X,Y)\}$.}:
$$\ensuremath{\mathfrak{g}} \to \Gamma(TM\oplus_{\omega} (M \times \ensuremath{\mathbb R})), v \mapsto (X_{J^*v}, J^*v)$$
where $X_{J^*v}$ is the Hamiltonian vector field of $J^{*}v \in C^{\infty}(M)$. When $\omega$ is an integral 2-form, $TM\oplus_{\omega} (M \times \ensuremath{\mathbb R})$ is the Atiyah algebroid of a circle bundle prequantizing $(M,\omega)$.
\end{enumerate}
\end{exs}
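To unwind the footnoted bracket, the following sketch verifies that $f\mapsto X_f\oplus f$ preserves brackets. We fix the sign conventions $\iota_{X_f}\omega=-df$ and $\{f,g\}:=X_f(g)$; these conventions are an assumption of the sketch, and with other conventions one obtains an anti-morphism instead.

```latex
% Under the conventions above, \{f,g\} = \omega(X_f,X_g) and [X_f,X_g] = X_{\{f,g\}}.
\begin{align*}
[\,X_f\oplus f,\; X_g\oplus g\,]
  &= [X_f,X_g]\oplus\bigl\{X_f(g)-X_g(f)-\omega(X_f,X_g)\bigr\}\\
  &= X_{\{f,g\}}\oplus\bigl\{\{f,g\}-\{g,f\}-\{f,g\}\bigr\}\\
  &= X_{\{f,g\}}\oplus\{f,g\}.
\end{align*}
% Taking f = J^*u and g = J^*v and using equivariance of the moment map,
% \{J^*u, J^*v\} = J^*[u,v], this shows that v -> (X_{J^*v}, J^*v) is a
% Lie algebra morphism.
```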
\subsubsection{From Lie subalgebroids supported on submanifolds}\label{subsec:Liesubalgoids}
Recall that a \emph{Lie subalgebroid} of $A$ over a closed embedded submanifold\footnote{
A Lie subalgebroid of $A$ over an immersion $\iota \colon N \to M$ can be defined as well: It is a vector bundle $B \to N$ together with a vector bundle morphism $j \colon B \to A$ over $\iota \colon N \to M$ such that: i) $\rho(j(B)) \subset \iota_*(TN)$, ii) $\tilde{\Gamma}(B) := \{\boldsymbol{\alpha} \in \Gamma(A) \colon \boldsymbol{\alpha}|_{\iota(N)} \subset j(B)\}$ is involutive, iii) $[\tilde{\Gamma}(B),\tilde{\Gamma}(0_N)]\subset \tilde{\Gamma}(0_N)$.
}
$N$ of $M$ (see \cite[Def. 4.3.14]{MK2}) is a subbundle $B\to N$, such that:
\begin{enumerate}
\item [i)] $\rho(B)\subset TN$,
\item [ii)] $\tilde{\Gamma}(B):=\{\boldsymbol{\alpha}\in \Gamma(A):\boldsymbol{\alpha}|_N\subset B\}$ is involutive,
\item [iii)]
$[\tilde{\Gamma}(B),\tilde{\Gamma}(0_N)]\subset \tilde{\Gamma}(0_N)$, where $\tilde{\Gamma}(0_N):=\{\boldsymbol{\alpha}\in \Gamma(A):\boldsymbol{\alpha}|_N=0\}$.
\end{enumerate}
If $B$ is a Lie subalgebroid of $A$ over a closed {embedded} submanifold $N$,
then $$\mathcal{B}:=\{\boldsymbol{\alpha}\in \Gamma_c(A):\boldsymbol{\alpha}|_N\subset B\}$$ is a singular subalgebroid of $A$.
Let us describe $\mathcal{B}$ near a point $p$ of $N$. Choose coordinates $\{x_i\}$ around $p$ adapted to $N$, {\it i.e.}\; $\{x_i\}_{i>n}$ vanish on $N$ and $\{x_i\}_{i\le n}$, once restricted to $N$, provide coordinates there (here $n=dim(N)$). Let $\{\boldsymbol{\alpha}_j\}$ be a frame of {compactly supported sections} of $A$ adapted to $B$, {\it i.e.}\; $\{\boldsymbol{\alpha}_j|_N\}_{j\le b}\subset B$ where $b=rank(B)$.
Then $\mathcal{B}$, locally near $p$, is generated by $$\{\boldsymbol{\alpha}_j\}_{j\le b}\cup \{x_i\cdot \boldsymbol{\alpha}_j\}_{i>n,j> b},$$
while on open sets disjoint from $N$, $\mathcal{B}$ is just given by restrictions of {compactly supported} sections of $A$.
When $N$ has codimension one in $M$, $\mathcal{B}$ is projective {(see Def. \ref{ex:proj}).}
If $codim(N)\ge 2$ and $B\neq A|_N$, then $\mathcal{B}$ is not projective, because the number of generators above is strictly larger than $rank(A)$.
\begin{exs}\label{ep:GuLi}
\begin{enumerate}
\item Let $A=TM$. Let $B$ be the zero vector subbundle over $N$. Then $\mathcal{B}$ consists of the vector fields on $M$ which vanish at points of $N$.
\item When $N$ has codimension one in $M$, as mentioned above, $\mathcal{B}$ {is a projective singular subalgebroid, i.e.} it consists of {compactly supported sections} of an honest Lie algebroid over $M$, which Gualtieri and Li \cite[Def. 2.11]{GuLi} call \emph{elementary modification of $A$ along $B$} and denote by $[A:B]$. We remark that they construct an integration of $[A:B]$ applying a blow-up procedure to a Lie groupoid integrating $A$ (assuming that $A$ is integrable) \cite[Thm. 2.9, Cor. 2.10]{GuLi}.
{ In particular, when $A=TM$ and $B=TN$, the Lie algebroid $[TM:TN]$ is called the \emph{log tangent bundle} associated to $N$,
and
$\mathcal{B}$ is the projective singular foliation consisting of vector fields on $M$ tangent to $N$.} \end{enumerate}
\end{exs}
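To make the codimension count concrete, here is an illustration of ours combining the local description above with item (1): take $A=T\ensuremath{\mathbb R}^2$, $N=\{0\}$, and $B$ the zero subbundle over $N$.

```latex
% Here n = dim(N) = 0 and b = rank(B) = 0; as adapted frame take
% alpha_1 = d/dx, alpha_2 = d/dy. The local description yields the generators
\[
\{x_i\cdot\boldsymbol{\alpha}_j\}_{i>n,\,j>b}
  = \{\,x\partial_x,\; x\partial_y,\; y\partial_x,\; y\partial_y\,\}.
\]
% Their classes in B/I_0 B are linearly independent (I_0 B consists of the
% vector fields vanishing to second order at the origin), so
% dim(B/I_0 B) = 4 > 2 = rank(T R^2), and B is not projective.
```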
\subsubsection{From Poisson geometry}\label{subsec:Pois}
{For the cotangent Lie algebroid of a Poisson manifold, certain singular subalgebroids can be constructed out of functions.}
Let $(M,\pi)$ be a Poisson manifold, and consider the
Poisson algebra $(C^{\infty}(M),\cdot,\{\cdot,\cdot\})$.
Let $\mathcal{S}$ be
a \emph{Poisson} subalgebra of $C^{\infty}(M)$, {which is locally finitely generated as a multiplicative algebra}.
Then
$${\mathcal{B}:=Span_{C_c^{\infty}(M)}\{df:f\in \mathcal{S}\}}$$
is a singular subalgebroid of the cotangent Lie algebroid $T^*M$. {To see that $\mathcal{B}$ is locally finitely generated as a $C^{\infty}(M)$-module one just uses the product rule, and to see that $\mathcal{B}$
is involutive, use the Leibniz rule for the Poisson bracket and the fact that $[df,dg]=d\{f,g\}$.}
\begin{exs}\label{ex:Poissonmap}
\begin{enumerate}
\item {Let $\phi \colon M\to P$ be a Poisson map between Poisson manifolds, and $\mathcal{S}_P$ a Poisson subalgebra of $C^{\infty}(P)$ which is locally finitely generated as a multiplicative algebra.
Then the same holds for $\mathcal{S}:=\phi^*(\mathcal{S}_P)\subset C^{\infty}(M)$}.
\item {A special case of the above is given by Poisson maps $M \to \ensuremath{\mathfrak{g}}^*$ to the dual of a Lie algebra, and choosing $\mathcal{S}_{\ensuremath{\mathfrak{g}}^*}$ to consist of polynomial functions on $\ensuremath{\mathfrak{g}}^*$. Notice that in this case $\mathcal{B}$ is of the kind\footnote{Indeed, $\ensuremath{\mathfrak{g}} \to \Gamma(T^*M), v\mapsto d(\phi^*(v))$ is a Lie algebra morphism.} described in Ex. \ref{ex:momapsingsub} b).}
\item Let $f\in C^{\infty}(M)$. Then $\langle df \rangle $ is a singular subalgebroid of $T^*M$.
(This can be seen as a special case of \S \ref{subsec:generated1}, or of the above taking the one dimensional Lie subalgebra of $C^{\infty}(M)$ spanned by $f$.) For instance, take $M=\ensuremath{\mathbb R}^2$ with the standard ``symplectic'' structure $\pi=\partial_x\wedge \partial_y$, and let $f(x,y)=xy$. Then ${\mathcal{B}}$ is given by all $C^{\infty}_c(\ensuremath{\mathbb R}^2)$-multiples of $d(xy)$.
{The anchor map $\Pi\colon T^{\ast}\ensuremath{\mathbb R}^2 \to T\ensuremath{\mathbb R}^2$ of the cotangent Lie algebroid is just contraction with $\pi$. The singular foliation $\Pi(\mathcal{B})$ of $\ensuremath{\mathbb R}^2$ {induced by $\mathcal{B}$} is interesting: its leaves agree with the connected components of the $f$-fibers, except on the preimage of $0$: $f^{-1}(0)$ is the union of the axes, and
it consists of 5 leaves, namely the 4 open half-axes and the origin.}
\end{enumerate}
\end{exs}
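The last example can be computed explicitly. We use the convention $\Pi(\xi)=\pi(\xi,\cdot)$; with other sign conventions the formulas below change by a global sign.

```latex
% With pi = d/dx wedge d/dy one has Pi(dx) = d/dy and Pi(dy) = -d/dx, hence
\[
\Pi\bigl(d(xy)\bigr) = \Pi(y\,dx + x\,dy) = y\,\partial_y - x\,\partial_x .
\]
% Its flow (x,y) -> (e^{-t}x, e^{t}y) preserves f = xy. Away from f^{-1}(0)
% the orbits are the connected components of the hyperbolas {xy = c}; on
% f^{-1}(0) the vector field is nonzero along each open half-axis and
% vanishes at the origin, recovering the 5 leaves mentioned above.
```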
\subsection{Singular subalgebroids and right-invariant vector fields}\label{sec:rivf}
\label{sec:twofols}
Let $\mathcal{B}$ be a singular subalgebroid of a Lie algebroid $A$ over $M$ with anchor $\rho$. $\mathcal{B}$ induces a singular foliation on $M$, whose leaves are contained in the orbits of the Lie algebroid $A$, namely
\begin{equation}\label{eq:fb}
\mathcal{F}_{\mathcal{B}}:=\{\rho(\boldsymbol{\alpha}):\boldsymbol{\alpha}\in \mathcal{B}\}.
\end{equation}
In this subsection we are concerned with another singular foliation associated to $\mathcal{B}$, which will be very important to carry out our constructions. Assume $A$ is integrable and fix a Lie groupoid $\cG \gpd M$ integrating $A$. We denote the source and target maps of $\cG$ by $\bs, \bt \colon \cG \to M$ and identify the Lie algebroids $\ker(d\bs)\mid_M$ and $A$.
Given a section $\boldsymbol{\alpha} \in \mathcal{B} \subset \Gamma_c(M,\ker(d\bs)|_M)$, we denote by $\overset{\rightarrow}{\boldsymbol{\alpha}}$ the right-invariant vector field on $\cG$ which extends $\boldsymbol{\alpha}$. Recall that
$\overset{\rightarrow}{\boldsymbol{\alpha}}$ is an element of
$\Gamma(\cG,\ker(d\bs))\subset \mathfrak{X}(\cG)$, and
it is given by the formula $\overset{\rightarrow}{\boldsymbol{\alpha}}_g = (R_g)_*\boldsymbol{\alpha}_{\bt(g)}$, for all $g \in \cG$. We will consider {the singular foliation}
\begin{equation}
\overrightarrow{\mathcal{B}}:={Span_{C^{\infty}_{c}(\cG)} \{ \overset{\rightarrow}{\boldsymbol{\alpha}} \mid \boldsymbol{\alpha} \in \mathcal{B}\}}.
\end{equation}
Likewise, we denote by $\overset{\leftarrow}{\mathcal{B}}$ the ${C^{\infty}_c(\cG)}$-module
generated by the left-invariant vector fields $\overset{\leftarrow}{\boldsymbol{\alpha}}$ for all $\boldsymbol{\alpha} \in \mathcal{B}$. {All the statements made in this subsection for $\overrightarrow{\mathcal{B}}$ hold in a similar way for $\overleftarrow{\mathcal{B}}$ too.}
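For orientation, these invariant vector fields can be written out explicitly for the pair groupoid $M\times M\gpd M$. We assume the conventions $\bs(x,y)=y$, $\bt(x,y)=x$, $(x,y)\cdot(y,z)=(x,z)$, and the identification $A\cong TM$ via $\ker(d\bs)|_M$; with other inversion conventions $\overleftarrow{X}$ may acquire a sign.

```latex
% For X a compactly supported vector field on M, viewed as a section of
% A = TM:
\[
\overrightarrow{X}_{(x,y)} = (X_x,\, 0)\in T_xM\times T_yM,
\qquad
\overleftarrow{X}_{(x,y)} = (0,\, X_y).
\]
% In particular, if B = F is a singular foliation viewed as a singular
% subalgebroid of TM, then the singular foliation B-arrow on M x M is the
% C_c^infty(M x M)-span of the vector fields (X, 0) with X in F.
```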
Notice that $\text{support}(\overrightarrow{\boldsymbol{\alpha}})=\bt^{-1}(\text{support}(\boldsymbol{\alpha}))$, hence $\overrightarrow{\boldsymbol{\alpha}}$ is not necessarily compactly supported.
However, we have:
\begin{lemma}\label{lem:complete}
For every section $\boldsymbol{\alpha} \in \mathcal{B}$ the vector field $\overrightarrow{\boldsymbol{\alpha}} \in \overrightarrow{\mathcal{B}}$ is complete.
\end{lemma}
\begin{proof}
Identifying the anchor map $\rho\colon A \to TM$ with $d\bt|_M\colon\ker(d\bs)\mid_M \to TM$, we get that $\overrightarrow{\boldsymbol{\alpha}}$ is $\bt$-related with the vector field $\rho(\boldsymbol{\alpha})$, which has the same support as $\boldsymbol{\alpha}$, whence it is complete. It follows from \cite[Thm. 3.6.4]{MK2} that $\overrightarrow{\boldsymbol{\alpha}}$ is complete as well.
\end{proof}
{We now relate local generators of $\mathcal{B}$ with local generators of $\overrightarrow{\mathcal{B}}$.}
Given $x\in M$,
let $I_x^M$ denote the ideal of functions on $M$ vanishing at $x$, and
$I_x^{\cG}$ the ideal of functions on $\cG$ vanishing at $x$.
{
\begin{remark}\label{rem:basis}
Let $\boldsymbol{\alpha}_1,\cdots,\boldsymbol{\alpha}_n\in \mathcal{B}$. These elements are
generators of $\mathcal{B}$ in a neighborhood of $x$ if{f} their images $[\boldsymbol{\alpha}_1],\cdots,[\boldsymbol{\alpha}_n]$
in $\mathcal{B}/I_x^M\mathcal{B}$ are a spanning set of this vector space. This is proved exactly as in the case of singular foliations \cite[Prop. 1.5 a)]{AndrSk}. If the latter form a basis of $\mathcal{B}/I_x^M\mathcal{B}$, we say that $\boldsymbol{\alpha}_1,\cdots,\boldsymbol{\alpha}_n$ is a {\bf minimal} set of local generators.
\end{remark}
}
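A one-dimensional illustration of ours: take $\mathcal{B}=C^{\infty}_c(\ensuremath{\mathbb R})\,x\partial_x\subset\mathfrak{X}_c(\ensuremath{\mathbb R})$.

```latex
% At x = 0:
\[
I_0^{\ensuremath{\mathbb R}}\mathcal{B} = \{f\, x\partial_x : f(0)=0\},
\qquad
\dim\bigl(\mathcal{B}/I_0^{\ensuremath{\mathbb R}}\mathcal{B}\bigr) = 1,
\]
% spanned by the class of x d/dx, so {x d/dx} is a minimal set of local
% generators at 0. By contrast, {x d/dx, x^2 d/dx} also generates B near 0
% but is not minimal: x^2 d/dx = x (x d/dx) lies in I_0 B, so its class
% vanishes in the quotient.
```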
\begin{lemma}\label{lem:basis}
Let $\boldsymbol{\alpha}_1,\dots,\boldsymbol{\alpha}_n$ be a finite subset of $\mathcal{B}$.
Then $[\boldsymbol{\alpha}_1],\dots,[\boldsymbol{\alpha}_n]$ is a basis of $\mathcal{B}/I_x^M\mathcal{B}$ if{f}
$[\overset{\rightarrow}{\boldsymbol{\alpha}_1}],\dots,[\overset{\rightarrow}{\boldsymbol{\alpha}_n}]$ is a basis\footnote{Actually the $\overset{\rightarrow}{\boldsymbol{\alpha}_i}$ do not lie in $\overrightarrow{\mathcal{B}}$ but rather in the
global hull $\widehat{\overrightarrow{\mathcal{B}}}$, see \S \ref{section:singdef}. This does not pose any problems since the inclusion of $\overrightarrow{\mathcal{B}}$ in $\widehat{\overrightarrow{\mathcal{B}}}$ induces an isomorphism
$\overrightarrow{\mathcal{B}}/I_x^{\cG}\overrightarrow{\mathcal{B}}\cong \widehat{\overrightarrow{\mathcal{B}}}/I_x^{\cG}\widehat{\overrightarrow{\mathcal{B}}}$.}
of $\overrightarrow{\mathcal{B}}/I_x^{\cG}\overrightarrow{\mathcal{B}}$.
\end{lemma}
\begin{proof}
``$\Rightarrow$''
We first show that the $[\overset{\rightarrow}{\boldsymbol{\alpha}_i}]$ are linearly independent. Let $c_1,\dots,c_n\in \ensuremath{\mathbb R}$ with $\sum c_i \overset{\rightarrow}{\boldsymbol{\alpha}_i} \in I_x^{\cG}\overrightarrow{\mathcal{B}}$. Restricting from $\cG$ to $M$ we obtain $\sum c_i \boldsymbol{\alpha}_i\in I_x^M\mathcal{B}$, therefore all coefficients $c_i$ are zero. We now show that the $[\overset{\rightarrow}{\boldsymbol{\alpha}_i}]$ are a spanning set of $\overrightarrow{\mathcal{B}}/I_x^{\cG}\overrightarrow{\mathcal{B}}$. The $\boldsymbol{\alpha}_i$ generate the $C^{\infty}(M)$-module $\mathcal{B}$ in a neighborhood of $x$ (see Remark \ref{rem:basis}), and hence
the $\overset{\rightarrow}{\boldsymbol{\alpha}_i}$ generate the $C^{\infty}(\cG)$-module $\overrightarrow{\mathcal{B}}$ near $x$. Given any $X\in \overrightarrow{\mathcal{B}}$, there are $f_i\in {C_c^{\infty}(\cG)}$ such that $X=\sum f_i \overrightarrow{\boldsymbol{\alpha}_i}=\sum f_i(x) \overrightarrow{\boldsymbol{\alpha}_i}+\sum (f_i-f_i(x))\overrightarrow{\boldsymbol{\alpha}_i}$, and since $f_i-f_i(x)\in I_x^{\cG}$ we obtain $[X]=\sum f_i(x) [\overrightarrow{\boldsymbol{\alpha}_i}]$.
``$\Leftarrow$'' The $[\boldsymbol{\alpha}_i]$ are linearly independent:
if $\sum c_i \boldsymbol{\alpha}_i \in I_x^M\mathcal{B}$ then $\sum c_i \overset{\rightarrow}{\boldsymbol{\alpha}_i} \in \bt^*(I_x^{M}) \overrightarrow{\mathcal{B}}
\subset I_x^{\cG}\overrightarrow{\mathcal{B}}$, showing that the $c_i$ all vanish. To show that the $[\boldsymbol{\alpha}_i]$ are a spanning set of $\mathcal{B}/I_x^M\mathcal{B}$, notice that by assumption any element of $\overrightarrow{\mathcal{B}}$ can be written as $\sum c_i \overrightarrow{\boldsymbol{\alpha}_i}$ (for suitable $c_i\in \ensuremath{\mathbb R})$ plus an element of $ I_x^{\cG}\overrightarrow{\mathcal{B}}$. We pick $\boldsymbol{\alpha}\in \mathcal{B}$ and write $\overrightarrow{\boldsymbol{\alpha}}$ in the above form.
Restricting to $M$ we see that $\boldsymbol{\alpha}$ equals $\sum c_i \boldsymbol{\alpha}_i$ plus an element of $ I_x^{M}{\mathcal{B}}$, {\it i.e.}\; $[\boldsymbol{\alpha}]=\sum c_i [\boldsymbol{\alpha}_i]$.
\end{proof}
\section{Bisubmersions for singular subalgebroids}\label{section:relbisub}
Throughout this section we fix an integrable Lie algebroid $A\to M$ and a singular subalgebroid $\mathcal{B}$. Further, we fix a Lie groupoid $\cG$ integrating $A$.
Recall from \cite{AndrSk} that the key ingredient for the construction of the holonomy groupoid of a singular foliation is the notion of bisubmersion. Here, in order to carry out the construction in the case of singular subalgebroids, we reformulate the notion of bisubmersion {in \S \ref{subsec:defbisub}. We then present examples, including
path holonomy bisubmersions. The latter, upon applying the operations we outline in \S \ref{subsec:operations}, will be used to construct the holonomy groupoid in the next Section. {Our \S \ref{subsubsec:phrel} and \S \ref{subsec:operations} follow closely \cite{AndrSk}.}
}
\subsection{Pullbacks of singular foliations}
{We collect background material on pullbacks and generating sets for
singular foliations (see \S \ref{subsec:motivex}).}
\begin{definition}\label{def:pullback}
Let $\varphi \colon U\to V$ be a smooth map between smooth manifolds.
\begin{enumerate}
\item Let $X\in \mathfrak{X}(U)$ and $Y\in \mathfrak{X}(V)$. We say that $X$ is {\bf $\varphi$-related} to $Y$ if{f} $\varphi_*(X(p))=Y(\varphi(p))$ for all $p\in U$.
\item
Let $\mathcal{F}$ be a $C^{\infty}(V)$-submodule of ${\mathfrak{X}_c(V)}$.
Define, as in \cite{AndrSk} (see also \cite[\S 1.1]{AZ2})
$$\varphi^{-1}(\mathcal{F}):=\{X\in \mathfrak{X}_c(U):d\varphi(X)=\sum_if_i(Y_i\circ \varphi) \text{ for finitely many $f_i\in C^{\infty}_{c}(U)$ and $Y_i\in \mathcal{F}$}\}.$$
Here $d\varphi\colon TU\to \varphi^*(TV)$ is a vector bundle map covering $Id_U$, where $\varphi^*(TV)$ denotes the pullback vector bundle. Notice that
$\varphi^{-1}(\mathcal{F})$ is a $C^{\infty}(U)$-submodule of $\mathfrak{X}_c(U)$. It is a foliation, {called \textbf{pullback foliation}}, whenever $\mathcal{F}$ is a foliation and $\varphi$ is transverse to $\mathcal{F}$ \cite{AndrSk}.
\end{enumerate}
\end{definition}
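As a quick sanity check of the definition (an illustration of ours): let $\varphi\colon\ensuremath{\mathbb R}^2\to\ensuremath{\mathbb R}$, $(x,y)\mapsto x$, and let $\mathcal{F}$ be the singular foliation generated by $x\partial_x$.

```latex
% For X = a d/dx + b d/dy in X_c(R^2) we have d phi (X) = a (d/dx o phi),
% so the defining condition forces a to vanish on the line {x = 0}
% (equivalently, a = x a' for a smooth compactly supported a'). Hence
\[
\varphi^{-1}(\mathcal{F})
  = Span_{C^{\infty}_{c}(\ensuremath{\mathbb R}^2)}\{\,x\partial_x,\ \partial_y\,\},
\]
% whose leaves are the two open half-planes {x > 0}, {x < 0} and the line
% {x = 0} -- the preimages of the leaves of F, as expected since phi is a
% submersion (hence transverse to F).
```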
Now let $\varphi \colon U\to V$ be a smooth map and $\mathcal{F}$ be a $C^{\infty}(V)$-submodule of ${\mathfrak{X}_c(V)}$.
Fix a generating set $\mathcal{G}$ of $\mathcal{F}$, {as defined in \S \ref{section:singdef}}. We display two technical lemmas, which are not completely obvious due to the fact that neither of $\mathcal{G}$ and $\mathcal{F}$ is included in the other.
\begin{lemma}\label{lem:liftequiv}
{The following conditions are equivalent:
\begin{itemize}
\item for every $Y\in \mathcal{F}$ there is
a $Z\in \mathfrak{X}(U)$ which is $\varphi$-related to $Y$.
\item for every $Y\in \mathcal{G}$ there is a $Z\in \mathfrak{X}(U)$ which is $\varphi$-related to $Y$.
\end{itemize}
}
\end{lemma}
\begin{proof}
We only show that the first condition implies the second (the converse is similar). Take a partition of unity $\{\psi_a\}$ on $V$ by functions with compact support. Given $Y\in \mathcal{G}$, by assumption there is $Z_a\in \mathfrak{X}(U)$ that is $\varphi$-related to $\psi_a Y\in \mathcal{F}$. Since the partition of unity is locally finite, {one can arrange that} the sum $\sum_a Z_a$ is locally finite, and the resulting vector field is $\varphi$-related to $\sum_a\psi_a Y=Y$.
\end{proof}
Under certain conditions, $\varphi^{-1}(\mathcal{F})$ has a distinguished generating set.
\begin{lemma}\label{lem:fix}
Assume that any of the equivalent conditions in Lemma \ref{lem:liftequiv} is satisfied. {(This happens for instance when $\varphi$ is a submersion).} Then
\begin{align*}
\varphi^{-1}(\mathcal{F})=&Span_{C^{\infty}_{c}(U)}\{\text{$Z\in \mathfrak{X}(U)$: $Z$ is
$\varphi$-related to an element of $\mathcal{F}$}\}\\
=&Span_{C^{\infty}_{c}(U)}\{\text{$Z\in \mathfrak{X}(U)$: $Z$ is
$\varphi$-related to an element of $\mathcal{G}$ {or to $0$}}\}.
\end{align*}
\end{lemma}
\begin{proof} In the first equality, the inclusion ``$\supset$'' is easily checked to hold even when the assumption is not satisfied. For ``$\subset$'', take $X\in \mathfrak{X}_c(U)$ such that $d\varphi(X)=\sum_if_i(Y_i\circ \varphi)$ where the sum is finite, $f_i\in C^{\infty}_{c}(U)$ and $Y_i\in \mathcal{F}$. By the assumption, there exist $Z_i\in
\mathfrak{X}(U)$ that are $\varphi$-related to $Y_i$, {\it i.e.}\; $d\varphi(Z_i)=Y_i\circ \varphi$. Hence we can write $$
d\varphi(X)=\sum_if_i d\varphi(Z_i)=d\varphi(\sum_if_i Z_i).$$ This means that
$X=\sum_if_i Z_i+Z$ where $Z\in \mathfrak{X}_c(U)$ is $\varphi$-related to the zero vector field on $V$, which is an element of $\mathcal{F}$.
For the second equality, to prove ``$\supset$'', take $Z\in \mathfrak{X}(U)$ which is $\varphi$-related to $Y\in \mathcal{G}$ and $f\in C^{\infty}_{c}(U)$. We can choose a function $\psi\in C^{\infty}_{c}(V)$ which is equal to one on $\varphi(Supp(f))$. We then have $fZ=f\varphi^{*}(\psi)Z$, and clearly $\varphi^{*}(\psi)Z$ is $\varphi$-related to $\psi Y\in \mathcal{F}$. To show ``$\subset$'', take $X\in {\mathfrak{X}(U)}$ which is $\varphi$-related to an element of $\mathcal{F}$, {\it i.e.}\; to some $\sum_i g_i Y_i$ where $g_i\in C^{\infty}_{c}(V)$ and $Y_i\in \mathcal{G}$. By assumption there is $X_i\in \mathfrak{X}(U)$ that is $\varphi$-related to $Y_i$, hence $\sum_i \varphi^{*}(g_i) X_i$ is also $\varphi$-related to $\sum_i g_i Y_i$. {Fix $f\in C^{\infty}_{c}(U)$. Then $fX=f\sum_i \varphi^{*}(g_i) X_i+Z$ where $Z\in \mathfrak{X}_c(U)$ is $\varphi$-related to the zero vector field on $V$.}
\end{proof}
\subsection{Definition of bisubmersion}\label{subsec:defbisub}
Let $A$ be an integrable Lie algebroid and $\cG$ a Lie groupoid integrating $A$. Let $\mathcal{B}$ be a singular subalgebroid of $A$.
\begin{definition}\label{dfn:bisubm2} A {\bf bisubmersion} for $\mathcal{B}$ is a smooth map $\varphi\colon U \to \cG$, where $U$ is a manifold, such that
\begin{enumerate}[i)]
\item $\bs_U:=\bs\circ\varphi$ and $\bt_U:=\bt\circ\varphi \colon U \to M$ are submersions,
\item for every $\boldsymbol{\alpha}\in \mathcal{B}$, there is $Z\in\mathfrak{X}(U)$
which is $\varphi$-related to $\overrightarrow{\boldsymbol{\alpha}}$ and $W\in\mathfrak{X}(U)$ which is $\varphi$-related to $\overleftarrow{\boldsymbol{\alpha}}$,
\item
$\varphi^{-1}(\overrightarrow{\mathcal{B}})= \Gamma_c(U,\ker d\bs_U)$ and $\varphi^{-1}(\overleftarrow{\mathcal{B}})= \Gamma_c(U,\ker d\bt_U)$.
\end{enumerate}
\end{definition}
\begin{notation}\label{not:bisubm2}
We denote a bisubmersion of $\mathcal{B}$ by $(U,\varphi,\cG)$. One can bear in mind the following diagram:
\begin{equation*}
\xymatrix{
&U\ar[d]^{\varphi} & \\
&\ar[dr]^{\bs}\cG\ar[dl]_{\bt}&\\
M&&M
}
\end{equation*}
\end{notation}
\begin{remark}
In Def. \ref{dfn:bisubm2} the map $\varphi$ is not required to be transverse to $\overrightarrow{\mathcal{B}}$, and
the conditions in Def. \ref{dfn:bisubm2} do not imply transversality in general (see the examples in \S \ref{subsub:LieGrbisub} with $\cK$ there being the trivial groupoid).
\end{remark}
{The rest of this subsection is devoted to explanations about conditions ii) and iii).}
\begin{remark}\label{rem:condii}
The first part of condition ii) in Def. \ref{dfn:bisubm2} is expressed in terms of the generating set $\{ \overset{\rightarrow}{\boldsymbol{\alpha}} \mid \boldsymbol{\alpha} \in \mathcal{B}\}$ of the singular foliation $\overrightarrow{\mathcal{B}}$. Using Lemma \ref{lem:liftequiv} it can be rephrased by saying that any element of $\overrightarrow{\mathcal{B}}$ can be lifted to $U$. Notice that any lift will lie in $\ker d\bs_U$, since right-invariant vector fields on $\cG$ lie in $\ker d\bs$.
An analogous statement holds for the second part of condition ii).
\end{remark}
We now phrase the first part of condition iii) in Definition \ref{dfn:bisubm2} more explicitly.
{
We have
\begin{align}\label{eq:varrar}
\varphi^{-1}(\overrightarrow{\mathcal{B}})&=Span_{C^{\infty}_{c}(U)}\{\text{$Z\in\mathfrak{X}(U): Z$ is $\varphi$-related to $\overset{\rightarrow}{\boldsymbol{\alpha}}$ for some $\boldsymbol{\alpha} \in \mathcal{B}$}\}\\
&=Span_{C^{\infty}_{c}(U)}\Gamma(U,\ker d\bs_U)^{proj,{\mathcal{B}}},\nonumber
\end{align}
where the first equation holds by Lemma \ref{lem:fix} (which can be applied due to condition ii)), {and in the second equation we used that $\overrightarrow{\mathcal{B}}$ lies in the kernel of $d\bs$}.
Here, $$\Gamma(U,\ker d\bs_U)^{proj,{\mathcal{B}}}:=\{Z\in \Gamma(U,\ker d\bs_U): \text{$Z$ is $\varphi$-related to $\overset{\rightarrow}{\boldsymbol{\alpha}}$ for some $\boldsymbol{\alpha} \in \mathcal{B}$}\}.$$
}
\begin{lemma}\label{lem:bisubm2}
Given any map $\varphi\colon U \to \cG$, the following statements are equivalent:
\begin{itemize}
\item[a)] $\varphi^{-1}(\overrightarrow{\mathcal{B}})= \Gamma_c(U,\ker d\bs_U)$
\item[b)] $\Gamma(U,\ker d\bs_U)^{proj,{\mathcal{B}}}$
generates ${\Gamma_c(U,\ker d\bs_U)}$ as a ${C^{\infty}_c(U)}$-module
\item[c)] $\Gamma(U,\ker d\bs_U)^{proj,{\mathcal{B}}}$ spans $\ker (d\bs_U)_u$ at every $u\in U$.
\end{itemize}
\end{lemma}
\begin{proof}
Using eq. \eqref{eq:varrar} it is clear that a) and b) are equivalent.
Clearly b) implies c). For the converse, {we first make an observation}.
By c), for any $u\in U$, we can take a basis of $\ker (d\bs_U)_u$ and extend it to $\boldsymbol{\alpha}^1,\dots,\boldsymbol{\alpha}^n\in \Gamma(U,\ker d\bs_U)^{proj,{\mathcal{B}}}$. {These vector fields are linearly independent on a small open
neighborhood $U'$ of $u$.
Therefore we can write any section of $\ker d\bs_U$ with support in $U'$ as a
$C_c^{\infty}(U')$-linear combination of the $\boldsymbol{\alpha}^i$.
}
Now fix $X\in {\Gamma_c(U,\ker d\bs_U)}$. The support of $X$, being compact, can be covered by finitely many open neighborhoods as above. Extend this {cover} to an open cover $\{U'_i\}$ of $U$, and choose a partition of unity $\{g_i\}\subset
C_c^{\infty}(U)$ subordinate to it. Then $X=\sum_ig_i(X|_{U'_i})$ is a \emph{finite} sum. {The summands are
sections of $\ker d\bs_U$ with support in $U'_i$.
Applying the above observation to each summand, we see that $X$ is written as a finite $C_c^{\infty}(U)$-linear combination of elements of $\Gamma(U,\ker d\bs_U)^{proj,{\mathcal{B}}}$.} \end{proof}
{We mention another characterization of Def. \ref{dfn:bisubm2}:}
\begin{remark}\label{donto}
{The first parts of conditions ii) and iii) in Definition \ref{dfn:bisubm2} are equivalent to the following: the map of $C^{\infty}(U)$-modules
$$d\varphi \colon \Gamma_c(U;\ker d\bs_U) \to \varphi^*(\overrightarrow{\mathcal{B}})$$
is well-defined and surjective. Here $\varphi^*(\overrightarrow{\mathcal{B}})$ is the $C^{\infty}(U)$-submodule of $\mathfrak{X}_c(U)$ generated by $f(\xi\circ\varphi)$ with $f \in C^{\infty}_c(U)$ and $\xi \in \overrightarrow{\mathcal{B}}$. }
Indeed, the first part of condition iii) is equivalent to $\varphi^{-1}(\overrightarrow{\mathcal{B}}) \supset {\Gamma_c(U,\ker d\bs_U)}$ (the other inclusion is obvious since $\overrightarrow{\mathcal{B}}$ lies in the kernel of $d\bs$), and therefore is equivalent to the fact that the
above map is well-defined. The first part of condition ii), by Remark \ref{rem:condii}, says that this map is onto.
\end{remark}
\subsection{Examples}\label{subsec:exbisub}
{We exhibit examples of bisubmersions for singular foliations and wide Lie subalgebroids (the two motivating examples displayed in \S \ref{subsec:motivex}), and generalizing the latter, for the examples treated in \S \ref{subsec:morph}.}
\subsubsection{Bisubmersions of singular foliations}\label{section:usualbisub}
In \cite[definition 2.1]{AndrSk} a {\bf bisubmersion for a singular foliation} $(M,\mathcal{F})$ is defined as a triple $(U, \bt_U, \bs_U)$ consisting of a manifold $U$ with two submersions $\bt_U$ and $\bs_U$ to $M$, such that
\begin{equation}\label{eq:bisubfol}
\bt_U^{-1}(\mathcal{F})=\Gamma_c(U,\ker d\bt_U) + \Gamma_c(U,\ker d\bs_U)=\bs_U^{-1}(\mathcal{F}).
\end{equation}
On the other hand, singular foliations are special cases of singular subalgebroids; namely, they are the singular subalgebroids of $TM$. We show that the two notions of bisubmersion for singular foliations {essentially agree, since there is a canonical bijective correspondence between them}.
\begin{prop}\label{prop:equivbi}
Let $(M,\mathcal{F})$ be a singular foliation, $U$ a manifold, and $\bt_U \colon U\to M$ and $\bs_U \colon U\to M$ submersions. The following are equivalent:
\begin{itemize}
\item[1)] $(U, \bt_U, \bs_U)$ is a bisubmersion for the singular foliation $\mathcal{F}$ (in the sense of \cite[definition 2.1]{AndrSk});
\item[2)] the map $(\bt_U, \bs_U)\colon U\to M\times M$ is a bisubmersion (in the sense of definition \ref{dfn:bisubm2}), {where $M\times M$ is endowed with the pair groupoid structure}.
\end{itemize}
\end{prop}
We give the following lemma without proof.
\begin{lemma}\label{lem:easyinclusion}
Let $\bt_U \colon U\to M$ and $\bs_U \colon U\to M$ {be smooth maps}.
If $Z\in \mathfrak{X}(U)$ and $X,Y\in \mathfrak{X}(M)$, then $Z$ is $(\bt_U, \bs_U)$-related to $\overrightarrow{X}+\overleftarrow{Y}$ if{f} it is $\bt_U$-related to $X$ and $\bs_U$-related to $Y$.
\end{lemma}
\begin{proof}[Proof of proposition \ref{prop:equivbi}]
$1)\Rightarrow 2)$: Property i) in definition \ref{dfn:bisubm2} is obviously satisfied. For property ii) we argue as follows. Every $X\in \mathcal{F}$ can be $\bt_U$-lifted to $Z\in \Gamma(\ker d\bs_U)$, by the proof of \cite[Prop. 2.10 b)]{AndrSk}. Such a $Z$ is $(\bt_U, \bs_U)$-related to $\overrightarrow{X}$ by Lemma \ref{lem:easyinclusion}.
Further, $X$ can be $\bs_U$-lifted to $W\in \Gamma(\ker d\bt_U)$, by the same argument in the proof of \cite[Prop. 2.10 b)]{AndrSk}(interchanging the roles of source and target), and such a $W$ is $(\bt_U, \bs_U)$-related to $\overleftarrow{X}$.
We show {property iii)}, that is, $(\bt_U, \bs_U)^{-1}(\overrightarrow{\mathcal{F}})= \Gamma_c(U,\ker d\bs_U)$.
We do so using Lemma \ref{lem:bisubm2} b).
Let $Z\in \Gamma_c(U,\ker d\bs_U)$. By the {first equality in eq.} \eqref{eq:bisubfol}, we have $Z=\sum_ig_iZ_i $ where $g_i\in C_c^{\infty}(U)$ and $Z_i \in \mathfrak{X}(U)$ is $\bt_U$-projectable to an element of $\mathcal{F}$.
We can write each $Z_i$ as the sum of a vector field in $\Gamma(\ker d\bt_U)$ and one in $\Gamma(\ker d\bs_U)$ -- which we denote by the respective subscripts --.
{To see this, we have to apply with some care the first equality in eq. \eqref{eq:bisubfol}: choose a partition of unity $\{\psi_a\}$ with compact support on $U$. Then $\psi_aZ_i$ equals an element of $\Gamma_c(\ker d\bt_U)$ plus an element of $\Gamma_c(\ker d\bs_U)$, and we may assume\footnote{By multiplying them with a suitable function with value $1$ on $Supp(\psi_a)$.}
that their support is contained in a small enough neighborhood of $Supp(\psi_a)$.
Hence $Z_i=\sum_a \psi_aZ_i$ equals a locally finite sum of elements of $\Gamma_c(\ker d\bt_U)$ plus a locally finite sum of elements of $\Gamma_c(\ker d\bs_U)$.}
Altogether we obtain
$$Z= Z'+\sum_ig_i(Z_i)_{\ker d\bs_U},\;\; \text{ where }Z':=\sum_ig_i(Z_i)_{\ker d\bt_U}.$$
Notice that $Z'$ lies in $\Gamma_c(U,\ker d\bs_U)$ (being the difference of two vector fields with this property),
hence it is $(\bt_U, \bs_U)$-related to the zero vector field.
Similarly, each $(Z_i)_{\ker d\bs_U}$ is $\bt_U$-projectable to an element of $\mathcal{F}$ (being the difference $Z_i-(Z_i)_{\ker d\bt_U}$ of two vector fields with this property), hence {by Lemma \ref{lem:easyinclusion}} it is $(\bt_U, \bs_U)$-related to an element of {$\{\overset{\rightarrow}{X} : X \in \mathcal{F}\}$}.
In conclusion,
$Z$ is a $C_c^{\infty}(U)$-linear combination of vector fields which are $(\bt_U, \bs_U)$-related to elements of {$\{\overset{\rightarrow}{X} : X \in \mathcal{F}\}$},
proving that the condition in Lemma \ref{lem:bisubm2} b) holds. {The second equality in item iii) of definition \ref{dfn:bisubm2} follows in an analogous way.}
$2)\Rightarrow 1)$: $\bt_U$ and $\bs_U$ are submersions by item i) of definition \ref{dfn:bisubm2}. Hence we just need to show eq. \eqref{eq:bisubfol}.
By Lemma \ref{lem:easyinclusion} and item iii) of definition \ref{dfn:bisubm2} we have
\begin{align*}
\bt_U^{-1}(\mathcal{F})\cap \bs_U^{-1}(\mathcal{F})\supset (\bt_U, \bs_U)^{-1}(\overrightarrow{\mathcal{F}})+(\bt_U, \bs_U)^{-1}(\overleftarrow{\mathcal{F}})=\Gamma_c(U,\ker d\bs_U) + \Gamma_c(U,\ker d\bt_U).
\end{align*}
We show $\bt_U^{-1}(\mathcal{F})\ \subset \Gamma_c(U,\ker d\bs_U) + \Gamma_c(U,\ker d\bt_U)$ (the argument for $\bs_U^{-1}(\mathcal{F})$ is analogous). Let $Z\in \mathfrak{X}(U)$ be $\bt_U$-related to some $X\in \mathcal{F}$. By condition ii), there is a vector field $W$ on $U$ which is $(\bt_U,\bs_U)$-related to $\overrightarrow{X}$, {\it i.e.}\; $W$ lies in $\Gamma(\ker d\bs_U)$ and is
$\bt_U$-related to ${X}$. We conclude by writing
$Z=(Z-W)+W$, with $Z-W\in \Gamma(\ker d\bt_U)$.
\end{proof}
Further, a bisubmersion for any singular subalgebroid $\mathcal{B}$ gives rise to a bisubmersion for $\mathcal{F}_{\mathcal{B}}$, the induced singular foliation defined in eq. \eqref{eq:fb}. We have:
\begin{lemma}\label{lem:usualbi}
Let $(U,\varphi,\cG)$ be a bisubmersion of $\mathcal{B}$. Then $(U,\bt_U,\bs_U)$ is a bisubmersion of the singular foliation $\mathcal{F}_{\mathcal{B}}$ (in the sense of \cite{AndrSk}).
\end{lemma}
\begin{proof} Since $\bt_U$ and $\bs_U$ are submersions, we just have to show that
$$\bt_U^{-1}(\mathcal{F}_{\mathcal{B}})=\Gamma_c(U,\ker d\bt_U) + \Gamma_c(U,\ker d\bs_U)$$
and similarly for $\bs_U^{-1}(\mathcal{F}_{\mathcal{B}})$. We start by proving the above equality.
``$\supset$'' By item iii) of definition \ref{dfn:bisubm2} and {by Lemma \ref{lem:bisubm2}
b)}, $\Gamma_c(U,\ker d\bt_U) + \Gamma_c(U,\ker d\bs_U)$ is generated by elements which are $\varphi$-related to elements of {$\{ \overset{\rightarrow}{\boldsymbol{\alpha}} : \boldsymbol{\alpha} \in \mathcal{B}\}+
\{ \overset{\leftarrow}{\boldsymbol{\alpha}} : \boldsymbol{\alpha} \in \mathcal{B}\}$}.
The latter are $\bt$-related to elements of $\mathcal{F}_{\mathcal{B}}$. As $\bt\circ \varphi=\bt_U$, the above generators are $\bt_U$-related to elements of $\mathcal{F}_{\mathcal{B}}$.
``$\subset$'' Let $Z\in \mathfrak{X}(U)$ be $\bt_U$-related to some $X\in \mathcal{F}_{\mathcal{B}}$. There exists $\boldsymbol{\alpha}\in \mathcal{B}$ with $\rho(\boldsymbol{\alpha})=X$. Since under the identification $A\cong \ker(d\bs)|_M$ the anchor $\rho$ is identified with $d\bt|_M$, we see that $\overrightarrow{\boldsymbol{\alpha}}$ is $\bt$-related to $X$. By item ii) of definition \ref{dfn:bisubm2}, there is $W\in \mathfrak{X}(U)$ which is $\varphi$-related to $\overrightarrow{\boldsymbol{\alpha}}$. Hence $W$ is also related to $X$ under the map $\bt_{U}=\bt\circ \varphi$. Notice that
$W\in
\Gamma(U,\ker d\bs_U)$, and further $Z-W\in \Gamma(U,\ker d\bt_U)$. Writing $Z=W+(Z-W)$ we conclude the proof of the inclusion.
To show the above equality for $\bs_U^{-1}$ in place of
$\bt_U^{-1}$,
we proceed as follows: consider the above equality for the inverse bisubmersion $({U},\bar{\varphi}, \cG)$ (see definition \ref{def:inv}), and use $\bar{\bt}_{U}=\bs_U$ and $\bar{\bs}_{U}=\bt_U$.
\end{proof}
\subsubsection{Lie groupoid morphisms as bisubmersions}\label{subsub:LieGrbisub}
{If a singular subalgebroid arises from a Lie groupoid morphism (see Def. \ref{def:arises}; this includes wide Lie subalgebroids), then that morphism is automatically a bisubmersion:}
\begin{prop}\label{prop:imagerelbi}
Let $\varphi\colon \cK \to \cG$ be
a morphism of Lie groupoids covering the identity on $M$. Denote by $\mathcal{B}:=\varphi_*(\Gamma_c(Lie(\cK)))$ {the singular subalgebroid of $Lie(\cG)$ it gives rise to}. Then $\varphi\colon \cK \to \cG$ is a bisubmersion for $\mathcal{B}$.
\end{prop}
\begin{proof}
We check the properties of definition \ref{dfn:bisubm2}:
\begin{enumerate}[i)]
\item is satisfied since $\cK$ is a Lie groupoid over $M$.
\item is an immediate consequence of the Claim below, {since any element of $\mathcal{B}$ is of the form $\varphi_*X$ for some $X\in \Gamma_c(Lie(\cK))$.}
\item {for all $X \in \Gamma_c(Lie(\cK))$, the claim below implies that $\overrightarrow{X}$
lies in $\Gamma(\cK,\ker d\bs_{\cK})^{proj,{\mathcal{B}}}$. Such $\overrightarrow{X}$ span $\ker d\bs_{\cK}$ at every point of $\cK$, so we can conclude using Lemma \ref{lem:bisubm2} c).
The same argument applies to $\overleftarrow{X}$.}
\end{enumerate}
\underline{Claim:} \emph{Denote $E:=Lie(\cK)$ and $A:=Lie(\cG)$.
Let $X \in \Gamma_c(E)$, and denote $\boldsymbol{\alpha}_X := \varphi_*X\in \Gamma(A)$. Then $\overrightarrow{X}$ is
$\varphi$-related to $\overrightarrow{\boldsymbol{\alpha}_X}$. Similarly, $\overleftarrow{X}$ is
$\varphi$-related to $\overleftarrow{\boldsymbol{\alpha}_X}$. }
Consider $\overrightarrow{X}$, the right-invariant vector field on $\cK$ which restricts to $X$ along $M$. Let $k\in \cK$. We have
$$\varphi_*(\overrightarrow{X}_k )=
\varphi_*(R^{\cK}_{k})_*\left( X_{\bt(k)}\right)
= (R^{\cG}_{\varphi(k)})_*\varphi_*\left( X_{\bt(k)}\right)
= (R^{\cG}_{\varphi(k)})_* \left((\boldsymbol{\alpha}_X)_{\bt(k)}\right)
= \overrightarrow{(\boldsymbol{\alpha}_X)}_{\varphi(k)},$$
showing that $\overrightarrow{X}$ is
$\varphi$-related to $\overrightarrow{\boldsymbol{\alpha}_X}$.
Here we denote by $R^{\cG}$ the right-translation in $\cG$, and likewise by $R^{\cK}$ the right-translation in $\cK$. In the second equality we used that $\varphi$ is a groupoid morphism.
Now consider $\overleftarrow{X}$. We have $\overleftarrow{X}=-i^{\cK}_*\overrightarrow{X}$, where $i^{\cK}$ is the inversion map of the Lie groupoid $\cK$. Hence
$$\varphi_*(\overleftarrow{X})=-\varphi_*i^{\cK}_*(\overrightarrow{X})=
-i^{\cG}_*\varphi_*(\overrightarrow{X})=-i^{\cG}_* (\overrightarrow{\boldsymbol{\alpha}_X})=
\overleftarrow{\boldsymbol{\alpha}_X},$$
where in the second equality we used that $\varphi$ is a groupoid morphism, and in the third that $\overrightarrow{X}$ is
$\varphi$-related to $\overrightarrow{\boldsymbol{\alpha}_X}$.
\end{proof}
We spell out Prop. \ref{prop:imagerelbi} in the case of a wide Lie subalgebroid:
\begin{cor}\label{prop:Lierelbi} Let $A$ be an integrable Lie algebroid and $B$ a wide Lie subalgebroid of $A$. Let $\varphi\colon \cK \to \cG$ be
a morphism\footnote{The map $\varphi$ is always a (not necessarily injective) immersion, and covers the identity on $M$.} of Lie groupoids which integrates the inclusion $\iota \colon B\hookrightarrow A$.
Then $\varphi\colon \cK \to \cG$ is a bisubmersion for the
singular subalgebroid $\mathcal{B} = \Gamma_c(B)$.
\end{cor}
\begin{ex}\label{ep:groid}
For $\mathcal{B} = \Gamma_c(A)$ we obtain that ${\hbox{id}}\colon \cG \to \cG$ is a bisubmersion.
\end{ex}
\subsection{Path-holonomy bisubmersions}\label{subsubsec:phrel}
We give an explicit construction of bisubmersions starting from local generators of $\mathcal{B}$. Recall from Lemma \ref{lem:complete} that if $\boldsymbol{\alpha} \in \mathcal{B}$, the vector field $\overrightarrow{\boldsymbol{\alpha}}$ is complete. {We denote by $\mathrm{exp}_y \overrightarrow{\boldsymbol{\alpha}}$ its time-1 flow applied to the point $y\in M$.}
\begin{definition}\label{dfn:pathhol}
\begin{enumerate}
\item Let $x \in M$ and $\boldsymbol{\alpha}_1,\dots,\boldsymbol{\alpha}_n \in \mathcal{B}$ be such that $[\boldsymbol{\alpha}_1],\dots,[\boldsymbol{\alpha}_n]$ span $\mathcal{B}/I_x\mathcal{B}$. The associated {\bf {path holonomy} bisubmersion} is the map $$\varphi\colon U \to \cG, \;\;(\lambda,y)\mapsto \mathrm{exp}_y \sum \lambda_i \overrightarrow{\boldsymbol{\alpha}_i},$$ {where $U$ is a neighborhood of $(0,x)$ in $\ensuremath{\mathbb R}^n \times M$ such that $\bt\circ \varphi$ is\footnote{Such a neighborhood exists since
$\varphi$ is a submersion at $(0,x)$.}
a submersion}, and we write $\lambda=(\lambda_1,\dots,\lambda_n)\in \ensuremath{\mathbb R}^n$.
\item We say that $(U,\varphi, \cG)$ is {\bf minimal} (at $x$) if $[\boldsymbol{\alpha}_1],\dots,[\boldsymbol{\alpha}_n]$ is a basis of $\mathcal{B}/I_x\mathcal{B}$.
\end{enumerate}
\end{definition}
\begin{ex}\label{ex:sfolpair}
Consider the case when $A=TM$, so that $\mathcal{F}:=\mathcal{B}$ is a singular foliation on $M$.
Fix $x\in M$ and let $X_1,\dots,X_n\in \mathcal{F}$ so that they induce a basis of $\mathcal{F}/I_x\mathcal{F}$.
We make two choices of Lie groupoid $\cG$ integrating the Lie algebroid $TM$.
\begin{itemize}
\item[a)] As $\cG$ let us choose the pair groupoid $M\times M$. Then the {path holonomy} bisubmersion we obtain is
$$\varphi\colon U \to M\times M,\;\; (y,\lambda)\mapsto (\mathrm{exp}_y (\sum \lambda_i {X_i}), y).$$
{Notice that under the bijection given by Prop. \ref{prop:equivbi}, it corresponds to the path holonomy bisubmersion $(U, \bt, \bs)$ for the singular foliation $\mathcal{F}$ (induced by $X_1,\dots,X_n$) as in \cite{AndrSk}.}
\item[b)] Now as $\cG$ we take the fundamental groupoid $\Pi(M)$. {Then the {path holonomy} bisubmersion we obtain is
}
$$\widetilde{\varphi}\colon U \to \Pi(M),\;\; (y,\lambda)\mapsto
\text{homotopy class of $\gamma_{(y,\lambda)}$},$$ where
$\gamma_{(y,\lambda)}$ is the path $[0,1] \ni t\mapsto \mathrm{exp}_y (t\sum \lambda_i {X_i})$, {{\it i.e.}\; the integral curve of $\sum \lambda_i {X_i}$ that starts at $y$}.
{To see this, use the canonical Lie groupoid isomorphism $\tilde{M}\times_{\pi_1(M)}\tilde{M}\cong \Pi(M)$, where $\tilde{M}$ is the universal covering space of $M$, and take the unique lift of $\sum \lambda_i {X_i}$ to a vector field on $\tilde{M}$.}
\end{itemize}
{Notice that the above two path holonomy bisubmersions are related} by $\varphi=\pi\circ\widetilde{\varphi}$, where $\pi\colon \Pi(M) \to M \times M$ is the morphism of Lie groupoids sending the homotopy class of a path $\gamma$ in $M$ to $(\gamma(0),\gamma(1))$. We will elaborate on this example in \S\ref{section:vary}.
\end{ex}
Let us now show that the object defined in definition \ref{dfn:pathhol} is indeed a bisubmersion in the sense of definition \ref{dfn:bisubm2}. To this aim, recall that $\bs_U:=\bs\circ\varphi$ and $\bt_U:=\bt\circ\varphi \colon U \to M$.
\begin{prop}\label{prop:pathhol}
Let $x \in M$, let $\boldsymbol{\alpha}_1,\dots,\boldsymbol{\alpha}_n \in \mathcal{B}$ be such that $[\boldsymbol{\alpha}_1],\dots,[\boldsymbol{\alpha}_n]$ span $\mathcal{B}/I_x\mathcal{B}$, and let
$(U,\varphi, \cG)$
be as in definition \ref{dfn:pathhol}. Then $(U,\varphi,\cG)$ is a bisubmersion.
\end{prop}
\begin{remark}\label{rem:FB}
Recall from \S \ref{sec:twofols} that there are two foliations associated to $\mathcal{B}$.
In the proof of proposition \ref{prop:pathhol} we will use the following links between $(U,\varphi, \cG)$
(as in definition \ref{dfn:pathhol})
and bisubmersions for these two foliations:
\begin{itemize}
\item[a)] For the singular foliation
$\mathcal{F}_{\mathcal{B}}$ on $M$: $$(U,\bt_U,\bs_U)$$ is a bisubmersion for $\mathcal{F}_{\mathcal{B}}$ (in the sense of \cite[definition 2.1]{AndrSk}). Indeed, it is the ``path-holonomy bisubmersion'' of $\mathcal{F}_{\mathcal{B}}$
constructed using the generators $X_i=\rho(\boldsymbol{\alpha}_i)$ of the foliation $\mathcal{F}_{\mathcal{B}}$ on $M$.
This is proven\footnote{Once we establish proposition \ref{prop:pathhol}, an alternative proof is obtained applying lemma \ref{lem:usualbi}.} exactly as in \cite[Prop. 2.10 a)]{AndrSk}, which holds since
the $X_i$ define a spanning set of $(\mathcal{F}_{\mathcal{B}})_x:=\mathcal{F}_{\mathcal{B}}/I_x\mathcal{F}_{\mathcal{B}}$ (it does not matter that they do not define a basis in general).
\item[b)]
For the singular foliation {$\overrightarrow{\mathcal{B}}$} on $\cG$: a set of local generators of {$\overrightarrow{\mathcal{B}}$} is $\overrightarrow{\boldsymbol{\alpha}_1},\dots,\overrightarrow{\boldsymbol{\alpha}_n}$, by Lemma \ref{lem:basis}. Let $$(V, \bt_V,\bs_V)$$ be the path-holonomy bisubmersion {(in the sense of \cite{AndrSk})} associated to these generators near $x$. Notice that, as $M$ embeds in $\cG$ as the identity section, we can view $U\subset \ensuremath{\mathbb R}^n\times M$ as a subset of $V\subset \ensuremath{\mathbb R}^n\times \cG$. Restricting the source and target map of $V$ we see that $\bs_U=\bs_V|_U\colon U\to M$ (the vector bundle projection) and $\varphi=\bt_V|_U\colon U\to \cG$.
\end{itemize}
\end{remark}
\begin{proof}[Proof of proposition \ref{prop:pathhol}]
We show that the three conditions listed in Definition \ref{dfn:bisubm2} hold.
Condition i) holds since $(U,\bt_U,\bs_U)$ is a bisubmersion for the foliation $\mathcal{F}_{\mathcal{B}}$, {by Remark \ref{rem:FB} a)}.
In the rest of the proof we use the bisubmersion $(V,\bt_V,\bs_V)$ for the foliation $\overrightarrow{\mathcal{B}}$ described in Remark \ref{rem:FB} b).
We show the first part of condition ii), {\it i.e.}\;, that for any $\boldsymbol{\alpha}\in \mathcal{B}$, there is a vector field on $U$
which is $\varphi$-related to $\overrightarrow{\boldsymbol{\alpha}}$. This holds because there exists $\tilde{Z}\in \Gamma(V, \ker d\bs_V)$ which $\bt_V$-projects to $\overrightarrow{\boldsymbol{\alpha}}$ (for example by \cite[Prop. 2.10(b)]{AndrSk}), and therefore $\tilde{Z}|_U$ is tangent to $U$ and $\varphi$-related to $\overrightarrow{\boldsymbol{\alpha}}$.
We show the first equation in condition iii).
We do so using Lemma \ref{lem:bisubm2} b).
Let $Z\in \Gamma_c(U, \ker d\bs_U)$, and take an extension $\tilde{Z}\in \Gamma_c(V, \ker d\bs_V)$ to $V$.
By proposition \ref{prop:equivbi}
{$(\bt_V,\bs_V)\colon V\to \cG\times \cG$ is a bisubmersion for the singular foliation $\overrightarrow{\mathcal{B}}$ regarded as a singular subalgebroid. Applying Lemma \ref{lem:bisubm2} b) to it}
we see that
$\tilde{Z}$ is a $C_c^{\infty}(V)$-linear combination of vector fields which are $(\bt_V, \bs_V)$-related to elements of $\overrightarrow{\mathcal{B}}$. In other words, $\tilde{Z}=\sum \tilde{g_i}\tilde{Y_i}$ where $\tilde{g_i}\in C_c^{\infty}(V)$ and the vector fields $\tilde{Y_i}$ lie in $\Gamma(V,\ker d\bs_V)$ and are $\bt_V$-related to some element of $\overrightarrow{\mathcal{B}}$. In particular,
$(\tilde{Y_i})|_{U}$ is tangent to $U$ and, since $\varphi=\bt_V|_U$, it is $\varphi$-related to some element of $\overrightarrow{\mathcal{B}}$. Hence $Z=\sum_i (\tilde{g_i})|_U(\tilde{Y_i})|_{U}$ lies in $\varphi^{-1}(\overrightarrow{\mathcal{B}})$.
{The second parts of conditions ii) and iii) in Definition \ref{dfn:bisubm2} are proven analogously to the above.}
\end{proof}
{The following Lemma allows us to simplify some of the later constructions and proofs; see Rem. \ref{rem:invphbisub}.}
\begin{lemma}\label{lem:kappa} Let $\boldsymbol{\alpha}_1,\dots,\boldsymbol{\alpha}_n \in \mathcal{B}$ and let $(U,\varphi, \cG)$ be {a path holonomy bisubmersion}
as in Definition \ref{dfn:pathhol}.
There exists a diffeomorphism $\kappa$ making the following diagram commute:
\begin{equation*}
\xymatrix{
U\ar[rd]^{\varphi} \ar[rr]^{\kappa} & & U\ar[ld]^{i\circ \varphi} \\
&\cG& }
\end{equation*}
In particular, $\bs_U\circ \kappa=\bt_U$ and $\bt_U\circ \kappa=\bs_U$.
\end{lemma}
\begin{proof} Consider the map
$$\kappa \colon U\to U, \kappa(\lambda,x)=(-\lambda, \bt_U(\lambda,x)).$$
One computes easily that $\kappa^2=Id_{U}$, hence $\kappa$ is a diffeomorphism\footnote{The idea of considering $\kappa$ comes from \cite[Prop. 2.10 a)]{AndrSk}, where $\kappa$ is shown to satisfy
$\bs_U\circ \kappa=\bt_U$ and $\bt_U\circ \kappa=\bs_U$.}.
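To make this computation explicit, write $X:=\sum \lambda_i \rho(\boldsymbol{\alpha}_i)$ (a shorthand we introduce only here); since each $\overrightarrow{\boldsymbol{\alpha}_i}$ is $\bt$-related to $\rho(\boldsymbol{\alpha}_i)$, we have $\bt_U(\lambda,y)=\mathrm{exp}_y X$, and therefore
$$\kappa^2(\lambda,y)=\kappa\left(-\lambda,\, \bt_U(\lambda,y)\right)=\left(\lambda,\, \mathrm{exp}_{\mathrm{exp}_y X}(-X)\right)=(\lambda,y).$$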
To check that the diagram commutes, we fix $(\lambda,x)\in U$ and compute
$$(i\circ \varphi\circ \kappa)(\lambda,x)=i(\mathrm{exp}_{\bt_U(\lambda,x)}(-\sum \lambda_i\overrightarrow{\boldsymbol{\alpha}_i})).$$ By definition we have $\varphi(\lambda,x)=\mathrm{exp}_x(\sum \lambda_i\overrightarrow{\boldsymbol{\alpha}_i})$. We need to show that these two points of $\cG$ agree, or equivalently that
\begin{equation}\label{eq:oneminusone}
\mathrm{exp}_{\bt_U(\lambda,x)}(-\sum \lambda_i\overrightarrow{\boldsymbol{\alpha}_i})\cdot \mathrm{exp}_x(\sum \lambda_i\overrightarrow{\boldsymbol{\alpha}_i})=1_x.
\end{equation}
We use the shorthand notation $\boldsymbol{\alpha}:=\sum \lambda_i{\boldsymbol{\alpha}_i}$.
For all $\epsilon\in \ensuremath{\mathbb R}$, define a section of $\bs$ by $\psi_{\epsilon}\colon M\to \cG, \psi_{\epsilon}(x)=\mathrm{exp}_x(\epsilon \overrightarrow{\boldsymbol{\alpha}})$. Its image defines a bisection, at least for $\epsilon$ small enough. The right invariance of $\overrightarrow{\boldsymbol{\alpha}}$ implies that
the above family of bisections satisfies\footnote{Recall from \cite[Prop. 1.4.2, p.22]{MK2} that the product of bisections is defined as
$(\psi_{\epsilon}\ast \psi_{\sigma})(y)=\psi_{\epsilon}(\bt(\psi_{\sigma}(y)))\cdot \psi_{\sigma}(y)$ for all $y\in M$.} $\psi_{\epsilon}\ast \psi_{\sigma}= \psi_{\epsilon+\sigma}$.
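In detail, writing $\phi^{\epsilon}$ for the time-$\epsilon$ flow of $\overrightarrow{\boldsymbol{\alpha}}$ (so $\psi_{\epsilon}(y)=\phi^{\epsilon}(1_y)$), right invariance gives $\phi^{\epsilon}(g)=\phi^{\epsilon}(1_{\bt(g)})\cdot g$ for all $g\in \cG$, whence
$$(\psi_{\epsilon}\ast \psi_{\sigma})(y)=\psi_{\epsilon}(\bt(\psi_{\sigma}(y)))\cdot \psi_{\sigma}(y)=\phi^{\epsilon}(\psi_{\sigma}(y))=\phi^{\epsilon}(\phi^{\sigma}(1_y))=\psi_{\epsilon+\sigma}(y).$$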
In particular we obtain
$$(\psi_{-1}\ast\psi_1)(x)=\psi_0(x)=1_x,$$ which is exactly eq. \eqref{eq:oneminusone}.
\end{proof}
\subsection{Operations involving bisubmersions}\label{subsec:operations}
We explain how to handle bisubmersions algebraically. This will be used in the construction of the holonomy groupoid in \S \ref{section:HB}. To this end, we fix a Lie groupoid $\cG$ and a singular subalgebroid $\mathcal{B}$ of its Lie algebroid.
{Recall that, given a singular foliation $\mathcal{F}$, in Prop. \ref{prop:equivbi} we established a bijection between bisubmersions for $\mathcal{F}$ in the sense of \cite{AndrSk} and
bisubmersions for $\mathcal{F}$ regarded as a singular subalgebroid (Def. \ref{dfn:bisubm2}). All the operations we define in this subsection, in the special case of singular foliations, correspond under the above bijection with the operations introduced in \cite{AndrSk}.}
\subsubsection{Morphisms}
\begin{definition}\label{def:morph}
{Let $(U_i,\varphi_i,\cG)$, $i=1,2$ be bisubmersions for $\mathcal{B}$.} A {\bf morphism} of bisubmersions is a map $f \colon U_1\to U_2$ such that
$\varphi_1 =\varphi_2\circ f$.
\begin{equation*}
\xymatrix{
U_1 \ar[rd]_{\varphi_1 } \ar[rr]^{f} & & U_2 \ar[ld]^{\varphi_2} \\
&\cG&\\
}
\end{equation*}
\end{definition}
There is a simple way to construct new bisubmersions out of old ones, which we will use in the sequel.
\begin{lemma}\label{lem:UV}
Let $(V,\varphi,\cG)$ be a bisubmersion for $\mathcal{B}$. Let $U$ be a manifold and $p\colon U\to V$ be a submersion. Then
$(U,\varphi\circ p,\cG)$ is a bisubmersion for $\mathcal{B}$.
Further, $p$ is a morphism of bisubmersions.
\begin{equation*}
\xymatrix{
U \ar[r]^{p} \ar@{-->}[dr] & V \ar[d]^{\varphi} \\
& \cG }
\end{equation*}
\end{lemma}
\begin{remark}\label{rem:pq}
If $q\colon V\to U$ is a section of $p$ ({\it i.e.}\;, $p\circ q=Id_V$), then $q$ is also a morphism of bisubmersions. Notice that such a global section might not exist, but local sections of $p$ exist around every point of the open subset $p(U)$ of $V$.
\end{remark}
\begin{proof}
We check that $(U,\varphi\circ p,\cG)$ satisfies the conditions of definition \ref{dfn:bisubm2}. Since $p$ is a submersion, i) clearly holds, and ii) holds too since $p$ allows us to lift vector fields on $V$ to vector fields on $U$. For iii),
by Lemma \ref{lem:bisubm2} {c)} it suffices to show that
\begin{equation*}
\mathcal{S}_U:=\{W\in \Gamma(U,\ker d\bs_U): \text{$W$ is $(\varphi\circ p)$-related to an element
{$\overrightarrow{\boldsymbol{\alpha}}$, $\boldsymbol{\alpha}\in \mathcal{B}$}
}\}
\end{equation*}
{spans $\ker d_u\bs_U$ at every $u\in U$.}
By assumption,
\begin{equation*}
\mathcal{S}_V:=\{Z\in \Gamma(V,\ker d\bs_V): \text{$Z$ is $\varphi$-related to an element
{$\overrightarrow{\boldsymbol{\alpha}}$, $\boldsymbol{\alpha}\in \mathcal{B}$}}\}
\end{equation*}
{spans $\ker d_v\bs_V$ at every $v\in V$.}
Since $\ker d\bs_U=p_*^{-1}(\ker d\bs_V)$,
taking all lifts (via $p$) of all elements of $\mathcal{S}_V$ we obtain a subset of $\mathcal{S}_U$ which contains $\Gamma_c(\ker(p_*))$ and {spans $\ker d_u\bs_U$ at every $u\in U$}.
Finally, $p$ is a morphism of bisubmersions by construction.
\end{proof}
\subsubsection{Inverses}
\begin{definition}\label{def:inv}
Let $(U,\varphi,\cG)$ be a bisubmersion for $\mathcal{B}$. Its {\bf inverse} is ${\bar{\varphi}:=}i\circ \varphi \colon U \to \cG$, where $i\colon \cG\to \cG, i(g)=g^{-1}$. We denote it $({U},\bar{\varphi}, \cG)$.
\end{definition}
\begin{prop}\label{prop:inv}
The inverse of a bisubmersion is a bisubmersion.
\end{prop}
\begin{proof}
Given a bisubmersion $(U,\varphi,\cG)$, first notice that $\bar{\bs}_U := \bs\circ \bar{\varphi} = \bt_U$ and likewise $\bar{\bt}_U := \bt\circ \bar{\varphi}=\bs_U$ are submersions. For condition ii) of definition \ref{dfn:bisubm2}, let $\boldsymbol{\alpha} \in \mathcal{B}$. Since this condition holds for $(U,\varphi,\cG)$, there is $Z\in \mathfrak{X}(U)$ which is $\varphi$-related to $\overrightarrow{\boldsymbol{\alpha}}$.
Hence $Z$ is $i\circ \varphi$-related to $i_*\overrightarrow{\boldsymbol{\alpha}}=-\overleftarrow{\boldsymbol{\alpha}}$, and therefore $-Z$ is $\bar{\varphi}$-related to $\overleftarrow{\boldsymbol{\alpha}}$. Similarly, there is $W\in \mathfrak{X}(U)$ which is $\varphi$-related to $\overleftarrow{\boldsymbol{\alpha}}$, and $-W$ is $\bar{\varphi}$-related to $\overrightarrow{\boldsymbol{\alpha}}$.
For condition iii) of definition \ref{dfn:bisubm2} notice that the inversion map $i \colon \cG \to \cG$ gives rise to an isomorphism $i_* \colon \overrightarrow{\mathcal{B}} \to \overleftarrow{\mathcal{B}}$, whence $\bar{\varphi}^{-1}(\overrightarrow{\mathcal{B}})=\varphi^{-1}(\overleftarrow{\mathcal{B}})$ and $\bar{\varphi}^{-1}(\overleftarrow{\mathcal{B}})=\varphi^{-1}(\overrightarrow{\mathcal{B}})$. We conclude because $(U,\varphi,\cG)$ satisfies condition iii) of definition \ref{dfn:bisubm2} and $\bar{\bs}_U = \bt_U, \bar{\bt}_U = \bs_U$.
\end{proof}
\begin{remark}\label{rem:invphbisub}
{If $U$ is a path holonomy bisubmersion, then its inverse bisubmersion is isomorphic to $U$ itself. This follows from Lemma \ref{lem:kappa}.}
\end{remark}
\subsubsection{Compositions}
\begin{definition}\label{def:comp}
Let $(U_j,\varphi_j,\cG)$ be bisubmersions for $\mathcal{B}$, $j=1,2$. Their {\bf composition} is
$$m \circ (\varphi_1,\varphi_2) \colon U_1\times_{{\bs_{U_1}},\bt_{U_2}}U_2\to \cG\times_{\bs,\bt}\cG\to \cG$$
where $m$ is the groupoid multiplication on $\cG$ and $\bs_{U_1} = \bs\circ\varphi_1$ and $\bt_{U_2} = \bt\circ\varphi_2$. We denote it $(U_1 \circ U_2,\varphi_1\cdot\varphi_2,\cG)$.
\end{definition}
\begin{equation*}
\xymatrix{
&U_1\times_{{\bs_{U_1}},\bt_{U_2}}U_2\ar[d]^{(\varphi_1,\varphi_2)} & \\
&\cG\times_{\bs,\bt}\cG \ar[d]^{m}&\\
&\ar[dr]^{\bs}\cG\ar[dl]_{\bt}&\\
M&&M
}
\end{equation*}
Notice that if the open subsets $\bs_{U_1}(U_1)$ and $\bt_{U_2}(U_2)$ of $M$ are disjoint, then the composition $U_1 \circ U_2$ is the empty set.
\begin{prop}\label{prop:compo}
The composition of two bisubmersions is a bisubmersion.
\end{prop}
\begin{proof}
Consider two bisubmersions $(U_i,\varphi_i,\cG)$, $i=1,2$.
For condition i) of definition \ref{dfn:bisubm2} consider the two natural maps
$$\bt_{12}, \bs_{12} \colon U_1 \circ U_2 \to M$$ given by
$\bt_{12} :=\bt \circ (\varphi_1\cdot\varphi_2)= \bt_{U_1} \circ p_1$ and
$\bs_{12} := \bs \circ (\varphi_1\cdot\varphi_2) =
\bs_{U_2} \circ p_2$, where $p_i\colon U_1 \circ U_2 \to U_i$ are the projections. They are both submersions because the $p_i$ are submersions and because $\bt_{U_1}$ and $\bs_{U_2}$ are submersions.
For condition ii), let $\boldsymbol{\alpha} \in \mathcal{B}$ and take a vector field $Z_1\in \mathfrak{X}(U_1)$ which is $\varphi_1$-related to $\overrightarrow{\boldsymbol{\alpha}}$ (it exists since condition
ii) holds for the bisubmersions $(U_1,\varphi_1,\cG)$). Notice that $(\overrightarrow{\boldsymbol{\alpha}},0)$ restricts to a vector field on $\cG\times_{\bs,\bt}\cG$ which is $m$-related to $\overrightarrow{\boldsymbol{\alpha}}$.
Hence the vector field $(Z_1,0)$ {on $U_1 \circ U_2$} is related by $\varphi_1\cdot \varphi_2=m\circ (\varphi_1, \varphi_2)$ to $\overrightarrow{\boldsymbol{\alpha}}$. Similarly, taking $W_2\in \mathfrak{X}(U_2)$ which is $\varphi_2$-related to $\overleftarrow{\boldsymbol{\alpha}}$, we see that $(0,W_2)$ is $\varphi_1\cdot \varphi_2$-related to $\overleftarrow{\boldsymbol{\alpha}}$.
Now we prove that $(U_1 \circ U_2,\varphi_1\cdot\varphi_2,\cG)$ satisfies condition iii) of definition \ref{dfn:bisubm2}. For simplicity we show only the first of the equalities appearing there, namely $$(\varphi_1\cdot\varphi_2)^{-1}(\overrightarrow{\mathcal{B}})=\Gamma_c(U_1\circ U_2;\ker d\bs_{12})$$ (the second is proven similarly).
First notice that there are distinguished elements of $\Gamma_c(U_1\circ U_2;\ker d\bs_{12})$:
\begin{enumerate}
\item[(*)] $(W_1,Z_2)\in \mathfrak{X}(U_1\circ U_2)$ such that $W_1$ is $\varphi_1$-related to $i_*\overrightarrow{\boldsymbol{\alpha}}$, and $Z_2$ is $\varphi_2$-related to $\overrightarrow{\boldsymbol{\alpha}}$, for some $\boldsymbol{\alpha}\in \mathcal{B}$,
\item[(**)] $(Z_1,0)$ such that $Z_1$ is $\varphi_1$-related to $\overrightarrow{\boldsymbol{\alpha}}$ for some $\boldsymbol{\alpha}\in \mathcal{B}$.
\end{enumerate}
{Notice that
both families $(*)$ and $(**)$ consist of vector fields which are $\varphi_1\cdot\varphi_2$-related to $\overrightarrow{\boldsymbol{\alpha}}$ for some $\boldsymbol{\alpha}\in \mathcal{B}$.
Indeed, for any $\boldsymbol{\alpha}\in \mathcal{B}$, $(i_*\overrightarrow{\boldsymbol{\alpha}},\overrightarrow{\boldsymbol{\alpha}})$ is $m$-related to the zero vector field on $\cG$, and
$(\overrightarrow{\boldsymbol{\alpha}},0)$ is $m$-related to $\overrightarrow{\boldsymbol{\alpha}}$, where $m$ is the multiplication.
Hence Lemma \ref{lem:bisubm2} c), together with the following claim, finishes the proof.}
\underline{Claim:}
\emph{The union of the families of vector fields $(*)$ and $(**)$, evaluated at any point $(u_1,u_2)\in U_1\circ U_2$, span the kernel of $d\bs_{12}$ at $(u_1,u_2)$.}\\
Fix a vector in the kernel of $d\bs_{12}$, that is, $(X_1,X_2)\in T_{u_1}U_1\times T_{u_2}U_2$ such that
$$d\bs_{U_1}(X_1)=d\bt_{U_2}(X_2) \text{ and }d\bs_{U_2}(X_2)=0.$$
Since $(\varphi_2)^{-1}(\overrightarrow{\mathcal{B}})=\Gamma_c(\ker d\bs_{U_2})$, {by Lemma \ref{lem:bisubm2} c)} we can extend $X_2$ to a vector field $Z_2\in \mathfrak{X}(U_2)$ which is $\varphi_2$-related to
$\overrightarrow{\boldsymbol{\alpha}}$ for some $\boldsymbol{\alpha}\in \mathcal{B}$. Since $U_1$ satisfies condition ii) in definition \ref{dfn:bisubm2}, there is $W_1\in \mathfrak{X}(U_1)$ which is $\varphi_1$-related to $i_*\overrightarrow{\boldsymbol{\alpha}}=-\overleftarrow{\boldsymbol{\alpha}}$. Notice that $(W_1,Z_2)$ belongs to the family of vector fields $(*)$.
Further, the map $d_{u_1}\bs_{U_1}$ sends both $X_1$ and $(W_1)|_{u_1}$ to the same vector $d_{u_2}\bt_{U_2}(X_2)$. This means that $X_1-(W_1)|_{u_1}$ lies in $\ker d_{u_1}\bs_{U_1}$, and therefore
can be extended to a vector field $Z_1$ lying in $ \Gamma(U_1,\ker d\bs_{U_1})$. Since $(\varphi_1)^{-1}(\overrightarrow{\mathcal{B}})=\Gamma_c(U_1,\ker d\bs_{U_1})$, {by Lemma \ref{lem:bisubm2} c)} we can choose $Z_1$ so that it is $\varphi_1$-related to {some $\overrightarrow{\boldsymbol{\alpha}'}$ with $\boldsymbol{\alpha}'\in \mathcal{B}$}, {\it i.e.}\; so that $(Z_1,0)$ belongs to the family of vector fields $(**)$. Now $$(X_1,X_2)=((W_1)|_{u_1},X_2)+(X_1-(W_1)|_{u_1},0)=
\left((W_1,Z_2)+(Z_1,0)\right)|_{(u_1,u_2)},$$
proving the claim.
\end{proof}
\begin{ex}\label{G2bisub} Let $\mathcal{B}=\Gamma_c(A)$, where $A=Lie(\cG)$. We saw in example \ref{ep:groid} that $(\cG,Id_\cG,\cG)$ is a bisubmersion for $\mathcal{B}$. From proposition \ref{prop:compo} we deduce that
$(\cG\times_{\bs,\bt}\cG, m, \cG)$ is also a bisubmersion for $\mathcal{B}$. The multiplication $m\colon \cG\times_{\bs,\bt}\cG \to \cG$ is a morphism of bisubmersions.
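Indeed, the morphism condition of definition \ref{def:morph} is immediate to verify in this example: for a composable pair $(g,h)\in \cG\times_{\bs,\bt}\cG$ we have
$$(\varphi_1\cdot\varphi_2)(g,h)=m\big(Id_\cG(g),Id_\cG(h)\big)=gh=(Id_\cG\circ m)(g,h).$$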
\end{ex}
\subsubsection{Bisections}
\begin{definition}\label{def:bisec}
Let $(U,\varphi,\cG)$ be a bisubmersion for $\mathcal{B}$.
\begin{enumerate}
\item A \textbf{bisection} of $(U,\varphi,\cG)$ is a locally closed submanifold $\mathbf{b}$ of $U$ such that the restrictions of both $\bs_U$ and $\bt_U$ to $\mathbf{b}$ are diffeomorphisms from $\mathbf{b}$ onto open subsets of $M$.
\item Let $u \in U$ and let $\mathbf{c}$ be a bisection of the Lie groupoid $\cG$ with $\varphi(u)\in \mathbf{c}$.
We say that $\mathbf{c}$ is {\bf carried} by $(U,\varphi,\cG)$ at $u$ if there exists a bisection $\mathbf{b}$ of $U$ such that $u \in \mathbf{b}$ and $\varphi(\mathbf{b})$ is an open subset of $\mathbf{c}$.
\end{enumerate}
\end{definition}
{Notice that if $\mathbf{b}$ is a bisection of $(U,\varphi,\cG)$, then
$\varphi(\mathbf{b})$ is a bisection of the Lie groupoid $\cG$, and further $\varphi|_{\mathbf{b}}\colon \mathbf{b}\to \varphi(\mathbf{b})$ is a diffeomorphism.
This is best seen viewing bisections of $(U,\varphi,\cG)$ as images of maps $b$ which are sections of $\bs_U\colon U\to M$ such that $\bt_U \circ b$ is a local diffeomorphism of $M$.
}
The existence of bisections through every $u \in U$ of a bisubmersion $(U,\varphi,\cG)$ is proven exactly as in \cite[Prop. 2.7]{AndrSk}. Notice that the proof uses only the fact that $\bt_U, \bs_U\colon U \to M$ are submersions. {We state the following proposition without proof.}
\begin{prop}
Let $(U,\varphi,\cG)$ and $(U_i,\varphi_i,\cG)$, $i=1,2$ be bisubmersions.
\begin{enumerate}
\item Let $u \in U$ and let $\mathbf{c}$ be a local bisection of $\cG$ carried by $(U,\varphi,\cG)$ at $u$. Then $\mathbf{c}^{-1}$ is carried by the inverse bisubmersion $({U},\bar{\varphi},\cG)$ at $u$.
\item Let $u_i \in U_i$, $i=1,2$ be such that $\bs_{U_1}(u_1)=\bt_{U_2}(u_2)$ and let $\mathbf{c}_i$ be local bisections of $\cG$ carried by $(U_i,\varphi_i,\cG)$ at $u_i$ respectively, $i=1,2$. Then $\mathbf{c}_1\cdot\mathbf{c}_2$ is carried by the composition $(U_1\circ U_2,\varphi_1 \cdot \varphi_2,\cG)$ at $(u_1,u_2)$.
\end{enumerate}
\end{prop}
\subsubsection{{Modifying a bisubmersion by a bisection}}\label{modify}
Let $(U, \bt_U, \bs_U)$ be a bisubmersion for a {singular foliation} $(M,\mathcal{F})$ (in the sense of \cite{AndrSk}, see \S \ref{section:usualbisub}), and $\mathbf{b}$ a bisection of $U$.
An easy but important fact is that the diffeomorphism carried by the bisection, namely $\phi\colon M\to M, \bs_U(u)\mapsto \bt_U(u)$ where $u\in \mathbf{b}$, is an \emph{automorphism}\footnote{To see this,
notice that $(\bt_U)|_{\mathbf{b}}$ is an isomorphism of foliated manifolds between
the submanifold $\mathbf{b}$ -- endowed with the restriction of the pullback foliation
$\bt_U^{-1}(\mathcal{F})$ -- and $(M,\mathcal{F})$. The same holds for $\bs_U$ in place of $\bt_U$, and the result follows since
$\bt_U^{-1}(\mathcal{F})=\bs_U^{-1}(\mathcal{F})$.}
of the singular foliation $\mathcal{F}$, {\it i.e.}\; $\phi_* \mathcal{F}=\mathcal{F}$. This fact was used in \cite[\S 2.3]{AndrSk} to obtain a new bisubmersion for $\mathcal{F}$ out of $(U, \bt_U, \bs_U)$ and $\mathbf{b}$.
We now derive an analogous statement for bisubmersions of singular subalgebroids. This will be used in the proof of Corollary \ref{cor:crucial} and in Lemma \ref{lem:sm1}.
\begin{prop}\label{prop:preserveb}
Let $(U,\varphi,\cG)$ be a bisubmersion for the singular subalgebroid $\mathcal{B}$, and let $\mathbf{b}$ be a bisection of $U$. Denote by $\mathbf{c}:=\varphi(\mathbf{b})$ the induced bisection of the Lie groupoid $\cG$, and by $L_{\mathbf{c}}\colon \cG\to \cG$ the
left multiplication by $\mathbf{c}$. Denote by $\mathbf{c}^{-1}$ the image of $\mathbf{c}$ under the inversion map. Then
\begin{itemize}
\item [a)] $(L_{\mathbf{c}})_*\overrightarrow{\mathcal{B}}=\overrightarrow{\mathcal{B}}$.
\item [b)] $(U,L_{\mathbf{c}^{-1}}\circ \varphi,\cG)$ is also a bisubmersion for $\mathcal{B}$.
\end{itemize}
\end{prop}
\begin{remark}\label{rem:rephrasea} Item a) above can be rephrased saying that the singular subalgebroid $\mathcal{B}$ is preserved by the Lie algebroid automorphism of
$A=\ker(\bs_*|_M)$ induced by $\mathbf{c}$.
Recall that the conjugation by the bisection $\mathbf{c}$ is a Lie groupoid automorphism of $G$, which differentiates to the Lie algebroid automorphism $A\to A, a_x\mapsto (R_{{\mathbf{c}(x)}^{-1}})_*(L_{\mathbf{c}})_*(a_x)$, where $\mathbf{c}(x)$ is the unique point of $\mathbf{c}$ with source $x$. The fact that this Lie algebroid automorphism preserves $\mathcal{B}$ is an immediate consequence of Prop. \ref{prop:preserveb} a), upon using the facts that
vector fields of the form $(L_{\mathbf{c}})_*(\overrightarrow{\boldsymbol{\alpha}})$ are right-invariant and that the right invariant vector fields in the global hull of $\overrightarrow{\mathcal{B}}$ are exactly the right-translates of elements of $\mathcal{B}$.
\end{remark}
\begin{proof}
a)
We view the bisection $\mathbf{b}$ as a map $\mathbf{b}\colon M\to U$ which is a right-inverse to $\bs_U$.
In Prop. \ref{lem:bisubrarb} we established that $\widehat{U}:=U \times_{\bs_U, \bt}G$, with the target and source maps indicated there, is a bisubmersion (in the sense of \cite{AndrSk}) for the singular foliation $\overrightarrow{\mathcal{B}}$ on $G$.
From $\mathbf{b}$ we obtain a bisection of $\widehat{U}$, namely $$\widehat{\mathbf{b}}\colon G\to U \times_{\bs_U, \bt}G,\;\; g \mapsto (\mathbf{b}(\bt(g)),g).$$
By the text just before Prop. \ref{prop:preserveb}, the diffeomorphism $\bt_{\widehat{U}}\circ \widehat{\mathbf{b}}\colon G\to G$ carried by this bisection
is an automorphism of the singular foliation $\overrightarrow{\mathcal{B}}$. This diffeomorphism
reads $g\mapsto \mathbf{c}(\bt(g))\cdot g$, {\it i.e.}\; it is exactly $L_{\mathbf{c}}$.
b) We check that the fact that $(U,\varphi,\cG)$ is a bisubmersion for $\mathcal{B}$ implies that $(U,L_{\mathbf{c}^{-1}}\circ \varphi,\cG)$ satisfies
the three conditions in Def. \ref{dfn:bisubm2}.
Condition i) holds because
$\bs\circ L_{\mathbf{c}^{-1}} =\bs$, and $\bt\circ L_{\mathbf{c}^{-1}}=\phi \circ\bt$ for
some diffeomorphism $\phi$ of $M$.
For ii), let $\boldsymbol{\alpha} \in \mathcal{B}$. By item a) (or more precisely Remark \ref{rem:rephrasea}) we have $(L_{\mathbf{c}})_*\overrightarrow{\boldsymbol{\alpha}}=\overrightarrow{\boldsymbol{\alpha}'}$ for some $\boldsymbol{\alpha}'\in \mathcal{B}$.
Hence any $Z\in\mathfrak{X}(U)$
which is $\varphi$-related to $\overrightarrow{\boldsymbol{\alpha}'}$ is also $(L_{\mathbf{c}^{-1}}\circ \varphi)$-related to $\overrightarrow{\boldsymbol{\alpha}}$. The second part of condition ii) holds trivially since $(L_{\mathbf{c}})_*\overleftarrow{\boldsymbol{\alpha}}=\overleftarrow{\boldsymbol{\alpha}}$.
Finally, the first part of condition iii) holds because
$(L_{\mathbf{c}^{-1}}\circ \varphi)^{-1}(\overrightarrow{\mathcal{B}})=\varphi^{-1} (L_{\mathbf{c}})_*(\overrightarrow{\mathcal{B}})=\varphi^{-1} (\overrightarrow{\mathcal{B}})$, where the last equation holds by item a). The second part of condition iii) holds similarly.
\end{proof}
{\begin{remark}\label{rem:ccinv} A variation of Prop. \ref{prop:preserveb} b) is the following: under
the same hypotheses of the proposition, $(U,L_{\mathbf{c}}\circ \varphi,\cG)$ is also a bisubmersion for $\mathcal{B}$. Indeed, since $(L_{\mathbf{c}})^{-1}=L_{\mathbf{c}^{-1}}$, Prop. \ref{prop:preserveb} a) implies that $(L_{\mathbf{c}^{-1}})_*\overrightarrow{\mathcal{B}}=\overrightarrow{\mathcal{B}}$, and the proof of Prop. \ref{prop:preserveb} b) gives the claimed result.\end{remark}
}
\section{The Holonomy Groupoid}\label{section:HB}
In this whole section we
fix an integrable Lie algebroid $A\to M$ and a singular subalgebroid $\mathcal{B}$. Further, we fix a Lie groupoid $\cG$ integrating $A$.
We give the construction of the holonomy groupoid associated with a singular subalgebroid $\mathcal{B}$, relying on the methods developed in \cite{AndrSk}.
{In particular, our \S \ref{subsec:compare} and \S \ref{sub:con} follow closely \cite{AndrSk}. A new feature is that the holonomy groupoid depends on the choice of $G$; in \S \ref{section:vary} we describe this dependence.}
\subsection{Comparison of bisubmersions}\label{subsec:compare}
We start with a technical result, needed in the proof of Proposition \ref{prop:crucial}.
\begin{lemma}\label{lem:linindip}
Let $(U,\varphi,\cG)$ be a bisubmersion for the singular subalgebroid $\mathcal{B}$, $u\in U$, and ${\boldsymbol{\alpha}}_1,\dots,{\boldsymbol{\alpha}}_n\in \mathcal{B}$ which
{induce a linearly independent set of vectors in $\mathcal{B}/I_{\bt_U(u)}\mathcal{B}$.}
Let $Y_1,\dots,Y_n \in \Gamma(U,\ker d\bs_U)$ such that
$Y_i$ is $\varphi$-related to $\overrightarrow{\boldsymbol{\alpha}}_i$ for every $i=1,\dots,n$.
Then $Y_1(u),\dots,Y_n(u)$ are linearly independent.
\end{lemma}
\begin{proof}
The existence of the $Y_i$ follows from Definition \ref{dfn:bisubm2} ii). We show that they are linearly independent at $u$.
First, recall from Rem. \ref{donto} that there is a well-defined map of $C^{\infty}(U)$-modules $d\varphi \colon \Gamma_c(U;\ker d\bs_U) \to \varphi^*(\overrightarrow{\mathcal{B}})$. Denote by $\bt\colon G\to M$ the target map of the Lie groupoid. Upon the identification between the pullback vector bundle $\bt^*(\ker d\bs|_M)$ and $\ker d\bs$ given by right-translation, we have $\bt^* \mathcal{B}=\overrightarrow{\mathcal{B}}$. Hence, since $\bt_U=\bt\circ \varphi$, the above map can be written as $d\varphi \colon \Gamma_c(U;\ker d\bs_U) \to \bt_U^*{\mathcal{B}}$.
Take constants $\lambda_i$ such that $\sum_i \lambda_iY_i$ vanishes at $u$. We need to show that $\lambda_i=0$ for all $i$. By definition \ref{dfn:bisubm2} iii) and Lemma \ref{lem:bisubm2} c)
there are elements $W_1,\dots,W_k\in \Gamma(U,\ker d\bs_U)$ which form a frame for $\ker d\bs_U$ over a neighborhood $U_0$ of $u$, and which are $\varphi$-related to elements {$\overrightarrow{\boldsymbol{\beta}_j}$}, for some $\boldsymbol{\beta}_j\in \mathcal{B}$.
On $U_0$ we have
$\sum_i \lambda_iY_i=\sum_jf_jW_j$
for some $f_j\in I_u\subset C^{\infty}(U_0)$.
Applying $d\varphi$ to this equation we obtain
\begin{equation}\label{eqn:F2}
\sum_i \lambda_i \;\bt_U^*({\boldsymbol{\alpha}_i})=\sum_jf_j\;\bt_U^*({\boldsymbol{\beta}_j})\end{equation}
Choose a (locally defined) section $\tau$ of the submersion $\bt_U\colon U\to M$.
Notice that the l.h.s. of eq. \eqref{eqn:F2} is the pullback by $\bt_U$ of an element of $\mathcal{B}$, namely $\sum_i \lambda_i \boldsymbol{\alpha}_i$, hence the above expression is determined by its value on the image $\mathrm{Im}(\tau)$. Therefore the value of the r.h.s. of
eq. \eqref{eqn:F2} is unchanged
if we replace the coefficients $f_j$ with $\bt_U^*F_j$, where $F_j=\tau^*f_j \in C^{\infty}(M)$.
Hence we have the following equality of elements of $\mathcal{B}$:
\begin{equation*}
\sum_i \lambda_i {\boldsymbol{\alpha}_i}=\sum_jF_j {\boldsymbol{\beta}_j}.
\end{equation*}
Since $F_j\in I_{\bt_U(u)}$, the image of this element in $\mathcal{B}/I_{\bt_U(u)}\mathcal{B}$ vanishes. But the image is $\sum_i \lambda_i [{\boldsymbol{\alpha}_i}]$, and from the linear independence of the
$[{\boldsymbol{\alpha}_i}]$, we conclude that $\lambda_i=0$ for all $i$.
\end{proof}
{For the following fundamental result, recall that the minimality of a set of generators is defined in Rem. \ref{rem:basis}.}
\begin{prop}\label{prop:crucial}
Let $x \in M$ and $\boldsymbol{\alpha}_1,\dots,\boldsymbol{\alpha}_n
\in \mathcal{B}$ which form
a minimal set of generators of $\mathcal{B}$ around $x$.
Let $(U_0,\varphi_0,\cG)$ be the {path holonomy} bisubmersion they define as in Def. \ref{dfn:pathhol}. Let $(U,\varphi,\cG)$ be a bisubmersion of $\mathcal{B}$ and suppose that $u \in U$ with $\varphi(u)=1_x$ carries the identity bisection $1_M$ of $\cG$.
Then there exists an open neighborhood $U'$ of $u$ in $U$ and a submersion $g\colon U' \to U_0$ which is a morphism of bisubmersions and satisfies $g(u)=(0,x)$.
\begin{equation*}
\xymatrix{
U'\ar[rd]_{ \varphi} \ar@{-->}[rr]^{g} & & U_0\ar[ld]^{ \varphi_0} \\
&\cG& }
\end{equation*}
\end{prop}
\begin{proof}
Replacing $U$ by an open subset containing $u$, we may assume $\bs_U(U) \subset \bs_{U_0}(U_0)$. By Lemma \ref{lem:linindip} there are $Y_1,\dots,Y_n \in \Gamma(U,\ker d\bs_U)$ such that
$Y_i$ is $\varphi$-related to $\overrightarrow{\boldsymbol{\alpha}}_i$ for every $i=1,\dots,n$, and
the $Y_1(u),\dots,Y_n(u)$ are linearly independent. Let $Z'_{n+1},\dots,Z'_{k} \in \Gamma(U,\ker d\bs_U)$ such that $(Y_1,\dots,Y_n,Z'_{n+1},\dots,Z'_k)$ is a {frame of $\ker d\bs_U$} nearby $u$. Consider as in Remark \ref{donto} the map $d\varphi\colon \Gamma_c(U;\ker d\bs_U) \to \varphi^*(\overrightarrow{\mathcal{B}}).$
For all
$i=n+1,\ldots,k$ consider also $d\varphi(Z'_i)\in \varphi^*(\overrightarrow{\mathcal{B}})$.
Since $\varphi^*(\overrightarrow{\mathcal{B}})$ is generated by $\{\varphi^*\overrightarrow{\boldsymbol{\alpha}}_j\}_{j=1,\dots,n}$ nearby $u$,
there exist functions $f_i^j$ nearby $u$ such that
$d\varphi(Z'_i)=d\varphi(\xi_i)$, where
$\xi_i := \sum_{j=1}^n f_i^j Y_j$. Put $Z_i = Z'_i - \xi_i$.
Then $(Y_1,\dots,Y_n,Z_{n+1},\dots,Z_k)$ is also a {frame} nearby $u$, since the $\xi$'s are linear combinations of the $Y_i$'s, and further
\begin{equation}\label{eq:nk}
\text{$\varphi_*(Y_i)=\overrightarrow{\boldsymbol{\alpha}}_i$ for $i\leq n\;\;\;\;\;$ $\varphi_*(Z_i)=0$ for $i>n$}.
\end{equation}
To unify the notation, denote $Y_i:=Z_i$ for $i>n$.
For $\lambda = (\lambda_1,\dots,\lambda_k) \in \ensuremath{\mathbb R}^k$ small enough, we denote by $\psi_{\lambda}$ the partially defined diffeomorphism $\mathrm{exp}(\sum_{i=1}^k\lambda_i Y_i)$ of $U$. {Denote by $\mathbf{b}$ a bisection of $U$ through $u$ carrying the identity bisection of $G$}.
There is an open neighborhood $\mathbf{b}' \subset \mathbf{b}$ of $u$ and an open ball $B^k$ in $\ensuremath{\mathbb R}^k$ such that $$h \colon \mathbf{b}' \times B^k \to U',\quad (y,\lambda) \mapsto \psi_{\lambda}(y)$$ is a diffeomorphism of $\mathbf{b}' \times B^k$ into an open neighborhood $U'$ of $u$ in $U$. Notice that for all $y\in \mathbf{b}'$ we have
\begin{equation}\label{eq:varphipsi}
\varphi(\psi_{\lambda}(y))=\mathrm{exp}_{\varphi(y)}(\sum_{i=1}^{{n}}\lambda_i \overrightarrow{\boldsymbol{\alpha}}_i),
\end{equation}
{where the sum runs only until $n$} as a consequence of eq. \eqref{eq:nk}.
Let $p \colon \ensuremath{\mathbb R}^k \to \ensuremath{\mathbb R}^n$ be the projection to the first $n$ coordinates.
{Use the diffeomorphism $\varphi|_{\mathbf{b}'}$ to identify $\mathbf{b}'$ with an open subset of $M$ (thereby changing the domain of $h$).}
We define $$g := p \circ h^{-1} \colon U' \to U_0.$$
The map $g$ is a morphism of bisubmersions by eq. \eqref{eq:varphipsi}, and it is a submersion.
\end{proof}
Corollary \ref{cor:crucial} below
{allows us to define the equivalence relation giving rise to the holonomy groupoid.}
\begin{cor}\label{cor:crucial}
Let $(U_i,\varphi_i,\cG)$, $i=1,2$ be bisubmersions of $\mathcal{B}$ and $u_i \in U_i$ such that $\varphi_1(u_1) = \varphi_2(u_2)$.
\begin{enumerate}
\item If the identity bisection $1_M$ of $G$ is carried by $U_i$ at $u_i$, for $i=1,2$, there exists an open neighborhood $U'_1$ of $u_1$ in $U_1$ and a morphism of bisubmersions $f\colon U'_1 \to U_2$ such that $f(u_1)=u_2$.
\item If there is a bisection of $\cG$ carried by both $U_1$ at $u_1$ and by $U_2$ at $u_2$, there exists an open neighborhood $U'_1$ of $u_1$ in $U_1$ and a morphism of bisubmersions $f \colon U'_1 \to U_2$ such that $f(u_1)=u_2$.
\item If there is a morphism of bisubmersions $g \colon U_1 \to U_2$ such that $g(u_1)=u_2$, then there exists an open neighborhood $U'_2$ of $u_2$ in $U_2$ and a morphism of bisubmersions $f \colon U'_2 \to U_1$ such that $f(u_2)=u_1$.
\end{enumerate}
\end{cor}
\begin{proof}
Given proposition \ref{prop:crucial}, a) is proven exactly as in \cite[Cor. 2.11]{AndrSk} a).
b) By assumption there are bisections $\mathbf{b}_i$ of $U_i$ through $u_i$ ($i=1,2$) and a bisection $\mathbf{c}$ of $\cG$ through $\varphi_1(u_1) = \varphi_2(u_2)$
such that $\varphi_1(\mathbf{b}_1)$ and
$\varphi_2(\mathbf{b}_2)$ are open subsets of $\mathbf{c}$.
{By Prop. \ref{prop:preserveb}, $(U_i,L_{\mathbf{c}^{-1}}\circ \varphi_i,\cG)$ are
bisubmersions for $\mathcal{B}$}, where
$L_{\mathbf{c}^{-1}}$ denotes the automorphism of $\cG$ given by left multiplication with the bisection $\mathbf{c}^{-1}:=\{g^{-1}:g\in \mathbf{c}\}$.
The points $u_i$ of these bisubmersions carry $1_M$, for
the $\mathbf{b}_i$ are bisections of these bisubmersions whose images in $\cG$ are contained in $1_M$. By a) we therefore obtain a map $f$ making the following diagram commute:
\begin{equation*}
\xymatrix{
U_1\ar[rd]_{L_{\mathbf{c}^{-1}}\circ \varphi_1} \ar@{-->}[rr]^{f} & & U_2\ar[ld]^{L_{\mathbf{c}^{-1}}\circ \varphi_2} \\
&\cG\ar[d]^{L_{\mathbf{c}}}&\\
&\cG& }
\end{equation*}
Therefore $f$ is the desired morphism.
c) Let $\mathbf{b}$ be a bisection of $U_1$ through $u_1$. Then $g(\mathbf{b})$ is a bisection of $U_2$ through $u_2$. Both $\mathbf{b}$ and $g(\mathbf{b})$ carry the same bisection of $\cG$, that is, $\varphi_2(g(\mathbf{b}))=\varphi_1(\mathbf{b})$. Hence we can apply item b) to obtain the existence of $f$.
\end{proof}
\subsection{Construction of the holonomy groupoid}\label{sub:con}
Recall that we fixed an integrable Lie algebroid $A\to M$, a singular subalgebroid $\mathcal{B}$, and a Lie groupoid $\cG$ integrating $A$.
\begin{definition}\label{def:pathholatlas}
Consider a family $(U_i,\varphi_i,\cG)_{i \in I}$ of {source-connected} {minimal} path-holonomy bisubmersions defined as in definition \ref{dfn:pathhol} such that $M = \cup_{i \in I}\bs_{U_i}(U_i)$. Let $\mathcal{U}$ be the collection of all such bisubmersions, together with their inverses
and finite compositions. {(We can omit the inverses, by Remark \ref{rem:invphbisub}.)}
We call $\mathcal{U}$ a {\bf path holonomy atlas} of $\mathcal{B}$.
\end{definition}
Corollary \ref{cor:crucial} c) shows that for $u_1 \in (U_1,\varphi_1,\cG), u_2 \in (U_2,\varphi_2,\cG)$ the relation
\begin{align*}
u_1 \sim u_2 \Leftrightarrow &\;\;\text{there is an open neighborhood $U'_1$ of $u_1$ in $U_1$ and}\\
&\;\;\text{a morphism of bisubmersions } f\colon U_1' \to U_2 \text{ such that } f(u_1)=u_2
\end{align*}
is an equivalence relation. This allows us to give the following definition:
\begin{definition}\label{def:holgroupoid} Let $\cG$ be a Lie groupoid and $\mathcal{B}$ a singular subalgebroid of $Lie(\cG)$. The {\bf holonomy groupoid of $\mathcal{B}$ over $\cG$} is
\begin{center}
\fbox{\begin{Beqnarray*}
\;\;\;\; H^{\cG}(\mathcal{B}):= \coprod_{U \in \mathcal{U}} U/\sim
\end{Beqnarray*}}
\end{center}
We write $H(\mathcal{B})$ instead of $H^{\cG}(\mathcal{B})$ when the choice of $\cG$ is understood.
\end{definition}
\begin{remark}\label{rem:HGBbisec} The equivalence relation $\sim$ can be made more explicit as follows, as a consequence of Cor. \ref{cor:crucial} b):
\begin{align*}
u_1\sim u_2 \Leftrightarrow &\;\text{
$\varphi_1(u_1)=\varphi_2(u_2)$}, \\
&\;\text{$\exists$ bisections $\mathbf{b}_1$ through $u_1$, $\mathbf{b}_2$ through $u_2$, s.t. $\varphi_1(\mathbf{b}_1)=\varphi_2(\mathbf{b}_2)$.
}
\end{align*}
\end{remark}
{Denote by $\natural \colon \coprod_{U \in \mathcal{U}} U \to H^{\cG}(\mathcal{B})$ the quotient map.}
\begin{remark}\label{rem:open}
{In the following, we endow $H^{\cG}(\mathcal{B})$ with the quotient topology induced by $\natural$. For any bisubmersion $U\in \mathcal{U}$, the image $\natural U$ is open in $H^G(\mathcal{B})$, by the very same argument used at the beginning of \cite[\S 3.4]{AZ1}.}
\end{remark}
The next theorem justifies the use of the term ``groupoid'' for $H^G(\mathcal{B})$.
\begin{thm}\label{thm:holgroidconstr}
Denote $q_U:=\natural|_{U}$ for all $U\in \mathcal{U}$.
\begin{enumerate}
\item There are maps $\bs_H, \bt_H\colon H^{\cG}(\mathcal{B}) \to M$ such that $\bs_H\circ q_U=\bs_U$ and $\bt_H\circ q_U=\bt_U$ for all $U\in \mathcal{U}$.
\item There is a topological groupoid structure on $H^{\cG}(\mathcal{B})$, with objects $M$, source and target maps $\bs_H, \bt_H$ defined above, and with multiplication $q_U(u)q_V(v):=q_{U\circ V}(u,v)$.
\item The canonical map
\begin{center}
\fbox{\begin{Beqnarray*}
\;\;\;\; \Phi \colon H^{\cG}(\mathcal{B})\to \cG,
\end{Beqnarray*}}
\end{center}
determined by $\Phi\circ q_U=\varphi_U$ for all $U\in \mathcal{U}$,
is a morphism of topological groupoids covering $Id_M$.
\end{enumerate}
\end{thm}
\begin{proof}
a) First notice that the map $\Phi$ introduced in c) is well-defined, by the definition of morphism of bisubmersions (definition \ref{def:morph}). Hence $\bs_H:=\bs \circ \Phi$ and $\bt_H:=\bt \circ \Phi$ are well-defined maps $H^{\cG}(\mathcal{B}) \to M$. They clearly satisfy $\bs_H\circ q_U=\bs_U$ and $\bt_H\circ q_U=\bt_U$ for all $U\in \mathcal{U}$.
b) We prove that the multiplication is well-defined. Let $U,V,U',V'\in \mathcal{U}$ and consider elements satisfying $q_U(u)=
q_{U'}(u')$ and $q_V(v)=
q_{V'}(v')$. Then there exist local morphisms of bisubmersions $f\colon U\to U'$ with $u\mapsto u'$, and $h\colon V\to V'$ with $v\mapsto v'$. Assume that $\bs_U(u)=\bt_V(v)$, which implies $\bs_{U'}(u')=\bt_{V'}(v')$.
Since morphisms of bisubmersions preserve the source and target maps,
the product map restricts to a well-defined map $(f,h)\colon U\circ V\to U'\circ V'$ with $(u,v)\mapsto(u',v')$ and which is a morphism of bisubmersions, showing that
$(u,v)\sim(u',v')$ and therefore $q_{U\circ V}(u,v)=q_{U'\circ V'}(u',v')$.
{For any $x\in M$, the identity element $1_x\in H^{\cG}(\mathcal{B})$ is represented by any point $u$ in a bisubmersion $U\in \mathcal{U}$ with $\bs_U(u)=\bt_U(u)=x$ and so that $u$ carries (locally) the identity bisection of $\cG$. (For instance, one can take $U\subset \ensuremath{\mathbb R}^n\times M$ to be a minimal path holonomy bisubmersion and $u=(0,x)$.) The inverse of $q_V(v)\in H^{\cG}(\mathcal{B})$ is $q_{\bar{V}}(v)$, where $\bar{V}$ denotes the inverse bisubmersion to $V$ as in Def. \ref{def:inv}.} It is clear that, with these operations, $H^{\cG}(\mathcal{B})$ forms a topological groupoid.
c) The map $\Phi$ is a morphism of topological groupoids, by the definition of inverse and composition of bisubmersions (definitions \ref{def:inv} and \ref{def:comp}).
\end{proof}
\begin{remark}
In Appendix \ref{sec:atlases} we define the notion of atlas for singular subalgebroids, of which $\mathcal{U}$ appearing in definition \ref{def:pathholatlas} is an example, and from each atlas we construct a topological groupoid. \end{remark}
{
{Let us work out an elementary example by hand.}
More classes of examples will be given in \S \ref{subsec:exholgr}.}
\begin{ex}[Lie algebras]\label{ex:Liegr}
Let $\mathcal{B}=A=\ensuremath{\mathfrak{g}}$ be a Lie algebra and $G$ any connected Lie group integrating $\ensuremath{\mathfrak{g}}$. There is a neighborhood $U$ of the origin in $\ensuremath{\mathfrak{g}}$
such that the exponential map $U \to G$ is a bisubmersion. Indeed, for any
basis of $\ensuremath{\mathfrak{g}}$, consider the induced path-holonomy bisubmersion $\ensuremath{\mathbb R}^n \to G$; upon the identification $\ensuremath{\mathfrak{g}} \cong \ensuremath{\mathbb R}^n$ given by the basis, it is the exponential map, as the latter is obtained by taking integral curves of right-invariant vector fields starting at the identity. The $n$-fold composition of this bisubmersion is $$U^{\times n}:=U\times\dots\times U \to G, \;\;\;(v_1,\dots,v_n)\mapsto \mathrm{exp}(v_1)\dots \mathrm{exp}(v_n).$$
The map $\cup_n U^{\times n} \to G$ is surjective and, by the definition of holonomy groupoid, it descends to an injective map $H^G(\ensuremath{\mathfrak{g}})\to G$. We conclude that $H^G(\ensuremath{\mathfrak{g}})$ is isomorphic (as a topological groupoid) to $G$.
\end{ex}
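For instance, in the abelian case $\ensuremath{\mathfrak{g}}=\ensuremath{\mathbb R}^n$ one may take either $G=\ensuremath{\mathbb R}^n$ or the torus $G=T^n=\ensuremath{\mathbb R}^n/\ensuremath{\mathbb Z}^n$; Example \ref{ex:Liegr} then yields
\begin{equation*}
H^{\ensuremath{\mathbb R}^n}(\ensuremath{\mathbb R}^n)\cong \ensuremath{\mathbb R}^n, \qquad H^{T^n}(\ensuremath{\mathbb R}^n)\cong T^n,
\end{equation*}
illustrating already in this elementary case that the holonomy groupoid depends on the choice of the integration $G$.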
\subsection{Examples}\label{subsec:exholgr}
We give some examples of the holonomy groupoid defined in \S \ref{sub:con}. We do so for the two basic examples of singular subalgebroid displayed in \S \ref{subsec:motivex}:
singular foliations
and wide Lie subalgebroids.
{For wide Lie subalgebroids we show that $H^{\cG}(\mathcal{B})$ agrees with $H_{min}$, the minimal integral of $B$ over $\cG$ of Moerdijk and Mr{\v{c}}un recalled in the Introduction.}
Wide Lie subalgebroids are treated as a special case of
images of Lie algebroid morphisms covering diffeomorphisms of the base (\S\ref{subsec:morph}).
{Unfortunately at the moment we have no way to describe explicitly the holonomy groupoids for the other classes of singular subalgebroids displayed in \S \ref{subsec:ex}.}
\subsubsection{For singular foliations}
\begin{ex}[Singular foliations]\label{ex:singfo}
When $A=TM$ (so $\mathcal{B}$ is a singular foliation on $M$), and $\cG=M\times M$, then $H^{M\times M}(\mathcal{B})$ is the holonomy groupoid of the singular foliation as defined in \cite{AndrSk}. {This follows from Ex. \ref{ex:sfolpair} and comparing the construction in \S \ref{sub:con} to the one of \cite{AndrSk}.}
In example \ref{ex:regfol} we will take $\cG=\Pi(M)$ (the fundamental groupoid of $M$, {which is source simply connected}) and construct the topological groupoid $H^{\Pi(M)}(\mathcal{B})$. We will see that $H^{\Pi(M)}(\mathcal{B})$ is not source simply connected in general, and has $H^{M\times M}(\mathcal{B})$ as a quotient.
\end{ex}
\subsubsection{For singular subalgebroids arising from Lie groupoid morphisms}
The next proposition will allow us to construct holonomy groupoids for several classes of singular subalgebroids.
It is based on the ideas explained in \cite[Ex. 3.4(4)]{AndrSk}.
\begin{prop}\label{prop:B} Let $\cG$ be a Lie groupoid over $M$ and $\mathcal{B}$ a singular subalgebroid of $Lie(\cG)$. Let $\cK$ be a
Lie groupoid over $M$. Let
$\varphi\colon \cK \to \cG$ be a morphism of Lie groupoids covering $Id_M$ which is also a bisubmersion for $\mathcal{B}$. Then:
\begin{itemize}
\item [i)] $H^{\cG}(\mathcal{B})=\cK/\mathcal{I}$ as topological groupoids, where $$\mathcal{I}:=\{k\in \cK: \text{$\exists$ a local bisection $\mathbf{b}$ through $k$ such that $\varphi(\mathbf{b})\subset 1_M$}\}$$
\item [ii)] {the canonical map $\Phi\colon H^{\cG}(\mathcal{B})\to \cG$ coincides with the map $\cK/\mathcal{I}\to \cG$ induced by $\varphi$.}
\end{itemize}
\end{prop}
\begin{remark}\label{rem:cI} We explain the notation and terminology in the definition of $\mathcal{I}$ in the above proposition.
The symbol $1_M$ denotes the set of identity elements of the Lie groupoid $\cG$. The term ``bisection'' refers to bisection for the Lie groupoid $\cK$.
This coincides with the notion of
bisection for the bisubmersion
$(\cK,\varphi,\cG)$ (in the sense of definition \ref{def:bisec})
since the morphism of Lie groupoids $\varphi$ covers $Id_M$.
The set $\mathcal{I}$ is a normal subgroupoid of $\cK$. This means that $\mathcal{I}$ is a wide subgroupoid and is contained in the union of the isotropy subgroups of $\cK$ ({a consequence of $\mathcal{I}\subset \ker(\varphi)$}). Further, it means that $\mathcal{I}$ is invariant under conjugation: for every $k\in \cK$ we have $k\mathcal{I}_{y}k^{-1}\subset \mathcal{I}_{x}$ where $y=\bs_{\cK}(k)$ and $x=\bt_{\cK}(k)$. The quotient $\cK/\mathcal{I}$ has a unique topological groupoid structure such that the quotient map $\cK \to \cK/\mathcal{I}$ is a groupoid morphism (see \cite[Prop. 2.2.3]{MK2}).
\end{remark}
\begin{remark}\label{rem:bisec}
Every normal subgroupoid $\mathcal{I}$ of $\cK$ corresponds to the equivalence relation $$k_1\sim_{\cK}k_2 \Leftrightarrow k_1 (k_2)^{-1}\in \mathcal{I}$$ For $\mathcal{I}$ as in proposition \ref{prop:B} this relation can be written down\footnote{To see that $k_1 (k_2)^{-1}\in \mathcal{I}$ implies that $k_1$ and $k_2$ satisfy the conditions on the r.h.s. of \eqref{eq:simKbis}, notice the following: if $\mathbf{b}$ is a bisection through $k_1 (k_2)^{-1}$ such that $\varphi(\mathbf{b})\subset 1_M$ and $\mathbf{b}_2$ is any bisection through $k_2$, then $\mathbf{b}\cdot \mathbf{b}_2$ is a bisection through $k_1$ such that $\varphi(\mathbf{b}\cdot \mathbf{b}_2)=\varphi(\mathbf{b}_2)$.
}
explicitly:
\begin{align}\label{eq:simKbis}
k_1\sim_{\cK}k_2 \Leftrightarrow &\;\text{
$\varphi(k_1)=\varphi(k_2)$} \text{ and} \\
\nonumber &\;\text{$\exists$ local bisections $\mathbf{b}_1$ through $k_1$, $\mathbf{b}_2$ through $k_2$, s.t. $\varphi(\mathbf{b}_1)=\varphi(\mathbf{b}_2)$.
}
\end{align}
Analogously to Rem. \ref{rem:HGBbisec}, the equivalence \eqref{eq:simKbis} can be rephrased as follows:
$$k_1\sim_{\cK}k_2
\Leftrightarrow \text{$\exists$ neighborhood $\cK'$ of $k_1$ and $f \colon \cK'\to \cK$ satisfying $\varphi\circ f=\varphi$ and $f(k_1)=k_2$}.
$$
\end{remark}
\begin{proof}[Proof of proposition \ref{prop:B}] {The proof relies on Appendix \ref{sec:atlases}.} i) We first show the following claim:
\underline{Claim:} \emph{$H^{\cG}(\mathcal{B})=\cK/\sim_{\cK}$ as topological spaces.}
The set $\mathcal{U}:=\{(\cK,\varphi,\cG)\}$ is an atlas for $\mathcal{B}$ (see definition \ref{def:atlas}) consisting of just one bisubmersion. This holds by the following two arguments which are consequences of the fact that $\varphi$ is a groupoid morphism:
\begin{itemize}
\item The inverse bisubmersion $(\cK,i_{\cG}\circ \varphi,\cG)$ is adapted to $\mathcal{U}$ because the inversion map $i_{\cK}$ is a morphism of bisubmersions from $(\cK,i_{\cG}\circ \varphi,\cG)$ to $(\cK,\varphi,\cG)$.
\item The composition of bisubmersions $\cK\circ \cK$ is also adapted to $\mathcal{U}$. Indeed it agrees with
the space of composable arrows of $\cK$ and the groupoid multiplication of $\cK$ is a morphism of bisubmersions, {\it i.e.}\; this diagram commutes:
\begin{equation*}
\xymatrix{
\cK\circ \cK \ar[rd]_{\varphi\cdot\varphi } \ar[rr]^{\text{multipl. of $\cK$}} & & \cK \ar[ld]^{\varphi} \\
&\cG &
}
\end{equation*}
\end{itemize}
The atlas $\mathcal{U}$ is equivalent to a path holonomy atlas (definition \ref{def:pathholatlas}). To prove this, by proposition \ref{prop:pathholadapted}, we just need to show that the bisubmersion
$(\cK,\varphi,\cG)$ is adapted to a path holonomy atlas. Proposition \ref{prop:crucial} implies that for any $x\in M$, there exists a morphism of bisubmersions from an open neighborhood of $1_x$ in $\cK$ to some {path holonomy} bisubmersion. Then simply use that the Lie groupoid $\cK$ is generated by such neighborhoods.
As $\mathcal{U}=\{(\cK,\varphi,\cG)\}$ is equivalent to a path holonomy atlas, by proposition \ref{prop:rem33} we have $H(\mathcal{B})^{\mathcal{U}}=H^{\cG}(\mathcal{B})$. The former, as a topological space, is defined as the quotient of $\cK$ by the equivalence relation
$\sim_{\cK}$ (see eq. \eqref{eq:HUcb}). This proves the claim. \hfill$\bigtriangleup$
Now we can show that $H^{\cG}(\mathcal{B})=\cK/\sim_{\cK}$ as groupoids. Let $k,k'\in \cK$ such that $\bs_{\cK}(k)=\bt_{\cK}(k')$. First, the product of $[k]$ and $[k']$ in $H(\mathcal{B})^{\mathcal{U}}=H^{\cG}(\mathcal{B})$ is the class of $k\circ k'$ in the composition of bisubmersions $\cK\circ \cK$. Second, as seen above, the groupoid multiplication is a morphism of bisubmersions from $\cK\circ \cK$ to $\cK$, hence $k\circ k'\sim_{\cK}kk'$. Combining the last two statements we obtain $[k]\cdot[k']=[kk']$.
ii) {The canonical map $H(\mathcal{B})^{\mathcal{U}}\to \cG$ is the map $\cK/\mathcal{I}\to \cG$ induced by $\varphi$. Now apply Prop. \ref{prop:rem33} ii).}
\end{proof}
We immediately obtain a construction for the holonomy groupoid of singular subalgebroids arising from Lie groupoid morphisms (covering the identity), see Def. \ref{def:arises}:
\begin{prop}\label{cor:HBimage}
{Assume that the singular subalgebroid $\mathcal{B}$ arises from a Lie groupoid morphism $\Psi\colon \cK\to\cG$ covering the identity.}
Then $H^{\cG}(\mathcal{B})=\cK/\mathcal{I}$, where $\mathcal{I}$ is as in proposition \ref{prop:B}, {and
the canonical map $\Phi\colon H^{\cG}(\mathcal{B})\to \cG$ is the map $\cK/\mathcal{I}\to \cG$ induced by $\Psi$.}
\end{prop}
\begin{proof}
By proposition \ref{prop:imagerelbi}, $\Psi\colon \cK \to \cG$ is a bisubmersion for $\mathcal{B}$. Hence we can apply proposition \ref{prop:B}.
\end{proof}
\begin{ex}[Singular foliations arising from Lie algebroids]\label{ex:explicitgr}
\begin{itemize}
\item [a)] {The following example is a rephrasing of \cite[Ex. 3.4(4)]{AndrSk}.}
Let $A$ be a Lie algebroid over $M$, and let $\mathcal{F}:=\rho(\Gamma_c(A))$ be the singular foliation on $M$
associated to $A$ (here $\rho$ is the anchor map). Let $\cK\rightrightarrows M$ be a Lie groupoid integrating $A$. The Lie groupoid morphism given by the target-source map $$\Psi:= (\bt,\bs)\colon \cK \to M\times M$$
integrates the anchor map, so it gives rise to $\mathcal{F}$, i.e.
$\mathcal{F}=\{\Psi_*(\Gamma_c(A))\}$. Therefore Prop. \ref{cor:HBimage} implies that the holonomy groupoid $H(\mathcal{F})$ of the foliation is $$H(\mathcal{F})=\cK/\mathcal{I},$$ where $\mathcal{I}$ consists of the elements $k\in \cK$ through which passes a local bisection inducing the identity (local) diffeomorphism on $M$.
\item [b)] {We now spell out a special case of the above (for linear actions compare with \cite[Ex. 3.7]{AndrSk})}. Consider an action of a Lie group $G$ on $M$. It gives rise to a singular foliation $\mathcal{F}$ on $M$, which is generated by the image of the associated infinitesimal action $\psi\colon \ensuremath{\mathfrak{g}}\to \mathfrak{X}(M), v\to v_M$. Let $A:=\ensuremath{\mathfrak{g}}\times M$ be the transformation algebroid of the infinitesimal action. Its anchor map $\rho\colon \ensuremath{\mathfrak{g}}\times M\to TM, (v,p)\mapsto v_M(p)$ satisfies $\mathcal{F}=\rho(\Gamma_c(A))$. By a), the holonomy groupoid of $\mathcal{F}$ is obtained from the transformation groupoid $G\times M\rightrightarrows M$ as $$H(\mathcal{F})=(G\times M)/\mathcal{I},$$
where $\mathcal{I}$ is very explicit: it consists of $(g,x)\in G\times M$ (necessarily with $g\cdot x=x$) for which there is a neighborhood $U$ of $x$ in $M$ and a smooth map $\tilde{g}\colon U\to G$ such that $\tilde{g}(x)=g$ and $\tilde{g}(y)\cdot y=y$ for all $y\in U$. In other words, it consists of elements of $G\times M$ through which there is a local section of the second projection $G\times M\to M$ which lies in the isotropy groups of the action of $G$ on $M$.
\end{itemize}
\end{ex}
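As an illustration of Example \ref{ex:explicitgr} b), consider the rotation action of $S^1$ on $M=\ensuremath{\mathbb R}^2$, whose orbits are the concentric circles about the origin, together with the origin itself. For $x\neq 0$ the isotropy group of $x$ is trivial, so any local section $\tilde{g}$ as above equals the identity element $e$ on the complement of the origin in its domain, and hence everywhere by continuity. Therefore $\mathcal{I}=\{e\}\times \ensuremath{\mathbb R}^2$, and the holonomy groupoid of this singular foliation is the transformation groupoid itself:
\begin{equation*}
H(\mathcal{F})=(S^1\times \ensuremath{\mathbb R}^2)/\mathcal{I}=S^1\times \ensuremath{\mathbb R}^2.
\end{equation*}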
{For singular subalgebroids arising from Lie groupoid morphisms, the holonomy groupoid satisfies a minimality property. In particular, it is a quotient of any Lie groupoid giving rise to the given singular subalgebroid.}
\begin{prop}\label{prop:lift}
Any Lie groupoid morphism {covering the identity} $\Psi\colon \cK\to\cG$ giving rise to a singular subalgebroid $\mathcal{B}$
factors as
\begin{equation*}
\xymatrix{
&H^{\cG}(\mathcal{B})\ar_{\Phi}[d]\\
{\cK}\ar@{-->}[ru]^\tau \ar[r]^{\Psi}&\cG }
\end{equation*}
where $\tau\colon \cK\to H^{\cG}(\mathcal{B})$ is a surjective morphism of topological groupoids.
\end{prop}
\begin{proof}
{Thanks to Prop. \ref{cor:HBimage}, we can take $\tau$ to be the quotient map $\cK\to \cK/\mathcal{I}=H^{\cG}(\mathcal{B})$.}
\end{proof}
\subsubsection{For wide Lie subalgebroids}\label{subsubsec:wide}
{The proof of the following statement is similar to the one of \cite[Prop. 1.9]{AZ5} but
has the advantage of relying only on elementary facts.}
{
\begin{prop}\label{prop:smoothgroid}
Let $B$ be a wide Lie subalgebroid of $A$, let $G$ be a Lie groupoid integrating $A$, and denote $\mathcal{B}:=\Gamma_c(B)$. Then
\begin{itemize}
\item [i)] $H^{\cG}(\mathcal{B})$ is a Lie groupoid integrating $B$,
\item [ii)] the canonical Lie groupoid morphism $\Phi\colon H^{\cG}(\mathcal{B})\to \cG$ integrates the inclusion $\iota\colon B\hookrightarrow A$.
\end{itemize}
\end{prop}
}
\begin{proof}
The Lie algebroid $B$ is integrable since $A$ is \cite{MMRC}.
Let $K$ be the source simply connected Lie groupoid integrating $B$. Let
$\Psi \colon \cK \to \cG$ be the morphism of Lie groupoids which integrates the inclusion $\iota \colon B\hookrightarrow A$.
On one hand, clearly the Lie groupoid morphism $\Psi$ gives rise to $\mathcal{B}$. Therefore we can apply Prop. \ref{cor:HBimage}, which states that $H^{\cG}(\mathcal{B})=\cK/\mathcal{I}$ and the canonical map $\Phi\colon H^{\cG}(\mathcal{B})\to \cG$ is the map $\cK/\mathcal{I}\to \cG$ induced by $\Psi$.
On the other hand we have the following
\underline{Claim:} \emph{The subgroupoid $\mathcal{I}$ of $\cK$, defined as in Prop. \ref{prop:B}, satisfies:
\begin{enumerate}
\item Set-theoretically, $\mathcal{I}$ is a normal subgroupoid of $\cK$ lying in the union of the isotropy groups of $\cK$.
\item Topologically, $\mathcal{I}$ is an embedded Lie subgroupoid of $\cK$
and it is $\bs$-discrete ({\it i.e.}\; the intersection of $\mathcal{I}$ with any $\bs$-fiber is discrete).
\end{enumerate}
}
The claim implies (see for example \cite[Thm. 1.20]{GuLi}) that $\cK/\mathcal{I}$ is also a Lie groupoid integrating $B$. Clearly the Lie groupoid morphism $\cK/\mathcal{I}\to \cG$ induced by $\Psi$ integrates the inclusion $\iota \colon B\hookrightarrow A$. This concludes the proof {of the proposition, modulo the claim, which we now prove}.
That a) in the claim holds was explained in Rem. \ref{rem:cI}. We argue that b) holds.
There is a neighborhood $V\subset K$ of the set of identities $1_M$ on which the morphism $\Psi$ is injective (this follows from $\Psi$ being a Lie groupoid morphism covering the identity and whose Lie algebroid map is injective). Since $\mathcal{I}\subset \ker(\Psi)$, we have
$\mathcal{I}\cap V=1_M.$
Let $k\in \mathcal{I}$, and take a bisection $\mathbf{b}$ of $K$ through $k$ as in the definition of $\mathcal{I}$. Denote $U:=\bs(\mathbf{b})$, an open subset of $M$.
Denote by $r_{\mathbf{b}}\colon \bs^{-1}(U) \to \bs^{-1}(U)$ the diffeomorphism given by right-multiplication by the bisection $\mathbf{b}$. It maps $1_{\bt(k)}$ to $k$ and it preserves $\mathcal{I}$, since
$\mathbf{b}$ lies in the subgroupoid $\mathcal{I}$. Applying $r_{\mathbf{b}}$ to
$$\mathcal{I}\cap (V\cap \bs^{-1}(U))=1_U$$
we obtain that the intersection of $\mathcal{I}$ with $r_{\mathbf{b}}(V)\cap \bs^{-1}(U)$ (an open neighborhood of $k$) is exactly $\mathbf{b}$. This shows both that $\mathcal{I}$ is an embedded submanifold (hence, a \emph{Lie subgroupoid}) of $K$ and that $\mathcal{I}$ is \emph{$\bs$-discrete}.
\end{proof}
\begin{remark}
{Recall from \S \ref{subsec:morph} that $\mathcal{B}$ is a projective singular subalgebroid if $\mathcal{B}\cong \Gamma_c(E)$ for some vector bundle $E\to M$, which automatically comes with a Lie algebroid structure and a morphism $\tau \colon E\to A$. In \cite{AZ4}, {generalizing Prop. \ref{prop:smoothgroid} and its proof}, we show that $H^{\cG}(\mathcal{B})$ is a Lie groupoid if{f} $\mathcal{B}$ is projective, that
the Lie groupoid $H^{\cG}(\mathcal{B})$ integrates $E$, and that the canonical morphism $\Phi \colon H^{\cG}(\mathcal{B})\to G$ integrates $\tau$.}
\end{remark}
{We now refine Prop. \ref{prop:smoothgroid}, showing that in the case of wide Lie subalgebroids $H^{\cG}(\mathcal{B})$ is exactly the minimal integral of $B$ over $\cG$
defined in the work of Moerdijk and Mr{\v{c}}un {\cite[Thm. 2.3]{MMRC} recalled in the Introduction.}}
\begin{prop}\label{prop:HGBisHmin}
{Let $B$ be a wide Lie subalgebroid of an integrable Lie algebroid $A$, and fix a Lie groupoid $\cG$ integrating $A$. Let $\mathcal{B}=\Gamma_c(B)$.
Then
\begin{itemize}
\item [i)] $H^{\cG}(\mathcal{B})$ agrees with $H_{min}$, the minimal integral of $B$ over $\cG$ {recalled in the Introduction,}
\item [ii)] the canonical map $\Phi\colon H^{\cG}(\mathcal{B})\to \cG$ agrees with the map
{recalled in the Introduction}.
\end{itemize}
}
\end{prop}
\begin{proof}
{
By Prop. \ref{prop:smoothgroid} the canonical map $\Phi\colon H^{\cG}(\mathcal{B})\to \cG$
satisfies properties 1) and 2) from \cite[Thm. 2.3]{MMRC} recalled in the Introduction. Any Lie groupoid morphism $\widetilde{H}\to \cG$ integrating the inclusion $\iota \colon B\hookrightarrow A$ is a Lie groupoid morphism giving rise to $\mathcal{B}$.
Hence, by Prop. \ref{prop:lift}, property 3) from that theorem holds too. The uniqueness statement in that theorem finishes the proof.}
\end{proof}
The following two examples generalize the elementary Example \ref{ex:Liegr}.
\begin{ex}[Lie algebroids]\label{ex:GammaA}
For any integrable Lie algebroid $A$ take $\mathcal{B}=\Gamma_c(A)$, and let $\cG$ be a Lie groupoid integrating
$A$. Then $H^{\cG}(\mathcal{B})=\cG$ and $\Phi=Id_{\cG}$. {This follows by taking $\Psi=Id_{\cG}$ in Cor. \ref{cor:HBimage}. {(It also follows directly from Prop. \ref{prop:HGBisHmin})}.}
\end{ex}
\begin{ex}[Lie subalgebras]
\label{ex:Liegrrevisited} Let $\ensuremath{\mathfrak{g}}$ be a Lie algebra, $\mathfrak{k} $ a Lie subalgebra, and
fix a connected Lie group $G$ integrating $\ensuremath{\mathfrak{g}}$.
Let $\Psi \colon
K\to G$ be any morphism of Lie groups integrating the inclusion $\iota \colon \mathfrak{k} \hookrightarrow \ensuremath{\mathfrak{g}}$, where $K$ is assumed to be connected. (For instance, take $K$ to be the simply connected integration of $\mathfrak{k}$.) Then $$H^G(\mathfrak{k})=K/\ker(\Psi).$$ {This follows from Cor. \ref{cor:HBimage}, noticing} that since the space of objects of $K$ is just a point, we have $\mathcal{I}=\ker(\Psi)$.
Using Prop. \ref{prop:smoothgroid} we see that $H^G(\mathfrak{k})$ is a Lie group integrating $\mathfrak{k}$, and the map $\Phi\colon H^G(\mathfrak{k})\to G$ induced by $\varphi$ is an injective immersion and group homomorphism. In other words,
$(H^G(\mathfrak{k}),\Phi)$ is the Lie subgroup of $G$ integrating $\mathfrak{k}$.
\end{ex}
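For a concrete instance of the previous example, take $\ensuremath{\mathfrak{g}}=\mathfrak{k}=\ensuremath{\mathbb R}$ and $G=S^1$. Choose $K=\ensuremath{\mathbb R}$, the simply connected integration of $\mathfrak{k}$, and $\Psi\colon \ensuremath{\mathbb R}\to S^1$ the morphism $r\mapsto e^{2\pi i r}$. Then $\ker(\Psi)=\ensuremath{\mathbb Z}$, so
$$H^{G}(\mathfrak{k})=\ensuremath{\mathbb R}/\ensuremath{\mathbb Z}\cong S^1,$$
in agreement with Example \ref{ex:GammaA}, since here $\mathfrak{k}=\ensuremath{\mathfrak{g}}$.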
\subsection{Dependence of \texorpdfstring{$H^{\cG}(\mathcal{B})$}{} on \texorpdfstring{$\cG$}{}} \label{section:vary}
Let $A\to M$ be a Lie algebroid and $\mathcal{B}$ a singular subalgebroid. Fix a Lie groupoid $\cG$ integrating $A$. In \S \ref{sub:con} we constructed the holonomy groupoid $H(\mathcal{B}):=H^{\cG}(\mathcal{B})$ over $M$, as the quotient of a path-holonomy atlas of $ {\cG}$-bisubmersions\footnote{In this section we use the terminology ``$\cG$-bisubmersion'' instead of ``bisubmersion'', to emphasize the dependence on the Lie groupoid $\cG$.}
by the equivalence relation given by morphisms of $ {\cG}$-bisubmersions. Clearly the construction depends on $\cG$.
Now take another Lie groupoid $\widetilde{\cG}$ integrating $A$, and so that $\cG$ is a quotient of it, {\it i.e.}\; there is a surjective Lie groupoid {morphism} $$\pi \colon \widetilde{\cG}\to\cG$$ with discrete fibers. Denote by $\widetilde{H}(\mathcal{B}):=H^{\widetilde{\cG}}(\mathcal{B})$ the holonomy groupoid constructed using $\widetilde{\cG}$. In this subsection we describe $\widetilde{H}(\mathcal{B})$ in terms of $H(\mathcal{B})$.
\subsubsection{{A theorem describing $\widetilde{H}(\mathcal{B})$ in terms of $H(\mathcal{B})$}}
Consider the fiber product of the canonical groupoid morphism $\Phi\colon H(\mathcal{B})\to \cG$ and $\pi \colon \widetilde{\cG}\to \cG$, {\it i.e.}\; $$H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG}\rightrightarrows M$$ with the component-wise groupoid structure. Upon the identification between $M$ and the diagonal $\Delta M\subset M\times M$, it is the subgroupoid of the product groupoid $H(\mathcal{B})\times \widetilde{\cG}\rightrightarrows M\times M$ given by the preimage of $\Delta \cG$ under the groupoid morphism $(\Phi, \pi)\colon H(\mathcal{B})\times \widetilde{\cG}\to \cG\times \cG$. Notice that the latter morphism does not have connected fibers in general, so that $H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG}$ will not be source-connected in general. Hence we consider the source-connected component of the identities:
\begin{align}\label{eq:pathschar}
(H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG})_0=\{(h,\widetilde{g})\in
H(\mathcal{B})\times \widetilde{\cG}:& \text{ $\exists$ a continuous path $(h(t),\widetilde{g}(t))\subset (\bs_H,\widetilde{\bs})^{-1}(x,x)$}\\
&\text{from $(1_{x}^{H(\mathcal{B})},1_x^{\widetilde{\cG}})$ to $(h,\widetilde{g})$ with $\Phi(h(t))=\pi(\widetilde{g}(t))$}\},\nonumber
\end{align}
where $x:=\bs_H(h)=\widetilde{\bs}(\widetilde{g})\in M$.
\begin{ex}
Take the simple case $\mathcal{B}=\{0\}$. {For any choice of $\cG$, we have that $H(\mathcal{B})$
is the trivial groupoid $M\rightrightarrows M$.}
We obtain $$H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG}=M\times_{(
\iota_M,\pi)}\widetilde{\cG}\cong \pi^{-1}(1_M^{\cG})=\ker(\pi)$$
{where $\iota_M$ is the inclusion of the identity elements $1_M^{\cG}$ into $\cG$.}
Hence $(H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG})_0$ consists of the identity elements of $\widetilde{\cG}\rightrightarrows M$, {and therefore is isomorphic to the trivial groupoid $M\rightrightarrows M$}.
\end{ex}
\begin{remark}\label{rem:unique}
As $\pi \colon \widetilde{\cG}\to \cG$ has discrete fibers, for any path $\gamma$ in the $\bs$-fiber of $\cG$ starting at $1_x^{\cG}$ there exists a unique lift starting at $1_x^{\widetilde{\cG}}$, {\it i.e.}\; a unique path $\tau$ in $\widetilde{\cG}$ with $\gamma=\pi\circ \tau$ and $\tau(0)=1_x^{\widetilde{\cG}}$.
This shows that, in the characterization \eqref{eq:pathschar} of $(H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG})_0$, the path $\widetilde{g}(t)$ (hence in particular $\widetilde{g}$) is determined by the path $h(t)$: indeed, $\widetilde{g}(t)$ is the unique $\pi$-lift of $\Phi(h(t))$ starting at $1_x^{\widetilde{\cG}}$.
\end{remark}
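To illustrate the unique lifting property in the simplest case, view Lie groups as Lie groupoids over a point and take $\widetilde{\cG}=\ensuremath{\mathbb R}$, $\cG=S^1$, with $\pi(r)=e^{2\pi i r}$, so that $\ker(\pi)\cong \ensuremath{\mathbb Z}$ is discrete. The loop $\gamma(t)=e^{2\pi i t}$ based at the identity has the unique lift $\tau(t)=t$ starting at $0$; note that $\tau$ is not a loop, and its endpoint $\tau(1)=1\in \ker(\pi)$ records the class of $\gamma$ in $\pi_1(S^1)$.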
Notice that, if we fix a local generating set $\boldsymbol{\alpha}_1,\dots,\boldsymbol{\alpha}_n$ for $\mathcal{B}$, it gives rise to two path-holonomy bisubmersions: a $\cG$-bisubmersion and a $\widetilde{\cG}$-bisubmersion. The domains of both bisubmersions coincide (they are an open subset in $\ensuremath{\mathbb R}^n\times M$). The domains of the compositions of path-holonomy bisubmersions also coincide, hence if $\mathcal{U}$ is a $\cG$-path-holonomy atlas, then $\coprod_{U \in \mathcal{U}} U$ will be the domain of a $\widetilde{\cG}$-path-holonomy atlas too. Given corresponding bisubmersions
$(U,\varphi,\cG)$ and $(U,\widetilde{\varphi},\widetilde{\cG})$, {from Def. \ref{dfn:pathhol} we see that} $\varphi=\pi \circ \widetilde{\varphi}\colon U\to \cG$:
\begin{equation*}
\xymatrix{ U \ar[dd]^{\varphi
}\ar[dr]^{\widetilde{\varphi}}&\\
&\widetilde{\cG}\ar[dl]^{\pi}\\
\cG&}
\end{equation*}
We denote the quotient maps to the holonomy groupoids by $\natural|_U\colon U\to H(\mathcal{B})$ and
$\widetilde{\natural}|_U\colon U\to \widetilde{H}(\mathcal{B})$ respectively,
as in \S\ref{sub:con}.
{The central result of this subsection is the following.}
\begin{thm}\label{thm:tilde}
There is a canonical isomorphism over $Id_M$ of topological groupoids $$T\colon \widetilde{H}(\mathcal{B})\to (H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG})_0 \quad \widetilde{h}\mapsto (\natural u, \widetilde{\Phi}(\widetilde{h}))$$ where $u$ is any point in a path-holonomy atlas with $\widetilde{\natural} u=\widetilde{h}$.
\end{thm}
Clearly, under the above isomorphism, the canonical map $\widetilde{\Phi}\colon \widetilde{H}(\mathcal{B})\to \widetilde{\cG}$ corresponds to the second projection $(h,\widetilde{g})\mapsto \widetilde{g}$.
\begin{remark}
The relevant diagram is
\begin{equation} \label{eq:bigdia}
\xymatrix{
&\coprod_{U \in \mathcal{U}} U \ar@{-->}[d]_{\widetilde{\natural}}\ar[dl]_{{\natural}}\ar[dr]^{\widetilde{\varphi}}&\\
H(\mathcal{B})\ar[dr]_{\Phi} & \widetilde{H}(\mathcal{B}) \ar@{-->}[r]_{\widetilde{\Phi}} \ar@{-->}[l] &\widetilde{\cG}\ar[dl]^{\pi}\\
&\cG&}
\end{equation}
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:tilde}]
\underline{Claim:} \emph{$T$ is well-defined.}
We show that the map $$\widetilde{H}(\mathcal{B})\to H(\mathcal{B}),\; \widetilde{\natural} u\mapsto {\natural} u$$ is well-defined.
Let $U,V$ be $\widetilde{\cG}$-bisubmersions, and $u,v$ points with $\widetilde{\natural} u=\widetilde{\natural} v$. This means that there is a morphism of $\widetilde{\cG}$-bisubmersions $f\colon U\to V$ with
$f(u)=v$ (possibly shrinking $U$). Composing $\widetilde{\varphi}_V\circ f=\widetilde{\varphi}_U\colon U\to \widetilde{\cG}$ with $\pi \colon \widetilde{\cG}\to \cG$ we find ${\varphi_V}\circ f={\varphi_U}\colon U\to {\cG}$ (using $\pi \circ \widetilde{\varphi}_U ={\varphi_U}$). This shows that $f$ is also a morphism of $\cG$-bisubmersions, therefore ${\natural} u={\natural} v$.
Further, the image of $T$ is really contained in $(H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG})_0$. {Indeed, it is contained in the fiber product $H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG}$ because we have $\Phi(\natural u)=\varphi(u)=\pi(\widetilde{\varphi}(u))=\pi(\widetilde{\Phi}(\widetilde{h}))$ for all $u$. It is contained in the source-connected component of the identities because $\widetilde{H}(\mathcal{B})$ is source-connected and, as we shall see shortly, $T$ is a continuous groupoid morphism.} \hfill$\bigtriangleup$
\underline{Claim:} \emph{$T$ is a groupoid morphism.}
One checks directly that $\widetilde{H}(\mathcal{B})\to H(\mathcal{B}), \widetilde{\natural} u \mapsto \natural u$ is a morphism of groupoids. Also, $\widetilde{\Phi}\colon \widetilde{H}(\mathcal{B})\to \widetilde{\cG}$ is a morphism of groupoids by Thm. \ref{thm:holgroidconstr}.\hfill$\bigtriangleup$
\underline{Claim:} \emph{$T$ is continuous.}
{This holds since
$\widetilde{H}(\mathcal{B})\to H(\mathcal{B})$
is continuous (being the map induced by $\natural \colon \coprod_{U \in \mathcal{U}} U \to H(\mathcal{B})$ on the quotient $\widetilde{H}(\mathcal{B})$) and since
$\widetilde{\Phi}$ is continuous.
}
\underline{Claim:} \emph{$T$ is injective.}
Since $T$ is a morphism of groupoids, it suffices to check that
if $\natural v=1_x^{H(\mathcal{B})}$ and $\widetilde{\varphi}_V(v)=1_x^{\widetilde{\cG}}$ then
$\widetilde{\natural} v =1_x^{\widetilde{H}(\mathcal{B})}$, for any element $v$ in a bisubmersion $V$. Let $(U,\widetilde{\varphi}_U,
\widetilde{\cG})$ be a path-holonomy $\widetilde{\cG}$-bisubmersion containing $(0,x)$, so $U \subset \ensuremath{\mathbb R}^n\times M$. There
exists a morphism of $\cG$-bisubmersions $f\colon U\to V$ with $f(0,x)= v$, by the first assumption above. Notice that the diagram
\begin{equation*}
\xymatrix{
U\ar[dd]^f\ar[r]^{\widetilde{\varphi}_U}&\widetilde{\cG}\ar[dr]^{\pi}&\\
&&\cG\\
V\ar[r]^{\widetilde{\varphi}_V}&\widetilde{\cG}\ar[ur]_{\pi}&}
\end{equation*}
commutes, since $\varphi_U=\pi\circ\widetilde{\varphi}_U$, $\varphi_V=\pi\circ\widetilde{\varphi}_V$, and $f$ is a morphism of $\cG$-bisubmersions, so $\varphi_U=\varphi_V\circ f$.
Now consider $S:=(\{0\}\times M)\cap U$. We have $\widetilde{\varphi}_U(S)=1_S^{\widetilde{\cG}}$, an open subset of the identity bisection of $\widetilde{\cG}$, and $(\widetilde{\varphi}_V\circ f)(S)$ is a bisection of $\widetilde{\cG}$ which, by the second hypothesis above, contains $1_x^{\widetilde{\cG}}$. Both bisections map under $\pi$ to $1_S^{\cG}$, an open subset of the identity bisection of $\cG$, by the commutativity of the above diagram. Since $\pi$ has \emph{discrete} fibers, it follows that the two bisections of $\widetilde{\cG}$ agree, {\it i.e.}\; $\widetilde{\varphi}_U(S)=\widetilde{\varphi}_V(f(S))$. Hence Cor. \ref{cor:crucial} b) implies that there is a morphism of $\widetilde{\cG}$-bisubmersions $U\to V$ with $(0,x)\mapsto v$, hence $\widetilde{\natural}v=\widetilde{\natural}(0,x)=1_x^{\widetilde{H}(\mathcal{B})}$.\hfill$\bigtriangleup$
\underline{Claim:} \emph{$T$ is surjective.}
Let $(h,\widetilde{g})\in
H(\mathcal{B})\times \widetilde{\cG}$ so that there is a continuous path $(h(t),\widetilde{g}(t))$ from $(1_{x}^{H(\mathcal{B})},1_x^{\widetilde{\cG}})$ to $(h,\widetilde{g})$ with
$\bs_H(h(t))=\widetilde{\bs}(\widetilde{g}(t))=x$ and $\Phi(h(t))=\pi(\widetilde{g}(t))$.
We have to show that there is a bisubmersion $U$ in the path-holonomy atlas and $u\in U$ with $\natural u=h,\widetilde{\varphi}(u)=\widetilde{g}$. {Since $T$ is a groupoid morphism and every source-connected topological groupoid is generated by any symmetric neighbourhood of the identities, we can assume that $(h,\widetilde{g})$ is arbitrarily close to the set of identities. Hence, by Remark \ref{rem:open}, we can assume that there is a path holonomy bisubmersion $U_0$ such that $h\in \natural U_0$.}
Denote by $L\subset M$ the leaf of the foliation $\rho(\mathcal{B})$ through $x$. {As we shall show in \cite{AZ4},}
$H(\mathcal{B})|_L:=\bs_H^{-1}(L)$ has a smooth structure such that
for any $\cG$-bisubmersion $U$ in the path-holonomy atlas of $\mathcal{B}$, the quotient map $\natural\colon U|_L:=\bs_U^{-1}(L)\to H(\mathcal{B})|_L$ is a submersion. Hence there exists a continuous curve $u(t)$ {in $U_0$} with $\natural u(t)=h(t)$ and $u(0)=(0,x)$, where $(0,x)$ lies in a minimal path-holonomy $\cG$-bisubmersion. We claim that $u:=u(1)$ satisfies the above properties.
By definition we have $\natural u(1)=h(1)=h$. Now consider the part of diagram \eqref{eq:bigdia} with solid arrows and the paths:
\begin{equation*}
\xymatrix{&u(t) \ar[dl]_{{\natural}}\ar[dr]^{\widetilde{\varphi}}&\\
h(t) \ar[dr]_{\Phi} &
& \widetilde{\varphi}(u(t))\ar[dl]^{\pi}\\
&{\Phi}(h(t))&}
\end{equation*}
The path $\widetilde{\varphi}(u(t))$ is a $\pi$-lift of $\Phi(h(t))$, and its starting point is
$\widetilde{\varphi}((0,x))=1_x^{\widetilde{\cG}}$. The same holds for $\widetilde{g}(t)$. Hence by the uniqueness of the $\pi$-lift starting at $1_x^{\widetilde{\cG}}$ (see Remark \ref{rem:unique}) we obtain
$\widetilde{\varphi}(u(t))=\widetilde{g}(t)$, and evaluating at $t=1$ we get $\widetilde{\varphi}(u)=\widetilde{g}$.\hfill$\bigtriangleup$
\underline{Claim:} \emph{$T$ is a homeomorphism.}
It suffices to show that $T$ is an open map. Let $(U,\varphi,\cG)$ be a $\cG$-bisubmersion in the path-holonomy atlas. Then
$\widetilde{\natural} U$ is open in $\widetilde{H}(\mathcal{B})$,
{by Remark \ref{rem:open}. We will show that its image $T(\widetilde{\natural} U)$ is open.
The {Lie groupoid morphism $\pi \colon \widetilde{\cG}\to \cG$} is a local homeomorphism, hence the first projection $pr_1\colon H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG}\to H(\mathcal{B})$ is also a local homeomorphism.
Shrinking $U$ if necessary, we can assume that $T(\widetilde{\natural} U)=\{(\natural u, \widetilde{\varphi} u):u\in U\}$ is contained in an open subset $N$ of $(H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG})_0$ such that $pr_1|_N\colon N\to pr_1(N)$ is a homeomorphism onto an open subset of ${H}(\mathcal{B})$. The subset $pr_1(T(\widetilde{\natural} U))$ equals $\natural U$, which is open in $H(\mathcal{B})$
by Remark \ref{rem:open}. Hence $T(\widetilde{\natural} U)$ is open in $N$ and therefore in $(H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG})_0$.
Summarizing: for small enough bisubmersions $U$ in the path-holonomy atlas, $T$ maps the open subsets $\widetilde{\natural} U$ of $\widetilde{H}(\mathcal{B})$ to open subsets. Since any open subset of
$\widetilde{H}(\mathcal{B})$ is a union of such $\widetilde{\natural} U$, we are done.}
\hfill$\bigtriangleup$
\end{proof}
\begin{remark}\label{rem:ker} There is a groupoid morphism $$\widetilde{H}(\mathcal{B})\to H(\mathcal{B}),\; \widetilde{\natural} u\mapsto {\natural} u$$(see the first two claims in the proof of Theorem \ref{thm:tilde}), which is clearly surjective. Under the
canonical isomorphism $\widetilde{H}(\mathcal{B})\cong(H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG})_0$ it is given by the projection onto the first component.
{Its kernel is $(1_M^{H(\mathcal{B})}\times \widetilde{\cG})\cap (H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG})_0$, which is contained in
$\{(1_x^{H(\mathcal{B})},\widetilde{g}): x\in M, \pi(\widetilde{g})= 1_x^{\cG}\}\cong\ker \pi$. A more explicit description of the kernel can be obtained using equation \eqref{eq:pathschar}. (Here $\pi\colon\widetilde{\cG} \to \cG$ is the original covering map.)}
\end{remark}
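For instance, when $\mathcal{B}=\Gamma_c(A)$ the morphism $\widetilde{H}(\mathcal{B})\to H(\mathcal{B})$ is identified with $\pi\colon \widetilde{\cG}\to \cG$, and its kernel is all of $\ker \pi$. At the opposite extreme $\mathcal{B}=\{0\}$, the example after equation \eqref{eq:pathschar} shows that $(H(\mathcal{B})\times_{\Phi,\pi}\widetilde{\cG})_0$ consists only of identity elements, so the kernel is trivial.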
\subsubsection{Examples for Theorem \ref{thm:tilde}}
{We present a few examples for Theorem \ref{thm:tilde}. Ex. \ref{ex:notssc} in particular shows that even when we use a source simply connected Lie groupoid $\cG$ to construct the holonomy groupoid $H^{\cG}(\mathcal{B})$, the latter might not be source simply connected}.
\begin{ex}
When $\mathcal{B}=\Gamma_c(A)$, Theorem \ref{thm:tilde} and Example \ref{ex:GammaA} recover the obvious isomorphism $\widetilde{\cG}\cong \cG\times_{Id,\pi}\widetilde{\cG}$.
\end{ex}
\begin{ex}\label{ex:regfol}
Consider the case of a singular foliation $\mathcal{F}$. First we integrate the Lie algebroid $TM$ to the pair groupoid $\cG:=M \times M$, giving rise to $H(\mathcal{F}):=H^{\cG}(\mathcal{F})$, the holonomy groupoid of the singular foliation as in \cite{AndrSk} (see example \ref{ex:singfo}). We have $\Phi=(\bt_H,\bs_H)\colon H(\mathcal{F})\to M\times M$.
We can also integrate $TM$ to the fundamental groupoid $\widetilde{\cG}:=\Pi(M)$.
The construction of \S \ref{sub:con} gives rise to another topological groupoid $ \widetilde{H}(\mathcal{F}):=H^{\widetilde{\cG}}(\mathcal{F})$, which has a canonical groupoid morphism $\widetilde{\Phi}$ to $ \Pi(M)$.
By Theorem \ref{thm:tilde} we have $$\widetilde{H}(\mathcal{F})\cong (H(\mathcal{F})\times_{(\bt_H,\bs_H),\pi}\Pi(M))_0,$$
where $\pi \colon \Pi(M)\to M\times M$ is the target-source map of $\Pi(M)$ (sending the homotopy class of a path in $M$ to its endpoints).
\end{ex}
Example \ref{ex:regfol} can be made more explicit when $\mathcal{F}$ is a regular foliation.
\begin{prop}\label{prop:regfol}
When $\mathcal{F}$ is a regular foliation one obtains
\begin{equation}\label{eq:paths}
(H(\mathcal{F})\times_{(\bt_H,\bs_H),\pi}\Pi(M))_0= \big\{([\gamma],\langle \gamma \rangle_M): \gamma \text{ is a path in a leaf of the foliation}\big\}
\end{equation}
where $[\gamma]\in H(\mathcal{F})$ denotes the holonomy class of $\gamma$, and
$\langle \gamma \rangle_M\in \Pi(M)$ its \emph{homotopy} class (fixing endpoints) as a path in $M$.
\end{prop}
\begin{proof}
Use that, by equation \eqref{eq:pathschar}, $(H(\mathcal{F})\times_{(\bt_H,\bs_H),\pi}\Pi(M))_0$ equals
\begin{align*}
\big\{([\delta],\langle \sigma \rangle_M)\in
H(\mathcal{F})\times \Pi(M):& \exists \text{ homotopy $\{\delta^t\}$ in $L$ with $\delta^0=$(loop with trivial holonomy), $\delta^1=\delta$}\\ &\exists \text{
homotopy $\{\sigma^t\}$ in $M$ with $\sigma^0=$(contractible loop), $\sigma^1=\sigma$}\\
& \text{ such that $\delta^t(0)=\sigma^t(0)= x$ and $\delta^t(1)=\sigma^t(1)$ for all $t$}\big\},
\end{align*}
where
$L$ denotes the leaf through ${x:=\delta(0)=\sigma(0)}$.
Fix a path $\delta$ in $L$ and a path $\sigma$ in $M$ as above (in particular, both start at $x$ and have the same endpoint). We first focus on $\delta$.
Consider the map
$$S\colon [0,1]\times [0,1]\to L, (s,t)\mapsto \delta^t(s).$$
\begin{figure}[htp] \centering{
\includegraphics[scale=0.75]{Square.pdf}}
\end{figure}
As the square is contractible, the holonomy of the restriction of $S$ to the boundary is trivial. $S$ maps the left edge of the square to the constant path at $x$, and the lower edge to the loop $\delta^0$, which has trivial holonomy. Considering the remaining two edges we conclude that
$t \mapsto \gamma(t):=\delta^t(1)$ is a path in the leaf $L$ with the same starting/ending point and same holonomy as $\delta$, {\it i.e.}\;, $[\delta]=[\gamma]$.
Further $\widebar{\gamma}\circ \sigma$, which is defined by composing $\sigma$ with $\widebar{\gamma}(t):=\gamma(1-t)$, is a contractible loop in $M$ based at $x$. Indeed, {recalling that $\gamma(t)=\sigma^t(1)$, we see that} the family of loops $\widebar{\gamma|_{[0,t]}}\circ \sigma^t$ parametrized by $t\in[0,1]$ provides a contraction, since at time $t=0$ it equals the contractible loop $\sigma^0$. In other words, $\langle \sigma \rangle_M=\langle \gamma \rangle_M$. Altogether we get the inclusion ``$\subset$'' in equation \eqref{eq:paths}.
For the opposite inclusion, given a path $\gamma$ in a leaf, use the homotopy $\{\gamma^t\}$ defined by $\gamma^t(s)=\gamma(st)$ to deform it to {the constant path at $\gamma(0)$}.
\end{proof}
\begin{ex}\label{ex:notssc}
As above, let $\mathcal{F}$ be a regular foliation.
{Denote by $D$ the associated involutive distribution, which in particular is a Lie algebroid. We display three Lie groupoids integrating the Lie algebroid $D$. The first one is $H(\mathcal{F})$, the holonomy groupoid of $\mathcal{F}$. The second is $Mon(\mathcal{F})$, the monodromy groupoid of $\mathcal{F}$. It is a} source simply connected Lie groupoid, consisting of all homotopy classes (fixing endpoints) $\langle \gamma \rangle_{leaf}$ of paths $\gamma$ in the leaves of the foliation.
{The third one is $\widetilde{H}(\mathcal{F})$ as in Ex. \ref{ex:regfol}. (It integrates $D$ since it is the minimal integral of $D$ over $\Pi(M)$, by Prop. \ref{prop:HGBisHmin}.)}
{As for any (source connected) Lie groupoid integrating $D$, the Lie groupoid $\widetilde{H}(\mathcal{F})$ is a quotient of $Mon(\mathcal{F})$ and maps surjectively onto $H(\mathcal{F})$.}
Due to Proposition \ref{prop:regfol} the quotient maps read
$$Mon(\mathcal{F})\to \widetilde{H}(\mathcal{F})\to H(\mathcal{F}),\;\; \langle \gamma \rangle_{leaf} \mapsto
([\gamma], \langle \gamma \rangle_M)\mapsto [\gamma].$$
{In particular {we see that}, even though $\widetilde{H}(\mathcal{F})$ was constructed using a source simply connected Lie groupoid $\widetilde{\cG}$ in Ex. \ref{ex:regfol}, $\widetilde{H}(\mathcal{F})$ itself is not source simply connected in general.}
\end{ex}
\section{Morphisms of holonomy groupoids covering the identity}\label{section:morph}
In \S \ref{section:HB}, starting from a Lie groupoid $\cG$ and singular subalgebroid $\mathcal{B}$ of the Lie algebroid $Lie(\cG)$, we constructed the holonomy groupoid $H^{\cG}(\mathcal{B})$ endowed with a map of topological groupoids to $\cG$.
In this section we extend this construction to morphisms covering the identity: given a suitable ``morphism between singular subalgebroids'', we construct a morphism between the associated holonomy groupoids.
More precisely, given a morphism of Lie groupoids $F\colon \cG_1\to \cG_2$ covering the identity on the base and singular subalgebroids $\mathcal{B}_i$ of $Lie(\cG_i)$ (${i=1,2}$)
with
$F_*(\mathcal{B}_1)\subset \mathcal{B}_2$, there is a canonical morphism of topological groupoids
$$H^{\cG_1}(\mathcal{B}_1)\to H^{\cG_2}(\mathcal{B}_2)$$ commuting with the canonical maps (see Theorem \ref{thm:morph}).
We consider the simple case of {``surjective morphisms between singular subalgebroids''} in \S\ref{subsec:fol}.
The integration of arbitrary morphisms (covering the identity) then follows easily in \S\ref{sec:genmor}. All examples are collected in \S\ref{sec:morex}, where
we recover in a unified fashion several of the constructions we already gave.
We describe the resulting functor in \S\ref{sec:functor}.
{It is only for the sake of presentation that} in this section we restrict ourselves to morphisms covering the identity on the base manifold. Analogous results for morphisms covering surjective submersions hold, and are collected in Appendix \ref{sec:appmorsub} (see Thm. \ref{thm:morphsub}).
\subsection{{Surjective morphisms covering the identity}}\label{subsec:fol}
\begin{prop}\label{prop:hbhfb}
Let $F\colon \cG_1\to \cG_2$ be a morphism of Lie groupoids over $M$, covering $Id_M$.
Let $\mathcal{B}_1$ be a singular subalgebroid of $A_1:=Lie(\cG_1)$, and $$\mathcal{B}_2:=F_*(\mathcal{B}_1):=\{F_*(\boldsymbol{\alpha}):\boldsymbol{\alpha}\in \mathcal{B}_1\},$$ which clearly is a singular subalgebroid of $ Lie(\cG_2)$.
Then there is a canonical, surjective morphism of topological groupoids $$\Xi \colon H^{\cG_1}(\mathcal{B}_1)\to H^{\cG_2}(\mathcal{B}_2)$$ covering $Id_M$ and
making the following diagram commute:
\begin{equation*}
\xymatrix{
H^{\cG_1}( {\mathcal{B}}_1) \ar[d]^{\Phi_1} \ar@{-->}[r]^{\Xi} &H^{\cG_2}(\mathcal{B}_2) \ar[d]^{\Phi_2} \\
\cG_1 \ar[r]^{F} & \cG_2 }
\end{equation*}
\end{prop}
\begin{proof}
Let $x \in M$ and $\boldsymbol{\alpha}_1,\dots,\boldsymbol{\alpha}_n \in \mathcal{B}_1$ such that $[\boldsymbol{\alpha}_1],\dots,[\boldsymbol{\alpha}_n]$ is a basis of $\mathcal{B}_1/I_x\mathcal{B}_1$. Denote by $(U,\varphi, \cG_1)$ the associated path holonomy bisubmersion, where $U\subset \ensuremath{\mathbb R}^n\times M$.
{\underline{Claim:} \emph{$(U,F\circ \varphi,\cG_2)$ is
the path holonomy bisubmersion for $\mathcal{B}_2$
associated to the (not necessarily minimal) set of local generators $\{F_*(\boldsymbol{\alpha}_1),\dots,F_*(\boldsymbol{\alpha}_n)\}$ of $\mathcal{B}_2$. }
}
\begin{equation*}
\xymatrix{
U\ar[d]^{\varphi} & \\
\cG_1 \ar[r]^{F} & \cG_2 }
\end{equation*}
By definition \ref{dfn:pathhol} we have
$\varphi\colon U \to \cG_1, (\lambda,x)\mapsto \mathrm{exp}_x \sum \lambda_i \overset{\rightarrow}{\boldsymbol{\alpha}_i}$.
Composing with $F$ we obtain
\begin{equation*}
(F\circ \varphi)((\lambda,x))=F(\mathrm{exp}_x \sum \lambda_i \overset{\rightarrow}{\boldsymbol{\alpha}_i})=\mathrm{exp}_{x} \sum \lambda_i \overrightarrow{F_*(\boldsymbol{\alpha}_i)}.
\end{equation*}
Here the second equality holds because $F$ being a Lie groupoid morphism implies that $F_*(\overset{\rightarrow}{\boldsymbol{\alpha}_i})=\overrightarrow{F_*(\boldsymbol{\alpha}_i)}$.
\hfill$\bigtriangleup$
Consider a family $(U_i,\varphi_i,\cG_1)_{i \in I}$ of {path holonomy} bisubmersions for $\mathcal{B}_1$ such that $M = \cup_{i \in I}\bs_{i}(U_i)$. Let $\mathcal{U}$ be the path holonomy atlas it generates (definition \ref{def:pathholatlas}), {\it i.e.}\; the collection of the $U_i$'s together with their inverses and finite compositions. Since $F$ is a Lie groupoid morphism over $Id_M$, the family $\{(U,F\circ \varphi, \cG_2):U
\in \mathcal{U}\}$
defines an atlas of bisubmersions for $\mathcal{B}_2$.
{Denote by $\sim_{i}$ the equivalence relation defined on $\coprod_{U \in \mathcal{U}}U$ viewing the $U$'s as $\cG_i$-bisubmersions for ${\mathcal{B}_i}$, for $i=1,2$.
The equivalence classes of $\sim_{1}$ are contained in those of $\sim_{2}$,} inducing a surjective morphism of topological groupoids
$$ H^{\cG_1}(\mathcal{B}_1)=\coprod_{U \in \mathcal{U}} U/\sim_{1}\to \coprod_{U \in \mathcal{U}} U/\sim_{2}.$$
The latter groupoid is the holonomy groupoid $H^{\cG_2}(\mathcal{B}_2)$, by Proposition \ref{prop:equivpathhol} and Rem. \ref{rem:HHu}. {The morphism is independent of the chosen path-holonomy bisubmersions and hence canonical.}
\end{proof}
{We consider the special case in which the Lie groupoid morphism $F\colon \cG_1\to \cG_2$ is injective (then $F_*\colon A_1\to A_2$ is injective, and therefore induces an isomorphism of $C^{\infty}(M)$-modules $\mathcal{B}_1\cong \mathcal{B}_2$). We show that in that case it makes no difference whether we regard $\mathcal{B}_1$ as a singular subalgebroid of $Lie(\cG_1)$ or as a singular subalgebroid of $Lie(\cG_2)$, for the
corresponding holonomy groupoids are isomorphic.}
\begin{cor}\label{prop:morphG}
If the Lie groupoid morphism $F\colon \cG_1\to \cG_2$ is injective, then the canonical morphism of topological groupoids $\Xi \colon H^{\cG_1}(\mathcal{B}_1)\to H^{\cG_2}(\mathcal{B}_2)$ of Prop. \ref{prop:hbhfb} is an isomorphism.
\end{cor}
\begin{proof}
{We just have to show that $\Xi$ is injective.
To this aim, we show that the equivalence classes of the equivalence relation $\sim_2$ appearing in the proof of Prop. \ref{prop:hbhfb} are contained in those of $\sim_1$. Let $u\in U$ and $v\in V$ be points of bisubmersions in $\mathcal{U}$. Assume that $u\sim_2 v$.
By definition, this means that there is a locally defined map $\tau\colon U\to V$ taking $u$ to $v$ and which is a morphism of $\cG_2$-bisubmersions, that is, $$(F \circ \varphi_U)=(F \circ\varphi_V)\circ \tau.$$
Since $F$ is injective, it follows that $\varphi_U= \varphi_V\circ \tau$, that is, $\tau$ is a morphism of $\cG_1$-bisubmersions. In particular, $u\sim_1 v$.}
\end{proof}
\begin{remark}
If we only assume that the Lie algebroid map $F_*$ is injective, but $F\colon \cG_1\to \cG_2$ itself is not, then $\Xi$ is not injective in general.
This can be seen already in the case that $\cG_1$ and $\cG_2$ are non-isomorphic Lie groupoids integrating the same Lie algebroid $A$, and $F\colon \cG_1\to \cG_2$ is a surjective but not injective map
inducing the identity at the level of Lie algebroids {(use Ex. \ref{ex:GammaA})}.
\end{remark}
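To spell out the suggestion in the remark above: take $\cG_1=\ensuremath{\mathbb R}$ and $\cG_2=S^1$ (Lie groups viewed as Lie groupoids over a point), $F\colon \ensuremath{\mathbb R}\to S^1$ the surjective morphism $r\mapsto e^{2\pi i r}$, and $\mathcal{B}_1=\Gamma_c(\ensuremath{\mathbb R})$, so that $F_*$ is the identity at the level of Lie algebroids. By Example \ref{ex:GammaA} we have $H^{\cG_1}(\mathcal{B}_1)=\ensuremath{\mathbb R}$ and $H^{\cG_2}(\mathcal{B}_2)=S^1$ with $\Phi_i=Id$, so the commutativity of the diagram in Prop. \ref{prop:hbhfb} forces $\Xi=F$, which is not injective.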
\subsection{{Arbitrary morphisms covering the identity}}
\label{sec:genmor}
We now extend Prop. \ref{prop:hbhfb} by
removing the assumption that the map $F_*|_{\mathcal{B}_1} \colon \mathcal{B}_1\to \mathcal{B}_2$ be surjective.
We first look at two singular subalgebroids of the same Lie algebroid, one containing the other.
\begin{lemma}\label{prop:morph0}
Let $\cG$ be a Lie groupoid over $M$, and denote its Lie algebroid by $A$. Let $\mathcal{B},\widetilde{\mathcal{B}}$ be singular subalgebroids of $A$, with $\mathcal{B} \subset \widetilde{\mathcal{B}}$.
Then there is a canonical morphism of topological groupoids $H^{\cG}(\mathcal{B})\to H^{\cG}(\widetilde{\mathcal{B}})$ making the following diagram commute:
\begin{equation*}
\xymatrix{
H^{\cG}( {\mathcal{B}}) \ar[rd]_{\Phi} \ar@{-->}[rr] & & H^{\cG}(\widetilde{\mathcal{B}}) \ar[ld]^{\widetilde{\Phi}} \\
&\cG & }
\end{equation*}
\end{lemma}
\begin{proof}
Let $x\in M$. Let $\{\boldsymbol{\alpha}_1,\dots,\boldsymbol{\alpha}_n\}$ be a minimal set of local generators of $\mathcal{B}$ near $x$, that is,
$[\boldsymbol{\alpha}_1],\dots,[\boldsymbol{\alpha}_n]$ form a basis of the vector space
$\mathcal{B}/I_x\mathcal{B}$. Let
$(U_0,\varphi,\cG)$ be the corresponding (minimal) path holonomy bisubmersion for $\mathcal{B}$, hence $U_0\subset \ensuremath{\mathbb R}^n\times M$.
The inclusion $\mathcal{B}\subset \widetilde{\mathcal{B}}$ induces a linear map $J\colon \mathcal{B}/I_x\mathcal{B} \to \widetilde{\mathcal{B}}/I_x\widetilde{\mathcal{B}}$, which is generally not injective. Completing the image of the above basis to a spanning set of
$\widetilde{\mathcal{B}}/I_x\widetilde{\mathcal{B}}$ we obtain a generating set
$\{\boldsymbol{\alpha}_1,\dots,\boldsymbol{\alpha}_n, \boldsymbol{\gamma}_1,\dots,\boldsymbol{\gamma}_k\}$ of
$\widetilde{\mathcal{B}}$ near $x$, {see Remark \ref{rem:basis}}. Let $(\widetilde{U}_0,\widetilde{\varphi},\cG)$
be the corresponding path holonomy bisubmersion for $\widetilde{\mathcal{B}}$, so $\widetilde{U}_0\subset \ensuremath{\mathbb R}^n\times \ensuremath{\mathbb R}^k \times M$. The inclusion
$$\iota \colon U_0\to \widetilde{U}_0, (\lambda,y)\mapsto (\lambda,0,y)$$
commutes with the maps $\varphi\colon U_0\to \cG$ and $\widetilde{\varphi} \colon \widetilde{U}_0\to \cG$ associated to the bisubmersions
as in Definition \ref{dfn:pathhol}.
Let $\mathcal{U}$ denote the atlas for $\mathcal{B}$ generated by bisubmersions $U_0$ as above, and $\widetilde{\mathcal{U}}$ the atlas for $\widetilde{\mathcal{B}}$ generated by the corresponding $\widetilde{U}_0$ as above. {The inclusion}
$\iota$ extends in a straightforward way to compositions of bisubmersions, and hence to a map
\begin{equation}\label{eq:utildeu}
\iota\colon \coprod_{U\in \mathcal{U}}U \to \coprod_{\widetilde{U}\in \widetilde{\mathcal{U}}}\widetilde{U}
\end{equation}
commuting with the canonical maps to $\cG$.
The latter property assures that
if {$u\in U$ for some element $U$ of the atlas $\mathcal{U}$}, and $\mathbf{b}$ is a bisection of $U$ through $u$, then its image $\iota(\mathbf{b})$ is a bisection of $\widetilde{U}$ through $\iota(u)$ {and both carry the same bisection of $\cG$}. {By Remark \ref{rem:HGBbisec}} this implies that the map \eqref{eq:utildeu} descends to a morphism of topological groupoids
\begin{equation}\label{eq:utildeugpd}
H^{\cG}(\mathcal{B})=\coprod_{U\in \mathcal{U}}U/\sim \;\; \to\;\; \coprod_{\widetilde{U}\in \widetilde{\mathcal{U}}}\widetilde{U}/\sim = H^{\cG}(\widetilde{\mathcal{B}})
\end{equation}
which commutes with the canonical maps to $\cG$, {and also that this morphism is
independent of the chosen atlas $\mathcal{U}$ and hence
canonical}.
{Notice that the topological groupoid on the r.h.s. is really $H^G(\widetilde{\mathcal{B}})$, by Proposition \ref{prop:equivpathhol} and Rem. \ref{rem:HHu}.}
\end{proof}
\begin{remark}
{The morphism $H^G(\mathcal{B})\to H^G(\tilde{\mathcal{B}})$ in Lemma \ref{prop:morph0} is not injective in general. For instance, taking $G=M\times M$, $\mathcal{B}$ a foliation and $\widetilde{\mathcal{B}}=\Gamma_c(TM)$, this map is the target-source map of the holonomy groupoid of the foliation.}
\end{remark}
The next theorem generalizes Prop. \ref{prop:hbhfb} and establishes the functoriality of the holonomy groupoid construction:
\begin{thm}\label{thm:morph}
Let $F\colon \cG_1\to \cG_2$ be a morphism of Lie groupoids covering $Id_M$. Let $\mathcal{B}_i$ be a singular subalgebroid of $Lie(\cG_i)$ for $i=1,2$,
such that
$F_*(\mathcal{B}_1)\subset \mathcal{B}_2$.
Then there is a canonical morphism of topological groupoids $$\Xi \colon H^{\cG_1}(\mathcal{B}_1)\to H^{\cG_2}(\mathcal{B}_2)$$ covering $Id_M$ and
making the following diagram commute:
\begin{equation*}
\xymatrix{
H^{\cG_1}( {\mathcal{B}}_1) \ar[d]^{\Phi_1} \ar@{-->}[r]^{\Xi} &H^{\cG_2}(\mathcal{B}_2) \ar[d]^{\Phi_2} \\
\cG_1 \ar[r]^{F} & \cG_2 }
\end{equation*}
\end{thm}
\begin{proof}
Compose the canonical morphism $H^{\cG_1}(\mathcal{B}_1)\to H^{\cG_2}(F_*(\mathcal{B}_1))$ given by Prop. \ref{prop:hbhfb} with the canonical morphism $H^{\cG_2}(F_*(\mathcal{B}_1))\to H^{\cG_2}(\mathcal{B}_2)$ given by Lemma \ref{prop:morph0}.
\end{proof}
\subsection{Examples}\label{sec:morex}
We display a few examples for
Prop. \ref{prop:hbhfb}, recovering in a unified manner several results obtained so far. {Then we display examples} for Thm. \ref{thm:morph}.
\begin{ex}[Images of Lie algebroid morphisms]
Let $F\colon \cK\to\cG$ be a Lie groupoid morphism over $Id_M$.
Applying Prop. \ref{prop:hbhfb} with $\mathcal{B}_1:=\Gamma_c(Lie(\cK))$
{we obtain a canonical surjective morphism $$H^{\cK}(\mathcal{B}_1)\to
H^{\cG}(F_*(\mathcal{B}_1)).$$
In particular the latter groupoid} is a quotient of {$H^{\cK}(\mathcal{B}_1)=K$ (here we used Ex. \ref{ex:GammaA})}, recovering part of Cor. \ref{cor:HBimage}.
\end{ex}
\begin{ex}[The underlying singular foliation $\mathcal{F}_{\mathcal{B}}$]\label{ex:folia}
Let $\cG\rightrightarrows M$ be a Lie groupoid, and $\mathcal{B}$ a singular subalgebroid of $A:=Lie(\cG)$. Consider the Lie groupoid morphism
$$F=(\bt,\bs)\colon \cG \to M\times M$$ over the identity $Id_M$. The corresponding Lie algebroid morphism is the anchor map $\rho \colon A\to TM$, hence $F_*(\mathcal{B})=\mathcal{F}_{\mathcal{B}}$, the singular foliation on $M$ induced by $\mathcal{B}$.
Prop. \ref{prop:hbhfb} implies the existence of a canonical surjective morphism of topological groupoids
\begin{equation}\label{eq:hghfact}
H^{\cG}(\mathcal{B})\to H(\mathcal{F}_{\mathcal{B}})
\end{equation}
to the holonomy groupoid \cite{AndrSk} of the singular foliation $\mathcal{F}_{\mathcal{B}}$.
\end{ex}
\begin{remark}
{We give an alternative description of the morphism \eqref{eq:hghfact} in the special case that
$\mathcal{B}$ arises from a Lie groupoid morphism (see Def. \ref{def:arises}), i.e
there is a Lie groupoid morphism $\Psi\colon \cK\to\cG$ over $Id_M$ such that $\mathcal{B}=\Psi_*(\Gamma_c(Lie(\cK)))$.
The composition $$\cK\overset{\Psi}{\to} G\overset{(\bt_G,\bs_G)}{\to} M\times M$$
is the target-source map $(\bt_K,\bs_K)$ of $\cK$. It gives rise to the singular foliation $\mathcal{F}_\mathcal{B}$, hence
$H(\mathcal{F}_\mathcal{B})=\cK/\sim_2$ where $\sim_2$ {identifies two points of $\cK$ if{f} there are bisections through them that carry the same diffeomorphism of $M$ (see Example \ref{ex:explicitgr})}. Since by assumption $\mathcal{B}$ arises from the Lie groupoid morphism $\Psi$,
by Prop. \ref{cor:HBimage} we have
$H^{\cG}(\mathcal{B})=\cK/\sim_1,$ where $\sim_1$ {identifies two points of $\cK$ if{f} there are bisections through them that map under $\Psi$ to the same bisection of $G$
(see Rem. \ref{rem:bisec})}. The equivalence classes of $\sim_1$ are contained in those of $\sim_2$, giving rise to a morphism $H^{\cG}(\mathcal{B})\to H(\mathcal{F}_{\mathcal{B}})$. The latter agrees with
the morphism \eqref{eq:hghfact}.
}
\end{remark}
\begin{ex}[Varying the Lie groupoid $\cG$]
Let $$F\colon \tilde{\cG}\to\cG$$ be a morphism of Lie groupoids integrating the identity on the Lie algebroid $A:=Lie( \tilde{\cG})=Lie(\cG)$. Let $\mathcal{B}$ be a singular subalgebroid of $A$.
Clearly, $F_*(\mathcal{B})=\mathcal{B}$. Prop. \ref{prop:hbhfb} implies the existence of a canonical surjective morphism of topological groupoids
$$ {H}^{\tilde{\cG}}(\mathcal{B})\to H^{\cG}({\mathcal{B}})$$ from the holonomy groupoid of $\mathcal{B}$ constructed using $\tilde{\cG}$ to the
holonomy groupoid of $\mathcal{B}$ constructed using ${\cG}$. {Under the identification of
Theorem \ref{thm:tilde}, this map is just the first projection, since both maps are induced by the identity map at the level of bisubmersions.}
\end{ex}
{We now present examples of Thm. \ref{thm:morph}. }
We start with a simple statement about singular foliations:
\begin{ex}[Singular foliations]\label{ex:morsingfol}
{Let $\mathcal{F}_1$, $\mathcal{F}_2$ be singular foliations on a manifold $M$, with $\mathcal{F}_1 \subset \mathcal{F}_2$.
Then there is a canonical morphism of topological groupoids $H(\mathcal{F}_1)\to H(\mathcal{F}_2)$ covering the identity. }
\end{ex}
{The following example} shows that the canonical map $\Phi\colon H^{\cG}(\mathcal{B})\to \cG$ arises
from the inclusion $\mathcal{B}\hookrightarrow \Gamma_c(A)$.
\begin{ex}[{Recovering $\Phi$}]\label{ex:Phi}
{Let $G$ be a Lie groupoid, and $\mathcal{B}$ be a Lie subalgebroid of $A:=Lie(G)$.} By Lemma \ref{prop:morph0}, the inclusion $\mathcal{B}\subset \Gamma_c(A)$ induces a
canonical morphism of topological groupoids $H^{\cG}(\mathcal{B})\to H^{\cG}(\Gamma_c(A))$ making the following diagram commute:
\begin{equation*}
\xymatrix{
H^{\cG}( {\mathcal{B}}) \ar[rd]_{\Phi} \ar@{-->}[rr] & & H^{\cG}(\Gamma_c(A)) \ar[ld] \\
&\cG & }
\end{equation*}
But $H^{\cG}(\Gamma_c(A))=\cG$ and the right map above is $Id_{\cG}$, by Ex. \ref{ex:GammaA}. Hence the canonical morphism dotted above {is exactly $\Phi$}.
\end{ex}
{Specializing Thm. \ref{thm:morph} to wide Lie subalgebroids and using Prop. \ref{prop:HGBisHmin} to relate holonomy groupoids with minimal integrals, we obtain the following example.
\begin{ex}
Let $F\colon \cG_1\to \cG_2$ be a morphism of Lie groupoids covering the identity on $M$. Let $B_i$ be a wide Lie subalgebroid of $Lie(\cG_i)$ for $i=1,2$,
such that
$F_*(B_1)\subset B_2$. Denote by $H_{min}^i$ the minimal integral of $B_i$ over $G_i$.
Then there is a canonical morphism of Lie groupoids $$\Xi \colon H_{min}^1\to H_{min}^2$$ covering $Id_M$ and which, together with $F$, intertwines the immersions $H_{min}^i\to G_i$ integrating the inclusions.
\end{ex}
}
\subsection{The integration functor}
\label{sec:functor}
The purpose of this subsection is to put in place our ``integration'' process, describing it as a functor. The term ``integration'' is in quotes since we have not yet specified in which sense the holonomy groupoid $H^{\cG}(\mathcal{B})$ is an integration of the singular subalgebroid $\mathcal{B}$ of $Lie(\cG)$. This will be done in a separate publication \cite{AZ4}.
We fix a manifold $M$ and consider two categories. The first one, denoted by $\textsf{SingSub}^{Gpd}_M$, is:
\begin{itemize}
\item objects:
\vspace{-1mm}\begin{align*}
\{(\cG,\mathcal{B})|\;& \cG \text{ a Lie groupoid over $M$},\\
&\mathcal{B} \text{ a singular subalgebroid of } Lie(\cG)\}
\end{align*}
\item arrows from $(\cG_1,\mathcal{B}_1)$ to $(\cG_2,\mathcal{B}_2)$:
\vspace{-1mm}
\begin{align*}
\{F\colon \cG_1\to \cG_2 &\text{ a morphism of Lie groupoids covering $Id_M$},\\ &\text{ such that } F_*(\mathcal{B}_1)\subset \mathcal{B}_2\}
\end{align*}
\end{itemize}
The second
category, denoted by $\textsf{TopGrd}_M$, is:
\begin{itemize}
\item objects:
\vspace{-1mm}
\begin{align*}
\{\Phi\colon H\to \cG|&\; H \text{ a topological groupoid over $M$},\\
& \;\cG \text{ a Lie groupoid over $M$,}\\
&\; \Phi \text{ a morphism of topological groupoids covering $Id_M$}\}
\end{align*}
\item arrows from $(\Phi_1\colon H_1\to \cG_1)$ to $(\Phi_2\colon H_2\to \cG_2)$:
\vspace{-1mm}
\begin{align*}
\{(\Xi,F)|& \;\Xi\colon H_1\to H_2 \text{ a morphism of topological groupoids over $Id_M$},\\
&\; F \colon \cG_1\to \cG_2 \text{ a morphism of Lie groupoids over $Id_M$},\\
& \;\text{s.t.
the diagram below commutes}\}
\end{align*}
\begin{equation*}
\xymatrix{
H_1 \ar[d]_{\Phi_1} \ar[r]^{\Xi} &H_2 \ar[d]^{\Phi_2} \\
\cG_1 \ar[r]^{F} & \cG_2 }
\end{equation*}
\end{itemize}
Our construction provides a functor
\begin{align*}
\text{$\textsf{SingSub}^{Gpd}_M$}\;\;\;\;\;&\to\;\;\;\;\;\text{$\textsf{TopGrd}_M$}\\
(\cG,\mathcal{B})\;\;\;\;\;&\mapsto\;\;\;\;\; H^{\cG}(\mathcal{B})\\
F\;\;\;\;\;&\mapsto\;\;\;\;\; (\Xi, F)
\end{align*}
where $H^{\cG}(\mathcal{B})$ is constructed as in Definition \ref{def:holgroupoid} and
$\Xi$ is constructed as in Theorem \ref{thm:morph}. This is really a functor, due to the canonicity of our constructions.
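Spelled out, the functor property amounts to the two usual axioms; writing $\Xi_F$ for the morphism assigned to an arrow $F$ (a subscript notation we introduce only for this remark), the canonicity of our constructions gives:

```latex
% Functor axioms for (\cG,\mathcal{B}) \mapsto (\Phi\colon H^{\cG}(\mathcal{B})\to\cG):
\[
\Xi_{Id_{\cG}} = Id_{H^{\cG}(\mathcal{B})},
\qquad
\Xi_{F_{2}\circ F_{1}} = \Xi_{F_{2}}\circ \Xi_{F_{1}}
\quad \text{for composable arrows } F_{1},\,F_{2}.
\]
```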
\section{Introduction}
Dynamic programming principle (DPP), originated by Bellman in the 1950s, is a
powerful tool to solve optimal control problems. Since then, DPP and the related
Hamilton-Jacobi-Bellman (HJB) equations have been intensively studied by many
researchers for various kinds of stochastic optimal control problems
(see \cite{Bay1,Bay2,Buc1,Buc2,Buckdahn-Li,Li-W,
Ma-Y-2,Peng-1992,Tang1,Yong-Zhou,Zhou-1990-2,Zhou-1991} and the references therein).
In this paper, we study the existence and uniqueness of the viscosity solution
to the following HJB equation,
\begin{equation}
\left\{
\begin{array}
[c]{l}
\partial_{t}W(t,x)+\inf\limits_{u\in U}H(t,x,W(t,x),DW(t,x),D^{2}W\left(
t,x\right) ,u)=0,\\
W(T,x)=\phi(x),
\end{array}
\right. \label{intro-hjb}
\end{equation}
where
\begin{equation}
\begin{array}
[c]{l}
H(t,x,v,p,A,u)\\
=\frac{1}{2}\mathrm{tr}[\sigma\sigma^{\intercal}(t,x,v,V(t,x,v,p,u),u)A]+p^{\intercal}b(t,x,v,V(t,x,v,p,u),u)\\
\ \ +g(t,x,v,V(t,x,v,p,u),u),\\
V(t,x,v,p,u)=p^{\intercal}\sigma(t,x,v,V(t,x,v,p,u),u),\\
(t,x,v,p,A,u)\in\lbrack0,T]\times\mathbb{R}^{n}\times\mathbb{R}\times
\mathbb{R}^{n}\times\mathbb{S}^{n}\times U.
\end{array}
\label{intro-h}
\end{equation}
To solve (\ref{intro-hjb}) we first need to find the solution $V$ of the algebraic
equation in (\ref{intro-h}). Once $V$ is computed, it is
plugged into $H(\cdot)$, which yields the true generator of (\ref{intro-hjb}).
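The role of the algebraic equation can be illustrated with a toy scalar sketch (our own illustration, not the paper's method; `solve_V` and the sample $\sigma$ are hypothetical). Assuming the map $V\mapsto p^{\intercal}\sigma(t,x,v,V,u)$ is a contraction (e.g. $\sigma$ Lipschitz in $z$ with a small enough constant, in the spirit of the contraction-mapping argument used later in the paper), Picard iteration finds the unique $V$:

```python
def solve_V(p, sigma, t, x, v, u, tol=1e-12, max_iter=1000):
    """Fixed-point (Picard) iteration for the scalar algebraic equation
    V = p * sigma(t, x, v, V, u); converges when the map is a contraction in V."""
    V = 0.0
    for _ in range(max_iter):
        V_new = p * sigma(t, x, v, V, u)
        if abs(V_new - V) < tol:
            return V_new
        V = V_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy coefficient: sigma = 1 + z/2 with p = 1/2, so V solves V = (1 + V/2)/2.
V = solve_V(0.5, lambda t, x, v, z, u: 1.0 + 0.5 * z, 0.0, 0.0, 0.0, None)
print(round(V, 6))  # 0.666667, i.e. V = 2/3
```

Once $V$ is known it is substituted into $H$, exactly as in the text; in the vector case the same iteration runs on $V\in\mathbb{R}^{1\times d}$.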
This kind of problem has the following stochastic optimal control
interpretation. The controlled system is described by the following fully
coupled forward-backward stochastic differential equation (FBSDE):
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}^{t,x;u}= & b(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s})ds+\sigma(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s})dB_{s},\\
dY_{s}^{t,x;u}= & -g(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s})ds+Z_{s}^{t,x;u}dB_{s},\\
X_{t}^{t,x;u}= & x,\ Y_{T}^{t,x;u}=\phi(X_{T}^{t,x;u}),\;s\in\lbrack t,T],
\end{array}
\right. \label{intro-fbsde}
\end{equation}
where $B=(B_{s})_{s\in\lbrack t,T]}$ is a standard $d$-dimensional Brownian
motion and $u\in\mathcal{U}[t,T]$ is an admissible control. The cost
functional is defined by the solution to the backward stochastic differential
equation (BSDE) at time $t$ in (\ref{intro-fbsde}) and the value function of
our control problem is
\begin{equation}
W(t,x)=\underset{u\in\mathcal{U}[t,T]}{ess\inf}Y_{t}^{t,x;u}.
\label{intro-value}
\end{equation}
The fully coupled forward-backward stochastic control problem
$(\ref{intro-fbsde})-(\ref{intro-value})$ is of independent interest in
mathematical finance, arising for instance in stochastic differential utility,
leader-follower stochastic differential games, and principal-agent problems.
It is worth pointing out that the extra algebraic equation in (\ref{intro-h}) stems from
the fact that the solution procedure for the FBSDE depends on a relationship between the
solution $Z$ of the backward SDE and the diffusion coefficient of the forward
SDE. This fact was first revealed in the famous ``Four Step Scheme" for solving
an FBSDE; essentially, the algebraic equation is
exactly the first step of that scheme (see \cite{MPY,Ma-Y}).
It is well-known that the above value function need not be as smooth
as we would like; this is why the notion
of viscosity solution was introduced (see \cite{Crandall-lecture,Lions2} and references
therein). When the coefficients $b$ and $\sigma$ of (\ref{intro-fbsde}) are
independent of the variables $y$ and $z$, Peng \cite{Peng-1992, Peng-lecture}
first proved that the $W$ defined above is a viscosity solution to
(\ref{intro-hjb}). For this case, the uniqueness of the viscosity solution to
(\ref{intro-hjb}) can be obtained by applying the method of Barles, Buckdahn
and Pardoux \cite{Baeles-BP} (see Theorem 5.3 in \cite{Buckdahn-Li} for details).
When $b$ and $\sigma$ depend on $y$ and $z$ in (\ref{intro-fbsde}), the
control system (\ref{intro-fbsde}) becomes a fully coupled FBSDE and the
corresponding HJB equation (\ref{intro-hjb}) becomes a fully nonlinear
parabolic partial differential equation (PDE) coupled with an algebraic
equation, which makes the solvability and uniqueness of (\ref{intro-hjb})
extraordinarily difficult. Note that when (\ref{intro-fbsde}) is
independent of the control variable $u$, the HJB equation (\ref{intro-hjb})
degenerates to a semilinear parabolic PDE. In fact, even for this extreme
case, the well-posedness of (\ref{intro-hjb}) is still an open problem,
posed by Peng \cite{Peng99}. Recently, Li and Wei \cite{Li-W} and Li
\cite{Li jun} proved that $W$ is a viscosity solution to (\ref{intro-hjb})
under monotonicity conditions on $b$, $\sigma$, $g$ and $\phi$. As for
the uniqueness of the viscosity solution, there are only a few results for
some special cases. For instance, when $b$, $\sigma$, $g$ are independent of
the control variable $u$, and $\sigma$ is independent of $y$ and $z$, by applying
the method in Barles, Buckdahn and Pardoux \cite{Baeles-BP}, Wu and Yu
\cite{Wu-Y} proved that $W$ is the unique viscosity solution to
(\ref{intro-hjb}), under monotonicity conditions on $b$, $\sigma$, $g$ and
$\phi$, in the space of all continuous functions which are Lipschitz continuous
in $x$.
So the existence and uniqueness of the viscosity solution to (\ref{intro-hjb})
is an interesting and challenging problem. In this paper, we apply a
probabilistic approach to this problem. In more detail, with the
help of the value function of the above stochastic optimal control problem and
the existence and uniqueness of the solution to the controlled system
(\ref{intro-fbsde}), we attack this difficult problem, especially the
uniqueness part. Before focusing on the uniqueness results, we need to deal
with the following problems.
The first problem is the well-posedness of the fully coupled forward-backward
controlled system (\ref{intro-fbsde}). There is a large literature on the
well-posedness of fully coupled FBSDEs. When the coefficients of a fully
coupled FBSDE are deterministic and the diffusion coefficient of the forward
equation is nondegenerate, Ma, Protter and Yong \cite{MPY} proposed the
four-step scheme approach. Under some monotonicity conditions, Hu and Peng
\cite{Hu-Peng95} first obtained an existence and uniqueness result, which was
generalized by Peng and Wu \cite{PW}. Yong \cite{Yong1997, Yong-2010B}
developed this approach and called it the method of continuation. The fixed
point approach is due to Antonelli \cite{Antonelli} and Pardoux and Tang
\cite{Pardoux-Tang}. The reader may refer to Ma and Yong \cite{Ma-Y},
Cvitani\'{c} and Zhang \cite{Cvi-Zhang}, Ma, Wu, Zhang and Zhang
\cite{Ma-WZZ}, and Yong and Zhou \cite{Yong-Zhou} for the FBSDE theory. In this
paper, we adopt the fixed point approach given in Hu, Ji and Xue
\cite{Hu-JX}.
The second problem is the existence of the viscosity solution to
(\ref{intro-hjb}). We show that the value function $W$ defined in
(\ref{intro-value}) is a viscosity solution to the HJB equation
(\ref{intro-hjb}) under the assumption in Hu, Ji and Xue \cite{Hu-JX}. As
pointed out in Remark \ref{re-mon}, our proofs still hold under monotonicity
conditions. Compared with Li and Wei \cite{Li-W}, both papers
develop Peng's stochastic backward semigroup approach of \cite{Peng-1992,
Peng-lecture}, but our main proofs are different from the ones in Li and Wei
\cite{Li-W}. To establish the DPP, the stochastic backward semigroup is
defined by a fully coupled FBSDE. Unlike the decoupled case,
changing $(Y,Z)$ does affect $X$ in a fully coupled FBSDE. Thus, the approach
for establishing the DPP for the decoupled forward-backward controlled system
does not work any more. By constructing two new auxiliary FBSDEs (see
(\ref{new-new-3422}) and (\ref{eq-xy-til-t})), we introduce a new approach to
prove the DPP for the fully coupled forward-backward controlled system. We
also simplify some proofs as follows. To prove the continuity of the
value function $W(t,x)$ in $t$, we build an FBSDE (see (\ref{FBSDE-continuous}))
which makes the proof easier. For the coupled algebraic equation, we
construct a simple contraction mapping to prove existence and uniqueness,
and obtain some properties of this algebraic equation.
Then, we study the uniqueness of viscosity solution to the HJB equation
(\ref{intro-hjb}) in four cases. The first case is that $\sigma$ is
independent of $y$ and $z$. By using the method in \cite{Baeles-BP}, we prove
the uniqueness of viscosity solution to (\ref{intro-hjb}) in the space of all
continuous functions which are Lipschitz continuous in $x$. But, when $\sigma$
depends on $y$ and $z$, the method in \cite{Baeles-BP} does not work as
pointed out in Remark \ref{re-appen}. The second case is that $\sigma$ is
independent of $z$. Different from the analysis method in \cite{Baeles-BP},
for this case, we propose a novel probabilistic approach to prove the
uniqueness. In more details, we construct a new fully coupled forward-backward
stochastic control system (\ref{new-lx-18}) in which $\sigma$ only depends on
the variables $t$, $x$ and $u$. By the uniqueness result in the first case,
the value function for this new control system is the unique viscosity
solution to the HJB equation (\ref{new-lx-17}). Thanks to Proposition
\ref{pr-um}, we prove that $W$ defined in (\ref{intro-value}) is the minimum
viscosity solution to the HJB equation (\ref{intro-hjb}) in the space of all
continuous functions which are Lipschitz continuous in $x$. It is worthing to
point out that when $b$, $\sigma$, $g$ are independent of the control variable
$u$, $W$ is just the unique viscosity solution. The third case is that
$\sigma$ depends on $y$ and $z$. We construct a new decoupled forward-backward
stochastic control system (\ref{new-lx-22}). Following the similar approach in
the second case, we prove that $W$ defined in (\ref{intro-value}) is the
minimum viscosity solution to the HJB equation (\ref{intro-hjb}) in a smaller
space (see Theorem \ref{th-um2}). Especially, when $b$, $\sigma$, $g$ are
independent of the control variable $u$, $W$ is also the unique viscosity
solution. The fourth case is that the solution to HJB equation
(\ref{intro-hjb}) is smooth. We construct a new BSDE (\ref{new-asd-11}). With
the help of the comparison theorem for BSDEs, we prove that the solution is
just the value function defined in (\ref{intro-value}).
{ In order to study the well-posedness of FBSDEs, Ma, Wu, Zhang and
Zhang \cite{Ma-WZZ} proposed the important concept of \textquotedblleft decoupling
field\textquotedblright. When $b$, $\sigma$, $g$ are independent of the control variable $u$,
in Theorem \ref{th-uni-pde} or \ref{uni-pde}, the value function $W$ is a
decoupling field. For another viscosity solution $\tilde{W}$, we verify
that $\tilde{W}=W$ by a probabilistic approach. From the perspective of
decoupling fields, we prove that $\tilde{W}$ is a regular decoupling
field, which leads to uniqueness naturally. This reflects that the
decoupling field is a very important concept in the theory of FBSDEs.}
{Our paper is organized as follows. In section 2, we formulate our problem and
a related stochastic optimal control problem. In section 3, we prove that the
value function of the related stochastic control problem is a viscosity
solution to the HJB equation by establishing the DPP and the properties of the
value function. The uniqueness results are obtained in section 4.}
\section{The problem formulation}
Denote by $\mathbb{R}^{n}$ the $n$-dimensional real Euclidean space,
$\mathbb{R}^{k\times n}$ the set of $k\times n$ real matrices and
$\mathbb{S}^{n}$ the set of $n\times n$ symmetric matrices. Let $U$ be a
nonempty and compact subset of $\mathbb{R}^{k}$. Let $\langle\cdot
,\cdot\rangle$ (resp. $\Vert\cdot\Vert$) denote the usual scalar product
(resp. usual norm) of $\mathbb{R}^{n}$ and $\mathbb{R}^{k\times n}$. The
scalar product (resp. norm) of $M=(m_{ij})$, $N=(n_{ij})\in\mathbb{R}^{k\times
n}$ is given by $\langle M,N\rangle=\mathrm{tr}\{MN^{\intercal}\}$ (resp.
$\Vert M\Vert=\sqrt{\mathrm{tr}\{MM^{\intercal}\}}$), where the superscript $^{\intercal}$
denotes the transpose of vectors or matrices.
We will study the existence and uniqueness of viscosity solution to the
following { HJB equation combined with an algebraic equation
\begin{equation}
\label{pre-hjb}
\begin{array}
[c]{l}
\partial_{t}W(t,x)+\inf\limits_{u\in U}H(t,x,W(t,x),DW(t,x),D^{2}W\left(
t,x\right) ,u)=0,\ W(T,x)=\phi(x).
\end{array}
\end{equation}
Here }$H\left( \cdot\right) $ is defined as follows{
\begin{equation}
\begin{array}
[c]{l}
H(t,x,v,p,A,u)\\
=\frac{1}{2}\mathrm{tr}[\sigma\sigma^{\intercal}(t,x,v,V(t,x,v,p,u),u)A]+p^{\intercal}b(t,x,v,V(t,x,v,p,u),u)\\
\ \ +g(t,x,v,V(t,x,v,p,u),u),\\
(t,x,v,p,A,u)\in\lbrack0,T]\times\mathbb{R}^{n}\times\mathbb{R}\times
\mathbb{R}^{n}\times\mathbb{S}^{n}\times U,
\end{array}
\label{pre-h}
\end{equation}
}$V(t,x,v,p,u)$ is the solution to the following algebraic equation
\[
V(t,x,v,p,u)=p^{\intercal}\sigma(t,x,v,V(t,x,v,p,u),u),
\]
where
\[
b:[t,T]\times\mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}^{1\times d}\times
U\rightarrow\mathbb{R}^{n}, \ \sigma:[t,T]\times\mathbb{R}^{n}\times
\mathbb{R}\times\mathbb{R}^{1\times d}\times U\rightarrow\mathbb{R}^{n\times
d},
\]
\[
g:[t,T]\times\mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}^{1\times d}\times
U\rightarrow\mathbb{R}, \ \phi:\mathbb{R}^{n}\rightarrow\mathbb{R}.
\]
We impose the following assumption on these functions.}
\begin{assumption}
\label{assum-1} (i) $b,\sigma,g,\phi$ are continuous with respect to
$s,x,y,z,u$, and there exist constants $L_{i}>0$, $i=1,2,3$, such that
\[
|b(s,x_{1},y_{1},z_{1},u)-b(s,x_{2},y_{2},z_{2},u)|\leq L_{1}|x_{1}-x_{2}|+L_{2}(|y_{1}-y_{2}|+|z_{1}-z_{2}|),
\]
\[
|\sigma(s,x_{1},y_{1},z_{1},u)-\sigma(s,x_{2},y_{2},z_{2},u)|\leq L_{1}|x_{1}-x_{2}|+L_{2}|y_{1}-y_{2}|+L_{3}|z_{1}-z_{2}|,
\]
\[
|g(s,x_{1},y_{1},z_{1},u)-g(s,x_{2},y_{2},z_{2},u)|\leq L_{1}(|x_{1}-x_{2}|+|y_{1}-y_{2}|+|z_{1}-z_{2}|),
\]
\[
|\phi(x_{1})-\phi(x_{2})|\leq L_{1}|x_{1}-x_{2}|,
\]
for all $s\in\lbrack0,T]$, $(x_{i},y_{i},z_{i})\in\mathbb{R}^{n}\times
\mathbb{R}\times\mathbb{R}^{1\times d}$, $i=1,2$, $u\in U$.
(ii)\ $\bar{\Lambda}:=8C_{2}(L)(1+T^{2})c_{1}^{2}<1$, where
$C_{2}(\cdot)$ is defined in Lemma \ref{sde-bsde} and Remark \ref{re-app-2};
$L=\max\{L_{1},\sqrt{C_{2}(L_{1})}(1-\sqrt{\Lambda})^{-1}\}$,
$\Lambda=8C_{2}(L_{1})(1+T^{2})c_{1}^{2}$, $c_{1}=\max\{L_{2},L_{3}\}$.
\end{assumption}
\begin{remark}
\label{linear growth}Since $U$ is compact, from the above assumption (i) we
obtain that\ $|\psi(s,x,y,z,u)|\leq L(1+|x|+|y|+|z|), $ where $L>0$ is a
constant and $\psi=b,$ $\sigma,$ $g$ and $\phi$.
\end{remark}
\begin{remark}
Since $C_{2}(\cdot)$ is increasing and $\bar{\Lambda}<1$, we have $\Lambda<1$.
If $c_{1}\downarrow0$, then it is easy to verify that $\Lambda\downarrow0$,
which implies $\bar{\Lambda}\downarrow0$. Thus, assumption (ii) above
holds when $c_{1}$ is sufficiently small.
\end{remark}
As pointed out in the introduction, this kind of problem has a stochastic
optimal control interpretation. Now we formulate this related stochastic
optimal control problem.
Let $B=(B_{t}^{1},B_{t}^{2},...,B_{t}^{d})_{0\leq t\leq T}^{\intercal}$ be a
standard $d$-dimensional Brownian motion defined on a complete probability
space $(\Omega,\mathcal{F},P)$\ over $[0,T]$. Denote by $\mathbb{F}=\{\mathcal{F}_{t},0\leq t\leq T\}$ the natural filtration of $B$, where
$\mathcal{F}_{0}$ contains all $P$-null sets of $\mathcal{F}$. Given
$t\in\lbrack0,T)$, denote by $\mathcal{U}[t,T]$ the set of all $\mathbb{F}$-adapted $U$-valued processes on $[t,T]$. For each given $p\geq1$, we
introduce the following spaces.
$L^{p}(\mathcal{F}_{t};\mathbb{R}^{n})$: the space of $\mathcal{F}_{t}$-measurable $\mathbb{R}^{n}$-valued random vectors $\zeta$ such that
$\mathbb{E}[|\zeta|^{p}]<\infty$; $L^{\infty}(\mathcal{F}_{t};\mathbb{R}^{n})$: the space of $\mathcal{F}_{t}$-measurable $\mathbb{R}^{n}$-valued random
vectors $\xi$ such that $||\xi||_{\infty}=ess\sup_{\omega\in\Omega}|\xi(\omega)|<\infty$; $L_{\mathcal{F}}^{p}(t,T;\mathbb{R}^{n})$: the space
of $\mathbb{F}$-adapted $\mathbb{R}^{n}$-valued stochastic processes on
$[t,T]$ such that $\mathbb{E}[\int_{t}^{T}|f(r)|^{p}dr]<\infty$;
$L_{\mathcal{F}}^{\infty}(t,T;\mathbb{R}^{n})$: the space of $\mathbb{F}$-adapted $\mathbb{R}^{n}$-valued stochastic processes on $[t,T]$ such that
$||f(\cdot)||_{\infty}=ess\sup_{(r,\omega)\in\lbrack t,T]\times\Omega}|f(r,\omega)|<\infty$; $L_{\mathcal{F}}^{p,q}(t,T;\mathbb{R}^{n})$: the
space of $\mathbb{F}$-adapted $\mathbb{R}^{n}$-valued stochastic processes on
$[t,T]$ such that $||f(\cdot)||_{p,q}=\{\mathbb{E}[(\int_{t}^{T}|f(r)|^{p}dr)^{\frac{q}{p}}]\}^{\frac{1}{q}}<\infty$; $L_{\mathcal{F}}^{p}(\Omega;C([t,T];\mathbb{R}^{n}))$: the space of $\mathbb{F}$-adapted
$\mathbb{R}^{n}$-valued continuous stochastic processes on $[t,T]$ such that
$\mathbb{E}[\sup_{t\leq r\leq T}|f(r)|^{p}]<\infty$.
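As a small numerical aside (entirely our own toy illustration; `lpq_norm` and its sampling scheme are hypothetical, not part of the paper), the $L_{\mathcal{F}}^{p,q}$ norm can be approximated by a midpoint Riemann sum in time and an average over sampled paths:

```python
import random

def lpq_norm(f, t, T, p, q, n_paths=2000, n_steps=200, seed=0):
    """Toy estimate of ||f||_{p,q} = (E[(int_t^T |f(r)|^p dr)^{q/p}])^{1/q}.

    The randomness of the process is modelled by one uniform draw per path
    (a crude stand-in for a sample point omega); the time integral is a
    midpoint Riemann sum over n_steps subintervals."""
    rng = random.Random(seed)
    dr = (T - t) / n_steps
    acc = 0.0
    for _ in range(n_paths):
        omega = rng.random()
        integral = sum(abs(f(t + (k + 0.5) * dr, omega)) ** p
                       for k in range(n_steps)) * dr
        acc += integral ** (q / p)
    return (acc / n_paths) ** (1.0 / q)

# Sanity check on a deterministic process: f = 1 on [0, 1] with p = q = 2
# has ||f||_{2,2} = 1.
print(round(lpq_norm(lambda r, omega: 1.0, 0.0, 1.0, 2, 2), 6))  # 1.0
```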
Let $t\in\lbrack0,T]$, $\xi\in L^{2}(\mathcal{F}_{t};\mathbb{R}^{n})$ and an
admissible control\ $u(\cdot)\in\mathcal{U}[t,T]$. Consider the following
controlled fully coupled FBSDE:
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}^{t,\xi;u}= & b(s,X_{s}^{t,\xi;u},Y_{s}^{t,\xi;u},Z_{s}^{t,\xi;u},u_{s})ds+\sigma(s,X_{s}^{t,\xi;u},Y_{s}^{t,\xi;u},Z_{s}^{t,\xi;u},u_{s})dB_{s},\\
dY_{s}^{t,\xi;u}= & -g(s,X_{s}^{t,\xi;u},Y_{s}^{t,\xi;u},Z_{s}^{t,\xi;u},u_{s})ds+Z_{s}^{t,\xi;u}dB_{s},\;s\in\lbrack t,T],\\
X_{t}^{t,\xi;u}= & \xi,\ Y_{T}^{t,\xi;u}=\phi(X_{T}^{t,\xi;u}).
\end{array}
\right. \label{state-eq}
\end{equation}
Under Assumption \ref{assum-1}, by Theorem 2.2 in \cite{Hu-JX},
equation \eqref{state-eq} has a unique solution $(X^{t,\xi;u},Y^{t,\xi;u},Z^{t,\xi;u})\in L_{\mathcal{F}}^{2}(\Omega;C([t,T];\mathbb{R}^{n}))\times L_{\mathcal{F}}^{2}(\Omega;C([t,T];\mathbb{R}))\times
L_{\mathcal{F}}^{2,2}(t,T;\mathbb{R}^{1\times d})$. For each given
$(t,x)\in\lbrack0,T]\times\mathbb{R}^{n}$, define the value function
\begin{equation}
W(t,x)=\underset{u\in\mathcal{U}[t,T]}{ess\inf}Y_{t}^{t,x;u}. \label{obje-eq}
\end{equation}
\section{The existence of viscosity solutions}
{In order to prove the existence of the viscosity solution, we need to study
the} above fully coupled stochastic optimal control problem. It is well-known
that the DPP is an important approach to solving stochastic optimal control
problems (see \cite{Yong-Zhou,Zhou-1990-2,Zhou-1991}). So in the first
subsection, {the DPP for the stochastic control problem (\ref{state-eq})-(\ref{obje-eq}) is established. Then we prove that the value function is a
viscosity solution to the HJB equation (\ref{pre-hjb}) in the second
subsection.}
\subsection{The dynamic programming principle}
Denote by $\mathcal{U}^{t}[t,T]$ the space of all $U$-valued $\{\mathcal{F}_{s}^{t}\}_{t\leq s\leq T}$-adapted processes on $[t,T]$, where $\{\mathcal{F}_{s}^{t}\}_{t\leq s\leq T}$ is the $P$-augmentation of the natural filtration
of $(B_{s}-B_{t})_{t\leq s\leq T}$. For each $v\in\mathcal{U}^{t}[t,T]$, it is
easy to verify that the solution $(X_{s}^{t,x;v},Y_{s}^{t,x;v},Z_{s}^{t,x;v})_{s\in\lbrack t,T]}$ to equation \eqref{state-eq} is $\{\mathcal{F}_{s}^{t}\}_{t\leq s\leq T}$-adapted, which implies that $Y_{t}^{t,x;v}\in\mathbb{R}$.
It should be noted that, in this paper, the constant $C$ may change from line to
line in the proofs.
\begin{proposition}
\label{pro-new-1}Suppose Assumption \ref{assum-1} holds. Then
\begin{equation}
W(t,x)=\underset{v\in\mathcal{U}^{t}[t,T]}{\inf}Y_{t}^{t,x;v}.
\label{eq-deter}
\end{equation}
\end{proposition}
\begin{proof}
Since $\mathcal{U}^{t}[t,T]\subset\mathcal{U}[t,T]$, we obtain $W(t,x)\leq
\inf_{v\in\mathcal{U}^{t}[t,T]}Y_{t}^{t,x;v}$ by the definition of $W(t,x)$.
On the other hand, for each given $u\in\mathcal{U}[t,T]$, by Lemma 13 in
\cite{Hu-J}, there exists a sequence $(u^{m})$ in $\mathcal{U}[t,T]$ such that
$\mathbb{E}\left[ \int_{t}^{T}\left\vert u_{s}^{m}-u_{s}\right\vert
^{2}ds\right] \rightarrow0$ as $m\rightarrow\infty$. Moreover, we can take
$u_{s}^{m}=\sum_{i=1}^{m}v_{s}^{i,m}I_{A_{i}}$, $s\in\lbrack t,T]$, where
$\left\{ A_{i}\right\} _{i=1}^{m}$ is a partition of $\left( \Omega
,\mathcal{F}_{t}\right) $ and $v^{i,m}\in\mathcal{U}^{t}[t,T]$. By Theorem
2.2 in \cite{Hu-JX}, we get
\begin{equation}
\begin{array}
[c]{l}
\mathbb{E}\left[ \left\vert Y_{t}^{t,x;u^{m}}-Y_{t}^{t,x;u}\right\vert
^{2}\right] \\
\leq C\mathbb{E}\left[ \int_{t}^{T}|b(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s}^{m})-b(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s})|^{2}ds\right] \\
\text{ \ \ }+C\mathbb{E}\left[ \int_{t}^{T}\left\vert g(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s}^{m})-g(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s})\right\vert ^{2}ds\right] \\
\text{ \ \ }+C\mathbb{E}\left[ \int_{t}^{T}|\sigma\left( s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s}^{m}\right) -\sigma(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s})|^{2}ds\right] .
\end{array}
\label{eq-diff-contr}
\end{equation}
By Remark \ref{linear growth},
\[
|\psi(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u)|\leq L(1+|X_{s}^{t,x;u}|+|Y_{s}^{t,x;u}|+|Z_{s}^{t,x;u}|),
\]
where $\psi=b,$ $\sigma,$ $g$ and $s\in\lbrack t,T]$, $u\in U$. Applying the
dominated convergence theorem to (\ref{eq-diff-contr}), we obtain
\begin{equation}
\mathbb{E}\left[ \left\vert Y_{t}^{t,x;u^{m}}-Y_{t}^{t,x;u}\right\vert
^{2}\right] \rightarrow0\text{ as }m\rightarrow\infty. \label{eq-deff-1}
\end{equation}
Note that
\[
\begin{array}
[c]{l}
\left( X_{s}^{t,x;u^{m}},Y_{s}^{t,x;u^{m}},Z_{s}^{t,x;u^{m}}\right)
_{s\in\lbrack t,T]}\\
=\left( \sum_{i=1}^{m}X_{s}^{t,x;v^{i,m}}I_{A_{i}},\sum_{i=1}^{m}Y_{s}^{t,x;v^{i,m}}I_{A_{i}},\sum_{i=1}^{m}Z_{s}^{t,x;v^{i,m}}I_{A_{i}}\right) _{s\in\lbrack t,T]}.
\end{array}
\]
Then
\begin{equation}
Y_{t}^{t,x;u^{m}}=\sum_{i=1}^{m}Y_{t}^{t,x;v^{i,m}}I_{A_{i}}\geq
\underset{v\in\mathcal{U}^{t}[t,T]}{\inf}Y_{t}^{t,x;v}\text{, }P\text{-}a.s..
\label{eq-deff-2}
\end{equation}
Combining (\ref{eq-deff-1}) and (\ref{eq-deff-2}), we obtain
\[
Y_{t}^{t,x;u}\geq\underset{v\in\mathcal{U}^{t}[t,T]}{\inf}Y_{t}^{t,x;v}\text{,
}P\text{-}a.s.,
\]
which yields that $W(t,x)\geq\inf_{v\in\mathcal{U}^{t}[t,T]}Y_{t}^{t,x;v}$.
Thus $W(t,x)=\inf_{v\in\mathcal{U}^{t}[t,T]}Y_{t}^{t,x;v}$.
\end{proof}
The above Proposition shows that $W(t,x)$ is a deterministic function. To
prove $W(t,\xi)=ess\inf_{u\in\mathcal{U}[t,T]}Y_{t}^{t,\xi;u}$, we need the
following two lemmas.
\begin{lemma}
\label{est-initial}Suppose Assumption \ref{assum-1} holds. Then there exists a
constant $C$ depending on $L_{1}$, $L_{2}$, $L_{3}$ and $T$ such that, for
each $u\in\mathcal{U}[t,T]$ and $\xi,\xi^{\prime}\in L^{2}(\mathcal{F}
_{t};\mathbb{R}^{n})$,
\begin{equation}
\begin{array}
[c]{rl}
& \mathbb{E}\left[ \left. \sup\limits_{t\leq s\leq T}\left( |X_{s}
^{t,\xi;u}-X_{s}^{t,\xi^{\prime};u}|^{2}+|Y_{s}^{t,\xi;u}-Y_{s}^{t,\xi
^{\prime};u}|^{2}\right) +\int_{t}^{T}|Z_{s}^{t,\xi;u}-Z_{s}^{t,\xi^{\prime
};u}|^{2}ds\right\vert \mathcal{F}_{t}\right] \\
& \ \ \leq C\left\vert \xi-\xi^{\prime}\right\vert ^{2};
\end{array}
\label{est-initial-1}
\end{equation}
\begin{equation}
\mathbb{E}\left[ \left. \sup\limits_{t\leq s\leq T}\left( |X_{s}^{t,\xi
;u}|^{2}+|Y_{s}^{t,\xi;u}|^{2}\right) +\int_{t}^{T}|Z_{s}^{t,\xi;u}
|^{2}ds\right\vert \mathcal{F}_{t}\right] \leq C\left( 1+\left\vert
\xi\right\vert ^{2}\right) . \label{est-initial-2}
\end{equation}
\end{lemma}
\begin{proof}
Without loss of generality, we only prove the case $d=1$. Set $\hat
{X}=X^{t,\xi;u}-X^{t,\xi^{\prime};u}$, $\hat{Y}=Y^{t,\xi;u}-Y^{t,\xi^{\prime
};u}$, $\hat{Z}=Z^{t,\xi;u}-Z^{t,\xi^{\prime};u}$. Then $\left( \hat{X}
,\hat{Y},\hat{Z}\right) $ satisfies the following FBSDE:
\[
\left\{
\begin{array}
[c]{rl}
d\hat{X}_{s}= & \left[ b^{1}(s)\hat{X}_{s}+b^{2}(s)\hat{Y}_{s}+b^{3}
(s)\hat{Z}_{s}\right] ds\\
& +\left[ \sigma^{1}(s)\hat{X}_{s}+\sigma^{2}(s)\hat{Y}_{s}+\sigma^{3}
(s)\hat{Z}_{s}\right] dB_{s},\\
d\hat{Y}_{s}= & -\left[ g^{1}(s)\hat{X}_{s}+g^{2}(s)\hat{Y}_{s}+g^{3}
(s)\hat{Z}_{s}\right] ds+\hat{Z}_{s}dB_{s},\text{ }s\in\lbrack t,T],\\
\hat{X}_{t}= & \xi-\xi^{\prime},\ \hat{Y}_{T}=\phi^{1}(T)\hat{X}_{T},
\end{array}
\right.
\]
where
\[
b^{1}\left( s\right) =\left\{
\begin{array}
[c]{cc}
\frac{b(s,X_{s}^{t,\xi;u},Y_{s}^{t,\xi;u},Z_{s}^{t,\xi;u},u_{s})-b(s,X_{s}
^{t,\xi^{\prime};u},Y_{s}^{t,\xi;u},Z_{s}^{t,\xi;u},u_{s})}{X_{s}^{t,\xi
;u}-X_{s}^{t,\xi^{\prime};u}}\text{,} & \hat{X}_{s}\neq0,\\
0\text{,} & \hat{X}_{s}=0,
\end{array}
\right.
\]
and $b^{i}$, $\sigma^{i}$, $g^{i}$, $\phi^{1}$ are defined similarly,
$i=1,2,3$. By Assumption \ref{assum-1}, $b^{i}$, $\sigma^{i}$, $g^{i}$ and
$\phi^{1}$ are bounded for $i=1,2,3$. Then (\ref{est-initial-1}) holds by
Theorem 2.2 in \cite{Hu-JX}.
Since $|\psi(s,0,0,0,u)|\leq L$ for $s\in\lbrack t,T]$ and $u\in U$ where
$\psi=b,$ $\sigma,$ $g$, by Theorem 2.2 in \cite{Hu-JX}, we obtain
\[
\begin{array}
[c]{l}
\mathbb{E}\left[ \left. \sup\limits_{t\leq s\leq T}\left( |X_{s}^{t,\xi
;u}|^{2}+|Y_{s}^{t,\xi;u}|^{2}\right) +\int_{t}^{T}|Z_{s}^{t,\xi;u}
|^{2}ds\right\vert \mathcal{F}_{t}\right] \\
\text{ \ }\leq C\mathbb{E}\left[ \left. \left\vert \xi\right\vert
^{2}+\left( \int_{t}^{T}\left( |b|+|g|\right) \left( s,0,0,0,u_{s}\right)
ds\right) ^{2}+\int_{t}^{T}|\sigma\left( s,0,0,0,u_{s}\right)
|^{2}ds\right\vert \mathcal{F}_{t}\right] \\
\text{ \ }\leq C\left( 1+\left\vert \xi\right\vert ^{2}\right) .
\end{array}
\]
This completes the proof.
\end{proof}
\begin{lemma}
\label{le-w-new}Suppose Assumption \ref{assum-1} holds. Then there exist two
constants $C$ and $C^{\prime}$ depending on $L_{1}$, $L_{2}$, $L_{3}$ and $T$
such that, for each $t\in\lbrack0,T]$ and $x,x^{\prime}\in\mathbb{R}^{n}$
\[
|W(t,x)-W(t,x^{\prime})|\leq C|x-x^{\prime}|\text{ and }|W(t,x)|\leq
C^{\prime}(1+|x|),
\]
where $C=\sqrt{C_{2}}\left( 1-\sqrt{\Lambda}\right) ^{-1}$.
\end{lemma}
\begin{proof}
By Proposition \ref{pro-new-1} and Lemma \ref{est-initial}, we obtain
\[
\begin{array}
[c]{rl}
|W(t,x)-W(t,x^{\prime})| & =\left\vert \underset{v\in\mathcal{U}
^{t}[t,T]}{\inf}Y_{t}^{t,x;v}-\underset{v\in\mathcal{U}^{t}[t,T]}{\inf}
Y_{t}^{t,x^{\prime};v}\right\vert \\
& \leq\underset{v\in\mathcal{U}^{t}[t,T]}{\sup}\left\vert Y_{t}^{t,x;v}
-Y_{t}^{t,x^{\prime};v}\right\vert \\
& \leq\underset{v\in\mathcal{U}^{t}[t,T]}{\sup}\left\{ \mathbb{E}\left[
\left. \sup\limits_{t\leq s\leq T}\left\vert Y_{s}^{t,x;v}-Y_{s}
^{t,x^{\prime};v}\right\vert ^{2}\right\vert \mathcal{F}_{t}\right] \right\}
^{\frac{1}{2}}\\
& \leq C|x-x^{\prime}|.
\end{array}
\]
By the proof of Theorem 2.2 in \cite{Hu-JX}, we can obtain $C=\sqrt{C_{2}
}\left( 1-\sqrt{\Lambda}\right) ^{-1}$. The second inequality can be proved similarly.
\end{proof}
\begin{proposition}
\label{pro-w-random}Suppose Assumption \ref{assum-1} holds. Then, for each
$\xi\in L^{2}(\mathcal{F}_{t};\mathbb{R}^{n})$, we have \ $W(t,\xi
)=ess\inf_{u\in\mathcal{U}[t,T]}Y_{t}^{t,\xi;u}. $
\end{proposition}
\begin{proof}
It is clear that there exists a sequence of random vectors $\xi^{m}
=\sum_{i=1}^{m}x_{i}^{m}I_{A_{i}^{m}}$ such that $\mathbb{E}\left[
\left\vert \xi^{m}-\xi\right\vert ^{2}\right] \rightarrow0$ as $m\rightarrow
\infty$, where $\left\{ A_{i}^{m}\right\} _{i=1}^{m}$ is a partition of
$\left( \Omega,\mathcal{F}_{t}\right) $ and $x_{i}^{m}\in\mathbb{R}^{n}$.
Similar to the proof of Proposition \ref{pro-new-1}, we have
\[
\begin{array}
[c]{l}
\left( X_{s}^{t,\xi^{m};u},Y_{s}^{t,\xi^{m};u},Z_{s}^{t,\xi^{m};u}\right)
_{s\in\lbrack t,T]}\\
=\left( \sum_{i=1}^{m}X_{s}^{t,x_{i}^{m};u}I_{A_{i}^{m}},\sum_{i=1}^{m}Y_{s}^{t,x_{i}^{m};u}I_{A_{i}^{m}},\sum_{i=1}^{m}Z_{s}^{t,x_{i}^{m};u}
I_{A_{i}^{m}}\right) _{s\in\lbrack t,T]}.
\end{array}
\]
Thus
\begin{equation}
\begin{array}
[c]{rl}
\underset{u\in\mathcal{U}[t,T]}{ess\inf}Y_{t}^{t,\xi^{m};u} & =\underset{u\in
\mathcal{U}[t,T]}{ess\inf}\sum\limits_{i=1}^{m}Y_{t}^{t,x_{i}^{m};u}
I_{A_{i}^{m}}\\
& =\sum\limits_{i=1}^{m}\left( \underset{u\in\mathcal{U}[t,T]}{ess\inf}
Y_{t}^{t,x_{i}^{m};u}\right) I_{A_{i}^{m}}\\
& =\sum\limits_{i=1}^{m}W\left( t,x_{i}^{m}\right) I_{A_{i}^{m}}\\
& =W\left( t,\xi^{m}\right) .
\end{array}
\label{eq-new-111}
\end{equation}
On the other hand, by Lemmas \ref{est-initial} and \ref{le-w-new}, we have
\begin{equation}
\begin{array}
[c]{rl}
\left\vert \underset{u\in\mathcal{U}[t,T]}{ess\inf}Y_{t}^{t,\xi^{m}
;u}-\underset{u\in\mathcal{U}[t,T]}{ess\inf}Y_{t}^{t,\xi;u}\right\vert &
\leq\underset{u\in\mathcal{U}[t,T]}{ess\sup}\left\vert Y_{t}^{t,\xi^{m}
;u}-Y_{t}^{t,\xi;u}\right\vert \\
& \leq C\left\vert \xi^{m}-\xi\right\vert ,
\end{array}
\label{eq-new-112}
\end{equation}
\begin{equation}
\left\vert W(t,\xi^{m})-W\left( t,\xi\right) \right\vert \leq C\left\vert
\xi^{m}-\xi\right\vert . \label{eq-new-113}
\end{equation}
Combining (\ref{eq-new-111}), (\ref{eq-new-112}) and (\ref{eq-new-113}), we
get
\[
\left\vert \underset{u\in\mathcal{U}[t,T]}{ess\inf}Y_{t}^{t,\xi;u}-W\left(
t,\xi\right) \right\vert \leq2C\left\vert \xi^{m}-\xi\right\vert .
\]
Thus we obtain the desired result by letting $m\rightarrow\infty$.
\end{proof}
Before studying the DPP, we introduce the notion of backward semigroup, which
was first introduced by Peng in \cite{Peng-lecture}. For each given
$(t,x)\in\lbrack0,T)\times\mathbb{R}^{n}$, $\delta\in(0,T-t]$ and
$u\in\mathcal{U}[t,t+\delta]$, define $G_{t,t+\delta}
^{t,x;u}[\cdot]:\mathcal{L}(\mathbb{R}^{n})\rightarrow L^{2}(\mathcal{F}
_{t};\mathbb{R})$ as
\[
G_{t,t+\delta}^{t,x;u}\left[ \psi(\tilde{X}_{t+\delta}^{t,x;u})\right]
=\tilde{Y}_{t}^{t,x;u}\text{ for each }\psi\in\mathcal{L}(\mathbb{R}^{n}),
\]
where
\[
\mathcal{L}(\mathbb{R}^{n})=\{f:\mathbb{R}^{n}\rightarrow\mathbb{R}
|\ |f(x)-f(x^{\prime})|\leq L|x-x^{\prime}| \},
\]
$L$ is the constant in Assumption \ref{assum-1} and $(\tilde{X}
^{t,x;u},\tilde{Y}^{t,x;u}, \tilde{Z}^{t,x;u})$ is the solution to the
following FBSDE on $[t,t+\delta]$
\begin{equation}
\left\{
\begin{array}
[c]{rl}
d\tilde{X}_{s}^{t,x;u}= & b(s,\tilde{X}_{s}^{t,x;u},\tilde{Y}_{s}
^{t,x;u},\tilde{Z}_{s}^{t,x;u},u_{s})ds+\sigma(s,\tilde{X}_{s}^{t,x;u}
,\tilde{Y}_{s}^{t,x;u},\tilde{Z}_{s}^{t,x;u},u_{s})dB_{s},\\
d\tilde{Y}_{s}^{t,x;u}= & -g(s,\tilde{X}_{s}^{t,x;u},\tilde{Y}_{s}
^{t,x;u},\tilde{Z}_{s}^{t,x;u},u_{s})ds+\tilde{Z}_{s}^{t,x;u}dB_{s},\text{
}s\in\lbrack t,t+\delta],\\
\tilde{X}_{t}^{t,x;u}= & x,\ \tilde{Y}_{t+\delta}^{t,x;u}=\psi(\tilde
{X}_{t+\delta}^{t,x;u}).
\end{array}
\right. \label{eq-xy-til}
\end{equation}
Since the coefficients of (\ref{eq-xy-til}) satisfy Assumption
\ref{assum-1}, there exists a unique solution $(\tilde{X}^{t,x;u},\tilde
{Y}^{t,x;u},\tilde{Z}^{t,x;u})$ to (\ref{eq-xy-til}). Thus $G_{t,t+\delta
}^{t,x;u}[\cdot]$ is well-defined.
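In particular, if the terminal function $\phi$ belongs to $\mathcal{L}(\mathbb{R}^{n})$, then taking $\delta=T-t$ and $\psi=\phi$ in (\ref{eq-xy-til}) recovers the original control system, and hence, by uniqueness of solutions,
\[
G_{t,T}^{t,x;u}\left[ \phi(\tilde{X}_{T}^{t,x;u})\right] =Y_{t}^{t,x;u}.
\]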
Now we prove the DPP for the control system (\ref{state-eq})-(\ref{obje-eq}).
\begin{theorem}
\label{th-ddp}Suppose Assumption \ref{assum-1} holds. Then for each
$(t,x)\in\lbrack0,T)\times\mathbb{R}^{n}$ and $\delta\in(0,T-t]$, we have
\[
W(t,x)=\underset{u\in\mathcal{U}[t,t+\delta]}{ess\inf}G_{t,t+\delta}
^{t,x;u}\left[ W(t+\delta,\tilde{X}_{t+\delta}^{t,x;u})\right]
=\underset{v\in\mathcal{U}^{t}[t,t+\delta]}{\inf}G_{t,t+\delta}^{t,x;v}\left[
W(t+\delta,\tilde{X}_{t+\delta}^{t,x;v})\right] .
\]
\end{theorem}
\begin{proof}
We first prove
\begin{equation}
W(t,x)\geq\underset{v\in\mathcal{U}^{t}[t,t+\delta]}{\inf}G_{t,t+\delta
}^{t,x;v}\left[ W(t+\delta,\tilde{X}_{t+\delta}^{t,x;v})\right] .
\label{new-141}
\end{equation}
For each given $v\in\mathcal{U}^{t}[t,T]$, noting that
\[
\left( X_{s}^{t,x;v},Y_{s}^{t,x;v},Z_{s}^{t,x;v}\right) _{s\in\left[
t+\delta,T\right] }=\left( X_{s}^{t+\delta,X_{t+\delta}^{t,x;v};v}
,Y_{s}^{t+\delta,X_{t+\delta}^{t,x;v};v},Z_{s}^{t+\delta,X_{t+\delta}
^{t,x;v};v}\right) _{s\in\left[ t+\delta,T\right] },
\]
we have
\[
\left\{
\begin{array}
[c]{rl}
dX_{s}^{t,x;v}= & b(s,X_{s}^{t,x;v},Y_{s}^{t,x;v},Z_{s}^{t,x;v},v_{s}
)ds+\sigma(s,X_{s}^{t,x;v},Y_{s}^{t,x;v},Z_{s}^{t,x;v},v_{s})dB_{s},\\
dY_{s}^{t,x;v}= & -g(s,X_{s}^{t,x;v},Y_{s}^{t,x;v},Z_{s}^{t,x;v}
,v_{s})ds+Z_{s}^{t,x;v}dB_{s},\text{ }s\in\lbrack t,t+\delta],\\
X_{t}^{t,x;v}= & x,\ Y_{t+\delta}^{t,x;v}=Y_{t+\delta}^{t+\delta,X_{t+\delta
}^{t,x;v};v}.
\end{array}
\right.
\]
By Proposition \ref{pro-w-random}, we get \ $W(t+\delta,X_{t+\delta}
^{t,x;v})\leq Y_{t+\delta}^{t+\delta,X_{t+\delta}^{t,x;v};v}. $ Taking $u=v$
and $\psi(\cdot)=W(t+\delta,\cdot)$ in (\ref{eq-xy-til}), by comparison
theorem for FBSDE (see Theorem \ref{th-comp}), we get \ $Y_{t}^{t,x;v}
\geq\tilde{Y}_{t}^{t,x;v}. $ By Proposition \ref{pro-new-1}, we obtain
(\ref{new-141}).
Next, we prove
\begin{equation}
W(t,x)\leq\underset{u\in\mathcal{U}[t,t+\delta]}{ess\inf}G_{t,t+\delta
}^{t,x;u}\left[ W(t+\delta,\tilde{X}_{t+\delta}^{t,x;u})\right] .
\label{new-142}
\end{equation}
It is obvious that we only need to prove
\begin{equation}
G_{t,t+\delta}^{t,x;u}\left[ W(t+\delta,\tilde{X}_{t+\delta}^{t,x;u})\right]
\geq W(t,x),\text{ }P\text{-a.s.} \label{new-111}
\end{equation}
for each $u\in\mathcal{U}[t,t+\delta]$. The proof for (\ref{new-111}) is
divided into four steps.
\textbf{Step 1.} Let $(\tilde{X}^{t,x;u},\tilde{Y}^{t,x;u},\tilde{Z}^{t,x;u})$
be the solution to the following FBSDE:
\begin{equation}
\left\{
\begin{array}
[c]{rl}
d\tilde{X}_{s}^{t,x;u}= & b(s,\tilde{X}_{s}^{t,x;u},\tilde{Y}_{s}
^{t,x;u},\tilde{Z}_{s}^{t,x;u},u_{s})ds+\sigma(s,\tilde{X}_{s}^{t,x;u}
,\tilde{Y}_{s}^{t,x;u},\tilde{Z}_{s}^{t,x;u},u_{s})dB_{s},\\
d\tilde{Y}_{s}^{t,x;u}= & -g(s,\tilde{X}_{s}^{t,x;u},\tilde{Y}_{s}
^{t,x;u},\tilde{Z}_{s}^{t,x;u},u_{s})ds+\tilde{Z}_{s}^{t,x;u}dB_{s},\text{
}s\in\lbrack t,t+\delta],\\
\tilde{X}_{t}^{t,x;u}= & x,\ \tilde{Y}_{t+\delta}^{t,x;u}=W(t+\delta,\tilde
{X}_{t+\delta}^{t,x;u}).
\end{array}
\right. \label{eq-xy-til-w}
\end{equation}
Since $\tilde{X}_{t+\delta}^{t,x;u}\in L^{2}(\mathcal{F}_{t+\delta}
;\mathbb{R}^{n})$, for each integer $m$, we can choose a partition
$\{A_{i}^{m}:i=1,\ldots,m\}$ of $\mathcal{F}_{t+\delta}$ and $x_{i}^{m}
\in\mathbb{R}^{n}$ such that \ $\mathbb{E}\left[ \left\vert \xi^{m}-\tilde
{X}_{t+\delta}^{t,x;u}\right\vert ^{2}\right] \rightarrow0\text{ as
}m\rightarrow\infty, $ where $\xi^{m}=\sum_{i=1}^{m}x_{i}^{m}I_{A_{i}^{m}}$.
For each given $x_{i}^{m}$, by Proposition \ref{pro-new-1}, we can find a
$v^{i,m}\in\mathcal{U}^{t+\delta}[t+\delta,T]$ such that
\begin{equation}
W(t+\delta,x_{i}^{m})\leq Y_{t+\delta}^{t+\delta,x_{i}^{m};v^{i,m}}\leq
W(t+\delta,x_{i}^{m})+\frac{1}{m}. \label{new-131}
\end{equation}
Set \ $u_{s}^{m}=u_{s}I_{[t,t+\delta]}(s)+\left( \sum_{i=1}^{m}v_{s}
^{i,m}I_{A_{i}^{m}}\right) I_{(t+\delta,T]}(s). $
\textbf{Step 2}. By (\ref{est-initial-1}) in Lemma \ref{est-initial}, we get
\begin{equation}
\mathbb{E}\left[ \left\vert Y_{t+\delta}^{t+\delta,\tilde{X}_{t+\delta
}^{t,x;u};u^{m}}-Y_{t+\delta}^{t+\delta,\xi^{m};u^{m}}\right\vert ^{2}\right]
\leq C\mathbb{E}\left[ \left\vert \tilde{X}_{t+\delta}^{t,x;u}-\xi
^{m}\right\vert ^{2}\right] \rightarrow0\text{ as }m\rightarrow\infty.
\label{new-new-2312}
\end{equation}
Similar to the proof of Proposition \ref{pro-new-1}, one can check that
\begin{equation}
\begin{array}
[c]{l}
\left( X_{s}^{t+\delta,\xi^{m};u^{m}},Y_{s}^{t+\delta,\xi^{m};u^{m}}
,Z_{s}^{t+\delta,\xi^{m};u^{m}}\right) _{s\in\lbrack t+\delta,T]}\\
=\left( \displaystyle\sum_{i=1}^{m}X_{s}^{t+\delta,x_{i}^{m};v^{i,m}}
I_{A_{i}^{m}},\displaystyle\sum_{i=1}^{m}Y_{s}^{t+\delta,x_{i}^{m};v^{i,m}
}I_{A_{i}^{m}},\displaystyle\sum_{i=1}^{m}Z_{s}^{t+\delta,x_{i}^{m};v^{i,m}
}I_{A_{i}^{m}}\right) _{s\in\lbrack t+\delta,T]}.
\end{array}
\label{new-132}
\end{equation}
Combining (\ref{new-131}) and (\ref{new-132}), we obtain
\begin{equation}
W(t+\delta,\xi^{m})\leq Y_{t+\delta}^{t+\delta,\xi^{m};u^{m}}\leq
W(t+\delta,\xi^{m})+\frac{1}{m}. \label{new-133}
\end{equation}
Thus, by Lemma \ref{le-w-new},
\begin{equation}
\mathbb{E}\left[ \left\vert Y_{t+\delta}^{t+\delta,\xi^{m};u^{m}}
-W(t+\delta,\tilde{X}_{t+\delta}^{t,x;u})\right\vert ^{2}\right]
\rightarrow0\text{ as }m\rightarrow\infty. \label{y-hat-w}
\end{equation}
By (\ref{new-new-2312}) and (\ref{y-hat-w}), we get
\begin{equation}
\mathbb{E}\left[ \left\vert Y_{t+\delta}^{t+\delta,\tilde{X}_{t+\delta
}^{t,x;u};u^{m}}-W(t+\delta,\tilde{X}_{t+\delta}^{t,x;u})\right\vert
^{2}\right] \rightarrow0\text{ as }m\rightarrow\infty. \label{new-134}
\end{equation}
Consider the following decoupled FBSDE:
\begin{equation}
\left\{
\begin{array}
[c]{rl}
d\bar{X}_{s}^{m} & =b(s,\bar{X}_{s}^{m},\tilde{Y}_{s}^{t,x;u},\tilde{Z}
_{s}^{t,x;u},u_{s})ds+\sigma(s,\bar{X}_{s}^{m},\tilde{Y}_{s}^{t,x;u},\tilde
{Z}_{s}^{t,x;u},u_{s})dB_{s},\\
d\bar{Y}_{s}^{m} & =-g(s,\bar{X}_{s}^{m},\bar{Y}_{s}^{m},\bar{Z}_{s}^{m}
,u_{s}^{m})ds+\bar{Z}_{s}^{m}dB_{s},\text{ }s\in\lbrack t,t+\delta],\\
\bar{X}_{t}^{m} & =x,\ \bar{Y}_{t+\delta}^{m}=Y_{t+\delta}^{t+\delta,\tilde
{X}_{t+\delta}^{t,x;u};u^{m}}.
\end{array}
\right. \label{new-new-3422}
\end{equation}
By (\ref{eq-xy-til-w}), we know that $(\tilde{X}_{s}^{t,x;u})_{s\in\lbrack
t,t+\delta]}$ satisfies the SDE in (\ref{new-new-3422}), which implies
$\bar{X}_{s}^{m}=\tilde{X}_{s}^{t,x;u}$ on $[t,t+\delta]$. Thus, by the
estimate of BSDE, we get
\begin{equation}
\begin{array}
[c]{rl}
& \mathbb{E}\left[ \underset{s\in\lbrack t,t+\delta]}{\sup}\left\vert
\tilde{Y}_{s}^{t,x;u}-\bar{Y}_{s}^{m}\right\vert ^{2}+\int_{t}^{t+\delta
}\left\vert \tilde{Z}_{s}^{t,x;u}-\bar{Z}_{s}^{m}\right\vert ^{2}ds\right] \\
& \ \ \leq C\mathbb{E}\left[ \left\vert Y_{t+\delta}^{t+\delta,\tilde
{X}_{t+\delta}^{t,x;u};u^{m}}-W(t+\delta,\tilde{X}_{t+\delta}^{t,x;u}
)\right\vert ^{2}\right] .
\end{array}
\label{new-124}
\end{equation}
It follows from (\ref{new-134}) and (\ref{new-124}) that
\begin{equation}
\mathbb{E}\left[ \underset{s\in\lbrack t,t+\delta]}{\sup}\left\vert \tilde
{Y}_{s}^{t,x;u}-\bar{Y}_{s}^{m}\right\vert ^{2}+\int_{t}^{t+\delta}\left\vert
\tilde{Z}_{s}^{t,x;u}-\bar{Z}_{s}^{m}\right\vert ^{2}ds\right] \rightarrow0
\label{new-new-3423}
\end{equation}
as $m\rightarrow\infty.$
\textbf{Step 3.} Define $(X_{s}^{m},Y_{s}^{m},Z_{s}^{m})_{s\in\lbrack t,T]}$
as follows
\begin{align*}
(X_{s}^{m},Y_{s}^{m},Z_{s}^{m})_{s\in\lbrack t,t+\delta]} & =(\bar{X}
_{s}^{m},\bar{Y}_{s}^{m},\bar{Z}_{s}^{m})_{s\in\lbrack t,t+\delta]},\text{ }\\
(X_{s}^{m},Y_{s}^{m},Z_{s}^{m})_{s\in(t+\delta,T]} & =(X_{s}^{t+\delta
,\tilde{X}_{t+\delta}^{t,x;u};u^{m}},Y_{s}^{t+\delta,\tilde{X}_{t+\delta
}^{t,x;u};u^{m}},Z_{s}^{t+\delta,\tilde{X}_{t+\delta}^{t,x;u};u^{m}}
)_{s\in(t+\delta,T]}.
\end{align*}
It is easy to verify that $(X_{s}^{m},Y_{s}^{m},Z_{s}^{m})_{s\in\lbrack t,T]}$
satisfies the following FBSDE:
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}^{m} & =[b(s,X_{s}^{m},Y_{s}^{m},Z_{s}^{m},u_{s}^{m})+l_{s}
^{1}]ds+[\sigma(s,X_{s}^{m},Y_{s}^{m},Z_{s}^{m},u_{s}^{m})+l_{s}^{2}]dB_{s},\\
dY_{s}^{m} & =-g(s,X_{s}^{m},Y_{s}^{m},Z_{s}^{m},u_{s}^{m})ds+Z_{s}^{m}
dB_{s},\\
X_{t}^{m} & =x,\ Y_{T}^{m}=\phi\left( X_{T}^{m}\right) ,
\end{array}
\right. \label{eq-xy-til-t}
\end{equation}
where
\[
\begin{array}
[c]{l}
l_{s}^{1}=b(s,\bar{X}_{s}^{m},\tilde{Y}_{s}^{t,x;u},\tilde{Z}_{s}
^{t,x;u},u_{s})-b(s,\bar{X}_{s}^{m},\bar{Y}_{s}^{m},\bar{Z}_{s}^{m}
,u_{s}),\text{ }s\in\lbrack t,t+\delta];\\
\text{ }l_{s}^{1}=0,\text{ }s\in(t+\delta,T];\\
l_{s}^{2}=\sigma(s,\bar{X}_{s}^{m},\tilde{Y}_{s}^{t,x;u},\tilde{Z}_{s}
^{t,x;u},u_{s})-\sigma(s,\bar{X}_{s}^{m},\bar{Y}_{s}^{m},\bar{Z}_{s}^{m}
,u_{s}),\text{ }s\in\lbrack t,t+\delta];\\
\text{ }l_{s}^{2}=0,\text{ }s\in(t+\delta,T].
\end{array}
\]
By Theorem 2.2 in \cite{Hu-JX}, we obtain
\begin{equation}
\begin{array}
[c]{l}
\mathbb{E}\left[ \left\vert Y_{t}^{m}-Y_{t}^{t,x;u^{m}}\right\vert
^{2}\right] =\mathbb{E}\left[ \left\vert \bar{Y}_{t}^{m}-Y_{t}^{t,x;u^{m}}
}\right\vert ^{2}\right] \\
\leq C\mathbb{E}\left[ \displaystyle{\int}_{t}^{t+\delta}\left( \left\vert
\tilde{Y}_{s}^{t,x;u}-\bar{Y}_{s}^{m}\right\vert ^{2}+\left\vert \tilde{Z}
_{s}^{t,x;u}-\bar{Z}_{s}^{m}\right\vert ^{2}\right) ds\right] .
\end{array}
\label{new-122}
\end{equation}
\textbf{Step 4.} By (\ref{new-new-3423}) and (\ref{new-122}), we get as
$m\rightarrow\infty$
\begin{equation}
\mathbb{E}\left[ \left\vert Y_{t}^{t,x;u^{m}}-\tilde{Y}_{t}^{t,x;u}
\right\vert ^{2}\right] \leq2\left\{ \mathbb{E}\left[ \left\vert
Y_{t}^{t,x;u^{m}}-\bar{Y}_{t}^{m}\right\vert ^{2}\right] +\mathbb{E}\left[
\left\vert \bar{Y}_{t}^{m}-\tilde{Y}_{t}^{t,x;u}\right\vert ^{2}\right]
\right\} \rightarrow0. \label{new-121}
\end{equation}
By the definition of $W(t,x)$, we know that $Y_{t}^{t,x;u^{m}}\geq W(t,x)$
$P$-a.s. for $m\geq1$. Thus we obtain (\ref{new-111}) by (\ref{new-121}).
Finally, since
\[
\underset{v\in\mathcal{U}^{t}[t,t+\delta]}{\inf}G_{t,t+\delta}^{t,x;v}\left[
W(t+\delta,\tilde{X}_{t+\delta}^{t,x;v})\right] \geq\underset{u\in
\mathcal{U}[t,t+\delta]}{ess\inf}G_{t,t+\delta}^{t,x;u}\left[ W(t+\delta
,\tilde{X}_{t+\delta}^{t,x;u})\right] ,
\]
we obtain the desired result by (\ref{new-141}) and (\ref{new-142}).
\end{proof}
\begin{remark}
It is important to note that $(Y_{s}^{t,x;u^{m}},Z_{s}^{t,x;u^{m}}
)_{s\in\lbrack t,t+\delta]}$ varies with $u^{m}$, which leads to the change of
$(X_{s}^{t,x;u^{m}})_{s\in\lbrack t,t+\delta]}$ in the fully coupled case.
This is different from the decoupled case, and the approach for establishing
the DPP in the decoupled case no longer works. To overcome this difficulty,
we introduce two auxiliary FBSDEs (\ref{new-new-3422}) and (\ref{eq-xy-til-t})
to prove the DPP.
\end{remark}
Now we prove the continuity property of $W(t,x)$ in $t$.
\begin{lemma}
\label{le-w-t}Suppose Assumption \ref{assum-1} holds. Then the value function
$W(t,x)$ is $\frac{1}{2}$-H\"{o}lder continuous in $t$.
\end{lemma}
\begin{proof}
For each $(t,x)\in\lbrack0,T)\times\mathbb{R}^{n}$ and $\delta\in(0,T-t]$, by
Theorem \ref{th-ddp}, we have
\[
W(t,x)=\inf_{v\in\mathcal{U}^{t}[t,t+\delta]}G_{t,t+\delta}^{t,x;v}\left[
W(t+\delta,\tilde{X}_{t+\delta}^{t,x;v})\right] .
\]
Thus
\[
\left\vert W(t,x)-W\left( t+\delta,x\right) \right\vert \leq\underset{v\in
\mathcal{U}^{t}[t,t+\delta]}{\sup}\left\vert G_{t,t+\delta}^{t,x;v}\left[
W(t+\delta,\tilde{X}_{t+\delta}^{t,x;v})\right] -W\left( t+\delta,x\right)
\right\vert .
\]
For each $v\in\mathcal{U}^{t}[t,t+\delta]$, by the definition of
$G_{t,t+\delta}^{t,x;v}\left[ \cdot\right] $, we have
\ $G_{t,t+\delta}^{t,x;v}\left[ W(t+\delta,\tilde{X}_{t+\delta}
^{t,x;v})\right] \newline=\mathbb{E}\left[ W(t+\delta,\tilde{X}_{t+\delta
}^{t,x;v})+\int_{t}^{t+\delta}g\left( s,\tilde{X}_{s}^{t,x;v},\tilde{Y}
_{s}^{t,x;v},\tilde{Z}_{s}^{t,x;v},v_{s}\right) ds\right] . $
Thus, by Lemma \ref{le-w-new},
\begin{equation}
\begin{array}
[c]{l}
\left\vert G_{t,t+\delta}^{t,x;v}\left[ W(t+\delta,\tilde{X}_{t+\delta
}^{t,x;v})\right] -W\left( t+\delta,x\right) \right\vert \\
\leq\mathbb{E}\left[ \left\vert W(t+\delta,\tilde{X}_{t+\delta}
^{t,x;v})-W\left( t+\delta,x\right) \right\vert +\displaystyle{\int}
_{t}^{t+\delta}\left\vert g\left( s,\tilde{X}_{s}^{t,x;v},\tilde{Y}
_{s}^{t,x;v},\tilde{Z}_{s}^{t,x;v},v_{s}\right) \right\vert ds\right] \\
\leq C\mathbb{E}\left[ \left\vert \tilde{X}_{t+\delta}^{t,x;v}-x\right\vert
+\displaystyle{\int}_{t}^{t+\delta}\left( 1+\left\vert \tilde{X}_{s}
^{t,x;v}\right\vert +\left\vert \tilde{Y}_{s}^{t,x;v}\right\vert +\left\vert
\tilde{Z}_{s}^{t,x;v}\right\vert \right) ds\right] .
\end{array}
\label{eq-new-133}
\end{equation}
It follows from Theorem 2.2 in \cite{Hu-JX} that
\[
\begin{array}
[c]{rl}
\mathbb{E}\left[ \sup\limits_{t\leq s\leq t+\delta}\left( |\tilde{X}
_{s}^{t,x;v}|^{2}+|\tilde{Y}_{s}^{t,x;v}|^{2}\right) +\displaystyle{\int}
_{t}^{t+\delta}|\tilde{Z}_{s}^{t,x;v}|^{2}ds\right] & \leq C\left(
1+\left\vert x\right\vert ^{2}\right) ,
\end{array}
\]
which implies that
\begin{equation}
\mathbb{E}\left[ \int_{t}^{t+\delta}\left( 1+\left\vert \tilde{X}
_{s}^{t,x;v}\right\vert +\left\vert \tilde{Y}_{s}^{t,x;v}\right\vert
+\left\vert \tilde{Z}_{s}^{t,x;v}\right\vert \right) ds\right] \leq C\left(
1+\left\vert x\right\vert \right) \delta^{\frac{1}{2}}. \label{eq-new-134}
\end{equation}
By (\ref{eq-new-133}) and (\ref{eq-new-134}), we just need to estimate
$\mathbb{E}\left[ \left\vert \tilde{X}_{t+\delta}^{t,x;v}-x\right\vert
\right] $. Denote $\hat{X}_{s}=\tilde{X}_{s}^{t,x;v}-x$, $\hat{Y}_{s}
=\tilde{Y}_{s}^{t,x;v}-W\left( t+\delta,x\right) $, $\hat{Z}_{s}=\tilde
{Z}_{s}^{t,x;v}$, then $\left( \hat{X},\hat{Y},\hat{Z}\right) $ satisfies
the following FBSDE:
\begin{equation}
\left\{
\begin{array}
[c]{rl}
d\hat{X}_{s}= & b(s,\hat{X}_{s}+x,\hat{Y}_{s}+W\left( t+\delta,x\right)
,\hat{Z}_{s},v_{s})ds\\
& +\sigma(s,\hat{X}_{s}+x,\hat{Y}_{s}+W\left( t+\delta,x\right) ,\hat{Z}
_{s},v_{s})dB_{s},\\
d\hat{Y}_{s}= & -g(s,\hat{X}_{s}+x,\hat{Y}_{s}+W\left( t+\delta,x\right)
,\hat{Z}_{s},v_{s})ds+\hat{Z}_{s}dB_{s},\text{ }s\in\lbrack t,t+\delta],\\
\hat{X}_{t}= & 0,\ \hat{Y}_{t+\delta}=W(t+\delta,\hat{X}_{t+\delta
}+x)-W\left( t+\delta,x\right) .
\end{array}
\right. \label{FBSDE-continuous}
\end{equation}
By Theorem 2.2 in \cite{Hu-JX} and Lemma \ref{le-w-new}, we get
\[
\begin{array}
[c]{rl}
\mathbb{E}\left[ \sup\limits_{t\leq s\leq t+\delta}\left\vert \hat{X}
_{s}\right\vert ^{2}\right] & \leq C\mathbb{E}\left[ \left(
\displaystyle{\int}_{t}^{t+\delta}[|b|+|g|](s,x,W\left( t+\delta,x\right)
,0,v_{s})ds\right) ^{2}\right] \\
& \text{ \ }+C\mathbb{E}\left[ \displaystyle{\int}_{t}^{t+\delta}\left\vert
\sigma(s,x,W\left( t+\delta,x\right) ,0,v_{s})\right\vert ^{2}ds\right] \\
& \leq C\left( 1+\left\vert x\right\vert ^{2}\right) \delta,
\end{array}
\]
which yields $\mathbb{E}[\vert\tilde{X}_{t+\delta}^{t,x;v}-x\vert] \leq
C\left( 1+\left\vert x\right\vert \right) \delta^{\frac{1}{2}}$. Noting that
the above constant $C$ does not depend on $v$, then \ $|W(t,x)-W\left(
t+\delta,x\right) |\leq C\left( 1+\left\vert x\right\vert \right)
\delta^{\frac{1}{2}}. $ This completes the proof.
\end{proof}
\begin{remark}
In order to prove $\mathbb{E} [\vert\tilde{X}_{t+\delta}^{t,x;v}-x\vert] \leq
C\left( 1+\left\vert x\right\vert \right) \delta^{\frac{1}{2}}$, we
construct another FBSDE, which is different from the proof in \cite{Li-W}.
In particular, we do not need an additional assumption on $L_{3}$ as in \cite{Li-W}.
\end{remark}
\subsection{The value function and the HJB equation}
In this subsection, we show that the value function $W(t,x)$ defined in
(\ref{obje-eq}) is a viscosity solution to the following HJB equation
\begin{equation}
\left\{
\begin{array}
[c]{l}
\partial_{t}W(t,x)+\inf\limits_{u\in U}H(t,x,W(t,x),DW(t,x),D^{2}W\left(
t,x\right) ,u)=0,\\
W(T,x)=\phi(x),
\end{array}
\right. \label{eq-hjb}
\end{equation}
where
\begin{equation}
\begin{array}
[c]{l}
H(t,x,v,p,A,u)\\
=\frac{1}{2}\mathrm{tr}[\sigma\sigma^{\intercal}
(t,x,v,V(t,x,v,p,u),u)A]+p^{\intercal}b(t,x,v,V(t,x,v,p,u),u)\\
\ \ +g(t,x,v,V(t,x,v,p,u),u),\\
V(t,x,v,p,u)=p^{\intercal}\sigma(t,x,v,V(t,x,v,p,u),u),\\
(t,x,v,p,A,u)\in\lbrack0,T]\times\mathbb{R}^{n}\times\mathbb{R}\times
\mathbb{R}^{n}\times\mathbb{S}^{n}\times U.
\end{array}
\label{def-G}
\end{equation}
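Note that $V(t,x,v,p,u)$ in (\ref{def-G}) is defined implicitly through a fixed-point relation: for two candidates $V_{1}$, $V_{2}$,
\[
\left\vert p^{\intercal}\sigma(t,x,v,V_{1},u)-p^{\intercal}\sigma(t,x,v,V_{2},u)\right\vert \leq L_{3}|p|\left\vert V_{1}-V_{2}\right\vert ,
\]
so $V$ is well defined whenever $L_{3}|p|<1$, by the same contraction argument as in Lemma \ref{le-h} below.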
We first give the definition of viscosity solution (see
\cite{Crandall-lecture}).
\begin{definition}
(i) A real-valued continuous function $W(\cdot,\cdot)\in C\left(
[0,T]\times\mathbb{R}^{n}\right) $ is called a viscosity subsolution (resp.
supersolution) to (\ref{eq-hjb}) if $W(T,x)\leq\phi(x)$ (resp. $W(T,x)\geq
\phi(x)$) for all $x\in\mathbb{R}^{n}$ and if for all $\varphi\in C_{b}
^{2,3}\left( [0,T]\times\mathbb{R}^{n}\right) $ such that $W(t,x)=\varphi
(t,x)$ and $W-\varphi$ attains a local maximum (resp. minimum) at
$(t,x)\in\lbrack0,T)\times\mathbb{R}^{n}$, we have
\[
\left\{
\begin{array}
[c]{l}
\partial_{t}\varphi(t,x)+\inf\limits_{u\in U}H(t,x,\varphi(t,x),D\varphi
\left( t,x\right) ,D^{2}\varphi\left( t,x\right) ,u)\geq0\\
(\text{resp. }\partial_{t}\varphi(t,x)+\inf\limits_{u\in U}H(t,x,\varphi
(t,x),D\varphi\left( t,x\right) ,D^{2}\varphi\left( t,x\right) ,u)\leq0).
\end{array}
\right.
\]
(ii) A real-valued continuous function $W(\cdot,\cdot)\in C\left(
[0,T]\times\mathbb{R}^{n}\right) $ is called a viscosity solution to
(\ref{eq-hjb}), if it is both a viscosity subsolution and a viscosity supersolution.
\end{definition}
In order to prove that $W(t,x)$ is a viscosity solution to the HJB equation
(\ref{eq-hjb}), we need the following assumption.
\begin{assumption}
\label{assum-l3} $L_{3}L_{W}<1$ and $8C_{4}L_{3}^{4}<1$, where $L_{W}
=\sqrt{C_{2}}\left( 1-\sqrt{\Lambda}\right) ^{-1}$ is the Lipschitz constant
of the value function $W$ with respect to $x$, and $C_{2}$ and $C_{4}$ are defined in
Lemma \ref{sde-bsde}.
\end{assumption}
\begin{theorem}
\label{th-vis}Suppose Assumptions \ref{assum-1} and \ref{assum-l3} hold. Then
the value function $W(t,x)$ is the viscosity solution to the HJB equation
(\ref{eq-hjb}).
\end{theorem}
\textbf{Proof. }Obviously, $W\left( T,x\right) =\phi(x)$, $x\in
\mathbb{R}^{n}$. By Lemmas \ref{le-w-new} and \ref{le-w-t}, we know that
$W(\cdot,\cdot)\in C\left( [0,T]\times\mathbb{R}^{n}\right) $. We first
prove that $W$ is a viscosity subsolution. For each given $(t,x_{0})\in
\lbrack0,T)\times\mathbb{R}^{n}$, suppose $\varphi\left( \cdot\right) \in
C_{b}^{2,3}\left( \left[ 0,T\right] \times\mathbb{R}^{n}\right) $ such
that $\varphi(t,x_{0})=W(t,x_{0})$, $\varphi\geq W$ on $[0,T]\times
\mathbb{R}^{n}$ and
\begin{equation}
L_{3}\sup_{(s,x)\in\lbrack0,T]\times\mathbb{R}^{n}}|D\varphi(s,x)|<1.
\label{new-new12345}
\end{equation}
Consider the following FBSDE and BSDE: $\forall s\in\lbrack t,t+\delta
]\subset\lbrack0,T]$
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}^{u}= & b(s,X_{s}^{u},Y_{s}^{u},Z_{s}^{u},u_{s})ds+\sigma(s,X_{s}
^{u},Y_{s}^{u},Z_{s}^{u},u_{s})dB_{s},\\
dY_{s}^{u}= & -g(s,X_{s}^{u},Y_{s}^{u},Z_{s}^{u},u_{s})ds+Z_{s}^{u}dB_{s},\\
X_{t}^{u}= & x_{0},\ Y_{t+\delta}^{u}=\varphi(t+\delta,X_{t+\delta}^{u}),
\end{array}
\right. \label{eq-11111}
\end{equation}
\begin{equation}
dY_{s}^{1,u}=-F_{1}(s,X_{s}^{u},Y_{s}^{1,u},Z_{s}^{1,u},u_{s})ds+Z_{s}
^{1,u}dB_{s},\ Y_{t+\delta}^{1,u}=0, \label{eq-11112}
\end{equation}
where
\[
\begin{array}
[c]{rl}
F_{1}\left( s,x,y,z,u\right) = & \partial_{t}\varphi\left( s,x\right)
+\left( D\varphi\left( s,x\right) \right) ^{\intercal}b\left(
s,x,y+\varphi\left( s,x\right) ,h(s,x,y,z,u),u\right) \\
& +\frac{1}{2}\mathrm{tr}\left[ \sigma\sigma^{\intercal}\left(
s,x,y+\varphi\left( s,x\right) ,h(s,x,y,z,u),u\right) D^{2}\varphi\left(
s,x\right) \right] \\
& +g\left( s,x,y+\varphi\left( s,x\right) ,h(s,x,y,z,u),u\right) ,
\end{array}
\]
\begin{equation}
h(s,x,y,z,u)=z+{D\varphi(s,x)}^{\intercal}\sigma\left( s,x,y+\varphi
(s,x),h(s,x,y,z,u),u\right) . \label{eq-111121}
\end{equation}
\begin{remark}
For $\varphi\left( \cdot\right) \in C_{b}^{2,3}\left( \left[ 0,T\right]
\times\mathbb{R}^{n}\right) $ such that $\varphi(t,x_{0})=W(t,x_{0})$ and
$\varphi\geq W$, we have
\[
\varphi(t,x)-\varphi(t,x_{0})={D\varphi(t,x_{0})}^{\intercal}(x-x_{0})+o(|x-x_{0}|)
\]
and
\[
\varphi(t,x)-\varphi(t,x_{0})\geq W(t,x)-W(t,x_{0})\geq-L_{W}|x-x_{0}|.
\]
Taking $x\rightarrow x_{0}$ such that ${D\varphi(t,x_{0})}^{\intercal}(x-x_{0})=-|{D\varphi(t,x_{0})}||x-x_{0}|$, we get $|{D\varphi(t,x_{0})}|\leq L_{W}$. From the definition of viscosity solution and
$L_{3}L_{W}<1$, we can assume (\ref{new-new12345}) holds without loss of generality.
\end{remark}
We first prove the following lemmas.
\begin{lemma}
\label{le-h}Suppose Assumptions \ref{assum-1} and \ref{assum-l3} hold. Then
there exists a unique function $h(s,x,y,z,u)$ satisfying (\ref{eq-111121}) for
each $s\in\lbrack0,T]$, $x\in\mathbb{R}^{n}$, $y\in\mathbb{R}$,
$z\in\mathbb{R}^{1\times d}$ and $u\in U$. Furthermore, for each given
$s\in\lbrack0,T]$, $x$, $\bar{x}\in\mathbb{R}^{n}$, $y$, $\bar{y}\in\mathbb{R}$,
$z$, $\bar{z}\in\mathbb{R}^{1\times d}$ and $u\in U$,
\begin{equation}
\begin{array}
[c]{l}
|h(s,x,y,z,u)|\leq C(1+|x|+|y|+|z|),\\
|h(s,x,y,z,u)-h(s,\bar{x},\bar{y},\bar{z},u)|\leq C[\left( 1+\left\vert
x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert \right)
\left\vert x-\bar{x}\right\vert +|y-\bar{y}|+|z-\bar{z}|],
\end{array}
\label{new-new-new-1}
\end{equation}
and $h(\cdot)$ is continuous with respect to $s$, $x$, $y$, $z$, $u$.
\end{lemma}
\begin{proof}
For each given $s\in\lbrack0,T]$, $x\in\mathbb{R}^{n}$, $y\in\mathbb{R}$,
$z\in\mathbb{R}^{1\times d}$ and $u\in U$, we define a mapping $\Gamma
:\mathbb{R}^{1\times d}\rightarrow\mathbb{R}^{1\times d}$ as follows
\[
\Gamma z^{\prime}=z+{D\varphi(s,x)}^{\intercal}\sigma\left( s,x,y+\varphi
(s,x),z^{\prime},u\right) \text{ for }z^{\prime}\in\mathbb{R}^{1\times d}.
\]
For each $z_{1}$, $z_{2}\in\mathbb{R}^{1\times d}$, we have
\[
\begin{array}
[c]{rl}
& \left\vert \Gamma z_{1}-\Gamma z_{2}\right\vert \\
& =\left\vert {\sigma}\left( s,x,y+\varphi(s,x),z_{1},u\right) ^{\intercal
}{D\varphi(s,x)}-{\sigma}\left( s,x,y+\varphi(s,x),z_{2}
,u\right) ^{\intercal}{D\varphi(s,x)}\right\vert \\
& \leq L_{3}\sup_{(t,x)\in\lbrack0,T]\times\mathbb{R}^{n}}|D\varphi
(t,x)|\left\vert z_{1}-z_{2}\right\vert ,
\end{array}
\]
which implies that $\Gamma$ is a contraction mapping. Thus there exists a
unique $z^{\prime}\in\mathbb{R}^{1\times d}$ such that $\Gamma z^{\prime}
={z}^{\prime}$. Define $h(s,x,y,z,u)=z^{\prime}$, then $h\left( \cdot\right)
$ satisfies (\ref{eq-111121}).
For each $s\in\lbrack0,T]$, $x$, $\bar{x}\in\mathbb{R}^{n}$, $y$, $\bar{y}
\in\mathbb{R}$, $z$, $\bar{z}\in\mathbb{R}^{1\times d}$ and $u\in U$,
\[
\begin{array}
[c]{l}
|h(s,x,y,z,u)|\\
\leq|z|+L_{3}||D\varphi||_{\infty}|h(s,x,y,z,u)|+||D\varphi||_{\infty}
|{\sigma}\left( s,x,y+\varphi(s,x),0,u\right) |\\
\leq L_{3}||D\varphi||_{\infty}|h(s,x,y,z,u)|+C(1+|x|+|y|+|z|),
\end{array}
\]
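Since $L_{3}||D\varphi||_{\infty}<1$ by (\ref{new-new12345}), the first estimate can be rearranged as
\[
|h(s,x,y,z,u)|\leq\left( 1-L_{3}||D\varphi||_{\infty}\right) ^{-1}C(1+|x|+|y|+|z|),
\]
which yields the first bound in (\ref{new-new-new-1}).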
\[
\begin{array}
[c]{l}
|h(s,x,y,z,u)-h(s,\bar{x},\bar{y},\bar{z},u)|\\
\leq|z-\bar{z}|+\left\vert D\varphi(s,x)-D\varphi(s,\bar{x})\right\vert
\left\vert {\sigma}\left( s,x,y+\varphi(s,x)
,h(s,x,y,z,u),u\right) \right\vert \\
\ +||D\varphi||_{\infty}\left\vert {\sigma}\left( s,x,y+\varphi(s,x)
,h(s,x,y,z,u),u\right) \right. \\
\text{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\left. -{\sigma}\left( s,\bar{x}
,\bar{y}+\varphi(s,\bar{x}),h(s,\bar{x},\bar{y},\bar{z}
,u),u\right) \right\vert \\
\leq|z-\bar{z}|+||D\varphi||_{\infty}\{L_{3}|h(s,x,y,z,u)-h(s,\bar{x},\bar
{y},\bar{z},u)|+C\left( \left\vert x-\bar{x}\right\vert +|y-\bar{y}|\right)
\}\\
\ +C\left\vert x-\bar{x}\right\vert (1+|x|+|y|+|h\left( s,x,y,z,u\right) |),
\end{array}
\]
which implies (\ref{new-new-new-1}). Now we prove that $h(\cdot)$ is
continuous. For $(s_{m},x_{m},y_{m},z_{m},u_{m})\rightarrow(s,x,y,z,u)$,
\[
\begin{array}
[c]{l}
|h(s_{m},x_{m},y_{m},z_{m},u_{m})-h(s,x,y,z,u)|\\
\leq|z_{m}-z|+L_{3}||D\varphi||_{\infty}|h(s_{m},x_{m},y_{m},z_{m}
,u_{m})-h(s,x,y,z,u)|\\
\ \ +\left\vert {D\varphi(s}_{m}{,x}_{m}{)}^{\intercal}{\sigma}\left(
s_{m},x_{m},y_{m}+\varphi(s_{m},x_{m}\right) ,h(s,x,y,z,u),u_{m})\right. \\
\ \ \ \ \ \left. -{D\varphi(s,x)}^{\intercal}{\sigma}\left( s,x,y+\varphi
(s,x\right) ,h(s,x,y,z,u),u)\right\vert ,
\end{array}
\]
which implies that $h(\cdot)$ is continuous with respect to $s$, $x$, $y$,
$z$, $u$.
\end{proof}
\begin{lemma}
\label{le-1app} For each $s\in\lbrack t,t+\delta]$, we have
\begin{equation}
\begin{array}
[c]{l}
Y_{s}^{1,u}=Y_{s}^{u}-\varphi\left( s,X_{s}^{u}\right) ,\ Z_{s}^{1,u}
=Z_{s}^{u}-{D\varphi\left( s,X_{s}^{u}\right) }^{\intercal}\sigma\left(
s,X_{s}^{u},Y_{s}^{u},Z_{s}^{u},u_{s}\right) .
\end{array}
\label{eq-11113}
\end{equation}
\end{lemma}
\begin{proof}
Applying It\^{o}'s formula to $Y_{s}^{u}-\varphi\left( s,X_{s}^{u}\right) $,
we can obtain the desired result.
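For the reader's convenience, a minimal sketch of this computation
(superscripts $u$ suppressed; the $ds$-term of $\varphi(s,X_{s})$, denoted by
dots, is not needed to identify $Z^{1,u}$):
\[
d\varphi(s,X_{s})=\left( \cdots\right) ds
+{D\varphi(s,X_{s})}^{\intercal}\sigma(s,X_{s},Y_{s},Z_{s},u_{s})dB_{s},
\]
so the martingale part of $Y_{s}-\varphi(s,X_{s})$ is
$(Z_{s}-{D\varphi(s,X_{s})}^{\intercal}\sigma(s,X_{s},Y_{s},Z_{s},u_{s}))dB_{s}$,
which yields the expression for $Z_{s}^{1,u}$ in (\ref{eq-11113}).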
\end{proof}
Under Assumption \ref{assum-l3}, we can choose a $\delta_{0}>0$ such that
\[
8C_{4}\left[ L_{2}^{4}\left( \delta_{0}^{2}+\delta_{0}^{4}\right)
+L_{3}^{4}\right] <1.
\]
By Theorem \ref{th-lp} in Appendix, for each $\delta<\delta_{0}$, we have
\begin{equation}
\begin{array}
[c]{l}
\mathbb{E}\left[ \sup\limits_{t\leq s\leq t+\delta}\left( |X_{s}^{u}
|^{4}+|Y_{s}^{u}|^{4}\right) +\left( \displaystyle{\int}_{t}^{t+\delta
}|Z_{s}^{u}|^{2}ds\right) ^{2}\right] \\
\leq C\left\{ |x_{0}|^{4}+\mathbb{E}\left[ \left( \displaystyle{\int}
_{t}^{t+\delta}(|b(s,0,0,0,u_{s})|+|g(s,0,0,0,u_{s})|)ds\right) ^{4}\right.
\right. \\
\ \ \ \ \left. \left. +\left( \displaystyle{\int}_{t}^{t+\delta}
|\sigma(s,0,0,0,u_{s})|^{2}ds\right) ^{2}\right] \right\} \\
\leq C\left( 1+\left\vert x_{0}\right\vert ^{4}\right) ,
\end{array}
\label{new-new-23452}
\end{equation}
where $C$ is a constant independent of $u$ and $\delta$. Consider the
following BSDE: $\forall s\in\lbrack t,t+\delta]$
\begin{equation}
dY_{s}^{2,u}=-F_{1}(s,x_{0},0,0,u_{s})ds+Z_{s}^{2,u}dB_{s},\ Y_{t+\delta
}^{2,u}=0. \label{eq-new-12345}
\end{equation}
We have the following estimate.
\begin{lemma}
\label{le-129-1} For each given $v\in\mathcal{U}^{t}[t,t+\delta]$, we have
$\vert Y_{t}^{1,v}-Y_{t}^{2,v}\vert\leq C\delta^{\frac{3}{2}},$
where $C$ is a positive constant depending on $x$ and independent of $v$ and
$\delta$.
\end{lemma}
\begin{proof}
Since
\begin{equation}
\begin{array}
[c]{l}
Y_{t}^{1,v}=\mathbb{E}\left[ \int_{t}^{t+\delta}F_{1}(s,X_{s}^{v},Y_{s}
^{1,v},Z_{s}^{1,v},v_{s})ds\right] ,\\
Y_{t}^{2,v}=\mathbb{E}\left[ \int_{t}^{t+\delta}F_{1}(s,x_{0},0,0,v_{s}
)ds\right] ,
\end{array}
\label{ne-new-2}
\end{equation}
we obtain
\[
\left\vert Y_{t}^{1,v}-Y_{t}^{2,v}\right\vert \leq\mathbb{E}\left[ \int
_{t}^{t+\delta}\hat{F}_{s}ds\right] ,
\]
where
\[
\hat{F}_{s}=\left\vert F_{1}(s,X_{s}^{v},Y_{s}^{1,v},Z_{s}^{1,v},v_{s}
)-F_{1}(s,x_{0},0,0,v_{s})\right\vert .
\]
Note that $\varphi\in C_{b}^{2,3}\left( [0,T]\times\mathbb{R}^{n}\right) $.
By Lemmas \ref{le-h} and \ref{le-1app}, it is easy to check that
\[
\hat{F}_{s}\leq C\left( \left\vert X_{s}^{v}-x_{0}\right\vert +|Y_{s}
^{1,v}|+|Z_{s}^{1,v}|+\left\vert X_{s}^{v}-x_{0}\right\vert ^{2}+|Y_{s}
^{1,v}|^{2}+|Z_{s}^{1,v}|^{2}\right) .
\]
Set $\tilde{X}_{s}^{v}=X_{s}^{v}-x_{0}$, $\tilde{Y}_{s}^{v}=Y_{s}^{1,v}$,
$\tilde{Z}_{s}^{v}=Z_{s}^{v}$. Then $(\tilde{X}_{s}^{v},\tilde{Y}_{s}
^{v},\tilde{Z}_{s}^{v})$ satisfies the following FBSDE:
\[
\left\{
\begin{array}
[c]{rl}
d\tilde{X}_{s}^{v}= & b(s,\tilde{X}_{s}^{v}+x_{0},\tilde{Y}_{s}^{v}
+\varphi\left( s,X_{s}^{v}\right) ,\tilde{Z}_{s}^{v},v_{s})ds\\
& +\sigma(s,\tilde{X}_{s}^{v}+x_{0},\tilde{Y}_{s}^{v}+\varphi\left(
s,X_{s}^{v}\right) ,\tilde{Z}_{s}^{v},v_{s})dB_{s},\\
d\tilde{Y}_{s}^{v}= & -g(s,\tilde{X}_{s}^{v}+x_{0},\tilde{Y}_{s}^{v}
+\varphi\left( s,X_{s}^{v}\right) ,\tilde{Z}_{s}^{v},v_{s})ds+\tilde{Z}
_{s}^{v}dB_{s},\\
\tilde{X}_{t}^{v}= & 0,\ \tilde{Y}_{t+\delta}^{v}=0.
\end{array}
\right.
\]
By (\ref{new-new-23452}) and Theorem \ref{th-lp} in Appendix, we have
\begin{equation}
\begin{array}
[c]{rl}
& \mathbb{E}\left[ \sup\limits_{t\leq s\leq t+\delta}\left( |\tilde{X}
_{s}^{v}|^{p}+|\tilde{Y}_{s}^{v}|^{p}\right) +\left( {\int}_{t}^{t+\delta
}|Z_{s}^{v}|^{2}ds\right) ^{\frac{p}{2}}\right] \\
& \leq C\left( 1+\mathbb{E}\left[ \sup\limits_{t\leq s\leq t+\delta}
|X_{s}^{v}|^{p}\right] \right) \delta^{\frac{p}{2}}\\
& \leq C\delta^{\frac{p}{2}},
\end{array}
\label{ne-new-3}
\end{equation}
where $p\in\lbrack2,4]$. Thus
\begin{equation}
\mathbb{E}\left[ \int_{t}^{t+\delta}\left( \left\vert X_{s}^{v}
-x_{0}\right\vert +|Y_{s}^{1,v}|+\left\vert X_{s}^{v}-x_{0}\right\vert
^{2}+|Y_{s}^{1,v}|^{2}\right) ds\right] \leq C\delta^{\frac{3}{2}}.
\label{ne-new-1}
\end{equation}
On the other hand, by (\ref{eq-11112}) and (\ref{ne-new-2}), we have
\begin{align*}
\mathbb{E}\left[ {\int}_{t}^{t+\delta}|Z_{s}^{1,v}|^{2}ds\right] &
=|Y_{t}^{1,v}|^{2}+\mathbb{E}\left[ \left( {\int}_{t}^{t+\delta}
F_{1}(s,X_{s}^{v},Y_{s}^{1,v},Z_{s}^{1,v},v_{s})ds\right) ^{2}\right] \\
& \leq2\mathbb{E}\left[ \left( {\int}_{t}^{t+\delta}|F_{1}(s,X_{s}
^{v},Y_{s}^{1,v},Z_{s}^{1,v},v_{s})|ds\right) ^{2}\right] .
\end{align*}
It is easy to check that
\[
|F_{1}(s,X_{s}^{v},Y_{s}^{1,v},Z_{s}^{1,v},v_{s})|\leq C(1+|X_{s}^{v}
|^{2}+|Y_{s}^{v}|^{2}+|Z_{s}^{v}|^{2}).
\]
Thus, by (\ref{new-new-23452}) and (\ref{ne-new-3}), we obtain
\[
\mathbb{E}\left[ {\int}_{t}^{t+\delta}|Z_{s}^{1,v}|^{2}ds\right] \leq
C\left( \delta^{2}+\mathbb{E}\left[ \left( {\int}_{t}^{t+\delta}|Z_{s}
^{v}|^{2}ds\right) ^{2}\right] \right) \leq C\delta^{2}.
\]
Since
\[
\mathbb{E}\left[ {\int}_{t}^{t+\delta}|Z_{s}^{1,v}|ds\right] \leq\left(
\mathbb{E}\left[ {\int}_{t}^{t+\delta}|Z_{s}^{1,v}|^{2}ds\right] \right)
^{\frac{1}{2}}\delta^{\frac{1}{2}}\leq C\delta^{\frac{3}{2}},
\]
we obtain the desired result.
\end{proof}
Now we compute $\inf_{v\in\mathcal{U}^{t}[t,t+\delta]}Y_{t}^{2,v}$.
\begin{lemma}
\label{le-inf}We have $Y_{t}^{0}=\inf_{v\in\mathcal{U}^{t}[t,t+\delta]}
Y_{t}^{2,v},$ where $Y_{t}^{0}$ is the solution to the following ordinary
differential equation:
\[
\begin{array}
[c]{rl}
dY_{s}^{0}=-F_{0}\left( s,x_{0}\right) ds{,}\ Y_{t+\delta}^{0}=0,\text{
\ }s\in\lbrack t,t+\delta], &
\end{array}
\]
where $F_{0}\left( s,x_{0}\right) =\inf_{u\in U}F_{1}\left(
s,x_{0},0,0,u\right) .$
\end{lemma}
\begin{proof}
For each given $v\in\mathcal{U}^{t}[t,t+\delta]$, $F_{1}\left( s,x_{0}
,0,0,v_{s}\right) \geq F_{0}\left( s,x_{0}\right) $. By the comparison
theorem for BSDEs, we get $Y_{t}^{2,v}\geq Y_{t}^{0}$. On the other hand, we can choose
a deterministic control $\mu$ in $\mathcal{U}^{t}[t,t+\delta]$ such that
$F_{0}\left( s,x_{0}\right) =F_{1}\left( s,x_{0},0,0,\mu_{s}\right) .$
It is clear that $Y_{t}^{2,\mu}=Y_{t}^{0}$. Thus we obtain the desired result.
\end{proof}
By Theorem \ref{th-ddp}, we have
\[
W\left( t,x_{0}\right) =\underset{v\in\mathcal{U}^{t}[t,t+\delta]}{\inf
}G_{t,t+\delta}^{t,x_{0};v}\left[ W(t+\delta,\tilde{X}_{t+\delta}^{t,x_{0}
;v})\right] .
\]
Since $\varphi\left( t+\delta,\cdot\right) \geq W\left( t+\delta
,\cdot\right) $, by Theorem \ref{th-comp} in Appendix, we get $Y_{t}^{v}\geq
W\left( t,x_{0}\right) $ for each $v\in\mathcal{U}^{t}[t,t+\delta]$, which
implies
\[
\underset{v\in\mathcal{U}^{t}[t,t+\delta]}{\inf}\left[ Y_{t}^{v}
-\varphi\left( t,x_{0}\right) \right] =\underset{v\in\mathcal{U}
^{t}[t,t+\delta]}{\inf}Y_{t}^{1,v}\geq0\text{.}
\]
By Lemma \ref{le-129-1}, we deduce $\inf_{v\in\mathcal{U}^{t}[t,t+\delta
]}Y_{t}^{2,v}\geq-C\delta^{\frac{3}{2}}.$ It yields that $Y_{t}^{0}
\geq-C\delta^{\frac{3}{2}}$ by Lemma \ref{le-inf}. Thus $-C\delta^{\frac
{1}{2}}\leq\frac{1}{\delta}Y_{t}^{0}=\frac{1}{\delta}\int_{t}^{t+\delta}
F_{0}\left( s,x_{0}\right) ds.$ Letting $\delta\rightarrow0$, we get
$F_{0}\left( t,x_{0}\right) \geq0$, which implies that $W$ is a viscosity
subsolution. By the same method, we can prove that $W$ is a viscosity
supersolution. Thus $W$ is a viscosity solution. $\square$
\begin{remark}
\label{re-mon} Note that Assumption \ref{assum-1} (ii) is only used to
guarantee the well-posedness of our fully coupled forward-backward controlled
system. In fact, following our approach, the reader may verify that all the
results in Section 3 still hold under Assumptions \ref{assum-1} (i) and
\ref{assum-l3} and the following monotonicity conditions.
\end{remark}
Given a nonzero $G\in\mathbb{R}^{1\times n}$, define
\[
\lambda=(x,y,z)^{\intercal},\ A\left( t,\lambda,u\right) =(-G^{\intercal}g,
Gb, G\sigma)^{\intercal}(t,\lambda,u).
\]
\begin{assumption}
\label{assm-mon} (Monotonicity conditions)\newline(i) $\left\langle A\left(
t,\lambda,u\right) -A\left( t,\bar{\lambda},u\right) ,\lambda-\bar{\lambda
}\right\rangle \leq-\beta_{1}\left\vert G\hat{x}\right\vert ^{2}-\beta
_{2}\left( \left\vert G^{\intercal}\hat{y}\right\vert ^{2}+\left\vert
G^{\intercal}\hat{z}\right\vert ^{2}\right) $, for $u\in U$;\newline(ii)
$\left\langle \phi\left( x\right) -\phi\left( \bar{x}\right) ,G\hat
{x}\right\rangle \geq\mu_{1}\left\vert G\hat{x}\right\vert ^{2}$, where
$\hat{x}=x-\bar{x}$, $\hat{y}=y-\bar{y}$, $\hat{z}=z-\bar{z}$, $\beta_{1}$,
$\beta_{2}$, $\mu_{1}$ are given nonnegative constants with $\beta_{1}
+\beta_{2}>0$, $\beta_{2}+\mu_{1}>0$. Moreover, $\beta_{2}>0$ when $n>1$.
\end{assumption}
\section{The uniqueness of viscosity solutions}
In this section, we study the uniqueness of the viscosity solution to the HJB
equation (\ref{eq-hjb}).
\subsection{$\sigma$ independent of $y$ and $z$}
In this case, the corresponding HJB equation becomes
\begin{equation}
\left\{
\begin{array}
[c]{l}
\partial_{t}W(t,x)+\inf\limits_{u\in U}H(t,x,W(t,x),DW(t,x),D^{2}
W(t,x),u)=0,\\
W(T,x)=\phi(x),
\end{array}
\right. \label{eq-hjb-yz}
\end{equation}
where
\[
\begin{array}
[c]{l}
H(t,x,v,p,A,u)\\
=\frac{1}{2}\mathrm{tr}[\sigma\sigma^{\intercal}(t,x,u)A]+p^{\intercal
}b(t,x,v,p^{\intercal}\sigma(t,x,u),u)+g(t,x,v,p^{\intercal}\sigma
(t,x,u),u),\\
(t,x,v,p,A,u)\in\lbrack0,T]\times\mathbb{R}^{n}\times\mathbb{R}\times
\mathbb{R}^{n}\times\mathbb{S}^{n}\times U.
\end{array}
\]
We adopt the approach in Barles, Buckdahn and Pardoux \cite{Baeles-BP} (see
also Wu and Yu \cite{Wu-Y}) to prove the uniqueness of the viscosity solution
to (\ref{eq-hjb-yz}) in the following theorem. Note that applying the approach
in \cite{Baeles-BP}, Buckdahn and Li \cite{Buckdahn-Li} studied a decoupled
case and Wu and Yu \cite{Wu-Y} obtained the uniqueness result for coefficients
which are independent of $u$.
\begin{theorem}
\label{th-vis-uni}Suppose that $\sigma$ is independent of $y$ and $z$ and
Assumption \ref{assum-1} (i) holds. Then there exists at most one viscosity
solution to (\ref{eq-hjb-yz}) in the class of continuous functions which are
Lipschitz continuous with respect to $x$.
\end{theorem}
See Subsection \ref{subse-pf} in Appendix for the details.
\subsection{$\sigma$ depends on $y$ and $z$}
Wu and Yu \cite{Wu-Y} studied a PDE system for which the coefficient $\sigma$
of the corresponding FBSDE satisfies $\sigma\left( t,x,y,0\right) =0$. Under
this assumption, the fully coupled FBSDE degenerates to a forward-backward
ordinary differential equation and the PDE system degenerates to a first order
PDE. Thus, for this case, the uniqueness result is implied by Theorem
\ref{th-vis-uni}.
In this subsection, we study the HJB equation in which $\sigma$ is dependent
on $y$ and $z$. As pointed out in Remark \ref{re-appen}, the method in
\cite{Baeles-BP} does not work. We first give the following proposition.
\begin{proposition}
\label{pr-um}Suppose $\sigma$ is independent of $y$ and $z$; and one of the
following two conditions holds true:
\begin{description}
\item[(i)] Assumption \ref{assum-1} holds;
\item[(ii)] Assumptions \ref{assum-1} (i) and \ref{assm-mon} hold.
\end{description}
\noindent Let $W$ be the value function. Then, for each $(t,x)\in
\lbrack0,T)\times\mathbb{R}^{n}$, we can find a sequence $u^{m}\in
\mathcal{U}^{t}[t,T]$ such that
\[
\mathbb{E}\left[ {\int}_{t}^{T}\left\vert Y_{s}^{t,x;u^{m}}-W\left(
s,X_{s}^{t,x;u^{m}}\right) \right\vert ^{2}ds\right] +\left\vert
Y_{t}^{t,x;u^{m}}-W\left( t,x\right) \right\vert \rightarrow0\text{ as
}m\rightarrow\infty.
\]
\end{proposition}
\begin{proof}
We only prove the first case, i.e., condition (i) holds. The proof for the
second case is similar. The proof is divided into three steps.
\textbf{Step 1.} For each given integer $m\geq1$, set $t_{i}^{m}
=t+i(T-t)m^{-1}$ for $i=0,\ldots,m$. We want to choose a $u^{i,m}
\in\mathcal{U}^{t}[t_{i}^{m},t_{i+1}^{m}]$ such that
\begin{equation}
|Y_{t_{i}^{m}}^{i,m}-W\left( t_{i}^{m},X_{t_{i}^{m}}^{i-1,m}\right)
|\leq\frac{1}{m^{2}(2C+1)^{m}}\text{ for }i=0,\ldots,m-1, \label{new-lx-2}
\end{equation}
where $C$ is given in Lemma \ref{le-w-new}, $X_{t_{0}^{m}}^{-1,m}=x$ and
$(X^{i,m},Y^{i,m},Z^{i,m})$ is the solution to the following FBSDE:
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}^{i,m}= & b(s,X_{s}^{i,m},Y_{s}^{i,m},Z_{s}^{i,m},u_{s}^{i,m}
)ds+\sigma(s,X_{s}^{i,m},u_{s}^{i,m})dB_{s},\\
dY_{s}^{i,m}= & -g(s,X_{s}^{i,m},Y_{s}^{i,m},Z_{s}^{i,m},u_{s}^{i,m}
)ds+Z_{s}^{i,m}dB_{s},\text{ }s\in\lbrack t_{i}^{m},t_{i+1}^{m}],\\
X_{t_{i}^{m}}^{i,m}= & X_{t_{i}^{m}}^{i-1,m},\ Y_{t_{i+1}^{m}}^{i,m}
=W(t_{i+1}^{m},X_{t_{i+1}^{m}}^{i,m}).
\end{array}
\right. \label{new-lx-3}
\end{equation}
Precisely, on the interval $[t_{0}^{m},t_{1}^{m}]$, by Theorem \ref{th-ddp},
we can choose a $u^{0,m}\in\mathcal{U}^{t}[t_{0}^{m},t_{1}^{m}]$ such that
\[
\left\vert Y_{t_{0}^{m}}^{0,m}-W\left( t_{0}^{m},x\right) \right\vert
\leq\frac{1}{m^{2}(2C+1)^{m+1}},
\]
where $(X^{0,m},Y^{0,m},Z^{0,m})$ is the solution to FBSDE (\ref{new-lx-3})
for $i=0$. On the interval $[t_{1}^{m},t_{2}^{m}]$, we first choose a
partition $\{A_{j}^{m}:j\geq1\}$ of $\mathbb{R}^{n}$ and $x_{j}^{m}
\in\mathbb{R}^{n}$ such that
\[
\left\vert \sum_{j=1}^{\infty}x_{j}^{m}I_{A_{j}^{m}}(X_{t_{1}^{m}}
^{0,m})-X_{t_{1}^{m}}^{0,m}\right\vert \leq\frac{1}{m^{2}(2C+1)^{m+1}}.
\]
By Theorem \ref{th-ddp}, for each $x_{j}^{m}$, we can choose a $u^{1,j,m}
\in\mathcal{U}^{t}[t_{1}^{m},t_{2}^{m}]$ such that
\[
\left\vert Y_{t_{1}^{m}}^{1,j,m}-W\left( t_{1}^{m},x_{j}^{m}\right)
\right\vert \leq\frac{1}{m^{2}(2C+1)^{m+1}},
\]
where $(X^{1,j,m},Y^{1,j,m},Z^{1,j,m})$ is the solution to the following
FBSDE
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}^{1,j,m}= & b(s,X_{s}^{1,j,m},Y_{s}^{1,j,m},Z_{s}^{1,j,m},u_{s}
^{1,j,m})ds+\sigma(s,X_{s}^{1,j,m},u_{s}^{1,j,m})dB_{s},\\
dY_{s}^{1,j,m}= & -g(s,X_{s}^{1,j,m},Y_{s}^{1,j,m},Z_{s}^{1,j,m},u_{s}
^{1,j,m})ds+Z_{s}^{1,j,m}dB_{s},\text{ }s\in\lbrack t_{1}^{m},t_{2}^{m}],\\
X_{t_{1}^{m}}^{1,j,m}= & x_{j}^{m},\ Y_{t_{2}^{m}}^{1,j,m}=W(t_{2}
^{m},X_{t_{2}^{m}}^{1,j,m}).
\end{array}
\right. \label{new-lx-4}
\end{equation}
Set
\begin{equation}
u_{s}^{1,m}=\sum_{j=1}^{\infty}I_{A_{j}^{m}}(X_{t_{1}^{m}}^{0,m})u_{s}
^{1,j,m}\in\mathcal{U}^{t}[t_{1}^{m},t_{2}^{m}]\;\text{for }s\in\lbrack
t_{1}^{m},t_{2}^{m}].
\end{equation}
By Theorem 2.2 in \cite{Hu-JX} for (\ref{new-lx-3}) and (\ref{new-lx-4}),
\[
\left\vert Y_{t_{1}^{m}}^{1,m}-Y_{t_{1}^{m}}^{1,j,m}\right\vert I_{A_{j}^{m}
}(X_{t_{1}^{m}}^{0,m})\leq C\left\vert X_{t_{1}^{m}}^{0,m}-x_{j}
^{m}\right\vert I_{A_{j}^{m}}(X_{t_{1}^{m}}^{0,m}).
\]
Thus, by Lemma \ref{le-w-new},
\[
\begin{array}
[c]{l}
\left\vert Y_{t_{1}^{m}}^{1,m}-W\left( t_{1}^{m},X_{t_{1}^{m}}^{0,m}\right)
\right\vert \\
\leq\left\vert Y_{t_{1}^{m}}^{1,m}-\sum_{j=1}^{\infty}Y_{t_{1}^{m}}
^{1,j,m}I_{A_{j}^{m}}(X_{t_{1}^{m}}^{0,m})\right\vert +\left\vert \sum
_{j=1}^{\infty}Y_{t_{1}^{m}}^{1,j,m}I_{A_{j}^{m}}(X_{t_{1}^{m}}^{0,m}
)-W\left( t_{1}^{m},X_{t_{1}^{m}}^{0,m}\right) \right\vert \\
\leq C\left\vert X_{t_{1}^{m}}^{0,m}-\sum_{j=1}^{\infty}x_{j}^{m}I_{A_{j}^{m
}(X_{t_{1}^{m}}^{0,m})\right\vert +\frac{1}{m^{2}(2C+1)^{m+1}}\\
\ \ +\left\vert W\left( t_{1}^{m},\sum_{j=1}^{\infty}x_{j}^{m}I_{A_{j}^{m
}(X_{t_{1}^{m}}^{0,m})\right) -W\left( t_{1}^{m},X_{t_{1}^{m}}^{0,m}\right)
\right\vert \\
\leq\frac{1}{m^{2}(2C+1)^{m}}.
\end{array}
\]
Similarly, we can choose the desired $u^{i,m}\in\mathcal{U}^{t}[t_{i}
^{m},t_{i+1}^{m}]$ for $i=2,\ldots,m-1$.
\textbf{Step 2.} Set
\[
\begin{array}
[c]{l}
u_{s}^{m}=\sum_{i=0}^{m-1}u_{s}^{i,m}I_{[t_{i}^{m},t_{i+1}^{m})}(s),\ X_{s}
^{m}=\sum_{i=0}^{m-1}X_{s}^{i,m}I_{[t_{i}^{m},t_{i+1}^{m})}(s),\\
Y_{s}^{m}=\sum_{i=0}^{m-1}Y_{s}^{i,m}I_{[t_{i}^{m},t_{i+1}^{m})}(s),\ Z_{s}
^{m}=\sum_{i=0}^{m-1}Z_{s}^{i,m}I_{[t_{i}^{m},t_{i+1}^{m})}(s),
\end{array}
\]
where $u^{i,m}$, $X^{i,m}$, $Y^{i,m}$ and $Z^{i,m}$ are given in Step 1. It is
easy to check that $X^{m}$ satisfies the following SDE
\[
\left\{
\begin{array}
[c]{rl}
dX_{s}^{m}= & b(s,X_{s}^{m},Y_{s}^{m},Z_{s}^{m},u_{s}^{m})ds+\sigma
(s,X_{s}^{m},u_{s}^{m})dB_{s},\\
X_{t}^{m}= & x,\text{ }s\in\lbrack t,T].
\end{array}
\right.
\]
Let $(X^{m},\tilde{Y}^{m},\tilde{Z}^{m})$ be the solution to the following
decoupled FBSDE
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}^{m}= & b(s,X_{s}^{m},Y_{s}^{m},Z_{s}^{m},u_{s}^{m})ds+\sigma
(s,X_{s}^{m},u_{s}^{m})dB_{s},\\
d\tilde{Y}_{s}^{m}= & -g(s,X_{s}^{m},\tilde{Y}_{s}^{m},\tilde{Z}_{s}^{m}
,u_{s}^{m})ds+\tilde{Z}_{s}^{m}dB_{s},\text{ }s\in\lbrack t,T],\\
X_{t}^{m}= & x,\ \tilde{Y}_{T}^{m}=\phi(X_{T}^{m}).
\end{array}
\right. \label{new-lx-8}
\end{equation}
In the following, we want to prove
\begin{equation}
\begin{array}
[c]{rl}
& \mathbb{E}\left[ {\int}_{t}^{T}\left( \left\vert \tilde{Y}_{s}
^{m}-W\left( s,X_{s}^{m}\right) \right\vert ^{2}+|\tilde{Y}_{s}^{m}
-Y_{s}^{m}|^{2}+|\tilde{Z}_{s}^{m}-Z_{s}^{m}|^{2}\right) ds\right] \\
& \ \ +|\tilde{Y}_{t}^{m}-W\left( t,x\right) |\rightarrow0\text{ as
}m\rightarrow\infty.
\end{array}
\label{new-lx-9}
\end{equation}
Note that $(X^{m},\tilde{Y}^{m},\tilde{Z}^{m})$ satisfies the following FBSDE
on $[t_{i}^{m},t_{i+1}^{m}]$
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}^{m}= & b(s,X_{s}^{m},Y_{s}^{m},Z_{s}^{m},u_{s}^{m})ds+\sigma
(s,X_{s}^{m},u_{s}^{m})dB_{s},\\
d\tilde{Y}_{s}^{m}= & -g(s,X_{s}^{m},\tilde{Y}_{s}^{m},\tilde{Z}_{s}^{m}
,u_{s}^{m})ds+\tilde{Z}_{s}^{m}dB_{s},\text{ }s\in\lbrack t_{i}^{m}
,t_{i+1}^{m}],\\
X_{t_{i}^{m}}^{m}= & X_{t_{i}^{m}}^{i-1,m},\ \tilde{Y}_{t_{i+1}^{m}}
^{m}=\tilde{Y}_{t_{i+1}^{m}}^{m}.
\end{array}
\right. \label{new-lx-10}
\end{equation}
Then by Theorem 2.2 in \cite{Hu-JX} for FBSDEs (\ref{new-lx-3}) and
(\ref{new-lx-10}),
\begin{equation}
\begin{array}
[c]{rl}
& \mathbb{E}\left[ \left. \sup\limits_{t_{i}^{m}\leq s\leq t_{i+1}^{m
}\left\vert \tilde{Y}_{s}^{m}-Y_{s}^{m}\right\vert ^{2}+\int_{t_{i}^{m
}^{t_{i+1}^{m}}|\tilde{Z}_{s}^{m}-Z_{s}^{m}|^{2}ds\right\vert \mathcal{F}
_{t_{i}^{m}}\right] \\
& \leq C^{2}\mathbb{E}\left[ \left. \left\vert W(t_{i+1}^{m},X_{t_{i+1}^{m
}^{m})-\tilde{Y}_{t_{i+1}^{m}}^{m}\right\vert ^{2}\right\vert \mathcal{F}
_{t_{i}^{m}}\right] ,
\end{array}
\label{new-lx-11}
\end{equation}
where $C$ is the same as in Step 1. It yields that
\begin{equation}
\begin{array}
[c]{rl}
& \left\vert \tilde{Y}_{t_{i}^{m}}^{m}-W(t_{i}^{m},X_{t_{i}^{m}}
^{m})\right\vert \\
& \leq\left\vert Y_{t_{i}^{m}}^{m}-W(t_{i}^{m},X_{t_{i}^{m}}^{m})\right\vert
+C\left\{ \mathbb{E}\left[ \left. \left\vert W(t_{i+1}^{m},X_{t_{i+1}^{m
}^{m})-\tilde{Y}_{t_{i+1}^{m}}^{m}\right\vert ^{2}\right\vert \mathcal{F}
_{t_{i}^{m}}\right] \right\} ^{\frac{1}{2}}.
\end{array}
\label{new-lx-12}
\end{equation}
Recall (\ref{new-lx-2}), (\ref{new-lx-12}), $W(t_{m}^{m},X_{t_{m}^{m}}
^{m})=\phi(X_{T}^{m})$ and Lemma \ref{le-w-new}. By estimating
recursively, we obtain
\begin{equation}
\left\vert \tilde{Y}_{t_{i}^{m}}^{m}-W(t_{i}^{m},X_{t_{i}^{m}}^{m})\right\vert
\leq\frac{1+C+\cdots+C^{m-i-1}}{m^{2}(2C+1)^{m}}\leq\frac{1}{m}.
\label{new-lx-13}
\end{equation}
Combining (\ref{new-lx-11}) and (\ref{new-lx-13}), we get
\begin{equation}
\begin{array}
[c]{rl}
\mathbb{E}\left[ {\int}_{t}^{T}\left( |\tilde{Y}_{s}^{m}-Y_{s}^{m}
|^{2}+|\tilde{Z}_{s}^{m}-Z_{s}^{m}|^{2}\right) ds\right] & \leq C^{2}
\sum\limits_{i=0}^{m-1}\mathbb{E}\left[ \left\vert W(t_{i+1}^{m}
,X_{t_{i+1}^{m}}^{m})-\tilde{Y}_{t_{i+1}^{m}}^{m}\right\vert ^{2}\right] \\
& \leq\frac{C^{2}}{m}.
\end{array}
\label{new-lx-14}
\end{equation}
By Theorem 2.2 in \cite{Hu-JX} for FBSDE (\ref{new-lx-8}) and
(\ref{new-lx-14}), we obtain that
\begin{equation}
\mathbb{E}\left[ \sup\limits_{t\leq s\leq T}|X_{s}^{m}|^{2}+{\int}_{t}
^{T}\left( |Y_{s}^{m}|^{2}+|Z_{s}^{m}|^{2}+|\tilde{Y}_{s}^{m}|^{2}+|\tilde
{Z}_{s}^{m}|^{2}\right) ds\right] \leq C^{\prime}(1+|x|^{2}),
\label{new-lx-15}
\end{equation}
where $C^{\prime}$ is a constant which is independent of $m$. For $s\in\lbrack
t_{i}^{m},t_{i+1}^{m}]$, by (\ref{new-lx-15}) and Lemma \ref{le-w-new}, we
have
\begin{align*}
\mathbb{E}\left[ \left\vert \tilde{Y}_{s}^{m}-\tilde{Y}_{t_{i+1}^{m}}
^{m}\right\vert ^{2}\right] & =\mathbb{E}\left[ \left\vert \mathbb{E}
\left[ \left. \int_{s}^{t_{i+1}^{m}}g(r,X_{r}^{m},\tilde{Y}_{r}^{m}
,\tilde{Z}_{r}^{m},u_{r}^{m})dr\right\vert \mathcal{F}_{s}\right] \right\vert
^{2}\right] \\
& \leq\frac{T}{m}\mathbb{E}\left[ \int_{t_{i}^{m}}^{t_{i+1}^{m}}
|g(r,X_{r}^{m},\tilde{Y}_{r}^{m},\tilde{Z}_{r}^{m},u_{r}^{m})|^{2}dr\right] \\
& \leq\bar{C}(1+|x|^{2})\frac{1}{m},
\end{align*}
\[
\begin{array}
[c]{l}
\mathbb{E}\left[ \left\vert X_{s}^{m}-X_{t_{i+1}^{m}}^{m}\right\vert
^{2}\right] \\
\leq2\mathbb{E}\left[ \left( \int_{t_{i}^{m}}^{t_{i+1}^{m}}|b(r,X_{r}
^{m},Y_{r}^{m},Z_{r}^{m},u_{r}^{m})|dr\right) ^{2}+\int_{t_{i}^{m}}
^{t_{i+1}^{m}}|\sigma(r,X_{r}^{m},u_{r}^{m})|^{2}dr\right] \\
\leq\frac{2T}{m}\mathbb{E}\left[ \int_{t_{i}^{m}}^{t_{i+1}^{m}}|b(r,X_{r}
^{m},Y_{r}^{m},Z_{r}^{m},u_{r}^{m})|^{2}dr\right] +\frac{2T}{m}
\mathbb{E}\left[ \left( L+L_{1}\sup\limits_{t_{i}^{m}\leq r\leq t_{i+1}^{m}
}|X_{r}^{m}|\right) ^{2}\right] \\
\leq\bar{C}(1+|x|^{2})\frac{1}{m},
\end{array}
\]
\[
\begin{array}
[c]{rl}
& \mathbb{E}\left[ \left\vert \tilde{Y}_{s}^{m}-W\left( s,X_{s}^{m}\right)
\right\vert ^{2}\right] \\
& \leq\mathbb{E}\left[ \left\vert \tilde{Y}_{s}^{m}-\tilde{Y}_{t_{i+1}^{m}
}^{m}+\tilde{Y}_{t_{i+1}^{m}}^{m}-W(t_{i+1}^{m},X_{t_{i+1}^{m}}^{m}
)+W(t_{i+1}^{m},X_{t_{i+1}^{m}}^{m})-W\left( s,X_{s}^{m}\right) \right\vert
^{2}\right] \\
& \leq\bar{C}(1+|x|^{2})\frac{1}{m},
\end{array}
\]
where $\bar{C}$ is a constant which is independent of $m$. Thus
\begin{equation}
\begin{array}
[c]{rl}
\mathbb{E}\left[ {\int}_{t}^{T}\left\vert \tilde{Y}_{s}^{m}-W\left(
s,X_{s}^{m}\right) \right\vert ^{2}ds\right] & \leq\sum_{i=0}^{m-1}
\int_{t_{i}^{m}}^{t_{i+1}^{m}}\mathbb{E}\left[ \left\vert \tilde{Y}_{s}
^{m}-W\left( s,X_{s}^{m}\right) \right\vert ^{2}\right] ds\\
& \leq\bar{C}(1+|x|^{2})\frac{T}{m}.
\end{array}
\label{new-lx-16}
\end{equation}
Then we obtain (\ref{new-lx-9}) by (\ref{new-lx-13}), (\ref{new-lx-14}) and
(\ref{new-lx-16}).
\textbf{Step 3.} Note that $(X^{t,x;u^{m}},Y^{t,x;u^{m}},Z^{t,x;u^{m}})$
satisfies the following FBSDE
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}^{t,x;u^{m}}= & b(s,X_{s}^{t,x;u^{m}},Y_{s}^{t,x;u^{m}},Z_{s}
^{t,x;u^{m}},u_{s}^{m})ds+\sigma(s,X_{s}^{t,x;u^{m}},u_{s}^{m})dB_{s},\\
dY_{s}^{t,x;u^{m}}= & -g(s,X_{s}^{t,x;u^{m}},Y_{s}^{t,x;u^{m}},Z_{s}
^{t,x;u^{m}},u_{s}^{m})ds+Z_{s}^{t,x;u^{m}}dB_{s},\text{ }s\in\lbrack t,T],\\
X_{t}^{t,x;u^{m}}= & x,\ Y_{T}^{t,x;u^{m}}=\phi(X_{T}^{t,x;u^{m}}).
\end{array}
\right. \label{new-lx-6}
\end{equation}
Then, by Theorem 2.2 in \cite{Hu-JX} for FBSDEs (\ref{new-lx-8}) and
(\ref{new-lx-6}), we obtain
\begin{equation}
\begin{array}
[c]{l}
\mathbb{E}\left[ \sup\limits_{t\leq s\leq T}\left( |X_{s}^{m}-X_{s}
^{t,x;u^{m}}|^{2}+|\tilde{Y}_{s}^{m}-Y_{s}^{t,x;u^{m}}|^{2}\right) +{\int
}_{t}^{T}|\tilde{Z}_{s}^{m}-Z_{s}^{t,x;u^{m}}|^{2}ds\right] \\
\leq\tilde{C}\mathbb{E}\left[ {\int}_{t}^{T}\left( |\tilde{Y}_{s}^{m}
-Y_{s}^{m}|^{2}+|\tilde{Z}_{s}^{m}-Z_{s}^{m}|^{2}\right) ds\right] ,
\end{array}
\label{new-lx-7}
\end{equation}
where $\tilde{C}$ is a constant which is independent of $m$. Due to
\[
\begin{array}
[c]{rl}
\left\vert Y_{s}^{t,x;u^{m}}-W\left( s,X_{s}^{t,x;u^{m}}\right) \right\vert
& \leq\left\vert Y_{s}^{t,x;u^{m}}-\tilde{Y}_{s}^{m}\right\vert +\left\vert
\tilde{Y}_{s}^{m}-W\left( s,X_{s}^{m}\right) \right\vert \\
& \ \ +\left\vert W\left( s,X_{s}^{m}\right) -W\left( s,X_{s}^{t,x;u^{m
}\right) \right\vert ,
\end{array}
\]
then, by (\ref{new-lx-9}), (\ref{new-lx-7}) and Lemma \ref{le-w-new}, we get
\begin{equation}
\mathbb{E}\left[ {\int}_{t}^{T}\left\vert Y_{s}^{t,x;u^{m}}-W\left(
s,X_{s}^{t,x;u^{m}}\right) \right\vert ^{2}ds\right] +\left\vert
Y_{t}^{t,x;u^{m}}-W\left( t,x\right) \right\vert \rightarrow0\text{ as
}m\rightarrow\infty. \label{new-lx-5}
\end{equation}
This completes the proof.
\end{proof}
We first give a uniqueness result when $\sigma$ is independent of $z$.
\begin{theorem}
\label{th-um1} Suppose $\sigma$ is independent of $z$, $\tilde{W}$ is a
viscosity solution to HJB equation (\ref{eq-hjb}); and one of the following
two conditions holds true:
\begin{description}
\item[(i)] Assumption \ref{assum-1} holds;
\item[(ii)] Assumptions \ref{assum-1} (i) and \ref{assm-mon} hold. Moreover,
\[
\tilde{A}\left( t,x,y,z,u\right) =\left( -G^{\intercal}g\left(
t,x,y,z,u\right) , Gb\left( t,x,y,z,u\right) , G\sigma( t,x,\tilde
{W}\left( t,x\right) ,u)\right) ^{\intercal}
\]
satisfies Assumption \ref{assm-mon}.
\end{description}
\noindent Let $W$ be the value function. Furthermore, we assume that
$\tilde{W}$ is Lipschitz continuous in $x$. Then $W\leq\tilde{W}$.
\end{theorem}
\begin{proof}
We only prove the first case, i.e., condition (i) holds. The proof for the
second case is similar. Consider the following HJB equation
\begin{equation}
\left\{
\begin{array}
[c]{l}
\partial_{t}F(t,x)+\inf\limits_{u\in U}H(t,x,F(t,x),DF(t,x),D^{2}
F(t,x),u)=0,\\
F(T,x)=\phi(x),
\end{array}
\right. \label{new-lx-17}
\end{equation}
where
\[
\begin{array}
[c]{l}
H(t,x,v,p,A,u)\\
=\frac{1}{2}\mathrm{tr}[\sigma\sigma^{\intercal}(t,x,\tilde{W}
(t,x),u)A]+p^{\intercal}b(t,x,v,p^{\intercal}\sigma(t,x,\tilde{W
}(t,x),u),u)\\
\ \ +g(t,x,v,p^{\intercal}\sigma(t,x,\tilde{W}(t,x),u),u),\\
(t,x,v,p,A,u)\in\lbrack0,T]\times\mathbb{R}^{n}\times\mathbb{R}\times
\mathbb{R}^{n}\times\mathbb{S}^{n}\times U.
\end{array}
\]
and $F\left( \cdot,\cdot\right) :[0,T]\times\mathbb{R}^{n}\rightarrow
\mathbb{R}. $ Note that $\tilde{\sigma}(t,x,u):=\sigma(t,x,\tilde{W}(t,x),u) $
is Lipschitz continuous in $x$. By the definition of viscosity solution, it is
easy to verify that $\tilde{W}$ is also a viscosity solution to HJB equation
(\ref{new-lx-17}). Since $\tilde{\sigma}$ is independent of $(y,z)$, by
Theorems \ref{th-vis} and \ref{th-vis-uni}, $\tilde{W}$ is the value function
of the following optimization problem:
\[
\underset{u(\cdot)\in\mathcal{U}^{t}[t,T]}{\inf}\bar{Y}_{t}^{t,x;u},
\]
where the controlled system is
\begin{equation}
\left\{
\begin{array}
[c]{rl}
d\bar{X}_{s}^{t,x;u}= & b(s,\bar{X}_{s}^{t,x;u},\bar{Y}_{s}^{t,x;u},\bar
{Z}_{s}^{t,x;u},u_{s})ds+\sigma(s,\bar{X}_{s}^{t,x;u},\tilde{W}(s,\bar{X}
_{s}^{t,x;u}),u_{s})dB_{s},\\
d\bar{Y}_{s}^{t,x;u}= & -g(s,\bar{X}_{s}^{t,x;u},\bar{Y}_{s}^{t,x;u},\bar
{Z}_{s}^{t,x;u},u_{s})ds+\bar{Z}_{s}^{t,x;u}dB_{s},\;s\in\lbrack t,T],\\
\bar{X}_{t}^{t,x;u}= & x,\ \bar{Y}_{T}^{t,x;u}=\phi(\bar{X}_{T}^{t,x;u}).
\end{array}
\right. \label{new-lx-18}
\end{equation}
For each fixed $(t,x)\in\lbrack0,T)\times\mathbb{R}^{n}$, by Proposition
\ref{pr-um}, we can find a sequence $u^{m}\in\mathcal{U}^{t}[t,T]$ such that
\begin{equation}
\mathbb{E}\left[ {\int}_{t}^{T}\left\vert \bar{Y}_{s}^{t,x;u^{m}}-\tilde{W}
\left( s,\bar{X}_{s}^{t,x;u^{m}}\right) \right\vert ^{2}ds\right]
+\left\vert \bar{Y}_{t}^{t,x;u^{m}}-\tilde{W}\left( t,x\right) \right\vert
\rightarrow0\text{ as }m\rightarrow\infty. \label{new-le-21}
\end{equation}
Consider the following FBSDE
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}^{t,x;u^{m}}= & b(s,X_{s}^{t,x;u^{m}},Y_{s}^{t,x;u^{m}},Z_{s}
^{t,x;u^{m}},u_{s}^{m})ds\\
& +\sigma(s,X_{s}^{t,x;u^{m}},Y_{s}^{t,x;u^{m}},u_{s}^{m})dB_{s},\\
dY_{s}^{t,x;u^{m}}= & -g(s,X_{s}^{t,x;u^{m}},Y_{s}^{t,x;u^{m}},Z_{s}
^{t,x;u^{m}},u_{s}^{m})ds+Z_{s}^{t,x;u^{m}}dB_{s},\;s\in\lbrack t,T],\\
X_{t}^{t,x;u^{m}}= & x,\ Y_{T}^{t,x;u^{m}}=\phi(X_{T}^{t,x;u^{m}}).
\end{array}
\right. \label{new-lx-19}
\end{equation}
By Theorem 2.2 in \cite{Hu-JX} for FBSDEs (\ref{new-lx-18}) and
(\ref{new-lx-19}),
\begin{equation}
\begin{array}
[c]{l}
|Y_{t}^{t,x;u^{m}}-\bar{Y}_{t}^{t,x;u^{m}}|\\
\leq C\left\{ \mathbb{E}\left[ \int_{t}^{T}\left\vert \sigma(s,\bar{X}
_{s}^{t,x;u^{m}},\bar{Y}_{s}^{t,x;u^{m}},u_{s}^{m})-\sigma(s,\bar{X}
_{s}^{t,x;u^{m}},\tilde{W}(s,\bar{X}_{s}^{t,x;u^{m}}),u_{s}^{m})\right\vert
^{2}ds\right] \right\} ^{\frac{1}{2}}\\
\leq C\left\{ \mathbb{E}\left[ {\int}_{t}^{T}\left\vert \bar{Y}
_{s}^{t,x;u^{m}}-\tilde{W}\left( s,\bar{X}_{s}^{t,x;u^{m}}\right)
\right\vert ^{2}ds\right] \right\} ^{\frac{1}{2}}.
\end{array}
\label{new-lx-20}
\end{equation}
Since $Y_{t}^{t,x;u^{m}}\geq W(t,x)$ for any $m\geq1$, we get $W(t,x)\leq
\tilde{W}(t,x)$ by (\ref{new-le-21}) and (\ref{new-lx-20}).
\end{proof}
Now we study the case in which $\sigma$ is dependent on $y$ and $z$.
\begin{theorem}
\label{th-um2} Suppose one of the following two conditions holds true:
\begin{description}
\item[(i)] Assumptions \ref{assum-1} and \ref{assum-l3} hold;
\item[(ii)] Assumptions \ref{assum-1} (i), \ref{assum-l3} and \ref{assm-mon} hold.
\end{description}
\noindent Let $W$ be the value function and $\tilde{W}$ be a viscosity
solution to HJB equation (\ref{eq-hjb}). Furthermore, we assume that
$\tilde{W}$ is Lipschitz continuous in $(t,x)$, $D\tilde{W}$ is Lipschitz
continuous in $x$ and $||D\tilde{W}||_{\infty}L_{3}<1$. Then $W\leq\tilde{W}$.
\end{theorem}
\begin{proof}
We only prove the first case, i.e., condition (i) holds. The proof for the
second case is similar. Consider the following HJB equation
\begin{equation}
\label{new-lx-21}\partial_{t}F(t,x)+\inf\limits_{u\in U}
H(t,x,F(t,x),DF(t,x),D^{2}F(t,x),u)=0,\ F(T,x)=\phi(x),
\end{equation}
where
\[
\begin{array}
[c]{l}
H(t,x,v,p,A,u)\\
=\frac{1}{2}\mathrm{tr}[\sigma\sigma^{\intercal}(t,x,\tilde{W}(t,x),\tilde{V
}(t,x,u),u)A]+p^{\intercal}b(t,x,\tilde{W}(t,x),\tilde{V}(t,x,u),u)\\
\text{ \ \ }+g(t,x,v,p^{\intercal}\sigma(t,x,\tilde{W}(t,x),\tilde{V
}(t,x,u),u),u),\\
\tilde{V}(t,x,u)=D\tilde{W}(t,x)^{\intercal}\sigma(t,x,\tilde{W}
(t,x),\tilde{V}(t,x,u),u),\\
(t,x,v,p,A,u)\in\lbrack0,T]\times\mathbb{R}^{n}\times\mathbb{R}\times
\mathbb{R}^{n}\times\mathbb{S}^{n}\times U
\end{array}
\]
and $F\left( \cdot,\cdot\right) :[0,T]\times\mathbb{R}^{n}\rightarrow
\mathbb{R}. $
Recall that for given $\tilde{W}$, there exists a unique solution $\tilde{V
}(t,x,u)$ to the above algebraic equation by Lemma \ref{le-h}.
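As a consistency check (a sketch, using the Lipschitz constant $L_{3}$ of
$\sigma$ in $z$ and the assumption $||D\tilde{W}||_{\infty}L_{3}<1$ from
Theorem \ref{th-um2}): if $\tilde{V}$ and $\tilde{V}^{\prime}$ both solve the
above algebraic equation at $(t,x,u)$, then
\[
\vert\tilde{V}-\tilde{V}^{\prime}\vert=\vert D\tilde{W}(t,x)^{\intercal
}(\sigma(t,x,\tilde{W}(t,x),\tilde{V},u)-\sigma(t,x,\tilde{W}(t,x),\tilde
{V}^{\prime},u))\vert\leq||D\tilde{W}||_{\infty}L_{3}\vert\tilde{V}-\tilde
{V}^{\prime}\vert,
\]
which forces $\tilde{V}=\tilde{V}^{\prime}$, so $\tilde{V}(t,x,u)$ is indeed
well defined.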
Note that
\[
\begin{array}
[c]{rl}
\tilde{b}(t,x,u):= b(t,x,\tilde{W}(t,x),\tilde{V}(t,x,u),u),\ \tilde{\sigma
}(t,x,u):= \sigma(t,x,\tilde{W}(t,x),\tilde{V}(t,x,u),u) &
\end{array}
\]
satisfy the following conditions:
\begin{equation}
\begin{array}
[c]{l}
| \tilde{b}(t,x,u)| +\left\vert \tilde{\sigma}(t,x,u)\right\vert \leq
C(1+\left\vert x\right\vert ),\\
| \tilde{b}(t,x,u)-\tilde{b}(t,x^{\prime},u)| +\left\vert \tilde{\sigma
}(t,x,u)-\tilde{\sigma}(t,x^{\prime},u)\right\vert \leq C(1+\left\vert
x\right\vert +\left\vert x^{\prime}\right\vert )|x-x^{\prime}|.
\end{array}
\label{new-lx-30}
\end{equation}
By the definition of viscosity solution, $\tilde{W}$ is also a viscosity
solution to HJB equation (\ref{new-lx-21}). Consider the following controlled
system
\begin{equation}
\left\{
\begin{array}
[c]{rl}
d\bar{X}_{s}^{t,x;u}= & b(s,\bar{X}_{s}^{t,x;u},\tilde{W}(s,\bar{X}
_{s}^{t,x;u}),\tilde{V}(s,\bar{X}_{s}^{t,x;u},u_{s}),u_{s})ds\\
& +\sigma(s,\bar{X}_{s}^{t,x;u},\tilde{W}(s,\bar{X}_{s}^{t,x;u}),\tilde{V
}(s,\bar{X}_{s}^{t,x;u},u_{s}),u_{s})dB_{s},\\
d\bar{Y}_{s}^{t,x;u}= & -g(s,\bar{X}_{s}^{t,x;u},\bar{Y}_{s}^{t,x;u},\bar
{Z}_{s}^{t,x;u},u_{s})ds+\bar{Z}_{s}^{t,x;u}dB_{s},\;s\in\lbrack t,T],\\
\bar{X}_{t}^{t,x;u}= & x,\ \bar{Y}_{T}^{t,x;u}=\phi(\bar{X}_{T}^{t,x;u}).
\end{array}
\right. \label{new-lx-22}
\end{equation}
By Proposition 3.28 in \cite{Pardoux-book}, the FBSDE (\ref{new-lx-22}) has a
unique solution $(\bar{X}^{t,x;u},\bar{Y}^{t,x;u},\bar{Z}^{t,x;u})\in
L_{\mathcal{F}}^{p}(\Omega;C([t,T];\mathbb{R}^{n}))\times L_{\mathcal{F}}
^{p}(\Omega;C([t,T];\mathbb{R}))\times L_{\mathcal{F}}^{2,p}(t,T;\mathbb{R}
^{1\times d})$ for any $p\geq2$. Since $\tilde{b}$ and $\tilde{\sigma
}$ are independent of $(y,z)$, we can obtain that $\tilde{W}$ is the value
function of the above controlled system (\ref{new-lx-22}). For each fixed
$(t,x)\in\lbrack0,T)\times\mathbb{R}^{n}$, by Proposition \ref{pr-um}, we can
find a sequence $u^{m}\in\mathcal{U}^{t}[t,T]$ such that
\begin{equation}
\mathbb{E}\left[ {\int}_{t}^{T}\left\vert \bar{Y}_{s}^{t,x;u^{m}}-\tilde{W
}\left( s,\bar{X}_{s}^{t,x;u^{m}}\right) \right\vert ^{2}ds\right]
+\left\vert \bar{Y}_{t}^{t,x;u^{m}}-\tilde{W}\left( t,x\right) \right\vert
\rightarrow0\text{ as }m\rightarrow\infty. \label{new-lx-23}
\end{equation}
Let $\vartheta(t,x):\mathbb{R\times R}^{n}\rightarrow\mathbb{R}$\ be a
non-negative smooth function such that its support is included in the unit
ball and $\int_{\mathbb{R\times R}^{n}}\vartheta\left( t,x\right) dxdt=1$.
For Lipschitz functions $\tilde{W}:[0,T]\times\mathbb{R}^{n}\rightarrow
\mathbb{R}$, we set for $(t,x)\in\mathbb{R\times R}^{n}\text{, }\epsilon>0,$
\[
\tilde{W}_{\epsilon}\left( t,x\right) =\epsilon^{-(n+1)}\int_{\mathbb{R\times R}^{n}}\tilde{W}\left( \left[ \left( t-t^{\prime}\right) \vee0\right] \wedge T,x-x^{\prime}\right) \vartheta\left( \epsilon^{-1}t^{\prime},\epsilon^{-1}x^{\prime}\right) dx^{\prime}dt^{\prime}.
\]
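Since $\tilde{W}$ is Lipschitz continuous in $(t,x)$, this mollification in fact converges at an explicit rate; as a side check (not needed for the argument below, with $L$ denoting the Lipschitz constant of $\tilde{W}$ in $(t,x)$):
\[
\left\vert \tilde{W}_{\epsilon}(t,x)-\tilde{W}(t,x)\right\vert \leq\epsilon^{-(n+1)}\int_{\mathbb{R\times R}^{n}}\left\vert \tilde{W}\left( \left[ \left( t-t^{\prime}\right) \vee0\right] \wedge T,x-x^{\prime}\right) -\tilde{W}(t,x)\right\vert \vartheta\left( \epsilon^{-1}t^{\prime},\epsilon^{-1}x^{\prime}\right) dx^{\prime}dt^{\prime}\leq2L\epsilon,
\]
since $\vartheta(\epsilon^{-1}t^{\prime},\epsilon^{-1}x^{\prime})$ vanishes unless $|t^{\prime}|\vee|x^{\prime}|\leq\epsilon$, and $|[(t-t^{\prime})\vee0]\wedge T-t|\leq|t^{\prime}|$ for $t\in[0,T]$.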
Then, it is easy to verify that
\begin{align*}
\partial_{t}\tilde{W}_{\epsilon}\left( t,x\right) & =\epsilon^{-(n+1)}\int_{\mathbb{R\times R}^{n}}\partial_{t}\tilde{W}\left( \left[ \left( t-t^{\prime}\right) \vee0\right] \wedge T,x-x^{\prime}\right) \vartheta\left( \epsilon^{-1}t^{\prime},\epsilon^{-1}x^{\prime}\right) dx^{\prime}dt^{\prime},\\
D\tilde{W}_{\epsilon}\left( t,x\right) & =\epsilon^{-(n+1)}\int_{\mathbb{R\times R}^{n}}D\tilde{W}\left( \left[ \left( t-t^{\prime}\right) \vee0\right] \wedge T,x-x^{\prime}\right) \vartheta\left( \epsilon^{-1}t^{\prime},\epsilon^{-1}x^{\prime}\right) dx^{\prime}dt^{\prime},
\end{align*}
where $\partial_{t}\tilde{W}$ is defined almost everywhere.
Thus we obtain that $||\partial_{t}\tilde{W}_{\epsilon}||_{\infty}\leq
L_{\tilde{W}}$, $||D\tilde{W}_{\epsilon}||_{\infty}\leq L_{D\tilde{W}}$,
$\tilde{W}_{\epsilon}\rightarrow\tilde{W}$ and $D\tilde{W}_{\epsilon}\rightarrow D\tilde{W}$ pointwise as $\epsilon\rightarrow0$, where
$L_{\tilde{W}}$ is the Lipschitz constant of $\tilde{W}$ with respect to $t$
and $L_{D\tilde{W}}$ is the Lipschitz constant of $D\tilde{W}$ with respect to
$x$. Set $Y_{s}^{\epsilon}=\tilde{W}_{\epsilon}(s,\bar{X}_{s}^{t,x;u^{m}}),$
\[
Z_{s}^{\epsilon}=(D\tilde{W}_{\epsilon}(s,\bar{X}_{s}^{t,x;u^{m}}))^{\intercal}\sigma(s,\bar{X}_{s}^{t,x;u^{m}},\tilde{W}(s,\bar{X}_{s}^{t,x;u^{m}}),\tilde{V}(s,\bar{X}_{s}^{t,x;u^{m}},u_{s}^{m}),u_{s}^{m}).
\]
Applying It\^{o}'s formula to $\tilde{W}_{\epsilon}(s,\bar{X}_{s}^{t,x;u^{m}})$ on $[t,T]$, we have
\begin{equation}
\left\{
\begin{array}
[c]{l}
dY_{s}^{\epsilon}\\
= \left[ \left( D\tilde{W}_{\epsilon}(s,\bar{X}_{s}^{t,x;u^{m}})\right) ^{\intercal}b(s,\bar{X}_{s}^{t,x;u^{m}},\tilde{W}(s,\bar{X}_{s}^{t,x;u^{m}}),\tilde{V}(s,\bar{X}_{s}^{t,x;u^{m}},u_{s}^{m}),u_{s}^{m})\right. \\
\ \ +\frac{1}{2}\mathrm{tr}\left( \sigma\sigma^{\intercal}(s,\bar{X}_{s}^{t,x;u^{m}},\tilde{W}(s,\bar{X}_{s}^{t,x;u^{m}}),\tilde{V}(s,\bar{X}_{s}^{t,x;u^{m}},u_{s}^{m}),u_{s}^{m})D^{2}\tilde{W}_{\epsilon}(s,\bar{X}_{s}^{t,x;u^{m}})\right) \\
\ \ \left. +\partial_{t}\tilde{W}_{\epsilon}(s,\bar{X}_{s}^{t,x;u^{m}})\right] ds+Z_{s}^{\epsilon}dB_{s},\\
Y_{T}^{\epsilon}= \phi(\bar{X}_{T}^{t,x;u^{m}}),\;s\in\lbrack t,T].
\end{array}
\right. \label{new-lx-26}
\end{equation}
Applying It\^{o}'s formula to $\left\vert \bar{Y}_{s}^{t,x;u^{m}}-Y_{s}^{\epsilon}\right\vert ^{2}$ on $[t,T]$, we get
\begin{equation}
\begin{array}
[c]{rl}
\mathbb{E}\left[ {\int}_{t}^{T}\left\vert \bar{Z}_{s}^{t,x;u^{m}}-Z_{s}^{\epsilon}\right\vert ^{2}ds\right] & \leq2\mathbb{E}\left[ {\int}_{t}^{T}\left\vert \bar{Y}_{s}^{t,x;u^{m}}-Y_{s}^{\epsilon}\right\vert I_{s}^{m}ds\right] \\
& \leq2\left\{ \mathbb{E}\left[ {\int}_{t}^{T}\left\vert \bar{Y}_{s}^{t,x;u^{m}}-Y_{s}^{\epsilon}\right\vert ^{2}ds\right] \right\} ^{\frac{1}{2}}\left\{ \mathbb{E}\left[ {\int}_{t}^{T}|I_{s}^{m}|^{2}ds\right] \right\} ^{\frac{1}{2}},
\end{array}
\label{new-lx-27}
\end{equation}
where
\begin{equation}
\begin{array}
[c]{l}
I_{s}^{m}\\
= \left\vert g(s,\bar{X}_{s}^{t,x;u^{m}},\bar{Y}_{s}^{t,x;u^{m}},\bar{Z}_{s}^{t,x;u^{m}},u_{s}^{m})+\partial_{t}\tilde{W}_{\epsilon}(s,\bar{X}_{s}^{t,x;u^{m}})\right. \\
\text{\ \ \ }+\left( D\tilde{W}_{\epsilon}(s,\bar{X}_{s}^{t,x;u^{m}})\right) ^{\intercal}b(s,\bar{X}_{s}^{t,x;u^{m}},\tilde{W}(s,\bar{X}_{s}^{t,x;u^{m}}),\tilde{V}(s,\bar{X}_{s}^{t,x;u^{m}},u_{s}^{m}),u_{s}^{m})\\
\text{\ \ \ }\left. +\frac{1}{2}\mathrm{tr}\left( \sigma\sigma^{\intercal}(s,\bar{X}_{s}^{t,x;u^{m}},\tilde{W}(s,\bar{X}_{s}^{t,x;u^{m}}),\tilde{V}(s,\bar{X}_{s}^{t,x;u^{m}},u_{s}^{m}),u_{s}^{m})D^{2}\tilde{W}_{\epsilon}(s,\bar{X}_{s}^{t,x;u^{m}})\right) \right\vert \\
\leq C\left( 1+\left\vert \bar{X}_{s}^{t,x;u^{m}}\right\vert ^{2}+\left\vert \bar{Y}_{s}^{t,x;u^{m}}\right\vert +\left\vert \bar{Z}_{s}^{t,x;u^{m}}\right\vert \right) .
\end{array}
\label{new-ism}
\end{equation}
By the standard estimate for the decoupled FBSDE (\ref{new-lx-22}), we obtain $\sup_{m\geq1}\mathbb{E}[ {\int}_{t}^{T}|I_{s}^{m}|^{2}ds] \leq C\left( 1+\left\vert x\right\vert ^{4}\right) .$ By the dominated convergence theorem, we have
\[
\mathbb{E}\left[ {\int}_{t}^{T}\left( \left\vert Y_{s}^{\epsilon}-\tilde
{W}\left( s,\bar{X}_{s}^{t,x;u^{m}}\right) \right\vert ^{2}+\left\vert
Z_{s}^{\epsilon}-\tilde{V}(s,\bar{X}_{s}^{t,x;u^{m}},u_{s}^{m})\right\vert
^{2}\right) ds\right] \rightarrow0
\]
as $\epsilon\rightarrow0$. Then we deduce
\begin{equation}
\begin{array}
[c]{rl}
& \mathbb{E}\left[ {\int}_{t}^{T}\left\vert \bar{Z}_{s}^{t,x;u^{m}}-\tilde{V}(s,\bar{X}_{s}^{t,x;u^{m}},u_{s}^{m})\right\vert ^{2}ds\right] \\
& \leq C\left( 1+\left\vert x\right\vert ^{2}\right) \left\{ \mathbb{E}\left[ {\int}_{t}^{T}\left\vert \bar{Y}_{s}^{t,x;u^{m}}-\tilde{W}\left( s,\bar{X}_{s}^{t,x;u^{m}}\right) \right\vert ^{2}ds\right] \right\} ^{\frac{1}{2}}
\end{array}
\label{new-lx-29}
\end{equation}
by taking $\epsilon\rightarrow0$ in (\ref{new-lx-27}). Consider the following
FBSDE
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}^{t,x;u^{m}}= & b(s,X_{s}^{t,x;u^{m}},Y_{s}^{t,x;u^{m}},Z_{s}^{t,x;u^{m}},u_{s}^{m})ds\\
& +\sigma(s,X_{s}^{t,x;u^{m}},Y_{s}^{t,x;u^{m}},Z_{s}^{t,x;u^{m}},u_{s}^{m})dB_{s},\\
dY_{s}^{t,x;u^{m}}= & -g(s,X_{s}^{t,x;u^{m}},Y_{s}^{t,x;u^{m}},Z_{s}^{t,x;u^{m}},u_{s}^{m})ds+Z_{s}^{t,x;u^{m}}dB_{s},\;s\in\lbrack t,T],\\
X_{t}^{t,x;u^{m}}= & x,\ Y_{T}^{t,x;u^{m}}=\phi(X_{T}^{t,x;u^{m}}).
\end{array}
\right. \label{new-lx-24}
\end{equation}
By Theorem 2.2 in \cite{Hu-JX} for FBSDEs (\ref{new-lx-22}) and
(\ref{new-lx-24}), we obtain
\begin{equation}
\begin{array}
[c]{l}
|Y_{t}^{t,x;u^{m}}-\bar{Y}_{t}^{t,x;u^{m}}|\\
\leq C\left\{ \mathbb{E}\left[ \left( \displaystyle\int_{t}^{T}\left\vert b(s,\bar{X}_{s}^{t,x;u^{m}},\bar{Y}_{s}^{t,x;u^{m}},\bar{Z}_{s}^{t,x;u^{m}},u_{s}^{m})\right. \right. \right. \right. \\
\text{ \ \ \ \ \ \ \ \ }\left. \left. -b(s,\bar{X}_{s}^{t,x;u^{m}},\tilde{W}(s,\bar{X}_{s}^{t,x;u^{m}}),\tilde{V}(s,\bar{X}_{s}^{t,x;u^{m}},u_{s}^{m}),u_{s}^{m})\right\vert ds\right) ^{2}\\
\text{ \ \ }+\displaystyle\int_{t}^{T}\left\vert \sigma(s,\bar{X}_{s}^{t,x;u^{m}},\bar{Y}_{s}^{t,x;u^{m}},\bar{Z}_{s}^{t,x;u^{m}},u_{s}^{m})\right. \\
\text{ \ \ \ \ \ \ \ \ }\left. \left. \left. -\sigma(s,\bar{X}_{s}^{t,x;u^{m}},\tilde{W}(s,\bar{X}_{s}^{t,x;u^{m}}),\tilde{V}(s,\bar{X}_{s}^{t,x;u^{m}},u_{s}^{m}),u_{s}^{m})\right\vert ^{2}ds\right] \right\} ^{\frac{1}{2}}\\
\leq C\left\{ \mathbb{E}\left[ \displaystyle{\int}_{t}^{T}\left( \left\vert \bar{Y}_{s}^{t,x;u^{m}}-\tilde{W}\left( s,\bar{X}_{s}^{t,x;u^{m}}\right) \right\vert ^{2}\right. \right. \right. \\
\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\left. \left. \left. +\left\vert \bar{Z}_{s}^{t,x;u^{m}}-\tilde{V}(s,\bar{X}_{s}^{t,x;u^{m}},u_{s}^{m})\right\vert ^{2}\right) ds\right] \right\} ^{\frac{1}{2}}.
\end{array}
\label{new-lx-25}
\end{equation}
Since $Y_{t}^{t,x;u^{m}}\geq W(t,x)$ for any $m\geq1$, we get $W(t,x)\leq
\tilde{W}(t,x)$ by (\ref{new-lx-23}), (\ref{new-lx-29}) and (\ref{new-lx-25}).
\end{proof}
\begin{remark}
Note that $\tilde{b}$ and $\tilde{\sigma}$ are only local Lipschitz continuous
in $x$. Under this condition (\ref{new-lx-30}), we can still prove that the
value function of the controlled system (\ref{new-lx-22}) is the viscosity
solution to HJB equation (\ref{new-lx-21}) by using the method in
\cite{Peng-lecture}. The uniqueness of the solution to HJB equation
(\ref{new-lx-21}) can be still obtained similarly as in
\cite{Baeles-BP,Buckdahn-Li}.
\end{remark}
\begin{remark}
If $b$ and $\sigma$ are independent of $z$, we only need to suppose that
$\tilde{W}$ is Lipschitz continuous in $x$ in Theorem \ref{th-um2}.
\end{remark}
\begin{remark}
In Theorem \ref{th-um2}, if $\partial_{t}\tilde{W}\in C([0,T]\times
\mathbb{R}^{n})$, then we do not need the assumption that $\tilde{W}$ is
Lipschitz continuous in $t$. In this case, by the definition of viscosity
solution, we can deduce that $|\partial_{t}\tilde{W}(t,x)|\leq C(1+|x|^{2})$.
Thus the inequality (\ref{new-ism}) still holds. Following the same steps as
in the above proof, we can obtain the same result.
\end{remark}
Now we study the case in which the coefficients $b$, $\sigma$ and $g$ of the controlled system are independent of the control variable $u$. It is obvious that in this case the corresponding HJB equation (\ref{eq-hjb}) degenerates to a semilinear parabolic equation coupled with an algebraic equation.
\begin{theorem}
\label{th-uni-pde}Suppose $b$, $\sigma$ and $g$ are independent of control
variable $u$, $\sigma$ is independent of $z$, $\tilde{W}$ is a viscosity
solution to HJB equation (\ref{eq-hjb}); and one of the following two
conditions holds true:
\begin{description}
\item[(i)] Assumption \ref{assum-1} holds;
\item[(ii)] Assumptions \ref{assum-1} (i) and \ref{assm-mon} hold. Moreover,
\[
\tilde{A}\left( t,x,y,z\right) =\left( -G^{\intercal}g\left(
t,x,y,z\right) , Gb\left( t,x,y,z\right) , G\sigma\left( t,x,\tilde
{W}\left( t,x\right) \right) \right) ^{\intercal}
\]
satisfies Assumption \ref{assm-mon}.
\end{description}
\noindent Let $W$ be the value function. Furthermore, we assume that
$\tilde{W}$ is Lipschitz continuous in $x$. Then $W=\tilde{W}$.
\end{theorem}
\begin{proof}
We only prove the first case (the condition (i) holds). The proof for the
second case is similar. Following the same steps in Theorem \ref{th-um1},
$\tilde{W}$ is also a viscosity solution to PDE system
\[
\partial_{t}F(t,x)+H(t,x,F(t,x),DF(t,x),D^{2}F(t,x))=0,\ F(T,x)=\phi(x),
\]
where $H\left( \cdot\right) $ is the function in equation (\ref{new-lx-17})
without control variables. Since $\tilde{\sigma}$ is independent of $(y,z)$,
by Theorems \ref{th-vis} and \ref{th-vis-uni}, we have $\tilde{W}\left(
t,x\right) =\bar{Y}_{t}^{t,x}$, where $\bar{Y}_{t}^{t,x}$ is the solution to
the following FBSDE at time $t$
\begin{equation}
\left\{
\begin{array}
[c]{rl}
d\bar{X}_{s}^{t,x}= & b(s,\bar{X}_{s}^{t,x},\bar{Y}_{s}^{t,x},\bar{Z}_{s}^{t,x})ds+\sigma(s,\bar{X}_{s}^{t,x},\tilde{W}(s,\bar{X}_{s}^{t,x}))dB_{s},\\
d\bar{Y}_{s}^{t,x}= & -g(s,\bar{X}_{s}^{t,x},\bar{Y}_{s}^{t,x},\bar{Z}_{s}^{t,x})ds+\bar{Z}_{s}^{t,x}dB_{s},\;s\in\lbrack t,T],\\
\bar{X}_{t}^{t,x}= & x,\ \bar{Y}_{T}^{t,x}=\phi(\bar{X}_{T}^{t,x}).
\end{array}
\right. \label{new-lx-81}
\end{equation}
For each fixed $(t,x)\in\lbrack0,T)\times\mathbb{R}^{n}$, by Proposition
\ref{pr-um}, we have
\begin{equation}
\bar{Y}_{s}^{t,x}=\tilde{W}\left( s,\bar{X}_{s}^{t,x}\right) \text{ for
}s\in\lbrack t,T]. \label{new-lx-911}
\end{equation}
Then $\left( \bar{X}^{t,x},\bar{Y}^{t,x},\bar{Z}^{t,x}\right) $ satisfies
the following fully coupled FBSDE
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}^{t,x}= & b(s,X_{s}^{t,x},Y_{s}^{t,x},Z_{s}^{t,x})ds+\sigma(s,X_{s}^{t,x},Y_{s}^{t,x})dB_{s},\\
dY_{s}^{t,x}= & -g(s,X_{s}^{t,x},Y_{s}^{t,x},Z_{s}^{t,x})ds+Z_{s}^{t,x}dB_{s},\;s\in\lbrack t,T],\\
X_{t}^{t,x}= & x,\ Y_{T}^{t,x}=\phi(X_{T}^{t,x}).
\end{array}
\right. \label{new-lx-91}
\end{equation}
By the uniqueness of solutions to FBSDEs (\ref{new-lx-81}) and (\ref{new-lx-91}), one has
\begin{equation}
Y_{t}^{t,x}=\bar{Y}_{t}^{t,x}. \label{new-lx-912}
\end{equation}
Since $Y_{t}^{t,x}=W(t,x)$, we get $W(t,x)=\tilde{W}(t,x)$ by
(\ref{new-lx-911}) and (\ref{new-lx-912}).
\end{proof}
Similarly, we have the following theorem.
\begin{theorem}
\label{uni-pde}Suppose $b$, $\sigma$ and $g$ are independent of control
variable $u$; and one of the following two conditions holds true:
\begin{description}
\item[(i)] Assumptions \ref{assum-1} and \ref{assum-l3} hold;
\item[(ii)] Assumptions \ref{assum-1} (i), \ref{assum-l3} and \ref{assm-mon} hold.
\end{description}
\noindent Let $W$ be the value function and $\tilde{W}$ be a viscosity
solution to HJB equation (\ref{eq-hjb}). Furthermore, we assume that
$\tilde{W}$ is Lipschitz continuous in $(t,x)$, $D\tilde{W}$ is Lipschitz
continuous in $x$ and $||D\tilde{W}||_{\infty}L_{3}<1$. Then $W=\tilde{W}$.
\end{theorem}
\begin{remark}
In the above theorems, we assume that Assumption \ref{assum-1} or the
monotonicity conditions hold. It is well known that there are other conditions
which guarantee the existence and uniqueness of solutions to the fully coupled
controlled system (\ref{state-eq}). In fact, our approach can be generalized
to deal with any fully coupled controlled system which is well-posed and for
which the related $L^{2}$-estimates of the solution hold.
\end{remark}
\subsection{The smooth case}
{In this subsection, we assume that the solution of the HJB equation
$\tilde{W}\in C^{1,2}([0,T]\times\mathbb{R}^{n})$. Then, we have the following
theorem.}
\begin{theorem}
Suppose one of the following two conditions holds true:
\begin{description}
\item[(i)] Assumptions \ref{assum-1} and \ref{assum-l3} hold;
\item[(ii)] Assumptions \ref{assum-1} (i), \ref{assum-l3} and \ref{assm-mon} hold.
\end{description}
\noindent Let $W$ be the value function and $\tilde{W}\in C^{1,2}([0,T]\times\mathbb{R}^{n})$ be a solution to the HJB equation (\ref{eq-hjb}).
Furthermore, we assume $||\sigma||_{\infty}<\infty$, $||D\tilde{W}||_{\infty}L_{3}<1$ and $||D^{2}\tilde{W}||_{\infty}<\infty$. Then $W=\tilde{W}$.
\end{theorem}
\begin{proof}
Without loss of generality, we only prove the case $d=1$. For each given
$u\in\mathcal{U}^{t}[t,T]$, let $(X^{t,x;u},Y^{t,x;u},Z^{t,x;u})$ be the
solution to FBSDE (\ref{state-eq}) with $\xi=x$. Applying It\^{o}'s formula to
$\tilde{W}(s,X_{s}^{t,x;u})$, we obtain
\[
\left\{
\begin{array}
[c]{l}
d\tilde{W}(s,X_{s}^{t,x;u})\\
= \left\{ \partial_{t}\tilde{W}(s,X_{s}^{t,x;u})+\left( D\tilde{W}(s,X_{s}^{t,x;u})\right) ^{\intercal}b(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s})\right. \\
\ \ \left. +\frac{1}{2}\mathrm{tr}[\sigma\sigma^{\intercal}(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s})D^{2}\tilde{W}(s,X_{s}^{t,x;u})]\right\} ds\\
\ \ +D\tilde{W}(s,X_{s}^{t,x;u})^{\intercal}\sigma(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s})dB_{s},\;s\in\lbrack t,T],\\
\tilde{W}(T,X_{T}^{t,x;u})= \phi(X_{T}^{t,x;u}).
\end{array}
\right.
\]
Set $\tilde{Y}_{s}=\tilde{W}(s,X_{s}^{t,x;u}),\ \tilde{Z}_{s}=D\tilde{W}(s,X_{s}^{t,x;u})^{\intercal}\sigma(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s}),\ \hat{Y}_{s}=Y_{s}^{t,x;u}-\tilde{Y}_{s},\ \hat{Z}_{s}=Z_{s}^{t,x;u}-\tilde{Z}_{s}.$
Then,
\begin{equation}
\label{new-asd-11}d\hat{Y}_{s}= -\left( \Pi_{1}(s)+\Pi_{2}(s)\right)
ds+\hat{Z}_{s}dB_{s},\ \hat{Y}_{T}= 0,
\end{equation}
where
\[
\begin{array}
[c]{rl}
\Pi_{1}(s)= & H(s,X_{s}^{t,x;u},\tilde{W}(s,X_{s}^{t,x;u}),D\tilde{W}(s,X_{s}^{t,x;u}),D^{2}\tilde{W}(s,X_{s}^{t,x;u}),u_{s})\\
& +\partial_{t}\tilde{W}(s,X_{s}^{t,x;u})\geq0,\\
\Pi_{2}(s)= & \left( D\tilde{W}(s,X_{s}^{t,x;u})\right) ^{\intercal}\left[ b_{1}(s)-b_{2}(s)\right] +g_{1}(s)-g_{2}(s)\\
& +\frac{1}{2}\mathrm{tr}[\left( \sigma_{1}\sigma_{1}^{\intercal}(s)-\sigma_{2}\sigma_{2}^{\intercal}(s)\right) D^{2}\tilde{W}(s,X_{s}^{t,x;u})],\\
b_{1}(s)= & b(s,X_{s}^{t,x;u},Y_{s}^{t,x;u},Z_{s}^{t,x;u},u_{s}),\\
b_{2}(s)= & b(s,X_{s}^{t,x;u},\tilde{W}(s,X_{s}^{t,x;u}),\tilde{V}(s,X_{s}^{t,x;u},u_{s}),u_{s}),\\
\tilde{V}(t,x,u)= & D\tilde{W}(t,x)^{\intercal}\sigma(t,x,\tilde{W}(t,x),\tilde{V}(t,x,u),u),
\end{array}
\]
and $\sigma_{i}$, $g_{i}$ are defined similarly for $i=1,2$. Let
\[
b_{1}(s)-b_{2}(s)=\beta_{1}(s)\hat{Y}_{s}+\gamma_{1}(s)\left( Z_{s}^{t,x;u}-\tilde{V}(s,X_{s}^{t,x;u},u_{s})\right)
\]
and
\[
\begin{array}
[c]{l}
Z_{s}^{t,x;u}-\tilde{V}(s,X_{s}^{t,x;u},u_{s})\\
=\hat{Z}_{s}+D\tilde{W}(s,X_{s}^{t,x;u})^{\intercal}\left( \sigma_{1}(s)-\sigma_{2}(s)\right) \\
=\hat{Z}_{s}+D\tilde{W}(s,X_{s}^{t,x;u})^{\intercal}\left[ \beta_{2}(s)\hat{Y}_{s}+\gamma_{2}(s)\left( Z_{s}^{t,x;u}-\tilde{V}(s,X_{s}^{t,x;u},u_{s})\right) \right] \\
=\hat{Z}_{s}+D\tilde{W}(s,X_{s}^{t,x;u})^{\intercal}\beta_{2}(s)\hat{Y}_{s}+D\tilde{W}(s,X_{s}^{t,x;u})^{\intercal}\gamma_{2}(s)\left( Z_{s}^{t,x;u}-\tilde{V}(s,X_{s}^{t,x;u},u_{s})\right) ,
\end{array}
\]
where
\[
\begin{array}
[c]{l}
\beta_{1}(s)=\left\{
\begin{array}
[c]{cl}
\frac{b_{1}(s)-b_{2}(s)}{Y_{s}^{t,x;u}-\tilde{Y}_{s}}, & \text{if }Y_{s}^{t,x;u}-\tilde{Y}_{s}\neq0,\\
0, & \text{if }Y_{s}^{t,x;u}-\tilde{Y}_{s}=0,
\end{array}
\right. \\
\gamma_{1}(s)=\left\{
\begin{array}
[c]{cl}
\frac{b_{1}(s)-b_{2}(s)}{Z_{s}^{t,x;u}-\tilde{V}(s,X_{s}^{t,x;u},u_{s})}, & \text{if }Z_{s}^{t,x;u}-\tilde{V}(s,X_{s}^{t,x;u},u_{s})\neq0,\\
0, & \text{if }Z_{s}^{t,x;u}-\tilde{V}(s,X_{s}^{t,x;u},u_{s})=0,
\end{array}
\right.
\end{array}
\]
$\beta_{2}\left( \cdot\right) $ and $\gamma_{2}\left( \cdot\right) $ are
defined similarly. Set
\begin{equation}
b_{1}(s)-b_{2}(s)=\tilde{\beta}_{1}(s)\hat{Y}_{s}+\tilde{\gamma}_{1}(s)\hat{Z}_{s}, \label{new-lx-31}
\end{equation}
where $\tilde{\beta}_{1}(s)=\beta_{1}(s)+\left( 1-D\tilde{W}(s,X_{s}^{t,x;u})^{\intercal}\gamma_{2}(s)\right) ^{-1}D\tilde{W}(s,X_{s}^{t,x;u})^{\intercal}\beta_{2}(s)\gamma_{1}\left( s\right) $ and $\tilde{\gamma}_{1}(s)=\left( 1-D\tilde{W}(s,X_{s}^{t,x;u})^{\intercal}\gamma_{2}(s)\right) ^{-1}\gamma_{1}(s)$.
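To see where $\tilde{\beta}_{1}$ and $\tilde{\gamma}_{1}$ come from, note that, by the assumption $||D\tilde{W}||_{\infty}L_{3}<1$ of this theorem (and since $\gamma_{2}$, being a difference quotient of $\sigma$ in $z$, satisfies $|\gamma_{2}(s)|\leq L_{3}$), the factor $1-D\tilde{W}(s,X_{s}^{t,x;u})^{\intercal}\gamma_{2}(s)$ is invertible. Solving the identity above for $Z_{s}^{t,x;u}-\tilde{V}(s,X_{s}^{t,x;u},u_{s})$ gives
\[
Z_{s}^{t,x;u}-\tilde{V}(s,X_{s}^{t,x;u},u_{s})=\left( 1-D\tilde{W}(s,X_{s}^{t,x;u})^{\intercal}\gamma_{2}(s)\right) ^{-1}\left( \hat{Z}_{s}+D\tilde{W}(s,X_{s}^{t,x;u})^{\intercal}\beta_{2}(s)\hat{Y}_{s}\right) ,
\]
and substituting this into $b_{1}(s)-b_{2}(s)=\beta_{1}(s)\hat{Y}_{s}+\gamma_{1}(s)(Z_{s}^{t,x;u}-\tilde{V}(s,X_{s}^{t,x;u},u_{s}))$ yields exactly the decomposition (\ref{new-lx-31}).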
It is easy to check that $\beta_{1}$, $\gamma_{1}$, $\beta_{2}$, $\gamma_{2}$,
$\tilde{\beta}_{1}$ and $\tilde{\gamma}_{1}$ are bounded. Note that $\sigma$
is bounded. Similar to the proof of (\ref{new-lx-31}), we have
\[
\begin{array}
[c]{rl}
\mathrm{tr}[\left( \sigma_{1}\sigma_{1}^{\intercal}(s)-\sigma_{2}\sigma_{2}^{\intercal}(s)\right) D^{2}\tilde{W}(s,X_{s}^{t,x;u})] & =\tilde{\beta}_{2}(s)\hat{Y}(s)+\tilde{\gamma}_{2}(s)\hat{Z}(s),\\
g_{1}(s)-g_{2}(s) & =\tilde{\beta}_{3}(s)\hat{Y}(s)+\tilde{\gamma}_{3}(s)\hat{Z}(s),
\end{array}
\]
where $\tilde{\beta}_{2}$, $\tilde{\gamma}_{2}$, $\tilde{\beta}_{3}$ and $\tilde{\gamma}_{3}$ are defined similarly and bounded. Then, we can rewrite
$\Pi_{2}(s)$ as $\Pi_{2}(s)=\beta(s)\hat{Y}(s)+\gamma(s)\hat{Z}(s),$ where
$\beta$, $\gamma\in\mathbb{R}$ are bounded. By the comparison theorem of
BSDEs, we get $\hat{Y}_{t}\geq0$ which implies $\tilde{W}(t,x)\leq
Y_{t}^{t,x;u}$. Thus $\tilde{W}\leq W$. On the other hand, by Theorem
\ref{th-um2}, we have $W\leq\tilde{W}$. Thus, $W=\tilde{W}$.
\end{proof}
\section{Appendix}
\subsection{The comparison theorem for FBSDEs}
Under Assumption \ref{assum-1}, we deduce a generalized comparison theorem for
FBSDEs. The proof is similar to that of Theorem 3.1 in \cite{Wu-compar} and
Theorem 5.11 in \cite{Li-W}. For the reader's convenience, we give a detailed
proof. Consider the following FBSDEs:
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}^{i}= & b(s,X_{s}^{i},Y_{s}^{i},Z_{s}^{i})ds+\sigma(s,X_{s}^{i},Y_{s}^{i},Z_{s}^{i})dB_{s},\\
dY_{s}^{i}= & -g(s,X_{s}^{i},Y_{s}^{i},Z_{s}^{i})ds+Z_{s}^{i}dB_{s},\\
X_{t}^{i}= & \xi,\ Y_{t+\delta}^{i}=\phi_{i}(X_{t+\delta}^{i}),\text{ }i=1,2.
\end{array}
\right. \label{eq-appen}
\end{equation}
\begin{theorem}
\label{th-comp}Suppose Assumption \ref{assum-1} holds. Then for $\delta\in(0,T-t]$ and $\xi\in L^{2}(\mathcal{F}_{t};\mathbb{R}^{n})$,
(\ref{eq-appen}) has a unique solution $(X_{s}^{i},Y_{s}^{i},Z_{s}^{i})_{s\in\lbrack t,t+\delta]}$ associated with $(b,\sigma,g,\phi_{i})$. If
$\phi_{1}(X_{t+\delta}^{2})\geq\phi_{2}(X_{t+\delta}^{2})$, $P$-a.s. (resp.
$\phi_{1}(X_{t+\delta}^{1})\geq\phi_{2}(X_{t+\delta}^{1})$, $P$-a.s.), then we
have $Y_{t}^{1}\geq Y_{t}^{2}$, $P$-a.s.
\end{theorem}
\begin{proof}
Without loss of generality, we only prove the case $d=1$. Let $\hat{X}=X^{1}-X^{2}$, $\hat{Y}=Y^{1}-Y^{2}$, $\hat{Z}=Z^{1}-Z^{2}$. Then $\left( \hat{X},\hat{Y},\hat{Z}\right) $ satisfies the following FBSDE:
\[
\left\{
\begin{array}
[c]{rl}
d\hat{X}_{s}= & \left[ b^{1}(s)\hat{X}_{s}+b^{2}(s)\hat{Y}_{s}+b^{3}(s)\hat{Z}_{s}\right] ds\\
& +\left[ \sigma^{1}(s)\hat{X}_{s}+\sigma^{2}(s)\hat{Y}_{s}+\sigma^{3}(s)\hat{Z}_{s}\right] dB_{s},\\
d\hat{Y}_{s}= & -\left[ g^{1}(s)\hat{X}_{s}+g^{2}(s)\hat{Y}_{s}+g^{3}(s)\hat{Z}_{s}\right] ds+\hat{Z}_{s}dB_{s},\text{ }s\in\lbrack t,t+\delta],\\
\hat{X}_{t}= & 0,\ \hat{Y}_{t+\delta}=\phi^{1}(t+\delta)\hat{X}_{t+\delta}+\phi_{1}(X_{t+\delta}^{2})-\phi_{2}\left( X_{t+\delta}^{2}\right) ,
\end{array}
\right.
\]
where $b^{i}$, $\sigma^{i}$, $g^{i}$, $i=1,2,3$, and $\phi^{1}$ are defined in the
proof of Lemma \ref{est-initial}. Introduce the adjoint equation for the above
equation as follows:
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dh_{s}= & \left[ g^{2}(s)h_{s}+b^{2}(s)m_{s}+\sigma^{2}(s)n_{s}\right] ds+\left[ g^{3}(s)h_{s}+b^{3}(s)m_{s}+\sigma^{3}(s)n_{s}\right] dB_{s},\\
dm_{s}= & -\left[ g^{1}(s)h_{s}+b^{1}(s)m_{s}+\sigma^{1}(s)n_{s}\right] ds+n_{s}dB_{s},\\
h_{t}= & 1,\ m_{t+\delta}=\phi^{1}(t+\delta)h_{t+\delta}.
\end{array}
\right. \label{eq-dual-comp}
\end{equation}
It is easy to check that (\ref{eq-dual-comp}) satisfies the assumptions of
Theorem 2.2 in \cite{Hu-JX}. Consequently, it has a unique solution
$(h,m,n)\in L_{\mathcal{F}}^{2}(\Omega;C([t,t+\delta];\mathbb{R}))\times
L_{\mathcal{F}}^{2}(\Omega;C([t,t+\delta];\mathbb{R}^{n}))\times
L_{\mathcal{F}}^{2,2}(t,t+\delta;\mathbb{R}^{n\times d})$. Applying It\^{o}'s
formula to $m\hat{X}-h\hat{Y}$, we get
\[
\hat{Y}_{t}=\mathbb{E}\left[ \left. \left( \phi_{1}(X_{t+\delta}^{2})-\phi_{2}\left( X_{t+\delta}^{2}\right) \right) h_{t+\delta}\right\vert \mathcal{F}_{t}\right] .
\]
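A quick sketch of this duality (assuming, as usual, that the stochastic integrals involved are true martingales): expanding $d(m_{s}\hat{X}_{s}-h_{s}\hat{Y}_{s})$ by It\^{o}'s product rule, the $ds$-terms cancel precisely by the choice of the adjoint equation (\ref{eq-dual-comp}), so $m\hat{X}-h\hat{Y}$ is a martingale on $[t,t+\delta]$. Hence
\[
-\hat{Y}_{t}=m_{t}\hat{X}_{t}-h_{t}\hat{Y}_{t}=\mathbb{E}\left[ \left. m_{t+\delta}\hat{X}_{t+\delta}-h_{t+\delta}\hat{Y}_{t+\delta}\right\vert \mathcal{F}_{t}\right] =-\mathbb{E}\left[ \left. \left( \phi_{1}(X_{t+\delta}^{2})-\phi_{2}\left( X_{t+\delta}^{2}\right) \right) h_{t+\delta}\right\vert \mathcal{F}_{t}\right] ,
\]
where the first equality uses $\hat{X}_{t}=0$, $h_{t}=1$, and the last one uses $m_{t+\delta}=\phi^{1}(t+\delta)h_{t+\delta}$ together with the terminal condition for $\hat{Y}_{t+\delta}$.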
Since $\phi_{1}(X_{t+\delta}^{2})\geq\phi_{2}(X_{t+\delta}^{2})$, $P$-$a.s.$,
we only need to prove $h_{t+\delta}\geq0$, $P$-$a.s.$. Define $\tau
=\inf\left\{ s>t:h_{s}=0\right\} \wedge\left( t+\delta\right) $ and
consider the following FBSDE on $\left[ \tau,t+\delta\right] $,
\begin{equation}
\left\{
\begin{array}
[c]{rl}
d\tilde{h}_{s}= & \left[ g^{2}(s)\tilde{h}_{s}+b^{2}(s)\tilde{m}_{s}+\sigma^{2}(s)\tilde{n}_{s}\right] ds\\
& +\left[ g^{3}(s)\tilde{h}_{s}+b^{3}(s)\tilde{m}_{s}+\sigma^{3}(s)\tilde{n}_{s}\right] dB_{s},\\
d\tilde{m}_{s}= & -\left[ g^{1}(s)\tilde{h}_{s}+b^{1}(s)\tilde{m}_{s}+\sigma^{1}(s)\tilde{n}_{s}\right] ds+\tilde{n}_{s}dB_{s},\\
\tilde{h}_{\tau}= & 0,\ \tilde{m}_{t+\delta}=\phi^{1}(t+\delta)\tilde{h}_{t+\delta}.
\end{array}
\right. \label{eq-dual-comp2}
\end{equation}
This FBSDE has a unique solution $(\tilde{h},\tilde{m},\tilde{n})=\left(
0,0,0\right) $. Set
\[
\begin{array}
[c]{rl}
& \bar{h}_{s}=h_{s}I_{\left[ t,\tau\right] }\left( s\right) +\tilde{h}_{s}I_{(\tau,t+\delta]}\left( s\right) ,\text{ }\bar{m}_{s}=m_{s}I_{\left[ t,\tau\right] }\left( s\right) +\tilde{m}_{s}I_{(\tau,t+\delta]}\left( s\right) ,\\
& \bar{n}_{s}=n_{s}I_{\left[ t,\tau\right] }\left( s\right) +\tilde{n}_{s}I_{(\tau,t+\delta]}\left( s\right) .
\end{array}
\]
It is clear that $(\bar{h},\bar{m},\bar{n})$ is a solution to
(\ref{eq-dual-comp}). The definition of $\tau$ yields the desired result
$\bar{h}_{t+\delta}\geq0$.
\end{proof}
\subsection{$L^{p}$ estimate of FBSDEs}
The following lemma is a combination of Theorems 3.17 and 5.17 in
\cite{Pardoux-book}.
\begin{lemma}
\label{sde-bsde} Consider the following decoupled FBSDE
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}= & \bar{b}(s,X_{s})ds+\bar{\sigma}(s,X_{s})dB_{s},\\
dY_{s}= & -\bar{g}(s,X_{s},Y_{s},Z_{s})ds+Z_{s}dB_{s},\\
X_{0}= & x_{0},\ Y_{T^{\prime}}=\bar{\phi}(X_{T^{\prime}}),
\end{array}
\right. \label{fbsde-pardoux}
\end{equation}
where $T^{\prime}\leq T$ for some fixed $T>0$,
\[
\bar{b}:[0,T]\times\Omega\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{n},\ \bar{\sigma}:[0,T]\times\Omega\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times d},
\]
\[
\bar{g}:[0,T]\times\Omega\times\mathbb{R}^{n}\rightarrow\mathbb{R},
\ \bar{\phi}:\mathbb{R}^{n}\rightarrow\mathbb{R}.
\]
For each fixed $p>1$, if the coefficients satisfy
(i) $\bar{b}(\cdot,0)$, $\bar{\sigma}(\cdot,0)$, $\bar{g}(\cdot,0,0,0)$ are
$\mathbb{F}$-adapted processes and
\[
\mathbb{E}\left\{ |\bar{\phi}(0)|^{p}+\left( \int_{0}^{T^{\prime}}\left[ |\bar{b}(s,0)|+|\bar{g}(s,0,0,0)|\right] ds\right) ^{p}+\left( \int_{0}^{T^{\prime}}|\bar{\sigma}(s,0)|^{2}ds\right) ^{\frac{p}{2}}\right\} <\infty,
\]
(ii)
\[
\begin{array}
[c]{rl}
|\bar{\psi}(s,x_{1})-\bar{\psi}(s,x_{2})| & \leq L_{1}|x_{1}-x_{2}|,\ \ \text{for }\ \bar{\psi}=\bar{b},\bar{\sigma},\bar{\phi};\\
|\bar{g}(s,x_{1},y_{1},z_{1})-\bar{g}(s,x_{2},y_{2},z_{2})| & \leq L_{1}(|x_{1}-x_{2}|+|y_{1}-y_{2}|+|z_{1}-z_{2}|),
\end{array}
\]
then (\ref{fbsde-pardoux}) has a unique solution $(X,Y,Z)\in L_{\mathcal{F}}^{p}(\Omega;C([0,T^{\prime}],\mathbb{R}^{n}))\times L_{\mathcal{F}}^{p}(\Omega;C([0,T^{\prime}],\mathbb{R}^{m}))\times L_{\mathcal{F}}^{2,p}([0,T^{\prime}];\mathbb{R}^{m\times d})$ and there exists a constant
$C_{p}$ which depends only on $L_{1}$, $p$, $T$ such that
\[
\begin{array}
[c]{l}
\mathbb{E}\left\{ \sup\limits_{s\in\lbrack0,T^{\prime}]}\left[ |X_{s}|^{p}+|Y_{s}|^{p}\right] +\left( \int_{0}^{T^{\prime}}|Z_{s}|^{2}ds\right) ^{\frac{p}{2}}\right\} \\
\ \leq C_{p}\mathbb{E}\left\{ \left[ \int_{0}^{T^{\prime}}\left( |\bar{b}(s,0)|+|\bar{g}(s,0,0,0)|\right) ds\right] ^{p}+\left( \int_{0}^{T^{\prime}}|\bar{\sigma}(s,0)|^{2}ds\right) ^{\frac{p}{2}}+|\bar{\phi}(0)|^{p}+|x_{0}|^{p}\right\} .
\end{array}
\]
\end{lemma}
\begin{remark}
\label{re-app-2} In the above lemma, the constant $C_{p}$ depends on the
Lipschitz constant $L_{1}$. To represent this dependence, we sometimes denote
$C_{p}$ by $C_{p}(L_{1})$. Generally, we can choose $C_{p}(L_{1})$ as an
increasing function with respect to $L_{1}$.
\end{remark}
Consider the following FBSDE
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{s}= & b(s,X_{s},Y_{s},Z_{s})ds+\sigma(s,X_{s},Y_{s},Z_{s})dB_{s},\\
dY_{s}= & -g(s,X_{s},Y_{s},Z_{s})ds+Z_{s}dB_{s},\text{ }s\in\lbrack0,T^{\prime}],\\
X_{0}= & x,\ Y_{T^{\prime}}=\phi(X_{T^{\prime}}),
\end{array}
\right. \label{new-new-new-new-1}
\end{equation}
where $T^{\prime}\leq T$ for some fixed $T>0$.
\begin{theorem}
\label{th-lp}Suppose Assumption \ref{assum-1} (i) holds and $C_{p}2^{p-1}\left[ L_{2}^{p}\left( T^{\frac{p}{2}}+T^{p}\right) +L_{3}^{p}\right] <1$ for some $p\geq2$, where $C_{p}$ is defined
in Lemma 5.1 in \cite{Hu-JX}. Then FBSDE (\ref{new-new-new-new-1}) admits a
unique solution $(X,Y,Z)\in L_{\mathcal{F}}^{p}(\Omega;C([0,T^{\prime}];\mathbb{R}^{n}))\times L_{\mathcal{F}}^{p}(\Omega;C([0,T^{\prime}];\mathbb{R}))\times L_{\mathcal{F}}^{2,p}(0,T^{\prime};\mathbb{R}^{1\times d})$ and
\[
\begin{array}
[c]{l}
||(X,Y,Z)||_{p}^{p}=\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T^{\prime}]}\left( |X_{t}|^{p}+|Y_{t}|^{p}\right) +\left( \int_{0}^{T^{\prime}}|Z_{t}|^{2}dt\right) ^{\frac{p}{2}}\right] \\
\ \leq C\mathbb{E}\left[ \left( \int_{0}^{T^{\prime}}[|b|+|g|](t,0,0,0)dt\right) ^{p}+\left( \int_{0}^{T^{\prime}}|\sigma(t,0,0,0)|^{2}dt\right) ^{\frac{p}{2}}+|\phi(0)|^{p}+|x|^{p}\right] ,
\end{array}
\]
where $C$ depends only on $T$, $p$, $L_{1}$, $L_{2}$, $L_{3}$.
\end{theorem}
\begin{proof}
Let $\mathcal{L}$ denote the space of all $\mathbb{F}$-adapted processes
$(Y,Z)$ such that
\[
\mathbb{E}\left[ \sup\limits_{0\leq t\leq T^{\prime}}|Y_{t}|^{p}+\left(
\int_{0}^{T^{\prime}}|Z_{t}|^{2}dt\right) ^{\frac{p}{2}}\right] <\infty.
\]
For each given $(y,z)\in\mathcal{L}$, consider the following decoupled FBSDE:
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX_{t}= & b(t,X_{t},y_{t},z_{t})dt+\sigma(t,X_{t},y_{t},z_{t})dB_{t},\\
dY_{t}= & -g(t,X_{t},Y_{t},Z_{t})dt+Z_{t}dB_{t},\\
X_{0}= & x,\ Y_{T^{\prime}}=\phi(X_{T^{\prime}}).
\end{array}
\right. \label{fbsde-y0}
\end{equation}
Under the Lipschitz conditions on $b$, $\sigma$, $g$ and $\phi$, it is easy to
deduce that the solution $(Y,Z)$ to \eqref{fbsde-y0} belongs to $\mathcal{L}$.
Denote the operator $(y,z)\rightarrow(Y,Z)$ by $\Gamma$. For two elements
$(y^{i},z^{i})\in\mathcal{L}$, $i=1,2$, let $(X^{i},Y^{i},Z^{i})$ be the
corresponding solution to \eqref{fbsde-y0}. Set
\[
\hat{y}_{t}=y_{t}^{1}-y_{t}^{2},\text{ }\hat{z}_{t}=z_{t}^{1}-z_{t}^{2},\text{ }\hat{X}_{t}=X_{t}^{1}-X_{t}^{2},\text{ }\hat{Y}_{t}=Y_{t}^{1}-Y_{t}^{2},\text{ }\hat{Z}_{t}=Z_{t}^{1}-Z_{t}^{2}.
\]
Due to Lemma 5.1 in \cite{Hu-JX}, we obtain
\begin{equation}
\begin{array}
[c]{l}
\mathbb{E}\left[ \sup\limits_{0\leq t\leq T^{\prime}}\left( |\hat{X}_{t}|^{p}+|\hat{Y}_{t}|^{p}\right) +\left( \int_{0}^{T^{\prime}}|\hat{Z}_{t}|^{2}dt\right) ^{\frac{p}{2}}\right] \\
\leq C_{p}\mathbb{E}\left[ \left( \int_{0}^{T^{\prime}}\left\vert b(t,X_{t}^{2},y_{t}^{1},z_{t}^{1})-b(t,X_{t}^{2},y_{t}^{2},z_{t}^{2})\right\vert dt\right) ^{p}\right. \\
\ \ \left. +\left( \int_{0}^{T^{\prime}}\left\vert \sigma(t,X_{t}^{2},y_{t}^{1},z_{t}^{1})-\sigma(t,X_{t}^{2},y_{t}^{2},z_{t}^{2})\right\vert ^{2}dt\right) ^{\frac{p}{2}}\right] \\
\leq C_{p}2^{p-1}\left[ L_{2}^{p}\left( \left( T^{\prime}\right) ^{\frac{p}{2}}+\left( T^{\prime}\right) ^{p}\right) +L_{3}^{p}\right] \mathbb{E}\left[ \sup\limits_{0\leq t\leq T^{\prime}}|\hat{y}_{t}|^{p}+\left( \int_{0}^{T^{\prime}}|\hat{z}_{t}|^{2}dt\right) ^{\frac{p}{2}}\right] \\
\leq C_{p}2^{p-1}\left[ L_{2}^{p}\left( T^{\frac{p}{2}}+T^{p}\right) +L_{3}^{p}\right] \mathbb{E}\left[ \sup\limits_{0\leq t\leq T^{\prime}}|\hat{y}_{t}|^{p}+\left( \int_{0}^{T^{\prime}}|\hat{z}_{t}|^{2}dt\right) ^{\frac{p}{2}}\right] .
\end{array}
\label{fbsde-delty}
\end{equation}
Since $C_{p}2^{p-1}\left[ L_{2}^{p}\left( T^{\frac{p}{2}}+T^{p}\right)
+L_{3}^{p}\right] <1$, the operator $\Gamma$ is a contraction mapping and has
a unique fixed point $(Y,Z)$.\ Let $X$ be the solution to (\ref{fbsde-y0})
with respect to the fixed point $(Y,Z)$. Thus, $(X,Y,Z)$ is the unique
solution to (\ref{new-new-new-new-1}). Following the same steps as in Theorem 2.2
in \cite{Hu-JX}, we can obtain the estimate.
\end{proof}
\subsection{The proof of Theorem \ref{th-vis-uni}}
\label{subse-pf}
In order to prove this theorem, we need the following lemmas.
\begin{lemma}
\label{le-uni2} Suppose that $\sigma$ is independent of $y$ and $z$ and
Assumption \ref{assum-1} (i) holds. Let $W_{1}$ be a viscosity subsolution and
$W_{2}$ be a viscosity supersolution to (\ref{eq-hjb-yz}). Furthermore, assume
that $W_{1}$ and $W_{2}$ are Lipschitz continuous with respect to $x$. Then
the function $w:=W_{1}-W_{2}$ is a viscosity subsolution to the following
equation
\begin{equation}
\left\{
\begin{array}
[c]{l}
w_{t}\left( t,x\right) +\sup\limits_{u\in U}\left\{ \frac{1}{2}\mathrm{tr}[\sigma\sigma^{\intercal}(t,x,u)D^{2}w(t,x)]+C\left( 1+\left\vert x\right\vert \right) \left\vert Dw\left( t,x\right) \right\vert \right. \\
\text{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\left. +C\left\vert w\left( t,x\right) \right\vert \right\} =0,\\
w\left( T,x\right) =0,\text{ }\left( t,x\right) \in\lbrack0,T)\times\mathbb{R}^{n},
\end{array}
\right. \label{eq-uni1}
\end{equation}
where $C$ is a constant depending only on the Lipschitz constants of $b$,
$\sigma$, $g$, $W_{1}$ and $W_{2}$.
\end{lemma}
\begin{proof}
Let $\varphi\in C_{b}^{2,3}\left( \left[ 0,T\right] \times\mathbb{R}^{n}\right) $ and let $\left( t_{0},x_{0}\right) \in(0,T)\times\mathbb{R}^{n}$ be a global maximum point of $W_{1}-W_{2}-\varphi$ with $w(t_{0},x_{0})=\varphi(t_{0},x_{0})$. Define the function
\[
\psi_{\epsilon,\alpha}\left( t,x,s,y\right) =W_{1}\left( t,x\right)
-W_{2}\left( s,y\right) -\frac{\left\vert x-y\right\vert ^{2}}{\epsilon^{2}
}-\frac{\left\vert t-s\right\vert ^{2}}{\alpha^{2}}-\varphi\left( t,x\right)
,
\]
where $\epsilon$, $\alpha$ are positive parameters that will be sent to zero.
By Lemma 3.7 in \cite{Baeles-BP}, there exists a sequence $\left(
t_{\epsilon,\alpha},s_{\epsilon,\alpha},x_{\epsilon,\alpha},y_{\epsilon
,\alpha}\right) $ such that
(i) $\left( t_{\epsilon,\alpha},x_{\epsilon,\alpha},s_{\epsilon,\alpha
},y_{\epsilon,\alpha}\right) $ is a global maximum point of $\psi
_{\epsilon,\alpha}$ in $\left[ 0,T\right] \times\bar{B}_{R}\times\left[
0,T\right] \times\bar{B}_{R}$, where $\bar{B}_{R}$ is a ball with a large
radius $R$;
(ii) $\left( t_{\epsilon,\alpha},x_{\epsilon,\alpha}\right) $, $\left(
s_{\epsilon,\alpha},y_{\epsilon,\alpha}\right) \rightarrow\left( t_{0}
,x_{0}\right) $ as $\left( \epsilon,\alpha\right) \rightarrow0$;
(iii) $\frac{\left\vert x_{\epsilon,\alpha}-y_{\epsilon,\alpha}\right\vert
^{2}}{\epsilon^{2}}$ and $\frac{\left\vert t_{\epsilon,\alpha}-s_{\epsilon
,\alpha}\right\vert ^{2}}{\alpha^{2}}$ are bounded and tend to zero when
$\left( \epsilon,\alpha\right) \rightarrow0$;
(iv) there exist $X$, $Y\in\mathbb{S}^{n}$ such that
\[
\begin{array}
[c]{c}
( \frac{2\left( t_{\epsilon,\alpha}-s_{\epsilon,\alpha}\right) }{\alpha^{2}
}+\varphi_{t}\left( t_{\epsilon,\alpha},x_{\epsilon,\alpha}\right)
,\frac{2\left( x_{\epsilon,\alpha}-y_{\epsilon,\alpha}\right) }{\epsilon
^{2}}+D\varphi\left( t_{\epsilon,\alpha},x_{\epsilon,\alpha}\right) ,X)
\in\bar{D}^{2,+}W_{1}( t_{\epsilon,\alpha},x_{\epsilon,\alpha}) ,\\
\left( \frac{2\left( t_{\epsilon,\alpha}-s_{\epsilon,\alpha}\right)
}{\alpha^{2}},\frac{2\left( x_{\epsilon,\alpha}-y_{\epsilon,\alpha}\right)
}{\epsilon^{2}},Y\right) \in\bar{D}^{2,-}W_{2}\left( s_{\epsilon,\alpha
},y_{\epsilon,\alpha}\right) ,\\
\left(
\begin{array}
[c]{cc}
X & 0\\
0 & -Y
\end{array}
\right) \leq\frac{4}{\epsilon^{2}}\left(
\begin{array}
[c]{cc}
I & -I\\
-I & I
\end{array}
\right) +\left(
\begin{array}
[c]{cc}
D^{2}\varphi\left( t_{\epsilon,\alpha},x_{\epsilon,\alpha}\right) & 0\\
0 & 0
\end{array}
\right) ,
\end{array}
\]
where the definitions of $\bar{D}^{2,+}$ and $\bar{D}^{2,-}$ can be found in
\cite{Crandall-lecture}. Since $W_{1}$ and $W_{2}$ are a viscosity subsolution
and a viscosity supersolution to (\ref{eq-hjb-yz}) respectively, by (iv) we have
\begin{equation}
\begin{array}
[c]{rl}
& \inf\limits_{u\in U}H(t_{\epsilon,\alpha},x_{\epsilon,\alpha},W_{1}
(t_{\epsilon,\alpha},x_{\epsilon,\alpha}),\frac{2\left( x_{\epsilon,\alpha
}-y_{\epsilon,\alpha}\right) }{\epsilon^{2}}+D\varphi\left( t_{\epsilon
,\alpha},x_{\epsilon,\alpha}\right) ,X,u)\\
& +\frac{2\left( t_{\epsilon,\alpha}-s_{\epsilon,\alpha}\right) }{\alpha
^{2}}+\varphi_{t}\left( t_{\epsilon,\alpha},x_{\epsilon,\alpha}\right)
\geq0,
\end{array}
\label{w1-ineq}
\end{equation}
\begin{equation}
\frac{2\left( t_{\epsilon,\alpha}-s_{\epsilon,\alpha}\right) }{\alpha^{2}
}+\inf\limits_{u\in U}H(s_{\epsilon,\alpha},y_{\epsilon,\alpha},W_{2}
(s_{\epsilon,\alpha},y_{\epsilon,\alpha}),\frac{2\left( x_{\epsilon,\alpha
}-y_{\epsilon,\alpha}\right) }{\epsilon^{2}},Y,u)\leq0. \label{w2-ineq}
\end{equation}
It follows from (\ref{w1-ineq}) and (\ref{w2-ineq}) that
\begin{equation}
\begin{array}
[c]{l}
\varphi_{t}\left( t_{\epsilon,\alpha},x_{\epsilon,\alpha}\right)
+\sup\limits_{u\in U}\{H(t_{\epsilon,\alpha},x_{\epsilon,\alpha}
,W_{1}(t_{\epsilon,\alpha},x_{\epsilon,\alpha}),\frac{2\left( x_{\epsilon
,\alpha}-y_{\epsilon,\alpha}\right) }{\epsilon^{2}}+D\varphi\left(
t_{\epsilon,\alpha},x_{\epsilon,\alpha}\right) ,X,u)\\
-H(s_{\epsilon,\alpha},y_{\epsilon,\alpha},W_{2}(s_{\epsilon,\alpha
},y_{\epsilon,\alpha}),\frac{2\left( x_{\epsilon,\alpha}-y_{\epsilon,\alpha
}\right) }{\epsilon^{2}},Y,u)\}\geq0.
\end{array}
\label{new-1234569}
\end{equation}
By (iv) and the Lipschitz continuity of $\sigma$, $b$ and $W_{i}$, we have
\begin{equation}
\begin{array}
[c]{l}
\mathrm{tr}[\sigma\sigma^{\intercal}(t_{\epsilon,\alpha},x_{\epsilon,\alpha
},u)X]-\mathrm{tr}[\sigma\sigma^{\intercal}(s_{\epsilon,\alpha},y_{\epsilon
,\alpha},u)Y]\\
\leq\frac{4}{\epsilon^{2}}\mathrm{tr}[(\sigma(t_{\epsilon,\alpha}
,x_{\epsilon,\alpha},u)-\sigma(s_{\epsilon,\alpha},y_{\epsilon,\alpha
},u))(\sigma(t_{\epsilon,\alpha},x_{\epsilon,\alpha},u)-\sigma(s_{\epsilon
,\alpha},y_{\epsilon,\alpha},u))^{\intercal}]\\
\text{ \ }+\mathrm{tr}[\sigma\sigma^{\intercal}(t_{\epsilon,\alpha
},x_{\epsilon,\alpha},u)D^{2}\varphi\left( t_{\epsilon,\alpha},x_{\epsilon
,\alpha}\right) ]\\
\leq\frac{4}{\epsilon^{2}}\rho_{\epsilon}(|t_{\epsilon,\alpha}-s_{\epsilon
,\alpha}|)+C\frac{\left\vert x_{\epsilon,\alpha}-y_{\epsilon,\alpha
}\right\vert ^{2}}{\epsilon^{2}}+\mathrm{tr}[\sigma\sigma^{\intercal
}(t_{\epsilon,\alpha},x_{\epsilon,\alpha},u)D^{2}\varphi\left( t_{\epsilon
,\alpha},x_{\epsilon,\alpha}\right) ],
\end{array}
\label{new-1234567}
\end{equation}
\[
\left\vert \frac{2\left( x_{\epsilon,\alpha}-y_{\epsilon,\alpha}\right)
}{\epsilon^{2}}+D\varphi\left( t_{\epsilon,\alpha},x_{\epsilon,\alpha
}\right) \right\vert \leq L_{W_{1}}\text{, }\left\vert \frac{2\left(
x_{\epsilon,\alpha}-y_{\epsilon,\alpha}\right) }{\epsilon^{2}}\right\vert
\leq L_{W_{2}}
\]
and
\[
\begin{array}
[c]{l}
\left( \frac{2\left( x_{\epsilon,\alpha}-y_{\epsilon,\alpha}\right)
}{\epsilon^{2}}+D\varphi\left( t_{\epsilon,\alpha},x_{\epsilon,\alpha
}\right) \right) ^{\intercal}b\left( t_{\epsilon,\alpha},x_{\epsilon
,\alpha},W_{1}(t_{\epsilon,\alpha},x_{\epsilon,\alpha}),\right. \\
\text{\ \ \ \ \ \ \ \ \ \ \ \ \ }\left( \frac{2\left( x_{\epsilon,\alpha
}-y_{\epsilon,\alpha}\right) }{\epsilon^{2}}+D\varphi\left( t_{\epsilon
,\alpha},x_{\epsilon,\alpha}\right) \right) ^{\intercal}\sigma
(t_{\epsilon,\alpha},x_{\epsilon,\alpha},u),u)\\
\text{ \ }-\left( \frac{2\left( x_{\epsilon,\alpha}-y_{\epsilon,\alpha
}\right) }{\epsilon^{2}}\right) ^{\intercal}b(s_{\epsilon,\alpha
},y_{\epsilon,\alpha},W_{2}(s_{\epsilon,\alpha},y_{\epsilon,\alpha}),\left(
\frac{2\left( x_{\epsilon,\alpha}-y_{\epsilon,\alpha}\right) }{\epsilon^{2}
}\right) ^{\intercal}\sigma(s_{\epsilon,\alpha},y_{\epsilon,\alpha},u),u)\\
\leq C\left( \left\vert \frac{2\left( x_{\epsilon,\alpha}-y_{\epsilon
,\alpha}\right) }{\epsilon^{2}}\right\vert \rho_{\epsilon}(|t_{\epsilon
,\alpha}-s_{\epsilon,\alpha}|)+\left\vert w\left( t_{\epsilon,\alpha
},x_{\epsilon,\alpha}\right) \right\vert \right. \\
\ \ \ \ \ \ \left. +\left( 1+\left\vert x_{\epsilon,\alpha}\right\vert
\right) \left\vert D\varphi\left( t_{\epsilon,\alpha},x_{\epsilon,\alpha
}\right) \right\vert +\left\vert x_{\epsilon,\alpha}-y_{\epsilon,\alpha
}\right\vert +\frac{\left\vert x_{\epsilon,\alpha}-y_{\epsilon,\alpha
}\right\vert ^{2}}{\epsilon^{2}}\right) ,
\end{array}
\]
where $L_{W_{i}}$ is the Lipschitz constant for $W_{i}$ and $\rho_{\epsilon
}(s)\rightarrow0$ as $s\rightarrow0^{+}$ for fixed $\epsilon$. The same
analysis applies to $g$. In (\ref{new-1234569}), we first let $\alpha
\rightarrow0$ and then $\epsilon\rightarrow0$, which gives
\[
\begin{array}
[c]{rl}
& \sup\limits_{u\in U}\left\{ \frac{1}{2}\mathrm{tr}[\sigma\sigma^{\intercal
}(t_{0},x_{0},u)D^{2}\varphi(t_{0},x_{0})]+C\left( 1+\left\vert
x_{0}\right\vert \right) \left\vert D\varphi\left( t_{0},x_{0}\right)
\right\vert +C\left\vert w\left( t_{0},x_{0}\right) \right\vert \right\} \\
& +\varphi_{t}\left( t_{0},x_{0}\right) \geq0.
\end{array}
\]
Therefore, $w$ is a subsolution to (\ref{eq-uni1}).
\end{proof}
\begin{remark}
\label{re-appen}When $\sigma$ depends on $y$, the right-hand side of
(\ref{new-1234567}) will include the term
\[
\frac{C}{\epsilon^{2}}|W_{1}(t_{\epsilon,\alpha},x_{\epsilon,\alpha}
)-W_{2}(t_{\epsilon,\alpha},x_{\epsilon,\alpha})|^{2},
\]
which tends to $\infty$ as $\epsilon\rightarrow0$. Thus the above method does
not work.
\end{remark}
Set $\psi\left( x\right) =\left[ \log\left( \left( \left\vert
x\right\vert ^{2}+1\right) ^{\frac{1}{2}}\right) \right] ^{2}$,
$x\in\mathbb{R}^{n}$.
\begin{lemma}
\label{le-uni3} Suppose Assumption \ref{assum-1} (i) holds. Then, for any
$A>0$, there exists a constant $C_{1}>0$ such that the function
\[
\chi\left( t,x\right) =\exp\left\{ \left( C_{1}\left( T-t\right)
+A\right) \psi(x)\right\}
\]
satisfies
\[
\chi_{t}\left( t,x\right) +\sup\limits_{u\in U}\left\{ \frac{1}
{2}\mathrm{tr}[\sigma\sigma^{\intercal}(t,x,u)D^{2}\chi(t,x)]+C\left(
1+\left\vert x\right\vert \right) \left\vert D\chi\left( t,x\right)
\right\vert +C\chi\left( t,x\right) \right\} <0
\]
in $\left[ t_{1},T\right] \times\mathbb{R}^{n}$, where $t_{1}=T-\frac
{A}{C_{1}}$.
\end{lemma}
This lemma can be verified directly, so we omit the proof.
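As a hint for the direct verification, we record the chain-rule identities that the computation starts from (our own calculation from the definitions of $\chi$ and $\psi$); the claimed strict inequality then reduces to bounding the terms involving $D\psi$ and $D^{2}\psi$ and choosing $C_{1}$ sufficiently large:
\[
\begin{array}
[c]{rl}
\chi_{t}\left( t,x\right) & =-C_{1}\psi(x)\chi\left( t,x\right) ,\\
D\chi\left( t,x\right) & =\left( C_{1}\left( T-t\right) +A\right)
D\psi(x)\chi\left( t,x\right) ,\\
D^{2}\chi\left( t,x\right) & =\left[ \left( C_{1}\left( T-t\right)
+A\right) D^{2}\psi(x)+\left( C_{1}\left( T-t\right) +A\right)
^{2}D\psi(x)D\psi(x)^{\intercal}\right] \chi\left( t,x\right) ,\\
D\psi(x) & =\frac{2x}{\left\vert x\right\vert ^{2}+1}\log\left( \left(
\left\vert x\right\vert ^{2}+1\right) ^{\frac{1}{2}}\right) .
\end{array}
\]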
\textbf{Proof of Theorem \ref{th-vis-uni}. }We only need to prove that for any
$\alpha>0$, $w$ satisfies
\[
\left\vert w\left( t,x\right) \right\vert \leq\alpha\chi\left( t,x\right)
\text{, in }\left[ 0,T\right] \times\mathbb{R}^{n}\text{.}
\]
It is clear that for some $A>0$,
\[
\lim_{\left\vert x\right\vert \rightarrow\infty}\left\vert w\left(
t,x\right) \right\vert \exp\left( -A\left[ \log\left( \left( \left\vert
x\right\vert ^{2}+1\right) ^{\frac{1}{2}}\right) \right] ^{2}\right) =0
\]
uniformly for $t\in\left[ 0,T\right] $. This implies that $\left\vert
w\right\vert -\alpha\chi$ is bounded from above in $\left[ t_{1},T\right]
\times\mathbb{R}^{n}$ and that
\[
M:=\max_{\left[ t_{1},T\right] \times\mathbb{R}^{n}}\left( \left\vert
w\left( t,x\right) \right\vert -\alpha\chi\left( t,x\right) \right)
\exp\left( -C\left( T-t\right) \right)
\]
is achieved at some point $\left( t_{0},x_{0}\right) $. Without loss of
generality, we assume that $|w( t_{0},x_{0})| >0$ and $w\left(
t_{0},x_{0}\right) >0$.
Note that
\[
w\left( t,x\right) -\alpha\chi\left( t,x\right) \leq\left( w\left(
t_{0},x_{0}\right) -\alpha\chi\left( t_{0},x_{0}\right) \right)
\exp\left( -C\left( t-t_{0}\right) \right) .
\]
Then, $\left( t_{0},x_{0}\right) $ can be seen as a global maximum point for
$w(t,x)-h\left( t,x\right) $ where
\[
h\left( t,x\right) =\alpha\chi\left( t,x\right) +\left( w\left(
t_{0},x_{0}\right) -\alpha\chi\left( t_{0},x_{0}\right) \right)
\exp\left( -C\left( t-t_{0}\right) \right) .
\]
Since $w$ is a viscosity subsolution to (\ref{eq-hjb-yz}), if $t_{0}\in\lbrack
t_{1},T)$, then we have
\[
\begin{array}
[c]{rl}
& \sup\limits_{u\in U}\left\{ \frac{1}{2}\mathrm{tr}[\sigma\sigma^{\intercal
}(t_{0},x_{0},u)D^{2}h(t_{0},x_{0})]+C\left( 1+\left\vert x_{0}\right\vert
\right) \left\vert Dh\left( t_{0},x_{0}\right) \right\vert +Cw\left(
t_{0},x_{0}\right) \right\} \\
& +h_{t}\left( t_{0},x_{0}\right) \geq0.
\end{array}
\]
That is,
\[
\begin{array}
[c]{rl}
& \sup\limits_{u\in U}\left\{ \frac{1}{2}\mathrm{tr}[\sigma\sigma^{\intercal
}(t_{0},x_{0},u)D^{2}\chi(t_{0},x_{0})]+C\left( 1+\left\vert x_{0}\right\vert
\right) \left\vert D\chi\left( t_{0},x_{0}\right) \right\vert +C\chi\left(
t_{0},x_{0}\right) \right\} \\
& +\chi_{t}\left( t_{0},x_{0}\right) \geq0.
\end{array}
\]
This contradicts Lemma \ref{le-uni3}. Therefore $t_{0}=T$. Since
$\left\vert w\left( T,x\right) \right\vert =0$, we obtain
\[
\left\vert w\left( t,x\right) \right\vert \leq\alpha\chi\left( t,x\right)
\text{, in }\left[ t_{1},T\right] \times\mathbb{R}^{n}\text{.}
\]
Thus, letting $\alpha\rightarrow0$, we obtain $\left\vert w\right\vert =0$ in
$\left[ t_{1},T\right] \times\mathbb{R}^{n}$. Applying the same argument
successively on the interval $\left[ t_{2},t_{1}\right] $, where
$t_{2}=\left( t_{1}-A/C_{1}\right) ^{+}$, and then, if $t_{2}>0$, on
$\left[ t_{3},t_{2}\right] $, where $t_{3}=\left( t_{2}-A/C_{1}\right)
^{+}$, and so on, we finally obtain $\left\vert w\right\vert =0$ in $\left[
0,T\right] \times\mathbb{R}^{n}$. This completes the proof. $\blacksquare$
\section{Introduction}
\label{sec:1}
\IEEEPARstart{W}{ith} the remarkable success of convolutional neural networks (CNNs)~\cite{krizhevsky2012imagenet, Simonyan15, szegedy2015going, He_2016_CVPR, Huang_2017_CVPR}, deep embedding methods, which aim to learn an end-to-end compact feature embedding of raw images, have made significant progress in advancing many related computer vision tasks, including face verification~\cite{schroff2015facenet}, fine-grained image retrieval~\cite{wang2014learning, wei2017selective}, product search~\cite{bell2015learning} and person re-identification (re-ID)~\cite{li2014deepreid, yi2014deep}. Besides utilizing ``very deep'' network structures and employing different kinds of loss functions (\textit{e.g.},~Softmax~\cite{xiao2016learning, zheng2016person}, triplet~\cite{weinberger2009distance, hermans2017defense} and Online Instance Matching~\cite{Xiao_2017_CVPR}), a variety of solutions have been exploited intensively to enable more effective and efficient feature learning. Among these efforts is the attention mechanism, which focuses on the most discriminative parts of images in order to solve challenging recognition problems like person re-ID by distinguishing subtle fine-grained visual structures from other irrelevant parts~\cite{xu2015show, liu2017end, Wang_2017_CVPR, Zhao_2017_ICCV, rahimpour2017person}.
Existing attention mechanisms in the context of deep learning often use soft gating functions to select discriminative image parts. For example, recent works~\cite{Wang_2017_CVPR, Zhao_2017_ICCV} develop attention models incorporated into feed-forward CNNs that achieve nearly state-of-the-art performance.
Their attention masks serve as control gates that perform an element-wise product with convolutional feature maps to localize discriminative visual structures. These attention masks are obtained from sigmoid functions and thus take values in $[0,1]$, as shown in Fig.~\ref{fig:1}(a). Because of their continuous nature, these soft attention masks carry large uncertainty in localizing the subtle discriminative parts needed to identify different people and fine-grained object categories whenever their values are far from the two assertive statuses of being attended ($1$) or unattended ($0$).
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{1.pdf}
\caption{Two different types of attention mask generator. (a) Soft attention mask employed in~\cite{Wang_2017_CVPR, Zhao_2017_ICCV}. (b) Sharp attention mask introduced by us. }
\label{fig:1}
\end{figure}
For example, as shown in Fig.~\ref{fig:2}(a), the soft attention masks from sigmoid gates look ambiguous in selecting the most discriminative part (the knapsack in this case) for identifying the person in the given image. In many applications, we need sharper attention selectors that can more aggressively and assertively distinguish relevant visual structures from irrelevant ones. This is particularly important when training examples are scarce, as otherwise the model is prone to overfitting to irrelevant visual structures.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{2.pdf}
\caption{(a) The feature changing process after different masks. Compared with gating-based soft attention masks, the sampling-based sharp attention masks are more assertive in localizing subtle features relevant to re-identifying people (\textit{e.g.},~a distinctive knapsack). (b) Schematic diagram of mask values. The sharp attention mask values are prone to be either $1$ (attended) or $0$ (unattended), with no attention ambiguity.}
\label{fig:2}
\end{figure*}
The above challenge with gating-based soft attention mechanism inspires us to propose an alternative attention model, which can generate sharper attentions on these subtle visual structures that can identify different people and/or discriminate between fine-grained image categories. Unlike the gating-based model with soft uncertain attentions, we seek to generate sharper attention masks that are more assertive on selecting attended/unattended visual structures by directly sampling from the underlying feature maps.
As illustrated in Fig.~\ref{fig:2}, the sampled mask is much sharper than its soft attention counterpart: it either attends ($1$) or unattends ($0$) to a particular location, with no attention ambiguity. A sharper attention mask is particularly useful for the person re-ID problem, which aims to retrieve and match the same person across non-overlapping surveillance camera views deployed at different locations. Fig.~\ref{fig:2} illustrates that such a sharper mask is more sensitive in localizing subtle details (\textit{e.g.},~a distinctive knapsack) that can uniquely determine a particular person.
Technically, these sharper attention masks are generated through differentiable samplers drawn from the Gumbel-Softmax distribution~\cite{jang2016categorical, maddison2016concrete} (see Fig.~\ref{fig:1}(b)). This distribution separates the sampling randomness from the model parameters that decide where to select discriminative visual features, which allows us to backpropagate error signals through these attention-mask samplers to update the trainable model parameters. The discrete nature of these attention samplers ensures that the generated attention masks are sharper than soft attention masks with continuous values. Fig.~\ref{fig:1} summarizes the difference between the gating-based soft attention model and the proposed sharper attention model.
A cross-feature interaction learning scheme is also explored to enhance the complementary benefit and joint learning compatibility of the original output features of the CNN backbone and the introduced sharp attention features, which further improves the re-ID performance. In addition, to achieve satisfactory results in those challenging re-ID scenarios where the visual structures uniquely identifying a particular person can be localized only in a certain context (such as CUHK03 detected~\cite{li2014deepreid} and Market-1501~\cite{zheng2015scalable}),
it is essential to equip the sharp attention mask generator with a front-end unit that captures high-level context-aware features in a larger receptive field to provide sampling guidance.
Full details of the above cross-feature interaction learning mechanism and context-aware sampling-guiding unit are to be presented in Sec.~\ref{sec:3}.
In summary, the contributions of this paper are threefold.
\begin{itemize}
\item A novel sampling-based attention mechanism is proposed by training discrete attention masks from the CNN architectures in an end-to-end fashion.
\item The generated attention is sharper than gating-based soft attention and can more assertively localize subtle visual structures that uniquely determine a particular person, which makes it well suited to the person re-ID problem.
\item The proposed sharp attention mask generator, combined with the well-designed cross-feature interaction learning scheme and the compact yet effective context-aware unit, achieves consistent and significant performance gains over the baseline and other related methods on three challenging person re-ID datasets.
\end{itemize}
\section{Related Work}
\label{sec:2}
Deep learning based approaches have greatly boosted the person re-ID task in recent years, as they incorporate feature extraction and distance metric learning into a unified framework, in which adaptive feature representations can be learned under the supervision of a certain similarity metric.
Specifically, these methods utilize deep CNN architectures~\cite{krizhevsky2012imagenet, Simonyan15, szegedy2015going, He_2016_CVPR} to extract feature representations from raw images and employ different kinds of loss functions to optimize the embedding space,
such that data points of positive pairs (\textit{i.e.},~images from the same identity) are closer to each other than those of negative pairs (\textit{i.e.},~images from different identities).
Softmax loss, which regards the images of one identity as a category, is widely used and shows excellent superiority~\cite{xiao2016learning, zheng2016mars, zheng2016person}, as the classification task can take full advantage of re-ID annotations and learn strong features with large inter-class variance. Xiao \textit{et al.}~\cite{xiao2016learning} carefully design a baseline network in which a Softmax loss is employed to optimize the classification task, achieving nearly state-of-the-art performance on some large datasets, \textit{e.g.},~CUHK03~\cite{li2014deepreid}.
On some other large datasets, the classification model also yields excellent performance without meticulous training sample selection~\cite{zheng2016mars, zheng2016person}.
Triplet loss~\cite{weinberger2009distance} and its variants~\cite{ding2015deep, oh2016deep, hermans2017defense, Chen_2017_CVPR} are another commonly employed family of loss functions. A recent work~\cite{hermans2017defense} shows that using a variant of the triplet loss to perform end-to-end metric learning outperforms any other published method by a large margin. Chen \textit{et al.}~\cite{Chen_2017_CVPR} design a generalized quadruplet loss, which leads to model outputs with a larger inter-class variation and a smaller intra-class variation compared to the triplet loss. Some other loss functions~\cite{huang2016local, zhou2017large, yao2017deep, Xiao_2017_CVPR, Zheng_2017_ICCV, Shen2017Deep} are also proposed for effective training. The Online Instance Matching (OIM) loss is introduced by~\cite{Xiao_2017_CVPR}; it is scalable to datasets with numerous identities and converges much faster and better than the conventional Softmax loss.
Shen \textit{et al.}~\cite{Shen2017Deep} apply similarity perception loss to multi-level feature maps (\textit{i.e.},~low-level and high-level). Therefore, the network can efficiently learn discriminative feature representations at different levels, which significantly improves the re-ID performance.
A strong neural activation extraction scheme is proposed in~\cite{shen2017learning} to jointly learn global and local features.
In this paper, we follow the triplet mining strategy introduced by Hermans \textit{et al.}~\cite{hermans2017defense} and adopt ResNet-50~\cite{He_2016_CVPR} network structure trained with triplet loss to produce a strong CNN baseline, which outperforms most of the existing deep learning frameworks.
Another clear trend for person re-ID is to focus on the feature extraction part and exploit various techniques, which can be integrated into deep neural networks in an end-to-end training pattern, for the purpose of more effective and efficient feature representation. Among these efforts, the attention mechanism is one of the most recent architectural innovations. Liu \textit{et al.}~\cite{liu2017end} first apply an attention model to the person re-ID problem. A recurrent soft attention based model is employed to generate different attention location information by comparing image pairs of persons through multiple glimpses and then integrating them together. However, an RNN architecture and pairwise inputs are necessary in~\cite{liu2017end}, which is computationally expensive and impractical for large-scale real-world applications.
Later,~\cite{Zhao_2017_ICCV, rahimpour2017person, Wang_2017_CVPR} simplify the attention scheme so that it integrates into CNN structures. Zhao \textit{et al.}~\cite{Zhao_2017_ICCV} exploit the Spatial Transformer Network~\cite{jaderberg2015spatial} as a hard attention model for searching discriminative parts given a pre-defined spatial constraint. Thus, a simple human part-aligned representation is proposed to handle the body-part misalignment problem. Rahimpour \textit{et al.}~\cite{rahimpour2017person} introduce a gradient-based visual attention model, which learns to focus selectively on the parts of the input image to which the network's output is most sensitive.
Wang \textit{et al.}~\cite{Wang_2017_CVPR} present Residual Attention Network for image classification, built by stacking attention modules which generate attention-aware features, and bottom-up top-down feedforward structures which unfold the feedforward and feedback attention process into a single process.
Importantly, a core component of the above methods is the use of a sigmoid function to restrict the mask values to the range $[0, 1]$. In other words, they all generate soft attention masks. On the contrary, we address attention mask generation from another perspective, that is, generating sharper attention masks that are more assertive in selecting attended/unattended visual structures by directly sampling from the convolutional feature maps. The sharper attention mask is more sensitive in localizing subtle details that can uniquely determine a particular person, which makes it particularly suitable for the person re-ID problem.
\section{The Proposed Approach}
\label{sec:3}
The proposed sampling-based sharp attention mechanism can be directly embedded into the state-of-the-art CNN frameworks. Fig.~\ref{fig:3} illustrates the overall architecture of a Sharp Attention Network along with its backbone CNN. In this paper, we adopt ResNet-50~\cite{He_2016_CVPR} as the backbone network\footnote{This choice is independent of our model design and others can be readily considered such as AlexNet~\cite{krizhevsky2012imagenet}, Inception~\cite{szegedy2015going} and VggNet~\cite{Simonyan15}.}. ResNet-50 is constructed by four sequential residual blocks and each of them can be expanded to a sharp attention block.
In the CNN hierarchical framework, this block-wise sharp attention design naturally allows hierarchical multi-level (\textit{i.e.},~from coarse level to fine level) attention learning to progressively refine the attention maps and boost the re-ID performance collaboratively.
Specifically, for each residual block, the original output features form its trunk, and after the last feature layer of the block there is a mask branch, which consists of an optional context-aware unit and a sharp attention mask generator. The former is a U-net-like structure~\cite{ronneberger2015u, noh2015learning} that captures high-level context-aware features in a larger receptive field to decide whether an output feature should be selected by an attention mask.
The attention mask generator employs Gumbel-Softmax sampling to acquire sharp attentions. This generator can be either based on the output of the backbone residual block or the output of the context-aware unit.
Once an attention mask is generated, it is multiplied element-wise with the original trunk features to give attention-aware features.
Additionally, we optimize the continuity of attention-aware features in the spatial domain by introducing a total variation (TV) regularization penalty.
Conceptually, the above attention-aware feature learning aims at depicting the most discriminative local image regions of a person bounding box image, while the original trunk feature learning is dedicated to encoding the optimal global level features from the entire person image.
In this sense, the attention-aware features can be viewed as some kind of local features and are largely complementary with the original features in functionality. Intuitively, their combination can integrate both advantages (\textit{i.e.},~preserving global information and being more sensitive to particular local positions) and relieve the modeling burden from the same (particularly small) training data. Thus, we further introduce a cross-feature interaction learning scheme for maximizing the complementary benefit and compatibility of both the global and local feature representations. To be specific,
the original features are additively combined with the obtained attention-aware features by a skip connection.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{3.pdf}
\end{center}
\caption{Illustration of Sharp Attention Network structure. We adopt ResNet-50~\cite{He_2016_CVPR} as its backbone and each residual block can be expanded to a sharp attention block. For each attention block, $T$ represents the output of the trunk residual block, $\widehat M$ represents the attention mask through Gumbel-Softmax sampling, $A$ represents the attention-aware features, $F$ represents the final output of the sharp attention block, and $X$ represents the input of the attention generator. $X$ comes from either the output of the trunk residual block of ResNet-50 (\textit{i.e.}, $T$) or the output of the optional context-aware unit. We further introduce a cross-feature interaction learning scheme for maximizing the complementary benefit and compatibility of both the original feature and attention-aware feature representations.}
\label{fig:3}
\end{figure*}
Formally, suppose $T(x)$ is the output feature map of a trunk residual block with an input image $x$, and its size is $C \times H \times W$, where $C, H, W$ represent the number of elements in the channel, height and width dimensions, respectively. The mask branch generates an attention mask $M(x)$ of the same size with $T(x)$ through Gumbel-Softmax sampling (see the next subsection). The attention-aware feature map can be computed element-wise as
\begin{equation}\label{eq:1}
A_{c,h,w}(x) = M_{c,h,w}(x) \times T_{c,h,w}(x),
\end{equation}
\noindent where the subscript $(c, h, w)$ denotes the coordinate of an arbitrary position/pixel on the feature map and
$c \in \{1, \ldots, C\}, h \in \{1, \ldots, H\}, w \in \{1, \ldots, W\}$ index the channel, the height and the width, respectively.
After that, the final output $F(x)$ of the attention block additively combines the attention-aware features and the original residual block features as
\begin{equation}\label{eq:2}
\begin{aligned}
F_{c,h,w}(x) &= A_{c,h,w}(x) + T_{c,h,w}(x) \\
&= (1 + M_{c,h,w}(x)) \times T_{c,h,w}(x).
\end{aligned}
\end{equation}
By the above equation, we formulate the cross-feature interaction learning scheme for further enhancing the complementarity between original trunk features and attention-aware features, \textit{i.e.},~preserving global characteristics while highlighting relevant local parts.
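Concretely, Eqs.~(\ref{eq:1}) and (\ref{eq:2}) amount to an element-wise gating followed by a residual (skip) combination. The following NumPy sketch, with toy shapes and a toy binary mask of our own choosing, illustrates the computation:

```python
import numpy as np

# Toy shapes and values of our own choosing; T stands for the trunk
# features T(x) and M for a sharp {0, 1} attention mask M(x).
rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
T = rng.standard_normal((C, H, W))
M = rng.integers(0, 2, size=(C, H, W)).astype(T.dtype)

A = M * T      # Eq. (1): element-wise masking gives attention-aware features
F = A + T      # Eq. (2): skip connection adds back the original trunk features

# Eq. (2) equivalently reads F = (1 + M) * T
assert np.allclose(F, (1 + M) * T)
```

The equivalent form $(1+M)\times T$ makes explicit that the skip connection keeps all trunk features and doubles the attended ones.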
For the person re-ID problem, usually the resultant network is trained by minimizing the triplet loss~\cite{hermans2017defense}, denoted as $L_{tri}$. We will show that by using Gumbel-Softmax sampling to acquire $A$, the error signals can be backpropagated directly through the sampled $A$ to update the model parameters.
In the next four subsections, we will discuss in detail about four core components in the proposed networks: sharp attention mask generator, attention-aware feature continuity optimizing, cross-feature interaction learning and context-aware unit.
\subsection{Sharp Attention Mask Generator}
\label{sec:3_1}
We use the Gumbel-Softmax sampling to generate sharp attention masks, and it can be performed after either the output of the trunk residual block of ResNet-50 or the output of the optional context-aware unit (refer to Fig.~\ref{fig:4} in Sec.~\ref{sec:3_4}).
Given an input $X$ to this sharp attention mask generator, it is first normalized onto an interval $[0,1]$ as
\begin{equation}\label{eq:3}
f(X_{c,h,w}) = \frac{X_{c,h,w} - \min_c}{\max_c - \min_c},
\end{equation}
\noindent where the subscript $(c, h, w)$ denotes the coordinate of an arbitrary position on the input;
$(h,w)$ ranges over all height and width locations and $c$ over all channels; $\max_c$ and $\min_c$ denote the maximum and the minimum value over $c$-th channel, respectively.
The normalized feature can be regarded as the probability of sampling this feature. Clearly, it tends to keep the highly activated features, while suppressing those weakly activated ones. It is indeed imposing a {\em parsimony} prior that pushes attention masks to only preserve the most relevant features while disregarding as many irrelevant ones as possible. Thus, it eventually leads to attention-aware features in which {\em strong gets stronger, and weak becomes weaker or even vanishes}.
Based on this probabilistic interpretation of the normalized input $f(X_{c,h,w})$, a direct idea is to perform an in-place Bernoulli sampling according to it. However, the resultant attention-aware features would not be differentiable w.r.t.~$f(X_{c,h,w})$, and thus back-propagation could not be performed to update the network parameters through $X$. Fortunately, the Gumbel-Max trick~\cite{gumbel2012statistics, maddison2014sampling} provides an alternative way to draw an attention mask sample $M_{c,h,w}\in\{0,1\}$ from the Bernoulli distribution $\{\pi_1\triangleq f(X_{c,h,w}),\pi_0\triangleq 1-\pi_1\}$:
\begin{equation}\label{eq:4}
M_{c,h,w} = \mathop{\arg\max}_{j\in\{0,1\}}(g_j + \log\pi_j),
\end{equation}
\noindent where $g_0, g_1$ are i.i.d.\ samples drawn from $\text{Gumbel}(0,1)$. A Softmax function can then be used as a continuous, differentiable relaxation of the $\arg\max$, which generates
\begin{equation}\label{eq:5}
\widehat M_{c,h,w} = \frac{\exp((\log\pi_1 + g_1) / \tau)}{\sum_{j \in \{0,1\}} \exp((\log\pi_j + g_j) / \tau)}.
\end{equation}
\noindent When the temperature $\tau \to 0$, the above samples from the Gumbel-Softmax distribution~\cite{jang2016categorical, maddison2016concrete} become identical to those from the Bernoulli distribution.
Thus, we employ the Softmax approximation $\widehat M_{c,h,w}$ as the attention mask applied to the original residual block features to obtain attention-aware features.
The Gumbel-Softmax distribution is smooth for $\tau > 0$, so we can compute the gradient $\partial \widehat M_{c,h,w} / \partial \pi_1$, since $\pi_1$ is separate from the random samples $g_1$ and $g_0$ drawn from $\text{Gumbel}(0,1)$.
In the implementation, we will start at a high temperature ($\tau = 1$) and gradually anneal to a small one ($\tau = 0.5$)~\cite{jang2016categorical}.
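As a concrete illustration, the whole generator of Eqs.~\ref{eq:3}--\ref{eq:5} fits in a few lines. The following is a minimal NumPy sketch; the function and variable names are ours, not taken from any released implementation:

```python
import numpy as np

def sharp_attention_mask(x, tau=1.0, rng=None):
    """Relaxed Bernoulli (Gumbel-Softmax) attention mask for one feature map.

    x: array of shape (C, H, W), the features of one residual block.
    Returns a soft mask in (0, 1) of the same shape; as tau -> 0 the
    mask approaches hard {0, 1} samples.
    """
    rng = rng or np.random.default_rng(0)
    eps = 1e-8
    # Per-channel min-max normalization (Eq. 3): probability of keeping a feature.
    mn = x.min(axis=(1, 2), keepdims=True)
    mx = x.max(axis=(1, 2), keepdims=True)
    pi1 = (x - mn) / (mx - mn + eps)
    pi0 = 1.0 - pi1
    # Two i.i.d. Gumbel(0,1) samples per position (Eq. 4).
    g1 = -np.log(-np.log(rng.uniform(eps, 1.0, x.shape)))
    g0 = -np.log(-np.log(rng.uniform(eps, 1.0, x.shape)))
    # Softmax relaxation of the argmax (Eq. 5).
    a1 = (np.log(pi1 + eps) + g1) / tau
    a0 = (np.log(pi0 + eps) + g0) / tau
    return np.exp(a1) / (np.exp(a1) + np.exp(a0))
```

As $\tau$ is annealed toward $0.5$, the returned soft masks move closer to hard $\{0,1\}$ samples while remaining differentiable.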
\subsection{Attention-aware Feature Continuity Optimizing}
\label{sec:3_2}
The proposed sampling-based sharp attention selectors can assertively localize subtle discriminative parts and eliminate irrelevant features. Nevertheless, because of the inevitable sampling randomness, a fraction of noisy masks (or features) occurs in the spatial domain; for instance, a mask with a value of $0$ (an unattended feature) may appear among a group of masks with a value of $1$ (attended features). Hence, we introduce a total variation (TV) regularization penalty to optimize the continuity of attention-aware features in the spatial domain, aiming to trim such noisy and meaningless features. TV regularization~\cite{rudin1992nonlinear} is based on the principle that signals with excessive and possibly spurious detail have high total variation, and it is remarkably effective at preserving important details while smoothing away noise.
Technically, given an arbitrary attention-aware feature $A \in \mathbb{R}^{C \times H \times W}$, where $C, H, W$ denote the sizes of the channel, height and width dimensions, we first introduce two matrices, denoted as:
$$
D_1 =
\left[
\begin{matrix}
\begin{array}{rrrrr}
1 & -1 & \cdots & 0 & 0 \\
0 & 1 & -1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & 1 & -1 \\
\end{array}
\end{matrix}
\right] \in \mathbb{R}^{(H - 1)\times H},
$$
and
$$
D_2 =
\left[
\begin{matrix}
\begin{array}{rrrr}
1 & 0 & \cdots & 0 \\
-1 & 1 & \cdots & 0 \\
0 & -1 & \ddots & \vdots \\
\vdots & \vdots & \ddots & 1 \\
0 & 0 & \cdots & -1 \\
\end{array}
\end{matrix}
\right] \in \mathbb{R}^{W \times (W - 1)}.
$$
After that, the TV regularization penalty for $A$ is defined as:
\begin{equation}\label{eq:6}
L_{TV}^{A} = \sum_{c \in \{1, \ldots, C\}} \|D_1 A_c \|_2^2 + \|A_c D_2 \|_2^2,
\end{equation}
where $A_c \in \mathbb{R}^{H \times W}$ represents the $c$-th channel feature map of $A$. The gradient can be efficiently computed as
\begin{equation}\label{eq:7}
\partial L_{TV}^{A} / \partial A_c = 2(D_1^\mathrm{T} D_1 A_c + A_c D_2 D_2^\mathrm{T}).
\end{equation}
\noindent Finally, the overall loss function is defined as:
\begin{equation}\label{eq:8}
L = L_{tri} + \mu \sum_{i} L_{TV}^{A_{i}},
\end{equation}
where $i$ indexes the sharp attention blocks (\textit{i.e.}, block1 to block4) and the hyperparameter $\mu$ is used to control the balance between the TV regularization penalty $L_{TV}$ and the aforementioned triplet loss $L_{tri}$.
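For concreteness, the penalty of Eq.~\ref{eq:6} can be evaluated directly from the difference matrices $D_1$ and $D_2$. Below is a small NumPy sketch (a helper of our own, following the definitions above):

```python
import numpy as np

def tv_penalty(A):
    """Total variation penalty of Eq. 6 for attention-aware features A of shape (C, H, W)."""
    C, H, W = A.shape
    # D1: (H-1) x H finite-difference matrix along the height dimension.
    D1 = np.eye(H - 1, H) - np.eye(H - 1, H, k=1)
    # D2: W x (W-1) finite-difference matrix along the width dimension.
    D2 = np.eye(W, W - 1) - np.eye(W, W - 1, k=-1)
    # Sum of squared forward differences over every channel.
    return sum(np.sum((D1 @ A[c]) ** 2) + np.sum((A[c] @ D2) ** 2)
               for c in range(C))
```

A spatially constant feature map yields a zero penalty, while isolated "salt-and-pepper" mask values are penalized, which is exactly the smoothing behavior the regularizer is meant to provide.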
\subsection{Cross-Feature Interaction Learning}
\label{sec:3_3}
For a typical sharp attention block, given the original residual trunk features (\textit{i.e.},~global-level features) and the attention-aware features (\textit{i.e.},~part-level features) above, we further consider a cross-feature interaction mechanism to exploit their complementary benefits and joint learning compatibility. Specifically, the cross-feature interaction learning scheme is formulated by Eq.~\ref{eq:2} and can be implemented by an identity shortcut connection and element-wise addition (channel by channel).
We also tested concatenating the attention-aware features with the original ResNet-50 residual block features, but found the resulting network hard to train to convergence, which suggests that additive combination is the better choice. In this way, the attention-aware features can also be viewed as attention residuals additively combined with the original ResNet-50 features (see Fig.~\ref{fig:3}), similar in spirit to residual learning~\cite{He_2016_CVPR}, which makes the network easier to optimize and yields consistent performance improvements.
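Since Eq.~\ref{eq:2} itself is not repeated here, the sketch below assumes the common residual-attention form, in which the attention-aware features are the mask applied element-wise to the trunk features; the helper name is hypothetical:

```python
import numpy as np

def cross_feature_interaction(trunk, mask):
    """Identity shortcut plus element-wise attention residual (channel by channel).

    Assumed form: the attention-aware features (mask * trunk) act as a
    residual added back onto the original trunk features, so unattended
    positions still retain the trunk signal.
    """
    return trunk + mask * trunk
```

With an all-zero mask the output degrades gracefully to the trunk features, which is what makes the additive scheme easier to optimize than concatenation.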
\subsection{Context-aware Unit for Sampling Guiding}
\label{sec:3_4}
In the proposed framework, the attention masks can be generated directly by sampling the original residual block without adding any new network layers. This results in a {\em lazy} context-free sharp attention mask generator, which can still yield a satisfactory performance improvement when the attended image parts do not need to be localized within a particular context.
However, if the visual structures that uniquely identify a particular person can be localized only in a certain context, the attention mask generator should be equipped with a front-end unit that models visual contexts. For instance, in practical re-id scenarios, person images are typically produced by automatic detectors so that systems can scale to large visual data. Under these circumstances, the decision to select a discriminative visual structure should rely on high-level context-aware features with a larger receptive field, to alleviate the negative impact of background clutter, occlusion and missing body parts caused by poorly detected bounding boxes.
Inspired by the ``U-net" like structure~\cite{ronneberger2015u, noh2015learning} in segmentation, detection and pose estimation, we introduce a context-aware unit. It is constructed by stacking convolutional layers and mirrored deconvolutional layers together. This structure can be viewed as a bottom-up forward and top-down feedback pipeline, which combines multi-scale visual information at various levels.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{4.pdf}
\end{center}
\caption{Illustration of the context-aware unit structure. The input layer represents the last output layer of original residual block. \textcircled{\scriptsize 1} and \textcircled{\scriptsize 2} denote directly sampling and sampling through context-aware guiding, respectively. $p, q$ are two hyper-parameters denoting the numbers of convolutional residual units and deconvolutional layers within context-aware unit. In our experiments, we use the following setting: ${p = 1, q = 1}$.}
\label{fig:4}
\end{figure}
For the convolutional layers, we reuse the architecture of the backbone residual units (with stride $2$) but train them with a separate group of parameters. For the deconvolutional layers, we simply employ a kernel filter of size $1 \times 1$ with a fractional $1/2$ stride to upsample the contextual feature maps. Eventually, the output layer of this context-aware unit has the same size as the input ResNet-50 features, but captures a larger receptive field and deeper context-aware features. In this fashion, attention masks can be generated side-by-side by sampling this contextual layer.
The structure of the context-aware unit is illustrated in Fig.~\ref{fig:4}.
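A minimal PyTorch sketch of this unit with $p = 1, q = 1$ is given below; the layer widths and the internals of the downsampling path are placeholders rather than the exact backbone residual unit:

```python
import torch
import torch.nn as nn

class ContextAwareUnit(nn.Module):
    """Sketch of the context-aware unit (p = 1, q = 1).

    A stride-2 convolutional path downsamples the input, then a 1x1
    transposed convolution with stride 2 ("fractional 1/2 stride")
    upsamples back to the input resolution.
    """
    def __init__(self, channels):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # output_padding=1 restores even input sizes exactly after upsampling.
        self.up = nn.ConvTranspose2d(channels, channels, kernel_size=1,
                                     stride=2, output_padding=1)

    def forward(self, x):
        return self.up(self.down(x))
```

The output has the same spatial size as the input, so the sharp attention mask generator of Sec.~\ref{sec:3_1} can sample it in place of the raw residual features.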
Notice that we only need a relatively compact yet effective structure (\textit{i.e.},~a residual unit for downsampling and a deconvolutional layer for upsampling), in cooperation with the proposed sampling-based attention mechanism, to achieve satisfactory performance. This is attributable to the superiority of sharp attention selectors over soft ones: they distinguish relevant visual parts more aggressively and assertively. In contrast, some soft attention models (\textit{e.g.},~\cite{Wang_2017_CVPR}) also use context-aware information to guide attention selection, but with a complicated bottom-up top-down structure. Such a high-complexity sub-network design~\cite{Wang_2017_CVPR} is inefficient to deploy and prone to overfitting when only a small set of labeled data is available for training.
\section{Experiments}
\label{sec:4}
\subsection{Datasets and Evaluation Protocols}
\label{sec:4_1}
We conduct experiments on three person re-ID datasets widely used in the literature: CUHK03~\cite{li2014deepreid}, Market-1501~\cite{zheng2015scalable} and DukeMTMC-reID~\cite{ristani2016MTMC, Zheng_2017_ICCV}. In our experiments, given a test probe image $I^p$ from one camera view and a set of test gallery images ${I_i^g}$ from other non-overlapping camera views, we first compute their deep feature representations, denoted $x^p$ and $x_i^g$, by forward-feeding the images through a trained sharp attention network; we then compute the similarities (based on the Euclidean distance) between $x^p$ and $x_i^g$. After that, the ranked gallery list is returned in descending order of similarity.
The performances are evaluated by the commonly used Cumulative Matching Characteristics (CMC)~\cite{moon2001computational} top-k accuracy, which is an estimate of the expectation of finding the correct match in the top k retrieved items. Following~\cite{zheng2015scalable}, we also report the mean Average Precision (mAP) over all three datasets. All the experiments are performed in a single-query setting (\textit{i.e.},~one query each time).
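The single-query protocol above amounts to ranking the gallery by similarity and checking the top-$k$ list for the probe identity. A schematic NumPy sketch of the CMC top-$k$ computation (simplified: it omits the usual same-camera junk filtering):

```python
import numpy as np

def cmc_topk(probe_feats, probe_ids, gallery_feats, gallery_ids, k=1):
    """Fraction of probes whose correct match appears among the top-k gallery ranks.

    Similarity is the negative Euclidean distance, so the gallery is ranked
    in descending order of similarity, as in the evaluation protocol.
    """
    hits = 0
    for x, pid in zip(probe_feats, probe_ids):
        d = np.linalg.norm(gallery_feats - x, axis=1)  # Euclidean distances
        order = np.argsort(d)                          # ascending distance
        hits += pid in gallery_ids[order[:k]]
    return hits / len(probe_ids)
```

The mAP metric additionally averages the precision over every rank at which a correct match occurs, rewarding methods that place all true matches early in the list.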
\textbf{CUHK03.}
The CUHK03 dataset consists of five pairs of camera views, comprising $14,097$ images of $1,467$ pedestrians. Each identity appears in two disjoint camera views on the CUHK campus, with an average of $4.8$ images per view. Li \textit{et al.}~\cite{li2014deepreid} provide two types of bounding boxes: labeled (human annotated) and detected (automatically produced by the DPM detector~\cite{felzenszwalb2010object}).
In this paper, we conduct experiments on both sets, using the provided training/testing splits ($1,267$ identities for training, $100$ for validation, and the last $100$ for testing).
For each test identity, two images are randomly sampled from different camera views as the probe and gallery images, respectively, and the average performance over 20 times is reported as the final result.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{5.pdf}
\end{center}
\caption{(a), (b) The relative performances in terms of mAP and CMC top-1 accuracy over the baseline ResNet-50 with different block combination strategies on CUHK03 labeled, respectively. (c) The relative performances in terms of mAP over the baseline ResNet-50 with different sampling strategies on several datasets.}
\label{fig:5}
\end{figure*}
\textbf{Market-1501.}
The Market-1501 dataset contains $32,668$ pedestrian images of $1,501$ identities captured from six cameras in different resolutions: five high-resolution cameras, and one low-resolution camera. It is a large-scale benchmark dataset for person re-ID.
$12,936$ images ($751$ identities) are used for training, and the remaining $19,732$ images ($750$ identities, along with $2,793$ distractor images) are used for testing.
There is an average of $17.2$ training images per identity in this set.
The people in the images are automatically detected by the deformable part model (DPM)~\cite{felzenszwalb2010object}, so the incorrect detections of people are common, along with partial occlusion, which makes the dataset challenging and close to real-world scenarios.
\textbf{DukeMTMC-reID.} DukeMTMC~\cite{ristani2016MTMC} is a newly-released multi-target, multi-camera pedestrian tracking dataset. It contains eight 85-minute high-resolution videos from eight different cameras. Hand-drawn pedestrian bounding boxes are available.
In this paper, we use its re-ID version benchmarked DukeMTMC-reID~\cite{Zheng_2017_ICCV}, which is a subset of the original dataset and contains $1,404$ identities appearing in more than two cameras. $702$ identities are selected into the training set and the remaining $702$ identities into the testing set. This results in $16,522$ training images, $2,228$ queries, and $17,661$ gallery images. In the testing set, one query image for each identity in each camera is picked and the remaining images are viewed as the gallery.
\subsection{Implementation Details}
\label{sec:4_2}
\textbf{Network architecture.} We adopt ResNet-50~\cite{He_2016_CVPR}
as the backbone network, and replace the last $1000$-dimensional fully-connected layer with two fully-connected layers. The output of the first FC layer is $1024$-dimensional, followed by batch normalization~\cite{ioffe2015batch} and ReLU~\cite{krizhevsky2012imagenet} layers; the output of the second FC layer is reduced to $128$ dimensions, yielding the final feature representation. The same backbone architecture is adopted in all experiments for fair comparison.
During training, each input image is cropped from a random portion of the image sampled in $[0.64, 1.0]$ with an aspect ratio randomly chosen in $[2, 3]$. The cropped area is then resized to $256 \times 128$, for data augmentation. Horizontal flip and per-pixel mean subtraction are also used. During testing, for comparison we adopt the standard 10-crop testing~\cite{krizhevsky2012imagenet}.
In experiments, which residual block of ResNet-50 should be extended to a sharp attention block is decided by cross-validation, and later we will also empirically compare different ways of extending sharp attention blocks at different levels of residual blocks.
The optional context-aware unit in the sharp attention block is composed of a downsampled residual unit (whose architecture is reused from the backbone network but with a different group of parameters) and a deconvolutional layer with a kernel filter of size $1 \times 1$ and a fractional $1/2$ stride.
\textbf{Network training.}
The model is implemented on PyTorch and runs on a workstation configured with NVIDIA M40 GPU cards.
For all experiments, within each mini-batch, we randomly sample 64 identities and then randomly sample four images for each person, thus resulting in a batch of $256$ images. We select the hardest positive and the hardest negative samples within a batch to form the triplet units~\cite{hermans2017defense}. We use the pretrained model on ImageNet~\cite{imagenet_cvpr09} to initialize the weights and train the network for up to 150 epochs. Adam optimizer~\cite{kingma2015adam} is adopted to perform weight updates.
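The batch-hard selection of~\cite{hermans2017defense} used above can be sketched as follows; this is a NumPy illustration with hypothetical names, not our training code:

```python
import numpy as np

def batch_hard_triplets(feats, labels, margin=0.5):
    """Batch-hard triplet loss: hardest positive and hardest negative per anchor.

    feats: (N, D) embeddings; labels: (N,) identity ids. For each anchor,
    the farthest same-id sample and the closest different-id sample within
    the batch form the triplet.
    """
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=2)  # (N, N) distances
    same = labels[:, None] == labels[None, :]
    losses = []
    for i in range(len(feats)):
        pos = d[i][same[i] & (np.arange(len(feats)) != i)]  # same id, not self
        neg = d[i][~same[i]]                                # different id
        if len(pos) and len(neg):
            losses.append(max(pos.max() - neg.min() + margin, 0.0))
    return float(np.mean(losses))
```

When every identity's samples cluster well away from the others by more than the margin, the loss is zero, which is the state toward which training drives the embeddings.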
We set the initial learning rate to 0.0002 (the annealing strategy follows~\cite{hermans2017defense}), the weight decay to 5e-4, the triplet margin to 0.5, and the hyperparameter $\mu$ in the loss function of Eq.~\ref{eq:8} to 0.1.
The temperature $\tau$ in Eq.~\ref{eq:5} is initialized to 1.0 and anneals by:
\begin{equation}\label{eq:9}
\tau=
\begin{cases}
1.0 \cdot e^{-\alpha t}, & \text{if $\tau > \tau_1$} \\
\tau_1, & \text{if $\tau \le \tau_1$}
\end{cases},
\end{equation}
\noindent where $\alpha$ is the annealing rate (0.008), $\tau_1$ denotes the final small temperature (0.5), and $t$ indexes the epoch.
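Eq.~\ref{eq:9} with the settings above reduces to a one-line schedule; a small sketch:

```python
import math

def anneal_temperature(epoch, tau0=1.0, tau_min=0.5, alpha=0.008):
    """Exponential annealing of the Gumbel-Softmax temperature (Eq. 9):
    tau = max(tau0 * exp(-alpha * t), tau_min)."""
    return max(tau0 * math.exp(-alpha * epoch), tau_min)
```

With $\alpha = 0.008$, the schedule reaches the floor $\tau_1 = 0.5$ after roughly $\ln 2 / 0.008 \approx 87$ epochs, well within the 150 training epochs.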
\subsection{Empirical Analysis}
\label{sec:4_3}
\textbf{The impact of different combinations of sharp attention blocks.} We empirically study different combinations of sharp attention blocks for person re-ID. We conduct an experiment on CUHK03 labeled dataset. The relative performances in terms of mAP (as well as CMC top-1 accuracy) over the ResNet-50 baseline are illustrated in Fig.~\ref{fig:5}(a), (b). Seven different combination strategies, including applying sharp attention to a single block (from block1 to block4), two blocks (block1+block2), three blocks (block1+block2+block3), and even all four blocks, are evaluated in the same experimental setting: the same hyper-parameters and the same sharp attention block design (SAB for short). For the latter, we utilize the proposed sharp attention mask generator (SAMG for short) combined with cross-feature interaction learning (CIL for short), while not involving the optional context-aware unit (CU for short), \textit{i.e.}, \textit{SAMG + CIL, no CU}.
It can be seen that directly applying Gumbel-Softmax sampling to residual block1 (\textit{i.e.}, low-level feature maps) contributes the major improvement, while involving deep blocks alone may lead to a slight decline in mAP and CMC top-1 accuracy (\textit{e.g.}, block4). We conjecture the reason is that low-level feature maps (such as block1) contain more subtle visual structures from which the sharp attention selectors can sample discriminative features to uniquely identify different pedestrians. In contrast, high-level feature maps (such as block4) are usually sparse and too coarse to distinguish between different people, making attention selection from a high-level residual block alone less effective than from a low-level one. Furthermore, combining block1 with all other blocks achieves the best performance: an absolute 2.9\% improvement in mAP and 2.7\% improvement in CMC top-1 accuracy over the ResNet-50 baseline. Thus, in all following experiments we adopt this sharp attention block combination strategy. Although applying Gumbel-Softmax sampling to block4 alone leads to a slight decline (only about -0.1\%), the performance of block(1+2+3+4) is slightly better than that of block(1+2+3). We believe the main reason is that the performance gain across blocks should not be regarded as a simple linear additive relationship: attentions at different levels (from different blocks) carry complementary information, so our block-wise design (multiple levels of attention learning) provides additional top-down attention refinement that collaboratively boosts re-ID performance.
\textbf{Whether or not to involve the context-aware unit.}
We evaluate how the context-aware unit affects re-ID performance on three datasets: CUHK03 labeled, CUHK03 detected, and Market-1501. Experiments are conducted in the same setting: the same hyper-parameters and the same sharp attention block design. For the latter, all four residual blocks are expanded to sharp attention blocks (as clarified above) with SAMG and CIL. The only difference is whether the context-aware unit is involved. Notice that the context-aware unit is only added to block1 -- block3, since block4 already contains the highest-level features.
The experimental results in Fig.~\ref{fig:5}(c) and the comparison between Row-3 and Row-5 of Table~\ref{tab:1} show that using the context-aware unit can consistently improve the performance.
Specifically, the improvement on CUHK03 labeled is slight, while the effects on CUHK03 detected and Market-1501 are especially significant. These results show that on the CUHK03 labeled dataset, the proposed sharp attention mechanism can already obtain pleasing performance without the context-aware unit. However, on the CUHK03 detected and Market-1501 datasets, the context-aware unit is essential: it provides appropriate sampling guidance and cooperates with the sharp attention mechanism to obtain satisfactory results. We explain this as follows. Compared with the CUHK03 labeled dataset, where bounding boxes are carefully annotated by humans, the other two datasets use the DPM detector to produce bounding boxes, so they are prone to misalignment due to poorly detected bounding boxes and represent more challenging scenarios. Therefore, the larger receptive field and high-level information in the context-aware unit help to alleviate the negative impact of background clutter, occlusion and missing body parts, and supply adequate guidance for generating attentions that more accurately identify different pedestrians. Notice that we do not claim the context-aware unit is the largest contributor in most cases, since it is only a front-end tool for sampling guidance and cannot be used on its own. Rather, the proper statement is that the context-aware unit is essential for the proposed sharp attention mechanism to achieve significant effects in these more challenging re-ID scenarios.
\begin{table*}[t]
\centering
\caption{Performance comparison of different sharp attention block designs (SAB, \textit{i.e.}, different components and their combinations) on several datasets. The CMC rank-1 accuracy (\%) and mAP (\%) are presented. ``SAMG" means sharp attention mask generator, ``CIL" means cross-feature interaction learning,
``CU" means context-aware unit, ``TV" means TV regularization penalty.}
\label{tab:1}
\scalebox{1.1}{
\begin{tabular}{l|cccccccccccc}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{Method}} & & \multicolumn{2}{l}{CUHK03 labeled} & & \multicolumn{2}{l}{CUHK03 detected} & & \multicolumn{2}{l}{Market-1501} & &
\multicolumn{2}{l}{DukeMTMC-reID} \\ \cline{3-4} \cline{6-7} \cline{9-10} \cline{12-13}
\multicolumn{1}{c|}{} & & rank-1 & mAP & & rank-1 & mAP & & rank-1 & mAP & & rank-1 & mAP \\ \hline
ResNet-50 baseline & & 85.1 & 82.1 & & 82.1 & 79.7 & & 84.0 & 67.9 & & 75.3 & 56.4 \\
Baseline + SAMG & & 87.2 & 84.4 & & - & - & & - & - & & - & - \\
Baseline + SAMG + CIL & & 87.8 & 85.0 & & 82.5 & 80.3 & & 84.3 & 68.3 & & 75.9 & 57.1 \\
Baseline + SAMG + CU & & - & - & & 83.4 & 81.2 & & 85.2 & 69.0 & & 76.9 & 57.9 \\
Baseline + SAMG + CIL + CU & & 88.0 & 85.4 & & 83.9 & 81.6 & & 85.6 & 69.6 & & 77.5 & 58.4 \\
\textbf{Baseline + SAMG + CIL + CU + TV} & \textbf{} & \textbf{88.3} & \textbf{85.9} & \textbf{} & \textbf{84.3} & \textbf{82.2} & \textbf{} & \textbf{85.9} & \textbf{70.1} & \textbf{} & \textbf{77.9} & \textbf{58.8} \\ \hline
\end{tabular}}
\end{table*}
\textbf{The separate effectiveness of sharp attention mask generator (SAMG) and cross-feature interaction learning (CIL).}
Moreover, SAMG is evaluated with and without CIL to demonstrate their individual effectiveness. As stated in the previous paragraph, for the CUHK03 detected, Market-1501 and DukeMTMC-reID datasets, we conduct experiments with the context-aware unit involved, since it is essential for the sharp attention mechanism to take effect in these more challenging re-ID scenarios; for the CUHK03 labeled dataset, the context-aware unit is not required. As can be seen from the comparison of Row-2 and Row-3 of Table~\ref{tab:1} (for CUHK03 labeled) and of Row-4 and Row-5 (for the other three datasets), SAMG plays the major role, and adding CIL further improves re-ID performance consistently. Therefore, both are effective components of our sharp attention block design.
\textbf{The effectiveness of optimizing attention-aware feature continuity.}
On the basis of the above optimal experimental practice (all four blocks combined: SAMG + CIL, with CU involved), we further add the TV regularization penalty, aiming to eliminate noisy features and optimize the continuity of attention-aware features. The comparison between Row-5 and Row-6 of Table~\ref{tab:1} shows a consistent performance gain of about +0.3\% to +0.6\% on all datasets, demonstrating the effectiveness of the introduced regularization technique.
\subsection{Comparison with the ResNet-50 Baseline}
\label{sec:4_4}
In this subsection, the overall performance of the proposed method is summarized (the ablation study was presented in the previous subsection). We adopt a ResNet-50~\cite{He_2016_CVPR} network trained with the triplet loss (using the triplet mining strategy introduced by Hermans \textit{et al.}~\cite{hermans2017defense}) as a strong CNN baseline for all datasets. As shown in Table~\ref{tab:1}, the baseline mAP is 82.1\%, 79.7\%, 67.9\%, and 56.4\% on the CUHK03 labeled, CUHK03 detected, Market-1501 and DukeMTMC-reID datasets, respectively; the corresponding CMC rank-1 accuracy is 85.1\%, 82.1\%, 84.0\% and 75.3\%. Note that the baseline alone exceeds most existing deep learning frameworks (see Table~\ref{tab:3} -- Table~\ref{tab:6} for details).
We adopt the practice discussed in Sec.~\ref{sec:4_3} to validate the effectiveness of the proposed sampling-based sharp attention mechanism. Table~\ref{tab:1} summarizes in detail the positive effects of the proposed algorithm modules over the baseline, including the basic sharp attention block design (\textit{i.e.}, all four residual blocks expanded to sharp attention blocks with the sharp attention mask generator and cross-feature interaction learning; refer to Sec.~\ref{sec:3_1} and Sec.~\ref{sec:3_3}), the optional context-aware unit (refer to Sec.~\ref{sec:3_4}) and the TV regularization optimization (refer to Sec.~\ref{sec:3_2}).
The results show consistent and significant improvements in both mAP and CMC rank-1 accuracy over the strong baseline on all datasets. Specifically, we observe final improvements of +3.8\%, +2.5\%, +2.2\%, +2.4\% in mAP and +3.2\%, +2.2\%, +1.9\%, +2.6\% in CMC rank-1 accuracy. These results also demonstrate that all four core components of our approach are important, steadily improving the baseline across various scenarios.
\begin{table*}[t]
\centering
\caption{Comparisons of additional parameter number and performance on CUHK03 labeled dataset. The additional parameter number (million) indicates the extra parameters needed in the context-aware unit. For performance comparison, the CMC rank-1 accuracy (\%) and mAP (\%) are listed.}
\label{tab:2}
\scalebox{1.2}{
\begin{tabular}{l|cccccccc}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{Method}} & & \multicolumn{4}{l}{Additional parameter number(M)} & & \multicolumn{2}{l}{CUHK03 labeled} \\ \cline{3-6} \cline{8-9}
\multicolumn{1}{c|}{} & & \multicolumn{1}{c}{block1} & \multicolumn{1}{c}{block2} & \multicolumn{1}{c}{block3} & total & & rank-1 & mAP \\ \hline
Soft attention network~\cite{Wang_2017_CVPR} & & \multicolumn{1}{c}{15.84} & \multicolumn{1}{c}{15.08} & \multicolumn{1}{c}{12.06} & 42.98 & & 87.0 & 85.1 \\ \hline
\textbf{Our sharp attention network} & & \multicolumn{1}{c}{\textbf{0.51}} & \multicolumn{1}{c}{\textbf{2.03}} & \multicolumn{1}{c}{\textbf{8.13}} & \textbf{10.67} & & \textbf{88.3} & \textbf{85.9} \\ \hline
\end{tabular}}
\end{table*}
\begin{figure}[]
\begin{center}
\includegraphics[width=0.8\linewidth,height=7cm]{6.pdf}
\end{center}
\caption{Visualization of different types of Conv1 attention features. (a) Raw images. (b) Gating-based soft attention feature maps. (c) Sampling-based sharp attention feature maps. These qualitative results indicate the proposed sharp attention mechanism is more assertive in localizing subtle visual details.}
\label{fig:6}
\end{figure}
\subsection{Comparison with Soft Attention Models}
\label{sec:4_5}
\cite{rahimpour2017person, liu2017end, Zhao_2017_ICCV, Wang_2017_CVPR} represent four state-of-the-art soft attention models, among which we first compare against the method of~\cite{Wang_2017_CVPR}. We highlight this comparison because~\cite{Wang_2017_CVPR} performs best among the four methods (see Table~\ref{tab:3} -- Table~\ref{tab:5}), and because it is directly comparable to ours, whereas the other methods~\cite{rahimpour2017person, liu2017end, Zhao_2017_ICCV} contain additional techniques (such as RNN and STN~\cite{jaderberg2015spatial}) that prevent a direct comparison. For a direct and fair comparison between sharp and soft attention, we conduct experiments in the same setting: the same backbone network (\textit{i.e.}, ResNet-50), the same loss function (\textit{i.e.}, triplet loss), the same hyper-parameters and the same add-ons (\textit{i.e.}, CIL and CU). The major difference is that our method employs a sampling-based attention generation mechanism rather than a soft one. Besides, the context-aware unit structure we use is much more concise.
Following~\cite{Wang_2017_CVPR}, we reproduce their residual attention network on the ResNet-50 backbone and conduct experiments on the CUHK03 labeled dataset. The results in Table~\ref{tab:2} show that we use only 25\% of the parameters in the context-aware unit (10.67M vs.\ 42.98M) while outperforming~\cite{Wang_2017_CVPR} by +1.3\% (88.3\% vs.\ 87.0\%) in CMC rank-1 accuracy and +0.8\% (85.9\% vs.\ 85.1\%) in mAP. This is attributable to the superiority of sharp attention selectors over soft ones.
For some typical difficult cases, intuitively reflected in Fig.~\ref{fig:6} and the earlier Fig.~\ref{fig:2}, the soft attentions look ambiguous in selecting subtle discriminative parts, because their mask values are far from the two assertive statuses of being attended ($1$) or unattended ($0$) (shown in Fig.~\ref{fig:2}(b)). Compared with the soft attentions, the sharp attentions are more certain and assertive in selecting discriminative visual structures (\textit{e.g.}, the regions of backpack, T-shirt, book or pants shown in Fig.~\ref{fig:6}) to identify people, which makes them particularly suitable for person re-ID in more challenging scenarios. Also, thanks to the discrete nature of the sharp attention samplers, which are more aggressive and assertive in selecting attended/unattended visual structures, we only need a relatively compact front-end unit (\textit{i.e.}, fewer parameters) for visual context modeling and attention generation guidance. This makes our network more efficient to deploy and reduces the risk of overfitting.
More experimental comparisons with other soft attention methods on other datasets are summarized in detail in Table~\ref{tab:3} -- Table~\ref{tab:5}. For~\cite{liu2017end, rahimpour2017person, Zhao_2017_ICCV}, we list the results reported in their papers. We find that the proposed method outperforms the soft attention models on all datasets, including CUHK03 labeled, CUHK03 detected and Market-1501. This demonstrates that our approach not only provides an alternative way to generate attentions, but also achieves state-of-the-art performance for person re-ID among different attention models.
\begin{table}[t]
\centering
\caption{Performance comparison with other simple alternatives on CUHK03 labeled and Market-1501 datasets. The CMC rank-1 accuracy (\%) and mAP (\%) are presented.}
\label{tab:7}
\begin{tabular}{l|cccccc}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{Method}} & & \multicolumn{2}{l}{CUHK03 labeled} & & \multicolumn{2}{l}{Market-1501} \\ \cline{3-4} \cline{6-7}
\multicolumn{1}{c|}{} & & rank-1 & mAP & & rank-1 & mAP \\ \hline
ResNet-50 baseline & & 85.1 & 82.1 & & 84.0 & 67.9 \\
Thresholding & & 85.3 & 82.4 & & 84.1 & 68.0 \\
Power ($x^2$) & & 86.1 & 83.1 & & 84.4 & 68.2 \\
Power ($x^3$) & & 85.8 & 82.7 & & 84.3 & 68.1 \\
\textbf{Our SAN} & & \textbf{87.2} & \textbf{84.4} & & \textbf{85.2} & \textbf{69.0} \\ \hline
\end{tabular}
\end{table}
\subsection{Comparison with Other Simple Alternatives}
\label{sec:4_6}
As mentioned in Sec.~\ref{sec:3_1}, the normalized feature of Eq.~\ref{eq:3} is viewed as the probability of sampling that feature, which aims to generate attention-aware features in which the strong get stronger and the weak become weaker or even vanish. We evaluate the proposed sampling-based mechanism against two other simple alternatives that share a similar intuition, on CUHK03 labeled and Market-1501. The first is thresholding, where features below a certain threshold $\lambda$ are set to 0; the second is a power function (\textit{e.g.}, square or cube). We tune the hyperparameter $\lambda$ to 0.3 on the validation set. Experiments are conducted in the same setting (without CIL and TV; CU is only involved for Market-1501). The only difference lies in the attention mask generation technique.
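Both alternatives are one-liners; a hypothetical sketch operating on the normalized features $p = f(X)$ of Eq.~\ref{eq:3}:

```python
import numpy as np

def threshold_mask(p, lam=0.3):
    """Thresholding alternative: zero out normalized features below lam."""
    return np.where(p >= lam, p, 0.0)

def power_mask(p, k=2):
    """Power alternative: sharpen by raising normalized features to the k-th power."""
    return p ** k
```

Like the sharp attention sampler, both transforms suppress weakly activated features, but they are deterministic and cannot express the stochastic exploration of the Gumbel-Softmax mechanism.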
The results in Table~\ref{tab:7} show that our SAMG outperforms the other simple alternatives, which demonstrates the effectiveness of our carefully designed sharp attention mechanism.
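The three mask-generation variants compared here can be sketched in a few lines of NumPy (an illustration we provide; the function names are ours, and the sampling variant is a simplified stand-in for the full differentiable mechanism described in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def threshold_mask(feat, lam=0.3):
    """Thresholding: zero out normalized responses below lam."""
    return np.where(feat >= lam, feat, 0.0)

def power_mask(feat, p=2):
    """Power: sharpen responses by raising them to the p-th power."""
    return feat ** p

def sampled_mask(feat):
    """Sampling: treat each normalized response as a Bernoulli keep-probability."""
    keep = (rng.random(feat.shape) < feat).astype(feat.dtype)
    return keep * feat

feat = np.array([0.1, 0.4, 0.9])   # normalized responses in [0, 1]
print(threshold_mask(feat))        # weak response suppressed entirely
print(power_mask(feat))            # strong/weak gap widened smoothly
```

All three follow the same "strong gets stronger, weak gets weaker" intuition; the sampling variant differs in that suppression is stochastic rather than deterministic.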
\begin{table}
\centering
\caption{Performance comparison on CUHK03 labeled dataset. The compared methods are separated into two categories: gating-based soft attention models (GSA) and other state-of-the-art methods (SOA). The CMC rank-1/5/10 accuracy (\%) and mAP (\%) are presented.}
\label{tab:3}
\small
\setlength\tabcolsep{5pt}
\begin{tabular}{l|l|cccc}
\hline
\multicolumn{2}{c|}{Method} & rank-1 & rank-5 & rank-10 & mAP \\ \hline
\multicolumn{1}{c|}{\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}GSA\end{tabular}}} & GAN~\cite{rahimpour2017person} & 61.2 & 89.1 & 91.3 & - \\
\multicolumn{1}{c|}{} & CAN~\cite{liu2017end} & 77.6 & 95.2 & 99.3 & - \\
\multicolumn{1}{c|}{} & Part-Aligned~\cite{Zhao_2017_ICCV} & 85.4 & 97.6 & 99.4 & - \\
\multicolumn{1}{c|}{} & SAN~\cite{Wang_2017_CVPR} & 87.0 & 98.3 & 99.4 & 85.1 \\ \hline
\multirow{8}{*}{\begin{tabular}[c]{@{}l@{}}SOA\end{tabular}} & LOMO~\cite{Liao_2015_CVPR} & 52.2 & 82.2 & 92.1 & - \\
& DNS~\cite{Zhang_2016_CVPR} & 58.9 & 85.6 & 92.5 & - \\
& Latent Part~\cite{Li_2017_CVPR} & 74.2 & 94.3 & 97.5 & - \\
& SSM~\cite{Bai_2017_CVPR} & 76.6 & 94.6 & 98.0 & - \\
& MuDeep~\cite{Qian_2017_ICCV} & 76.9 & 96.1 & 98.4 & - \\
& MSP-CNN~\cite{Shen2017Deep} & 85.7 & 97.6 & 99.2 & - \\
& Spindle~\cite{Zhao_2017_CVPR} & 88.5 & 97.8 & 98.6 & - \\
& PDC~\cite{Su_2017_ICCV} & \textbf{88.7} & 98.6 & 99.2 & - \\ \hline
& \textbf{Our SAN} & 88.3 & \textbf{98.8} & \textbf{99.4} & \textbf{85.9} \\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Performance comparison on CUHK03 detected dataset. The CMC rank-1/5/10 accuracy (\%) and mAP (\%) are shown.}
\label{tab:4}
\small
\setlength\tabcolsep{5pt}
\begin{tabular}{l|l|cccc}
\hline
\multicolumn{2}{c|}{Method} & rank-1 & rank-5 & rank-10 & mAP \\ \hline
\multicolumn{1}{c|}{\multirow{3}{*}{GSA}} & CAN~\cite{liu2017end} & 69.2 & 88.5 & 94.1 & - \\
\multicolumn{1}{c|}{} & Part-Aligned~\cite{Zhao_2017_ICCV} & 81.6 & 97.3 & 98.4 & - \\
\multicolumn{1}{c|}{} &
SAN~\cite{Wang_2017_CVPR} & 83.1 & 97.3 & 98.3 & 81.0 \\ \hline
\multirow{9}{*}{SOA} & LOMO~\cite{Liao_2015_CVPR} & 46.3 & 78.9 & 88.6 & - \\
& SI-CI~\cite{Wang_2016_CVPR} & 52.2 & 84.3 & 94.8 & - \\
& DNS~\cite{Zhang_2016_CVPR} & 53.7 & 83.1 & 93.0 & - \\
& Latent Part~\cite{Li_2017_CVPR} & 68.0 & 91.0 & 95.4 & - \\
& SSM~\cite{Bai_2017_CVPR} & 72.7 & 92.4 & 96.1 & - \\
& LSRO~\cite{Zheng_2017_ICCV} & 73.1 & 92.7 & 96.7 & 77.4 \\
& MuDeep~\cite{Qian_2017_ICCV} & 75.6 & 94.4 & 97.5 & - \\
& PDC~\cite{Su_2017_ICCV} & 78.3 & 94.9 & 97.2 & - \\
& SVDNet~\cite{Sun_2017_ICCV} & 81.8 & - & - & - \\ \hline
& \textbf{Our SAN} & \textbf{84.3} & \textbf{97.4} & \textbf{98.4} & \textbf{82.2} \\
\hline
\end{tabular}
\end{table}
\subsection{Comparison with the State-of-the-Arts}
\label{sec:4_7}
Finally, we compare our method with existing published state-of-the-art methods on CUHK03 (both labeled and detected settings), Market-1501 and DukeMTMC-reID in Tables~\ref{tab:3}--\ref{tab:6}, respectively. On the CUHK03 labeled dataset, we obtain 88.3\% rank-1, 98.8\% rank-5 and 99.4\% rank-10 accuracy; the rank-5 and rank-10 accuracies are the best reported in the literature, and our mAP of 85.9\% is also highly competitive. On the CUHK03 detected dataset, we achieve the best results: 84.3\% rank-1 accuracy and 82.2\% mAP. On Market-1501, we likewise outperform all other methods with 85.9\% rank-1 accuracy and 70.1\% mAP, even though some models (\textit{e.g.}, the second best one~\cite{Bai_2017_CVPR}) utilize an additional re-ranking technique. On the recently released DukeMTMC-reID dataset, we also set a new state of the art: 77.9\% rank-1 accuracy and 58.8\% mAP.
All these results demonstrate the superiority of the proposed sharp attention model for person re-ID over the existing published state-of-the-art approaches.
\begin{table}[t]
\centering
\caption{Performance comparison on Market-1501 dataset. The CMC rank-1/5/10 accuracy (\%) and mAP (\%) are shown.}
\label{tab:5}
\small
\setlength\tabcolsep{5pt}
\begin{tabular}{l|l|cccc}
\hline
\multicolumn{2}{c|}{Method} & rank-1 & rank-5 & rank-10 & mAP \\ \hline
\multicolumn{1}{c|}{\multirow{3}{*}{GSA}} & CAN~\cite{liu2017end} & 60.3 & - & - & 35.9 \\
\multicolumn{1}{c|}{} & Part-Aligned~\cite{Zhao_2017_ICCV} & 81.0 & 92.0 & 94.7 & 63.4 \\
\multicolumn{1}{c|}{} &
SAN~\cite{Wang_2017_CVPR} & 84.8 & 94.2 & 96.7 & 69.2 \\ \hline
\multirow{8}{*}{SOA} & DNS~\cite{Zhang_2016_CVPR} & 55.4 & - & - & 29.9 \\
& Spindle~\cite{Zhao_2017_CVPR} & 76.9 & 91.5 & 94.6 & - \\
& LSRO~\cite{Zheng_2017_ICCV} & 78.1 & - & - & 56.2 \\
& Latent Part~\cite{Li_2017_CVPR} & 80.3 & - & - & 57.5 \\
& MSP-CNN~\cite{Shen2017Deep} & 81.9 & 92.8 & 95.2 & 63.6 \\
& SVDNet~\cite{Sun_2017_ICCV} & 82.3 & - & - & 62.1 \\
& SSM~\cite{Bai_2017_CVPR} & 82.2 & - & - & 68.8 \\
& PDC~\cite{Su_2017_ICCV} & \multicolumn{1}{c}{84.4} & \multicolumn{1}{c}{92.7} & \multicolumn{1}{c}{94.9} & \multicolumn{1}{c}{63.4} \\ \hline
& \textbf{Our SAN} & \textbf{85.9} & \textbf{94.9} & \textbf{97.0} & \textbf{70.1} \\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
\label{sec:5}
In this paper, we propose a sampling-based sharp attention mechanism that generates attention masks by directly sampling from the convolutional features; the resulting masks are sharper and more assertive in selecting discriminative visual structures than those of gating-based soft attention models. The sharper attention masks are able to distinguish subtle visual details from irrelevant parts, which makes them particularly suitable for challenging recognition problems such as person re-ID.
A differentiable Gumbel-Softmax sampler is employed to approximate the Bernoulli sampling, so that the sharp attention networks can be trained in an end-to-end fashion. We further introduce a compact context-aware unit that captures high-level context-aware features to better guide the sampling of attention masks in complex contexts.
Experiments on three large-scale datasets demonstrate the superiority of the proposed approach over gating-based soft attention models as well as other existing published state-of-the-art methods.
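The Gumbel-Softmax approximation mentioned above can be sketched as follows (a minimal NumPy illustration of the general trick, not the paper's implementation; a two-category softmax over \{keep, drop\} relaxes the Bernoulli draw):

```python
import numpy as np

rng = np.random.default_rng(1)

def gumbel_softmax_bernoulli(p, tau=0.5):
    """Relaxed Bernoulli(p) sample via the Gumbel-Softmax trick.

    As tau -> 0 the output approaches a hard 0/1 sample whose
    expectation is p; larger tau gives smoother, differentiable values.
    """
    eps = 1e-9
    logits = np.stack([np.log(p + eps), np.log(1.0 - p + eps)])
    g = -np.log(-np.log(rng.random(logits.shape) + eps) + eps)  # Gumbel(0,1) noise
    y = np.exp((logits + g) / tau)
    return (y / y.sum(axis=0))[0]  # relaxed probability of 'keep'

p = np.array([0.05, 0.5, 0.95])
mask = gumbel_softmax_bernoulli(p, tau=0.1)
assert mask.shape == p.shape and np.all((mask >= 0.0) & (mask <= 1.0))
```

Because the Gumbel noise is injected additively and the softmax is smooth, gradients flow through the sampling step, which is what allows end-to-end training.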
\begin{table}
\centering
\caption{Performance comparison on DukeMTMC-reID dataset. The CMC rank-1 accuracy (\%) and mAP (\%) are listed.}
\label{tab:6}
\small
\setlength\tabcolsep{12pt}
\begin{tabular}{l|cc}
\hline
\multicolumn{1}{c|}{Method} & rank-1 & mAP \\ \hline
LOMO+XQDA~\cite{Liao_2015_CVPR} & 30.8 & 17.0 \\
LSRO~\cite{Zheng_2017_ICCV} & 67.7 & 47.1 \\
OIM ~\cite{Xiao_2017_CVPR} & 68.1 & - \\
ACRN~\cite{Arne_2017_CVPR_Workshops} & 72.6 & 52.0 \\
SVDNet~\cite{Sun_2017_ICCV} & 76.7 & 56.8 \\ \hline
\textbf{Our SAN} & \textbf{77.9} & \textbf{58.8} \\ \hline
\end{tabular}
\end{table}
\appendices
\section*{Acknowledgment}
This work was supported in part by the Fundamental Research Funds for the Central Universities.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
Let $\F$ denote a field and let $V$ denote a vector space over $\F$ with
finite positive dimension.
We consider a pair $A, A^*$ of diagonalizable $\F$-linear maps on $V$,
each of which acts on an eigenbasis for the other one
in an irreducible tridiagonal fashion.
Such a pair is called a Leonard pair (see \cite[Definition 1.1]{T:Leonard}).
The Leonard pair $A, A^*$ is said to be self-dual whenever
there exists an automorphism of the endomorphism algebra of $V$
that swaps $A$ and $A^*$.
In this case such an automorphism is unique, and is called the duality $A \leftrightarrow A^*$.
The literature contains many examples of self-dual Leonard pairs.
For instance
(i)
the Leonard pair associated with an irreducible module for the Terwilliger algebra
of the hypercube (see \cite[Corollaries 6.8, 8.5]{Go});
(ii)
a Leonard pair of Krawtchouk type (see \cite[Definition 6.1]{NT:Krawt});
(iii)
the Leonard pair associated with an irreducible module for the Terwilliger algebra
of a distance-regular graph that has a spin model in the Bose-Mesner algebra
(see \cite[Theorem]{C:thin}, \cite[Theorems 4.1, 5.5]{CN:hyper});
(iv)
an appropriately normalized totally bipartite Leonard pair
(see \cite[Lemma 14.8]{NT:bipartite});
(v)
the Leonard pair consisting of
any two of a modular Leonard triple $A,B,C$ (see \cite[Definition 1.4]{Cur});
(vi)
the Leonard pair consisting of a pair of opposite generators for
the $q$-tetrahedron algebra,
acting on an evaluation module (see \cite[Proposition 9.2]{IRT}).
The example (i) is a special case of (ii),
and the examples (iii), (iv) are special cases of (v).
Let $A, A^*$ denote a Leonard pair on $V$.
We can determine whether $A, A^*$ is self-dual in the following way.
By \cite[Lemma 1.3]{T:Leonard} each eigenspace of $A$, $A^*$ has dimension one.
Let $\{\th_i\}_{i=0}^d$ denote an ordering of the eigenvalues of $A$.
For $0 \leq i \leq d$ let $v_i$ denote a $\th_i$-eigenvector for $A$.
The ordering $\{\th_i\}_{i=0}^d$ is said to be standard whenever
$A^*$ acts on the basis $\{v_i\}_{i=0}^d$ in an irreducible tridiagonal fashion.
If the ordering $\{\th_i\}_{i=0}^d$ is standard then the ordering $\{\th_{d-i}\}_{i=0}^d$
is also standard, and no further ordering is standard.
Similar comments apply to $A^*$.
Let $\{\th_i\}_{i=0}^d$ denote a standard
ordering of the eigenvalues of $A$.
Then $A,A^*$ is self-dual if and only if $\{\th_i\}_{i=0}^d$ is a standard
ordering of the eigenvalues of $A^*$ (see \cite[Proposition 8.7]{NT:affine}).
For a given self-dual Leonard pair,
it is not obvious what is the corresponding duality.
The purpose of this paper is to describe this duality.
Our description is summarized as follows.
Let $A, A^*$ denote a self-dual Leonard pair on $V$,
and let $\{\th_i\}_{i=0}^d$ denote a standard ordering of the eigenvalues of $A$.
By construction $\{\th_i\}_{i=0}^d$ is a standard ordering of
the eigenvalues of $A^*$.
For $0 \leq i \leq d$ let $E_i : V \to V$ (resp.\ $E^*_i : V \to V$)
denote the projection
onto the eigenspace of $A$ (resp.\ $A^*$) corresponding to $\th_i$.
Using the projections $\{E_i\}_{i=0}^d$ and $\{E^*_i\}_{i=0}^d$
we define a certain $\F$-linear map $T : V \to V$.
We show that $T$ is invertible, and the
map $X \mapsto T X T^{-1}$ is the duality $A \leftrightarrow A^*$.
In order to illuminate the nature of $T$, we show how
$T$ acts on $4$ flags, $12$ decompositions, and $24$ bases
attached to $A, A^*$.
Here are some details.
By a flag on $V$ we mean a sequence $\{H_i\}_{i=0}^d$ of subspaces of $V$
such that $H_i$ has dimension $i+1$ for $0 \leq i \leq d$ and
$H_{i-1} \subseteq H_i$ for $1 \leq i \leq d$.
By a decomposition of $V$ we mean a sequence $\{V_i\}_{i=0}^d$ of one-dimensional
subspaces whose direct sum is $V$.
For a decomposition $\{V_i\}_{i=0}^d$ of $V$, define
$H_i = V_0 + V_1 + \cdots + V_i$ for $0 \leq i \leq d$.
The sequence $\{H_i\}_{i=0}^d$ is a flag on $V$, said to be induced by $\{V_i\}_{i=0}^d$.
Two flags $\{H_i\}_{i=0}^d$ and $\{H'_i\}_{i=0}^d$ on $V$ are called opposite
whenever there exists a decomposition $\{V_i\}_{i=0}^d$ of $V$
such that $\{V_i\}_{i=0}^d$ induces $\{H_i\}_{i=0}^d$ and $\{V_{d-i}\}_{i=0}^d$ induces $\{H'_i\}_{i=0}^d$.
In this case $V_i = H_i \cap H'_{d-i}$ for $0 \leq i \leq d$.
In particular the decomposition $\{V_i\}_{i=0}^d$ is uniquely determined by the
ordered pair $\{H_i\}_{i=0}^d$, $\{H'_i\}_{i=0}^d$;
we say that this ordered pair induces $\{V_i\}_{i=0}^d$.
For each symbol $z$ among the symbols $0, D, 0^*, D^*$
we define a flag $[z]$ on $V$ as follows.
The flag $[0]$ is induced by $\{E_i V\}_{i=0}^d$ and the flag $[D]$ is induced by
$\{E_{d-i} V\}_{i=0}^d$.
The flag $[0^*]$ is induced by $\{E^*_i V\}_{i=0}^d$ and the
flag $[D^*]$ is induced by $\{E^*_{d-i} V\}_{i=0}^d$.
By \cite[Theorem 7.3]{T:24points}
the flags $[0]$, $[D]$, $[0^*]$, $[D^*]$ are mutually opposite.
For distinct $z,w$ among the symbols $0,D,0^*,D^*$,
let $[z w]$ denote the decomposition of $V$ induced by $[z]$ and $[w]$.
There are $12$ choices for the ordered pair $z,w$ and this
gives 12 decompositions of $V$.
For each decomposition, pick a nonzero vector in
each component of the decomposition.
The resulting sequence of vectors is a basis for $V$.
We normalize the basis in two ways that seem attractive;
this yields two bases for each decomposition.
By this procedure we obtain 24 bases for $V$.
We obtain the action of $T$ on each of these bases.
As we will see, with respect to four of the 24 bases,
the matrix representing $T$ is the same,
and its entries take a very attractive form.
The paper is organized as follows.
In Sections \ref{sec:LP}--\ref{sec:trace} we
review some background and establish some
basic results for general Leonard pairs.
Starting in Section \ref{sec:selfdual} we consider a self-dual Leonard pair $A,A^*$.
In Section \ref{sec:main}
we introduce the map $T$ and discuss its basic properties.
In Sections \ref{sec:formulaT}, \ref{sec:proofmain}
we show that $T$ is invertible, and the
map $X \mapsto T X T^{-1}$ is the duality $A \leftrightarrow A^*$.
In Section \ref{sec:decomp} we use $A, A^*$ to define 4 flags and 12 decompositions.
In Section \ref{sec:actTflag} we obtain the action of $T$ on these flags
and decompositions.
In Sections \ref{sec:24bases}, \ref{sec:rel24}
we obtain two bases from each of the 12 decompositions, and describe how these
two bases are related.
In Section \ref{sec:Taction}
we obtain the action of $T$ on the 24 bases.
We also display four bases among the 24 with respect to
which the matrix representing $T$ is the same and
its entries take an attractive form.
\section{Leonard pairs}
\label{sec:LP}
We now begin our formal argument.
In this section we recall the notion of a Leonard pair.
We use the following terms.
A square matrix is said to be {\em tridiagonal} whenever
each nonzero entry lies on either the diagonal, the subdiagonal,
or the superdiagonal.
A tridiagonal matrix is said to be {\em irreducible} whenever
each entry on the subdiagonal is nonzero and each entry on
the superdiagonal is nonzero.
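These two conditions are easy to check mechanically; a small NumPy predicate (illustrative only, not part of the paper) makes them concrete:

```python
import numpy as np

def is_irreducible_tridiagonal(M):
    """True iff M is tridiagonal with all sub- and superdiagonal entries nonzero."""
    M = np.asarray(M)
    n = M.shape[0]
    # tridiagonal: every nonzero entry within one of the main diagonal
    for i in range(n):
        for j in range(n):
            if abs(i - j) > 1 and M[i, j] != 0:
                return False
    # irreducible: no zeros on the subdiagonal or superdiagonal
    return all(M[i + 1, i] != 0 and M[i, i + 1] != 0 for i in range(n - 1))

assert is_irreducible_tridiagonal([[2, 1, 0], [1, 2, 1], [0, 1, 2]])
assert not is_irreducible_tridiagonal([[2, 0, 0], [1, 2, 1], [0, 1, 2]])  # zero superdiagonal entry
```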
Let $\F$ denote a field.
\begin{definition} \cite[Definition 1.1]{T:Leonard}
\label{def:LP} \samepage
\ifDRAFT {\rm def:LP}. \fi
Let $V$ denote a vector space over $\F$ with finite positive dimension.
By a {\em Leonard pair} on $V$ we mean an ordered pair
of $\F$-linear maps $A: V \to V$ and $A^* : V \to V$
that satisfy the following {\rm (i), (ii)}.
\begin{itemize}
\item[\rm (i)]
There exists a basis for $V$ with respect to which the matrix representing $A$
is irreducible tridiagonal and the matrix representing $A^*$ is diagonal.
\item[\rm (ii)]
There exists a basis for $V$ with respect to which the matrix representing $A^*$
is irreducible tridiagonal and the matrix representing $A$ is diagonal.
\end{itemize}
We say that $A,A^*$ is {\em over} $\F$.
\end{definition}
\begin{note}
According to a common notational convention,
for a matrix $A$ its conjugate-transpose is denoted by $A^*$.
We are not using this convention.
In a Leonard pair $A,A^*$ the linear maps $A,A^*$ are arbitrary subject to (i) and (ii) above.
\end{note}
We refer the reader to \cite{T:survey} for background on Leonard pairs.
\begin{note}
Assume that $A,A^*$ is a Leonard pair on $V$.
Then $A^*,A$ is a Leonard pair on $V$.
\end{note}
For the rest of this paper,
let $V$ denote a vector space over $\F$ with finite positive dimension.
Let $\text{\rm End}(V)$ denote the $\F$-algebra consisting of the
$\F$-linear maps from $V$ to $V$.
The algebra $\text{\rm End}(V)$ is called the {\em endomorphism algebra} of $V$.
\begin{lemma} \cite[Corollary 5.6]{T:survey}
\label{lem:generate} \samepage
\ifDRAFT {\rm lem:generate}. \fi
Let $A,A^*$ denote a Leonard pair on $V$.
Then $A$, $A^*$ together generate the algebra $\text{\rm End}(V)$.
\end{lemma}
We recall the notion of an isomorphism for Leonard pairs.
Let $A,A^*$ denote a Leonard pair on $V$.
Let $V'$ denote a vector space over $\F$ with finite positive dimension,
and let $A', A^{*\prime}$ denote a Leonard pair on $V'$.
By an {\em isomorphism of Leonard pairs} from $A,A^*$ to $A', A^{*\prime}$
we mean an isomorphism of $\F$-algebras from $\text{\rm End}(V)$
to $\text{\rm End}(V')$ that sends $A \mapsto A'$ and $A^* \mapsto A^{*\prime}$.
The Leonard pairs $A,A^*$ and $A', A^{*\prime}$ are said to be {\em isomorphic}
whenever there exists an isomorphism of Leonard pairs from $A,A^*$
to $A', A^{*\prime}$.
In this case, the isomorphism involved is unique by Lemma \ref{lem:generate}.
An isomorphism of Leonard pairs can be seen from the following point of view.
By the Skolem-Noether theorem (see \cite[Corollary 7.125]{Rot}),
a map $\sigma : \text{\rm End}(V) \to \text{\rm End}(V')$ is an $\F$-algebra
isomorphism if and only if there exists an $\F$-linear bijection $K : V \to V'$
such that $X^\sigma = K X K^{-1}$ for all $X \in \text{\rm End}(V)$.
In this case, we say that {\em $K$ gives $\sigma$}.
Assume that $K$ gives $\sigma$.
Then an $\F$-linear map $\widetilde{K} : V \to V'$ gives $\sigma$
if and only if there exists $0 \not=\alpha \in \F$ such that $\widetilde{K}=\alpha K$.
\begin{definition} \label{def:selfdual} \samepage
\ifDRAFT {\rm def:selfdual}. \fi
A Leonard pair $A,A^*$ is said to be {\em self-dual}
whenever $A, A^*$ is isomorphic to $A^*, A$.
\end{definition}
Let $A,A^*$ denote a self-dual Leonard pair on $V$.
For an automorphism $\sigma$ of $\text{\rm End}(V)$
the following are equivalent:
\begin{itemize} \samepage
\item[\rm (i)]
$\sigma$ is an isomorphism of Leonard pairs from $A,A^*$ to $A^*,A$;
\item[\rm (ii)]
$\sigma$ is an isomorphism of Leonard pairs from $A^*,A$ to $A,A^*$.
\end{itemize}
There exists a unique automorphism $\sigma$ of $\text{\rm End}(V)$
that satisfies (i), (ii).
\begin{definition} \label{def:duality} \samepage
\ifDRAFT {\rm def:duality}. \fi
Let $A,A^*$ denote a self-dual Leonard pair on $V$.
By the {\em duality $A \leftrightarrow A^*$} we mean the
automorphism $\sigma$ of $\text{\rm End}(V)$ that satisfies (i), (ii) above.
\end{definition}
\section{Leonard systems}
\label{sec:LS}
When working with a Leonard pair, it is convenient to consider a closely related object
called a Leonard system \cite{T:Leonard}.
Before we define a Leonard system,
we recall a few concepts from linear algebra.
We denote by $I$ the identity element of $\text{\rm End}(V)$.
For $A \in \text{\rm End}(V)$ let $\gen{A}$ denote the subalgebra
of $\text{\rm End}(V)$ generated by $A$.
For an integer $d \geq 0$
let $\Mat_{d+1}(\F)$ denote the $\F$-algebra consisting of
the $d+1$ by $d+1$ matrices that have all entries in $\F$.
We index the rows and columns by $0,1,\ldots,d$.
Let $\{v_i\}_{i=0}^d$ denote a basis for $V$.
For $X \in \text{\rm End}(V)$ and $Y \in \Mat_{d+1}(\F)$,
we say that {\em $Y$ represents $X$ with respect to $\{v_i\}_{i=0}^d$} whenever
$X v_j = \sum_{i=0}^d Y_{i,j} v_i$ for $0 \leq j \leq d$.
Let $A \in \text{End}(V)$.
For $\th \in \F$ define
$V(\th) = \{ v \in V \,|\, A v = \th v\}$.
Observe that $V(\th)$ is a subspace of $V$.
The scalar $\th$ is called an {\em eigenvalue} of $A$ whenever $V(\th) \neq 0$.
In this case, $V(\th)$ is called the {\em eigenspace} of $A$ corresponding to $\th$.
We say that
$A$ is {\em diagonalizable} whenever $V$ is spanned by the eigenspaces of $A$.
We say that $A$ is {\em multiplicity-free}
whenever $A$ is diagonalizable, and each eigenspace of $A$ has dimension one.
Assume that $A$ is multiplicity-free, and let
$\{V_i\}_{i=0}^d$ denote an ordering of the eigenspaces of $A$.
Then $\{V_i\}_{i=0}^d$ is a decomposition of $V$.
For $0 \leq i \leq d$ let $\th_i$ denote the eigenvalue of $A$ corresponding to $V_i$.
For $0 \leq i \leq d$ define $E_i \in \text{End}(V)$ such that
$(E_i - I)V_i = 0$ and $E_i V_j = 0$ if $j \neq i$ $(0 \leq j \leq d)$.
Thus $E_i$ is the projection onto $V_i$.
Observe that
(i) $V_i = E_i V$ $(0 \leq i \leq d)$;
(ii) $E_i E_j = \delta_{i,j} E_i$ $(0 \leq i,j \leq d)$;
(iii) $I = \sum_{i=0}^d E_i$;
(iv) $A = \sum_{i=0}^d \th_i E_i$.
Also
\begin{align}
E_i &= \prod_{\begin{smallmatrix} 0 \leq j \leq d \\ j \neq i \end{smallmatrix} }
\frac{A-\th_j I}{\th_i - \th_j}
\qquad\qquad (0 \leq i \leq d). \label{eq:Ei}
\end{align}
We call $E_i$ the {\em primitive idempotent} of $A$ for $\th_i$ $(0 \leq i \leq d)$.
Observe that $\{A^i\}_{i=0}^d$ is a basis for the $\F$-vector space $\gen{A}$,
and $\prod_{i=0}^d (A-\th_i I) =0$.
Also observe that $\{E_i\}_{i=0}^d$ is a basis for the
$\F$-vector space $\gen{A}$.
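The product formula \eqref{eq:Ei} and properties (i)--(iv) can be checked numerically for a small multiplicity-free map (an illustration we add; a diagonal matrix over $\mathbb{R}$ stands in for $A$):

```python
import numpy as np
from functools import reduce

thetas = [1.0, 2.0, 4.0]          # distinct eigenvalues
A = np.diag(thetas)               # a multiplicity-free A
d = len(thetas) - 1
I = np.eye(d + 1)

def E(i):
    """Primitive idempotent: product over j != i of (A - theta_j I)/(theta_i - theta_j)."""
    factors = [(A - thetas[j] * I) / (thetas[i] - thetas[j])
               for j in range(d + 1) if j != i]
    return reduce(np.matmul, factors)

Es = [E(i) for i in range(d + 1)]
assert np.allclose(sum(Es), I)                                  # I = sum_i E_i
assert np.allclose(Es[0] @ Es[1], 0)                            # E_i E_j = 0 for i != j
assert np.allclose(sum(t * P for t, P in zip(thetas, Es)), A)   # A = sum_i theta_i E_i
```

Each $E(i)$ is the Lagrange interpolation polynomial in $A$ that equals 1 on the $\th_i$-eigenspace and 0 on the others, which is exactly the projection described in the text.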
Let $A,A^*$ denote a Leonard pair on $V$.
By \cite[Lemma 1.3]{T:Leonard} each of $A$, $A^*$ is multiplicity-free.
Let $\{E_i\}_{i=0}^d$ denote an ordering of the primitive idempotents of $A$.
For $0 \leq i \leq d$ pick $0 \neq v_i \in E_i V$.
Then $\{v_i\}_{i=0}^d$ is a basis for $V$.
The ordering $\{E_i\}_{i=0}^d$ is said to be {\em standard}
whenever the basis $\{v_i\}_{i=0}^d$ satisfies Definition \ref{def:LP}(ii).
A standard ordering of the primitive idempotents of $A^*$ is similarly defined.
\begin{definition} \label{def:LS} \samepage
\ifDRAFT {\rm def:LS}. \fi
By a {\em Leonard system} on $V$ we mean a sequence
\begin{equation}
\Phi = (A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d) \label{eq:Phi}
\end{equation}
of elements in $\text{\rm End}(V)$
that satisfy the following (i)--(iii):
\begin{itemize}
\item[\rm (i)]
$A,A^*$ is a Leonard pair on $V$;
\item[\rm (ii)]
$\{E_i\}_{i=0}^d$ is a standard ordering of the primitive idempotents of $A$;
\item[\rm (iii)]
$\{E^*_i\}_{i=0}^d$ is a standard ordering of the primitive idempotents of $A^*$.
\end{itemize}
We say that $\Phi$ is {\em over} $\F$.
\end{definition}
Referring to Definition \ref{def:LS}, the Leonard pair $A, A^*$ from part (i)
is said to be {\it associated with $\Phi$}.
We recall the notion of an isomorphism for Leonard systems.
Consider the Leonard system \eqref{eq:Phi}.
Let $V'$ denote a vector space over $\F$ with dimension $d+1$.
For an $\F$-algebra isomorphism $\sigma : \text{\rm End}(V) \to \text{\rm End}(V')$
define
\[
\Phi^\sigma = (A^\sigma; \{E^\sigma_i\}_{i=0}^d; (A^*)^\sigma; \{(E^*_i)^\sigma\}_{i=0}^d).
\]
Then $\Phi^\sigma$ is a Leonard system on $V'$.
Let $\Phi'$ denote a Leonard system on $V'$.
By an {\em isomorphism of Leonard systems} from $\Phi$ to $\Phi'$
we mean an $\F$-algebra isomorphism $\sigma : \text{\rm End}(V) \to \text{\rm End}(V')$
such that $\Phi' = \Phi^\sigma$.
The Leonard systems $\Phi$ and $\Phi'$ are said to be {\em isomorphic}
whenever there exists an isomorphism of Leonard systems from $\Phi$ to $\Phi'$.
In this case, the isomorphism involved is unique.
Consider a Leonard system $\Phi = (A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d)$ over $\F$.
For $0 \leq i \leq d$ let $\th_i$ (resp.\ $\th^*_i$)
denote the eigenvalue of $A$ (resp.\ $A^*$) corresponding to $E_i$ (resp.\ $E^*_i$).
We call $\{\th_i\}_{i=0}^d$ (resp.\ $\{\th^*_i\}_{i=0}^d$)
the {\em eigenvalue sequence} (resp.\ {\em dual eigenvalue sequence}) of $\Phi$.
Note that $\{\th_i\}_{i=0}^d$ are mutually distinct and contained in $\F$.
Similarly $\{\th^*_i\}_{i=0}^d$ are mutually distinct and contained in $\F$.
Consider a Leonard system $\Phi = (A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d)$ over $\F$.
Then each of the following is a Leonard system over $\F$:
\begin{align*}
\Phi^* &= (A^*; \{E^*_i\}_{i=0}^d; A; \{E_i\}_{i=0}^d),
\\
\Phi^{\downarrow} &= (A; \{E_i\}_{i=0}^d; A^*; \{E^*_{d-i}\}_{i=0}^d),
\\
\Phi^{\Downarrow} &= (A; \{E_{d-i}\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d).
\end{align*}
Viewing $*$, $\downarrow$, $\Downarrow$ as permutations on the set of
all Leonard systems over $\F$,
\begin{align}
*^2 \,=\, \downarrow^2 \,&=\, \Downarrow^2 = 1, \label{eq:rel1}
\\
\Downarrow * \,=\, * \downarrow, \qquad
\downarrow * &\,=\, * \Downarrow, \qquad
\downarrow \Downarrow \,=\, \Downarrow \downarrow. \label{eq:rel2}
\end{align}
The group generated by the symbols $*$, $\downarrow$, $\Downarrow$
subject to the relations \eqref{eq:rel1}, \eqref{eq:rel2} is the dihedral group
$D_4$.
Recall that $D_4$ is the group of symmetries of a square, and has $8$ elements.
The elements $*$, $\downarrow$, $\Downarrow$ induce an action of $D_4$ on the
set of all Leonard systems over $\F$.
Two Leonard systems over $\F$ will be called {\em relatives} whenever they are in the same
orbit of this $D_4$ action.
\begin{definition} \label{def:fg} \samepage
\ifDRAFT {\rm def:fg}. \fi
Let $\Phi$ denote a Leonard system, and let $g \in D_4$.
For any object $f$ attached to $\Phi$,
let $f^g$ denote the corresponding object attached to $\Phi^{g^{-1}}$.
\end{definition}
\begin{lemma} \label{lem:associated} \samepage
\ifDRAFT {\rm lem:associated}. \fi
Let $A,A^*$ denote a Leonard pair on $V$,
and let $\Phi$ denote an associated Leonard system.
Then the Leonard systems associated with $A,A^*$ are
$\Phi$, $\Phi^\downarrow$, $\Phi^\Downarrow$, $\Phi^{\downarrow\Downarrow}$.
\end{lemma}
\begin{proof}
By the comments above Definition \ref{def:LS}.
\end{proof}
\begin{definition} \label{def:selfdualsystem} \samepage
\ifDRAFT {\rm def:selfdualsystem}. \fi
A Leonard system $\Phi$ is said to be {\em self-dual}
whenever $\Phi$ is isomorphic to $\Phi^*$.
\end{definition}
Let $\Phi$ denote a self-dual Leonard system on $V$.
For an automorphism $\sigma$ of $\text{\rm End}(V)$
the following are equivalent:
\begin{itemize} \samepage
\item[\rm (i)]
$\sigma$ is an isomorphism of Leonard systems from $\Phi$ to $\Phi^*$;
\item[\rm (ii)]
$\sigma$ is an isomorphism of Leonard systems from $\Phi^*$ to $\Phi$.
\end{itemize}
There exists a unique automorphism $\sigma$ of $\text{\rm End}(V)$
that satisfies (i), (ii).
\begin{definition} \label{def:duality2} \samepage
\ifDRAFT {\rm def:duality2}. \fi
Let $\Phi$ denote a self-dual Leonard system on $V$.
By the {\em duality $\Phi \leftrightarrow \Phi^*$} we mean
the automorphism of $\text{\rm End}(V)$ that satisfies (i), (ii) above.
\end{definition}
\section{Antiautomorphisms and bilinear forms}
\label{sec:anti}
In this section we recall a few notions from the theory of Leonard pairs.
Let $\cal A$ denote an $\F$-algebra.
By an {\em antiautomorphism} of $\cal A$ we mean
an $\F$-linear bijection $\xi : {\cal A} \to {\cal A}$
such that $(XY)^\xi = Y^\xi X^\xi$ for all $X$, $Y \in {\cal A}$.
\begin{lemma} \cite[Theorem 6.1]{T:survey}
\label{lem:dagger} \samepage
\ifDRAFT {\rm lem:dagger}. \fi
Let $A,A^*$ denote a Leonard pair on $V$.
Then there exists a unique antiautomorphism $\dagger$ of $\text{\rm End}(V)$
such that $A^\dagger = A$ and $(A^*)^\dagger = A^*$.
Moreover $(X^\dagger)^\dagger=X$ for all $X \in \text{\rm End}(V)$.
\end{lemma}
\begin{lemma} \cite[Lemma 6.3]{T:survey}
\label{lem:dagger2} \samepage
\ifDRAFT {\rm lem:dagger2}. \fi
Let $(A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d)$ denote a Leonard system on $V$.
Then the following hold.
\begin{itemize}
\item[\rm (i)]
We have $X^\dagger = X$ for all $X \in \gen{A}$.
In particular, $E_i^\dagger = E_i$ for $0 \leq i \leq d$.
\item[\rm (ii)]
We have $X^\dagger = X$ for all $X \in \gen{A^*}$.
In particular, $(E^*_i)^\dagger = E^*_i$ for $0 \leq i \leq d$.
\end{itemize}
\end{lemma}
By a {\em bilinear form on $V$} we mean a map
$\b{\; ,\,} : V \times V \to \F$ that satisfies the following
four conditions for all $u,v,w \in V$ and for all $\alpha \in \F$:
(i) $\b{u+v,w}=\b{u,w}+\b{v,w}$;
(ii) $\b{\alpha u,v}=\alpha \b{u,v}$;
(iii) $\b{u,v+w}=\b{u,v}+\b{u,w}$;
(iv) $\b{u,\alpha v}=\alpha \b{u,v}$.
Let $\b{\: ,\,}$ denote a bilinear form on $V$.
This form is said to be {\em symmetric} whenever
$\b{u,v}=\b{v,u}$ for all $u,v \in V$.
Let $\b{\;,\,}$ denote a bilinear form on $V$.
Then the following are equivalent:
(i) there exists a nonzero $u \in V$ such that $\b{u,v}=0$ for all $v\in V$;
(ii) there exists a nonzero $v\in V$ such that $\b{u,v}=0$ for all $u\in V$.
The form $\b{\: ,\,}$ is said to be {\em degenerate} whenever (i), (ii) hold
and {\em nondegenerate} otherwise.
Let $\xi$ denote an antiautomorphism of $\text{\rm End}(V)$.
Then there exists a nonzero bilinear form $\b{\;,\,}$ on $V$
such that $\b{Xu,v}=\b{u,X^\xi v}$ for all $u,v \in V$ and
for all $X \in \text{\rm End}(V)$.
The form is unique up to multiplication by a nonzero scalar in $\F$.
The form is nondegenerate.
We refer to this form as the {\em bilinear form on $V$ associated with $\xi$}.
This form is not symmetric in general.
Let $A, A^*$ denote a Leonard pair on $V$.
Recall the antiautomorphism $\dagger$ of $\text{\rm End}(V)$ from Lemma \ref{lem:dagger}.
Let $\b{\; , \,}$ denote the bilinear form on $V$ associated with $\dagger$.
By \cite[Corollary 15.4]{T:qRacah} the bilinear form $\b{\; , \,}$ is symmetric.
By construction, for $X \in \text{\rm End}(V)$ we have
\[
\b{Xu,v}=\b{u,X^\dagger v} \qquad\qquad\qquad (u,v \in V).
\]
In particular,
\[
\b{Au,v} = \b{u,Av}, \qquad\quad
\b{A^* u,v} = \b{u, A^* v} \qquad\qquad\qquad (u,v \in V).
\]
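As a concrete special case (our illustration, not the general form from \cite{T:qRacah}): for $V = \mathbb{R}^n$, the transpose map is an antiautomorphism of $\text{\rm End}(V)$, its associated bilinear form is the standard dot product, and the matrices fixed by it are exactly the symmetric ones, which then satisfy the displayed self-adjointness identity:

```python
import numpy as np

rng = np.random.default_rng(2)

# Transpose is an antiautomorphism of n x n real matrices; the dot
# product is its associated bilinear form: <Xu, v> = <u, X^T v>.
X = rng.random((3, 3))
u, v = rng.random(3), rng.random(3)
assert np.isclose((X @ u) @ v, u @ (X.T @ v))

# A matrix fixed by transpose (symmetric) is self-adjoint for the form,
# mirroring <Au, v> = <u, Av> for the members of a Leonard pair.
A = X + X.T
assert np.isclose((A @ u) @ v, u @ (A @ v))
```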
\section{The split decomposition and the parameter array}
\label{sec:parray}
Let $\Phi = (A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d)$ denote a Leonard system on $V$.
In this section we recall the $\Phi$-split decomposition of $V$
and the parameter array of $\Phi$.
Recall the eigenvalue sequence $\{\th_i\}_{i=0}^d$ and the dual eigenvalue sequence
$\{\th^*_i\}_{i=0}^d$ of $\Phi$.
Let $x$ denote an indeterminate, and
let $\F[x]$ denote the $\F$-algebra consisting of the polynomials in $x$
that have all coefficients in $\F$.
\begin{definition} \cite[Definition 4.3]{T:Leonard}
\label{def:tau} \samepage
\ifDRAFT {\rm def:tau}. \fi
For $0 \leq i \leq d$ we define some polynomials in $\F[x]$:
\begin{align*}
\tau_i &= (x-\th_0)(x - \th_1) \cdots (x-\th_{i-1}), \\
\eta_i &= (x - \th_d)(x - \th_{d-1}) \cdots (x-\th_{d-i+1}), \\
\tau^*_i &= (x - \th^*_0)(x - \th^*_1) \cdots (x - \th^*_{i-1}), \\
\eta^*_i &= (x - \th^*_d)(x - \th^*_{d-1}) \cdots (x - \th^*_{d-i+1}).
\end{align*}
\end{definition}
For $0 \leq i \leq d$ define
\begin{equation}
U_i = (E^*_0 V + \cdots + E^*_i V) \cap (E_i V + \cdots + E_d V). \label{eq:Ui}
\end{equation}
By \cite[Theorem 20.7]{T:survey} the sequence $\{U_i\}_{i=0}^d$ is a decomposition of $V$.
This decomposition is called the {\em $\Phi$-split decomposition} of $V$.
By \cite[Lemma 20.9]{T:survey},
\begin{align*}
(A - \th_i I) U_i &= U_{i+1} \qquad (0 \leq i \leq d-1), & (A-\th_d I) U_d &= 0,
\\
(A^* - \th^*_i I) U_i &= U_{i-1} \qquad (1 \leq i \leq d), & (A^*-\th^*_0 I) U_0 &=0.
\end{align*}
For $0 \leq i \leq d$,
\begin{align*}
\tau_i(A) U_0 &= U_i, &
\eta^*_i(A^*) U_d &= U_{d-i}.
\end{align*}
Pick a nonzero $v \in E^*_0V$.
For $0 \leq i \leq d$ define
$u_i = \tau_i(A) v$.
Then $0 \neq u_i \in U_i$ for $0 \leq i \leq d$.
Moreover, the vectors $\{u_i\}_{i=0}^d$ form a basis for $V$.
We call $\{u_i\}_{i=0}^d$ a {\em $\Phi$-split basis} for $V$.
With respect to a $\Phi$-split basis, the matrices representing $A$ and $A^*$ are
\[
A :
\begin{pmatrix}
\th_0 & & & & & \text{\bf 0} \\
1 & \th_1 \\
& 1 & \th_2 \\
& & \cdot & \cdot \\
& & & \cdot & \cdot \\
\text{\bf 0} & & & & 1 & \th_d \\
\end{pmatrix},
\qquad
A^* :
\begin{pmatrix}
\th^*_0 & \vphi_1 & & & & \text{\bf 0} \\
& \th^*_1 &\vphi_2 \\
& & \th^*_2 & \cdot \\
& & & \cdot & \cdot\\
& & & & \cdot & \vphi_d \\
\text{\bf 0} & & & & & \th^*_d \\
\end{pmatrix},
\]
where $\{\vphi_i\}_{i=1}^d$ are nonzero scalars in $\F$.
The sequence $\{\vphi_i\}_{i=1}^d$ is uniquely determined by $\Phi$,
and called the {\em first split sequence} of $\Phi$.
Let $\{\phi_i\}_{i=1}^d$ denote the first split sequence of $\Phi^\Downarrow$.
We call $\{\phi_i\}_{i=1}^d$ the {\em second split sequence} of $\Phi$.
By the {\em parameter array} of $\Phi$ we mean the sequence
$(\{\th_i\}_{i=0}^d; \{\th^*_i\}_{i=0}^d; \{\vphi_i\}_{i=1}^d; \{\phi_i\}_{i=1}^d)$.
By \cite[Theorem 1.9]{T:Leonard} the Leonard system $\Phi$ is determined
up to isomorphism by its parameter array.
For the rest of this section let
\[
(\{\th_i\}_{i=0}^d; \{\th^*_i\}_{i=0}^d; \{\vphi_i\}_{i=1}^d; \{\phi_i\}_{i=1}^d)
\]
denote the parameter array of $\Phi$.
\begin{lemma} \cite[Theorem 1.11]{T:Leonard}
\label{lem:Phis} \samepage
\ifDRAFT {\rm lem:Phis}. \fi
The following {\rm (i)--(iii)} hold.
\begin{itemize}
\item[\rm (i)]
The parameter array of $\Phi^*$ is
\[
(\{\th^*_i\}_{i=0}^d; \{\th_i\}_{i=0}^d; \{\vphi_i\}_{i=1}^d; \{\phi_{d-i+1}\}_{i=1}^d).
\]
\item[\rm (ii)]
The parameter array of $\Phi^\downarrow$ is
\[
(\{\th_i\}_{i=0}^d; \{\th^*_{d-i}\}_{i=0}^d; \{\phi_{d-i+1}\}_{i=1}^d; \{\vphi_{d-i+1}\}_{i=1}^d).
\]
\item[\rm (iii)]
The parameter array of $\Phi^\Downarrow$ is
\[
(\{\th_{d-i}\}_{i=0}^d; \{\th^*_i\}_{i=0}^d; \{\phi_i\}_{i=1}^d; \{\vphi_i\}_{i=1}^d).
\]
\end{itemize}
\end{lemma}
We mention some results for later use.
\begin{lemma} \label{lem:E0Ed} \samepage
\ifDRAFT {\rm lem:E0Ed}. \fi
We have
\begin{align} \label{eq:E0Ed}
E_0 &= \frac{\eta_d(A)}{\eta_d(\th_0)}, &
E_d &= \frac{\tau_d(A)}{\tau_d(\th_d)}, &
E^*_0 &= \frac{\eta^*_d(A^*)}{\eta^*_d(\th^*_0)}, &
E^*_d &= \frac{\tau^*_d(A^*)}{\tau^*_d (\th^*_d)}.
\end{align}
\end{lemma}
\begin{proof}
By \eqref{eq:Ei} and Definition \ref{def:tau}.
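In more detail, for the first equation: writing $A = \sum_{i=0}^d \th_i E_i$ (the spectral decomposition of $A$) and using $p(A) = \sum_{i=0}^d p(\th_i) E_i$ for any polynomial $p$, we find
\[
\eta_d(A) = \sum_{i=0}^d \eta_d(\th_i) E_i = \eta_d(\th_0) E_0,
\]
since $\eta_d(\th_i) = 0$ for $1 \leq i \leq d$. The remaining equations are verified in the same way.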
\end{proof}
\begin{lemma} \label{lem:MMs} \samepage
\ifDRAFT {\rm lem:MMs}. \fi
For each of the $\F$-vector spaces $\gen{A}$ and $\gen{A^*}$,
we display three bases:
\[
\begin{array}{c|ccc}
\text{\rm vector space $U$} & \multicolumn{3}{c}{\text{\rm three bases for $U$}}
\\ \hline
\gen{A} & \{E_i\}_{i=0}^d & \{\tau_i(A)\}_{i=0}^d & \{\eta_i(A)\}_{i=0}^d \rule{0mm}{2.5ex}
\\
\gen{A^*} & \{E^*_i\}_{i=0}^d & \{\tau^*_i(A^*)\}_{i=0}^d & \{\eta^*_i(A^*)\}_{i=0}^d \rule{0mm}{2.4ex}
\end{array}
\]
\end{lemma}
\begin{proof}
By the comments below \eqref{eq:Ei} along with Definition \ref{def:tau}.
\end{proof}
\section{Some traces}
\label{sec:trace}
Let
$\Phi = (A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d)$
denote a Leonard system on $V$ with parameter array
\[
(\{\th_i\}_{i=0}^d; \{\th^*_i\}_{i=0}^d; \{\vphi_i\}_{i=1}^d; \{\phi_i\}_{i=1}^d).
\]
Later in the paper we will need some facts about $\Phi$ that involve the trace function $\tr$.
Consider the scalars
\begin{equation}
\tr(E_rE^*_0),\qquad \tr(E_rE^*_d), \qquad \tr(E^*_rE_0), \qquad \tr(E^*_rE_d) \label{eq:traces}
\end{equation}
for $0 \leq r \leq d$.
By \cite[Theorem 17.12]{T:qRacah} we find that for $0 \leq r \leq d$,
\begin{align}
\tr(E_r E^*_0) &=
\frac{\vphi_1 \vphi_2 \cdots \vphi_r \, \phi_1 \phi_2 \cdots \phi_{d-r}}
{\eta^*_d(\th^*_0) \tau_r(\th_r) \eta_{d-r}(\th_r)},
\label{eq:trErEs0} \\
\tr(E_r E^*_d) &=
\frac{\phi_d \phi_{d-1} \cdots \phi_{d-r+1}\,
\vphi_d \vphi_{d-1} \cdots \vphi_{r+1}}
{\tau^*_d(\th^*_d) \tau_r(\th_r) \eta_{d-r}(\th_r)},
\label{eq:trErEsd} \\
\tr(E^*_r E_0) &=
\frac{\vphi_1 \vphi_2 \cdots \vphi_r\,
\phi_d \phi_{d-1} \cdots \phi_{r+1}}
{\eta_d(\th_0) \tau^*_r(\th^*_r) \eta^*_{d-r}(\th^*_r)},
\label{eq:trEsrE0} \\
\tr(E^*_r E_d) &=
\frac{\phi_1 \phi_2 \cdots \phi_r\,
\vphi_d \vphi_{d-1} \cdots \vphi_{r+1}}
{\tau_d(\th_d) \tau^*_r(\th^*_r) \eta^*_{d-r}(\th^*_r)}.
\label{eq:trEsrEd}
\end{align}
Note that the scalars in \eqref{eq:trErEs0}--\eqref{eq:trEsrEd} are nonzero.
In particular $\tr(E_0E^*_0)$ is nonzero.
Define $\nu \in \F$ by
\begin{equation}
\nu = \bigl( \tr (E_0 E^*_0) \bigr)^{-1}. \label{eq:defnu}
\end{equation}
By \cite[Lemma 9.4]{T:survey},
\begin{equation}
\nu E_0 E^*_0 E_0 = E_0, \qquad\qquad\qquad
\nu E^*_0 E_0 E^*_0 = E^*_0. \label{eq:E0Es0E0}
\end{equation}
By \eqref{eq:trErEs0}--\eqref{eq:defnu},
\begin{align}
\nu &= \frac{\eta_d(\th_0) \eta^*_d(\th^*_0)}
{\phi_1 \cdots \phi_d},
&
\nu^\downarrow &=
\frac{\eta_d(\th_0) \tau^*_d(\th^*_d)}
{\vphi_1 \cdots \vphi_d}, \label{eq:nud}
\\
\nu^\Downarrow &=
\frac{\tau_d(\th_d) \eta^*_d(\th^*_0)}
{\vphi_1 \cdots \vphi_d},
&
\nu^{\downarrow\Downarrow} &=
\frac{\tau_d(\th_d) \tau^*_d(\th^*_d)}
{\phi_1 \cdots \phi_d}. \label{eq:nuD}
\end{align}
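As a check, the formula for $\nu$ follows from \eqref{eq:trEsrE0} with $r=0$: the products $\vphi_1 \cdots \vphi_0$ and $\tau^*_0(\th^*_0)$ are empty (equal to $1$), so
\[
\tr(E_0 E^*_0) = \tr(E^*_0 E_0) = \frac{\phi_1 \cdots \phi_d}{\eta_d(\th_0) \, \eta^*_d(\th^*_0)},
\]
and $\nu$ is the reciprocal of this scalar by \eqref{eq:defnu}. The expressions for $\nu^\downarrow$, $\nu^\Downarrow$, $\nu^{\downarrow\Downarrow}$ follow by applying this argument to the relatives of $\Phi$ and using Lemma \ref{lem:Phis}.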
We mention a result for later use.
Let $\{U_i\}_{i=0}^d$ denote the $\Phi$-split decomposition of $V$.
For $0 \leq i \leq d$ define $F_i \in \text{\rm End}(V)$
such that $(F_i - I)U_i=0$ and $F_i U_j=0$ if $j \neq i$ $(0 \leq j \leq d)$.
Thus $F_i$ is the projection onto $U_i$.
Observe that (i) $U_i = F_i V$ $(0 \leq i \leq d)$;
(ii) $F_i F_j =\delta_{i,j} F_i$ $(0 \leq i,j\leq d)$;
(iii) $I = \sum_{i=0}^d F_i$.
\begin{lemma} \cite[Corollary 7.4]{NT:unit}
\label{lem:Fi} \samepage
\ifDRAFT {\rm lem:Fi}. \fi
For $0 \leq i \leq d$,
\begin{align}
F_i &= \frac{\nu \tau_i(A) E^*_0 E_0 \tau^*_i (A^*)}
{\vphi_1 \cdots \vphi_i}. \label{eq:Fi}
\end{align}
\end{lemma}
\section{Self-dual Leonard pairs and systems}
\label{sec:selfdual}
Earlier we defined the concept of a self-dual Leonard pair and system.
In this section we make some observations about this concept.
\begin{lemma} \label{lem:sd} \samepage
\ifDRAFT {\rm lem:sd}. \fi
Let $A,A^*$ denote a self-dual Leonard pair on $V$,
and let $\sigma$ denote the duality $A \leftrightarrow A^*$.
Then $\sigma^2 = 1$.
\end{lemma}
\begin{proof}
By construction, $\sigma^2$ fixes each of $A$, $A^*$.
By this and Lemma \ref{lem:generate}, $\sigma^2$ fixes every element
of $\text{\rm End}(V)$. So $\sigma^2 = 1$.
\end{proof}
\begin{lemma}
Let $\Phi = (A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d)$ denote a self-dual Leonard system,
and let $\sigma$ denote the duality $\Phi \leftrightarrow \Phi^*$.
Then the Leonard pair $A, A^*$ is self-dual.
Moreover $\sigma$ is the duality $A \leftrightarrow A^*$.
\end{lemma}
\begin{proof}
By construction.
\end{proof}
\begin{lemma} \label{lem:selfdualpair} \samepage
\ifDRAFT {\rm lem:selfdualpair}. \fi
Let $A,A^*$ denote a self-dual Leonard pair, and let $\sigma$ denote the
duality $A \leftrightarrow A^*$.
Let $\{E_i\}_{i=0}^d$ denote a standard ordering of the primitive idempotents of $A$.
Then the following {\rm (i)--(iii)} hold:
\begin{itemize}
\item[\rm (i)]
$\{E^\sigma_i\}_{i=0}^d$ is a standard ordering of the primitive idempotents of $A^*$;
\item[\rm (ii)]
the sequence $\Phi =(A; \{E_i\}_{i=0}^d; A^*; \{E^\sigma_i\}_{i=0}^d)$ is a self-dual Leonard system;
\item[\rm (iii)]
$\sigma$ is the duality $\Phi \leftrightarrow \Phi^*$.
\end{itemize}
\end{lemma}
\begin{proof}
Note that $A^\sigma = A^*$ and $(A^*)^\sigma = A$.
(i)
Let $\{E^*_i\}_{i=0}^d$ denote a standard ordering of the primitive idempotents for $A^*$,
and consider the Leonard system
\[
\Phi' = (A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d).
\]
We have $(\Phi')^\sigma = (A^* ; \{E^\sigma_i\}_{i=0}^d; A; \{(E^*_i)^\sigma\}_{i=0}^d)$.
The result follows since $(\Phi')^\sigma$ is a Leonard system.
(ii), (iii)
By (i) above and the construction,
$\Phi$ is a Leonard system.
Applying $\sigma$ to $\Phi$ and using Lemma \ref{lem:sd}, we obtain
\[
\Phi^\sigma = (A^*; \{E^\sigma_i\}_{i=0}^d; A; \{E_i\}_{i=0}^d) = \Phi^*.
\]
The result follows.
\end{proof}
The self-dual Leonard systems are characterized as follows.
\begin{lemma} \cite[Proposition 8.7]{NT:affine}
\label{lem:selfdualparam} \samepage
\ifDRAFT {\rm lem:selfdualparam}. \fi
Let $\Phi$ denote a Leonard system over $\F$ with
parameter array
$(\{\th_i\}_{i=0}^d; \{\th^*_i\}_{i=0}^d; \{\vphi_i\}_{i=1}^d; \{\phi_i\}_{i=1}^d)$.
Then $\Phi$ is self-dual if and only if
\begin{align}
\th_i &= \th^*_i && (0 \leq i \leq d). \label{eq:thths}
\end{align}
In this case
\begin{align}
\phi_i &= \phi_{d-i+1} && (1 \leq i \leq d). \label{eq:phi}
\end{align}
\end{lemma}
Let $\Phi = (A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d)$
denote a self-dual Leonard system on $V$,
and let $\sigma$ denote the duality $\Phi \leftrightarrow \Phi^*$.
Our next general goal is to describe $\sigma$.
To do this we will display an invertible $T \in \text{\rm End}(V)$ that gives $\sigma$.
\section{The element $T$}
\label{sec:main}
For the rest of the paper, fix a Leonard system on $V$:
\begin{equation}
\Phi = (A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d). \label{eq:Phi2}
\end{equation}
In this section we introduce an element $T \in \text{\rm End}(V)$;
this element will be used to describe the duality $\Phi \leftrightarrow \Phi^*$
in the self-dual case.
Let
\[
(\{\th_i\}_{i=0}^d; \{\th^*_i\}_{i=0}^d; \{\vphi_i\}_{i=1}^d; \{\phi_i\}_{i=1}^d)
\]
denote the parameter array of $\Phi$.
Let $\dagger$ denote the antiautomorphism of $\text{\rm End}(V)$ that
fixes each of $A$, $A^*$.
Let $\b{\; , \,}$ denote the bilinear form on $V$ associated with $\dagger$,
as discussed at the end of Section \ref{sec:anti}.
\begin{definition} \label{def:T} \samepage
\ifDRAFT {\rm def:T}. \fi
Define $T \in \text{\rm End}(V)$ by
\begin{equation}
T = \sum_{i=0}^d \eta_{d-i}(A) E^*_0 E_d \tau^*_i(A^*). \label{eq:T}
\end{equation}
\end{definition}
\begin{note}
Sometimes it is convenient to express $T$ as a polynomial in $A, A^*$.
Evaluating \eqref{eq:T} using \eqref{eq:E0Ed} we get
\[
T = \sum_{i=0}^d \frac{\eta_{d-i}(A) \eta^*_d(A^*) \tau_d(A) \tau^*_i (A^*)}
{\tau_d(\th_d) \eta^*_d(\th^*_0)}.
\]
\end{note}
We have
\begin{align*}
T^* &= \sum_{i=0}^d \eta^*_{d-i}(A^*) E_0 E^*_d \tau_i(A),
\\
T^\dagger &= \sum_{i=0}^d \tau^*_i(A^*) E_d E^*_0 \eta_{d-i}(A),
\\
(T^*)^\dagger &= \sum_{i=0}^d \tau_i(A) E^*_d E_0 \eta^*_{d-i}(A^*).
\end{align*}
We now state our first main result.
\begin{theorem} \label{thm:main} \samepage
\ifDRAFT {\rm thm:main}. \fi
Assume that $\Phi$ is self-dual.
Then the elements $T$, $T^*$, $T^\dagger$, $(T^*)^\dagger$
are equal and this common element gives the duality $\Phi \leftrightarrow \Phi^*$.
\end{theorem}
Our proof of Theorem \ref{thm:main} is contained in Section \ref{sec:proofmain}.
\section{Some products}
\label{sec:formulaT}
We continue to discuss the Leonard system $\Phi$ from \eqref{eq:Phi2}.
Recall the element $T$ from Definition \ref{def:T}.
In this section we consider the elements $T$, $T^*$, $T^\dagger$, $(T^*)^\dagger$.
We obtain formulas for the products of these elements with the
elements $E_0$, $E^*_0$.
These formulas are used to show that $T=T^* = T^\dagger$
in our proof of Theorem \ref{thm:main}.
\begin{lemma} \label{lem:TEs0} \samepage
\ifDRAFT {\rm lem:TEs0}. \fi
We have
\begin{align}
T E^*_0 &=
\frac{\eta_d(\th_0) \vphi_1 \cdots \vphi_d}
{\tau_d(\th_d) \eta^*_d(\th^*_0)} \,
E_0 E^*_0, \label{eq:TEs0}
\\
T^* E_0 &=
\frac{\eta^*_d(\th^*_0) \vphi_1 \cdots \vphi_d}
{\tau^*_d (\th^*_d) \eta_d (\th_0)} \,
E^*_0 E_0, \label{eq:TsE0}
\\
E^*_0 T^\dagger &=
\frac{\eta_d(\th_0) \vphi_1 \cdots \vphi_d}
{\tau_d(\th_d) \eta^*_d(\th^*_0)} \,
E^*_0 E_0, \label{eq:Es0Td}
\\
E_0 (T^*)^\dagger &=
\frac{\eta^*_d(\th^*_0) \vphi_1 \cdots \vphi_d}
{\tau^*_d (\th^*_d) \eta_d (\th_0)} \,
E_0 E^*_0. \label{eq:E0Tsd}
\end{align}
\end{lemma}
\begin{proof}
We first show \eqref{eq:TEs0}.
In \eqref{eq:T}, multiply each side on the right by $E^*_0$.
Simplify the result using
$\tau^*_i (A^*) E^*_0 = \tau^*_i(\th^*_0) E^*_0$
and $\tau^*_i(\th^*_0) = \delta_{i,0}$ $(0 \leq i \leq d)$ to get
\[
T E^*_0 = \eta_d(A) E^*_0 E_d E^*_0.
\]
By \eqref{eq:E0Ed} we have $\eta_d(A) = \eta_d(\th_0) E_0$.
By \eqref{eq:E0Es0E0} applied to $\Phi^\Downarrow$,
$E^*_0 E_d E^*_0 = (\nu^\Downarrow)^{-1} E^*_0$.
By these comments and \eqref{eq:nuD} we obtain \eqref{eq:TEs0}.
The line \eqref{eq:TsE0} is obtained by applying \eqref{eq:TEs0} to $\Phi^*$.
The lines \eqref{eq:Es0Td} and \eqref{eq:E0Tsd} are obtained by applying $\dagger$
to \eqref{eq:TEs0} and \eqref{eq:TsE0}, respectively.
\end{proof}
\begin{lemma} \cite[Lemma 7.1]{NT:maps}
\label{lem:tauiA} \samepage
\ifDRAFT {\rm lem:tauiA}. \fi
For $0 \leq i,j \leq d$,
\begin{align}
E^*_0 \tau_i(A)\tau^*_j(A^*)E_0 &=
\delta_{i,j} \, \vphi_1 \vphi_2 \cdots \vphi_i \, E^*_0E_0. \label{eq:Es0tauitausjE0}
\end{align}
\end{lemma}
\begin{lemma} \label{lem:TE0} \samepage
\ifDRAFT {\rm lem:TE0}. \fi
We have
\begin{align}
T E_0 &= \frac{\vphi_1 \cdots \vphi_d}{\tau_d(\th_d)} \, E^*_0 E_0, \label{eq:TE0}
\\
T^* E^*_0 &= \frac{\vphi_1 \cdots \vphi_d}{\tau^*_d(\th^*_d)} \, E_0 E^*_0, \label{eq:TsEs0}
\\
E_0 T^\dagger &= \frac{\vphi_1 \cdots \vphi_d}{\tau_d(\th_d)} \, E_0 E^*_0, \label{eq:E0Td}
\\
E^*_0 (T^*)^\dagger &= \frac{ \vphi_1 \cdots \vphi_d}{\tau^*_d(\th^*_d)} \, E^*_0 E_0. \label{eq:Es0Tsd}
\end{align}
\end{lemma}
\begin{proof}
In \eqref{eq:T}, multiply each side on the right by $E_0$.
Simplify the result using $E_d=\tau_d(A) / \tau_d(\th_d)$ and \eqref{eq:Es0tauitausjE0} to get \eqref{eq:TE0}.
The line \eqref{eq:TsEs0} is obtained by applying \eqref{eq:TE0} to $\Phi^*$.
The lines \eqref{eq:E0Td} and \eqref{eq:Es0Tsd} are obtained by applying $\dagger$
to \eqref{eq:TE0} and \eqref{eq:TsEs0}, respectively.
\end{proof}
\section{The proof of Theorem \ref{thm:main}}
\label{sec:proofmain}
In this section we prove Theorem \ref{thm:main}.
Recall the Leonard system $\Phi$ from \eqref{eq:Phi2} and
the element $T$ from Definition \ref{def:T}.
\begin{lemma} \label{lem:T2} \samepage
\ifDRAFT {\rm lem:T2}. \fi
We have
\begin{equation}
T^2 =
(\nu^\Downarrow)^{-1} \phi_1 \cdots \phi_d
\sum_{j=0}^d
\frac{\eta_j(A) E^*_0 E_d \tau^*_j(A^*)}
{\phi_d \cdots \phi_{d-j+1}}. \label{eq:T2}
\end{equation}
\end{lemma}
\begin{proof}
By \eqref{eq:T},
\begin{equation}
T^2= \sum_{i=0}^d \sum_{j=0}^d
\eta_{d-i}(A) E^*_0 E_d \tau^*_i(A^*)
\eta_{d-j}(A) E^*_0 E_d \tau^*_j(A^*). \label{eq:aux1}
\end{equation}
Applying \eqref{eq:Es0tauitausjE0} to $\Phi^{\Downarrow *}$,
\[
E_d \tau^*_i(A^*) \eta_j (A) E^*_0 = \delta_{i,j} \phi_1 \cdots \phi_i E_d E^*_0
\qquad\qquad (0 \leq i,j \leq d).
\]
In this line, replace $j$ with $d-j$ to get
\[
E_d \tau^*_i (A^*) \eta_{d-j}(A) E^*_0
= \delta_{i, d-j} \phi_1 \cdots \phi_{d-j} E_d E^*_0 \qquad\qquad (0 \leq i,j \leq d).
\]
By this and \eqref{eq:aux1},
\begin{equation}
T^2 = \sum_{j=0}^d \phi_1 \cdots \phi_{d-j}
\eta_j(A) E^*_0 E_d E^*_0 E_d \tau^*_j (A^*). \label{eq:aux2}
\end{equation}
Applying \eqref{eq:E0Es0E0} to $\Phi^\Downarrow$,
\[
E^*_0 E_d E^*_0 = (\nu^\Downarrow)^{-1} E^*_0.
\]
By this and \eqref{eq:aux2} we get \eqref{eq:T2}.
\end{proof}
\begin{proposition} \label{prop:T2} \samepage
\ifDRAFT {\rm prop:T2}. \fi
Assume that $\Phi$ is self-dual.
Then $T$ is invertible.
Moreover, $T^2 = \lambda I$,
where
\[
\lambda = (\nu^\Downarrow)^{-2} \phi_1 \cdots \phi_d.
\]
\end{proposition}
\begin{proof}
By Lemma \ref{lem:selfdualparam} the sum in \eqref{eq:T2} is equal to
\[
\sum_{j=0}^d
\frac{\eta_j(A) E^*_0 E_d \tau^*_j(A^*)}
{\phi_1 \cdots \phi_{j}}.
\]
Applying \eqref{eq:Fi} to $\Phi^\Downarrow$ and using $I=\sum_{j=0}^d F_j^\Downarrow$,
we find that the above sum is equal to $(\nu^\Downarrow)^{-1} I$.
Thus $T^2 = \lambda I$.
By construction $\lambda \neq 0$ so $T$ is invertible.
\end{proof}
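By \eqref{eq:nuD}, the scalar $\lambda$ in Proposition \ref{prop:T2} takes the explicit form
\[
\lambda = \frac{(\vphi_1 \cdots \vphi_d)^2 \, \phi_1 \cdots \phi_d}
{\tau_d(\th_d)^2 \, \eta^*_d(\th^*_0)^2}.
\]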
\begin{lemma} \label{lem:ATTAs} \samepage
\ifDRAFT {\rm lem:ATTAs}. \fi
Assume that $\Phi$ is self-dual.
Then
\begin{align}
A T &= T A^*,
&
A^* T &= T A. \label{eq:ATTAs}
\end{align}
\end{lemma}
\begin{proof}
We first show $AT=TA^*$.
For $0 \leq i \leq d$ define
\[
T_i = \eta_{d-i}(A) E^*_0 E_d \tau^*_i(A^*).
\]
The element $T_i$ is the $i$-summand in \eqref{eq:T}, so $T = \sum_{i=0}^d T_i$.
By Definition \ref{def:tau} along with
$\prod_{\ell=0}^d (A-\th_\ell I) = 0$ and
$\prod_{\ell=0}^d (A^* - \th^*_\ell I) =0$,
\begin{align*}
A T_0 - \th_0 T_0 &= 0,
\\
A T_i - \th_i T_i &= T_{i-1} A^* - \th^*_{i-1} T_{i-1} \qquad\qquad(1 \leq i \leq d),
\\
0 &= T_d A^* - \th^*_d T_d.
\end{align*}
By these comments
\[
A T - \sum_{i=0}^d \th_i T_i = T A^* - \sum_{i=0}^d \th^*_i T_i.
\]
By this and \eqref{eq:thths} we see that $A T = T A^*$.
In this equation, multiply each side on the left and right by $T$.
Simplify the result using Proposition \ref{prop:T2} to get $A^* T = T A$.
\end{proof}
\begin{corollary} \label{cor:EiTTEis} \samepage
\ifDRAFT {\rm cor:EiTTEis}. \fi
Assume that $\Phi$ is self-dual.
Then
\begin{align}
E_i T &= T E^*_i, & E^*_i T &= T E_i && (0 \leq i \leq d). \label{eq:EiT}
\end{align}
\end{corollary}
\begin{proof}
By \eqref{eq:Ei} and \eqref{eq:ATTAs}.
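In more detail: assuming \eqref{eq:Ei} is the interpolation formula $E_i = \prod_{\ell \neq i} (A - \th_\ell I)/(\th_i - \th_\ell)$, each $E_i$ is a polynomial in $A$, say $E_i = p_i(A)$; by \eqref{eq:thths} the same polynomial gives $E^*_i = p_i(A^*)$. Hence by \eqref{eq:ATTAs},
\[
E_i T = p_i(A) T = T p_i(A^*) = T E^*_i \qquad\qquad (0 \leq i \leq d),
\]
and similarly $E^*_i T = T E_i$.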
\end{proof}
\begin{proofof}{Theorem \ref{thm:main}}
By Proposition \ref{prop:T2}, $T$ is invertible.
By \eqref{eq:ATTAs} and \eqref{eq:EiT},
$T$ gives the duality $\Phi \leftrightarrow \Phi^*$.
Next we show that $T=T^*$.
In the above statement, we replace $T$ by $T^*$ and swap the roles of $\Phi$, $\Phi^*$
to see that $T^*$ gives the duality $\Phi \leftrightarrow \Phi^*$.
Thus each of $T$ and $T^*$ gives the duality $\Phi \leftrightarrow \Phi^*$.
By this and the comment above Definition \ref{def:selfdual},
there exists $0 \neq \zeta \in \F$ such that $T^* = \zeta T$.
We show that $\zeta = 1$.
By \eqref{eq:TsE0}, \eqref{eq:TE0} together with \eqref{eq:thths}
we find that $T^* E_0$ and $T E_0$ have the same trace.
This trace is nonzero by the comments above \eqref{eq:defnu}.
Thus $\zeta = 1$ and so $T=T^*$.
Next we show that $T=T^\dagger$.
In \eqref{eq:ATTAs} and \eqref{eq:EiT}, apply $\dagger$ to each side
and use Lemma \ref{lem:dagger2} to find that $T^\dagger$ gives the duality $\Phi \leftrightarrow \Phi^*$.
Thus each of $T$ and $T^\dagger$ gives the duality $\Phi \leftrightarrow \Phi^*$.
By this and the comment above Definition \ref{def:selfdual},
there exists $0 \neq \zeta' \in \F$ such that $T^\dagger = \zeta' T$.
We show that $\zeta' = 1$.
By \eqref{eq:Es0Td}, \eqref{eq:TsEs0} together with \eqref{eq:thths} and $T=T^*$,
we find that $E^*_0 T^\dagger$ and $E^*_0 T$ have the same trace.
This trace is nonzero by the comments above \eqref{eq:defnu}.
Thus $\zeta' = 1$ and so $T=T^\dagger$.
In the equation $T = T^*$, apply $\dagger$ to each side to get $T^\dagger = (T^*)^\dagger$.
We have shown that the elements $T$, $T^*$, $T^\dagger$, $(T^*)^\dagger$ are equal.
\end{proofof}
\section{Some decompositions and flags associated with a Leonard system}
\label{sec:decomp}
Consider the Leonard system $\Phi$ from \eqref{eq:Phi2}.
Recall from Section 1 the notion of a flag on $V$,
and what it means for two flags on $V$ to be opposite.
In this section we use $\Phi$ to obtain four mutually
opposite flags on $V$;
these are induced by the eigenspace decompositions of $A$ and $A^*$,
as well as the split decomposition for $\Phi$ and its relatives.
In the next section, we will describe how $T$ acts on these flags
and decompositions.
\begin{definition} \label{def:Omega} \samepage
For notational convenience let
$\Omega$ denote the set consisting of four symbols
$0,D,0^*,D^*$.
\end{definition}
\begin{definition} \label{def:flags} \samepage
\ifDRAFT {\rm def:flags}. \fi
For $z \in \Omega$ we define a flag on $V$ which we denote by $[z]$.
To define this flag we display the $i^\text{th}$ component for $0 \leq i \leq d$.
\[
\begin{array}{c|c}
z & \text{$i^\text{th}$ component of $[z]$} \\
\hline
0 & E_0V+E_1V+\cdots+E_iV \rule{0mm}{2.7ex} \\
D & E_dV+E_{d-1}V+\cdots+E_{d-i}V \rule{0mm}{2.5ex} \\
0^* & E^*_0V+E^*_1V+\cdots+E^*_iV \rule{0mm}{2.5ex} \\
D^* & E^*_dV+ E^*_{d-1}V+\cdots+E^*_{d-i}V \rule{0mm}{2.5ex}
\end{array}
\]
\end{definition}
\begin{lemma} \cite[Theorem 7.3]{T:24points}
\label{lem:opposite} \samepage
\ifDRAFT {\rm lem:opposite}. \fi
The four flags in Definition \ref{def:flags} are mutually opposite.
\end{lemma}
\begin{definition} \label{def:decompositions} \samepage
\ifDRAFT {\rm def:decompositions}. \fi
Let $z,w$ denote an ordered pair of distinct elements of $\Omega$.
By Lemma \ref{lem:opposite} the flags $[z]$, $[w]$ are opposite.
Let $[zw]$ denote the decomposition of $V$ induced by $[z]$, $[w]$.
\end{definition}
Let $\{V_i\}_{i=0}^d$ denote a decomposition of $V$.
By the {\em inversion} of this decomposition we mean
the decomposition $\{V_{d-i}\}_{i=0}^d$.
By \cite[Lemma 8.6]{NT:switch}
the decompositions in Definition \ref{def:decompositions}
have the following features.
For distinct $z$, $w \in \Omega$ we have
(i)
the decomposition $[zw]$ is the inversion of $[wz]$;
(ii)
for $0 \leq i \leq d$ the $i^\text{th}$ component of $[zw]$ is the intersection
of the $i^\text{th}$ component of $[z]$ and the $(d-i)^\text{th}$ component of $[w]$;
(iii)
the decomposition $[zw]$ induces $[z]$ and the inversion of $[zw]$ induces $[w]$.
\begin{example} \label{exam:decompositions} \samepage
\ifDRAFT {\rm exam:decompositions}. \fi
We display some of the decompositions from
Definition \ref{def:decompositions}.
For each decomposition in the table below we give
the $i^\text{th}$ component for $0 \leq i \leq d$.
\[
\begin{array}{c|c}
\text{decomposition} & \text{$i^\text{th}$ component} \\
\hline
\;\;\; [0^*D] \;\;\; &
\;\;\; (E^*_0V+\cdots+E^*_iV)\cap(E_iV+\cdots+E_dV) \rule{0mm}{2.7ex} \\
{[D^*D]} &
(E^*_dV+\cdots+E^*_{d-i}V)\cap(E_iV+\cdots+E_dV) \rule{0mm}{2.5ex} \\
{[0^*0]} &
(E^*_0V+\cdots+E^*_{i}V)\cap(E_{d-i}V+\cdots+E_{0}V) \rule{0mm}{2.5ex} \\
{[D^*0]} &
(E^*_dV+\cdots+E^*_{d-i}V)\cap(E_{d-i}V+\cdots+E_{0}V) \rule{0mm}{2.5ex} \\
{[0D]} &
E_iV \\
{[0^*D^*]} &
E^*_iV
\end{array}
\]
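Each entry in the above table follows from feature (ii) above. For instance, for $[0D]$ the $i^\text{th}$ component is
\[
(E_0V + \cdots + E_iV) \cap (E_dV + \cdots + E_iV) = E_iV \qquad\qquad (0 \leq i \leq d),
\]
since the sum $\sum_{j=0}^d E_jV$ is direct.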
\end{example}
\section{The action of $T$ on the flags and decompositions}
\label{sec:actTflag}
Recall the Leonard system $\Phi$ from \eqref{eq:Phi2}
and the element $T$ from Definition \ref{def:T}.
In this section we describe how $T$ acts on the flags from Definition \ref{def:flags}
and the decompositions from Definition \ref{def:decompositions}.
\begin{lemma} \label{lem:TEiV} \samepage
\ifDRAFT {\rm lem:TEiV}. \fi
Assume that $\Phi$ is self-dual.
Then
\[
T E_i V = E^*_i V, \qquad\qquad T E^*_i V = E_i V \qquad\qquad (0 \leq i \leq d).
\]
\end{lemma}
\begin{proof}
By \eqref{eq:EiT}, $T E_i V = E^*_i TV$.
We have $T V = V$ since $T$ is invertible.
By these comments $T E_i V = E^*_i V$.
Similarly $T E^*_i V = E_i V$.
\end{proof}
For a sequence $H=\{H_i\}_{i=0}^d$ of subspaces of $V$,
let $T H$ denote the sequence $\{T H_i\}_{i=0}^d$.
\begin{proposition} \label{prop:Tflags} \samepage
\ifDRAFT {\rm prop:Tflags}. \fi
Assume that $\Phi$ is self-dual.
Then
\begin{align*}
T [0] &= [0^*], & T[0^*] &=[0], & T[D] &=[D^*], & T[D^*] &=[D].
\end{align*}
\end{proposition}
\begin{proof}
By Definition \ref{def:flags} and Lemma \ref{lem:TEiV}.
\end{proof}
\begin{proposition} \label{prop:Tdecomp} \samepage
\ifDRAFT {\rm prop:Tdecomp}. \fi
Assume that $\Phi$ is self-dual.
In the table below we give some decompositions $u$ of $V$.
For each decomposition $u$ we give $T u$.
\[
\begin{array}{c|cccccc}
u & [0^*D] & [D^*D] & [0^*0] & [D^*0] & [0D] & [0^*D^*]
\\ \hline
T u & [0D^*] & [DD^*] & [00^*] & [D0^*] & [0^*D^*] & [0D] \rule{0mm}{2.4ex}
\end{array}
\]
\end{proposition}
\begin{proof}
First consider the case $u = [0^* D]$.
By Definition \ref{def:decompositions} the decomposition $u$ is induced by
the ordered pair of flags $[0^*]$, $[D]$.
By this and since $T$ is invertible,
the decomposition $T u$ is induced by the ordered pair of flags
$T [0^*]$, $T[D]$.
By Proposition \ref{prop:Tflags} we have $T [0^*] = [0]$ and $T[D]= [D^*]$.
By these comments and Definition \ref{def:decompositions}, $T u = [0 D^*]$.
We have shown the result for the case $u = [0^* D]$.
For the other cases the proof is similar.
\end{proof}
\section{The 24 bases}
\label{sec:24bases}
Recall the Leonard system $\Phi$ from \eqref{eq:Phi2}
and the element $T$ from Definition \ref{def:T}.
In \cite{T:24points} the second author introduced $24$ bases for $V$
on which $A$, $A^*$ act in an attractive manner.
Our next goal is to describe how $T$ acts on these bases.
In this section we define the $24$ bases and give their basic properties.
Let $v_0$, $v_d$, $v^*_0$, $v^*_d$ denote nonzero vectors in $V$ such that
\begin{equation} \label{eq:defv0vd}
v_0 \in E_0V, \qquad
v_d \in E_dV, \qquad
v^*_0 \in E^*_0V, \qquad
v^*_d \in E^*_dV.
\end{equation}
We consider the decompositions from Definition \ref{def:decompositions}.
\begin{lemma} \label{lem:0sD} \samepage
\ifDRAFT {\rm lem:0sD}. \fi
For each row in the table below, consider the decomposition $\{U_i\}_{i=0}^d$ of $V$
in the first column.
For $0 \leq i \leq d$,
each of the vectors in the second and third columns is a basis for $U_i$.
\[
\begin{array}{c|c|c}
\text{\rm decomposition $\{U_i\}_{i=0}^d$} & \text{\rm basis for $U_i$}
& \text{\rm basis for $U_i$}
\\ \hline
{[0^* D]} & \tau_i (A) v^*_0 & \eta^*_{d-i}(A^*) v_d \rule{0mm}{2.7ex}
\\
{[D^* D]} & \tau_i (A) v^*_d & \tau^*_{d-i}(A^*) v_d \rule{0mm}{2.5ex}
\\
{[0^* 0]} & \eta_i (A) v^*_0 & \eta^*_{d-i}(A^*) v_0 \rule{0mm}{2.5ex}
\\
{[D^* 0]} & \eta_i (A) v^*_d & \tau^*_{d-i} (A^*) v_0 \rule{0mm}{2.5ex}
\end{array}
\]
\end{lemma}
\begin{proof}
By \cite[Lemma 8.8]{NT:switch}.
\end{proof}
\begin{corollary}
\label{cor:0sD} \samepage
\ifDRAFT {\rm cor:0sD}. \fi
Each of the following $8$ sequences is a basis for $V$:
\begin{align}
& \{\tau_i(A) v^*_0\}_{i=0}^d,
&& \{\tau_i(A) v^*_d\}_{i=0}^d,
&& \{\eta_i(A) v^*_0\}_{i=0}^d,
&& \{\eta_i(A) v^*_d\}_{i=0}^d, \label{eq:tauivs0}
\\
& \{\tau^*_{d-i}(A^*) v_0\}_{i=0}^d,
&& \{\tau^*_{d-i}(A^*) v_d \}_{i=0}^d,
&& \{\eta^*_{d-i}(A^*) v_0\}_{i=0}^d,
&& \{\eta^*_{d-i}(A^*) v_d \}_{i=0}^d. \label{eq:tausd-iv0}
\end{align}
\end{corollary}
\begin{proof}
By Lemma \ref{lem:0sD}.
\end{proof}
\begin{lemma} \label{lem:D0s} \samepage
\ifDRAFT {\rm lem:D0s}. \fi
For each row in the table below, consider the decomposition $\{U_i\}_{i=0}^d$ of $V$
in the first column.
For $0 \leq i \leq d$,
each of the vectors in the second and third columns is a basis for $U_i$.
\[
\begin{array}{c|c|c}
\text{\rm decomposition $\{U_i\}_{i=0}^d$}
& \text{\rm basis for $U_i$} & \text{\rm basis for $U_i$}
\\ \hline
{[D 0^*]} & \eta^*_i (A^*) v_d & \tau_{d-i}(A) v^*_0 \rule{0mm}{2.7ex}
\\
{[D D^*]} & \tau^*_i (A^*) v_d & \tau_{d-i}(A) v^*_d \rule{0mm}{2.5ex}
\\
{[0 0^*]} & \eta^*_i (A^*) v_0 & \eta_{d-i} (A) v^*_0 \rule{0mm}{2.5ex}
\\
{[0 D^*]} &\tau^*_i (A^*) v_0 & \eta_{d-i}(A) v^*_d \rule{0mm}{2.5ex}
\end{array}
\]
\end{lemma}
\begin{proof}
These are the inversions of the decompositions in Lemma \ref{lem:0sD}.
\end{proof}
\begin{corollary}
\label{cor:D0s} \samepage
\ifDRAFT {\rm cor:D0s}. \fi
Each of the following $8$ sequences is a basis for $V$:
\begin{align}
& \{\tau^*_i(A^*) v_0\}_{i=0}^d,
&& \{\tau^*_i(A^*) v_d\}_{i=0}^d,
&& \{\eta^*_i(A^*) v_0\}_{i=0}^d,
&& \{\eta^*_i(A^*) v_d\}_{i=0}^d, \label{eq:tausiv0}
\\
& \{\tau_{d-i}(A) v^*_0 \}_{i=0}^d,
&& \{\tau_{d-i}(A) v^*_d\}_{i=0}^d,
&& \{\eta_{d-i}(A) v^*_0 \}_{i=0}^d,
&& \{\eta_{d-i}(A) v^*_d\}_{i=0}^d. \label{eq:taud-ivs0}
\end{align}
\end{corollary}
\begin{proof}
By Lemma \ref{lem:D0s}.
\end{proof}
\begin{lemma} \label{lem:0D} \samepage
\ifDRAFT {\rm lem:0D}. \fi
For each row in the table below, consider the decomposition $\{U_i\}_{i=0}^d$ of $V$
in the first column.
For $0 \leq i \leq d$,
each of the vectors in the second and third columns is a basis for $U_i$.
\[
\begin{array}{c|c|c}
\text{\rm decomposition $\{U_i\}_{i=0}^d$}
& \text{\rm basis for $U_i$} & \text{\rm basis for $U_i$}
\\ \hline
{[0D]} & E_i v^*_0 & E_i v^*_d \rule{0mm}{2.7ex}
\\
{[0^* D^*]} & E^*_i v_0 & E^*_i v_d \rule{0mm}{2.5ex}
\\
{[D0]} & E_{d-i} v^*_0 & E_{d-i} v^*_d \rule{0mm}{2.5ex}
\\
{[D^* 0^*]} & E^*_{d-i} v_0 & E^*_{d-i} v_d \rule{0mm}{2.5ex}
\end{array}
\]
\end{lemma}
\begin{proof}
First consider the decomposition $[0D]$.
By Example \ref{exam:decompositions}, $E_i v^*_0 \in U_i$.
By \cite[Lemma 10.2]{T:survey}, $E_i v^*_0 \neq 0$.
Thus $E_i v^*_0$ is a basis for $U_i$.
Similarly $E_i v^*_d$ is a basis for $U_i$.
The proof is similar for the remaining decompositions.
\end{proof}
\begin{corollary}
\label{cor:0D} \samepage
\ifDRAFT {\rm cor:0D}. \fi
Each of the following $8$ sequences is a basis for $V$:
\begin{align}
& \{E_i v^*_0\}_{i=0}^d,
&& \{E_i v^*_d \}_{i=0}^d,
&& \{E_{d-i} v^*_0\}_{i=0}^d,
&& \{E_{d-i} v^*_d\}_{i=0}^d, \label{eq:Eivs0}
\\
& \{E^*_i v_0\}_{i=0}^d,
&& \{E^*_i v_d\}_{i=0}^d,
&& \{E^*_{d-i} v_0\}_{i=0}^d,
&& \{E^*_{d-i} v_d\}_{i=0}^d. \label{eq:Esiv0}
\end{align}
\end{corollary}
\begin{proof}
By Lemma \ref{lem:0D}.
\end{proof}
\begin{note}
The 24 bases \eqref{eq:tauivs0}--\eqref{eq:Esiv0} were investigated by
the second author in \cite{T:24points}.
In \cite[Theorem 11.2]{T:24points} the matrices representing $A$ and $A^*$
with respect to these 24 bases are given.
In \cite[Section 15]{T:24points}
the transition matrices between these 24 bases are given.
\end{note}
Let $\{u_i\}_{i=0}^d$ denote a basis for $V$. Then $\{u_{d-i}\}_{i=0}^d$ is a basis for $V$,
called the {\em inversion of $\{u_i\}_{i=0}^d$}.
For each of the $24$ bases listed in \eqref{eq:tauivs0}--\eqref{eq:Esiv0},
its inversion is listed in \eqref{eq:tauivs0}--\eqref{eq:Esiv0}.
\section{Some relationships among the $24$ bases}
\label{sec:rel24}
Recall the Leonard system $\Phi$ from \eqref{eq:Phi2}.
In Lemmas \ref{lem:0sD}, \ref{lem:D0s}, \ref{lem:0D}
we gave some decompositions of $V$.
For each decomposition and $0 \leq i \leq d$
we gave two bases for its $i^\text{th}$ component.
In this section we show how these bases are related.
To do this, we consider the following inner products:
\begin{align}
& \b{v_0, v_0}, \qquad
\b{v_d, v_d}, \qquad
\b{v^*_0, v^*_0}, \qquad
\b{v^*_d, v^*_d}, \label{eq:first}
\\
& \b{v_0, v^*_0}, \qquad
\b{v_0, v^*_d}, \qquad
\b{v_d, v^*_0}, \qquad
\b{v_d, v^*_d}. \label{eq:second}
\end{align}
The above scalars are all nonzero
by \cite[Lemma 15.5]{T:qRacah} applied to the relatives of $\Phi$.
\begin{lemma} \cite[Lemma 9.5]{NT:maps}
\label{lem:E0xis0} \samepage
\ifDRAFT {\rm lem:E0xis0}. \fi
We have
\begin{align}
E_0 \,v^*_0 &= \frac{\b{v_0, v^*_0}}{\b{v_0, v_0}} \, v_0, &
E_d \, v^*_0 &= \frac{\b{v_d, v^*_0}}{\b{v_d, v_d}} \, v_d,
\label{eq:vs0} \\
E_0 \, v^*_d &= \frac{\b{v_0, v^*_d}}{\b{v_0, v_0}} \, v_0, &
E_d \, v^*_d &= \frac{\b{v_d, v^*_d}}{\b{v_d, v_d}} \, v_d,
\label{eq:vsd} \\
E^*_0 \, v_0 &= \frac{\b{v_0, v^*_0}}{\b{v^*_0, v^*_0}} \, v^*_0, &
E^*_d \, v_0 &= \frac{\b{v_0, v^*_d}}{\b{v^*_d, v^*_d}} \, v^*_d,
\label{eq:v0} \\
E^*_0 \, v_d &= \frac{\b{v_d, v^*_0}}{\b{v^*_0, v^*_0}} \, v^*_0, &
E^*_d \, v_d &= \frac{\b{v_d, v^*_d}}{\b{v^*_d, v^*_d}} \, v^*_d.
\label{eq:vd}
\end{align}
\end{lemma}
The scalars \eqref{eq:first}, \eqref{eq:second} satisfy the following relations.
\begin{lemma} \cite[Lemma 9.7]{NT:maps}
\label{lem:rels1} \samepage
\ifDRAFT {\rm lem:rels1}. \fi
We have
\begin{align}
\frac{\b{v_0, v^*_d} \b{v_d,v^*_0} }{ \b{v_0, v^*_0} \b{v_d,v^*_d} }
&= \frac{\vphi_1 \cdots \vphi_d}
{\phi_1 \cdots \phi_d}. \label{eq:newrel}
\end{align}
\end{lemma}
\begin{lemma} \cite[Corollary 8.3, Lemma 9.6]{NT:maps}
\label{lem:rels2} \samepage
\ifDRAFT {\rm lem:rels2}. \fi
We have
\begin{align}
\frac{ \b{v_0,v_0} \b{v^*_0,v^*_0} } { \b{v_0,v^*_0}^2 }
&= \frac{ \eta_d(\th_0) \eta^*_d (\th^*_0) } { \phi_1 \cdots \phi_d}, \label{eq:00s}
\\
\frac{ \b{v_0,v_0} \b{v^*_d,v^*_d} } { \b{v_0,v^*_d}^2 }
&= \frac{ \eta_d(\th_0) \tau^*_d(\th^*_d) } { \vphi_1 \cdots \vphi_d }, \label{eq:0ds}
\\
\frac{ \b{v_d,v_d} \b{v^*_0,v^*_0} } { \b{v_d,v^*_0}^2 }
&= \frac{ \tau_d(\th_d) \eta^*_d(\th^*_0) } { \vphi_1 \cdots \vphi_d }, \label{eq:d0s}
\\
\frac{ \b{v_d,v_d} \b{v^*_d,v^*_d} } { \b{v_d,v^*_d}^2 }
&= \frac{ \tau_d(\th_d) \tau^*_d(\th^*_d) } { \phi_1 \cdots \phi_d}. \label{eq:dds}
\end{align}
\end{lemma}
\begin{note} \label{note:rel} \samepage
\ifDRAFT {\rm note:rel}. \fi
By \eqref{eq:00s}--\eqref{eq:dds} the scalars \eqref{eq:second} are determined
up to sign by the scalars \eqref{eq:first} and the parameter array.
\end{note}
Our next goal is to describe
how the bases in Lemmas \ref{lem:0sD}, \ref{lem:D0s}, \ref{lem:0D} are related.
The bases in Lemma \ref{lem:0sD} are related as follows.
\begin{lemma} \label{lem:trans1b} \samepage
\ifDRAFT {\rm lem:trans1b}. \fi
For $0 \leq i \leq d$,
\begin{align}
\tau^*_{d-i}(A^*) v_0
&= \frac{\tau^*_d(\th^*_d) } {\vphi_d \cdots \vphi_{d-i+1} } \,
\frac{ \b{v_0,v^*_d} } { \b{v^*_d,v^*_d} } \, \eta_i(A) v^*_d, \label{eq:tausd-iAsv02}
\\
\eta^*_{d-i}(A^*) v_0
&= \frac{\eta^*_d(\th^*_0) } { \phi_1 \cdots \phi_i } \,
\frac{ \b{v_0,v^*_0} } { \b{v^*_0,v^*_0} } \, \eta_i(A) v^*_0, \label{eq:etasd-iAsv02}
\\
\tau^*_{d-i}(A^*) v_d
&= \frac{ \tau^*_d(\th^*_d) } { \phi_d \cdots \phi_{d-i+1} } \,
\frac{ \b{v_d,v^*_d} } { \b{v^*_d,v^*_d} } \, \tau_i(A) v^*_d, \label{eq:tausd-iAsvd2}
\\
\eta^*_{d-i}(A^*) v_d
&= \frac{ \eta^*_d(\th^*_0) } { \vphi_1 \cdots \vphi_i } \,
\frac{ \b{v_d,v^*_0} } { \b{v^*_0, v^*_0} } \, \tau_i(A) v^*_0. \label{eq:etasd-iAsvd2}
\end{align}
\end{lemma}
\begin{proof}
We first show \eqref{eq:tausd-iAsv02}.
By \cite[Theorem 5.2]{NT:unit},
\begin{equation}
\tau^*_{d-i}(A^*) E_0
= \frac{\tau^*_d(\th^*_d)} { \vphi_d \cdots \vphi_{d-i+1} } \, \eta_i(A) E^*_d E_0. \label{eq:formula1}
\end{equation}
In this line, apply each side to $v_0$ and use $E_0 v_0 = v_0$.
Simplify the result using the equation on the right in \eqref{eq:v0}.
This gives \eqref{eq:tausd-iAsv02}.
To get the remaining equations, apply \eqref{eq:tausd-iAsv02} to
$\Phi^\downarrow$, $\Phi^\Downarrow$, $\Phi^{\downarrow\Downarrow}$,
and use Lemma \ref{lem:Phis}.
\end{proof}
The bases in Lemma \ref{lem:D0s} are related as follows.
\begin{lemma} \label{lem:trans1c} \samepage
\ifDRAFT {\rm lem:trans1c}. \fi
For $0 \leq i \leq d$,
\begin{align}
\tau_{d-i}(A) v^*_0
&= \frac{ \tau_d(\th_d) } { \vphi_d \cdots \vphi_{d-i+1} } \,
\frac{ \b{v_d,v^*_0} } {\b{v_d,v_d} } \, \eta^*_i (A^*) v_d, \label{eq:taud-iAvs02}
\\
\eta_{d-i}(A) v^*_0
&= \frac{ \eta_d(\th_0) } { \phi_d \cdots \phi_{d-i+1} } \,
\frac{ \b{v_0,v^*_0} } { \b{v_0,v_0} } \, \eta^*_i(A^*) v_0, \label{eq:etad-iAvs02}
\\
\tau_{d-i}(A) v^*_d
&= \frac{\tau_d(\th_d) } {\phi_1 \cdots \phi_i } \,
\frac{ \b{v_d,v^*_d} } { \b{v_d,v_d} } \, \tau^*_i(A^*) v_d, \label{eq:taud-iAvsd2}
\\
\eta_{d-i}(A) v^*_d
&= \frac{\eta_d(\th_0) } { \vphi_1 \cdots \vphi_i } \,
\frac{ \b{v_0, v^*_d} } { \b{v_0,v_0} } \, \tau^*_i(A^*) v_0. \label{eq:etad-iAvsd2}
\end{align}
\end{lemma}
\begin{proof}
Apply Lemma \ref{lem:trans1b} to $\Phi^*$, and use Lemma \ref{lem:Phis}.
\end{proof}
The bases in Lemma \ref{lem:0D} are related as follows.
\begin{lemma} \label{lem:trans3b} \samepage
\ifDRAFT {\rm lem:trans3b}. \fi
For $0 \leq i \leq d$,
\begin{align}
E^*_i v_d
&= \frac{\phi_1 \cdots \phi_i } {\vphi_1 \cdots \vphi_i } \,
\frac{ \b{v_d,v^*_0} } { \b{v_0,v^*_0} } \, E^*_i v_0, \label{eq:Esivd2}
\\
E^*_{d-i} v_d
&= \frac{\vphi_d \cdots \vphi_{d-i+1} } {\phi_d \cdots \phi_{d-i+1} } \,
\frac{ \b{v_d,v^*_d} } { \b{v_0,v^*_d} } \, E^*_{d-i} v_0, \label{eq:Esd-ivd2}
\\
E_i v^*_d
&= \frac{\phi_d \cdots \phi_{d-i+1} } {\vphi_1 \cdots \vphi_i } \,
\frac{ \b{v_0,v^*_d} } { \b{v_0,v^*_0} } \, E_i v^*_0, \label{eq:Eivsd2}
\\
E_{d-i} v^*_d
&= \frac{\vphi_d \cdots \vphi_{d-i+1} } {\phi_1 \cdots \phi_i } \,
\frac{ \b{v_d,v^*_d} } { \b{v_d,v^*_0} } \, E_{d-i} v^*_0. \label{eq:Ed-ivsd2}
\end{align}
\end{lemma}
\begin{proof}
We first show \eqref{eq:Esivd2}.
In the equation on the right in \eqref{eq:v0}, multiply each side on the left by $E^*_i E_d$.
Simplify the result using the equation on the right in \eqref{eq:vsd}. This gives
\begin{equation}
E^*_i E_d E^*_d v_0 =
\frac{\b{v_0, v^*_d} \b{v_d,v^*_d} } { \b{v_d, v_d} \b{v^*_d, v^*_d} } \, E^*_i v_d. \label{eq:Esivdaux1}
\end{equation}
By \cite[Lemma 7.2]{NT:maps},
\begin{equation}
E_0 E^*_d E_d E^*_i =
\frac{\vphi_1 \cdots \vphi_d } { \tau_d(\th_d) \tau^*_d (\th^*_d) } \,
\frac{\phi_1 \cdots \phi_i } { \vphi_1 \cdots \vphi_i } \, E_0 E^*_i. \label{eq:formula2pre}
\end{equation}
Applying $\dagger$ to \eqref{eq:formula2pre} we obtain
\begin{equation}
E^*_i E_d E^*_d E_0 =
\frac{\vphi_1 \cdots \vphi_d } { \tau_d(\th_d) \tau^*_d (\th^*_d) } \,
\frac{\phi_1 \cdots \phi_i } { \vphi_1 \cdots \vphi_i } \, E^*_i E_0. \label{eq:formula2}
\end{equation}
In this line, apply each side to $v_0$, and use $E_0 v_0 = v_0$.
Comparing the result with \eqref{eq:Esivdaux1} we find that $E^*_i v_d$ is equal to
\begin{equation}
\frac{\vphi_1 \cdots \vphi_d} {\tau_d(\th_d) \tau^*_d(\th^*_d) } \,
\frac{\b{v_d,v_d} \b{v^*_d,v^*_d} } { \b{v_0,v^*_d} \b{v_d,v^*_d} } \label{eq:Esivdaux2}
\end{equation}
times
\[
\frac{\phi_1 \cdots \phi_i} {\vphi_1 \cdots \vphi_i } \, E^*_i v_0.
\]
By \eqref{eq:newrel} and \eqref{eq:dds},
the line \eqref{eq:Esivdaux2} is equal to
\[
\frac{ \b{v_d,v^*_0} } { \b{v_0,v^*_0} }.
\]
By these comments we obtain \eqref{eq:Esivd2}.
To get \eqref{eq:Esd-ivd2},
replace $i$ with $d-i$ in \eqref{eq:Esivd2} and use \eqref{eq:newrel}.
To get \eqref{eq:Eivsd2}, apply \eqref{eq:Esivd2} to $\Phi^*$, and use Lemma \ref{lem:Phis}.
The line \eqref{eq:Ed-ivsd2} is similarly obtained by applying \eqref{eq:Esd-ivd2} to $\Phi^*$.
\end{proof}
\section{The action of $T$ on the 24 bases}
\label{sec:Taction}
Recall the Leonard system $\Phi$ from \eqref{eq:Phi2} and
the element $T$ from Definition \ref{def:T}.
Consider the $24$ bases from \eqref{eq:tauivs0}--\eqref{eq:Esiv0}.
In this section we describe how $T$ acts on these bases,
under the assumption that $\Phi$ is self-dual.
\begin{lemma} \label{lem:actT} \samepage
\ifDRAFT {\rm lem:actT}. \fi
Assume that $\Phi$ is self-dual.
Then
\begin{align*}
T v_0 &= \alpha v^*_0, \qquad\qquad \;
T v_d = \beta v^*_d,
\\
T v^*_0 &= \alpha^* v_0, \qquad\qquad
T v^*_d = \beta^* v_d,
\end{align*}
where
\begin{align*}
\alpha &= \frac{\vphi_1 \cdots \vphi_d}{\tau_d(\th_d)} \,
\frac{ \b{v_0,v^*_0} } { \b{v^*_0,v^*_0} },
&
\beta &= \frac{\vphi_1 \cdots \vphi_d} { \eta_d(\th_0) } \,
\frac{ \b{v_d,v^*_d} } { \b{v^*_d, v^*_d} },
\\
\alpha^* &= \frac{\vphi_1 \cdots \vphi_d} { \tau_d(\th_d)} \,
\frac{ \b{v_0,v^*_0} } { \b{v_0,v_0} },
&
\beta^* &= \frac{\vphi_1 \cdots \vphi_d} {\eta_d(\th_0)} \,
\frac{ \b{v_d, v^*_d} } { \b{v_d,v_d} }.
\end{align*}
\end{lemma}
\begin{proof}
By Theorem \ref{thm:main} we have $T=T^*$.
By this and \eqref{eq:TsE0},
\[
T E_0 = \frac{\vphi_1 \cdots \vphi_d} {\tau_d(\th_d)} \, E^*_0 E_0.
\]
In this line, apply each side to $v_0$.
Simplify the result using $E_0 v_0 = v_0$ to get
\[
T v_0 = \frac{\vphi_1 \cdots \vphi_d} {\tau_d(\th_d)} \, E^*_0 v_0.
\]
In this line, eliminate $E^*_0 v_0$ using the equation on the left in \eqref{eq:v0} to
get $T v_0 = \alpha v^*_0$.
The remaining equations are obtained in a similar way.
\end{proof}
\begin{proposition} \label{prop:TEivs0} \samepage
\ifDRAFT {\rm prop:TEivs0}. \fi
Assume that $\Phi$ is self-dual.
Then for $0 \leq i \leq d$,
\begin{align*}
T E^*_i v_0 &= \alpha E_i v^*_0, &
T \tau^*_i (A^*) v_0 &= \alpha \tau_i (A) v^*_0, &
T \eta^*_i (A^*) v_0 &= \alpha \eta_i (A) v^*_0,
\\
T E^*_i v_d &= \beta E_i v^*_d, &
T \tau^*_i (A^*) v_d &= \beta \tau_i (A) v^*_d, &
T \eta^*_i (A^*) v_d &= \beta \eta_i (A) v^*_d,
\\
T E_i v^*_0 &= \alpha^* E^*_i v_0, &
T \tau_i (A) v^*_0 &= \alpha^* \tau^*_i (A^*) v_0, &
T \eta_i (A) v^*_0 &= \alpha^* \eta^*_i (A^*) v_0,
\\
T E_i v^*_d &= \beta^* E^*_i v_d, &
T \tau_i (A) v^*_d &= \beta^* \tau^*_i (A^*) v_d, &
T \eta_i (A) v^*_d &= \beta^* \eta^*_i (A^*) v_d,
\end{align*}
where $\alpha$, $\beta$, $\alpha^*$, $\beta^*$ are from Lemma \ref{lem:actT}.
\end{proposition}
\begin{proof}
By \eqref{eq:EiT} we have $T E^*_i T^{-1} = E_i$.
By Lemma \ref{lem:actT}, $T v_0 = \alpha v^*_0$.
By these comments
\[
T E^*_i v_0 = T E^*_i T^{-1} T v_0 = \alpha E_i v^*_0.
\]
We have shown that $T E^*_i v_0 = \alpha E_i v^*_0$.
The remaining equations are obtained in a similar way.
\end{proof}
To motivate the next result we make some comments.
Consider the following bases for $V$:
\begin{equation}
\{\eta^*_i (A^*) v_0 \}_{i=0}^d, \qquad \label{eq:4bases}
\{\eta_i(A) v^*_0 \}_{i=0}^d, \qquad
\{\tau^*_i(A^*) v_d \}_{i=0}^d, \qquad
\{\tau_i(A) v^*_d\}_{i=0}^d.
\end{equation}
By \cite[Theorem 11.2]{T:24points},
with respect to these bases the matrices representing $A$ and $A^*$
are as follows.
\[
\begin{array}{c|ccc}
\text{\rm basis} & & \text{\rm matrix representing $A$}
& \text{\rm matrix representing $A^*$}
\\ \hline
\{\eta^*_i(A^*) v_0 \}_{i=0}^d \rule{0mm}{12ex}
& &
\begin{pmatrix}
\th_0 & \phi_d & & & & \text{\bf 0} \\
& \th_1 & \cdot \\
& & \cdot & \cdot \\
& & & \cdot & \phi_2 \\
& & & & \th_{d-1} & \phi_1 \\
\text{\bf 0} & & & & & \th_d
\end{pmatrix}
&
\begin{pmatrix}
\th^*_d & & & & & \text{\bf 0} \\
1 & \th^*_{d-1} \\
& 1 & \cdot \\
& & \cdot & \cdot \\
& & & \cdot & \th^*_1 \\
\text{\bf 0} & & & & 1 & \th^*_0
\end{pmatrix}
\\
\{\eta_i(A) v^*_0\}_{i=0}^d \rule{0mm}{12ex}
& &
\begin{pmatrix}
\th_d & & & & & \text{\bf 0} \\
1 & \th_{d-1} \\
& 1 & \cdot \\
& & \cdot & \cdot \\
& & & \cdot & \th_1 \\
\text{\bf 0} & & & & 1 & \th_0
\end{pmatrix}
&
\begin{pmatrix}
\th^*_0 & \phi_1 & & & & \text{\bf 0} \\
& \th^*_1 & \phi_2 \\
& & \cdot & \cdot \\
& & & \cdot & \cdot \\
& & & & \th^*_{d-1} & \phi_d \\
\text{\bf 0} & & & & & \th^*_d
\end{pmatrix}
\\
\{\tau^*_i(A^*) v_d \}_{i=0}^d \rule{0mm}{12ex}
& &
\begin{pmatrix}
\th_d & \phi_1 & & & & \text{\bf 0} \\
& \th_{d-1} & \phi_2 \\
& & \cdot & \cdot \\
& & & \cdot & \cdot \\
& & & & \th_1 & \phi_d \\
\text{\bf 0} & & & & & \th_0
\end{pmatrix}
&
\begin{pmatrix}
\th^*_0 & & & & & \text{\bf 0} \\
1 & \th^*_1 \\
& 1 & \cdot \\
& & \cdot & \cdot \\
& & & \cdot & \th^*_{d-1} \\
\text{\bf 0} & & & & 1 & \th^*_d
\end{pmatrix}
\\
\{\tau_i(A) v^*_d\}_{i=0}^d \rule{0mm}{12ex}
& &
\begin{pmatrix}
\th_0 & & & & & \text{\bf 0} \\
1 & \th_1 \\
& 1 & \cdot \\
& & \cdot & \cdot \\
& & & \cdot & \th_{d-1} \\
\text{\bf 0} & & & & 1 & \th_d
\end{pmatrix}
&
\begin{pmatrix}
\th^*_d & \phi_d & & & & \text{\bf 0} \\
& \th^*_{d-1} & \cdot \\
& & \cdot & \cdot \\
& & & \cdot & \phi_2 \\
& & & & \th^*_1 & \phi_1 \\
\text{\bf 0} & & & & & \th^*_0
\end{pmatrix}
\end{array}
\]
\begin{theorem} \label{thm:matrixT} \samepage
\ifDRAFT {\rm thm:matrixT}. \fi
Assume that $\Phi$ is self-dual.
Then with respect to each basis \eqref{eq:4bases}
the matrix representing $T$ is
\begin{equation}
\frac{\vphi_1 \cdots \vphi_d} {\tau_d(\th_d) \eta_d(\th_0) } \,
\begin{pmatrix}
\text{\bf 0} & & & & & \phi_1 \cdots \phi_d \\
& & & & \cdot \\
& & & \cdot \\
& & \phi_1 \phi_2 \\
& \phi_1 \\
1 & & & & & \qquad\;\; \text{\bf 0}
\end{pmatrix}. \label{eq:mat}
\end{equation}
\end{theorem}
\begin{proof}
First consider the basis $\{\eta^*_i (A^*) v_0\}_{i=0}^d$.
By Proposition \ref{prop:TEivs0},
\begin{align*}
T \eta^*_i (A^*) v_0 &=
\frac{\vphi_1 \cdots \vphi_d} { \tau_d(\th_d) } \,
\frac{ \b{v_0,v^*_0} } { \b{v^*_0, v^*_0} } \, \eta_i (A) v^*_0 && (0 \leq i \leq d).
\end{align*}
By \eqref{eq:thths} and \eqref{eq:etasd-iAsv02},
\begin{align*}
\eta_i (A) v^*_0 &=
\frac{\phi_1 \cdots \phi_i} { \eta_d(\th_0) } \,
\frac{ \b{v^*_0, v^*_0} } { \b{v_0, v^*_0} } \, \eta^*_{d-i} (A^*) v_0 && (0 \leq i \leq d).
\end{align*}
By these comments,
\begin{align}
T \eta^*_i (A^*) v_0 &=
\frac{\vphi_1 \cdots \vphi_d} { \tau_d(\th_d) \eta_d(\th_0) } \,
\phi_1 \cdots \phi_i \, \eta^*_{d-i} (A^*) v_0 && (0 \leq i \leq d). \label{eq:TetasiAsv0}
\end{align}
Thus the matrix \eqref{eq:mat} represents $T$ with respect to $\{\eta^*_i (A^*) v_0\}_{i=0}^d$.
Next consider the basis $\{\tau^*_i (A^*) v_d\}_{i=0}^d$.
In a similar way as above using \eqref{eq:phi} and \eqref{eq:tausd-iAsvd2},
we obtain
\begin{align}
T \tau^*_i (A^*) v_d &=
\frac{\vphi_1 \cdots \vphi_d} { \tau_d(\th_d) \eta_d(\th_0) } \,
\phi_1 \cdots \phi_i \, \tau^*_{d-i} (A^*) v_d && (0 \leq i \leq d). \label{eq:TtausiAsvd}
\end{align}
Thus the matrix \eqref{eq:mat} represents $T$ with respect to $\{\tau^*_i (A^*) v_d\}_{i=0}^d$.
Next consider the basis $\{\eta_i(A) v^*_0\}_{i=0}^d$.
Apply \eqref{eq:TetasiAsv0} to $\Phi^*$, and use Lemma \ref{lem:Phis}.
This gives
\begin{align*}
T^* \eta_i (A) v^*_0 &=
\frac{\vphi_1 \cdots \vphi_d } { \tau^*_d (\th^*_d) \eta^*_d (\th^*_0) } \,
\phi_d \cdots \phi_{d-i+1} \, \eta_{d-i}(A) v^*_0 && (0 \leq i \leq d).
\end{align*}
We have $T^*=T$ by Theorem \ref{thm:main}.
By this and \eqref{eq:thths}, \eqref{eq:phi},
the above line becomes
\begin{align*}
T \eta_i (A) v^*_0 &=
\frac{\vphi_1 \cdots \vphi_d} { \tau_d(\th_d) \eta_d(\th_0) } \,
\phi_1 \cdots \phi_i \, \eta_{d-i} (A) v^*_0 && (0 \leq i \leq d).
\end{align*}
Thus the matrix \eqref{eq:mat} represents $T$ with respect to $\{\eta_i(A) v^*_0\}_{i=0}^d$.
Next consider the basis $\{\tau_i (A) v^*_d\}_{i=0}^d$.
Apply \eqref{eq:TtausiAsvd} to $\Phi^*$, and use Lemma \ref{lem:Phis}.
Simplify the result in a similar way as above to get
\begin{align*}
T \tau_i (A) v^*_d &=
\frac{\vphi_1 \cdots \vphi_d} { \tau_d(\th_d) \eta_d(\th_0) } \,
\phi_1 \cdots \phi_i \, \tau_{d-i} (A) v^*_d && (0 \leq i \leq d).
\end{align*}
Thus the matrix \eqref{eq:mat} represents $T$ with respect to $\{\tau_i (A) v^*_d\}_{i=0}^d$.
\end{proof}
\section{Introduction}
Given a simple polygon~$P$ on the plane, the \emph{Polygon Placement Problem} consists in finding a function~$\uptau$,
usually
the composition of a rotation and a translation, such that $\uptau(P)$ satisfies some
geometric constraints.~In the literature, $\uptau(P)$ is known as a
\emph{placement} of $P$.~The oldest problem of this family
asked, given two polygons $P$~and~$Q$, to find,
if it exists, a
placement of~$P$ that contains $Q$.~The most recent contribution to
these problems, in 2014,
can be found in \cite{barequet_2014} (see Section~1.4 there for a
summary of previous work).~Among other results, for a point set~$S$
and a simple polygon~$P$, they show how to compute a
placement of~$P$ that contains as many points of~$S$ as possible.~If
$n$~and~$m$ are the sizes of~$S$ and~$P$ respectively, their algorithm
runs in $O(n^3 m^3 \log (nm))$ time and $O(nm)$ space.
Although translation-only problems have also been
considered~\cite{agarwal_2002,barequet_1997}, surprisingly enough there are no previous results with~$\uptau$ being only a
rotation.~It is important to note that existing results with $\uptau$ being a composition of a rotation, a translation, and even a scaling, cannot be adapted to solve the rotation-only problem considered here: All those previous results reduce the search space complexity
by considering only placements where a constant number of points from
$S$ lie on the boundary of $P$ (see for example references
\cite{barequet_2014}~and~\cite{dickerson_1998} for algorithms based
respectively, on two-point and one-point placements).~Rotation-only
adaptations of these results would not allow the rotation center to be
fixed or restricted to lie on a given curve and therefore, cannot be
applied to the problems we deal with in this paper.~This is why the following \emph{Maximum Cover under Rotation (MCR)}
problems are considered in this paper:
\begin{problem}[Fixed MCR]\label{FixedMCR}
\label{pro:intro:fixed_mcr}
Given a point~$r$, a polygon $P$, and a point set~$S$ in the
plane, compute an angle~$\theta\in [0,2\pi)$ such that, after
clockwise rotating $P$ around $r$ by~$\theta$, the number of
points of~$S$ contained in~$P$ is maximized.
\end{problem}
\begin{problem}[Segment-restricted MCR]
\label{pro:intro:segment_mcr}
Given a line segment~$\ell$, a polygon~$P$, and a point set~$S$ in
the plane, find a point~$r$ on~$\ell$ and an
angle~$\theta\in [0,2\pi)$ such that, after clockwise rotating
$P$ around $r$ by~$\theta$, the number of points of~$S$ contained
in~$P$ is maximized.
\end{problem}
In addition, we open a path towards the study of these problems in~3D by presenting a three-dimensional version of Problem~\ref{FixedMCR}:
\begin{problem}[3D Fixed MCR]
\label{pro:intro:3Dfixed_mcr}
Given a point~$r$, a polyhedron~$P$, and a point set~$S$
in~$\mathbb{R}^3$, compute the azimuth and
altitude~$(\theta,\varphi)\in[0,2\pi]\times [-\pi,\pi]$ giving the
direction on the unit sphere such that, after rotating the polyhedron
$P$ by taking the $z$-axis to that direction, the number of points
of~$S$ contained in~$P$ is maximized.
\end{problem}
Applications of polygon placement problems include global localization
of mobile robots, pattern matching, and geometric tolerance; see the
references in~\cite{barequet_2014}.~Rotation-only problems arise, e.g., in
robot localization using a rotating camera~\cite{robots_92}, with
applications to quality control of objects manufactured around an
axis~\cite{tolerance_1997}.
We first show that Problem~\ref{FixedMCR} is 3SUM-hard,
i.e.,
solving it in subquadratic time would imply an affirmative answer to
the open question of whether
a subquadratic time algorithm for 3SUM
exists, which is unlikely~\cite{gajentaan_1995}.~Then, we present two algorithms to solve
Problem~\ref{FixedMCR}: The first one requires $O(nm\log (nm))$ time and $O(nm)$ space,
for $n$~and~$m$ being the sizes of~$S$ and~$P$, respectively.~The
second one takes $O((n+k) \log n + m \log m)$ time and $O(n+m+k)$
space, where $k = O(nm)$ is the total number of in- and out-events
(defined in Section~\ref{sec:fixed_mcr}).
We also describe an algorithm that solves
Problem~\ref{pro:intro:segment_mcr} in $O(n^2 m^2\log(nm))$ time and
$O(n^2 m^2)$ space.~This algorithm can be easily extended to solve
variations of Problem~\ref{pro:intro:segment_mcr} where~$r$ lies on
a line or a polygonal chain.~Furthermore, our techniques for
Problem~\ref{pro:intro:fixed_mcr} can be extended to~3D to solve
Problem~\ref{pro:intro:3Dfixed_mcr} within the same time and space
complexities as Problem~\ref{pro:intro:segment_mcr}.
\section{Fixed MCR (Problem~\ref{pro:intro:fixed_mcr})}
\label{sec:fixed_mcr}
Given a point $r$ on the plane and a point $p \in S$, let $C_p(r)$
be the circle with center~$r$ and radius~$|\overline{r p}|$.~If we
rotate~$S$ in the counterclockwise direction around~$r$, $C_p(r)$
is the curve described by $p$ during a $2\pi$ rotation of $S$
around~$r$.~The endpoints of the circular arcs resulting from
intersecting $P$ and $C_p(r)$ determine the rotation angles where
$p$ enters (\emph{in-event}) and leaves (\emph{out-event})
the polygon~$P$.~In the worst case, the number of such events per element
of~$S$ is $O(m)$, see Figure~\ref{fig:fixed_mcr:comb}.~If we consider all
the points in $S$ we could get $O(nm)$ events.
\begin{figure}[ht]
\centering{}
\includegraphics{fixed_mcr_comb}
\caption{A comb-shaped simple polygon can generate $\Omega(m)$ in-
and out-events per point in~$S$.}
\label{fig:fixed_mcr:comb}
\end{figure}
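The event computation just described can be sketched as follows (a minimal Python sketch, not the authors' implementation; all function names are ours): for a point $p \in S$ we intersect $C_p(r)$ with every edge of $P$, convert each intersection point into a rotation angle relative to $p$, and decide which arcs lie inside $P$ by testing one midpoint per arc.

```python
import math

def circle_segment_angles(r, rad, a, b):
    # Angles (about r) of the intersections of the circle of radius rad
    # centered at r with the segment ab.
    (rx, ry), (ax, ay), (bx, by) = r, a, b
    dx, dy = bx - ax, by - ay
    fx, fy = ax - rx, ay - ry
    A = dx * dx + dy * dy
    B = 2 * (fx * dx + fy * dy)
    C = fx * fx + fy * fy - rad * rad
    disc = B * B - 4 * A * C
    if A == 0 or disc < 0:
        return []
    sq = math.sqrt(disc)
    hits = []
    for t in ((-B - sq) / (2 * A), (-B + sq) / (2 * A)):
        if 0.0 <= t <= 1.0:
            hits.append(math.atan2(ay + t * dy - ry, ax + t * dx - rx))
    return hits

def point_in_polygon(q, poly):
    # Even-odd ray casting.
    x, y = q
    inside = False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def inside_intervals(p, r, poly):
    # Rotation-angle intervals (mod 2*pi) for which p, rotated
    # counterclockwise about r, lies inside poly.
    rad = math.hypot(p[0] - r[0], p[1] - r[1])
    base = math.atan2(p[1] - r[1], p[0] - r[0])
    angles = sorted({
        (phi - base) % (2 * math.pi)
        for i in range(len(poly))
        for phi in circle_segment_angles(r, rad, poly[i], poly[(i + 1) % len(poly)])
    })
    if not angles:  # circle misses the boundary: all or nothing
        return [(0.0, 2 * math.pi)] if point_in_polygon(p, poly) else []
    ivals = []
    for j, th in enumerate(angles):
        span = (angles[(j + 1) % len(angles)] - th) % (2 * math.pi) or 2 * math.pi
        mid = base + th + span / 2  # midpoint of the candidate arc
        q = (r[0] + rad * math.cos(mid), r[1] + rad * math.sin(mid))
        if point_in_polygon(q, poly):
            ivals.append((th, th + span))
    return ivals
```

On a comb as in Figure~\ref{fig:fixed_mcr:comb}, this produces $\Theta(m)$ intervals for a single point, matching the bound above.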
\subsection{A 3SUM-hard reduction}
\label{sec:3sum-hard}
We show next that Problem~\ref{pro:intro:fixed_mcr} is 3SUM-hard, by a
reduction from the \emph{Segments Containing Points Problem} that was
proved to be 3SUM-hard in~\cite{barequet_2001}:~Given a set $A$ of $n$
real numbers and a set $B$ of $m=O(n)$ pairwise-disjoint intervals on
the real line, is there a real number $u$ such that
$A + u \subseteq B$?
\begin{theorem}\label{thm:3sum-hard}
The Fixed MCR problem is 3SUM-hard.
\end{theorem}
\begin{proof}
Let $I$ be an interval of the real line that contains the set $A$ of
points, and the set $B$ of intervals of an instance of the Segments
Containing Points Problem.~Wrap $I$ on a circle $C$ whose perimeter
has length at least twice the length of $I$.~This effectively maps
the points in~$A$ and the intervals in~$B$ into a set~$A'$ of points
and a set~$B'$ of intervals on $C$.
Clearly, finding a translation (if it exists) of the elements of $A$
such that $A + u \subseteq B$, is equivalent to finding a rotation
of the set of points $A'$ around the center of $C$ such that all of
the elements of $A'$ are mapped to points contained in the intervals
of $B'$.
\begin{figure}[ht]
\centering
\subcaptionbox{\label{fig:3sum-hard:mapping:1}}
{\includegraphics{fixed_mcr_real_line}}
\\[1.5em]
\subcaptionbox{\label{fig:3sum-hard:mapping:2}}
{\includegraphics{fixed_mcr_unit_circle}}
\caption{Wrapping $I$ from \subref{fig:3sum-hard:mapping:1} the
real line to \subref{fig:3sum-hard:mapping:2} a circle
$C$.~Intervals forming $B$ and $B'$ are highlighted with
blue.~Elements of $A$ and $A'$ are represented by white
points.~Additional vertices forming the polygon are the
intersection points between the tangents to $C$ at the endpoints
of each interval in $B^{\prime}$.}
\label{fig:3sum-hard:mapping}
\end{figure}
To finish our reduction, construct a polygon as shown in
Figure~\ref{fig:3sum-hard:mapping}.
\end{proof}
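The wrapping argument can be illustrated numerically. The instance below is hypothetical, and \texttt{wrap} is simply the linear map sending a real $x \in I$ to an angle on a circle of perimeter $L \geq 2|I|$, so that a translation by $u$ on the line becomes a rotation by $2\pi u / L$ on the circle:

```python
import math

def wrap(x, x0, L):
    # Linear map from the interval I (left endpoint x0) onto a circle of
    # perimeter L >= 2*|I|; translation by u on the line corresponds to
    # rotation by 2*pi*u/L on the circle.
    return 2 * math.pi * (x - x0) / L

# Hypothetical instance of Segments Containing Points, solvable with u = 2.
A = [0.0, 1.0, 3.5]
B = [(1.8, 2.3), (2.9, 3.2), (5.4, 5.6)]   # pairwise-disjoint intervals
x0, L = 0.0, 12.0            # I = [0, 6] covers A and B; perimeter 2*|I|
rot = 2 * math.pi * 2.0 / L  # rotation corresponding to the shift u = 2
# Check that rotating the wrapped points by rot lands each inside some
# wrapped interval, i.e. A + u is contained in B.
ok = all(
    any(wrap(s, x0, L) <= wrap(a, x0, L) + rot <= wrap(e, x0, L) for s, e in B)
    for a in A
)
```

Since the map is linear, a certificate $u$ for the line instance and a certificate $\theta = 2\pi u/L$ for the rotation instance exist or fail together.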
\subsection{An $O(nm\log(nm))$ algorithm}\label{sec:nmlognm}
Here we present an $O(nm\log(nm))$
algorithm for
Problem~\ref{pro:intro:fixed_mcr} (note that, by Theorem~\ref{thm:3sum-hard}, this complexity is close to optimal):
\begin{enumerate}
\item \label{enum:nmlognm:1} \textbf{Intersect rotation
circles.}~Given a fixed point $r$, compute the intersection
points of $C_{p_j}(r)$ and $P$, for all $p_j \in S$.~Each of these
points determines an angle of rotation of $p_j$ around $r$ when
$p_j$ enters or leaves~$P$, see
Figure~\ref{fig:fixed_pc:in-out_events}.~These angles, in turn,
determine a set of intervals
$\mathcal I_j =\{I_{j,1}, \ldots , I_{j, m_j}\}$ whose endpoints
correspond to the rotation angles in which $p_j$ enters or
leaves~$P$ and, hence, specify the rotation angles on the unit
circle for which $p_j$ belongs to~$P$, see again
Figure~\ref{fig:fixed_pc:in-out_events}.~Let
$\mathcal I = \mathcal I_1 \cup \cdots \cup \mathcal I_n$.~The set
of endpoints of the intervals in $\mathcal I$ can be sorted in
$O(mn \log (mn))$ time.
\begin{figure}[ht]
\centering
\includegraphics{fixed_mcr_events}
\caption{An in-event at $x$ (left turn), and an out-event at $y$
(right turn).}
\label{fig:fixed_pc:in-out_events}
\end{figure}
\item \label{enum:nmlognm:4} \textbf{Compute the angle of maximum
coverage.} Using standard techniques, we can now perform a sweep
on the set $\mathcal I = \mathcal I_1 \cup \cdots \cup \mathcal I_n$
as depicted in Figure~\ref{fig:nmlognm:table}.
\begin{figure}[ht]
\centering
\includegraphics{fixed_mcr_table}
\caption{The events sequence and the sweeping line at angle
$\theta$.~Highlighted with a red circle, the intersection of
line $\ell$ with an interval corresponding to $p_1$ (where $p_1$ is
inside $P$).~Highlighted with a blue circle, the intersection of
line $\ell$ with one of the endpoints of an interval
corresponding to $p_n$ (an in-event).}
\label{fig:nmlognm:table}
\end{figure}
During the sweeping process, we maintain the number of points of~$S$
lying in~$P$.~If an in-event or an out-event occurs, that number is
increased or decreased by one, respectively.~At the end of the
sweeping process, we report the angular interval(s) where the number
is maximized.
\end{enumerate}
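Step~\ref{enum:nmlognm:4} is a standard sweep over the sorted event sequence. A minimal sketch (hypothetical names; it assumes the intervals from Step~\ref{enum:nmlognm:1} have been split so that none wraps past $2\pi$):

```python
def max_coverage_angle(intervals):
    # intervals: (start, end) angle intervals with start <= end;
    # wrapping intervals are assumed split at 2*pi beforehand.
    events = []
    for s, e in intervals:
        events.append((s, 1))    # in-event
        events.append((e, -1))   # out-event
    # Break ties so in-events precede out-events at equal angles,
    # counting closed endpoints.
    events.sort(key=lambda ev: (ev[0], -ev[1]))
    best = cur = 0
    best_angle = 0.0
    for angle, delta in events:
        cur += delta
        if cur > best:
            best, best_angle = cur, angle
    return best_angle, best
```

The sweep itself is linear in the number of events once they are sorted, so the $O(nm\log(nm))$ sort of Step~\ref{enum:nmlognm:1} dominates.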
Since the complexity of our algorithm is dominated by
Step~\ref{enum:nmlognm:1}, which takes $O(nm\log (nm))$ time, we
conclude the following result.
\begin{theorem}\label{thm:nmlognm}
The Fixed MCR problem can be solved in $O(nm\log(nm))$ time and
$O(nm)$ space.
\end{theorem}
\subsection{An output-sensitive algorithm}
\label{sec:sensitive}
We now show that, by performing a plane sweep using a \emph{sweeping
circle} centered at~$r$ whose diameter increases continuously, it
is possible to intersect $P$ and the set of rotation circles in a more
efficient way.~The idea is to maintain a list of the edges
intersecting the \emph{sweeping-circle}, ordered by appearance along
the sweeping-circle.~Using the same technique shown in
Figure~\ref{fig:fixed_pc:in-out_events}, the edges are labeled as
defining in- or out-events.~The algorithm is outlined next.
\begin{enumerate}
\item \label{enum:sensitive:nm:0} \textbf{Normalize $P$}.~In the
following steps, we consider~$P$ to have no edges intersecting any
circle centered at $r$ more than once.~This can be guaranteed by
performing a preprocessing step on $P$: For every edge~$e = uv$
of~$P$, let $p_e$ be the intersection point between the line $\ell$
containing $e$ and the line perpendicular to $\ell$ passing through
$r$.~If $p_e$ belongs to the relative interior of~$e$, subdivide
this edge into the edges $u p_e$~and~$p_e v$.~In the worst case,
each edge of~$P$ gets subdivided into two parts. See
Figure~\ref{fig:sensitive:mn:split_edge}.
\begin{figure}[ht]
\centering
\includegraphics{fixed_mcr_split_edge}
\caption{Splitting an edge of $P$.}
\label{fig:sensitive:mn:split_edge}
\end{figure}
\item \label{enum:sensitive:nm:1} \textbf{Process a vertex of $P$.}
Sort first the vertices of $P$ and $S$ according to their distance
from $r$. This is the order in which an expanding sweeping circle
centered at $r$ will reach them.
\vspace{0.5em}
As the sweeping-circle increases in size, we stop at each vertex
$p_j$ of~$P$.~Each time this happens, the number of intersections of
$C_{p_j}(r)$ with the boundary of~$P$ will increase or decrease by
two.~We can maintain and update the ordered list of edges
intersected by $C_{p_j}(r)$, using a red-black tree, in
logarithmic time.~This enables us to calculate the intersections of
$C_{p_j}(r)$ in time proportional to their number.~It suffices to
walk along the ordered list of edges intersected by the
sweeping-circle.~Each time the sweeping circle reaches an element of
$S$, the number and order of intersections of the sweeping circle
with the edges of $P$ remains unchanged.~However, since the points
of intersection change, we need to recalculate them each time we
reach a vertex of $P$ or a point of $S$.
\item \label{enum:sensitive:nm:2} \textbf{Compute the intervals
sequence for each element of $S$.} We can now compute, within the
same time complexity, the intervals in which $C_{p_j}(r)$
intersects the interior of $P$. Note that these intervals are not
yet the elements of $\mathcal I_j$; they still have to be rotated according to
the position of $p_j$ with respect to $r$.
\item \textbf{Construct the events
sequence.} \label{enum:sensitive:nm:3} Since for each point $p_j$
in $S$ we have computed the corresponding sequence of sorted
intervals $\mathcal I_j$, all we need to do is to merge these (at
most $n$) sequences into a complete sequence of events.
\end{enumerate}
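The normalization of Step~\ref{enum:sensitive:nm:0} only needs the foot of the perpendicular from $r$ onto each edge's supporting line; a minimal sketch (hypothetical names):

```python
def normalize_edges(poly, r):
    # Subdivide every edge uv of poly at the foot of the perpendicular
    # from r, whenever that foot lies in the relative interior of uv.
    # Afterwards no edge meets a circle centered at r more than once.
    out = []
    n = len(poly)
    for i in range(n):
        (ux, uy), (vx, vy) = poly[i], poly[(i + 1) % n]
        out.append((ux, uy))
        dx, dy = vx - ux, vy - uy
        L2 = dx * dx + dy * dy
        if L2 > 0:
            # Projection parameter of r onto the line through u and v.
            t = ((r[0] - ux) * dx + (r[1] - uy) * dy) / L2
            if 0 < t < 1:
                out.append((ux + t * dx, uy + t * dy))
    return out
```

Each edge is split at most once, so the normalized polygon has at most $2m$ edges, which does not affect the asymptotic bounds.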
The normalization process takes $O(m)$ time.~Sorting the points in $S$
and the vertices of $P$ by distance from $r$ takes $O(n \log n)$ and
$O(m \log m)$ time, respectively. The ordered list of edges
intersecting the sweeping circle is maintained in an $O(m)$-size red-black
tree, so we can process all the vertices of~$P$ in $O(m \log m)$
time.~On the other hand, processing all the points in $S$ takes $O(k)$
time, where~$k$ denotes the total number of in- and out-events in a
Fixed MCR problem.~Finally, merging the~$O(n)$ sequences of sorted
intervals takes $O(k \log n)$ time. We then sweep the merged list of
$\mathcal I_1 \cup \cdots \cup \mathcal I_n$ in $O(k)$ time to obtain
a solution to our problem.~The total time complexity
is~$O(n \log n + m \log m + k \log n)$.~The space complexity is
$O(n + m + k)$.~We have thus proved:
\begin{theorem}\label{lem:sensitive:nm}
The Fixed MCR problem can be solved in $O((n+k) \log n + m \log m)$
time and $O(n + m + k)$ space.
\end{theorem}
\section{Segment-restricted MCR (Problem~\ref{pro:intro:segment_mcr})}
\label{sec:segment_restricted_mcr}
Our approach to solve Problem~\ref{pro:intro:segment_mcr} is to
characterize, for each $p$ in $S$, the intersection between the
polygon~$P$ and the rotation circle~$C_p(r)$ while the center $r$
of $C_p(r)$ moves along a line segment $\ell = \overline{ab}$
from~$a$ to~$b$. For simplicity, we assume that $a$ lies on the
origin~$(0,0)$ and $b$ on the positive $x$-axis. For each edge
$e = \overline{uv}$ of $P$, we parameterize the intersection
between~$C_p(r)$ and~$e$ using a function~$\omega = f(x)$, where $x$
is the $x$-coordinate of $r$ (ranging from $0$ to the
$x$-coordinate~$b.x$ of $b$) and $\omega$ is the counterclockwise
angle swept by the ray~$\overrightarrow{r p}$ until it coincides
with the ray emanating from $r$ and passing through the current
point of intersection~$q$ of $C_p(r)$ and~$e$ (assume for the moment
that there exists exactly one such point of intersection). See
Figure~\ref{fig:restr_mcr:fig9}.
Leaving the details for
Section~\ref{appendix}, we obtain the following expression of~$\omega$
as a function of~$x$:
\begin{equation}\label{eq:final_eq_omega}
\omega \ = \ \arccos \left(
\frac{\gamma(x) \pm \sqrt{\delta(x)}}{\epsilon(x)}
\right),
\end{equation}
where $\gamma(x)$, $\delta(x)$, and $\epsilon(x)$ are polynomials of
degrees $2$, $4$, and $2$, respectively.~The motion of~$r$
along~$\ell$ thus corresponds to a set of points~$(x,\omega)$ for
which~$p$ hits the boundary of $P$.~For each point~$p \in S$, these
points form $O(m)$ curves bounding a collection of simple regions in
the $x$-$\omega$ plane; each point~$(x,\omega)$ of any such region
corresponds to a rotation of $p$, by a counterclockwise angle of
size~$\omega$ with respect to a rotation center at $(x,0)$, for which
$p$ belongs to $P$.~Note that any two such regions have disjoint
interiors, whereas their boundaries may intersect at most at a common
vertex due to the simplicity of $P$.
\subsection{Subdividing the Edges of the Polygon}
We mentioned earlier that, for convenience, we subdivide the edges of
the polygon~$P$ about their points of intersection (if any) with the
$x$-axis; so, in the following, we assume that no edge has points
strictly on both sides of the $x$-axis.~We further subdivide the edges in
order to simplify the computation of the angle~$\omega$ in terms of
the $x$-coordinate of the rotation center~$r$ as it moves along the
segment~$\overline{a b}$.
\bigskip\noindent
\textbf{Theoretical Framework.} \quad Let us consider that we process
the point~$p \in S$, and
denote by $D_p(r)$ the closed disk bounded by $C_p(r)$, where
$r$ is a point in $\overline{a b}$.~In Figure~\ref{fig:fig_br1}, $p$
is taken to lie above the $x$-axis where either $a.x \le p.x \le b.x$
(top figure) or $b.x < p.x$ (bottom figure).~The cases where
$p.x < a.x$ or where $p$ lies below the $x$-axis are symmetric,
whereas the case where $p$ lies on the $x$-axis is similar (see
figures~\ref{fig:fig_unique_angle1}~and~\ref{fig:fig_unique_angle2}).~Moreover,
let $p'$ be the mirror image of $p$ with respect to the $x$-axis;
clearly, $p'$ coincides with $p$ if $p$ lies on the $x$-axis.
Finally, let $H_p^L$ ($H_p^R$, resp.) be the open halfplane to the
left (right, resp.) of the line perpendicular to the $x$-axis that
passes through~$p$.
Then, it is useful to observe the following properties.
\begin{lemma} \label{lemma:circle_union} Let $p$ be a point, and let
$H_p^L$, $H_p^R$, $C_p(r)$, and $D_p(r)$, for
$r \in \overline{a b}$, be as defined above.
\begin{itemize}
\item[(i)] Consider any two points $r, r' \in \overline{a b}$ with
$r \ne r'$. If the point~$p$ lies on the $x$-axis, then the
circles $C_p(r), C_p(r')$ intersect only at $p$. If the point~$p$
does not lie on the $x$-axis, the circles $C_p(r), C_p(r')$
intersect at $p$ and at $p$'s mirror image~$p'$ about the
$x$-axis, and the line segment~$\overline{p p'}$ belongs to both
$D_p(r), D_p(r')$.
\item[(ii)]
\begin{itemize}
\item[$\triangleright$\,] For every point~$s$ in the interior of
$H_p^L \cap D_p(r)$, there exists a unique circle centered on
the $x$-axis that passes through $p$ and $s$ and whose center lies to
the right of $r$;
\item[$\triangleright$\,] for every point~$t$ in $H_p^L - D_p(r)$,
there exists a unique circle centered on the $x$-axis that
passes through $p$ and $t$ and whose center lies to the left of $r$.
\end{itemize}
Symmetrically,
\begin{itemize}
\item[$\triangleright$\,] for every point~$s'$ in the interior of
$H_p^R \cap D_p(r)$, there exists a unique circle centered on
the $x$-axis that passes through $p$ and $s'$ and whose center lies
to the left of $r$;
\item[$\triangleright$\,] for every point~$t'$ in
$H_p^R - D_p(r)$, there exists a unique circle centered on the
$x$-axis that passes through $p$ and $t'$ and whose center lies to
the right of $r$.
\end{itemize}
\end{itemize}
\end{lemma}
\begin{proof}~
\paragraph{(i)} From the definition of the circles $C_p(r)$ for all
$r \in \overline{a b}$, $p$ belongs to each such circle.
Next, assume that $p$ lies on the $x$-axis and suppose for
contradiction that two circles $C_p(r), C_p(r')$ with $r \ne r'$
intersect at a point~$p' \ne p$ as well. Then, both $r, r'$ would
belong to the perpendicular bisector of the line
segment~$\overline{p p'}$; thus, the perpendicular bisector should
coincide with the $x$-axis. Then, since $p$ lies on the $x$-axis,
$p'$ would coincide with $p$, in contradiction to the assumption
that $p' \ne p$. Therefore, if $p$ lies on the $x$-axis, any two
circles $C_p(r), C_p(r')$ with $r \ne r'$ intersect only at $p$.
Now, assume that $p$ does not lie on the $x$-axis. Then, since $p'$
is the mirror image of $p$ with respect to the $x$-axis, the
$x$-axis is the perpendicular bisector of the
segment~$\overline{p p'}$. Thus, $p'$ belongs to all the circles
centered on the $x$-axis that pass through $p$. The fact that
$\overline{p p'}$ belongs to each of the disks~$D_p(r)$, for all
$r \in \overline{a b}$, follows from the fact that each
disk~$D_p(r)$ is a convex set containing $p$ and $p'$.
\begin{figure}[t]
\begin{center}
\includegraphics[height=4.8cm]{figs_pg1_cropped}
\caption{For the proof of Lemma~\ref{lemma:circle_union}. \
(left)~The perpendicular bisectors $B_{sp}$, $B_{qp}$,
$B_{tp}$ intersect the $x$-axis at points $r', r, r''$,
respectively. \ (right)~The lines through $p$ that are
perpendicular to the tangent at $p$ and to $\overline{t p}$
intersect the $x$-axis at points $r, r''$, respectively.}
\label{fig:fig_lemma1}
\end{center}
\end{figure}
\paragraph{(ii)} Let $q$ be the point of intersection, other
than~$p$, of $C_p(r)$ with the line~$L$ through $p$ and~$s$; see
Figure~\ref{fig:fig_lemma1}(left).~The line~$L$ is well defined
since $s \ne p$. In fact, $s.x < p.x$ (because $s$ belongs to
$H_p^L$), and thus $L$ is not perpendicular to the $x$-axis, which
implies that the perpendicular bisector~$B_{qp}$ of the line
segment~$\overline{q p}$ intersects the $x$-axis at a single point;
this point of intersection is precisely the center~$r$ of $C_p(r)$.
Since the perpendicular bisector of the line
segment~$\overline{s p}$ is parallel to $B_{qp}$ and lies to the
right of $B_{qp}$ (because $s$ is an interior point of
$\overline{q p}$), it intersects the $x$-axis at a single point~$r'$
to the right of $r$; $r'$ is the center of the circle centered on
the $x$-axis that passes through $p$ and $s$.
Now, consider $t \in H_p^L - D_p(r)$, and let $T_p(r)$ be the open
halfplane that is tangent to the circle~$C_p(r)$ at $p$ and contains
$r$. If $t \in T_p(r)$, then the line~$L$ through $p$ and $t$
intersects $C_p(r)$ at $p$ and at another point~$q$, and
$q \in \overline{t p}$. Then, as above, the perpendicular
bisector~$B_{q p}$ of $\overline{q p}$ intersects the $x$-axis at
$r$, whereas the perpendicular bisector of $\overline{t p}$ is
parallel and to the left of $B_{q p}$ (since $q$ is an interior
point of $\overline{t p}$), and thus intersects the $x$-axis at a
point~$r''$ to the left of~$r$; see
Figure~\ref{fig:fig_lemma1}(left). It is important to observe that
the proof so far applies no matter whether $p$ lies on the $x$-axis
or not.
Next, let us consider the case in which $t \not\in T_p(r)$; this
case is not possible if $p$ lies on the $x$-axis since then
$T_p(r) = H_p^L$. Then, the line through $p$ perpendicular to the
tangent to the circle~$C_p(r)$ at $p$ intersects the $x$-axis at
$r$. Since $t \not\in T_p(r)$, the line through $p$ perpendicular
to $\overline{t p}$ is not parallel to the $x$-axis and thus
intersects the $x$-axis at a single point~$r''$. In fact, since the
angle $\widehat{t p r}$ of the triangle with $t, p, r$ as vertices
is larger than $\pi/2$, $r''$ is to the left of $r$; see
Figure~\ref{fig:fig_lemma1}(right).
The results for points $s'$ in the interior of $H_p^R \cap D_p(r)$
and $t' \in H_p^R - D_p(r)$ are obtained in a left-to-right
symmetric fashion to the results for the points $s$ in the interior
of $H_p^L \cap D_p(r)$ and $t \in H_p^L - D_p(r)$, respectively.
\end{proof}
Statement~(ii) of Lemma~\ref{lemma:circle_union} directly implies that
the union of all the circles~$C_p(r)$ forms precisely the closure of
the symmetric difference $D_p(a) \oplus D_p(b)$ of the disks $D_p(a)$
and $D_p(b)$ centered at $a$ and $b$, respectively (see
Figure~\ref{fig:fig_br1}); note that any point in the interior of
\[\Bigl( \bigl( D_p(a) - D_p(b) \bigr) \cap H_p^L \Bigr)
\cup \Bigl( \bigl(D_p(b) - D_p(a) \bigr) \cap H_p^R \Bigr)\] lies on
a circle~$C_p(r)$ with $r$ in the interior of $\overline{a b}$,
whereas no other point does so.~Lemma~\ref{lemma:circle_union}(ii)
also implies the following corollary.
\begin{corollary} \label{corol:circle_union}~
\begin{itemize}
\item[(i)] For any $r, r' \in \overline{a b}$ with $r$ to the left
of $r'$:
\begin{itemize}
\item[$\triangleright$\,]
$\bigl( C_p(r) \cap D_p(r') \bigr) \cap H_p^L \ =\ \emptyset$ \
\ and \ \ $D_p(r') \cap H_p^L \ \subset\ D_p(r) \cap H_p^L$;
\item[$\triangleright$\,]
$\bigl( C_p(r') \cap D_p(r) \bigr) \cap H_p^R \ =\ \emptyset$ \
\ and \ \ $D_p(r) \cap H_p^R \ \subset\ D_p(r') \cap H_p^R$.
\end{itemize}
\item[(ii)] Suppose that a line segment~$I$ intersects a
circle~$C_p(r)$, where $r \in \overline{a b}$, at points
$w_1, w_2$ such that the line segment~$\overline{w_1 w_2}$ lies
entirely in the closure of $\bigl( D_p(a) - D_p(b) \bigr)$. Then,
the segment~$I$ is tangent to a circle~$C_p(r')$ for some
$r' \in \overline{a b}$ and the point of tangency belongs to
$\overline{w_1 w_2}$. Symmetrically, the same result holds if the
segment~$\overline{w_1 w_2}$ lies entirely in the closure of
$\bigl( D_p(b) - D_p(a) \bigr)$.
\end{itemize}
\end{corollary}
\begin{proof}~
\paragraph{(i)} We prove the propositions for the halfplane~$H_p^L$;
the proofs for $H_p^R$ are left-to-right symmetric.
Since $r$ is to the left of $r'$, Lemma~\ref{lemma:circle_union}(ii)
implies that $C_p(r') \cap H_p^L$ lies in the interior of
$D_p(r) \cap H_p^L$.~This in turn implies that
(i)~$\bigl( C_p(r) \cap H_p^L \bigr) \cap \bigl( D_p(r') \cap H_p^L
\bigr) = \emptyset$, i.e.,
$\bigl( C_p(r) \cap D_p(r') \bigr) \cap H_p^L = \emptyset$, and
(ii)~$\bigl( D_p(r') \cap H_p^L \bigr) \subset \bigl( D_p(r) \cap
H_p^L \bigr)$ since the disk~$D_p(r')$ is bounded by $C_p(r')$ and
since each such disk is a convex set; we have a proper subset
relation because the points in $C_p(r) \cap H_p^L$ do not belong to
$D_p(r') \cap H_p^L$.
\paragraph{(ii)} Below, we prove the statement for the case that
$\overline{w_1 w_2}$ lies entirely in the closure of
$\bigl( D_p(a) - D_p(b) \bigr)$; the proof for the case that
$\overline{w_1 w_2} \subseteq \hbox{closure} \bigl( D_p(b) - D_p(a)
\bigr)$ is left-to-right symmetric.
Since $w_1 \ne w_2$ and
$\overline{w_1 w_2} \subseteq \hbox{closure} \bigl( D_p(a) - D_p(b)
\bigr)$, it follows that $r \ne b$; let $t \in \overline{a b}$ be a point
infinitesimally to the right of $r$. Then, according to
statement~(i),
$\bigl( C_p(r) \cap D_p(t) \bigr) \cap H_p^L = \emptyset$ and
$\bigl( D_p(t) \cap H_p^L \bigr) \subset \bigl( D_p(r) \cap H_p^L
\bigr)$, which together imply that
$\bigl( D_p(t) \cap I \bigr) \subset \overline{w_1 w_2}$; note that
at least one of $w_1, w_2$ (which belong to $C_p(r)$) belongs to
$H_p^L$, for otherwise, either $\overline{w_1 w_2}$ degenerates to a
single point, in contradiction to the fact that $w_1 \ne w_2$, or
$\overline{w_1 w_2} = \overline{p p'}$ with $p \ne p'$, in
contradiction to the fact that $\overline{w_1 w_2}$ lies entirely in
the closure of $\bigl( D_p(a) - D_p(b) \bigr)$. Since the rotation
center moves continuously along $\overline{a b}$, there exists a
point~$r' \in \overline{r b}$ such that $D_p(r') \cap I$ is a single
point, i.e., the line segment~$I$ is tangent to the
circle~$C_p(r')$; moreover, since
$D_p(r') \cap I \subset \overline{w_1 w_2}$, the point of tangency
belongs to the line segment~$\overline{w_1 w_2}$.
\end{proof}
\begin{figure}[t]
\begin{center}
\includegraphics[height=7.5cm]{figs_pg2_cropped}
\caption{Subdividing the polygon edges so that each sub-edge is
intersected at most once by each of the circles~$C_p(r)$ (white
disks denote points of edge subdivision).}
\label{fig:fig_br1}
\end{center}
\end{figure}
\bigskip\noindent
\textbf{The Subdivision Procedure.} \quad Our subdivision procedure
for the polygon edges while processing point $p \in S$ works in two
phases: in Phase~1, we ensure that each circle~$C_p(r)$ intersects
each resulting sub-edge in at most one point; in Phase~2, we ensure
that for each sub-edge either $0 \le \omega \le \pi$ or
$\pi \le \omega \le 2 \pi$, implying that the value of $\omega$ is
uniquely determined by the value of its cosine.
\medskip\noindent
\textit{Phase~1:}
\ If an edge~$\overline{u v}$ of the polygon~$P$ does not intersect
$D_p(a) \cup D_p(b)$ or if at least one of its endpoints belongs to
$D_p(a) \cap D_p(b)$, then we need not do anything, otherwise:
\begin{itemize}
\item If $\overline{u v}$ does not intersect the interior of
$D_p(a) \cap D_p(b)$, then $\overline{u v}$ is tangent to at most
two of the circles $C_p(r)$ and we subdivide it at these points of
tangency; see edges $\overline{u_1 v_1}$ and $\overline{u_2 v_2}$ in
Figure~\ref{fig:fig_br1}.
\item If $\overline{u v}$ intersects the interior of
$D_p(a) \cap D_p(b)$, then it crosses $D_p(a) \cap D_p(b)$. If
$\overline{u v}$ intersects the segment~$\overline{p p'}$, then we
subdivide $\overline{u v}$ at its point of intersection with
$\overline{p p'}$ (see edge~$\overline{u_3 v_3}$ in
Figure~\ref{fig:fig_br1}); if not, then the points of intersection
of $\overline{u v}$ with the boundary of $D_p(a) \cap D_p(b)$ both
belong to either $C_p(a)$ or $C_p(b)$ (see edge~$\overline{u_4 v_4}$
in Figure~\ref{fig:fig_br1}), in which case we subdivide
$\overline{u v}$ at its closest point to $a$ or $b$, respectively.
\end{itemize}
It is not difficult to see that if the edge~$\overline{u v}$ has two
points of intersection with a circle~$C_p(r)$, these two points of
intersection end up belonging to different parts of the subdivided
edge.
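The tangency computation in Phase~1 reduces to a quadratic condition: a circle centered at $(t,0)$ and passing through $p$ is tangent to the line supporting $\overline{u v}$ exactly when the distance from $(t,0)$ to that line equals the distance from $(t,0)$ to $p$; this is the parabola-based computation referred to in the complexity analysis below. A minimal Python sketch, assuming a normalized line representation $a x + b y + c = 0$ with $a^2 + b^2 = 1$ (the function name is ours):

```python
import math

def tangent_centers(p, a_line, b_line, c_line):
    """Centers (t, 0) on the x-axis of circles through p = (px, py)
    that are tangent to the line a*x + b*y + c = 0 (with a^2 + b^2 = 1).
    Tangency condition: (a*t + c)^2 = (t - px)^2 + py^2, quadratic in t."""
    px, py = p
    A = a_line ** 2 - 1.0                 # equals -b_line^2
    B = 2.0 * a_line * c_line + 2.0 * px
    C = c_line ** 2 - px ** 2 - py ** 2
    if abs(A) < 1e-12:                    # line perpendicular to the x-axis
        return [] if abs(B) < 1e-12 else [-C / B]
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return []
    s = math.sqrt(disc)
    return sorted([(-B + s) / (2.0 * A), (-B - s) / (2.0 * A)])

# Example: line y = 2 (a=0, b=1, c=-2) and p = (0, 1):
# tangency requires 4 = t^2 + 1, i.e. t = +/- sqrt(3).
centers = tangent_centers((0.0, 1.0), 0.0, 1.0, -2.0)
```

An edge is then subdivided at the corresponding points of tangency for those centers~$t$ that fall within the segment~$\overline{a b}$, yielding the at most two Phase~1 subdivision points.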
After Phase~1 has been completed, we apply Phase~2 to the resulting
sub-edges. Let $a'$ and $b'$ be points such that $a$ and $b$ are the
midpoints of segments $\overline{p a'}$ and $\overline{p b'}$,
respectively; see Figure~\ref{fig:fig_br2}.~Then, Phase~2 involves the
following subdivision steps.
\medskip\noindent
\textit{Phase~2:}
\begin{itemize}
\item If a sub-edge intersects $\overline{a' b'}$, we subdivide it at
this point of intersection (see sub-edges $\overline{u_1 v_1}$ and
$\overline{u_2 v_2}$ in the top part of
Figure~\ref{fig:fig_br2}).
\item Additionally, if the sub-edge is tangent to two circles, we
subdivide it at its point of intersection with the line through $p$
perpendicular to the $x$-axis (see sub-edges $\overline{u_2 v_2}$ in
Figure~\ref{fig:fig_br2}).
\end{itemize}
\begin{figure}[t]
\begin{center}
\includegraphics[height=7.5cm]{figs_pg3_cropped}
\caption{Further subdividing the polygon edges so that the
angle~$\omega$ belongs either to $[0, \pi]$ or to $[\pi, 2 \pi]$
(white disks denote points of edge subdivision).}
\label{fig:fig_br2}
\end{center}
\end{figure}
\bigskip
By taking into account that each of Phase~1 and Phase~2 may introduce
at most two subdivision points on a polygon edge, we conclude that
each edge ends up subdivided into at most $5$ sub-edges.
Finally, it is important to note that the above-described edge
subdivision is introduced only for the processing of the current
point~$p \in S$; that is, for the next element of $S$, we discard the
subdivision points just introduced and start working again with the
edges of the polygon~$P$ (subdivided only about the $x$-axis).
\bigskip\noindent
\textbf{Correctness.}
\quad Before proving Theorem~\ref{thm:edge_subdiv} which establishes
the correctness of the subdivision procedure, we show the following
useful lemma.
\begin{lemma} \label{lemma:phase2} Let $p$ be an element of the point
set~$S$ and $p'$ be the mirror image of $p$ with respect to the
$x$-axis.
\begin{itemize}
\item[(i)] If the point~$p$ is such that $0 = a.x \le p.x \le b.x$,
then $p'$ belongs to the line segment~$\overline{a' b'}$.
\item[(ii)] For any point~$q \in \overline{a' b'}$ such that
$q \ne p'$, there is a point $r \in \overline{a b}$ for which
$C_p(r)$ has the segment $\overline{q p}$ as its diameter.
\end{itemize}
\end{lemma}
\begin{proof}
~\paragraph{(i)} First, assume that $p$ lies on the $x$-axis.~Then,
$p' = p$.~The assumption $a.x \le p.x \le b.x$ implies that
$p \in \overline{a b}$, which in turn implies that
$\overline{a b} \subset \overline{a' b'}$; see
Figure~\ref{fig:fig_unique_angle2}.~Thus, $p \in \overline{a' b'}$,
i.e., $p' = p \in \overline{a' b'}$.~Now, consider the case that $p$
does not lie on the $x$-axis.~Let $c$ be the (vertical) projection
of $p$ onto the $x$-axis.~Since $a.x \le p.x \le b.x$,
$c \in \overline{a b}$.~The line defined by $p, c$ (note that
$p \ne c$) is perpendicular to the $x$-axis and let $d$ be its point
of intersection with the line supporting $\overline{a' b'}$. Since
$c \in \overline{a b}$, we conclude that
$d \in \overline{a' b'}$.~Moreover, by its construction, the line
segment~$\overline{a' b'}$ is parallel to the $x$-axis, and since
$|\overline{p a}| = |\overline{a a'}|$, the similarity of the
triangles with vertices $p, a, c$ and $p, a', d$ implies that
$|\overline{p c}| = |\overline{c d}|$.~Thus, $p' = d$ and hence
$p' \in \overline{a' b'}$.
\paragraph{(ii)} Assume that $p$ lies on the $x$-axis.~Let
$q \in \overline{a' b'}$ with $q \ne p$, and suppose without loss of
generality that $q$ is to the left of $p$ (the case where $q$ is to
the right of $p$ is symmetric).~Then, the midpoint of
$\overline{q p}$ lies in $\overline{a p}$ and it is the center of
the unique circle~$C_p(r)$ passing through $q$.~Therefore, $C_p(r)$
has $\overline{q p}$ as its diameter.
Now assume that $p$ does not lie on the $x$-axis.~Consider any
point~$q \in \overline{a' b'}$ with $q \ne p'$.~Let $z$ be the point
of intersection of the line segment~$\overline{p q}$ with the
$x$-axis ($z$ exists because $p$ and $\overline{a' b'}$, and hence
$p$ and $q$, lie on opposite sides of the $x$-axis).~Note that
$z \in \overline{a b}$ since $q \in \overline{a' b'}$.~Then, by the
similarity of the triangles $\triangle p a z$ and $\triangle p a' q$
we have that $|\overline{p z}| = |\overline{z q}|$; i.e., the
point~$z$ is the midpoint of $\overline{p q}$.~Therefore, $z$
belongs to the perpendicular bisector of $\overline{p q}$ and in
fact, it is the only point of intersection of such bisector and the
$x$-axis.~Note that, since $q \ne p'$, the line passing through $p$
and $q$ (remember that $p \ne q$) is not perpendicular to the
$x$-axis.~This implies that the center~$r$ of any circle~$C_p(r)$
passing through $q$ coincides with $z$, that is, $\overline{q p}$ is
a diameter of $C_p(r)$.
\end{proof}
\noindent
Lemma~\ref{lemma:phase2}(ii) implies that for any point~$q \ne p'$
belonging to $\overline{a' b'}$, the corresponding
angle~$\omega = \widehat{p r q}$ is equal to $\pi$, where
$r \in \overline{a b}$ is the center of the circle~$C_p(r)$
passing through $q$.
Now we are ready to prove Theorem~\ref{thm:edge_subdiv} which
establishes that the subdivision steps of Phases 1 and 2 achieve the
set goals.
\begin{theorem}\label{thm:edge_subdiv}~
\begin{itemize}
\item[(i)] After the completion of Phase~1, no resulting sub-edge
intersects any circle~$C_p(r)$ with $r \in \overline{a b}$ in
more than one point.
\item[(ii)] After the completion of Phase~2, for any two points
$q, q'$ (lying on circles $C_p(r)$ and $C_p(r')$, respectively) of
each resulting sub-edge, the counterclockwise angles
$\widehat{p r q}$ and $\widehat{p r' q'}$ either both belong to
$[0, \pi]$ or both belong to $[\pi, 2 \pi]$.
\end{itemize}
\end{theorem}
\begin{proof}~
\paragraph{(i)} Suppose for contradiction that there exists a
sub-edge~$\overline{c d}$ and a circle~$C_p(r)$ with
$r \in \overline{a b}$ that intersect in two points $w_1$ and $w_2$.
The point~$p$ and its mirror image~$p'$ subdivide the
circle~$C_p(r)$ into two arcs, $A_p^L$ and $A_p^R$, the former to
the left of the line through $p$ perpendicular to the $x$-axis and
the latter to the right (note that if $p$ lies on the $x$-axis, one
of these arcs degenerates into a single point). Then, $w_1, w_2$
should belong to the same arc; otherwise, $p$ would not lie on the
$x$-axis and the line segment~$\overline{w_1 w_2}$ would intersect
the line segment~$\overline{p p'}$, and thus the sub-edge~$\overline{c d}$
would have been subdivided in Phase~1 about its point of
intersection with $\overline{p p'}$. Suppose without loss of
generality that $w_1, w_2$ belong to the arc~$A_p^L$. But then, no
matter whether the segment~$\overline{w_1 w_2}$ intersects the
interior of $D_p(a) \cap D_p(b)$ or not, we have a contradiction.
In the former case, the sub-edge~$\overline{c d}$ would have been subdivided in
Phase~1 about the perpendicular projection of $b$ onto $\overline{c d}$; $b$'s
projection onto $\overline{c d}$ belongs to $D_p(a) \cap D_p(b)$ and thus is an
interior point of $\overline{w_1 w_2}$. In the latter case, the
sub-edge~$\overline{c d}$ would have been subdivided in Phase~1 about its point
of tangency with a circle~$C_p(t)$ with $t \in \overline{a b}$; this
point of tangency belongs to $\overline{w_1 w_2}$ as shown in
Corollary~\ref{corol:circle_union}(ii).~Therefore, after Phase~1, no
resulting sub-edge intersects any circle~$C_p(r)$ with
$r \in \overline{a b}$ in more than one point.
\paragraph{(ii)} Suppose without loss of generality that the
point~$p$ lies above or on the $x$-axis and it holds that
$p.x \ge a.x$; the case where it holds that $p.x < a.x$ is
left-to-right symmetric (the corresponding angles are equal to
$2 \pi$ minus the corresponding angles when $p.x > b.x$), whereas
the case where $p$ lies below the $x$-axis is top-down symmetric (in
this case too, the corresponding angles are equal to $2 \pi$ minus
the corresponding angles when $p$ lies above the $x$-axis).
Let $R_1$ ($R_3$, respectively) be the subsets of points in the
closure of the symmetric difference~$D_p(a) \oplus D_p(b)$ that are
on or to the left of the line through $p$ that is perpendicular to
the $x$-axis and are on or above (on or below, respectively)
$\overline{a' b'}$; symmetrically, let $R_2$ ($R_4$, respectively)
be the subsets of points in the closure of the symmetric
difference~$D_p(a) \oplus D_p(b)$ that are on or to the right of the
line through $p$ that is perpendicular to the $x$-axis and are on or
above (on or below, respectively) $\overline{a' b'}$; see
Figure~\ref{fig:fig_unique_angle1} and
Figure~\ref{fig:fig_unique_angle2}. Consider a point~$w$ lying on a
circle~$C_p(t)$ with $t \in \overline{a b}$. Since according to
Lemma~\ref{lemma:phase2}(ii), for any
point~$q \in \overline{a' b'}$, the segment~$\overline{q p}$ is a
diameter of the circle centered on the $x$-axis and passing through
$p$ and $q$, if $w \in R_1$, the counterclockwise angle~$\widehat{p t w}$
belongs to $[0, \pi]$. Similarly, if $w \in R_2$ then
$\widehat{p t w} \in [\pi, 2 \pi]$, if $w \in R_3$ then
$\widehat{p t w} \in [\pi, 2 \pi]$, and if $w \in R_4$ then
$\widehat{p t w} \in [0, \pi]$.~Since no sub-edge resulting after
Phase~2 contains points in more than one of the regions
$R_1, R_2, R_3, R_4$, the statement of the theorem follows.
\end{proof}
\begin{figure}[t]
\begin{center}
\includegraphics[height=7.0cm]{figs_pg4_cropped}
\caption{The partition of the closure of the symmetric difference
$D_p(a) \oplus D_p(b)$ about the line segment~$\overline{a' b'}$
and the line defined by $p, p'$ into regions
$R_1, R_2, R_3, R_4$ when the point~$p$ does not lie on the
$x$-axis. Note that the line segments $p s, p s', p t, p t'$
are diameters.}
\label{fig:fig_unique_angle1}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[height=5.5cm]{figs_pg5_cropped}
\caption{The partition of the closure of the symmetric difference
$D_p(a) \oplus D_p(b)$ about the line segment~$\overline{a' b'}$
and the line that is perpendicular to the $x$-axis at $p$ into
regions $R_1, R_2, R_3, R_4$ when the point~$p$ lies on the
$x$-axis. Note that the line segments $p s, p s', p t, p t'$
are diameters.}
\label{fig:fig_unique_angle2}
\end{center}
\end{figure}
\subsection{The Algorithm}
We are now ready to outline our algorithm for
Problem~\ref{pro:intro:segment_mcr}:
\begin{enumerate}
\item \label{enum:srMCR:nm:0} \textbf{Subdivide the edges of
polygon~$P$ about the $x$-axis.}
\item \label{enum:srMCR:nm:1} \textbf{Process each point~$p \in S$.}
For each point~$p$, we subdivide each edge of polygon~$P$ (resulting
from the previous step) into sub-edges (see the edge subdivision
process described earlier). Next, for each sub-edge, we compute the
curve of the angle~$\omega$ with respect to the $x$-coordinate~$x$
of the rotation center as it moves along $\overline{a b}$ (see
Equation~\ref{eq:final_eq_omega}), and finally we form the regions
bounded by these curves.
\item \label{enum:srMCR:nm:2} \textbf{Construct and traverse the
arrangement of all the regions.} Using standard techniques, we
construct the arrangement of all the regions of all the elements of
$S$.~Next, we traverse the dual graph of the resulting arrangement
looking for a sub-region of maximum depth; any point in this
sub-region determines a position~$(x,0)$ of $r$ and a rotation
angle~$\omega$ that constitute a solution to the problem.
\end{enumerate}
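To illustrate Step~\ref{enum:srMCR:nm:2}, the maximum-depth search can be sketched on a toy dual graph: the unbounded face has depth~$0$, and crossing a region boundary changes the depth by $\pm 1$. The encoding below (face names and the two-region example are our own simplification, not the paper's data structure) labels every face by breadth-first traversal and reports the deepest one:

```python
from collections import deque

def face_depths(dual, start):
    """Depths of arrangement faces via BFS on the dual graph.
    dual: face -> list of (neighbor_face, delta), where delta is +1
    when crossing into a region and -1 when crossing out of one."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v, delta in dual[u]:
            if v not in depth:
                depth[v] = depth[u] + delta
                queue.append(v)
    return depth

# Toy arrangement of two overlapping regions A and B:
# faces 'out' (depth 0), 'A' (1), 'B' (1), 'AB' (2).
dual = {
    'out': [('A', +1), ('B', +1)],
    'A':   [('out', -1), ('AB', +1)],
    'B':   [('out', -1), ('AB', +1)],
    'AB':  [('A', -1), ('B', -1)],
}
depths = face_depths(dual, 'out')
deepest = max(depths, key=depths.get)
```

Any point inside the deepest face then yields a position~$(x,0)$ of $r$ and an angle~$\omega$ solving the problem.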
\subsection{Time and Space Complexity}
Step~\ref{enum:srMCR:nm:0} clearly takes $O(m)$ time and space,
resulting in at most $2 m$ sub-edges.~The edge subdivision while
processing a point~$p \in S$ in Step~\ref{enum:srMCR:nm:1} takes
$O(m)$ time and space, producing $O(m)$ sub-edges: For each
sub-edge~$\overline{u v}$, $O(1)$ time suffices to determine whether
its endpoints belong to the disks $D_p(a)$ and $D_p(b)$, and whether
$\overline{u v}$ intersects the circles $C_p(a), C_p(b)$, the
segment~$\overline{p p'}$, or the line supporting $\overline{p p'}$,
as well as to compute any points of intersection. Moreover, the
centers of the circles $C_p(r)$, for $r \in \overline{a b}$, to which
$\overline{u v}$ is tangent are precisely the points of intersection
of the segment~$\overline{a b}$ with the parabola that is equidistant
from point~$p$ and the line supporting $\overline{u v}$.~Then,
processing $p$ yields $O(m)$ curves bounding $O(m)$ regions.~Thus,
processing all the points in $S$ in Step~\ref{enum:srMCR:nm:1} takes a
total of $O(n m)$ time and produces a set of $O(n m)$ regions bounded
by $O(n m)$ curves in the $x$-$\omega$ plane.~From
Equation~\ref{eq:final_eq_omega}, we can show the following lemma:
\begin{lemma}
Any two ($\omega$-$x$)-curves as in Equation~\ref{eq:final_eq_omega}
have at most $32$ points of intersection.
\end{lemma}
\begin{proof}
The idea is based on the fact that a polynomial of constant degree
has a constant number of roots. In our case, the square roots first
need to be eliminated by squaring. Let us consider
the two ($\omega$-$x$)-curves
\[
\omega = \arccos \left( \frac{\gamma_1(x) \pm
\sqrt{\delta_1(x)}}{\epsilon_1(x)} \right)
\qquad \hbox{and} \qquad
\omega = \arccos \left( \frac{\gamma_2(x) \pm
\sqrt{\delta_2(x)}}{\epsilon_2(x)} \right).
\]
Since a point of intersection of these curves belongs to both of
them, we have:
\[
\omega \ = \ \arccos \left( \frac{\gamma_1(x) \pm
\sqrt{\delta_1(x)}}{\epsilon_1(x)} \right)
\ = \ \arccos \left( \frac{\gamma_2(x) \pm
\sqrt{\delta_2(x)}}{\epsilon_2(x)} \right)
\]
\begin{equation}
\Longrightarrow
\ \gamma_1(x) \, \epsilon_2(x) - \gamma_2(x) \, \epsilon_1(x)
\ = \ \pm \Bigl( \epsilon_2(x) \, \sqrt{\delta_1(x)} -
\epsilon_1(x) \, \sqrt{\delta_2(x)} \Bigr)
\label{eq:eq1}
\end{equation}
from which, by squaring twice to get rid of the square
roots, we get
\begin{align}\label{eq:poly}
& \Bigl( \gamma_1(x) \, \epsilon_2(x) - \gamma_2(x) \, \epsilon_1(x)
\Bigr)^2
\ = \ \Bigl( \epsilon_2(x) \, \sqrt{\delta_1(x)} -
\epsilon_1(x) \, \sqrt{\delta_2(x)} \Bigr)^2 \nonumber \\
& \Longrightarrow
\ \Bigl(
\gamma_1(x) \, \epsilon_2(x) - \gamma_2(x) \, \epsilon_1(x)
\Bigr)^2
- {\epsilon^2_2(x)} \, {\delta_1(x)}
- {\epsilon^2_1(x)} \, {\delta_2(x)}
\nonumber \\
& \phantom{xxxxxxxxxxxxxxxxxxxxxxxxxxxxx}
\ =
\ -2 \, {\epsilon_1(x) \, \epsilon_2(x)}
\, {\sqrt{\delta_1(x) \, \delta_2(x)}}
\nonumber \\
& \Longrightarrow
\ \left( \Bigl( \gamma_1(x) \, \epsilon_2(x)
- \gamma_2(x) \, \epsilon_1(x) \Bigr)^2
- {\epsilon^2_2(x)} \, {\delta_1(x)}
- {\epsilon^2_1(x)} \, {\delta_2(x)} \right)^2
\nonumber \\
& \phantom{xxxxxxxxxxxxxxxxxxxxxxxxxxxxx}
\ =
\ 4 \, {\epsilon^2_1(x) \, \epsilon^2_2(x)}
\, \delta_1(x) \, \delta_2(x).
\end{align}
The last equality is a polynomial equation of degree at most $16$ in
$x$ and, thus, it has at most~$16$ real roots (it is important to
note that the value of $x$ in any pair $(\omega,x)$ satisfying
Equation~\ref{eq:eq1} is a root of the polynomial in
Equation~\ref{eq:poly}, although the converse does not necessarily
hold, i.e., not every root of the polynomial satisfies
Equation~\ref{eq:eq1}).~Thus, if we substitute the real roots of the
polynomial in Equation~\ref{eq:poly} into
Equation~\ref{eq:final_eq_omega}, we get at most $32$ possible
points of intersection, due to the $\pm$ sign.
\end{proof}
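The squaring argument can be checked numerically: any $x$ satisfying Equation~\ref{eq:eq1} must be a root of the polynomial in Equation~\ref{eq:poly}. The Python sketch below uses arbitrarily chosen toy instances of $\gamma_i, \delta_i, \epsilon_i$ (ours, not derived from the geometry), locates a root of Equation~\ref{eq:eq1} by bisection, and evaluates the residual of Equation~\ref{eq:poly} there:

```python
import math

# Toy instances: gamma1 = x, eps1 = 1, delta1 = x + 5;
#                gamma2 = 1, eps2 = 2, delta2 = 2x + 3 (all >= 0 on [0, 10]).
g1 = lambda x: x
d1 = lambda x: x + 5.0
e1 = lambda x: 1.0
g2 = lambda x: 1.0
d2 = lambda x: 2.0 * x + 3.0
e2 = lambda x: 2.0

def eq1(x):
    """gamma1*eps2 - gamma2*eps1 - (eps2*sqrt(delta1) - eps1*sqrt(delta2))."""
    return (g1(x) * e2(x) - g2(x) * e1(x)
            - (e2(x) * math.sqrt(d1(x)) - e1(x) * math.sqrt(d2(x))))

def poly(x):
    """Residual of the twice-squared polynomial equation."""
    A = (g1(x) * e2(x) - g2(x) * e1(x)) ** 2 \
        - e2(x) ** 2 * d1(x) - e1(x) ** 2 * d2(x)
    return A ** 2 - 4.0 * e1(x) ** 2 * e2(x) ** 2 * d1(x) * d2(x)

lo, hi = 0.0, 10.0          # eq1 changes sign on this interval
for _ in range(200):        # bisection for a root of eq1
    mid = 0.5 * (lo + hi)
    if eq1(lo) * eq1(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)
```

At `root`, the residual of the squared polynomial vanishes (up to floating-point error), as the derivation predicts; the converse, of course, need not hold.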
Hence, the total number of intersection points of all the curves is
$O(n^2 m^2)$.~Using standard techniques, in $O(n^2 m^2 \log (nm))$
time the arrangement of all these regions can be computed, and the
dual graph of the resulting arrangement can be traversed looking for a
sub-region of maximum depth.~Any point in this sub-region determines a
position of the rotation center~$r$ and a rotation angle~$\omega$
that constitute a solution to the problem.~The space complexity is
$O(n^2 m^2)$. Then:
\begin{theorem}\label{lem:restr:nm}
The Segment-restricted MCR problem
can be solved in\\
$O(n^2 m^2\log (n m))$ time and $O(n^2 m^2)$ space.
\end{theorem}
Note that Problem~\ref{pro:intro:segment_mcr} can also be solved in
$O(n^2 m^2\log (n m))$ time even when the rotation center is
restricted to lie on a line~$L$: Compute the Voronoi diagram
of~$P \cup S$, and apply the algorithm we just described to a segment
of~$L$ containing all the intersection points of~$L$ and the Voronoi
edges.~Moreover, if we restrict the rotation center to lie on a
polygonal chain with $s$ line segments, we can trivially obtain the
optimal placement of~$P$ in $O(s n^2 m^2\log (n m))$ time.~In both
cases, the space complexity is $O(n^2 m^2)$.
\subsection{Equation~\ref{eq:final_eq_omega}: expressing~$\omega$ as a function of~$x$}
\label{appendix}
In order to simplify the exposition leading to
Equation~\ref{eq:final_eq_omega}, for each point~$s$ in the plane
other than the current rotation center~$r$, we define a
corresponding angle~$\vartheta_s$ with respect to $r$.~In
particular, let $H_{\reflectbox{\rotatebox[origin=c]{270}{$\Lsh$}}}$ be the set of points above the $x$-axis or on
the $x$-axis and to the right of $r$ and let $H_{\reflectbox{\rotatebox[origin=c]{90}{$\Lsh$}}}$ be the set of
points below the $x$-axis or on the $x$-axis and to the left of $r$
(clearly, the sets $H_{\reflectbox{\rotatebox[origin=c]{270}{$\Lsh$}}}$ and $H_{\reflectbox{\rotatebox[origin=c]{90}{$\Lsh$}}}$ partition
$\mathbb{R}^2 - \{r\}$). Then,
\begin{itemize}
\item if $s \in H_{\reflectbox{\rotatebox[origin=c]{270}{$\Lsh$}}}$, $\vartheta_s$ is the angle swept by the
rightward horizontal ray emanating from $r$ as it moves in
counterclockwise direction around $r$ until it coincides with the
ray~$\overrightarrow{r s}$ (see Figure~\ref{fig:fig3.1}, left);
\item if $s \in H_{\reflectbox{\rotatebox[origin=c]{90}{$\Lsh$}}}$, $\vartheta_s$ is the angle swept by the
leftward horizontal ray emanating from $r$ as it moves in
counterclockwise direction around $r$ until it coincides with the
ray~$\overrightarrow{r s}$ (see Figure~\ref{fig:fig3.1}, right).
\end{itemize}
(Note that for all points~$s$ on the $x$-axis, $\vartheta_s = 0$.)
From the definition of $\vartheta_s$, it follows that in all cases
\begin{equation}
\label{eq:theta}
0 \le \vartheta_s < \pi
\end{equation}
(we take counterclockwise and clockwise angles to be positive and
negative, respectively) and
\begin{equation}
\label{eq:cos_sin}
\cos \vartheta_s = \frac{s.x - r.x}{d(s,r)} \, sgn(s.y)
\qquad
\sin \vartheta_s = \frac{|s.y|}{d(s,r)} =
\frac{s.y}{d(s,r)} \, sgn(s.y)
\end{equation}
where $d(s,r)$ denotes the distance of point~$s$ from the rotation
center~$r$, $p.x$ and $p.y$ are respectively the
$x$- and $y$-coordinates of a point~$p$, and $sgn(s.y)$ is the sign of $s.y$
(for a point~$s$ on the $x$-axis, we take $sgn(s.y) = +1$ if
$s \in H_{\reflectbox{\rotatebox[origin=c]{270}{$\Lsh$}}}$ and $sgn(s.y) = -1$ if
$s \in H_{\reflectbox{\rotatebox[origin=c]{90}{$\Lsh$}}}$, so that
$\cos \vartheta_s = 1$ and $\sin \vartheta_s = 0$, consistently with
$\vartheta_s = 0$).
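For points~$s$ off the $x$-axis, the definition of $\vartheta_s$ and Equation~\ref{eq:cos_sin} can be cross-checked computationally: above the axis, $\vartheta_s$ is the standard polar angle of $\overrightarrow{r s}$, and below the axis it is that angle measured from the leftward ray. A small Python sketch (helper names and sample points are ours):

```python
import math

def theta(s, r):
    """vartheta_s for a point s = (sx, sy) off the x-axis, r = (rx, 0)."""
    phi = math.atan2(s[1], s[0] - r[0])       # polar angle of ray r -> s
    return phi if s[1] > 0 else phi + math.pi  # below: from the leftward ray

def cos_sin_formula(s, r):
    """Right-hand sides of Equation (cos_sin), for s off the x-axis."""
    sx, sy = s
    d = math.hypot(sx - r[0], sy)
    sgn = 1.0 if sy > 0 else -1.0
    return (sx - r[0]) / d * sgn, abs(sy) / d

r = (1.0, 0.0)
samples = [(3.0, 2.0), (-2.0, 0.5), (0.0, -1.5), (4.0, -0.3)]
angles = [theta(s, r) for s in samples]
```

For every sample, $0 \le \vartheta_s < \pi$ holds and $\cos \vartheta_s$, $\sin \vartheta_s$ agree with the closed forms.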
\begin{figure}[t]
\centering
\psfrag{s}[][]{$s$}
\psfrag{r}[][]{$r$}
\psfrag{x}[][]{$x$}
\psfrag{d.}[][]{$\vartheta_s$}
\includegraphics[height=2.5cm]{figure_3_1_cropped}
\caption{The definition of the angle~$\vartheta_s$ for any
point~$s \neq r$.}
\label{fig:fig3.1}
\end{figure}
Now, we distinguish two main cases:
\begin{itemize}
\item \emph{Point~$p$ and the intersection point~$q$ of the
circle~$C_p(r)$ and the edge~$e = \overline{uv}$ of $P$ both
belong to either $H_{\reflectbox{\rotatebox[origin=c]{270}{$\Lsh$}}}$ or $H_{\reflectbox{\rotatebox[origin=c]{90}{$\Lsh$}}}$ (see
Figure~\ref{fig:restr_mcr:fig9}(a))}: if
$\vartheta_p \le \vartheta_q$ then
\begin{equation}
\label{eq:same2}
\omega \ = \ \vartheta_q - \vartheta_p
\end{equation}
otherwise
\begin{equation}
\label{eq:same1}
\omega \ = \ (\pi - \vartheta_p) + \pi + \vartheta_q
\ = \ 2 \pi + \vartheta_q - \vartheta_p.
\end{equation}
\item \emph{Point~$p$ and the intersection point~$q$ of the
circle~$C_p(r)$ and the edge~$e = \overline{uv}$ of $P$ do not
both belong to either $H_{\reflectbox{\rotatebox[origin=c]{270}{$\Lsh$}}}$ or $H_{\reflectbox{\rotatebox[origin=c]{90}{$\Lsh$}}}$ (see
Figure~\ref{fig:restr_mcr:fig9}(b))}: in this case,
\begin{equation}
\label{eq:opposite}
\omega \ = \ (\pi - \vartheta_p) + \vartheta_q
\ = \ \pi + \vartheta_q - \vartheta_p.
\end{equation}
\end{itemize}
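The case analysis above can be validated by observing that rotating $p$ counterclockwise about $r$ by the resulting $\omega$ must map $p$ onto $q$ (both lie on the circle~$C_p(r)$). A self-contained Python sketch, restricted to points off the $x$-axis so that halfplane membership is determined by the sign of the $y$-coordinate (helper names and sample points are ours):

```python
import math

def theta(s, r):
    """vartheta_s for s off the x-axis (standard polar angle above the
    axis, the same angle measured from the leftward ray below it)."""
    phi = math.atan2(s[1], s[0] - r[0])
    return phi if s[1] > 0 else phi + math.pi

def omega(p, q, r):
    """Counterclockwise angle from ray r->p to ray r->q per the cases."""
    tp, tq = theta(p, r), theta(q, r)
    if (p[1] > 0) == (q[1] > 0):                  # same halfplane
        return tq - tp if tp <= tq else 2.0 * math.pi + tq - tp
    return math.pi + tq - tp                      # opposite halfplanes

def rotate(p, r, w):
    """Rotate p counterclockwise by w about r."""
    dx, dy = p[0] - r[0], p[1] - r[1]
    return (r[0] + dx * math.cos(w) - dy * math.sin(w),
            r[1] + dx * math.sin(w) + dy * math.cos(w))

r = (2.0, 0.0)
rho = 3.0
def on_circle(alpha):          # point at standard angle alpha about r
    return (r[0] + rho * math.cos(alpha), r[1] + rho * math.sin(alpha))

cases = [(on_circle(0.4), on_circle(2.1)),    # Eq. (same2)
         (on_circle(2.1), on_circle(0.4)),    # Eq. (same1)
         (on_circle(0.7), on_circle(-1.2))]   # Eq. (opposite)
```

In every case, the computed $\omega$ lies in $[0, 2\pi)$ and carries $p$ to $q$ under a counterclockwise rotation about $r$.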
It is important to observe that the definition of $H_{\reflectbox{\rotatebox[origin=c]{270}{$\Lsh$}}}$ and $H_{\reflectbox{\rotatebox[origin=c]{90}{$\Lsh$}}}$
ensures that the above expressions for $\omega$ hold for all special
cases in which at least one of $p, q$ lies on the $x$-axis, as
summarized in the following table.
\smallskip
{
\footnotesize
\begin{center}
\begin{tabular}{cc|c|c|c|c|}
\cline{3-6}
& & \multicolumn{2}{c|}{$p \in H_{\reflectbox{\rotatebox[origin=c]{270}{$\Lsh$}}}$} & \multicolumn{2}{c|}{$p \in H_{\reflectbox{\rotatebox[origin=c]{90}{$\Lsh$}}}$}
\\
& & {\scriptsize $p$ on $x$-axis} & {\scriptsize $p$ above $x$-axis}
& {\scriptsize $p$ on $x$-axis} & {\scriptsize $p$ below $x$-axis}
\\
& & {\small $\vartheta_p = 0$} & {\small $0 < \vartheta_p < \pi$}
& {\small $\vartheta_p = 0$} & {\small $0 < \vartheta_p < \pi$}
\\
\cline{1-6}
\multicolumn{1}{ |c }{\multirow{4}{*}{$q \in H_{\reflectbox{\rotatebox[origin=c]{270}{$\Lsh$}}}$}}
& \multicolumn{1}{ c| }{{\scriptsize $q$ on $x$-axis}}
& {\multirow{2}{*}{ $\omega = 0$ }} & {\multirow{2}{*}{ $\omega = 2 \pi - \vartheta_p$ }}
& {\multirow{2}{*}{ $\omega = \pi$ }} & {\multirow{2}{*}{ $\omega = \pi - \vartheta_p$ }}
\\
\multicolumn{1}{ |c }{} & \multicolumn{1}{ c| }{{\small $\vartheta_q = 0$}} & {} & {} & {} & {}
\\ \cline{2-6}
\multicolumn{1}{ |c }{} & \multicolumn{1}{ c| }{{\scriptsize $q$ above $x$-axis}}
& {\multirow{2}{*}{ $\omega = \vartheta_q$ }} & {\multirow{2}{*}{ {\small Eq.~(\ref{eq:same2}), (\ref{eq:same1})} }}
& {\multirow{2}{*}{ $\omega = \pi + \vartheta_q$ }}
& {\multirow{2}{*}{ {\small Eq.~(\ref{eq:opposite})} }}
\\
\multicolumn{1}{ |c }{} & \multicolumn{1}{ c| }{{\small $0 < \vartheta_q < \pi$}} & {} & {} & {} & {}
\\ \cline{1-6}
\multicolumn{1}{ |c }{\multirow{4}{*}{$q \in H_{\reflectbox{\rotatebox[origin=c]{90}{$\Lsh$}}}$} } & \multicolumn{1}{ c| }{{\scriptsize $q$ on $x$-axis}}
& {\multirow{2}{*}{ $\omega = \pi$ }} & {\multirow{2}{*}{ $\omega = \pi - \vartheta_p$ }}
& {\multirow{2}{*}{ $\omega = 0$ }} & {\multirow{2}{*}{ $\omega = 2 \pi - \vartheta_p$ }}
\\
\multicolumn{1}{ |c }{} & \multicolumn{1}{ c| }{{\small $\vartheta_q = 0$}} & {} & {} & {} & {}
\\ \cline{2-6}
\multicolumn{1}{ |c }{} & \multicolumn{1}{ c| }{{\scriptsize $q$ below $x$-axis}}
& {\multirow{2}{*}{ $\omega = \pi + \vartheta_q$ }} & {\multirow{2}{*}{ {\small Eq.~(\ref{eq:opposite})} }}
& {\multirow{2}{*}{ $\omega = \vartheta_q$ }}
& {\multirow{2}{*}{ {\small Eq.~(\ref{eq:same2}), (\ref{eq:same1})} }}
\\
\multicolumn{1}{ |c }{} & \multicolumn{1}{ c| }{{\small $0 < \vartheta_q < \pi$}} & {} & {} & {} & {}
\\ \cline{1-6}
\end{tabular}
\end{center}
}
\smallskip
\begin{figure}[ht]
\centering
\psfrag{a}[][]{$a$}
\psfrag{b}[][]{$b$}
\psfrag{p}[][]{$p$}
\psfrag{q}[][]{$q$}
\psfrag{r}[][]{$r$}
\psfrag{u}[][]{$u$}
\psfrag{v}[][]{$v$}
\psfrag{x}[][]{$x$}
\psfrag{y}[][]{$y$}
\psfrag{w}[][]{$\omega$}
\psfrag{d.}[][]{$\vartheta_p$}
\psfrag{d,}[][]{$\vartheta_q$}
\subcaptionbox{\label{fig:restr_mcr:fig9:1}}
{\includegraphics[height=4.6cm]{figure_3_B2_cropped} \hspace{3em}
\includegraphics[height=4.6cm]{figure_3_B1_cropped}}
\\[2em]
\subcaptionbox{\label{fig:restr_mcr:fig9:2}}
{\includegraphics[height=4.6cm]{figure_3_A_cropped}}
\caption{
Parameterizing the intersection between the circle~$C_p(r)$ and
the edge~$\overline{uv}$ while $r$ moves along
segment~$\overline{ab}$ when point~$p$ and the intersection~$q$ of
$C_p(r)$ and $\overline{uv}$ are
\subref{fig:restr_mcr:fig9:1}~in the same halfplane and
\subref{fig:restr_mcr:fig9:2}~in opposite halfplanes (with respect
to the $x$-axis).}
\label{fig:restr_mcr:fig9}
\end{figure}
In all cases,
$\cos(\omega) \ = \ \cos(\vartheta_q - \vartheta_p) \ = \
\cos(\vartheta_q) \, \cos(\vartheta_p) + \sin(\vartheta_q) \,
\sin(\vartheta_p)$ which, due to Equation~\ref{eq:cos_sin} and to the
fact that $d(q,r) = d(p,r)$, implies that
\begin{align}\label{eq:eq_omega}
\cos(\omega)
& \ = \ \frac{(q.x - x) \, (p.x - x) + q.y \, p.y}
{d^2(p,r)} \, \mathrm{sgn}(q.y) \, \mathrm{sgn}(p.y) \nonumber \\
& \ = \ \frac{(q.x - x) \, (p.x - x) + q.y \, p.y}
{(p.x - x)^2 + (p.y)^2} \, \mathrm{sgn}(q.y) \, \mathrm{sgn}(p.y) \nonumber \\
& \ = \ \frac{x^2 - (q.x + p.x) \, x
+ q.x \, p.x + q.y \, p.y}
{x^2 - 2 \, p.x \, x + (p.x)^2 + (p.y)^2}
\, \mathrm{sgn}(q.y) \, \mathrm{sgn}(p.y).
\end{align}
For convenience, we subdivide each edge that intersects the $x$-axis
at its point of intersection, so that the value of $\mathrm{sgn}(q.y)$ is
fixed on each sub-edge no matter where $q$ is.
The coordinates $q.x, q.y$ of intersection point~$q$ can be expressed
in terms of~$x$ by taking into account that $q$ belongs to the line
supporting the edge~$\overline{u v}$ and that $r$ is equidistant
from $q$ and $p$.~The former implies that there exists a real
number~$\lambda$ with $0 \le \lambda \le 1$ such that the
vector~$\overrightarrow{u q}$ is $\lambda$ times the
vector~$\overrightarrow{u v}$, which yields
\begin{equation}\label{eq:q_in_uv_x}
(q.x - u.x) \ = \ \lambda \,
(v.x - u.x) \quad \Longleftrightarrow \quad q.x \ = \ \lambda \,
(v.x - u.x) + u.x
\end{equation}
and
\begin{equation}\label{eq:q_in_uv_y}
(q.y - u.y) \ = \ \lambda \, (v.y - u.y)
\quad \Longleftrightarrow \quad
q.y \ = \ \lambda \, (v.y - u.y) + u.y,
\end{equation}
whereas the latter implies
\begin{align}\label{eq:equidistant}
& \quad d^2(q,r) \ = \ d^2(p,r) \nonumber \\
\Longleftrightarrow & \quad (q.x - x)^2 + (q.y)^2
\ = \ (p.x - x)^2 + (p.y)^2 \nonumber \\
\Longleftrightarrow & \quad (q.x)^2 - 2 \, x \, q.x + (q.y)^2 -
(p.x)^2 + 2 \, x \, p.x - (p.y)^2 \ = \ 0.
\end{align}
By substituting $q.x, q.y$ from equations~\ref{eq:q_in_uv_x} and
\ref{eq:q_in_uv_y} into Equation~\ref{eq:equidistant}, we get
\begin{align*}
& \quad \bigl[ \lambda \, (v.x - u.x) + u.x \bigr]^2
- 2 \, x \, \bigl[ \lambda \, (v.x - u.x) + u.x \bigr]
\\
& \quad + \bigl[ \lambda \, (v.y - u.y) + u.y \bigr]^2
- (p.x)^2 + 2 \, x \, p.x - (p.y)^2 \ = \ 0
\\
\Longleftrightarrow & \quad \lambda^2 \, \left[ (v.x - u.x)^2
+ (v.y - u.y)^2 \right]
\\
& \quad - 2 \, \lambda \, \bigl[ x \, (v.x - u.x)
- u.x \, (v.x - u.x) - u.y \, (v.y - u.y) \bigr]
\\
& \quad - 2 \, x \, (u.x - p.x) + (u.x)^2 + (u.y)^2
- (p.x)^2 - (p.y)^2 \ = \ 0,
\end{align*}
which has at most $2$ roots for $\lambda$ in terms of $x$ of the form
\begin{equation}\label{eq:lambda_equation}
\lambda \ = \ \alpha(x) \pm \sqrt{\beta(x)},
\end{equation}
where $\alpha(x)$ and $\beta(x)$ are polynomials of degrees $1$ and
$2$, respectively.
Then, by substituting $q.x, q.y$, and $\lambda$ from equations
\ref{eq:q_in_uv_x}, \ref{eq:q_in_uv_y} and \ref{eq:lambda_equation}
respectively, into Equation~\ref{eq:eq_omega}, we get:
\begin{equation}
\cos (\omega)
\ = \ \frac{\gamma(x) \pm \sqrt{\delta(x)}}{\epsilon(x)}
\ \quad \Longrightarrow \ \quad
\omega \ = \ \arccos \left(
\frac{\gamma(x) \pm \sqrt{\delta(x)}}{\epsilon(x)}
\right),
\end{equation}
where $\gamma(x)$, $\delta(x)$, and $\epsilon(x)$ are polynomials of
degrees $2$, $4$, and $2$, respectively.
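As a quick numeric sanity check of the quadratic in $\lambda$, the sketch below (plain Python; the values of $p$, $u$, $v$, and $x$ are hypothetical sample data) reads the coefficients off the derivation and verifies that both roots yield a point $q$ equidistant with $p$ from $r=(x,0)$:

```python
import math

# Hypothetical sample data: point p, edge endpoints u, v, and
# rotation center r = (x, 0) on the x-axis.
p = (3.0, 2.0)
u, v = (1.0, -1.0), (6.0, 4.0)
x = 2.5

dx, dy = v[0] - u[0], v[1] - u[1]
# Coefficients of a*lambda^2 + b*lambda + c = 0, read off the derivation.
a = dx ** 2 + dy ** 2
b = -2.0 * (x * dx - u[0] * dx - u[1] * dy)
c = -2.0 * x * (u[0] - p[0]) + u[0] ** 2 + u[1] ** 2 - p[0] ** 2 - p[1] ** 2
disc = b ** 2 - 4 * a * c
for s in (1, -1):
    lam = (-b + s * math.sqrt(disc)) / (2 * a)
    q = (u[0] + lam * dx, u[1] + lam * dy)
    # Both roots give a point q on the line through uv with d(q,r) = d(p,r).
    assert abs(((q[0] - x) ** 2 + q[1] ** 2) -
               ((p[0] - x) ** 2 + p[1] ** 2)) < 1e-9
```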
\section{3D Fixed MCR (Problem~\ref{pro:intro:3Dfixed_mcr})}
In this section we extend our techniques to the 3D-equivalent of
Problem~\ref{pro:intro:fixed_mcr}.
We consider a set $S$ of $n$ points in 3D, a rotation center $r$, and
a non-self-intersecting polyhedron $P$ of complexity $m$, i.e., with
$m$ facets. We identify rotations around~$r$ with points on a sphere
centered at $r$.~The following shows how to extend the algorithm we
used to solve the Fixed MCR problem:
\begin{enumerate}
\item {\bf Compute the inclusion regions}. For each
$p_j\in S$, the intersection of the sphere~$C_{p_j}(r)$ with center at
$r$ and radius~$|\overline{r p_j}|$ with the polyhedron~$P$ results in a set of regions on
the boundary of the sphere.~These regions consist of the rotated
copies of $p_j$
that lie in the interior of $P$.
\begin{itemize}
\item Regardless of whether $P$ is convex, each facet can contribute to these regions a constant number of times, so the overall complexity is $O(m)$.~Moreover, notice that a region can have many
holes, even when~$P$ is convex.
\item The sides of these regions on the sphere $C_{p_j}(r)$ are
arcs of circles, since they are the intersection of the sphere
with a planar facet of the polyhedron.~Then, these sides can be
computed in constant time each, as the intersection of the planes
containing the facets of the polyhedron with~$C_{p_j}(r)$.
\item Thus the total time and space complexities of computing all
the $O(nm)$ regions are $O(nm)$.
\end{itemize}
\item {\bf Normalize inclusion regions}. Let $R_{p_j}$ be the set of
inclusion regions of $p_j\in S$. Consider the unit sphere $S^2$ to
be centered at $r$ and project the regions onto $S^2$. Choose a
point $N$ on $S^2$ as reference and compute the rotation $\tau_j$
required to send $p_j$ to $N$. Then compute $\tau_j(R_{p_j})$ to
set the same reference for all the inclusion regions.
\item {\bf Computing the depth of $N$}. For later use, we need to
compute how many of the above regions contain the point $N$ (in
their interior or on their boundary), which we call the \emph{depth}
of $N$. To compute it, we perform point location in the
planar subdivision on the sphere, i.e., we check whether the point
$N$ belongs to each of the $O(nm)$ regions at a cost of
$O(\log m)$ per region, for a total time complexity of
$O(nm\log m)$.
\item {\bf Stereographic projection}. We use the well-known
stereographic projection from the point $N$, considered as the
north pole, to the tangent plane at the antipodal south pole. The
fact that this projection is conformal implies that circles in the
sphere are mapped to circles in the
plane~\cite{needham_2002}.~Therefore, the projections of the
inclusion regions $\tau_j(R_{p_j})$ have boundaries composed of
circular arcs.~Because any two sides (arcs of circles) of the
regions can intersect at most twice, the arrangement
$\mathcal{A}$ of projected regions can be computed in $O(n^2m^2)$
time and space, since the total number of intersection points
between arcs is $O(n^2m^2)$.~To compute a projected arc we proceed
as follows: we project the two endpoints of the arc together with a
third point on it (for example, its midpoint); from these three
projected points we determine the circle containing the projected
arc, and hence the projected arc itself.
\item {\bf Computing the region in $\mathcal{A}$ with largest
depth}. For this computation we work on the dual graph of the
arrangement $\mathcal{A}$, using the fact that the exterior
(unbounded) face of $\mathcal{A}$ is the face that contained
the point~$N$, whose depth we already know.~Starting from this face,
we traverse the dual graph, computing the depth of
each region and maintaining the region with maximum depth, in
$O(n^2m^2)$ total time.
By taking an interior point of the region with maximum depth and
mapping it back to the unit sphere, we obtain the two parameters
$\theta,\varphi$ giving the sought direction,
which is the solution of our problem.
\end{enumerate}
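The circle-preserving property of the stereographic projection used in step 4 can be checked numerically. The sketch below (plain Python; the plane $n \cdot p = d$ defining the spherical circle is hypothetical sample data) projects points of a circle on the unit sphere from the north pole onto the tangent plane at the south pole, and verifies that all projections lie on one planar circle:

```python
import math

def stereo(P):
    # Stereographic projection from the north pole N = (0, 0, 1)
    # onto the tangent plane z = -1 at the south pole.
    X, Y, Z = P
    t = 2.0 / (1.0 - Z)
    return (t * X, t * Y)

# A circle on the unit sphere: intersection with the (hypothetical)
# plane n.p = d, with |n| = 1 and |d| < 1.
n = (0.6, 0.0, 0.8)
d = 0.5
rho = math.sqrt(1 - d * d)           # radius of the spherical circle
e1 = (0.0, 1.0, 0.0)                 # orthonormal basis of the plane
e2 = (-0.8, 0.0, 0.6)                # e2 = n x e1
pts = []
for k in range(8):
    th = 2 * math.pi * k / 8
    P = tuple(d * n[i] + rho * (math.cos(th) * e1[i] + math.sin(th) * e2[i])
              for i in range(3))
    pts.append(stereo(P))

# Circumcenter of the first three projected points ...
(ax, ay), (bx, by), (cx, cy) = pts[:3]
D = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
      + (cx**2 + cy**2) * (ay - by)) / D
uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
      + (cx**2 + cy**2) * (bx - ax)) / D
r0 = math.hypot(ax - ux, ay - uy)
# ... and every projected point lies on that same circle.
for (px, py) in pts:
    assert abs(math.hypot(px - ux, py - uy) - r0) < 1e-8
```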
\begin{theorem}\label{MCR-3D}
The Fixed MCR problem in 3D
can be
solved in $O(n^2m^2\log (nm))$ time and $O(n^2m^2)$ space.
\end{theorem}
\section{Concluding Remarks}
We studied the problem of finding a rotation of a simple polygon that
covers the maximum number of points from a given point set.~We
described algorithms to solve the problem when the rotation center is
fixed, or lies on a line segment, a line, or a polygonal
chain.~Without much effort, our algorithms can also be applied when
the polygon has holes, and can be easily modified to solve
minimization versions of the same problems. We also solved the problem
with a fixed rotation center in 3D, leaving as an open problem the
3D analogue of Problem~\ref{pro:intro:segment_mcr}.
\section{Acknowledgements}
David Orden is supported by MINECO Projects MTM2014-54207 and MTM2017-83750-P, as well as
by H2020-MSCA-RISE project 734922 - CONNECT.~Carlos
Seara is supported by projects Gen. Cat. DGR 2017SGR1640, by MINECO
MTM2015-63791-R, and by H2020-MSCA-RISE project 734922 - CONNECT.~Jorge
Urrutia is supported in part by SEP-CONACYT
of M\'{e}xico, Proyecto 80268 and by PAPPIIT IN102117 Programa de Apoyo a la Investigación e Innovación Tecnológica, Universidad Nacional Autónoma de México.
\bibliographystyle{plain}
\section{Introduction}
\label{sec1}
Future wireless systems, including 5G systems, need to operate in dynamic
channel conditions, where operation in high-mobility scenarios (e.g.,
high-speed trains) and in millimeter wave (mmWave) bands is envisioned.
The wireless channels in such scenarios are doubly-dispersive, where
multipath propagation effects cause time dispersion and Doppler shifts
cause frequency dispersion \cite{jakes}. OFDM systems are usually
employed to mitigate the effect of inter-symbol interference (ISI)
caused by time dispersion \cite{ofdm1}. However, Doppler shifts result
in inter-carrier interference (ICI) in OFDM and degrade performance
\cite{ofdm2}. An approach to jointly combat ISI and ICI is to use pulse
shaped OFDM systems \cite{pulse1}-\cite{pulse3}. Pulse shaped OFDM systems
use general time-frequency lattices and optimized pulse shapes in the
time-frequency domain. However, systems that employ the pulse shaping
approach do not efficiently address the need to support high Doppler shifts.
Orthogonal time frequency space (OTFS) modulation is a recently proposed
multiplexing scheme \cite{otfswhitepaper}-\cite{otfs3} which meets the
high-Doppler signaling need through a different approach, namely,
{\em multiplexing the modulation symbols in the delay-Doppler domain}
(instead of multiplexing symbols in time-frequency domain as in traditional
modulation techniques such as OFDM). The OTFS waveform has been shown to be
resilient to delay-Doppler shifts in the wireless channel. For example,
OTFS has been shown to achieve significantly better error performance
than OFDM for vehicle speeds ranging from 30 km/h to 500 km/h in the
4 GHz band; its robustness in high-Doppler channels (e.g., at
500 km/h vehicle speeds) is especially notable, as OFDM performance
breaks down in such high-Doppler scenarios \cite{otfs2}. When the OTFS
waveform is viewed in the delay-Doppler domain, it corresponds to a
2D localized pulse. Modulation symbols, such as QAM symbols, are
multiplexed using these pulses as basis functions. The idea is to
transform the time-varying multipath channel into a 2D time-invariant
channel in the delay-Doppler domain. This results in a simple and
symmetric coupling between the channel and the modulation symbols, due
to which significant performance gains compared to other multiplexing
techniques are achieved \cite{otfswhitepaper}. OTFS modulation can be
architected over any multicarrier modulation by adding pre-processing and
post-processing blocks. This is very attractive from an implementation
viewpoint.
Recognizing the promise of OTFS in future wireless systems, including
mmWave communication systems \cite{otfs3}, several works on OTFS have
started emerging in the recent literature \cite{otfs4}-\cite{otfs9}.
These works have addressed the formulation of input-output relation
in vectorized form, equalization and detection, and channel estimation.
Multiple-input multiple-output (MIMO) techniques along with OTFS
(MIMO-OTFS) can achieve increased spectral/energy efficiencies and
robustness in rapidly varying MIMO channels. It is shown in
\cite{otfswhitepaper} that OTFS approaches channel capacity through
linear scaling of spectral efficiency with the MIMO order. We, in this
paper, consider the signal detection and channel estimation aspects
in MIMO-OTFS.
\begin{figure*}[t]
\centering
\includegraphics[width=14.0 cm, height=4.0 cm]{mimo_otfs_fig1.eps}
\caption{OTFS modulation scheme.}
\label{fig2}
\vspace{-2mm}
\end{figure*}
Our contributions can be summarized as follows. We first present a
vectorized input-output formulation for the MIMO-OTFS system.
Initially, we assume perfect channel knowledge at the receiver and
employ an iterative algorithm based on message passing for signal
detection. The algorithm has low complexity and it achieves very good
performance. For example, in a $2\times 2$ MIMO-OTFS system, a bit
error rate (BER) of $10^{-5}$ is achieved at an SNR of about 14 dB
for a Doppler of 1880 Hz (500 km/h speed at 4 GHz). For the same
system, MIMO-OFDM BER performance floors at a BER of 0.02. Next, we
relax the perfect channel estimation assumption and present a channel
estimation scheme in the delay-Doppler domain. The proposed scheme
uses impulses in the delay-Doppler domain as pilots for MIMO-OTFS
channel estimation. The proposed scheme is simple and effective in
high-Doppler MIMO channels. For example, compared to the case of
perfect channel knowledge, the proposed scheme loses only a
fraction of a dB in performance.
The rest of the paper is organized as follows. The OTFS modulation is
introduced in Sec. \ref{sec2}. The MIMO-OTFS system model and the
vectorized input-output relation are developed in Sec. \ref{sec3}.
MIMO-OTFS signal detection using message passing and the resulting BER
performance are presented in Sec. \ref{sec4}. The channel estimation
scheme in the delay-Doppler domain and the achieved performance are
presented in Sec. \ref{sec5}. Conclusions are presented in Sec. \ref{sec6}.
\section{OTFS Modulation}
\label{sec2}
OTFS modulation uses the delay-Doppler domain for multiplexing the
modulation symbols and for channel representation. When the channel
impulse response is represented in the delay-Doppler domain, the
received signal $y(t)$ is the sum of reflected copies of the transmitted
signal $x(t)$, which are delayed in time ($\tau$), shifted in frequency
($\nu$), and multiplied by the complex gain $h(\tau,\nu)$ \cite{otfs1}.
Thus, the coupling between an input signal and the channel in this domain
is given by the following double integral:
\begin{equation}
\label{channel}
y(t)=\int_{\nu} \int_{\tau} h(\tau,\nu)x(t-\tau)e^{j2\pi\nu(t-\tau)} \mathrm{d} \tau \mathrm{d} \nu.
\end{equation}
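A discretized sketch of this coupling (illustrative Python; the tap list and grid size are hypothetical) replaces the double integral by a finite sum over delay-Doppler taps, each delaying the input and applying a Doppler phase ramp:

```python
import cmath

def dd_channel(x, taps, N):
    # Discrete-time analogue of the double integral: each tap (h, l, k)
    # delays x by l samples and applies a Doppler shift of k/N
    # cycles per sample, starting at the (delayed) sample.
    y = [0j] * len(x)
    for n in range(len(x)):
        for h, l, k in taps:
            if n - l >= 0:
                y[n] += h * x[n - l] * cmath.exp(2j * cmath.pi * k * (n - l) / N)
    return y

# Hypothetical 2-tap channel: (complex gain, delay tap, Doppler tap).
taps = [(1.0, 0, 1), (0.5j, 2, -1)]
x = [1, 0, 0, 0]                  # impulse input
y = dd_channel(x, taps, N=4)      # response shows one copy per tap
```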
The block diagram of the OTFS modulation scheme is shown in Fig. \ref{fig2}.
The inner box is the familiar time-frequency multicarrier modulation, and
the outer box with a pre- and post-processor implements the OTFS modulation
scheme in the delay-Doppler domain. The information symbols $x[k,l]$ (e.g.,
QAM symbols) residing in the delay-Doppler domain are first transformed to
the familiar time-frequency (TF) domain signal $X[n,m]$ through the 2D
inverse symplectic finite Fourier transform (ISFFT) and windowing. The
Heisenberg transform is then applied to the TF signal $X[n,m]$ to transform
to the time domain signal $x(t)$ for transmission. At the receiver, the
received signal $y(t)$ is transformed back to a TF domain signal $Y[n,m]$
through Wigner transform (inverse Heisenberg transform). $Y[n,m]$ thus
obtained is transformed to the delay-Doppler domain signal $y[k,l]$ through
the symplectic finite Fourier transform (SFFT) for demodulation.
In the following subsections, we describe the signal models in TF modulation
and OTFS modulation. Let $T$ denote the TF modulation symbol time and
$\Delta f$ denote the subcarrier spacing. Let $x[k,l]$, $k=0,\cdots,N-1$,
$l=0,\cdots,M-1$ be the information symbols transmitted in a given packet
burst. Let $W_{tx}[n,m]$ and $W_{rx}[n,m]$ denote the transmit and receive
windows, respectively.
\subsection{Time-frequency modulation }
\label{sec2a}
\begin{itemize}
\item Let $\varphi_{tx}(t)$ and $\varphi_{rx}(t)$ denote the transmit and
receive pulses, respectively, which are bi-orthogonal with respect to time
and frequency translations. The TF domain signal $X[n,m]$,
$n=0,\cdots,N-1$, $m=0,\cdots,M-1$, is transmitted in a given packet burst.
\item TF modulation/Heisenberg transform: The signal in the time-frequency
domain $X[n,m]$ is transformed to the time domain signal $x(t)$ using the
Heisenberg transform given by
\begin{equation}
\hspace{-2mm}
x(t)= \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} X[n,m]\varphi_{tx}(t-nT)e^{j2\pi m \Delta f (t-nT)}.
\label{tfmod}
\end{equation}
\item TF demodulation/Wigner transform: At the receiver, the time domain
signal is transformed back to the TF domain using Wigner transform given by
\begin{equation}
\label{wigner}
Y[n,m] = A_{\varphi_{rx},y}(\tau,\nu)|_{\tau =nT,\nu =m \Delta f},
\end{equation}
where $A_{\varphi_{rx},y}(\tau,\nu)$ is the cross ambiguity function given by
\begin{equation}
\label{crossambig}
A_{\varphi_{rx},y}(\tau,\nu)=\int \varphi_{rx} ^*(t-\tau) y(t) e^{-j2 \pi \nu(t-\tau)} \mathrm{d}t,
\end{equation}
and $y(t)$ is related to $x(t)$ by (\ref{channel}).
\end{itemize}
The relation between
$Y[n,m]$ and $X[n,m]$ for TF modulation can be derived as \cite{otfs2}
\begin{equation}
\label{tfinpop}
Y[n,m] = H[n,m]X[n,m] + V[n,m],
\end{equation}
where $V[n,m]$ is the additive white Gaussian noise and $H[n,m]$ is given by
\begin{equation}
H[n,m]=\int_{\tau} \int_{\nu} h(\tau,\nu) e^{j2\pi \nu nT} e^{-j2\pi (\nu + m \Delta f) \tau} \mathrm{d} \nu \mathrm{d} \tau.
\end{equation}
\subsection{OTFS modulation}
\label{sec2b}
\begin{itemize}
\item Let $X_p[n,m]$ be the periodized version of $X[n,m]$ with period
$(N,M)$. The SFFT of $X_p[n,m]$ is given by
\begin{equation}
x_p[k,l] = \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} X_p[n,m] e^{-j2\pi( {nk \over N} - {ml \over M} )},
\end{equation}
and the ISFFT is {\small $X_p[n,m] = SFFT^{-1} (x[k,l])$}, given by
\begin{equation}
X_p[n,m] = {1 \over MN }\sum_{k=0}^{N-1} \sum_{l=0}^{M-1} x[k,l] e^{j2\pi( {nk \over N}-{ml \over M})}.
\end{equation}
\item Information symbols $x[k,l]$, $k=0,\cdots,N-1$, $l=0,\cdots,M-1$,
are transmitted in a given packet burst.
\item OTFS transform/pre-processing: The information symbols in the
delay-Doppler domain $x[k,l]$ are mapped to TF domain symbols $X[n,m]$ as
\begin{equation}
X[n,m] = W_{tx}[n,m]SFFT^{-1}(x[k,l]),
\label{otfsmod}
\end{equation}
where $W_{tx}[n,m]$ is the transmit window, a square-summable function.
\item $X[n,m]$ thus obtained is in the TF domain and it is TF modulated
as described in the previous subsection, and $Y[n,m]$ is obtained by
(\ref{wigner}).
\item OTFS demodulation/post-processing: A receive window $W_{rx}[n,m]$ is
applied to $Y[n,m]$, and the result is periodized to obtain $Y_p[n,m]$ with
period $(N,M)$, as
\begin{eqnarray}
Y_W[n,m] & = & W_{rx}[n,m]Y[n,m], \nonumber \\
Y_p[n,m] & = & \sum_{k,l=- \infty}^{\infty} Y_W[n-kN,m-lM].
\label{otfsdemod1}
\end{eqnarray}
The symplectic finite Fourier transform is then applied to $Y_p[n,m]$ to
convert it from TF domain back to delay-Doppler domain $\hat{x}[k,l]$, as
\begin{equation}
\hat{x}[k,l]=SFFT (Y_p[n,m]).
\label{otfsdemod2}
\end{equation}
\end{itemize}
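The SFFT/ISFFT pair above can be verified numerically. The brute-force Python sketch below (tiny hypothetical $N \times M$ grid) implements both sums directly and checks that the SFFT inverts the ISFFT:

```python
import cmath

def sfft(X, N, M):
    # x_p[k,l] = sum_{n,m} X[n,m] exp(-j*2*pi*(n*k/N - m*l/M))
    return [[sum(X[n][m] * cmath.exp(-2j * cmath.pi * (n * k / N - m * l / M))
                 for n in range(N) for m in range(M))
             for l in range(M)] for k in range(N)]

def isfft(x, N, M):
    # X_p[n,m] = (1/MN) sum_{k,l} x[k,l] exp(+j*2*pi*(n*k/N - m*l/M))
    return [[sum(x[k][l] * cmath.exp(2j * cmath.pi * (n * k / N - m * l / M))
                 for k in range(N) for l in range(M)) / (M * N)
             for m in range(M)] for n in range(N)]

# Round trip on hypothetical 2x3 data: SFFT(ISFFT(x)) recovers x.
N, M = 2, 3
x = [[complex(k + 1, l) for l in range(M)] for k in range(N)]
x_rec = sfft(isfft(x, N, M), N, M)
assert all(abs(x_rec[k][l] - x[k][l]) < 1e-9
           for k in range(N) for l in range(M))
```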
The input-output relation in OTFS modulation can be derived
as \cite{otfs2}
\begin{equation}
\hat{x}[k,l]={1 \over MN} \hspace{-1mm} \sum_{m=0}^{M-1} \sum_{n=0}^{N-1}\hspace{-1.0mm} x[n,m] h_w \hspace{-1.0mm} \left( {k-n \over NT}, {l-m \over M \Delta f} \right)\hspace{-1mm} + v[k,l],
\label{otfsinpoutp}
\end{equation}
where
\begin{equation}
h_w \left({k-n \over NT}, {l-m \over M \Delta f} \right) = h_w (\nu',\tau')|_{\nu'={k-n \over NT},\tau'={l-m\over M \Delta f}},
\label{deldoppchannel}
\end{equation}
where
$h_w(\nu',\tau')$ is the circular convolution of the channel response
with a windowing function $w(\tau,\nu)$, given by
\begin{equation}
h_w(\nu',\tau')=\int_{\nu} \int_{\tau} h(\tau,\nu)w(\nu'-\nu,\tau'-\tau) \mathrm{d} \tau \mathrm{d} \nu,
\end{equation}
where $w(\tau,\nu)$ is given by
\begin{equation}
w(\tau,\nu) = \hspace{-1mm} \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} \hspace{-1mm} W_{tx}[n,m] W_{rx}[n,m] e^{-j2 \pi (\nu nT - \tau m \Delta f)}.
\end{equation}
\begin{figure*}[t]
\centering
\includegraphics[width=16 cm, height=4.3cm]{mimo_otfs_fig2.eps}
\caption{MIMO-OTFS modulation scheme.}
\label{MIMO_BD}
\vspace{-4mm}
\end{figure*}
\vspace{-2mm}
\subsection{Vectorized formulation of the input-output relation}
\label{sec2c}
Consider a channel with $P$ signal propagation paths (taps). Let the path
$i$ be associated with a delay $\tau_i$, a Doppler $\nu_i$, and a fade
coefficient $h_i$. The channel impulse response in the delay-Doppler
domain can be written as
\begin{equation}
h(\tau,\nu) =\sum_{i=1}^{P} h_i \delta(\tau -\tau_i) \delta(\nu-\nu_i).
\label{sparsechannel}
\end{equation}
Assume that the windows used in modulation, $W_{tx}[n,m]$ and $W_{rx}[n,m]$,
are rectangular. Define $\tau_i={ \alpha _i \over M \Delta f}$ and
$\nu_i={\beta_i \over NT}$, where $\alpha_i$ and $\beta_i$ are
integers denoting the indices of the delay tap (with delay $\tau_i$) and
Doppler tap (with Doppler value $\nu_i$). In practice, although the delay
and Doppler values are not exactly integer multiples of the taps, they can
be well approximated by a few delay-Doppler taps in the discrete domain
\cite{channelestimation1}. With the above assumptions, the input-output
relation for the channel in (\ref{sparsechannel}) can be derived as
\cite{otfs5}
\begin{equation}
y[k,l] = \sum_{i=1}^{P} h_i' x[((k-\beta_i))_N,((l-\alpha_i))_M] + v[k,l],
\label{inpopnofracdopp}
\end{equation}
where $h_i'=h_i e^{-j2 \pi \nu_i \tau_i}$. The above equation
can be represented in vectorized form as \cite{otfs5}
\begin{equation}
\mathbf{y} = \mathbf{Hx} + \mathbf{v},
\label{vecform}
\end{equation}
where $\mathbf{x}, \mathbf{y}, \mathbf{v} \in \mathbb{C} ^{NM \times 1}$,
$\mathbf{H} \in \mathbb{C}^{NM\times NM}$, the $(k + Nl)$th element of
$\mathbf{x}$, $x_{k+Nl}=x[k,l]$, $k=0,\cdots,N-1, l=0,\cdots,M-1$, and
the same relation holds for $\mathbf{y}$ and $\mathbf{v}$ as well. In
this representation, there are only $P$ non-zero elements in each row and
column of the equivalent channel matrix ($\mathbf{H}$) due to modulo
operations.
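The sparse structure of $\mathbf{H}$ can be made concrete with a small sketch (illustrative Python; the tap values and grid size are hypothetical) that builds the $NM \times NM$ matrix directly from the relation above and checks that every row and column has exactly $P$ non-zero entries:

```python
def build_H(taps, N, M):
    # y[k,l] = sum_i h_i' x[((k-beta_i))_N, ((l-alpha_i))_M], with the
    # vectorization x_{k+Nl} = x[k,l]; each tap fills one cyclically
    # shifted diagonal, so every row/column of H gets P non-zeros.
    NM = N * M
    H = [[0j] * NM for _ in range(NM)]
    for h_prime, alpha, beta in taps:   # (gain, delay tap, Doppler tap)
        for k in range(N):
            for l in range(M):
                H[k + N * l][((k - beta) % N) + N * ((l - alpha) % M)] += h_prime
    return H

# Hypothetical P = 2 channel on a tiny N = 2, M = 3 grid.
taps = [(1.0 + 0j, 0, 1), (0.3 - 0.2j, 2, 0)]
N, M = 2, 3
H = build_H(taps, N, M)
assert all(sum(1 for v in row if v != 0) == len(taps) for row in H)
```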
\section{MIMO-OTFS Modulation}
\label{sec3}
Consider a MIMO-OTFS system as shown in Fig. \ref{MIMO_BD} with an equal
number of transmit ($n_t$) and receive ($n_r$) antennas, i.e.,
$n_t=n_r=n_a$. Each antenna transmits OTFS-modulated information symbols
independently. Let the windows $W_{tx}[n,m]$, $W_{rx}[n,m]$ used for
modulation be rectangular. Assume that the channel corresponding to
$p$th transmit antenna and $q$th receive antenna has $P$ taps as in
(\ref{sparsechannel}). Therefore, the channel representation can be
written as
\begin{equation}
h_{qp}(\tau,\nu)=\sum_{i=1}^{P} h_{qp_i} \delta(\tau-\tau_i) \delta(\nu-\nu_i),
\label{sparsechannelmimo}
\end{equation}
$p=1,2,\cdots,n_a$, $q=1,2,\cdots,n_a$. Thus, we can use the vectorized
formulation in Sec. \ref{sec2c} for each transmit and receive antenna
pair to describe the input-output relation.
\subsection{Vectorized formulation of the input-output relation for MIMO-OTFS}
\label{sec3a}
Let $\mathbf{H}_{qp}$ denote the equivalent channel matrix corresponding
to $p$th transmit antenna and $q$th receive antenna. Let $\mathbf{x}_p$
denote the $NM \times 1$ transmit vector from the $p$th transmit antenna
and $\mathbf{y}_q$ denote the $NM \times 1$ received vector corresponding
to $q$th receive antenna in a given frame. Then, similar to the system
model in (\ref{vecform}) for a SISO-OTFS, we can derive a linear system
model describing the input and output for the MIMO-OTFS system as given below
\begin{eqnarray}
\mathbf{y}_1 & = & \mathbf{H}_{11}\mathbf{x}_1+\mathbf{H}_{12}\mathbf{x}_2 + \cdots + \mathbf{H}_{1{n_a}}\mathbf{x}_{n_a}+\mathbf{v}_1, \nonumber \\
\mathbf{y}_2 & = & \mathbf{H}_{21}\mathbf{x}_1+\mathbf{H}_{22}\mathbf{x}_2 + \cdots + \mathbf{H}_{2{n_a}}\mathbf{x}_{n_a}+\mathbf{v}_2,\nonumber \\
\vdots \nonumber \\
\hspace{-4mm}\mathbf{y}_{n_a} & = & \mathbf{H}_{{n_a}1}\mathbf{x}_1+\mathbf{H}_{{n_a}2}\mathbf{x}_2 + \cdots + \mathbf{H}_{{n_a}{n_a}}\mathbf{x}_{n_a}+\mathbf{v}_{n_a}.\hspace{4mm}
\label{mimoeqns}
\end{eqnarray}
Define
\vspace{-3mm}
\begin{align*}
\ \mathbf{H}_{ {\tiny \mbox{MIMO}}} &= \begin{bmatrix}
\mathbf{H} _{11} & \mathbf{H}_{12} & \dots & \mathbf{H}_{1{n_a}} \\
\mathbf{H} _{21} & \mathbf{H}_{22} & \dots & \mathbf{H}_{2{n_a}} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{H} _{{n_a}1} & \mathbf{H}_{{n_a}2} & \dots & \mathbf{H}_{{n_a}{n_a}}
\end{bmatrix},
\end{align*}
\vspace{-3mm}
\begin{small}
\begin{align*}
&\mathbf{x}_{{\tiny \mbox{MIMO}}}={[{\mathbf{x}_1}^{T},{\mathbf{x}_2}^{T},\cdots, {\mathbf{x}_{n_a}}^{T}] }^{T}, \mathbf{y}_{{\tiny \mbox{MIMO}}} = {[{\mathbf{y}_1}^{T},{\mathbf{y}_2}^{T},\cdots, {\mathbf{y}_{n_a}}^{T}] }^{T}, \\ & \mathbf{v}_{{\tiny \mbox{MIMO}}} = {[{\mathbf{v}_1}^{T},{\mathbf{v}_2}^{T},\cdots, {\mathbf{v}_{n_a}}^{T}] }^{T}.
\end{align*}
\end{small}
Then, (\ref{mimoeqns}) can be written as
\begin{equation}
\mathbf{y}_{{\tiny \mbox{MIMO}}} = \mathbf{H}_{{\tiny \mbox{MIMO}}}\mathbf{x}_{{\tiny \mbox{MIMO}}} + \mathbf{v}_{\tiny \mbox{MIMO}},
\label{mimovecform}
\end{equation}
where
$\mathbf{x}_{{\tiny \mbox{MIMO}}}, \mathbf{y}_{{\tiny \mbox{MIMO}}}, \mathbf{v}_{\tiny \mbox{MIMO}} \in \mathbb{C} ^{{n_a}NM \times 1}$,
$\mathbf{H}_{{\tiny \mbox{MIMO}}} \in \mathbb{C}^{{n_a}NM\times {n_a}NM}$.
Thus, in this representation, each row and column of
$\mathbf{H}_{{\tiny \mbox{MIMO}}}$ has only $n_aP$ non-zero elements
due to modulo operations.
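Assembling $\mathbf{H}_{{\tiny \mbox{MIMO}}}$ from the per-antenna-pair blocks is a simple block-stacking step; a minimal sketch (illustrative Python, with marker values standing in for real channel blocks):

```python
def block_matrix(blocks):
    # blocks[q][p] is the NM x NM matrix H_{qp}; stack them into the
    # (n_a*NM) x (n_a*NM) matrix used in y_MIMO = H_MIMO x_MIMO + v_MIMO.
    n_a = len(blocks)
    NM = len(blocks[0][0])
    return [[blocks[r // NM][c // NM][r % NM][c % NM]
             for c in range(n_a * NM)] for r in range(n_a * NM)]

# Hypothetical 2x2 antenna setup with NM = 2; block H_{qp} is filled
# with the marker value 10*q + p so its position is easy to check.
NM, n_a = 2, 2
blocks = [[[[10 * q + p] * NM for _ in range(NM)] for p in range(n_a)]
          for q in range(n_a)]
H_mimo = block_matrix(blocks)
assert len(H_mimo) == n_a * NM and len(H_mimo[0]) == n_a * NM
assert H_mimo[0][2] == 1      # H_{01} sits in the top-right block
assert H_mimo[3][1] == 10     # H_{10} sits in the bottom-left block
```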
\section{MIMO-OTFS Signal Detection}
\label{sec4}
In this section, we present a MIMO-OTFS signal detection scheme using an
iterative algorithm based on message passing and present a performance
comparison between MIMO-OTFS and MIMO-OFDM in high-Doppler scenarios.
\subsection{Algorithm for MIMO-OTFS signal detection}
\label{sec4a}
Let the sets of non-zero positions in the $b$th row and $a$th column of
$\mathbf{H}_{{\tiny \mbox{MIMO}}}$ be denoted by $\zeta_b$ and $\zeta_a$,
respectively. Using (\ref{mimovecform}), the system can be modeled as a
sparsely connected factor graph with ${n_a}NM$ variable nodes corresponding
to the elements in $\mathbf{x}_{{\tiny \mbox{MIMO}}}$ and ${n_a}NM$
observation nodes corresponding to the elements in
$\mathbf{y}_{{\tiny \mbox{MIMO}}}$. Each observation node $y_b$ is
connected to the set of variable nodes \{$x_c, c \in \zeta_b$\}, and each
variable node $x_a$ is connected to the set of observation nodes
\{$y_c, c \in \zeta_a$\}. Also, $|\zeta_b|=|\zeta_a|={n_a}P$. The maximum
a posteriori (MAP) decision rule for (\ref{mimovecform}) is given by
\begin{equation}
{\hat{\mathbf{x}}_{{\tiny \mbox{MIMO}}}} = \mathop{\text{argmax}}_{\mathbf{x}_{{\tiny \mbox{MIMO}}} \in \mathbb{A} ^ {n_aNM}} \mbox{Pr}(\mathbf{x}_{{\tiny \mbox{MIMO}}}|\mathbf{y}_{{\tiny \mbox{MIMO}}},\mathbf{H}_{{\tiny \mbox{MIMO}}}),
\label{mapx}
\end{equation}
where $\mathbb{A}$ is the modulation alphabet (e.g., QAM) used. The
detection as per (\ref{mapx}) has exponential complexity. Hence, we use
the symbol-by-symbol MAP rule for $ 0 \leq a \leq n_aNM-1 $ for detection
as follows:
\begin{equation}
\label{MAPsbsmimo}
\begin{split}
\hat{x}_a & = \mathop{\text{argmax}}_{a_j \in \mathbb{A}} \mbox{Pr}(x_a = a_j | \mathbf{y}_{{\tiny \mbox{MIMO}}},\mathbf{H}_{{\tiny \mbox{MIMO}}}) \\
& = \mathop{\text{argmax}}_{a_j \in \mathbb{A}} {1 \over |\mathbb{A}|} \mbox{Pr}(\mathbf{y}_{{\tiny \mbox{MIMO}}}|x_a =a_j , \mathbf{H}_{{\tiny \mbox{MIMO}}} ) \\
& \approx \mathop{\text{argmax}}_{a_j \in \mathbb{A}} \prod_{c \in \zeta_a} \mbox{Pr}(y_c|x_a =a_j , \mathbf{H}_{{\tiny \mbox{MIMO}}}).
\end{split}
\end{equation}
The transmitted symbols are assumed to be equally likely, and the components
of $\mathbf{y}_{{\tiny \mbox{MIMO}}}$ are nearly independent for a given
$x_a$ due to the sparsity of $\mathbf{H}_{{\tiny \mbox{MIMO}}}$. This can
be solved using the message-passing algorithm described below. The
message that is passed from the variable node $x_a$, for each
$a \in \{0,1, \cdots,n_aNM-1\}$, to the observation node $y_b$ for
$b \in \zeta_a$, is the pmf denoted by
$\textbf{p}_{ab} = \{p_{ab}(a_j)|a_j \in \mathbb{A} \}$ of the symbols in
the constellation $\mathbb{A}$. Let $H_{ab}$ denote the element in the
$a$th row and $b$th column of $\mathbf{H}_{{\tiny \mbox{MIMO}}}$. The
message passing algorithm is described as follows.
\begin{algorithmic}[1]
\label{alg1}
\State \textbf{Inputs}: $\mathbf{y}_{{\tiny \mbox{MIMO}}}$, $\mathbf{H}_{{\tiny \mbox{MIMO}}}$, $ N_{iter}$: max. number of iterations.
\State \textbf{Initialization}: Iteration index $t=0$, pmf $\mathbf{p}_{ab}^{(0)}=1/|\mathbb{A}| \ \forall \ a \in \{0,1,\cdots,n_aNM-1 \}$ and $b \in \zeta_a $.
\State \textbf{Messages from $y_b$ to $x_a$}: The mean $(\mu_{ba}^{(t)})$
and variance $((\sigma_{ba}^{(t)})^2)$ of the interference term $I_{ba}$
are passed as messages from $y_b$ to $x_a$. $I_{ba}$ can be approximated
as a Gaussian random variable and is given by
\begin{small}
\begin{equation}
I_{ba}= \sum_{c \in \zeta_b,c \neq a } x_cH_{b,c} + v_b .
\end{equation}
\end{small}
The mean and variance of $I_{ba}$ are given by
\begin{small}
\begin{equation*}
\mu_{ba}^{(t)} = \mathbb{E}[I_{ba}] = \sum_{c \in \zeta_b, c \neq a } \sum_{j=1}^{|\mathbb{A}|} p_{cb}^{(t)}(a_j) a_j H_{b,c},
\end{equation*}
\begin{align*}
&( \sigma_{ba}^{(t)})^2 = \text{Var}[I_{ba}] \nonumber \\
&= \sum_{\substack{c \in \zeta_b \\ c \neq a}} \Bigg( \sum_{j=1}^{\mathbb{|A|}}p_{cb}^{(t)}(a_j)|a_j|^2 |H_{b,c}|^2 - \bigg| \sum_{j=1}^{\mathbb{|A|}}p_{cb}^{(t)}(a_j)a_jH_{b,c} \bigg|^2 \Bigg)
\end{align*}
\hspace{4.5mm} $+ \ \sigma^2.$
\end{small}
\State \textbf{Messages from $x_a$ to $y_b$}: The message passed from variable
node $x_a$ to observation node $y_b$ is the pmf vector
$\textbf{p}_{ab}^{(t+1)}$ with the elements given by
\begin{equation}
p_{ab}^{(t+1)}(a_j)=\Delta \ p_{ab}^{(t)}(a_j)+(1- \Delta) \ p_{ab}^{(t-1)}(a_j),
\end{equation}
where $\Delta \in (0,1]$ is the damping factor for improving convergence
rate, and
\begin{equation}
p_{ab}^{(t)}(a_j) \propto \prod_{c \in \zeta_a , c \neq b} \text{Pr}(y_c|x_a=a_j,\mathbf{H}_{{\tiny \mbox{MIMO}}}),
\end{equation}
where
\begin{equation*}
\text{Pr}(y_c|x_a=a_j,\mathbf{H}_{{\tiny \mbox{MIMO}}}) \propto \text{exp} \Bigg( {-|y_c - \mu_{ca}^{(t)}-H_{c,a}a_j|^2 \over (\sigma_{ca}^{(t)})^2} \Bigg).
\end{equation*}
\State \textbf{Stopping criterion}: Repeat steps 3 \& 4 until
$\max\limits_{a,b,a_j} |p_{ab}^{(t+1)}(a_j) - p_{ab}^{(t)}(a_j)| < \epsilon$
(where $\epsilon$ is a small value) or the maximum number of iterations,
$N_{iter}$, is reached.
\State \textbf{Output}: Output the detected symbol as
\begin{equation}
\hat{x}_a = \mathop{\text{argmax}}_{a_j \in \mathbb{A}}p_a(a_j), \:\: a \in \{0,1,2,\cdots,n_aNM-1\},
\end{equation}
where
\begin{equation}
p_a(a_j) = \prod_{c \in \zeta_a} \text{Pr}(y_c|x_a =a _j,\mathbf{H}_{{\tiny \mbox{MIMO}}}).
\end{equation}
\end{algorithmic}
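For concreteness, the damped message update and the stopping test in the algorithm above can be sketched as follows. This is an illustrative sketch in Python; the function names and array shapes are our own, not part of the algorithm specification.

```python
import numpy as np

def damped_update(p_t, p_prev, delta=0.5):
    """Damped pmf update: p^(t+1) = delta * p^(t) + (1 - delta) * p^(t-1).

    A convex combination of two pmfs is again a pmf, so no
    renormalization is needed."""
    return delta * p_t + (1 - delta) * p_prev

def converged(p_new, p_old, eps=0.01):
    """Stopping criterion: max over all messages of |p^(t+1) - p^(t)| < eps."""
    return np.max(np.abs(p_new - p_old)) < eps
```

With $\Delta=0.5$ the update averages the current and previous message vectors, which is the damping value used in the simulations below.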
\subsection{Vectorized formulation of the input-output relation for MIMO-OFDM}
\label{subsec4b}
In this subsection, in order to provide a performance comparison between
MIMO-OTFS and MIMO-OFDM, we present the vectorized formulation of the
input-output relation for MIMO-OFDM. OFDM uses the TF domain for signaling
and channel representation. We will first derive the vectorized formulation
for a SISO-OFDM system and then extend it to MIMO-OFDM. For a fair comparison with the
OTFS modulation, we will consider $N$ consecutive OFDM blocks (each of size
$M$) to be one frame, i.e., the transmit vector
$\mathbf{x}_{{\tiny \mbox{OFDM}}} \in \mathbb{C} ^{NM \times 1}$, and
message passing detection is done jointly over one $NM \times 1$ frame.
Consider the channel in (\ref{sparsechannel}). The time-delay
representation $h(\tau,t)$ is related to the delay-Doppler representation
$h(\tau,\nu)$ by a Fourier transform along the time axis, and is given by
\begin{equation}
h(\tau,t) =\sum_{i=1}^{P} h_i e^{j2 \pi \nu_i t}\delta(\tau-\tau_i).
\label{tdrep}
\end{equation}
Sample the time axis at $t = nT_s ={n \over M\Delta f}$. The sampled
time-delay representation $h(\tau,n)$ is given by
\begin{equation}
h(\tau,n) =\sum_{i=1}^{P} h_i e^{j2 \pi \nu_i n \over M\Delta f}\delta(\tau-\tau_i).
\label{tdrep_discrete}
\end{equation}
Let $CP=P-1$ denote the cyclic prefix length used in each OFDM block and
let $L=M+CP$. The size of one frame after cyclic prefix insertion to each
block will then be $NL$. Let
$\mathbf{T}_{CP}={[ \mathbf{C}_{CP}^T \ \mathbf{I}_M ]}^T$ denote the
$L \times M$ matrix that inserts cyclic prefix for one block, where
$\mathbf{C}_{CP}$ contains the last $CP$ rows of the identity matrix
$\mathbf{I}_M$. Also, let
$\mathbf{R}_{CP}= [ {\bf 0}_{M \times CP } \ {\mathbf I}_M ]$ denote the
$M \times L$ matrix that removes the cyclic prefix for one block
\cite{rapid_tv_channels}. Let $\mathbf{W}_{M \times M}$ and
$\mathbf{W}^{H}_{M \times M}$ denote the DFT and IDFT matrices of size $M$.
We use the following notations.
\begin{itemize}
\item $\mathbf{B}_{cpin}= \text{diag} \underbrace{(\mathbf{T}_{CP},\mathbf{T}_{CP},\cdots,\mathbf{T}_{CP})}_{N \ \text{times}}$
: cyclic prefix insertion matrix for $N$ consecutive OFDM blocks.
\item $\mathbf{B}_{cpre}= \text{diag} \underbrace{(\mathbf{R}_{CP},\mathbf{R}_{CP},\cdots,\mathbf{R}_{CP})}_{N \ \text{times}}$
: cyclic prefix removal matrix for $N$ consecutive OFDM blocks.
\item $\mathbf{D}= \text{diag} \underbrace{(\mathbf{W},\mathbf{W},\cdots,\mathbf{W})}_{N \ \text{times}}$
: DFT matrix for $N$ consecutive OFDM blocks.
\item $\mathbf{D}^{H}= \text{diag} \underbrace{(\mathbf{W}^{H},\mathbf{W}^{H},\cdots,\mathbf{W}^{H})}_{N \ \text{times}}$
: IDFT matrix for $N$ consecutive OFDM blocks.
\item The channel in the time-delay domain for a given frame can be written
as a matrix $\mathbf{H}_{td}$ using (\ref{tdrep_discrete}) and has size
$NL \times NL$.
\end{itemize}
Using the above, the end-to-end relationship in OFDM modulation can be
described by the following linear model:
\begin{align}
\mathbf{y}_{{\tiny \mbox{OFDM}}} & = \underbrace{\mathbf{D} \mathbf{B}_{cpre} \mathbf{H}_{td} \mathbf{B}_{cpin} \mathbf{D}^H}_{\mathbf{H}_{{\tiny \mbox{OFDM}}}} \mathbf{x}_{{\tiny \mbox{OFDM}}} + \mathbf{v} \nonumber \\
&= \mathbf{H}_{{\tiny \mbox{OFDM}}} \mathbf{x}_{{\tiny \mbox{OFDM}}} + \mathbf{v},
\end{align}
where
$\mathbf{x}_{{\tiny \mbox{OFDM}}}, \mathbf{y}_{{\tiny \mbox{OFDM}}}, \mathbf{v} \in \mathbb{C} ^{NM \times 1}$,
$\mathbf{H}_{{\tiny \mbox{OFDM}}} \in \mathbb{C}^{NM\times NM}$.
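As a numerical sanity check of this formulation, the following sketch builds $\mathbf{W} \mathbf{R}_{CP} \mathbf{H}_{td} \mathbf{T}_{CP} \mathbf{W}^H$ for a single block ($N=1$) in the time-invariant special case (all $\nu_i=0$), where the effective matrix should be diagonal (no ICI). The block size, CP length, and tap values are illustrative assumptions, not the simulation parameters of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, CP = 8, 3                       # illustrative block size and CP length
L = M + CP
taps = rng.standard_normal(CP + 1) + 1j * rng.standard_normal(CP + 1)

# Time-domain channel matrix H_td over one CP-extended block.
# Time-invariant special case (nu_i = 0): y[n] = sum_l taps[l] * s[n - l].
H_td = np.zeros((L, L), dtype=complex)
for n in range(L):
    for l, h in enumerate(taps):
        if n - l >= 0:
            H_td[n, n - l] = h

T_cp = np.vstack([np.eye(M)[-CP:], np.eye(M)])    # CP insertion (L x M)
R_cp = np.hstack([np.zeros((M, CP)), np.eye(M)])  # CP removal (M x L)
W = np.fft.fft(np.eye(M)) / np.sqrt(M)            # unitary DFT matrix

H_ofdm = W @ R_cp @ H_td @ T_cp @ W.conj().T
```

After CP removal the per-block channel is circulant, so the DFT diagonalizes it; with nonzero Doppler ($\nu_i \neq 0$) the same construction yields off-diagonal ICI terms.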
\subsubsection{MIMO-OFDM}
The vectorized formulation of the input-output relation for SISO-OFDM
derived above can be extended to MIMO-OFDM in a similar fashion as was done
for the MIMO-OTFS system described in Sec. \ref{sec3a}. Let
$\mathbf{H}_{{{\tiny \mbox{OFDM}}}_{qp}}$ denote the equivalent channel
matrix corresponding to the $p$th transmit antenna and the $q$th receive antenna.
Let $\mathbf{x}_{{{\tiny \mbox{OFDM}}}_p}$ denote the $NM \times 1$ transmit
vector from the $p$th transmit antenna and
$\mathbf{y}_{{{\tiny \mbox{OFDM}}}_q}$ denote the $NM \times 1$ received
vector corresponding to the $q$th receive antenna in a given frame. Define
\begin{align*}
\ \mathbf{H}_{ {\tiny \mbox{MIMO-OFDM}}} &= \begin{bmatrix}
\mathbf{H} _{{\tiny \mbox{OFDM}}_{11}} & \mathbf{H}_{{\tiny \mbox{OFDM}}_{12}} & \dots & \mathbf{H}_{{\tiny \mbox{OFDM}}_{1{n_a}}} \\
\mathbf{H} _{{\tiny \mbox{OFDM}}_{21}} & \mathbf{H}_{{\tiny \mbox{OFDM}}_{22}} & \dots & \mathbf{H}_{{\tiny \mbox{OFDM}}_{2n_a}} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{H} _{{\tiny \mbox{OFDM}}_{n_a1}} & \mathbf{H}_{{\tiny \mbox{OFDM}}_{n_a2}} & \dots & \mathbf{H}_{{\tiny \mbox{OFDM}}_{n_a n_a}}
\end{bmatrix},
\end{align*}
\begin{small}
\begin{align*}
&\mathbf{x}_{{\tiny \mbox{MIMO-OFDM}}}={[{\mathbf{x}_{{\tiny \mbox{OFDM}}_1}}^{T},{\mathbf{x}_{{\tiny \mbox{OFDM}}_2}}^{T},\cdots,{\mathbf{x}_{{\tiny \mbox{OFDM}}_{n_a}}}^{T}]}^{T}, \\ & \mathbf{y}_{{\tiny \mbox{MIMO-OFDM}}} = {[{\mathbf{y}_{{\tiny \mbox{OFDM}}_1}}^{T},{\mathbf{y}_{{\tiny \mbox{OFDM}}_2}}^{T},\cdots,{\mathbf{y}_{{\tiny \mbox{OFDM}}_{n_a}}}^{T}]}^{T}.
\end{align*}
\end{small}
\vspace{-2mm}
\hspace{-5mm}
The input-output relation for MIMO-OFDM can be written as
\begin{equation}
\mathbf{y}_{{\tiny \mbox{MIMO-OFDM}}} = \mathbf{H}_{{\tiny \mbox{MIMO-OFDM}}}\mathbf{x}_{{\tiny \mbox{MIMO-OFDM}}} + \mathbf{v}_{\tiny \mbox{MIMO-OFDM}},
\label{mimoofdmvecform}
\end{equation}
where
$\mathbf{x}_{{\tiny \mbox{MIMO-OFDM}}},\mathbf{y}_{{\tiny \mbox{MIMO-OFDM}}}, \mathbf{v}_{\tiny \mbox{MIMO-OFDM}} \in \mathbb{C} ^{{n_a}NM \times 1}$
and
$\mathbf{H}_{{\tiny \mbox{MIMO-OFDM}}} \in \mathbb{C}^{{n_a}NM\times {n_a}NM}$.
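The block structure above can be assembled directly. The following sketch (with illustrative sizes and random per-pair channels) checks that multiplying the stacked transmit vector by the block matrix reproduces the per-receive-antenna sums $\sum_p \mathbf{H}_{{\tiny \mbox{OFDM}}_{qp}} \mathbf{x}_{{\tiny \mbox{OFDM}}_p}$.

```python
import numpy as np

rng = np.random.default_rng(2)
na, NM = 2, 8                      # illustrative antenna count and frame size

# Per-antenna-pair equivalent channels H_{qp}, each NM x NM
H = [[rng.standard_normal((NM, NM)) for _ in range(na)] for _ in range(na)]
x = [rng.standard_normal(NM) for _ in range(na)]  # per-antenna transmit vectors

H_mimo = np.block(H)               # (na*NM) x (na*NM) block channel matrix
x_mimo = np.concatenate(x)         # stacked transmit vector
y_mimo = H_mimo @ x_mimo           # noiseless y = H x

# Per-receive-antenna view: y_q = sum_p H[q][p] @ x[p]
y_q = [sum(H[q][p] @ x[p] for p in range(na)) for q in range(na)]
```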
\subsection{Performance results and discussions}
\label{sec4c}
In this subsection, we present the BER performance of MIMO-OTFS and compare
it with that of MIMO-OFDM. Perfect channel knowledge is assumed at the
receiver. The message passing algorithm is used for both MIMO-OTFS and MIMO-OFDM.
A damping factor of 0.5 is used. The maximum number of iterations and the
$\epsilon$ value used are 30 and 0.01, respectively. We use the channel
model in (\ref{sparsechannelmimo}) and the number of taps $P$ is taken to
be 5. The delay-Doppler profile considered in the simulation is shown in
Table \ref{delay_Dopp_prof}. Other simulation parameters used are given in
Table \ref{SimPar}.
\begin{table}
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
Path index $(i)$ & $1$ & $2$ & $3$ & $4$ & $5$ \\
\hline
Delay ($\tau_i$), $\mu$s & $2.08$ & $4.164 $ & $6.246$ & $8.328$ & $10.41$\\
\hline
Doppler ($\nu_i$), Hz & $0$ & $470$ & $940$ & $1410$ & $1880$ \\
\hline
\end{tabular}
\vspace{2mm}
\caption{Delay-Doppler profile for the channel model with $P=5$.}
\vspace{-3mm}
\label{delay_Dopp_prof}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|p{0.50\linewidth}|p{0.25\linewidth}|}
\hline
\textbf{Parameter} & \textbf{Value} \\
\hline
Carrier frequency (GHz) & 4 \\
\hline
Subcarrier spacing (kHz) & 15\\
\hline
Frame size $(M,N)$ & $(32,32)$ \\
\hline
Modulation scheme & BPSK \\
\hline
MIMO configuration & 1$\times$1, 2$\times$2, 3$\times$3 \\
\hline
Maximum speed (kmph) & 507.6\\
\hline
\end{tabular}
\vspace{2mm}
\caption{System parameters.}
\label{SimPar}
\vspace{-9mm}
\end{center}
\end{table}
Figure \ref{BER1} shows the BER performance of MIMO-OTFS for SISO as well as
$2\times 2$ and $3\times 3$ MIMO configurations. The maximum considered
speed of 507.6 kmph corresponds to 1880 Hz Doppler frequency at a carrier
frequency of 4 GHz. Even at this high-Doppler value, MIMO-OTFS is found to
achieve very good BER performance. We observe that a BER of $10^{-5}$ is
achieved at an SNR of about 14 dB for the 2$\times$2 system, while the SNR
required to achieve the same BER reduces by about 2 dB for the 3$\times 3$
system. Thus, with the proposed detection algorithm, MIMO-OTFS brings in
the advantages of a linear increase in spectral efficiency with the number
of transmit antennas and the robustness of OTFS modulation in high-Doppler
scenarios.
\begin{figure}
\hspace{4mm}
\includegraphics[width=9 cm, height= 6.0cm]{mimo_otfs_fig3.eps}
\vspace{-6mm}
\caption{BER performance of MIMO-OTFS for SISO, and $2\times 2$ and
$3\times 3$ MIMO systems.}
\label{BER1}
\vspace{-4mm}
\end{figure}
\begin{figure}
\hspace{4mm}
\includegraphics[width=9 cm, height= 6.0cm]{mimo_otfs_fig4.eps}
\vspace{-6mm}
\caption{BER performance comparison between MIMO-OTFS and MIMO-OFDM in
a $2\times 2$ MIMO system.}
\label{BER2}
\vspace{-3mm}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.32]{mimo_otfs_fig5.eps}
\vspace{-6mm}
\caption{Illustration of pilots and channel response in delay-Doppler
domain in a 2$\times$1 MIMO-OTFS system.}
\label{CSE_MIMO}
\vspace{-4mm}
\end{figure*}
Figure \ref{BER2} shows the BER performance comparison between MIMO-OTFS
and MIMO-OFDM in a $2\times 2$ MIMO system. The maximum Doppler spread in
the considered system is high (1880 Hz) which causes severe ICI in the TF
domain. Because of the severe ICI, the performance of MIMO-OFDM is found
to break down and floor at a BER value of about $2 \times 10^{-2}$. However,
MIMO-OTFS is able to achieve a BER of $10^{-5}$ at an SNR value of about
14 dB. This is because OTFS uses the delay-Doppler domain for signaling
instead of TF domain. Thus, the BER plots clearly illustrate the robust
performance of MIMO-OTFS and its superiority over MIMO-OFDM under rapidly
varying channel conditions.
\section{Channel Estimation for MIMO-OTFS}
\label{sec5}
In this section, we relax the assumption of perfect channel knowledge and
present a channel estimation scheme in the delay-Doppler domain. The scheme
uses impulses in the delay-Doppler domain as pilots. Figure \ref{CSE_MIMO}
gives an illustration of the pilots, channel response, and received signal
in a $2\times 1$ MIMO system with the delay-Doppler profile and system
parameters given in Tables \ref{delay_Dopp_prof} and \ref{SimPar}. Each
transmit and receive antenna pair sees a different channel having a finite
support in the delay-Doppler domain. The support is determined by the delay
and Doppler spread of the channel \cite{otfs1}. This fact can be used to
estimate the channel for all the transmit-receive antenna pairs
simultaneously using a single MIMO-OTFS frame as described below.
The OTFS input-output relation for the $p$th transmit and $q$th
receive antenna pair can be written using (\ref{otfsinpoutp}) as
\vspace{-2mm}
\begin{small}
\begin{equation}
{\hat{x}}_{q}[k,l]= \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} x_{p}[n,m] {1 \over MN} {h_{w_{qp}}} \left( {k-n \over NT}, {l-m \over M \Delta f} \right)+v_q[k,l].
\label{otfsinpoutpmimo}
\end{equation}
\end{small}
\vspace{-3mm}
\hspace{-4mm}
If we transmit
\begin{align}
x_{p}[n,m]&=1 \ \text{if}\ (n,m)=(n_{p},m_{p}) \nonumber\\
&=0 \ \forall \ (n,m) \neq (n_{p},m_{p}),
\end{align}
as pilot from the $p$th antenna, the received signal at the $q$th antenna
will be
\begin{equation}
\label{estimate}
{\hat{x}}_{q}[k,l]={1 \over MN} {h_{w_{qp}}} \left( {k-n_p \over NT}, {l-m_p \over M \Delta f} \right)+v_q[k,l].
\end{equation}
We can estimate
${1 \over MN} {h_{w_{qp}}} \left( {k \over NT}, {l \over M \Delta f} \right)$
from (\ref{estimate}), since the pilot location $(n_p, m_p)$ is known at
the receiver a priori. From this, we can get the equivalent channel matrix
$\hat{\mathbf{H}}_{qp}$ using the vectorized formulation of Sec. \ref{sec2c}.
From (\ref{estimate}) we also see that, due to the 2D-convolution input-output
relation, the impulse at $(n,m)=(n_{p},m_{p})$ is spread by the channel only
to the extent of the support of the channel in the delay-Doppler domain. Thus,
if we send the pilot impulses from the transmit antennas with sufficient
spacing in the delay-Doppler domain, they will be received without overlap.
Hence, we can estimate the channel responses corresponding to all the
transmit-receive antenna pairs simultaneously and get the estimate of the
equivalent MIMO-OTFS channel matrix $\hat{\mathbf{H}}_{{\tiny \mbox{MIMO}}}$
using a single MIMO-OTFS frame. This is illustrated in Fig. \ref{CSE_MIMO}
for a $2\times 1$ MIMO-OTFS system with frame size $(M,N)=(32,32)$ at an SNR
value of 4 dB. The first antenna transmits the pilot impulse at
$(n_1,m_1)=(0,0)$ and the second antenna transmits the pilot impulse at
$(n_2,m_2)=(16,16)$ in the delay-Doppler domain. We observe that the impulse
responses ${h_{w_{11}}}\left( {k-n_1 \over NT},{l-m_1\over M \Delta f} \right)$
and ${h_{w_{12}}} \left({k-n_2 \over NT}, {l-m_2 \over M \Delta f} \right)$
are non-overlapping at the receiver. Thus, they can be estimated
simultaneously using a single pilot MIMO-OTFS frame.
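The impulse-pilot scheme can be illustrated numerically. The sketch below is noiseless, absorbs the $1/MN$ factor into the channel, and assumes a rectangular delay-Doppler support of size $4\times 5$ (an illustrative choice); it places two pilots at $(0,0)$ and $(16,16)$ on a $32\times 32$ grid and recovers both channels exactly because the shifted supports do not overlap.

```python
import numpy as np

rng = np.random.default_rng(1)
N = M = 32
kmax, lmax = 4, 5                  # assumed Doppler/delay support (illustrative)

# Ground-truth delay-Doppler responses for the two transmit antennas,
# each with finite support in the top-left kmax x lmax corner
h = []
for _ in range(2):
    hw = np.zeros((N, M), dtype=complex)
    hw[:kmax, :lmax] = (rng.standard_normal((kmax, lmax))
                        + 1j * rng.standard_normal((kmax, lmax)))
    h.append(hw)

pilots = [(0, 0), (16, 16)]        # pilot locations (n_p, m_p)

# Received delay-Doppler grid at the single receive antenna: the channel
# shifts each pilot impulse to its pilot location (2D circular shift)
y = np.zeros((N, M), dtype=complex)
for hw, (n_p, m_p) in zip(h, pilots):
    y += np.roll(np.roll(hw, n_p, axis=0), m_p, axis=1)

# Estimation: read off the non-overlapping window around each pilot
h_est = [np.roll(np.roll(y, -n_p, axis=0), -m_p, axis=1)[:kmax, :lmax]
         for (n_p, m_p) in pilots]
```

With noise added, the same read-off yields an estimate whose error decreases with pilot SNR, consistent with the results reported below.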
\subsection{Performance results and discussions}
\label{sec5c}
In this subsection, we present the BER performance of the MIMO-OTFS system
using the estimated channel. We use the MIMO-OTFS channel estimation scheme
described above, for estimating the equivalent channel matrix
$\hat{\mathbf{H}}_{{\tiny \mbox{MIMO}}}$ and use the message passing
algorithm for detection. The delay-Doppler profile and the simulation
parameters are as given in Table \ref{delay_Dopp_prof} and Table
\ref{SimPar}, respectively.
In Fig. \ref{est_err_p_snr}, we plot the Frobenius norm of the difference
between the equivalent channel matrix (${\mathbf{H}}_{{\tiny \mbox{MIMO}}}$)
and the estimated equivalent channel matrix
($\hat{\mathbf{H}}_{{\tiny \mbox{MIMO}}}$) (a measure of estimation error)
as a function of pilot SNR for a $2\times 2$ MIMO-OTFS system with system
parameters as in Tables \ref{delay_Dopp_prof} and \ref{SimPar}. We observe
that, as expected, the Frobenius norm of the difference matrix decreases
with pilot SNR. Figure \ref{BER3} shows the corresponding BER performance
using the proposed channel estimation scheme for the $2\times 2$ MIMO-OTFS
system. It is observed that the BER performance achieved with the estimated
channel is quite close to the performance with perfect channel knowledge.
For example, a BER of $2\times 10^{-5}$ is achieved at SNR values of about
12.5 dB and 13 dB with perfect channel knowledge and estimated channel
knowledge, respectively. At the considered maximum Doppler frequency of
1880 Hz, channel estimation in the time-frequency domain leads to inaccurate
estimation because of the rapid variations of the channel in time. On the
other hand, the sparse channel representation in the delay-Doppler domain
is time-invariant over a larger observation time. This, along with the OTFS
channel-symbol coupling (2D periodic convolution) in the delay-Doppler
domain, enables the proposed channel estimation for MIMO-OTFS to be simple
and efficient.
\begin{figure}
\hspace{4mm}
\includegraphics[width=9cm, height= 6.0cm]{mimo_otfs_fig7_u.eps}
\vspace{-7mm}
\caption{Frobenius norm of the difference between the equivalent channel
matrix (${\mathbf{H}}_{{\tiny \mbox{MIMO}}}$) and the estimated equivalent
channel matrix ($\hat{\mathbf{H}}_{{\tiny \mbox{MIMO}}}$) as a function of
pilot SNR in a 2$\times$2 MIMO-OTFS system.}
\label{est_err_p_snr}
\vspace{-2mm}
\end{figure}
\begin{figure}
\hspace{4mm}
\includegraphics[width=9cm, height= 6.0cm]{mimo_otfs_fig6.eps}
\vspace{-5mm}
\caption{BER performance of MIMO-OTFS system using the estimated channel
in a 2$\times$2 MIMO-OTFS system.}
\label{BER3}
\vspace{-4mm}
\end{figure}
\section{Conclusions}
\label{sec6}
We investigated signal detection and channel estimation aspects of MIMO-OTFS
under high-Doppler channel conditions. We developed a vectorized formulation
of the input-output relationship for MIMO-OTFS which enables MIMO-OTFS signal
detection. We presented a low complexity iterative algorithm for MIMO-OTFS
detection based on message passing. The algorithm was shown to achieve very
good BER performance even at high Doppler frequencies (e.g., 1880 Hz) in a
$2\times 2$ MIMO system where MIMO-OFDM was shown to floor in its BER
performance. We also presented a channel estimation scheme in the
delay-Doppler domain, where delay-Doppler impulses are used as pilots. The
proposed channel estimation scheme was shown to be efficient and the BER
degradation was small as compared to the performance with perfect channel
knowledge. The sparse nature of the channel in the delay-Doppler domain
which is time-invariant over a larger observation time enabled the proposed
estimation scheme to be simple and efficient.
\section{Introduction}
Combinatorial game theory is not only an interesting theory on its own, it links to many other fields of mathematics. For example, Conway and Sloane showed in~\cite{conway1986lexicographic} that the ``losing positions'' of some combinatorial games give rise to linear error-detecting and error-correcting codes. Using this view of these codes as combinatorial games, Fraenkel and Rahat showed in~\cite{fraenkel2002complexity} that the codes can be computed in polynomial time and memory.
In this paper we demonstrate a similar application of combinatorial game theory to coding: we show that the optimal play (when both players apply their only non-losing strategies) of a certain combinatorial game constitutes the well known prefer-max De Bruijn sequence (defined below). Then, parallel to the work of Fraenkel and Rahat in~\cite{fraenkel2002complexity}, we also show how non-losing strategies can be computed in linear time and memory, yielding an efficiently computable shift-rule (mapping a state of a shift register to its follower) for the sequence.
We use lower-case Greek letters to denote symbols in the alphabet $[k]=\{0,\cdots, k-1\}$ and lower-case Latin letters to denote words in $[k]^*$, except for $n$ and $k$ which represent natural numbers and indexes.
\section{The prefer-max De Bruijn sequence and the corresponding Hamiltonian cycle in the De Bruijn graph}
A De Bruijn sequence of order $n$ on the alphabet $[k]=\{0,\dots,k-1\}$ is a cyclic sequence of length $k^n$ such that every possible string of length $n$ (member of $[k]^n$) appears exactly once as a substring~\cite{bruijn1946combinatorial}.
The directed De Bruijn graph of order $n$ over the alphabet $[k]$ (also described in ~\cite{bruijn1946combinatorial}) is the graph whose vertexes are the strings of length $n$ over the alphabet $[k]$ (i.e. the set $[k]^n$) and whose edges are such that each vertex $v=x \sigma$ is connected with a directed edge to all the vertexes in $\{ \tau x \colon \tau \in [k] \}$.
There is a one-to-one correspondence between De Bruijn sequences and Hamiltonian cycles in the De Bruijn graph of the same order and alphabet, described in~\cite{bruijn1946combinatorial}, as follows:
\begin{enumerate}
\item If, for each $i$, $w_i=x_i \sigma_i$, and $(w_1, w_2, \dots)$ is a Hamiltonian cycle, then $(\sigma_1, \sigma_2,\dots)$ is a De Bruijn sequence.
\item A Hamiltonian cycle can be constructed from a De Bruijn sequence $(\sigma_1,\dots,\sigma_{k^n})$ by visiting the vertex ${\sigma_1 \cdots \sigma_n}$, then ${\sigma_2 \cdots \sigma_{n+1}}$ and so on, until we return to where we started.
\end{enumerate}
In this paper we focus on a specific Hamiltonian cycle in the De Bruijn graph called the prefer-max cycle (and the corresponding De Bruijn sequence) defined as follows. For simplicity, we only list the vertexes on the cycle, as the edges are induced.
\begin{definition} \label{def:pref-max}
The $(k,n)$-prefer-max cycle, $(w_i)_{i=0}^{k^n-1}$, is such that $w_0=0^{n-1}(k-1)$ and if $w_i=\sigma x$ then $w_{i+1}= x\tau$ where $\tau$ is the \emph{maximal} letter such that $x \tau \notin \{w_0,\dots,w_{i}\}$.
\end{definition}
\begin{example}\label{example:pref-max}
For example, if we set $n=3$ and $k=3$, we have:
$002 \to 022 \to 222 \to 221 \to 212 \to 122 \to 220 \to 202 \to 021 \to 211 \to 112 \to 121 \to 210 \to 102 \to 020 \to 201 \to 012 \to 120 \to 200 \to 001 \to 011 \to 111 \to 110 \to 101 \to 010 \to 100 \to 000$.
\end{example}
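Definition~\ref{def:pref-max} translates directly into a greedy procedure. The following sketch is our own illustration (not part of the original construction): it generates the cycle and lets one check the De Bruijn property for small parameters.

```python
def prefer_max_cycle(n, k):
    """Greedily generate the (k,n)-prefer-max cycle of the definition above."""
    w = (0,) * (n - 1) + (k - 1,)          # w_0 = 0^{n-1}(k-1)
    seen = {w}
    cycle = [w]
    while True:
        x = w[1:]
        for tau in range(k - 1, -1, -1):   # try the maximal unused letter first
            if x + (tau,) not in seen:
                w = x + (tau,)
                seen.add(w)
                cycle.append(w)
                break
        else:                              # no fresh follower: cycle complete
            return cycle
```

For $n=k=3$ this reproduces the 27 vertices listed in Example~\ref{example:pref-max}, and the sequence of last letters is the corresponding De Bruijn sequence.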
Martin~\cite{Mar34} proved that the cycle given in Definition~\ref{def:pref-max} is Hamiltonian, i.e., that for the first $k^n-1$ steps there is always a $\tau$ such that $x \tau \notin \{w_0,\dots,w_{i}\}$, so $w_{i+1}$ is well defined, as demonstrated in Example~\ref{example:pref-max}.
The Hamiltonian cycle $(w_i)_{i=0}^{k^n-1}$, given in Definition~\ref{def:pref-max}, corresponds to the well-known prefer-max De Bruijn sequence (defined, e.g., in~\cite{Gol81}). One challenge raised in the literature (see, e.g.,~\cite{SAWADA2017524}) for such sequences is finding an efficiently computable function that maps each vertex on the cycle to its follower, called a \emph{shift-rule}. We will arrive at such a rule at the end of the paper.
\section{Two properties of the prefer-max cycle}
We first state and prove two properties of the prefer-max cycle. These properties will be used in the following sections to prove properties of the game that we are going to propose and to analyze it.
For the following propositions, let $\prec$ be the order induced by the sequence given in Definition~\ref{def:pref-max}, i.e., $w_i \prec w_j$ if $i<j$ where $(w_i)_{i=0}^{k^n-1}$ is as given in Definition~\ref{def:pref-max}. This is a linear order over $[k]^n$ because the sequence represents a Hamiltonian cycle of the De Bruijn graph.
\begin{proposition}\label{proposition:xy}
For any $x,y$ such that $|x|+|y|=n-1$, if $0 < \sigma_1<\sigma_2$ then $x\sigma_2y \prec x\sigma_1y$.
\end{proposition}
\begin{proof}
By induction on the length of $y$. If $y$ is empty, the statement is true from the definition of the prefer-max cycle (Definition~\ref{def:pref-max}). For the induction step, assume that the statement is true for all $y$ of some constant length $t<n$. We need to show that $x\sigma_2y\tau \prec x\sigma_1y\tau$ for any $\tau$ and any $x$ of length $n-t-2$. Since $x \sigma_1 y (k-1) \prec x \sigma_1 y (k-2)\prec \cdots \prec x \sigma_1 y 0$ and because the predecessor on the cycle of each of these vertices is a node in $\{\varphi x \sigma_1 y \colon 0 \leq \varphi < k\}$ we have that at least $k-\tau$ vertices in this set precede $x \sigma_1 y \tau$.
The term `predecessor' is well defined here because $\sigma_1 \neq 0$. It assures us that $x \sigma_1 y (k-1)$ is not the first vertex on the cycle. By the induction hypothesis, we have that $\varphi x \sigma_2 y \prec \varphi x \sigma_1 y$, for any $\varphi$. Therefore, there are at least $k-\tau$ vertices in $\{\varphi x \sigma_2 y \colon 0 \leq \varphi < k\}$ before $x \sigma_1 y \tau$. Since the follower of each of these vertices is in $\{x \sigma_2 y (k-1),x \sigma_2 y (k-2), \dots,x \sigma_2 y 0\}$ and because $x \sigma_2 y (k-1) \prec x \sigma_2 y (k-2)\prec \cdots \prec x \sigma_2 y 0$, we get that $x \sigma_2 y \tau$ must be before $x \sigma_1 y \tau$.
\end{proof}
\begin{example}
Consider again the sequence given in Example~\ref{example:pref-max}. If we take, for example, $x=0$ and $y=2$, we see $022 \prec 012$. If we take, as another example, $x=2$ and $y=2$, we see that $222 \prec 212$. More generally, we see that $x2y \prec x1y$ for any $x$ and $y$ whose combined length is two. Note that the proposition does not say where $x0y$ fits; it may come before $x2y$, after $x1y$, or between the two.
\end{example}
\begin{proposition} \label{proposition:followers}
Consider $(w_i)_{i=0}^{k^n-1}$ from Definition~\ref{def:pref-max}. For each $0 \leq i <k^n-1$, exactly one of the following is true for some $x\in [k]^{n-1}$ and $\sigma \in [k]$:
\begin{itemize}
\item $( w_{i}, w_{i+1} ) = ( \sigma x, x \sigma )$ and $\sigma x \prec 0 x$;
\item $(w_{i}, w_{i+1}) = (0 x, x \sigma) $ and $(\sigma+1) x \prec 0x \prec \sigma x$;
\item $(w_{i}, w_{i+1}) = ((\sigma+1) x, x \sigma) $ and $0 x \prec (\sigma+1) x$.
\end{itemize}
\end{proposition}
\begin{proof}
Let $x$ be a word of length $n-1$ over $[k]$. From Proposition~\ref{proposition:xy}, the subsequence $\{(k-1)x,\dots, 1x \}$ appears on the cycle in decreasing order. By Definition~\ref{def:pref-max}, the subsequence $\{x(k-1), \dots, x0\}$ also appears in decreasing order. Since the predecessor of each node in the second list is either $0x$ or a node from the first list, we get that if $\tau x \prec 0x$ then the follower of $\tau x$ must be $x \tau$, and that if $0 x \prec \tau x$ then the follower of $\tau x$ must be $x (\tau-1)$. See an illustration of the proof in Figure~\ref{fig:pred}.
\end{proof}
\begin{figure}
\begin{center}
\scalebox{1}{
\begin{tikzpicture}[every node/.style={rectangle,fill=lightgray,join,text width=2.3cm,text centered},every join/.style={->,thin},
node distance=0.3cm and 1cm]
{ [start chain=going below]
\node [on chain] (p1-first) {$(k-1)x$};
\node [on chain] {$x(k-1)$};
\node [on chain,fill=none] {$\vdots$};
\node [on chain] {$(k-d)x$};
\node [on chain] (p1-last) {$x(k-d)$};
\node [on chain,fill=none] {$\vdots$};
\node [on chain] (m-first) {$0x$};
\node [on chain] (m-last) {$x(k-d-1)$};
\node [on chain,fill=none] {$\vdots$};
\node [on chain] (p2-first) {$(k-d-1)x$};
\node [on chain] {$x(k-d-2)$};
\node [on chain,fill=none] {$\vdots$};
\node [on chain] {$1x$};
\node [on chain] (p2-last) {$x0$};
}
\draw[decorate,decoration={brace,raise=6pt,amplitude=10pt}, thick]
(p1-first.north east)--(p1-last.south east) node[fill=none,midway, right=0.7cm, text width=6cm] {Initially, the two lists go together $d$ times for some $0 \leq d \leq k$.};
\draw[decorate,decoration={brace,raise=6pt,amplitude=10pt}, thick]
(m-first.north east)--(m-last.south east) node[fill=none,midway, right=0.7cm,text width=6cm] {When $0x$ enters, the second list advances and the first does not.};
\draw[decorate,decoration={brace,raise=6pt,amplitude=10pt}, thick]
(p2-first.north east)--(p2-last.south east) node[fill=none,midway, right=0.7cm,text width=6cm] {After that, the second list is one step ahead of the first list for the remaining $k-d-1$ times.};
\end{tikzpicture}
}
\end{center}
\caption{An illustration of the proof of Proposition~\ref{proposition:followers}.}
\label{fig:pred}
\end{figure}
\begin{example}
Consider the sequence given in Example~\ref{example:pref-max} again. If we take, for example, $x=22$, we see that the predecessor of $222$ is $022$, the predecessor of $221$ is $222$, and the predecessor of $220$ is $122$. In this case, the leftmost digit of each predecessor is either zero or one more than the rightmost digit of its successor. Here, $0x$ comes before the beginning of the first list considered in the proof, corresponding to having $d=0$ in Figure~\ref{fig:pred}.
\end{example}
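Both propositions can be checked mechanically on small instances. The sketch below (our own verification aid) regenerates the $(3,3)$-prefer-max cycle and verifies the ordering claim of Proposition~\ref{proposition:xy} as well as the structural part of Proposition~\ref{proposition:followers}: every predecessor of $x\sigma$ is $\sigma x$, $0x$, or $(\sigma+1)x$. The generator is repeated here so that the sketch is self-contained.

```python
from itertools import product

def prefer_max_cycle(n, k):
    """Greedy construction of the (k,n)-prefer-max cycle."""
    w = (0,) * (n - 1) + (k - 1,)
    seen, cycle = {w}, [w]
    while True:
        x = w[1:]
        for tau in range(k - 1, -1, -1):
            if x + (tau,) not in seen:
                w = x + (tau,)
                seen.add(w)
                cycle.append(w)
                break
        else:
            return cycle

n, k = 3, 3
cycle = prefer_max_cycle(n, k)
idx = {w: i for i, w in enumerate(cycle)}  # w_i precedes w_j iff smaller index

# Proposition 1: x sigma2 y precedes x sigma1 y whenever 0 < sigma1 < sigma2
for split in range(n):
    for x in product(range(k), repeat=split):
        for y in product(range(k), repeat=n - 1 - split):
            for s1 in range(1, k):
                for s2 in range(s1 + 1, k):
                    assert idx[x + (s2,) + y] < idx[x + (s1,) + y]

# Proposition 2 (structural part): each predecessor of x sigma is one of
# sigma x, 0 x, or (sigma+1) x
for w, nxt in zip(cycle, cycle[1:]):
    x, sigma = w[1:], nxt[-1]
    assert w in ((sigma,) + x, (0,) + x, (sigma + 1,) + x)
```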
\section{A combinatorial game}\label{sec:game}
Our next step is to propose a combinatorial game that will, after some analysis, be used to derive an efficient computation of the predecessor in the prefer-max sequence. In light of Proposition~\ref{proposition:followers} we ask ourselves: Given $x\sigma$, when do we need to increase, keep as is, or take a zero instead of $\sigma$ when computing the predecessor of $x \sigma$? We propose an indirect answer: a combinatorial game whose rules correspond to these options. One player, called Bob, can force an increase step while the other player, called Alice, can force a zero step when Bob does not want to increase. Alice's goal is to continue as far as possible without repeating a state and Bob's goal is to get to a repeated state as fast as he can. As we will prove later, the game is constructed such that the best option for both players is to step along the sequence, i.e., the optimal strategies for both players are equivalent to computing the predecessors of $(w_i)_{i=0}^{k^n-1}$ from Definition~\ref{def:pref-max}.
The following definition states the rules and the objectives of the game:
\begin{definition}\label{def:shift-game}
The $(n,k)$-shift-game is a two-player combinatorial game played by Alice and Bob as follows: A play of the game consists of a sequence $s_0,\dots,s_m$ such that $s_0=0^n$ and if $s_t=x\sigma$ then:
$$s_{t+1}=\begin{cases}
(\sigma+1)x &
\text{if $B(s_t)=1$}, \\
0x&
\text{if $B(s_t)=0$ and $A(s_t)=1$}, \\
\sigma x &
\text{otherwise;}
\end{cases}$$
where $A,B \colon [k]^n \to \{0,1\}$ are functions, called \emph{strategies} for Alice and for Bob, respectively, such that $B(x(k-1))=A(x0)=0$ for all $x \in [k]^{n-1}$. The game ends at state $s_m$ if $m>0$ and $s_m \in \{s_0,\dots,s_{m-1}\}$. Alice wins if $m< k^n$ and $s_m = 0^n$. Bob wins if $s_m \neq 0^n$. If $m=k^n$ and $s_m=0^n$, it is a tie.
\end{definition}
Note that this definition introduces the notion of strategies for the players. I.e., a function $S \colon [k]^n \to \{0,1\}$ is a strategy for Bob if $S(x(k-1))=0$ for all $x \in [k]^{n-1}$ and it is a strategy for Alice if $S(x0)=0$ for all $x \in [k]^{n-1}$. For future use, we note the following observation:
\begin{observation}\label{obs:strategyC}
If $S$ is a strategy for Alice or for Bob and $S'(w) \leq S(w)$ for all $w$, then $S'$ is also a strategy for the same player.
\end{observation}
\begin{example}\label{example:game}
A play $(s_0,\dots,s_{10})$ of the $(2,4)$-shift-game can be such that $s_0=00, s_1=10, s_2=01, s_3=20, s_4=12, s_5=21, s_6=22, s_7=32, s_8=23, s_9=02, s_{10}=00$. In this example, Bob plays on moves $\{00, 01, 20, 21, 22\}$, i.e., $B(00)=B(01)=B(20)=B(21)=B(22)=1$, Alice plays on $\{23,02\}$, i.e., $A(23)=A(02)=1$, and neither plays on $\{10,12,32\}$. In this example, Alice won because the play reached $00$ in $m=10$ moves, which is less than $4^2=16$ moves.
\end{example}
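The rules of Definition~\ref{def:shift-game} translate directly into a simulator. The following sketch replays the game until a state repeats, with illustrative set-membership strategies for the $(2,4)$-shift-game (our own encoding; states are digit tuples read left to right).

```python
def play_shift_game(n, k, A, B, limit=10 ** 6):
    """Simulate the (n,k)-shift-game until a state repeats (sketch)."""
    s = (0,) * n                       # s_0 = 0^n
    play, seen = [s], {s}
    for _ in range(limit):
        x, sigma = s[:-1], s[-1]
        if B(s):                       # Bob forces an increase-and-shift
            s = (sigma + 1,) + x
        elif A(s):                     # Alice forces a reset-and-shift
            s = (0,) + x
        else:                          # otherwise: a plain shift
            s = (sigma,) + x
        play.append(s)
        if s in seen:                  # repeated state: the game ends
            return play
        seen.add(s)
    raise RuntimeError("no repetition within limit")

# Illustrative strategies (note that B(x(k-1)) = A(x0) = 0 hold)
bob = {(0, 0), (0, 1), (2, 0), (2, 1), (2, 2)}
alice = {(2, 3), (0, 2)}
play = play_shift_game(2, 4, A=lambda s: s in alice, B=lambda s: s in bob)
```

With these strategies the play returns to $0^n$ after ten moves, so Alice wins.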
This is a deterministic game with perfect information but it is not impartial (i.e., it is a partisan game), as there are differences, beyond who goes first, between Alice's and Bob's goals and moves. See Figure~\ref{fig:game} for an illustration of how a round of the game is executed.
\begin{figure}
\begin{center}
\begin{tikzpicture}[
node distance=2.5cm,
minimum height=0.8cm,
startstop/.style={rectangle, rounded corners,text centered, draw=black, fill=red!30},
process/.style={rectangle, text centered, draw=black, fill=orange!30},
io/.style={trapezium, trapezium left angle=70, trapezium right angle=110, minimum width=3cm, minimum height=1cm, text centered, draw=black, fill=blue!30},
decision/.style={diamond, text centered, draw=black, fill=green!30},
]
\node (node0) [startstop] {$s_t=x\sigma$};
\node (node1) [decision, below=0.5cm of node0] {$B(s_t)$};
\node (node2) [decision, below left of=node1] {$A(s_t)$};
\node (node3) [process, below left of=node2] { $s_{t+1} \leftarrow \sigma x$};
\node (node4) [process, below right of=node2] { $s_{t+1} \leftarrow 0x$};
\node (node5) [process, below right = 1 cm of node1] { $s_{t+1} \leftarrow (\sigma+1)x$};
\draw [arrows=-Stealth] (node0) -- (node1);
\draw [arrows=-Stealth] (node1) -| node[anchor=south] {1} (node5);
\draw [arrows=-Stealth] (node1) -| node[anchor=south] {0} (node2);
\draw [arrows=-Stealth] (node2) -| node[anchor=south] {0} (node3);
\draw [arrows=-Stealth] (node2) -| node[anchor=south] {1} (node4);
\end{tikzpicture}
\end{center}
\caption{A flowchart depicting a round of the game. Bob has a priority: if $B(s_t)=1$ then $s_{t+1}$ is $(\sigma+1)x$ (an increase and shift), no matter how Alice plays. The value $A(s_t)$ is only considered when $B(s_t)=0$. In this case it determines whether $s_{t+1}$ is $0 x$ (a reset and shift) or $s_{t+1}$ is $\sigma x$ (just a shift), corresponding to $A(s_t)=1$ and to $A(s_t)=0$, respectively.}
\label{fig:game}
\end{figure}
The $(n,k)$-shift-game proposed in Definition~\ref{def:shift-game} is a generalization of a game proposed in~\cite{weiss2007combinatorial} for solving a problem in control-theory. There, the game was only for the binary case ($k=2$) in which case the prefer-max sequence is also called `prefer-one'. In terms of Definition~\ref{def:shift-game}, the binary game was
$$s_{t+1}=\begin{cases}
1x & \text{if $\sigma=0$ and $B(s_t)=1$}, \\
0x & \text{if $\sigma=1$ and $A(s_t)=1$}, \\
\sigma x & \text{otherwise.}
\end{cases}$$
It is easy to verify that it is equivalent to the $(n,2)$-shift-game. Indeed, the game proposed in~\cite{weiss2007combinatorial} and the strategies proposed there to solve it can generate a known shift-rule for the prefer-one sequence, as shown in~\cite{sawada2016surprisingly} and in~\cite{fredricksen1972generation}.
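For concreteness, one round of the $(n,k)$-shift-game, as depicted in the flowchart above, can be sketched in Python. This is our own illustrative encoding, not part of the paper: a state is a string of base-$k$ digits (so it assumes $k \le 10$), and strategies are Boolean predicates on states.

```python
def round_step(s, A, B, k):
    """One round of the (n,k)-shift-game on the state s = x*sigma
    (sigma is the last digit).  Bob moves with priority: if B(s)=1 the
    next state is (sigma+1)x; otherwise Alice's bit decides between
    0x (reset and shift) and sigma*x (just a shift)."""
    x, sigma = s[:-1], int(s[-1])
    if B(s):
        # a strategy for Bob must return 0 on states ending with k-1
        assert sigma < k - 1, "B must be 0 on states ending with k-1"
        return str(sigma + 1) + x
    if A(s):
        # a strategy for Alice must return 0 on states ending with 0
        assert sigma != 0, "A must be 0 on states ending with 0"
        return '0' + x
    return str(sigma) + x
```

For instance, with both players passing, the state simply rotates: `round_step('120', lambda s: False, lambda s: False, 3)` yields `'012'`.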
Before we move to solving the $(n,k)$-shift-game, we state and prove a complexity property for it. There have been several criteria in the literature for classifying complexities of De Bruijn sequences, e.g.,~\citep{chan1982complexities}. One such criterion, for binary sequences, is the number of ones in the truth table of the feedback function that generates the sequence. For De Bruijn sequences generated by combinatorial games, this can be translated, e.g., to counting the number of moves that Bob plays, i.e., the number of ones in the truth table of Bob's strategy. A result that goes along these lines is given in the next proposition:
\begin{proposition}
For every play, $(s_t)_{t=0}^m$, of the $(n,k)$-shift game, let $b=|\{s_t \colon B(s_t)=1\}|$ be the number of times Bob plays and $a=|\{s_t \colon B(s_t)=0 \wedge A(s_t)=1\}|$ be the number of times Alice plays. Then, $b \leq (n+a)(k-1)$.
\end{proposition}
\begin{proof}
Let $E(\sigma_0,\dots,\sigma_{n-1})=\Sigma_{i=0}^{n-1} \sigma_i$. From Definition~\ref{def:shift-game}, we get that $E(s_0)=0$ and that if $s_t=x\sigma$ then
$$E(s_{t+1})=
\begin{cases}
E(s_t)+1 & \text{if $B(s_t)=1$}, \\
E(s_t)-\sigma & \text{if $B(s_t)=0$ and $A(s_t)=1$}, \\
E(s_t) & \text{otherwise}.
\end{cases}
$$
Since $\sigma \leq k-1$, then $E(s_m) \geq b - a (k-1)$, and because $s_{i} \in [k]^n$ for all $i$, then $E(s_m) \leq (k-1)n$. So $(k-1)n \geq b - a (k-1)$, i.e., $b \leq (n+a)(k-1)$, as required.
\end{proof}
\begin{example}
If we look back at Example~\ref{example:game}, we can count and see that $b=5$ and $a=2$ and verify that $5 \leq (5 \cdot 2) =10$.
\end{example}
Next, we turn to establishing a connection between the prefer-max and the shift-game. We will first define a pair of strategies $A^*$ and $B^*$ such that if both Alice and Bob use the respective strategy, the play of the game follows the prefer-max sequence (in reverse). Then, we will show that $A^*$ and $B^*$ are the unique strategies for each of the players, respectively, that guarantee not losing the game. This will give us that the efficient implementations of non-losing strategies for both of the players, that we will provide in Section~\ref{sec:efficient strategies}, can serve as an efficient shift-rule for the sequence.
The strategies $A^*$ and $B^*$ use the prefer-max sequence as an internal `oracle', as specified in the following definition:
\begin{definition}\label{def:star}
Considering $(w_i)_{i=0}^{k^n-1}$ from Definition~\ref{def:pref-max}, let $A^{*},B^{*} \colon [k]^n \to \{0,1\}$ be the strategies for Alice and Bob, respectively, defined by
$$
B^{*}(w_i) =
\begin{cases}
1 & \text{if $w_i=x\sigma$ and $w_{i-1}=(\sigma+1) x$},\\
0 & \text{otherwise;}
\end{cases}
$$
and
$$
A^{*}(w_i) =
\begin{cases}
1 & \text{if $(w_i=x\sigma \wedge w_{i-1}=0x) \vee B^{*}(w_i) = 1$},\\
0 & \text{otherwise}.
\end{cases}
$$
\end{definition}
From Definition~\ref{def:pref-max}, we can see that $A^*$ is a strategy, i.e., $A^*(x0)=0$ for all $x \in [k]^{n-1}$, and the next proposition will show that $B^*$ is also a strategy, i.e., $B^*(x (k-1))=0$ for all $x \in [k]^{n-1}$. The next proposition also shows that the computation of $B^*$ can be reduced to a computation of $A^*$ over a slightly alternated input. We will use this fact to simplify the analysis and to allow a succinct implementation of the strategies.
\begin{proposition}\label{proposition:pmaxeq}
For every $x\in [k]^{n-1}$, $B^*(x(k-1))=0$ and, for every $\sigma<k-1$, $B^{*}(x\sigma)=A^{*}(x(\sigma+1))$.
\end{proposition}
\begin{proof}
If $w_i=x(k-1)$ then $w_{i+1}$ cannot be $kx$ because this word is not in $[k]^n$, thus $B^*(w_i)$ must be zero. From Proposition~\ref{proposition:xy} and Proposition~\ref{proposition:followers}, we can see that if we define $zs \colon [k]^{n-1} \to [k]$ by $zs(x)=|\{i \colon ix \prec 0x \}|+1$, then for every $x \in [k]^{n-1}$, and $w_i=x \sigma$, if $\sigma<zs(x)$ then $w_{i+1}=(\sigma+1)x$, if $\sigma=zs(x)$ then $w_{i+1}=0x$, and if $\sigma>zs(x)$ then $w_{i+1}=\sigma x$. Therefore, $(B^{*}(x\sigma)=1) \Leftrightarrow (\sigma < zs(x))$ and $(A^{*}(x\sigma)=1) \Leftrightarrow (\sigma \leq zs(x))$. This means that $B^{*}(x\sigma)=1$ if and only if $A^{*}(x(\sigma+1))=1$.
\end{proof}
\begin{example}
For example, for $n=3$ and $k=2$, if both players play according to the strategies above, then the resulting play is $000 \to 100 \to 010 \to 101 \to 110 \to 111 \to 011 \to 001 \to 000$ which yields a tie.
\end{example}
We can see that in the example above, the play is exactly the prefer-max cycle in reverse. The following proposition establishes that this is true for all $n$ and $k$:
\begin{proposition}\label{proposition:sweq}
Let $(s_t)_{t=0}^{k^n-1}$ be the play of the $(n,k)$-shift-game when Alice uses the $A^{*}$ strategy and Bob uses the $B^{*}$ strategy. Then,
$(s_t)_{t=0}^{k^n-1} = (w_{k^n-t})_{t=1}^{k^n}$ where $(w_i)_{i=0}^{k^n-1}$ is the prefer-max sequence given in Definition~\ref{def:pref-max}.
\end{proposition}
\begin{proof}
From Proposition~\ref{proposition:followers} we know that if $w_{i+1}=x\sigma$ then either $w_i=\sigma x$, $w_{i}=0 x$ or $w_{i}=(\sigma+1) x$. We also have that $(w_i)_{i=0}^{k^n-1}$ contains all of the elements in $[k]^n$, because of its relation to the prefer-max De Bruijn sequence. Thus, by Definition~\ref{def:shift-game}, every step in the play of the game must follow a (reversed) step in $(w_i)_{i=0}^{k^n-1}$ and the proof follows by induction.
\end{proof}
From Proposition~\ref{proposition:sweq} we have that if Alice plays according to $A^*$ and if Bob plays according to $B^*$ the game ends with a tie. In the next two propositions, we show that each of them wins against any other strategy. Then, in the two propositions that follow, we show that these strategies are the unique strategies with these properties.
\begin{proposition}\label{proposition:Bstar}
If Bob applies the strategy $B^{*}$ he wins against any strategy that Alice may apply which is not $A^*$ and achieves a tie against $A^*$.
\end{proposition}
\begin{proof}
First, from Proposition~\ref{proposition:sweq} we know that if the players play by the strategies $A^*$ and $B^*$, respectively, we get a tie.
Now, assume towards contradiction that there exists a strategy $A$ for Alice which wins against $B^*$.
Then, there must be a state $s_t=x\sigma$ where $x \neq 0^{n-1}$ such that $A(s_t) \neq A^*(s_t)$, $B^*(s_t) = 0$ and $\sigma \neq 0$, because otherwise the play is equal to the play with $A^*$ and $B^*$ which is, as said, a tie.
Let $s_t$ be the first such state. By Proposition~\ref{proposition:followers} and Definition~\ref{def:star} of $A^*$, we have that if $A^*(s_t)=1$ then $\sigma x \prec x\sigma $, and if $A^*(s_t)=0$ then $0x \prec x\sigma$. If $A^*(s_t)=1$ then $A(s_t)=0$ and $s_{t+1}=\sigma x$. If $A^*(s_t)=0$ then $A(s_t)=1$ and $s_{t+1}=0x$. In both cases $s_{t+1} \in \{s_0,\dots,s_t\}$ and $s_{t+1} \neq s_0$, which contradicts the assumption that $A$ is a winning strategy.
\end{proof}
\begin{proposition}\label{proposition:Astar}
If Alice applies the strategy $A^{*}$ she wins against any strategy that Bob may apply which is not $B^*$ and achieves a tie against $B^*$.
\end{proposition}
\begin{proof}
Assume towards contradiction that there exists a strategy $B$ for Bob which wins against $A^*$ and let $s_0,\dots,s_m$ be the play of the game with these two strategies.
Consider the function $g \colon [k]^n \to \mathbb{N}$ defined by $g(w_i)=i$ for all $i$, where $(w_i)_{i=0}^{k^n-1}$ is the sequence specified in~Definition \ref{def:pref-max}.
By the definition of $A^*$ and of $B^*$ (Definition~\ref{def:star}), if $B(s_t)=B^*(s_t)$ we have that $g(s_{t+1})=g(s_t)-1$. Otherwise, if $B(s_t)=0$ and $B^*(s_t)=1$ we have that $A^*(s_t)=1$, thus $w_{i-1}=(\sigma +1)x$ and $s_{t+1}=0x$, so, by Proposition~\ref{proposition:followers}, we have that $s_{t+1} \prec s_{t}$ which gives us that $g(s_{t+1}) < g(s_{t})$. If $B(s_t)=1$ and $B^*(s_t)=0$ we have to check two cases:
\begin{enumerate}
\item If $A^*(s_t)=1$ then $w_{i-1}=0x$, $\sigma \neq 0$ and $s_{t+1}=(\sigma+1)x$ so, by Proposition~\ref{proposition:followers}, we have that $s_{t+1} \prec s_{t}$ which gives us that $g(s_{t+1}) < g(s_{t})$.
\item If $A^*(s_t)=0$ then $w_{i-1}=\sigma x$ and $s_{t+1}=(\sigma+1)x$ so, by Proposition~\ref{proposition:followers}, we have that $s_{t+1} \prec s_{t}$ which gives us that $g(s_{t+1}) < g(s_{t})$.
\end{enumerate}
Thus, by this enumeration of all three cases, we see that $g$ is strictly decreasing along the play $s_0,\dots,s_m$. Since the minimum of $g$ is attained at $s=0^n$, we get that $s_m=0^n$, in contradiction to the assumption that Bob wins.
\end{proof}
The following two propositions establish the uniqueness of both $B^*$ and $A^*$. Since it is easy to generate the prefer-max sequence from an implementation of these strategies, these results establish a polynomial reduction from computing an efficient shift-rule for the prefer-max sequence to efficient computation of non-losing strategies for both players, a problem that we will solve in the next section.
\begin{proposition}\label{proposition:Bunique}
$B^{*}$ is the only non-losing strategy for Bob.
\end{proposition}
\begin{proof}
Assume towards contradiction that there exists a non-losing strategy $B$ for Bob
such that $B \neq B^*$. Let $s_0,\dots,s_m$ be the play of the game where Bob
applies $B$ and Alice applies $A^*$. Let $g$ be as in the proof of Proposition~\ref{proposition:Astar}. For $t>0$, assuming that $s_t = x\sigma$, we have two options:
\begin{enumerate}
\item If $B(s_t)=0$ and $B^*(s_t)=1$ we have, by the definition of $A^*$, that $A^*(s_t)=1$. By the definition of $B^*$ then $w_{i-1}=(\sigma +1)x$.
By the definition of the game, we have that $s_{t+1}=0x$. By Proposition~\ref{proposition:followers}, $s_{t+1} \prec s_{t}$, i.e., $g(s_{t+1}) < g(s_{t})$.
\item If $B(s_t)=1$ and
$B^*(s_t)=0$ we have two cases:
\begin{enumerate}
\item If $A^*(s_t)=1$, by the definition of $A^*$, $w_{i-1}=0x$ and $\sigma \neq 0$. By the definition of the game, $s_{t+1}=(\sigma+1)x$. By Proposition~\ref{proposition:followers},
$s_{t+1} \prec s_{t}$, i.e., $g(s_{t+1}) < g(s_{t})$.
\item If $A^*(s_t)=0$, by the definition of $A^*$, $w_{i-1}=\sigma x$. By the definition of the game, $s_{t+1}=(\sigma+1)x$. By Proposition~\ref{proposition:followers},
$s_{t+1} \prec s_{t}$, i.e., $g(s_{t+1}) < g(s_{t})$.
\end{enumerate}
\end{enumerate}
We get that $g$ is strictly decreasing along the play $s_0,\dots,s_m$. In particular, $g(s_m) \notin \{g(s_1),\dots,g(s_{m-1})\}$, which means that $s_m \notin \{s_1,\dots,s_{m-1}\}$. Since $B$ is assumed to be a non-losing strategy for Bob, by the definition of the ending condition of the game we must have $s_m=s_0 = 0^n$.
Since $B \neq B^*$, there must be a $0 \leq t < m$ such that $g(s_{t+1}) < g(s_t)-1$. Therefore, the length of the game, $m$, must be smaller than $k^n$. By the definition of the game, Bob loses, in contradiction to the assumption that $B$ is a non-losing strategy for Bob.
\end{proof}
\begin{proposition}\label{proposition:Aunique}
$A^{*}$ is the only non-losing strategy for Alice.
\end{proposition}
\begin{proof}
Assume towards contradiction that there exists a non-losing strategy $A$ for Alice such that $A \neq A^*$. Let $s_0,\dots,s_m$ be the play of the game when Alice and Bob use the strategies $A$ and $B^*$, respectively. Let $t$ be the first index in this play such that $A(s_t) \neq A^*(s_t)$. Let $x$ and $\sigma$ be such that $s_t=x\sigma$. Note that $\sigma \neq 0$, as either $A(s_t)=1$ or $A^*(s_t)=1$ but not both.
By Proposition~\ref{proposition:sweq}, since until $s_t$ Alice and Bob applied the strategies $A^*$ and $B^*$, respectively, we have that $(s_i)_{i=0}^{t} = (w_{k^n-i})_{i=1}^{t+1}$ , i.e., the game follows the reversed prefer-max sequence until $s_t$.
We first show that $B^*(s_t)$ must be zero. Otherwise, assume towards contradiction that $B^*(s_t)=1$. By the definition of $B^*$, $w_{k^n-t-2} = (\sigma+1)x$. By Proposition~\ref{proposition:xy}, since $\sigma \neq 0$, then $(\sigma+1)x \prec \sigma x$, i.e., $\sigma x \in \{w_{k^n-i}\}_{i=1}^{t+1} = \{s_i\}_{i=0}^{t}$. Consider, now, the strategy $B'$ such that $B'(s)=B^{*}(s)$ for all $s \neq s_t$ and $B'(s_t)=0$. By Observation~\ref{obs:strategyC} $B'$ is a strategy.
Let $A'$ be such that $A'=A$ if $A(s_t)=0$ and $A'=A^*$ if $A^*(s_t)=0$ (exactly one of them must be true because $A(s_t) \neq A^*(s_t)$). By Observation~\ref{obs:strategyC} $A'$ is a strategy. When Bob applies $B'$ and Alice applies $A'$, we have that the play follows the reversed prefer-max sequence until it gets to $s_t$ and then, by the definition of the game, since $A'(s_t) = B'(s_t)=0$, it goes to $\sigma x$
which is a state that was visited before, i.e., Bob wins. This contradicts the assumption that $A$ is a non-losing strategy or the fact (Proposition~\ref{proposition:Astar}) that $A^*$ is a non-losing strategy. Thus $B^{*}(s_{t})=0$.
If $A^*(s_t)=1$ then, by the definition of $A^*$, $w_{k^n-t-2}=0x$ which, by Proposition~\ref{proposition:followers}, means that $x\sigma \prec \sigma x$ or $x\sigma =\sigma x$. But, since $A(s_t) \neq A^*(s_t)$, we have that $A(s_t)=0$ so, by the definition of the game, since $B^*(s_t)=0$, we have $s_{t+1} = \sigma x$ which is a state that we already visited, i.e., Bob wins. This is in contradiction to the assumption that $A$ is a non-losing strategy for Alice.
If $A^*(s_t)=0$ then, by the definition of $A^*$ and of $B^*$, $w_{k^n-t-2}=\sigma x$ which, by Proposition~\ref{proposition:followers}, means that $x\sigma \prec 0 x$ or $x\sigma = 0 x$. But, since $A(s_t) \neq A^*(s_t)$, we have that $A(s_t)=1$ so, by the definition of the game, since $B^*(s_t)=0$, we have $s_{t+1} = 0 x$ which is a state that we already visited, i.e., Bob wins. This is in contradiction to the assumption that $A$ is a non-losing strategy for Alice.
\end{proof}
\section{An efficient computation of the non-losing strategies for both players}
\label{sec:efficient strategies}
In this section we propose algorithms for efficient computation of the strategies for both players in the $(n,k)$-shift-game. As said before, this can also be translated to an efficiently computable shift rule for the prefer-max sequence. The main ingredient is the function $val\colon [k]^n \to \mathbb{N}$ given in the following definition, together with two helper functions, $head$ and $tail$, that we will later use in the algorithms and in their analysis:
\begin{definition}\label{def:val_und_freund}
For a state $s= \sigma_0\cdots\sigma_{n-1} \in [k]^n$ and $m \in [n]$, let:
\begin{description}
\item $val(s,m)=\Sigma_{i=1}^n \sigma_{(m-i) \bmod n} k^{i-1}$ be the value of $s$ read from position $m$.
\item $val(s)= \max \{ val(s,m) \colon 0 \leq m < n \}$ be the (maximal) value of $s$.
\item $head(s)=\min(\arg\max\{val(s,m)\colon 0 \leq m < n\})$ be the head of $s$.
\item $tail(s) = (\max \{i < head(s) \colon \sigma_{i \bmod n} \neq 0\}) \bmod n$ be the tail of $s \neq 0^n$.
\end{description}
\end{definition}
The function $val\colon [k]^n \times [n]\to \mathbb{N}$ given in Definition~\ref{def:val_und_freund} transforms a state $s$ to a number by reading its symbols as a number in base $k$ from the $m$th place to the left in a cyclic order. The function $head\colon [k]^n \to [n]$ gives the index $m$ of the place from which reading the value $val(s,m)$ is maximal. The function $val\colon [k]^n \to \mathbb{N}$ gives this maximal value, i.e., $val(s)=val(s,head(s))$. The function $tail \colon [k]^n \to [n]$ gives the index of the least significant non-zero digit of $s$ as $val(s)$ reads it.
\begin{example}\label{example:val_y_amigos}
For example, $val(120,0)=0\cdot 1+2\cdot 3 +1\cdot 3^2=15$, $val(120,1)=1\cdot1 +0 \cdot 3 +2 \cdot 3^2 =19$, and $val(120,2)=2\cdot 1 +1\cdot 3 +0\cdot 3^2 =5$, so $val(120)=\max\{15,19,5 \}=19$ and thus $head(120)=1$ and $tail(120)=0$.
\end{example}
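As an illustrative aid (our own sketch, not part of the paper), the functions of Definition~\ref{def:val_und_freund} can be written directly in Python. States are encoded as strings of base-$k$ digits, so this toy version assumes $k \le 10$; note that the scan in $tail$ wraps cyclically below position $0$, matching the $\bmod\ n$ in the definition.

```python
def val_at(s, m, k):
    """val(s, m): the value of s read from position m, i.e. the base-k
    number whose k**(i-1) digit is s[(m - i) mod n]."""
    n = len(s)
    return sum(int(s[(m - i) % n]) * k ** (i - 1) for i in range(1, n + 1))

def head(s, k):
    """Smallest reading position m at which val(s, m) is maximal."""
    n = len(s)
    vals = [val_at(s, m, k) for m in range(n)]
    return vals.index(max(vals))

def tail(s, k):
    """Index (mod n) of the least significant nonzero digit of s as
    val(s) reads it; undefined for the all-zero state."""
    n = len(s)
    h = head(s, k)
    for i in range(h - 1, h - 1 - n, -1):  # scan downward from the head, cyclically
        if s[i % n] != '0':
            return i % n
    raise ValueError("tail(0^n) is undefined")
```

For instance, with $s=001$ and $k=2$ the three readings are $1$, $2$ and $4$, so `head('001', 2)` is $2$ and, since the downward scan from the head wraps around to the last position, `tail('001', 2)` is $2$ as well.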
As the function $tail$ plays a key role in the algorithm for computing the strategies that we propose in the sequel and, consequently, also in the shift-rule for the prefer-max sequence, we state its computational complexity as follows:
\begin{proposition}\label{prop:O(n)_not_to_market}
The function $tail$ can be computed in $O(n)$ time and memory.
\end{proposition}
\begin{proof}
We can compute $val(x,i)$ from $val(x,i-1)$ in $O(1)$ time and memory using the equation $val(x,i)= k \cdot (val(x,i-1)-\tau \cdot k^{n-1}) +\tau$, where $\tau = \lfloor val(x,i-1) / k^{n-1} \rfloor$ is the most significant digit of $val(x,i-1)$. Computing $val(x,0)$ directly takes $O(n)$ time, and checking which value is the maximal is also $O(1)$ per step, since we only need to check whether the new value is greater than the previous maximum, where the first maximum is $val(x,0)$. Also, from Definition~\ref{def:val_und_freund}, we can compute $tail$ from $head$ in $O(n)$ time and memory.
\end{proof}
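One way to realize this constant-time rotation update is sketched below (our own illustrative Python, encoding states as strings of base-$k$ digits with $k \le 10$): a single $O(n)$ pass computes $head$, from which $tail$ follows by a second $O(n)$ scan.

```python
def head_linear(s, k):
    """head(s) in O(n) time: moving the reading position by one rotates
    the base-k digits, so each successive val(s, m) is an O(1) update."""
    n = len(s)
    # val(s, 0), computed directly in O(n)
    v = sum(int(s[(0 - i) % n]) * k ** (i - 1) for i in range(1, n + 1))
    best_v, best_m = v, 0
    for m in range(1, n):
        msd = v // k ** (n - 1)                  # most significant digit of val(s, m-1)
        v = k * (v - msd * k ** (n - 1)) + msd   # = val(s, m)
        if v > best_v:                           # strict '>' keeps the smallest argmax
            best_v, best_m = v, m
    return best_m
```

(The big-integer arithmetic is $O(1)$ only for word-sized values; for the purposes of this sketch we count arithmetic operations, as the proposition does.)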
We turn now to the analysis of the behavior of the function $val$ over plays of the shift-game. The following three claims establish bounds on the difference $val(s_{t+1}) - val(s_t)$ where $s_t$ and $s_{t+1}$ are consecutive states in a play of the game. If $s_t=x\sigma$ then, by definition, $s_{t+1}$ is either $(\sigma+1)x$, $0x$, or $\sigma x$. Since in the third case $val(s_t)=val(s_{t+1})$, we only need to analyze the first two cases.
\begin{claim}\label{claim:swB}
$val((\sigma +1)x) - val(x \sigma) \geq k^{head(x \sigma)}$.
\end{claim}
\begin{proof}
If $h=head(x \sigma)$ and $x=\sigma_0\cdots\sigma_{n-2}$ then:
\begin{eqnarray*}
val((\sigma +1)x) &\geq& val((\sigma +1)x, (h+1) \bmod n) \\
&=& val((\sigma_{n-1}+1)\sigma_0 \cdots \sigma_{n-2},(h+1) \bmod n) \\
&=& k^{h}+\Sigma_{i=1}^n \sigma_{((h+1)-i-1) \bmod n}k^{i-1} \\
&=& k^h+\Sigma_{i=1}^n \sigma_{(h-i) \bmod n}k^{i-1}=k^h+val(x \sigma).
\end{eqnarray*}\qedhere
\end{proof}
\begin{claim}\label{claim:head+1}
If $x \neq 0^{n-1}$ and $tail(x\sigma)=n-1$, then $head(0x)=(head(x\sigma)+1) \bmod n$.
\end{claim}
\begin{proof}
Let $h=head(x\sigma)$. Since $tail(x\sigma)=n-1$, we can write $x\sigma=0^{h}\sigma_h y\sigma$ and $0x=0^{h+1}\sigma_hy$ for some $y$ and some $\sigma_h \neq 0$. Let $h'= head(0x)$. From Definition~\ref{def:val_und_freund}, since the symbol at the head cannot be zero, we have that $h' \geq h+1$. Assume towards contradiction that $h' > h+1$. Then $val(0x,h')=val(x\sigma,h'-1)-\sigma k^{h'-1} < val(x\sigma,h)-\sigma k^{h} = val(0x,h+1)$, which contradicts that $h'$ is the head. Thus $h'=h+1$.
\end{proof}
\begin{claim}\label{claim:swA}
If $x \neq 0^{n-1}$ and $tail(x\sigma)=n-1$, then
$val(0x)-val(x\sigma) \geq -(k-1) \cdot k^{head(x\sigma)}$.
\end{claim}
\begin{proof}
If $h=head(x\sigma)$, $h'=head(0x)=h+1$ (by Claim~\ref{claim:head+1}), $0x=0\sigma_0\dots \sigma_{n-2}$, and $x\sigma= \sigma_0\cdots \sigma_{n-2}\sigma$ then:
\begin{eqnarray*}
val(0x,h') &=& val(x\sigma,h'-1)-\sigma k^{h'-1} \\
&=& (\Sigma_{i=1}^n \sigma_{((h+1)-1-i) \bmod n}k^{i-1} )-\sigma k^{h} \\
&=& (\Sigma_{i=1}^n \sigma_{(h-i) \bmod n}k^{i-1})-\sigma k^{h} \\
&\geq & (\Sigma_{i=1}^n \sigma_{(h-i) \bmod n}k^{i-1})-(k-1)k^{h}\\
&=& val(x\sigma,h)-(k-1)k^h
\end{eqnarray*}
\end{proof}
The above three claims give us the tools needed for describing a non-losing strategy for Alice and Bob, as follows. The `trick' for Alice is to force Bob to have $B(s_t)=1$ at least once every $2n$ steps of the game if he does not want to lose within these $2n$ steps. Then, by Claim~\ref{claim:swB} and Claim~\ref{claim:swA}, we get that $val$ increases more than it may decrease between any two states in which we increase the value, which means that Alice guarantees that the play reaches the state with the maximal value of $val$, which is $(k-1)^n$, from which Alice can easily win in $n$ moves.
\begin{definition}\label{def:tailst}
Let $A_{tail} \colon [k]^n \to \{0,1\}$ be the strategy for Alice defined by
$$
A_{tail}(x\sigma) =
\begin{cases}
1 & \text{if $tail(x\sigma)=n-1$}; \\
0 & \text{otherwise}.
\end{cases}
$$
and let $B_{tail} \colon [k]^n \to \{0,1\}$ be the strategy for Bob defined by
$$
B_{tail}(x\sigma) =
\begin{cases}
1 & \text{if $\sigma < k-1$ and $tail(x (\sigma+1))=n-1$}; \\
0 & \text{otherwise.}
\end{cases}
$$
\end{definition}
From Definition~\ref{def:tailst}, we can see that $A_{tail}$ and $B_{tail}$ are strategies, i.e., $A_{tail}(x0)=B_{tail}(x(k-1))=0$, since $tail(x0)\neq n-1$.
\vspace{0.3cm}
Our next goal is to prove that $A_{tail}$ is a non-losing strategy for Alice. To this end, we define a variant of the game and establish three claims, as follows.
\begin{definition}\label{def:inf_game}
The infinite shift-game goes exactly as the game in Definition~\ref{def:shift-game}, the only difference being that a play does not end when $s_m \in \{s_1,\dots,s_{m-1}\}$ but only when $s_m=s_0=0^n$. In this game, Bob wins if the play goes on forever, Alice wins if $s_m=0^n$, and there is no option for a tie.
\end{definition}
\begin{claim}\label{claim:0-at-the-end}
In any play $s_0, s_1,\dots$ of the infinite shift-game where Alice applies the strategy $A_{tail}$ and Bob applies a strategy $B$, if $t_0 < t_1 < t_2$ are such that $B(s_{t_0})=B(s_{t_2})=1$, $A_{tail}(s_{t_1})=1$, and $B(s_t)=0$ for all $t_0 < t < t_2$, then $s_{t'}$ is of the form $0 x$ for all $t_1 < t' \leq t_2$.
\end{claim}
\begin{proof}
By the definition of $tail$ we have that if $tail(\sigma_0 \cdots \sigma_{n-1})=\max\{ i \colon \sigma_i \neq 0\}$ for some $\sigma_0 \cdots \sigma_{n-1}$, then $tail(0 \sigma_0 \cdots \sigma_{n-2})=1+\max\{ i \leq n-2 \colon \sigma_i \neq 0\}$, i.e., the tail still points at the last nonzero symbol. We use this fact to prove, by induction, that for all $t_1 < t \leq t_2$, if $s_t=\sigma_0\cdots\sigma_{n-1}$ then $tail(s_t)=\max\{ i \colon \sigma_i \neq 0\}$. The base of the induction is the definition of $A_{tail}$, which gives us that $tail(s_{t_1})=n-1$. The induction step splits into the following two cases: if $\sigma_{n-1} \neq 0$ then $s_{t+1}=0\sigma_0 \cdots \sigma_{n-2}$ because $A_{tail}(s_t)=1$; if $\sigma_{n-1} = 0$ then $s_{t+1}=0\sigma_0 \cdots \sigma_{n-2}$ because the state shifts. In both cases, the symbol that enters at the left is $0$ and the invariant $tail(s_t)=\max\{ i \colon \sigma_i \neq 0\}$ is kept.
\end{proof}
\begin{claim}\label{claim:inf-B}
In any play $s_0, s_1,\dots$ of the infinite shift-game where Alice applies the strategy $A_{tail}$, Bob applies a strategy $B$, and Alice loses, there must be infinitely many indexes $t$ such that $B(s_t)=1$.
\end{claim}
\begin{proof}
Assume towards contradiction that there is a $t$ such that $B(s_t)=B(s_{t+1})=\cdots=B(s_{t+2n})=0$ and $s_t=\sigma_0 \cdots \sigma_{n-1}$. Let $m \geq 0$ be the first index that satisfies $tail(s_{t+m})=n-1$; since the state only rotates until then, $m < n$. By Definition~\ref{def:inf_game} of the infinite game and by Definition~\ref{def:tailst} of $A_{tail}$, $s_{t+m} = \sigma_{n-m} \cdots \sigma_{n-1} \sigma_0 \cdots \sigma_{n-m-1}$. By the arguments used in the previous proof, $s_{t+m+n} = 0^n$. This contradicts the assumption that Alice loses.
\end{proof}
\begin{claim}\label{claim:val-increases}
In any play $s_0, s_1,\dots$ of the infinite shift-game where Alice applies the strategy $A_{tail}$ and Bob applies a strategy $B$, if $t_1 < t_2$ are such that $B(s_{t_1})=B(s_{t_2})=1$ then $val(s_{t_1+1}) < val(s_{t_2+1})$.
\end{claim}
\begin{proof}
Assume without loss of generality that $B(s_t)=0$ for all $t_1 < t < t_2$.
If $A_{tail}(s_t)=0$ for all $t_1 < t < t_2$ then $s_{t_2+1}$ is a rotation of $s_{t_1+1}$
with one of its digits increased by one. By the definition of $val$, this gives that $val(s_{t_2+1}) > val(s_{t_1+1})$.
If there is a $t_1 < t < t_2$ such that $A_{tail}(s_{t})=1$, we are in the case covered by Claim~\ref{claim:0-at-the-end}; let $t$ be the first such index. Then:
\begin{eqnarray}
val(s_{t_1+1}) &=& val(s_{t}) \label{a} \\
&\leq& val(s_{t_2}) + \sum_{i=0}^{t_2-t-1}(k-1)k^{head(s_{t})+i} \label{b} \\
&<& val(s_{t_2}) + k^{head(s_{t})+(t_2-t)} \\
&=& val(s_{t_2}) + k^{head(s_{t_2})} \label{d} \\
&\leq& val(s_{t_2+1}) \label{e}
\end{eqnarray}
\eqref{a} is because the state rotates when $B(s)=A(s)=0$. \eqref{b} is by Claim~\ref{claim:0-at-the-end} and by Claim~\ref{claim:swA}. \eqref{d} is by Claim~\ref{claim:head+1}. \eqref{e} is by Claim~\ref{claim:swB}.
\end{proof}
\begin{proposition}\label{proposition:tailAwins}
$A_{tail}$ is a non-losing strategy for Alice both in the infinite game and in the finite game.
\end{proposition}
\begin{proof}
Assume towards contradiction that there is a strategy $B$ for Bob such that the play of $B$ against $A_{tail}$ in the infinite game is the infinite sequence $s_0, s_1,\dots$ where $s_t \neq 0^n$ for all $t>0$. By Claim~\ref{claim:inf-B} and Claim~\ref{claim:val-increases}, the value of the states must grow without an upper bound along the game. This is a contradiction since, by definition, $val(s) < k^n$ for all $s$. This means that Bob cannot force the game to loop before reaching $0^n$, i.e., that he cannot win the finite game either.
\end{proof}
\begin{proposition}\label{proposition:tailA}
$A_{tail}=A^*$.
\end{proposition}
\begin{proof}
From Proposition~\ref{proposition:tailAwins}, we have that $A_{tail}$ is a non-losing strategy for Alice, and from Proposition~\ref{proposition:Aunique} we know that $A^{*}$ is the only non-losing strategy for Alice. Thus $A_{tail}=A^{*}$.
\end{proof}
\begin{proposition}\label{proposition:tailB}
$B_{tail}=B^{*}$.
\end{proposition}
\begin{proof}
From Proposition~\ref{proposition:tailAwins} we know that $A_{tail}$ is a non-losing strategy and from Proposition~\ref{proposition:tailA} we know that $A_{tail}=A^{*}$. From Proposition~\ref{proposition:pmaxeq} we know that $B^{*}(w\sigma)=A^{*}(w(\sigma+1))$ and from Definition~\ref{def:tailst} we can see that $A_{tail}(w(\sigma+1))=B_{tail}(w\sigma)$, so we get:
$$B^{*}(w\sigma)=A^{*}(w(\sigma+1))=A_{tail}(w(\sigma+1))=B_{tail}(w\sigma)$$
Thus $B_{tail}=B^{*}$.
\end{proof}
\section{An efficiently computable shift rule }
By Proposition~\ref{proposition:tailA} and Proposition~\ref{proposition:tailB} we have that $A^{*}=A_{tail}$ and that $B^{*} = B_{tail}$. In this section, we present a simple construction that uses this fact, and the fact that both $A_{tail}$ and $B_{tail}$ can be computed efficiently (Proposition~\ref{prop:O(n)_not_to_market}), to provide an efficient construction of the prefer-max sequence:
\begin{theorem}
The function
$$ \shift(x \sigma)=
\begin{cases}
(\sigma+1) x & \text{if $\sigma < k-1$ and $tail(x(\sigma+1))=n-1$}; \\
0x & \text{else, if $tail(x\sigma)=n-1$};\\
\sigma x & \text{otherwise}.
\end{cases} $$
maps each vertex on the prefer-max cycle to its predecessor. It can be computed in $O(n)$ time and memory.
\end{theorem}
\begin{proof}
From Proposition~\ref{proposition:tailB} and Proposition~\ref{proposition:tailA}, we know that $A^{*}=A_{tail}$ and that $B^{*}=B_{tail}$; thus we get that $(s_t)_{t=0}^{k^n-1} = (w_{k^n-1-t})_{t=0}^{k^n-1}$, and from Definition~\ref{def:tailst} we get that $s_{t+1}=\shift(s_t)$. By Proposition~\ref{prop:O(n)_not_to_market} we can calculate $tail$ in $O(n)$ time and memory, and that is the only part that requires computation.
\end{proof}
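To make the theorem concrete, here is a self-contained, illustrative Python sketch of the shift rule (our own encoding: states as strings of base-$k$ digits, $k \le 10$; the helpers recompute $tail$ naively rather than with the $O(n)$ update of Proposition~\ref{prop:O(n)_not_to_market}).

```python
def _val(s, m, k):
    """val(s, m) per the definition: base-k digit k**(i-1) is s[(m-i) mod n]."""
    n = len(s)
    return sum(int(s[(m - i) % n]) * k ** (i - 1) for i in range(1, n + 1))

def _tail(s, k):
    """tail(s): least significant nonzero digit of s as val(s) reads it."""
    n = len(s)
    vals = [_val(s, m, k) for m in range(n)]
    h = vals.index(max(vals))                # head(s)
    for i in range(h - 1, h - 1 - n, -1):    # scan downward from the head, cyclically
        if s[i % n] != '0':
            return i % n
    raise ValueError("tail(0^n) is undefined")

def shift(s, k):
    """The shift rule of the theorem: maps a state to its predecessor
    on the prefer-max cycle."""
    n = len(s)
    x, sigma = s[:-1], int(s[-1])
    if sigma < k - 1 and _tail(x + str(sigma + 1), k) == n - 1:
        return str(sigma + 1) + x            # increase and shift
    if s != '0' * n and _tail(s, k) == n - 1:
        return '0' + x                       # reset and shift
    return str(sigma) + x                    # just a shift
```

Iterating `shift` from $0^n$ with $n=3$ and $k=2$ reproduces the play listed in the earlier example: $000, 100, 010, 101, 110, 111, 011, 001$, and back to $000$.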
\section{Acknowledgments}
We would like to thank Daniel Berend, Michael Elkin, Avraham Hayoun, and Moshe Schwartz for fruitful discussions and their help in this project.
This work was conducted as part of the Alpha Program at the Davidson Institute of Science Education, which is funded and operated nationally by Maimonides Fund’s Center for the Advancement of the Gifted and Talented and the Education Ministry’s Department for Gifted and Talented Students.
\section{References}
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{sec:intro}
The tremendous progress made on both direct and indirect particle dark matter (DM) searches over
the past few decades has yielded an incredible wealth of data, calling for predictions as reliable
as possible in order to draw robust conclusions on models (see reviews on DM models and search
strategies in
\textit{e.g.}\ Refs.~\cite{Feng2010,LavalleEtAl2012,BringmannEtAl2012c,EssigEtAl2013,Strigari2013,FreeseEtAl2013,Slatyer2016,LiuEtAl2016,CarrEtAl2016,Green2017}).
Galactic DM searches are among the most promising because the Milky Way (MW) is a
local and constrained system. However, most associated theoretical predictions are
still based on simplifying assumptions for the DM distributions in real space and/or phase space,
despite regular improvements in modeling techniques and observational constraints. Therefore,
it is usually difficult to figure out the level of uncertainties associated with these assumptions.
The fact that different studies use different assumptions makes it even more difficult to
self-consistently exploit the genuine complementarity between the constraints or discovery
avenues, which now becomes crucial as experiments have started to probe significant parts
of the parameter space allowed for popular particle DM scenarios. It is worth emphasizing that
designing constrained and theoretically sound models for the DM distribution in real space and
phase space in target systems is crucial for any astrophysical DM search, irrespective of the
DM scenario.
The Gaia mission \cite{GaiaCollaborationEtAl2016,BrownEtAl2018} is currently shedding new and
unprecedented light on the distribution of DM in the MW, complementary to other stellar
surveys (see \textit{e.g.}\ Refs.~\cite{ReidEtAl2014,PifflEtAl2014}). The Gaia data will increase the accuracy
in predictions of DM-related observables, provided reliable methods ensure their use
in a sensible way. The overall challenge is to better control not only the spatial distribution of
DM, but also its full phase-space distribution function (DF henceforth), which are the major
sources of uncertainties in predictions for DM searches. This will likely not be an easy task
\cite{Binney2017}, and there is room for significant theoretical improvement over the techniques
currently used in DM searches. The phase-space DF enters the calculations of many important
DM-related observables that depend directly on the DM velocity distribution---for example the
direct DM
detection rate, averaged $p$-wave-suppressed or Sommerfeld-enhanced annihilation cross sections,
the microlensing event rate of compact DM objects, \textit{etc}. Moreover, since the spatial distribution of
DM is the integral of the phase-space DF over momentum space, it is clear that a common framework is
necessary to make self-consistent comparisons between direct and indirect Galactic DM searches
in the broad sense, as both should exhibit some correlations (largely ignored so far, except in
a few studies, \textit{e.g.}\ Ref.~\cite{CerdenoEtAl2016}).
In this paper, we wish to investigate the status of some theoretical approaches that attempt to
self-consistently predict the DM phase-space DF from the full content of the target system by
virtue of the (steady-state) Boltzmann equation, the Jeans theorem, and the Poisson equation,
{\em i.e.} from first principles---we will place ourselves in the context of collisionless cold
DM from now on. These methods go beyond the simplistic approximation of a Maxwell-Boltzmann
distribution, well
suited to get fast order-of-magnitude estimates, but long known not to apply to bounded systems
\cite{King1966}, and not to comply with dynamical constraints on the MW. These methods are
complementary to data-driven approaches (\textit{e.g.}\ Ref.~\cite{Herzog-ArbeitmanEtAl2018}). Other
approaches rely on fits from hydrodynamic cosmological simulations, but except for the essential
physical insight provided by generic features found in simulations
(\textit{e.g.}\ Refs.~\cite{PillepichEtAl2014,SchayeEtAl2015}), the
blind extrapolation of these fits to describe a single, specific, and constrained object like the
MW is questionable; not to mention the uncertainties induced by the empirical assumptions in the
description of baryonic effects and by the limited resolution. Cosmological simulations are still
very important tools to test prediction methods as they provide a framework in which all the
gravitational constituents are dynamically correlated \cite{LacroixEtAl2018}.
A well-known example and {\em a priori} self-consistent phase-space DF prediction from the
gravitational system content is the so-called Eddington inversion method \cite{Eddington1916} (and
its anisotropic extensions, like the Osipkov-Merritt models \cite{Osipkov1979,Merritt1985}), which
we discuss extensively in this paper. This approach has already been used in the context of direct
particle DM searches (see \textit{e.g.}\ Refs.~\cite{UllioEtAl2001,VergadosEtAl2003,
CatenaEtAl2012,PatoEtAl2013,BhattacharjeeEtAl2013,BozorgniaEtAl2013,FornasaEtAl2014,
LavalleEtAl2015}), as well as indirect searches (see \textit{e.g.}\ Refs.~\cite{FerrerEtAl2013,Hunter2014,
BoddyEtAl2017,PetacEtAl2018}).
A clear benefit of this method is that it can make use of evolved and constrained Galactic mass
models (\textit{e.g.}\ Refs.~\cite{CatenaEtAl2010a,McMillan2011,PifflEtAl2014,McMillan2017}), providing a
much more sensible theoretical description of the phase space. However, its validity range has not
been studied in detail in the context of DM searches, especially in a complex system like the MW,
whose gravitational potential is dominated by the baryons in the central regions. In this
work, we will show that it actually cannot apply to all DM-baryon configurations, leading to
ill-defined phase-space DFs even for rather conventional Galactic mass models. This is the
manifestation of gravitationally unstable DFs, and of the fact that some degrees of freedom are
missing to fully describe the system. We carefully delineate the DM-baryon parameter space where
the Eddington-like calculations may apply. We also discuss several other theoretical issues that
have been overlooked in the literature, such as the impact of the radial boundary of the system,
which should not be neglected to guarantee the existence of a closed system of equations, but may
in turn induce divergences in the velocity distribution. We propose ways to circumvent these issues,
and provide results for some observables specific to DM searches in the framework of the Galactic
mass model of Ref.~\cite{McMillan2017} (see \citeapp{app:mass_models}), namely radial profiles of
the moments of the DM speed (direct DM searches, microlensing event rate for compact DM objects,
\textit{etc}.) and of the (two-body) relative DM speed ($p$-wave-suppressed and Sommerfeld-enhanced
annihilation) distributions. We stress that although we focus on the MW in this paper, the general
aspects of this study are still relevant to the use of the Eddington formalism to describe the DM
phase-space DF of any other bounded system (with or without baryons).
\change{We also emphasize that
this study focuses on the theoretical self-consistency of the formalism itself, which is a first
important step with, as we will see, quantitative consequences. It is very likely that
several assumptions inherent to this theoretical description, like steady state, spherical
symmetry, or the fact that potential effects coming from large substructures or recent mergers
are neglected (\textit{e.g.}~the Large or Small Magellanic Cloud), will break down at some level, inducing
another layer of systematic uncertainties. However, more detailed comparisons between the
theoretical errors addressed here and other systematic uncertainties are left to a forthcoming
dedicated paper~\cite{LacroixEtAl2018}.\footnote{Preliminary
results based on tests on hydrodynamic cosmological simulations show that,
surprisingly enough, the formalism performs rather well on ``Milky Way-like''
virtual galaxies.} }
The paper is organized as follows. In \citesec{sec:edd}, we review the Eddington-inversion
formalism and some of its anisotropic extensions. In \citesec{sec:issues}, we explain in detail
the issues mentioned above and their physical consequences---the divergences induced by the radial
boundary and the inability of the formalism to describe some DM-baryon configurations allowed by
kinematic constraints. In that section, we discuss some possible ways out that allow one to
recover a self-consistent description of the phase space. In \citesec{sec:dd}, we illustrate our results by calculating a series of observables relevant to particle DM direct and indirect searches. These results can be straightforwardly used for predictions in these
fields. Finally, we conclude in \citesec{sec:concl}.
\section{Eddington's inversion method and its anisotropic extensions}
\label{sec:edd}
In this section, we review the basic concepts that will be useful throughout the
discussion. Though mostly reviewing standard knowledge \cite{BinneyEtAl2008}, we will also
point out several technical details that are often overlooked or unclear in the literature.
\subsection{Jeans' theorem and spherical systems}
\label{ssec:jeans}
The Jeans theorem states that any steady-state solution of the collisionless Boltzmann equation
can be written as a function of isolating integrals of motion
\cite{Ollongren1962a,BinneyEtAl2008}. In the
particular case of a system with spherical symmetry, the energy and the modulus of the angular
momentum are such integrals of motion.
Consequently, the phase-space DF of such a system can be written
$f(\vec{r},\vec{v})\equiv f(\mathcal{E},L)$, where $L=|\vec{r}\times\vec{v}|$ is the modulus of
the angular momentum per unit mass, and
\begin{eqnarray}
\label{eq:energy}
\mathcal{E} = \Psi(r) - \dfrac{v^{2}}{2}
\end{eqnarray}
is the relative energy per unit mass---we assume all the DM particles in the system to be
identical. In \citeeq{eq:energy}, $v$ is the velocity, and
\begin{eqnarray}
\Psi(r) = \Phi_{0} - \Phi(r)
\end{eqnarray}
is the (positive-definite) relative gravitational potential, where $\Phi(r)$ is the
solution to Poisson's equation going to 0 at infinity. The constant $\Phi_{0}$ is the value of
$\Phi(r)$ at some reference radius---usually taken to be the physical boundary of the
system---called $R_{\rm max}$ in the following.
This ensures that the potential is positive-definite over the system except at the
boundary where it vanishes. It will sometimes prove convenient to distinguish the
baryonic ($\Psi_{\rm B}$) and DM ($\Psi_{\rm D}$) contributions to the potential that we
introduce here through the following equation,
\begin{eqnarray}
\Psi(r) =\Psi_{\rm D}(r) + \Psi_{\rm B}(r)\,.
\end{eqnarray}
For the full system or for each component, and provided the mass profile or the density profile
are known, the relative potential $\Psi$ can be related to the mass distribution of the system
(or its individual components) through Poisson's equation, and reads
\begin{eqnarray}
\Psi(r) = \int_{r}^{R_{\rm max}} \! \mathrm{d}r' \, \dfrac{G m(r')}{r'^{2}} \,,
\label{eq:relative_potential}
\end{eqnarray}
where the mass inside the sphere of radius $r$ is related to the mass density $\rho$ through
\begin{eqnarray}
m(r) = 4 \pi \int_{0}^{r} \! \mathrm{d}r'\, \rho(r') r'^{2} \,.
\label{eq:mass}
\end{eqnarray}
Like for the potential, the mass can be split into several components,
\textit{e.g.}\ a baryonic component ($m_{\rm B}$) and a DM one ($m_{\rm D}$).
We stress that the DM potential $\Psi_{\rm D}$ can be calculated from
\citeeq{eq:relative_potential} only when the DM content is specified from its density profile
$\rho$; we will see later that in some cases, we can only self-consistently get the potential
from the DF, where the radial coordinate $r$ only emerges by solving the Poisson equation
given below in \citeeq{eq:poisson}. In contrast, the baryonic potential will invariably be defined
from \citeeq{eq:relative_potential} from now on.
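As an illustration of the chain \citeeq{eq:mass}--\citeeq{eq:relative_potential}, the following minimal numerical sketch (in Python, with illustrative units $G=\rho_{\rm s}=r_{\rm s}=1$ and a halo truncated at $R_{\rm max}=20\,r_{\rm s}$; these are toy values, not the McM17 best-fit parameters) computes the relative potential of an NFW halo by direct quadrature, using the closed form $\Psi(r)=4\pi\left[\ln(1+r)/r-\ln(1+R_{\rm max})/R_{\rm max}\right]$ as a cross-check:

```python
# Numerical sketch of the relative potential of a truncated NFW halo,
# in illustrative units G = rho_s = r_s = 1 and R_max = 20 (toy values).
import math

def simpson(f, a, b, n=400):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def mass(r):
    """m(r) = 4 pi int_0^r rho(r') r'^2 dr'.
    For the NFW profile, rho(r') r'^2 = r'/(1+r')^2 is regular at r' = 0."""
    return 4.0 * math.pi * simpson(lambda x: x / (1.0 + x) ** 2, 0.0, r)

def psi(r, r_max=20.0):
    """Psi(r) = int_r^{R_max} G m(r')/r'^2 dr', so that Psi(R_max) = 0."""
    return simpson(lambda x: mass(x) / x ** 2, r, r_max)

def psi_exact(r, r_max=20.0):
    """Closed form for the truncated NFW halo, used as a cross-check."""
    return 4.0 * math.pi * (math.log(1.0 + r) / r
                            - math.log(1.0 + r_max) / r_max)
```

By construction, $\Psi$ decreases monotonically with $r$ and vanishes at the boundary $R_{\rm max}$, the properties used repeatedly in the inversion below.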
We limit our study to systems with spherical symmetry; therefore, when dealing with a non-spherical
density component $\rho(\vec{x})$ (\textit{e.g.}~baryons, which often have an approximate axial symmetry in
galaxies), we compute the corresponding mass inside a radius $r$ via
\begin{eqnarray}
\label{eq:mass_axisymm}
m(r) = \int_{|\vec{x}|\le r}\mathrm{d}^{3}\vec{x} \,\rho(\vec{x})\,,
\end{eqnarray}
and its ``spherically symmetrized potential'' using \citeeq{eq:relative_potential}. This
approximation can be relaxed in principle, though the consistent treatment of an axisymmetric
distribution is much more involved (see \citesec{ssec:axisymmetry}). \change{In the following, all
non-spherical components such as the bulge and disks in the model of Ref.~\cite{McMillan2017}
(see App.~\ref{app:mass_models}), will be ``spherically symmetrized'' relying on
Eq.~\eqref{eq:mass_axisymm}.}
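For concreteness, here is a minimal numerical sketch of \citeeq{eq:mass_axisymm} (in Python) for a hypothetical double-exponential disk with $\rho_0=R_{\rm d}=z_{\rm d}=1$; this is an illustrative toy density, not one of the McM17 components:

```python
# Sketch of m(r) = int_{|x| <= r} rho(x) d^3x for an axisymmetric density,
# here a toy double-exponential disk with rho_0 = R_d = z_d = 1.
import math

def rho_disk(R, z):
    """Double-exponential disk: rho_0 exp(-R/R_d) exp(-|z|/z_d)."""
    return math.exp(-R) * math.exp(-abs(z))

def simpson(f, a, b, n=200):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def mass_in_sphere(r):
    """Spherical-coordinate integration over the sphere |x| <= r:
    m(r) = 2 pi int_0^r ds s^2 int_0^pi dth sin(th) rho(s sin th, s cos th)."""
    def shell(s):
        return s * s * simpson(
            lambda th: math.sin(th)
            * rho_disk(s * math.sin(th), s * math.cos(th)),
            0.0, math.pi)
    return 2.0 * math.pi * simpson(shell, 0.0, r)
```

Since the total mass of this toy disk is $4\pi\rho_0 R_{\rm d}^2 z_{\rm d}$, the enclosed mass must converge to $4\pi$ for $r\gg R_{\rm d},\,z_{\rm d}$, which provides a simple sanity check of the quadrature.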
The DF is therefore related to the mass density via
\begin{eqnarray}
\label{eq:rho_definition}
\rho (r) = \int \mathrm{d}^{3}\vec{v}\, f(r,\vec{v})
= \int \mathrm{d}^{3}\vec{v} \, f(\mathcal{E},L) \,.
\end{eqnarray}
Note that in \citeeq{eq:rho_definition}, the DF is normalized to the total mass of the component
of interest. We keep this convention in the following.
We can further define the velocity distribution $f_{\vec{v}}$ and the speed distribution $f_{v}$
as follows:
\begin{subequations}
\label{eq:v_df}
\begin{eqnarray}
f_{\vec{v}}(\vec{v},r) &\equiv& \frac{f(\mathcal{E},L)}{\rho (r)}\\
f_{v}(|\vec{v}|,r) &\equiv& v^2 \int {\rm d}\Omega_v \, f_{\vec{v}}(\vec{v},r)\,,
\end{eqnarray}
\end{subequations}
where ${\rm d}\Omega_v$ encodes the angular content of the velocity distribution. From the
above definition, both $f_{\vec{v}}$ and $f_{v}$ carry the usual units and are normalized
to unity. We stress that the DF introduced in \citeeq{eq:rho_definition} is implicitly assumed to
further satisfy Poisson's equation
\begin{eqnarray}
\Delta \Psi_i = -4\,\pi \, G \,\rho_i(r) = -4\,\pi\, G \int \mathrm{d}^{3}\vec{v} \,
f_i({\cal E},L)
= -16\,\pi^{2}\, G \int_0^\Psi {\rm d}{\cal E}'\,\sqrt{2(\Psi-{\cal E}')}\,f_i({\cal E}')
\,,
\label{eq:poisson}
\end{eqnarray}
which will turn out to be important later on (the last equality holds for an ergodic DF, for
which the angular integration over velocities simply yields a factor $4\pi$). When $\rho_i(r)$
is specified, the above
equation reduces to \citeeq{eq:relative_potential} if the boundary condition $\Psi_i(R_{\rm max})=0$
is considered (here, this will always be the case for the baryonic component). Otherwise,
\citeeq{eq:poisson} will have to be solved explicitly to compute the potential.
The $i$ index makes
it clear that although the energy ${\cal E}$ depends on the full potential $\Psi=
\sum_i\Psi_i$ and thereby on all the gravitational components of the system, the Poisson
equation only relates the individual components to their own phase-space DF.
There is no general classification of the solutions of the collisionless Boltzmann equation.
Therefore, further assumptions on the properties of the phase space are needed. In the following,
we recall the main equations of the Eddington inversion formalism---which allows one to derive a
phase-space DF for a given galactic mass model and for particular assumptions on the anisotropy of
the system---before discussing in detail theoretical issues that may arise from the
method.
\subsection{Eddington's inversion for an isotropic system}
\label{ssec:iso}
We first set about describing the simplest case of a spherically symmetric and isotropic
DM distribution. In that case, the angular momentum is irrelevant, and the dependence of the DF
on integrals of motion simplifies to an energy dependence, $f \equiv f(\mathcal{E})$. Such a DF
is referred to as ergodic. Using \citeeq{eq:energy} as a change of variables to eliminate the
velocity, we can rewrite \citeeq{eq:rho_definition} as
\begin{eqnarray}
\label{eq:rho_fE}
\rho (r) = 4 \pi \sqrt{2} \int_{0}^{\Psi(r)} \!
f(\mathcal{E}) \sqrt{\Psi(r) - \mathcal{E}} \, \mathrm{d}\mathcal{E}\,.
\end{eqnarray}
Note that we only consider \emph{self-gravitating} systems, which means all particles in the
system are gravitationally bound to it and have $\mathcal{E} \geq 0$. As a result,
$f(\mathcal{E} < 0) = 0$. This translates into a lower bound of $\mathcal{E}=0$ in the integral
in \citeeq{eq:rho_fE}. For general systems that are not self-gravitating, the lower bound would be
$\mathcal{E}=-\infty$.
Since $\Psi$ is a monotonically decreasing function of $r$ in a realistic stationary system, one can
define $\rho$ as a function of $\Psi$ instead of $r$. Differentiating \citeeq{eq:rho_fE} with
respect to $\Psi$, one obtains
\begin{eqnarray}
\dfrac{\mathrm{d}\rho}{\mathrm{d}\Psi} = \sqrt{8}\pi \int_{0}^{\Psi} \! \dfrac{f(\mathcal{E})}{\sqrt{\Psi - \mathcal{E}}} \, \mathrm{d}\mathcal{E}\,.
\label{eq:abel_equation_sec2}
\end{eqnarray}
This is an Abel equation, which can be inverted to give Eddington's formula
\cite{Eddington1916,BinneyEtAl2008}:
\begin{eqnarray}
f(\mathcal{E}) = \dfrac{1}{\sqrt{8}\pi^{2}} \dfrac{\mathrm{d}}{\mathrm{d}\mathcal{E}}
\int_{0}^{\mathcal{E}} \! \dfrac{\mathrm{d}\Psi}{\sqrt{\mathcal{E} - \Psi}}\,
\dfrac{\mathrm{d}\rho}{\mathrm{d}\Psi} \,.
\label{eq:eddington_form1}
\end{eqnarray}
A more convenient form of Eddington's formula that does not explicitly feature a derivative with
respect to $\mathcal{E}$ can be obtained after integrating by parts:
\begin{eqnarray}
\label{eq:eddington_form2}
f(\mathcal{E}) &=& \dfrac{1}{\sqrt{8}\pi^{2}}
\left\{ \dfrac{1}{\sqrt{\mathcal{E}}} \left[ \dfrac{\mathrm{d}\rho}{\mathrm{d}\Psi} \right]_{\Psi=0}
+ \int_{0}^{\mathcal{E}} \! \dfrac{\mathrm{d}\Psi}{\sqrt{\mathcal{E} - \Psi}} \,
\dfrac{\mathrm{d}^{2}\rho}{\mathrm{d}\Psi^{2}} \right\}\\
\Big[ &=& \dfrac{2}{\sqrt{8}\pi^{2}}
\left\{ \dfrac{1}{2\,\sqrt{\mathcal{E}}}
\left[ \dfrac{\mathrm{d}\rho}{\mathrm{d}\Psi} \right]_{\Psi=0}
+ \sqrt{\cal E} \left[ \dfrac{{\rm d}^2\rho}{{\rm d}\Psi^2} \right]_{\Psi=0}
+ \int_{0}^{\mathcal{E}} \! {\rm d}\Psi \,\sqrt{\mathcal{E} - \Psi}
\, \dfrac{\mathrm{d}^{3}\rho}{\mathrm{d}\Psi^{3}} \,
\right\} \Big]\,.\nonumber
\end{eqnarray}
This is the form we will use and discuss extensively in the following (the last line in brackets
corresponds to an additional integration by parts, which will prove insightful later on).
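To make the inversion concrete, consider the classic self-gravitating Plummer sphere (a toy model, not one of the Galactic mass models used in this paper) in units $G=M=b=1$: then $\Psi(r)=(1+r^2)^{-1/2}$, $\rho=(3/4\pi)\Psi^5$, and the ergodic DF is known analytically, $f(\mathcal{E})=24\sqrt{2}/(7\pi^3)\,\mathcal{E}^{7/2}$. The minimal sketch below (Python) evaluates \citeeq{eq:eddington_form2} numerically; since $\mathrm{d}\rho/\mathrm{d}\Psi=(15/4\pi)\Psi^4$ vanishes at $\Psi=0$ when $R_{\rm max}\to\infty$, the boundary term $\propto 1/\sqrt{\mathcal{E}}$ drops out in this particular case:

```python
# Eddington inversion for a Plummer sphere in units G = M = b = 1:
# rho(Psi) = (3/4pi) Psi^5, hence d2rho/dPsi2 = (15/pi) Psi^3, and the
# boundary term (d rho/d Psi)_{Psi=0}/sqrt(E) vanishes for R_max -> infinity.
import math

def d2rho_dpsi2(psi):
    return 15.0 / math.pi * psi ** 3

def f_eddington(E, n=400):
    """f(E) = 1/(sqrt(8) pi^2) int_0^E dPsi d2rho/dPsi2 / sqrt(E - Psi).
    The substitution Psi = E - u^2 removes the integrable endpoint
    singularity; Simpson rule over u (n must be even)."""
    h = math.sqrt(E) / n
    s = 0.0
    for i in range(n + 1):
        u = i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        s += w * 2.0 * d2rho_dpsi2(E - u * u)
    s *= h / 3.0
    return s / (math.sqrt(8.0) * math.pi ** 2)

def f_analytic(E):
    """Known Plummer DF: f(E) = 24 sqrt(2)/(7 pi^3) E^(7/2)."""
    return 24.0 * math.sqrt(2.0) / (7.0 * math.pi ** 3) * E ** 3.5
```

A few hundred quadrature points recover the analytic DF at better than the per-mil level; for truncated halos the boundary term must of course be kept, as discussed in \citesec{ssec:rmax}.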
Integrating \citeeq{eq:abel_equation_sec2}, one can reconstruct the density profile from the DF:
\begin{eqnarray}
\rho (\Psi) = \rho(\Psi = 0) + 4 \pi \sqrt{2} \int_{0}^{\Psi} \! \mathrm{d}\mathcal{E}\,
\sqrt{\Psi - \mathcal{E}} \, f(\mathcal{E})\,,
\label{eq:rho_reconstruction}
\end{eqnarray}
where $\rho(\Psi = 0) = \rho(r=R_{\mathrm{max}})$ is the density at the boundary of the system.
This term is very often neglected in the literature, even though it is an important ingredient
to test the self-consistency of the chain of calculations (one must obviously recover the
initial input density profile from integrating the DF). Indeed, the Abel inversion is performed
on $\mathrm{d}\rho/\mathrm{d}\Psi$ rather than on $\rho$. We also emphasize the importance
of the term $\propto 1/\sqrt{\cal E}$ in \citeeq{eq:eddington_form2} to get a consistent
reconstruction of $\rho$ up to the radial boundary $R_{\rm max}$ of the system, except
in the special limit $R_{\rm max}\to \infty$. As a potentially important technical consequence,
the self-consistent normalization of the velocity or speed distributions $f_{\vec{v}/v}$ defined
in \citeeq{eq:v_df} is no longer guaranteed---neglecting the term $\propto 1/\sqrt{\cal E}$
therefore forces one to normalize the distributions $f_{\vec{v}/v}$ by hand. An illustration is
presented in \citefig{fig:radial_cut}, where the dashed curves show the reconstructed profiles
when neglecting $\rho(\Psi = 0)$, the dotted curves further neglect the term of the DF
$\propto 1/\sqrt{\cal E}$, all compared with the fully reconstructed profiles (solid lines).
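The same exercise can be reproduced on a toy model. For a Plummer sphere (units $G=M=b=1$) truncated at $R_{\rm max}$, the relative potential is $\Psi(r)=(1+r^2)^{-1/2}-\Psi_R$ with $\Psi_R\equiv(1+R_{\rm max}^2)^{-1/2}$, so that $\rho=(3/4\pi)(\Psi+\Psi_R)^5$ and both $[\mathrm{d}\rho/\mathrm{d}\Psi]_{\Psi=0}=(15/4\pi)\Psi_R^4$ and $\rho(\Psi=0)$ are non-zero. A minimal Python sketch (toy model, not the McM17 halo) of the reconstruction in \citeeq{eq:rho_reconstruction}, with and without the boundary terms:

```python
# Toy check of the density reconstruction for a Plummer sphere truncated at
# R_max, units G = M = b = 1: Psi(r) = (1+r^2)^(-1/2) - Psi_R and
# rho = (3/4pi)(Psi + Psi_R)^5, with Psi_R = (1+R_max^2)^(-1/2).
import math

R_MAX = 10.0
PSI_R = (1.0 + R_MAX ** 2) ** -0.5

def rho_exact(psi):
    return 3.0 / (4.0 * math.pi) * (psi + PSI_R) ** 5

def simpson(f, a, b, n=200):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# Boundary term of the DF: A/sqrt(E), with A from (d rho/d Psi)_{Psi=0}.
A = 15.0 / (4.0 * math.pi) * PSI_R ** 4 / (math.sqrt(8.0) * math.pi ** 2)

def f_regular(E):
    """Regular part: (1/sqrt(8) pi^2) int_0^E dPsi (15/pi)(Psi+Psi_R)^3
    / sqrt(E - Psi), with the substitution Psi = E - u^2."""
    g = lambda u: 2.0 * 15.0 / math.pi * (E - u * u + PSI_R) ** 3
    return simpson(g, 0.0, math.sqrt(E)) / (math.sqrt(8.0) * math.pi ** 2)

def rho_reconstructed(psi, keep_boundary_terms=True):
    """rho(Psi) = rho(0) + 4 pi sqrt(2) int_0^Psi dE sqrt(Psi - E) f(E).
    The substitution E = Psi sin^2(phi) handles the 1/sqrt(E) piece exactly."""
    a = A if keep_boundary_terms else 0.0
    g = lambda phi: 2.0 * psi * math.cos(phi) ** 2 * (
        a + math.sqrt(psi) * math.sin(phi)
        * f_regular(psi * math.sin(phi) ** 2))
    rho0 = rho_exact(0.0) if keep_boundary_terms else 0.0
    return rho0 + 4.0 * math.pi * math.sqrt(2.0) * simpson(g, 0.0, 0.5 * math.pi)
```

In this toy configuration, dropping both boundary terms loses most of the density at radii close to $R_{\rm max}$, in line with the behavior of the dotted curves in \citefig{fig:radial_cut}.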
\change{As a side remark, note that $\rho$ and $\Psi$ need not be related for the Eddington
inversion to work. For instance, if the system contains DM and baryons, $\rho$ refers to the DM
density, whereas $\Psi = \Psi_{\mathrm{D}} + \Psi_{\mathrm{B}}$ is the total potential. In that case,
$\Psi$ cannot be determined from the sole knowledge of the DM density. That $\rho$ and $\Psi$
can be independent will have consequences in terms of physical self-consistency of the derived
DF, as will be discussed in \citesec{ssec:gamma}.}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.6\linewidth]{{{fig_density_mcmillan_2}}}
\caption{\small
Initial density profiles with different
inner slopes $\gamma=0.25,0.5,1$ (see \citeapp{app:mass_models}) taken from
Ref.~\cite{McMillan2017} and their reconstruction from \citeeq{eq:rho_reconstruction}. The
black circles show the original profiles, the solid lines the full reconstructions based
on \citeeq{eq:rho_reconstruction}, the dashed lines neglect the constant term of
\citeeq{eq:rho_reconstruction}, and the dotted lines neglect both the latter and the
$1/\sqrt{\cal E}$ term in the calculation of the DF
$f({\cal E})$ [see \citeeq{eq:eddington_form2} and \citesec{ssec:rmax}].
}
\label{fig:radial_cut}
\end{figure*}
\subsection{Anisotropic extensions}
\label{ssec:aniso}
When the system features some degree of anisotropy, the density profile and the total
gravitational potential are no longer sufficient to determine the DF because the angular momentum
$\vec{L}$ enters the game, and an ansatz for $f(\mathcal{E},\vec{L})$ is required to account for
the dependence of the DF on these new degrees of freedom---for the spherically symmetric systems
considered here, the phase space is only extended by the modulus $|\vec{L}|=L$. An anisotropic
system is usually characterized in terms of an anisotropy parameter \cite{Binney1980}:
\begin{eqnarray}
\beta(r) = 1 - \dfrac{\sigma_{\theta}^{2} + \sigma_{\phi}^{2}}{2 \sigma_{r}^{2}}\,,
\end{eqnarray}
where $\sigma_{r}$, $\sigma_{\theta}$ and $\sigma_{\phi}$ are the velocity dispersions in spherical
coordinates. If orbits in the system of interest are mostly tangential, we have
$\sigma^{2}_{r}\ll\sigma^{2}_{\theta}+\sigma^{2}_{\phi}$ and $\beta<0,\,|\beta|\gg1$. If orbits are
mostly radial, we get $\sigma^{2}_{r}\gg\sigma^{2}_{\theta}+\sigma^{2}_{\phi}$ and $\beta=1$.
In the following, we describe two simple ans\"atze that provide semi-analytical solutions
from the Abel inversion procedure in the anisotropic case, and briefly discuss more sophisticated
approaches.
\subsubsection{Constant anisotropy}
A simple extension of the Eddington method deals with systems having a constant anisotropy
parameter $\beta(r)=\beta_{0}$.
The simplest ansatz for the DF that separates the effects of energy and angular momentum
takes the following form \cite{Henon1973,KentEtAl1982,BinneyEtAl2008}:
\begin{eqnarray}
f_{\beta_{0}}(\mathcal{E},L) = G(\mathcal{E}) L^{-2\beta_{0}}\,.
\label{eq:df_constant_beta}
\end{eqnarray}
The function $G$ is related to the density profile through
\begin{eqnarray}
\chi \equiv r^{2\beta_{0}}\rho &=& \lambda(\beta_{0})
\int_{0}^{\Psi} \! G(\mathcal{E})\,(\Psi-\mathcal{E})^{\frac{1}{2}-\beta_{0}} \,
\mathrm{d}\mathcal{E}\,,
\label{eq:density_beta}
\end{eqnarray}
where
\begin{eqnarray}
\lambda(\beta_{0}) = 2^{\frac{3}{2}-\beta_{0}}\pi^{\frac{3}{2}}
\frac{\Gamma(1-\beta_{0})}{\Gamma(3/2-\beta_{0})}\,,
\end{eqnarray}
with $\Gamma$ the Gamma function (Euler integral of the second kind).
This leads to the Abel equation
\begin{eqnarray}
\frac{\mathrm{d}^{n}\chi}{\mathrm{d}\Psi^{n}} = \lambda(\beta_{0})\left(\frac{1}{2}-\beta_{0}\right)!
\int_{0}^{\Psi} \! G(\mathcal{E})(\Psi-\mathcal{E})^{\frac{1}{2}-\beta_{0}-n} \, \mathrm{d}\mathcal{E},
\label{eq:abel_eq_constant_beta}
\end{eqnarray}
where
\begin{eqnarray}
& \left(\dfrac{1}{2}-\beta_{0}\right)!
& \equiv \left\{
\begin{array}{ll}
\left( \dfrac{1}{2}-\beta_{0} \right) ... \left( \dfrac{1}{2}-\beta_{0}-(n-1) \right) \ &\mathrm{for}\ \beta_{0}<\dfrac{1}{2} \\
1\ &\mathrm{for}\ \dfrac{1}{2}\leqslant \beta_{0}<1
\end{array}
\right.,
\end{eqnarray}
and
\begin{eqnarray}
n=\left[\frac{3}{2}-\beta_{0}\right]\,,
\label{eq:index_beta}
\end{eqnarray}
with $[x]$ the floor of $x$. The solution of this equation can be expressed as
\begin{eqnarray}
G(\mathcal{E}) =
\dfrac{\sin((n-1/2+\beta_{0})\pi)}{\pi \lambda(\beta_{0})\left(1/2-\beta_{0}\right)!}
\frac{\mathrm{d}}{\mathrm{d}\mathcal{E}}\int_0^{\mathcal{E}} \! \mathrm{d}\Psi\,
\frac{\mathrm{d}^{n}\chi}{\mathrm{d}\Psi^{n}}(\mathcal{E}-\Psi)^{n-3/2+\beta_{0}}\,.
\label{eq:G_constant_beta}
\end{eqnarray}
We note that in the isotropic limit $\beta_{0}\rightarrow0$, the expression of $G$ in
\citeeq{eq:G_constant_beta} boils down to the Eddington DF given in \citeeq{eq:eddington_form1}
as expected.
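Explicitly, for $\beta_{0}=0$ one has $\chi=\rho$, $n=[3/2]=1$, $(1/2-\beta_{0})!=1/2$,
$\lambda(0)=2^{3/2}\pi^{3/2}\,\Gamma(1)/\Gamma(3/2)=4\sqrt{2}\pi$, and
$\sin((n-1/2+\beta_{0})\pi)=\sin(\pi/2)=1$, so that
\begin{eqnarray}
G(\mathcal{E})\big|_{\beta_{0}=0} =
\frac{1}{\pi\,\lambda(0)/2}\,
\frac{\mathrm{d}}{\mathrm{d}\mathcal{E}}\int_0^{\mathcal{E}} \! \mathrm{d}\Psi\,
\frac{\mathrm{d}\rho}{\mathrm{d}\Psi}\,(\mathcal{E}-\Psi)^{-1/2}
= \frac{1}{\sqrt{8}\pi^{2}}\,
\frac{\mathrm{d}}{\mathrm{d}\mathcal{E}}\int_0^{\mathcal{E}} \!
\frac{\mathrm{d}\Psi}{\sqrt{\mathcal{E}-\Psi}}\,
\frac{\mathrm{d}\rho}{\mathrm{d}\Psi}\,,
\end{eqnarray}
since $\pi\,\lambda(0)/2=2\sqrt{2}\pi^{2}=\sqrt{8}\pi^{2}$.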
\change{If $\beta_{0}$ is a half-integer, the integral in Eq.~(\ref{eq:G_constant_beta}) boils down to a derivative \cite{Cuddeford1991}. This allows one to analytically express the DF of any system with a half-integer anisotropy \cite{Evans2005}.}
\subsubsection{Osipkov-Merritt model}
Another extension of the Eddington formalism is the Osipkov-Merritt DF
\cite{Osipkov1979,Merritt1985} which describes a system where the anisotropy parameter is
no longer constant, but takes the following radial dependence:
\begin{eqnarray}
\beta(r) = \frac{r^2}{r^2+r_{\rm a}^2}\,,
\label{eq:beta_om}
\end{eqnarray}
where $r_{\rm a}$ is a free parameter referred to as the anisotropy radius.
This model is isotropic in the inner regions $r\ll r_{\rm a}$, while it exhibits a full
radial anisotropy in regions $r\gg r_{\rm a}$. It cannot describe tangential anisotropy. The full
isotropic case is recovered in the limit $r_{\rm a}\to\infty$. This expression is derived by
assuming that the DF no longer factorizes out its dependence on energy and angular momentum, but
mixes them through a variable $Q$,
\begin{eqnarray}
f(\mathcal{E},L) = f_{\mathrm{OM}}(Q)\,,
\label{eq:df_om}
\end{eqnarray}
where $Q = \mathcal{E} - \dfrac{L^{2}}{2 r_{\mathrm{a}}^{2}}$. By solving
\begin{eqnarray}
\rho(r) = \int{\rm d}^3\vec{v} \, f_{\mathrm{OM}}(Q)\,,
\end{eqnarray}
one readily obtains
\begin{eqnarray}
\rho(r) = \frac{r_{\rm a}^2}{r^2+r_{\rm a}^2}\,\rho_{\rm OM}(r)\,,
\label{eq:density_om}
\end{eqnarray}
where
\begin{eqnarray}
\rho_{\rm OM}(r) = \rho_{\rm OM}\left(\Psi(r) \right) =
4 \pi \sqrt{2} \int_{0}^{\Psi}f_{\rm OM}(Q) \sqrt{\Psi - Q} \, \mathrm{d}Q\,.
\end{eqnarray}
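The steps leading to \citeeq{eq:density_om} can be made explicit: writing $v^{2}=v_{r}^{2}+v_{t}^{2}$
with $L=r\,v_{t}$, one has $Q=\Psi-v_{r}^{2}/2-(1+r^{2}/r_{\rm a}^{2})\,v_{t}^{2}/2$, so that
rescaling the tangential velocity as $\tilde{v}_{t}=(1+r^{2}/r_{\rm a}^{2})^{1/2}\,v_{t}$ gives
\begin{eqnarray}
\int{\rm d}^3\vec{v}\,f_{\rm OM}(Q)
= 2\pi\int {\rm d}v_{r}\int v_{t}\,{\rm d}v_{t}\, f_{\rm OM}(Q)
= \frac{r_{\rm a}^{2}}{r^{2}+r_{\rm a}^{2}}
\int{\rm d}^3\vec{\tilde{v}}\,
f_{\rm OM}\!\left(\Psi-\frac{\tilde{v}^{2}}{2}\right)
= \frac{r_{\rm a}^{2}}{r^{2}+r_{\rm a}^{2}}\,\rho_{\rm OM}(\Psi(r))\,,
\end{eqnarray}
where $\tilde{v}^{2}=v_{r}^{2}+\tilde{v}_{t}^{2}$, since
$v_{t}\,{\rm d}v_{t}=\tilde{v}_{t}\,{\rm d}\tilde{v}_{t}\,r_{\rm a}^{2}/(r^{2}+r_{\rm a}^{2})$.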
The Abel equation is then
\begin{eqnarray}
\frac{\mathrm{d}\rho_{\rm OM}}{\mathrm{d}\Psi} =
\sqrt{8}\pi\int_{0}^{\Psi}\frac{f_{\rm OM}}{\sqrt{\Psi-Q}}\,\mathrm{d}Q\,,
\end{eqnarray}
and its solution
\begin{eqnarray}
f_{\rm OM}(Q) = \dfrac{1}{\sqrt{8}\pi^{2}}
\frac{\mathrm{d}}{\mathrm{d}Q}\int_{0}^{Q}\frac{\mathrm{d}\Psi}{\sqrt{Q-\Psi}}
\frac{\mathrm{d}\rho_{\rm OM}}{\mathrm{d}\Psi}\,.
\label{eq:om_form1}
\end{eqnarray}
The expression of $f_{\rm OM}$ is identical to that of the standard Eddington DF in
\citeeq{eq:eddington_form1} when $Q$ and $\rho_{\rm OM}$ are identified with
$\mathcal{E}$ and $\rho$, respectively (in the isotropic limit $r_{\rm a}\rightarrow\infty$, both
expressions match).
\subsubsection{Other possibilities}
\label{sssec:}
The two methods discussed above are the simplest ones accounting for anisotropy in the
velocity distribution, as they depend only on one free parameter ($\beta_{0}$ or $r_{\rm a}$). Other
DFs involving more free parameters can be found in the literature, such as a straightforward
generalization of both constant anisotropy and Osipkov-Merritt models \cite{Cuddeford1991},
\begin{eqnarray}
f(\mathcal{E},L) = G(Q)L^{-2\beta_{0}}\,.
\end{eqnarray}
Motivated by the anisotropy profiles $\beta(r)$ observed in N-body simulations, some authors have
also considered linear combinations of the constant anisotropy DF and the Osipkov-Merritt
DF \cite{BozorgniaEtAl2013}
\begin{eqnarray}
f(\mathcal{E},L) = w f_{\rm OM}(Q)+(1-w)G(\mathcal{E})L^{-2\beta_{0}}\,,
\end{eqnarray}
while others have looked at different functional forms \cite{WojtakEtAl2008}:
\begin{eqnarray}
f(\mathcal{E},L) = F(\mathcal{E})
\left(1+\frac{L^{2}}{2L_{0}^{2}}\right)^{-\beta_{\infty}+\beta_{0}}L^{-2\beta_{0}}\,.
\end{eqnarray}
Models of Refs.~\cite{BozorgniaEtAl2013,WojtakEtAl2008} both contain a set of three free parameters
($\{w,r_{\rm a},\beta_{0}\}$ or $\{L_{0},\beta_{0},\beta_{\infty}\}$) calibrated on simulations.
Irrespective of the different proposals to cope with anisotropy in the DM velocity field,
we stress that the anisotropy itself is still poorly constrained by kinematic observations
of visible matter.
\subsection{Beyond spherical symmetry}
\label{ssec:axisymmetry}
\change{In this study, we will not go beyond spherical symmetry except to approximately integrate
the effects of some non-spherical components like the baryonic bulge and disks [see
\citeeq{eq:mass_axisymm} and discussion below]. Here, for the sake of completeness, we just
review some more involved theoretical methods that can be used to cope with this delicate
problem}.
When dealing with a system that is not spherically symmetric, the energy and the angular momentum
might not be the most convenient variables to work with. The authors of
Refs.~\cite{BinneyEtAl2008,SandersEtAl2016} promote instead the \textit{angle-action}
variables as a phase-space coordinate system. The components of the action vector $\vec{J}$ are
integrals of motion and the angle vector $\vec{\Theta}$ is the Hamiltonian conjugate of $\vec{J}$.
A crucial property of the actions is their constancy in a slowly varying potential. In such a
potential, a DF of the form $f(\vec{J})$ is then also a constant. This property was used
as a starting point in Ref.~\cite{PifflEtAl2015} to compute a phase-space model
of the Milky Way, assuming baryons are slowly accreted onto an initially spherical dark halo. This
led, in this theoretical framework, to the exclusion of an adiabatic compression of the
dark halo \cite{BinneyEtAl2015}, favoring instead heating at its center and the presence of a
$\sim$2 kpc core \cite{ColeEtAl2017} in agreement with a detailed study of the bar/bulge dynamics
\cite{PortailEtAl2017}.
The philosophy behind this technique is opposite to Eddington's since here the starting point is
the DF, from which the potential is computed through an iterative procedure, while in the Eddington
case one starts with the potential and the density and derives the DF from there. Just like there
is a lot of freedom when choosing the functional form of the DF in the anisotropic extensions of the
Eddington inversion method, there is also some freedom in choosing the functional form of the
action-dependent DF. Assumptions must therefore be made on its dependency with respect to each
action and this may introduce theoretical uncertainties in the calculation which are difficult to
evaluate. Nevertheless, this method constitutes the state of the art of Galactic phase-space
modeling and it captures details beyond the reach of the Eddington formalism.
This level of detail might not be required in the context of DM searches though, as one is mostly
interested in evaluating the astrophysical uncertainties relevant to complementary
observables of interest in a self-consistent framework. The Eddington formalism actually provides
such a framework, while being in practice more flexible than the angle-action approach. Moreover,
global dynamical constraints are easier to account for in the Eddington approach from a technical
point of view. However, as we will show in the following, the Eddington inversion is not a
self-regulated approach, as it does not prevent one from obtaining unstable or ill-defined phase-space
configurations, whereas action-angle methods are {\em a priori} immune to these defects. It is therefore
important to delineate as rigorously as possible the domain of application of the Eddington
inversion. Ultimately, more systematic comparisons with action-angle methods should help further
reduce the theoretical uncertainties and provide complementary understanding of the potential
failures of the Eddington inversion, but this goes beyond the scope of this paper.
\section{Some issues of the Eddington formalism}
\label{sec:issues}
In this section, we discuss in detail two issues that we have identified in the Eddington
inversion method, and which have been overlooked in the literature focused on DM searches.
The first one concerns the impact of the spatial boundary of the dark halo, which is usually
neglected even though this leads to theoretical inconsistencies and potentially to a mistreatment
of the tail of the DM velocity distribution. The second one is related to the fact that some
perfectly licit DM-baryon configurations may actually lead to unstable DFs.
We recall that the main benefit of the Eddington formalism (including its anisotropic
extensions) in the context of DM searches is precisely to provide a self-consistent and
constrained framework to compute both density-dependent and velocity-dependent observables.
A noticeable strength is to be able to use a kinematically constrained Galactic mass model and
self-consistently propagate the associated uncertainties to the DM observables. However, the two
issues mentioned above and further detailed in this section jeopardize this possibility.
In the following, all concrete calculations of the DFs will be made using the best-fit
Galactic mass models of Ref.~\cite{McMillan2017} (McM17 models henceforth), unless specified
otherwise. The nominal model features an NFW DM halo and a baryonic component
made of a stellar bulge, stellar disks, and gaseous disks, all of these components being
constrained from recent kinematic data. The parameters of these models are summarized in
\citeapp{app:mass_models}.
\subsection{Radial-boundary-induced divergence, the escape speed, and some regularization
procedures}
\label{ssec:rmax}
\subsubsection{Characterization of the spatial-boundary-induced divergence}
\label{sssec:div_rmax}
A generic issue with the Eddington DF is the presence of a divergence in the limit
$\mathcal{E}\to 0$ due to the term
$(\mathrm{d}\rho/\mathrm{d}\Psi)_{\Psi=0}\times 1/\sqrt{\mathcal{E}}$ present in
\citeeq{eq:eddington_form2}. This derivative is evaluated at the radial boundary of the
system and does not vanish for conventional halo profiles, unless the boundary is sent
to infinity. However, this boundary must be finite, if only because of the presence of neighboring
galaxies. This is precisely what allows for a realistic interpretation of the escape speed,
which has some impact on \textit{e.g.}\ direct searches for low-mass WIMPs \cite{LavalleEtAl2015}.
This diverging term $\propto 1/\sqrt{\mathcal{E}}$ is actually very often dropped without
thorough justification. \change{However, this jeopardizes the self-consistency of the approach, since the
reconstructed DM density profile then significantly departs from the initial one, unless one is
interested in describing only the inner parts of the Galaxy (see \citefig{fig:radial_cut} and the
green curve in Fig.~\ref{fig:reconstructed_rho_king}, as well as a more extended discussion on the
density profile in \citesec{sssec:reg_dens}). More specifically, the reconstructed density differs
from the initial one by $\sim 10\%$ above $0.1 R_{\mathrm{max}}$, i.e.~$\sim 2 r_{\mathrm{s}}$, and the
difference increases even more at larger radii. Even if these numbers do not look dramatic,
they still undermine the appealing aspects of this framework as a consistent and global framework
for DM-signal predictions, as one loses control over the input mass-model uncertainties}. On the
other hand, sending the boundary to infinity spoils control on the tail of the velocity
distribution.
Since the speed distribution $f_{v}(v,r)$ is directly related to the DF through \citeeq{eq:v_df},
the divergence when $\mathcal{E}\to 0$ translates into a divergence in velocity space when
$v^2\to 2\Psi(r)$, \textit{i.e.}\ at the escape speed and at \textit{any} position in the system.
In \citefig{fig:divergence_vesc} (left panel), we illustrate this divergence in the speed
distribution evaluated at a radius $r=20\,\rm kpc$ (solid red line) in our default halo model
with a radial extension set to $R_{\rm max}=500\,\rm kpc$. This divergence is the sign that the
system under consideration is artificially compressed in phase space. A population of
particles is squeezed near the escape speed, whereas we would expect a stable DF
to satisfy $f(\mathcal{E}\to0)\to 0$. The right panel of \citefig{fig:divergence_vesc} shows
the pathological DF $f(\mathcal{E})$ as a function of $\mathcal{E}$ (solid red curve), where
the divergence occurs at $\mathcal{E}\to 0$.
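The way this divergence propagates into velocity space can be reproduced with a few lines of Python. The sketch below uses a toy DF with an ad hoc $1/\sqrt{\mathcal{E}}$ tail; the coefficients are arbitrary and are not taken from our halo model:

```python
import numpy as np

# Toy isotropic DF with a boundary-induced divergent tail (coefficients are
# arbitrary, for illustration only): f(E) = E^{3/2} + 0.05/sqrt(E).
# We evaluate at a fixed position where psi = 1, so v_esc = sqrt(2*psi).
psi = 1.0
f = lambda E: E**1.5 + 0.05 / np.sqrt(E)

def f_v(v):
    """Speed distribution, f_v(v) ~ 4 pi v^2 f(psi - v^2/2), up to normalization."""
    E = psi - 0.5 * v**2
    return 4.0 * np.pi * v**2 * f(E)

v_esc = np.sqrt(2.0 * psi)
# f_v grows without bound as v -> v_esc because of the 1/sqrt(E) term in f:
for frac in (0.9, 0.999, 0.99999):
    print(frac, f_v(frac * v_esc))
```

The printed values increase without bound as $v\to v_{\rm esc}$, qualitatively mirroring the pile-up of particles near the escape speed discussed above.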
This divergence is present whenever the derivative $(\mathrm{d}\rho/\mathrm{d}\Psi)_{\Psi=0}$ is
non-zero, which is always the case for conventional halo profiles with finite boundaries unless
one modifies the asymptotic behavior at the boundaries. This issue is therefore intimately
related to the spatial extension of the system, since the troublesome derivative is evaluated
at $\Psi=0$ (equivalently $r=R_{\rm max}$). The gravitational potential being defined up to a
constant, the position $r=R_{\rm max}$ where $\Psi$ vanishes is a matter of choice. For example,
taking $R_{\rm max}\to\infty$ solves the issue and the DF satisfies $f(\mathcal{E}\to 0)\to 0$,
as shown by the blue solid curve in the right panel of \citefig{fig:divergence_vesc}.
This actually matches with the boundary condition of having the gravitational potential
$\psi(r)=-\phi(r) \to 0$ as $r\to\infty$ when solving the Poisson equation. The speed
distribution is then regularized---see the blue solid curve in the left panel of
\citefig{fig:divergence_vesc}. The DF obtained for this idealized---though
unrealistic---choice of $R_{\mathrm{max}}$ is fully consistent with the mass model and is a solution
of the collisionless Boltzmann equation by construction.
This leads to the following interpretation of the divergence showing up at finite radial
extensions: particles that could have probed infinite distances in agreement with the
conventional infinite boundary condition are now prevented from radially escaping the system and
have their phase space compressed accordingly.
However, choosing $R_{\rm max}\to\infty$ is physically problematic in this context. DM halos
always have a finite extension due to the gravitational influence of other neighboring halos
(like the dark halo of M31 in the case of our Galaxy), or the host halo if the system under
consideration is a subhalo. Taking this finite extension into account is crucial for DM searches
as it fixes the definition of the escape speed of the system
\begin{eqnarray}
v_{\rm esc}(r) = \sqrt{2(\phi(R_{\rm max})-\phi(r))}\,.
\end{eqnarray}
The value of the escape speed at the position of the Solar System is for instance a major
ingredient when making predictions for direct WIMP searches in the low-mass region
\cite{LavalleEtAl2015}. The escape speed is also a target observable that can be inferred
from stellar surveys \cite{PifflEtAl2014a,Herzog-ArbeitmanEtAl2018}. Finally, in the particular
case of the MW, the closest neighbor is the Andromeda galaxy which is about $800\,\rm kpc$ away
from the Galactic center. Consequently the Galactic halo cannot extend much farther than
$R_{\rm max}\sim 500\,\rm kpc$, which we take as our reference value from now on.%
\footnote{Note that $R_{\rm max}$ is almost twice as large as the estimated virial radius
$R_{200}\sim 250\,\rm kpc$.}
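The sensitivity of the escape speed to $R_{\rm max}$ is easy to quantify numerically. The following Python sketch uses a DM-only NFW potential with illustrative parameters (of the order of typical Milky Way fits, not our actual mass model, and with baryons ignored, so absolute values differ from the full model):

```python
import numpy as np

G = 4.30091e-6                 # Newton's constant in kpc (km/s)^2 / Msun
rho_s, r_s = 8.5e6, 19.6       # illustrative NFW parameters (Msun/kpc^3, kpc)

def phi_nfw(r):
    """Gravitational potential of an (untruncated) NFW halo."""
    return -4.0 * np.pi * G * rho_s * r_s**3 * np.log(1.0 + r / r_s) / r

def v_esc(r, R_max):
    """Escape speed for a halo truncated at R_max: sqrt(2*(phi(R_max)-phi(r)))."""
    return np.sqrt(2.0 * (phi_nfw(R_max) - phi_nfw(r)))

v500 = v_esc(8.0, 500.0)               # reference boundary R_max = 500 kpc
v_inf = np.sqrt(-2.0 * phi_nfw(8.0))   # R_max -> infinity limit
print(v500, v_inf, (v_inf - v500) / v500)
```

With these numbers the escape speed at $r=8\,\rm kpc$ changes by roughly 10\% between $R_{\rm max}=500\,\rm kpc$ and an infinite boundary, in line with the trend discussed below.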
In the left panel of \citefig{fig:divergence_vesc_OM}, we compute the relative change in the
escape speed when increasing the value of $R_{\rm max}$. One can see that the escape speed at
$r=8\,\rm kpc$ increases by up to 10\% ($\sim 50\,\rm km/s$) when the radial boundary moves further out.
The relative increase is larger at positions farther from the center of the halo, although the
absolute change is smaller there. It is therefore important to be as consistent as possible
when relating the concept of escape speed to the phase-space DF.
The discussion above focused on the isotropic case, but the situation is very similar in the
anisotropic case with a constant anisotropy parameter $\beta$. Sending $R_{\rm max}$ to infinity
removes the diverging term in $G(\mathcal{E})$ [see \citeeq{eq:df_constant_beta}], and the DF is
regularized at the cost of changing the escape speed. However, the situation is different in the
Osipkov-Merritt case, as the troublesome derivative is
$(\mathrm{d}\rho_{\rm OM}/\mathrm{d}\Psi)_{\Psi=0}$ with $\rho_{\rm OM}$ defined in
\citeeq{eq:density_om}. One can check that if the density behaves as a power-law at
large radii $\rho\propto r^{-b}$---as is almost always the case---then
$\mathrm{d}\rho_{\rm OM}/\mathrm{d}r\propto r^{1-b}$ and $\mathrm{d}\Psi/\mathrm{d}r\propto r^{1-b}$,
meaning that the derivative $(\mathrm{d}\rho_{\rm OM}/\mathrm{d}\Psi)_{\Psi=0}$ goes to a constant
as $R_{\rm max}$ goes to infinity. Consequently, in the Osipkov-Merritt model, not only
does considering an infinite system affect the escape speed, but it also does not remove the
phase-space divergence, as illustrated in the right panel of \citefig{fig:divergence_vesc_OM}.
Moreover, for
this model the divergence in the speed distribution (Eq.~\ref{eq:v_df}) does not
occur at $v_{\mathrm{esc}}$ but appears in the peak of the distribution due to the angular integral.
This makes it more difficult to regularize the DF.
In the following, we discuss different ways of getting rid of this divergence in order to
obtain physically viable solutions.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.500\linewidth]{{{fig_speed_distribution_20kpc_divergence}}}
\includegraphics[width=0.49\linewidth]{{{fig_f_eps_tail_v2}}}
\caption{\small \textbf{Left panel:} Velocity distribution functions (i) for the NFW profile of
Ref.~\cite{McMillan2017} at a radius of 20 kpc, for three situations regarding the status of the
divergence at the escape velocity; and (ii) for the regularization \`a la King given
in \citeeq{eq:king_model}. The red, blue, green, and yellow lines represent the DFs obtained
by keeping the divergence, sending the boundary of the system to infinity, removing the
divergence, and using the regularization \`a la King, respectively.
\textbf{Right panel:} Corresponding DFs as a function of the energy
${\cal E}$. DFs are in units of $\rho_{\rm s}(4\,\pi\,G_{\rm N}\,\rho_{\rm s}\,r_{\rm s}^2)^{-3/2}$.}
\label{fig:divergence_vesc}
\end{figure*}
\begin{figure}[!t]
\centering
\includegraphics[width=0.485\linewidth]{{{fig_err_vesc_rmax}}}
\includegraphics[width=0.505\linewidth]{{{fig_fv_OM_20kpc_divergence_treatment}}}
\caption{\small \textbf{Left panel:} Relative variation of the escape velocity at a position $r$
as the system's boundary $R_{\rm max}$ is modified (the reference value is set to
$R_{\rm max}=500\,\rm kpc$). \textbf{Right panel:} Same as left panel of
\citefig{fig:divergence_vesc}, for the Osipkov-Merritt model.}
\label{fig:divergence_vesc_OM}
\end{figure}
\subsubsection{Regularization through the density profile}
\label{sssec:reg_dens}
The simplest solution to the boundary-induced divergence is to slightly modify the
DM density profile in such a way that it remains consistent with the kinematic
constraints from which it was derived, while $(\mathrm{d}\rho/\mathrm{d}r)_{r=R_{\rm max}}$
vanishes---this is not the case with standard NFW, $\alpha\beta\gamma$, or Einasto profiles. If
such a solution exists, then the Eddington formalism fully applies and provides a self-consistent
description of the phase space up to the spatial boundary of the DM halo.
Therefore, since the bulk of the kinematic constraints pertains to the inner 50 kpc of the
Galaxy \cite{McMillan2017}, we need to make sure that both the input DM mass profile and
gravitational potential are not affected in this range. The kinematics of satellite galaxies can
also be used to constrain the MW mass within $\sim 300$ kpc with larger uncertainties
(see \textit{e.g.}\ Ref.~\cite{WatkinsEtAl2010}), but not farther, so the modified mass profile should not
depart too much from the initial one and remain consistent with these bounds.
Modifications of standard functional forms of the density profile can be found in the literature.
For instance, the authors of Ref.~\cite{SpringelEtAl1999} (see also
Ref.~\cite{KazantzidisEtAl2004}) account for tidal stripping of the outer parts of a halo with an
exponential suppression. However, this modified profile has
$(\mathrm{d}\rho/\mathrm{d}\Psi)_{\Psi=0}\neq0$ and therefore leads to a diverging DF once we set
a radial boundary. Instead, we propose the following alternative density profile to model the
halo:
\begin{eqnarray}
\tilde{\rho} = \rho-\Psi_{\rm D}\left(\frac{\mathrm{d}\rho}{\mathrm{d}\Psi_{\rm D}}\right)_{\Psi_{\rm D}=0}.
\label{eq:alt_density}
\end{eqnarray}
The corresponding DM component, the gravitational potential of which is $\Psi_{\rm D}$, is
consistently obtained from the Poisson equation (with the vanishing condition at the radial
boundary $R_{\rm max}$), which then reduces to
\begin{eqnarray}
\label{eq:alt_potential}
\tilde{\Psi}_{\rm D}(r) &=& G\int_r^{R_{\rm max}}dr'\,\frac{\tilde{m}_{\rm D}(r')}{r'^2}\\
{\rm where}\;\tilde{m}_{\rm D}(r) &=&4\,\pi\int_0^r dr'\,r'^2\,\tilde{\rho}(r')\,.\nonumber
\end{eqnarray}
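A minimal numerical sketch of this prescription, for an isotropic, DM-only NFW halo with illustrative parameters (not the McM17 fit), is:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

G = 4.30091e-6                          # kpc (km/s)^2 / Msun
rho_s, r_s, R_max = 8.5e6, 19.6, 500.0  # illustrative NFW parameters

r = np.linspace(0.5, R_max, 20000)
x = r / r_s
rho = rho_s / (x * (1.0 + x)**2)                                # NFW density
drho_dr = -rho_s * (1.0 + 3.0 * x) / (x**2 * (1.0 + x)**3) / r_s
m = 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))

# Psi_D(r) = G int_r^Rmax m(r')/r'^2 dr', with Psi_D(R_max) = 0
I = cumulative_trapezoid(G * m / r**2, r, initial=0.0)
psi_d = I[-1] - I

# boundary derivative (d rho / d Psi_D) at the edge, using dPsi_D/dr = -G m/r^2
C = drho_dr[-1] / (-G * m[-1] / R_max**2)

# modified profile: rho_tilde = rho - Psi_D * (d rho / d Psi_D)|_boundary
rho_mod = rho - psi_d * C
deficit = 1.0 - rho_mod / rho   # relative suppression of the density
```

With these numbers the inner profile is modified only at the per-mille level near the Solar radius, while the suppression reaches a few tens of percent around $0.8\,R_{\rm max}$, in line with the discussion of \citefig{fig:reconstructed_rho}.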
This potential is the one to be used in the Eddington inversion along with the modified
density profile $\tilde{\rho}$ (the baryonic component is left unchanged). That new
density profile $\tilde{\rho}$, defined
in \citeeq{eq:alt_density}, flattens at the edge of the system, \textit{i.e.}\
\begin{eqnarray}
(\mathrm{d}\tilde{\rho}/\mathrm{d}\tilde{\Psi})_{\tilde{\Psi}=0} =
[(\mathrm{d}\tilde{\rho}/\mathrm{d}r)
(\mathrm{d}\tilde{\Psi}/\mathrm{d}r)^{-1}]_{r = R_{\mathrm{max}}} = 0\,.\nonumber
\end{eqnarray}
This flattening at $r\to R_{\rm max}$ can be thought of as the border with the
homogeneous background or with neighboring self-gravitating systems (though physical
space cannot be filled up with spheres). This functional form is actually guided by the
reconstructed density profile obtained when removing the divergence by hand, as discussed in
\citesec{sec:reg_div_phase_space}.
This prescription needs slight modifications when dealing with anisotropic systems since the
diverging term takes a different form in that case. In the constant-$\beta$ case this term is
proportional to $(\mathrm{d}\chi/\mathrm{d}\Psi)_{\Psi=0}$, where $\chi=r^{2\beta_{0}}\rho$; we
therefore propose the following profile:
\begin{eqnarray}
\tilde{\rho} = \rho-\frac{\Psi_{\rm D}}{r^{2\beta_{0}}}
\left(\frac{\mathrm{d}^{n}\chi}{\mathrm{d}\Psi_{\rm D}^{n}}\right)_{\Psi_{\rm D}=0}\,,
\label{eq:alt_density_beta}
\end{eqnarray}
where $\chi$ is defined in \citeeq{eq:density_beta} and $n$ is given in \citeeq{eq:index_beta}.
For the Osipkov-Merritt models, we propose
\begin{eqnarray}
\tilde{\rho} = \rho - \frac{\Psi_{\rm D}}{1+r^{2}/r_{\rm a}^{2}}\left(\frac{\mathrm{d}\rho_{\rm OM}}{\mathrm{d}\Psi_{\rm D}}\right)_{\Psi_{\rm D}=0}\,.
\label{eq:alt_density_om}
\end{eqnarray}
In these two cases, the gravitational potential is consistently calculated from
\citeeq{eq:alt_potential}. Note that in the anisotropic case, the modified profile
depends on the anisotropy variable ($\beta_{0}$ or $r_{\rm a}$) specific to the model.
The modified density and mass are compared to the original ones in
\citefig{fig:reconstructed_rho}.
The modified density differs from the original one in the outer part of the halo, and
underestimates the original profile by up to 40\% at $\sim 0.8\,R_{\rm max}$ (30\% at $R_{200}$)
in the isotropic case, which translates into a difference in the mass of only 15\% at
$r=R_{\rm max}$ (10\% at $R_{200}$). The inner, dynamically constrained part of the profile is
therefore kept
mostly unchanged by the prescription when the system is isotropic. The introduction of a
constant anisotropy causes a departure from the isotropic result in a systematic way that
depends on the sign of $\beta_{0}$. The difference in density and mass is higher in the
$\beta_{0}>0$ case than in the isotropic and $\beta_{0}<0$ cases, which is consistent with the
expectation for more radial orbits. The difference for Osipkov-Merritt models is even larger:
the mass difference reaches 50\% at $R_{200}$. These differences can be understood in terms of the
anisotropy of the system. Our prescription removes matter at the edge of the halo to flatten the
density profile at $R_{\rm max}$. However, if the particles at $R_{\rm max}$ are mostly on radial
orbits, as is the case in the Osipkov-Merritt models where $\beta(R_{\rm max})\simeq 1$, they also
contribute to the density in the inner part of the halo. Therefore in a system with a high
positive $\beta$, removing matter in the outskirts also removes matter in the inner regions.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.495\linewidth]{{{fig_reconstructed_density}}}
\includegraphics[width=0.495\linewidth]{{{fig_reconstructed_mass}}}
\caption{\small \textbf{Left panel:} Density profile $\rho$ in the standard NFW case (black line)
compared to the modified profiles for the isotropic, Osipkov-Merritt, and $\beta = \pm 0.3$ cases (red,
blue, green and magenta, respectively) defined in Eqs.~(\ref{eq:alt_density}),
(\ref{eq:alt_density_beta}), (\ref{eq:alt_density_om}). The bottom panel shows the relative
difference between the original and the modified profile. \textbf{Right panel:} Same as for left
panel, for the mass profile.}
\label{fig:reconstructed_rho}
\end{figure*}
To summarize, we found that slight modifications of the density profile are enough
to get rid of the boundary-induced divergence in the isotropic case and in the case of
tangential anisotropy, while keeping the overall mass model consistent with the constrained
initial configuration. Indeed, the error induced on the Galactic mass at large radii is of order
$\sim 10\%$ in these cases. Therefore, such modifications preserve the self-consistency of the
formalism, and do not affect the density- and velocity-dependent observables related to
DM searches calculated in the inner $\sim$50-100 kpc of the MW. Errors of
$\gtrsim 10\%$ are expected in the outskirts, but should be negligible \textit{e.g.}\ when integrated
over the line of sight, like in the case of $p$-wave suppressed or Sommerfeld-enhanced
annihilation calculations. On the other hand, this regularization procedure fails in the
case of significant radial anisotropy, like in the Osipkov-Merritt model, unless the
anisotropy radius is taken very large (isotropy limit).
\subsubsection{Regularization through the phase-space distribution}
\label{sec:reg_div_phase_space}
The problem of the spatial boundaries in self-gravitating systems is rather classical
when one takes the DF as the fundamental characterizing function. A well-known example
is the so-called King model \cite{Woolley1954,WoolleyEtAl1956,King1962a,King1966,Michie1963,MichieEtAl1963}, meant to consistently describe bounded pseudo-isothermal systems (and applied to
globular clusters).
For the boundary-induced divergence at stake in our study, one can apply a similar procedure
by (i) cutting the non-physical diverging term in the phase-space DF $f({\cal E})$; (ii)
numerically deriving the modified gravitational potential from the Poisson equation fed by
the new DF
and appropriate boundary conditions (this is an important step which also defines the new mapping
between the potential and the radial coordinate); and (iii) integrating the new DF to get the
modified density profile. Although much more involved than the regularization through the
density presented in \citesec{sssec:reg_dens}, this method ensures a well-behaved
solution consistent with both the Boltzmann equation and the Poisson equation. It is
particularly well-suited to describe bounded systems like galaxies \cite{WidrowEtAl2005}, and
also to account for tidal effects induced by either neighboring systems like dwarf galaxies
\cite{DrakosEtAl2017,StrigariEtAl2017,PetacEtAl2018}, or hosted systems, like DM subhalos
\cite{BerezinskyEtAl2014,StrefEtAl2017}. In the present context, one still needs
to make sure that the modified density profile does not depart too much from the initial
density profile, at least within the inner 50-100 kpc of the MW, so as not to spoil its
consistency with kinematic data.
Before inspecting possible ways of cutting the initial DF $f({\cal E})$, let us review
the full chain of calculations. Let us call $F(\tilde{\cal E})$ the modified DF after
truncation, where
\begin{eqnarray}
\tilde{\cal E} = \tilde{\Psi} - \frac{\tilde{v}^2}{2}
\end{eqnarray}
is the new energy associated with the system, $\tilde{\Psi}$ the new potential, and $\tilde{v}$
the new velocity coordinate. A priori, tilde quantities are different from non-tilde quantities
that pertain to the initial configuration. However, since $F(\tilde{\cal E})$ is known (inferred
from a modification of $f({\cal E})$ that we shall discuss later), we can fully determine the
DM component of the gravitational potential from the Poisson equation
\begin{eqnarray}
\Delta\tilde{\Psi}_{\rm D} &=& -4\pi G_{\rm N}\,\tilde{\rho}(\tilde{\Psi})
= -4\pi G_{\rm N}\int {\rm d}^3\vec{\tilde{v}}\,F(\tilde{\cal E})\nonumber\\
&=& -4\pi G_{\rm N}\,
\left[\tilde{\rho}_0+
4\pi\sqrt{2}\int_{0}^{\tilde{\Psi}}\sqrt{\tilde{\Psi}-\mathcal{E}}\,
F(\mathcal{E})\,\mathrm{d}\mathcal{E}\right]\,,
\label{eq:poisson_tilde}
\end{eqnarray}
where, although the density profile $\tilde{\rho}$ is still undetermined, it is accessed through
the integral of the DF over the potential. Note that $\tilde{\Psi}=\tilde{\Psi}_{\rm D}+
\tilde{\Psi}_{\rm B}$, and that only the DM component is modified, such that
we actually take $\tilde{\Psi}_{\rm B}=\Psi_{\rm B}$. An important point here is that the mapping
between the radial coordinate and $\tilde{\Psi}$ is only defined through the Laplacian operator
$\Delta$ on the left-hand side, not on the right-hand side. Therefore, one needs appropriate
boundary conditions to solve this equation consistently with the physical system at hand.
In the present context, we are in principle forced to demand that $\tilde{\Psi}(R_{\rm max})=0$,
and since we do not want a significant departure from the initial potential in the inner parts of
the Galaxy, we further impose that $\mathrm{d}\tilde{\Psi}/\mathrm{d}r(0) =
\mathrm{d}\Psi/\mathrm{d}r(0)$. Besides, note that we allow for the presence of a constant
$\tilde{\rho}_0$ in the above equation, which is a free parameter and cannot be
recovered from the equation itself. This freedom in choosing the value of the density at the
boundary of the system is inherent to the Eddington formalism as previously seen in
\citeeq{eq:rho_reconstruction}---note that it can be neglected here as the density profile is no
longer an input in the regularization procedure, but an output. Finally, we stress that the above
differential equation has to be solved numerically.
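As a simple illustration of this numerical step, consider the polytropic ansatz $F(\tilde{\cal E})\propto \tilde{\cal E}^{3/2}$ (a stand-in, not one of the ans\"atze discussed below): the velocity integral then gives $\tilde{\rho}\propto\tilde{\Psi}^{3}$, and in dimensionless variables \citeeq{eq:poisson_tilde} reduces to the Lane-Emden equation of index $n=3$, which a standard ODE solver handles directly:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lane-Emden equation of index n = 3: theta'' + (2/xi) theta' = -theta^3,
# the dimensionless spherical Poisson equation for rho ~ psi^3
# (theta = psi/psi(0), xi = dimensionless radius).
def lane_emden(xi, y, n=3.0):
    theta, dtheta = y
    return [dtheta, -np.maximum(theta, 0.0)**n - 2.0 * dtheta / xi]

surface = lambda xi, y: y[0]   # theta = 0 marks the system's boundary
surface.terminal = True

sol = solve_ivp(lane_emden, [1e-8, 20.0], [1.0, 0.0],
                events=surface, rtol=1e-10, atol=1e-12)
xi_1 = sol.t_events[0][0]      # first zero of the potential
print(xi_1)
```

The integration terminates at the first zero of the potential, $\xi_{1}\simeq 6.897$, which plays the role of the boundary radius of the system.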
We now discuss some possible forms for the modified DF $F$, which are to be considered as
ans\"atze aimed at recovering the non-diverging part of the initial DF while ensuring that
$F(\tilde{\cal E}\to 0)\to 0$.
We first consider the Eddington DF computed for a finite system with radial extension
$R_{\rm max}$. The initial DF is given in \citeeq{eq:eddington_form2} and diverges as
$\mathcal{E}\to0$. One way of modifying that DF to get a well-behaved distribution is simply to
remove the diverging term $\propto 1/\sqrt{\mathcal{E}}$. The modified DF is then
\begin{eqnarray}
F(\tilde{\cal E}) &=& f(\tilde{\cal E})-\frac{1}{\sqrt{8}\pi^{2}}
\frac{1}{\sqrt{\tilde{\cal E}}}\left(\frac{\mathrm{d}\rho}{\mathrm{d}\Psi}\right)_{\Psi=0}\nonumber\\
&=& \dfrac{1}{\sqrt{8}\pi^{2}} \int_{0}^{\tilde{\cal E}} \!
\dfrac{\mathrm{d}^{2}\rho}{\mathrm{d}\Psi^{2}} \,
\dfrac{\mathrm{d}\Psi}{\sqrt{\tilde{\cal E} - \Psi}}\,.
\label{eq:df_div_removed}
\end{eqnarray}
Such an ansatz makes sense only if $\tilde{\cal E}$ spans the same range as ${\cal E}$ (note
that $\rho$ and $\Psi$ are non-tilde quantities). This is possible only if the condition
$\tilde{\Psi}_{\rm max}=\tilde{\Psi}(r=0)=\Psi(r=0)$ is obeyed, which is in contradiction with
the presumed boundary condition to solve \citeeq{eq:poisson_tilde}, \textit{i.e.}\
$\tilde{\Psi}(R_{\rm max})=0$. The latter condition must therefore be traded for the former in that
case, and the spatial boundary of the system is no longer $R_{\rm max}$, but a new
$\tilde{R}_{\rm max}$. In practice, though, we find that $\tilde{R}_{\rm max} \approx R_{\rm max}$,
such that the above ansatz can still be applied.
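The effect of this subtraction can be checked on a toy density-potential pair $\rho(\Psi)=a\Psi+b\Psi^{2}$ (hypothetical, chosen so that both terms of the DF are analytic): the regularized piece behaves as $4b\sqrt{\tilde{\cal E}}$ and vanishes at the boundary, while the full DF keeps the $a/\sqrt{\tilde{\cal E}}$ divergence. A short Python check:

```python
import numpy as np
from scipy.integrate import quad

NORM = np.sqrt(8.0) * np.pi**2

# Toy density-potential relation (hypothetical): rho(Psi) = a*Psi + b*Psi^2,
# so (d rho / d Psi)|_{Psi=0} = a and d2 rho / d Psi2 = 2b.
a, b = 1.0, 0.5
d2rho = lambda psi: 2.0 * b

def F_reg(E):
    """Divergence-free DF: the integral term only, with Psi = E - t^2
    substituted to remove the integrable 1/sqrt(E - Psi) singularity."""
    val, _ = quad(lambda t: 2.0 * d2rho(E - t**2), 0.0, np.sqrt(E))
    return val / NORM

def f_full(E):
    """Full Eddington DF, keeping the diverging boundary term a/sqrt(E)."""
    return F_reg(E) + a / (np.sqrt(E) * NORM)

# F_reg(E) -> 0 as E -> 0 while f_full diverges:
print(F_reg(1e-6), f_full(1e-6))
```
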
Since in the initial DF the diverging term becomes important only as $\mathcal{E}\to 0$, we expect
the modified potential to remain close to the original one, which allows us to estimate the modified
density from the Abel equation
\begin{eqnarray}
\frac{\mathrm{d}\tilde{\rho}}{\mathrm{d}\tilde{\Psi}} =
\sqrt{8}\pi\int_{0}^{\tilde{\Psi}}\frac{F(\mathcal{E})}{\sqrt{\tilde{\Psi}-\mathcal{E}}}\,
\mathrm{d}\mathcal{E}\,.
\label{eq:abel2}
\end{eqnarray}
Assuming $\tilde{\Psi}\approx\Psi$ and $\tilde{\rho}(\tilde{\Psi}=0)\approx\rho(\Psi=0)$, we get
\begin{eqnarray}
\tilde{\rho} \approx \rho-\Psi\left(\frac{\mathrm{d}\rho}{\mathrm{d}\Psi}\right)_{\Psi=0}\,.
\end{eqnarray}
The modification of the constant-$\beta$ DF is very similar, except that it is performed
only on the energy-dependent part $G(\mathcal{E})$ of the DF in \citeeq{eq:df_constant_beta}.
The modification of the Osipkov-Merritt models is identical to the isotropic case with the change
$\mathcal{E}\rightarrow Q$. However, \citefig{fig:divergence_vesc_OM} shows that removing the
divergence by hand leads to a huge modification of the speed distribution. As a result, the
Osipkov-Merritt DF is very hard to regularize in a self-consistent way.
Note that the above expression for $\tilde{\rho}$ is similar to the one proposed in
\citeeq{eq:alt_density}, except that the potential that appears is the
DM only potential rather than the total potential. The density and mass shown in
\citefig{fig:reconstructed_rho} are therefore also relevant for the modified DF discussed here.
We now turn to a truncation of the DF more fundamentally inspired by the King model
\cite{King1966}. The original approach focused on making isothermal spheres finite in phase
space but was later generalized to generic mass distributions (see \textit{e.g.}\ \cite{WidrowEtAl2005}).
It was also very recently used to implement a realistic tidal truncation of satellite DM
halos \cite{DrakosEtAl2017}. The spirit of the method is slightly different from what was
presented just above in the sense that we no longer start from a diverging and ill-defined DF,
but from a well-behaved DF describing a self-gravitating system with spatial boundaries sent to
infinity (thereby resembling the King model, which starts from the Maxwellian DF that describes
the infinite isothermal sphere).
In the isotropic case, this initial DF is precisely the Eddington function $f({\cal E})$ given
in \citeeq{eq:eddington_form2}, taking the gravitational potential $\Psi(r)=-\phi(r)$ as the
solution to Poisson's equation with the boundary condition $\phi(r\to \infty)\to 0$.
We then implement a truncation in energy related to the desired radial boundary $R_{\rm max}$
from a procedure similar to the one introduced above: (i) cut the phase-space volume in energy
below a cutoff ${\cal E}_c=\Psi_0=\Psi(R_{\rm max})$; (ii) define a new phase-space DF
$F(\tilde{\cal E}\equiv {\cal E}-\Psi_0)$ from $f({\cal E})$ above the cutoff, with the expected
asymptotic behavior $F(\tilde{\cal E}\to 0)\to 0$; (iii) determine the new associated gravitational
potential $\tilde{\Psi}$ from \citeeq{eq:poisson_tilde} (as previously, this defines the new
mapping between the radial coordinate and the potential); (iv) integrate the new DF to get
the modified density profile $\tilde{\rho}$.
According to this procedure, the ansatz for the modified DF $F$ that relates a cutoff in energy
to a radial cutoff is then defined as
\begin{eqnarray}
F(\tilde{\cal E}) = \left\{
\begin{array}{ll}
f(\tilde{\cal E}+\Psi_{0})-f(\Psi_{0})\, &\mathrm{for}\,\tilde{\cal E}\geqslant 0\\
0\, &\mathrm{for}\,\tilde{\cal E}<0
\end{array}
\right.
\label{eq:king_model}
\end{eqnarray}
This DF is continuous and satisfies $F(\tilde{\cal E}=0)=0$ by construction. The associated
gravitational potential $\tilde{\Psi}$ is a solution of the Poisson equation
\citeeq{eq:poisson_tilde}, with initial conditions to be specified. If we set the cutoff in the
initial DF to ${\cal E}_c=\Psi_0=\Psi(R_{\rm max})$, then
$\tilde{\Psi}_{\rm max}=\Psi_{\rm max}-\Psi(R_{\rm max})$ by construction, which by no means
guarantees that $\tilde{\Psi}$ vanishes at $R_{\rm max}$. In practice though, we find that the
radius $\tilde{R}_{\rm max}$ at which $\tilde{\Psi}(\tilde{R}_{\rm max})=0$ is very close to
$R_{\rm max}$, though slightly larger. To get $\tilde{\Psi}(R_{\rm max})=0$ directly from the
Poisson equation, one would instead need to tune the initial cutoff potential $\Psi_0$
until equality is reached---in the same vein, we find in that case that $\Psi_0\approx
\Psi(R_{\rm max})$.
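When the ansatz of \citeeq{eq:king_model} is applied to a Maxwellian $f$, one recovers the classical lowered-Maxwellian King DF, $F(\tilde{\cal E})\propto e^{\tilde{\cal E}/\sigma^{2}}-1$, whose velocity integral is analytic. The dimensionless sketch below (with $G=\sigma=1$ and an arbitrary central potential, not Galactic values) solves the analogue of \citeeq{eq:poisson_tilde} and recovers a finite boundary radius:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import erf

def rho_king(psi):
    """Density from integrating the lowered-Maxwellian (King) DF over
    velocities; it vanishes where the potential does (psi in units of sigma^2)."""
    w = np.maximum(psi, 0.0)
    return (np.exp(w) * erf(np.sqrt(w))
            - np.sqrt(4.0 * w / np.pi) * (1.0 + 2.0 * w / 3.0))

def poisson(r, y):
    psi, dpsi = y
    # spherical Poisson equation: psi'' + (2/r) psi' = -4 pi G rho(psi)
    return [dpsi, -4.0 * np.pi * rho_king(psi) - 2.0 * dpsi / r]

edge = lambda r, y: y[0]       # psi = 0 defines the boundary of the system
edge.terminal = True

sol = solve_ivp(poisson, [1e-8, 50.0], [5.0, 0.0],  # central potential W0 = 5
                events=edge, rtol=1e-10, atol=1e-12)
r_t = sol.t_events[0][0]       # finite "tidal" radius where psi vanishes
print(r_t)
```

The potential decreases monotonically and reaches zero at a finite radius, so the truncated DF indeed describes a spatially bounded system.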
Note that unlike removing the diverging term
``by hand'', the King approach may lead to a physical interpretation in terms of a tidal cut,
since it has been shown in numerical simulations that tidal stripping tends to remove
particles based on their energy rather than their angular momentum \cite{ChoiEtAl2009}. In the
present context, such stripping could have resulted from gravitational interactions with the
neighboring galaxies. We show the dark halo profile reconstructed from the DF of
\citeeq{eq:king_model} after solving \citeeq{eq:poisson_tilde} in
\citefig{fig:reconstructed_rho_king}, where the difference in setting the cutoff discussed
just above is illustrated explicitly.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.495\linewidth]{{{fig_reconstructed_density_king_models}}}
\includegraphics[width=0.495\linewidth]{{{fig_reconstructed_mass_king}}}
\caption{\small Same as \citefig{fig:reconstructed_rho}, showing the profiles resulting from DFs
regularized \`a la King, based on the ansatz of \citeeq{eq:king_model}. {\bf Left panel:}
reconstructed density profiles compared with the initial one. {\bf Right panel:} corresponding
dark halo mass profiles.}
\label{fig:reconstructed_rho_king}
\end{figure*}
\subsubsection{Regularization of the boundary-induced divergence: Summary}
\label{sssec:v_div_summ}
Here we summarize the pros and cons of the regularization procedures implemented above to remove the
radial boundary-induced divergence of the phase-space DF as ${\cal E}\to 0$ or equivalently
$v\to v_{\rm esc}$. For the isotropic case, we saw that the technically easiest way to remove the
divergence while ensuring the self-consistency of the Eddington inversion method was to slightly
modify the input density profile around the radial boundary $R_{\rm max}$ in such a way that the
dynamics is unaffected in the central regions of the Galaxy. In that case, one can
straightforwardly find the new gravitational potential $\tilde{\Psi}_{\rm D}$ by directly
integrating the Poisson equation over the radial coordinate from \citeeq{eq:relative_potential}.
The regularization through modifications of the DF is more involved, as it requires calculating
$\tilde{\Psi}_{\rm D}$ by numerically solving the Poisson equation. This is the only way to recover
a mapping between the potential and the radial coordinate, and then to compute the resulting
modified density profile. Both methods give similar distortions to the initial density profile,
which lie within the current statistical and systematic uncertainties on the dark halo mass
profile. Ultimately, the best, though much more involved, approach would be to start from a
well-defined DF and profile before performing the likelihood analysis to account for the
kinematic constraints and to get best-fitting Galactic mass models, similar to the action-angle
analyses (\textit{e.g.}~\cite{BinneyEtAl2015}). This goes beyond the scope of this paper.
For anisotropic systems, we saw that both methods may apply to tangential anisotropy ($\beta<0$),
but fail for radial anisotropy (both $\beta>0$ and the Osipkov-Merritt models). In the latter
case, the only way to get finite results is to remove the diverging term
($\propto 1/\sqrt{\cal E}$ or $\propto 1/\sqrt{Q}$) by hand, but this comes at the cost of losing a
meaningful and self-consistent normalization of the phase-space DF. We are then left
with a theory that is no longer a self-consistent inversion of the integral
\citeeq{eq:rho_definition}, whose DF must be normalized to unity (or $\rho$ or $\tilde{\rho}$)
by hand and is no longer simply related to the DM density profile. Although such
a DF might be perfectly licit as a description of a gravitational system, its theoretical status
appears unclear to us.
\subsection{Positivity and stability issues}
\label{ssec:gamma}
We now move to another kind of issue that may arise in the Eddington formalism:
the potential breakdown of the inversion, very often due to the presence of baryonic components.
More concretely, it turns out that some perfectly sound configurations of Galactic mass
models may lead to ill-defined DFs through this method, which are the manifestation of unstable
configurations in phase space. In these cases, Eddington-like inversions can no longer be used to
self-consistently describe the DM halo, because some degrees of freedom are likely
missing to make full physical sense of the DM component (axisymmetry, action-angles
coordinates, \textit{etc}.). We stress that the potential breakdown of the Eddington formalism may
only manifest itself in some regions of phase space. This is seldom checked in the
context of predictions for direct DM searches. A typical signature of such a breakdown
is a DF exhibiting negative values in specific regions of phase space, which will be
discussed in \citesec{sssec:pos}. More subtle, though complementary, considerations linked to the
stability of gravitational systems will be discussed in \citesec{ssec:gamma}.
\subsubsection{Positive distribution functions}
\label{sssec:pos}
A trivial requirement for a DF to be well-behaved is positivity everywhere,
\textit{i.e.}~$f(\vec{r},\vec{v})\geqslant0$ for any $(\vec{r},\vec{v})$.
Although most dark halo shapes are fully Eddington invertible for
DM-only systems (\textit{e.g.}~\cite{Widrow2000}), there is in general no guarantee that
Eddington's inversion leads to a DF positive all over the halo for any given pair of
DM density profile $\rho$ and total gravitational potential $\Psi=\Psi_{\rm D}+\Psi_{\rm B}$.
We will inspect below the specific case of cored profiles, but it usually turns out that the
presence of a baryonic component, which breaks the direct correlation between the density and
the potential, may drive the DF negative in some regions of the system.
Sufficient conditions for positivity were identified in Refs.~\cite{CiottiEtAl1992,Ciotti1996} for
the Osipkov-Merritt models, in the general case of multi-component systems. From
\citeeq{eq:om_form1} we can identify a necessary condition for the positivity of $f_{\rm OM}$,
which is
\begin{eqnarray}
\frac{\mathrm{d}\rho_{\rm OM}}{\mathrm{d}\Psi}\geqslant
0~{\rm for}\,0\leqslant\Psi\leqslant\Psi_{\rm max}.
\end{eqnarray}
In this equation, $\rho_{\rm OM}$ corresponds to the DM while $\Psi=\Psi_{\rm D}+\Psi_{\rm B}$ is the
total potential (from the DM plus baryons). If this necessary condition is satisfied, a sufficient
condition for positivity is \cite{CiottiEtAl1992,Ciotti1996}
\begin{eqnarray}
\frac{\mathrm{d}}{\mathrm{d}\Psi_{\rm D}}
\left[\frac{\mathrm{d}\rho_{\rm OM}}{\mathrm{d}\Psi_{\rm D}}
\left(\frac{\mathrm{d}\Psi}{\mathrm{d}\Psi_{\rm D}}\right)^{-1}\sqrt{\Psi}\right]\geqslant 0
~\forall\,0\leqslant\Psi\leqslant\Psi_{\rm max}.
\end{eqnarray}
One can readily see that these conditions are also valid for isotropic systems as well as for
single-component systems. All McM17 halo profiles satisfy this condition.
Let us return to the isotropic case and inspect it in detail. Most standard
single-component mass
distributions (\textit{e.g.}~NFW, Einasto, \textit{etc}.) have well-defined ergodic DFs \cite{Widrow2000}. Yet,
some well-motivated profiles do lead to a negative DF. Troublesome profiles can be identified
using \citeeq{eq:abel_equation_sec2}. If the derivative $\mathrm{d}\rho/\mathrm{d}\Psi_{\rm D}$
cancels for some values of $\Psi_{\rm D}$, then \citeeq{eq:abel_equation_sec2} forces $f$ to take
negative values. This is expected to happen if the DM profile is very flat somewhere, as is the
case for cored distributions for instance. In the case of single-component systems, the left-hand
side of \citeeq{eq:abel_equation_sec2} can be written
\begin{eqnarray}
\frac{{\rm d}\rho}{{\rm d}\Psi_{\rm D}} =
\frac{{\rm d}r}{{\rm d}\Psi_{\rm D}}\frac{\mathrm{d}\rho}{\mathrm{d}r} =
-\frac{r^{2}}{G\,m_{\rm D}(r)}\frac{\mathrm{d}\rho}{\mathrm{d}r}\,,
\label{eq:drho_dpsi}
\end{eqnarray}
where the mass $m_{\rm D}(r)$ is related to the density $\rho(r)$ through \citeeq{eq:mass}. Let us
now consider as an example the following class of cored DM density profiles:
\begin{eqnarray}
\rho(r) = \rho_{\rm s}\left[1+\left(\frac{r}{r_{\rm s}}\right)^{\alpha}\right]^{-\beta/\alpha}\,,
\label{eq:alpha_beta_density}
\end{eqnarray}
with $\alpha>0$ and $\beta>0$. In the limit $r\to 0$ (equivalently
$\Psi_{\rm D} \to \Psi_{\rm max}$) we have $\mathrm{d}{\rho}/\mathrm{d}r\propto r^{\alpha-1}$
and $m\propto r^{3}$, therefore $\mathrm{d}\rho/\mathrm{d}\Psi_{\rm D} \propto r^{\alpha-2}$. The
asymptotic value of the derivative is then non-zero only if $\alpha\leqslant 2$. Consequently,
for \textit{any} single-component system with a density profile given by
\citeeq{eq:alpha_beta_density} and with $\alpha>2$, the Eddington method leads to a negative
ergodic DF.
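As a sanity check, the $r^{\alpha-2}$ scaling derived above can be verified numerically. The following sketch is only an illustration, in units $G=\rho_{\rm s}=r_{\rm s}=1$ and with an arbitrary outer slope $\beta=3$; it measures the inner logarithmic slope of $\mathrm{d}\rho/\mathrm{d}\Psi_{\rm D}$ for the cored profile of \citeeq{eq:alpha_beta_density}:

```python
import numpy as np
from scipy.integrate import quad

# Cored profile rho(r) = rho_s [1 + (r/r_s)^alpha]^(-beta/alpha),
# in units G = rho_s = r_s = 1, with an illustrative outer slope beta = 3.
BETA = 3.0

def rho(r, alpha):
    return (1.0 + r**alpha) ** (-BETA / alpha)

def drho_dr(r, alpha):
    # analytic derivative of the profile above
    return -BETA * r**(alpha - 1) * (1.0 + r**alpha) ** (-BETA / alpha - 1)

def mass(r, alpha):
    # m(r) = 4 pi int_0^r rho(x) x^2 dx
    return 4.0 * np.pi * quad(lambda x: rho(x, alpha) * x**2, 0.0, r)[0]

def drho_dpsi(r, alpha):
    # d rho / d Psi_D = -(r^2 / (G m)) d rho / d r  [Eq. (drho_dpsi)]
    return -(r**2) / mass(r, alpha) * drho_dr(r, alpha)

def inner_logslope(alpha, r1=1e-4, r2=2e-4):
    # logarithmic slope of drho/dpsi in the inner region
    return np.log(drho_dpsi(r2, alpha) / drho_dpsi(r1, alpha)) / np.log(r2 / r1)

for alpha in (1.0, 2.0, 3.0):
    print(alpha, inner_logslope(alpha))  # slope -> alpha - 2
```

The measured slope approaches $\alpha-2$ in each case, so the derivative vanishes at the center as soon as $\alpha>2$, in line with the condition above.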
We stress that
\begin{eqnarray}
0<\alpha\leqslant 2 \;\;\text{is a \textit{necessary} condition (isotropic case)}
\label{eq:cond_alpha_dm}
\end{eqnarray}
to get a positive DF for a DM-only system. However, it is certainly not sufficient for a
multi-component system. Because the argument is based on the asymptotic behavior of
$\mathrm{d}\rho/\mathrm{d}\Psi_{\rm D}$ as $r\rightarrow0$, our result also holds for Osipkov-Merritt
models, since the associated anisotropy goes to zero when $r\ll r_{\rm a}$. In the constant
anisotropy case the situation is different because an artificial slope $2\beta_{0}$ is present in
the Abel equation given in \citeeq{eq:abel_eq_constant_beta}. Consequently, if the density profile
$\rho$ has an inner slope $-\gamma$, the pseudo-density $\chi$ has an inner slope
$2\beta_{0}-\gamma$. Note that a requirement for Eddington's method and its extensions to work
is that the generalized density ($\rho,\ \rho_{\rm OM},\ \chi$, depending on the model) is an
increasing function of $\Psi$. Therefore,
\begin{eqnarray}
2\beta_{0}\leq \gamma\;\;\text{is a \textit{necessary} condition (anisotropic $\beta_0$ case)}
\label{eq:cond_beta0}
\end{eqnarray}
to get a positive constant-anisotropy DF.
This forbids, for instance, any cored system from having a constant positive anisotropy, and in
general sets an upper limit on the constant anisotropy a system can feature. This is
a subset of a more general slope-anisotropy inequality \cite{AnEtAl2006,CiottiEtAl2010}.
Adding a baryonic component to the system can affect these results. If the DM profile follows
\citeeq{eq:alpha_beta_density} and the baryonic profile is cored, the low-radius behavior of
$\mathrm{d}\rho/\mathrm{d}\Psi$ (with $\Psi=\Psi_{\rm D}+\Psi_{\rm B}$ the total potential) is
unchanged with respect to that of $\mathrm{d}\rho/\mathrm{d}\Psi_{\rm D}$. Therefore, the positivity
condition remains $\alpha\leqslant 2$. If the baryonic density profile is cuspy with inner slope
$-\gamma_{\rm B}$ (\textit{e.g.}\, $\gamma_{\rm B}=1$ for a Hernquist profile), the result is modified. The
mass is now dominated by the baryonic component as $r\rightarrow0$, and we have
$\mathrm{d}\rho/\mathrm{d}\Psi\propto r^{\alpha-2+\gamma_{\rm B}}$. The necessary condition for
positivity becomes
\begin{eqnarray}
0<\alpha\leqslant 2-\gamma_{\rm B}\,,
\label{eq:cond_alpha_bar}
\end{eqnarray}
\textit{i.e.}\ baryons reduce the parameter space providing a positive DF.
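The modified scaling $\mathrm{d}\rho/\mathrm{d}\Psi\propto r^{\alpha-2+\gamma_{\rm B}}$ can be illustrated in the same spirit. The sketch below uses hypothetical Hernquist bulge parameters $M_{\rm B}$ and $a$ (units $G=\rho_{\rm s}=r_{\rm s}=1$), chosen only so that baryons dominate the inner mass:

```python
import numpy as np
from scipy.integrate import quad

# Cored DM profile as in Eq. (alpha_beta_density), now with a cuspy
# Hernquist baryonic component (gamma_B = 1) dominating the inner mass.
# Units G = rho_s = r_s = 1; M_B and a are illustrative bulge parameters.
ALPHA, BETA = 3.0, 3.0
M_B, A = 10.0, 0.5

def rho_dm(r):
    return (1.0 + r**ALPHA) ** (-BETA / ALPHA)

def drho_dm_dr(r):
    return -BETA * r**(ALPHA - 1) * (1.0 + r**ALPHA) ** (-BETA / ALPHA - 1)

def m_dm(r):
    return 4.0 * np.pi * quad(lambda x: rho_dm(x) * x**2, 0.0, r)[0]

def m_bar(r):
    # Hernquist enclosed mass, m_B(r) = M_B r^2 / (r + a)^2
    return M_B * r**2 / (r + A) ** 2

def drho_dpsi_tot(r):
    # d rho / d Psi = -(r^2 / (G (m_D + m_B))) d rho / d r
    return -(r**2) / (m_dm(r) + m_bar(r)) * drho_dm_dr(r)

r1, r2 = 1e-4, 2e-4
slope = np.log(drho_dpsi_tot(r2) / drho_dpsi_tot(r1)) / np.log(r2 / r1)
print(slope)  # -> close to alpha - 2 + gamma_B = 2
```

With $\alpha=3$ and $\gamma_{\rm B}=1$, the derivative vanishes at the center, signaling the violation of the necessary condition above.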
\subsubsection{Stable distribution functions}
\label{sssec:stab}
We would like to stress here that positivity is not strong enough a criterion for a DF to
give a satisfactory description of a DM halo. Indeed, some $(\rho,\Psi)$ pairs satisfying the
positivity conditions can still lead to a DF that is an \textit{unstable} solution of the
collisionless Boltzmann equation. Some conditions for stability against different kinds of
perturbations are reviewed in Ref.~\cite{BinneyEtAl2008}. A result of interest for us is
Antonov's second law \cite{Antonov1962,Lebovitz1965,Lynden-BellEtAl1968} which guarantees the
stability of an ergodic DF $f$ against non-radial modes if $\mathrm{d}f/\mathrm{d}\mathcal{E}>0$.
A complementary result is the Doremus-Feix-Baumann theorem \cite{DoremusEtAl1971,KandrupEtAl1985},
which ensures stability against radial modes if $\mathrm{d}f/\mathrm{d}\mathcal{E}>0$.
Consequently, a \textit{sufficient condition for the stability} of ergodic DFs $f(\mathcal{E})$
against all perturbations is
\begin{eqnarray}
\frac{\mathrm{d}f}{\mathrm{d}\mathcal{E}}(\mathcal{E})>0~{\rm for\,all}\,\mathcal{E}\,.
\label{eq:stability1}
\end{eqnarray}
We now investigate the consequences of this condition on DM density profiles. In practice
we use profiles of the form of \citeeq{eq:alt_density} in order to get rid of the divergence
discussed in \citesec{ssec:rmax}. Note that we previously established that this divergence is a
sign of an artificial compression of the phase space, but it can also be viewed as an
unstable configuration as it violates the stability
criterion given in \citeeq{eq:stability1}. We wish to find a more convenient criterion involving
the density profile and the potential rather than the DF itself. We recall the expression of the
DF when the boundary term is zero:
\begin{eqnarray}
f(\mathcal{E}) = \int_{0}^{\mathcal{E}}\frac{\mathrm{d}^{2}\rho}{\mathrm{d}\Psi^{2}}
\frac{1}{\sqrt{\mathcal{E}-\Psi}}\,\mathrm{d}\Psi\,.
\label{eq:df_without_divergence}
\end{eqnarray}
From this expression we see that $\mathrm{d}f/\mathrm{d}\mathcal{E}>0,\,\forall\mathcal{E}$ only
if $\mathrm{d}^{2}\rho/\mathrm{d}\Psi^{2}>0,\,\forall\Psi$. Moreover, starting from the Abel
equation in \citeeq{eq:abel_equation_sec2} and performing an integration by parts, we get
\begin{eqnarray}
\dfrac{\mathrm{d}\rho}{\mathrm{d}\Psi} = 2\sqrt{8}\pi\int_{0}^{\Psi}
\sqrt{\Psi-\mathcal{E}}\,\frac{\mathrm{d}f}{\mathrm{d}\mathcal{E}}\,\mathrm{d}\mathcal{E}\,,
\end{eqnarray}
which implies that $\mathrm{d}^{2}\rho/\mathrm{d}\Psi^{2}>0,\,\forall\Psi$ only if
$\mathrm{d}f/\mathrm{d}\mathcal{E}>0,\,\forall\mathcal{E}$. To summarize, we have
\begin{eqnarray}
\frac{\mathrm{d}f}{\mathrm{d}\mathcal{E}}>0,~\forall\mathcal{E} \iff
\frac{\mathrm{d}^{2}\rho}{\mathrm{d}\Psi^{2}}>0,~\forall\Psi\,.
\label{eq:stability2}
\end{eqnarray}
Therefore, the stability criterion takes the following very simple form:
$\mathrm{d}^{2}\rho/\mathrm{d}\Psi^{2}>0,\,\forall\Psi$. From \citeeq{eq:df_without_divergence}, it
is obvious that this criterion is also a sufficient condition for positivity. From now on, we
consider \citeeq{eq:stability2} as defining the range of applicability of the Eddington formalism
since a system that violates this condition could lead to unstable phase-space
configurations or a negative DF.
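In practice, this criterion can be evaluated numerically on a radial grid by chaining the derivatives through $r$, using $\mathrm{d}\Psi/\mathrm{d}r=-G\,m(r)/r^{2}$. A minimal sketch for a DM-only NFW halo (units $G=\rho_{\rm s}=r_{\rm s}=1$, not a fit to any mass model):

```python
import numpy as np

# Numerical check of the stability criterion d^2 rho / d Psi^2 > 0 for a
# DM-only NFW halo (units G = rho_s = r_s = 1). Derivatives are chained
# through r: d/dPsi = (dPsi/dr)^(-1) d/dr, with dPsi/dr = -G m(r)/r^2.
r = np.logspace(-3, 3, 4000)
rho = 1.0 / (r * (1.0 + r) ** 2)                      # NFW density
m = 4.0 * np.pi * (np.log(1.0 + r) - r / (1.0 + r))   # NFW enclosed mass
dpsi_dr = -m / r**2                                   # relative potential decreases outward

drho_dpsi = np.gradient(rho, r) / dpsi_dr
d2rho_dpsi2 = np.gradient(drho_dpsi, r) / dpsi_dr

# positive everywhere (away from the grid edges, where gradients are one-sided)
print(np.all(d2rho_dpsi2[5:-5] > 0.0))
```

For this DM-only NFW case the second derivative stays positive over the grid, consistent with the well-behaved ergodic DFs reported in Ref.~\cite{Widrow2000}; adding a baryonic potential only requires replacing $m(r)$ by the total mass.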
The stability criterion of \citeeq{eq:stability2} can be extended in part to non-isotropic
spherical systems with a DF of the form $f(\mathcal{E},L)$. It is shown in
Ref.~\cite{DoremusEtAl1973} that
systems satisfying $\partial f/\partial\mathcal{E}>0$ for all $(\mathcal{E},L)$ are stable against
radial perturbations. This is directly applicable to the constant-$\beta$
(Eq.~\ref{eq:df_constant_beta}) and Osipkov-Merritt (Eq.~\ref{eq:df_om}) models, resulting in
\begin{subequations}
\label{eq:stability_anisotropy}
\begin{eqnarray}
\frac{\mathrm{d} G}{\mathrm{d}\mathcal{E}}&>&0,~\forall \mathcal{E}\\
\frac{\mathrm{d} f_{\rm OM}}{\mathrm{d}Q}&>&0,~\forall Q\,.
\end{eqnarray}
\end{subequations}
However, the response of anisotropic systems to non-radial perturbations is much more complex,
due to the possibility of radial-orbit instabilities, so that no simple stability criteria are
known. Analytical studies are usually quite involved
(\textit{e.g.}~\cite{Antonov1987,PerezEtAl1996,ReinEtAl2003,MarechalEtAl2010}), and the stability
properties of anisotropic systems are very often investigated thanks to numerical simulations
(\textit{e.g.}~\cite{MerrittEtAl1985,BarnesEtAl1986,MezaEtAl1997}), which is far beyond the scope
of this work. In the following, we rely on the criterion given in
\citeeq{eq:stability_anisotropy}, which should be understood as necessary rather than sufficient.
\begin{figure*}[t!]
\centering
\includegraphics[width = 0.495\linewidth]{{{fig_df_mcmillan_models}}}
\includegraphics[width = 0.495\linewidth]{{{fig_d2rhodPsi2_mcmillan}}}
\includegraphics[width = 0.495\linewidth]{{{fig_d3rhodPsi3_mcmillan}}}
\caption{\small \textbf{Top left panel}: Ergodic distribution functions for several mass models from
Ref.~\cite{McMillan2017}. The DFs are in units of
$(4\pi G_{\rm N})^{-3/2}\rho_{\rm s}^{-1/2}r_{\rm s}^{-3}$. \textbf{Top right panel}:
Second derivative of the density $\rho$ with respect to the total potential $\Psi$. The
derivative is in units of $(4\pi G_{\rm N})^{-2}\rho_{\rm s}^{-1}r_{\rm s}^{-4}$.
\textbf{Bottom panel}: Third derivative ${\rm d}^3\rho/{\rm d}\Psi^3$ in units of
$(4\pi G_{\rm N})^{-3}\rho_{\rm s}^{-2}r_{\rm s}^{-6}$. }
\label{fig:df_mcmillan_models}
\end{figure*}
We investigated the stability of the phase-space configurations obtained by
Eddington-inverting realistic and kinematically constrained McM17 MW dark halos
\cite{McMillan2017}. Shown in the top left panel of \citefig{fig:df_mcmillan_models} are the
isotropic DFs for each mass model, both with and without the baryonic contribution to the
potential $\Psi$. To simplify the discussion, the DFs are shown \textit{without} the
diverging term discussed in \citesec{ssec:rmax}, and without any regularization plugged in.
Indeed, we will see that in these examples, instabilities manifest themselves mostly in the
central regions of the Galaxy, \textit{i.e.}~${\cal E}/\Psi_{\rm max}\gtrsim 0.5$. The dark halos shown
in the figure mostly differ in the inner slope $\gamma$ of the density profile. In the
absence of baryons (dashed lines), all the DFs satisfy the stability criterion given in
\citeeq{eq:stability1} and are therefore stable. We explicitly verified that the models also
satisfy the condition in \citeeq{eq:stability2} by plotting the second-order derivative
$\mathrm{d}^{2}\rho/\mathrm{d}\Psi^{2}$ in the top right panel of \citefig{fig:df_mcmillan_models}.
The situation changes when baryons are added to the potential. Then the DFs flatten at high
energy (toward the central regions), and may even develop a dip, as in the case of the DM core
($\gamma=0$, solid magenta line), which violates the stability criterion in
\citeeq{eq:stability1}. The derivative
$\mathrm{d}^{2}\rho/\mathrm{d}\Psi^{2}$ takes negative values in that case and the stability
criterion in \citeeq{eq:stability2} is also violated as expected. This mass model is therefore
very likely to correspond to an unstable phase-space configuration.\footnote{More precisely,
the initial assumption of ergodicity cannot accommodate this density-potential pair; one
would need to increase the number of degrees of freedom in phase space to find a stable DF.}
The presence of a dip in the ergodic DF has direct consequences in the speed distribution
defined in \citeeq{eq:v_df}. In the left panel of \citefig{fig:fv_mass_ratio}, we
show the speed distributions for the different mass models at $r=0.01\,\rm kpc$,
\textit{i.e.}~corresponding to regions where the energy range probes the dip. The speed distribution
of the unstable model (magenta line) exhibits a very strong double-peak feature: a very large peak
at $v\sim 450\,\rm km/s$, and a much smaller one at $v\sim 50\,\rm km/s$.
The phase-space distribution is somewhat artificially forced to large velocities to allow for a
kinetic pressure strong enough to prevent the halo from collapsing to a cusp---hardly a stable
configuration in the isotropic case. The appearance of such a double-peak feature is
characteristic of a troublesome configuration, and we stress that it has to be checked all over
the halo (equivalently all over the available energy range). Indeed, the same problematic model
would have given a perfectly licit speed distribution at larger radii (where the energy range
would not probe the dip in the DF). We note, however, that it is not straightforward to firmly
interpret this feature in terms of instability: it is also present in the $\gamma=0.25$ case,
which satisfies the stability criterion yet clearly exhibits a transition to a double-peaked
distribution, as seen in the blue curve of \citefig{fig:fv_mass_ratio}. In fact, as can
readily be guessed from \citeeq{eq:eddington_form2} (the part in brackets) and from both the top
right and the bottom panels of \citefig{fig:df_mcmillan_models}, a way to select
better-behaved speed distributions (without double-peak feature) is simply to impose an additional
criterion based on the third derivative instead of the second:
\begin{eqnarray}
\frac{\mathrm{d}^{3}\rho}{\mathrm{d}\Psi^{3}}>0,~\forall\Psi\,.
\label{eq:stability3}
\end{eqnarray}
In the following, we will remain agnostic about the origin of this two-peak behavior and just
stick to the stability criterion of \citeeq{eq:stability2}, keeping in mind that
\citeeq{eq:stability3} could further be applied to remove controversial cases. We therefore keep
the McM17 $\gamma=0.25$ case as viable, while we reject the $\gamma=0$ case.
We now wish to characterize in more detail the instability when baryons contribute to the
potential. We write the mass of the system as $m=m_{\rm D}+m_{\rm B}$ and the gravitational
potential as $\Psi=\Psi_{\rm D}+\Psi_{\rm B}$. Then the derivative that appears in the stability
criterion can be written
\begin{eqnarray}
\frac{\mathrm{d}^{2}\rho}{\mathrm{d}\Psi^{2}} =
\left(\frac{m_{\rm D}}{m_{\rm D}+m_{\rm B}}\right)^{2}
\left[\frac{\mathrm{d}^{2}\rho}{\mathrm{d}\Psi_{\rm D}^{2}}-
\frac{\mathrm{d}\rho}{\mathrm{d}\Psi_{\rm D}}
\frac{\mathrm{d}}{\mathrm{d}\Psi}\left(\frac{m_{\rm B}}{m_{\rm D}}\right)\right].
\end{eqnarray}
From this expression, we obtain a sufficient condition for stability:
\begin{eqnarray}
\frac{\mathrm{d}^{2}\rho}{\mathrm{d}\Psi_{\rm D}^{2}}
\left/\frac{\mathrm{d}\rho}{\mathrm{d}\Psi_{\rm D}}\right. >
\frac{\mathrm{d}}{\mathrm{d}\Psi}\left(\frac{m_{\rm B}}{m_{\rm D}}\right)\,.
\label{eq:stability_baryons}
\end{eqnarray}
The quantities appearing on the left-hand side of \citeeq{eq:stability_baryons} only refer to
DM, while baryons appear on the right-hand side through their mass $m_{\rm B}=m_{\rm B}(r)$ and the
total potential $\Psi$. In the absence of baryons, $m_{\rm B}=0$ and \citeeq{eq:stability_baryons}
simplifies to $\mathrm{d}^{2}\rho/\mathrm{d}\Psi_{\rm D}^{2}>0$ which is exactly the stability
criterion in the DM-only case. Let us discuss the right-hand side in more detail. The
baryonic mass is present in the ratio $m_{\rm B}/m_{\rm D}$ and in the potential $\Psi$, so we do
not expect it to be the most important parameter here. Rather, the spatial extension of the
baryonic distribution with respect to the DM one is the relevant factor. To illustrate this,
we show the ratio $m_{\rm B}/m_{\rm D}$ as a function of $\Psi$ in the right panel of
\citefig{fig:fv_mass_ratio}. We show the isolated contribution of the bulge and the disk, as well
as the total baryonic contribution. We can see that the bulge-to-DM ratio is steeper than both
the disk-to-DM and baryons-to-DM ratios. The bulge-only configuration is therefore more likely
to be inconsistent with the ergodic assumption than the disk-only configuration and the
full mass model, even though the baryonic mass is much less important in that case.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.502\linewidth]{{{fig_fv_r1e-2kpc}}}
\includegraphics[width=0.488\linewidth]{{{fig_mass_ratio_mcmillan}}}
\caption{\small \textbf{Left panel}: Speed distribution at $r=0.01\,\rm kpc$ for the mass models
of \cite{McMillan2017}, computed from the ergodic DF in \citeeq{eq:df_without_divergence}.
\textbf{Right panel}: Ratio of the baryonic mass to the DM mass as a function of
the total potential $\Psi$ (in units of $4\pi G_{\rm N}\rho_{\rm s}r_{\rm s}^{2}$).}
\label{fig:fv_mass_ratio}
\end{figure*}
We also investigated the effects on the second derivative $\mathrm{d}^{2}\rho/\mathrm{d}\Psi^{2}$
of changing the bulge characteristic mass density $\rho_{\rm 0,b}$ and radius $r_{\rm b}$
(see Eq.~\ref{eq:bulge}), while keeping the disk parameters fixed. Results are shown in
\citefig{fig:contours}, where we plot
$r_{\rm b}/r_{\rm s}$ as a function of $\rho_{\rm 0,b}/\rho_{\rm s}$. The bulge parameters are scaled
to the dark halo parameters. The points correspond to the McM17 mass models \cite{McMillan2017}
for $\gamma=0$, 0.25, 0.5. Those three models have nearly identical values for the bulge
parameters; the difference in coordinates comes only from the change in the halo parameters
$\rho_{\rm s}$ and $r_{\rm s}$. The red shaded areas are the portions of parameter space where
$\mathrm{d}^{2}\rho/\mathrm{d}\Psi^{2}$ goes negative, \textit{i.e.}~the Eddington DF violates the stability
criterion. We can see in \citefig{fig:contours} that the $\gamma=0$ mass model point is inside
the $\gamma=0$ excluded area, while the $\gamma=0.25$ and $\gamma=0.5$ models are
in their allowed regions. This is in agreement with the right panel of
\citefig{fig:df_mcmillan_models}, where the $\gamma=0$ case is explicitly shown to violate
the stability criterion. This figure further allows one to easily check whether one's favorite
Galactic mass model can be Eddington inverted.
\begin{figure}[!t]
\centering
\includegraphics[width=0.7\linewidth]{{{fig_contours_McMillan_bulge_plus_disk}}}
\caption{\small Sign of the minimum of $\mathrm{d}^{2}\rho/\mathrm{d}\Psi^{2}$ on the plane
($\rho_{\rm 0,b}/\rho_{\rm s},r_{\rm b}/r_{\rm s}$) with $\rho_{\rm 0,b}$ ($\rho_{\rm s}$) and $r_{\rm b}$
($r_{\rm s}$) the characteristic density and radius of the bulge (dark halo). The parameters of the disk are fixed to the McM17 values. Results are shown
for $\gamma=0, 0.25,0.5$ with $\gamma$ the inner slope of the DM profile. Points
indicate the positions of the McM17 models.}
\label{fig:contours}
\end{figure}
\subsubsection{Positive and stable distribution functions: summary}
\label{sssec:stab_summ}
In this section, we have discussed several theoretical issues that arise when
trying to infer the DF of a galactic system in a self-consistent way with the Eddington
formalism and its most simple anisotropic extensions. We have established its validity range,
and provided prescriptions to deal with these issues. These prescriptions can be readily
used to ensure a self-consistent application of Eddington-like inversions.
We have first discussed in \citesec{sssec:pos} the conditions to get a DF positive over the whole
energy range---this is mostly relevant to systems with both DM and baryons. For a DM
profile of the $\alpha\beta\gamma$ type [see \citeeq{eq:halo}], we isolated a rather simple
necessary condition on the index $\alpha$ given in \citeeq{eq:cond_alpha_bar}, which forces the
transition between the asymptotic indices $\gamma$ and $\beta$ to be smoother and smoother as
the baryonic distribution steepens in the very central parts of the Galaxy---this also applies
to the anisotropic Osipkov-Merritt model, which actually tends to the isotropic case when
$r\ll r_{\rm a}$. For constant-anisotropy models, a necessary condition exists in terms of
$\beta_0$, given in \citeeq{eq:cond_beta0}.
We have then discussed in \citesec{sssec:stab} more fundamental features which can be related
to the (in)stability of gravitational systems. We have shown that the Eddington inversion
can only provide a well-behaved DF when the condition given in \citeeq{eq:stability2} is
fulfilled. An even more stringent condition providing an unambiguous speed distribution
is given in \citeeq{eq:stability3}. In contrast to the positivity issue though, stability
conditions cannot be derived in the anisotropic cases, except for the very special case of
radial perturbations. In that case, the stability conditions are given in
\citeeq{eq:stability_anisotropy}.
Finally, we showed in \citefig{fig:contours} how to quickly check whether realistic Galactic
mass models are Eddington-invertible, only from the bulge-to-halo ratio of the scale densities.
This figure can be used as a preliminary diagnosis before going into more involved calculations.
In any case, all the discussion developed in this section fully applies to the general case, for
systems with or without baryons.
\section{Impact on predictions for dark matter searches}
\label{sec:dd}
In this section, we study the impact of the issues discussed in the previous sections on
predictions for DM searches. We shall obviously focus on velocity-dependent observables, and more
particularly on observables related to both direct DM searches and indirect DM searches:
the moments (and inverse moments) of the DM speed (relevant to direct DM searches, DM capture
by stars, or PBH microlensing), and the moments (and inverse moments) of the two-DM-particle
relative speed (relevant to $p$-wave-suppressed or Sommerfeld-enhanced DM annihilation).
We will embed the former observables in a {\em direct-search} class, while the latter
will define the {\em indirect-search} class, to make more explicit contact with the WIMP
phenomenology. We shall make quantitative comparisons between the self-consistent Eddington
approach (whenever applicable) and the Maxwellian approximation, which is most commonly used in
this context.
Note that the Maxwell-Boltzmann (MB) DF or velocity distribution is consistent with the
collisionless Boltzmann equation only if the underlying density profile is an infinite isothermal
sphere that also dominates the potential. It is therefore by no means theoretically consistent
with the input dark halo profile we will consider in the following calculations, but is usually
assumed to provide a ``reasonable'' approximation. As in the isothermal sphere, we will
still use the following link between the 3D velocity dispersion $\sigma$ and the circular
velocity:
\begin{eqnarray}
\sqrt{\frac{2}{3}}\,\sigma = v_{\rm circ}(r) = \sqrt{\frac{G_{\rm N}\,m(r)}{r}}\,,
\end{eqnarray}
with $m(r)$ consistently derived from the mass model. Therefore, the velocity dispersion
associated with the MB DF will be radius-dependent in the following.
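This link is straightforward to implement once the mass profile is fixed. A minimal sketch, assuming an NFW mass profile in units $G=\rho_{\rm s}=r_{\rm s}=1$ (illustrative only, not one of the McM17 fits):

```python
import numpy as np

# Radius-dependent Maxwell-Boltzmann dispersion from the circular velocity,
# sqrt(2/3) sigma = v_circ(r) = sqrt(G m(r)/r), with an NFW mass profile
# in units G = rho_s = r_s = 1 (illustrative, not a fitted mass model).
def m_nfw(r):
    return 4.0 * np.pi * (np.log(1.0 + r) - r / (1.0 + r))

def v_circ(r):
    return np.sqrt(m_nfw(r) / r)

def sigma_mb(r):
    # 3D velocity dispersion entering the MB distribution
    return np.sqrt(1.5) * v_circ(r)

for r in (0.1, 1.0, 10.0):
    print(r, sigma_mb(r))
```

The dispersion inherits the radial dependence of the rotation curve, peaking where $v_{\rm circ}$ does (near $r\simeq 2.16\,r_{\rm s}$ for NFW).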
\subsection{Direct-search-like observables}
Let us define a generic function for the moments of the DM speed in the Galactic frame:
\begin{subequations}
\begin{eqnarray}
\Xi_n(v_{\rm min},v_{\rm max},r) &\equiv&\omega^{-1}(r)
\int_{v_{\rm min} \leqslant |\vec{v}| \leqslant v_{\rm max}}
{\rm d}^{3} \vec{v} \, |\vec{v}|^n \, f_{\vec{v}}(\vec{v},r)\\
\omega(r) &\equiv &\int {\rm d}^{3} \vec{v} \, f_{\vec{v}}(\vec{v},r) \,,
\end{eqnarray}
\end{subequations}
where $f_{\vec{v}}(\vec{v},r)$ is the velocity distribution in the Galactic frame, generically
defined in the context of the Eddington inversion by \citeeq{eq:v_df}, and $\omega(r)$
ensures the normalization of the distribution to unity over the full available range in
velocity [1 by construction in the Eddington formalism, except if some terms are
neglected---see discussion below \citeeq{eq:rho_reconstruction}].
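For an isotropic velocity distribution, the angular integrals are trivial and $\Xi_n$ reduces to a one-dimensional integral over the speed. A minimal sketch, using a Maxwellian truncated at an illustrative escape speed (parameters are hypothetical, in km/s):

```python
import numpy as np
from scipy.integrate import quad

# Generic speed moment Xi_n(v_min, v_max) for an isotropic velocity
# distribution, for which d^3v -> 4 pi v^2 dv. Illustrated with a
# Maxwellian truncated at an illustrative v_esc; units km/s.
V0, VESC = 240.0, 550.0

def f_v(v):
    # un-normalized isotropic speed density: 4 pi v^2 exp(-v^2/v0^2)
    return 4.0 * np.pi * v**2 * np.exp(-v**2 / V0**2)

def xi(n, vmin, vmax):
    omega = quad(f_v, 0.0, VESC)[0]                    # normalization over full range
    num = quad(lambda v: v**n * f_v(v), vmin, vmax)[0]
    return num / omega

print(xi(1, 0.0, VESC))   # mean speed <v>
print(xi(-1, 0.0, VESC))  # inverse moment <1/v>
```

By construction $\Xi_0(0,v_{\rm max},r)=1$ when the integral covers the full speed range, which provides a quick consistency check of any implementation.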
Direct searches for WIMP dark matter are typically sensitive to the inverse moment
of the velocity, expressed as the following integral:
\begin{eqnarray}
\eta(v_{\rm min}) = \int_{v_{\rm min} \leqslant v \leqslant v_\oplus+v_{\rm esc}}
\mathrm{d}^{3}\vec{v}\, \frac{f_{\vec{v},\oplus}(\vec{v})}{v}\,,
\end{eqnarray}
where $f_{\vec{v},\oplus}$ is the WIMP velocity distribution in the rest frame of the Earth. The speed
$v_{\rm min}$ is the minimal speed a DM particle must have to induce a detectable recoil in the
detector. Consequently, low-threshold experiments are sensitive to the high-velocity tail of the
distribution.
For low-mass DM candidates (denoted $\chi$ for convenience), with masses much
lower than the target nucleus mass, the minimal speed is $v_{\rm min}\propto 1/m_\chi$ and
can be close to the maximal speed in the laboratory frame $v_{\rm max}=v_\oplus+v_{\rm esc}$,
where the Earth speed in the Galactic frame $v_\oplus$ is close to the Sun speed
$v_\odot\sim 240$~km/s. Giving an accurate description of the tail of the speed distribution
in the Galactic frame is therefore critical, and the regularization of the divergence
associated with $R_{\rm max}$ is crucial in this context. We compare the prediction of the
self-consistent Eddington inversion to the MB approximation. In the context of direct searches,
the MB distribution in the Galactic frame is usually truncated at the escape speed
\cite{LewinEtAl1996}, either
sharply,
\begin{eqnarray}
f_{\vec{v}}^{\rm shm}(\vec{v}) = \frac{1}{N_{\rm shm}}\,e^{-v^{2}/v_{\rm circ}^{2}}\,\Theta(v_{\rm esc}-v)\,,
\end{eqnarray}
where $\Theta$ is the Heaviside step function, or smoothly,
\begin{eqnarray}
f_{\vec{v}}^{\widetilde{\rm shm}}(\vec{v}) = \frac{1}{N_{\widetilde{\rm shm}}}\,
\left(e^{-v^{2}/v_{\rm circ}^{2}}-e^{-v_{\rm esc}^{2}/v_{\rm circ}^{2}}\right)\,.
\end{eqnarray}
The respective normalizations are
$N_{\rm shm}=(\pi v_{\rm circ}^{2})^{3/2} [{\rm erf}(z)-2z/\sqrt{\pi}\,\exp(-z^{2})]$ and
$N_{\widetilde{\rm shm}}=(\pi v_{\rm circ}^{2})^{3/2} [{\rm erf}(z)-2z/\sqrt{\pi}\,(1+2z^{2}/3)\,\exp(-z^{2})]$, and $z=v_{\rm esc}/v_{\rm circ}$. Note that the sharply-cut MB distribution is
obviously non-physical due to the step at $v_{\rm esc}$. We consider it nonetheless since it has
been used extensively in the direct-search literature. These deformed MB velocity distributions
are usually dubbed {\em standard halo model} (SHM). In the following, we will pick the values
of $v_{\rm circ}$ at $r=R_\odot$ consistently with the McM17 models used in this study.
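The normalizations quoted above are easy to cross-check against direct numerical integration of the two truncated distributions. A minimal sketch, with illustrative values $v_{\rm circ}=240$ km/s and $v_{\rm esc}=550$ km/s:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

v0, vesc = 240.0, 550.0   # km/s, illustrative values
z = vesc / v0

# Sharp truncation: N_shm = (pi v0^2)^{3/2} [erf(z) - 2 z/sqrt(pi) exp(-z^2)]
N_sharp = (np.pi * v0**2) ** 1.5 * (erf(z) - 2 * z / np.sqrt(np.pi) * np.exp(-z**2))
num_sharp = quad(lambda v: 4 * np.pi * v**2 * np.exp(-v**2 / v0**2), 0.0, vesc)[0]

# Smooth truncation:
# N_shm~ = (pi v0^2)^{3/2} [erf(z) - 2 z/sqrt(pi) (1 + 2 z^2/3) exp(-z^2)]
N_smooth = (np.pi * v0**2) ** 1.5 * (
    erf(z) - 2 * z / np.sqrt(np.pi) * (1 + 2 * z**2 / 3) * np.exp(-z**2))
num_smooth = quad(
    lambda v: 4 * np.pi * v**2 * (np.exp(-v**2 / v0**2) - np.exp(-z**2)), 0.0, vesc)[0]

print(np.isclose(N_sharp, num_sharp), np.isclose(N_smooth, num_smooth))
```

Both closed forms agree with the numerical integrals, and $N_{\widetilde{\rm shm}}<N_{\rm shm}$ as expected since the smooth cut removes additional weight below $v_{\rm esc}$.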
Our comparison of $\eta$ for the various cases should not depend significantly on the frame of
reference up to a Galilean shift in velocity, so for simplicity we consider the Galactic frame
rather than the Earth frame (which is the frame relevant for direct searches)---our
$v_{\rm min}$ should thereby be shifted by the Sun speed in the Galactic frame $\sim v_\odot$ to
get values more relevant to direct WIMP searches. We consider the McM17 NFW model for
illustration (see \citesec{app:mass_models}) and the different regularization methods discussed
in \citesec{ssec:rmax}, and assume
\begin{eqnarray}
\label{eq:approx_eta}
\eta(v_{\rm min})\simeq \Xi_{-1}(v_{\rm min},v_{\rm esc},R_\odot)\,.
\end{eqnarray}
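The approximation above is simple to evaluate for the SHM. A minimal sketch for the sharply-cut MB distribution in the Galactic frame, with hypothetical parameter values in km/s:

```python
import numpy as np
from scipy.integrate import quad

# eta(v_min) ~ Xi_{-1}(v_min, v_esc) for the sharply-cut MB distribution,
# in the Galactic frame; v0 and v_esc are illustrative values in km/s.
V0, VESC = 240.0, 550.0

def f_speed(v):
    # un-normalized isotropic speed density, 4 pi v^2 exp(-v^2/v0^2)
    return 4.0 * np.pi * v**2 * np.exp(-v**2 / V0**2)

NORM = quad(f_speed, 0.0, VESC)[0]

def eta(vmin):
    # inverse-moment integral int_{v_min}^{v_esc} f(v)/v dv, normalized
    return quad(lambda v: f_speed(v) / v, vmin, VESC)[0] / NORM

for vmin in (0.0, 200.0, 400.0):
    print(vmin, eta(vmin))
```

As expected, $\eta$ decreases monotonically with $v_{\rm min}$ and vanishes at $v_{\rm esc}$; the tail behavior near the cutoff is precisely where the Eddington and SHM predictions differ most.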
We compare the predictions inferred from the SHM and the Eddington inversion for
$\eta(v_{\rm min})$ in the left and right panels of \citefig{fig:one_over_v_vmin} for the isotropic
and Osipkov-Merritt cases, respectively. Generically, predictions derived from the Eddington
inversion differ significantly from that of the SHM over the whole range of $v_{\rm min}$,
as already noticed in the literature \cite{UllioEtAl2001,VergadosEtAl2003,BozorgniaEtAl2013,FornasaEtAl2014,LavalleEtAl2015}---the main difference with previous studies comes from our
rigorous treatment of the issues emphasized in \citesec{sec:issues}, and the selection of stable
configurations only. Differences are especially striking when $v_{\rm min}$ is large due to the
different shapes predicted in the tail of the speed distribution. The smoothly-cut MB distribution
is closer to the Eddington prediction than the sharply-cut MB distribution, but it is also very
discrepant near $v_{\rm esc}$. We also make the comparison with the Osipkov-Merritt
models.\footnote{We do not show the constant-$\beta$ case as the regularization is very similar to the
isotropic case.} The differences between the SHM and these models are much larger than in the
isotropic case. This illustrates the difficulty of regularizing the Osipkov-Merritt
models, for which none of the prescriptions are fully satisfactory (see \citesec{ssec:rmax}).
Either the divergence is not removed ($R_{\rm max}\rightarrow\infty$ case) or the underlying
density profile is significantly modified.
Thus, irrespective of the regularization and the anisotropy, the prediction of the self-consistent
approach systematically differs from the SHM. We are able to quantify the theoretical
uncertainties associated with the treatment of the divergence, which is especially important for
large values of $v_{\rm min}$. This is critical for low-mass DM candidates in direct searches.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.495\textwidth]{{{fig_eta_isotropic_king_models}}}
\includegraphics[width=0.495\textwidth]{{{fig_eta_vmin_om}}}
\caption{\small \textbf{Left panel:} $\eta$ integral as a function of $v_{\mathrm{min}}$. The
various curves shown are the sharply-cut SHM (solid magenta), the smoothly-cut SHM (dashed
magenta), and the predictions of the Eddington formalism for an isotropic system, with the
regularizations of the phase-space divergence discussed in \citesec{ssec:rmax}, namely setting
$R_{\mathrm{max}}$ to infinity (green), removing the diverging term (red), modifying the
density profile (blue), regularizing \`a la King with ${\cal E}_c=\Psi(R_{\rm max})$ (yellow)
or with $\tilde\Psi(R_{\rm max})=0$ (cyan). \textbf{Right panel:} Same as left panel, for the
Osipkov-Merritt model.}
\label{fig:one_over_v_vmin}
\end{figure*}
For the sake of completeness, we also compare the Eddington inversion and MB results obtained
for the observables proportional to $\bar \eta \equiv \Xi_{-1}(0,v_{\rm max},r)$, which
could be related to the capture of DM in stars or planets (\textit{e.g.}\ \cite{PressEtAl1985,Gould1987,SalatiEtAl1989,BouquetEtAl1989,BouquetEtAl1989a,Kouvaris2008,BertoneEtAl2008}), and to
$\langle v \rangle = \Xi_{1}(0,v_{\rm esc},r)$, which could be related to the microlensing event
rate of compact DM objects (\textit{e.g.}\ \cite{Griest1991,Green2017a})---the latter is simply the mean
speed across the Galaxy.
Our results are illustrated in Figs.~\ref{fig:other_1_to_vmoments} and
\ref{fig:other_vmoments} for $\bar{\eta}$ (for which we set $r=R_\odot$) and $\langle v \rangle$,
respectively. For $\bar \eta$, we see significant differences between the Eddington inversion
and the Maxwellian approximation, decreasing from $\sim 40\%$ to $\sim 10\%$ as $v_{\rm max}$ spans
the full dynamical range---we also see that isotropic DFs are poorly sensitive to the radial
cutoff treatment, in contrast to anisotropic DFs, where radial orbits come into play.
\newchange{
For the mean speed $\langle v \rangle$, the only regions where the Maxwellian approximation provides results similar to the Eddington inversion are
the outer parts of the Galaxy. The departure between the two predictions increases as the radius gets smaller, reaching up to an order of magnitude of difference at the center of the Galaxy. This should therefore be taken seriously in
predictions of related observables.}
The negative $\beta$ case leads to a mean speed
curve closer to the Maxwellian case, as expected for more circular orbits (the mean speed then tends
to the circular speed). Note that the Maxwellian results are obviously similar for all Galactic
models when both the DM and baryons are included, as these models are constrained from rotation
curves; they consequently separate from each other when only the DM halo is considered (the DM mass
profiles may vary significantly in regions dominated by the baryons). The results
obtained for the moments of the relative speed in \citesec{ssec:idlike} exhibit the same behavior.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.495\textwidth]{{fig_eta_vs_vmax_isotropic}}
\includegraphics[width=0.495\textwidth]{{fig_eta_vs_vmax_om}}
\caption{\small Same as Fig.~\ref{fig:one_over_v_vmin} for $\bar \eta \equiv
\Xi_{-1}(0,v_{\rm max},R_\odot)$.}
\label{fig:other_1_to_vmoments}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.495\textwidth]{{fig_vmean_isotropic_new}}
\includegraphics[width=0.495\textwidth]{{{fig_vmean_om_new}}}
\includegraphics[width=0.495\textwidth]{{{fig_vmean_beta-0.3_new}}}
\caption{\small Mean speed profiles for the Standard Halo Model and the Eddington formalism for
the isotropic (top left panel), Osipkov-Merritt (top right panel) and $\beta_{0} = -0.3$
(bottom panel) cases. Here we show the DM-only (thin line) and DM+baryons (thick line) cases,
for the McM17 mass models providing well-behaved Eddington-inverted DFs.}
\label{fig:other_vmoments}
\end{figure*}
\subsection{Indirect-search-like observables}
\label{ssec:idlike}
Other DM-related signals involve moments (or inverse moments) of the relative
speed instead of the speed. This concerns signals arising from two-body processes, whose
most striking example is the self-annihilation of DM. We therefore define a new moment
function for the relative speed $\vec{v}_{\rm r}=\vec{v}_2-\vec{v}_1$ between two DM particles,
\begin{subequations}
\label{eq:vr_moments}
\begin{eqnarray}
\Pi_{n}(v_{\rm min},v_{\rm max},r)&\equiv& \kappa^{-1}(r)
\int_{v_{\rm min}}^{v_{\rm max}} {\rm d}^{3}\vec{v}_{1}
\int_{v_{\rm min}}^{v_{\rm max}} {\rm d}^{3}\vec{v}_{2}\,
|\vec{v}_{\rm r}|^{n}\,
f_{\vec{v}}(\vec{v}_{1},r)\,
f_{\vec{v}}(\vec{v}_{2},r)\\
\kappa(r)&\equiv&
\int {\rm d}^{3}\vec{v}_{1} \int {\rm d}^{3}\vec{v}_{2}\,
f_{\vec{v}}(\vec{v}_{1},r) \, f_{\vec{v}}(\vec{v}_{2},r)\,,
\end{eqnarray}
\end{subequations}
where the velocity distribution $f_{\vec{v}}$ is conventionally defined by \citeeq{eq:v_df}
in the context of Eddington's inversion formalism, and the function $\kappa(r)$ ensures
the correct normalization to unity over the relevant range of individual speeds
[equal to unity by construction in the Eddington formalism, unless some terms are
neglected---see the discussion below \citeeq{eq:rho_reconstruction}].
Indirect searches for self-annihilating DM are sensitive to the following moments:
\begin{eqnarray}
\label{eq:moments_vr}
\langle |\vec{v}_{\rm r}|^{n} \rangle (r) = \Pi_{n}(0,v_{\rm esc},r)\,.
\end{eqnarray}
Searches for $p$-wave annihilation typically probe the (relative) velocity dispersion ($n=2$),
though in some interaction models the annihilation cross-section can be modified by
non-perturbative effects \cite{HisanoEtAl2005} that lead to the so-called Sommerfeld enhancement,
which induces a dependency on the $n=-1$ moment---as well as the $n=-2$ moment at resonances. Note
that in practice it proves convenient to perform the following change of variable to express the
integrals in terms of the center-of-mass velocity $\vec{v}_{\rm c}$ and relative velocity
$\vec{v}_{\rm r}$ (\textit{e.g.}\ \cite{GondoloEtAl1991}):
\begin{eqnarray}
\left\{
\begin{array}{ll}
\vec{v}_{\rm c} &= (\vec{v}_{1}+\vec{v}_{2})/2\\
\vec{v}_{\rm r} &= \vec{v}_{2}-\vec{v}_{1}.
\end{array}
\right.
\end{eqnarray}
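Note that this change of variables has unit Jacobian, ${\rm d}^{3}\vec{v}_{1}\,{\rm d}^{3}\vec{v}_{2} = {\rm d}^{3}\vec{v}_{\rm c}\,{\rm d}^{3}\vec{v}_{\rm r}$, and that $v_{1}^{2} + v_{2}^{2} = 2 v_{\rm c}^{2} + v_{\rm r}^{2}/2$. As a standard check (independent of the mass models used here), in the Maxwellian limit with one-dimensional dispersion $\sigma$ the two-particle distribution therefore factorizes,
$$ e^{-(v_{1}^{2}+v_{2}^{2})/(2\sigma^{2})} = e^{-v_{\rm c}^{2}/\sigma^{2}}\, e^{-v_{\rm r}^{2}/(4\sigma^{2})}\,, $$
so that $\vec{v}_{\rm c}$ and $\vec{v}_{\rm r}$ are themselves Maxwellian with dispersions $\sigma/\sqrt{2}$ and $\sqrt{2}\,\sigma$, respectively.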
As a result, \citeeq{eq:moments_vr} can be rewritten
\begin{eqnarray}
\langle v_{\rm r}^{n} \rangle =
\int {\rm d}^{3}\vec{v}_{\rm r}\,|\vec{v}_{\rm r}|^{n}\,F_{\rm r}(\vec{v}_{\rm r},r)\,,
\end{eqnarray}
where $F_{\rm r}$ is the relative velocity DF, which is defined as
\begin{eqnarray}
\label{eq:def_vr_df}
F_{\rm r}(\vec{v}_{\rm r},r) \equiv \kappa^{-1}(r)
\int {\rm d}^{3}\vec{v}_{\rm c}\,f_{\vec{v}}(\vec{v}_{1},r)\,f_{\vec{v}}(\vec{v}_{2},r)\,.
\end{eqnarray}
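In the Maxwellian limit, $F_{\rm r}$ is itself Maxwellian with dispersion scaled by $\sqrt{2}$, so $\langle |\vec{v}_{\rm r}| \rangle = \sqrt{2}\,\langle |\vec{v}| \rangle$. A minimal Monte Carlo sketch of this consistency check (illustrative dispersion only, not a fit to any Galactic model):

```python
import numpy as np

# For two particles drawn from the same isotropic Gaussian, the relative
# velocity v_r = v2 - v1 is Gaussian with the per-axis variance doubled,
# hence <|v_r|> = sqrt(2) * <|v|> in the Maxwellian limit.
rng = np.random.default_rng(1)
sigma, n = 1.0, 400_000
v1 = rng.normal(0.0, sigma, size=(n, 3))
v2 = rng.normal(0.0, sigma, size=(n, 3))

ratio = np.linalg.norm(v2 - v1, axis=1).mean() / np.linalg.norm(v1, axis=1).mean()
print(ratio)  # close to sqrt(2) ~ 1.414
```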
The full derivation of $F_{\rm r}(\vec{v}_{\rm r},r)$ is given in \citeapp{app:relative_dist_DF} in
the Eddington formalism and its anisotropic extensions discussed above. To our knowledge, the
computation of $F_{\rm r}(\vec{v}_{\rm r},r)$ in the general anisotropic case is an original result.
An alternative treatment for the Osipkov-Merritt models is presented in Ref.~\cite{PetacEtAl2018}.
We show the predictions for the relative speed moments inferred from the
\newchange{(smoothly-truncated)}
SHM and the isotropic
Eddington inversion in \citefig{fig:moments_vr}. Following our discussion regarding the stability
of the DFs, we only consider here mass models leading to stable solutions of the Boltzmann
equation. The velocity distributions in the Eddington case have been computed \textit{without} the
diverging term, \textit{i.e.}~using \citeeq{eq:df_without_divergence}. We recall that this is in practice
similar to assuming a flattened density profile at the outskirts of the halo, as in
\citeeq{eq:alt_density}. One can see in \citefig{fig:moments_vr} that, for both the SHM and the
Eddington model, the moments with and without baryons converge at large radii. This is because the
total mass, and therefore the gravitational dynamics, is then fully dominated by DM, and
baryons become irrelevant. Though similar in shape, predictions from the two models are numerically
quite different. For the $n>0$ moments, the Eddington model's predictions typically exceed the
SHM's. At the center of the Galaxy, the two models differ by at least an order of magnitude, up to
three orders of magnitude. The hierarchy of the moments with respect to the value of the DM inner
slope $\gamma$ is also reversed. While the cuspiest mass model ($\gamma=1$) leads to the largest
prediction for the SHM, it is the model closest to a cored profile ($\gamma=0.25$) that dominates the
Eddington result.
We stress that even locally at $r=R_\odot\sim 8\,\rm kpc$, and for all $n$, there are
sizable differences between the Eddington formalism and the Maxwell-Boltzmann approximation.
Therefore, since the Eddington formalism turns out to better capture the dynamical properties of
the DM halo than the SHM \cite{LacroixEtAl2018}, the latter should only be used to make very rough
estimates of $p$-wave annihilating DM signals, even when isotropy is assumed.
We also compared the (isotropic) SHM with some of the anisotropic extensions of the Eddington
formalism. The prediction of the Osipkov-Merritt model is shown in \citefig{fig:moments_vr_om} for
a particular choice of the anisotropy radius $r_{\rm a}=r_{\rm s}$. Note that the value of $r_{\rm s}$
depends on the underlying mass model (see \citetab{tab:dm_mass_models}). The result is close to the
isotropic case at radii $r\ll r_{\rm a}$, as expected from the behavior of the anisotropy parameter
\citeeq{eq:beta_om}. At large radii however, the slope of the moments steepens significantly. The
steepening starts roughly where $r\simeq r_{\rm a}$, which is where the system begins to be strongly
anisotropic. We stress again that the regularization of the diverging term considerably changes
the underlying density profile in the Osipkov-Merritt case, as seen from
\citefig{fig:reconstructed_rho}. The behavior of $\langle v_{\rm r}^{n}\rangle $ beyond
$r=r_{\rm a}$ should therefore be treated with caution. We also studied the constant anisotropy
case, focusing on $\beta_{0}=-0.3$. We considered a negative anisotropy to get a well-defined DF
for all the mass models of relevance here. The corresponding relative speed moments are shown in
\citefig{fig:moments_vr_beta}. They differ from the isotropic ones at all radii, unlike the
Osipkov-Merritt ones, which is not surprising since the constant anisotropy is nonzero everywhere.
Regardless of the assumption made on the anisotropy, the Eddington formalism generically predicts
huge differences with respect to the SHM. The various anisotropic models we used allow us to
bracket the theoretical uncertainty on the Eddington method.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.495\textwidth]{{{moment_vr1_isotropic_v2}}}
\includegraphics[width=0.495\textwidth]{{{moment_vr2_isotropic_v2}}}
\includegraphics[width=0.495\textwidth]{{{moment_vr-1_isotropic_v2}}}
\includegraphics[width=0.495\textwidth]{{{moment_vr-2_isotropic_v2}}}
\caption{\small Moments of the relative velocity distribution, for the Standard Halo Model and
the Eddington formalism (isotropic case). Here we show the DM-only (thin line) and
DM+baryons (thick line) cases, for several mass models from Ref.~\cite{McMillan2017}.}
\label{fig:moments_vr}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.495\textwidth]{{{moment_vr1_om_v2}}}
\includegraphics[width=0.495\textwidth]{{{moment_vr2_om_v2}}}
\includegraphics[width=0.495\textwidth]{{{moment_vr-1_om_v2}}}
\includegraphics[width=0.495\textwidth]{{{moment_vr-2_om_v2}}}
\caption{\small Same as \citefig{fig:moments_vr}, for the Osipkov-Merritt model with
$r_{\rm a}=r_{\rm s}$.}
\label{fig:moments_vr_om}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.495\textwidth]{{{moment_vr1_cst_beta_v2}}}
\includegraphics[width=0.495\textwidth]{{{moment_vr2_cst_beta_v2}}}
\includegraphics[width=0.495\textwidth]{{{moment_vr-1_cst_beta_v2}}}
\includegraphics[width=0.495\textwidth]{{{moment_vr-2_cst_beta_v2}}}
\caption{\small Same as \citefig{fig:moments_vr}, for the constant-$\beta$ model with
$\beta_{0}=-0.3$.}
\label{fig:moments_vr_beta}
\end{figure*}
\section{Conclusion}
\label{sec:concl}
In this paper, we have reviewed the Eddington inversion formalism, and a few of its generic
extensions to anisotropic systems. This formalism provides a powerful way to consistently include
the dynamical correlations exhibited by a self-gravitating system in velocity-dependent
DM-search observables, starting from a mass model constrained by real data. It represents a strong improvement over
the Maxwellian approximation from both the theoretical and quantitative points of view, and should
therefore become a ``next-to-minimal'' standard approach to refine the predictions and better
quantify the dynamical uncertainties in DM-search predictions pertaining to (sub)Galactic scales.
It is also more appealing theoretically than blindly using ad-hoc fits from cosmological
simulations, which likely hide environmental dependencies or other artifacts.
Though not as evolved nor as adaptable to a large variety of potential-density pairs as the
action-angle formalism \cite{SandersEtAl2016},
Eddington's inversion method still provides a decent description of galactic DM halos
\cite{LacroixEtAl2018} from a minimal set of physical assumptions and a moderate level
of technicalities\change{--pending the breakdown of some assumptions (\textit{e.g.}~spherical symmetry,
steady state, smoothness of the dark halo, \textit{etc}.) that induces additional systematic errors which
remain to be quantified}. Inspecting the self-consistency of this approach is therefore
particularly important at the time of a boost in astrometric precision made
possible by the Gaia mission \cite{BrownEtAl2018}, which should provide much better constraints on
the DM content of the MW and its satellites. This is of special relevance in the context of
intense DM searches, as the Eddington-like inversion methods are well suited to better control and
further reduce the astrophysical uncertainties in the signal predictions---\textit{e.g.}\ in direct
\cite{UllioEtAl2001,CatenaEtAl2012,FornasaEtAl2014}, indirect \cite{FerrerEtAl2013,Hunter2014,
BoddyEtAl2017,PetacEtAl2018}, or combined \cite{CerdenoEtAl2016} WIMP searches, but not only.
After carefully inspecting the Eddington inversion formalism in \citesec{sec:edd}, however, we
noticed several theoretical issues related to (i) the radial boundary of the dark halo,
important to make sense of the constraints on the escape speed \cite{LavalleEtAl2015}, and (ii)
to the stability of the phase-space DF, which have been overlooked in the DM-related literature,
but which are actually expected to arise very often when Eddington inverting Galactic mass models
with a baryonic component. We have described and addressed these issues in \citesec{sec:issues},
and provided generic methods to cure some of the potential inconsistencies. For the divergence
induced in the phase-space DF in the limit of $v\to v_{\rm esc}$ (see \citesec{ssec:rmax}), after
explaining why the diverging term $\propto 1/\sqrt{\cal E}$ should actually not be dropped, we
defined two ways of getting a non-anomalous phase-space DF without spoiling too much the initial
mass model, based either on {\em a priori} modifications of the DM profile or on new converging
ans\"atze for the DF itself. Properly describing the boundary of the system is particularly
important to characterize
the theoretical uncertainties affecting DM-search observables depending on the high-velocity tail
of the DM velocity distribution, like the direct detection rate of GeV- or subGeV-mass WIMPs.
The proposed regularization methods proved efficient in all cases, except for the
anisotropic Osipkov-Merritt model, which cannot consistently accommodate radial boundaries. As for
the stability issue (see \citesec{ssec:gamma}), we recovered stability criteria for
both the isotropic case [see \citeeq{eq:stability2}] and the anisotropic case [see
\citeeq{eq:stability_anisotropy}, a criterion for stability against radial perturbations only].
The former criterion can be complemented by \citeeq{eq:stability3} to ensure a smoother
velocity distribution, although this restricts the phase-space volume beyond what stability
alone requires. We also analyzed these stability conditions to derive selection criteria for a Galactic
halo based on its relative baryonic content---see \citeeq{eq:stability_baryons} and
\citefig{fig:contours}. This allows one to quickly check whether \change{one's} favorite Galactic
mass model is Eddington invertible or not. The main conclusion of this part is that Eddington's
inversion (and its anisotropic extensions) cannot be blindly applied to an arbitrary density-potential pair.
Particular attention should be given to moderately cuspy or cored halo profiles, which are more
likely to exhibit ill-defined DFs.
Finally, we have explicitly computed some velocity-dependent DM-search observables to
compare the Eddington inversion predictions with those derived in the Maxwellian
approximation, in the framework of the McM17 constrained Galactic mass models \cite{McMillan2017}.
In particular, we have computed observables that depend on the speed moments
(and inverse moments), and on the relative speed moments (and inverse moments). The former
are relevant to the direct WIMP detection rate, the latter to $p$-wave self-annihilation of
DM. For the self-annihilation case, we have derived a convenient way to express the relative
velocity distribution function for anisotropic systems, reviewed in
\citeapp{app:relative_dist_DF}. We have seen that the differences are quite sizable in all
observables, which somewhat quantifies the associated level of theoretical uncertainties.
We will actually show in a forthcoming study that the Eddington inversion methods provide a
significantly better description of the true DF than the Maxwellian approximation in
zoomed-in cosmological simulations with baryons \cite{LacroixEtAl2018}. This further motivates
applications of this approach to exploit the upcoming Gaia-constrained mass models in the
context of DM searches.
\acknowledgments{
The PhD grant of MS is funded by the OCEVU Labex (ANR-11-LABX-0060), which also provided
financial support to this project. We also benefited from financial support from the theory
project {\em Galactic Dark Matter} funded by CNRS-IN2P3. We further acknowledge support from the
European Union's Horizon 2020 research and innovation program under the Marie
Sk\l{}odowska-Curie grant agreements No 690575 and No 674896, besides recurrent institutional
funding by CNRS-IN2P3 and the University of Montpellier.
}
\section{Introduction}
We begin with background on the Steenrod squares. We then discuss quantum cohomology and state the results of this paper.
The Steenrod squares are cohomology operations that are uniquely defined by a set of axioms, although the uniqueness statement does not itself provide a construction. There are multiple ways of constructing the squares, one of which involves constructing the operations on $H^{*}(K(\mathbb{Z}/2,n);\mathbb{Z}/2)$ for the Eilenberg-MacLane spaces $K(\mathbb{Z}/2,n)$.
For a topological space $M$, the Steenrod squares are additive homomorphisms $$ Sq^{i} : H^{n}(M) \rightarrow H^{n+i}(M)$$
using $\mathbb{Z}/2$ coefficients. They generalize the squaring operation on cohomology with respect to the cup product, $x \mapsto x \cup x$. The $\mathbb{Z}/2$ coefficients ensure that the $Sq^i$ are additive. These Steenrod squares together determine a degree doubling operation that we call the Steenrod square,
$$ Sq: H^*(M) \rightarrow H^*(M)[h],$$ where $$Sq(x) = \sum Sq^{|x|-i}(x) \, h^{i} .$$
Here $h$ is a formal variable in degree 1 that represents the generator of $$H^{*}(\mathbb{RP}^{\infty}; \mathbb{Z}/2)=\mathbb{Z}/2[h].$$
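For instance, for a class $x$ of degree $1$, using $Sq^{0} = id$ and $Sq^{|x|}(x) = x \cup x$, this convention reads
$$ Sq(x) = Sq^{1}(x) + Sq^{0}(x)\,h = x \cup x + x\,h .$$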
The Steenrod square satisfies the Cartan relation
\begin{equation} \label{equation:introcartan} Sq(x \cup y) = Sq(x) \cup Sq(y) \end{equation}
which, for example, allows one to inductively compute the Steenrod squares for the cohomology of $\mathbb{CP}^n$ (which we will review in Example \ref{exmpl:classcpn}). The Steenrod square also satisfies the Adem relations, which are relations between compositions of the $Sq^i$. Namely, for all $p,q>0$ such that $q<2p$,
\begin{equation} \label{equation:ademrel} Sq^{q}Sq^{p} = \sum_{s=0}^{[q/2]} {{p-s-1}\choose{q-2s}}Sq^{p+q-s} Sq^{s} \end{equation} where $[q/2]$ is the integer part of $q/2$. The Adem relations are classically implied by the axioms.
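The binomial coefficients in \eqref{equation:ademrel} reduce mod $2$, so small instances are easy to tabulate. A short sketch (the function name \texttt{adem\_rhs} is ours, purely for illustration) recovers the familiar relations $Sq^{1}Sq^{1} = 0$ and $Sq^{2}Sq^{2} = Sq^{3}Sq^{1}$:

```python
from math import comb

def adem_rhs(q, p):
    """Mod-2 right-hand side of the Adem relation for Sq^q Sq^p (0 < q < 2p),
    returned as {(a, b): 1} meaning the term Sq^a Sq^b appears."""
    assert 0 < q < 2 * p
    return {(p + q - s, s): 1
            for s in range(q // 2 + 1)
            if comb(p - s - 1, q - 2 * s) % 2 == 1}

print(adem_rhs(1, 1))  # {}           i.e. Sq^1 Sq^1 = 0
print(adem_rhs(2, 2))  # {(3, 1): 1}  i.e. Sq^2 Sq^2 = Sq^3 Sq^1
```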
This paper begins in Section \ref{sec:preliminaries} with a preliminary section that explains in more detail the relevant background material.
We will then describe two different constructions of the Steenrod square in Section \ref{sec:morssteensqu}: the first construction uses Morse homology and the second uses intersections of cycles. The first construction is based on the definition for Floer theory by Seidel in \cite{seidel}, the origins of which are in the flowlines construction of Betz in \cite{betz}, and Fukaya in \cite{fukaya}, the former of which was extended to a more categorical definition by Betz, Cohen and Norbury in \cite{betzcoh,cohnor}. The second construction we give will be isomorphic to the first construction, using the isomorphism between Morse and singular cohomology.
After considering these constructions of the Steenrod square, in Section \ref{sec:SqQviaMorse} we extend them to define a quantum Steenrod square on the quantum cohomology of a closed monotone symplectic manifold $(M,\omega)$.
In Section \ref{subsec:quantcupprod} we will give details of the quantum cohomology $QH^*(M)$. Briefly, $QH^*(M,\omega)$ is $H^*(M)[[t]]$ as a vector space, using a graded formal variable $t$ of degree $2$. However, the cup product is deformed by quantum contributions from counting 3-pointed genus zero Gromov-Witten invariants, that is, by counting certain $J$-holomorphic spheres in $M$, where $J$ is an almost complex structure on $M$ compatible with $\omega$. We often abbreviate $T = t^N$, where $N$ is the minimal Chern number.
The quantum Steenrod square will be a degree doubling operation, denoted $Q \mathcal{S}$, where \begin{equation} \label{equation:sqstatement} Q\mathcal{S} : QH^{*}(M) = H^{*}(M)[[t]] \rightarrow H^{*}(M)[[t]][h] = QH^{*}(M)[h]. \end{equation} As in the case of the classical Steenrod square, $Q\mathcal{S}$ will be built using additive homomorphisms $Q\mathcal{S}_{i,j}: QH^*(M) \rightarrow QH^{2*-i-2jN}(M)$, so $$Q\mathcal{S}(x) = \sum_{i,j \ge 0} Q\mathcal{S}_{i,j}(x) h^i T^j.$$
The quantum Steenrod square is not necessarily axiomatically defined, but a construction was first suggested by Fukaya in \cite{fukaya} based on his Morse homotopy theory. Our construction is different from Fukaya's, and can be viewed as a Morse theory analogue of the work by Seidel in Floer theory in \cite{seidel}. The first goal of this paper is to solve an open problem posed by Fukaya in \cite[Problem 2.11]{fukaya} as to whether the Adem and Cartan relations hold for quantum Steenrod squares and, if not, what their quantised versions should be. Our second goal is to explore consequences of the solution to this problem, specifically in computations for certain closed monotone symplectic manifolds.
In answer to the first part of Fukaya's problem, the immediate generalisations of the Cartan and Adem relations fail. In the case of the Cartan relation, this means that it is not in general true that $Q \mathcal{S}(x*y) = Q \mathcal{S}(x) * Q \mathcal{S}(y)$. We will show this in the following example.
\begin{exmpl}
\label{exmpl:difficulties}
In Definition \ref{defn:mqss} of the quantum Steenrod square, we will see that \begin{equation} \label{equation:qssttot2} Q \mathcal{S}(aT) = Q \mathcal{S}(a)T^2 \end{equation} for any $a \in QH^*(M)$.
Let $M = \mathbb{P}^{1}$. Let $x$ be the generator of $H^{2}(M)$. Recall that the quantum product is $x*x=T$, where $T$ has degree $4$. Then $$Q\mathcal{S}(x*x) = Q\mathcal{S}(T)=T^{2},$$ using \eqref{equation:qssttot2} and the fact that $Q \mathcal{S}(1) = 1$. Using degree reasons, knowledge of the classical Steenrod square for $\mathbb{P}^1$ and of the quantum cohomology ring, one can show that $Q\mathcal{S}(x) = xh^{2}+T$. Then $$Q\mathcal{S}(x)*Q\mathcal{S}(x) = (xh^{2}+T)*(xh^{2}+T) = Th^{4}+T^{2}.$$
Hence, in this case $Q \mathcal{S}(x) * Q \mathcal{S}(x) \neq Q \mathcal{S}(x*x)$.
\end{exmpl}
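The arithmetic in this example can be replayed mechanically. Below is a minimal sketch of $QH^*(\mathbb{P}^1)[h]$ over $\mathbb{Z}/2$, encoding monomials $x^a T^b h^c$ and imposing the quantum relation $x * x = T$ (the encoding is ours, purely for illustration):

```python
from collections import Counter

def mul(p, q):
    """Multiply two Z/2-linear combinations of monomials x^a T^b h^c in
    QH*(P^1)[h], represented as sets of tuples (a, b, c), reducing x^2 to T."""
    acc = Counter()
    for (a1, b1, c1) in p:
        for (a2, b2, c2) in q:
            a, b, c = a1 + a2, b1 + b2, c1 + c2
            b, a = b + a // 2, a % 2  # quantum relation x * x = T
            acc[(a, b, c)] += 1
    return {m for m, k in acc.items() if k % 2}  # keep coefficients mod 2

QSx = {(1, 0, 2), (0, 1, 0)}  # QS(x) = x h^2 + T
print(mul(QSx, QSx))          # {(0, 1, 4), (0, 2, 0)}, i.e. T h^4 + T^2
```

The cross terms $x h^2 \cdot T$ appear twice and cancel mod $2$, leaving $T h^4 + T^2$ as in the example.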
In Section \ref{sec:quancar} we will prove why the Cartan relation does not immediately generalise and compute the actual quantum Cartan relation. Briefly, the quantum Cartan relation is deformed because the moduli space $\overline{M}_{0,5}$ of genus zero stable curves with 5 marked points $(z_0,z_1,z_2,z_3,z_4)$ has non-trivial $\mathbb{Z}/2$-equivariant cohomology, under the $\mathbb{Z}/2$ action that transposes marked points via the permutation $(12)(34)$. More precisely, the two configurations in Figure \ref{fig:m05elmts} that determine $Q\mathcal{S}(x*y)$ and $Q\mathcal{S}(x) * Q\mathcal{S}(y)$ are not connected by a $\mathbb{Z}/2$-invariant path in $\overline{M}_{0,5}$.
We will prove that a quantum deformation of the Cartan relation holds:
\begin{thm}[Quantum Cartan relation]
\label{thm:quancar}
$$Q\mathcal{S}(x*y) = Q\mathcal{S}(x)*Q\mathcal{S}(y) + \sum_{i,j} q_{i,j}(W_0 \times D^{i-2,+})(x,y)h^{i}$$ where the correction term is written in terms of linear homomorphisms $$q_{i,j}: H_*^{\mathbb{Z}/2}(\overline{M}_{0,5}) \otimes QH^*(M) \otimes QH^*(M) \rightarrow QH^*(M),$$ such that $q_{i,j}(W_0 \times D^{i-2,+})$ is nonzero only if $i \ge 2$ and $j > 0$. The $q_{i,j}$ will be defined precisely in Definition \ref{defn:qopn}.
\end{thm}
In the correction term, $$W_0 \times D^{i,+} \subset \overline{M}_{0,5} \times_{\mathbb{Z}/2} S^{\infty},$$ where $W_0 \subset \overline{M}_{0,5} \simeq Bl_{\{(0,0),(1,1),(\infty,\infty) \} } (\mathbb{CP}^1 \times \mathbb{CP}^1)$ is the exceptional divisor over $(0,0)$ (compare to Figure \ref{fig:w0}). The notation $D^{i,+}$ means the upper $i$-dimensional hemisphere in $S^i \subset S^{\infty}$. In fact, we are abusing notation: we are really interested in the homology class represented by $W_0 \times D^{i,+}$ in $H_*(\overline{M}_{0,5} \times_{\mathbb{Z}/2} S^{\infty})$, where we use singular homology.
In Section \ref{sec:computingqsstoric} we use Theorem \ref{thm:quancar} to calculate the quantum Steenrod squares for a Fano toric variety $M$, as proven in Theorem \ref{thm:SqQtoric}. Here, for $\mu \in H_2(M; \mathbb{Z})$ (which is a free $\mathbb{Z}$-module as a Fano toric variety $M$ is simply connected), let $\mu_2$ be the image of $\mu$ under $H_2(M ; \mathbb{Z}) \rightarrow H_2(M ; \mathbb{Z}/2)$. Denote by $x *_{ \mu, k} y$ the coefficient of $t^{kN}$ in the quantum product $x * y$, using spheres representing $\mu$. Let $N$ be the minimal Chern number, and $|t|=2$.
\begin{thm}
\label{thm:SqQtoric}
Let $M$ be a Fano toric manifold. For $b, x \in H^*(M)$ and $|x| = 2$,
\begin{equation} \label{equation:SqQtoric} q_{i,j}(W_0 \times D^{i,+})(b,x) = \sum_{j \ge 1} \sum_{k=1}^{j} \sum_{c_1(\mu) = 2kN} n(x,\mu_2) \cdot \left( Q\mathcal{S}_{|b|-i+2,j-k}(b) *_{\mu, k} x \right) \cdot t^{jN} \end{equation}
summing over a basis of $\mu \in H_2(M; \mathbb{Z})$, so $c_1(\mu) = 2kN \in \mathbb{Z}$ and if $\chi$ is some pseudocycle representative of $x$ then $n(x,\mu_2) := \# (\chi \bullet \mu_2) \in \mathbb{Z}/2$.
\end{thm}
For example, if $M = \mathbb{CP}^n$ then setting $b = x^i$ for the generator $x \in H^2(\mathbb{CP}^n)$, we will show in Lemma \ref{lem:qWcpn} that:
$$ q_{4i+2-2n,1}(W_0 \times D^{4i-2n,+})(x^{i},x) = {{i} \choose {n-i}} T, \text{ else } q_{i,j} = 0.$$ Hence
\begin{equation} \label{equation:qsforcpn} Q\mathcal{S}(x^{i}) = \sum_{j=0}^{i} \left( {{i} \choose {j}}+ \sum_{k=0}^{\lfloor n/2 \rfloor + 1} {{n-k}\choose{k}}\cdot {{i-(n+1-k)} \choose {j-k}} \right) x^{i+j} h^{2(i-j)}, \end{equation} where $x^p$ denotes the $p$-th quantum power of $x$. In particular, if $i+j \ge n+1$ in \eqref{equation:qsforcpn} then $x^{i+j}$ refers to $x^{i+j - n - 1} T$. Omitting the inner summation would give the classical Steenrod square.
\begin{corollary}
\label{corollary:fanotoricdecided}
Let $M$ be a Fano toric manifold. Then, if we can compute $QH^*(M)$ (over the Novikov ring as in \cite[Section 9.2]{jhols}), we can compute $Q\mathcal{S}$ through recursive calculations.
\end{corollary}
In Section \ref{sec:QAR} we extend the Adem relations from Equation \eqref{equation:ademrel} to the quantum Steenrod square. In order to state the quantum Adem relation, we next introduce operations $Q\mathcal{S}^{a,b}: QH^*(M) \rightarrow QH^*(M)$, such that the sum of these $Q\mathcal{S}^{a,b}$ is the total quantum Steenrod square $Q \mathcal{S}$. The index $a$ is the change in homological degree, and the index $b$ is the change in the index of $T$.
\begin{defn}
Define $Q\mathcal{S}^{a,b}$ by $$Q\mathcal{S} (x T^i) = \sum_{a,b \in \mathbb{Z}} Q\mathcal{S}^{a,b} (x T^i) \cdot h^{|x| - 2N(b+i) - a} \quad \textrm{where} \quad Q\mathcal{S}^{a,b}(xT^i) \in T^{b+i} H^{|x|+a}(M),$$ for any $x \in H^*(M)$.
\end{defn}
Computing $Q\mathcal{S}^{a,b}$ for $\mathbb{CP}^2$, we find that the naive generalisation of the Adem relation (Equation \eqref{equation:ademrel}), namely $$\sum_{b,d} \left( Q\mathcal{S}^{q-2bN,b} \circ Q\mathcal{S}^{p-2dN,d}(\alpha) - \sum_{s=0}^{q/2} {{p-s-1}\choose{q-2s}} Q\mathcal{S}^{p+q-s-2bN,b} \circ Q\mathcal{S}^{s-2dN,d}(\alpha) \right) = 0,$$ does not hold, as in the example below.
\begin{exmpl}
Let $M = \mathbb{CP}^2$, so $2N = 6$. Then $Q\mathcal{S}^{2-2N,1} \circ Q\mathcal{S}^{2-0N,0}(x) = T$, but $$\sum_{s=0}^{s = 1} {{1-s}\choose{2-2s}} Q\mathcal{S}^{2+2-s-2iN,i} \circ Q\mathcal{S}^{s-2jN,j}(x) = 0$$ for all $i,j$.
\end{exmpl}
In order to prove the quantum Adem relation, we begin with the technical Theorem \ref{thm:QAR}. The terminology used in Theorem \ref{thm:QAR} will be fully defined in Section \ref{subsec:QAR}.
\begin{thm}
\label{thm:QAR}
For $M$ a closed monotone symplectic manifold, with $\alpha \in QH^*(M)$, and for $p,q>0$ such that $q<2p$:
$$qq_{|\alpha|+p-q,|\alpha|-p}(\alpha) = \sum_{s=0}^{q/2} {{p-s-1}\choose{q-2s}} qq_{|\alpha|+2s-p-q,|\alpha|-s}(\alpha).$$
\end{thm}
Here $$qq: H^*(M) \rightarrow QH^*(M) \otimes H^*(BD_8)$$ is a homomorphism, where $D_8$ is the dihedral group of order $8$. This $qq$ operation encodes the data of the composition $Q \mathcal{S} \circ Q \mathcal{S}$. The ring $H^*(BD_8)$ has three generators, labelled $e, \sigma_1, \sigma_2$ (of which we only need to consider $e$ and $\sigma_2$). We then denote by $qq_{i,j}(\alpha)$ the coefficient of $e^i \sigma_2^j$ in $qq(\alpha)$, defined in Equation \eqref{equation:qqijdef}.
This should be compared to Equation \eqref{equation:ademrel}. The above theorem leads to a corollary in more familiar terms:
\begin{corollary}[Quantum Adem Relations]
\label{corollary:QAR}
For $p,q > 0$ such that $q < 2p$, and $\alpha \in QH^*(M)$,
\begin{equation} \label{equation:QAR} \sum_{b,d} \left( Q\mathcal{S}^{q,b} \circ Q\mathcal{S}^{p,d}(\alpha) - \sum_{s=0}^{q/2} {{p-s-1}\choose{q-2s}} Q\mathcal{S}^{p+q-s,b} \circ Q\mathcal{S}^{s,d}(\alpha) \right) = T \cdot Q(\alpha) \end{equation} for the correction term
$$
\begin{array}{rcl}
T \cdot Q(\alpha) &=& q_{D_8}((g m_1 + g^2 m_1) \otimes \Psi (e^{|\alpha| + p - q} \sigma_2^{|\alpha|-p}))(\alpha)
\\[0.5em] &&- \sum_{s=0}^{[q/2]} {{p-s-1}\choose{q-2s}} q_{D_{8}}((g m_1 + g^2 m_1) \otimes \Psi (e^{|\alpha| +2s - p - q} \sigma_2^{|\alpha|-s}))(\alpha).
\end{array}
$$
\end{corollary}
In the above corollary, the dihedral group $D_8 = \langle (12), (13)(24) \rangle \subset S_4$ acts on the four incoming marked points $(z_1,z_2,z_3,z_4)$ by permutations. The operation $q_{D_8}$ is a linear homomorphism determined by homology classes in $\overline{M}_{0,5} \times_{D_8} ED_8$, so $q_{D_8}(A) : H^*(M) \rightarrow H^{4*-|A|}(M)$ for $A \in H_*(\overline{M}_{0,5} \times_{D_8} ED_8)$. It is analogous to the $q_{i,j}$ in Theorem \ref{thm:quancar}. Here $m_1 \in \overline{M}_{0,5}$ (see Figure \ref{fig:m05elmts}), $e^i \sigma_2^j \in H^*(BD_8)$ and $\Psi: H^*(\overline{M}_{0,5} \times_{D_8} ED_8) \rightarrow H_*(\overline{M}_{0,5} \times_{D_8} ED_8)$ is the universal coefficients isomorphism. Let $g=(123) \in S_4$, such that the cosets of $D_8$ in $S_4$ are $D_8, gD_8, g^2 D_8$.
In Section \ref{sec:blowups}, we calculate $Q \mathcal{S}$ in the case of the blowups $M= Bl_Y(\mathbb{CP}^3)$ and $M= Bl_Y(\mathbb{CP}^1 \times \mathbb{CP}^1 \times \mathbb{CP}^1)$ where $Y$ is respectively the intersection of two quadrics and the intersection of two linear hypersurfaces. The setup here is similar to Blaier \cite{blaier}. Most of the squares can be computed using the methods from Section \ref{sec:computingqsstoric}. The new computation is of $Q \mathcal{S}_{1,1}$, which is given in the following theorem.
\begin{thm}
\label{thm:blowupID}
\begin{equation} \label{equation:blowupID1} \qquad Q\mathcal{S}_{1,1} = id : H^3(Bl_Y(\mathbb{CP}^3)) \rightarrow H^3(Bl_Y(\mathbb{CP}^3)) \end{equation}
and
\begin{equation} \label{equation:blowupID2} \qquad Q\mathcal{S}_{1,1} = id : H^3(Bl_Y(\mathbb{CP}^1 \times \mathbb{CP}^1 \times \mathbb{CP}^1)) \rightarrow H^3(Bl_Y(\mathbb{CP}^1 \times \mathbb{CP}^1 \times \mathbb{CP}^1)). \end{equation}
\end{thm}
Observe that the maps $Q\mathcal{S}_{1,1}$ are quantum correction terms to the classical Steenrod square on the blowup $M$. They are determined by lifts of contributions to the classical Steenrod square on $Y$.
\subsection*{Acknowledgements}
I thank my supervisor Alexander Ritter for his guidance and support throughout my Ph.D. and beyond. I thank Paul Seidel for suggesting this project, for helpful conversations and for providing funding for part of the writing process. I thank the anonymous referee for all of the comments and points of clarification, which have been crucial both to correct errors and to improve the exposition. I also thank Dominic Joyce and Ivan Smith, Frances Kirwan and Ulrike Tillmann for helpful comments and corrections on my thesis, and therefore this paper.
This forms a chapter of my Ph.D. thesis.
\section{Preliminaries}
\label{sec:preliminaries}
Henceforth we always work with coefficients in $\mathbb{Z}/2$, unless otherwise stated. For example $H^{*}(M)$ means $H^{*}(M;\mathbb{Z}/2)$.
\subsection{Equivariant Cohomology}
\label{subsec:equivcohom}
We follow \cite[Section 2]{seidel}.
\begin{defn}[Equivariant cohomology of a chain complex]
\label{defn:equcohom}
Let $(C^{\bullet},d)$ be a cochain complex over $\mathbb{Z}/2$. Suppose $(C^{\bullet},d)$ has a chain involution $\iota$, so $\iota: C^{\bullet} \rightarrow C^{\bullet}$ is a chain map with $\iota^2 = id_{C^{\bullet}}$. Let $h$ be a formal variable in grading $1$. The equivariant chain complex is $$(C^{\bullet}_{\mathbb{Z}/2} ,\delta) = (C^{\bullet}[h],d + h(id_{C} + \iota)).$$
Define $H^{*}_{\mathbb{Z}/2}(C) := H^*(C^{\bullet}_{\mathbb{Z}/2} ,\delta) $, the equivariant cohomology of $(C,d,\iota)$.
\end{defn}
\begin{defn}[Equivariant Cohomology of a manifold]
\label{defn:equivcohomman}
Let $N$ be a topological space with a continuous involution $\iota: N \rightarrow N$. Let $C = C^{*}(N)$ be the singular cochain complex of the topological space $N$. There is a $\mathbb{Z}/2$ action on $C^* (N)$ induced by $\iota$. As in Definition \ref{defn:equcohom}, the \textit{equivariant cohomology of $N$} is $H^* _{\mathbb{Z}/2}(N) := H^* _{\mathbb{Z}/2}(C^* (N))$.
\end{defn}
The important examples of this will be $M \times M$ with the involution swapping the factors and $M$ with the trivial involution. We will respectively denote the equivariant chains in these cases by $C^{\bullet}_{\mathbb{Z}/2}(M \times M)$ (or, in the case of Morse cohomology, by $CM^{\bullet}_{\mathbb{Z}/2}(M \times M)$) and by $C^{\bullet}_{\mathbb{Z}/2}(M)$. Similarly, equivariant cohomology will be denoted respectively by $H^{\bullet}_{\mathbb{Z}/2}(M \times M)$ and by $H^{\bullet}_{\mathbb{Z}/2}(M)$.
\begin{rmk}
There is another description of $H^*_{\mathbb{Z}/2}(N)$ for a manifold $N$ with a continuous involution. Recall that $E \mathbb{Z}/2$ is the total space of the universal principal $\mathbb{Z}/2$-bundle (the quotient $B\mathbb{Z}/2 = E\mathbb{Z}/2 / (\mathbb{Z}/2)$ being the classifying space): a contractible space with a free $\mathbb{Z}/2$ action, for example $E \mathbb{Z}/2 = S^{\infty}$ with the involution being the antipodal map. Then $$H^{*}_{ \mathbb{Z}/2}(N) := H^{*}(N \times_{\mathbb{Z}/2} E \mathbb{Z}/2).$$
This definition is equivalent to Definition \ref{defn:equivcohomman}. If we let $N = \{ pt \}$ then we obtain $pt \times_{\mathbb{Z}/2} S^{\infty} = S^{\infty} / (\mathbb{Z}/2) = \mathbb{RP}^{\infty}$, hence $$H^{*}_{\mathbb{Z}/2}(pt) = H^{*}(\mathbb{RP}^{\infty}) = \mathbb{Z}/2[h].$$
\end{rmk}
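As a simple consistency check on Definition \ref{defn:equcohom}, consider the case where the involution $\iota$ is trivial. Then $id_{C} + \iota = 0$, so the equivariant differential reduces to $\delta = d$ and $$H^{*}_{\mathbb{Z}/2}(C) \cong H^{*}(C)[h].$$ In particular, for $M$ with the trivial involution we have $H^{*}_{\mathbb{Z}/2}(M) \cong H^{*}(M)[h]$, which agrees with the description in the remark above: $M \times_{\mathbb{Z}/2} E\mathbb{Z}/2 = M \times \mathbb{RP}^{\infty}$, whose cohomology is $H^{*}(M)[h]$ by the K\"unneth isomorphism.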
\subsection{The Steenrod Squares}
\label{subsec:theSqs}
For a reference, see \cite[Section 4.L]{algtop}.
The Steenrod square operations $\{ Sq^{i} \}$ are the unique collection of additive homomorphisms such that:
\begin{enumerate}
\item $Sq^{i} : H^{n}(M) \rightarrow H^{n+i}(M)$ for each $n \ge 0$ and topological space $M$,
\item Each $Sq^{i}$ is natural in $M$,
\item $Sq^{0}$ is the identity,
\item $Sq^{n}$ acts as the cup square on $H^{n}$, so $Sq^{|x|}(x) = x \cup x$,
\item If $n > |x|$ or $n < 0$ then $Sq^{n}(x) = 0$,
\item (Cartan relation) For each $n$, $$Sq^{n}(x \cup y) = \sum_{i+j=n} Sq^{i}(x) \cup Sq^{j}(y).$$
\end{enumerate}
Here $|x|$ is the cohomological grading of $x \in H^*(M)$. Recall that we use $\mathbb{Z}/2$ coefficients to ensure additivity: $(x+y) \cup (x+y) = x \cup x + y \cup y$ modulo 2. These $Sq^{i}$ together define a single operator, the ``total Steenrod square" $$Sq:H^{*}(M) \rightarrow (H^{\bullet}(M)[h])^{2*},$$ where $Sq^{i}(x)$ is the coefficient of $h^{|x|-i}$, so $Sq(x) = \sum_{i} Sq^{|x|-i}(x) \cdot h^i$. The cup product on $H^{\bullet}(M)[h]$ is $(a \cdot h^i) \cup (b \cdot h^j) = (a \cup b) \cdot h^{i+j}$, so the Cartan relation becomes $Sq(x \cup y) = Sq(x) \cup Sq(y)$ and thus $Sq$ is a unital ring homomorphism. We will henceforth call $Sq$ the ``Steenrod square" when there is no ambiguity, noting that it contains the same information as $\{ Sq^i \}$.
Note that although these axioms imply that the Steenrod squares are unique, there are many different approaches to constructing them.
\begin{exmpl}[The classical Steenrod square for $\mathbb{CP}^{n}$]
\label{exmpl:classcpn}
$$H^{*}(\mathbb{CP}^{n}) \cong \mathbb{Z}/2 [x] / (x^{n+1})$$ where $|x| = 2$. We see that $Sq^{0}(x) = x$ and $Sq^{2}(x) = x^{2}$ using axioms $3$ and $4$, and these are all of the nonzero terms by axiom $5$. Hence $Sq(x) = xh^{2} + x^{2}$. By the Cartan relation (axiom 6), $$Sq(x^{i}) = Sq(x)^{i} = (xh^{2} + x^{2})^{i} = x^{i} \sum_{j=0}^{i} {{i}\choose{j}} x^{j} h^{2(i-j)}.$$ Looking at the coefficient of $h^{2i-k}$, we obtain $Sq^k(x^{i}) = 0$ for $k$ odd and $Sq^{2j}(x^i) = {{i}\choose{j}} x^{i+j}$.
\end{exmpl}
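To illustrate the formula of Example \ref{exmpl:classcpn} concretely, take $i = 3$ (assuming $n \ge 6$ so that no powers of $x$ vanish): $$Sq(x^{3}) = (xh^{2}+x^{2})^{3} = x^{3}h^{6} + 3x^{4}h^{4} + 3x^{5}h^{2} + x^{6} \equiv x^{3}h^{6} + x^{4}h^{4} + x^{5}h^{2} + x^{6} \pmod 2,$$ so $Sq^{0}(x^{3}) = x^{3}$, $Sq^{2}(x^{3}) = x^{4}$, $Sq^{4}(x^{3}) = x^{5}$ and $Sq^{6}(x^{3}) = x^{6} = x^{3} \cup x^{3}$, as axioms $3$ and $4$ require.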
\subsection{The Betz-Cohen Construction}
\label{subsec:prelimbcncon}
The details are relevant for Section \ref{subsec:tmssissq}.
Fix a Morse-Smale function $f$ on $M$, and pick a small convex neighbourhood $U_f$ of $f$ in $C^{\infty}(M)$ consisting of Morse-Smale functions. Let $\Gamma$ be the $Y$-shaped graph, oriented and parametrised as $(-\infty,0] \vee_0 [0,\infty) \vee_0 [0,\infty)$. Let $S$ be the set of triples $\sigma = (f_{1,s},f_{2,s},f_{3,s})$ such that $f_{1,s} \in U_f$ for each $s \in (-\infty,0]$ and $f_{2,s}, f_{3,s} \in U_f$ for $s \in [0,\infty)$, subject to:
\begin{enumerate}
\item $f_{1,0}, f_{2,0}, f_{3,0}$ are pairwise distinct.
\item $f_{i,s} = \beta(|s|) f_{i,0} + (1-\beta(|s|)) f$, where $\beta: [0,\infty) \rightarrow [0,1]$ is a fixed monotone bump function such that $\beta(s) =1$ for $s \le 1/2$ and $\beta(s) = 0$ for $s \ge 1$.
\end{enumerate}
We define $\mathcal{M}_{\sigma}$ to be the set of continuous maps $\gamma: \Gamma \rightarrow M$ that are smooth on the edges, such that for each edge $E_i$ of $\Gamma$ we denote $\gamma_i = \gamma|_{E_i}$, and require $$d \gamma_i / ds (s) + \nabla f_{i,s}(\gamma_i(s)) = 0.$$ This is actually slightly different from the construction in \cite{betzcoh}, in which the $f_1,f_2,f_3$ were pairwise distinct and had no $s$ dependence. The construction due to Betz-Cohen is equivalent to that given here, as we are simply using a deformation retraction of their moduli space of metric Morse flows.
Let $\mathcal{M}_{BC}= \sqcup_{\sigma \in S} \mathcal{M}_{\sigma}$, topologised so that $\mathcal{M}_{BC} \rightarrow S$ is continuous. Observe that there is a $\mathbb{Z}/2$-action $\iota_S$ on $S$, induced by the permutation $(23)$. This induces a $\mathbb{Z}/2$-action on $\mathcal{M}_{BC}$, via $(\sigma, \gamma) \mapsto (\iota_S \circ \sigma, \gamma \circ R_{\Gamma})$. Here, $R_{\Gamma}$ is the involution on $\Gamma$ that swaps the two positive half-lines and fixes the negative half-line.
For $a_1,a_2, a_3 \in \text{crit}(f)$, define $\mathcal{M}_{BC}(a_1,a_2,a_3)$ to consist of equivalence classes of pairs $[\sigma, \gamma] \in \mathcal{M}_{BC}/(\mathbb{Z}/2)$ such that $${\displaystyle \lim_{s \rightarrow -\infty}} \gamma_1(s) = a_1, \ {\displaystyle \lim_{s \rightarrow \infty}} \gamma_2(s) = a_2, \ {\displaystyle \lim_{s \rightarrow \infty}} \gamma_3(s) = a_3.$$
The space $S$ is contractible and has a free $\mathbb{Z}/2$-action, so $SB := S /( \mathbb{Z}/2)$ is homotopy equivalent to $\mathbb{RP}^{\infty}$. Thus, there are representatives $\delta_i$ of the nontrivial generator of $H_i(SB) \cong \mathbb{Z}/2$ for each $i$. Strictly, we consider some $\delta_i = \sum_{j} \tau_{i,j}$ where $\tau_{i,j}: \Delta_j \rightarrow S$ is a simplex. For each $i \ge 0$, let $$\mathcal{M}_{BC,i}(a_1,a_2,a_3) = {\displaystyle \bigcup_j} \tau_{i,j}^* \mathcal{M}_{BC}(a_1,a_2,a_3),$$ the union of the pullback of $\mathcal{M}_{BC}(a_1,a_2,a_3)$ along $\tau_{i,j}: \Delta_i \rightarrow S$, glued along faces.
Recall that in Morse theory, $CM^*(M \times M, f \oplus f)$ and $CM^*(M,f) \otimes CM^*(M,f)$ are identified via the K\"unneth isomorphism, where $$f \oplus f: M \times M \rightarrow \mathbb{R}, \quad (f \oplus f)(x,y) = f(x) + f(y).$$ One uses the correspondence between critical points of $f \oplus f$ and formal pairs of critical points of $f$, denoted $a \otimes b$ for $a,b \in \text{crit}(f)$. The isomorphism between Morse and singular cohomology respects the involution that swaps the factors, and hence we may replace the equivariant cohomology of $C^*(M) \otimes C^*(M)$, denoted $H^*_{\mathbb{Z}/2}(M \times M)$, with the equivariant cohomology of $CM^*(M,f) \otimes CM^*(M,f)$. One can think of this in terms of equivariant Morse cohomology, as detailed in \cite[Section 2]{seidelsmith}, where for the given $\mathbb{Z}/2$-action all of the necessary transversality conditions are satisfied.
Using the fact that the equivariant chains are $C^*_{\mathbb{Z}/2}(M \times M) = C^*(M \times M)[h]$, as well as using the previous paragraph, elements of $C^*_{\mathbb{Z}/2}(M \times M)$ may be written as a finite sum of $(b \otimes c)h^j$ for some $j \ge 0$ and $b,c \in \text{crit}(f)$. Let $a \in \text{crit}(f)$ and $\delta_i$ be the generator of $H_i(SB) \cong \mathbb{Z}/2$ (because $S$ is an $E \mathbb{Z}/2$).
We define $$q: H_i(SB) \otimes H^j_{\mathbb{Z}/2}(M \times M) \rightarrow H^{j-i}(M),$$ at the chain level, such that the coefficient of $a$ in $q(\delta_i \otimes (b \otimes c)h^k)$ is $\# \mathcal{M}_{BC,i-k}(a,b,c)$, when $\mathcal{M}_{BC,i-k}(a,b,c)$ is a collection of points. Let $q_i(b \otimes c) = q(\delta_i \otimes b \otimes c)$. One defines $$Sq^{|x|-i}(x) = q_i (x \otimes x).$$
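As a consistency check against axiom $4$ of Section \ref{subsec:theSqs}: for $i = 0$, the class $\delta_0$ is represented by a single point $\sigma \in S$, so $\mathcal{M}_{BC,0}(a,x,x)$ is the moduli space of $Y$-shaped flowlines for one fixed triple of functions. Hence $$Sq^{|x|}(x) = q_0(x \otimes x) = x \cup x,$$ the Morse-theoretic cup square.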
\subsection{The Quantum Cup Product}
\label{subsec:quantcupprod}
For more details on the quantum cup product, see \cite[Chapter 8]{jhols}. Throughout this paper, for $M$ a closed $n$-manifold, we denote by $PD: H^*(M) \rightarrow H_{n-*}(M)$ and $PD: H_*(M) \rightarrow H^{n-*}(M)$ the Poincar\'e duality operation over $\mathbb{Z}/2$ coefficients.
Let $(M, \omega)$ be a monotone symplectic manifold of dimension $n$, with a fixed almost complex structure $J$ compatible with $\omega$.
\begin{defn}
A symplectic manifold $(M,\omega)$ is \textit{monotone} if the restriction to spherical homology classes of the cohomology class of $\omega$ is positively proportional to the first Chern class of $TM$. In other words, there exists a constant $\lambda > 0$ such that $$[\omega]|_{\pi_2(M)} = \lambda \cdot c_1(TM)|_{\pi_2(M)}.$$
\end{defn}
As an abelian group, $QH^{*}(M) = H^{*}(M)[[t]]$ where $t$ is a formal variable of degree $2$. Let $T = t^N$, where $N \geq 0$ is the minimal Chern number of $M$, determined by $c_{1}(\pi_{2}(M)) = N \mathbb{Z}$. By rescaling our symplectic form if necessary, we will assume that $\lambda = 1/N$, and so referring to a {\it $J$-holomorphic map $u$ of energy $k$} means that $c_1(u_*[S^2]) = N \cdot [\omega](u)= kN$.
As an important note, we define the quantum cochains $$QC^*(M) := C^*(M) \otimes_{\mathbb{Z}/2} \mathbb{Z}/2[[T]].$$ Then $QH^*(M) = H^*(QC^*(M), d \otimes id)$, where $d$ is the differential on $C^*(M)$. Most of the operations that we consider are defined at the chain level, and then descend to maps on (co)homology.
We pick a basis $\mathcal{B}$ for $H^*(M)$. With respect to the nondegenerate cup product pairing $(e,f) \mapsto \langle e \cup f, [M] \rangle$, there is a dual basis $\mathcal{B}^{\vee}$. Let $\alpha^{\vee} \in H^{n-|\alpha|}(M)$ denote the dual of the cohomology class $\alpha \in H^{|\alpha|}(M)$. Our operations on cohomology will not depend on this choice of basis, although the choice may affect the chain level description.
Given $A \in H_{2}(M)$, let $\mathcal{M}_{A}(J)$ be the moduli space of parametrised $J$-holomorphic spheres $u: S^2 \rightarrow M$ such that $u_{*}([S^2]) = A$ (we do not quotient by reparametrisation, as we will evaluate at fixed points of $S^2$). For a generic choice of $J$, this moduli space is a smooth manifold with $$\dim \mathcal{M}_{A}(J) = 2c_{1}(A) + \dim(M).$$ For each $z \in S^{2}$, there is an evaluation map $ev_{A,z} : \mathcal{M}_{A}(J) \rightarrow M$ with $ev_{A,z}(u) = u(z)$. Pick three distinct points, $z_{1}, z_{2},z_{3} \in S^2$. We use $0,1,\infty$ throughout, and denote $ev_{A} = ev_{A,0} \times ev_{A,1} \times ev_{A,\infty} : \mathcal{M}_{A}(J) \rightarrow M \times M \times M$.
\begin{defn}[Quantum Product]
\label{defn:quantumproduct}
Let $\alpha, \beta \in H^{*}(M) \subset QH^{*}(M)$. Pick generic pseudocycle representatives $a, b$ of the classes $PD(\alpha)$ and $PD(\beta)$ (so that they are transverse to the evaluation maps in the previous paragraph). Similarly, for each $\gamma \in \mathcal{B}$, we pick a pseudocycle representative $c^{\vee}$ of $PD(\gamma^{\vee})$. Denote by $a \times b \times c^{\vee}$ the product of these cycles, landing in $M \times M \times M$.
Then we define $$\alpha * \beta = \sum_{j \in \mathbb{Z}, \ \gamma \in \mathcal{B} : |\gamma| = |\beta| + |\alpha| - 2jN} n(\gamma, \alpha, \beta, j) \cdot \gamma \cdot T^j,$$ $$n(\gamma, \alpha, \beta, j) = \sum_{A \in H_2 (M) : c_{1}(A) = jN} ev_A \bullet (a \times b \times c^{\vee}).$$ Here $\bullet$ is the intersection number of pseudocycles of complementary dimension. Extending $\mathbb{Z}/2[[t]]$-linearly defines $*$ on $QH^*(M)$.
\end{defn}
Observe that for generic $J$, the evaluation map is a pseudocycle \cite{jhols}. In order to show that the product is well defined, one must prove that the outcome is independent of the choice of pseudocycle representatives, as in \cite[Lemma 7.1.4]{jhols}. The degree condition ensures that the pseudocycles are of complementary dimension. Notice that $|\alpha*\beta| = |\alpha|+|\beta|$, using that $|T|=2N$. If $A=0$ (so $E(u)=0$ and $u$ is constant), this recovers the classical intersection product.
\begin{rmk}
In concrete terms, in Definition \ref{defn:quantumproduct} we count the number of $J$-holomorphic spheres in $M$ intersecting some choice of pseudocycle representatives of $PD(\alpha)$, $PD(\beta)$ and $PD(\gamma^{\vee})$. This can be thought of as the intersection $$ev_{A,0}^{-1}(a) \cap ev_{A,1}^{-1}(b) \cap ev_{A,\infty}^{-1}(c^{\vee})$$ in the space of $J$-holomorphic stable maps representing $A$.
\end{rmk}
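A standard example, computed for instance in \cite{jhols}: for $M = \mathbb{CP}^{n}$ the minimal Chern number is $N = n+1$, and for degree reasons the only nonclassical contribution to products of powers of the generator $x \in H^{2}(\mathbb{CP}^{n})$ comes from lines (classes $A$ with $c_{1}(A) = n+1$, i.e. $j=1$ in Definition \ref{defn:quantumproduct}). Counting the unique line through two generic points gives $x * x^{n} = T$, so $$QH^{*}(\mathbb{CP}^{n}) \cong \mathbb{Z}/2[x][[T]] / (x^{n+1} = T).$$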
\section{Two constructions of the Steenrod Squares}
\label{sec:morssteensqu}
The first construction will use Morse theory, and will be based on that given in \cite{seidel}, \cite{fukaya} and \cite{betzcoh}. The second construction is a generalisation of the first, involving pseudocycles. In this section $\Gamma$ is the Y-shaped graph with incoming edge $e_1$ and outgoing edges $e_2$ and $e_3$. Let $e_1$ be parametrised by $(-\infty, 0]$ and $e_2,e_3$ by $[0,\infty)$. This is illustrated in Figure \ref{fig:classmorsqu}.
Throughout this section, $M$ will be a smooth closed manifold. We recall the Morse theoretic cup product: given a Morse function $f$, pick three generic perturbations $f^1_{s}$ for $s \in (- \infty,0]$ and $f^2_{s}$, $f^3_{s}$ for $s \in [0,\infty)$ (so that they are ``transverse at $0$"). Making a generic choice ensures that the moduli space in Definition \ref{defn:morsecupprod} is cut out transversely: specifically, the genericity condition ensures that the moduli space is a smooth manifold. This is discussed in \cite[Chapter 5.2, Chapter 2]{schwarz} by Schwarz. It should be pointed out that the construction of the Morse theoretic cup product in the cited work uses three distinct fixed Morse functions $f^1,f^2,f^3$, rather than using perturbations $f^1_s,f^2_s,f^3_s$. Combining the case of Schwarz with the standard notion of continuation maps from $f^i$ to our fixed Morse function $f$, and applying a gluing argument, means that we can instead consider $s$-dependent functions on the edges. After applying such a gluing of continuation maps, the requirement that $f^1,f^2,f^3$ be chosen generically translates to requiring that $f^1_0,f^2_0, f^3_0$ be chosen generically, which is what we meant by ``transverse at $0$" above. This idea is made precise in \cite[Section 2]{morsetrajectories}. With this in mind, we choose the $f^i_s$ such that there is an $R > 0$ with $f^i_{s} = f$ if $|s| \ge R$, so that we can apply Morse theoretic arguments outside of a compact neighbourhood of the vertex in $\Gamma$. Denote by $\text{crit}_k(f)$ the critical points of $f$ of Morse index $k$. Write $|x|$ for the Morse index of $x \in \text{crit}(f)$.
\begin{defn}[Morse cup product]
\label{defn:morsecupprod}
Let $a_{2},a_{3}$ be critical points of $f$, with respective Morse indices $|a_{2}|, |a_{3}|$, and let $k = |a_{2}|+|a_{3}|$. Then $$a_{2} \cdot a_{3} := \sum_{a_{1} \in \text{crit}_k(f)} n_{a_1, a_2, a_3} a_1$$ where $n_{a_1, a_2, a_3}$ is the number of elements in the $0$-dimensional moduli space $\mathcal{M}(f^i_{s},a_{1},a_{2},a_{3})$ of continuous maps $u: \Gamma \rightarrow M$, smooth on the edges, such that:
\begin{enumerate}
\item $d (u|_{e_i})/ds = -\nabla f^i_{s}$,
\item $u|_{e_1}(x) \rightarrow a_1$ as $x \rightarrow -\infty$,
\item $u|_{e_i}(x) \rightarrow a_i$ as $x \rightarrow \infty$ for $i=2,3$.
\end{enumerate}
\end{defn}
\subsection{Morse Steenrod square}
\label{subsec:msss}
Henceforth, we will consider a nested sequence of spheres $S^{0} \subset S^{1} \subset ... \subset S^{\infty}$, consisting of equators that exhaust $S^{\infty}$ and are preserved under the involution $v \mapsto -v$. Denote $$S^{\infty} = \{ (x_0,x_1,x_2, \ldots) \in \bigoplus_{i \ge 0} \mathbb{R} : \sum_i x_i^2 = 1 \},$$ and the subset $S^i \subset S^{\infty}$ consists of those elements of $S^{\infty}$ of the form $(x_0, \ldots, x_i, 0 ,\ldots)$.
We refine the choice of $f^i_{s}$ by picking a collection of smooth functions $f^i_{v,s}: M \rightarrow \mathbb{R}$, smoothly parametrised by $v \in S^{\infty}$ and $s \in (-\infty,0]$ for $i=1$, respectively $s \in [0,\infty)$ for $i=2,3$, satisfying the following conditions:
\begin{enumerate}
\item $f^2_{v,s} = f^3_{-v,s}$,
\item For each $i$, the smooth map $f^2_{\cdot,0}: S^{i} \times M \rightarrow \mathbb{R}$ must be chosen generically, with more details provided in Appendix \ref{subsec:mssrmks}.
\item There is an $R>0$ such that $f^i_{v,s} = f$ for all $|s| \ge R$ and $v \in S^{\infty}$.
\item $f^1_{v,s} = f^1_{-v,s}$ .
\end{enumerate}
Given $a_1, a_2, a_3 \in \text{crit}(f)$, and $v \in S^{\infty}$, we define $\mathcal{M}'_{v}(a_1, a_{2}, a_{3})$ to be the set of pairs $(u: \Gamma \rightarrow M, v)$ such that:
\begin{enumerate}
\item $d(u|_{e_i})/ds = -\nabla f^i_{v,s}$.
\item $u|_{e_1} (s) \rightarrow a_1$ as $s \rightarrow -\infty$ and $u|_{e_i} (s) \rightarrow a_{i}$ for $i=2,3$ as $s \rightarrow \infty$.
\end{enumerate}
Let $$\mathcal{M}'_{i}(a_1, a_{2},a_{3}) = \bigsqcup_{v \in S^{i}} \mathcal{M}'_{v}(a_1,a_{2}, a_3),$$ topologised as a subset of $C(\Gamma, M) \times S^i$ (where $C(\Gamma, M)$ is the space of continuous maps from $\Gamma$ to $M$ that are smooth on the edges). The projection to $S^i$ is then continuous for all $i$. Indeed, $\mathcal{M}'_{i}(a_1, a_{2},a_{3})$ is a smooth manifold for each $i$, by the genericity conditions as given in Appendix \ref{subsec:mssrmks}.
Let $r : \Gamma \rightarrow \Gamma$ be the reflection that swaps $e_2$ and $e_3$ (preserving parametrisations) and fixes $e_1$. If $a_2 = a_3$ as in Figure \ref{fig:classmorsqu}, there is a free $\mathbb{Z}/2$ action on the moduli space $\mathcal{M}'_{i}(a_1,a_2, a_2)$, via $$(u,v) \mapsto (u \circ r, -v).$$
Let $\mathcal{M}_{i}(a_1,a_2, a_2) = \mathcal{M}'_{i}(a_1, a_2,a_2)/(\mathbb{Z}/2)$, the quotient by the $\mathbb{Z}/2$ action. If $a_2 \neq a_3$, $$\mathcal{M}_{i}(a_1,a_2, a_3) = \bigsqcup_{v \in D^{i,+}} \mathcal{M}'_{v}(a_1,a_{2}, a_3) = \bigsqcup_{v \in D^{i,-}} \mathcal{M}'_{v}(a_1,a_{3}, a_2),$$ where $D^{i,\pm}$ is the upper/lower $i$-dimensional hemisphere in $S^i \subset S^{\infty}$. Observe that when $v \in \partial D^{i,\pm}$, there is no overcounting of solutions (when $a_2 \neq a_3$). This is because a solution for $v \in \partial D^{i,+}$, with asymptotics $a_1, a_2, a_3$, does not correspond to a solution for $-v \in \partial D^{i,+}$ with asymptotics $a_1, a_2, a_3$: the action $u \mapsto u \circ r$ swaps the $a_2$ and $a_3$ asymptotics. Indeed, when $a_2 \neq a_3$ the number of solutions for $v \in \partial D^{i,\pm}$ exactly corresponds to the $Sq'((a_2 \otimes a_3 + a_3 \otimes a_2)h)$ term in Equation \eqref{equation:Sq'chainmap}.
Consider the natural projection $\mathcal{M}_{i}(a_1, a_2,a_3) \rightarrow \mathbb{RP}^{i}$. Over a generic $v \in \mathbb{RP}^{i}$ the fibre is a smooth manifold of dimension $|a_{1}| - 2 |a_{2}|$, so the dimension of the moduli space is $$\dim \mathcal{M}_{i}(a_{1},a_{2},a_{2}) = |a_{1}| - 2 |a_{2}| + i.$$ This is an example of genericity in family Morse theory, as in \cite[Theorem 3.4]{hutchingsfamilies}, and as used in \cite[Equations (4.26), (4.95)]{seidel}.
\begin{figure}
\input{classicalsquaremorse.pdf_t}
\caption{Morse flowline configurations for the Steenrod square.}
\label{fig:classmorsqu}
\end{figure}
Before giving the definition, we recall the notation of Section \ref{subsec:equivcohom}. Specifically, given the chain complex $CM^{\bullet}(M,f)$ with the trivial action of $\mathbb{Z}/2$, one defines the $\mathbb{Z}/2$-equivariant Morse cohomology using the equivariant chain complex $CM_{\mathbb{Z}/2}^{\bullet}(M,f)$. Similarly, given the chain complex $CM^{\bullet}(M,f) \otimes CM^{\bullet}(M,f)$ (which we identify with $CM^{\bullet}(M \times M ,f \oplus f)$ via the K\"unneth isomorphism), there is the action of $\mathbb{Z}/2$ that swaps the two factors, and we denote the $\mathbb{Z}/2$-equivariant chain complex in this case $CM_{\mathbb{Z}/2}^{\bullet}(M \times M)$.
\begin{defn}[The Morse Steenrod Square]
\label{defn:mss}
Let $a_2, a_3 \in \text{crit}(f)$. This determines $a_2 \otimes a_3 \in CM_{\mathbb{Z}/2}^{\bullet}(M \times M)$. Define $$Sq': CM^{\bullet}_{\mathbb{Z}/2}(M \times M) \rightarrow CM_{\mathbb{Z}/2}^{\bullet}(M),$$ by $$Sq'(a_2 \otimes a_3) = \sum_{i =0}^{|a_2| + |a_3|} \sum_{a_1 \in \text{crit}_{|a_2| + |a_3|-i}(f)} n_{a_1, a_2, a_3,i} \cdot a_1 \cdot h^i$$ where $n_{a_1, a_2,a_3,i} = \# \mathcal{M}_{i}(a_1,a_2,a_3)$ and $\#$ denotes the number of points modulo $2$. Then extend $h$-linearly, so that $Sq'$ is a map of $\mathbb{Z}/2[h]$-modules.
We then need to prove that $Sq'$ descends to a map on equivariant cohomology. To do this, we use a standard argument involving a $1$-dimensional moduli space (see for example \cite[Section 2.4, Section 5.3]{schwarz}, applied as in \cite[Proposition 1.9, Lemma 1.10]{fukaya}). We then consider its compactification, as covered in detail in Appendix \ref{sec:equivariantcompact}. This in turn shows that $Sq'$ is a chain map, i.e.: \begin{equation} \label{equation:Sq'chainmap} Sq'( (a_2 \otimes a_3 + a_3 \otimes a_2) h + (d a_2) \otimes a_3 + a_2 \otimes (d a_3)) = d Sq'(a_2 \otimes a_3). \end{equation}
Further, post-composing with the doubling operation $$\text{double}: CM^{*}(M) \rightarrow CM^{2*}_{\mathbb{Z}/2}(M \times M), \ a \mapsto a \otimes a,$$ which also descends to a map on equivariant cohomology, we define $$Sq := [Sq'] \circ [\text{double}].$$ Here $[-]$ denotes the cohomology level operation of the respective map of chains. This definition is independent of the choice of parametrised Morse functions by a standard continuation argument, such as in \cite[Section 3.4]{salamonfloer}.
The coefficient of $h^{|a|-i}$ is denoted by $Sq^{i}(a) \in H^{|a|+i}(M)$.
\end{defn}
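As a dimension check on Definition \ref{defn:mss}: the coefficient $n_{a_1,a_2,a_2,i}$ is only counted for $a_1 \in \text{crit}_{2|a_2|-i}(f)$, in which case $$\dim \mathcal{M}_{i}(a_1,a_2,a_2) = |a_1| - 2|a_2| + i = 0,$$ so each count is indeed over a $0$-dimensional moduli space. Moreover, the coefficient of $h^{|a|-i}$ in $Sq(a)$ then lies in degree $2|a| - (|a|-i) = |a| + i$, consistent with $Sq^{i}: H^{|a|}(M) \rightarrow H^{|a|+i}(M)$.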
\begin{propn}
\label{propn:propositionftw}
The homomorphism $Sq$ is additive, and satisfies axioms $1,2,4$ and $5$ from Section \ref{subsec:theSqs}.
\end{propn}
\begin{proof}
To prove additivity, observe first $Sq(x+y) = Sq(x) + Sq(y) + Sq'(x \otimes y + y \otimes x)$. Hence we must show that $[Sq'(x \otimes y + y \otimes x)] =0 $ when $dx = dy = 0$. In such a case, we see $d(x \otimes y) = (x \otimes y + y \otimes x) h$. Using that $Sq'$ is a chain map, it follows that $Sq'((x \otimes y + y \otimes x)h) = d Sq'(x \otimes y)$. As multiplication by $h$ is injective, this shows that $Sq'(x \otimes y + y \otimes x)$ is exact, so its cohomology class vanishes, as required.
\textit{Axiom 1} is immediate from the definition of $Sq^{i}$.
\textit{Axiom 2} (naturality) holds for the same reason as for the Morse cup product: see for example \cite[Section 2.1]{rot}.
For \textit{Axiom 4}: when $|y| = 2 |x|$, the coefficient of $y$ in $Sq^{|x|}(x)$ is the number of elements of the $0$-dimensional moduli space $\mathcal{M}_{0}(y,x,x)$. From the definition of $\mathcal{M}_{0}(y,x,x)$, and Definition \ref{defn:morsecupprod}, this number is the same as the coefficient of $y$ in $x^2$.
For \textit{Axiom 5}(1), $Sq^{i}(x) = 0$ for $i > |x|$ by definition, as only non-negative powers of $h$ are counted in Definition \ref{defn:mss}.
For \textit{Axiom 5}(2), $f_{v,s}$ is a perturbation of $f$. The perturbation may be chosen arbitrarily small in the $C^{2}$ topology. For generic $f$ there is no $-\nabla f$ flowline from $b$ to $a$ if $|b| < |a|$. As $f_{v,s}$ is close to $f$, this means that generically for any $v$ there is no `flowline' from $b$ to $a$ that has gradient $- \nabla f$ for $s < 0$ and $- \nabla f_{v,s}$ for $s > 0$. Hence $Sq^{i}(x) = 0$ for $i < 0$.
We verify Axiom $3$ in Section \ref{subsec:propofSq} and Axiom $6$ in Section \ref{subsec:Cartan}.
\end{proof}
\begin{rmk}
\label{rmk:msqaxioms}
Note that showing $Sq$ satisfies these axioms is not sufficient to show that it is indeed the Steenrod square, because we have not shown naturality under all continuous maps: this definition is only applicable for closed smooth manifolds. Nonetheless it provides a sanity check.
\end{rmk}
\begin{rmk}
\label{rmk:mssrmks} It is not straightforward to prove $Sq^{0} = id$ without a specific choice of Morse functions. We prove it in Section \ref{subsec:intss} using a different approach.
\end{rmk}
\subsection{The Morse Steenrod square is the Steenrod square}
\label{subsec:tmssissq}
Recall from Section \ref{subsec:prelimbcncon} the Steenrod square due to Betz and Cohen, and from Section \ref{subsec:msss} the Morse Steenrod square of Definition \ref{defn:mss}. We will show that these are the same.
In the previous section we chose $f^i_{v,s}$ for $(v,s) \in S^{\infty} \times [0,\infty)$ and $i=1,2,3$, such that $f^2_{v,s} = f^3_{-v,s}$. We abbreviate $f_{v,s} = f^2_{v,s}$ where appropriate, and observe we may choose $f_{v,0}$ distinct from $\{ f_{-v,0}, f \}$ for each $v$ (as $\text{Conf}_3(C^{\infty}(M))$ is open and dense in $(C^{\infty}(M))^3$, hence the condition is generic). Recall, from Section \ref{subsec:prelimbcncon}, that $S$ was a space consisting of triples $(f^1_{s},f^2_{s},f^3_{s})$, with each $f^p_{s} \in U_f$, a small neighbourhood of the Morse function $f$. Observe $S \xrightarrow{\simeq} \text{Conf}_3(C^{\infty}(M))$ is a $\mathbb{Z}/2$-equivariant homotopy equivalence, using the map $(f^1_{s},f^2_{s}, f^3_{s}) \mapsto (f^1_{0},f^2_{0}, f^3_{0})$, with the obvious homotopy inverse. Henceforth, assume $S = \text{Conf}_3(C^{\infty}(M))$. Let $SB = S / \langle (23) \rangle$, where the transposition $(23)$ acts on $S$ by permutation of the components. As remarked previously, $SB $ is homotopy equivalent to $\mathbb{RP}^{\infty}$.
There is a natural $\mathbb{Z}/2$-equivariant map $i: S^{\infty} \xhookrightarrow{} S$ induced by $v \mapsto (f,f_{v,0},f_{-v,0})$, which descends to $i: \mathbb{RP}^{\infty} \rightarrow SB$.
\begin{lemma}
$i_*: H_*(\mathbb{RP}^{\infty}) \rightarrow H_*(SB)$ is an isomorphism.
\end{lemma}
\begin{proof}
If $i$ is a weak homotopy equivalence then it is a quasi-isomorphism, see \cite[Proposition 4.21]{algtop}. As the two spaces are both homotopy equivalent to $\mathbb{RP}^{\infty}$ (which is a $K(\mathbb{Z}/2,1)$), it is sufficient to show that $$i_*: \pi_1(\mathbb{RP}^{\infty}) \cong \mathbb{Z}/2 \rightarrow \pi_1(SB) \cong \mathbb{Z}/2$$ is nontrivial.
Identify $S^1 \subset S^{\infty}$ with $\mathbb{R}/(2 \pi \mathbb{Z})$, parametrised by $\theta \in [0,2 \pi)$. Denote $f_{\theta} = f_{e^{i\theta},0}$. We wish to show that $\theta \mapsto [(f, f_{\theta/2}, f_{\theta / 2 + \pi})]$ determines a nontrivial loop, where $[\cdot]$ denotes the $\mathbb{Z}/2$-equivalence class. Observe that $\theta \mapsto (f, f_{\theta/2}, f_{\theta / 2 + \pi})$ is a path in $\text{Conf}_3(C^{\infty}(M))$ whose endpoints are distinct (they differ by the $\mathbb{Z}/2$-action); hence the loop in $SB$ does not lift to a loop in $S$, so it is not contractible.
\end{proof}
Consider Diagram \eqref{cohnorsquareandmorse}:
\begin{equation}\label{cohnorsquareandmorse}
\xymatrix{
H_*(\mathbb{RP}^\infty) \otimes H^*(M)
\ar@{->}_-{i_* \otimes id}^-{\cong}[d]
\ar@{->}^-{MSq}[rr]
&
&
H^*(M)
\ar@{->}^{=}[d]
\\
H_*(SB) \otimes H^*(M)
\ar@{->}^-{s}[r]
\ar@/_2.0pc/@{->}_{Sq}[rr]
&
H_*(SB) \otimes H^*_{\mathbb{Z}/2}(M \times M)
\ar@{->}^-{q}[r]
&
H^*(M)
}
\end{equation}
Here $s(A \otimes x) = A \otimes x \otimes x$, and the map $q$ is as in Section \ref{subsec:prelimbcncon}. We have reinterpreted the Morse Steenrod square from the previous section, here denoted $MSq$, to be a map $MSq: H_*(\mathbb{RP}^\infty) \otimes H^*(M) \rightarrow H^*(M)$, which we can do canonically as there is a unique graded basis of the homology of $\mathbb{RP}^{\infty}$. Observe that if we use the pushforward of the generator of $H_i(\mathbb{RP}^{\infty})$ by $i_*$ as the generator of $H_i(SB)$, then it is immediate that Diagram \eqref{cohnorsquareandmorse} commutes. Hence, Definition \ref{defn:mss} yields the Steenrod square.
\subsection{The Cartan Relation}
\label{subsec:Cartan}
Let $T$ be a family of graphs as in Figure \ref{fig:modspatree}, parametrised by $t \in (0, \infty)$. Edge $e_1$ is a negative half-line and edges $e_3,e_4,e_6,e_7$ are positive half-lines. Edges $e_2,e_5$ are parametrised by $[0,t]$. Compactify $T$ by adding the graphs at $0$ and $\infty$ as in the figure, to obtain the compactification $T^{c} \cong [0,\infty]$. Use edge labels as given in Figure \ref{fig:modspatree}. Fix a Morse function $f$ on $M$. The edge parameter in each case will be denoted by $s$.
\begin{figure}
\input{modulispace.pdf_t}
\caption{Elements of $T^{c}$.}
\label{fig:modspatree}
\end{figure}
Pick 5 perturbations of $f$ corresponding to the 5 tree edges in $t=0 \in T^{c}$ in Figure \ref{fig:modspatree}. These are $f^{p}_{v,s,0}$ for $p$ the edge label, $s \in \mathbb{R}^{\pm}$ and $v \in S^{\infty}$. We choose $f^{1}_{v,s,0}=f$ for all $s, v$. We ensure that $f^{3}_{v,s,0} = f^{4}_{-v,s,0}$ and $f^{6}_{v,s,0} = f^{7}_{-v,s,0}$ for all $v,s$. The choice of $f^p_{v,s,0}$ is made along with an $S_0 \in \mathbb{R}$ such that $f^p_{v,s,0} = f$ for $|s| \ge S_0$ and for all edge labels $p$.
Choose 7 perturbations of $f$ labelled $f^{p}_{v,s,t}$ for $p=1,...,7$ corresponding to the edge labels in Figure \ref{fig:modspatree}, where $t \in T^{c}$, $v \in S^{\infty}$ and $s \in \mathbb{R}^{+}$ for $p=3,4,6,7$, $s \in \mathbb{R}^{-}$ for $p=1$ and $s \in [0,t]$ for $p=2,5$. Choose $f^{1}$ to be independent of $s,v,t$ in this case. Choose Morse functions $f^{2}_{s,2},f^{5}_{s,2}$ for $s \in [0,2]$ such that $f^{p}_{s,2} = f$ for $s > 1$ and $p=2,5$. The $f^{p}$ must be chosen ``generically at each vertex of $\Gamma$", which is discussed in Appendix \ref{subsec:appendcartrel}. This ensures the transversality of the moduli spaces. The $f^{p}_{v,s,t}$ satisfy the following conditions:
\begin{enumerate}
\item $f^{p}_{v,s,t} = f^{p}_{v,s,0}$ as picked previously for $p = 1,3,4,6,7$ and for all $t$.
\item $f^2_{v,s,t},f^5_{v,s,t}$ are independent of $v$.
\item For $t \ge 2$ and $p= 2,5$: \ $\begin{cases} \begin{array}{l} f^p_{s,t} = f^p_{s,2} \text{ for } s \le 2, \\ f^p_{s,t} = f \text{ for } s \ge 2. \end{array} \end{cases}$ In particular, $f^p_{2,2} = f$.
\end{enumerate}
Fix $i \in \mathbb{N}$ and $x,y \in \text{crit}(f)$. Let $\overline{T} \xrightarrow{\cong} T^c$ consist of pairs $(|t|,t)$ where $t \in T^c \cong [0,\infty]$ and $|t|$ is the metric tree represented by $t$ as a topological space. The metric structure for $t \in [0,\infty)$ is that the outer edges are semi-infinite and parametrised by respectively $(-\infty,0]$ for the incoming edge and $[0,\infty)$ for the outgoing edges. The inner edges are of length $t$, parametrised by $[0,t]$. For the $t=\infty$ boundary, the metric structure on $|\infty|$ is that the edges attached to bivalent vertices are semi-infinite with the infinite end at the bivalent vertex.
For $z \in \text{crit}_{2|x|+2|y|-i}(f)$ consider the space $\tilde{\mathcal{M}}_{1}(x,y,z)$ of triples $(t,u,v)$ with $t \in T^c$, $u: |t| \rightarrow M$ a map and $v \in S^{|x| + |y| - i}$, such that $u$ satisfies:
$$\partial_{s}u_{s,t} = -\nabla f^{p}_{v,s,t}$$ along edge $p$, with asymptotic conditions $(z,x,x,y,y)$ on the exterior edges $(1,3,4,6,7)$. One needs to use an equivariant gluing theorem at the $t=\infty$ boundary, as discussed in Appendix \ref{sec:equivariantgluing}.
For generic $t \in T^{c}$ there is a $0$-dimensional subset of pairs $(u: |t| \rightarrow M ,v \in S^{|x|+|y|-i})$ satisfying the conditions. So $\tilde{\mathcal{M}}_{1}(x,y,z)$ is 1-dimensional. Observe that $\tilde{\mathcal{M}}_{1}(x,y,z)$ has a free $\mathbb{Z}/2$ action, $(t,u,v) \mapsto (t,u \circ \overline{r},-v)$ for $\overline{r}$ acting on $|t|$ by the permutation of edges $(34)(67)$. Let $\mathcal{M}_{1}(x,y,z) = \tilde{\mathcal{M}}_{1}(x,y,z) / (\mathbb{Z}/2)$, which is still 1-dimensional.
We also define a moduli space $\tilde{\mathcal{M}}_{2}(x,y,z)$ by choosing another 7 Morse functions, labelled $f^{p}_{v,s,t}$ as above, but now with the conditions:
\begin{enumerate}
\item $f^{p}_{v,s,t}= f^{q}_{-v,s,t}$ for $(p,q)=(3,4),(6,7),(2,5)$,
\item $f^{p}_{v,s,t}$ is independent of $(v,t)$ for large enough $t$ and for $p = 1,3,4,6,7$,
\item $f^p_{v,s,t} = f$ for $p=1,3,4,6,7$ and $|s| \ge 1$.
\item For large enough $t$ and $s \in [1,t]$, $f^{2}_{v,s,t} = f^5_{v,s,t}= f$.
\end{enumerate}
\begin{figure}
\input{modulispace2.pdf_t}
\caption{Tree labelling for $\mathcal{M}_{2}$.}
\label{fig:modspatree2}
\end{figure}
In defining equations for pairs $(t,u,v) \in \tilde{\mathcal{M}}_{2}(x,y,z)$, use the edge labellings in Figure \ref{fig:modspatree2}, i.e. the edge labels $4$ and $6$ from Figure \ref{fig:modspatree} have been swapped. For each edge label the equations and asymptotic conditions are the same as in the $\tilde{\mathcal{M}}_{1}$ case. Further, there is a free $\mathbb{Z}/2$ action on $\tilde{\mathcal{M}}_{2}(x,y,z)$ similarly to $\tilde{\mathcal{M}}_{1}$ but with edge permutation $(25)(34)(67)$ (using the new edge labels in Figure \ref{fig:modspatree2}). Taking the quotient defines $\mathcal{M}_{2}(x,y,z) = \tilde{\mathcal{M}}_{2}(x,y,z)/ (\mathbb{Z}/2)$.
The following theorem is classical; the proof below is a modification of \cite[Section 2, Example 2]{betzcoh}, adapted to our definition of the Steenrod square. The modification uses a cobordism argument as in \cite[Section 3.4]{salamonfloer}.
\begin{thm}[The Cartan Relation]
\label{thm:classicalcartan}
$$Sq^{i}(x \cup y) = \sum_{j+k=i} Sq^{j}(x) \cup Sq^{k}(y).$$
\end{thm}
\begin{proof}
The moduli space $\mathcal{M}_{1}(x,y,z)$ is a $1$-dimensional cobordism, corresponding to $[0,\infty]$, so $\# \partial \mathcal{M}_{1}(x,y,z) = 0$. Similarly $\# \partial \mathcal{M}_{2}(x,y,z) = 0$. The $t = \infty$ boundary of $\mathcal{M}_{1}(x,y,z)$ is the count of the contribution of $z$ in $$\sum_{j+k=i} Sq^{j}(x) \cup Sq^{k}(y)$$ (see Figure \ref{fig:cartanSqSq} and Lemma \ref{lemma:lemmaSqSq}). The number of points in the boundary at $t=0$ for $\mathcal{M}_{2}(x,y,z)$ is the same as for $\mathcal{M}_{1}(x,y,z)$, as follows: suppose that $(0,u, v)$ is a point in the $t=0$ boundary of $\mathcal{M}_{2}(x,y,z)$. The domain of $u$ consists of a parametrised graph $\Gamma'$ with an incoming edge labelled $1$, and four outgoing edges labelled $3,4,6,7$. Consider the automorphism $r': \Gamma' \rightarrow \Gamma'$ that acts by the permutation $(46)$ on the edges (without changing the parametrisation). Then $(0, u \circ r', v)$ is a point in the $t=0$ boundary of $\mathcal{M}_{1}(x,y,z)$, and as $r'$ is an involution we see that this is a bijective correspondence. Notice that as we are working with $\mathbb{Z}/2$-coefficients, we do not need to worry about changing the orientation of the moduli space.
The number of points in the $t=\infty$ boundary component of $\mathcal{M}_{2}(x,y,z)$ is the count of the contribution of $z$ in $Sq^{i}(x \cup y)$, by Lemma \ref{lemma:lemmaSqcup}. Hence, the bijection between the $t=0$ boundaries of the moduli spaces, along with the $1$-cobordisms assigned to $\mathcal{M}_{1}(x,y,z)$ and $\mathcal{M}_{2}(x,y,z)$, yield that $$\sum_{j+k=i} Sq^j(x) \cup Sq^k(y) = Sq^i(x \cup y),$$ as required.
\end{proof}
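\begin{rmk}
As a consistency check (this is the standard computation, included for illustration rather than as part of the proof), let $a \in H^1(\mathbb{RP}^n; \mathbb{Z}/2)$ be the generator and suppress the powers of $h$. Since $Sq^0(a) = a$ and $Sq^1(a) = a \cup a$, the total square is $Sq(a) = a + a^2$, and the Cartan relation determines all squares of powers of $a$:
$$Sq(a^k) = Sq(a)^k = (a+a^2)^k = a^k (1+a)^k, \qquad \text{so} \qquad Sq^j(a^k) = \binom{k}{j} a^{k+j} \pmod 2.$$
\end{rmk}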
\begin{figure}
\input{cartanSqSq.pdf_t}
\caption{Flowline configurations for $Sq(x) \cup Sq(y)$. }
\label{fig:cartanSqSq}
\end{figure}
\begin{lemma}
\label{lemma:lemmaSqSq}
Summing over all choices of $w_1,w_2 \in \text{crit}(f)$, counting equivalence classes $[(u,v)] \in \mathcal{M}_1(x,y,z)$ satisfying the asymptotic conditions as shown in Figure \ref{fig:cartanSqSq}, yields the coefficient of $z$ in $\sum_{j+k=i} Sq^{j}(x) \cup Sq^{k}(y)$.
\end{lemma}
\begin{proof}
We have that $|w_1| + |w_2| = |z| = |x|+|y| + i$. Hence if $|w_1| = |x|+j$ and $|w_2| = |y| + k$ then $j+k = i$. Throughout, fix $w_1, w_2$ for the configuration, as outputs of $Sq^{j}(x)$ and $Sq^{k}(y)$ respectively.
Restrict attention to the upper right-hand Y-shaped graph of Figure \ref{fig:cartanSqSq}. Suppose that we restrict the $v$ parameter space to $\mathbb{RP}^{|x|-j} \subset \mathbb{RP}^{|x|+|y|-i}$: in this case, counting $[(u,v)]$ satisfying the configuration conditions would be exactly the count of the coefficient of $w_1 \cdot h^{|x|-j}$ in $Sq^{j}(x)$, which we denote $n_{w_1}$. In our case, $v$ varies over the entirety of $\mathbb{RP}^{|x|+|y|-i}$; we call the set of such pairs $$\mathcal{U}_x = \left\{ [v,u] \biggr\vert \begin{array}{l} v \in S^{|x|+|y|-i} \text{ and } u: \Gamma \rightarrow M \text{ satisfies conditions as} \\ \text{illustrated in the upper right-hand graph of Figure } \ref{fig:cartanSqSq} \end{array}\right\}.$$ Here $[v,u]$ refers to taking the quotient by the $\mathbb{Z}/2$-action $(v,u) \rightarrow (-v,u \circ r)$ (where $r$ is the involution on the $Y$-shaped graph as seen previously). Similarly for the lower right-hand branch, for each $\mathbb{RP}^{|y|-k} \subset \mathbb{RP}^{|x|+|y|-i}$ there is a count of $n_{w_2}$, the coefficient of $w_2 \cdot h^{|y|-k}$ in $Sq^{k}(y)$. Define similarly $$\mathcal{U}_y = \left\{ [v,u] \biggr\vert \begin{array}{l} v \in S^{|x|+|y|-i} \text{ and } u: \Gamma \rightarrow M \text{ satisfies conditions as} \\ \text{illustrated in the lower right-hand graph of Figure } \ref{fig:cartanSqSq} \end{array}\right\}.$$
Let $n_{z, w_1, w_2}$ be the coefficient of $z$ in $w_1 \cup w_2$ (the chain level Morse cup product obtained by using the perturbed Morse functions $f^1_s, f^2_s, f^5_s$). This is obtained by counting elements of the zero dimensional set corresponding to configurations as in the left hand $Y$-shaped graph of Figure \ref{fig:cartanSqSq}. We will show that the contribution of configurations as in Figure \ref{fig:cartanSqSq} to the coefficient of $z$ is $n_{z, w_1,w_2} \cdot n_{w_1} \cdot n_{w_2}$.
Following \cite[Lemmas 4.2-4.5]{schwarzmorsesingiso}, suppose in fact that $x$ is a Morse cycle (specifically some sum of critical points, $\sum_i a_i \cdot x_i$ where $x_i \in \text{crit}(f)$). Then we may modify $\mathcal{U}_{x}$ to $\overline{\mathcal{U}_{x}},$ obtained by first taking the disjoint union of $a_i$ copies of $\mathcal{U}_{x_i}$ for each $i$ (defined as for $x$ above) and then adding in codimension $1$ strata, and identifying them in pairs (this can be done exactly because $dx = 0$). Here, the codimension $1$ strata correspond to the case when the $Y$-shaped graph undergoes a ``breaking'' at one end. The outcome is then the union of a $Y$-shaped graph and an unparametrised flowline, such that:
\begin{itemize}
\item one of the $Y$-shaped graph's positively/negatively asymptotic critical points coincides with the flowline's negatively/positively asymptotic critical points. The other three asymptotic critical points are $w_1, x_i, x_i$ for some $i$.
\item the index difference between the asymptotic critical points of the unparametrised flowline is $1$.
\end{itemize}
Then observe that $\overline{\mathcal{U}_{x}}$ is a smooth manifold, \cite[Lemma 4.4]{schwarzmorsesingiso}. Let $$\pi_x : \overline{\mathcal{U}_{x}} \rightarrow \mathbb{RP}^{|x|+|y|-i}$$ be the projection onto the second coordinate. Then $\pi_x$ is a pseudocycle: specifically, consider $[v_n, u_n]$ such that $v_n$ converges in $\mathbb{RP}^{|x|+|y|-i}$ but $[v_n, u_n]$ has no convergent subsequence in $\overline{\mathcal{U}_{x}}$. By parametrised compactness of Morse flowlines we know that $[v_n, u_n]$ must have a convergent subsequence in the full compactification of $\mathcal{U}_{x}$: but that convergent subsequence must be in the codimension $2$ strata, as $\overline{\mathcal{U}_{x}}$ contains its codimension $1$ strata.
Observe also that by the second paragraph of the proof (i.e. knowing the intersection of $\pi_x$ with $\mathbb{RP}^{|x|-j}$, and in fact with any perturbation of $\mathbb{RP}^{|x|-j}$, is $n_{w_1}$) we deduce that $$\pi_x \bullet [\mathbb{RP}^{|x|-j}] = n_{w_1},$$ where $\bullet$ is the intersection number, hence $\pi_x$ is a weak representative of $n_{w_1} \cdot [\mathbb{RP}^{|y|-k}]$ (by which we mean that $\pi_x$ has the same intersection number with all chains as $n_{w_1} \cdot [\mathbb{RP}^{|y|-k}]$). Similarly the second projection $\pi_y :\mathcal{U}_y \rightarrow \mathbb{RP}^{|x|+|y|-i}$ is a weak representative of $n_{w_2} \cdot [\mathbb{RP}^{|x|-j}]$.
The count of all solutions $[(u,v)]$ satisfying the configuration in Figure \ref{fig:cartanSqSq} is now $$n_{z, w_1,w_2} \cdot ( \pi_x \bullet \pi_{y}) = n_{z,w_1,w_2} \cdot n_{w_1} \cdot n_{w_2}.$$
Now recall from the definitions of $n_{w_1}, n_{w_2}$ that $$Sq^j(x) = \sum_{w_1 \in \text{crit}_{|x|+j}(f)} n_{w_1} w_1 h^{|x|-j}$$ and $$Sq^k(y) = \sum_{w_2 \in \text{crit}_{|y|+k}(f)} n_{w_2} w_2 h^{|y|-k}.$$ Then $$Sq^j(x) \cup Sq^k(y) = \sum_{w_1,w_2} n_{w_1} \cdot n_{w_2} \cdot w_1 \cup w_2,$$ and recalling that $n_{z,w_1,w_2}$ is the coefficient of $z$ in $w_1 \cup w_2$, the lemma is proved.
\end{proof}
\begin{lemma}
\label{lemma:lemmaSqcup}
The count for the $t=\infty$ boundary component of $\mathcal{M}_{2}(x,y,z)$ is the count of the contribution of $z$ in $Sq^{i}(x \cup y)$.
\end{lemma}
\begin{proof}
The edge and asymptotic conditions are as shown in Figure \ref{fig:cartanSqcup}. The edges attached to bivalent vertices are semi-infinite with the infinite end at the bivalent vertex, which is a critical point of $f$. For this operation, the $t=\infty$ boundary, we choose the perturbed Morse functions so that the two right-hand Y-shaped graphs use the same perturbations $f^3,f^6$ of $f$. Specifically, we may assume that $f^3$ and $f^6$ are independent of $v$. The number of such setups is then immediately the coefficient of $z$ in $Sq^{i}(x \cup y)$.
\end{proof}
\begin{figure}
\input{cartanSqcup.pdf_t}
\caption{Flowline configurations for $Sq(x \cup y)$.}
\label{fig:cartanSqcup}
\end{figure}
\subsection{Steenrod Squares via intersections of cycles}
\label{subsec:intss}
Recall that there are nested equators $S^{i} \subset S^{\infty}$, invariant under the antipodal action. Let $a \in H^{|a|}(M)$. Let $\mathcal{B}$ be a basis of $H^* (M)$.
In practice, we would like to work with representatives. A representative of a homology class $A$ is a pair $(X,\alpha)$, often denoted simply $\alpha$, where $X$ is a smooth compact manifold and $\alpha: X \rightarrow M$ is smooth such that $\alpha_*[X] = A$. We recall that over $\mathbb{Z}/2$-coefficients every homology class has a representative (see e.g. \cite[Theorem B]{buonhacon}). For notation, we will denote a homology class by $A$; $a$ will denote its Poincar\'e dual cohomology class, and $\alpha$ will be a representative as above. We will say that $\alpha$ represents a cohomology class $a$ if $\alpha$ represents its Poincar\'e dual homology class. Similarly for $b \in \mathcal{B}$, we denote $B = PD(b)$. As previously, we denote by $b^{\vee}$ the dual basis element to $b$ in the dual basis $\mathcal{B}^{\vee}$.
In order to link this definition to the previous definition, we will weaken our requirements below: in fact we only ask that $\alpha: X \rightarrow M$ is a pseudocycle representative of $a$. Note however that the definition will proceed identically in the cases where we can instead use either representatives or embeddings. Denote by $\beta^{\vee}: Y_b \rightarrow M$ a pseudocycle representative of $PD(b^{\vee})$.
We will choose some smooth manifold $X$, along with a sequence of smooth maps $\alpha_i: X \times S^i \rightarrow M \times S^i$ (for brevity we shorten $X_i := X \times S^i$) such that:
\begin{enumerate}
\item For $\pi_{2}: M \times S^i \rightarrow S^i$ the second projection, $\pi_{2}(\alpha_i(x,v)) = v$ for all $(x,v) \in X \times S^{i}$.
\item The restriction $\alpha_i |_{X_j} = \alpha_j$ for $j \le i$.
\item For $\pi_{1}: M \times S^i \rightarrow M$ the first projection, for any $v \in S^{i}$, \begin{equation} \label{equation:alphav} \alpha_v:= \pi_1 \circ \alpha_i|_{X \times \{ v \}} : X_v := X \times \{ v \} \rightarrow M \end{equation} is a pseudocycle representative of $A$ in $M$ (and is well defined by (2) above).
\item For $b \in \mathcal{B}$, \begin{equation} \label{tripleintersection} \text{ at all points of intersection, the pseudocycles } \Delta \times id \text{ and } \euscr{W} \text{ meet transversely} \end{equation} in $M \times M \times M \times S^{i}$, where $$\euscr{W}: Y_b \times X \times X \times S^i \rightarrow M \times M \times M \times S^i$$ is defined by $(y, x,x',v) \mapsto (\beta^{\vee}(y),\alpha_i(x,v), \alpha_i(x',-v), v)$ and $$\Delta \times id: M \times S^i \rightarrow M \times M \times M \times S^i$$ is defined by $(z,v) \mapsto (z,z,z, v)$.
\end{enumerate}
The pseudocycles $\Delta \times id \text{ and } \euscr{W}$ in \eqref{tripleintersection} descend to pseudocycles $$[\Delta \times id]: M \times \mathbb{RP}^i \rightarrow M \times ((M \times M) \times_{\mathbb{Z}/2} S^i),$$ and $$[\euscr{W}]: Y_b \times ((X \times X) \times_{\mathbb{Z}/2} S^i) \rightarrow M \times ((M \times M) \times_{\mathbb{Z}/2} S^i),$$ respectively. Provided $|b| = 2 |a|- i$, define $n_{i,b,a} = [\Delta \times id] \bullet [\euscr{W}]$, the intersection of these two pseudocycles (of complementary dimension).
\begin{defn}[Steenrod Square]
\label{defn:miss}
Define $$Sq(a) = \sum_{i \in \mathbb{Z}, \ b \in \mathcal{B}, \ |b| = 2|a|-i} n_{i,b,a} b h^{i},$$ where each intersection number $n_{i,b,a}$ is counted modulo $2$.
\end{defn}
\begin{rmk}
To see that Definition \ref{defn:miss} is a good one, i.e. independent of the choice of $\alpha_i$ (all the other choices are immediately covered by pseudocycle theory, e.g. \cite{zinger}), observe that the number of points $n_{i,b,a}$ in any given degree (by which we mean for any fixed choice of $S^i \subset S^{\infty}$) is obtained as the number of intersection points of two pseudocycles. A construction as in \cite[Lemma 3.2]{zinger} for two different choices of $\alpha_i$ yields a bordism of pseudocycles, meaning that the intersection number $n_{i,b,a}$ is independent of this choice.
\end{rmk}
\begin{rmk}
\label{rmk:ourdefinitionsthesame}
The Morse Steenrod square of Definition \ref{defn:mss} is the same as Definition \ref{defn:miss} using the isomorphism $HM^*(M,f) \cong H^*(M)$ that intertwines the Morse product and the cup product, in particular as described in \cite{schwarzmorsesingiso}.
Recall that for each $v$, and Morse cocycle $a = \sum n_i \cdot a_i$ ($n_i \in \mathbb{Z}$ and $a_i \in \text{crit}(f)$) there is a pseudocycle associated to the $s$-dependent Morse function $f_{v,s}$. The domain of this pseudocycle is constructed first by taking the spaces $W^s(a_i,f_{v,s})$ of smooth $u: [0,\infty) \rightarrow M$ such that $\partial u / \partial s = - \nabla f_{v,s}(u(s))$ and $u(\infty) = a_i$, the stable manifold under $f_{v,s}$. One then adds in the codimension $1$ strata of the standard Morse compactification, and then glues together the disjoint union of $n_i$ copies of each $W^s(a_i,f_{v,s})$, along the codimension $1$ strata, which one knows can be done because $da = 0$. We call this space $\overline{W}(a,f_{v,s})$. The map of this pseudocycle is (on the codimension $0$ strata) evaluation at $0$, denoted $E_v: \overline{W}(a,f_{v,s}) \rightarrow M$. Details are in \cite[Lemma 4.5]{schwarzmorsesingiso}, for the pseudocycle $\overline{W}(a,f)$ associated to the fixed Morse function $f$.
Recall that we chose $f_{v,s}$ in Section \ref{subsec:tmssissq}, based on Section \ref{subsec:prelimbcncon}. Specifically, they satisfy $f_{v,s} = \beta(s) f_{v} + (1-\beta(s))f$ (where $f_v$ is confined to a small contractible neighbourhood of Morse functions $U_f$ containing $f$). Observe that in this instance $f_{v,s} = f$ for $s \ge 1$. Recall that for each $v \in S^i$ there is a $1$-parameter family of diffeomorphisms $\phi_{v,s}: M \rightarrow M$ for $s \in [0,1]$, defined by $\phi_{v,0} = id$ and $$\partial \phi_{v,t} /\partial t |_{t=s}(x) = - \nabla f_{v,s}(x).$$ Then $W(a,f_{v,s}) = \phi_{v,1}^{-1}(W(a,f))$.
Hence, for each $i \in \mathbb{Z}_{\ge 0}$ we obtain an $\alpha_i: \overline{W}(a,f) \times S^i \rightarrow M \times S^i$, defined by $$\alpha_i(u,v) = (E_v \phi_{v,1}^{-1} u, v).$$ Then recalling the conditions we required from $\alpha$, earlier in Section \ref{subsec:intss}, we see that:
\begin{itemize}
\item condition $(1)$ holds,
\item condition $(2)$ is immediate because we define our map fibrewise for each $v$,
\item condition $(3)$ holds because of \cite[Lemma 4.5]{schwarzmorsesingiso},
\item condition $(4)$ holds because of condition $(2)$ at the beginning of Section \ref{subsec:msss}.
\end{itemize}
\end{rmk}
\begin{rmk}
\label{rmk:embeddedsubs}
As the definition in this section will be used as a computational tool for our purposes, for simplicity we will assume in certain places that our homology classes in $\mathcal{B}$ can be represented as embedded submanifolds: in this instance, we may replace $\alpha_i: X \times S^i \rightarrow M \times S^i$ (which in such a case satisfies that $\pi_1 \alpha_i(\cdot,v): X \rightarrow M$ is an embedding for each $v \in S^i$) by $X_v := \pi_1 \alpha_i(X,v)$.
\end{rmk}
\begin{rmk}
Suppose that $a$ is represented by an embedded submanifold $\mathcal{A} \subset M$. Then each $\alpha_i(X_i)$ cannot simply be $\{ (p,v) | p \in \mathcal{A}, v \in S^{i} \}$, because then transversality would not hold. More generally, we cannot assume that the pseudocycles $\alpha_i$ are independent of $v$. In the next section we construct a family of admissible choices of $\alpha_i$. However, we may take $B^{\vee} \times S^{i}$ to be such a ``standard representative''. This is analogous to how, in the Morse definition, $f^1_s$ is chosen to be independent of $v$.
\end{rmk}
\subsection{Properties of the Steenrod Square}
\label{subsec:propofSq}
As promised in Section \ref{subsec:msss} we now check Axiom 3 from Section \ref{subsec:theSqs}.
\begin{lemma}
\label{lemma:sq0baby}
$Sq^{0}(PD(pt))=PD(pt)$.
\end{lemma}
\begin{proof}
Let $n=\text{dim}(M)$. Write $a = PD(pt)$. We construct a representative of $\{ pt \} \times S^n$ in $M \times S^n$:
The submanifold $pt \subset M$ has trivialisable normal bundle, so the disc subbundle $D(pt)$ of the normal bundle $N(pt)$ embeds into $M$ as a small disc around $pt$. Let $S^{n}, D^n \subset \mathbb{R}^{n+1}$, where $$S^n = \biggr\{ (x_1,\ldots, x_{n+1} ) \in \mathbb{R}^{n+1} \biggr\vert \sum_{i} x_i^2 = 1 \biggr\}$$ and $$D^n = \biggr\{ (x_1,\ldots, x_{n+1}) \in \mathbb{R}^{n+1} \biggr\vert x_{n+1} = 0, \sum_{i} x_i^2 \le 1 \biggr\}$$ is the $n$-disc with the $(n+1)^{\text{th}}$ coordinate $0$. There is a natural flattening map denoted $\phi':S^{n} \rightarrow D^{n}$, where $\phi'(x_1,\ldots, x_n, x_{n+1}) = (x_1,\ldots, x_n, 0)$ is projection of $S^n$ onto the first $n$ coordinates. Note that $\phi'$ is a double cover except on the equator, which is $\partial D^{n} \cong S^{n-1}$.
There is a diffeomorphism $D^{n} \cong D(pt) \subset M$. Composing $\phi'$ with this diffeomorphism defines $\phi : S^{n} \rightarrow M$. The map $\phi$ is homotopic to a constant map, hence $\bigsqcup_{v \in S^n} (\phi(v), v)$, the graph of $\phi$ in $M \times S^n$, is cobordant to $\{ pt \} \times S^{n} \subset M \times S^n$. Specifically, we define $\alpha_n : \{ pt \} \times S^n \rightarrow M \times S^n$ by $\alpha_n(pt, v) = (\phi(v),v)$, and this immediately satisfies most of the relevant properties of $\alpha_n$ from Section \ref{subsec:intss} (we will verify transversality after computing the points of intersection).
Observe that for $b \neq [M]^*$ (hence $PD(b^{\vee}) \neq [M]$), and for a general choice of the dual basis pseudocycles $\beta^{\vee}$, there is no intersection as in Statement \eqref{tripleintersection} (hence transversality holds trivially). To check transversality in the case where $b = [M]^*$, we pick $\beta^{\vee} = id_M : M \rightarrow M$. Then for $\Delta \subset M \times M$ the diagonal, $\Delta \times S^i$ intersects $\Phi := \sqcup_{v \in S^i} \{ \phi(v) \} \times \{ \phi(-v) \} \times \{v \}$ exactly when $\phi(v) = \phi(-v)$. We know there is exactly one such pair $\{ \pm v_0 \}$, where $v_0 = (0,\ldots,0,1) \in S^{n} \subset \mathbb{R}^{n+1}$.
To verify transversality, consider the tangent directions at $(\beta^{\vee}(x), \phi(v_0),\phi(-v_0), v_0)$ in $T(M \times M \times M \times S^i) = TM \oplus TM \oplus TM \oplus TS^i$. Those tangent directions in $0 \oplus 0 \oplus 0 \oplus TS^i$ and $T \Delta \oplus 0 \subset T(M \times M \times M) \oplus TS^i$ are all contained in $T(\Delta \times S^i) = T \Delta \oplus TS^i$. Similarly, as $\beta^{\vee} = id_M$ we obtain all tangent vectors in $TM \oplus 0 \oplus 0 \oplus 0$. It remains to show that we may obtain the rest of the tangent vectors of $T(M \times M \times M \times S^i)$. Observe that $d \phi(v_0) = - d \phi(-v_0)$ is nondegenerate, because $v_0 \not\in \phi'(\partial D^n)$. Hence in particular $\{ (\phi(v), \phi(-v)) \}_{v \in S^n} \subset M \times M$ intersects $\{(x,x)\}_{x \in M} \subset M \times M$ transversely at $(\phi(v_0),\phi(-v_0))$. This immediately implies transversality.
To calculate the coefficient of $a$ in $Sq^{0}(a)$, count the number of (pairs of) solutions to $\phi(v) = \phi(-v)$ modulo $\mathbb{Z}/2$. Recall from above that there exists exactly one such pair of solutions, $v = \pm v_0$. Taking this modulo the $\mathbb{Z}/2$ action gives $Sq^0(a) = a + \ldots$. The class $a$ generates $H^n (M)$, so there are no more contributions to $Sq^0(a)$ for degree reasons.
\end{proof}
An easy generalisation of the above proof shows:
\begin{lemma}
\label{lemma:sq0}
For $x \in H^*(M)$, when $PD(x)$ is represented by an embedded submanifold $\chi$ then $Sq^{0}(x)=x$.
\end{lemma}
\begin{proof}
Let $x$ be as given in the statement. Proceed as in the previous lemma, but now $x = PD(X)$ for some cycle $X$. It is convenient to assume that $X$ is in a basis for the homology of $M$, with $PD(X^{\vee}) = x^{\vee}$ being the corresponding member of the dual basis under the intersection product. Similarly to above, for general pseudocycle representatives $\alpha: \chi \rightarrow M$ and $\alpha^{\vee}: Y \rightarrow M$ of $X, X^{\vee}$, we determine that $\alpha \cdot \alpha^{\vee}$ consists of a finite, odd number of points $\{ p_{i} \}$ (since $x \cdot x^{\vee} = 1$ mod $2$ by definition). In particular, this is true when $\alpha$ is the embedding of $\chi$. Moreover, this is true of any generic sufficiently small perturbation of $\alpha$, such as when the image of $\alpha$ is contained in a sufficiently small normal disc bundle of $\chi$.
Each of these $p_i$ has a small neighbourhood $U_{i} \subset M$ such that the normal bundle $N(\chi)$ of $\chi$ is trivial on $U_{i} \cap \chi$, with the $U_{i}$ being pairwise disjoint. Pick a bump function $\beta_{i}$ for each neighbourhood $U_{i}$. On the neighbourhood $U_{i}$, there is a diffeomorphism between the disc bundle and the trivial bundle $D(U_{i}) \cong (U_{i} \cap \chi) \times D^{n-\dim(\chi)}$. Using the tubular neighbourhood theorem, $N(\chi)$ and hence $(U_{i} \cap \chi) \times D^{n-\dim(\chi)}$ embeds into $M$ via a map $e$.
Hence there is a smooth map $\phi: \chi \times S^{n- \dim(\chi)} \rightarrow M$, such that if $q \in \chi$ is not in any $U_{i}$, then $\phi(q,v) = q$. Otherwise $q$ is in exactly one $U_{i}$ and we define $\phi(q,v): = e(q, \beta_{i}(q) \phi'(v))$, where $\phi': S^{n-\dim (\chi)} \rightarrow D^{n- \dim (\chi)}$ is the flattening map as in the previous lemma. This yields $\alpha_{n-\dim(\chi)}(q,v) := (\phi(q,v), v)$, recalling that $n-\dim(\chi) = |x|$. Consider the intersection modulo $\mathbb{Z}/2$, whose transversality is verified as in Lemma \ref{lemma:sq0baby}. The coefficient of $xh^{|x|}$ is obtained by using as the output cycle $\alpha^{\vee}(Y) \times S^{n-\dim(\chi)}$. By construction, such intersections only occur when the first coordinate is one of the $p_{i}$. At $p_{i}$, there is exactly one pair of solutions corresponding to the two solutions as in the previous claim: i.e. $\phi'$ is $2$ to $1$ on a dense open subset.
Take the quotient by the $\mathbb{Z}/2$ action to deduce that the number of contributions is an odd number (the number of $p_{i}$) multiplied by an odd number (the number of pairs of solutions at each $p_{i}$), hence is odd. Therefore $Sq^{0}(x) = xh^{|x|} + \ldots$. To show that there are no more terms in $Sq^0(x)$, repeat this with $S^{n-\dim(\chi)} \times B$ as the output cycle, for $B$ representing another element of the dual basis of homology. Strictly, to cover all cases at once we must choose pseudocycle representatives for every $B \in \mathcal{B}^{\vee}$. Then instead of considering $\{ p_i \}$, we now have $\{p_{B,i} \}$, where $B$ varies in $\mathcal{B}^{\vee}$, which are pairwise distinct. Define similarly pairwise disjoint $U_{B,i} \ni p_{B,i}$, and a map $\phi$ as previously. Then as $B \neq X^{\vee}$ is in the dual basis, a general pseudocycle representative of $B$ intersects $\chi$ with an even number of points. We count exactly as in the previous case, except the number of contributions is an even number (the intersection number of $B \cdot \chi$) multiplied by an odd number. Hence the count is even and the contributions due to other $B$ are $0$.
\end{proof}
\begin{corollary}
\label{corollary:trivialisable}
Let $A$ be a closed submanifold of $M$, with trivialisable normal bundle. Then $Sq^{i}(PD([A])) = 0$ for $i \neq 0$.
\end{corollary}
\begin{proof}
Use the embedding $e: A \times D^{n-\text{dim}(A)} \rightarrow M$ by inclusion of the unit disc bundle of $A$, which exists because $A$ has trivialisable normal bundle, to define $A_v$ for $v \in S^{n-i}$, $i > 0$. It is then immediate that no intersections occur for $i \neq 0$, as $A_v \cap A_{-v} = \emptyset$ for all $v \in S^{n-i}$.
\end{proof}
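\begin{rmk}
As a simple sanity check rather than new content: a point $pt \in M$ has trivialisable normal bundle, so Corollary \ref{corollary:trivialisable} gives $Sq^{i}(PD(pt)) = 0$ for $i \neq 0$. Combined with Lemma \ref{lemma:sq0baby}, this shows $Sq(PD(pt)) = Sq^{0}(PD(pt)) = PD(pt)$ (up to the powers of $h$).
\end{rmk}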
\begin{rmk}
More generally, for any immersion $\iota: A \rightarrow X$, consider the homology class $[A] \in H_*(X)$. Then $Sq(PD([A]))$ is the image $\iota_{!} \, w(N_A X)$ of the total Stiefel-Whitney class of the normal bundle $N_A X$ of $A$ in $X$ under the Gysin map $\iota_{!}$. An account of Stiefel-Whitney classes is given in \cite{stiefelwhitney}.
\end{rmk}
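\begin{rmk}
To illustrate the previous remark with a classical computation (included only as a check), let $\iota: \mathbb{RP}^k \hookrightarrow \mathbb{RP}^n$ be the standard embedding and $a \in H^1(\mathbb{RP}^n;\mathbb{Z}/2)$ the generator, suppressing powers of $h$. Then $PD([\mathbb{RP}^k]) = a^{n-k}$ and $w(N_{\mathbb{RP}^k} \mathbb{RP}^n) = (1+ \iota^* a)^{n-k}$, while the Gysin map satisfies $\iota_{!}((\iota^* a)^m) = a^{n-k+m}$ by the projection formula. Hence $$\iota_{!} \, w(N_{\mathbb{RP}^k} \mathbb{RP}^n) = a^{n-k}(1+a)^{n-k} = Sq(a^{n-k}).$$
\end{rmk}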
\section{Quantum Steenrod Square via Morse theory}
\label{sec:SqQviaMorse}
Let $M$ be a closed monotone symplectic manifold. The definition of the quantum Steenrod square uses a $Y$-shaped graph as for the Morse Steenrod square, but now allows a $J$-holomorphic sphere at the trivalent vertex: a $J$-holomorphic sphere with $2+1$ marked points, with 2 incoming and 1 outgoing Morse flowlines attached at the respective marked points.
Make a choice of $f^p_{s,v}$ as in Subsection \ref{subsec:msss}, for $p=1,2,3$. Let $N$ be the minimal Chern number of $M$. Fix $i,j \in \mathbb{Z}_{\ge 0}$ and $a,b \in H^{*}(M)$ with $$|b| - 2 |a| + i + 2jN = 0.$$
Let $\mathcal{M}'_{i,j}(b,a)$ be the moduli space of pairs $(u,v)$, such that:
\begin{itemize}
\item $v \in S^{i}$,
\item $u: S^2 \rightarrow M$ is a simple $J$-holomorphic map of Chern number $jN$, i.e. \begin{equation} \label{equation:jvsholoc} du(z) = J(u(z)) \circ du(z) \circ j_{S^2}(z), \end{equation} where $j_{S^2}$ is the standard almost complex structure on $S^2$,
\item the $-\nabla f^1_{s,v}$ flowline from $u(0)$ converges to $b$ as $s \rightarrow -\infty$ and the $-\nabla f^p_{s,v}$ flowline from $u(1), u(\infty)$ converge to $a$ as $s \rightarrow \infty$ for $p=2,3$ respectively.
\end{itemize}
There is a free $\mathbb{Z}/2$-action on this moduli space: $$\iota_{\mathcal{M}} (u,v) = (u \circ R, -v)$$ where $R$ is the unique M\"obius map in $PSL(2,\mathbb{C})$ swapping $1 \text{ and } \infty$ and fixing $0$. Let $$\mathcal{M}_{i,j}(b,a) = \mathcal{M}'_{i,j}(b,a) / \iota_{\mathcal{M}}.$$ The space $\mathcal{M}_{i,j}(b,a)$ is a smooth manifold of dimension $|b| - 2|a| + i + 2jN$. See Appendix \ref{subsection:transvholspheres} for a discussion of transversality for the equivariant case in the presence of pseudoholomorphic spheres.
\begin{defn}[Morse Quantum Steenrod Square]
\label{defn:mqss}
Pick a basis $\mathcal{B}$ of $H^*(M)$. Let $a \in H^*(M)$. For each $i,j$, let $$Q\mathcal{S}_{i,j}(a) = \sum_{b \in \mathcal{B} : |b| + i + 2jN = 2 |a|} \# \mathcal{M}_{i,j}(b,a) \cdot b,$$
$$Q\mathcal{S}(a) = \sum_{i,j} Q\mathcal{S}_{i,j}(a) \cdot h^{i}T^{j}.$$
Extend to a general element of $QH^{*}(M)$ by $Q\mathcal{S}(at^{j}) = Q\mathcal{S}(a)t^{2j}$.
\end{defn}
The proof that $Q\mathcal{S}$ is an additive homomorphism is identical to that of Proposition \ref{propn:propositionftw}. First define $Q \mathcal{S}' : H^*_{\mathbb{Z}/2}(M \times M) \rightarrow QH^*(M) \otimes H^*(B \mathbb{Z}/2)$. This is identical to $Sq'$ from Definition \ref{defn:mss}, but one uses moduli spaces $\mathcal{M}_{i,j}(a_1,a_2,a_3)$ that have a $J$-holomorphic map $u:S^2 \rightarrow M$ in place of the intersection of the Morse flowlines. Then $Q \mathcal{S} = Q \mathcal{S}' \circ \text{double}$, and observe that $Q \mathcal{S}'(x_1 \otimes x_2 + x_2 \otimes x_1) = 0$.
\begin{rmk}
\label{rmk:propertiesqs}
For $a \in H^*(M)$, $$Q\mathcal{S}_{i,0}(a) = Sq^{|a|-i}(a)$$ as it counts constant spheres. Further, $$\sum_{j \ge 0} Q\mathcal{S}_{0,j}(a) T^j = a * a$$ is the usual quantum product.
\end{rmk}
\subsection{Quantum Steenrod Squares via intersections of cycles}
\label{subsec:qssintcyc}
Let $a \in H^{|a|}(M)$ and pick a basis $\mathcal{B}$ of $H^*(M)$. Denote $\alpha = PD(a), \beta = PD(b)$ for $b \in \mathcal{B}$. We define a moduli space and evaluation maps analogously to Section \ref{subsec:quantcupprod}: given $j \in \mathbb{Z}_{\ge 0},$ consider $\mathcal{M}_{j}(J) \times S^{i}$ consisting of pairs $(u,v)$ where $u$ is a $J$-holomorphic map such that $u_*[S^2]$ has Chern number $jN$ and $v \in S^i$. Fixing $q \in \mathbb{CP}^1$, the evaluation maps are $ev_{q} \times id_{S^i}:\mathcal{M}_j (J) \times S^i \rightarrow M \times S^i$, which we abusively denote $ev_q$. Choose a sequence of maps $(\alpha_i)_{i=0}^{\infty}: X \times S^i \rightarrow M \times S^{i}$ as in Section \ref{subsec:intss}, satisfying conditions (1), (2) and (3) but we will modify (4). Firstly, let
$\mathcal{M}(j,J)$ be the space of $J$-holomorphic spheres of Chern number $jN$, with a $\mathbb{Z}/2$-action acting by $u \mapsto u \circ R$, where as in Section \ref{sec:SqQviaMorse} $R: S^2 \rightarrow S^2, \ R(z) = z/(z-1)$. Further, for $b \in \mathcal{B}$, and $i \in \mathbb{Z}_{\ge 0}$ we define:
$$\euscr{Y}_Q: Y_b \times ((X \times X) \times_{\mathbb{Z}/2} S^i) \rightarrow M \times ((M \times M) \times_{\mathbb{Z}/2} S^i)$$ by $$(y,((x,x'),[v])) \mapsto (\beta^{\vee}(y),[\alpha_i(x,v), \alpha_i(x',-v), v]),$$ and
$$ev: \mathcal{M}(j,J) \times_{\mathbb{Z}/2} S^i \rightarrow M \times ((M \times M) \times_{\mathbb{Z}/2} S^i)$$ is defined by $$[u,v] \mapsto (u(0), [u(1), u(\infty), v]).$$ The required condition (4) is then:
\begin{enumerate}
\setcounter{enumi}{3}
\item For $b \in \mathcal{B}$, and $i \in \mathbb{Z}_{\ge 0}$, the intersection of pseudocycles \begin{equation} \label{tripleintersectionquant} ev(\mathcal{M}(j,J) \times_{\mathbb{Z}/2} S^i) \cap \euscr{Y}_Q(Y_b \times ((X \times X) \times_{\mathbb{Z}/2} S^i)) \end{equation} is transverse in $M \times ((M \times M) \times_{\mathbb{Z}/2} S^{i})$.
\end{enumerate}
Given $i,j \in \mathbb{Z}_{\ge 0}$, for $|b| = 2 |a| - i -2jN$, the pseudocycles are of complementary dimension. Define $n_{i,j}(a,b)$ to be the intersection number of these pseudocycles.
\begin{defn}[Quantum Steenrod Square]
\label{defn:singqss}
For $a \in H^*(M)$ define $Q\mathcal{S} : QH^{*}(M) \rightarrow QH^{*}(M)[h]$ such that
$$Q\mathcal{S}(a):= \sum_{i,j \in \mathbb{Z}_{\ge 0}, \ b \in \mathcal{B}, \ |b| = 2|a|-i-2jN} n_{i,j}(a,b) \cdot b T^{j} h^i$$
with $Q\mathcal{S}$ a linear homomorphism. Then extend $Q\mathcal{S}$ linearly to $QH^*$ by requiring that $Q\mathcal{S}(a t^k) = Q\mathcal{S}(a) t^{2k}$. Also define $Q\mathcal{S}_{i,j}(a)$ as previously.
\end{defn}
As in the classical case this is equivalent to Definition \ref{defn:mqss}.
\subsection{Quantum Stiefel-Whitney Class}
For a smooth compact manifold $M$, the classical Stiefel-Whitney class of $TM$, $w(TM)$, is constructed as in \cite[Section 5.3]{cohnor}, using a certain graph operation. We will not go into details. A more classical treatment is found in \cite{stiefelwhitney}.
Using the convention that $\langle ah,A \rangle = \langle a,A \rangle h$ for $a \in H^*(M), A \in H_*(M)$, one can use a gluing theorem as in \cite[Theorem 20]{cohnor}, or a direct argument to prove that:
\begin{lemma}
$$w(TM) = \sum_{y \in \mathcal{B}} Sq(y) \cdot \langle Sq(y^{\vee}),[M] \rangle.$$
\end{lemma}
\begin{proof}
Recalling that $w(TM) = Sq(v)$, where $v$ is the Wu class of $M$, it is sufficient to prove that \begin{equation} \label{equation:wTM} v = \sum_{y \in \mathcal{B}} y \cdot \langle Sq(y^{\vee}),[M] \rangle. \end{equation} Suppose that we write $v$ as an element of $H^*(M)[h]$, i.e. $$v = \sum_{y \in \mathcal{B}, \ i \ge 0} n_{y,i} \cdot y h^i.$$ Substituting this into the definition of $v$, i.e. $\langle Sq(b), [M] \rangle = \langle b \cup v, [M] \rangle$ for any $b \in H^*(M)$, we obtain that $$\langle Sq(b), [M] \rangle = \sum_{y \in \mathcal{B}, \ i \ge 0} n_{y,i} \cdot \langle b \cup y, [M] \rangle h^i.$$ For each $y \in \mathcal{B}$, let $b = y^{\vee}$. Hence $$\langle Sq(y^{\vee}), [M] \rangle = \sum_{i \ge 0} n_{y,i} \cdot \langle y^{\vee} \cup y, [M] \rangle h^i = \sum_{i \ge 0} n_{y,i} \cdot h^i \langle [M]^{*}, [M] \rangle = \sum_{i \ge 0} n_{y,i} \cdot h^i,$$ and \eqref{equation:wTM} follows.
\end{proof}
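To illustrate the lemma (a standard computation, included here only as a consistency check), take $M = \mathbb{CP}^2$, so $H^*(M) = \mathbb{F}_2[x]/(x^3)$ with $\mathcal{B} = \{1, x, x^2\}$ and $1^{\vee} = x^2$, $x^{\vee} = x$, $(x^2)^{\vee} = 1$. Here $Sq(1) = 1$, $Sq(x) = x^2 + xh^2$ and $Sq(x^2) = Sq(x)^2 = x^2 h^4$ (using $x^3 = 0$), so
$$\langle Sq(x^2),[M]\rangle = h^4, \quad \langle Sq(x),[M]\rangle = 1, \quad \langle Sq(1),[M]\rangle = 0.$$
The lemma then gives
$$w(T\mathbb{CP}^2) = 1 \cdot h^4 + (x^2 + xh^2) \cdot 1 = h^4 + xh^2 + x^2,$$
which upon setting $h = 1$ recovers the classical $w(T\mathbb{CP}^2) = (1+x)^3 = 1 + x + x^2 \pmod 2$.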
Let $M$ be a closed monotone symplectic manifold.
\begin{defn}[Quantum Stiefel-Whitney Class]
The Quantum Stiefel-Whitney class is $$w_Q(TM) := \sum_{y \in \mathcal{B}} Q\mathcal{S}(y) \langle Sq(y^{\vee}),[M] \rangle.$$
\end{defn}
It follows from this definition and a grading argument that:
\begin{lemma}
\label{propn:quantumstiefel4Ng2n}
If the minimal Chern number $N > (\dim M)/2$ then $w_Q(TM) = w(TM)$.
\end{lemma}
\begin{proof}
We will show that given the assumptions of this lemma, for every $y \in H^*(M)$, either $Q \mathcal{S}(y) = Sq(y)$ or $\langle Sq(y^{\vee}),[M] \rangle = 0$.
Suppose that $Q \mathcal{S}(y)$, which is of degree $2|y|$, has a summand containing some nontrivial power of $T$; the latter is of degree $2N$. This implies that $2|y| \ge 2N > \dim M$, hence $|y| > (\dim M)/2$ and $|y^{\vee}| < (\dim M)/2$. Therefore for degree reasons there can be no summand of the form $[M]^* h^j$ in the expansion of $Sq(y^{\vee})$, so $\langle Sq(y^{\vee}),[M] \rangle = 0$.
\end{proof}
\begin{corollary}
Let $M = \mathbb{CP}^n$. Then $w_Q(TM) = w(TM)$.
\end{corollary}
\begin{proof}
The minimal Chern number for $\mathbb{CP}^n$ is $N = n+1 > n = (\dim \mathbb{CP}^n) /2$. Now apply Lemma \ref{propn:quantumstiefel4Ng2n}.
\end{proof}
\section{The Quantum Cartan relation}
\label{sec:quancar}
We continue the discussion from Example \ref{exmpl:difficulties}. Consider the space $M^{\#}_{0,5}$ of $5$-tuples of distinct marked points on the $2$-sphere, and let $$M_{0,5} = M^{\#}_{0,5} / PSL(2,\mathbb{C})$$ where the M\"obius group $G = PSL(2,\mathbb{C})$ acts diagonally on the 5 marked points. There are two different descriptions of $M_{0,5}$ that will be useful:
\begin{enumerate}
\item The quotient $\{ (z_{0},z_{1},z_{2},z_{3},z_{4}) \} / G$ of tuples of five distinct points by the action of $G$ by M\"obius reparametrisation.
\item $\{ (0,1, \infty, z_{3},z_{4}) \}$ with $z_{3},z_{4}$ distinct from each other and from $0,1,\infty$.
\end{enumerate}
The former description gives a simpler definition of the compactification, but the latter description is more useful when describing homology classes. Letting $z_{3},z_{4}$ vary in the description (2) yields a third description:
\begin{enumerate}
\setcounter{enumi}{2}
\item $$M_{0,5} \cong ((\mathbb{CP}^{1} - \{ 0,1,\infty \}) \times (\mathbb{CP}^{1} - \{0,1,\infty \})) - \Delta,$$ where $\Delta$ is the diagonal.
\end{enumerate}
Compactifying this space by adding stable genus $0$ nodal curves with $5$ marked points (there are 10 copies of $\mathbb{CP}^{1} - \{0,1,\infty \}$ and 15 points to add), one obtains a space $$\overline{M}_{0,5} \simeq Bl_{\{ (0,0),(1,1),(\infty, \infty) \}}(\mathbb{CP}^{1} \times \mathbb{CP}^{1}).$$ See \cite[Section D.7.]{jholssympl}. Then $\overline{M}_{0,5}$ is homotopy equivalent to $(\mathbb{CP}^{1} \times \mathbb{CP}^{1}) \# 3 (\overline{\mathbb{CP}^{2}})$, which means:
\begin{equation} \label{equation:m05coh} \begin{array}{l} H^{*}(\overline{M}_{0,5}) = \mathbb{F}_{2}[\delta_1, \delta_2,w_{0},w_{1},w_{\infty}] / I \\ I = (\delta_1^{2},\ \delta_2^{2}, \ w_{i}^{3},\ w_{i}^{2}+\delta_1 \delta_2 \text{ for all } i, \text{ and } \delta_i w_j \text{ for all } i,j, \text{ and } w_i w_j \text{ for } i \neq j) \end{array} \end{equation}
where $w_{i}$ corresponds to the exceptional divisor at $(i,i)$ and $\delta_1, \delta_2$ correspond to the spheres $\mathbb{CP}^{1} \times \{ pt \}$ and $\{ pt \} \times \mathbb{CP}^{1}$ respectively: thus all the generators have degree $2$. A treatment of this is \cite[Section D.7]{jholssympl}. Henceforth $W_i = PD(w_i) \text{ and } \Delta_j = PD(\delta_j)$ for $i=0,1,\infty$ and $j=1,2$.
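As a quick check on \eqref{equation:m05coh} (ours): modulo $I$, degree $0$ is spanned by $1$; degree $2$ by the five classes $\delta_1, \delta_2, w_0, w_1, w_\infty$; and degree $4$ by the single class
$$\delta_1 \delta_2 = w_0^2 = w_1^2 = w_\infty^2,$$
all other quadratic monomials vanishing. Hence $b_0 = b_4 = 1$, $b_2 = 5$ and $\chi(\overline{M}_{0,5}) = 7$, which agrees with blowing up $\mathbb{CP}^1 \times \mathbb{CP}^1$ (of Euler characteristic $4$) at three points, each blow-up increasing the Euler characteristic by $1$.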
Let $x,y,z$ be cohomology classes in $H^*(M)$. Let $\zeta: Z^{\vee} \rightarrow M$ be a pseudocycle representative of $PD(z^{\vee})$. There is a natural $\mathbb{Z}/2$ action on $\overline{M}_{0,5}$, induced by $(12)(34)$. Specifically, $$\iota : (z_{0},z_{1},z_{2},z_{3},z_{4}) \mapsto (z_{0},z_{2},z_{1},z_{4},z_{3}).$$ Then $\iota \times -\text{id}$ defines a free diagonal $\mathbb{Z}/2$ action on $\overline{M}_{0,5} \times S^{i}$ for each $i$. Define $$P_i := ( \overline{M}_{0,5} \times S^i) / (\iota \times -\text{id}).$$
Pick smooth maps $\chi_v: X_{v} \rightarrow M$ and $\gamma_v: Y_{v} \rightarrow M$ for $v \in S^{\infty}$, as in Section \ref{subsec:qssintcyc}, for the cohomology classes $x,y$. Then $\mathcal{M}'_{i,j}(x,y,z)$ consists of triples $(u,m,v)$, where $m$ is a $5$-pointed genus $0$ holomorphic nodal curve, $u: |m| \rightarrow M$ is a smooth stable (nodal) $J$-holomorphic map representing a homology class of Chern number $jN$ (here $|m|$ refers to $m$ with its marked points forgotten), and $v \in S^{i}$ is a parameter. The map $u$ satisfies $u(z_{0}) \in \zeta(Z^{\vee})$, $u(z_{1}) \in \chi_v(X_{v})$, $u(z_{2}) \in \chi_{-v}(X_{-v})$, $u(z_{3}) \in \gamma_v(Y_{v})$ and $u(z_{4}) \in \gamma_{-v}(Y_{-v})$.
There is a $\mathbb{Z}/2$-action on $\mathcal{M}'_{i,j}(x,y,z)$, acting by:
\begin{equation} \label{equation:actiononmoduli} (u,m,v) \mapsto (u,\iota m,-v), \end{equation} recalling that $\iota$ acts, as on $\overline{M}_{0,5}$, by the permutation of marked points $(12)(34)$. Then the action \eqref{equation:actiononmoduli} is well defined because $|\iota m| = |m|$. There is also an action induced by reparametrisation: specifically, if $g \in PSL(2,\mathbb{C})$ acts on some holomorphic sphere $m^a$ of $m$, with corresponding $J$-holomorphic map $u^a: |m^a| \rightarrow M$, then \begin{equation} \label{equation:actionGonmoduli} g \cdot (u^a,m^a,v) = (u^a \cdot g^{-1}, g \cdot m^a, v). \end{equation} We denote $$\mathcal{M}_{i,j}(x,y,z)$$ the moduli space obtained after quotienting $\mathcal{M}'_{i,j}(x,y,z)$ by the actions in Equation \eqref{equation:actiononmoduli} and \eqref{equation:actionGonmoduli}.
There is a natural map $$\pi_{x,y,z}: \mathcal{M}_{i,j}(x,y,z) \rightarrow P_{i}, \ [u,m,v] \mapsto [\text{stab}(m),v],$$ where $\text{stab}(m)$ denotes the stabilisation of the $5$-pointed genus $0$ nodal curve $m$, which corresponds to an element of $\overline{M}_{0,5}$. The square brackets denote taking equivalence classes with respect to the actions of $PSL(2,\mathbb{C})$ and $\mathbb{Z}/2$.
\begin{rmk}
We can think of $\mathcal{M}_{i,j}(x,y,z)$ as the inverse image under the evaluation map $$ev: \overline{\mathcal{M}_{0,5}(J,j)} \times_{\mathbb{Z}/2} S^{i} \rightarrow M \times (M^{4} \times_{\mathbb{Z}/2} S^{i}),$$ $$ev([[u,(p_0, p_1,\ldots,p_4)],v]) = (u(p_0),[(u(p_1),\ldots,u(p_4)),v])$$ where $\overline{\mathcal{M}_{0,5}(J,j)}$ is the set of all stable, genus $0$, $5$-pointed $J$-holomorphic maps $u: \mathbb{CP}^{1} \rightarrow M$ of Chern number $jN$, which contain no repeated or multiply covered components. The $\mathbb{Z}/2$-action acts on the marked points by the permutation $(12)(34)$. Observe that one uses some large machinery, namely the gluing theorem for $J$-holomorphic curves (see \cite[Chapter 10]{jholssympl}), to show that this partial compactification of the space of simple maps has a fundamental class. Then given $x,y,z$ as above, it is immediate from the definition that $ \mathcal{M}_{i,j}(x,y,z)$ is obtained as \begin{equation} \label{equation:lotsofpseudo} ev^{-1} \left( \zeta(Z^{\vee}) \times \left[ \bigcup_{[v] \in \mathbb{RP}^{i} } \chi_v(X_{v}) \times \chi_{-v}(X_{-v}) \times \gamma_v(Y_{v}) \times \gamma_{-v}(Y_{-v}) \times \{ v \} \right] \right). \end{equation} As in Appendix \ref{subsec:mssrmks}, we may interpret the expression between the round brackets in Equation \eqref{equation:lotsofpseudo} as a pseudocycle. The square brackets $\left[, \right]$ in Equation \eqref{equation:lotsofpseudo} denote the equivalence class under the $\mathbb{Z}/2$ action. This allows us to calculate $\dim \mathcal{M}_{i,j}(x,y,z)$, as follows.
\end{rmk}
Let $Q$ be a closed submanifold of the parameter space $P_{i}$. Then $Q$ represents a cycle in $H_{*}(P_{i})$, such that: \begin{equation} \label{eq:dimension} \dim \pi_{x,y,z}^{-1}(Q) = |z| - 2|x| - 2|y| + \dim(Q) + 2jN. \end{equation} In particular, for $Q = P_{i}$, using $\dim(P_{i}) = 4+i$, $$\dim \mathcal{M}_{i,j}(x,y,z) = \dim \pi_{x,y,z}^{-1}(P_{i}) = |z| - 2|x| - 2|y| + i+4 + 2jN.$$
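As a sanity check on \eqref{eq:dimension} (our remark): take $M = \mathbb{CP}^1$, so $N = 2$, with $|x| = |y| = 2$, $j = 1$, and $Q \subset P_4$ of dimension $4$ (as will occur in Example \ref{exmpl:cp1calc}). Then
$$\dim \pi_{x,y,z}^{-1}(Q) = |z| - 4 - 4 + 4 + 4 = |z|,$$
so rigid solutions require $|z| = 0$, i.e. $z = 1$; this matches the correction term $h^4 T$ computed in that example.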
\begin{defn}
\label{defn:qopn}
Let $W$ be a cycle in $H_*(P_{i})$ with $i,j$ fixed, represented by a union of embedded closed submanifolds $\bigcup_{a \in A} Q_{a} \subset P_i$. Let $x,y \in H^*(M)$. Define $$q_{i,j}(W)(x,y) = \sum_{z : \dim \pi_{x,y,z}^{-1}(Q) = 0} \left( \sum_{a \in A} \# (\pi_{x,y,z}^{-1}(Q_a)) \right) \cdot zT^{j}$$ where the first sum is taken over basis elements $z$ of $H^{|z|}(M)$ such that the right-hand side of Equation (\ref{eq:dimension}) is $0$. Extending bilinearly over $\mathbb{Z}/2 [T]$, this defines a bilinear map $$q_{i,j}(W) : QH^{k}(M) \otimes QH^{l}(M) \rightarrow QH^{2k+2l-|W|}(M).$$
\end{defn}
\begin{lemma}
\label{lemma:additiveandindep}
The homomorphism $$q_{i,j}(W) : QH^{k}(M) \otimes QH^{l}(M) \rightarrow QH^{2k+2l-|W|}(M)$$ does not depend on the representative of $W$, and is additive in $W$.
\end{lemma}
\begin{proof}
Represent $W$ by a pseudocycle $\omega: U \rightarrow P_i$ (in the case of Definition \ref{defn:qopn}, we chose a union of embedded submanifolds). Observe that the coefficient of $z$ in $q_{i,j}(W)(x,y)$ is the intersection number of two pseudocycles. Using notation as previously, these are $\pi_{x,y,z}: \mathcal{M}_{i,j}(x,y,z) \rightarrow P_i$ and $\omega: U \rightarrow P_i$. We know that intersection numbers are independent of the choice of pseudocycle representative, i.e. $q_{i,j}(W)$ only depends on the homology class of $W$.
Then it is immediate that $q_{i,j}(W+W') = q_{i,j}(W) + q_{i,j}(W')$: if $\omega: U \rightarrow P_i$ represents $W$ and $\omega': U' \rightarrow P_i$ represents $W'$, then the map $\omega'': U \sqcup U' \rightarrow P_i$, defined by $\omega''|_{U} = \omega$ and $\omega''|_{U'} = \omega'$, represents $W+W'$. The intersection numbers from the previous paragraph are additive.
\end{proof}
\begin{figure}
\input{m05elmts.pdf_t}
\caption{$m_{1}, m_{2} \in \overline{M}_{0,5}$}
\label{fig:m05elmts}
\end{figure}
\begin{rmk}
One can likewise calculate the coefficient of $z T^j$ in $q_{i,j}(W)(x,y)$ (with notation as in Definition \ref{defn:qopn}) in the following way. Take the cup product of the class $\pi^* \rho$, where $\rho = PD(Q) \in H^* (P_i)$, with the pullback of $z^{\vee} \times x \times x \times y \times y$ under the evaluation map (specifically, the cup product takes place in $H^*(\overline{\mathcal{M}_{0,5}(j,J)} \times_{\mathbb{Z}/2} S^i)$). Then integrate this over the equivariant fundamental class of $\overline{\mathcal{M}_{0,5}(j,J)}$.
\end{rmk}
In the following, use a cell decomposition for $S^{i}$ with cells $D^{i,\pm}$ in degree $i$, corresponding to the two hemispheres of dimension $i$. For $d$ the differential on cellular chains, $d(D^{i,\pm}) = D^{i-1,+} + D^{i-1,-}$.
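Two remarks on this decomposition (ours): the antipodal map sends $D^{i,\pm}$ to $D^{i,\mp}$, which is how the $\mathbb{Z}/2$-action interacts with the cells below; and one checks directly that $d^2 = 0$, as mod $2$
$$d(d D^{i,\pm}) = d(D^{i-1,+} + D^{i-1,-}) = 2\,(D^{i-2,+} + D^{i-2,-}) = 0.$$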
The class of cases we consider are those where $\text{dim}(Q) = i$. If $m_{1}, m_{2} \in \overline{M}_{0,5}$ are as given in Figure \ref{fig:m05elmts}, then $m_{1}$ and $m_{2}$ are invariant under the $\mathbb{Z}/2$ action on $\overline{M}_{0,5}$. Hence $\{ m_{1} \} \times D^{i,+}$ and $\{ m_{2} \} \times D^{i,+}$, which are embedded submanifolds of $P_i$, represent well defined cycles in $H_*(\overline{M}_{0,5} \times_{\mathbb{Z}/2} S^{i})$. For $p=1,2$ call these cycles $Q_{p}^{i}$. To see that these cycles are indeed closed, observe (for example using singular homology) that if $X \subset \overline{M}_{0,5}$ is an embedded submanifold, then $X \times D^{i,+}$ represents a chain in $C_*(P_i)$. Then, abusing notation (applying the K\"unneth isomorphism, and writing the submanifold $X$ instead of a sum of the simplices representing $X$), for closed $X$ we have $$d([X \times D^{i,+}]) = [(dX) \times D^{i,+}] + [X \times (D^{i-1,+} + D^{i-1,-})] = [(X + \iota X) \times D^{i-1,+}].$$ The brackets $[,]$ indicate that we have taken the quotient by $\mathbb{Z}/2$ of the chain complex $C_*(\overline{M}_{0,5} \times S^i)$. The last equality uses that $dX = 0$ and the $\mathbb{Z}/2$-action on $\overline{M}_{0,5} \times S^{i}$, which identifies $X \times D^{i-1,-}$ with $\iota X \times D^{i-1,+}$. Hence, if $X$ is a $\mathbb{Z}/2$-invariant closed submanifold, such as $\{ m_1 \}$ or $\{ m_2 \}$, then the chain represented by $X \times D^{i,+}$ is closed in equivariant homology.
Indeed, by the above, for $i > 0$ the chain ``$\{ pt \} \times D^{i,+}$'' is only a cycle when $pt$ is a fixed point of the $\mathbb{Z}/2$ action on $\overline{M}_{0,5}$. The space of fixed points $(\overline{M}_{0,5})^{\mathbb{Z}/2}$ is the disjoint union of a sphere containing $m_{2}$ and the single point $m_{1}$. See Remark \ref{rmk:z2actioncompact} at the end of this section for more details on this $\mathbb{Z}/2$-action.
\begin{lemma}
\label{lemma:lem1}
$$\sum_{i,j} q_{i,j}(Q_{1}^{i})(x,y)h^i = Q\mathcal{S}(x)*Q\mathcal{S}(y) \text{ and } \sum_{i,j} q_{i,j}(Q_{2}^{i})(x,y)h^i = Q\mathcal{S}(x*y).$$
\end{lemma}
\begin{proof}
For the rest of this proof, we fix $i,j$, and we show that $$q_{i,j}(Q_{1}^{i})(x,y) = [Q\mathcal{S}(x)*Q\mathcal{S}(y)]_{i,j}T^{j}.$$ To do this we proceed as in Lemma \ref{lemma:lemmaSqSq}, using $1$-dimensional moduli spaces, the ends of which count e.g. $\sum_{i,j} q_{i,j}(Q_{1}^{i})(x,y) \cdot h^i$ and $Q\mathcal{S}(x)*Q\mathcal{S}(y)$ respectively. This yields a bordism between the endpoints, with more details provided in Appendix \ref{subsec:bordismquantumcartantrans}.
Fixing some $i \in \mathbb{N}$ (the dimension of the sphere in which $v$ will vary), we consider the $1$-dimensional moduli spaces from Section \ref{subsec:Cartan}, denoted $\tilde{\mathcal{M}}_1(x,y,z)$ and $\tilde{\mathcal{M}}_2(x,y,z)$. Recall the spaces $T^c \cong [0,\infty]$, and $\overline{T} \rightarrow T^c$. For each $t \in (0,\infty]$, we define quantum analogues $\tilde{\mathcal{M}}^Q_{p}(x,y,z)$ of the $\tilde{\mathcal{M}}_p(x,y,z)$, where now each element of $\tilde{\mathcal{M}}^Q_{p}(x,y,z)$ is a pair $(u,v)$ where $v \in S^i$ and $u: (|t|_Q,t) \rightarrow M$ is continuous. Here, $|t|_Q$ is obtained by taking the graph associated to $|t|$, and adding a sphere at each trivalent vertex (in such a way that the incoming edge of $t$ is attached at $0$ on the sphere, and the outgoing edges are attached at $1,\infty$ respectively). We then require that $u$ is $J$-holomorphic on each sphere, satisfies the edge and asymptotic equations as in Section \ref{subsec:Cartan}, and that the sum of the Chern numbers of the three spheres is $jN$. The $\mathbb{Z}/2$-action acts by $(u, v) \mapsto (u \circ \overline{r}, -v)$, where $\overline{r}$ acts on $|t|$ as in Section \ref{sec:quancar} and extends to the holomorphic spheres in the following ways:
\begin{itemize}
\item for $\tilde{\mathcal{M}}^Q_{1}(x,y,z)$, the involution $\overline{r}$ acts by $z \mapsto z / (z-1)$ on the two right holomorphic spheres and the identity on the left holomorphic sphere.
\item for $\tilde{\mathcal{M}}^Q_{2}(x,y,z)$, the involution $\overline{r}$ acts by $z \mapsto z / (z-1)$ on the left holomorphic sphere and the identity on the two right holomorphic spheres.
\end{itemize}
For the $t=0$ end of the moduli spaces, we instead consider for $\tilde{\mathcal{M}}^Q_{p}(x,y,z)$ continuous maps $u: |m_p|' \rightarrow M$. Here, $m_p$ are the elements of $\overline{M}_{0,5}$ as in Figure \ref{fig:m05elmts}, and $|m_p|'$ is obtained from the nodal sphere configuration $|m_p|$ associated to $m_p$, by attaching to $z_0$ the negative half-line $(-\infty,0]$ and to $z_1,z_2,z_3,z_4$ the positive half-line $[0,\infty)$. We then require the same conditions on the edges, the asymptotics and the energy of the nodal spheres. The $\mathbb{Z}/2$-action extends continuously from the $t \in (0,\infty]$ action. We let $\mathcal{M}^Q_{p}(x,y,z) = \tilde{\mathcal{M}}^Q_{p}(x,y,z) / (\mathbb{Z}/2)$ for $p=1,2$.
It is then immediate from the definition that counting the setups corresponding to the $t=0$ end of $\mathcal{M}^Q_{p}(x,y,z)$ gives the coefficient of $z$ in $q_{i,j}(Q_{p}^{i})(x,y)h^i$, for each $i$. Hence, it remains to show that counting the $t=\infty$ ends of $\mathcal{M}^Q_{p}(x,y,z)$ yields the coefficients of $Q\mathcal{S}(x)*Q\mathcal{S}(y)$ and $Q\mathcal{S}(x*y)$ respectively for $p=1,2$. The proof of this is identical to Lemmas \ref{lemma:lemmaSqSq} and \ref{lemma:lemmaSqcup} respectively.
We do not need to worry about the bubbling off of extra spheres because $M$ is monotone. Specifically, any $J$-holomorphic bubble must have strictly fewer than $2$ marked points. Introducing ``phantom'' marked points, we may consider this to be a $3$-pointed Gromov-Witten invariant corresponding to intersections with the Poincar\'e dual of $1 \in H^0(M)$. We know that, for a general choice of $J$, such Gromov-Witten invariants only contain contributions from constant spheres, as in \cite[Proposition 11.1.11(ii)]{jholssympl}.
\end{proof}
We now prove a slightly more general lemma for $\mathbb{Z}/2$-equivariant homology.
\begin{lemma}
\label{lemma:lem111}
Suppose that $M$ is a smooth connected manifold with a smooth $\mathbb{Z}/2$-action $\iota:M \rightarrow M$. Suppose that $W^n, L^{n-1} \subset M$ are submanifolds, fixed set-wise by $\iota$, of dimensions $n, n-1$ respectively, and representing respective homology classes $[W]$, $[L]$. Suppose further that $W = L \cup U \cup \iota U$ for some open submanifold $U \subset W$ such that $\partial \overline{U} = L$, where $\overline{U}$ is the closure of $U$ in $W$.
Then, denoting by $D^{i,+}$ the upper $i$-dimensional hemisphere in $S^j$ ($i \le j$), the submanifolds $W \times D^{i-1,+}$ and $L \times D^{i,+}$ of $M \times S^j$ represent homologous elements of $H_*(M \times_{\mathbb{Z}/2} S^j)$, i.e. $$[W \times D^{i-1,+}] = [L \times D^{i,+}].$$
\end{lemma}
\begin{proof}
By the K\"unneth isomorphism, using singular homology, there is a quasi-isomorphism between $C_{\bullet}(M) \otimes C_{\bullet}(S^j)$ and $C_{\bullet}(M \times S^j)$, here using singular homology. In fact we may replace singular homology of $C_{\bullet}(S^j)$ by cellular homology, as the K\"unneth isomorphism is natural on chain complexes. In this cellular decomposition, there are two $i$-cells for each $0 \le i \le j$, such that one obtains a decomposition of $S^{j+1}$ from $S^j$ by attaching two $j+1$-cells along their boundaries at $S^j$.
Observe then that there is an involution on $C_{\bullet}(M) \otimes C_{\bullet}(S^j)$, which is the chain map $\phi := \iota_* \otimes (-id)_*$. We consider the homology of the complex $(C_{\bullet}(M) \otimes C_{\bullet}(S^j))/\phi$, the quotient of the complex $C_{\bullet}(M) \otimes C_{\bullet}(S^j)$ by $\phi$, with the differential being induced by the differential on the tensor product. Then we know that the K\"unneth isomorphism is natural (in particular, with respect to the action of $\phi$), hence $$H_*((C_{\bullet}(M) \otimes C_{\bullet}(S^j))/\phi) \cong H_*(C_{\bullet}(M \times S^j) / \phi).$$ The homology of the latter complex is isomorphic to the homology of $M \times_{\mathbb{Z}/2} S^j$, but we will represent chains using the former complex.
We will (abusively) denote by $W \times D^{i-1,+}$ the $\phi$-equivalence class of the chain (i.e. sum of simplices) corresponding to the submanifold $W \times D^{i-1,+} \subset M \times S^j$. Observe that $$d(\overline{U} \times D^{i,+}) = (d \overline{U}) \times D^{i,+} + \overline{U} \times (d D^{i,+}),$$ abusively also denoting by $d$ the differentials on all possible complexes. We know that $d \overline{U} = L$ by assumption. Further, $d D^{i,+} = D^{i-1,+} + D^{i-1,-}$. Hence $$d(\overline{U} \times D^{i,+}) = L \times D^{i,+} + \overline{U} \times (D^{i-1,+} + D^{i-1,-}).$$ Note that $$\begin{array}{lll} \overline{U} \times (D^{i-1,+} + D^{i-1,-}) &=& \overline{U} \times D^{i-1,+} + \overline{U} \times D^{i-1,-} \\ & =& \overline{U} \times D^{i-1,+} + \iota \overline{U} \times D^{i-1,+} \\ &=& (\overline{U} + \iota \overline{U}) \times D^{i-1,+}, \end{array}$$ using for the second equality that the chains represent elements of the complex quotiented by the involution $\phi$. But note that by definition the chains $\overline{U} + \iota \overline{U} = W$ (summing simplices, the boundaries match and cancel along $L$). Hence $$d(\overline{U} \times D^{i,+}) = L \times D^{i,+} + W \times D^{i-1,+},$$ as required.
\end{proof}
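A minimal illustration of Lemma \ref{lemma:lem111} (our example): take $M = W = S^1 \subset \mathbb{C}$, with $\iota$ complex conjugation, $L = \{ \pm 1 \}$ and $U$ the open upper semicircle, so that $\iota U$ is the lower semicircle and $\partial \overline{U} = L$. The lemma then gives
$$[S^1 \times D^{i-1,+}] = [\{ \pm 1 \} \times D^{i,+}] \ \text{ in } H_*(S^1 \times_{\mathbb{Z}/2} S^j),$$
trading the invariant circle for its fixed points; this is precisely the mechanism used twice in the proof of Lemma \ref{lemma:lem222} below.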
Let $A^{i} = Q_{1}^{i} - Q_{2}^{i}$. Let $W_{q}$ be the pullback under the blowdown $$Bl_{(0,0), (1,1), (\infty,\infty)}(\mathbb{CP}^{1} \times \mathbb{CP}^{1}) \rightarrow Bl_{(q,q)}(\mathbb{CP}^{1} \times \mathbb{CP}^{1}),$$ of the exceptional $\mathbb{CP}^{1}$ divisor in $Bl_{(q,q)}(\mathbb{CP}^{1} \times \mathbb{CP}^{1})$, for $q=0,1,\infty$. The elements of $W_0$ are given in Figure \ref{fig:w0}.
\begin{figure}
\input{w0.pdf_t}
\caption{Elements of $W_0$}
\label{fig:w0}
\end{figure}
\begin{lemma}
\label{lemma:lem222}
$[W_0 \times D^{i-2,+}] = [\{ m_1 \} \times D^{i,+}] + [\{ m_2 \} \times D^{i,+}]$.
\end{lemma}
\begin{proof}
We apply Lemma \ref{lemma:lem111} with $M = \overline{M}_{0,5}$, $W = W_0$ and $\iota = (12)(34)$. Observe that we may identify $W_0 \cong S^2$ with the extended complex plane, fixing $(z_0,z_3,z_4) = (0,1,\infty)$. The point $z \in \mathbb{C} \cup \{ \infty \}$ corresponds to the freely moving point on the four-pointed component of an element $m$ of $W_0$: specifically, the node connecting together the two components (i.e. copies of $S^2$) that together comprise $m$. The $\mathbb{Z}/2$-action on $\mathbb{C} \cup \{ \infty \}$ is then $z \mapsto z/(z-1)$. Let $L = \mathbb{R} \cup \{ \infty \} \subset \mathbb{C} \cup \{ \infty \}$. By Lemma \ref{lemma:lem111}, we know that $$[W_0 \times D^{i-2,+}] = [L \times D^{i-1,+}].$$ Now observe that $L$ contains two fixed points, $\{ m_1, m_3 \} \subset W_0$, corresponding to the points $\{ 0, 2 \} \subset \mathbb{R} \subset \mathbb{C} \cup \{ \infty \}$. Applying Lemma \ref{lemma:lem111} again, we obtain that $$[W_0 \times D^{i-2,+}] = [L \times D^{i-1,+}] = [\{m_1, m_3 \} \times D^{i,+}] = [\{m_1\} \times D^{i,+}] + [\{m_3\} \times D^{i,+}].$$
Hence it remains to prove that $[\{m_3\} \times D^{i,+}] = [\{m_2\} \times D^{i,+}]$. Recall we stated earlier (and will elaborate in Remark \ref{rmk:z2actioncompact}) that the fixed point set of $\overline{M}_{0,5}$ corresponds to the union of $\{ m_1 \}$ and a $2$-dimensional sphere. In particular, the points $m_3$ and $m_2$ can be joined by a path of invariant points, which we denote $l$. Then $d (l \times D^{i,+}) = \{m_3\} \times D^{i,+} + \{m_2\} \times D^{i,+}$, as required.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:quancar}]
By Lemmas \ref{lemma:lem222} and \ref{lemma:additiveandindep}, $q_{i,j}(\{ m_{1} \} \times D^{i,+}) = q_{i,j}(\{ m_{2} \} \times D^{i,+}) + q_{i,j}(W_{0} \times D^{i-2,+})$. Multiplying by $h^{i}$, summing over all $i,j$ and applying Lemma \ref{lemma:lem1} yields the result.
\end{proof}
\begin{lemma}
\label{lemma:lemqWtermscor}
The homomorphism $q_{i,j}(W_0 \times D^{i-2,+})$ is only nonzero for $i > 0, j >0$.
\end{lemma}
\begin{proof}
The $j=0$ case corresponds to $J$-holomorphic maps that are constant. Suppose that we have chosen input cocycles $x,y$ and a test output cocycle $z^{\vee}$, such that the moduli space used to calculate the coefficient of $z$ in $q_{i,j}(W_0 \times D^{i-2,+})(x,y)$ is $0$-dimensional. In particular, we require that $|z| = 2|x| + 2|y| - i$, using Equation \eqref{eq:dimension} and recalling that $\dim W_0 \times D^{i-2,+} = i$ and $j=0$. However, such setups in fact consist of pairs $(m \in W_0, (v,u))$ such that $(v,u)$ is a configuration as in the $t=0$ end of Figure \ref{fig:modspatree}. For generic choices of data, there do not exist any such $(v,u)$: for each fixed $m$, the space of such pairs is of virtual dimension $|z| - 2|x| - 2|y| + (i-2) = -2$, hence the moduli space is empty and therefore trivially transverse.
For the vanishing for $i=0$, observe that the $h^{i}$ terms correspond to calculating $q_{i,j}(W_0 \times D^{i-2,+})$ which vanishes for $i < 2$, as $S^{\infty}$ has no cells of negative dimension.
\end{proof}
We will verify Theorem \ref{thm:quancar} in the case of $\mathbb{CP}^{1}$.
\begin{exmpl}[$\mathbb{CP}^{1}$]
\label{exmpl:cp1calc}
Let $x$ be the generator of $H^{2}(\mathbb{CP}^{1})$. We verify that $$[Q\mathcal{S}(x)*Q\mathcal{S}(x)]_{i,j} = Q\mathcal{S}_{i,j}(x*x) + q_{i,j}(W_{0} \times D^{i-2,+})(x,x).$$ We know that there can only be contributions from $j>0$ and $i>0$, by Lemma \ref{lemma:lemqWtermscor}. In the cases where $j=0$ or $i=0$, we have already verified this using Example \ref{exmpl:difficulties}. For degree reasons, there cannot be any solutions for $j \ge 2$ or $i \ge 4$, and there cannot be solutions for $i=1,3$ (as $\mathbb{CP}^1$ has only even-degree cohomology). Hence we need to consider only the cases $(i,j) = (4,1), (2,1)$.
For the case $(i,j) = (4,1)$, $$[Q\mathcal{S}(x) * Q\mathcal{S}(x)]_{4,1} = [T^{2} + h^{4}T]_{4,1} = h^{4}T$$ $$Q\mathcal{S}_{4,1}(x*x) = Q\mathcal{S}_{4,1}(T) = [T^{2}]_{4,1} = 0.$$ We then calculate $q_{4,1}(W_{0} \times D^{2,+})(x,x)$.
Pick a family of representatives of $PD(x)$, parametrised by $S^{2}$, as follows: let $\phi : S^{2} \rightarrow D^{2}$ be the ``flattening map'' of the sphere, i.e. if $S^{2} \subset \mathbb{R}^{3}$ it is projection onto $\mathbb{R}^{2} \subset \mathbb{R}^{3}$. Pick two disjoint discs in $\mathbb{CP}^{1} = S^{2}$, call them $D$ and $D'$, and pick maps $\eta: D^2 \rightarrow D \xhookrightarrow{} \mathbb{CP}^{1}$ and $\eta': D^2 \rightarrow D' \xhookrightarrow{} \mathbb{CP}^{1}$ identifying $D^{2}$ with $D,D'$ respectively. Let $\psi = \eta \circ \phi: S^2 \rightarrow \mathbb{CP}^{1}$, and likewise $\psi' = \eta' \circ \phi: S^2 \rightarrow \mathbb{CP}^{1}$. Then two representatives of $PD(x)_{v} = \{ pt \}_{v}$ are $\psi(v)$ and $\psi'(v)$ where $v$ varies in $S^{2}$. Recall that elements of $W_{0}$ are as in Figure \ref{fig:w0}.
Every element of $W_0$ consists of two spheres, joined at a point, which in this discussion we call ``components'': recall that one has three special points, and one has four. The target $\mathbb{CP}^1$ has minimal Chern number $N = 2$, so a $J$-holomorphic map $u$ from an element of $W_0$ to $\mathbb{CP}^1$ may be non-constant on only one of the two components, and on that component the map must have degree $1$. It is immediate that the map must be constant on the component with three marked points (if it were instead constant on the other component, then the solution could not be rigid, as $z_{4}$ could vary freely). Then $u(z_{1}) = \psi(v)$ and $u(z_{2}) = \psi(-v)$ meet at the unique point on the sphere where $\psi(v) = \psi(-v)$. Hence there is one solution, and this solution gives the correction term $h^{4}T$.
For the case $(i,j) = (2,1)$, observe that the coefficient of $h^2$ in the correction term corresponds to using as the parameter space $D^{2-2,+} \subset S^2$, which is a single point: thus, we are simply performing a nonequivariant calculation. As we are calculating the coefficient of $T$, as in the previous paragraph we know that any $J$-holomorphic maps must be constant on the component with three special points. This setup then corresponds to deducing the coefficient of $x T$ in $(x*x) * (x \cup x)$, but $x \cup x = 0$ in $\mathbb{CP}^1$. Example \ref{exmpl:difficulties} provides the contributions $Q\mathcal{S}_{2,1}(x*x) = 0$ and $[Q\mathcal{S}(x)*Q\mathcal{S}(x)]_{2,1} = 0$.
\end{exmpl}
Henceforth, for brevity we will denote $$q(W)(x,y) := \sum_{i,j} q_{i,j}(W_0 \times D^{i-2,+})(x,y)h^i$$ for $x,y \in QH^*(M)$.
\begin{rmk}[Quantum Cartan in Classical Case]
Lemma \ref{lemma:lemqWtermscor} gives a sanity check that in the classical case, $Sq(x) \cup Sq(y) = Sq(x \cup y)$.
\end{rmk}
\begin{rmk}
\label{rmk:z2actioncompact}
We recall that the $\mathbb{Z}/2$-action $\iota$ on $M_{0,5}$ acts on the labels of the points by the permutation $(12)(34)$. Hence, suppose that $(0,1,\infty,z_3,z_4) \in M_{0,5}$. Then $\iota (0,1,\infty,z_3,z_4) = [0,\infty,1,z_4,z_3]$, which is no longer in the form of description $(2)$ of $M_{0,5}$. We must apply the element $R \in PSL(2,\mathbb{C})$ such that $R(z) = z/(z-1)$. Then $$[0,\infty,1,z_4,z_3] = [R0,R\infty,R1,Rz_4,Rz_3] = (0,1,\infty,z_4/(z_4 - 1), z_3/(z_3-1)).$$ Such a point in $M_{0,5}$ is fixed exactly when $z_3 = z_4/(z_4-1)$. This provides a (real) $2$-dimensional family $F$ of fixed points of $\iota$, as $z_3$ varies in $S^2 - \{0,1,\infty,2 \}$.
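Note (a verification we include) that $R$ is an involution:
$$R(R(z)) = \frac{z/(z-1)}{z/(z-1) - 1} = \frac{z/(z-1)}{1/(z-1)} = z,$$
so the two conditions $z_3 = R(z_4)$ and $z_4 = R(z_3)$ implicit in being a fixed point are equivalent, and imposing the single equation $z_3 = z_4/(z_4-1)$ suffices.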
Using description $(2)$ of $M_{0,5}$, the action $\iota$ extends in the obvious way to the compactification, by permuting the labels of the marked points. Fixed points of the $\iota$-action on the compactification can be found in the limit as $z_3 \rightarrow 0,1,\infty,2$, with $z_4 \rightarrow 0,\infty,1,2$ respectively. These four points compactify $F$ to a $2$-sphere that we denote $\overline{F}$. The point with $z_3 \rightarrow 1$ and $z_4 \rightarrow \infty$ is $m_2$.
We now use description $(1)$ of $M_{0,5}$. It can be deduced by inspection that there are no fixed points if exactly one of the pairs $(z_1, z_3), (z_2,z_4), (z_1,z_4), (z_2,z_3)$ collides. A collision of $(z_1,z_2)$ is only fixed if they collide at $2$, which is covered above. A collision of $(z_3,z_4)$ is only fixed if it occurs at $2$, which is the single point where $(z_1,z_2)$ collide at $0$, also counted above. It is an easy check that no point in a collision of $(z_0,z_i)$ is fixed, for $i=1,2,3,4$. Hence, the only other possibility to check is when two pairs collide at the same time, say $(z_i,z_j),(z_k,z_l)$ with $i,j,k,l$ all distinct. Checking the cases, we see that there is a single point not yet accounted for in $\overline{F}$, namely the simultaneous collision of $(z_1,z_2)$ and of $(z_3,z_4)$. This is the point $m_1$.
\end{rmk}
\section{Computing the Quantum Steenrod Square for toric varieties}
\label{sec:computingqsstoric}
In this section, we will use the intersection definition of the quantum Steenrod square (Definition \ref{defn:singqss}). We will require that $\alpha_v: X_v \rightarrow M$ is an embedded submanifold for each $v$ (and not just a pseudocycle), and we will abusively replace $\alpha_v(X_v)$ by $X_v$.
\subsection{Quantum Steenrod squares for $\mathbb{CP}^{n}$}
\label{subsec:computingqsscpn}
Let $x^{i}$ generate $H^{2i}(\mathbb{CP}^{n})$. By the quantum Cartan relation, Theorem \ref{thm:quancar}, $$Q\mathcal{S}(x^{i+1}) = Q\mathcal{S}(x^{i}) * Q\mathcal{S}(x) + q(W)(x^{i},x).$$ We can iteratively construct $Q\mathcal{S}(x^{i+1})$ as long as we know $q(W)(x^{i},x)$. Using a combination of degree reasons and Remark \ref{rmk:propertiesqs}, $Q\mathcal{S}(x) = x * x + xh^2$.
\begin{lemma}
\label{lem:qWcpn}
For $2i<n$, $q(W)(x^i,x)=0$. For $n \le 2i \le 2n$,
$$q(W)(x^{i},x) = {{i} \choose {n-i}} T h^{4i+2-2n}.$$
\end{lemma}
\begin{proof}
Recall that we make a generic choice of $C^{n-1}_v \subset \mathbb{CP}^n$, parametrised by $v \in S^{\infty}$, such that $C^{n-1}_v$ represents $PD(x) \in H_{2(n-1)}(\mathbb{CP}^n)$ for each $v$. Similarly, we choose $C^{n-i}_v \subset \mathbb{CP}^n$ representing $PD(x^i) \in H_{2(n-i)}(\mathbb{CP}^n)$ for each $v \in S^{\infty}$. Observe that $q(W)(x^i,x)$ has degree $4i+4$ so, by Lemma \ref{lemma:lemqWtermscor}, for $i=1,...,n$ we deduce that:
\begin{equation}
\label{equation:qW}
\begin{array}{rcl}
q(W)(x^i, x) & = & \sum_{j=n-i}^{i} m^{i+1}_{j} x^{i+j-n} T h^{2(i+1)-2j}
\\[2em]
& = &
m_{n-i}^{i+1} x^0 T h^{4i+2-2n}
+
m_{n-i-1}^{i+1} x^1 T h^{4i-2n}
+
\cdots
+
m_i^{i+1} x^{2i-n} T h^2
\end{array}
\end{equation}
where $m^{i+1}_{j}$ are coefficients and the degrees are $|x| = 2$, so $|x^{i}| + |x| = 2i+2$ and $|T| = 2(n+1)$. Equation \eqref{equation:qW} follows for grading reasons.
We claim that $m^{i+1}_j$, the coefficient of $x^{i+j-n} T h^{2(i+1)-2j}$, is the number of (unparametrised) $J$-holomorphic spheres that intersect both $\mathbb{CP}^{{i+j}-n}$ and some representative of $PD(Sq^{2j}(x^i))$. We proceed in the following steps:
\begin{enumerate}[i)]
\item Computing the coefficient of $x^{i+j-n} T h^{2(i+1)-2j}$ in $q(W)(x^i,x)$ is the same as counting setups as in Figure \ref{fig:qWcpn}(1) for $v \in D^{2i-2j, +}$ (recall $D^{2i-2j,+}$ corresponds to the $h^{2i-2j+2}$ term when defining $q(W)$ in Theorem \ref{thm:quancar}). Only $T^{1}$ appears in Equation \eqref{equation:qW}, so one of the holomorphic bubbles has degree $0$ and the other degree $1$. For the solutions to be rigid, the sphere with the marked points $z_1, z_2$ must be constant (as in Example \ref{exmpl:cp1calc}). This yields the setup in Figure \ref{fig:qWcpn}(2).
\item Let $b$ be an element of the basis of cohomology, $\mathcal{B}$. The intersection of $C^{n-i}_{v}$ and $C^{n-i}_{-v}$ with some representative of $PD(b^{\vee})$, taken over all $v \in D^{2i-2j, +}$, is the coefficient of $b$ in $Sq^{2j}(x^{i})$ (by definition).
\item Suppose that we neglect the intersections with $C^{n-1}_{\pm v}$ in Figure \ref{fig:qWcpn}(2). Then we count the number of (unparametrised) $J$-holomorphic spheres $u:S^2 \rightarrow M$ that intersect:
\begin{itemize}
\item a representative of $PD((x^{i+j-n})^{\vee}) = PD(x^{2n-(i+j)})$ (an example of which is a copy of $\mathbb{CP}^{i+j-n} \subset \mathbb{CP}^n$) and
\item a representative of $PD(Sq^{2j}(x^{i}))$.
\end{itemize} We recall that $Sq^{2j}(x^{i}) = {{i} \choose {j}} x^{i+j}$ (see \cite[Section 4.L.]{algtop}). This implies that $PD(Sq^{2j}(x^{i})) = {{i} \choose {j}} PD(x^{i+j})$. Recall that $PD(x^{i+j})$ is represented by a copy of $\mathbb{CP}^{n-(i+j)}$. Our problem reduces to asking how many lines there are intersecting $\mathbb{CP}^{(i+j)- n }$ and $\mathbb{CP}^{n - (i+j) }$, and multiplying this by the coefficient ${{i} \choose {j}}$.
However, this only makes sense if $i+j-n \ge 0$ and $n-(i+j) \ge 0$ (both of the representatives must be of nonnegative dimension), hence $j=n-i$. In particular, the representative of $PD((x^{i+j-n})^{\vee})$ is a point, denoted $pt$. Further, there are a finite number (congruent to ${{i} \choose {n-i}} \text{ mod } 2$) of pairs $\{ v_k, -v_k \}$ such that $C^{n-i}_{v_k} \cap C^{n-i}_{-v_k} = pt_k \cong \mathbb{CP}^0$. For each $pt_k$ there is exactly one line between $pt_k$ and $pt$ (i.e. there is always exactly one line between any two distinct points in $\mathbb{CP}^n$).
\item The homology class of each of the degree $1$ $J$-holomorphic spheres (the lines from the previous step) is the same homology class as that of $\mathbb{CP}^{1}$. Observe that $\mathbb{CP}^{1} \cap C^{n-1}_v = \{ pt'_v \}$ for each $v$. We make a generic choice of $C^{n-1}_v$ such that the $J$-holomorphic spheres are not contained in $C^{n-1}_{v}$ for generic $v$: specifically, we choose some hypersurface in $\mathbb{CP}^{n}$ not containing the finite collection of lines from step iii, and then require that $C^{n-1}_v$ is a $C^2$-small perturbation in $v$ from this hypersurface (to ensure transversality). Then for each pair $\{ v_k , -v_k \}$, the intersection of the line at $pt'_{v_k}$ fixes the parametrisation of the $J$-holomorphic map, and the intersection of the line at $pt'_{-v_k}$ fixes which element $m$ of $W_0$ we are using as the domain.
\end{enumerate}
Hence for each of the ${{i} \choose {n-i}}$ lines from step iii, there is exactly one choice of tuple $(m,u,v_k)$ (up to reparametrisation and the $\mathbb{Z}/2$-action) satisfying the configuration in Figure \ref{fig:qWcpn}(2).
\end{proof}
\begin{thm}
\label{thm:SqQcpn}
For all $i \ge 0$,
\begin{equation}
\label{equation:quant1}
Q\mathcal{S}(x^{i}) = \sum_{j=0}^{i} \left( {{i} \choose {j}}+ \sum_{k=0}^{\lfloor n/2 \rfloor + 1} {{n-k}\choose{k}}\cdot {{i-(n+1-k)} \choose {j-k}} \right) x^{i+j} h^{2(i-j)},
\end{equation}
where $x^{i+j}$ is the $(i+j)$-th quantum power of $x$.
\end{thm}
Observe that if $i+j \ge n$ then $x^{i+j} = x^{i+j-n} T$, as this is the quantum power.
Recall that $Q\mathcal{S}(x^i) = Sq(x^i) + T(...)$ where by Example \ref{exmpl:classcpn}: $$Sq(x^{i}) = \sum_{j=0}^{n-i} {{i} \choose {j}} x^{i+j} h^{2i-2j}.$$
\begin{figure}
\input{qWcpn.pdf_t}
\caption{Configurations for $q(W)(x^{i},x)$ for $\mathbb{CP}^{n}$}
\label{fig:qWcpn}
\end{figure}
\begin{proof}[Proof of Theorem \ref{thm:SqQcpn}]
Since $T = x^{n+1}$, we can express the square as:
$$
\begin{array}{rcl}
Q\mathcal{S}(x^{i}) & = & \sum_{j=0}^{i} l^{i}_{j} x^{i+j} h^{2i-2j}
\\[2em]
& = &
l_0^i x^i h^{2i} + l_1^i x^{i+1} h^{2i-2} + \cdots
+ l_i^{i} x^{2i} h^{0}
\end{array}
$$
for some $l^i_j \in \mathbb{Z}/2$.
By the quantum Cartan relation (Theorem \ref{thm:quancar}) and Lemma \ref{lem:qWcpn}, the coefficients $l^i_j$ satisfy $l^{i+1}_{j} = l^{i}_{j} + l^{i}_{j-1}$ for $j \neq n-i$ and $l^{i+1}_{n-i} = l^{i}_{n-i-1} + l^i_{n-i} + {{i}\choose{n-i}}$ (the latter term arises from the quantum correction). Using Pascal's triangle and the iterative formula for the $l^{i}_{j}$, one can write down the closed form solution.
\end{proof}
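To illustrate the recursion (a check against the quantum Cartan computation, not part of the proof): take $\mathbb{CP}^{1}$, so $n=1$ and $T = x^{2}$. From $Q\mathcal{S}(x) = xh^{2} + T$ we read off $l^{1}_{0} = l^{1}_{1} = 1$. The quantum correction enters at $j = n-i = 0$, so, with the convention that $l^{i}_{j} = 0$ for $j < 0$ or $j > i$, working modulo $2$:
$$l^{2}_{0} = l^{1}_{0} + l^{1}_{-1} + {{1}\choose{0}} = 0, \qquad l^{2}_{1} = l^{1}_{1} + l^{1}_{0} = 0, \qquad l^{2}_{2} = l^{1}_{2} + l^{1}_{1} = 1.$$
Hence $Q\mathcal{S}(x^{2}) = x^{4} = T^{2}$, in agreement with the explicit $\mathbb{CP}^{1}$ computation below.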
In particular, truncating the sum in equation \eqref{equation:quant1} to $j \le n-i$ recovers the classical Steenrod square formula for $\mathbb{CP}^{n}$ from Example \ref{exmpl:classcpn}. This is because if $j \le n-i$ then every term in the second summation in Equation \eqref{equation:quant1} vanishes because either
\begin{itemize}
\item $j-k < 0$
\item or $i-(n+1-k) < 0$
\end{itemize}
To see that this is true, observe that if $j \ge k$ then $$i-(n+1-k) = k+i-n-1 \le j+i-n-1.$$ Then as $j \le n-i$, we see that $j+i-n-1 \le -1 < 0$
Explicit examples:
\begin{enumerate}
\item[$\mathbb{CP}^1$]:
$q(W)(x,x) = {{1}\choose{1-1}}T h^{4+2-2}$, \\
$Q\mathcal{S}(x) = xh^{2} + T$, \\
$Q\mathcal{S}(T) = (x h^2 + T)^2 + {\bf Th^4} = T^2$.
\item[$\mathbb{CP}^2$]:
$q(W)(x,x) = {{1}\choose{2-1}}T h^{4+2-4}$, \\
$Q\mathcal{S}(x) = xh^{2} + x^{2}$, \\
$Q\mathcal{S}(x^{2}) = (xh^{2} + x^{2})^2 + {\bf Th^2} = x^{2}h^{4} + Th^{2} + xT$.
\item[$\mathbb{CP}^3$]:
$q(W)(x,x) = {{1}\choose{3-1}}T h^{4+2-6} = 0$ and $q(W)(x^2,x) = {{2}\choose{3-2}}T h^{8+2-6} = 0$, \\
$Q\mathcal{S}(x) = xh^{2} + x^{2}$, \\
$Q\mathcal{S}(x^{2}) = (xh^{2} + x^{2})^2 = x^{2}h^{4} + T$, \\
$Q\mathcal{S}(x^{3}) = (xh^{2} + x^{2})(x^{2}h^{4} + T) = x^{3}h^{6} + Th^{4} + xTh^{2} + x^{2}T$.
\end{enumerate}
\begin{rmk}
Observe that, after one appeals to dimension reasons to rule out the other cases, the proof of Theorem \ref{thm:SqQcpn} only uses $GW(\mathbb{CP}^{n-1},\{pt \}, \{ pt' \} )$.
\end{rmk}
\subsection{Fano Toric Varieties}
\label{subsec:gentorvar}
Let $M$ be a compact monotone toric manifold, with $b \in H^{|b|}(M)$ and $x \in H^{2}(M)$, and let $X = PD(x)$. Then analogously to Theorem \ref{thm:SqQcpn}, one proves Theorem \ref{thm:SqQtoric}.
\begin{proof}[Proof of Theorem \ref{thm:SqQtoric}]
Consider configurations as in Figure \ref{fig:qTORIC}, which we henceforth call {\it setups}. These are the configurations that, when counted, yield the coefficient of $c T^{c_1(\mu)} h^{i+2}$ in $q(W)(b,x_p)$. Henceforth we fix the dimension of the equivariant parameter space, $i \in \mathbb{Z}_{\ge 0}$, corresponding to $S^{i} \subset S^{\infty}$, and some $\mu \in H_2(M, \mathbb{Z})$ such that the $J$-holomorphic curves we consider represent $\mu$. We also fix $x \in H^2(M)$ and $b \in H^*(M)$, as in the statement of the theorem. We make choices of $X_v, B_v$ for $v \in S^{\infty}$, with the usual conditions for the input cycles used in $q(W)(b,x_p)$. Given a test output cycle $c \in H^*(M)$, we pick an embedded submanifold representing $PD(c^{\vee})$.
We will describe configurations related to Figure \ref{fig:qTORIC}, which we call {\it reduced setups}, obtained by neglecting the intersection with $X_v$ and the marked point $z_4$ corresponding to it. The dimension count is unchanged: removing $z_4$ ``removes $2$ dimensions'' and removing the intersection with $X_v$ ``adds $2$ dimensions''.
A ``reduced setup'' is a pair $(v,u_{red})$, where $v \in S^{i}$ and $u_{red}: S^2 \vee_{1 \sim 0} S^2 \rightarrow M$ is a $J$-holomorphic map such that $(u_{red})_*[S^2 \vee S^2] = \mu$. The map $u_{red}$ satisfies: $$u_{red}(0) \in PD(c^{\vee}), u_{red}(\infty) \in X_{-v}, u_{red}(1) \in B_v, u_{red}(\infty) \in B_{-v}.$$ Note that given a setup, we may obtain a reduced setup by forgetting the point $z_4$ (and the associated intersection condition). Observe that the spaces of setups and of reduced setups have the same dimension, $|c| + i + 2c_1(\mu) - 2|x| - 2|b|$.
We would like to prove that for a generic choice of $\{ X_v \}$, if $(v, u_{red})$ is a reduced setup, then for every $p \in S^2$ such that $u_{red}(p) \in X_v$:
\begin{itemize}
\item $u_{red}$ and $X_v$ intersect transversely in $M$ at $u_{red}(p)$, and
\item $p$ is an injective point of $u_{red}$.
\end{itemize}
Observe that if we are in the situation where the set of reduced setups is $0$-dimensional, i.e. $|c| + i + 2c_1(\mu) - 2|x| - 2|b|=0$, then we may assume that no intersections occur for $v \in \partial D^{i,+} = S^{i-1}$. Further, counting reduced setups with $v \in S^i$ and then quotienting by the free $\mathbb{Z}/2$-action of Equation \eqref{equation:actiononmoduli} is identical to simply restricting to $v \in \mathring{D}^{i,+}$ (without taking a quotient). With this in mind, we may freely perturb our choice of $X_v$ for $v \in \mathring{D}^{i,+}$, without changing the reduced setups. We make sure that the perturbation is sufficiently small that the moduli spaces of setups remain transverse. Then the argument becomes a classical argument that a generic perturbation of the embedded submanifold/pseudocycle $X_v$ will be transverse to $u_{red}$, and \cite[Proposition 1.3.1]{jhols} implies that the set of injective points of a simple curve $u_{red}$ is open and dense: hence, generically each intersection occurs at an injective point of $u_{red}$.
Now suppose that we are given a reduced setup $(v, u_{red})$. Then there are $\# (X_v \bullet \mu)$ (modulo $2$) setups corresponding to it. Observe that the actual number of corresponding setups is $\# (X_v \cap \mu)$, where $\cap$ is the absolute number of intersection points counted without signs. Generally such a count is not preserved under changes of representatives of $X_v$ and $\mu$, but one immediately sees that $\# (X_v \bullet \mu) = \# (X_v \cap \mu)$ for transversely intersecting pseudocycles in characteristic $2$. This choice of $\# (X_v \bullet \mu)$ setups corresponds to a choice of the marked point $z_4$ on the domain, which we know bijects with a choice of intersection point of $X_v$ and $\text{Im}(u_{red})$ (as each such intersection occurs at an injective point).
In fact, setups and reduced setups are in a $1$ to $\#( X \bullet \mu)$ correspondence (recalling that $X = PD(x)$). This holds because one may pick $X_v$ such that every $X_v$ is a normal perturbation in a $C^2$-small tubular neighbourhood of some fixed submanifold representative $\euscr{X}$ of $X$ (argue likewise for a pseudocycle representative). This is then bordant to having chosen $X_v = \euscr{X}$ for all $v$, by deformation retracting the tubular neighbourhood to $\euscr{X}$. Hence $\# (X_v \bullet \mu) = \#( X \bullet \mu)$.
It is now sufficient to prove that reduced setups count $$\sum_{2i=0}^{|b|} \sum_{j \ge 1} \sum_{k=1}^{j} \sum_{\mu \in H_2(M) : E(\mu) = k} \left( Q\mathcal{S}_{2i,j-k}(b) *_{\mu, k} x \right) \cdot h^{|b|-2i+2} T^{j}.$$ However, considering reduced setups alone one may choose $X_{-v}$ to be independent of $v \in D^{i,+}$ (again, choose a deformation retraction of a tubular neighbourhood to its core $\euscr{X}$). The result follows immediately from the definitions of $Q\mathcal{S}$ and the quantum product.
\end{proof}
\begin{figure}
\input{qWcpnTORIC.pdf_t}
\caption{Configurations for $q(W)(b,x_{p})$ for toric varieties, where $x$ and $b$ are the inputs and $c$ is the output. Here we are using the notation where $X_v$ and $B_v$ are embedded submanifolds, as in Remark \ref{rmk:embeddedsubs}.}
\label{fig:qTORIC}
\end{figure}
As the cohomology of a toric variety $M$ is generated by $H^{2}(M)$, iterated application of \eqref{equation:SqQtoric} yields a general solution, i.e. one can calculate $Q\mathcal{S}(x_{p_{1}} x_{p_{2}} \cdots x_{p_{r}})$ assuming the base cases $Q\mathcal{S}(x_{p_{i}})$ for a basis $\{ x_{p} \}$ of $H^2(M)$ are known. Using a combination of degree reasons and Remark \ref{rmk:propertiesqs}, $Q\mathcal{S} (x_p) = x_p * x_p + x_p \cdot h^2$.
\begin{proof}[Proof of Corollary \ref{corollary:fanotoricdecided}]
We induct on degree. The base case is for $|x|=2$, and we know from above that $Q\mathcal{S}(x)= xh^2 + x * x$ is determined by $QH^*(M)$. Given $a \in H^{*}(M)$ with $* > 2$, write $a = b * x$ for $x \in H^{2}(M)$. By Theorem \ref{thm:quancar}, we have $Q\mathcal{S} (a) = Q\mathcal{S} (b) * Q\mathcal{S} (x) + q(W)(b,x)$. By induction $Q\mathcal{S} (b)$ and $Q\mathcal{S} (x)$ are determined by $QH^* (M)$, hence so is $Q\mathcal{S} (b) * Q\mathcal{S} (x)$. By Theorem \ref{thm:SqQtoric}, $q(W)(b,x)$ is determined by $QH^* (M)$ (observing that $\# (X \bullet \mu)$ is determined from singular cohomology).
\end{proof}
Let $\beta: QH^*(M) \rightarrow QH^*(M)$ be a ring homomorphism satisfying $\beta(T) = T$. In the notation of Theorem \ref{thm:SqQtoric}, we deduce that for $a,b \in QH^*(M)$, $$\beta(a *_{0,0} b ) = \beta(a) *_{0,0} \beta(b).$$ This is because $\mu =0$ is the only possible element of $H_2(M, \mathbb{Z})$ of Chern number $0$ when $M$ is monotone. Indeed, Theorem \ref{thm:SqQtoric} simplifies to state that if $|x| = 2$, then \begin{equation} \label{equation:toriccartanagain} Q \mathcal{S}( b * x) = Q \mathcal{S}(b) * Q \mathcal{S}(x) + (Q \mathcal{S}(b)* x - Q \mathcal{S}(b)*_{0,0} x). \end{equation} Thus, any ring homomorphism $\beta$ with the given constraint is compatible with the quantum Steenrod square.
\begin{exmpl}[$\mathbb{CP}^{1} \times \mathbb{CP}^{1}$]
We let $x,y$ be the generators of $H^2(\mathbb{CP}^1 \times \mathbb{CP}^1)$, with $PD(x) = [\{ pt \} \times \mathbb{CP}^1]$ and $PD(y) = [\mathbb{CP}^1 \times \{ pt' \}]$. Here $q(W_{0} \times D^{i-2,+})(x,y) = 0$, hence $Q\mathcal{S}(x) * Q\mathcal{S}(y) = Q\mathcal{S}(x*y)$. Indeed by equation \eqref{equation:SqQtoric},
$$q(W_{0} \times D^{i-2,+})(x,y) =\sum_{2i=0}^{2} \sum_{j \ge 1} \sum_{k=1}^{j} \sum_{\mu \in H_2(M) : E(\mu) = k} \left( Q\mathcal{S}_{2i,j-k}(x) *_{\mu, k} y \right) \cdot h^{4-2i} T^{j},$$ recalling that $x *_{\mu, k} y$ is the coefficient of $T^{k}$ in the quantum product $x * y$, using spheres representing $\mu$. Working from definitions, $Q\mathcal{S} (x) = x h^{2} + T$. Then $\alpha *_{k} y \neq 0 \implies k = 1, \alpha = y$. There are no $i,j,k$ such that $Q\mathcal{S}_{2i,j-k}(x) = y$. Hence the sum on the right hand side is $0$.
\end{exmpl}
\section{The Quantum Adem Relations}
\label{sec:QAR}
\subsection{Classical Adem Relations}
\label{subsec:classadem}
We begin with a discussion of the group cohomology of $S_4$ and $D_8$. This will involve adding details to the argument alluded to by Cohen-Norbury to prove the classical Adem relations in \cite[Section 5.2]{cohnor}.
It is proved in \cite[Sections IV.1, VI.1]{ademmilgram} that \begin{equation} \label{equation:HBD8} H^{*}(BD_{8}) = \mathbb{Z}/{2}[e,\sigma_{1},\sigma_{2}]/(e \sigma_{1}) \end{equation} where $e, \sigma_{1}$ are of degree 1 and $\sigma_{2}$ is of degree 2, and \begin{equation} \label{equation:HBS4} H^{*}(BS_{4}) = \mathbb{Z}/{2}[n_{1},n_{2},c_{3}]/(n_{1} c_{3}), \end{equation} where again subscripts denote the degree of the elements. Considering $$D_{8} = \langle (12),(34),(13)(24) \rangle \subset S_{4},$$ there are subgroups $$\mathbb{Z}/2 = \langle (13)(24) \rangle, \qquad \mathbb{Z}/2 \times \mathbb{Z}/2 = \langle (12),(34) \rangle.$$ Then $$H^{*}(B \mathbb{Z}/2) = \mathbb{Z}/{2}[e], \qquad H^{*}(B (\mathbb{Z}/2 \times \mathbb{Z}/2)) = \mathbb{Z}/{2}[x,y].$$ Consider the commutative diagram \eqref{commutativediagramofgroups} induced by the various inclusion maps of groups. As in \cite{ademmilgram}, one shows that:
\begin{equation}\label{commutativediagramofgroups}
\xymatrix{
H^*(B \mathbb{Z}/2)
\\
H^*(BD_8)
\ar@{->}^-{i_1}[u]
\ar@{->}_-{i_2}[d]
&
H^*(BS_4)
\ar@{->}^-{j_1}[ul]
\ar@{->}^-{j_2}[dl]
\ar@{->}^-{\pi^*}[l]
\\
H^*(B(\mathbb{Z}/2 \times \mathbb{Z}/2))
}
\end{equation}
\begin{tabular}{l }
$i_{1}(e) = e$\\
$i_{2}(\sigma_{1}) = x+y, \quad i_{2}(\sigma_{2})=xy$\\
$j_{1}(n_{2}) = e^{2}$\\
$j_{2}(n_{1}) = x+y, \quad j_{2}(n_{2}) = xy$\\
\end{tabular}
All of the other generators map to $0$ via the $i,j$ maps. From this, and the fact that $\pi^*$ is injective, we deduce that $$\pi^{*}(n_{1}) = \sigma_{1} \qquad \pi^{*}(n_{2}) = \sigma_{2} + e^{2} \qquad \pi^{*}(c_{3}) = e \sigma_{2}.$$
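For instance, for $n_{1}$: since $|n_{1}| = 1$ we may write $\pi^{*}(n_{1}) = a e + b \sigma_{1}$ with $a,b \in \mathbb{Z}/2$, and commutativity of Diagram \eqref{commutativediagramofgroups} gives $$a e = i_{1}(\pi^{*}(n_{1})) = j_{1}(n_{1}) = 0 \qquad \text{and} \qquad b(x+y) = i_{2}(\pi^{*}(n_{1})) = j_{2}(n_{1}) = x+y,$$ so $\pi^{*}(n_{1}) = \sigma_{1}$. The value of $\pi^{*}(n_{2})$ is deduced in the same way in degree $2$, while for $c_{3}$ both $j_{1}(c_{3})$ and $j_{2}(c_{3})$ vanish, so $\pi^{*}(c_{3}) \in \ker i_{1} \cap \ker i_{2}$, which in degree $3$ is spanned by $e \sigma_{2}$; injectivity of $\pi^{*}$ then forces $\pi^{*}(c_{3}) = e \sigma_{2}$.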
By Cohen-Norbury, \cite{cohnor}, there is a commutative diagram, namely diagram \eqref{classicalademdiagram}, where $qq$ satisfies $$qq(\alpha) = \sum_{p,q} Sq^{q} \circ Sq^{p}(\alpha) e^{|\alpha| + p - q} \sigma_{2}^{|\alpha| - p}.$$
\begin{equation}\label{classicalademdiagram}
\xymatrix{
H^*(M)
\ar@{->}^-{q_{S_{4}}}[r]
\ar@{->}_-{=}[d]
&
H^*(M) \otimes H^*(BS_4)
\ar@{->}^-{id_{H^*(M)} \otimes \pi^*}[d]
\\
H^*(M)
\ar@{->}^-{qq}[r]
&
H^*(M) \otimes H^*(BD_8)
}
\end{equation}
We do not in general know a closed form definition of $q_{S_4}$ in terms of compositions of Steenrod squares, but in fact we do not need to: the Adem relation is a purely algebraic relation, only using the fact that $qq$ lifts to a function $q_{S_4}$ (and not any information about the homomorphism itself). For a definition of $q_{S_4}$ in Diagram \eqref{classicalademdiagram}, use the $T^0$-component from Definition \ref{defn:qs4}.
\begin{Fact}
\label{Fact:commuteAdem}
By Theorem 19 (Invariance) in \cite{cohnor}, the diagram \eqref{classicalademdiagram} commutes. This implies that the image of $qq$ lies in the image of $id_{H^{*}(M)} \otimes \pi^{*}$. Hence there are constraints on the image. Specifically, $e^{2i} \sigma_{2}^{j}$ may only appear in $qq(\alpha)$ if it arises from some $(e \sigma_{2})^{2k} (e^{2} + \sigma_{2})^{i+j-3k}$ for $k=0,1,\ldots$, with coefficient ${{i+j-3k}\choose{i-k}}$. This is a special case of Lemma \ref{lemma:quantumademdiagram}.
\end{Fact}
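As a low-degree illustration of this constraint: for $i+j = 2$ and $k=0$, working modulo $2$, $$(e^{2} + \sigma_{2})^{2} = e^{4} + \sigma_{2}^{2},$$ so the coefficient of $e^{2i} \sigma_{2}^{j}$ is ${{2}\choose{i}}$: the terms $e^{4}$ and $\sigma_{2}^{2}$ appear with coefficient ${{2}\choose{2}} = {{2}\choose{0}} = 1$, while the cross term $e^{2}\sigma_{2}$ has coefficient ${{2}\choose{1}} \equiv 0$.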
\begin{lemma}
\label{lem:bincoeff}
For any $s,m$,
\begin{equation}
\label{equation:combinatorial}
{{3s+m}\choose{s+m}} = \sum_{l=0}^{\infty} {{m+l-1}\choose{2l}} {{3s+m}\choose{s-l}}
\end{equation}
modulo 2.
\end{lemma}
\begin{proof}
We prove this by induction. Let $c(m,s) = {{m+3s}\choose{m+s}}$. Then modulo 2, $$c(m+2,s) = c(m,s) + c(m+3,s-1).$$
Define $S(m,s) = \sum_{l} {{m-1+l}\choose{2l}}{{3s+m}\choose{s-l}}$. Check that $S(m,s) = c(m,s)$ for $s=0,1$ and $m=1,2$. These are the base cases. Hence if $S(m+2,s) = S(m,s) + S(m+3,s-1)$ for all $m,s$ then the lemma holds by induction. This is an exercise in binomial coefficient algebra modulo $2$.
\end{proof}
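For instance, one of the base-case checks, working modulo $2$: for $m=2$ and $s=1$, $$c(2,1) = {{5}\choose{3}} = 10 \equiv 0 \qquad \text{and} \qquad S(2,1) = {{1}\choose{0}}{{5}\choose{1}} + {{2}\choose{2}}{{5}\choose{0}} = 5 + 1 \equiv 0,$$ where the terms with $l \ge 2$ vanish since ${{5}\choose{1-l}} = 0$, so indeed $S(2,1) = c(2,1)$.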
\begin{thm}[Classical Adem Relations]
\label{thm:car}
Given $\alpha \in H^{*}(M)$ and $q,p>0$ such that $q<2p$,
\begin{equation} \label{equation:classicalademrelations} Sq^{q}Sq^{p}(\alpha) = \sum_{k=0}^{[q/2]} {{p-k-1}\choose{q-2k}}Sq^{p+q-k} Sq^{k}(\alpha).\end{equation}
\end{thm}
\begin{proof}
Suppose $q$ is even. Let $l = |\alpha|-p$, $m=p-q/2$, $n=q/2-k$, thus $$ {{p-k-1}\choose{q-2k}} = {{m+n-1}\choose{2n}}.$$ Assume $l=2r$. The cases for $q$ or $l$ odd are proven identically, except for slight modifications in the substitutions and the exponents of the labelled equations. Throughout, for $E \in H^*(B D_8)$, let $\text{cff}(E)$ be the coefficient of $E$ in $qq(\alpha)$. By definition of $qq$: $$Sq^{q} \circ Sq^{p} (\alpha)= \text{cff}(e^{l+2m} \sigma_{2}^{l}) \qquad \textrm{and} \qquad Sq^{p+q-k} \circ Sq^k (\alpha) = \text{cff}(e^{l-2n} \sigma_{2}^{l+m+n}).$$ By Fact \ref{Fact:commuteAdem} (which also ensures that the right hand sides of the following two equations are well defined), \begin{equation} \label{equation:cff1} \text{cff}(e^{l+2m} \sigma_{2}^{l})= \sum_{i=0}^{r} {{3r+m-3i}\choose{r+m-i}} \cdot \text{cff}((e \sigma_{2})^{2i} (e^{2} + \sigma_{2})^{3r+m-3i})\end{equation} and \begin{equation} \label{equation:cff2} \text{cff}(e^{l-2n} \sigma_{2}^{l+m+n}) = \sum_{i=0}^{r} {{3r+m-3i}\choose{r-n-i}} \cdot \text{cff}((e \sigma_{2})^{2i} (e^{2} + \sigma_{2})^{3r+m-3i}).\end{equation}
The claim now follows since by Lemma \ref{lem:bincoeff}, $${{3r+m-3i}\choose{r+m-i}} = \sum_{n=0}^{\infty} {{m+n-1}\choose{2n}}{{3r+m-3i}\choose{r-n-i}}.$$ Substitute this into Equation \eqref{equation:cff1}, swap the summation, and then substitute Equation \eqref{equation:cff2}. This yields Equation \eqref{equation:classicalademrelations}, after substituting back for $p,q$ and $k$.
The terms with $n > q/2$ will not appear in the final statement because $n > q/2$ implies $k < 0$, and $Sq^k(\alpha) = 0$ for $k<0$.
\end{proof}
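As a quick illustration of Equation \eqref{equation:classicalademrelations}: taking $p=2$ and $q=1$ (so $q < 2p$), the sum on the right hand side has only the $k=0$ term, and $$Sq^{1}Sq^{2}(\alpha) = {{1}\choose{1}} Sq^{3} Sq^{0}(\alpha) = Sq^{3}(\alpha),$$ recovering the familiar relation $Sq^{1}Sq^{2} = Sq^{3}$.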
\subsection{Quantum Adem Relations}
\label{subsec:QAR}
In this section we will denote by $\mathcal{B}$ some basis of $H^*(BS_4)$, by $\hat{\mathcal{B}}$ some basis of $H^*(BD_8)$ and by $\mathcal{B}_M$ some basis of $H^*(M)$.
Recall that in Definition \ref{defn:qopn}, for $W \in H_{*}(\overline{M}_{0,5} \times_{\mathbb{Z}/2} S^i)$ for some $i$, we defined additive homomorphisms $$q_{i,j}(W): QH^a(M) \otimes QH^b(M) \rightarrow QH^{2a+2b-i-2jN}(M).$$ We will define a similar construction of operators that are parametrised by $H_{*}^{D_{8}} (\overline{M}_{0,5})$ and $H_{*}^{S_{4}} (\overline{M}_{0,5})$, where $$D_{8} = \langle (12),(34),(13)(24) \rangle \subset S_{4}$$ acts by permutations on the indices of $[z_{0},z_{1},z_{2},z_{3},z_{4}] \in \mathcal{M}_{0,5}$. We will abbreviate $P^{p,q,r}_{D_8} = \overline{M}_{0,5} \times_{D_8} ES_4^{p,q,r}$ and $P^{p,q,r}_{S_4} = \overline{M}_{0,5} \times_{S_4} ES_4^{p,q,r}$, recalling the constructions in Appendix \ref{sec:ed8es4}, where we expressed $ES_4$ as the union of a countable nested sequence of smooth closed manifolds, $ES_4^{p,q,r}$ of respective dimension $2p-1+3q+6r$.
We note that for any $M$ with $H^*(M)$ finitely generated in all degrees, there is a map $$\Psi: H^*(M) \rightarrow H_*(M),$$ along with its inverse, also denoted $\Psi: H_*(M) \rightarrow H^*(M)$. This is an isomorphism via universal coefficients (as usual working over $\mathbb{Z}/2$): explicitly, one picks a dual basis under the pairing $\langle \alpha, a \rangle \mapsto \alpha (a)$ given by evaluation of cocycles. For brevity we denote $P_{D_8} = \overline{M}_{0,5} \times_{D_8} ES_4$ and $P_{S_4} = \overline{M}_{0,5} \times_{S_4} ES_4$. The homology groups of $P_{D_8}$ and $P_{S_4}$ satisfy this finite generation condition: this is due to the Cartan-Leray spectral sequence.
Pick a pseudocycle representative $\zeta^{\vee}: Z^{\vee} \rightarrow M$ for each $z^{\vee} \in \mathcal{B}_M^{\vee}$. For $\alpha \in H^*(M)$, choose pseudocycles $i_v: A_v \rightarrow M$ for $v \in ES_{4}$ (where $A_v = A \times \{v \} \subset A \times ES_4^{p,q,r}$ for some sufficiently large $p,q,r$). We do this such that $i_v A_v$ is a weak representative of $PD(\alpha)$ for each $v$, by which we mean that $i_v A_v \bullet X = PD(\alpha) \bullet X$ for all $X \in H_*(M)$, where $\bullet$ is the intersection number. We choose $i_v$ with invariance and genericity conditions as follows:
\begin{enumerate}
\item $A_v = A_{(23) \cdot v} = A_{(24) \cdot v}$ and $i_v = i_{(23) \cdot v} = i_{(24) \cdot v}$ for all $v \in ES_{4}$.
\item Let $\mathcal{M}_{0,5}(J,j)$ be the space of $J$-holomorphic maps of Chern number $jN$ from $S^2$ to $M$ with $5$ marked points. Let $\overline{\mathcal{M}_{0,5}(J,j)}$ be its compactification with stable nodal maps. Then the $i_v$ must be chosen sufficiently generically so that the intersection of the $S_4$-equivariant pseudocycles in Equations \eqref{pseudoaaa1} and \eqref{pseudoaaa2} is transverse: \begin{equation} \label{pseudoaaa1} \begin{array}{l} ev: \overline{\mathcal{M}_{0,5}(J,j)} \times ES_4^{p,q,r} \rightarrow M \times M \times M \times M \times M \times ES_4^{p,q,r} \\ (u,v) \mapsto (u(z_0), u(z_1), u(z_2), u(z_3), u(z_4), v) \end{array} \end{equation} and \begin{equation} \label{pseudoaaa2} \begin{array}{l} Z^{\vee} \times A \times A \times A \times A \times ES_4^{p,q,r} \rightarrow M \times M \times M \times M \times M \times ES_4^{p,q,r} \\ (x,a_1, a_2, a_3,a_4, v) \mapsto (\zeta^{\vee}(x), i_v(a_1), i_{(12) \cdot v} (a_2), i_{(13) \cdot v} (a_3), i_{(14) \cdot v} (a_4), v). \end{array} \end{equation}
\end{enumerate}
Observe that we may restrict to the special case of Morse theory, as we have done throughout this paper. Specifically we choose $f_{v,s}$ for $v \in ES_4$ and $s \in [0,\infty)$. We do this such that $f_{v,s} = f$ for $s \gg 0$, and $f_{(23) \cdot v, s} = f_{(24) \cdot v, s} = f_{v, s} $ for all $v,s$, and we replace the incidence condition with $i_{(1p) \cdot v} (a_p)$ by incidence with a $-\nabla f_{(1 p) \cdot v,s}$-flowline asymptotic to a critical point $\alpha$.
\begin{defn}
\label{defn:s4operators}
Let $\alpha \in \text{crit}(f)$. For $d \in H^{*}_{S_{4}}(\overline{M}_{0,5})$, we pick a pseudocycle representative $\delta: D \rightarrow P^{p,q,r}_{S_4}$ of $\Psi (d) \in H_* (P^{p,q,r}_{S_4})$ (for some sufficiently large $p,q,r$). Then $$q_{S_4}(D)(\alpha) := \sum_{z \in \text{crit}(f), \ j \ge 0} n_{z, \alpha, j} \cdot z \cdot T^j,$$ where $n_{z, \alpha,j}$ counts the number of $S_4$-equivalence classes of triples $(m,u,v)$ with $[m,v] \in \delta(D) \subset P_{S_4}$ and $u: m \rightarrow M$ a $J$-holomorphic curve of energy $j$, along with $$u_0: (-\infty,0] \rightarrow M, \ u_p: [0,\infty) \rightarrow M,$$ for $p=1,2,3,4$, such that $$\begin{array}{l} \partial_t u_0(s) = -\nabla f(u_0(s)) , \ \partial_t u_p(s) = -\nabla f_{(1 p) \cdot v}(u_p(s)), \\ u(z_p) = u_p(0), \ u_0(-\infty) = z, \text{ and } u_p(\infty) = \alpha. \end{array}$$ On cohomology the operation will be independent of the representative $\delta$ of $\Psi (d)$, by the same proof as Lemma \ref{lemma:additiveandindep} (i.e. we express our coefficients as the intersections of pseudocycles, and bordant pseudocycles give the same intersection number).
\end{defn}
The definition of $q_{D_8}(D)$ is identical to Definition \ref{defn:s4operators}, replacing everywhere $S_4$ by $D_8$ (note specifically that this definition uses $BD_8 = ES_4 / D_8$ as its parameter space).
Henceforth, we will restrict to the subalgebra $H^*(BS_4)_{red}$ of $H^*(BS_4)$ generated by $n_2$ and $c_3$, and similarly the subalgebra $H^*(BD_8)_{red}$ of $H^*(BD_8)$ generated by $e$ and $\sigma_2$. The map $\pi^*: H^*(BS_4)_{red} \rightarrow H^*(BD_8)_{red}$ is well defined and injective because of Diagram \eqref{commutativediagramofgroups}. Indeed, the only difference to using $H^*(BS_4)$ and $H^*(BD_8)$ is that we forget all additive generators that include monomials with some nontrivial $n_1$ and $\sigma_1$ exponent respectively.
As in the case of the quantum Cartan relation, we would like to consider cycles in $H_*(P_{S_4})$ parametrised by some basis $\mathcal{B}$ of $H^*(BS_4)_{red}$. Compare this to the proof of the quantum Cartan relations, where the classes $[\{ m_1 \} \times D^{i,+}] \in H_*( P_{\mathbb{Z}/2})$ were parametrised by $[D^{i,+}] \in H_*(B \mathbb{Z}/2) = H_* (\mathbb{RP}^{\infty})$. Further, we will show later that \begin{equation} \label{equation:quantumsquarecomposition} Q\mathcal{S} \circ Q\mathcal{S} (\alpha) = \sum_{i,j} q_{D_8} (\{ m_1 \} \otimes \Psi (e^i \sigma_2^j) )(\alpha) \cdot e^i \sigma_2^j. \end{equation}
Hence, ideally we would like the chains represented by $\{ \{ m_1 \} \otimes B \}$ to be elements of $H_*(P_{S_4})$, for $B \in \mathcal{B}$. This will not work because $m_1$ is not $S_4$-invariant. However, the cycle $m_1 + g m_1 + g^2 m_1 \in H_*(\overline{M}_{0,5})$ is $S_4$ invariant, where $g = (123)$ generates the cosets of $D_8$ in $S_4$ (note that $g m_1 = m_2$).
\begin{defn}
\label{defn:qs4}
Given a basis $\mathcal{B}$ of $H^*(BS_4)_{red}$, define: $$q_{S_4}: H^*(M) \rightarrow QH^*(M) \otimes H^*(BS_4)_{red},$$ $$q_{S_4}(\alpha) = \sum_{b \in \mathcal{B}} q_{S_4} ((m_1 + g m_1 + g^2 m_1) \otimes \Psi (b)) (\alpha) \cdot b.$$
\end{defn}
\begin{defn}
Given a basis $\tilde{\mathcal{B}}$ of $H^*(BD_8)_{red}$, define: $$qq: H^*(M) \rightarrow QH^*(M) \otimes H^*(BD_8)_{red},$$ $$qq(\alpha) := \sum_{\tilde{b} \in \tilde{\mathcal{B}}} q_{D_8}((m_1 + g m_1 + g^2 m_1) \otimes \Psi (\tilde{b})) (\alpha) \cdot \tilde{b}.$$
\end{defn}
We fix some additive basis $\mathcal{B}$ for $H^{*}(BS_{4})_{red}$, of the form $\{ n_2^a c_3^q \}$ (with notation as in Equation \eqref{equation:HBS4}). Recall from Diagram \eqref{classicalademdiagram} there is $\pi_* : H_*(BD_8)_{red} \rightarrow H_*(BS_4)_{red}$ and $\pi^* : H^*(BS_4)_{red} \rightarrow H^*(BD_8)_{red}$, which are induced by the continuous quotient map $$\pi: ES_4 / D_8 \rightarrow ES_4 / S_4.$$ We also define: $$i_* : H_*(BS_4)_{red} \rightarrow H_*(BD_8)_{red}, \qquad i_*(D) = D + gD + g^2 D$$ and $$i^* : H^* (BD_8)_{red} \rightarrow H^* (BS_4)_{red}, \qquad i^* (d) = d + g d + g^2 d.$$ Since $\pi \circ g = \pi$, both compositions $\pi_* \circ i_*$ and $i^* \circ \pi^*$ are multiplication by $3$; as we work over $\mathbb{Z}/2$ we see that $\pi_* \circ i_* = id$ and $i^* \circ \pi^* = id$, which also shows that $\pi^*$ is injective. As $\pi^*$ is injective, $\pi^* \mathcal{B}$ is linearly independent in $H^*(BD_8)_{red}$. We extend this to a basis $\hat{\mathcal{B}} = \pi^* \mathcal{B} \cup \mathcal{B}'$ of $H^*(BD_8)_{red}$.
\begin{lemma}
\label{lemma:adem1}
$$q_{S_4}((m_1 + g m_1 + g^2 m_1) \otimes \pi_* \Psi (b)) = q_{D_8}((m_1 + g m_1 + g^2 m_1) \otimes \Psi (b)).$$
\end{lemma}
\begin{proof}
Suppose that we pick some pseudocycle representative $f: X \rightarrow BD_8$ of $\Psi(b) \in H_*(BD_8)_{red}$ (or specifically, some stratum $BD_8^{p,q,r}$). To define a pseudocycle representative of $\pi_* \Psi(b) \in H_*(BS_4)_{red}$, we choose $\pi \circ f$. So in particular, there is a pseudocycle representative of $(m_1 + g m_1 + g^2 m_1) \otimes \Psi (b)$ of the form $$f': \{pt_1,pt_g,pt_{g^2} \} \times X \rightarrow P_{D_8}, \ f'(pt_a, x) = (a \cdot m_1, f(x)),$$ which we see descends to a $D_8$-equivariant pseudocycle, and similarly an $S_4$-equivariant pseudocycle: $$\pi \circ f': \{pt_1,pt_g,pt_{g^2} \} \times X \rightarrow P_{S_4}, \ \pi \circ f'(pt_a, x) = (a \cdot m_1,\pi \circ f(x)).$$
Let $z \in \text{crit}(f)$. Let $\overline{\mathcal{M}}(J,j)$ be a partial compactification of the space of genus $0$ stable $J$-holomorphic maps (i.e. excluding repeated or multiply covered components). Recall from Lemma \ref{lemma:additiveandindep} the means by which we determine the coefficient of $z$ in $q_{S_4}((m_1 + g m_1 + g^2 m_1) \otimes \pi_* \Psi (b))(x)$ as an intersection number. One defines a $5$-pointed Gromov-Witten invariant assigned to $\overline{\mathcal{M}}(J,j)$. Push this forwards along the map $$\mathcal{W}: \overline{\mathcal{M}}(J,j) \times ES_4^{p,q,r} \rightarrow M \times ((M \times M \times M \times M \times \overline{M}_{0,5}) \times_{S_4} ES_4^{p,q,r})$$ (for some $p,q,r$), which is induced by the evaluation map on the five marked points, the stabilisation map $\overline{\mathcal{M}}(J,j) \rightarrow \overline{M}_{0,5}$ and the identity on the $ES_4^{p,q,r}$ factor. This determines a cohomology class in $M \times ((M \times M \times M \times M \times \overline{M}_{0,5}) \times_{S_4} ES_4^{p,q,r})$. There is also a pseudocycle constructed using the evaluation maps on the partially compactified (un)stable manifolds $W^u(z,f)$, $W^s(x,f_{v},s)$, $W^s(x,f_{(12) \cdot v},s)$, $W^s(x,f_{(13) \cdot v},s)$, $W^s(x,f_{(14) \cdot v},s)$, alongside the map $\pi \circ f'$. The intersection of the image of the (equivariant) Gromov-Witten invariant with the pseudocycle provides the coefficient of $z$. A similar argument holds for $q_{D_8}$, this time using the map $f'$. Then $$M \times ((M \times M \times M \times M \times \overline{M}_{0,5}) \times_{D_8} ES_4^{p,q,r}) \rightarrow M \times ((M \times M \times M \times M \times \overline{M}_{0,5}) \times_{S_4} ES_4^{p,q,r})$$ is a $3$-to-$1$ covering, hence the coefficients of this intersection differ by multiplying the $S_4$-coefficient by three. As we work over $\mathbb{Z}/2$-coefficients, multiplication by three is the identity.
Ensuring transversality for pseudocycles in both the base and the cover simultaneously is not an issue: the property of an intersection being transverse is preserved under a $p$-fold smooth covering map (being a local diffeomorphism), so it suffices to ensure transversality on the cover, $M \times ((M \times M \times M \times M \times \overline{M}_{0,5}) \times_{D_8} ES_4^{p,q,r})$, which we know by Appendix \ref{subsec:quantumademrels}.
\end{proof}
\begin{lemma}
For $b' \in \mathcal{B}' := \hat{\mathcal{B}} - \pi^* \mathcal{B}$, $$q_{D_8} ((m_1 + g m_1 + g^2 m_1) \otimes \Psi (b')) = 0.$$
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:adem1}, $$q_{D_8} ((m_1 + g m_1 + g^2 m_1) \otimes \Psi (b')) = q_{S_4} ((m_1 + g m_1 + g^2 m_1) \otimes \pi_* \Psi (b')).$$ If $B' = \Psi (b')$ then for all $b \in \mathcal{B}$, $$\langle b, \pi_* \Psi(b') \rangle = \langle b, \pi_* B' \rangle = \langle \pi^* b, B' \rangle = \langle \pi^* b, \Psi(b') \rangle = 0$$ by definition of the dualising isomorphism $\Psi$. Hence $\pi_* \Psi (b') = 0$.
\end{proof}
This implies that \begin{equation} \label{equation:qq} qq(\alpha) = \sum_{\pi^* b \in \pi^* \mathcal{B}} q_{D_8}((m_1 + g m_1 + g^2 m_1) \otimes \Psi (\pi^* b)) (\alpha) \cdot \pi^* b. \end{equation}
\begin{lemma}
\label{lemma:quantumademdiagram}
The following diagram commutes:
\begin{equation} \label{quantumademdiagram}
\xymatrix{
H^*(M)
\ar@{->}^-{q_{S_{4}}}[r]
\ar@{->}_-{=}[d]
&
QH^*(M) \otimes H^*(BS_4)_{red}
\ar@{->}^-{id_{H^*(M)} \otimes \pi^*}[d]
\\
H^*(M)
\ar@{->}^-{qq}[r]
&
QH^*(M) \otimes H^*(BD_8)_{red}
}
\end{equation}
\end{lemma}
\begin{proof}
Observe that $$(id \otimes \pi^*) q_{S_4}(\alpha) = \sum_{\pi^* b \in \pi^* \mathcal{B}} q_{S_4} ((m_1 + g m_1 + g^2 m_1) \otimes \Psi (b)) (\alpha) \cdot \pi^* b.$$ Then $$qq (\alpha) = \sum_{\pi^* b \in \pi^* \mathcal{B}} q_{S_4}((m_1 + g m_1 + g^2 m_1) \otimes \pi_* \Psi (\pi^* b)) (\alpha) \cdot \pi^* b$$ using Equation \eqref{equation:qq} and Lemma \ref{lemma:adem1}. For $b \in \mathcal{B}$, let $D = \Psi (\pi^* b)$. Then $\langle \pi^* b, D \rangle = 1$ and $\langle \hat{b}, D \rangle = 0$ for all $\hat{b} \in \hat{\mathcal{B}} - \pi^* b$, specifically for $\hat{b} = \pi^* d$ with $d \in \mathcal{B} - b$. Hence $\langle d, \pi_* D \rangle = 0$ for $d \in \mathcal{B} - b$ and $\langle b, \pi_* D \rangle = 1$, so $\pi_* \Psi (\pi^* b) = \Psi (b)$ by definition of $\Psi$.
\end{proof}
We now pick a different basis $\tilde{\mathcal{B}}$ for $H^*(BD_8)_{red}$ (i.e. different from $\hat{\mathcal{B}}$) consisting of elements of the form $e^i \sigma_2^j$ (see Section \ref{subsec:classadem} to recall the notation). Let \begin{equation} \label{equation:qqijdef}qq_{i,j}(\alpha) := q_{D_8}((m_1 + g m_1 + g^2 m_1) \otimes \Psi (e^{i} \sigma_{2}^{j}))(\alpha), \end{equation} the coefficient of $e^{i} \sigma_{2}^{j}$ in $qq(\alpha)$.
\begin{proof}[Proof of Theorem \ref{thm:QAR}, The Adem Relations]
The theorem follows immediately by Lemma \ref{lemma:quantumademdiagram} and the combinatorial argument in Theorem \ref{thm:car}.
\end{proof}
We relate this to a composition of quantum Steenrod squares. First, instead of using $BD_8$ as our parameter space, we will use the spaces $E$ and $B = E / D_8$ as defined in Appendix \ref{subsec:ed8}. Recall from Appendix \ref{subsec:es4} that there is a map $\rho: ES_4 \rightarrow \mathbb{S}_2 = E$, where $E$ is an $ED_8$ (i.e. a contractible space with a free $D_8$-action).
Recall then that $E$ is stratified by finite dimensional $D_8$-invariant submanifolds $E^{i,j} = S^i \times (S^j \times S^j)$. Let $D^{i,+} \subset S^i$ be the upper hemisphere as usual. It is immediate that $D^{i,+} \times D^{j,+} \times D^{j,+} \subset E$ represents a closed cycle in $H_*(BD_8)$.
\begin{lemma}
\label{lemma:submandidjdj}
The submanifold $D^{i,+} \times D^{j,+} \times D^{j,+}$ represents $\Psi (e^i \sigma_2^j)$.
\end{lemma}
\begin{proof}
Consider the projections $$k_1: E \rightarrow S^{\infty}, \ (x,(x_1,x_2)) \mapsto x$$ and $$k_2: E \rightarrow S^{\infty} \times S^{\infty}, \ (x,(x_1,x_2)) \mapsto (x_1,x_2),$$ which are respectively $\mathbb{Z}/2 \cong \langle (13)(24) \rangle$- and $\mathbb{Z}/2 \times \mathbb{Z}/2 \cong \langle (12), (34) \rangle$-equivariant. Indeed, they induce respectively $\mathbb{Z}/2$- and $\mathbb{Z}/2 \times \mathbb{Z}/2$-equivariant homotopy equivalences for the same reason as the map at the end of Appendix \ref{subsec:es4}. We abusively denote by $k_1,k_2$ the maps after quotienting by the free $\mathbb{Z}/2$-action. Combining these with the quotient maps $$l_1: E /( \mathbb{Z}/2) \rightarrow B$$ and $$l_2: E / (\mathbb{Z}/2 \times \mathbb{Z}/2) \rightarrow B,$$ we obtain $i_p$ from Diagram \ref{commutativediagramofgroups} as the composition $i_p = l_p^* \circ (k_p^*)^{-1}$, for $p=1,2$.
If $j=0$ then observe that $i_1(e^i) = e^i$ using Diagram \ref{commutativediagramofgroups}. Notice that for the homogeneous choice of homology basis, $\Psi(e^i)$ in $B \mathbb{Z}/2$ is represented by $D^{i,+} \subset S^{\infty}$. Letting $(i_{p})_{*}: H_*(B \mathbb{Z}/2) \rightarrow H_*(B)$ be the pushforward, observe that $$\Psi(e^i) = (i_{1})_{*} \Psi (i_{1})^{*}(e^i) = (i_{1})_{*} \Psi(e^i) = (i_{1})_{*}([D^{i,+}]) = [D^{i,+} \times D^{0,+} \times D^{0,+}] \in H_*(B).$$ Hence the result holds for $j=0$. Similarly the result holds for $i=0$: using the homomorphism $i_2$ and replacing $e^i$ by $\sigma_2^j$ and $D^{i,+}$ by $D^{j,+} \times D^{j,+}$, we see that $\Psi(\sigma_2^j)$ is represented by $[D^{0,+} \times D^{j,+} \times D^{j,+}]$.
Observe, via the K\"unneth isomorphism, that elements of $H^*(B)$ may be represented as $D_8$-equivalence classes of cochains $x \otimes y \otimes z$, where $x \in C^*(S^{\infty})$. By the previous paragraph, we know that $e^i \sigma_2^j$ is represented by $x_i \otimes x_j \otimes x_j$, where $x_i$ is the indicator homomorphism for a simplex representing $D^{i,+}$. Then $\Psi([x_i \otimes x_j \otimes x_j]) = [D^{i,+} \times D^{j,+} \times D^{j,+}]$ as required.
\end{proof}
We reinterpret the operation $qq$ in terms of the parameter space $B = E / D_8$ (as opposed to $BD_8 = ES_4 / D_8$). Observe that the space $E$ does not have an action of any element of $S_4 - D_8$. Hence, in Definition \ref{defn:s4operators} we no longer ask for our Morse function $f_{v,s}$ to have invariance under $(23),(24)$ (as this is meaningless). Further, we choose the $f_{v,s}$ that we use for incidence conditions to be respectively $f_{v}, f_{(12) \cdot v}, f_{(13)(24) \cdot v}, f_{(14)(23) \cdot v}$. (Observe that when we used the parameter space $ES_4$, the invariance conditions imply that $f_{(13)(24) \cdot v} = f_{(13) \cdot v}$ and $f_{(14)(23) \cdot v} = f_{(14) \cdot v}$).
For the following proof, we fix $\alpha \in \text{crit}(f)$.
\begin{proof}[Proof of Corollary \ref{corollary:QAR}]
The corollary follows from Theorem \ref{thm:QAR} if we prove that for each $i,j \in \mathbb{Z}_{\ge 0}$ \begin{equation} \label{equation:corollaryequation} q_{D_8}(m_1 \otimes \Psi (e^{i} \sigma_{2}^{j}))(\alpha) = \sum_{b,d} Q\mathcal{S}^{i,b} \circ Q\mathcal{S}^{j,d}(\alpha). \end{equation} Henceforth we will fix some choice of $b,d$, and count those contributions to $q_{D_8}$, temporarily denoted $q_{D_8,b,d}$, which arise from counting configurations involving nodal curves with three components, corresponding to a nodal sphere comprised of a sphere of Chern number $bN$ attached to two spheres of Chern number $dN$ at $1$ and $\infty$. As we have now fixed $b,d$, we abusively exclude them from the notation.
To prove Equation \eqref{equation:corollaryequation}, we use an idea similar to the proof of the Cartan relation, as illustrated in Figure \ref{fig:ademmodulispace}. Specifically, recall the $1$-dimensional space of graphs $T^c$ from Section \ref{subsec:Cartan}. Recall from Lemma \ref{lemma:lem1} that for each $t \in [0,\infty]$ there is a space $|t|_Q$, consisting of three copies of $S^2$ with semi-infinite or finite lines attached at $0,1,\infty$ (with the length of the finite edges being $t$ for $t \in [0,\infty)$). Associated to each $t \in [0,\infty)$, we define $f_{v,s,t}$ and $g_{v,s,t}$ for $v \in E$ and for $s \in [0,\infty)$ or $[0,t]$ respectively, and $t \in T^c$. We also choose a perturbation $f_s$ for $s \in (-\infty, 0]$. We choose these such that:
\begin{enumerate}
\item $f_{s} = f$ for $s \le -1$.
\item $f_{(14)(23) \cdot v, s, t} = f_{(13)(24) \cdot v, s, t} = f_{v,s,t}$ for $t \ge 1$.
\item $g_{(12) \cdot v,s,t} = g_{(34) \cdot v,s,t} = g_{v,s,t}$ for all $v,s,t$.
\item $f_{v,s,t} = f$ for $s \ge 1$, for all $v,t$.
\item $g_{v,s,t} = f$ when $t \ge 1$ and $s \ge 1$.
\end{enumerate}
These conditions are analogues of those made in Definition \ref{defn:s4operators} (here adapted to the $D_8$ case).
\begin{figure}
\input{ademmodulispace.pdf_t}
\caption{Moduli space for the Adem relations.}
\label{fig:ademmodulispace}
\end{figure}
Similarly to the case of the quantum Cartan relation, we define for each $t \in [0,\infty]$ a moduli space $\tilde{\mathcal{M}}_t(\alpha, z)$, consisting of pairs $(v,u)$ such that $v \in S^{i,+} \times S^{j,+} \times S^{j,+}$ (see Lemma \ref{lemma:submandidjdj}) and $u: |t|_Q \rightarrow M$ such that $u$ is $J$-holomorphic on each sphere, with the Chern number on each sphere being as fixed at the beginning of the proof, and edge and asymptotic conditions as in Figure \ref{fig:ademmodulispace}. There is the previously given $D_8$-action on $E^{i,j} = S^{i,+} \times S^{j,+} \times S^{j,+}$, and $D_8$ also acts by permutations on $\overline{M}_{0,5}$ (which induces an action on the moduli space of $u: |t|_Q \rightarrow M$). Together these yield a $D_8$-action on $\tilde{\mathcal{M}}_t(\alpha, z)$, and we write $\mathcal{M}_t(\alpha, z) = \tilde{\mathcal{M}}_t(\alpha, z) / D_8$. We let $\mathcal{M}(\alpha, z) = \sqcup_{t \in T^c} \mathcal{M}_t(\alpha, z)$, which is a smooth $1$-dimensional manifold (establishing transversality is a modification of Appendix \ref{subsec:bordismquantumcartantrans} using the considerations of Appendix \ref{subsec:quantumademrels}, so we omit a restatement here).
Note that it is immediate from the definition that the $t=0$ boundary corresponds to $q_{D_8}(m_1 \otimes \Psi (e^{i} \sigma_{2}^{j}))(\alpha)$ (i.e. when the output is $z \in \text{crit}(f)$, this yields the coefficient of $z$ in $q_{D_8}(m_1 \otimes \Psi (e^{i} \sigma_{2}^{j}))(\alpha)$). Hence it remains to prove that the $t= \infty$ boundary yields the coefficient of $z$ in $Q\mathcal{S}^{i,b} \circ Q\mathcal{S}^{j,d}(\alpha)$.
We observe that from our choice of $g_{v,s,t}$, we may ensure that $g_{v,s,\infty}$ depends only on the first factor $S^{\infty}$ of $E = S^{\infty} \times S^{\infty} \times S^{\infty}$, which we denote $\tilde{g}_{v,s}$ for $v \in S^{\infty}$. Similarly, we may ensure that $f_{v,s,\infty}$ depends only on the last two factors, denoted $\tilde{f}_{v_1,v_2,s}$ for $(v_1,v_2) \in S^{\infty} \times S^{\infty}$. Then let $S$ be the space obtained by attaching $(-\infty,0]$ to $S^2$ at $0$, and two copies of $[0,\infty)$ to $S^2$ at $1$ and $\infty$ respectively. Let $R : S \rightarrow S$ be the involution that is $z \mapsto z/(z-1)$ on $S^2$, swapping the positive half-lines and fixing the negative half-line.
The configurations for the $t=\infty$ end then decouple. Specifically, if $v = (v_0,(v_1,v_2)) \in E$ then pairs $(v,u) \in \mathcal{M}(\alpha,z)$ correspond to two tuples as follows:
\begin{itemize}
\item A pair $(v_0,u)$ such that $v_0 \in S^{i}$, $u: S \rightarrow M$ such that $u$ is $J$-holomorphic of Chern number $bN$, satisfying conditions in Figure \ref{fig:ademmodulispace}$(I)$.
\item A four-tuple $(v_1,v_2,u_1, u_2)$ such that $(v_1,v_2) \in S^{j} \times S^{j}$, and $u_p: S \rightarrow M$ for $p=1,2$ such that $u_p$ is $J$-holomorphic of Chern number $dN$, satisfying conditions in Figure \ref{fig:ademmodulispace}$(II), (III)$ respectively.
\end{itemize}
The $D_8$-action on these pairs is as follows (recalling that the invariance conditions on $\tilde{f}$ ensure that $\tilde{f}_{v_1,v_2,t} = \tilde{f}_{-v_1,-v_2,t}$ for any $v_1,v_2 \in S^{\infty}$):
\begin{itemize}
\item $\begin{array}{l} (12) \cdot (v_0,u) = (v_0,u), \\ (34) \cdot (v_0,u) = (v_0,u), \\ (13)(24) \cdot (v_0,u) = (-v_0, u \circ R). \end{array} $ \\
\item $\begin{array}{l} (12) \cdot (v_1,v_2,u_1,u_2) = (-v_1,v_2, u_1 \circ R, u_2 \circ R), \\ (34) \cdot (v_1,v_2,u_1, u_2) = (v_1,-v_2,u_1 \circ R, u_2 \circ R), \\ (13)(24) \cdot (v_1,v_2,u_1, u_2) = (v_2,v_1, u_2, u_1). \end{array}$
\end{itemize}
To begin with, we see that counting pairs $(v_0, u)$ with $v_0 \in S^{i}$, modulo the $D_8$-action, gives exactly the coefficient of $z$ in $(Q \mathcal{S}')^{i,b}(A \otimes B(x))$, where the operation $A \otimes B: QH^*(M) \rightarrow QH^*(M) \otimes QH^*(M)[h]$ is determined by counting configurations corresponding to the $(v_1,v_2,u_1,u_2)$ above (with $(v_1, u_1)$ determining the $A$ component and $(v_2,u_2)$ determining the $B$ component), and $Q \mathcal{S}'$ is recalled from Definitions \ref{defn:mss} and \ref{defn:mqss}.
In fact, we only need to count solutions where $v_1 = v_2$ and $u_1 = u_2$. First, consider (from Figure \ref{fig:ademmodulispace}) contributions from using intermediate critical points $w_1 \neq w_2$. Then if $(v_0, u), (v_1, v_2,u_1,u_2)$ contributes to a term of the form $(Q \mathcal{S}')^{i,b}(w_1 \otimes w_2 T^d)$, it must be that $(v_0, u), (v_2, v_1,u_2,u_1)$ contributes to $(Q \mathcal{S}')^{i,b}(w_2 \otimes w_1 T^d)$. Hence, counting all such contributions together, one obtains a summand of the form $$(Q \mathcal{S}')^{i,b}(n \cdot (w_1 \otimes w_2 + w_2 \otimes w_1) T^d),$$ for some $n \in \mathbb{Z}/2$. We know this to be zero by an argument as in Proposition \ref{propn:propositionftw}. Hence the only contributions we must count occur when $w_1 = w_2$, in which case if $v_1 \neq v_2$ or $u_1 \neq u_2$ then solutions come in pairs $(v_0, u_0), (v_1, v_2, u_1, u_2)$ and $(v_0, u_0), (v_2, v_1, u_2, u_1)$ which are not related by the $D_8$-action (hence are counted separately, and so cancel modulo $2$).
In particular, it is immediate that (with asymptotic conditions as given in Figure \ref{fig:ademmodulispace}) the number of pairs $(v_0,u),(v_1,u_1)$ up to the action of $D_8$, is the coefficient of $w_1$ in $Q \mathcal{S}^{j,d}(\alpha)$ multiplied by the coefficient of $z$ in $Q \mathcal{S}^{i,b}(w_1)$. Summing over all $w_1 \in \text{crit}(f)$, we get that the count of the moduli spaces of maps is then exactly the coefficient of $z$ in $Q \mathcal{S}^{i,b} \circ Q \mathcal{S}^{j,d}(\alpha)$ as required.
\end{proof}
\begin{rmk}
Observe that the coefficients of $zT^0$ in $q_{D_{8}}(g m_1 \otimes \Psi b)(x)$ and in $q_{D_{8}}(g^2 m_1 \otimes \Psi b)(x)$ (corresponding to constant spheres) are the same: specifically, we are counting exactly the same moduli space in both cases. Consider the contributions of $q_{D_{8}}(g m_1 \otimes \Psi b)(x)$ and $q_{D_{8}}(g^2 m_1 \otimes \Psi b)(x)$ to Equation \eqref{equation:QAR} for constant spheres. These then cancel out modulo $2$, and so we are left with only $q_{D_8}( m_1 \otimes \Psi b)(x)$, which for constant $J$-holomorphic spheres is $Sq \circ Sq(x)$.
\end{rmk}
\begin{rmk}
The term $qq_{j,0}(\alpha) \in QH^*(M)$ is the $h^{j}$ coefficient in $Q\mathcal{S}(\alpha) * Q\mathcal{S}(\alpha)$. This is one of the correction terms that can be computed, e.g. the $p=|\alpha|$ term in Corollary \ref{corollary:QAR}.
\end{rmk}
\section{$Q\mathcal{S}$ for blow-ups}
\label{sec:blowups}
Denote by $Q \mathcal{S}_{i,j}(x)$ the coefficient of $h^i T^j$ in $Q \mathcal{S}(x)$ (where $|T|$ is the minimal Chern number of $M$). We will demonstrate calculations of $Q\mathcal{S}_{1,1}$ in two cases. The setup in both cases will be similar to the setup in \cite[Section 8]{blaier}, where Blaier computes the quantum Massey product.
\subsection{$\mathbb{CP}^3$}
Fix two generic quadric hypersurfaces in $X = \mathbb{CP}^3$. Their intersection $Y$ is an elliptic curve, hence a torus. We let $M = Bl_Y X$, equipped with the blowdown $\rho: M \rightarrow X$. Recall that there is a $\mathbb{CP}^1$-bundle $\pi: E \rightarrow Y$ over the torus and an inclusion $i: E \rightarrow M$ of the exceptional divisor $E$. Specifically, $E$ is the projectivisation of the normal bundle of $y: Y \xhookrightarrow{} X$.
Consider the continuous $3$-disc bundle $\pi': DY \rightarrow Y$ such that $E \xhookrightarrow{} DY$ is an inclusion of the subbundle $E$, with the maps of fibres being the inclusion of the boundary $S^2 \xhookrightarrow{} D^3$. Locally, let $U \subset Y$ be a trivialising neighbourhood for $E$, so there is a homeomorphism $\phi_U: U \times S^2 \xrightarrow{\cong} \pi^{-1}(U)$. Then we define $DY$ to have the same trivialising neighbourhoods as $E$, i.e. $\phi'_U: U \times D^3 \cong \pi'^{-1}(U)$. Further, we require that the transition functions $\psi'_{U_1,U_2} :=(\phi'_{U_1})^{-1} \circ \phi'_{U_2}: (U_1 \cap U_2) \times D^3 \rightarrow (U_1 \cap U_2) \times D^3$ are defined by $$\psi'_{U_1,U_2}((r,\theta),x) = ((r, \psi_{U_1,U_2}(\theta)), x),$$ where here we use polar coordinates on $D^3$ and $\psi_{U_1,U_2}$ is the transition function of the bundle $E \rightarrow Y$. One can use the Mayer-Vietoris sequence, by observing that $M \cup_{E} DY$ is homotopy equivalent to $X$, to write down the {\it long exact sequence of a blow-up},
\begin{equation}\label{lesblowup}
\xymatrix{
\ldots \ar@{->}[r]
&
H_*(E) \ar@{->}^-{i_* \oplus \pi_*}[r]
&
H_*(M) \bigoplus H_*(Y) \ar@{->}^-{\rho_* - y_*}[r]
&
H_*(X) \ar@{->}^-{\delta}[r]
&
H_{*-1}(E) \ar@{->}[r]
&
\ldots
}
\end{equation}
One can also use the homological Gysin sequence for the bundle $DY \rightarrow Y$ to get an exact sequence (after applying the Thom isomorphism and observing that $DY \rightarrow Y$ is a homotopy equivalence):
\begin{equation}\label{homgysin}
\xymatrix{
\ldots \ar@{->}[r]
&
H_*(E) \ar@{->}^-{\pi_*}[r]
&
H_*(Y) \ar@{->}^-{g}[r]
&
H_{*-3}(Y) \ar@{->}[r]^{\phi}
&
H_{*-1}(E) \ar@{->}[r]
&
\ldots
}
\end{equation}
Here $\delta, \phi$ are the connecting homomorphisms in the long exact sequences, and the maps $i_*, \pi_*, \rho_*$ and $y_*$ are induced by the continuous maps above. The map $g$ is induced by the composition of $H_*(Y) \cong H_*(DY) \rightarrow H_*(DY, DY - Y)$ (where $DY - Y$ denotes the complement of the $0$-section) with the Thom isomorphism. Putting these together, we see that:
\begin{equation} \label{equation:eq111} H_2(M) \cong H_2(X) \oplus H_2(E)/H_2(Y) \end{equation} \begin{equation} \label{equation:eq222} i_*: H_3(E) \xrightarrow{\cong} H_3(M) \end{equation} \begin{equation} \label{equation:eq333} \phi: H_1(Y) \xrightarrow{\cong} H_3(E) .\end{equation} In particular $\dim H_3 (E) = 2$. The class of a sphere lifted from $\mathbb{P}^3$ to $M$ and the class of a fibre $\pi^{-1}(y)$ of $\pi$ over $y \in Y$ generate $H_2(M)$.
We calculate $c_1(TM)$. There is a natural embedding $j: M \rightarrow \mathbb{P}^3 \times \mathbb{P}^1$ as a complex hypersurface of bidegree $(2,1)$, with respect to the generators of $H_2(M)$ in the previous paragraph. By bidegree, we mean that the number of points of intersection of $M$ with a general curve $A$ of $\mathbb{P}^3 \times \mathbb{P}^1$ is the following:
\begin{itemize}
\item if $A = \mathbb{P}^1 \times \mathbb{P}^0$, then we get an intersection number of $2$ because we are counting the number of solutions of a general quadric equation (the blow-up is defined as the set of $(t,[r:s]) \in \mathbb{CP}^3 \times \mathbb{CP}^1$ such that $r f(t) + s g(t) = 0$, where $f$ and $g$ are the quadric equations defining $Y$).
\item if $A = \mathbb{P}^0 \times \mathbb{P}^1$, then we obtain an intersection number of $1$ because a general point $p \in \mathbb{P}^3$ is such that there is only one point $q$ such that $(p,q) \in M.$
\end{itemize}
Functoriality and the Whitney sum formula imply that $$c_{1}(TM) + c_1(v M) = c_1(T(\mathbb{P}^3 \times \mathbb{P}^1)|_M)$$ where $v M$ is the normal bundle of $M$ in $\mathbb{P}^3 \times \mathbb{P}^1$. Recall $c_1(T(\mathbb{P}^3 \times \mathbb{P}^1)|_M) = (4,2)$, and $c_1(v M) = (2,1)$ because it is the same as the bidegree (here we note that the Euler class can be reinterpreted as $j^* PD([M])$, represented by the self-intersection of $M$, where $[M] \in H_6(\mathbb{P}^3 \times \mathbb{P}^1)$). Hence $c_1(TM) = (2,1)$. Therefore, when calculating $Q\mathcal{S}_{1,1}$ we only need to consider the spheres in the fibre class of $M$, as these are the only $J$-holomorphic spheres of Chern number $1$, and they are contained in $E$.
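For bookkeeping, the Whitney sum argument above can be displayed as a single subtraction (this merely restates the computation in the preceding paragraph):
\begin{align*}
c_1(TM) &= c_1(T(\mathbb{P}^3 \times \mathbb{P}^1)|_M) - c_1(v M) \\
&= (4,2) - (2,1) = (2,1).
\end{align*}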
Consider $Q\mathcal{S}_{1,1} : H^3(M) \rightarrow H^3(M)$. We will show that $Q\mathcal{S}_{1,1}|_{H^3(M) } = id$. First we show that calculating $Q\mathcal{S}_{1,1}|_{H^3(M)}$ reduces to calculating $Q\mathcal{S}_{1,1}: H^3(E) \rightarrow H^1(E)$. We use $i^{!}$ on cohomology to mean $PD \circ i_* \circ PD^{-1}$, and let $i_! = PD^{-1} \circ i^* \circ PD$ on homology.
\begin{lemma}
\label{lemma:SqE2M}
Fix $a \in H^3(M)$. Then
$$Q\mathcal{S}_{1,1} \circ i^* (a) = i^! \circ Q\mathcal{S}_{1,1}(a),$$ i.e. Diagram \eqref{squareEMcommutes} commutes:
\begin{equation}\label{squareEMcommutes}
\xymatrix{
H^3(M)
\ar@{->}^-{Q\mathcal{S}_{1,1}}[r]
\ar@{->}_-{i^*}[d]
&
H^3(M)
\ar@{<-}^-{i^!}[d]
\\
H^3(E)
\ar@{->}^-{Q\mathcal{S}_{1,1}}[r]
&
H^1(E)
}
\end{equation}
\end{lemma}
\begin{proof}
\begin{figure}
\input{p_3setup.pdf_t}
\caption{Configurations for $Q\mathcal{S}_{1,1}$ on $M$ and $Q\mathcal{S}_{1,1}$ on $E$.}
\label{fig:p_3setup}
\end{figure}
Fix a generator $a \in H^3(M)$ whose Poincar\'e dual is represented by a smooth submanifold $A$. To compute the coefficient of $b$ in $Q \mathcal{S}_{1,1}(a)$, we choose $A_v$ for $v \in S^{\infty}$ (such that $A_v$ represents $PD(a)$ for each $v$) and $B^{\vee}$ that represents $PD(b^{\vee})$. To simplify the notation, $^{\vee}$ acts on $H_*(M)$ as the conjugation of the cohomological intersection dual by Poincar\'e duality, so $PD(b)^{\vee} := PD(b^{\vee})$. We therefore see that the coefficient of $b$ in $Q \mathcal{S}_{1,1}(a)$ is determined by counting setups as in Figure \ref{fig:p_3setup}I.
In general, if $a$ is represented by $A$ then a representative of $i^*(a)$ is obtained by choosing some small perturbation $A'$ of $A$, such that $A'$ intersects $E$ transversely. Then the intersection $A' \cap E$, a submanifold of $E$, represents $i^*(a)$. Hence, supposing that we perturb $A$ to intersect $E$ transversely, if we choose $A_v$ to be a sufficiently small perturbation of $A$ for each $v$ then $A_v \cap E$ is transverse for each $v \in S^{\infty}$. By this procedure, we obtain representatives of $i^*A$ for each $v$ that we denote $A_v \cap E$, and we similarly obtain a representative $B^{\vee} \cap E$ of $i^*(B^{\vee})$. This can be done in such a way as to ensure that the space of setups as in Figure \ref{fig:p_3setup}II is transverse: in particular any perturbation of $A_v \cap E$ may be extended to yield a perturbation of $A_v$ in a small neighbourhood of $E$. By making this perturbation sufficiently small, we ensure that the intersection between $A_v$ and $E$ remains transverse.
Observe that using the choice of basis induced from $H_1(Y)$ by Equations \eqref{equation:eq222} and \eqref{equation:eq333}, a direct computation shows that for any $b \in H^3(M)$, \begin{equation} \label{Bvees} i_! PD(b)^{\vee} = (i_*^{-1} PD(b))^{\vee}. \end{equation} Hence in particular $B^{\vee} \cap E$ represents $((i^!)^{-1}b)^{\vee}$. With all of this in mind, the coefficient of $(i^!)^{-1}b$ in $Q \mathcal{S}_{1,1}(i^* a)$ is determined by counting setups as in Figure \ref{fig:p_3setup}II.
Hence, to show that the diagram commutes we need to show that setups of type I and II biject. This is immediate, however, because every $J$-holomorphic curve $u$ of Chern number $1$ (in $M$) is contained in $E$. Hence if $(v,u)$ is a setup of type I (i.e. $u$ intersects $A_{\pm v}$ and $B^{\vee}$) then $u$ will automatically intersect with $E \cap A_{\pm v}$ and $E \cap B^{\vee}$ (hence $(v,u)$ will be a setup of type II), and vice versa.
\end{proof}
\begin{rmk}
In Lemma \ref{lemma:SqE2M}, in order to show that $E \cap A_v$ is of the correct form for Definition \ref{defn:singqss}, we need to demonstrate that in fact $$\bigsqcup_{v \in S^i} (E \cap A_{v}) \times \{ v \}$$ is of the form $A' \times S^i$ for some smooth manifold $A'$. We make two observations:
\begin{itemize}
\item The space $\sqcup_{v \in S^i} A_{v} \times \{ v \}$ is a smooth manifold by assumption, transversely intersecting $E \times S^i$. Hence $\sqcup_{v \in S^i} (E \cap A_{v}) \times \{ v \} = (\sqcup_{v \in S^i} A_{v} \times \{ v \}) \cap (E \times S^i)$ is a smooth manifold. Further, the map $\sqcup_{v \in S^i} E \cap A_{v} \times \{ v \} \rightarrow S^i$ induced by projection to the second factor is a proper surjective submersion between two smooth manifolds, hence it is a fibre bundle by Ehresmann's Lemma.
\item This fibre bundle is trivial, because we know that it must extend to a fibre bundle over $D^{i+1,+}$, the upper hemisphere in $S^i$, which is a contractible base.
\end{itemize}
A lemma similar to Lemma \ref{lemma:SqE2M} would hold if we replaced embedded submanifolds by pseudocycles.
\end{rmk}
Note that the cohomological degree of the codomain of $Q \mathcal{S}_{1,1}$ drops by $2$, from $H^3(M)$ to $H^1(E)$, in Diagram \eqref{squareEMcommutes}. This comes from the fact that $i^!$ changes cohomological degree by $2$ (and is also to be expected because the minimal Chern number of $E$ is $2$, whereas the minimal Chern number of $M$ is $1$).
Henceforth, we will use the Morse theoretic definition of the quantum square. Observe that $E = Y \times \mathbb{P}^1$ so we may pick the Morse function on $E$ to be $f+g$ where $f: Y \rightarrow \mathbb{R}$ and $g: \mathbb{P}^1 \rightarrow \mathbb{R}$, such that $g$ has two critical points of index $0,2$, which we call $a_0, a_2$, and $f$ has critical points $b_0, b_1, b_1^{'}, b_2$ (whose indices are the subscripts). Recalling that $\pi: E \rightarrow Y$ is the projection map, $$(\pi^!)^{-1} (b_1) = (b_1, a_0) \text{ and } \pi^*(b_1) = (b_1,a_2).$$
\begin{lemma}
\label{lemma:SqE2Y}
Let $Sq_i(x)$ be the coefficient of $h^i$ in $Sq(x)$. Then $Q\mathcal{S}_{1,1} =\pi^* \circ Sq_1 \circ \pi^!$.
\end{lemma}
\begin{proof}
Recall that input elements of $H^3(E)$ correspond to $(b_1,a_2)$ or $(b^{'}_1,a_2)$, which project down under $\pi$ to $b_1$ or $b_1^{'}$ respectively. Output elements of $H^1(E)$ correspond to $(b_1,a_0)$ or $(b^{'}_1,a_0)$.
We will show that pairs $(\tilde{v},\tilde{u})$ in the moduli space used to calculate the coefficient of $c$ in $Q\mathcal{S}_{1,1}(x)$ correspond to pairs $(v,u)$ in the moduli space yielding the coefficient of $\pi^* c$ in $Sq_{1}(\pi^{!} x)$. For clarity we will fix $x = (b_1,a_2)$ and $c = (b_1,a_0)$ (hence $\pi^{*} (b_1) = x$ and $\pi^! (c) = b_1$), although the argument follows identically for any choice of $x$ and $c$. For conciseness we denote the moduli spaces of pairs respectively as $\mathcal{M}_Q$ and $\mathcal{M}$ for $Q \mathcal{S}$ on $E$ and $Sq$ on $M$.
Consider a pair $(v,u) \in \mathcal{M}_Q$, as on the right hand side of Figure \ref{fig:sqtosqQblowup}. We observe that, using the projection $\pi: E \rightarrow Y$, the setup $(v, \pi u)$ is one that is counted when calculating the coefficient of $b_1$ in $Sq_{1}(b_1)$, i.e. $(v, \pi u) \in \mathcal{M}$. This is because a fibre sphere in $E$ lives above a point $y \in Y$, hence the incidence condition of the flowlines attaching to $S^2$ at the points $0,1,\infty$ translates under this projection to the three flowlines coinciding at the point $y \in Y$. As the transversality condition is generic, we may choose the perturbations $f_{v,s}$ and $g_{v,s}$ of $f$ and $g$ in such a way that both the moduli spaces $\mathcal{M}$ and $\mathcal{M}_Q$ are transverse.
We show that every pair $(v,u) \in \mathcal{M}$ arises uniquely from a pair $(\tilde{v}, \tilde{u}) \in \mathcal{M}_Q$ as in the previous paragraph. Consider such a pair $(v,u)$: specifically, the image of $u$ consists of three perturbed half-flowlines meeting at some point $y$. Consider the $- \nabla f_{v,s}$ flowline $l$, which is the image of $u$ restricted to one of the two positive half-lines. This flowline is asymptotic to $b_1$ in $Y$, hence it lifts uniquely to a $- \nabla (f_{v,s} + g_{v,s})$ flowline that is asymptotic to $(b_1,a_2)$ in $E$. The uniqueness is because $a_2$ is the maximum of $g$, and hence there is a unique $- \nabla g_{v,s}$-flowline $L$ asymptotic to $a_2$. Specifically this is the flowline $L: [0,\infty) \rightarrow S^2$ such that $L(s) = a_2$ for $s \gg 0$. See Figure \ref{fig:sqtosqQblowup}. Likewise the output flowline on $Y$, which is a $- \nabla f_s$-flowline, lifts uniquely to a $-\nabla (f_s+g_s)$-flowline on $E$, asymptotic to $(b_1,a_0)$, which is unique because $a_0$ is the minimum of $g$.
\begin{figure}
\input{sqtosqQblowup.pdf_t}
\caption{Lifting configurations of $Sq$ on $Y$ to $Q\mathcal{S}$ on $M$.}
\label{fig:sqtosqQblowup}
\end{figure}
Moreover, because this setup lifts from $Y$ we know that the three flowlines all intersect the $J$-holomorphic sphere $\pi^{-1} y$ at $s=0$. We also know where each lifted flowline intersects the $J$-holomorphic sphere $\pi^{-1} y$, as this is determined by the gradient of $f+g$ on each of the flowlines at $s=0$. Hence there is a unique $J$-holomorphic sphere that fits into the lifted setup, giving a unique configuration on $E$ corresponding to the configuration on $Y$.
\end{proof}
\begin{proof}[Proof of Equation \eqref{equation:blowupID1} in Theorem \ref{thm:blowupID}]
Note that $Sq|_{H^1(Y)}: H^1(Y) \rightarrow H^1(Y)$ is the identity, which is known by the definition of $Sq$. Lemmas \ref{lemma:SqE2M} and \ref{lemma:SqE2Y} imply that Diagram \eqref{squareMEYcommutes} commutes.
\begin{equation}\label{squareMEYcommutes}
\xymatrix{
H^3(M)
\ar@{->}^-{Q\mathcal{S}_{1,1}}[r]
\ar@{->}_-{i^*}[d]
&
H^3(M)
\ar@{<-}^-{i^!}[d]
\\
H^3(E)
\ar@{->}^-{Q\mathcal{S}_{1,1}}[r]
\ar@{->}_-{\pi^!}[d]
&
H^1(E)
\ar@{<-}_-{\pi^*}[d]
\\
H^1(Y)
\ar@{->}^-{Sq_{1}}[r]
&
H^1(Y)
}
\end{equation}
The abelian group $H^i(E)$ is generated by $\{ (b_1,a_{i-1}), (b'_1,a_{i-1}) \}$ for $i=1,3$. From the axioms of $Sq$, we know that $Sq_1 = id$ on $H^1(Y)$. Then from Lemma \ref{lemma:SqE2Y}: \begin{equation} \label{equation:eq211} Q\mathcal{S}_{1,1}(b_1,a_0) = (b_1, a_2) \text{ and } Q\mathcal{S}_{1,1}(b^{'}_1,a_0) = (b^{'}_1, a_2). \end{equation}
We apply the isomorphism between Morse and classical cohomology and then Poincar\'e duality to the Morse cocycles $(b_1,a_2) \in H^3(E)$ and $(b_1,a_0) \in H^1(E)$. This yields cycles $B_1 \in H_1(E)$ and $B_3 \in H_3(E)$. Likewise we define $B'_i \in H_i(E)$ for $(b'_1,a_{3-i})$ for $i=1,3$. We recall that $^{\vee}$ is the intersection dual on homology (defined as the conjugation by Poincar\'e duality of the duality on cohomology, for our given basis). Note that \begin{equation} \label{equation:eq212} B^{\vee}_3 = B'_1, \end{equation} and so on. In this notation $Q\mathcal{S}_{1,1} (B_1) = B_3$. Observe that $B_3 \cap B_3 = \emptyset$ so we see that $i_* B_3 \cap i_* B_3 = \emptyset$ (which is immediate if one chooses a generic submanifold representative: then nonintersection in $E$ implies nonintersection in $M$). As $H_3(M)$ is generated by $i_* B_3$ and $i_* B'_3$, this implies
\begin{equation} \label{istarB} (i_* B_3)^{\vee} = i_* B'_3.\end{equation}
By Equation \eqref{istarB}, \begin{equation} \label{equation:bbb1} i_* \circ Q\mathcal{S}_{1,1} \circ i_! ((i_* B_3)^{\vee}) = i_* \circ Q\mathcal{S}_{1,1} \circ i_! ((i_* B'_3)).\end{equation}
By Equation \eqref{Bvees}, then Equations \eqref{equation:eq211} and \eqref{equation:eq212}, \begin{equation} \label{equation:bbb2} i_* \circ Q\mathcal{S}_{1,1} \circ i_! ((i_* B_3)^{\vee}) = i_* \circ Q\mathcal{S}_{1,1} (B^{\vee}_3) = i_* B'_3. \end{equation}
From Equations \eqref{equation:bbb1} and \eqref{equation:bbb2}, along with identical calculations for the other generators, plus the fact that $i_*$ is an isomorphism, we deduce that $i_* \circ Q\mathcal{S}_{1,1} \circ i_! = id.$ Diagram \eqref{squareMEYcommutes} then implies that $$Q \mathcal{S}_{1,1} = i^! \circ Q\mathcal{S}_{1,1} \circ i^* = id.$$
\end{proof}
\subsection{$\mathbb{CP}^1 \times \mathbb{CP}^1 \times \mathbb{CP}^1$}
Now let $X = \mathbb{CP}^1 \times \mathbb{CP}^1 \times \mathbb{CP}^1$, with $Y \subset X$ defined by the intersection of two generic linear hypersurfaces. ``{\it Linear}" means that we require that the defining equations of the hypersurfaces be linear in the coordinates of each $\mathbb{CP}^1$ (when the other coordinates are treated as constants). The subvariety $Y$ is in fact a torus, which one can see by using the adjunction formula: in particular, one proves that $K_Y = 0$. Specifically, $K_X = (-2,-2,-2)$ and the two linear hypersurfaces are of class $(1,1,1)$ by definition, hence $K_Y = (1,1,1) + (1,1,1) + (-2,-2,-2) = 0$. Then the genus $g$ of $Y$ satisfies $g = 1 + (\deg K_Y)/2 = 1$, hence $Y$ is a surface of genus $1$.
Define $M = Bl_Y X$. Using a similar method to the $\mathbb{CP}^3$ case, we can show that the Chern class of $M$ is $(1,1,1,1)$, where the first three entries correspond to lifting the $J$-holomorphic spheres on each of the $3$ coordinates of $X$, and the final entry corresponds to a fibrewise $J$-holomorphic sphere in the exceptional divisor. Hence, when calculating $Q\mathcal{S}_{1,1}: H^3(M) \rightarrow H^3(M)$ there are contributions from the fibre direction plus those from $J$-holomorphic spheres in $X$ that have been lifted to $M$. The fibrewise contributions are calculated in exactly the same way as for $\mathbb{CP}^3$, so we turn our attention to the spheres lifted from $\mathbb{CP}^1 \times \mathbb{CP}^1 \times \mathbb{CP}^1$.
\begin{proof}[Proof of Equation \eqref{equation:blowupID2} in Theorem \ref{thm:blowupID}]
Suppose the defining linear equations for $Y$ are $P_1(x,y,z)$ and $P_2(x,y,z)$ in local coordinates on $\mathbb{P}^1 \times \mathbb{P}^1 \times \mathbb{P}^1$. Fixing $x$ and $y$, there is at most one solution $z$ such that $P_1(x,y,z) = P_2(x,y,z)=0$. Hence, let $S = \{ x \} \times \{ y \} \times \mathbb{CP}^1$ be a $J$-holomorphic curve in $X$, and $\tilde{S}$ its lift to $M$. By the above, $\tilde{S} \cap E$ consists of at most $1$ point. If $A \in H_3(M)$ then, recalling Equation \eqref{equation:eq222}, we may assume that $A = i_* A_E$ for some $A_E \in H_3(E)$. Recall that to calculate $Q \mathcal{S}_{1,1}(a)$, where $A = PD (a)$, we need to choose some $A_v$ satisfying the transversality conditions. We may pick the representatives $A_v$ to be some $D_v \times \mathbb{P}^1$, where $D_v$ is a representative of $D \in H_1(Y)$. Then assuming $S$ is not contained in $Y$, there are no solutions to the configuration in Figure \ref{fig:p_1_3setup}. For such a solution, we would need $\tilde{S}$ to intersect $E$ in at least $2$ points (as $A_v \subset E$ for all $v$), which we know is impossible. Hence the space of such setups is transverse, because it is empty.
\begin{figure}
\input{p_1_3setup.pdf_t}
\caption{Configurations for contributions to $Q\mathcal{S}_{1,1}$ from lifts on $\mathbb{P}^1 \times \mathbb{P}^1 \times \mathbb{P}^1$.}
\label{fig:p_1_3setup}
\end{figure}
The case $S \subset Y$ is not possible, because there is no degree $1$ holomorphic map $\mathbb{P}^1 \rightarrow Y$.
\end{proof}
\section{Introduction}
A ubiquitous problem in science and engineering is to infer the parameter of interest, say $\bs{x}$,
given noisy indirect measurements $\bs{y}\in\mathbb{R}^m$.
Suppose the parameter and measurements are linked by
a parameter-to-observable map $\bs{f}:\mathbb{R}^n\times\mathbb{R}^p\times\mathbb{R}^q\rightarrow \mathbb{R}^m$
\begin{align}\label{eq: general}
\bs{y}=\bs{f}(\bs{x},\bs{n},\bs{\eta})
\end{align}
where $\bs{y}\in\mathbb{R}^m$ is the (observation) data,
$\bs{x}\in\mathbb{R}^n$ is the primary (interesting) unknown and
$\bs{n}\in\mathbb{R}^p$ and $\bs{\eta}\in\mathbb{R}^q$ denote uninteresting related random variables
which can often be interpreted as noise.
The first task would then be to marginalize over the uninteresting variables.
In the context of inverse problems, which are the focus of this paper, we often have $n\ge m$.
The most common model for $\bs{f}(\bs{x},\bs{n},\bs{\eta})$ is the additive error model \cite{Tarantola2004,kaipio2005}
\begin{align}\label{eq: additive}
\bs{y}=\bs{A}(\bs{x})+\bs{\eta}
\end{align}
where the mapping $\bs{A}: \bs{x}\mapsto\bs{y}$ is referred to as the forward map (problem).
However, in several imaging modalities including optical coherence tomography (OCT) \cite{Wong2010,Yin2013},
ultrasound \cite{Burckhardt1978,Michailovich2006}, synthetic aperture radar (SAR) imaging \cite{Foucher2001,Tison2004},
and electrical impedance tomography (EIT) \cite{Borcea1999,Zhang2015}, noise can be proportional to the data.
In such a case, we have
\begin{align}\label{eq: noadd}
\bs{y}=\bs{n}\odot\bs{A}\left(\bs{x}\right)
\end{align}
where $\odot$ denotes component-wise (Hadamard) product.
Such a setup is usually referred to as the multiplicative noise model.
Moreover, in many of these cases there may simultaneously be additive noise present,
see, for example, \cite{Krissian2007,Durand2010,kaipio2005,Garcia2011}, so that we can write
\begin{align}\label{eq: fullmodel}
\bs{y}=\bs{n}\odot\bs{A}\left(\bs{x}\right)+\bs{\eta}.
\end{align}
In most papers, see for example \cite{Krissian2007,Durand2010}, the effects of any additive errors $\bs{\eta}$ have been
assumed to be small compared to
the effects of the multiplicative noise $\bs{n}$, and thus the additive error term
has often been neglected.
Furthermore, the components of the multiplicative noise have systematically been assumed to be
mutually independent.
In the current paper however, we retain the additive error term.
In this paper, we take a model discrepancy style approach to transform Equation (\ref{eq: fullmodel}) to
Equation (\ref{eq: additive}) with a modified additive error term,
into which both the additive and multiplicative errors are embedded.
The approach is based on a joint model $\pi(\bs{x},\bs{n},\bs{\eta})$
and the computation of the approximate statistics
of the adjusted additive error term, followed by approximate marginalization.
This procedure yields an approximate posterior model $\pi( \bs{x} \,\vert\, \bs{y})$.
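As a minimal illustration of the observation model (\ref{eq: fullmodel}), the data can be simulated as follows (the dimensions, forward map, and noise levels here are hypothetical, not those used in the experiments below):

```python
import numpy as np

rng = np.random.default_rng(0)

m, n_par = 40, 60                                      # m measurements, n >= m unknowns
A = rng.standard_normal((m, n_par)) / np.sqrt(n_par)   # placeholder linear forward map
x = rng.standard_normal(n_par)                         # primary unknown

sigma_n, sigma_eta = 1.0, 0.01
n_mult = 1.0 + sigma_n * rng.standard_normal(m)   # multiplicative noise, E[n] = 1
eta = sigma_eta * rng.standard_normal(m)          # additive noise

y = n_mult * (A @ x) + eta   # y = n (Hadamard) A(x) + eta
```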
The paper is organised as follows. In Section \ref{sec: Noise}, we review the marginalization of
noise terms
in the Bayesian framework.
In Section~\ref{sec: StdApp}, we give a brief review of
the methods used to deal with multiplicative noise.
Section~\ref{sec: BAE} outlines the approximation of the noise statistics and the subsequent marginalization,
an approach sometimes referred to as the Bayesian approximation error (BAE) approach
\cite{kaipio2005,Kaipio2007,Kaipio2013}.
The multiplicative noise term is not assumed to be uncorrelated.
In Section~\ref{sec:DeblurringApplication}, we consider a deconvolution example with different distributions for the
multiplicative noise term, including correlated multiplicative noise.
\dontshow{The results are compared to those based on the same transformation without updating the additive error term so as
to take into account the effects of the multiplicative noise.}
\section{Exact marginalization over additive and multiplicative terms}
\label{sec: Noise}
In this paper, we assume that the noise terms $\bs{n}$ and $\bs{\eta}$ and the parameter of interest $\bs{x}$
are pair-wise mutually independent.
Thus, the joint model of the noise terms and the parameter can be stated as
$\pi(\bs{x},\bs{n},\bs{\eta})=\pi_x(\bs{x})\pi_n(\bs{n})\pi_{\eta}(\bs{\eta})$.
Furthermore, in line with the literature \cite{Shi2008,Aubert2008,Durand2010,Steidl2010,Zhao2014},
we assume that the multiplicative noise is i.i.d., so that $\pi_n(\bs{n})=\prod_{i=1}^m\pi_{n_i}(n_i)$.
The likelihood is obtained formally by marginalization
\begin{align}
\label{eq: fulllike}
\pi(\bs{y}|\bs{x})=\iint\pi(\bs{y}|\bs{x},\bs{n},\bs{\eta})\pi_n(\bs{n})\pi_{\eta}(\bs{\eta})\;d\bs{n}d\bs{\eta}=\iint \delta(\bs{y}-\bs{n}\odot\bs{A}(\bs{x})-\bs{\eta})\pi_n(\bs{n})\pi_{\eta}(\bs{\eta})\;d\bs{n}d\bs{\eta},
\end{align}
where $\delta(\cdot)$ is the Dirac distribution.
We now look at the three individual cases of interest.
In the purely additive noise model, we set $\pi(\bs{n})=\delta(\bs{n}-\bs{1})=\prod_{i=1}^m\delta(n_i-1)$.
Thus, (\ref{eq: fulllike}) can be written as
\begin{align}
\pi(\bs{y}|\bs{x})&=\iint \delta(\bs{y}-\bs{n}\odot\bs{A}(\bs{x})-\bs{\eta})\delta(\bs{n}-\bs{1})\;d\bs{n}\pi_{\eta}(\bs{\eta})\;d\bs{\eta}
=\int\delta(\bs{y}-\bs{A}(\bs{x})-\bs{\eta})\pi_{\eta}(\bs{\eta})\;d\bs{\eta}\nonumber\\
&=\pi_\eta(\bs{y}-\bs{A}(\bs{x})).
\end{align}
In the purely multiplicative noise model, we set $\pi(\bs{\eta})=\delta(\bs{\eta})=\prod_{i=1}^m\delta(\eta_i)$, then
(\ref{eq: fulllike}) can be written as
\begin{align}\pi(\bs{y}|\bs{x})&=\iint \delta(\bs{y}-\bs{n}\odot\bs{A}(\bs{x})-\bs{\eta})\delta(\bs{\eta})\;d\bs{\eta}\pi_{n}(\bs{n})\;d\bs{n}
=\int\delta(\bs{y}-\bs{n}\odot\bs{A}(\bs{x}))\pi_{n}(\bs{n})\;d\bs{n}\nonumber\\
&=\prod_{i=1}^m\left(\frac{1}{\abs{A_i(\bs{x})}}\pi_{n_i}\left(\frac{y_i}{A_i(\bs{x})}\right)\right),
\end{align}
where $A_i(\bs{x})$ is the $i$th component of $\bs{A}(\bs{x})$.
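For instance, with i.i.d.\ $\Gamma(1,1)$ (i.e.\ exponential) multiplicative noise, the product formula above can be evaluated in the log domain. A minimal sketch (the density $\pi_{n_i}(z)=e^{-z}$ for $z\ge 0$ is an assumption of this example, not a choice made in the text):

```python
import numpy as np

def loglike_multiplicative(y, Ax):
    """log pi(y|x) = sum_i [ -log|A_i(x)| + log pi_n(y_i / A_i(x)) ]
    for iid Gamma(1,1) noise, whose density is exp(-z) on z >= 0."""
    z = y / Ax
    if np.any(z < 0):
        return float("-inf")               # exponential noise is nonnegative
    return float(np.sum(-np.log(np.abs(Ax)) - z))
```

For example, `loglike_multiplicative(np.array([1.0]), np.array([1.0]))` evaluates to $-1$, since $-\log 1 - 1 = -1$.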
In the case of simultaneous multiplicative and additive noise terms, due to Fubini's theorem,
the integrations in (\ref{eq: fulllike}) can be carried out in either order
resulting in either
\begin{align}\label{eq: int1}
\pi(\bs{y}|\bs{x})&=\iint \delta(\bs{y}-\bs{n}\odot\bs{A}(\bs{x})-\bs{\eta})\pi_n(\bs{n})\pi_{\eta}(\bs{\eta})\;d\bs{n}d\bs{\eta}\nonumber\\&=\int\pi_n(\bs{n})\pi_{\eta}(\bs{y}-\bs{n}\odot\bs{A}(\bs{x}))\;d\bs{n}.
\end{align}
or
\begin{align}\label{eq: int2}
\pi(\bs{y}|\bs{x})&=\iint \delta(\bs{y}-\bs{n}\odot\bs{A}(\bs{x})-\bs{\eta})\pi_n(\bs{n})\pi_{\eta}(\bs{\eta})\;d\bs{\eta}d\bs{n}\nonumber\\&=\prod_{i=1}^m\left(\frac{1}{\abs{A_i(\bs{x})}}\right)
\int\prod_{i=1}^m\left(\pi_{n_i}\left(\frac{y_i-\eta_i}{A_i(\bs{x})}\right)\right)\pi_\eta(\bs{\eta})\;d\bs{\eta}.
\end{align}
Unfortunately, the integrals in either (\ref{eq: int1}) or (\ref{eq: int2})
cannot be computed analytically for general multiplicative noise models $\pi_{n}(\bs{n})$.
\section{Approaches to handle multiplicative noise models}
\label{sec: StdApp}
For the remainder of the paper we will consider linear forward models
$\bs{A}(\bs{x})=\bs{A}\bs{x}$, as is the case in deblurring (setting $\bs{A}=\bs{I}$ is the case of denoising).
Furthermore, since the focus of the present paper is on inverse problems and since some approaches depend
directly on properties of the unknown (such as positivity), we refer directly to posterior models.
Moreover, since the proposed approach is targeted at relatively high-dimensional problems, we
will only consider the computation of MAP estimates and the Laplace approximations for
the posterior covariances.
There are several approaches documented in the literature for dealing with multiplicative noise.
Many of the techniques are framed in the context of denoising.
Moreover, it is often assumed that $\bs{x}\ge\bs{0}$ and has bounded variation
(i.e. $\bs{x}\in BV(\Omega)$), and thus the total variation (TV) prior is used, see for example
\cite{Shi2008,Aubert2008,Durand2010,Steidl2010,Rodriguez2013,Zhao2014}. In the approach proposed below, however, we do not need to assume positivity or boundedness
of the primary unknown $\bs{x}$.
{\bf The $\log$ model (multiplicative noise only):}
The most common of these techniques is to simply apply
the logarithm transform, resulting in a problem of the form of (\ref{eq: additive}), see for example \cite{Guo1994,Durand2010}.
However, there are some drawbacks to applying the logarithm transform method.
Firstly, if any of the components of the data $\bs{y}$, the model prediction $\bs{A}\bs{x}$
or the noise term $\bs{n}$ are negative, the method fails. Secondly, if one in fact retains
the additive error, as in Equation (\ref{eq: fullmodel}), the logarithm transform is of little use.
Thirdly, it has been noted that one cannot directly apply standard additive noise removal algorithms
and that such a method does not produce satisfactory results \cite{Aubert2008}.
Such an approach leads
to the following MAP estimate for a general prior $\pi_x(\bs{x})$:
\begin{align}\label{eq: logMAP}
\bs{x}_{\rm MAP}=\arg\max_{\bs{x}}\pi_\xi\left(\log(\bs{y})-\log\left(\bs{A}\bs{x}\right)\right)\pi_x(\bs{x}),
\end{align}
where $\pi_\xi$ is the density of $\xi=\log(\bs{n})$.
Iterative methods can then be used to solve for $\bs{x}_{\rm MAP}$.
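A minimal numerical check of the transform (assuming strictly positive data and lognormal multiplicative noise, so that the log-noise $\xi$ is Gaussian; these distributional choices are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 200
Ax = 2.0 + rng.random(m)                             # positive model prediction (required)
n_mult = rng.lognormal(mean=0.0, sigma=0.1, size=m)  # positive multiplicative noise
y = n_mult * Ax                                      # purely multiplicative model

xi = np.log(y) - np.log(Ax)   # additive log-noise, equals log(n) exactly
```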
The basic idea of transforming multiplicative noise to additive noise is in principle
similar to the procedure
we propose in the current paper, except that, in this paper, the measurements are not transformed.
{\bf The AA model:} The so-called AA model was derived in \cite{Aubert2008} for the MAP estimate
under the assumption that the multiplicative noise follows a Gamma distribution and under the prior assumption
that $\bs{x}$ has bounded variations and is positive \cite{Aubert2008}.
The MAP estimate is then
computed for a forward map
$\bs{A}$
\begin{align}\label{eq: AAMAP}
\bs{x}_{\rm MAP}=\arg\min_{\bs{x}}\sum_{i=1}^n\left(L\left(\log(A_i\bs{x})+\frac{y_i}{A_i\bs{x}}\right)+\gamma\phi(x_i)\right),
\end{align}
where $A_i$ denotes the $i$th row of $\bs{A}$ and the term $\gamma\phi(x_i)$ is
induced by the total variation prior on $\bs{x}$.
The computation of the MAP estimate in this case also requires iterative methods even if
the prior on $\bs{x}$ is Gaussian.
Furthermore, the likelihood potential
is not always strictly convex although the existence of a minimiser was proven in
\cite{Aubert2008}.
{\bf The separable model:} The separable model was introduced in \cite{huang2013} and takes into account both additive and multiplicative noise. Furthermore, for several different multiplicative noise models, closed form functionals are derived to find the respective MAP estimates. Here we give a brief outline of how one can derive the posterior. In accordance with \cite{huang2013}, the separable model takes the form
\begin{align}\label{eq: sep}
\bs{y}=\bs{f}(\bs{x},\bs{n},\bs{\eta})=\bs{n}\odot\left(\bs{A}\left(\bs{x}\right)+\bs{\eta}\right).
\end{align}
The key ingredient to dealing with the separable model is the introduction of an intermediate variable,
\begin{align}
\bs{u}=\bs{A}\bs{x}+\bs{\eta}.
\end{align}
Introduction of this intermediate variable results in the posterior of interest being given by
\begin{align}\label{eq: CRposts}
\hat{\pi}_{\rm post}(\bs{u},\bs{x}|\bs{y})\propto\pi(\bs{u},\bs{x})\pi(\bs{y}|\bs{u},\bs{x})=\pi_x(\bs{x})\pi(\bs{u}|\bs{x})\pi(\bs{y}|\bs{u}).
\end{align}
Each of the conditional densities in (\ref{eq: CRposts}) can be derived similarly to how the likelihood densities were dealt with in Section \ref{sec: Noise}. Firstly,
\begin{align}
\pi(\bs{y}|\bs{u})&=\int \delta(\bs{y}-\bs{n}\odot\bs{u})\pi_n(\bs{n})\;d\bs{n}\nonumber\\&=\prod_{i=1}^m\left(\frac{1}{\abs{u_i}}\pi_{n_i}\left(\frac{y_i}{u_i}\right)\right),
\end{align}
and secondly,
\begin{align}
\pi(\bs{u}|\bs{x})&=\int \delta(\bs{u}-\bs{A}\bs{x}-\bs{\eta})\pi_\eta(\bs{\eta})\;d\bs{\eta}\nonumber\\&=\pi_\eta\left(\bs{u}-\bs{A}\bs{x}\right),\end{align}
hence the posterior can be written as
\begin{align}
\pi(\bs{u},\bs{x}|\bs{y})=\prod_{i=1}^m\left(\frac{1}{\abs{u_i}}\pi_{n_i}\left(\frac{y_i}{u_i}\right)\right)\pi_\eta\left(\bs{u}-\bs{A}\bs{x}\right)\pi_x(\bs{x}).
\end{align}
The MAP estimate $(\bs{u}_{\rm MAP},\bs{x}_{\rm MAP})$ is given in closed form for several prior densities on the multiplicative noise in \cite{huang2013}. The main drawback of this method is the lack of convexity of the functionals which need to be minimised in order to compute the MAP estimate when $\bs{u}$ is not strictly positive. Furthermore, the computation of the MAP estimate,
which again requires iterative methods irrespective of the prior model,
has to be carried out jointly over the primary unknowns and the intermediate variables.
Other methods for dealing with multiplicative noise in the denoising context include filtering type approaches such as those discussed in \cite{Garcia2011} and the use and construction of similarity measures \cite{Teuber2012}. For filtering type methods the problem is framed in the so called state-space formalism. On the other hand, approaches using similarity measures usually set values of the restored image to some weighted mean of the surrounding pixels, where the weights depend on the similarity of the pixels.
\section{Approximate marginalization of multiplicative noise}
\label{sec: BAE}
In this paper, we carry out approximate marginalization over both the additive and multiplicative noise terms.
In the inverse problems literature, this approach is referred to as the
Bayesian approximation error (BAE) approach, since the approximate marginalization is carried out over the prior distribution.
The BAE was introduced in \cite{kaipio2005,Kaipio2007} to take into account
the discrepancy between accurate and reduced order models.
Since then, the approach has been extended, for example,
to account for errors and uncertainties related to
uninteresting distributed parameters in PDEs \cite{kolehmainen2011},
errors in the geometry of the domain \cite{nissinen2011a},
unknown boundary data \cite{lehikoinen2007},
approximation of the (physical) forward map \cite{tarvainen2010},
and state estimation problems \cite{jhuttunen2007a,lehikoinen2009,Lipponen2010}.
For a more general discussion of the approach and a more extended
list of extensions, see \cite{Kaipio2013}.
Below, we adapt the approach to the context of multiplicative noise.
The goal is to embed the additive and multiplicative noise terms
in an {\it approximate} additive error only model.
With such an approximation together with linear forward and normal prior models,
the computation of the approximate MAP estimate and the approximate
posterior covariance reduces to linear algebra.
With the present observation model, we can write
\begin{align}
\bs{y}&=\bs{n}\odot\bs{A}\bs{x}+\bs{\eta}\nonumber \\
&=\bs{A}\bs{x}+\left(\bs{n}-\bs{1}\right)\odot\bs{A}\bs{x}+\bs{\eta}\nonumber \\
&=\bs{A}\bs{x}+\bs{\varepsilon}+\bs{\eta}\nonumber\\
&=\bs{A}\bs{x}+\bs{e},
\end{align}
which is an alternative and {\it exact} additive error model to use in place of (\ref{eq: fullmodel}).
Exact marginalization over $\bs{e}$ would then yield
the likelihood model $\pi(\bs{y}\,\vert\, \bs{x}) = \pi_{e\,\vert\, x}(\bs{y} -\bs{A}\bs{x}\,\vert\,\bs{x})$,
the computation of which is, however, not generally possible analytically.
In the BAE approach, at this stage,
one (usually) makes the normal approximation
$\pi_{e\vert x}(\bs{e})=\mathcal{N}(\bs{e}_{*\vert x},\bs{\Gamma}_{e\vert x})$, see,
for example, \cite{kaipio2005,Kaipio2007,Kaipio2013}.
We note, however, that some work has been carried out on retaining the full density of the errors,
$\pi_e(\bs{e})$ \cite{Calvetti2014,Calvetti2017}.
Furthermore, in theory, the full density of the approximation errors could be calculated as
the product density of $\bs{p}\odot\bs{q}$ for $\bs{p}=\bs{n}-\bs{1}$ and $\bs{q}=\bs{A}\bs{x}$,
see for example \cite{Rohatgi2015}.
\dontshow{
In the BAE approach, a further (technical) approximation
$\pi_{e\,\vert\, x}(y - Ax\,\vert\, x)\approx \pi_{e}(y - Ax)$ is sometimes carried out, see for example
\cite{kaipio2005}.
This approximation leads to the so-called enhanced error model and it is often
computationally less heavy to determine than the full BAE model.
For the obvious reason that $e = e(x)$, this further approximation is not always a feasible one,
see for example \cite{Kaipio2007}.
Below, however, we employ the enhanced error model, that is, we approximate
$\bs{e}_{*\vert x} \approx \bs{e}_{*}$ and $\bs{\Gamma}_{e\vert x} \approx \bs{\Gamma}_{e}$.
Below, we make the standard assumption that $(n,\eta,x)$ are mutually independent.
}
For the mean $\mathbb{E}(\bs{e}) = \bs{e}_\ast$, we get
\begin{align}
\bs{e}_*=\bs{\eta}_*+(\bs{n}_*-\bs{1})\odot\bs{A}\bs{x}_*.
\end{align}
In case we have, as is the standard assumption, $\mathbb{E}(\bs{n}) = \bs{n}_* = \bs{1}$ and $\mathbb{E}(\bs{\eta}) = \bs{0}$,
we also have $\bs{e}_*=\bs{0}$.
For the joint covariance matrix
\[
\bs{\Gamma}_{x,e} = \mtrx{cc}{\bs{\Gamma}_{xx} & \bs{\Gamma}_{xe} \\ \bs{\Gamma}_{ex} & \bs{\Gamma}_{ee} }
\] we have
\begin{eqnarray}
\label{eq:ApprCov}
\bs{\Gamma}_{ee} &=& \bs{\Gamma}_{\eta\eta}+\bs{\Gamma}_{nn}\odot\bs{A}\bs{\Gamma}_{xx}\bs{A}^T \\
\bs{\Gamma}_{ex} &=& \bs{\Gamma}_{\eta x}
\end{eqnarray}
due to the assumption that $\bs{n}$ is independent of both $\bs{x}$ and $\bs{\eta}$.
Note that we do not have to assume that the components of $\bs{n}$ are mutually uncorrelated,
nor that $\bs{x}$ and $\bs{\eta}$ are mutually uncorrelated.
For the conditional covariance $\Gamma_{e\vert x}$, we have
\[
\bs{\Gamma}_{e\vert x}
= \bs{\Gamma}_{\eta\eta} + \bs{\Gamma}_{nn}\odot\bs{A}\bs{\Gamma}_{xx}\bs{A}^T
- \bs{\Gamma}_{\eta x}\bs{\Gamma}_{xx}^{-1}\bs{\Gamma}_{x\eta}.
\]
If $\bs{\Gamma}_{e\vert x}$ has full rank, the approximate likelihood model can then
be written as
\[
\pi(\bs{y}\,\vert\, \bs{x}) = \pi_{e\vert x}(\bs{y} - \bs{A}\bs{x}\,\vert\, \bs{x}) \propto \exp\left( -\frac12 \left\Vert
\bs{L}_{e\vert x}\left( \bs{y} - \bs{A}\bs{x} - \bs{\eta}_\ast - \bs{\Gamma}_{\eta x}\bs{\Gamma}_{xx}^{-1}(\bs{x} - \bs{x}_\ast) \right)
\right\Vert_2^2 \right)
\]
where $\bs{L}_{e\vert x}^T \bs{L}_{e\vert x} = \bs{\Gamma}_{e\vert x}^{-1}$.
However, with the typical assumptions of i.i.d. additive and multiplicative noise and
mutual uncorrelatedness of $\bs{x}$ and $\bs{\eta}$,
we have
\begin{eqnarray}
\bs{\Gamma}_{ee}
&=& \sigma^2_\eta\bs{I}+\sigma_n^2\text{diag}\left(\bs{A}\bs{\Gamma}_{xx}\bs{A}^T\right)
\end{eqnarray}
so that we have
$\bs{\Gamma}_{e\vert x} = \bs{\Gamma}_{ee}$.
Again, if $\bs{\Gamma}_{ee}$ is full rank, we can write
$\bs{L}_e^T \bs{L}_e = \bs{\Gamma}_{ee}^{-1}$ which results in the approximate likelihood model
\[
\pi(\bs{y}\,\vert\, \bs{x}) \propto \exp\left(-\frac12 \Vert \bs{L}_e(\bs{y} - \bs{A}\bs{x}) \Vert_2^2 \right)
\]
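Under these assumptions (linear forward map, normal prior, zero-mean error term), the approximate MAP estimate and the Laplace posterior covariance follow from standard linear-Gaussian algebra. A sketch (the closed form below is standard, but it is our own spelling-out, not taken verbatim from the text):

```python
import numpy as np

def bae_map(y, A, Gamma_xx, x_star, sigma_eta, sigma_n):
    """MAP and posterior covariance for y = A x + e with
    e ~ N(0, Gamma_ee), Gamma_ee = sigma_eta^2 I + sigma_n^2 diag(A Gamma_xx A^T),
    under the prior x ~ N(x_star, Gamma_xx)."""
    m = A.shape[0]
    Gamma_ee = sigma_eta**2 * np.eye(m) \
        + sigma_n**2 * np.diag(np.diag(A @ Gamma_xx @ A.T))
    Gee_inv = np.linalg.inv(Gamma_ee)
    H = A.T @ Gee_inv @ A + np.linalg.inv(Gamma_xx)   # posterior precision
    x_map = x_star + np.linalg.solve(H, A.T @ Gee_inv @ (y - A @ x_star))
    return x_map, np.linalg.inv(H)                    # Laplace posterior covariance
```

Note that, unlike the iterative schemes above, this estimate is obtained by a single linear solve.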
Note that the structure of the covariance $\Gamma_{ee}$ is, in the general case, nontrivial
and depends also on the prior covariance $\Gamma_{xx}$, as is typical in BAE-type approaches.
As far as the authors are aware, correlated multiplicative noise has not previously been considered
in the literature.
\section{Application to deblurring}
\label{sec:DeblurringApplication}
We consider an image deblurring (deconvolution) example with three different multiplicative noise statistics:
normal, Gamma and uniform distributions.
The image used assumes both positive and negative values.
For each case, additive noise with standard deviation corresponding to
1\% of the range of the noiseless observations is added.
We also consider the same example with normal multiplicative noise that is spatially correlated.
Since the focus in this paper is on the multiplicative noise, we take the image to be uncorrelated with the
additive noise component.
\subsection{Multiplicative and additive noise models}
Without loss of generality, we set $\mathbb{E}(\bs{n})=\bs{1}$ for all cases
as is customary \cite{Aubert2008,Shi2008,Li2010}.
In this section, we take the components of $\bs{n}$ to be i.i.d. so that $\bs{\Gamma}_{nn} = \sigma_n^2\bs{I}$.
We also take the additive noise model to be $\bs{\eta}\sim\mathcal{N}(\bs{0},\sigma^2_\eta\bs{I})$.
A correlated additive noise model is straightforward to handle as in Section~\ref{sec: BAE}.
We consider three different distributions for the multiplicative noise $\bs{n}$ and scale
them so that the variances coincide.
Furthermore, in two cases, the probability $\mathbb{P}(n_i<0)$ does not vanish.
The first model for $\bs{n}$ is the iid Gamma distribution which has been
the most common model for multiplicative noise
\cite{Aubert2008,Shi2008,Durand2010},
\begin{align}
n_i\sim\Gamma(\alpha,\beta),\quad i=1,2,\dots, m
\end{align}
where $\alpha$ and $\beta$ are the shape and scale parameters, respectively.
For $\bs{n}$ such that $\mathbb{E}(\bs{n}) = \bs{1}$, we can also write
\begin{align}
n_i\sim\Gamma\left(L,\frac{1}{L}\right),\quad i=1,2,\dots, m.
\end{align}
We set $L=1$ so that $\hbox{var}(n_i) = 1$.
For the Gamma distribution, $\mathbb{P}(n_i<0) = 0$.
The second model for $\bs{n}$, which is less often considered, is the iid normal model
\begin{align}
n_i\sim\mathcal{N}(1,\sigma^2),\quad i=1,2,\dots, m,
\end{align}
where throughout the literature $\sigma\leq 0.2$ is referred to as {\it tiny noise},
see, for example, \cite{Aubert2008}.
The assumption of tiny noise is made in an attempt to avoid multiplicative noise terms becoming negative,
as discussed in more detail in Section~\ref{sec: StdApp}.
In the approach proposed in this paper, we do not need to make such an assumption and we set,
again, $\sigma = 1$,
which results in $\mathbb{P}(n_i<0)\approx 0.16$.
As the third model, we consider multiplicative noise with iid uniform distribution,
\begin{align}
n_i\sim\mathcal{U}(1-\nu,1+\nu),\quad i=1,2,\dots, m
\end{align}
and we set $\nu = \sqrt{3}$ so that, again, $\hbox{var}(n_i) = 1$,
which results in $\mathbb{P}(n_i<0)\approx 0.21$.
Draws from the three multiplicative noise models are shown in Fig.~\ref{fig: noisedraws}.
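The three models, scaled to unit mean and unit variance as above, can be sampled as follows (a sketch using NumPy's random generator):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 100_000

L = 1
n_gamma = rng.gamma(shape=L, scale=1.0 / L, size=m)   # Gamma(L, 1/L): mean 1, var 1/L
n_norm = rng.normal(loc=1.0, scale=1.0, size=m)       # N(1, 1)
nu = np.sqrt(3.0)
n_unif = rng.uniform(1.0 - nu, 1.0 + nu, size=m)      # U(1-sqrt(3), 1+sqrt(3)): mean 1, var 1

# empirical fraction of negative samples: 0 (Gamma), ~0.16 (normal), ~0.21 (uniform)
neg = [float(np.mean(n < 0)) for n in (n_gamma, n_norm, n_unif)]
```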
\begin{figure}[h!]
\includegraphics[width=16cm]{Noises.png}
\caption{Draws from the different iid multiplicative noise distributions.
Left: Gamma distribution $\Gamma(1,1)$.
Centre: normal distribution $\mathcal{N}(\bs{1},\bs{I})$.
Right: uniform distribution $\mathcal{U}(1-\sqrt{3},1+\sqrt{3})$.}
\label{fig: noisedraws}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=16cm]{pres.png}
\caption{Left: the target image $\bs{x}_{\rm true}$. Centre: the Gaussian convolution kernel $\mathcal{K}$
centred at the centre of the image.
Right: the blurred noiseless image $\bs{K}*\bs{x}$.}\label{fig: presetup}
\end{figure}
\subsection{The target, the observations and the prior model}
For all examples, we specify a $50\times50$ pixel target image shown in Fig.~\ref{fig: presetup}.
We blur the image with a symmetric Gaussian blurring kernel
\begin{align}
\mathcal{K}(s_1,s_2) = \frac{1}{2\pi\kappa^2}\exp\left(-\frac{s_1^2+s_2^2}{2\kappa^2}\right),
\end{align}
with $\kappa=5$ also shown in Fig.~\ref{fig: presetup}.
Both the image and the kernel are taken to be piecewise constant in a grid
with rectangular elements.
We take the forward operator to be the circulant convolution operator $\mathcal{K}$ \cite{Calvetti2005}
\begin{align}
\bs{y}=\bs{n}\odot \left(\bs{K}*\bs{x}\right)+\bs{\eta}=\bs{n}\odot \bs{A}\bs{x}+\bs{\eta},
\end{align}
where $\bs{K}$ is the circulant realization of the kernel $\mathcal{K}$
and, further, $\bs{A}$ is the realization of $\bs{K}$ in matrix form.
The blurred (noiseless) image $\bs{A}\bs{x}$ is also shown in Fig.~\ref{fig: presetup}.
The observations with the three different multiplicative noise models are shown in Fig.~\ref{fig: data}.
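The circulant convolution can be sketched via the FFT; the kernel normalisation and periodic boundary treatment here are illustrative assumptions, not necessarily the paper's exact implementation:

```python
import numpy as np

def gaussian_blur_circulant(x_img, kappa=5.0):
    """Periodic (circulant) convolution of an image with a Gaussian kernel."""
    n1, n2 = x_img.shape
    d1 = np.minimum(np.arange(n1), n1 - np.arange(n1))  # periodic distances
    d2 = np.minimum(np.arange(n2), n2 - np.arange(n2))
    S1, S2 = np.meshgrid(d1, d2, indexing="ij")
    K = np.exp(-(S1**2 + S2**2) / (2.0 * kappa**2))
    K /= K.sum()                                        # normalise the kernel
    # circulant convolution = pointwise product in the Fourier domain
    return np.real(np.fft.ifft2(np.fft.fft2(K) * np.fft.fft2(x_img)))
```

Since the kernel sums to one, a constant image is left unchanged by the blur.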
In this paper, we employ a normal prior model $\bs{x}\sim\mathcal{N}(\bs{x}_*,\bs{\Gamma}_x)$.
The mean of $\bs{x}$ is set to be spatially homogeneous $\mathbb{E}(\bs{x}) = \bs{x}_* = x_*\bs{1}$.
For the prior covariance matrix, we employ so-called PDE-based covariance matrices
\cite{Stuart2010,Bui-Thanh2013}.
More specifically, we take
\begin{align}\label{eq: pCov}
\bs{\Gamma}_{xx}=\left(c_1\left(c_2\bs{G}+\bs{M}\right)\right)^{-2} = \left(L_x^T L_x \right)^{-1},
\end{align}
where $c_1$ is a constant inversely proportional to the variance and $c_2$ is a constant which
controls the correlation length.
The matrix square root $L_x$ is the whitening operator of the random field
\cite{RueBook}.
The matrices $\bs{G}$ and $\bs{M}$ are the stiffness and mass matrices, respectively
\begin{align}
G_{ij}=\int_\Omega\nabla\phi_i\cdot\nabla\phi_j\;d\bs{s} \quad M_{ij}=\int_\Omega\phi_i\phi_j\;d\bs{s},\quad i,j=1,2,\dots,n.
\end{align}
The parameters are set as $c_1=10^{-1}$ and $c_2=20$ so that the range of $x$ and the correlation length
are approximately consistent with the structure of the target image, and we also set $\bs{x}_*=\bs{0}$; see Fig.~\ref{fig:PriorCov}
for the covariance function and two draws from the prior model.
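A one-dimensional finite-difference sketch of the PDE-based prior (\ref{eq: pCov}); the boundary treatment and scaling here are illustrative assumptions:

```python
import numpy as np

def pde_prior_cov_1d(n, h=1.0, c1=0.1, c2=20.0):
    """Gamma_xx = (c1 (c2 G + M))^{-2} on a 1D grid, with G a standard
    second-difference stiffness matrix and M a lumped mass matrix."""
    G = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    M = h * np.eye(n)
    Lx = c1 * (c2 * G + M)         # whitening operator L_x (symmetric here)
    return np.linalg.inv(Lx @ Lx)  # (L_x^T L_x)^{-1} since L_x is symmetric
```

Prior draws can then be obtained as `np.linalg.cholesky(Gamma) @ rng.standard_normal(n)`.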
\begin{figure}[t!]
\includegraphics[width=16cm]{priorCF_draw.png}
\caption{Left: The correlation function induced by the PDE based model with
$c_1=10^{-1}$ and $c_2=20$.
Center and right: two draws from the prior model.}\label{fig:PriorCov}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=16cm]{dats.png}
\caption{Data corrupted by multiplicative noise generated from left: Gamma, centre: Normal, and right: Uniform distributions.}\label{fig: data}
\end{figure}
\subsection{Reconstructions with spatially uncorrelated multiplicative noise}
The reconstructions computed using the proposed approximation,
denoted $\bs{x}_{\rm MAP}^{\rm gamma}$, $\bs{x}_{\rm MAP}^{\rm normal}$,
and $\bs{x}_{\rm MAP}^{\rm uniform}$
are shown in Fig.~\ref{fig: baeMAPS}.
Furthermore, in the bottom row of Figure \ref{fig: baeMAPS} we show the estimates and
posterior confidence intervals along the cross section shown in the images in the top row.
We see that embedding the multiplicative noise into the additive error leads to feasible results
in the sense that the actual target is supported by the approximate MAP $ \pm 3\sqrt{\bs{\Gamma}_{x\vert y}(k,k)}$
intervals.
It is noteworthy that, in the case of the normal and uniform multiplicative noise distributions,
$\bs{n}$ exhibits negative samples.
Clearly, this does not constitute a problem for the proposed approach.
The feasibility of the posterior estimates is similar with all three distributions of the multiplicative noise.
The fact that the estimates obtained here are fairly smooth in comparison to the true image is due to the use
of a Gaussian smoothness prior.
To reconstruct the sharp edges one could employ a TV type prior.
Even with a normal approximation for the posterior,
this would result in the need for iterative methods to compute the MAP estimate.
\begin{figure}[h!]
\includegraphics[width=16cm]{bae_maps.png}
\caption{Top row.
The MAP estimates attained by using the BAE approach with different iid multiplicative noise models.
Left: Gamma, Centre: normal and Right: uniform noise models.
Bottom row.
Cross sections of the actual target and reconstructions with approximate MAP $\pm 3\sqrt{\bs{\Gamma}_{x\vert y}(k,k)}$
intervals along the lines in the top row reconstructions.}\label{fig: baeMAPS}
\end{figure}
\subsection{Reconstructions with spatially correlated multiplicative noise}
The derivations of the existing methods for handling multiplicative noise are largely based on the assumed iid property.
In the proposed approach, such an assumption need not be made, as indicated by the approximate
joint covariance $\bs{\Gamma}_{x,e}$ in
Section~\ref{sec: BAE}.
With deblurring problems such as the present example, it is clear that as the spatial correlation
structure of the multiplicative noise gets more complicated, we can expect the actual estimation errors to increase.
This can be expected, in particular, with noise distributions with positive spatial correlation and
increasing correlation length.
In this section, we only consider normal multiplicative noise.
We generate three distributions with different spatial decay rates.
The traces of the multiplicative noise covariances are the same as in the spatially uncorrelated case.
Furthermore, the variance of the (spatially uncorrelated) additive noise is the same as in the previous case.
The respective correlation functions and draws from these distributions are shown in Fig.~\ref{fig:CorrNoise}.
The respective observations are shown in Fig.~\ref{fig:CorrNoiseData}.
The approximate MAP estimates and the posterior $\pm3$ STD intervals are shown in Fig.~\ref{fig:CorrNoiseRes}.
The estimates are, again, feasible with respect to the posterior error intervals.
The error estimates are larger than in the case of spatially uncorrelated multiplicative noise,
which was expected.
As was also expected, the error estimates increase with decreasing decay rate of spatial correlation of the
multiplicative noise.
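For concreteness, the data generation described above can be sketched as follows; the grid size, noise levels and the exponential correlation model below are illustrative assumptions, not the exact settings of the experiments. Note that the trace of the multiplicative noise covariance is independent of the correlation length, in line with the setup above.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 64                          # illustrative number of measurement points
idx = np.arange(m)

def exp_cov(std, corr_len):
    """Exponential covariance std^2 * exp(-|i-j| / corr_len) on a 1D grid."""
    d = np.abs(idx[:, None] - idx[None, :])
    return std**2 * np.exp(-d / corr_len)

Ax = np.ones(m)                 # stand-in for the noiseless blurred target A x
sigma_add = 0.01                # additive noise STD (illustrative)

observations = []
for corr_len in (1.0, 5.0, 20.0):        # three spatial decay rates
    Gamma_e = exp_cov(0.1, corr_len)
    e = rng.multivariate_normal(np.zeros(m), Gamma_e)   # multiplicative noise
    n = sigma_add * rng.standard_normal(m)              # additive noise
    observations.append((1.0 + e) * Ax + n)             # y = (1 + e)(A x) + n
```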
\begin{figure}[h!]
\includegraphics[width=16cm]{CorrNoise.png}
\caption{Top row.
Spatial correlation models for the multiplicative noise with different spatial decay rates.
Bottom row.
Draws from the respective models.}\label{fig:CorrNoise}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=16cm]{CorrNoiseData.png}
\caption{The observations with the three different spatially correlated multiplicative noise models
shown in Fig.~\ref{fig:CorrNoise}.}\label{fig:CorrNoiseData}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=16cm]{CorrNoiseRes.png}
\caption{Top row.
The MAP estimates attained by using the BAE approach with different
spatially correlated multiplicative noise models shown in Fig.~\ref{fig:CorrNoise}.
Left to right: the three spatial correlation models in the order shown in Fig.~\ref{fig:CorrNoise}.
Bottom row.
Cross sections of the actual target and reconstructions with approximate MAP $\pm 3\sqrt{\bs{\Gamma}_{x\vert y}(k,k)}$
intervals along the lines in the top row reconstructions.}
\label{fig:CorrNoiseRes}
\end{figure}
\section{Conclusion}
In this paper, we proposed an approach to approximate (linear) inverse problems
corrupted by both additive and multiplicative noise with an additive noise model.
The approximate additive noise model is constructed by (approximate) marginalization over the
discrepancy between the model predictions of the original and the approximate model,
which is referred to as the Bayesian approximation error (BAE) approach.
The resulting additive noise term is then approximated with a normal.
The covariance of this term is nontrivial and depends on the prior covariance.
The computation of the approximate MAP estimate does not suffer from convexity-related problems other
than those arising from the forward map.
The approach does not need the multiplicative noise to be uncorrelated.
The results in this paper are, however, based on the mutual independence of the primary unknown
and the multiplicative noise.
This mutual independence is nevertheless not essential for the proposed approach.
Without it, the computation of the related joint covariance of the modified additive noise and the
primary unknown involves rather tedious mappings of general fourth order statistics (kurtosis).
We considered numerical examples with different multiplicative noise distributions related to
an image processing deconvolution problem with both additive and multiplicative noise.
The results show that the approximation is feasible in the sense that the posterior error estimates support the
actual target image.
Furthermore, the results remained feasible even when the multiplicative noise was highly spatially correlated.
\dontshow{However, the approach outlined could be used for many other multiplicative noise models, such as Rayleigh distributed multiplicative noise. Moreover, for any distribution placed on the multiplicative noise, the resulting functional to be minimised in order to calculate the MAP estimate is quadratic, avoiding all of the convexity issues which are apparent with other methods equipped to deal with multiplicative noise.
Two distinct ideas have yet to be addressed. Firstly, in some cases the product density referred to in Section \ref{sec: BAE} can be calculated in closed form, so it may not be too difficult to drop the Gaussian approximation on the updated additive error term in favour of retaining the true density. Secondly, the methods outlined in this paper do not require a Gaussian prior to be placed on the parameter of interest, hence it would be of interest to apply these methods with a TV-type prior density.
}
\bibliographystyle{siam}
\section{Introduction and Main Results}
In this paper, we consider the derivative nonlinear Schr\"{o}dinger equation with periodic boundary conditions
\begin{equation}\label{1.1}
\mathbf{i}u_t+u_{xx}+\mathbf{i}\Big(f(x,u,\bar{u})
\Big)_x=0,\quad x\in\mathbb{T},\end{equation}
where $f$ is an analytic function of the form
\begin{equation}f(x,u,\bar{u})=
\mu|u|^2u+f_{\geq4}(x,u,\bar{u}),\quad 0\neq\mu\in\mathbb{R},\end{equation}
and $f_{\geq4}(x,u,\bar{u})$ denotes terms of order at least four in $u,\bar{u}$. Moreover, we require
\begin{equation}f_{\geq4}(x,u,\bar{u})=
\frac{\partial F_{\geq5}}{\partial{\bar{u}}}(x,u,\bar{u}),\end{equation}
such that \eqref{1.1} can be viewed as a Hamiltonian system, where $F_{\geq5}(x,u,\bar{u})$ is a real analytic function of order at least five in $u,\bar{u}$. As in \cite{Poschel3}, we may assume $\mu=1$. Then the equation \eqref{1.1} can be regarded as
a perturbation of the following equation:
\begin{equation}\label{1.2}\mathbf{i}u_t+u_{xx}+
\mathbf{i}(|u|^2u)_x=0,\end{equation}
which appears in various physical applications and has been widely studied in the literature.
We study \eqref{1.1} as a Hamiltonian system on some suitable phase space $\mathcal{P}$; for example, we may take $H_0^2(\mathbb{T})$, the usual Sobolev space on $\mathbb{T}$ with vanishing average. Under the standard inner product on $L^2(\mathbb{T})$, \eqref{1.1} can be written in the form
\begin{equation}\label{1.3}\frac{\partial u}{\partial t}=-\frac{d}{dx}\frac{\partial H}{\partial \bar{u}}\end{equation}
with the real analytic Hamiltonian
\begin{equation}\label{1.4}H=-\mathbf{i}
\int_{\mathbb{T}}u_x\bar{u}dx+
\frac12\int_{\mathbb{T}}|u|^4dx+
\int_{\mathbb{T}}F_{\geq5}(x,u,\bar{u})dx.\end{equation}
We will construct Cantor families of time quasi-periodic solutions of small amplitude.
A result of similar form was first obtained in
\cite{Poschel3} by Kuksin and P\"{o}schel for the nonlinear
Schr\"{o}dinger equation with Dirichlet boundary conditions
\begin{equation*}
{\mi}u_t=u_{xx}-mu-f(|u|^2)u,\hspace{12pt}u(t,0)=0=u(t,\pi),
\end{equation*}
where $m$ is real, $f$ is real analytic in some neighborhood of the
origin in $\mathbb{C}$, $f(0)=0$, and $f'(0)\neq0$. For convenience,
we follow the notation and terminology of \cite{Poschel3}. Let
$$\phi_j(x)=\frac{1}{\sqrt{2\pi}}e^{{\mi}jx},
\hspace{12pt}j\in\bar{\mathbb{Z}}:
=\mathbb{Z}\setminus\{0\}$$
be the basic modes. For every index set
$$J=\{j_1<j_2<\cdots<j_n\}\subset\bar{\mathbb{Z}},$$
denote by $E_J$ the linear subspace of complex dimension $n$
which is completely foliated into rotational tori
$$E_J=\{u=q_1\phi_{j_1}+\cdots+q_n\phi_{j_n}
:q\in\mathbb{C}^n\}=\bigcup_{I\in\overline{\mathbb{P}^n}}
\mathcal{T}_J(I),$$
where $\mathbb{P}^n=\{I\in\mathbb{R}^n:I_b>0,\ 1\leq b\leq n\}$ is
the positive quadrant in $\mathbb{R}^n$ and
$$\mathcal{T}_J(I)=\{u=q_1\phi_{j_1}+\cdots+q_n\phi_{j_n}
:|q_b|^2=I_b,\ 1\leq b\leq n\}.$$
The
following is our result for (\ref{1.1}):
\begin{thm}\label{1.1}
For any integer $n\geq2$ and index set
$J=\{j_1<j_2<\cdots<j_n\}\subset\bar{\mathbb{Z}}
$ satisfying
\begin{equation}\label{18.3.31.2}
2n-1\nmid\sum_{b=1}^nj_b\end{equation}
and additionally $j_1j_2<0$ if $n=2$,
there exist:
\begin{itemize}
\item[{(1)}] a Cantor set $\mathcal{C}\subset\mathbb{P}^n$ with full
density at the origin;
\item[{(2)}] a Lipschitz embedding
$\Psi:\mathcal{T}_J[\mathcal{C}]\hookrightarrow\mathcal{P}$, which
is a higher order perturbation of the inclusion mapping
$\Psi_0:E_J
\hookrightarrow\mathcal{P}$ restricted to
$\mathcal{T}_J[\mathcal{C}]$,
where
$\mathcal{T}_J[\mathcal{C}]:=\bigcup_{I\in
\mathcal{C}}\mathcal{T}_J(I)\subset{E_J}$
is a family of $n$-tori over $\mathcal{C}$,
\end{itemize}
such that the image
$\mathcal{E}_J:=\Psi\big(\mathcal{T}_J[\mathcal{C}]\big)$ is a
Cantor manifold of diophantine $n$-tori for the derivative nonlinear
Schr\"{o}dinger equation (\ref{1.1}). Moreover, the restriction
of $\Psi$ to each torus $\mathcal{T}_J(I)$, $I\in\mathcal{C}$ is
smooth, and $\mathcal{E}_J$ has a tangent space at the origin equal
to $E_J$.
\end{thm}
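The arithmetic conditions on the index set $J$ in Theorem \ref{1.1} are easy to check mechanically; the following illustrative sketch (not part of the paper) tests them for a candidate $J$.

```python
def admissible(J):
    """Check the hypotheses on J = {j_1 < ... < j_n} in Theorem 1.1:
    nonzero distinct integers, (2n-1) does not divide sum(J),
    and additionally j_1 * j_2 < 0 when n == 2."""
    J = sorted(J)
    n = len(J)
    if n < 2 or 0 in J or len(set(J)) != n:
        return False
    if sum(J) % (2 * n - 1) == 0:      # condition (18.3.31.2) fails
        return False
    if n == 2 and J[0] * J[1] >= 0:    # extra condition for n = 2
        return False
    return True
```

For instance, the set $J=\{-1,1\}$ is excluded: here $\sum_b j_b=0$ is divisible by $2n-1=3$.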
The above theorem is proved by using infinite dimensional KAM theory.
Historically, KAM theory for partial differential equations originated with Kuksin \cite{Kuksin4} and Wayne \cite{Wayne}, who studied one dimensional nonlinear wave and Schr\"{o}dinger equations under Dirichlet boundary conditions. Owing to the special boundary conditions and the absence of
spatial derivatives in the nonlinearity, the corresponding infinite dimensional Hamiltonian systems have simple normal frequencies and bounded perturbations. Infinite dimensional KAM theory for bounded perturbations has since been investigated in depth, covering both simple and multiple normal frequencies. See
\cite{Berti7,Berti4,Bourgain1,Bourgain2,
CYou,Criag-Wayne,Eliasson1,Eliasson,
GY,GY1,Gre1,Gre,K4,K5,Poschel3,procesi,P,P2,Y,Y3} for example.
(We cannot list all papers in this field.)
Since the nonlinearity
of (\ref{1.1}) contains a spatial derivative, a suitable KAM theorem for unbounded perturbation vector fields is required for our result. The first KAM
theorem for unbounded perturbations is due to Kuksin \cite{Kuksin3,Kuksin1}. Under the assumption $0<\delta<d-1$ (where $d$ is the order
of the linear vector field and $\delta$ is the order of the perturbation vector field), a suitable estimate, now called Kuksin's lemma, is proved for the small-denominator equation with a large variable coefficient. Using this estimate, a
KAM theorem is established that proves the persistence of the finite-gap solutions of the KdV equation. For the case $0<\delta<d-1$, see also \cite{KP} for perturbed KdV equations by Kappeler and P\"{o}schel, and \cite{Bambusi} for a class of time dependent Schr\"{o}dinger operators by Bambusi and Graffi. More recently, KAM theory for unbounded
perturbations has been extended by Liu-Yuan \cite{L-Y2,L-Y1}. The small-denominator equation
with a large variable coefficient is suitably estimated in the limiting case $0<\delta=d-1$, and consequently
the corresponding KAM theorems are established, with applications to the quantum Duffing oscillator and to derivative nonlinear Schr\"{o}dinger and Benjamin-Ono equations. See also Zhang-Gao-Yuan \cite{Zhang} for reversible derivative nonlinear Schr\"{o}dinger equations.
For the case $\delta>d-1$, both Kuksin's lemma and Liu-Yuan's estimate are invalid, and thus there is no general KAM theorem in which the perturbation satisfies only a smallness condition.
However, this does not mean that nothing is known for partial differential equations whose nonlinearity contains higher order space derivatives. In fact, great progress has recently been made on quasi-linear and even fully nonlinear partial differential equations. See Baldi-Berti-Montalto \cite{Baldi2,Baldi1,Baldi3} for KdV and mKdV equations, Berti-Montalto \cite{Berti5} for water wave equations, Feola-Procesi \cite{Feola} for reversible derivative nonlinear Schr\"{o}dinger equations, and Montalto \cite{Montalto} for the Kirchhoff equation.
The idea of the proof is to use pseudo-differential calculus in order to conjugate the original system to a system with a smoothing perturbation and then to apply KAM theory.
Also see Bambusi \cite{Bambusi1,Bambusi2}
for the reducibility of Schr\"{o}dinger operators.
Moreover, derivative nonlinear wave equations were studied by Berti-Biasco-Procesi \cite{Berti2,Berti1}. The idea is to prove
first order asymptotic expansions of the perturbed normal frequencies by exploiting the quasi-T\"{o}plitz property, and to penalize high-momentum terms by introducing weighted norms.
On the other hand, both Kuksin's lemma and Liu-Yuan's estimate are only valid for scalar homological equations, which requires the normal frequencies to be simple, that is, $\Omega_j^\sharp=1$. Hence the previous KAM theorems for unbounded perturbations apply only to
PDEs with simple normal frequencies. This precludes the derivative
nonlinear Schr\"{o}dinger equation (\ref{1.1}) with periodic
boundary conditions, whose normal frequencies have multiplicity $\Omega_j^\sharp=2$.
For \eqref{1.1} with a gauge invariant nonlinearity that does not contain $x$ explicitly, see Liu-Yuan \cite{L-Y}. There the above difficulty was avoided by using momentum conservation, which guarantees that the indices of the monomials
\begin{equation*}\label{18.3.28.3}e^{\mathbf{i}
k_1x_1+\cdots+\mathbf{i}k_nx_n}y_1^{m_1}\cdots y_n^{m_n}\prod_{j\in\bar{\mathbb{Z}}\setminus J}
z_j^{l_j}\bar{z}_j^{\bar{l}_j}\end{equation*}
satisfy
\begin{equation}\label{18.3.28.2}\sum_{1\leq b\leq n}k_bj_b
+\sum_{j\in\bar{\mathbb{Z}}\setminus J}(l_j-\bar{l}_j)j=0.\end{equation}
Consider the most difficult terms in the KAM iteration scheme: $e^{\mathbf{i}k\cdot x}z_i\bar{z}_j$, $k\in\mathbb{Z}^n$, $i,j\in\bar{\mathbb{Z}}
\setminus J$, where $k\cdot x=k_1x_1+\cdots+k_nx_n$.
Following Bourgain's observation in \cite{Bourgain3},
the restriction \eqref{18.3.28.2} implies that $|i|+|j|$ is controlled by $|k|$ unless $i=j$. Hence, for a fixed $k$, all the nearly resonant terms except $e^{\mathbf{i}k\cdot x}z_j\bar{z}_j$ can be eliminated. As a result, only the terms $e^{\mathbf{i}k\cdot x}z_j\bar{z}_j$ remain in the normal form. The homological equations are then scalar, and the estimate in \cite{L-Y2} for the small-denominator equation with large variable coefficients still works.
In the present paper, the nonlinearity contains the space variable $x$ explicitly, so \eqref{18.3.28.2} no longer holds, and the terms $e^{\mathbf{i}k\cdot x}z_{-j}\bar{z}_{j}$ become difficult to handle. Fortunately, a key observation for \eqref{1.1} provides a way to solve this problem.
After transforming to a Birkhoff normal form of order four, introducing action-angle coordinates and choosing parameters properly, the normal frequencies $\Omega_j$ take the form (see \eqref{17.10.19.2})
\begin{equation}\label{18.3.28.4}\Omega_j=j^2+cj,\quad j\in\bar{\mathbb{Z}}\setminus J,\end{equation}
where
\begin{equation}\label{18.4.1.1}
c=\frac{2\sum_{b=1}^n\xi_b}{2n-1}.
\end{equation}
This formally indicates that $\Omega_{-j}$ and $\Omega_j$ do not coincide, and even $|\Omega_{-j}-\Omega_j|\rightarrow\infty$ as $j\rightarrow\infty$. Also see \cite{Liu}
for this observation.
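To make this explicit, a one-line computation from \eqref{18.3.28.4} gives
\begin{equation*}
\Omega_{j}-\Omega_{-j}=(j^2+cj)-\big((-j)^2+c(-j)\big)=2cj,
\end{equation*}
so that $|\Omega_{-j}-\Omega_j|=2|c|\,|j|\rightarrow\infty$ as $|j|\rightarrow\infty$ whenever $c\neq0$.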
However, the problem is more complicated than it appears because of two features of $c$: first, in view of \eqref{18.4.1.1}, \eqref{18.3.31.1} and \eqref{18.4.1.2},
\begin{equation}c\approx\epsilon^{\frac67},\end{equation}
which indicates that $c$ is rather small;
second, $c$ depends on the parameters with Lipschitz semi-norm
\begin{equation}|c|^{lip}=\frac{2n}{2n-1},\end{equation}
which is not small. In what follows we describe, for each of these two features, the corresponding difficulties and the methods used to overcome them.
In order to eliminate the terms $e^{\mathbf{i}k\cdot x}z_{-j}\bar{z}_{j}$, $k\in\mathbb{Z}^n$, $\pm j\in\bar{\mathbb{Z}}\setminus J$, the corresponding small divisors are
\begin{equation}\label{18.4.1.3}\langle k,\omega\rangle+\Omega_{-j}-\Omega_j.\end{equation}
In the usual method of measure estimates, the nearly resonant divisors are roughly determined by
\begin{equation}|\Omega_{-j}-
\Omega_j|<|k||\omega|_{\mathcal{O}},\end{equation}
so that, for a fixed $k$, the number of small divisors is of order
\begin{equation}\frac{|k||\omega|_{\mathcal{O}}}{2c},
\end{equation}
which is out of control when $c$ is too small. Thus, an important feature of Theorem \ref{thm7.12.1} is that
$|\Omega_{-j}-\Omega_j|$ is not treated as part of the usual ``frequency asymptotics'' in assumption (A), but as a ``small divisor'' in assumption (C).
In this sense, Theorem \ref{thm7.12.1} cannot be viewed as a usual unbounded KAM theorem with simple normal frequencies.
In order to control the number of small divisors \eqref{18.4.1.3}, we introduce a momentum majorant norm (see \eqref{2.1.16}). At the $\nu$-th KAM step, we only eliminate the terms $e^{\mathbf{i}k\cdot x}z_{-j}\bar{z}_{j}$ with lower momentum, that is, roughly,
\begin{equation}|\sum_{1\leq b\leq n}k_bj_b
-2j|<|\ln\epsilon_{\nu+1}|.\end{equation}
Therefore, for a fixed $k$, the number of small divisors is of order
\begin{equation}\label{18.4.1.4}|k|\max_{1\leq b\leq n}|j_b|+|\ln\epsilon_{\nu+1}|,\end{equation}
which is acceptable in the KAM iteration. Consequently, the terms $e^{\mathbf{i}k\cdot x}z_{-j}\bar{z}_{j}$ with lower momentum can be eliminated, while those with higher momentum are put into the perturbation. As a result, only the terms $e^{\mathbf{i}k\cdot x}z_{j}\bar{z}_{j}$ remain in the normal form, and the homological equations are scalar.
Nevertheless, owing to the momentum majorant norm, the estimate in \cite{L-Y2} for the small-denominator equation with a large coefficient does not apply directly.
By properly designing the momentum weight and the analyticity width, the momentum majorant norm and the sup-norm of a function can be controlled in terms of each other
(see \eqref{17.11.3.2}, \eqref{17.11.4.2}). We thus obtain a lemma which is Theorem 1.4 of \cite{L-Y2} with the sup-norm estimate replaced by a momentum majorant norm estimate (see Lemma \ref{lem17.7.31.1}).
Recall the second feature of $c$ mentioned above: $c$ depends on the parameters with $|c|^{lip}$ not small. Then the Lipschitz semi-norm $|\Omega|^{lip}_{-1}$ is not small, and thus we cannot obtain the twist of $\langle k,\omega(\xi)\rangle+\langle l,\Omega(\xi)\rangle$ from the Lipschitz continuity of $\omega$ in both directions as usual. To overcome this problem, we add assumption (C) to the KAM theorem.
We must therefore verify that assumption (C) is preserved under the KAM iteration, and that assumption (C) is satisfied for the derivative nonlinear Schr\"{o}dinger equation \eqref{1.1}.
The former is relatively straightforward, while the latter is rather involved, and some restrictions on the index set $J$ (see \eqref{18.3.31.2}, and additionally $j_1j_2<0$ if $n=2$) are necessary. As a counterexample, taking $n=2$, $j_1=-1$, $j_2=1$, the small divisor vanishes identically:
\begin{equation}4\omega_1-4\omega_2+\Omega_3-\Omega_{-3}
\equiv0.
\end{equation}
Of course, the above restrictions on $J$ could be made more flexible with further discussion.
We now give an outline of the present paper.
In Section 2, we introduce the momentum majorant norm of a vector field and then formulate the KAM theorem. In our theorem,
the Lipschitz semi-norm of $\omega^{-1}$ is not required as usual; moreover, assumption (C) is added, which states that for any fixed $k\in\mathbb{Z}^n$ and $|l|\leq2$ the small divisor
$\langle k,\omega\rangle+\langle l,\Omega\rangle$ is either large by itself or has a large twist.
Section 3 contains the proof of Theorem \ref{1.1}.
We transform the Hamiltonian into a partial Birkhoff normal form up to order four with estimates in the momentum majorant norm; we then introduce action-angle coordinates for the tangential variables and extract parameters by amplitude-frequency modulation; finally, Theorem \ref{1.1} is obtained by applying the KAM theorem. Considerable effort is devoted to verifying assumption (C) of the KAM theorem; see the proof of Lemma \ref{lem18.3.27.1}, where the condition
$j_1j_2<0$ for $n=2$ is used in subcase 2.2
and the condition $2n-1\nmid\sum_{b=1}^nj_b$ is used in subcase 2.3.
In Sections 4-7, the KAM theorem is proved. In Section 4, we derive the homological equations and prove two lemmas (Lemma \ref{lem17.7.31.1}, Lemma \ref{lem17.9.22.1}) for solving them.
For $k\in\mathbb{Z}^n$, $i\neq \pm j$, the homological equations are solved in the same way as in \cite{L-Y1};
for $k\in\mathbb{Z}^n$, $i=-j$, the lower momentum terms are eliminated by Lemma \ref{lem17.7.31.1} and the higher momentum terms are left in the perturbation. In Section 5, the new Hamiltonian, including the normal form and the perturbation, is estimated. In Section 6, by choosing the iterative parameters properly, we prove the iterative lemma and the convergence. There the transformation and its derivative are estimated in the sup-norm as usual.
In Section 7, the measure of the excluded parameters is estimated, by Lemma
\ref{lem18.4.1.2} for $k\in\mathbb{Z}^n$, $i=-j$ and by Lemma \ref{lem18.4.1.3} for the other cases. There we design $\alpha_{2,\nu}\rightarrow0$ because the bound \eqref{18.4.1.4} tends to $\infty$ as $\nu\rightarrow\infty$.
Section 8 contains several lemmas: Lemma \ref{lem18.4.1.1} provides three elementary inequalities frequently used in this paper; Lemma \ref{lem18.3.19.1} shows that the $|\cdot|_{s,\tau+1}$ norm of a function is controlled by its momentum majorant norm;
Lemma \ref{lem17.9.7.1} relates the momentum majorant norm of a vector field to those of its components; Lemma \ref{lem17.12.22.1}
estimates the momentum majorant norm of
the commutator of two vector fields;
Lemma \ref{lem17.12.23.1} estimates the momentum majorant norm of the transformed Hamiltonian vector field.
Finally, we remark that the higher order nonlinearity $f_{\geq4}(x,u,\bar{u})$
contributes only to the perturbation. In fact, by using the same KAM theorem, Cantor families of quasi-periodic solutions can be obtained for \eqref{1.2} with more general perturbations of the form $\mathbf{i}\frac{d}{dx}\frac{\partial K}{\partial{\bar{u}}}$, where $K$ is a real analytic Hamiltonian with $\frac{\partial K}{\partial{\bar{u}}}$ bounded and satisfying some further conditions. Of course, the above procedure fails for \eqref{1.2} with quasi-linear or fully nonlinear perturbations.
We hope to lower the order of the derivatives with the help of the ideas in \cite{Baldi1}, so that the above procedure becomes valid with some modifications.
\section{A KAM Theorem}
Recall the index set $J=\{j_1<\cdots<j_n\}\subset\bar{\mathbb{Z}}$,
denote $\mathbb{Z}_*:=\bar{\mathbb{Z}}\setminus J$. For $a,p\in\mathbb{R}$, we define the Hilbert space $\ell^{a,p}_J$ of all complex sequences $z=(z_j)_{j\in\mathbb{Z}_*}$ with $$||z||_{a,p}^2=\sum_{j\in\mathbb{Z}_*}e^{2a|j|}|j|^{2p}|z_j|^2<\infty.$$
We consider the direct product
\begin{equation}\label{17.9.18.11}\mathcal{P}^{a,p}:=\mathbb{C}^n\times\mathbb{C}^n\times
\ell^{a,p}_J\times\ell^{a,p}_J\end{equation}
endowed with $(s,r)$-weighted norm
\begin{equation}\label{17.9.18.1}v=(x,y,z,\bar{z})\in\mathcal{P}^{a,p},\quad ||v||_{s,r,p}=\frac{|x|}{s}+\frac{|y|_1}{r^2}+
\frac{||z||_{a,p}}{r}+
\frac{||\bar{z}||_{a,p}}{r},\end{equation}
where $0<s,r<1$, $|x|:=\max_{1\leq h\leq n}|x_h|$ and $|y|_1:=\sum_{h=1}^n|y_h|$. Throughout this paper the parameter $a$ is fixed, and we therefore drop it from the notation $||\cdot||_{s,r,p}$. Note that $z$ and $\bar{z}$ are independent variables. As phase space, we consider the toroidal domain
\begin{equation}\label{17.9.19.1}D(s,r):=\mathbb{T}^n_s\times D(r):=\mathbb{T}^n_s\times B_{r^2}\times B_r\times B_r\subset\mathcal{P}^{a,p},\end{equation}
where
$\mathbb{T}^n_s:=\{x\in\mathbb{C}^n:\mbox{Re}x
\in\mathbb{T}^n:=\mathbb{R}^n/2\pi\mathbb{Z}^n,
\max_{1\leq h\leq n }|\mbox{Im}\ x_h|<s\}, B_{r^2}:=\{y\in\mathbb{C}^n:
|y|_1<r^2\}$
and $B_r\subset\ell_J^{a,p}$ is the open ball of radius $r$ centered at zero.
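As an illustrative sketch (not part of the paper), the sequence norm $||z||_{a,p}$ and the $(s,r)$-weighted norm \eqref{17.9.18.1} can be computed for finitely supported sequences as follows; the helper names are ours.

```python
import math

def seq_norm(z, a, p):
    """||z||_{a,p} = sqrt( sum_j e^{2a|j|} |j|^{2p} |z_j|^2 ) for a finitely
    supported sequence z given as a dict {j: z_j}, j in Z \ {0}."""
    return math.sqrt(sum(math.exp(2 * a * abs(j)) * abs(j)**(2 * p) * abs(zj)**2
                         for j, zj in z.items()))

def weighted_norm(x, y, z, zbar, s, r, a, p):
    """(s,r)-weighted norm (17.9.18.1):
    |x|_inf / s + |y|_1 / r^2 + ||z||_{a,p} / r + ||zbar||_{a,p} / r."""
    return (max(abs(xh) for xh in x) / s
            + sum(abs(yh) for yh in y) / r**2
            + seq_norm(z, a, p) / r
            + seq_norm(zbar, a, p) / r)
```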
For $q\in\mathbb{R}$, we consider vector fields $X:D(s,r)\rightarrow\mathcal{P}^{a,q}$ of the form
\begin{equation}\label{2.1.2}X(v)=(X^{(\tilde{x})}(v),X^{(\tilde{y})}(v),
X^{(\tilde{z})}(v),X^{(\bar{{\tilde z}})}(v))\in \mathcal{P}^{a,q},\end{equation}
where $v\in D(s,r)$ and $X^{(\tilde{x})}(v), X^{(\tilde{y})}(v)\in\mathbb{C}^n$, $X^{(\tilde{z})}(v),X^{(\bar{{\tilde z}})}(v)\in\ell_J^{a,q}$. Denote
\begin{equation}V:=\{\tilde{x}_1,\cdots,\tilde{x}_n,\tilde{y}_1,\cdots, \tilde{y}_n,
\cdots,\tilde{z}_j,\cdots,\bar{{\tilde z}}_j,\cdots\},\quad j\in\mathbb{Z}_*.\end{equation}
We can write $X(v)=(X^{(\mathbf{v})}(v))_{\mathbf{v}\in V}$, where each component is a formal scalar power series
\begin{equation}\label{17.7.21.17}X^{(\mathbf{v})}(v)=\sum_{(k,i,\alpha,\beta)\in\mathbb{I}}
X^{(\mathbf{v})}_{k,i,\alpha,\beta}e^{\mathbf{i}k\cdot x}y^{i}z^{\alpha}\bar{z}^{\beta}\end{equation}
with coefficients $X^{(\mathbf{v})}_{k,i,\alpha,\beta}\in\mathbb{C}$ and multi-indices in
\begin{equation}\label{2.1.6}\mathbb{I}:=\mathbb{Z}^n
\times\mathbb{N}^n\times
\mathbb{N}^{\mathbb{Z}_*}\times\mathbb{N}^{\mathbb{Z}_*},\end{equation}
where $\mathbb{N}^{\mathbb{Z}_*}:=\{\alpha=(\alpha_j)_{j\in\mathbb{Z}_*}\in
\mathbb{N}^{\mathbb{Z}_*}\ \mbox{with}\ |\alpha|=\sum_{j\in\mathbb{Z}_*}\alpha_j<\infty\}$. In \eqref{2.1.6}, we use the standard multi-index notation $z^{\alpha}\bar{z}^{\beta}:=\prod_{j\in\mathbb{Z}_*}z_j^{\alpha_j}
\bar{z}_j^{\beta_j}$. The formal vector field $X$ is absolutely convergent in $\mathcal{P}^{a,q}$ (with norm \eqref{17.9.18.1} for $q$ instead of $p$) at $v\in D(s,r)$ if every component $X^{(\mathbf{v})}(v),\mathbf{v}\in V$ is absolutely convergent and $||(X^{(\mathbf{v})}(v))_{\mathbf{v}\in V}||_{s,r,q}<+\infty$.
We also use the differential geometry notation
\begin{equation}\label{17.9.18.2}X(v)=\sum_{\mathbf{v}\in
V}X^{(\mathbf{v})}\partial_{\mathbf{v}}=\sum_{\mathbf{v}\in V}\sum_{(k,i,\alpha,\beta)\in\mathbb{I}}X^{(\mathbf{v})}_{k,i,\alpha,\beta}
e^{\mathbf{i}k\cdot x}
y^iz^{\alpha}\bar{z}^{\beta}\partial_{\mathbf{v}}.\end{equation}
For a scalar monomial $e^{\mathbf{i}k\cdot x}y^iz^{\alpha}\bar{z}^{\beta}$, we define its momentum as
\begin{equation}\label{2.1.14}\pi(k,\alpha,\beta)=\sum_{b=1}^nk_bj_b+
\sum_{j\in\mathbb{Z}_*}(\alpha_j-\beta_j)j,\end{equation}
and for a vector field monomial $e^{\mathbf{i}k\cdot x}
y^iz^{\alpha}\bar{z}^{\beta}\partial_{\mathbf{v}}$,
we define its momentum as
\begin{equation}\label{2.1.13}\pi(k,\alpha,\beta;\mathbf{v}):=\begin{cases}
\pi(k,\alpha,\beta),&\mbox{if}\ \ \mathbf{v}\in\{\tilde{x}_1,\cdots,\tilde{x}_n,
\tilde{y}_1,\cdots, \tilde{y}_n\},\\ \pi(k,\alpha,\beta)- j,&
\mbox{if}\ \ \mathbf{v}=\tilde{z}_j,\\
\pi(k,\alpha,\beta)+j, & \mbox{if}\ \ \mathbf{v}=\bar{\tilde{z}}_j.\end{cases}\end{equation}
We say that a vector field $X$ satisfies momentum conservation if and only if it is a linear combination of monomial vector fields with zero momentum.
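As an illustrative sketch (not from the paper), the momenta \eqref{2.1.14} and \eqref{2.1.13} can be computed mechanically for finitely supported multi-indices; the helper names below are ours.

```python
def momentum(k, J, alpha, beta):
    """Momentum of the scalar monomial e^{i k.x} y^i z^alpha zbar^beta,
    eq. (2.1.14): sum_b k_b j_b + sum_j (alpha_j - beta_j) j.
    alpha, beta: dicts {j: exponent} over Z_* (finitely supported)."""
    s = sum(kb * jb for kb, jb in zip(k, J))
    s += sum((alpha.get(j, 0) - beta.get(j, 0)) * j
             for j in set(alpha) | set(beta))
    return s

def momentum_vf(k, J, alpha, beta, component):
    """Momentum (2.1.13) of a vector field monomial: shift by -j for a
    z_j component, +j for a zbar_j component; component is
    ('x', b), ('y', b), ('z', j) or ('zbar', j)."""
    kind, index = component
    shift = {'z': -index, 'zbar': index}.get(kind, 0)
    return momentum(k, J, alpha, beta) + shift
```

In particular, the monomial $e^{\mathbf{i}k\cdot x}z_{-j}\bar{z}_{j}$ has momentum $\sum_{b}k_bj_b-2j$, the quantity restricted in the KAM step described in the introduction.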
Similarly to \cite{Berti1}, for a formal vector field $X$ in \eqref{17.9.18.2}, we define its momentum majorant norm on $D(s,r)$ as
\begin{equation}\label{2.1.16}||X||_{s,r,q,\mathbf{a}}:
=\sup_{(y,z,\bar{z})\in D(r)}||(\sum_{k,i,\alpha,\beta}e^{\mathbf{a}|\pi(k,\alpha,\beta;\mathbf{v})|}
|X_{k,i,\alpha,\beta}^{(\mathbf{v})}|e^{|k|s}|y|^i|z^{\alpha}||
\bar{z}^{\beta}|)_{\mathbf{v}\in V}||_{s,r,q},\end{equation}
where $\mathbf{a}\geq0$ and $|k|:=|k_1|+|k_2|+\cdots+|k_n|$.
Furthermore, if $X$ depends on parameters $\xi\in\mathcal{O}\subset\mathbb{R}^n$, we define the $\lambda$-Lipschitz (momentum majorant) norm ($\lambda\geq0$):
\begin{eqnarray}\nonumber||X||^{\lambda}_{s,r,q,\mathbf{a};
\mathcal{O}}&:=&||X||_{s,r,q,\mathbf{a};\mathcal{O}}+
\lambda||X||_{s,r,q,\mathbf{a}
;\mathcal{O}}^{lip}\\
\label{17.9.19.2}&:=&\sup_{\xi\in\mathcal{O}}||X(\xi)||
_{s,r,q,\mathbf{a}}+\lambda\sup_{\xi,\zeta\in\mathcal{O}
\atop{\xi\neq\zeta}}\frac{||\Delta_{\xi\zeta}X||_{s,r,q,\mathbf{a}}}
{|\xi-\zeta|},\end{eqnarray}
where $\Delta_{\xi\zeta}X=X(\cdot;\xi)-X(\cdot;\zeta)$.
Similarly, we define the $\lambda$-Lipschitz sup-norm:
\begin{eqnarray}\nonumber||X||^{\lambda}_{s,r,q;D(s,r)\times
\mathcal{O}}&:=&||X||_{s,r,q;D(s,r)\times\mathcal{O}}+
\lambda||X||_{s,r,q
;D(s,r)\times\mathcal{O}}^{lip}\\
\label{17.12.19.1}&:=&\sup_{(v;\xi)\in D(s,r)\times\mathcal{O}}||X(v;\xi)||
_{s,r,q}+\lambda\sup_{\xi,\zeta\in\mathcal{O}
\atop{\xi\neq\zeta}}\sup_{D(s,r)}\frac{||\Delta_{\xi\zeta}X||_{s,r,q}}
{|\xi-\zeta|}.\end{eqnarray}
Obviously, we have
\begin{equation}\label{17.11.8.2}||X||_{s,r,q;D(s,r)}\leq
||X||_{s,r,q,\mathbf{a}},\end{equation}
\begin{equation}||X||^{\lambda}_{s,r,q;D(s,r)\times
\mathcal{O}}\leq||X||^{\lambda}_{s,r,q,\mathbf{a};
\mathcal{O}}.\end{equation}
Now consider small perturbations $H=N+P$ of an infinite dimensional Hamiltonian in the parameter dependent normal form
\begin{equation}\label{2.1}N=\sum_{1\leq b\leq n}\sigma_{j_b}\omega_b(\xi)y_b+\sum_{j\in\mathbb{Z}_*}\sigma_{j}
\Omega_j(\xi)z_j\bar{z}_j\end{equation}
defined on the phase space $\mathcal{P}^{a,p}$ ($a\geq0,p\geq0$)
with the symplectic structure
\begin{equation}\label{7.13.4}\sum_{1\leq b\leq n}\sigma_{j_b}dx_b\wedge dy_b-\mathbf{i}\sum_{j\in\mathbb{Z}_*}\sigma_jdz_j\wedge d\bar{z}_j,\end{equation}
where $\sigma_j=1$ for $j>0$ and $\sigma_j=-1$ for $j<0$. The tangential frequencies $\omega:=(\omega_1,\cdots,\omega_n)$ and the normal frequencies $\Omega:=(\Omega_j)_{j\in\mathbb{Z}_*}$ are real vectors depending on real parameters $\xi\in\mathcal{O}\subset\mathbb{R}^n$, $\mathcal{O}$ a closed bounded
set of positive Lebesgue measure, and roughly
$$\Omega_j(\xi)=j^2+\cdots.$$
The perturbation term $P$ is real analytic in the space coordinates and Lipschitz in the parameters. Moreover, for each $\xi\in\mathcal{O}$ its Hamiltonian
vector field
\begin{equation}\label{7.13.2} X_P=((\sigma_{j_b}P_{y_b})_{1\leq b\leq n}, -(\sigma_{j_b}P_{x_b})_{1\leq b\leq n},-\mathbf{i}(\sigma_jP_{\bar{z}_j})_{j\in\mathbb{Z}_*},\mathbf{i}(\sigma_jP_{z_j})_{j\in\mathbb{Z}_*})^T\end{equation}
defines in the neighbourhood of $\mathcal{T}_0=\mathbb{T}^n\times\{y=0\}\times\{z=0\}\times\{\bar{z}=0\}$, that is $D(s,r)$ in \eqref{17.9.19.1}, a real analytic map
\begin{equation}\label{7.13.1}X_P:\mathcal{P}^{a,p}\rightarrow
\mathcal{P}^{a,q}.\end{equation}
Similarly to the Lipschitz norm of the vector field in \eqref{17.9.19.2},
the Lipschitz semi-norms of the frequencies $\omega$ and $\Omega$ are defined as
\begin{equation}|\omega|_{\mathcal{O}}^{lip}=\sup_{\xi,\zeta\in
\mathcal{O}\atop{\xi\neq\zeta}}\frac{|\Delta_{\xi\zeta}\omega|}{|\xi-\zeta|},
\quad |\Omega|_{-\delta,\mathcal{O}}^{lip}=\sup_{\xi,\zeta\in\mathcal{O}
\atop{\xi\neq\zeta}}
\sup_{j\in\mathbb{Z}_*}|j|^{-\delta}\frac{|\Delta_{\xi\zeta}\Omega_j|}
{|\xi-\zeta|}\end{equation}
for any real number $\delta$.
\begin{thm} \label{thm7.12.1}Suppose the normal form $N$ in \eqref{2.1} described above satisfies the following
assumptions:
\begin{itemize}
\item[(A)] There exists a constant $m>0$ such that
\begin{equation}|\Omega_i-\Omega_j|\geq m|i^2-j^2|,\end{equation}
for all $i,j\in\mathbb{Z}_*\cup\{0\}$ uniformly on $\mathcal{O}$. Here $\Omega_0=0$;
\item[(B)] The map $\xi\mapsto\omega(\xi)$ between $\mathcal{O}$ and its image is Lipschitz continuous, i.e., there exists a positive constant $M_1$ such that $|\omega|_{\mathcal{O}}^{lip}\leq M_1$;
there exists $\delta\leq1$ such that the functions $\xi\mapsto\frac{\Omega_j(\xi)}{j^{\delta}}$ are uniformly Lipschitz
on $\mathcal{O}$ for $j\in\mathbb{Z}_*$, i.e., there exists a positive constant $M_2$ such that $|\Omega|_{-\delta,\mathcal{O}}^{lip}\leq M_2$;
\item[(C)]
There exists a constant $M_3>0$ such that, for every $k\in\mathbb{Z}^n$ and $l\in\mathbb{Z}^{\mathbb{Z}_*}$ with $|l|:=\sum_{j\in\mathbb{Z}_*}|l_j|\leq2$, the small divisor
$D_{kl}(\xi):=\langle k,\omega(\xi)\rangle+\langle l,\Omega(\xi)\rangle$
satisfies
\begin{equation}\label{18.3.18.3}\inf_{\xi\in\mathcal{O}}|D_{kl}(\xi)|
+\inf_{\xi-\zeta// v_{kl}}\frac{|\Delta_{\xi\zeta}
D_{kl}|}{|\xi-\zeta|}\geq M_{3}\max\{|k|,\sum_{j\in\mathbb{Z}_*}|jl_j|\},
\end{equation}
where $v_{kl}$ is a unit vector in $\mathbb{R}^n$ depending on $k$ and $l$, and the notation ``$\xi-\zeta// v_{kl}$'' means ``for all $\xi,\zeta\in\mathcal{O}$ with $\xi-\zeta$ parallel to $v_{kl}$''.
\end{itemize}
Set $M=M_1+M_2$. Then for every $\beta>0$ there exists a positive constant $\gamma$, depending only on $n$, $m$, the frequencies $\omega$ and $\Omega$,
$s>0$ and $\beta$, such that for every perturbation term $P$ as described above with
\begin{equation}p-q\leq1\end{equation}
and
\begin{equation}\label{17.7.21.20}\epsilon:=||X_P||_{s,r,q,\mathbf{a};
\mathcal{O}}+\frac{\alpha}{M}||X_P||_{s,r,q,\mathbf{a};
\mathcal{O}}^{lip}\leq
(\alpha\gamma)^{1+\beta}\end{equation}
for some $r>0$ and $0<\alpha<1$, there exist:
\begin{itemize}
\item[(1)] a Cantor set $\mathcal{O}_{\alpha}\subset\mathcal{O}$ with
\begin{equation}\label{17.7.21.21}|\mathcal{O}\setminus\mathcal{O}_{\alpha}|\leq c_1\rho^{n-1}\alpha,\end{equation}
where $|\cdot|$ denotes the Lebesgue measure, $\rho:=\mbox{diam}\,\mathcal{O}$ is the diameter of $\mathcal{O}$, and $c_1>0$ is a constant depending on $n,J,\omega,\Omega$;
\item[(2)] a Lipschitz family of smooth torus embeddings $\Phi:\mathbb{T}^n\times\mathcal{O}_{\alpha}\rightarrow
\mathcal{P}^{a,p}$ satisfying: for every non-negative integer multi-index $k=(k_1,\cdots,k_n),$
\begin{equation}\label{17.7.21.22}
||\partial_{x}^k(\Phi-\Phi_0)||_{s,r,p;\mathbb{T}^n
\times\mathcal{O}_{\alpha}}+
\frac{\alpha}{M}||\partial_x^k(\Phi-\Phi_0)||_{s,r,p;
\mathbb{T}^n\times\mathcal{O}_{\alpha}}^{lip}\leq
c_2\epsilon^{\frac{1}{1+\beta}}/\alpha,\end{equation}
where $\partial_x^k:=\frac{\partial^{|k|}}{\partial x_1^{k_1}\cdots\partial x_n^{k_n}}$,
$$\Phi_0:\mathbb{T}^n\times\mathcal{O}\rightarrow\mathcal{T}_0,\quad (x,\xi)\mapsto(x,0,0,0)$$
is the trivial embedding for each $\xi$, and $c_2$ is a positive constant which depends on $k$ and the same parameters as $\gamma$;
\item[(3)] a Lipschitz map $\phi:\mathcal{O}_{\alpha}\rightarrow\mathbb{R}^n$ with
\begin{equation}\label{17.7.21.23}|\phi-\omega|_{\mathcal{O}_{\alpha}}+\frac{\alpha}{M}|\phi-\omega|_{\mathcal{O}_{\alpha}}^{lip}\leq c_3\epsilon,\end{equation}
where $c_3$ is a positive constant which depends on the same parameters as $\gamma$,
\end{itemize}
such that for each $\xi\in\mathcal{O}_{\alpha}$ the map $\Phi$ restricted to $\mathbb{T}^n\times\{\xi\}$ is a smooth embedding of a rotational torus with frequencies $\phi(\xi)$ for the perturbed Hamiltonian $H$ at $\xi$. In other words,
$$t\mapsto\Phi(\theta+\phi(\xi)t,\xi),\quad t\in\mathbb{R}$$
is a smooth quasi-periodic solution for the Hamiltonian $H$ evaluated at $\xi$ for every $\theta\in\mathbb{T}^n$ and $\xi\in\mathcal{O}_{\alpha}$.
\end{thm}
\section{Proof of Theorem 1.1}
In the first subsection, we write the derivative nonlinear Schr\"{o}dinger equation \eqref{1.1} in Hamiltonian form with infinitely many coordinates, and then transform it into a partial Birkhoff normal form up to order four. In the second subsection, we prove Theorem \ref{1.1} by using Theorem \ref{thm7.12.1}.
\subsection{Birkhoff Normal Form}
We introduce for any $a\geq0$ and $\tilde{p}>3/2$ the phase space
$$\mathcal{H}_{0}^{a,\tilde{p}}=\{u\in L^2(\mathbb{T}):\hat{u}(0)=0,||u||_{a,\tilde{p}}^2
=\sum_{j\in\bar{\mathbb{Z}}}|\hat{u}(j)|^2|j|^{2\tilde{p}}e^{2a|j|}<\infty\}$$
of complex valued functions on $\mathbb{T}$, where
$$\hat{u}(j)=\int_{0}^{2\pi}u(x)\phi_{-j}(x)dx.$$
To write \eqref{1.3} in infinitely many coordinates, we make the ansatz
\begin{equation}\label{3.1}u(t,x)=\sum_{j\in\bar{\mathbb{Z}}}
\gamma_jq_j(t)\phi_j(x),\end{equation}
where $\gamma_j=\sqrt{|j|}$. The coordinates are taken from the Hilbert space $\bar{\ell}^{a,p}$ of all complex-valued
sequences $q=(q_j)_{j\in\bar{\mathbb{Z}}}$ with finite norm
$$||q||_{a,p}^2=\sum_{j\in\bar{\mathbb{Z}}}|q_j|^2
|j|^{2p}e^{2a|j|}<\infty,$$
where $p=\tilde{p}+\frac12$. We remark that $\bar{\ell}^{a,p}$ is $\ell_J^{a,p}$ with $J=\emptyset$. In the following, for convenience, the subscript $j\in\bar{\mathbb{Z}}$ is abbreviated as ``$j\neq 0$'' or omitted. Now \eqref{1.3} can be rewritten as
\begin{equation}\label{3.2}\dot{q}_j=-\mathbf{i}\sigma_j\frac{\partial H}{\partial \bar{q}_j},\quad
\sigma_j=\begin{cases}1,\ &j\geq1\\ -1,\ &j\leq-1\end{cases}\end{equation}
with the Hamiltonian
\begin{equation}\label{3.3}H=\Lambda+G+K,\end{equation}
where \begin{equation}\label{3.4}\Lambda=
\sum_{j\neq0}\sigma_jj^2|q_j|^2,\end{equation}
\begin{equation}\label{3.5}G=\frac{1}{4\pi}
\sum_{j,k,l,m\neq0\atop{j-k+l-m=0}}
\gamma_j\gamma_k\gamma_l\gamma_mq_j
\bar{q}_kq_l\bar{q}_m,\end{equation}
\begin{equation}K=\int_{\mathbb{T}^n}
F_{\geq5}(x,\sum_{j\in\bar{\mathbb{Z}}
}\gamma_jq_j\phi_j,\sum_{j\in\bar{\mathbb{Z}}
}\gamma_j\bar{q}_j\phi_{-j})dx.\end{equation}
Now we consider the fourth-order term $G$. The normal form part of $G$ consists of the terms in \eqref{3.5} with $j=k$ or $j=m$, that is
\begin{equation}\label{3.7}B=\frac{1}{4\pi}\sum_{j\neq 0}j^2|q_j|^4+\frac{1}{2\pi}\sum_{j,l\atop{j\neq l}}|jl||q_j|^2|q_l|^2.\end{equation}
Fix a positive integer $N$. Define the index set
$$\Delta=\{{(j,k,l,m)}\in\mathbb{Z}^4:j,k,l,m\neq0,j-k+l-m=0,j\neq k,m\},$$
$$\Delta_1=\{{(j,k,l,m)}\in\Delta:\mbox{at least two of the components lie in }\{\pm1,\cdots,\pm N\}\}.$$
Split the part of $G$ not in normal form into two parts:
\begin{equation}\label{3.8}Q_1=\frac{1}{4\pi}\sum_{(j,k,l,m)\in\Delta_1}\gamma_j\gamma_k\gamma_l
\gamma_mq_j\bar{q}_kq_l\bar{q}_m,\end{equation}
\begin{equation}\label{3.9}Q_2=\frac{1}{4\pi}\sum_{(j,k,l,m)\in\Delta\setminus\Delta_1}
\gamma_j\gamma_k\gamma_l
\gamma_mq_j\bar{q}_kq_l\bar{q}_m.\end{equation}
Then the Hamiltonian can be written as
\begin{equation}\label{3.10}H=\Lambda+B+Q_1+Q_2+K.\end{equation}
In this section, the symplectic structure is
\begin{equation}\label{3.12}-\mi\sum_{j\neq0}\sigma_jdq_j\wedge d\bar{q}_j,\end{equation}
and the corresponding Poisson bracket for two Hamiltonians $H$, $F$ is
\begin{equation}\label{3.13}\{H,F\}=-\mi\sum_{j\neq0}
\sigma_j(\frac{\partial H}{\partial q_j}\frac{\partial F}{\partial\bar{q}_j}-\frac{\partial H}{\partial \bar{q}_j}\frac{\partial F}{\partial q_j}).\end{equation}
For $J=\emptyset$, i.e., in the absence of the $x,y$-variables, denote the momentum majorant norm $||\cdot||_{s,r,p-1,\mathbf{a}}$ in \eqref{2.1.16} as
$||\cdot||_{r,p-1,\mathbf{a}}$. From the analyticity of $g(x,z)$ in $x$ and $z$, we can choose $a>0,\mathbf{a}>0,\tilde{r}>0$ such that the vector field $X_K$ is analytic from some neighbourhood of the origin of $\bar{\ell}^{a,p}$ into $\bar{\ell}^{a,p-1}$ with
\begin{equation}\label{3.6}||X_K||_{\tilde{r},p-1,\mathbf{a}}
=O(\tilde{r}^3).\end{equation}
On the other hand, it is easy to see that the functions $B, Q_1, Q_2$ are real-valued and analytic on $\bar{\ell}^{a,p}$, and the vector fields
$X_B,$ $X_{Q_1}, X_{Q_2}$ are analytic maps from $\bar{\ell}^{a,p}$ into $\bar{\ell}^{a,p-1}$ with
\begin{equation}\label{3.11}||X_B||_{\tilde{r},p-1,\mathbf{a}},
||X_{Q_1}||_{\tilde{r},p-1,\mathbf{a}},
||X_{Q_2}||_{\tilde{r},p-1,\mathbf{a}}=O(\tilde{r}^2).\end{equation}
In the following lemma, we construct a symplectic coordinate transformation, given as the time-$1$ map of the flow of a Hamiltonian vector field $X_F$, which eliminates $Q_1$ from the Hamiltonian and thus yields a partial Birkhoff normal form up to order four.
By Lemma \ref{lem17.12.22.1} and Lemma \ref{lem17.12.23.1} with the absence of $x,y$-variables, we get
\begin{equation}||X_{\{B,F\}}||_{\frac{\tilde{r}}{2},p-1,\mathbf{a}}
\leq2^{2n+4}||X_B||_{\tilde{r},p-1,\mathbf{a}}||X_F||_{\tilde{r},p-1,\mathbf{a}},
\end{equation}
\begin{equation}
||X_{B\circ\Phi_F^1}||_{\frac{\tilde{r}}{2},p-1,\mathbf{a}}\leq
\frac{||X_B||_{\tilde{r},p-1,\mathbf{a}}}{1-2^{2n+6}e
||X_F||_{\tilde{r},p,\mathbf{a}}},
\end{equation}
and similar estimates for $Q_1,Q_2,K$. Therefore we obtain the following lemma, which is Lemma 3.2 in \cite{L-Y} with the momentum majorant norm estimates
instead of the usual sup-norm estimates.
\begin{lem}\label{NFlem} There exists a real analytic symplectic coordinate transformation $\Psi$ defined in a neighborhood
of the origin of $\bar{\ell}^{a,p}$ which transforms the above Hamiltonian $H$ into a partial Birkhoff
normal form up to order four. More precisely,
\begin{equation}\label{3.14}H\circ\Psi=\Lambda+B+Q_2+R\end{equation}
with
\begin{equation}\label{3.15}||X_R||_{\frac{\tilde{r}}{2},p-1,\mathbf{a}}
=O(\tilde{r}^3).\end{equation}
\end{lem}
\subsection{Using the KAM Theorem}
For the given $J=\{j_1<j_2<\cdots<j_n\}$ in Theorem \ref{1.1}, define $N:=\max(|j_1|,\cdots,|j_n|)$. Then by the transformation $\Psi$ in Lemma \ref{NFlem}, we get a new Hamiltonian, still denoted by $H$,
\begin{equation}\label{3.28}H=\Lambda+B+Q_2+R,\end{equation}
which is analytic in some neighbourhood $U$ of the origin of $\bar{\ell}^{a,p}$ with $\Lambda$ in \eqref{3.4}, $B$ in \eqref{3.7}, $Q_2$ in \eqref{3.9}, $R$ satisfying \eqref{3.15}.
Introduce new symplectic coordinates $(x,y,z,\bar{z})$ by setting
\begin{equation}\label{3.29}
\begin{cases}q_{j_b}=\sqrt{\zeta_b+y_b}e^{\mi x_b},
\quad\bar{q}_{j_b}=\sqrt{\zeta_b+y_b}e^{-\mi x_b},\quad b=1,\cdots,n,\\
q_j=z_j,\quad \bar{q}_j=\bar{z}_j,\quad j\in\mathbb{Z}_*:=\bar{\mathbb{Z}}\setminus J,\end{cases}\end{equation}
where $\zeta=(\zeta_1,\cdots,\zeta_n)\in\mathbb{R}_+^n$. Then
\begin{equation}\label{3.30}\Lambda=\sum_{1\leq b\leq n}\sigma_{j_b}j_b^2(\zeta_b+y_b)+\sum_{j\in\mathbb{Z}_*}\sigma_jj^2|z_j|^2,
\end{equation}
\begin{eqnarray}\label{3.31}\nonumber B&=&\frac{1}{4\pi}\sum_{1\leq b\leq n}j_b^2(\zeta_b+y_b)^2+\frac{1}{4\pi}
\sum_{j\in\mathbb{Z}_*}j^2|z_j|^4\\
&&+\frac{1}{2\pi}\sum_{1\leq b,b'\leq n\atop{b\neq b'}}|j_bj_{b'}|(\zeta_b+y_b)(\zeta_{b'}+y_{b'})\nonumber\\
&&+\frac{1}{\pi}\sum_{1\leq b\leq n\atop{j\in\mathbb{Z}_*}}|j_bj|(\zeta_b+y_b)|z_j|^2+\frac{1}{2\pi}\sum_{j,l\in
\mathbb{Z}_*\atop{j\neq l}}|jl||z_j|^2|z_l|^2.
\end{eqnarray}
Thus the new Hamiltonian, still denoted by $H$, up to a constant depending only on $\xi$, is given by
\begin{equation}\label{3.32}H=N+P=\sum_{1\leq b\leq n}\sigma_{j_b}\omega_by_b+\sum_{j\in\mathbb{Z}_*}\sigma_j\Omega_jz_j\bar{z}_j+\tilde{Q}+Q_2+R\end{equation}
with the symplectic structure
\begin{equation}\label{3.33}\sum_{1\leq b\leq n}\sigma_{j_b}dx_b\wedge dy_b-\mi\sum_{j\in\mathbb{Z}_*}\sigma_jdz_j\wedge d\bar{z}_j,\end{equation}
where
\begin{eqnarray}\nonumber\omega_b&=&j_b^2+
\frac{\sigma_{j_b}}{2\pi}j_b^2\zeta_b+\frac{\sigma_{j_b}}{\pi}\sum_{1\leq b'\leq n\atop{b'\neq b}}|j_bj_{b'}|\zeta_{b'}\\
\label{3.34}&=&j_b^2+
\frac{j_b}{\pi}(\frac12|j_b|\zeta_b+\sum_{1\leq b'\leq n\atop{b'\neq b}}|j_{b'}|\zeta_{b'}),\end{eqnarray}
\begin{eqnarray}\nonumber\Omega_j&=&j^2+\frac{\sigma_j}{\pi}\sum_{1\leq b\leq n}|j_bj|\zeta_b\\
\label{3.35}&=&j^2+\frac{j}{\pi}\sum_{1\leq b\leq n}|j_b|\zeta_b,\end{eqnarray}
\begin{eqnarray}\tilde{Q}&=&\frac{1}{4\pi}\sum_{1\leq b\leq n}j_b^2y_b^2+\frac{1}{4\pi}
\sum_{j\in\mathbb{Z}_*}j^2|z_j|^4\nonumber\\
&&+\frac{1}{2\pi}\sum_{1\leq b,b'\leq n\atop{b\neq b'}}|j_bj_{b'}|y_by_{b'}\nonumber\\
\label{17.10.19.1}&&+\frac{1}{\pi}\sum_{1\leq b\leq n\atop{j\in\mathbb{Z}_*}}|j_bj|y_b|z_j|^2+\frac{1}{2\pi}
\sum_{j,l\in\mathbb{Z}_*\atop{j\neq l}}|jl||z_j|^2|z_l|^2.\end{eqnarray}
For simplicity, introduce \begin{equation}\label{18.3.25.1}\xi_b=\frac{1}{\pi}
(\frac12|j_b|\zeta_b+\sum_{b'\neq b}|j_{b'}|\zeta_{b'}),\quad 1\leq b\leq n.\end{equation}
By direct calculation,
we have
\begin{equation}\label{17.10.19.2}\omega_{b}=j_b^2+j_b\xi_b,\quad
\Omega_{j}=j^2+j\frac{\sum_{b=1}^n\xi_b}{n-\frac12},\end{equation}
\begin{equation} \frac{\partial\xi}{\partial\zeta}=
\frac{1}{\pi}
\begin{pmatrix}\frac12&1&\cdots&1\\
1&\frac12&\cdots&1\\
\cdots&\cdots&\cdots&\cdots\\
1&1&\cdots&\frac12\end{pmatrix}\mbox{diag}(j_b:1\leq b\leq n)\end{equation}
and
\begin{equation}(\frac{\partial\xi}{\partial\zeta})^{-1}=
\frac{4\pi}{2n-1}\mbox{diag}(j_b^{-1}:1\leq b\leq n)
\begin{pmatrix}\frac32-n&1&\cdots&1\\
1&\frac32-n&\cdots&1\\
\cdots&\cdots&\cdots&\cdots\\
1&1&\cdots&\frac32-n\end{pmatrix}.
\end{equation}
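Both identities in \eqref{17.10.19.2} can be verified directly. Since $\sigma_{j_b}|j_b|=j_b$, \eqref{3.34} and \eqref{18.3.25.1} give $\omega_b=j_b^2+j_b\xi_b$; moreover, summing \eqref{18.3.25.1} over $b$ yields
$$\sum_{b=1}^n\xi_b=\frac{1}{\pi}\sum_{b=1}^n\Big(\frac12|j_b|\zeta_b+\sum_{b'\neq b}|j_{b'}|\zeta_{b'}\Big)
=\frac{n-\frac12}{\pi}\sum_{b=1}^n|j_b|\zeta_b,$$
so that \eqref{3.35} becomes $\Omega_j=j^2+j\frac{\sum_{b=1}^n\xi_b}{n-\frac12}$. The two matrices above are indeed inverse to each other: writing $E$ for the $n\times n$ matrix with all entries $1$ and $D=\mbox{diag}(j_b:1\leq b\leq n)$, we have $\frac{\partial\xi}{\partial\zeta}=\frac{1}{\pi}(E-\frac12I)D$ and, since $E^2=nE$,
$$\big(E-\frac12I\big)\big((\frac12-n)I+E\big)=\frac{2n-1}{4}I,$$
which gives $\big(\frac{\partial\xi}{\partial\zeta}\big)^{-1}=\frac{4\pi}{2n-1}D^{-1}\big((\frac12-n)I+E\big)$.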
Therefore, we may equivalently treat $\xi$ as the parameters. In view of \eqref{17.10.19.2}, for $k\in\mathbb{Z}^n, l\in\mathbb{Z}^{\mathbb{Z}_*}$, we have
\begin{equation}\langle k,\omega(\xi)\rangle+\langle l,\Omega(\xi)\rangle=(\sum_{b=1}^n
k_bj_b^2+\sum_{j\in\mathbb{Z}_*}l_jj^2)
+\sum_{b=1}^n(k_bj_b+\sum_{j\in\mathbb{Z}_*}
\frac{l_jj}{n-\frac12})\xi_b.\end{equation}
The following lemma is used to check assumption (C) in the KAM theorem.
\begin{lem}\label{lem18.3.27.1}For $k\in\mathbb{Z}^n$, $|l|\leq2$,
at least one of the following $n+1$ inequalities holds:
\begin{equation}|\sum_{b=1}^n
k_bj_b^2+\sum_{j\in\mathbb{Z}_*}l_jj^2|
\geq\frac{1}{100n}\max\{|k|,
\sum_{j\in\mathbb{Z}_*}|jl_j|\},\end{equation}
\begin{equation}\label{17.10.23.2}|k_bj_b+
\sum_{j\in\mathbb{Z}_*}
\frac{l_jj}{n-\frac12}|\geq \frac{1}{100n\sum_{b=1}^n|j_b|}
\max\{|k|,\sum_{j\in\mathbb{Z}_*}|jl_j|
\},\quad b=1,\cdots,n.
\end{equation}
\end{lem}
\begin{proof}
We prove this lemma by considering the following cases.

Case 1: $|k|\geq2\sum_{j\in\mathbb{Z}_*}|jl_j|$.
We prove that at least one of the $n$ inequalities in \eqref{17.10.23.2} is true. Otherwise, for any $1\leq b\leq n$,
\begin{equation}|k_bj_b+
\sum_{j\in\mathbb{Z}_*}
\frac{l_jj}{n-\frac12}|< \frac{|k|}{100n\sum_{b=1}^n|j_b|},
\end{equation}
and thus
\begin{equation}|k_b|\leq|k_bj_b|<
\sum_{j\in\mathbb{Z}_*}
\frac{|l_jj|}{n-\frac12}+ \frac{|k|}{100n\sum_{b=1}^n|j_b|}\leq
\frac{|k|}{2n-1}+\frac{|k|}{100n\sum_{b=1}^n|j_b|}.\end{equation}
Summing the above inequalities over $1\leq b\leq n$, we get
\begin{equation}|k|<\big(\frac{n}{2n-1}
+\frac{1}{100\sum_{b=1}^n|j_b|}\big)|k|,\end{equation}
which is impossible, since for $n\geq2$ we have $\frac{n}{2n-1}\leq\frac23$ and $\sum_{b=1}^n|j_b|\geq2$, so that the factor on the right-hand side is less than $1$.

Case 2: $|k|<2\sum_{j\in\mathbb{Z}_*}|jl_j|$.
Suppose the lemma is not true; then
\begin{equation}\label{17.10.23.1}|\sum_{b=1}^n
k_bj_b^2+\sum_{j\in\mathbb{Z}_*}l_jj^2|
<\frac{1}{50n}
\sum_{j\in\mathbb{Z}_*}|jl_j|,\end{equation}
\begin{equation}\label{18.3.27.2}|k_bj_b+
\sum_{j\in\mathbb{Z}_*}
\frac{l_jj}{n-\frac12}|< \frac{1}{50n\sum_{b=1}^n|j_b|}
\sum_{j\in\mathbb{Z}_*}|jl_j|,\quad b=1,\cdots,n.
\end{equation}
In the following, we will derive a contradiction in all possible cases. Let $e_j\in\mathbb{Z}^{\mathbb{Z}_*}$ denote the sequence whose components are all zero except the $j$-th one, which is $1$.

Subcase 2.1: $l=e_j$ (the case $l=-e_j$ is the same). From \eqref{17.10.23.1} and \eqref{18.3.27.2}, we get
\begin{equation}\label{17.11.11.1}|\sum_{b=1}^n
k_bj_b^2+ j^2|<\frac{|j|}{50n},\end{equation}
\begin{equation}\label{17.11.11.2}|k_bj_b+\frac{j}{n-\frac12}|<
\frac{|j|}{50n\sum_{b=1}^n|j_b|},\quad b=1,\cdots,n.
\end{equation}
By \eqref{17.11.11.2} and $n\geq2$, we know
\begin{equation}\label{17.11.11.7}|k_bj_b|<
\frac{|j|}{n-\frac12}+\frac{|j|}{50n\sum_{b=1}^n|j_b|}
<\frac{|j|}{n-\frac59},\quad b=1,\cdots,n,\end{equation}
and thus
\begin{eqnarray}
\nonumber|\sum_{b=1}^nk_bj_b^2+j^2|&\geq& j^2-\sum_{b=1}^n|k_bj_b^2|\\
\nonumber&=& j^2-\sum_{k_b\neq0}|k_bj_b^2|\\
\nonumber&\geq& j^2-\sum_{k_b\neq0}k_b^2j_b^2\\
\nonumber&> &j^2-\frac{n}{(n-\frac59)^2}j^2\\
&>&\frac{1}{50n}j^2,\end{eqnarray}
which contradicts \eqref{17.11.11.1}.

Subcase 2.2: $l=e_i+e_j$ (the case $l=-e_i-e_j$ is the same). From \eqref{17.10.23.1} and \eqref{18.3.27.2}, we get
\begin{equation}\label{17.10.25.3}|\sum_{b=1}^n
k_bj_b^2+i^2+j^2|<\frac{1}{50n}(|i|+|j|),
\end{equation}
\begin{equation}\label{17.10.25.6}|k_bj_b+\frac{i+j}{n-\frac12}|<
\frac{|i|+|j|}{50n\sum_{b=1}^n|j_b|},\quad b=1,\cdots,n.
\end{equation}
By \eqref{17.10.25.6} and $n\geq2$, we know
\begin{equation}\label{17.10.23.5}|k_bj_b|< \frac{|i+j|}{n-\frac12}+\frac{|i|+|j|}
{50n\sum_{b=1}^n|j_b|}
<\frac{|i|+|j|}{n-\frac{9}{17}},\quad b=1,\cdots,n.\end{equation}
Therefore, for $n\geq3$, using $(|i|+|j|)^2\leq2(i^2+j^2)$ and noting that $\frac{2n}{(n-\frac{9}{17})^2}$ is decreasing in $n$ with value $1-\frac{5}{294}$ at $n=3$,
\begin{eqnarray}
\nonumber|\sum_{b=1}^nk_bj_b^2|&=&|\sum_{k_b\neq0}k_bj_b^2|\\
\nonumber&\leq&\sum_{k_b\neq0}k_b^2j_b^2\\
\nonumber&<&\frac{n}{(n-\frac{9}{17})^2}(|i|+|j|)^2\\
\nonumber&\leq&\big(1-\frac{5}{294}\big)(i^2+j^2),
\end{eqnarray}
and thus
\begin{eqnarray}\nonumber|\sum_{b=1}^nk_bj_b^2
+i^2+j^2|&\geq& i^2+j^2-\sum_{b=1}^n|k_bj_b^2|\\
\nonumber&>&i^2+j^2-
\big(1-\frac{5}{294}\big)(i^2+j^2)\\
&=&\frac{5}{294}(i^2+j^2)>\frac{1}{50n}(i^2+j^2),\end{eqnarray}
which contradicts \eqref{17.10.25.3}. Finally we
prove the case $n=2$ with $j_1<0<j_2$. If $ij<0$, by the first inequality of \eqref{17.10.23.5},
\begin{eqnarray}
\nonumber|\sum_{b=1}^2k_bj_b^2|&\leq&\sum_{k_b\neq0}k_b^2j_b^2\\
\nonumber&<&2\Big(\frac{2|i+j|}{3}+\frac{|i-j|}{
100(|j_1|+|j_2|)}\Big)^2\\
\nonumber&=&\frac{8}{9}(i+j)^2+
\frac{2|i^2-j^2|}{75(|j_1|+|j_2|)}
+\frac{(i-j)^2}{5000(|j_1|+|j_2|)^2},
\end{eqnarray}
and thus
\begin{eqnarray}\nonumber|\sum_{b=1}^2k_bj_b^2
+i^2+j^2|&\geq& i^2+j^2-\sum_{b=1}^2|k_bj_b^2|\\
\nonumber&>&(i^2+j^2)\Big(1-\frac{8}{9}-\frac{2}{75(|j_1|+|j_2|)}
-\frac{1}{2500(|j_1|+|j_2|)^2}\Big)\\
&>&\frac{1}{100}(i^2+j^2),\end{eqnarray}
which contradicts \eqref{17.10.25.3}.
Otherwise, $ij>0$. From \eqref{17.10.25.6}, each nonzero $k_bj_b$ has the sign opposite to that of $i+j$. Then in view of
$j_1<0<j_2$, we conclude that the two terms $k_1j_1^2$ and $k_2j_2^2$ have opposite signs (or vanish), and thus by the second inequality of
\eqref{17.10.23.5},
\begin{equation}\label{18.3.27.1}
|\sum_{b=1}^2k_bj_b^2|\leq
\max\{|k_1j_1^2|,|k_2j_2^2|\}
\leq\frac{1}{(2-\frac{9}{17})^2}(i+j)^2\leq
\frac{2}{(2-\frac{9}{17})^2}(i^2+j^2).
\end{equation}
Using \eqref{18.3.27.1}, we obtain
\begin{eqnarray*}
|\sum_{b=1}^2k_bj_b^2+i^2+j^2|&\geq&(i^2+j^2)-
|\sum_{b=1}^2k_bj_b^2|\\
&>&(i^2+j^2)-\frac{2}{(2-\frac{9}{17})^2}(i^2+j^2)\\
&>&\frac{1}{100}(i^2+j^2),
\end{eqnarray*}
which contradicts \eqref{17.10.25.3}.

Subcase 2.3: $l=e_i-e_j$ with $ij>0$ and $i\neq j$. From \eqref{17.10.23.1} and \eqref{18.3.27.2}, we get
\begin{equation}\label{17.11.12.5}|\sum_{b=1}^n
k_bj_b^2+i^2-j^2|<\frac{1}{50n}|i+j|,\end{equation}
\begin{equation}\label{17.11.12.6}|k_bj_b+\frac{i-j}{n-\frac12}|<
\frac{1}{50n\sum_{b=1}^n|j_b|}|i+j|,\quad b=1,\cdots,n.
\end{equation}
By \eqref{17.11.12.6}, we get
\begin{equation}\label{17.11.12.7}|k_bj_b|<
\frac{|i-j|}{n-\frac12}+
\frac{1}{50n\sum_{b=1}^n|j_b|}|i+j|,\quad b=1,\cdots,n,\end{equation}
and thus
\begin{eqnarray}\nonumber|\sum_{b=1}^nk_bj_b^2|&=&|\sum_{k_b\neq0}k_bj_b^2|\\
\nonumber&\leq&\sum_{k_b\neq0}|k_bj_b||j_b|\\
\nonumber&<&\sum_{k_b\neq0}
\Big(\frac{|i-j|}{n-\frac12}+
\frac{1}{50n\sum_{b=1}^n|j_b|}|i+j|\Big)|j_b|\\
\nonumber&=&\frac{|i-j|}{n-\frac12}\sum_{k_b\neq0}
|j_b|+\frac{\sum_{k_b\neq0}|j_b|}{50n\sum_{b=1}^n|j_b|}|i+j|\\
\nonumber&\leq&\frac{|i-j|}{n-\frac12}\sum_{k_b\neq0}
|k_bj_b|+\frac{1}{50n}|i+j|\\
\nonumber&<&\frac{|i-j|}{n-\frac12}\sum_{k_b\neq0}\Big(
\frac{|i-j|}{n-\frac12}+\frac{1}
{50n\sum_{b=1}^n|j_b|}|i+j|\Big)
+\frac{1}{50n}|i+j|\\
\label{17.11.12.8}&\leq&\frac{n}{(n-\frac12)^2}|i-j|^2+
\frac{|i^2-j^2|}{50(n-\frac12)\sum_{b=1}^n|j_b|}+
\frac{1}{50n}|i+j|.
\end{eqnarray}
Furthermore, for $n\geq2$, we obtain
\begin{eqnarray}\nonumber|\sum_{b=1}^nk_bj_b^2+i^2-j^2|&\geq&
|i^2-j^2|-|\sum_{b=1}^nk_bj_b^2|\\
\nonumber&>&|i^2-j^2|\Big(1-\frac{n}{(n-\frac12)^2}-
\frac{1}{50(n-\frac12)\sum_{b=1}^n|j_b|}-\frac{1}{50n}\Big)\\
\nonumber&\geq&|i^2-j^2|\big(1-\frac89-\frac{1}{150}
-\frac{1}{100}\big)\\
\nonumber &>&\frac{1}{50n}|i^2-j^2|,
\end{eqnarray}
which contradicts \eqref{17.11.12.5}.

Subcase 2.4: $l=e_i-e_j$ with $ij<0$. From \eqref{17.10.23.1} and \eqref{18.3.27.2}, we get
\begin{equation}\label{17.11.12.3}|\sum_{b=1}^n
k_bj_b^2+i^2-j^2|<\frac{1}{50n}|i-j|,\end{equation}
\begin{equation}\label{17.11.12.1}|k_bj_b+\frac{i-j}
{n-\frac12}|<\frac{1}{50n\sum_{b=1}^n|j_b|}|i-j|,\quad b=1,\cdots,n.
\end{equation}
In view of
\eqref{17.11.12.1}, we get
\begin{eqnarray}
\nonumber|\sum_{b=1}^nk_bj_b^2+\frac{i-j}{n-\frac12}\sum_{b=1}^nj_b|
&\leq&\sum_{b=1}^n|k_bj_b+\frac{i-j}{n-\frac12}||j_b|\\
\nonumber&<&\frac{1}{50n\sum_{b=1}^n|j_b|}|i-j|
\cdot\sum_{b=1}^n|j_b|\\
\label{17.11.12.2}&=&\frac{1}{50n}|i-j|.
\end{eqnarray}
From \eqref{17.11.12.3} and \eqref{17.11.12.2}, we get
\begin{equation}\label{17.11.12.4}|(i^2-j^2)-
\frac{i-j}{n-\frac12}
\sum_{b=1}^nj_b|<\frac{1}{25n}|i-j|,\end{equation}
and thus
\begin{equation}\label{18.3.26.8}|(i+j)-
\frac{2}{2n-1}
\sum_{b=1}^nj_b|<\frac{1}{25n}.\end{equation}
On the other hand, since $2n-1$ is odd and $2n-1\nmid\sum_{b=1}^nj_b$, the integer $(2n-1)(i+j)-2\sum_{b=1}^nj_b$ is nonzero, and hence
\begin{equation}
|(i+j)-\frac{2}{2n-1}\sum_{b=1}^nj_b|\geq\frac{1}{2n-1},
\end{equation}
which contradicts \eqref{18.3.26.8}.
\end{proof}
Denote the normal form $\sum_{1\leq b\leq n}\sigma_{j_b}\omega_by_b+\sum_{j\in\mathbb{Z}_*}
\sigma_j\Omega_jz_j\bar{z}_j$ by $N$ and the perturbation $\tilde{Q}+Q_2+R$ by $P$. Consider the Hamiltonian $H=N+P$ on $D(s,r)\times\Xi_r$, where the phase domain $D(s,r)$ is defined in \eqref{17.9.19.1} with $s,r>0$ suitably small and the parameter domain
\begin{equation}\label{18.3.31.1}\Xi_r
=\{\xi\in\mathbb{R}^n_+
:|\xi|\leq r^{3/2}\}.\end{equation}
In view of \eqref{17.10.19.1}, we have
\begin{equation}\label{18.3.25.2}||X_{\tilde{Q}}||
_{s,r,p-1,\mathbf{a};\Xi_r}=O(r^2).\end{equation}
In view of \eqref{3.9}, $Q_2$ is of order at least three in $z,\bar{z}$, and then following the proof of Proposition 7.2 in \cite{Berti2}, we get
\begin{equation}\label{18.3.25.3}||X_{Q_2}||_{s,r,p-1,\mathbf{a};
\Xi_r}=O(r^{\frac74}).\end{equation}
In view of \eqref{3.15}, following the proof of Proposition 7.2 in \cite{Berti2}, we get
\begin{equation}\label{18.3.25.4}||X_{R}||_{s,r,p-1,\mathbf{a};
\Xi_r}=O(r^{\frac74}).\end{equation}
We conclude from \eqref{18.3.25.2}-\eqref{18.3.25.4},
\begin{equation}\label{18.3.25.5}||X_{P}||_{s,r,p-1,\mathbf{a};
\Xi_r}=O(r^{\frac74}).\end{equation}
Define
\begin{equation}\mathcal{O}_r:=U_{-\alpha}
\Xi_r,\end{equation}
where $U_{-\rho}\Xi$ is the subset of all points in $\Xi$ with boundary distance greater than $\rho$.
Now we study the Hamiltonian $H=N+P$ on $D(s,r)\times\mathcal{O}_r$ by means of the KAM theorem (Theorem \ref{thm7.12.1}). In view of
\eqref{17.10.19.2}, the assumption (A) is fulfilled with $m=\frac12$; the assumption (B) is fulfilled with $\delta=1$ and
\begin{equation}M_1=\max_{1\leq b\leq n}|j_b|,\quad M_2=\frac{n}{n-\frac12};\end{equation}
by Lemma \ref{lem18.3.27.1}, the assumption (C) is fulfilled with
\begin{equation}M_3=\frac{1}{100n\sum_{b=1}^n
|j_b|}.\end{equation}
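Indeed, the constants $M_1$ and $M_2$ can be read off from \eqref{17.10.19.2}: for $\xi,\zeta\in\mathcal{O}_r$,
$$|\omega_b(\xi)-\omega_b(\zeta)|=|j_b||\xi_b-\zeta_b|,\qquad
\Big|\frac{\Omega_j(\xi)-\Omega_j(\zeta)}{j}\Big|=\frac{1}{n-\frac12}\Big|\sum_{b=1}^n(\xi_b-\zeta_b)\Big|
\leq\frac{n}{n-\frac12}\max_{1\leq b\leq n}|\xi_b-\zeta_b|.$$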
Choose
\begin{equation}\alpha=r^{\frac85}\gamma^{-1},\quad
\beta=\frac{1}{13},\end{equation}
where $\gamma$ is taken from the KAM theorem.
Set $M=M_1+M_2$, which only depends on the set $J$. Observe that when $r$ is small enough,
\begin{equation}\label{18.4.1.2}\epsilon:=
||X_P||_{s,r,p-1,\mathbf{a};\mathcal{O}_r}+
\frac{\alpha}{M}
||X_P||^{lip}_{s,r,p-1,\mathbf{a};
\mathcal{O}_r}=O(r^{\frac74})\leq
(\alpha\gamma)^{1+\beta},\end{equation}
which is the smallness condition \eqref{17.7.21.20}.
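The choice of exponents can be checked directly: with $\alpha=r^{\frac85}\gamma^{-1}$ and $\beta=\frac{1}{13}$,
$$(\alpha\gamma)^{1+\beta}=\big(r^{\frac85}\big)^{\frac{14}{13}}=r^{\frac{112}{65}},$$
and since $\frac{112}{65}<\frac74$, the quantity $\epsilon=O(r^{\frac74})$ is indeed bounded by $(\alpha\gamma)^{1+\beta}$ for all sufficiently small $r$.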
Applying Theorem \ref{thm7.12.1}, we obtain a Cantor set $\mathcal{O}_r^{-}\subset\mathcal{O}_r$ with
\begin{equation}|\mathcal{O}_r\setminus
\mathcal{O}_r^{-}|=O((r^{\frac32})^{n-1}\alpha
)=O(r^{\frac32n+\frac{1}{10}}),\end{equation}
a Lipschitz family of smooth torus embeddings $\Phi_r:\mathbb{T}^n\times
\mathcal{O}_r^-\rightarrow\mathcal{P}^{a,p}$, and a Lipschitz frequency map $\phi_r:\mathcal{O}_r^-\rightarrow\mathbb{R}^n$, such that for each $\xi\in\mathcal{O}_r^-$ the map $\Phi_r$ restricted to $\mathbb{T}^n\times\{\xi\}$ is a smooth embedding of a rotational torus with frequencies $\phi_r(\xi)$ for the perturbed Hamiltonian $H$ at $\xi$. Moreover, for every non-negative integer multi-index $k=(k_1,\cdots,k_n)$,
\begin{equation}\label{18.3.26.2}||\partial_x^k(\Phi_r-\Phi_0)||
_{s,r,p;\mathbb{T}^n\times\mathcal{O}_r^-}+
\frac{\alpha}{M}||\partial_x^k(\Phi_r-\Phi_0)||^{lip}
_{s,r,p;\mathbb{T}^n\times\mathcal{O}_r^-}=
O(\epsilon^{\frac{1}{1+\beta}}/\alpha)=O(r^{1/40}),
\end{equation}
\begin{equation}\label{18.3.26.3}|\phi_r-\omega|_{\mathcal{O}_r^-}
+\frac{\alpha}{M}|\phi_r-\omega|^{lip}
_{\mathcal{O}_r^-}=O(\epsilon)=O(r^{7/4}).\end{equation}
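For the reader's convenience, the exponent $\frac{1}{40}$ in \eqref{18.3.26.2} follows from \eqref{18.4.1.2}: since $\epsilon=O(r^{\frac74})$, $\alpha=r^{\frac85}\gamma^{-1}$ and $1+\beta=\frac{14}{13}$,
$$\epsilon^{\frac{1}{1+\beta}}/\alpha=O\big(r^{\frac74\cdot\frac{13}{14}}r^{-\frac85}\big)
=O\big(r^{\frac{13}{8}-\frac85}\big)=O\big(r^{\frac{1}{40}}\big),$$
where $\frac{13}{8}-\frac85=\frac{65-64}{40}=\frac{1}{40}$.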
The Cantor set $\mathcal{O}_r^-$ by itself does not have full density at the origin. To obtain a set with this property, following \cite{Poschel3}, we take the union of a sequence of subsets of the sets $\mathcal{O}_r^-$ along a geometrically decreasing sequence of radii $r$.
Set $r_j=\frac{r_0}{2^j},j\geq0$ and define
\begin{equation}\mathcal{C}:=\bigcup_{j\geq0}
\mathcal{C}_{r_j},\end{equation}
where \begin{equation}\mathcal{C}_r=\mathcal{O}_r^-\cap
\Big(U_{-\alpha}(\Xi_r\setminus\Xi_{\frac r2})\Big).\end{equation}
As in the proof in \cite{L-Y}, the Cantor set $\mathcal{C}$ has full density at the origin.
Then,
define the embedding $\Phi:\mathbb{T}^n\times\mathcal{C}
\rightarrow\mathcal{P}^{a,p}$ and the frequency map $\phi:\mathcal{C}\rightarrow\mathbb{R}^n$ by piecing together the corresponding definitions on each component. Furthermore, define $\Psi:=\Phi+T_{\xi}$, where $T_{\xi}(x,\xi)=(0,\xi,0,0)$.
The estimates of $\Psi-\Psi_0$ and $\phi-\omega$ on $\mathcal{C}\cap\Xi_{r_j}$ follow from \eqref{18.3.26.2} and \eqref{18.3.26.3}.
This finally completes the proof of Theorem \ref{1.1}.
\section{The Homological Equations}
In this section, the Poisson bracket $\{H,F\}$ for two Hamiltonians $H,F$ is defined with respect to the symplectic structure \eqref{7.13.4}, i.e.,
$$\{H,F\}=\sum_{1\leq b\leq n}\sigma_{j_b}(\frac{\partial H}{\partial x_b}\frac{\partial F}{\partial y_b}-
\frac{\partial H}{\partial y_b}\frac{\partial F}{\partial x_b})-\mathbf{i}\sum_{j\in\mathbb{Z}_*}\sigma_j(\frac{\partial H}{\partial z_j}\frac{\partial F}{\partial \bar{z}_j}-
\frac{\partial H}{\partial \bar{z}_j}\frac{\partial F}{\partial z_j}).$$
\subsection{Derivation of homological equations}
The proof of
Theorem \ref{thm7.12.1} employs the rapidly converging iteration scheme of Newton type, introduced by Kolmogorov to deal with small-divisor problems, which involves an infinite sequence of coordinate transformations. At the $\nu$-th step of the scheme, the Hamiltonian
$$H_{\nu}=N_{\nu}+P_{\nu}$$
is considered, where $N_{\nu}$ is a generalized normal form
$$N_{\nu}=\sum_{1\leq b\leq n}\sigma_{j_b}\omega_{\nu,b}(\xi)y_b+\sum_{j\in\mathbb{Z}_*}\sigma_j\Omega_{\nu,j}(\xi)z_j\bar{z}_j,$$
and $P_{\nu}$ is a small perturbation. A transformation $\Phi_{\nu}$ is set up so that
$$H_{\nu+1}=H_{\nu}\circ\Phi_{\nu}=N_{\nu+1}+P_{\nu+1},$$
where $N_{\nu+1}$ is another generalized normal form and $P_{\nu+1}$ is a much smaller perturbation. We drop the index $\nu$ of $H_{\nu}$, $N_{\nu}$, $P_{\nu}$, $\omega_{\nu}$, $\Omega_{\nu}$, $\Phi_{\nu}$ and shorten the index $\nu+1$ to ``$+$''.
For a function $u$ on $\mathbb{T}^n$, let
$$[u]=\frac{1}{(2\pi)^n}\int_{\mathbb{T}^n}u(x)dx.$$
Let $R$ be the second-order Taylor polynomial truncation of $P$, that is,
\begin{equation}\label{7.12.4} R=R^x+\langle R^y,y\rangle+\langle R^z,z\rangle+\langle R^{\bar{z}},\bar{z}\rangle+
\langle R^{zz}z,z\rangle+\langle R^{z\bar{z}}z,\bar{z}\rangle+\langle R^{\bar{z}\bar{z}}\bar{z},\bar{z}\rangle,\end{equation}
where $\langle\cdot,\cdot\rangle$ denotes the formal product of two column vectors and $R^x,R^y,R^z,R^{\bar{z}},R^{zz},R^{z\bar{z}},R^{\bar{z}\bar{z}}$ depend on $x$ and $\xi$. Denote by $[[R]]$ the part of $R$ in generalized normal form:
$$[[R]]=[R^x]+\langle[R^y],y\rangle+\langle \mbox{diag}(R^{z\bar{z}})z,\bar{z}\rangle,$$
where $\mbox{diag}(R^{z\bar{z}})$ is the diagonal of $R^{z\bar{z}}$. In the following, the term $[R^x]$ will be omitted since it does not affect the dynamics.
The coordinate transformation $\Phi$ is obtained as the time $1$-map $X_F^t|_{t=1}$ of a Hamiltonian vector field $X_F$, where $F$ is of the same form as $R$:
\begin{equation}\label{7.12.5} F=F^x+\langle F^y,y\rangle+\langle F^z,z\rangle+\langle F^{\bar{z}},\bar{z}\rangle+
\langle F^{zz}z,z\rangle+\langle F^{z\bar{z}}z,\bar{z}\rangle+\langle F^{\bar{z}\bar{z}}\bar{z},\bar{z}\rangle,\end{equation}
and $[[F]]=0$. Denote $\partial_{\omega}=\sum_{1\leq b\leq n}\omega_b\frac{\partial}{\partial x_b},$
$\Lambda=\mbox{diag}(\Omega_j:j\in\mathbb{Z}_*)$. Then we have
\begin{align}\nonumber H_+=&H\circ\Phi\\
\nonumber=&(N+R)\circ X_F^1+(P-R)\circ X_F^1\\
\nonumber=&N+\{N,F\}+R+\int_0^1\{(1-t)\{N,F\}+R,F\}\circ X_F^tdt+(P-R)\circ
X_F^1\\
\label{17.7.21.3}=&N+\sum_{j\in\mathbb{Z}_*}\sigma_j\langle \partial_x\Omega_j,F^y\rangle z_j\bar{z}_j+\langle[R^y],y\rangle+
\langle\mbox{diag}(R^{z\bar{z}})z,\bar{z}\rangle\\
\label{17.7.21.4}&+(-\partial_{\omega}F^x+R^x)\\
\label{17.7.21.5}&+\langle -\partial_{\omega}F^y+R^y-[R^y],y\rangle\\
\label{17.7.21.6}&+\langle-\partial_{\omega}F^z+\mathbf{i}\Lambda F^z+R^z,z\rangle\\
\label{17.7.21.7}&+\langle-\partial_{\omega}F^{\bar{z}}-\mathbf{i}\Lambda F^{\bar{z}}+R^{\bar{z}},\bar{z}\rangle\\
\label{17.7.21.8}&+\langle(-\partial_{\omega}F^{zz}+\mathbf{i}\Lambda F^{zz}+\mathbf{i}F^{zz}\Lambda+R^{zz})z,z\rangle\\
\label{17.7.21.9}&+\langle(-\partial_{\omega}F^{\bar{z}\bar{z}}-\mathbf{i}\Lambda F^{\bar{z}\bar{z}}-\mathbf{i}F^{\bar{z}\bar{z}}\Lambda+R^{\bar{z}\bar{z}})\bar{z},\bar{z}\rangle\\
\label{17.7.21.10}&+\langle(-\partial_{\omega}F^{z\bar{z}}-\mathbf{i}\Lambda F^{z\bar{z}}+\mathbf{i}F^{z\bar{z}}\Lambda+R^{z\bar{z}}-
\mbox{diag}(R^{z\bar{z}}))z,\bar{z}\rangle\\
\label{17.7.21.11}&+\int_0^1\{(1-t)\{N,F\}+R,F\}\circ X_F^tdt+(P-R)\circ
X_F^1.
\end{align}
We wish to find the function $F$ such that
\eqref{17.7.21.4}-\eqref{17.7.21.10} vanish. To this end,
$F^x,F^y,F^z,F^{\bar{z}}, F^{zz},F^{z\bar{z}}$ and
$F^{\bar{z}\bar{z}}$ should satisfy the homological equations:
\begin{align}\label{14.2.4.10}\partial_{\omega}F^x&=R^x,\\
\label{14.2.4.11}\partial_{\omega}F^y&=R^y-[R^y],\\
\label{14.2.4.12}\partial_{\omega}F^z_j-\mi\Omega_jF^z_j&=R^z_j,\quad j\in\mathbb{Z}_*,\\
\label{14.2.4.13}\partial_{\omega}F^{\bar{z}}_j+\mi\Omega_jF^{\bar{z}}_j&=R^{\bar{z}}_j,\quad j\in\mathbb{Z}_*,\\
\label{14.2.4.14}\partial_{\omega}F^{zz}_{ij}-\mi(\Omega_i+\Omega_j)F^{zz}_{ij}&=R^{zz}_{ij},\quad
i,j\in\mathbb{Z}_*,\\
\label{14.2.4.15}\partial_{\omega}F^{\bar{z}\bar{z}}_{ij}+\mi(\Omega_i+\Omega_j)F^{\bar{z}\bar{z}}_{ij}&=R_{ij}^{\bar{z}\bar{z}},\quad
i,j\in\mathbb{Z}_*,\\
\label{14.2.4.16}\partial_{\omega}F^{z\bar{z}}_{ij}+\mi(\Omega_i-\Omega_j)F^{z\bar{z}}_{ij}&=R^{z\bar{z}}_{ij},\quad i,j\in\mathbb{Z}_*,i\neq j.
\end{align}
\subsection{Two lemmas for solving the homological equations}
The homological equations \eqref{14.2.4.10} and \eqref{14.2.4.11} can be directly
solved by comparing Fourier coefficients; the homological equations \eqref{14.2.4.12}-\eqref{14.2.4.16} are solved by using Lemma \ref{lem17.7.31.1} and Lemma \ref{lem17.9.22.1} below.
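Concretely, expanding in Fourier series and using $\partial_{\omega}e^{\mathbf{i}k\cdot x}=\mathbf{i}\langle k,\omega\rangle e^{\mathbf{i}k\cdot x}$, the solutions of \eqref{14.2.4.10} and \eqref{14.2.4.11} are given by
$$\hat{F}^x_k=\frac{\hat{R}^x_k}{\mathbf{i}\langle k,\omega\rangle},\qquad
\hat{F}^y_k=\frac{\hat{R}^y_k}{\mathbf{i}\langle k,\omega\rangle},\qquad k\in\mathbb{Z}^n\setminus\{0\},$$
with $\hat{F}^x_0=\hat{F}^y_0=0$ in accordance with $[[F]]=0$; the subtraction of $[R^y]$ in \eqref{14.2.4.11} ensures that the right-hand side has zero average.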
Recall that $J=\{j_1<\cdots<j_n\}\subset\bar{\mathbb{Z}}$. Denote
$C_J=\max_{1\leq b\leq n}|j_b|$ and
\begin{equation}\label{17.11.4.1}
\pi(k,\mathbf{m})=\sum_{b=1}^nk_bj_b+
\mathbf{m},\ \mathbf{m}\in\mathbb{Z}.\end{equation}
For an analytic function $u(x)$ on $D(s)$, define its momentum majorant norm as
\begin{equation*}|u|_{s,\mathbf{a},\mathbf{m}}:=
\sum_{k\in\mathbb{Z}^n}
|\hat{u}_k|e^{|k|s}e^{\mathbf{a}|\pi(k,\mathbf{m})|},\end{equation*}
where $\hat{u}_k$ is the $k$-Fourier coefficient of $u:$ $\hat{u}_k=(2\pi)^{-n}\int_{\mathbb{T}^n}u(x)e^{-\mathbf{i}k\cdot x}dx$.
The following lemma is Theorem 1.4 in \cite{L-Y2} with the momentum majorant norm estimate instead of the sup-norm estimate, see \eqref{17.11.3.1}.
\begin{lem}\label{lem17.7.31.1}Consider the first-order partial differential equation
\begin{equation}\label{17.7.20.15}-\mathbf{i}\partial_{\omega}u+\lambda u+\mu(x)u=p(x),\quad x\in\mathbb{T}^n\end{equation}
for the unknown function $u$ defined on the torus $\mathbb{T}^n$, where $\omega=(\omega_1,\cdots,\omega_n)\in\mathbb{R}^n$ and $\lambda\in\mathbb{C}$. Assume:
\begin{itemize}\item[(1)] There are constants $\alpha_1,\alpha_2,\tilde{\gamma}>0$ and $\tau>n$ such that
\begin{equation}\label{17.7.20.16}|\langle k,\omega\rangle|\geq\frac{\alpha_1}{|k|^{\tau}},\quad k\in\mathbb{Z}^n\setminus\{0\}, \end{equation}
\begin{equation}\label{17.7.20.17}|\langle k,\omega\rangle+\lambda|\geq\frac{\alpha_2\tilde{\gamma}}
{1+|k|^{\tau}},\quad k\in\mathbb{Z}^n. \end{equation}
\item[(2)] $\mu:D(s)\rightarrow\mathbb{C}$ is real analytic (here `real' means $\mu(\mathbb{T}^n)\subset\mathbb{R}$) and is of zero average, i.e., $\int_{\mathbb{T}^n}\mu(\phi)d\phi=0$.
Moreover, assume there is a constant $C>0$ such that
\begin{equation}\label{17.7.20.18}|\mu|_{s,\tau+1}:=\sum_{k\in\mathbb{Z}^n}
|\hat{\mu}_k||k|^{\tau+1}e^{|k|s}
\leq C\tilde{\gamma},\end{equation}
where $\hat{\mu}_k$ is the $k$-Fourier coefficient of $\mu$.
\item[(3)] $p(x)$ is analytic in $x$ in $D(s)$ with finite momentum majorant norm.
\end{itemize}
Then \eqref{17.7.20.15} has a unique solution $u(x)$, defined in the narrower domain $D(s-\sigma)$ with $0<\sigma<s$, which satisfies
\begin{equation}\label{17.11.3.1}|u|_{s-\sigma,\mathbf{a},\mathbf{m}}
\leq\frac{c(n,\tau)}{\alpha_2\tilde{\gamma}
\sigma^{2n+\tau}}
e^{2C\tilde{\gamma}s/\alpha_1}|p|_{s,\mathbf{a},\mathbf{m}}\end{equation}
for $4\mathbf{a}C_J\leq\sigma<\min\{1,s\}$, where the constant $c(n,\tau)=4^{n+\tau}(8e+8)^n(6e+6)^n(1+(\frac{3\tau}{e})^{\tau})$.
\end{lem}
\begin{proof}By Theorem 1.4 in \cite{L-Y2}, we have
\begin{equation}\label{17.7.20.20}\sup_{x\in D(s-\frac{\sigma}{2})}|u(x)|\leq\frac{c_1(n,\tau)}{\alpha_2\tilde{\gamma}
\sigma^{n+\tau}}
e^{2C\tilde{\gamma}s/\alpha_1}\sup_{x\in D(s-\frac{\sigma}{4})}|p(x)|,\end{equation}
where $c_1(n,\tau)=4^{n+\tau}(6e+6)^n(1+(\frac{3\tau}{e})^{\tau})$. In view of $\mathbf{a}C_J\leq\frac{\sigma}{4}$ and \eqref{17.7.20.1}, we have
\begin{eqnarray}\nonumber|u|_{s-\sigma,\mathbf{a},\mathbf{m}}&=&
\sum_{k\in\mathbb{Z}^n}|\hat{u}_k|e^{|k|(s-\sigma)}
e^{\mathbf{a}|\pi(k,\mathbf{m})|}\\
\nonumber&\leq&\sum_{k\in\mathbb{Z}^n}|
\hat{u}_k|e^{|k|(s-\sigma)}e^{\mathbf{a}C_J|k|}e^{\mathbf{a}|\mathbf{m}|}\\
\nonumber&\leq&e^{\mathbf{a}|\mathbf{m}|}\sum_{k\in\mathbb{Z}^n}|
\hat{u}_k|e^{|k|(s-\frac{3}{4}\sigma)}\\
\nonumber&\leq&e^{\mathbf{a}|\mathbf{m}|}(\sup_{x\in D(s-\frac{\sigma}{2})}|u(x)|)
\sum_{k\in\mathbb{Z}^n}e^{-|k|\frac{\sigma}{4}}\\
\label{17.11.3.2}&\leq&\frac{(8e+8)^n}{\sigma^n}e^{\mathbf{a}|\mathbf{m}|}
(\sup_{x\in D(s-\frac{\sigma}{2})}|u(x)|),
\end{eqnarray}
\begin{eqnarray}
\nonumber\sup_{x\in D(s-\frac{\sigma}{4})}|p(x)|&\leq&\sum_{k\in\mathbb{Z}^n}|\hat{p}_k|
e^{|k|(s-\frac{\sigma}{4})}\\
\nonumber&\leq&
\sum_{k\in\mathbb{Z}^n}|\hat{p}_k|
e^{|k|(s-\frac{\sigma}{4})}e^{\mathbf{a}(|
\sum_{b=1}^nk_bj_b+\mathbf{m}|+|\sum_{b=1}^nk_bj_b|-|\mathbf{m}|)}
\\
\nonumber&\leq&\sum_{k\in\mathbb{Z}^n}|\hat{p}_k|
e^{|k|(s-\frac{\sigma}{4})}e^{\mathbf{a}|
\sum_{b=1}^nk_bj_b+\mathbf{m}|}e^{\mathbf{a}C_J|k|}
e^{-\mathbf{a}|\mathbf{m}|}
\\
\label{17.11.4.2}&\leq&e^{-\mathbf{a}|\mathbf{m}|}|p|_{s,\mathbf{a},\mathbf{m}}.
\end{eqnarray}
Combining \eqref{17.7.20.20}, \eqref{17.11.3.2} and \eqref{17.11.4.2}, we obtain \eqref{17.11.3.1}.
\end{proof}
For any positive number $K$, we introduce a truncation operator $\Gamma_K$ as follows:
$$(\Gamma_Kf)(x):=\sum_{|k|\leq K}\hat{f}_ke^{\mathbf{i}k\cdot x},\quad \forall \ f:\mathbb{T}^n\rightarrow\mathbb{C},$$
where $\hat{f}_k$ is the $k$-Fourier coefficient of $f$. The following lemma is Lemma 2.6 in \cite{L-Y2} with the momentum majorant norm estimate in place of the sup-norm estimate; see \eqref{17.9.20.7} and \eqref{17.9.20.8}.
\begin{lem}\label{lem17.9.22.1}Consider the first-order partial differential equation with the truncation operator $\Gamma_K$
\begin{equation}\label{17.7.20.9}-\mathbf{i}\partial_{\omega}u+\lambda u+\Gamma_K(\mu u)=\Gamma_Kp,\quad x\in\mathbb{T}^n\end{equation}
for the unknown function $u$ defined on the torus $\mathbb{T}^n$, where $\omega\in\mathbb{R}^n$, $0\neq\lambda\in\mathbb{C}$, $0<2K|\omega|\leq|\lambda|$, $|\omega|:=\max_{1\leq\nu\leq n}|\omega_{\nu}|$.
Assume that $\mu$ is real analytic in $x\in D(s)$ with
\begin{equation}\label{17.7.20.10}
\sum_{k\in\mathbb{Z}^n}|\hat{\mu}_k|e^{|k|s}
e^{\mathbf{a}|\pi(k,0)|}
\leq\frac{|\lambda|}{4}\end{equation}
for some constant $\mathbf{a}\geq0$, and assume $p(x)$ is analytic in $x\in D(s)$ with finite momentum majorant norm.
Then \eqref{17.7.20.9} has a unique solution $u=\Gamma_Ku$ and
\begin{equation}\label{17.9.20.7}|u|_{s,\mathbf{a},\mathbf{m}}\leq\frac{4}
{|\lambda|}|p|_{s,\mathbf{a},\mathbf{m}},\end{equation}
\begin{equation}\label{17.9.20.8}|(1-\Gamma_K)(\mu u)|_{s-\sigma,\mathbf{a},\mathbf{m}}\leq e^{-K\sigma}
|p|_{s,\mathbf{a},\mathbf{m}}\end{equation}
for $0<\sigma<s$.
\end{lem}
\begin{proof}The proof of this lemma is parallel to that of Lemma 2.6 in \cite{L-Y2}. Nevertheless, we give the details for completeness. Suppose that \eqref{17.7.20.9} has a solution with $u=\Gamma_Ku$. Then we can write
$u(\phi)=\sum_{|k|\leq K}\hat{u}_ke^{\mathbf{i}k\cdot\phi}$. Inserting this formula into \eqref{17.7.20.9} and checking the coefficients of the mode $e^{\mathbf{i}k\cdot\phi}$, we can change \eqref{17.7.20.9} into
\begin{equation}\label{17.9.20.2}(\Lambda+\tilde{\mu})\hat{u}=\hat{p},\end{equation}
where $\Lambda=\mbox{diag}(k\cdot\omega+\lambda:|k|\leq K)$, $\tilde{\mu}=(\hat{\mu}_{k-l})_{|k|,|l|\leq K}$, $\hat{u}=(\hat{u}_k)_{|k|\leq K}$ and
$\hat{p}=(\hat{p}_k)_{|k|\leq K}$.
Recall we have assumed $0<K|\omega|\leq\frac{|\lambda|}{2}$ in this lemma. It follows that
\begin{equation}\label{17.9.20.6}|k\cdot\omega+\lambda|\geq\frac{|\lambda|}{2}\quad \mbox{for}\ |k|\leq K.\end{equation}
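Indeed, assuming $|k|$ denotes the $\ell^1$-norm, \eqref{17.9.20.6} is just the triangle inequality combined with the assumption $2K|\omega|\leq|\lambda|$: for $|k|\leq K$,
$$|k\cdot\omega+\lambda|\geq|\lambda|-\sum_{\nu=1}^n|k_{\nu}||\omega_{\nu}|
\geq|\lambda|-|k|\,|\omega|\geq|\lambda|-K|\omega|\geq\frac{|\lambda|}{2}.$$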
Set $\Omega=\mbox{diag}(e^{|k|s}e^{\mathbf{a}|\pi(k,\mathbf{m})|}:|k|\leq K)$. Then we have
$$\Omega\tilde{\mu}\Omega^{-1}=(\hat{\mu}_{k-l}e^{(|k|-|l|)s}e^{\mathbf{a}
(|\pi(k,\mathbf{m})|-|\pi(l,\mathbf{m})|)})_{|k|,|l|\leq K}.$$
Furthermore,
\begin{eqnarray}\nonumber||\Omega\tilde{\mu}\Omega^{-1}||_{\ell^1\rightarrow
\ell^1}&=&\max_{|l|\leq K}\sum_{|k|\leq K}|\hat{\mu}_{k-l}|e^{(|k|-|l|)s}e^{\mathbf{a}(|\pi(k,\mathbf{m})
|-|\pi(l,\mathbf{m})|)}\\
\nonumber&\leq&\max_{|l|\leq K}\sum_{|k|\leq K}|\hat{\mu}_{k-l}|e^{(|k-l|)s}e^{\mathbf{a}|\pi(k-l,0)|}\\
\nonumber&\leq&\sum_{k\in\mathbb{Z}^n}|\hat{\mu}_{k}
|e^{|k|s}e^{\mathbf{a}|\pi(k,0)|}\\
\label{17.9.20.5}&\leq&\frac{|\lambda|}{4},
\end{eqnarray}
where we have used \eqref{17.7.20.10} in the last inequality. Consequently, in view of \eqref{17.9.20.2}, we have
\begin{eqnarray}
\nonumber |p|_{s,\mathbf{a},\mathbf{m}}\geq|\Gamma_Kp|_{s,\mathbf{a},\mathbf{m}}
=||\Omega\hat{p}||_{\ell^1}&=&||\Omega
(\Lambda+\tilde{\mu})\hat{u}||_{\ell^1}\\
\nonumber&\geq&||\Omega\Lambda\hat{u}||_{\ell^1}-||\Omega\tilde{\mu}\hat{u}||
_{\ell^1}\\\nonumber&\geq&||\Lambda\Omega\hat{u}||_{\ell^1}-
||\Omega\tilde{\mu}\Omega^{-1}||
_{\ell^1\rightarrow\ell^1}||\Omega\hat{u}||_{\ell^1}\\
\label{17.9.20.4}&\geq&\frac{|\lambda|}{2}||\Omega\hat{u}||_{\ell^1}-
\frac{|\lambda|}{4}||\Omega\hat{u}||_{\ell^1}\\
\nonumber&=&\frac{|\lambda|}{4}||\Omega\hat{u}||_{\ell^1}\\
\label{17.9.20.3}&=&
\frac{|\lambda|}{4}|u|_{s,\mathbf{a},\mathbf{m}},
\end{eqnarray}
where inequalities \eqref{17.9.20.6} and \eqref{17.9.20.5} are used in \eqref{17.9.20.4}. From \eqref{17.9.20.3}, we get \eqref{17.9.20.7}. The proof of \eqref{17.9.20.8} is as follows:
\begin{eqnarray}
\nonumber|(1-\Gamma_K)(\mu u)|_{s-\sigma,\mathbf{a},\mathbf{m}}&\leq&\sum_{|k|>K}|
\sum_{|l|\leq K}\hat{\mu}_{k-l}\hat{u}_l|e^{|k|(s-\sigma)}e^{\mathbf{a}|\pi(k,\mathbf{m})|}\\
\nonumber&\leq&e^{-K\sigma}\sum_{|k|>K}|\sum_{|l|\leq K}\hat{\mu}_{k-l}\hat{u}_l|e^{|k|s}e^{\mathbf{a}|
\pi(k,\mathbf{m})|}\\
\nonumber&\leq& e^{-K\sigma}(\sum_{k\in\mathbb{Z}^n}|\hat{\mu}_{k}|e^{|k|s}e^{\mathbf{a}|\pi(k,0)
|})(\sum_{|k|\leq K}|\hat{u}_k|e^{|k|s}e^{\mathbf{a}|\pi(k,\mathbf{m})|})\\
\nonumber&\leq&e^{-K\sigma}\sum_{|k|\leq K}|\hat{p}_k|e^{|k|s}
e^{\mathbf{a}|\pi(k,\mathbf{m})|}\leq e^{-K\sigma}|p|_{s,\mathbf{a},\mathbf{m}},
\end{eqnarray}
where the last inequality comes from \eqref{17.7.20.10} and \eqref{17.9.20.3}.
\end{proof}
\subsection{Solving the homological equations}
Consider the conditions $\delta\leq1$ and $p-q\leq1$. Without loss of generality, we assume $\delta=p-q=1$ by increasing $\delta$ and decreasing $q$ if necessary.
Let $\Omega=(\Omega_j:{j\in\mathbb{Z}_*}),\bar{\Omega}=[\Omega]$ and $\tilde{\Omega}=\Omega-[\Omega]$. Define
$$\langle k\rangle=\max\{1,|k|\},\quad\langle l\rangle_{\infty}=\max\{1,\sup_{j\in\mathbb{Z}_*}|jl_j|\}.$$
Equations \eqref{14.2.4.10}-\eqref{14.2.4.16} will be solved under the following conditions: uniformly on $\mathcal{O}$,
\begin{eqnarray}
\label{17.7.20.22}|\langle l,\bar{\Omega}(\xi)\rangle|&\geq& m|\sum_{j\in\mathbb{Z}_*}j^2l_j|,\quad |l|\leq2,\\
\label{17.7.20.23}|\tilde{\Omega}_j(\xi)|_{s,\tau+1}+
|\tilde{\Omega}_j(\xi)|_{s,\mathbf{a},0}
&\leq&\alpha_1\gamma_0|j|,\quad j\in\mathbb{Z}_*,\\
\label{17.9.21.1}|\langle k,\omega(\xi)\rangle+\langle l,\bar{\Omega}(\xi)\rangle|&\geq&\alpha_1\frac{\langle l\rangle_{\infty}}{\langle k\rangle^{\tau}},
\quad k\in\mathbb{Z}^n\setminus\{0\}, |l|\leq2, \ l\neq e_{-j}-e_j,\\
\label{17.7.20.21}|\langle k,\omega(\xi)\rangle+ \bar{\Omega}_{-j}(\xi)-\bar{\Omega}_{j}(\xi)|&\geq&\alpha_2\frac{|j|}{\langle k\rangle^{\tau}},\quad k\in\mathbb{Z}^n,\pm j\in\mathbb{Z}_*,|j|\leq\Pi,
\end{eqnarray}
with constants $\tau\geq n,0<\gamma_0\leq\frac14$, $\Pi>0$, $0<\alpha_2\leq\alpha_1\leq m$, $0<s<1$, $0<\mathbf{a}\leq\frac{s}{80C_J}$.
We mention that $\alpha_1,\alpha_2,m,\mathbf{a},s,\Pi$
will be the iteration parameters $\alpha_{1,\nu},\alpha_{2,\nu},m_{\nu},\mathbf{a}_{\nu},s_{\nu},\Pi_{\nu}$ in the $\nu$-th KAM step.
Equations \eqref{14.2.4.10} and \eqref{14.2.4.11} can be easily solved by a standard approach in classical, finite dimensional KAM theory; we only give the related results at the end of this subsection. Equations \eqref{14.2.4.12}-\eqref{14.2.4.15} are easier than \eqref{14.2.4.16} and can be solved in the same way, so we only give the details of solving \eqref{14.2.4.16}.
Let $C_0=2|\omega|_{\mathcal{O}}/m$ and let $K$ be a positive number which will be the iteration parameter $K_{\nu}$ in the $\nu$-th KAM step. Recall the definition of the momentum $\pi(k,\alpha,\beta)$ of the scalar monomial $e^{\mathbf{i}k\cdot x}y^lz^{\alpha}\bar{z}^{\beta}$ in \eqref{2.1.14};
the momentum for $e^{\mathbf{i}k\cdot x}z_i\bar{z}_j$ is
\begin{equation*}\pi(k,i,j)=\sum_{b=1}^nk_bj_b+i-j=\pi(k,i-j),\end{equation*}
where in the last equality we use \eqref{17.11.4.1}.
\begin{itemize}
\item[(1)] For $\max\{|i|,|j|\}\leq C_0K, i\neq \pm j$, we solve exactly \eqref{14.2.4.16}:
\begin{equation}\label{17.7.26.3}\partial_{\omega}F^{z\bar{z}}_{ij}+\mathbf{i}\ (\Omega_i-\Omega_j)F^{z\bar{z}}_{ij}=R^{z\bar{z}}_{ij};\end{equation}
\item[(2)] For $\max\{|i|,|j|\}>C_0K, i\neq \pm j$, we solve the truncated equation of \eqref{14.2.4.16}:
\begin{equation}\label{17.7.26.4}\partial_{\omega}F^{z\bar{z}}_{ij}+\mathbf{i}\ \Gamma_K((\Omega_i-\Omega_j)F^{z\bar{z}}_{ij})=\Gamma_KR^{z\bar{z}}_{ij},
\quad \Gamma_KF^{z\bar{z}}_{ij}=F^{z\bar{z}}_{ij};\end{equation}
\item[(3)] For $i=-j$ and $|j|\leq\Pi$, we solve exactly \eqref{14.2.4.16}:
\begin{equation}\label{17.9.20.1}\partial_{\omega}F^{z\bar{z}}_{(-j)j}
+\mathbf{i}(\Omega_{-j}-
\Omega_j)
F_{(-j)j}^{z\bar{z}}=R_{(-j)j}^{z\bar{z}};\end{equation}
\item[(4)] For $i=-j$ and $|j|>\Pi$, let
\begin{equation}\label{17.9.26.4}F^{z\bar{z}}_{(-j)j}=0 \end{equation}
and put $R^{z\bar{z}}_{(-j)j}$ into the perturbation.
\end{itemize}
Combining \eqref{17.7.26.3}-\eqref{17.9.26.4}, we find that \eqref{17.7.21.10} does not vanish. Actually, at this time, \eqref{17.7.21.10} is equal to $\langle \hat{R}^{z\bar{z}}z,\bar{z}\rangle$ with the elements of $\hat{R}^{z\bar{z}}$ being defined by
\begin{equation}\label{17.7.28.1}\hat{R}^{z\bar{z}}_{ij}=\begin{cases}0,&
\max\{|i|,|j|\}\leq C_0K,\ i\neq \pm j,\\
(1-\Gamma_K)(-\mathbf{i}(\Omega_i-\Omega_j)F^{z\bar{z}}_{ij}+
R^{z\bar{z}}_{ij}),& \max\{|i|,|j|\}>C_0K,\ i\neq \pm j,\\
0,& i=-j,\ |j|\leq \Pi,\\
R^{z\bar{z}}_{(-j)j},& i=-j,\ |j|>\Pi.\end{cases}\end{equation}
Letting $\Omega_{ij}=\Omega_i-\Omega_j=\bar{\Omega}_{ij}+\tilde{\Omega}_{ij}$ and dropping the superscript $z\bar{z}$ for brevity, \eqref{17.7.26.3}-\eqref{17.7.28.1} become
\begin{equation}\label{17.7.28.2}-\mathbf{i}\partial_{\omega}F_{ij}
+\bar{\Omega}_{ij}F_{ij}+
\tilde{\Omega}_{ij}F_{ij}=-\mathbf{i}\ R_{ij},\end{equation}
\begin{equation}\label{17.7.28.3}-\mathbf{i}\partial_{\omega}F_{ij}+
\bar{\Omega}_{ij}F_{ij}+
\Gamma_K(\tilde{\Omega}_{ij}F_{ij})=-
\mathbf{i}\ \Gamma_KR_{ij},\quad \Gamma_KF_{ij}=F_{ij},\end{equation}
\begin{equation}\label{17.9.22.1}-\mathbf{i}\partial_{\omega}F_{(-j)j}+
\bar{\Omega}_{(-j)j}F_{(-j)j}+\tilde{\Omega}_{(-j)j}F_{(-j)j}
=-\mathbf{i}R_{(-j)j},\end{equation}
\begin{equation}F_{(-j)j}=0,\end{equation}
\begin{equation}\label{17.7.28.4}\hat{R}_{ij}=\begin{cases}0,&
\max\{|i|,|j|\}\leq C_0K,i\neq \pm j,\\
(1-\Gamma_K)(-\mathbf{i}\tilde{\Omega}_{ij}F_{ij}+
R_{ij}),& \max\{|i|,|j|\}>C_0K,i\neq \pm j,\\
0,&i=-j,|j|\leq\Pi,\\
R_{(-j)j},& i=-j,|j|>\Pi.\end{cases}\end{equation}
We are now in position to solve the homological equation \eqref{17.7.28.2}-\eqref{17.9.22.1} by using Lemma \ref{lem17.7.31.1} and Lemma \ref{lem17.9.22.1} in the last subsection.
In what follows the notation $a\lessdot b$ stands for ``there exists a positive constant $c$, depending only on $n$ and $\tau$, such that $a\leq cb$.''
First, let us consider \eqref{17.7.28.2} for $(i,j)$ with $\max\{|i|,|j|\}\leq C_0K,i\neq \pm j$. From \eqref{17.9.21.1}, we get
\begin{equation}\label{17.7.28.5}|\langle k,\omega\rangle|\geq\frac{\alpha_1}{|k|^{\tau}},\quad k\in\mathbb{Z}^n\setminus\{0\},\end{equation}
\begin{equation}\label{17.7.28.6}|\langle k,\omega\rangle+\bar{\Omega}_{ij}|\geq
\frac{\alpha_1\max\{|i|,|j|\}}{1+|k|^{\tau}},
\quad k\in\mathbb{Z}^n.\end{equation}
From \eqref{17.7.20.23} we get
\begin{eqnarray}\label{17.7.31.1}|\tilde{\Omega}_{ij}|_{s,\tau+1}\leq
\alpha_1\gamma_0(|i|+|j|)
\leq2\alpha_1\gamma_0
\max\{|i|,|j|\}.\end{eqnarray}
Setting $\sigma=\frac{s}{20}$, we get $4\mathbf{a}C_J\leq\sigma$, since $\mathbf{a}\leq\frac{s}{80C_J}$.
Applying Lemma \ref{lem17.7.31.1} to \eqref{17.7.28.2}, we have
\begin{equation}|F_{ij}|_{s-\sigma,\mathbf{a},i-j}
\lessdot\frac{e^{4\gamma_0\max\{|i|,|j|\}s}}{\alpha_1
\max\{|i|,|j|\}\sigma^{2n+\tau}}
|R_{ij}|_{s,\mathbf{a},i-j}.\end{equation}
In view of $\max\{|i|,|j|\}\leq C_0K$, we get
\begin{equation}\label{17.7.31.2}|F_{ij}|_{s-\sigma,\mathbf{a},i-j}
\lessdot\frac{e^{4C_0\gamma_0Ks}}{\alpha_1\max\{|i|,|j|\}\sigma^{2n+\tau}}
|R_{ij}|_{s,\mathbf{a},i-j}.\end{equation}
Next, let us consider \eqref{17.7.28.3} for $(i,j)$ with $\max\{|i|,|j|\}>C_0K,i\neq \pm j$. From \eqref{17.7.20.22}, \eqref{17.7.20.23} and $C_0=2|\omega|_{\mathcal{O}}/m$, we get
\begin{equation}|\bar{\Omega}_{ij}|\geq m|i^2-j^2|\geq m\max\{|i|,|j|\}>mC_0K=2|\omega|_{\mathcal{O}}K,\end{equation}
$$|\tilde{\Omega}_{ij}|_{s,\mathbf{a},0}
\leq\alpha_1\gamma_0(|i|+|j|)
\leq\frac{\alpha_1\gamma_0|\bar{\Omega}_{ij}|}{m}
\leq\frac{|\bar{\Omega}_{ij}|}{4}
.$$
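Here the second inequality uses $|\bar{\Omega}_{ij}|\geq m(|i|+|j|)$, which can be seen as follows: for integers $i\neq\pm j$,
$$|i^2-j^2|=|i-j|\,|i+j|\geq\max\{|i-j|,|i+j|\}=|i|+|j|,$$
since both factors are nonzero integers. The last inequality uses $\gamma_0\leq\frac14$ and $\alpha_1\leq m$, so that hypothesis \eqref{17.7.20.10} holds with $\lambda=\bar{\Omega}_{ij}$.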
Now applying Lemma \ref{lem17.9.22.1} to \eqref{17.7.28.3}, we have
\begin{equation}\label{17.9.12.1}|F_{ij}|_{s,\mathbf{a},i-j}\lessdot
\frac{1}{m\max\{|i|,|j|\}}|R_{ij}|_{s,\mathbf{a},i-j},\end{equation}
\begin{equation}\label{17.9.12.2}|(1-\Gamma_K)
(\tilde{\Omega}_{ij}F_{ij})|_{s,\mathbf{a},i-j}
\leq e^{-K\sigma}
|R_{ij}|_{s,\mathbf{a},i-j}.\end{equation}
Finally, let us consider \eqref{17.9.22.1} for $i=-j$. From \eqref{17.9.21.1} and \eqref{17.7.20.21}, we get
\begin{equation}|\langle k,\omega\rangle|\geq\frac{\alpha_1}{\langle k\rangle^{\tau}},\quad k\in\mathbb{Z}^n\setminus\{0\},\end{equation}
\begin{equation}|\langle k,\omega\rangle+\bar{\Omega}_{-j}-\bar{\Omega}_j|
\geq\frac{\alpha_2|j|}{1+|k|^{\tau}},\quad k\in\mathbb{Z}^n.\end{equation}
From \eqref{17.7.20.23}, we get
\begin{equation}|\tilde{\Omega}_{-j}-\tilde{\Omega}_{j}|_{s,\tau+1}\leq
2\alpha_1\gamma_0|j|.\end{equation}
Applying Lemma \ref{lem17.7.31.1} to \eqref{17.9.22.1}, we have
\begin{equation}\label{17.9.26.1}|F_{(-j)j}|_{s-\sigma,\mathbf{a},-2j}
\lessdot\frac{e^{4\gamma_0|j|s}}{\alpha_2
|j|\sigma^{2n+\tau}}
|R_{(-j)j}|_{s,\mathbf{a},-2j}.\end{equation}
In view of $|j|\leq \Pi$, we get
\begin{equation}\label{17.9.26.2}|F_{(-j)j}|_{s-\sigma,\mathbf{a},-2j}
\lessdot\frac{e^{4\gamma_0\Pi s}}{\alpha_2
|j|\sigma^{2n+\tau}}
|R_{(-j)j}|_{s,\mathbf{a},-2j}.\end{equation}
By the definition of the Hamiltonian vector field in \eqref{7.13.2}, we obtain
\begin{equation}X_{\langle F^{z\bar{z}}z,\bar{z}\rangle}=
(0,-(\sigma_{j_b}\partial_{x_b}\langle F^{z\bar{z}}z,\bar{z}\rangle)_{1\leq b\leq n},-\mathbf{i}(\sigma_j\partial_{\bar{z}_j}\langle F^{z\bar{z}}z,\bar{z}\rangle
)_{j\in\mathbb{Z}_*},\mathbf{i}(\sigma_j\partial_{z_j}\langle F^{z\bar{z}}z,\bar{z}\rangle
)_{j\in\mathbb{Z}_*})^T.\end{equation}
Using Lemma \ref{lem17.9.7.1} and \eqref{17.7.31.2}, \eqref{17.9.12.1}, \eqref{17.9.26.2}, we get
\begin{eqnarray}\nonumber||X_{\langle F^{z\bar{z}}z,\bar{z}\rangle}||_{s-2\sigma,r,p,\mathbf{a}}
&\lessdot&\frac{\max\{e^{4\gamma_0C_0Ks},e^{4\gamma_0\Pi s}\}}{\alpha_2\sigma^{3n+\tau}}||X_{\langle R^{z\bar{z}}z,\bar{z}\rangle}||_{s,r,p-1,\mathbf{a}}\\
\label{17.9.27.3}&\leq&
\frac{\max\{e^{4\gamma_0C_0Ks},e^{4\gamma_0\Pi s}\}}{\alpha_2\sigma^{3n+\tau}}||X_R||_{s,r,p-1,\mathbf{a}}.\end{eqnarray}
To obtain the Lipschitz semi-norm, we proceed as follows. Abbreviating $\Delta_{\xi\eta}$ as $\Delta$ and applying it to
\eqref{17.7.28.2}-\eqref{17.9.22.1}, one gets that, for $(i,j)$ with $\max\{|i|,|j|\}\leq C_0K$,
\begin{equation}\label{17.9.13.1}\mathbf{i}\partial_{\omega}(\Delta F_{ij})+\bar{\Omega}_{ij}\Delta F_{ij}+\tilde{\Omega}_{ij}\Delta F_{ij}=-\mathbf{i}\partial_{\Delta\omega}F_{ij}-(\Delta\Omega_{ij})F_{ij}+\mathbf{i}\Delta R_{ij}:=Q_{ij},\end{equation}
for $(i,j)$ with $\max\{|i|,|j|\}>C_0K$,
\begin{equation}\label{17.9.13.2}\mathbf{i}\partial_{\omega}(\Delta F_{ij})+\bar{\Omega}_{ij}\Delta F_{ij}+\Gamma_K(\tilde{\Omega}_{ij}\Delta F_{ij})
=-\mathbf{i}\partial_{\Delta{\omega}}F_{ij}-\Gamma_K((\Delta\Omega_{ij})
F_{ij}-\mathbf{i}\Delta R_{ij}):=Q_{ij},\end{equation}
and that, for $i=-j$ and $|j|\leq\Pi$,
\begin{eqnarray}\nonumber&&\mathbf{i}\partial_{\omega}(\Delta F_{(-j)j})+\bar{\Omega}_{(-j)j}\Delta F_{(-j)j}+\tilde{\Omega}_{(-j)j}\Delta F_{(-j)j}\\
\label{17.9.27.1}&&=-\mathbf{i}\partial_{\Delta\omega}F_{(-j)j}
-(\Delta\Omega_{(-j)j})F_{(-j)j}+
\mathbf{i}\Delta R_{(-j)j}:=Q_{(-j)j}.\end{eqnarray}
For $\max\{|i|,|j|\}\leq C_0K$, we have
\begin{eqnarray}\nonumber|Q_{ij}|_{s-2\sigma,\mathbf{a},i-j} &\leq&|\Delta\omega|\cdot\sup_{k\in\mathbb{Z}^n}|k|e^{-|k|\sigma}\cdot
|F_{ij}|_{s-\sigma,\mathbf{a},i-j}\\
\nonumber&&+|\Delta\Omega_{ij}|_{s-2\sigma,
\mathbf{a},0}\cdot|F_{ij}|_{s-2\sigma,\mathbf{a},i-j}+|\Delta R_{ij}|_{s-2\sigma,\mathbf{a},i-j}\\
\nonumber &\lessdot&\frac{e^{4C_0\gamma_0Ks}}{\alpha_1\sigma^{2n+\tau+1}}
(|\Delta\omega|+|\Delta\Omega|_{-1,s-2\sigma,\mathbf{a},0})
|R_{ij}|_{s,\mathbf{a},i-j}+|\Delta R_{ij}|_{s-2\sigma,\mathbf{a},i-j}\\
\label{17.9.13.3}&\lessdot&\frac{e^{4C_0\gamma_0Ks}}{\sigma^{2n+\tau+1}}
(\frac{|\Delta\omega|+|\Delta\Omega|_{-1,s-2\sigma,\mathbf{a},0}}{\alpha_1}
|R_{ij}|_{s,\mathbf{a},i-j}
+|\Delta R_{ij}|_{s-2\sigma,\mathbf{a},i-j}).
\end{eqnarray}
Applying Lemma \ref{lem17.7.31.1} to \eqref{17.9.13.1}, we have
\begin{eqnarray}\label{17.9.13.4}&&|\Delta F_{ij}|_{s-3\sigma,\mathbf{a},i-j}\\
\nonumber&\lessdot&\frac{e^{8C_0\gamma_0Ks}}
{\alpha_1\max\{|i|,|j|\}\sigma^{4n+2\tau+1}}
(\frac{|\Delta\omega|+|\Delta{\Omega}|_{-1,s-2\sigma,
\mathbf{a},0}}{\alpha_1}
|R_{ij}|_{s,\mathbf{a},i-j}+|\Delta R_{ij}|_{s-2\sigma,\mathbf{a},i-j}).
\end{eqnarray}
For $\max\{|i|,|j|\}>C_0K$, similarly to \eqref{17.9.13.3}, we have
\begin{equation}\label{17.9.13.7}|Q_{ij}|_{s-\sigma,\mathbf{a},i-j}
\lessdot\frac{1}{\sigma}
(\frac{|\Delta\omega|+
|\Delta\Omega|_{-1,s-\sigma,\mathbf{a},0}}{m}
|R_{ij}|_{s-\sigma,\mathbf{a},i-j}+|\Delta R_{ij}|_{s-\sigma,\mathbf{a},i-j}).
\end{equation}
Applying Lemma \ref{lem17.9.22.1} to \eqref{17.9.13.2}, we have
\begin{eqnarray}\nonumber&&|\Gamma_K\Delta F_{ij}|_{s-\sigma,\mathbf{a},i-j}\\
\label{17.9.13.5}&\lessdot&\frac{1}{m\max\{|i|,|j|\}\sigma}
(\frac{|\Delta\omega|+|\Delta\Omega|_{-1,s-\sigma,\mathbf{a},0}}{m}
|R_{ij}|_{s,\mathbf{a},i-j}+|\Delta R_{ij}|_{s-\sigma,\mathbf{a},i-j}),
\end{eqnarray}
\begin{eqnarray}
|(1-\Gamma_K)(\tilde{\Omega}_{ij}\Delta F_{ij})|_{s-\sigma,\mathbf{a},i-j}
\label{17.9.13.6}\lessdot
\frac{e^{-K\sigma}}{\sigma}
(\frac{|\Delta\omega|+
|\Delta\Omega|_{-1,s-\sigma,\mathbf{a},0}}{m}
|R_{ij}|_{s,\mathbf{a},i-j}+|\Delta R_{ij}|_{s-\sigma,\mathbf{a},
i-j}).
\end{eqnarray}
For $i=-j$ with $|j|\leq\Pi$, similarly to \eqref{17.9.13.4}, we have
\begin{eqnarray}\nonumber&&|\Delta F_{(-j)j}|
_{s-3\sigma,\mathbf{a},-2j}\\
\label{17.9.27.2}&\lessdot&\frac{e^{8\gamma_0\Pi s}}{
\alpha_2|j|\sigma^{4n+2\tau+1}}
\big(\frac{|\Delta\omega|+|\Delta \Omega|_{-1,s-2\sigma,\mathbf{a},0}}{\alpha_2}
|R_{(-j)j}|_{s,\mathbf{a},-2j}
+|\Delta R_{(-j)j}|_{s-2\sigma,\mathbf{a},-2j}\big).
\end{eqnarray}
In view of \eqref{17.9.13.4}, \eqref{17.9.13.5} and \eqref{17.9.27.2}, using Lemma \ref{lem17.9.7.1}, we get the estimate of the Hamiltonian vector field $X_{\langle\Delta F^{z\bar{z}}z,\bar{z}\rangle}$:
\begin{eqnarray*}
\label{17.9.27.4}\nonumber||X_{\langle \Delta F^{z\bar{z}}z,\bar{z}\rangle}||_{s-4\sigma,r,p,\mathbf{a}}
&\lessdot&
\frac{\max\{e^{8C_0\gamma_0Ks},e^{8\gamma_0\Pi s}\}}
{\alpha_2\sigma^{4n+2\tau+2}}(\frac{|\Delta\omega|+
|\Delta\Omega|_{-1,s-4\sigma,\mathbf{a},0}}{\alpha_2}||X_{\langle R^{z\bar{z}}z,\bar{z}\rangle}||_{s,r,p-1,\mathbf{a}}\\
\nonumber&&+||X_{\langle\Delta R^{z\bar{z}}z,\bar{z}\rangle}||_{s,r,p-1,\mathbf{a}}).\end{eqnarray*}
Dividing by $|\xi-\eta|\neq0$ and taking the supremum over $\mathcal{O}$, we get
\begin{eqnarray} \label{17.9.27.5}||X_{\langle F^{z\bar{z}}z,\bar{z}\rangle}||^{lip}_{s-4\sigma,r,p,\mathbf{a}
;\mathcal{O}}
&\lessdot&\frac{\max\{e^{8C_0\gamma_0Ks},e^{8\gamma_0\Pi s}\}}
{\alpha_2\sigma^{4n+2\tau+2}}\\
\nonumber&&\cdot(\frac{M}{\alpha_2}||X_{\langle R^{z\bar{z}}z,\bar{z}\rangle}||_{s,r,p-1,\mathbf{a};\mathcal{O}}+
||X_{\langle R^{z\bar{z}}z,\bar{z}\rangle}||^{lip}_{s,r,p-1,\mathbf{a};\mathcal{O}}),
\end{eqnarray}
where $M:=|\omega|_{\mathcal{O}}^{lip}+
|\Omega|_{-1,s,\mathbf{a},0;\mathcal{O}}^{lip}$.
Recall the definition of $||X||^{\lambda}_{s,r,q,\mathbf{a};\mathcal{O}}$ in \eqref{17.9.19.2}.
Set $0\leq\lambda\leq\alpha_2/M$. From \eqref{17.9.27.3} and \eqref{17.9.27.5}, we get
\begin{equation}\label{17.9.13.12}||X_{\langle F^{z\bar{z}}z,\bar{z}\rangle}||_{s-4\sigma,r,p,\mathbf{a};
\mathcal{O}}^{\lambda}
\lessdot\frac{\max\{e^{8C_0\gamma_0Ks},e^{8\gamma_0\Pi s}\}}
{\alpha_2\sigma^{4n+2\tau+2}}||X_R||_{s,r,p-1,\mathbf{a};
\mathcal{O}}^{\lambda}
.\end{equation}
Treating the homological equations \eqref{14.2.4.10} and \eqref{14.2.4.11} by a standard approach in finite dimensional KAM theory, we can easily get
\begin{equation}\label{17.7.25.2}||X_{F^x}||_{s-\sigma,r,p,\mathbf{a};
\mathcal{O}},||X_{\langle F^y,y\rangle}||_{s,r,p,\mathbf{a};\mathcal{O}}\lessdot
\frac{1}{\alpha_1\sigma^{\tau}}||X_R||_{s,r,p-1,\mathbf{a};\mathcal{O}}
,\end{equation}
\begin{equation}\label{17.7.25.3}||X_{F^x}||_{s-2\sigma,r,p,\mathbf{a};
\mathcal{O}}^{lip},||X_{\langle F^y,y\rangle}||_{s-2\sigma,r,p,\mathbf{a};\mathcal{O}}^{lip}
\lessdot\frac{1}{\alpha_1\sigma^{2\tau+1}}\big(\frac{M}{\alpha_1}
||X_R||_{s,r,p-1,\mathbf{a};\mathcal{O}}+
||X_R||_{s,r,p-1,\mathbf{a};\mathcal{O}}^{lip}\big)
.\end{equation}
From \eqref{17.7.25.2} and \eqref{17.7.25.3}, we get
\begin{equation}\label{17.9.13.13}||X_{F^x}||_{s-2\sigma,r,p,\mathbf{a};
\mathcal{O}}^{\lambda},||X_{\langle F^y,y\rangle}||_{s-2\sigma,r,p,\mathbf{a};\mathcal{O}}^{\lambda}
\lessdot\frac{1}{\alpha_1\sigma^{2\tau+1}}
||X_R||^{\lambda}_{s,r,p-1,\mathbf{a};\mathcal{O}}.\end{equation}
For the other terms of $F$, i.e., $\langle F^z,z\rangle,\langle F^{\bar{z}},\bar{z}\rangle$, $\langle F^{zz}z,z\rangle,$ $\langle F^{\bar{z}\bar{z}}\bar{z},\bar{z}\rangle$, the same estimates as \eqref{17.9.13.12}, or even better ones, can be obtained. Thus, we finally get the estimate for $F$:
\begin{equation}\label{17.10.11.9}||X_{F}||_{s-4\sigma,r,p,\mathbf{a};
\mathcal{O}}^{\lambda}
\lessdot\frac{\max\{e^{8C_0\gamma_0Ks},e^{8\gamma_0\Pi s}\}}
{\alpha_2\sigma^{4n+2\tau+2}}||X_R||_{s,r,p-1,\mathbf{a};
\mathcal{O}}^{\lambda}.\end{equation}
\section{The New Hamiltonian}
From \eqref{17.7.21.3}-\eqref{17.7.21.11} we get the new Hamiltonian
\begin{equation}H\circ\Phi=N_++P_+,\end{equation}
where $N_+=\eqref{17.7.21.3}$ and
\begin{equation}\label{17.10.11.8}P_+=\hat{R}+\int_{0}^1\{(1-t)(\hat{N}+\hat{R})+tR,F\}\circ \Phi_F^tdt+(P-R)\circ \Phi_F^1,\end{equation}
where $\hat{R}=\eqref{17.7.21.6}+\cdots+\eqref{17.7.21.10}:=
\langle\hat{R}^z,z\rangle+\langle\hat{R}^{\bar{z}},\bar{z}\rangle+\langle\hat{R}^{zz}z,z\rangle
+\langle\hat{R}^{z\bar{z}}z,\bar{z}\rangle+\langle\hat{R}^{\bar{z}\bar{z}}\bar{z},\bar{z}\rangle.$ The aim of this section is to estimate the new normal form
$N_+$ and the new perturbation $P_+$.
\subsection{The New Normal Form} In view of \eqref{17.7.21.3}, denote $N_+=N+\hat{N}$ with
$$\hat{N}=\langle\hat{\omega},y\rangle+
\sum_{j\in\mathbb{Z}_*}\hat{\Omega}_jz_j\bar{z}_j,$$
where \begin{equation}\label{17.9.13.14}\hat{\omega}:=[R^y],\end{equation}
\begin{equation}\label{17.9.13.15}\hat{\Omega}_j:=R_{jj}+\sigma_j\langle\partial_x\Omega_j,F^y\rangle
=R_{jj}+\sigma_j\langle\partial_x\tilde{\Omega}_j,F^y\rangle.\end{equation}
From \eqref{17.9.13.14} we easily get
\begin{equation}\label{17.9.14.1}|\hat{\omega}|^{\lambda}_{\mathcal{O}}
\lessdot s||X_R||_{s,r,p-1,\mathbf{a};\mathcal{O}}^{\lambda}.\end{equation}
In the following, we estimate $\hat{\Omega}=(\hat{\Omega}_j:j\in\mathbb{Z}_*)$. In view of \eqref{17.7.25.2},
\begin{equation}|\sigma_j\langle\partial_x\tilde{\Omega}_j,F^y\rangle
|_{s-\sigma,\mathbf{a},0}\lessdot (s-\sigma)\frac{|\tilde{\Omega}_j|_{s,\mathbf{a},0}}
{\sigma}||X_{\langle F^y,y\rangle}||_{s-\sigma,r,p,\mathbf{a}}
\lessdot s\frac{\gamma_0|j|}{\sigma^{\tau+1}}
||X_R||_{s,r,p-1,\mathbf{a}}.\end{equation}
Thus, together with
\begin{equation}|R_{jj}|_{s-\sigma,\mathbf{a},0}\leq |j|\cdot||X_R||_{s,r,p-1,\mathbf{a}},\end{equation}
we get
\begin{equation}\label{17.9.14.2}|\hat{\Omega}|
_{-1,s-\sigma,\mathbf{a},0}\lessdot\frac{1}{\sigma^{\tau+1}}
||X_R||_{s,r,p-1,\mathbf{a}}.\end{equation}
Applying $\Delta$ to $\hat{\Omega}_j$, we have
\begin{equation}\Delta\hat{\Omega}_j=\Delta R_{jj}+\sigma_j\langle\partial_x\Delta\tilde{\Omega}_j,F^y\rangle+\sigma_j
\langle\partial_x\tilde{\Omega}_j,\Delta F^y\rangle.\end{equation}
Since
$$|\Delta R_{jj}|_{s,\mathbf{a},0}\leq |j|\cdot||\Delta X_R||_{s,r,p-1,\mathbf{a}},$$
\begin{eqnarray*}|\langle\partial_x\Delta\tilde{\Omega}_j,F^y\rangle|_{s-\sigma,
\mathbf{a},0}&\lessdot& (s-\sigma)\frac{1}{\sigma}|\Delta\tilde{\Omega}_j|_{s,
\mathbf{a},0}||X_{\langle F^y,y\rangle}||_{s-\sigma,r,p,\mathbf{a}}\\
&\lessdot&s|j|\frac{|\Delta\tilde{\Omega}|_{-1,s,\mathbf{a},0}
}{\alpha_1\sigma^{\tau+1}}||X_R||_{s,r,p-1,\mathbf{a}}
\end{eqnarray*}
and
\begin{eqnarray*}|\langle\partial_x\tilde{\Omega}_j,\Delta F^y\rangle|_{s-2\sigma,\mathbf{a},0}
&\lessdot&(s-2\sigma)\frac{1}{\sigma}|\tilde{\Omega}_j|_{s-\sigma,\mathbf{a},0}||\Delta X_{\langle F^y,y\rangle}||_{s-2\sigma,r,p,\mathbf{a}}\\
&\lessdot&s\frac{\gamma_0|j|}{\sigma^{2\tau+2}}(\frac{M}{\alpha_1}
|| X_R||_{s,r,p-1,\mathbf{a}}+
||\Delta X_R||_{s,r,p-1,\mathbf{a}}),
\end{eqnarray*}
we get
\begin{equation}\label{17.9.14.3}|\hat{\Omega}|^{lip}
_{-1,s-2\sigma,\mathbf{a},0;\mathcal{O}}
\lessdot\frac{1}{\sigma^{2\tau+2}}
(\frac{M}{\alpha_1}||X_R||_{s,r,p-1,\mathbf{a};\mathcal{O}}
+||X_R||_{s,r,p-1,\mathbf{a};\mathcal{O}}^{lip}).
\end{equation}
Thus, from \eqref{17.9.14.2} and \eqref{17.9.14.3}, we get
\begin{equation}\label{17.9.14.4}|\hat{\Omega}|^{\lambda}
_{-1,s-2\sigma,\mathbf{a},0;\mathcal{O}}\lessdot\frac{1}{\sigma^{2\tau+2}}
||X_R||_{s,r,p-1,\mathbf{a};\mathcal{O}}^{\lambda}.\end{equation}
\subsection{The New Perturbation} We first estimate the error term $\hat{R}^{z\bar{z}}$
with its matrix elements $\hat{R}_{ij}$ in \eqref{17.7.28.4}.
For $\max\{|i|,|j|\}>C_0K,i\neq \pm j$, by \eqref{17.9.12.2} and
\begin{equation*}|(1-\Gamma_K)R_{ij}|
_{s-\sigma,\mathbf{a},i-j}\leq e^{-K\sigma}
|R_{ij}|_{s,\mathbf{a},i-j},\end{equation*}
we obtain
\begin{equation}\label{17.11.10.4}|\hat{R}_{ij}|_{s-\sigma,\mathbf{a},i-j}\leq2e^{-K\sigma}
|R_{ij}|_{s,\mathbf{a},i-j}\ ;\end{equation}
by \eqref{17.9.13.6} and
\begin{eqnarray}\nonumber|(1-\Gamma_K)(\Delta\tilde{\Omega}_{ij}\cdot F_{ij})|_{s-\sigma,\mathbf{a},i-j}&\leq&e^{-K\sigma}(|i|+|j|)
|\Delta\Omega|_{-1,s,\mathbf{a},0}
|F_{ij}|_{s,\mathbf{a},i-j}\\
\nonumber&\lessdot&\frac{e^{-K\sigma}}{m}
|\Delta\Omega|_{-1,s,\mathbf{a},0}|R_{ij}|_{s,\mathbf{a},i-j},
\end{eqnarray}
\begin{equation*}\label{17.11.9.1}|(1-\Gamma_K)\Delta R_{ij}|
_{s-\sigma,\mathbf{a},i-j}\leq e^{-K\sigma}
|\Delta R_{ij}|_{s,\mathbf{a},i-j},\end{equation*}
we obtain
\begin{equation}\label{17.11.10.5}|\Delta \hat{R}_{ij}|_{s-\sigma,\mathbf{a},i-j}
\lessdot\frac{e^{-K\sigma}}{\sigma}(\frac{|\Delta\omega|+
|\Delta\Omega|_{-1,s,\mathbf{a},0}}{m}|R_{ij}|_{s,\mathbf{a},i-j}+
|\Delta R_{ij}|_{s,\mathbf{a},i-j}).\end{equation}
For $i=-j$, $|j|>\Pi$, assume $0<(\mathbf{a}-\mathbf{a}')C_J<\sigma$; then
$$|\pi(k,-2j)|\geq|2j|-|\sum_{b=1}^nk_bj_b|\geq2\Pi-C_J|k|\geq2\Pi-
\frac{|k|\sigma}{\mathbf{a}-\mathbf{a}'},$$
and thus
\begin{eqnarray}\nonumber|\hat{R}_{(-j)j}|_{s-\sigma,\mathbf{a}',-2j}&=&
\sum_{k\in\mathbb{Z}^n}|\hat{R}_{(-j)jk}|e^{|k|(s-\sigma)}
e^{\mathbf{a}|\pi(k,-2j)|}e^{-(\mathbf{a}-\mathbf{a}')
|\pi(k,-2j)|}\\ \nonumber&\leq&\sum_{k\in\mathbb{Z}^n}|\hat{R}_{(-j)jk}|
e^{|k|(s-\sigma)}e^{\mathbf{a}|\pi(k,-2j)|}
e^{-(\mathbf{a}-\mathbf{a}')(2\Pi-\frac{|k|\sigma}{\mathbf{a}-\mathbf{a}'})}\\
\nonumber&\leq&e^{-2(\mathbf{a}-\mathbf{a}')\Pi}\sum_{k\in\mathbb{Z}^n}
|\hat{R}_{(-j)jk}|e^{|k|s}e^{\mathbf{a}|\pi(k,-2j)|}\\
\label{17.10.11.2}&=&e^{-2(\mathbf{a}-\mathbf{a}')\Pi}
|R_{(-j)j}|_{s,\mathbf{a},-2j}.\end{eqnarray}
Similarly, we have
\begin{equation}\label{17.11.9.2}|\Delta \hat{R}_{(-j)j}|_{s-\sigma,\mathbf{a}',-2j}
\leq e^{-2(\mathbf{a}-\mathbf{a}')\Pi}
|\Delta R_{(-j)j}|_{s,\mathbf{a},-2j}.\end{equation}
Using Lemma \ref{lem17.9.7.1}, from \eqref{17.11.10.4} and \eqref{17.10.11.2}, we get
\begin{eqnarray}\nonumber||X_{\langle \hat{R}^{z\bar{z}}z,\bar{z}\rangle}||_{s-2\sigma,r,p-1,\mathbf{a}'}
&\lessdot&\frac{\max\{e^{-K\sigma},
e^{-2(\mathbf{a}-\mathbf{a}')\Pi}\}}{\sigma}||X_{\langle R^{z\bar{z}}z,\bar{z}\rangle}||_{s,r,p-1,\mathbf{a}}\\
\label{17.10.9.1}&\leq&\frac{\max\{e^{-K\sigma},e^{-2(\mathbf{a}-\mathbf{a}')\Pi}
\}}{\sigma}
||X_R||_{s,r,p-1,\mathbf{a}},\end{eqnarray}
and from \eqref{17.11.10.5} and \eqref{17.11.9.2}, we get
\begin{equation}\label{17.10.11.5}||X_{\langle\hat{R}^{z\bar{z}}z,
\bar{z}\rangle}||^{lip}_{s-2\sigma,r,p-1,\mathbf{a'};\mathcal{O}}
\lessdot
\frac{\max\{e^{-K\sigma},
e^{-2(\mathbf{a}-\mathbf{a}')\Pi}\}}{\sigma^{2}}
(\frac{M}{m}||X_{R}||_{s,r,p-1,\mathbf{a};\mathcal{O}}+||X_{R}||^{lip}_{
s,r,p-1,\mathbf{a};\mathcal{O}}).\end{equation}
Therefore, from \eqref{17.10.9.1} and \eqref{17.10.11.5}, we get
\begin{equation}\label{17.10.11.6}||X_{\langle\hat{R}^{z\bar{z}}z,\bar{z}
\rangle}||^{\lambda}_{s-2\sigma,r,p-1,\mathbf{a'};\mathcal{O}}\lessdot
\frac{\max\{e^{-K\sigma},e^{-2(\mathbf{a}-\mathbf{a}')\Pi}\}}{\sigma^{2}}
||X_{R}||_{s,r,p-1,\mathbf{a};
\mathcal{O}}^{\lambda}.\end{equation}
For the other terms of $\hat{R}$, i.e., $\langle\hat{R}^z,z\rangle, \langle\hat{R}^{\bar{z}},\bar{z}\rangle,\langle\hat{R}^{zz}z,z\rangle,
\langle\hat{R}^{\bar{z}\bar{z}}\bar{z},\bar{z}\rangle$, the same estimates as \eqref{17.10.11.6}, or even better ones, can be obtained. Thus, we finally get the estimate for the error term $\hat{R}$:
\begin{equation}\label{17.10.11.7}||X_{\hat{R}}
||^{\lambda}_{s-2\sigma,r,p-1,\mathbf{a'};\mathcal{O}}
\lessdot\frac{\max\{e^{-K\sigma},
e^{-2(\mathbf{a}-\mathbf{a}')\Pi}\}}{\sigma^2}||X_{R}||_{s,r,p-1,\mathbf{a};
\mathcal{O}}^{\lambda}.\end{equation}
Now we consider the new perturbation \eqref{17.10.11.8}.
By setting $R(t)=(1-t)(\hat{N}+\hat{R})+tR$, we have
\begin{equation}
X_{P_+}=X_{\hat{R}}+\int_0^1 X_{\{R(t),F\}\circ\Phi_F^t}dt
+X_{(P-R)\circ\Phi_F^1}.
\end{equation}
We assume that
\begin{equation}\label{17.10.11.11}||X_P||_{s,r,p-1,\mathbf{a};\mathcal{O}}
^{\lambda}\leq\frac{\alpha_2\eta^2}{B_{\sigma}}\cdot
\frac{1}{\max\{e^{8C_{0}\gamma_0Ks},
e^{8\gamma_0\Pi s}\}}
\end{equation}
for $0\leq\lambda\leq\alpha_2/M$ with some $0<\eta<1$ and $0<s<1,\sigma=s/20$, where
$B_{\sigma}=c\sigma^{-(4n+4\tau+5)}$ with $c$ a sufficiently large constant depending only on $n$, $\tau$ and $|\omega|_{\mathcal{O}}$. Since $R$ is the second-order Taylor polynomial truncation of $P$ in $y,z,\bar{z}$, we can obtain
\begin{equation}\label{17.10.11.10}||X_R||^{\lambda}_{s,r,p-1,\mathbf{a};
\mathcal{O}}\leq||X_P||^{\lambda}_{s,r,p-1,\mathbf{a};\mathcal{O}}
,\end{equation}
\begin{equation}\label{17.10.12.3}||X_P-X_R||^{\lambda}_{s,\frac65\eta r,p-1,\mathbf{a};\mathcal{O}}\leq\frac65\eta||X_P||^{\lambda}
_{s,r,p-1,\mathbf{a};\mathcal{O}}.\end{equation}
From \eqref{17.10.11.9}, \eqref{17.9.14.1}, \eqref{17.9.14.4} and \eqref{17.10.11.7}, we get
\begin{equation}\label{17.12.23.3}||X_{F}||_{s-4\sigma,r,p,\mathbf{a};
\mathcal{O}}^{\lambda}\lessdot\frac{\max\{e^{8C_0\gamma_0Ks},
e^{8\gamma_0\Pi s}\}}
{\alpha_2\sigma^{4n+2\tau+2}}||X_P||_{s,r,p-1,\mathbf{a};
\mathcal{O}}^{\lambda},
\end{equation}
\begin{equation}\label{17.10.12.1}||X_{\hat{N}}||_{s-2\sigma,r,p-1,\mathbf{a};
\mathcal{O}}^{\lambda}\lessdot\frac{1}{\sigma^{2\tau+2}}
||X_P||_{s,r,p-1,\mathbf{a};\mathcal{O}}^{\lambda},\end{equation}
\begin{equation}\label{17.10.12.2}||X_{\hat{R}}
||^{\lambda}_{s-2\sigma,r,p-1,\mathbf{a'};\mathcal{O}}\lessdot
\frac{\max\{e^{-K\sigma},e^{-2(\mathbf{a}-\mathbf{a}')\Pi}\}}{\sigma^{2}}
||X_{P}||_{s,r,p-1,\mathbf{a};
\mathcal{O}}^{\lambda}.\end{equation}
Therefore, by \eqref{17.10.11.10} \eqref{17.10.12.1} \eqref{17.10.12.2}, we get
\begin{equation}\label{17.12.24.2}||X_{R(t)}||^{\lambda}_{s-2\sigma,r,p-1,\mathbf{a}';
\mathcal{O}}\lessdot\frac{1}{\sigma^{2\tau+2}}
||X_P||_{s,r,p-1,\mathbf{a};\mathcal{O}}^{\lambda}.\end{equation}
By \eqref{17.12.23.3} \eqref{17.12.24.2} and Lemma \ref{lem17.12.22.1}, we get
\begin{eqnarray}\nonumber||X_{\{R(t),F\}}||^{\lambda}_{s-5\sigma,r/2,p-1,
\mathbf{a}';\mathcal{O}}&\lessdot&||X_{R(t)}||
^{\lambda}_{s-4\sigma,r,p-1,\mathbf{a}';\mathcal{O}}
||X_F||^{\lambda}_{s-4\sigma,r,p,\mathbf{a}';\mathcal{O}}\\
\label{17.12.23.4}&\lessdot&\frac{\max\{e^{8C_0\gamma_0Ks},
e^{8\gamma_0\Pi s}\}}
{\alpha_2\sigma^{4n+4\tau+4}}(||X_P||^{\lambda}_{s,r,p-1,\mathbf{a};
\mathcal{O}})^2.\end{eqnarray}
Moreover, together with the smallness assumption \eqref{17.10.11.11}, by properly choosing $c$, we get
\begin{equation}\label{17.12.23.5}||X_F||^{\lambda}
_{s-4\sigma,r,p,\mathbf{a};\mathcal{O}}
\leq\frac{\eta^2\sigma}{c_0}\end{equation}
with some suitably large constant $c_0\geq1$ and thus
\begin{equation}\label{17.12.23.6}2^{2n+5}e\max\{\frac{s-4\sigma}
{(s-4\sigma)-(s-5\sigma)},\frac{r}{r-r/2}\}
||X_F||^{\lambda}_{s-4\sigma,r,p,\mathbf{a}';\mathcal{O}}\leq
2^{2n+9}e\frac{\eta^2\sigma}{c_0}<\frac{1}{10}.\end{equation}
By \eqref{17.12.23.4} \eqref{17.12.23.6} and using Lemma \ref{lem17.12.23.1}, for $-1\leq t\leq1$, the time-$t$ Hamiltonian flow $$\Phi_F^t:D(s-5\sigma,r/2)\rightarrow D(s-4\sigma,r)$$
is well defined, and we get
\begin{eqnarray}\nonumber||X_{\{R(t),F\}\circ\Phi_F^t}
||^{\lambda}_{s-5\sigma,r/2,p-1,\mathbf{a}';\mathcal{O}}
&\leq&\frac{10}{9}||X_{\{R(t),F\}}||^{\lambda}_{s-4\sigma,r,p-1,\mathbf{a}';
\mathcal{O}}\\
&\lessdot&\frac{\max\{e^{8C_0\gamma_0Ks},
e^{8\gamma_0\Pi s}\}}
{\alpha_2\sigma^{4n+4\tau+4}}(||X_P||^{\lambda}_{s,r,p-1,\mathbf{a};
\mathcal{O}})^2.
\end{eqnarray}
Hence also
\begin{equation}||X_{\{R(t),F\}\circ\Phi_F^t}
||^{\lambda}_{s-5\sigma,\eta r,p-1,\mathbf{a}';\mathcal{O}}
\lessdot\frac{\max\{e^{8C_0\gamma_0Ks},
e^{8\gamma_0\Pi s}\}}
{\alpha_2\eta^2\sigma^{4n+4\tau+4}}(||X_P||^{\lambda}_{s,r,p-1,\mathbf{a};
\mathcal{O}})^2.\end{equation}
From \eqref{17.12.23.5}, we have
\begin{equation}||X_F||^{\lambda}_{s-4\sigma,\frac65\eta r,p,\mathbf{a};\mathcal{O}}\leq\frac{25\sigma}{36c_0},\end{equation}
and thus
\begin{equation}\label{17.12.24.1}2^{2n+5}e\max\{\frac{s-4\sigma}
{(s-4\sigma)-(s-5\sigma)},\frac{\frac65\eta r}{\frac65\eta r-\eta r}\}
||X_F||^{\lambda}_{s-4\sigma,\frac65\eta r,p,\mathbf{a};\mathcal{O}}\leq
2^{2n+9}e\frac{25\sigma}{36c_0}<\frac{1}{10}.\end{equation}
By \eqref{17.10.12.3} \eqref{17.12.24.1} and using Lemma \ref{lem17.12.23.1}, the time-$1$ Hamiltonian flow $$\Phi_F^1:D(s-5\sigma,\eta r)\rightarrow D(s-4\sigma,\frac65\eta r)$$
is well defined, and we get
\begin{eqnarray}\nonumber||X_{(P-R)\circ\Phi_F^1}
||^{\lambda}_{s-5\sigma,\eta r,p-1,\mathbf{a};\mathcal{O}}
&\leq&\frac{10}{9}||X_{P-R}||^{\lambda}_{s-4\sigma,\frac65\eta r,p-1,\mathbf{a};\mathcal{O}}\\
&\leq&\frac43\eta ||X_P||^{\lambda}_{s,r,p-1,\mathbf{a};\mathcal{O}}.
\end{eqnarray}
Together with the estimate of $\hat{R}$ in \eqref{17.10.12.2}, we finally arrive at the estimate
\begin{eqnarray}\label{17.10.16.13}||X_{P_+}||_{s-5\sigma,\eta r,p-1,\mathbf{a}';\mathcal{O}}^{\lambda}
&\leq&(\frac{B_{\sigma}\max\{e^{8C_0\gamma_0Ks},
e^{8\gamma_0\Pi s}\}}{\alpha_2\eta^2}||X_P||^{\lambda}_{s,r,p-1,\mathbf{a};
\mathcal{O}}\\ \nonumber&&+
\frac{B_{\sigma}\max\{e^{-K\sigma},
e^{-2(\mathbf{a}-\mathbf{a}')\Pi}\}
}{\alpha_2\eta^2}+\frac43\eta)||X_P||^{\lambda}_{s,r,p-1,\mathbf{a};
\mathcal{O}}.
\end{eqnarray}
This is the bound for the new perturbation.
\section{Iteration and Convergence}
Set $\beta'=\frac12\min\{\frac{\beta}{1+\beta},\frac14\}$ and $\kappa=\frac{4}{3}-\frac{2\beta'}{3}$, so that $3(\kappa-1)=1-2\beta'$. Now we give the precise set-up of the iteration parameters. Let $\nu\geq0$ denote the index of the KAM step.
\begin{itemize}
\item[]
$m_{\nu}=\frac{m_0}{10}(9+2^{-\nu})$, which is used for describing the growth of external frequencies,
\item[]
$E_{\nu}=\frac{E_0}{9}(10-2^{-\nu})$, which is used to dominate the norm of internal frequencies,
\item[]
$M_{1,\nu}=\frac{M_{1,0}}{9}(10-2^{-\nu}),M_{2,\nu}=
\frac{M_{2,0}}{9}(10-2^{-\nu}),
M_{\nu}=M_{1,\nu}+M_{2,\nu}$, which are used to dominate the Lipschitz semi-norm of frequencies,
\item[]
$M_{3,\nu}=\frac{M_{3,0}}{10}(9+2^{-\nu})$, which describes the lower bound for the sup-norm or Lipschitz semi-norm of the small divisors,
\item[]
$J_0=0$, $J_{\nu}=\gamma_0^{-\frac{\kappa^{\nu-1}}{\tau+1}}, \nu\geq1$,
which are used for the estimate of measure,
\item[]
$s_{\nu}=s_02^{-\nu}$, which dominates the width of the angle variable $x$,
\item[]
$\sigma_{\nu}=s_{\nu}/20$, which serves as a bridge from $s_{\nu}$ to $s_{\nu+1}$,
\item[]
$\mathbf{a}_{\nu}=\sigma_{\nu}/C_J$, which is used to control higher momentum term,
\item[]
$B_{\nu}=24B_{\sigma_{\nu}}=c\sigma_{\nu}^{-(4n+4\tau+5)}$, here $c$ is a large constant only depending on $n,\tau$ and $E_0$,
\item[]
$\epsilon_{\nu}=(\epsilon_0\prod_{\mu=0}^{\nu-1}(\frac{2^{\mu}
B_{\mu}}{\alpha_{0}})
^{\frac{1}{3\kappa^{\mu+1}}})^{\kappa^{\nu}}$, which dominates the size of the perturbation $P_{\nu}$ in the $\nu$-th KAM iteration,
\item[]
$K_{\nu}=5|\ln\epsilon_{\nu}|/(4\sigma_{\nu})$, which is the length of the truncation of Fourier series,
\item[]
$\Pi_{\nu}=5|\ln\epsilon_{\nu}|/(2\mathbf{a}_{\nu})$, which controls the number of homological equations with double normal frequencies,
\item[]
$\alpha_{1,\nu}=\frac{\alpha_0}{10}(9+2^{-\nu})$ and $\alpha_{2,\nu}=\alpha_0{2^{-\nu}}\Pi_{\nu}^{-1}$, which are used to dominate the measure of removed parameters,
\item[]
$\lambda_0=\frac{\alpha_0}{M_0},\quad\lambda_{\nu}=
\frac{\alpha_{2,\nu}}{M_{\nu}},\quad \nu\geq1$,
\item[]
$(2\eta_{\nu})^3=
\epsilon_{\nu}^{1-\beta'}
\alpha_{2,\nu}^{-1}B_{\nu}, \quad r_{\nu+1}=\eta_{\nu}r_{\nu},\quad D_{\nu}=D(s_{\nu},r_{\nu}).$
\end{itemize}
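For later use we note that the choices of $K_{\nu},\Pi_{\nu},s_{\nu},\sigma_{\nu}$ and $\mathbf{a}_{\nu}$ above combine into the identities
$$K_{\nu}s_{\nu}=\frac{5|\ln\epsilon_{\nu}|}{4\sigma_{\nu}}\cdot20\sigma_{\nu}
=25|\ln\epsilon_{\nu}|,\qquad
\Pi_{\nu}s_{\nu}=\frac{5C_J|\ln\epsilon_{\nu}|}{2\sigma_{\nu}}\cdot20\sigma_{\nu}
=50C_J|\ln\epsilon_{\nu}|,$$
the second one using $\mathbf{a}_{\nu}=\sigma_{\nu}/C_J$.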
\subsection{Iterative Lemma}
\begin{lem}
Suppose that
\begin{equation}\label{17.10.16.1}\epsilon_0\leq
(\frac{\alpha_0\gamma_0}{80})^{\frac{1}{1-2\beta'}}
\prod_{\mu=0}^{\infty}(2^{\mu}B_{\mu})^{-\frac{1}{3\kappa^{\mu+1}}},
\quad\alpha_0\leq\min\{\frac{m_0}{10},\frac{M_{3,0}}{2}\}.
\end{equation}
Suppose $H_{\nu}=N_{\nu}+P_{\nu}$ is regular on $D_{\nu}\times\mathcal{O}_{\nu}$, where $N_{\nu}$ is a generalized normal form with coefficients satisfying
\begin{equation}\label{17.10.16.8}|\omega_{\nu}|_{\mathcal{O}_{\nu}}\leq E_{\nu},\quad|\omega_{\nu}|^{lip}_{\mathcal{O}_{\nu}}\leq M_{1,\nu},\end{equation}
\begin{equation}
\label{17.10.16.9}|\Omega_{\nu}|_{-1,s_{\nu},\mathbf{a}_{\nu},0;
\mathcal{O}_{\nu}}^{lip}\leq M_{2,\nu},
\end{equation}
\begin{equation}\label{17.10.16.10}|\langle l,\bar{\Omega}_{\nu}(\xi)\rangle|\geq m_{\nu}|\sum_{j\in\mathbb{Z}_*}j^2l_j|,\quad |l|\leq2,\end{equation}
\begin{equation}\label{17.10.16.6}|\tilde{\Omega}_{\nu,j}(\xi)|_{s_{\nu},
\tau+1}
+|\tilde{\Omega}_{\nu,j}(\xi)|_{s_{\nu},\mathbf{a}_{\nu},0}
\leq(\alpha_0-\alpha_{1,\nu})\gamma_0|j|,\quad j\in\mathbb{Z}_*,\end{equation}
\begin{eqnarray}\nonumber&&\inf_{\xi\in\mathcal{O}_{\nu}}|\langle k,\omega_{\nu}(\xi)\rangle+\langle l,\bar{\Omega}_{\nu}(\xi)\rangle|
+\inf_{\xi-\zeta// v_{kl}}\frac{|\Delta_{\xi\zeta}
(\langle k,\omega_{\nu}\rangle+\langle l,\bar{\Omega}_{\nu}\rangle)|}{|\xi-\zeta|}\\ \label{17.11.14.1} &&\geq M_{3,\nu}\max\{|k|,\sum_{j\in\mathbb{Z}_*}|jl_j|\},\quad
k\in\mathbb{Z}^n,
|l|\leq2\end{eqnarray}
on $\mathcal{O}_{\nu}$ and $P_{\nu}$ satisfies
\begin{equation}\label{17.12.24.5}||X_{P_{\nu}}||^{\lambda_{\nu}}_{s_{\nu},
r_{\nu},p-1,\mathbf{a}_{\nu};\mathcal{O}_{\nu}}
\leq\epsilon_{\nu}.\end{equation}
Let
\begin{equation}\label{17.10.16.11}\mathcal{O}_{\nu+1}=\mathcal{O}_{\nu}
\setminus\big(\bigcup_{k\in\mathbb{Z}^n\setminus\{0\},|l|\leq2\atop{l\neq e_{-j}-e_j}}
\mathcal{R}_{kl}^{\nu}(\alpha_{1,\nu})\cup\bigcup_{k\in\mathbb{Z}^n,\pm j\in\mathbb{Z}_*\atop{|j|\leq \Pi_{\nu}}}
\mathcal{R}_{k(-j)j}^{\nu}
(\alpha_{2,\nu})\big),\end{equation}
where
\begin{equation}\mathcal{R}_{kl}^{\nu}(\alpha_{1,\nu})=
\{\xi\in\mathcal{O}_{\nu}:|\langle k,\omega_{\nu}(\xi)\rangle+\langle l,\bar{\Omega}_{\nu}(\xi)\rangle|<\alpha_{1,\nu}\frac{\langle l\rangle_{\infty}}{\langle k\rangle^{\tau}}\},\end{equation}
\begin{equation}\mathcal{R}_{k(-j)j}^{\nu}(\alpha_{2,\nu})=
\{\xi\in\mathcal{O}_{\nu}:|\langle k,\omega_{\nu}(\xi)\rangle+ \bar{\Omega}_{\nu,(-j)}(\xi)-\bar{\Omega}_{\nu,j}(\xi)|
<\alpha_{2,\nu}\frac{|j|}{\langle k\rangle^{\tau}}\}.\end{equation}
Then there exists a Lipschitz family of real analytic, close-to-the-identity symplectic coordinate transformations
$\Phi_{\nu+1}:D_{\nu+1}\times\mathcal{O}_{\nu+1}\rightarrow D_{\nu}$ satisfying
\begin{eqnarray}\nonumber&&||\Phi_{\nu+1}-\mbox{id}||^{\lambda_{\nu}}
_{s_{\nu},r_{\nu},p;D_{\nu+1}\times\mathcal{O}_{\nu+1}},
||D\Phi_{\nu+1}-I||^{\lambda_{\nu}}_{s_{\nu},r_{\nu},p,p;D_{\nu+1}
\times\mathcal{O}_{\nu+1}},\\
\label{17.12.24.3}&&||D\Phi_{\nu+1}-I||^{\lambda_{\nu}}
_{s_{\nu},r_{\nu},q,q;D_{\nu+1}
\times\mathcal{O}_{\nu+1}}\leq\frac{B_{\nu}}{\alpha_{2,\nu}}
\epsilon_{\nu}^{1-\beta'},\end{eqnarray}
where $||\cdot||_{s,r,p,p}$ denotes the operator norm induced by $||\cdot||_{s,r,p}$ and $||\cdot||_{s,r,p}$ in the source and target spaces, respectively,
such that for $H_{\nu+1}=H_{\nu}\circ\Phi_{\nu+1}=N_{\nu+1}+P_{\nu+1}$, the estimate
\begin{equation}\label{17.10.16.2}|\omega_{\nu+1}-\omega_{\nu}|
_{\mathcal{O}_{\nu+1}}^{\lambda_{\nu}},
\quad|\Omega_{\nu+1}-\Omega_{\nu}|^{\lambda_{\nu}}
_{-1,s_{\nu+1},\mathbf{a}_{\nu},0;
\mathcal{O}_{\nu+1}}\leq B_{\nu}\epsilon_{\nu}\end{equation}
holds and the same assumptions as above are satisfied with `$\nu+1$' in place of `$\nu$'.
\end{lem}
\begin{proof}
Setting $C_{0,\nu}=2E_{\nu}/m_{\nu}$, it is obvious that $C_{0,\nu}\leq4C_{0,0}$.
Thus we have
\begin{equation}\label{17.12.24.4}e^{8C_{0,\nu}\gamma_0K_{\nu}s_{\nu}},
e^{8\gamma_0\Pi_{\nu}s_{\nu}}\leq \epsilon_{\nu}^{-\beta'}\end{equation}
by $$
K_{\nu}s_{\nu}=20K_{\nu}\sigma_{\nu}=25|\ln\epsilon_{\nu}|,\quad
\Pi_{\nu}s_{\nu}=50C_J|\ln\epsilon_{\nu}|$$
and choosing $\gamma_0$ small enough such that $800\gamma_0C_{0,0},400C_J\gamma_0\leq\beta'$. In view of the definition of $\eta_{\nu}$, namely, $(2\eta_{\nu})^3=\epsilon_{\nu}^{1-\beta'}\alpha_{2,\nu}^{-1}B_{\nu}$, the smallness condition \eqref{17.10.11.11}, namely,
$$\epsilon_{\nu}\leq\frac{\alpha_{2,\nu}\eta^2_{\nu}}
{B_{\nu}}\cdot
\frac{1}{\max\{e^{8C_{0,\nu}\gamma_0K_{\nu}s_{\nu}},
e^{8\gamma_0\Pi_{\nu}s_{\nu}}\}},$$
is satisfied if
\begin{equation}\label{18.3.22.3}\epsilon_{\nu}^{1-\beta'}\leq
\frac{\alpha_{2,\nu}}{64B_{\nu}}.\end{equation}
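Indeed, by \eqref{17.12.24.4} the smallness condition holds as soon as $\epsilon_{\nu}^{1-\beta'}\leq\alpha_{2,\nu}\eta_{\nu}^{2}B_{\nu}^{-1}$, and substituting $\eta_{\nu}^{2}=\frac14(\epsilon_{\nu}^{1-\beta'}B_{\nu}/\alpha_{2,\nu})^{2/3}$ from the definition of $\eta_{\nu}$ gives
$$\epsilon_{\nu}^{1-\beta'}\leq\frac{\alpha_{2,\nu}\eta_{\nu}^{2}}{B_{\nu}}
=\frac14\epsilon_{\nu}^{\frac{2(1-\beta')}{3}}
\Big(\frac{\alpha_{2,\nu}}{B_{\nu}}\Big)^{\frac13}
\quad\Longleftrightarrow\quad
\epsilon_{\nu}^{\frac{1-\beta'}{3}}\leq\frac14
\Big(\frac{\alpha_{2,\nu}}{B_{\nu}}\Big)^{\frac13},$$
which, after cubing, is precisely \eqref{18.3.22.3}.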
To verify the last inequality we argue as follows. As $\frac{2^{\nu}}{\alpha_{0}}$ and $B_{\nu}$ are increasing with $\nu$,
\begin{equation}(\frac{2^{\nu}B_{\nu}}{\alpha_{0}})^{\frac{1}{1-2\beta'}}
=(\frac{2^{\nu}B_{\nu}}{\alpha_{0}})^{\frac{1}{3(\kappa-1)}}=
(\prod_{\mu=\nu}^{\infty}(\frac{2^{\nu}B_{\nu}}{\alpha_{0}})^{\frac{1}
{3\kappa^{\mu+1}}
})^{\kappa^{\nu}}\leq(\prod_{\mu=\nu}^{\infty}(\frac{2^{\mu}B_{\mu}}
{\alpha_{0}})^
{\frac{1}{3\kappa^{\mu+1}}})^{\kappa^{\nu}}.\end{equation}
By the definition of $\epsilon_{\nu}$ above and the smallness condition on $\epsilon_0$ in \eqref{17.10.16.1},
\begin{equation}\label{17.12.28.1}\epsilon_{\nu}^{1-2\beta'}
\frac{2^{\nu}B_{\nu}}{\alpha_{0}}
\leq(\epsilon_{0}\prod_{\mu=0}^{\infty}(\frac{2^{\mu}B_{\mu}}{\alpha_{0}})
^{\frac{1}{3\kappa^{\mu+1}}})^{\kappa^{\nu}(1-2\beta')}
\leq(\frac{\gamma_0}{80})^{\kappa^{\nu}},\end{equation}
and thus we can choose $\gamma_0$ small enough such that \begin{equation}\label{18.3.22.2}\epsilon_{\nu}^{\beta'}
\leq2^{-2\nu}\Pi_{\nu}^{-2}.\end{equation}
In view of \eqref{17.12.28.1} and \eqref{18.3.22.2}, we get
\begin{equation}\label{18.3.22.4}\epsilon_{\nu}^{1-\beta'}
\frac{2^{\nu}B_{\nu}}{\alpha_{0}}\leq
(\frac{\gamma_0}{80})^{\kappa^{\nu}}2^{-2\nu}\Pi_{\nu}^{-2},\end{equation}
which implies \eqref{18.3.22.3} since $(\frac{\gamma_0}{80})^{\kappa^{\nu}}2^{-2\nu}\Pi_{\nu}^{-1}
\leq\frac{1}{64}$, and thus the smallness condition \eqref{17.10.11.11} is satisfied for each $\nu\geq0$. In particular, noticing $\kappa\geq\frac54,$ we have
\begin{equation}\label{17.10.16.5}\epsilon_{\nu}^{1-\beta'}
\frac{B_{\nu}}{\alpha_{2,\nu}}
\leq\frac{\gamma_0}{2^{\nu+6}}.\end{equation}
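For the reader's convenience, \eqref{17.10.16.5} can also be obtained from \eqref{18.3.22.4} by a direct computation: since $\alpha_{2,\nu}=\alpha_{0}2^{-\nu}\Pi_{\nu}^{-1}$,
$$\epsilon_{\nu}^{1-\beta'}\frac{B_{\nu}}{\alpha_{2,\nu}}
=\epsilon_{\nu}^{1-\beta'}\frac{2^{\nu}B_{\nu}}{\alpha_{0}}\Pi_{\nu}
\leq\Big(\frac{\gamma_0}{80}\Big)^{\kappa^{\nu}}2^{-2\nu}\Pi_{\nu}^{-1}
\leq\frac{\gamma_0}{80\cdot2^{2\nu}}\leq\frac{\gamma_0}{2^{\nu+6}},$$
where we used $\Pi_{\nu}\geq1$, $0<\gamma_0<80$, $\kappa^{\nu}\geq1$ and $80\cdot2^{2\nu}\geq2^{\nu+6}$ for $\nu\geq0$.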
By \eqref{17.11.8.2} \eqref{17.12.23.3} \eqref{17.12.24.5} and \eqref{17.12.24.4}, we have
\begin{eqnarray}\nonumber||X_{F_{\nu}}||^{\lambda_{\nu}}
_{s_{\nu}-4\sigma_{\nu},r_{\nu},p;D(s_{\nu}-4\sigma_{\nu},
r_{\nu})\times\mathcal{O}_{\nu+1}}&\leq&
||X_{F_{\nu}}||_{s_{\nu}-4\sigma_{\nu}, r_{\nu},p,\mathbf{a}_{\nu};
\mathcal{O}_{\nu+1}}^{\lambda_{\nu}}\\
\nonumber &\lessdot&\frac{\max\{e^{8C_{0,\nu}\gamma_0K_{\nu}s_{\nu}},
e^{8\gamma_0\Pi_{\nu}s_{\nu}}\}}{\alpha_{2,\nu}\sigma_{\nu}
^{4n+2\tau+2}}||X_{P_{\nu}}||_{s_{\nu},
r_{\nu},p-1,\mathbf{a}_{\nu};\mathcal{O}_{\nu+1}}^{\lambda_{\nu}}\\
\label{17.12.25.1}&\leq&\frac{1}{\alpha_{2,\nu}\sigma_{\nu}
^{4n+2\tau+2}}
\epsilon_{\nu}^{1-\beta'}.
\end{eqnarray}
In view of \eqref{17.10.16.5} and \eqref{17.12.25.1}, for suitably small $\gamma_0$, we have
$$||X_{F_{\nu}}||^{\lambda_{\nu}}
_{s_{\nu}-4\sigma_{\nu},r_{\nu},p;D(s_{\nu}-4\sigma_{\nu},
r_{\nu})\times\mathcal{O}_{\nu+1}}\leq\frac{1}{15}.$$
Then the flow $X_{F_{\nu}}^t$ of the vector field $X_{F_{\nu}}$ exists on
$D(s_{\nu}-6\sigma_{\nu},r_{\nu}/2)$ for $-1\leq t\leq1$ and takes this domain into $D(s_{\nu}-5\sigma_{\nu},r_{\nu})$. Similarly, it takes $D(s_{\nu}-7\sigma_{\nu},r_{\nu}/4)$ into $D(s_{\nu}-6\sigma_{\nu},r_{\nu}/2)$.
Moreover, in the same way as Lemma 19.3 and (20.6) of \cite{KP}, we get
\begin{eqnarray}\nonumber&&||DX_{F_{\nu}}||
^{\lambda_{\nu}}_{s_{\nu},r_{\nu},p,p;
D(s_{\nu}-5\sigma_{\nu},r_{\nu})\times
\mathcal{O}_{\nu+1}},||DX_{F_{\nu}}||^{\lambda_{\nu}}
_{s_{\nu},r_{\nu},q,q;
D(s_{\nu}-5\sigma_{\nu},r_{\nu})\times\mathcal{O}_{\nu+1}}\\
\label{17.12.25.2}&&\lessdot
\frac{1}{\sigma_{\nu}}||X_{F_{\nu}}||^{\lambda_{\nu}}_{s_{\nu},
r_{\nu},p;D(s_{\nu}-4\sigma_{\nu},r_{\nu})\times\mathcal{O}_{\nu+1}},
\end{eqnarray}
\begin{equation}
||X_{F_{\nu}}^t-\mbox{id}||^{\lambda_{\nu}}_{s_{\nu},r_{\nu},p;
D(s_{\nu}-6\sigma_{\nu},r_{\nu}/2)
\times\mathcal{O}_{\nu+1}}
\lessdot||X_{F_{\nu}}||^{\lambda_{\nu}}_{s_{\nu},r_{\nu},p;
D(s_{\nu}-5\sigma_{\nu},r_{\nu})
\times\mathcal{O}_{\nu+1}}
,\end{equation}
\begin{equation}||DX^t_{F_{\nu}}-I||^{\lambda_{\nu}}_{s_{\nu},r_{\nu},p,p;
D(s_{\nu}-7\sigma_{\nu},r_{\nu}/4)
\times\mathcal{O}_{\nu+1}}\lessdot||DX_{F_{\nu}}||^{\lambda_{\nu}}
_{s_{\nu},r_{\nu},p,p;
D(s_{\nu}-5\sigma_{\nu},r_{\nu})\times\mathcal{O}_{\nu+1}},\end{equation}
\begin{equation}\label{17.12.25.3}||DX^t_{F_{\nu}}-I||^{\lambda_{\nu}}_{s_{\nu},r_{\nu},
q,q;D(s_{\nu}-7\sigma_{\nu},r_{\nu}/4)
\times\mathcal{O}_{\nu+1}}\lessdot||DX_{F_{\nu}}||^{\lambda_{\nu}}_{s_{\nu},
r_{\nu},q,q;D(s_{\nu}-5\sigma_{\nu},r_{\nu})\times\mathcal{O}_{\nu+1}}.
\end{equation}
Now there exists a coordinate transformation $$\Phi_{\nu+1}:=X_{F_{\nu}}^1:D_{\nu+1}
\times\mathcal{O}_{\nu+1}\rightarrow D_{\nu}$$
taking $H_{\nu}$ into $H_{\nu+1}$. Moreover, \eqref{17.12.24.3} is obtained by \eqref{17.12.25.1}-\eqref{17.12.25.3} and
\eqref{17.10.16.2} is obtained by \eqref{17.10.12.1}. More explicitly, \eqref{17.10.16.2} is written as
\begin{equation}\label{17.10.16.3}|\omega_{\nu+1}-\omega_{\nu}|_
{\mathcal{O}_{\nu+1}},\quad |\Omega_{\nu+1}-\Omega_{\nu}|_{-1,s_{\nu+1},\mathbf{a}_{\nu},0;
\mathcal{O}_{\nu+1}}\leq B_{\nu}\epsilon_{\nu},\end{equation}
\begin{equation}\label{17.10.16.4}|\omega_{\nu+1}-\omega_{\nu}|^{lip}
_{\mathcal{O}_{\nu+1}},\quad |\Omega_{\nu+1}-\Omega_{\nu}|^{lip}_{-1,s_{\nu+1},\mathbf{a}_{\nu},0;
\mathcal{O}_{\nu+1}}\leq\frac{M_{\nu}}{\alpha_{2,\nu}} B_{\nu}\epsilon_{\nu}.\end{equation}
Actually, from \eqref{17.10.12.1}, we have
\begin{equation}\label{18.3.19.1}|\Omega_{\nu+1}-\Omega_{\nu}|_{-1,
s_{\nu}-2\sigma_{\nu},\mathbf{a}_{\nu},0;\mathcal{O}_{\nu+1}}\leq \sigma_{\nu}^{4n+2\tau+3}B_{\nu}\epsilon_{\nu},\end{equation}
and thus by Lemma \ref{lem18.3.19.1}, we have
\begin{eqnarray}\nonumber|\Omega_{\nu+1,j}-\Omega_{\nu,j}|_{s_{\nu+1}
,\tau+1}
&\leq&(\frac{\tau+1}{\tau})^{\tau+1}\frac{1}{(18\sigma_{\nu}
)^{\tau+1}}|\Omega_{\nu+1,j}-\Omega_{\nu,j}|_{s_{\nu}-2\sigma_{\nu}
,\mathbf{a}_{\nu},0}\\
\label{18.3.19.3}&\leq&(\frac{\tau+1}{18\tau})^{\tau+1}
\sigma_{\nu}^{4n+\tau+2}B_{\nu}\epsilon_{\nu}|j|.
\end{eqnarray}
By \eqref{18.3.22.4}, we have
\begin{equation}\label{17.12.28.2}B_{\nu}\epsilon_{\nu}\leq
(\frac{\gamma_0}{80})^{\kappa^{\nu}}\alpha_{2,\nu}
\epsilon_{\nu}^{\beta'}\leq\frac{\alpha_{2,\nu}
\gamma_0^{\kappa^{\nu}}}{2^{\nu+6}}.\end{equation}
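In detail, multiplying \eqref{18.3.22.4} by $\alpha_{0}2^{-\nu}$ and using $\alpha_{2,\nu}=\alpha_{0}2^{-\nu}\Pi_{\nu}^{-1}$ yields
$$\epsilon_{\nu}^{1-\beta'}B_{\nu}
\leq\Big(\frac{\gamma_0}{80}\Big)^{\kappa^{\nu}}\alpha_{0}2^{-3\nu}\Pi_{\nu}^{-2}
=\Big(\frac{\gamma_0}{80}\Big)^{\kappa^{\nu}}\alpha_{2,\nu}2^{-2\nu}\Pi_{\nu}^{-1}
\leq\Big(\frac{\gamma_0}{80}\Big)^{\kappa^{\nu}}\alpha_{2,\nu},$$
which gives the first inequality in \eqref{17.12.28.2} after multiplying by $\epsilon_{\nu}^{\beta'}$; the second one then follows from $80^{\kappa^{\nu}}\geq80$ together with \eqref{18.3.22.2}.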
In view of \eqref{17.10.16.3} \eqref{17.10.16.4} \eqref{18.3.19.3} \eqref{17.12.28.2}, by choosing $\gamma_0$ properly small,
\eqref{17.10.16.8}-\eqref{17.10.16.6} are satisfied with `$\nu+1$' instead of `$\nu$'.
In the following we only need to check \eqref{17.11.14.1} \eqref{17.12.24.5} with `$\nu+1$'.
By \eqref{17.12.28.2} and choosing $\gamma_0$ properly small, we have
\begin{equation}\label{17.12.28.3}B_{\nu}\epsilon_{\nu}
\leq\frac{\alpha_{2,\nu}\gamma_0}{2^{\nu+6}}\leq \frac12\min\{\frac{\alpha_{2,\nu}}{M_{\nu}},1\}(M_{3,\nu}-M_{3,\nu+1}).
\end{equation}
In view of \eqref{17.10.16.3} \eqref{17.10.16.4} \eqref{17.12.28.3}, for $k\in\mathbb{Z}^n$ and $|l|\leq2$, we have
\begin{eqnarray}\nonumber|\langle k,\omega_{\nu+1}-\omega_{\nu}\rangle+
\langle l,\bar{\Omega}_{\nu+1}-\bar{\Omega}_{\nu}\rangle|
&\leq&|k||\omega_{\nu+1}-\omega_{\nu}|+(\sum_{j\in\mathbb{Z}_*}|jl_j|)
|\bar{\Omega}_{\nu+1}-\bar{\Omega}_{\nu}|_{-1}\\
\nonumber &\leq&\max\{|k|,\sum_{j\in\mathbb{Z}_*}|jl_j|\}B_{\nu}\epsilon_{\nu}\\
\label{17.12.28.5}&\leq&\frac12(M_{3,\nu}-M_{3,\nu+1})\max\{|k|,
\sum_{j\in\mathbb{Z}_*}|jl_j|\}
\end{eqnarray}
on $\mathcal{O}_{\nu+1}$, and
\begin{eqnarray}\nonumber|\langle k,\omega_{\nu+1}-\omega_{\nu}\rangle+
\langle l,\bar{\Omega}_{\nu+1}-\bar{\Omega}_{\nu}\rangle|
^{lip}_{\mathcal{O}_{\nu+1}}
&\leq&|k||\omega_{\nu+1}-\omega_{\nu}|^{lip}_{\mathcal{O}_{\nu+1}}
+(\sum_{j\in\mathbb{Z}_*}|jl_j|)
|\bar{\Omega}_{\nu+1}-\bar{\Omega}_{\nu}|^{lip}_{-1,\mathcal{O}_{\nu+1}}\\
\nonumber &\leq&\max\{|k|,\sum_{j\in\mathbb{Z}_*}|jl_j|\}\frac{M_{\nu}}
{\alpha_{2,\nu}}B_{\nu}\epsilon_{\nu}\\
\label{17.12.28.4}&\leq&\frac12(M_{3,\nu}-M_{3,\nu+1})\max\{|k|,
\sum_{j\in\mathbb{Z}_*}|jl_j|\}.
\end{eqnarray}
Therefore, \eqref{17.11.14.1} is obtained by \eqref{17.12.28.5} \eqref{17.12.28.4} with `$\nu+1$' in place of `$\nu$'.
Finally, from \eqref{17.10.16.13} we get
\begin{eqnarray}\nonumber
||X_{P_+}||_{s_{\nu+1},r_{\nu+1},p-1,\mathbf{a}_{\nu+1};
\mathcal{O}_{\nu+1}}^{\lambda_{\nu+1}}
&\leq&(\frac{B_{\sigma_{\nu}}\max\{e^{8C_{0,\nu}\gamma_0K_{\nu}s_{\nu}},
e^{8\gamma_0\Pi_{\nu}s_{\nu}}\}}{\alpha_{2,\nu}\eta_{\nu}^2}\epsilon_{\nu}\\
\nonumber&&+
\frac{B_{\sigma_{\nu}}(e^{-9K_{\nu}\sigma_{\nu}/10}+e^{-(\mathbf{a}_{\nu}-
\mathbf{a}_{\nu+1})\Pi_{\nu}})
}{\alpha_{2,\nu}\eta_{\nu}^2}+\frac43\eta_{\nu})\epsilon_{\nu}\\
\nonumber&\leq&(\frac{B_{\nu}}{24\alpha_{2,\nu}\eta_{\nu}^2}
\epsilon_{\nu}^{1-\beta'}
+\frac{B_{\nu}}{24\alpha_{2,\nu}\eta_{\nu}^2}
\epsilon_{\nu}^{1-\beta'}+
\frac43\eta_{\nu})\epsilon_{\nu}\\
\nonumber&=&(\frac{B_{\nu}}{\alpha_{2,\nu}})^{\frac13}
\epsilon_{\nu}^{\frac{4-\beta'}{3}}\\
&\leq&\epsilon_{\nu+1},
\end{eqnarray}
where the last inequality follows from $\frac{B_{\nu}}{\alpha_{2,\nu}}=\Pi_{\nu}\frac{2^{\nu}B_{\nu}}{\alpha_{0}}$, \eqref{18.3.22.2} and the definition of $\epsilon_{\nu+1}$.
This completes the proof of the iterative lemma.
\end{proof}
\subsection{Convergence} We are now in a position to prove the KAM theorem. To apply the iterative lemma with $\nu=0$, we set
$$N_0=N,\quad P_0=P,\quad \mathcal{O}_0=\mathcal{O},\quad s_0=s,\quad r_0=r,\quad \mathbf{a}_0=\mathbf{a}$$
and similarly $E_0=E,M_{1,0}=M_1,M_{2,0}=M_2,M_0=M_1+M_2,M_{3,0}=M_3,
m_0=m,\alpha_0=\alpha$, $\lambda_0=\frac{\alpha}{M}$. Define $\gamma$ in the KAM theorem by setting
\begin{equation}\gamma=\gamma_0\gamma_s,\quad \gamma_s=\frac{1}{80}(\prod_{\mu=0}^{\infty}(2^{\mu}B_{\mu})^
{-\frac{1}{3\kappa^{\mu+1}}})^{1-2\beta'},\end{equation}
where $\gamma_0$ is the same parameter as before and $\gamma_s$ only depends on $n,\tau,E,s,\beta$. The smallness condition \eqref{17.10.16.1} of the iterative lemma is then satisfied by the assumption of the KAM theorem:
\begin{equation}\epsilon_0:=||X_{P_0}||_{s_0,r_0,p-1,\mathbf{a}_0;
\mathcal{O}_0}^{\lambda_0}\leq(\alpha\gamma)^{1+\beta}\leq
(\alpha_0\gamma_0\gamma_s)^{\frac{1}{1-2\beta'}}.\end{equation}
The other conditions \eqref{17.10.16.8}-\eqref{17.11.14.1} about the unperturbed frequencies are obviously true.
Hence, the iterative lemma applies, and we obtain a decreasing sequence of domains $D_{\nu}\times\mathcal{O}_{\nu}$, and a sequence of transformations
$$\Phi^{\nu}=\Phi_1\circ\cdots\circ\Phi_{\nu}:D_{\nu}\times\mathcal{O}_{\nu}
\rightarrow D_0$$
such that $H\circ\Phi^{\nu}=N_{\nu}+P_{\nu}$ for $\nu\geq1$. Moreover, the estimates \eqref{17.12.24.3} and \eqref{17.10.16.2} hold. The following proof of convergence parallels that in \cite{L-Y1}; the only minor difference is that the norm on the source space $\mathcal{P}^{a,p}$ is $(s,r)$-weighted here instead of $r$-weighted as in \cite{L-Y1}. However, for completeness we still give the proof.
We abbreviate $||\cdot||_{s,r,p}$ as $||\cdot||_{s,r}$ and consider the operator norm
$$||L||_{s,r,\tilde{s},\tilde{r}}
=\sup_{W\neq0}\frac{||LW||_{s,r}}{||W||_{\tilde{s},\tilde{r}}}.$$
For $s\geq\tilde{s},r\geq\tilde{r}$, these norms satisfy $||AB||_{s,r,\tilde{s},\tilde{r}}\leq||A||_{s,r,s,r}
||B||_{\tilde{s},\tilde{r},\tilde{s},\tilde{r}}$, since $||W||_{s,r}\leq||W||_{\tilde{s},\tilde{r}}$. For $\nu\geq1$, by the chain rule, using \eqref{17.12.24.3} \eqref{18.3.22.4} \eqref{17.10.16.5} , we get
\begin{equation}||D\Phi^{\nu}||_{s_0,r_0,s_{\nu},r_{\nu};D_{\nu}\times
\mathcal{O}_{\nu}}\leq\prod_{\mu=1}^{\nu}||D\Phi_{\mu}||
_{s_{\mu},r_{\mu},s_{\mu},r_{\mu};D_{\mu}\times\mathcal{O}_{\mu}}
\leq\prod_{\mu=1}^{\infty}(1+\frac{\gamma_0}{2^{\mu+6}})\leq2,\end{equation}
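The last bound follows from $1+x\leq e^{x}$:
$$\prod_{\mu=1}^{\infty}\Big(1+\frac{\gamma_0}{2^{\mu+6}}\Big)
\leq\exp\Big(\gamma_0\sum_{\mu=1}^{\infty}2^{-(\mu+6)}\Big)
=e^{\gamma_0/64}\leq2,$$
since $\gamma_0\leq64\ln2$ for $\gamma_0$ small.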
\begin{eqnarray}\nonumber||D\Phi^{\nu}||^{lip}_{s_0,r_0,s_{\nu},
r_{\nu};D_{\nu}\times\mathcal{O}_{\nu}}&\leq&\sum_{\mu=1}^{\nu}
||D\Phi_{\mu}||^{lip}
_{s_{\mu},r_{\mu},s_{\mu},r_{\mu};D_{\mu}\times\mathcal{O}_{\mu}}
\prod_{1\leq\rho\leq\nu,\rho\neq\mu}||D\Phi_{\rho}||_{s_{\rho},r_{\rho},
s_{\rho},r_{\rho};D_{\rho}\times\mathcal{O}_{\rho}}\\
\nonumber&\leq&2\sum_{\mu=1}^{\nu}||D\Phi_{\mu}-I||^{lip}
_{s_{\mu},r_{\mu},s_{\mu},r_{\mu};D_{\mu}\times\mathcal{O}_{\mu}}\\
\nonumber&\leq&2\sum_{\mu=1}^{\infty}\frac{M_{\mu}}
{\alpha_{2,\mu}}\frac{B_{\mu}}{\alpha_{2,\mu}
}\epsilon_{\mu}^{1-\beta'}\\
\nonumber&\leq&2\sum_{\mu=1}^{\infty}\frac{M_{\mu}}
{\alpha_02^{\mu}}(\frac{\gamma_0}{80})^{\kappa^{\mu}}\\
&\leq&\frac{M_0}{\alpha_0}.\end{eqnarray}
Thus, with the mean value theorem we obtain
\begin{eqnarray}
\nonumber||\Phi^{\nu+1}-\Phi^{\nu}||_{s_0,r_0;D_{\nu+1}
\times\mathcal{O}_{\nu+1}}&\leq&
||D\Phi^{\nu}||_{s_0,r_0,s_{\nu},r_{\nu};D_{\nu}\times\mathcal{O}_{\nu}}
||\Phi_{\nu+1}-\mbox{id}||_{s_{\nu},r_{\nu};D_{\nu+1}\times
\mathcal{O}_{\nu+1}}\\
&\leq&2||\Phi_{\nu+1}-\mbox{id}||_
{s_{\nu},r_{\nu};D_{\nu+1}\times\mathcal{O}_{\nu+1}},
\end{eqnarray}
\begin{eqnarray}
\nonumber||\Phi^{\nu+1}-\Phi^{\nu}||^{lip}_{s_0,r_0;D_{\nu+1}
\times\mathcal{O}_{\nu+1}}&\leq&||D\Phi^{\nu}||^{lip}_{s_0,r_0,s_{\nu},r_{\nu};
D_{\nu}\times\mathcal{O}_{\nu}}||\Phi_{\nu+1}-\mbox{id}||_{s_{\nu},r_{\nu};
D_{\nu+1}\times\mathcal{O}_{\nu+1}}\\
\nonumber&+&||D\Phi^{\nu}||_{s_0,r_0,s_{\nu},r_{\nu};
D_{\nu}\times\mathcal{O}_{\nu}}||\Phi_{\nu+1}-\mbox{id}||^{lip}
_{s_{\nu},r_{\nu};
D_{\nu+1}\times\mathcal{O}_{\nu+1}}\\
\nonumber&\leq&\frac{M_0}{\alpha_0}||\Phi_{\nu+1}-\mbox{id}||
_{s_{\nu},r_{\nu};
D_{\nu+1}\times\mathcal{O}_{\nu+1}}+2||\Phi_{\nu+1}-\mbox{id}||
^{lip}_{s_{\nu},r_{\nu};
D_{\nu+1}\times\mathcal{O}_{\nu+1}}.
\end{eqnarray}
It follows that
\begin{equation}\label{17.12.22.3}||\Phi^{\nu+1}-\Phi^{\nu}||^{\lambda_0}
_{s_0,r_0;D_{\nu+1}\times\mathcal{O}_{\nu+1}}\leq3||\Phi_{\nu+1}-\mbox{id}||
^{\lambda_{\nu}}_{s_{\nu},r_{\nu};D_{\nu+1}\times\mathcal{O}_{\nu+1}}.\end{equation}
From \eqref{17.12.24.3} and \eqref{17.12.22.3}, we get
\begin{equation}||\Phi^{\nu+1}-\Phi^{\nu}||^{\lambda_0}_{s_0,r_0;
D_{\nu+1}\times\mathcal{O}_{\nu+1}}\leq3\frac{B_{\nu}}{\alpha_{2,\nu}}
\epsilon_{\nu}^{1-\beta'}.\end{equation}
For every non-negative integer multi-index $k=(k_1,\cdots,k_n)$, by Cauchy's estimate we have
\begin{equation}||\partial_{x}^k(\Phi^{\nu+1}-\Phi^{\nu})||^{\lambda_0}
_{s_0,r_0;D_{\nu+2}\times\mathcal{O}_{\nu+1}}\leq3\frac{B_{\nu}}{\alpha_{2,\nu}}
\epsilon_{\nu}^{1-\beta'}\frac{k_1!\cdots k_n!}{(\frac{s_0}{2^{\nu+2}})^{|k|}},\end{equation}
the right side of which decays super-exponentially in $\nu$. This shows that the $\Phi^{\nu}$ converge uniformly on $D_*\times\mathcal{O}_{\alpha}$, where $D_*=\mathbb{T}^n\times\{0\}\times\{0\}\times\{0\}$ and $\mathcal{O}_{\alpha}=\cap_{\nu\geq0}\mathcal{O}_{\nu}$, to a Lipschitz continuous family of smooth torus embeddings
$$\Phi:\mathbb{T}^n\times\mathcal{O}_{\alpha}\rightarrow\mathcal{P}^{a,p},$$
for which the estimate \eqref{17.7.21.22} holds. Similarly, the frequencies $\omega_{\nu}$ converge uniformly on $\mathcal{O}_{\alpha}$ to a Lipschitz continuous limit $\omega_*$, and the frequencies $\Omega_{\nu}$
converge uniformly on $D_*\times\mathcal{O}_{\alpha}$ to a regular limit $\Omega_*$, with the estimate \eqref{17.7.21.23} holding. Moreover, $X_H\circ\Phi=D\Phi\cdot X_{N_*}$ on $D_*$ for each $\xi\in\mathcal{O}_{\alpha}$, where $N_*$ is the generalized normal form with frequencies $\omega_*$ and $\Omega_*$. Thus, the embedded tori are invariant under the perturbed Hamiltonian flow, and the flow on them is linear. Now it only remains to prove the claim about the set $\mathcal{O}\setminus\mathcal{O}_{\alpha}$, which is the subject of the next section.
\section{Measure Estimate}
We know
\begin{equation}\mathcal{O}\setminus\mathcal{O}_{\alpha}
=\Theta^1_{\alpha}\cup\Theta^2_{\alpha},
\end{equation}
\begin{equation}
\Theta^1_{\alpha}=\bigcup_{\nu\geq0}\bigcup_{k\in\mathbb{Z}^n\setminus
\{0\},|l|\leq2
\atop{l\neq e_{-j}-e_j}}\mathcal{R}_{kl}^{\nu}
(\alpha_{1,\nu}),\end{equation}
\begin{equation}\Theta^2_{\alpha}=\bigcup_{\nu\geq0}\bigcup_{k\in\mathbb{Z}^n, \pm j\in\mathbb{Z}_*\atop{|j|\leq\Pi_{\nu}}}\mathcal{R}_{k(-j)j}^{\nu}
(\alpha_{2,\nu}),\end{equation}
where
\begin{equation}\mathcal{R}_{kl}^{\nu}(\alpha_{1,\nu})=
\{\xi\in\mathcal{O}_{\nu}:|\langle k,\omega_{\nu}(\xi)\rangle+\langle l,\bar{\Omega}_{\nu}(\xi)\rangle|<\alpha_{1,\nu}\frac{\langle l\rangle_{\infty}}{\langle k\rangle^{\tau}}\},\end{equation}
\begin{equation}\label{17.10.24.1}\mathcal{R}_{k(-j)j}^{\nu}(\alpha_{2,\nu})=
\{\xi\in\mathcal{O}_{\nu}:|\langle k,\omega_{\nu}(\xi)\rangle+ \bar{\Omega}_{\nu,-j}(\xi)-\bar{\Omega}_{\nu,j}(\xi)|
<\alpha_{2,\nu}\frac{|j|}{\langle k\rangle^{\tau}}\}.\end{equation}
Here, $\omega_{\nu}$ and $\bar{\Omega}_{\nu}$
satisfy \eqref{17.10.16.8}-\eqref{17.10.16.10} \eqref{17.11.14.1} on $\mathcal{O}_{\nu}$, and especially, $\omega_0=\omega,\bar{\Omega}_0=\Omega$ are the frequencies of the unperturbed system.
\begin{lem} \label{lem18.4.1.3}If $\gamma_0$ is sufficiently small and $\tau\geq n+3$, then
\begin{equation}\label{17.11.14.5}|\Theta_{\alpha}^1|\leq
c\rho^{n-1}\alpha,\end{equation}
where $\rho:=\mbox{diam}\,\mathcal{O}$ denotes the diameter of $\mathcal{O}$ and $c>0$ is a constant depending on $n,E,M_3$ and $m$.
\end{lem}
\begin{proof}
By \eqref{17.12.28.2} and the definition of $J_{\nu+1}$, we have
\begin{equation}\label{17.12.26.1}B_{\nu}\epsilon_{\nu}
\leq\frac{\alpha_{1,\nu}-
\alpha_{1,\nu+1}}{3J_{\nu+1}^{\tau+1}}.\end{equation}
For $\langle k\rangle\leq J_{\nu+1}$, $|l|\leq2, l\neq e_{-j}-e_j $, by \eqref{17.10.16.3} \eqref{17.12.26.1} we obtain
\begin{eqnarray}\nonumber|\langle k,\omega_{\nu+1}-\omega_{\nu}\rangle+
\langle l,\bar{\Omega}_{\nu+1}-\bar{\Omega}_{\nu}\rangle|
&\leq&|k||\omega_{\nu+1}-\omega_{\nu}|+2\langle l\rangle_{\infty}|\bar{\Omega}_{\nu+1}-\bar{\Omega}_{\nu}|_{-1}\\
\nonumber&\leq&3\langle k\rangle\langle l\rangle_{\infty}B_{\nu}\epsilon_{\nu}\\
\nonumber&\leq&(\alpha_{1,\nu}-\alpha_{1,\nu+1})\frac{\langle k\rangle\langle l\rangle_{\infty}}{J_{\nu+1}^{\tau+1}}\\
&\leq&(\alpha_{1,\nu}-\alpha_{1,\nu+1})\frac{\langle l\rangle_{\infty}}{\langle k\rangle^{\tau}}\end{eqnarray}
on $\mathcal{O}_{\nu+1}$, which implies $\mathcal{R}^{\nu+1}_{kl}(\alpha_{1,\nu+1})\subset
\mathcal{R}^{\nu}_{kl}(\alpha_{1,\nu})$. Hence
\begin{equation}\Theta^1_{\alpha}=\bigcup_{\nu\geq0}
\bigcup_{|k|>J_{\nu},|l|\leq2
\atop{l\neq e_{-j}-e_j}}\mathcal{R}_{kl}^{\nu}
(\alpha_{1,\nu}).\end{equation}
We only need to give the proof in the most difficult case,
namely that $l$ has two non-zero components of opposite sign. In this case, we rewrite
\begin{equation}\mathcal{R}^{\nu}_{kij}(\alpha_{1,\nu})
=\{\xi\in\mathcal{O}_{\nu}:|\langle k,\omega_{\nu}(\xi)\rangle+ \bar{\Omega}_{\nu,i}(\xi)-\bar{\Omega}_{\nu,j}(\xi)|<
\alpha_{1,\nu}\frac{\max\{|i|,|j|\}}{\langle k\rangle^{\tau}}\},\quad i\neq \pm j.\end{equation}
Now we consider a fixed $\mathcal{R}^{\nu}_{kij}(\alpha_{1,\nu})$ with $i\neq \pm j$.
If $|k|<\frac{9m_{\nu}}{10E_{\nu}}|i^2-j^2|$, we get $|\langle k,\omega_{\nu}(\xi)\rangle|<\frac{9m_{\nu}}{10}|i^2-j^2|$. By \eqref{17.10.16.10} and $\alpha_{1,\nu}\leq\frac{m_{\nu}}{10}$, we know $\mathcal{R}^{\nu}_{kij}(\alpha_{1,\nu})$ is empty.
If $|k|\geq\frac{9m_{\nu}}{10E_{\nu}}|i^2-j^2|$, we have $|k|\geq(\frac{9}{10})^3\frac{m}{E}(|i|+|j|)$.
In view of \eqref{17.11.14.1}, if
$$\inf_{\xi\in\mathcal{O}_{\nu}}|\langle k,\omega_{\nu}(\xi)\rangle+
\bar{\Omega}_{\nu,i}(\xi)-\bar{\Omega}_{\nu,j}(\xi)|
\geq \frac12M_{3,\nu}\max\{|k|,|i|+|j|\},$$
then we know $\mathcal{R}^{\nu}_{kij}(\alpha_{1,\nu})$ is empty by noticing that $\alpha_{1,\nu}\leq \frac12M_{3,\nu}$; if
$$\inf_{\xi-\zeta// v_{kl}}\frac{|\Delta_{\xi\zeta}(\langle k,\omega_{\nu}\rangle+\bar{\Omega}_{\nu,i}-\bar{\Omega}_{\nu,j})|}
{|\xi-\zeta|}\geq\frac12
M_{3,\nu}\max\{|k|,|i|+|j|\},$$
then we have
\begin{equation}|\mathcal{R}_{kij}^{\nu}(\alpha_{1,\nu})|
\leq(\mbox{diam}
\mathcal{O}_{\nu})^{n-1}\frac{4\alpha_{1,\nu}\max\{|i|,|j|\}}
{M_{3,\nu}\max\{|k|,|i|+|j|\}
\langle k\rangle^{\tau}}\leq4(\frac{10}{9})^2
\rho^{n-1}\frac{\alpha}{M_3\langle k\rangle^{\tau}}.
\end{equation}
Consequently, we have that for any $|k|>J_{\nu}$,
\begin{equation}|\bigcup_{i\neq \pm j}\mathcal{R}_{kij}^{\nu}(\alpha_{1,\nu})|\leq
\sum_{|i|+|j|\leq(10/9)^3(E/m)|k|}|\mathcal{R}_{kij}^{\nu}
(\alpha_{1,\nu})|\leq c_1\rho^{n-1}\frac{\alpha}{\langle k\rangle^{\tau-2}},\end{equation}
where $c_1$ depends on $E,m,M_3$. Furthermore, since $\tau\geq n+3$,
we have
\begin{equation}|\bigcup_{|k|>J_{\nu},i\neq \pm j}\mathcal{R}_{kij}^{\nu}(\alpha_{1,\nu})|\leq c_1c_2\rho^{n-1}\frac{\alpha}{1+J_{\nu}},\end{equation}
where $c_2$ depends only on $n$. The sum of the latter inequality over all $\nu$ converges, and thus we obtain \eqref{17.11.14.5}.
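Here the convergence of $\sum_{\nu\geq0}\frac{1}{1+J_{\nu}}$ is seen as follows: since $J_{\nu}=\gamma_0^{-\frac{\kappa^{\nu-1}}{\tau+1}}$ and, by Bernoulli's inequality, $\kappa^{\nu-1}\geq1+(\nu-1)(\kappa-1)$, we have for $\nu\geq1$
$$\frac{1}{1+J_{\nu}}\leq\gamma_0^{\frac{\kappa^{\nu-1}}{\tau+1}}
\leq\gamma_0^{\frac{1}{\tau+1}}\Big(\gamma_0^{\frac{\kappa-1}{\tau+1}}\Big)^{\nu-1},$$
which is summable as a geometric series, while the term $\nu=0$ contributes $1$ because $J_0=0$.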
\end{proof}
\begin{lem} \label{lem18.4.1.2}If $\gamma_0$ is sufficiently small and $\tau>n+1$, then
\begin{equation}\label{18.3.22.1}|\Theta_{\alpha}^2|\leq
c\rho^{n-1}\alpha,\end{equation}
where $c>0$ is a constant depending on $n,\tau,M_3$.
\end{lem}
\begin{proof}
We consider a fixed $\mathcal{R}^{\nu}_{k(-j)j}(\alpha_{2,\nu})$ with $|j|\leq\Pi_{\nu}$.
In view of \eqref{17.11.14.1}, if
$$\inf_{\xi\in\mathcal{O}_{\nu}}|\langle k,\omega_{\nu}(\xi)\rangle+
\bar{\Omega}_{\nu,-j}(\xi)-\bar{\Omega}_{\nu,j}(\xi)|\geq \frac12M_{3,\nu}\max\{|k|,2|j|\},$$
then we know $\mathcal{R}^{\nu}_{k(-j)j}(\alpha_{2,\nu})$ is empty by noticing that $\alpha_{2,\nu}\leq \frac12M_{3,\nu}$;
if
$$\inf_{\xi-\zeta// v_{kl}}\frac{|\Delta_{\xi\zeta}(\langle k,\omega_{\nu}\rangle+\bar{\Omega}_{\nu,-j}-\bar{\Omega}_{\nu,j}
)|}{|\xi-\zeta|}\geq\frac12
M_{3,\nu}\max\{|k|,2|j|\},$$
then we have
\begin{equation}|\mathcal{R}_{k(-j)j}^{\nu}(\alpha_{2,\nu})|
\leq(\mbox{diam}
\mathcal{O}_{\nu})^{n-1}\frac{4\alpha_{2,\nu}|j|}
{M_{3,\nu}\max\{|k|,2|j|\}\langle k\rangle^{\tau}}\leq2\rho^{n-1}\frac{\alpha_{2,\nu}}
{M_{3,\nu}\langle k\rangle^{\tau}}.
\end{equation}
Consequently, we have that for any $k\in\mathbb{Z}^n$,
\begin{eqnarray}\nonumber|\bigcup_{|j|\leq\Pi_{\nu}
}\mathcal{R}_{k(-j)j}^{\nu}(\alpha_{2,\nu})|&\leq&
\sum_{|j|\leq\Pi_{\nu}}|\mathcal{R}_{k(-j)j}^{\nu}
(\alpha_{2,\nu})|\\
\nonumber&\leq&2\rho^{n-1}\frac{\alpha_{2,\nu}}{M_{3,\nu}\langle k\rangle^{\tau}}(2\Pi_{\nu})\\
\nonumber&=&4\rho^{n-1}\frac{\alpha}{M_{3,\nu}2^{\nu}\langle k\rangle^{\tau}}.
\end{eqnarray}
Moreover, since $\tau>n+1$, we have
\begin{equation}\label{17.12.27.1}|
\bigcup_{k\in\mathbb{Z}^{n},\pm j\in\mathbb{Z}_*
\atop{|j|\leq\Pi_{\nu}}}\mathcal{R}_{k(-j)j}^{\nu}
(\alpha_{2,\nu})|\leq c\rho^{n-1}\frac{\alpha}{2^{\nu}},\end{equation}
where $c>0$ depends on $n,\tau,M_3$. The sum of the latter inequality over all $\nu$ converges and we finally obtain the estimate \eqref{18.3.22.1}.
\end{proof}
\section{Appendix}
\begin{lem}\label{lem18.4.1.1}
For $\sigma>0$ and $\nu>0$, the following inequalities hold true:
\begin{equation}\label{17.7.20.1}\sum_{k\in\mathbb{Z}^n}e^{-2|k|\sigma}
\leq\frac{1}{\sigma^n}(1+e)^n,\end{equation}
\begin{equation}\label{17.7.20.2}\sum_{k\in\mathbb{Z}^n}e^{-2|k|\sigma}|k|^{\nu}
\leq(\frac{\nu}{e})^{\nu}\frac{1}{\sigma^{\nu+n}}(1+e)^n,\end{equation}
\begin{equation}\label{17.11.10.1}\sup_{k\in\mathbb{Z}^n}(e^{-|k|\sigma}|k|^{\nu})
\leq(\frac{\nu}{e})^{\nu}\frac{1}{\sigma^{\nu}}.\end{equation}
\end{lem}
\begin{proof}Inequalities \eqref{17.7.20.1} and \eqref{17.7.20.2} can be found on page 22 in \cite{Bogo}. For \eqref{17.11.10.1}, a direct calculation shows that the function $t\mapsto t^{\nu}e^{-t\sigma}$ attains its maximum on $[0,+\infty)$ at $t=\nu/\sigma$, where it takes the value $(\nu/e)^{\nu}\sigma^{-\nu}$.
\end{proof}
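The three bounds of Lemma~\ref{lem18.4.1.1} are easy to probe numerically. The following sketch (in Python, for $n=1$ only, with a truncated sum) checks all three inequalities; the parameters $\sigma=0.6$, $\nu=2$ and the truncation range are our own illustrative choices, not part of the lemma.

```python
import math

# Numerical sanity check of Lemma lem18.4.1.1 for n = 1 (truncated sums).
sigma, nu, K = 0.6, 2, 2000  # illustrative parameters, not from the paper

s1 = sum(math.exp(-2 * abs(k) * sigma) for k in range(-K, K + 1))
s2 = sum(math.exp(-2 * abs(k) * sigma) * abs(k) ** nu for k in range(-K, K + 1))
s3 = max(math.exp(-k * sigma) * k ** nu for k in range(K + 1))

b1 = (1 + math.e) / sigma                                    # bound (17.7.20.1)
b2 = (nu / math.e) ** nu * (1 + math.e) / sigma ** (nu + 1)  # bound (17.7.20.2)
b3 = (nu / math.e) ** nu / sigma ** nu                       # bound (17.11.10.1)

assert s1 <= b1 and s2 <= b2 and s3 <= b3 + 1e-12
```

The sup in \eqref{17.11.10.1} can be attained up to rounding when $\nu/\sigma$ is an integer, whence the small tolerance in the last comparison.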
\begin{lem}\label{lem18.3.19.1}Let $u(x)$ be an analytic function on $D(s)$ with finite momentum majorant norm. Then for $0<\sigma<s$, we have
\begin{equation}\label{App1}
|u|_{s-\sigma,\tau+1}\leq(\frac{\tau+1}{e})^{\tau+1}
\frac{1}{\sigma^{\tau+1}}|u|_{s,\mathbf{a},0}.\end{equation}
\end{lem}
\begin{proof} By the definition of $|\cdot|_{s,\tau+1}$ and $|\cdot|_{s,\mathbf{a},0}$, we obtain
\begin{eqnarray}\nonumber|u|_{s-\sigma,\tau+1}
&=&\sum_{k\in\mathbb{Z}^n}|\hat{u}_k|e^{|k|(s-\sigma)}|k|^{\tau+1}
\\
\nonumber&\leq&\sup_{k\in\mathbb{Z}^n}(e^{-|k|\sigma}
|k|^{\tau+1})\sum_{k\in\mathbb{Z}^n}|\hat{u}_k|e^{|k|s}\\
\label{18.3.19.2}&\leq&(\frac{\tau+1}{e})^{\tau+1}
\frac{1}{\sigma^{\tau+1}}\sum_{k\in\mathbb{Z}^n}|\hat{u}_k|e^{|k|s}\\
\nonumber&\leq&(\frac{\tau+1}{e})^{\tau+1}
\frac{1}{\sigma^{\tau+1}}\sum_{k\in\mathbb{Z}^n}|
\hat{u}_k|e^{|k|s}e^{\mathbf{a}|\sum_{b=1}^nk_bj_b|}\\
\nonumber&=&(\frac{\tau+1}{e})^{\tau+1}
\frac{1}{\sigma^{\tau+1}}|u|_{s,\mathbf{a},0},
\end{eqnarray}
where in \eqref{18.3.19.2} we use \eqref{17.11.10.1}.
\end{proof}
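Lemma~\ref{lem18.3.19.1} can likewise be tested numerically. The sketch below takes $n=1$ and $\mathbf{a}=0$ (so the momentum weight is trivial) together with the hypothetical Fourier coefficients $\hat{u}_k=e^{-2|k|s}$; coefficients, parameters and truncation are our own choices for illustration.

```python
import math

# Numerical check of the smoothing estimate (App1) for n = 1, a = 0.
# Hypothetical Fourier coefficients u_k = exp(-2|k|s); parameters are ours.
s, sigma, tau, K = 1.0, 0.5, 2, 400

u = {k: math.exp(-2 * abs(k) * s) for k in range(-K, K + 1)}
lhs = sum(c * math.exp(abs(k) * (s - sigma)) * abs(k) ** (tau + 1)
          for k, c in u.items())                 # |u|_{s-sigma, tau+1}
norm = sum(c * math.exp(abs(k) * s) for k, c in u.items())  # |u|_{s, a=0}
rhs = ((tau + 1) / math.e) ** (tau + 1) / sigma ** (tau + 1) * norm

assert lhs <= rhs
```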
\begin{lem}\label{lem17.9.7.1} Let $R=(R_{ij})_{i,j\in\mathbb{Z}_*}$ be a matrix depending on $x\in D(s)$ such that the corresponding Hamiltonian
vector field $X_{\langle Rz,\bar{z}\rangle}$ has finite momentum majorant norm on $D(s,r)$.
Suppose $F=(F_{ij})_{i,j\in\mathbb{Z}_*}$ is another matrix depending on $x$ whose elements satisfy
\begin{equation}\label{17.9.30.1}\sum_{k\in\mathbb{Z}^n}|\hat{F}_{ijk}|
e^{|k|(s-\sigma)}e^{\mathbf{a}|\pi(k,i-j)|}\leq\frac{1}{\max\{|i|,|j|\}}
\sum_{k\in\mathbb{Z}^n}|\hat{R}_{ijk}|e^{|k|s}
e^{\mathbf{a}|\pi(k,i-j)|}\end{equation}
for $0<\sigma<\min\{1,s/2\}$. Then for $p\geq0$, $\mathbf{a}\geq0$, we have
\begin{equation}\label{17.9.30.2}||X_{\langle Fz,\bar{z}\rangle}||_{s-2\sigma,r,p,\mathbf{a}}
\leq\frac{3}{\sigma}||X_{\langle Rz,\bar{z}\rangle}||_{s,r,p-1,\mathbf{a}}.\end{equation}
\end{lem}
\begin{proof}
By \eqref{7.13.2}, we obtain
\begin{equation}X_{\langle Fz,\bar{z}\rangle}=
(0,-(\sigma_{j_b}\partial_{x_b}\langle Fz,\bar{z}\rangle)_{1\leq b\leq n},-\mathbf{i}(\sigma_j\partial_{\bar{z}_j}\langle Fz,\bar{z}\rangle
)_{j\in\mathbb{Z}_*},\mathbf{i}(\sigma_j\partial_{z_j}\langle Fz,\bar{z}\rangle
)_{j\in\mathbb{Z}_*})^T\end{equation}
and $X_{\langle Rz,\bar{z}\rangle}$ is defined similarly.
In view of
$\langle Fz,\bar{z}\rangle=\sum_{i,j\in\mathbb{Z}_*\atop{k\in\mathbb{Z}^n}}
\hat{F}_{ijk}e^{\mathbf{i}k\cdot x}z_i\bar{z}_j,$
we have
\begin{equation*}-(\sigma_{j_b}\partial_{x_b}\langle Fz,\bar{z}\rangle)_{1\leq b\leq n}=(\sum_{i,j\in\mathbb{Z}_*\atop{k\in\mathbb{Z}^n}}-\mathbf{i}\sigma_{j_b}k_b
\hat{F}_{ijk}e^{\mathbf{i}k\cdot x}z_i\bar{z}_j)_{1\leq b\leq n},\end{equation*}
\begin{equation*}-\mathbf{i}(\sigma_j\partial_{\bar{z}_j}\langle Fz,\bar{z}\rangle
)_{j\in\mathbb{Z}_*}=(\sum_{i\in\mathbb{Z}_*\atop{k}\in\mathbb{Z}^n}
-\mathbf{i}\sigma_j\hat{F}_{ijk}e^{\mathbf{i}k\cdot x}z_i)_{j\in\mathbb{Z}_*},\end{equation*}
\begin{equation*}\mathbf{i}(\sigma_j\partial_{z_j}\langle Fz,\bar{z}\rangle
)_{j\in\mathbb{Z}_*}=(\sum_{i\in\mathbb{Z}_*\atop{k}\in\mathbb{Z}^n}
\mathbf{i}\sigma_j\hat{F}_{jik}e^{\mathbf{i}k\cdot x}\bar{z}_i)_{j\in\mathbb{Z}_*}.\end{equation*}
The first ingredient of $||X_{\langle Fz,\bar{z}\rangle}||_{s-2\sigma,r,p,\mathbf{a}}$ is zero, and the second ingredient can be controlled by the third ingredient (or the fourth ingredient):
\begin{eqnarray}\nonumber&&\sup_{(y,z,\bar{z})\in D(r)}\frac{1}{r^2}|(\sum_{i,j\in\mathbb{Z}_*\atop{k\in\mathbb{Z}^n}}
|-\mathbf{i}\sigma_{j_b}k_b
\hat{F}_{ijk}|e^{\mathbf{a}|\pi(k,i-j)|}e^{|k|(s-2\sigma)}|z_i||\bar{z}_j|)
_{1\leq b\leq n}|_1\\
\nonumber&&=\sup_{(y,z,\bar{z})\in D(r)}\frac{1}{r^2}\sum_{i,j\in\mathbb{Z}_*\atop{k\in\mathbb{Z}^n}}
|k||\hat{F}_{ijk}|e^{\mathbf{a}|\pi(k,i-j)|}e^{|k|(s-2\sigma)}|z_i||\bar{z}_j|\\
\nonumber&&\leq\sup_{k\in\mathbb{Z}^n}|k|e^{-|k|\sigma}\cdot\sup_{(y,z,\bar{z})\in D(r)}\frac{1}{r^2}\sum_{
i,j\in\mathbb{Z}_*}\big(\sum_{k\in\mathbb{Z}^n}|\hat{F}_{ijk}|
e^{\mathbf{a}|\pi(k,i-j)|}e^{|k|(s-\sigma)}\big)|z_i||\bar{z}_j|\\
\nonumber&&\leq\frac{1}{e\sigma}\sup_{(y,z,\bar{z})\in D(r)}\frac{1}{r^2}\langle(\sum_{i\in\mathbb{Z}_*
\atop{k\in\mathbb{Z}^n}}|\hat{F}_{ijk}|e^{\mathbf{a}|\pi(k,i-j)|}
e^{|k|(s-\sigma)}|z_i|)
_{j\in\mathbb{Z}_*},|\bar{z}|\rangle\\
\nonumber&&\leq\frac{1}{e\sigma}\sup_{(y,z,\bar{z})\in D(r)}\frac{1}{r^2}||(\sum_{i\in\mathbb{Z}_*
\atop{k\in\mathbb{Z}^n}}|\hat{F}_{ijk}|e^{\mathbf{a}|\pi(k,i-j)|}
e^{|k|(s-\sigma)}|z_i|)_{j\in\mathbb{Z}_*}||_{-a,-p}
||\bar{z}|||_{a,p}\\
\label{17.11.6.2}&&\leq\frac{1}{e\sigma}\sup_{(y,z,\bar{z})\in D(r)}\frac{1}{r}||(\sum_{i\in\mathbb{Z}_*\atop{k\in\mathbb{Z}^n}}
|-\mathbf{i}\sigma_j\hat{F}_{ijk}|e^{\mathbf{a}|\pi(k,i-j)|}
e^{|k|(s-\sigma)
}|z_i|)_{j\in\mathbb{Z}_*}||_{a,p}.
\end{eqnarray}
Moreover, the estimates for the third and fourth ingredients of $||X_{\langle Fz,\bar{z}\rangle}||_{s-2\sigma,r,p,\mathbf{a}}$ are parallel. Thus it only remains to estimate the third ingredient:
\begin{eqnarray}\nonumber&&\sup_{(y,z,\bar{z})\in D(r)}\frac{1}{r}||(\sum_{i\in\mathbb{Z}_*\atop{k\in\mathbb{Z}^n}}
|-\mathbf{i}\sigma_j\hat{F}_{ijk}|e^{\mathbf{a}|\pi(k,i-j)|}e^{|k|(s-\sigma)
}|z_i|)_{j\in\mathbb{Z}_*}||_{a,p}\\
\nonumber&&\leq\sup_{(y,z,\bar{z})\in D(r)}\frac{1}{r}||(\frac{1}{|j|}\sum_{i\in\mathbb{Z}_*\atop{k\in\mathbb{Z}^n}}
|\hat{R}_{ijk}|e^{\mathbf{a}|\pi(k,i-j)|}e^{|k|s
}|z_i|)_{j\in\mathbb{Z}_*}||_{a,p}\\
\label{17.9.26.7}&&=\sup_{(y,z,\bar{z})\in D(r)}\frac{1}{r}||(\sum_{i\in\mathbb{Z}_*\atop{k\in\mathbb{Z}^n}}
|-\mathbf{i}\sigma_j\hat{R}_{ijk}|e^{\mathbf{a}|\pi(k,i-j)|}e^{|k|s
}|z_i|)_{j\in\mathbb{Z}_*}||_{a,p-1}.
\end{eqnarray}
In view of \eqref{17.11.6.2} and \eqref{17.9.26.7}, we get \begin{equation}||X_{\langle Fz,\bar{z}\rangle}||_{s-2\sigma,r,p,\mathbf{a}}
\leq(\frac{2}{\sigma}+1)||X_{\langle Rz,\bar{z}\rangle}||_{s,r,p-1,\mathbf{a}},\end{equation}
and thus \eqref{17.9.30.2}.
\end{proof}
By \eqref{17.11.8.2}, if $X:D(s,r)\rightarrow\mathcal{P}^{a,q}$ with $||X||_{s,r,q,\mathbf{a}}<+\infty$, then $X$ is analytic, namely the
Fr\'echet differential $D(s,r)\ni v\mapsto dX(v)\in\mathcal{L}(D(s,r),\mathcal{P}^{a,q})$ is continuous.
The commutator of two vector fields $X:D(s,r)\rightarrow\mathcal{P}^{a,q},
Y:D(s,r)\rightarrow\mathcal{P}^{a,p}$ is
\begin{equation}[X,Y](v):=dX(v)[Y(v)]-dY(v)[X(v)],\quad\forall
\ v\in D(s,r).\end{equation}
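For linear vector fields this commutator reduces to a matrix commutator: if $X(v)=Av$ and $Y(v)=Bv$, then $dX(v)=A$, $dY(v)=B$, and so $[X,Y](v)=(AB-BA)v$. A minimal finite-dimensional sketch (our own illustration, not the infinite-dimensional setting of the paper):

```python
import numpy as np

# [X,Y](v) = dX(v)[Y(v)] - dY(v)[X(v)] for linear fields X(v)=Av, Y(v)=Bv.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
v = rng.normal(size=3)

X, dX = (lambda v: A @ v), (lambda v: A)  # dX is the constant Jacobian A
Y, dY = (lambda v: B @ v), (lambda v: B)

bracket = dX(v) @ Y(v) - dY(v) @ X(v)
assert np.allclose(bracket, (A @ B - B @ A) @ v)
```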
The next lemma is the estimate for the momentum majorant norm of
the commutator of two vector fields. It is Proposition 2.1 in \cite{Berti1} with a small modification: the domain and the target space of one of the vector fields are different.
\begin{lem}\label{lem17.12.22.1}Let $X:D(s,r)\rightarrow\mathcal{P}^{a,q}$ and $Y:D(s,r)\rightarrow\mathcal{P}^{a,p}$ with $||X||_{s,r,q,\mathbf{a}},||Y||_{s,r,p,\mathbf{a}}<+\infty$. Then for $s/2\leq s'<s$ and $r/2\leq r'<r$,
\begin{equation}\label{17.12.22.2}||[X,Y]||_{s',r',q,\mathbf{a}}
\leq 2^{2n+3}\max\{\frac{s}{s-s'},\frac{r}{r-r'}\}||X||_{s,r,q,\mathbf{a}}
||Y||_{s,r,p,\mathbf{a}}.\end{equation}
\end{lem}
\begin{proof} For $\mathbf{a}=0$,
the proof is parallel to that of Lemma 2.15 in \cite{Berti2}, in which the following Cauchy estimate (Lemma 2.14 in \cite{Berti2}) is essential:
\begin{equation}\label{17.12.22.4}\sup_{v\in D(s',r')}||dW(v)||_{\mathcal{L}
((\mathcal{P}^{a,p},||\cdot||_{s,r,p}),(\mathcal{P}^{a,p},
||\cdot||_{s',r',p}))}\leq 4\max
\{\frac{s}{s-s'},\frac{r}{r-r'}\}||W||_{s,r,p;
D(s,r)},\end{equation}
where
$W:D(s,r)\rightarrow\mathcal{P}^{a,p}\ \mbox{with}\ ||W||_{s,r,p,0}<+\infty.$
Note that $X:D(s,r)\rightarrow\mathcal{P}^{a,q}$. The difference between $X$ and $W$ lies in that the domain and the target space are different. However, in parallel to the proof of \eqref{17.12.22.4}, we can also obtain the Cauchy estimate
\begin{equation}\label{17.12.22.1}\sup_{v\in D(s',r')}||dX(v)||_{\mathcal{L}((\mathcal{P}^{a,p},||\cdot||_{s,r,p}),
(\mathcal{P}^{a,q},||\cdot||_{s',r',q}))}
\leq 4\max\{\frac{s}{s-s'},
\frac{r}{r-r'}\}||X||_{s,r,q;D(s,r)}.\end{equation}
Therefore, following the proof of Lemma 2.15 in \cite{Berti2} and using \eqref{17.12.22.1}, we get
\begin{equation}\label{17.12.22.7}||[X,Y]||_{s',r',q,0}
\leq 2^{2n+3}\max\{\frac{s}{s-s'},\frac{r}{r-r'}\}||X||_{s,r,q,0}
||Y||_{s,r,p,0}.\end{equation}
For $\mathbf{a}>0$, the proof follows the idea of Proposition 2.1 in \cite{Berti1}. Write
$X=\sum_{h\in\mathbb{Z}}X_h,$
where $X_h$ is the sum of the monomials of $X$ with momentum $h$. Similarly, write $Y=\sum_{h\in\mathbb{Z}}Y_h$, and thus
$$[X,Y]=\sum_{h_1,h_2\in\mathbb{Z}}[X_{h_1},Y_{h_2}].$$
It is easy to see that
\begin{equation}\label{17.12.22.5}\pi([X_{h_1},Y_{h_2}])=h_1+h_2=\pi(X_{h_1})
+\pi(Y_{h_2}).\end{equation}
By \eqref{17.12.22.7} and \eqref{17.12.22.5}, the estimate
\begin{eqnarray}\nonumber
||[X_{h_1},Y_{h_2}]||_{s',r',q,\mathbf{a}}&=&
e^{\mathbf{a}(h_1+h_2)}||[X_{h_1},Y_{h_2}]||_{s',r',q,0}\\
\nonumber&\leq&2^{2n+3}\max\{\frac{s}{s-s'},\frac{r}{r-r'}\}
e^{\mathbf{a}(h_1+h_2)}||X_{h_1}||_{s,r,q,0}||Y_{h_2}||_{s,r,p,0}\\
\label{17.12.22.6}&=&2^{2n+3}\max\{\frac{s}{s-s'},\frac{r}{r-r'}\}
||X_{h_1}||_{s,r,q,\mathbf{a}}||Y_{h_2}||_{s,r,p,\mathbf{a}}\end{eqnarray}
holds true. Finally, \eqref{17.12.22.2} for $\mathbf{a}>0$ is obtained by
summing over $h_1$ and $h_2$.
\end{proof}
The following lemma is the estimate for the momentum majorant norm of the transformed Hamiltonian vector field. It is Lemma 2.17 in \cite{Berti2} with a small modification: the domain and the target space of the Hamiltonian vector field $X_H$ are different.
\begin{lem}\label{lem17.12.23.1}Let $r/2\leq r'<r,s/2\leq s'<s$ and $F$ with
\begin{equation}2^{2n+5}e\max\{\frac{s}{s-s'},\frac{r}{r-r'}\}
||X_F||_{s,r,p,\mathbf{a}}<1.\end{equation}
Then the time $1$-Hamiltonian flow
$\Phi_F^1:D(s',r')\rightarrow D(s,r)$
is well defined, analytic, symplectic, and $\forall \ H$ with $||X_H||_{s,r,q,\mathbf{a}}<+\infty$, we have
\begin{equation}\label{17.12.23.2}||X_{H\circ\Phi_F^1}||_{s',r',q,\mathbf{a}}
\leq\frac{||X_H||_{s,r,q,\mathbf{a}}}{1-2^{2n+5}e
\max\{\frac{s}{s-s'},\frac{r}{r-r'}\}||X_F||_{s,r,p,
\mathbf{a}}}.\end{equation}
\end{lem}
\begin{proof}
With the help of Lemma \ref{lem17.12.22.1}, which deals with the case that the domain and the target space of one vector field are different, the proof is parallel to that of Lemma 2.17 in \cite{Berti2}. Thus, we omit the details here.
\end{proof}
\textbf{Acknowledgement.} {Part of this article was written while the first author was visiting the School of Mathematics, Sichuan University. She thanks the school for its pleasant working atmosphere and Professor Weinian Zhang for his invaluable help. The second author was supported by the National Natural Science Foundation of China (No.~11671280).}
\section*{Introduction}
\setcounter{numsection}{0}
Since the fundamental work of Mumford~\cite{MumfordGIT}, Kirwan~\cite{Kir2},
Guillemin--Sternberg \cite{GS_Mult}, and others, moment map geometry has become
one of the most important tools for studying actions of \emph{complex-reductive
Lie groups} $G=K^{\mathbb{C}}$ on K\"ahler manifolds. Given a Hamiltonian
$G$-manifold, i.e., a K\"ahler $G$-manifold $(X, \omega_X)$ admitting a moment
map $\mu\colon X \to \Lie(K)^*$ for the $K$-action, by the work of
Heinzner--Loose~\cite{ReductionOfHamiltonianSpaces},
Heinzner--Huckleberry--Loose \cite{Extensionofsymplectic}, and
Sjamaar~\cite{Sj2}, the set of $\mu$-semistable points $X^{ss}_G(\mu) := \{x \in
X \mid \overline{G\acts x} \cap \mu^{-1}(0) \neq \emptyset \}$ admits an
analytic Hilbert quotient, i.e., a $G$-invariant holomorphic Stein map
$\pi\colon X^{ss}_G(\mu)\to X^{ss}_G(\mu)\hq G$ onto a K\"ahlerian complex
space $X^{ss}_G(\mu)\hq G$ with structure sheaf
$\mathscr{O}_{X^{ss}_G(\mu)/\negthickspace /
G}=(\pi_*\mathscr{O}_{X^{ss}_G(\mu)})^G$; see also the survey \cite{HH2}. If $X$
is projective algebraic, and if the K\"ahler form $\omega_X$ as well as the
moment map $\mu$ are induced by an embedding of $X$ into some projective space,
both the set $X^{ss}_G(\mu)$ of semistable points and the quotient
$X^{ss}_G(\mu)\hq G$ are the ones constructed via Geometric Invariant Theory
(GIT). This theory crucially uses the fact that complex-analytic objects on $X$
can be averaged over the compact group $K$ to produce $G$-invariant objects,
which then can be used to construct the quotient.
On the other hand, actions of \emph{unipotent groups} on (compact) K\"ahler
manifolds appear naturally in a number of contexts and play an important role in
K\"ahler geometry. By a fundamental result of Lichnerowicz and Matsushima, a
given compact K\"ahler manifold $X$ can admit a constant scalar curvature
K\"ahler metric only if the Lie sub-algebra of all holomorphic vector fields
having a zero is reductive, see e.g.~\cite[Proposition~4.18 and
Remark~4.12]{Sze}. In other words, unipotent subgroups of $\mathrm{Aut}(X)$
appear as obstructions to the existence of such metrics. In a related direction,
the paper \cite{CD} proposes a way to produce canonical destabilising
test-configurations (showing $K$-unstability of $X$) from non-reductive
subgroups of the automorphism group.
Motivated by these and other moduli-theoretic questions, Doran and Kirwan in
\cite{DorKir} started to study actions of unipotent algebraic groups $N$ on
projective manifolds $X$ linearised in very ample line bundles, using
invariant-theoretic methods on the one hand and Geometric Invariant Theory for
related actions of reductive groups $G$ containing $N$ on twisted products $G
\times_N X$ on the other hand. When thinking about the relation of their work to
K\"ahler geometry and moment maps, one encounters three basic questions:
\begin{enumerate}
\item[(a)] What is the correct analogue of a ``linear'' action in K\"ahler
geometry?
\item[(b)] Given a compact K\"ahler $N$-manifold $X$, how can one produce
K\"ahler metrics on the non-compact twisted product $G\times_N X$ ?
\item[(c)] If (b) has a positive answer, can one use moment map geometry on
the non-compact Hamiltonian $G$-manifold to produce quotients for the $N$-action
on $X$ with good geometric and complex-analytic properties?
\end{enumerate}
With a different set of problems in mind, Question (a) was solved a rather
long time ago by Fujiki~\cite{Fuj}, Lieberman~\cite{Lie}, and
Sommese~\cite{Som}: meromorphic actions and more generally actions for which the
induced action on the Albanese torus is trivial were already called ``linear''
or ``projective'' in \emph{loc.~cit.}, and it turns out that also with a view
towards moment map geometry and K\"ahlerian quotient theory, these are the
correct conditions to impose, see~Remark~\ref{rem:reductive_equivalence}. For
actions of reductive Lie groups this was observed by Huckleberry--Wurzbacher
\cite[Remark on page 262]{HW} and Fujiki~\cite[Lemma 2.1]{Fuj2}. Note that the
Lie algebra of the group of all automorphisms acting trivially on Albanese
consists exactly of those holomorphic vector fields having a zero, e.g.~see
\cite[Proposition 6.8]{Fuj}, which connects the question of ``linearity'' to the
one concerning the existence of extremal K\"ahler metrics discussed above.
\subsection*{Main results}
As our first contribution, using a criterion of Blanchard and properties of
unipotent groups, we prove the following result with respect to Question (b).
\paragraph{Theorem (Theorem~\ref{Thm:ExistenceExtensions}).}
{\em Let $N$ be a unipotent subgroup of the simply-connected complex semisimple Lie
group $G$. For a connected compact K\"ahler manifold $(X, \omega_X)$ endowed with a
holomorphic $N$-action the following statements are equivalent.
\begin{enumerate}[\rm (1)]
\item There exists a Hamiltonian $G$-extension $(Z,\omega_Z)$ of the $N$-action
on $X$.
\item The $N$-action on $X$ is meromorphic.
\item The twisted product $G\times_NX$ is K\"ahler.
\end{enumerate}
Moreover, given a meromorphic $N$-action on $X$ we can always find a
Hamiltonian $G$-extension that is a $G$-equivariant compactification of
$G\times_NX$.}
\bigskip
Here, a \emph{Hamiltonian $G$-extension} of the $N$-action on
$(X,\omega_X)$ consists of a connected compact Hamiltonian $G$-manifold
$(Z,\omega_Z)$ and an $N$-equivariant embedding $\iota\colon X\hookrightarrow Z$
such that for the de Rham cohomology classes associated with the K\"ahler forms
we have $\iota^*[\omega_Z]=[\omega_X]$. This theorem links the condition
``meromorphic'' to moment map geometry of the $G$-action on $G \times_N X$, and
therefore opens the door to using the complex-reductive theory for the
construction of quotients of $X$ with respect to the $N$-action.
Once the existence of Hamiltonian $G$-extensions is established, these can be
used to define the set of \emph{$N$-semistable points} $X^{ss}_N[\omega_X]$ with
respect to the given K\"ahler class $[\omega_X] \in H^2(X, \mathbb{R})$, after
one has chosen a suitable K\"ahler form on the quasi-affine
homogeneous space $G/N$. While we show in Theorem~\ref{Thm:Independence}, using
Hodge-theoretic arguments as well as the relation of $G$-semistability to
$K$-invariant strictly plurisubharmonic exhaustion functions, that this
definition does not depend on the chosen $G$-extension, the choice of metric on
$G/N$ influences semistability in a subtle way, as we explore in great detail in
Section~\ref{subsect:discussion_of_choice}. The problems that occur are closely
related to the ones encountered in the algebraic situation when searching for
various kinds of ``reductive envelopes'', cf.~\cite[Sections~5.2 and
5.3]{DorKir}, and can be traced back to the fact that in general there is no
choice of metric on $G/N$ so that the corresponding moment map is
proper or admissible in the sense of \cite{Sj2}. A detailed comparison of the
moment map approach introduced here and the GIT approach of Doran--Kirwan is
presented in Section~\ref{subsect:algebraic_actions}.
Finally, regarding Question (c), we establish that the set of $N$-semistable
points indeed has a number of very desirable complex-geometric properties. The
following result summarises the content of
Theorem~\ref{thm:existence_of_geometric_quotient},
Proposition~\ref{prop:compactI}, Theorem~\ref{thm:Zopen}, and
Theorem~\ref{thm:reducedKaehler}.
\paragraph{Theorem.}
{\em Let $(X, \omega_X)$ be a compact K\"ahler manifold endowed with a meromorphic
$N$-action. Then, the following holds.
\begin{enumerate}
\item[\rm (a)] The set $X^{ss}_N[\omega_X]$ of semi\-stable points admits a
geometric quotient $\pi\colon X^{ss}_N[\omega_X] \to X^{ss}_N[\omega_X] / N$ by
the $N$-action. In fact, $\pi$ is a principal $N$-fibre bundle and
$X^{ss}_N[\omega_X] / N =:Q$ is smooth.
\item[\rm (b)] The definition of semistability naturally induces an open
embedding $\phi\colon Q=X^{ss}_N[\omega_X] / N \hookrightarrow \overline{Q}$ of $Q$ into a compact
complex space $\overline{Q}$ such that $\overline{Q} \setminus \phi( Q )$ is analytic.
\item[\rm (c)] The set $X^{ss}_N[\omega_X]$ of semi\-stable points is Zariski-open
in $X$. Moreover, the quotient map $\pi\colon X^{ss}_N[\omega_X] \to
Q$ extends to a meromorphic map $\pi\colon X \dasharrow
\overline{Q}$.
\item[\rm (d)] There exists a K\"ahler structure $\omega_{\ol{Q}}$ on $\ol{Q}$
whose restriction $\omega_Q = \omega_{\ol{Q}}|_Q^{}$ to $Q \hookrightarrow
\ol{Q}$ is smooth and fulfils
\[ [\pi^*{\omega_Q}] = [\omega_X|_{X^{ss}_N[\omega_X]}]
\in H^2(X^{ss}_N[\omega_X],\, \mathbb{R}).\]
\end{enumerate}}
\subsection*{Future directions}
With the fundamental results of a K\"ahlerian quotient theory for meromorphic
actions of unipotent groups established, interesting questions include whether
despite the difficulties presented by the examples collected in Section 4.3
under certain additional conditions the set of semistable points can be shown to
be independent of further choices, whether in certain applications there are
natural K\"ahler metrics on the homogeneous space $G/N$ leading to a quotient
that is as well-adapted as possible to the given geometric situation at hand,
and whether given a compact K\"ahler manifold $(X, \omega_X)$ with non-trivial
non-reductive part in the automorphism group (obstructing the existence of
special metrics) one can use the K\"ahlerian quotient theory for this unipotent
group in order to produce a K\"ahlerian complex space/manifold where said
obstruction vanishes and special metrics might exist.
\subsection*{Organisation of the paper}
In the first two sections we review basic facts about meromorphic actions and
GIT on K\"ahler manifolds via moment maps. In Section~\ref{Section:Extensions}
we prove our first main result, Theorem~\ref{Thm:ExistenceExtensions}. The
difficulties one encounters in defining the set of semistable points for
holomorphic actions of unipotent groups, the actual definition of
semistability, as well as fundamental properties of the set of semistable points
are discussed in Section~\ref{Section:Semistable}, while in the final
Section~\ref{sect:properties} we prove the second main result concerning the
properties of semistable quotients for unipotent group actions.
\subsection*{Acknowledgements}
The first author wants to thank the Laboratoire de Math\'ematiques Pures et
Appliqu\'ees at Universit\'e du Littoral C\^ote d'Opale as well as FRIAS for
hospitality during research visits in the spring of 2016 and 2018,
respectively. During the preparation of this paper, he has been partially
supported by the DFG-Collaborative Research Center SFB/TR 45 ``Periods, moduli
spaces and arithmetic of algebraic varieties''. The second author gratefully
acknowledges the hospitality of the mathematical departments of the universities
of Duisburg-Essen and Freiburg as well as of the FRIAS during the spring of
2016, 2017 and 2018. Both authors would like to thank Peter Heinzner for
fruitful and interesting discussions.
\subsection*{Global conventions}
We work over the field $\mathbb{C}$ of complex numbers. A \emph{complex space}
is a reduced complex space with countable topology. \emph{Analytic
subsets} are assumed to be closed. \emph{Manifolds} are assumed to be connected.
\section{Meromorphic group actions}\label{Section:mero}
Let us review some facts about meromorphic group actions from~\cite{Fuj} that
we will use freely in the following. Lieberman obtained essentially the same
results in \cite{Lie}. In this section, $G$ denotes a complex Lie group.
A \emph{meromorphic structure} on the complex Lie group $G$ is a compactification
$\ol{G}$ together with a meromorphic mapping
$\ol{\mu}\colon\ol{G}\times\ol{G}\to\ol{G}$ which extends the group
multiplication of $G$ such that $\ol{\mu}$ is holomorphic on
$(G\times\ol{G})\cup(\ol{G}\times G)$. In other words, the compactification
$\ol{G}$ is $(G\times G)$-equivariant. Moreover, the map $G\to G$, $g\mapsto
g^{-1}$, extends to a meromorphic map $\ol{G}\to\ol{G}$.
Let us fix a meromorphic structure on $G$. A complex subgroup $H$ of $G$ is
\emph{meromorphic} if the topological closure $\ol{H}$ of $H$ in $\ol{G}$ is
analytic.
\begin{rem}{\rm
If $G$ is linear algebraic, we may and will choose $\ol{G}$ to be a
projective manifold. The meromorphic subgroups of $G$ are then precisely the
algebraic subgroups of $G$.}
\end{rem}
Let $X$ be a complex space endowed with a holomorphic $G$-action. This
$G$-action is \emph{meromorphic} if the action map $G\times X\to X$ extends to
a meromorphic map $\ol{G}\times X\to X$. For meromorphic actions on compact
K\"ahler manifolds (in fact, on reduced compact complex spaces of class
$\mathcal{C}$) we have the following quotient theorem,
see~\cite[Theorem~4.1]{Fuj}.
\begin{thm}[Quotient Theorem]
Let $X$ be a compact K\"ahler manifold on which $G$ acts meromorphically. Then,
there exist a compact complex space $Y$ and a $G$-invariant surjective
meromorphic map $\pi\colon X\dasharrow Y$ such that the following universal
property is satisfied. If $\pi'\colon X\dasharrow Y'$ is another $G$-invariant
meromorphic map to a compact complex space $Y'$, then there exists a unique
meromorphic map $m\colon Y \dasharrow Y'$ such that $m\circ\pi=\pi'$. In this
situation we call $\pi\colon X\dasharrow Y$ a \emph{meromorphic quotient} for
the $G$-action on $X$.
\end{thm}
\begin{ex}{\rm
Let $H$ be an algebraic subgroup of $G$. Then there exists a $G$-equivariant
smooth projective compactification $\ol{G/H}$ of $G/H$. Moreover, the
$H$-principal bundle $\pi\colon G\to G/H$ extends to a rational map $\ol{\pi}
\colon\ol{G}\dasharrow\ol{G/H}$. Then $\ol{\pi}$ is a meromorphic quotient for the
$H$-action on $\ol{G}$.}
\end{ex}
In fact, by inspecting Fujiki's construction, one obtains the following more
precise result that has appeared in several places in the literature; see for
example \cite[Theorem 0.2.2]{BBS}, as well as \cite[Proposition 3.1]{Greb},
\cite[Section 3]{Hu}, and the references given there for the analogous result in
the algebraic category.
\begin{prop}\label{Chow}
Let $X$ be a compact K\"ahler manifold on which $G$ acts meromorphically. Then,
there exists an irreducible, compact analytic subset $Q_F$ of the cycle space of
$X$, a $G$-invariant meromorphic map $\pi_F\colon X \dasharrow Q_F$, and a
$G$-invariant Zariski-open subset $U_F \subset \mathrm{dom}(\pi_F)$, called a
\emph{Fujiki set} of $X$, such that
\begin{enumerate}[\rm (1)]
\item $\pi_F\colon X \dasharrow Q_F$ is a meromorphic quotient for the
$G$-action on $X$,
\item $U_F \subset X_{\mathrm{gen}} := \{x \in X \mid \dim G\acts x = m \text{
is maximal}\}$,
\item for all $u \in U_F$, we have $\pi_F (u) = \overline{G\acts u}$,
considered as a (reduced) cycle of $X$,
\item $\pi_F(U_F)$ is smooth and Zariski-open in $Q_F$, and the restriction
$\pi_F|_{U_F}\colon U_F \to \pi_F(U_F)$ is a geometric quotient.
\end{enumerate}
\end{prop}
We call $\pi_F\colon X \dasharrow Q_F$ a \emph{Fujiki quotient} of $X$ by $G$.
\section{Hamiltonian $G$-spaces}
Let $G=K^\mbb{C}$ be a complex reductive group with maximal compact subgroup
$K$. In this section we review the definition and some properties of Hamiltonian
$G$-manifolds. A general reference for the complex-analytic theory of moment
maps on K\"ahler manifolds is~\cite{HH2}.
\subsection{Moment maps}
Let $(Z,\omega_Z)$ be a K\"ahler manifold with K\"ahler form $\omega_Z$. Suppose
that $G$ acts holomorphically on $Z$ such that $\omega_Z$ is $K$-invariant. A
\emph{moment map} for the $K$-action on $Z$ is a $K$-equivariant smooth map
$\mu\colon Z\to\lie{k}^*$, where $K$ acts via the coadjoint representation on
the dual $\lie{k}^*$ of its Lie algebra, such that
\begin{equation}\label{Eqn:momentcondition}
d\mu^\xi=\iota_{\xi_Z}\omega_Z.
\end{equation}
Here, for every $\xi\in\lie{k}$, we write $\mu^\xi\in\mathcal{C}^\infty(Z)$ for
the function defined by $\mu^\xi(z)=\mu(z)(\xi)$, and $\xi_Z$ for the vector
field on $Z$ whose flow is given by $(t,z)\mapsto\exp(-t\xi)\acts z$, and
$\iota_{\xi_Z}\omega_Z$ for the contraction of $\omega_Z$ by $\xi_Z$.
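As a concrete illustration (our own example, using the sign conventions above): for $K=S^1$ rotating $\mathbb{C}\cong\mathbb{R}^2$ with $\omega=dx\wedge dy$, the flow convention $(t,z)\mapsto\exp(-t\xi)\acts z$ gives $\xi_Z=(y,-x)$ for the generator $\xi$, and $\mu^{\xi}(x,y)=\tfrac12(x^2+y^2)$ satisfies \eqref{Eqn:momentcondition}. A finite-difference check in Python:

```python
import numpy as np

# Verify d(mu^xi) = iota_{xi_Z} omega for the rotation action on R^2.
def mu(p):                       # candidate moment map mu^xi = (x^2 + y^2)/2
    x, y = p
    return 0.5 * (x ** 2 + y ** 2)

def xi_Z(p):                     # generator of the flow z -> exp(-i t) z
    x, y = p
    return np.array([y, -x])

def omega(u, w):                 # standard area form dx wedge dy
    return u[0] * w[1] - u[1] * w[0]

p, h = np.array([0.7, -1.3]), 1e-6
for w in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    dmu_w = (mu(p + h * w) - mu(p - h * w)) / (2 * h)  # central difference
    assert abs(dmu_w - omega(xi_Z(p), w)) < 1e-8
```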
If a moment map for the $K$-action on $Z$ exists, we say that $(Z,\omega_Z)$ is
a \emph{Hamiltonian $G$-manifold}. The notions introduced above make sense for
actions of $G$ on K\"ahlerian complex spaces, see for example \cite[Sections 3.1
and 3.2]{PaHq} and the references given there; in this setup one speaks about
\emph{Hamiltonian $G$-spaces}.
\begin{rem}\label{rem:semisimple_always_Hamiltonian}{\rm
If $G$ is semisimple, then every K\"ahler manifold $(Z,\omega_Z)$ endowed with a
holomorphic $G$-action such that $\omega_Z$ is $K$-invariant is Hamiltonian.
Moreover, in this case the moment map $\mu\colon Z\to\lie{k}^*$ is unique,
see~\cite[Proposition~24.1]{GS} and the discussion following this proposition.}
\end{rem}
\begin{rem}\label{rem:reductive_equivalence}{\rm
For a \emph{compact} K\"ahler manifold $(Z,\omega_Z)$ endowed with a
holomorphic action of the connected complex reductive Lie group $G=K^\mathbb{C}$
and $K$-invariant K\"ahler form $\omega_Z$ the following statements are
equivalent:
\begin{enumerate}[\rm (1)]
\item $Z$ is a Hamiltonian $G$-manifold.
\item $G$ acts meromorphically on $Z$ in the sense of~\cite{Fuj}, see also
Section~\ref{Section:mero}.
\item $G$ acts trivially on the Albanese torus $\Alb(Z)$ of $Z$.
\end{enumerate}
The equivalence $(1)\Longleftrightarrow(3)$ was observed in~\cite[Lemma~2.1 and
subsequent remark]{Fuj2}. The implication $(2)\Longrightarrow(3)$ follows from
\cite[Lemma~3.8]{Fuj} since every complex reductive group is linear algebraic.
The last implication $(3)\Longrightarrow(2)$ follows
from~\cite[Proposition~I]{Som}, see also~\cite[Proposition~6.10]{Fuj}.}
\end{rem}
For later use we record an elementary result on the moment image of a
Hamiltonian $G$-manifold $(Z,\omega_Z)$ with moment map $\mu\colon
Z\to\lie{k}^*$: If the interior of $\mu(Z)$ relative to $\lie{k}^*$ is
non-empty, then, due to Sard's Theorem, there exists a point $z\in Z$ such that
$d\mu_z$ is surjective. Since condition~\eqref{Eqn:momentcondition} implies
\begin{equation}\label{eq:magic_formula}
\ker d\mu_z=(\lie{k}\acts z)^{\perp_{\omega_Z}}=
\{\xi_Z(z)\mid \xi\in\lie{k}\}^{\perp_{\omega_Z}},
\end{equation}
cf.~\cite[Section~2.3]{HH2}, and therefore that the rank of $\mu$ at $z$
coincides with $\dim K\acts z$, we conclude that $K_z$ is finite. Conversely, if
$K_z$ is finite, then $\mu$ is a submersion at $z$, hence the image of $\mu$ has
interior points in $\lie{k}^*$. We summarize this discussion in the following
\begin{lem}\label{Lem:IntPoints}
Suppose that $(Z,\omega_Z)$ is a $G$-connected\footnote{We say that $Z$ is
$G$-connected if it cannot be written as the disjoint union of two non-empty $G$-stable
closed subsets.} Hamiltonian $G$-manifold. Then $\mu(Z)$ has non-empty interior
in $\lie{k}^*$ if and only if $K$ acts with generically finite isotropy on $Z$.
\end{lem}
\subsection{The set of semistable points}
The set of semistable points is defined by
\begin{equation*}
Z^{ss}_G(\mu):=\{z\in Z\mid \ol{G\acts z}\cap\mu^{-1}(0)\not=
\emptyset\}.
\end{equation*}
The $G$-invariant set $Z^{ss}_G(\mu)$ is open and can be characterized in the
following way. For $z\in Z^{ss}_G(\mu)$ consider the inclusion $\iota\colon
\ol{G\acts z}\cap Z^{ss}_G(\mu) \hookrightarrow Z$. Then
$\iota^*\omega_Z=i\partial\ol{\partial}\rho$ for some strictly plurisubharmonic
exhaustion function $\rho$ and $\iota^*\mu=\mu_\rho$ where
$\mu_\rho\colon\ol{G\acts z}\cap Z^{ss}_G(\mu)\to\lie{k}^*$ is given by
\begin{equation}\label{Eqn:murho}
\mu_\rho(z)(\xi):=d\rho_z(J\xi_Z(z)),
\end{equation}
where $J$ denotes the complex structure of $Z$, see~\cite[Section~3]{HH1}.
If $Z$ is compact and if $G$ is semisimple, then $Z^{ss}_G(\mu)$ depends only on
the K\"ahler class $[\omega_Z]$, see~\cite[p.~71]{HH1}. Since we generalize
this result later on for Hamiltonian actions of unipotent groups, we repeat its
proof here for the readers' convenience.
\begin{prop}\label{Prop:Semistable}
Let $(Z,\omega_Z)$ be a compact Hamiltonian $G$-manifold with moment map $\mu$
and let $\omega_Z'$ be another $K$-invariant K\"ahler form on $Z$ such that
$[\omega_Z]=[\omega_Z']\in H^2(Z,\,\mbb{R})$. Then there exists a
moment map $\mu'$ for the $K$-action on $(Z,\omega_Z')$ such that
$Z^{ss}_G(\mu) =Z^{ss}_G(\mu')$.
\end{prop}
\begin{proof}
Since $Z$ is compact K\"ahler, the $\partial\ol{\partial}$-lemma implies
$\omega_Z'=\omega_Z+i\partial\ol{\partial}\varphi$ for some
$\varphi\in\mathcal{C}^\infty(Z)$. Defining $\mu_\varphi\colon Z\to\lie{k}^*$ in
the same way as in Equation~\eqref{Eqn:murho} one checks directly that the map
$\mu':=\mu+\mu_\varphi$ is a moment map for the $K$-action on $(Z,\omega_Z')$.
Now suppose that $z\in Z^{ss}_G(\mu)$. As noted above, we have
$\iota^*\omega_Z=i\partial\ol{\partial}\rho$ and $\iota^*\mu=\mu_\rho$ for some
strictly plurisubharmonic function $\rho$ on $\ol{G\acts z}\cap Z^{ss}_G(\mu)$
where $\iota\colon \ol{G\acts z}\cap Z^{ss}_G(\mu)\hookrightarrow Z$ is the
inclusion. Since $Z$ is assumed to be compact, the function $\varphi$ is
bounded, hence $\rho+\iota^*\varphi$ is still an exhaustion function on
$\ol{G\acts z}\cap Z^{ss}_G(\mu)$. Consequently, $\iota^*\mu'=
\mu_{\rho+\iota^*\varphi}$ has a zero on $\ol{G\acts z}\cap Z^{ss}_G(\mu)$,
which implies $z\in Z^{ss}_G(\mu')$. The converse inclusion follows by symmetry.
\qed
\end{proof}
\begin{rem}{\rm
If $G$ is semisimple and if $(Z,\omega_Z)$ is a compact Hamiltonian
$G$-manifold, the moment map for the $K$-action on $Z$ is unique, see
Remark~\ref{rem:semisimple_always_Hamiltonian} above. Due to
Proposition~\ref{Prop:Semistable}, the set of semistable points therefore
depends only on $[\omega_Z]\in H^2(Z,\,\mbb{R})$. In this case we thus
write $Z^{ss}_G[\omega_Z]$ instead of $Z^{ss}_G(\mu)$.}
\end{rem}
\subsection{Analytic Hilbert quotients}\label{subsect:aHq_properties}
The importance of semistability stems from the fact that the set of semistable
points admits the analogue of a good quotient in the analytic category:
Let $G$ be a complex reductive Lie group and $Z$ a complex space endowed with a
holomorphic $G$-action. A complex space $Y$ together with a $G$-invariant
surjective holomorphic map $\pi\colon Z \to Y$ is called an \emph{analytic
Hilbert quotient}\footnote{In some places in the literature, the terminology
\emph{semistable quotient} is used for the same concept.} of $Z$ by the action
of $G$ if
\begin{enumerate}[\rm (1)]
\item $\pi$ is a locally Stein map, and
\item $(\pi_*\mathscr{O}_Z)^G = \mathscr{O}_Y$ holds.
\end{enumerate}
Here, \emph{locally Stein} means that there exists an open covering of $Y$ by
open Stein subspaces $U_\alpha$ such that $\pi^{-1}(U_\alpha)$ is a Stein
subspace of $Z$ for all $\alpha$; by $(\pi_*\mathscr{O}_Z)^G$ we denote the
sheaf $U \mapsto \mathscr{O}_Z(\pi^{-1}(U))^G = \{f \in
\mathscr{O}_Z(\pi^{-1}(U)) \mid f \;\; G\text{-invariant}\}$, $U$ open in
$Y$.
An analytic Hilbert quotient of a holomorphic $G$-space $Z$ is unique up to
biholomorphism whenever it exists, and we will denote it by $Z\hq G$. The following
properties follow from the corresponding ones in the Stein case, where analytic
Hilbert quotients always exist, see \cite{HeinznerGIT}: Two points $x,x' \in Z$
have the same image in $Z\hq G$ if and only if $\overline{G\acts x} \cap
\overline{G\acts x'} \neq \emptyset$. For each $q \in Z\hq G$, the fibre
$\pi^{-1}(q)$ contains a unique closed $G$-orbit $G\acts x$. The stabiliser
$G_x$ of $x$ in $G$ is a complex reductive Lie group, see \cite{Mat}. If $A
\subset Z$ is a $G$-invariant analytic subset, then $\pi(A) \subset Z\hq G$ is
analytic, and $\pi|_A\colon A \to \pi(A)$ is an analytic Hilbert quotient.
The main results in the quotient theory for complex reductive group actions on
K\"ahler spaces are summarised in the following theorem.
\begin{thm}[\cite{ReductionOfHamiltonianSpaces},
\cite{Extensionofsymplectic}, \cite{HH2}, \cite{Sj2}]\label{propertiesmomentumquotients}
Let $Z$ be a Hamiltonian $G$-space with K\"ahler form $\omega_Z$ and moment map
$\mu\colon Z \to \mathfrak{k}^*$. Then,
\begin{enumerate}[\rm (1)]
\item $Z^{ss}_G(\mu)$ is open and $G$-invariant, and the analytic Hilbert
quotient $\pi\colon Z^{ss}_G(\mu) \to Z^{ss}_G(\mu)\hq G$ exists,
\item the inclusion $\mu^{-1}(0) \hookrightarrow Z^{ss}_G(\mu)$ induces a
homeomorphism $\mu^{-1}(0)/K \simeq Z^{ss}_G(\mu)\hq G$,
\item the complex space $Z^{ss}_G(\mu)\hq G$ carries a K\"ahler structure that
is induced by symplectic reduction from $\omega_Z$ and that is smooth along a
natural stratification of $Z^{ss}_G(\mu)\hq G$.
\end{enumerate}
\end{thm}
\subsection{Moment maps associated with representations and their
images}\label{sect:moment_maps_for_reps}
Let $G$ be a connected semisimple complex Lie group acting linearly on a
finite-dimensional complex vector space $V$. If we equip $V$ with the $K$-invariant
flat K\"ahler metric $\omega_V$ given by a $K$-invariant hermitian inner
product $\langle \cdot,\cdot\rangle$, then a moment map $\mu_V\colon
V\to\lie{k}^*$ for the $K$-action on $V$ is given by $\mu_V^\xi(v)=
-\frac{i}{2}\langle \xi.v,v\rangle$ for every $\xi\in\lie{k}$. Note that
$\omega_V=i\partial\ol{\partial}\rho$ where $\rho(v)=\norm{v}^2$ for all
$v\in V$ and that $\mu_V=\mu_\rho$ in this case.
For any $v\in V$ consider the affine $G$-variety $\ol{G\acts v}$. The
restriction of $\mu_V$ to $\ol{G\acts v}$ yields the moment map for the
$K$-action on $\ol{G\acts v}$ associated with the strictly plurisubharmonic
exhaustion function $\rho|_{\ol{G\acts v}}$. By abuse of notation we will
denote the restricted K\"ahler form and moment map again by $\omega_V$ and
$\mu_V$, respectively. For later use we record the following result of Sjamaar,
see~\cite[Theorem~4.9, Lemma~4.10]{Sj}. For its statement we have to introduce a
maximal torus $T$ of $K$ with Lie algebra $\lie{t}$, the choice of a positive
Weyl chamber $\lie{t}^*_+$, and the corresponding set $\Lambda^+$ of dominant
weights. For $\lambda\in\Lambda^+$ let $V_\lambda$ denote the irreducible
$G$-representation with highest weight $\lambda$.
\begin{thm}\label{Thm:MomentImage}
The moment map $\mu_V\colon\ol{G\acts v}\to\lie{k}^*$ is proper and verifies
\begin{equation*}
\mu_V(\ol{G\acts v})\cap \lie{t}^*_+=\cone\{\lambda\in\Lambda^+\mid
V_\lambda\text{ occurs in $\mbb{C}[\ol{G\acts v}]$}\}.
\end{equation*}
\end{thm}
In general it may be rather difficult to decide for which dominant weight
$\lambda$ the irreducible representation $V_\lambda$ occurs in
$\mbb{C}[G\acts v]$. Note that the inclusion $G\acts v\hookrightarrow
\ol{G\acts v}$ gives an injective homomorphism of algebras $\mbb{C}[\ol{G\acts
v}]\to \mbb{C}[G\acts v]\cong\mbb{C}[G]^{G_v}$. Therefore the situation is
slightly easier for affine completions of quasi-affine homogeneous spaces $G/G_v$
for which the map $\mbb{C}[\ol{G\acts v}]\to\mbb{C}[G]^{G_v}$ is an
isomorphism.
To follow this train of thought, recall that an algebraic subgroup $H$ of $G$
is called \emph{Grosshans} if $G/H$ is quasi-affine and if the algebra
$\mbb{C}[G]^H\cong \mbb{C}[G/H]$ is finitely generated. This is equivalent to
the existence of a finite-dimensional $G$-representation space $W$ containing
$G/H$ as an orbit $G\acts w$ such that the codimension of $\ol{G\acts
w}\setminus G\acts w$ in $\ol{G\acts w}$ is at least $2$,
see~\cite[Theorem~4.3]{Gr2}. Recall that an algebraic subgroup of $G$ is called
\emph{unipotent} if it consists entirely of unipotent elements. If $N$ is a
unipotent subgroup of $G$, then $G/N$ is always quasi-affine, see
\cite[Corollary 1.5]{Gr2}, but not affine, see \cite{Mat}; however, not every
such $N$ is Grosshans. If $N$ is the unipotent radical of a parabolic subgroup
of $G$, then $N$ is Grosshans, see~\cite[Theorem~2.2]{Gr1}.
Suppose that the unipotent subgroup $N$ of $G$ is Grosshans. Then we have a
canonical affine completion $\ol{G/N}^\text{a}=\Spec\mbb{C}[G]^N$. Since $N$ is
contained in a maximal unipotent subgroup of $G$, we can deduce
from~\cite[Example~4.19]{Sj} that
\begin{equation*}
\{\lambda\in\Lambda^+\mid V_\lambda\text{ occurs in $\mbb{C}[G]^N$}\}
=\Lambda^+.
\end{equation*}
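This equality may also be deduced directly, up to conventions regarding left
and right actions: choose a maximal unipotent subgroup $U_0$ of $G$ containing
$N$, so that $\mbb{C}[G]^{U_0}\subset\mbb{C}[G]^{N}$. The algebraic
Peter--Weyl decomposition
\begin{equation*}
\mbb{C}[G]\cong\bigoplus_{\lambda\in\Lambda^+}V_\lambda\otimes V_\lambda^*
\end{equation*}
shows that $(V_\lambda^*)^{U_0}$ is one-dimensional, spanned by a highest
weight vector of $V_\lambda^*$, hence every $V_\lambda$ already occurs in
$\mbb{C}[G]^{U_0}$.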
Consequently, by embedding $\ol{G/N}^\text{a}$ into any $G$-representation, we
can find a K\"ahler form inducing a proper, surjective moment map for the
$K$-action on $\ol{G/N}^\text{a}$. Combining this observation with an
application of Lemma~\ref{Lem:IntPoints} to the free $K$-action on $G/N\subset
\ol{G/N}^\text{a}$ we obtain the following result.
\begin{lem}\label{Lem:GrosshansMomentImage}
Suppose that $N$ is a unipotent Grosshans subgroup of $G$. Then there exists a
K\"ahler form $\omega_V$ on $G/N$ such that the image of the corresponding
moment map $\mu_V\colon G/N\to\lie{k}^*$ is a $K$-invariant dense open subset
of $\lie{k}^*_{\reg}$. Moreover, $\omega_V$ and $\mu_V$ extend to the canonical
affine completion $\ol{G/N}^{\text{a}}$.
\end{lem}
As the following example shows, this result depends on the choice of the
$G$-repre\-sen\-ta\-tion into which we embed $G/N$.
\begin{ex}\label{Ex:nonsurjectivemomentmap}{\rm
Let $G$ be a simply-connected semisimple complex Lie group. For any choice of
$\lambda_1,\dotsc,\lambda_k\in\Lambda^+$ consider $v:=v_1+\dotsb+v_k \in
V^*_{\lambda_1}\oplus\dotsb \oplus V^*_{\lambda_k}$ where $v_j\in
V^*_{\lambda_j}$ is a highest weight vector. Then we have $\mbb{C}[\ol{G\acts
v}] =\bigoplus_{\lambda\in M}V_\lambda$ where $M$ is the submonoid of
$\Lambda^+$ generated by $\lambda_1,\dotsc,\lambda_k$, see~\cite[Theorem~6]{PV}.
Consequently, $\mu_V(\ol{G\acts v})\cap \lie{t}^*_+$ is the cone generated
by $\lambda_1,\dotsc,\lambda_k$.
To construct an example where the image of $\ol{G\acts v}$ under the moment map
is not the whole of $\mathfrak{k}^*$, let $G={\rm{SL}}(3,\mbb{C})$ and choose
$\lambda_1=\varpi_1+\varpi_2$ and $\lambda_2=2\varpi_1+\varpi_2$ where
$\varpi_1,\varpi_2$ are the fundamental weights of $G$. Note that
$V_{\lambda_1}$ and hence also $V_{\lambda_1}^*$ are isomorphic to the adjoint
representation of $G={\rm{SL}}(3,\mbb{C})$. It follows that $G_{v_1}$ is the
connected subgroup of $G$ having as Lie algebra the semi-direct sum of the
kernel of $\lambda_1$ in the chosen Cartan sub-algebra of $\lie{g}$ and the
positive maximal unipotent sub-algebra $\lie{n}$ of $\lie{g}$. Since the Lie
algebra of $G_{v_2}$ for $v_2\in V_{\lambda_2}$ contains $\lie{n}$ and since the
kernels of $\lambda_1$ and $\lambda_2$ intersect only trivially, the Lie algebra
of the stabiliser of $v=v_1+v_2$ must coincide with $\lie{n}$. In summary, we
see that in the chosen setup $G_v$ is the unipotent radical of a Borel
subgroup of $G$, hence Grosshans, and that $\mu_V\colon\ol{G\acts
v}\to\lie{k}^*$ is not surjective.}
\end{ex}
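In the example above, the non-surjectivity can also be read off directly from
the weights: every element of the cone generated by $\lambda_1$ and $\lambda_2$
has the form
\begin{equation*}
a(\varpi_1+\varpi_2)+b(2\varpi_1+\varpi_2)=(a+2b)\varpi_1+(a+b)\varpi_2
\qquad\text{with } a,b\geq0,
\end{equation*}
so its $\varpi_1$-coefficient is at least its $\varpi_2$-coefficient. Hence
$\varpi_2$ does not lie in this cone, which is therefore a proper subcone of
$\lie{t}^*_+$, and $\mu_V$ cannot be surjective.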
\begin{rem}\label{rem:forms_on_unipotent_radicals}{\rm
If $N$ is the unipotent radical of a parabolic subgroup of $G$, a
$G$-representation space $E$ containing $\ol{G/N}^\text{a}$ and a certain
$K$-invariant hermitian inner product on $E$ are described in great detail
in~\cite{Kir}, extending~\cite{GJS} which dealt with unipotent radicals of Borel
subgroups. In this situation it is natural to equip $\ol{G/N}^\text{a}$ with the
restriction of the associated flat K\"ahler form $\omega_E$ as above, since the
associated symplectic structure coincides with the one obtained via symplectic
implosion from the cotangent bundle $T^*K$.}
\end{rem}
\section{Hamiltonian $G$-extensions}\label{Section:Extensions}
Let $G=K^\mbb{C}$ be a complex reductive group with maximal compact subgroup
$K$. Recall that a unipotent subgroup of $G$ is by definition an algebraic
subgroup of $G$ consisting entirely of unipotent elements. Such groups are
automatically nilpotent and connected, see~\cite[Chapter~3.2.2]{OV}, hence
simply-connected. Let $N$ be such a unipotent subgroup of $G$. Since our focus
lies on actions of $N$, \emph{we suppose from now on that $G$ is connected and
semisimple.} Due to the simply-connectedness of $N$, by lifting to the
universal cover if necessary, we may and will often assume that $G$ is
simply-connected as well.
\subsection{Meromorphic actions and Hamiltonian extensions}
We will explore the relation between meromorphic $N$-actions and Hamiltonian
$G$-actions.
\begin{defn}{\rm
Let $(X, \omega_X)$ be a connected compact K\"ahler manifold endowed with a
holomorphic $N$-action. A \emph{Hamiltonian $G$-extension of (the $N$-action on)
$(X,\omega_X)$} consists of a connected compact Hamiltonian $G$-manifold
$(Z,\omega_Z)$ and an $N$-equivariant embedding $\iota\colon X\hookrightarrow Z$
such that for the de Rham cohomology classes associated with the K\"ahler forms
we have
\begin{equation}\label{eq:pullback}\iota^*[\omega_Z]=[\omega_X].\end{equation}}
\end{defn}
\begin{rem}{\rm
As $G=K^\mathbb{C}$, and hence $K$, is assumed to be semisimple, it follows
from the fact that integration over $K$ does not change the cohomology class of
a given K\"ahler form and from Remark~\ref{rem:semisimple_always_Hamiltonian}
that any $N$-equivariant embedding of $(X,\omega_X)$ into a compact K\"ahler
$G$-manifold $(Z,\omega_Z)$ satisfying Equation~\eqref{eq:pullback} is
automatically a Hamiltonian $G$-extension.}
\end{rem}
The definition is motivated by the following example and the role it plays in
the Geometric Invariant Theory of unipotent group actions on projective
varieties, cf.~\cite[Section~5]{DorKir}.
\begin{ex}{\rm
Let $N$ act effectively on a smooth projective variety $X$. Any
$N$-equivariant embedding $X \hookrightarrow \mathbb{P}(W)$, where $W$ is an
$N$-representation space on which $N$ acts via an embedding $N \hookrightarrow
{\rm{SL}}(W)$, is a Hamiltonian ${\rm{SL}}(W)$-extension. }
\end{ex}
\begin{ex}{\rm
Suppose that $G$ acts on $(X,\omega_X)$, extending the $N$-action. Since
integration over $K$ does not change the cohomology class of $\omega_X$, we
may, and will, suppose that $\omega_X$ is $K$-invariant. Therefore we can simply
take $Z=X$ with $\iota=\id_X$ as Hamiltonian $G$-extension of the $N$-action on
$X$.
On the other hand, the twisted product\footnote{The twisted product
$G\times_NX$ is by definition the quotient of $G\times X$ by the proper
holomorphic $N$-action given by $n\acts(g,x)=(gn^{-1},n\acts x)$. The $N$-orbit
of $(g,x)$ will be denoted by $[g,x]\in G\times_NX$.} $G\times_NX$ is
$G$-equivariantly isomorphic to $G/N\times X$ via the map
$[g,x]\mapsto(gN,g\acts x)$ and hence embeds into $Z:=\ol{G/N}\times X$ where
$\ol{G/N}$ is a smooth projective $G$-equivariant compactification of the
quasi-affine homogeneous space $G/N$. Endowing $Z$ with a direct product
K\"ahler metric $\omega_0\oplus\omega_X$ and considering $\iota\colon
X\hookrightarrow Z$, $\iota(x)=(eN,x)$, we obtain another Hamiltonian
$G$-extension of the $N$-action on $(X,\omega_X)$.}
\end{ex}
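For completeness, let us verify that the map $[g,x]\mapsto(gN,g\acts x)$ used
in the example above is well defined: representing the same point of
$G\times_NX$ as $[gn^{-1},n\acts x]$ for $n\in N$, we obtain
\begin{equation*}
[gn^{-1},n\acts x]\longmapsto\bigl(gn^{-1}N,\,gn^{-1}n\acts x\bigr)
=(gN,\,g\acts x).
\end{equation*}
Its inverse is given by $(gN,y)\mapsto[g,g^{-1}\acts y]$, and both maps are
$G$-equivariant for the diagonal $G$-action on $G/N\times X$.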
\begin{rem}{\rm
Although the automorphism group of a compact K\"ahler manifold has a natural
structure of a meromorphic group acting meromorphically on $X$,
see~\cite[Theorem~5.5]{Fuj}, this cannot be used in order to find a natural
embedding of a unipotent algebraic group $N$ acting holomorphically on $X$ into
a complex reductive group $G$ sitting inside $\Aut(X)$ as the following example
shows.}
\end{rem}
\begin{ex}{\rm
Consider the connected algebraic group
\begin{equation*}
G=\left\{
\begin{pmatrix}
(ad-bc)^{-1} & z & w\\
0 & a & b\\
0 & c & d\\
\end{pmatrix};\
ad-bc\not=0\right\}\cong {\rm{GL}}(2,\mbb{C})\ltimes \mbb{C}^2.
\end{equation*}
According to~\cite[Theorem~1]{Br} there exists a $12$-dimensional
projective
complex manifold $X$ having $\Aut^0(X)$ isomorphic to $G$.
The group
\begin{equation*}
N=\left\{
\begin{pmatrix}
1&0&z\\
0&1&w\\
0&0&1\\
\end{pmatrix};\ z,w\in\mbb{C}\right\}\cong\mbb{C}^2
\end{equation*}
is a unipotent subgroup of $G$ which is conjugate neither to a subgroup of the
unipotent radical $R_u(G)$ nor to a subgroup of a Levi subgroup of $G$.
The group
\begin{equation*}
N'=\left\{
\begin{pmatrix}
e^t&te^t&0\\
0&e^t&0\\
0&0&e^{-2t}\\
\end{pmatrix};\ t\in\mbb{C}\right\}\cong\mbb{C}
\end{equation*}
is a non-algebraic subgroup of $G$. Consequently, $N'$ does not act
meromorphically on $X$. Its Zariski closure is the group
\begin{equation*}
\ol{N'}=\left\{
\begin{pmatrix}
t&s&0\\
0&t&0\\
0&0&t^{-2}\\
\end{pmatrix};\ t\in\mbb{C}^*, s\in\mbb{C}\right\}\cong\mbb{C}^*\times
\mbb{C}.
\end{equation*}
Note that $\ol{N'}$, and hence $N'$, is conjugate neither to a subgroup of a
Levi subgroup nor to a subgroup of the radical of $G$.}
\end{ex}
Let us state the main result of this section.
\begin{thm}\label{Thm:ExistenceExtensions}
Let $N$ be a unipotent subgroup of the simply-connected complex semisimple Lie
group $G$. For a connected compact K\"ahler manifold $(X, \omega_X)$ endowed with a
holomorphic $N$-action the following statements are equivalent.
\begin{enumerate}[\rm (1)]
\item There exists a Hamiltonian $G$-extension $(Z,\omega_Z)$ of the $N$-action
on $X$.
\item The $N$-action on $X$ is meromorphic.
\item The twisted product $G\times_NX$ is K\"ahler.
\end{enumerate}
Moreover, given a meromorphic $N$-action on $X$ we can always find a
Hamiltonian $G$-extension that is a $G$-equivariant compactification of
$G\times_NX$.
\end{thm}
\begin{rem}{\rm
Since the centre of $G$ is finite and since $N$ does not contain finite
subgroups, it follows that the $G$-action on $G\times_NX$ is effective whenever
$N$ acts effectively on $X$.}
\end{rem}
Motivated by the above and by the equivalent conditions listed in
Remark~\ref{rem:reductive_equivalence}, we make the following definition that is
central to our discussion.
\begin{defn}{\rm
Let $N$ be a unipotent subgroup of the simply-connected complex semisimple Lie
group $G$, and let $X$ be a connected compact K\"ahler manifold endowed with a
holomorphic $N$-action. We say that $(X,\omega_X)$ is a
\emph{Hamiltonian $N$-manifold} if there exists a Hamiltonian $G$-extension
$(Z,\omega_Z)$ of the $N$-action on $X$. }
\end{defn}
The rest of this section is devoted to the proof of
Theorem~\ref{Thm:ExistenceExtensions}.
\subsection{Necessary conditions for the existence of a $G$-extension}
In this section, we will show the implications ``(1) $\Rightarrow$ (3)
$\Rightarrow$ (2)'' of Theorem~\ref{Thm:ExistenceExtensions}. Let $(X,\omega_X)$
be a connected compact K\"ahler manifold on which $N$ acts holomorphically, and
let $\alpha\colon X \to \Alb(X)$ be the natural map from $X$ to its Albanese
torus.
\begin{lem}[\protect{``(1) $\Rightarrow$ (3)''}]\label{Lem:necessarycond1}
If the $N$-action on $X$ admits a Hamiltonian $G$-extension, then the twisted
product $G\times_NX$ is K\"ahler.
\end{lem}
\begin{proof}
Let us consider the induced proper embedding $Y=G\times_NX\hookrightarrow
G\times_NZ\cong G/N \times Z$. The claim follows since $G/N$ is
quasi-affine, hence K\"ahler, so that the product $G/N\times Z$ and therefore
its closed complex submanifold $Y$ are K\"ahler as well.
\qed
\end{proof}
\begin{rem}
{\rm In fact, if $N$ is a connected nilpotent closed complex subgroup of $G$, then,
assuming that $G\times_N X$ is K\"ahler, one can show that $N$ is
algebraic: applying \cite[Proposition II.1]{Bl} to the fibre bundle $G \times_N
X \to G/N$, we see that $G/N$ is K\"ahler; an application of
\cite[Corollary~4.12]{GMO} then yields the claim. This justifies our a priori
algebraicity assumption on $N < G$.
}
\end{rem}
Next, we embark on proving the implication ``(3) $\Rightarrow$ (2)''. As a
first step, we prove the following
\begin{lem}\label{Lem:necessarycond2}
If $G\times_N X$ is K\"ahler, then the action of $N$ on the Albanese torus
$\Alb(X)$ is trivial.
\end{lem}
\begin{proof}
Suppose that the action of $N$ on $\Alb(X)$ is non-trivial. Then, as $N$ acts
on $\Alb(X)$ by translations, we can find a closed one-parameter subgroup
$\mbb{C}\hookrightarrow N$ such that $\dim\mbb{C}\acts[0]=1$ where $[0]$ denotes
the base point of $\Alb(X)$. The topological closure
$T:=\ol{\mbb{C}\acts[0]}\subset \alpha(X) \subset \Alb(X)$ is a connected
compact subgroup of $\Alb(X)$, hence a subtorus. Its pre-image $\alpha^{-1}(T)$
is a compact $\mbb{C}$-invariant subvariety of $X$. According to the
Jacobson-Morozov Theorem, see~\cite[Theorem~III.17]{Jac}, there is a closed
subgroup $S$ of $G$ that is locally isomorphic to ${\rm{SL}}(2,\mbb{C})$ and
contains the one-parameter subgroup $\mbb{C} \hookrightarrow N$ under
discussion. Since $G\times_NX$ is K\"ahler and contains
$Y':=S\times_\mbb{C}\alpha^{-1}(T)$ as a closed $S$-invariant analytic subset,
every $S$-orbit in $Y'$ and therefore every $\mbb{C}$-orbit in $\alpha^{-1}(T)$
is Zariski-open in its closure by \cite[Corollary~3.9]{GMO}.
There exists a $1$-dimensional orbit $\mbb{C}\acts x$ in $X$ such that
$\alpha(x) = [0] \in \Alb(X)$. Consequently, going through the possible
stabiliser subgroups, we see that there are two possibilities: either the orbit
$\mbb{C}\acts x$ is compact, i.e., it is a $1$-dimensional complex torus, or
the normalisation of $\ol{\mbb{C}\acts x}$ is biholomorphic to $\mbb{P}_1$.
Both cases lead to contradictions. Indeed, in the first case, the isotropy of
$\mbb{C}$ and hence the isotropy in $S$ would be infinite discrete, which
contradicts \cite[Proposition 4.4]{GMO}, while in the second case the fact that
$\alpha|_{\ol{\mbb{C}\acts x}}$ is non-constant by construction would produce a
non-zero holomorphic $1$-form on $\mbb{P}_1$.
\qed
\end{proof}
We remark that the fact that $N$ acts trivially on $\Alb(X)$ alone does not
imply the existence of a Hamiltonian $G$-extension of the $N$-action on $X$ as
the following example shows.
\begin{ex}{\rm
Let $N=\left(\begin{smallmatrix}1&\mbb{C}\\0&1\end{smallmatrix}\right) \subset
G={\rm{SL}}(2,\mbb{C})$ and consider its action on $X=\mbb{P}_2$ given by
\begin{equation*}
t\acts[x_0:x_1:x_2]:=[e^tx_0:e^{it}x_1:x_2].
\end{equation*}
The $N$-orbits in $X$ are not locally Zariski closed, and there are elements
having isotropy isomorphic to $\mbb{Z}$. According to~\cite[Theorem~3.6]{GMO}
$G\times_NX$ cannot be K\"ahler. Hence, there does not exist a Hamiltonian
$G$-extension of the $N$-action on $X$ although $N$ acts trivially on $\Alb(X)
= \{\mathrm{pt}\}$.}
\end{ex}
Our next goal is to show that, if there exists a Hamiltonian $G$-extension of
the $N$-action on $X$, then $N$ acts meromorphically on $X$. Due to
Lemma~\ref{Lem:necessarycond2} the claim is equivalent to the fact that the map
$N\to \Aut_{\rm{aff}}(X)$ induced by the action has analytically Zariski-closed
image, where $\Aut_{\rm{aff}}(X)$ denotes the kernel of the Jacobi homomorphism
$\alpha_*\colon \Aut^0(X)\to\Aut^0(\Alb(X))$, cf.~\cite[\S2]{Fuj}.
Since by \cite[Lemma 4.6 and Theorem 5.5]{Fuj} or \cite[Proposition 2.1]{Lie}
the analytic Zariski-topology on the meromorphic subgroup $\Aut_{\rm{aff}}(X) <
\Aut^0(X)$ is obtained from the complex structure on the cycle space
$\mathscr{C}_{\dim X}(X\times X)$, as a first step we prove a technical lemma
about induced actions on cycle spaces.
\begin{lem}\label{Lem:twistedBarletspace}
Let $N$ be a unipotent subgroup of $G$, let $M$ be a K\"ahler manifold endowed
with a holomorphic $N$-action, and let $\mathscr{C}_k(M)$ be the Barlet space
of compact $k$-cycles. Then, there exists a natural $G$-equivariant isomorphism
$G\times_N\mathscr{C}_k(M) \cong \mathscr{C}_k(G\times_NM)$.
\end{lem}
\begin{proof}
For every irreducible compact analytic subset $A$ of $M$ and each $g\in N$, the
image $g(A)$ is again an irreducible compact analytic subset of $M$. Hence, we
obtain a holomorphic $N$-action on $\mathscr{C}_k(M)$ by the obvious extension of this
action to $k$-cycles. Moreover, the inclusion $\iota\colon M\to G\times_NM$,
$x\mapsto[e,x]$ induces a proper holomorphic embedding of $\mathscr{C}_k(M)$
into $\mathscr{C}_k(G\times_NM)$ by sending $A \in \mathscr{C}_k(M)$ to
$\iota(A)$. From this we obtain a well-defined injective and immersive
holomorphic map
\begin{equation*}
\Phi_k\colon G\times_N\mathscr{C}_k(M)\hookrightarrow \mathscr{C}_k(G\times_NM),\quad
\Phi_k([g,A]):=g\acts\iota(A).
\end{equation*}
Suppose for a moment that $A\subset G\times_NM$ is an irreducible compact
analytic subset. Since $G/N$ is quasi-affine, and hence in particular
holomorphically separable, the bundle projection $\pi\colon G\times_NX\to G/N$
maps $A$ to a point $gN \in G/N$. Consequently, $\mathscr{C}_k(G\times_NM)$
coincides with the space $\mathscr{C}_k(G\times_NM)_\pi$ of $\pi$-relative
cycles. Moreover, the natural $G$-action on $\mathscr{C}_k(G\times_NM)$ makes
the resulting projection $\mathscr{C}_k(G\times_NM) \to G/N$ equivariant.
We hence conclude from the fact that $\Phi_k$ induces an isomorphism of the
fibres over $eN$ that the image of $\Phi_k$ is all of
$\mathscr{C}_k(G\times_NM)_\pi = \mathscr{C}_k(G\times_NM)$. The claim follows.
\qed
\end{proof}
\begin{prop}[``(3) $\Rightarrow$ (2)'']\label{prop:necessarycond3}
If $G\times_NX$ is K\"ahler, then $N$ acts meromorphically on $X$.
\end{prop}
\begin{proof}
Set $n := \dim X$. Composed with the natural embedding $\Aut_{\rm{aff}}(X)
\hookrightarrow \mathscr{C}_n(X\times X)$, the action map $N \to
\Aut_{\rm{aff}}(X)$ yields the following holomorphic map:
\begin{align*}
\iota\colon N&\to \mathscr{C}_n(X\times X)\\
g &\mapsto \Gamma_{x\mapsto g\acts x}:=\{(x,g\acts x)\mid x\in X\}.
\end{align*}
Consider the $N$-action on the product $X\times X$
given by $g\acts(x,y)=(x,g\acts y)$. We have $G\times_N(X\times X)\cong
(G\times_NX)\times X$, which shows that $G\times_N(X\times X)$ is K\"ahler.
Hence, the cycle space $\mathscr{C}_n(G\times_N(X\times X))$ is
K\"ahler by~\cite[Th\'eor\`eme~2]{BV}. Applying
Lemma~\ref{Lem:twistedBarletspace} to $M=X\times X$ we infer that
$G\times_N\mathscr{C}_n(X\times X)$ is likewise K\"ahler. It therefore follows
from~\cite[Theorem~3.6]{GMO} that all $N$-orbits in $\mathscr{C}_n(X\times X)$
are locally closed in the analytic Zariski-topology.
Now, we notice that $\iota(N)$ coincides with the $N$-orbit of $\Delta_X \in
\mathscr{C}_n(X\times X)$, where $\Delta_X$ denotes the diagonal in
$X\times X$. This implies that $\iota(N)$ is a locally Zariski-closed, hence
Zariski-closed subgroup of $\Aut_{\rm{aff}}(X) < \Aut^0(X) \subset
\mathscr{C}_n(X\times X)$, cf.~\cite[Section 7.4]{Humphreys}, as was to be
shown.
\qed
\end{proof}
For later usage, we note the following, related observation.
\begin{lem}\label{lem:meromorphic_implies_Albanese_trivial}
An algebraic subgroup $H$ of $G$ which acts meromorphically on $X$ acts
automatically trivially on $\Alb(X)$.
\end{lem}
\begin{proof}
This follows from the fact that every meromorphic homomorphism from an affine
linear group to a compact complex torus is constant,
see~\cite[Lemma~3.8]{Fuj}.
\qed
\end{proof}
\subsection{Existence of a Hamiltonian $G$-extension}
In this section, we will show the crucial implication ``(2) $\Rightarrow$ (1)''
of Theorem~\ref{Thm:ExistenceExtensions}, i.e., the fact that meromorphic
actions of unipotent algebraic subgroups of semisimple groups $G$ always admit
Hamiltonian $G$-extensions, thereby completing the proof.
Let us recall the setup: Let $N$ be a unipotent subgroup of the simply-connected
complex semisimple Lie group $G$, and let $X$ be a connected compact K\"ahler
manifold endowed with a meromorphic $N$-action.
\subsubsection{The case of unipotent radicals of Borel
subgroups}\label{subsubsect:unipotent_radicals}
Let $B$ be a Borel subgroup of $G$ having Levi decomposition $B=TU = UT$. In a
first step, we make the additional assumption that $N$ coincides with the
unipotent radical $U$ of $B$; i.e., $N=U$.
Let us consider the twisted product $M:=B\times_UX$ and the $U$-equivariant
inclusion $\iota_X\colon X \hookrightarrow M$ as the fibre over $eU$. Since the
principal bundle $B\to B/U\cong T$ is holomorphically trivial, the same holds
for the associated fibre bundle; i.e., we have $M\cong T\times X$. Explicitly,
an isomorphism $B\times_UX\to T\times X$ is given by $[tu,x]\mapsto(t,u\acts
x)$. A direct calculation shows that the induced $B$-action on $T\times X$ is
given by the formula
\begin{equation}\label{Eqn:B-action}
(tu)\acts(s,x)=(ts,(s^{-1}us)\acts x).
\end{equation}
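To verify Formula~\eqref{Eqn:B-action}, note that under the isomorphism
$[tu,x]\mapsto(t,u\acts x)$ the point $(s,x)$ corresponds to $[s,x]$, and that
$T$ normalises $U$, so that
\begin{equation*}
(tu)\acts[s,x]=[tus,x]=[ts(s^{-1}us),x]\longmapsto
\bigl(ts,(s^{-1}us)\acts x\bigr).
\end{equation*}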
Let $\ol{G}$ be a projective meromorphic structure on $G$. Since the subgroups
$T$, $U$ and $B$ are algebraic in $G$, their topological closures in $\ol{G}$
are analytic.
\begin{lem}\label{Lem:B-Compactification}
If $U$ acts meromorphically on $X$, then the $B$-action on $T\times X$ defined
in Equation~\eqref{Eqn:B-action} extends to a meromorphic $B$-action on
$\ol{T}\times X$. In particular, there exists a $B$-equivariant K\"ahler
compactification $(\ol{M},\omega_{\ol{M}})$ of $M \supset \iota_X(X)$ such that
$[\iota_X^*(\omega_{\ol{M}})] = [\omega_X]$.
\end{lem}
\begin{proof}
By the definition of a meromorphic structure the map $c\colon T\times U\to U$,
$(s,u)\mapsto s^{-1}us$, extends to a meromorphic map $\ol{c}\colon
\ol{T}\times\ol{U}\to\ol{U}$ which is holomorphic on $(\ol{T}\times U)
\cup(T\times\ol{U})$. Analogously, the multiplication map
$m\colon T \times T \to T$ extends meromorphically to $\ol{T} \times \ol{T} \to
\ol{T}$, making $\ol{T}$ a $T$-bi-equivariant compactification of $T$. Denote the
extended map by $\ol{m}$. With this notation, $B$ acts on $\ol{T}\times X$ by
the formula
\begin{equation*}
(tu)\acts(s,x):= (\ol{m}(t,s), \ol{\alpha}(\ol{c}(s,u),x)),
\end{equation*}
where $\ol{\alpha}\colon\ol{U}\times X\to X$ is the meromorphic extension of the
$U$-action on $X$. It is hence clear that $B$ acts meromorphically on $\ol{T}\times
X$.
\qed
\end{proof}
In the next step we consider the holomorphic fibre bundle $Z:=G\times_B\ol{M}
\overset{\pi}{\longrightarrow} G/B$. The natural inclusion $\ol{M}
\hookrightarrow G\times_B \ol{M}$ as the fibre over $eB$ is denoted by
$\iota_{\ol{M}}$. Both the typical fibre $\ol{M}$ and the base $G/B$ of $Z$ are
K\"ahler. Using Blanchard's theorem~\cite[Th\'eor\`eme principal~II]{Bl}, we
will construct a K\"ahler form on $Z$ such that $Z$ is a Hamiltonian
$G$-extension of the $U$-action on $X$.
Since $G/B$ is simply-connected, in order to be able to apply Blanchard's
result and hence show that $G\times_B\ol{M}$ is K\"ahler, we only have to check
that the \emph{transgression map} from $H^1(\ol{M},\,\mbb{R})$ to
$H^2(G/B,\,\mbb{R})$ is identically zero. Recall that this map is
defined on the set of transgressive elements in
$H^1(\ol{M},\,\mbb{R})$ in the following way. A class $[\alpha]\in
H^1(\ol{M},\,\mbb{R})$ is called \emph{transgressive} if there exists a
$1$-form $\beta$ on $Z$ such that $[\iota_{\ol{M}}^*\beta] =[\alpha]$ and such
that there exists a $2$-form $\tau$ on $G/B$ such that $d\beta=\pi^*\tau$. Since
$\pi^*$ is injective, we have $d\tau=0$, and the transgression map then
associates $[\tau]\in H^2(G/B,\,\mbb{R})$ to $[\alpha]\in
H^1(\ol{M},\,\mbb{R})$, see~\cite[Proposition 18.13]{BoTu}. As a
side-remark we note that this transgression map is zero if and only if
$b_1(Z)=b_1(G/B)+b_1(\ol{M})$ where $b_1$ denotes the first Betti number,
see~\cite[p.~192]{Bl}.
\begin{lem}\label{lem:meromorphic_implies_transgressive}
Suppose that $B$ acts trivially on $\Alb(\ol{M})$. Then, every
$[\alpha]\in H^1(\ol{M},\,\mbb{R})$ is transgressive, and the
transgression map is identically zero.
\end{lem}
\begin{proof}
As $\ol{M}$ is K\"ahler by Lemma~\ref{Lem:B-Compactification}, we may use the
Hodge decomposition to write
\begin{equation*}
[\alpha]\in H^1(\ol{M},\,\mbb{R})\subset
H^1(\ol{M},\,\mbb{C})=H^{1,0}(\ol{M})\oplus
\ol{H^{1,0}(\ol{M})} = H^0(\Omega^1_{\ol{M}}) \oplus
\ol{H^0(\Omega^1_{\ol{M}})}
\end{equation*}
as $[\alpha]=\eta+\ol{\eta}$ where $\eta$ is a holomorphic $1$-form on $\ol{M}$.
Since by hypothesis and Lemma~\ref{lem:meromorphic_implies_Albanese_trivial} the
algebraic subgroup $B < G$ acts trivially on $\Alb(\ol{M})$, the form $\eta$ is
$B$-invariant, hence extends to a $G$-invariant holomorphic $1$-form
$\wh{\eta}\in H^0(\Omega^1_Z)^G$. Since $d\wh{\eta}$ is likewise
$G$-invariant, it is uniquely determined by its restriction
$\iota_{\ol{M}}^*(d\wh{\eta})=d(\iota_{\ol{M}}^*\wh{\eta})=d\eta=0$.
Consequently, $d\wh{\eta}=0=\pi^*\mathbf{0}$. We conclude that $[\alpha]$ is
transgressive and is mapped to $\mathbf{0} + \mathbf{0} = \mathbf{0} \in
H^2(G/B,\, \mathbb{R} )$ by the transgression map.
\qed
\end{proof}
\begin{rem}{\rm
The proof works more generally for a \emph{parabolic} subgroup $P<G$ acting on a
compact K\"ahler manifold such that the induced action on the Albanese is
trivial. To be more precise, we see that for every such $P$-manifold $X$ the
twisted product $G\times_PX$ is K\"ahler. As the $G$-action on this manifold is
Hamiltonian, by applying~\cite[Theorem~3.6]{GMO} or \cite[Remark after Lemma
2.1]{Fuj2} together with \cite[Proposition 6.10]{Fuj} we conclude that every
$P$-orbit in $X$ is Zariski-open in its closure and that the $P$-action on $X$
is meromorphic. This generalises and gives a new proof for a result of Sommese,
cf.~\cite[\S3]{Som}. For criteria guaranteeing triviality of induced actions on
Albanese tori see \cite[\S6c)]{Fuj}.}
\end{rem}
Combining Lemmata~\ref{Lem:B-Compactification},
\ref{lem:meromorphic_implies_Albanese_trivial}, and
\ref{lem:meromorphic_implies_transgressive} with Blanchard's theorem we conclude
that the twisted product $Z=G\times_B\ol{M}$ is K\"ahler. Moreover, the first
step of Blanchard's proof~\cite[p.~187]{Bl} shows that the cohomology class of
the constructed K\"ahler form on $Z=G\times_B\ol{M}$ pulls back under
$\iota_{\ol{M}}$ to $[\omega_{\ol{M}}]$ on $\ol{M}$. Embedding $X$ into $\ol{M}$
by $\iota_{X}$ and further into $Z$ by $\iota_{\ol{M}}$ we obtain a Hamiltonian
$G$-extension of the $U$-action on $X$. Notice that $Z$ contains
$G\times_UX\cong G\times_B(B\times_UX)$ as a Zariski open subset. In fact, $Z =
G \times_B \ol{M} \cong G \times_B( \ol{T} \times X ) \to G \times_B \ol{T}$ is
an extension of the $X$-fibre bundle $G\times_U X \to G/N$ to the projective
completion $G \times_B \ol{T}$ of $G/N$.
\subsubsection{The general case}
In the second and final step we show that the case of an arbitrary unipotent
subgroup $N$ of $G$ can be reduced to the unipotent radical $U$ of a Borel
subgroup $B\subset G$.
\begin{lem}\label{Lem:U-Compactification}
Let $N$ be a unipotent subgroup of $G$. Then there exists a Borel subgroup
$B=TU$ of $G$ such that $N\subset U$. In addition, if $N$ acts effectively and
meromorphically on $X$, then $U\times_NX$ admits a $U$-equivariant K\"ahler
compactification on which $U$ acts effectively and meromorphically and whose
K\"ahler class extends the given K\"ahler class $[\omega_X]$ on $X$.
\end{lem}
\begin{proof}
The first statement is \cite[30.4, Theorem]{Humphreys}. Moreover, from the
first paragraph in the proof of \cite[Chapter 5, Theorem 4]{Akhiezer} we
conclude that the $N$-principal bundle $U \to U/N$ admits an algebraic section,
whose image we call $S$. As a consequence, we obtain an $N$-equivariant
isomorphism $N \times S\to U$ with $S \cong U/N$. Let $\ol{S}$ be a
$U$-equivariant smooth projective compactification of $S$. Following the
argument presented at the beginning of
Section~\ref{subsubsect:unipotent_radicals} we conclude as before that
$\ol{S}\times X$ is a compact K\"ahler manifold endowed with a meromorphic
$U$-action, containing $U \times_NX$ as a Zariski-open set. Again, we can choose
the K\"ahler form on $\ol{S}\times X$ such that its class extends the given
K\"ahler class $[\omega_X]$.
\qed
\end{proof}
Finally, by applying the discussion of
Section~\ref{subsubsect:unipotent_radicals} to the compact K\"ahler manifold $X'
= \ol{S} \times X$ with meromorphic $U$-action obtained in
Lemma~\ref{Lem:U-Compactification} and noticing that
\begin{equation}\label{eq:twisttwist}
Y=G\times_NX\cong G\times_B(B\times_U(U\times_NX))\cong
G\times_B( T\times (U/N) \times X)
\end{equation}
we arrive at
\begin{prop}[``(2) $\Rightarrow$ (1)'']\label{Prop:construction}
Let $N$ be a unipotent subgroup of $G$ acting effectively and meromorphically
on the compact K\"ahler manifold $X$. Then, there exists a Hamiltonian
$G$-extension $Z$ of the $N$-action on $X$, which can be chosen such that it
contains $Y:=G\times_NX$ as a Zariski open subset. More precisely, we might
choose
\begin{equation*}
Z = \ol{Y}= G\times_B( \ol{T}\times \ol{U/N} \times X),
\end{equation*}
where $B=TU$ is a Borel subgroup of $G$ with $N\subset U$ and where
$\ol{T}\times \ol{U/N}$ is the $B$-equivariant compactification of
$T\times(U/N)$ constructed in Lemmata~\ref{Lem:B-Compactification}
and~\ref{Lem:U-Compactification}.
\end{prop}
\begin{rem}[Non-uniqueness]{\rm
There are many possible ways to choose the compactifications
$\ol{T}$ and
$\ol{S}$; in the first case, we have all smooth projective toric varieties of
dimension $\dim T$ to choose from.}
\end{rem}
\subsubsection{Additional observations}
We note a minimality property of the above construction that is crucial for
subsequent arguments on the independence of the set of semistable points from
the chosen $G$-extensions, see Section~\ref{subsect:semistability_independent}
below.
\begin{prop}\label{Prop:Minimality}
Let $X$ be a compact Hamiltonian $N$-manifold and let $\ol{Y}$ be the
$G$-equivariant compactification of $Y:=G\times_NX$ whose existence is
guaranteed by Proposition~\ref{Prop:construction}. Then, for any Hamiltonian
$G$-extension $Z$ of the $N$-action on $X$, there exists a $G$-equivariant
embedding $\ol{Y}\hookrightarrow\ol{G/N}\times Z$ where $\ol{G/N}$ is a certain
$G$-equivariant compactification of $G/N$.
\end{prop}
\begin{proof}
Let $Z$ be any $G$-extension of the $N$-action on $X$. The $N$-equivariant
embedding $X\hookrightarrow Z$ induces a $G$-equivariant embedding
$Y=G\times_NX\hookrightarrow G\times_NZ\cong G/N \times Z$. The identification
$G/N\cong G\times_B (T\times U/N )$, cf.~Equation~\eqref{eq:twisttwist},
suggests choosing the compactified fibre bundle
\begin{equation}\label{eq:compactify_GmodN}
\ol{G/N}:=G\times_B(\ol{T}\times\ol{U/N})
\end{equation}
as a well-adapted $G$-equivariant compactification of $G/N$. With this
definition we obtain the desired embedding $\ol{Y}=G\times_B( \ol{T}\times
\ol{U/N} \times X)\hookrightarrow G\times_B( \ol{T}\times \ol{U/N}
\times Z)\cong \ol{G/N}\times Z$.
\qed
\end{proof}
\begin{rem}\label{Rmk:simplyconnected}{\rm
Since $N$ is a connected subgroup of the simply-connected group $G$, the
quasi-affine variety $G/N$ is simply-connected. Consequently, every smooth
compactification of $G/N$ is likewise simply-connected, see~\cite[0.7(B)]{FL};
in particular, this observation applies to the compactification constructed in
Proposition~\ref{Prop:Minimality}.}
\end{rem}
\begin{ex}{\rm
Suppose that $N=\left(\begin{smallmatrix}1&\mbb{C}\\0&1\end{smallmatrix}
\right)\subset G={\rm{SL}}(2,\mbb{C})$. Choosing $\ol{T}=\mbb{P}_1$, we see that
$\ol{G/N}=G\times_B\ol{T}$ is the blow-up of $\mbb{P}_2$ at the origin, i.e.,
the first Hirzebruch surface.}
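To see this identification concretely (a routine verification): the fibration
$G/N\to G/B\cong\mbb{P}_1$ is the map $\mbb{C}^2\setminus\{0\}\to\mbb{P}_1$,
$z\mapsto[z]$, whose fibres are the punctured lines
$\ell\setminus\{0\}\cong\mbb{C}^*$, so that
\begin{equation*}
G/N \;\cong\; \mathscr{O}_{\mbb{P}_1}(-1)\setminus(\text{zero section}).
\end{equation*}
Compactifying each $T$-fibre $\mbb{C}^*$ to $\ol{T}=\mbb{P}_1$ glues in the zero
section and the section at infinity, yielding
$\mbb{P}\bigl(\mathscr{O}_{\mbb{P}_1}(-1)\oplus\mathscr{O}_{\mbb{P}_1}\bigr)$,
which is the blow-up of $\mbb{P}_2$ at the origin with the zero section as
exceptional curve.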
\end{ex}
\section{The set of semistable points with respect to unipotent
groups}\label{Section:Semistable}
Let $G=K^\mbb{C}$ be a simply-connected semisimple complex Lie
group with maximal compact subgroup $K$, let $N$ be a unipotent subgroup of $G$,
and let $(X,\omega_X)$ be a compact Hamiltonian $N$-manifold. In this section we
explain how one can use the concept of Hamiltonian $G$-extensions in order to
define the set of semistable points $X^{ss}_N[\omega_X]$ for the $N$-action on
$X$.
\subsection{Defining the set of semistable points}
We will slowly approach the goal of defining the correct notion of
semistability, see Definition~\ref{defn:semistable} below.
\subsubsection{The naive approach}\label{subsubsect:naive_approach}
Since $(X,\omega_X)$ is a compact Hamiltonian $N$-manifold, we can find a
Hamiltonian $G$-extension $(Z,\omega_Z)$ of the $N$-action on $X$ with
$N$-equivariant embedding $\iota\colon X\hookrightarrow Z$. It is tempting to
define the set of $N$-semistable points in $X$ as the $N$-invariant open subset
$\iota^{-1}(Z^{ss}_G[\omega_Z])$. As the following example shows, this set
heavily depends on the choice of the $G$-extension $(Z,\omega_Z)$.
\begin{ex}{\rm
Let $X=\mbb{P}_1$ with the Fubini-Study metric $\omega_{FS}$ and let $N=\mbb{C}$
act on $\mbb{P}_1$ by $t\acts[x_0:x_1]=[x_0+tx_1:x_1]$. Since the $N$-action
extends to an action of $G={\rm{SL}}(2,\mbb{C})$, we have the following three
Hamiltonian $G$-extensions.
\begin{enumerate}[\rm (1)]
\item $Z_1=X$ where $\iota_1=\id_X$,
\item $Z_2=X\times X$ endowed with $\omega_{Z_2}=\frac{1}{2}(\omega_{FS}\oplus
\omega_{FS})$ and the diagonal $G$-action, where the embedding is given by
$\iota_2(x)=(x,x)$, and
\item $Z_3=Z_2$ with $\omega_{Z_3}=2\omega_{Z_2}$ and with the diagonal
$G$-action where $\iota_3(x)=([1:0],x)$.
\end{enumerate}
It is not hard to show that $\iota_1^{-1}(Z_{1,G}^{ss})=\iota_2^{-1}
(Z_{2,G}^{ss})=\emptyset$ while $\iota_3^{-1}(Z_{3,G}^{ss})=
\mbb{P}_1\setminus \{[1:0]\}$.}
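The following sketch indicates how to verify these identities. The Fubini-Study
moment map for the ${\rm{SU}}(2)$-action on $\mbb{P}_1$,
\begin{equation*}
\mu_{FS}([x_0:x_1])=\frac{1}{\abs{x_0}^2+\abs{x_1}^2}
\begin{pmatrix}
\frac{\abs{x_0}^2-\abs{x_1}^2}{2}&\ol{x_0}x_1\\
x_0\ol{x_1}&-\frac{\abs{x_0}^2-\abs{x_1}^2}{2}
\end{pmatrix},
\end{equation*}
takes values in a sphere of positive radius, so $\mu_{FS}^{-1}(0)=\emptyset$ and
hence $Z_{1,G}^{ss}=\emptyset$. For $Z_2$, the image $\iota_2(X)$ is the
diagonal, a closed $G$-orbit on which the moment map
$\frac{1}{2}(\mu_{FS}+\mu_{FS})$ never vanishes. For $Z_3$, the zero fibre of
$\mu_{FS}\oplus\mu_{FS}$ is the anti-diagonal of antipodal pairs, and
$\iota_3(x)=([1:0],x)$ lies in the open $G$-orbit of pairs of distinct points,
which contains the anti-diagonal, if and only if $x\neq[1:0]$.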
\end{ex}
\subsubsection{The case of linear actions on projective
manifolds}\label{subsubsect:algebraic_setup}
Let us recall the approach taken by Doran and Kirwan in \cite{DorKir}. There,
the authors study the situation where $X$ is a subvariety of $\mbb{P}(W)$ for
some finite-dimensional $G$-module $W$ and they consider the embedding
$\wh{\iota}\colon G\times_NX\hookrightarrow
G\times_N\mbb{P}(W)\cong(G/N)\times\mbb{P}(W)$ that is induced by the embedding
$\iota\colon X\hookrightarrow\mbb{P}(W)$. Then, they equip $G\times_NX$ with the
$G$-linearised ample line bundle
$L:=\wh{\iota}^*(\mathrm{pr}_1^*(\mathscr{O}_{G/N})\otimes
\mathrm{pr}_2^*(\mathscr{O}_{\mbb{P}(W)}(1)) )$. In this way, Doran and
Kirwan can consider the set of Mumford-semistable points
$(G\times_NX)^{ss}_G(L)$ in $G\times_NX$ and they proceed by defining the set of
Mumford-semistable points in $X$ as $X^{ss}_N(L):=\iota^{-1}\bigl((G\times_NX)^{ss}_G(L)\bigr)$,
see~\cite[Definition~5.1.5]{DorKir}. As shown in~\cite[Propositions~5.1.8
and~5.1.9]{DorKir}, this set $X^{ss}_N$ does not depend on the choice of the
group $G$ and can be intrinsically defined knowing only the $N$-action on $X$.
\subsubsection{The definition of semistability}\label{subsubsectio:def_of_ss}
The algebraic approach suggests that in our situation, we should choose a
Hamiltonian $G$-extension $(Z,\omega_Z)$ of the $N$-action on $X$ and then
consider the analogous embedding $Y:=G\times_NX\hookrightarrow G\times_NZ\cong
G/N \times Z$, such that the Hamiltonian $G$-extension $Z$ plays the role of the
projectivised $G$-module $\mbb{P}(W)$.
To implement this idea, first note that the $N$-equivariant embedding
$\iota\colon X\hookrightarrow Z$ induces the $G$-equivariant proper embedding
\begin{equation}\label{eq:untwisting_the_action}
\wh{\iota}\colon Y=G\times_NX\hookrightarrow G\times_NZ\cong G/N\times Z
\end{equation}
given by $\wh{\iota}([g,x])=(gN,g\acts\iota(x))$. Secondly, recalling
that $G/N$ is quasi-affine, choose a $G$-representation space $V$ containing
$G/N$ as an orbit as well as a $K$-invariant hermitian inner product on $V$,
which induces $K$-invariant K\"ahler forms $\omega_V$ on $G/N$ and
\begin{equation}\label{eq:definition_of_omega_1}
\wh{\omega}_Z:= \omega_V\oplus
\omega_Z
\end{equation}
on $G/N \times Z$. Then, we take
\begin{equation}\label{eq:definition_of_omega_2}
\omega_Y:=\wh{\iota}^*\wh{\omega}_{Z}
\end{equation}
as K\"ahler form on $Y$. The spaces considered so far fit into the following
commutative diagram
\begin{equation*}
\xymatrix{
X\ar@{^(->}[r]^{\iota}\ar[d] & Z\ar[d] \\
Y=G\times_NX\ar@{^(->}[r]^{\wh{\iota}} & (G/N)\times Z,}
\end{equation*}
where the vertical arrows correspond to the maps given by $x\mapsto [e,x]$ and
$z\mapsto(eN,z)$, respectively. It follows that $[\omega_Y]$ extends $[\omega_X]$
to $Y$. Recall that, as $K$ is semisimple and $\omega_Y$ is $K$-invariant,
there exists a unique moment map $\mu_Y\colon Y \to \mathfrak{k}^*$ for the
$K$-action on $Y$, see Remark~\ref{rem:semisimple_always_Hamiltonian}; in other
words, $Y$ is a (non-compact) Hamiltonian $G$-manifold. We will use $\mu_Y$ to
define a notion of semistability for the $N$-action:
\begin{defn}\label{defn:semistable}{\rm
Let $(X,\omega_X)$ be a Hamiltonian $N$-manifold. We define the \emph{set of
$N$-semistable points (with respect to $[\omega_X]$)} as
\[\boxed{X^{ss}_N[\omega_X]:=X\cap Y^{ss}_G[\omega_Y]} \;,
\]
where the K\"ahler form $\omega_Y$ on $Y=G\times_NX$ is given by Equations
\eqref{eq:definition_of_omega_1} and \eqref{eq:definition_of_omega_2} above.
Analogously, the \emph{set of $N$-stable points} in $X$ is defined as
$X^s_N[\omega_X]:=X\cap Y^s_G[\omega_Y]$, where we set
$Y^s_G[\omega_Y]:=G\acts\mu_Y^{-1}(0)$.\footnote{With this definition, a semistable point $x\in X$ is $N$-stable if and only if
$N\acts x$ is closed in $X^{ss}_N[\omega_X]$. In view of Lemma~\ref{Lem:AllOrbitsClosed} and Theorem~\ref{thm:existence_of_geometric_quotient} below this is
the correct notion in our situation. The reader should however be aware that
there are several notions of (proper) stability in use in the literature.
}}
\end{defn}
Note that $X^{ss}_N[\omega_X]$ a priori depends on the choice of $G$,
$\omega_V$, and $Z$ although we do not convey this information in our notation.
We will discuss the choices regarding $G$ and the metric on $G/N$ in
Section~\ref{subsect:discussion_of_choice} below. As we will see, the definition
is actually independent of the choice of a Hamiltonian $G$-extension, once $G$
and the metric on $G/N$ are fixed; see the subsequent section.
\begin{ex}[A non-projective compact K\"ahler manifold with meromorphic
$\mathbb{C}$-action]{\rm
Let $S$ be a non-projective $K3$-surface with $\mathrm{Pic}(S) \neq \{e\}$,
and let $L \to S$ be a non-trivial holomorphic line bundle on $S$ with zero
section $Z_L \subset L$. Let $P:= L \setminus Z_L \to S$ be the associated
$\mathbb{C}^*$-principal bundle, and consider the non-trivial
$\mathbb{P}^1$-bundle \[M:= P \times_{\mathbb{C}^*} \mathbb{P}^1
\longrightarrow P/\mathbb{C}^* = S.\] As the Albanese of $\mathbb{P}^1$ is
trivial and since $S$ is a simply-connected compact K\"ahler manifold, the compact
complex manifold $M$ is K\"ahler by Blanchard's theorem, cf.~the discussion in
Section~\ref{subsubsect:unipotent_radicals}. On the other hand, as the
non-projective surface $S$ embeds into $M$ as the zero section, $M$ is likewise
non-projective. Since the corresponding vector field has zeroes, the effective
$\mathbb{C}^*$-action from the left is trivial on the Albanese of $M$, see
\cite[Prop.~6.8]{Fuj} or \cite[Proposition 3.14]{Lie}. Trivially extend the
action of $\mathbb{C}^*$ on $M$ to an action of the Borel subgroup
$\mathbb{C}^* < B$ of lower-triangular matrices in $\mathrm{SL}(2,\mathbb{C}) =:
G$, and define \[X := G \times_B M.\] Since the $B$-action on the Albanese of
$M$ is trivial and since $\mathbb{P}^1$ is simply-connected, it again follows
from Blanchard's theorem that the compact complex manifold $X$ is K\"ahler, see
also Lemma~\ref{lem:meromorphic_implies_transgressive}. Moreover, the
$G$-action on $X$ induces an action of the unipotent algebraic subgroup $N <
G$ of strictly upper-triangular elements of $\mathrm{SL}(2, \mathbb{C})$. Set
$K:= {\rm{SU}}(2)$. Then, any $K$-invariant K\"ahler form $\omega_X$ on $X$ produces a
moment map $\mu_X\colon X \to \mathfrak{k}^*$ for the $K$-action; i.e., $X$ is
its own Hamiltonian $G$-extension, so that the $N$-action on $X$ is meromorphic.
Now, apply the construction described at the beginning of the present section
to $X$. As $G> N$ already acts on $X$, we can choose $Z=X$ as Hamiltonian
$G$-extension. Let $\omega_V$ be the essentially unique flat $K$-invariant
K\"ahler form on $G/N \cong \mathbb{C}^2 \setminus\{0\} \subset V=\mathbb{C}^2$,
and hence consider the K\"ahler form $\omega_Y = \omega_V \oplus \omega_X$ on $Y
:= G \times_N X \cong G/N \times X$, together with the resulting moment map
$\mu_Y = \mu_{G/N} + \mu_X\colon Y \to \mathfrak{k}^*$. We claim that
$X^{ss}_N[\omega_X] \neq \emptyset$, which will conclude our construction. In
order to establish the claim, we first note that $\mu_X(X) \neq \{0\}$,
as otherwise Equation \eqref{eq:magic_formula} would imply that
the $K$-action on $X$ is trivial. Let $\beta_0 \in \mu_X(X) \setminus \{0\}$
and choose $x_0 \in \mu_X^{-1}(\beta_0)$. We have $\mu_{G/N}(G/N) =
\mathfrak{k}^* \setminus \{0\}$, cf.~Section~\ref{sect:moment_maps_for_reps},
and hence there exists $g_0 \in G$ such that $\mu_{G/N}(g_0N) = -\beta_0$. By
construction, we then have $\mu_Y(g_0\acts(eN, g_0^{-1}\acts x_0))
=\mu_Y(g_0N, x_0) = \mu_{G/N}(g_0N) + \mu_X(x_0) = -\beta_0 + \beta_0 = 0$,
and hence $(eN, g_0^{-1}\acts x_0) \in
Y^{ss}_G[\omega_Y] \cap (\{eN\} \times X) = X^{ss}_N[\omega_X]$, which is
therefore non-empty, as claimed.}
\end{ex}
\subsection{Semistability does not depend on the
$G$-extension}\label{subsect:semistability_independent}
In contrast to the naive definition of semistability discussed in
Section~\ref{subsubsect:naive_approach} above, it turns out that the set of
$G$-semistable points in $Y$ with respect to $\omega_Y$ as defined in
Definition~\ref{defn:semistable} does not depend on the choice of the
Hamiltonian $G$-extension
$(Z,\omega_Z)$.
\begin{thm}\label{Thm:Independence}
Let $N$ be a unipotent subgroup of the simply-connected
semisimple complex Lie group
$G$, acting meromorphically on the compact K\"ahler manifold $(X,\omega_X)$.
Let $(Z_j,\omega_j)$, $j=1,2$, be two Hamiltonian $G$-extensions of the
$N$-action on $X$. Choose a $G$-equivariant algebraic embedding $G/N
\hookrightarrow V$ into a $G$-representation space $V$, and a $K$-invariant
Hermitian inner product on $V$, inducing K\"ahler forms $\omega_V$ on $G/N$,
$\wh{\omega}_Z$ on $G/N \times Z$, and $\omega_{Y,1}$, $\omega_{Y,2}$ on $Y$, as
described in Section~\ref{subsubsectio:def_of_ss}. Then, we have
$Y^{ss}_G[\omega_{Y,1}]=Y^{ss}_G[\omega_{Y,2}]$.
\end{thm}
\begin{proof}
We will prove that on the smooth K\"ahler compactification $\ol{Y}$ of $Y$
constructed in Proposition~\ref{Prop:construction} there exists a $(1,1)$-form
$\alpha\in\mathcal{A}^{1,1}(\ol{Y})$ satisfying the following two properties:
\begin{enumerate}
\item[(a)] $\alpha|_Y = \omega_{Y,2}-\omega_{Y,1}$, and
\item[(b)] $[\alpha]=0\in H^2(\ol{Y},\,\mbb{R})$.
\end{enumerate}
It then follows from the $\partial\ol{\partial}$-lemma on the compact K\"ahler
manifold $\ol{Y}$ that there is a smooth function $\varphi \in
\mathcal{C}^\infty(\ol{Y})$ such that $\alpha=i\partial\ol{\partial}\varphi$.
Restricting everything to $Y$, we obtain a \emph{bounded} smooth function
$\varphi$ on $Y$ such that
$\omega_{Y,2}=\omega_{Y,1}+i\partial\ol{\partial}\varphi$. In this situation, we
can repeat the proof of Proposition~\ref{Prop:Semistable} to deduce
$Y^{ss}_G[\omega_{Y,1}]=Y^{ss}_G[\omega_{Y,2}]$. Hence, in order to complete the
argument, we must show existence of $\ol{Y}$ and $\alpha$ with the above
properties.
To do so, we note first that $(Z,\omega_Z):=(Z_1\times Z_2, \frac{1}{2}(
\omega_1\oplus\omega_2))$ is another Hamiltonian $G$-extension of the
$N$-action on $X$; here, $G$ acts diagonally on $Z_1\times Z_2$ and
$\iota\colon X\hookrightarrow Z$ is given by the direct product
$\iota(x)=(\iota_1(x),\iota_2(x))$ of the two inclusions
$\iota_j\colon X \hookrightarrow Z_j$, $j=1,2$. Our situation can be summarised
by the following diagram:
\[\xymatrix{
& & G/N \times Z_1 \ar[r] & Z_1 \\
Y \ar[rr]^{\wh{\iota}} \ar[rru]^{\wh{\iota}_1} \ar[rrd]^{\wh{\iota}_2} &
& G/N \times Z \ar[r]^<<<<{\mathrm{pr}_Z} \ar[u]_{q_1} \ar[d]_{q_2}& Z
\ar[u] \ar[d] \\
& & G/N \times Z_2 \ar[r] & Z_2.
}
\]
We denote the K\"ahler form $\omega_V \oplus \omega_j$ on $G/N \times Z_j$ by
$\wh{\omega}_j$, see Equation~\eqref{eq:definition_of_omega_1}, and note that by
assumption the same form $\omega_V$ appears in both formulas. Then, it follows
from the general construction and from the diagram above that
\begin{equation}\label{eq:compute_difference}\omega_{Y,2} - \omega_{Y,1} =
\wh{\iota}^*(q_2^*(\wh{\omega}_2) - q_1^*(\wh{\omega}_1)) =
\wh{\iota}^* (\mathrm{pr}_Z^*(\omega_2 - \omega_1) ).
\end{equation}
Let $\ol{G/N}$ be as defined in Equation~\eqref{eq:compactify_GmodN} and let us
denote the natural projection $\ol{G/N} \times Z \to Z$ by
$\ol{\mathrm{pr}}_Z$. As $Z$ is a Hamiltonian $G$-extension, by
Proposition~\ref{Prop:Minimality} there exists a $G$-equivariant holomorphic
embedding $\psi\colon \ol{Y} \hookrightarrow \ol{G/N} \times Z$. It now follows
from Equation~\eqref{eq:compute_difference} that
\begin{equation}\label{eq:def_of_alpha}
\alpha:= \psi^*(\ol{\mathrm{pr}}_Z^*(\omega_2 - \omega_1)) \in
\mathcal{A}^{1,1}(\ol{Y})
\end{equation}
fulfils property (a), as desired.
We still must show that the $K$-invariant $2$-form $\alpha$ is cohomologous to
zero on $\ol{Y}$. For this, we consider the holomorphic fibre bundle $q :=
\mathrm{pr}_{\ol{G/N}} \circ \psi \colon \ol{Y} \hookrightarrow \ol{G/N} \times
Z \to \ol{G/N}$ with typical fibre $X$ and base $\ol{G/N}$. Since $\ol{Y}$ is
K\"ahler, and as by Remark~\ref{Rmk:simplyconnected} the manifold $\ol{G/N}$ is
simply-connected, it follows from \cite[Th\'eor\`eme II.1.1]{Bl} that the
transgression map $H^1(X,\, \mathbb{R}) \to H^2(\ol{G/N},\,
\mathbb{R})$ is zero. Consequently, the Leray spectral sequence for $q$
degenerates at the $E_2$-term by \cite[Th\'eor\`eme II.1.2]{Bl}; see also
\cite[Theorem~4.15 and Remark 4.16]{VoisinII}. Again using simple-connectedness
of $\ol{G/N}$, we conclude that
\[
H^{k}(\ol{G/N},\, \mathbb{R} ) \otimes H^{l}(X,\, \mathbb{R}
) = E_2^{k,l} = E_\infty^{k,l}= \mathrm{Gr}^kH^{k+l}(\ol{Y},\,
\mathbb{R}) \quad\; \text{ and } \quad\; E^{1,1}_\infty = \{0\}.
\]
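In degree two, since $E^{1,1}_\infty=\{0\}$ and
$H^1(\ol{G/N},\,\mathbb{R})=\{0\}$, this yields
\begin{equation*}
H^2(\ol{Y},\,\mathbb{R})\;\cong\; E^{2,0}_\infty\oplus E^{0,2}_\infty \;=\;
H^{2}(\ol{G/N},\,\mathbb{R})\oplus H^{2}(X,\,\mathbb{R}).
\end{equation*}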
Computing the corresponding filtration of $H^2( \ol{Y}, \, \mathbb{R}
)$ and comparing it with the Leray spectral sequence for
$\mathrm{pr}_{\ol{G/N}}$ then leads to the following commutative diagram
\[
\xymatrix{ 0 \ar[r]& H^2(\ol{G/N},\, \mathbb{R}) \ar[r]^{q^*}&
H^2(\ol{Y}, \,\mathbb{R} )\ar[r]^{j_X^*} & H^2(X,\, \mathbb{R}
) \ar[r]& 0\\
0 \ar[r]& H^2(\ol{G/N},\, \mathbb{R}) \ar[u]^=
\ar[r]^<<<<{\mathrm{pr}_{\ol{G/N}}^*}& H^2(\ol{G/N},\, \mathbb{R}) \oplus
H^2(Z,\, \mathbb{R}) \ar[u]^{\psi^*} \ar[r]^<<<<{j_Z^*} &
H^2(Z,\, \mathbb{R} ) \ar[u]^{\iota^*}\ar[r]& 0}
\]
where $j_X\colon X \hookrightarrow \ol{Y}$ is the inclusion as the fibre over
$eN \in G/N$, and similarly for $j_Z$.
Now, Equation~\eqref{eq:def_of_alpha} says that $[\alpha] = \psi^*([0] \oplus
[\omega_2-\omega_1])$. Together with $[j_X^*(\alpha)] = [\iota_2^*(\omega_2)] -
[\iota_1^*(\omega_1)] = [\omega_X]- [\omega_X] = 0$ this implies that $[\alpha]
= 0$, as claimed.
\qed
\end{proof}
\begin{rem}{\rm
Note that in contrast to their difference the forms $\omega_{Y,j}$ themselves
\emph{do not extend} to the compactification $\ol{Y}$.}
\end{rem}
\subsection{Discussion regarding the choice of K\"ahler metric on
$G/N$}\label{subsect:discussion_of_choice}
As this is a subtle issue, let us discuss in some detail the choice of K\"ahler
form on $G/N$ made in Section~\ref{subsubsectio:def_of_ss} and the fact that we
have to fix such a form. We will provide examples showing that the
independence statement of Theorem~\ref{Thm:Independence} is optimal from many
points of view. The problems occurring are closely related to the ones
encountered in the algebraic situation when searching for various kinds of
``reductive envelopes'', cf.~\cite[Sections~5.2 and 5.3]{DorKir}.
\subsubsection{The algebraic situation}
Let us compare with Doran--Kirwan's approach in the algebraic situation, see
Section~\ref{subsubsect:algebraic_setup}: The key point that explains the choice
of the trivial line bundle on $G/N$ and that eventually makes the proof of
\cite[Proposition~5.1.9]{DorKir} on the independence of semistability from the
choice of the embedding into $G$ work is the following. Given any pair $G$ and
$G'$ of reductive groups such that $N\subset G\subset G'$, the line bundles $L$
on $G\times_NX$ and $L'$ on $G'\times_NX$ constructed in~\cite{DorKir} as above
verify $\iota^*L'=L$ where $\iota\colon G\times_NX\hookrightarrow G'\times_NX$
is the embedding induced by the inclusion $G\hookrightarrow G'$. In the analytic
category, such a canonical choice of K\"ahler metric on $G/N$ does not exist,
even among curvature forms in the trivial line bundle. Indeed, every
$K$-invariant K\"ahler metric of the form $\omega=i\partial\ol{\partial}\rho$
with $\rho\in\mathcal{C}^\infty(G/N)^K$ is the curvature form of a $K$-invariant
hermitian metric in the trivial line bundle on $G/N$, cf.~\cite[Chapter V,
(12.6)]{Dem}. Even if we restrict to metrics of this form, the set of semistable
points might change, as the following example shows.
\begin{ex}\label{ex:varying_c}{\rm
Let us consider the algebraic and hence meromorphic action of $\mbb{C}\cong
N\subset G={\rm{SL}}(2,\mbb{C})$ on $X=\mbb{P}_1$, endowed with the Fubini-Study
form $\omega_{FS}$. As Hamiltonian $G$-extension of the $N$-action on $X$ we
take $Z=X$. Let $K={\rm{SU}}(2)$.
According to~\cite[Lemma~7.10]{Dem}, every $K$-invariant K\"ahler form on
$G/N\cong\mbb{C}^2\setminus\{0\}$ is of the form
$i\partial\ol{\partial}\rho(\norm{z})$ where $\rho$ is a smooth
function on $\mbb{R}^{>0}$ such that $\rho\circ\exp$ is strictly increasing
and strictly convex. Let $\varphi\in\mathcal{C}^\infty(\mbb{R}^{>0})$ be the
function defined by $\varphi(t^2)=\rho(t)$. Then the unique moment map for the
action of ${\rm{SU}}(2)$ on $\mbb{C}^2\setminus\{0\}$ is given by
\begin{equation}\label{eq:general_moment_map}
z=(z_1,z_2)\mapsto \varphi'(\norm{z}^2)
\begin{pmatrix}
\frac{\abs{z_1}^2-\abs{z_2}^2}{2}&\ol{z_1}z_2\\
z_1\ol{z_2}&-\frac{\abs{z_1}^2-\abs{z_2}^2}{2}
\end{pmatrix}.
\end{equation}
We now consider the one-parameter family of K\"ahler forms $\omega_{G/N,c}$
given by $\omega_{G/N,c} := i\partial\ol{\partial}\rho_c(\norm{z})$, where
$\rho_c(t)=c\log(1+t^2)$ with $c>0$. Following the construction of
Section~\ref{subsubsectio:def_of_ss}, the induced K\"ahler form on $Y = G/N
\times\mbb{P}_1$ is
$\omega_c=i\partial\ol{\partial}\rho_c(\norm{z})\oplus\omega_{FS}$. Identifying
$\lie{su}(2)^*$ with $i\lie{su}(2)$ via the Killing form, the corresponding
moment map $\mu\colon (G/N)\times\mbb{P}_1\to i\lie{su}(2)$ is given by
\begin{equation*}
\mu(z,[x_0:x_1])=
\frac{c}{1+\norm{z}^2}
\begin{pmatrix}
\frac{\abs{z_1}^2-\abs{z_2}^2}{2}&\ol{z_1}z_2\\
z_1\ol{z_2}&-\frac{\abs{z_1}^2-\abs{z_2}^2}{2}
\end{pmatrix}
+\frac{1}{\abs{x_0}^2+\abs{x_1}^2}
\begin{pmatrix}
\frac{\abs{x_0}^2-\abs{x_1}^2}{2}&\ol{x_0}x_1\\
x_0\ol{x_1}&-\frac{\abs{x_0}^2-\abs{x_1}^2}{2}
\end{pmatrix}.
\end{equation*}
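The prefactor of the first summand follows from a direct computation: with
$\rho_c(t)=c\log(1+t^2)$ one has
\begin{equation*}
\varphi_c(t)=c\log(1+t),\qquad
\varphi_c'(\norm{z}^2)=\frac{c}{1+\norm{z}^2},
\end{equation*}
and $\rho_c\circ\exp$ is strictly increasing and strictly convex, so that
$\omega_{G/N,c}$ is indeed a K\"ahler form.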
A slice for the ${\rm{SU}}(2)$-action on $(G/N)\times\mbb{P}_1$ is given by
$S=\{((z,r),[0:1]) \mid z\in\mbb{C}, r\geq0\}$. The point
$((z,r),[0:1])\in S$ is mapped under $\mu$ to
\begin{equation*}
\frac{c}{1+\abs{z}^2+r^2}
\begin{pmatrix}
\frac{\abs{z}^2-r^2}{2}& r\ol{z}\\
rz & -\frac{\abs{z}^2-r^2}{2}
\end{pmatrix}
+
\begin{pmatrix}
-\frac{1}{2}&0\\
0&\frac{1}{2}
\end{pmatrix}.
\end{equation*}
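Explicitly, evaluating the two summands of $\mu$ at $((z,r),[0:1])$, the
off-diagonal entries vanish only if $rz=0$. For $z=0$ the upper-left entry
equals $-\frac{cr^2}{2(1+r^2)}-\frac{1}{2}<0$, while for $r=0$ it vanishes
precisely when
\begin{equation*}
\frac{c\abs{z}^2}{2(1+\abs{z}^2)}=\frac{1}{2}
\quad\Longleftrightarrow\quad (c-1)\abs{z}^2=1.
\end{equation*}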
Consequently, $\mu^{-1}(0)$ is non-empty if and only if $(c-1)\abs{z}^2=1$ for some
$z\in\mbb{C}^*$, which is the case if and only if $c>1$. In summary, we see
that, depending on $c>0$, the set of semistable points $X^{ss}_N$ can be empty
or not.}
\end{ex}
\subsubsection{Proper moment maps}
Notice that Example~\ref{ex:varying_c} shows that for non-pro\-jec\-tive
$G$-varieties in general the set of GIT-semistable points for the linearisation
of the $G$-action in an ample line bundle $L$ and the set of semistable points
with respect to a moment map $\mu$ computed using the curvature form of a
Hermitian metric in the same line bundle $L$ do not have to coincide. In case
the moment map under discussion is \emph{proper}, the two sets coincide by
\cite[Theorem~2.18]{Sj2}. Hence, in the above example one could look for K\"ahler
forms leading to proper moment maps that would then give a link to the algebraic
theory and establish independence of semistability from the choice of the metric.
Looking at formula \eqref{eq:general_moment_map} one sees that a moment map of
the most general form possible in the given situation is proper on
$\mbb{C}^2\setminus\{0\}$ if and only if
\begin{equation*}
\lim_{t\to0}\varphi'(t)t=\lim_{t\to\infty}\varphi'(t)t=\infty.
\end{equation*}
Since $t\mapsto\rho(e^t)=\varphi(e^{2t})$ is strictly increasing and strictly
convex, we see that $t\mapsto 2\varphi'(e^{2t})e^{2t}$ is strictly increasing.
Hence, $\lim_{t\to0}\varphi'(t)t=\infty$ is impossible. This proves that there
is no proper ${\rm{SU}}(2)$-equivariant moment map on $\mbb{C}^2\setminus\{0\}$.
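In more detail: since $\rho\circ\exp$ is strictly convex, its derivative
$s\mapsto 2\varphi'(e^{2s})e^{2s}$ is increasing, whence
\begin{equation*}
\lim_{t\to0^+}\varphi'(t)\,t
=\frac{1}{2}\lim_{s\to-\infty}(\rho\circ\exp)'(s)
=\frac{1}{2}\inf_{s\in\mbb{R}}\,(\rho\circ\exp)'(s)<\infty.
\end{equation*}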
Using compactness of $\mathbb{P}^1$, one concludes that none of
the moment maps for the corresponding K\"ahler forms on $G/N \times
\mathbb{P}^1$ is proper either, so that proper moment maps simply do not exist
in the situation at hand.
\subsubsection{Metrics arising from embedding into representations}
As we have seen in Section~\ref{sect:moment_maps_for_reps}, a natural choice in
the situation at hand is to consider K\"ahler metrics on $G/N$ that are obtained
by embedding this homogeneous space as an orbit in a $G$-representation space,
and this is also the choice made in the construction presented in
Section~\ref{subsubsectio:def_of_ss}. We will see in
Section~\ref{sect:properties} below that this leads to a number of desirable
properties. However, also a restriction to this class of metrics does not lead
to a common notion of semistability, as the following example shows.
\begin{ex}{\rm
We continue the discussion at the end of Example~\ref{Ex:nonsurjectivemomentmap}
and consider $G= {\rm{SL}}(3,\mbb{C})$ and $N=G_v$ the unipotent radical of a
certain Borel subgroup of $G$. If we equip $G/N$ with the restriction of the
flat K\"ahler metric for which the complement of the image of the moment map
$\mu_V$ has non-empty interior, and if we take $X=X_\alpha$ to be the
$G$-flag manifold corresponding to the coadjoint orbit $\mathrm{Ad}^*(K)\acts
\alpha$ through a point $\alpha \in \mathfrak{k}^*_{\rm reg}$ such that
$-\alpha$ does not lie in the image of $\mu_V$, then the set of semistable
points for the $G$-action on $G\times_NX$ is empty. On the other hand, as $N$ is
a Grosshans subgroup of $G$, there exists a $G$-module $V'$
inducing a moment map $\mu_{V'}$ on $G/N$ whose image is a
$K$-invariant dense open subset of
$\mathfrak{k}^*$, see Lemma~\ref{Lem:GrosshansMomentImage}. Without loss of
generality, we may assume that the point $\alpha \in \mathfrak{k}^*_{\rm reg}$
chosen above fulfils $- \alpha \in \mu_{V'}(G/N)$. Then, for the K\"ahler
metric on $G\times_NX$ induced by the second embedding the set of semistable
points is not empty. }
\end{ex}
\subsubsection{Unipotent radicals of parabolic subgroups}
The next example shows that even when $N$ can be embedded as the unipotent
radical of a parabolic subgroup in two different semisimple groups $G_1$ and
$G_2$, so that, as explained in Remark~\ref{rem:forms_on_unipotent_radicals},
each of the two embeddings yields a very natural choice of K\"ahler form on
$G_j/N$, we cannot expect $X^{ss}_N$ to be independent of the group $G_j$.
\begin{ex} \label{ex:not_independent_of_the_group}{\rm
Let us consider the action of $N=\mbb{C}^2$ on $X=\mbb{P}_2$ given by the
embedding
\begin{equation*}
N\hookrightarrow G_1={\rm{SL}}(3,\mbb{C}) = \mathrm{SU}(3)^\mathbb{C}=
K_1^\mathbb{C},\quad(t,s)\mapsto
\begin{pmatrix}
1&0&t\\0&1&s\\0&0&1
\end{pmatrix}.
\end{equation*}
Taking the obvious Hamiltonian $G_1$-extension $Z_1=X$, we have $G_1\times_NX=
G_1\times_NZ_1=(G_1/N)\times\mbb{P}_2$ with moment map $\mu=\mu_V+
\mu_{\mbb{P}_2}\colon(G_1/N)\times\mbb{P}_2\to\lie{k}_1^* = \mathfrak{su}(3)^*$.
Since $N$ is embedded as the unipotent radical of a parabolic subgroup $P$ of
$G_1$, we may consider the canonical affine completion $\ol{G_1/N}^\text{a}$ and
equip it with the canonical K\"ahler form that is described in the paragraph
before Remark~3.4 in~\cite{Kir},
cf.~Remark~\ref{rem:forms_on_unipotent_radicals}.
The behaviour of the corresponding moment map $\mu_V$ on $\ol{G_1/N}^\text{a}$
is best understood in terms of its description as the universal
$K^{(P)}$-imploded cross-section $(T^*K)^{K,K^{(P)}}_\text{impl}$,
see~\cite[Definition~3.11]{Kir}. According to the discussion following
Remark~3.13 in~\cite{Kir}, the $G_1$-orbits in $\ol{G_1/N}^\text{a}$ correspond
to the strata \[(K_1\times\Ad^*(K_1^{(P)})\acts\sigma)/\negthickspace
\approx_{K_1^{(P)}},\] where $\sigma$ runs through the open faces of
$(\lie{t}_1^*)_+$. In particular, the open orbit $G_1/N$ is associated with the
interior $\inn(\lie{t}_1^*)_+$ of $(\lie{t}_1^*)_+$. The description of the
moment map $\mu_V$ given in~\cite[Theorem~3.12]{Kir} now implies that
$\mu_V(G_1/N)$ is contained in $(\lie{k}_1)^*_{\reg}=\Ad^*(K_1)\acts
\inn(\lie{t}_1^*)_+$. Since $\mu_{\mbb{P}_2}(\mbb{P}_2)$ does not intersect the
interior of $(\lie{t}_1^*)_+$, the zero fibre of $\mu$ is empty, hence
$X^{ss}_N=\emptyset$.
Now, let us consider the second embedding
\begin{equation*}
N\hookrightarrow G_2={\rm{SL}}(2,\mbb{C})\times{\rm{SL}}(2,\mbb{C}) =
(\mathrm{SU}(2) \times \mathrm{SU}(2))^\mathbb{C} \negthinspace=
K_2^\mathbb{C},\quad(t,s)\mapsto \left(
\begin{pmatrix}
1&t\\0&1
\end{pmatrix},
\begin{pmatrix}
1&s\\0&1
\end{pmatrix}\right).
\end{equation*}
Here, $N$ is embedded as the unipotent radical of a Borel subgroup of $G_2$, and
thus in particular again a Grosshans subgroup of $G_2$. As $G_2$-extension of
the $N$-action on $X=\mbb{P}_2$ we choose the embedding
\begin{equation*}
\iota\colon\mbb{P}_2\hookrightarrow\mbb{P}_3,\quad
\iota([z_0:z_1:z_2])= [z_0:z_2:z_1:z_2],
\end{equation*}
which is $N$-equivariant for the $N$-action on $\mbb{P}_3$ given by
\begin{equation*}
N\hookrightarrow G_2\hookrightarrow{\rm{SL}}(4,\mbb{C}), \quad
(t,s)\mapsto
\begin{pmatrix}
1&t&0&0\\0&1&0&0\\0&0&1&s\\0&0&0&1
\end{pmatrix}.
\end{equation*}
A moment map $\mu_{\mbb{P}_3}\colon\mbb{P}_3\to\lie{su}(2)\oplus\lie{su}(2)$
for the $K_2$-action on $\mbb{P}_3$ with respect to the Fubini-Study metric is
given by the explicit formula
$$
\mu_{\mbb{P}_3}([z_0:z_1:z_2:z_3])=
\frac{1}{\abs{z_0}^2+\dotsb+\abs{z_3}^2}
\left[
\begin{pmatrix}
\frac{\abs{z_0}^2-\abs{z_1}^2}{2}&\ol{z_0}z_1\\
z_0\ol{z_1}&-\frac{\abs{z_0}^2-\abs{z_1}^2}{2}
\end{pmatrix}\right.
\oplus\left.
\begin{pmatrix}
\frac{\abs{z_2}^2-\abs{z_3}^2}{2}&\ol{z_2}z_3\\
z_2\ol{z_3}&-\frac{\abs{z_2}^2-\abs{z_3}^2}{2}
\end{pmatrix}\right],
$$
see Example~\ref{ex:varying_c}. In order to determine the semistable locus
$Y^{ss}_{G_2}(\mu_Y)$ in $Y=G_2\times_NX$ we consider the closed embedding
\begin{equation*}
Y=G_2\times_NX\hookrightarrow G_2\times_NZ\cong G_2/N\times\mbb{P}_3
\end{equation*}
and the moment map $\mu_Y=(\mu_V+\mu_{\mbb{P}_3})|_Y$. The canonical affine
closure of \[G_2/N\cong (\mbb{C}^2\setminus\{0\})\times
(\mbb{C}^2\setminus\{0\})\] is $V= \mathbb{C}^2 \oplus \mathbb{C}^2=\mbb{C}^4$,
which we equip with the Hermitian structure
$\frac{1}{2}\langle\cdot,\cdot\rangle_{st}$, where
$\langle\cdot,\cdot\rangle_{st}$ is the standard Hermitian product of
$\mbb{C}^4$. A direct calculation using the formulae given in
Example~\ref{ex:varying_c} and the explicit expression for $\mu_{\mbb{P}_3}$
given above yields \[\mu_Y(eN, [0:0:1])=
\mu_V((1,0),(1,0))+\mu_{\mbb{P}_3}([0:1:0:1])=0.\] Hence, we
have $X^{ss}_N \not=\emptyset$.}
\end{ex}
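The value $\mu_{\mbb{P}_3}([0:1:0:1])$ entering the final computation of the example can be evaluated directly from the displayed formula; the sketch below (the name \texttt{mu\_P3} is ad hoc) confirms that each $2\times 2$ block equals $\mathrm{diag}(-\tfrac14,\tfrac14)$, which by the computation in the example is exactly cancelled by the $\mu_V$ term from Example~\ref{ex:varying_c}:

```python
import numpy as np

def mu_P3(z):
    # Fubini-Study moment map on P_3 for SU(2) x SU(2), as in the displayed
    # formula; returns the two 2 x 2 blocks of the direct sum.
    z0, z1, z2, z3 = z
    n = sum(abs(w) ** 2 for w in z)
    b1 = np.array([[(abs(z0) ** 2 - abs(z1) ** 2) / 2, np.conj(z0) * z1],
                   [z0 * np.conj(z1), -(abs(z0) ** 2 - abs(z1) ** 2) / 2]]) / n
    b2 = np.array([[(abs(z2) ** 2 - abs(z3) ** 2) / 2, np.conj(z2) * z3],
                   [z2 * np.conj(z3), -(abs(z2) ** 2 - abs(z3) ** 2) / 2]]) / n
    return b1, b2

b1, b2 = mu_P3([0, 1, 0, 1])
expected = np.array([[-0.25, 0], [0, 0.25]])
assert np.allclose(b1, expected) and np.allclose(b2, expected)
```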
\subsection{Semistable points induced by affine completions of
$G/N$}\label{subsect:completely_stable}
There is a further way to define $N$-semistable points
in $X$, less
directly linked to the intrinsic geometry of $G/N$ and $X$. Instead
of discussing the diagonal $G$-action on $G/N \times Z$ let us consider an
affine completion $\ol{G/N}^{\text{a}}$ and consider the diagonal $G$-action
on $\ol{G/N}^{\text{a}}\times Z$. Let $\ol{\iota}\colon X\hookrightarrow
\ol{G/N}^{\text{a}}\times Z$ be the $N$-equivariant embedding and define
$X^{\ol{ss}}_N [\omega_X]:=\ol{\iota}^{-1}((\ol{G/N}^{\text{a}}\times
Z)^{ss}_G[\omega_V + \omega_Z])$. Then, $X^{\ol{ss}}_N[\omega_X]$ is an
open $N$-invariant subset which contains but in general is strictly bigger than
$X^{ss}_N[\omega_X]$. Analogously, we define $X^{\ol{s}}_N$ as
$\ol{\iota}^{-1}((\ol{G/N}^{\text{a}}\times Z)^{s}_G[\omega_V +
\omega_Z])$.
\begin{lem}
Let $(X,\omega_X)$ be a compact Hamiltonian $N$-manifold with a Hamiltonian
$G$-extension $(Z,\omega_Z)$. If $N$ is a Grosshans subgroup of $G$, i.e., if
$\mbb{C}[G]^N$ is finitely generated, then for the canonical affine completion
$\Spec\mbb{C}[G]^N$ of $G/N$ the set $X^{\ol{ss}}_N$ is non-empty.
\end{lem}
\begin{proof}
We already noticed in Section~\ref{sect:moment_maps_for_reps} that under the
Grosshans assumption the corresponding moment map $\mu_V\colon
\Spec\mbb{C}[G]^N\to\lie{k}^*$ is surjective. Every moment map
$\mu=\mu_V+\mu_Z\colon \Spec\mbb{C}[G]^N\times Z\to \lie{k}^*$ thus has non-empty zero fibre.
\qed
\end{proof}
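For orientation, we recall a standard instance of the Grosshans condition, well known and independent of the argument above: the maximal unipotent subgroup of ${\rm SL}(2,\mbb{C})$.

```latex
% For G = SL(2,C) and N < G its maximal unipotent subgroup, the map
% gN -> g.e_1 identifies G/N with C^2 \ {0}; the right-N-invariant
% regular functions on G are precisely the polynomials in the first
% column, so
\[
  \mbb{C}[G]^N \,\cong\, \mbb{C}[x,y],
  \qquad
  \Spec\mbb{C}[G]^N \,=\, \mbb{C}^2 \,\supset\, G/N \cong \mbb{C}^2\setminus\{0\},
\]
% i.e. the canonical affine completion adds only the origin.
```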
\subsection{Algebraic actions on projective manifolds}\label{subsect:algebraic_actions}
In this section, we study the following situation: let $X$ be a projective
manifold and $N$ a unipotent group acting \emph{linearly} on $X$ in the sense
that there exists a finite-dimensional $N$-representation $W$ such that the
corresponding homomorphism $N \to \mathrm{GL}(W)$ embeds $N$ into a semisimple
subgroup $G$ of $\mathrm{SL}(W)$, and an $N$-equivariant embedding $\iota\colon
X\hookrightarrow \mathbb{P}(W)$. We will compare the moment map approach
presented in earlier sections with the Geometric Invariant Theory approach of
Doran--Kirwan \cite{DorKir}.
Consider the (very ample) line bundle $L_X:=
\iota^*(\mathscr{O}_{\mathbb{P}(W)}(1) )$ on $X$, which is
$N$-linearised by construction. Let $\langle \cdot, \cdot \rangle$ be a
Hermitian inner product on $W$ and set $K := \mathrm{SU}(W, \langle \cdot, \cdot
\rangle) \cap G$, so that $G = K^\mathbb{C}$. Endow $\mathbb{P}(W)$ and hence $X$ with
the corresponding Fubini-Study K\"ahler form $\omega_{FS}$ and its restriction
$\omega_X := \iota^*(\omega_{FS})$, respectively, so that $[\omega_X] = c_1(L_X)
\in H^2 ( X, \,\mathbb{R})$. Note that $\mathbb{P}(W)$ is a
Hamiltonian $G$-extension of $X$. Next, as suggested by the construction of
semistable points with respect to $\omega_X$, we look at
\[ \ol{\iota}\colon X \hookrightarrow Y=G\times_NX\hookrightarrow G\times_N
\mathbb{P}(W) \cong G/N\times \mathbb{P}(W) \hookrightarrow
\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W), \]
cf.~Section~\ref{subsect:completely_stable}, and additionally at the
$G$-linearised ample line bundle $ L:= \mathscr{O}_{\ol{G/N}^{\mathrm{a}}}
\boxtimes \mathscr{O}_{\mathbb{P}(W)}(1)$. In this situation, we define
\[X^{\ol{s}}_N(L_X) := \ol{\iota}^{-1}((\ol{G/N}^{\mathrm{a}} \times
\mathbb{P}(W))^{s}_G(L) )\]
to be the pre-image of the GIT-stable points for the $G$-action on
$\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W)$ and the given
linearisation, which is unique as
$G$ is semisimple. The main comparison result regarding
moment-map-semistability and GIT-semistability can now be formulated as
follows:
\begin{prop}\label{prop:comparison_mu_GIT}
In the above situation, assume additionally that $\ol{G/N}^{\mathrm{a}}$ is
normal. Then, we have
\begin{equation}\label{eq:comparison_mu_GIT}X^{ss}_N[\omega_X] =
X^{\ol{s}}_N(L_X).
\end{equation}
\end{prop}
\begin{proof}
The inner product $\langle \cdot, \cdot \rangle $ induces a $K$-invariant
Hermitian metric on $\mathscr{O}_{\mathbb{P}(W)}(1)$ such that $\frac{i}{2\pi}\,
\times$ the curvature is $\omega_{FS}$. Using a $K$-invariant Hermitian metric
on the trivial line bundle over $V \supset \ol{G/N}^{\mathrm{a}}$ with
$\frac{i}{2\pi}\, \times$ curvature equal to $\omega_V$, we get a $K$-invariant
Hermitian metric $h$ on $L \to \ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W)$ with
$\frac{i}{2\pi}\, \times$ curvature equal to $\omega_V +
\omega_{FS}$. We are hence in the general situation of
\cite[Section~2.2]{Sj2}\footnote{As $G$ is semisimple, the moment map computed
there has to coincide with $\mu_{\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W)}$.},
with the exception that $\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W)$ is normal
and not smooth, which does not affect Sjamaar's arguments\footnote{See also
\cite{HausenHeinzner} for the result attributed to Roberts. Regarding \cite[Lemma
2.16]{Sj2}, see also \cite{Extensionofsymplectic}.}. In particular, the compact
complex space $(\ol{G/N}^{\mathrm{a}} \times
\mathbb{P}(W))^{ss}_G(\mu_{\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W)}) \hq
G$ is projective algebraic by Grauert's version of the Kodaira Embedding
Theorem, see \cite[Theorem~2.17]{Sj2}. Moreover, we claim that
\begin{equation}\label{eq:semistable_the_same}(\ol{G/N}^{\mathrm{a}} \times
\mathbb{P}(W))^{ss}_G(\mu_{\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W)}) =
(\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W))^{ss}_G(L).
\end{equation}
In order to prove this, as the moment map $\mu_{\ol{G/N}^{\mathrm{a}} \times
\mathbb{P}(W)}$ is proper we can follow the general line of argumentation
presented in \cite[proof of Theorem~2.18]{Sj2}: Since the possibly singular
variety $\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W)$ is contained in $V\times
\mathbb{P}(W)$, and since all differential geometric and symplectic objects are
obtained by restriction, the computations regarding the relation between the
norms of sections and (the norm square of) the moment map given in the first
paragraph of \emph{loc.~cit.} continue to hold, so that for any $\mu$-semistable
$p \in \ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W)$ and any $G$-invariant section
$s$ of $L$ over $(\ol{G/N}^{\mathrm{a}} \times
\mathbb{P}(W))^{ss}_G(\mu_{\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W)})$,
the restriction of the function $h(s , s)$ to the closure of $G \acts p$ inside
$(\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W))^{ss}_G
(\mu_{\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W)})$ takes on its maximum at
the limit $F_\infty(p)$ under the gradient flow of
$-\|\mu_{\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W)}\|^2$, from which we
conclude that $s$ is bounded on $(\ol{G/N}^{\mathrm{a}} \times
\mathbb{P}(W))^{ss}_G(\mu_{\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W)})$.
Furthermore, an application of \cite[Proposition 7.6]{PaHq} shows that the set
of $\mu$-semistable points is Zariski-open. Since in addition
$\ol{G/N}^{\mathrm{a}} \times \mathbb{P}(W)$ is normal, it therefore follows
from Riemann's Extension Theorem\footnote{In Sjamaar's setup the application of
Riemann's Extension Theorem is not justified, since at this point the complement
of the set of $\mu$-semistable points is not known to be small enough; e.g.,~it
could contain interior points (in the Euclidean topology).} that $s$ extends to
a $G$-invariant section over the whole of $\ol{G/N}^{\mathrm{a}} \times
\mathbb{P}(W)$. The arguments for the two implications ``algebraically
semistable implies analytically semi\-stable'' and ``analytically semi\-stable
implies algebraically semistable'' can now be used without changes, proving
\eqref{eq:semistable_the_same}.
The analogous equality for stable points follows from the fact that on both
sides, these are the ones for which the corresponding fibre of the quotient map
consists of a single (closed) orbit. Intersecting with $\ol{\iota}(X)$ yields
$X^{\ol{s}}_N[\omega_X] = X^{\ol{s}}_N(L_X)$, from which we conclude using
Corollary~\ref{cor:completely_stable_gleich_semistable} proven in
Section~\ref{subsect:geometric_quotients} below.
\qed
\end{proof}
\begin{rem}[Comparison of semistable points]{\rm
In the given situation, Doran and Kirwan in \cite[Definition 5.1.6]{DorKir}
define the set of \emph{GIT-semistable points} to be
\[X^{ss}_N(L_X) := X \cap Y^{ss}_G(\hat\iota^* \mathscr{O}_{G/N} \boxtimes
\mathscr{O}_{\mathbb{P}(W)}(1)),\]
where $\hat\iota$ is given by \eqref{eq:untwisting_the_action}. In general,
this set will not coincide with $X^{ss}_N[\omega_X]$, as the following argument
shows. Assume we had $X^{ss}_N[\omega_X] = X^{ss}_N(L_X)$. Since the latter set
only depends on the $N$-action on $X$ and its lift to the $N$-linearised line
bundle $L_X$, see \cite[Proposition 5.1.9]{DorKir}, the same would be true for
$X^{ss}_N[\omega_X]$. In particular, $X^{ss}_N[\omega_X]$ would be independent
of the chosen embedding $N \hookrightarrow G$ and of the chosen embedding $G/N
\hookrightarrow V$ with (normal) affinisation $\ol{G/N}^{\mathrm{a}}$. This
however would stand in contradiction to
Example~\ref{ex:not_independent_of_the_group}.
In this direction, Equality~\eqref{eq:comparison_mu_GIT} gives the inclusion
$X^{ss}_N[\omega_X] \subset X^{ss}_N(L_X) $, which in general is strict, as the
gradient flow of the norm square of the moment map of a GIT-semistable point in
$\hat{\iota}(Y)$ might converge to a point (in the zero fibre of the moment
map) in the \emph{boundary} of $\hat{\iota}(Y)$ in $\ol{G/N}^{\mathrm{a}}
\times \mathbb{P}(W)$.}
\end{rem}
\section{Properties of quotients by unipotent groups}\label{sect:properties}
We establish the existence of a compactifiable geometric quotient of the set of
semistable points by the $N$-action that extends to a meromorphic map from $X$
to the compactification and carries a natural K\"ahler form obtained by
symplectic reduction. We will use the notation established in
Section~\ref{subsubsectio:def_of_ss}.
\subsection{Existence of geometric quotients}\label{subsect:geometric_quotients}
As in the reductive case, sets of semistable points admit quotients, which in
the unipotent case are automatically geometric, since unipotent groups cannot
have properly semistable orbits by the following
\begin{lem}\label{Lem:AllOrbitsClosed}
Let $(X,\omega_X)$ be a Hamiltonian $N$-manifold. Then every $N$-orbit in
$X^{ss}_N[\omega_X]$ is closed in $X^{ss}_N[\omega_X]$, i.e., we have
$X^{ss}_N[\omega_X]=X^s_N[\omega_X]$.
\end{lem}
\begin{proof}
Consider the analytic Hilbert quotient $\pi_G\colon
Y^{ss}_G[\omega_Y]\to Y^{ss}_G[\omega_Y]\hq G$. The fibre
$\pi_G^{-1}(\pi_G^{}(x))$ is an affine $G$-variety, see
\cite[Proposition 3.3.7]{HH2}. It hence follows from a classical result that
every $N$-orbit is closed in $\pi_G^{-1}(\pi_G^{}(x))$ and hence in
$Y^{ss}_G[\omega_Y]$. The claim follows.
\qed
\end{proof}
\begin{cor}\label{cor:completely_stable_gleich_semistable}
In the situation of Section~\ref{subsect:completely_stable}, we have
$X^{ss}_N[\omega_X] = X^{\ol{s}}_N[\omega_X]$.
\end{cor}
\begin{proof}
It follows from Lemma~\ref{Lem:AllOrbitsClosed} that every orbit in
$Y^{ss}_G[\omega_Y]$ is closed in $Y^{ss}_G[\omega_Y]$. If $\Phi\colon Y
\hookrightarrow \ol{G/N}^{\mathrm{a}} \times Z$ is the natural inclusion, we
hence have
\[
Y^{ss}_G[\omega_Y] = \{y \in Y \mid G\acts y \cap \mu_Y^{-1}(0) \neq \emptyset\}
= \{y \in Y \mid G\acts \Phi(y) \cap \mu_{\ol{G/N}^{\mathrm{a}} \times
Z}^{-1}(0) \neq \emptyset\}.
\]
As $\Phi$ restricted to $X \subset Y$ coincides with $\ol{\iota}$, the claim
follows.
\qed
\end{proof}
\begin{thm}\label{thm:existence_of_geometric_quotient}
Let $(X, \omega_X)$ be a compact Hamiltonian $N$-manifold. Then, the set
$X^{ss}_N[\omega_X]$ of semi\-stable points admits a geometric quotient
$\pi\colon X^{ss}_N[\omega_X] \to X^{ss}_N[\omega_X] / N$ by the $N$-action. In
fact, $\pi$ is a principal $N$-fibre bundle and $X^{ss}_N[\omega_X] / N =:Q$ is
smooth.
\end{thm}
\begin{proof}
By the quotient theory for Hamiltonian actions of reductive groups, see
Theorem~\ref{propertiesmomentumquotients}, the set of $G$-semistable points
$Y^{ss}_G[\omega_Y] = G\acts X^{ss}_N[\omega_X]$ admits an analytic Hilbert
quotient by the $G$-action. Moreover, by Lemma~\ref{Lem:AllOrbitsClosed}, every
$G$-orbit in $Y^{ss}_G[\omega_Y]$ is closed there, hence the quotient
$Y^{ss}_G[\omega_Y] \to Y^{ss}_G[\omega_Y]\hq G$ is in fact geometric. By
construction of the twisted product, the restriction to $X^{ss}_N[\omega_X]
\subset Y^{ss}_G[\omega_Y]$ yields the desired geometric quotient $\pi$.
For
every $x\in X^{ss}_N[\omega_X]$, the $G$-orbit is closed, hence the isotropy
subgroup $G_x$ is reductive. On the other hand, as $x \in X$, the isotropy is
contained in $N$, and hence unipotent. It follows that $G_x=N_x = \{e\}$, and
hence that $\pi$ is a principal $N$-fibre bundle.
\qed
\end{proof}
\subsection{Compactifications of the quotient}
We will establish the existence of natural compactifications of the quotient
$X^{ss}_N[\omega_X] / N$, \emph{which we assume to be non-empty in this section}.
Recall that the fundamental construction of Section~\ref{subsubsectio:def_of_ss}
involves the choice of an embedding of ${G/N}$ into a Hermitian
$K$-representation $V$ as a $G$-orbit $G\acts v$, see
Equation~\eqref{eq:definition_of_omega_1}. This leads to an affine completion
$\ol{G/N}^{\mathrm{a}} := \ol{G\acts v}$ of $G/N$ to which both the K\"ahler
form and the moment map extend. Consider the composition $\Phi \colon Y
\hookrightarrow \ol{G/N}^{\mathrm{a}} \times Z$ of the open embedding
$G/N \times Z \hookrightarrow \ol{G/N}^{\mathrm{a}} \times Z$ with the embedding
\eqref{eq:untwisting_the_action} used in the main construction.
\begin{prop}\label{prop:compactI}
The inclusion $Y = G\times_N X \overset{\Phi}{\hookrightarrow}
\overline{G/N}^{\mathrm{a}}\times Z$ induces an open embedding \[\phi\colon
X^{ss}_N[\omega_X] / N \hookrightarrow \overline{Q}\] of $X^{ss}_N[\omega_X] /
N= Q$ into a compact complex space $\overline{Q}$ such that $\overline{Q}
\setminus \phi( Q )$ is analytic and nowhere dense.
\end{prop}
\begin{proof}
We claim that $\Phi(Y)$ is Zariski-open in its closure. For this, we first look
at the compactification $V \times Z \hookrightarrow \mathbb{P}(V \oplus
\mathbb{C}) \times Z$; by slight abuse of notation, the composition of $\Phi$
with this embedding will also be denoted by $\Phi$. Since the $G$-action on the
compact K\"ahler manifold $Z$ is Hamiltonian, it is meromorphic, see
Remark~\ref{rem:reductive_equivalence}. As in addition the $G$-action on $V$ is
algebraic, the $G$-action on the compact K\"ahler manifold $\mathbb{P}(V \oplus
\mathbb{C}) \times Z$ is meromorphic. Secondly, we notice that $\Phi(Y) = G
\acts (\{eU\} \times \iota(X) )$, where $\iota\colon X\hookrightarrow
Z$ is the extension map; i.e., $\Phi(Y)$ is the $G$-sweep of a compact complex
submanifold of $\mathbb{P}(V \oplus \mathbb{C}) \times Z$. It therefore follows
from \cite[Lemma 2.4(1)]{Fuj} that $\Phi(Y)$ is Zariski-open in its closure
inside $\mathbb{P}(V \oplus \mathbb{C}) \times Z$, and hence it is Zariski-open
in its closure inside $\ol{G/N}^{\mathrm{a}} \times Z$. We denote this closure
by $\ol{Y}$.
By Theorem~\ref{Thm:MomentImage} the moment map $\mu_V$ is proper on
$\ol{G/N}^{\mathrm{a}}$. It follows that the moment map for the action of $K$ on
$\ol{G/N}^{\mathrm{a}} \times Z$ and hence the restriction $\mu_{\ol{Y}}\colon
\ol{Y} \to \mathfrak{k}^*$ of this moment map to the analytic subset $\ol{Y}
\subset \ol{G/N}^{\mathrm{a}} \times Z$ is likewise proper. Recalling the
construction of the K\"ahler form $\omega_Y$ and the associated moment map
$\mu_Y$, we can summarise the situation in the following commutative diagram:
\[\begin{xymatrix}{
Y \ar@{^(->}[r] \ar[rd]_{\mu_{Y}}& \ol{Y}
\ar[d]^{\mu_{\ol{Y}}}\ar@{^(->}[r] & \ol{G/N}^{\mathrm{a}} \times Z
\ar@{^(->}[r] \ar[d] & V \times Z \ar[ld]^{\mu_V + \mu_Z}\\
& \mathfrak{k}^* \ar[r]^= &
\mathfrak{k}^* . &
}
\end{xymatrix}\]
Here, in the first line, the first inclusion is open and the other two
inclusions are closed.
Since $\mu_{\ol{Y}}$ is proper, its zero fibre is compact, and hence the
associated analytic Hilbert quotient $\ol{Y}^{ss}_G(\mu_{\ol{Y}}) \hq G \simeq
\mu^{-1}_{\ol{Y}}(0) / K$ is a compact complex space, which in fact comes with a
natural closed embedding into the (non-compact) analytic Hilbert quotient $(V
\times Z)^{ss}_G(\mu_V + \mu_Z)\hq G$. As the inclusion $Y \hookrightarrow
\ol{Y}$ has Zariski-open image, and as every $G$-orbit in $Y^{ss}_G(\mu_Y) =
Y^{ss}_G[\omega_Y]$ is closed by the argument in the proof of Theorem
\ref{thm:existence_of_geometric_quotient}, the inclusion $Y^{ss}_G[\omega_Y]
\hookrightarrow \ol{Y}^{ss}_G(\mu_{\ol{Y}})$ is Zariski-open and saturated with
respect to the quotient map $\ol{\pi}\colon \ol{Y}^{ss}_G(\mu_{\ol{Y}}) \to
\ol{Y}^{ss}_G(\mu_{\ol{Y}}) \hq G$. It therefore induces the desired
Zariski-open embedding \[\phi\colon Q=X^{ss}_N[\omega_X]/N \cong
Y^{ss}_G[\omega_Y]/G \hookrightarrow \ol{Y}^{ss}_G(\mu_{\ol{Y}})\hq G =: \ol{Q}
\]
into the compact complex space $\ol{Q}$.
\qed
\end{proof}
\subsection{Zariski-openness of semistable points and meromorphic extension of
the quotient map}
While there is no general result for analyticity of the complement of the set
of semi\-stable points in a Hamiltonian $G$-manifold with non-proper moment
map, in our setup this can be shown by hand.
\begin{thm}\label{thm:Zopen}
Let $(X, \omega_X)$ be a compact Hamiltonian $N$-manifold. Then, the set
$X^{ss}_N[\omega_X]$ of semi\-stable points is Zariski-open in $X$. Moreover,
the quotient map $\pi\colon X^{ss}_N[\omega_X] \to X^{ss}_N[\omega_X] / N$
extends to a meromorphic map\footnote{The reader is referred to \cite[Chapter 6,
Section 3]{Whitney} for an in-depth discussion of meromorphic mappings between
complex spaces.} $\pi\colon X \dasharrow \overline{Q}$ to the compact complex
space $\overline{Q}$ constructed in the proof of
Proposition~\ref{prop:compactI}.
\end{thm}
\begin{proof}
By part (1) of Theorem~\ref{propertiesmomentumquotients}, $X^{ss}_N[\omega_X]$
is open in the Euclidean topology. Let $\pi_F \colon X \dasharrow Q_F$ be a
Fujiki quotient of $X$ by the $N$-action, whose existence is guaranteed by
Proposition~\ref{Chow}, and let $\Gamma \subset X \times Q_F$ be the graph. In
particular, $\Gamma$ is an $N$-invariant, irreducible, compact analytic subset
of $X \times Q_F$, where $N$ acts only on the first factor. Embedding $X \times
Q_F$ into $Y\times Q_F$ and further into $\mathbb{P}(V \oplus \mathbb{C}) \times
Z \times Q_F$ as in the proof of Proposition~\ref{prop:compactI} we can
interpret $\Gamma$ as an $N$-invariant subvariety in $\mathbb{P}(V \oplus
\mathbb{C}) \times Z \times Q_F$. Using again that the $G$-action on the latter
space is meromorphic, we conclude that $\hat{\Gamma}:= \ol{G \acts \Gamma}$
is Zariski-open in its closure in $\ol{Y} \times Q_F$, and in particular
irreducible. On $Y \subset \ol{Y}$ it is the graph of the $G$-invariant
extension of the $N$-invariant meromorphic map $\pi_F$ from $X$ to $Y = G
\times_N X$. It follows that $\hat{\Gamma}$ is the graph of a $G$-invariant
meromorphic map from $\ol{Y}$ to $Q_F$, which we will call $\hat{\pi}_F$.
The graph of the restriction of $\hat{\pi}_F$ to
$\ol{Y}^{ss}_G(\mu_{\ol{Y}})$ is equal to $\hat{\Gamma}^{\circ} :=
\hat{\Gamma} \cap (\ol{Y}^{ss}_G(\mu_{\ol{Y}}) \times Q_F)$. Now,
$\ol{Y}^{ss}_G(\mu_{\ol{Y}}) \times Q_F$ admits an analytic Hilbert quotient by
the $G$-action, namely $\Pi = \ol{\pi} \times \id_{Q_F}\colon
\ol{Y}^{ss}_G(\mu_{\ol{Y}}) \times Q_F \to \overline{Q} \times Q_F$. As
$\hat{\Gamma}^\circ$ is a $G$-invariant, irreducible analytic subset of
$\ol{Y}^{ss}_G(\mu_{\ol{Y}}) \times Q_F $, its image
$\hat{\Gamma}^\circ_{\rm red} = \Pi(\hat{\Gamma}^\circ)$ is an
irreducible analytic subset of $\ol{Q} \times Q_F$ by the basic properties of
analytic Hilbert quotients listed in Section~\ref{subsect:aHq_properties}.
On the one hand, as orbits through points in $Y^{ss}_G(\mu_Y) \subset
\ol{Y}^{ss}_G(\mu_{\ol{Y}})$ are closed, $\hat{\Gamma}^\circ_{\rm red}$
defines a meromorphic map $\pi_{F, \rm red}$ from $\ol{Q}$ to $Q_F$, cf.~the
argument given in the proof of \cite[Proposition~4.5]{PaHq}.
On the other hand,
consider the open subset $U := U_F \cap X^{ss}_N[\omega_X]$,
cf.~Proposition~\ref{Chow}. As this set is $N$-invariant, and since both
$\pi_F|_U$ and $\pi|_U$ are geometric quotients for the $N$-action on $U$ by
Proposition~\ref{Chow} and Theorem~\ref{thm:existence_of_geometric_quotient},
respectively, the respective images $\pi_F(U) \subset Q_F$ and $\pi(U) \subset Q
\subset \ol{Q}$ are biholomorphic via $\pi_{F, \rm red}$. It follows that
$\pi_{F, \rm red}\colon \ol{Q} \dasharrow Q_F$ is bimeromorphic. From this, we
conclude that $(\pi_{F, \rm red})^{-1} \circ \pi_F \colon X \dasharrow \ol{Q}$
is a meromorphic extension of $\pi$ and that there are Zariski-open, dense
subsets $\overline{\Omega} \subset \ol{Q}$ and $\Omega_F \subset Q_F$ that are
biholomorphic via $\pi_{F, \rm red}$. By shrinking $U_F$ if necessary, we may
assume that $\Omega_F = \pi_F(U_F)$ and $\ol{\Omega} \subset Q \subset \ol{Q}$.
The situation is hence summarised by the following commutative diagram
\[\begin{xymatrix}
{ U_F \ar@{^(->}[r]\ar@{->>}[d]_{\pi_F} & X^{ss}_N[\omega_X] \ar@{->>}[d]^{\pi}& \\
\Omega_F \ar@{^(->}[r]^>>>>>>{(\pi_{F, \rm red})^{-1}}& Q \ar@{^(->}[r] & \ol{Q}. }
\end{xymatrix}
\]
In particular, the Zariski-open subset $U_F$ is contained in
$X^{ss}_N[\omega_X]$. Since $X$ is compact, using a Noetherian induction
argument applied to the analytic subset $X' :=X \setminus U_F$ we conclude that
$X^{ss}_N[\omega_X]$ and hence $X\setminus X^{ss}_N[\omega_X]$ is constructible
in the Zariski-topology of $X$. As we know from the start that $X\setminus
X^{ss}_N[\omega_X]$ is closed in the Euclidean topology of $X$, the claim
follows.
\qed
\end{proof}
\subsection{Reduced K\"ahler structure on the quotient}
We will show that using a symplectic reduction procedure the quotient
$X^{ss}_N[\omega_X] / N$ can be endowed with a K\"ahler form naturally induced
from $\omega_X$. This form extends to the compactification $\ol{Q}$ and its
class pulls back under $\pi$ to the class of $[\omega_X]$ on
$X^{ss}_N[\omega_X]$.
\begin{thm}\label{thm:reducedKaehler}
In the setup of Proposition~\ref{prop:compactI}, there exists a K\"ahler
structure\footnote{See \cite[Sections 3.1 and 3.2]{PaHq} for the basic
definitions regarding K\"ahler structures on (singular) complex spaces.}
$\omega_{\ol{Q}}$ on the compact complex space $\ol{Q}$ whose restriction
$\omega_Q = \omega_{\ol{Q}}|_Q^{}$ to $Q \hookrightarrow \ol{Q}$ is smooth and
fulfils
\[[\pi^*{\omega_Q}] = [\omega_X|_{X^{ss}_N[\omega_X]}] \in
H^2(X^{ss}_N[\omega_X],\, \mathbb{R}).\]
\end{thm}
\begin{proof}
Once again, recall our setup in the following commutative diagram
\begin{equation}\label{eq:big_diagram}
\begin{gathered}
\begin{xymatrix}{
X^{ss}_N[\omega_X] \ar@{^(->}[r] \ar@{->>}[d]_\pi & Y^{ss}_G[\omega_Y]
\ar@{->>}[d] \ar@(ur,ul)[rrr]^\psi \ar@{^(->}[r] &
\ol{Y}^{ss}_G[\hat{\omega}_{Z}] \ar@{^(->}[r] \ar@{->>}[d]^{\ol{\pi}} &
\ol{G/N}^{\mathrm{a}} \times Z \ar@{^(->}[r]& V \times Z \ar[d]^{\mathrm{pr}_Z}
\\
Q \ar[r]^<<<<<<<\cong& Y^{ss}_G[\omega_Y]\hq G \ar@{^(->}[r] & \ol{Q} & & Z.
}
\end{xymatrix}
\end{gathered}
\end{equation}
By applying the K\"ahlerian reduction procedure of \cite{Extensionofsymplectic}
to $\ol{Y}^{ss}_G[\hat{\omega}_{Z}]$ and to the quotient $\ol{Q}$, we obtain
a K\"ahlerian structure $\omega_{\ol{Q}}$ on $\ol{Q}$ induced by restricting
local $K$-invariant potentials of $\hat{\omega}_Z$ to $\mu_{\ol{Y}}^{-1}(0)$
and by the homeomorphism $\mu_{\ol{Y}}^{-1}(0)/K \simeq \ol{Q}$,
cf.~Theorem~\ref{propertiesmomentumquotients}. We denote the restriction of
$\omega_{\ol{Q}}$ to $Q$ by $\omega_Q$.
In order to show that $\omega_Q$ is smooth, we first note that
$Y^{ss}_G[\omega_Y] \subset \ol{Y}^{ss}_G[\hat{\omega}_{Z}]$ is smooth and
$\ol{\pi}$-saturated, and secondly recall the observation made in the proof of
Theorem~\ref{thm:existence_of_geometric_quotient} above that the $G$-action on
$Y^{ss}_G[\omega_Y]$ is free. Therefore, it follows from the construction of the
reduced K\"ahler form $\omega_{\ol{Q}}$, see \cite[Lemma 2 on page 132 and the
proof on pages 133/134]{Extensionofsymplectic} and also compare with
\cite[Theorem 2.10]{Sj2}, that in the fundamental commutative diagram
\begin{equation}\label{eq:fundamental_diagram}
\begin{gathered}
\begin{xymatrix}{\mu_Y^{-1}(0)\ar@{->>}[d]_{\pi_K} \ar@{^(->}[r]^{\tau}&Y^{ss}_G[\omega_Y] \ar@{->>}[d]^{\ol{\pi}}\\
\mu_Y^{-1}(0)/K \ar[r]_<<<<<{\simeq}^<<<<<{\tau_{\mathrm{red}}} & Q}
\end{xymatrix}
\end{gathered}
\end{equation}
the fibre $\mu_Y^{-1}(0)$ is smooth, the $K$-action on $\mu_Y^{-1}(0)$ is free,
and the K\"ahler structure $\omega_Q$ is smooth and fulfils the
``symplectic reduction'' equation
\begin{equation}\label{eq:symp_red}
\tau^*(\ol{\pi}^*\omega_Q) = \pi_K^*(\tau_{\mathrm{red}}^*\omega_Q) =
\omega_Y|_{\mu_Y^{-1}(0)} = \tau^*(\omega_Y|_{Y^{ss}_G[\omega_Y]}).
\end{equation}
More is true. Since $Y^{ss}_G[\omega_Y]$ is $\ol{\pi}$-saturated, and since the
moment map $\mu_{\ol{Y}}\colon \ol{Y} \to \mathfrak{k}^*$ is proper as observed
in the proof of Proposition~\ref{prop:compactI}, the moment map $\mu_Y$ is
\emph{admissible} in the sense that the gradient flow $F_t$ of $- \|\mu_Y \|^2$
through any point $p \in Y^{ss}_G[\omega_Y]$ exists for all times,
cf.~\cite[\S9]{Kir2}, and hence there exists a continuous retraction of
$Y^{ss}_G[\omega_Y]$ to $\mu_Y^{-1}(0)$ defined by $z \mapsto \lim_{t \to
\infty} F_t(z)$, see \cite[page 109]{Sj2} and the references given there, as
well as \cite{Ler}. In particular, the inclusion displayed in the first line of
Diagram~\eqref{eq:fundamental_diagram} induces an isomorphism between de Rham
cohomology groups,
\begin{equation*}
\tau^*\colon H^2(Y^{ss}_G[\omega_Y],\, \mathbb{R} )
\overset{\cong}{\longrightarrow} H^2(\mu_Y^{-1}(0), \mathbb{R} ).
\end{equation*}
Equation~\eqref{eq:symp_red} therefore implies that
\begin{equation}\label{eq:1}
[\ol{\pi}^*\omega_Q] = [\omega_Y|_{Y^{ss}_G[\omega_Y]}] \in
H^2(Y^{ss}_G[\omega_Y], \, \mathbb{R} ).
\end{equation}
In addition, from the right hand part of Diagram~\eqref{eq:big_diagram}, from
Equations~\eqref{eq:definition_of_omega_1} and \eqref{eq:definition_of_omega_2},
and from the fact that the de Rham cohomology class of $\omega_V$ is trivial
we infer that
\begin{equation*}
[\omega_Y|_{Y^{ss}_G[\omega_Y]}] = [\psi^* (\mathrm{pr}^*_Z
(\omega_Z))] \in H^2(Y^{ss}_G[\omega_Y],\, \mathbb{R}),
\end{equation*}
so that \eqref{eq:1} becomes
\begin{equation*}
[\ol{\pi}^*\omega_Q] =[\psi^* (\mathrm{pr}^*_Z (\omega_Z))]
\in H^2(Y^{ss}_G[\omega_Y],\, \mathbb{R}).
\end{equation*}
Finally, using this, the left hand part of Diagram~\eqref{eq:big_diagram}, and
the fact that $Z$ as a $G$-extension of $X$ fulfils \eqref{eq:pullback} we
conclude
\[[\pi^*{\omega_Q}] = [\omega_X|_{X^{ss}_N[\omega_X]}] \in
H^2(X^{ss}_N[\omega_X],\, \mathbb{R}).\]
\vspace{-0.65cm}\qed
\end{proof}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\bibliographystyle{amsalpha}
\bibliographymark{References}
\def\cprime{$'$}
\section{Introduction}
Finite-temperature corrections to the effective potential in quantum field theory
play a fundamental role in the description of phase transitions in the
early universe. In particular, symmetry restoration at high temperatures is
an essential ingredient of the Higgs mechanism for electroweak symmetry breaking.
In flat space-time, such thermal corrections were computed for the
first time
in the seminal papers of Dolan and Jackiw \cite{DJ} and Weinberg \cite{Weinberg} using thermal Green functions methods. The possibility of extending those methods to
more realistic scenarios incorporating space-time curvature
faces certain difficulties, since finite-temperature field
theory is only well-defined provided the geometry possesses a global time-like Killing field.
Thus, for example, in the static or stationary case, the thermal Green functions method has been
applied to homogeneous and isotropic Einstein static spaces in \cite{DC}. These methods were extended to conformally static Robertson-Walker backgrounds in \cite{Dr}. The conditions for the construction of a thermal field theory in more general expanding universes (not strictly static) were discussed in \cite{Parker,Hu}, where adiabatic techniques were introduced.
An alternative approach to the adiabatic expansion for thermal field theory in general curved
space-time is the so called Schwinger-DeWitt \cite{Schwinger,DeWitt} expansion of the effective action.
Both approaches are known to agree in the results for the ultraviolet divergences
in zero-temperature field theory. The Schwinger-DeWitt expansion, being a local curvature expansion, is manifestly covariant but it is not sensitive to the global properties of the space-time such as the presence of boundaries and does not contain information about the
non-local part of the effective action. Going beyond the Schwinger-DeWitt approximation
requires brute force methods based on explicit mode summation \cite{Birrell}.
Thus, for example, in \cite{Huang} phase transitions in homogeneous but anisotropic Bianchi I and Kasner cosmologies were studied using explicit mode sums.
In recent works \cite{Maroto, Higgs}, we started this program in the
case of weak inhomogeneous gravitational fields by studying the one-loop corrections to the vacuum expectation value (VEV) of the energy-momentum tensor and the effective potential of a massive scalar field. Thus, in \cite{Maroto}, using a regularization procedure based on a simple comoving cutoff, a nonvanishing contribution of metric perturbations to the effective potential was obtained. However, the renormalization
procedure required the use of non-covariant counterterms. In contrast, dimensional regularization was used in \cite{Higgs} to isolate the divergences, applying techniques developed specifically to deal with non-rational integrands. In this case, the renormalized
effective potential, being explicitly covariant, did not receive contributions from the
inhomogeneous gravitational fields at leading order in metric perturbations and in the adiabatic expansion, in both static and cosmological space-times.
In this work, we extend these methods to include finite temperature effects. The inclusion of the Bose-Einstein factor, accounting for the statistical distribution of the energy states, produces a smooth behavior of all quantities involved at large energies, so that no regularization technique needs to be applied (once the vacuum contribution is renormalized). As mentioned above, in order to compute the aforementioned contribution, we apply the ``brute force'' method described in \cite{Maroto, Higgs}, i.e.\ performing a summation over the perturbed modes of the quantum field obtained as solutions of the Klein-Gordon equation. We are able to obtain analytical expressions for the effective potential and the energy-momentum tensor in the non-relativistic and the ultra-relativistic limits. In the static limit, we find that local gravitational effects can be taken into account through the Tolman temperature \cite{Tolman}. This is in accordance with computations of the energy-momentum tensor of a scalar field at finite temperature in a static space-time using the Schwinger-DeWitt approach \cite{Nakazawa,Holstein}. However, we also obtain the explicit time dependence of the expectation values at finite times, which shows that the Tolman temperature can only be defined in the asymptotic time regions.
The work is organized as follows. Section \ref{sec:back} describes the general approach to compute an expectation value over a thermal state in a perturbed FRW metric. The particular expressions to be computed in the case of static space-times are presented in Section \ref{sec:static}. Sections \ref{sec:low} and \ref{sec:high} explain the approximations applied to obtain the final results in the non-relativistic and ultra-relativistic limits, respectively. Shifts in the minimum of the effective potential produced by thermal corrections are discussed in Section \ref{sec:shift}. Our conclusions are presented in Section \ref{sec:con}.
\section{Finite temperature corrections}
\label{sec:back}
Given a scalar field $\phi$, with potential $V(\phi)$, its classical action in a $(D+1)$-dimensional space-time with
metric tensor $g_{\mu\nu}$ can be written as
\begin{eqnarray}
S[\phi,g_{\mu\nu}]\,=\,\int \text{d}^{D+1}x \,\sqrt{g} \left(\frac{1}{2}\,g^{\mu\nu}\,\partial_\mu\phi\,\partial_\nu\phi\,-\,V(\phi)\right).
\end{eqnarray}
As is well known, the solutions $\phi=\hat \phi$ of the classical equation of motion
\begin{eqnarray}
\Box \, \hat\phi \,+\,V'(\hat\phi)\, =\, 0 \,
\label{KGc}
\end{eqnarray}
are those that minimize the action. On the other hand, quantum fluctuations around the classical solution
$\delta \phi=\phi-\hat\phi$ satisfy the equation of motion
\begin{eqnarray}
\left(\Box \, +\,m^2(\hat\phi)\right)\delta\phi\, =\, 0
\label{pKGc}
\end{eqnarray}
with
\begin{eqnarray}
m^2(\hat\phi)\,=\,V''(\hat\phi)\,.
\end{eqnarray}
Let us consider a metric which can be written as a scalar perturbation around
a flat Robertson-Walker background
\begin{eqnarray}
\text{d}s^2 \,=\,a^2(\eta) \left\{ \left[1 + 2 \Phi(\eta,{\bf x})\right]\, \text{d}\eta^2 - \left[1 - 2\Psi(\eta,{\bf x})\right]\,\text{d}{\bf x}^2 \right\}\label{metric}
\end{eqnarray}
where $\eta$ is the conformal time, $a(\eta)$ the scale factor, and $\Phi$ and $\Psi$ are the scalar perturbations in the longitudinal gauge. Given this geometry, the mode solutions $\delta\phi_k$ to \eqref{pKGc} can be found using a WKB approximation to first order in metric perturbations and to the leading adiabatic order as \cite{Higgs}
\begin{eqnarray}
\delta\phi_k(\eta,{\bf x})\,=\,\delta\phi_k^{(0)}(\eta,{\bf x})\left(1\,+\,P_k(\eta,{\bf x})\,+\,i\,\delta\theta_k(\eta,{\bf x})\vphantom{\frac{1}{1}}\right)
\label{sol}
\end{eqnarray}
where
\begin{eqnarray}
\delta\phi_k^{(0)}(\eta,{\bf x})\,=\,\frac{1}{(2\,\pi)^{D/2}}\frac{1}{a(\eta)^{(D-1)/2}\,\sqrt{2\,\omega_k}}\,e^{i{\bf k}\cdot{\bf x}-i\int^\eta\omega_k(\eta')\text{d}\eta'}\,;
\label{sol0}
\end{eqnarray}
are the unperturbed mode solutions with
\begin{eqnarray}
\omega_k^2(\eta)=k^2+m^2a^2(\eta)
\end{eqnarray}
The explicit expressions for $P_k(\eta,{\bf x})$ and $\delta\theta_k(\eta,{\bf x})$ in Fourier space are shown in Appendix \ref{appsol}.
The effects of quantum fluctuations on the classical field configuration can be taken into account using the one-loop effective potential \cite{Mukhanov,Higgs}
\begin{eqnarray}
V_{\text{eff}}(\hat\phi)\,=\,V(\hat\phi)\,+\,\frac{1}{2}\int_0^{m^2(\hat\phi)} \text{d}m^2 \langle \delta\phi^2 \rangle\,
\label{effpot}
\end{eqnarray}
where $V(\hat\phi)$ is the tree-level potential and the expectation value of the operator $\langle \delta\phi^2 \rangle$ is taken over a particular quantum state of the field. Taking into account \eqref{sol} and \eqref{sol0} and assuming that the quantum state has a fixed number of particles per mode $n_k$, the one-loop contribution to the effective potential reads
\begin{eqnarray}
\label{effpotT}
\frac{1}{2}\int_0^{m^2(\hat\phi)} \text{d}m^2 \;\langle \delta\phi^2 \rangle\,&=&\,\frac{1}{(2\pi)^D\,a^{D-1}(\eta)}\frac{1}{2}\int_0^{m^2(\hat\phi)} \text{d}m^2 \,\int\text{d}^D{\bf k}\left(\frac{1}{2}\,+\,n_k\right)\frac{1\,+\,2\,P_k(\eta,{\bf p})}{\sqrt{k^2\,+\,m^2\,a^2(\eta)}}\\
&=&\,\frac{2}{(2\pi)^D\,a^{D-1}(\eta)}\,\frac{2\,\pi^{(D-1)/2}}{\Gamma((D-1)/2)}\frac{1}{2} \int_0^{m^2(\hat\phi)} \text{d}m^2 \,\int_0^{\infty}\text{d}k\,k^{D-1}\left(\frac{1}{2}\,+\,n_k\right)\frac{1\,+\,\hat{P}_k(\eta,{\bf p})}{\sqrt{k^2\,+\,m^2\,a^2(\eta)}}\,\nonumber
\end{eqnarray}
where we have defined
\begin{eqnarray}
\hat{P}_k (\eta,{\bf p})\,=\,\int_{-1}^1 \text{d}\hat{x}\,\left(1-\hat{x}^2\right)^{(D-3)/2}\,P_k (\eta,{\bf p})\,,
\end{eqnarray}
where $\hat{x}={\bf k}\cdot {\bf p}/(k\,p)$, and we have included the general angular integration measure in $D$ dimensions.
From now on, we consider a thermal quantum state. Then, the number of particles per mode is given by the Bose-Einstein distribution
\begin{eqnarray}
n^T_{k}\,=\,\frac{1}{e^{\omega_k/T}-1}\,,
\end{eqnarray}
where $T$ is the temperature of the state, for the moment understood as a parameter of the Bose-Einstein distribution (see next section).
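The occupation number admits a simple numerical sanity check. The following is a minimal sketch (Python with SciPy; function and variable names are ours, not from the references): for a massless field in $D=3$ the number density $\frac{1}{2\pi^2}\int_0^\infty k^2\, n^T_k\,\text{d}k$ must equal $\zeta(3)\,T^3/\pi^2$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

def n_BE(omega, T):
    """Bose-Einstein occupation number n_k^T = 1 / (exp(omega/T) - 1)."""
    return 1.0 / np.expm1(omega / T)

# Massless check in D = 3: (1/2 pi^2) int_0^inf k^2 n_k dk = zeta(3) T^3 / pi^2
T = 2.0
density = quad(lambda k: k**2 * n_BE(k, T), 0, np.inf)[0] / (2 * np.pi**2)
```

Using `np.expm1` rather than `np.exp(x) - 1` avoids catastrophic cancellation for $\omega \ll T$.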
Let us define $V_1(\hat\phi)$ as the one-loop quantum vacuum contribution, i.e.
\begin{eqnarray}
V_1(\hat\phi)&=&\,\frac{1}{(2\pi)^D\,a^{D-1}(\eta)}\,\frac{2\,\pi^{(D-1)/2}}{\Gamma((D-1)/2)}\, \int_0^{m^2(\hat\phi)} \text{d}m^2 \,\int_0^{\infty}\text{d}k\,k^{D-1}\frac{1}{2}\frac{1\,+\,\hat{P}_k(\eta,{\bf p})}{\sqrt{k^2\,+\,m^2\,a^2(\eta)}}\,
\end{eqnarray}
and $V_T(\hat\phi)$ as the term that includes finite temperature corrections
\begin{eqnarray}
V_T(\hat\phi)&=&\,\frac{1}{(2\pi)^D\,a^{D-1}(\eta)}\,\frac{2\,\pi^{(D-1)/2}}{\Gamma((D-1)/2)} \int_0^{m^2(\hat\phi)} \text{d}m^2 \,\int_0^{\infty}\text{d}k\,k^{D-1}\,n_k^T\,\frac{1\,+\,\hat{P}_k(\eta,{\bf p})}{\sqrt{k^2\,+\,m^2\,a^2(\eta)}}\,
\end{eqnarray}
so that we can write the one-loop effective potential at
finite temperature as
\begin{eqnarray}
V_{\text{eff}}(\hat\phi)\,=\,V(\hat\phi)\,+\,V_1(\hat\phi)\,+\,V_T(\hat\phi)\,.
\end{eqnarray}
It is important to notice that both the vacuum and the thermal contributions have a homogeneous term, corresponding to the background geometry, and an inhomogeneous one, proportional to the perturbations. Then,
\begin{eqnarray}
V_1({\hat \phi})\,=\,V^{\text{h}}_1({\hat \phi})\,+\,V^{\text{i}}_1({\hat \phi})\,,\\
V_T({\hat \phi})\,=\,V^{\text{h}}_T({\hat \phi})\,+\,V^{\text{i}}_T({\hat \phi})\,.
\end{eqnarray}
The homogeneous part due to vacuum effects $V_1^{\text{h}}({\hat \phi})$, after applying the minimal subtraction scheme $\overline{\text{MS}}$ in dimensional regularization with $D=3+\epsilon$, is given by \cite{Higgs}
\begin{eqnarray}
V^{\text{h}}_1(\hat\phi)\,=\,
\frac{m^4(\hat\phi)}{64\pi^2}\left[\ln\left(\frac{m^2(\hat\phi)}{\mu^2}\right)-\frac{3}{2}\right].
\end{eqnarray}
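This closed form is easy to evaluate directly. A minimal sketch (Python; the function name is ours): at the renormalization point $m=\mu$ the logarithm vanishes and the bracket reduces to $-3/2$.

```python
import numpy as np

def V1_h(m, mu):
    """Homogeneous one-loop vacuum potential in the MS-bar scheme (D = 3 + eps)."""
    return m**4 / (64 * np.pi**2) * (np.log(m**2 / mu**2) - 1.5)
```

For instance, $V_1^{\text{h}}$ evaluated at $m=\mu$ equals $-3m^4/(128\pi^2)$.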
A detailed analysis of the inhomogeneous part of the vacuum $V_1^{\text{i}}({\hat \phi})$ was performed in \cite{Maroto} with a cutoff regularization, and in \cite{Higgs} using dimensional regularization. When a cutoff $\Lambda$ is used, the
result turns out to be proportional to $m^2(\hat\phi)\Lambda^2\Phi$ in the static case,
i.e. only the quadratic divergence appears.
In dimensional regularization we find to first order in perturbations and to the leading adiabatic order that
\begin{eqnarray}
V^{\text{i}}_1(\hat\phi)\,=\,0
\end{eqnarray}
in agreement with the absence of logarithmic divergences in the cutoff case.
In this work, we focus on the thermal contribution $V_T(\hat\phi)$. The corresponding
inhomogeneous contribution can in turn be split in the terms proportional to
$\Phi$ and $\Psi$ as
\begin{eqnarray}
V_T(\hat\phi)\,&=&\,V^{\text{h}}_T(\hat\phi)\,+\,V^\Phi_T(\hat\phi)\,+\,V^\Psi_T(\hat\phi)
\end{eqnarray}
It is important to note that expression \eqref{effpot} defines the potential except for the addition of an arbitrary function which could depend on the space-time coordinates and the temperature. This function does not modify the dynamics of the field \eqref{KGc} since it does not introduce any dependence on $m(\hat\phi)$.
In the same fashion, the thermal contribution to the components of the energy-momentum tensor can be obtained from the expressions given in the reference \cite{Higgs} including the number of particles per mode $n_k^T$, thus
\begin{eqnarray}
\langle T^0_{\;0} (\eta,{\bf p})\rangle\,&=&\,\rho(\eta,{\bf p})\,=\,\frac{1}{(2\pi)^D}\frac{1}{a^{D+1}}\int \text{d}^D {\bf k}\,\left(\frac{1}{2}\,+\,n_k^T\right)\omega_k\left[1\,+\,2\,\frac{k^2}{\omega_k^2}\,\Psi({\bf p})\,+\,2\,P_{k}(\eta,{\bf p})\,+\,2\,i\,\frac{{\bf k\cdot p}}{\omega_k^2}\,\delta\theta_k(\eta,{\bf p})\right]\\
\langle T^i_{\;i}(\eta,{\bf p})\rangle\,&=&\,-p_i(\eta,{\bf p})\,=\,-\frac{1}{(2\pi)^D}\frac{1}{a^{D+1}}\int \text{d}^D\, {\bf k}\,\left(\frac{1}{2}\,+\,n_k^T\right)\left[\frac{k^2_i}{\omega_k}\left(1\,+\,2\,\Psi({\bf p})\,+\,2\,P_{k}(\eta,{\bf p})\vphantom{\frac{1}{1}}\right)\,+\,2\,i\,\frac{k_i\,p_i}{\omega_k}\,\delta\theta_k(\eta,{\bf p})\right]\nonumber\\ \ \\
\langle T^i_{\;0}(\eta,{\bf p})\rangle\,&=&\,\frac{1}{(2\pi)^D}\frac{1}{a^{D+1}}\int \text{d}^D {\bf k}
\left(\frac{1}{2}\,+\,n_k^T\right)\left[k_i\left(1\,+\,2\,P_k(\eta,{\bf p})\,+\,2\,i\frac{{\bf k}\cdot{\bf p}}{\omega_k^2}\delta\theta_k(\eta,{\bf p})\right)+
i\,p_i\,\delta\theta_k(\eta,{\bf p})
\right]\\
\langle T^i_{\;j}(\eta,{\bf p})\rangle\,&=&\,-\frac{1}{(2\pi)^D}\frac{1}{a^{D+1}}\int \text{d}^D {\bf k}\left(\frac{1}{2}\,+\,n_k^T\right)\left[\frac{k_i\,k_j}{\omega_k}\left(1\,+\,2\,\Psi({\bf p})\,+\,2\,P_{k}(\eta,{\bf p})\vphantom{\frac{1}{1}}\right)\,+\,i\,\frac{k_i\,p_j+k_j\,p_i}{\omega_k}\,\delta\theta_k(\eta,{\bf p})\right]\\
\langle T^\mu_{\;\mu}(\eta,{\bf p})\rangle\,&=&\,\frac{1}{(2\pi)^D}\frac{1}{a^{D+1}}\int \text{d}^D {\bf k}\,\left(\frac{1}{2}\,+\,n_k^T\right)\left[\frac{m^2}{\omega_k}\left(1\,+\,2\,P_k(\eta,{\bf p})\vphantom{\frac{1}{1}}\right)\right]\,.
\label{Tmunufull}
\end{eqnarray}
Let us divide the energy-momentum tensor, in the same way as for the potential, into a vacuum contribution, which does not depend on the number of particles per mode $n_k^T$, and a thermal contribution,
\begin{eqnarray}
\langle T^\mu_{\;\nu}(\eta,{\bf p})\rangle\,=\,\langle T^\mu_{\;\nu}(\eta,{\bf p})\rangle_{\text{vac}}\,+\,\langle T^\mu_{\;\nu}(\eta,{\bf p})\rangle_{T}\,\,.
\end{eqnarray}
each one having a homogeneous and an inhomogeneous part. It can be shown \cite{Higgs} that the energy-momentum tensor of the vacuum is given by $\langle T^\mu_{\;\nu}(\eta,{\bf p})\rangle_{\text{vac}}=\rho_{\text{vac}}\,\delta^\mu_{\;\nu}$, where the energy density $\rho_{\text{vac}}$ and pressure $p_{\text{vac}}$ are given in the $\overline{\text{MS}}$ renormalization scheme with $D=3+\epsilon$ by
\begin{eqnarray}
\rho_{\text{vac}}\,=-\,p_{\text{vac}}\,=\,\frac{m^4}{64\,\pi^2}\,\left[\log\left(\frac{m^2}{\mu^2}\right)\,-\,\frac{3}{2}\right].
\end{eqnarray}
This implies that the inhomogeneous part of the vacuum contribution is zero when dimensional regularization is used, therefore metric perturbations do not contribute to the leading adiabatic order. In this paper, we compute the homogeneous and inhomogeneous parts of $\langle T^\mu_{\;\nu}(\eta,{\bf p})\rangle_{T}$.
\section{Static space-times}
\label{sec:static}
Although the expressions for the perturbed solutions given in Appendix \ref{appsol} are valid for general perturbed FRW space-times, in this work we focus on static space-times, i.e.
we will take $a=1$ and $\Phi=\Phi({\bf x})$, $\Psi=\Psi({\bf x})$. The general case is of great interest for cosmological scenarios; nevertheless, the time dependence of the scale factor increases the complexity of the computations, making it extremely difficult to obtain analytical expressions. In addition, in order to define a thermodynamic temperature, there must be a timelike Killing vector field, namely the space-time must be static or stationary.
In order to compute the thermal contributions to $V_T(\hat\phi)$ and to the energy-momentum tensor $\langle T^\mu_{\;\nu}(\eta,{\bf p})\rangle_{T}$, our first step will be to expand the functions $P_k(\eta,{\bf p})$ and $\delta\theta_k(\eta,{\bf p})$ in powers of $p\eta$ (Appendix \ref{appexp}). These expansions allow us to identify a common structure in the integrals involved.
\subsection{Effective potential}
Taking into account \eqref{pserie} and \eqref{effpotT}, it is clear that we have to deal with the following kind of integrals
\begin{eqnarray}
\frac{1}{2}\int_0^{m^2(\hat\phi)}\text{d}m^2\int_0^{\infty}\text{d}k\,k^{D-1}\,\frac{1}{e^{\omega_k/T}-1}\,\frac{1}{\omega_k}\,\left(\frac{k}{\omega_k}\right)^{2\alpha}\,\left(\frac{m}{\omega_k}\right)^{2n}\hspace{0.5cm}\alpha=0,1,2,...\hspace{0.5cm}n=0,1,2
\end{eqnarray}
to compute the finite temperature correction to the effective potential.
It is convenient to use the dimensionless variables $u=\omega_k/T$ and $x=m/T$ instead of $k$ and $m$ respectively. In terms of these new variables the integral reduces to (extracting a global factor $T^{D+1}$)
\begin{eqnarray}
I^X_{\alpha,n}\,&\equiv&\,\int_0^{X}\,\text{d}x\int_x^{\infty}\text{d}u\,\frac{1}{e^{u}-1}\frac{x^{1+2n}}{u^{2\alpha+2n}}\,\left(u^2-x^2\right)^{D/2+\alpha-1}
\end{eqnarray}
where $X\equiv m(\hat\phi)/T$.
It is also useful to interchange the order of integration of this integral and divide it in the following way
\begin{eqnarray}
I^X_{\alpha,n}\,=\,\left(\int_0^{X}\text{d}u\int_0^{u}\text{d}x\,+\,\int_X^{\infty}\text{d}u\int_0^{X}\text{d}x\right)\left(\frac{1}{e^{u}-1}\frac{x^{1+2n}}{u^{2\alpha+2n}}\,\left(u^2-x^2\right)^{D/2+\alpha-1}\right)
\label{div}
\end{eqnarray}
where the first part takes into account the contribution from modes with energies below the mass of the field while the second part includes the contribution from modes with energies above the mass of the field.
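The interchange of integration order in \eqref{div} can be verified numerically for a particular case. The sketch below (Python with SciPy; variable names are ours) takes $D=3$, $\alpha=n=0$, $X=1$ and compares the original ordering $\int_0^X\text{d}x\int_x^\infty\text{d}u$ with the split form.

```python
import numpy as np
from scipy.integrate import dblquad

X = 1.0
# integrand for D = 3, alpha = n = 0: x sqrt(u^2 - x^2) / (e^u - 1)
f = lambda u, x: x * np.sqrt(u**2 - x**2) / np.expm1(u)

# original ordering: x in [0, X], u in [x, inf)
direct = dblquad(lambda u, x: f(u, x), 0, X, lambda x: x, np.inf)[0]

# split form: modes below the mass scale (u < X) plus modes above it (u > X)
part1 = dblquad(lambda x, u: f(u, x), 0, X, 0, lambda u: u)[0]
part2 = dblquad(lambda x, u: f(u, x), X, np.inf, 0, X)[0]
```

Both orderings should agree to quadrature precision.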
\subsection{Energy-momentum tensor}
To compute the energy-momentum tensor, the following integrals appear
\begin{eqnarray}
\int_0^{\infty}\text{d}k\,k^{D-1}\,\frac{1}{e^{\omega_k/T}-1}\,\omega_k\,\left(\frac{k}{\omega_k}\right)^{2\alpha}\,\left(\frac{m}{\omega_k}\right)^{2n}\hspace{0.5cm}\alpha=0,1,2,...\hspace{0.5cm}n=0,1,2
\end{eqnarray}
Using the same dimensionless variables $u$ and $x$ we get (also extracting a global factor $T^{D+1}$)
\begin{eqnarray}
\label{intTmunu}
J^X_{\alpha,n}\,&\equiv&\,\int_X^{\infty}\text{d}u\,\frac{1}{e^{u}-1}\frac{X^{2n}}{u^{2\alpha+2n-2}}\,\left(u^2-X^2\right)^{D/2+\alpha-1}
\end{eqnarray}
Only modes with energies above the mass of the field contribute to the energy-momentum tensor.
In the following we compute the integrals $I^X_{\alpha,n}$ \eqref{div} and $J^{X}_{\alpha,n}$ \eqref{intTmunu} in the non-relativistic and the ultra-relativistic limits.
\section{Non-relativistic limit}
\label{sec:low}
\subsection{Effective potential}
In the non-relativistic limit $m(\hat\phi)/T\rightarrow \infty$ (or $X\rightarrow \infty$), the contribution from modes with energies above the mass of the field is exponentially damped because of the Bose-Einstein factor, hence the leading contribution in the non-relativistic limit is given by the first part of \eqref{div} when taking $X=\infty$
\begin{eqnarray}
I^{\infty}_{\alpha,n}\,&=&\,\int_0^{\infty}\,\text{d}u\int_0^{u}\text{d}x\,\frac{1}{e^{u}-1}\frac{x^{1+2n}}{u^{2\alpha+2n}}\,\left(u^2-x^2\right)^{D/2+\alpha-1}\,\nonumber\\
&=&\,\frac{\Gamma(D/2+\alpha)\,n!}{2\,\Gamma(D/2+\alpha+n+1)}\,D!\,\zeta(D+1)
\label{tinf}
\end{eqnarray}
where $\zeta(x)$ is the Riemann Zeta function.
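This closed form can be cross-checked by direct numerical integration. A minimal sketch (Python with SciPy; function names are ours), testing two parameter choices at $D=3$; note that for $(\alpha,n)=(0,0)$ the closed form reduces to $\pi^4/45$.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma, zeta, factorial

def I_inf_closed(D, alpha, n):
    """Closed form: Gamma(D/2+a) n! / (2 Gamma(D/2+a+n+1)) * D! * zeta(D+1)."""
    return (gamma(D / 2 + alpha) * factorial(n)
            / (2 * gamma(D / 2 + alpha + n + 1)) * factorial(D) * zeta(D + 1))

def I_inf_numeric(D, alpha, n):
    """Direct double integral over 0 < x < u < infinity."""
    f = lambda x, u: (x**(1 + 2 * n) / (np.expm1(u) * u**(2 * alpha + 2 * n))
                      * (u**2 - x**2)**(D / 2 + alpha - 1))
    return dblquad(f, 0, np.inf, 0, lambda u: u)[0]
```

The inner $x$-integral is a Beta function, which is how the closed form arises.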
Therefore, using expression \eqref{effpotT} together with the result for the integral \eqref{tinf} and the expansion of $\hat P_k(\eta,{\bf p})$ \eqref{pserie}, we obtain (assuming $D=3$) for the leading contributions after resummation of the series
in $p\eta$
\begin{eqnarray}
V^{h}_{T(L)}(\hat\phi)\,&=&\,\frac{\pi^2}{90}T^4\\
V^{\Phi}_{T(L)}(\hat\phi)\,&=&\,\frac{\pi^2}{90}T^4\,\Phi({\bf p})\,\times\,4\left[3\frac{\sin(p\,\eta)}{(p\,\eta)^3}-3\frac{\cos(p\,\eta)}{(p\,\eta)^2}-1\right]
\label{resultstinfT}\\
V^{\Psi}_{T(L)}(\hat\phi)\,&=&\,\frac{\pi^2}{90}T^4\,\Psi({\bf p})\,\times\,12\,\left[\left(\frac{6}{(p\,\eta)^4}-\frac{1}{(p\,\eta)^2}\right)\cos(p\,\eta)+\left(\frac{3}{(p\,\eta)^3}-\frac{6}{(p\,\eta)^5}\right)\sin(p\,\eta)\right].
\label{resultstinfS}
\end{eqnarray}
Note that there is no dependence on the field (which would enter through the mass term $m(\hat\phi)$). Therefore these expressions do not affect the field dynamics and can be neglected.
On the other hand, even though we are considering static backgrounds, there is an explicit time dependence of the result. This can be traced back to the particular mode choice
in (\ref{Pk}) and (\ref{thetak}). In particular, taking the $\eta\rightarrow \infty$
limit, which corresponds to setting initial conditions for the modes in the remote past, we recover static results for the effective potential.
In the static limit $\eta\rightarrow\infty$, the following expression is obtained
\begin{eqnarray}
V_{T(L)}(\hat\phi)\,=\,\frac{\pi^2}{90}\,T^4\,\left(1\,-\,4\,\Phi({\bf p})\right)\,.
\end{eqnarray}
It can be shown that the leading inhomogeneous effect in the static limit only depends on the $\Phi$ potential and in fact it can be obtained from the homogeneous result replacing the temperature by the local Tolman temperature \cite{Tolman}
\begin{eqnarray}
T_{\text{Tolman}}\,=\,\frac{T}{\sqrt{g_{00}}}\,\simeq\,T\left(1-\,\Phi({\bf p})\right)\,.
\label{Tolman}
\end{eqnarray}
Notice however that in the results for finite time given
in \eqref{resultstinfT} and \eqref{resultstinfS}, the explicit time dependence of the effective potential prevents the
introduction of a Tolman temperature.
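To first order in $\Phi$, the Tolman substitution indeed reproduces the $(1-4\Phi)$ factor of the static result. A minimal numerical sketch (Python; function names are ours), using $g_{00}=1+2\Phi$ for the static metric with $a=1$:

```python
import numpy as np

def V_homog(T):
    """Leading homogeneous thermal contribution pi^2 T^4 / 90 (D = 3)."""
    return np.pi**2 / 90 * T**4

def V_tolman(T, Phi):
    """Homogeneous result evaluated at the local Tolman temperature T / sqrt(g00)."""
    return V_homog(T / np.sqrt(1 + 2 * Phi))

def V_static(T, Phi):
    """First-order static result (pi^2/90) T^4 (1 - 4 Phi)."""
    return np.pi**2 / 90 * T**4 * (1 - 4 * Phi)
```

The difference between `V_tolman` and `V_static` is $O(\Phi^2)$, while each differs from the homogeneous value at $O(\Phi)$.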
The next-to-leading correction, $V^{(NL)}_T$, including terms ${\mathcal O}(T/m(\hat\phi))$, can be obtained by applying a modified version of Laplace's method to the following integral\footnote{The symbol $\simeq$ stands for an approximation in the Taylor sense, while $\sim$ stands for an asymptotic approximation, namely the quotient between both results equals $1$ in the appropriate limit.}
\begin{eqnarray}
I^{X}_{\alpha,n}\,-\,I^{\infty}_{\alpha,n}\,&\simeq&\,-\int_X^{\infty}\text{d}u\int_X^{u}\text{d}x\,e^{-u}\,\frac{x^{1+2n}}{u^{2\alpha+2n}}\,\left(u^2-x^2\right)^{D/2+\alpha-1}\nonumber\\&=&\,\int_X^{\infty}\text{d}u\,\frac{1}{2}\,e^{-u}\,u^D\,\left[B_{X^2/u^2}\left(1+n,D/2+\alpha\right)\,-\,\frac{\Gamma(1+n)\,\Gamma(D/2+\alpha)}{\Gamma(D/2+\alpha+n+1)}\right]
\end{eqnarray}
where we have replaced the Bose-Einstein factor by the Boltzmann factor. $B_z(a,b)$ is the incomplete Beta function. When $u/X\gg 1$, the integrand is exponentially damped as $e^{-u/X}$. Then, we Taylor expand the expression inside the brackets around $X^2/u^2=1$ to obtain
\begin{eqnarray}
I^{X}_{\alpha,n}\,-\,I^{\infty}_{\alpha,n}\,\,&{\sim}&\,-\int_X^{\infty}\text{d}u\,\frac{1}{D+2\alpha}\,e^{-u}\,u^D\,\left(1-\frac{X^2}{u^2}\right)^{D/2+\alpha}\nonumber\\&=&\,-\int_X^{\infty}\text{d}u\,\frac{1}{D+2\alpha}\,\exp\left[-u\,+\,D\,\log(u)\,+\,\left(\frac{D}{2}+\alpha\right)\,\log\left(1-\frac{X^2}{u^2}\right)\right]\,.
\end{eqnarray}
The expression inside the exponential has a maximum at $u\sim X$ when $X\rightarrow\infty$.\footnote{Here we are dropping a term linear in $\alpha$ in the expression for the maximum. This means that we cannot allow $\alpha\rightarrow \infty$. Since $\alpha$ is related to the order of the expansion in $p\,\eta$,
the results are only valid if the series appearing in \eqref{pserie} is truncated at some order such that $\alpha \ll X$. Although it could be done for arbitrary $\alpha$, it would not be very useful if the expression cannot be resummed. Nevertheless, it will be shown that the $l$-th term is suppressed by a factor $1/X^l$, so only the first terms are relevant in this limit ($X\rightarrow\infty$).} Taylor expanding the argument of the exponential around $u=X$ up to order $O(u)$ [including the logarithmic divergence], the integration in $u$ can be performed to get the following result
\begin{eqnarray}
I^{X}_{\alpha,n}\,-\,I^{\infty}_{\alpha,n}\,\,&{\sim}&\,-2^{3(D/2+\alpha)+1}\,\Gamma(D/2+\alpha)\,\,e^{-X}\,\frac{X^{D+1}}{(4\,X-D+6\alpha)^{D/2+\alpha+1}}\,\nonumber\\&\sim &\,-2^{D/2+\alpha-1}\,\Gamma(D/2+\alpha)\,\,e^{-X}\,X^{D/2-\alpha}\,
\label{finalT0}
\end{eqnarray}
which does not depend on $n$. Because of the factor $X^{D/2-\alpha}$ in the last expression, the expansion in $p\eta$ mixes with the expansion in $X(=m(\hat{\phi})/T)$.
Finally, the next-to-leading contribution to the potential for $p\eta\ll 1$ is given by
\begin{eqnarray}
V_{T(NL)}(\hat \phi)=\,-\,\frac{T^4}{2\sqrt{2}\,\pi^{3/2}}\,e^{-m(\hat\phi)/T}\,\left(\frac{m(\hat\phi)}{T}\right)^{3/2}\left(1\,+\,3\,\Psi({\bf p})\,-\,\frac{(p\eta)^2}{2}\,\Phi({\bf p})\,\right).
\label{correctionT0}
\end{eqnarray}
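The homogeneous part of \eqref{correctionT0} can be checked against a direct mode sum. The sketch below (Python with SciPy; variable names are ours) uses the flat-background $D=3$ form of \eqref{effpotT} with $a=1$ and $\hat P_k=0$, and compares the deficit $\pi^2T^4/90 - V^{\text{h}}_T$ with the next-to-leading formula at $X = m(\hat\phi)/T = 12$; agreement is expected only up to $O(T/m(\hat\phi))$ corrections.

```python
import numpy as np
from scipy.integrate import quad

T, m = 1.0, 12.0  # X = m/T = 12, non-relativistic regime

def dV_dm2(m2):
    """(1/4 pi^2) int dk k^2 / (omega_k (e^{omega_k/T} - 1)), flat background."""
    f = lambda k: k**2 / (np.sqrt(k**2 + m2) * np.expm1(np.sqrt(k**2 + m2) / T))
    return quad(f, 0, np.inf, epsabs=1e-13, epsrel=1e-10)[0] / (4 * np.pi**2)

# homogeneous thermal potential V_T^h = int_0^{m^2} dm'^2 dV/dm'^2
V_h = quad(dV_dm2, 0, m**2, epsabs=1e-12, epsrel=1e-10)[0]
deficit = np.pi**2 / 90 * T**4 - V_h

# homogeneous next-to-leading term of eq. (correctionT0)
V_NL = T**4 / (2 * np.sqrt(2) * np.pi**1.5) * np.exp(-m / T) * (m / T)**1.5
```

The ratio `deficit / V_NL` approaches $1$ as $X\rightarrow\infty$, with a positive $O(1/X)$ correction.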
A better approximation for smaller values of $X$ is obtained if we do not drop $\alpha$ in the denominator in \eqref{finalT0}. This improved approximation is shown in Figure \ref{fig:Vinfinity} (right panel). It is important to note that each order in $(p\eta)$ is suppressed by a factor $(T/m(\hat \phi))$ with respect to the previous order, because of the mixing discussed above. For instance, the correction proportional to $\Psi$ does not depend on $(p\eta)$ to leading order in $(T/m(\hat \phi))$ [see eq. \ref{correctionT0}], then the dependence on $(p\eta)^2$ proportional to $\Psi$ is suppressed by a factor $(T/m(\hat \phi))$ with respect to the $(p\eta)^2$ correction proportional to $\Phi$, as shown in Figure \ref{fig:Vinfinity} (right panel).
Because of the mixing between the expansion in $X(=m(\hat{\phi})/T)$ and $p\eta$ we cannot obtain a result valid for arbitrary scales $p$ and times $\eta$. However, it is possible to obtain the static result by taking the limit $\eta\rightarrow\infty$ directly on \eqref{effpotT}. According to this procedure, we get
\begin{eqnarray}
V_{T(NL)}(\hat \phi)=\,-\,\frac{T^4}{2\sqrt{2}\,\pi^{3/2}}\,e^{-m(\hat\phi)/T}\,\left(\frac{m(\hat\phi)}{T}\right)^{3/2}\left(1\,-\,\frac{35}{8}\,\Phi({\bf p})\,-\,\left(\frac{m(\hat\phi)}{T}\right)\Phi({\bf p})\right).
\label{NLNR}
\end{eqnarray}
As can be checked in a straightforward way from \eqref{NLNR}, also for the next-to-leading contribution in the static limit the inhomogeneous correction can be obtained from the homogeneous result by replacing the temperature with the Tolman temperature \eqref{Tolman}.
\begin{figure}
\begin{center}
{\includegraphics[width=0.48\textwidth]{V_infinity.eps}\hspace{0.5cm}\includegraphics[width=0.48\textwidth]{V_infinity_residuals.eps}}
\caption {\footnotesize Left panel: Points correspond to the numerical value of the thermal contributions to the potential proportional to $\Phi$ and $\Psi$ taking $m(\hat\phi)/T=10$, whereas the solid lines represent the leading approximations \eqref{resultstinfT} and \eqref{resultstinfS}. Right panel: Points are the difference between the numerical values of the
potential and the approximations \eqref{resultstinfT} (blue points) and \eqref{resultstinfS} (black points) for $m(\hat\phi)/T=10$. The next-to-leading correction for $m(\hat\phi)/T=10$ up to $(p\eta)^2$ (dashed lines) and $(p\eta)^{30}$ (solid lines) is shown for comparison.
}
\label{fig:Vinfinity}
\end{center}
\end{figure}
\subsection{Energy-momentum tensor}
The leading order of the energy-momentum tensor is already exponentially damped, since only modes with energies above the mass of the field contribute. We write the integral \eqref{intTmunu} as
\begin{eqnarray}
J^{X}_{\alpha,n}\,{\simeq}\,\int_X^{\infty}\text{d}u\,e^{-u}\,\frac{X^{2n}}{u^{2\alpha+2n-2}}\,\left(u^2-X^2\right)^{D/2+\alpha-1}
\end{eqnarray}
where the Bose-Einstein factor has been replaced by the Boltzmann factor. Applying Laplace's method again we get
\begin{eqnarray}
\label{JXinf}
J^{X}_{\alpha,n}\,&{\simeq}&\,X^{2n}\,\int_X^{\infty}\text{d}u\,\exp\left[-u\,-2\left(\alpha+n-1\right)\,\log\left(u\right)\,+\,\left(\frac{D}{2}+\alpha-1\right)\log\left(u^2\,-\,X^2\right)\right]\nonumber\\
&\sim & 2^{3(D/2+\alpha)-1 }\,\Gamma(D/2+\alpha)\,\,e^{-X}\,\frac{X^{D+1}}{(4\,X-D+6\alpha+8n-6)^{D/2+\alpha}}\,\nonumber\\&\sim &\,2^{D/2+\alpha-1}\,\Gamma(D/2+\alpha)\,\,e^{-X}\,X^{D/2-\alpha+1}\,
\end{eqnarray}
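The accuracy of this asymptotic formula can be probed numerically. A minimal sketch (Python with SciPy; names are ours) for $D=3$, $\alpha=n=0$ at $X=30$, using the Boltzmann form with $e^{-X}$ factored out via the substitution $u = X + t$; the leading Laplace result should be accurate up to $O(1/X)$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

D, alpha, X = 3, 0, 30.0

# J^X_{0,0} with Boltzmann factor, u = X + t:
#   e^{-X} int_0^inf (X+t)^2 sqrt(t (2X+t)) e^{-t} dt
integrand = lambda t: (X + t)**2 * np.sqrt(t * (2 * X + t)) * np.exp(-t)
J_num = np.exp(-X) * quad(integrand, 0, np.inf)[0]

# leading Laplace asymptotics: 2^{D/2+alpha-1} Gamma(D/2+alpha) e^{-X} X^{D/2-alpha+1}
J_asym = (2**(D / 2 + alpha - 1) * gamma(D / 2 + alpha)
          * np.exp(-X) * X**(D / 2 - alpha + 1))
```

Factoring out $e^{-X}$ keeps the quadrature well scaled despite the exponential damping.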
Then, taking into account the expressions given in Section \ref{sec:back} and the result \eqref{JXinf}, the energy-momentum tensor for $p\eta\ll1$ is given by
\begin{eqnarray}
\rho_T\,\equiv\,\langle T^0_{\;0}(\eta,{\bf p})\rangle_{T}\,&{\sim}&\,\frac{T^4}{2\sqrt{2}\,\pi^{3/2}}\,e^{-m(\hat\phi)/T}\,\left(\frac{m(\hat\phi)}{T}\right)^{5/2}\left(1\,+\,3\Psi({\bf p})\,-\,\frac{(p\eta)^2}{2}\,\Phi({\bf p})\right)\\
-p_T\,\equiv\,\langle T^i_{\;i}(\eta,{\bf p})\rangle_{T}\,&{\sim}&\,-\frac{T^4}{2\sqrt{2}\,\pi^{3/2}}\,e^{-m(\hat\phi)/T}\,\left(\frac{m(\hat\phi)}{T}\right)^{3/2}\left(1\,+\,5\Psi({\bf p})\,-\,\frac{5(p\eta)^2}{6}\,\Phi({\bf p})\right)\,\\
\langle T^i_{\;0}(\eta,{\bf p})\rangle_{T}\,&{\sim}&\,-\frac{T^4}{2\sqrt{2}\,\pi^{3/2}}\,e^{-m(\hat\phi)/T}\,\left(\frac{m(\hat\phi)}{T}\right)^{5/2}\,(ip_i)\,\eta\,\Phi({\bf p})\\
\langle T^i_{\;j}(\eta,{\bf p})\rangle_{T}\,&{\sim}&\,-\frac{T^4}{2\sqrt{2}\,\pi^{3/2}}\,e^{-m(\hat\phi)/T}\,\left(\frac{m(\hat\phi)}{T}\right)^{3/2}(i^2p_ip_j)\,\eta^2\,\Phi({\bf p})\hspace{1cm}i\neq j
\end{eqnarray}
where $\rho_T$ and $p_T$ are the energy density and pressure produced by the thermal corrections. We have only retained the leading order in $m(\hat \phi)/T$.
Further corrections ${\mathcal O}((p\eta)^{2l})$ are suppressed by a factor $(m(\hat \phi)/T)^l$.
In the non-relativistic case, it is not possible to take the static limit in the final expressions since we only have the results for $p\eta\ll1$ as discussed before. However, the static expression can be obtained by taking the static limit in the original expressions \eqref{Tmunufull}
\begin{eqnarray}
\rho_T\,\equiv\,\langle T^0_{\;0}(\eta,{\bf p})\rangle_{T}\,&{\sim}&\,\,\frac{T^4}{2\sqrt{2}\,\pi^{3/2}}\,e^{-m(\hat\phi)/T}\,\left(\frac{m(\hat\phi)}{T}\right)^{5/2}\left(1\,-\,\frac{39}{8}\,\Phi({\bf p})\,-\,\left(\frac{m(\hat\phi)}{T}\right)\Phi({\bf p})\right)\\
-p_T\,\equiv\,\langle T^i_{\;i}(\eta,{\bf p})\rangle_{T}\,&{\sim}&\,-\,\frac{T^4}{2\sqrt{2}\,\pi^{3/2}}\,e^{-m(\hat\phi)/T}\,\left(\frac{m(\hat\phi)}{T}\right)^{3/2}\left(1\,-\,\frac{35}{8}\,\Phi({\bf p})\,-\,\left(\frac{m(\hat\phi)}{T}\right)\Phi({\bf p})\right).
\end{eqnarray}
Once again, in the static limit, the inhomogeneous corrections depend only on
the $\Phi$ potential and can be obtained from the homogeneous
ones by introducing the Tolman temperature.
\section{Ultra-relativistic limit}
\label{sec:high}
\subsection{Effective potential}
In the ultra-relativistic limit, $m(\hat\phi)/T\rightarrow 0$ (or $X\rightarrow 0$), the dominant contribution comes from modes with energies higher than the mass of the field. Therefore, the second part of \eqref{div} gives
\begin{eqnarray}
I^{X}_{\alpha,n}\,&{\simeq}&\,\int_X^{\infty}\text{d}u\int_0^{X}\text{d}x\,\frac{1}{e^{u}-1}\frac{x^{1+2n}}{u^{2\alpha+2n}}\,\left(u^2-x^2\right)^{D/2+\alpha-1}\nonumber\\&=&\,\int_X^{\infty}\text{d}u\,\frac{1}{2}\,\frac{u^D}{e^{u}-1}\,B_{X^2/u^2}(1+n,\,D/2+\alpha)\nonumber\\&{\simeq}&\int_X^{\infty}\text{d}u\,\frac{1}{2}\,\frac{u^{D-2n-2}}{e^{u}-1}\,\frac{X^{2+2n}}{1+n}
\label{IPot}
\end{eqnarray}
where we have expanded the incomplete Beta function $B_z(a,b)$ for $X\ll 1$ in the last line. The leading contribution comes from $n=0$. Replacing the lower limit of integration by 0 we get in that limit
\begin{eqnarray}
I^{X}_{\alpha,0}\,&{\simeq}&\,\int_0^{\infty}\text{d}u\,\frac{1}{2}\,\frac{u^{D-2}}{e^{u}-1}\,X^{2}\nonumber\\
&=&\frac{1}{2}\,\Gamma(D-1)\,\text{Li}_{D-1}(1)\,X^2
\label{t0}
\end{eqnarray}
where $\text{Li}_n(z)$ is the polylogarithm function.
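This small-$X$ behavior is easy to confirm numerically. A minimal sketch (Python with SciPy; variable names are ours) for $D=3$, $\alpha=n=0$, with the inner $x$-integral of \eqref{div} done analytically; for $D=3$ the leading result is $\pi^2 X^2/12$, with a relative correction of order $X$.

```python
import numpy as np
from scipy.integrate import quad

X = 0.05  # m/T, deep ultra-relativistic regime (D = 3, alpha = n = 0)

# inner x-integral in closed form: int_0^b x sqrt(u^2 - x^2) dx = (u^3 - (u^2 - b^2)^{3/2}) / 3
low = quad(lambda u: u**3 / (3 * np.expm1(u)), 0, X)[0]
high = quad(lambda u: (u**3 - (u**2 - X**2)**1.5) / (3 * np.expm1(u)), X, np.inf)[0]
I_num = low + high

I_lead = np.pi**2 * X**2 / 12  # (1/2) Gamma(D-1) Li_{D-1}(1) X^2 at D = 3
```

The next term in the expansion is negative, so `I_num` undershoots the leading formula slightly.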
Therefore, from \eqref{div}, using the expansion of $\hat P_k(\eta,{\bf p})$ in \eqref{pserie} and the result \eqref{t0}, we can resum the series to obtain
the leading contribution
\begin{eqnarray}
V^{h}_{T(L)}(\hat\phi)\,&=&\,\frac{T^4}{24}\left(\frac{m(\hat\phi)}{T}\right)^2\\
V^{\Phi}_{T(L)}(\hat\phi)\,&=&\,\frac{T^4}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\,\Phi({\bf p})\,\times\left(\frac{\sin(p\,\eta)}{p\,\eta}\,-\,1\right)
\label{resultst0T}\\
V^{\Psi}_{T(L)}(\hat\phi)\,&=&\,\frac{T^4}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\,\Psi({\bf p})\,\times\left(\frac{\sin(p\,\eta)}{p\,\eta}\right)
\label{resultst0S}
\end{eqnarray}
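The homogeneous leading term can also be checked directly from the mode-sum definition \eqref{effpotT}. A minimal sketch (Python with SciPy; names are ours; flat $D=3$ background with $a=1$, $\hat P_k=0$) at $X=m(\hat\phi)/T=0.1$, where the expected value is $m^2T^2/24$ up to a relative correction $\sim 2X/\pi$:

```python
import numpy as np
from scipy.integrate import quad

T, m = 2.0, 0.2  # X = m/T = 0.1, ultra-relativistic regime

def dV_dm2(m2):
    """(1/4 pi^2) int dk k^2 / (omega_k (e^{omega_k/T} - 1)), flat background."""
    f = lambda k: k**2 / (np.sqrt(k**2 + m2) * np.expm1(np.sqrt(k**2 + m2) / T))
    return quad(f, 0, np.inf)[0] / (4 * np.pi**2)

V_h = quad(dV_dm2, 0, m**2)[0]
V_lead = T**4 / 24 * (m / T)**2  # = m^2 T^2 / 24
```

The first subleading term, $-m^3T/(12\pi)$, is negative, so the numerical value lies slightly below the leading formula.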
The explicit time dependence of the general results obtained in a static metric can be traced back to the initial conditions of the modes. Taking the limit $\eta\rightarrow\infty$
in (\ref{resultst0T}) and (\ref{resultst0S}), the initial conditions are washed out and the remaining correction in Fourier space is
\begin{eqnarray}
V_{T(L)}(\hat\phi)\,&=&\,\frac{T^4}{24}\left(\frac{m(\hat\phi)}{T}\right)^2\left(1\,-\,2\,\Phi({\bf p})\vphantom{\frac{1}{1}}\right)\,.
\label{staticp}
\end{eqnarray}
In this case we can also obtain the inhomogeneous result by replacing the temperature by the local Tolman temperature \eqref{Tolman} in the homogeneous result.
To get the real space result in the static limit, one has to compute the Fourier transform of the complete expression and then take the static limit $\eta\rightarrow\infty$. Following this procedure, it is possible to get the real space result for an arbitrary perturbation (see Appendix \ref{apppoles}), which reads
\begin{eqnarray}
V_{T(L)}(\hat\phi)\,&=&\,\frac{T^4}{24}\left(\frac{m(\hat\phi)}{T}\right)^2\left(1\,-\,2\,\Phi({\bf r})\vphantom{\frac{1}{1}}\right)\,.
\label{staticr}
\end{eqnarray}
Therefore, as expected, the static limit and the Fourier transform commute (compare \eqref{staticp} and \eqref{staticr}). This is a general conclusion for the functions in Fourier space appearing in this paper due to the results of Appendix \ref{apppoles}.
In real space, the corrections due to Newtonian perturbations $\Phi_N({\bf p})$ and $\Psi_N({\bf p})$ given by
\begin{eqnarray}
\Phi_N({\bf p})\,&=&\,\Psi_N({\bf p})=-4\,\pi\frac{GM}{p^2}\\
\Phi_N({\bf r})\,&=&\,\Psi_N({\bf r})=-\frac{GM}{r}\,,
\end{eqnarray}
inside the lightcone ($r<|\eta|$) are
\begin{eqnarray}
V^{\Phi_N}_{T(L)}(\hat\phi)\,&=&\,\frac{T^4}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\,\Phi_N({\bf r})\,\times\left(\frac{r}{|\eta|}-1\right)\\
V^{\Psi_N}_{T(L)}(\hat\phi)\,&=&\,\frac{T^4}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\,\Psi_N({\bf r})\,\times\left(\frac{r}{|\eta|}\,\right)
\label{resultst0real}
\end{eqnarray}
while on and outside the lightcone ($r\geq|\eta|$) are
\begin{eqnarray}
V^{\Phi_N}_{T(L)}(\hat\phi)\,&=&\,0\\
V^{\Psi_N}_{T(L)}(\hat\phi)\,&=&\,\frac{T^4}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\,\Psi_N({\bf r})\,.
\label{resultst0realout}
\end{eqnarray}
The next-to-leading order corrections can be obtained by computing the first part of equation \eqref{div} plus next-to-leading terms coming from equation \eqref{IPot} (see Appendix \ref{appnexto}). Finally, after resummation of the series expansion \eqref{pserie}, we get for $V_{T(NL)}$ (up to $O((m/T)^5)$)
\begin{eqnarray}
V^{h}_{T(NL)}(\hat\phi)\,&{=}&\,T^4\left(\frac{m(\hat\phi)}{T}\right)^3\left[-\frac{1}{12\,\pi}\,+\,\frac{1}{32\,\pi^2}\left(\frac{m(\hat\phi)}{T}\right)\left(\log\left(\frac{T}{M}\right)\,+\,\frac{3}{4}\,-\,\gamma\,+\,\log(4\,\pi)\,
\right)\right]\\
V^{\Phi}_{T(NL)}(\hat\phi)\,&{=}&\,T^4\,\left(\frac{m(\hat\phi)}{T}\right)^3\,\Phi({\bf p})\left[-\frac{1}{12\,\pi}
\,\left(J_0(p\,\eta)\,-\,1\right)\,+\,\frac{1}{32\,\pi^2}\left(\frac{m(\hat\phi)}{T}\right)
\,\left(\cos(p\,\eta)\,-\,1\right)\,+\,\frac{1}{720\,\pi}\left(\frac{m(\hat\phi)}{T}\right)^2
\,p\,\eta\,J_1(p\,\eta)\right]\nonumber\\
\label{resultst1T}\\
V^{\Psi}_{T(NL)}(\hat\phi)\,&{=}&\,T^4\,\left(\frac{m(\hat\phi)}{T}\right)^3\Psi({\bf p})\left[-\frac{1}{12\,\pi}
\,\left(J_0(p\,\eta)\,+\,\frac{J_1(p\,\eta)}{p\,\eta}\right)\,+\,\frac{1}{32\,\pi^2}\left(\frac{m(\hat\phi)}{T}\right)
\,\left(\cos(p\,\eta)\,+\,\frac{2\,\sin{p\,\eta}}{p\,\eta}\right)\,+\,\right.\nonumber\\&&+\,\left.\frac{1}{720\,\pi}\left(\frac{m(\hat\phi)}{T}\right)^2
\,\left(p\,\eta\,J_1(p\,\eta)\,-\,3\,J_0(p\,\eta)\right)\right]
\label{resultst1S}
\end{eqnarray}
where $\gamma$ is the Euler-Mascheroni constant and $J_n(x)$ are Bessel functions of the first kind.
Considering Newtonian perturbations $\Phi_N$ and $\Psi_N$, in real space we get for the region inside the lightcone ($r<|\eta|$)
\begin{eqnarray}
V^{\Phi_N}_{T(NL)}(\hat\phi)\,&{=}&\,T^4\left(\frac{m(\hat\phi)}{T}\right)^3
\,\Phi_N({\bf r})\,\times\left[\frac{1}{12\,\pi}\,-\,\frac{1}{6\,\pi^2}\arcsin\left(\frac{r}{|\eta|}\right)\right]\\
V^{\Psi_N}_{T(NL)}(\hat\phi)\,&{=}&\,T^4\left(\frac{m(\hat\phi)}{T}\right)^3
\,\Psi_N({\bf r})\,\times\left[-\frac{1}{12\,\pi^2}\frac{r}{\eta}\sqrt{1-\frac{r^2}{\eta^2}}\,-\,\frac{1}{4\,\pi^2}\arcsin\left(\frac{r}{|\eta|}\right)\right]\,,
\label{resultst1real}
\end{eqnarray}
and outside and on the lightcone ($r\geq|\eta|$)
\begin{eqnarray}
V^{\Phi_N}_{T(NL)}(\hat\phi)\,&{=}&\,0\\
V^{\Psi_N}_{T(NL)}(\hat\phi)\,&{=}&\,T^4\left(\frac{m(\hat\phi)}{T}\right)^3
\,\Psi_N({\bf r})\,\times\left[-\frac{1}{8\,\pi}\right]\,.
\label{resultst1realout}
\end{eqnarray}
Here for simplicity we have only shown the ${\mathcal O}(m/T)^3$ contributions.
In the static limit $\eta\rightarrow\infty$ one gets
\begin{eqnarray}
V_{T(NL)}(\hat\phi)\,&{=}&\,-\frac{T^4}{12\,\pi}\left(\frac{m(\hat\phi)}{T}\right)^3\,\left(\vphantom{\frac{1}{1}}1\,-\,\Phi({\bf p})\right)\,,
\label{resultst1p}
\end{eqnarray}
which is also valid in real space replacing $\Phi({\bf p})$ by $\Phi({\bf r})$ (see Appendix \ref{apppoles}). Here again we find that the inhomogeneous result can be obtained by replacing the temperature in the homogeneous contribution by the local Tolman temperature.
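The Tolman-temperature statement can be verified symbolically: substituting $T\to T(1-\Phi)$ in the homogeneous leading and next-to-leading terms and expanding to first order in $\Phi$ reproduces \eqref{staticp} and \eqref{resultst1p}. A minimal sketch:

```python
import sympy as sp

T, m, Phi = sp.symbols('T m Phi', positive=True)
T_loc = T * (1 - Phi)  # local Tolman temperature, eq. (Tolman)

# Homogeneous leading term (T^4/24)(m/T)^2 at the Tolman temperature,
# expanded to first order in the perturbation Phi
lead = sp.series(T_loc**4 / 24 * (m / T_loc)**2, Phi, 0, 2).removeO()
target_lead = T**4 / 24 * (m / T)**2 * (1 - 2 * Phi)        # eq. (staticp)

# Homogeneous next-to-leading term -(T^4/(12 pi))(m/T)^3 at the Tolman
# temperature; this one is linear in Phi, so no truncation is needed
nlo = sp.expand(-T_loc**4 / (12 * sp.pi) * (m / T_loc)**3)
target_nlo = -T**4 / (12 * sp.pi) * (m / T)**3 * (1 - Phi)  # eq. (resultst1p)
```
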
\begin{figure}
\begin{center}
{\includegraphics[width=0.49\textwidth]{V_0_T+S.eps}\hspace{0.2cm}\includegraphics[width=0.49\textwidth]{V_0_residuals0.eps}}
{\includegraphics[width=0.49\textwidth]{V_0_residuals1.eps}\hspace{0.2cm}\includegraphics[width=0.49\textwidth]{V_0_residuals2.eps}}
\caption {\footnotesize Left upper panel: Points show the numerical value of the thermal contribution to the potential taking $m(\hat\phi)/T=0.1$ and the continuous line corresponds to the approximations in \eqref{resultst0T} and \eqref{resultst0S}. Right upper panel: Difference between the
numerical value of the potential for $m(\hat\phi)/T=0.1$ and the approximations \eqref{resultst0T} (blue points) or \eqref{resultst0S} (black points). The next-to-leading corrections ($O((m/T)^3)$) given by \eqref{resultst1T} (green solid line) and \eqref{resultst1S} (red solid line) are also shown for $m(\hat\phi)/T=0.1$. Left bottom panel: Difference between the numerical value and the $O((m/T)^3)$ approximation (blue and black points). The $O((m/T)^4)$ correction is plotted as a solid line. Right bottom panel: Difference between the numerical value and the $O((m/T)^4)$ approximation (blue and black points). The $O((m/T)^5)$ correction is plotted as a solid line.
}
\label{fig:V0}
\end{center}
\end{figure}
\subsection{Energy-momentum tensor}
The leading contribution is given by the integral \eqref{intTmunu} when $n=0$
\begin{eqnarray}
J^{X}_{\alpha,0}\,&=&\,\int_X^{\infty}\text{d}u\,\frac{1}{e^{u}-1}\frac{1}{u^{2\alpha-2}}\,\left(u^2-X^2\right)^{D/2+\alpha-1}\nonumber\\
&{\simeq}&\,\int_0^{\infty}\text{d}u\,\frac{u^{D-2}}{e^u-1}\left(\vphantom{\frac{1}{1}}u^2\,+\,(1-D/2-\alpha)X^2\right)\nonumber\\
&=&\,\Gamma(D-1)\left[\vphantom{\frac{1}{1}}(D-1)D\,\zeta(D+1)\,+\,(1-D/2-\alpha)\zeta(D-1)\,X^2\right]
\end{eqnarray}
where we have replaced the lower limit of integration by $0$ and expanded the integrand around $X=0$ in the second line. Therefore, we get for the energy-momentum tensor
\begin{eqnarray}
\label{00}
\frac{\rho_T}{T^4}\,=\,\frac{\langle T^0_{\;0}(\eta,{\bf p})\rangle_{T}}{T^4}\,&{\simeq}&\,\frac{\pi^2}{30}\left(1\,-\,4\Phi({\bf p})\,+\,\frac{4\sin(p\eta)}{p\eta}(\Phi({\bf p})+\Psi({\bf p}))\right)\,\\&&-\frac{1}{24}\left(\frac{m(\hat\phi)}{T}\right)^2\left[1\,+\left(2\cos(p\eta)\,+\,\frac{4\sin(p\eta)}{p\eta}\right)\Psi({\bf p})+\left(2\cos(p\eta)\,-\,2\right)\Phi({\bf p})\right]\nonumber\\
\label{ii}
\frac{p_T}{T^4}\,=\,-\frac{\langle T^i_{\;i}(\eta,{\bf p})\rangle_{T}}{T^4}\,&{\simeq}&\,\frac{\pi^2}{90}\left(1\,-\,4\Phi({\bf p})\,+\,\frac{4\sin(p\eta)}{p\eta}(\Phi({\bf p})+\Psi({\bf p}))\right)\,\\&&-\,\frac{1}{24}\left(\frac{m(\hat\phi)}{T}\right)^2\left[1\,+\,\left(\frac{2}{3}\cos(p\eta)\,+\,\frac{8\sin(p\eta)}{3p\eta}\right)\Psi({\bf p})\,+\,\left(\frac{2}{3}\cos(p\eta)\,+\,\frac{4\sin(p\eta)}{3p\eta}\,-\,2\right)\Phi({\bf p})\right]\nonumber\\
\label{i0}
\frac{\langle T^i_{\;0}(\eta,{\bf p})\rangle_{T}}{T^4}\,&{\simeq}&\,\left(i\frac{p_i}{p}\right)\frac{2\pi^2}{15} \left(\frac{\cos(p\eta)}{p\eta}\,-\,\frac{\sin(p\eta)}{(p\eta)^2}\right)(\Phi({\bf p})\,+\,\Psi({\bf p}))\,\\&&+\left(i\frac{p_i}{p}\right)\,\frac{1}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\,\left[\sin(p\eta)\,\Phi({\bf p})\,+\,\left(\sin(p\eta)\,+\,2\frac{\sin(p\eta)}{(p\eta)^2}\,-\,2\frac{\cos(p\eta)}{p\eta}\right)\Psi({\bf p})\right]\nonumber\\
\label{ij}
\frac{\langle T^i_{\;j}(\eta,{\bf p})\rangle_{T}}{T^4}\,&{\simeq}&\,\left(i^2\frac{p_ip_j}{p^2}\right)\frac{2\pi^2}{15}\left(\frac{\sin(p\eta)}{p\eta}+\,3\frac{\cos(p\eta)}{(p\eta)^2}-3\frac{\sin(p\eta)}{(p\eta)^3}\,\right)(\Phi({\bf p})\,+\Psi({\bf p}))\\&&+\left(i^2\frac{p_ip_j}{p^2}\right)\,\frac{1}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\,\times\nonumber\\&&\left[\left(\frac{\sin(p\eta)}{p\eta}\,-\,\cos(p\eta)\right)\Phi({\bf p})\,+\,\left(6\frac{\sin(p\eta)}{(p\eta)^3}\,-\,\frac{\sin(p\eta)}{p\eta}-\,6\frac{\cos(p\eta)}{(p\eta)^2}\,-\,\cos(p\eta)\right)\Psi({\bf p})\right]\hspace{0,5cm}i\neq j\nonumber\\
\frac{\langle T^\mu_{\;\mu}(\eta,{\bf p})\rangle_{T}}{T^4}\,&{\simeq}&\,\frac{1}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\left[1\,+\,\left(\frac{2\sin(p\eta)}{p\eta}\,-\,2\right)\Phi({\bf p})\,+\,\frac{2\sin(p\eta)}{p\eta}\,\Psi({\bf p})\right]
\label{tr}
\end{eqnarray}
which does not correspond to a perfect fluid\footnote{The energy-momentum tensor given by equations \eqref{00}, \eqref{ii}, \eqref{i0} and \eqref{ij} is conserved.}.
In real space, we have for Newtonian perturbations inside the lightcone ($r<|\eta|$)
\begin{eqnarray}
\frac{\rho_T}{T^4}\,=\,\frac{\langle T^0_{\;0}(\eta,{\bf r})\rangle_{T}}{T^4}\,&\,{\simeq}\,&\,\frac{\pi^2}{30}\left\lbrace 1\,-\,4\left[\left(1-\frac{r}{|\eta|}\right)\Phi_N({\bf r})\,-\,\frac{r}{|\eta|}\Psi_N({\bf r})\right]\right\rbrace\,-\frac{1}{24}\left(\frac{m(\hat\phi)}{T}\right)^2\left[1\,-\,2\,\Phi_N({\bf r})\,+\,4\,\frac{r}{|\eta|}\Psi_N({\bf r})\vphantom{\frac{1}{1}}\right]\nonumber\\\ \\
\frac{p_T}{T^4}\,=\,-\frac{\langle T^i_{\;i}(\eta,{\bf r})\rangle_{T}}{T^4}\,&\,{\simeq}\,&\,\frac{\pi^2}{90}\left\lbrace 1\,-\,4\left[\left(1-\frac{r}{|\eta|}\right)\Phi_N({\bf r})\,-\,\frac{r}{|\eta|}\Psi_N({\bf r})\right]\right\rbrace\,\\&&-\frac{1}{24}\left(\frac{m(\hat\phi)}{T}\right)^2\left[1\,-\,2\,\left(1-\frac{2}{3}\frac{r}{|\eta|}\right)\Phi_N({\bf r})\,+\,\frac{8}{3}\frac{r}{|\eta|}\Psi_N({\bf r})\vphantom{\frac{1}{1}}\right]\nonumber\\
\
\frac{\langle T^i_{\;0}(\eta,{\bf r})\rangle_{T}}{T^4}\,&{\simeq}&\,\frac{\pi^2}{45\,\eta^2}\,\partial_i\left[\vphantom{\frac{1}{1}}r^3\,\left(\Phi_N({\bf r})\,+\,\Psi_N({\bf r})\right)\right]\,-\frac{1}{36\,\eta^2}\left(\frac{m(\hat\phi)}{T}\right)^2\partial_i\left(\vphantom{\frac{1}{1}}r^3\,\Psi_N({\bf r})\right)\\
\
\frac{\langle T^i_{\;j}(\eta,{\bf r})\rangle_{T}}{T^4}\,&{\simeq}&\,-\frac{\pi^2}{300\,|\eta|^3}\partial_i\partial_j\left[\vphantom{\frac{1}{1}}r^5\,\left(\Phi_N({\bf r})\,+\,\Psi_N({\bf r})\right)\right]\,+\,\frac{1}{240\,|\eta|^3}\left(\frac{m(\hat\phi)}{T}\right)^2\partial_i\partial_j\left(\vphantom{\frac{1}{1}}r^5\,\Psi_N({\bf r})\right)\hspace{0,5cm} i\neq j\nonumber\\ \ \\
\frac{\langle T^\mu_{\;\mu}(\eta,{\bf r})\rangle_{T}}{T^4}\,&\,{\simeq}\,&\,\frac{1}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\left[1\,-2\,\left(1-\frac{r}{|\eta|}\right)\Phi_N({\bf r})\,+\,2\,\frac{r}{|\eta|}\Psi_N({\bf r})\vphantom{\frac{1}{1}}\right].
\end{eqnarray}
Outside the lightcone ($r>|\eta|$), we get
\begin{eqnarray}
\frac{\rho_T}{T^4}\,=\,\frac{\langle T^0_{\;0}(\eta,{\bf r})\rangle_{T}}{T^4}\,&\,{\simeq}\,&\,\frac{\pi^2}{30}\left(1\,+\,4\,\Psi_N({\bf r})\vphantom{\frac{1}{1}}\right)\,-\frac{1}{24}\left(\frac{m(\hat\phi)}{T}\right)^2\left[1\,+\,6\,\Psi_N({\bf r})\vphantom{\frac{1}{1}}\right]\\
\frac{p_T}{T^4}\,=\,-\frac{\langle T^i_{\;i}(\eta,{\bf r})\rangle_{T}}{T^4}\,&\,{\simeq}\,&\,\frac{\pi^2}{90}\left( 1\,+\,4\,\Psi_N({\bf r})\vphantom{\frac{1}{1}}\right)\,-\frac{1}{24}\left(\frac{m(\hat\phi)}{T}\right)^2\left[1\,+\,\frac{10}{3}\,\Psi_N({\bf r})\vphantom{\frac{1}{1}}\right]\\
\
\frac{\langle T^i_{\;0}(\eta,{\bf r})\rangle_{T}}{T^4}\,&{\simeq}&\,-\frac{2\,\pi^2\,\eta}{45}\,\partial_i\left(\vphantom{\frac{1}{1}}\Phi_N({\bf r})\,+\,\Psi_N({\bf r})\right)+\frac{\eta}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\partial_i\left(\vphantom{\frac{1}{1}}\Phi_N({\bf r})\,+\,\frac{5}{3}\,\Psi_N({\bf r})\right)\\
\
\frac{\langle T^i_{\;j}(\eta,{\bf r})\rangle_{T}}{T^4}\,&{\simeq}&\,-\frac{2\,\pi^2\,\eta^2}{225}\partial_i\partial_j\left(\vphantom{\frac{1}{1}}\Phi_N({\bf r})\,+\,\Psi_N({\bf r})\right)\,+\,\frac{\eta^2}{36}\left(\frac{m(\hat\phi)}{T}\right)^2\partial_i\partial_j\left(\vphantom{\frac{1}{1}}\Phi_N({\bf r})\,+\,\frac{7}{5}\Psi_N({\bf r})\right)\hspace{0,5cm} i\neq j\nonumber\\ \ \\
\frac{\langle T^\mu_{\;\mu}(\eta,{\bf r})\rangle_{T}}{T^4}\,&\,{\simeq}\,&\,\frac{1}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\left[1\,+\,2\,\Psi_N({\bf r})\vphantom{\frac{1}{1}}\right],
\end{eqnarray}
and on the lightcone ($r=|\eta|$) the results are
\begin{eqnarray}
\frac{\rho_T}{T^4}\,=\,\frac{\langle T^0_{\;0}(\eta,{\bf r})\rangle_{T}}{T^4}\,&\,{\simeq}\,&\,\frac{\pi^2}{30}\left(1\,+\,4\,\Psi_N({\bf r})\vphantom{\frac{1}{1}}\right)\,-\frac{1}{24}\left(\frac{m(\hat\phi)}{T}\right)^2\left[1\,-\,\Phi_N({\bf r})\,+\,5\,\Psi_N({\bf r})\vphantom{\frac{1}{1}}\right]\\
\frac{p_T}{T^4}\,=\,-\frac{\langle T^i_{\;i}(\eta,{\bf r})\rangle_{T}}{T^4}\,&\,{\simeq}\,&\,\frac{\pi^2}{90}\left( 1\,+\,4\,\Psi_N({\bf r})\vphantom{\frac{1}{1}}\right)\,-\frac{1}{24}\left(\frac{m(\hat\phi)}{T}\right)^2\left[1\,-\,\frac{1}{3}\,\Phi_N({\bf r})\,+\,3\,\Psi_N({\bf r})\vphantom{\frac{1}{1}}\right]\\
\
\frac{\langle T^i_{\;0}(\eta,{\bf r})\rangle_{T}}{T^4}\,&{\simeq}&\,-\frac{2\,\pi^2\,\eta}{45}\,\partial_i\left(\vphantom{\frac{1}{1}}\Phi_N({\bf r})\,+\,\Psi_N({\bf r})\right)+\frac{\eta}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\partial_i\left(\vphantom{\frac{1}{1}}\Phi_N({\bf r})\,+\,\frac{5}{3}\,\Psi_N({\bf r})\right)\\
\
\frac{\langle T^i_{\;j}(\eta,{\bf r})\rangle_{T}}{T^4}\,&{\simeq}&\,-\frac{2\,\pi^2\,\eta^2}{225}\partial_i\partial_j\left(\vphantom{\frac{1}{1}}\Phi_N({\bf r})\,+\,\Psi_N({\bf r})\right)\,+\,\frac{\eta^2}{36}\left(\frac{m(\hat\phi)}{T}\right)^2\partial_i\partial_j\left(\vphantom{\frac{1}{1}}\Phi_N({\bf r})\,+\,\frac{7}{5}\Psi_N({\bf r})\right)\hspace{0,5cm} i\neq j\nonumber\\ \ \\
\frac{\langle T^\mu_{\;\mu}(\eta,{\bf r})\rangle_{T}}{T^4}\,&\,{\simeq}\,&\,\frac{1}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\left[1\,+\,2\,\Psi_N({\bf r})\vphantom{\frac{1}{1}}\right]\,.
\end{eqnarray}
In the static limit, the energy density and pressure are
\begin{eqnarray}
\frac{\rho_T}{T^4}\,=\,\frac{\langle T^0_{\;0}(\eta,{\bf p})\rangle_{T}}{T^4}\,&\,{\simeq}\,&\,\frac{\pi^2}{30}\left(\vphantom{\frac{1}{1}}1\,-\,4\,\Phi({\bf p})\right)\,-\,\frac{1}{24}\left(\frac{m(\hat\phi)}{T}\right)^2\left(\vphantom{\frac{1}{1}}1\,-\,2\,\Phi({\bf p})\right)\\
\frac{p_T}{T^4}\,=\,-\frac{\langle T^i_{\;i}(\eta,{\bf p})\rangle_{T}}{T^4}\,&\,{\simeq}\,&\,\frac{\pi^2}{90}\left(\vphantom{\frac{1}{1}}1\,-\,4\,\Phi({\bf p})\right)\,-\,\frac{1}{24}\left(\frac{m(\hat\phi)}{T}\right)^2\left(\vphantom{\frac{1}{1}}1\,-\,2\,\Phi({\bf p})\right)\\
\frac{\langle T^\mu_{\;\mu}(\eta,{\bf p})\rangle_{T}}{T^4}\,&\,{\simeq}\,&\,\frac{1}{12}\left(\frac{m(\hat\phi)}{T}\right)^2\left(\vphantom{\frac{1}{1}}1\,-\,2\,\Phi({\bf p})\right)\,,
\end{eqnarray}
the non-diagonal terms being zero. Once again, these results can be interpreted as the corresponding energy density and pressure of a classical gas at the local Tolman temperature \eqref{Tolman}, in agreement with
\cite{Nakazawa} and \cite{Holstein}. The same expressions for the static limit apply in real space (see Appendix \ref{apppoles}).
\section{Thermal shift of the effective potential minima}
\label{sec:shift}
Once the effective potential is obtained, the value of the field for which
\begin{eqnarray}
{V_{\text{eff}}}'(\hat\phi)\,=\,0
\end{eqnarray}
determines the value attained by the classical field $\hat\phi$. The inhomogeneous contributions to the effective potential will now induce a spatial dependence
on $\hat\phi$ which can be written as
\begin{eqnarray}
\hat\phi_{}(\eta,{\bf x})\,=\,\hat\phi_0+\Delta\hat\phi(\eta,{\bf x}),
\end{eqnarray}
where $\hat\phi_0$ is the minimum of the potential in the absence of metric
perturbations, but including the one-loop corrections, i.e.
\begin{eqnarray}
{V_{\text{eff}}^{\text{h}}}'(\hat\phi_0)\,=\,V'(\hat\phi_0)\,+\,{V_{1}^{\text{h}}}'(\hat\phi_0)\,+\,{V_{T}^{\text{h}}}'(\hat\phi_0)\,=\,0 ,
\end{eqnarray}
then, to first order in metric perturbations and taking into account that
$V_1^{\text{i}}=0$ in dimensional regularization, we get
\begin{eqnarray}
\Delta \hat\phi\,=-\,\frac{{V_{T}^{\text{i}}}'(\hat\phi_0)}{{V_{\text{eff}}^{\text{h}}}''(\hat\phi_0)}\,=\,-\frac{1}{{V_{\text{eff}}^{\text{h}}}''(\hat\phi_0)}\,\left.\frac{\text{d}m^2}{\text{d}\hat\phi}\right|_{\hat\phi=\hat\phi_0}\,\left.\frac{\text{d}V^{\text{i}}_T}{\text{d}m^2}\,\right|_{\hat\phi=\hat\phi_0}\,.
\label{minh}
\end{eqnarray}
Thus, since $m^2(\hat\phi)=V''(\hat\phi)$ implies $\text{d}m^2/\text{d}\hat\phi=V'''(\hat\phi)$, the classical field variation is given by the temperature correction
\begin{eqnarray}
\Delta \hat\phi
=\,-\frac{V^{'''}({\hat \phi}_0)}{{V_{\text{eff}}^{\text{h}}}''(\hat\phi_0)}
\left.\frac{\text{d}V^{\text{i}}_T}{\text{d}m^2}
\right|_{\hat\phi=\hat\phi_0}\,.
\end{eqnarray}
The perturbation is therefore proportional to the third derivative of the tree-level potential, so that variations in the field expectation value are only
generated in theories with self-interactions.
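As an illustration (a toy model of our own choosing, not taken from the text), consider a quartic symmetry-breaking potential, with the tree-level derivatives standing in for the full ${V_{\text{eff}}^{\text{h}}}''$ and the ultra-relativistic static correction $V^{\text{i}}_T=-(T^2 m^2/12)\,\Phi$ read off from \eqref{staticp}:

```python
import sympy as sp

phi, mu, lam, T, Phi = sp.symbols('phi mu lam T Phi', positive=True)

# Toy symmetry-breaking potential (illustrative only)
V = -mu**2 * phi**2 / 2 + lam * phi**4 / 4
phi0 = sp.sqrt(mu**2 / lam)                  # tree-level minimum
V2 = sp.diff(V, phi, 2).subs(phi, phi0)      # stands in for V_eff''(phi0) = 2 mu^2
V3 = sp.diff(V, phi, 3).subs(phi, phi0)      # V'''(phi0)

# Ultra-relativistic static correction from eq. (staticp):
# V_T^i = -(T^2 m^2/12) Phi, hence dV_T^i/dm^2 = -(T^2/12) Phi
dVi_dm2 = -T**2 / 12 * Phi

# Shift of the minimum, Delta phi = -(V'''/V_eff'') dV_T^i/dm^2
dphi = sp.simplify(-V3 / V2 * dVi_dm2)
```

For this potential the shift evaluates to $\sqrt{\lambda}\,T^2\Phi/(4\mu)$, showing explicitly how it vanishes without self-interaction ($\lambda\to 0$).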
In the non-relativistic limit and in the static limit we get in Fourier space
\begin{eqnarray}
\Delta \hat\phi_T(\eta,{\bf p})\,=\,\,\frac{e^{-m(\hat\phi)/T}}{4\sqrt{2}\,\pi^{3/2}}\,\,\left(\frac{m(\hat\phi)}{T}\right)^{3/2}
V^{'''}({\hat \phi}_0)\,\left(\frac{T^2}{{V_{\text{eff}}^{\text{h}}}''(\hat\phi_0)}\right)
\,\Phi({\bf p})\,.
\end{eqnarray}
In the ultra-relativistic limit, we obtain for arbitrary $\eta$
\begin{eqnarray}
\Delta \hat\phi(\eta,{\bf p})\,=\,-\,\frac{V^{'''}({\hat \phi}_0)}{12}\left(\frac{T^2}{{V_{\text{eff}}^{\text{h}}}''(\hat\phi_0)}\right)\,\,\left[\left(\frac{\sin(p\,\eta)}{p\,\eta}\,-\,1\right)\Phi({\bf p})\,+\left(\frac{\sin(p\,\eta)}{p\,\eta}\right)\Psi({\bf p})\right]\,.
\end{eqnarray}
which in the static limit reduces to
\begin{eqnarray}
\Delta \hat\phi({\bf p})\,=\,\,\frac{V^{'''}({\hat \phi}_0)}{12}\left(\frac{T^2}{{V_{\text{eff}}^{\text{h}}}''(\hat\phi_0)}\right)\Phi({\bf p}),
\label{staticshift}
\end{eqnarray}
valid also in real space replacing $\Phi({\bf p})$ by $\Phi({\bf r})$. In particular, in real space we get for Newtonian potentials inside the lightcone ($r<|\eta|$)
\begin{eqnarray}
\Delta{{\hat \phi}}(\eta,r)\,=\,-\frac{\, V^{'''}({\hat \phi}_0)}{12}\left(\frac{T^2}{{V_{\text{eff}}^{\text{h}}}''(\hat\phi_0)}\right)\,\left[\left(\frac{r}{|\eta|}-1\right)\Phi_N({\bf r})\,+\,\frac{r}{|\eta|}\Psi_N({\bf r})\right]\,\,
\end{eqnarray}
while outside and on the lightcone ($r\geq|\eta|$)
\begin{eqnarray}
\Delta{{\hat \phi}}(\eta,r)\,=\,-\frac{\, V^{'''}({\hat \phi}_0)}{12}\left(\frac{T^2}{{V_{\text{eff}}^{\text{h}}}''(\hat\phi_0)}\right)\,\Psi_N({\bf r})\,.
\end{eqnarray}
Thus, we see that outside and on the lightcone ($r\geq|\eta|$), the result
reduces to minus the static limit result \eqref{staticshift}.
Inside the lightcone ($r<|\eta|$), the thermal shift depends on time and approaches asymptotically the static case.
From these results we see that there is a negligible shift in the classical field ${\hat \phi}$ at low temperature because of the exponential suppression, however, depending on
the form of the tree-level potential, the shift generated
by metric perturbations in the ultra-relativistic limit could be relevant
in certain cases.
Now, let us focus on the critical temperature of the phase transition $T_{\text{c}}$ defined by \cite{Mukhanov}
\begin{eqnarray}
V_{\text{eff}}(\hat\phi_0+\Delta\hat{\phi})\,=\,V_{\text{eff}}(0)
\label{tc}
\end{eqnarray}
where $V_{\text{eff}}$, $\hat{\phi}_0$ and $\Delta\hat{\phi}$ depend on the temperature $T$. Expanding equation \eqref{tc} around the critical temperature in the absence of metric perturbations $T^0_{\text{c}}$, we get for the leading order
\begin{eqnarray}
V^\text{h}_{\text{eff}}(\hat\phi_0)\,=\,V^\text{h}_{\text{eff}}(0)
\end{eqnarray}
which is the definition of $T^0_{\text{c}}$. Considering the next-to-leading order and solving for $\delta T_{\text{c}}= T_{\text{c}}-T^0_{\text{c}}$, we obtain the following expression for the shift in the critical temperature produced by metric perturbations\footnote{To get this expression we have redefined the effective potential by adding a function of the temperature in such a way that $V^\text{h}_{\text{eff}}(0)=0$ and $\frac{\text{d}}{\text{d}T}V^\text{h}_{\text{eff}}(0)=0$ for every $T$. This does not change the dynamics of the field since the aforementioned function of the temperature does not depend on the field $\phi$.}
\begin{eqnarray}
\delta T_{\text{c}}\,=\, -\left.\frac{V^{\text{i}}_{\text{eff}}(\hat{\phi}_0)}{\frac{\text{d}}{\text{d}T}\left(V^{\text{h}}_{\text{eff}}(\hat{\phi}_0)\right)}\right|_{T=T^0_{\text{c}}}\,.
\end{eqnarray}
It can be shown (see Appendix \ref{appT}) that in the static limit
\begin{eqnarray}
\frac{V^{\text{i}}_{\text{eff}}(\hat{\phi}_0)}{\frac{\text{d}}{\text{d}T}\left(V^{\text{h}}_{\text{eff}}(\hat{\phi}_0)\right)}\,=\,-\,T\,\Phi({\bf p})
\end{eqnarray}
Therefore, in that case, the shift in the critical temperature is given by
\begin{eqnarray}
\frac{\delta T_{\text{c}}}{T^0_{\text{c}}}\,=\,\Phi({\bf p})\,.
\end{eqnarray}
That is, once again the curvature perturbation $\Psi$ does not contribute to the shift.
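This static-limit ratio can be checked against the leading ultra-relativistic terms of \eqref{staticp}; a minimal symbolic sketch:

```python
import sympy as sp

T, m, Phi = sp.symbols('T m Phi', positive=True)

# Leading ultra-relativistic terms of eq. (staticp), split into homogeneous
# and inhomogeneous pieces (the field, and hence m, is held fixed)
Vh = T**4 / 24 * (m / T)**2              # homogeneous part
Vi = -2 * Phi * T**4 / 24 * (m / T)**2   # inhomogeneous part

# Ratio entering the critical-temperature shift
ratio = sp.simplify(Vi / sp.diff(Vh, T))
```

The ratio evaluates to $-T\,\Phi$, as stated above.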
\section{Conclusions}
\label{sec:con}
Considering a scalar field at finite temperature in an inhomogeneous static space-time, we have computed the one-loop corrections to the effective potential and to the energy-momentum tensor induced by static scalar metric perturbations around a Minkowski
background to first order in metric perturbations. To this aim, we have applied the formalism developed in \cite{Maroto, Higgs}. In particular, we have used the explicit expressions for the perturbed field modes together with the assumption of adiabatic evolution of the field. In order to obtain analytical expressions, the non-relativistic and ultra-relativistic limits have been considered.
In the non-relativistic limit, we obtained the corresponding expressions in the static limit and also the limits for large-scale perturbations (small $p$) or times close to the initial time. In the ultra-relativistic limit, we obtained the complete results for arbitrary $p$ and $\eta$ up to ${\mathcal O}(m/T)^5$. In the
static limit, our results agree with those in \cite{Nakazawa} and \cite{Holstein}, which were obtained by means of the Schwinger-de Witt expansion. The energy density and pressure in the static limit are consistent with a local thermal distribution at the local Tolman temperature. In addition, our results are sensitive to the initial conditions set at the initial time for the mode solutions.
We have also discussed the space-dependent shift in the classical field induced by the
metric perturbations. As expected, in the non-relativistic limit the shift is
Boltzmann suppressed. However, in the ultra-relativistic case and depending on the shape
of the potential, the shift could be non-negligible.
The results of the paper have shown that mode summation is a useful technique to obtain
explicit expressions for one-loop quantities at zero and finite temperature. Unlike
the more standard Schwinger-de Witt expansion, this method allows one to calculate not only the
local contributions to the effective action, but also the finite non-local ones which will
appear at second order in the perturbative expansion. Future work along this line
will explore this possibility.
\vspace{0.2cm}
{\it Acknowledgements}. This work has been supported by the Spanish MICINNs Consolider-Ingenio 2010 Programme under grant MultiDark CSD2009-00064, by the Spanish Research Agency (Agencia Estatal de Investigaci\'on) through the grant IFT Centro de Excelencia Severo Ochoa SEV-2016-0597 and MINECO grants FIS2014-52837-P, FIS2016-78859-P(AEI/FEDER, UE), AYA-2012-31101 and AYA2014-60641-C2-1-P. FDA acknowledges financial support from `la Caixa'-Severo Ochoa doctoral fellowship.
\section{Introduction} \label{sec:intro}
The cold classical Kuiper Belt Object (KBO) (486958) 2014 MU$_{69}$ is the primary target for NASA's
\textit{New Horizons} Kuiper Belt Extended Mission.
The cold classical Kuiper Belt consists of objects on low-eccentricity,
low-inclination ($<5^\circ$ to the invariant plane) orbits (that is, dynamically ``cold'')
with heliocentric semimajor axes between about 40 and 50 AU.
The cold classical objects were likely formed in-place and escaped perturbation
from their initial orbits by giant planet migration \citep[][and references therein]{2011ApJ...738...13B},
making them the most distant known remnants of the original protoplanetary disk.
NASA's \textit{New Horizons} spacecraft was launched January 19, 2006,
received a gravitational assist from Jupiter on February 28, 2007,
and flew through the Pluto-Charon system on July 14, 2015
\citep{2015Sci...350.1815S}.
Since the Pluto encounter, \textit{New Horizons} has observed the 3:2 Neptune resonant object (15810) Arawn
(provisionally designated 1994 JR$_{1}$) in 2016, at a distance as close as 0.7 AU \citep{2016ApJ...828L..15P}.
\textit{New Horizons} will encounter many other KBOs within 1 AU, some as close as 0.1 AU,
and some (such as Quaoar and Haumea) that are much farther away,
but all can be seen by \textit{New Horizons} at much higher solar phase angles
than is possible from Earth-based telescopes (Porter et al. 2018 in preparation, Verbiscer et al. 2018 in preparation).
However, none of these KBOs will be seen as close as MU$_{69}$, which will be
within 3500 km of the spacecraft on the nominal trajectory.
\textit{New Horizons} will image the surface of MU$_{69}$ at best resolutions of $\approx$35 meters/pixel,
and will acquire spectral maps at $\approx$1 km/pixel.
Guiding the spacecraft to such a close encounter with a KBO that has only a relatively short orbital arc
required a completely new approach to orbit determination and uncertainty analysis, which we describe in this paper.
2014 MU$_{69}$ was discovered in July 2014 by the \textit{Hubble Space Telescope} (HST) following eight years of
dedicated searches for a second \textit{New Horizons} encounter object (Buie et al. 2018, in preparation).
After several ground-based searches down to V$\approx$26, 194 HST
orbits were allocated for a deeper, more systematic search for objects accessible
to \textit{New Horizons} (GO 13633, PI Spencer).
MU$_{69}$ was initially detected in 10 images acquired in two HST orbits,
as were four other KBOs during the \textit{HST} search.
Three objects were potential targets for \textit{New Horizons}:
2014 MU$_{69}$, 2014 OS$_{393}$, and 2014 PN$_{70}$.
In August 2015, the \textit{New Horizons} team selected 2014 MU$_{69}$ as the potential \textit{New Horizons} extended mission target.
The spacecraft performed a series of four burns in October-November 2015 to target 2014 MU$_{69}$.
The \textit{New Horizons} Kuiper Belt Extended Mission was approved by NASA after Senior Review in July 2016,
and its centerpiece is the flyby of 2014 MU$_{69}$ on January 1, 2019.
In this paper, we will discuss our process of performing absolute astrometry on 2014 MU$_{69}$
tied to a pre-release version of \textit{Gaia} DR2,
propagating that error forward to orbital uncertainty, and then using the orbital uncertainty to
guide both occultations of the KBO and to guide the spacecraft to a close flyby.
These techniques represent the highest-precision heliocentric orbit fitting of a Kuiper Belt object
ever, and can provide a basis for future applications of \textit{Gaia}-driven astrometry to small bodies in the solar system.
\section{Data Sources} \label{sec:data}
2014 MU$_{69}$ was discovered with the \textit{HST} search program (GO 13633) described in (Buie et al. 2018, in preparation).
This program was designed to take five full-frame 370-second Wide Field Camera 3 (WFC3) UVIS images in one orbit with the F350LP
broadband filter, skip an \textit{HST} orbit, and then repeat the same observation.
The images were tracked on a nominal cold-classical orbit, producing streaked stars,
but non-streaked KBOs.
To perform the search, the images were shift-stacked at 20 different representative cold-classical shift rates.
Objects that appeared in both search orbits for one of the shift rates were identified and targeted for follow up
\textit{HST} observations.
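The shift-and-stack search can be sketched as follows; this is a simplified illustration with whole-pixel shifts, whereas the actual pipeline tried 20 representative cold-classical rates:

```python
import numpy as np

def shift_stack(images, dx, dy):
    """Shift each frame by its trial per-frame offset (in whole pixels)
    and median-combine, so a source moving at the trial rate adds up
    coherently while fixed residuals and cosmic rays are suppressed."""
    shifted = [np.roll(np.roll(im, -x, axis=1), -y, axis=0)
               for im, x, y in zip(images, dx, dy)]
    return np.median(shifted, axis=0)

# Five frames with a faint source moving 1 pixel/frame in x
frames = [np.zeros((32, 32)) for _ in range(5)]
for k, f in enumerate(frames):
    f[10, 5 + k] = 1.0

# Stacking at the correct trial rate concentrates the flux at one pixel
stack = shift_stack(frames, dx=[k for k in range(5)], dy=[0] * 5)
```

Note that `np.roll` wraps at the array edges; a real pipeline would crop or pad instead.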
The first object identified this way was designated 1110113Y (HST orbit IDs 11/12, WFC3 CCD 1, shift rate ID 011, random ID 3Y),
later given the provisional designation 2014 MU$_{69}$.
Four more KBOs were subsequently detected, all of which were brighter, but all of which required more fuel
for \textit{New Horizons} to reach.
The available dataset for 2014 MU$_{69}$ has both a short temporal arc (July 2014-October 2017),
and an extremely high data quality, making it ideal for the analysis described below.
MU$_{69}$ is a very faint object, with V$\approx$27.5 (Benecchi et al. 2018, in preparation),
and is in an extremely crowded star field (galactic latitudes from $-8^\circ$ to $-12^\circ$).
These constraints have made it effectively impossible to detect with ground-based telescopes,
and all observations of MU$_{69}$ have been with the \textit{Hubble Space Telescope}.
A list of these \textit{HST} observations is in Table \ref{tab:astro}.
Both the initial follow up observations and most observations conducted since then have adopted the search program's basic
format of five 367 to 370-second, F350LP filter, full frame UVIS WFC3 images.
The key exception was the color campaign in the summer of 2016 (GO 14092, PI Benecchi),
in which four orbits each included two 348-second images using F606W followed by three 373-second images with F814W.
As shown in Table \ref{tab:astro},
half of the follow up orbits were roughly evenly spread over August 2014-October 2017,
while the other half were spread over a roughly one week interval in June-July 2017.
The latter was the lightcurve campaign (GO 14627, PI Benecchi), which was critical in successfully predicting the July 2017 occultation
(see Section \ref{sec:occult}).
In addition to \textit{HST} and the July 2017 occultation, the other data source for this analysis was stellar astrometry from the ESA \textit{Gaia} project.
Initially, the process in Section \ref{sec:image} was built using a custom star catalog built from
a deep composite of the MU$_{69}$ field obtained using the Canada-France-Hawaii Telescope \citep{2014JInst...9C4003G}.
When the \textit{Gaia} Data Release 1 (DR1) was made available in September 2016 \citep{2016A&A...595A...2G}, we began using that,
applying a mean proper motion correction to images significantly after the DR1 2015.0 epoch.
The mean proper motion was calculated from the \textit{Gaia} TGAS catalog \citep{2015A&A...574A.115M}.
The MU$_{69}$ observation fields were in areas of very low coverage for TGAS, so the TGAS stars could not be used directly.
With the success of our application of DR1 and after a special support request to the \textit{Gaia} project,
we were able to obtain a sky patch from a pre-release version of Data Release 2 (DR2) around the path of MU$_{69}$.
The major advances with DR2 are proper motion for all catalog stars, obviating the need for a mean
proper motion correction, as well as a much more homogeneous, bias-free distribution of errors on the sky,
and much lower uncertainty (one order of magnitude).
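The per-star proper-motion propagation that DR2 enables (replacing the DR1-era mean correction) can be sketched as the following linearized update, neglecting parallax and radial motion, with pmra in the \textit{Gaia} convention (i.e. already including the $\cos\delta$ factor):

```python
import numpy as np

MAS_PER_DEG = 3.6e6  # milliarcseconds per degree

def propagate(ra_deg, dec_deg, pmra_masyr, pmdec_masyr, epoch_ref, epoch_obs):
    """Linearly propagate a catalog position (degrees) from the catalog
    reference epoch to the observation epoch using proper motions (mas/yr)."""
    dt = epoch_obs - epoch_ref  # years
    dec = dec_deg + pmdec_masyr * dt / MAS_PER_DEG
    ra = ra_deg + pmra_masyr * dt / (MAS_PER_DEG * np.cos(np.radians(dec_deg)))
    return ra, dec

# Hypothetical star with 360 mas/yr of motion in declination only
ra, dec = propagate(283.0, -20.0, 0.0, 360.0, 2015.5, 2017.5)
```
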
This early version of DR2 was used to plan the three 2017 occultations, both for correcting the \textit{HST} absolute
astrometry and for knowledge of the occultation stars themselves.
We obtained a second preview version of DR2 in our field of interest in October 2017, and that version
is used for the astrometry in Table \ref{tab:astro}.
\section{Image Analysis} \label{sec:image}
\begin{figure*}[t]
\plotone{id8i16moq_2_wcspdf_match2_compress.pdf}
\caption{An image from the HST lightcurve campaign (id8i16moq).
The blue circles show the \textit{Gaia} DR2 positions of stars,
the red circles show the locations of star PDFs used for the WCS solution,
and the outer yellow circles indicate a match
(all PDF stars were matched in this case, which is typical).
The inset green box shows the location of 2014 MU$_{69}$;
see Figure \ref{fig:corner}.
\label{fig:hst}}
\end{figure*}
A typical \textit{HST} MU$_{69}$ image is shown in Figure \ref{fig:hst}.
Over the course of 2014--2017, MU$_{69}$ has moved between galactic latitudes of $-8^\circ$ and $-12^\circ$.
Accordingly, the background star density has always been very high in any images of MU$_{69}$
(from both Earth and \textit{New Horizons}),
and this dense background field will persist through the \textit{New Horizons} encounter.
To mitigate this background, we developed a Python program called \textit{warpy.py} to perform simple star subtraction.
Because almost every observation sequence consists of five images of roughly the same field,
\textit{warpy.py} iterates through the images, warps the other four images of the visit to the frame of the fifth,
median combines the four warped images, and subtracts them from the target image.
The images are coregistered by matching sources detected with \textit{Source Extractor} \citep{1996A&AS..117..393B}.
Because the stars in the images are all smeared differently, they do not subtract cleanly.
However, the star-subtracted images are useful for verification that the following steps are fitting
the KBO and not a background source.
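The median-combine-and-subtract step can be sketched as follows. This is a minimal NumPy illustration that assumes the four companion frames have already been warped to the target frame; the actual \textit{warpy.py} registration via \textit{Source Extractor} matching is omitted, and the image data are synthetic:

```python
import numpy as np

def subtract_stars(target, warped_frames):
    """Median-combine companion frames (already warped to the target
    frame) and subtract the result to suppress the background stars."""
    template = np.median(np.stack(warped_frames), axis=0)
    return target - template

# toy example: four companion frames share a bright "star";
# the target frame additionally contains a faint moving object
rng = np.random.default_rng(0)
star = np.zeros((16, 16))
star[8, 8] = 100.0
frames = [star + rng.normal(0.0, 0.1, star.shape) for _ in range(4)]
target = star.copy()
target[4, 4] = 5.0                      # the "KBO"
residual = subtract_stars(target, frames)
```

After subtraction the star at (8, 8) is largely removed while the moving object at (4, 4) survives, which is what makes the residual images useful for verifying the KBO fits.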
The smeared stars are also a problem for determining the pointing of the images.
Since \textit{HST} is tracking on the motion of the object, a single Tiny Tim \citep{2011SPIE.8127E..0JK}
point spread function (PSF) would not accurately describe the effective PSFs of the stars.
We thus built up ``smear kernels'' that describe the motion of the stars relative to
the KBO through the 370 second exposures.
By shifting a Tiny Tim PSF to 400 discrete times during the exposure and averaging them,
we were able to build up an exact model of each star's PSF.
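The shift-and-average construction of a smear kernel can be sketched as follows. This is a toy illustration with a delta-function PSF and a hypothetical linear track; the paper uses Tiny Tim PSFs sampled at 400 times across the 370 second exposures:

```python
import numpy as np
from scipy.ndimage import shift

def smear_kernel(psf, track, n_steps=400):
    """Average copies of a PSF shifted along the star's apparent track
    relative to the tracked KBO, sampled at n_steps discrete times.
    `track(t)` returns a (dy, dx) pixel offset for t in [0, 1]."""
    acc = np.zeros_like(psf)
    for t in np.linspace(0.0, 1.0, n_steps):
        acc += shift(psf, track(t), order=1, mode="constant")
    return acc / n_steps

# toy PSF (a delta function) smeared linearly by 4 pixels in x
psf = np.zeros((21, 21))
psf[10, 10] = 1.0
kernel = smear_kernel(psf, lambda t: (0.0, 4.0 * t), n_steps=50)
```

Because each shifted copy conserves flux, the kernel integrates to the same total as the input PSF, spread along the track.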
We did this for each star used for the WCS solution (effectively, all the \textit{Gaia} stars in the field),
since the Tiny Tim PSF varies across the WFC3 field.
We next used the stellar PSFs to build up probability distribution functions (PDFs) for the pixel location
of each star within each image.
We did this task with a Markov-Chain Monte Carlo (MCMC) algorithm implemented in IDL.
This minimal MCMC with a single ``walker'' was iterated for 2000 steps to build a PDF of both pixel position and
total DN/second flux.
The result is 2000 equal-probability pixel positions for the star, encompassing the true shape of the
uncertainty distribution.
We found this number of steps to be sufficient, as all the stars used in the astrometric solution had a high
signal-to-noise ratio and typically there were 70--100 stars used in a given solution.
The flux number was not used directly in the fits, but provided a diagnostic of whether the fits were successful.
A typical star had a 1-$\sigma$ pixel position uncertainty of $<$0.1 pixels, equating to an angular uncertainty of
$<$4 milliarcseconds.
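A minimal single-walker Metropolis sampler of this kind might look like the following. This is a sketch with a toy circular-Gaussian PSF and a noiseless image, not the authors' IDL implementation; the step sizes and the unit-variance noise model are illustrative assumptions:

```python
import numpy as np

def psf_model(shape, x0, y0, flux, sigma=1.5):
    """Toy circular-Gaussian PSF standing in for Tiny Tim."""
    yy, xx = np.indices(shape)
    g = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma**2))
    return flux * g / g.sum()

def metropolis(img, p0, n_steps=2000, step=(0.05, 0.05, 0.5), seed=1):
    """Single-walker random-walk Metropolis over (x0, y0, flux),
    assuming unit-variance Gaussian pixel noise."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p0, float)
    ll = -0.5 * np.sum((img - psf_model(img.shape, *p)) ** 2)
    chain = np.empty((n_steps, 3))
    for i in range(n_steps):
        q = p + rng.normal(0.0, step)
        llq = -0.5 * np.sum((img - psf_model(img.shape, *q)) ** 2)
        if np.log(rng.uniform()) < llq - ll:   # Metropolis acceptance
            p, ll = q, llq
        chain[i] = p
    return chain

truth = (7.3, 8.1, 50.0)
img = psf_model((16, 16), *truth)
chain = metropolis(img, (7.0, 8.0, 45.0))
x_mean = chain[500:, 0].mean()          # discard early samples as burn-in
```

The retained chain samples are exactly the kind of equal-probability pixel positions described above.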
With these stellar PDFs, we could now build PDFs for the pointing of \textit{HST} in each image.
WFC3 UVIS images use a three-layer World Coordinate System (WCS) to translate pixel to sky coordinates,
as described in \citet{2006A&A...446..747G}.
The first layer is the basic pointing, roll, and trapezoidal warp, the second is a set of SIP polynomials that
describe the low-frequency distortion on the chip, and the third layer is a look-up table that describes
high-frequency pixel distortion (e.g. by irregularities in the lithography of the CCD).
All of the distortion parameters are highly-calibrated, and we only needed to update the pointing
with deltas to the \textit{CRVAL1} and \textit{CRVAL2} keywords, and the roll by multiplying the CD matrix by a rotation matrix.
Thus for each image we have three parameters for the WCS PDF: delta RA, delta Dec, and delta roll.
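Applying a candidate (delta RA, delta Dec, delta roll) to a WCS can be sketched as below; the CRVAL and CD conventions follow FITS, but the numbers are illustrative only:

```python
import numpy as np

def apply_wcs_deltas(crval, cd, d_ra, d_dec, d_roll_deg):
    """Apply pointing deltas to a simple FITS-style WCS: shift
    CRVAL1/CRVAL2 and rotate the CD matrix by the roll delta."""
    theta = np.deg2rad(d_roll_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return (crval[0] + d_ra, crval[1] + d_dec), rot @ np.asarray(cd)

cd = np.array([[1.1e-5, 0.0],           # ~0.04 arcsec/pixel, toy values
               [0.0, 1.1e-5]])
crval, cd2 = apply_wcs_deltas((274.0, -20.8), cd, 1e-6, -1e-6, 0.01)
```

A pure rotation leaves the CD-matrix determinant (the pixel area on the sky) unchanged, which is a useful sanity check on the update.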
We built this WCS PDF by selecting a pixel coordinate for each star from their PDFs, fitting a best-fit
WCS solution, and recording the deltas from the original WCS.
This process was repeated 10,000 times to build a discretely-sampled PDF of the WCS offsets.
The resulting typical 1-$\sigma$ uncertainty in the \textit{CRVAL1} and \textit{CRVAL2} keywords
(and thus in the pointing of \textit{HST}) was $<$2 milliarcseconds.
The pixel position PDFs for the KBO used the same basic MCMC algorithm as the star pixel PDFs,
but with the single walker iterated 10,000 times to build the PDF.
Since \textit{HST} tracked on the KBO, we did not need a smear kernel to fit the KBO and could use a Tiny Tim PSF directly.
In addition, all but the discovery observations had MU$_{69}$ near the center of WFC3 chip 2 (FITS extension 1),
making for a less distorted PSF than near the chip edges.
Initially, the KBO fit was started with a manual click on the rough position of the KBO in the image.
However, as the orbit improved, this manual position was replaced with a calculated initial position from prior orbit solutions and the WCS.
We also made manual masks of stars and cosmic rays near the KBO that might adversely affect the PDF generation.
\begin{figure}[t]
\plotone{id8i16moq_2_2014MU69_kbopdf_corner.pdf}
\caption{The RA/Dec/Magnitude Probability Distribution Function (PDF) for MU$_{69}$
in the image id8i16moq (see Figure \ref{fig:hst}).
RA and Dec are in milliarcseconds, so as to show the uncertainty in appropriate units.
We generated similar PDFs for all the astrometry shown in Table \ref{tab:astro}.
\label{fig:corner}}
\end{figure}
Finally, we needed to combine the KBO pixel PDFs with the WCS PDFs to make KBO sky PDFs.
Similarly to the WCS PDFs, this was accomplished by pairing a randomly-selected KBO pixel location with a randomly-selected
WCS solution, translating to RA and Dec, and then repeating 10,000 times.
An example of one of these PDFs is shown in Figure \ref{fig:corner}.
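The pairing of the two PDFs can be sketched as follows, with a toy linear pixel-to-sky mapping standing in for the full distorted WCS; the roll component of the WCS samples is ignored in this flat, small-field approximation, and all numbers are synthetic:

```python
import numpy as np

def sky_pdf(kbo_pix, wcs_offsets, pix_to_sky, n=10_000, seed=2):
    """Pair random draws from the KBO pixel PDF and the WCS-offset PDF
    and map them to RA/Dec. The roll column of the WCS samples is
    ignored in this flat approximation."""
    rng = np.random.default_rng(seed)
    i = rng.integers(len(kbo_pix), size=n)
    j = rng.integers(len(wcs_offsets), size=n)
    return pix_to_sky(kbo_pix[i]) + wcs_offsets[j][:, :2]

scale = 1.1e-5                           # degrees per pixel (toy)
pix_to_sky = lambda p: np.array([274.0, -20.8]) + scale * p
kbo_pix = np.random.default_rng(3).normal(512.0, 0.1, (10_000, 2))
wcs_off = np.random.default_rng(4).normal(0.0, 2e-3 / 3600.0, (10_000, 3))
samples = sky_pdf(kbo_pix, wcs_off, pix_to_sky)
```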
The instrument magnitude in the pixel PDF was converted to AB apparent magnitude with the PHOTFLAM header keyword.
While the magnitude was not used directly for astrometry, it was an important diagnostic for the quality of the pixel PDF.
For the solutions presented, we filtered out any points with magnitude uncertainties larger than 0.5,
which were generally failures to fit the object,
or any points with uncertainties smaller than 0.1 magnitudes,
typically a cosmic ray close to the KBO causing spuriously high signal-to-noise ratios.
We also only used the F350LP points for the orbit, as the narrower-band points had worse signal-to-noise ratios for both stars and MU$_{69}$.
This left 214 of the 264 images, which we used for an initial fit.
An additional nine points were rejected because they had greater than 30 milliarcsecond residuals relative to the initial fit.
Our final HST astrometry thus used 205 of the 264 images (78\%); these are shown in Table \ref{tab:astro}.
After the method described here was used to successfully predict the occultation of MU$_{69}$ on July 17, 2017
(see Section \ref{sec:occult} and Buie et al. 2018, in preparation),
we were able to use the occultation itself as a high-quality astrometric point.
Because five solid-body chords were obtained on July 17, we chose the mid-time of the longest chord
and used it as the nominal center-of-figure.
We could then combine this mid-time, the topocentric location of the portable telescope that obtained the longest chord,
and the location and uncertainty of the occultation star from \textit{Gaia} DR2 to produce an effective
astrometric PDF.
See Buie et al. (2018, in preparation) for more details about the circumstances and analysis of this occultation.
This occultation PDF could then be combined with the \textit{HST}-derived PDFs in the process described in Section \ref{sec:orbit}.
\section{Orbit Determination} \label{sec:orbit}
Typically, small body orbits in the literature are described in either mean or osculating heliocentric elements,
with error bars representing a normal error distribution.
This is typically sufficient for general dynamical studies and rough targeting from the ground, but not
for spacecraft flybys or occultation planning.
The actual uncertainty of an object's astrometry is rarely perfectly described by a normal distribution,
and neither is that object's location and velocity in space.
We thus sought to develop an orbit-fitting method that would accurately map
the full astrometric uncertainty distribution into the ephemeris.
To perform these fits, we developed a high-precision few-body orbital integrator.
Since 2014 MU$_{69}$ is a cold-classical KBO, all of the planetary perturbations on it are interior,
and tend to result in a slow precession.
Non-gravitational forces (e.g., YORP) and general relativity are negligible for Kuiper Belt objects.
We therefore developed a few-body conservative force integrator, capable of modeling
the major planets and their perturbing forces on a massless test particle.
This integrator (PyNBody\footnote{\url{https://github.com/ascendingnode/PyNBody}})
is based on the 12/13th order Runge-Kutta-Nystrom integrator of \citet{Brankin_1989}
and was previously described in \citet{2016ApJ...828L..15P}.
This integrator is not the fastest, but it is very accurate and can typically conserve
system energy and momentum to within machine precision over the relevant timescales for orbit fitting.
The KBO's orbit is parameterized as a cartesian state vector relative to the solar system barycenter at a fixed epoch.
The inertial frame for the integrations is the International Celestial Reference Frame (ICRF).
\textit{Gaia} DR1 is aligned to ICRF by matching optical detections of quasars with a subset of ICRF2,
while \textit{Gaia} DR2 uses several thousand quasars from ICRF3 and half a million AGNs to perform frame alignment
\citep{2016A&A...595A...2G}.
The integration epoch is set to 2014-06-01 00:00:00.000 UTC, a few weeks before the first observation
(originally a safety factor in case of any precoveries in the \textit{HST} search).
To test the solution against the data, we propagate the state vector with the PyNBody integrator to the desired time
and calculate its apparent ICRF RA/Dec from \textit{HST}, with appropriate light-time correction.
We use the JPL NAIF \textit{HST} and \textit{DE430} SPICE kernels to determine the location of \textit{HST}
relative to the solar system barycenter \citep{2014IPNPR.196C...1F}.
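The light-time correction mentioned above is typically computed by iteration, since the light travel time depends on the position being solved for. A sketch with a stand-in ephemeris function (not the PyNBody/SPICE machinery) is:

```python
import numpy as np

C_KM_S = 299_792.458                     # speed of light [km/s]

def apparent_position(obj_pos, obs_pos, t, n_iter=3):
    """Iterate the light-time correction: evaluate the target's
    barycentric position at t - tau, with tau the light travel time
    to the observer. `obj_pos(t)` is a stand-in ephemeris function."""
    tau = 0.0
    for _ in range(n_iter):
        r = obj_pos(t - tau) - obs_pos   # vector observer -> target
        tau = np.linalg.norm(r) / C_KM_S
    return r, tau

# toy ephemeris: object ~43 AU away, moving 4.5 km/s in x
au = 1.495978707e8                       # km
obj = lambda t: np.array([4.5 * t, 43.0 * au, 0.0])
r, tau = apparent_position(obj, np.zeros(3), t=0.0)
```

At 43 AU the light travel time is roughly six hours, so the correction shifts the apparent position by tens of thousands of kilometers and converges in a few iterations.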
We used the \textit{emcee} Markov-Chain Monte Carlo package \citep{2013PASP..125..306F}
to translate the astrometric uncertainty to orbital uncertainty.
The \textit{emcee} package provides a fast and natively multithreaded way to run MCMC from Python.
As input to \textit{emcee}, the fitting program calculates
the likelihood for any solution by taking the predicted RA/Decs for that solution and comparing them to the RA/Dec PDFs.
Because the PDFs are discretely-sampled, we created a Kernel Density Estimator (KDE) for each observation, using
Silverman's Rule of Thumb \citep{1986desd.book.....S} to choose the bandwidths, since most of the PDFs were roughly Gaussian.
The log likelihoods for all the images could then be summed to provide a total log likelihood to \textit{emcee}.
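The KDE likelihood evaluation can be sketched with SciPy's Gaussian KDE, which supports Silverman's rule directly. This toy two-observation example uses synthetic RA/Dec sample clouds; units and frames are simplified:

```python
import numpy as np
from scipy.stats import gaussian_kde

def total_log_like(predicted, obs_samples):
    """Sum per-image log-likelihoods by evaluating each observation's
    KDE (Silverman bandwidth) at that image's predicted RA/Dec."""
    kdes = [gaussian_kde(s.T, bw_method="silverman") for s in obs_samples]
    return sum(np.log(k(p)[0]) for k, p in zip(kdes, predicted))

rng = np.random.default_rng(5)
# two fake observations: 2000-sample RA/Dec PDFs (offsets in mas)
obs = [rng.normal(0.0, 2.0, (2000, 2)), rng.normal(1.0, 2.0, (2000, 2))]
good = total_log_like([np.zeros(2), np.ones(2)], obs)
bad = total_log_like([np.full(2, 8.0), np.ones(2)], obs)  # 4-sigma off
```

A predicted position near the PDF centers scores a higher total log likelihood than one several sigma away, which is what drives the MCMC fit.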
For any solution, the first step is to make an initial guess (typically an older solution) and minimize its $\chi^2$ with
a downhill simplex method \citep{doi:10.1093/comjnl/7.4.308}.
This polished solution is then used to create 200 slightly perturbed state vectors as the initial ``walkers'' for
\textit{emcee} to use.
We then run the 200 walkers for 100 iterations to ``burn-in'' and allow them to move away from the artificial initial
distribution.
We then reset \textit{emcee} and run it for 500 iterations to produce the full PDF cloud of 10,000 state vectors at the
fitting epoch.
These numbers of iterations were arrived at after much testing, and are typically more burn-in than is actually
necessary, so as to ensure that the solutions are well-distributed.
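The burn-in/reset/production pattern can be illustrated with a minimal NumPy implementation of the affine-invariant stretch move that underlies \textit{emcee}. This is a sketch with a toy Gaussian posterior and 20 walkers rather than the 200 used in the paper:

```python
import numpy as np

def stretch_move(log_prob, p0, n_steps, a=2.0, seed=6):
    """Minimal affine-invariant ensemble sampler (the Goodman-Weare
    stretch move used by emcee). `p0` has shape (n_walkers, n_dim);
    returns the flattened chain of all post-step walker positions."""
    rng = np.random.default_rng(seed)
    walkers = np.array(p0, float)
    n_w, n_d = walkers.shape
    lp = np.array([log_prob(w) for w in walkers])
    out = []
    for _ in range(n_steps):
        for k in range(n_w):
            j = rng.integers(n_w - 1)
            j += j >= k                      # choose a different walker
            z = ((a - 1.0) * rng.uniform() + 1.0) ** 2 / a
            y = walkers[j] + z * (walkers[k] - walkers[j])
            lpy = log_prob(y)
            if np.log(rng.uniform()) < (n_d - 1) * np.log(z) + lpy - lp[k]:
                walkers[k], lp[k] = y, lpy
        out.append(walkers.copy())
    return np.concatenate(out)

log_gauss = lambda x: -0.5 * np.sum(x**2)    # toy 2-D posterior
p0 = np.random.default_rng(7).normal(0.0, 0.1, (20, 2))
burn = stretch_move(log_gauss, p0, 100)      # burn-in, discarded
samples = stretch_move(log_gauss, burn[-20:], 500)
```

As in the text, the burn-in run lets the walkers diffuse away from the artificially tight initial ball before the production samples are kept.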
We save the resulting state vector PDF in a format that can then be propagated to any time of interest.
The state vector and orbit for our ``rd2b'' orbit solution are presented in Table \ref{tab:state}.
\begin{deluxetable}{ccccc}[t]
\tablecaption{ The ``rd2b'' orbit solution for 2014 MU$_{69}$.
State vector and orbit are relative to the solar system barycenter
and in the ICRF Ecliptic frame at the epoch 2014-06-01 00:00:00 UTC.
\label{tab:state}}
\tablehead{\colhead{} & \colhead{Value} & \colhead{} & \colhead{1-$\sigma$} & \colhead{Unit} }
\startdata
$x$ & $+1.163133074444e+09$ &$\pm$& $2.80233e+02$ & km \\
$y$ & $-6.385039581373e+09$ &$\pm$& $1.52754e+03$ & km \\
$z$ & $+2.373261916929e+08$ &$\pm$& $5.87015e+01$ & km \\
$v_x$ & $+4.461378977476e+00$ &$\pm$& $5.92714e-06$ & km/s \\
$v_y$ & $+9.619622770583e-01$ &$\pm$& $2.45488e-05$ & km/s \\
$v_z$ & $-1.066958207821e-01$ &$\pm$& $9.45150e-07$ & km/s \\
\hline
$a$ & 44.23555350 &$\pm$& 0.00003999 & AU \\
$e$ & 0.03787388 &$\pm$& 0.00000476 & \\
$I$ & 2.44993086 &$\pm$& 0.00000203 & deg \\
$\Omega$ & 159.04712465 &$\pm$& 0.00006746 & deg \\
$\omega$ & 183.74800591 &$\pm$& 0.00469779 & deg \\
$M$ & 301.30454775 &$\pm$& 0.00406885 & deg \\
\enddata
\end{deluxetable}
While the full PDF cloud of 10,000 states encapsulates the uncertainty in the location of MU$_{69}$ at any time,
it is rather unwieldy to use in most circumstances.
We therefore calculated the state clouds of 2014 MU$_{69}$ at 2000 discrete times between January 1, 2004 and
January 1, 2024, and averaged the states at each time.
We then generated order-27 Chebyshev polynomials \citep{tchebychev1853theorie} for the positions of the KBO, and saved them in a
JPL SPICE Type 02 SPK kernel.
JPL SPICE could then be used to rapidly interpolate the location of MU$_{69}$ at any time along the interval.
Testing the kernel at 769 random points over the interval returned a root-mean-squared residual of 20 meters,
well below the uncertainty in the orbit.
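The Chebyshev compression step can be sketched as follows, with a toy one-dimensional trajectory in place of the real state cloud. Times are mapped onto $[-1,1]$ before fitting, as a Chebyshev basis requires; the order-27 fit matches the kernel described above, but the trajectory itself is invented for illustration:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

t0, t1 = 0.0, 20.0                       # years spanned by the kernel
t = np.linspace(t0, t1, 2000)
x = 2.0 * (t - t0) / (t1 - t0) - 1.0     # map times onto [-1, 1]
traj = lambda tt: 6.4e9 + 1.4e8 * tt + 5e6 * np.sin(0.3 * tt)  # toy x [km]
coeffs = cheb.chebfit(x, traj(t), deg=27)

t_chk = np.linspace(0.3, 19.7, 769)      # independent check points
x_chk = 2.0 * (t_chk - t0) / (t1 - t0) - 1.0
rms = np.sqrt(np.mean((cheb.chebval(x_chk, coeffs) - traj(t_chk)) ** 2))
```

For a smooth trajectory the interpolation residuals sit far below any orbital uncertainty, mirroring the 20-meter RMS check quoted above.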
\section{New Horizons Trajectory Planning} \label{sec:traj}
The primary reason to determine the orbit of 2014 MU$_{69}$ to very high precision is to ensure the success of
the \textit{New Horizons} flyby.
\textit{New Horizons} performed the major Trajectory Correction Maneuver (TCM) to guide it to MU$_{69}$ over a series
of four burn segments in October and November 2015, after all Pluto observations had finished.
The initial orbit used to target the spacecraft was based on the first year of data, from June 2014 to July 2015
(GO 13633 and GO/DD 14053, PI Spencer).
After that burn, early versions of the analysis described here showed that significantly more \textit{HST} observations
would be required to enable a close flyby of 2014 MU$_{69}$.
We thus proposed and were awarded six \textit{HST} orbits in 2016, and five in 2017 (GO/DD 14485, GO 14629, and GO 15158, PI Buie).
In addition, 24 \textit{HST} orbits were used in June/July 2017 to measure the lightcurve of MU$_{69}$ (GO 14627, PI Benecchi).
The orbit presented here uses data from all of these \textit{HST} programs, in addition to the July 17 occultation.
The \textit{New Horizons} spacecraft will nominally fly closest to 2014 MU$_{69}$ at 05:33 January 1, 2019 UTC.
This time was chosen to enable both the Goldstone and Canberra Deep Space Network (DSN) 70-meter dishes to uplink to the
spacecraft simultaneously for an attempted bistatic radar experiment
\citep[as was performed at Pluto;][]{2016DPS....4821304L}.
\textit{New Horizons} will not be able to acquire MU$_{69}$ any earlier than August 2018.
Because of \textit{New Horizons}'s almost radial trajectory out of the solar system, the KBO will move very slowly
against the background stars until just a few weeks before encounter.
While the spacecraft can use the LOng Range Reconnaissance Imager \citep[LORRI,][]{2005SPIE.5906..407C}
to tightly constrain the location of 2014 MU$_{69}$ in the ``B-Plane''
(the plane perpendicular to the spacecraft's motion and containing the flyby target),
the time-of-flight (ToF) uncertainty along the direction of the
spacecraft's motion is constrained only by the Earth-based orbital solution.
The distance to the spacecraft from Earth will be well-constrained by Doppler radio measurements on approach to MU$_{69}$,
and so the uncertainty in the absolute location of MU$_{69}$ relative to the solar system barycenter will
determine the flyby ToF uncertainty.
\section{Application to Occultation Planning} \label{sec:occult}
In addition to guiding \textit{New Horizons}, we also used our orbit solution to predict three stellar occultations
by 2014 MU$_{69}$ in 2017 and one in 2018.
The 2017 occultation campaign is comprehensively described in Buie et al. (2018, in preparation) and here we detail only the procedures
used to predict the occultations.
MU$_{69}$ is a small object, with an absolute magnitude of $H_V\approx11$ (Benecchi et al. 2018, in preparation),
corresponding to a size likely smaller than 50 km diameter.
We therefore knew that the occultations would only be successful if we had very high-quality orbital estimates
and uncertainty models.
Thankfully, that is exactly what we had developed for guiding \textit{New Horizons}.
Stellar occultations occur when a solar system object passes in front of a star from the perspective of an observer.
They have been used to discover the atmosphere of Pluto \citep{1989Icar...77..148E}
and the rings around Uranus, Chariklo, and Haumea
\citep[respectively]{1977Natur.267..328E,2014Natur.508...72B,2017Natur.550..219O}.
The latter is most important for planning the \textit{New Horizons} flyby of MU$_{69}$, as occultations
provide the only way of detecting rings or other opacity structures around the KBO before the spacecraft
is close enough to see them directly.
In addition, occultations can (and in this case did) provide estimates of the size and shape of a body.
Knowledge of the approximate size of MU$_{69}$ enabled estimates of its bulk albedo,
and therefore allowed mission planners to better estimate the correct exposure times for the flyby images.
Because of the motion and rotation of the Earth, stellar occultations sweep across Earth from west to east.
The typical approach to observe an occultation of an object with an uncertain orbit is therefore
to set up a north-south ``picket fence'' of portable telescope teams perpendicular or ``crosstrack'' to the occultation path.
It is therefore important to know both the crosstrack uncertainty of the prediction, and what the crosstrack
distance of any station is.
The latter often requires some iteration, as finding a logistically viable site with the proper crosstrack
can be challenging, especially in an unfamiliar country.
We thus developed tools to export KML files to Google Maps with lines showing the target crosstracks for each observing team.
These could be used for planning site reconnaissance, and with GPS-enabled smartphones, used to see in real time where
a potential site was located compared to the desired crosstrack line.
To estimate the crosstrack uncertainty, we propagated the full 10,000 states to the occultation time and
calculated the geometry for all of those states.
This produced a full PDF of the occultation uncertainty, which we could use to plan the crosstracks to maximize the
chance of success.
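Extracting a crosstrack uncertainty from the propagated cloud amounts to projecting the position samples onto the crosstrack direction. A sketch with a synthetic two-dimensional (downtrack/crosstrack) cloud:

```python
import numpy as np

def crosstrack_sigma(positions_km, track_dir):
    """Project a cloud of propagated positions onto the crosstrack
    direction and return the 1-sigma spread of the offsets."""
    d = np.asarray(track_dir, float)
    d = d / np.linalg.norm(d)
    offsets = (positions_km - positions_km.mean(axis=0)) @ d
    return offsets.std()

# synthetic cloud: 44 km crosstrack spread, 400 km downtrack spread
rng = np.random.default_rng(8)
cloud = rng.multivariate_normal([0.0, 0.0],
                                [[44.0**2, 0.0], [0.0, 400.0**2]], 10_000)
sigma = crosstrack_sigma(cloud, [1.0, 0.0])
```

The 44 km figure here is chosen to echo the ``may25a'' crosstrack uncertainty quoted below; the real calculation uses the full three-dimensional occultation geometry.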
The first MU$_{69}$ occultation of 2017 was on June 3.
The ground track for this event passed over both South America and South Africa,
and 25 portable telescope teams were deployed to both Mendoza, Argentina and
the Northern Cape and Western Cape Provinces of South Africa.
Two \textit{HST} astrometric observations of MU$_{69}$ had been planned for the spring of 2017,
on March 16 and in mid May.
However, the March observation failed due to a safing event on \textit{HST},
and the observation could not be rescheduled in April,
because MU$_{69}$ was passing through quadrature and did not have enough apparent sky motion
as seen from \textit{HST} to be detected.
Thus, the first MU$_{69}$ observations of 2017 were acquired by \textit{HST} on May 1 and 25.
An initial solution through May 1, ``may1c'', was used to plan the deployments to Argentina and South Africa.
This was superseded by the ``may25a'' solution with data through May 25,
which was produced after the orbit fitters (Porter and Buie) had deployed,
and was used to plan the actual ground tracks.
The ``may25a'' solution purported to have a crosstrack uncertainty of 44 km,
though the subsequent June-July 2017 observations showed that the ``may25a'' ground track
had been too far north by almost 2-$\sigma$.
This offset precluded a solid-body occultation on June 3, though the high signal-to-noise ratio observations
of the event at the South African Astronomical Observatory 74-inch telescope and
at Gemini South did exclude optically-thick rings around MU$_{69}$.
See Buie et al. (2018, in preparation) for more details.
\begin{deluxetable}{cc}[t]
\tablecaption{ Mean, free, and forced elements for the best-fit MU$_{69}$ orbit,
from integrations over $10^8$ years.
\label{tab:mean}}
\tablehead{\colhead{Element} & \colhead{Value} }
\startdata
Mean $a$ & 44.23 AU \\
Forced $e$ & 6 x 10$^{-5}$ \\
Free $e$ & 0.037 \\
Forced $i_{mss}$ & 0.26$^\circ$ \\
Forced $i_{mkb}$ & 0.0012$^\circ$ \\
Free $i$ & 2.54$^\circ$ \\
\enddata
\end{deluxetable}
The second MU$_{69}$ occultation of 2017 was on July 10.
This event occurred mainly over the Pacific Ocean with a much dimmer star (V$\approx$15.5)
and nearly-full Moon, thus preventing a ground-based observation campaign.
However, the NASA-DLR Stratospheric Observatory for Infrared Astronomy (SOFIA)
airborne observatory was able to reach the occultation track from its
southern deployment base in Christchurch, New Zealand,
and NASA awarded a flight to observe the July 10 occultation (PI E. Young).
Between the June 3 and July 10 events,
\textit{HST} observed 2014 MU$_{69}$ over 24 orbits between June 25 and July 4 (GO 14627, PI Benecchi).
This program provided a wealth of new images to integrate into the MU$_{69}$ orbit solution
in a very short amount of time,
and time was especially critical, as the last six orbits' worth of data were downlinked from \textit{HST}
after the orbit fitting team (Porter and Buie) had arrived in Christchurch.
Thus, the orbit solution used to guide the SOFIA flight was necessarily determined at the
United States Antarctic Program Christchurch facility, and delivered to the SOFIA mission
planners 36 hours before the flight.
This orbit solution, ``lc1'', used all of the lightcurve campaign points,
plus the highest-quality preceding \textit{HST} MU$_{69}$ observations.
The final MU$_{69}$ occultation of 2017 on July 17 was observed
with portable ground stations in the Chubut and Santa Cruz provinces in Argentina.
No additional observations of \textit{HST} MU$_{69}$ were made between the July 10 and July 17 occultations,
but we did perform a more thorough filtering of low-quality points.
This resulted in the ``lc1gr'' solution that was used to guide the placement of stations for
the July 17 occultation.
The ``lc1gr'' solution had a 1-$\sigma$ uncertainty in crosstrack of 13 km, much tighter than for June 3.
This solution allowed a tighter picket fence of stations up and down the Patagonian coast, centered
a few kilometers north of the city of Comodoro Rivadavia.
Despite high winds on the occultation night, 22 of the 24 deployed stations successfully observed the occultation.
Five of those stations observed the solid-body occultation, with the southernmost being the
predicted centerline from the ``lc1gr'' solution.
This work predicts
an additional stellar occultation opportunity on August 4, 2018.
The ground track for this event passes over
western Africa (Mali, Mauritania, and Senegal)
and northern South America (Guyana, Venezuela, and Colombia).
With the ``rd2b'' solution presented in Table \ref{tab:state},
the 1-$\sigma$ crosstrack uncertainty is 12 km.
This uncertainty will decrease somewhat with additional \textit{HST} observations in 2018.
\vspace{12pt}
\section{Long-Term Orbital Evolution} \label{sec:longterm}
\begin{figure}[t]
\plotone{ei_rd2b_kernel_1e8.pdf}
\caption{Free and forced elements for best-fit 2014 MU$_{69}$;
the centers of the circles show forced inclination/eccentricity,
while the radii show free inclination/eccentricity.
The forced inclination is 0.26$^\circ$ from the mean solar system
angular momentum vector (left),
but almost perfectly matches the pole of the mean Kuiper Belt at 44 AU from
\citet{2017AJ....154...62V} (right).
The lack of forced eccentricity or inclination implies that MU$_{69}$
has not experienced any significant orbital evolution since formation.
\label{fig:ei}}
\end{figure}
2014 MU$_{69}$ is a cold classical Kuiper Belt object.
\citet{2005AJ....129.1117E} identified the ``Classical'' KBOs as non-resonant objects
with eccentricities smaller than 0.2.
This classification was further refined as a ``Cold Classical'' or ``Kernel''
population by \citet{2011AJ....142..131P}
with $a\approx44$ AU, $e\approx0.05$, $i<5^\circ$ to the invariant plane.
MU$_{69}$ has $a$=44.2 AU, $e$=0.03, $i$=2.4$^\circ$,
making it an archetype of the cold classical population.
\citet{2011ApJ...738...13B} showed that the cold classical objects likely formed
in place and survived giant planet migration without being disturbed from their initial orbits.
The unusually high binary fraction of cold classical KBOs \citep{2008ssbn.book..345N}
is an additional line of evidence that they are mostly undisturbed from their original orbits.
Indeed, the observed cold classical KBO binary fraction is high enough that nearly all must have
originally formed as binaries or higher-order multiple systems \citep{2017NatAs...1E..88F}.
With a few small modifications to the PyNBody code, we were able to integrate the orbit of MU$_{69}$ over
sufficiently long timescales to test this stability and determine mean orbital elements.
Specifically, we changed the time unit in the integration from seconds to years to allow for longer
integrations without worry of overflows and removed the terrestrial planets as perturbers
(instead dropping their masses into the Sun).
The results of integrations forward and back $10^8$ years can be seen in Figure \ref{fig:ei} and Table \ref{tab:mean},
projected in both the mean solar system plane defined by the \textit{de430.bsp} planets in the ICRF J2000 Ecliptic frame,
$i_m=1.6^\circ$ and $\Omega_m=72.4^\circ$,
and in the mean Kuiper Belt plane at 44 AU as determined from known KBOs
by \citet{2017AJ....154...62V}, $i_m=1.8^\circ$ and $\Omega_m=77.0^\circ$.
The mean, free, and forced elements of MU$_{69}$'s orbit are shown in Table \ref{tab:mean}.
The forced inclination of MU$_{69}$ to the mean solar system plane is 0.26$^\circ$,
but only 0.0012$^\circ$ to the mean Kuiper Belt at 44 AU.
Likewise, the forced eccentricity of MU$_{69}$ is less than 0.0001.
The apparent lack of any forced inclination or eccentricity to the mean Kuiper Belt is strong evidence
that MU$_{69}$ has not suffered any significant orbital evolution beyond secular perturbations.
MU$_{69}$ should therefore represent a truly pristine fossil of the Sun's protoplanetary disk,
an object unlike any other previously visited by a spacecraft.
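The free and forced elements plotted in Figure \ref{fig:ei} can be estimated from an integration by locating the circle traced in the $(e\cos\varpi, e\sin\varpi)$ plane: its center is the forced eccentricity vector and its radius the free eccentricity. A sketch with a synthetic time series (the values are chosen to echo Table \ref{tab:mean}):

```python
import numpy as np

def free_forced(k, h):
    """Given a time series in the (k, h) = (e cos varpi, e sin varpi)
    plane, the forced eccentricity vector is the center of the traced
    circle and the free eccentricity is its radius. The same
    construction applies to (sin i cos Omega, sin i sin Omega)."""
    center = np.array([k.mean(), h.mean()])
    radius = np.hypot(k - center[0], h - center[1]).mean()
    return np.hypot(*center), radius

# synthetic series: forced vector (6e-5, 0), free amplitude 0.037
t = np.linspace(0.0, 2.0 * np.pi, 5000, endpoint=False)
k = 6e-5 + 0.037 * np.cos(t)
h = 0.037 * np.sin(t)
forced, free = free_forced(k, h)
```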
\section{Summary} \label{sec:summary}
We have described the process we have used to fit the orbit of
2014 MU$_{69}$, as of the start of 2018.
This process combines \textit{Gaia} DR2 and \textit{HST}/WFC3 to produce extremely high
precision absolute astrometry of MU$_{69}$,
and translates that uncertainty into a cartesian state vector
probability distribution function that can be evolved
to any time of interest.
The results of this analysis were used to successfully predict and observe a solid-body stellar
occultation of MU$_{69}$ on July 17, 2017, predict a stellar occultation on August 4, 2018,
and to guide the \textit{New Horizons} spacecraft
to a close (3500 km) flyby of MU$_{69}$ on January 1, 2019.
The process described here should enable high-precision orbit determination
for future occultations and spacecraft missions.
2014 MU$_{69}$ presents the extreme case of a very interesting object
that is both faint and in a very crowded star field.
Now that the \textit{Gaia} DR2 catalog has been released,
solar system objects with higher signal-to-noise ratios should benefit
even more from this technique,
enabling a substantial improvement in orbital uncertainty and
increasing the number of objects that might be observed with stellar
occultations.
\acknowledgments
This work was supported by NASA's \textit{New Horizons} mission
and \textit{HST} programs GO 13633, GO/DD 14053, GO 14092, GO/DD 14485,
GO 14627, GO 14629, and GO 15158.
Support for this program was provided by NASA through a grant from the Space Telescope Science Institute,
which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
Special thanks to ESA for providing pre-release versions of \textit{Gaia} DR2
over the relevant regions.
Special thanks to Bill Folkner for providing his expertise and verification of the orbit fitting results.
\facility{HST(WFC3), Gaia, SOFIA}
\software{
astropy \citep{2018arXiv180102634T},
scipy \citep{Scipy},
emcee \citep{2013PASP..125..306F},
Matplotlib \citep{Hunter:2007},
photutils,
spiceypy
}
\section{Introduction}
Let~$R$ be a Dedekind domain. A fundamental and well known property of Dedekind domains is that every ideal $\mathfrak{a}\vartriangleleft R$ has a unique factorization into a product of powers of prime ideals. There are cases when this factorization is algorithmically computable. For instance, if $R = \mathbb{Z}_K$ is the ring of algebraic integers (i.e. the integral closure of~$\mathbb{Z}$) in some algebraic number field $K = \mathbb{Q}(\vartheta)$, then a suitable algorithm can be found e.g. in \cite[Algorithm 2.3.22]{Cohen00} or \cite[\S2.2]{GMN13}. The algorithms can also be adapted to global function fields. They depend, however, on knowing an embedding of the ring of integers (or polynomials) into~$R$. In this paper we discuss the problem of performing the computations intrinsically in the monoid of $R$-ideals, without relying on these embeddings. The procedure for factoring ideals that we propose resembles a method of factoring polynomials over finite fields. We show how to generalize known algorithms for polynomial factorization to make them work with ideals in maximal orders of global function fields. The ideal to be factored passes through a three-stage process: radical decomposition, distinct degree factorization and equal degree factorization.
The algorithms presented in \cite{Cohen00, GMN13} are quite efficient, hence the aim of developing intrinsic methods is not so much to reduce the computation time, but rather to construct algorithms that do not depend on the particular structure of global fields and so have the potential to be generalized to other rings. In particular the first step of the process, namely the radical decomposition, can be performed in \emph{any} Dedekind domain in which three elementary operations on ideals are computable. This class of rings includes coordinate rings of smooth, algebraically irreducible curves over a computable, perfect field (see Proposition~\ref{prop_coord_is_computable}). Some early experiments of the authors suggest that the algorithms presented here can be generalized to compute primary decompositions of ideals in affine algebras. This subject needs further investigation, though.
The paper is organized as follows. In Section~\ref{sec_radical} we discuss the radical decomposition of ideals, which is an analog of the square-free factorization of polynomials. Given an ideal $\mathfrak{a}\vartriangleleft R$, this procedure produces a list of radical ideals $\gg_1, \dotsc, \gg_m$ such that~$\mathfrak{a}$ is a product of their respective powers. Next, in Section~\ref{sec_distinct} we show how to factor a radical ideal (i.e. any of the ideals $\gg_1, \dotsc, \gg_m$) into a product of (radical) ideals such that each of these new ideals is a product of primes of the same residual degree. Finally, in Section~\ref{sec_equal} we present a variant of the Cantor--Zassenhaus algorithm (Algorithm~\ref{alg_ed_factor}) capable of factoring radical ideals with prime divisors of a fixed degree. The algorithms discussed in this paper were implemented by the authors in the computer algebra system Magma \cite{magma}. In the closing section we present two examples obtained with our implementation.
Throughout the paper the letter~$R$ always denotes a (fixed) Dedekind domain with a field of fractions~$K$. For the reader's convenience our notation follows the one used in \cite{AM16}; in particular, fraktur letters are used to denote ideals. All the ideals in this paper are integral ideals.
\section{Radical decomposition of ideals}\label{sec_radical}
Let~$R$ be a Dedekind domain and $\mathfrak{a}\vartriangleleft R$ be an ideal in~$R$. Assume that~$\mathfrak{a}$ factors into primes as
\begin{equation}\label{eq_factor_a}
\mathfrak{a} = \mathfrak{p}_1^{k_1}\dotsm \mathfrak{p}_s^{k_s},
\end{equation}
where $\mathfrak{p}_1, \dotsc, \mathfrak{p}_s$ are distinct (and unknown) prime ideals and $k_1, \dotsc, k_s > 0$ their multiplicities. Collate the factors of equal multiplicities. For any $j\leq m := \max\{ k_1, \dotsc, k_s \}$ denote
\[
\gg_j := \bigcap_{\substack{1\leq i\leq s\\ k_i = j}} \mathfrak{p}_i.
\]
This way we may write~$\mathfrak{a}$ as a product analogous to a square-free factorization of a polynomial:
\begin{equation}\label{eq_radical_decomp}
\mathfrak{a} = \gg_1\cdot \gg_2^2\dotsm \gg_m^m.
\end{equation}
We shall call~\eqref{eq_radical_decomp} the \term{radical decomposition} of the ideal~$\mathfrak{a}$. The name is justified by the following observation.
\begin{obs}
Ideals $\gg_1, \dotsc, \gg_m$ are radical.
\end{obs}
Indeed, radicals are preserved by intersection (see e.g. \cite[Ch.~1]{AM16}), hence
\[
\rad(\gg_j)
= \rad\Bigl( \bigcap_{k_i = j}\mathfrak{p}_i \Bigr)
= \bigcap_{k_i = j} \rad(\mathfrak{p}_i)
= \bigcap_{k_i = j} \mathfrak{p}_i
= \gg_j.
\]
In our setting, the ideals $\gg_1, \dotsc, \gg_m$ play roles analogous to the square-free factors of a polynomial in the square-free factorization, hence we shall call them \emph{radical factors} of~$\mathfrak{a}$.
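As a toy illustration (our example, in the simplest Dedekind domain $R = \mathbb{Z}$): for $\mathfrak{a} = \Ideal{360}$ we have $360 = 2^3\cdot 3^2\cdot 5$, so the radical factors collect the primes by multiplicity,
\[
\gg_1 = \Ideal{5},\qquad
\gg_2 = \Ideal{3},\qquad
\gg_3 = \Ideal{2},
\qquad\text{and}\qquad
\mathfrak{a} = \gg_1\cdot \gg_2^2\cdot \gg_3^3.
\]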
The following operations are the basic building blocks for our first algorithm:
\begin{itemize}
\item given an ideal~$\mathfrak{a}$ compute its radical $\rad\mathfrak{a}$,
\item given two ideals~$\mathfrak{a}$ and~$\mathfrak{b}$ compute their sum $\mathfrak{a}+\mathfrak{b}$ and the colon ideal $(\mathfrak{a}:\mathfrak{b}) = \{ x \mid x\mathfrak{b}\subseteq \mathfrak{a}\}$.
\end{itemize}
We shall say that~$R$ is a ring with \term{computable ideal arithmetic} if all three operations are computable for ideals of~$R$.
\begin{prop}\label{prop_coord_is_computable}
Let~$\Bbbk$ be a perfect, computable field and $C := \{ F = 0\}$ be a smooth, geometrically irreducible algebraic curve over~$\Bbbk$, defined by a bivariate polynomial $F\in \kk[X,Y]$. Then the coordinate ring $R = \Bbbk[C] = \sfrac{\kk[X,Y]}{\Ideal{F}}$ admits computable ideal arithmetic.
\end{prop}
The proof of the proposition needs to be preceded by a lemma. Let $\kappa: \kk[X,Y]\twoheadrightarrow R$ be the canonical epimorphism. By superscripts $\cdot^c$, $\cdot^e$ we shall denote respectively the ideal contraction and extension with respect to~$\kappa$.
\begin{lem}
Keep the assumptions of the proposition. If $\mathfrak{a}, \mathfrak{b}\vartriangleleft R$ are two ideals, then
\[
\mathfrak{a}^{ce} = \mathfrak{a},\qquad
\rad(\mathfrak{a}) = \bigl( \rad(\mathfrak{a}^c)\bigr)^e,\qquad
(\mathfrak{a} : \mathfrak{b}) = ( \mathfrak{a}^c : \mathfrak{b}^c)^e.
\]
\end{lem}
\begin{proof}
The inclusion $\mathfrak{a}^{ce}\subseteq \mathfrak{a}$ always holds (see e.g. \cite[Proposition 1.17]{AM16}). The other inclusion follows from the fact that~$\kappa$ is an epimorphism. Consequently we have
\[
\rad(\mathfrak{a})
= \bigl( \rad(\mathfrak{a}) \bigr)^{ce}
= \bigl( \rad(\mathfrak{a}^c) \bigr)^e,
\]
where the last equality follows from \cite[Exercise~1.18]{AM16}. Likewise we may write
\[
( \mathfrak{a}^c : \mathfrak{b}^c )^e
\subseteq ( \mathfrak{a}^{ce} : \mathfrak{b}^{ce} )
= ( \mathfrak{a} : \mathfrak{b} )
= ( \mathfrak{a} : \mathfrak{b} )^{ce}
\subseteq ( \mathfrak{a}^c : \mathfrak{b}^c )^e.
\]
This concludes the proof.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop_coord_is_computable}]
If we do not insist on obtaining the two-generator representation of the result, the computation of the sum $\mathfrak{a}+ \mathfrak{b}$ of two ideals can be as simple as concatenating their lists of generators. Next, an algorithm for computing a quotient of two ideals in a multivariate polynomial ring is well known, and so it follows from the above lemma that one may compute the quotient of ideals in~$R$. Finally, being a Dedekind domain, the ring~$R$ has dimension one. Consequently every nontrivial ideal $\mathfrak{a}\vartriangleleft R$ lifts to a zero-dimensional ideal $A\vartriangleleft \kk[X,Y]$. The radical of a zero-dimensional ideal in a multivariate polynomial ring over a perfect field is computable using Seidenberg's formula (see \cite{Seidenberg74}). Thus, by the previous lemma, the radical of~$\mathfrak{a}$ is computable as well.
\end{proof}
We are now ready to present an algorithm for the radical decomposition. The reader may wish to observe that it is a generalization of Musser's algorithm \cite{Musser71} for the square-free factorization of polynomials over a field of characteristic zero.
\begin{alg}[Radical decomposition of an ideal]\label{alg_radical_decomp}
\mbox{}\\
\textbf{Input:} an ideal~$\mathfrak{a}$ in a Dedekind domain~$R$ with computable ideal arithmetic.\\
\textbf{Output:} radical factors $\gg_1, \dotsc, \gg_m$ of~$\mathfrak{a}$.
\begin{algorithmic}[1]
\LineComment{Initialization}
\State $\mathfrak{a}_0 \gets \mathfrak{a}$
\State $i \gets 1$
\State $\mathfrak{b}_1 \gets \rad(\mathfrak{a})$
\State $\mathfrak{a}_1 \gets (\mathfrak{a}_0:\mathfrak{b}_1)$
\LineComment{Main loop}
\While{$\mathfrak{b}_i\neq R$}
\State $\mathfrak{b}_{i+1} \gets \mathfrak{a}_i + \mathfrak{b}_i$
\State $\mathfrak{a}_{i+1} \gets (\mathfrak{a}_i : \mathfrak{b}_{i+1})$
\State $\gg_i \gets (\mathfrak{b}_i : \mathfrak{b}_{i+1})$
\State $i \gets i + 1$
\EndWhile
\State \Return $\gg_1, \dotsc, \gg_{i-1}$
\end{algorithmic}
\end{alg}
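To make the connection with Musser's algorithm tangible, the following Python/SymPy sketch (our illustration, not the authors' Magma implementation) runs the same loop in its original habitat $\mathbb{F}_p[x]$, where the three ideal operations specialize to $\rad f = f/\gcd(f, f')$, the gcd, and exact division. It assumes no irreducible factor has multiplicity divisible by~$p$.

```python
from sympy import symbols, Poly, gcd

x = symbols('x')

def squarefree_factors(f, p):
    """Polynomial analogue of the radical decomposition over GF(p).

    Dictionary between the two settings (illustration only):
      rad(a) <-> f / gcd(f, f'),  a + b <-> gcd,  (a : b) <-> exact division.
    Assumes no irreducible factor of f has multiplicity divisible by p.
    """
    f = Poly(f, x, modulus=p)
    b = f.quo(gcd(f, f.diff(x)))   # b_1 = rad(f)
    a = f.quo(b)                   # a_1 = (a_0 : b_1)
    factors = []
    while b.degree() > 0:          # while b_i != R
        c = gcd(a, b)              # b_{i+1} = a_i + b_i
        factors.append(b.quo(c))   # g_i = (b_i : b_{i+1})
        a = a.quo(c)               # a_{i+1} = (a_i : b_{i+1})
        b = c
    return factors
```

For instance, $f = (x+1)(x+2)^2$ over $\mathbb{F}_{13}$ yields the radical factors $x+1$ and $x+2$, in analogy with~\eqref{eq_radical_decomp}.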
Before we show the correctness of the algorithm, we present a slightly technical lemma that gives an explicit description of ideals~$\mathfrak{a}_i$ and~$\mathfrak{b}_i$ constructed during the execution of the algorithm.
\begin{lem}
Keep the notation as in Algorithm~\ref{alg_radical_decomp}. The ideals~$\mathfrak{a}_i$ and~$\mathfrak{b}_i$ satisfy:
\[
\mathfrak{a}_i = \gg_{i+1}\cdot \gg_{i+2}^2\dotsm \gg_m^{m-i}
\qquad\text{and}\qquad
\mathfrak{b}_i = \gg_i\cdot \gg_{i+1}\dotsm \gg_m.
\]
\end{lem}
\begin{proof}
We proceed by induction. The assertion is trivially true for~$\mathfrak{a}_0$ and~$\mathfrak{b}_1$. Assume that the two formulas hold for ideals~$\mathfrak{a}_{i-1}$ and~$\mathfrak{b}_i$. Take any $x\in \gg_{i+1}\cdot \gg_{i+2}^2\dotsm \gg_m^{m-i}$ and $y\in \mathfrak{b}_i = \gg_i\cdot \gg_{i+1}\dotsm \gg_m$. Then their product~$xy$ lies in $\gg_i\cdot \gg_{i+1}^2\dotsm \gg_m^{m-i+1} = \mathfrak{a}_{i-1}$. Hence $x\in (\mathfrak{a}_{i-1} : \mathfrak{b}_i) = \mathfrak{a}_i$ proving an inclusion $\mathfrak{a}_i \supseteq \gg_{i+1}\cdot \gg_{i+2}^2\dotsm \gg_m^{m-i}$.
Conversely, take $x\in \mathfrak{a}_i = (\mathfrak{a}_{i-1} : \mathfrak{b}_i)$. Fix any prime ideal~$\mathfrak{p}$ dividing~$\mathfrak{a}_{i-1}$ and let $k := \ord_\mathfrak{p}(\mathfrak{a})$ be the multiplicity of~$\mathfrak{p}$ in the factorization~\eqref{eq_factor_a} of~$\mathfrak{a}$. By the inductive hypothesis, the ideal~$\mathfrak{p}$ divides~$\mathfrak{b}_i$ and $k - i + 1$ is the multiplicity of~$\mathfrak{p}$ in the factorization of~$\mathfrak{a}_{i-1}$. By the strong approximation theorem (see e.g. \cite[Corollary~10.5.11]{Cohn03}) there exists an element $y\in R$ such that
\[
\ord_\mathfrak{p} y = 1\qquad\text{and}\qquad \ord_\mathfrak{q} y \geq 1\quad\text{for all }\mathfrak{q}\mid \mathfrak{b}_i.
\]
In particular, $y$ is an element of~$\mathfrak{b}_i$ and $y\notin \mathfrak{p}^2$. By the definition of the colon ideal, $xy\in x\cdot \mathfrak{b}_i\subseteq \mathfrak{a}_{i-1}\subseteq \mathfrak{p}^{k - i + 1}$. It follows that the $\mathfrak{p}$-adic valuation of the product~$xy$ is at least $k - i + 1$. Therefore, we have
\[
k - i + 1
\leq \ord_\mathfrak{p}(xy)
= \ord_\mathfrak{p} x + 1.
\]
Consequently $\ord_\mathfrak{p} x\geq k - i$, which means that $x\in \mathfrak{p}^{k - i}$. As this holds for every prime~$\mathfrak{p}$ dividing~$\mathfrak{a}_{i-1}$, we see that
\[
x\in \bigcap_{\substack{\mathfrak{p}\mid \mathfrak{a}\\ \mathclap{\ord_\mathfrak{p}(\mathfrak{a})\geq i}}} \mathfrak{p}^{\ord_\mathfrak{p}(\mathfrak{a}) - i}
= \prod_{\substack{\mathfrak{p}\mid \mathfrak{a}\\ \mathclap{\ord_\mathfrak{p}(\mathfrak{a})\geq i}}} \mathfrak{p}^{\ord_\mathfrak{p}(\mathfrak{a}) - i}
= \prod_{k\geq i}\biggl( \prod_{\substack{\mathfrak{p}\mid \mathfrak{a}\\ \mathclap{\ord_\mathfrak{p}(\mathfrak{a})=k}}} \mathfrak{p}\biggr)^{k - i}
\hspace{-1em}= R\cdot \gg_{i+1}\cdot \gg_{i+2}^2\dotsm \gg_m^{m-i}.
\]
This shows that $\mathfrak{a}_i \subseteq \gg_{i+1}\cdot \gg_{i+2}^2\dotsm \gg_m^{m-i}$.
We now prove the equality $\mathfrak{b}_{i+1} = \gg_{i+1}\dotsm \gg_m$. First, we compute:
\begin{align*}
\mathfrak{b}_{i+1}
&= \mathfrak{a}_i + \mathfrak{b}_i = \Ideal{ \gg_{i+1}\cdot \gg_{i+2}^2\dotsm \gg_m^{m-i}\cup \gg_i\cdot \gg_{i+1}\dotsm \gg_m }\\
\intertext{The radical ideals~$\gg_i$ are pairwise coprime, hence}
&= \Ideal{ \bigcap_{j\geq i+1} \gg_j^{j-i}\cup \Bigl(\gg_i\cap \bigcap_{j\geq i+1}\gg_j\Bigr) }\\
&= \Ideal{ \Bigl( \bigcap_{j\geq i+1} \gg_j^{j-i}\cup \gg_i\Bigr) \cap \Bigl(\bigcap_{j\geq i+1}\gg_j^{j-i}\cup \bigcap_{j\geq i+1}\gg_j\Bigr) }\\
&\subseteq \Ideal{ \Bigl(\bigcap_{j\geq i+1}\gg_j\cup \gg_i\Bigr) \cap \bigcap_{j\geq i+1}\gg_j}\\
&= \Ideal{ \bigcap_{j\geq i+1}\gg_j} = \gg_{i+1}\cdot \gg_{i+2}\dotsm \gg_m.
\end{align*}
In order to show the other inclusion fix an element $x\in \gg_{i+1}\dotsm \gg_m$. Ideals~$\gg_i$ and $\gg_{i+1}\gg_{i+2}^2\dotsm \gg_m^{m-i} = \mathfrak{a}_i$ are relatively prime, hence there exist elements $y\in \gg_i$ and $z\in \mathfrak{a}_i$ such that $x = y + z$. Therefore, for any $j\geq i+1$ we have
\[
y = x - z \in \gg_j + \mathfrak{a}_i\subseteq \gg_j + \gg_j = \gg_j.
\]
It follows that $y\in \gg_i\cap \gg_{i+1}\cap \dotsb \cap \gg_m = \mathfrak{b}_i$. Consequently, $x = y + z\in \mathfrak{b}_i + \mathfrak{a}_i = \mathfrak{b}_{i+1}$.
\end{proof}
We are now ready to show the correctness of the algorithm.
\begin{proof}[Proof of correctness of Algorithm~\ref{alg_radical_decomp}]
It follows immediately from the preceding lemma that the algorithm terminates. All we need to show is that for every index~$i$ the colon ideal $(\mathfrak{b}_i:\mathfrak{b}_{i+1})$ equals the sought radical ideal~$\gg_i$. One inclusion is immediate. By the lemma we have
\[
\gg_i\cdot \mathfrak{b}_{i+1} = \gg_i\cdot \bigl( \gg_{i+1}\dotsm \gg_m\bigr) = \mathfrak{b}_i
\]
and so $\gg_i\subseteq (\mathfrak{b}_i : \mathfrak{b}_{i+1})$. We need to prove the other inclusion. To this end take any $x\in (\mathfrak{b}_i : \mathfrak{b}_{i+1})$ and fix a prime divisor~$\mathfrak{p}$ of~$\gg_i$. The multiplicity of~$\mathfrak{p}$ in the factorization of~$\mathfrak{a}$ is thus $\ord_\mathfrak{p}(\mathfrak{a}) = i$. By the strong approximation theorem there is an element $y\in R$ such that $y\in \mathfrak{b}_{i+1}\setminus \mathfrak{p}$. Since $xy \in x\cdot \mathfrak{b}_{i+1}\subseteq \mathfrak{b}_i\subseteq \mathfrak{p}$ but $y\notin \mathfrak{p}$, it follows that $x\in \mathfrak{p}$. This shows that~$x$ belongs to every prime divisor~$\mathfrak{p}$ of~$\mathfrak{a}$ of multiplicity $\ord_\mathfrak{p}(\mathfrak{a}) = i$. Therefore
\[
x\in \bigcap_{\substack{\mathfrak{p}\mid \mathfrak{a}\\ \mathclap{\ord_\mathfrak{p}(\mathfrak{a}) = i}}}\mathfrak{p} = \gg_i.
\]
This proves the correctness of the algorithm.
\end{proof}
\section{Distinct degree factorization}\label{sec_distinct}
In this and the next section we restrict our attention to maximal orders in global function fields. Thus, let~$\Bbbk$ be a fixed finite field and let $R = \Bbbk[C] = \sfrac{\kk[X,Y]}{\Ideal{F}}$ be a Dedekind domain which is the coordinate ring of a smooth, geometrically irreducible curve~$C$. In particular~$R$ is a maximal order in the field~$\Bbbk(C)$ of rational functions on~$C$ (i.e. in a global function field). Given a radical ideal $\mathfrak{a}\vartriangleleft R$, consider its factorization into primes
\[
\mathfrak{a} = \mathfrak{p}_1\dotsm \mathfrak{p}_s.
\]
Collate the primes with respect to their residual degrees setting
\[
\mathfrak{h}_j := \prod_{\substack{\mathfrak{p}\mid \mathfrak{a}\\ \deg \mathfrak{p} = j}} \mathfrak{p}.
\]
Consequently the ideal~$\mathfrak{a}$ may be expressed as a product
\begin{equation}\label{eq_dd_factor}
\mathfrak{a} = \mathfrak{h}_1\dotsm \mathfrak{h}_m,\qquad\text{where}\quad m := \max\{\deg\mathfrak{p} \mid \mathfrak{p}\text{ divides }\mathfrak{a}\}.
\end{equation}
By analogy to the polynomial case, we shall call~\eqref{eq_dd_factor} the \term{distinct degree factorization} of~$\mathfrak{a}$.
We will compute the distinct degree factorization of a given ideal~$\mathfrak{a}$ by constructing successive greatest common divisors (in the lattice of $R$-ideals) of~$\mathfrak{a}$ and~$\mathfrak{u}_k$, where~$\mathfrak{u}_k$ is the product (equivalently, the intersection) of all primes of residual degree dividing~$k$:
\[
\mathfrak{u}_k := \prod_{\substack{\mathfrak{p}\text{ prime}\\\deg\mathfrak{p} \mid k}} \mathfrak{p}.
\]
Before we continue, recall that with every prime ideal~$\mathfrak{p}$ one can associate a point $(x_\mathfrak{p}, y_\mathfrak{p})$ on the curve~$C$, with coordinates in the algebraic closure~$\algcl{\Bbbk}$ of~$\Bbbk$. To this end treat elements of $R = \Bbbk[C]$ as polynomial functions on~$C$ and take $(x_\mathfrak{p}, y_\mathfrak{p})$ to be a point where all elements of~$\mathfrak{p}$ vanish simultaneously. In this section~$\Bbbk$ is a finite field, say $\Bbbk = \mathbb{F}_q$ for some prime power $q = p^l$. The degree of~$\mathfrak{p}$ divides~$k$ if and only if $x_\mathfrak{p}, y_\mathfrak{p}$ lie in~$\mathbb{F}_{q^k}$. It is well known that $\mathbb{F}_{q^k}$ consists precisely of the elements satisfying $a^{q^k}-a = 0$. Applying this fact to both coordinates yields the following lemma.
\begin{lem}\label{lem_universal_ideal}
For every $k\geq 1$, the ideal $\mathfrak{u}_k$ is generated by $x^{q^k}-x$ and $y^{q^k}-y$, where $x,y$ are images in~$R$ of $X,Y\in \kk[X,Y]$.
\end{lem}
\begin{proof}
Fix $k\geq 1$ and denote $\mathfrak{v}_k := \tIdeal{ x^{q^k}-x, y^{q^k}-y}\vartriangleleft R$. We shall prove first that the ideal~$\mathfrak{v}_k$ is contained in~$\mathfrak{u}_k$. It suffices to show that both its generators belong to every prime ideal~$\mathfrak{p}$ of~$R$ whose residual degree divides~$k$. Take any such prime~$\mathfrak{p}$. The coordinates $x_\mathfrak{p}, y_\mathfrak{p}$ of the associated point belong to~$\mathbb{F}_{q^k}$, hence $x_\mathfrak{p}^{q^k}-x_\mathfrak{p} = 0 = y_\mathfrak{p}^{q^k} - y_\mathfrak{p}$. Thus the generators of~$\mathfrak{v}_k$ vanish on $(x_\mathfrak{p}, y_\mathfrak{p})$ and so $\mathfrak{v}_k\subseteq \mathfrak{p}$.
Next, we show that~$\mathfrak{v}_k$ is not contained in any prime ideal~$\mathfrak{p}$ whose degree does not divide~$k$. Suppose that $\mathfrak{v}_k\subset \mathfrak{p}$ for some prime ideal~$\mathfrak{p}$. In particular the generators $x^{q^k}-x, y^{q^k}-y$ belong to~$\mathfrak{p}$. Therefore, they vanish on the associated point $(x_\mathfrak{p}, y_\mathfrak{p})$, which means that $x_\mathfrak{p}, y_\mathfrak{p}\in \mathbb{F}_{q^k}$ and so $\deg \mathfrak{p}$ divides~$k$.
From what we have proved so far it follows that~$\mathfrak{u}_k$ is the radical of~$\mathfrak{v}_k$. In order to conclude the proof, it suffices to show that~$\mathfrak{v}_k$ is a radical ideal itself. To this end we show that for every prime ideal~$\mathfrak{p}$ with $\deg \mathfrak{p}\mid k$, the valuation of at least one of the generators of~$\mathfrak{v}_k$ equals~$1$. Consider two (reducible) algebraic curves $C_1 := \bigl\{x^{q^k}= x\bigr\}$ and $C_2 := \bigl\{ y^{q^k}=y\bigr\}$. They both consist of parallel lines but they are not parallel to each other. Suppose that $\ord_\mathfrak{p} (x^{q^k}-x)> 1$ for some~$\mathfrak{p}$. This means that~$\mathfrak{p}$ is a ramified extension of an ideal $p\cdot \mathbb{F}_q[x]$ for some irreducible polynomial~$p$ in~$x$. We may identify the valuation $\ord_\mathfrak{p}(x^{q^k}-x)$ with the intersection index $I\bigl( (x_\mathfrak{p},y_\mathfrak{p}), C\cap C_1\bigr)$. If
\[
\ord_\mathfrak{p}\bigl( x^{q^k}-x \bigr) = I\bigl( (x_\mathfrak{p},y_\mathfrak{p}), C\cap C_1\bigr) > 1,
\]
then~$C$ is tangent to~$C_1$ at $(x_\mathfrak{p}, y_\mathfrak{p})$. Consequently it cannot be tangent to~$C_2$ at $(x_\mathfrak{p}, y_\mathfrak{p})$ as~$C$ is non-singular. Therefore
\[
\ord_\mathfrak{p}\bigl( y^{q^k}-y \bigr) = I\bigl( (x_\mathfrak{p},y_\mathfrak{p}), C\cap C_2\bigr) = 1.
\]
This shows that for every prime ideal~$\mathfrak{p}$, whose residual degree divides~$k$, either $x^{q^k}-x\in \mathfrak{p}\setminus\mathfrak{p}^2$ or $y^{q^k}-y\in \mathfrak{p}\setminus \mathfrak{p}^2$. This implies that~$\mathfrak{v}_k$ is radical.
\end{proof}
We may now present an algorithm for distinct degree factorization.
\begin{alg}[Distinct degree factorization]\label{alg_dd_factor}
\mbox{}\\
\textbf{Input:} a radical ideal~$\mathfrak{a}\vartriangleleft R$.\\
\textbf{Output:} distinct degree factors $\mathfrak{h}_1, \dotsc, \mathfrak{h}_m$ of~$\mathfrak{a}$.
\begin{algorithmic}[1]
\LineComment{Initialization}
\State $k \gets 1$
\State $\mathfrak{a}_1 \gets \mathfrak{a}$
\LineComment{Main loop}
\While{$\mathfrak{a}_k\neq R$}
\State $\mathfrak{u}_k \gets \tIdeal{ x^{q^k}-x, y^{q^k}-y }$
\State $\mathfrak{h}_k \gets \mathfrak{u}_k + \mathfrak{a}_k$
\State $\mathfrak{a}_{k+1} \gets (\mathfrak{a}_k : \mathfrak{h}_k)$
\State $k \gets k + 1$
\EndWhile
\State \Return $\mathfrak{h}_1, \dotsc, \mathfrak{h}_{k-1}$
\end{algorithmic}
\end{alg}
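As with the radical decomposition, the polynomial shadow of Algorithm~\ref{alg_dd_factor} is the classical distinct degree factorization, with $x^{q^k}-x$ playing the role of the generators of~$\mathfrak{u}_k$ from Lemma~\ref{lem_universal_ideal}. A Python/SymPy sketch (ours; $q$ is taken to be prime so that SymPy's \texttt{modulus} option applies):

```python
from sympy import symbols, Poly, gcd

x = symbols('x')

def pow_mod(a, e, m):
    """Compute a^e mod m by square-and-multiply (SymPy Polys)."""
    r = m**0                       # the constant 1 in the ambient ring
    while e:
        if e & 1:
            r = (r * a).rem(m)
        a = (a * a).rem(m)
        e >>= 1
    return r

def distinct_degree_factors(f, q):
    """Polynomial analogue of the distinct degree factorization,
    for a squarefree f over GF(q)."""
    f = Poly(f, x, modulus=q)
    X = Poly(x, x, modulus=q)
    xqk, out = X, []
    while f.degree() > 0:          # while a_k != R
        xqk = pow_mod(xqk, q, f)   # x^(q^k) mod a_k
        h = gcd(f, xqk - X)        # h_k = u_k + a_k
        out.append(h)
        f = f.quo(h)               # a_(k+1) = (a_k : h_k)
    return out
```

Over $\mathbb{F}_5$, for example, the polynomial $(x+1)(x+2)(x^2+2)$ splits into the degree-$1$ part $(x+1)(x+2)$ and the degree-$2$ part $x^2+2$.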
\begin{poc}
We proceed by an induction on~$k$. Assume that $\mathfrak{h}_{k-1}$ is the $(k-1)$-th distinct degree factor of $\mathfrak{a}$ and~$\mathfrak{a}_k$ is the product of the prime divisors of~$\mathfrak{a}$ with residual degrees at least~$k$. This is trivially true for $\mathfrak{a}_1 = \mathfrak{a}$ and $\mathfrak{h}_0 := R$. Lemma~\ref{lem_universal_ideal} asserts that $\tIdeal{ x^{q^k}-x, y^{q^k}-y } = \mathfrak{u}_k$. Compute
\begin{multline*}
\mathfrak{u}_k + \mathfrak{a}_k
= \Ideal{ \mathfrak{u}_k\cup \mathfrak{a}_k}
= \Ideal{ \bigcap_{\deg\mathfrak{p}\mid k}\mathfrak{p}\ \cup \bigcap_{\substack{\mathfrak{q}\mid\mathfrak{a}\\ \deg\mathfrak{q}\geq k}}\mathfrak{q} }\\
= \Ideal{ \bigcap_{\substack{\deg \mathfrak{p} \mid k\\ \mathfrak{q}\mid \mathfrak{a}\\ \deg \mathfrak{q}\geq k}} (\mathfrak{p}\cup \mathfrak{q}) }
= \Ideal{ \bigcap_{\substack{\deg \mathfrak{p} \mid k\\ \mathfrak{q}\mid \mathfrak{a}\\ \deg \mathfrak{q}\geq k}} (\mathfrak{p} + \mathfrak{q}) }.
\end{multline*}
Now prime ideals $\mathfrak{p},\mathfrak{q}$ are either equal or relatively prime. Hence $\mathfrak{p}+ \mathfrak{q} = \mathfrak{p}$ when $\mathfrak{p} = \mathfrak{q}$ and $\mathfrak{p}+\mathfrak{q} = R$ if $\mathfrak{p}\neq \mathfrak{q}$. Consequently the above formula simplifies to
\[
\mathfrak{u}_k + \mathfrak{a}_k = \bigcap_{\substack{\mathfrak{p}\mid \mathfrak{a}\\ \deg \mathfrak{p} = k}} \mathfrak{p} = \mathfrak{h}_k.
\]
It follows that $\mathfrak{a}_{k+1} = (\mathfrak{a}_k : \mathfrak{h}_k)$ is the product of all those prime divisors of~$\mathfrak{a}$ whose degrees are strictly greater than~$k$.
\end{poc}
\section{Equal-degree factorization}\label{sec_equal}
After performing a radical decomposition and distinct degree factorization, we are left with a list of radical ideals such that each one is a product of primes all having the same (known) residual degree. We can deal with such ideals using a generalization of the classical Cantor--Zassenhaus algorithm. We shall first note the following fact.
\begin{lem}
If $\mathfrak{a}\vartriangleleft R$ is a nonzero radical ideal, then the number of elements of the residue ring $\sfrac{R}{\mathfrak{a}}$ is algorithmically computable.
\end{lem}
\begin{proof}
As in the proof of Proposition~\ref{prop_coord_is_computable} we use the ideal contraction with respect to the canonical epimorphism $\kappa: \kk[X,Y]\twoheadrightarrow R$. The ring~$R$ is a Dedekind domain and $\mathfrak{a}\neq \{0\}$, hence $\sfrac{R}{\mathfrak{a}}$ is a finite ring isomorphic to $\sfrac{\kk[X,Y]}{\mathfrak{a}^c}$. The number of elements of the latter ring is computable by the well-known trick of counting the monomials not in $\lm(\mathfrak{a}^c)$, where $\lm(\mathfrak{a}^c)$ is the ideal generated by the leading monomials of~$\mathfrak{a}^c$ with respect to any monomial order in $\kk[X,Y]$.
\end{proof}
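The counting trick can be made concrete with Gr\"obner bases: the standard monomials (those outside $\lm(\mathfrak{a}^c)$) form a $\Bbbk$-basis of the quotient, so over $\mathbb{F}_q$ the residue ring has $q^N$ elements, where $N$ is their number. A Python/SymPy sketch (our illustration, for a prime modulus~$q$):

```python
from itertools import product

from sympy import symbols, groebner

x, y = symbols('x y')

def residue_ring_size(gens, q):
    """Count |F_q[x,y]/a| for a zero-dimensional ideal a = <gens>.

    The standard monomials (those not divisible by a leading monomial
    of the Groebner basis) form an F_q-basis of the quotient, so the
    residue ring has q**N elements, N = #standard monomials.
    """
    G = groebner(gens, x, y, modulus=q, order='grevlex')
    lms = [g.monoms(order='grevlex')[0] for g in G.polys]
    # Zero-dimensionality gives pure-power leading monomials in both
    # variables, so the staircase fits inside a box of this size.
    bound = max(max(e) for e in lms) + 1
    standard = [
        (i, j)
        for i, j in product(range(bound), repeat=2)
        if not any(i >= a and j >= b for a, b in lms)
    ]
    return q ** len(standard)
```

For example, $\sfrac{\mathbb{F}_5[x,y]}{\Ideal{x^2+y,\, y^2-1}}$ has the monomial basis $\{1, x, y, xy\}$ and hence $5^4 = 625$ elements.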
From now on we assume that $\mathfrak{a}\vartriangleleft R$ is a radical ideal with some (unknown) factorization
\[
\mathfrak{a} = \mathfrak{p}_1\dotsm \mathfrak{p}_m
\]
and the residual degrees of $\mathfrak{p}_1, \dotsc, \mathfrak{p}_m$ are all the same and a priori known. Denote this common degree by~$d$.
\begin{lem}
Let~$b$ be an element of~$R$ not in~$\mathfrak{a}$. Denote by $\overline{b} := b + \mathfrak{a}$ the class of~$b$ in $\sfrac{R}{\mathfrak{a}}$ and set $e := q^d-1$. The following conditions are equivalent:
\begin{enumerate}
\item\label{it_proper_divisor} the ideal $\mathfrak{b} := \Ideal{b} + \mathfrak{a}$ is a proper divisor of~$\mathfrak{a}$;
\item\label{it_zero_divisor} the element $\overline{b}$ is a zero-divisor in $\sfrac{R}{\mathfrak{a}}$;
\item\label{it_exponent} $\overline{b}^{e}\neq 1$.
\end{enumerate}
\end{lem}
\begin{proof}
Assume that~$\mathfrak{b}$ is a proper divisor of~$\mathfrak{a}$. This means that $\mathfrak{a}\varsubsetneq \mathfrak{b}\varsubsetneq R$. In particular~$b$ cannot lie in~$\mathfrak{a}$ and so $\overline{b}\neq 0$. The ring $\sfrac{R}{\mathfrak{a}}$ is finite, hence it suffices to show that~$\overline{b}$ is not invertible. Suppose a contrario that there is an element $c\in R$ such that $\overline{c}\cdot \overline{b} = 1$. But then $1\in \mathfrak{b}$ and this contradicts the assumption that $\mathfrak{b}\neq R$.
The implication $\eqref{it_zero_divisor}\Longrightarrow\eqref{it_exponent}$ is trivial. In order to prove the remaining implication $\eqref{it_exponent}\Longrightarrow\eqref{it_proper_divisor}$, assume that $\overline{b}^e\neq 1$. By the Chinese remainder theorem there is an isomorphism
\[
\varphi : \sfrac{R}{\mathfrak{a}} \xrightarrow{\;\sim\;} \sfrac{R}{\mathfrak{p}_1}\times \dotsb \times \sfrac{R}{\mathfrak{p}_m},
\]
where each quotient ring $\sfrac{R}{\mathfrak{p}_i}$ is in turn isomorphic to $\mathbb{F}_{q^d}$. Let $\pi_i : \sfrac{R}{\mathfrak{p}_1}\times \dotsb \times \sfrac{R}{\mathfrak{p}_m}\twoheadrightarrow \sfrac{R}{\mathfrak{p}_i}$ be the projection onto the $i$-th coordinate. For every $i\leq m$, the image $(\pi_i\circ \varphi)(\overline{b}^e)$ is either~$1$ if $b\notin \mathfrak{p}_i$, or~$0$ if $b\in \mathfrak{p}_i$. Not all the coordinates can equal~$1$, because $\overline{b}^e\neq 1$. Nor can all the coordinates equal zero, since $b\notin \mathfrak{a}$. Denote
\[
I := \bigl\{ i\leq m : (\pi_i\circ \varphi)(\overline{b}^e) = 0\bigr\} = \bigl\{ i\leq m: b\in \mathfrak{p}_i\bigr\},
\]
we then have
\[
\mathfrak{b} = \prod_{i\in I} \mathfrak{p}_i
\]
and it is clear that $\mathfrak{a}\varsubsetneq \mathfrak{b}\varsubsetneq R$.
\end{proof}
We may now present a randomized recursive algorithm, in the spirit of Cantor--Zassenhaus, for factoring radical ideals of constant residual degree.
\begin{alg}[Equal degree factorization]\label{alg_ed_factor}
\mbox{}\\
\textbf{Input:} a radical ideal~$\mathfrak{a}\vartriangleleft R$ and an integer~$d$ such that the residual degree of every prime factor of~$\mathfrak{a}$ equals~$d$.\\
\textbf{Output:} prime factors $\mathfrak{p}_1, \dotsc, \mathfrak{p}_m$ of~$\mathfrak{a}$.
\algrenewcommand\algorithmicuntil{\textbf{end repeat}}
\begin{algorithmic}[1]
\LineComment{Recursion termination}
\If{$\card{\sfrac{R}{\mathfrak{a}}} = q^d$}
\State \Return $\mathfrak{a}$
\EndIf
\LineComment{Main loop}
\Repeat
\State $b\gets\mbox{}$ random element of $R\setminus\mathfrak{a}$
\State $\overline{b} \gets b + \mathfrak{a}\in \sfrac{R}{\mathfrak{a}}$
\If{$\overline{b}^{q^d-1}\neq 1$}
\State $\mathfrak{b} \gets \Ideal{b} + \mathfrak{a}$
\State $\mathfrak{c} \gets ( \mathfrak{a} : \mathfrak{b} )$
\Statex {\hspace{\algorithmicindent}\hspace{\algorithmicindent}// Recursion}
\State $r_1 \gets \mbox{}$ Equal degree factorization of~$\mathfrak{b}$
\State $r_2 \gets \mbox{}$ Equal degree factorization of~$\mathfrak{c}$
\State \Return $r_1\cup r_2$
\EndIf
\Until{}
\end{algorithmic}
\end{alg}
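For comparison with Algorithm~\ref{alg_ed_factor}, here is the classical polynomial Cantor--Zassenhaus procedure in Python/SymPy (our sketch, for an odd prime~$q$). One deliberate difference: the polynomial variant raises a random element to the half exponent $(q^d-1)/2$, which is $\pm 1$ modulo every irreducible factor, and splits with a gcd, whereas the algorithm above tests $\overline{b}^{\,q^d-1}\neq 1$ and splits with a colon ideal.

```python
import random

from sympy import symbols, Poly, gcd

x = symbols('x')

def pow_mod(a, e, m):
    """a^e mod m by square-and-multiply (SymPy Polys)."""
    r = m**0
    while e:
        if e & 1:
            r = (r * a).rem(m)
        a = (a * a).rem(m)
        e >>= 1
    return r

def equal_degree_factors(f, d, q):
    """Cantor-Zassenhaus split of a squarefree f over GF(q), q an odd
    prime, all of whose irreducible factors have degree d."""
    f = Poly(f, x, modulus=q)
    if f.degree() == d:                    # |R/a| = q^d: f is already prime
        return [f.monic()]
    e = (q**d - 1) // 2
    while True:
        coeffs = [random.randrange(q) for _ in range(f.degree())]
        b = Poly(coeffs, x, modulus=q)     # random b with deg b < deg f
        g = gcd(f, pow_mod(b, e, f) - 1)
        if 0 < g.degree() < f.degree():    # signs disagree: f splits
            return (equal_degree_factors(g, d, q)
                    + equal_degree_factors(f.quo(g), d, q))
```

For $(x+1)(x+2)(x+5)$ over $\mathbb{F}_{13}$ with $d = 1$ the recursion returns the three monic linear factors (in a random order).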
The correctness of the algorithm follows immediately from the lemma preceding it. For the sake of completeness we present an algorithm for the complete factorization of an ideal, which summarizes the whole discussion.
\begin{alg}[Complete factorization]\label{alg_factorization}
\mbox{}\\
\textbf{Input:} an ideal~$\mathfrak{a}$ in~$R$.\\
\textbf{Output:} the list of pairs $(\mathfrak{p}_i, k_i)$ of prime divisors and multiplicities, see Eq.~\eqref{eq_factor_a}.
\begin{algorithmic}[1]
\State $\mathop{Factors} \gets []$
\State $G\gets\mbox{}$ radical decomposition of~$\mathfrak{a}$ \textup(Algorithm~\ref{alg_radical_decomp}\textup)
\For{$j\leq \card{G}$}
\State $\gg_j\gets G[j]$
\State $H \gets\mbox{}$ distinct degree factorization of~$\gg_j$ \textup(Algorithm~\ref{alg_dd_factor}\textup)
\For{$d\leq \card{H}$}
\State $\mathfrak{h}_d \gets H[d]$
\State $P \gets \mbox{}$ equal degree factorization of $\mathfrak{h}_d$ \textup(Algorithm~\ref{alg_ed_factor}\textup)
\State $\mathop{Factors} \gets \mathop{Factors}\cup \bigl[ (\mathfrak{p}, j) : \mathfrak{p} \in P \bigr]$
\EndFor
\EndFor
\State \Return $\mathop{Factors}$
\end{algorithmic}
\end{alg}
\section{Examples}\label{sec_examples}
The authors implemented the algorithms described in this paper in the computer algebra system Magma~\cite{magma}. Below we present two examples computed with our implementation.
\subsubsection*{Example} Let $K = \mathbb{F}_{13}(x, y)$ be a hyperelliptic function field given by a generating polynomial
\[
F = y^2 - (x^5 - x)(x^4 + 2)
\]
and let $R:=\sfrac{\mathbb{F}_{13}[x,y]}{\Ideal{F}}$. Consider the ideal $\mathfrak{a} \vartriangleleft R$
\begin{multline*}
\mathfrak{a} = \langle
x^9 + 8x^7 + 5x^6 + 10x^5 + 6x^4 + 4x^3 + 9x^2 + 6x + 4,\\
\qquad 11x^8 + 8x^7 + 2x^6 + 10x^5 + 6x^4 + x^3y + x^3 + 4x^2y + 7x^2 + 4xy + 9y + 7 \rangle.
\end{multline*}
Using Algorithm~\ref{alg_radical_decomp} we compute the radical decomposition $\mathfrak{a} = \gg_1\cdot \gg_2^2$, where
\begin{align*}
\gg_1 &= \langle x^6 + 9x^5 + 7x^4 + 10x^3 + 4x^2 + 4x + 12,\\
&\qquad y + 12x^5 + x^4 + 11x^3 + 10x^2 + 3x + 8 \rangle, \\
\gg_2 &= \Ideal{ x^3 + 4x^2 + 4x + 9, y + 7x^2 + 9x + 12 }.
\end{align*}
Next, using Algorithm~\ref{alg_dd_factor}, we compute the distinct degree factorization for each element of the radical decomposition. For $\gg_1$ it returns two trivial factors $\mathfrak{h}_{11} = \mathfrak{h}_{12} = R$ and one nontrivial, degree~$3$ factor
\begin{multline*}
\mathfrak{h}_{13} = \langle 8x^5y + 5x^4y + 9x^3y + xy + 5y + 1,\\
x^6y + 9x^5y + 7x^4y + 10x^3y + 4x^2y + 4xy + 12y \rangle.
\end{multline*}
For $\gg_2$ the situation is fully analogous: $\mathfrak{h}_{21}= \mathfrak{h}_{22} = R$ and
\[
\mathfrak{h}_{23} = \Ideal{ 5x^2y + 5xy + 6y + 1, x^3y + 4x^2y + 4xy + 9y }.
\]
Finally we compute the equal degree factorization for each of the above factors using Algorithm~\ref{alg_ed_factor}. For $\mathfrak{h}_{13}$ we obtain the following primes
\begin{align*}
\mathfrak{p}_1 &= \Ideal{ x^3 + 4x^2 + 4x + 9, y + 6x^2 + 4x + 1 }\\
\mathfrak{p}_2 &= \Ideal{ x^3 + 5x^2 + 9x + 10, y + 3x^2 + 7x + 4 }\\
\intertext{and for $\mathfrak{h}_{23}$ we get}
\mathfrak{p}_3 &= \Ideal{ x^3 + 4x^2 + 4x + 9, y + 7x^2 + 9x + 12 }.
\end{align*}
Hence the complete factorization of~$\mathfrak{a}$ is $\mathfrak{p}_1\cdot \mathfrak{p}_2\cdot \mathfrak{p}_3^2$.
\subsubsection*{Example}
In this example, we consider an elliptic function field $K = \mathbb{F}_{19}(x, y)$ with full constant field $\mathbb{F}_{19}$, where
\[
y^2 + y = x^3 - 2x^2 + 1.
\]
Consider the ideal
\begin{align*}
\mathfrak{a}
&= \langle x^{21} + 14x^{20} + 9x^{19} + 4x^{18} + 5x^{17} + 12x^{16} + 9x^{15} + 7x^{14} + 12x^{13} + 8x^{12}\\
&\quad + 3x^{11} + 8x^{10} + 14x^9 + 7x^8 + 12x^7 + x^6 + 9x^5 + 13x^4 + 9x^3 + 4x^2 + 18x + 4, \\
& x^3y + 6x^2y + 3xy + 17y + 7x^{18} + 7x^{17} + 11x^{16} + x^{15} + 18x^{13} + 8x^{12} + 9x^{11}\\
&\quad + 15x^{10} + 13x^9 + 18x^8 + 12x^7 + x^6 + 14x^5 + 10x^4 + 7x^3 + 15x^2 + 9x + 5\rangle.
\end{align*}
We again use Algorithm~\ref{alg_radical_decomp} to factor~$\mathfrak{a}$ into a product of radical ideals. It returns one trivial factor $\gg_{3} = R$ and three nontrivial factors $\gg_{1}, \gg_{2}$ and $\gg_{4}$, where
\begin{align*}
\gg_1 &= \langle x^3 + 6x^2 + 3x + 17, x^3y + 6x^2y + 3xy + 17y \rangle,\\
\gg_2 &= \Ideal{ x^3 + 4x + 17, y + 8x^2 + 2x + 9 }, \\
\gg_4 &= \Ideal{ x^3 + 2x^2 + 10x + 4, y + 8x^2 + 3x }.
\end{align*}
Now we compute the distinct degree factors of the above ideals. For $\gg_1$ we have two trivial factors $\mathfrak{h}_{11} = \mathfrak{h}_{13} = R$ and two nontrivial ones, of degrees~$2$ and~$4$, respectively:
\begin{align*}
\mathfrak{h}_{12} &= \Ideal{ x + 1 },\\
\mathfrak{h}_{14} &= \Ideal{ x^2 + 5x + 17 }.
\intertext{For $\gg_2$ it returns two trivial factors $\mathfrak{h}_{21} = \mathfrak{h}_{22} = R$ and one nontrivial, degree~$3$ factor}
\mathfrak{h}_{23} &= \Ideal{x^3 + 4x + 17, y + 8x^2 + 2x + 9}.
\intertext{Similarly for~$\gg_4$ we have $\mathfrak{h}_{41} = \mathfrak{h}_{42} = R$ and }
\mathfrak{h}_{43} &= \Ideal{x^3 + 2x^2 + 10x + 4, y + 8x^2 + 3x}.
\end{align*}
Finally we use Algorithm~\ref{alg_ed_factor} to compute the equal degree factorization. It turns out that all four ideals $\mathfrak{h}_{12}, \mathfrak{h}_{14}, \mathfrak{h}_{23}$ and~$\mathfrak{h}_{43}$ are in fact prime. Denoting
\[
\mathfrak{p}_1 := \mathfrak{h}_{12},\quad
\mathfrak{p}_2 := \mathfrak{h}_{14},\quad
\mathfrak{p}_3 := \mathfrak{h}_{23},\quad
\mathfrak{p}_4 := \mathfrak{h}_{43},
\]
we obtain the complete factorization of~$\mathfrak{a}$, namely $\mathfrak{a} = \mathfrak{p}_1\cdot \mathfrak{p}_2\cdot \mathfrak{p}_3^2\cdot \mathfrak{p}_4^4$.
\bibliographystyle{plain}
\section{Introduction}
The dual-polarized (DP) antenna array has many appealing features that make it a strong candidate for adoption in next generation communication systems and massive MIMO \cite{polarization2,3gpp,3gpp_stand,polarization5}.
For example, Foschini and Gans \cite{polarization4} showed that the capacity of systems with DP antennas at the transmitter can be increased up to 50\% compared to systems without polarization. Besides the increased capacity, DP antenna arrays have other key advantages relative to single-polarization counterparts with the same number of antennas, including smaller form factor and easier installation, better interference mitigation capability, and higher link reliability.
In recent releases of the technical specifications by the 3GPP, the DP array and the double directional (DD) channel model are considered key techniques \cite{polarization5,3gpp,3gpp_stand}.
The DD channel model is parsimonious for multipath channels with a small number of dominant paths, and such scenarios arise in millimeter wave (mmWave) based wireless communications. Model parsimony is essential for designing limited feedback schemes for downlink channel state information in frequency-division duplex (FDD) massive MIMO \cite{3gpp}. Specifically, 3GPP suggests that the mobile users estimate the DD channel parameters such as directions-of-arrival (DOAs), directions-of-departure (DODs), and the complex path-loss associated with each path, and then feed back these parameters to the base station (BS). This strategy is rather economical, as it is expected that the number of dominant paths will be small to moderate in practical deployments.
On the other hand, there are very few works related to the DD-DP channel/parameter estimation problem. Most of the existing channel estimation algorithms such as \cite{limitedfeedback2,jarvis,jomp,panos,fangjun,fangjun2} do not take polarization into consideration, and thus cannot be applied to this particular kind of system.
There are many challenges in the way of estimating the key parameters of the DD-DP channel. First, considering polarization adds another level of difficulty on top of the (DODs, DOAs, path losses) parametrization, which is already not easy to handle in some cases, e.g., when we have small-size pilot matrices or a large number of multipaths. Formulating the parameter estimation problem for the DD-DP channel in a mathematically tractable form and tackling it using effective signal processing tools that provide analytical performance guarantees is quite nontrivial. Second, although the DD-DP channel (or to be more precise, blocks of the channel) can be modeled using long-existing array processing models (as we will show), it is hard to apply the classic array processing algorithms (e.g., MUSIC \cite{music} and ESPRIT \cite{esprit}) for estimating the key parameters. The reason is that classic array processing methods usually work under relatively restrictive assumptions---e.g., MUSIC and ESPRIT need the number of multipaths to be smaller than the number of transmit and the number of receive antennas, which may not be satisfied in practice. Real systems often have to deal with more multipaths than antennas on one end of the link. Third, the conventional estimation methods use matched filtering or linear least squares to extract an estimate of the channel matrix out of the received signals, and then perform parameter identification. To do this, the pilot sequence has to be quasi-orthogonal or at least full row-rank, respectively. This is very expensive for massive MIMO systems if the number of transmit antennas is large.
Very recently, Zhu \emph{et al.} proposed an interesting framework for two-dimensional DOA and DOD estimation of wideband massive MIMO-OFDM systems with DP arrays \cite{polarization5}. The key idea behind this algorithm is to exploit a so-called multi-layer reference signal structure to estimate the arrival and departure angles. Specifically, the transmitter and receiver communicate with each other iteratively, and in each iteration the transmitter (or receiver) fixes a beam and then the receiver (or transmitter) varies a paired beamforming vector to receive (or transmit) data. This way, a closed-form formula for DOA/DOD can be asymptotically derived. The total number of iterations is proportional to the product of the number of paired beams and the number of radio frequency bins at both transmitter and receiver. This closed-loop iterative protocol implicitly assumes that the transmitter, receiver, and scatterers remain static, and that the two ends of the link are synchronized in beam sweeping. Its resolution is also limited by the utilized beamwidth.
In this work, we consider the parameter estimation problem for frequency division duplex (FDD) dual-polarized DD channels---but the algorithms can be easily implemented in the time division duplex (TDD) systems as well.
We aim at designing novel efficient channel estimation algorithms and analyzing the identifiability of the key parameters, i.e., DOAs, DODs and path-losses, of this model.
Unlike the existing methods for DD-DP channels as in \cite{polarization5}, our proposed approach does not require multiple iterations between the transmitter and receiver, which may not be desirable or realistic in practice, e.g., in scenarios with relatively high mobility.
Our method also naturally offers high spatial resolution, inheriting the favorable properties of related array processing techniques.
In addition, we fully characterize the theoretical boundaries of our methods in terms of parameter identifiability, leveraging advanced tensor algebra, and show that our method can work under a variety of challenging scenarios where existing methods tend to fail.
Our detailed contributions can be summarized as follows:
\begin{itemize}[leftmargin=3mm]
\item {\bf Tensor-Based Formulation.}
We show that the DD-DP channel can be naturally modeled as a low-rank tensor. Leveraging this structure, we recast the associated channel estimation problem as a low-rank tensor decomposition problem \cite{Sid2017} and handle it using effective tensor decomposition algorithms.
\item {\bf Rigorous Identifiability Analysis.}
On the theory side, we show that the channel (i.e., the multipath parameters) is identifiable under very mild and practical conditions---even when the number of paths largely exceeds the number of receive antennas, a practically important case that classic DP array processing algorithms, e.g., \cite{polarized1}, cannot cope with.
\item {\bf Reduced-pilot Formulation and Identifiability.}
We propose a downlink signaling strategy that utilizes a judiciously designed pilot structure. We show that this pilot structure combined with {\it compressed tensor modeling} can substantially reduce the downlink overhead, without losing identifiability of the channel parameters. This design is particularly suitable for massive MIMO systems, for which existing methods usually need very long pilot sequences to help the receivers extract the channel matrix and then perform parameter estimation. An effective estimation algorithm is also proposed for the designed piloting strategy.
\end{itemize}
We should mention that some very recent work \cite{fangjun,fangjun2} also studied the downlink and uplink channel estimation problems from a tensor decomposition viewpoint. Nevertheless, the work in \cite{fangjun,fangjun2} did not consider dual-polarized antenna arrays. Hence, the formulated problems and analyses there are quite different from ours.
A preliminary conference version of part of this work was presented at ICASSP 2018 \cite{icassp}. The conference version includes the basic modeling and identifiability claims without detailed proofs. This journal version additionally includes detailed proofs of the identifiability results, and the compressed tensor factorization formulation, its identifiability proof, and a new algorithm that handles the compressed formulation.
\noindent {\bf Notation:} Throughout the paper, superscripts $(\cdot)^T$, $(\cdot)^*$, $(\cdot)^H$, $(\cdot)^{-1}$ and $(\cdot)^\dagger$ represent transpose, complex conjugate, Hermitian transpose, matrix inverse and pseudo inverse, respectively. We use $|\cdot|$, $\|\cdot\|_F$, $\|\cdot\|_1$ and $\|\cdot\|_2$ for absolute value, Frobenius norm, $\ell_1$-norm and $\ell_2$-norm, respectively; $\hat a$ denotes an estimate of $a$, $\text{diag}(\cdot)$ is a diagonal matrix holding the argument in its diagonal, $\text{vec}(\cdot)$ is the vectorization operator and $\angle(\cdot)$ takes the phase of its argument; $[\cdot]_i$ is the $i$th element of a vector, $[\mathbf{X}]_{i,j}$ is the $(i,j)$ entry of $\mathbf{X}$, and $\mathbf{x}_{r,k}$ is the $k$th column of $\mathbf{X}_r$. Symbols $\otimes,\odot, \circledast \text{ and } \circ$ denote the Kronecker, Khatri-Rao, element-wise, and outer products, respectively; $[\mathbf{X}]_{[i:j,m:n]}$ extracts the elements in rows $i$ to $j$ and columns $m$ to $n$, $[\mathbf{X}]_{:,i:j}$ extracts the elements in the columns $i$ to $j$ and $[\mathbf{X}]_{i:j,:}$ extracts the elements in the rows $i$ to $j$. $\mathbf{I}_m$ is the $m\times m$ identity matrix and $\boldsymbol{0}_{m\times n}$ is the $m\times n$ zero matrix.
\section{Signal Model and Problem Statement}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{dpmimo}
\vspace{-2.5em}
\caption{DP MIMO system.}
\label{fig:dpmimo}
\end{figure}
\subsection{Double Directional Dual-Polarized Channel Model}
We consider an FDD massive MIMO system, where there are $M_t$ DP transmit antennas and $M_r$ DP receive antennas, see Fig. \ref{fig:dpmimo}. In the array processing literature, this type of DP array is also known as ``cross-polarized'' array \cite{polarized1}.
Under the considered scenario, each antenna pair consists of a vertical (V) polarized antenna and a twin horizontal (H) polarized antenna---and each antenna is connected with an RF chain.
Therefore, the channel can be represented as a $2M_r\times 2M_t$ matrix, where each element of the channel matrix represents a link between a transmit antenna (which could be a V-polarized or a H-polarized antenna) and a (V- or H-polarized) receive antenna.
The signal received by the user is given by \cite{3gpp_stand}
\begin{align}\label{eq:channel}
\mathbf{x}(t) = \mathbf{H}\mathbf{s}(t) + \mathbf{n}(t),\ t=1,\cdots,N
\end{align}
where $\mathbf{s}(t)\in\mathbb{C}^{2M_t \times 1}$ is the transmitted signal, $\mathbf{n}(t)$ is zero-mean i.i.d. circularly symmetric complex Gaussian noise.
The downlink channel matrix can be represented as
\begin{align}\label{H}
\mathbf{H} = \begin{bmatrix} \mathbf{H}^\mathrm{(V_r,V_t)} & \mathbf{H}^\mathrm{(V_r,H_t)} \\ \mathbf{H}^\mathrm{(H_r,V_t)} & \mathbf{H}^\mathrm{(H_r,H_t)} \end{bmatrix} \in\mathbb{C}^{2M_r\times 2M_t},
\end{align}
where $\mathbf{H}^\mathrm{(V_r,V_t)}\in\mathbb{C}^{M_r\times M_t}$ is a channel matrix between all the V-polarized transmit antennas and V-polarized receive antennas, $\mathbf{H}^\mathrm{(V_r,H_t)}\in\mathbb{C}^{M_r\times M_t}$ is a channel matrix between all the H-polarized transmit antennas and V-polarized receive antennas, and likewise for the other two blocks in \eqref{H}.
For notational simplicity, let ${\rm p}\in\{\mathrm{V_r,H_r}\}$ and ${\rm q}\in\{\mathrm{V_t,H_t}\}$. Then, according to the channel model suggested by the 3GPP \cite{3gpp_stand}, the $(\mathrm{p,q})$th subchannel matrix is modeled as
\begin{align}\label{Hpq}
\mathbf{H}^\mathrm{(p,q)} = \sqrt{\frac{\kappa}{\kappa+1}}\mathbf{H}_\mathrm{LOS}^\mathrm{(p,q)} + \sqrt{\frac{1}{\kappa+1}}\mathbf{H}_\mathrm{NLOS}^\mathrm{(p,q)}
\end{align}
where $\mathbf{H}_\mathrm{LOS}^\mathrm{(p,q)}$ is the component of line-of-sight (LOS) and $\mathbf{H}_\mathrm{NLOS}^\mathrm{(p,q)}$ is the component of non-line-of-sight (NLOS), $\sqrt{\frac{1}{\kappa+1}}$ and $\sqrt{\frac{\kappa}{\kappa+1}}$ are energy normalization factors with $\kappa$ being the ratio between the power related to the LOS and the power related to the NLOS, and
\begin{align}
\mathbf{H}_\mathrm{LOS}^\mathrm{(p,q)} &= \tilde{\beta}_1^\mathrm{(p,q)}\mathbf{a}_{r}(\theta_{1},\phi_{1})\mathbf{a}_{t}^H(\vartheta_{1},\varphi_{1}) \label{Hlos} \\
\mathbf{H}_\mathrm{NLOS}^\mathrm{(p,q)} &= \sum_{k=2}^{K} \tilde{\beta}_k^\mathrm{(p,q)}\mathbf{a}_{r}(\theta_{k},\phi_{k}) \mathbf{a}_{t}^H(\vartheta_{k},\varphi_{k}) \label{Hnlos}
\end{align}
where the first path is assumed to be the LOS path that usually exists in systems operating at millimeter wave (mmWave) frequencies, and $K$ is the number of paths between the two subarrays.
$\mathbf{a}_{r}(\theta_{k},\phi_{k})\in\mathbb{C}^{M_r}$ is associated with the array manifold of subarray ${\rm p}$: $\theta_k$ and $\phi_k$ are azimuth and elevation DOAs of the $k$th path, respectively.
Similarly, $\mathbf{a}_{t}(\vartheta_{k},\varphi_{k})\in\mathbb{C}^{M_t}$ is determined by the subarray ${\rm q}$, and $\vartheta_{k}$ and $\varphi_{k}$ are the azimuth and elevation DODs of the $k$th path, respectively.
Note that $\{\tilde{\beta}_k^\mathrm{(p,q)}\}$ are generalized path-losses, which are random variables and affected by the small-scale loss, large-scale loss, distance between BS and MS, and dual-polarization parameters.
We may express
\begin{align}
\tilde{\beta}_k^\mathrm{(p,q)}=\alpha_k\tilde{\gamma}_k^\mathrm{(p,q)}, \quad k=1,\cdots, K
\end{align}
where $\alpha_k$ denotes the standard path-loss, which is caused by propagation and fading, while $\tilde{\gamma}_k^\mathrm{(p,q)}$ denotes the polarization factor. Without loss of generality, we can absorb $\sqrt{\frac{\kappa}{\kappa+1}}$ and $\sqrt{\frac{1}{\kappa+1}}$ into $\tilde{\gamma}_1^\mathrm{(p,q)}$ and $\{\tilde{\gamma}_k^\mathrm{(p,q)}\}_{k=2}^K$, respectively, and define $\gamma_1^\mathrm{(p,q)}=\sqrt{\frac{\kappa}{\kappa+1}}\tilde{\gamma}_1^\mathrm{(p,q)}$ and $\Big\{\gamma_k^\mathrm{(p,q)}=\sqrt{\frac{1}{\kappa+1}}\tilde{\gamma}_k^\mathrm{(p,q)}\Big\}_{k=2}^K$. Thus,
\begin{align}
\beta_k^\mathrm{(p,q)} = \alpha_k\gamma_k^\mathrm{(p,q)},\; k=1,\cdots,K.
\end{align}
Substituting \eqref{Hlos} and \eqref{Hnlos} into \eqref{Hpq} produces
\begin{align}
\mathbf{H}^\mathrm{(p,q)} = \mathbf{A}_r\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(p,q)}\big)\mathbf{A}_t^H
\end{align}
where
\begin{align}
\mathbf{A}_r &= \begin{bmatrix} \mathbf{a}_{r}(\theta_{1},\phi_{1}) & \cdots & \mathbf{a}_{r}(\theta_{K},\phi_{K}) \end{bmatrix} \\
\mathbf{A}_t &= \begin{bmatrix} \mathbf{a}_{t}(\vartheta_{1},\varphi_{1}) & \cdots & \mathbf{a}_{t}(\vartheta_{K},\varphi_{K}) \end{bmatrix} \\
\boldsymbol{\beta}^\mathrm{(p,q)} &=
\begin{bmatrix}
\beta_1^\mathrm{(p,q)} & \cdots & \beta_K^\mathrm{(p,q)}
\end{bmatrix}^T.
\end{align}
Now the channel matrix in \eqref{H} can be rewritten as
\begin{align}\label{eq:3-Dmimo}
\mathbf{H} &=
\begin{bmatrix}
\mathbf{A}_r & \\ & \mathbf{A}_r
\end{bmatrix}
\begin{bmatrix}
\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(V_r,V_t)}\big) & \mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(V_r,H_t)}\big) \\
\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(H_r,V_t)}\big) & \mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(H_r,H_t)}\big)
\end{bmatrix} \times\notag\\
&\quad \begin{bmatrix}
\mathbf{A}_t & \\ & \mathbf{A}_t
\end{bmatrix}^H
\end{align}
which in a more compact form is
\begin{align}\label{H3}
\mathbf{H} = (\mathbf{I}_2\otimes\mathbf{A}_r) \mathbf{\Lambda} (\mathbf{I}_2\otimes\mathbf{A}_t)^H
\end{align}
where
\begin{align}
\mathbf{\Lambda} =
\begin{bmatrix}
\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(V_r,V_t)}\big) & \mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(V_r,H_t)}\big) \\
\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(H_r,V_t)}\big) & \mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(H_r,H_t)}\big)
\end{bmatrix}.
\end{align}
The model in \eqref{eq:3-Dmimo} has been advocated by the 3GPP as a standardized channel modeling approach for the long-term evolution (LTE) systems \cite{3gpp_stand,3gpp}.
As mentioned in \cite{3gpp_stand}, most standardized channels like spatial channel model (SCM), SCM extension (SCME) \cite{scme}, WINNER \cite{winner} and ITU \cite{itu} are based on this model.
The model assumes that the H-polarized subarray and the V-polarized subarray share the same array manifolds, while the polarization information is contained in the path-loss vectors, i.e., ${\boldsymbol{\beta}}^{( {\rm p},{\rm q})}$'s. This model has many favorable features. It concisely models the effect of polarization. More importantly, it incorporates the elevation information of the transmit and receive antenna arrays in addition to the azimuth information---leading to the so-called 3-D channel modeling, which is considered very useful for next generation wireless communication systems, since it provides many more degrees of freedom that can potentially enhance system performance; see detailed discussion in \cite{3gpp_stand,3gpp}.
In practice, the specific form of $\mathbf{a}_{r}(\theta_{k},\phi_{k})$ and $\mathbf{a}_{t}(\vartheta_{k},\varphi_{k})$ are intimately tangled with the array geometry.
For example, the BSs are usually equipped with uniform rectangular arrays (URAs), each with $M_x$ horizontal and $M_y$ vertical array units\footnote{Note that in the DP arrays, each array unit consists of a pair of DP antennas.}. An illustration is shown in Fig.~\ref{fig:ura}. In this special case, the $k$th steering vector for the transmitter becomes
\begin{align}\label{eq:ura_manifold}
\mathbf{a}_t(\vartheta_{k},\varphi_{k})=\mathbf{a}_{t,k} = \mathbf{a}_{y,k} \otimes \mathbf{a}_{x,k}
\end{align}
where $[\mathbf{a}_{x,k}]_{l_x} = e^{jl_x\omega_{x,k}},\ l_x = 0,\cdots,M_x-1$ and $[\mathbf{a}_{y,k}]_{l_y} = e^{jl_y\omega_{y,k}},\ l_y=0,\cdots,M_y-1$
with $\omega_{x,k}=2\pi d_x\sin(\varphi_k)\cos(\vartheta_k)/\nu$ and $\omega_{y,k} = 2\pi d_y\sin(\varphi_k)\sin(\vartheta_k)/\nu$. Here, $\nu$ is the wavelength, and $d_x$ and $d_y$ are the inter-element spacing distances for horizontal and vertical units, respectively.
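The Kronecker structure in \eqref{eq:ura_manifold} is easy to verify numerically. The sketch below (with illustrative sizes and angles) uses the common convention that array element $l$, counted from zero, carries phase $l\omega$:

```python
import numpy as np

Mx, My = 4, 3
dx = dy = 0.5          # half-wavelength spacing, in units of the wavelength nu
varphi, vartheta = np.deg2rad(40.0), np.deg2rad(25.0)  # elevation / azimuth DOD

# Spatial frequencies: element l carries phase l * omega
wx = 2 * np.pi * dx * np.sin(varphi) * np.cos(vartheta)
wy = 2 * np.pi * dy * np.sin(varphi) * np.sin(vartheta)

ax = np.exp(1j * wx * np.arange(Mx))
ay = np.exp(1j * wy * np.arange(My))
at = np.kron(ay, ax)   # a_t = a_y kron a_x, length Mx * My

# Entry (l_y, l_x) of the URA response matches the Kronecker form
ly, lx = 2, 3
assert np.isclose(at[ly * Mx + lx], np.exp(1j * (ly * wy + lx * wx)))
```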
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{ura}
\caption{Illustration of a URA at the BS. Each ``cross'' represents a DP antenna pair.}
\label{fig:ura}
\end{figure}
For ease of exposition, let us assume that the receivers employ uniform linear arrays (ULAs).
Note that the analytical tools that we use can be easily extended to cover cases where the receive array has a different geometry, e.g., URA.
Under the ULA assumption, the $k$th steering vector for the receiver is
\begin{align}
\mathbf{a}_{r}(\theta_{k},\phi_{k})=\mathbf{a}_{r,k} =
\begin{bmatrix}
1 & e^{j\omega_{r,k}} & \cdots & e^{j(M_r-1)\omega_{r,k}}
\end{bmatrix}^T
\end{align}
where $\omega_{r,k}=2\pi d_r\sin(\theta_k)/\nu$ with $d_r$ being the inter-element spacing of the ULA at the receiver side.
Note that to avoid phase wrapping, $d_x$ and $d_y$ must be carefully chosen. The most widely adopted choice for $d_x$ and $d_y$ is half-wavelength.
In this paper, given the angular ranges of $\varphi$, $\vartheta$ and $\theta$, we assume that $d_x$, $d_y$ and $d_r$ are determined such that $2\pi d_x\sin(\varphi)\cos(\vartheta)/\nu \leq \pi$, $2\pi d_y\sin(\varphi)\sin(\vartheta)/\nu \leq \pi$ and $2\pi d_r\sin(\theta)/\nu\leq \pi$ for all $\varphi,\vartheta,\theta$ in their respective ranges.
\subsection{Problem Statement}
Given the described channel model, our goal is to estimate the key parameters of the 3-D downlink channel.
Here, by ``key parameters'', we mean the set of DOAs and DODs that are associated with the paths and the corresponding path-losses, i.e., $\{\theta_k,\phi_k\}_{k=1}^K$, $\{\vartheta_{k},\varphi_{k}\}_{k=1}^K$ and ${\boldsymbol{\beta}}^{({\rm p},{\rm q})}$ for all ${\rm p}\in\{\mathrm{V_r,H_r}\}$ and ${\rm q}\in\{\mathrm{V_t,H_t}\}$.
Note that for massive MIMO systems that follow the channel model in \eqref{eq:3-Dmimo}, one is very well motivated to estimate these parameters. The reason is that the (potentially large) channel matrix ${\mathbf{H}}$ that has $4M_rM_t$ complex-valued elements is fully characterized by the parameters of interest. Since the number of multipaths is usually not large in practice, the number of parameters is relatively small, i.e., $8K$ ($2K$ for the DOAs, $2K$ for the DODs and $4K$ for the complex-valued path-losses) and we usually have
$$ 8K\ll 4M_rM_t. $$
For example, in a scenario where $M_r=M_t=20$ and $K=6$ paths exist, the channel matrix consists of $1,600$ complex-valued elements
while we only have 48 key parameters, of which $24$ are real-valued (DOAs and DODs).
Therefore, by estimating the key parameters, one can feedback the downlink channel to the BS in a very economical way---only feeding back the parameters rather than the whole channel ${\mathbf{H}}$ suffices to recover the downlink MIMO channel at the BS.
In fact, this is the main idea enabling implementation of limited feedback schemes in mmWave massive MIMO systems suggested by the 3GPP \cite{3gpp}.
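The count underlying this feedback saving is elementary and can be reproduced as follows (shown only for concreteness):

```python
# Parameter bookkeeping for the example above: M_r = M_t = 20, K = 6 paths.
M_r, M_t, K = 20, 20, 6

full_channel = 4 * M_r * M_t  # complex entries of the 2M_r-by-2M_t matrix H
key_params = 8 * K            # 2K DOAs + 2K DODs (real) + 4K complex path-losses

assert full_channel == 1600 and key_params == 48
```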
In this work, we begin with the scenario where the downlink channel matrix ${\mathbf{H}}$ can be estimated at the receiver via simple procedures such as matched filtering or linear least squares (LS).
Our objective is to estimate the key parameters from a given ${\mathbf{H}}$.
We will first study identifiability theory associated with this simpler scenario---i.e., under what conditions the DOAs, DODs and path-losses can be provably identified from ${\mathbf{H}}$? Effective algorithms based on the identifiability analysis will also be proposed.
In addition, we will consider more challenging yet desirable scenarios where only a compressed version of ${\mathbf{H}}$ is available at the receiver, and provide a pragmatic and effective algorithm to estimate the parameters of interest---this approach can substantially reduce the pilot length, thereby greatly saving the downlink training overhead.
\subsection{Prior Art and Challenges}
Estimating the DOAs, DODs and path-losses from ${\mathbf{H}}$ is not a trivial task.
Nevertheless, many classic methods from the array processing community can be applied, under some conditions.
For example, if $\mathbf{A}_r$, $\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(p,q)}\big)$ and $\mathbf{A}_t$ in the submatrix
$\mathbf{H}^\mathrm{(p,q)} = \mathbf{A}_r\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(p,q)}\big)\mathbf{A}_t^H$ all have full column rank, and if the receive and transmit antenna arrays are ULAs/URAs, then subspace methods such as ESPRIT and MUSIC can be applied to estimate the DOAs and DODs.
After that, the path-losses can be recovered, e.g., via LS estimation.
This is viable, but can only work under relatively stringent conditions.
The classic array processing methods like ESPRIT and MUSIC all assume
full column rank of $\mathbf{A}_r$, $\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(p,q)}\big)$, and $\mathbf{A}_t$, which implicitly assumes that $K\leq \min\{M_r,M_t\}$ (to be precise, $K+1\leq \min\{M_r,M_t\}$ is needed for subspace methods like MUSIC).
In many practical scenarios, $M_r$ is relatively small---e.g., the newest model of iPhone (i.e., iPhone X released in 2017) only supports two receive antennas, while the number of paths can easily exceed two. Is there a more powerful method that can provably identify the parameters of interest under much more relaxed conditions? We will address this question in the next section.
Another possible way to handle the parameter estimation problem is to formulate the recovery of each block $\mathbf{H}^\mathrm{(p,q)} = \mathbf{A}_r\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(p,q)}\big)\mathbf{A}_t^H$ as a sparse optimization problem \cite{jarvis,jomp}.
For each subchannel block, we discretize the DOA and DOD domains into fine angle grids and then construct three overcomplete angle dictionaries (codebooks), denoted by $\mathbf{D}_r$, $\mathbf{D}_x$ and $\mathbf{D}_y$. Then, we have $\mathbf{H}^\mathrm{(p,q)}\approx \mathbf{D}_r\mathbf{G}^\mathrm{(p,q)}(\mathbf{D}_y\otimes\mathbf{D}_x)^H$, where $\mathbf{G}^\mathrm{(p,q)}$ is a sparse matrix that selects out the columns associated with the active DODs and DOAs from the dictionaries. This way, the parameter estimation problem becomes a sparse recovery problem that can be handled by formulations such as LASSO \cite{tibshirani1996regression}, i.e., $$\min_{\mathbf{g}^\mathrm{(p,q)}}~\|\mathbf{h}^\mathrm{(p,q)}-(\mathbf{D}_y^\ast\otimes\mathbf{D}_x^\ast \otimes {\bf D}_r) \mathbf{g}^\mathrm{(p,q)}\|_2^2 + \lambda \|\mathbf{g}^\mathrm{(p,q)}\|_1$$
where $\mathbf{h}^\mathrm{(p,q)}={\rm vec}(\mathbf{H}^\mathrm{(p,q)})$ and $\mathbf{g}^\mathrm{(p,q)}={\rm vec}(\mathbf{G}^\mathrm{(p,q)})$; other sparse optimization algorithms, such as orthogonal matching pursuit \cite{jomp}, can also be used.
The difficulty is that to ensure good spatial resolution, $\mathbf{D}_r\in\mathbb{C}^{M_r\times D_r}$, $\mathbf{D}_x\in\mathbb{C}^{M_x\times D_x}$ and $\mathbf{D}_y\in\mathbb{C}^{M_y\times D_y}$ are very ``fat'' matrices, where $D_r$, $D_x$ and $D_y$ denotes the number of angle grid points after quantization. Consequently, $( \mathbf{D}_y^\ast\otimes\mathbf{D}_x^\ast \otimes \mathbf{D}_r)$ is of size $M_rM_xM_y\times D_rD_xD_y$. If one quantizes the DOA and DOD space (ranging from $-90^\circ$ to $90^\circ$) using a resolution of one degree, then $D_rD_xD_y=5,929,741$---which poses a very hard sparse optimization problem.
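For concreteness, the quoted dictionary size, together with a rough storage estimate for a hypothetical $M_r=M_x=M_y=8$ configuration (our illustrative choice, not a system assumption), follows from:

```python
# Size of the sparse-recovery dictionary for a 1-degree grid over [-90, 90].
D = 181                 # grid points per angular dimension
n_cols = D ** 3         # columns of D_y* kron D_x* kron D_r
assert n_cols == 5_929_741

# Memory for the dense dictionary alone (complex128, 16 bytes per entry),
# with an illustrative M_r * M_x * M_y = 8 * 8 * 8 rows:
rows = 8 * 8 * 8
gigabytes = rows * n_cols * 16 / 1e9
print(f"dense dictionary: ~{gigabytes:.0f} GB")
```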
\section{Proposed Approach}
In this section, we propose to estimate the key parameters of interest using low-rank tensor factorization---which has provable guarantees under realistic and relaxed conditions.
\subsection{Tensor Preliminaries}
To make the paper self-contained, we briefly present the definition of tensor and some useful theorems on the uniqueness of tensor decomposition in the following.
\begin{definition}\label{definition:tensor}
\textit{(Tensor)}. A tensor is a multidimensional array indexed by three or more indices. Specifically, an $N$th-order tensor $\pmb{\mathcal{X}}\in\mathbb{C}^{I_1\times \cdots \times I_N}$ that has $N$ latent factor matrices $\mathbf{U}_1,\ldots,\mathbf{U}_N$ can be written as
\begin{align*}
\pmb{\mathcal{X}}=\sum_{f=1}^{F} [\mathbf{U}_1]_{:,f} \circ \cdots \circ [\mathbf{U}_N]_{:,f}
\end{align*}
where $\mathbf{U}_n\in\mathbb{C}^{I_n\times F}$; the minimal such $F$ is the rank of $\pmb{\mathcal{X}}$, also known as the canonical polyadic decomposition (CPD) rank of $\pmb{\mathcal{X}}$ \cite{Sid2017}.
\end{definition}
\begin{definition}\label{definition:unfolding}
(\textit{Unfolding}). For an $N$th-order tensor $\pmb{\mathcal{X}}\in\mathbb{C}^{I_1\times \cdots \times I_N}$ as in Definition \ref{definition:tensor}, its $n$-mode matrix unfolding can be written as
\begin{align*}
\mathbf{X}_{(n)} = (\mathbf{U}_N\odot\cdots\odot\mathbf{U}_{n+1}\odot\mathbf{U}_{n-1}\odot\cdots\odot\mathbf{U}_1) \mathbf{U}_n^T.
\end{align*}
\end{definition}
Simply speaking, each unfolding is obtained by taking the mode-$n$ slabs of the tensor (i.e., subtensors obtained by fixing the $n$th index of the original tensor), vectorizing the slabs, and then stacking all the vectors into a matrix---see details in \cite{Sid2017}.
Low-rank tensor decomposition [also known as CPD or Parallel Factor Analysis (PARAFAC)] aims at factoring $\pmb{\mathcal{X}}$
into a sum of column-wise outer products of $\mathbf{U}_1,\ldots,\mathbf{U}_N$---with each such outer product being a rank-one tensor.
Unlike matrix factorization, which is in general non-unique, the PARAFAC decomposition has a unique solution under mild conditions, up to scaling and permutation of the $F$ components. One of the best-known uniqueness results for third-order tensors is due to Kruskal \cite{kruskal}, which was later extended to higher orders by Sidiropoulos and Bro \cite{nikos1}.
\begin{theorem}\label{kruskal:uniqueness}
\cite{nikos1} Given an $N$th-order tensor as in Definition \ref{definition:tensor}, if $\sum_{n=1}^{N}{\rm krank}(\mathbf{U}_n)\geq 2F+N-1$, then $\mathrm{rank}(\pmb{\mathcal{X}})=F$ and the decomposition of $\pmb{\mathcal{X}}$ is essentially unique, where ${\rm krank}(\cdot)$ denotes the {\em Kruskal rank}.
\end{theorem}
The essential uniqueness makes the latent factors of a tensor identifiable from the `ambient data' $\pmb{\mathcal{X}}$ up to some trivial ambiguities---which has enabled a tremendous amount of applications---see an overview in \cite{Sid2017}.
Theorem~\ref{kruskal:uniqueness} is known to be a broadly applicable general result.
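To get a feel for the reach of Theorem~\ref{kruskal:uniqueness}, recall that randomly drawn (generic) factors satisfy ${\rm krank}(\mathbf{U}_n)=\min(I_n,F)$ almost surely. A hypothetical helper that evaluates the sufficient condition for a fourth-order tensor (the dimensions below are illustrative, not claims about any particular system):

```python
def kruskal_ok(dims, F):
    """Sufficient uniqueness condition of the Sidiropoulos-Bro theorem,
    evaluated for generic factors with krank(U_n) = min(I_n, F)."""
    N = len(dims)
    return sum(min(I, F) for I in dims) >= 2 * F + N - 1

# Modes of the channel tensor: (M_r, M_x, M_y, 4 polarization combinations)
assert kruskal_ok((4, 4, 4, 4), F=6)      # K = 6 paths: condition holds
assert not kruskal_ok((4, 4, 4, 4), F=7)  # K = 7: condition fails
assert kruskal_ok((2, 4, 4, 4), F=5)      # holds even with only M_r = 2
```

Note the last case: the condition can hold even when the number of paths exceeds the number of receive antennas, which is exactly the regime where subspace methods break down.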
In some special cases where the factor matrices have special structure, e.g., Vandermonde structure, the uniqueness condition in Theorem \ref{kruskal:uniqueness} can be improved. For example, we have the following theorem:
\begin{theorem}\cite{sid2001ident}
Consider a tensor $\pmb{\mathcal{X}}=\sum_{f=1}^{F} [\mathbf{U}_1]_{:,f} \circ [\mathbf{U}_2]_{:,f} \circ [\mathbf{U}_3]_{:,f}$, where $\mathbf{U}_1\in\mathbb{C}^{I_1\times F}$, ${\mathbf{U}_2}\in\mathbb{C}^{I_2\times F}$ and ${\mathbf{U}_3}\in\mathbb{C}^{I_3\times F}$ with $\mathbf{U}_1$ Vandermonde
having distinct nonzero generators. Then, if
$${\rm krank}({\mathbf{U}_2})+\min\{I_1 + {\rm krank}({\mathbf{U}_3}),2F\}\geq 2F+2 $$
the factors are essentially unique.\label{theorem:sid2011ident}
\end{theorem}
Theorem~\ref{theorem:sid2011ident} presents a much milder identifiability condition relative to that in Theorem~\ref{kruskal:uniqueness}. The result is tailored for tensors that have a latent factor with Vandermonde structure. Such structure emerges quite often in array processing, since some array geometries (ULA, URA, nested or coprime arrays)
naturally give rise to Vandermonde matrices.
\subsection{Tensor-Based Method and Identifiability}\label{section3a}
Our proposed approach starts by noticing that ${\mathbf{H}}$ is in fact a tensor of rank (at most) $K$, when the BS is equipped with a URA and the receiver with a ULA.
To see this, let us first vectorize each block $\mathbf{H}^\mathrm{(p,q)}$ as
$\mathbf{h}^\mathrm{(p,q)} = \mathrm{vec}(\mathbf{H}^\mathrm{(p,q)}) = \left(\mathbf{A}_y^*\odot\mathbf{A}_x^*\odot\mathbf{A}_r\right)\boldsymbol{\beta}^\mathrm{(p,q)}$, and stack the vectorized $\mathbf{H}^\mathrm{(p,q)}$'s into a matrix as follows:
\begin{align}\label{H2}
\check{\mathbf{H}} &= \left[ \mathbf{h}^\mathrm{(V_r,V_t)}, \mathbf{h}^\mathrm{(V_r,H_t)}, \mathbf{h}^\mathrm{(H_r,V_t)}, \mathbf{h}^\mathrm{(H_r,H_t)} \right] \notag\\
& = \left(\mathbf{A}_y^*\odot\mathbf{A}_x^*\odot\mathbf{A}_r\right)\mathbf{B}^T
\end{align}
where
$\mathbf{A}_x=[\mathbf{a}_{x,1}\ \cdots\ \mathbf{a}_{x,K}]$ and $\mathbf{A}_y=[\mathbf{a}_{y,1}\ \cdots\ \mathbf{a}_{y,K}]$ and
\begin{align}
\mathbf{B} = [\boldsymbol{\beta}^\mathrm{(V_r,V_t)}\; \boldsymbol{\beta}^\mathrm{(V_r,H_t)}\; \boldsymbol{\beta}^\mathrm{(H_r,V_t)}\; \boldsymbol{\beta}^\mathrm{(H_r,H_t)}]^T \in\mathbb{C}^{4\times K}
\end{align}
in which we have used \eqref{eq:ura_manifold} (or, more precisely, $\mathbf{A}_t=\mathbf{A}_{y}\odot\mathbf{A}_x$) and ${\rm vec}({\mathbf{X}}{\rm diag}({\mathbf{z}}){\mathbf{Y}}^H)=({\mathbf{Y}}^\ast\odot{\mathbf{X}}){\mathbf{z}}$.
Eq. \eqref{H2} is exactly the definition of a four-slab fourth-order tensor of rank $\leq K$ in the matrix unfolding form \cite{Sid2017} when $\min(M_r,M_x,M_y)>1$ (cf. Definition~\ref{definition:unfolding}).
We have this fourth-order tensor because of the array manifolds that we have assumed for the transmit and receive arrays.
The four factor matrices $\mathbf{A}_r$, ($\mathbf{A}_x$, $\mathbf{A}_y$) and $\mathbf{B}$ are the manifold of the receive antenna array, the manifold matrices of the (horizontal and vertical) transmit antenna array, and the path-loss matrix, respectively.
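The vectorization identity and the Khatri-Rao construction underlying \eqref{H2} can be checked numerically. In the sketch below (illustrative sizes, randomly drawn matrices), the helper \texttt{khatri\_rao} is our own and follows the ``first factor varies slowest'' row ordering, which matches column-major (\texttt{order='F'}) vectorization:

```python
import numpy as np

rng = np.random.default_rng(1)

def khatri_rao(A, B):
    """Column-wise Kronecker product: (A kr B)[:, k] = kron(A[:, k], B[:, k])."""
    I, F = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, F)

M, N, K = 3, 4, 2
X = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
Y = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
z = rng.standard_normal(K) + 1j * rng.standard_normal(K)

# vec(X diag(z) Y^H) = (Y^* kr X) z, with vec() stacking columns
lhs = (X @ np.diag(z) @ Y.conj().T).reshape(-1, order="F")
rhs = khatri_rao(Y.conj(), X) @ z
assert np.allclose(lhs, rhs)
```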
A side comment is that if some other array geometries are employed at both sides, one can also derive a low-rank tensor from blocks of $\mathbf{H}$ by rearranging.
We list the resulting tensor structure of some pertinent cases for widely used configurations of the transmit and receive antenna arrays in Table I.
\begin{table}[t]\label{table1}
\begin{center}
\caption{Tensor order of $\mathbf{H}$ }
\begin{tabular}{ ll|c }
\toprule
\multicolumn{2}{c}{Array Configuration} & \multicolumn{1}{c}{Tensor order of $\mathbf{H}$}\\
\midrule
Tx & Rx & $M_r>1$ \\
\midrule
{DP-URA} & {DP-ULA} & {Four} \\
{DP-URA} & {DP-URA} & {Five} \\
DP-URA & DP-UCA & Four \\
DP-UCA & DP-URA & Four \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsubsection{Array Manifold Estimation}
Our idea is to estimate $\mathbf{A}_x$, $\mathbf{A}_y$, $\mathbf{A}_r$ and $\mathbf{B}$ from the tensor $\check{\mathbf{H}}$, and then estimate the multipath parameters using the estimated factor matrices.
As will be shown shortly, if $\mathbf{A}_x$, $\mathbf{A}_y$ and $\mathbf{A}_r$ are accurately estimated, the DOAs and DODs can then be estimated in closed-form.
To estimate the array manifolds and the path-losses (i.e., ${\mathbf{B}}$), we propose to employ the following tensor decomposition formulation:
\begin{align}\label{Prob:H1}
\min_{\mathbf{A}_r,\mathbf{A}_x,\mathbf{A}_y,\mathbf{B}} \left\| \check{\mathbf{H}} - \left(\mathbf{A}_y^*\odot\mathbf{A}_x^*\odot\mathbf{A}_r\right)\mathbf{B}^T \right\|_F^2,
\end{align}
which is the least squares fitting formulation for low-rank tensor factorization.
Various low-rank tensor decomposition algorithms can be applied to identify the loading matrices \cite{parafac2,parafac3,Sid2017}. Among them, one of the most popular methods is the so-called alternating least squares (ALS) technique. To implement ALS for solving \eqref{Prob:H1}, we make use of different unfoldings of the tensor $\check{\mathbf{H}}$, which are denoted as follows:
\begin{align}
\mathbf{H}_{(1)} &= (\mathbf{B}\odot\mathbf{A}_y^*\odot\mathbf{A}_x^*) \mathbf{A}_r^T \\
\mathbf{H}_{(2)} &= (\mathbf{B}\odot\mathbf{A}_y^*\odot\mathbf{A}_r) \mathbf{A}_x^H \\
\mathbf{H}_{(3)} &= (\mathbf{B}\odot\mathbf{A}_x^*\odot\mathbf{A}_r) \mathbf{A}_y^H \\
\mathbf{H}_{(4)} &= (\mathbf{A}_y^*\odot\mathbf{A}_x^*\odot\mathbf{A}_r) \mathbf{B}^T.
\end{align}
Note that $\mathbf{H}_{(4)}$ is exactly $\check{\mathbf{H}}$.
Using the unfoldings, one can easily implement the following alternating optimization algorithm:
\begin{subequations}\label{eq:ten}
\begin{align}
\mathbf{A}_r &\leftarrow \arg\min_{\mathbf{A}_r}
\left\|\mathbf{H}_{(1)}-(\mathbf{B}\odot\mathbf{A}_y^*\odot\mathbf{A}_x^* )\mathbf{A}_r^T\right\|_F^2 \label{sp1} \\
\mathbf{A}_x &\leftarrow \arg\min_{\mathbf{A}_x}
\left\|\mathbf{H}_{(2)}-(\mathbf{B}\odot\mathbf{A}_y^*\odot\mathbf{A}_r) \mathbf{A}_x^H\right\|_F^2 \label{sp2}\\
\mathbf{A}_y &\leftarrow \arg\min_{\mathbf{A}_y}
\left\|\mathbf{H}_{(3)}-(\mathbf{B}\odot\mathbf{A}_x^*\odot\mathbf{A}_r) \mathbf{A}_y^H\right\|_F^2 \label{sp3}\\
\mathbf{B} &\leftarrow \arg\min_{\mathbf{B}}
\left\|\mathbf{H}_{(4)}-(\mathbf{A}_y^*\odot\mathbf{A}_x^*\odot\mathbf{A}_r) \mathbf{B}^T\right\|_F^2 \label{sp4}
\end{align}
\end{subequations}
where the four subproblems are all linear LS problems that can be readily solved in closed-form.
The ALS algorithm repeatedly solves the subproblems until convergence. Derivative-based schemes can also be used for optimization, from Gauss-Newton and BFGS to simple stochastic gradient type methods. See \cite{Sid2017} for more information.
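To make the ALS sweep concrete, the following minimal Python sketch implements a generic CPD-ALS via mode unfoldings (the function names and the normal-equations shortcut are ours; a full implementation would add a convergence test and, possibly, regularization):

```python
import numpy as np

def khatri_rao(mats):
    # column-wise Kronecker product; the first factor varies slowest
    out = mats[0]
    for m in mats[1:]:
        out = np.einsum('ik,jk->ijk', out, m).reshape(-1, out.shape[1])
    return out

def unfold(T, n):
    # mode-n unfolding consistent with the khatri_rao ordering above
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def cpd_als(T, factors, n_iter=100):
    # each update is the closed-form solution of a linear LS subproblem;
    # the Gram of a Khatri-Rao product is the Hadamard product of the
    # small K-by-K factor Grams, so only a K-by-K system is solved
    factors = [F.astype(complex).copy() for F in factors]
    for _ in range(n_iter):
        for n in range(len(factors)):
            others = [F for m, F in enumerate(factors) if m != n]
            K = factors[n].shape[1]
            G = np.ones((K, K), dtype=complex)
            for F in others:
                G = G * (F.conj().T @ F)
            rhs = khatri_rao(others).conj().T @ unfold(T, n).T
            factors[n] = np.linalg.solve(G, rhs).T
    return factors
```

The Hadamard-product identity $(\mathbf{A}\odot\mathbf{B})^H(\mathbf{A}\odot\mathbf{B})=(\mathbf{A}^H\mathbf{A})\circledast(\mathbf{B}^H\mathbf{B})$ is what keeps each subproblem cheap.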
\subsubsection{Parameter Estimation}
Once $\mathbf{A}_r$, $\mathbf{A}_x$, and $\mathbf{A}_y$ are obtained from \eqref{eq:ten}, the angles $\theta_k,\vartheta_{k}$ and $\varphi_{k}$ can be estimated by exploiting the manifold structure of $\mathbf{A}_{r,k},\mathbf{A}_{x,k}$ and $\mathbf{A}_{y,k}$.
To proceed, let us consider the following:
\begin{align}
\hat{\omega}_{r,k} &= \angle(\overline{\mathbf{A}}_{r,k}^H\underline{\mathbf{A}}_{r,k}) \label{omega_r}\\
\hat{\omega}_{x,k} &= \angle(\overline{\mathbf{A}}_{x,k}^H\underline{\mathbf{A}}_{x,k}) \label{omega_x}\\
\hat{\omega}_{y,k} &= \angle(\overline{\mathbf{A}}_{y,k}^H\underline{\mathbf{A}}_{y,k}) \label{omega_y}
\end{align}
where $\overline{\mathbf{x}}$ and $\underline{\mathbf{x}}$ denote the vectors consisting of the first and last $(M-1)$ entries of a length-$M$ vector $\mathbf{x}$, respectively. Then we estimate $\theta_k$ from $\hat{\omega}_{r,k}$, and $\vartheta_{k}$ and $\varphi_{k}$ from $\hat{\omega}_{x,k}$ and $\hat{\omega}_{y,k}$ as
\begin{align}
\hat{\theta}_{k} &= \sin^{-1}\left( \frac{\nu}{2\pi d_r}\hat\omega_{r,k} \right) \label{theta}\\
\hat{\varphi}_k &= \sin^{-1}\left( \sqrt{\left( \frac{\nu}{2\pi d_x}\hat{\omega}_{x,k}\right)^2 + \left( \frac{\nu}{2\pi d_y}\hat{\omega}_{y,k}\right)^2} \right) \label{varphi}\\
\hat{\vartheta}_k &= \tan^{-1}\left( \frac{d_x\hat{\omega}_{y,k} }{d_y\hat{\omega}_{x,k}} \right). \label{vartheta}
\end{align}
Eqs.~\eqref{theta}-\eqref{vartheta} hold because we have assumed that the transmit and receive antenna arrays are a URA and a ULA, respectively. The closed-form solutions are also {\it rotationally invariant}---not affected by the scaling ambiguity that is brought by tensor decomposition.
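For concreteness, \eqref{omega_r}-\eqref{vartheta} can be sketched in a few lines of Python (variable names are ours; half-wavelength spacing is assumed). Multiplying each manifold column by an arbitrary nonzero complex scalar leaves the estimates unchanged, which is exactly the invariance to the decomposition's scaling ambiguity noted above:

```python
import numpy as np

def steer(omega, M):
    # Vandermonde steering vector with phase progression omega
    return np.exp(1j * omega * np.arange(M))

def phase_diff(a):
    # angle of the inner product of the first and last M-1 entries
    return np.angle(np.vdot(a[:-1], a[1:]))

def angles(a_r, a_x, a_y, d_r=0.5, d_x=0.5, d_y=0.5, nu=1.0):
    wr, wx, wy = phase_diff(a_r), phase_diff(a_x), phase_diff(a_y)
    theta = np.arcsin(nu / (2 * np.pi * d_r) * wr)
    varphi = np.arcsin(np.hypot(nu / (2 * np.pi * d_x) * wx,
                                nu / (2 * np.pi * d_y) * wy))
    vartheta = np.arctan2(d_x * wy, d_y * wx)
    return theta, varphi, vartheta
```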
We should mention that the tensor decomposition algorithm in \eqref{eq:ten} has already given an initial estimate of ${\mathbf{B}}$, i.e., the path-losses.
However, since there is an intrinsic scaling ambiguity of tensor decomposition (cf. Lemma~1 in Appendix A), such an initial estimate may not be useful. Nevertheless, this is easy to fix. Note that $\hat{\mathbf{A}}_r,\hat{\mathbf{A}}_{x},\hat{\mathbf{A}}_y$ {\it can be reconstructed} from $\theta_k,\vartheta_{k}$ and $\varphi_{k}$ without scaling ambiguity.
Then, the estimate of ${\mathbf{B}}$ without scaling ambiguity can be computed as
\begin{align}\label{eq:recal_B}
\hat{\mathbf{B}} &\leftarrow \arg\min_{\mathbf{B}}
\left\|\mathbf{H}_{(4)}-(\hat\mathbf{A}_y^*\odot\hat\mathbf{A}_x^*\odot\hat\mathbf{A}_r) \mathbf{B}^T\right\|_F^2.
\end{align}
Algorithm 1 summarizes the tensor-based parameter estimation, where in the first step we have assumed that the pilot matrix ${\mathbf{S}}$ has orthogonal rows so that $\mathbf{X}\mathbf{S}^H$ gives a fairly accurate estimate of $\mathbf{H}$. Note that the order of applying tensor decomposition, angle estimation, and path-loss estimation matters---since angle estimation naturally removes the scaling ambiguity, as we discussed.
Note that the complexity of \eqref{omega_r}-\eqref{omega_y} is very low but the resulting estimate could be suboptimal. For better accuracy, one can resort to single-tone frequency estimation algorithms, e.g., \cite{freqest1,freqest2} or maximum likelihood (ML)-based (periodogram) methods, to estimate the DODs and DOAs from the estimated manifolds. These latter methods are statistically efficient (approximately) in the high SNR regime, but are computationally more demanding than the simple closed-form solutions provided earlier.
\subsubsection{Parameter Identifiability}
As we mentioned, the channel parameters can be estimated using some other methods, e.g., MUSIC and ESPRIT, which possibly admit lower complexity compared to tensor decomposition.
However, a salient feature of tensors is that the factors are uniquely identifiable under mild conditions. For example, it can be shown that $\{\mathbf{A}_r,\mathbf{A}_x,\mathbf{A}_y,\mathbf{B}\}$ meet the $k$-rank condition \cite{sid2001ident} provided that the DOAs, DODs and path-losses are distinct across paths, which is a mild condition considering the random nature of the multiple paths. Then we have the following theorem:
\begin{theorem}\label{theoremrank}
Consider the scenario where the transmitter is equipped with a URA and the receiver with a ULA,
and that $(\theta_k,\vartheta_{k},\varphi_{k})$ and $(\theta_j,\vartheta_j,\varphi_j)$ are different for any $k\neq j$.
Also assume that the pathloss parameters in $\mathbf{B}$ are generated following some jointly continuous distribution.
Then, the key parameters $\{\theta_k,\vartheta_{k},\varphi_{k}\}_{k=1}^K$ and the path-losses ${\boldsymbol{\beta}}^{{\rm (p,q)}}$ for all ${\rm p}\in\{\mathrm{V_r,H_r}\}$ and ${\rm q}\in\{\mathrm{V_t,H_t}\}$ are uniquely identifiable via the proposed approach provided that
\begin{align}\label{identifiability1}
\min{(M_r,K)} + \min(M_x,K) &+ \min(M_y,K) \notag\\
& + \min{(4,K)} \geq 2K + 3
\end{align}
almost surely.
\end{theorem}
The proof relies on the identifiability of the four-slab four-way tensors and is relegated to Appendix A.
Although the proof is relatively straightforward for readers familiar with tensor decomposition,
the implication of Theorem~\ref{theoremrank} is important: using the proposed approach, the key parameters are uniquely identifiable even when the number of paths largely exceeds $\min\{M_r, M_t\}$. This makes the proposed method widely applicable to many realistic scenarios, especially where the receive antenna array is of a relatively small size, as in many mobile phones. Furthermore, this identifiability is independent of the array configurations of the transmit and receive antennas.
Theorem~\ref{theoremrank} is intuitive and easy to read, but is not the best bound that we can get.
In fact, if we look at the parameter estimation problem from a multi-snapshot 2-D harmonic retrieval viewpoint, a much stronger identifiability result can be obtained, which is summarized in the following theorem:
\begin{theorem}\label{theorem2}
The parameters $\big\{\theta_k,\varphi_k,\vartheta_k,\beta_k^{(\rm p,q)}\big\}$ are all uniquely identifiable provided that
\begin{align}
K\leq \max_{F,P_r,P_x,P_y\in\mathbb{Z}^+} &~F\notag\\
\mathrm{s. t.}\quad & \max\Big((P_r-1)P_xP_y, P_r(P_x-1)P_y, \notag\\
&\quad\qquad P_rP_x(P_y-1)\Big) \geq F \notag\\
& 8Q_rQ_xQ_y \geq F
\end{align}
where $P_r+Q_r=M_r+1, P_x+Q_x=M_x+1,P_y+Q_y=M_y+1$.
\end{theorem}
Theorem \ref{theorem2} can be proven by invoking identifiability results in multi-dimensional harmonic retrieval, in particular, the construction following the IMDF algorithm \cite{liu2}. The result in Theorem~\ref{theorem2} is a bit harder to parse than Theorem~\ref{theoremrank}, but is far stronger upon close inspection. For example, when $M_x=4$, $M_y=8$, and $M_r=2$, Theorem~\ref{theoremrank} guarantees identifiability only for up to $K=7$ paths, while up to $K=32$ multipaths can be guaranteed to be identified under Theorem \ref{theorem2}.
Furthermore, even when the MS only has a single dual-polarized antenna, it can be shown using the IMDF based approach that the number of identifiable paths is upper bounded by $K < 0.8187 M_t$. For details about the IMDF method, we refer the readers to \cite{liu2}.
Due to its simplicity, IMDF is a good candidate either for initializing the method proposed in Section III-B or for direct application when computational efficiency is at a premium.
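The bound comparison quoted above can be reproduced by brute force; the following sketch (helper names ours) enumerates the conditions of the two theorems:

```python
def k_max_thm1(Mr, Mx, My):
    # largest K satisfying the k-rank condition \eqref{identifiability1}
    return max(K for K in range(1, Mr * Mx * My + 1)
               if min(Mr, K) + min(Mx, K) + min(My, K) + min(4, K) >= 2 * K + 3)

def k_max_thm2(Mr, Mx, My):
    # brute-force search over the integer splittings in Theorem 2
    best = 0
    for Pr in range(1, Mr + 1):
        for Px in range(1, Mx + 1):
            for Py in range(1, My + 1):
                Qr, Qx, Qy = Mr + 1 - Pr, Mx + 1 - Px, My + 1 - Py
                F = min(max((Pr - 1) * Px * Py,
                            Pr * (Px - 1) * Py,
                            Pr * Px * (Py - 1)),
                        8 * Qr * Qx * Qy)
                best = max(best, F)
    return best
```

For the example in the text, `k_max_thm1(2, 4, 8)` returns 7 and `k_max_thm2(2, 4, 8)` returns 32.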
\subsubsection{Computational Complexity}
We should remark that the complexity of the proposed tensor decomposition method is dominated by the step for solving \eqref{eq:ten}, where each subproblem is a least squares problem. Taking \eqref{sp1} as an example, the solution to this subproblem is as follows:
\begin{align*}
\hat{\mathbf{A}}_r^T =& \((\mathbf{B}^H\mathbf{B})\circledast(\mathbf{A}_y^T\mathbf{A}_y^*)\circledast(\mathbf{A}_x^T\mathbf{A}_x^*)\)^{-1}\times \notag\\
&\(\mathbf{B}\odot\mathbf{A}_y^*\odot\mathbf{A}_x^*\)^H\mathbf{H}_{(1)},
\end{align*}
which needs $\mathcal{O}\big( (4+M_x+M_y)K^2 + 4K^2+ K^3+4K^2M_xM_y + 4KM_rM_xM_y \big)$ flops if one uses the above relatively naive implementation.
The matrix inversion and large matrix product (i.e., $\(\mathbf{B}\odot\mathbf{A}_y^*\odot\mathbf{A}_x^*\)^H\mathbf{H}_{(1)}$) parts are the most costly to compute.
Nevertheless, these two operations can be avoided if some advanced solvers are employed, e.g., \cite{parafac2,xu2013block}.
\section{Parameter Identification Using Frugal Pilots}
In the previous section, we proposed a tensor decomposition-based method for estimating the DOAs, DODs, and path-losses of the MIMO 3-D channel when a reliable estimate of ${\mathbf{H}}$ is available. This is viable when ${\mathbf{S}}$ is `fat' or square with full row rank. In other words, when the pilot sequence is long enough so that ${\mathbf{S}}$ has full row rank, ${\mathbf{H}}$ can be estimated via least squares (or simply matched filtering if $\mathbf{S}\mathbf{S}^H=\mathbf{I}$). Then, the method that is proposed in the previous section can be applied.
In practice, using a long pilot sequence is not desirable since it creates large downlink training overhead. When $M_t$ is large, the size of ${\mathbf{S}}$ is at least $2M_t\times 2M_t$ if one wishes to make the rows of ${\mathbf{S}}$ orthogonal to each other, which could be costly.
In this section, we propose another approach to handle the above challenge. We carefully design the transmit pilot sequence and formulate a \textit{compressed tensor decomposition (CTD)} problem for parameter estimation. As it turns out, we can use a pilot matrix whose size is much smaller than $2M_t \times 2M_t$ to identify the channel parameters.
\subsection{Proposed Downlink Training and Parameter Identification Approach}
To reduce downlink overhead in a massive MIMO system while keeping identifiability of the key parameters of interest, we propose to employ the following specially structured pilot matrix:
\begin{equation}\label{eq:pilot}
{\mathbf{S}}=\begin{bmatrix}
\mathbf{Q}&\boldsymbol{0}\\
\boldsymbol{0}&\mathbf{Q}
\end{bmatrix} \in \mathbb{C}^{2M_t\times N}
\end{equation}
where $\mathbf{Q}\in\mathbb{R}^{M_t\times N/2}$ (assuming $N$ is even for simplicity) is a matrix whose elements are generated following a certain absolutely continuous distribution, and $N\in[4,M_t)$.
This way, the (noise-free) received data matrix becomes
\begin{align}\label{eq:new_channel}
\mathbf{X} = \begin{bmatrix}
\mathbf{A}_r\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(V_r,V_t)}\big)\mathbf{A}_t^H\mathbf{Q} & \mathbf{A}_r\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(V_r,H_t)}\big)\mathbf{A}_t^H\mathbf{Q}\\
\mathbf{A}_r\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(H_r,V_t)}\big)\mathbf{A}_t^H\mathbf{Q} & \mathbf{A}_r\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(H_r,H_t)}\big)\mathbf{A}_t^H\mathbf{Q}
\end{bmatrix}.
\end{align}
Given the above $\mathbf{X}$, our goal is to identify $\mathbf{A}_r$, $\mathbf{A}_t$, $\boldsymbol{\beta}^{\rm ( p,q)}$---i.e., the path losses and the array manifolds, since all the key parameters can be easily estimated from them as described in the previous section.
Physically, the proposed design of $\mathbf{S}$ in \eqref{eq:pilot} corresponds to a time-division multiplexing strategy that transmits pilots from the H-polarized array first, and then transmits the same pilots from the V-polarized array (or the other way around), with the other turned off---which is very easy to implement in practice. Nevertheless, as we will show, such a simple signaling strategy combined with tensor algebra allows us to identify the parameters of interest under very mild conditions---even when the number of columns of $\mathbf{S}$ is much smaller than $2M_t$.
\subsection{Identification Approach and Theoretical Guarantees}
One can see that the four blocks in \eqref{eq:new_channel} comprise a four-slab three-way tensor, where the $(\rm p,q)$th block is defined as
\begin{align}\label{Xi}
\mathbf{X}^{\rm (p,q)}=\mathbf{A}_r\mathrm{diag}\big(\boldsymbol{\beta}^\mathrm{(\rm p,q)}\big)\mathbf{A}_t^H\mathbf{Q}
\end{align}
for $\mathrm{p}\in\{\mathrm{V_r,H_r}\}$ and $\mathrm{q}\in\{\mathrm{V_t,H_t}\}$. Taking the transpose of $\mathbf{X}^{\rm (p,q)}$ and then vectorizing it yields
\begin{align}
\mathbf{x}^{\rm (p,q)}=\(\mathbf{A}_r\odot(\mathbf{Q}^T\mathbf{A}_t^*)\)\boldsymbol{\beta}^\mathrm{(\rm p,q)}.
\end{align}
Thus, by collecting $\{\mathbf{x}^{\rm (p,q)}\}$, we have
\begin{align}\label{Z}
\mathbf{Z} &= \[ \mathbf{x}^{\rm (V_r,V_t)} ~ \mathbf{x}^{\rm (V_r,H_t)} ~ \mathbf{x}^{\rm (H_r,V_t)} ~ \mathbf{x}^{\rm (H_r,H_t)} \] \notag\\
&= \(\mathbf{A}_r\odot\mathbf{Q}^T\mathbf{A}_t^*\)\mathbf{B}^T \in \mathbb{C}^{NM_r/2\times 4}.
\end{align}
It is readily seen that $\mathbf{Z}$ is nothing but a matrix unfolding of a third-order tensor whose latent factors are $\mathbf{Q}^T\mathbf{A}_t^*$, $\mathbf{B}$ and $\mathbf{A}_r$. It immediately follows that $\mathbf{Q}^T\mathbf{A}_t^*$, $\mathbf{B}$ and $\mathbf{A}_r$ can be identified under the proposed pilot design, subject to certain conditions.
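This unfolding relation is easy to verify numerically. In the sketch below (Python, with random stand-ins for the manifolds and a `khatri_rao` helper in which the first factor varies slowest), `Z` built from the four received blocks matches the model in \eqref{Z}:

```python
import numpy as np

def khatri_rao(A, B):
    # column-wise Kronecker product; the first factor varies slowest
    return np.einsum('ik,jk->ijk', A, B).reshape(-1, A.shape[1])

rng = np.random.default_rng(1)
Mr, Mt, K, N2 = 4, 16, 3, 6  # N2 = N/2 pilot snapshots per polarization
A_r = rng.standard_normal((Mr, K)) + 1j * rng.standard_normal((Mr, K))
A_t = rng.standard_normal((Mt, K)) + 1j * rng.standard_normal((Mt, K))
Q = rng.standard_normal((Mt, N2))
B = rng.standard_normal((4, K)) + 1j * rng.standard_normal((4, K))  # rows: beta^(p,q)

# stack vec((X^(p,q))^T) for the four polarization pairs, cf. \eqref{Xi}-\eqref{Z}
cols = []
for beta in B:
    X_pq = A_r @ np.diag(beta) @ A_t.conj().T @ Q
    cols.append(X_pq.T.flatten(order='F'))  # column-major vec of the transpose
Z = np.column_stack(cols)

# Z is a matrix unfolding of a third-order tensor with factors
# A_r, Q^T A_t^*, and B
Z_model = khatri_rao(A_r, Q.T @ A_t.conj()) @ B.T
```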
Hence, at least the angles of arrival can be easily estimated from $\hat{\mathbf{A}}_r$, under quite mild conditions that guarantee identifiability of third-order tensors, as stated in Theorem~\ref{kruskal:uniqueness}. Let
\begin{align}\label{E}
\mathbf{E}=(\mathbf{Q}^T\mathbf{A}_t^*)^*=\mathbf{Q}^H\mathbf{A}_t.
\end{align}
Estimating $\mathbf{A}_t$ from the compressed measurements $\mathbf{E}$ is nontrivial---can we uniquely identify the DODs from $\mathbf{E}$? The answer is affirmative---if the transmit array is a ULA or URA and certain mild conditions are satisfied, as we will explain shortly.
\subsubsection{Manifold Estimation via Smoothed ESPRIT}
There are many ways to estimate $\mathbf{Q}^T\mathbf{A}_t^*$, $\mathbf{B}$ and $\mathbf{A}_r$ from $\mathbf{Z}$, since this is nothing but a third-order tensor decomposition problem.
The identifiability of this kind of tensor (in which at least one factor has a Vandermonde structure) is also well understood---e.g., the aforementioned Theorem~2 that was derived in \cite{sid2001ident}.
Here, we propose to employ a method that was recently proposed in \cite{smoothESPRIT} to handle this problem.
The method is in essence a subspace method, which works under very relaxed identifiability conditions by exploiting the Vandermonde structure of a latent factor of the tensor.
The detailed proof of the algorithm can be found in \cite{smoothESPRIT}. Here, we refer to this algorithm as the \textit{smoothed ESPRIT} algorithm.
The algorithm starts by working with a third-order tensor as follows (with a bit of notational abuse):
\begin{align}\label{X}
\mathbf{X} = (\mathbf{A}\odot\mathbf{B})\mathbf{C}^T
\end{align}
where $\mathbf{A}\in\mathbb{C}^{I\times F}$ is Vandermonde with distinct nonzero generators, $\mathbf{B}\in\mathbb{C}^{J\times F}$ and $\mathbf{C}\in\mathbb{C}^{K\times F}$ are drawn from some continuous distributions, and $J\leq K$.
To identify $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$, one can employ the following procedure:
First, let us define a cyclic selection matrix $\mathbf{J}_{i_2}=[\boldsymbol{0}_{I_1\times i_2}~ \mathbf{I}_{I_1}~ \boldsymbol{0}_{I_1\times (I-i_2-I_1)}]$. It is easy to check that
\begin{align}
(\mathbf{J}_{i_2}\otimes\mathbf{I}_J)\mathbf{X} &= \big((\mathbf{J}_{i_2}\mathbf{A})\odot\mathbf{B}\big)\mathbf{C}^T \notag\\
&= (\mathbf{A}_1\odot \mathbf{B})\mathrm{diag}\big(\[\mathbf{A}\]_{i_2+1,:}\big)\mathbf{C}^T
\end{align}
where $\mathbf{A}_1=\mathbf{J}_{0}\mathbf{A}=[\mathbf{A}]_{1:I_1,:}$ contains the first $I_1$ rows of $\mathbf{A}$.
Thus, by varying $i_2$ from 0 to $(I_2-1)$ where $I_2=I+1-I_1$, we can construct a smoothed matrix as
\begin{align}\label{smoothedX}
\mathbf{X}_s &=
\begin{bmatrix}
\mathbf{J}_{0} \mathbf{X} & \cdots & \mathbf{J}_{I_2-1} \mathbf{X}
\end{bmatrix} \notag\\
&= \(\mathbf{A}_{1}\odot\mathbf{B}\) \( \mathbf{A}_2\odot \mathbf{C} \)^T \in \mathbb{C}^{I_1J\times I_2K}
\end{align}
where $\mathbf{A}_2$ takes the first $I_2$ rows of $\mathbf{A}$.
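The construction \eqref{smoothedX} can also be checked numerically. The sketch below (our notation; equally spaced unit-modulus generators for clarity) further illustrates why smoothing helps: with $F>K$, the original unfolding has rank limited by $K$, while the smoothed matrix recovers rank $F$:

```python
import numpy as np

def khatri_rao(A, B):
    # column-wise Kronecker product; the first factor varies slowest
    return np.einsum('ik,jk->ijk', A, B).reshape(-1, A.shape[1])

rng = np.random.default_rng(2)
I, J, Kdim, F = 8, 3, 4, 6                       # F > Kdim
z = np.exp(1j * 2 * np.pi * np.arange(F) / F)    # distinct generators
A = z[None, :] ** np.arange(I)[:, None]          # Vandermonde, I x F
B = rng.standard_normal((J, F)) + 1j * rng.standard_normal((J, F))
C = rng.standard_normal((Kdim, F)) + 1j * rng.standard_normal((Kdim, F))
X = khatri_rao(A, B) @ C.T                       # cf. \eqref{X}

I1 = 5
I2 = I + 1 - I1
X3 = X.reshape(I, J, Kdim)                       # row index (i, j), i slowest
Xs = np.hstack([X3[i2:i2 + I1].reshape(I1 * J, Kdim) for i2 in range(I2)])
Xs_model = khatri_rao(A[:I1], B) @ khatri_rao(A[:I2], C).T
```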
Starting from \eqref{smoothedX}, the smoothed ESPRIT algorithm applies a series of re-arrangements of the tensor elements and finally converts the factorization problem to a classical DOA estimation problem that can be handled by ESPRIT---and solves it via eigen-decomposition---see details in \cite{smoothESPRIT}.
The algorithm also offers very favorable identifiability guarantees:
\begin{theorem} \cite{smoothESPRIT}\label{theorem:3wayVdmd}
Consider a third-order tensor $\mathbf{X} = (\mathbf{A} \odot \mathbf{B})\mathbf{C}^T$, where $\mathbf{A}\in\mathbb{C}^{I\times F}$, $\mathbf{B}\in\mathbb{C}^{J\times F}$, $\mathbf{C}\in\mathbb{C}^{K\times F}$, and $\mathbf{A}$ is Vandermonde with distinct nonzero generators. Assume that $\mathbf{B}$ and $\mathbf{C}$ are drawn from certain absolutely continuous distributions, respectively. Then, if
\begin{align}
F \leq \min\Big( (I_{1}-1)J,~ I_2K \Big)
\end{align}
where $I_1\geq I_2$ and $I_2= I+1- I_{1}$ are chosen from
\begin{align}
\{I_1,I_2\} = \arg\max_{\{I_1,I_2\}\in\mathbb{Z}^+}~ &\min\Big((I_{1}-1)J, I_2K\Big)
\end{align}
then $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$ are identifiable up to permutation and scaling of columns, almost surely.
\end{theorem}
The above method can be directly applied to $\mathbf{Z}$ in \eqref{Z}, if we treat $\mathbf{A}_r$, $\mathbf{Q}^T\mathbf{A}_t^*$, $\mathbf{B}$ as $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, respectively, and the corresponding dimensions are $I=M_r$, $J=N/2$ and $K=4$.
It is clear that $\mathbf{A}_r$, $\mathbf{B}$, and $\mathbf{Q}^T\mathbf{A}_t^*$ are identifiable up to scaling and permutation ambiguities under quite mild conditions---both $N$ and $M_r$ can be smaller than the number of paths.
We remark that $\mathbf{A}_r$, $\mathbf{B}$, and $\mathbf{Q}^T\mathbf{A}_t^*$ can be identified using any tensor factorization method, e.g., the least squares fitting formulation for tensor decomposition and ALS, which may have some other benefits such as being more noise-robust.
Nevertheless, the employed approach admits by far the strongest identifiability result for a third-order tensor which has a Vandermonde latent factor.
Another good feature of the employed approach is that it is very lightweight and consists of only simple algebraic procedures and eigen-decomposition, which is very friendly to real-time implementation.
\subsubsection{Identification of DODs}
By the proposed procedure (or any other tensor factorization algorithm), $\mathbf{A}_r,\mathbf{B}$ and $\mathbf{Q}^T\mathbf{A}_t^*$ can be identified. However, whether $\mathbf{A}_t$ is identifiable from $\mathbf{Q}^T\mathbf{A}_t^*$ is not yet clear.
Recall that we previously defined $\mathbf{E}=\mathbf{Q}^H\mathbf{A}_t$ in \eqref{E}. Since there is complex scaling ambiguity in the estimate of $\mathbf{E}$, i.e., $\hat{\mathbf{E}}$, the problem is equivalent to solving
$${\hat{\mathbf{e}}} =\xi {\mathbf{Q}}^H{\mathbf{a}_t}$$
where $\mathbf{Q}^H$ is a known compression (fat) matrix and $\hat{\mathbf{e}}$ is a given compressed measurement vector; here, $\hat{\mathbf{e}}$ can represent any column of the estimated $\mathbf{E}$ and $\mathbf{a}_t$ represents the corresponding column of $\mathbf{A}_t$.
Here, $\xi$ is a complex-valued non-zero scalar that represents the scaling ambiguity inherited from the tensor factorization phase.
Solving the above underdetermined system of equations to recover the vector $\mathbf{a}_t$ is quite similar to the problem of \textit{compressive sensing} \cite{candes}.
However, our $\mathbf{a}_t$ is not sparse, and sparsity is what modern compressive sensing relies on to establish signal identifiability. This raises the question of whether $\mathbf{a}_t$ is still identifiable from this system of linear equations, since an underdetermined system can have infinitely many solutions.
To address this issue, we have the following lemma:
\begin{lemma}\label{lem:ident}
Consider the system of equations $\hat{\mathbf{e}}=\xi \mathbf{Q}^H\mathbf{a}_t$, where $\mathbf{Q}\in\mathbb{C}^{M_t\times N/2}$, $\xi\in\mathbb{C}$, $\xi \neq 0$, and $\mathbf{a}_t\in\mathbb{C}^{M_t}$ is a function of the DOD. Assume that $\mathbf{Q}$ is generated following some absolutely continuous distribution, and that $\mathbf{a}_t$ is a steering vector that respects the transmit array structure. Also assume that $M_t\geq N/2\geq 2$. Then, $\mathbf{a}_t$ is identifiable from $\hat{\mathbf{e}} = \xi \mathbf{Q}^H\mathbf{a}_t$ almost surely.
\end{lemma}
\begin{IEEEproof}
Let us assume that there exists another steering vector $\mathbf{a}_t'$ that also satisfies $\hat{\mathbf{e}}=\xi' \mathbf{Q}^H\mathbf{a}_t'$, where $\mathbf{a}_t'\neq \mathbf{a}_t$ and $\xi'\neq 0$. Hence, we have
\begin{equation}\label{eq:contradict1}
\mathbf{Q}^H[ \xi\mathbf{a}_t,- \xi'\mathbf{a}_t']={\bf 0}.
\end{equation}
Since $\mathbf{a}_t$ and $\mathbf{a}_t'$ are Kronecker products of two Vandermonde vectors, we have
$${\rm rank}([ \xi\mathbf{a}_t,- \xi'\mathbf{a}_t'])=2$$
if $M_t\geq 2$.
Also because $\mathbf{Q}$ is a random matrix generated following some absolutely continuous distribution and $\xi, \xi' \neq 0$, we have
$$ {\rm rank}( \mathbf{Q}^H[ \xi\mathbf{a}_t,- \xi'\mathbf{a}_t']) = 2$$
holds with probability 1 (Lemma 1, \cite{sid2012paracom})---which contradicts \eqref{eq:contradict1}. This completes the proof.
\end{IEEEproof}
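A quick numerical sanity check of the rank argument (ULA steering vectors, random complex $\mathbf{Q}$, and the smallest compression $N/2=2$ allowed by the lemma; all names ours):

```python
import numpy as np

rng = np.random.default_rng(3)
M_t, N2 = 16, 2                          # N/2 = 2 compressed measurements
w1, w2 = 0.4, 1.1                        # two distinct spatial frequencies
a1 = np.exp(1j * w1 * np.arange(M_t))    # steering vector of the true DOD
a2 = np.exp(1j * w2 * np.arange(M_t))    # steering vector of an impostor
xi, xi_p = 0.7 - 1.3j, -0.2 + 0.5j       # nonzero scaling ambiguities
Q = rng.standard_normal((M_t, N2)) + 1j * rng.standard_normal((M_t, N2))

# [xi*a1, -xi_p*a2] has rank 2, and a generic Q^H preserves that rank,
# so Q^H [xi*a1, -xi_p*a2] = 0 (cf. \eqref{eq:contradict1}) cannot hold
M = Q.conj().T @ np.column_stack([xi * a1, -xi_p * a2])
```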
Lemma~\ref{lem:ident} clearly indicates that if $M_t,N/2\geq 2$, the solution to $\hat{\mathbf{E}}=\mathbf{Q}^H\mathbf{A}_t\mathrm{diag}(\boldsymbol{\xi}) $ is unique (where $\boldsymbol{\xi}$ contains the inherited scaling ambiguities from the tensor factorization stage)---if the columns of $\mathbf{A}_t$ are Vandermonde vectors that have different generators. To be specific, we have the following corollary:
\begin{corollary}\label{cor:QA}
Assume that $M_t,N/2\geq 2$, and that $\mathbf{A}_t$ is the manifold of a URA or ULA with a set of distinct DODs, i.e., $(\vartheta_{k},\varphi_{k})\neq (\vartheta_{j},\varphi_{j})$ for $k\neq j$. Then, $\mathbf{A}_t$ can be uniquely identified from the system
$\hat{\mathbf{E}}=\mathbf{Q}^H\mathbf{A}_t\mathrm{diag}(\boldsymbol{\xi})$.
\end{corollary}
Corollary~\ref{cor:QA} indicates that, under the premise that the third-order tensor $\mathbf{Z}$ is identifiable, one can use a pilot matrix that has as few as $4$ columns, which can be rather economical in practice. On the other hand, the results in Lemma~\ref{lem:ident} and Corollary~\ref{cor:QA} are not entirely surprising: after all, the columns of $\mathbf{A}_t$ are parametrized by only two variables---it makes sense that we can identify them from two equations.
Combining the above results, we have the following theorem that states an integrated result of the two steps (i.e., tensor factorization and $\mathbf{A}_t$ identification):
\begin{theorem}\label{thm:hidden}
Assume that the receive antenna array is a ULA and the transmit array is a URA, that $\theta_k\neq\theta_j$, $\vartheta_k\neq\vartheta_j$, and $\varphi_k\neq\varphi_j$ for all $k\neq j$, and that the path-loss matrix $\mathbf{B}$ is drawn from some jointly continuous distribution.
Then, the array manifolds $\mathbf{A}_r$, $\mathbf{A}_x$, $\mathbf{A}_y$ and the path-losses are uniquely identifiable with probability one via the proposed approach if
\begin{align}\label{eq:cond2}
K \leq \min\Big( 4(P_{r}-1), ~ \left( M_r+1- P_{r}\right)N/2 \Big)
\end{align}
where $P_r$ is chosen from
\begin{align}\label{tmax}
P_r = \arg\max_{\{t,P_r\}\in\mathbb{Z}^+}~ &t \notag\\
\text{s. t.}~ & 4(P_{r}-1)\geq t \notag\\
& \left( M_r+1- P_{r}\right)N/2 \geq t.
\end{align}
\end{theorem}
The above theorem shows that when $M_t>N/2$, the identifiability is determined only by $M_r$ and $N$. This means that regardless of the transmit array size, as long as the number of multipaths satisfies \eqref{eq:cond2}, we can always identify the whole channel matrix. The associated algorithm, dubbed \textit{CTD}, which ensures the above identifiability result, is summarized in Algorithm \ref{CTD}.
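The bound \eqref{eq:cond2}-\eqref{tmax} is straightforward to evaluate; a minimal sketch (helper name ours, $N$ even):

```python
def ctd_k_max(M_r, N):
    # enumerate the splitting P_r + Q_r = M_r + 1 and take the best
    # min(4*(P_r - 1), Q_r * N / 2), cf. \eqref{eq:cond2} and \eqref{tmax}
    return max(min(4 * (P_r - 1), (M_r + 1 - P_r) * N // 2)
               for P_r in range(1, M_r + 1))
```

For instance, `ctd_k_max(8, 8)` gives 16: with only $N=8$ pilot snapshots, up to 16 paths remain identifiable.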
To recover $(\vartheta_{k},\varphi_{k})$ for all $k$ from $\hat{\mathbf{E}}=\mathbf{Q}^H\mathbf{A}_t\mathrm{diag}\(\boldsymbol{\xi}\)$, one can formulate this problem as a fitting problem, i.e.,
\begin{align}\label{eq:recov}
\min_{\vartheta_{k},\varphi_{k},\xi }&~\left\|[\hat{\mathbf{E}}]_{:,k} -\xi\mathbf{Q}^H\mathbf{a}_{t,k} \right\|_2^2,~\forall k=1,\cdots,K
\end{align}
where $\mathbf{a}_{t,k}=\mathbf{a}_{y,k}\otimes\mathbf{a}_{x,k}$ if a URA is used at the transmitter.
Problem~\eqref{eq:recov} can be solved via many nonlinear programming algorithms since it is continuously differentiable.
The only difficulty might be that the gradient w.r.t. $(\vartheta_{k},\varphi_{k})$ could be tedious to derive. Here, we use a simpler approximate algorithm to estimate $(\vartheta_{k},\varphi_{k})$ from $\hat{\mathbf{E}}=\mathbf{Q}^H\mathbf{A}_t\mathrm{diag}\(\boldsymbol{\xi}\)$, which works well in practice. The detailed method is presented in Appendix \ref{gradient}.
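As a stand-in for the method in the appendix, the following Python sketch recovers $(\vartheta_k,\varphi_k)$ by a coarse grid search, scoring each candidate by a correlation that is invariant to the unknown scaling $\xi$ (all names ours; half-wavelength URA and noise-free data assumed):

```python
import numpy as np

def steer(omega, M):
    return np.exp(1j * omega * np.arange(M))

def a_t(vartheta, varphi, Mx, My, d=0.5, nu=1.0):
    # URA steering vector a_y kron a_x, cf. the text
    wx = 2 * np.pi * d / nu * np.sin(varphi) * np.cos(vartheta)
    wy = 2 * np.pi * d / nu * np.sin(varphi) * np.sin(vartheta)
    return np.kron(steer(wy, My), steer(wx, Mx))

def recover_dod(e_hat, Q, Mx, My, grid=121):
    # score |<Q^H a, e_hat>|^2 / ||Q^H a||^2 is invariant to xi
    best, arg = -1.0, None
    for vt in np.linspace(-np.pi / 3, np.pi / 3, grid):
        for vp in np.linspace(0.05, np.pi / 2 - 0.05, grid):
            ec = Q.conj().T @ a_t(vt, vp, Mx, My)
            score = abs(np.vdot(ec, e_hat)) ** 2 / np.vdot(ec, ec).real
            if score > best:
                best, arg = score, (vt, vp)
    return arg
```

In practice one would refine the grid-search output with a local (e.g., gradient-based) step, as done in the appendix.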
\subsubsection{Complexity Analysis}
The computational complexity for CTD consists of two parts, i.e., the smoothed ESPRIT in Step 3 and the refinement in Step 5 of Algorithm \ref{CTD}. The complexity for smoothed ESPRIT is dominated by the singular value decomposition (SVD) of $\mathbf{X}_s$ in Step 2, which needs $\mathcal{O}(2P_rQ_rNK+8P_r^2Q_rN+64P_r^3)$ flops. The refinement takes $\mathcal{O}(K^3+K^2NM_r+2KNM_r)$ flops.
The overall complexity of CTD is $\mathcal{O}(2P_rQ_rNK+8P_r^2Q_rN+64P_r^3)$ flops when $K\leq\min(4(P_r-1),Q_rN/2)$. Otherwise, the complexity is $\mathcal{O}(2P_rQ_rNK+8P_r^2Q_rN+64P_r^3 + K^3+K^2NM_r+2KNM_r)$ flops.
\begin{algorithm}
\caption{CTD for channel estimation with frugal pilots}
\label{CTD}
\begin{algorithmic}[1]
\State Determine $P_r$ and the maximum $K$ from Theorem \ref{thm:hidden}, and then compute $Q_r=M_r+1-P_r$
\State Follow \eqref{smoothedX} to construct
$\mathbf{X}_s=\(\mathbf{A}_{r,P_r}\odot\mathbf{B}\) \( \mathbf{A}_{r,Q_r}\odot (\mathbf{A}_t^H\mathbf{Q})^T\)^T $, and estimate the signal subspace $\mathbf{U}_s$ via the SVD of $\mathbf{X}_s$.
\State Apply smoothed ESPRIT \cite{smoothESPRIT} to estimate $\mathbf{A}_r$, $\mathbf{Q}^H\mathbf{A}_t$ and $\mathbf{B}$.
\State Estimate $\theta_{k}$ from $\mathbf{A}_r$ and $\vartheta_{k}$ and $\varphi_{k}$ from the estimate of $\mathbf{A}_t^H\mathbf{Q}$ column-by-column via gradient descent (see Appendix \ref{gradient})
\State Finally, refine $\hat{\mathbf{A}}_r$ from $\{\hat{\theta_{k}}\}$, $\hat{\mathbf{A}}_t$ from $\{\hat{\vartheta}_{k},\hat{\varphi}_k\}$ and $\hat{\mathbf{B}}=\Big( \big(\hat{\mathbf{A}}_r\odot(\mathbf{Q}^T\hat{\mathbf{A}}_t^\ast)\big)^\dagger\mathbf{Z} \Big)^T$
\State Recover the channel from $\hat\mathbf{A}_r,\hat\mathbf{A}_x,\hat\mathbf{A}_y$ and $\hat{\mathbf{B}}$.
\end{algorithmic}\label{algorithm3}
\end{algorithm}
\section{Numerical Results}
We consider a massive MIMO system with a DP URA at the BS and a DP ULA at the MS. This particular case is of considerable practical interest in 3GPP as a candidate for implementation \cite{3gpp}.
In the simulation, we assume that the multipath propagation gains are Rician distributed, and all the multipath parameters are randomly (uniformly) drawn. The BS covers the $[0^\circ,90^\circ]$ elevation angular range and the $(-60^\circ,60^\circ)$ azimuth angular range, while the MS only covers the $[-60^\circ,60^\circ]$ azimuth angular range since the elevation angle is zero for a ULA, i.e., $\theta_k\sim \mathcal{U}(-\pi/3,\pi/3),\,
\varphi_k\sim\mathcal{U}(0,\pi/2),\, \vartheta_k\sim\mathcal{U}(-\pi/3,\pi/3)$.
Moreover, similar to \cite{polarization5,kappa}, we set $\kappa=13.2$ dB, which is representative of urban propagation scenarios.
We use the LS estimator of $\mathbf{H}$ as a baseline to compare with the reconstructed channel from the estimated key parameters, when LS is applicable.
All the results are averaged over 500 Monte-Carlo trials using a computer with 3.2 GHz Intel Core i5-4460 and 4 GB RAM.
The normalized MSE (NMSE) of channel estimates is computed from
\begin{align}\label{nmse}
\mathrm{NMSE} = \frac{1}{500}\sum_{i=1}^{500}\|\hat{\mathbf{H}}_{i}-\mathbf{H}\|_F^2/\|\mathbf{H}\|_F^2
\end{align}
where $\hat{\mathbf{H}}_{i}$ denotes the channel that is reconstructed from the estimated key parameters from the $i$th Monte-Carlo trial.
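In code, \eqref{nmse} is a one-liner (function name ours):

```python
import numpy as np

def nmse(H_hats, H):
    # average normalized squared Frobenius error over Monte-Carlo trials
    return np.mean([np.linalg.norm(Hh - H, 'fro') ** 2 for Hh in H_hats]) \
        / np.linalg.norm(H, 'fro') ** 2
```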
In all the simulations, we assume that the number of paths (i.e., $K$) is known or has been estimated. Estimating $K$ is a tensor rank estimation problem, which is known to be NP-hard \cite{hillar2013most}. In practice, we are interested in the useful signal rank, i.e., the number of significant paths, and for this task there are several practically effective algorithms, such as those in \cite{bro2003new,da2008robust,da2011multi}.
In the first example, we compare the performance of the proposed tensor factorization-based method (implemented using alternating least squares (ALS); labeled as `PARAFAC') with IMDF \cite{liu2}, LS, multidimensional unitary ESPRIT (U-ESPRIT) \cite{unitaryesprit} and a CS-based technique \cite{jarvis}, which is briefly summarized in Section II-C. Note that in the CS method, we quantize each angle using 7 bits, so the resulting dictionary has size $4M_rM_t\times 2^{23}$, which is intractable on a conventional desktop computer. To circumvent this, we implement the CS algorithm as follows. After obtaining the LS channel estimate, we first reshape each sub-block of the channel estimate as an $M_r\times M_x\times M_y$ tensor, and average the resulting tensors. Then we implement a 3-D FFT with $128 \times 128 \times 128$ points to estimate $\{\theta,\vartheta,\varphi\}$, using the so-called peak-picking technique. Finally, we update the path-loss matrix $\mathbf{B}$ via \eqref{sp4}.
We consider a DP MIMO scenario where the receiver has a ULA with $M_r=2$ sensors and the transmitter has a $4\times 8$ URA. Thus, the channel matrix has size $4\times 64$. The number of multipaths randomly varies from 1 to 6. A row-orthogonal pilot matrix $\mathbf{S}$ is used in this case. Thus, PARAFAC, IMDF and CS are performed based on the LS channel estimate. It is worth noting that we initialize PARAFAC using the IMDF estimates. Specifically, we first implement IMDF to estimate $\{\omega_{r,k}, \omega_{x,k}, \omega_{y,k}\}_{k=1}^K$ which are then used to initialize $\mathbf{A}_r$, $\mathbf{A}_x$ and $\mathbf{A}_y$, respectively. Finally, the $\mathbf{B}$ matrix is refined via the LS estimate of \eqref{sp4}.
We test the performance of all the competitors under known and unknown number of multipaths. For the latter, we set $K=6$ for all the algorithms.
It is observed from Fig. \ref{msevssnr} that PARAFAC (i.e., ALS) outperforms the IMDF, U-ESPRIT, LS and CS algorithms in both cases. Compared to Fig. \ref{fig:mseknownChannel1}, PARAFAC, IMDF, U-ESPRIT and CS exhibit a slight performance loss in Fig. \ref{fig:mseunkonwnChannel1}, where the exact number of multipaths is unknown. When SNR $>14$ dB, we see that the NMSE of CS is even worse than that of the LS method. This is mainly because, as the SNR increases, the performance of CS is limited by the resolution of the dictionary grids.
We observe that U-ESPRIT occasionally fails to produce reasonable results. Such failures do not occur frequently, but they explain why the NMSE of U-ESPRIT remains relatively high even at high SNR. It is worth noting that if we remove such outlying cases, then U-ESPRIT performs better than the CS method, but it is still not as good as IMDF and ALS-based PARAFAC.
\begin{figure}
\begin{center}
\subfigure[known $K$]{\label{fig:mseknownChannel1} \includegraphics[width=1\linewidth, trim=0 0 0 0]{channelEstJournalknown}}
\subfigure[unknown $K$]{\label{fig:mseunkonwnChannel1} \includegraphics[width=1\linewidth, trim=0 0 0 0]{channelEstJournalunknown}}
\end{center}
\caption{NMSE versus SNR.}\label{msevssnr}
\end{figure}
The second example examines the NMSE performance versus $M_t$, where $M_t$ varies from 8 to 128 with URA sizes $(M_x,M_y)\in\{(2,4),(4,4),(4,8),(8,8),(8,16)\}$. The SNR is fixed at 10 dB, and the other parameters remain the same as in the previous example. Fig. \ref{msevssnrMt} shows results similar to Fig. \ref{msevssnr}, where PARAFAC performs the best. Combining the results in Figs. \ref{msevssnr} and \ref{msevssnrMt}, we see that PARAFAC has accuracy similar to IMDF in the low SNR and small sample size cases. With increasing SNR or $M_t$, PARAFAC clearly outperforms IMDF.
Overall, PARAFAC performs better than IMDF. This is because PARAFAC employs the LS criterion, which corresponds to Vandermonde structure-agnostic ML for Gaussian noise, and is thus more robust to noise. IMDF is an algebraic closed-form method, which exploits the Vandermonde structure and is much faster than PARAFAC, but it is not optimal in handling Gaussian noise. On the other hand, PARAFAC accounts for the low-rank structure of the channel matrix, so it works well even when some path losses are smaller than the noise variance. IMDF, however, is inherently a subspace method, which picks the signal subspace according to the principal eigenvalues or singular values. Once the noise variance is greater than the path-loss values, the signal subspace may be misestimated.
\begin{figure}
\begin{center}
\subfigure[known $K$]{\label{fig:mseknownChannel1Mt} \includegraphics[width=1\linewidth, trim=0 0 0 0]{channelEstknownK}}
\subfigure[unknown $K$]{\label{fig:mseunkonwnChannel1Mt} \includegraphics[width=1\linewidth, trim=0 0 0 0]{channelEstunknownK}}
\end{center}
\caption{NMSE versus $M_t$. ($M_r=2$, SNR $=10$ dB, the number of multipaths varies from 1 to 6.)}\label{msevssnrMt}
\end{figure}
We now consider a DP MIMO system where $M_x=M_y=8$, $M_r=3$ and $N=16$. Thus, the task here is to recover a $6\times 128$ matrix from a $6\times 16$ received data matrix. We compare the CTD method with the LS and joint orthogonal matching pursuit (J-OMP) \cite{jomp} algorithms. Specifically, for the J-OMP method, we implement it for each column of $\mathbf{Z}$ in \eqref{Z}, and solve the following optimization problem $\min_{\mathbf{g}_{i}}~ \|[\mathbf{Z}]_{:,i}-\mathbf{\Phi}_\text{J-OMP}\mathbf{g}_{i}\|_2^2, ~\text{s. t.}~ \text{supp}(\mathbf{g}_{i})\leq K$,
where $\mathbf{\Phi}_\text{J-OMP}=\((\mathbf{\Phi}_y\otimes\mathbf{\Phi}_x)^H\mathbf{Q}\)^T\otimes\mathbf{\Phi}_r$. Finally, we use the estimates of $\{\mathbf{g}_{i}\}_{i=1}^4$ to recover the whole channel.
For simplicity, the dictionaries $\mathbf{\Phi}_r,\mathbf{\Phi}_x$ and $\mathbf{\Phi}_y$ corresponding to $\omega_r,\omega_x$ and $\omega_y$, respectively, are all computed via uniformly dividing $[-\pi,\pi]$ into 128 points, such that $\mathbf{\Phi}_\text{J-OMP}$ is of size $16\times 2^{21}$. We set $K=6$ for CTD and J-OMP and compare the NMSE performance by varying the number of multipaths from 1 to 6 under $\text{SNR}=20$ dB. We also plot the NMSE of CTD and J-OMP with the correct number of multipaths.
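As a concrete, hedged sketch of the per-column greedy recovery, the following is a generic OMP solver with illustrative dimensions and a random Gaussian dictionary (far smaller than the actual $16\times 2^{21}$ one):

```python
import numpy as np

def omp(Phi, y, K):
    """Greedy OMP: pick K atoms by maximum correlation, LS-refit on the support."""
    r, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.conj().T @ r))))
        G = Phi[:, support]
        coef, *_ = np.linalg.lstsq(G, y, rcond=None)   # refit on current support
        r = y - G @ coef                               # residual is orthogonal to G
    g = np.zeros(Phi.shape[1], dtype=complex)
    g[support] = coef
    return g

rng = np.random.default_rng(3)
m, n, K = 30, 50, 3                                    # illustrative sizes
Phi = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)                     # unit-norm atoms
g_true = np.zeros(n, dtype=complex)
g_true[[5, 17, 40]] = [1.0, -2.0, 1.5j]
y = Phi @ g_true                                       # noiseless measurements
g_hat = omp(Phi, y, K)
```

With an incoherent random dictionary this succeeds reliably; the high grid coherence of the huge J-OMP dictionary is precisely what degrades it in the experiment below.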
Fig. \ref{fig:hiddenmr3mt88n16} shows the simulation results. We see that J-OMP and LS do not work well, while our method offers reliable performance. The reason for the failure of LS is obvious: the LS channel estimate is rank deficient. Due to the high coherence between the grids of the fat dictionary, solving the linear inverse problem is hard, so J-OMP does not work very well. Since CTD does not have the aforementioned issues, and its maximum number of resolvable multipaths is guaranteed by Theorem \ref{thm:hidden}, it achieves the best estimation accuracy, especially for small $K$ settings.
The performance gap between the NMSE curves of CTD with the correct $K<6$ and with fixed $K=6$ is relatively large when the actual number of multipaths is small. The main reason for this phenomenon is that, compared to CTD with the correct $K$, the channel estimate of CTD with $K=6$ contains several redundant multipaths that do not appear in the actual channel.
Furthermore, it is worth noting that the dictionary in J-OMP is huge: it takes about 400 MB of memory and must be updated whenever $\mathbf{Q}$ changes. In contrast, our method is dictionary-free. In many cases, it employs only one SVD to identify the channel matrix, so its complexity is much lower. In this example, the average CPU times for CTD and J-OMP are 0.2694 and 2.9541 seconds, respectively.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{hidden}
\caption{NMSE versus $K$.}
\label{fig:hiddenmr3mt88n16}
\end{figure}
\section{Conclusion}
The downlink channel estimation problem for the DP massive MIMO system has been studied through a tensor decomposition perspective. Using row-orthogonal training pilot matrices, a low-rank tensor decomposition method was devised for channel estimation, and identifiability of the key parameters of interest was established under mild and practical conditions.
Furthermore, for non-orthogonal `frugal' training pilot matrices, a two-stage compressed tensor decomposition approach with identifiability guarantees was developed for retrieving the channel matrix and estimating the key parameters, with which the downlink training overhead can be reduced significantly. Numerical simulations support our analysis and show that the proposed schemes are very effective and promising.
\appendices
\section{}
The loading matrices $\mathbf{A}_r$, $\mathbf{A}_x$, $\mathbf{A}_y$ and $\mathbf{B}$ are uniquely identifiable up to scaling and a common column permutation ambiguity provided that
\begin{align}
\min(M_r,K) + \min(M_x,K) &+ \min(M_y,K) \notag\\
& + \min(4,K) \geq 2K + 3.
\end{align}
Specifically, if $\tilde\mathbf{A}_r$, $\tilde\mathbf{A}_x$, $\tilde\mathbf{A}_y$ and $\tilde{\mathbf{B}}$ generate $\mathbf{H}$ (i.e., $\breve{\mathbf{H}} = (\tilde\mathbf{A}_y^*\odot\tilde\mathbf{A}_x^*\odot\tilde\mathbf{A}_r)\tilde{\mathbf{B}}^T $), then it must hold that
\begin{align}
&\tilde\mathbf{A}_r = \mathbf{A}_r\boldsymbol{\Pi}\mathbf{\Sigma}_1,\; \tilde\mathbf{A}_x = \mathbf{A}_x\boldsymbol{\Pi}\mathbf{\Sigma}_2,\notag\\
& \tilde\mathbf{A}_y = \mathbf{A}_y\boldsymbol{\Pi}\mathbf{\Sigma}_3,\; \tilde{\mathbf{B}} = \mathbf{B}\boldsymbol{\Pi}\mathbf{\Sigma}_4
\end{align}
where $\boldsymbol{\Pi}$ is the permutation matrix and $\{\mathbf{\Sigma}_i\}_{i=1}^4$ are diagonal scaling matrices satisfying $\mathbf{\Sigma}_1\mathbf{\Sigma}_2\mathbf{\Sigma}_3\mathbf{\Sigma}_4 = \mathbf{I}_K$.
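The inequality above is straightforward to check for a given geometry. The helper below (hypothetical, for illustration) returns the largest $K$ satisfying it:

```python
def max_identifiable_K(Mr, Mx, My):
    """Largest K for which min(Mr,K)+min(Mx,K)+min(My,K)+min(4,K) >= 2K+3 holds.

    The condition is sufficient, not necessary, so this is a conservative bound
    on the number of identifiable paths.
    """
    best = 0
    for K in range(1, Mr * Mx * My + 1):   # LHS saturates, so this range suffices
        if min(Mr, K) + min(Mx, K) + min(My, K) + min(4, K) >= 2 * K + 3:
            best = K
    return best
```

For the $M_r=2$, $4\times 8$ URA used in the simulations, the condition holds up to $K=7$, comfortably above the $K\le 6$ multipaths considered there.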
\section{Updating $\vartheta_k$ and $\varphi_k$}\label{gradient}
The goal here is to estimate the $\vartheta_k$ and $\varphi_k$ that satisfy the following nonlinear equations:
$$[\hat{\mathbf{E}}]_{:,k}=\xi_k\mathbf{Q}^H\mathbf{a}_{t,k},~\forall k=1,\cdots,K.$$
For notational convenience, let $\hat{\mathbf{e}}_k = [\hat{\mathbf{E}}]_{:,k}$ and $\mathbf{v}_k = \mathbf{Q}^H\mathbf{a}_{t,k}$. Solving the above is similar to a 2-D single-tone harmonic retrieval problem. However, the difficulty here is that the measurement is compressed by a fat matrix, and thus the conventional harmonic retrieval methods do not apply. We note that $\mathbf{e}_k$ is parallel to $\mathbf{v}_k$, so the orthogonal projector onto its complement, i.e., $\mathbf{P}_{\mathbf{e}_k}^\perp = \mathbf{I}_{N/2} - \mathbf{e}_k\mathbf{e}_k^\dagger$, annihilates $\mathbf{v}_k$, i.e.,
\begin{align}
\mathbf{P}_{\mathbf{e}_k}^\perp(\xi_k\mathbf{v}_k) = \xi_k\mathbf{P}_{\mathbf{e}_k}^\perp\mathbf{v}_k = \mathbf{0}.
\end{align}
This implies that the problem of estimating $\{\vartheta_k,\varphi_k\}$ is independent of the scaling $\xi_k$.
Therefore, we propose to solve
\begin{align}\label{2dmusic}
\min_{\vartheta_{k},\varphi_{k}}&~\left\{f=\left\| \mathbf{P}_{\mathbf{e}_k}^\perp\mathbf{v}_k \right\|_2^2\right\},~\forall k=1,\cdots,K.
\end{align}
It is easy to check that the nullity of $\mathbf{P}_{\mathbf{e}_k}^\perp$ is one, so the recovery of $\mathbf{v}_k$ from \eqref{2dmusic} is unique.
This way, we get rid of $\xi_k$ in our problem formulation.
The last step is to design an efficient method to estimate $(\vartheta_k,\varphi_k)$.
Problem \eqref{2dmusic} is a non-constrained optimization problem with a smooth objective, and thus it can be handled by gradient descent.
To further simplify the procedure, consider the following method:
Since $\vartheta_k$ and $\varphi_k$ are embedded in $\omega_{x,k}$ and $\omega_{y,k}$, instead of directly estimating $\vartheta_k$ and $\varphi_k$, it is easier to find $\omega_{x,k}$ and $\omega_{y,k}$ first and then recover $\vartheta_k$ and $\varphi_k$ via \eqref{vartheta} and \eqref{varphi}, respectively.
To this end, we compute the gradient w.r.t. $\omega_{x,k}$ and $\omega_{y,k}$, which is $\nabla f = \[ \frac{\partial f}{\partial \omega_{x,k}}~ \frac{\partial f}{\partial \omega_{y,k}} \]^T$, where
\begin{align}
\!\!\!\frac{\partial f}{\partial \omega_{x,k}} &= 2\text{Re}\( \mathbf{a}_{t,k}^H\mathbf{Q}\mathbf{P}_{\mathbf{e}_k}^\perp\mathbf{Q}^H(\mathbf{a}_{y,k}\otimes(\mathbf{t}_x\circledast\mathbf{a}_{x,k})) \)\\
\!\!\!\frac{\partial f}{\partial \omega_{y,k}} &= 2\text{Re}\( \mathbf{a}_{t,k}^H\mathbf{Q}\mathbf{P}_{\mathbf{e}_k}^\perp\mathbf{Q}^H((\mathbf{t}_y\circledast\mathbf{a}_{y,k})\otimes\mathbf{a}_{x,k}) \)
\end{align}
with $\mathbf{t}_x = j\[0\ 1\ \cdots\ M_x-1\]^T$ and $\mathbf{t}_y = j\[0\ 1\ \cdots\ M_y-1\]^T$.
Then we update $\omega_{x,k}$ and $\omega_{y,k}$ through
\begin{align}
\begin{bmatrix}
\omega_{x,k} \\
\omega_{y,k}
\end{bmatrix}^{(r+1)}
=
\begin{bmatrix}
\omega_{x,k} \\
\omega_{y,k}
\end{bmatrix}^{(r)}
-
\mu^{(r)}
\begin{bmatrix}
\frac{\partial f}{\partial \omega_{x,k}} \\ \frac{\partial f}{\partial \omega_{y,k}}
\end{bmatrix}^{(r)}.
\end{align}
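A hedged 1-D analogue of this refinement (one frequency $\omega$ instead of the pair $(\omega_{x,k},\omega_{y,k})$; sizes, seed, and the backtracking rule are illustrative): the objective $f(\omega)=\|\mathbf{P}_{\mathbf{e}}^\perp\mathbf{Q}^H\mathbf{a}(\omega)\|_2^2$ vanishes at the true frequency, and gradient descent started inside the main beam converges to it.

```python
import numpy as np

rng = np.random.default_rng(0)
M, Nc = 8, 6                       # antennas and compressed dimension (illustrative)
w_true = 0.7                       # true spatial frequency (illustrative)
n = np.arange(M)

def a(w):                          # steering vector
    return np.exp(1j * w * n)

Q = (rng.standard_normal((M, Nc)) + 1j * rng.standard_normal((M, Nc))) / np.sqrt(2 * M)
QH = Q.conj().T                    # compression matrix Q^H

e = 1.3 * QH @ a(w_true)           # observed column; the scaling xi is unknown
P = np.eye(Nc) - np.outer(e, e.conj()) / np.linalg.norm(e) ** 2  # orthogonal projector

def f(w):                          # 1-D analogue of the objective in (2dmusic)
    return np.linalg.norm(P @ (QH @ a(w))) ** 2

def grad(w):                       # analytic gradient, 2 Re(v^H P dv/dw)
    v = QH @ a(w)
    return 2 * np.real(v.conj() @ P @ (QH @ (1j * n * a(w))))

w, mu = w_true + 0.05, 0.5         # initial guess inside the main beam
for _ in range(200):
    g = grad(w)
    while f(w - mu * g) > f(w) and mu > 1e-12:
        mu /= 2                    # backtracking line search
    w -= mu * g
    mu *= 2
```

Backtracking guarantees monotone decrease of $f$; the fixed step $\mu^{(r)}$ in the update above plays the same role.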
We see that the objective in Problem \eqref{2dmusic} is the same as that of the 2-D MUSIC algorithm. Therefore, the initial point can be estimated from the following spectrum
\begin{align}\label{P}
\!\!\!\!P =
\frac{1}{(\mathbf{a}_{y}(\omega_y)\otimes\mathbf{a}_x(\omega_x))^H\mathbf{Q}\mathbf{P}_{\mathbf{e}_k}^{\perp}\mathbf{Q}^H(\mathbf{a}_{y}(\omega_y)\otimes\mathbf{a}_x(\omega_x))}
\end{align}
where the maximum is attained at $(\mathbf{a}_x = \mathbf{a}_{x,k}, \mathbf{a}_y = \mathbf{a}_{y,k})$. Thus, by searching $\omega_{x}$ and $\omega_{y}$ over a certain angle range and picking the pair that maximizes $P$, we obtain the initial estimates of $\omega_{x,k}$ and $\omega_{y,k}$.
One key observation here is that the global optimum of $\omega_x$ and $\omega_{y}$ corresponds to the peak of the main beam, and the objective is locally concave within the main beam. Hence, we need an initial guess of $\omega_x$ and $\omega_{y}$ that falls within the main beam.
According to the antenna beamwidth (BW) formula for a uniform linear array, i.e., $\mathrm{BW}=0.886\times\mathrm{carrier~wavelength}/\mathrm{antenna~diameter}$, we can pre-calculate the half-power BW (HPBW) and then use it to determine the search step-size. For a URA, the HPBWs along the $x$- and $y$-apertures are approximately $0.886/M_x$ and $0.886/M_y$, respectively. We choose the search step-size as half of the HPBW, so that an initial guess within the main beam can be found. For example, when $M_x=M_y=8$, the HPBW is about 0.12 rad, so the step-size can be set to 0.06 rad. In practice, the antenna usually covers only a certain angle range, e.g., $[-\pi/4,\pi/4]$. In such cases, the 2-D search is efficient.
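A minimal 2-D sketch of this coarse search, assuming a random compression matrix and the HPBW-derived step-size (all sizes and seeds illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
Mx = My = 8
Nc = 24                                         # compressed dimension (illustrative)
wx_true, wy_true = 0.35, -0.52                  # true spatial frequencies (illustrative)

def ax(w): return np.exp(1j * w * np.arange(Mx))
def ay(w): return np.exp(1j * w * np.arange(My))

QH = (rng.standard_normal((Nc, Mx * My)) + 1j * rng.standard_normal((Nc, Mx * My))) / np.sqrt(2 * Mx * My)

e = QH @ np.kron(ay(wy_true), ax(wx_true))      # observed column
P = np.eye(Nc) - np.outer(e, e.conj()) / np.linalg.norm(e) ** 2

step = 0.886 / Mx / 2                           # half the HPBW (~0.055 for Mx = 8)
grid = np.arange(-np.pi / 4, np.pi / 4, step)   # restricted angular sector
best, best_val = (0.0, 0.0), -np.inf
for wx in grid:
    for wy in grid:
        v = QH @ np.kron(ay(wy), ax(wx))
        spec = 1.0 / np.linalg.norm(P @ v) ** 2  # MUSIC-like spectrum of Eq. (P)
        if spec > best_val:
            best_val, best = spec, (wx, wy)
```

The coarse maximizer then serves as the initial point for the gradient refinement described above.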
When the transmitter is a ULA, the problem w.r.t. the DOD is a 1-D single-tone estimation problem. Then \eqref{P} reduces to the cost function of 1-D MUSIC. Since $\mathbf{a}_t$ is Vandermonde, no search is needed in this case; instead, root-MUSIC comes into play. More importantly, in the 1-D single-tone case, the theoretical variance of the root-MUSIC estimate approximately attains the Cram\'er-Rao bound \cite{rootmusic}. Therefore, we do not need the gradient update to further refine the root-MUSIC estimate.
\bibliographystyle{IEEEtran}
\section{Introduction}
Interest in one-dimensional (1D) quantum systems \cite{Giamarchi2004, Cazalilla2011, Guan2013} has been renewed in the past decade due to experimental achievements in trapping 1D ultracold bosonic \cite{Paredes2004, Kinoshita2004, Haller2009} and fermionic gases \cite{Moritz2005, Liao2010}. A fundamental distinction between identical bosons and fermions lies in quantum statistics: bosons tend to condense into the same quantum state below their characteristic temperature, while fermions cannot occupy a single quantum state owing to the Pauli exclusion principle. When spinless bosonic particles are tightly confined in a quasi-1D regime, they become strongly interacting and fermionized in the so-called Tonks-Girardeau (TG) gas limit \cite{Tonks1936, Girardeau1960}. This regime can be reached in a dilute gas, where the effective atom-atom interactions become effectively infinite. Recent studies focus on the ground states and momentum distributions of spinless bosons \cite{Olshanii1998, Girardeau2001, Minguzzi2002, Olshanii2003, Papenbrock2003, Forrester2003, Xu2015}, quantum magnetism in spinful bosons \cite{Deuretzbacher2008, Deuretzbacher2014, Volosniev2014, Yang2015, Yang2016, Deuretzbacher2016} or Bose-Fermi mixtures \cite{Deuretzbacher2017, Decamp2017}, and broadened momentum distributions of the spin-incoherent \cite{Cheianov2005, Fiete2007, Feiguin2010} spin-$1$ Bose Luttinger liquid \cite{Jen2016_spin1, Jen2017_spin1}. As for recent investigations of 1D spinful fermions \cite{Guan2013}, energy spectra and the mapping to a spin-chain model for SU($\kappa$) fermions have been investigated \cite{Laird2017}, the exotic pairing phase of the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state with finite center-of-mass momenta has been indirectly observed in a spin-$1/2$ Fermi gas \cite{Liao2010}, and two distinguishable fermions have been shown to fermionize like two noninteracting identical fermions by tuning the interparticle interactions \cite{Zurn2012}.
For spin-$F$ fermions, only $F+1/2$ s-wave scattering lengths $a_s^{j}$ with even $j=0,2,...,2F-1$ \cite{Yip1999} are required to describe the interaction dynamics of states with total spin $j$. In two-electron fermionic atoms, there is no hyperfine coupling between the electronic angular momentum $J=0$ and the nuclear spin $I>0$ in the ground state ($^1S_0$), and therefore all scattering lengths become equal. Under this condition, SU($\kappa=2I+1$) spin symmetry can emerge \cite{Cazalilla2009, Gorshkov2010, Cazalilla2014} in alkaline-earth fermions such as $^{87}$Sr ($I=9/2$) \cite{Bonnes2012, Messio2012} or $^{173}$Yb ($I=5/2$) \cite{Taie2012} with tunable spins \cite{Pagano2014, Decamp2016}, close to the regime of the spin-incoherent Luttinger liquid (SILL) \cite{Fiete2007}.
The SILL is a universality class different from the conventional Luttinger liquid (LL) \cite{Giamarchi2004, Haldane1981}: it shows exponential decays of the single-particle Green's functions instead of the power-law decays in the respective spin and charge sectors of the LL. This spin-incoherent regime was first investigated in semiconductor quantum wires \cite{Cheianov2004, Fiete2004, Fiete2007}, and can be reached when the thermal energy of the system is higher than the energy splitting of different spin states while still low enough that collective charge excitations are suppressed. Other systems in the SILL regime, for example, the uniform two-component gas \cite{Cheianov2005}, $t$-$J$ models \cite{Feiguin2010, Penc1996, Penc1997}, and two-dimensional Hubbard models \cite{Hazzard2013, Zhou2014}, have also been investigated.
Specifically, for a 1D spinful Bose gas in the TG gas limit, the spin-independent interaction becomes infinite, so the spin Hamiltonian can be ignored and all spin configurations are degenerate. Under this condition, the spatial wave functions of the atoms take the Slater determinant form of noninteracting fermions, and the TG spinful Bose gas automatically resides in the SILL regime \cite{Jen2016_spin1, Jen2017_spin1}. Similarly, for spinful fermions in the TG gas limit, the spin exchange energy of 1D SU($\kappa$) fermions vanishes and all spin configurations are degenerate, which again puts them in the SILL regime. Away from the TG limit, the condition for achieving the SILL differs between bosons and fermions. For a weakly interacting 1D Bose gas, one has the SILL if the differences among the $a^j_s$ for different $j$'s are sufficiently small \cite{Jen2016_spin1, Jen2017_spin1}. On the other hand, for a noninteracting 1D Fermi gas, the sound and spin wave velocities are both equal to the Fermi velocity if the populations of all components are equal. Hence for a weakly interacting 1D Fermi gas, one does not have the SILL even when the interaction is SU($\kappa$) symmetric.
In Refs. \cite{Jen2016_spin1, Jen2017_spin1}, we investigated the SILL 1D spin-$1$ Bose gas in the TG gas limit. We found evident broadening in both the total and spin-dependent momentum distributions in the sector of zero magnetization. We also derived the $1/p^4$ asymptotic \cite{Minguzzi2002, Olshanii2003, Xu2015, Braaten2008-1, Braaten2008-2, Werner2009, Zhang2009} and evaluated its coefficient, related to Tan's contact \cite{Tan2008, Barth2011}, up to $N=16$. Here we investigate spinful fermions with tunable SU($\kappa$) spin symmetry in the SILL TG regime, and numerically calculate their momentum distributions without the restriction of zero magnetization. We also extend the particle number to $N=32$ by taking advantage of anyonic statistics (or the discrete Fourier transform), which significantly expedites the numerical calculation of the single-particle density matrix. Our study thus provides a comparison with experiments on repulsive multicomponent alkaline-earth fermions with a tunable SU($\kappa$) spin symmetry in the spin-incoherent regime.
The rest of the paper is organized as follows. In Sec. II, we introduce the single-particle density matrix of 1D SU($\kappa$) fermions in terms of separate spatial and spin parts with anyonic statistics \cite{Yang2017, Marmorini2016, Hao2016}. In Sec. III, we investigate two cases for the spin parts of the density matrix: the case of equal populations in each spin component and the case involving all $S_z$ manifolds. We then show the numerically calculated momentum distributions and high momentum tails in Sec. IV, and compare the tails with analytical predictions. Finally, we conclude in Sec. V.
\section{Single-particle density matrix of SILL SU(\texorpdfstring{$\boldsymbol{\kappa}$}{kappa}) fermions}
The effective Hamiltonian of ultracold 1D SU($\kappa$) fermions in TG gas limit can be expressed as \cite{Pagano2014, Decamp2016},
\bea
H&=&\sum_{\nu=1}^\kappa\sum_{j=1}^{N_\nu}\left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x_{j,\nu}^2}+\frac{1}{2}m\omega^2x_{j,\nu}^2\right]\mathbb{I}_{\rm spin}\nonumber\\
&+&\sum_{\nu<\nu'}^\kappa\sum_{j=1}^{N_\nu}\sum_{j'=1}^{N_{\nu'}}\delta(x_{j,\nu}-x_{j',\nu'})g_{1D}\mathbb{I}_{\rm spin},
\eea
where we consider atoms of mass $m$ trapped in a harmonic potential with axial trap frequency $\omega$, and the $\kappa$ spin components satisfy $\sum_{\nu=1}^\kappa N_\nu=N$, with $N_\nu$ the number of atoms in the $\nu$th component. The spin-independent interactions between SU($\kappa$) spin-symmetric fermions are described by $g_{1D}=-2\hbar^2/(ma_{1D})$, where $a_{1D}$ is the effective scattering length in 1D \cite{Olshanii1998}. Next we consider a general wave function of $N$ fermions with spins,
\bea
|\Psi\rangle=\sum_{s_1,s_2,...s_N}\psi_{s_1,s_2,...s_N}(\vec{x})|s_1,s_2,...,s_N\rangle,\label{wf}
\eea
where we denote the atomic spatial coordinates as $\vec x=(x_1,x_2,...,x_N)$ along with the corresponding spin configurations $|s_1,s_2,...,s_N\rangle\equiv|\vec s\rangle$. Note that each spin $s_i$ lies within the manifold of SU($\kappa$) spin symmetry. The total wave function must satisfy the quantum statistics of the atoms, which is the fermionic antisymmetry considered here, and thus it is sufficient to focus only on the ordered region $x_1<x_2<\cdots<x_N$. The other regions can be obtained via permutations of this ordered region.
The single-particle density matrix according to the general wave function of Eq. (\ref{wf}) becomes
\bea
\rho(x',x)=N\sum_{\vec{s}}\int d\bar{x}\psi_{\vec{s}}^*(x',\bar{x})\psi_{\vec{s}}(x,\bar{x}),\label{rhop}
\eea
where $\bar{x}\equiv(x_2,x_3,...,x_N)$. To calculate Eq. (\ref{rhop}), we consider only the region $x'<x$, which is symmetric to $x'>x$. Equation (\ref{rhop}) involves $N(N+1)/2$ distinct and ordered integral regions \cite{Jen2016_spin1, Jen2017_spin1}, which we denote as \cite{Yang2017}
\bea
\Gamma_{m,n}:~&&x_2<...<x_m<x'<x_{m+1}<...\nonumber\\
&&...<x_n<x<x_{n+1}...<x_N,
\eea
where $x'$ and $x$ are located right after $x_m$ and $x_n$, respectively. Each distinct and ordered integral region gives the same spatial integral value, so that we obtain \cite{Jen2016_spin1, Jen2017_spin1, Yang2017}
\bea
\rho(x'<x)=\sum_{m=1}^N\sum_{n=m}^N\rho_{m,n}(x',x)S_{m,n}.\label{rho2}
\eea
In the above, we proceed to write down the spatial part in TG gas limit as
\bea
\rho_{m,n}(x',x)=&&(-1)^{n-m}N!\int_{\Gamma_{m,n}}d\bar x\varphi_{\vec n}^*(x',\bar x)\varphi_{\vec n}(x,\bar x),\label{rhomn}\\
\varphi_{\vec n}(\vec x)\equiv&&\frac{1}{\sqrt{N!}}\mathbb{A}[\phi_{n_1}(x_1),\phi_{n_2}(x_2),...,\phi_{n_N}(x_N)],\label{psi}
\eea
with orbital indices $\vec n$ $=$ $(n_1,n_2,...,n_N)$ and antisymmetrized ($\mathbb{A}$) eigenfunctions $\phi_{n_j}(x_j)$ of noninteracting fermions in a harmonic trap. Meanwhile, the spin part in SILL regime is denoted as \cite{Jen2016_spin1, Jen2017_spin1}
\bea
S_{m,n}=(-1)^{m-n}\frac{\sum_{\vec s}\langle P_{12...m}(\vec s)|P_{12...n}(\vec s)\rangle}{\rm{Tr}_\chi(E)},\label{Smn}
\eea
with the identity and $m$-particle permutation operators $E$ and $P_{12...m}$, respectively. The total number of spin state configurations is Tr$_\chi(E)\equiv\sum_\chi\langle\chi|E|\chi\rangle$ over all spin configurations $|\chi\rangle$. The $S_{m,n}$ represents the normalized spin function overlap, averaged over all possible spin configurations, which is nonvanishing if the permuted spin state $|P_{12...m}(\vec s)\rangle$ has a nonzero projection onto $|P_{12...n}(\vec s)\rangle$.
To evaluate Eq. (\ref{rho2}) efficiently, we take advantage of the discrete Fourier transform or equivalently anyonic statistics \cite{Yang2017, Marmorini2016, Hao2016}, which transforms respectively Eqs. (\ref{rhomn}) and (\ref{Smn}) to
\bea
\rho_{m,n}(x',x)=&&N^{-2}\sum_{k',k}\rho_{k',k}(x',x)e^{i\pi k' m}e^{-i\pi k n},\\
S_{k',k}=&&N^{-2}\sum_{m,n=1}^N S_{m,n}e^{i\pi k' m}e^{-i\pi k n},
\eea
with discrete statistical parameters \cite{Girardeau2006} of $k,k'=2j/N$ for $j=1,2,...,N$, and
\bea
\rho_{k',k}(x',x)=&&N\int d\bar x\prod_{j=2}^N A^{k'*}(x_j-x')A^{k}(x_j-x)\nonumber\\
&&\times\varphi_{\vec n}^*(x',\bar x)\varphi_{\vec n}(x,\bar x),\label{rhokk}
\eea
where $A^{k}(x_j-x_l)\equiv e^{i\pi(1-k)\theta(x_j-x_l)}$ with the Heaviside step function $\theta(x_j-x_l)$. Finally, we obtain the single-particle density matrix for SILL 1D SU($\kappa$) fermions, in terms of discrete statistical parameters,
\bea
\rho(x',x)=\sum_{k',k}\rho_{k',k}(x',x)S_{k',k}.\label{rhoxx}
\eea
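Since $k=2j/N$ turns $e^{i\pi km}$ into the standard DFT kernel $e^{i2\pi jm/N}$, the equivalence of Eq. (\ref{rhoxx}) with the $(m,n)$-sum of Eq. (\ref{rho2}) can be checked numerically with random stand-ins for the spatial and spin parts (a sketch; the matrices below are generic placeholders, not actual $\rho_{m,n}$ or $S_{m,n}$):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 6
S_mn = rng.standard_normal((N, N))       # stand-in for the spin part S_{m,n}
rho_mn = rng.standard_normal((N, N))     # stand-in for the spatial part rho_{m,n}

m = np.arange(1, N + 1)
k = 2 * np.arange(1, N + 1) / N          # discrete statistical parameters k = 2j/N
F = np.exp(1j * np.pi * np.outer(k, m))  # F[j, m] = e^{i pi k_j m}, a DFT kernel

S_kk = F @ S_mn @ F.conj().T / N**2      # transform of S_{m,n} to S_{k',k}
rho_kk = F.conj() @ rho_mn @ F.T         # inverse pair of the rho_{m,n} transform

lhs = np.sum(rho_kk * S_kk)              # sum over (k',k), Eq. (rhoxx) form
rhs = np.sum(rho_mn * S_mn)              # sum over (m,n), Eq. (rho2) form
```

The two sums agree because $N^{-1}\sum_j e^{i2\pi j(m-m')/N}=\delta_{mm'}$ for $m,m'\in\{1,...,N\}$.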
In the next section, we specifically calculate the spatial and spin parts of the density matrix for SU($\kappa$) fermions.
\section{Spatial and Spin parts of the density matrix}
\subsection{Spatial parts of the density matrix}
The spatial parts of the single-particle density matrix have been investigated for spinless bosons \cite{Papenbrock2003, Forrester2003} and anyons \cite{Hao2016,Marmorini2016} in a harmonic trap, where analytically exact formulas can be derived. For 1D SU($\kappa$) fermions in the TG gas limit and confined in a harmonic trap potential, the dimensionless eigenfunctions $\phi_n(y)$ with $y\equiv x/x_{ho}$ and $x_{ho}\equiv\sqrt{\hbar/(m\omega)}$ are
\bea
\phi_n(y)&=&\frac{1}{\sqrt{2^n n!}}\frac{1}{\pi^{1/4}}H_n(y)e^{-y^2/2},\label{eigen}
\eea
where $H_n$ are Hermite polynomials.
Substituting Eq. (\ref{eigen}) into Eq. (\ref{psi}), the spatial wave function with $\vec n =(0,1,...,N-1)$ can be expressed in the form of a Vandermonde determinant \cite{Forrester2003},
\bea
\varphi_{\vec n}(\vec x)=\sqrt{C_N^V}\prod_{l=1}^Ne^{-x_l^2/2}\prod_{1\leq j<m\leq N}(x_j-x_m),\label{V}
\eea
where the normalization constant is
\bea
C_N^V=\frac{2^{N(N-1)/2}}{\pi^{N/2}\prod_{j=1}^N j!}.
\eea
To derive the exact form of Eq. (\ref{rhokk}), in addition to using the above form, we need the following general equality \cite{Papenbrock2003, Forrester2003},
\bea
&&\frac{1}{N!}\prod_{l=1}^N\int_{-\infty}^\infty dx_lg(x_l)(\textrm{det}[f_{j-1}(x_m)]_{j,m=1,...,N})^2\nonumber\\
&&=\textrm{det}\left[\int_{-\infty}^\infty dt g(t)f_{j-1}(t)f_{m-1}(t)\right]_{j,m=1,...,N},\label{equal}
\eea
for any functions $g$ and $f_j$. We separate the dependence on $x$ and $\bar x$ in $\varphi_{\vec n}(x,\bar x)$ by expanding the determinant in terms of minors, $\phi_n(x)\textrm{det}[\phi_{j}(x_m)]$ with $j=0,...,n-1,n+1,...,N-1$ and $m=2,...,N$, for $n=0,1,...,N-1$. This way we are able to cast $\phi_n(x)$ in a Vandermonde form, while retaining the rest of the particles at $\bar x$ in a determinant form. A similar treatment applies to $\varphi_{\vec n}(x',\bar x)$. We then re-express Eq. (\ref{rhokk}) by grouping the anyonic statistical factors $A^{k'*}$ and $A^{k}$ with $(x_l-x')$ and $(x_l-x)$, and let $g(x_l)=A^{k'*}(x_l-x')A^{k}(x_l-x)(x_l-x')(x_l-x)$ in Eq. (\ref{equal}) with $l$ starting from $2$. Applying the equality of Eq. (\ref{equal}) to Eq. (\ref{rhokk}), we obtain
\begin{widetext}
\bea
\rho_{k',k}(x',x)=&&\frac{2^{N-1}}{(N-1)!}\frac{e^{-(x'^2+x^2)/2}}{\sqrt{\pi}}\textrm{det}\left[\int_{-\infty}^\infty dtA^{k'*}(t-x')A^{k}(t-x)(t-x')(t-x)\phi_j^*(t)\phi_m(t)\right]_{j,m=0,...,N-2},\nonumber\\
&&=\frac{e^{-(x'^2+x^2)/2}}{\sqrt{\pi}}\textrm{det}\left[\frac{2^{(j+m)/2}}{\Gamma(j+1)\Gamma(m+1)}\frac{b_{j,m}^{k',k}(x',x)}{\sqrt{\pi}}\right]_{j,m=1,...,N-1},
\eea
\end{widetext}
where
\bea
b_{j,m}^{k',k}(x',x)\equiv&&\int_{-\infty}^\infty dt A^{k'*}(t-x')A^{k}(t-x)\nonumber\\
&&\times(t-x')(t-x)t^{j+m-2}e^{-t^2}.\label{b}
\eea
Equation (\ref{b}) can be derived by using the equivalent Vandermonde determinant forms of det[$H_{j-1}(x_m)$] and det[$x_m^{j-1}$], which lead to the term $t^{j+m-2}$. We further derive the exact form of $b_{j,m}^{k',k}(x',x)$ in the Appendix, which involves incomplete gamma functions. This exact form significantly expedites the numerical calculation of the single-particle density matrix, but as $N$ increases beyond $N=32$, the calculation is limited by $64$-bit double-precision arithmetic. We therefore give results up to $N=32$, which can be pushed further by using arbitrary-precision arithmetic. Next we study two cases of the spin parts of the density matrix of 1D SU($\kappa$) fermions.
\subsection{Equal populations in each spin component}
For equal populations in each of the SU($\kappa$) spin components, the spin configurations contributing to Eq. (\ref{Smn}) involve the states,
\bea
|\underbrace{\alpha_1...\alpha_1}_{N_1}\underbrace{\alpha_2...\alpha_2}_{N_1}...\underbrace{\alpha_\kappa...\alpha_\kappa}_{N_1}\rangle,
\eea
where $N_1=N/\kappa$ with $\kappa$ spin components $(\alpha_1,\alpha_2,...,\alpha_\kappa)$. The $S_{m,n}$ is nonvanishing only when $N_1\geq l$ with $l\equiv|n-m|+1$, owing to the contribution of $l$ entries in one of the spin components. Taking component $\alpha_1$ as an example, the contributing spin configuration is
\bea
|\underbrace{\alpha_1...\alpha_1}_{l}\underbrace{\alpha_1...\alpha_1}_{N_1-l}\underbrace{\alpha_2...\alpha_2}_{N_1}...\underbrace{\alpha_\kappa...\alpha_\kappa}_{N_1}\rangle.
\eea
Since there are $\kappa$ spin components, we obtain the spin parts of the density matrix as
\bea
S_{m,n}=\frac{(-1)^{m-n}}{w_N}\left[\frac{\kappa(N-l)!}{(N_1-l)![N_1!]^{\kappa-1}}\right],\label{S}
\eea
where
\bea
w_N=\frac{N!}{[N_1!]^\kappa}.
\eea
The bracket in Eq. (\ref{S}) counts the number of states obtained by permuting the remaining $(N_1-l)$ spins of the chosen component together with the $N_1$ spins of each of the other $(\kappa-1)$ components. We also define $w_N\equiv\textrm{Tr}_\chi(E)$ as the total number of states above.
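Equation (\ref{S}) can be checked against a brute-force average over all equal-population spin configurations, with $P_{12...m}$ implemented as a cyclic shift of the first $m$ spins (a sketch; the cyclic convention is an assumption consistent with Eq. (\ref{Smn})):

```python
from itertools import permutations
from math import factorial

def S_mn_formula(m, n, N, kappa):
    """Eq. (S): spin overlap for equal populations N1 = N/kappa."""
    N1, l = N // kappa, abs(n - m) + 1
    if N1 < l:
        return 0.0
    wN = factorial(N) / factorial(N1)**kappa
    num = kappa * factorial(N - l) / (factorial(N1 - l) * factorial(N1)**(kappa - 1))
    return (-1)**(m - n) * num / wN

def cycle(s, m):
    """Cyclic permutation P_{12...m} acting on the first m entries of a spin tuple."""
    return s[1:m] + (s[0],) + s[m:]

def S_mn_brute(m, n, N, kappa):
    """Direct average of <P_{1..m}(s)|P_{1..n}(s)> over equal-population configurations."""
    N1 = N // kappa
    base = sum(((c,) * N1 for c in range(kappa)), ())
    configs = set(permutations(base))
    count = sum(cycle(s, m) == cycle(s, n) for s in configs)
    return (-1)**(m - n) * count / len(configs)
```

For example, $S_{m,m}=1$ and, for $N=4$, $\kappa=2$, $S_{1,2}=-1/3$ in both routes.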
\subsection{All \texorpdfstring{$\boldsymbol{S_z}$}{Sz} manifolds included}
Next we consider the spin configurations with all $S_z$ manifolds included. In contrast to the case of equal populations, the $S_{m,n}$ here is always finite, with a contribution from $l\equiv|n-m|+1$ entries in one of the spin components. Taking again component $\alpha_1$ as an example, the contributing spin configuration is
\bea
|\underbrace{\alpha_1...\alpha_1}_{l}\underbrace{......}_{N-l}\rangle,
\eea
where the remaining $(N-l)$ spins can be any of the $\kappa$ components. We obtain the spin part of the density matrix as
\bea
S_{m,n}=&&\frac{(-1)^{m-n}}{w_N}\kappa^{N-l+1},\label{S2}\\
w_N=&&\kappa^N,
\eea
and we can further simplify $S_{m,n}$ as
\bea
S_{m,n}=&&\frac{(-1)^{m-n}}{\kappa^{|m-n|}}.\label{S3}
\eea
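Equation (\ref{S3}) can likewise be verified by brute force over all $\kappa^N$ configurations (the cyclic-permutation convention for $P_{12...m}$ is an assumption consistent with Eq. (\ref{Smn})):

```python
from itertools import product

def S_mn_all(m, n, kappa):
    """Eq. (S3): S_{m,n} = (-1)^{m-n} / kappa^{|m-n|}."""
    return (-1)**(m - n) / kappa**abs(m - n)

def cycle(s, m):
    """Cyclic permutation P_{12...m} acting on the first m entries of a spin tuple."""
    return s[1:m] + (s[0],) + s[m:]

def S_mn_brute(m, n, N, kappa):
    """Direct average of <P_{1..m}(s)|P_{1..n}(s)> over all kappa^N configurations."""
    configs = list(product(range(kappa), repeat=N))
    count = sum(cycle(s, m) == cycle(s, n) for s in configs)
    return (-1)**(m - n) * count / len(configs)
```

The count of matching configurations is $\kappa^{N-l+1}$, reproducing Eq. (\ref{S2}).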
\section{Momentum distributions}
Based on Eq. (\ref{rhoxx}), we numerically calculate the momentum distributions of SILL 1D SU($\kappa$) fermions in TG gas limit ($\hbar$ $=$ $1$),
\bea
\rho(p)=\frac{1}{2\pi}\int_{-\infty}^\infty dx' \int_{-\infty}^\infty dx e^{ip(x'-x)}\rho(x',x).
\eea
Below we investigate various conditions of fixed total number of atoms $N$, number of spin components $\kappa$, and number of atoms $N_1$ in each spin component.
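As a numerical sketch of this transform (grids and particle number are illustrative, and the noninteracting spatial density matrix $\sum_{n<N}\phi_n(x')\phi_n(x)$ stands in for the full SILL $\rho(x',x)$ of Eq. (\ref{rhoxx})), the discretized double integral preserves the normalization $\int\rho(p)dp=N$:

```python
import numpy as np
from math import factorial, pi
from numpy.polynomial.hermite import hermval

def phi(n, y):
    """Harmonic-oscillator eigenfunction of Eq. (eigen)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(y, c) * np.exp(-y**2 / 2) / np.sqrt(2.0**n * factorial(n) * np.sqrt(pi))

N = 2                                     # small particle number for the sketch
x = np.linspace(-6, 6, 401)               # dimensionless positions x / x_ho
dx = x[1] - x[0]

# stand-in density matrix rho(x',x) = sum_n phi_n(x') phi_n(x)
Phi = np.array([phi(n, x) for n in range(N)])
R = Phi.T @ Phi

p = np.linspace(-6, 6, 201)               # dimensionless momenta p * x_ho
E = np.exp(1j * np.outer(p, x))           # e^{i p x'} factors; conjugate gives e^{-i p x}
rho_p = np.real(np.einsum('pi,ij,pj->p', E, R, E.conj())) * dx**2 / (2 * pi)
```

The momentum distribution is nonnegative and integrates to $N$, since $\int\rho(p)dp=\textrm{Tr}\,\rho$.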
\subsection{Fixed \texorpdfstring{$\boldsymbol{N}$}{N}}
It is instructive first to compare the momentum distributions of 1D SU($\kappa$) fermions in the TG gas limit with those of noninteracting multi-component fermions. In Fig. \ref{fig1}, we focus on the case of equal populations in each spin component [$\rho_{eq}(p)$]. A signature of noninteracting $\kappa$-component fermions is the number of Friedel oscillation peaks, which is exactly $N_1$. As the number of components increases, noninteracting fermions tend to occupy lower momenta, and thus the momentum distribution becomes narrower. This can be explained by the decreasing $N_1$ of each spin component at fixed $N$. The same trend is seen in SILL 1D SU($\kappa$) fermions as $\kappa$ increases. In contrast, for fixed $\kappa$ and $N$, the TG gas has a broader momentum distribution than the noninteracting fermions due to the strong interactions in the TG gas limit. Furthermore, we note that the kinetic and potential energies of 1D SU$(\kappa)$ fermions in the TG gas limit satisfy the virial theorem \cite{Werner2006, Werner2008} and both equal $N^2\hbar \omega/4$ (half of the total energy of the system), since the fermions have the same density profile as noninteracting ones. For noninteracting multi-component fermions, instead, the kinetic (potential) energy is $N^2\hbar\omega/(4\kappa)$, which is always smaller than that of SILL SU($\kappa$) fermions for $\kappa\geq 2$.
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm,height=4.5cm]{1.eps}
\caption{Comparison of momentum distributions of 1D SU($\kappa$) fermions in the SILL regime and noninteracting multi-component fermions. The number of fermions is $N=32$. As the number of components increases, the width of momentum distributions decreases for both noninteracting and spin-incoherent cases.}\label{fig1}
\end{figure}
In Fig. \ref{fig2}, we show $\rho_{eq}(p)$ for the same $N$ but different numbers of spin components. In contrast to the Friedel oscillations of spinless fermions, the oscillations of SILL SU($\kappa$) fermions are smoothed out by the averaging effect of the spin function overlaps $S_{m,n}$, similar to the case of spin-$1$ bosons \cite{Jen2016_spin1, Jen2017_spin1}. As $\kappa$ increases, the momentum distributions become less broadened, which can be seen near $px_{ho}\approx 5$ and is also reflected in the increase of $\rho_{eq}(p=0)$. In contrast, the high momentum tails take larger values for larger $\kappa$, which we investigate in detail in the next subsection. For the case with all $S_z$ manifolds included, $\rho_{all}(p)$, we show its difference from $\rho_{eq}(p)$ in the insets of Fig. \ref{fig2}. The relative difference, normalized to $\rho_{eq}(0)$, is on the order of $10^{-3}$, which makes $\rho_{all}(p)$ almost indistinguishable from $\rho_{eq}(p)$. Nonetheless, the central maximum $\rho_{all}(0)$ is smaller than $\rho_{eq}(0)$ for each $\kappa$, while at moderate $3 \leq px_{ho}\leq 8$, between the two crossing points, $\rho_{all}(p)$ exceeds $\rho_{eq}(p)$.
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm,height=4.5cm]{2.eps}
\caption{Momentum distributions of 1D SU($\kappa$) fermions in the SILL regime with equal populations in each spin component. The number of fermions is the same as in Fig. \ref{fig1}, and we choose $\kappa=2,4,8,32$ for comparison with spinless fermions. The inset shows the difference between the cases of all spin manifolds and equal populations, $[\rho_{all}(p)-\rho_{eq}(p)]/\rho_{eq}(0)$, which is on the order of $10^{-3}$ relative to the maximum of the distribution.}\label{fig2}
\end{figure}
As a theoretical interest, we consider the case of large $\kappa$. In Fig. \ref{fig2}, we show the momentum distribution with $\kappa$ $=$ $N$. For equal populations and fixed $N$, this represents the maximal allowed $\kappa$, where every fermion occupies a distinct spin state. As such, $\rho_{eq,\kappa=N}(p)$ has the narrowest width compared to all other $\kappa$ $<$ $N$. Again, $\rho_{all,\kappa=N}(p)$ is hardly distinguishable from $\rho_{eq,\kappa=N}(p)$, as shown in the inset of Fig. \ref{fig2}. At $\kappa=N$ with equal populations, we have $S_{m,n}=\delta_{m,n}$. Comparing this with Eq. (\ref{S3}) shows that $\rho_{all, \kappa\rightarrow\infty}(x',x)$ coincides exactly with $\rho_{eq,\kappa=N}(x',x)$, and is therefore almost indistinguishable from $\rho_{all,\kappa=N}(x',x)$. These narrower widths and higher momentum tails are reminiscent of the infinite-$\kappa$ regime, where the ground state energy \cite{Yang2011} and Tan's contacts \cite{Decamp2016} of 1D SU($\kappa$) fermions approach those of spinless bosons. However, for SILL 1D SU($\kappa$) fermions at infinite $\kappa$, the spin parts of the density matrix for all spin manifolds become $S_{m,n}\rightarrow\delta_{m,n}$, whereas for spinless bosons $S_{m,n}$ $=$ $1$ for all $m$ and $n$. Therefore, SILL 1D SU($\kappa$) fermions never behave exactly as spinless bosons as $\kappa\rightarrow\infty$. In this limit, we note that $S_{k',k}\rightarrow\delta_{k',k}/N$, and the single-particle density matrix becomes $\rho_{all,\kappa\rightarrow\infty}(x',x)=N^{-1}\sum_{k}\rho_{k,k}(x',x)$, an average over anyon density matrices with statistical parameters $k$.
\subsection{Fixed \texorpdfstring{$\boldsymbol{\kappa}$}{kappa}}
In Fig. \ref{fig3}, we show $\rho_{eq}(p)$ for a fixed number of spin components $\kappa$. The distributions broaden uniformly as $N$ increases, for all spin components, due to the strong interactions. For noninteracting multi-component fermions the peaks scale as $N^{0.5}$, while fitting our numerical results in Fig. \ref{fig3} shows that the peaks scale as $N^\alpha$ with $\alpha \lesssim 0.5$.
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm,height=4.5cm]{3.eps}
\caption{Momentum distributions of SILL 1D SU($\kappa$) fermions for various $N$ with equal populations in each spin component. As the number of fermions $N$ increases, the momentum distributions are uniformly broadened. For the various spin components $\kappa$, we choose $N=16,24,32$ in (a), (b), (d), and $N=18,24,30$ in (c), respectively.}\label{fig3}
\end{figure}
\subsection{Fixed \texorpdfstring{$\boldsymbol{N_1}$}{N1}}
\begin{figure}[b]
\centering
\includegraphics[width=8.5cm,height=4.5cm]{4.eps}
\caption{Normalized momentum distributions of 1D SU($\kappa$) fermions in the SILL regime with fixed number of atoms per component. The number of atoms per spin component is $N_1=6$. As the number of components $\kappa$ increases from $1$ to $5$ and accordingly $N=\kappa N_1$, the momentum distributions are broadened.}\label{fig4}
\end{figure}
Finally, as in the experiment on 1D fermions with tunable SU($\kappa$) spin symmetry \cite{Pagano2014}, in Fig. \ref{fig4} we plot the normalized momentum distributions ($\int \rho(p)dp=1$) of SILL 1D SU($\kappa$) fermions at fixed $N_1$. The momentum distributions broaden as $\kappa$ increases. As the number of spin components increases, the total number of atoms also increases; the broadening therefore comes both from the strong interactions in the TG gas limit and from the increasing number of atoms. Under the condition of fixed $N_1$ in Fig. \ref{fig4}, the kinetic (potential) energy per atom is $N_1\kappa\hbar\omega/4$, which rises linearly with $\kappa$.
We also compare our results with the experiments, where the system is at finite temperature with finite atom-atom interactions and experiences inhomogeneous distributions in a 2D optical lattice of 1D tubes \cite{Pagano2014}. In the experiment, the broadening of the normalized momentum distributions is also observed as $\kappa$ increases, although the Friedel oscillations are absent in the single-component measurement due to the averaging over inhomogeneous distributions or finite temperature. In addition, we extrapolate the normalized momentum distributions in Fig. 2(a) of \cite{Pagano2014} and numerically calculate the corresponding kinetic energies. These kinetic energies approximately follow a linear increase with the number of spin components $\kappa$, which indicates that the system behaves similarly to the TG gas limit. We note that the other essential feature of SILL 1D SU($\kappa$) fermions, the trend toward narrower momentum distributions at fixed $N$ in Fig. \ref{fig2}, offers an alternative probe to measuring breathing mode oscillations \cite{Pagano2014}.
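The kinetic-energy extraction above reduces to a one-dimensional quadrature over the normalized momentum distribution, $E_\mathrm{kin}=\int dp\,[p^2/(2m)]\,\rho(p)$. A minimal sketch (our own; the Gaussian test distribution and parameter values are invented, and this is not the extrapolation procedure applied to the experimental data):

```python
import numpy as np

def kinetic_energy(p, rho_p, m=1.0):
    """E_kin = int dp (p^2 / 2m) rho(p), normalizing rho(p) to unity first."""
    dp = p[1] - p[0]
    norm = rho_p.sum() * dp
    return (p**2 / (2.0 * m) * rho_p).sum() * dp / norm

# Check on a Gaussian rho(p) ~ exp(-p^2 / (2 sigma^2)): E_kin = sigma^2 / (2 m)
sigma = 2.0
p = np.linspace(-20.0, 20.0, 2001)
rho_p = np.exp(-p**2 / (2.0 * sigma**2))
print(kinetic_energy(p, rho_p))  # ~ 2.0
```

Applied to momentum distributions extracted from experimental figures, the linearity of $E_\mathrm{kin}$ in $\kappa$ can then be checked directly.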
\subsection{Large \texorpdfstring{$\boldsymbol{p}$}{p} asymptotics}
Here we further investigate the high momentum tails of 1D SU($\kappa$) fermions in the SILL regime. The universal high momentum asymptotics $1/p^4$ originates from the two-body contact interaction in many-body systems; it is present in the spinless Bose gas \cite{Minguzzi2002, Olshanii2003, Xu2015, Lang2017, Decamp2018}, the SILL spin-$1$ Bose gas \cite{Jen2016_spin1, Jen2017_spin1}, and two-component \cite{Braaten2008-1, Braaten2008-2, Werner2009, Zhang2009, Patu2016} or multi-component Fermi gases \cite{Matveeva2016, Decamp2016}, and is encoded in Tan's relation \cite{Tan2008, Barth2011}. The coefficient of this scaling can be related to the slope of the ground state energy of the many-body system, $(-dE/dg_{\rm 1D}^{-1})$ \cite{Olshanii2003, Decamp2016}.
We have derived the analytical results for this high momentum asymptotics in the SILL 1D spin-$1$ TG Bose gas \cite{Jen2016_spin1, Jen2017_spin1}, which can be straightforwardly extended to 1D SU($\kappa$) fermions in the TG gas limit,
\bea
\rho(p)\underset{p\rightarrow\infty}{=}&&\frac{2(1+S_{m-1,m})}{2\pi p^4}\nonumber\\
&&\times\sum_{(n_i,n_j)}\int_{-\infty}^\infty dx
\left| \begin{array}{cc}
\phi_{n_i}'(x) & \phi_{n_j}'(x) \\
\phi_{n_i}(x) & \phi_{n_j}(x) \end{array} \right|^2,\label{large_p}
\eea
for arbitrary $m$, since $S_{m,n}$ depends only on $|m-n|$. Here $(n_i,n_j)$ runs over all possible pairs of the $N$ harmonic oscillator eigenfunctions. The coefficient depends on the spin parts of the single-particle density matrix only through $S_{m,n}$ with $|m-n|=1$, since the contributions come only from the integration regions $x$ $<$ $x_j$ $<$ $x'$ and $x'$ $<$ $x_j$ $<$ $x$ for all $x_j$ $\in$ $\bar{x}$ with $x$ $\approx$ $x'$. The coefficients for spinless or spinful bosons are obtained by replacing $(1+S_{m-1,m})$ with $2$ or by $S_{m-1,m}\rightarrow -S_{m-1,m}$, respectively, in Eq. (\ref{large_p}). The sign change of $S_{m-1,m}$ for spinful bosons restores the bosonic symmetry of the single-particle density matrix. For fermions, the coefficient $(1+S_{m-1,m})$ in the asymptotic form of Eq. (\ref{large_p}) increases with $\kappa$. As $\kappa\rightarrow\infty$, $S_{m-1,m}\rightarrow 0$, so that $(1+S_{m-1,m})$ reaches its maximum of one, which is only half of the coefficient for spinless bosons. This value is thus also half of that of the ground state of 1D fermions in the $\kappa\rightarrow\infty$ TG limit.
In Fig. \ref{fig5}, we show the high momentum asymptotic curves and compare them with the analytical results. As $\kappa$ increases, the coefficients go up, as the arrow indicates. The convergence of the numerical calculations can be seen in inset (a), where the numerical result approaches the analytical one as finer grids are used. We further compare $\rho_{all}(p)$ with $\rho_{eq}(p)$ by showing their relative difference in inset (b). The coefficients of the high momentum tails of $\rho_{all}(p)$ are smaller than those of $\rho_{eq}(p)$. Similar to the momentum distributions at small $p\lesssim 10$, the difference ratio at large $p$ is on the order of $10^{-2}$ relative to $\rho_{eq}(p)$, and therefore the asymptotic curves of $\rho_{all}(p)$ are again close to those of $\rho_{eq}(p)$. The analytical coefficients likewise differ by less than $5\%$ (see the caption of Fig. \ref{fig5} for their numerical values), so the curves almost overlap. At $10 \lesssim px_{ho}\lesssim 25$, the relative difference in inset (b) saturates to a flat line, indicating a constant ratio between the coefficients of $\rho_{all}(p)$ and $\rho_{eq}(p)$. For $px_{ho}\gtrsim 25$, the difference grows, which marks the accuracy limit of $px_{ho}\approx 25$ in our numerical calculations.
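Extracting a tail coefficient of this kind numerically amounts to locating a plateau in $p^4\rho(p)$. A minimal sketch on synthetic data (our own illustration; the value $165$ is the $\kappa=2$ coefficient from the caption of Fig.~\ref{fig5}, while the subleading $1/p^6$ admixture is invented):

```python
import numpy as np

def tail_coefficient(p, rho_p, p_min, p_max):
    """Estimate C in rho(p) ~ C/p^4 by averaging p^4 rho(p) over [p_min, p_max]."""
    mask = (p >= p_min) & (p <= p_max)
    plateau = p[mask]**4 * rho_p[mask]
    return plateau.mean(), plateau.std()

# Synthetic tail with a known coefficient and a small subleading piece
C_true = 165.0
p = np.linspace(5.0, 30.0, 500)
rho_p = C_true / p**4 * (1.0 + 2.0 / p**2)
C_est, spread = tail_coefficient(p, rho_p, 10.0, 25.0)   # plateau window as in the text
```

The window $10\lesssim p\lesssim 25$ mirrors the accuracy range quoted above; a growing $p^4\rho(p)$ beyond the window would signal the limit of the numerics rather than a physical effect.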
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm,height=4.5cm]{5.eps}
\caption{Asymptotics of the high momentum distributions of 1D SU($\kappa$) fermions in the SILL regime. The total number of fermions is $N=30$, and we choose $\kappa=2,3,5,10$ for comparison. The high momentum tails are plotted on logarithmic scales together with the analytical curves (dashed). The analytical asymptotics for $\rho_{eq}(p)$ are $(165, 220, 264, 297)/p^4$, respectively, as $\kappa$ increases, while for $\rho_{all}(p)$ they are $(159, 213, 255, 287)/p^4$. Inset (a) shows the convergence of the numerical result for SU($\kappa=2$) to the analytical curve as the grid is refined from $dx=0.2,0.1,0.05$ (dash dots) to $0.025$ (solid), while (b) shows the relative asymptotics $[\rho_{all}(p)-\rho_{eq}(p)]/\rho_{eq}(p)$.}\label{fig5}
\end{figure}
\section{Conclusion}
In conclusion, we have investigated the momentum distributions of 1D SU($\kappa$) fermions in the TG gas limit, which puts the system in the spin-incoherent regime, a universality class distinct from the conventional Luttinger liquid. We derive the single-particle density matrices in terms of those of anyons, which expedites the numerical calculations up to $N=32$. We investigate SU($\kappa$) fermions in two cases: equal populations in each spin component and all $S_z$ manifolds included. Compared to noninteracting multi-component fermions, their momentum distributions are broadened by the strong interactions in the TG gas limit, while they become less broadened as $\kappa$ increases. We also compare the numerical results with the analytical predictions for the high momentum tails, which follow the derived asymptotic coefficients in the moderately high momentum region. Our results provide an informative comparison with experiments on multi-component alkaline-earth fermions with SU($\kappa$) spin symmetry in the spin-incoherent regime.
\section*{Acknowledgements}
This work is supported by the Ministry of Science and Technology (MOST), Taiwan, under Grants No. MOST-104-2112-M-001-006-MY3 and No. MOST-106-2112-M-001-005-MY3. H.H.J. is partially supported by Grants No. MOST-106-2633-M-001-001 and No. 106-2811-M-001-130 from MOST, as an assistant research scholar at the Institute of Physics, Academia Sinica, Taiwan.
\section{Introduction}
\label{sec:introduction}
In lattice QCD, two methods have been proposed so far to study baryon-baryon interactions.
One is the direct method~\cite{Yamazaki:2015asa,Wagman:2017tmp, Berkowitz:2015eaa},
where the energy spectrum in a finite volume is extracted from the temporal correlation of two baryons
and is converted to the scattering phase shift and/or the binding energy in the infinite volume through
L\"uscher's finite volume formula~\cite{Luscher:1985dn,Luscher:1990ux}.
The other is the HAL QCD method~\cite{Ishii:2006ec,Aoki:2009ji,HALQCD:2012aa,Aoki:2012tk,Aoki:2012bb},
where the potential between baryons is first derived from the spatial correlations of two baryons,
and it is used to calculate the observables through
the Schr\"odinger-type equation in the infinite volume.
While both methods should give the same results in principle,
previous numerical studies of two-nucleon ($NN$) systems show a clear discrepancy:
the direct method indicates that both the dineutron ($^1$S$_0$) and the deuteron ($^3$S$_1$)
are bound at heavy pion masses ($m_{\pi} \geq 300$~MeV),
while the HAL QCD method finds no such bound states in either channel at heavy pion masses.
This discrepancy was recently discussed in
a series of papers~\cite{Iritani:2016jie,Iritani:2017rlk,Aoki:2017byw,Iritani:2018vfn},
where it was pointed out that the effective two-particle energy as a function of
Euclidean time may suffer significantly from contaminations of the elastic scattering states of two nucleons.
To elucidate such uncertainties, certain ``normality checks''
for the finite-volume spectra were introduced~\cite{Iritani:2016jie,Iritani:2017rlk,Aoki:2017byw,Iritani:2018vfn}.
The advantage of the time-dependent HAL QCD method~\cite{HALQCD:2012aa}
over the direct method is that
the former is in principle free from the ground state saturation problem,
since the energy-independent potential governs
both the ground state and the elastic excited states simultaneously,
as long as the contributions from inelastic states at small Euclidean times are properly suppressed.\footnote{
Otherwise, the coupled channel HAL QCD method should be used to take the
inelastic states into account \cite{Aoki:2012bb}.}
In practice, however,
systematic uncertainties arise from the truncation of the derivative expansion
of the non-local potential.
The main purpose of the present paper is therefore to study the convergence of the derivative expansion,
as well as other sources of systematic uncertainty, such as
inelastic state contaminations and the distortion of the interaction in a finite volume.
We consider the $\Xi\Xi$ system in the $^1$S$_0$ channel
and perform a (2+1)-flavor lattice QCD calculation at $m_\pi = 0.51$~GeV and $m_K = 0.62$~GeV.
Because of the large quark masses, the statistical errors are relatively small, so that we can focus on
a detailed analysis of the systematic errors. Moreover, this channel and the $NN$ system in the $^1$S$_0$ channel belong
to the same multiplet in the flavor SU(3) limit.
This paper is organized as follows.
In Sec.~II, we review the time-dependent HAL QCD method.
In Sec.~III, we present the lattice QCD results for the $\Xi\Xi$ interaction in the $^1$S$_0$ channel
at the next-to-next-to-leading order (N$^2$LO) in the derivative expansion.
The N$^2$LO potential is extracted from a specific combination of the $\Xi\Xi$ correlations
with different source operators. The systematic errors associated with
the inelastic state contaminations and the distortion in the finite volume
are also examined.
In Sec.~IV, we calculate the scattering phase shifts in this channel,
and check the convergence of the derivative expansion in the HAL QCD method.
In Sec.~V,
we demonstrate the self-consistency between the phase shifts obtained directly from the HAL QCD potential and
those obtained by combining the finite-volume energy spectra of the HAL QCD potential with
L\"uscher's formula. Sec.~VI is devoted to the conclusion.
In Appendix A, we discuss the relation between the energy-independent non-local potential
and the energy-dependent local one.
\section{Formalism}
The key quantity in the HAL QCD method~\cite{Ishii:2006ec,Aoki:2009ji,HALQCD:2012aa,Aoki:2012tk,Aoki:2012bb}
is the Nambu-Bethe-Salpeter (NBS) wave function, defined by
\begin{eqnarray}
\psi^W(\vec{r}) &=& \sum_{\vec{x}} \langle 0 \vert T\{ B(\vec{x}+\vec{r},0)B(\vec{x},0) \} \vert 2B, W \rangle,
\label{eq:NBS-def}
\end{eqnarray}
where $\vert 0 \rangle $ is the vacuum state of QCD, $\vert 2B, W\rangle$ is the QCD eigenstate for two baryons with eigenenergy $W$, and $B(\vec{x},t)$ is a single baryon operator
with spin indices omitted for simplicity.
We then define a non-local and energy-independent potential $ U(\vec{r}, \vec{r'})$
so as to satisfy
\begin{equation}
(E_k - H_0) \psi^W(\vec{r})
=\int d\vec{r'}\ U(\vec{r}, \vec{r'})
\psi^W(\vec{r'})
\label{eq:t-indep-HAL}
\end{equation}
below the inelastic threshold, $W < W_\mathrm{th} = 2 m_B + m_\pi$, with $m_B$ the baryon mass, $m_\pi$ the pion mass,
and $W = 2\sqrt{m_B^2 + k^2}$.
Here we define $E_k = k^2/(2\mu)$ and $H_0 = -\nabla^2/(2\mu)$ with a reduced mass $\mu = m_B/2$.
We note that $U(\vec{r}, \vec{r'})$ depends on the specific choice of the interpolating operator $B(x)$ used in
Eq.~(\ref{eq:NBS-def}). Nevertheless, the $S$-matrix is free from the choice of $B(x)$
as long as it is an ``almost-local operator field'' \cite{Haag:1958vt} (Nishijima-Zimmermann-Haag theorem).
To extract the NBS wave function in lattice QCD,
we start with the two-baryon correlation function,
\begin{equation}
C_{2B}(\vec{r}, t-t_0) = \sum_{\vec{x}} \langle 0 | T \{ B(\vec{x}+\vec{r},t)B(\vec{x},t)
\overline{\mathcal{J}}_{2B}(t_0) \}| 0\rangle,
\end{equation}
where $\overline{\mathcal{J}}_{2B}(t_0)$ is a source operator for the two baryons.
By inserting the complete set, we obtain
\begin{eqnarray}
C_{2B}(\vec{r},t-t_0) &=& \sum_{\vec{x}} \langle 0 | T\{ B(\vec{x}+\vec{r},t) B(\vec{x},t)\}
\sum_{n} | 2B, W_n \rangle \langle 2B, W_n |
\overline{\mathcal{J}}_{2B}(t_0) | 0 \rangle + \cdots \nonumber \\
&=& \sum_{n } A_{n } \psi^{W_n} (\vec{r})e^{-W_n(t-t_0)} + \cdots,
\end{eqnarray}
where $W_n =2\sqrt{m_B^2+k_n^2}$ is the $n$-th energy eigenvalue,
$A_{n} \equiv \langle 2B, W_n |\overline{\mathcal{J}}_{2B}(0)|0\rangle$
corresponds to the overlap with each elastic eigenstate,
and the ellipses represent the inelastic contributions.
In principle, one can extract $A_0 \psi^{W_0}(\vec{r})$ for the lowest energy $W_0$ from the large $t$ behavior of $C_{2B}(\vec{r},t)$.
In practice, however, since $C_{2B}(\vec{r},t)$ becomes too noisy at large $t$,
we need to employ the time-dependent HAL QCD method~\cite{HALQCD:2012aa}.
Let us define the ratio of correlation functions, which we call the $R$-correlator, as
\begin{equation}
R(\vec{r}, t) \equiv \frac{C_{2B}(\vec{r}, t)}{\{C_B(t) \}^2}
= \sum_{n} A'_n \psi^{W_n}(\vec{r}) e^{-\Delta W_n t} + \mathcal{O}(e^{- \Delta W_\mathrm{th}t})
\label{eq:R-correlator}
\end{equation}
with $\Delta W_n = W_n - 2m_B$,
$\Delta W_\mathrm{th} = W_\mathrm{th} - 2m_B$
and
$A'_n = A_n / {\cal C}^2$,
where $C_B(t)$ and ${\cal C}$ are a single baryon correlation function and the corresponding overlap factor, respectively.
They are given by
\begin{eqnarray}
C_B (t-t_0) = \sum_{\vec{x}} \langle 0 | T\{ B(\vec{x},t) \overline{\mathcal{J}}_{B}(t_0) \} | 0 \rangle
= {\cal C} \cdot e^{-m_B (t-t_0)} + \cdots ,
\end{eqnarray}
where $\overline{\mathcal{J}}_{B}(t_0)$ is a single baryon source operator
and ellipses represent the inelastic states contributions.
Since the non-local potential $U(\vec{r}, \vec{r'})$ is defined to be
energy-independent~\cite{Aoki:2009ji},
all elastic scattering states below the threshold share the same $U(\vec{r}, \vec{r'})$.
Therefore, Eq.~(\ref{eq:t-indep-HAL})
with an identity $\Delta W_n = {k_n^2}/{m_B} - {(\Delta W_n)^2}/{(4m_B)}$ leads to
\begin{equation}
\left[
- H_0
- \frac{\partial}{\partial t}
+ \frac{1}{4m_B} \frac{\partial^2}{\partial t^2}
\right]R(\vec{r},t)
= \int d\vec{r'} \ U(\vec{r}, \vec{r'}) R(\vec{r'},t),
\label{eq:master-equation}
\end{equation}
where the effect of the inelastic channels of $\mathcal{O}(e^{-\Delta W_\mathrm{th}t})$ is neglected
on the right hand side. Note that no term beyond $\partial^2/\partial t^2$ appears on the left hand side of
Eq.~(\ref{eq:master-equation}), i.e., Eq.~(\ref{eq:master-equation}) is derived without any non-relativistic approximation.
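For completeness, the identity $\Delta W_n = k_n^2/m_B - (\Delta W_n)^2/(4m_B)$ follows directly from the dispersion relation $W_n = 2\sqrt{m_B^2 + k_n^2}$:

```latex
\begin{eqnarray}
k_n^2 &=& \frac{W_n^2}{4} - m_B^2 = \frac{\Delta W_n\,(W_n + 2 m_B)}{4}, \nonumber \\
\frac{k_n^2}{m_B} - \frac{(\Delta W_n)^2}{4 m_B}
&=& \frac{\Delta W_n \left[(W_n + 2 m_B) - \Delta W_n\right]}{4 m_B}
= \frac{4 m_B\,\Delta W_n}{4 m_B} = \Delta W_n .
\end{eqnarray}
```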
Note that ground state saturation is no longer required in this time-dependent HAL QCD method.
Instead, the required condition is that
$R(\vec{r},t)$ is saturated by the contributions from elastic states (``elastic state saturation''),
which can be achieved by a moderate value of $t$
($\sim {\cal O}({\rm min.}\{\Lambda_{\rm QCD}^{-1}, m_{\rm NG}^{-1}\})$ with
$m_{\rm NG}$ being the mass of the lightest Nambu-Goldstone boson).\footnote{
There is a possibility that the inelastic contributions
cancel partially between the numerator and the denominator of $R(\vec{r},t)$, so that
the elastic state saturation in $R(\vec{r},t)$ may appear for smaller $t$ than
those in $C_{2B}(\vec{r}, t)$ and $C_B(t)$.}
This is the fundamental difference between the HAL QCD method and the direct method.
As discussed in \cite{Aoki:2012tk,Aoki:2012bb},
$U(\vec{r}, \vec{r'})$ in Eq.~(\ref{eq:master-equation}) is not uniquely determined by $R(\vec{r},t)$,
though different $U(\vec{r}, \vec{r'})$s give the same observables below the inelastic threshold.
In the HAL QCD method, the derivative expansion scheme enables one to
extract one of the possible $U(\vec{r}, \vec{r'})$s in a unique manner.
Let us consider the two-baryon system in the spin-singlet channel.
The leading order (LO) analysis, which neglects the higher orders, then gives
\begin{equation}
U(\vec{r}, \vec{r'})
= V_0^\mathrm{LO} (r) \delta(\vec{r} - \vec{r'}),
\label{eq:LO-expansion}
\end{equation}
with
\begin{equation}
V_0^\mathrm{LO}(r) =
- \frac{H_0 R(\vec{r},t)}{R(\vec{r},t)}
- \frac{(\partial/\partial t) R(\vec{r},t)}{R(\vec{r},t)}
+ \frac{1}{4m_B} \frac{(\partial^2/\partial t^2) R(\vec{r},t)}{R(\vec{r},t)} .
\label{eq:veff}
\end{equation}
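As an illustration of how Eq.~(\ref{eq:veff}) is evaluated in practice, the sketch below (our own, not the analysis code of this work; the masses, grid spacings, and the separable test correlator are invented) extracts $V_0^\mathrm{LO}$ from a discretized $R$-correlator with central finite differences, and checks it against a case where the answer is known in closed form:

```python
import numpy as np

mB = 1.46       # baryon mass (illustrative value, lattice-like units)
mu = mB / 2.0   # reduced mass

def v0_lo(R, r, t_idx, dr, dt):
    """LO potential from Eq. (veff):
    V0 = -(H0 R)/R - (dR/dt)/R + (1/4 mB)(d2R/dt2)/R,  H0 = -lap/(2 mu).
    R has shape (n_t, n_r); central finite differences in r and t."""
    Rt = R[t_idx]
    # radial Laplacian for an S-wave: lap psi = (1/r) d^2(r psi)/dr^2
    u = r * Rt
    lap = np.zeros_like(Rt)
    lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dr**2 / r[1:-1]
    dRdt = (R[t_idx+1] - R[t_idx-1]) / (2*dt)
    d2Rdt2 = (R[t_idx+1] - 2*R[t_idx] + R[t_idx-1]) / dt**2
    return (lap[1:-1]/(2*mu) - dRdt[1:-1] + d2Rdt2[1:-1]/(4*mB)) / Rt[1:-1]

# Synthetic check: R(r,t) = exp(-a r^2) exp(-b t) gives analytically
# V0(r) = (4 a^2 r^2 - 6 a)/(2 mu) + b + b^2/(4 mB)
a, b = 0.5, 0.1
dr, dt = 0.01, 1.0
r = np.arange(dr, 8.0, dr)
t = np.arange(0.0, 20.0, dt)
R = np.exp(-b*t)[:, None] * np.exp(-a*r**2)[None, :]
V = v0_lo(R, r, t_idx=10, dr=dr, dt=dt)
V_exact = (4*a**2*r**2 - 6*a)/(2*mu) + b + b**2/(4*mB)
```

On real data $R(\vec{r},t)$ is not separable, and the residual $t$-dependence of the extracted $V_0^\mathrm{LO}$ is precisely the diagnostic studied below.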
In order to examine the convergence of the derivative expansion,
we consider the N$^2$LO analysis in this paper,
\begin{equation}
U(\vec{r}, \vec{r'})
= \{ V_0^\mathrm{N^2LO}(r) + V_2^\mathrm{N^2LO}(r)\nabla^2\}\delta(\vec{r} - \vec{r'}).
\label{eq:NLO-expansion}
\end{equation}
The relation between the potential from the LO analysis, $V_0^\mathrm{LO}(r)$,
and those from the N$^2$LO analysis,
$V_0^\mathrm{N^2LO}(r)$ and $V_2^\mathrm{N^2LO}(r)$
is given by
\begin{equation}
V_0^\mathrm{LO}(r)
= V_0^\mathrm{N^2LO}(r) + V_2^\mathrm{N^2LO}(r) \frac{\nabla^2 R(\vec{r}, t)}{R(\vec{r},t)},
\label{eq:vnlo}
\end{equation}
which shows
that the N$^2$LO correction to $V_0^\mathrm{LO}(r)$ depends both on $V_2^\mathrm{N^2LO}(r)$
and on the spatial profile of the $R$-correlator;
the latter depends not only on
the spatial profiles of the NBS wave functions $\psi^{W_n}(r)$ but also on their weights $A_n^\prime$ in the $R$-correlator.
The potentials $V_{0,2}^\mathrm{N^2LO}(r)$
are $t$-independent as long as the elastic state saturation is achieved and the
higher order contributions in the derivative expansion are negligible.
One may thus estimate the magnitude of the systematic errors from the truncation of the derivative expansion
and from the inelastic state contaminations by studying the $t$-dependence of
the potentials.
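To see schematically how correlators from two different sources determine the two N$^2$LO potentials, one may insert Eq.~(\ref{eq:NLO-expansion}) into Eq.~(\ref{eq:master-equation}) for two independent $R$-correlators $R_a$ ($a=1,2$); at each $\vec{r}$ this yields the linear system

```latex
\begin{eqnarray}
V_0^\mathrm{N^2LO}(r) + V_2^\mathrm{N^2LO}(r)\,\frac{\nabla^2 R_a(\vec{r},t)}{R_a(\vec{r},t)}
&=& \frac{1}{R_a(\vec{r},t)}\left[-H_0 - \frac{\partial}{\partial t}
+ \frac{1}{4m_B}\frac{\partial^2}{\partial t^2}\right] R_a(\vec{r},t),
\quad a = 1, 2,
\end{eqnarray}
```

which is solvable pointwise for $V_0^\mathrm{N^2LO}(r)$ and $V_2^\mathrm{N^2LO}(r)$ whenever the two ratios $\nabla^2 R_a/R_a$ differ. This is only a schematic view; the specific combination actually used in this work is presented in the next section.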
\section{HAL QCD potential}
\subsection{Lattice Setup}
Throughout this paper, we use the 2+1 flavor QCD ensembles~\cite{Yamazaki:2012hi},
generated with the Iwasaki gauge action and the $\mathcal{O}(a)$-improved
Wilson quark action at $a = 0.08995(40)$~fm
on $40^3 \times 48$, $48^3 \times 48$ and $64^3 \times 64$ lattice volumes
with heavy up/down quark masses and the physical strange quark mass, corresponding to
$m_\pi = 0.51$~GeV,
$m_K = 0.62$~GeV,
$m_N = 1.32$~GeV
and $m_\Xi = 1.46$~GeV;
only the largest volume is used unless otherwise stated.
We employ the wall source
$q^\mathrm{wall}(t) = \sum_{\vec{y}}q(\vec{y},t)$,
which has been mainly used in the previous studies by the HAL QCD method,
and the smeared source $q^\mathrm{smear}(\vec{x},t) = \sum_{\vec{y}} f(|\vec{x}-\vec{y}|)q(\vec{y},t)$
with the smearing function $f(r) \equiv \{Ae^{-Br}, 1, 0\}$ for
\{$0 < r < (L-1)/2$, $r=0$, $(L-1)/2\leq r$\} ~\cite{Yamazaki:2012hi}.
For the smeared source,
the same $\vec{x}$ is taken as the center of the smeared source for all six quarks in two baryons
as has been done in Ref.~\cite{Yamazaki:2012hi}.
For both sources,
the point-sink operator for each baryon (``point-sink scheme'' in the HAL QCD method~\cite{Kawai:2017goq})
is exclusively employed in this study.
The correlation functions are calculated by the unified contraction algorithm (UCA)~\cite{Doi:2012xd}.
The number of configurations and other parameters are summarized in Table~\ref{tab:lattice_setup}.
Statistical errors are evaluated by the jack-knife method.
For more details on the simulation setup, see Ref.~\cite{Iritani:2016jie}.
In the present study, we focus on the $\Xi\Xi$ system in the $^1$S$_0$ channel.
This is one of the most convenient choices for gaining insight into $NN$ systems,
since it belongs to the same $\mathbf{27}$ representation
as the $NN$ system in the $^1$S$_0$ channel in the flavor SU(3) limit
but has a much better signal-to-noise ratio than the $NN$($^1$S$_0$) case.
We use the relativistic interpolating operators~\cite{Iritani:2016jie} for $\Xi$,
which are given by
\begin{equation}
\Xi_\alpha^0 = \varepsilon_{abc}(s^{aT}C\gamma_5 u^b)s_\alpha^c, \quad
\Xi_\alpha^- = \varepsilon_{abc}(s^{aT}C\gamma_5 d^b)s_\alpha^c,
\end{equation}
where $C = \gamma_4\gamma_2$ is the charge conjugation matrix, and
$\alpha$ and ($a$, $b$, $c$) are the spinor and color indices, respectively.
\begin{table}
\centering
\begin{tabular}{|c|c|c|cc|c|}
\hline
\hline
volume & $La$\ [fm] & \# of conf. & \# of smeared sources & $(A,B)$ & \# of wall sources \\
\hline
$40^3 \times 48$ & 3.6 & 207 & 512 & (0.8, 0.22) & 48 \\
$48^3 \times 48$ & 4.3 & 200 & $4 \times 384$ & (0.8, 0.23) & $4 \times 48$ \\
$64^3 \times 64$ & 5.8 & 327 & $1 \times 256$ & (0.8, 0.23) & $4 \times 64$ \\
\hline
\hline
\end{tabular}
\caption{Simulation parameters.
The rotational symmetry for isotropic lattices is used to increase statistics.}
\label{tab:lattice_setup}
\end{table}
\subsection{The $R$-correlator}
We first consider the behaviors of the $R$-correlator defined in Eq.~(\ref{eq:R-correlator}).
Shown in Fig.~\ref{fig:Rcorr}
are the $R$-correlators on the lattice with $L = 64$ at $t = 10 - 16$
from the wall source (Left) and the smeared source (Right).
The results show a strong dependence on the quark source:
the $R$-correlator from the wall source ($R^\mathrm{wall}(\vec{r},t)$) is delocalized
with a weak $t$-dependence,
while that from the smeared source ($R^\mathrm{smear}(\vec{r},t)$) is localized and has a strong $t$-dependence.
If the $R$-correlator were saturated by the ground state,
its spatial profile would be independent of the source, and its temporal profile would be dictated
simply by the overall factor $\exp(-\Delta W_{n=0} t)$.
To see more closely the $t$-dependence of the spatial profile of the $R$-correlator,
we plot $R(\vec{r},t)$ normalized to be unity at $r=3.5$~fm for the wall source
and at $r=1.0$~fm for the smeared source
in Fig.~\ref{fig:ZRcorr}.
The shape of the $R$-correlator from the wall source has a weak $t$-dependence,
which indicates that the contributions from the elastic scattering states other than the ground state in $R^\mathrm{wall}(\vec{r},t)$ are relatively small.
The shape of $R^\mathrm{smear}(\vec{r},t)$, on the other hand,
shows a sizable $t$-dependence, which indicates a substantial
admixture of several elastic scattering states.
Although the parameters $(A,B)$ of the smeared source in Table \ref{tab:lattice_setup}
are tuned to suppress the excited states of a single baryon,
the same parameters are not guaranteed to
suppress the elastic scattering states of two baryons.
Indeed,
one of the most relevant parameters
controlling the magnitude of the elastic state contributions
is the relative distance $\vec r$ between the two baryons at the source, as illustrated by
\begin{eqnarray}
\frac{1}{L^3}\sum_{\vec x} B(\vec x) B(\vec x + \vec r)
= \sum_{\vec p} \tilde B(\vec p) \tilde B(-\vec p) e^{i \vec p \cdot \vec r}, \qquad
\tilde B(\vec p) \equiv \frac{1}{L^3} \sum_{\vec x} B(\vec x) e^{-i\vec p\cdot \vec x} .
\label{eq:BBoperator}
\end{eqnarray}
The smeared source operators in all previous works in the direct method
(except for~\cite{Berkowitz:2015eaa})
essentially correspond to $\vec r=0$
and can couple to all elastic scattering states with almost equal magnitudes.
\footnote{
For studies of meson-meson scattering~\cite{Briceno:2017max},
the serious systematics from excited state contaminations
in plateau fitting have been widely recognized,
and the variational method~\cite{Luscher:1990ck}
is used with operators analogous to Eq.~(\ref{eq:BBoperator}).
}
See Ref.~\cite{Iritani:2018vfn} for more detailed studies on this point.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,clip]{figs/Rcorr_wall_64_t_dep_w_zoom.png}
\includegraphics[width=0.49\textwidth,clip]{figs/Rcorr_smeared_64_t_dep.png}
\caption{\label{fig:Rcorr} The $R$-correlator
at $t=10 - 16$ from
the wall source (Left) and the smeared source (Right).}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,clip]{figs/Rcorr_wall_64_t_dep_rescaled.png}
\includegraphics[width=0.49\textwidth,clip]{figs/Rcorr_smeared_64_t_dep_rescaled.png}
\caption{\label{fig:ZRcorr} The normalized $R$-correlator at $t=10 - 16$
from
the wall source (Left) and the smeared source (Right).}
\end{figure}
\subsection{HAL QCD potential at the leading order}
Let us now study the potential in the HAL QCD method at the leading order, $V_0^\mathrm{LO}(r)$.
Fig.~\ref{fig:breakup} shows this potential for $\Xi\Xi$($^1$S$_0$)
and its breakup into the $H_0$, $\partial/\partial t$ and $\partial^2/\partial t^2$ terms of Eq.~(\ref{eq:veff})
on $L = 64$ at $t = 13$ from the wall source (Left) and the smeared source (Right).
For the wall source,
the $H_0$ term is dominant with sizable contributions from the $\partial/\partial t$ term,
while the $\partial^2/\partial t^2$ term is negligible.
The $\partial/\partial t$ term is not constant in $r$, which
indicates small but non-negligible contributions from the excited states
in $R^\mathrm{wall}(\vec{r},t)$.
For the smeared source, on the other hand,
all terms are important.
In particular, the $\partial/\partial t$ term (green triangles)
shows substantial $r$-dependence indicating
large contributions from the excited states in the smeared source.
However, such dependence is cancelled by the $H_0$ term (blue squares) and is
further corrected by the $\partial^2/\partial t^2$ term (black diamonds).
The final results (red circles) from the smeared source and the wall source
show qualitatively similar behaviors, i.e.,
a repulsive core at short distances and
an attractive pocket at intermediate distances.
This illustrates that the time-dependent HAL QCD method works
well for extracting the $\Xi\Xi$ potential irrespective of the source structures.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,clip]{figs/xixi_L64_wall_breakup.png}
\includegraphics[width=0.49\textwidth,clip]{figs/xixi_L64_exp_breakup.png}
\caption{\label{fig:breakup}
The potential at the leading order analysis, $V_0^\mathrm{LO}(r)$,
(red circles)
for the wall source (Left)
and the smeared source (Right) at $t = 13$.
The blue squares, green triangles and black diamonds
denote the 1st, 2nd and 3rd terms in Eq.~(\ref{eq:veff}), respectively.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,clip]{figs/pot_t_dep_L64_wall_zoom.png}
\includegraphics[width=0.49\textwidth,clip]{figs/pot_t_dep_L64_exp_zoom.png}
\caption{
The potential at the leading order analysis, $V_0^\mathrm{LO}(r)$,
for the wall source (Left) and the smeared source (Right)
at $t = 10 - 16$.
\label{fig:wall_vs_smear}
}
\end{figure}
Shown in Fig.~\ref{fig:wall_vs_smear} is a
comparison among the LO potentials ($V_0^\mathrm{LO}(r)$)
at different values of $t$ for each source.
For the wall source, the potentials at $t=10-16$ are consistent with each other
within statistical errors,
while those from the smeared source show a detectable $t$-dependence.
Shown in Fig.~\ref{fig:wall_vs_smear_comp} is a comparison of $V_0^\mathrm{LO}(r)$
between two sources at $t = 10, 12, 14, 16$. As $t$ increases,
the LO potential from the smeared source gradually converges to that from the wall source.
The relatively large $t$-dependence of the potentials from the smeared source,
as well as the small discrepancy remaining between the potentials from the two sources
even at $t = 16$,
indicates that
the N$^2$LO analysis in the derivative expansion is necessary
to understand the data from the smeared source.
This is a natural consequence of the fact that
the N$^2$LO contribution in $V_0^\mathrm{LO}(r)$,
$\nabla^2 R(\vec{r},t)/R(\vec{r},t)$ ($\propto H_0$ term) in Eq.~(\ref{eq:vnlo}),
is much more significant for the smeared source than for the wall source, as shown in Fig.~\ref{fig:breakup}.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,clip]{figs/pot_src_dep_L64_t10.png}
\includegraphics[width=0.49\textwidth,clip]{figs/pot_src_dep_L64_t12.png}
\includegraphics[width=0.49\textwidth,clip]{figs/pot_src_dep_L64_t14.png}
\includegraphics[width=0.49\textwidth,clip]{figs/pot_src_dep_L64_t16.png}
\caption{
A comparison of the potential at the leading order analysis, $V_0^\mathrm{LO}(r)$,
between the wall source (red circles) and the smeared source (blue squares)
at $t = 10, 12, 14, 16$.
\label{fig:wall_vs_smear_comp}
}
\end{figure}
\subsection{HAL QCD potential at the next-to-next-to-leading order}
We next apply the N$^2$LO analysis in the derivative expansion to $R$-correlators for both sources.
The potential at the LO analysis, $V_0^\mathrm{LO}(r)$,
and those at the N$^2$LO analysis, $V_0^\mathrm{N^2LO}(r)$, $V_2^\mathrm{N^2LO}(r)$,
satisfy the linear equations given by
\begin{eqnarray}
\{ V_0^\mathrm{N^2LO}(r) + V_2^\mathrm{N^2LO}(r)\nabla^2 \} R^\mathrm{source}(\vec{r},t) = V_0^\mathrm{LO(source)}(r) R^\mathrm{source}(\vec{r},t) ,
\label{eq:nlo_pot}
\end{eqnarray}
where source = wall or smear.
To extract $V_{0,2}^\mathrm{N^2LO}(r)$, we first consider the
following relation derived from Eq.~(\ref{eq:nlo_pot}),
\begin{eqnarray}
D \times V_2^\mathrm{N^2LO}(r) = V_0^\mathrm{LO(wall)}(r) - V_0^\mathrm{LO(smear)}(r) ,
\label{eq:nlo_pot2}
\end{eqnarray}
with $D \equiv
\nabla^2 R^\mathrm{wall}(\vec{r},t)/R^\mathrm{wall}(\vec{r},t)
-\nabla^2 R^\mathrm{smear}(\vec{r},t)/R^\mathrm{smear}(\vec{r},t)$.
In order to avoid numerical instabilities caused by near-zeros of $D$ when
dividing the right-hand side of Eq.~(\ref{eq:nlo_pot2}) by $D$,
we extract $V_2^\mathrm{N^2LO}(r)$ directly from Eq.~(\ref{eq:nlo_pot2})
with
a fitting function,
$V_2^\mathrm{N^2LO}(r) = b_{1} e^{-b_{2}(r-b_{3})^2} + b_{4} e^{-b_{5}(r-b_{6})^2}$,
at each $t$.
Once $V_2^\mathrm{N^2LO}(r)$ is obtained,
$V_0^\mathrm{N^2LO}(r)$ can be determined from Eq.~(\ref{eq:vnlo}).
Fig.~\ref{fig:vlo_vnlo} shows the $V_0^\mathrm{N^2LO}(r)$
together with the $V_0^\mathrm{LO(wall)}(r)$ (Left),
and the $V_2^\mathrm{N^2LO}(r) $ (Right) on $L = 64$ at $t= 13$.
We multiply $V_2^\mathrm{N^2LO}(r)$ by $m_\pi^2$ to make its mass dimension $+1$
for comparison with the $V_0(r)$'s.
We find that $V_0^\mathrm{N^2LO}(r)$ agrees well with the
$V_0^\mathrm{LO(wall)}(r) $ except at short distances.
We also find that $V_2^\mathrm{N^2LO}(r)$ is localized within a range of 1~fm, which is much shorter than
the range of $V_{0}^{\mathrm{LO(wall),N^2LO}}(r)$.
We note here that the negative sign of $V_2^\mathrm{N^2LO}(r)$ does not necessarily
imply attraction, since the N$^2$LO potential is given by $V_2^\mathrm{N^2LO}(r)\nabla^2$.
As already mentioned,
$\nabla^2 R(\vec{r},t)/R(\vec{r},t)$ from the smeared source is much larger than that of the wall source
(see Fig.~\ref{fig:breakup}).
Intuitively, this is because $R^\mathrm{smear}(r,t)$ ($R^\mathrm{wall}(r,t)$)
contains larger (smaller) contributions from excited states
and thus is more (less) sensitive to higher order terms in the derivative expansion of the potential.
Therefore, the N$^2$LO analysis is mandatory for the smeared source,
while the LO analysis for the wall source leads to a
potential that is almost identical to $V_0^\mathrm{N^2LO}(r)$.
Shown in Fig.~\ref{fig:vlo_vnlo_t_dep} is
the $t$-dependence of $V_{0,2}^\mathrm{N^2LO}(r) $ in the range of $t = 13 - 16$.
Since no appreciable $t$-dependence is seen within the error bars,
the N$^4$LO contribution is expected to be small.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,clip]{figs/comp_v0_t13_zoom.png}
\includegraphics[width=0.49\textwidth,clip]{figs/pot_v2_t13.png}
\caption{
\label{fig:vlo_vnlo}
(Left)
The LO potential at the N$^2$LO analysis, $V_0^\mathrm{N^2LO}(r)$ (red circles),
together with the potential at the LO analysis for the wall source, $V_0^\mathrm{LO(wall)}(r)$
(blue diamonds) at $t=13$.
(Right) The N$^2$LO potential at the N$^2$LO analysis, $V_2^\mathrm{N^2LO}(r)$,
multiplied by $m_\pi^2$.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,clip]{figs/pot_v0_t_dep_L64_w_zoom.png}
\includegraphics[width=0.49\textwidth,clip]{figs/pot_v2_t_dep_L64_w_zoom.png}
\caption{
\label{fig:vlo_vnlo_t_dep}
The LO (Left) and N$^2$LO (Right) potentials
at the N$^2$LO analysis in the range of $t = 13 - 16$.
}
\end{figure}
\subsection{Effect of the inelastic states}
Fig.~\ref{fig:wall_plateau}~(Left) compares the effective mass of a single $\Xi$ for the two sources.
The smeared source is tuned to have a large overlap with the ground state of a single baryon,
so that the corresponding effective mass reaches a plateau at an earlier time
than that of the wall source. Eventually, the plateaux for the single $\Xi$ from the two different
sources converge at $t \gtrsim 16$.
Shown in Fig.~\ref{fig:wall_plateau}~(Right) is the $\Xi \Xi$
potential at the LO analysis for the wall source in the range of $t = 9 -17$.
Unlike the case of the single $\Xi$, the resultant potential is stable for $t$ well below $16$,
suggesting that the systematic error originating from the inelastic contributions of the single baryon
largely cancels between the numerator and the denominator of the $R$-correlator for the wall source.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,clip]{figs/plateau_demo_meff_xi_L64.pdf}
\includegraphics[width=0.49\textwidth,clip]{figs/pot_v0_t9_17_L64_wall.png}
\caption{
\label{fig:wall_plateau}
(Left) The effective mass of a single baryon $\Xi$
for the wall source (red circles) and the smeared source (blue squares).
(Right) The potential at the LO analysis, $V_0^\mathrm{LO}(r)$, for the wall source at $t=9 - 17$.
}
\end{figure}
\subsection{Effect of the finite volume}
In Fig.~\ref{fig:veff_vol_dep}, we show the volume dependence of
the potential at the LO analysis for the wall source at $t = 13$ with $L = 40$, $48$ and $64$.
All the potentials are consistent with each other within statistical errors. This
indicates that the finite-volume artifact is negligible for the potential, mainly because the
potential is short-ranged.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,clip]{figs/pot_L_dep_t13_zoom.png}
\caption{
\label{fig:veff_vol_dep}
The potential at the LO analysis, $V_0^\mathrm{LO}(r)$, for the wall source on $L = 40, 48$ and $64$
at $t = 13$.
}
\end{figure}
\section{Scattering phase shifts}
In the previous section, we examined systematic uncertainties
in the HAL QCD potential.
In this section, we examine how these systematic uncertainties
affect physical observables such as
the scattering phase shifts, focusing in particular on the effect of
the derivative expansion. To calculate the scattering phase shifts, $\delta_0(k)$,
we first fit the potentials
by a sum of Gaussians,
$ V_0^\mathrm{LO(wall), N^2LO}(r) = \sum_{n=1,3,5,7} a_n e^{-a_{n+1} r^2}$ and
$ V_{2}^\mathrm{N^2LO}(r)= \sum_{n=1,4} b_n e^{-b_{n+1}(r-b_{n+2} )^2} $.
The resulting parameters are summarized in Table~\ref{tab:fit_params}.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c||c|c|}
\hline
\hline
& $V_0^\mathrm{LO(wall)}(r)$ & $V_0^\mathrm{N^2LO}(r)$ & & $V_2^\mathrm{N^2LO}(r)$ \\
\hline
$a_1$ & $0.8759 \pm 0.0270$ & $1.1426 \pm 0.0621$& $b_1 $ & $-0.5291 \pm 0.0418$ \\
$a_2$ & $1.2040 \pm 0.0317$ & $0.9332 \pm 0.0871$& $b_2$ & $0.0757 \pm 0.0162$ \\
$a_3$ & $0.4261 \pm 0.0128$ & $0.4245 \pm 0.0397$& $b_3 $ & $2.195 \pm 0.333$ \\
$a_4$ & $0.3028 \pm 0.0217$ & $0.2358 \pm 0.0382$& $b_4$ & $-0.1091 \pm 0.0194$ \\
$a_5$ & $0.2010 \pm 0.0124$ & $0.2415 \pm 0.0410$& $b_5$ & $0.2177 \pm 0.0633$ \\
$a_6$ & $0.07373 \pm 0.00364$ & $0.07876 \pm 0.00646$& $b_6$ & $7.025 \pm 0.464$ \\
$a_7$ & $-0.02922 \pm 0.00148$ & $-0.03005 \pm 0.00159$ & & \\
$a_8$ & $0.008977 \pm 0.000456$ & $0.009107 \pm 0.000467$ & & \\
\hline
\hline
\end{tabular}
\caption{Summary of fitting parameters for the LO and N$^2$LO potentials in
the lattice unit at $t = 13$.
The fitting range is $r \in [0, 3.5]$ fm, and
$\chi^2/\mathrm{dof}$ are
1.14, 1.01 and 0.0019
for $V_0^\mathrm{LO(wall)}(r)$, $V_0^\mathrm{N^2LO}(r)$ and $V_2^\mathrm{N^2LO}(r)$, respectively.
}
\label{tab:fit_params}
\end{table}
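For reference, the fitted forms can be evaluated directly from the parameters in Table~\ref{tab:fit_params} (lattice units, $t=13$). The sketch below copies the central values from the table and checks the qualitative features discussed above (repulsive core at short distance, attractive pocket at intermediate distance); it is illustrative only and not part of the original analysis.

```python
import numpy as np

# Central values from the fit table (lattice units, t = 13).
a = [0.8759, 1.2040, 0.4261, 0.3028, 0.2010, 0.07373, -0.02922, 0.008977]
b = [-0.5291, 0.0757, 2.195, -0.1091, 0.2177, 7.025]

def v0_lo_wall(r):
    """V_0^{LO(wall)}(r): sum of four Gaussians, sum_n a_n exp(-a_{n+1} r^2)."""
    r = np.asarray(r, dtype=float)
    return sum(a[n] * np.exp(-a[n + 1] * r**2) for n in (0, 2, 4, 6))

def v2_n2lo(r):
    """V_2^{N2LO}(r): two shifted Gaussians, b1 e^{-b2(r-b3)^2} + b4 e^{-b5(r-b6)^2}."""
    r = np.asarray(r, dtype=float)
    return (b[0] * np.exp(-b[1] * (r - b[2])**2)
            + b[3] * np.exp(-b[4] * (r - b[5])**2))

# Repulsive core at r = 0, attractive pocket at intermediate distance (r in lattice units).
print(v0_lo_wall(0.0), v0_lo_wall(8.0), v2_n2lo(0.0))
```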
In Fig.~\ref{fig:scat_phase}, we show the comparison
of the scattering phase shifts from $V_0^\mathrm{LO(wall)}(r)$,
$V_0^\mathrm{N^2LO}(r)$ and $V_0^\mathrm{N^2LO}(r) + V_2^\mathrm{N^2LO}(r)\nabla^2$
at $t=13$.
At low energies (Fig.~\ref{fig:scat_phase}~(Left)), the N$^2$LO correction is found to be negligible,
showing not only that the derivative expansion converges well
but also that the LO analysis for the wall source is sufficiently good at low energies.
The N$^2$LO correction becomes non-negligible only at high energies as shown in Fig.~\ref{fig:scat_phase}~(Right)
\footnote{We discuss the magnitude of the N$^2$LO correction in the potential
at high energies in Appendix~\ref{app:N2LOcorr}.}.
We note that $(k/m_\pi)^2 = 0.5$ corresponds to an energy above the threshold of
$\Delta E \equiv W - 2m_B \simeq 90$~MeV.
A good convergence of the derivative expansion
has also been observed for the $NN$ systems in the $^1$S$_0$ and $^3$S$_1$ channels
in quenched QCD with $m_{\pi} \simeq 530$ MeV~\cite{Murano:2011nz}
and for the $I=2$ $\pi\pi$ system in (2+1)-flavor QCD with $m_{\pi} \simeq 870$ MeV~\cite{Kawai:2017goq}.
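Going from a fitted potential to $\delta_0(k)$ amounts to integrating the $S$-wave radial Schr\"odinger equation and matching to free spherical waves beyond the range of the potential. The sketch below uses a standard Numerov scheme; the potential, reduced mass and momentum are hypothetical placeholders (not the fitted $\Xi\Xi$ potential or physical masses), so it only illustrates the procedure.

```python
import math

def phase_shift(V, mu, k, h=0.01, r1=20.0, r2=22.0):
    """S-wave phase shift: Numerov integration of u'' = 2*mu*(V(r) - E)*u,
    matched to u(r) ~ sin(k r + delta) beyond the potential range."""
    E = k * k / (2.0 * mu)
    def f(r):
        return 2.0 * mu * (V(r) - E)
    n1, n2 = int(round(r1 / h)), int(round(r2 / h))
    u = [0.0] * (n2 + 1)
    u[1] = h  # u(0) = 0 for l = 0; overall normalization is irrelevant
    for i in range(1, n2):
        rm, r0, rp = (i - 1) * h, i * h, (i + 1) * h
        u[i + 1] = ((2.0 * u[i] * (1.0 + 5.0 * h * h * f(r0) / 12.0)
                     - u[i - 1] * (1.0 - h * h * f(rm) / 12.0))
                    / (1.0 - h * h * f(rp) / 12.0))
    u1, u2 = u[n1], u[n2]
    num = u2 * math.sin(k * r1) - u1 * math.sin(k * r2)
    den = u1 * math.cos(k * r2) - u2 * math.cos(k * r1)
    delta = math.atan2(num, den)
    while delta > math.pi / 2:    # phase shift defined modulo pi here
        delta -= math.pi
    while delta <= -math.pi / 2:
        delta += math.pi
    return delta

# Hypothetical repulsive Gaussian core; mu and k in arbitrary units.
print(phase_shift(lambda r: 2.0 * math.exp(-r * r), 0.5, 1.0))
```

A purely repulsive potential gives a negative phase shift, and switching off the potential recovers $\delta_0 \simeq 0$, which serves as a basic consistency check of the integrator.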
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,clip]{figs/kcot_delta_comp_L64_t013.pdf}
\includegraphics[width=0.49\textwidth,clip]{figs/delta_comp_L64_t013.pdf}
\caption{
\label{fig:scat_phase}
The scattering phase shifts in the form of $k\cot\delta_0(k)/m_\pi$ (Left) and $\delta_0(k)$ (Right)
from
$V_0^\mathrm{LO(wall)}(r)$ (black diamonds),
$V_0^\mathrm{N^2LO}(r)$ (blue squares) and $V_0^\mathrm{N^2LO}(r) + V_2^\mathrm{N^2LO}(r)\nabla^2$ (red circles) at $t = 13$.
}
\end{figure}
The scattering length $a_0$ obtained through
$\lim_{k\rightarrow 0} k\cot\delta_0(k) = 1/a_0$
from $V_0^\mathrm{LO(wall)}(r)$, $V_0^\mathrm{N^2LO}(r)$ and $V_0^\mathrm{N^2LO}(r) + V_2^\mathrm{N^2LO}(r)\nabla^2$
at $t = 13-16$ is shown in Fig.~\ref{fig:scat_length}.
The result indicates that the scattering length is almost insensitive to the order of the approximation
but shows a small variation in $t$, which is, however, within statistical errors.
We thus conclude that
the systematic errors from the derivative expansion and the
inelastic state contaminations are well under control for this observable.
Numerical values for the scattering length are summarized in Table~\ref{tab:scat_length},
where the central value and statistical errors are evaluated at $t=13$ and
the systematic errors are estimated from the $t$-dependence among $t = 13-16$.
We have checked that alternative fitting functions for the potential, such as the
two-Gaussians + (Yukawa)$^2$ form
employed in Refs.~\cite{Aoki:2012tk, Yamada:2015cra},
give results consistent with those from the present fitting function within errors.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,clip]{figs/a0inv_t_pot_dep.pdf}
\caption{
\label{fig:scat_length}
The scattering length $a_0$ in the form of $(a_0 m_\pi)^{-1}$
from $V_0^\mathrm{LO(wall)}(r)$ (black diamonds),
$V_0^\mathrm{N^2LO}(r)$ (blue squares) and
$V_0^\mathrm{N^2LO}(r) + V_2^\mathrm{N^2LO}(r)\nabla^2$ (red circles)
at $t = 13-16$.
}
\end{figure}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\hline
& $V_0^\mathrm{LO(wall)}(r)$ & $V_0^\mathrm{N^2LO}(r)$ & $V_0^\mathrm{N^2LO}(r) + V_2^\mathrm{N^2LO}(r)\nabla^2$ \\
\hline
$(a_0 m_\pi)^{-1}$ &
0.341(36){(${}^{+70}_{-0}$)} &
0.368(39){(${}^{+65}_{-0}$)} &
0.352(36){(${}^{+80}_{-0}$)} \\
\hline
\hline
\end{tabular}
\caption{The scattering length $a_0$ in the form of $(a_0 m_\pi)^{-1}$ from
$V_0^\mathrm{LO(wall)}(r)$, $V_0^\mathrm{N^2LO}(r)$ and
$V_0^\mathrm{N^2LO}(r) + V_2^\mathrm{N^2LO}(r)\nabla^2$.
The central values and statistical errors (in the first parenthesis) are evaluated at $t = 13$,
while the systematic errors (in the second) are estimated using the potentials at $t = 14, 15, 16$.
}
\label{tab:scat_length}
\end{table}
\section{Finite volume formula and effective range expansion}
Before closing the paper, we discuss the relation among
the energy spectrum,
L\"uscher's finite-volume formula and the effective range expansion (ERE).
Once the energy shift of the two-body system in a finite volume is measured,
the scattering phase shift is obtained from L\"uscher's formula as
\begin{equation}
k\cot\delta_0 (k) = \frac{1}{\pi L} \sum_{\vec{n} \in \mathbf{Z}^3} \frac{1}{|\vec{n}|^2 - (kL/2\pi)^2},
\end{equation}
where $k^2$ is related to the energy shift on a finite volume as
$\Delta E_L = 2\sqrt{m_B^2 + k^2} - 2m_B$.
For an attractive interaction, $k^2$ can be negative in a finite volume.
Note that the poles of the $S$-matrix, with $k\cot\delta_0(k) = - \sqrt{-k^2}$ in the infinite volume,
correspond to bound states.
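Numerically, the sum over $\vec n \in \mathbf{Z}^3$ in the formula diverges and is understood with a regularization; a common prescription truncates at radius $\Lambda$ and subtracts the linear divergence $4\pi\Lambda$. The sketch below implements this truncated, counterterm-subtracted sum (the regularization choice is an assumption following standard practice, not a prescription stated here).

```python
import numpy as np

def luescher_sum(q2, cutoff=40):
    """Regularized S(q^2) = lim_{L->inf} [ sum_{|n|<L} 1/(|n|^2 - q^2) - 4*pi*L ],
    so that k*cot(delta_0) = S(q^2) / (pi * L) with q = k L / (2 pi)."""
    n = np.arange(-cutoff, cutoff + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij", sparse=True)
    n2 = (nx**2 + ny**2 + nz**2).astype(float)
    inside = n2 < cutoff**2
    return float(np.sum(1.0 / (n2[inside] - q2))) - 4.0 * np.pi * cutoff

# q^2 < 0 (below threshold) avoids the poles of the sum at integer q^2.
print(luescher_sum(-0.5))
```

Increasing the cutoff shows the residual cutoff dependence shrinking, which is a quick way to check the convergence of the regularized sum.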
For an unbound two-body system, the asymptotic behavior of $\Delta E_L$
for large $L$ reads
\begin{equation}
\Delta E_L \simeq - \frac{2\pi a_0}{\mu L^3}
\left[ 1 + c_1 \frac{a_0}{L} + c_2 \left( \frac{a_0}{L} \right)^2 \right] + \mathcal{O}(L^{-6}) ,
\label{eq:L_dep_scat}
\end{equation}
with the reduced mass $\mu$, the scattering length $a_0$,
$c_1 = -2.837297$, and $c_2 = 6.375183$~\cite{Luscher:1985dn, Luscher:1990ux}.
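The asymptotic formula can be evaluated directly. In the sketch below, the reduced mass, scattering length and box sizes (expressed in units of $m_\pi$) are illustrative placeholders, not the values of this work.

```python
import math

C1, C2 = -2.837297, 6.375183  # universal coefficients in the 1/L expansion

def delta_E_L(a0, mu, L):
    """Asymptotic energy shift of the lowest unbound two-body state in a box."""
    x = a0 / L
    return -2.0 * math.pi * a0 / (mu * L**3) * (1.0 + C1 * x + C2 * x * x)

# Illustrative values in units of m_pi: a0 from (a0 m_pi)^{-1} ~ 0.35,
# mu ~ (m_B / m_pi) / 2 (assumed placeholder).
a0, mu = 1.0 / 0.35, 1.55
for L in (10.0, 15.0, 20.0):
    print(L, delta_E_L(a0, mu, L))
```

For an attractive interaction without a bound state ($a_0 > 0$ in this convention), the shift is negative and vanishes as $1/L^3$, which the printed values illustrate.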
Let us now calculate $k^2$ from the eigenvalue spectra of the Hamiltonian\footnote{
Since this non-Hermitian eigenvalue problem can be rewritten
as a definite generalized Hermitian eigenvalue problem, all eigenvalues are real.
}
$H = H_0 + V_0^\mathrm{N^2LO}(r) + V_2^\mathrm{N^2LO}(r)\nabla^2$
on the finite volume ($L=40, 48, 64$) for the $A_1^+$ representation of the cubic group,
by employing fitted $V_0^\mathrm{N^2LO}(r)$ and $V_2^\mathrm{N^2LO}(r)$
at $L=64$ in Table~\ref{tab:fit_params}.
Fig.~\ref{fig:luscher}~(Left)
shows the volume dependence of the lowest eigenvalues:
the data are found to be well described by Eq.~(\ref{eq:L_dep_scat}), which
indicates that the system does not have a bound state.
By fitting the data with Eq.~(\ref{eq:L_dep_scat}),
we obtain the scattering length
as $(a_0 m_\pi)^{-1} = 0.402(14)$, consistent with the value in Table~\ref{tab:scat_length},
$(a_0 m_\pi)^{-1} = 0.352(36){(^{+80}_{-0})}$.
As extensively discussed in Ref.~\cite{Iritani:2017rlk},
the ERE, $k\cot\delta_0(k) = 1/a_0 + (1/2)r_\mathrm{eff}k^2 + \cdots$,
provides a systematic and reliable way to relate the volume dependence of $\Delta E_L$,
the scattering phase shifts and the bound state pole around $k^2=0$.
\footnote{It was pointed out in Ref.~\cite{Iritani:2017rlk} that
singular and/or unphysical behaviors of $k\cot\delta_0(k)$ around $k^2 =0$
can arise in the direct method~\cite{Yamazaki:2015asa,Wagman:2017tmp, Berkowitz:2015eaa} if the finite-volume
spectrum is not extracted reliably.}
In Fig.~\ref{fig:luscher}~(Right),
we plot the finite volume spectra
on the $(k^2, k\cot\delta_0(k))$ plane,
using the lowest eigenvalues of $H$ on $L = 40$, $48$, and $64$,
and the eigenvalue of the first excited state on $L = 64$.
Note that the data (triangle, square and diamonds) and their errors are
plotted together with L\"uscher's formula (dotted lines).
The blue band corresponds to the results
obtained by solving the Schr\"odinger equation in the infinite volume.
We find that the finite-volume energy spectra at $k^2 < 0$ and $k^2 > 0$
are smoothly connected around $k^2=0$ along the blue band,
as expected from the analytic properties of the $S$-matrix and the ERE.
In fact, the NLO ERE determined from these four data points (pink band)
is consistent with the blue band at $|(k/m_\pi)^2| \lesssim 0.2$ within errors.
One also observes that
the positive intercept at $k^2=0$ ($1/a_0$) supports
the conclusion from Fig.~\ref{fig:luscher}~(Left) that the system has no bound state.
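At NLO the ERE is a straight line in the $(k^2, k\cot\delta_0(k))$ plane, so the fit reduces to linear least squares, and the sign of the intercept $1/a_0$ provides the bound-state check discussed above. The four data points below are hypothetical placeholders (the actual finite-volume values are shown in the figure), used only to illustrate the fit.

```python
import numpy as np

# Hypothetical (k/m_pi)^2 and k*cot(delta_0)/m_pi values near threshold.
k2 = np.array([-0.10, -0.04, 0.05, 0.12])
kcot = np.array([0.30, 0.33, 0.38, 0.41])

# NLO ERE: k*cot(delta_0) = 1/a0 + (r_eff/2) k^2  ->  linear least squares.
slope, intercept = np.polyfit(k2, kcot, 1)
inv_a0, r_eff = intercept, 2.0 * slope

# A positive intercept 1/a0 means the ERE line misses the bound-state
# condition k*cot(delta_0) = -sqrt(-k^2) near threshold: no bound state.
print(inv_a0, r_eff)
```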
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth,clip]{figs/dE0_Ldep.pdf}
\includegraphics[width=0.50\textwidth,clip]{figs/kcot_vs_k2_w_ERE.pdf}
\caption{
\label{fig:luscher}
(Left)
The lowest eigenenergies in finite volumes from the HAL QCD potential.
The red line corresponds to the fit by the large-$L$ asymptotic form of L\"uscher's finite-volume formula, Eq.~(\ref{eq:L_dep_scat}).
(Right) The scattering phase shifts from the finite-volume eigenenergies using L\"uscher's finite-volume formula (green triangle, blue square, red diamonds),
together with those in the infinite volume from the Schr\"odinger equation (blue band).
The black dotted lines denote the constraints from L\"uscher's finite-volume formula,
and the black solid line represents the bound-state condition in the infinite volume.
The red dashed line with the pink band corresponds to the NLO ERE analysis of the finite-volume data.
}
\end{figure}
\section{Summary}
In this paper, we have critically investigated
the systematic uncertainties in the HAL QCD method.
While the time-dependent HAL QCD method is
free from the issue associated with ground state saturation,
the approximation of the energy-independent non-local potential
by the derivative expansion introduces systematic uncertainties, so that
it is necessary to check the errors introduced by the expansion.
We have performed the {(2+1)-flavor} lattice QCD calculation for the $\Xi\Xi(^1$S$_0)$ system
at $m_\pi = 0.51$~GeV.
Using the four-point correlation functions from both wall and smeared quark sources,
we have established the theoretical and numerical
method to determine LO and N$^2$LO potentials
in the derivative expansion.
Scattering phase shifts calculated from these potentials reveal
that the LO potential is sufficient to reproduce observables at low energies
($k^2/m_{\pi}^2 < 0.1$), while
the N$^2$LO correction becomes non-negligible but
remains small even at high energies ($k^2/m_{\pi}^2 \simeq 0.5$),
confirming the good convergence of the derivative expansion below
the inelastic threshold for this particular system.
We have also found that the potential at the LO analysis for the wall source
agrees with the LO potential at the N$^2$LO analysis except at short distances
and can reproduce the scattering phase shifts precisely at low energies.
Other systematic uncertainties such as the inelastic state contaminations and
the finite volume effect to the potential are investigated
and are found to be well under control.
After establishing the reliability of the HAL QCD potential,
we have calculated the eigenvalues of the Hamiltonian in finite boxes with the potential.
The volume dependence of the lowest eigenvalues
is well described by the $1/L$ expansion for scattering states obtained from L\"uscher's finite-volume formula.
We have also discussed the relation among the energy spectrum, phase shifts
and the effective range expansion.
In a forthcoming paper~\cite{Iritani:2018vfn},
we will perform the spectral decomposition of the correlation function
based on the eigenmodes of the Hamiltonian in a finite box with the HAL QCD potential,
which will enable us to better understand the requirements for reliably extracting finite-volume energies.
\acknowledgments
We thank the authors of Ref.~\cite{Yamazaki:2012hi} and ILDG/JLDG~\cite{conf:ildg/jldg, Amagasa:2015zwb}
for providing the gauge configurations.
Lattice QCD codes of
CPS~\cite{CPS}, Bridge++~\cite{bridge++} and the modified version thereof by Dr. H.~Matsufuru,
cuLGT~\cite{Schrock:2012fj} and domain-decomposed quark solver~\cite{Boku:2012zi,Teraki:2013}
are used in this study.
The numerical calculations have been performed on BlueGene/Q and SR16000 at KEK, HA-PACS at University of Tsukuba,
FX10 at the University of Tokyo and K computer at RIKEN, AICS (hp150085, hp160093).
This work is supported in part by the Japanese Grant-in-Aid for Scientific
Research (No. JP24740146, JP25287046, JP15K17667, JP16K05340, JP16H03978),
by MEXT Strategic Program for Innovative Research (SPIRE) Field 5,
by a priority issue (Elucidation of the fundamental laws and evolution of the universe) to be tackled by using Post “K” Computer,
and by Joint Institute for Computational Fundamental Science (JICFuS).
\clearpage
\section{Introduction}
Spreading processes are currently among the hottest topics in the field of complex networks,
including the spreading of epidemics, opinions, rumors, and new technologies and behaviors. So far, a
great deal of significant progress has been achieved, including the vanishing epidemic threshold
\cite{Pastor-Satorras:2001,Boguna:2002,Ferreira:2012,Boguna:2013,Parshani:2010,Castellano:2010},
reaction-diffusion models \cite{Colizza:2007a,Colizza:2007b,Andrea:2008,Liu:2009},
and temporal and/or multilayer networks
\cite{Boccaletti:2014,Feng:2015,Sahneh:2013,Wang:2013,Yagan:2013,Newman:2005,Marceau:2011,
Buono:2015,Buono:2014,Zhao:2014,Zheng:2017,Zheng:2018,Holme:2012,Perra:2012} (see the reviews
\cite{Pastor:2015,Barrat:2008,Dorogovtsev:2008,Wang:2017} for details). These models significantly
increase our understanding of epidemic/information spreading and are very useful for public health
authorities and relevant government departments in controlling epidemic/information spreading.
A common feature of all these contributions is that there is only one transition in the
spreading process: the spreading range is approximately zero when the transmission
probability $\beta$ is less than a critical value $\beta_c$ and becomes nonzero when $\beta \geq
\beta_c$. Above the critical point $\beta_c$, the spreading range gradually increases
as $\beta$ increases further. On the other hand, in recent years a novel double
transition has been observed in epidemic spreading processes under some particular conditions, such as
networks with a very heterogeneous and
clustered structure \cite{Simon:2014,Bhat:2017}, epidemic spreading with asymmetric interactions
\cite{Allard:2017}, and contagion processes with heterogeneous adoptability \cite{Min:2017}.
The so-called double transition means that there are two critical
values, $\beta_c^1$ and $\beta_c^2$, in the spreading process. The first transition is between
the healthy and endemic phases, and the second is between two endemic phases with very
different internal organizations. For example, Ref.~\cite{Allard:2017} shows that for $\beta
<\beta_c^{1}$, all outbreaks are microscopic and quickly die out; for $\beta_c^1<\beta <
\beta_c^{2}$, a macroscopic epidemic occurs within the network of homosexual contacts
between males, with microscopic spillover into the rest of the population via bisexual males; while
for $\beta > \beta_c^{2}$, a more classic epidemic scenario appears, in the sense that it is of
macroscopic scale in most of the population.
Although some important mechanisms behind the double transition have been uncovered in previous
studies, many gaps in our knowledge of spreading dynamics remain. For example, this unique double
transition has been observed only in epidemic spreading dynamics and only in particular situations.
The double transition in information spreading processes, by contrast, has been neglected,
especially with regard to identifying the critical factors driving this phenomenon. As is well known,
information spreading has special features that distinguish it from epidemic spreading,
such as memory effects (i.e., previous contacts can influence the information spreading at the current
time \cite{Dodds:2004,Lu:2011,Zheng:2013}) and non-redundant contacts (people usually do not transmit
the same piece of information more than once to the same person \cite{Wang:2015,Wu:2018}). In addition, information
spreading is affected by multiple channels arising from different types of contacts in different regions
\cite{Brummitt:2012,Lee:2014,Min:2016}. For instance, when choosing which products to buy, which ideas to
accept, and which behaviors to adopt, people are influenced not only by friends, colleagues and family in
the same region through face-to-face interactions, but also by distant relatives and
friends in other regions through telephone or Internet communication. It is therefore
necessary to investigate the double transition in information spreading dynamics
under the effects of multiple channels and the memory of non-redundant information.
The effects of multiple channels on spreading processes have been widely investigated based on a
powerful analytical framework: multilayer or multiplex
networks~\cite{Boccaletti:2014,Feng:2015,Sahneh:2013,Wang:2013,Yagan:2013,Newman:2005,Marceau:2011,
Buono:2015,Buono:2014,Zhao:2014,Zheng:2017,Zheng:2018}, where intra-links and inter-links
represent the multiple social relations (channels) among individuals. So far, the majority of
research on multilayer networks has focused on how one-to-one interconnections
influence the dynamical processes taking place on them
\cite{Funk:2010,Allard:2009,Son:2012,Sanz:2012,Souza:2009,Hackett:2016,Dickison:2012,Mendiola:2012}.
To the best of our knowledge, however, little attention has been paid to information spreading with
one-to-many interconnections, especially from the viewpoint of mathematical theory. On the other hand,
although many studies have revealed that the interaction strength between networks, the degree-degree correlation,
and the degree distribution and mean degree of each network play a critical role in the
relevant dynamical processes~\cite{Funk:2010,Allard:2009,Son:2012,Sanz:2012,Souza:2009,Hackett:2016,Dickison:2012,Mendiola:2012},
how the properties of multilayer network structures affect the double transition of information
spreading remains an open question in network science.
To fill these gaps, in this work we propose a SAR (Susceptible-Accepted-Recovered) information
spreading model on multilayer networks, in which we emphasize the effects of multiple channels,
memory, and non-redundant contacts. Our numerical simulations reveal that, contrary to previous
work, there is a double transition, comprising a continuous transition followed by a
discontinuous one, in the final accepted size as a function of the transmission probability.
Further, we demonstrate that the double transition originates from two
separate outbreaks in the two networks, which in turn depends on a weak coupling between the two
networks, the difference between their degree distributions, and a large adoption threshold.
To better understand these findings, an edge-based compartmental theory is developed, which agrees
very well with the numerical simulations.
The rest of this paper is organized as follows. In Sec. II, a Susceptible-Accepted-Recovered (SAR)
model on a two-layered network is proposed to describe information spreading through multiple channels.
In Sec. III, an edge-based compartmental theory is given in detail. In Sec. IV,
simulation results are presented. Finally, in Sec. V, the
conclusions and discussions are presented.
\section{The Susceptible-Accepted-Recovered model on a two-layered network}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig1.eps}
\caption{(Color online). Sketch of the Susceptible-Accepted-Recovered (SAR) model on a two-layered
network. ``Black", ``green" and ``red" lines represent the links of layer
$\mathcal{A}$, layer $\mathcal{B}$ and the inter-layer $\mathcal{AB}$, respectively. $\beta_a$, $\beta_b$ and
$\beta_{ab}$ denote the transmission probabilities of layers $\mathcal{A}$, $\mathcal{B}$ and
$\mathcal{AB}$. At time $t$, a susceptible node $i$ in layer $\mathcal{A}$ may receive a piece of
information from an accepted node in layer $\mathcal{A}$ or $\mathcal{B}$ with probability
$\beta_a$ or $\beta_{ab}$, respectively. Once node $i$ successfully receives the information
from an accepted neighbor, the cumulative number $m$ of received information for node $i$
increases by one, and that accepted neighbor will not transmit the same information to node $i$ any
more. If the susceptible node $i$ has received the information $m$ times from time
step $0$ to $t$, it becomes accepted when $m\geq T_A$. }
\label{Fig:model}
\end{figure}
To understand the effects of multiple
channels in the information spreading process, we here introduce a two-layered network with
coupling between its two layers, i.e. the layer $\mathcal{A}$ and $\mathcal{B}$ in Fig.
\ref{Fig:model}. We let the two layers have the same size $N_a=N_b=N$ and their degree
distributions $P_A(k)$ and $P_B(k)$ be different. We may imagine layer $\mathcal{A}$ as a
human communication network for one geographic region or community, and layer $\mathcal{B}$
as that for a separate region. Each node in the two-layered network has two kinds of links:
intra-links within layer $\mathcal{A}$ or $\mathcal{B}$, and inter-links between layers $\mathcal{A}$
and $\mathcal{B}$. A node can receive information not only from friends, colleagues and family
through intra-links in the same region, but also from distant relatives and friends through
inter-links to the other region, e.g., by telephone or over the Internet. In detail, we first
generate two separate networks $\mathcal{A}$ and $\mathcal{B}$ with the same size $N$ and
different degree distributions $P_A(k_a)$ and $P_B(k_b)$, respectively. Then, we add inter-links
between randomly chosen nodes of $\mathcal{A}$ and $\mathcal{B}$ until the desired number of
inter-links is reached. The average node degrees of layer $\mathcal{A}$, layer $\mathcal{B}$ and
the inter-layer $\mathcal{AB}$ are denoted by $\langle k_a\rangle$, $\langle k_b\rangle$, and $\langle k_{ab}\rangle$,
respectively. In this way, we obtain an uncorrelated two-layered network.
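As an illustration, the construction above can be sketched with Python's standard library alone. This is a minimal sketch under simplifying assumptions made here for brevity: both layers are built as simple random (ER-style) graphs, whereas the paper also uses scale-free layers \cite{Catanzaro:2005}, and the size and degree values below are placeholders rather than the paper's settings.

```python
import random

def random_layer(n, kmean, rng):
    """Draw random intra-links until the layer reaches mean degree kmean."""
    adj = {i: set() for i in range(n)}
    target = n * kmean // 2          # number of intra-links
    edges = 0
    while edges < target:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and v not in adj[u]:
            adj[u].add(v); adj[v].add(u); edges += 1
    return adj

def random_inter(n, kab, rng):
    """Draw random inter-links between two layers of equal size n."""
    ab = {i: set() for i in range(n)}
    links = 0
    while links < n * kab:
        a, b = rng.randrange(n), rng.randrange(n)
        if b not in ab[a]:
            ab[a].add(b); links += 1
    return ab

rng = random.Random(0)
N = 1000                              # illustrative size, not the paper's N = 10000
layer_a = random_layer(N, 6, rng)     # layer A, <k_a> = 6
layer_b = random_layer(N, 6, rng)     # layer B, <k_b> = 6
inter = random_inter(N, 2, rng)       # inter-layer AB, <k_ab> = 2
mean_ka = sum(len(s) for s in layer_a.values()) / N
mean_kab = sum(len(s) for s in inter.values()) / N
```

Since links are added until the planned counts are reached, the mean degrees come out exactly at the targets.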
To discuss information spreading in the two-layered network, we adopt a
Susceptible-Accepted-Recovered (SAR) model. At each time step, a node occupies exactly one of
three states: (i) Susceptible: the node has not yet received the information, or has received it
but hesitates to accept it; (ii) Accepted: the node accepts the information and transmits it to
its neighbors; (iii) Recovered: the node loses interest in the information and no longer spreads
it. This Susceptible-Accepted-Recovered (SAR) model is thus similar to the SIR
(Susceptible-Infected-Refractory) model in epidemiology.
The information spreading process can be described as follows:
(i) At the beginning, a fraction $\rho_0$ of nodes chosen uniformly at random from layer $\mathcal{A}$ serve as
seeds (accepted state) spreading the first piece of information. All other nodes are in the susceptible state.
(ii) At each time step $t$, a susceptible node $i$ in layer $\mathcal{A}$ may receive a piece of
information from an accepted node in layer $\mathcal{A}$ or $\mathcal{B}$ with probability
$\beta_a$ or $\beta_{ab}$ (see Fig. \ref{Fig:model}), respectively. For a susceptible node in
layer $\mathcal{B}$, the state change is analogous, but with intra-layer probability $\beta_b$. Each time
node $i$ successfully receives the information from an accepted neighbor, the cumulative number
$m$ of received pieces of information increases by one, and that accepted neighbor never
transmits the same information to node $i$ again, i.e., information transmission is non-redundant.
Since an individual has to remember the pieces of non-redundant information he or
she received from neighbors before time $t$, a non-redundant information memory is built
into our model.
(iii) When a susceptible node $i$ has received the information $m$ times by time step $t$,
with $m\geq T_A$ in layer $\mathcal{A}$ (or $m\geq T_B$ in layer $\mathcal{B}$),
node $i$ enters the accepted state, where $T_A$ and $T_B$ are the adoption thresholds of nodes
in layers $\mathcal{A}$ and $\mathcal{B}$, respectively. In the same time step, each accepted
node loses interest in transmitting the
information and becomes recovered with probability $\mu$.
(iv) Steps (ii) and (iii) are repeated until no accepted node remains in the network.
In our numerical simulations, we set the network size $N_a=N_b=10\,000$, the recovery
probability $\mu=1.0$, $\beta_a=\beta_b=\beta$, and initially
choose a fraction $\rho_0=0.05$ of the nodes in layer $\mathcal{A}$ to be accepted.
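Steps (i)-(iv) can be sketched as a direct Monte Carlo simulation. The Python sketch below is illustrative only: the simple random-graph builders and the small size $N=300$ are our own placeholders (the paper uses $N_a=N_b=10\,000$ and also scale-free layers).

```python
import random

def er_adj(n, kmean, rng):
    """Placeholder ER-style layer: random edges until mean degree kmean."""
    adj = {i: set() for i in range(n)}
    edges = 0
    while edges < n * kmean // 2:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and v not in adj[u]:
            adj[u].add(v); adj[v].add(u); edges += 1
    return adj

def inter_links(n, kab, rng):
    """Random inter-links between layers A and B (both of size n)."""
    ab = {i: set() for i in range(n)}
    ba = {i: set() for i in range(n)}
    links = 0
    while links < n * kab:
        a, b = rng.randrange(n), rng.randrange(n)
        if b not in ab[a]:
            ab[a].add(b); ba[b].add(a); links += 1
    return ab, ba

def simulate_sar(adj_a, adj_b, ab, ba, beta, beta_ab, T_A, T_B,
                 rho0=0.05, mu=1.0, rng=None):
    """One realization of steps (i)-(iv); returns the final recovered fraction."""
    rng = rng or random.Random()
    n = len(adj_a)
    state = {('A', i): 'S' for i in range(n)}
    state.update({('B', i): 'S' for i in range(n)})
    m = {v: 0 for v in state}                 # non-redundant information memory
    seeds = [('A', i) for i in rng.sample(range(n), int(rho0 * n))]
    for s in seeds:
        state[s] = 'A'
    accepted, sent = list(seeds), set()
    while accepted:
        received = []
        for node in accepted:                 # step (ii): transmission attempts
            lay, i = node
            nbrs = ([('A', j) for j in adj_a[i]] + [('B', j) for j in ab[i]]
                    if lay == 'A' else
                    [('B', j) for j in adj_b[i]] + [('A', j) for j in ba[i]])
            for nb in nbrs:
                if state[nb] == 'S' and (node, nb) not in sent:
                    if rng.random() < (beta if lay == nb[0] else beta_ab):
                        sent.add((node, nb))  # never resend the same piece
                        m[nb] += 1
                        received.append(nb)
        still = []
        for node in accepted:                 # recovery with probability mu
            if rng.random() < mu:
                state[node] = 'R'
            else:
                still.append(node)
        accepted = still
        for nb in set(received):              # step (iii): threshold adoption
            if state[nb] == 'S' and m[nb] >= (T_A if nb[0] == 'A' else T_B):
                state[nb] = 'A'
                accepted.append(nb)
    return sum(1 for v in state.values() if v == 'R') / (2 * n)

rng = random.Random(1)
A, B = er_adj(300, 6, rng), er_adj(300, 6, rng)
ab, ba = inter_links(300, 2, rng)
rho_R = simulate_sar(A, B, ab, ba, beta=0.8, beta_ab=0.5, T_A=2, T_B=2, rng=rng)
```

With $\mu=1$ every accepted node recovers after its single transmission step, so the `sent` bookkeeping matters mostly for $\mu<1$.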
\section{Theory}
\subsection{The edge-based compartmental theory on a single network}
Let us first illustrate the edge-based compartmental theory for a
single network, by following the methods introduced
in Refs. \cite{Wang:2015,Volz:2008,Miller:2011,Shu:2016,Miller:2012,Miller:2013a,Miller:2014}. We let $\rho_S(t)$,
$\rho_A(t)$, and $\rho_R(t)$ be the densities of the Susceptible, Accepted, and Recovered nodes at time $t$,
respectively. The spreading process ends as $t\rightarrow\infty$, and thus $\rho_R(\infty)$ represents the
final fraction of nodes that have accepted the information.
We use a variable $\theta(t)$ to denote the probability that a node $v$ has not transmitted the information to the
node $u$ along a randomly chosen edge by time $t$.
For an uncorrelated, large and sparse network, the probability
that a randomly chosen node $u$ of degree $k$ has received the information $m$ times from his/her neighbors at
time $t$ is
\begin{equation}\label{eq:1}
\tau(k,m,\theta(t))=\binom{k}{m}\theta(t)^{k-m}(1-\theta(t))^{m}
\end{equation}
Notice that a node with degree $k$ is not one of the initial seeds with probability $1-\rho_0$. Meanwhile,
the probability that a susceptible node $u$ with degree $k$ has received the information fewer than $T$ times
(and hence does not yet accept it) by time $t$ is $\sum_{m=0}^{T-1} \tau(k,m,\theta(t))$, where $T$ is the adoption
threshold in the model. Combining the initial-seed factor with the sum over all possible values of $m$, we obtain
the probability that node $u$ is still in the susceptible state at time $t$ as
\begin{equation}\label{eq:2}
s(k,t)=(1-\rho_0)\sum_{m=0}^{T-1} \tau(k,m,\theta(t))
\end{equation}
Averaging over all $k$, the density of susceptible nodes (i.e., the probability that a randomly chosen individual
is in the susceptible state) at time $t$ is given by
\begin{eqnarray}\label{eq:3}
\rho_S(t)=\sum\limits_{k=0}^{\infty}P(k)s(k,t).
\end{eqnarray}
where $P(k)$ is the degree distribution of the network. In order to solve $\rho_S(t)$, one needs to know $\theta(t)$.
Since a neighbor $v$ of node $u$ may be susceptible, accepted, or recovered, $\theta(t)$ can be expressed as
\begin{equation}\label{eq:4}
\theta(t)=\Phi^S(t)+\Phi^A(t)+\Phi^R(t)
\end{equation}
where $\Phi^S(t)$, $\Phi^A(t)$ and $\Phi^R(t)$ are the probabilities that the neighbor $v$ is in the susceptible,
accepted and recovered state, respectively, and has not transmitted the information to node $u$ through this
connection. Once these three quantities are derived, we obtain the density of susceptible nodes at time $t$
by substituting them into Eqs. (\ref{eq:1}) and (\ref{eq:2}) and then into Eq. (\ref{eq:3}). In the
following, we therefore focus on how to solve them.
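Equations (\ref{eq:1})-(\ref{eq:3}) translate directly into code. Below is a minimal Python sketch; the degree distribution used here (a network in which every node has degree $6$) is a placeholder assumption, not one of the paper's ensembles.

```python
from math import comb

def tau(k, m, theta):
    # Eq. (1): exactly m of the k edges have transmitted,
    # each independently with probability 1 - theta
    return comb(k, m) * theta ** (k - m) * (1 - theta) ** m

def rho_S(theta, P, rho0, T):
    # Eqs. (2)-(3): density of susceptible nodes; P maps degree k to P(k)
    return (1 - rho0) * sum(p * sum(tau(k, m, theta) for m in range(T))
                            for k, p in P.items())

# illustrative degree distribution: every node has degree 6
P = {6: 1.0}
print(rho_S(1.0, P, rho0=0.05, T=2))   # theta = 1, nothing transmitted yet: 0.95
```

At $\theta=1$ no edge has transmitted, so only the seed fraction is missing from the susceptible density.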
To find $\Phi^S(t)$, we now consider a randomly chosen node $u$, and
assume this node is in the cavity state, which means that it cannot transmit any information to its neighbors $v$ but
can be informed by its neighbors. In this case, the neighbor $v$ can only get
the information from its neighbors other than node $u$. If a neighboring node $v$ of $u$ has degree $k'$,
the probability that node $v$ has received $m$ pieces of the information at time $t$ is $\tau(k'-1,m,\theta(t))=\binom{k'-1}{m}\theta(t)^{k'-1-m}(1-\theta(t))^{m}$.
Node $v$ is not a seed and, having received the information fewer than $T$ times, still does not accept it, with probability
$(1-\rho_0)\sum_{m=0}^{T-1} \tau(k'-1,m,\theta(t))$. For uncorrelated networks,
the probability that an edge from node $u$ connects to a node $v$ with degree $k'$ is
$k'P(k')/\langle k \rangle$, where $\langle k\rangle$ is the mean degree of the network.
Summing over all possible $k'$, one obtains
\begin{equation}\label{eq:5}
\Phi^S(t)=(1-\rho_0) \frac{\sum\limits_{k'}k'P(k')\sum\limits_{m=0}^{T-1}\tau(k'-1,m,\theta(t))}{\langle k \rangle}
\end{equation}
The growth of $\Phi^R(t)$ requires two
consecutive events: first, an accepted neighbor fails to transmit the information to node $u$,
with probability $1-\beta$; second, that accepted neighbor becomes recovered, with probability $\mu$. Combining
these two events, the flux from $\Phi^A(t)$ to $\Phi^R(t)$ is $\mu(1-\beta)\Phi^A(t)$. Thus, one gets
\begin{equation}\label{eq:6}
\frac{d\Phi^R(t)}{dt}= \mu(1-\beta)\Phi^A(t)
\end{equation}
Once the accepted neighbor $v$ transmits the information to $u$ successfully (with probability $\beta$), the
$\Phi^A(t)$ to $1-\theta(t)$ flux will be $\beta\Phi^A(t)$, which means
\begin{eqnarray}\label{eq:7}
\frac{d(1-\theta(t))}{dt}=\beta\Phi^A(t).
\end{eqnarray}
That is
\begin{equation}\label{eq:8}
\frac{d\theta(t)}{dt}=-\beta\Phi^A(t).
\end{equation}
Combining Eqs. (\ref{eq:6}) and (\ref{eq:8}) and considering (as initial conditions) $\theta(0)=1$ and
$\Phi^R(0)=0$, one obtains
\begin{eqnarray}\label{eq:9}
\Phi^R(t)=\frac{\mu[1-\theta(t)](1-\beta)}{\beta}.
\end{eqnarray}
Substituting Eqs. (\ref{eq:5}) and (\ref{eq:9}) into Eq.(\ref{eq:4}), we get an expression for $\Phi^A(t)$ in terms
of $\theta(t)$. Then, one can rewrite Eq. (\ref{eq:8}) as
\begin{eqnarray}
\frac{d\theta(t)}{dt}&=&-\beta\theta(t)+\mu(1-\theta(t))(1-\beta)\nonumber \\
&&+\frac{\beta(1-\rho_0)\sum_{k'}k'P(k')\sum\limits_{m=0}^{T-1}\tau(k'-1,m,\theta(t))}{\langle k\rangle} \label{eq:10}
\end{eqnarray}
With $\theta(t)$ in hand, the remaining equations of the system are
\begin{eqnarray}\label{eq:11}
\frac{d\rho_R(t)}{dt}&=&\mu \rho_A(t) \nonumber, \\
\rho_S(t)&=&\sum\limits_{k=0}^{\infty}P(k)s(k,t) \nonumber, \\
\rho_A(t)&=&1-\rho_S(t)-\rho_R(t).
\end{eqnarray}
In fact, Eq. (\ref{eq:10}) does not depend on Eq. (\ref{eq:11}), so the system is
governed by the single ordinary differential equation (\ref{eq:10}).
Although the resulting equation is simpler than those found by other methods,
it can be proven to predict the disease/information spreading dynamics exactly in the large-population
limit for different network topologies\cite{Wang:2015,Volz:2008,Miller:2011,Shu:2016,Miller:2012,Miller:2013a,Miller:2014}.
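Since the single-network system reduces to the scalar ODE (\ref{eq:10}), it can be integrated with a simple forward-Euler scheme. The sketch below is hedged: the Poisson degree distribution, the truncation cutoff, the step size and the parameter values are illustrative assumptions, not the paper's exact settings.

```python
from math import comb, exp, factorial

def tau(k, m, theta):
    # Eq. (1), guarded so that out-of-range (k, m) contributes zero
    if m < 0 or m > k:
        return 0.0
    return comb(k, m) * theta ** (k - m) * (1 - theta) ** m

def final_sizes(P, beta, mu, rho0, T, dt=0.02, t_max=150.0):
    """Forward-Euler integration of Eq. (10); P maps degree k to P(k)."""
    kmean = sum(k * p for k, p in P.items())
    theta = 1.0                                  # initial condition theta(0) = 1
    for _ in range(int(t_max / dt)):
        phi_s = (1 - rho0) * sum(
            k * p * sum(tau(k - 1, m, theta) for m in range(T))
            for k, p in P.items()) / kmean
        theta += dt * (-beta * theta + mu * (1 - theta) * (1 - beta)
                       + beta * phi_s)
    # with mu = 1 no accepted nodes remain, so rho_R(inf) = 1 - rho_S(inf)
    rho_s = (1 - rho0) * sum(p * sum(tau(k, m, theta) for m in range(T))
                             for k, p in P.items())
    return theta, 1.0 - rho_s

# illustrative: Poisson (ER-like) degree distribution with mean 6, cutoff k = 20
P = {k: exp(-6.0) * 6.0 ** k / factorial(k) for k in range(21)}
theta_star, rho_R_theory = final_sizes(P, beta=0.8, mu=1.0, rho0=0.05, T=2)
```

Note that at $\theta=1$ the derivative equals $-\beta\rho_0<0$, so the seed fraction alone starts the dynamics and no artificial perturbation is needed.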
\subsection{The edge-based compartmental theory on a two-layered network}
Now, we extend the theoretical framework from a single network to the case of two uncorrelated interconnected
networks, based on the approach in Ref. \cite{Zheng:2018}, which is well suited to the problems studied in
our work. In particular, when the population is made up of two layers,
$P_j(k_1,k_2)$ denotes the probability
that a node of layer $j$ has degree $k_1$ in layer $1$ and degree $k_2$ in layer $2$.
For the sake of simplicity, we relabel the two
layers $\mathcal{A}$ and $\mathcal{B}$ as $1$ and $2$. Let $\beta_{j,l}$ be the rate of
transmission across an edge from network $l$ to network $j$, and let $\mu$ be the recovery rate of a node
in any layer.
First, let us define $\theta_{j,l}$ to be the probability that, for a randomly chosen edge $(u,v)$ with node $u$
in layer $j$ and node $v$ in layer $l$ ($j,l=1,2$), node $v$ has not transmitted the information to node $u$ by time $t$.
In the considered case, there are four such variables: $\theta_{1,1}$, $\theta_{1,2}$, $\theta_{2,1}$ and $\theta_{2,2}$.
Once these four variables are obtained, we can solve the equations of the system.
Now, we solve for $\theta_{1,2}$ in detail as an example. Similarly to the case of a single network,
a neighbor $v$ in layer 2 of a node $u$ in layer 1 may be susceptible, accepted, or recovered. Then $\theta_{1,2}$
can be expressed as \begin{equation}\label{eq:12}
\theta_{1,2}=\Phi^S_{1,2}+\Phi^A_{1,2}+\Phi^R_{1,2}
\end{equation}
where $\Phi^S_{1,2}$, $\Phi^A_{1,2}$ and $\Phi^R_{1,2}$ are the probabilities that the neighbor $v$ is in the
susceptible, accepted or recovered state, respectively, and has not transmitted the information to node $u$ through the edge $(u,v)$.
Similarly, to find $\Phi^S_{1,2}$, note that the neighbor $v$ in layer 2 can only get
the information from its neighbors other than node $u$ in layer 1. The probability
that node $v$ with degree $(k_1,k_2)$ has received the information $n$ times through its $k_1-1$ other layer-1
edges and $m-n$ times through its $k_2$ layer-2 edges at time $t$ is
$\tau(k_1-1,n,\theta_{2,1})\tau(k_2,m-n,\theta_{2,2})$, where the function
$\tau(k,m,\theta)=\binom{k}{m}\theta^{k-m}(1-\theta)^{m}$ takes the same form as in Eq. (\ref{eq:1}).
Having received the information fewer than $T_B$ times, node $v$ still does not accept it, with probability
\begin{eqnarray}
X_{1,2}=\sum\limits_{m=0}^{T_B-1}\sum\limits_{n=0}^{m}
\tau(k_1-1,n,\theta_{2,1})\tau(k_2,m-n,\theta_{2,2}) \label{eq:13}
\end{eqnarray}
For uncorrelated networks, the probability that one
edge from node $u$ connects with a node $v$ with degree $(k_1,k_2)$ is
$\frac{k_1P_2(k_1,k_2)}{\sum_{k_1,k_2}k_1P_2(k_1,k_2)}$. Thus, one has
\begin{eqnarray}
\Phi^S_{1,2}&=& \frac{\sum\limits_{k_1,k_2}k_1P_2(k_1,k_2)X_{1,2}}{\sum\limits_{k_1,k_2}k_1P_2(k_1,k_2)} \label{eq:14}
\end{eqnarray}
Analogously to the single-network case, the growth of $\Phi^R_{1,2}$ requires two consecutive events:
first, an accepted neighbor fails to transmit the information to
node $u$, with probability $1-\beta_{1,2}$; second, that accepted neighbor becomes recovered, with
probability $\mu$. Combining these two events, the flux from $\Phi^A_{1,2}$ to $\Phi^R_{1,2}$ is $\mu(1-\beta_{1,2})\Phi^A_{1,2}$.
Thus, one gets
\begin{eqnarray}
\frac{d\Phi^R_{1,2}}{dt}&=& \mu(1-\beta_{1,2})\Phi^A_{1,2} \label{eq:15}
\end{eqnarray}
Once the accepted neighbor $v$ in layer 2 transmits the information to node $u$ in layer 1 successfully (with
probability $\beta_{1,2}$), the $\Phi^A_{1,2}$ to $1-\theta_{1,2}$ flux will be $\beta_{1,2}\Phi^A_{1,2}$, which means
\begin{eqnarray}
\frac{d\theta_{1,2}}{dt}&=&-\beta_{1,2}\Phi^A_{1,2} \label{eq:16}
\end{eqnarray}
Combining Eqs. (\ref{eq:15}) and (\ref{eq:16}), and considering the initial conditions $\theta_{1,2}(0)=1$ and
$\Phi^R_{1,2}(0)=0$, one obtains
\begin{eqnarray}
\Phi^R_{1,2}&=&\frac{\mu(1-\theta_{1,2})(1-\beta_{1,2})}{\beta_{1,2}} \label{eq:17}
\end{eqnarray}
Substituting Eqs. (\ref{eq:14}) and (\ref{eq:17}) into Eq. (\ref{eq:12}) and then into Eq. (\ref{eq:16}), one gets
\begin{eqnarray}\label{eq:18}
\dot{\theta}_{1,2} &=& -\beta_{1,2}(\theta_{1,2}-\Phi^S_{1,2}-\Phi^R_{1,2})\nonumber\\
&=&-\beta_{1,2}\theta_{1,2}+\mu(1-\theta_{1,2})(1-\beta_{1,2})\nonumber\\
&&+\beta_{1,2} \frac{\sum\limits_{k_1,k_2}k_1P_2(k_1,k_2)X_{1,2}}{\sum\limits_{k_1,k_2}k_1P_2(k_1,k_2)}
\end{eqnarray}
Similarly, one can write down the equations for $\theta_{1,1}$, $\theta_{2,1}$ and $\theta_{2,2}$ as follows:
\begin{eqnarray}
\dot{\theta}_{1,1} &=& -\beta_{1,1}(\theta_{1,1}-\Phi^S_{1,1}-\Phi^R_{1,1})\nonumber\\
&=& -\beta_{1,1}\theta_{1,1}+\mu(1-\theta_{1,1})(1-\beta_{1,1}) \nonumber\\
&&+\beta_{1,1} \frac{(1-\rho_0)\sum\limits_{k_1,k_2}k_1P_1(k_1,k_2)X_{1,1}}{\sum\limits_{k_1,k_2}k_1P_1(k_1,k_2)} \label{eq:19} \\
\dot{\theta}_{2,1}&=&\!-\beta_{2,1}(\theta_{2,1}-\Phi^S_{2,1}-\Phi^R_{2,1})\nonumber\\
&=&-\beta_{2,1}\theta_{2,1} +\mu(1-\theta_{2,1})(1-\beta_{2,1}) \nonumber\\
&&+ \beta_{2,1} \frac{(1-\rho_0)\sum\limits_{k_1,k_2}k_2P_1(k_1,k_2)X_{2,1}}{\sum\limits_{k_1,k_2}k_2P_1(k_1,k_2)}\label{eq:20}\\
\dot{\theta}_{2,2}&=&-\beta_{2,2}(\theta_{2,2}-\Phi^S_{2,2}-\Phi^R_{2,2})\nonumber\\
&=&-\beta_{2,2}\theta_{2,2} +\mu(1-\theta_{2,2})(1-\beta_{2,2})\nonumber\\
&&+\beta_{2,2} \frac{\sum\limits_{k_1,k_2}k_2P_2(k_1,k_2)X_{2,2}}{\sum\limits_{k_1,k_2}k_2P_2(k_1,k_2)}\label{eq:21}
\end{eqnarray}
where
\begin{eqnarray}
X_{1,1}&=&\sum\limits_{m=0}^{T_A-1}\sum\limits_{n=0}^{m}\tau(k_1-1,n,\theta_{1,1})\tau(k_2,m-n,\theta_{1,2}) \label{eq:22} \\
X_{2,1}&=&\sum\limits_{m=0}^{T_A-1}\sum\limits_{n=0}^{m}\tau(k_1,n,\theta_{1,1})\tau(k_2-1,m-n,\theta_{1,2}) \label{eq:23} \\
X_{2,2}&=&\sum\limits_{m=0}^{T_B-1}\sum\limits_{n=0}^{m}\tau(k_1,n,\theta_{2,1})\tau(k_2-1,m-n,\theta_{2,2}) \label{eq:24}
\end{eqnarray}
It should be noted that, since a node in layer $1$ is not one of the initial seeds with probability $1-\rho_0$,
a neighbor $v$ in layer $1$ that has received the information fewer than $T_A$ times still does not accept it with probability
$(1-\rho_0)X_{1,1}$ or $(1-\rho_0)X_{2,1}$ in Eqs. (\ref{eq:19}) and (\ref{eq:20}), respectively. With Eqs. (\ref{eq:18}-\ref{eq:24}) in hand,
the densities associated with each distinct state can be obtained by
\begin{equation}
\begin{cases}
\dot{R}_1=\mu A_1(t) \\
S_1(t)=(1-\rho_0)\sum\limits_{k_1,k_2}^\infty P_1(k_1,k_2)Y_1\\ \label{eq:25}
A_1(t)=1-S_1(t)-R_1(t)
\end{cases}
\end{equation}
\begin{equation}
\begin{cases}
\dot{R}_2=\mu A_2(t) \\
S_2(t)=\sum\limits_{k_1,k_2}^\infty P_2(k_1,k_2)Y_2\\ \label{eq:26}
A_2(t)= 1-S_2(t)-R_2(t)
\end{cases}
\end{equation}
where
\begin{eqnarray}
Y_1=\sum\limits_{m=0}^{T_A-1}\sum\limits_{n=0}^{m}\tau(k_1,n,\theta_{1,1})\tau(k_2,m-n,\theta_{1,2})\label{eq:27}\\
Y_2=\sum\limits_{m=0}^{T_B-1}\sum\limits_{n=0}^{m}\tau(k_1,n,\theta_{2,1})\tau(k_2,m-n,\theta_{2,2})\label{eq:28}
\end{eqnarray}
Eqs. (\ref{eq:25}) and (\ref{eq:26}) are the main theoretical results of this paper. To obtain the densities
associated with each state, instead of seeking analytic solutions of Eqs. (\ref{eq:25}) and (\ref{eq:26}),
we solve them by numerical integration and obtain the corresponding theoretical curves.
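The numerical integration of Eqs. (\ref{eq:18})-(\ref{eq:26}) can be sketched as follows. This is a hedged illustration: it assumes independent Poisson intra- and inter-degree distributions (so that $P_j(k_1,k_2)$ factorizes), small truncation cutoffs and a forward-Euler integrator; none of these choices is prescribed by the paper.

```python
from math import comb, exp, factorial

def tau(k, m, th):
    # same binomial form as Eq. (1); zero outside 0 <= m <= k
    return comb(k, m) * th ** (k - m) * (1 - th) ** m if 0 <= m <= k else 0.0

def X(ka, kb, tha, thb, T):
    # double sum appearing in Eqs. (13) and (22)-(24)
    return sum(tau(ka, n, tha) * tau(kb, m - n, thb)
               for m in range(T) for n in range(m + 1))

def pois(lam, kmax):
    return {k: exp(-lam) * lam ** k / factorial(k) for k in range(kmax + 1)}

def solve_two_layer(beta, beta_ab, mu, rho0, TA, TB, kmean=6.0, kab=2.0,
                    dt=0.05, t_max=60.0, kmax=14, kmax_ab=8):
    """Euler integration of Eqs. (18)-(21); returns final recovered densities."""
    PA, PB, Pab = pois(kmean, kmax), pois(kmean, kmax), pois(kab, kmax_ab)
    # for layer-1 nodes k1 is the intra degree, k2 the inter degree;
    # for layer-2 nodes k1 is the inter degree, k2 the intra degree
    P1 = {(k1, k2): PA[k1] * Pab[k2] for k1 in PA for k2 in Pab}
    P2 = {(k1, k2): Pab[k1] * PB[k2] for k1 in Pab for k2 in PB}
    D11 = sum(k1 * p for (k1, k2), p in P1.items())
    D21 = sum(k2 * p for (k1, k2), p in P1.items())
    D12 = sum(k1 * p for (k1, k2), p in P2.items())
    D22 = sum(k2 * p for (k1, k2), p in P2.items())
    th11 = th12 = th21 = th22 = 1.0
    for _ in range(int(t_max / dt)):
        f11 = (1 - rho0) * sum(k1 * p * X(k1 - 1, k2, th11, th12, TA)
                               for (k1, k2), p in P1.items()) / D11
        f21 = (1 - rho0) * sum(k2 * p * X(k1, k2 - 1, th11, th12, TA)
                               for (k1, k2), p in P1.items()) / D21
        f12 = sum(k1 * p * X(k1 - 1, k2, th21, th22, TB)
                  for (k1, k2), p in P2.items()) / D12
        f22 = sum(k2 * p * X(k1, k2 - 1, th21, th22, TB)
                  for (k1, k2), p in P2.items()) / D22
        th11 += dt * (-beta * th11 + mu * (1 - th11) * (1 - beta) + beta * f11)
        th12 += dt * (-beta_ab * th12 + mu * (1 - th12) * (1 - beta_ab) + beta_ab * f12)
        th21 += dt * (-beta_ab * th21 + mu * (1 - th21) * (1 - beta_ab) + beta_ab * f21)
        th22 += dt * (-beta * th22 + mu * (1 - th22) * (1 - beta) + beta * f22)
    S1 = (1 - rho0) * sum(p * X(k1, k2, th11, th12, TA) for (k1, k2), p in P1.items())
    S2 = sum(p * X(k1, k2, th21, th22, TB) for (k1, k2), p in P2.items())
    return 1.0 - S1, 1.0 - S2       # with mu = 1, A(inf) = 0, so R = 1 - S

R1, R2 = solve_two_layer(beta=0.8, beta_ab=0.5, mu=1.0, rho0=0.05, TA=2, TB=2)
```

At $\theta=1$ only the $(1-\rho_0)$ factors in $f_{1,1}$ and $f_{2,1}$ give a nonzero derivative, reflecting the fact that the spreading is seeded in layer 1 only.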
\section{Results}
To study the effects of multiple channels on information spreading, we have performed extensive
simulations of our model on coupled scale-free (SF)\cite{Catanzaro:2005}
and Erd\H{o}s-R\'enyi (ER) networks \cite{Albert:2002}. To compare the theoretical predictions with the
numerical results, we also consider coupled ER-ER and SF-SF networks in this work. Next,
we focus on identifying the key factors that influence
the emergence of the double transition in the information spreading process.
\subsection{The effects of multiple channels on the double transition}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{fig2.eps}
\caption{(Color online). Emergence of the double transition in the information spreading process
on SF-ER networks. (a) and (b) show the density of final recovered nodes $\rho_R$ and the
variability $\Delta$ as a function of the transmission probability $\beta$ for different average
degrees $\langle k_{ab}\rangle$, respectively. Squares, circles and up triangles represent $\langle k_{ab}\rangle=1$, $3$ and $5$,
respectively. The symbols show the simulated results and the lines are the corresponding
theoretical results in (a) from Eqs. (\ref{eq:25}) and (\ref{eq:26}). The results
are averaged over $10^3$ independent realizations. The parameters are $N_a=N_b=10\,000$, $\mu=1.0$,
$\beta_{ab}=0.5$, $T_A=T_B=2$, $\rho_0=0.05$, $P_A(k_a)\sim k_a^{-2.1}$, $\langle k_a\rangle=6$, $\langle k_b\rangle=6$. }
\label{Fig:kab}
\end{figure}
To better quantify the spreading behavior, we let $\rho_S(t)$, $\rho_A(t)$ and $\rho_R(t)$ denote
the fractions of susceptible, accepted and recovered nodes at time $t$ in the whole network. When the spreading
has ended, the final size of recovered nodes is denoted by $\rho_R$. A
larger $\rho_R$ implies a larger spreading range in the final state.
To numerically identify the effective spreading threshold $\beta_c$ of
the SAR model, we use the variability measure\cite{Shu:2016,Crepey:2015}:
\begin{eqnarray}
\Delta=\frac{\sqrt{\langle \rho_R^2\rangle-\langle \rho_R\rangle^2}}{\langle \rho_R\rangle}
\label{eq:29}
\end{eqnarray}
In general, the variability $\Delta$ exhibits a peak at a critical
point\cite{Shu:2016,Crepey:2015}. Thus, we estimate the numerical effective spreading threshold
$\beta_c$ from the position of the peak of the variability.
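The variability measure is straightforward to compute from an ensemble of final outbreak sizes. A small Python sketch follows; the two sample sets are hypothetical, purely to illustrate why $\Delta$ peaks near a threshold.

```python
from math import sqrt

def variability(samples):
    """Relative standard deviation of the final outbreak size (Delta in the text)."""
    mean = sum(samples) / len(samples)
    mean_sq = sum(x * x for x in samples) / len(samples)
    return sqrt(max(mean_sq - mean ** 2, 0.0)) / mean

# near a threshold, realizations split into small and large outbreaks, so
# Delta peaks; deep inside the accepted phase, outcomes concentrate and
# Delta is small (both sample sets below are hypothetical)
bimodal = [0.05] * 50 + [0.9] * 50
concentrated = [0.9] * 99 + [0.88]
print(variability(bimodal) > variability(concentrated))  # prints True
```

The numerical threshold estimate then amounts to scanning $\beta$ and locating the maximum of this quantity.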
Fig. \ref{Fig:kab}(a) shows the final size of recovered nodes $\rho_R$ as a function of
transmission probability $\beta$ with different average degree $\langle k_{ab}\rangle$ on SF-ER networks. Fig.
\ref{Fig:kab}(b) shows the variability $\Delta$ versus $\beta$ with corresponding $\langle k_{ab}\rangle$ in Fig.
\ref{Fig:kab}(a). When the interaction strength is weak (i.e., $\langle k_{ab}\rangle$ is relatively small), a
double transition occurs in the information spreading process, as indicated by the two peaks of
$\Delta$ in Fig. \ref{Fig:kab}(b). It is also found that the system undergoes a
continuous transition from the acceptance-free phase to an accepted phase, followed by a discontinuous
transition between accepted phases. In addition, with increasing $\langle k_{ab}\rangle$,
the second critical point $\beta_c^2$ moves closer to the
first one $\beta_c^1$. Once the coupling strength is strong enough ($\langle k_{ab}\rangle=5$), the two
critical points merge into one, i.e., the second transition vanishes. These results are
confirmed by Eqs. (\ref{eq:25}) and (\ref{eq:26}) of the theory; see the lines in Fig.
\ref{Fig:kab}(a). It may be helpful to understand the influence of $\langle k_{ab}\rangle$ on the double
transition purely from the structural coupling between the networks. When the coupling strength is
strong, a two-layered network behaves as a single solid network \cite{Radicchi:2013,Sahneh:2015}. In
this case, the effect of multiple channels is not prominent and the spreading behavior is the same as
the common one \cite{Wang:2015}. Therefore, a key factor determining the occurrence of the double
transition is a weak coupling between the two networks.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fig3.eps}
\caption{(Color online). The densities of final recovered nodes $\rho_R^A$ and $\rho_R^B$ in
layer $\mathcal{A}$ and $\mathcal{B}$ as a function of transmission probability $\beta$, where
(a), (b) and (c) represent the cases of average degree $\langle k_{ab}\rangle=1$,
$\langle k_{ab}\rangle=3$, and $\langle k_{ab}\rangle=5$,
respectively. Based on the peaks of $\Delta$ in Fig. \ref{Fig:kab}(b), the red and green
dashed lines indicate the first and second critical points, respectively. The symbols show the
simulated results and the solid lines are the corresponding theoretical results. All the parameters are
set as Fig. \ref{Fig:kab}. } \label{Fig:rhoAB}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fig4.eps}
\caption{(Color online). Average densities of (a) susceptible $\rho_S(t)$, (b) accepted $\rho_A(t)$,
and (c) recovered $\rho_R(t)$ nodes versus time $t$ with different transmission probability
$\beta$. Squares, circles and up triangles represent $\beta=0.49$, $0.51$ and $0.53$, respectively,
which indicate the spreading patterns below, at and above the second transition in Fig.
\ref{Fig:kab}(a) with $\langle k_{ab}\rangle=1$. The symbols show the simulated
results and the lines are the corresponding theoretical results. All the results are averaged over
$100$ independent realizations and the parameters are the same as Fig.
\ref{Fig:kab}. } \label{Fig:timeseries}
\end{figure}
To gain a deeper understanding of the double transition phenomenon, in Fig. \ref{Fig:rhoAB} we
also measure the densities of final recovered nodes $\rho_R^A$ and $\rho_R^B$ in layers
$\mathcal{A}$ and $\mathcal{B}$ as a function of transmission probability $\beta$, where Figs.
\ref{Fig:rhoAB}(a), (b) and (c) report the cases of average degree $\langle k_{ab}\rangle=1$, $\langle k_{ab}\rangle=3$, and
$\langle k_{ab}\rangle=5$, respectively. Based on the peaks of $\Delta$ in Fig. \ref{Fig:kab}(b), the red and
green dashed lines indicate the first and second critical points $\beta_c^1$ and $\beta_c^2$,
respectively. Comparing Figs. \ref{Fig:rhoAB}(a), (b) and (c), one can see that the
first threshold $\beta_c^1$ is the same in all cases, indicating that the first critical point $\beta_c^1$
corresponds to the spreading threshold in layer $\mathcal{A}$. With increasing $\beta$, more
and more individuals accept the information in layer $\mathcal{A}$, and more information
spreads to layer $\mathcal{B}$. When $\beta$ approaches the second threshold
$\beta_c^2$, the system undergoes an abrupt transition. For a very strong coupling (see Fig.
\ref{Fig:rhoAB}(c)), the two critical points merge into one, yielding a discontinuous
transition as in the traditional threshold model \cite{Dodds:2004,Wang:2015}. In addition,
Figs. \ref{Fig:rhoAB}(a) and (b) show that when $\beta_c^1<\beta<\beta_c^2$,
the density of recovered nodes in layer $\mathcal{B}$ is nonzero, indicating that the information has
spread to a small fraction of individuals in layer $\mathcal{B}$, but this small set of accepted
individuals is unable to trigger an outbreak of the information.
To better understand the spreading behavior around the second threshold $\beta_c^2$, we study the time evolution of the
node densities of susceptible $\rho_S(t)$, accepted $\rho_A(t)$, and recovered $\rho_R(t)$ nodes in Fig. \ref{Fig:timeseries}.
The green, yellow and blue symbols and lines represent spreading below, at and above the second critical point
$\beta_c^2$, respectively. One observes that when $\beta>\beta_c^2$ (see the blue symbols and lines), $\rho_A(t)$ shows
two peaks in Fig. \ref{Fig:timeseries}(b) and $\rho_R(t)$ increases dramatically at the final stage in Fig.
\ref{Fig:timeseries}(c), implying that the system undergoes a second outbreak.
\subsection{The effects of the adoption threshold $T_A$ and $T_B$ }
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fig5.eps}
\caption{(Color online). The effects of the adoption threshold $T_A$ and $T_B$ on double
transition. (a) and (b) show the dependence of the final recovered density $\rho_R$ on the
transmission probability $\beta$ with different $T_A$ and $T_B$, respectively. (c) and (d) plot the
corresponding variability $\Delta$ in the case of (a) and (b), respectively. The green, yellow and
blue symbols and lines represent $T_A=1$, $2$, $3$ in (a)(c) and $T_B=1$,
$2$, $3$ in (b)(d), respectively, where the symbols represent the simulated results and the lines
are the corresponding theoretical results in (a) and (b) from Eqs. (\ref{eq:25}) and (\ref{eq:26}).
The parameters are set as $T_B=2$ in (a) and $T_A=2$ in (b). The other ones are $N_a=N_b=10\,000$,
$\mu=1.0$, $\beta_{ab}=0.5$, $\langle k_{ab}\rangle=2$, $\rho_0=0.05$, $P_A(k_a)\sim k_a^{-2.1}$, $\langle k_a\rangle=6$,
$\langle k_b\rangle=6$.} \label{Fig:TaTb}
\end{figure}
In general, the adoption threshold of the individuals influences the phase transition of the spreading dynamics\cite{Dodds:2004,Wang:2015}.
In this light, we next study the effects of the adoption thresholds $T_A$ and $T_B$ on the double transition.
Fig. \ref{Fig:TaTb}(a) and (b) show the dependence of the final recovered density $\rho_R$ on the
transmission probability $\beta$ with typical values of $T_A$ and $T_B$, respectively. As shown in Fig.
\ref{Fig:TaTb}(a), when $T_A=1$, the double transition does not occur in the
spreading process; the corresponding variability $\Delta$ in Fig.
\ref{Fig:TaTb}(c) clearly confirms this point. When $T_A=2$ and $T_A=3$, the double
transition emerges as $\beta$ increases, and the corresponding variability $\Delta$ shows two
peaks in Fig. \ref{Fig:TaTb}(c). Similarly, in Figs. \ref{Fig:TaTb}(b) and (d), we plot the final
recovered density $\rho_R$ and the corresponding variability $\Delta$ as a function of transmission
probability $\beta$ with different $T_B$, respectively. The results are similar to those in Figs.
\ref{Fig:TaTb}(a) and (c). Clearly, increasing the adoption threshold impedes
individuals from accepting the information: a larger adoption threshold means that
an individual accepts the information only after receiving it more times from distinct neighbors.
As a result, individuals accept the
information easily when the adoption threshold is small (i.e., $T_A=1$ or $T_B=1$). In particular, when
$T_A=1$, the information spreads fast in layer $\mathcal{A}$ and the individuals in layer
$\mathcal{B}$ then learn of the information quickly, so we observe a single macroscopic outbreak at the
same time. For the case of $T_B=1$, the individuals in layer $\mathcal{B}$ accept the
information as soon as they receive it once. In this case, the information in layer $\mathcal{A}$
spills into layer $\mathcal{B}$ easily, which is equivalent to a relatively strong
interaction between the two layers, and the spreading process shows a synchronous outbreak behavior. Therefore,
the double transition disappears in this situation.
\subsection{Influence of network structure}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fig6.eps}
\caption{(Color online). The influence of network structure on double transition. (a) and (c)
show $\rho_R$ and $\Delta$ versus $\beta$ with different degree exponent $\gamma_B$ in coupled
SF-SF networks, respectively. (b) and (d) show $\rho_R$ and $\Delta$ versus $\beta$ with different
$\langle k_a\rangle$ and $\langle k_b\rangle$ in coupled ER-ER networks, respectively. The degree exponent $\gamma_A=2.1$ is
fixed in (a) and (c) and the other parameters are set as $\langle k_{ab}\rangle=2$, $T_A=T_B=2$,
$N_a=N_b=10\,000$, $\mu=1.0$, $\beta_{ab}=0.5$, $\rho_0=0.05$.
} \label{Fig:sf}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fig7.eps}
\caption{(Color online). The double transition disappears on coupled ER-ER networks.
The dependence of the final recovered density $\rho_R$ on the transmission
probability $\beta$ with different (a)$\langle k_{ab}\rangle$, (b) $T_A$ and (c) $T_B$, respectively.
(d) (e) and (f) plot the corresponding variability $\Delta$ in the case of (a) (b) and (c), respectively.
The parameters are set as $T_A=T_B=2$ in (a)(d); $T_B=2$ in (b)(e); $T_A=2$ in (c)(f), respectively.
The other ones are $\langle k_a\rangle=\langle k_b\rangle=6$, $\langle k_{ab}\rangle=2$, $N_a=N_b=10\,000$,
$\mu=1.0$, $\beta_{ab}=0.5$, $\rho_0=0.05$.
}
\label{Fig:er}
\end{figure}
One more key question is how the network topology affects the phenomenon of the double transition. To answer
this question, we consider the influence of degree distribution of coupled SF-SF and ER-ER networks. Notice that
the coupled SF-SF network is generated with the power-law degree
distribution $P_A(k_a)\sim k_a^{-\gamma_A}$ and $P_B(k_b)\sim k_b^{-\gamma_B}$ in layer
$\mathcal{A}$ and $\mathcal{B}$, respectively, where $\gamma_A$ and $\gamma_B$ are the degree exponents.
The smaller the degree exponent, the stronger the heterogeneity of the network structure.
For fixed $\gamma_A=2.1$, Figs. \ref{Fig:sf}(a) and (c) show $\rho_R$
and $\Delta$ versus $\beta$ for different degree exponents $\gamma_B$ in coupled SF-SF networks, respectively.
It is found that when $\gamma_B$ approaches $\gamma_A$, the
double transition is no longer prominent. As the difference in the heterogeneity
of the degree distributions between the two layers becomes small, the spreading speeds in layers $\mathcal{A}$ and $\mathcal{B}$ become
comparable. In this case, one readily observes synchronous outbreak behavior in layers $\mathcal{A}$ and $\mathcal{B}$.
In fact, this result can be qualitatively explained as follows \cite{Wang:2015}: in our model, hubs
accept the information with higher probability. With increasing
network heterogeneity in layer $\mathcal{B}$, the network has
a large number of nodes with very small degrees and
a few hubs with very large degrees. At the beginning, the hubs facilitate the information spreading, as they are
more likely to receive the information from layer $\mathcal{A}$. After that, the large number of nodes in layer
$\mathcal{B}$ with very small degrees
accept the information, resulting in spreading behavior similar to that of layer $\mathcal{A}$.
To examine this point further, we investigate the specific case of identical degree distributions
in coupled ER-ER networks. As shown in Figs. \ref{Fig:sf}(b) and (d), both the curves of
$\rho_R$ and $\Delta$ indicate that the double transition disappears for different $\langle k_a\rangle$ and
$\langle k_b \rangle$. Moreover, for different values of $\langle k_{ab}\rangle$, $T_A$ and $T_B$, the double
transition is also absent in Fig. \ref{Fig:er}. In an ER network,
the individuals are likely to accept or reject
the information synchronously, which results in a discontinuous transition \cite{Wang:2015}. These results
confirm again that heterogeneity of the degree distribution in each layer is crucial for the appearance of
the double transition.
\section{Conclusions}
In recent years, researchers found that under certain conditions, there exists a double
transition in the infected fraction versus the transmission probability in the epidemic spreading
process. However, it is not clear whether it exists in information spreading dynamics, since
information spreading carries its own special features, such as the effects of multiple channels,
memory effects and non-redundant contacts. By combining these key factors in the
information spreading dynamics, we indeed find the double transition in the phase diagram.
These special features play a crucial role on the appearance of the double transition.
In summary, we have proposed a SAR model to describe the information spreading process on a two-layered network,
where we emphasize the effects of multiple channels, memory and non-redundant contacts. Our simulation results show that
there is a double transition in the phase diagram. Moreover, we find
that such a phenomenon originates from two outbreaks between the two networks, which is a
distinctive feature of a multilayer network of interactions. Further, we reveal that the double
transition is driven by a weak coupling condition between the two layers, a
large adoption threshold and the difference of the degree distributions between the two
networks. An edge-based compartmental theory is developed which fully
explains all numerical results. Our findings may be helpful
for understanding secondary outbreaks of information in daily life.
\section{Acknowledgements}
This work was partially supported by the NNSF of China under Grant No. 11505114 and No.
10975099, the Program for Professor of Special Appointment (Orientational Scholar) at Shanghai
Institutions of Higher Learning under Grant No. QD2015016, and the Fundamental Research Funds
for the Central Universities under Grant No. YJ201830.
\section{References}
\section{Introduction}
Some fundamental results from the theory of normed spaces have been shown to
hold in the more general setting of non-symmetric convex bodies. Dvoretzky's
theorem \cite{Dv61, Mi71} was extended in \cite{LaMa75} and \cite{Gor88};
Milman's Quotient of Subspace theorem \cite{Mi85} and duality of entropy
results were extended in \cite{MP00}. In this note, we extend the Alon--Milman
Theorem.
A \emph{convex body} is a compact convex set in $\Re^d$ with non-empty interior.
We denote the orthogonal projection of $\Re^d$ onto a linear subspace $H$ by
$P_H$. For $p=1,2,\infty$, the closed unit ball of $\ell^d_p$ centered at
the origin is denoted by $\mathbf B^d_p$.
Let $K$ and $L$ be convex bodies in $\Re^d$ with $L=-L$. We define their
\emph{distance} as
\[
d(K,L)=\inf\{\lambda>0 \colon L\subset T(K-a)\subset \lambda L \mbox{ for some }
a\in \Re^d \mbox{ and } T\in GL(\Re^d)\}.
\]
By compactness, this infimum is attained, and when $K=-K$, it is attained with
$a=0$.
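As a standard illustration of this distance (added here for orientation; it is not used in the sequel): for the cross-polytope and the Euclidean ball one has
\[
d(\mathbf B_1^d,\mathbf B_2^d)=\sqrt d,
\]
the upper bound coming from $\mathbf B_2^d\subset \sqrt{d}\,\mathbf B_1^d\subset \sqrt{d}\,\mathbf B_2^d$ (take $T=\sqrt{d}\,\mathrm{Id}$ and $a=0$ in the definition), the lower bound being classical.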
Alon and Milman \cite{AM83} proved the following theorem in the case when $K$
is centrally symmetric.
\begin{thm}\label{thm:AMassym}
For every $\epsilon>0$ there is a constant $C(\epsilon)>0$ with the property
that in any dimension $d\in\mathbb Z^+$, and for any convex body $K$ in
$\Re^d$, at least one of the following two statements holds:
\begin{enumerate}[(i)]
\item
there is an $m$-dimensional linear subspace $H$ of $\Re^d$ such that
$d(P_H(K),\mathbf B_2^m)<1+\epsilon$, for some $m$ satisfying $\ln\ln m\geq
\frac{1}{2}\ln\ln d$, or
\item
there is an $m$-dimensional linear subspace $H$ such that
$d(P_H(K),\mathbf B_1^m)<1+\epsilon$, for some $m$ satisfying $\ln\ln m\geq
\frac{1}{2}\ln\ln d - C(\epsilon)$.
\end{enumerate}
\end{thm}
The main contribution of the present note is a way to deduce
Theorem~\ref{thm:AMassym} from the original result of Alon and Milman, that is,
the centrally symmetric case.
By polarity, one immediately obtains
\begin{cor}
For every $\epsilon>0$ there is a constant $C(\epsilon)>0$ with the property
that in any dimension $d\in\mathbb Z^+$, and for any convex body $K$ in
$\Re^d$ containing the origin in its interior, at least one of the following
two statements holds:
\begin{enumerate}[(i)]
\item
there is an $m$-dimensional linear subspace $H$ of $\Re^d$ such
that
$d(H\cap K,\mathbf B_2^m)<1+\epsilon$, for some $m$ satisfying
$\ln\ln m\geq \frac{1}{2}\ln\ln d$, or
\item
there is an $m$-dimensional linear subspace $H$ such that
$d(H\cap K,\mathbf B_{\infty}^m)<1+\epsilon$, for some $m$ satisfying
$\ln\ln m\geq \frac{1}{2}\ln\ln d - C(\epsilon)$.
\end{enumerate}
\end{cor}
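For orientation, we record (as a gloss, not in the original) the standard duality facts behind this deduction: for a convex body $K$ with the origin in its interior and a linear subspace $H$,
\[
\left(P_H(K^{\ast})\right)^{\ast}=K\cap H
\qquad\text{and}\qquad
(\mathbf B_1^m)^{\ast}=\mathbf B_{\infty}^m,
\]
polarity on the left being taken within $H$; applying Theorem~\ref{thm:AMassym} to $K^{\ast}$ and passing to polars, which reverse inclusions, yields the Corollary.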
\section{Proof of Theorem~\ref{thm:AMassym}}\label{sec:AM}
For a convex body $K$ in $\Re^d$, we denote its polar by $K^\ast=\{x\in\Re^d\colon
\iprod{x}{y}\leq 1\mbox{ for all } y\in K\}$. The \emph{support function} of
$K$ is $h_K(x)=\sup\{\iprod{x}{y}\colon y\in K\}$. For basic properties, see
\cite{Sch14, GiEtAl14}.
First, in Lemma~\ref{lem:closetoball}, we show by a standard argument that if
the \emph{difference body} $L-L$ of a convex body $L$ is close to the Euclidean
ball, then so is a projection of $L$ of proportional dimension. For this, we need
Milman's theorem, whose proof (cf. \cite{Mi71, FLM77, MiSch86}) does not use the
symmetry of $K$ even if it is stated with that assumption.
\begin{lem}[Milman's Theorem]\label{lem:milmann}
For every $\epsilon>0$ there is a constant $C(\epsilon)>0$ with the property
that in any dimension $d\in\mathbb Z^+$, and for any convex body $K$ in
$\Re^d$ with $\mathbf B_2^d\subseteq K$, there is an $m$-dimensional
linear subspace $H$ of $\Re^d$ such that
$(1-\varepsilon)r(\mathbf B_2^d\cap H)\subseteq K\cap H \subseteq
(1+\varepsilon)r(\mathbf B_2^d\cap H)$, for some $m$ satisfying
$m\geq C(\epsilon)M^2d$, where
\[M=M(K)=\int\limits_{\Se^{d-1}_2} ||x||_K d\sigma(x),\]
and $r=\frac{1}{M}$.
\end{lem}
\begin{lem}\label{lem:closetoball}
Let $\alpha,\varepsilon>0$ be given. Then there is a constant
$c=c(\alpha,\varepsilon)$ with the property that in any dimension $m\in\mathbb Z^+$,
and for any convex body $L$ in $\Re^m$ with $d(L-L,\mathbf B_2^m)<1+\alpha$, there is a
$k$-dimensional linear subspace $F$ of $\Re^m$ such that
$d(P_F(L),\mathbf B_2^k)<1+\epsilon$ for some $k\geq cm$.
\end{lem}
\begin{proof}
Let $\delta=d(L-L,\mathbf B_2^m)$. We may assume that
$\frac{1}{\delta}\mathbf B_2^m\subseteq
L-L\subseteq \mathbf B_2^m$. Thus, $h_{L-L}\geq\frac{1}{\delta}$. With the notations
of
Lemma~\ref{lem:milmann}, we have
\begin{equation}
M(L^\ast)=\int\limits_{\Se^{m-1}_2} ||x||_{L^\ast} d\sigma(x)=
\frac{1}{2}\int\limits_{\Se^{m-1}_2} \left(h_L(x)+h_L(-x)\right) d\sigma(x)
\end{equation}
\[
=\frac{1}{2}\int\limits_{\Se^{m-1}_2} h_{L-L}(x) d\sigma(x)
\geq\frac{1}{2\delta}\geq\frac{1}{2(1+\alpha)}.
\]
Note that $L^\ast\supset (L-L)^\ast\supset \mathbf B^m_2$, thus, by
Lemma~\ref{lem:milmann} and polarity, we obtain that $L$ has a $k$-dimensional
projection $P_F(L)$ with $d(P_F(L),\mathbf B^m_2\cap F)\leq 1+\varepsilon$ and $k\geq
C(\varepsilon)\frac{1}{4(1+\alpha)^2}m$. Here, $C(\varepsilon)$ is the same as
in Lemma~\ref{lem:milmann}.
\end{proof}
The novel geometric idea of our proof is the following.
We call a convex body $T=\operatorname{conv}\left( T_1\cup\{\pm e\} \right)$ in $\Re^m$ a
\emph{double cone} if $T_1=-T_1$ is a convex set, $\operatorname{span} T_1$ is an
$(m-1)$-dimensional linear subspace, and $e\in\Re^m\setminus\operatorname{span} T_1$.
Double cones are \emph{irreducible convex bodies}, that is, for any double cone
$T$, if $T=L-L$ then $L=T/2$, see \cite{NaVi03, Yo91}. We prove a stability
version of this fact.
\begin{lem}[Stability of irreducibility of double cones]\label{lem:closetocube}
Let $L$ be a convex body in $\Re^m$ with $m\geq 2$, and $T$ be a double cone of
the form $T=\operatorname{conv}\left( T_1\cup\{\pm e\} \right)$. Assume that $T\subseteq
L-L\subseteq \delta T$ for some $1\leq \delta<\frac{3}{2}$. Then
\begin{equation*}
\left(\frac{3}{2}-\delta\right)T\subseteq L-a\subseteq
\left(\delta-\frac{1}{2}\right)T.
\end{equation*}
for some $a\in\Re^m$.
\end{lem}
\begin{proof}
By the assumptions, $e\in T\subseteq L-L$, thus, by translating $L$, we may
assume that $o, e\in L$.
Thus,
\begin{equation}
L\subseteq (L-L)\cap(L-L+e)\subseteq \delta T \cap (\delta T + e).
\end{equation}
We claim that
\begin{equation}\label{eq:dtcapdt}
\delta T \cap (\delta T + e) = \frac{e}{2}+\left(\delta-\frac{1}{2}\right)T.
\end{equation}
Indeed, let $H_\lambda$ denote the hyperplane $H_\lambda=\lambda e+ e^{\perp}$.
To prove \eqref{eq:dtcapdt}, we describe the sections of the right hand side
and the left hand side by the hyperplanes $H_\lambda$ for all relevant values
of $\lambda$.
For any $\lambda\in [-\delta,\delta]$, we have
\[\delta T\cap H_\lambda= \delta (T\cap H_{\lambda/\delta})=\lambda e +
\delta \left(1-\frac{|\lambda|}{\delta }\right)T_1.\]
For any $\lambda\in [-\delta +1,\delta +1]$, we have
\[(\delta T+e)\cap H_\lambda=e+(\delta T\cap H_{\lambda-1})=
\lambda e + \delta \left(1-\frac{|\lambda-1|}{\delta }\right)T_1.
\]
Thus, for any $\lambda\in [-\delta +1,\delta]$, we have
\[\delta T\cap(\delta T+e)\cap H_\lambda=
\lambda e + \delta \left(1-\frac{1}{\delta
}\max\{|\lambda|,|\lambda-1|\}\right)T_1.\]
On the other hand, for any $\lambda\in [-\delta+1,\delta]$, we have
\[(e/2+(\delta-1/2)T)\cap H_\lambda=\lambda e +
(\delta-1/2)\left(1-\frac{|\lambda-1/2|}{\delta-1/2}\right)T_1.\]
Since $\max\{|\lambda|,|\lambda-1|\}=\frac{1}{2}+\left|\lambda-\frac{1}{2}\right|$, the right hand sides of the last two displays coincide, which yields \eqref{eq:dtcapdt}.
Thus,
\[
T\subseteq L-L= \left(L-\frac{e}{2}\right) -
\left(L-\frac{e}{2}\right)\subseteq \left(L-\frac{e}{2}\right)-
\left(\delta-\frac{1}{2}\right)T.
\]
Using the fact that $T=-T$, and $1\leq \delta<3/2$, we obtain
\[
\left(\frac{3}{2}-\delta\right)T\subseteq L-\frac{e}{2},
\]
finishing the proof of Lemma~\ref{lem:closetocube}.
\end{proof}
Now, we are ready to prove Theorem~\ref{thm:AMassym}. With the notations of the
theorem, let $D=K-K$, and apply the symmetric version of the theorem for $D$ in
place of $K$. We may assume that $\varepsilon<1/2$.
In case (i), we use Lemma~\ref{lem:closetoball} and lose a linear factor in
the dimension of the almost-Euclidean projection.
In case (ii), we use Lemma~\ref{lem:closetocube} with $T=\mathbf B^m_1$ and obtain the
same dimension for the almost-$\ell_1^m$ projection.
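To quantify the last step (a gloss added here, not part of the original argument): if $\mathbf B_1^m\subseteq S(P_H(K)-P_H(K))\subseteq \delta \mathbf B_1^m$ for some $S\in GL(H)$ with $\delta=1+\varepsilon<\frac{3}{2}$, then Lemma~\ref{lem:closetocube}, applied with the double cone $T=\mathbf B_1^m$ and $L=S(P_H(K))$, gives
\[
d(P_H(K),\mathbf B_1^m)\leq \frac{\delta-\frac{1}{2}}{\frac{3}{2}-\delta}
=\frac{\frac{1}{2}+\varepsilon}{\frac{1}{2}-\varepsilon},
\]
which tends to $1$ as $\varepsilon\to 0$; so an almost-$\ell_1^m$ projection of $K-K$ yields one of $K$ in the same dimension, after adjusting $\varepsilon$.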
\bibliographystyle{alpha}
\section{Introduction}
For many purposes, one needs a \textit{reference measure. }In discrete
situations such as the integers $\mathbb{Z}$, one has \textit{counting
measure. }In Euclidean space $\mathbb{R}^{d}$, one has \textit{Lebesgue
measure. }In locally compact groups, one has \textit{Haar measure. }In
infinite-dimensional settings such as Hilbert space, one has neither local
compactness nor Haar measure. Here various possibilities arise, some
pathological -- which we avoid by restricting to \textit{Radon} measures
(below). One is to use Christensen's concept of \textit{Haar-null sets}
(below), even though there is no Haar measure; see [Chr1,2],
Solecki [Sol], and the companion paper to this, [BinO7]. Another is to use
\textit{Gaussian measures;} for background see e.g. Bogachev [Bog1,3], Kuo
[Kuo] and for Gaussian processes, Lifshits [Lif], Marcus and Rosen [MarR],
Ibragimov and Rozanov [IbrR]. Another is to weaken the invariance of measure
to (relative) \textit{quasi-invariance -- }see [Bog2, \S 9.11, p. 304-305],
and also \S 9.3 and \S 9.16 below.
Hilbert spaces are rather special, and the natural setting for Christensen's
Haar-null sets is Banach spaces. The Banach and Hilbert settings combine (or
intertwine) in Gross's concept of \textit{abstract Wiener space, }where
(identifying a Hilbert space $H$ with its dual) one has a triple $B\subseteq
H\subseteq B^{\ast \ast }$, with both inclusions continuous dense
embeddings. (It is tempting, but occasionally misleading, to speak of
embedded `subspaces' despite either subset here typically having a topology
\textit{finer} than the \textit{subspace} topology induced by the containing
space, hence the use of quotation marks.) This is (essentially) the setting
of \textit{reproducing-kernel Hilbert spaces (RKHS); }see e.g. Berlinet and
Thomas-Agnon [BerTA]. Crucial here is the Cameron-Martin(-Maruyama-Girsanov)
theorem ([CamM1,2,3], [Gir]; [Bog1, 2.4], [Bog3, 1.4]). Here a suitable
\textit{translation} gives a change of measure, the two measures being
equivalent, with Radon-Nikodym derivative given by the \textit{Cameron-Martin
formula}, $(CM)$ below. Crucial also are the \textit{Gaussian
dichotomy} results (two Gaussian measures on the same space are either
equivalent or mutually singular). One has equivalence under translation
exactly when the translator is in the \textit{Cameron-Martin
space} [Bog1, 2.2]; these are the \textit{admissible translators}. We note
that the Girsanov \textit{change of measure }(by translation, using $(CM)$)
is the key to, e.g., Black-Scholes theory in mathematical finance (for
background see e.g. [BinK]).
Our purpose here is to construct a group-theoretic analogue of the
Cameron-Martin space arising in Gaussian measure theory. We are motivated by
a \textit{relativization of the Steinhaus} \textit{interior-point property}
[Ste], to be introduced below (important to classical regular variation --
see e.g. [BinGT, Th. 1.1.1]), i.e. with the notion of interior relativized
to a distinguished subset equipped with a finer topology:\ in brief a
`relativized version'. Though the classical paradigm may fail in an
infinite-dimensional Hilbert space, it can nevertheless hold relative to an
embedded (necessarily, compactly so -- see the final assertion of Th. 3.4)
space. Such is precisely the case when interiors are taken relative to the
Cameron-Martin space.
We recall the Gaussian context in a locally compact topological group $G$.
For simplicity, take $G$ Euclidean. Then matters split, according to the
support of the Gaussian measure. If this is the whole of $G$, the measure
has a density (given by the classical and familiar Edgeworth formula for the
multi-normal (multi-variate normal) distribution, see e.g. [BinF, 4.16]). If
not, the measure is singular viewed on $G$, but becomes non-singular when
restricted to the subgroup generated by its support (see \S 9.12). (This
situation is familiar in statistics: behaviour may seem degenerate only
because it is viewed in a context bigger than its natural one; see e.g.
[BinF].) Another instance of a similar `support-degeneracy' phenomenon
arises in the It\^{o}-Kawada theorem, when a (suitably non-degenerate)
probability measure $\mu $ has its convolution powers converging to Haar
measure on a subgroup $F$ of $G$, the closed subgroup generated by its
support [Hey1, \S\ 2.1]. In each case, the moral is the obvious one: if one
begins in the wrong context, identify the right one and start again.
Let $X$ be a locally convex topological vector space; it suffices for us to
take $X$ a separable \textit{Fr\'{e}chet} space (that is, having a
translation-invariant complete metric). Equip this with a Gaussian
(probability) measure $\gamma $ (`gamma for Gaussian', following [Bog1, Ch.
2] and \S 9.10; for Radon Gaussian measures in this context see [Bog1,
Ch.3]). Suppose further that the dual satisfies $X^{\ast }\subseteq
L^{2}(\gamma )$. Write $\gamma _{h}(K):=\gamma (K+h)$ for the translate by
$h$. \textit{Relative} \textit{quasi-invariance} of $\gamma _{h}$ and $\gamma$,
that for all compact $K$
\[
\gamma _{h}(K)>0\text{ iff }\gamma (K)>0,
\]
holds relative to a set of vectors $h\in X$ (the \textit{admissible
translators}) forming a vector subspace known as the \textit{Cameron-Martin
space}, $H(\gamma )$. Then, in fact, $\gamma _{h}$ and $\gamma $ are
equivalent, $\gamma \sim \gamma _{h},$ iff $h\in H(\gamma ).$ Indeed, if
$\gamma \sim \gamma _{h}$ fails, then the two measures are mutually singular,
$\gamma _{h}\bot \gamma $ (the Hajek-Feldman Theorem -- cf. [Bog1, Th.
2.4.5, 2.7.2]).
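A standard orienting example (added here, not in the original): for $\gamma $ the countable product of standard Gaussian measures on $X=\Re ^{\mathbb{N}}$, the Cameron-Martin space is
\[
H(\gamma )=\ell ^{2}=\{h\in \Re ^{\mathbb{N}}:\sum\nolimits_{n}h_{n}^{2}<\infty \},
\]
so $\gamma _{h}\sim \gamma $ when $h$ is square-summable, and $\gamma _{h}\bot \gamma $ otherwise (cf. [Bog1, Ch. 2]).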
Our key inspiration is that, for any non-null measurable subset $A$ of $X,$
the difference set $A-A$ contains a $|.|_{H}$-open nhd (neighbourhood) of $0$
in $H,$ i.e. $(A-A)\cap H$ contains an $H$-open nhd of $0$ -- see [Bog1, p.
64]. This flows from the continuity in $h$ of the density of $\gamma _{h}$
wrt $\gamma $ ([Bog1, Cor. 2.4.3]), as given in the \textit{Cameron-Martin-Girsanov
formula}
\begin{equation}
\exp \left( \hat{h}(x)-\frac{1}{2}||\hat{h}||_{L^{2}(\gamma )}^{2}\right)
\tag{$CM$}
\end{equation}
(where $\hat{h}$ `Riesz-represents' $h,$ i.e. $x^{\ast }(h)=\langle x^{\ast
},\hat{h}\rangle ,$ for $x^{\ast }\in X^{\ast },$ as in \S 5). Thus here a
modified Steinhaus Theorem holds: the \textit{relative-interior-point theorem}.
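As a concrete (and standard) instance of $(CM)$ -- our illustration, not part of the original text -- take $\gamma $ the standard Gaussian measure on $\Re $ and the translate $A\mapsto \gamma (A-h)$; its density with respect to $\gamma $ is
\[
\frac{\varphi (x-h)}{\varphi (x)}=\exp \left( hx-\frac{1}{2}h^{2}\right) ,
\qquad \varphi (x)=\frac{1}{\sqrt{2\pi }}e^{-x^{2}/2},
\]
matching $(CM)$ with $\hat{h}(x)=hx$ and $||\hat{h}||_{L^{2}(\gamma )}^{2}=h^{2}$; here every $h\in \Re $ is an admissible translator, so $H(\gamma )=\Re $.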
In a locally compact topological group, Gaussian measures $\gamma $ may be
defined: see e.g. Heyer [Hey1, 5.2] (in the sense of Parthasarathy; cf.
[Hey1, 5.3] for Gaussianity in the sense of Bernstein). See also \S 9.12.
Such a $\gamma $ may be singular w.r.t. a (left) Haar measure $\eta $. Such
is the case in the Euclidean case, as above, with the Gaussian having its
support on a proper linear subspace $H$, and in this case $A-A$ with $A$
measurable and non-$\gamma $-null will only have non-empty \textit{relative}
$H$-interior (and quasi-invariance only relative to $H$). We recall two
results due to Simmons [Sim] (cf. Mospan [Mos]; for generalizations beyond
the locally compact case, using results here, see also the companion paper
[BinO7]): (1) a measure $\mu $ is singular w.r.t. (left) Haar measure $\eta
_{G}$ on $G$ if and only if $\mu $ is concentrated on a $\sigma $-compact
subset $B$ such that $B^{-1}B$ has void interior (as in the Euclidean
example); (2) $\mu $ is absolutely continuous w.r.t. $\eta _{G}$ ($\mu \ll
\eta _{G})$ iff the group $G$ has the\textit{\ Steinhaus property}: for each
non-$\mu $-null compact set $K$
\[
1_{G}\in \mathrm{int}(K^{-1}K).
\]
(This does not preclude having $\mu (K)>0$ and $\mu (K+h)=0$ for some $K$
and $h$.)
singularity are studied in [LiuR] and [LiuRW], and in the related
[Gow1,2,3]; see also [Pro] and [BarFF] on singularity. In certain locally
compact groups (e.g. [Hey1, 5.5.7] for the case $G=\mathbb{R}^{m}\times
\mathbb{T}^{n}$ with $\mathbb{T}$ the unit circle) the condition $\mu \ll
\eta _{G}$ may imply that the support of a Gaussian probability measure $\mu
$ is $G;$ see, however, [Hey1, 5.5.8] for an example of a Gaussian with full
support which is `Haar-singular'.
We develop an analogue of these relative interior results for a general
Polish group $G.$ This first leads, by analogy with an abstract Wiener space
triple [Bog1, 3.9], [Str, 4.2], to the concept of a \textit{Steinhaus triple}
$(H,G,\mu ),$ which we study in \S \S 2-4, demonstrating `relativized
variants' of classical results. In \S 5 we exhibit a link between the group
context and the classical Cameron-Martin theory above by verifying that a
divisible abelian group with an $\mathbb{N}$-homogeneous group-norm (below)
is in fact a topological vector space. In \S 6 we extend our usage in \S 3
of the notions of \textit{subcontinuity} and \textit{selective subcontinuity}
of a measure (introduced in [BinO4,7]), and in Theorem 6.1 establish the key
property of a \textit{Solecki reference measure}. Then in \S 7 for a given
Polish group $G$ and `sufficiently subcontinuous' measure $\mu $ we
construct a corresponding subset $H(\mu ),$ which together with $G$ and $\mu
$ forms a Steinhaus triple (possibly `selective':\ see below). We verify
that it is an analogue of the Cameron-Martin space when $G$ is a Hilbert
space regarded as an additive group. In \S 8 we examine the extent of the
embedded `subspace' $H(\mu ).$ We close with complements in \S 9.
\section{Steinhaus triples: the context}
\qquad \textbf{Context.} Recall that a \textit{Polish} space $X$ is
separable and \textit{topologically complete}, i.e. its topology $\tau _{X}$
may be generated by a complete metric. Throughout the paper $G\ $will be a
\textit{Polish group} [BecK]: a topological group which is a Polish space.
By the Birkhoff-Kakutani Theorem ([Bir], [Kak3]; cf. [DieS, \S 3.3]) we may
equip $G$ with a \textit{left-invariant} metric $d_{G}^{L}$ (equivalently,
with a (\textit{group-})\textit{norm} $||g||:=d_{G}^{L}(g,1_{G}),$ as in
[BinO2] -- `pre-norm' in [ArhT]) that generates its topology $\tau _{G}.$
(So $d_{G}^{L}(g,g^{\prime })=||g^{-1}g^{\prime }||$ and the corresponding
right-invariant metric is $d_{G}^{R}(g,g^{\prime }):=||g^{\prime }g^{-1}||
.) This metric, which is particularly useful, need not be complete, although
$d=d_{G}^{L}+d_{G}^{R}$ is complete: see [TopH, Th. 2.3.5]. Nonetheless, the
group-norm endows $G$ with Fr\'{e}chet-like features, helpful here. When the
norm generating the topology is bi-invariant, \textit{Klee's completeness
theorem} [Kle], [DieS, Th. 8.16] asserts that if the topology is completely
metrizable, then in fact the norm is itself complete.
We fix a sequence of points $\{g_{n}\}_{n\in \mathbb{N}}$ dense in $G$; a
sequence $z_{n}\rightarrow 1_{G}$ will be called \textit{null}, and a null
sequence \textit{trivial} if it is ultimately constantly $1_{G}.$ For
$\delta >0,$ by $B_{\delta }^{G}$ (resp. $B_{\delta }^{H}$) we denote the
open ball in $G$ (or $H$) centered at $1_{G}$ of radius $\delta $ under
$d_{G}^{L}$ (or $d_{H}^{L}$), which is \textit{symmetric}; by $\mathcal{B}(G)$
the \textit{Borel} sets; by $\mathcal{K}(G)$ the family of \textit{compact}
sets (carrying the Hausdorff metric induced by $d_{G}^{L}$).
Throughout, measure is to mean \textit{Borel measure} -- i.e. its domain
comprises the Borel sets of the relevant metrizable space [GarP] -- and such
a measure $\mu $ is \textit{Radon} if it is \textit{locally finite} (so that
each point has a neighbourhood of finite measure) and the relevant Borel
sets are \textit{inner compact regular}, i.e. ($\mu $-)approximable by
compacts from within:
\[
\mu (B)=\sup \{\mu (K):K\in \mathcal{K}(G),K\subseteq B\}
\]
([Sch] and \S 9.2); being locally finite on a separable metric space such a
measure is $\sigma $-finite. A $\sigma $-finite measure on a metric space is
necessarily \textit{outer regular} ([Kal, Lemma 1.34], cf. [Par, Th. II.1.2]
albeit for a probability measure), i.e. approximable by open sets from
without, and, when the metric space is completely metrizable, inner regular
([Bog2, II. Th. 7.1.7], [Xia, Th. 1.1.8], cf. [Par, Ths. II.3.1 and 3.2]).
For $\mu $ a Radon measure, we write $\mathcal{M}_{+}(\mu )$ for the
non-null sets, and put
\[
\mathcal{K}_{+}(\mu ):=\mathcal{K}(G)\cap \mathcal{M}_{+}(\mu ).
\]
By $\mathcal{P}(G)$ we denote the family of (Borel) \textit{probability}
measures $\mu $ on $G,$ i.e. with $\mu (G)=1$ so these are Radon (as above);
by $\mathcal{U}(G)$ the \textit{universally measurable} sets (i.e.
measurable with respect to every measure $\mu \in \mathcal{P}(G)$ in the
sense of measure completion [Bog2.I, Th. 1.5.6, Prop. 1.5.11 `Lebesgue
completion'] -- for background and literature, see [BinO7]). We recall that
$N\subseteq G$ is (left) \textit{Haar-null} in $G$ if, for some Borel
$B\supseteq N$ and $\mu \in \mathcal{P}(G),$ $\mu (gB)=0$ for all $g\in G.$
\bigskip
\textbf{Definition. }Call $(H,G,\mu )$ a \textit{Steinhaus triple} if $G$ is
a Polish group with (group-) norm $||.||_{G}$ (notation as above), $\mu $ a
Radon measure on $G$ and $1_{G}\in H\subseteq G,$ a \textit{continuously
embedded subset} with group-norm $||.||_{H}$, having the property that for
K\in \mathcal{K}_{+}(\mu )$ there is in $H$ an ($H$-) open neighbourhood $U$
of $1_{G}$ (i.e. $U$ is open under $||.||_{H})$ such tha
\[
U\subseteq K^{-1}K.
\
(So $\mathrm{cl}_{G}(U)$ is compact in $G.$) The latter condition links the
topological with the algebraic structure; the norm on $H$ introduces a
topology on $H$ \textit{finer} than the subspace topology induced by $G$ --
cf. [Bog1, Ch. 2], [BerTA], [Gro1,2], [Str, \S 4.2]. In view of its
distinguished status we will refer to $H$ as the \textit{Steinhaus support}.
Our aim is to establish structural similarities with the classical
Cameron-Martin space $H(\gamma )$ of \S 1 ([Bog1, \S 2.4], \S 3, \S 8 and \S
9). Below, $\tau _{G}$ and $\tau _{H}$ will be the topologies of $G$ and $H;$
$\tau _{G|H}$ will be the topology induced on $H$ by $\tau _{G}.$ So
\[
\tau _{G|H}\subseteq \tau _{H}.
\]
It will be helpful to bear in mind the following illuminating example.
\bigskip
\textbf{Cautionary Example. }Consider the additive group $G=\mathbb{R}$ with
the Euclidean topology and its additive subgroup $H=\mathbb{Q}$ with the
\textit{discrete} topology. Enumerate $\mathbb{Q}$ as $\{q_{n}\}_{n\in \mathbb{N}}$
and put
\[
\mu :=\tsum\nolimits_{n=1}^{\infty }2^{-n}\delta _{q_{n}},
\]
with $\delta _{x}$ the Dirac measure at $x.$ Here $\tau _{G|H}\subseteq \tau
_{H}$. For $K\subseteq \mathbb{R}$ (Euclidean) compact, $\mu (K)>0$ iff
$K\cap \mathbb{Q}\neq \emptyset $, and then $\{0\}$ ($\subseteq K-K$) is
$\tau _{H}$-open.
\bigskip
The topological link between $H$ and $G$ above is at its neatest and most
thematic in norm language. But, as $H$ need not be a subgroup, it would
suffice for the continuous embedding to be determined by just a metric on
$H$, or more generally a choice of refining topology. Furthermore, as in any
abstract definition of inner regularity, one is at liberty here to restrict
attention to a, possibly countable, subfamily of $\mathcal{K}_{+}(\mu )$ --
see \S 9.8. Such variants will be referred to below as \textit{selective
Steinhaus triples}. See the Remarks after Th. 7.1 and after Prop. 8.3 below.
\bigskip
\noindent \textbf{Remarks.} 1. The inclusion above implies that
\[
K\cap Ku\neq \emptyset \qquad (u\in U)
\]
(indeed, if $u=k^{-1}k^{\prime }$, then $k^{\prime }=ku).$ In Theorem 3.4
below we strengthen this conclusion to yield the measure-theoretic
`Kemperman property', introduced in [Kem], as in [BinO4]. (One would expect
this to imply shift-compactness for `$G$-shifts of $H$', as indeed is so --
see Th. 3.8 below.)
2. Note that $W\cap H\ $is open in $H$ for $W$ open in $G$. It follows that
if $G$ is not locally compact, then, for $U\subseteq K^{-1}K$ as above, $U$
is \textit{nowhere dense in} $G$ (as $\mathrm{cl}_{G}U$ is compact, its
interior in $G$ must be empty). Theorems 3.1 and 3.2 below expand this
remark to stark category/measure dichotomies.
\bigskip
We close this section with an observation which we need several times in the
next section.
\bigskip
\noindent \textbf{Lemma 2.1. }\textit{For }$(H,G,\mu )$\textit{\ a Steinhaus
triple, if} $H$ \textit{is Polish in its own topology, then}
\[
\mathcal{B}(H)\subseteq \mathcal{B}(G),
\]
\textit{i.e. if }$B$ \textit{is Borel in} $H,$ \textit{then }$B$ \textit{is
Borel in }$G.$ \textit{In particular, if }$K\subseteq H$ \textit{is
compact in }$\tau _{H}$\textit{, then it is compact in }$\tau _{G}$\textit{:
}$\mathcal{K}(H)\subseteq \mathcal{K}(G).$
\bigskip
\noindent \textbf{Proof. }As $H$ is Polish, $B$ being Borel is a Lusin space
(cf. [Sch, Ch. 2], [Rog, 1 \S\ 2.1]), so we may write $B$ as an injective
continuous image of the irrationals, $B=f(\mathbb{N}^{\mathbb{N}})$ say, with $f:
\mathbb{N}^{\mathbb{N}}\rightarrow H$ continuous (with $H$ under $\tau _{H}$)
[Rog, 1. Cor 2.4.2]. Now the embedding $\iota :H\rightarrow G$ is
continuous and so $B=\iota \circ f(\mathbb{N}^{\mathbb{N}})$ is an injective
continuous image of the irrationals, and so a Borel subset of $G$ [Rog, 1.
Th. 3.6.1]. The final assertion is clear (since $K=\iota (K)$ is compact, as
$\tau _{G|K}\subseteq \tau _{H|K}$). $\square $
\section{Steinhaus triples: the general case}
In this section we study general topological properties of Steinhaus
triples, foremost among which is \textit{local quasi-invariance} (Theorem
3.5 below), a much weakened version of \textit{relative} quasi-invariance
(which we consider separately in the next section), i.e. relative to a
subgroup of `admissible' translators. This is preceded by a technical result
(Theorem 3.4) reminiscent of a lemma due to Kemperman [Kem] -- cf. [Kuc,
Lemma 3.7.2]; this will be revisited in another context in \S \S 6,7. We
close the section with Theorem 3.8, deducing a new property of the
Cameron-Martin space $H(\gamma ).$
For the sake of clarity, we emphasize that $\mathcal{K}_{+}(\mu )=\mathcal{K}
(G)\cap \mathcal{M}_{+}(\mu )$: the compactness referred to is thus in the
sense of the topology on $G.$ Our first result confirms that the Steinhaus
support will always be meagre in our setting:
\bigskip
\noindent \textbf{Theorem 3.1.} \textit{For }$(H,G,\mu )$\textit{\ a
Steinhaus triple with }$H$\textit{\ Polish: if }$H$ \textit{is a dense
subgroup of }$G$\textit{, then either\newline
}\noindent i)\textit{\ }$H$\textit{\ is meagre in }$G$\textit{\ (so }$\mathrm{int}_{G}(H)=\emptyset $\textit{), or else\newline
}\noindent ii) $G$\textit{\ is locally compact and }$H=G$\textit{.}
\bigskip
\noindent \textbf{Proof.} Suppose (i) fails. Choose any $K\in \mathcal{K}
_{+}(\mu )$ and consider any $\delta >0$ so small that $B_{\delta }^{H},$
the $H$-ball of radius $\delta >0$ centered at $1_{G},$ satisfies
\[
B_{\delta }^{H}\cdot B_{\delta }^{H}=B_{2\delta }^{H}\subseteq K^{-1}K.
\]
For any countable dense subset $D$ of $H$,
\[
H=\bigcup\nolimits_{d\in D}d\cdot B_{\delta }^{H}
\]
(refer to the metric $d_{H}^{L}$), so $d\cdot B_{\delta }^{H}$ is non-meagre
in $G$ for some $d\in D.$ As $G$ is a topological group, $B_{\delta
}^{H}=d^{-1}d\cdot B_{\delta }^{H}$ is also non-meagre. Now $B_{\delta
}^{H}, $ being open in $H,$ is Borel in $G$ by Lemma 2.1, and so has the
Baire property by Nikodym's Theorem [Rog, Part 1 \S 2.9], and is non-meagre
in $G.$ By the Piccard-Pettis Theorem ([Pic], [Pet], cf. [BinO2, Th. 6.5]),
applied in $G,$ there is $r>0$ with
\[
B_{r}^{G}\subseteq B_{\delta }^{H}\cdot B_{\delta }^{H}=B_{2\delta
}^{H}\subseteq K^{-1}K.
\]
So $B_{2\delta }^{H}$ for all small enough $\delta >0$ contains $B_{r}^{G}$
for some $r>0,$ and $B_{r}^{G}$ has compact closure in $G.$ So $G=H$ (for
any $g\in G\ $choose $h\in gB_{r}^{G}\cap H;\ $then $g\in
hB_{r}^{G}\subseteq hB_{2\delta }^{H}\subseteq H$), the two topologies
coincide and $G$ is locally compact. $\square $
\bigskip
Next we turn from category to measure negligibility.
\bigskip
\noindent \textbf{Theorem 3.2.} \textit{For }$(H,G,\mu )$\textit{\ a
Steinhaus triple with }$H$\textit{\ Polish: if }$H\ $\textit{is a dense
subgroup of }$G$\textit{, then either\newline
}\noindent i)\textit{\ }$H$\textit{\ is }$\mu $\textit{-null, or else}
\newline
\noindent ii)\textit{\ }$H$ \textit{is locally compact under its own
topology, with }$\mu _{H}\ll \eta _{H}$\textit{\ for }$\mu _{H}$\textit{\
the restriction of }$\mu $ \textit{to }$H$ \textit{and }$\eta _{H}$ \textit{a
Haar measure on }$H.$
\bigskip
\noindent \textbf{Proof.} Again suppose (i) fails and again recall that by
Lemma 2.1 the open subsets of $H$ have the Baire property as they are Borel
in $G$. Let $\mu _{H}$ denote the restriction of $\mu $ to the Borel subsets
of $H$:
\[
\mu _{H}(B)=\mu (B\cap H)\qquad (B\in \mathcal{B}(H)).
\]
The resulting measure is still Radon: it is a locally finite (since $\tau
_{G|H}\subseteq \tau _{H}$) Borel measure and $H$ is Polish (cf. \S 2). So,
since $\mu _{H}(H)=\mu (H)>0,$ there is a compact subset $K\subseteq H$
which is $\mu _{H}$-non-null. This set $K$ is compact also in the sense of
$G$ (Lemma 2.1) and $\mu $-non-null. Hence again for some $\delta >0$
\begin{equation}
B_{\delta }^{H}\subseteq K^{-1}K\subseteq H, \tag{$KH$}
\end{equation}
the latter inclusion holding as $H$ is a subgroup. So the topology of $H$ is locally
compact (and indeed $\sigma $-compact). Hence $H$ supports a left Haar
measure $\eta _{H}$ and so
\[
\mu _{H}\ll \eta _{H},
\]
by the Simmons-Mospan theorem ([Sim], [Mos]; [BinO7], \S 1), since $\mu _{H}$
has the Steinhaus property (\S 1) on $H.$ So
\[
\mu (B)=\mu _{H}(B\cap H)+\mu (B\backslash H)\qquad (B\in \mathcal{B}(G)),
\]
with
\[
\mu (B\cap H)=\int_{B\cap H}\frac{d\mu _{H}}{d\eta _{H}}d\eta _{H}(h).\qquad
\square
\]
\bigskip
\noindent \textbf{Remarks.} 1. For $H$ locally compact, as above, the
subgroup $H$ of $G$ can be generated by any non-null compact subset $K$ of $H$: see $(KH)$ above; use a dense set of translates of $B_{\delta }^{H}.$
\noindent 2. For $L$ compact in $H,$ as $L\backslash B_{\delta }^{H}$ is
closed in $H$ it is also compact in $H,$ and so also in $G$ (via the
continuous embedding). Take $L:=\mathrm{cl}_{H}B_{\delta }^{H}\subseteq
K^{-1}K\subseteq H;$ then $L$ is a compact set of $H$ and so of $G.$ So the
set $W:=G\backslash (L\backslash B_{\delta }^{H})$ is open in $G,$ and so
\[
B_{\delta }^{H}=L\backslash (L\backslash B_{\delta }^{H})=L\cap W.
\]
That is, the topology of $H$ on each subset of the form $L:=\mathrm{cl}
_{H}B_{\delta }^{H}$ is induced by the topology of $G$:
\[
\tau _{H|L}=\tau _{G|L}.
\]
So $H$ is a countable union of compact \textit{subspaces} of $G.$ Compare
the Cautionary Example above.
\noindent 3. Being locally compact, $H\ $is topologically complete and so is
an absolute $\mathcal{G}_{\delta }$. But that means only that it is a
$\mathcal{G}_{\delta }$ subset in any space $X$ whereof it is a subspace:
\[
\tau _{X|H}=\tau _{H}.
\]
\bigskip
A complementary result follows, in which we need to assume that, for $\tau
_{G}$-compact subsets of $H,$ the mapping $m_{K}(h):=\mu (Kh)$ is continuous
on $H$ at $1_{G}$ relative to $\tau _{G|H},$ the topology induced by $G.$
(This is a relativized version of the global concept of \textit{mobility}
studied by A. van Rooij and his collaborators -- see e.g. [LiuR].) The proof
below is a relativized version of one in [Gow1]; we give it here as it is
short and thematic for our development.
\bigskip
\noindent \textbf{Theorem 3.2}$^{\prime }$ (cf. [Gow1]). \textit{For }$
(H,G,\mu )$\textit{\ a Steinhaus triple with }$H$\textit{\ Polish and dense
in }$G,$ \textit{and }$G\ $\textit{not} \textit{locally compact}: \textit{if
}$h\mapsto \mu (Kh)$\textit{\ is }$\tau _{G|H}$\textit{-continuous at }$
1_{G} $ \textit{on }$H$\textit{\ for each }$K\in \mathcal{K}_{+}(\mu )$
\textit{lying in }$H,$ \textit{then }$\mu (H)=0.$
\bigskip
\noindent \textbf{Proof. }Suppose otherwise. Then, referring as in Th. 3.2
to the restriction $\mu _{H}(B)=\mu (B)$ for $B\in \mathcal{B}(H),$ there is
$K\subseteq H\ $compact in $H\ $(so also compact in $G$) with $\mu (K)>0.$
Consider any $\delta >0$ with $0<\delta <\mu (K).$ Put $\varepsilon :=\delta
/3.$ Choose open $U$ in $G$ with $K\subseteq U$ such that $\mu (U)<\mu
(K)+\varepsilon ,$ and $\eta >0$ so that
\[
|\mu (Kh)-\mu (K)|\leq \varepsilon
\]
for $h\in B_{\eta }^{G}\cap H.$ W.l.o.g. $KB_{\eta }^{G}\subseteq U,$ whence
$\mu (Kh\backslash K)\leq \mu (U\backslash K)\leq \varepsilon ,$ for $h\in
B_{\eta }^{G}\cap H,$ and likewise $\mu (K\backslash Kh)\leq 2\varepsilon ,$
because
\[
\mu (K)-\varepsilon \leq \mu (Kh)\leq \mu (Kh\cap K)+\mu (U\backslash K).
\]
Combining yields
\[
\mu (Kh\triangle K)\leq 3\varepsilon =\delta .
\]
So, since $\delta <\mu (K)$,
\[
B_{\eta }^{G}\cap H\subseteq \{h\in H:\mu (Kh\triangle K)<\delta \}\subseteq
\{h\in H:Kh\cap K\neq \emptyset \}\subseteq K^{-1}K:
\]
\[
H\cap B_{\eta }^{G}\subseteq K^{-1}K.
\]
So
\[
\mathrm{cl}_{G}(B_{\eta }^{G})=\mathrm{cl}_{G}(H\cap B_{\eta
}^{G})\subseteq K^{-1}K.
\]
Hence $G$ is locally compact, a contradiction. $\square $
\bigskip
\noindent \textbf{Theorem 3.3. }\textit{For }$(H,G,\mu )$\textit{\ a
Steinhaus triple, with }$H$\textit{\ Polish, if }$H$ \textit{is a dense
proper subgroup of }$G$\textit{, then }$H$\textit{\ is generically left
Haar-null in }$G$\textit{\ -- left Haar-null for quasi all }$\mu ^{\prime
}\in \mathcal{P}(G),$\textit{\ in the sense of the L\'{e}vy metric on }
$\mathcal{P}(G).$\textit{\ In particular, this is so for the Cameron-Martin
space }$H(\gamma ).$
\bigskip
\noindent \textbf{Proof.} This follows from a result of Dodos [Dod, Cor. 9]
since $H$ is an analytic subgroup (cf. Lemma 2.1) with empty interior (Th.
3.1) and has $\mu $-measure zero (Th. 3.2$^{\prime }$). $\square $
\bigskip
We include here a similar result which is thematic.
\bigskip
\noindent \textbf{Theorem 3.3}$^{\prime }$\textbf{\ (Smallness). }\textit{
For }$X$\textit{\ a Fr\'{e}chet space carrying} \textit{a Radon Gaussian
measure }$\gamma $\textit{,}\textbf{\ }$H(\gamma )$\textit{\ is
(generically) left Haar-null -- left Haar-null for quasi all }$\mu \in
\mathcal{P}(X),$\textit{\ in the sense of the L\'{e}vy metric on }$\mathcal{P
}(X).$
\bigskip
\noindent \textbf{Proof. }For $\gamma $ a Radon probability measure,
$H:=H(\gamma )$ is a separable Hilbert space ([Bog1, 3.2.7], cf. [Bog1, p.
62]). So it is complete under its own norm, so a Polish space; it is
continuously embedded in $X$ [Bog1, 2.4.6] so, as a subset of $X,$ it is
analytic as in Lemma 2.1 (here:$\ \tau _{X|H}\subseteq \tau _{H}$). As $H$
has empty interior in $X$ (Th. 3.1), again by the result of Dodos [Dod, Cor.
9], $H$ is generically left-Haar null. $\square $
\bigskip
\noindent \textbf{Remark. }In the setting above, $H:=H(\gamma )$ is in fact
a $\sigma $-\textit{compact} subset of $X$: by [Bog1, 2.4.6], the $H$-closed
\textit{unit ball} $U_{H}$ is weakly closed in $X\ $and, being convex (by
virtue of its norm), it is closed in $X$ [Rud2, 3.12], cf. [Con, \S 5.12];
but, as in the next theorem (Th. 3.4), it is a subset of some compact set of
the form $K-K$. So $U_{H}$ is itself compact in $X$ -- see [Bog1, 3.2.4].
Of course, if $X\ $is an infinite-dimensional Hilbert space and $H$ is
a $\sigma $-compact subset of $X$, then, by Baire's theorem, $H\ $must have
empty interior in $X.$
\bigskip
\noindent \textbf{Theorem 3.4.} \textit{For }$(H,G,\mu )$\textit{\ a
Steinhaus triple and} $K\in \mathcal{K}_{+}(\mu ),$ \textit{there are }
$\varepsilon ,\delta >0$\textit{\ such that}
\[
\mu (K\cap Kh)\geq \delta \text{ for all }h\in H\ \text{with }||h||_{H}\leq
\varepsilon .
\]
\textit{In particular,}
\[
\mu _{-}^{H}(K):=\sup_{\varepsilon >0}\inf \{\mu (Kh):h\in B_{\varepsilon
}^{H}\}\geq \delta ,
\]
\textit{so that for any} \textit{null sequence} $\mathbf{t}=\{t_{n}\}$
\textit{in }$H$ \textit{(i.e. with} $t_{n}\rightarrow _{H}1_{G}),$
\[
\mu _{-}^{\mathbf{t}}(K):=\lim \inf\nolimits_{n\rightarrow \infty }\mu
(Kt_{n})\geq \delta ;
\]
\textit{furthermore, for some} $r>0,$
\[
K\cap Kh\in \mathcal{M}_{+}(\mu )\qquad (h\in B_{r}^{H}):\qquad
B_{r}^{H}\subseteq K^{-1}K.
\]
\bigskip
\noindent \textbf{Proof. }Suppose otherwise. Then for some $K\in \mathcal{K}
_{+}(\mu )$ and for each pair $\varepsilon ,\delta >0$ there is $h\in H$
with $||h||_{H}<\varepsilon $ and
\[
\mu (K\cap Kh)<\delta .
\]
So in $H$ there is a sequence $t_{n}\rightarrow _{H}1_{G}$ with
\[
\mu (K\cap Kt_{n})<2^{-n-1}\mu (K)
\]
for each $n\in \mathbb{N}.$ Take
\[
M:=K\cap \dbigcup\nolimits_{n\in \mathbb{N}}Kt_{n}.
\]
Then
\[
\mu (M)<\mu (K)/2.
\]
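Indeed, by countable subadditivity (indexing $\mathbb{N}$ here from $1$),
\[
\mu (M)\leq \sum\nolimits_{n\in \mathbb{N}}\mu (K\cap
Kt_{n})<\sum\nolimits_{n\in \mathbb{N}}2^{-n-1}\mu (K)=\mu (K)/2.
\]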
So we may choose a compact $\mu $-non-null $K_{0}\subseteq K\backslash M;$
then, since $(H,G,\mu )$ is a Steinhaus triple, there is in $H$ a non-empty
open nhd $V$ of $1_{G}$ with
\[
V\subseteq K_{0}^{-1}K_{0}.
\]
Now $t_{m}\in V$ for all large $m$. Fix such an $m;$ then
\[
K_{0}\cap K_{0}t_{m}\neq \emptyset .
\]
Consider $k_{0},k_{1}\in K_{0}$ with $k_{0}=k_{1}t_{m}\in K_{0}\cap
K_{0}t_{m}\subseteq K\cap \dbigcup\nolimits_{n}Kt_{n}=M;$ as $K_{0}$ is
disjoint from $M,$ this is a contradiction. $\square $
\bigskip
An immediate corollary is
\bigskip
\noindent \textbf{Theorem 3.5 (Local quasi-invariance).} \textit{For }$
(H,G,\mu )$ \textit{a Steinhaus triple and }$B\in \mathcal{B}(G)$\textit{:
if }$\mu (B)>0,$\textit{\ then there exists }$\delta >0$ \textit{so that }$
\mu (Bh)>0$\textit{\ for all }$h\in H$\textit{\ with }$||h||_{H}\leq \delta
. $
\bigskip
\noindent \textbf{Proof. } This follows from Th. 3.4 since there is compact $
K\subseteq B$ with $\mu (K)>0.$ Then for all sufficiently small $h\in H$, $
\mu (Bh)\geq \mu (Kh)>0.$ $\square $
\bigskip
\noindent \textbf{Corollary 3.1.} \textit{For each null sequence }$
t_{n}\rightarrow 1_{G}$\textit{\ in }$H$\textit{\ and }$\mu $\textit{
-non-null }$K,$
\[
0<\mu _{-}(K)\leq \lim \inf \mu (t_{n}K)\leq \lim \sup \mu (t_{n}K)\leq \mu
(K),
\]
\textit{and so, for }$\mu $\textit{-non-null compact }$L\subseteq K,$
\[
0<\mu _{-}(L)\leq \mu _{-}(K)\text{ and }\mu (L)\leq \mu (K).
\]
\bigskip
\noindent \textbf{Proof.} Writing $\mu ^{\delta }(K):=\inf \{\mu (Kh):h\in
B_{\delta }^{H}\},$ for any $\delta >0$ and all large enough $n$
\[
\mu ^{\delta }(K)\leq \mu (t_{n}K)\leq \mu ^{\delta }(K)+\delta ,
\]
yielding the lower bound when $\delta \downarrow 0$. Also for $\delta >0,$
there is open $U\supseteq K$ with $\mu (U\backslash K)\leq \delta ,$ and so
as $K$ is compact, for all large enough $n$
\[
\mu (t_{n}K)\leq \mu (K)+\delta .
\]
For each $\delta >0$ choose $t_{\delta }\in B_{\delta }^{H}$ with $\mu
^{\delta }(K)\leq \mu (t_{\delta }K)\leq \mu ^{\delta }(K)+\delta .$ Then,
for non-null $L\subseteq K,$ since $t_{\delta }L\subseteq t_{\delta }K$, by
the assertions proved earlier,
\[
\mu _{-}(L)\leq \lim \inf \mu (t_{\delta }L)\leq \lim \inf \mu (t_{\delta
}K)\leq \lim \inf [\mu ^{\delta }(K)+\delta ]=\mu _{-}(K),
\]
and
\[
\lim \sup \mu (t_{\delta }L)\leq \mu (L)\leq \mu (K).\qquad \square
\]
\noindent \textbf{Remarks.} 1. Above, if $\mu (t_{n}K)\rightarrow \mu
_{0}\geq \mu _{-}(K),$ then
\[
\mu (\tbigcap\nolimits_{n\in \mathbb{N}}(K\backslash t_{n}K))=\mu (K)-\mu
_{0}\leq \mu (K)-\mu _{-}(K).
\]
\noindent 2. Evidently, for disjoint non-null compact $K,L$
\[
\mu (K)+\mu (L)=\mu (K\cup L)\geq \mu _{-}(K\cup L)\geq \mu _{-}(K)+\mu
_{-}(L).
\]
If one of these is sharp, one may imagine passing to a subsequence $
K_{n}$ with $\mu (K_{n})>\mu _{-}(K_{n})>0.$
\bigskip
There is no Steinhaus-like assumption on the measure $\mu $ in the following
result, which, standing in apposition to Th. 3.4, is a kind of converse.
\bigskip
\noindent \textbf{Proposition 3.1 }([BinO7, L. 1]). \textit{For }$H\subseteq
G$ \textit{continuously embedded in} $G$, $\mu \in \mathcal{P}(G)$\textit{\
and }$K\in \mathcal{K}_{+}(\mu )$: \textit{if }$\mu _{-}^{H}(K)>0$\textit{,\
then there is }$\delta >0$\textit{\ with}
\[
\Delta /4\leq \mu (K\cap Kt)\qquad (t\in B_{\delta }^{H}),
\]
\textit{so that}
\begin{equation}
K\cap Kt\in \mathcal{M}_{+}(\mu )\qquad (t\in B_{\delta }^{H}).
\tag{$\ast
$}
\end{equation}
\textit{In particular,}
\[
K\cap Kt\neq \emptyset \qquad (t\in B_{\delta }^{H}),
\]
\textit{or, equivalently,}
\begin{equation}
B_{\delta }^{H}\subseteq K^{-1}K, \tag{$\ast \ast $}
\end{equation}
\textit{so that }$B_{\delta }^{H}$ \textit{has compact closure under }$\tau
_{G}$\textit{.}
\bigskip
\noindent \textbf{Proof.} Put $H_{t}:=K\cap Kt\subseteq K$. Take $\Delta
:=\mu _{-}^{H}(K)>0.$ Then for any small enough $\delta >0$, $\mu
(Kt)>\Delta /2$ for $t\in B_{\delta }^{H}.$ Fix such a $\delta >0.$
By outer regularity of $\mu $, choose $U$ open with $K\subseteq U$ and $\mu
(U)<\mu (K)+\Delta /4.$ By upper semicontinuity of $t\mapsto Kt$, w.l.o.g.
$KB_{\delta }\subseteq U.$ For $t\in B_{\delta }^{H},$ by finite
additivity of $\mu ,$ since $\Delta /2<\mu (Kt)$
\begin{eqnarray*}
\Delta /2+\mu (K)-\mu (H_{t}) &\leq &\mu (K)+\mu (Kt)-\mu (H_{t})=\mu (K\cup
Kt) \\
&\leq &\mu (U)\leq \mu (K)+\Delta /4.
\end{eqnarray*}
Comparing the extreme ends of this chain of inequalities gives
\[
0<\Delta /4\leq \mu (H_{t})\qquad (t\in B_{\delta }^{H}).
\]
For $t\in B_{\delta }^{H},$ as $K\cap Kt\in \mathcal{K}_{+}(\mu )$, take $
s\in K\cap Kt\neq \emptyset ;$ then $s=at$ for some $a\in K,$ so $
t=a^{-1}s\in K^{-1}K.$ Conversely, $t\in B_{\delta }^{H}\subseteq K^{-1}K$
yields $t=a^{-1}a^{\prime }$ for some $a,a^{\prime }\in K;$ then $a^{\prime
}=at\in K\cap Kt$. $\square $
\bigskip
L\"{o}wner showed in 1939 that there exists no ($\sigma $-finite)
translation-invariant measure on an infinite-dimensional Hilbert space
([Loe, \S 1], [Neu2]). This is contained in the result below: with $G$ a
Hilbert space regarded as an additive group and $\mu $ Radon, if $\mu $ is
translation-invariant, then $\mu (K)=\mu _{-}^{G}(K)>0$ for some compact $K,$
so $G$ is locally compact and so finite-dimensional.
\bigskip
\noindent \textbf{Corollary 3.2 }(cf. [Gow1])\textbf{.} \textit{If }$\mu
_{-}^{G}(K)>0$\textit{\ for some }$K\in \mathcal{K}_{+}(\mu ),$\textit{\
then }$G\ $\textit{is locally compact.}
\bigskip
\noindent \textbf{Proof.} If $\mu _{-}(K)>0,$ then there is $\delta >0$ such
that $\mu (tK)>\mu _{-}(K)/2$ for all $t\in B_{\delta }=B_{\delta }^{G};$ so
for $\Delta :=\mu _{-}(K)/2$
\[
B_{\delta }\subseteq B_{\delta }^{\Delta }=\{z\in B_{\delta }:\mu
(Kz)>\Delta \}\subseteq K^{-1}K,
\]
and so $B_{\delta }$ has compact closure. $\square $
\bigskip
\noindent \textbf{Theorem 3.6.} \textit{For }$(H,G,\mu )$\textit{\ a
Steinhaus triple and} $K\in \mathcal{K}_{+}(\mu ),$ \textit{the set}
\[
\mathcal{O}(K):=\{h\in H:\mu (Kh)>0\}
\]
\textit{is }$H$\textit{-open, and so }$\mu $\textit{\ is continuous on a
dense }$\mathcal{G}_{\delta }$\textit{\ of }$\mathcal{O}(K)$ \textit{(so off
a meagre set), i.e.}
\[
\mu (Kh)=\mu _{-}^{H}(Kh)>0\qquad (\text{quasi all }h\in \mathcal{O}(K)).
\]
\textit{Conversely, for }$H,G\ $\textit{topological groups with }$H$\textit{
\ a continuously embedded subgroup of }$G\ $\textit{and }$\mu \in \mathcal{P}
(G)$\textit{: if }$\mathcal{O}(K)$ \textit{is open for }$K\in \mathcal{K}
_{+}(\mu )$ \textit{and the above relative continuity property of }$\mu $
\textit{\ holds on a dense} $\mathcal{G}_{\delta }$ \textit{subset of}
$\mathcal{O}(K)$\textit{\ in }$H,$ \textit{then }$(H,G,\mu )$\textit{\ is a
Steinhaus triple. }
\bigskip
\noindent \textbf{Proof.} The first assertion follows from Th. 3.5 on local
quasi-invariance applied to $Kh$ with $h\in \mathcal{O}(K).$ Since the map
$g\mapsto \mu (Kg)$ is upper semi-continuous (see e.g. [BinO7, Prop. 1],
[Hey1, 1.2.8]), the second assertion follows from the first by the theorem
of Fort [For, R1] (cf. [Xia, Appendix I, Lemma I.2.2]) that an upper
semi-continuous map is continuous on an $H$-dense $\mathcal{G}_{\delta }$ in $
\mathcal{O}(K).$
As for the converse, for $K\in \mathcal{K}_{+}(\mu ),$ since $1_{G}\in
\mathcal{O}(K)$ there is $h\in H$ with $\mu (Kh)=\mu _{-}^{H}(Kh)>0.$ It now
follows by Th. 3.4 that $B_{r}^{H}\subseteq (Kh)^{-1}Kh$ for some $r>0,$ and
so $1_{G}\in hB_{r}^{H}h^{-1}\subseteq K^{-1}K,$ i.e. $1_{G}$ is an interior
point of $K^{-1}K$ (since $hB_{r}^{H}h^{-1}$ is open, as $H$ is a
topological group). $\square $
\bigskip
As a corollary of Th. 3.4, we now obtain a result concerning embeddability
into non-negligible sets (here the non-null measurable sets) of some
translated subsequence of a given null sequence. This property, first used
implicitly by Banach [Ban1,2], has been studied in various general contexts
by many authors, most recently under the term `shift-compactness' -- see
e.g. [Ost1]. The new context of a Steinhaus triple is notable in limiting
the null sequences to the distinguished subspace. Here the statement calls
for the passage from a null sequence in $H$ to its inverse sequence; this
inversion is of course unnecessary if $H^{-1}=H,$ e.g. if $H$ is a subgroup
of $G,$ as will be the case in Theorem 3.8 below. (The group-theoretic
approach to shift-compactness is that of a group action, here of translation
in $G$ -- see [MilO]; for applications see [Ost2].)
\bigskip
\noindent \textbf{Theorem 3.7 (Shift-compactness Theorem for Steinhaus
triples).} \textit{For }$(H,G,\mu )$\textit{\ a Steinhaus triple, }$\mathbf{h
}$ \textit{a null sequence in }$H,$ \textit{and }$E\in \mathcal{M}_{+}(\mu
): $\textit{\ for }$\mu $\textit{-almost all }$s\in E$ \textit{there exists
an infinite }$\mathbb{M}_{s}\subseteq \mathbb{N}$ \textit{with}
\[
\{sh_{m}^{-1}:m\in \mathbb{M}_{s}\}\subseteq E.
\]
\bigskip
\noindent \textbf{Proof.} Fix a compact $K_{0}\subseteq E$ with $\mu
(K_{0})>0.$ Choose inductively a sequence $m(n)\in \mathbb{N}$ and
decreasing compact sets $K_{n}\subseteq K_{0}\subseteq E$ with $\mu
(K_{n})>0 $ such that
\[
\mu (K_{n}\cap K_{n}h_{m(n)})>0.
\]
To check the inductive step, suppose $K_{n}$ already defined. As $\mu
(K_{n})>0,$ by Th. 3.4 there are $\delta ,\varepsilon >0$ such that
\[
\mu (K_{n}\cap K_{n}h)\geq \delta \text{ for all }h\in H\ \text{with }
||h||_{H}\leq \varepsilon .
\]
So there is $m(n)>n$ with $\mu (K_{n}\cap K_{n}h_{m(n)})>0.$ Putting $
K_{n+1}:=K_{n}\cap K_{n}h_{m(n)}\subseteq K_{n}$ completes the inductive
step, and so the induction.
By compactness, select $s$ with
\[
s\in \bigcap\nolimits_{m\in \mathbb{N}}K_{m}\subseteq K_{n+1}=K_{n}\cap
K_{n}h_{m(n)}\qquad (n\in \mathbb{N}).
\]
Choosing $k_{n}\in K_{n}\subseteq K_{0}$ with $s=k_{n}h_{m(n)}$ gives $s\in
K_{0}\subseteq E,$ and
\[
sh_{m(n)}^{-1}=k_{n}\in K_{n}\subseteq K_{0}\subseteq E.
\]
Finally take $\mathbb{M}_{s}:=\{m(n):n\in \mathbb{N}\}.$
As for the final assertion, recalling from \S 2 that $\mathcal{U}(G)$
denotes the universally measurable sets, define
\[
F(H):=\bigcap\nolimits_{n\in \mathbb{N}}\bigcup\nolimits_{m>n}H\cap
Hh_{m}\qquad (H\in \mathcal{U}(G)).
\]
Then $F:\mathcal{U}(G)\rightarrow \mathcal{U}(G)$ and $F$ is monotone $
(F(S)\subseteq F(T)\ $for $S\subseteq T);$ moreover, $s\in F(H)$ iff $s\in H$
and $sh_{m}^{-1}\in H$ for infinitely many $m$. It suffices to show that $
E_{0}:=E\backslash F(E)$ is $\mu $-null (cf. the Generic Completeness
Principle [BinO1, Th. 3.4]). Suppose otherwise. Then, as $\mu (E_{0})>0,$
there exists a compact $K_{0}\subseteq E_{0}$ with $\mu (K_{0})>0.$ But
then, as in the construction above, $\emptyset \neq F(K_{0})\cap
K_{0}\subseteq F(E)\cap E_{0},$ contradicting $F(E)\cap E_{0}=\emptyset .$ $
\square $
\bigskip
\noindent \textbf{Corollary 3.3.} \textit{If the subsequence embedding
property of Theorem 3.7 holds for all the null sequences in a set }$H$
\textit{\ which is continuously embedded in }$G$\textit{\ for all }$E\in
\mathcal{K}_{+}(\mu )$, \textit{then }$(H,G,\mu )$\textit{\ is a Steinhaus
triple. In particular, for any }$E\in \mathcal{M}_{+}(\mu )$ \textit{the set
}$E^{-1}E$ \textit{has non-empty }$H$\textit{-interior.}
\bigskip
\noindent \textbf{Proof.} If in $H$ there is no open subset $U$ with $
U\subseteq K^{-1}K,$ then there exists $h_{n}\in H\ $with $h_{n}\in
B_{1/n}^{H}\backslash (K^{-1}K).$ Then there is $s\in K$ and an infinite $
\mathbb{M}_{s}\subseteq \mathbb{N}$ with
\[
\{sh_{m}^{-1}:m\in \mathbb{M}_{s}\}\subseteq K.
\]
So for any $m\in \mathbb{M}_{s},$ $h_{m}s^{-1}\in K^{-1}$, i.e. $h_{m}\in
K^{-1}K,$ a contradiction. $\square $
\bigskip
Another immediate corollary is the following result, which was actually our
point of departure.
\bigskip
\noindent \textbf{Theorem 3.8 (Shift-compactness Theorem for the
Cameron-Martin Space). }\textit{For }$X$\textit{\ a Fr\'{e}chet space
carrying a Radon Gaussian measure }$\gamma $\textit{\ with }$X^{\ast
}\subseteq L^{2}(\gamma )$\textit{, and }$H(\gamma )$\textit{\ the
Cameron-Martin space: if} $\mathbf{h}$ \textit{is null in }$H(\gamma ),$
\textit{and }$E\in \mathcal{M}_{+}(\gamma )$\textit{, then for }$\gamma
\textit{-almost all }$s\in E$ \textit{there exists an infinite }$\mathbb{M}
_{s}\subseteq \mathbb{N}$ \textit{with}
\[
\{s+h_{m}:m\in \mathbb{M}_{s}\}\subseteq E.
\]
\bigskip
\noindent \textbf{Proof.} Regarding $X$ as an additive group, $(H(\gamma
),X,\gamma )$ is a Steinhaus triple, by [Bog1, p. 64]. As $H(\gamma )$ is a
subspace of $X$, $(-h_{n})$ is also a null sequence in $H(\gamma );$ by
Theorem 3.7, for $\gamma $-almost all $s\in E$ there is $\mathbb{M}
_{s}\subseteq \mathbb{N}$ with
\[
\{s-(-h_{m}):m\in \mathbb{M}_{s}\}\subseteq E.\qquad \square
\]
\section{Steinhaus triples: the quasi-invariant case}
We begin with the definition of measure `relative quasi-invariance' promised
in \S 3. Classical results on this topic are given in the setting of
topological vector spaces -- see e.g. [GikS, Ch. VII], [Sko], [Yam2] -- with
the exception of [Xia] who develops the associated harmonic analysis in its
group setting. We adopt a similar approach here in order to pursue some
parallels with Cameron-Martin theory.
\subsection{Relative quasi-invariance}
\noindent \textbf{Definition. }Say that $\mu $ is \textit{relatively
quasi-invariant w.r.t }$H$, or just $H$\textit{-quasi-invariant} if $\mu
(hB)=0$ for all $\mu $-null Borel $B\in \mathcal{B}(G)$ (equivalently, $\mu
$-null compact $B)$ and $h\in H$.
\bigskip
Recall that $\mathrm{supp}_{G}(\mu )$ denotes the \textit{topological support
}, which is the smallest closed set of full $\mu $-measure; for $\mu $
Radon, such a smallest closed set is guaranteed to exist [Bog2, Prop. 7.2.9].
\bigskip
\noindent \textbf{Proposition 4.1 }(cf. [Bog1, 3.6.1])\textit{. For }$
(H,G,\mu )$ \textit{a Steinhaus triple with }$H$\textit{\ a subgroup and }$
\mu \in \mathcal{P}(G),$\textit{\ an }$H$\textit{-quasi-invariant (Radon)
measure: if }$\mu (\mathrm{cl}_{G}H)=1,$ \textit{then}
\[
S_{\mu }:=\mathrm{supp}_{G}(\mu )=\mathrm{cl}_{G}H.\mathit{\ }
\]
\textit{In particular, this is so for }$H$\textit{\ the Cameron-Martin space
}$H(\gamma ).$
\bigskip
\noindent \textbf{Proof. }Let $L:=\mathrm{cl}_{G}H;$ then $L\subseteq S_{\mu
}.$ If the inclusion were proper: take $x\in L\backslash S_{\mu }.$ There is
$V$ open in $G$ with $x\in V$ and $\mu (V)=0.$ But, as $x\in L,$ there is
$h_{0}\in V\cap H,$ and so $1_{G}\in W:=h_{0}^{-1}V$ with $\mu (W)=0.$
W.l.o.g. $W^{-1}=W$ (otherwise pass to $W\cap W^{-1},$ which contains
$1_{G}).$ So also $\mu (hW)=0$ for $h\in H$ (by $H$-quasi-invariance). Then,
by the definition of the support
\[
HW=\bigcup\nolimits_{h\in H}(hW)\subseteq G\backslash S_{\mu },
\]
and so $\mu (HW)=0.$ But $\mathrm{cl}_{G}H\subseteq HW,$ for if the point
$x\in \mathrm{cl}_{G}H,$ then its nhd $xW$ meets $H,$ in $h$ say; then $x\in
hW^{-1}=hW$ (since $xw=h$ implies $x=hw^{-1}$). So $\mu (\mathrm{cl}
_{G}H)=0, $ contradicting the fact that $\mu (\mathrm{cl}_{G}H)=1$. $\square
$
\bigskip
\noindent \textbf{Definition. }Following [Bog1, 3.6.2], say that for a
Steinhaus triple $(H,G,\mu )$ the measure $\mu $ is \textit{non-degenerate}
iff $S_{\mu }=\mathrm{cl}_{G}H.$
\bigskip
We close this subsection by tracing a measure-to-category dependence.
\bigskip
\noindent \textbf{Proposition 4.2 (From measure to category: nullity to
empty interior).}\textit{\ For }$(H,G,\mu )$ \textit{a Steinhaus triple with
}$H$ \textit{a subgroup and }$\mu \in \mathcal{P}(G)$\textit{\ an }$H$
\textit{-quasi-invariant} \textit{non-degenerate (Radon) measure: if }$\mu
(H)=0$\textit{, then}
\[
\mathrm{int}_{G}(H)=\emptyset .
\]
\textit{In particular, this is so for }$H$ \textit{the Cameron-Martin space}
$H(\gamma ).$
\bigskip
\noindent \textbf{Proof. }Suppose not. Then w.l.o.g. $1_{G}\in W:=$\textrm{
int}$_{G}(H)\subseteq H$ (as $H$ is a subgroup and so $Ww^{-1}\subseteq H$
for $w\in W$). Also w.l.o.g. $W=W^{-1}$ (otherwise pass to the non-empty open
subset $W^{-1}\cap W\subseteq H).$ But $\mu (H)=0,$ so $\mu (W)=0,$ and so
also $\mu (hW)=0$ for $h\in H$ (by $H$-quasi-invariance). Then, by the
definition of the support
\[
HW=\bigcup\nolimits_{h\in H}hW\subseteq G\backslash S_{\mu }.
\]
So $\mu (HW)=0,$ and the rest of the proof is as in Prop. 4.1 using $S_{\mu
}=\mathrm{cl}_{G}H$. $\square $
\subsection{Admissible translators for $\protect\mu $-quasi-invariance}
We close with a study of the algebraic structure of admissible translators
by considering the natural candidates for the Steinhaus support of a (Radon)
measure and a corresponding natural complement (inspired by the
H\'{a}jek-Feldman Dichotomy Theorem -- cf. [Bog1, Th. 2.4.5, 2.7.2]). The
results here (in particular Prop. 4.5) will be used in \S 8. In what follows
the use of $Q$ (`q for quasi-invariance') is justified in Prop. 4.4 below;
write $\mu _{g}(B):=\mu (Bg)$ for $B\in \mathcal{B}(G),$ and put
\begin{eqnarray*}
Q_{R} &=&Q_{R}(\mu ):=\{g\in G:(\forall K\in \mathcal{K}_{+}(\mu ))\text{ }
\mu (Kg)>0\}, \\
Q &=&Q(\mu ):=\{g\in G:(\forall K\in \mathcal{K}_{+}(\mu ))[\mu (Kg)>0\text{
\& }\mu (Kg^{-1})>0]\}, \\
\mathcal{N} &\mathcal{=}&\mathcal{N}(\mu ):=\{g\in G:\mu _{g}\bot \mu
\},\qquad G_{0}=G_{0}(D):=\dbigcup\nolimits_{d\in D}d\mathcal{N}(\mu ),
\end{eqnarray*}
where $D:=\{g_{n}:n\in \mathbb{N}\}$ is a dense subset of $G.$ Evidently
\[
\mathcal{N}(\mu )=\dbigcap\nolimits_{K\in \mathcal{K}_{+}(\mu )}\{g:\mu
(Kg)=0\}=\dbigcap\nolimits_{n\in \mathbb{N}}\dbigcap\nolimits_{K\in \mathcal{
K}_{+}(\mu )}\{g:\mu (Kg)<1/n\}.
\]
The definitions imply that
\[
Q\subseteq Q_{R}\subseteq G\backslash \mathcal{N}(\mu ).
\]
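For orientation, a standard special case (not used below): for $G$ compact
with normalized Haar measure $\eta ,$ bi-invariance gives $\eta (Kg)=\eta
(K)$ for all $g\in G,$ so that here
\[
Q(\eta )=Q_{R}(\eta )=G\qquad \text{and}\qquad \mathcal{N}(\eta )=\emptyset .
\]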
\bigskip
\noindent \textbf{Lemma 4.1.} $Q$ \textit{is a subgroup of} $G.$
\bigskip
\noindent \textbf{Proof. }First, $g\in Q$ iff $g^{-1}\in Q.$ Next take
$x,y\in Q$ and $K\in \mathcal{K}_{+}(\mu ).$ As $Kx\in \mathcal{K}_{+}(\mu )$
and $y\in Q,$ $\mu (Kxy)=\mu ((Kx)y)>0.$ $\square $
\bigskip
\noindent \textbf{Proposition 4.3. }$Q$ \textit{is the largest subgroup of
admissible translators for }$\mu $-\textit{right quasi-invariance:}
\[
\mu _{g}\sim \mu \text{ for }g\in Q.
\]
\textit{In particular, if }$Q_{R}$ \textit{is a subgroup, then }$Q=Q_{R}.$
\bigskip
\noindent \textbf{Proof. }For $K\in \mathcal{K}(G):$ if $\mu (K)=0$ and
$g\in Q$, then $\mu (Kg)=0.$ For suppose otherwise. Then $\mu (Kg)>0,$ and
then also $\mu ((Kg)g^{-1})>0,$ as $g^{-1}\in Q,$ i.e. $\mu (K)>0,$ a
contradiction. Evidently, if $g$ admits right quasi-invariance, then $\mu
(Kg)>0$ for $K\in \mathcal{K}_{+}(\mu );$ so if $g$ lies in a subgroup
admitting right quasi-invariance, then also $\mu (Kg^{-1})>0,$ and so $g\in
Q.$ $\square $
\bigskip
\noindent \textbf{Proposition 4.4 (Subgroups).} \textit{For }$G$\textit{\
abelian and symmetric }$\mu \in \mathcal{P}(G),$ $Q_{R}$ \textit{is a
subgroup of} $G.$ \textit{In particular, for }$G$ \textit{a Hilbert space as
in \S 1, equipped with a symmetric Gaussian measure }$\mu =\gamma $\textit{,
the Cameron-Martin space }$H(\gamma )$ \textit{is precisely of the form}
$Q_{R}.$
\bigskip
\noindent \textbf{Proof. }For $\mu $ symmetric, and $K\in \mathcal{K}(G):$
if $\mu (K)>0,$ then $\mu (K^{-1})>0;$ so, for $x\in Q_{R}$, $\mu
(K^{-1}x)>0.$ But, as $G$ is abelian
\[
\mu (Kx^{-1})=\mu (xK^{-1})=\mu (K^{-1}x)>0.
\]
So if $x\in Q_{R},$ then $x^{-1}\in Q_{R},$ i.e. $Q_{R}=Q$, and, by Lemma
4.1, $Q_{R}$ is a group.
In particular, for any (symmetric) Gaussian $\mu =\gamma $ with domain a
Hilbert space $G,$ as in \S 1, the Cameron-Martin space $H(\gamma )$
coincides with $Q_{R}(\gamma )$. Indeed, the Hilbert space $G,$ regarded as
an additive group, is abelian, and $h\in H(\gamma )$ holds iff $\mu _{h}$ is
equivalent to $\mu $. This may be re-stated as follows: $h\in H(\gamma )$
holds iff for all $K\in \mathcal{K}(G):$ $\mu (K)>0$ iff $\gamma _{h}(K)>0.$
$\square $
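\bigskip
\noindent \textbf{Remark. }The dichotomy invoked in the last step is the
Cameron-Martin theorem together with the H\'{a}jek-Feldman dichotomy (cf.
[Bog1, 2.4.5]): for $h\in G,$
\[
h\in H(\gamma )\Longleftrightarrow \gamma _{h}\sim \gamma ,\qquad h\notin
H(\gamma )\Longrightarrow \gamma _{h}\bot \gamma .
\]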
\bigskip
\noindent \textbf{Lemma 4.2. }$Q_{R}\mathcal{N}(\mu )\subseteq \mathcal{N}
(\mu )$ and $Q\mathcal{N}(\mu )=\mathcal{N}(\mu ).$
\bigskip
\noindent \textbf{Proof. }For $h\in Q,$ observe that $\mathcal{K}_{+}(\mu )h=
\mathcal{K}_{+}(\mu )$: indeed, if $h\in Q,$ then $\mu (Kh)>0$ for all $K\in
\mathcal{K}_{+}(\mu ),$ so that $\mathcal{K}_{+}(\mu )h\subseteq \mathcal{K}
_{+}(\mu ).$ Further, for any compact $K\in \mathcal{K}_{+}(\mu ),$ as
$h^{-1}\in Q$ so that $\mu (Kh^{-1})>0,$ $K=Kh^{-1}h\in \mathcal{K}_{+}(\mu
)h $.
It now follows that if $\mu (Kg)=0$ for all $K\in \mathcal{K}_{+}(\mu ),$
then $\mu (Khg)=0$ for $h\in Q$ and all $K\in \mathcal{K}_{+}(\mu ),$ i.e.
$hg\in \mathcal{N}(\mu ),$ i.e. $Q\mathcal{N}(\mu )\subseteq \mathcal{N}(\mu
).$ But if $g\in \mathcal{N}(\mu )$ and $h\in Q,$ then likewise $h^{-1}g\in
\mathcal{N}(\mu )$, as $h^{-1}\in Q,$ so $g=h(h^{-1}g)\in Q\mathcal{N}(\mu
). $ $\square $
\bigskip
\noindent \textbf{Proposition 4.5. }$\mathcal{N}(\mu )$\textit{\ is Borel.}
\bigskip
We give two proofs below, the second of which was kindly suggested by the
Referee.
\bigskip
\noindent \textbf{First Proof. }The sets $\{g:\mu (Kg)<1/n\}$ are open as $
x\mapsto \mu (Kx)$ is upper semicontinuous for $K$ compact. Indeed, the set
\[
\{(g,K):\mu (Kg)<1/n\}
\]
is open in the product space $G\times \mathcal{K}(G)$: for open $U\supseteq
K\ $with $\mu (U)<1/2n,$ choose $\delta >0$ with $KgB_{2\delta }\subseteq U.$
Then for compact $H\subseteq KB_{\delta }$ and $h\in gB_{\delta },
\[
Hh\subseteq KgB_{2\delta }\subseteq U.
\]
By inner regularity we may assume that $\mu $ is concentrated on a $\sigma
$-compact set, say on $\dbigcup\nolimits_{n}K_{n}$ with $K_{n}$ an ascending
sequence of compact sets. Then $\{K\subseteq K_{m}:\mu
(K)=0\}=\dbigcap\nolimits_{n\in \mathbb{N}}\{K\subseteq K_{m}:\mu (K)<1/n\}$
is $\mathcal{G}_{\delta }$ in $\mathcal{K}(K_{m}),$ and its complement an
$\mathcal{F}_{\sigma }$ in $\mathcal{K}(G),$ as $\mathcal{K}(K_{m})$ is
compact. Consequently
\[
\{(g,K):K\in \mathcal{K}_{+}(\mu ),\mu (Kg)\geq
1/n\}=\dbigcup\nolimits_{m}\{(g,K):K_{m}\cap K\in \mathcal{K}_{+}(\mu ),\mu
(Kg)\geq 1/n\}
\]
\[
=\{(g,K):\mu (Kg)\geq 1/n\}\cap \dbigcup\nolimits_{m}\{(g,K):K_{m}\cap K\in
\mathcal{K}_{+}(\mu )\}\in \mathcal{F}_{\sigma }(G\times \mathcal{K}(G)).
\]
So
\[
G\backslash \mathcal{N}(\mu )=\mathrm{proj}_{G}\dbigcup\nolimits_{m,n}
\{(g,K):K_{m}\cap K\in \mathcal{K}_{+}(\mu ),\mu (Kg)\geq 1/n\}.
\]
But the vertical sections $\{g\}\times \dbigcup\nolimits_{m}\{K:K_{m}\cap
K\in \mathcal{K}_{+}(\mu ),\mu (Kg)\geq 1/n\}$ are $\sigma $-compact. So the
projection is Borel, by the Arsenin-Kunugui theorem ([Rog, Th. 1.4.3, Th.
5.9.1], cf. [Kec, Th. 18.18]). $\square $
\bigskip
\noindent \textbf{Second Proof. }Assume w.l.o.g. that $\mu \in \mathcal{P}(G)
$ so that also $\mu _{g}\in \mathcal{P}(G)$ for $g\in G.$ (Otherwise, since
$\mu $ is $\sigma $-finite (\S 2), we may replace $\mu $ by an equivalent
probability measure $\mu ^{\prime }$; then $\mu _{g}\bot \mu $ iff $\mu
_{g}^{\prime }\bot \mu ^{\prime }$.) Since $G$ is separable, we may choose a
(dense) sequence of continuous real-valued functions $\{f_{n}\}$ with
$||f_{n}||_{\infty }\leq 1$ so that, with $||.||_{TV}$ denoting the total
variation distance,
\[
||\mu -\mu _{g}||_{TV}=\sup_{n}\int f_{n}d(\mu -\mu _{g})\quad (g\in G);
\]
cf. [Con, App. C Cor. C.14], [Yos, I.3 Cor.]. Take $\{K_{m}\}_{m}$ an
increasing sequence of compact sets with $\mu $ concentrated on its
union; then $f_{nm},$ where
\[
f_{nm}(g):=(f_{n}\ast 1_{K_{m}^{-1}})(g^{-1}),
\]
is continuous, cf. [HewR, 20.16] or [Rud1, \S 1.1.5-1.1.6]. But
\[
f_{nm}(g)=\int_{K_{m}}f_{n}(yg^{-1})d\mu (y)\rightarrow \int f_{n}(x)d\mu
_{g}(x)\quad \quad \text{(as }m\rightarrow \infty ).
\]
So
\[
||\mu -\mu
_{g}||_{TV}=\sup_{n}\lim_{m}\int_{K_{m}}[f_{n}(y)-f_{n}(yg^{-1})]d\mu (y),
\]
and so $f(g):=||\mu -\mu _{g}||_{TV}$ ($g\in G)$ is Borel measurable. But,
cf. [Bog1, Problem 2.21], $\mathcal{N}(\mu )=f^{-1}\{2\},$ which is thus
Borel. $\square $
\section{Groups versus vector spaces}
Here we link our new results, in a group context, to classical
Cameron-Martin theory, in a topological vector space context. There is some
similarity here to material in [Hey1, \S 3.4] on (homomorphic) embeddings of
$\mathbb{Q}$ (rational embeddability) and of $\mathbb{R}$ (the more
exacting, continuous embeddability) in the space of probability measures;
for later developments see [Hey2], [HeyP]. The latter are related to
divisibility properties of groups (and of the convolution semigroups of
measures). Recall that a group $G$ is (infinitely, or $\mathbb{N}$-) \textit{
divisible} [HewR, A5] if for each $n\in \mathbb{N}$ every element $g\in G$
has an $n$-th root $h\in G$, i.e. with $h^{n}=g$ (for their structure theory
in the abelian case, see e.g. [HewR, A14], [Fuc], or [Kap]).
The refinement norm used to define the classical Cameron-Martin space within
a topological vector space $X$ depends ultimately on the embedding of the
continuous linear functionals, $X^{\ast },$ in $L^{2}(X,\gamma )$ and on the
Riesz Representation theorem for Hilbert spaces. In common with the
additive-subgroup literature of topological vector spaces (for the
corresponding Pontryagin-van Kampen duality theory, see e.g. Gel$^{\prime }$
fand-Vilenkin [GelV], [Bana], [Mor], and [Xia]), we consider the natural
group analogue here to be the embedding of continuous real-valued additive
maps on a metrizable abelian group $X$ into $L^{2}(X,\mu )$ for $\mu \in
\mathcal{P}(X),$ with mean (`averaging map')
\[
a_{\mu }:x^{\ast }\mapsto \mu (x^{\ast }):=\int_{X}x^{\ast }(x)d\mu
(x)\qquad (x^{\ast }\in X^{\ast }),
\]
with $X^{\ast }$ the continuous additive maps on $X.$ One may then, as in
the classical setting ([Bog1, 2.2]), define a covariance operator by
\[
R(x^{\ast })(y^{\ast }):=\int_{X}[x^{\ast }(x)-\mu (x^{\ast })][y^{\ast
}(x)-\mu (y^{\ast })]d\mu (x)\quad \quad (x^{\ast },y^{\ast }\in X^{\ast }).
\]
As there, for $h\in X$ define the \textit{Cameron-Martin group-norm}
\[
|h|_{H}:=\sup \{x^{\ast }(h):x^{\ast }\in X^{\ast },R(x^{\ast },x^{\ast
})\leq 1\},
\]
\[
H:=\{h\in X:|h|_{H}<\infty \}.
\]
It may now readily be checked that $H$ is a subgroup, and that $|h|_{H}$ is
a group-norm on $H$. (For any $h\in X,$ with $h\neq 1_{X},$ use a standard
extension theorem, e.g. as in [HewR, A.7], to extend the partial
homomorphism $h^{s}\mapsto s$ (for $s\in \mathbb{Z}$) to a full homomorphism
$x_{h}^{\ast },$ say of unit variance; then $|h|_{H}\geq x_{h}^{\ast }(h)>0,$
as $x_{h}^{\ast }$ is non-constant. This is an analogue of the Gelfand-Ra\u{\i}kov theorem on point separation by characters [HewR, 22.12], cf.
[Tar]. See [Bad, Lemma 1] for a variant Hahn-Banach extension theorem in the
present commutative group context.) It is not clear, however, whether the
resulting group is trivial for $\mu \in \mathcal{P}(X)$. In the classical
Gaussian $\gamma $ context, $|h|_{H}=\infty $ implies the mutual singularity
$\gamma _{h}\bot \gamma $ [Bog1, 2.4.5(i)].
For $h\in H$ and $n\in \mathbb{N},$ assuming w.l.o.g. that the supremum for
$|h|_{H}$ occurs with $x^{\ast }(h)>0,$
\begin{eqnarray*}
|nh|_{H} &=&\sup \{x^{\ast }(nh):x^{\ast }\in X^{\ast },R(x^{\ast },x^{\ast
})\leq 1\} \\
&=&\sup \{nx^{\ast }(h):x^{\ast }\in X^{\ast },R(x^{\ast },x^{\ast })\leq 1\}
\\
&=&n|h|_{H}.
\end{eqnarray*}
Thus the norm $|.|_{H}$ (which is subadditive) may be said to be $\mathbb{N}$\textit{-homogeneous}, as in [BinO2, \S 3.2], or \textit{sublinear} in the
sense of Berz (see below).
Suppose now that the group $X$ is (infinitely) divisible, so $x/n$ is
defined for $n\in \mathbb{N}$, as is also $qx$ for rational $q$. It follows
by a similar argument to that above that if $h\in H$ and $q>0,$ then
$|qh|_{H}<\infty ,$ and so $H$ is also divisible. We now briefly study
sublinear group-norms on a divisible abelian group, proving in particular in
Prop. 5.1 that $H$ is a topological vector space.
\bigskip
\textbf{Definition.} In an abelian ($\mathbb{N}$-) divisible group $G,$
since its group-norm $||.||$ is subadditive, we follow Berz [Ber] (cf.
[BinO3,5,6]) in calling it \textit{sublinear} if
\[
||ng||=n||g||\qquad (g\in G,n\in \mathbb{N})
\]
(so that $||g/n||=||g||/n$), or equivalently and more usefully
\[
||qg||=q||g||\qquad (g\in G,q\in \mathbb{Q}_{+}).
\]
\noindent \textbf{Remark.} The triangle inequality for the norm gives
$||g||\leq n||g/n||;$ the definition requires the reverse inequality.
\bigskip
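\noindent \textbf{Example }(by way of illustration)\textbf{.} On the divisible abelian group $(\mathbb{R},+)$ the group-norm $||g||:=|g|$ is sublinear. By contrast, the bounded group-norm $||g||^{\prime }:=\min \{|g|,1\}$ is subadditive but not sublinear, since $||n\cdot 1||^{\prime }=1<n=n||1||^{\prime }$ for $n\geq 2$: sublinearity genuinely strengthens subadditivity.
\bigskip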
\noindent \textbf{Proposition 5.1.} \textit{A divisible abelian topological
group complete under a sublinear group-norm is a topological vector space
under the action}
\begin{equation}
t\cdot g:=\lim_{q\rightarrow t}qg\qquad (t\in \mathbb{R}_{+},\text{ }q\in
\mathbb{Q}_{+},\text{ }g\in G). \tag{$\dag $}
\end{equation}
\textit{In particular, the group has the embeddability property defined by}
$t\mapsto t\cdot g,$ \textit{for }$t\in \mathbb{R}_{+}$.
\bigskip
\noindent \textbf{Proof. }Since
\[
||(q-q^{\prime })(g-g^{\prime })||=|q-q^{\prime }|\cdot ||g-g^{\prime
}||\qquad (q,q^{\prime }\in \mathbb{Q}_{+},\text{ }g,g^{\prime }\in G),
\]
and the group is norm-complete, the action ($\dag $) is well defined. It is
also jointly continuous, since, passing to appropriate limits over $\mathbb{Q}_{+}$,
\[
||(t-t^{\prime })(g-g^{\prime })||=|t-t^{\prime }|\cdot ||g-g^{\prime
}||\qquad (t,t^{\prime }\in \mathbb{R}_{+},\text{ }g,g^{\prime }\in G).
\]
The action extends to $\mathbb{R}$ by taking $-t\cdot g:=t\cdot (-g),$ and
converts the group into a vector space, as $(q+q^{\prime })g=qg+q^{\prime }g$
and $q(g+g^{\prime })=qg+qg^{\prime }$ may be taken to the limit through
$\mathbb{Q}$. Likewise the final statement follows, as
\[
||t\cdot g||=\lim_{q\rightarrow t}||qg||=t||g||.\qquad \square
\]
\bigskip
We verify that the classical boundedness theorem holds in the group context
for additive functions, and hence that such functions are linear (in the
sense of the action ($\dag $)).
\bigskip
\noindent \textbf{Proposition 5.2.} \textit{For an abelian divisible group }
$G$\textit{\ complete under a sublinear group-norm and} $x^{\ast
}:G\rightarrow \mathbb{R}$ \textit{additive, }$x^{\ast }$\textit{\ is
continuous iff}
\[
||x^{\ast }||=\sup \{|x^{\ast }(x)|/||x||:x\in G\backslash \{0\}\}<\infty ,
\]
\textit{and then}
\[
|x^{\ast }(x)|\leq ||x^{\ast }||\cdot ||x||.
\]
\bigskip
\noindent \textbf{Proof.} If $x^{\ast }$ is continuous, choose $\delta >0$
with $|x^{\ast }(x)|\leq 1$ for $||x||\leq \delta .$ For $x\neq 0$ and
rational $q\geq ||x||/\delta ,$ as $||x/q||=||x||/q\leq \delta ,$
\[
|x^{\ast }(x)|=q|x^{\ast }(x/q)|\leq q;
\]
then taking limits as $q\rightarrow ||x||/\delta $ through $\mathbb{Q}$
yields
\[
|x^{\ast }(x)|\leq ||x||/\delta .
\]
This holds also for $x=0.$ So $||x^{\ast }||\leq 1/\delta <\infty .$ In this
case, by definition, $|x^{\ast }(x)|\leq ||x^{\ast }||\cdot ||x||$ for
$x\neq 0,$ and this again holds also for $x=0.$
Conversely, if $||x^{\ast }||<\infty ,$ then again $|x^{\ast }(x)|\leq
||x^{\ast }||\cdot ||x||$, and so $x^{\ast }$ is continuous at $0$ and
hence everywhere. $\square $
\bigskip
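\noindent \textbf{Example }(by way of illustration)\textbf{.} For $G=(\mathbb{R},+)$ with the sublinear group-norm $||x||:=|x|$, an additive $x^{\ast }:\mathbb{R}\rightarrow \mathbb{R}$ is continuous iff $x^{\ast }(x)=cx$ for some $c\in \mathbb{R}$, and then $||x^{\ast }||=\sup_{x\neq 0}|cx|/|x|=|c|<\infty ,$ as in Prop. 5.2; a discontinuous additive map (a non-linear solution of the Cauchy functional equation, constructed from a Hamel basis) is unbounded on every ball, and so has $||x^{\ast }||=\infty .$
\bigskip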
\noindent \textbf{Corollary 5.1.}\textit{\ A continuous additive function }
$x^{\ast }$\textit{\ may be extended by taking}
\[
x^{\ast }(t\cdot g):=\lim_{q\rightarrow t}x^{\ast }(qg)\qquad (g\in G,\text{
}q\in \mathbb{Q}_{+},\text{ }t\in \mathbb{R}_{+}),
\]
\textit{which then makes it linear in the sense of the action} $t\cdot g$:
\[
x^{\ast }(t\cdot g)=t\cdot x^{\ast }(g)\qquad (g\in G,t\in \mathbb{R}_{+}).
\]
\noindent \textbf{Proof. }Convergence of $x^{\ast }(qg)$ as
$q\rightarrow t$ follows from the finiteness of $||x^{\ast }||$ and
\[
|x^{\ast }(qg)-x^{\ast }(q^{\prime }g)|=|x^{\ast }((q-q^{\prime })g)|\leq
||x^{\ast }||\cdot |q-q^{\prime }|\cdot ||g||.
\]
The final claim follows, taking limits through $\mathbb{Q}_{+}$:
\[
x^{\ast }(tg)=\lim_{q\rightarrow t}x^{\ast }(qg)=\lim_{q\rightarrow t}q\cdot
x^{\ast }(g)=t\cdot x^{\ast }(g).\qquad \square
\]
\bigskip
Returning to the context of the Cameron-Martin group-norm, notwithstanding
the issue of possible triviality of $H$, one may define here as in the
classical context the set $X_{\mu }^{\ast }\subseteq L^{2}(\mu ),$ by
passing to the closed span in $L^{2}(\mu )$ of the centred (mean-zero) image
$\{x^{\ast }-\mu (x^{\ast }):x^{\ast }\in X^{\ast }\}$ of $X^{\ast }$. Then
one may extend the domain of $R$ above to the subspace $X_{\mu }^{\ast }$ of
$L^{2}(\mu ),$ so that $R(f)$ for $f\in X_{\mu }^{\ast }$ is given by
\[
R(f)(y^{\ast }):=\int f(x)[y^{\ast }(x)-\mu (y^{\ast })]d\mu (x)\qquad
(y^{\ast }\in X^{\ast }).
\]
(For $f=x^{\ast }\in X^{\ast },$ this coincides with the previous
definition, since $\mu (x^{\ast })\int_{X}[y^{\ast }(x)-\mu (y^{\ast })]d\mu
(x)=0.$) Then $h\in H$ if $h=R(g)$ for some $g=\hat{h}\in X_{\mu }^{\ast }$;
then $\delta _{h}(y^{\ast })=y^{\ast }(h)=R(g)(y^{\ast }),$ for $y^{\ast
}=z^{\ast }-\mu (z^{\ast })$ with $z^{\ast }\in X^{\ast }$, and so taking
$y^{\ast }\rightarrow g$
\[
|h|_{H}=||g||_{L^{2}(\mu )}.
\]
Conversely, for fixed $h\in H,$ the map $f\mapsto f(h)$, for $f\in X_{\mu
}^{\ast }$ (including $f=x^{\ast }-\mu (x^{\ast })$ for $x^{\ast }\in
X^{\ast }$) is represented by
\[
\langle f,\hat{h}\rangle _{L^{2}(\mu )},
\]
by the Riesz Representation Theorem. So $h=R(\hat{h}).$
For $h,k\in H,$ define their inner product by referring to the elements
$\hat{h},\hat{k}$ such that $h=R(\hat{h}),$ $k=R(\hat{k}),$ passing in $R$
to the limit, and putting
\[
(h,k)_{H}:=\int_{X}\hat{h}(x)\hat{k}(x)d\mu (x).
\]
Then $H$ inherits an inner-product structure by
\[
|h|_{H}^{2}=||\hat{h}||_{L^{2}(\mu )}^{2}=\int_{X}(\hat{h}(x))^{2}d\mu
(x)=(h,h)_{H(\mu )}.
\]
In the vector-space Gaussian context, for $h\in H$ the density of $\mu _{h}$
w.r.t. $\mu $ is given by $(CM)$ above in \S 1.
For Gaussian measures on (locally compact) groups $G$, see \S 9.12.
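\bigskip
\noindent \textbf{Example }(classical illustration)\textbf{.} For the standard Gaussian measure $\gamma $ on $X=\mathbb{R}^{n}$, each $x^{\ast }(x)=\langle a,x\rangle $ has $R(x^{\ast },x^{\ast })=||a||_{2}^{2},$ so $|h|_{H}=\sup \{\langle a,h\rangle :||a||_{2}\leq 1\}=||h||_{2}$ and $H=\mathbb{R}^{n}.$ For Wiener measure on $C[0,1]$, by contrast, $H$ is the strictly smaller classical Cameron-Martin space of absolutely continuous $h$ with $h(0)=0$ and $\int_{0}^{1}\dot{h}(t)^{2}dt<\infty .$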
\section{Reference measures with selective subcontinuity}
We briefly recall the construction of a measure due to Solecki, as this
exemplifies a property at the heart of the construction of a Steinhaus
triple in \S 7. For this it will be helpful to review from Th. 3.4 the
notation $\mu _{-}^{\mathbf{t}}$ for $\mathbf{t}=\{t_{n}\}$ a null
sequence, i.e. with $t_{n}\rightarrow 1_{G}:$
\[
\mu _{-}^{\mathbf{t}}(K):=\lim \inf\nolimits_{n\rightarrow \infty }\mu
(Kt_{n})\qquad (K\in \mathcal{K}(G)).
\]
When $\mu (K)=\mu _{-}^{\mathbf{t}}(K)$ we refer to \textit{selective
subcontinuity} of $\mu $ `along' $\mathbf{t}$ (cf. [BinO7]). Thus, for $G$
not locally compact and $K\in \mathcal{K}_{+}(\mu )$, by Cor. 3.2 there will
be $\mathbf{t}$ with $\mu (Kt_{n})\rightarrow 0,$ but there may, and in
certain specified circumstances necessarily will, also exist $\mathbf{t}$
with $\mu _{-}^{\mathbf{t}}(K)>0$ (as when the Subcontinuity Theorem, Th.
6.1 below, holds).
Recall that a (Polish) group $G$ is \textit{amenable at} $1$ ([Sol]; cf.
[BinO7, \S 2] for the origin of this term) if, given any sequence of
measures $\nu _{n}\in \mathcal{P}(G)$ with $1_{G}\in $ \textrm{supp}$(\nu
_{n}),$ there are $\sigma $ and $\sigma _{n}$ in $\mathcal{P}(G)$ with
$\sigma _{n}\ll \nu _{n}$ for each $n\in \mathbb{N}$, and
\[
\sigma _{n}\ast \sigma (K)\rightarrow \sigma (K)\qquad (K\in \mathcal{K}(G)).
\]
Here $\ast $ denotes convolution. (Abelian Polish groups all have this
property [Sol, Th. 3.2].) With $\delta _{g}$ the Dirac measure at $g$ and
$\mathbf{t}$ a non-trivial null sequence, taking
\[
\nu _{n}:=2^{n-1}\sum\nolimits_{m\geq n}2^{-m}\delta _{t_{m}^{-1}}\in
\mathcal{P}(G),
\]
we denote by $\sigma _{n}(\mathbf{t})$ and $\sigma (\mathbf{t})$ the
measures whose existence the definition above asserts. We term $\sigma (\mathbf{t})$ a \textit{Solecki reference measure}. Below we place further
restrictions on the rate of convergence of $\mathbf{t}$ and refer to a
symmetrized version of $\nu _{n}.$
In the above setting, $\sigma (K)>0$ implies that for some sequence
$m(n):=m_{K}(n)$,
\[
\sigma (Kt_{m_{K}(n)})\rightarrow \sigma (K).
\]
We repeat here a result from the companion paper [BinO7], as its proof
motivates the definition which follows.
\bigskip
\noindent \textbf{Theorem 6.1 }(\textbf{Subcontinuity Theorem}, after
Solecki [Sol, Th. 1(ii)]).\textit{\ For }$G\ $\textit{amenable at }$1$\textit{, }$0<\theta <1,$ \textit{and }$\mathbf{t}$ \textit{a null sequence,
there is }$\sigma =\sigma (\mathbf{t})\in \mathcal{P}(G)$ \textit{such that
for each }$g\in G,K\in \mathcal{K}(G)$ \textit{with }$\sigma (gK)>0$\textit{\ there is a subsequence }$\mathbf{s}=\mathbf{s}(g,K):=\{t_{m(n)}\}$ \textit{with}
\[
\sigma (gKt_{m(n)})>\theta \sigma (gK)\qquad \text{(}n\in \mathbb{N}\text{)}
\text{, so}\qquad \sigma _{-}^{\mathbf{s}}(gK)>0.
\]
\textit{That is, }$\sigma $\textit{\ is subcontinuous along }$\mathbf{s}$\textit{\ on }$gK.$ \textit{In particular}
\[
\sigma (K)=\sigma _{\_}^{\mathbf{t}}(K).
\]
\noindent \textbf{Proof.} For $\mathbf{t}=\{t_{n}\}$ null, put $\nu
_{n}:=2^{n-1}\sum\nolimits_{m\geq n}2^{-m}\delta _{t_{m}^{-1}}\in \mathcal{P}(G);$ then $1_{G}\in $ \textrm{supp}$(\nu _{n})\supseteq
\{t_{m}^{-1}:m>n\}.$ By definition of amenability at $1$, in $\mathcal{P}(G)$
there are $\sigma $ and $\sigma _{n}\ll \nu _{n},$ with $\sigma _{n}\ast
\sigma (K)\rightarrow \sigma (K)$ for all $K\in \mathcal{K}(G).$
Fix $K\in \mathcal{K}(G)$ and $g$ with $\sigma (gK)>0.$ As $gK$ is compact,
$\sigma _{n}\ast \sigma (gK)\rightarrow \sigma (gK);$ then w.l.o.g.
\begin{equation}
\nu _{n}\ast \sigma (gK)>\theta \sigma (gK)\qquad (n\in \mathbb{N}).
\tag{$\ddagger $}
\end{equation}
For $m,n\in \mathbb{N}$ choose $\alpha _{mn}\geq 0$ with
$\sum\nolimits_{m\geq n}\alpha _{mn}=1$ $(n\in \mathbb{N})$ and $\nu
_{n}:=\sum\nolimits_{m\geq n}\alpha _{mn}\delta _{t_{m}^{-1}}.$ Then for
each $n$ there is $m=m(n)\geq n$ with
\[
\sigma (gKt_{m})>\theta \sigma (gK);
\]
otherwise, summing the reverse inequalities over all $m\geq n$ contradicts
($\ddagger $). So $\lim_{n}\sigma (gKt_{m(n)})\geq \theta \sigma (gK):$
$\sigma $ is subcontinuous along $\mathbf{s}:=\{t_{m(n)}\}$ on $gK.$
For the last assertion, take $g=1_{G}$ and recall that $\sigma (K)\geq
\sigma _{-}^{\mathbf{t}}(K),$ by upper semi-continuity of $t\mapsto \sigma
(Kt).$ $\square $
\bigskip
A corollary to this is a `non-Haar-null' version of the Steinhaus-Weil
interior point theorem:
\bigskip
\noindent \textbf{Theorem SW} (\textbf{Steinhaus-Weil Theorem beyond Haar}
[Sol, Th. 1(ii)]).\textit{\ For }$G\ $\textit{amenable at }$1:$\textit{\ if}
$E\in \mathcal{U}(G)$\textit{\ is not left Haar null, then }$1_{G}\in \mathrm{int}(E^{-1}E).$
\bigskip
\noindent \textbf{Proof.} Suppose otherwise; then $1_{G}\notin \mathrm{int}(E^{-1}E)$ and we may choose $t_{n}\in B_{1/n}\backslash E^{-1}E.$ As
$t_{n}\rightarrow 1_{G},$ choose $\sigma =\sigma (\mathbf{t})$ as in the
preceding theorem. Since $E$ is not left Haar null, $\sigma (gE)>0$ for
some $g$. For this $g,$ choose compact $K\subseteq E$ with $\sigma (gK)>0.$
Then by the Subcontinuity Theorem (Th. 6.1) and by Prop. 3.1 for $\Delta
=\sigma _{-}^{\mathbf{t}}(K)/4,$ there is $\delta >0$ such that
\[
\emptyset \neq B_{\delta }^{\Delta }\subseteq K^{-1}g^{-1}gK=K^{-1}K;
\]
moreover, as in Prop. 3.1, $t_{n}\in K^{-1}K\subseteq E^{-1}E$ for
infinitely many $n$, which contradicts the choice of $\mathbf{t}.$ So
$1_{G}\in \mathrm{int}(E^{-1}E).$ $\square $
\bigskip
\noindent \textbf{Definition} ([BinO7]). Say that a null sequence $\mathbf{t}
$ is \textit{regular} if $\mathbf{t}$ is non-trivial, $\{||t_{k}||\}_{k}$ is
non-increasing, and
\[
||t_{k}||\leq r(k):=1\left/ [2^{k}(k+1)]\right. \qquad (k\in \mathbb{N}).
\]
For regular $\mathbf{t}$, put
\[
\tilde{\nu}_{k}=\tilde{\nu}_{k}(\mathbf{t}):=2^{k-1}\sum\nolimits_{m\geq
k}2^{-m}(\delta _{t_{m}^{-1}}+\delta _{t_{m}})=\frac{1}{2}(\delta
_{t_{k}^{-1}}+\delta _{t_{k}})+\frac{1}{4}(\delta _{t_{k+1}^{-1}}+\delta _{t_{k+1}})+\cdots .
\]
Then $\tilde{\nu}_{k}(B_{r(k)})=1$ and $\tilde{\nu}_{k}$ is \textit{symmetric}: $\tilde{\nu}_{k}(K^{-1})=\tilde{\nu}_{k}(K),$ as $t\in K$ iff $t^{-1}\in
K^{-1}.$ In the definition below, which is motivated by the proof of Theorem
6.1, we will view the measure $\tilde{\nu}_{k}$ as just another version of
$\nu _{k}$ above (by merging $\mathbf{t}^{-1}$ with $\mathbf{t}$ by
alternation of terms).
\bigskip
\noindent \textbf{Definition} ([BinO7]). Say that a (Polish) group $G$ is
\textit{strongly amenable at 1} if $G$ is amenable at 1, and for each
\textit{regular} $\mathbf{t}$ the Solecki measure $\sigma (\mathbf{t})$
corresponding to $\nu _{k}(\mathbf{t})$ has associated measures $\sigma _{k}(\mathbf{t})\ll \nu _{k}(\mathbf{t})$ with the following \textit{concentration property}. Writing, for $k\in \mathbb{N}$,
\[
\sigma _{k}:=\sum\nolimits_{m\geq k}a_{km}\delta _{t_{m}^{-1}},
\]
for some non-negative sequences $\mathbf{a}_{k}:=\{a_{k,k},a_{k,k+1},a_{k,k+2},\dots \}$ of unit $\ell _{1}$-norm, there is an
index $j$ and $\alpha >0$ with
\[
a_{k,k+j}\geq \alpha >0\qquad \text{for all large }k.
\]
A refinement of Solecki's proof of the Subcontinuity Theorem (Th. 6.1 above)
yields the following two results, for the proof of which we refer the reader
to the companion paper [BinO7].
\bigskip
\noindent \textbf{Theorem 6.2 }(\textbf{Strong amenability at 1}, [BinO7,
Th. 4] after [Sol2, Prop. 3.3(i)]). \textit{Any abelian Polish group }$G$\textit{\ is strongly amenable at 1.}
\bigskip
\noindent \textbf{Theorem 6.1}$_{\text{\textbf{S}}}$ \textbf{(Strong
Subcontinuity Theorem). }\textit{For }$G$\textit{\ a (Polish) group that is
strongly amenable at 1, if }$\mathbf{t}$\textit{\ is regular and }$\sigma
=\sigma (\mathbf{t})$\textit{\ is a Solecki measure, then for }$K\in
\mathcal{K}_{+}(\sigma )$
\[
\sigma (K)=\lim_{n}\sigma (Kt_{n})=\sigma _{-}^{\mathbf{t}}(K).
\]
\noindent \textbf{Remark. }The reference measure $\sigma (\mathbf{t})$ in
the last theorem may in fact be selected symmetric [BinO7], in which case
$Q_{R}(\sigma (\mathbf{t}))$ is a subgroup.
\bigskip
We note an immediate corollary, useful in \S 8 below.
\bigskip
\noindent \textbf{Corollary 6.1}. \textit{For }$G,\mathbf{t}$\textit{\ and }
$\sigma $ \textit{as in Th. 6.1}$_{\text{S}}$ \textit{above, and }$K,H\in
\mathcal{K}(G),\delta >0$\textit{: if }$0<\Delta <\sigma (K)$\textit{\ and }
$0<D<\sigma (H),$\textit{\ then there is }$n$\textit{\ with}
\[
B_{\delta }^{K\Delta }\cap B_{\delta }^{HD}\supseteq \{t_{m}:m\geq n\}.
\]
\bigskip
\noindent \textbf{Proof.} Take $\varepsilon :=\min \{\sigma (K)-\Delta
,\sigma (H)-D\}>0.$ As $K,H\in \mathcal{K}_{+}(\sigma ),$ there is $n$ such
that $||t_{m}||<\delta $ for $m\geq n$ and
\[
\sigma (Kt_{m})\geq \sigma (K)-\varepsilon \geq \Delta ,\qquad \sigma
(Ht_{m})\geq \sigma (H)-\varepsilon \geq D\qquad (m\geq n).\qquad \square
\]
\section{The Steinhaus support of a measure}
In this section we construct (one might say via a `disaggregation') the
Steinhaus support $H(\mu )$ of a probability measure $\mu $ defined on a
Polish group $G$ (see Th. 7.1); this is possible provided the measure has
`sufficient subcontinuity' (defined below) -- sufficient to allow a
relative-interior Steinhaus property, relative to some embedded `subspace'.
In the next section we apply the construction to a Solecki measure $\sigma (\mathbf{t})$ for a regular null sequence $\mathbf{t}$ as in \S 6.
\bigskip
\textbf{Definition.} Say that a probability measure has \textit{sufficient
subcontinuity}, written $\mu \in \mathcal{P}_{\text{suf}}(G),$ if for all
$K\in \mathcal{K}_{+}(\mu )$ and $\delta >0$ there is $\Delta (K,\delta )\geq
0$ so small that for $\Delta (K,\delta )<\Delta <\mu (K)$
\[
B_{\delta }^{K,\Delta }=B_{\delta }^{K,\Delta }(\mu )=\{s\in B_{\delta }:\mu
(Ks)>\Delta \}
\]
is infinite. Above, if $\Delta (K,\delta )\equiv 0,$ say that $\mu $ is of
\textit{Solecki type}; these will be considered in \S 8.
\bigskip
Lemma 7.1 below asserts that, for $G$ amenable at $1,$ a Solecki measure
$\mu =\sigma (\mathbf{t})$ has this property. Further motivation for working
with this assumption is provided by Th. 3.6: for $(H,G,\mu )$ a Steinhaus
triple and $K\in \mathcal{K}_{+}(\mu ),$ the set $\mathcal{O}(K):=\{s\in
H:\mu (Ks)>0\}$ is open in $H$ and $\{s\in H:\mu (Ks)=\mu _{-}^{H}(Ks)>0\}$
is dense in $\mathcal{O}(K)$.
The goal here is to create a new topology, if not on $G$ then on a dense
subset of $G,$ in which the sets
\[
B_{\delta }^{K,\Delta }:=\{z\in B_{\delta }:\mu (Kz)>\Delta \}
\]
(with $\mu $ understood from context) shall be open sub-base members for
selected $K\in \mathcal{K}_{+}(\mu )$. This is tantamount to requiring that
the corresponding maps $z\mapsto \mu (Kz)$ be continuous on some subset of
$G;$ cf. Th. 3.2$^{\prime }$, also by way of justification.
\bigskip
Whenever we consider sets $B_{\delta }^{K,\Delta }$ for $K\in \mathcal{K}_{+}(\mu )$ and $\delta >0$ we implicitly assume that $\Delta \geq \Delta
(K,\delta ).$
Notice the monotonicities:
\[
\Delta \leq D\Longrightarrow B_{\delta }^{K,D}\subseteq B_{\delta
}^{K,\Delta },\qquad \eta \leq \varepsilon \Longrightarrow B_{\eta
}^{K,\Delta }\subseteq B_{\varepsilon }^{K,\Delta }.
\]
\bigskip
The sequence of Lemmas 7.1-7.4 below justifies the introduction of a new
topology with sub-basic sets of the form $gB_{\delta }^{K,\Delta },$ but
\textit{only} on those points of $G$ that can be \textit{covered} by these
sets: the detailed statement is in Theorem 7.1 below. The proof strategy
demands both a countable iteration -- an inductive generation of a family of
sets $B_{\delta }^{K,\Delta }$ -- and then a countable subgroup of
translators $g.$ In the subsequent section, we identify which are the points
that can be covered.
For $\mu \in \mathcal{P}(G),\mathcal{H\subseteq K}_{+}(\mu ),$ we put
\[
\mathcal{B}_{1_{G}}(\mu ,\mathcal{H}):=\{B_{\delta }^{K,\Delta }(\mu ):K\in
\mathcal{H};\text{ }\delta ,\Delta \in \mathbb{Q}_{+};\mathit{\ }0<\Delta
<\mu (K)\};
\]
this is to be a neighbourhood base at $1_{G}.$ For $\mathcal{H=K}_{+}(\mu )$
we abbreviate this to $\mathcal{B}_{1_{G}}(\mu )$ or even to $\mathcal{B}_{1_{G}}$, when $\mu $ is understood.
\bigskip
\noindent \textbf{Lemma 7.1. }\textit{For }$\mu \in \mathcal{P}(G)$\textit{,
}$\mathbf{t}$ \textit{null and non-trivial, and arbitrary }$\delta
>0$\textit{: if }$0<\Delta <\mu _{-}^{\mathbf{t}}(K),$\textit{\ then }
$B_{\delta }^{K,\Delta }(\mu )\neq \{1_{G}\}.$\textit{\ In particular, for }
$G$\textit{\ amenable at }$1$ \textit{and} $\mu =\sigma (\mathbf{t})$\textit{: if} $0<\Delta <\sigma (K),$\textit{\ then} $B_{\delta }^{K,\Delta }(\sigma
)$ \textit{is infinite.}
\bigskip
\noindent \textbf{Proof. }Since $\mathbf{t}$ is null and non-trivial, for
all large enough $n$ both $t_{n}\in B_{\delta }$ and also $\mu
(Kt_{n})>\Delta .$ For $\mu =\sigma (\mathbf{t})$ and $0<\Delta <\sigma (K),$
pick $0<\theta <1$ with
\[
\theta \sigma (K)=\Delta .
\]
Then for some, necessarily non-trivial, subsequence $\mathbf{s}:=\{s_{n}\}$
of $\mathbf{t}$
\[
\sigma (Ks_{n})>\theta \sigma (K)=\Delta .
\]
So $B_{\delta }^{K,\Delta }=\{s\in B_{\delta }:\sigma (Ks)>\Delta
\}$ is infinite. $\square $
\bigskip
\noindent \textbf{Lemma 7.2. }\textit{For }$\mu \in \mathcal{P}_{\text{suf}}(G)$\textit{\ and }$K\in \mathcal{K}_{+}(\mu ):$\textit{\ if }$w\in
B_{\delta }^{K,\Delta },$\textit{\ then for }$H=Kw$\textit{\ and some }$\varepsilon >0$
\[
\{w\}\neq wB_{\varepsilon }^{H,\Delta }\subseteq B_{\delta }^{K,\Delta }.
\]
\textit{In particular:\newline
}(i)\textit{\ if }$1_{G}\in gB$ \textit{for some }$B\in \mathcal{B}_{1_{G}},$\textit{\ then there is} $B^{\prime }\in \mathcal{B}_{1_{G}}$ \textit{with }$1_{G}\in B^{\prime }\subseteq gB;$\newline
(ii) \textit{for }$G$\textit{\ amenable at }$1$\textit{: if} $\mu =\sigma
(\mathbf{t})$\textit{\ with }$\mathbf{t}$\textit{\ null, then }
$B_{\varepsilon }^{H,\Delta }$ \textit{may be selected infinite}.
\bigskip
\noindent \textbf{Proof.} As $w\in B_{\delta }$ there is $0<\varepsilon
<\delta $ with $wB_{\varepsilon }\subset B_{\delta }.$ As $w\in B_{\delta
}^{K,\Delta },$ $\mu (H)=\mu (Kw)>\Delta ,$ so, passing to a smaller
$\varepsilon $ if necessary, there is $\Delta (H,\varepsilon )>0$ so that
$B_{\varepsilon }^{H,\Delta ^{\prime }}$ is infinite for $\Delta ^{\prime
}\geq \max \{\Delta (H,\varepsilon ),\Delta \}>\Delta (K,\delta )$. Then
\begin{eqnarray*}
w &\in &wB_{\varepsilon }^{H,\Delta ^{\prime }}=w\{s\in B_{\varepsilon }:\mu
(Hs)>\Delta ^{\prime }\}=\{ws\in wB_{\varepsilon }:\mu (Kws)>\Delta ^{\prime
}\} \\
&\subseteq &\{x\in B_{\delta }:\mu (Kx)>\Delta \}=B_{\delta }^{K,\Delta }.
\end{eqnarray*}
For the last part, suppose $1_{G}\in gB$ with $B=B_{\delta }^{K,\Delta }\in
\mathcal{B}_{1_{G}};$ then $w\in B$ for $w=g^{-1}.$ Applying the first part,
take $B^{\prime }:=B_{\varepsilon }^{H,\Delta }\in \mathcal{B}_{1_{G}}$ for
$H=Kw$ and the $\varepsilon >0$ above; then,
\[
w\in wB^{\prime }=B_{\varepsilon }^{H,\Delta }\subseteq B_{\delta
}^{K,\Delta }=B:\qquad 1_{G}\in B^{\prime }\subseteq gB.\qquad \square
\]
\bigskip
\noindent \textbf{Corollary 7.1.} \textit{If }$x\in yB\cap zC$\textit{\ for }$x,y,z\in G$ \textit{and some }$B,C\in \mathcal{B}_{1_{G}},$ \textit{then }
$x\in x(B^{\prime }\cap C^{\prime })\subseteq yB\cap zC$\textit{\ for some}
$B^{\prime },C^{\prime }\in \mathcal{B}_{1_{G}}.$
\bigskip
\noindent \textbf{Proof.} As $1_{G}\in x^{-1}yB$ and $1_{G}\in x^{-1}zC,$
there are $B^{\prime },C^{\prime }\in \mathcal{B}_{1_{G}}$ with $1_{G}\in
B^{\prime }\subseteq x^{-1}yB$ and $1_{G}\in C^{\prime }\subseteq x^{-1}zC.$
Then $x\in xB^{\prime }\cap xC^{\prime }\subseteq yB\cap zC.$ $\square $
\bigskip
We now improve on Lemma 7.2 by including some technicalities, whose purpose
is to introduce a \textit{separable} topology on a subspace of $G$ refining
that induced by $\tau _{G}$. In view of the monotonicities observed above,
we may restrict attention to $\delta ,\Delta \in \mathbb{Q}_{+}:=\mathbb{Q}\cap (0,\infty ).$
\bigskip
\noindent \textbf{Lemma 7.3. }\textit{For }$\mu \in \mathcal{P}_{\text{suf}}(G)$\textit{\ and countable }$\mathcal{H\subseteq K}_{+}(\mu )$\textit{,
there is a countable }$D=D(\mathcal{H})\subseteq G$ \textit{accumulating at }$1_{G}$ \textit{such that: if }$w\in B_{\delta }^{K,\Delta }$\textit{\ with }$K\in \mathcal{H}$, $\delta ,\Delta \in \mathbb{Q}_{+}$\textit{\ and }$\Delta
<\mu (K),$ \textit{then for some }$g\in D$\textit{\ with }$\mu (Kg)>\Delta $
\textit{and some }$\varepsilon \in \mathbb{Q}_{+},$
\[
w\in gB_{\varepsilon }^{Kg,\Delta }\subseteq B_{\delta }^{K,\Delta }.
\]
\noindent \textbf{Proof. }As $G$ is separable, we may choose $\{\bar{g}_{m}\}_{m\in \mathbb{N}}=\{\bar{g}_{m}(B_{\delta }^{K,\Delta })\}_{m\in
\mathbb{N}}\subseteq B_{\delta }^{K,\Delta }$ dense in $B_{\delta
}^{K,\Delta }$, an infinite set, by Lemma 7.1. Take
\[
D=D(\mathcal{H}):=\{\bar{g}_{m}(B_{\delta }^{K,\Delta }):K\in \mathcal{H},\delta ,\Delta \in \mathbb{Q}_{+},\Delta <\mu (K)\},
\]
which is countable. Since $B_{\delta }^{K,\Delta }\subseteq B_{\delta },$ $D$
accumulates at $1_{G}.$ We claim that $D$ above satisfies the conclusions
of the Lemma.
Fix $w\in B_{\delta }^{K,\Delta },$ with $K,\Delta ,\delta $ as in the
hypotheses. Choose $\varepsilon \in \mathbb{Q}_{+}$ with
\[
wB_{3\varepsilon }\subseteq B_{\delta }.
\]
Choose $\bar{g}_{m}=\bar{g}_{m}(B_{\delta }^{K,\Delta })$ with $||\bar{g}_{m}^{-1}w||<\varepsilon ,$ possible by construction of $\{\bar{g}_{m}(B_{\delta }^{K,\Delta })\}$. Put $z_{m}:=\bar{g}_{m}^{-1}w;$ then $w=\bar{g}_{m}z_{m},$ $z_{m}\in B_{\varepsilon }$ and $\bar{g}_{m}\in
wB_{\varepsilon },$ so $w\in \bar{g}_{m}z_{m}B_{\varepsilon }\subseteq \bar{g}_{m}B_{2\varepsilon }\subseteq wB_{3\varepsilon }\subseteq B_{\delta }.$ By
choice of $\{\bar{g}_{m}\}_{m\in \mathbb{N}}$, $\mu (K\bar{g}_{m})>\Delta ,$
and furthermore
\begin{eqnarray*}
w &\in &\bar{g}_{m}z_{m}\{s\in B_{\varepsilon }:\mu (K\bar{g}_{m}z_{m}s)>\Delta \}\subseteq \bar{g}_{m}\{t\in B_{2\varepsilon }:\mu (K\bar{g}_{m}t)>\Delta \} \\
&=&\bar{g}_{m}B_{2\varepsilon }^{K\bar{g}_{m},\Delta }\subseteq \{x\in
B_{\delta }:\mu (Kx)>\Delta \}=B_{\delta }^{K,\Delta },
\end{eqnarray*}
as $\bar{g}_{m}B_{2\varepsilon }\subseteq B_{\delta }.$ $\square $
\bigskip
In Lemma 7.3 above $Kg$ need not belong to $\mathcal{H}$. Lemma 7.4 below
asserts that Lemma 7.3 holds on a countable family $\mathcal{H}$ of compact
sets that is closed under the appropriate translations.
\bigskip
\noindent \textbf{Lemma 7.4.}\textit{\ For }$\mu \in \mathcal{P}_{\text{suf}}(G),$ \textit{there are a countable} $\mathcal{H\subseteq K}_{+}(\mu )$
\textit{and a countable set} $D=D(\mathcal{H})\subseteq G$ \textit{dense in }$G$ \textit{such that: if }$w\in B_{\delta }^{K,\Delta }$\textit{\ with }$K\in \mathcal{H}$, $\delta ,\Delta \in \mathbb{Q}_{+}$\textit{\ and }$0<\Delta
<\mu (K),$ \textit{then for some }$g\in D$\textit{\ with }$\mu (Kg)>\Delta $
\textit{with }$Kg\in \mathcal{H}$ \textit{and some }$\varepsilon \in \mathbb{Q}_{+},$
\[
w\in gB_{\varepsilon }^{Kg,\Delta }\subseteq B_{\delta }^{K,\Delta }.
\]
\noindent \textbf{Proof. }Suppose $\mu $ is concentrated on
$\dbigcup\nolimits_{n}K_{n}$, with the $K_{n}$ compact and $\mu (K_{n})>0$.
Taking $\mathcal{H}_{0}$ to comprise the basic compacts $K_{n}\cap g_{m}\bar{B}_{\delta }$ with $\{g_{m}\}$ dense in $G$ and $\delta \in \mathbb{Q}_{+},$
proceed by induction:
\[
\mathcal{H}_{n+1}:=\{Kg:K\in \mathcal{H}_{n},g\in D(\mathcal{H}_{n}),\delta
,\Delta \in \mathbb{Q}_{+},\mathit{\ }0<\Delta <\mu (K)\},
\]
\[
\mathcal{H}:=\dbigcup\nolimits_{n}\mathcal{H}_{n},\qquad
D:=\dbigcup\nolimits_{n}D(\mathcal{H}_{n}).\qquad \square
\]
\noindent \textbf{Theorem 7.1. }\textit{For} $\mu \in \mathcal{P}_{\text{suf}}(G)$\textit{\ there are a countable} $\mathcal{H\subseteq K}_{+}(\mu )$
\textit{and a countable set} $\Gamma =\Gamma (\mathcal{H})\subseteq G$
\textit{dense in }$G$ \textit{such that, taking}
\[
\mathcal{B}_{\mathcal{H}}(\mu )=\{B_{\delta }^{K,\Delta }(\mu )\in \mathcal{B}_{1_{G}}:K\in \mathcal{H};\delta ,\Delta \in \mathbb{Q}_{+};\mathit{\ }0<\Delta <\mu (K)\},
\]
\[
\mathcal{B}(\mu )=\mathcal{B}_{\Gamma }(\mu ):=\Gamma \cdot \mathcal{B}_{\mathcal{H}}(\mu )=\{\gamma B:\gamma \in \Gamma ,B\in \mathcal{B}_{\mathcal{H}}(\mu )\}
\]
\textit{is a sub-base for a second-countable topology on the subset}
\[
H(\mu ):=\dbigcup \mathcal{B}_{\Gamma }(\mu )=\dbigcup \{\gamma B:B\in
\mathcal{B}_{\mathcal{H}}(\mu ),\gamma \in \Gamma \}.
\]
\noindent \textbf{Proof. }Take a countable subgroup $\Gamma $ in $G$, which
is dense in $G$ under $\mathcal{\tau }_{G}$ and contains $D(\mathcal{H})$,
as in Lemma 7.4. Consider $w\in \gamma B_{\delta }^{K,\Delta }$ with $\gamma
\in \Gamma ,$ $K\in \mathcal{H},\delta ,\Delta \in \mathbb{Q}_{+},\mathit{\ }\Delta <\mu (K).$ Then for some $g\in D(\mathcal{H})\cap B_{\delta
}^{K,\Delta }$ and $\varepsilon >0$
\[
gB_{\varepsilon }^{Kg,\Delta }\subseteq B_{\delta }^{K,\Delta }.
\]
So both
\[
w\in \gamma gB_{\varepsilon }^{Kg,\Delta }\subseteq \gamma B_{\delta
}^{K,\Delta },
\]
and $\gamma g\in \Gamma ,$ the latter as $g\in D(\mathcal{H})\subseteq
\Gamma .$ So, by Corollary 7.1 (of Lemma 7.2), the family $\mathcal{B}(\mu )$ forms a sub-base for a topology on the set of points
\[
\dbigcup \{\gamma B:B\in \mathcal{B}(\mu ),\gamma \in \Gamma \}.\qquad \square
\]
\noindent \textbf{Remark. }The same proof shows that one may drop
countability in the conditions and second-countability in the conclusions.
\bigskip
\noindent \textbf{Definition. }We term the second-countable topology of the
preceding theorem the $\mu $\textit{-topology.}
\bigskip
In the next result we take $\mu =\sigma (\mathbf{t}).$ As $1_{G}\in
B_{\delta }^{K,\Delta }\subseteq B_{\delta },$ the $\sigma (\mathbf{t})$-topology evidently refines the \textit{original topology} $\tau _{G}$ of
$G.$ The finer topology could be discrete; in cases of interest, however,
this will not happen:
\bigskip
\noindent \textbf{Proposition 7.1 (Refinement). }\textit{For }$G$\textit{\
amenable at }$1$\textit{,\ }$\mathbf{t}$ \textit{null and
non-trivial, the open sets }$B_{\delta }^{K,\Delta }$ \textit{of the }$\sigma (\mathbf{t})$\textit{-topology are infinite and refine the topology
induced by }$\tau _{G}$ \textit{on }$H(\sigma (\mathbf{t})).$
\bigskip
\noindent \textbf{Proof.} For $\{g_{n}\}_{n\in \mathbb{N}}$ dense in $G,$
write $D:=\{g_{n}:n\in \mathbb{N}\}.$ The open sets of $G$ are generated as
unions of sets of the form $gV,$ with $g\in D$ and $V$ an open nhd of
$1_{G}.$ We show that these sub-basic sets of the $\sigma (\mathbf{t})$-topology refine the $\tau _{G}$-nhds of the identity. For $V$ a non-empty
$\tau _{G}$-open nhd of $1_{G}$ choose $U$ to be a non-empty $\tau _{G}$-open
nhd of $1_{G}$, with $U^{-1}U\subseteq V$.
Consider any non-trivial null sequence $\mathbf{t}$ and, referring to the
Subcontinuity Theorem (Th. 6.1), consider $\sigma =\sigma (\mathbf{t})\in
\mathcal{P}(G).$ For $\{g_{n}\}$ dense in $G$, there is $n$ with $\sigma
(g_{n}U)>0;$ for otherwise, $\sigma (g_{n}U)=0$ for each $n$ and, since
$G=\dbigcup\nolimits_{n}g_{n}U,$ we reach the contradiction $\sigma (G)=0$.
Pick $n$ with $\sigma (g_{n}U)>0;$ write $g$ for $g_{n}.$
Choose compact sets $K_{n}$ such that $\sigma $ is concentrated on
\dbigcup\nolimits_{n}K_{n}$ and a countable base $\mathcal{B}$ for $\tau
_{G}.$ Since
\[
\sigma (gU)=\sigma (\dbigcup\nolimits_{m}K_{m}\cap gU)=\sigma
(\dbigcup \{K_{m}\cap g\bar{B}:\bar{B}\subseteq U,B\in \mathcal{B},m\in \mathbb{N}\}),
\]
there are $m\in \mathbb{N}$ and $B\in \mathcal{B}$ with $\sigma
(K_{m}\cap g\bar{B})>0.$
Take $K:=K_{m}\cap g\bar{B};$ there is a subsequence
$\mathbf{s}=\mathbf{s}(K):=\{t_{m(k)}\}$ with
\[
\sigma (Kt_{m(k)})>\sigma (K)/2\qquad \text{(}k\in \mathbb{N}\text{)},\text{
so}\qquad \mu _{\_}^{\mathbf{s}}(K)>0.
\
So as $\Delta :=\sigma _{-}^{\mathbf{s}}(K)/4<\sigma _{-}^{\mathbf{s}}(K)<\sigma (K),$ by [BinO7, Lemma 1], there is $\delta >0$ with
\[
1_{G}\in B_{\delta }^{K,\Delta }\subseteq K^{-1}K\subseteq \bar{B}^{-1}g^{-1}g\bar{B}\subseteq U^{-1}U\subseteq V,
\]
and $t_{m(n)}\in B_{\delta }^{K,\Delta }$ for all large enough $n$ as in
[BinO7, Lemma 1]. $\square $
\bigskip
\noindent \textbf{Remark. }Proposition 7.1 is connected to the
Steinhaus-Weil Theorem, Theorem SW, in \S 6\ above: a similar argument
gives, for $E$ non-left-Haar-null, that $1_{G}\in \mathrm{int}_{G}(\hat{E})$
for
\[
\hat{E}:=\bigcup\nolimits_{\delta ,\Delta >0,g\in G,\mathbf{t}}\{B_{\delta
}^{gK,\Delta }(\sigma (\mathbf{t})):K\subseteq E,K\in \mathcal{K}_{+}(\sigma
(\mathbf{t})),\Delta <\sigma (gK)\}.
\]
That is, the relevant basic open nhds of $1_{G}$ in the various $\sigma (\mathbf{t})$-topologies `aggregate' to yield a nhd of $1_{G}$ in the
original topology of $G.$
\section{Connections with Cameron-Martin theory}
In this section, we pursue the connection with Cameron-Martin theory.
Proposition 8.1 provides the basis for a definition of the `covered points'
under $\mu ;$ this identifies a canonical `largest' Steinhaus support for
$\mu $ (modulo an initial choice of dense subset). The result takes its
motivation from the following classical observation:
\textit{In a locally convex topological vector space }$X$\textit{, in
particular in a Fr\'{e}chet space, equipped with a symmetric Radon Gaussian
measure }$\gamma $\textit{: if }$E$\textit{\ is any Hilbert space
continuously embedded in }$H(\gamma )$\textit{, then there exists a
symmetric Radon Gaussian measure }$\gamma ^{\prime }$\textit{\ with }$H(\gamma ^{\prime })=E$ [Bog1, 3.3.5].
We sign off by showing that the topology of the Steinhaus support is
metrizable.
We recall from \S 4.2: $\mu _{g}(B)=\mu (Bg)$ for $B\in \mathcal{B}(G);$
\[
\mathcal{N}=\mathcal{N}(\mu ):=\{g\in G:\mu _{g}\bot \mu \},\qquad
G_{0}=G_{0}(D):=\dbigcup\nolimits_{d\in D}d\mathcal{N}(\mu ),
\]
with $D=\{g_{n}:n\in \mathbb{N}\}\ $a dense subset of $G$; measures of
Solecki type (\S 7) have $\Delta (K,\delta )\equiv 0.$
\bigskip
\noindent \textbf{Proposition 8.1 (Covering Lemma).} \textit{For }$\mu \in
\mathcal{P}(G)$ \textit{of Solecki type, let }$\tilde{D}$\textit{\ be a
dense subset of }$Q(\mu ).$\textit{\ For }$\delta >0,K\in \mathcal{K}_{+}(\mu )$ \textit{and }$x\in Q(\mu )$\textit{\ there is }$g\in \tilde{D}$\textit{\ with}
\[
x\in gB_{\delta }^{K,\Delta }
\]
\textit{\ for all small enough }$\Delta <\mu (K)$\textit{.}
\bigskip
\noindent \textbf{Proof. }Choose $g\in \tilde{D}\cap xB_{\delta }\subseteq Q(\mu )$. Then $y^{-1}:=x^{-1}g\in B_{\delta },$ so also $y=g^{-1}x\in B_{\delta }$ (symmetry of the group-norm on $G$), and $y=g^{-1}x\in Q(\mu )$, as $Q(\mu )$ is a subgroup. Now $\mu (Ky)>0,$ as $y\in Q(\mu ),$ so we may choose $0<\Delta <\min \{\mu (Ky),\mu (K)\};$ then
\[
x=gy\in g\{z\in B_{\delta }:\mu (Kz)>\Delta \}=gB_{\delta }^{K,\Delta
}.\qquad \square
\]
Proposition 8.1 above identifies how points of $Q(\mu )$ can be covered by certain translates of basic sets of the form $B_{\delta }^{K,\Delta }.$ To go beyond $Q(\mu )$, this motivates the following definition.
\bigskip
\noindent \textbf{Definition.} Say that $g\in G$ is a \textit{covered} point
($g$ `is covered') under $\mu \in \mathcal{P}(G)$ if there is $K\in \mathcal{K}_{+}(\mu )$ with $\mu (Kg)>0.$ (Then $g\in B_{\delta }^{K,\Delta }$ for $\delta >||g||$ and $0<\Delta <\min \{\mu (K),\mu (Kg)\}$.) So the points of $Q_{R}$ are covered, but $g\in G$ is \textit{not covered} if $\mu _{g}(K)=\mu
(Kg)=0$ for all $K\in \mathcal{K}_{+}(\mu ),$ that is, $\mu _{g}\bot \mu $.
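In the notation of the proof of Proposition 8.1 above (where, by construction, $B_{\delta }^{K,\Delta }=\{z\in B_{\delta }:\mu (Kz)>\Delta \}$), the parenthetical claim is a routine check:
\[
\delta >||g||\Longrightarrow g\in B_{\delta },\qquad \Delta <\mu (Kg)\Longrightarrow \mu (Kg)>\Delta ,
\]
whence $g\in B_{\delta }^{K,\Delta }.$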
\bigskip
\noindent \textbf{Proposition 8.2. }\textit{For }$\mu \in \mathcal{P}(G)$
\textit{of Solecki type, }$\{g_{n}\}_{n\in \mathbb{N}}$\textit{\ dense in }$G$ \textit{and }$\delta >0,$\textit{\ the sets }$g_{n}B_{\delta }^{K,\Delta }$\textit{\ cover }$G\backslash G_{0}=G\backslash \dbigcup\nolimits_{n\in \mathbb{N}}g_{n}\mathcal{N}(\mu )$ \textit{and so generate a topology on the Borel set }$G\backslash G_{0}$\textit{\ for which these are sub-basic.}
\bigskip
\noindent \textbf{Proof. }Consider $x\notin \dbigcup\nolimits_{n\in \mathbb{N}}g_{n}\mathcal{N}(\mu )$ with $\{g_{n}\}$ dense in $G$. Then for arbitrary $\delta >0$, select $g_{n}\in B_{\delta }(x).$ Then $x\in B_{\delta }(g_{n}),$ and $y:=g_{n}^{-1}x\in B_{\delta }\backslash \mathcal{N}(\mu ),$ as $g_{n}^{-1}x\notin \mathcal{N}(\mu ).$ Now we may choose $K\in \mathcal{K}_{+}(\mu )$ with $\mu (Ky)>0.$ Then for $0<\Delta <\min \{\mu (K),\mu (Ky)\},$
\[
x=g_{n}y\in g_{n}B_{\delta }^{K,\Delta }.
\]
That is, for $\delta >0,$ the family $\{g_{n}B_{\delta }^{K,\Delta }:K\in
\mathcal{K}_{+}(\mu ),$ $0<\Delta <\mu (K),n\in \mathbb{N}\}$ covers
$G\backslash G_{0},$ and so a second-countable topology is generated with
sub-base the set
\[
\{g_{n}B_{\delta }^{K,\Delta }:K\in \mathcal{K}_{+}(\mu ),0<\Delta <\mu
(K),n\in \mathbb{N},\delta \in \mathbb{Q}_{+}\}.\qquad \square
\]
\bigskip
\noindent \textbf{Remark. }In the special case when $Q(\mu )$ is dense in $G$
(for instance taking $G$ to be $\bar{Q}$), so that also $\tilde{D}$ (in Prop. 8.1) is dense in $G,$ Prop. 8.2 above (with $g_{n}=\tilde{g}_{n}$) follows from Proposition 8.1. Note that in this case also $G_{0}=\mathcal{N}(\mu ),$ since $\tilde{g}_{n}\mathcal{N}(\mu )\subseteq \mathcal{N}(\mu ),$
by Lemma 3.2.
\bigskip
\noindent \textbf{Definitions.} For $\mu \in \mathcal{P}(G)$ and $K\in
\mathcal{K}_{+}(\mu ),$ put
\[
\Delta _{K}(x,y):=|\mu (Kx)-\mu (Ky)|\leq 1\quad \quad (x,y\in G),
\]
which is a pseudometric, so that
\[
\rho _{K}(x,y):=\max \{d_{L}^{G}(x,y),\Delta _{K}(x,y)\}\quad \quad (x,y\in
G)
\]
is a metric.
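Indeed, $\Delta _{K}$ is symmetric, vanishes on the diagonal, and satisfies the triangle inequality via the elementary estimate
\[
|\mu (Kx)-\mu (Kz)|\leq |\mu (Kx)-\mu (Ky)|+|\mu (Ky)-\mu (Kz)|\qquad (x,y,z\in G);
\]
it need not separate points (it vanishes whenever $\mu (Kx)=\mu (Ky)$), but $\rho _{K}$ does: $\rho _{K}(x,y)=0$ forces $d_{L}^{G}(x,y)=0,$ i.e. $x=y.$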
\bigskip
\noindent \textbf{Proposition 8.3. }\textit{For }$\mu \in \mathcal{P}(G)$
\textit{of Solecki type,} $g\in G\backslash \mathcal{N}(\mu )$, $K\in
\mathcal{K}_{+}(\mu )$ \textit{and }$\varepsilon >0:$\textit{\ if }$0<\varepsilon <\mu (Kg),$\textit{\ then there is }$\delta =\delta (\varepsilon )$\textit{\ with }$0<\delta <\varepsilon $\textit{\ such that}
\[
gB_{\delta (\varepsilon )}^{Kg,\Delta }\subseteq B_{\varepsilon }^{\rho
_{K}}(g):=\{x:\rho _{K}(x,g)<\varepsilon \}\subseteq gB_{\varepsilon
}^{Kg,\Delta }\text{, for }\Delta =\mu (Kg)-\varepsilon .
\]
\textit{Hence, for any enumeration }$\{K_{n}\}_{n}$ \textit{of the basic
compact sets in }$\mathcal{K}_{+}(\mu )$ \textit{comprising the family }$\mathcal{H}_{0}$ \textit{of Lemma 7.4, the metric}
\[
\rho (x,y):=\sup \{d_{L}^{G}(x,y),2^{-n}\Delta _{K_{n}}(x,y):n\in \mathbb{N}\}
\]
\textit{generates the }$\mu $\textit{-topology on }$G\backslash G_{0}$\textit{.}
\bigskip
\noindent \textbf{Proof. }Note that for $0<\varepsilon <\mu (Kg)$
\[
\rho _{K}(x,g)<\varepsilon \text{ iff [}d_{L}^{G}(x,g)<\varepsilon \text{ and }\mu (Kg)-\varepsilon <\mu (Kx)<\mu (Kg)+\varepsilon ].
\]
Write $x=gh;$ then $d_{L}^{G}(x,g)<\varepsilon $ is equivalent to the
constraint $||h||<\varepsilon .$ As $x\rightarrow \mu (Kx)$ is upper
semicontinuous, there is $0<\delta =\delta (\varepsilon )<\varepsilon $ such
that $\mu (Kgh)<\mu (Kg)+\varepsilon ,$ for $h\in B_{\delta };$ this yields
the further required constraint $\mu (Kgh)>\Delta :=\mu (Kg)-\varepsilon .$
The remaining assertions are now immediate. $\square $
\bigskip
\noindent \textbf{Remarks.} 1. In the above argument $\mu (Kgh\triangle
Kg)\leq 2\varepsilon $ provided $\mu (KgB_{\delta })<\varepsilon .$ This
implies that convergence in $\rho $ implies convergence in the Weil-like
group-norm $||\cdot ||_{\mu }^{E}$ of [BinO7] with $E=Kg$; indeed in the
locally compact case these norms generate the Weil topology of [Wei] (cf.
[Hal, \S 62], [HewR, \S 16], [Yam2, Ch. 2] and the recent [BinO7]). So the
$\rho $-topology refines the Weil-like topology.
\noindent 2. As with Theorem 8.1 above, there is an analogue, in which
metrizability is dropped in favour of a uniform structure.
\section{Complements}
\noindent \textbf{1.} \textit{Historical remarks}. The fundamental reference
here is of course the first, Haar's 1933 paper in which he introduces Haar
measure [Haa]. Von Neumann, in the paper (of the same journal) immediately
after Haar's [Neu1], applies Haar measure for compact Lie groups to solve
Hilbert's fifth problem. He follows this with two further contributions
[Neu3,4]. Kakutani made extensive relevant contributions to both topological
groups and to measure theory. His papers on the first appear in Volume 1 of
his Selected Papers [Kak3], together with commentaries (p. 391-408) by A. H.
Stone, J. R. Choksi, W. W. Comfort, K. A. Ross and J. Taylor. Here he deals
with metrisation, with uniqueness of Haar measure, and (with Kodaira) with its completion regularity. His papers on the second appear in Volume 2, with commentaries (p. 379-383) by Choksi, M. M. Rao, Oxtoby [Oxt3], and Ross. Here he deals (alone, with Kodaira, and with Oxtoby) with extension of Lebesgue measure, and with equivalence of infinite product measures (\S 9.18 below).
The other key historical references here are the Cameron-Martin papers
[CamM1,2,3]; see [Bog1], [LedT], [Str] for textbook accounts.
\noindent \textbf{2. }\textit{Radon measures. }We recall that on a complete
separable metric space every Borel measure is Radon, i.e. has inner compact
regularity ([Bog2, Vol. II, Th. 7.1.7]).
A metrizable \v{C}ech-complete space (i.e. one that is a $\mathcal{G}_{\delta }$ in some (any) compactification) is topologically complete, i.e.
the topology may be generated by a complete metric [Eng, Th. 4.3.26]. So, in
particular, if a locally compact group is metrizable, then it has a complete
metric, and so every Borel measure on the group, in particular every Haar
measure, is Radon. If in addition the group is separable (so Polish), then,
being second-countable, it is $\sigma $-compact, and then every Haar measure
is $\sigma $-finite, and so also outer regular ([Kal, Lemma 1.34], cf. [Par,
Th. II.1.2] albeit for a probability measure).
In general, one may pass from a Haar measure $\eta _{X}$ on a locally
compact group $X$ which is outer regular (i.e. Borel sets are outer $\eta _{X}$-approximable by open sets) to the Borel measure $\mu $ defined by
\[
\mu (B):=\sup \{\eta _{X}(K):K\in \mathcal{K}(X),K\subseteq B\}\quad (B\in
\mathcal{B}(X)),
\]
which agrees with $\eta _{X}$ on $\mathcal{K}(X)$ and so is inner compact
regular [Bog2, Th. 7.11.1]; however, $\mu $ need not be outer-regular. In
applications inner compact regularity carries more advantages, hence our
adoption of this property of measures.
We note some alternative usages here.
(a) For Schwartz [Sch, 1.2], a Radon measure is a locally finite, Borel
measure which is inner compact regular (definition R$_{\text{3}}$); an
equivalent definition includes local finiteness and couples outer regularity with inner compact regularity restricted to open sets. A third definition involves both a locally finite measure $M$ which is
outer regular and $m$ its associated essential measure (outer measure
restricted to $\mathcal{B}(X)$) which is inner compact regular, the two
agreeing on open sets and on sets of finite $M$-measure.
(b) Fremlin [Fre2, p. 15] defines Radon measures to be locally finite and
inner compact regular (plus complete and locally determined [Fre1, p.13]).
(c) Heyer [Hey1, \S 1.1], for a locally compact group $X$, defines a Radon
measure as a linear functional with domain the continuous complex functions
with compact support in $X$ and with a boundedness condition where the
bounds correspond to the possible compact supports.
\noindent \textbf{3. }\textit{Invariance beyond local compactness. }We
recall our opening paragraph, which set out the contrast between the local
compactness of the group setting, where one has Haar measure, and the
absence of both in the Hilbert-space setting in which Cameron-Martin theory
originated. We note that the invariance property of Haar measure may be
extended \textit{beyond }the locally-compact case. Nothing new is obtained
in our setting of probability measures, but if one drops local finiteness,
Haar-like measures of `pathological' character can occur (\S 9.8 below). We
quote Diestel and Spalsbury [DieS, Ch. 10], who give a textbook account of
the early work of Oxtoby in this area [Oxt1]. We note in passing that this
interesting paper is not cited by Oxtoby himself in either edition of his
classic book [Oxt2]. We note also the use of local finiteness in Schwartz's
definition of a Radon measure [Sch].
The classic case of Haar (invariant) measure is Lebesgue measure in
Euclidean space. A number of authors have produced `Lebesgue-like'
extensions of Lebesgue measure from $\mathbb{R}^{n}$ to $\mathbb{R}^{\mathbb{N}};$ see e.g. Baker [Bak1,2], Gill and Zachary [GilZ], [Pan], Yamasaki
[Yam1,2].
Admissible translators present themselves here and also in a variety of
related circumstances; for a statistical setting see e.g. [Shep], and for
later developments [Smol] and Sadasue [Sad1].
\noindent \textbf{4. }\textit{Quasi-invariance beyond local compactness. }
Such questions are addressed in a vector-space context in Bogachev's book
[Bog1]; see also Yamasaki [Yam1,2], Arutyunyan and Kosov [AruK] (cf. \S 4).
For the group context, see Ludkovsky [Lud], Sadasue [Sad1,2] and the
references cited there (again, cf. \S 4).
\noindent \textbf{5. }\textit{Group representations beyond local
compactness. }See Ludkovsky [Lud] for group representations, and the
monograph of Banaszczyk [Bana] for Pontryagin duality in the abelian case;
for a textbook treatment see [FelD]. For harmonic analysis, see Gel$^{\prime
}$fand and Vilenkin [GelV], Xia [Xia].
\noindent \textbf{6. }\textit{Integration beyond local compactness. }Measure
and integration are of course closely linked, in this context as in any
other. For monograph accounts, see e.g. Skorohod [Sko], [Yam2], [Xia],
[GilZ].
\noindent \textbf{7. }\textit{Differentiation beyond local compactness. }
Differentiation in infinitely many dimensions owes much to pioneering work
by Fomin, and has led on to the theory of smooth measures and the Malliavin
calculus. See e.g. Bogachev [Bog3], Dalecky and Fomin [DalF].
\noindent \textbf{8. }\textit{The Oxtoby-Ulam Theorem }([Oxt1, Th. 2],
[DieS, Th.10.1]). This asserts that in a non-locally-compact Polish group
carrying a (non-trivial, left) invariant Borel measure every nhd of the
identity contains \textit{uncountably} many \textit{disjoint} (left) \textit{translates} of a compact set of positive measure. Since local finiteness
rules out such pathology, `total' invariance of a Radon measure implies
local compactness, hence the introduction of `selective invariance' and
`selective approximation' (by compact sets) in the variant Steinhaus triples
of \S 2.
\noindent \textbf{9. }\textit{Invariant means. }One can deal with invariant
means in place of invariant measures. This involves the theory of \textit{amenable groups,} and amenability more generally; see Paterson [Pat], which
has an extensive bibliography. There are links with Solecki's concept of
\textit{amenability at }1 ([Sol] and \S 6; [BinO7]).
\noindent \textbf{10. }\textit{Fr\'{e}chet spaces: Gaussianity }[Bog1]. For $X$ a locally convex topological vector space, $\gamma $ a
probability measure on the $\sigma $-algebra of the cylinder sets generated
by $X^{\ast }$ (the Borel sets, for $X$ separable Fr\'{e}chet, e.g.
separable Banach), with $X^{\ast }\subseteq L^{2}(\gamma ):$ then $\gamma $
is called \textit{Gaussian} on $X$ iff $\gamma \circ \ell ^{-1}$ defined by
\[
\gamma \circ \ell ^{-1}(B)=\gamma (\ell ^{-1}(B))\qquad (\text{Borel }B\subseteq \mathbb{R})
\]
is Gaussian (normal) on $\mathbb{R}$ for every $\ell \in X^{\ast }\subseteq
L^{2}(\gamma )$. For a monograph treatment of Gaussianity in a Hilbert-space
setting, see Janson [Jan].
\noindent \textbf{11. }\textit{Cameron-Martin} \textit{aspects. }For $X$ Fr\'{e}chet and $\gamma $ Gaussian, when the closed span of $\{x^{\ast }-\gamma (x^{\ast }):x^{\ast }\in X^{\ast }\}$ is infinite-dimensional, $H(\gamma )$ is $\gamma $-null in $X$ [Bog1, Th. 2.4.7].
Furthermore, for $h\in H(\gamma ),$ the Radon-Nikodym density $d\gamma _{h}/d\gamma $ (which is explicitly given by the Cameron-Martin formula $(CM)$) as a function of $h$ is continuous on $H(\gamma )$ [Bog1, Cor. 2.4.3]. This implies, for $\mathbf{t}$ null in the $H(\gamma )$-norm (with $t_{n}\in H(\gamma )$) and compact $K$ with $\gamma (K)>0,$ that $\gamma _{-}^{\mathbf{t}}(K)>0.$ Here, for $X$ sequentially complete, the
corresponding balls (under the $H(\gamma )$-norm) are weakly compact in $X$
-- cf. [Bog1, Prop. 2.4.6], also the Remark before Th. 3.4 above.
We note here for convenience the following properties of the Cameron-Martin
space.
(i) For $\gamma $ non-degenerate, $H(\gamma )$ is everywhere dense [Bog1, \S
3.6].
(ii) $\mathrm{cl}_{X}H(\gamma )$ is of full measure [Bog1, 3.6.1].
(iii) If $X_{\gamma }^{\ast }$ is infinite-dimensional (i.e. is not locally
compact), then $\gamma (H(\gamma ))=0$ [Bog1, 2.4.7].
(iv) For $\gamma $ a Radon Gaussian measure, both of the spaces $H(\gamma )$
and $L^{2}(X,\gamma )$ are separable [Bog1, 3.2.7 and Cor. 3.2.8].
(v) The `relative mobility property' (cf. \S 9.15 below) that $\gamma
(Kh)\rightarrow \gamma (K)$ as $h\rightarrow 0$ always holds [Bog1, Th.
2.4.8] applied to $1_{K}$ (indeed, for $h\in H(\gamma ),$ $d\gamma
_{h}/d\gamma $ exists and is continuous in $h$, by [Bog1, 2.4.3]).
(vi) For $\gamma $ Radon and $X$ a locally convex topological vector space,
the closed unit ball of $H(\gamma )$ is compact in $X$ [Bog1, 3.2.4].
(vii) In a locally convex space, there is a sequence of metrizable compacta $K_{n}$ with $\gamma (\tbigcup\nolimits_{n}K_{n})=1$ [Bog1, Th. 3.4.1].
(viii) For $X$ a locally convex space, equipped with a symmetric Radon
Gaussian measure $\gamma $: if $E$ is any Hilbert space continuously
embedded in $H(\gamma )$, then there exists a symmetric Radon Gaussian
measure $\gamma ^{\prime }$ with $H(\gamma ^{\prime })=E$ [Bog1, 3.3.5].
\noindent \textbf{12. }\textit{Locally compact groups: Gaussianity. }For
Gaussian measures on locally compact groups $G$, see e.g. [Par, IV.6] for $G$
abelian and [Hey1, 5.2, 5.3] for the general case. Use is made there of
characters -- bounded, multiplicative or additive according to notation; the
local inner product [Hey1, 5.1.7] is between the group and its Pontryagin
dual.
One link between the group and vector-space aspects can be seen in the
central role played in each by Gaussianity. We may think of this in each
case as saying that, as in $(CM),$ the relevant Fourier transform is of
exponential type, the exponent having two terms, one linear (concerning
means -- location, or translation), one quadratic (concerning covariances,
which captures scale and dependence effects). Where the density of the
measure exists, it involves (via the `normal' Edgeworth formula above in the
Euclidean case) the inverse of the covariance $\Sigma $ (matrix or
operator), important in its own right (as the concentration or precision
matrix/operator $K:=\Sigma ^{-1}$). So `degeneracy-support' phenomena as
above are unavoidable (below). Statistically, samples from two populations
can only be usefully compared if their covariances are the same, and then
the relevant statistic is the likelihood ratio; see e.g. [IbrR] for
background here.
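The `exponential type' just described can be made explicit (cf. [Bog1, Ch. 2]; we record the standard form here for orientation): a Radon measure $\gamma $ on a locally convex space $X$ is Gaussian iff its Fourier transform takes the form
\[
\tilde{\gamma }(\ell ):=\int_{X}e^{i\ell (x)}\,\gamma (\mathrm{d}x)=\exp \left( iL(\ell )-\frac{1}{2}B(\ell ,\ell )\right) \qquad (\ell \in X^{\ast }),
\]
with $L$ linear on $X^{\ast }$ (the mean, or translation, term) and $B$ a non-negative quadratic form on $X^{\ast }$ (the covariance term).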
The supports of Gaussian measures on groups, and in particular the
connections between Gaussian and Haar supports, have been studied by
McCrudden [McC1,2], [McCW] and others\textit{.}
\noindent \textbf{13. }\textit{Dichotomy. }The equivalence-singularity
dichotomy for Gaussian measures is a general consequence ([LePM], [Kal];
[MarR, \S 5.3]) of the triviality of a certain tail algebra (cf. [Hey1, Th.
5.5.6]); tail triviality in this case is established using a \textit{zero-one law}.
\noindent \textbf{14.}\textit{\ Automatic continuity. }The general theme of
automatic continuity -- situations where a function subject to mild
qualitative conditions must necessarily be continuous -- is important in
many contexts; see e.g. [BinO5] and the references therein. For results of
this type on $\gamma $-measurable linear functions for Gaussian $\gamma $,
see [Bog1, Ch. 2]. See also [Sol, Cor. 2].
\noindent \textbf{15.}\textit{\ Simmons-Mospan theorem and subcontinuity.}
Recall from the companion paper [BinO7] (cf. [BinO4]) the following
definition, already used in Th. 3.4 above. For $\mu \in \mathcal{P}(G)$ and $K\in \mathcal{K}(G),$
\[
\mu _{-}(K):=\sup_{\delta >0}\inf \{\mu (Kt):t\in B_{\delta }\};
\]
then $\mu $ is \textit{subcontinuous} if $\mu _{-}(K)>0$ for all $K$ with $\mu (K)>0.$ (For a related notion see [LiuR], where a Radon measure $\mu $
on a space $X,$ on which a group $G\ $acts homeomorphically, is called
\textit{mobile} if each map $t\mapsto \mu (Kt)$ is continuous for $K\in
\mathcal{K}(X)$.) It follows from Prop. 3.1 above (see Cor. 3.2) that if $\mu _{-}(K)>0$ for some $K,$ then $G$ is locally compact. Note that in a
locally compact group, right uniform continuity of all the maps $t\mapsto
\mu (tB)$ for $B$ Borel is equivalent to absolute continuity of $\mu $
w.r.t. Haar measure ([Hey1, L. 6.3.4] -- cf. [HewR, Th. 20.4]). So if $G$ is not locally compact, no measure $\mu \in \mathcal{P}(G)$ is
subcontinuous; then for all compact $K\subseteq G,$ $\mu _{-}(K)=0.$
If $G$ is locally compact, then its left Haar measure $\eta =\eta _{G}$
satisfies
\[
\eta _{-}(K)=\eta (K);
\]
in particular, this equation holds for all non-null compact $K.$ The latter
observation extends to measures $\mu $ that are absolutely continuous w.r.t.
$\eta _{G}.$ Conversely, if $\mu $ is a measure satisfying $\mu _{-}(K)>0$ for all compact $K$ with $\mu (K)>0,$ then, as a consequence of
the Simmons-Mospan theorem (\S 1), $\mu $ is absolutely continuous w.r.t. $\eta _{G}$: see [BinO7].
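For orientation, we sketch the displayed identity for left Haar measure $\eta =\eta _{G}$ (here $\Delta _{G}$ denotes the modular function, not the parameter $\Delta $ above): since $\eta (Kt)=\Delta _{G}(t)\eta (K)$ and $\Delta _{G}$ is a continuous homomorphism with $\Delta _{G}(1_{G})=1,$
\[
\eta _{-}(K)=\sup_{\delta >0}\inf \{\Delta _{G}(t)\eta (K):t\in B_{\delta }\}=\eta (K).
\]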
\noindent \textbf{16. }\textit{Quasi-invariance and the Mackey topology of analytic
Borel groups. }We stop to comment on the force of \textit{full
quasi-invariance} of a measure in connection with a Steinhaus triple $(H,G,\mu )$ with $H$ (and $G$) Polish. Both groups, being absolutely Borel,
are analytic spaces (Lemma 2.1 above). So both carry a \textit{standard}
Borel `structure' (i.e. Borel isomorphic to the $\sigma $-algebra of Borel
subsets of some Borel subset of a Polish space) with $H$ carrying a Borel
`substructure' ($\sigma $-subalgebra) of $G.$ (Borel subsets of $H$ are
Borel in $G.$) Mackey [Mac] investigates such Borel groups, defining also a
(Borel) measure $\mu $ to be \textit{standard} if it has a \textit{standard}
Borel support (vanishes outside of a standard Borel set). It emerges that
every $\sigma $-finite Borel measure in an analytic Borel space is standard
[Mac, Th. 6.1]. Of interest to us is Mackey's notion of a `measure class' $C_{\mu },$ comprising all Borel measures $\nu $ with the same null sets as $\mu $: $\mathcal{M}_{0}(\nu )=\mathcal{M}_{0}(\mu ).$ Such a measure class
may be closed under translation, and may be right or left invariant; then
the common null sets are themselves invariant, and so may be viewed as
witnessing quasi-invariance of the measure $\mu .$ Mackey shows that a Borel
group with a one-sided invariant measure class has a both-sidedly invariant
measure class [Mac, Lemma 7.2]; furthermore, if the class is countably
generated, then the class contains a left-invariant and a right-invariant
measure [Mac, Lemma 7.3]. This enables Mackey to improve on Weil's theorem
in showing that an analytic Borel group $G$ with a one-sidedly invariant
measure class, in particular one generated by a quasi-invariant measure, has
a unique locally compact topology making $G$ a topological group as well as
generating the given Borel `structure'.
\noindent \textbf{17. }\textit{The Strassen set and the law of the iterated logarithm
(LIL)}. The LIL completes (with the law of large numbers (LLN) and central
limit theorem (CLT)) the trilogy of classical limit theorems in probability
theory; for a survey see e.g. [Bin]. One form, the compact LIL, links the
unit ball $U$ of the reproducing-kernel Hilbert space associated with the
covariance structure of a random variable $X$ with values in a separable
Banach space $B$ with the cluster set of the partial sums, normalised as in
the classical LIL. See e.g. [LedT, 207-210]. The first results of this type
were Strassen's functional LIL and its extension to Banach spaces by Kuelbs
and others ([LedT, 233-234], [Bog1, 358]).
\noindent \textbf{18. }\textit{Product measures}. Infinite products of probability
measures correspond to infinite sequences of independent random variables;
they give a particularly important class of probability measures on
infinite-dimensional spaces. A basic result here is the \textit{Kakutani
alternative}: if the laws of the factors are equivalent, the laws of the
products are either equivalent or mutually singular, depending on the
convergence or divergence of the infinite product of the Hellinger distances
of the factor laws ([Kak2], [JacS, Ch. IV]; the term \textit{Kakutani-Hellinger distance} is now used). As usual with dichotomies in
probability theory, there are links with zero-one laws (cf. \S 9.13). See
also Shepp [She], [Kak2].
\noindent \textbf{19. }\textit{Non-Archimedean fields}. L\"{o}wner's result [Loe] (cf.
[Neu2]) addresses the loss of a property -- desirable, in some respects --
as the dimension $n\rightarrow \infty $, by changing the base field from the
reals to a non-Archimedean field. This is an early example of
non-Archimedean fields (which originate in algebra and algebraic number
theory) being applied to address a concrete problem in a quite different
area.
\noindent \textbf{20. }\textit{Other settings. }Recent generalizations of
Cameron-Martin theory analyze an infinite-dimensional Lie-group or a
sub-Riemannian manifold setting -- see for example [DriG], [GorL] and [Gor],
which thus preserve much of the classical setting; see also [Pug] (cf.
[Bog1, p. 393]) and [Shi] for special cases.
\bigskip
\textbf{Acknowledgement. }We thank V. I. Bogachev for helpful discussion and
the Referee for his thorough, scholarly and constructive report, which led
to many improvements.
\bigskip
\textbf{References}
\bigskip
\noindent \lbrack ArhT] A. Arhangel'skii, M. Tkachenko, \textsl{Topological
groups and related structures}. World Scientific, 2008. \newline
\noindent \lbrack AruK] L. M. Arutyunyan and E. D. Kosov, Spaces of
quasi-invariance of product measures. \textsl{Funct. Anal. Appl.} \textbf{49} (2015), 142--144.\newline
\noindent \lbrack Bad] R. Badora, On the Hahn-Banach theorem for groups.
\textsl{Arch. Math.} (Basel) \textbf{86} (2006), 517--528.\newline
\noindent \lbrack Ban1] S. Banach, \textsl{Th\'{e}orie des op\'{e}rations lin\'{e}aires}, in: Monografie Mat., vol. 1, 1932 (in: \textquotedblleft Oeuvres\textquotedblright , vol. 2, PWN, 1979); translated as \textsl{Theory of linear operations}, North-Holland, 1978.\newline
\noindent \lbrack Ban2] S. Banach, Sur l'\'{e}quation fonctionnelle $f(x+y)=f(x)+f(y)$, \textsl{Fund. Math.} \textbf{1} (1920), 123--124;
reprinted in: Oeuvres, Vol. I, PWN, Warszawa, 1967, 47--48 (commentary by H.
Fast, p. 314).\newline
\noindent \lbrack Bana] W. Banaszczyk, \textsl{Additive subgroups of
topological vector spaces.} Lecture Notes in Mathematics, 1466. Springer,
1991.\newline
\noindent \lbrack Bak1] R. L. Baker, \textquotedblleft Lebesgue measure\textquotedblright\ on $\mathbb{R}^{\infty }$. \textsl{Proc. Amer. Math. Soc.} \textbf{113} (1991), no. 4, 1023--1029.\newline
\noindent \lbrack Bak2] R. L. Baker, \textquotedblleft Lebesgue
measure\textquotedblright\ on $\mathbb{R}^{\infty }$. II. \textsl{Proc.
Amer. Math. Soc.} \textbf{132} (2004), 2577--2591.\newline
\noindent \lbrack BarFF] A. Bartoszewicz, M. Filipczak, T. Filipczak, On
supports of probability Bernoulli-like measures. \textsl{J. Math. Anal. Appl.} \textbf{462} (2018), 26--35.\newline
\noindent \lbrack BecK] H. Becker and A. S. Kechris, \textsl{The
descriptive set theory of Polish group actions.} London Math. Soc. Lecture
Notes \textbf{232}. Cambridge University Press, 1996.\newline
\noindent \lbrack BerTA] A. Berlinet, C. Thomas-Agnan, \textsl{Reproducing-kernel Hilbert spaces in probability and statistics}. Kluwer,
2004.\newline
\noindent \lbrack Ber] E. Berz, Sublinear functions on $\mathbb{R},$ \textsl{Aequat. Math.} \textbf{12} (1975), 200--206.\newline
\noindent \lbrack Bin] N. H. Bingham, Variants on the law of the iterated
logarithm. \textsl{Bull. London Math. Soc.} \textbf{18} (1986), 433--467.\newline
\noindent \lbrack BinF] N. H. Bingham and J.M. Fry, \textsl{Regression:
Linear models in statistics}. Springer, 2010.\newline
\noindent \lbrack BinGT] N. H. Bingham, C. M. Goldie and J. L. Teugels,
\textsl{Regular variation}, 2$^{\text{nd}}$ ed., Cambridge University Press,
1989 (1$^{\text{st}}$ ed. 1987). \newline
\noindent \lbrack BinK] N. H. Bingham and R. Kiesel, \textsl{Risk-neutral
valuation. Pricing and hedging of financial derivatives. }Springer, 2$^{\text{nd}}$ ed. 2004 (1$^{\text{st}}$ ed. 1998).\newline
\noindent \lbrack BinO1] N. H. Bingham and A. J. Ostaszewski, Kingman,
category and combinatorics. \textsl{Probability and mathematical genetics}
(Sir John Kingman Festschrift, ed. N. H. Bingham and C. M. Goldie), 135--168,
\textsl{London Math. Soc. Lecture Notes in Mathematics} \textbf{378}, CUP,
2010. \newline
\noindent \lbrack BinO2] N. H. Bingham and A. J. Ostaszewski, Normed groups:
Dichotomy and duality. \textsl{Dissert. Math.} \textbf{472} (2010), 138p.
\newline
\noindent \lbrack BinO3] N. H. Bingham, A. J. Ostaszewski, Category-measure
duality: convexity, midpoint convexity and Berz sublinearity. \textsl{Aequat. Math.} \textbf{91} (2017), 801--836.\newline
\noindent \lbrack BinO4] N. H. Bingham and A. J. Ostaszewski, Beyond
Lebesgue and Baire IV: Density topologies and a converse Steinhaus-Weil
theorem. \textsl{Topology and its Applications} \textbf{239} (2018), 274--292
(arXiv:1607.00031).\newline
\noindent \lbrack BinO5] N. H. Bingham and A. J. Ostaszewski, Additivity,
subadditivity and linearity: Automatic continuity and quantifier weakening.
\textsl{Indag. Math.} (N.S.) \textbf{29} (2018), 687--713.\newline
\noindent \lbrack BinO6] N. H. Bingham and A. J. Ostaszewski, Variants on
the Berz sublinearity theorem, arXiv:1712.05183.\newline
\noindent \lbrack BinO7] N. H. Bingham and A. J. Ostaszewski, The
Steinhaus-Weil property: its converse, Solecki amenability and
subcontinuity, arXiv:1607.00049v3.\newline
\noindent \lbrack Bir] G. Birkhoff, A note on topological groups. \textsl{Compos. Math.} \textbf{3} (1936), 427--430.\newline
\noindent \lbrack Bog1] V. I. Bogachev, \textsl{Gaussian measures}, Math.
Surveys \& Monographs \textbf{62}, Amer. Math. Soc., 1998.\newline
\noindent \lbrack Bog2] V. I. Bogachev, \textsl{Measure theory.} Vol. I, II.
Springer, 2007.\newline
\noindent \lbrack Bog3] V. I. Bogachev, \textsl{Differentiable measures and
the Malliavin calculus.} Math. Surveys \& Monographs \textbf{164}. Amer. Math. Soc., 2010.\newline
\noindent \lbrack CamM1] R. H. Cameron, W. T. Martin, Transformations of
Wiener integrals under translations. \textsl{Ann. Math. }(2) \textbf{45},
(1944), 386--396.\newline
\noindent \lbrack CamM2] R. H. Cameron, W. T. Martin, Transformations of
Wiener integrals under a general class of linear transformations. \textsl{Trans. Amer. Math. Soc.} \textbf{58}, (1945), 184--219.\newline
\noindent \lbrack CamM3] R. H. Cameron, W. T. Martin, The transformation of
Wiener integrals by nonlinear transformations. \textsl{Trans. Amer. Math.
Soc.} \textbf{66}, (1949), 253--283.\newline
\noindent \lbrack Chr1] J. P. R. Christensen, On sets of Haar measure zero
in abelian Polish groups. Proceedings of the International Symposium on
Partial Differential Equations and the Geometry of Normed Linear Spaces
(Jerusalem, 1972). \textsl{Israel J. Math.} \textbf{13} (1972), 255--260
(1973).\newline
\noindent \lbrack Chr2] J. P. R. Christensen, \textsl{Topology and Borel
structure. Descriptive topology and set theory with applications to
functional analysis and measure theory.} North-Holland Mathematics Studies
\textbf{10}, 1974.\newline
\noindent \lbrack Con] J. B. Conway, \textsl{A course in functional analysis
} 2$^{\text{nd}}$ ed. Graduate Texts in Mathematics \textbf{96}. Springer,
1990 (1$^{\text{st}}$ ed. 1985).\newline
\noindent \lbrack DalF] Yu. L. Dalecky and S. V. Fomin, \textsl{Measures and
differential equations in infinite-dimensional space. }Kluwer, 1991.\newline
\noindent \lbrack DieS] J. Diestel, A. Spalsbury,\textsl{\ The joys of Haar
measure.} Graduate Studies in Mathematics \textbf{150}, Amer. Math. Soc.,
2014.\newline
\noindent \lbrack Dod] P. Dodos, The Steinhaus property and Haar-null sets.
\textsl{Bull. Lond. Math. Soc.} \textbf{41} (2009), 377--384.\newline
\noindent \lbrack DriG] B. K. Driver, M. Gordina, Heat kernel analysis on
infinite-dimensional Heisenberg groups. \textsl{J. Funct. Anal.} \textbf{255}
(2008), 2395--2461.\newline
\noindent \lbrack FelD] J. M. G.Fell and R. S. Doran, \textsl
Representations of *-algebras, locally compact groups, and Banach
*-algebraic bundles}:\textsl{\ Basic representation theory of groups and
algebras.} Vol. 1 Pure and Applied Mathematics \textbf{125}. Academic Press,
1988.\newline
\noindent \lbrack For] {M. K. Fort, Jr., {A unified theory of
semi-continuity.} \textsl{Duke Math. J. }\textbf{16} (1949), 237--246.
\newline
\noindent \lbrack Fre1] D. H. Fremlin, \textsl{Measure theory}, Volume 2,
Broad foundations, Torres Fremlin, 2001.\newline
\noindent \lbrack Fre2] D. H. Fremlin, \textsl{Measure theory}, Volume 4,
Topological measure spaces, Part One, Torres Fremlin, 2003.\newline
\noindent \lbrack Fuc] L. Fuchs, \textsl{Infinite abelian groups.Vol. 1}
Academic Press, 1970; vol. 2 1973.\newline
\noindent \lbrack GarP] R. J. Gardner and W. F. Pfeffer, \textsl{Borel
measures}, in: \textsl{Handbook of set-theoretic topology} (K. Kunen and J.
E. Vaughan, eds), 961--1043, North-Holland, 1984.
\noindent \lbrack GelV] I. M. Gel$^{\prime }$fand and N. Ya. Vilenkin,
\textsl{Generalized functions}. Vol. 4.\textsl{\ Applications of harmonic
analysis.} Academic Press , 1964.\newline
\noindent \lbrack GikS] I. I. Gikhman and A. V. Skorokhod, \textsl{Theory of
Random Processes I}, Grundlehren math. Wiss. \textbf{210}. Springer, 1974
(reprinted 2004; Izdat. Nauka, Moscow, 1971).\newline
\noindent \lbrack GilZ] T. Gill, W. Zachary, \textsl{Functional Analysis and
the Feynman Operator Calculus}, Springer, 2016.\newline
\noindent \lbrack Gir] I. V. Girsanov, On transforming a certain class of
stochastic processes by absolutely continuous substitution of measures,
\textsl{Th. Prob. Appl}. \textbf{5} (1960), 285--301.\newline
\noindent \lbrack Gor] M. Gordina, Heat kernel analysis and Cameron-Martin
subgroup for infinite dimensional groups. \textsl{J. Funct. Anal.} \textbf
171} (2000), 192--232.\newline
\noindent \lbrack GorL] M. Gordina, T. Laetsch, A convergence to Brownian
motion on sub-Riemannian manifolds. \textsl{Trans. Amer. Math. Soc}. \textbf
369} (2017), 6263--6278.\newline
\noindent \lbrack Gow1] C. Gowrisankaran, Radon measures on groups. \textsl
Proc. Amer. Math. Soc.} \textbf{25} (1970), 381--384.\newline
\noindent \lbrack Gow2] C. Gowrisankaran, Quasi-invariant Radon measures on
groups. \textsl{Proc. Amer. Math. Soc. }\textbf{35} (1972), 503--506.\newline
\noindent \lbrack Gow3] C. Gowrisankaran, Semigroups with invariant Radon
measures. \textsl{Proc. Amer. Math. Soc.} \textbf{38} (1973), 400--404
\newline
\noindent \lbrack Gro1] L. Gross, Abstract Wiener spaces, in: \textsl{Proc.
Fifth Berkeley Sympos. Math. Statist. and Probability}, Vol. II, Part 1, pp.
31--42, Univ. California Press, 1967.\newline
\noindent \lbrack Gro2] L. Gross, Abstract Wiener measure and
infinite-dimensional potential theory, in: \textsl{Lectures in Modern
Analysis and Applications}, II, pp. 84--116, \textsl{Lecture Notes in
Mathematics} \textbf{140}, Springer, 1970.\newline
\noindent \lbrack Haa] A. Haar, Der Massbegriff in der Theorie der
kontinuierlichen Gruppen. \textsl{Math. Ann.} \textbf{34} (1933), 147-169
\newline
\noindent \lbrack Hal] P. R. Halmos{\normalsize , \textsl{Measure theory},}
Grad. Texts in Math. \textbf{18}, Springer 1974 (1st. ed. Van Nostrand,
1950).{\normalsize \newline
}\noindent \lbrack Hey1] H. Heyer, \textsl{Probability measures on locally
compact groups.} Ergebnisse Math. \textbf{94}, Springer, 1977.\newline
\noindent \lbrack Hey2] H. Heyer, Recent contributions to the embedding
problem for probability measures on a locally compact group. \textsl{J.
Multivariate Anal.} \textbf{19} (1986), 119--131.\newline
\noindent \lbrack HeyP] H. Heyer, G. Pap, On infinite divisibility and
embedding of probability measures on a locally compact abelian group.
\textsl{Infinite-dimensional harmonic analysis} III, 99--118, World Sci.
Publ., 2005.\newline
\noindent \lbrack HewR] E. Hewitt and K. A Ross, \textsl{Abstract harmonic
analysis.} Vol. I., Grundlehren Math. Wiss. \textbf{115}, Springer, 2$^
\text{nd}}$ ed. 1979 (1$^{\text{st}}$ ed. 1963).\newline
\noindent \lbrack IbrR] I. A. Ibragimov, Y. A. Rozanov, \textsl{Gaussian
random processes.} Translated from the Russian by A. B. Aries. Applications
of Mathematics \textbf{9}. Springer, 1978.\newline
\noindent \lbrack JacS] J. Jacod and A. N. Shiryaev, \textsl{Limit theorems
for stochastic processes.}Grund. Math. Wiss. \textbf{288}, 2$^{\text{nd}}$
ed. 2003 (1$^{\text{st}}$ ed. 1987).\newline
\noindent \lbrack Jan] S. Janson, \textsl{Gaussian Hilbert spaces.}
Cambridge Tracts in Mathematics \textbf{129}. Cambridge University Press,
1997.\newline
\noindent \lbrack Kak1] S. Kakutani, \"{U}ber die Metrisation der
topologischen Gruppen, \textsl{Proc. Imp. Acad. Tokyo} \textbf{12} (1936),
82--84 (reprinted in [Kak3], Vol. 1, 60--62).\newline
\noindent \lbrack Kak2] S. Kakutani, On equivalence of infinite product
measures. \textsl{Ann. of Math.} (2) \textbf{49}, (1948). 214--224
(reprinted in [Kak3], Vol. 2, 19-29).\newline
\noindent \lbrack Kak3] S. Kakutani, \textsl{Selected Papers}, Vol. 1, 2,
ed. R. R. Kallman, Birkh\"{a}user, 1986.\newline
\noindent \lbrack Kal] G. Kallianpur, Zero-one laws for Gaussian processes.
\textsl{Trans. Amer. Math. Soc.} \textbf{149} (1970), 199-211.\newline
\noindent \lbrack Kap] I. Kaplansky, \textsl{Infinite abelian groups.} U.
Michigan Press, 1969 (1$^{\text{st}}$ ed. 1954).\newline
\noindent \lbrack Kec] A. S. Kechris, \textsl{Classical descriptive set
theory}. Graduate Texts in Mathematics \textbf{156}, Springer, 1995.\newline
\noindent \lbrack Kem] J. H. B. Kemperman, A general functional equation
\textsl{\ Trans. Amer. Math. Soc.} \textbf{86} (1957), 28--56.{\normalsize
\newline
}\noindent \lbrack Kle] V. Klee, Invariant extensions of linear functionals,
\textsl{Pacific J. Math.} 4 (1954), 37-46.\newline
\noindent \lbrack Kuc] M. Kuczma, \textsl{An introduction to the theory of
functional equations and inequalities. Cauchy's equation and Jensen's
inequality,} 2$^{\text{nd}}$ ed., Birkh\"{a}user, 2009 (1$^{\text{st}}$ ed.
PWN, Warszawa, 1985).\newline
\noindent \lbrack Kuo] H. H. Kuo, Gaussian measures in Banach spaces.
\textsl{Lecture Notes in Mathematics} \textbf{463}, Springer, 1975.\newline
\noindent \lbrack LedT] M. Ledoux and M. Talagrand, \textsl{Probability in
Banach spaces}. Ergeb. Math. (3) \textbf{33}, Springer, 1991 (paperback
2011).\newline
\noindent \lbrack LePM] R. D. LePage, V. Mandrekar, Equivalence-singularity
dichotomies from zero-one laws. \textsl{Proc. Amer. Math. Soc.} \textbf{31}
(1972), 251--254.\newline
\noindent \lbrack Lif] M. A. Lifshits, \textsl{Gaussian random functions}.
Mathematics and its Applications \textbf{322}, Kluwer, 1995.\newline
{\normalsize \noindent }[LiuR] T. S. Liu, A. van Rooij, Transformation
groups and absolutely continuous measures, \textsl{lndag. Math.} \textbf{71}
(1968), 225-231.\newline
{\normalsize \noindent }[LiuRW] T. S. Liu, A. van Rooij, J-K Wang,
Transformation groups and absolutely continuous measures II, \textsl{Indag.
Math.} \textbf{73 }(1970), 57--61.\newline
{\normalsize \noindent }[Loe] K. L\"{o}wner (=Charles Loewner), Grundz\"{u
ge einer Inhaltslehre im Hilbertschen Raume. \textsl{Ann. of Math.} \textbf
40} (1939), 816--833 (reprinted in \textsl{Charles Loewner, Collected Papers}
(ed. Lipman Bers), Birkh\"{a}user, 1988, 106-123).\newline
{\normalsize \noindent }[Lud] S. V. Ludkovsky, Properties of quasi-invariant
measures on topological groups and associated algebras. \textsl{Ann. Math.
Blaise Pascal} \textbf{6 }(1999), 33--45.\newline
{\normalsize \noindent }[Mac] G. W. Mackey, Borel structure in groups and
their duals. \textsl{Trans. Amer. Math. Soc.} \textbf{85} (1957), 134--165
\newline
\noindent \lbrack MarR] M. B. Marcus, J. Rosen, \textsl{Markov processes,
Gaussian processes, and local times.} Cambridge Studies in Advanced
Mathematics \textbf{100}, 2006.\newline
\noindent \lbrack McC1] M. McCrudden, On the supports of absolutely
continuous Gauss measures on connected Lie groups. \textsl{Monatsh. Math.}
\textbf{98} (1984), 295--310.\newline
\noindent \lbrack McC2] M. McCrudden, On the supports of Gauss measures on
algebraic groups. \textsl{Math. Proc. Cambridge Philos. Soc.} \textbf{96}
(1984), 437--445.\newline
\noindent \lbrack McCW] M. McCrudden, R.M. Wood, On the support of
absolutely continuous Gauss measures.\textsl{\ Probability measures on
groups, VII (Oberwolfach, 1983)}, 379--397, \textsl{Lecture Notes in Math.}
\textbf{1064}, Springer, 1984.\newline
\noindent \lbrack MilO] H. I. Miller and A. J. Ostaszewski, Group actions
and shift-compactness. \textsl{J. Math. Anal. Appl.} \textbf{392} (2012),
23-39.\newline
\noindent \lbrack Mor] S. A. Morris,\textsl{\ Pontryagin duality and the
structure of locally compact abelian groups. }London Math. Soc. Lecture Note
Series \textbf{29}. Cambridge University Press, 1977.\newline
\noindent \lbrack Mos] Y. V. Mospan, A converse to a theorem of Steinhaus.
\textsl{Real An. Exch.} \textbf{31} (2005), 291-294.{\normalsize \newline
}\noindent \lbrack Neu1] J. von Neumann, Die Einf\"{u}hrung analytischer
Parameter in topologischen Gruppen. \textsl{Ann. Math.} \textbf{34} (1933),
170-179 (\textsl{Collected Works II, }366-386, Pergamon, 1961).{\normalsize
\newline
}\noindent \lbrack Neu2] J. von Neumann, Review of [Loe], \textsl{Math.
Reviews} 1-48; MathSciNet MR0000285.{\normalsize \newline
}\noindent \lbrack Neu3] J. von Neumann, Zum Haarsche Mass in topologischen
Gruppen, \textsl{Comp. Math.} \textbf{1} (1934), 106-114 (\textsl{Collected
Works II, }445-453, Pergamon, 1961).{\normalsize \newline
}\noindent \lbrack Neu4] J. von Neumann, The uniqueness of Haar's measure.
\textsl{Mat. Sbornik} \textbf{1} (1936), 721-734 (\textsl{Collected Works
IV, }91-104, Pergamon, 1962).{\normalsize \newline
}\noindent \lbrack Ost1] A. J. Ostaszewski, Beyond Lebesgue and Baire III:
Steinhaus's Theorem and its descendants, \textsl{Topology and its
Applications} \textbf{160} (2013), 1144-1154.\newline
\noindent \lbrack Ost2] A. J. Ostaszewski, Effros, Baire, Steinhaus and
non-separability. \textsl{Topology and its Applications} \textbf{195} (M.E.
Rudin memorial issue) (2015), 265-274.\newline
\noindent \lbrack Oxt1] J. C. Oxtoby,{\normalsize \ }Invariant measures in
groups which are not locally compact.{\normalsize \ \textsl{Trans. Amer.
Math. Soc.} }\textbf{60}{\normalsize \ }(1946), 215--237.\newline
\noindent \lbrack Oxt2] J. C. Oxtoby,{\normalsize \ \textsl{Measure and
category}, }Graduate Texts in Math. \textbf{2}, Springer, 2$^{\text{nd}}$
ed. 1980 (1$^{\text{st}}$ ed. 1972).{\normalsize \newline
}\noindent \lbrack Oxt3] J. C. Oxtoby, A commentary on [49], [62] and [63].
Pages 379-383 in [Kak3, Vol. 2].\newline
\noindent \lbrack Pan] G. R. Pantsulaia, On an invariant Borel measure in
Hilbert space. \textsl{Bull. Pol. Acad. Sci. Math.} \textbf{52} (2004),
47--51.\newline
\noindent \lbrack Par] K. R. Parthasarathy, \textsl{Probability measures on
metric spaces,} Acad. Press, 1967 (reprinted, AMS, 2005).\newline
\noindent \lbrack Pat] A. L. T. Paterson,{\normalsize \ \textsl{Amenability.}
}Math. Surveys and Mon. \textbf{29}, Amer. Math. Soc., 1988.{\normalsize
\newline
}\noindent \lbrack Pet] {B. J. Pettis, {On continuity and openness of
homomorphisms in topological groups.} \textsl{Ann. of Math. }(2) \textbf{52}
(1950), 293--308.}\newline
\noindent \lbrack Pic] {\ S. Piccard, {Sur les ensembles de distances des
ensembles de points d'un espace Euclidien.\ }\textsl{M\'{e}m. Univ. Neuc
\^{a}tel} \textbf{13}, 212 pp. 1939.}\newline
\noindent \lbrack Pro] V. Prokaj, A characterization of singular measures,
\textsl{Real Anal. Exchange} \textbf{29} (2003/2004), 805--812.\newline
\noindent \lbrack Pug] O. V. Pugach\"{e}v, Quasi-invariance of Poisson
distributions with respect to transformations of configurations. \textsl
Dokl. Math.} \textbf{77} (2008), 420--423.\newline
\noindent \lbrack Rog] C. A. Rogers, J. Jayne, C. Dellacherie, F. Tops\o e,
J. Hoffmann-J\o rgensen, D. A. Martin, A. S. Kechris, A. H. Stone, \textsl
Analytic sets,} Academic Press, 1980.\newline
\noindent \lbrack Rud1] W. Rudin, \textsl{Fourier analysis on groups}.
Wiley, 1962 (reprinted 1990).\newline
\noindent \lbrack Rud2] W. Rudin, \textsl{Functional analysis}, 2$^{\text{ed
}$ ed., McGraw-Hill, 1991 (1$^{\text{st}}$ ed. 1973).\newline
\noindent \lbrack Sad1] G. Sadasue, On absolute continuity of the Gibbs
measure under translations.\textsl{\ J. Math. Kyoto Univ.} \textbf{41}
(2001), 257--276.\newline
\noindent \lbrack Sad2] G. Sadasue, On quasi-invariance of infinite product
measures. \textsl{J. Theoret. Probab.} \textbf{21} (2008), no. 3, 571--585
\newline
\noindent \lbrack Sch] L. Schwartz, \textsl{Radon measures on arbitrary
topological spaces and cylindrical measures.} Tata Institute of Fundamental
Research Studies in Mathematics \textbf{6}, Oxford University Press, 1973
\newline
\noindent \lbrack She] L. A. Shepp, Distinguishing a sequence of random
variables from a translate of itself. \textsl{Ann. Math. Statist.} \textbf{3
} (1965), 1107--1112.\newline
\noindent \lbrack Shi] H. Shimomura, Quasi-invariant measures on the group
of diffeomorphisms and smooth vectors of unitary representations. \textsl{J.
Funct. Anal.} \textbf{187} (2001), 406--441.\newline
\noindent \lbrack Sim] S. M. Simmons, A converse Steinhaus theorem for
locally compact groups. \textsl{Proc. Amer. Math. Soc.} \textbf{49} (1975),
383-386.{\normalsize \newline
}\noindent \lbrack Sko] A. V. Skorohod, \textsl{Integration in Hilbert space
} Ergebnisse Math. \textbf{79}, Springer, 1974.\newline
\noindent \lbrack Smo] W. Smole\'{n}ski, On quasi-invariance of product
measures. \textsl{Demonstratio Math.} \textbf{11} (1978), no. 3, 801--805
\newline
\noindent \lbrack Sol] S. Solecki, Amenability, free subgroups, and Haar
null sets in non-locally compact groups. \textsl{Proc. London Math. Soc.}
\textbf{(3) 93} (2006), 693--722.\newline
\noindent \lbrack Ste] H. Steinhaus, Sur les distances des points de mesure
positive.\textsl{\ Fund. Math.} \textbf{1} (1920), 83-104.{\normalsize
\newline
}\noindent \lbrack Str] D. W. Stroock, \textsl{Probability theory. An
analytic view.} 2$^{\text{nd}}$ ed. Cambridge University Press, 2011 (1$^
\text{st}}$ ed. 1993).{\normalsize \newline
\noindent }[Tar] V. Tarieladze, Characteristic functionals of probabilistic
measures in DS-groups and related topics. \textsl{J. Math. Sci.} \textbf{211}
(2015), 137--296.{\normalsize \newline
\noindent }[TopH] F. Tops\o e and J. Hoffmann-J\o rgensen, Analytic spaces
and their application, in: [Rog].{\normalsize \newline
\noindent }[Wei] A. Weil,{\normalsize \textsl{\ L'int\'{e}gration dans les
groupes topologiques}, }Actualit\'{e}s Scientifiques et Industrielles 1145,
Hermann, 1965 (1$^{\text{st }}$ ed. 1940).\newline
{\normalsize \noindent }[Xia] D. X. Xia,\textsl{\ Measure and integration
theory on infinite-dimensional spaces. Abstract harmonic analysis.} Pure and
App. Math. \textbf{48}. Academic Press, 1972.\newline
\noindent \lbrack Yam1] Y. Yamasaki, Translationally invariant measure on
the infinite-dimensional vector space. \textsl{Publ. Res. Inst. Math. Sci.
\textbf{16} (1980), 693--720.\newline
\noindent \lbrack Yam2] Y. Yamasaki, \textsl{Measures on
infinite-dimensional spaces.} World Scientific, 1985.\newline
\noindent \lbrack Yos] K. Yosida, \textsl{Functional analysis}. Reprint of
the sixth (1980) edition. Classics in Mathematics. Springer, 1995.
\bigskip
\noindent Mathematics Department, Imperial College, London SW7 2AZ;
n.bingham@ic.ac.uk \newline
Mathematics Department, London School of Economics, Houghton Street, London
WC2A 2AE; A.J.Ostaszewski@lse.ac.uk
\end{document}
\section{Introduction}
\label{sec:wh-intro}
A common approach to building a high-performance data management system is to host all of its data
and metadata in the main memory~\cite{MemSQL,SQLite,Redis,lmdb}.
However, when expensive I/O operations are removed (at least from the critical path),
index operations become a major source of the system's cost, reportedly consuming
14--94\% of query execution time in today's in-memory databases~\cite{KGP13}.
Recent studies have proposed many optimizations to improve them with a major focus on
hash-table-based key-value (KV) systems, including efforts on avoiding chaining in hash tables,
improving memory access through cache prefetching,
and exploiting parallelism with fine-grained locking~\cite{FAK13,LLL16,NFG13}.
With these efforts the performance of index lookup can be pushed close to the hardware's limit,
where each lookup needs only one or two memory accesses to reach the requested data~\cite{LLL16}.
Unfortunately, the $O(1)$ lookup performance and benefits of the optimizations are not available
to ordered indexes used in important applications, such as B+ tree in LMDB~\cite{lmdb},
and skip list in LevelDB~\cite{leveldb}. Ordered indexes are required to support
range operations,
though the indexes can be (much) more expensive than hash tables supporting only point operations.
Example range operations include searching for all keys in a given key range or for keys of a common
prefix. It has been proved that lookup cost in a comparison-based ordered index
is $O(\log N)$ key comparisons,
where $N$ is the number of keys in the index~\cite{CLR09}.
As an example, in a B+ tree of one million keys
a lookup requires about 20 key comparisons on average.
When the B+ tree grows to billions of keys,
which is not rare with small KV items managed in today's large memory of hundreds of GBs,
on average 30 or more key-comparisons are required for a lookup.
Lookup in both examples can be an order of magnitude slower than that in hash tables.
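The comparison counts above follow directly from the binary logarithm; a quick sketch (ours, for illustration only) makes the arithmetic concrete:

```python
import math

def avg_comparisons(n_keys: int) -> int:
    """Approximate comparisons for a lookup in a comparison-based
    ordered index holding n_keys (binary-search model: about log2 N)."""
    return math.ceil(math.log2(n_keys))

# One million keys: about 20 comparisons per lookup.
print(avg_comparisons(10**6))       # 20
# Billions of keys: 30 or more comparisons per lookup.
print(avg_comparisons(4 * 10**9))   # 32
```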
Furthermore, searching in a big index with large footprints increases working
set size and makes CPU cache less effective.
While nodes in the index are usually linked by pointers,
pointer chasing is a common access pattern in the search operations.
Therefore, excessive cache and TLB misses may incur tens of DRAM accesses
in a lookup operation~\cite{WNJ17}.
The performance gap between ordered and unordered indexes has thus widened significantly.
As a result, improving ordered indexes to support efficient search operations
has become increasingly important.
As a potential solution to reduce the search overhead,
prefix tree, also known as trie,
may be adopted as an ordered index, where a key's location is solely determined by the key's content
(a string of \textit{tokens}, e.g., a byte string), rather than by the key's relative order
in the entire keyset. Accordingly, trie's search cost is determined by the number of tokens
in the search key ($L$), instead of the number of keys ($N$) in the index.
This unique feature makes it possible for tries to perform search faster than the
comparison-based ordered indexes, such as B+ tree and skip list. As an example,
for a trie where keys are 4-byte integers and each byte is a token,
the search cost is upper-bounded by a constant ($4$) regardless of the number of keys in the index.
This makes trie favorable in workloads dominated by short keys,
such as searching in IPv4 routing tables where all of the keys are 32-bit integers.
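To illustrate why a trie's search cost depends on key length rather than key count, below is a minimal byte-token trie (a hypothetical sketch, not Wormhole's structure); with 4-byte keys, as in the IPv4 example, every lookup terminates within four steps regardless of how many keys are indexed:

```python
class TrieNode:
    __slots__ = ("children", "value")
    def __init__(self):
        self.children = {}   # token (one byte) -> TrieNode
        self.value = None

def trie_insert(root, key: bytes, value):
    node = root
    for token in key:                       # one step per token
        node = node.children.setdefault(token, TrieNode())
    node.value = value

def trie_lookup(root, key: bytes):
    node, steps = root, 0
    for token in key:
        node = node.children.get(token)
        steps += 1
        if node is None:
            return None, steps              # mismatch before exhausting key
    return node.value, steps

# Hypothetical 4-byte (IPv4-like) keys; each byte is a token.
root = TrieNode()
for i, k in enumerate([b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", b"\xc0\xa8\x01\x01"]):
    trie_insert(root, k, i)
value, steps = trie_lookup(root, b"\x0a\x00\x00\x02")
print(value, steps)   # cost is bounded by the key length (4), not the key count
```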
However, if keys are long (e.g., URLs of tens of bytes long),
even with a small set of keys in the trie, the search cost can be consistently high
(possibly substantially higher than the $O(\log N)$ cost in other indexes).
As reported in a study of Facebook's KV cache workloads on its production Memcached system,
most keys have a size between 20 and 40 bytes~\cite{AXF12}, which makes trie an undesirable choice.
It is noted that the \textit{path compression} technique may help to reduce
a trie's search cost~\cite{LKN13}.
However, its efficacy highly depends on the key contents,
and there is no assurance that its $O(L)$ cost can always be reduced.
Together with its issues of inflated index size and fragmented memory usage~\cite{LKN13},
trie has not been an index structure of choice in general-purpose in-memory data management systems.
In this paper we propose a new ordered index structure, named \textit{Wormhole},
to bridge the performance gap between hash tables and ordered indexes
for high-performance in-memory data management.
Wormhole efficiently supports all common index operations,
including lookup, insertion, deletion, and range query.
Wormhole has a lookup cost of $O(\log L)$ memory accesses,
where $L$ is the length of search key (actual number of accesses can be (much) smaller than $\log_2 L$).
With a reasonably bounded key length (e.g., 1000 bytes), the cost can be considered as $O(1)$,
much lower than that of other ordered indexes, especially for a very-large-scale KV store.
In addition to lookup, other operations, such as insertion, deletion, and range query, are also efficiently supported.
In the meantime,
Wormhole has a space cost comparable to B+ tree, and often much lower than trie.
This improvement is achieved by leveraging strengths of three data structures,
namely, space efficiency of B+ tree (by storing multiple items in a tree node),
trie's search time independent of store size, and hash-table's $O(1)$ search time,
to orchestrate a single efficient index. Specifically, we first use a trie structure
to replace the non-leaf section of a B+ tree structure in order to remove the $N$ factor
in the B+ tree's $O(\log N)$ search time.
We then use a hash table to reduce the lookup cost on the trie structure
to $O(\log L)$, where $L$ is the search key length.
We further apply various optimizations in the new structure to realize
its full performance potential and maximize its measurable performance.
The proposed ordered index is named \textit{Wormhole}
for its capability of jumping on the search path from the tree root to a leaf node.
We design and implement an in-memory Wormhole index and extensively
evaluate it in comparison with several representative indexes,
including B+ tree, skip list, Adaptive Radix Tree (ART)~\cite{LKN13},
and Masstree (a highly optimized trie-like index)~\cite{MKM12}.
Experimental results show that Wormhole outperforms these indexes
in key lookup throughput by up to 8.4$\times$, 4.9$\times$, 4.3$\times$, and 6.6$\times$,
respectively.
We also compare Wormhole with a highly optimized Cuckoo hash table when range queries
are not required. The results show that Wormhole achieves point-lookup throughput
30--92\% of the hash-table's throughput.
The rest of this paper is organized as below.
Section~\ref{sec:wh-index} introduces design of Wormhole's core data structure.
Section~\ref{sec:wh-opt} describes techniques for efficient implementation of the Wormhole index.
Section~\ref{sec:wh-eval} presents the experimental setup, workloads, and evaluation results.
Section~\ref{sec:wh-related} discusses the related work, and
Section~\ref{sec:wh-conc} concludes.
\section{The Wormhole Data Structure}
\label{sec:wh-index}
In this section we introduce the Wormhole index structure,
which has significantly lower asymptotic lookup time than existing ordered indexes,
without increasing its space demand or the cost of modification operations, such as insertion.
To help understand how Wormhole achieves this improvement,
we start from B+ tree and progressively evolve it to the structure of Wormhole.
\subsection{Background: Lookup in the B+ Tree}
\label{sec:wh-bt}
Figure~\ref{fig:arch1} shows a small set of 12 keys indexed in an example B+
tree, where each character is a token. While a key in the index is usually associated with a value,
we omit the values in the discussion and only use keys to represent
KV items to focus on time and space costs of index
operations. The example B+ tree has one internal node (the root
node) and four leaf nodes. In the B+ tree all keys are placed in leaf
nodes while internal nodes store a subset of the keys to facilitate
locating search keys at leaf nodes.
Keys in a leaf node are usually sorted and all leaf
nodes are often linked into a fully sorted list to support range
operations with a linear scan on it. We name the sorted list
\textit{LeafList}, and the remaining structure of the index as
\textit{MetaTree}, as shown in Figure~\ref{fig:arch1}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.7\columnwidth]{21-arch1.pdf}
\caption{An example B+ tree containing 12 keys}
\label{fig:arch1}
\end{figure}
MetaTree is used to accelerate the process of
locating a leaf node that potentially stores a given
search key. A search within the leaf node is conducted
thereafter. Because a leaf node's size, or number of keys held in the
node, is bounded in a predefined range $[\lceil \frac{k}{2}\rceil, k]$ ($k$ is a predefined constant integer),
the search within a leaf node takes $O(1)$ time.
Accordingly, the major search cost is incurred in the MetaTree, which is $\log_2 \frac{N}{k}$,
or $O(\log N)$, where $N$ is the number of indexed keys.
As the B+ tree grows, the MetaTree will contain more levels of internal nodes,
and the search cost will increase at a rate of $O(\log N)$.
Our first design effort is to replace the MetaTree with a structure whose search cost is not tied to $N$.
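As a sanity check of this cost analysis, the sketch below (our model, with an assumed leaf capacity of 64) splits a lookup into the MetaTree's $\log_2 \frac{N}{k}$ comparisons plus at most $\log_2 k$ inside one leaf; for fixed $k$ the leaf term is a constant, and the two terms sum to $\log_2 N$:

```python
import math

def bptree_lookup_cost(n_keys: int, leaf_cap: int) -> float:
    """Comparison-count model of a B+ tree lookup: locate the leaf
    among N/k leaves (MetaTree), then search inside one leaf."""
    n_leaves = max(1, n_keys // leaf_cap)
    meta = math.log2(n_leaves)   # MetaTree: log2(N/k)
    leaf = math.log2(leaf_cap)   # inside the leaf: constant for fixed k
    return meta + leaf

print(round(bptree_lookup_cost(10**6, 64), 1))  # about log2(10^6), i.e. ~19.9
```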
\subsection{Replacing the MetaTree with a Trie}
\label{sec:wh-trie}
An intuitive idea on B+ tree's improvement is to replace its MetaTree structure with a hash table,
as illustrated in Figure~\ref{fig:archx}. This can reduce the search cost to $O(1)$. However, this
use of hash table does not support inserting a new key at the correct position in the sorted LeafList.
It also does not support range queries whose boundary keys are not present in the index,
such as searching for keys between ``Brown'' and ``John'' or for keys with the prefix ``J''
in the example index shown in Figure~\ref{fig:archx}, where ``Brown'' and ``J'' are not in the index.
Therefore, the MetaTree itself must organize keys in an ordered fashion.
Another issue is that the hash table requires an entry (or pointer) for every key in the index,
demanding a space cost higher than MetaTree.
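A small sketch (with hypothetical keys) shows the problem concretely: a hash table answers point lookups in $O(1)$, but a range boundary absent from the index yields no routing information, and a range query degenerates into a full scan:

```python
# Hypothetical index stored as a plain hash table (values stand for leaf slots).
index = {b"Austin": 1, b"James": 2, b"Joseph": 3}

# Point lookups on existing keys are O(1):
print(index.get(b"James"))

# But a range boundary that is not itself a key gives no routing information:
print(index.get(b"Brown"))   # None -- which leaf holds the successor of "Brown"?

# Answering "keys in [Brown, John)" forces a scan over the whole table:
hits = sorted(k for k in index if b"Brown" <= k < b"John")
print(hits)                  # O(N) work, not O(log N)
```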
\begin{figure}[!t]
\centering
\includegraphics[width=0.69\columnwidth]{24-naive-hash.pdf}
\caption{Replacing B+ tree's \textit{MetaTree} with hash table}
\label{fig:archx}
\end{figure}
To address the issues, trie can be a better replacement as it is an ordered index and its lookup cost
($O(L)$, where $L$ is the search key length) is not tied to $N$, the number of keys in the index.
Figure~\ref{fig:arch2} illustrates the index evolved from B+ tree
with its MetaTree structure replaced by a trie structure named \textit{MetaTrie}.
For each node in the LeafList we create a key as its \textit{anchor} and insert it into MetaTrie.
A node's anchor key is to serve as a borderline between this node and the node immediately
on its left, assuming the sorted LeafList is laid out horizontally in an ascending order
as shown in Figure~\ref{fig:arch2}. Specifically, the anchor key (\textit{anchor-key}) of a node (Node$_b$),
must meet the following two conditions:
\begin{itemize}[itemsep=0ex,topsep=0ex,partopsep=0ex,parsep=0ex]
\item \textbf{The Ordering Condition}: $\textit{left-key}<\textit{anchor-key}\leq\textit{node-key}$,
where \textit{left-key} represents any key in the node (Node$_a$) immediately left to Node$_b$,
and \textit{node-key} represents any key in Node$_b$.
If Node$_b$ is the left-most node in the LeafList,
the condition is $\textit{anchor-key}\leq\textit{node-key}$.
\item \textbf{The Prefix Condition}: An anchor key cannot be a prefix of another anchor key.
\end{itemize}
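The two conditions can be checked mechanically. The helper below is a hypothetical sketch (the sample keys are made up, not the figure's actual key set); keys and anchors are byte strings, which Python compares lexicographically:

```python
def valid_anchor(anchor: bytes, left_keys, node_keys, all_anchors) -> bool:
    """Check the two anchor-key conditions from the text.
    left_keys: keys of the node immediately to the left (empty for the
    left-most node); node_keys: keys of the anchored node."""
    # Ordering condition: left-key < anchor-key <= node-key
    if left_keys and not max(left_keys) < anchor:
        return False
    if node_keys and not anchor <= min(node_keys):
        return False
    # Prefix condition: no anchor may be a prefix of another anchor
    for other in all_anchors:
        if other != anchor and (other.startswith(anchor) or anchor.startswith(other)):
            return False
    return True

# Hypothetical anchors and keys:
anchors = [b"\x00", b"Au", b"Jam", b"Jos"]
print(valid_anchor(b"Jam", [b"Austin", b"Jacob"], [b"James", b"Jason"], anchors))
```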
\begin{figure}[!t]
\centering
\includegraphics[width=0.7\columnwidth]{22-arch2.pdf}
\caption{Replacing B+ tree's \textit{MetaTree} with \textit{MetaTrie}}
\label{fig:arch2}
\end{figure}
When an anchor key is inserted into the MetaTrie,
one new leaf node corresponding to the key is created in the trie. In addition, every prefix of the key is inserted into the trie as an internal node, if it is not already present.
We use the prefix condition to make sure every anchor key has a corresponding leaf node in the MetaTrie.
In the formation of an anchor key, we aim to minimize the key length to reduce the MetaTrie size.
To this end we design a method to form an anchor key for the aforementioned Node$_b$ in compliance
with the two conditions, assuming the smallest token, denoted by $\bot$,
does not appear in regular keys (other than the anchor keys) on the LeafList.
We will remove the restriction on use of the smallest token later.
We denote the smallest key in Node$_b$ as $\langle P_1P_2...P_kB_1B_2...B_m\rangle$
and the largest key in Node$_a$ as $\langle P_1P_2...P_kA_1A_2...A_n\rangle$,
and $A_1 < B_1$, where $P_i$ ($1\leq i\leq k$), $A_i$ ($1\leq i\leq n$),
and $B_i$ ($1\leq i\leq m$) represent the keys' tokens. If $k$ or $n$ is 0, the corresponding key segment does not exist.
Accordingly, $\langle P_1P_2...P_k\rangle$ is the longest common prefix of the two keys.
Assuming Node$_b$ is a new leaf node whose anchor key has not been determined, we form its anchor key as follows:
\begin{itemize}[itemsep=0ex,topsep=0ex,partopsep=0ex,parsep=0ex]
\item If Node$_b$ is not the left-most node on the LeafList ($m>0$),
we will check whether $\langle P_1P_2...P_kB_1\rangle$
is a prefix of the anchor key of the node immediately after
Node$_b$ on the LeafList, denoted Node$_c$.\footnote{Note
that if $\langle P_1P_2...P_kB_1\rangle$ is a prefix of any other anchor,
it must be a prefix of Node$_c$'s anchor.}
If not (including the case where Node$_c$ does not exist),
Node$_b$'s anchor is $\langle P_1P_2...P_kB_1\rangle$.
Otherwise,
Node$_b$'s anchor is $\langle P_1P_2...P_kB_1\bot\rangle$,
which cannot be a prefix of Node$_c$'s anchor.
This is because the $(k+2)$\textsuperscript{th} token of Node$_c$'s anchor key must be larger than $\bot$.
We then check whether Node$_a$'s anchor is a prefix of Node$_b$'s anchor
(Node$_a$'s anchor is $\langle P_1P_2...P_j\rangle$, where $j \leq k$).
If so, Node$_a$'s anchor will be changed to $\langle P_1P_2...P_j\bot\rangle$. Note that by appending the $\bot$ token to meet the anchor key's prefix condition, its ordering condition can be violated. To accommodate the situation, the $\bot$ is ignored in the ordering condition test without compromising the correctness.
\item Otherwise (Node$_b$ is the left-most node), its anchor is $\bot$,
which is not any other anchor's prefix.
\end{itemize}
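The anchor-formation rules above can be sketched as follows (our illustration: it covers the candidate construction and the $\bot$-appending case, but omits the follow-up adjustment of Node$_a$'s anchor; a zero byte stands in for $\bot$, and the sample keys are hypothetical):

```python
BOT = b"\x00"   # stands for the smallest token (bot), assumed absent from regular keys

def make_anchor(max_left_key, min_node_key, next_anchor):
    """Form the anchor for a new node Node_b.
    max_left_key: largest key of Node_a, or None if Node_b is left-most;
    min_node_key: smallest key of Node_b;
    next_anchor: anchor of Node_c (the node after Node_b), or None."""
    if max_left_key is None:
        return BOT                        # left-most node: anchor is bot
    # Longest common prefix <P1..Pk>, then the first differing token B1:
    k = 0
    while (k < len(max_left_key) and k < len(min_node_key)
           and max_left_key[k] == min_node_key[k]):
        k += 1
    candidate = min_node_key[:k + 1]      # <P1..Pk B1>
    if next_anchor is not None and next_anchor.startswith(candidate):
        candidate += BOT                  # avoid being a prefix of Node_c's anchor
    return candidate

print(make_anchor(b"John", b"Joseph", None))   # b'Jos'
```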
Using this method, the four leaf nodes in Figure~\ref{fig:arch2}, starting from the left-most one,
have their respective anchors as ``$\bot$'', ``$Au$'', ``$Jam$'', and ``$Jos$''.
All the anchors and their prefixes are inserted into the MetaTrie.
\subsection{Performing Search on MetaTrie}
\label{sec:trie1-lookup}
The basic lookup operation on the MetaTrie with a search key is similar to that in a conventional
trie structure, which is to match tokens in the key to those in the trie
one at a time and walk down the trie level by level accordingly. If the search key is ``Joseph''
in the example index shown in Figure~\ref{fig:arch2}, it will match the anchor key ``Jos'',
which leads the lookup to the last leaf node in the LeafList. The search key is the first one
in the node. However, unlike lookup in a regular trie, when matching of the search key with
an anchor key fails before a leaf node is reached, there is still a chance that the key is
in the index. This is because the keys are stored only at the LeafList and are not directly
indexed by the trie structure. One example is to look up ``Denice'' in the index,
where matching of the first token `D' fails, though the search key is in a leaf node.
Furthermore, when a search key is matched with a prefix of an anchor key,
there is still a chance the search key is not in the index.
An example is to look up ``A'' in the index.
To address the issue, we introduce the concept of \textit{target node} for a search key $K$.
A target node for $K$ is the leaf node whose anchor key $K_1$ and immediately next anchor key
$K_2$ satisfy $K_1\leq K<K_2$, if the anchor key $K_2$ exists. Otherwise,
the last leaf node on the LeafList is the search-key's target node. If a search key is in the index,
it must be in its target node. The target nodes of ``A'', ``Denice'', and ``Joseph'' are the first,
second, and fourth leaf nodes in Figure~\ref{fig:arch2}, respectively.
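The target-node definition can be captured by a naive Python sketch that scans the anchor list directly (the remainder of this section shows how Wormhole locates the target node without such a scan); the anchors are those of Figure~\ref{fig:arch2}, with $\bot$ modeled as the NUL character:

```python
def target_node(anchors, key):
    """Index of the target leaf node: the last node whose anchor is <= key.
    The left-most anchor is BOT, which is smaller than any regular key."""
    for i in range(len(anchors) - 1, -1, -1):
        if anchors[i] <= key:
            return i
    return 0

anchors = ["\x00", "Au", "Jam", "Jos"]
# "A" -> node 0, "Denice" -> node 1, "Joseph" -> node 3, as in the text.
```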
The question is how to identify the target node for a search key.
Looking for a search-key's target node is a process of finding its longest prefix that matches an
anchor's prefix. A (short) prefix of the search key can be a prefix of multiple anchors. However,
if a (long) prefix of the search key is found to be equal to an anchor key, that anchor is unique:
because of the prefix condition for being an anchor, the prefix cannot also be another anchor's prefix.
Clearly this unique anchor key is not larger than the search key. Furthermore, if this anchor has a
next anchor, then by the anchor definition it is smaller than the next anchor and is not its prefix;
since the anchor is a prefix of the search key, the search key must also be smaller than the next anchor.
Accordingly, the anchor's leaf node is the target node of the search key. In the example,
the unique anchor of search key ``Joseph'' is ``Jos'',
which can be found by walking down the MetaTrie with the search key.
If there is no such anchor that is a prefix of a search key, such as ``Denice'' in
Figure~\ref{fig:arch2}, we cannot reach a leaf node by matching the key's tokens with
anchors one token at a time starting at the first token. The matching process breaks in
one of two situations. The first one is that a token in the key is found to be non-existent
at the corresponding level of the trie. For example, there isn't an internal node `D' at Level 1
(beneath the root at Level 0) of the trie to match the first token of the search key ``Denice''.
The second one is that tokens of the search key run out during the matching before a leaf node
is reached. An example is with the search key ``A''.
For the first situation, we assume that a search key's first $k$
tokens ($\langle T_1T_2...T_k\rangle$) are matched and $T_{k+1}$ at Level $k+1$ of
the trie is the first unmatched token. Because $\langle T_1T_2...T_k\rangle$ is not an anchor,
there must exist a node matching
$\langle T_1T_2...T_kL\rangle$, a node matching $\langle T_1T_2...T_kR\rangle$, or both,
where tokens $L < T_{k+1} < R$. In other words, the two nodes are siblings of the hypothetical node
matching $\langle T_1T_2...T_{k+1}\rangle$, to its left and right, respectively.
We further assume that they are the immediate left and right siblings.
Rooted at the left or right sibling node is a subtree, named the left or right subtree, respectively.
If the left sibling exists, the search key's target node is the right-most leaf node of the left subtree.
If the right sibling exists,
the left-most leaf node of the right subtree is the target node's immediate next node on the LeafList.
As all leaf nodes are doubly linked,
the target node can be reached by walking backward on the LeafList by one node.
For search key ``Denice'' in the example, both subtrees exist, which are rooted at internal
nodes ``A'' and ``J'', respectively, and the target node (the second leaf node) can be reached
by either of the two search paths, as depicted in Figure~\ref{fig:arch2-lookup}.
For search key ``Julian'', only the left subtree (rooted at internal node ``o'') is available and only
one search path down to the right-most leaf node exists to reach the target node (the fourth leaf node).
For the second situation, we can append the smallest token $\bot$ to the search key. As we assume
the token is not used in regular keys, $\bot$ becomes the first unmatched token and we can follow
the procedure described for the first situation to find the search-key's target node. Note that in
this case only the right subtree exists. Figure~\ref{fig:arch2-lookup} shows the path to reach
the target node of the search key ``A'', which is the first leaf node.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\columnwidth]{25-arch2-lookup.pdf}
\caption{Example lookups on a \textit{MetaTrie} with search keys ``A'', ``Denice'', and ``Julian''.}
\label{fig:arch2-lookup}
\end{figure}
Once a target node for a search key is identified, further actions for lookup,
insertion, and deletion operations are straightforward.
For lookup, we will search in the target node for the key.
In Section~\ref{sec:wh-opt} we will present an optimization technique to accelerate the process.
Similar to those in the B+ tree, insertion and deletion of a key may lead to splitting
of a leaf node and merging of adjacent leaf nodes to ensure that a node does not grow
over its predetermined capacity and does not shrink below a minimum size.
The difference is that in Wormhole the splitting and merging operations are not (recursively) propagated to the
parent nodes, as they are in a B+ tree, to balance leaf-node heights.
The only operations in the MetaTrie are removing anchors for merged nodes
or adding new anchors for split nodes.
To remove an anchor, only the trie nodes exclusively used by the anchor are to be removed.
This composite index structure is more space-efficient than a conventional trie index:
it stores multiple keys in each leaf node and inserts into the trie only anchors,
which are usually (much) shorter than the keys. Its search time is practically independent of the number of keys in the index,
and is only proportional to anchor lengths, which can be further reduced by intelligently choosing
the location where a leaf node is split (we leave this optimization as future work).
However, in the worst case the search time can still be $O(L)$, where $L$ is the length of
a search key. With a long key, the search time can still be substantial. In the following
we will present a technique to further reduce the search cost to $O(\log L)$.
\subsection{Accelerating Search with a Hash Table}
\label{sec:wh-boost}
In the walk from the root of a MetaTrie to a search-key's target leaf node, there are two phases.
The first is to conduct the longest prefix match (LPM) between the search key
and the anchors in the trie. If the longest matched prefix is not equal to an anchor, the second phase
is to walk down a subtree rooted at a sibling of the token next to the matched prefix of the search key.
The $O(L)$ cost of each of the phases can be significantly reduced.
For the first phase, to obtain the LPM we do not have to walk on the trie along a path from the root
token by token. Waldvogel et al. proposed to use \textit{binary search on prefix lengths}
to accelerate the match for routing table lookups~\cite{WVT97}.
To apply the approach, we insert all prefixes of each anchor into a hash table.
In Figure~\ref{fig:arch2-lookup}'s index, ``Jam'' is an anchor, and accordingly its prefixes
(``'', ``J'', ``Ja'', ``Jam'') are inserted in the hash table. We also track the MetaTrie's height,
or the length of the longest anchor key, denoted $L_{\text{anc}}$.
Algorithm~\ref{alg:bsearch} depicts how a binary search for a search key of length
$L_{\text{key}}$ is carried out.
As shown, the longest prefix can be found in
$O(\log (\min(L_{\text{anc}}, L_{\text{key}})))$ time.
In the example index for search key ``James'' it takes two hash-table lookups
(for ``Ja'' and ``Jam'') to find its longest common prefix (``Jam'').
\begin{algorithm}[t]
\begin{algorithmic}[1]
\caption{Binary Search on Prefix Lengths}
\label{alg:bsearch}
\scriptsize
\Function{searchLPM}{search\_key, L$_{\text{anc}}$, L$_{\text{key}}$}
\State m $\gets$ 0;\quad\quad n $\gets$ min(L$_{\text{anc}}$, L$_{\text{key}}$)+1
\While{(m+1) < n}
\State prefix\_len $\gets$ (m+n)/2
\If{search\_key[0 : prefix\_len-1] is in the hash table}
\State m $\gets$ prefix\_len
\Else\quad
n $\gets$ prefix\_len
\EndIf
\EndWhile
\State \Return search\_key[0 : m-1]
\EndFunction
\end{algorithmic}
\end{algorithm}
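A direct Python rendering of Algorithm~\ref{alg:bsearch}, with the hash table modeled as a set holding every anchor prefix (an illustrative sketch only):

```python
def search_lpm(search_key, prefixes, l_anc):
    """Binary search on prefix lengths over a set of all anchor prefixes."""
    m, n = 0, min(l_anc, len(search_key)) + 1
    while m + 1 < n:
        prefix_len = (m + n) // 2
        if search_key[:prefix_len] in prefixes:  # one hash-table lookup
            m = prefix_len
        else:
            n = prefix_len
    return search_key[:m]

# Prefixes of the anchors "Au", "Jam", and "Jos" (the BOT anchor omitted):
prefixes = {"", "A", "Au", "J", "Ja", "Jam", "Jo", "Jos"}
# search_lpm("James", prefixes, 3) finds "Jam" with two set lookups
# ("Ja", then "Jam"), matching the example in the text.
```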
The hash table is named \textit{MetaTrieHT}; it replaces the MetaTrie for indexing
the leaf nodes on the LeafList. The \textit{MetaTrieHT} for the MetaTrie in Figure~\ref{fig:arch2}
is illustrated in Figure~\ref{fig:arch3}. Each node in the MetaTrie corresponds to an item in
the hash table. If the node represents an anchor, and thus a leaf node, the hash item is a leaf item,
denoted `\texttt{L}' in Figure~\ref{fig:arch3}. Otherwise, the node is an internal node and the corresponding hash item is
an internal item, denoted `\texttt{I}'. With this hash table, the pointers that facilitate
node-to-node walks in the MetaTrie are no longer necessary in the MetaTrieHT,
as any prefix can be hashed directly into the structure to determine whether it exists.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{23-arch3.pdf}
\caption{The structure of Wormhole.
For clarity the bitmap is depicted by directly listing child tokens.}
\label{fig:arch3}
\end{figure}
Each hash item has two fields supporting efficient walk in the second search phase on a path
to a leaf node. The first field is a bitmap. It is meaningful only for internal items.
It has a bit for every possible child of the corresponding internal node in the trie.
The bit is set when the corresponding child exists. With the bitmap, sibling(s) of an unmatched
token can be located in $O(1)$ time. The trie node corresponding to a hash item can be considered
the root of a subtree. In the second phase it is necessary to know the right-most
or the left-most leaf node of this subtree. The second field of a hash item therefore contains two pointers,
each pointing to one of these two leaf nodes. Accordingly, the second phase takes constant time.
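The constant-time sibling location can be sketched as follows (our own representation, with the bitmap held in a Python integer; a real implementation would use hardware bit-scan instructions on a fixed-size bitmap):

```python
def siblings(bitmap, token):
    """Immediate left/right siblings of an unmatched `token`.
    Bit i of `bitmap` is set iff the internal node has a child with token value i."""
    below = bitmap & ((1 << token) - 1)   # children with smaller tokens
    above = bitmap >> (token + 1)         # children with larger tokens
    left = below.bit_length() - 1 if below else None                 # highest set bit
    right = token + 1 + (above & -above).bit_length() - 1 if above else None  # lowest set bit
    return left, right
```

For example, with children at token values 2, 5, and 9, the unmatched token 4 has immediate siblings 2 (left) and 5 (right).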
The index consisting of a LeafList and a MetaTrieHT represents Wormhole's core data structure. Its operations, including lookup (\texttt{GET}), insertion (\texttt{SET}),
deletion (\texttt{DEL}), and range search (\texttt{RangeSearchAscending}) are formally
depicted in Algorithms~\ref{alg:main}, \ref{alg:ancillary}, and~\ref{alg:splitmerge}.
The $O(\log L)$ time cost of Wormhole is asymptotically lower than the $O(\log N)$ of the B+ tree and
the $O(L)$ of the trie, where $L$ is the search key's length and $N$ is the number of keys in the index.
\input{04-algo.tex}
Regarding space efficiency, Wormhole is (much) better than a trie because it indexes multiple keys
per leaf node, rather than individual keys in the trie.
When compared to the B+ tree, it has the same number of leaf nodes.
Therefore, their relative space cost is determined by the amount of space held by their respective internal nodes.
Wormhole's MetaTrieHT is essentially organized as a trie,
whose number of nodes highly depends on its key contents.
While it is hard to quantitatively evaluate its space cost and compare it to that of the B+ tree
without assuming a particular workload, we analyze factors impacting the number.
Generally speaking, if the keys often share common prefixes,
many anchors will also share common prefixes,
and thus share nodes in the trie, which reduces the trie size.
On the other hand, if the keys are highly diverse, long common prefixes
between adjacent keys in the LeafList are less likely.
According to the rule of forming anchors, short common prefixes lead to short anchors.
Because it is anchors, instead of user keys, that are inserted into the trie,
short anchors lead to fewer internal nodes.
We will quantitatively measure and compare the space costs of Wormhole and B+ tree
with real-world keys in Section~\ref{sec:wh-eval}.
\subsection{Concurrency Support}
To provide strong support for concurrent operations and high scalability, Wormhole aims to minimize
its use of locks, especially big locks, and to minimize the impact of any lock on concurrency.
There are three groups of operations that require different levels of access exclusiveness.
The first group includes point and range lookups that do not modify the index and do not demand
any access exclusiveness among themselves. The second group includes insertions and deletions
whose required modifications are limited to one or more leaf nodes on the LeafList.
They demand access exclusiveness only at the leaf nodes. The third group includes insertions
and deletions that incur split and merge of leaf nodes and modifications of the MetaTrieHT
by adding or removing anchors and their prefixes in it. They demand exclusiveness at the relevant
leaf nodes and at the MetaTrieHT.
A design goal of Wormhole's concurrency control is to minimize the limit imposed by
insertions/deletions on the concurrency of lookup operations.
To this end, we employ two types of locks.
One is a reader-writer lock for each leaf node on the LeafList. For the second group of operations,
insertion/deletion of a key modifies only one leaf node, and accordingly only one node is locked
and becomes unavailable for lookup. For the third group of the operations with one key,
only one or two leaf nodes have to be locked for split or merge, respectively.
However, to add or remove an anchor's prefixes in the MetaTrieHT structure,
we may have to simultaneously acquire multiple locks for exclusive
access to (many) hash items (equivalently, trie nodes). To this end the second type of lock
is a single mutex on the entire MetaTrieHT, which grants exclusive access to an operation
adding or removing an anchor and its prefixes, instead of fine-grained locks
with much higher complexity and uncertain performance benefits.
However, as every key lookup requires access to the MetaTrieHT, a big lock imposed
on the entire table can substantially compromise the performance of the first two groups
of operations, which perform only read accesses on the MetaTrieHT.
To address this issue, we employ the QSBR RCU mechanism~\cite{RCU98,urcu} to enable lock-free access on MetaTrieHT for
its readers (the first two groups of operations).
Accordingly, only the writers of MetaTrieHT need to acquire the mutex lock.
To perform a split/merge operation, a writer first acquires the lock.
It then applies the changes to a second hash table (\textit{T2}),
an identical copy of the current MetaTrieHT (\textit{T1}).
Meanwhile, T1 is still accessed by readers.
Once the changes have been fully applied to T2, T2 will be made visible for readers to access
by atomically updating the pointer to the current MetaTrieHT through RCU,
which simultaneously hides T1 from new readers.
After waiting for an RCU grace period which guarantees T1 is no longer accessed by any readers,
the same set of changes is then safely applied to T1.
Now T1 is again identical to T2 and it will be reused as the second hash table for the next writer.
The extra space used by the second MetaTrieHT is negligible
because a MetaTrieHT, containing only the anchor keys,
is consistently small in size compared with the size of the entire index structure.
As an example, for the eight keysets used in our evaluation (see Table~\ref{tab:datasets}),
the extra space consumed by the second table is only 0.34\% to 3.7\% of the whole index size.
When a lookup reaches a leaf node on the LeafList after searching on a MetaTrieHT,
it needs to make sure that the hash table it used is consistent with the leaf node.
For an insertion/deletion in the third group,
it first acquires lock(s) for relevant leaf node(s), from left to right if two or more leaf nodes are to be locked,
and then acquires the mutex lock for the MetaTrieHT.
With the locks both the leaf node(s) and the table can be updated.
To minimize readers' wait time on the critical section we release the locks
on the leaf nodes right after they have been updated.
To prevent lookups via an old MetaTrieHT from accessing updated leaf nodes,
including nodes being split or deleted, we use version numbers to check their consistency.
Each MetaTrieHT is assigned a version number.
The number is incremented by one for each split/merge operation where a new version of MetaTrieHT is made visible.
Each leaf node is assigned an expected version number, initialized as 0.
When a leaf node is locked for split/merge operation,
we record the current MetaTrieHT's version number plus 1 as the leaf node's expected version number.
A lookup remembers the MetaTrieHT's version number when it starts to access the MetaTrieHT,
and then compares the number with the expected number of the target leaf node it reaches.
If the expected number is greater, this lookup shall abort and start over.
The penalty of the start-overs is limited.
First, for a split/merge operation, only one or two leaf nodes have their version numbers updated.
Lookups targeting any other leaf nodes don't need to start over.
Second, the rate of split/merge is much lower than that of insert/delete operations.
Third, a start-over only needs to perform a second lookup on a newer version of MetaTrieHT,
which is read-only and much faster than an insert/delete operation.
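The lookup-side retry protocol can be sketched as follows (a simplified single-snapshot model with hypothetical minimal classes of our own; RCU read-side protection is omitted):

```python
class Leaf:
    def __init__(self, keys):
        self.keys = set(keys)
        self.expected_version = 0   # set to the writer's version + 1 on split/merge

class MetaTrieHT:
    def __init__(self, version, route):
        self.version = version      # incremented each time a new table is published
        self.route = route          # maps a key to its target leaf node

def lookup(index, key):
    """Optimistic lookup with the version-number consistency check."""
    while True:
        ht = index["current"]       # read the live table pointer (the RCU read)
        version = ht.version        # remember before searching
        leaf = ht.route(key)        # walk to the target leaf node
        if leaf.expected_version > version:
            continue                # leaf split/merged meanwhile: start over
        return key in leaf.keys

leaf = Leaf(["Joseph", "Julian"])
index = {"current": MetaTrieHT(version=5, route=lambda key: leaf)}
```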
\section{Optimization and Enhancement}
\label{sec:wh-opt}
While Wormhole's design provides significantly improved
asymptotic lookup time, we apply several optimization
techniques to maximize the efficiency of Wormhole's operations on MetaTrieHT
and LeafList. We will also discuss how to remove the assumption that a reserved $\bot$
token never appears in user keys. All the techniques
described in this section are also covered in Algorithms~\ref{alg:main},
\ref{alg:ancillary}, and~\ref{alg:splitmerge}.
\subsection{Improving Operations in MetaTrieHT}
A lookup performs two major operations in the MetaTrieHT, each involving
a sequence of prefixes of the search key; they can be CPU-intensive or
memory-intensive, respectively. The first operation is to compute a prefix's
hash value as an index into the MetaTrieHT hash table. The second
is to read the prefix in the table and compare it with the search-key's
corresponding prefix. Wormhole conducts these operations for each
of its selected prefixes during the binary search for the longest
prefix match, whereas a hash-table-based index requires them only
once per search key. We aim to reduce their CPU and memory-access
costs, respectively, and make them comparable with those of
hash-table-based indexes.
Regarding the first operation, the cost of some commonly used hash
functions, such as that for CRC~\cite{crc} and
xxHash~\cite{xxHash}, is approximately proportional to their
input lengths. By reducing the lengths, we can reduce the hashing
cost. Fortunately, there exist incremental hash functions, including
both CRC and xxHash. Such a function can leverage the previously computed hash
value of an input string when computing the hash value of an extended
string, composed of that string followed by an increment. In this
case it does not need to rehash the longer string from scratch.
Taking advantage of the above properties, Wormhole uses incremental hashing
whenever a prefix match is found and the prefix is extended during
its binary common-prefix search.\footnote{CRC-32c is used in our
implementation.} In this way, the average number of tokens used for
hashing in a lookup of a search key of length $L$ is reduced from
$\frac{L}{2}\log_2 L$ to only $L$, comparable to that of a hash table lookup.
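The incremental property can be demonstrated with Python's standard \texttt{zlib.crc32} (plain CRC-32 rather than the CRC-32c used in Wormhole, but incremental in the same way):

```python
import zlib

def crc_extend(running, increment):
    """Extend a running CRC with appended bytes, avoiding a rehash from scratch."""
    return zlib.crc32(increment, running)

# Hashing "Jam" from scratch equals hashing "Ja" and then extending by "m",
# so each longer prefix in the binary search only pays for its new tokens.
assert crc_extend(zlib.crc32(b"Ja"), b"m") == zlib.crc32(b"Jam")
```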
Regarding the second operation, each prefix match may
involve multiple prefixes stored in a hash bucket. In the process
many memory accesses may occur,
including dereferencing pointers to prefixes
and reading potentially long prefixes spanning several
cache lines. These accesses are likely cache misses.
To reduce the cache misses, we organize 8 prefixes in an array
of a cache-line size (64 bytes), named hash slot
(see Figure~\ref{fig:hash-slot}). Each element in the array
consists of a 16-bit tag hashed from the prefix and a 48-bit
pointer to the original prefix.\footnote{On x86-64
only the low-order 48 bits are used in virtual memory address.}
In a lookup, key comparisons are performed only for prefixes with
a matching tag, which effectively reduces the average number of
key comparisons to almost one per lookup. Similar approaches have been
widely used in high-performance hash tables~\cite{FAK13,BZG16,LAK14}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{26-hash-slot.pdf}
\caption{Structure of Wormhole's hash table}
\label{fig:hash-slot}
\end{figure}
However, it takes multiple hash-table lookups to find an LPM,
which still leads to multiple key-comparisons for a lookup on Wormhole.
To further reduce this overhead,
we first optimistically trust all tag-matches and omit key-comparisons
in every hash-table lookup until finding a seemingly correct LPM.
Tag comparisons may produce false-positive matches, which can
lead the binary search to a wrong prefix that is longer than
the correct one.
To detect this error, a full key comparison is
performed at the last prefix after the binary search.
If it is a mismatch, the search will start over with full prefix comparisons.
Note that there are no false-negative matches in this approach.
Accordingly, it always produces the correct longest prefix if false-positive matches
do not occur. With the 16-bit tags produced by a well-designed hash
function, the probability of such an error is only
$0.0153\%$ for 1024-byte keys ($1-(\frac{2^{16}-1}{2^{16}})^{10}$, where $10 = \log_2 1024$ is the number of hash-table lookups).
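The quoted probability can be checked directly: a 1024-byte key incurs $\log_2 1024 = 10$ hash-table lookups, each with a $2^{-16}$ tag false-positive rate:

```python
import math

L = 1024
lookups = int(math.log2(L))               # 10 lookups in the binary search
p_error = 1 - (1 - 2**-16) ** lookups     # at least one false-positive tag match
assert round(p_error * 100, 4) == 0.0153  # i.e., about 0.0153%
```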
\subsection{Improving Operations in Leaf Node}
Once a target leaf node is identified, a search of a key within
the node is carried out. As keys in the node are sorted, a binary
search may be used during the search. Similar to the issue of many
memory accesses in the MetaTrieHT, accessing a number of original
(long) keys for comparison can be very expensive. Accordingly, we also
calculate a 16-bit hash tag for each key and place the tags in a tag
array in ascending hash order. A search is then conducted on the
compact tag array. Only when a tag is matched will its corresponding
key be read and compared, which substantially reduces the number of
memory references.
We then further reduce the number of comparisons on the tag array using a direct
speculative-positioning approach. If a hash function that uniformly
hashes keys into the tag space is employed, the tag values themselves
are well indicative of their positions in the array. Specifically,
with a tag of value $T$ computed from a search key, we first
compare it with the tag at position $\frac{k\times T}{T_{\text{max}}+1}$
in the tag array, where $k$ is the number of tags in the array and
$T_{\text{max}}$ is the largest possible tag value.
If there isn't a match at the position, we will compare it with its neighboring
tags. Using the lower 16-bits of a (CRC-32c) hash value as the tag,
it usually takes only 1 to 3 tag comparisons
to complete the search in a node of 128 keys.
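Speculative positioning on the sorted tag array can be sketched as follows (an illustrative version with naive outward probing):

```python
def find_tag(tags, tag, t_max=0xFFFF):
    """Search a sorted tag array, starting from the speculated position
    k*T/(t_max+1) and probing outward to neighboring slots."""
    k = len(tags)
    pos = (k * tag) // (t_max + 1)
    for step in range(k):                 # probe pos, pos-1, pos+1, pos-2, ...
        for p in (pos - step, pos + step):
            if 0 <= p < k and tags[p] == tag:
                return p
    return -1

# With uniformly distributed tags, the speculated position is usually
# within a slot or two of the match, so only a few comparisons are needed.
```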
Another benefit of having the compact tag array
is that the original key array does not always have to stay sorted.
For efficiency, we may append newly inserted keys after the existing
keys in the key array without immediately sorting them,
as illustrated in Figure~\ref{fig:wh-leaf}.
The sorting on the key array can be indefinitely delayed
until a range search or split reaches the node.
Further, the batched sorting amortizes the cost of ordered insertions
when multiple unsorted keys are appended.
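The delayed-sorting behavior of a leaf node can be sketched with a hypothetical minimal class of our own:

```python
class LeafNode:
    """Keys are appended unsorted; a batch sort runs lazily before any
    operation that needs sorted order (range search or node split)."""
    def __init__(self):
        self.keys = []

    def insert(self, key):
        self.keys.append(key)   # O(1) append, no immediate sorting

    def range_ascending(self):
        self.keys.sort()        # amortized over all appends since the last sort
        return list(self.keys)

node = LeafNode()
for k in ("Jos", "Au", "Jam"):
    node.insert(k)
```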
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\columnwidth]{27-leaf-o1.pdf}
\caption{Wormhole's leaf node}
\label{fig:wh-leaf}
\end{figure}
\subsection{Wormhole with Any Key Tokens}
\label{sec:zero-keys}
We have assumed the existence of a token value that never appears in
regular keys, similar to an assumption in the design of
Masstree~\cite{MKM12}. Under this assumption, we designated
an unused value, denoted `$\bot$', as the smallest token value and
used it to extend a prefix so as to form an anchor satisfying the
rule that no anchor can be a prefix of another anchor.
By removing the assumption, we must allow the minimal token value, say binary zero,
to appear in keys.
This is not an issue for printable keys, in which 0 is not used.
However, a difficult situation arises for binary keys when a
potential anchor (generated due to node split) becomes a prefix of
another anchor that consists of the prefix and trailing zeroes.
For example, we cannot identify any position in the first
leaf node in Figure~\ref{fig:zero-keys} at which to split it and produce a
legitimate anchor. Suppose we split the node in the middle and
select binary ``100'' as the anchor. It is clearly a prefix of the next
anchor ``10000'' and thus violates the prefix condition. In this case, where
all keys in the node are composed of a common prefix ``1'' followed by varying numbers of trailing `0's,
there is no position at which we can split and form a new anchor.
To address this issue, we simply allow the leaf node to grow over the node
capacity into a \textit{fat node} without splitting it.
Note that the fat node is introduced mainly for correctness;
we believe it has virtually no impact on real systems.
For example, with a maximal node size of $N$, having a fat node requires that
there are at least $N+1$ keys sharing the same prefix but having different numbers of trailing zeroes.
In this case the longest key among them must have at least $N$ trailing zeroes.
With a moderate $N$ of 64 or 128, the fat node is unlikely to be seen with any real datasets.
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\columnwidth]{28-zero-keys.pdf}
\caption{Introducing a fat leaf node}
\label{fig:zero-keys}
\end{figure}
\section{Evaluation}
\label{sec:wh-eval}
In this section we experimentally evaluate Wormhole by comparing it with several
commonly used index structures, including B+ tree~\cite{C79}, skip list~\cite{skiplist},
Adaptive Radix Tree (ART)~\cite{LKN13}, and Masstree~\cite{MKM12}.
In the Wormhole prototype we use 128 as the maximum leaf-node size (number of keys in a leaf node).
We use the STX B+-tree~\cite{stxbptree}, a highly optimized in-memory B+ tree implementation that accommodates large datasets.
The B+ tree's fanout is set to 128, which yields the best result on our testbed.
We use the skip list implementation extracted from LevelDB~\cite{leveldb}.
ART is a trie-like index with a lookup cost of $O(L)$.
To reduce space consumption, ART adaptively selects its node sizes and
employs path compression to reduce the number of nodes.
We use an ART implementation available on GitHub~\cite{libart}.
Masstree is a trie-like index with a very high fanout (up to $2^{64}$).
With this high fanout it is impractical to use arrays to hold children pointers in trie nodes.
Therefore, at each trie node it employs a B+ tree to index the children.
We use the publicly available source code of Masstree from
its authors in the evaluation~\cite{masstree}.
Among the five indexes, only Wormhole and Masstree employ fine-grained
RCU and/or locks, which enables thread-safe access for all of their index operations.
The other three indexes are not designed with built-in concurrency control mechanisms.
For example, LevelDB needs to use an external mutex lock to synchronize writers on its skip list.
For a fair comparison, we compare Wormhole with their thread-unsafe implementations
only under read-only or single-writer workloads.
Experiments are run on a Dell R630 server with two 16-core Intel Xeon E5-2697A v4 CPUs,
each with 40\,MB LLC. To minimize the interference between threads or cores,
hyper-threading is turned off from BIOS and we use one NUMA node to run the experiments.
The server is equipped with 256\,GB DDR4-2400 ECC memory (32\,GB$\times$8) and
runs a 64-bit Linux (v4.15.15).
To evaluate Wormhole in a networked environment, we connect two identical servers
of the above configuration with a 100\,Gb/s Infiniband (Mellanox ConnectX-4).
Requests of index operations are generated from one server and are sent to the other for processing.
\begin{table}[t!]
\centering
\footnotesize
\caption{Description of Keysets} \label{tab:datasets}
\begin{tabular}{l|l|r|r}
\toprule
\makecell[c]{Name} & \makecell[c]{Description} & \makecell[c]{Keys \\
($\times 10^6$)} & \makecell[c]{Size \\ (GB)} \\
\hline
Az1 & \makecell[l]{Amazon reviews metadata, \\
avg. length: 40\,B, format: \textit{item-user-time}} & 142 & 8.5 \\
\hline
Az2 & \makecell[l]{Amazon reviews metadata, \\
avg. length: 40\,B, format: \textit{user-item-time}} & 142 & 8.5 \\
\hline
Url & \makecell[l]{URLs in Memetracker,
avg. length: 82\,B} & 192 & 20.0 \\
\hline
K3 & Random keys, length: 8\,B & 500 & 11.2 \\
\hline
K4 & Random keys, length: 16\,B & 300 & 8.9 \\
\hline
K6 & Random keys, length: 64\,B & 120 & 8.9 \\
\hline
K8 & Random keys, length: 256\,B & 40 & 10.1 \\
\hline
K10 & Random keys, length: 1024\,B & 10 & 9.7 \\
\bottomrule
\end{tabular}
\end{table}
We use publicly available datasets collected at Amazon.com~\cite{MY16} and MemeTracker.org~\cite{meme9}.
The original Amazon dataset contains 142.8 million product reviews with metadata.
We extract three fields (Item ID, User ID, and Review time) from the metadata to construct two keysets,
named \textit{Az1} and \textit{Az2}, by concatenating the fields in different orders
(see Table~\ref{tab:datasets}). Key composition varies with the order
and may impact an index's performance, especially for the trie-based indexes
(such as Wormhole). From the MemeTracker dataset we extract URLs and
use them as keys in the keyset named \textit{Url}.
For trie-based indexes a performance-critical factor is key length. We create five synthetic keysets,
each with a different fixed key length (from 8\,B to 1024\,B). Key count is selected to make sure
each keyset is of the same size (see Table~\ref{tab:datasets}). Key contents are randomly generated.
In the evaluation we are only concerned with the performance of index access, and we skip accessing the
values in the KV items. In the experiments, search keys are uniformly selected
from a keyset to generate a large working set, so that an index's performance is not overshadowed
by the effect of CPU caching. We use 16 threads to concurrently access the indexes
unless otherwise noted.
\subsection{Lookup Performance}
\label{sec:eval-lookup}
In the experiments for measuring lookup throughput, we insert each of the keysets into an index,
then perform lookups on random keys in the index.
We first measure the single-thread throughput of the indexes
and see how they scale with the number of threads.
The results with \textit{Az1} keyset are shown in Figure~\ref{fig:threads}.
With one thread, Wormhole's throughput is 1.266\,MOPS (million operations per second),
about 52\% higher than that of ART (0.834\,MOPS), the second-fastest index in this experiment.
All of the five indexes exhibit good scalability.
As an example, Wormhole's throughput with 16 threads (19.5\,MOPS) is 15.4$\times$ of that with one thread.
In addition, it's 43\% higher than that of ART with 16 threads.
We also create a thread-unsafe version of wormhole index (namely \textit{Wormhole-unsafe})
by not using of the RCU and the locks.
As shown in Figure~\ref{fig:threads},
the thread-unsafe Wormhole reaches 21.2\,MOPS, a 7.8\% increase of its thread-safe counterpart.
Since the results with the other keysets all show trends consistent with those described above,
we omit them from this paper.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{34-threads.pdf}
\caption{Lookup throughput with different numbers of threads.
The \textit{Az1} keyset is used in this experiment.}
\label{fig:threads}
\end{figure}
We then investigate Wormhole's performance with different keysets.
We use 16 threads for the following experiments unless otherwise noted.
The throughput results with the eight keysets are shown in Figure~\ref{fig:tpyh16}.
Wormhole improves the lookup throughput by 1.3$\times$ to 4.2$\times$ when compared
with the best results among the other indexes for each keyset.
Compared with the B+ tree and the skip list, the other three indexes exhibit higher throughput
variation due to their use of trie structures and the varying key lengths across keysets.
Moreover, the throughput of Masstree and ART is more tightly correlated with key length than that of Wormhole.
Masstree substantially outperforms B+ tree and skip list with short keys
(e.g., \textit{K3} and \textit{K4}). However,
its throughput drops quickly with longer keys (e.g., \textit{Url}, \textit{K8}, and \textit{K10}).
Wormhole's lookup throughput is much less affected by key-length variation because
its cost is determined by the anchor length ($L_{\text{anc}}$), which is usually (much) smaller
than the average key length. Specifically, the cost is determined by
$\log (\text{min}(L_{\text{anc}}, L_{\text{key}}))$, rather than by $L$ as in Masstree and ART
(see Algorithm~\ref{alg:bsearch}).
In \textit{Url} the URLs often share long common prefixes, which leads to long anchors
(about 40\,B on average, as measured) in Wormhole. Even so, Wormhole still outperforms
the others by at least 1.7$\times$.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{33-get-yh-16.pdf}
\caption{Lookup throughput on local CPU}
\label{fig:tpyh16}
\end{figure}
Various optimizations are applied in Wormhole's implementation,
including tag matching in MetaTrieHT (\textit{TagMatching}), incremental hashing (\textit{IncHashing}),
sorting by tags at leaf nodes (\textit{SortByTag}), and direct speculative positioning in the leaf nodes (\textit{DirectPos}).
To see how much each optimization quantitatively contributes to Wormhole's improvement,
we incrementally apply them one at a time to a basic Wormhole version without the optimizations
(\textit{BaseWormhole}). Figure~\ref{fig:wh-break} shows the throughput of Wormholes without and with
the incrementally added optimizations, as well as that of B+ tree as a baseline on different keysets.
As shown, BaseWormhole improves the throughput by 1.26$\times$ to 2.25$\times$ over the B+ tree baseline.
After two optimizations (TagMatching and IncHashing) are applied,
the improvement increases to 1.4$\times$ to 2.6$\times$. The index workloads are memory-intensive,
and memory access efficiency plays a larger role than CPU in an index's overall performance.
As TagMatching reduces memory accesses, and corresponding cache misses,
it contributes more to throughput improvement than IncHashing,
which reduces CPU cycles and has a contribution of only about 3\%.
A more significant improvement is achieved with SortByTag and DirectPos applied at the leaf nodes.
At the leaf nodes, SortByTag removes expensive full-key comparisons; its contribution is bigger
with keysets of longer keys. DirectPos dramatically reduces the number of tag comparisons from 6--7 to fewer than 3 (on average), and also contributes substantially to the throughput improvements
(though less significantly than SortByTag). Overall, with all the optimizations applied,
Wormhole improves the throughput by up to 4.9$\times$.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{70-break.pdf}
\caption{Throughput with optimizations applied. For each optimization,
the ones above it in the legend are also applied;
e.g., \texttt{+DirectPos} means all optimizations are applied.}
\label{fig:wh-break}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{39-ib-ib-16.pdf}
\caption{Lookup throughput on a networked key-value store}
\label{fig:tpib16}
\end{figure}
The network has often been considered a major potential bottleneck for client/server applications,
since a slow connection can overshadow any performance improvement made on the host side.
However, today's off-the-shelf network devices offer bandwidth
close to the speed of main memory.
For example, the aggregated bandwidth of three 200\,Gb/s Infiniband (IB) links
(3$\times$24\,GB/s) is close to that of a CPU's memory controller
(76.8\,GB/s for a Xeon E5 v4 CPU).
This ever-increasing network bandwidth makes performance of networked applications
more sensitive to the efficiency of the host-side CPU/memory usage.
To evaluate by how much Wormhole can improve performance of networked data-intensive applications,
we port the indexes to HERD, a highly optimized RDMA-enabled key-value store~\cite{rdmabench},
and run the lookup benchmarks over a 100\,Gb/s IB link.
We use a batch size of 800 (requests per batch) for RDMA sends and receives.
The throughput results are shown in Figure~\ref{fig:tpib16}.
Generally speaking, Wormhole is able to maintain its advantage over the other indexes,
which is comparable to the results on a single machine (Figure~\ref{fig:tpyh16}).
However, the peak throughput of Wormhole is decreased by 5\% to 20\% for most datasets.
For the \textit{K10} dataset, the large key size (1\,KB each) significantly inflates the size of each request.
In this setting with one IB link, the network bandwidth becomes the bottleneck that
limits the improvement of Wormhole.
As a result, with the \textit{K10} dataset Wormhole's throughput is only 37.5\% of that without the network,
and is only 30\% higher than that of B+ tree.
\subsection{Comparing with Hash Tables}
\begin{figure}[!t]
\centering
\includegraphics[width=0.65\columnwidth]{36-hashget-yh-16.pdf}
\caption{Lookup throughput of Wormhole and Cuckoo hash table}
\label{fig:wh-cuckoo}
\end{figure}
Wormhole aims to bridge the performance gap between ordered indexes and hash tables.
To see how close Wormhole's performance comes to that of hash tables,
we compare Wormhole with a highly optimized Cuckoo hash table~\cite{libcuckoo}.
The experimental results are shown in Figure~\ref{fig:wh-cuckoo}.
For the first seven keysets, Wormhole's throughput is about 31\% to 67\% of that of the hash table.
The \textit{K10} keyset has very long keys (1024\,B each), so a single key comparison
must access 16 cache lines, and the key-access cost dominates
lookup time in both indexes. Because both Wormhole (in the MetaTrieHT and the leaf nodes)
and the optimized hash table compare tags before full keys, they perform similar numbers of full-key accesses.
As a result, on this keyset Wormhole's throughput is close to that of the hash table.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{45-lohi-16.pdf}
\caption{Lookup throughput for keysets of short and long common prefixes}
\label{fig:wh-lohi}
\end{figure}
Besides key length, another factor affecting Wormhole's lookup efficiency is anchor length,
which determines the MetaTrieHT's size and its lookup time. With randomly generated
key contents, the anchors are likely very short. In reality, however, a key's true content may
occupy only the last several bytes, with the leading bytes filled with the same filler token,
such as `0'. To simulate this scenario, we form a number of keysets, each containing
10 million keys of a fixed size ($L$). Such a keyset, denoted $K_{\text{short}}$,
contains keys of random contents and is expected to have short anchors.
We then fill each key's first $L-4$ bytes with `0', and denote the resulting keyset $K_{\text{long}}$.
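The two keyset variants can be sketched as follows (illustrative code; the function names are ours):

```python
import os

def k_short(n: int, key_len: int) -> list:
    """Keys with fully random contents; anchors are expected to be short."""
    return [os.urandom(key_len) for _ in range(n)]

def k_long(n: int, key_len: int) -> list:
    """Keys whose first (key_len - 4) bytes are the filler token '0', so only
    the last 4 bytes differ; the long shared prefixes force long anchors."""
    prefix = b"0" * (key_len - 4)
    return [prefix + os.urandom(4) for _ in range(n)]
```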
Figure~\ref{fig:wh-lohi} shows lookup throughput on the two keysets, $K_{\text{short}}$ and $K_{\text{long}}$,
at different key lengths.
The Cuckoo hash table shows little throughput difference between the two keysets at various key lengths.
However, with longer anchors, Wormhole's throughput on $K_{\text{long}}$ is lower than that on $K_{\text{short}}$,
and this reduction grows with key length. With the longest keys (512\,B),
the correspondingly long anchors lead to more memory accesses
(e.g., $\log_2 512 = 9$ for LPM on the MetaTrieHT),
reducing Wormhole's throughput from about 78\% of the hash table's to only 40\%.
\subsection{Performance of other Operations}
\label{sec:other-ops}
In this section we use workloads that contain insertions.
Note that several of the indexes we use (skip list, B+ tree, and ART) cannot safely perform concurrent
accesses when a writer is present.
If we applied locking or used their lock-free/lockless variants to allow concurrent readers and writers,
their performance could be penalized by the extra overhead.
For a more rigorous and fair comparison,
we compare Wormhole with their implementations without concurrency control.
Accordingly, we use only one thread for insertion-only workloads,
and exclude the three thread-unsafe indexes from the evaluation with multi-threaded read-write workloads.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{35-set-yh-1.pdf}
\caption{Throughput of continuous insertions}
\label{fig:wh-insert}
\end{figure}
In the insertion-only workloads, keys from
a keyset are inserted into an initially empty index; the insertion throughput is shown in Figure~\ref{fig:wh-insert}. Wormhole's throughput is comparable
to that of the skip list on most keysets.
With short keys (e.g., \textit{K3} and \textit{K4}),
both Masstree and Wormhole show a higher throughput than comparison-based indexes
(B+ tree and skip list) as insertion of short keys has a low cost on a trie-like structure.
However, with longer keys (e.g., \textit{Url}) throughput of Masstree and Wormhole becomes lower.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{50-size.pdf}
\caption{Memory usage of the indexes}
\label{fig:wh-size}
\end{figure}
Once an index has been built for each of the keysets, we estimate its memory demand by
taking the difference of the resident memory sizes, reported by the \texttt{\small getrusage()}
system call, before and after the index is built.
Hugepages are disabled for this experiment to minimize memory wastage due to internal fragmentation.
In the indexes, space for each KV item is allocated separately and is reached via a pointer in an index node.
To establish a \textit{baseline} representing the minimal memory demand of a keyset,
we multiply the set's key count by the sum of the key length and a pointer's size.
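This baseline can be sketched as follows (a minimal illustration; the function name is ours):

```python
def baseline_bytes(key_count: int, key_len: int, ptr_size: int = 8) -> int:
    """Minimal memory demand of a keyset: each KV item contributes its key
    plus one pointer in an index node (the baseline defined above)."""
    return key_count * (key_len + ptr_size)

# e.g., 10 million 8-byte keys with 8-byte pointers -> 160,000,000 bytes
demand = baseline_bytes(10_000_000, 8)
```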
Memory demands of the indexes are shown in Figure~\ref{fig:wh-size}.
As shown, in most cases Wormhole's memory usage is comparable to
that of the B+ tree and the skip list.
Wormhole uses a small trie to organize its anchors and places the keys in
large leaf nodes. As anchors can be much shorter than keys, the space overhead
of the MetaTrieHT is further reduced, leading to higher space efficiency than the trie-based Masstree,
which places the keys in the trie structure.
Masstree's memory usage is significantly higher than that of the other indexes, except for
keysets with very short keys (e.g., \textit{K3}), where the entire index is actually managed
by a single B+ tree at the root trie node.
In contrast, ART has significantly higher space consumption with short keys
(\textit{K3} and \textit{K4}) due to its excessive number of trie nodes.
With longer keys, path compression helps amortize the space cost
by relatively reducing the number of trie nodes.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{60-rw-cc-16.pdf}
\caption{Throughput of mixed lookups and insertions}
\label{fig:wh-rw}
\end{figure}
We now evaluate Wormhole with workloads of mixed lookups and insertions using 16 threads.
As shown in Figure~\ref{fig:wh-rw}, we vary the percentage of insertions among 5\%, 50\%, and 95\% of the total operations to see
how Wormhole's performance is affected by operations that may update the MetaTrieHT.
In general, the trend of relative throughput between Masstree and Wormhole on different keysets
is similar with and without insertions (compare Figures~\ref{fig:tpyh16} and~\ref{fig:wh-rw}).
With more insertions, the throughput improvements of Wormhole over Masstree become smaller,
but remain substantial. With a big leaf node, most insertions do not update the MetaTrieHT,
and lookup time still accounts for a significant portion of the total operation cost.
Furthermore, Wormhole's concurrency control allows updates to the MetaTrieHT to impose
minimal constraint on lookup concurrency.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{65-scan-sc-16.pdf}
\caption{Throughput of range lookups}
\label{fig:wh-scan}
\end{figure}
To compare Wormhole with the other indexes on range operations,
we randomly select a search key and retrieve the (up to) 100 keys that follow it.
As range scan is not implemented in the ART source code, ART is omitted from this experiment.
The results for the various keysets are shown in Figure~\ref{fig:wh-scan}.
In a range search, much of the operation time is spent sequentially scanning a sorted list,
so Wormhole's performance advantage in reaching the first search key is dwarfed.
As a result, Wormhole's throughput improvement is reduced
(1.05$\times$ to 1.59$\times$ over the B+ tree).
However, as Masstree stores all keys in a trie structure, range queries are much more expensive due to
frequent pointer chasing on the trie, leading to much lower throughput than the other indexes.
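The range-lookup workload can be sketched as follows, using a sorted Python list to stand in for an index's sorted key order (an illustration, not the benchmark code):

```python
import bisect

def range_scan(sorted_keys, start_key, limit=100):
    """Seek to the first key >= start_key, then sequentially scan up to
    `limit` following keys; the sequential scan dominates the cost."""
    i = bisect.bisect_left(sorted_keys, start_key)
    return sorted_keys[i:i + limit]
```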
\section{Related Work}
\label{sec:wh-related}
Comparison-based ordered indexes are commonly used as in-memory indexes in popular SQL and NoSQL databases,
such as the B-tree (or B+ tree) in LMDB~\cite{lmdb} and MongoDB~\cite{MongoDB},
and the skip list in MemSQL~\cite{MemSQL} and LevelDB~\cite{leveldb}.
Because their lookup cost is bounded by $O(\log N)$, efforts to improve their lookup performance
have focused mainly on parallelism and caching efficiency.
For example, Bw-tree enables latch-free operations of B+ tree to improve lookup
efficiency on multi-cores~\cite{LLS13}.
FAST leverages architecture-specific knowledge to optimize B+-tree's
layout in the memory to minimize cache and TLB misses~\cite{KCS10}.
Many studies have proposed to use hardware accelerators, such as GPU,
to improve index lookups without changing the underlying data structure~\cite{HY11,ZWY15,KGP13,HSP13,SJ16}.
Wormhole takes a new approach to fundamentally reduce its asymptotic cost
to $O(\log L)$.
In addition to its algorithmic improvement,
Wormhole is further strengthened by a series of implementation optimizations.
Tries have been proposed to achieve a lookup cost lower than those of the comparison-based indexes.
ART adaptively changes the size of each trie node to minimize the space usage of the trie structure~\cite{LKN13}.
However, with a small fanout (256 in ART), the $O(L)$ lookup cost can be significant for long keys.
Masstree enables a very high fanout ($2^{64}$) by using a B+ tree at each trie node~\cite{MKM12}.
Accordingly, Masstree's lookup cost on the trie structure is practically reduced to 1/8 of that in ART.
However, with the high fanout a trie node may have to be represented by a large B+ tree,
which makes access on this trie node slow and offsets the benefit of having reduced trie height.
Wormhole's lookup efficiency is less sensitive to key length as it has an $O(\log L)$ lookup cost.
By using large leaf nodes to host the keys and a small trie to manage the anchors,
Wormhole achieves much better space efficiency than a plain trie.
Caching can effectively improve index lookups for workloads with strong locality.
For example, SLB uses a small cache to reduce the lookup cost for frequently
accessed data~\cite{WNJ17}.
However, caching is not effective for accessing cold data.
Wormhole improves the index structure itself, which reduces DRAM accesses even for workloads with little locality.
B$\varepsilon$-Tree is a B-tree-like index that allocates a buffer at each internal node
to reduce the high write amplification of the B-tree~\cite{BF03}.
However the use of buffers incurs an additional overhead for lookups.
Similarly, FloDB uses a hash table as a buffer ahead of the skip list in LevelDB to service write requests,
which removes expensive skip-list insertions from the critical path~\cite{BGT17}.
However, FloDB's hash table must be fully flushed before serving a range operation,
which can impose long delays on range queries.
Wormhole has a low lookup cost that benefits both read and write operations.
By quickly identifying the target leaf node for write operations, and by using hashed keys in the sorting,
write operations in Wormhole have a consistently low cost.
In addition to using fine-grained locks,
many synchronization approaches have been proposed for efficient access of shared data structures.
MemC3~\cite{FAK13} and Masstree~\cite{MKM12} use version numbers to enable lock-free
access for readers.
Atomic operations, such as CAS and LL/SC, have been extensively used to implement
lock-free lists and trees~\cite{FR04,NM14,BP12,BFG05}.
RCU has been extensively used for read-dominant data structures~\cite{urcu,mckenney2013rcu}.
Other approaches, such as transactional memory and delegation techniques,
have been extensively studied~\cite{ST95,HM93,REB17,HIS10}.
We employ fine-grained locking, RCU, and version numbers to enable an efficient thread-safe Wormhole index
that is only slightly slower than the thread-unsafe Wormhole.
While there could be many other choices for more efficient concurrency control in Wormhole,
we leave exploring them for future work.
\section{Conclusion}
\label{sec:wh-conc}
To the best of our knowledge, Wormhole is the first ordered key-value index to achieve
an $O(\log L)$ lookup cost,
which is better than the $O(\log N)$ or $O(L)$ cost of other ordered indexes,
assuming the key length $L$ is much smaller than the key count $N$.
The reduced asymptotic cost makes Wormhole capable of delivering quick access to KV items,
especially in challenging scenarios where the index manages a very large number of items with long keys.
Extensive evaluation demonstrates that Wormhole can improve index lookup throughput
by up to 8.4$\times$, 4.9$\times$, 4.3$\times$, and 6.6$\times$,
compared with skip list, B+ tree, ART, and Masstree, respectively.
Meanwhile, Wormhole's performance with other operations, including insertion, deletion,
and range query, is also higher than or comparable to other indexes.
Its space demand is as low as that of B+ tree.
The source code of an implementation of the Wormhole index is publicly available at
\mbox{\texttt{https://github.com/wuxb45/wormhole}}.
\section{Introduction}
\label{intro}
The discovery of the $J/\psi$, the first bound state of $c$ and $\overline{c}$ quarks, known as charmonium, was published in Ref.~\cite{Aubert:1974}, whereas Ref.~\cite{Augustin:1974} describes the first observation of the $\psi(2S)$; these discoveries marked the beginning of hadron spectroscopy as an important testing ground for the properties of the strong interaction within QCD. The charmonium system allows the prediction of some parameters of its states using non-relativistic
and relativistic potential models, lattice QCD, NRQCD and sum rules \cite{Brambilla2011}. Although the first charmonium state was discovered in 1974, there are still many puzzles in charmonium physics. The charmonium spectroscopy below the open-charm threshold has been well measured and agrees with theoretical expectations; however, there is still a lack of adequate experimental information and solid theoretical predictions for the charmonium states above the open-charm threshold \cite{PDGlatest}. Recently, many other new resonances, named $X$, $Y$, $Z$ particles, have been discovered and are still under examination, as these states do not match the predictions of the non-relativistic or semi-relativistic $q\bar{q}$ potential models.
In 1976, Siegrist and others of the MARK-I collaboration (SLAC) observed the resonance $\psi(4415)$ with mass $4415\pm7$ MeV \cite{Siegrist:1976}. In 1978, the DASP collaboration observed peaks for the $\psi(4040)$, $\psi(4160)$ and $\psi(4415)$ resonances with masses $4040\pm10$, $4159\pm20$ and $4417\pm10$ MeV, respectively, using a non-magnetic detector \cite{Brandelik:1978}. Ablikim and others of the BES collaboration and Mo and others at the Beijing Institute of HEP determined the resonance parameters for the $\psi(4040)$, $\psi(4160)$ and $\psi(4415)$ charmonia. Eichten identified these three resonances as the $3^3S_1$, $2^3D_1$ and $4^3S_1$ states with a linear-plus-Coulomb potential model \cite{Eichten1980}, and most later potential-model calculations agree with this identification. Recently, the LHCb collaboration measured the mass $4191_{-8}^{+9}$ MeV of the resonance $\psi(4160)$ with $J^{PC} = 1^{--}$ \cite{Aaij2013}. In 2007, a resonant structure was observed by the Belle collaboration with mass $4664\pm11\pm5$ MeV \cite{Wang:2007}, and one year later the same collaboration observed a clear peak in the $e^+e^-\rightarrow \Lambda_c^+\Lambda_c^-$ invariant mass distribution and interpreted the observed peak as a resonance with mass $4634_{-7}^{+8}{}_{-8}^{+5}$ MeV, possibly the $5^3S_1$ charmonium state \cite{Pakhlova:2008}.
Rapidis and others of the LGW collaboration at SLAC observed a resonance with mass $3772\pm6$ MeV, just above the threshold for the production of charmed particles \cite{Rapidis:1977}. In a parallel observation, W.~Bacino and others at SLAC discovered and confirmed the $\psi (3770)$ resonance with mass $3770\pm6$ MeV \cite{Bacino:1977}, and its parameters were determined by the SLAC and LBL collaborations \cite{Abrams:1979}. In 2006, the BES Collaboration performed precise measurements of the mass of the $\psi (3770)$ resonance \cite{Ablikim:2006}, and recently its parameters have been measured using data collected with the KEDR detector \cite{Anashin:2011}. The Belle collaboration reported the first observation of a new charmonium-like state with mass $3943\pm6\pm6$ MeV in the spectrum of masses recoiling against the $J/\psi$ in the inclusive process $e^+e^-\rightarrow J/\psi + \text{anything}$, and denoted it $X(3940)$ \cite{Abe:2007a}. Later, a new measurement of the $X(3940)$ was performed by the same collaboration and a mass of $3942_{-6}^{+7}\pm6$ MeV was reported \cite{Abe:2007b}. The $3^1S_0$ state can be a
good candidate for the $X(3940)$ resonance \cite{Sreethawong:2013,Wang:2016}.
Evidence for a new narrow resonance, $X(3823)$, was found by Belle \cite{Bhardwaj:2013}, with a mass near the potential-model expectations for the centroid of the $1^3D_J$ states. Recently, the BESIII Collaboration \cite{Ablikim:2015a} observed the narrow resonance $X(3823)$ through the process $e^{+}e^{-}\rightarrow \pi^{+}\pi^{-}X(3823)$ and confirmed that it is a good candidate for the $\psi(1^{3}D_{2})$ charmonium state.
In 2003, the Belle Collaboration observed a charmonium-like state in the decay process $B^{\pm} \rightarrow K^{\pm}\pi^+\pi^-J/\psi$ with mass $3872\pm0.6\pm0.5$ MeV \cite{Choi:2003}, which was confirmed by the CDF, D0 and BABAR experiments \cite{Acosta:2003,Abazov:2004,Aubert:2004}. Several properties of the $X(3872)$ have been determined \cite{Choi:2011,Aaltonen:2009,Aubert:2008gu}, and the CDF collaboration explained the $X(3872)$ as a conventional charmonium $c\bar{c}$ state with $J^{PC}$ either $1^{++}$ or $2^{-+}$ \cite{Abulencia:2006}. Recently, the BES III collaboration reported the first observation of the process $e^+e^-{\rightarrow}\gamma X(3872)$ with mass $3871\pm0.7\pm0.2$ MeV \cite{Ablikim:2013c}. In 2003, Barnes and Godfrey evaluated the strong and electromagnetic decays and considered all possible 1D and 2P charmonium assignments for the $X(3872)$ \cite{Barnes:2003}.
The $X(3915)$ was observed by S.~K.~Choi and collaborators at Belle \cite{Choi:2004}, and later the BABAR collaboration confirmed the existence of the charmonium-like resonance $X(3915)$ and measured its mass as $3919.4\pm2.2\pm1.6$ MeV with the $J^{PC}=0^{++}$ assignment \cite{delAmoSanchez:2010,Lees:2012a}. This state is conventionally identified as the $\chi_{c0}(2P)$ charmonium \cite{Liu:2009,Zhou:2015}. In 2005, the Belle Collaboration observed the $Z(3930)$ resonance in the $\gamma\gamma\rightarrow D\bar{D}$ process \cite{Uehara:2005} with mass $3929\pm5\pm2$ MeV and considered it a strong candidate for the $\chi_{c2}(2P)$ state.
The BABAR Collaboration confirmed the $Z(3930)$ resonance as the $\chi_{c2} (2P)$ state with mass $3926.7\pm2.7\pm1.1$ MeV and quantum numbers $J^{PC} = 2^{++}$ \cite{Aubert:2010}.
In 2013, the BESIII collaboration observed a new structure with mass $3899\pm3.6\pm4.9$ MeV in the $\pi^{\pm}J/\psi$ mass spectrum (referred to as $Z_{c}(3900)$) \cite{Ablikim:2013b}, and simultaneously the Belle collaboration also observed a structure with mass $3894.5\pm6.6\pm4.5$ MeV in the $\pi^{\pm}J/\psi$ mass spectrum \cite{Liu:2013}. The observations of Xiao and collaborators, based on $e^+e^-$ annihilations at $\sqrt{s}=4170$ MeV, provide independent confirmation of the existence of the $Z_{c}^{\pm}(3900)$ state and new evidence for the existence of the neutral member $Z_{c}^{0}(3900)$ \cite{Xiao:2013}. Recently, the BES III Collaboration performed an analysis favoring the assignment of the $J^{P}=1^{+}$ quantum numbers \cite{Ablikim:2015b}.
In 2009, the CDF collaboration reported evidence for a narrow structure near the $J/\psi\phi$ threshold in $B^+ {\rightarrow} J/\psi\phi K^+$ decays with mass $4143\pm2.9\pm1.2$ MeV \cite{Aaltonen:2009a}, and the structure was recently observed by the CMS \cite{Chatrchyan:2013} and D0 \cite{Abazov:2015,Abazov:2013} collaborations.
It has been suggested that the $X(4140)$ resonance could be a molecular state \cite{Liu:2009a,Branz:2009,Albuquerque:2009,Ding:2009}, a tetraquark state \cite{Stancu:2009,Wang:2015a,Anisovich:2015a} or a hybrid state \cite{Wang:2009,Mahajan:2009}. Searches for the narrow $X(4140)$ were negative in the
LHCb \cite{Aaij:2012pz} and BaBar \cite{Lees2014a} experiments. In 2011, the CDF Collaboration observed the $X(4140)$ structure with a statistical significance greater than 5 standard deviations and also found evidence for a second structure, $X(4274)$, with a mass of $4274.4_{-6.7}^{+8.4}\pm1.9$ MeV \cite{Aaltonen:2011}. Very recently, the LHCb Collaboration confirmed the resonance $X(4140)$ with mass $4146.5\pm4.5_{-2.8}^{+4.6}$ MeV and the $X(4274)$ with mass $4273.3\pm8_{-3.6}^{+17.2}$ MeV in the $J/\psi\phi$ invariant mass distribution, and determined their spin-parity quantum numbers to be $J^{PC} = 1^{++}$ for both \cite{Aaij:2016nsc}. They also found two new structures, named $X(4500)$ and $X(4700)$, in the high $J/\psi\phi$ mass region. Ref.~\cite{Lu:2016a} suggests that the $X(4274)$ can be a good candidate for the conventional $\chi_{c1} (3^{3}P_{1})$ state. A study of charmonium in a relativistic Dirac formalism with a linear confinement potential indicates that the $X(4140)$ state can be an admixture of two P-wave states, whereas the $X(4630)$ and $X(4660)$ are admixtures of S- and D-wave states \cite{Bhavsar:2018}.
The recently developed generalized screened potential model (GSPM) \cite{Gonzalez:2015}, the non-relativistic Coulomb-gauge QCD approach \cite{Guo:2014}, the light-front quark model (LFQM) \cite{Ke:2013}, the relativistic quark model \cite{Ebert2002}, the effective field theory framework of potential non-relativistic QCD (pNRQCD) \cite{Brambilla2006}, the effective
Lagrangian approach \cite{DeFazio:2008}, lattice
QCD \cite{Donald:2012ga,Liu:2012}, LCQCD and QCD sum rules \cite{Zhu:1998,Beilin:1984}, and the widely used potential models \cite{Godfrey1985,Barnes:2005,Li:2012vc,Li:2009,Cao:2012,Segovia:2008,Eichten1978} are among the different theoretical approaches that have been employed to study the charmonium spectrum. The Cornell potential model is well known among the many phenomenologically successful potential models and describes the charmonium system quite well.
The recent experimental results on the new charmonium-like $X$, $Y$, $Z$ states indicate that some of them may be interpreted as above-threshold charmonium levels, while others cannot be assigned to any charmonium state in the conventional quark model. These experimental results motivate us, and renew theoretical interest, to carry out a spectroscopic study of the masses and decay properties of charmonium.
In this article, to calculate the mass spectrum of charmonium, we use a Gaussian wave function in both position and momentum space with a potential model, incorporating corrections to the kinetic energy of the quarks as well as the relativistic correction of ${\cal{O}}\left(\frac{1}{m}\right)$ to the potential energy part of the Hamiltonian. We also investigate the Regge trajectories in both the $(M^{2}\rightarrow J)$ and $(M^{2}\rightarrow n)$ planes (where $J$ is the spin and $n$ is the principal quantum number) using our predicted charmonium masses, as the Regge trajectories play a significant role in identifying the nature of currently and future experimentally observed charmonium states. We also obtain the pseudoscalar and vector decay constants for charmonium, as well as the radiative (electric and magnetic dipole) transition rates and the annihilation decays.
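The Regge trajectories referred to above are commonly parametrized by the linear forms (a standard ansatz; the slopes $\alpha$, $\beta$ and intercepts $\alpha_{0}$, $\beta_{0}$ are fitted to the predicted masses),
\begin{equation}
J=\alpha M^{2}+\alpha_{0},\qquad n_{r}=\beta M^{2}+\beta_{0}.
\end{equation}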
The article is organized as follows. Section~\ref{sec:mass} presents the theoretical framework for the mass spectra, Section~\ref{sec:decay} presents the decay constants ($f_{P/V}$), Section~\ref{sec:E1M1} presents the radiative (E1 and M1) transitions, and Section~\ref{sec:annihilation} presents the annihilation decays. In Section~\ref{sec:Resu}, we discuss the results for the mass spectra, the ($f_{P/V}$) decay constants, the E1 and M1 transition widths, and the annihilation decays. The Regge trajectories obtained from the estimated masses in the $(J,M^{2})$ and $(n_r,M^{2})$ planes are presented in Section~\ref{sec:reg}. Finally, we draw our conclusions in Section~\ref{sec:conclusion}.
\section{Methodology}
\subsection{Cornell potential with ${\cal{O}}\left(\frac{1}{m}\right)$ corrections \label{sec:mass}}
Inspired by the extensive progress in both the experimental observation and the theoretical description of charmonium, we calculate here the mass spectra and decay properties of charmonium with the widely used Coulomb-plus-linear (Cornell) potential \cite{Deng:2016,Godfrey1985,Barnes:2005,Godfrey:2015}. In this approach, we consider relativistic corrections to the kinetic energy part and the ${\cal{O}}\left(\frac{1}{m}\right)$ correction to the potential energy part \cite{Koma2006,Kher:2017,Kher:2017b}, inspired by pNRQCD (potential non-relativistic quantum chromodynamics) \cite{Brambilla:2004jw,Brambilla2011,Brambilla:2014}. The Cornell potential works well for heavy-light flavours; hence we employ it here for heavy-heavy flavours.
We employ the following Hamiltonian \cite{Gupta1995,Hwang1997,Kher:2017,Kher:2017b} and quark-antiquark potential \cite{Koma2006} to study the charmonium mass spectrum,
\begin{equation}
H=\sqrt{\mathbf{p}^{2}+m_{Q}^{2}}+\sqrt{\mathbf{p}^{2}+m_{\bar{Q}}^{2}}+V(\mathbf{r}),\label{Eq:hamiltonian}
\end{equation}
\begin{equation}
V\left(r\right)=V^{\left(0\right)}\left(r\right)+\left(\frac{1}{m_{Q}}+\frac{1}{m_{\bar{Q}}}\right)V^{\left(1\right)}\left(r\right)+{\cal O}\left(\frac{1}{m_{}^{2}}\right).
\end{equation}
Here, $m_{Q}$ ($m_{\bar{Q}}$) is the quark (antiquark) mass. The Cornell-like potential $V^{\left(0\right)}$ \cite{Eichten1978} and the $V^{\left(1\right)}\left(r\right)$ obtained from leading-order perturbation theory are
\begin{equation}\label{pote}
V^{\left(0\right)}(r)=-\frac{4\alpha_{S}\left({M^{2}}\right)}{3r}+Ar+V_{0}
\end{equation}
\begin{equation}
V^{\left(1\right)}\left(r\right)=-C_{F}C_{A}\alpha_{s}^{2}/4r^{2}
\end{equation}
where $\alpha_{S}\left({M^{2}}\right)$, $A$, $V_{0}$, and $C_{F}=4/3$, $C_{A}=3$ are the strong running coupling constant, the potential parameter, the potential constant, and the Casimir charges, respectively.
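For a quick numerical check, the two potential terms can be evaluated with the fitted parameters quoted at the end of this section ($\alpha_s=0.333$, $A=0.160\,GeV^{2}$, $V_0=-0.23074\,GeV$); the function names below are ours, not from the paper:

```python
C_F, C_A = 4.0 / 3.0, 3.0                # Casimir charges
ALPHA_S, A, V0 = 0.333, 0.160, -0.23074  # alpha_s, A (GeV^2), V_0 (GeV)

def V0_cornell(r):
    """Static Cornell potential V^(0)(r) = -4 alpha_s/(3 r) + A r + V_0 (GeV; r in GeV^-1)."""
    return -4.0 * ALPHA_S / (3.0 * r) + A * r + V0

def V1_correction(r):
    """O(1/m) correction V^(1)(r) = -C_F C_A alpha_s^2 / (4 r^2) (GeV^2)."""
    return -C_F * C_A * ALPHA_S**2 / (4.0 * r**2)

def V_total(r, mQ=1.55, mQbar=1.55):
    """Full potential V(r) for equal quark masses (GeV), cf. Eq. (2)."""
    return V0_cornell(r) + (1.0 / mQ + 1.0 / mQbar) * V1_correction(r)
```

At $r=1\,GeV^{-1}$ the ${\cal O}(1/m)$ term shifts the static potential by roughly $-0.14$ GeV for the charm mass used here, illustrating that it is a sizeable but subleading correction.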
{\bf This correction was originally studied by Y. Koma, who investigated the relativistic correction ${\cal{O}}\left(\frac{1}{m}\right)$ to the QCD static potential non-perturbatively. When applied to charmonium, this correction is found to be similar to the Coulombic term of the static potential. The leading-order corrections are classified in powers of the inverse heavy-quark mass}\cite{Koma2006}.
Here, to compute the expectation values of the Hamiltonian within the Ritz variational scheme, we use the Gaussian wave function, which in position space {\bf as well as} in momentum space \cite{Kher:2017,Kher:2017b} has the form
\begin{eqnarray}
R_{nl}(\mu,r) & = & \mu^{3/2}\left(\frac{2\left(n-1\right)!}{\Gamma\left(n+l+1/2\right)}\right)^{1/2}\left(\mu r\right)^{l}\times\nonumber \\
& & e^{-\mu^{2}r^{2}/2}L_{n-1}^{l+1/2}(\mu^{2}r^{2})
\end{eqnarray} and
\begin{eqnarray}
R_{nl}(\mu,p) & = & \frac{\left(-1\right)^{n}}{\mu^{3/2}}\left(\frac{2\left(n-1\right)!}{\Gamma\left(n+l+1/2\right)}\right)^{1/2}\left(\frac{p}{\mu}\right)^{l}\times\nonumber \\
& & e^{-{p}^{2}/2\mu^{2}}L_{n-1}^{l+1/2}\left(\frac{p^{2}}{\mu^{2}}\right)
\end{eqnarray} respectively, where $L$ denotes the associated Laguerre polynomial and $\mu$ the variational parameter. We estimate $\mu$ for each state, for the chosen value of $A$, using the virial theorem \cite{Hwang1997},
\begin{equation}
\left\langle{K.E.}\right\rangle =\frac{1}{2} \left\langle{\frac{rdV}{dr}}\right\rangle\label{Eq:virial theorem}
\end{equation}
To incorporate relativistic corrections, we extend the Hamiltonian of Eq.(\ref{Eq:hamiltonian}) with powers up to ${{\cal{O}}\left({\bf p}^{10}\right)}$ in the kinetic-energy part and ${\cal{O}}\left(\frac{1}{m}\right)$ in the potential-energy part \cite{Kher:2017}. {\bf We use the position-space Gaussian wave function to obtain the expectation value of the potential-energy part, whereas for the kinetic-energy part we use the momentum-space wave function together with the virial theorem, Eq.(\ref{Eq:virial theorem}).}
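As an illustration of this procedure, a minimal nonrelativistic sketch of the virial-theorem condition, Eq.(\ref{Eq:virial theorem}), for the 1S Gaussian trial state is given below; it uses $\langle p^{2}\rangle=3\mu^{2}/2$, $\langle 1/r\rangle=2\mu/\sqrt{\pi}$ and $\langle r\rangle=2/(\mu\sqrt{\pi})$, and deliberately omits the relativistic expansion and the $V^{(1)}$ term used in the actual fit, so the resulting $\mu$ only indicates the scale (function names are ours):

```python
from math import sqrt, pi

ALPHA_S, A = 0.333, 0.160   # fitted parameters (GeV units)
M_C = 1.55                  # charm-quark mass (GeV)

def virial_mismatch(mu, m_red=M_C / 2.0, k=4.0 * ALPHA_S / 3.0):
    """<K.E.> - (1/2)<r dV/dr> for the 1S Gaussian trial state.

    Nonrelativistic kinetic energy <p^2>/(2 m_red) with <p^2> = 3 mu^2/2;
    for V = -k/r + A r (V_0 drops out of r dV/dr),
    (1/2)<r dV/dr> = (k <1/r> + A <r>)/2.
    """
    ke = 3.0 * mu**2 / (4.0 * m_red)
    rhs = 0.5 * (k * 2.0 * mu / sqrt(pi) + A * 2.0 / (mu * sqrt(pi)))
    return ke - rhs

def solve_mu(lo=0.1, hi=2.0, tol=1e-10):
    """Bisect for the mu (GeV) that satisfies the virial condition."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if virial_mismatch(lo) * virial_mismatch(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The nonrelativistic sketch yields $\mu\approx0.56$ GeV for the 1S state, somewhat below the value obtained with the full relativistic kinetic-energy expansion.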
We fix the parameters $A$, $\alpha_s$ and $V_0$ by equating the computed ground-state center-of-weight (spin-averaged) mass with the PDG data, using \cite{Rai2008,Rai2002}:
\begin{equation}
M_{SA}=M_{P}+\frac{3}{4}(M_{V}-M_{P}),\label{Eq:rai1-1}
\end{equation}
We also predict the center-of-weight masses of the excited multiplets as \cite{Rai2008}:
\begin{equation}
M_{CW,n}=\frac{\Sigma_{J}(2J+1)M_{nJ}}{\Sigma_{J}(2J+1)}\label{Eq:rai2-1}
\end{equation}
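As a worked example of Eq.(\ref{Eq:rai2-1}), the $1P$ triplet masses computed in this work reproduce the $1P$ spin-averaged value of Table \ref{tab:ccmsa} (helper name ours):

```python
def center_of_weight(masses_by_J):
    """M_CW = sum_J (2J+1) M_J / sum_J (2J+1)."""
    weight = sum(2 * J + 1 for J in masses_by_J)
    return sum((2 * J + 1) * M for J, M in masses_by_J.items()) / weight

# 1P chi_c triplet masses (GeV) from this work
m_1p = center_of_weight({0: 3.457, 1: 3.523, 2: 3.556})
print(round(m_1p, 3))   # -> 3.534, the 1P entry of Table "ccmsa"
```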
In the case of quarkonia, bound states are represented by $n^{2S+1}L_{J}$ and identified by their
$J^{PC}$ values, with $\vec{J}=\vec{L}+\vec{S}$, $\vec{S}=\vec{S}_{Q}+\vec{S}_{\bar{Q}}$, parity $P=(-1)^{L+1}$ and charge conjugation $C=(-1)^{L+S}$, where $n$ and $L$ are the radial and orbital quantum numbers. The spin-dependent interactions are required to remove the degeneracy of the charmonium states and can be written as \cite{Barnes:2005,Eichten:2008,Voloshin:2007,Lakhina2006}
\begin{eqnarray}
V_{SD} & = & V_{LS}(r)\left(\vec{L}\cdot\vec{S}\right)+ V_{SS}(r)\left[S\left(S+1\right)-\frac{3}{2}\right]+\nonumber\\
& & V_{T}(r)\left[S\left(S+1\right)-\frac{3\left(\vec{S}\cdot\vec{r}\right)\left(\vec{S}\cdot\vec{r}\right)}{r^{2}}\right]
\end{eqnarray}
where the spin-spin, spin-orbit and tensor interactions can be written in terms of the vector and scalar parts of $V(r)$ as \cite{Voloshin:2007}
\begin{eqnarray}
V_{SS}(r) & = & \frac{1}{3m_{Q}^{2}}\nabla^{2}V_{V} =\frac{16\pi\alpha_{s}}{9m_{Q}^{2}}\delta^{3}\left(\vec{r}\right),
\end{eqnarray}
\begin{eqnarray}
V_{LS}(r) & = & \frac{1}{2m_{Q}^{2}r}\left(3\frac{dV_{V}}{dr}-\frac{dV_{S}}{dr}\right),
\end{eqnarray}
\begin{eqnarray}
V_{T}(r) & = & \frac{1}{6m_{Q}^{2}}\left(3\frac{d^{2}V_{V}}{dr^{2}}-\frac{1}{r}\frac{dV_{V}}{dr}\right),
\end{eqnarray}
\noindent where
$V_{V}=-\frac{4\alpha_{s}}{3r}$ is the Coulomb part and $V_{S}=Ar$ is the confining part of Eq.(\ref{pote}).
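For reference, the radial factors $V_{LS}$ and $V_{T}$ that follow from these vector and scalar assignments can be transcribed directly (a sketch in our own notation; $V_{SS}$ is omitted because its contact term only contributes through $|\psi(0)|^{2}$):

```python
ALPHA_S, A, M_C = 0.333, 0.160, 1.55  # fitted parameters and charm mass (GeV units)

def v_ls(r, mQ=M_C):
    """Spin-orbit factor (1/(2 mQ^2 r)) (3 dV_V/dr - dV_S/dr), with V_V = -4a/(3r), V_S = A r."""
    dVV = 4.0 * ALPHA_S / (3.0 * r**2)   # d/dr of the Coulomb part
    dVS = A                              # d/dr of the linear part
    return (3.0 * dVV - dVS) / (2.0 * mQ**2 * r)

def v_t(r, mQ=M_C):
    """Tensor factor (1/(6 mQ^2)) (3 d2V_V/dr2 - (1/r) dV_V/dr)."""
    dVV = 4.0 * ALPHA_S / (3.0 * r**2)
    d2VV = -8.0 * ALPHA_S / (3.0 * r**3)
    return (3.0 * d2VV - dVV / r) / (6.0 * mQ**2)
```

Both factors fall off quickly with $r$, so they mainly split the low-lying multiplets, as seen in the splitting tables below.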
In the present study, the charm-quark mass $m_{c}=1.55\,\,GeV$ is chosen to reproduce the ground-state masses of charmonium. The fitted potential parameters are $A=0.160\,\, GeV^{2}$, $\alpha_s=0.333$ and $V_0= -0.23074\,\, GeV$.
\subsection{Decay Constants ($f_{P/V}$)}
\label{sec:decay}
The decay constants, including the QCD correction factor, are computed using the Van Royen--Weisskopf formula \cite{VanRoyen1967,Braaten1995},
\begin{equation}\label{Eq:decayconst}
f_{P/V}^{2}=\frac{12\left|\psi_{P/V}(0)\right|^{2}}{M_{P/V}}\left(1-\frac{\alpha_{S}}{\pi}\left[2-\frac{m_{Q}-m_{\bar{q}}}{m_{Q}+m_{\bar{q}}}\ln\frac{m_{Q}}{m_{\bar{q}}}\right]\right);\end{equation}
Eq.(\ref{Eq:decayconst}) also implies the inequality \cite{Hwang1997a}
\begin{equation}\label{eq:ineq}
\sqrt{m_v}f_v \geq \sqrt{m_p}f_p
\end{equation}Our results satisfy Eq.(\ref{eq:ineq}) and are tabulated in Table \ref{tab:decaycc}. The values in parentheses are the decay constants with the QCD correction.
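For equal quark masses, the logarithmic term in the bracketed QCD factor of Eq.(\ref{Eq:decayconst}) vanishes and the factor reduces to $1-2\alpha_{s}/\pi$. A small sketch (function names ours), using the 1S Gaussian estimate $|\psi(0)|^{2}=|R(0)|^{2}/4\pi$ with $R(0)^{2}=4\mu^{3}/\sqrt{\pi}$ and the 1S variational parameter $\mu=0.716$ from Table \ref{tab:ccmsa}:

```python
from math import pi, log, sqrt

def qcd_factor(mQ, mqbar, alpha_s=0.333):
    """Bracketed QCD factor; reduces to 1 - 2 alpha_s/pi when mQ = mqbar."""
    return 1.0 - (alpha_s / pi) * (2.0 - (mQ - mqbar) / (mQ + mqbar) * log(mQ / mqbar))

def decay_constant(psi0_sq, M, mQ, mqbar, corrected=False):
    """f_{P/V} (GeV) from the Van Royen-Weisskopf formula; psi0_sq = |psi(0)|^2 in GeV^3."""
    f2 = 12.0 * psi0_sq / M
    if corrected:
        f2 *= qcd_factor(mQ, mqbar)
    return sqrt(f2)

mu = 0.716                                     # 1S variational parameter
psi0_sq = 4.0 * mu**3 / sqrt(pi) / (4.0 * pi)  # |psi(0)|^2 for the Gaussian trial state
print(decay_constant(psi0_sq, 3.094, 1.55, 1.55))  # -> ~0.506 GeV, cf. f_V(1S) = 0.510
```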
\subsection{Radiative Transitions \label{sec:E1M1}}
The radiative transition is governed by the matrix element of the electromagnetic current between the initial $i$ and final $f$ quarkonium states, i.e., $\langle f\mid j_{em}^{\mu}\mid i\rangle$. The electric dipole $(E1)$ and magnetic dipole $(M1)$ transitions are the leading-order transition amplitudes \cite{Ding:2007,Lu2016,Guo:2010a}.
The E1 transition widths are estimated by \cite{Radford2009}
\noindent
\begin{eqnarray}
\Gamma_{(E1)}\left(n^{2S+1}L_{J}\rightarrow n^{'2S^{'}+1}L_{J^{'}}^{'}+\gamma\right)=\qquad&\qquad \qquad& \nonumber\\
\frac{4\alpha e_{Q}^{2}}{3} \frac{E_{\gamma}^{3}E_{f}}{M_{i}}C_{fi}\delta_{SS^{'}}\times\left|\left\langle f\left|r\right|i\right\rangle \right|^{2}
\end{eqnarray}
where the photon energy is ${\bf E_{\gamma}=\frac{M_{i}^{2}-M_{f}^{2}}{2M_{i}}}$; ${\bf \alpha=1/137}$ is the fine-structure constant; $e_{Q}$ is the quark charge in units of the electron charge; and $E_f$ is the energy of the final state. The angular-momentum matrix element $C_{fi}$ is
\begin{equation}
C_{fi}=\max\left(L,L^{'}\right)\left(2J^{'}+1\right)\left\{ \begin{array}{ccc}
L^{'} & J^{'} & S\\
J & L & 1
\end{array}\right\}^{2}
\end{equation}
where $\left\{ \cdots\right\} $ is a 6-j symbol. The matrix elements $ \langle n^{'2S^{'}+1}L_{J^{'}}^{'}\mid r\mid n^{2S+1}L_{J}\rangle $ are evaluated using the radial wave functions,
\begin{equation}
\left\langle f \left| r \right| i \right\rangle=\int_{0}^{\infty} dr\, r^{3}\, R_{n_{i}l_{i}}\left(r\right)R_{n_{f}l_{f}}\left(r\right)
\end{equation}
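The angular factor $C_{fi}$ involves a single 6-j symbol and can be cross-checked numerically; the sketch below implements the Racah formula for integer angular momenta (all function names are ours, not from the paper):

```python
from math import factorial, sqrt

def _tri(a, b, c):
    """Triangle coefficient Delta(a,b,c); zero if the triangle rule fails."""
    if a + b < c or abs(a - b) > c:
        return 0.0
    return (factorial(a + b - c) * factorial(a - b + c) * factorial(-a + b + c)
            / factorial(a + b + c + 1))

def six_j(j1, j2, j3, j4, j5, j6):
    """Racah formula for the 6-j symbol {j1 j2 j3; j4 j5 j6} (integer arguments)."""
    pref = _tri(j1, j2, j3) * _tri(j1, j5, j6) * _tri(j4, j2, j6) * _tri(j4, j5, j3)
    if pref == 0.0:
        return 0.0
    lo = max(j1 + j2 + j3, j1 + j5 + j6, j4 + j2 + j6, j4 + j5 + j3)
    hi = min(j1 + j2 + j4 + j5, j2 + j3 + j5 + j6, j3 + j1 + j6 + j4)
    total = 0.0
    for t in range(lo, hi + 1):
        total += (-1)**t * factorial(t + 1) / (
            factorial(t - j1 - j2 - j3) * factorial(t - j1 - j5 - j6)
            * factorial(t - j4 - j2 - j6) * factorial(t - j4 - j5 - j3)
            * factorial(j1 + j2 + j4 + j5 - t) * factorial(j2 + j3 + j5 + j6 - t)
            * factorial(j3 + j1 + j6 + j4 - t))
    return sqrt(pref) * total

def c_fi(L, Lp, J, Jp, S):
    """Angular matrix element C_fi = max(L, L') (2J'+1) {L' J' S; J L 1}^2."""
    return max(L, Lp) * (2 * Jp + 1) * six_j(Lp, Jp, S, J, L, 1)**2
```

For example, $1^{3}P_{1}\rightarrow1^{3}S_{1}\gamma$ ($L=1$, $L'=0$, $S=1$, $J=J'=1$) gives $C_{fi}=1/3$.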
The M1 radiative transitions are evaluated {\bf using} the following expression \cite{Segovia:2016,Barnes:2005}
\begin{equation}\label{eq:m1}
\Gamma_{M1}\left(n^{2S+1}L_{J}\rightarrow n^{'2S^{'}+1}L_{J^{'}}^{'}\right)=\frac{4\alpha e_{Q}^{2}}{3m_{Q}^2} \frac{E_{\gamma}^{3}E_{f}}{M_{i}}S_{fi}\left|\mathcal{M}_{fi} \right|^{2},\end{equation}
where,
\begin{equation}
\mathcal{M}_{fi} =\int_{0}^{\infty} dr\, r^{2}\, R_{n_{i}l_{i}}\left(r\right)j_{0}\left(E_{\gamma}r/2\right)R_{n_{f}l_{f}}\left(r\right)
\end{equation} and
\begin{eqnarray}
S_{fi}&=& 6\left(2S+1\right)\left(2S^{'}+1\right)\left(2J^{'}+1\right)\times \nonumber \\
& & \left\{ \begin{array}{ccc}
J & 1 & J^{'}\\
S^{'} & L & S
\end{array}\right\}^{2}\left
\{ \begin{array}{ccc}
1 & 1/2 & 1/2\\
1/2 & S^{'} & S
\end{array}\right\}^{2}
\end{eqnarray}
Here $L=0$ for S-wave transitions and $j_{0}(x)$ is the spherical Bessel function.
The E1 and M1 radiative transition widths are listed in Tables \ref{tab:E1CC} and \ref{tab:M1CC}, respectively.
\subsection{Annihilation Decays \label{sec:annihilation}}
Decays of quarkonium states into leptons, photons or gluons are extremely useful for the production and identification of resonances. {\bf They} can also help to distinguish conventional mesons from multi-quark structures \cite{Kwong:1987,Kwong:1988}.
\subsubsection{Leptonic decays}
The $^{3}S_{1}$ and $^{3}D_{1}$ states carry $J^{PC}=1^{--}$ quantum numbers and annihilate into lepton pairs through a single virtual photon. The leptonic decay widths of the $^{3}S_{1}$ and $^{3}D_{1}$ charmonium states, including the first-order radiative QCD correction, are given by \cite{Segovia:2016,Kwong:1987,Bradley:1980}
\begin{equation}
\varGamma\left(n^{3}S_{1}\rightarrow e^{+}e^{-}\right)=\frac{4e_{Q}^{2}\alpha^{2}\mid R_{nS}\left(0\right)\mid^{2}}{M_{nS}^{2}}\left(1-\frac{16\alpha_{s}}{3\pi}\right)
\end{equation}
\begin{equation}
\varGamma\left(n^{3}D_{1}\rightarrow e^{+}e^{-}\right)=\frac{25e_{Q}^{2}\alpha^{2}\mid R_{nD}^{\prime\prime}\left(0\right)\mid^{2}}{2m_{Q}^{4}M_{nD}^{2}}\left(1-\frac{16\alpha_{s}}{3\pi}\right)
\end{equation}
where $M_{nS}$ ($M_{nD}$) is the mass of the decaying charmonium state.
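As a sanity check of the $S$-wave formula (the single-photon annihilation amplitude carries one power of $e_{Q}$, so the width scales as $e_{Q}^{2}$), a small sketch with our own function name:

```python
from math import pi

ALPHA = 1.0 / 137.0

def gamma_ee_3s1(R0_sq, M, eQ=2.0 / 3.0, alpha_s=0.333, corrected=True):
    """Gamma(n^3S_1 -> e+ e-) in GeV, with optional first-order QCD correction.

    R0_sq = |R_nS(0)|^2 in GeV^3, M = resonance mass in GeV.
    """
    g = 4.0 * eQ**2 * ALPHA**2 * R0_sq / M**2
    if corrected:
        g *= 1.0 - 16.0 * alpha_s / (3.0 * pi)
    return g

# Example: with the 1S Gaussian |R(0)|^2 = 4*mu^3/sqrt(pi), mu = 0.716, the
# uncorrected J/psi width comes out at a few keV, matching the scale of Table "annihi2e".
```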
\subsubsection{Decay into photons }
The annihilation decay widths of charmonium states into two or three photons, without and with radiative QCD corrections, are given by \cite{Segovia:2016,Kwong:1987}
\begin{equation}
\varGamma\left(n^{1}S_{0}\rightarrow\gamma\gamma\right)=\frac{3e_{Q}^{4}\alpha^{2}\mid R_{nS}\left(0\right)\mid^{2}}{m_{Q}^{2}}\left(1-\frac{3.4\alpha_{s}}{\pi}\right)
\end{equation}
\begin{equation}
\varGamma\left(n^{3}P_{0}\rightarrow\gamma\gamma\right)=\frac{27e_{Q}^{4}\alpha^{2}\mid R_{nP}^{\prime}\left(0\right)\mid^{2}}{m_{Q}^{4}}\left(1+\frac{0.2\alpha_{s}}{\pi}\right)
\end{equation}
\begin{equation}
\varGamma\left(n^{3}P_{2}\rightarrow\gamma\gamma\right)=\frac{36e_{Q}^{4}\alpha^{2}\mid R_{nP}^{\prime}\left(0\right)\mid^{2}}{5m_{Q}^{4}}\left(1-\frac{16\alpha_{s}}{3\pi}\right)
\end{equation}
\begin{eqnarray}
\varGamma\left(n^{3}S_{1}\rightarrow3\gamma\right)&=&\frac{4 (\pi^{2}-9)e_{Q}^{6}\alpha^{3}\mid R_{nS}\left(0\right)\mid^{2}}{3\pi m_{Q}^{2}} \nonumber \times\\ &&\left(1-\frac{12.6\alpha_{s}}{\pi}\right)
\end{eqnarray}
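These widths are straightforward to evaluate numerically; a sketch for the two-photon channel of the $^{1}S_{0}$ states (function name ours):

```python
from math import pi

ALPHA = 1.0 / 137.0

def gamma_1s0_2gamma(R0_sq, mQ=1.55, eQ=2.0 / 3.0, alpha_s=0.333, corrected=True):
    """Gamma(n^1S_0 -> gamma gamma) in GeV; R0_sq = |R_nS(0)|^2 in GeV^3.

    The amplitude carries two photon vertices, hence the e_Q^4 charge factor.
    """
    g = 3.0 * eQ**4 * ALPHA**2 * R0_sq / mQ**2
    if corrected:
        g *= 1.0 - 3.4 * alpha_s / pi
    return g
```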
\subsubsection{Decay into gluons }
The annihilation decay widths of charmonium states into two or three gluons, as well as into gluons accompanied by a photon or a light quark-antiquark pair, without and with radiative QCD corrections, are given by \cite{Segovia:2016,Kwong:1987,Kwong:1988,Belanger:1987}
\begin{equation}
\varGamma\left(n^{1}S_{0}\rightarrow gg\right)=\frac{2\alpha_{s}^{2}\mid R_{nS}\left(0\right)\mid^{2}}{3m_{Q}^{2}}\left(1+\frac{4.8\alpha_{s}}{\pi}\right)
\end{equation}
\begin{equation}
\varGamma\left(n^{3}P_{0}\rightarrow gg\right)=\frac{6\alpha_{s}^{2}\mid R_{nP}^{\prime}
\left(0\right)\mid^{2}}{m_{Q}^{4}}
\end{equation}
\begin{equation}
\varGamma\left(n^{3}P_{2}\rightarrow gg\right)=\frac{8\alpha_{s}^{2}\mid R_{nP}^{\prime}
\left(0\right)\mid^{2}}{5m_{Q}^{4}}
\end{equation}
\begin{equation}
\varGamma\left(n^{1}D_{2}\rightarrow gg\right)=\frac{2\alpha_{s}^{2}\mid R_{nD}^{\prime\prime}
\left(0\right)\mid^{2}}{3\pi m_{Q}^{6}}
\end{equation}
\begin{eqnarray}
\varGamma\left(n^{3}S_{1}\rightarrow3g\right)&=&\frac{10 (\pi^{2}-9)\alpha_{s}^{3}\mid R_{nS}\left(0\right)\mid^{2}}{81\pi m_{Q}^{2}}\nonumber \times\\ &&\left(1-\frac{3.7\alpha_{s}}{\pi}\right)
\end{eqnarray}
\begin{equation}
\varGamma\left(n^{1}P_{1}\rightarrow3g\right)=\frac{20 \alpha_{s}^{3}\mid R_{nP}^{\prime}\left(0\right)\mid^{2}}{9\pi m_{Q}^{4}}\ln(m_{Q}\langle r \rangle)
\end{equation}
\begin{equation}
\varGamma\left(n^{3}D_{1}\rightarrow3g\right)=\frac{760 \alpha_{s}^{3}\mid R_{nD}^{\prime\prime}\left(0\right)\mid^{2}}{81\pi m_{Q}^{6}}\ln(4m_{Q}\langle r \rangle)
\end{equation}
\begin{equation}
\varGamma\left(n^{3}D_{2}\rightarrow3g\right)=\frac{10 \alpha_{s}^{3}\mid R_{nD}^{\prime\prime}\left(0\right)\mid^{2}}{9\pi m_{Q}^{6}}\ln(4m_{Q}\langle r \rangle)
\end{equation}
\begin{equation}
\varGamma\left(n^{3}D_{3}\rightarrow3g\right)=\frac{40 \alpha_{s}^{3}\mid R_{nD}^{\prime\prime}\left(0\right)\mid^{2}}{9\pi m_{Q}^{6}}\ln(4m_{Q}\langle r \rangle)
\end{equation}
\begin{eqnarray}
\varGamma\left(n^{3}S_{1}\rightarrow \gamma gg\right)&=&\frac{8 (\pi^{2}-9)e_{Q}^{2}\alpha\alpha_{s}^{2}\mid R_{nS}\left(0\right)\mid^{2}}{9\pi m_{Q}^{2}}\nonumber \times\\
&&\left(1-\frac{6.7\alpha_{s}}{\pi}\right)
\end{eqnarray}
\begin{equation}
\varGamma\left(n^{3}P_{1}\rightarrow q\bar{q}+g\right)=\frac{8\eta_{f} \alpha_{s}^{3}\mid R_{nP}^{\prime}\left(0\right)\mid^{2}}{9\pi m_{Q}^{4}}\ln(m_{Q}\langle r \rangle)
\end{equation}
The calculated annihilation decay widths of charmonium are listed in Tables \ref{tab:annihi2e} to \ref{tab:annihiqqg}.
\end{multicols}
\begin{figure}
\centering
\includegraphics[bb=30bp 60bp 750bp 550bp,clip,width=0.80\textwidth]{MassCC.eps}
\caption{Charmonium mass spectrum.\label{fig:MassCC}}
\end{figure}
\begin{table}
\caption{Pseudoscalar and vector decay constants (in GeV).\label{tab:decaycc} }
\noindent \centering{}%
\begin{tabular}{lllllll}
\hline
Decay& State & Our Work & Expt.\cite{PDGlatest} & \cite{Bhaghyesh:2011} & \cite{Negash:2015}&\cite{Bhavsar:2018} \\
\addlinespace[3pt]
\hline
\addlinespace[5pt]
\(f_P\) & 1S & 0.501(0.395) & $\bf 0.335\pm0.075$ & 0.471(0.360) & 0.404 & \\
& 2S & 0.301(0.237) & &0.344(0.286) & 0.331&\\
& 3S & 0.264(0.208) & &0.332(0.254) & 0.291&\\
& 4S & 0.245(0.193) & & 0.312(0.239)&& \\
& 5S & 0.233(0.184) & & & &\\
& 6S & 0.224(0.177) & & && \\
\hline
\addlinespace[5pt]
\(f_V\)&1S & 0.510(0.402) & $\bf 0.411\pm0.005$ & 0.462(0.317) & 0.375& 0.420 \\
&2S & 0.303(0.239) & $\bf 0.271\pm0.008$ &0.369(0.253) &0.295&0.285 \\
&3S & 0.265(0.209) & $\bf 0.174\pm0.018$ &0.329(0.226) &0.261 &0.218\\
&4S & 0.240(0.194) & &0.310(0.212) &0.240&0.166 \\
&5S & 0.234(0.185) & & 0.290(0.199)&&0.106 \\
&6S & 0.225(0.177) & & & &\\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{S-P-D-wave center of weight masses (in GeV). (LP = Linear potential model, SP = Screened potential model, NR = Non-relativistic and RE = Relativistic)\label{tab:ccmsa}}
\scalebox{0.90}{
\begin{tabular}{cccccccccccccc}
\hline
& \multicolumn{2}{c}{This work} & & \multicolumn{10}{c}{Others Theory $M_{SA}$ in (GeV) }\\
$nL$ & $\mu$ & $M_{SA}$ & Expt.\cite{PDGlatest} & \multirow{2}{*}{LP (SP) \cite{Deng:2016}} & \multirow{2}{*}{\cite{Yang:2015cc}} & \multirow{2}{*}{\cite{Ebert2011}} & \multirow{2}{*}{\cite{Cao:2012}}& \multirow{2}{*}{NR (GI)\cite{Barnes:2005}} & \multirow{2}{*}{\cite{Li:2009}} & \multirow{2}{*}{\cite{Radford:2007}}
&\multirow{2}{*}{\cite{Ebert:2002}}
&\multirow{2}{*}{RE(NR)\cite{Sultan:2014}} &\multirow{2}{*}{\cite{Godfrey:1985}}\\
&$(GeV)$ & $(GeV)$&$(GeV)$ & & & & & & & & & \\
\hline
\addlinespace[3pt]
$1S$ & 0.716 & 3.068 & 3.068 & 3.068 (3.069)& 3.090 & 3.067& 3.061& 3.063 (3.067) & 3.068 &3.068 & 3.068&3.068 (3.063) &3.068 \\
$2S$ & 0.469 & 3.638 & 3.674 & 3.668 (3.668)& 3.667 & 3.673& 3.676 & 3.662 (3.663)& 3.661 &3.664 & 3.662& 3.657 (3.661)& 3.665\\
$3S$ & 0.412 & 4.027 & & 4.071 (4.024)& 4.070 & 4.027& 4.080 & 4.065 (4.091) & 4.014 &4.075 & 4.064& 4.051 (4.064)& 4.090\\
$4S$ & 0.382 & 4.353 & & 4.406 (4.277)& 4.408 & 4.421& 4.406 & 4.400 (4.444)& 4.267 & & & 4.350 (4.400) & \\
$5S$ & 0.363 & 4.646 & & 4.706 (4.469)& 4.710 & 4.831& & & 4.459 & & & 4.655 (4.694) & \\
$6S$ & 0.349 & 4.917 & & & 4.987 & 5.164& & & 4.603 & & & 4.907 (4.973)& \\
\addlinespace[5pt]
$1P$ & 0.484 & 3.534 & 3.525 & 3.524 (3.527) & 3.523 & 3.525&3.525& 3.522 (3.523) & 3.524 & 3.526& 3.526& 3.554 (3.519)&3.523 \\
$2P$ & 0.416 & 3.936 & & 3.945 (3.919)& 3.941 & 3.926&3.945 & 3.942 (3.961) & 3.913 &3.960 & 3.945 & 3.963 (3.938)& 3.962 \\
$3P$ & 0.384 & 4.269 & & 4.291 (4.238)& 4.289 & 4.337& 4.316 & 4.286 (4.323) & 4.188 & & & 4.296 (4.283)& \\
\addlinespace[5pt]
$1D$ & 0.437 & 3.802 & & 3.805 (3.805) & 3.798 & 3.803& 3.815 & 3.800 (3.849) & 3.796 & 3.823 & 3.811& 3.839 (3.799)&3.837\\
$2D$ & 0.396 & 4.150 & & 4.164 (4.108)& 4.160 & 4.196& 4.165& 4.159 (4.209)& 4.099 &4.190 & & 4.187 (4.158) & 4.210\\
$3D$ & 0.372 & 4.455 & & 4.478 (4.336) & 4.478 & 4.455&4.522 & & 4.327 & & & 4.486 (4.473)& \\
\hline
\end{tabular}}
\end{table}
\begin{table}
\caption{Hyperfine and fine splittings (in MeV). (LP = Linear potential model, SP = Screened potential model, NR = Non-relativistic and RE = Relativistic)\label{tab:masssplit}}
\scalebox{0.93}{
\begin{tabular}{lllllllllllll}
\hline
\multirow{2}{*}{Splitting} & This & Expt. & \multicolumn{10}{c}{Others}\\
& work & \cite{PDGlatest} & \cite{Deng:2016} & \cite{Yang:2015cc} & \cite{Ebert2011} & \cite{Barnes:2005} & \cite{Cao:2012}& \cite{Li:2009} &\cite{Radford:2007} & \cite{Sultan:2014}&\cite{Ebert:2002}&\cite{Bhavsar:2018} \\
& & & LP(SP) & & & NR(GI) & & & & RE (NR)&& \\
\hline
\addlinespace[3pt]
m($1^{3}S_{1}$)-m($1^{1}S_{0}$) & 99 & $113.3\pm0.7$ & 114 (113)& 116 & 115 & 108 (123) & 100& 118 & 117 & 102 (108) & 117&119\\
m($2^{3}S_{1}$)-m($2^{1}S_{0}$) & 43 & $46.7\pm1.3$ & 44 (42)& 11 & 51 & 42 (53) &38 & 50 & 89& 33 (42)& 98& 54\\
m($3^{3}S_{1}$)-m($3^{1}S_{0}$) & 36 & & 30 (26)& 9 &50 & 29 (36) &29 & 31 &81 & 30 (29) &97& 32\\
m($4^{3}S_{1}$)-m($4^{1}S_{0}$) & 34 & & 24 (17)& 6 & 26 & 22 (25)& 20 & 23 && 24 (22) & & 4.3 \\
m($5^{3}S_{1}$)-m($5^{1}S_{0}$) & 32 & & 21 (13) & 6 & 26 & & & 17 & & 22 (19) & &2.3 \\
m($6^{3}S_{1}$)-m($6^{1}S_{0}$) & 32 & & & 5 & 12 & & & 10 & & 19 (17)& & \\
\addlinespace[5pt]
m($1^{3}P_{2}$)-m($1^{3}P_{1}$) & 33 & $45.5\pm0.2$ & 36 (32)& 47 & 44 & 51 (40) & 41& 44 & 50& 41 (44) & 46&\\
m($1^{3}P_{1}$)-m($1^{3}P_{0}$) & 66 & $95.9\pm0.4$ & 101 (106) & 63 & 102 & 81 (65)& 52& 77 & 92& 71 (80) & 86&\\
m($2^{3}P_{2}$)-m($2^{3}P_{1}$) & 31 & & 30 (23) & 46 & 45 & 47 (26) & 38 &36& 54& 40 (40) &43&\\
m($2^{3}P_{1}$)-m($2^{3}P_{0}$) & 59 & & 68 (66)& 59 & 36 & 73 (37)& 92 & 59 & 96& 66 (73) & 75&\\
m($3^{3}P_{2}$)-m($3^{3}P_{1}$) & 33 & & 26 (19) & 44 & 35 & 46 (20)& 53 & 30 & & 45 (38) &&\\
m($3^{3}P_{1}$)-m($3^{3}P_{0}$) & 60 & & 54 (46) & 58 &18 & 69 (25) & 81& 47 & & 63 (69) && \\
\hline
\end{tabular}}
\end{table}
\begin{table*}
\caption{Complete mass spectra (in GeV). (LP = Linear potential model, SP = Screened potential model, NR = Non-relativistic and RE = Relativistic)\label{tab:massescc}}
\scalebox{0.90}{
\begin{tabular}{llllllllllllll}
\hline
State & \multirow{2}{*}{$J^{P}$} & This & Expt. & \multicolumn{10}{c}{Others}\\
$n^{2S+1}L_{J}$ & & work & \cite{PDGlatest} & LP (SP) \cite{Deng:2016} & \cite{Yang:2015cc} & \cite{Ebert2010} & NR (GI) \cite{Barnes:2005} & \cite{Cao:2012}& \cite{Li:2009} &\cite{Radford:2007} &\cite{Ebert:2002} &RE (NR)\cite{Sultan:2014} &\cite{Godfrey:1985}\\
\hline
\addlinespace[3pt]
$1^{1}S_{0}$ & $0^{-+}$ & 2.995 & 2.984 & 2.983 (2.984) & 3.069 & 2.981 & 2.982 (2.975) & 2.978 & 2.979& 2.980 & 2.979& 2.992 (2.982)& 2.97 \\
$1^{3}S_{1}$ & $1^{--}$ & 3.094 & 3.097 & 3.097 (3.097)& 3.097 & 3.096 & 3.090 (3.098) & 3.088& 3.097 & 3.097 & 3.096& 3.094 (3.090)&3.10\\
$2^{1}S_{0}$ & $0^{-+}$ & 3.606 & 3.639 & 3.635 (3.637) & 3.659 & 3.635 & 3.630 (3.623) & 3.647& 3.623 & 3.597& 3.588& 3.625 (3.630) &3.62 \\
$2^{3}S_{1}$ & $1^{--}$ & 3.649 & 3.686 & 3.679 (3.679)& 3.670 & 3.686 & 3.672 (3.676) &3.685 & 3.673 & 3.686& 3.686& 3.668 (3.672) & 3.68 \\
$3^{1}S_{0}$ & $0^{-+}$ & 4.000 & & 4.048 (4.004) & 4.063 & 3.989 & 4.043 (4.064)&4.058 & 3.991 & 4.014& 3.991 &4.029 (4.043) &4.06 \\
$3^{3}S_{1}$ & $1^{--}$ & 4.036 & 4.039 & 4.078 (4.030)& 4.072 & 4.039 & 4.072 (4.100) &4.087 & 4.022 &4.095 & 4.088 &4.059 (4.072) &4.10\\
$4^{1}S_{0}$ & $0^{-+}$ & 4.328 & & 4.388 (4.264)& 4.403 & 4.401 & 4.384 (4.425)& 4.391 & 4.250 & & & 4.332 (4.388)& \\
$4^{3}S_{1}$ & $1^{--}$ & 4.362 & 4.421 & 4.412 (4.281)& 4.409 & 4.427 & 4.406 (4.450) & 4.411 & 4.273 & 4.433& & 4.356 (4.406)&4.45 \\
$5^{1}S_{0}$ & $0^{-+}$ & 4.622 & & 4.690 (4.459) & 4.705 & 4.811 & & & 4.446 & & &4.639 (4.685)& \\
$5^{3}S_{1}$ & $1^{--}$ & 4.654 & 4.643 & 4.711 (4.472)& 4.711 & 4.837 & & & 4.463 & & &4.661 (4.704)& \\
$6^{1}S_{0}$ & $0^{-+}$ & 4.893 & & & 4.983 & 5.155 & & & 4.595 & & &4.893 (4.960)& \\
$6^{3}S_{1}$ & $1^{--}$ & 4.925 & & & 4.988 & 5.167 & & & 4.605 & & &4.912 (4.977)& \\
\addlinespace[5pt]
$1^{3}P_{0}$ & $0^{++}$ & 3.457 & 3.415 & 3.415 (3.415) & 3.440 & 3.413 & 3.424 (3.445) & 3.366& 3.433 & 3.416& 3.424&3.472 (3.424) &3.44\\
$1^{3}P_{1}$ & $1^{++}$ & 3.523 & 3.511 & 3.516 (3.521) & 3.503 & 3.511 & 3.505 (3.510) & 3.518 & 3.510 & 3.508& 3.510 & 3.543 (3.505) &3.51\\
$1^{1}P_{1}$ & $1^{+-}$ & 3.534 & 3.525 & 3.522 (3.526) & 3.526 & 3.525 & 3.516 (3.517) & 3.527 & 3.519 & 3.527& 3.526 & 3.544 (3.516) &3.52\\
$1^{3}P_{2}$ & $2^{++}$ & 3.556 & 3.556 & 3.552 (3.553)& 3.550 & 3.555 & 3.556 (3.550)& 3.559& 3.554 & 3.558& 3.556&3.584 (3.549) &3.55\\
$2^{3}P_{0}$ & $0^{++}$ & 3.866 & 3.918 & 3.869 (3.848)& 3.862 & 3.870 & 3.852 (3.916) & 3.843 & 3.842 & 3.844& 3.854& 3.885 (3.852)&3.92\\
$2^{3}P_{1}$ & $1^{++}$ & 3.925 & 3.872 & 3.937 (3.914)& 3.921 & 3.906 & 3.925 (3.953)&3.935 & 3.901 & 3.940& 3.929& 3.951 (3.925)&3.95\\
$2^{1}P_{1}$ & $1^{+-}$ & 3.936 &3.887 & 3.940 (3.916)& 3.944 & 3.926 & 3.934 (3.956) &3.942 & 3.908 &3.961 & 3.945 &3.951 (3.934) &3.96\\
$2^{3}P_{2}$ & $2^{++}$ & 3.956 & 3.927 & 3.967 (3.937) & 3.967 & 3.949 & 3.972 (3.979) &3.973 & 3.937 & 3.994& 3.972 &3.994 (3.965) &3.98\\
$3^{3}P_{0}$ & $0^{++}$ & 4.197 & & 4.230 (4.146) & 4.212 & 4.301 &4.202 (4.292) & 4.208& 4.131 & & & 4.219 (4.202)&\\
$3^{3}P_{1}$ & $1^{++}$ & 4.257 & 4.273 & 4.284 (4.192) & 4.270 & 4.319 & 4.271 (4.317) & 4.299 & 4.178 & & &4.283 (4.271) & \\
$3^{1}P_{1}$ & $1^{+-}$ & 4.269 & & 4.285 (4.193) & 4.292 & 4.337 & 4.279 (4.318) & 4.310& 4.184 & & & 4.283 (4.279)&\\
$3^{3}P_{2}$ & $2^{++}$ & 4.290 & & 4.310 (4.311) & 4.314 & 4.354 & 4.317 (4.337)& 4.352 & 4.208 & & & 4.328 (4.309)& \\
\addlinespace[5pt]
$1^{3}D_{1}$ & $1^{--}$ & 3.799 & 3.773 & 3.787 (3.792)& 3.759 & 3.783 & 3.785 (3.819) & 3.809& 3.787 & 3.804& 3.798& 3.830 (3.785)&3.82 \\
$1^{3}D_{2}$ & $2^{--}$ & 3.805 & 3.822 & 3.807 (3.807) & 3.787 & 3.795 & 3.800 (3.838) & 3.820 & 3.798 & 3.824& 3.813&3.841 (3.800) &3.84\\
$1^{1}D_{2}$ & $2^{-+}$ & 3.802 & & 3.806 (3.805) & 3.799 & 3.807 & 3.799 (3.879) &3.815 & 3.796 & 3.824& 3.811& 3.837 (3.799)&3.84\\
$1^{3}D_{3}$ & $3^{--}$ & 3.801 & & 3.811 (3.808)& 3.823 & 3.813 & 3.806 (3.849) &3.813 & 3.799 & 3.831 & 3.815&3.844 (3.805) &3.84\\
$2^{3}D_{1}$ & $1^{--}$ & 4.145 & 4.191 & 4.144 (4.095)& 4.119 & 4.150 & 4.142 (4.194) &4.154 & 4.089 &4.164 & & 4.174 (4.141)&4.19\\
$2^{3}D_{2}$ & $2^{--}$ & 4.152 & & 4.165 (4.109) & 4.148 & 4.190 & 4.158 (4.208) &4.169 &4.100 &4.189 & & 4.187 (4.158)&4.21\\
$2^{1}D_{2}$ & $2^{-+}$ & 4.150 & & 4.164 (4.108)& 4.160 & 4.196 & 4.158 (4.208) & 4.165& 4.099 & 4.191& & 4.183 (4.158)&4.21\\
$2^{3}D_{3}$ & $3^{--}$ & 4.151 & & 4.172 (4.112)& 4.185 & 4.220 & 4.167 (4.217) & 4.166& 4.103 & 4.202& & 4.195 (4.165)&4.22\\
$3^{3}D_{1}$ & $1^{--}$ & 4.448 & & 4.456 (4.324)& 4.437 & 4.448 & & 4.502 & 4.317 & 4.477& & 4.470 (4.455) &4.52 \\
$3^{3}D_{2}$ & $2^{--}$ & 4.456 & & 4.478 (4.337)& 4.466 & 4.456 & &4.524 & 4.327 & & & 4.485 (4.472)& \\
$3^{1}D_{2}$ & $2^{-+}$ & 4.455 & & 4.478 (4.336)& 4.478 & 4.455 & & 4.524 & 4.326 & & & 4.480 (4.472)& \\
$3^{3}D_{3}$ & $3^{--}$ & 4.457 & & 4.486 (4.340) & 4.503 & 4.457 & & 4.527 & 4.331 & & & 4.497 (4.481)& \\
\hline
\end{tabular}}
\end{table*}
\begin{table*}
\caption{Electric dipole (E1) transition widths of $c\overline{c}$ mesons. (LP = Linear potential model, SP = Screened potential model, NR = Non-relativistic and RE = Relativistic; here $E_{\gamma}$ is in MeV and $\varGamma$ in keV) \label{tab:E1CC} }
\scalebox{0.90}{
\begin{tabular}{ccccccccccccccc}
\hline
\addlinespace[3pt]
\multicolumn{2}{c}{Transition} & \multicolumn{2}{c}{This work} & Expt.\cite{PDGlatest}& \multicolumn{10}{c}{Other work} \\
Initial & Final & $E_{\gamma}$ & $\varGamma$ & $\varGamma$ & \cite{Li:2009} & \cite{Ebert:2002} & \cite{Parmar2010} & \cite{Barnes:2005}& \cite{Brambilla:2004}& \cite{Eichten:2002}& \cite{Segovia:2008}&\cite{Cao:2012}& \cite{Deng:2016}& \cite{Sultan:2014}\\
& & & & & & & & NR(GI)& & & & & LP(SP)& RE(NR)\\
\hline
\addlinespace[5pt]
$1^{3}P_{2}$ & $1^{3}S_{1}$ & 432.31 & 233.85 & $406\pm31$ & 309 & 327 & 383& 424 (313) & 315& 315& & 405 & 327(338)& 437.5(424.5)\\
$1^{3}P_{1}$ & $1^{3}S_{1}$ & 402.92 & 189.86 & $320\pm25$ & 244 & 265 & 361& 314 (239) & 241& 242&&341 &269 (278) & 329.5(319.5)\\
$1^{1}P_{1}$ & $1^{1}S_{0}$ & 497.67 & 357.83 & & 323 & 560 & 671& 498 (352) & 482& 482&&473 &361 (373)& 570.5(490.3)\\
$1^{3}P_{0}$ & $1^{3}S_{1}$ & 344.13 & 118.29 & $131\pm14$ & 117 & 121 & 264 & 152 (114) & 120& 120& & 104 & 141(146)& 159.2(154.5)\\
\hline
\addlinespace[5pt]
$2^{3}S_{1}$ & $1^{3}P_{2}$ & 91.58 & 7.07 & $26\pm1.5$ & 34 & 18.2 & & 38 (24) & 30.1& 29&28.6&39 & 36(44)& 35.5 (37.9) \\
$2^{3}S_{1}$ & $1^{3}P_{1}$ & 123.46 & 10.39 & $27.9\pm1.5$ & 36 & 22.9 & & 54 (29) & 42.8& 41&33.0&38 & 45(48)&50.9 (54.2) \\
$2^{3}S_{1}$ & $1^{1}P_{1}$ & 112.88 & 7.94 & & 104 & & & && && & & \\
$2^{3}S_{1}$ & $1^{3}P_{0}$ & 186.43 & 11.93 & $29.8\pm1.5$ & 25 & 26.3 & &63 (26) & 47& 46 &28.8&29 & 27(26)& 58.8 (62.6)\\
$2^{1}S_{0}$ & $1^{3}P_{1}$ & 82.19 & 9.20 & & & & & &&&& & & \\
$2^{1}S_{0}$ & $1^{1}P_{1}$ & 71.49 & 6.05 & & & 6.2 & &49 (36)& 35.1& 35.1&& 56 &49 (52)& 45.2 (49.9)\\
\hline
\addlinespace[5pt]
$1^{3}D_{3}$ & $1^{3}P_{2}$ & 237.31 & 237.51 & & 323 & 156 & 432 & 272 (296)& 402&&& 302 & &397.7(271.1) \\
$1^{3}D_{2}$ & $1^{3}P_{2}$ & 241.19 & 62.34 & & 55 & 59 & 131& 64 (66)& 69.5& 56&& 82& 79(82)&96.52(64.06) \\
$1^{3}D_{2}$ & $1^{3}P_{1}$ & 271.75 & 89.18 & & 208 & 215 & 423 &307 (268)& 313& 260&&301& 281(291)& 438.2(311.2) \\
$1^{3}D_{1}$ & $1^{3}P_{2}$ & 235.48 & 6.45 & $<21$ & 4.6 & 6.9 & 15.2 & 4.9 (3.3)& 3.88& 3.7& 3.3& 8.1&5.4 (5.7)& 4.73(4.86) \\
$1^{3}D_{1}$ & $1^{3}P_{1}$ & 266.10 & 139.52 & $70\pm17$ & 93 & 135 & 246&125 (77) & 99& 94& 89.7&153& 115 (111)& 122.8(126.2) \\
$1^{3}D_{1}$ & $1^{3}P_{0}$ & 326.57 & 343.87 & $172\pm30$ & 197 & 355 & 448 & 403 (213)& 299& 287& 221.7& 362&243 (232)& 394.6(405.4)\\
\hline
\addlinespace[5pt]
$2^{3}P_{2}$ & $2^{3}S_{1}$ & 295.70 & 281.93 & & 100 & & 164 & 304 (207)&&& &264 & & 377.1(287.5)\\
$2^{3}P_{1}$ & $2^{3}S_{1}$ & 266.71 & 206.87 & & 60 & & 174 &183 (183) &&&& 234& & 246.0(185.3)\\
$2^{1}P_{1}$ & $2^{1}S_{0}$ & 315.84 & 343.55 & & 108 & & 333&280 (218) & &&&274& & 349.8(272.9) \\
$2^{3}P_{0}$ & $2^{3}S_{1}$ & 210.86 & 102.23 & & 44 & & 112&64 (135) & &&&83& & 108.3(65.3)\\
\hline
\addlinespace[5pt]
$2^{3}P_{2}$ & $1^{3}D_{3}$ & 152.16 & 33.27 & & & & & 88 (29)&&&& 76& & 60.67(78.69)\\
$2^{3}P_{2}$ & $1^{3}D_{2}$ & 148.18 & 5.49 & & & & &17 (5.6)& &&& 10& & 11.48(15.34)\\
$2^{3}P_{2}$ & $1^{1}D_{2}$ & 151.21 & 5.83 & & & & & & &&&& & \\
$2^{3}P_{2}$ & $1^{3}D_{1}$ & 154.03 & 0.41 & & & & &1.9 (1.0)& &&& 0.64& & 2.31(1.67)\\
$2^{3}P_{1}$ & $1^{3}D_{1}$ & 123.91 & 5.35 & & & & & 22 (21)&&&& 11& & 31.15(21.53)\\
$2^{3}P_{0}$ & $1^{3}D_{1}$ & 65.87 & 3.21 & & & & &13 (51)&& && 1.4& &33.24(13.55) \\
\hline
\end{tabular}}
\end{table*}
\begin{table}
\caption{Magnetic dipole (M1) transition widths. (LP = Linear potential model, SP = Screened potential model, NR = Non-relativistic and RE = Relativistic; here $E_{\gamma}$ is in MeV and $\varGamma$ in keV) \label{tab:M1CC} }
\scalebox{0.89}{
\begin{tabular}{cccccccccccccc}
\hline
\addlinespace[2pt]
\multicolumn{2}{c}{ Transition} & \multicolumn{2}{c}{This work} & Expt.\cite{PDGlatest}& \multicolumn{9}{c}{Other work } \\
\addlinespace[2pt]
Initial&Final &$E_{\gamma}$ & $\varGamma$ & $\varGamma$& \cite{Ebert:2002} & \cite{Parmar2010} & NR(GI)\cite{Barnes:2005}& \cite{Brambilla:2004}&\cite{Eichten:2002}& \cite{Segovia:2008}&\cite{Cao:2012}& LP(SP)\cite{Deng:2016}& RE(NR)\cite{Sultan:2014}\\
\addlinespace[3pt]
\hline
\addlinespace[5pt]
$1^{3}S_{1}$ & $1^{1}S_{0}$ & 97 & 1.647 & $1.58\pm0.37$ & 1.05 & 2.01 & 2.9 (2.4)& 1.960& 1.92& 2.0& 2.2 & 2.39 (2.44)& 2.765 (2.752)\\
$2^{3}S_{1}$ & $2^{1}S_{0}$ & 42 & 0.135 & $0.21\pm0.15$& 0.99 & 0.20 &0.21 (0.17)& 0.140& 0.04& 0.2&0.096 &0.19 (0.19) &0.198 (0.197)\\
$3^{3}S_{1}$ & $3^{1}S_{0}$ & 36 & 0.082 & & & 0.012 &0.046 (0.067) & & & 0.0046& 0.044 & 0.051 (0.088)&0.023 (0.044)\\
$2^{3}S_{1}$ & $1^{1}S_{0}$ & 595 & 69.57 & $1.24\pm0.29$ & 0.95 & &4.6 (9.6)& 0.926 & 0.91&& 3.8&8.08 (7.80)& 3.370 (4.532)\\
$2^{1}S_{0}$ & $1^{3}S_{1}$ & 476 & 35.72 & & 1.12 & & 7.9 (5.6)& 0.538&&7.2& 6.9&2.64 (2.29)& 5.792 (7.962) \\
\hline
\addlinespace[5pt]
$1^{3}P_{2}$ & $1^{3}P_{0}$ & 97 & 1.638 & & & & & &&&& &\\
$1^{3}P_{2}$ & $1^{3}P_{1}$ & 33 & 0.189 & & & & && &&& &\\
$1^{3}P_{2}$ & $1^{1}P_{1}$ & 22 & 0.056 & && & & &&&& &\\
$1^{1}P_{1}$ & $1^{3}P_{0}$ & 76 & 0.782 & & & & && &&& &\\
\hline
\end{tabular}}
\end{table}
\begin{table*}
\caption{Leptonic decay widths $\varGamma(\psi \rightarrow e^{+}e^{-})$ (in keV).\label{tab:annihi2e} }
\scalebox{0.9}{
\begin{tabular}{ccccccccccccc}
\hline
\addlinespace[2pt]
State & \multicolumn{2}{c}{This work} & Expt.\cite{PDGlatest}& \multicolumn{9}{c}{Other work } \\
\addlinespace[2pt]
& $\varGamma_{l^{+}l^{-}}$ & $\varGamma_{l^{+}l^{-}}^{cf}$ & & \cite{DSouza:2017} & \cite{Bhaghyesh:2011,Bhaghyesh:2012} & \cite{Li:2009} & \cite{Giannuzzi:2008} & \cite{Radford:2007} & \cite{Barnes:2005}&\cite{Segovia:2008}&\cite{Cao:2012}&\cite{Bhavsar:2018} \\
\addlinespace[2pt]
\hline
\addlinespace[5pt]
$J/\psi$ & 8.335 & 3.623 & $5.55\pm0.14\pm0.02$ & 3.112 & 6.847 (2.536) & 11.8 (6.60) & 4.080 & 4.28 & 12.13& 3.93 &6.0(3.3)&5.63\\
$\psi(2S)$ & 2.496 & 1.085 & $2.33\pm0.07$ & 2.197 & 3.666 (1.358) & 4.29 (2.40) & 2.375 & 2.25 & 5.03&1.78 &2.2(1.2)&2.19\\
$\psi(3S)$ & 1.722 & 0.748 & $0.86\pm0.07$ & 1.701 & 2.597 (0.962) & 2.53 (1.42) & 0.835 & 1.66 & 3.48&1.11 &1.8(0.98)&1.20\\
$\psi(4S)$ & 1.378 & 0.599 & $0.58\pm0.07$ & & 2.101 (0.778) & 1.73 (0.97) & & 1.33 & 2.63& 0.78 &1.3(0.70)&0.63\\
$\psi(5S)$ & 1.168 & 0.508 & & & 1.701 (0.633) & 1.25 (0.70) & & & & 0.57 & &0.24\\
$\psi(6S)$ & 1.017 & 0.442 & & & & 0.88 (0.49) & & & & 0.42 && \\
\addlinespace[2pt]
\hline
\addlinespace[5pt]
$1^{3}D_{1}$ & 0.261 & 0.113 & $0.262\pm0.018$ & 0.275 & 0.096 & 0.055 (0.031) & & 0.09 & 0.056& 0.22 &0.079(0.044)&\\
$2^{3}D_{1}$ & 0.381 & 0.166 & $0.48\pm0.22$ & 0.223 & 0.112 & 0.066 (0.037) & & 0.16 & 0.096& 0.30 &0.13(0.073)&\\
$3^{3}D_{1}$ & 0.485 & 0.211 & & & & 0.079 (0.044) & & && 0.33 && \\
\hline
\end{tabular}}
\end{table*}
\begin{table*}
\caption{Two-photon decay widths without and with the correction factor (in keV).\label{tab:annihi2p}}
\scalebox{0.90}{
\begin{tabular}{cccccccccccccccc}
\hline
\addlinespace[2pt]
State & \multicolumn{2}{c}{This work} & Expt.\cite{PDGlatest}& \multicolumn{12}{c}{Other work } \\
\addlinespace[2pt]
& $\varGamma_{\gamma\gamma}$ & $\varGamma_{\gamma\gamma}^{cf}$ & & \cite{DSouza:2017} & \cite{Negash:2015} & \cite{Bhaghyesh:2011,Bhaghyesh:2012} & \cite{Li:2009} & \cite{Laverty:2009} & \cite{Munz:1996} & \cite{Lakhina2006} & \cite{Kim:2004} & \cite{Giannuzzi:2008}&\cite{Cao:2012}&\cite{Ebert:2003b}&\cite{Munz:1996}\\
\addlinespace[2pt]
\hline
\addlinespace[5pt]
$\eta_{c}(1S)$ & 10.351 & 6.621 & $5.1\pm 0.4$ & 6.96 & 7.918 & 6.68 & 8.5 & 5.09 & 3.5 & 7.18 & 7.14 & 4.252& 7.5 & 5.5& 3.5\\
$\eta_{c}(2S)$ & 4.501 & 2.879 & $2.15\pm0.6$ & 10.45 & 5.789 & 5.08 & 2.4 & 2.63 & 1.38 & 1.71 & 4.44 & 3.306&2.9 &1.8 &1.38\\
$\eta_{c}(3S)$ & 3.821 & 2.444 & & 1.03 & 0.299 & 4.53 & 0.88 & & 0.94 & 1.21 & & 1.992& 2.5 & &\\
$\eta_{c}(4S)$ & 3.582 & 2.291 & & & & & & & 0.73 & & && 1.8 & &\\
$\eta_{c}(5S)$ & 3.460 & 2.213 & & & & & & & 0.62 & & & & & &\\
$\eta_{c}(6S)$ & 3.378 & 2.161 & & & & & & & & & &&& & \\
\addlinespace[2pt]
\hline
\addlinespace[5pt]
$1^{3}P_{0}$ & 1.973 & 2.015 & $2.36\pm0.35$ & 13.43 & & 2.62 & 2.5 & 2.02 & 1.39 & 3.28 & && 10.8 & 2.9&1.39\\
$2^{3}P_{0}$ & 2.299 & 2.349 & & 2.67 & & & 1.7 & & 1.11 & & & & 6.7&1.9 &1.11\\
$3^{3}P_{0}$ & 2.714 & 2.773 & & & & & 1.2 & & 0.91 & & & & 6.5& &\\
\addlinespace[2pt]
\hline
\addlinespace[5pt]
$1^{3}P_{2}$ & 0.526 & 0.229 & $0.53\pm0.03$ & 1.72 & & 0.25 & 0.31 & 0.46 & 0.44 & & & & 0.27& 0.50&0.44\\
$2^{3}P_{2}$ & 0.613 & 0.267 & & 0.343 & & & 0.23 & & 0.48 & & &&0.39&0.52 &0.48\\
$3^{3}P_{2}$ & 0.724 & 0.315 & & & & & 0.17 & & 0.014 & & && 0.66& & \\
\hline
\end{tabular}}
\end{table*}
\begin{table}
\caption{Three-photon decay widths (in eV).\label{tab:annihi3p} }
\centering
\begin{tabular}{cccc}
\hline
\addlinespace[2pt]
State & \multicolumn{2}{c}{This work} & Expt.\cite{PDGlatest} \\
& $\varGamma_{\gamma\gamma\gamma}$ & $\varGamma_{\gamma\gamma\gamma}^{cf}$& \\
\addlinespace[2pt]
\hline
\addlinespace[5pt]
$J/\psi$ & 4.41691 & 3.94748& $1.08\pm0.032$\\
$\psi(2S)$ & 1.83911 & 1.64365& \\
$\psi(3S)$ & 1.55252 & 1.38752& \\
$\psi(4S)$ & 1.45187 & 1.29756&\\
$\psi(5S)$ & 1.40027 & 1.25145&\\
$\psi(6S)$ & 1.36564 & 1.2205&\\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Three-gluon decay widths (in KeV).\label{tab:annihi3g}}
\centering
\begin{tabular}{cccccc}
\hline
\addlinespace[2pt]
State & \multicolumn{2}{c}{This work} & Expt.\cite{PDGlatest}& \multicolumn{2}{c}{Other work } \\
& $\varGamma_{ggg}$ & $\varGamma_{ggg}^{cf}$ & &\cite{Eichten:2002}& \cite{Barnes:2003} (MeV)\\
\addlinespace[2pt]
\hline
\addlinespace[5pt]
$J/\psi$ & 442.669 & 269.059&$59.55\pm0.18$ &$52.8\pm5$&\\
$\psi(2S)$ & 184.318 & 112.031&$31.38\pm0.85$& $23\pm2.6$&\\
$\psi(3S)$ & 155.596 & 94.5727&&&\\
$\psi(4S)$ & 145.508 & 88.4413&&&\\
$\psi(5S)$ & 140.337 & 85.2984&& &\\
$\psi(6S)$ & 136.866 & 83.1888&&&\\
\addlinespace[5pt]
$1^{1}P_{1}$ & 285.127 & &&$720\pm320$&\\
$2^{1}P_{1}$ & 420.078 && &&1.29\\
$3^{1}P_{1}$ & 558.78 &&&& \\
\addlinespace[5pt]
$1^{3}D_{1}$ & 189.367 &&&216 &1.15\\
$2^{3}D_{1}$ & 359.346 &&&& \\
$3^{3}D_{1}$ & 556.588 &&&&\\
\addlinespace[5pt]
$1^{3}D_{2}$ & 53.8761 &&& 36&0.08\\
$2^{3}D_{2}$ & 102.236 && &&\\
$3^{3}D_{2}$ & 158.353 && &&\\
\addlinespace[5pt]
$1^{3}D_{3}$ & 89.7001 &&& 102&0.18\\
$2^{3}D_{3}$ & 170.217 && &&\\
$3^{3}D_{3}$ & 263.647 & &&&\\
\hline
\end{tabular}
\end{table}
\begin{table*}
\caption{Two-gluon decay widths (in MeV).\label{tab:annihi2g}}
\centering
\begin{tabular}{cccccccccc}
\hline
\addlinespace[2pt]
State & \multicolumn{2}{c}{This work} & Expt.\cite{PDGlatest}& \multicolumn{6}{c}{Other work } \\
& $\varGamma_{gg}$& $\varGamma_{gg}^{cf}$ & & \cite{DSouza:2017} & \cite{Negash:2015} & \cite{Bhaghyesh:2011,Bhaghyesh:2012}& \cite{Laverty:2009} & \cite{Kim:2004}& \cite{Eichten:2002} \\
\addlinespace[2pt]
\hline
\addlinespace[5pt]
$\eta_{c}(1S)$ & 24.249 & 36.587 & $28.6\pm2.2$ & 28.60 & 13.070 & 32.44 & 15.70 & 19.6& $17.4\pm2.8$\\
$\eta_{c}(2S)$ & 10.545 & 15.910 & $14\pm7$ & 42.90 & 9.534 & 24.64 & 8.10 & 12.1& $8.3\pm1.3$\\
$\eta_{c}(3S)$ & 8.952 & 13.507 & & 4.26 & 4.412 & 21.99 & & & \\
$\eta_{c}(4S)$ & 8.392 & 12.662 & & & & & & & \\
$\eta_{c}(5S)$ & 8.106 & 12.230 & & & & & && \\
$\eta_{c}(6S)$ & 7.914 & 11.941 & & & & & && \\
\addlinespace[2pt]
\hline
\addlinespace[5pt]
$1^{3}P_{0}$ & 4.621 & 9.274 & $10\pm0.6$ & 47.76 & & 15.67 & 4.68 & &$14.3\pm3.6$ \\
$2^{3}P_{0}$ & 5.386 & 10.810 & & 9.50 & & & & &\\
$3^{3}P_{0}$ & 6.357 & 12.758 & & & & & && \\
\addlinespace[2pt]
\hline
\addlinespace[5pt]
$1^{3}P_{2}$ & 1.232 & 0.945 & $1.97\pm0.11$ & 5.27 & & 1.46 & 1.72 && $1.71\pm0.21$ \\
$2^{3}P_{2}$ & 1.436 & 1.101 & & 1.04 & & & && \\
$3^{3}P_{2}$ & 1.695 & 1.300 & & & & & && \\
\addlinespace[2pt]
\hline
\addlinespace[5pt]
$1^{1}D_{2}$ & 12.460 (KeV) & & & & & & & & 110 (KeV)\\
$2^{1}D_{2}$ & 21.679 (KeV) & & & & & & && \\
$3^{1}D_{2}$ & 31.757 (KeV) & & & & & & && \\
\hline
\end{tabular}
\end{table*}
\begin{table}
\caption{$n^{3}S_{1}\rightarrow\gamma gg$ decay widths.\label{tab:annihip2g}}
\centering
\begin{tabular}{cccc}
\hline
\addlinespace[2pt]
State & \multicolumn{2}{c}{This work} & Expt.\cite{PDGlatest} \\
 & $\varGamma_{\rightarrow\gamma gg}$ (KeV) & $\varGamma_{\rightarrow\gamma gg}^{cf}$ (KeV)& \\
\addlinespace[2pt]
\hline
\addlinespace[3pt]
$J/\psi$ &31.0421 & 8.99657 &$8.18\pm0.25$\\
$\psi(2S)$& 12.9253 & 3.74599& $2.93\pm0.16$ \\
$\psi(3S)$& 10.9111& 3.16224& \\
$\psi(4S)$ & 10.2037& 2.95723&\\
$\psi(5S)$& 9.8411 & 2.85214&\\
$\psi(6S)$& 9.59771 & 2.7816&\\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{$n^{3}P_{1}\rightarrow q\overline{q}+g$ decay widths.\label{tab:annihiqqg}}
\centering
\begin{tabular}{cc}
\hline
State & This work \\
& $\varGamma_{q\overline{q}+g}$(KeV)\\
\addlinespace[2pt]
\hline
\addlinespace[3pt]
$1^{3}P_{1}$ & 342.152 \\
$2^{3}P_{1}$ & 504.093 \\
$3^{3}P_{1}$ & 670.536\\
\hline
\end{tabular}
\end{table}
\begin{multicols}{2}
\section{Results and Discussion\label{sec:Resu}}
{\bf In the framework of the Cornell potential with a Gaussian wave function and relativistic corrections to the Hamiltonian, comprising an $\mathcal{O}(1/m)$ correction in the potential energy term and an expansion of the kinetic energy term up to ${{\cal{O}}\left({\bf p}^{10}\right)}$,} we have studied the mass spectra of charmonium states. {\bf We have calculated the center-of-weight masses (the values yielded by the Hamiltonian) for the nS $(n\leq6)$ and $nP,\; nD\;(n\leq3)$ charmonium states and tabulated them in Table(\ref{tab:ccmsa}).}
We observed that the Hamiltonian yields for the nS $(n\leq3)$ and $nP,\; nD\;(n\leq3)$ states are in accordance with experimental values as well as with those predicted by other theoretical models, whereas those for nS $(4\leq n\leq6)$ are under- or overestimated compared with the results of other theoretical models.
The calculated masses of the charmonium states are {\bf graphically} represented in Fig.\ref{fig:MassCC} and tabulated in Table(\ref{tab:massescc}) along with the experimentally observed results. After adding the spin hyperfine interaction to the fixed spin-average mass for {\bf the ground} state, we obtained the pseudoscalar state mass $\eta_{c}$ (2995 MeV) and the vector state mass $J/\psi$ (3094 MeV).
The estimated mass of $2^{1}S_{0}$ (3606 MeV) is 33 MeV lower than the experimentally observed mass, whereas the mass of $3^{3}S_{1}$ (4036 MeV) is in accordance with the mass given in the PDG \cite{PDGlatest} and with other model estimates \cite{Deng:2016,Ebert2010,Li:2009}. Our calculated mass of $5^{3}S_{1}$ (4654 MeV) is 11 MeV higher than the value quoted in the PDG \cite{PDGlatest} and in accordance with the mass estimated by other models \cite{Radford:2007,Sultan:2014}. We have assigned $X(4660)$ to the $5^{3}S_{1}$ state of charmonium. The estimated masses of the $6^{1}S_{0}$ (4893 MeV) and $6^{3}S_{1}$ (4925 MeV) states are in agreement with the masses estimated by other models \cite{Sultan:2014}.
The P-wave states $1^{3}P_{1}$ (predicted mass 3511 MeV), $1^{1}P_{1}$ (predicted mass 3525 MeV) and $1^{3}P_{2}$ (predicted mass 3556 MeV) are in good agreement with the experimentally observed values \cite{PDGlatest}.
{\bf We have assigned the newly observed charmonium-like state $X(3900)$ to the $2^{1}P_{1}$ (3936 MeV) state and $X(3872)$ to the $2^{3}P_{1}$ (3925 MeV) state. The masses predicted for the $2^{1}P_{1}$ (3936 MeV) and $2^{3}P_{1}$ (3925 MeV) states are in good agreement with the masses predicted by other models \cite{Deng:2016,Yang:2015cc,Ebert2010,Barnes:2005,Cao:2012,Ebert2002,Sultan:2014}. The candidate $X(3872)$ is interpreted as the $2^{3}P_{1}$ state with well-established quantum numbers, although its interpretation as a
molecular state \cite{Tornqvist:2004,Braaten:2007} was questioned in Ref.\cite{Albaladejo:2017}, while Ref.\cite{Hanhart:2007} interpreted it as a virtual state}.
We have also assigned the charmonium-like states $X(3915)$ and $X(4274)$ to the $2^{3}P_{0}$ (3866 MeV) and $3^{3}P_{1}$ (4257 MeV) states respectively. {\bf Considering $X(3915)$ as the $2^{3}P_0$ state is still problematic, as was also pointed out in Refs.\cite{Deng:2016,Zhao:2013} and the references therein. In Refs.\cite{Zhao:2013,Liu:2009fe,Guo:2010}, the authors suggest that the interpretation of $X(3915)$ as the $2^{3}P_0$ state faces the following problems. First, for a scalar meson above the corresponding thresholds, the open-flavor modes should be the dominant decay channels; the $X(3915)$ can couple to the $D\bar{D}$ channel in an S-wave, yet it was not observed in the $D\bar{D}$ channel. Second, the mass splitting between the $1^{3}P_2$ and $1^{3}P_0$ states is 141 MeV, while the splitting between the relatively well determined $X(3930)$, taken as the $2^{3}P_2$ state, and $X(3915)$, taken as the $2^{3}P_0$ state, is 9 MeV, which is too small for the hyperfine splitting}.
We observed that the new charmonium-like states $X(4140)$ and $X(4274)$, with quantum numbers $J^{PC}=1^{++}$, are good candidates for the $3^{3}P_{1}$ {\bf state} within the screened potential model and the linear potential model respectively. However, none of the {\bf models} can give $J^{PC}=1^{++}$ charmonium state masses of 4147 MeV and 4273 MeV at the same time, which may indicate the exotic nature of $X(4140)$ and/or $X(4274)$, as was also pointed out in Ref.\cite{Deng:2016}.
The predicted masses for the $1^{3}D_{1}$ (3799 MeV), $1^{3}D_{2}$ (3805 MeV) and $2^{3}D_{1}$ (4145 MeV) states are in accordance with the experimentally observed results \cite{PDGlatest} as well as in good agreement with other model predictions \cite{Deng:2016,Ebert2010,Barnes:2005,Cao:2012,Li:2009,Ebert2002,Sultan:2014}. The masses of charmonium estimated using our model are overall in agreement (within a few MeV) with the experimentally observed values. States with masses $M<4.1$ GeV are found to be in good agreement with other theoretical estimates.
Table(\ref{tab:masssplit}) shows the hyperfine splittings for the S-wave states and the fine splittings for some P-wave states. For comparison, the experimental data from the PDG \cite{PDGlatest} and the predictions of other theoretical models are listed in the same table. We observed that the predicted hyperfine splittings up to the 2S states are in agreement with the world average data \cite{PDGlatest} and with the predictions of other theoretical models. The hyperfine splittings for the 3S to 6S states take different values in different theoretical models. Comparing our predicted results with other theoretical models, we observed that the masses of the low-lying nS $(n\leq2)$, nP, nD $(n=1)$ charmonium states differ only slightly, whereas the masses of the higher charmonium states $nS\;(n\geq3)$, $nP,\;nD\;(n\geq2)$ differ notably.
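As a quick arithmetic check of these splittings, the ground-state hyperfine splitting follows directly from the masses quoted above,
$$\Delta_{hf}(1S)=M(J/\psi)-M(\eta_{c})=3094-2995=99\ \text{MeV},$$
to be compared with a world-average value of roughly $113$ MeV \cite{PDGlatest} (the comparison value stated here is approximate).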
The estimated pseudoscalar and vector decay constants $f_{P}$ ($f_{Pcor}$) and $f_{V}$ ($f_{Vcor}$), without (with) QCD correction, are tabulated in Table(\ref{tab:decaycc}); they are in agreement with experimental results as well as with other theoretical model estimates.
We calculated the radiative E1 and M1 dipole transition widths, which are tabulated in Tables(\ref{tab:E1CC}, \ref{tab:M1CC}). We calculated the E1 transitions $\varGamma[1P \rightarrow (1S)\gamma]$, $\varGamma[2S \rightarrow (1P)\gamma]$, $\varGamma[1D \rightarrow (1P)\gamma]$, $\varGamma[2P \rightarrow (2S)\gamma]$ and $\varGamma[2P \rightarrow (1D)\gamma]$ using the masses predicted by our model. Our calculated E1 transitions $\varGamma[1P \rightarrow (1S)\gamma]$ and $\varGamma[2S \rightarrow (1P)\gamma]$ are smaller than the experimental results as well as other theoretical estimates, whereas the $\varGamma[1D \rightarrow (1P)\gamma]$, $\varGamma[2P \rightarrow (2S)\gamma]$ and $\varGamma[2P \rightarrow (1D)\gamma]$ transitions are in agreement with the estimates of other theoretical models. Our predictions of $\varGamma[1^{3}D_1 \rightarrow (1^{3}P_1)\gamma]$ and $\varGamma[1^{3}D_1 \rightarrow (1^{3}P_0)\gamma]$ are almost double the PDG average data \cite{PDGlatest}, while the prediction of $\varGamma[1^{3}D_1 \rightarrow (1^{3}P_2)\gamma]$ is in agreement with the PDG average data \cite{PDGlatest} as well as with other model predictions.
We also calculated the M1 transitions of the low-lying 1S, 2S and 3S states as well as the 1P states. Our predictions of $\varGamma[1^{3}S_1 \rightarrow (1^{1}S_0)\gamma]$ and $\varGamma[2^{3}S_1 \rightarrow (2^{1}S_0)\gamma]$ are in agreement with the PDG average data \cite{PDGlatest}, while $\varGamma[2^{3}S_1 \rightarrow (1^{1}S_0)\gamma]$ is much larger than the PDG average data \cite{PDGlatest}. {\bf Gang Li and Qiang Zhao \cite{Li:2007,Li:2011a} studied intermediate meson loop (IML) contributions to $1^{3}S_1, 2^{3}S_1 \rightarrow \gamma 2^{1}S_0\,(\gamma1^{1}S_0)$, apart from
the dominant M1 transitions, in an effective Lagrangian approach. Their results show that the IML contributions are relatively small but play a crucial role. The radiative decay widths including the M1 transition in the GI model and intermediate hadronic loops are $1.59$ KeV for $1^{3}S_1\rightarrow \gamma 2^{1}S_0$ and 0.032 (0.86) KeV for $2^{3}S_1 \rightarrow \gamma 2^{1}S_0\,(\gamma1^{1}S_0)$ \cite{Li:2007}, whereas the results including the M1 transition amplitude of the GI model and IML transitions are $1.58\pm0.37$ KeV for $1^{3}S_1\rightarrow \gamma 2^{1}S_0$ and $0.08\pm0.03$ ($2.78^{+2.65}_{-1.75}$) KeV for $2^{3}S_1 \rightarrow \gamma 2^{1}S_0\,(\gamma1^{1}S_0)$ \cite{Li:2011a}.}
Our prediction of $\varGamma[3^{3}S_1 \rightarrow (3^{1}S_0)\gamma]$ is in agreement with other theoretical model predictions, while the prediction of $\varGamma[2^{1}S_0 \rightarrow (1^{3}S_1)\gamma]$ is larger than those predicted by other theoretical models. We observed that the various models give different estimates of the E1 and M1 transitions, which may be because different models use different parameters or treatments of the relativistic corrections. The E1 and M1 transitions as a whole are strongly model dependent, and more studies are required in both experiment and theory.
{\bf We estimated the partial decay widths $\varGamma$ and $\varGamma^{cf}$ (with QCD correction factor) of the annihilation processes into $e^{+}e^{-}$, two photons, three photons, two gluons, three gluons, $\gamma gg$ and $q\bar{q}+g$, using the masses predicted by our potential model and the radial wave function at the origin. They are tabulated in Tables(\ref{tab:annihi2e}-\ref{tab:annihiqqg}) and compared with experimental results from the PDG \cite{PDGlatest} as well as with other theoretical estimates.}
We observed that our estimated leptonic decay widths without QCD correction for $J/\psi$, $\psi (2S)$, $\psi (3S)$ and $\psi (4S)$ are higher than the experimentally observed leptonic decay widths. {\bf After QCD correction, the estimated leptonic decay width} is 1.93 KeV, 1.24 KeV, 0.11 KeV and 0.019 KeV smaller than the experimental result for the $J/\psi$, $\psi (2S)$, $\psi (3S)$ and $\psi (4S)$ states respectively. Also, our estimated leptonic decay width with QCD correction for the $n^{3}D_1$ states is much lower than the experimental result.
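For orientation, leptonic widths of this type are commonly obtained from the Van Royen--Weisskopf relation; it is shown here schematically with a first-order QCD correction factor (the exact normalization and correction factor used in this work may differ):
$$\varGamma\big(n^{3}S_{1}\rightarrow e^{+}e^{-}\big)=\frac{4\alpha^{2}e_{c}^{2}}{M_{nS}^{2}}\,\big|R_{nS}(0)\big|^{2},\qquad \varGamma^{cf}=\varGamma\Big(1-\frac{16}{3}\frac{\alpha_{s}}{\pi}\Big),$$
where $e_{c}=2/3$ is the charm-quark charge and $R_{nS}(0)$ the radial wave function at the origin.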
{\bf Our estimated two-photon and two-gluon decay widths with QCD correction for the $\eta_c (nS)$, $n^{3}P_0$ and $n^{3}P_2$ states are in accordance with the experimentally observed results as well as with the other theoretical estimates. Our estimated three-photon decay width with QCD correction for $J/\psi$ is lower than the experimentally observed result, whereas the estimated three-gluon decay widths with QCD correction for the $J/\psi$ and $\psi(2S)$ states are higher than the experimentally observed results as well as other theoretical estimates.}
Our estimated $\gamma gg$ decay widths with QCD correction for the $J/\psi$ and $\psi(2S)$ states are {\bf in accordance} with the experimentally observed results. We have also {\bf computed} the $q\bar{q}+g$ decay widths for the $n^{3}P_1$ states. We observed that radiative QCD corrections modify the theoretical predictions considerably and {\bf bring} the estimated results close to the experimental data. We also observed that the estimated values of the annihilation decay widths of the various models show a wide range of variation. Owing to the considerable uncertainties arising from the model dependence of the wave functions and possible relativistic {\bf as well as} QCD radiative corrections, we would like to mention that the formulas used for the calculation of the annihilation decay widths should be regarded as estimates of the partial widths rather than precise predictions.
\subsection{Regge trajectories \label{sec:reg}}
We plot the Regge trajectories in the $(n,M^{2})$ and $(J,M^{2})$ planes using the masses estimated by our potential model. The ``daughter'' trajectories are trajectories with the same value of $J$ that differ by a quantum number corresponding to the radial quantum number. The masses on the ``daughter'' trajectories are higher than those on the leading trajectory with the given quantum numbers. The linearity of the Regge trajectories {\bf is regarded} as a reflection of the strong forces between quarks at large distances (color confinement).
The Regge trajectories in the $(J,M^{2})$ plane with natural, $(P=(-1)^{J})$ ($J^{P}=1^{-},2^{+},3^{-}$), and unnatural, $(P=(-1)^{J-1})$ ($J^{P}=0^{-},1^{+},2^{-}$), parity are depicted in Figs.~(\ref{fig:NPmesonCC}-\ref{fig:UNPmesonCC}). {\bf In the figures, the charmonium masses estimated by our model are represented by solid triangles, whereas the experimentally available masses, with the corresponding charmonium names, are represented by hollow squares.} The Regge trajectories in the $(n_{r},M^{2})$ plane, with $n_{r}=n-1$ and $n$ the principal quantum number, are shown in Figure~(\ref{fig:PsVmesonCC}) and Figure~(\ref{fig:SavmesonCC}).
The following definitions are used to calculate the $\chi^{2}$-fitted slopes ($\alpha$, $\beta$) and intercepts ($\alpha_{0}$, $\beta_{0}$) \cite{Kher:2017,Kher:2017b}:
\begin{equation}
J=\alpha M^{2}+\alpha_{0}.\label{eq:J regge}
\end{equation}
\begin{equation}
n_{r}=\beta M^{2}+\beta_{0}.\label{eq:nr regge}
\end{equation}
The calculated slopes and intercepts are tabulated in Tables~(\ref{tab:alfaCC},\ref{tab:bitaCC},\ref{tab:SpinaveCC}). The estimated masses of charmonium fit well to the trajectories in the $(n,M^{2})$ and $(J,M^{2})$ planes. The daughter trajectories, which involve both radially and orbitally excited states, turn out to be almost linear, equidistant and parallel, whereas the parent Regge trajectories, which start from the ground states, exhibit a nonlinear behaviour in the lower mass region in both planes.
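The $\chi^{2}$ fit of Eq.(\ref{eq:nr regge}) can be sketched numerically; the following minimal Python illustration uses the 1S and 2S masses quoted in the text, while the 3S mass is an assumed placeholder (not taken from this work):

```python
import numpy as np

# Least-squares (chi^2) linear fit of the radial Regge trajectory
# n_r = beta * M^2 + beta_0 for a pseudoscalar tower.
# The 1S and 2S masses are the values quoted in the text;
# the 3S mass is illustrative only.
masses = np.array([2.995, 3.606, 4.048])   # GeV
n_r = np.arange(len(masses))               # n_r = n - 1 = 0, 1, 2
beta, beta0 = np.polyfit(masses**2, n_r, deg=1)
print(beta, beta0)                         # slope (GeV^-2) and intercept
```

With trajectory masses of this size the fitted slope comes out of order $0.3\ \mathrm{GeV}^{-2}$ with a negative intercept, the same ballpark as the entries of Table~(\ref{tab:bitaCC}).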
We observed that the linearity of the Regge trajectories depends on the quark masses, as the orbital momentum $\ell$ of a state is related to its mass: $\ell=\alpha M^{2}(\ell)+\alpha(0)$, where the slope $\alpha$ depends on the flavor content of the states lying on the corresponding trajectory. In Regge phenomenology, the radial spectrum of heavy quarkonia typically {\bf leads} to strong nonlinearities in the framework of the hadron string model \cite{Afonin:2016}.
\end{multicols}
\begin{figure}
\centering
\includegraphics[bb=30bp 60bp 750bp 550bp,clip,width=0.80\textwidth]{NPmesonCC.eps}
\caption{Regge trajectory ($M^{2}\rightarrow J$) with natural parity. \label{fig:NPmesonCC}}
\end{figure}
\begin{figure}
\centering
\includegraphics[bb=30bp 60bp 750bp 550bp,clip,width=0.80\textwidth]{UNPmesonCC.eps}
\caption{Regge trajectory ($M^{2}\rightarrow J$) with unnatural parity. \label{fig:UNPmesonCC}}
\end{figure}
\begin{figure}
\centering
\includegraphics[bb=30bp 60bp 750bp 550bp,clip,width=0.80\textwidth]{PsVmesonCC.eps}
\caption{Regge trajectory ($M^{2}\rightarrow n_{r}$) for the pseudoscalar and vector $S$ state and excited $P$ and $D$ state masses.\label{fig:PsVmesonCC}}
\end{figure}
\begin{figure}
\centering
\includegraphics[bb=30bp 60bp 750bp 550bp,clip,width=0.80\textwidth]{SavmesonCC.eps}
\caption{Regge trajectory ($M^{2}\rightarrow n_{r}$) for the S-P-D states center of weight mass.\label{fig:SavmesonCC}}
\end{figure}
\begin{table}
\caption{Slopes and intercepts of the $(J,\: M^{2})$ Regge trajectories with unnatural and natural parity.\label{tab:alfaCC} }
\centering
\begin{tabular}{cccc}
\hline
\addlinespace[2pt]
{Parity} & {Trajectory} & {$\alpha(GeV^{-2})$} & {$\alpha_{0}$}\\
\addlinespace[2pt]
\hline
\addlinespace[3pt]
\multirow{3}{*}{Unnatural } & Parent & $0.355\pm0.058$ & $-3.252\pm0.706$\\
& First daughter & $0.471\pm0.038$ & $-6.164\pm0.576$\\
& Second daughter & $0.518\pm0.032$ & $-8.319\pm0.570$\\
\addlinespace[2pt]
\hline
\addlinespace[3pt]
\multirow{3}{*}{Natural} & Parent & $0.401\pm0.060$ & $-2.902\pm0.746$\\
& First daughter & $0.504\pm0.057$ & $-5.764\pm0.877$\\
& Second daughter & $0.553\pm0.059$ & $-8.057\pm1.081$\\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Slopes and intercepts for the $(n_{r},\: M^{2})$ Regge trajectories. \label{tab:bitaCC} }
\centering
\begin{tabular}{cccc}
\hline
\addlinespace[2pt]
Meson & $J^P$ & $\beta(GeV^{-2})$ & $\beta_{0}$\\
\addlinespace[2pt]
\hline
\addlinespace[3pt]
$\eta_{c}$ & $0^{-+}$ & $0.341\pm0.017$ & $-3.236\pm0.303$\\
$\Upsilon$ & $1^{--}$ & $0.347\pm0.014$ & $-3.463\pm0.252$\\
$\chi_{c0}$ & $0^{++}$ & $0.324\pm0.006$ & $-3.861\pm0.088$\\
$\chi_{c1}$ & $1^{++}$ & $0.355\pm0.007$ & $-4.441\pm0.112$\\
$h_{c}$ & $1^{+-}$ & $0.346\pm0.009$ & $-4.399\pm0.138$\\
$\chi_{c2}$ & $2^{++}$ & $0.345\pm0.012$ & $-4.284\pm0.183$\\
$\psi(^{3}D_{1})$ & $1^{--}$ & $0.374\pm0.006$ & $-5.406\pm0.104$\\
$\psi(^{3}D_{2})$ & $2^{--}$ & $0.377\pm0.009$ & $-5.473\pm0.159$\\
$\psi(^{1}D_{2})$ & $2^{-+}$ & $0.371\pm0.006$ & $-5.372\pm0.101$\\
$\psi(^{3}D_{3})$ & $3^{--}$ & $0.369\pm0.006$ & $-5.344\pm0.100$\\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Slopes and intercepts of $(n_{r},\: M^{2})$ Regge trajectory for center of weight mass.\label{tab:SpinaveCC} }
\centering
\begin{tabular}{ccc}
\hline
\addlinespace[2pt]
Trajectory & $\beta(GeV^{-2})$ & $\beta_{0}$ \\
\addlinespace[2pt]
\hline
\addlinespace[5pt]
S State & $0.342\pm0.012$ & $-3.413\pm0.226$ \\
P State & $0.348\pm0.009$ & $-4.36\pm0.1464$ \\
D State & $0.371\pm0.006$ & $-5.372\pm0.101$ \\
\hline
\end{tabular}
\end{table}
\begin{multicols}{2}
\section{Conclusion\label{sec:conclusion}}
From the mass spectra of charmonium, Tables~(\ref{tab:ccmsa},\ref{tab:massescc}), investigated using a Cornell potential with relativistic corrections to the Hamiltonian, we conclude that our results are in accordance with the available experimental results as well as with those predicted by other theoretical models. The pseudoscalar $(f_{Pcor})$ and vector $(f_{Vcor})$ decay constants predicted with QCD correction, using our estimated charmonium masses, are in accordance with experimental values as well as with those predicted by other theoretical models.
We observed from the Regge trajectories, Figs.~(\ref{fig:NPmesonCC}-\ref{fig:SavmesonCC}), that the experimental masses of the charmonium states fit nicely on them. In the mass region of the lowest excitations of charmonium, the slope of the trajectories decreases with increasing quark mass. The curvature of a trajectory near the ground state is due to the contribution of the color Coulomb interaction, which increases with mass. Hence, the Regge trajectories of charmonium are essentially nonlinear, with the nonlinearity most pronounced in the lower mass region.
Comparing our estimated radiative (E1 and M1 dipole) transition widths with other theoretical estimates, we conclude that the various models give very different predictions for the E1 and M1 dipole transitions, which may be due to the different parameters and treatments of the relativistic corrections used in each model. The E1 and M1 dipole transition widths calculated using the masses and parameters estimated by our model are in agreement with other theoretical and experimental predictions, although in most cases more precise experimental measurements are required.
We also conclude, from the annihilation decay widths calculated using the Van Royen-Weisskopf relation, that the inclusion of QCD correction factors helps to {\bf bring} the estimated results close to the experimental results. The various models show a wide range of variation in their results for the annihilation decay widths, which may be resolved using the NRQCD (non-relativistic QCD) and pNRQCD (potential
non-relativistic QCD) formalisms. \\
{\bf Acknowledgements} A. K. Rai acknowledges the financial support extended by the Department of Science and Technology, India under the SERB fast track scheme SR/FTP/PS-152/2012.\\
\bibliographystyle{epj}
\section{Introduction}
\label{sect:intro}
The starting point of this paper concerns some ergodic problems as they were
studied in \cite{BarlesMeireles2017} and \cite{Ichihara,Ichihara2013}. There,
the authors study the Hamilton-Jacobi equation $$\lambda-\Delta u+H(x,Du)=0,$$
where typically $H(x,Du)=|Du|^m-f(x)$, and $m>1$ (or $m>2$). Our initial
aim was to consider a simple non-local version of this equation and try
to see how similar results could be obtained: existence of solutions, critical
ergodic constants, and some qualitative behaviour like growth estimates. The
equation {we consider} is the following:
\begin{equation}
\label{eq:EP}\tag*{$\ep{}$}
\lambda-\mathcal{L}[u](x)+|Du(x)|^m=f(x)
\quad\text{in}\quad \R^N,
\end{equation}
where the non-local operator is defined as a convolution with a regular
kernel, $\mathcal{L}[u]:=J\ast u-u$, and $J$ is a continuous, compactly supported
probability density. We explain below why, especially in those ergodic
problems, this equation raises interesting questions and new phenomena, even
compared to the more commonly studied fractional Laplacian, for which the kernel
is singular, $J(x)=1/|x|^{N+\alpha}$, $\alpha\in(0,2)$.
Let us also mention that in the ``standard'' setting, studying ergodic problems
is done either in bounded domains, or in the periodic case, see for instance \cite{BarlesDaLio2005, BarlesSouganidis2001}. Both situations
allow for better control of the solutions and
{thus for the use of compactness arguments}.
The fact that in \cite{BarlesMeireles2017,Ichihara,Ichihara2013}, the authors
consider an unbounded domain (the whole space $\R^N$), with a possibly unbounded
data $f$, leads to various difficulties in the process of constructing
solutions, estimating their growth and getting comparison results.
This is even more challenging and difficult in our non-local setting.
{We managed to recover most of the
results one can expect for $\ep{}$, but this work turned out to be much more
interesting and demanding than a simple
adaptation from the local case.} We had to develop new methods and techniques to
deal with the non-local term, and we found out that there are natural
limitations in the growth of solutions and the right-hand side $f$,
a feature which is not present in the local setting. We are
also convinced that the ideas that we use here can be helpful in the
local case as well, and improve some of the results found
in~\cite{BarlesMeireles2017,Ichihara,Ichihara2013}.
By a solution of~\ep{} we understand a pair $(\lambda,u)$ where $\lambda\in\R$
and $u$ is a continuous viscosity solution of the equation.
We also refer to $\ep{\lambda}$ when $\lambda$ is given and the unknown is only $u$.
Observe that~\ep{} is invariant by addition of constants to the solution, as is
usually the case in ergodic problems. As we shall see, the solutions will be
actually locally Lipschitz continuous so that the equation will hold almost
everywhere and in the weak sense.
This kind of ergodic problem is known,~\cite{Ichihara2013}, to be closely
related to the asymptotic behaviour of solutions of the associated evolution
equation, which would read in our case
\begin{equation}\label{ergo.time}
u_t-\mathcal{L}[u]+|Du|^m=f\quad\text{in}\quad\R^N.
\end{equation}
It is not the purpose of this paper to investigate this question, but let us just
mention that in general, solutions of \eqref{ergo.time} are expected to
behave like $u(x,t)=\bar\lambda t+v(x)+o(1)$ as $t\to\infty$ where $(\bar\lambda,v)$
is a solution of \ep{}.
The specific value $\bar\lambda$ is usually obtained by
taking the supremum of all the $\lambda$'s such that there is a solution $v$ of
\ep{\lambda}. And as for $v$, it is in general the unique (up to an additive
constant) solution of $\ep{}$ which is bounded from below. In this paper we
focus on the existence and properties of this pair $(\bar\lambda,v)$, {not on the asymptotic behaviour for \eqref{ergo.time}.}
\subsection{The framework}
Throughout the paper, we make the following fundamental assumptions:\\[2mm]
\noindent $(i)$ the kernel $J:\R^N\to\R$ is $\C^1$, symmetric, radially
decreasing, compactly supported in $B_1(0)$, with $\int J(y)\dy=1$ and
strictly positive in all $B_1(0)$;\\[2mm]
$(ii)$ we restrict ourselves to the superquadratic case $m>2$ (more on this
below);\\[2mm]
$(iii)$ the function $f$ is assumed to be at least continuous
and bounded from below.
\noindent We will give more precise assumptions on $f$ later, but these three basic
assumptions $(i)$--$(iii)$ will always hold and we shall not recall them again.
\noindent{\sc On the non-local term ---} We use here a convolution with a
regular, compactly supported kernel. This choice has several consequences that
need to be dealt with. First, no regularizing effect can be obtained from
operator $\mathcal{L}$. Indeed, contrary to
the Laplacian which is a second-order one, or a fractional Laplacian which would
be of order $\alpha\in(0,2)$, {operator $\mathcal{L}$ can be seen} as a
zero-order term. However, this non-local operator still enjoys some strong
maximum principle. We refer to \cite{AndreuMazonRossiToledo,
ChasseigneChavesRossi2006} and the references therein for general properties
of this type of non-local operator. In particular, $\mathcal{L}$ is known to be
an approximation of the Laplacian as the support of $J$ shrinks to the origin
(when $J$ is symmetric).
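To fix ideas, the zero-order character of $\mathcal{L}$ can be seen in a small numerical sketch (the discretization and the particular bump kernel below are our own choices, not taken from the analysis in this paper): $\mathcal{L}$ annihilates constants, and on $u(x)=x^{2}$ it returns the constant $\int y^{2}J(y)\,dy$, the discrete analogue of a scaled Laplacian acting on $x^2$.

```python
import numpy as np

# 1-D sketch of the zero-order operator L[u] = J*u - u with a C^1,
# symmetric, compactly supported kernel (illustrative choices only).
xg = np.linspace(-5.0, 5.0, 1001)        # computational grid
dx = xg[1] - xg[0]
yk = np.linspace(-1.0, 1.0, 201)         # kernel support B_1(0)
J = (1.0 - yk**2) ** 3                   # smooth bump, vanishes at |y| = 1
J /= J.sum() * dx                        # normalize so that \int J = 1

def nonlocal_L(u):
    """Discrete L[u] = J*u - u (Riemann-sum convolution, 'same' mode)."""
    return np.convolve(u, J, mode="same") * dx - u

interior = slice(len(yk), -len(yk))      # avoid boundary truncation

# L annihilates constants: L[c] = c * \int J - c = 0
c = np.full_like(xg, 3.0)
print(np.abs(nonlocal_L(c)[interior]).max())           # ~ 0

# For u(x) = x^2 and symmetric J, L[u] is the constant \int y^2 J(y) dy
m2 = (yk**2 * J).sum() * dx
print(np.abs(nonlocal_L(xg**2)[interior] - m2).max())  # ~ 0
```

For this kernel $\int y^{2}J(y)\,dy=1/9$; rescaling the support of $J$ shrinks this second moment, which is how the Laplacian approximation mentioned above emerges.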
We also choose to consider here a compactly supported kernel.
When the non-local operator is defined through a fractional Laplacian, for instance, the tail of
order $1/|x|^{N+\alpha}$ implies a power-type growth restriction for all
possible solutions, since $\mathcal{L}[u]$ has to be defined, at least.
On the
contrary, if $J$ is compactly supported, $\mathcal{L}[u]$ is always defined, as
long as $u$ is locally bounded. But (see below), contrary to the local case, the
presence of a non-local term in the equation implies some growth limitation,
even in the case of a compactly supported kernel. A similar behaviour was also
found in~\cite{BrandleChasseigneFerreira}, where the growth of the initial data is
limited, and differs from the local heat equation.
\noindent{\sc On the Hamiltonian ---} In this paper we consider
the case $H(x,p)=|p|^m-f(x)$, but most of the results are adaptable
to more general cases, for instance $H(x,p)=a(x)|p|^m-f(x)$ where $a(x)$ is
regular and does not degenerate, as is done in~\cite{Ichihara}.
Notice however that since our
solutions are not necessarily bounded, the gradient is only locally bounded in
principle, and as is well-known in Hamilton-Jacobi equations, mixing the
$x$-dependence with the $p$-dependence leads to several difficult issues.
As we mentioned, we will restrict ourselves to the {superquadratic} case, $m>2$. Actually, there is only one place
where this specific condition seems to play a role, namely in the existence
construction (proof of Proposition~\ref{proposition.existence.vR.R.varepsilon}),
when using results of~\cite{CapuzzoLeoniPorretta} to deal with a
\textit{vanishing viscosity
approximation}. It is not clear to us whether this restriction
is purely technical and could be relaxed to the more general assumption $m>1$. This is a clear difference from the local equation, since there the viscous term does not vanish.
\subsection{Main results} We present now the main results of this paper,
which can be summarized in three items.
\noindent\textsc{A. Growth limitations ---} As we said, since $J$ is
compactly supported, the non-local term $\mathcal{L}[u]$ is well-defined for any
locally bounded function $u$. However, contrary to the local case, it turns out
that problem \ep{} is not solvable for arbitrary growths of function $f$. More
precisely, if $f(x)=C\exp(a^{|x|})$, then \ep{} is solvable only when $a\leq m$
(at least in the class of radial, radially increasing at infinity solutions).
The formal explanation is that in order to solve the equation, the $|Du|^m$-term
has to be the leading term. But, at least for fast growing radial supersolutions
$\psi$, this is not the case: the convolution looks like
$-\mathcal{L}[\psi](r)\simeq -\psi(r+1)$, which grows faster than $|\psi'|^m(r)$
and this implies that the supersolution inequality cannot be satisfied.
Similarly, when solutions exist they cannot have arbitrary growth for the
same reason (and again we only have non-existence results in the class of radial
and radially increasing at infinity solutions).
We refer to Lemma~\ref{lem:non.existence} and
Corollary~\ref{cor:non.existence} for precise statements.
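To illustrate this heuristic, consider $\psi(r)=\exp(a^{r})$ with $a>m$. Then
$$
\psi(r+1)=\exp\big(a\,a^{r}\big),
\qquad
|\psi'(r)|^m=\big(a^{r}\ln a\big)^m\exp\big(m\,a^{r}\big),
$$
and since $a>m$, the term $\psi(r+1)$ grows much faster than $|\psi'(r)|^m$, so the supersolution inequality must fail at infinity.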
\noindent\textsc{B. Existence of solutions and of a critical ergodic
constant ---} We prove that for functions $f$ in a suitable growth class,
typically $f(x)\leq C\exp\big(m^{|x|}\big)$ for some $C>0$, problem \ep{} is
solvable. Moreover, there is a bound for the constructed solution,
$u(x)\leq\Psi(x):=|x|f^{1/m}(x)$ for $|x|$ large, so that $u(x)\leq
C|x|\exp\big(m^{|x| -1}\big)$ for large~$|x|$.
Getting this existence result requires deconstructing all the methods that are
used in \cite{BarlesMeireles2017,Ichihara,Ichihara2013} (and even
\cite{GT}, see Appendix). A big issue that we face is that we do not have a universal local gradient estimate, as is the case in~\cite{BarlesMeireles2017,Ichihara,Ichihara2013}.
This is due to the fact that $\mathcal{L}$ is just a
zero-order operator. We manage to bypass this difficulty by using a supersolution
(obtained by a modification of function $\Psi$ above)
in order to control the non-local term.
But this implies several technical problems, since $\Psi$ is only
a supersolution of~\ep{} for $|x|\neq0$, see the whole construction in Section~3.
Notice that there are bigger supersolutions, but this specific $\Psi$ yields a
kind of \textit{minimal supersolution} in the sense that bounded from below
solutions behave like $\Psi$ (see below).
Once the existence result is proved, it is usual to consider the critical
ergodic constant as the supremum of all $\lambda$'s such that $\ep{\lambda}$ is
solvable. However, we still face the difficulty of estimates here and we need to
include {a limiting upper behaviour} of solutions in the definition:
$$\bar\lambda:=\sup\Big\{\lambda\in\R:\text{ $\exists u$, solution of
$\ep{\lambda}$, such that }
\limsup_{|x|\to\infty}\frac{u(x)}{\Psi(x)}<\infty\Big\}.$$
We prove that $\bar\lambda$ is finite and that there exists a solution $u$ associated
to $\bar\lambda$.
Again, it is natural to have a limitation for the growth of solutions. Indeed, if
$u$ grows too fast, then $\mathcal{L}[u]$ becomes the leading term of the
equation, and we have seen already that this is not possible if we want to have a solution.
\noindent\textsc{C. Characterization of the critical ergodic constant ---}
As in the local case, we prove that the critical ergodic constant $\bar\lambda$
can be characterized by the fact that $(\lambda,u)$ is a solution of \ep{}
such that $u$ is bounded from below and $\limsup u(x)/\Psi(x)<\infty$ if and
only if $\lambda=\bar\lambda$. And in this case,
$u$ is uniquely determined (up to an additive constant).
Notice that in \cite{BarlesMeireles2017}, such results are obtained for
solutions and functions $f$ which grow like powers. In contrast, we are able to consider much faster
growths, like $\exp(a^{|x|})$.
A key step in this improvement is to prove a bound from below for solutions such
that $\inf u>-\infty$. This is done in Lemma~\ref{lemma:u.bounded.below.Psi}.
This Lemma is a refinement of \cite[Proposition 3.4]{BarlesMeireles2017}, and allows
to treat faster growths. Actually, this approach could also be applied to the
local case in order to generalize various results in \cite{BarlesMeireles2017}.
\subsection{Organization of the paper} In Section~\ref{sect:ass.pre} we
state the main hypotheses on the function $f$, construct sub and supersolutions
to the problem and prove {non-existence results which illustrate the fact that
the growths of $u$ and $f$ are limited}. Section~\ref{sect:approximate}
deals with auxiliary problems defined on a bounded domain. Then, in
Section~\ref{sect:existence} we prove the existence of solutions of~\ep{}. The
last four sections are devoted to the critical ergodic constant and bounded from
below solutions. In particular, we establish the existence of a critical ergodic
constant in Section~\ref{sect:critical} under some growth restriction. In
Sections~\ref{sect:bounded} and~\ref{sect:uniqueness} we prove that there are
solutions that are bounded from below, that these are unique (up to an additive
constant) and that they are associated to the critical ergodic constant.
Finally, in Section~\ref{sect:revisited} we extend the class of
solutions associated with the critical ergodic constant and prove some
continuous dependence of the critical ergodic constant with respect to~$f$.
\
\section{Preliminaries}
\label{sect:ass.pre}
\noindent\textsc{Basic Notations ---} In the following, $B_R$ will stand for $B_R(0)=\{x\in\mathbb{R}^N : |x|<R\}$ and we use the notation $|x|\gg1$
to say that a property is valid for $|x|$ sufficiently large.
We will denote $u(x)=o(v(x))$ to say that $u(x)/v(x)\to0$ as $|x|\to\infty$. In
particular, $o_\alpha(1)$ represents a quantity which goes to zero as the
parameter $\alpha$ goes to zero (or $+\infty$, depending on the situation). If
some uniformity with respect to some other parameter is required, this will be
mentioned explicitly.
\subsection{Definitions and hypotheses}
\begin{definition}
\label{def:viscosity}
A locally bounded u.s.c. function $u:\mathbb{R}^N\to\mathbb{R}$
is a viscosity subsolution of
\ep{} if for any $\C^1$-smooth function $\varphi$, and any point
$x_0\in\mathbb{R}^N$ where $u-\varphi$ reaches a maximum, there holds,
$$
\lambda-\mathcal{L}[u](x_0)+|D\varphi(x_0)|^m-f(x_0)\leq 0.
$$
\end{definition}
A locally bounded l.s.c. function is a viscosity supersolution if the same holds
with reversed inequalities and the maximum point replaced by a minimum. Finally
a viscosity solution is {a continuous function $u$ which is at the same time
a sub- and a super-solution of \ep{}.}
Notice that in the above definitions we only need the test function
$\varphi$ to be $\C^1$ in a neighborhood of $x_0$ (or even only at $x_0$), and we
shall use this remark when we deal with test functions which are not $\C^1$ in all of
$\R^N$.
\begin{remark}
\label{remark:viscosity.bounded}
If we consider $u:\Omega\to\mathbb{R}$ and the equation defined on a bounded
domain $\Omega$ together with a boundary condition, say $u=\psi$ on
$\partial\Omega$, then the definition of viscosity subsolution (respectively
supersolution) has two parts, depending on whether the maximum point $x_0$ is
achieved inside $\Omega$ or on the boundary, $\partial\Omega$. In this latter
case, the condition for $u$ to be a subsolution reads
$$
\max(\lambda-\mathcal{L}[u](x_0)+|D\varphi(x_0)|^m-f(x_0), u(x_0)-\psi(x_0))\leq 0
$$
(respectively $\min$ and $\geq$ for supersolutions). However, we shall not
use boundary conditions here: we only consider viscosity solutions in a ball
$B_R$ and then send $R\to\infty$, see Lemma~\ref{lemma:epsilon.to.0}.
\end{remark}
We list now the complete set of assumptions we use throughout this paper. We
want to stress at this point that we could have simplified this list by making
strong assumptions (for instance, assuming that $f$ is radial and radially
increasing). However, we opted to keep track of what was really necessary to
assume to produce each result. We think that the methods we design, with these
weak assumptions, can be helpful in other situations. We comment on each
hypothesis and give typical examples.
The main hypothesis on the right hand-side $f$, that we use throughout the paper is the following:
\begin{enumerate}[\noindent\bf(H1)]
\setcounter{enumi}{-1}
\item
$f:\R^N\to\R$ is $\C^1$ and $\inf\{f(x):|x|=r\}\to\infty$ as
$r\to\infty$.
\end{enumerate}
In particular, we are assuming that $f$ is \textit{uniformly coercive}, so that
for $|x|$ large enough we have $f>0$ and we can set
\begin{equation*}\label{def:Phi}
\Phi(x):=|x|f(x)^{1/m}.
\end{equation*}
In addition we have to impose some extra hypotheses on $f$. The next set of
assumptions is related to its growth. The first two hypotheses,~\hyp{H1}
and~\hyp{H2}, are required to construct a supersolution in
$\R^N\setminus\{0\}$. Hypothesis \hyp{H1} is fundamental here: it is where we see
the limitation on the growth of $f$, see more below.
\begin{enumerate}[\noindent\bf(H1)]
\item For $|x|\gg1$, $\sup\limits_{y\in B_1(x)} |D\Phi(y)|\leq
|D\Phi(x)|^m$.\\
\item For $|x|\gg1$, $x\cdot Df(x)\geq -f(x)$.
\end{enumerate}
The following two hypotheses,~\hyp{H3} and~\hyp{H4}, control the behaviour of
$f$ from below. This is crucial in order to prove that solutions which are
bounded from below actually have a minimal behaviour at infinity, which is
given by $\Phi$.
\begin{enumerate}
[\noindent\bf(H1)]
\setcounter{enumi}{2}
\item As ${|x|\to\infty}$, $\Phi(x)=o(f(x)).$\\
\item There exists $\eta_0\in(0,1)$ such that for all
$\eta\in(0,\eta_0)$, there exist $\csubeta,\csupeta>0$ and
$R_\eta>0$ such that\\[6pt]
$\text{for $|x|\geq R_\eta$, and $s\in
B_\eta(0)$,\quad} \csubeta f\big((1-\eta)x\big)\leq
f\big(x+s|x|\big)\leq \csupeta f\big((1+\eta)x\big).$
\end{enumerate}
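For instance, hypothesis \hyp{H4} is easily checked for the exponential $f(x)=\exp(\alpha|x|)$ with $\alpha>0$: since $(1-\eta)|x|\leq|x+s|x||\leq(1+\eta)|x|$ for $s\in B_\eta(0)$, we get
$$
f\big((1-\eta)x\big)=e^{\alpha(1-\eta)|x|}\leq f\big(x+s|x|\big)\leq e^{\alpha(1+\eta)|x|}=f\big((1+\eta)x\big),
$$
so that \hyp{H4} holds with $\csubeta=\csupeta=1$.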
The last set of hypotheses is needed in order to get a comparison result
in the class of bounded from below solutions. Depending on whether we are in the
``slow'' (power-type) case or ``fast'' (exponential or more) case, the
approach differs a little bit, but we cover both cases.
\begin{enumerate}
[\noindent\bf(H1)]
\setcounter{enumi}{4}
\item There exists $a_0>1$: $\forall a\in(1,a_0)$, as $|x|\to\infty$,
$\sup\limits_{|z|\leq 1}\Phi(a(x+z))\ll f(x)$.\\
\item There exists $R_0>0$, such that $\forall a\in(1,a_0)$ and $|x|>R_0$,
$f(ax)\geq af(x)$.\\
\item One of the following holds:\\
\textit{Slow case --} for all $a\in(1,a_0)$, $\limsup
(f(ax)/f(x))<\infty$ as $|x|\to\infty$;\\
\textit{Fast case --} for all $a\in(1,a_0)$, $\liminf
(f(ax)/f(x))=+\infty$ as $|x|\to\infty$.
\end{enumerate}
\noindent\textsc{Examples and discussion on the hypotheses ---} As we said,
\hyp{H1} highlights the limiting behaviour
of $f$ in order to get a solution. It can be checked that
if $f(x)=\exp\big(p^{|x|}\big)$, then \hyp{H1} is satisfied if and only if
$p\leq m$. This hypothesis is essential in order to control the non-local term
by the gradient term.
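To see where the threshold comes from, take $f(x)=\exp\big(p^{|x|}\big)$, so that $\Phi(x)=|x|\exp\big(p^{|x|}/m\big)$. Up to factors of lower order,
$$
\sup_{y\in B_1(x)}|D\Phi(y)|\simeq\exp\Big(\frac{p^{|x|+1}}{m}\Big)
\qquad\text{while}\qquad
|D\Phi(x)|^m\simeq\exp\big(p^{|x|}\big),
$$
and comparing the exponents, $p^{|x|+1}/m\leq p^{|x|}$ holds for $|x|\gg1$ exactly when $p\leq m$, in agreement with \hyp{H1}.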
On the contrary,~\hyp{H3} and~\hyp{H4} both imply that $f$ has a minimal growth.
By the specific form of $\Phi$, \hyp{H3} is equivalent to $f(x)\gg |x|^{m_*}$
where $m_*:=m/(m-1)$. Actually, this is not a real limitation since if $f$
does not grow so fast, the methods in \cite{BarlesMeireles2017} readily adapt.
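Let us justify the equivalence claimed for \hyp{H3}: since $\Phi=|x|f^{1/m}$, the condition $\Phi(x)=o(f(x))$ reads
$$
|x|f^{1/m}(x)=o(f(x))
\iff |x|=o\big(f^{(m-1)/m}(x)\big)
\iff |x|^{m/(m-1)}=o(f(x)),
$$
that is, $f(x)\gg|x|^{m_*}$.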
Hypothesis~\hyp{H5}, though similar to~\hyp{H3}, is a bit more
restrictive.
For power-type functions $f$, this condition also reduces to $f(x)\gg|x|^{m_*}$,
and \hyp{H5} implies \hyp{H3}.
Finally,~\hyp{H2},~\hyp{H4} and~\hyp{H6} are automatically satisfied if $f$ is
radial and increasing. Hence, these hypotheses are needed to control how much
the function $f$ is allowed to deviate from this behaviour. Notice though, that
they allow $f$ to be quite \lq\lq far\rq\rq\ from radial and increasing.
In conclusion, the typical functions that satisfy all these hypotheses are the
following:
$$\begin{aligned}
f_1(x)&=c|x|^\alpha\text{ with $\alpha>m_*$,}\\
f_2(x)&=c\exp(\alpha |x|)\text{ with $\alpha>0$,}\\
f_3(x)&=c\exp\big(p^{|x|}\big)\text{ with $p\leq m$.}
\end{aligned}
$$
Some non-radial as well as some non-monotone versions are allowed within
the range of~\hyp{H2},~\hyp{H4} and~\hyp{H6}.
Hypothesis \hyp{H7} covers all cases $f_1,f_2,f_3$ above, but makes the
distinction between \textit{power-type} growths which satisfy \hyp{H7}-slow and
exponentials for which \hyp{H7}-fast is fulfilled.
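As a consistency check for $f_1$, hypothesis \hyp{H3} reads
$$
\Phi(x)=|x|\,f_1^{1/m}(x)=c^{1/m}|x|^{1+\alpha/m}=o\big(|x|^{\alpha}\big)
\iff 1+\frac{\alpha}{m}<\alpha
\iff \alpha>\frac{m}{m-1}=m_*,
$$
which is exactly the stated restriction on $\alpha$.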
\begin{remark}
\label{rem:regularity.f}
We assume that $f$ is $\C^1$ throughout the paper for simplicity: with this
assumption we can compute and use the gradient of
$\Phi(x)=|x|f^{1/m}(x)$, for $|x|$ large.
In fact, regularity of
$f$ is not an issue here and we could consider only continuous functions to
get exactly the same results by using smooth approximations of $f$.
\end{remark}
Across Sections 4--6 we will assume that $f$
verifies the three assumptions \hyp{H0}--\hyp{H2} without
mentioning it again. In Section 7,
where we prove uniqueness, we will need more assumptions on $f$, and hence we will state precisely what is necessary in order to prove each result.
\subsection{Subsolutions and Supersolutions}
It is straightforward to see that, for
$\lambda\leq\min(f)$, typical subsolutions of \epl are the constants.
In this range of $\lambda$-values, {there are also coercive subsolutions, as the
following lemma shows. Those subsolutions will help us build solutions which
tend to infinity at infinity (see Section~\ref{sect:bounded}).}
\begin{lemma}\label{lem:subsol}
Let $f$ verify \hyp{H0}. Then for any $\lambda\leq\min(f)$ there exists a
Lipschitz subsolution $\Theta_\lambda$ of \epl, such that
$\Theta_\lambda(x)\to\infty$ as $|x|\to\infty$.
\end{lemma}
\begin{proof}
Since $\lambda\leq\min(f)$ and $f$ is uniformly coercive,
there exist $\kappa>0$ and $R_*>0$, depending on $\lambda$, such that $f(x)\geq\lambda+\kappa^m$ if $|x|\geq
R_*$.
We define
\begin{equation*}
\label{eq:def.subsolution.Theta}
\Theta_\lambda(x):=\kappa(|x| - R_*)_+
\end{equation*} which is (globally)
Lipschitz. Using Lemma~\ref{lem:convex} we see that for any $x\in\R^N$,
$-\mathcal{L}[\Theta_\lambda](x)\leq 0$. Moreover, since
$|D\Theta_\lambda|=\kappa$ or $|D\Theta_\lambda|=0$ almost everywhere, we get in any case
\begin{equation}
\label{eq:theta.condition}
\lambda-\mathcal{L}[\Theta_\lambda]+|D\Theta_\lambda|^m\leq
\lambda+\kappa^m\leq f.
\end{equation}
Notice that the exact proof has to be done in the viscosity sense but, at the
points where $|x|=R_*$, no $\C^1$ test function can touch $\Theta_\lambda$ from
above, so the subsolution condition holds there vacuously, while at the other
points the function is smooth. So, $\Theta_\lambda$ is indeed a
coercive subsolution in $\R^N$, in the sense of viscosity.
\end{proof}
\begin{remark}
The parameter $\kappa$ is somewhat free if we allow $R_*$ to be big. More
precisely, for $|x|$
big enough $f(x)$ is big and we can choose $\kappa$ big; thus we
can build subsolutions with arbitrarily large linear growth.
\end{remark}
If we try now to construct a supersolution, the first difficulty we face is that
it is not possible to do so in all of $\R^N$ when $\lambda\leq\min(f)$.
\begin{lemma}
\label{lemma:non.existence.super}
{
Let $f$ verify \hyp{H0} and $\lambda\leq\min(f)$. Then there is no
coercive, l.s.c. viscosity supersolution of~\ep{} in all $\R^N$.
}
\end{lemma}
\begin{proof}
Assume by contradiction that there is such a supersolution, $\psi$. Since it is
lower semi-continuous and coercive, $\psi$ reaches its global minimum at some
point $x_0$. Then, since $\psi(y)\geq \psi(x_0)$ for all
$y\in\R^N$, we can use the constant $\psi(x_0)$ as a test-function for the
viscosity inequality at $x_0$, which yields
$$
0\leq f(x_0)-\lambda\leq -\mathcal{L}[\psi](x_0)=
-\int_{B_1(x_0)}J(x_0-y)(\psi(y)-\psi(x_0))\d y\leq 0,
$$
since the gradient of the constant test function vanishes.
This implies that necessarily, $\psi(y)=\psi(x_0)$ for all $y\in B_1(x_0)$.
Then we can repeat the argument using as center any $y\in B_1(x_0)$ and we
finally get that $\psi(y)=\psi(x_0)$ for all $y\in\mathbb{R}^N$. But then
we reach a contradiction, since $\psi$ is coercive and hence cannot be constant.
\end{proof}
However we are able to build supersolutions in
$\R^N\setminus\{0\}$ (without any restriction on $\lambda$). In order to do it,
we look first at~\ep{0}, \begin{equation}
\label{eq:EP0}
\tag*{$\ep{0}$}
-\mathcal{L}[u](x)+|Du(x)|^m=f(x)
\quad\text{in}\quad \R^N.
\end{equation}
In this construction we assume that \hyp{H0}--\hyp{H2} hold, so there exists
$R^*>0$ such that for any $|x|\geq R^*$, $f(x)\geq1$ and \hyp{H1},\hyp{H2} hold
for such $x$.
\begin{remark}
\label{rem:R_*=R^*} If we take $\kappa=1$ and $\lambda=0$ in the construction of the subsolution $\Theta_\lambda$, then $R_*=R^*$. We will use this fact in Section~\ref{sect:bounded} in order to construct bounded from below solutions.
\end{remark}
Up to fixing the constants, we use the
following construction: we set $\Psiint:=c|x|$ for $|x|<R^*+1$ and
$\Psiext:=c\Phi$ for $|x|>R^*$, and then combine $\Psiint$ and $\Psiext$ in the
intermediate region $R^*\leq |x|\leq R^*+1$ in order to get a supersolution
of~\ep{0} for all $|x|\neq 0$. We finally define $\Psi_\lambda:=c_\lambda\Psi$, which
yields a supersolution of~\ep{\lambda} for $|x|\neq0$, provided $c_\lambda$ is
well-chosen.
\begin{lemma}
\label{lemma:psi1.super}
There exists $c_0>0$ such that for any $c\geq c_0$,
$\Psiint(x):=c|x|$ is a supersolution of~\ep{0} for
$0<|x|\leq R^*+1$.
\end{lemma}
\begin{proof}
The proof is straightforward: we first have
$$
D\Psiint(x)=c\frac{x}{|x|}\mbox{\quad and \quad} |\mathcal{L}[\Psiint](x)|\leq
\sup_{y\in B_1(x)}|D\Psiint(y)|=c. $$
Then, since $m>1$, for any $c$ big enough we have
\begin{equation*}
\label{eq:condition.c0}
c^m-c\geq\max\limits_{B_{R^*+1}}f,
\end{equation*}
and we get
$
-\mathcal{L}[\Psiint](x)+|D\Psiint(x)|^m \geq -c+c^m\geq f.
$
\end{proof}
\begin{lemma}
\label{lemma:psi2.super}
Let $f$ verify {\rm\bf(H0)--(H2)}. There exists
$c_1>0$ such that for any $c\geq c_1$,
$\Psiext=c\Phi$ is a supersolution of~\ep{0} for $|x|\geq R^*$.
\end{lemma}
\begin{proof}
We estimate each term in~\ep{0} separately. On one hand we have
$$
D\Psiext=cD\Phi=c\Big(\frac{x}{|x|}f^{1/m}+\frac{|x|}{m}f^{1/m-1}\cdot
Df\Big).$$ Using \hyp{H2},
\begin{equation}
\label{eq:estimate.below.psi1}
\frac{x}{|x|}\cdot D\Phi= \frac{x}{|x|}\cdot \frac{x}{|x|}\Big(f^{1/m}+\frac{1}{m}f^{1/m-1}x\cdot Df\Big)
\geq f^{1/m}\Big(1-\frac1{m}\Big),
\end{equation}
from where we get that $|D\Phi|\geq |\frac{x}{|x|}\cdot D\Phi|\geq f^{1/m}(1-\frac1{m}).$
On the other hand, in order to estimate the non-local term we use \hyp{H1} and
get
$$
|\mathcal{L}[\Phi](x)|\leq \sup_{y\in B_1(x)}|D\Phi(y)|\leq |D\Phi(x)|^m.
$$
Therefore, if $c$ is such that
\begin{equation*}
\label{eq:condition.c1}
c^m-c\geq \Big(\frac{m}{m-1}\Big)^m,
\end{equation*}
we get
$$
-\mathcal{L}[\Psiext](x)+|D\Psiext(x)|^m \geq (c^m-c)|D\Phi(x)|^m\geq \Big(\frac{m-1}{m}\Big)^m(c^m-c)f\geq f,
$$
and conclude that $\Psiext$ is a supersolution of~\ep{0} for $|x|\geq R^*$.
\end{proof}
Finally, the construction ends by interpolating between $\Psiint$ and $\Psiext$ in
the region $R^*\leq|x|\leq R^*+1$. To this aim, let
$\chi:[0,\infty)\mapsto[0,\infty)$ be a regular,
radial and non-decreasing function that verifies $\chi(r)=0$ for $r\leq R^*$
and $\chi(r)=1$ for $r\geq R^*+1$. We set
\begin{equation}
\label{eq:def.psi.0}
\Psi:=(1-\chi)\Psiint+\chi\Psiext\quad \text{in}\quad \R^N.
\end{equation}
\begin{lemma}
\label{lemma:psi.chi.super}
Let $f$ verify \hyp{H0}--\hyp{H2}. There exists $c_2>0$ such that
for any $c\geq c_2$, $\Psi$ is a supersolution of~\ep{0} for
$R^*\leq|x|\leq R^*+1$.
\end{lemma}
\begin{proof}
We first give a rough estimate of the non-local term. Notice that since
$|x|\geq R^*$, $f(x)\geq1$ so that $\Psiext(x)\geq c|x|\geq0$. Since also
$\Psiint(x)=c|x|\geq0$,
it follows that for any $R^*\leq|x|\leq R^*+1$, $\Psi(x)\geq0$. Hence, for
such $x$,
$$-\mathcal{L}[\Psi](x)\geq -(J\ast\Psi)(x)\geq
-c(R^*+1)-c\,\sup_{B_{R^*+2}\setminus B_{R^*}}\Phi=-Kc
$$
for some constant $K$ depending only on $f$ (through $R^*$ and $\Phi$).
We now turn to the gradient term.
As we noticed, $\Psiext(x)\geq c|x|=\Psiint(x)$ for $R^*\leq|x|\leq R^*+1$.
And since $\chi$ is radially nondecreasing, we get, using in the last line \eqref{eq:estimate.below.psi1} and the fact that $f(x)\geq1$ for $|x|\geq R^*$,
$$\begin{aligned}
\frac{x}{|x|}D\Psi(x) & =(1-\chi)\frac{x}{|x|}D\Psiint+
\chi\frac{x}{|x|}D\Psiext+\chi'(\Psiext-\Psiint)\\
& \geq (1-\chi)\frac{x}{|x|}D\Psiint+
\chi\frac{x}{|x|}D\Psiext\\
&\geq (1-\chi)c+\chi c (1-1/m)= c(1-\chi/m)\geq c(1-1/m).
\end{aligned}
$$
This gives a lower estimate for the gradient of $\Psi$: for any
$R^*\leq|x|\leq R^*+1$,
$$\begin{aligned}
|D\Psi(x)|^m &\geq \big|\frac{x}{|x|}D\Psi(x)\big|^m\geq
{c^m(1-1/m)^m}.
\end{aligned}
$$
To conclude, we get that for any $R^*\leq|x|\leq R^*+1$,
$$-\mathcal{L}[\Psi]+|D\Psi|^m-f\geq -Kc+{c^m(1-1/m)^m}-\sup_{B_{R^*+1}\setminus
B_{R^*}} f.$$
Hence, if $c$ is big enough, since $m>1$,
we obtain that the right-hand side is non-negative which yields the result.
\end{proof}
We are now ready to construct a supersolution for \ep{\lambda} for $|x|\neq 0$.
We first fix $c_*=\max(c_0,c_1,c_2)$ where $c_0,c_1$ and $c_2$ are defined in
the lemmas above. Then the corresponding function $\Psi$ is $\C^1$-smooth and it
is a supersolution of \ep{0} in $\R^N\setminus\{0\}$. In order to deal with a
non-zero ergodic constant $\lambda$, it is enough to multiply $\Psi$ by some
constant (depending on $\lambda$). We set
\begin{equation}
\label{eq:super.lambda}
\Psi_\lambda:= c_\lambda\Psi,
\quad \text{ where }c_\lambda=(2+\lambda^-)\text{ and }\lambda^-=\max(0,-\lambda)\geq0.
\end{equation}
\begin{proposition}
\label{prop:psilamba.strict}
Let $f$ verify \hyp{H0}--\hyp{H2}, then $\Psi_\lambda$ is a strict
supersolution for \ep{\lambda} for all $|x|\neq 0$.
\end{proposition}
\begin{proof}
Recall that for $|x|> R^*$, $f(x)\geq 1$ and since $\Psi$ is a
supersolution of \ep{0}, for such $x$ we have
$$
\begin{aligned}
\lambda -\mathcal{L}[\Psi_\lambda]+|D\Psi_\lambda|^m&=\lambda
-(2+\lambda^-)\mathcal{L}[\Psi]+(2+\lambda^-)^m|D\Psi|^m\\
&\geq \lambda+(2+\lambda^-)\big(-\mathcal{L}[\Psi]+|D\Psi|^m\big)\\
&\geq\lambda+(2+\lambda^-)f\\
&\geq \lambda+\lambda^-+1+f\geq f+1.
\end{aligned}
$$
On the other hand, if $|x|\leq R^*$, $\Psi_\lambda=(2+\lambda^-)\Psiint$.
Then, as in Lemma~\ref{lemma:psi1.super} we obtain
$$\begin{aligned}
\lambda-\mathcal{L}[\Psi_\lambda]+|D\Psi_\lambda|^m & \geq
\lambda+(2+\lambda^-)^m c^m-(2+\lambda^-)c\geq
\lambda+(2+\lambda^-)(c^m-c)\\
& \geq\lambda+ (2+\lambda^-)\max_{B_{R^*+1}}f\geq f+1.
\end{aligned}
$$
Notice that in the last inequality, we use that the maximum of $f$
on $B_{R^*+1}$ is greater than or equal to one since at least
$f(x)\geq1$ for $|x|\geq R^*$.
\end{proof}
\begin{remark}
\label{rem:propoperties.Psi}
For $|x|\gg 1$, $\Psi_\lambda=(2+\lambda^-)\,c_*\Phi$. Hence,
the supersolution $\Psi_\lambda$ satisfies the same hypotheses that we assume
on $\Phi$.
\end{remark}
\begin{remark}
\label{rem:supersolution.bigger.constant}
Notice that for any $c\geq c_\lambda$, $c\Psi$ is also a strict supersolution of~\ep{\lambda}.
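Indeed, for $|x|>R^*$, using that $\Psi$ is a supersolution of~\ep{0} and that $f\geq1$ there,
$$
\lambda-\mathcal{L}[c\Psi]+|D(c\Psi)|^m\geq\lambda+c\big(-\mathcal{L}[\Psi]+|D\Psi|^m\big)\geq\lambda+cf\geq\lambda+(2+\lambda^-)f\geq f+1,
$$
while the region $|x|\leq R^*$ is handled exactly as in the proof of Proposition~\ref{prop:psilamba.strict}.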
\end{remark}
\newcommand{\Rinf}{\mathcal{R}_\infty}
\subsection{Non-Existence results}
\label{sect:non.existence} We end this section by showing, at least in a radial
case, that (super)solutions to~\ep{} only exist if $f$ does not grow too fast.
And in that case, (super)solutions cannot grow too fast either. To formulate the
result, let us introduce the class
$$\Rinf:=
\big\{\phi:\R^N\to\R\text{ such that for $|x|\gg1$, $\phi$ is radial
and radially increasing}\big\}.$$
\begin{lemma}\label{lem:non.existence}
Let $\alpha,\beta,r_0,m>1$, $\eps\in(0,1)$ and consider the following problem
\begin{equation}\label{ineq:lem.nonex}
\begin{cases}\text{find $\psi\in \C^1\cap\Rinf$ such that } \\
\alpha\psi(r)-\beta\psi(r+1-\eps)+(\psi'(r))^m\geq
f(r)\text{ for }r>r_0,
\end{cases}
\end{equation}
where $f\in\Rinf$ satisfies \hyp{H0}. Given $a>m$, there exists
$\eps_0(a,m)\in(0,1)$ such that, for any $\eps\in(0,\eps_0)$:
\noindent $(i)$ there is no solution $\psi$ of \eqref{ineq:lem.nonex} if
$f(r)\geq c\exp(a^{r})$ for $r\gg1$.
\noindent $(ii)$ there is no solution $\psi$ of \eqref{ineq:lem.nonex}
satisfying $\psi(r)\geq c\exp(a^{r-1})$ for $r\gg 1$.
\end{lemma}
\begin{proof} The proof is essentially the same for both results, but we start with
$(ii)$, since $(i)$ will need only a small preliminary step before applying
the same method. We proceed by contradiction, assuming that a function
$\psi$ satisfies \eqref{ineq:lem.nonex} and $\psi(r)\geq c\exp(a^{r-1})$ for
some $c>0$ and $a>m$. The following computations will be done for $r$ big
enough so that $f$ and $\psi$ are radial and radially increasing.
We first
claim that $\psi(r)\to +\infty$ as $r\to\infty$. Indeed, since
$\psi$ is nondecreasing, it is bounded from below on $[r_0,+\infty)$ and if,
in addition, we assume that it is bounded from above, we get that
for some constant $C>0$, $(\psi'(r))^m\geq f(r)-C$. Hence,
$\psi'(r)\geq1$ for $r$ big enough (recall that $f$ satisfies \hyp{H0}),
which is a contradiction with the boundedness of $\psi$. Hence $\psi$ is not
bounded and since it is nondecreasing, the claim holds.
Then, we can assume that for some $r_1>r_0$, $\psi(r)\geq 1/\alpha$ on
$[r_1,\infty)$, which implies that for $r\geq r_1$,
\begin{equation}\label{ineq:nonex.2}
(\alpha\psi(r)+\psi'(r))^m\geq f(r)+\beta\psi(r+1-\eps).
\end{equation}
From this inequality, we prove by an iteration process
that $\psi$ has to blow up for $r$ big enough, a contradiction. For
$r$ big enough we can assume that $f(r)\geq0$, hence we only use the
$\beta\psi$-term in the right-hand side:
\begin{equation}\label{eq:ode}
\forall r>r_1\,,\quad
(\alpha\psi(r)+\psi'(r))^m\geq \beta c\exp(a^{r-\eps})\,.
\end{equation}
We claim that there exists $\eps_0(a,m)>0$ such that if
$0<\eps<\eps_0$, there exists $\delta\in(\eps,1-\eps)$ such that
$\gamma:=a^{\delta}/m>1$. Indeed, if we take
$\delta=1-\eps$ we get $a^{\delta}=a^{1-\eps}$ which is greater
than $m$ provided $\eps$ is small enough (depending on $a,m$). Hence the same
is true for some $\delta<1-\eps$, close to $1-\eps$ (and of course we can
assume that $\delta>\eps$).
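For instance, one can take $\eps_0(a,m):=1-\frac{\ln m}{\ln a}$, which belongs to $(0,1)$ since $a>m>1$: for any $\eps\in(0,\eps_0)$,
$$
a^{1-\eps}>a^{1-\eps_0}=a^{\ln m/\ln a}=m,
$$
so that such a $\delta$ exists.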
Integrating the previous differential inequality on $(r+\delta,r+1-\eps)$, where
$r>r_1$, and using $\gamma$, yields
\begin{equation*}\label{ineq:nonex.3}
\begin{aligned}
\psi(r+1-\eps) & \geq
c\beta e^{-\alpha(r+1-\eps)}\int_{r+\delta}^{r+1-\eps}
\exp\big(\frac{a^{s-\eps}}{m}\big)\,
e^{\alpha s}\d s\\
& \geq c \beta (1-\eps -\delta)e^{\alpha(\delta-1+\eps)}\exp\Big(
a^{r-\eps}\frac{a^{\delta}}{m}\Big)
\geq (c \beta) Ce^{ \gamma a^{r-\eps}},
\end{aligned}
\end{equation*}
where the constant $C>0$ depends on $\delta,\eps,\alpha,a,m$.
With this new estimate for $\psi$, we can improve \eqref{eq:ode} as
\begin{equation}
\forall r>r_1\,,\quad
(\alpha\psi(r)+\psi'(r))^m\geq (c \beta) C\exp(\gamma a^{r-\eps})\,,
\end{equation}
and by direct induction we obtain that for any $n\geq2$ and $r>r_1$,
$$\psi(r+1-\eps)\geq (c \beta)^{n}C(\delta,\eps,\alpha,a,m)^ne^{\gamma^n a^r}.$$
Since $\gamma>1$, we conclude by sending $n$ to infinity, which yields a
contradiction since it would mean that $\psi(r)=\infty$ for $r>r_1$.
The proof of $(i)$ is exactly the same, but it requires a
first iteration using the growth of $f$ instead of the
term $\beta\psi(r+1-\eps)$ in~\eqref{ineq:nonex.2}: since
$$
\forall r>r_1\,,\quad
\Big[(e^{\alpha r}\psi)'e^{-\alpha r}\Big]^m\geq ce^{a^r}\,,
$$
we obtain for $r>r_1$
$$
\psi(r+1-\eps)\geq
ce^{-\alpha(r+1-\eps)}\int_{r_1}^{r+1-\eps}e^{(a^s)/m}e^{\alpha s}\d s
\geq cC\exp\big(\gamma a^{r}\big)\,.
$$
With this estimate, we are back to the situation of $(ii)$ and get the contradiction by the
same iteration process, using the $\beta\psi$-term.
\end{proof}
\begin{corollary}\label{cor:non.existence} Assume that $f\in\Rinf$
satisfies \hyp{H0}.\\[2mm]
\noindent $(i)$ there is no supersolution $u\in\C^1\cap\Rinf$ of \ep{}
if $f(x)\geq C\exp(a^{|x|})$ for $|x|\gg1$ with $C>0$ and $a>m$.\\[2mm]
\noindent $(ii)$ there is no supersolution $u\in\C^1\cap\Rinf$ of \ep{} such
that for $|x|\gg1$, $u(x)\geq C\exp\big(a^{|x|-1}\big)$, with $C>0$ and $a>m$.
\end{corollary}
\begin{proof}
Set $\bar f:=f-\lambda$ and use Lemma~\ref{lem:est.L.psi} with $\alpha=1$ and
$\beta=c_\epsilon$. We get
$$
\alpha\psi(r)-\beta\psi(r+1-\eps)+(\psi'(r))^m\geq
\bar f(r)\text{ for }r\gg1\,.
$$
So, part $(ii)$ follows directly from applying Lemma~\ref{lem:non.existence} with
$\bar f(r)$. For part $(i)$, just notice that for $r\gg1$,
$\bar f(r)=f(r)-\lambda\geq (C/2)\exp(a^{r})$.
\end{proof}
Of course, this result is restrictive: there may exist some non-radial (or non
radially increasing) solution, though this seems quite improbable if $f\in\Rinf$.
But at least our result is a good hint that a more general non-existence statement
should hold.
\section{Approximate problems in bounded domains}
\label{sect:approximate}
In this section we set up and solve some approximate problems posed in
$B_R$. Solving such problems is quite standard for local equations (see~\cite{BarlesMeireles2017, Ichihara}), but several steps have to be adapted here to deal
with the non-local term.
In particular, we will use Perron's method, following the standard construction; however, we do not skip the details since, because of the
non-local operator involved, every step has to be checked and
adapted carefully.
These approximate problems will be the key to constructing solutions of \ep{} in the
whole space in Section~\ref{sect:existence}.
First of all we adapt the definition of the non-local term to the bounded domain~$B_R$. When solving the Dirichlet problem we need to consider the usual boundary
condition (a function $g\in \C^{0,\gamma}(\partial B_R)$ for some
$\gamma\in(0,1)$, see~\eqref{eq.approx.R.varepsilon} below), but also an \textit{outer
condition} $\psi$, which enters the non-local
operator, see~\cite{ChasseigneChavesRossi2006, CortazarElguetaRossi2009}.
Thus for $R>1$ we define
\begin{equation}
\label{eq.nonlocal.Dirichlet}
\mathcal{L}_R^\psi[v](x):=\int_{B_R} J(x-y)v(y)\d y+\int_{B_R^C} J(x-y)\psi(y)\d y-v(x).
\end{equation}
It is clear that, if $v$ is defined in the whole space, then $\mathcal{L}_R^v[v]=\mathcal{L}[v]$. Moreover, since $J$ is compactly supported on $B_1$, for $x\in B_R$, the outer term in~\eqref{eq.nonlocal.Dirichlet} becomes
$$
\int_{B_R^C} J(x-y)\psi(y)\d y=\int_{B_{R+1}\setminus B_R} J(x-y)\psi(y)\d y,
$$
so that the function $\psi$ only needs to be, say, continuous on $B_{R+1}\setminus
B_R$ for $\mathcal{L}_R^\psi$ to be defined.
Now, for $\varepsilon>0$ and $R>1$ fixed, we consider
the approximate problem
\begin{equation}
\label{eq.approx.R.varepsilon}
\left\{
\begin{array}{ll}
\lambda-\varepsilon\Delta v-\mathcal{L}_R^\psi[v]+|Dv|^m=f,\quad& x\in B_R,\\
v=g, & x\in \partial B_R.
\end{array}
\right.
\end{equation}
The fundamental existence result is the following:
\begin{proposition}
\label{proposition.existence.vR.R.varepsilon}
Let $\eps>0$, $R>1$, $f\in \C^1(B_R)$, $\psi\in\C^0(B_{R+1}\setminus
B_R)$ and $g\in\C^{0,\gamma}(\partial B_R)$. If there exists a subsolution
$\underline{v}\in\C^2(B_{R})\cap\C^0(\overline{B_R})$
of~\eqref{eq.approx.R.varepsilon}, then there
exists a solution, ${v}\in\C^{2}(B_R)\cap\C^0(\overline{B_{R}})$
of~\eqref{eq.approx.R.varepsilon}.
\end{proposition}
\begin{remark}
We opted to state this result assuming $f\in \C^1$, since this is the general
assumption we make (see Section~\ref{sect:ass.pre}). But the same proof holds
with a regularizing argument if $f$ is only continuous or even
$f\in\W(\mathbb{R}^N)$.
\end{remark}
In order to use Perron's method, we
have to provide, as a first step, a supersolution to problem~\eqref{eq.approx.R.varepsilon}. To this aim, let us
consider the linearized problem
\begin{equation}
\label{eq.pbm.laplace.eps.ball}
\left\{
\begin{array}{ll}
-\varepsilon\Delta\phi-\mathcal{L}_R^\psi[\phi]=M,\quad& x\in B_R,\\
\phi=g,& x\in \partial B_R.
\end{array}
\right.
\end{equation}
Existence and uniqueness of a solution
$\phi\in \C^{2,\gamma}(B_R)\cap\C^{0}(\overline{B_R})$ for any
$\gamma\in(0,1)$
for problem~\eqref{eq.pbm.laplace.eps.ball} is obtained through a variant
of~\cite[Theorem 6.8]{GT} that includes the non-local operator. We detail in
the Appendix the construction and adaptations, see
Lemma~\ref{lemma.existence.super.classical}.
\begin{lemma} \label{lemma.supersolution.continuity}
Let $M> \|f\|_{\L^\infty(B_R)}+|\lambda|$,
$\overline{v}\in\C^{2,\gamma}(B_R)\cap\C^0(\overline{B_R})$ be the solution
of~\eqref{eq.pbm.laplace.eps.ball} and $\underline{v}$ a subsolution of
\eqref{eq.approx.R.varepsilon}.
Then $\underline{v}\leq\overline{v}$ in $B_R$.
\end{lemma}
\begin{proof}
Due to the choice of $M$, it is straightforward that $\overline{v}$
is a strict supersolution of~\eqref{eq.approx.R.varepsilon}. {Moreover,
$\overline{v}=\underline{v}$ on $\partial B_R$.}
To get the comparison result, we define $w:=\underline{v}-\overline{v}\in\C^2(B_R)\cap\C^0(\overline{B_R})$, which verifies
$$-\varepsilon\Delta w-\mathcal{L}_R^0[w]<0\quad\text{in}\ B_R\quad \mbox{and}\quad w=0\quad\text{on}\ \partial B_R.$$
Notice that the exterior term involving $\psi$ cancels after
subtracting the equations; hence we obtain a zero-Dirichlet problem for
$w$, i.e.\ with $g=\psi=0$.
Let $x_0$ be a maximum point of $w$ in $\overline{B_R}$. If
$w(x_0)\leq0$ the result follows immediately, so let us assume that
$w(x_0)>0$. In this case, notice that $x_0\in\partial B_R$ is impossible
since $w=0$ on the boundary.
Since $x_0\in B_R$ and $w$ is a $\C^2$-smooth function, we have $\Delta w(x_0)\leq 0$ and the equation yields $$-\mathcal{L}^0_R[w](x_0)<0.$$
But since $x_0$ is a point where $w$ reaches its positive maximum,
we get
$$
0<w(x_0)\Big(\int_{B_R}J(x_0-y)\d y-1\Big),
$$
which is a contradiction since $w(x_0)>0$ and $\int_{B_R}J(x_0-y)\d y\leq1$.
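For the reader's convenience, let us spell out this last step, writing the truncated operator with its natural expression (consistent with the computations of this section), $\mathcal{L}^0_R[w](x)=\int_{B_R}J(x-y)w(y)\d y-w(x)$. Since $w(y)\leq w(x_0)$ for all $y\in\overline{B_R}$,
$$
0>-\mathcal{L}^0_R[w](x_0)=w(x_0)-\int_{B_R}J(x_0-y)w(y)\d y
\geq w(x_0)\Big(1-\int_{B_R}J(x_0-y)\d y\Big),
$$
which rearranges into the displayed inequality.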
\end{proof}
In the following, we introduce the critical
exponent $\alpha_*:=(m-2)/(m-1)\in(0,1)$ from \cite{CapuzzoLeoniPorretta};
this is the point where the assumption $m>2$ is needed.
\begin{proof}[Proof of
Proposition~{\rm\ref{proposition.existence.vR.R.varepsilon}}]
Let
{${\overline{v}}\in\C^2(B_R)\cap\C^1(\overline{B_R})$} be a supersolution
of~\eqref{eq.approx.R.varepsilon} and consider the set \begin{equation*}
\mathcal{S}:=\big\{v_S\in \C^{0,\alpha_*}(\overline{B_R}):
v_S \textrm{ is a viscosity subsolution
of~\eqref{eq.approx.R.varepsilon} with }
\underline{v}\leq v_S\leq \overline{v} \textrm{ in } \overline{B_R}\big\}.
\end{equation*}
The set is non-empty since $\underline{v}\in \mathcal{S}$.
For any $x\in B_R$, we set
$$v(x):=\sup_{v_S\in\mathcal{S}}v_S(x)$$
which is well-defined, since all the functions $v_S$ in $\mathcal{S}$
are bounded above by $\overline{v}$.
We first notice that $\underline{v}\leq v\leq\overline{v}$ in $B_R$
and that necessarily $v=\underline{v}=\overline{v}$ on $\partial B_R$.
Concerning the regularity of $v$, we use the
estimates of \cite[Theorem 1.1]{CapuzzoLeoniPorretta}. Writing~\eqref{eq.approx.R.varepsilon} as
$$
v_S-\varepsilon\Delta v_S+|Dv_S|^m\leq F(x),
$$
where
$$
F(x):=\int_{B_R} v_S(y)J(x-y)\d y+\int_{B_{R+1}\setminus B_R}
\psi(y)J(x-y)\d y+f(x)-\lambda,
$$
we see that there exists a constant $K$ depending only on $\|(v_S)^-\|_{\L^\infty(B_R)}$ and
$\|F\|_{\L^\infty(B_R)}$ such that
$$
|v_S(x)-v_S(y)|\leq K|x-y|^{\alpha_*}\quad \text{for all}\quad x,y\in\overline{B_{R}}.
$$
Now, for any $v_S\in\mathcal{S}$ we have $(v_S)^-\leq(\underline{v})^-$ and
$v_S\leq\overline{v}$. So, both functions $F(x)$ and $(v_S)^-(x)$ are uniformly
bounded in $B_R$ with respect to $v_S\in\mathcal{S}$.
We deduce that the subsolutions $v_S$ in $\mathcal{S}$ are uniformly
H\"older continuous up to the boundary, which implies that
$v\in\C^{0,\alpha_*}(\overline{B_R})$.
\noindent{\sc Claim --} $v$ is a viscosity solution
of~\eqref{eq.approx.R.varepsilon}.
It is standard that, being defined as a supremum of subsolutions, $v$ is also
a subsolution of~\eqref{eq.approx.R.varepsilon}. Hence $v\in\mathcal{S}$
and it remains to prove that $v$ is a supersolution
of~\eqref{eq.approx.R.varepsilon}. Since by construction
$v=g$ on $\partial B_R$, we only need to check the supersolution condition
inside $B_R$, which is done as usual through the construction of a bump
function.
We proceed by contradiction: let us assume that $v$ is not a supersolution of~\eqref{eq.approx.R.varepsilon}.
Then, there exists a fixed $\bar x\in B_R$ and $\varphi\in\C^2(B_R)$ such
that $v-\varphi$ has a
minimum at $\bar x$ and
\begin{equation}\label{eq:perron.contr} -\varepsilon\Delta \varphi (\bar
x)-\mathcal{L}_R^\psi[v](\bar x)+|D\varphi(\bar
x)|^m-f(\bar x)+\lambda<0.
\end{equation}
We may assume without loss of generality that $v(\bar x)=\varphi(\bar x)$, hence
$v\geq\varphi$ in $B_R$.
Moreover, we claim that $v(\bar x)<\overline{v}(\bar x)$.
Indeed, assuming otherwise that $v(\bar x)=\overline{v}(\bar x)$,
since $v\leq\overline{v}$ we would have $v(\bar x)=\varphi(\bar
x)=\overline{v}(\bar x)$ and $\varphi\leq v\leq\overline{v}$,
which together imply that $\varphi-\overline{v}$
has a maximum at $\bar x$.
Consequently
$D\varphi(\bar x)=D\overline{v}(\bar x)$ and $\Delta \varphi(\bar
x)-\Delta\overline{v}(\bar x)\leq 0$. Replacing $\varphi$ by $\overline{v}$
in~\eqref{eq:perron.contr} at the point $\bar x$ we would get
\begin{equation*} -\varepsilon\Delta \overline{v}(\bar x)
-\mathcal{L}_R^\psi[v](\bar x)+|D\overline{v}(\bar
x)|^m-f(\bar x)+\lambda<0.
\end{equation*}
But since $v(\bar x)=\overline{v}(\bar x)$ and
$v\leq\overline{v}$ in $B_R$, we have
$\mathcal{L}_R^\psi[v](\bar
x)\leq\mathcal{L}_R^\psi[\overline{v}](\bar x)$ and we see that
\begin{equation*} -\varepsilon\Delta\overline{v}(\bar
x)-\mathcal{L}_R^\psi[\overline{v}](\bar
x)+|D\overline{v}(\bar x)|^m-f(\bar x)+\lambda<0.
\end{equation*}
This contradicts the fact that $\overline{v}$ is a supersolution
of~\eqref{eq.approx.R.varepsilon}. Hence $v(\bar x)< \overline{v}(\bar x)$
and we define, for any $y\in B_R$, the bump function
\begin{equation*}
\label{eq.definition.vdelta} v_\delta(y):=\max\{
v(y);\varphi(y)+\delta-|\bar x-y|^2\}. \end{equation*}
Notice that by construction,
\begin{equation}\label{est:bump}
\begin{cases}v_\delta(y)=v(y) & \text{if } y\notin
B_{\delta^{1/2}}(\bar x),\\
v(y)\leq v_\delta(y)\leq v(y)+\delta\qquad & \text{if }
y\in B_{\delta^{1/2}}(\bar x).
\end{cases}
\end{equation}
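Both properties in~\eqref{est:bump} follow directly from $\varphi\leq v$ in $B_R$: if $y\notin B_{\delta^{1/2}}(\bar x)$ then $|\bar x-y|^2\geq\delta$, so that
$$
\varphi(y)+\delta-|\bar x-y|^2\leq\varphi(y)\leq v(y),
$$
hence $v_\delta(y)=v(y)$; while for $y\in B_{\delta^{1/2}}(\bar x)$ we simply bound $\varphi(y)+\delta-|\bar x-y|^2\leq v(y)+\delta$.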
The strategy is to prove that for $\delta>0$ small enough,
$v_\delta\in\mathcal{S}$, which contradicts the definition of $v$ as a sup,
since $v_\delta(\bar x)=\varphi(\bar x)+\delta>v(\bar x)$. We divide this
into four steps.
\noindent$(i)$ {\sc Regularity --} The function $v_\delta$, defined as a
maximum of two functions which belong to $\C^{0,\alpha_*}(\overline{B_R})$,
belongs itself to $\C^{0,\alpha_*}(\overline{B_R})$.
\noindent$(ii)$ {\sc Bounds --}
It is clear by construction that $v_\delta\geq \underline{v}$ in $B_R$.
On the other hand,
if $|y-\bar x|^2\geq\delta$ then $v_\delta(y)=v(y)$, see~\eqref{est:bump}.
Hence, outside $B_{\delta^{1/2}}(\bar x)$, we have $v_\delta\leq
\overline{v}$. Moreover, since $v(\bar x)<\overline{v}(\bar x)$, for
$\delta$ small, we have $v_\delta\leq\overline{v}$ for $y\in
B_{\delta^{1/2}}(\bar x)$. Therefore $v_\delta\leq \overline{v}$ in $B_R$
for $\delta$ small enough.
\noindent$(iii)$ {\sc Subsolution condition --}
As before, outside $B_{\delta^{1/2}}(\bar x)$, we have that $v_\delta=v$.
Hence $v_\delta$ is a subsolution in $B_{\delta^{1/2}}^C(\bar x)$.
Let $y\in B_{\delta^{1/2}}(\bar x)$ and $\varrho\in\C^2(B_R)$ be a test function
such that $v_\delta-\varrho$ has a (strict) maximum zero at $y$.
We have to prove that in this situation $v_\delta$ verifies
\begin{equation*}
\label{eq.weps.subsolution}
-\varepsilon\Delta\varrho(y) -\mathcal{L}_R^\psi[v_\delta](y)+
|D\varrho(y)|^m-f(y)+\lambda\leq 0.
\end{equation*}
Since $y\in B_{\delta^{1/2}}(\bar x)$ we have
$\varrho(y)=\varrho(\bar x)+o_\delta(1)$ uniformly with respect to $y\in
B_{\delta^{1/2}}(\bar x)$. Similarly, $
|D\varrho(y)|^m=|D\varrho(\bar x)|^m+o_\delta(1)$ and
$\varepsilon\Delta\varrho(y)=\varepsilon\Delta\varrho(\bar x)+o_\delta(1)$.
Moreover, $-\mathcal{L}_R^\psi[v_\delta](y) =
-\mathcal{L}_R^\psi[v_\delta](\bar x) + o_\delta(1)$, and the
facts that $v_\delta \geq v$ and
$v_\delta(\bar x)=v(\bar x)+\delta$ imply that
$-\mathcal{L}_R^\psi[v_\delta](\bar x)
\leq -\mathcal{L}_R^\psi[v](\bar x)+\delta$.
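The latter estimate is a direct computation: writing the non-local term with its natural expression (as in the rest of this section, the exterior terms involving $\psi$ cancel), and using $v_\delta\geq v$ in $B_R$ together with $v_\delta(\bar x)=v(\bar x)+\delta$,
$$
-\mathcal{L}_R^\psi[v_\delta](\bar x)+\mathcal{L}_R^\psi[v](\bar x)
=v_\delta(\bar x)-v(\bar x)-\int_{B_R}J(\bar x-y)\big(v_\delta-v\big)(y)\d y\leq\delta.
$$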
Finally, since $f$ is continuous,
$f(y)=f(\bar x)+o_\delta(1)$. Gathering these estimates
we obtain that for any $y\in B_{\delta^{1/2}}(\bar x)$,
$$
\begin{aligned}
-\varepsilon\Delta\varrho(y) &-\mathcal{L}_R^\psi[v_\delta](y)+
|D\varrho(y)|^m-f(y)+\lambda\\
\leq &-\varepsilon\Delta\varrho(\bar x)-
\mathcal{L}_R^\psi[v](\bar x)+
|D\varrho(\bar x)|^m-f(\bar x)+\lambda +o_\delta(1).
\end{aligned}
$$
Finally, since $\varrho(\bar x)=v_\delta(\bar x)+o_\delta(1)=\varphi(\bar
x)+o_\delta(1)$, we deduce that for $\delta>0$ small enough, according
to~\eqref{eq:perron.contr}, the bump function $v_\delta$ is a subsolution of
\eqref{eq.approx.R.varepsilon} in $B_{\delta^{1/2}}(\bar x)$. Therefore,
$v_\delta$ is a subsolution in the whole ball $B_R$.
\noindent $(iv)$ {\sc Contradiction --}
The above points $(i)-(ii)-(iii)$ imply that $v_\delta\in\mathcal{S}$,
which is a contradiction with the definition of $v$, since
$v_\delta(\bar x)>v(\bar x)$.
We conclude that $v\in \C^{0,\alpha_*}(\overline{B_R})$ is a supersolution
of~\eqref{eq.approx.R.varepsilon} in $B_R$ and since it is also a
subsolution, it is a (viscosity) solution of~\eqref{eq.approx.R.varepsilon}.
Finally, to get that $v\in\C^2(B_R)$ we use some standard bootstrap regularity
estimates. Notice first that, since $v$ and $\psi$ are continuous, both
integral terms in $\mathcal{L}^\psi_R[v]$ are at least Lipschitz
continuous. Since, by assumption, $f$ is also Lipschitz, we apply
\cite[Theorem 3.1]{CapuzzoLeoniPorretta}, which
implies that $v$ is at least Lipschitz continuous, locally in $B_R$.
Thus, the Lipschitz function $v$ satisfies an equation of the form
$-\varepsilon\Delta v = \tilde{F}\in\L^\infty_{\rm loc}(B_R)$ in the
viscosity sense.
This implies that $v$ is actually also a \textit{weak} solution of this
equation. This is a standard fact in the theory of viscosity solutions,
which follows from regularizing $v$ into smooth approximations $v_n$ and
passing to the limit in the weak formulation of the equation.
By standard regularity results, it follows that
$v\in \mathrm{W}^{2,p}_{\mathrm{loc}}(B_R)$ for any $p>1$ so that
$v\in\C^{1,\alpha}$ for any $\alpha\in(0,1)$. Hence
$\tilde{F}$ is in fact in $\C^{0,\alpha}(B_R)$
and from this we deduce that $v\in\C^{2,\alpha}(B_R)$ for any
$\alpha\in(0,1)$.
\end{proof}
We end this section with a uniform (in $\varepsilon$)
estimate of the solution $v$:
\begin{lemma}\label{lem:est.vbar}
For any $R>1$ there exists a constant $C=C(R,\psi,g,f)>0$ such that for
$\varepsilon>0$ small enough,
$\|v\|_{\L^\infty(B_R)}\leq C(R,\psi,g,f)$.
\end{lemma}
\begin{proof}
Consider the following equation
\begin{equation}
\label{eq.JJ}
-\mathcal{L}_R^{\psi}[\chi]=M+1\quad\text{in}\quad B_R,
\end{equation}
where $M$ is defined in Lemma~\ref{lemma.supersolution.continuity}.
We refer to~\cite[Appendix]{MelianRossi} for existence of a solution $\chi\in
L^1(B_R)$ of~\eqref{eq.JJ}. Notice that, since both integrals in the non-local term are at least continuous,
we have $\chi\in\C^0(\overline{B_R})$. Then we consider an approximation of the
identity $(\rho_k)_{k\in\N}$ and set
$$\bar\chi_k:=\rho_k\ast\big(\chi+c\big),\quad c:=\|g\|_{\L^\infty(B_R)}
+ \|\chi\|_{\L^\infty(B_R)},$$
so that for any $k\in\N$, $\bar\chi_k\in\C^2(B_R)\cap\C^0(\overline{B_R})$
and $\bar\chi_k\geq g$ on $\partial B_R$.
Since $\bar\chi_k\to\chi+c$ uniformly in $B_R$ and $c>0$, it follows that
$$-\mathcal{L}^\psi_R[\bar\chi_k]=-\mathcal{L}^\psi_R[\chi]-
\mathcal{L}^0_R[c]+o_k(1)\geq M+1+o_k(1),$$
where $o_k(1)$ vanishes as
$k\to\infty$, uniformly with respect to $x\in B_R$.
We first choose $k=k_0$ big enough (but fixed) so that the right-hand side
above is greater than $M+1/2$. Then, it follows that for
$0<\varepsilon<\varepsilon_0(k_0)$ we have $\varepsilon\|\Delta
\bar\chi_{k_0}\|_{\L^\infty(B_R)} < 1/2$. Hence, setting
$\omega(x):=\bar\chi_{k_0}(x)$ we find that for $\varepsilon$ small enough,
$$-\varepsilon\Delta \omega -\mathcal{L}^\psi_R[\omega]>M$$
which means that $\omega$ is a supersolution
of~\eqref{eq.pbm.laplace.eps.ball} such that $\omega\geq g$ on $\partial
B_R$. By the comparison principle (which can be proved exactly as in the proof of
Lemma~\ref{lemma.supersolution.continuity} and Theorem~\ref{thm:comparison.principle}) we get that
$\overline{v}\leq\omega \leq \|\omega\|_{\L^\infty(B_R)}=:C(R,\psi,g,f)$ in $B_R$. The
result follows since, by construction, $v\leq \overline{v}$.
\end{proof}
\begin{remark}\label{rem:unif.n}
We will use later (see Lemmas~\ref{lem:murne} and \ref{lem.estimates.0})
that the above estimate is uniform with respect to the
data $\psi,g,f$ provided they remain bounded. Especially,
this holds true if we take approximations $\psi_n,g_n,f_n$ that converge
uniformly in $B_R$.
\end{remark}
\section{Existence results for \ep{}}
\label{sect:existence}
The aim of this section is to prove that there exists at least one value of
$\lambda$ for which problem \ep{} is solvable. Moreover, the solution turns out to be bounded by the supersolution $\Psi_\lambda$ constructed in Section~\ref{sect:ass.pre}, see~\eqref{eq:super.lambda}.
\begin{theorem}
\label{thm.existence}
If there exists a strict viscosity subsolution,
$\underline{u}\in\Wloc(\mathbb{R}^N)$, of $\ep{\lambda}$ such that
$\underline{u}\leq \Psi_\lambda$ in $\R^N$ and
$\underline{u}(0)=0$, then there exists a viscosity solution,
${u_\lambda}\in\Wloc(\mathbb{R}^N)$, of $\ep{\lambda}$, such that
$u_{\lambda}\leq\Psi_{\lambda}$ in $\R^N$
and $u_\lambda(0)=0$.
\end{theorem}
We reduce the proof of this result to solving the approximate problem defined
on~$B_R$, see~\eqref{eq.approx.R.varepsilon}, and then passing to the limit,
first as $\varepsilon$ tends to zero and second as $R$ tends to $+\infty$. To
perform this we need two main ingredients: $(i)$ smoothing the data (the
right-hand side $f$ and the subsolution $\underline{u}$); $(ii)$ obtaining
some local uniform bounds, independent of $\varepsilon$ and $R$, in order to
pass to the limit.
Let $\rho_n$ be an approximation of the identity and set $f_n:=\rho_n\ast f$,
$\psi_n:=\rho_n\ast \underline{u}$. Then both $f_n$ and $\psi_n$ are smooth and
converge uniformly in $B_R$ to $f$ and $\underline{u}$ respectively.
In the following, $o_n(1)$ stands for any quantity which vanishes as
$n\to\infty$, uniformly with respect to $x\in B_R$. Thus,
$\psi_n=\underline{u}+o_n(1)$ and $f_n=f+o_n(1)$.
Observe that, as we mentioned in Remark~\ref{rem:regularity.f}, our general
assumption on $f$ is~$\C^1$. This will not be enough here, since at some steps
in this section we will have to compute $\Delta\Psi_\lambda$. This is why we
have to use an approximation argument and work with~$f_n$.
We consider~\eqref{eq.approx.R.varepsilon}$_n$; i.e., problem~\eqref{eq.approx.R.varepsilon} with data $f=f_n$, outer condition $\psi=\psi_n$,
and boundary data $g=\psi_n$ on $\partial B_R$.
The first result we prove states that we can use $\psi_n$ as a subsolution to~\eqref{eq.approx.R.varepsilon}$_n$ in
$B_R$. {Notice that during the proof we will choose $\varepsilon<\eps_0(n)$.
This will not be a problem when passing to the limit, since we will first send
$\varepsilon$ to zero (Lemma~\ref{lemma:epsilon.to.0}) and then $n$ to
infinity.}
\begin{lemma}\label{lem:approx.rhon}
There exists $\eta>0$ such that {for $n$ big enough and
$0<\varepsilon<\eps_0(n)$,} the smooth function $\psi_n$ satisfies
\begin{equation}
\label{eq:psin.is.a.subsolution}
\lambda-\varepsilon\Delta\psi_n-\mathcal{L}_R^{\psi_n}[\psi_n]+
|D\psi_n|^m\leq f_n-\eta/2 \quad\text{in}\quad B_R.
\end{equation}
Moreover, $\psi_n(0)=o_n(1)$ and $\psi_n\leq \Psi_\lambda+o_n(1)$
uniformly in $B_R$.
\end{lemma}
\begin{proof}
Since $\psi_n$ converges uniformly in $B_R$ to
$\underline{u}$,
it is a direct consequence of the assumptions on $\underline{u}$ that $\psi_n(0)=o_n(1)$ and $\psi_n\leq \Psi_\lambda+o_n(1)$.
In order to prove the first part of the lemma, notice that, since
$\underline{u}$ is a locally Lipschitz strict subsolution of~\ep{\lambda} in
$\R^N$, there exists $\eta>0$ such that for almost any $x\in B_R$,
\begin{equation}
\label{eq:subsolution.inequality}
\lambda-\mathcal{L}[\underline{u}]+
|D\underline{u}|^m \leq f-\eta.
\end{equation}
We can then estimate the terms in~\eqref{eq:psin.is.a.subsolution} as
follows: for the non-local term we notice that
$\mathcal{L}_R^{\psi_n}[\psi_n]=\mathcal{L}[\psi_n]=
\mathcal{L}[\underline{u}]+o_n(1)$.
{
To deal with the gradient term, since $\underline{u}$ is only locally
Lipschitz, not $\C^1$, we cannot use $|D\psi_n|^m=|D\underline u|^m+o_n(1)$.
Instead, we use Jensen's inequality: for any convex function $\varphi:\R^N\to\R$,
any probability measure $\sigma$ on $\R^N$ and any $\sigma$-integrable
function $p:\R^N\to\R^N$ there holds
$$\varphi\Big(\int p(y)\d\sigma(y)\Big)\leq\int
(\varphi\circ p)(y)\d\sigma(y).$$
We apply it to $\varphi(s)=|s|^m$, which is convex since $m>1$,
$\sigma(y)=\rho_n(x-y)$ where $x$ is fixed and $p(y)=D\underline{u}(y)$
which is $\sigma$-integrable, because it is locally bounded, while $\sigma$ is
compactly supported. We deduce that for any $x$,
$$|D\psi_n|^m(x)=|\rho_n\ast D\underline{u}|^m(x)\leq
\rho_n\ast\big(|D\underline{u}|^m\big)(x)\,.
$$
Using the subsolution inequality~\eqref{eq:subsolution.inequality} for $\underline{u}$ we get
$$|D\psi_n|^m\leq \rho_n\ast(f-\eta-\lambda+\mathcal{L}[\underline{u}])=
f_n-\eta-\lambda+\mathcal{L}_R^{\psi_n}[\psi_n]+o_n(1)\,,$$
where the $o_n(1)$ is uniform with respect to $x\in B_R$.
Moreover, $-\varepsilon\Delta\psi_n$ is as
small as we want if we choose $\varepsilon<\eps_0(n)$, $n$ being fixed.
}
Hence,
we deduce that
$$\lambda-\varepsilon\Delta\psi_n-\mathcal{L}_R^{\psi_n}[\psi_n]+
|D\psi_n|^m \leq f_n-\eta+o_n(1)+\varepsilon\|\Delta\psi_n\|_{\L^\infty(B_R)}
\quad\text{in}\quad B_R\,,$$
which yields~\eqref{eq:psin.is.a.subsolution}, provided
$\eps <\eps_0(n)$.
\end{proof}
\newcommand{\vrne}{v_{R,n,\varepsilon}}
\newcommand{\wrne}{w_{R,n,\varepsilon}}
\newcommand{\murne}{\mu_{R,n,\varepsilon}}
\newcommand{\Frne}{F_{R,n,\varepsilon}}
A supersolution to~\eqref{eq.approx.R.varepsilon}$_n$ is obtained as in Section~\ref{sect:approximate}, but now
using~\eqref{eq.pbm.laplace.eps.ball}$_n$, which is~\eqref{eq.pbm.laplace.eps.ball} with $\psi$ replaced by $\psi_n$ and $g=\psi_n$. Hence, using Proposition~\ref{proposition.existence.vR.R.varepsilon}, we find a solution
of~\eqref{eq.approx.R.varepsilon}$_n$, defined for $\eps <\eps_0(n)$, that we denote $\vrne$. By construction we have $\vrne\geq\psi_n=\underline u+o_n(1)$.
We define also $$\wrne(x):=\vrne(x)-\vrne(0),$$ so that
$\wrne(0)=0$ and $\wrne(x)=\psi_n(x)-\vrne(0)$ on $\partial B_R$.
Moreover, $\wrne$ satisfies
\begin{equation}
\label{eq.w}
\lambda-\varepsilon\Delta \wrne-
\mathcal{L}_R^{\psi_n}[\wrne]+|D\wrne|^m=
f_n+\murne,\quad x\in B_R,
\end{equation}
where $\murne(x):=\mathcal{L}_R^0[\vrne(0)](x)$.
Concerning this term (which exists neither in the local case nor in the problem posed in the whole space),
we have a first estimate which follows directly from the construction of $\vrne$ and the fact that $\underline u(0)=0$:
\begin{equation}
\label{eq:first.estimate.mu}
\begin{aligned}
\murne(x)&=\vrne(0)\Big(\int_{B_R}J(x-y)\d y-1\Big)\\
&\geq (\underline u(0)+o_n(1))\Big(\int_{B_R}J(x-y)\d y-1\Big)=o_n(1).
\end{aligned}
\end{equation}
Moreover,
introducing the indicator function $\ind{A}$ of $A$, we have:
\begin{lemma}\label{lem:murne}
For any $R>1$, there exists a constant $\nu(R)>0$ such that, for $n$ big enough and
$0<\varepsilon<\eps_0(n)$,
\begin{equation*}\label{mu.rne}
|\murne(x)|\leq \nu(R)\cdot \ind{B_{R}\setminus B_{R-1}}(x).
\end{equation*}
\end{lemma}
\begin{proof}
Notice that, for any $|x|\leq R-1$, since $J$ is compactly supported in $B_1$,
$$\murne(x)=\vrne(0)\Big(\int_{B_R}J(x-y)\d y-1\Big)=0.$$
Hence, we first deduce that $\murne$ is compactly supported in $B_R\setminus
B_{R-1}$. Second, Lemma~\ref{lem:est.vbar} gives an estimate of $\vrne(0)$
by a constant which is independent of $\varepsilon$ (small enough). Actually,
this estimate can be made uniform in $n$, since $\psi_n$ and $f_n$ converge
uniformly in $B_R$ to $\underline{u}$ and $f$ respectively.
\end{proof}
We now prove a local bound for
$\wrne$, independent of $\eps,n$ and $R$,
in terms of the supersolution $\Psi_\lambda$.
\begin{lemma}
\label{lemma:psiboundsw}
For any $R>1$ fixed, $\wrne \leq \Psi_{\lambda}+o_n(1)$
in $\overline B_R$.
\end{lemma}
\begin{proof} By Lemma~\ref{lem:approx.rhon} we have that $\psi_n\leq \Psi_\lambda+o_n(1)$
uniformly in $B_R$. Hence
$$\mathcal{L}_R^{\psi_n}[\Psi_{\lambda}]\leq\mathcal{L}_R^{\Psi_\lambda+o_n(1)}
[\Psi_\lambda]\leq \mathcal{L}[\Psi_{\lambda}]+o_n(1).$$
Since $\Psi_\lambda$ is a strict supersolution of $\epl$ in $\mathbb{R}^N\setminus\{0\}$, we
deduce that for $\varepsilon\ll 1$ and $n$ big enough
\begin{equation}
\label{eq.supersolution.condition.psi}
\lambda-\varepsilon\Delta \Psi_{\lambda}-\mathcal{L}_R^{\psi_n}
[\Psi_{\lambda}]+|D\Psi_{\lambda}|^m>f_n+o_n(1), \quad \text{for}\quad
x\neq 0.
\end{equation}
Let $x_0\in\overline B_R$ be such that $(\wrne-\Psi_{\lambda})(x_0)\geq
(\wrne-\Psi_{\lambda})(x)$ for all $x\in\overline B_R$.
If $x_0\in B_R\setminus\{0\}$ and
$(\wrne-\Psi_{\lambda})(x_0)\leq0$, the result follows. On the other
hand, if $x_0\in B_R\setminus\{0\}$ is a point where
$\wrne-\Psi_{\lambda}$ achieves a positive maximum, then at this point
$D\wrne(x_0)=D\Psi_{\lambda}(x_0)$, $\Delta \wrne(x_0)\leq \Delta\Psi_{\lambda}(x_0)$ and
$\mathcal{L}_R^{\psi_n}[\wrne](x_0)\leq
\mathcal{L}_R^{\psi_n}[\Psi_{\lambda}](x_0)$.
Using that $\wrne$ satisfies~\eqref{eq.w} together with~\eqref{eq:first.estimate.mu} we get a
contradiction with~\eqref{eq.supersolution.condition.psi}.
Hence, either $x_0=0$ or $x_0\in\partial B_R$. In both cases we get
$\wrne\leq \Psi_{\lambda}+o_n(1)$ in $\overline B_R$ as follows:
\noindent$(i)$ If $x_0\in \partial B_R$ then, since by construction
$\vrne(x_0)=\psi_n(x_0)\leq \Psi_\lambda(x_0)+o_n(1)$
and $\vrne(0)\geq \psi_n(0)=o_n(1)$,
we get $\wrne(x_0)\leq \Psi_{\lambda}(x_0)+o_n(1)$.
\noindent$(ii)$ If $x_0=0$ then $\wrne(0)= 0 \leq \Psi_{\lambda}(0)+o_n(1)$.
\end{proof}
Our next aim is to let $\eps\to 0$. To do so, we need estimates from above
and from below that are independent of $\eps$.
\begin{lemma}\label{lem.estimates.0}
For any $R>1$ fixed, there exist $C_1(R),C_2(R)>0$, independent of $\varepsilon$,
such that, for $n$ big enough and
$0<\varepsilon<\eps_0(n)$, $-C_1(R)+o_n(1)\leq \wrne\leq C_2(R)+o_n(1)$ in $B_R$.
\end{lemma}
\begin{proof}
The upper bound is a direct consequence of Lemma~\ref{lemma:psiboundsw}.
For the lower bound we use that, by construction, $\vrne\leq \overline{v}$. This implies, see the proof of
Lemma~\ref{lem:est.vbar}, that $\vrne\leq C(R)$ in $B_R$. Here, as in
Lemma~\ref{lem:murne}, we notice that the estimate is uniform with respect to
$n$, since $\psi_n$ and $f_n$ converge uniformly in $B_R$.
Now, using this bound,
$\wrne(x)=\vrne(x)-\vrne(0)\geq
\psi_n(x)-C(R)=\underline{u}-C(R)+o_n(1)\geq -C_1(R)+o_n(1)$ in $B_R$.
\end{proof}
\begin{lemma}
\label{lemma:epsilon.to.0}
For any $R>1$ fixed, the sequence of solutions (up to a subsequence)
$\{\wrne\}_\varepsilon$ of~\eqref{eq.w} converges locally uniformly
in $B_R$ as $\varepsilon\to 0$ to a continuous viscosity solution
$w_{R,n}$ of
\begin{equation}
\label{eq.w.without.eps}
\lambda-\mathcal{L}_R^{\psi_n}[w_{R,n}]+|Dw_{R,n}|^m=f_n+
\mu_{R,n}, \quad x\in B_R,
\end{equation}
where $\mu_{R,n}$ is compactly supported in $B_R\setminus B_{R-1}$.
Moreover, $w_{R,n}(0)=0$ and $w_{R,n}\leq\Psi_{\lambda}$ in $B_R$.
\end{lemma}
\begin{proof}
By adding and subtracting the term $\wrne$, we rewrite
equation~\eqref{eq.w} under the form
$$\wrne-\eps\Delta
\wrne+|D\wrne|^m=\Frne,$$
where $\Frne:=f_n+\murne+
\mathcal{L}_R^{\psi_n}[\wrne]+\wrne-\lambda$.
Using the estimates of \cite[Theorem 3.1]{CapuzzoLeoniPorretta}
we have that for any $R'<R$ there exists a constant $K$ depending only on
$R',\|\Frne\|_{L^\infty(B_R)}$ and $\|(\wrne)^-\|_{L^\infty(B_R)}$ such
that
$$|\wrne(x)-\wrne(y)|\leq K|x-y|\quad \text{in}\quad B_{R'}.$$
By Lemma~\ref{lem.estimates.0}, we know that $\wrne$ is bounded uniformly
by some $C(R)$ in $B_R$, with respect to $\eps >0$ small enough (and $n$).
Moreover, Lemma~\ref{lem:murne} provides a uniform estimate for
$\murne$. Finally, we can estimate $\mathcal{L}_R^{\psi_n}[\wrne]$ using the
bounds given in Lemma~\ref{lem.estimates.0}, and we get for some constant
$C(R)>0$,
$$\|\Frne\|_{\L^\infty(B_R)}\leq\|f_n\|_{\L^\infty(B_R)}+
|\lambda|+C(R).$$
From this we deduce that $K$ can be chosen independent of $\eps >0$ small
and we obtain a local uniform Lipschitz bound in $B_R$ as $\eps\to0$.
Passing to the limit is done by Ascoli's Theorem and the stability property
of viscosity solutions: up to an extraction, $\wrne\to w_{R,n}$ locally
uniformly in $B_R$, and $w_{R,n}$ is a viscosity solution of \eqref{eq.w.without.eps}.
\end{proof}
The last steps consist in sending $n,R\to +\infty$, for which we have to find other
local estimates, now independent of $R$ and $n$. This time we use gradient
estimates, which are provided by a sort of \textit{implicit control} of the
equation.
\begin{lemma}
\label{lem:est.autocontrol}
Fix $R_0>0$. Then for any $n$ big enough and any $R>R_0+1$,
there exists a constant $C=C(R_0)$ such
that $\|w_{R,n}\|_{\W(B_{R_0})}\leq C$.
\end{lemma}
\begin{proof}
The continuous viscosity solution $w_{R,n}$ of \eqref{eq.w.without.eps} is
Lipschitz continuous in $B_R$. Hence it is differentiable almost everywhere
and equation~\eqref{eq.w.without.eps} holds almost everywhere. Moreover,
since all the terms in~\eqref{eq.w.without.eps} are continuous, the equation
holds everywhere in $B_R$.
As a consequence, since $\mu_{R,n}=0$ on $B_{R_0}\subset B_{R-1}$,
we can estimate the gradient term as follows,
\begin{equation}
\label{eq:autocontrol1}
\begin{aligned}
\sup_{B_{R_0}} |Dw_{R,n}|^m &\leq \sup_{B_{R_0}}|f_n|+
|\lambda|+\sup_{B_{R_0}}\big|\mathcal{L}_R^{\psi_n}[w_{R,n}]\big|.
\end{aligned}
\end{equation}
On $B_{R_0+1}\setminus B_{R_0}$ we use the bound $\psi_n\leq\Psi_\lambda+o_n(1)$, see Lemma~\ref{lem:approx.rhon},
for the non-local term, so that for $n$ big enough, we have the \textit{implicit estimate}
$$
\begin{aligned}
\sup_{B_{R_0}} |Dw_{R,n}|^m &\leq C_0(R_0)+2\sup_{B_{R_0}}|w_{R,n}|+
\sup_{B_{R_0+1}\setminus B_{R_0}}|\Psi_\lambda|\\
&\leq C_1(R_0) +C_2(R_0)\sup_{B_{R_0}}|Dw_{R,n}|,
\end{aligned}
$$
where we have used the Mean Value Theorem and the fact that $w_{R,n}(0)=0$
to get the last inequality.
Setting now $X:=\sup_{B_{R_0}}|Dw_{R,n}|$ we have $X^m\leq C_2X+C_1$ for some
constants $C_1(R_0),C_2(R_0)$ independent of $R$. So, since $m>1$, there exists a
positive constant $C_3=C_3(R_0)$, depending only on $R_0$, such that
$\sup_{B_{R_0}} |Dw_{R,n}| \leq C_3(R_0)$.
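Explicitly, from $X^m\leq C_2X+C_1$ with $m>1$ one may take, for instance,
$$
C_3(R_0):=\max\Big\{1,\big(C_1(R_0)+C_2(R_0)\big)^{\frac{1}{m-1}}\Big\},
$$
since if $X\geq1$ then $X^m\leq(C_1+C_2)X$, hence $X^{m-1}\leq C_1+C_2$.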
Using again that $w_{R,n}(0)=0$, we deduce also that
$\|w_{R,n}\|_{\L^\infty(B_{R_0})}\leq C_4(R_0)$ for some $C_4(R_0)>0$. Gathering
these estimates, we get $\|w_{R,n}\|_{\W(B_{R_0})} \leq C(R_0)$ for some
$C(R_0)>0$, which is the desired result.
\end{proof}
We can finally complete the existence result:
\begin{proof}[Proof of Theorem~{\rm\ref{thm.existence}}]
Since the sequence $\{w_{R,n}\}$ is locally bounded in $\W(B_R)$, independently
of $n$, by using Ascoli's Theorem we can pass to the limit as $n\to\infty$
in $B_R$. We skip the details of this passage to the limit, which
is straightforward, and which yields, passing to the limit in~\eqref{eq.w.without.eps}, a solution $w_R$, locally bounded in $\W(B_R)$, of
\begin{equation}\label{eq.lim.R}
\lambda-\mathcal{L}_R^{\underline{u}}[w_R]+|Dw_R|^m=f+
\mu_R, \quad x\in B_R,
\end{equation}
where $\mu_R$ is still supported on $B_{R}\setminus B_{R-1}$.
Then, we send $R\to\infty$ and get that the functions $w_R$ converge
locally uniformly to a function
$u_{\lambda}\in \W_{\rm
loc}(\R^N)$, such that $ u_{\lambda}\leq \Psi_{\lambda}$
in $\R^N$ and $u_\lambda(0)=0$.
Moreover $u_\lambda$ verifies \ep{\lambda}. Indeed, as a consequence of the
Dominated Convergence Theorem we have
$\mathcal{L}_R^{\underline{u}}[w_R]\to\mathcal{L}[u_{\lambda}]$ locally
uniformly and the
correction term $\mu_R$ vanishes locally uniformly as
$R\to\infty$. So, we can pass to the limit in
\eqref{eq.lim.R} in the viscosity sense to get the result.
\end{proof}
We conclude this section with the following result, which indeed provides solutions of~\ep{}
for certain values of $\lambda$.
\begin{corollary}\label{cor:existence}
For any $\lambda\leq \min(f)$ the problem \ep{\lambda} is solvable and the
constructed solution $u_\lambda$ satisfies $u_\lambda(0)=0$,
$u_\lambda\leq \Psi_\lambda$. \end{corollary}
\begin{proof}
If $\lambda<\min(f)$, we can take $\underline{u}=0$ as a strict subsolution
of~\ep{\lambda} in Theorem~\ref{thm.existence}.
If $\lambda=\min(f)$, then $\underline{u}=0$ is not a {strict}
subsolution of~\ep{\lambda}, but it is already a smooth subsolution. So in
this case we do not need the strict subsolution property, which was only used
to regularize $\underline{u}$ into a smooth subsolution
(see Lemma~\ref{lem:approx.rhon}). Hence the above construction
also works.
\end{proof}
\section{Existence of a critical constant}
\label{sect:critical}
In this Section we investigate the existence of a critical ergodic constant
$\lambda_*$. Following \cite{BarlesMeireles2017, Ichihara}, it would seem
natural to consider the supremum of all
$\lambda$'s such that there exists a solution (or a
subsolution) $u$ of \ref{eq:EP}:
$$\lambda_\sharp:=\sup\big\{ \lambda\in\R: \ep{\lambda}
\text{ is solvable}\,\big\}.$$
However, due to the non-local character of the equation, it seems impossible
to prove that $\lambda_\sharp$ is finite, because such a result would require
a uniform control of the growth of possible solutions. Nevertheless, we have seen in
Subsection~\ref{sect:non.existence} that the growth of (super)solutions is
restricted somehow. Following this remark, let us define for $\mu>0$ the class
$$\Eclass(\mu):=\Big\{ u:\R^N\to\R: \limsup_{|x|\to\infty}
\frac{u(x)}{\Psi(x)}\leq \mu \Big\}$$
where $\Psi$ has been defined in~\eqref{eq:def.psi.0}. Hence, instead of $\lambda_\sharp$, we will deal with
$$
\Lambda(\mu):=\big\{\lambda\in\R:
\text{there exists } u\in\Eclass(\mu)\text{ solution of
}\ep{\lambda}\,\big\}
$$
and define a critical ergodic constant under this growth restriction:
$\lambda_*(\mu):=\sup\Lambda(\mu)$.
We will then relax the growth condition
in Section~\ref{sect:revisited}, after we have collected
more information on solutions bounded from below and on uniqueness.
Let us set $\mu_0:=2+(\min f)^-$, so that $\mu_0=c_\lambda$ for
$\lambda=\min(f)$, see~\eqref{eq:super.lambda}. As we noticed, see Remark~\ref{rem:supersolution.bigger.constant},
for all $\mu\geq\mu_0$, $\mu\Psi$ is a strict supersolution of~\ep{} for $|x|\neq0$.
\begin{lemma}\label{lem:global.est}
Assume that $u\in\Eclass(\mu)$ is a solution of
\ref{eq:EP} for some $\mu\geq\mu_0$. Then $u(x)\leq\mu\Psi(x)+u(0)$ for all $x\in\R^N$.
\end{lemma}
\begin{proof}
We follow the same ideas as in the proof of Lemma~\ref{lemma:psiboundsw}.
First, we define $\tilde{u}:=u-u(0)$, which is still a solution of
\ref{eq:EP} such that $\tilde{u}\in\Eclass(\mu)$.
By the limsup property, for any $\eta>0$ there exists
$R_\eta$ such that for $|x|\geq R_\eta$, $\tilde u(x)\leq (1+\eta)\mu\Psi(x)$.
Now, by continuity in $\overline{B}_{R_\eta}$, the maximum of
$\tilde u-(1+\eta)\mu\Psi$ is attained at some point
$x_0\in \overline{B}_{R_\eta}$. If the maximum is attained at the
boundary, then $\tilde u\leq(1+\eta)\mu\Psi$ in $\R^N$.
Similarly, if the maximum is attained at $x_0=0$, then its value is
$\tilde u(0)-(1+\eta)\mu\Psi(0)=0$, so again $\tilde u\leq(1+\eta)\mu\Psi$ in $\R^N$.
Finally, if the maximum is attained at a point
$x_0$ such that $0<|x_0|<R_\eta$, we use the comparison principle, Theorem~\ref{thm:comparison.principle}: $\tilde u$ is a
(sub)solution of \ref{eq:EP} while $(1+\eta)\mu\Psi$ is a $C^1$-smooth, strict
supersolution of \ref{eq:EP} and $\tilde u\leq(1+\eta)\mu\Psi$ outside the ball $B_{R_\eta}$. We reach a contradiction by using
$(1+\eta)\mu\Psi$ as a test function for $\tilde u$ at $x_0$.
The conclusion is that the maximum of $\tilde u-(1+\eta)\mu\Psi$ in $\R^N$
is non-positive, and the result follows after letting $\eta$ tend to zero:
$\tilde{u}\leq\mu\Psi$ which implies the estimate on $u$.
\end{proof}
\begin{lemma}
Let $\lambda_1\in \Lambda(\mu)$ for some $\mu>0$.
Then $\lambda_2\in\Lambda(\mu)$ for all $\lambda_2<\lambda_1$.
\end{lemma}
\begin{proof}
Since $\lambda_1\in\Lambda(\mu)$, there exists a solution
$u_1\in\Wloc(\mathbb{R}^N)\cap\Eclass(\mu)$ of
$\ep{\lambda_1}$. But since $\lambda_2<\lambda_1$, it follows that
$u_1-u_1(0)$ is a strict subsolution of \ep{\lambda_2}.
Then, Theorem~\ref{thm.existence} with $\underline{u}=u_1-u_1(0)$ and
$\lambda=\lambda_2$ yields a solution $u\in\Wloc(\mathbb{R}^N)$ of~\ep{\lambda_2}, such
that $u(0)=0$ and $u\leq u_1$. Hence, $u\in\Eclass(\mu)$
which implies that $\lambda_2\in\Lambda(\mu)$.
\end{proof}
\begin{lemma}\label{lem:lambda.finite}
For any $\mu\geq \mu_0$, we have $\min(f)\leq\lambda_*(\mu)<\infty$.
\end{lemma}
\begin{proof}
We begin with the bound from below: from Corollary~\ref{cor:existence} we have that if $\lambda=\min(f)$, then $u_\lambda$ is a solution of~\ep{} such that $u_\lambda(0)=0$ and $u_\lambda\leq
c_\lambda\Psi$. But for this specific
$\lambda$, $c_\lambda=\mu_0$. Hence, $u_\lambda$ belongs to
$\Eclass(\mu_0)\subset\Eclass(\mu)$ for any $\mu\geq \mu_0$, which proves that
for any $\mu\geq \mu_0$, $\lambda_*(\mu)\geq\min(f)$.
Assume now, by contradiction, that $\lambda_*(\mu)=\infty$.
Then, there exists a sequence of solutions $\{(\lambda_n,v_n)\}$
such that $\lambda_n\to\infty$ as
$n\to\infty$; in particular, we can assume that $\lambda_n\geq\min(f)$ for all $n$
sufficiently large.
Following \cite{BarlesMeireles2017}, we set
$\psi_n:=\lambda_n^{-1/m}(v_n-v_n(0))$ so that
$$
-\lambda_n^{1/m}\mathcal{L}[\psi_n]+\lambda_n|D\psi_n|^{m}=f-\lambda_n
$$
and after dividing by $\lambda_n$ we get
\begin{equation}\label{ineq:contr.psi.n}
|D\psi_n|^{m}=\lambda_n^{-1}f+\lambda_n^{1/m-1}\mathcal{L}[\psi_n]-1.
\end{equation}
Now we fix $R_0>0$ and use the \textit{implicit estimates} as in the proof of
Lemma~\ref{lem:est.autocontrol}, but here we take into account the uniform
estimate given by Lemma~\ref{lem:global.est} in order to control the convolution
on $B_{R_0+1}\setminus B_{R_0}$: since $\psi_n(0)=0$ and $\lambda_n\geq1$ for
$n$ big enough, we have $\psi_n\leq \lambda_n^{-1/m}\mu\Psi\leq\mu\Psi$. Hence
$$
\begin{aligned}
\sup_{B_{R_0}} |D\psi_n|^m &\leq \lambda_n^{-1}\sup_{B_{R_0}}|f|
+\lambda_n^{1/m-1}\sup_{B_{R_0}}|\,\mathcal{L}[\psi_n]\,|\\
& \leq C_1(R_0)+\mu\sup_{B_{R_0+1}\setminus B_{R_0}} |\Psi|+
2\sup_{B_{R_0}}|\psi_n|\,.
\end{aligned}
$$
Recall that $\mu>0$ is fixed so that, setting
$X:=\sup_{B_{R_0}}|D\psi_n|$, there exist some constants
$a(R_0),\, b(R_0)$ such that for $n$ big enough
$$X^m\leq a(R_0)+b(R_0)X.$$
This yields a uniform bound (\textit{i.e.} independent of $n$) for the gradient of
$\psi_n$ in any fixed ball $B_{R_0}$.
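For completeness, let us make this elementary step explicit: if $X\geq0$ verifies $X^m\leq a+bX$ with $m>1$ and $a,b\geq0$, then either $X\leq (2b)^{1/(m-1)}$, or $bX\leq\frac12X^m$ and hence
$$
X^m\leq a+\tfrac12X^m\quad\Longrightarrow\quad X\leq (2a)^{1/m}.
$$
In both cases, $X\leq\max\big((2b)^{1/(m-1)},(2a)^{1/m}\big)$, a bound depending only on $a(R_0)$ and $b(R_0)$.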
Using the fact that $\psi_n(0)=0$, up to extraction of a subsequence, we can assume that
$\psi_n\to\psi$ locally uniformly for some $\psi\in\Wloc(\R^N)$. Then, sending
$n\to\infty$ in \eqref{ineq:contr.psi.n}, we obtain a
contradiction: $|D\psi|^m\leq -1$. The conclusion is that necessarily
$\lambda_*(\mu)<\infty$.
\end{proof}
\begin{lemma}
For any $\mu\geq \mu_0$, there exists a solution $v\in \Eclass(\mu)$ of
\ep{\lambda} for the critical ergodic constant $\lambda=\lambda_*(\mu)$.
\end{lemma}
\begin{proof}
Consider a sequence of solutions $\{(\lambda_n,v_n)\}$ such that
$\lambda_n\to\lambda_*(\mu)$. Since for any $n$,
$\tilde v_n:=v_n-v_n(0)\in\Eclass(\mu)$ and $\tilde{v}_n(0)=0$,
we can use again the same \textit{implicit estimates}
as in Lemma~\ref{lem:lambda.finite}. This implies that the sequence
$\{\tilde v_n\}$ is locally uniformly bounded in $\Wloc(\mathbb{R}^N)$.
Hence, up to extraction of a subsequence, we get local uniform convergence of
$\tilde v_n$ to some $\tilde v\in\Wloc(\mathbb{R}^N)$ and passing to the limit in the
viscosity sense, $\tilde v$ is a solution of $\ep{\lambda}$ for
$\lambda=\lambda_*(\mu)$.
Finally, since for any $n\in\N$, $\tilde v_n\leq \mu\Psi$, see Lemma~\ref{lem:global.est}, we have
$\tilde v\in\Eclass(\mu)$ so that $\lambda_*(\mu)\in\Lambda(\mu)$.
\end{proof}
\section{Bounded from below solutions}
\label{sect:bounded}
Throughout this section we follow~\cite{Ichihara, Ichihara2013} but, as mentioned before, we have to adapt the arguments to take into account the non-local character of the problem. Here we use the specific sub and supersolutions that we constructed in
Section~\ref{sect:ass.pre} and $\Theta$ will stand for $\Theta_0$, the subsolution
constructed with $\lambda=0$ and $\kappa=1$, see Lemma~\ref{lem:subsol}.
Let us consider for $\sigma\in(0,1)$ the following equation, defined in
$\mathbb{R}^N$:
\begin{equation}
\label{eq:with.sigmav}
-\mathcal{L}[v]+|Dv|^m+\sigma v=f+ \sigma \Theta.
\end{equation}
\begin{lemma}
\label{lemma:phi0.sub}
There exists $c_1>0$ such that for any $\sigma>0$,
$\theta_0:=\Theta-c_1\sigma^{-1}$ is a strict
subsolution of~\eqref{eq:with.sigmav}.
\end{lemma}
\begin{proof}
From Lemma~\ref{lem:subsol} we see that $\Theta$ is not necessarily a
subsolution of~\ep{0}, since $f$ could be negative.
However, there exists $c\geq0$ such that, in the viscosity sense,
\begin{equation}\label{eq:approx.subsol}
-\mathcal{L}[\Theta]+|D\Theta|^m-f\leq c\quad\text{in}\quad\R^N.
\end{equation}
Indeed, if $|x|>R_*$, using~\eqref{eq:theta.condition} with $\kappa=1$, then $-\mathcal{L}[\Theta]+
|D\Theta|^m-f\leq 1-f\leq 0$, while for $|x|<R_*$,
we have $-\mathcal{L}[\Theta]+ |D\Theta|^m-f \leq -\min(f)$. Finally,
recall that (see Lemma~\ref{lem:subsol}) if $|x|=R_*$, no smooth
test function can touch from above so that we do not need to check the
subsolution condition in the viscosity sense.
Hence,~\eqref{eq:approx.subsol} holds true with $c=(\min(f))^-\geq0$.
Now, choosing $c_1>c$, we have that
$$
\begin{aligned}
-\mathcal{L}[\theta_0]+|D\theta_0|^m+\sigma\theta_0-f& =
-\mathcal{L}[\Theta]+|D\Theta|^m+\sigma(\Theta-c_1\sigma^{-1})-f\\
&\leq c-c_1+\sigma\Theta< \sigma\Theta,
\end{aligned}
$$
which proves the result.
\end{proof}
Now, in order to construct a supersolution to the viscous version of
\eqref{eq:with.sigmav} we use a $\C^2$-regularization of $\Psi$ as follows: let
$\overline\Psi\in\C^2(\mathbb{R}^N)$, such that $\overline{\Psi}=2\Psi$ if
$|x|\geq R_*$ and $\overline{\Psi}\geq 0$ if $|x|\leq R_*$. Notice that such a
$\overline{\Psi}$ exists, since $\Psi\geq 0$ in $\R^N$ and it is $\C^2$-regular for
$|x|\geq R_*$. Notice also that the constant $2$ corresponds to the choice
$\lambda=0$ in \eqref{eq:super.lambda}.
\begin{lemma}\label{lem:super.vsigma}
There exists $c_2>0$ such that for any $R>R_*$,
$\sigma>0$ and $0<\eps<\eps_0(c_2,R)$,
$\psi_0:=\overline{\Psi}+c_2\sigma^{-1}$
is a strict supersolution of
\begin{equation*}\label{eq:with.sigmav.viscous}
-\varepsilon\Delta v -\mathcal{L}[v]+|Dv|^m+\sigma v=
f+ \sigma \Theta\ \text{ in } B_R\,.
\end{equation*}
\end{lemma}
\begin{proof}
Notice first that, for $|x|>R_*$, $\Theta(x)=|x|-R_*$ while
$\Psi(x)=|x|f^{1/m}(x)\geq|x|$; since moreover $\Psi\geq0$ in $\R^N$,
it follows that $\Theta\leq\overline{\Psi}$.
Now, in a similar way as in the proof of Lemma~\ref{lemma:phi0.sub}, using that $2\Psi$ is a strict supersolution of~\ep{} (with $\lambda=0$, see Proposition~\ref{prop:psilamba.strict}) for
$|x|\geq R_*>0$, and
that for $|x|\leq R_*$, both $f$ and $\overline\Psi$ are regular, we
obtain $ -\mathcal{L}[\overline\Psi]+ |D\overline\Psi|^m-f\geq -c$
for some $c>0$ independent of $\eps,\sigma,R$. Hence,
$$
\begin{aligned}
-\eps\Delta\psi_0-\mathcal{L}[\psi_0]+|D\psi_0|^m+\sigma\psi_0-f&=
-\eps\Delta\overline{\Psi}-\mathcal{L}[\overline\Psi]+|D\overline\Psi|^m+
\sigma(\overline\Psi+c_2\sigma^{-1})-f\\
&\geq -\eps\|\Delta\overline{\Psi}\|_{\L^\infty(B_R)}
-c+c_2+\sigma\overline\Psi> \sigma\Theta,
\end{aligned}
$$
provided we choose $c_2>c$ and $\eps<\eps_0(c_2,R)$ small enough.
\end{proof}
\begin{lemma}
\label{lemma:existence.vsigma}
For any $\sigma>0$, there exists a viscosity solution $v_\sigma\in\W_{\rm
loc}(\mathbb{R}^N)$ of~\eqref{eq:with.sigmav}. Moreover, $\sigma
v_\sigma(0)$ is bounded independently of $\sigma$.
\end{lemma}
\begin{proof}
Using $\theta_0$ and $\psi_0$ as subsolution and supersolution respectively, we follow exactly
the proofs of Proposition~\ref{proposition.existence.vR.R.varepsilon} and
Theorem~\ref{thm.existence} to construct a solution with the desired
properties. We sketch only the main modifications that we make here:
\noindent $(i)$ We use $\theta_0$ as a strict
subsolution in $B_R$ and regularize it as $(\theta_0)_n$, which plays the role of
$\psi_n$ in Proposition~\ref{proposition.existence.vR.R.varepsilon}.
This yields that for any $\varepsilon\in(0,1)$ and $R>1$ fixed,
we have a solution $v_{\sigma,R,n,\eps}$ of the approximate problem
\begin{equation}\label{eq:with.sigmav.approx}
\left\{\begin{array}{ll}
-\varepsilon\Delta v -\mathcal{L}_R^{\theta_0}[v]+|Dv|^m+\sigma v=
f_n+ \sigma \Theta=\tilde{f}_n,& x\in B_R,\\
v=\theta_0,&x\in\partial B_R.
\end{array}\right.
\end{equation}
Notice that $\Theta$ is Lipschitz so that we do not need to regularize it
in the right-hand side.
\noindent $(ii)$
As we already noticed, $\theta_0\leq\Theta\leq\overline{\Psi}$ in $\R^N$. Thus it follows that
$-\mathcal{L}_R^{\theta_0}[\overline{\Psi}]\geq
-\mathcal{L}[\overline{\Psi}]$. Hence, using Lemma~\ref{lem:super.vsigma}
and the fact $\psi_0=\overline{\Psi}+c_2\sigma^{-1}\geq\overline{\Psi}$,
we obtain that for $n$ big enough and $\eps$ small enough (depending on
$c_2$ and $R$), $\psi_0$ is a supersolution
of~\eqref{eq:with.sigmav.approx}.
\noindent $(iii)$ Using the classical comparison result in $B_R$, see Theorem~\ref{thm:max.principle}, for
$R>R_*$, we deduce that
$$
\Theta-\frac{c_1}{\sigma}=\theta_0\leq v_{\sigma,R,n,\eps}\leq \psi_0=
\overline\Psi+\frac{c_2}{\sigma},
$$
which directly yields local uniform bounds for the solution,
independent of $R>R_*$, $n$ and $\varepsilon$.
Actually, this step is easier than in Theorem~\ref{thm.existence}.
\noindent $(iv)$ Passing first to the limit as $\eps\to0$ (with $R>R_*$
fixed), then as $n,R\to\infty$, we conclude that there exists a function
$v_\sigma\in\W_{\rm loc}(\R^N)$, viscosity solution
of~\eqref{eq:with.sigmav}, which verifies
\begin{equation}
\label{eq:bounds.for.vsigma}
\Theta-\frac{c_1}{\sigma}\leq v_{\sigma}\leq \overline\Psi+\frac{c_2}{\sigma}.
\end{equation}
This implies that $\sigma v_\sigma(0)$ is bounded between $-c_1$ and
$c_2$, which are constants independent of $\sigma$.
\end{proof}
The last step consists in sending $\sigma\to 0$ to obtain a bounded from
below solution of~\ep{\lambda} for a certain $\lambda$. We
define $w_\sigma:=v_\sigma-v_\sigma(0)$, which verifies $w_\sigma(0)=0$ and is a
viscosity solution of
\begin{equation}
\label{eq:wsigma.solution}
\lambda_\sigma-\mathcal{L}[w_\sigma]+|Dw_\sigma|^m+\sigma w_\sigma=f+ \sigma
\Theta\ \text{ in $\mathbb{R}^N$},
\end{equation}
where $\lambda_\sigma=\sigma v_\sigma(0)$. Again, we need a uniform bound from
above in order to control the non-local term as $\sigma\to0$:
\begin{lemma}
\label{lemma:wsigma.bounded}
There exists $\mu>0$ such that for any $\sigma\in(0,1)$, $w_\sigma\leq \mu\Psi$.
\end{lemma}
\begin{proof}
The argument is similar to that of Lemma~\ref{lemma:psiboundsw}, except that
we are in the whole space $\R^N$.
Let us first notice that since $\lambda_\sigma$ is bounded from below, we
can find $\mu>0$ such that for any $\sigma>0$,
$c_{\lambda_\sigma}= 2+(\lambda_\sigma)^-<\mu$. This implies in particular
that $\mu\Psi$ is a (strict) supersolution of $\ep{\lambda_\sigma}$.
Now, we keep $\sigma>0$ fixed.
Since $w_\sigma\leq 2\Psi+c_2\sigma^{-1}$, for $|x|$ big and $\mu>2$, it
follows that $w_\sigma-\mu\Psi$ reaches a maximum at some point
$x_0\in\R^N$.
\noindent $(i)$ If $x_0=0$, the result follows by using that $w_\sigma(0)=0$
and $\mu\Psi\geq 0$.
\noindent $(ii)$ Let $x_0\neq 0$. We can assume that
$w_\sigma(x_0)>\mu\Psi(x_0)$; otherwise the maximum is non-positive and we are done.
Since $w_\sigma$ is a viscosity solution of~\eqref{eq:wsigma.solution}, we can
use the subsolution condition at $x_0$ with $\mu\Psi$ as test
function (recall that by construction $\mu\Psi$ is $\C^1$-smooth). We get
\begin{equation}
\label{eq:wsigma.viscoity.sub}
\lambda_{\sigma}-\mathcal{L}[w_\sigma]+|D(\mu\Psi)|^m+\sigma w_\sigma-f\leq
\sigma \Theta\,.
\end{equation}
Since $w_\sigma-\mu\Psi$ reaches a maximum at $x_0$, we have
$-\mathcal{L}[w_\sigma](x_0)\geq -\mathcal{L}[\mu\Psi](x_0)$.
Hence we get
$$\lambda_\sigma-\mathcal{L}[\mu\Psi]+|D(\mu\Psi)|^m+\sigma \mu\Psi-f\leq
\sigma \Theta\,.
$$
But since $\mu\Psi$ is a supersolution of $\ep{\lambda_\sigma}$ and
$\mu\Psi>\Theta$, we reach a contradiction.
The conclusion is that for any $\sigma>0$, we have $w_\sigma\leq\mu\Psi$ in
$\R^N$ for some $\mu>0$ fixed.
\end{proof}
In order to pass to the limit, we need local uniform estimates.
\begin{lemma}
\label{lemma:wsigma.uniformly.bounded}
Let $R_0>0$. For any $R>R_0+1$,
there exists a constant $C=C(R_0)$ such
that $\|w_{\sigma}\|_{\W(B_{R_0})}\leq C$.
\end{lemma}
\begin{proof}
We use the same \textit{implicit estimate} technique
as in the proof of Lemma~\ref{lem:est.autocontrol} with only two minor
modifications. The first one comes from the extra term $\sigma\Theta$ in
equation~\eqref{eq:wsigma.solution}, which does not pose any problem in $B_{R_0+1}$.
The second comes from the non-local
operator, which is now defined on the whole space, \textit{i.e.} $\mathcal{L}$
instead of $\mathcal{L}_R^{\psi}$. In order to deal with this latter issue,
we use the uniform bound $w_\sigma\leq\mu\Psi$ on $B_{R_0+1}\setminus B_{R_0}$,
see Lemma~\ref{lemma:wsigma.bounded}. Hence, the equivalent
to~\eqref{eq:autocontrol1} reads now
$$
\begin{aligned}
\sup_{B_{R_0}} |Dw_{\sigma}|^m &\leq \sup_{B_{R_0}}|f|+
\sup_{B_{R_0}} |\sigma\Theta_0|+
|\lambda_\sigma|+\sup_{B_{R_0}}\big|\mathcal{L}[w_{\sigma}]\big|\\
&\leq C_0(R_0)+2\sup_{B_{R_0}}|w_{\sigma}|+
\sup_{B_{R_0+1}\setminus B_{R_0}}|\mu\Psi|\\
&\leq C_1(R_0) +C_2(R_0)\sup_{B_{R_0}}|Dw_{\sigma}|.
\end{aligned}
$$
We conclude the proof as in Lemma~\ref{lem:est.autocontrol}, using that here
also $w_\sigma(0)=0$ and get
$\|w_{\sigma}\|_{\W(B_{R_0})} \leq C(R_0)$.
\end{proof}
Finally, we also need to control $w_\sigma$ uniformly from below:
\begin{lemma}
\label{existence:bounded.from.below}
There exists $M>0$ such that for any $\sigma>0$,
\begin{equation*}
\label{eq:compair.w.theta}
w_\sigma\geq \Theta-M \quad \text{ in $\mathbb{R}^N$}.
\end{equation*}
\end{lemma}
\begin{proof}
First of all, observe that, thanks to the estimate in $\Wloc$,
for fixed $R>R_*$, there exists $M=M(R)>0$ such that
$$\sup_{0<\sigma<1}\sup_{B_R}(|\Theta|+|w_\sigma|)\leq M\,.$$
In order to prove this estimate, we fix $\delta\in(1/2,1)$ and show
that $w_\sigma\geq\delta\Theta-M$ in $\mathbb{R}^N$. We distinguish three cases:
\noindent$(i)$ If $|x|\leq R$, it is straightforward that
$\delta\Theta-w_\sigma\leq \sup_{B_R} (|\Theta|+|w_\sigma|)\leq M$, and the
result follows.
\noindent$(ii)$ Using~\eqref{eq:bounds.for.vsigma} we have that
$\inf_{\mathbb{R}^N}(w_\sigma-\Theta)>-\infty$. Then, since $M<\infty$ and
$\Theta\to\infty$ as $|x|\to\infty$, we get that
$$
w_\sigma-\delta\Theta+M=(w_\sigma-\Theta)+(1-\delta)\Theta+M\to\infty
$$
as $|x|\to\infty$. Hence, there exists $R_1=R_1(\sigma,\delta)>R$, such that
$w_\sigma\geq\delta\Theta-M$ for $|x|>R_1$.
\noindent$(iii)$ Finally, let $\mathcal{A}:=\{x\in\mathbb{R}^N : R<|x|<R_1\}$.
The idea here is to apply a comparison argument to the functions $w_\sigma$ and
$\delta\Theta-M$, which will imply the result. We observe that neither
$w_\sigma$ nor $\delta\Theta-M$ is a supersolution or a subsolution
of~\eqref{eq:with.sigmav}. But consider
\begin{equation}
\label{eq:eq.modifies.sigma}
-\mathcal{L}[\nu]+|D\nu|^m+\sigma \nu-f= \sigma \Theta-\sigma M\quad \text{ in
}\mathcal{A}\,.
\end{equation}
Since $\lambda_\sigma=\sigma v_\sigma(0)$ is bounded by $\sigma M$,
from~\eqref{eq:wsigma.solution} we get that $w_\sigma$ is a supersolution
of~\eqref{eq:eq.modifies.sigma}.
On the other hand, since $R>R_*$, we have $\Theta=(|x|-R_*)$ and $f>1$. Hence,
using that $-\mathcal{L}[\Theta]\leq0$,
$$\begin{aligned}
-\mathcal{L}[\delta\Theta-M] &+|D(\delta\Theta-M)|^m+
\sigma(\delta\Theta-M)-f =-
\delta\mathcal{L}[\Theta]+\delta^m+\sigma(\delta\Theta-M)-f\\
&\leq \sigma\Theta+(\delta^m-\sigma M-1)\leq \sigma\Theta-\sigma M\,.
\end{aligned}
$$
We conclude that $\delta\Theta-M$ is a subsolution
of~\eqref{eq:eq.modifies.sigma}.
Thanks to $(i)$ and $(ii)$ above, we have that
$w_\sigma \geq \delta\Theta-M$ on $\partial\mathcal{A}$ so we can apply a
comparison argument, see Theorem~\ref{thm:comparison.principle}, to get that $w_\sigma \geq \delta\Theta-M$ in $\mathcal{A}$.
From $(i)$--$(iii)$ we have that $w_\sigma \geq \delta\Theta-M$ in
$\mathbb{R}^N$ and we conclude by letting $\delta\to 1$.
\end{proof}
We can finally prove the existence of a solution of~\ep{} that is bounded from below.
\begin{theorem}
There exists a solution $(\lambda,u)$ of~\ep{} such that
$\inf_{\mathbb{R}^N}(u-\Theta)>-\infty$ and $u\in\Eclass(\mu)$ for some
$\mu>2$.
\end{theorem}
\begin{proof}
In order to pass to the limit in~\eqref{eq:wsigma.solution}, we use that
$|\lambda_\sigma|$ is bounded independently of $\sigma$ and
the bounds of Lemma~\ref{lemma:wsigma.uniformly.bounded}. This yields a sequence
$\{\sigma_n\}$, with $\sigma_n\to 0$ as $n\to\infty$, a constant $\lambda$ and
$u\in \W_{\rm loc}(\mathbb{R}^N)$, such that $\lambda_{\sigma_n}\to\lambda$ and
$w_{\sigma_n}\to u$ as $n\to \infty$. By passing to the limit
in~\eqref{eq:wsigma.solution} we also get that $(\lambda,u)$ is a solution
of~\ep.
Moreover, since $w_\sigma\geq\Theta-M$ for all $\sigma>0$ we get that
$\inf_{\mathbb{R}^N}(u-\Theta)>-\infty$. Finally, we pass to the limit in
$w_\sigma(0)=0$ and in the estimate $w_\sigma\leq\mu \Psi$ of
Lemma~\ref{lemma:wsigma.bounded} to conclude that $u\in\Eclass(\mu)$ for
some $\mu>2$.
\end{proof}
\section{Uniqueness}
\label{sect:uniqueness}
We are finally concerned with the uniqueness (up to addition of constants) of
solutions to~\ep{}, for $\lambda$ fixed. To this aim, we first develop
some tools concerning the behaviour of bounded from below solutions and
comparison results for them. Let us recall that we have constructed the supersolution
$\mu\Psi$ for large values of $x$, under the conditions \hyp{H0}--\hyp{H2};
moreover, for those values of $x$, $\mu\Psi$ is just
$\mu|x|f^{1/m}(x)$, see Section~\ref{sect:ass.pre}.
We begin with a lower estimate. In
\cite[Proposition 3.4]{BarlesMeireles2017} the authors consider functions with power-type growth. Here we use a similar technique, but we have to refine it, since we
consider functions whose growth can be much faster than power-type.
\begin{lemma}
\label{lemma:u.bounded.below.Psi}
Let $f$ verify \hyp{H0}--\hyp{H4} and let $u$ be a supersolution of~\ep{} for
$|x|\gg1$ such that $u\in\Eclass(\mu)$ for some $\mu>0$ and
$\inf_{\mathbb{R}^N} u>-\infty$. Then, for any $\eta\in(0,\eta_0)$,
there exists a constant $C_\eta>0$, such that
$$
u(x)\geq C_\eta \Psi((1-\eta)x)-\frac{1}{C_\eta}
\,,\text{ for }|x|\gg 1.$$
\end{lemma}
\begin{proof}
Since $u$ is bounded from below, we can assume without loss of generality
(just by adding a constant) that $u\geq 0$. Then we argue by contradiction,
\textit{i.e.} we assume that there exists a sequence
$|x_\varepsilon|\to\infty$ such that
$$
\frac{u(x_\varepsilon)}{\Psi((1-\eta)x_\varepsilon)}\to 0.
$$
Let $\alpha=1-\eta$ and define
$$
v_\varepsilon(s):=
\frac{u(x_\varepsilon+s|x_\varepsilon|)}{\Psi(\alpha x_\varepsilon)},
\quad s\in B_\eta.
$$
Then $v_\varepsilon(0)\to 0$ as $\varepsilon\to 0$ and
$$
|Dv_\varepsilon|^m(s)=
\Big(\frac{|x_\varepsilon|}{\Psi(\alpha x_\varepsilon)}\Big)^m
\Big|Du(x_\varepsilon+s|x_\varepsilon|)\Big|^m.
$$
We use now that $u$ is a non-negative supersolution of~\ep{} to estimate the
gradient from below: $J\ast u\geq0$ so that
$|Du(y)|^m\geq f(y)-u(y)-\lambda$. On the other hand, using
Lemma~\ref{lem:global.est}, we have $u(y)\leq\mu\Psi(y)+u(0)$.
We combine these inequalities at $y=x_\eps+s|x_\eps|$. Notice that
$\alpha|x_\eps|\leq|y|\leq(1+\eta)|x_\eps|$, so that $|y|$ is large provided
$\eps>0$ is small enough and in this case,
$$
\frac{|x_\varepsilon|}{\Psi(\alpha x_\varepsilon)}=
\frac{1}{\alpha f^{1/m}(\alpha x_\varepsilon)}\,.
$$
Hence it follows that
$$
\begin{aligned}
|Dv_\varepsilon|^m(s)& \geq \frac{1}{\alpha^mf(\alpha x_\varepsilon)}
\Big(f(x_\varepsilon+s|x_\varepsilon|)-u(x_\varepsilon+s|x_\varepsilon|)
-\lambda\Big)\\
&\geq \frac{f(x_\varepsilon+s|x_\varepsilon|)}{\alpha^m
f(\alpha x_\varepsilon)} -
\frac{\mu\Psi(x_\varepsilon+s|x_\varepsilon|)+u(0)}{\alpha^m
f(\alpha x_\varepsilon)}-\frac{\lambda}{\alpha^m
f(\alpha x_\varepsilon)}\\
& \geq \frac{f(x_\varepsilon+s|x_\varepsilon|)}{\alpha^m
f(\alpha x_\varepsilon)}
\Big(1-\frac{\mu\Psi(x_\varepsilon+s|x_\varepsilon|)+u(0)+\lambda}
{f(x_\varepsilon+s|x_\varepsilon|)}\Big)\\
& \geq \frac{C_\eta}{\alpha^m}(1-o_\varepsilon(1))
\geq \frac{C_\eta}{2\alpha^m},\quad {\rm for\quad}\varepsilon\ll 1,
\end{aligned}
$$
where the last two lines follow from \hyp{H3} and \hyp{H4}.
Therefore, we conclude that for $\varepsilon$ small
$$
|Dv_\varepsilon|(s)\geq \frac1{\alpha}(\frac{C_\eta}{2})^{1/m}>0.
$$
Let $w$ be a solution to $|Dw|=\frac1{\alpha}(\frac{C_\eta}{2})^{1/m}$ in
$B_\eta$ with boundary data $ w=0$. By a standard comparison (in the viscosity
sense) for the equation $|Du|=\text{constant}$, we deduce that
$v_\varepsilon\geq w$ for any $\varepsilon\ll 1$. But this leads to a
contradiction, since $w(0)>0$ while $v_\varepsilon(0)\to 0$.
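Let us note, for the reader's convenience, that $w$ can be taken explicitly as the distance-type function
$$
w(s)=\frac{1}{\alpha}\Big(\frac{C_\eta}{2}\Big)^{1/m}\big(\eta-|s|\big),
\quad s\in B_\eta,
$$
which solves this eikonal equation in the viscosity sense, vanishes on $\partial B_\eta$, and satisfies $w(0)=\frac{\eta}{\alpha}\big(\frac{C_\eta}{2}\big)^{1/m}>0$.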
\end{proof}
\begin{lemma}
\label{lemma:anu(ax).is.super}
Let $f$ verify~\hyp{H0}--\hyp{H3} and \hyp{H5}--\hyp{H6}.
Let $u$ be a supersolution of~\ep{} such that $u\in\Eclass(\mu)$ for
some $\mu>0$ and $\inf_{\mathbb{R}^N} u>-\infty$. Then,
there exist $a_0>1$ and $R_1>0$ such that for all $a\in(1,a_0)$,
$\overline{u}(x):= a^Nu(ax)$ is a supersolution of~\ep{} for $|x|\geq R_1$.
\end{lemma}
\begin{proof}
For simplicity, we reduce the computations to the case $u(0)=0$ and use the
estimate $u\leq\mu\Psi$ given by Lemma~\ref{lem:global.est}.
Define $\mathcal{A}[\overline{u}](x):=
\lambda-\mathcal{L}[\overline{u}](x)+|D\overline{u}|^m(x)-f(x).$ Using that
$u$ is a supersolution of~\ep{} and $a>1$, we have that
\begin{equation}
\label{eq:first.estimate.A}
\begin{aligned}
\mathcal{A}[\overline{u}](x)&=\lambda-\mathcal{L}[u](ax)+
a^{(N+1)m}|Du|^m(ax)-f(ax)\\&
\qquad+\mathcal{L}[u](ax) -\mathcal{L}[\overline{u}](x)+f(ax)-f(x)\\
&\geq f(ax)-f(x)+\mathcal{L}[u](ax)-\mathcal{L}[\overline{u}](x).
\end{aligned}
\end{equation}
Changing variables and using that $J$ is compactly supported in $B_1$, we
can estimate the difference
$\mathcal{L}[u](ax)-\mathcal{L}[\overline{u}](x)$ as follows
$$
\begin{aligned}
\mathcal{L}[u](ax)-\mathcal{L}[\overline{u}](x)&\geq
\int_{\mathbb{R}^N} J(ax-y)u(y)\d y-a^N\int_{\mathbb{R}^N} J(x-y)u(ay)\d y\\
&= a^N\int_{\mathbb{R}^N} (J(az)-J(z))u(a(x-z))\d z\\
&= a^N\int_{|z|<1} (J(az)-J(z))u(a(x-z))\d z.
\end{aligned}
$$
Observe now that, since $J$ is radially decreasing, $J(az)<J(z)$ for all
$a>1$ and moreover $ |J(az)-J(z)|\leq \|DJ\|_{\infty}(a-1)$, for $|z|<1$.
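Indeed, this last bound follows from the mean value theorem: for $|z|<1$,
$$
|J(az)-J(z)|=\Big|\int_0^1 DJ\big(z+t(a-1)z\big)\cdot(a-1)z\,\d t\Big|
\leq \|DJ\|_{\infty}(a-1)|z|\leq\|DJ\|_{\infty}(a-1).
$$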
Thus,
$$
\begin{aligned}
\mathcal{L}[u](ax)-\mathcal{L}[\overline{u}](x)
&\geq -a^N\|DJ\|_{\infty}(a-1)\int_{|z|<1} u(a(x-z))\d z\\
&\geq -a^N\|DJ\|_{\infty}(a-1)\mu\sup_{|z|<1}\Psi(a(x+z)).
\end{aligned}
$$
Plugging this estimate into~\eqref{eq:first.estimate.A} and using \hyp{H5} and
\hyp{H6} we get
\begin{equation}
\label{eq:second.estimate.A}
\begin{aligned}
\mathcal{A}[\overline{u}](x)&\geq f(ax)-f(x)-
a^N(a-1)\|DJ\|_{\infty}\mu\sup_{|z|<1}\Psi(a(x+z))\\
&\geq af(x)-f(x)-a^N(a-1)\|DJ\|_{\infty}\mu f(x) o_x(1)
\end{aligned}
\end{equation}
where $o_x(1)$ tends to $0$ as $x$ tends to infinity (this $o_x$ is uniform
with respect to $a$).
To conclude the proof, take $R_1$ such that for $|x|>R_1$ we have $\mu\|DJ\|_\infty a_0^N
o_x(1)\leq 1/2$. Then~\eqref{eq:second.estimate.A} becomes
$$
\begin{aligned}
\mathcal{A}[\overline{u}](x)&\geq
f(x)(a-1)\Big(1-\mu\|DJ\|_\infty a_0^No_x(1) \Big)\geq f(x)\frac{a-1}{2}
\geq 0
\end{aligned}
$$
and $\overline{u}$ is a supersolution of~\ep{}.
\end{proof}
Observe that Lemma~\ref{lemma:anu(ax).is.super} holds independently of hypothesis~\hyp{H7}. However, this lemma is not enough to prove the comparison result when $f$ has a \lq\lq slow\rq\rq\ growth, see the proof of Lemma~\ref{lemma:u1lequ2.fast} below. Therefore, we need a different approach in this latter case. Indeed, when \hyp{H7}-slow holds, we use an argument similar to the subquadratic
case of~\cite{BarlesMeireles2017}, proving that $u^q$ is a supersolution of~\ep{}. However, under our general hypotheses here,
we have to be more precise in the control of the constants, which makes the computations more tedious. Notice also that we still assume \hyp{H3}, which implies
that $f$ cannot grow too slowly: typically, faster than $f(x)=|x|^{m_*}$, where
$m_*=m/(m-1)\in(1,2)$.
The following estimate allows us to control the non-local term
$\mathcal{L}[\Psi]$ in the case of \hyp{H7}-slow:
\begin{lemma}\label{lem:slow.est}
Let $f$ verify \hyp{H0}--\hyp{H4} and \hyp{H7}-slow. Then there
exists $C>0$ such that for all $|x|\gg1$,
$$\sup_{B_1(x)}\Psi(y)\leq C\Psi(x).$$
\end{lemma}
\begin{proof}
Fix $\eta\in(0,\eta_0)$. Then, for $|x|>1/\eta$,
the ball $B_1(x)$ is contained in $\big\{x+s|x|:s\in B_\eta(0)\big\}$. Thus, using \hyp{H4} we
have
$$
\sup_{B_1(x)}f(y)\leq\sup_{s\in B_\eta(0)}f(x+s|x|)\leq\csupeta
f\big((1+\eta)x\big).
$$
Now, by \hyp{H7}-slow, there exists $C>0$ such that for $|x|$ big enough,
$f\big((1+\eta)x\big)\leq Cf(x)$, which implies that
$$
\sup_{B_1(x)}f(y)\leq \csupeta C\,f(x).
$$
Finally, using that $\Psi(x)=|x|f^{1/m}(x)$ for $|x|$ large, we get the same
result for $\Psi$ (with another constant).
\end{proof}
\begin{lemma}\label{lem:uq.super}
Let $f$ verify \hyp{H0}--\hyp{H6} and \hyp{H7}-slow. Let $u$ be a supersolution of~\ep{} such that $u\in\Eclass(\mu)$ for
some $\mu>0$ and $\inf_{\mathbb{R}^N} u>-\infty$. Then,
there exist $q_0>1$ and
$R_1>0$ such that for all $q\in(1,q_0)$, the function $u^q$ is a supersolution of~\ep{} for $|x|\geq R_1$.
\end{lemma}
\begin{proof}
We first notice that under our assumptions, we can assume with no
restriction that $u\geq0$ so that $|Du^q|^m=(qu^{q-1})^m|Du|^m$.
Now take $\eta\in(0,\eta_0)$.
By Lemma~\ref{lemma:u.bounded.below.Psi}
there exists $R_1$ such that for
$|x|>R_1$, $u\geq (C_\eta/2) \Psi=C_0\Psi$. Hence, for such $x$,
$$
|Du^q|^m\geq
\big(q C_0^{q-1}\big)^m\Psi^{m(q-1)}|Du|^m
\geq C_0^{m(q-1)}\Psi^{m(q-1)}|Du|^m\,.
$$
For the non-local term we
use Lemma~\ref{lem:slow.est} as above and the fact that $u\in\Eclass(\mu)$:
$$\begin{aligned}
-\mathcal{L}[u^q] &= -\int J(x-y)u^q(y)\dy + u^q(x)\\
&\geq -\mu^{q-1}\int J(x-y)\Psi^{q-1}(y)u(y)\dy + C_0^{q-1}\Psi^{q-1}(x)
u(x)\\
&\geq -(\mu C)^{q-1}\Psi^{q-1}(x)\int J(x-y)u(y)\dy+C_0^{q-1}\Psi^{q-1}(x)
u(x)\\
&\geq -C_1^{q-1}\Psi^{q-1}(x)(J\ast u)(x)+C_0^{q-1}\Psi^{q-1}(x)u(x)
\end{aligned}$$
where $C_0,C_1$ are uniform with respect to $q\in(1,q_0)$.
Using that $u$ is a supersolution of~\ep{} to replace $J\ast u$, we get
\begin{equation}
\label{eq:bounds.h7slow}
\begin{aligned}
\lambda-&\mathcal{L}[u^q]+|Du^q|^m\\ &\geq
\lambda-C_1^{q-1}\Psi^{q-1}(J\ast u)+C_0^{q-1}\Psi^{q-1}u+
C_0^{m(q-1)}\Psi^{m(q-1)}|Du|^m\\
&\geq\lambda+C_1^{q-1}\Psi^{q-1}\Big(f-\lambda-|Du|^m\Big)
+C_0^{m(q-1)} \Psi^{m(q-1)}|Du|^m\\
&\qquad +\big(C_0^{q-1}-C_1^{q-1}\big)\Psi^{q-1}u\\
&\geq \lambda+
|Du|^m\big(C_0^{m(q-1)}\Psi^{m(q-1)}-C_1^{q-1}\Psi^{q-1}\big)\\
&\qquad + C_1^{q-1}\Psi^{q-1}(f-\lambda)
+(C_0^{q-1}-C_1^{q-1})\Psi^{q-1}u.
\end{aligned}
\end{equation}
If we choose $|x|$ big enough
such that $C_0^m\Psi^m>2C_1\Psi$, which is possible since
$\Psi$ is coercive, then
$\big(C_0^{m(q-1)}\Psi^{m(q-1)}-C_1^{q-1}\Psi^{q-1}\big)>\frac12 C_0^{m(q-1)}\Psi^{m(q-1)}$.
Moreover, since $u$ is a supersolution,
$
|Du|^m\geq f(x)-\lambda+J\ast u-u.
$
Since $u\geq0$, we have $J\ast u\geq0$; together with the bound
$u\leq\mu\Psi+u(0)$ of Lemma~\ref{lem:global.est}, this implies
$
|Du|^m\geq f-\lambda- C'\Psi(x)-u(0)
$
for some $C'>0$. Moreover, by \hyp{H3}, $\Psi(x)\ll f(x)$ so that for $|x|$ big
enough, $|Du|^m(x)\geq f(x)/2$. We plug this into the last line in~\eqref{eq:bounds.h7slow} and get
\begin{equation}
\label{eq:bounds.h7slow2}
\begin{aligned}
\lambda-\mathcal{L}[u^q]+|Du^q|^m \geq \lambda &+
\frac12 C_0^{m(q-1)}\Psi^{m(q-1)}f
+ C_1^{q-1}\Psi^{q-1}(f-\lambda)\\
&+(C_0^{q-1}-C_1^{q-1})\Psi^{q-1}u.
\end{aligned}
\end{equation}
We first take $|x|$ big enough so that $f(x)>\lambda$ and $C_1\Psi(x)\geq1$.
Then, by \hyp{H3},
$u\leq\mu\Psi\leq f/2$ for $|x|$ big enough. And using again the fact that $\Psi$
is coercive, for $|x|$ big enough we also have
$$
C_0^m\Psi^m\geq 2\big|C_0^{q-1}-C_1^{q-1}\big|^{1/(q-1)} \Psi.
$$
Thus, replacing in~\eqref{eq:bounds.h7slow2} yields
$$
\lambda-\mathcal{L}[u^q]+|Du^q|^m \geq \lambda+
(f-\lambda)=f
$$
and the result holds.
\end{proof}
We are now ready to prove comparison results.
\begin{lemma}
\label{lemma:u1lequ2.fast}
Let $f$ verify~\hyp{H0}--\hyp{H7} (slow or fast)
and let $u_1, u_2\in\Eclass(\mu)$ for
some $\mu>0$ be respectively a subsolution and a supersolution of~\ep{} with
$\inf_{\mathbb{R}^N}u_2>-\infty$. There exists $R_1>0$ such that if
$u_1<u_2$ on $\partial B_{R_1}$, then $u_1\leq u_2$ in $B_{R_1}^C$.
\end{lemma}
\begin{proof}
Let us begin by assuming \hyp{H7}-slow. By Lemma~\ref{lem:uq.super},
for any $q>1$ (close enough to $1$), $u_2^q$ is a supersolution in
$\{|x|>R_1\}$. Since $u_1,u_2\in\Eclass(\mu)$, by
Lemma~\ref{lemma:u.bounded.below.Psi} we see that $u_2^q\gg u_1$ as
$|x|\to\infty$. So, the maximum of $u_1-u_2^q$ in $B_{R_1}^C$
is attained at some point $x_0$. If $x_0\in\partial B_{R_1}$, then since
$u_1(x_0)<u_2(x_0)$ we deduce the result. On the other hand, if $|x_0|>R_1$ we
can use the equations and the Strong Maximum Principle,
see Theorem~\ref{thm:max.principle}, to reach a contradiction.
The conclusion is that for any $q\in(1,q_0)$, $u_1\leq u_2^q$ and the
comparison follows by sending $q\to1$.
Let us now turn to the case of \hyp{H7}-fast and
define, for $a>1$, $\overline{u_2}(x):=a^Nu_2(ax)$. From
Lemma~\ref{lemma:anu(ax).is.super}, we know that there exist $R_1>0$ and
$a_0>1$, such that $\overline{u_2}$ is a supersolution of~\ep{} in
$B_{R_1}^C$, for any $a\in(1,a_0)$. Moreover, if $a$ is chosen close enough to
$1$, by continuity of both functions we have $\overline{u_2}\geq u_1$ on
$\partial B_{R_1}$.
Observe first that, by Lemma~\ref{lemma:u.bounded.below.Psi}, for any $a>1$
fixed and $\eta\in(0,1)$, there exists a constant $c_{a,\eta}>0$ such that for
$|x|$ large enough,
$$
\overline{u_2}(x)\geq c_{a,\eta}\Psi(a(1-\eta)x)-1/c_{a,\eta},
$$
while on the other hand, since $u_1\in\Eclass(\mu)$ we have
$$
u_1(x)\leq \mu\Psi(x)+u_1(0)\text{ in }\R^N.
$$
Therefore, for $|x|\gg 1$,
$$
\begin{aligned}
(u_1-\overline{u_2})(x)&\leq \mu\Psi(x) -
c_{a,\eta}\Psi(a(1-\eta)x)+1/c_{a,\eta}+u_1(0)\\
&\leq
\mu|x|\Big(f^{1/m}(x)-
\frac{c_{a,\eta}}{\mu}a(1-\eta)f^{1/m}(a(1-\eta)x)\Big)+c(a,\eta,u_1).
\end{aligned}
$$
Now, for $a\in(1, a_0)$ we fix $\eta>0$ small enough such that
$a(1-\eta)>1$.
Hypothesis \hyp{H7}-fast implies that $\liminf_{|x|\to\infty}
\big[ f^{1/m}(a(1-\eta)x)/f^{1/m}(x)\big]=+\infty$. Hence,
for any $c>0$, there exists $c_0>0$ such that provided $|x|$ is big enough
we have
$$
f^{1/m}(a(1-\eta)x)\geq c f^{1/m}(x)+c_0.
$$
From this, choosing $c$ conveniently, it follows that
$(u_1-\overline{u_2})(x)\to -\infty$ as $|x|\to\infty$. This implies that the supremum of
$u_1-\overline{u_2}$ is attained at a point $x_0\in B_{R_1}^C$ or on the
boundary $\partial B_{R_1}$.
In the first case, \textit{i.e.} if $x_0\in B_{R_1}^C$, we get a contradiction
by using the Maximum Principle (see Theorem~\ref{thm:max.principle}). In the
second case, by assumption, $(u_1-\overline{u_2})(x)\leq
(u_1-\overline{u_2})(x_0)\leq0$ for $x\in B_{R_1}^C$ and hence $u_1\leq
\overline{u_2}$.
The proof concludes by sending $a\searrow 1$, which implies $u_1\leq {u_2}$ in
$B_{R_1}^C$.
\end{proof}
The next two theorems show not only the uniqueness of bounded from below
solutions, but also that this unique solution corresponds to the one
associated with the critical ergodic constant $\lambda_*(\mu)$.
\begin{theorem}\label{thm:uniq.class.mu}
Let $f$ verify~\hyp{H0}--\hyp{H7} (slow or fast). Let $(\lambda_1,u_1)$ and
$(\lambda_2,u_2)$ be two solutions of~\ep{\lambda_1} and~\ep{\lambda_2}, such
that $u_1,u_2\in \Eclass(\mu)$ for some $\mu>0$
and $\inf_{\mathbb{R}^N}u_1>-\infty$, $\inf_{\mathbb{R}^N}u_2>-\infty$. Then
$\lambda_1=\lambda_2$ and $u_1=u_2+c$ for some constant $c\in\R$.
\end{theorem}
\begin{proof}
Assume that $\lambda_2\leq \lambda_1$. Then $(\lambda_1,u_1)$ can be seen as a
subsolution of~$\ep{\lambda_2}$. Moreover, by adding a constant $C$ if
necessary, we can ensure that $w=u_1-(u_2+C)$ verifies $\sup_{\partial
B_{R_1}} w= 0$. Therefore, for any $\eps >0$, $u_1-(u_2+C+\eps)<0$ on
$\partial B_{R_1}$ and we can apply
Lemma~\ref{lemma:u1lequ2.fast} which gives that
$u_1-(u_2+C+\eps)\leq0$ in $B_{R_1}^C$. After sending $\eps$ to zero, we get
that $w\leq 0$ in $B_{R_1}^C$.
Now,
{ if we consider the function $g(s)=|s|^m$, then, by convexity,
$g(p+q)\geq g(p)+Dg(p)\cdot q$. Using this inequality with $p=Du_1$ and
$q=Du_2-Du_1$ yields that $w$ is a subsolution of
$(\lambda_1-\lambda_2)-\mathcal{L}[w]+c(x)|Dw|=0$, with $c(x)=m|Du_1|^{m-1}$.
Since $\lambda_1\geq\lambda_2$, $w$ is thus also a subsolution of
$-\mathcal{L}[w]+c(x)|Dw|=0$}. Moreover, since $w\leq0$ outside $B_{R_1}$,
we can use the comparison property in
$B_{R_1}$ (see Theorem~\ref{thm:comparison.principle}) and deduce that $w\leq 0$ in $B_{R_1}$.
Hence, $w$ reaches a maximum at some point in $\partial B_{R_1}$.
Thus, applying the Strong Maximum Principle, see
Theorem~\ref{thm:max.principle}, we infer that $w=\max w$ in $\mathbb{R}^N$.
This implies that $u_1=u_2+C$ and consequently, that $\lambda_1=\lambda_2$.
\end{proof}
\begin{theorem}
Let $f$ verify~\hyp{H0}--\hyp{H7} (slow or fast) and let $(\lambda, u)$ be a solution
of~\ep{} such that $u\in\Eclass(\mu)$ for some $\mu>0$ and
$\inf_{\mathbb{R}^N}u>-\infty$. Then $\lambda=\lambda_*(\mu)$.
\end{theorem}
\begin{proof}
The proof is done exactly as in \cite{BarlesMeireles2017} and uses arguments
similar to those of Theorem~\ref{thm:uniq.class.mu}. Assume that
$(\lambda,u)$ is a solution of \ep{} such that
$u\in\Eclass(\mu)$ for some $\mu>0$ and $\inf u>-\infty$. Let also $v$ be a
solution associated with the critical ergodic constant $\lambda_*(\mu)$. We
already know that $\lambda\leq\lambda_*(\mu)$, so we only need to prove the
converse inequality.
Take $R_1$ as in
Lemma~\ref{lemma:u1lequ2.fast}. We can choose $C\in\R$ such that
$\max_{\partial B_{R_1}}(v-(u+C))\leq0$. Considering the function
$w:=\max(u+C+\eps,v)$ for $\eps>0$,
it turns out that $w$ is bounded from below because so is $u$. It
is also a subsolution of $\ep{\lambda_*(\mu)}$, since the maximum of two
subsolutions is a subsolution and $u+C+\eps$ is a subsolution of this equation
because $\lambda\leq\lambda_*(\mu)$. Moreover, $w\in\Eclass(\mu)$.
Using Lemma~\ref{lemma:u1lequ2.fast}
we deduce that for any $\eps >0$, $w\leq
u+C+\eps$ in $B_{R_1}^C$, so that finally $w\leq u+C$ in $B_{R_1}^C$. The
comparison in $B_{R_1}$ implies also that $w\leq u+C$ in $B_{R_1}$. Thus,
$w-(u+C)$ reaches its maximum on $\partial B_{R_1}$ which implies that $w=u+C$
in $\R^N$. The conclusion is that $v=u+C$, and thus $\lambda=\lambda_*(\mu)$.
\end{proof}
\section{Criticality revisited}
\label{sect:revisited}
In this section, we first extend the results of Section~\ref{sect:critical} on
critical ergodic constants to the more general class
$$
\bar\Eclass:=\bigg\{u:\R^N\to\R: \limsup_{|x|\to\infty}
\frac{u(x)}{\Psi(x)}<\infty\bigg\}.
$$
Notice that $\bar\Eclass\supset\Eclass(\mu)$ for any $\mu>0$, and that
in $\bar\Eclass$, contrary to $\Eclass(\mu)$, we do not have a uniform control of the
behaviour of solutions (indeed for any $c>0$, $c\Psi\in\bar\Eclass$).
Let us define the critical ergodic constant in $\bar\Eclass$ as usual:
$$\bar\lambda:=\sup\big\{\lambda\in\R: \text{there exists } u\in\bar\Eclass,
\text{ solution of }\ep{\lambda}\big\}.$$
We will prove here in particular that $\bar\lambda$ is finite.
\begin{lemma}\label{lem:bound.lambda.star}
For any $\mu>\mu_0$, $\lambda_*(\mu)=\lambda_*(\mu_0)$.
\end{lemma}
\begin{proof}
This is a consequence of the uniqueness and characterization of bounded from
below solutions in $\Eclass(\mu)$. We know that (up to a constant)
there exists a unique
$u\in\Eclass(\mu_0)$ such that $\inf u>-\infty$ and $u$ is a solution of
$\ep{\lambda_*(\mu_0)}$. Similarly, there is a unique $v\in\Eclass(\mu)$
such that $\inf v>-\infty$ and $v$ is a solution of $\ep{\lambda_*(\mu)}$.
Now, since $\Eclass(\mu_0)\subset\Eclass(\mu)$, we can apply
Theorem~\ref{thm:uniq.class.mu} to conclude that $u=v$ (up to a constant) and
$\lambda_*(\mu)=\lambda_*(\mu_0)$.
\end{proof}
\begin{corollary}
Assume that $f$ satisfies \hyp{H0}--\hyp{H7}.
Then $\min(f)\leq\bar\lambda<\infty$.
\end{corollary}
\begin{proof}
Take any pair $(\lambda,u)$ solution of $\ep{}$ such that $u\in\bar\Eclass$.
Since $u\in\Eclass(\mu)$ for some $\mu$, we deduce that, by definition of
$\lambda_*(\mu)$, $\lambda\leq\lambda_*(\mu)$. But using
Lemma~\ref{lem:bound.lambda.star}, we get that necessarily
$\lambda\leq\lambda_*(\mu_0)$. Hence,
taking the supremum, $\bar\lambda\leq\lambda_*(\mu_0)<\infty$.
\end{proof}
Now, following \cite{BarlesMeireles2017}, let us give a Lipschitz-type estimate of
the critical ergodic constant $\bar\lambda$. We denote by $\bar\lambda(f)$ the
constant $\bar\lambda$ that corresponds to the equation with right-hand side
$f$. The following result extends \cite[Proposition 4.4]{BarlesMeireles2017} to the more general
cases covered here.
\begin{lemma}
Let $f_1$, $f_2$ verify~\hyp{H0}--\hyp{H2}. Assume that there exist a
constant $c>0$ and a function $g>0$ such that $f_1(x),f_2(x)\geq cg(x)$ and
$$ m:=\sup_{x\in\mathbb{R}^N}\frac{|f_1(x)-f_2(x)|}{g(x)}<\infty.$$
Then
\begin{equation}\label{est.lip.lambda}
|\bar\lambda(f_2)-\bar\lambda(f_1)|\leq
\frac{m}{c+m}\max\{\bar\lambda(f_1),\bar\lambda(f_2)\}
\end{equation}
\end{lemma}
\begin{proof}
The proof is exactly the same as in~\cite{BarlesMeireles2017} and we omit it.
We only want to point out that, once we know that there is a solution
to~\ep{\bar\lambda}, due to~\hyp{H0}--\hyp{H2}, the key points of the proof rely on the boundedness of $m$ and the lower
bound $cg$ for $f_1$ and $f_2$.
\end{proof}
A typical application is to power-type functions $f$ as in
\cite{BarlesMeireles2017}, but we also have a similar result for faster growths,
for instance in the limiting case:
\begin{corollary}
Assume that $f_i(x)\leq c_i\exp(m^{|x|})$, $i=1,2$, and define the
function~$g$ as $g(x)=c_0\exp(m^{|x|})$ where $c_0:=\min(c_1,c_2)$. Then
\eqref{est.lip.lambda} holds.
\end{corollary}
We end this section with a remark, rather than a result, concerning the scaling
properties of $\bar\lambda$. Let $f(x)=|x|^\alpha$, with $\alpha>m_*$. Then, for
any $c>1$, it seems reasonable to expect that
$$
\bar\lambda(cf)=c^{m_*N/(\alpha-m_*N)}\bar\lambda(f).
$$
The idea of the proof follows again~\cite{BarlesMeireles2017}. Our main
difficulty to complete the proof comes from the non-local term. Indeed, let
$u_1$ be a solution to~\ep{\bar\lambda}. We would like to construct a solution
(or subsolution) to~\ep{} with right-hand side $\widetilde{f}=cf$. To this aim
consider $u_2(x)=a^{-\beta}u_1(ax)$, with $a=c^{1/(\alpha-m_*N)}<1$ and
$\beta=(N+m)/(m-1)$. The fact is that we are not able to prove that
$-\mathcal{L}[u_2](x)\leq -\mathcal{L}[u_1](ax)$ for all $x\in\mathbb{R}^N$, and
we only have, following the proof of Lemma~\ref{lemma:anu(ax).is.super}, that
$$
-\mathcal{L}[a^{N+\beta} u_2](x)\leq -\mathcal{L}[u_1](ax)+o(|x|^\alpha).
$$
Hence
$$
\begin{aligned}
-\mathcal{L}[u_2](x)+|Du_2(ax)|^m&\leq
a^{-m_*N}(-\mathcal{L}[u_1](ax)+|Du_1(ax)|^m)+o(|x|^\alpha)\\&=a^{-m_*N}(f(ax)-
\bar\lambda(f(ax)))+o(|x|^\alpha),
\end{aligned}
$$
which implies that, $u_2$ is a subsolution of
$$
a^{-m_*N}\bar\lambda(a|x|^\alpha)-\mathcal{L}[u_2](x)+|Du_2(ax)|^m=
a^{-m_*N+\alpha}|x|^\alpha
$$
only for $x$ big enough. If we could prove that $u_2$ is a subsolution for all $x$,
we would conclude, due to the definition of $\bar\lambda$ as a supremum, that
$$
a^{-m_*N}\bar\lambda(a|x|^\alpha)\leq \bar\lambda(a^{-m_*N+\alpha}|x|^\alpha).
$$
In a similar way as in~\cite{BarlesMeireles2017} we could get the reverse
inequality. So, while it is not clear whether there is really a scaling property
for $\bar\lambda$ with power functions $f$, it seems at least that an
approximate scaling property should hold, with a different exponent than in
the local case.
\section*{Appendix}
\label{sect:appendix}
\setcounter{section}{0}
\renewcommand{\thesection}{\Alph{section}}
\section{Properties of the non-local operator $\mathcal{L}$}
Throughout the paper we use several times basic properties and technical estimates of $\mathcal{L}$ and $\mathcal{L}_R^\psi$. We summarize them here for the reader's convenience.
Let $R>0$ and $\psi\in\C^0(\mathbb{R}^N)$. Recall that we have defined the non-local operator $\mathcal{L}$ and the Dirichlet non-local operator, respectively, as
$$
\begin{aligned}
&\mathcal{L}[v](x):=\int_{\mathbb{R}^N} J(x-y)v(y)\d y-v(x),\\
& \mathcal{L}_R^\psi[v](x):=\int_{B_R} J(x-y)v(y)\d y+\int_{B_{R+1}\setminus B_R} J(x-y)\psi(y)\d y-v(x),
\end{aligned}
$$
where $J$ is a symmetric kernel, compactly supported in $B_1$.
\begin{lemma}
Let $c$ be a positive constant and $u$, $v$ two positive functions. Then
\begin{enumerate}
\itemsep=2pt
\item $\mathcal{L}[c]=0$ and $\mathcal{L}_R^0[c]\leq 0$ if $c\geq 0$.
\item $\mathcal{L}_R^\psi[u+c]=\mathcal{L}_R^\psi[u]+\mathcal{L}^0_R[c]$.
\item $\mathcal{L}_R^\psi[\psi]=\mathcal{L}[\psi]$.
\item $\mathcal{L}_R^\psi [u]\leq \mathcal{L}[u]$ if $\psi\leq u$.
\item $\mathcal{L}[u](x_0)\leq \mathcal{L}[v](x_0)$ and $\mathcal{L}_R^\psi[u](x_0)\leq \mathcal{L}_R^\psi[v](x_0)$ if $u(x_0)=v(x_0)$ and $u\leq v$.
\item $\mathcal{L}[u](x)\leq \mathcal{L}[u](y)+o_{\delta}(1)$ and $\mathcal{L}_R^\psi[u](x)\leq \mathcal{L}_R^\psi[u](y)+o_{\delta}(1)$ if $|x-y|^2\leq \delta$.
\end{enumerate}
\end{lemma}
We omit the proof, since it follows directly from the definition of the non-local operators.
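For instance, for property 4, splitting the integral defining $\mathcal{L}$ and using that $\psi\leq u$ and $u>0$, we get
$$
\mathcal{L}_R^\psi[u](x)-\mathcal{L}[u](x)=\int_{B_{R+1}\setminus B_R} J(x-y)\big(\psi(y)-u(y)\big)\d y-\int_{B_{R+1}^C} J(x-y)u(y)\d y\leq 0.
$$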
\begin{lemma}
Let $x_0$ be a point where $u$ attains a positive maximum, respectively a negative minimum. Then $\mathcal{L}[u](x_0)\leq 0$ and $\mathcal{L}_R^0[u](x_0)\leq 0$, respectively $\geq 0$.
\end{lemma}
\begin{proof}
At the point $x_0$ where $u$ attains a positive maximum we have
$$
\mathcal{L}[u](x_0)=\int_{\mathbb{R}^N} J(x_0-y)u(y)\d y-u(x_0)\leq u(x_0)\Big(\int_{\mathbb{R}^N} J(x_0-y)\d y-1\Big)=0.
$$
We do a similar computation for $\mathcal{L}_R^0[u]$.
\end{proof}
\begin{lemma}
If $g\in\C^1(\mathbb{R}^N)$ then $|\mathcal{L}[g](x)|\leq \sup\limits_{z\in
B_1(x)}|Dg(z)|$.
\end{lemma}
\begin{proof}
We use the fact that, for all $y\in B_1(x)$, $|g(x)-g(y)|\leq
\sup\limits_{z\in B_1(x)}|Dg(z)|$. Then, by direct computation, we obtain $$
\begin{aligned}
|\mathcal{L}[g](x)|\leq &\int_{B_1(x)} J(x-y)|g(y)-g(x)|\d y\leq
\int_{B_1(x)} J(x-y)\sup\limits_{z\in B_1(x)}|Dg(z)|\d y\\
=&\sup\limits_{z\in B_1(x)}|Dg(z)|,
\end{aligned}
$$
since $J$ is
compactly supported on $B_1$.
\end{proof}
\begin{lemma}
\label{lem:convex}
Let $\psi$ be convex. Then $-\mathcal{L}[\psi]\leq 0$.
\end{lemma}
\begin{proof}
The result follows from Jensen's inequality and the symmetry of $J$, which gives $\int_{\mathbb{R}^N} z\d\nu(z)=0$:
$$
-\mathcal{L}[\psi](x)=\psi(x)-\int_{\mathbb{R}^N} \psi(x+z)\d \nu(z) \leq
\psi(x)-\psi\Big(\int_{\mathbb{R}^N} (x+z)\d \nu(z)\Big)= 0,
$$
where $\nu$ denotes the probability measure associated to $J$, $\d
\nu(z)=J(z)\d z$.
\end{proof}
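Lemma~\ref{lem:convex} is easy to check numerically in one dimension. The following sketch (ours, not part of the paper; the uniform kernel and the quadratic test function are illustrative choices) discretizes $-\mathcal{L}[\psi]$ for $J=\tfrac12\mathbf{1}_{[-1,1]}$ and $\psi(x)=x^2$, for which $-\mathcal{L}[\psi]\equiv-1/3$:

```python
import numpy as np

# Midpoint-rule discretization of -L[psi](x) = psi(x) - int J(z) psi(x+z) dz
# for the symmetric kernel J = (1/2) * indicator([-1, 1]).
n = 20000
dz = 2.0 / n
z = -1.0 + (np.arange(n) + 0.5) * dz  # midpoints of [-1, 1]
J = np.full(n, 0.5)                   # symmetric, unit mass, supported in B_1

def minus_L(psi, x):
    return psi(x) - np.sum(J * psi(x + z)) * dz

psi = lambda x: x ** 2
for x in (-3.0, 0.0, 1.7):
    val = minus_L(psi, x)
    assert abs(val + 1.0 / 3.0) < 1e-6   # exact value is -1/3 for every x
    assert val <= 0.0                    # consistent with Lemma lem:convex
```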
\begin{lemma}
\label{lem:est.L.psi}
Let $\psi$ be nondecreasing and,
for $\epsilon\in(0,1)$, let $c_\epsilon=\nu(B_1\setminus B_{1-\epsilon})$,
where $\nu$ is the probability measure associated to $J$.
Then \begin{equation*}
\label{eq:upperbound.minusL}
-\psi(|x|+1)+\psi(|x|)\leq -\mathcal{L}[\psi](x)\leq -
c_\epsilon\psi(|x|+1-\epsilon)+\psi(|x|).\end{equation*}
\end{lemma}
\begin{proof}
Since $\psi$ is nondecreasing we have
$$
\begin{aligned}
-\mathcal{L}[\psi](x)&=-\int_{B_1} J(y)(\psi(|x-y|)-\psi(|x|))\d y\geq -
\int_{B_1} J(y)(\psi(|x|+1)-\psi(|x|)) \d y\\
&=-\psi(|x|+1)+\psi(|x|).
\end{aligned}$$
The other inequality is obtained as follows:
$$ \begin{aligned}
-\mathcal{L}[\psi](x)&=-\int_{B_1} J(y)(\psi(|x-y|)-\psi(|x|))\d y\leq
-\int_{B_1\setminus B_{1-\epsilon}} J(y)\psi(|x-y|) \d y+ \psi(|x|)\\
&\leq -c_\epsilon\psi(|x|+1-\epsilon)+\psi(|x|).
\end{aligned}
$$
\end{proof}
\section{Comparison Results}
We prove here two comparison results that we use in several places across the paper. To this aim let us consider the general equation
\begin{equation}
\label{eq:max.principle.elin}
-\mathcal{L}[w]+c(x)|Dw|+\alpha w=0,\quad \alpha\geq 0.
\end{equation}
Observe that this equation appears in different contexts. For instance, it turns out to be satisfied (with $\alpha=0$) by $w=v_1-v_2$ if $v_1, v_2$ are a subsolution and a supersolution,
respectively, of~\ep{}.
\begin{theorem}
\label{thm:comparison.principle}
Let $v\in\Wloc(\mathbb{R}^N)$ be a subsolution of~{\rm\eqref{eq:max.principle.elin}}, such that for some $R>1$, $v\leq 0$ in
$B_{R+1}\setminus B_R$. Then $v\leq 0$ in $B_R$.
\end{theorem}
\begin{proof}
Let $x_0\in B_R$ be a point where $v$ reaches a positive maximum. Then the constant function $\varphi(x):=v(x_0)$ is an admissible test function for $v$ at $x_0$; i.e., $v-\varphi$ reaches a maximum at $x_0$ and $v(x_0)=\varphi(x_0)>0$. Hence, since $|D\varphi|=0$ and $v\leq 0$ in $B_{R+1}\setminus B_R$,
$$
\begin{aligned}
0&\geq -\mathcal{L}[v](x_0)+c(x_0)|D\varphi(x_0)|+\alpha v(x_0)\\&
= -\int_{B_R}J(x_0-y)v(y)\d y-\int_{B_{R+1}\setminus B_R}J(x_0-y)v(y)\d y+v(x_0)+\alpha v(x_0)
\\ &\geq v(x_0)\Big(1-\int_{B_R}J(x_0-y)\d y +\alpha \Big)>0,
\end{aligned}
$$
which is a contradiction. Hence $v(x_0)\leq 0$ and the result follows.
\end{proof}
\begin{remark}
The result holds true even if we replace the gradient term in~\eqref{eq:max.principle.elin} by $c(x)|Dw|^{m-1}$, with $m>1$. Moreover, it also holds for the approximate problems that contain a $-\varepsilon\Delta$ term. Indeed, the argument is straightforward, since at a maximum point $-\varepsilon\Delta v(x_0)\geq 0$.
\end{remark}
\begin{theorem}[{\bf Strong Maximum Principle}]
\label{thm:max.principle}
Let $v\in\Wloc(\mathbb{R}^N)$ be a subsolution of~{\rm\eqref{eq:max.principle.elin}}, which reaches a maximum at $x_0\in\mathbb{R}^N$. Then $v\equiv v(x_0)$ in $\mathbb{R}^N$.
\end{theorem}
\begin{proof}
Let $x_0\in\mathbb{R}^N$ be a point where $v$ reaches a maximum. As in the previous proof, the constant function $\varphi(x):=v(x_0)$ is an admissible test function for $v$ at $x_0$. Hence, since $v(y)\leq v(x_0)$ for all $y\in B_1(x_0)$, we have
$$
0\geq -\mathcal{L}[v](x_0)+c(x_0)|D\varphi(x_0)|+\alpha v(x_0)
\geq -\int_{B_1(x_0)}J(x_0-y)(v(y)-v(x_0))\d y\geq 0.
$$
This implies that $v(y)=v(x_0)$ for all $y\in B_1(x_0)$.
We can repeat now the argument using as center any $y\in B_1(x_0)$ and get that $v(y)=v(x_0)$ for $y\in\mathbb{R}^N$.
\end{proof}
\section{Existence result for an auxiliary problem}
We devote this last section of the Appendix to prove the existence of solutions of
equation~\eqref{eq.pbm.laplace.eps.ball}. To this aim we fix $\gamma\in(0,1)$,
$\varepsilon>0$, $R>1$ and we consider the following problem
\begin{equation}
\label{eq.pbm.laplace.eps.ball.again.general}
\left\{
\begin{array}{ll}
-\varepsilon\Delta\phi-\mathcal{L}_R^{\psi}[\phi]=f\,,\quad &
x\in B_R,\\
\phi=g& x\in \partial B_R\,,
\end{array}
\right.
\end{equation}
where $\psi\in\C^0(B_{R+1}\setminus B_R)$, $g\in\C^{0,\gamma}(\partial B_R)$ and
$f\in\C^{0,\gamma}(B_R)$.
\begin{lemma}
\label{lemma.existence.super.classical}
There exists a unique solution $\phi\in C^{2,\gamma}(B_R)\cap\C^0(\overline{B_R})$
of~\eqref{eq.pbm.laplace.eps.ball.again.general}.
\end{lemma}
Uniqueness comes from the comparison principle, see Theorem~\ref{thm:comparison.principle}. In fact, we use a similar
argument in the proof of Lemma~\ref{lemma.supersolution.continuity}, so we
skip the details here.
In order to prove the existence part of the result, we consider the unique function
$\varphi\in\C^{2,\gamma}(B_R)\cap\C^0(\overline{B_R})$ such that
$-\Delta \varphi=0$ in $B_R$ with boundary data $\varphi=g$ on $\partial
B_R$ (see for instance \cite[Theorem 6.13]{GT}).
Then we set $\rho=\phi-\varphi$, which is a solution of
\begin{equation}
\label{eq.pbm.laplace.eps.ball.again}
\left\{
\begin{array}{ll}
-\varepsilon\Delta\rho-\mathcal{L}_R^{0}[\rho]=F,\quad& x\in B_R,\\
\rho=0& x\in \partial B_R,
\end{array}
\right.
\end{equation}
where
$F:=f+\varepsilon\Delta\varphi+\mathcal{L}_R^\psi[\varphi]\in\C^{0,\gamma}(B_R)$.
It is clear that, if $\rho\in C^{2,\gamma}(\overline{B_R})$ is a
solution of~\eqref{eq.pbm.laplace.eps.ball.again}, then $\phi\in
C^{2,\gamma}({B_R})\cap\C^0(\overline{B_R})$ and it verifies
problem~\eqref{eq.pbm.laplace.eps.ball.again.general}. Notice that the
boundary data for $\rho$ is zero, so that it belongs to
$C^{2,\gamma}(\partial B_R)$.
Hence we reduce the proof of Lemma~\ref{lemma.existence.super.classical} to
proving existence for~\eqref{eq.pbm.laplace.eps.ball.again}. This is based on
the Continuity Method for elliptic operators (see~\cite[Theorem 5.2]{GT}) and
two a priori bounds that we show next.
\begin{lemma}
\label{lemma.GT37.adapted}
Let $\rho\in \C^{2}(\overline {B_R})$ be a solution
to~\eqref{eq.pbm.laplace.eps.ball.again} in $B_R$. Then, there exists a constant
$C=C(\varepsilon, R)$ such that
\begin{equation} \label{eq:GT.bound1} \sup_{B_R}
|\rho|\leq \sup_{\partial B_R}|\rho|+C\sup_{B_R}|F|
\end{equation}
\end{lemma}
\begin{proof}
Though the proof is essentially the same as that of~\cite[Lemma 3.7]{GT}, it has to be
adapted carefully in some places in order to take into account the non-local
term. To this aim, let $\alpha>0$, $\mathcal{L}_0=-\varepsilon\Delta
-\mathcal{L}_R^0$ and define, denoting by $x_1\in[-R,R]$ the first coordinate of $x$,
$$\widetilde\rho:=\sup_{\partial B_R}\rho^{+}+(\e^{\alpha R}-\e^{\alpha
x_1})\sup_{B_R}(|F^+|/\varepsilon).$$
We first observe that, if $\alpha=\alpha(\varepsilon, R)$ is chosen big
enough, we get
$$
\mathcal{L}_0[\e^{\alpha x_1}]=-\alpha^2\varepsilon\e^{\alpha x_1}-
\mathcal{L}_R^0[\e^{\alpha x_1}]\leq-\alpha^2\varepsilon\e^{\alpha x_1}+
\e^{\alpha x_1}\leq -\varepsilon.
$$
Moreover, since for any constant $k\geq 0$, $\mathcal{L}_R^0[k]\leq 0$, we have,
$$
\mathcal{L}_0[\widetilde\rho]=\mathcal{L}_0[\sup_{\partial B_R}\rho^{+}+
\e^{\alpha R}\sup_{B_R}(|F^+|/\varepsilon)]-\mathcal{L}_0[\e^{\alpha x_1}
\sup_{B_R}(F^+/\varepsilon)]\geq \sup_{B_R}|F^+|.
$$
Now, since $\mathcal{L}_0[\widetilde\rho-\rho]\geq \sup_{B_R}|F^+|-F\geq 0$ in
$B_R$ and $\widetilde\rho-\rho\geq 0$ on $\partial B_R$, by the Maximum
Principle, see~\cite[Theorem 6]{MelianRossi}, we get $\widetilde\rho-\rho\geq
0$ in $B_R$, which yields
$$
\sup_{B_R} \rho\leq \sup_{\partial B_R} \rho^++C\sup_{B_R}F^+
$$
Replacing $\rho$ by $-\rho$ we get~\eqref{eq:GT.bound1}.
\end{proof}
\begin{lemma}\label{lemma.GT66.adapted}
Let $\rho\in \C^{2,\alpha}(\overline{B_R})$ be a solution
to~\eqref{eq.pbm.laplace.eps.ball.again}. Then, there exists a constant
$C=C(N,\alpha,\varepsilon, R)>0$ such that
$$
\|\rho\|_{C^{2,\alpha}(\overline{B_R})}\leq
C\|F\|_{C^{0,\alpha}(\overline{B_R})}
$$
\end{lemma}
\begin{proof}
Writing the equation as $\rho-\varepsilon\Delta
\rho=\int_{B_R}J(x-y)\rho(y)\d y+F$ we
get, using \cite[Theorem 6.6]{GT}, that there exists a constant $C_1>0$ such
that
$$ \|\rho\|_{C^{2,\alpha}(\overline{B_R})}\leq
C_1\bigg(\|\rho\|_{C^{0}(\overline{B_R})}+\|F\|_{C^{0,\alpha}(\overline{B_R})}+
\Big\|\int_{B_R}J(x-y)\rho(y)\d y\Big\|_{C^{0,\alpha}(\overline{B_R})}\bigg).
$$
On the other hand, $\|\int_{B_R}J(x-y)\rho(y)\d y\|_{C^{0,\alpha}(\overline{B_R})}\leq
C_2\|\rho\|_{C^{0}(\overline{B_R})}$, for some positive constant $C_2$.
Combining these bounds with the one proved in
Lemma~\ref{lemma.GT37.adapted}, we finally get
$$
\|\rho\|_{C^{2,\alpha}(\overline{B_R})}\leq C\|F\|_{C^{0,\alpha}(\overline{B_R})}.
$$
\end{proof}
\begin{proof}[Proof of Lemma~{\rm\ref{lemma.existence.super.classical}}]
It is a direct adaptation of~\cite[Theorem 6.8]{GT}.
\end{proof}
\noindent{\sc Acknowledgments. ---} {Work partially supported by Spanish project MTM2014-57031-P.}
\section{Introduction}
The idea of Floquet engineering is to subject a quantum system to time-periodic
driving in such a way that it acquires interesting novel properties that are
difficult to achieve by other means. This concept has been applied very
successfully to systems of atomic quantum gases in optical lattices
\cite{Eckardt17}. The fact that these systems are extremely clean, well
isolated from their environment, and highly tunable also in a time-dependent
fashion makes them an ideal platform for studying coherent many-body dynamics.
Examples of Floquet engineering in optical lattices include, among others,
dynamic localization \cite{DunlapKenkre86, LignierEtAl07},
photon-assisted tunneling \cite{EckardtHolthaus07, SiasEtAl08, IvanovEtAl08,
AlbertiEtAl09, HallerEtAl10}, the control of an interaction-induced quantum phase
transition \cite{EckardtEtAl05b, ZenesiniEtAl09}, the creation of kinetic
frustration \cite{EckardtEtAl10, StruckEtAl11}, artificial magnetic fields
\cite{Kolovsky11, BermudezEtAl11, AidelsburgerEtAl11, HaukeEtAl12b, StruckEtAl12,
StruckEtAl13, AidelsburgerEtAl13, MiyakeEtAl13, AtalaEtAl14, KennedyEtAl15},
topological band structures \cite{OkaAoki09, JotzuEtAl14, AidelsburgerEtAl15},
and number dependent gauge potentials \cite{GoergEtAl19}.
A simple explanation of the basic concept underlying Floquet engineering is often
given by considering the one-cycle time-evolution operator
\begin{equation}
\hat{U}(T,0) = \mathcal{T}\exp\bigg[\frac{1}{i\hbar}\int_0^T\!\mathrm{d} t\, \hat{H}(t)\bigg]
\end{equation}
for a quantum system described by a time-periodic Hamiltonian with angular driving frequency $\omega=2\pi/T$,
\begin{equation}
\hat{H}(t)=\hat{H}(t+T),
\end{equation}
where $\mathcal{T}$ denotes time ordering. The fact that this operator is unitary
allows one, at least formally, to express it in terms of a hermitian
operator $\hat{H}_F$, called the Floquet Hamiltonian,
\begin{equation}
\hat{U}(T,0)\equiv \exp\Big(\frac{1}{i\hbar}T\hat{H}_F\Big).
\end{equation}
This effective time-independent Hamiltonian $\hat{H}_F$ governs the time evolution of
the system when it is monitored stroboscopically in integer steps of the driving
period $T$. Thus, at first glance, one might expect that the driven system
behaves as some effective non-driven system described by the Hamiltonian $\hat{H}_F$.
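As a concrete toy illustration of these definitions (ours, not part of the paper's analysis; the two-level Hamiltonian and all parameter values are arbitrary choices, $\hbar=1$), one can compute $\hat{U}(T,0)$ numerically and extract $\hat{H}_F$ from its matrix logarithm:

```python
import numpy as np
from scipy.linalg import expm, logm

# Toy model: H(t) = (delta/2) sigma_z + (K/2) cos(omega t) sigma_x, hbar = 1.
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
delta, K, omega = 1.0, 2.0, 10.0
T = 2.0 * np.pi / omega

def H(t):
    return 0.5 * delta * sz + 0.5 * K * np.cos(omega * t) * sx

# Time-ordered exponential U(T,0) via small Trotter steps.
n = 4000
dt = T / n
U = np.eye(2, dtype=complex)
for k in range(n):
    U = expm(-1j * H((k + 0.5) * dt) * dt) @ U

# Floquet Hamiltonian, defined only up to the branch of the logarithm:
HF = 1j * logm(U) / T
assert np.allclose(HF, HF.conj().T, atol=1e-8)        # H_F is hermitian
assert np.allclose(expm(-1j * T * HF), U, atol=1e-8)  # reproduces U(T,0)
# Stroboscopic evolution over two periods:
assert np.allclose(U @ U, expm(-2j * T * HF), atol=1e-8)
```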
However, while the above reasoning applies to small quantum systems, the situation
in many-body systems is more complex. Here the eigenstates of $\hat{H}_F$ will
typically be superpositions of states having very different energies. This is a
consequence of the lack of energy conservation in driven systems, which is
reflected in the possibility of resonant coupling, and the fact that in a large
system resonances will be ubiquitous. The lack of energy conservation suggests
that in the thermodynamic limit the system approaches an infinite-temperature-like
state, so that in the sense of eigenstate thermalization the eigenstates of
$\hat{H}_F$ represent an infinite-temperature ensemble \cite{LazaridesEtAl14b,
DAlessioRigol14}. From this point of view, the Floquet Hamiltonian $\hat{H}_F$
does not seem to be a suitable object for engineering interesting system properties.
The fact that Floquet engineering can, nevertheless, be a useful concept also
in many-body quantum systems is related to the observation that in some
parameter regimes the time scale $\tau$ associated with unwanted resonant
processes, where the system absorbs (or emits) energy quanta $\hbar\omega$, can
become rather long. Since typically energy absorption dominates, in the
following, we will denote such detrimental energy-non-conserving processes as
``heating'' and $\tau$ as the corresponding ``heating''
time.\footnote{This (rather common) terminology shall not imply that the system is described by
thermodynamic variables such as temperature.}
On times shorter than $\tau$, we might be able to engineer and study interesting
driving-induced physics described by an approximate time-independent effective
Hamiltonian $\hat{H}_\text{eff}$, corresponding to a non-driven system with modified
properties. The standard strategy employed for deriving such an effective
Hamiltonian involves two steps \cite{EckardtEtAl05b}:
The first step is given by a \emph{low-frequency approximation}, where the
assumption is made that the system remains in a low-energy subspace that is
separated from the excited states by an energy gap much larger than the
driving frequency. In non-driven systems, such low-energy approximations are
common. For example, in a lattice system higher-lying orbital states spanning
Bloch bands above a band gap are neglected, when deriving Hubbard-type
tight-binding models, or doublon-holon excitations lying above a charge gap of
a Mott insulator are eliminated adiabatically, in order to derive spin
Hamiltonians. For a sufficiently large energy gap, in non-driven systems one
can expect that the admixture of higher-lying states is captured by a
converging perturbation theory and will always remain small. In contrast, in a
periodically driven many-body system, the situation is generically different.
Here resonant excitations to the neglected excited states can occur, where the
drive provides one or several energy quanta $\hbar\omega$. Such processes
contribute to the aforementioned detrimental heating. However, for driving
frequencies (and amplitudes) much lower than the gap, so that the system would
need to absorb many energy quanta $\hbar\omega$ (``photons'') at once, they
can be exponentially slow with respect to the photon number. Thus, by
estimating the associated heating rate \cite{ChoudhuryMueller14, WeinbergEtAl15,
ChoudhuryMueller15, GenskeRosch15, BilitewskiCooper15, BilitewskiCooper15b,
StraeterEckardt16, ReitterEtAl17, RajapopalEtAl19, SinghEtAl19}, we might be
able to argue that we can still neglect higher-lying states on the time scale of
an experiment.
The second step is given by a \emph{high-frequency approximation}. Let us
assume that according to the first step we are able to neglect, say,
higher-lying Bloch bands, so that we can describe our system by a Hubbard Hamiltonian
acting in the lowest Bloch band. Now, the periodic drive can still resonantly
create excitations within this low-energy subspace. This form of energy
absorption (heating) can be reduced considerably by considering driving
frequencies that are sufficiently large, so that absorbing an energy quantum of
$\hbar\omega$ corresponds to an exponentially slow high-order process in which
several elementary excitations are created at once \cite{EckardtHolthaus08b,
EckardtAnisimovas15}. If this is the case, we can employ a rotating-wave
approximation and describe the system by the time-averaged low-energy Hamiltonian
(or compute also further corrections using a high-frequency expansion
\cite{CasasEtAl00, VerdenyEtAl13, GoldmanDalibard14, EckardtAnisimovas15,
BukovEtAl15}). In this way, we arrive at an approximate effective Hamiltonian
$\hat{H}_\text{eff}$ that describes the dynamics of our system on time scales
before driving-induced heating sets in. The leading order of this expansion is
simply the time-averaged Hamiltonian, i.e.\ the rotating-wave approximation.
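For the sinusoidally shaken lattice considered in this paper, the leading-order (time-averaged) description is well known to renormalize the tunneling matrix element by a Bessel function, $J_\text{eff}=J\,\mathcal{J}_0\big(K/(\hbar\omega)\big)$, which is the mechanism behind the dynamic localization cited in the introduction. A quick numerical sketch (ours; the function name is an arbitrary choice):

```python
import numpy as np
from scipy.special import j0, jn_zeros

# Leading-order (time-averaged) tunneling renormalization for a
# sinusoidally shaken lattice: J_eff = J * J0(K / (hbar * omega)).
def j_eff(J, K_over_hw):
    return J * j0(K_over_hw)

# Without driving, the bare tunneling is recovered.
assert abs(j_eff(1.0, 0.0) - 1.0) < 1e-12
# Tunneling is suppressed completely at the zeros of J0; the first one,
# K/(hbar*omega) ~ 2.405, marks dynamic localization.
first_zero = jn_zeros(0, 1)[0]
assert abs(first_zero - 2.4048) < 1e-3
assert abs(j_eff(1.0, first_zero)) < 1e-10
```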
The two steps outlined above require that there is a window of suitable driving
frequencies that are both low compared to the relevant energy gap separating the
low-energy subspace from higher-lying states and large compared to the energy
scales governing this low-energy subspace. In this article, we investigate
the question of whether such an optimal frequency window exists, using the
experimentally relevant example of repulsively interacting bosonic atoms in a
periodically shaken one-dimensional optical lattice. For this purpose, we
compare the evolution generated by an approximate effective Hamiltonian
$\hat{H}_\text{eff}$ acting in the lowest Bloch band to the evolution obtained from
integrating the dynamics of the fully time-dependent model that, apart from the
lowest band, also contains the first excited band.
The remaining part of this paper is organized as follows: After introducing
the system and the model in Sec.~\ref{sec:system}, in Sec.~\ref{sec:Heff} we
recapitulate the derivation of the approximate effective Hamiltonian
$\hat{H}_\text{eff}$ from the low and the high-frequency approximation. In the
following two sections, we then compare the evolution generated by
$\hat{H}_\text{eff}$ to numerical simulations: in Sec.~\ref{sec:intra} we
investigate the breakdown of the high-frequency approximation due to intraband
heating, and in Sec.~\ref{sec:intrainter} we study the combined effect of
intraband and interband heating beyond the high- and low-frequency
approximations. Finally, we close with conclusions in Sec.~\ref{sec:conclusions}.
\section{System and Model\label{sec:system}}
We consider a system of ultracold bosonic atoms in a one-dimensional optical
lattice potential
\begin{equation}\label{fig:lattice}
V({\bm r}) = V_0\sin^2(k_L x)+V_\perp(y,z).
\end{equation}
Here the laser wave number $k_L$ defines the recoil energy
$E_R=\hbar^2k_L^2/(2m)$ with atom mass $m$, corresponding to the kinetic energy
required to localize a particle on the length of a lattice constant $a=\pi/k_L$.
Typical recoil energies take values of a few kHz. The deep confining potential
$V_\perp(y,z)\simeq\frac{m}{2}\omega_\perp^2(y^2+z^2)$ shall reduce the
dynamics to one spatial dimension via a large transverse excitation gap
$\hbar\omega_\perp$ that freezes the particles in the lowest transverse
single-particle state. More precisely, $\omega_\perp$ will be chosen large
enough, so that the time scale for driving induced transverse heating can be
expected to be much longer than the one for resonant excitations of
longitudinal degrees of freedom in lattice direction, which we are going to
investigate here.
The system shall be driven periodically in time by the homogeneous sinusoidal
force pointing in the lattice direction ${\bm e}_x$,
\begin{equation}\label{fig:force}
{\bm F}(t) = - \frac{K}{a}\cos(\omega t)\, {\bm e}_x.
\end{equation}
It is characterized by the driving strength $K$, corresponding to the amplitude
of the potential offset between neighboring lattice sites, and the angular
driving frequency $\omega$, which defines also the driving period $T=2\pi/\omega$.
Such a force can be realized as an inertial force by shaking the lattice back and
forth in $x$ direction.
In the absence of periodic forcing, experiments performed in the regime of deep
lattices, $V_0/E_R\gtrsim 5$, at the typical ultracold quantum gas temperatures
are described accurately by the single-band Bose Hubbard model \cite{JakschEtAl98}
\begin{equation}\label{eq:Hs}
\hat{H}_s = -J_s\sum_{\ell=1}^{M-1}
\Big(\hat{b}^\dag_{s\ell+1} \hat{b}^{\phantom{\dag}}_{s\ell} + \text{H.c.} \Big)
+\frac{U_s}{2} \sum_{\ell=1}^{M}\hat{n}_{s\ell}(\hat{n}_{s\ell}-1) .
\end{equation}
Here the index $\ell$ denotes the lattice sites in ascending order from 1 to $M$ and the label $s$ indicates the lowest Bloch band to be distinguished from the first excited band, labeled by $p$, which is considered below. Moreover,
$\hat{b}^\dag_{\alpha\ell}$, $\hat{b}^{\phantom{\dag}}_{\alpha \ell}$, and
$\hat{n}_{\alpha\ell}=\hat{b}^\dag_{\alpha\ell}\hat{b}^{\phantom{\dag}}_{\alpha\ell}$ denote the creation,
annihilation and number operator for a boson in a Wannier state of band $\alpha$
on site $\ell$. Nearest-neighbor tunneling is described by the parameter $J_s$
and on-site interactions by the Hubbard parameter $U_s$.
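For small systems, Hamiltonian \eqref{eq:Hs} can be diagonalized exactly in the number-state basis. The following sketch (ours, not from the paper; the helper names are arbitrary) builds the matrix of $\hat{H}_s$ with open boundaries for $N$ bosons on $M$ sites:

```python
import numpy as np
from itertools import product

def fock_states(M, N):
    # all occupations (n_1, ..., n_M) with sum = N
    return [s for s in product(range(N + 1), repeat=M) if sum(s) == N]

def bose_hubbard(M, N, J, U):
    states = fock_states(M, N)
    index = {s: i for i, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for i, s in enumerate(states):
        # on-site interaction (U/2) sum_l n_l (n_l - 1)
        H[i, i] = 0.5 * U * sum(n * (n - 1) for n in s)
        # hopping -J (b^dag_{l+1} b_l + h.c.), open boundaries
        for l in range(M - 1):
            if s[l] > 0:
                t = list(s)
                t[l] -= 1
                t[l + 1] += 1
                j = index[tuple(t)]
                amp = -J * np.sqrt(s[l] * (s[l + 1] + 1))
                H[j, i] += amp
                H[i, j] += amp  # hermitian-conjugate process
    return H, states

# two bosons on two sites with U = 0: spectrum {-2J, 0, 2J}
H, _ = bose_hubbard(2, 2, 1.0, 0.0)
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), [-2.0, 0.0, 2.0])
```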
While in a non-driven system, a description in the low-energy subspace of the $s$
band is well justified, this assumption is not as clear in a system that is
driven periodically. Even if the driving frequency is small compared to the
band gap separating the $s$ band from the first excited $p$ band, states of
excited bands might still be populated via multiphoton excitations corresponding
to either single-particle processes \cite{WeinbergEtAl15, StraeterEckardt16} or
two-particle scattering \cite{ReitterEtAl17}. If periodic driving is used to
control the physics of the lowest band, such excitation processes must be viewed
as unwanted heating. In order to estimate this effect, we will also take into
account the first excited band, which for the undriven lattice is captured
by the Hamiltonian
\begin{eqnarray}\label{eq:Hp}
\hat{H}_p &=& \Delta\sum_{\ell=1}^{M} \hat{n}_{p\ell}
+ J_p\sum_{\ell=1}^{M-1}\Big(\hat{b}^\dag_{p\ell+1} \hat{b}^{\phantom{\dag}}_{p\ell} + \text{H.c.} \Big)
\\\nonumber&&
+\,\frac{U_p}{2}\sum_{\ell=1}^{M} \hat{n}_{p\ell}(\hat{n}_{p\ell}-1) ,
\end{eqnarray}
and coupled to the $s$ band via the interband interaction term
\begin{equation}\label{eq:Hsp}
\hat{H}_{sp} =U_{sp}\sum_{\ell=1}^M \Big[2\hat{n}_{s\ell}\hat{n}_{p\ell}
+ \frac{1}{2}\Big(\hat{b}^\dag_{p\ell}\hat{b}^\dag_{p\ell}
\hat{b}^{\phantom{\dag}}_{s\ell}\hat{b}^{\phantom{\dag}}_{s\ell} + \text{H.c.} \Big)\Big].
\end{equation}
Here $\Delta$ denotes the orbital energy required to excite a particle to a
Wannier state of the $p$ band and $J_p$ and $U_p$ describe nearest-neighbor
tunneling and on-site interactions in this $p$ band, respectively. The on-site
scattering and repulsion between $s$ and $p$ states is quantified by $U_{sp}$.
If the energy scales of the periodic force, $\hbar\omega$ and $K$, remain below
the band gap $\Delta$, the bands of the undriven problem, $s$ and $p$, provide a
useful basis also for the description of the driven system (see supplemental
material of Ref.~\cite{ReitterEtAl17}). Assuming this regime, we project the
potential $-{\bm r}\cdot{\bm F}(t)$ induced by the force onto the lowest two bands and
obtain the driving term of the Hamiltonian:
\begin{equation}\label{eq:Hdr}
\hat{H}_\text{dr}(t) = K\cos(\omega t) \sum_{\ell=1}^{M}
\Big[\ell \Big(\hat{n}_{s\ell}+\hat{n}_{p\ell}\Big)
+\,\eta\Big( \hat{b}^\dag_{p\ell}\hat{b}^{\phantom{\dag}}_{s\ell} + \text{H.c.}\Big) \Big]
\end{equation}
where $\eta$ is the dipole matrix element between two Wannier states of the $s$
and the $p$ band on the same lattice site in units of the lattice constant.
The total Hamiltonian to be used for our analysis is now given by
\begin{equation}\label{eq:tb}
\hat{H}(t) = \hat{H}_s + \hat{H}_{p} + \hat{H}_{sp} + \hat{H}_\text{dr}(t).
\end{equation}
The number of independent parameters that describe this model is reduced
considerably by noticing that $J_s/E_R$, $J_p/E_R$, $\Delta/E_R$, and $\eta$
are determined completely by the dimensionless lattice depth $V_0/E_R$.
Moreover, the interaction parameters $U_s$, $U_p$, and $U_{sp}$ share the very
same (linear) dependence on both the $s$-wave scattering length $a_s$ (which
can be tuned using Feshbach resonances) and the transverse confinement
$\omega_\perp$, so that their ratios $U_p/U_s$ and $U_{sp}/U_s$ equally depend
on $V_0/E_R$ only. Thus, taking $J_s$ and $\hbar/J_s$ as the units for
energy and time, respectively, the undriven model is characterized by $V_0/E_R$
and $U_s/J_s$ as well as by the average number of particles per site $N/M$. The
periodic driving is furthermore characterized by the dimensionless driving
strength $K/J_s$ and angular frequency $\hbar\omega/J_s$. The dependence of the
model parameters on the lattice depth $V_0/E_R$, obtained from band-structure
calculations, is shown in Fig.~\ref{fig:parameters}.
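The dependence shown in Fig.~\ref{fig:parameters} can be obtained from a standard plane-wave band-structure calculation for the cosine lattice $V(x)=V_0\sin^2(k_L x)$. The following sketch (a minimal illustration in recoil units with $k_L=1$, not the code used for the figure) estimates $J_s$ from the $s$-band width and $\Delta$ from the mean $s$--$p$ band separation:

```python
import numpy as np

def band_parameters(V0, n_pw=15, n_q=101):
    """Lowest two Bloch bands of V(x) = V0*sin^2(x) in recoil units (k_L = 1).

    In the plane-wave basis e^{i(q+2m)x} the potential contributes V0/2 on the
    diagonal and -V0/4 between momentum components differing by 2 k_L.
    """
    qs = np.linspace(-1.0, 1.0, n_q)          # quasimomentum across the BZ
    m = np.arange(-n_pw, n_pw + 1)            # plane-wave indices
    Es, Ep = [], []
    for q in qs:
        H = np.diag((q + 2 * m) ** 2 + V0 / 2.0)
        off = -V0 / 4.0 * np.ones(2 * n_pw)
        H += np.diag(off, 1) + np.diag(off, -1)
        e = np.linalg.eigvalsh(H)
        Es.append(e[0]); Ep.append(e[1])
    Es, Ep = np.array(Es), np.array(Ep)
    J_s = (Es.max() - Es.min()) / 4.0         # tight-binding bandwidth = 4*J_s
    Delta = Ep.mean() - Es.mean()             # mean p-s band separation
    return J_s, Delta
```

For $V_0/E_R=10$ this reproduces the order of magnitude $\Delta/J_s\sim 250$ quoted below.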
\begin{figure}[t]
\includegraphics[width=8.6cm]{Fig1.pdf}
\caption{Parameters characterizing the two-band Bose-Hubbard model for a shaken
one-dimensional optical cosine lattice plotted versus the lattice depth
$V_{0}/E_{R}$.}
\label{fig:parameters}
\end{figure}
\section{The low and the high-frequency approximation\label{sec:Heff}}
Most schemes of Floquet engineering in optical lattices
(such as for example the control of the bosonic Mott transition
\cite{EckardtEtAl05b, ZenesiniEtAl09}, the implementation of kinetic
frustration \cite{EckardtEtAl10, StruckEtAl11}, the creation of artificial
gauge fields \cite{StruckEtAl12, HaukeEtAl12b, Kolovsky11, AidelsburgerEtAl11,
StruckEtAl13, AidelsburgerEtAl13, KennedyEtAl15}, and the realization of
Floquet topological insulators \cite{OkaAoki09,JotzuEtAl14,AidelsburgerEtAl15})
are based on two approximations: a low-frequency approximation with respect to orbital degrees of
freedom and a high-frequency approximation with respect to processes occurring in
the lowest band described by $H_s$.
The low-frequency single-band approximation is based on the assumption that the
driving frequency and amplitude remain low enough to ensure that the system
remains in the subspace spanned by the lowest ($s$-type) Wannier-like orbital at
each lattice site. It roughly requires driving frequencies
\begin{equation}\label{eq:lf}
\hbar\omega\ll \Delta
\end{equation}
and driving amplitudes $K$ smaller than a threshold value $K_\text{th}$
below which multiphoton transitions are expected to be suppressed exponentially
with the photon number $\Delta/\hbar\omega$ \cite{StraeterEckardt16}. It leads to a
description of the system in terms of a tight-binding model with a single orbital
state per lattice site, which in our case is given by the single-band model
\begin{equation}\label{eq:Hsb}
\hat{H}_\text{sb}(t) = \hat{H}_s + K\cos(\omega t)\sum_{\ell=1}^M \ell \hat{n}_{s\ell}.
\end{equation}
The high-frequency approximation is based on the assumption that the
driving frequency is still large compared to the energy scales $J_s$ and $U_s$
governing the low-energy model (\ref{eq:Hsb}),
\begin{equation}\label{eq:hf}
\hbar\omega \gg J_s, U_s .
\end{equation}
Under these conditions the resonant creation of collective excitations of
energy $\hbar\omega$ becomes a slow high-order process that can be neglected on
sufficiently short time scales. This allows us to describe the system using an
approximate effective time-independent Hamiltonian obtained from a
high-frequency expansion \cite{EckardtEtAl05b,
BukovEtAl15, GoldmanDalibard14, EckardtAnisimovas15}. For that purpose, we
first perform a gauge transformation with the time-periodic unitary operator
\begin{equation}
\hat{U}(t) = \exp\bigg(-i\sum_{\ell=1}^M \theta(t)\ell\hat{n}_{s\ell}\bigg)
\end{equation}
with $\theta(t) = K/(\hbar\omega)\sin(\omega t)$, which integrates out the
driving term. The transformed Hamiltonian
$\hat{H}' = \hat{U}^\dag\hat{H}_\text{sb}\hat{U} -i\hat{U}^\dag\dot{\hat{U}}$ reads
\begin{eqnarray}\label{eq:Hprime}
\hat{H}'(t) &=&
-J_s\sum_{\ell=1}^{M-1}
\Big(e^{i\theta(t)}\hat{b}^\dag_{s\ell+1} \hat{b}^{\phantom{\dag}}_{s\ell} + \text{H.c.} \Big)
\nonumber\\&&
+\,\frac{U_s}{2}\sum_{\ell=1}^{M}\hat{n}_{s\ell}(\hat{n}_{s\ell}-1) .
\end{eqnarray}
The fact that it possesses typical matrix elements that are small compared to
$\hbar\omega$ even for large $K\sim\hbar\omega$ justifies the
high-frequency approximation also for strong driving. Its leading order is given
by the rotating-wave approximation, where the system is described by
the time-averaged Hamiltonian
\begin{eqnarray}\label{eq:Heff}
\hat{H}_\text{eff}&=&\frac{1}{T} \int_0^T\!\mathrm{d} t \, \hat{H}'(t)
\\\nonumber
&=&
-J_s^\text{eff}\sum_{\ell=1}^{M-1}
\Big(\hat{b}^\dag_{s\ell+1} \hat{b}^{\phantom{\dag}}_{s\ell} + \text{H.c.} \Big)
+\frac{U_s}{2}\sum_{\ell=1}^{M}\hat{n}_{s\ell}(\hat{n}_{s\ell}-1) .
\end{eqnarray}
Here the effective tunneling matrix element
\begin{equation}\label{eq:Jeff}
J_s^\text{eff} = J_s \mathcal{J}_0(K/\hbar\omega)
\end{equation}
acquires a dependence on the scaled driving amplitude $K/(\hbar\omega)$, described by the zeroth-order Bessel function $\mathcal{J}_0$.
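Numerically, this Bessel-function renormalization is straightforward to evaluate; a brief sketch (values purely illustrative):

```python
from scipy.special import j0, jn_zeros

def effective_tunneling(J_s, K_over_hw):
    """Effective tunneling J_s^eff = J_s * J_0(K/(hbar*omega))."""
    return J_s * j0(K_over_hw)

# J_0 changes sign at its first zero, K/(hbar*omega) ~ 2.405,
# beyond which the effective tunneling becomes negative.
first_zero = jn_zeros(0, 1)[0]
```

In particular, for $K/\hbar\omega=4$ (used repeatedly below) one finds $J_s^\text{eff}\approx -0.40\, J_s$.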
In this way the time evolution of the system's state $|\psi(t)\rangle$ is
approximately described by
\begin{equation}
|\psi(t)\rangle \approx \hat{U}(t)e^{-\frac{i}{\hbar}(t-t_0)\hat{H}_\text{eff}}
\hat{U}^\dag(t_0)|\psi(t_0)\rangle.
\end{equation}
In particular, we expect
\begin{equation}\label{eq:psi_n}
|\psi(n T)\rangle \approx e^{-\frac{i}{\hbar}nT\hat{H}_\text{eff}}
|\psi(0)\rangle \equiv |\psi_n^\text{eff}\rangle
\end{equation}
for integers $n$, when monitoring the dynamics stroboscopically in steps of the
driving period at those times $t=nT$, for which $\hat{U}(nT)=1$. Higher orders of the
high-frequency expansion will provide relative corrections of the order of
$J_s/\hbar\omega$ to the evolution governed by $\hat{H}_\text{eff}$
\cite{GoldmanDalibard14, EckardtAnisimovas15}.
The single-band high-frequency approximation, leading to a description of the
system's dynamics in terms of the approximate effective Hamiltonian~(\ref{eq:Heff}),
requires that there is a window of driving frequencies for which both conditions
(\ref{eq:lf}) and (\ref{eq:hf}) are fulfilled. Since with increasing lattice
depth $V_0/E_R$ both $J_s$ decreases rapidly and $\Delta$ increases moderately
(see Fig.~\ref{fig:parameters}), while the interaction parameter $U_s$ can be
made small by tuning the $s$-wave scattering length using a Feshbach resonance,
such a window will open for sufficiently large $V_0/E_R$. However, even within
such a frequency window heating will not be suppressed completely and eventually
make itself felt on some time scale $\tau$. This heating time $\tau$ has to be
compared to the typical duration of an experiment, which will be given by some
fixed multiple of the tunneling time $\hbar/J_s$, which in turn increases
exponentially with the lattice depth [asymptotically for deep lattices
$\ln(J_s/E_\text{R})\simeq -2\sqrt{V_0/E_\text{R}}$ \cite{Zwerger03}, see also
Fig.~\ref{fig:parameters}]. Thus, in order to take into account also this
latter effect, in the following we will investigate the behavior of the
dimensionless heating time $\tau J_s/\hbar$. In doing so, we have to keep in
mind that there will also be background heating (resulting from noise,
three-body collisions, or scattering with background particles),
which is independent of the periodic driving and happens on some time scale
$\tau_0$. Assuming $\tau_0\sim 1\text{s}$ ($\sim 10\text{s}$), requiring $\tau_0\gg\hbar/J_s$, and
noting that $E_R \sim 2\pi\hbar\cdot 3\,\text{kHz}$ for typical experiments, we
can see from Fig.~\ref{fig:parameters} that the lattice depth is limited to
values $V_0/E_R\lesssim 15$ $(20)$.
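The quoted bound on the lattice depth can be estimated from the asymptotic expression for $J_s$ given above. A rough sketch (the prefactor of order one is dropped, as in the quoted asymptotics, so the numbers are indicative only):

```python
import numpy as np

E_R = 2 * np.pi * 3.0e3          # recoil energy / hbar in 1/s, E_R ~ 2*pi*hbar*3 kHz

def tunneling_time(V0_over_ER):
    """hbar/J_s in seconds, using ln(J_s/E_R) ~ -2*sqrt(V0/E_R)."""
    J_over_hbar = E_R * np.exp(-2.0 * np.sqrt(V0_over_ER))
    return 1.0 / J_over_hbar

# The tunneling time must remain well below the background heating time tau_0.
```

For the stated recoil energy this yields tunneling times of roughly $0.1\,$s at $V_0/E_R=15$ and $0.4\,$s at $V_0/E_R=20$, compatible with the limits quoted above.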
\section{Intraband heating\label{sec:intra}}
Let us first investigate the validity of the high-frequency approximation,
before considering also heating due to the coupling to the first excited band.
For this purpose we consider the following quench scenario. We assume that the system
is prepared in the ground state of the undriven Hamiltonian (\ref{eq:Hs}), when
at time $t=0$ the driving amplitude is switched on abruptly to a finite value $K$.
We integrate the time evolution of the system described by the time-dependent
single-band Hamiltonian $\hat{H}_\text{sb}(t)$ [Eq.~(\ref{eq:Hsb})] and compare it to
the approximate solution $|\psi_n^\text{eff}\rangle$ [Eq.~(\ref{eq:psi_n})] obtained
from the time-averaged single-band Hamiltonian $\hat{H}_\text{eff}$. For that purpose
we consider a small system of $N=6$ particles on $M=10$ lattice sites, for which
we can integrate the time evolution exactly.
In order to monitor the deviation between the exact time evolution and the
dynamics predicted by the rotating wave approximation, we consider the
expectation value
\begin{equation}\label{eq:n0}
n_{0}(t)=\langle\hat{b}^\dag_{s0}\hat{b}^{\phantom{\dag}}_{s0}\rangle
\quad\text{with}\quad
\hat{b}^{\phantom{\dag}}_{s0}=\frac{1}{\sqrt{M}}\sum_{\ell=1}^M \hat{b}^{\phantom{\dag}}_{s\ell},
\end{equation}
which corresponds to the mean occupation of the single-particle state with
quasimomentum $0$ in the $s$ band.
The difference
\begin{equation}
\Delta n_0(t) = n_{0}(t)-n^\text{eff}_{0}(t)
\end{equation}
between the exact expectation value and the one obtained within the rotating-wave
approximation taken at times $t=nT$ with integer $n$ will serve us as an
indicator for the validity of the approximations made.
While for the results presented in this section, $n_0(t)$ refers to the dynamics
generated by the time-dependent single-band Hamiltonian~(\ref{eq:Hsb}), later
on in the following section, $n_0(t)$ will correspond to the dynamics of the
full driven two-band model (\ref{eq:tb}).
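The structure of this comparison can be illustrated with a stripped-down single-particle analogue (assuming $\hbar=J_s=1$; the actual simulations use the interacting many-body Hamiltonian, which this sketch omits). We propagate a single particle on an open chain under the rotating-frame Hamiltonian of the form (\ref{eq:Hprime}) and compare the stroboscopic occupation of the quasimomentum-0 state with that obtained from the effective Hamiltonian:

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import j0

# Single-particle analogue of the quench protocol (hbar = J_s = 1).
M, J, omega = 10, 1.0, 30.0
K = 4.0 * omega                          # K/(hbar*omega) = 4, as in Fig. 2
T = 2.0 * np.pi / omega
J_eff = J * j0(K / omega)                # effective tunneling, ~ -0.40 J

hop = np.diag(np.ones(M - 1), 1)         # matrix of |l><l+1|

def H_rot(t):
    """Rotating-frame hopping Hamiltonian with Peierls phase theta(t)."""
    theta = (K / omega) * np.sin(omega * t)
    return -J * (np.exp(1j * theta) * hop.T + np.exp(-1j * theta) * hop)

H_eff = -J_eff * (hop + hop.T) + 0j

psi0 = np.ones(M, dtype=complex) / np.sqrt(M)   # quasimomentum-0 state
n_periods, steps = 20, 200
dt = T / steps
psi, deviations = psi0.copy(), []
for n in range(n_periods):
    for s in range(steps):               # midpoint-exponential time stepping
        t_mid = n * T + (s + 0.5) * dt
        psi = expm(-1j * dt * H_rot(t_mid)) @ psi
    psi_eff = expm(-1j * (n + 1) * T * H_eff) @ psi0
    n0 = abs(np.vdot(psi0, psi)) ** 2    # occupation of the k = 0 state
    n0_eff = abs(np.vdot(psi0, psi_eff)) ** 2
    deviations.append(abs(n0 - n0_eff))
```

For the non-interacting single particle the stroboscopic deviations remain small at this high frequency; it is the interactions included in the many-body simulations that produce the heating analyzed below.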
\begin{figure}[t]
\includegraphics[width=8.8cm]{Fig2.pdf}
\caption{Difference $\Delta n_0$ between the exact time
evolution of the single-band Hamiltonian (\ref{eq:Hsb}) and that obtained from
the rotating wave approximation (\ref{eq:psi_n}), taken at times $t=nT$ with integer
$n$. The time evolution is initiated by abruptly switching the amplitude of the drive
at $t=0$ from $0$ to $K$. The parameters are $N=6$, $M=10$, $V_0/E_R=14$,
$U_s/J_s=1$, $\hbar\omega/J_s=30$, and $K/\hbar\omega = 4$.
At time $\tau$ the difference $\Delta n_0$ exceeds $0.2$ for the first time.}
\label{fig:evolution}
\end{figure}
In Fig.~\ref{fig:evolution} we plot $\Delta n_0(t)$ for a quench to a
large driving amplitude $K/\hbar\omega = 4$ (the other parameters are specified
in the caption). For this value the effective tunneling parameter changes its
sign, $J_s^\text{eff}\approx -0.4 J_s$, so that the quench is significant also on
the level of the rotating wave approximation. We can see that $\Delta n_0(t)$
shows an irregular oscillatory behavior, with a roughly linearly growing envelope.
We define the heating time $\tau$ as the time at which $|\Delta n_0(t)|$ exceeds
the value $\Delta n_\text{cut}=0.2$ for the first time. Note that $\tau$ gives
only an estimate for the time scale on which heating starts to play a role.
The value of $\Delta n_\text{cut}$ is obviously somewhat arbitrary. It is chosen
to be much smaller than the initial occupation of the zero momentum state,
which is of the order of $N$, and it is also smaller than (and of the order of)
the filling factor $N/M=0.6$ corresponding to the mean occupation of each momentum
state. The linear spreading of the envelope of $\Delta n_0(t)$ implies that
altering $\Delta n_\text{cut}$ by a factor of order one will simply alter the
heating time $\tau$ by roughly the same factor. Note also that the typical
deviations $|\Delta n_0(t)|$ at time $t=\tau$ are smaller than
$\Delta n_\text{cut}=0.2$, since in most cases $\Delta n_\text{cut}$ is reached
for the first time when an extreme fluctuation of $|\Delta n_0(t)|$ occurs.
Note that, alternatively, the time $\tau$ could also be defined via the
(stroboscopic or period-averaged) energy absorption. Such a definition would
possess the advantage that, to some extent, in experiments it can be measured
(or at least estimated) directly from time-of-flight images
\cite{WeinbergEtAl15, ReitterEtAl17, RajapopalEtAl19,
SinghEtAl19}. On the other hand, from the point of view of Floquet engineering,
the relevant quantity to look at is the deviation from the approximate effective
Hamiltonian, the physics of which we wish to implement. And these deviations are
not necessarily proportional to the absorbed energy. Namely, the excitation of a
particle within the lowest band might be as detrimental as its excitation to the
first excited band via a multi-photon process, despite the fact that the former
is associated with a much lower energy absorption than the latter. Therefore, we
have decided to define the ``heating'' time $\tau$ via deviations from the
dynamics expected from the target Hamiltonian, as described in the previous
paragraph.
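The extraction of $\tau$ from a time series of $\Delta n_0$ then amounts to a first-crossing search; a minimal sketch:

```python
import numpy as np

def heating_time(times, delta_n0, cut=0.2):
    """First time at which |Delta n_0| exceeds the cutoff; None if it never does."""
    exceeded = np.abs(np.asarray(delta_n0)) > cut
    if not exceeded.any():
        return None
    # np.argmax on a boolean array returns the index of the first True entry
    return np.asarray(times)[np.argmax(exceeded)]
```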
\begin{figure}[t]
\includegraphics[width=8.9cm]{Fig3.pdf}
\caption{Heating time $\tau$ (dots) versus driving frequency $\hbar\omega/J_s$
for two different values of the interaction strength $U_s/J_s$. The other
parameters are chosen as in Fig.~\ref{fig:evolution}: $N=6$, $M=10$,
$K/\hbar \omega=4$, and $V_{0}/E_{R}=14$. The solid lines are exponential fits.}
\label{fig:IntraOmega}
\end{figure}
In Fig.~\ref{fig:IntraOmega}, we plot the heating time $\tau J_s/\hbar$ versus the
driving frequency $\hbar\omega/J_s$ for two different values of the interaction
strength $U_s/J_s=1$ and $U_s/J_s=5$ (the other parameters are specified in the
caption). We see that the heating time is considerably reduced for the larger
value of the interactions. Moreover, an exponential dependence of the heating
time on the driving frequency can be observed. This agrees with the expectation
for heating processes based on perturbation theory in Floquet space \cite{
EckardtHolthaus08b}. Namely, one can argue
that the order of the process of absorbing an energy quantum $\hbar\omega$,
corresponding to the number of elementary excitations (quasiparticles) that
have to be collectively excited, will grow like a power of $\omega$ and that
the corresponding matrix element will be suppressed exponentially with the
order \cite{EckardtHolthaus08b, EckardtAnisimovas15}. Such an exponential
suppression of heating with respect to the driving frequency has recently also
been proven for spin systems having a finite local energy bound
\cite{AbaninEtAl15, KuwaharaEtAl16}. Note, however, that these proofs do not apply
to the bosonic Hubbard model considered here, which in principle allows for
macroscopic site occupations.
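The exponential fits shown in Fig.~\ref{fig:IntraOmega} correspond to a linear regression of $\ln\tau$ on $\hbar\omega/J_s$; a sketch with synthetic data (the rate $b=0.2$ below is an arbitrary illustrative value, not a fitted result from the paper):

```python
import numpy as np

def fit_exponential(omegas, taus):
    """Fit tau = a * exp(b * omega) by linear regression of ln(tau) on omega."""
    b, ln_a = np.polyfit(omegas, np.log(taus), 1)
    return np.exp(ln_a), b

# Synthetic heating times growing exponentially with frequency.
omegas = np.linspace(10.0, 40.0, 7)
taus = 0.5 * np.exp(0.2 * omegas)
a, b = fit_exponential(omegas, taus)
```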
\section{Intraband and interband heating\label{sec:intrainter}}
The exponential increase of the heating time with respect to the driving frequency
visible in Fig.~\ref{fig:IntraOmega} is an artifact of the single-band
description of the driven lattice system. Namely, for sufficiently large driving
frequencies unwanted excitations to higher-lying orbital states (spanning
excited Bloch bands) will become the dominant heating effect. In order to take
into account this effect, we will include also the coupling to the $p$-band. For
this purpose we consider the two-band Hamiltonian~(\ref{eq:tb}) and monitor the
heating time $\tau$ defined in the same way as in the previous section. In an
experiment, of course, also further (higher-lying) bands will play a role.
However, the coupling to the first excited band is dominant, because that band
is both energetically closest to the lowest band and coupled to it by the
largest matrix elements. Therefore, the characteristic time scale for
interband heating processes is determined by transitions to the $p$ band.
Higher-lying bands can still make themselves felt, e.g.\ in the precise shape
of resonance lines (as discussed in Ref.~\cite{WeinbergEtAl15}). However, such
details are not crucial for the present analysis, which is interested in the
time scales only.
\begin{figure}[t]
\includegraphics[width=8.6cm]{Fig4}
\caption{Heating time versus driving frequency both for the
single-band Hamiltonian (empty symbols) and the two-band Hamiltonian (full symbols)
for a system of $N=4$ particles on $M=8$ sites (corresponding to 16
single-particle states) with $V_{0}/E_{R}=14$, $K/\hbar\omega=4$, and two different
interaction strengths.}
\label{fig:HeatingComp}
\end{figure}
In Fig.~\ref{fig:HeatingComp} we plot the heating time $\tau$ versus the driving
frequency for a system of $N=4$ particles on $M=8$ sites (corresponding to 16
single-particle states) with lattice depth $V_0/E_R=14$. For strong driving,
$K/\hbar\omega=4$, and two different interaction strengths, $U_s/J_s=1$ and
$U_s/J_s=5$, we compare the heating times obtained
from the single-band model (\ref{eq:Hsb}) (open circles) to those obtained from
the two-band model (\ref{eq:tb}) (filled circles). As expected, we can observe
that, while the coupling to the $p$ band does not influence the heating time for
low frequencies, it becomes dominant for large driving frequencies. For the
two-band model the interplay between intraband and interband heating gives rise
to a maximum of the heating time, $\tau_\text{opt}$, at some optimal intermediate
driving frequency $\omega_\text{opt}$.
For the larger interaction strength $\tau_\text{opt}$ is lower and occurs at
a larger frequency.
\begin{figure}[t]
\includegraphics[width=8.6cm]{Fig5.pdf}
\caption{Heating time $\tau J_s/\hbar$ versus driving frequency $\hbar\omega/J_s$
for the two-band model with different interaction parameters $U_s/J_s$, for $N=4$,
$M=8$, $V_{0}/E_{R}=14$, and $K/\hbar\omega=4$. The inset shows the optimal
(maximum) heating time $\tau_\text{opt}$ (diamonds) and the corresponding optimal
frequency $\omega_\text{opt}$ versus $U_s/J_s$.}
\label{fig:HeatingU}
\end{figure}
To study the impact of interactions in more detail, we compare the
frequency-dependent heating times for various interaction strengths $U_s/J_s$ in
Fig.~\ref{fig:HeatingU}.
The inset shows the optimal (maximum) heating time $\tau_\text{opt}$ (diamonds,
right axis) and the corresponding optimal driving frequency $\omega_\text{opt}$
(circles, left axis) versus $U_s/J_s$. We observe a significant reduction of
$\tau_\text{opt}$ combined with an upshift of $\omega_\text{opt}$, when
increasing the interaction strength $U_s/J_s$ up to values of about 3.
Both the shift of $\omega_\text{opt}$ and the noticeable reduction of
$\tau_\text{opt}$ for the single-band model (Fig.~\ref{fig:IntraOmega}) suggest
that increasing the interactions mainly enhances intraband heating, so that
intraband heating becomes the dominant heating process limiting $\tau$ up to
larger values of $\omega$. For values of $U_s/J_s$ that are larger than 3 both
$\tau_\text{opt}$ and $\omega_\text{opt}$ approximately saturate.
We attribute this favorable behavior to reaching the strongly
interacting regime $U_s/|J_s^\text{eff}|\approx 2.5 (U_s/J_s)\gg1$ in the lowest
band. Here the kinetic energy of the particles is not sufficient anymore to
induce changes in the site occupations that are associated with a change of
interaction energy (for an initial state without multiply occupied sites this
regime corresponds to the hard-core boson limit). Once this regime is reached,
the physics within the lowest band does not change much anymore, when the
interactions are increased further, which is consistent with the observed
saturation. This argument holds until eventually for even stronger
interactions, $U_{sp}\sim \Delta$, deviations due to interband coupling will
make themselves felt.
\begin{figure}[t]
\includegraphics[width=8.6cm]{Fig8.pdf}
\caption{Map of $n_0(t)/n_0(0)$ at time $t=100$ ms for the two-band model versus
driving frequency and interaction strengths, with $N=4$, $M=8$, $V_0/E_R=10$,
and $K/\hbar\omega = 1.5$. Here $n_0(t)$ is defined in Eq.~(\ref{eq:n0}).
We assumed a recoil energy of $E_R=2\pi\hbar\cdot 3.33$ kHz, a typical value for an
experiment with $^{87}$Rb atoms, for which the chosen time span corresponds to
$t J_s/\hbar\approx 40$ tunneling times. The driving strength corresponds to an
effective tunneling matrix element of $J_s^\text{eff}\approx 0.5J_s$.}
\label{fig:Map}
\end{figure}
In Fig.~\ref{fig:Map} we depict the lowest-band zero-quasimomentum occupation
$n_0(t)$ in units of its initial value $n_0(0)$ at time
$t\approx 40 \hbar/J_s$. This time is chosen to be large compared to the
tunneling time $\hbar/J_s$, which is the relevant time scale for experiments.
It is plotted versus the interaction strength $U_s/J_s$ and the driving frequency
$\hbar\omega/J_s$, where the low-frequency regime is shown in the left panel,
while results for higher driving frequencies are given in the right panel. In
the underlying simulations, we have considered a lattice depth of
$V_0/E_\text{R}=10$ and a driving strength of $K/\hbar\omega=1.5$, which
is smaller than the one used previously and does not induce a sign change of
the effective tunneling matrix element (\ref{eq:Jeff}),
$J^\text{eff}_s\approx 0.51J_s$.
The latter implies that on the level of the effective Hamiltonian, the quench
induced when switching on the driving does not correspond to an inversion of the
effective dispersion relation, but rather to a reduction of the band width by a
factor of one half. On the level of the time-averaged Hamiltonian (\ref{eq:Heff}),
this rather mild quench will excite the system only weakly, so that the
occupation $n_0(t)/n_0(0)$ will retain a rather large value also
during the dynamics following the quench. Thus, a significant reduction of
$n_0(t)/n_0(0)$ indicates unwanted driving-induced heating. Note also that (for
fixed $K/\hbar\omega$) the ideal dynamics generated by $\hat{H}_\text{eff}$, and
thus also $n_0(t)/n_0(0)$, should be independent of the driving frequency.
Therefore, any frequency dependence of $n_0(t)/n_0(0)$ must also be viewed
as a deviation from the target dynamics generated by $\hat{H}_\text{eff}$.
In Fig.~\ref{fig:Map}, we find signatures of heating in the form of a significant
reduction of $n_0(t)/n_0(0)$ in various regimes. In the regime of weak
interactions $U_s/J_s\ll1$, heating is visible both for too low frequencies, when
$\hbar\omega \sim J_s$, as well as for too high frequencies, when
$\hbar\omega\sim \Delta$ (with $\Delta/J_s\sim 250$ for the given lattice depth).
When the intraband interactions $U_s$ become larger than the intraband tunneling
$J_s$, low-frequency heating sets in already for larger $\hbar\omega$, in
accordance with condition (\ref{eq:lf}). At the same time, we can also observe
that interband heating at large frequencies is enhanced in the presence of
interactions. For the chosen lattice depth of $V_0/E_\text{R}=10$, we observe
that strong interactions $U_s/J_s\gg1$ lead to significant heating at any
frequency.
\begin{figure}[t]
\includegraphics[width=8.6cm]{Fig6.pdf}
\caption{Heating time $\tau J_s/\hbar$ versus driving frequency
$\hbar\omega/J_s$ for the two-band model with
different lattice depths $V_{0}/E_{R}$, for $N=4$, $M=8$, $U_s/J_s=5$, and
$K/\hbar\omega=4$. The inset shows the optimal
(maximum) heating time $\tau_\text{opt}$ (diamonds) and the corresponding optimal
frequency $\omega_\text{opt}$ versus $V_0/E_R$.}
\label{fig:HeatingV}
\end{figure}
The dependence of the heating time $\tau J_s/\hbar$ on the lattice depth
$V_0/E_\text{R}$ is investigated in detail in Fig.~\ref{fig:HeatingV}, where we
plot the scaled heating time $\tau J_s/\hbar$ versus $\hbar\omega/J_s$ for
various values of $V_0/E_\text{R}$ and for $U_s/J_s=5$ as well as $K/\hbar\omega=4$.
The inset shows $\tau_\text{opt} J_s/\hbar$ and $\hbar\omega_\text{opt}/ J_s$
versus $V_0/E_\text{R}$. We can observe that both $\tau_\text{opt} J_s/\hbar$
and $\hbar\omega_\text{opt}/J_s$ increase with the lattice depth. The main figure
shows that this behavior is associated with a significant reduction of heating
for large $\hbar\omega/J_s$. Let us discuss this behavior in more detail.
First, we can notice that the intraband dynamics, described by the single-band
Hamiltonian (\ref{eq:Hsb}) and measured in the natural unit of the tunneling time
$\hbar/J_s$, is determined by the dimensionless ratios $U_s/J_s$,
$\hbar\omega/J_s$, and $K/\hbar\omega$, which we kept fixed in our simulations
when increasing the lattice depth $V_0/E_\text{R}$. This choice of fixed parameters
is natural from the point of view of quantum simulation, where we wish to
engineer the properties of the lowest band described by the approximate effective
Hamiltonian (\ref{eq:Heff}). It explains why for small $\hbar\omega/J_s$, for
which interband coupling is negligible, the dimensionless heating time
$\tau J_s/\hbar$ is hardly influenced by the lattice depth. This can be seen from
the fact that all curves in Fig.~\ref{fig:HeatingV} agree up to the point
($\sim \hbar\omega_\text{opt}/J_s$), where $\tau J_s/\hbar$ starts to be reduced
by interband processes.
We can, moreover, observe in Fig.~\ref{fig:HeatingV} that the interband heating,
which is responsible for the reduction of $\tau J_s/\hbar$ at large frequencies,
is significantly reduced with increasing lattice depth. This behavior results from
the interplay of various effects. On the one hand, with increasing lattice depth
$V_0/E_\text{R}$ the band separation $\Delta/E_\text{R}$ increases, whereas the
interband coupling parameter $\eta$ decreases (Fig.~\ref{fig:parameters}). Both
effects tend to reduce interband heating. An additional and much stronger
reduction of interband heating will, however, result from the exponential
suppression of the tunneling parameter $J_s$ with the square root of the
lattice depth $V_0/E_\text{R}$ (Fig.~\ref{fig:parameters}). Namely, since we
keep the dimensionless ratio $\hbar\omega/J_s$ fixed (taking the point of view of
quantum simulation, as explained in the previous paragraph), the number
$n_\text{ph}$ of photons (i.e.\ energy quanta $\hbar\omega$) needed to overcome
the band separation $\Delta$, $n_\text{ph}\approx\Delta/(\hbar\omega)$, will
strongly increase with the lattice depth. This, in turn, implies a very strong
suppression of interband heating, since we expect an exponential suppression of
interband transitions with $n_\text{ph}$ \cite{StraeterEckardt16,WeinbergEtAl15}.
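The growth of $n_\text{ph}$ with the lattice depth at fixed $\hbar\omega/J_s$ can be made quantitative with standard deep-lattice asymptotics. The sketch below uses the harmonic-oscillator estimate $\Delta \approx 2\sqrt{V_0/E_R}\,E_R$ together with $J_s/E_R \approx (4/\sqrt{\pi})(V_0/E_R)^{3/4}e^{-2\sqrt{V_0/E_R}}$ (these are approximations, so the numbers are indicative only):

```python
import numpy as np

def photon_number(V0, hw_over_Js=30.0):
    """n_ph ~ Delta/(hbar*omega) at fixed hbar*omega/J_s, deep-lattice asymptotics."""
    J_over_ER = (4.0 / np.sqrt(np.pi)) * V0 ** 0.75 * np.exp(-2.0 * np.sqrt(V0))
    Delta_over_ER = 2.0 * np.sqrt(V0)        # harmonic-oscillator band gap estimate
    return (Delta_over_ER / J_over_ER) / hw_over_Js
```

At $\hbar\omega/J_s=30$ this gives $n_\text{ph}$ of order 10 at $V_0/E_R=10$ and roughly three times as many photons at $V_0/E_R=14$, illustrating the rapid growth of the process order with the lattice depth.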
The effects described in the previous paragraph explain a strong increase of
the heating time $\tau$ with increasing lattice depth. However, from the point
of view of quantum simulation, we have to compare the heating time to the relevant
experimental time scale, given by the tunneling time $\hbar/J_s$. This is why
here we are always plotting the scaled heating time $\tau J_s/\hbar$. Therefore,
when increasing the lattice depth, the expected strong increase of $\tau$
directly competes with the exponential increase of $\hbar/J_s$ with the square
root of the lattice depth. The results presented in Fig.~\ref{fig:HeatingV}
clearly show that the former effect wins over the latter one, so that, all in
all, $\tau J_s/\hbar$ is enhanced when the lattice depth $V_0/E_\text{R}$ is
raised. We can, thus, see a noticeable increase of the optimal heating time
$\tau_\text{opt}J_s/\hbar$ (shown in the inset of Fig.~\ref{fig:HeatingV}) with
$V_0/E_R$.
While the results of Fig.~\ref{fig:HeatingV} imply that driving-induced heating
can effectively be reduced by raising the lattice depth, this possibility is
limited by non-driving-induced heating processes, originating, e.g., from
three-body collisions, scattering with background particles, or noise. Namely,
the tunneling time, which increases with the lattice depth, has to remain short
compared to the time scale $\tau_0$ associated with such background heating.
In turn, this means that by reducing non-driving-induced heating, and thereby
increasing $\tau_0$, the experimentalist can also reduce driving-induced heating. This is
a major result of this article.
\begin{figure}[t]
\includegraphics[width=8.6cm]{Fig7.pdf}
\caption{Heating time $\tau J_s/\hbar$ versus driving frequency $\hbar\omega/J_s$ for the two-band model with different driving amplitudes $K/\hbar\omega$, for
$N=4$, $M=8$, $V_{0}/E_{R}=14$,
and $U_s/J_s=5$. The inset shows the optimal
(maximum) heating time $\tau_\text{opt}$ (diamonds) and the corresponding optimal
frequency $\omega_\text{opt}$ versus $K/\hbar\omega$.}
\label{fig:HeatingK}
\end{figure}
Let us, finally, also have a look at the dependence of the heating time on the
driving strength. In Fig.~\ref{fig:HeatingK} we plot $\tau J_s/\hbar$ versus
$K/\hbar\omega$ for a system with $V_0/E_\text{R}=14$ and $U_s/J_s=5$. We focus
on values of $K/\hbar\omega$ that are interesting for Floquet engineering (i.e.\
that are large enough to achieve a significant modification of $J^\text{eff}_s$
and not much larger than required for tuning $J^\text{eff}_s$ to negative
values). For the smallest considered driving strength of $K/\hbar\omega=1$ a
narrow window of frequencies is found for which the heating time takes large
values of more than 300 tunneling times. This window disappears for stronger
driving. Note that we do not find a simple monotonous decrease of the heating
time with respect to the driving strength. We attribute this observation to the
non-monotonous behavior of the finite-frequency components
$\propto e^{i m \omega t}$ of the time-dependent Hamiltonian (\ref{eq:Hprime})
in the rotating frame [as well as of the corresponding two-band Hamiltonian].
Namely, these terms, which describe heating processes beyond the rotating wave
approximation (\ref{eq:Heff}) where the system exchanges $m$ energy quanta
$\hbar\omega$, involve Bessel-function expressions
$\mathcal{J}_m(K/\hbar\omega)$ that depend in a non-monotonic way on the
driving strength $K/\hbar\omega$.
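As a minimal numerical illustration of this point (our own sketch, not part of the analysis above), one can evaluate the Bessel weights from the power series $\mathcal{J}_m(x)=\sum_{k\geq 0}\frac{(-1)^k}{k!\,(k+m)!}(x/2)^{2k+m}$ and observe that, e.g., the weight $\mathcal{J}_1$ of the one-photon term first grows with the driving strength, peaks near $x\approx 1.84$, and then decreases:

```python
from math import factorial

def bessel_j(m, x, terms=40):
    """Bessel function of the first kind J_m(x), summed from its power series."""
    return sum((-1) ** k / (factorial(k) * factorial(k + m)) * (x / 2) ** (2 * k + m)
               for k in range(terms))

# Weight of the m = 1 ("one-photon") term for increasing driving strength
# x = K / (hbar * omega): it rises, peaks near x ~ 1.84, then falls again,
# i.e. the dependence on the driving strength is non-monotonic.
for x in (0.5, 1.0, 1.5, 1.84, 2.5, 3.0):
    print(x, round(bessel_j(1, x), 4))
```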
\section{Conclusions\label{sec:conclusions}}
In summary, we have investigated the conditions for Floquet engineering in
optical lattices. In particular we were interested in the existence of a
frequency window where both low-frequency intraband heating and high-frequency
interband heating are suppressed on a time scale $\tau$ that is large compared
to the tunneling time. Considering the concrete example of a small
one-dimensional system of interacting bosons in a shaken optical lattice, we
presented numerical results that show that such a frequency window exists for
sufficiently deep lattices. The maximum ratio of heating and tunneling time,
$\tau_\text{opt} J_s/\hbar$ (which is found for an optimal intermediate
driving frequency $\omega_\text{opt}$), increases with the lattice
depth. This result, which is not obvious since the tunneling time also
increases exponentially with the lattice depth, implies that we can reduce
driving-induced heating by simply ramping up the lattice depth. However, we
have pointed out that this strategy is limited to lattice depths for which the
tunneling time is still much smaller than the time scale $\tau_0$ for
non-driving-induced background heating. Thus, the larger the time scale for
such background heating, the further driving-induced heating can be reduced.
We have also found that when ramping up the interaction strength,
driving-induced heating is significantly enhanced, until a saturation value is
reached roughly when the ratio $U_s/J_s$ reaches values of 3. This saturation
behavior is a
promising result regarding the possibility of Floquet engineering of strongly
correlated states of matter such as fractional Chern insulators
\cite{GrushinEtAl14, AnisimovasEtAl15, RaciunasEtAl16, RaciunasEtAl18}.
An interesting direction for future work concerns the role of disorder. It has
been argued that many-body localization can protect the driven system against
unwanted heating associated with deviations from the high-frequency approximation
\cite{LazaridesEtAl15, PonteEtAl15}. Roughly speaking, within the localization
length, the system is not able to create excitations of a sufficiently large
energy $\hbar\omega$. This mechanism is also crucial for the stabilization of
discrete time crystals \cite{KhemaniEtAl16,ElseEtAl16,ChoiEtAl17, ZhangEtAl17,
Sacha_Zakrzewski2017, KhemaniEtAl19, ElseEtAl19}.
However, disorder-induced localization cannot be expected to protect the system
against unwanted heating associated with deviations from the low-frequency
approximation. Unwanted resonant multi-photon excitations to states above the gap
can still occur. It is an interesting question to what extent the corresponding
heating rates are influenced by disorder-induced localization.
\begin{acknowledgments}
We thank Christoph Str\"ater for providing the band structure data. This work
was supported by the German Research Foundation DFG via the Research Unit
FOR2414 (under Project No. 277974659).
\end{acknowledgments}
\section{Introduction}
For an interconnection network, the main concerns are reliability and fault tolerance. An interconnection network is usually modelled as a connected graph $G=(V, E)$, where nodes represent processors
and edges represent communication links between processors. {\it The connectivity $\kappa (G)$} of a graph $G$ is an important parameter to evaluate the reliability and fault tolerance of a network. It is defined as the minimum number of vertices whose deletion results in a disconnected graph. In addition, Whitney~\cite{w} characterized the connectivity from a local point of view. That is, for any subset $S=\{u, v\}\subseteq V(G)$, let $\kappa_{G}(S)$ denote the maximum number of internally disjoint paths between $u$ and $v$ in $G$. Then $\kappa(G)=\min\{\kappa_{G}(S)|S\subseteq V(G)$ and $|S|=2\}.$ As a generalization of the traditional connectivity, Chartrand et al.~\cite{c} introduced the {\it generalized $k$-connectivity} in $1984$. This parameter can measure the ability of a network $G$ to connect any $k$ vertices in $G$. Let $S\subseteq V(G)$ and let $\kappa_{G}(S)$ denote the maximum number $r$ of edge-disjoint trees $T_{1}, T_{2}, \ldots, T_{r}$ in $G$ such that $V(T_{i})\bigcap V(T_{j})=S$ for any $i, j \in \{1, 2, \ldots, r\}$ with $i\neq j$. For an integer $k$ with $2\leq k\leq n$, the {\it generalized $k$-connectivity} of a graph $G$ is defined as $\kappa_{k}(G)=\min\{\kappa_{G}(S)|S\subseteq V(G)$ and $|S|=k\}$. The generalized $2$-connectivity is exactly the traditional connectivity. Li~\cite{l4} showed that it is NP-complete for a general graph $G$ to decide whether there are $k$ internally disjoint trees connecting $S$, where $k$ is a fixed integer and $S\subseteq V(G).$ Some results~\cite{l2,l5} about upper and lower bounds of the generalized connectivity have been obtained. In addition, there are some results on the generalized $k$-connectivity for some classes of graphs, and most of them are about $k=3$.
For example, Chartrand {\em et al.}~\cite{ch} studied the generalized connectivity of complete graphs; Li {\em et al.}~\cite{LIS} characterized the minimally $2$-connected graphs with generalized connectivity $\kappa_{3}=2$; Li {\em et al.}~\cite{l1} studied the generalized $3$-connectivity of Cartesian product graphs; Li {\em et al.}~\cite{l8} studied the generalized $3$-connectivity of graph products; Li {\em et al.}~\cite{l3} studied the generalized connectivity of the complete bipartite graphs; Li {\em et al.}~\cite{l6} studied the generalized $3$-connectivity of the star graphs and bubble-sort graphs; Li {\em et al.}~\cite{l7} studied the generalized $3$-connectivity of the Cayley graph generated by trees and cycles and Lin and Zhang~\cite{L} studied the generalized $4$-connectivity of hypercubes etc.
In this paper, we focus on the $(n,k)$-bubble-sort graph, denoted by $B_{n,k}$. The complete graph $K_{n}$ and the bubble-sort graph $B_{n}$ are special $(n,k)$-bubble-sort graphs $B_{n,k}$ for $k=1$ and $k=n-1$, respectively. In~\cite{ch}, it was shown that $\kappa_{3}(K_{n})=n-2$ for $n\geq 3$, and in~\cite{l6}, it was shown that $\kappa_{3}(B_{n})=n-2$ for $n\geq 3$. In the following, we study the generalized $3$-connectivity of $B_{n,k}$ for $2\leq k\leq n-1$ and show that $\kappa_{3}(B_{n,k})=n-2$, which generalizes the known results about bubble-sort graphs~\cite{l6}.
The paper is organized as follows. In Section 2, some notation and definitions are given. In Section 3, the connectivity of $(n,k)$-bubble-sort graphs $B_{n,k}$ is determined for $2\leq k\leq n-1$. In addition, the generalized $3$-connectivity of $B_{n,k}$ is determined for $2\leq k\leq n-1$ and an algorithm for constructing $n-1$ internally disjoint paths in $B_{n-1,k-1}$ is proposed. In Section 4, the paper is concluded.
\section{Preliminary}
Let $G=(V, E)$ be a simple, undirected graph. Let $|V(G)|$ be the size of the vertex set and $|E(G)|$ be the size of the edge set. For a vertex $v$ in $G$, we denote by $N_{G}(v)$ the {\em neighbourhood} of the vertex $v$ in $G$ and $N_{G}[v]=N_{G}(v)\bigcup\{v\}$. For $U \subseteq V(G)$, denote $N_G(U)=\bigcup\limits_{v\in U}N_{G}(v)-U$. Let $d_{G}(v)$ denote the degree of the vertex $v$ in $G$ and $\delta(G)$ denote the {\em minimum degree} of the graph $G$. The subgraph induced by $V^{\prime}$ in $G$, denoted by $G[V^{\prime}]$, is the graph whose vertex set is $V^{\prime}$ and whose edge set is the set of all the edges of $G$ with both ends in $V^{\prime}$. A graph is said to be {\em $k$-regular} if for any vertex $v$ of $G$, $d_{G}(v)=k$. Two $xy$-paths $P$ and $Q$ in $G$ are {\em internally disjoint} if they have no common internal vertices, that is, $V(P)\bigcap V(Q)=\{x, y\}$. For $Y\subseteq V(G)$ and $X\subset V(G)\setminus Y$, a family of $(X, Y)$-paths is a family of internally disjoint paths, each starting at a vertex $x\in X$, ending at a vertex $y\in Y$, and having no internal vertices in $X\bigcup Y$. If $X=\{x\}$, a family of $(x, Y)$-paths whose terminal vertices are distinct in $Y$ is referred to as a {\em $k$-fan} from $x$ to $Y$. For terminology and notation not defined here, we follow~\cite{B}.
Let $\Gamma$ be a finite group and $S$ be a subset of $\Gamma$, where the identity of the group does not belong to $S$. The {\em Cayley graph $Cay(\Gamma, S)$} is a digraph with vertex set $\Gamma$ and arc set $\{(g, gs)| g\in \Gamma, s\in S\}$. If $S= S^{-1}$, then $Cay(\Gamma, S)$ is an undirected graph, where $S^{-1}=\{s^{-1}|s \in S\}$.
Let $[n]=\{1,2,\cdots,n\}$ and let $Sym(n)$ denote the group of all permutations on $[n]$. Let $(p_{1}p_{2}\cdots p_{n})$ denote a permutation on $[n]$ and let $(ij)$ denote the transposition that swaps the objects at positions $i$ and $j$, that is, $(p_{1}\cdots p_{i}\cdots p_{j}\cdots p_{n})(ij)=(p_{1}\cdots p_{j}\cdots p_{i}\cdots p_{n})$. Consider the Cayley graph $Cay(Sym(n), T)$, where $T$ is a set of transpositions of $Sym(n)$, and let $G(T)$ be the graph on $n$ vertices $\{1,2,\ldots,n\}$ such that there is an edge $ij$ in $G(T)$ if and only if the transposition $(ij)\in T$~\cite{s}. The graph $G(T)$ is called {\em the transposition generating graph} of $Cay(Sym(n), T)$. It is well known that if $G(T)\cong P_{n}$, where $P_{n}$ is a path with $n$ vertices, then $Cay(Sym(n), T)$ is called an {\em $n$-dimensional bubble-sort graph}, denoted by $B_{n}$.
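For concreteness, this construction can be checked by brute force for $n=4$ (an illustrative sketch with our own helper names, not part of the paper): taking $T$ to be the adjacent transpositions $\{(12),(23),(34)\}$, so that $G(T)$ is a path on $4$ vertices, the resulting Cayley graph $B_{4}$ has $4!=24$ vertices, is $3$-regular, and is connected.

```python
from itertools import permutations

def bubble_sort_graph(n):
    """Cayley graph Cay(Sym(n), T) with T the adjacent transpositions
    (1 2), (2 3), ..., (n-1 n), i.e. G(T) is a path on n vertices."""
    verts = list(permutations(range(1, n + 1)))
    def nbrs(p):
        out = []
        for i in range(n - 1):                 # apply transposition (i+1, i+2)
            q = list(p)
            q[i], q[i + 1] = q[i + 1], q[i]
            out.append(tuple(q))
        return out
    return verts, nbrs

verts, nbrs = bubble_sort_graph(4)
print(len(verts))                              # |Sym(4)| = 24 vertices
assert all(len(set(nbrs(p))) == 3 for p in verts)   # (n-1)-regular
seen, frontier = {verts[0]}, [verts[0]]        # BFS check: the graph is connected
while frontier:
    frontier = [q for p in frontier for q in nbrs(p) if q not in seen]
    seen.update(frontier)
print(len(seen))
```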
As a generalization of $B_{n}$, the $(n,k)$-bubble-sort graph, denoted by $B_{n,k}$, was introduced by Shawash~\cite{Sha} in $2008$. The $(n,k)$-bubble-sort graph $B_{n,k}$ is defined as follows.
\begin{defi}\label{defi2}
Given two positive integers $n$ and $k$ with $n>k$, let $[n]$ denote the set $\{1, 2, \cdots, n\}$ and $P_{n,k}$ be a set of arrangements of $k$ elements in $[n]$. The $(n,k)$-bubble-sort graph $B_{n,k}$ has vertex set $P_{n,k}$, and two vertices $u=a_{1}a_{2}\cdots a_{k}$ and $v=b_{1}b_{2}\cdots b_{k}$ are adjacent if and only if one of the following conditions hold.
\begin{enumerate}
\item [{\rm (a)}] There exists an integer $m\in [2, k]$ such that $a_{m-1}=b_{m}, a_{m}=b_{m-1}$ and $a_{i}=b_{i}$ for all $i\in[k]\setminus \{m-1, m\}$.
\item [{\rm (b)}] $a_{i}=b_{i}$ for all $i\in[k]\setminus \{1\}$ and $a_{1}\neq b_{1}$.
\end{enumerate}
\end{defi}
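As a concrete reading of Definition~\ref{defi2} (an informal sketch; the helper name `adjacent` is ours), conditions (a) and (b) translate directly into code. In $B_{4,2}$, for instance, the vertex $12$ is adjacent to $21$ by (a) and to $32$ and $42$ by (b), but not to $23$:

```python
def adjacent(u, v):
    """Test adjacency in B_{n,k}: u and v are tuples of k distinct symbols."""
    k = len(u)
    if u == v:
        return False
    diff = [i for i in range(k) if u[i] != v[i]]
    # (a) a swap of two consecutive positions m-1 and m
    if (len(diff) == 2 and diff[1] == diff[0] + 1
            and u[diff[0]] == v[diff[1]] and u[diff[1]] == v[diff[0]]):
        return True
    # (b) the arrangements differ only in the first position
    return diff == [0]

print(adjacent((1, 2), (2, 1)),   # rule (a)
      adjacent((1, 2), (3, 2)),   # rule (b)
      adjacent((1, 2), (2, 3)))   # neither rule applies
```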
For $i\in [n]$ and $j\in [k]$, let $V_{n,k}^{j:i}$ be the set of vertices in $B_{n,k}$ whose $j$th position is $i$, that is, $V_{n,k}^{j:i}=\{p|p=p_{1}p_{2}\cdots p_{j}\cdots p_{k}\in P_{n,k}$ and $p_{j}=i\}$. For a vertex $v=p_{1}p_{2}\cdots p_{i}\cdots p_{k}$, we call $p_{i}$ the element at position $i$ of the vertex $v$. For a fixed position $j\in[k]$, $\{V_{n,k}^{j:i}|1\leq i \leq n\}$ forms a partition of $V(B_{n,k})$. Let $B_{n,k}^{j:i}$ denote the subgraph of $B_{n,k}$ induced by $V_{n,k}^{j:i}$. Then for each $j\in [k]$, $B_{n,k}^{j:i}$ is isomorphic to $B_{n-1, k-1}$. Thus, $B_{n,k}$ can be recursively constructed from $n$ copies of $B_{n-1, k-1}$, and $B_{n,k}$ can be decomposed into $n$ subgraphs $B_{n,k}^{j:i}$ according to the $j$th position. By the symmetry of $B_{n,k}$, and for simplicity, we shall take $j$ to be the last position $k$ and use $B_{n,k}^{i}$ to denote $B_{n,k}^{k:i}$. For convenience, let $B_{n,k}=B_{n,k}^{1}\bigoplus B_{n,k}^{2}\bigoplus \cdots \bigoplus B_{n,k}^{n}$, where $\bigoplus$ just denotes the corresponding decomposition of $B_{n,k}$. Obviously, any vertex $u$ of $B_{n,k}^{i}$ has $n-2$ neighbours in $B_{n,k}^{i}$ and one neighbour outside of $B_{n,k}^{i}$, which is called the outside neighbour of $u$.
\begin{figure}[!ht]
\begin{center}
\vskip1cm
\includegraphics[scale=0.2]{1.eps}
\end{center}
\vskip0.5cm
\caption{The $(4, 2)$-bubble-sort graph $B_{4,2}$}\label{F2}
\end{figure}
Let $E(i,j)$ be the set of edges between $B_{n,k}^{i}$ and $B_{n,k}^{j}$, that is, $E(i,j)=\{(p, q)\in E(B_{n,k})|p\in V(B_{n,k}^{i})$ and $q\in V(B_{n,k}^{j})\}$. Clearly, $E(i,j)$ is a matching between $B_{n,k}^{i}$ and $B_{n,k}^{j}$ and $|E(i,j)|=\frac{(n-2)!}{(n-k)!}$. By the definition of $B_{n,k}$, $B_{n,1}$ is isomorphic to $K_{n}$ and $B_{n,n-1}$ is isomorphic to $B_{n}$. It follows that $B_{n,k}$ is a generalization of the bubble-sort graph $B_{n}$. The $(4, 2)$-bubble-sort graph $B_{4,2}$ is depicted in Figure~\ref{F2}.
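These counts are easy to confirm by brute force for a small case (a sketch under our own naming; the adjacency rules are those of Definition~\ref{defi2}): for $B_{4,2}$ one can check that the graph is $(n-1)$-regular, that each copy $B_{n,k}^{i}$ (fixed last position) has $(n-1)!/(n-k)!$ vertices, and that $|E(i,j)|=(n-2)!/(n-k)!$.

```python
from itertools import permutations
from math import factorial

def neighbours(v, n):
    """All B_{n,k}-neighbours of the arrangement v (a tuple of distinct symbols)."""
    k = len(v)
    out = [(x,) + v[1:] for x in range(1, n + 1) if x not in v]  # rule (b)
    for m in range(1, k):                  # rule (a): swap consecutive positions
        w = list(v)
        w[m - 1], w[m] = w[m], w[m - 1]
        out.append(tuple(w))
    return out

n, k = 4, 2
V = list(permutations(range(1, n + 1), k))                 # vertex set P_{n,k}
assert all(len(neighbours(v, n)) == n - 1 for v in V)      # (n-1)-regular
for i in range(1, n + 1):                                  # copies by last position
    copy_i = [v for v in V if v[-1] == i]
    assert len(copy_i) == factorial(n - 1) // factorial(n - k)
E12 = [(u, w) for u in V if u[-1] == 1
       for w in neighbours(u, n) if w[-1] == 2]            # cross edges E(1, 2)
assert len(E12) == factorial(n - 2) // factorial(n - k)
print(len(V), len(E12))
```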
\section{The generalized $3$-connectivity of the $(n, k)$-bubble-sort graph }
In this section, the generalized $3$-connectivity of the $(n, k)$-bubble-sort graph $B_{n,k}$ will be proved. To prove the result, the following lemmas are useful.
\begin{lem}\label{lem1}
Let $B_{n, k}=B_{n, k}^{1}\bigoplus B_{n, k}^{2}\bigoplus \ldots\bigoplus B_{n, k}^{n}$ for $n\geq k+1$ and $1\leq k\leq n-1$. Then the following results hold.
\begin{enumerate}
\item [{\rm (1)}] For any vertex $u$ of $B_{n, k}^{i}$, it has exactly one outside neighbour.
\item [{\rm (2)}] For any copy $B_{n, k}^{i}$, no two vertices in $B_{n, k}^{i}$ have a common outside neighbour. In addition, $|N(B_{n,k}^{i})|=\frac{(n-1)!}{(n-k)!}$ and $|N(B_{n, k}^{i})\bigcap V(B_{n, k}^{j})|=\frac{(n-2)!}{(n-k)!}$ for $i\neq j$.
\end{enumerate}
\end{lem}
\f {\bf Proof.} (1) By the definition of $B_{n, k}$, the result holds clearly.
(2) Let $u, v\in V(B_{n,k}^{i})$ with $u\neq v$. If they had a common outside neighbour $w$, then $w$ would have two outside neighbours $u$ and $v$ lying in the same copy, which contradicts {\rm (1)}. Thus, no two vertices in $B_{n, k}^{i}$ have a common outside neighbour.
Since $|V(B_{n,k}^{i})|=\frac{(n-1)!}{(n-k)!}$ and no two vertices in $B_{n,k}^{i}$ have a common outside neighbor, $|N(B_{n,k}^{i})|=\frac{(n-1)!}{(n-k)!}$ and $|N(B_{n, k}^{i})\bigcap V(B_{n, k}^{j})|=\frac{(n-2)!}{(n-k)!}$ for $i\neq j$.
\hfill\qed
\begin{lem}{\rm(\cite{l5})}\label{lem2}
Let $G$ be a connected graph and $\delta$ be its minimum degree. Then $\kappa_{3}(G)\leq \delta$. Further, if there are two adjacent vertices of degree $\delta$, then $\kappa_{3}(G)\leq \delta-1$.
\end{lem}
\begin{lem}{\rm(\cite{l5})}\label{lem3}
Let $G$ be a connected graph with $n$ vertices. If $\kappa(G)=4k+r,$ where $k$ and $r$ are two integers with $k\geq 0$ and $r\in \{0, 1, 2, 3\},$ then $\kappa_{3}(G)\geq 3k+\lceil\frac{r}{2}\rceil$. Moreover, the lower bound is sharp.
\end{lem}
\begin{lem}{\rm(\cite{B})}\label{lem4}
Let $G=(V, E)$ be a $k$-connected graph, and let $X$ and $Y$ be subsets of $V(G)$ of cardinality at least $k$. Then there exists a family of $k$ pairwise disjoint $(X, Y)$-paths in $G$.
\end{lem}
\begin{lem}{\rm(\cite{B})}\label{lem5}
Let $G=(V, E)$ be a $k$-connected graph, let $x$ be a vertex of $G$, and let $Y\subseteq V\setminus \{x\}$ be a set of at least $k$ vertices of $G$. Then there exists a $k$-fan in $G$ from $x$ to $Y$, that is, there exists a family of $k$ internally disjoint $(x, Y)$-paths whose terminal vertices are distinct in $Y$.
\end{lem}
Next, we determine the connectivity of $B_{n,k}$ for $k=2$.
\begin{lem}\label{lem6}
$\kappa(B_{n,2})=n-1$ for $n\geq 3$.
\end{lem}
\f {\bf Proof.} Let $B_{n, 2}=B_{n, 2}^{1}\bigoplus B_{n, 2}^{2}\bigoplus \ldots\bigoplus B_{n, 2}^{n}$. Let $F$ be a minimum vertex cut of $B_{n, 2}$ and $u\in V(B_{n, 2})$. Since $N_{B_{n, 2}}(u)$ is a vertex cut of $B_{n, 2}$ and $|N_{B_{n, 2}}(u)|=n-1$, $|F|\leq n-1$.
Next, we show that $|F|\geq n-1$. Suppose to the contrary, that is, $|F|\leq n-2$. Let $F_{i}=F\bigcap V(B_{n, 2}^{i})$ for each $i\in\{1, 2,\cdots,n\}$. Without loss of generality, let $|F_{1}|\geq |F_{2}|\geq\cdots\geq |F_{n}|$. Then $|F_{n-1}|=|F_{n}|=0$. By Lemma~\ref{lem1}(2), $B_{n, 2}[V(B_{n, 2}^{n-1})\bigcup V(B_{n, 2}^{n})]$ is connected. Let $C$ be a component of $B_{n, 2}-F$ that does not contain $B_{n, 2}[V(B_{n, 2}^{n-1})\bigcup V(B_{n, 2}^{n})]$ as a subgraph and $c_{i}=|V(C)\bigcap V(B_{n, 2}^{i})|$ for each $i\in\{1, 2,\cdots, n-2\}$. Then there exists an integer $l\in\{1, 2,\cdots, n-2\}$ such that $c_{l}> 0$. Let $u\in V(B_{n, 2}^{l})\bigcap V(C)$ and let $u^{\prime}\in V(B_{n, 2}^{j})$ be the outside neighbour of $u$, where $j\in [n]$ and $l\neq j$.
If $u^{\prime}\in V(B_{n, 2}^{j})\setminus V(C)$, then $u^{\prime}\in F_{j}$. It implies that $|F_{j}|\geq 1$.
If $u^{\prime}\in V(C)$, then $N_{B_{n, 2}^{j}}(V(B_{n, 2}^{n-1})\bigcup V(B_{n, 2}^{n}))\subseteq F_{j}$. Otherwise, the component that contains $B_{n, 2}[V(B_{n, 2}^{n-1})\bigcup V(B_{n, 2}^{n})]$ would be $C$ as $B_{n, 2}^{j}\cong K_{n-1}$, which is a contradiction. By Lemma~\ref{lem1}(2), $|N_{B_{n, 2}^{j}}(V(B_{n, 2}^{n-1})\bigcup V(B_{n, 2}^{n}))|=2$. It implies that $|F_{j}|\geq 2$.
Recall that $B_{n, 2}^{l}$ is a complete graph, then $|F|=|F_{1}\bigcup \cdots \bigcup F_{n}|\geq |V(B_{n, 2}^{l})|-c_{l}+c_{l}=n-1$, a contradiction. Thus, $|F|\geq n-1$.
\hfill\qed
Next, we determine the connectivity of $B_{n, k}$ for $2\leq k\leq n-1$.
\begin{lem}\label{lem7}
$\kappa(B_{n,k})=n-1$ for $2\leq k\leq n-1$.
\end{lem}
\f {\bf Proof.} Let $F$ be a minimum vertex cut of $B_{n, k}$ and $u\in V(B_{n, k})$. Since $N_{B_{n, k}}(u)$ is a vertex cut of $B_{n, k}$ and $|N_{B_{n, k}}(u)|=n-1$, $|F|\leq n-1$.
Next, we show that $\kappa(B_{n,k})\geq n-1$. We prove the result by induction on $k$. When $n\geq 3$ and $k=2$, by Lemma~\ref{lem6}, the result holds. Suppose that the result holds for $B_{n^{\prime},k-1}$, where $2\leq k-1\leq n^{\prime}-2$. Now we consider $B_{n,k}$ for $3\leq k\leq n-2$. Let $F_{i}=F\bigcap V(B_{n, k}^{i})$ for each $i\in\{1, 2,\cdots,n\}$. Without loss of generality, let $|F_{1}|\geq |F_{2}|\geq\cdots\geq |F_{n}|$. Suppose to the contrary, that is, $|F|\leq n-2$. Thus, $|F_{n-1}|=|F_{n}|=0$.
If $|F_{1}|=n-2$, then $|F_{i}|=0$ for each $i\in\{2,3,\cdots,n\}$. By Lemma~\ref{lem1}(2), $B_{n, k}[\bigcup_{i=2}^{n}V(B_{n, k}^{i})]$ is connected. As any vertex in $B_{n, k}^{1}\setminus F_{1}$ has an outside neighbour, $B_{n, k}-F$ is connected, a contradiction.
If $|F_{1}|\leq n-3$, then $|F_{i}|\leq n-3$ for each $i\in\{2,3,\cdots,n\}$. By induction, $B_{n, k}^{i}-F_{i}$ is connected for each $i\in\{1,2,\cdots,n\}$. Moreover, $|F_{n}|=0$ and there are $\frac{(n-2)!}{(n-k)!}$ independent edges between $B_{n, k}^{i}$ and $B_{n, k}^{n}$. Note that $\frac{(n-2)!}{(n-k)!}-|F_{i}|\geq \frac{(n-2)!}{(n-3)!}-|F_{i}|\geq 1$ for each $i\in\{1,2,\cdots,n-1\}$. Then there exists at least one edge between $B_{n, k}^{i}-F_{i}$ and $B_{n, k}^{n}$. It implies that $B_{n, k}-F$ is connected, a contradiction. Thus, $|F|\geq n-1$.
\hfill\qed
To prove the main result, the following lemmas are useful.
\begin{lem}\label{lem8}
Let $B_{n, k}=B_{n, k}^{1}\bigoplus B_{n, k}^{2}\bigoplus \ldots\bigoplus B_{n, k}^{n}$ and $H=B_{n, k}[V(B_{n, k})\setminus V(B_{n, k}^{i})]$ for some $i\in[n]$. If $2\leq k\leq n-1$, then $\kappa(H)=n-2$.
\end{lem}
\f {\bf Proof.} Without loss of generality, let $H=B_{n, k}[V(B_{n, k})\setminus V(B_{n, k}^{n})]$, that is, $H=B_{n, k}^{1}\bigoplus B_{n, k}^{2}\bigoplus \ldots\bigoplus B_{n, k}^{n-1}$. As there is some vertex $v\in V(H)$ whose outside neighbour belongs to $B_{n, k}^{n}$, $\delta(H)=n-2$. Hence, $\kappa(H)\leq \delta (H)=n-2$.
Next, we show that $\kappa(H)\geq n-2$. To prove the result, we just need to show that for any two distinct vertices $v_{1}$ and $v_{2}$ of $H$, there exist at least $n-2$ internally disjoint paths between them. The result is proved by considering the following two cases.
Case 1. $v_{1}$ and $v_{2}$ belong to the same copy of $B_{n-1,k-1}$.
Without loss of generality, let $v_{1}, v_{2}\in V(B_{n, k}^{1})$. By Lemma~\ref{lem7}, $\kappa(B_{n, k}^{1})=n-2$. Hence, there are $n-2$ internally disjoint paths between $v_{1}$ and $v_{2}$ in $B_{n, k}^{1}$.
Case 2. $v_{1}$ and $v_{2}$ belong to different copies of $B_{n-1,k-1}$.
Without loss of generality, let $v_{1}\in V(B_{n, k}^{1})$ and $v_{2}\in V(B_{n, k}^{2})$.
Subcase 2.1. $3\leq k\leq n-1$.
By Lemma~\ref{lem1}(2), there are $\frac{(n-2)!}{(n-k)!}$ independent edges between $B_{n, k}^{1}$ and $B_{n, k}^{2}$. Choose $n-2$ vertices $u_{1}, u_{2}, u_{3},\cdots,$ $u_{n-2}$ from $B_{n, k}^{1}$ such that the outside neighbour $u_{i}^{\prime}$ of $u_{i}$ belongs to $B_{n, k}^{2}$ for each $i\in\{1,2,\cdots,n-2\}$. This can be done as $\frac{(n-2)!}{(n-k)!}\geq n-2$ for $k\geq 3$ and $n\geq k+1$. Let $S=\{u_{1}, u_{2}, u_{3},\cdots, u_{n-2}\}$ and $S^{\prime}=\{u_{1}^{\prime}, u_{2}^{\prime}, u_{3}^{\prime},\cdots, u_{n-2}^{\prime}\}$. By Lemma~\ref{lem7}, $\kappa(B_{n, k}^{1})=\kappa(B_{n, k}^{2})=n-2$. If $v_{1}\notin S$, by Lemma~\ref{lem5}, there exists a family of $n-2$ internally disjoint $(v_{1}, S)$-paths $P_{1}, P_{2}, \cdots, P_{n-2}$ whose terminal vertices are distinct in $S$. Note that if $v_{1}\in S$, then there is a $(v_{1}, S)$-path that consists of the single vertex $v_{1}$. Similarly, if $v_{2}\notin S^{\prime}$, by Lemma~\ref{lem5}, there exists a family of $n-2$ internally disjoint $(v_{2}, S^{\prime})$-paths $P_{1}^{\prime}, P_{2}^{\prime}, \cdots, P_{n-2}^{\prime}$ whose terminal vertices are distinct in $S^{\prime}$. Note that if $v_{2}\in S^{\prime}$, there is a $(v_{2}, S^{\prime})$-path that consists of the single vertex $v_{2}$. Let $\widehat{P_{i}}=P_{i}\bigcup u_{i}u_{i}^{\prime}\bigcup P_{i}^{\prime}$ for each $i\in\{1,2,\cdots,n-2\}$; then $n-2$ internally disjoint paths between $v_{1}$ and $v_{2}$ are obtained in $H$.
Subcase 2.2. $k=2$ and $n\geq 3$.
By Lemma~\ref{lem1}(2), there is exactly one edge between $B_{n, k}^{i}$ and $B_{n, k}^{j}$ for $i\neq j$ and $i,j\in\{1,2,\cdots,n-1\}$. Choose $n-2$ vertices $u_{1}, u_{2}, u_{3},\cdots,$ $u_{n-2}$ from $B_{n, k}^{1}$ such that the outside neighbour $u_{i}^{\prime}$ of $u_{i}$ belongs to $B_{n, k}^{i+1}$ for each $i\in\{1,2,\cdots,n-2\}$, and choose $n-3$ vertices $w_{2}, w_{3},\cdots,$ $w_{n-2}$ from $B_{n, k}^{2}$ such that the outside neighbour $w_{i}^{\prime}$ of $w_{i}$ belongs to $B_{n, k}^{i+1}$ for each $i\in\{2,3,\cdots,n-2\}$. Let $S=\{u_{1}, u_{2}, u_{3},\cdots, u_{n-2}\}$ and $S^{\prime}=\{u_{1}^{\prime}, w_{2}, w_{3},\cdots, w_{n-2}\}$. Note that $B_{n, k}^{i}\cong K_{n-1}$ for each $i\in\{1,2,\cdots,n\}$. If $v_{1}\notin S$, then $S=N_{B_{n, k}^{1}}(v_{1})$. If $v_{1}\in S$, let $v_{1}=u_{1}$. Then $S\setminus \{u_{1}\}\subseteq N_{B_{n, k}^{1}}(v_{1})$. Similarly, if $v_{2}\notin S^{\prime}$, then $S^{\prime}=N_{B_{n, k}^{2}}(v_{2})$. If $v_{2}\in S^{\prime}$, let $v_{2}=u_{1}^{\prime}$. Then $S^{\prime}\setminus \{u_{1}^{\prime}\}\subseteq N_{B_{n, k}^{2}}(v_{2})$. Recall that $B_{n, k}^{i}\cong K_{n-1}$ for $i\in[n-1]$; then $u_{i}^{\prime}w_{i}^{\prime}$ is an edge in $B_{n, k}^{i+1}$ for each $i\in\{2,3,\cdots,n-2\}$. Let $P_{1}=v_{1}u_{1}u_{1}^{\prime}v_{2}$ and $P_{i}=v_{1}u_{i}u_{i}^{\prime}w_{i}^{\prime}w_{i}v_{2}$ for each $2\leq i\leq n-2$; then $n-2$ internally disjoint paths between $v_{1}$ and $v_{2}$ are obtained in $H$.
Hence, $\kappa(H)=n-2$.
\hfill\qed
\begin{lem}\label{lem9}
Let $B_{n, 2}=B_{n, 2}^{1}\bigoplus B_{n, 2}^{2}\bigoplus \ldots\bigoplus B_{n, 2}^{n}$. For any vertex $v\in V(B_{n, 2}^{i})$ for $1\leq i\leq n$, let $N_{B_{n, 2}^{i}}[v]=N_{B_{n, 2}^{i}}(v)\bigcup\{v\}$. Then $|N_{B_{n, 2}^{i}}[v]|=n-1$ and the $n-1$ outside neighbours of vertices in $N_{B_{n, 2}^{i}}[v]$ belong to different copies of $B_{n-1, 1}$.
\end{lem}
\f {\bf Proof.} Let $v\in V(B_{n, 2}^{i})$; then $d_{B_{n, 2}^{i}}(v)=n-2$. Thus, $|N_{B_{n, 2}^{i}}[v]|=n-1$ holds clearly. Without loss of generality, assume $i=2$ and $v=12$. Then $N_{B_{n, 2}^{i}}[v]=\{12, 32, 42, \cdots,n2\}$. Let $S$ be the set of outside neighbours of the vertices in $N_{B_{n, 2}^{i}}[v]$; then $S=\{21, 23, 24, \cdots, 2n\}$. Hence, the outside neighbours are contained in $B_{n, 2}^{1}, B_{n, 2}^{3}, \cdots, B_{n, 2}^{n}$, respectively. The result is desired.
\hfill\qed
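The computation in this proof can be replayed mechanically for a small case (our own sketch, with two-symbol arrangements written as tuples): for $n=6$ and $v=12$ in $B_{6,2}^{2}$, the closed neighbourhood inside the copy has $n-1=5$ vertices, and their outside neighbours, obtained by swapping the two positions, land in the five distinct copies $B_{6,2}^{1}, B_{6,2}^{3},\ldots,B_{6,2}^{6}$.

```python
n = 6
v = (1, 2)                                   # the vertex 12, lying in copy B_{6,2}^{2}
# N[v] inside the copy: v itself plus all (x, 2) with x not in {1, 2}
closed_nbhd = [v] + [(x, 2) for x in range(3, n + 1)]
# swapping the two positions gives the outside neighbour of each vertex
outside = [(u[1], u[0]) for u in closed_nbhd]
copies = sorted(w[1] for w in outside)       # copy index = last position
print(len(closed_nbhd), copies)
```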
Following, we prove the generalized $3$-connectivity of $B_{n,k}$ for $k=2$.
\begin{theorem}\label{thm1}
$\kappa_{3}(B_{n,2})=n-2$ for $n\geq 3$.
\end{theorem}
\f {\bf Proof.} As $B_{n,2}$ is $(n-1)$-regular, by Lemma~\ref{lem2}, $\kappa_{3}(B_{n,2})\leq \delta-1= n-2$. To complete the proof, it suffices to show that $\kappa_{3}(B_{n,2})\geq n-2$. We prove the result by induction on $n$.
For $n=3$, $B_{3,2}$ is connected. Then $\kappa_{3}(B_{3,2})\geq 1=n-2$.
For $n=4$, by Lemma~\ref{lem3} and Lemma~\ref{lem7}, $\kappa_{3}(B_{n, 2})\geq \lceil\frac{3}{2} \rceil=2=n-2$.
Next, suppose that $n\geq 5$. Let $B_{n, 2}=B_{n, 2}^{1}\bigoplus B_{n, 2}^{2}$ $\bigoplus\ldots \bigoplus B_{n,2}^{n}$ and $v_{1}, v_{2}, v_{3}$ be any three distinct vertices of $B_{n,2}$. For convenience, let $S=\{v_{1}, v_{2}, v_{3}\}$. We prove the result by considering the following three cases.
Case 1. $v_{1}, v_{2}$ and $v_{3}$ belong to the same copy of $B_{n-1,1}$.
Without loss of generality, let $v_{1}, v_{2}, v_{3}\in V(B_{n,2}^{1})$. By the inductive hypothesis, $\kappa_{3}(B_{n,2}^{1})\geq n-3$. That is, there are $n-3$ internally disjoint trees $T_{1}, T_{2},\cdots, T_{n-3}$ connecting $S$ in $B_{n,2}^{1}$. Let $v_{1}^{\prime}, v_{2}^{\prime}$ and $v_{3}^{\prime}$ be the outside neighbours of $v_{1}, v_{2}$ and $v_{3}$, respectively. Then $\{v_{1}^{\prime}, v_{2}^{\prime}, v_{3}^{\prime}\}\subseteq V(B_{n, 2})\setminus V(B_{n, 2}^{1})$. As $B_{n, 2}[V(B_{n, 2})\setminus V(B_{n, 2}^{1})]$ is connected, there exists a tree $T$ connecting $v_{1}^{\prime}, v_{2}^{\prime}$ and $v_{3}^{\prime}$ in $B_{n, 2}[V(B_{n, 2})\setminus V(B_{n, 2}^{1})]$. Let $T_{n-2}=T\bigcup v_{1}v_{1}^{\prime}\bigcup v_{2}v_{2}^{\prime}\bigcup v_{3}v_{3}^{\prime}$; then it is a tree connecting $S$ with $V(T_{n-2})\bigcap V(B_{n,2}^{1})=S$. Hence, there exist $n-2$ internally disjoint trees connecting $S$ in $B_{n,2}$ and the result is desired.
Case 2. $v_{1}, v_{2}$ and $v_{3}$ belong to two different copies of $B_{n-1, 1}$.
Without loss of generality, let $v_{1}, v_{2}\in V(B_{n,2}^{1})$ and $v_{3}\in V(B_{n,2}^{2})$. By Lemma~\ref{lem7}, $\kappa(B_{n,2}^{1})=n-2$. Hence, there exist $n-2$ internally disjoint paths $P_{1}, P_{2}, \ldots, P_{n-2}$ between $v_{1}$ and $v_{2}$ in $B_{n,2}^{1}$. Choose $n-2$ distinct vertices $x_{1}, x_{2}, \ldots, x_{n-2}$ from $P_{1}, P_{2}, \ldots, P_{n-2}$ such that $x_{i}\in V(P_{i})$ for each $i\in \{1,2,\cdots,n-2\}$. Note that at most one of these paths has length $1$. If there is one path of length $1$, say $P_{1}$, let $x_{1}=v_{1}$. Let $x_{i}^{\prime}$ be the outside neighbour of $x_{i}$ for each $i\in \{1,2,\cdots,n-2\}$ and let $X^{\prime}=\{x_{1}^{\prime}, x_{2}^{\prime}, \cdots, x_{n-2}^{\prime}\}$; then $X^{\prime}\subset V(B_{n,2})\setminus V(B_{n,2}^{1})$. By Lemma~\ref{lem1}, $|X^{\prime}|=n-2$. By Lemma~\ref{lem8}, $B_{n,2}[V(B_{n,2})\setminus V(B_{n,2}^{1})]$ is $(n-2)$-connected. By Lemma~\ref{lem5}, there exist $n-2$ internally disjoint $(v_{3}, X^{\prime})$-paths $P_{1}^{\prime}, P_{2}^{\prime}, \ldots, P_{n-2}^{\prime}$ in $B_{n,2}[V(B_{n,2})\setminus V(B_{n,2}^{1})]$ whose terminal vertices are distinct in $X^{\prime}$. Note that if $v_{3}\in X^{\prime}$, then there is a $(v_{3}, X^{\prime})$-path that consists of exactly one vertex $v_{3}$. Let $T_{i}=P_{i}\bigcup x_{i}x_{i}^{\prime}\bigcup P_{i}^{\prime}$ for each $i\in\{1,2,\cdots,n-2\}$. Then $n-2$ internally disjoint trees connecting $S$ in $B_{n,2}$ are obtained.
Case 3. $v_{1}, v_{2}$ and $v_{3}$ belong to three different copies of $B_{n-1,1}$, respectively.
Without loss of generality, let $v_{1} \in V(B_{n,2}^{1}), v_{2} \in V(B_{n,2}^{2})$ and $v_{3} \in V(B_{n,2}^{3})$. Let $N_{B_{n,2}^{i}}[v_{i}]=N_{B_{n,2}^{i}}(v_{i})\bigcup\{v_{i}\}$ for $i=1,2,3$. By Lemma~\ref{lem9}, for each $i\in \{1,2,3\}$ and $j\in \{4,5,\cdots, n\}$, there exists one vertex in $N_{B_{n, 2}^{i}}[v_{i}]$, say $u_{i}^{j}$, such that the outside neighbour $(u_{i}^{j})^{\prime}$ of $u_{i}^{j}$ belongs to $B_{n,2}^{j}$. As $B_{n,2}^{j}$ is connected, we can find a tree $\widehat{T}_{j}$ connecting $(u_{1}^{j})^{\prime}, (u_{2}^{j})^{\prime}$ and $(u_{3}^{j})^{\prime}$ for each $j\in\{4,5,\cdots,n\}$.
Since $B_{n-1,1}\cong K_{n-1}$, let $T_{j}=\widehat{T}_{j}\bigcup u_{1}^{j}(u_{1}^{j})^{\prime} \bigcup u_{2}^{j}(u_{2}^{j})^{\prime} \bigcup u_{3}^{j}(u_{3}^{j})^{\prime}\bigcup v_{1}u_{1}^{j}\bigcup v_{2}u_{2}^{j}\bigcup v_{3}u_{3}^{j}$; then $n-3$ internally disjoint trees connecting $S$ are obtained. Let $\widehat{B}_{n,2}^{i}=B_{n,2}^{i}-(\{u_{i}^{4}, u_{i}^{5},\cdots, u_{i}^{n}\}\setminus \{v_{i}\})$. Then there are at most $n-3$ vertices deleted from $B_{n,2}^{i}$ for each $i\in\{1,2,3\}$. As $B_{n,2}^{i}$ is $(n-2)$-connected, $\widehat{B}_{n,2}^{i}$ is still connected. For $i,j \in \{1,2,3\}$ and $i\neq j$, there is exactly one edge between $B_{n,2}^{i}$ and $B_{n,2}^{j}$. Thus, $B_{n,2}[\bigcup_{i=1}^{3}V(\widehat{B}_{n,2}^{i})]$ is connected and there is a tree $T_{n-2}$ connecting $S$. Hence, there exist $n-2$ internally disjoint trees connecting $S$ in $B_{n,2}$ and the result is desired.
\hfill\qed
Next, we prove the generalized $3$-connectivity of $B_{n,k}$ for $3\leq k\leq n-1$.
\begin{theorem}\label{thm2}
$\kappa_{3}(B_{n,k})=n-2$ for $3\leq k\leq n-1$.
\end{theorem}
\f {\bf Proof.} As $B_{n,k}$ is $(n-1)$-regular, by Lemma~\ref{lem2}, $\kappa_{3}(B_{n,k})\leq \delta-1= n-2$. To complete the proof, it suffices to show that $\kappa_{3}(B_{n,k})\geq n-2$. We prove the result by induction on $n$.
For $n=3$, $B_{3,k}$ is connected. Then $\kappa_{3}(B_{3,k})\geq 1=n-2$.
For $n=4$, by Lemma~\ref{lem3} and Lemma~\ref{lem7}, $\kappa_{3}(B_{n,k})\geq \lceil\frac{3}{2} \rceil=2=n-2$.
Next, suppose that $n\geq 5$. Let $B_{n,k}=B_{n,k}^{1}\bigoplus B_{n,k}^{2}$ $\bigoplus\ldots \bigoplus B_{n,k}^{n}$ and $v_{1}, v_{2}, v_{3}$ be any three distinct vertices of $B_{n,k}$. For convenience, let $S=\{v_{1}, v_{2}, v_{3}\}$. We prove the result by considering the following three cases.
Case 1. $v_{1}, v_{2}$ and $v_{3}$ belong to the same copy of $B_{n-1,k-1}$.
Case 2. $v_{1}, v_{2}$ and $v_{3}$ belong to two different copies of $B_{n-1,k-1}$.
Case 3. $v_{1}, v_{2}$ and $v_{3}$ belong to three different copies of $B_{n-1,k-1}$, respectively.
The proofs of Cases $1$ and $2$ are the same as those of Cases $1$ and $2$ in Theorem~\ref{thm1}. Thus, only Case $3$ is considered.
Without loss of generality, let $v_{1} \in V(B_{n,k}^{1}), v_{2} \in V(B_{n,k}^{2})$ and $v_{3} \in V(B_{n,k}^{3})$. Let $v_{1}=p_{1}p_{2}\cdots p_{k-1}1$ and $v_{i}=p_{i}p_{2}\cdots p_{k-1}1$ for $k+1\leq i\leq n$, where $p_{k+1}, p_{k+2}, \cdots, p_{n}$ are distinct elements in $[n]\setminus \{p_{1}, p_{2},\cdots, p_{k-1},1\}$. We now present the algorithm, called $(n-1)$IDP, that constructs $n-1$ internally disjoint paths $P_{2}^{1}, P_{3}^{1},\cdots, P_{n}^{1}$ in $B_{n,k}^{1}$ such that the outside neighbours of the terminal vertices of the $n-1$ paths belong to different copies of $B_{n-1, k-1}$.
\begin{algorithm}[h]
\caption{$(n-1)$IDP}
\begin{algorithmic}[1]
\REQUIRE $n, k$, where $3\leq k \leq n-1$, $v_{1}=p_{1}p_{2}\cdots p_{k-1}1$;
\ENSURE $n-1$ pairwise internally disjoint paths $P_{2}^{1}, P_{3}^{1},\cdots,P_{k}^{1}, P_{k+1}^{1},\cdots, P_{n}^{1}$;
\FOR{$i=2$ to $k-1$}
\STATE $P_{i}^{1}=v_{1}, t=v_{1}$;\
\FOR{$j=i$ to $k-1$}
\STATE $t=t(j-1,j)$ // where $(j-1,j)$ is a transposition
\STATE $P_{i}^{1}=P_{i}^{1}\bigcup t;$\
\ENDFOR
\ENDFOR
\STATE $P_{k}^{1}=v_{1}$;\
\FOR{$i=k+1$ to $n$}
\STATE $P_{i}^{1}=v_{1}v_{i}$, $t=v_{i}=p_{i}p_{2}\cdots p_{k-1}1$;\
\FOR{$j=1$ to $k-2$}
\STATE $t=t(j,j+1)$ // where $(j,j+1)$ is a transposition
\STATE $P_{i}^{1}=P_{i}^{1}\bigcup t;$\
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
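The algorithm is simple enough to simulate directly. The following Python sketch is our own illustration, not part of the paper: vertices of $B_{n,k}^{1}$ are modelled as $k$-tuples, `outside` supplies the symbols $p_{k+1},\ldots,p_{n}$, and the parameters used in the check below ($n=6$, $k=4$, $v_{1}=2341$) are an assumed example.

```python
def transpose(v, a, b):
    """Swap the entries at 1-indexed positions a and b of vertex v."""
    v = list(v)
    v[a - 1], v[b - 1] = v[b - 1], v[a - 1]
    return tuple(v)

def idp_paths(n, k, v1, outside):
    """Sketch of Algorithm (n-1)IDP: build the paths P_2, ..., P_n from v1.

    v1      -- the vertex (p_1, ..., p_{k-1}, 1) as a k-tuple
    outside -- the distinct symbols p_{k+1}, ..., p_n not occurring in v1
    Returns a dict {i: list of vertices of P_i}.
    """
    assert 3 <= k <= n - 1 and len(outside) == n - k
    paths = {}
    for i in range(2, k):                  # P_i for 2 <= i <= k-1
        path, t = [v1], v1
        for j in range(i, k):              # apply transpositions (j-1, j)
            t = transpose(t, j - 1, j)
            path.append(t)
        paths[i] = path
    paths[k] = [v1]                        # P_k is the single vertex v1
    for m, p in enumerate(outside):        # P_i for k+1 <= i <= n
        t = (p,) + v1[1:]                  # v_i = p_i p_2 ... p_{k-1} 1
        path = [v1, t]
        for j in range(1, k - 1):          # apply transpositions (j, j+1)
            t = transpose(t, j, j + 1)
            path.append(t)
        paths[k + 1 + m] = path
    return paths
```

For $n=6$, $k=4$ and $v_{1}=(2,3,4,1)$, the five resulting paths pairwise intersect only in $v_{1}$, which is exactly the content of Claim~\ref{clm1}.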
By the above algorithm, the following $n-1$ paths $P_{2}^{1}, P_{3}^{1},\cdots, P_{n}^{1}$ starting at the vertex $v_{1}$ in $B_{n,k}^{1}$ are obtained, where $p_{k+1}, p_{k+2}, \cdots, p_{n}$ are distinct elements in $[n]\setminus \{p_{1}, p_{2},\cdots, p_{k-1},1\}$.
$P_{2}^{1}=(\underline{p_{1}}p_{2}p_{3}\cdots p_{k-1}1)(p_{2}\underline{p_{1}}p_{3}\cdots p_{k-1}1)(p_{2}p_{3}\underline{p_{1}}\cdots p_{k-1}1)\cdots (p_{2}p_{3}\cdots p_{k-1}\underline{p_{1}}1)$;
$P_{3}^{1}=(p_{1}\underline{p_{2}}p_{3}\cdots p_{k-1}1)(p_{1}p_{3}\underline{p_{2}}\cdots p_{k-1}1)\cdots (p_{1}p_{3}\cdots p_{k-1}\underline{p_{2}}1)$;
$\cdots $
$P_{k-1}^{1}=(p_{1}p_{2}p_{3}\cdots \underline{p_{k-2}}p_{k-1}1)(p_{1}p_{2}p_{3}\cdots p_{k-1}\underline{p_{k-2}}1)$;
$P_{k}^{1}=(p_{1}p_{2}p_{3}\cdots \underline{p_{k-1}}1)$;
$P_{k+1}^{1}=(p_{1}p_{2}p_{3}\cdots p_{k-1}1)(\underline{p_{k+1}}p_{2}p_{3}\cdots p_{k-1}1)(p_{2}\underline{p_{k+1}}p_{3}\cdots p_{k-1}1)(p_{2}p_{3}\underline{p_{k+1}}\cdots p_{k-1}1)\cdots(p_{2}p_{3}p_{4}\cdots\underline{p_{k+1}}1)$;
$P_{k+2}^{1}=(p_{1}p_{2}p_{3}\cdots p_{k-1}1)(\underline{p_{k+2}}p_{2}p_{3}\cdots p_{k-1}1)(p_{2}\underline{p_{k+2}}p_{3}\cdots p_{k-1}1)(p_{2}p_{3}\underline{p_{k+2}}\cdots p_{k-1}1)\cdots(p_{2}p_{3}p_{4}\cdots \underline{p_{k+2}}1)$;
$\cdots $
$P_{n}^{1}=(p_{1}p_{2}p_{3}\cdots p_{k-1}1)(\underline{p_{n}}p_{2}p_{3}\cdots p_{k-1}1)(p_{2}\underline{p_{n}}p_{3}\cdots p_{k-1}1)(p_{2}p_{3}\underline{p_{n}}\cdots p_{k-1}1)\cdots(p_{2}p_{3}p_{4}\cdots\underline{p_{n}}1)$.
\begin{clm}\label{clm1}
For every $a,b\in\{2,3,\cdots,n\}$ with $a\neq b$, we have $V(P_{a}^{1})\bigcap V(P_{b}^{1})=\{v_{1}\}$.
\end{clm}
Proof of Claim $1$. Without loss of generality, suppose that $a< b$.
If $a,b\in\{2,3,\cdots,k\}$, then for any vertex $y\in V(P_{a}^{1})\setminus\{v_{1}\}$, the $a-1$ elements at positions $1,2,\cdots,a-1$ of $y$ are $p_{1}p_{2}\cdots p_{a-2}p_{a}$. However, for any vertex $z\in V(P_{b}^{1})\setminus\{v_{1}\}$, the $a-1$ elements at positions $1,2,\cdots,a-1$ of $z$ are $p_{1}p_{2}\cdots p_{a-2}p_{a-1}$. As $p_{a}\neq p_{a-1}$, we have $y\neq z$. Hence, the claim holds.
If $a, b\in\{k+1,\cdots,n\}$, then any vertex $y\in V(P_{a}^{1})\setminus\{v_{1}\}$ is a permutation of $\{p_{a},p_{2},\cdots,p_{k-1},1\}$, and any vertex $z\in V(P_{b}^{1})\setminus\{v_{1}\}$ is a permutation of $\{p_{b},p_{2},\cdots,p_{k-1},1\}$. As $p_{a}, p_{b}\in [n]\setminus \{p_{1},p_{2},\cdots,p_{k-1},1\}$ and $p_{a}\neq p_{b}$, we have $y\neq z$. Thus, the claim holds.
If $a\in\{2,3,\cdots,k\}$ and $b\in\{k+1,\cdots,n\}$, then any vertex $y\in V(P_{a}^{1})\setminus\{v_{1}\}$ is a permutation of $\{p_{1},p_{2},\cdots,p_{k-1},1\}$ and any vertex $z\in V(P_{b}^{1})\setminus\{v_{1}\}$ is a permutation of $\{p_{b},p_{2},\cdots,p_{k-1},1\}$. As $p_{b}\in[n]\setminus \{p_{1},p_{2},\cdots,p_{k-1},1\}$, we have $p_{1}\neq p_{b}$ and hence $y\neq z$. Thus, the claim holds.
This completes the proof of Claim $1$.
\begin{clm}\label{clm2}
Let $X^{1}=\{u_{i}^{1}|u_{i}^{1}$ is the terminal vertex of the path $P_{i}^{1}$ for each $i\in\{2,3,\cdots,n\}\}$. Then the outside neighbours of vertices in $X^{1}$ belong to different copies of $B_{n-1,k-1}$, respectively.
\end{clm}
Proof of Claim $2$. By Lemma~\ref{lem1}(2), the outside neighbours of the vertices in $X^{1}$ lie in $B_{n,k}^{2},B_{n,k}^{3},\cdots, B_{n,k}^{n}$, respectively. This completes the proof of Claim $2$.
\vskip0.3cm
Without loss of generality, suppose that the outside neighbour $(u_{i}^{1})^{\prime}$ of $u_{i}^{1}$ is in $B_{n,k}^{i}$ for each $i\in\{2,3,4,\cdots,n\}$. Otherwise, we can reorder the paths accordingly.
Similarly, let $v_{2}=p_{1}p_{2}p_{3}\cdots p_{k-1}2$, then there are $n-1$ paths $P_{1}^{2}, P_{3}^{2},\cdots, P_{n}^{2}$ starting at the vertex $v_{2}$ in $B_{n,k}^{2}$. Let $X^{2}=\{u_{1}^{2}, u_{3}^{2}, \cdots, u_{n}^{2}\}$ such that $u_{i}^{2}$ is the terminal vertex of the path $P_{i}^{2}$ and the outside neighbour $(u_{i}^{2})^{\prime}$ of $u_{i}^{2}$ is in $B_{n,k}^{i}$ for each $i\in\{1,3,4,\cdots,n\}$. In addition, there are $n-1$ paths $P_{1}^{3}, P_{2}^{3},\cdots, P_{n}^{3}$ starting at the vertex $v_{3}$ in $B_{n,k}^{3}$. Let $X^{3}=\{u_{1}^{3}, u_{2}^{3}, \cdots, u_{n}^{3}\}$ such that $u_{i}^{3}$ is the terminal vertex of the path $P_{i}^{3}$ and the outside neighbour $(u_{i}^{3})^{\prime}$ of $u_{i}^{3}$ is in $B_{n,k}^{i}$ for each $i\in\{1,2,4,\cdots,n\}$.
Obviously, the outside neighbour $(u_{1}^{3})^{\prime}$ of $u_{1}^{3}$ is in $B_{n,k}^{1}$ and the outside neighbour $(u_{2}^{3})^{\prime}$ of $u_{2}^{3}$ is in $B_{n,k}^{2}$. As $B_{n,k}^{1}$ is connected, there is a $((u_{1}^{3})^{\prime},v_{1})$-path $\widehat{P}_{1}$ in $B_{n,k}^{1}$. Let $t_{1}$ be the first vertex of the path $\widehat{P}_{1}$ which is in $\bigcup_{l\in\{2,3,\cdots,n\}}V(P_{l}^{1})$. Similarly, there is a $((u_{2}^{3})^{\prime},v_{2})$-path $\widehat{P}_{2}$ in $B_{n,k}^{2}$ as $B_{n,k}^{2}$ is connected. Let $t_{2}$ be the first vertex of the path $\widehat{P}_{2}$ which is in $\bigcup_{l\in\{1,3,\cdots,n\}}V(P_{l}^{2})$.
\begin{figure}[!ht]
\begin{center}
\vskip1cm
\includegraphics[scale=0.8]{a.eps}
\end{center}
\vskip0.5cm
\caption{The illustration of Subcase $3.1$ for $t_{1}\in V(P_{3}^{1})$ and $t_{2}\in V(P_{3}^{2})$ }\label{F2}
\end{figure}
To prove the result for $3\leq k\leq n-1$, the following two subcases are considered.
Subcase $3.1$. $t_{1}\in \bigcup_{l\in\{2,3\}} V(P_{l}^{1})$ and $t_{2}\in \bigcup_{l\in\{1,3\}} V(P_{l}^{2})$.
In this case, the induced subgraph $B_{n,k}[V(P_{1}^{3})\bigcup V(P_{2}^{1})\bigcup V(P_{3}^{1})\bigcup V(\widehat{P}_{1}[(u_{1}^{3})^{\prime}, t_{1}])]$ of $B_{n,k}$ contains a $(v_{3},v_{1})$-path, where $\widehat{P}_{1}[(u_{1}^{3})^{\prime}, t_{1}]$ is the subpath of $\widehat{P}_{1}$ starting at $(u_{1}^{3})^{\prime}$ and ending at $t_{1}$. Similarly, the induced subgraph $B_{n,k}[V(P_{2}^{3})\bigcup V(P_{1}^{2})\bigcup V(P_{3}^{2})\bigcup V(\widehat{P}_{2}[(u_{2}^{3})^{\prime}, t_{2}])]$ of $B_{n,k}$ contains a $(v_{3},v_{2})$-path, where $\widehat{P}_{2}[(u_{2}^{3})^{\prime}, t_{2}]$ is the subpath of $\widehat{P}_{2}$ starting at $(u_{2}^{3})^{\prime}$ and ending at $t_{2}$. The union of the $(v_{3}, v_{1})$-path and the $(v_{3}, v_{2})$-path forms a tree $T_{1}$ connecting $S$ in $B_{n,k}$. See Figure $2$.
In addition, as $(u_{j}^{1})^{\prime}, (u_{j}^{2})^{\prime}, (u_{j}^{3})^{\prime}\in V(B_{n,k}^{j})$ for each $j\in\{4,5,\cdots,n\}$ and $B_{n,k}^{j}$ is connected, there is a tree $T_{j}^{\prime}$ connecting $(u_{j}^{1})^{\prime}, (u_{j}^{2})^{\prime}$ and $(u_{j}^{3})^{\prime}$ in $B_{n,k}^{j}$. Let $T_{j}=T_{j}^{\prime}\bigcup P_{j}^{1}\bigcup P_{j}^{2}\bigcup P_{j}^{3}\bigcup u_{j}^{1}(u_{j}^{1})^{\prime}\bigcup u_{j}^{2}(u_{j}^{2})^{\prime}\bigcup u_{j}^{3}(u_{j}^{3})^{\prime}$ for each $j\in\{4,5,\cdots,n\}$. Combining the trees $T_{j}$ for $4\leq j\leq n$ with the tree $T_{1}$, $n-2$ internally disjoint trees connecting $S$ in $B_{n,k}$ are obtained.
Subcase $3.2$. $t_{1}\in \bigcup_{l\in\{4,5,\cdots,n\}} V(P_{l}^{1})$ or $t_{2}\in \bigcup_{l\in\{4,5,\cdots,n\}} V(P_{l}^{2})$.
Without loss of generality, let $t_{1}\in V(P_{4}^{1})$.
Note that $v_{1}=p_{1}p_{2}\cdots p_{k-1}1$. By the assumption that the outside neighbour of the terminal vertex of $P_i^1$ is in
$B_{n,k}^i$ for $i\in \{2,3,\ldots, k\}$, one has $v_{1}=23\cdots k1$. This implies that $p_i=i+1$ for $1\leq i\leq k-1$.
If $k\geq 4$, we obtain that $p_{k-1}\neq 2$ and $p_{3}=4$. Any vertex $v\in V(P_{4}^{1})$ is a permutation of $\{p_{1}, p_{2}, \cdots, p_{k-1},1\}$. Next, we consider the path $P_{2}^{1}$. Note that $u_{2}^{1}$ is the terminal vertex of $P_{2}^{1}$ and $u_{2}^{1}=p_{2}p_{3}\cdots p_{k-1}p_{1}1=34\cdots k21$. We can extend the path $P_{2}^{1}$ starting from $u_{2}^{1}$ as follows: $(3\underline{4}56\cdots k21)(35\underline{4}6\cdots k21)\cdots(35\cdots k2\underline{4}1)$. Let $\widehat{u}_{2}^{1}=35\cdots 241$ and let $\widehat{P}_{2}^{1}$ be the extended path starting at $v_{1}$ and ending at $\widehat{u}_{2}^{1}$. Then the outside neighbour of $\widehat{u}_{2}^{1}$ is in $B_{n,k}^{4}$.
If $k=3$ and $t_{1}\neq v_{1}$, then $v_1=231$ and $4\in[n]\setminus \{p_{1}, p_{2}, 1\}=\{4,5,\ldots, n\}$ and the vertex $t_{1}$ is a permutation of $\{4, p_{2},1\}=\{4,3,1\}$.
Note that $u_{2}^{1}=p_{2}21=321$. Now, we extend the path $P_{2}^{1}$ starting from $u_{2}^{1}$ to $\widehat{P}_{2}^{1}$, where $\widehat{P}_{2}^{1}={P}_{2}^{1}(421)(241)$. Let $\widehat{u}_{2}^{1}=241$ and replace $P_{2}^{1}$ with $\widehat{P}_{2}^{1}$.
The outside neighbour of the terminal vertex $\widehat{u}_{2}^{1}$ of $\widehat{P}_{2}^{1}$ is in $B_{n,k}^{4}$.
Next, we prove the following claim.
\begin{clm}\label{clm3}
$V(\widehat{P}_{2}^{1})\bigcap V(P_{j}^{1})=\{v_{1}\}$ for each $j\in\{3,4,\cdots,n\}$ for $k\geq 3$.
\end{clm}
Proof of Claim $3$. For $k\geq 4$, we prove the result by contradiction. Suppose that there exists $l\in\{3,4,\cdots,n\}$ such that $|V(\widehat{P}_{2}^{1})\bigcap V(P_{l}^{1})|\geq 2$. Assume that $u\in V(\widehat{P}_{2}^{1})\bigcap V(P_{l}^{1})$ and $u\neq v_{1}$. Since $V(P_{2}^{1})\bigcap V(P_{l}^{1})=\{v_{1}\}$, we have $u\notin V(P_{2}^{1})$. Thus, $u\in V(\widehat{P}_{2}^{1})\setminus V(P_{2}^{1})$.
If $u\neq \widehat{u}_{2}^{1}$, then the element at position $k-1$ of $u$ is $2$. However, the element at position $k-1$ of each vertex in $V(P_{l}^{1})$ is $p_{k-1}$ or $k$. As $k\neq 2$ and $p_{k-1}\neq 2$, this is a contradiction.
Next, suppose that $u=\widehat{u}_{2}^{1}$. Then $k=4$ and $u=u_{4}^{1}$. However, the element at position $k-2$ of $u_{4}^{1}$ is $p_{k-1}$, a contradiction.
For $k=3$, any vertex $x\in V(P_{m}^{1})$ with $4\leq m\leq n$ is a permutation of $\{m,3,1\}$. However, any vertex $y\in V(\widehat{P}_{2}^{1})\setminus V(P_{2}^{1})$ is a permutation of $\{4,2,1\}$. Thus, $x\neq y$.
The proof of the claim is complete.
\vskip0.3cm
Similarly, if $t_{2}\in V(P_{\ell }^{2})$ and $\ell \in\{4,5,\cdots,n\}$, we can extend the path $P_{2}^{2}$ to obtain the extended path, say $\widehat{P}_{2}^{2}$, such that the outside neighbour of the terminal vertex of the extended path $\widehat{P}_{2}^{2}$ is in $B_{n,k}^{\ell }$ and there is only one common vertex $v_{2}$ between the extended path and other paths $P_{j}s$ in $B_{n,k}^{2}$.
The induced subgraph $B_{n,k}[V(P_{1}^{3})\bigcup V(\widehat{P}_{1}[(u_{1}^{3})^{\prime}, t_{1}])\bigcup V(P_{4}^{1})]$ contains a $(v_{3}, v_{1})$-path, say $D_1$. Similarly, the induced subgraph $B_{n,k}[V(P_{2}^{3})\bigcup V(\widehat{P}_{2}[(u_{2}^{3})^{\prime}, t_{2}])\bigcup V(P_{\ell}^{2})]$ contains a $(v_{3}, v_{2})$-path, say $D_2$, where $P_{\ell}^{2}$ is the path in $B_{n,k}^{2}$ containing $t_{2}$. Combining $D_1$ and $D_2$, a tree $T_{1}$ connecting $S$ in $B_{n,k}$ is obtained.
As in Subcase $3.1$, by replacing $P_{4}^{1}$ with $\widehat{P}_{2}^{1}$ when $t_{1}\in V(P_{4}^{1})$, or
$P_{\ell}^{2}$ with $\widehat{P}_{2}^{2}$ when $t_{2}\in V(P_{\ell }^{2})$ for $\ell \in\{4,5,\cdots,n\}$,
there is a tree $T_{j}$ connecting $S$ for each $j\in\{4,5,\cdots,n\}$, and the trees $T_{j}$ are internally disjoint $S$-trees. Combining the trees $T_{j}$ for $4\leq j\leq n$ with the tree $T_{1}$, $n-2$ internally disjoint trees connecting $S$ in $B_{n,k}$ are obtained, as desired.
\hfill\qed
\section{Concluding remarks}
The generalized $k$-connectivity is a generalization of the traditional connectivity. In this paper, we focus on the $(n,k)$-bubble-sort graph $B_{n,k}$. We study the generalized $3$-connectivity of $B_{n,k}$ and show that $\kappa_{3}(B_{n,k})=n-2$ for $2\leq k\leq n-1$. So far, there are few results on the generalized $k$-connectivity for larger $k$. We are interested in this topic and would like to pursue this direction to establish the corresponding results for $B_{n,k}$ with $k\geq 4$.
\section*{Acknowledgments}
This work was supported by the National Natural Science Foundation of China (No. 11731002), the Fundamental Research Funds for the Central Universities (No. 2016JBM071, 2016JBZ012) and the $111$ Project of China (B16002).
\section{Introduction}
Let $\Sigma_{g,1}$ be a compact oriented surface of genus $g$ with one boundary component.
The mapping class group $\mathcal{M}_{g,1}$ is the group of isotopy classes of orientation preserving diffeomorphisms of $\Sigma_{g,1}$ which fix the boundary component pointwise.
The Torelli group $\mathcal{I}_{g,1}$, which consists of mapping classes acting trivially on the first homology $H=H_1(\Sigma_{g,1}, \bb{Z})$, is an important subgroup of $\mathcal{M}_{g,1}$.
There is a central filtration
$
\mathcal{I}_{g,1}=\mathcal{M}_{g,1}(1) \supset \mathcal{M}_{g,1}(2) \supset \mathcal{M}_{g,1}(3) \supset \cdots
$
defined by the action on the nilpotent quotients of the fundamental group of $\Sigma_{g,1}$.
The associated graded quotient of this filtration is described by the Johnson homomorphisms
\[
\tau_k^\mathcal{M} : \mathop{\mathrm{gr}}\nolimits^k(\mathcal{M}_{g,1}) \hookrightarrow \mf{h}_{g,1}(k), \quad k\ge 1.
\]
Here, $\mathop{\mathrm{gr}}\nolimits^k(\mathcal{M}_{g,1}) = \mathcal{M}_{g,1}(k)/\mathcal{M}_{g,1}(k+1)$ and $\mf{h}_{g,1}(k)$ is the kernel of the Lie bracket
$H \otimes_\bb{Z} \mathop{\mathcal{L}}\nolimits_{2g}(k+1)\to \mathop{\mathcal{L}}\nolimits_{2g}(k+2)$, where $\mathop{\mathcal{L}}\nolimits_{2g} = \bigoplus_{m\ge 1} \mathop{\mathcal{L}}\nolimits_{2g}(m)$ is the free Lie algebra generated by $H = \mathop{\mathcal{L}}\nolimits_{2g}(1)$.
Note that the collection $\{ \tau_k^\mathcal{M} \}_k$ defines an injective homomorphism of graded Lie algebras:
\[
\tau^\mathcal{M}: \mathop{\mathrm{gr}}\nolimits(\mathcal{M}_{g,1}) = \bigoplus_{k \ge 1} \mathop{\mathrm{gr}}\nolimits^k(\mathcal{M}_{g,1}) \hookrightarrow
\mf{h}_{g,1} = \bigoplus_{k\ge 1} \mf{h}_{g,1}(k).
\]
The space $\mf{h}_{g,1}$ is called the Lie algebra of symplectic derivations \cite{Mo1, Ko1}.
The Johnson homomorphisms were introduced by Johnson \cite{Jo1, Jo2}, and Morita \cite{Mo1} gave a refinement of the target.
For recent developments in the theory of Johnson homomorphisms, we refer to expository articles \cite{HM, Hain19, KK2, MorSurvey, Sak, S3}.
A particularly important fact is that the map $\tau_k^\mathcal{M}$ is equivariant with respect to the action of the group $\mathcal{M}_{g,1}/\mathcal{I}_{g,1} \cong \mathop{\mathrm{Sp}}\nolimits(2g,\bb{Z})$.
This fact enables us to make use of representation theory to analyze $\tau_k^\mathcal{M}$, in particular when we work over a field of characteristic zero.
In what follows, putting $\bb{Q}$ as a subscript or a superscript means that one takes tensor product with the rationals.
As shown by Johnson \cite{Jo1}, the first Johnson homomorphism $\tau_1^\mathcal{M}$ is surjective.
It was first observed by Morita \cite{Mo1} that the map $\tau_k^\mathcal{M}$ is not surjective for higher $k$.
Indeed, for any odd $k \ge 3$, he constructed a surjective homomorphism
$$\mathop{\mathrm{Tr}}\nolimits_k : \mf{h}_{g,1}^\bb{Q}(k) \rightarrow S^kH_\bb{Q},$$
where $S^k$ means the $k$th symmetric tensor product, and proved that $\mathop{\mathrm{Tr}}\nolimits_k \circ \tau_k^\mathcal{M} \equiv 0$.
In other words, the map $\mathop{\mathrm{Tr}}\nolimits_k$ is an obstruction for the surjectivity of the $k$th Johnson homomorphism $\tau_k^\mathcal{M}$.
We call the quotient of $\mf{h}_{g,1}^{\bb{Q}}(k)$ by the image of $\tau_{k,\bb{Q}}^\mathcal{M}$ the $k$th Johnson cokernel of the mapping class group $\mathcal{M}_{g,1}$.
The $\mathop{\mathrm{Sp}}\nolimits$-module structure of the Johnson cokernels becomes an interesting object of study.
The Morita trace $\mathop{\mathrm{Tr}}\nolimits_k$ detects the unique $\mathop{\mathrm{Sp}}\nolimits$-irreducible component $S^k H_{\bb{Q}}$ in the $k$th Johnson cokernel.
In \cite{ES2}, the first and the third authors introduced the $\mathop{\mathrm{Sp}}\nolimits$-homomorphism
\[ c_k:\mf{h}_{g,1}^\bb{Q}(k) \rightarrow \mathop{\mathcal{C}}\nolimits_{2g}^\bb{Q}(k). \]
(See \S \ref{subsec:ESobs} for its definition.)
Here, $\mathop{\mathcal{C}}\nolimits_{2g}^\bb{Q}(k)$ is the quotient module of $H^{\otimes{k}}_\bb{Q}$ with respect to the action of the cyclic group of order $k$
as cyclic permutations of the components of $H^{\otimes{k}}_\bb{Q}$.
By using the third author's result in \cite{Sa}
that the space $\mathop{\mathcal{C}}\nolimits_{2g}^\bb{Q}(k)$ coincides with the $k$th Johnson cokernel of the automorphism group of the free group,
they proved that
\[
\mathop{\mathrm{Im}}\nolimits(\tau^\mathcal{M}_{k,\bb{Q}})\subset \mathop{\mathrm{Ker}}\nolimits(c_k) \subset \mf{h}_{g,1}^\bb{Q}(k)
\]
in a stable range.
The map $c_k$ is a refinement of $\mathop{\mathrm{Tr}}\nolimits_k$ in the sense that $\mathop{\mathrm{Ker}}\nolimits(c_k)\subset \mathop{\mathrm{Ker}}\nolimits(\mathop{\mathrm{Tr}}\nolimits_k)$.
Moreover, in \cite{ES2} it was shown that for $k \equiv 1 \ (\text{mod} \ 4)$ and $k \ge 5$, an $\mathop{\mathrm{Sp}}\nolimits$-irreducible component $[1^k]$ is detected in $\mf{h}_{g,1}^\bb{Q}(k)/\mathop{\mathrm{Ker}}\nolimits(c_k)$, hence in the $k$th Johnson cokernel.
We call this component the anti-Morita obstruction.
There are several studies on the trace maps $c_k$ and their application to the Johnson cokernels.
In \cite{EE}, the first author and Hikoe Enomoto detected several series of hook-type components in $\mf{h}_{g,1}^\bb{Q}(k)/\mathop{\mathrm{Ker}}\nolimits(c_k)$.
Recently, by using the hairy graph complex, Conant \cite{C} detected new $\mathop{\mathrm{Sp}}\nolimits$-components in the Johnson cokernels which cannot be detected by the trace maps $c_k$.
At the present stage, the structure of the Johnson cokernels has not been completely determined.
By using the trace map $c_k$, Morita, Sakasai and Suzuki \cite{MSS} determined it up to degree $6$.
In \cite{KK1}, Kawazumi and the second author introduced the map
\[
\delta_k^{\text{alg}}:\mf{h}^\bb{Q}_{g,1}(k)\to \bigoplus_{\substack{p,q\ge 1, \\ p+q=k}}\mathop{\mathcal{C}}\nolimits_{2g}^\bb{Q}(p)\otimes \mathop{\mathcal{C}}\nolimits_{2g}^\bb{Q}(q)
\]
(See \S \ref{subsec:KKobs} for its definition.)
The map $\delta^{\text{alg}}_k$ arises from the Turaev cobracket, a topological operation which measures self-intersections of curves on a surface.
They showed that
\[
\mathop{\mathrm{Im}}\nolimits(\tau^\mathcal{M}_{k,\bb{Q}})\subset \mathop{\mathrm{Ker}}\nolimits(\delta_k^{\text{alg}})\subset \mf{h}_{g,1}^\bb{Q}(k),
\]
and that $\mathop{\mathrm{Ker}}\nolimits(\delta_k^{\text{alg}}) \subset \mathop{\mathrm{Ker}}\nolimits(\mathop{\mathrm{Tr}}\nolimits_k)$.
The main purpose of this paper is to compare the two obstructions coming from $c_k$ and from $\delta_k^{\text{alg}}$.
Our first result is as follows.
\begin{theorem}\label{t1}
For each $k \ge 1$ and $2g \geq k+2$, we have $\mathop{\mathrm{Ker}}\nolimits(c_k) \subset \mathop{\mathrm{Ker}}\nolimits(\delta_k^{\text{alg}})$.
\end{theorem}
Our proof is based on a relation among several contraction maps defined on $H^* \otimes_\mathbb{Z} \mathop{\mathcal{L}}\nolimits_{2g}(k+1)$; see Theorem~\ref{tC}.
We remark that recently, Alekseev, Kawazumi, Kuno and Naef \cite{AKKN} showed that the above theorem holds for any $g$ in a completely different way.
Our second result gives explicit differences between the two obstructions.
\begin{theorem}\label{t2}
Assume that $g\ge k+1$.
\begin{enumerate}[(i)]
\item For any $k \equiv 1 \ (\text{mod} \ 4)$ such that $k \ge 5$, the $\mathop{\mathrm{Sp}}\nolimits$-irreducible component $[1^k]$ lies in $\mathop{\mathrm{Ker}}\nolimits(\delta_k^{\text{alg}})/\mathop{\mathrm{Ker}}\nolimits(c_k)$.
Thus $\mathop{\mathrm{Ker}}\nolimits(c_k) \subsetneq \mathop{\mathrm{Ker}}\nolimits(\delta_k^{\text{alg}})$.
\item For $k=8$, an $\mathop{\mathrm{Sp}}\nolimits$-irreducible component $[3,1^5]$ appears in $\mathop{\mathrm{Ker}}\nolimits(\delta_8^{\text{alg}})/\mathop{\mathrm{Ker}}\nolimits(c_8)$.
\end{enumerate}
\end{theorem}
Topologically, each of the components in $\mathop{\mathrm{Ker}}\nolimits(\delta_k^{\text{alg}})/\mathop{\mathrm{Ker}}\nolimits(c_k)$ is a component of the $k$th Johnson cokernel, and cannot be detected by the usual Turaev cobracket, but by the framed version of it; see \cite{AKKN} and \cite{Ka}.
By some computer calculations, the first author and Hikoe Enomoto have checked that $[4,1^5]$ also appears in
$\mathop{\mathrm{Ker}}\nolimits(\delta_9^{\text{alg}})/\mathop{\mathrm{Ker}}\nolimits(c_9)$. They conjecture that $[3,1^{k-3}] \ (5 \le k \equiv 0 \pmod{4})$
and $[4,1^{k-4}] \ (9 \le k \equiv 1 \pmod{4})$ appear in $\mathop{\mathrm{Ker}}\nolimits(\delta_k^{\text{alg}})/\mathop{\mathrm{Ker}}\nolimits(c_k)$.
These results and observations suggest that the difference between
$\mathop{\mathrm{Ker}}\nolimits(\delta_k^{\text{alg}})$ and $\mathop{\mathrm{Ker}}\nolimits(c_k)$ is not so small.
\section{Andreadakis-Johnson Theory for $\mathop{\mathrm{Aut}}\nolimits{F_n}$}
In this section, we review the Andreadakis-Johnson filtration and the Johnson homomorphisms of the automorphism groups of free groups.
For details, see \cite{S2} for example.
\subsection{Johnson homomorphisms of $\mathop{\mathrm{Aut}}\nolimits{F_n}$}
\label{subsec:JAut}
Let $F_n$ be a free group of rank $n \geq 2$ with basis $x_1, \ldots ,x_n$ and let $\mathop{\mathrm{Aut}}\nolimits{F_n}$ be the automorphism group of $F_n$.
The group $\mathop{\mathrm{Aut}}\nolimits{F_n}$ acts naturally on the abelianization $H:=\mathop{F_n^{\text{ab}}}:=F_n/[F_n,F_n]$ of $F_n$.
The kernel of this action is called the IA-automorphism group and denoted by $\mathop{\mathrm{IA}}\nolimits_n$.
The basis $x_1,\ldots,x_n$ induces a basis of $H$ and we can identify $\mathop{\mathrm{Aut}}\nolimits{H}$ with the general linear group $\mathop{\mathrm{GL}}\nolimits(n,\mathbb{Z})$.
Thus we have the group extension
\[
1 \to \mathop{\mathrm{IA}}\nolimits_n \to \mathop{\mathrm{Aut}}\nolimits{F_n} \to \mathop{\mathrm{GL}}\nolimits(n,\mathbb{Z}) \to 1.
\]
Let $F_n=\Gamma_n(1)\supset \Gamma_n(2) \supset \cdots$ be the lower central series of $F_n$. Namely it is defined by
$\Gamma_n(1):=F_n$ and $\Gamma_n(k):=[\Gamma_n(k-1),F_n]$ for $k \ge 2$.
It is classically known that the associated graded quotient
$$\mathop{\mathcal{L}}\nolimits_n:=\bigoplus_{k \ge 1}\mathop{\mathcal{L}}\nolimits_n(k), \quad \text{where $\mathop{\mathcal{L}}\nolimits_n(k):=\Gamma_n(k)/\Gamma_n(k+1)$},$$
has the graded Lie algebra structure induced from
the commutator bracket on $F_n$ and is isomorphic to the free Lie algebra generated by $H=\mathop{\mathcal{L}}\nolimits_n(1)$.
Moreover, we have the canonical embedding
$$\mathop{\mathcal{L}}\nolimits_n(k) \hookrightarrow H^{\otimes k}.$$
The group $\mathop{\mathrm{Aut}}\nolimits{F_n}$ acts naturally on $F_n/\Gamma_n(k+1)$.
The kernel of this action is denoted by $\mathcal{A}_n(k)$.
Then the subgroups $\mathcal{A}_n(k)$ form the descending filtration $\mathop{\mathrm{IA}}\nolimits_n=\mathcal{A}_n(1) \supset \mathcal{A}_n(2) \supset \cdots$ which we call
the Andreadakis-Johnson filtration.
Andreadakis proved the following theorem.
\begin{theorem}[Andreadakis \cite{And}] \label{tA} \
\begin{enumerate}[$(i)$]
\item For any $k,\ell \ge 1$, $\sigma \in \mathcal{A}_n(k)$ and $x \in \Gamma_n(\ell)$, we have $\sigma(x)x^{-1} \in \Gamma_n(k+\ell)$.
\item For any $k,\ell \ge 1$, we have $[\mathcal{A}_n(k),\mathcal{A}_n(\ell)]\subset \mathcal{A}_n(k+\ell)$, namely the Andreadakis-Johnson filtration $\{\mathcal{A}_n(k)\}$ is a descending central filtration of $\mathop{\mathrm{IA}}\nolimits_n$.
\end{enumerate}
\end{theorem}
By Theorem \ref{tA} (i), for any $k \ge 1$ we can define the homomorphism
$$\tilde{\tau}_k:\mathcal{A}_n(k) \to \mathop{\mathrm{Hom}}\nolimits_\mathbb{Z}(H,\mathop{\mathcal{L}}\nolimits_n(k+1))$$
by
\[
\sigma \mapsto \big( x \mod \Gamma_n(2) \mapsto \sigma(x)x^{-1} \mod \Gamma_n(k+2) \big).
\]
The kernel of $\tilde{\tau}_k$ coincides with $\mathcal{A}_n(k+1)$ and we obtain the injective homomorphism
\[
\tau_k:\mathop{\mathrm{gr}}\nolimits^k(\mathcal{A}_n) \hookrightarrow \mathop{\mathrm{Hom}}\nolimits_\mathbb{Z}(H,\mathop{\mathcal{L}}\nolimits_n(k+1))
= H^* \otimes_\mathbb{Z} \mathop{\mathcal{L}}\nolimits_n(k+1),
\]
where $\mathop{\mathrm{gr}}\nolimits^k(\mathcal{A}_n):=\mathcal{A}_n(k)/\mathcal{A}_n(k+1)$.
We call $\tau_k$ the $k$th Johnson homomorphism of $\mathop{\mathrm{Aut}}\nolimits{F_n}$.
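To illustrate the definition, consider the IA-automorphism $\sigma$ of $F_3$ (a chosen example of ours, not from the paper) with $\sigma(x_1)=[x_2,x_3]x_1$, $\sigma(x_2)=x_2$ and $\sigma(x_3)=x_3$. Then $\tilde{\tau}_1(\sigma)$ sends the class of $x_1$ to $\sigma(x_1)x_1^{-1}=[x_2,x_3]$ modulo $\Gamma_3(3)$, so $\tau_1(\sigma)=e_1^*\otimes[e_2,e_3]$. A minimal Python sketch of this computation, with free reduction of words:

```python
# Illustrative computation of tau_1 for a chosen IA-automorphism of F_3.
# Words in the free group are lists of (generator, exponent +-1) pairs.

def freely_reduce(word):
    """Cancel adjacent inverse pairs until the word is reduced."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return out

def inv(word):
    """Inverse word: reverse the letters and flip exponents."""
    return [(g, -e) for g, e in reversed(word)]

def comm(u, v):
    """Group commutator [u, v] = u v u^{-1} v^{-1}."""
    return freely_reduce(u + v + inv(u) + inv(v))

x1, x2, x3 = [[(i, 1)] for i in (1, 2, 3)]

# sigma is IA: it acts trivially on H = F_3^{ab}.
sigma_x1 = freely_reduce(comm(x2, x3) + x1)   # sigma(x1) = [x2, x3] x1

# tilde-tau_1(sigma) sends the class of x1 to sigma(x1) x1^{-1} mod Gamma_3(3).
johnson_image_of_x1 = freely_reduce(sigma_x1 + inv(x1))
```

Here `johnson_image_of_x1` reduces to the word $x_2x_3x_2^{-1}x_3^{-1}$, whose class in $\mathop{\mathcal{L}}\nolimits_3(2)$ is $[e_2,e_3]$; the generators $x_2$ and $x_3$ are sent to the empty word.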
Next, we define a variant of the Johnson homomorphism $\tau_k$. Let
$
\mathop{\mathrm{IA}}\nolimits_n=\mathcal{A}'_n(1) \supset \mathcal{A}'_n(2) \supset \cdots
$
be the lower central series of $\mathop{\mathrm{IA}}\nolimits_n$, and set $\mathop{\mathrm{gr}}\nolimits^k(\mathcal{A}'_n):=\mathcal{A}'_n(k)/\mathcal{A}'_n(k+1)$.
By Theorem \ref{tA} (ii), we have $\mathcal{A}'_n(k) \subset \mathcal{A}_n(k)$ for any $k$.
Thus we obtain the (not necessarily injective) homomorphism
\[
\tau'_k := \tau_k \circ i_k :\mathop{\mathrm{gr}}\nolimits^k(\mathcal{A}'_n) \to H^* \otimes_\mathbb{Z} \mathop{\mathcal{L}}\nolimits_n(k+1),
\]
where the map $i_k:\mathop{\mathrm{gr}}\nolimits^k(\mathcal{A}'_n) \to \mathop{\mathrm{gr}}\nolimits^k(\mathcal{A}_n)$ is induced from the inclusion $\mathcal{A}'_n(k) \hookrightarrow \mathcal{A}_n(k)$.
The group $\mathop{\mathrm{Aut}}\nolimits{F_n}$ acts naturally on each graded quotient $\mathop{\mathcal{L}}\nolimits_n(k)$.
Moreover, it acts on the normal subgroup $\mathcal{A}_n(k)$ by conjugation, and hence on the graded quotients $\mathop{\mathrm{gr}}\nolimits^k(\mathcal{A}_n)$ and $\mathop{\mathrm{gr}}\nolimits^k(\mathcal{A}_n')$.
The action of the subgroup $\mathop{\mathrm{IA}}\nolimits_n$ on these quotients is trivial, and
we obtain the well-defined action of the group $\mathop{\mathrm{GL}}\nolimits(n,\mathbb{Z})=\mathop{\mathrm{Aut}}\nolimits{F_n}/\mathop{\mathrm{IA}}\nolimits_n$ on $\mathop{\mathcal{L}}\nolimits_n(k)$, $\mathop{\mathrm{gr}}\nolimits^k(\mathcal{A}_n)$ and $\mathop{\mathrm{gr}}\nolimits^k(\mathcal{A}_n')$.
The homomorphisms $\tau_k$ and $\tau'_k$ are $\mathop{\mathrm{GL}}\nolimits(n,\mathbb{Z})$-equivariant.
In \cite{Sa}, the third author completely determined the structure of the cokernels of $\tau'_k$ in a stable range.
Let $\mathop{\mathcal{C}}\nolimits_n(k)$ be the quotient module of $H^{\otimes{k}}$ by the action of the cyclic group of order $k$.
Namely,
\[
\mathop{\mathcal{C}}\nolimits_n(k):=H^{\otimes{k}}/\langle a_1 \otimes a_2 \otimes \cdots \otimes a_k-a_2 \otimes \cdots \otimes a_k\otimes a_1 | a_i \in H \rangle.
\]
One has $\mathop{\mathcal{C}}\nolimits_n(0)=\mathbb{Z}$ and $\mathop{\mathcal{C}}\nolimits_n(1)=H$.
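Over $\bb{Q}$, the module $\mathop{\mathcal{C}}\nolimits_n(k)$ is the module of coinvariants of the cyclic action on the monomial basis of $H^{\otimes k}_\bb{Q}$, so its dimension is the number of necklaces of length $k$ over $n$ letters, $\frac{1}{k}\sum_{d\mid k}\varphi(d)\,n^{k/d}$. The following sketch (an illustration of ours, not from the paper) cross-checks the Burnside count against a brute-force orbit enumeration:

```python
from math import gcd
from itertools import product

def necklace_count(n, k):
    """Burnside count of cyclic orbits on words of length k over n letters."""
    return sum(n ** gcd(r, k) for r in range(k)) // k

def orbit_count(n, k):
    """Brute-force count of cyclic orbits (canonical form = least rotation)."""
    reps = set()
    for w in product(range(n), repeat=k):
        reps.add(min(w[i:] + w[:i] for i in range(k)))
    return len(reps)
```

For instance, for $n=2$ and $k=4$ both counts give $6$, the rank of $\mathop{\mathcal{C}}\nolimits_2^\mathbb{Q}(4)$.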
Let
$$\pi_k: H^{\otimes{k}} \to \mathop{\mathcal{C}}\nolimits_n(k)$$
be the natural projection, and let
$\Phi_{12}: H^* \otimes_\mathbb{Z} H^{\otimes{k+1}} \to H^{\otimes{k}}$ be the contraction map defined by
\[
\Phi_{12}( f \otimes a_1 \otimes a_2 \otimes \cdots \otimes a_{k+1})
= f(a_1)a_2 \otimes \cdots \otimes a_{k+1},
\]
where $f\in H^*$ and $a_i \in H$.
For simplicity, its restriction to $H^* \otimes_\mathbb{Z} \mathop{\mathcal{L}}\nolimits_n(k+1)$ is denoted by the same letter: thus we obtain the map
$$\Phi_{12}: H^* \otimes _{\mathbb{Z}} \mathop{\mathcal{L}}\nolimits_n(k+1) \to H^{\otimes k}.$$
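On the monomial basis, $\Phi_{12}$ simply pairs $f$ with the first tensor factor; for example, $\Phi_{12}(e_1^*\otimes[e_1,e_2])=\Phi_{12}(e_1^*\otimes(e_1\otimes e_2-e_2\otimes e_1))=e_2$. A minimal sketch (our own illustration, not part of the paper) encoding elements of $H^*\otimes_\mathbb{Z} H^{\otimes(k+1)}$ as coefficient dictionaries:

```python
def phi_12(elt):
    """Contraction on a linear combination {(f, word): coeff}, where f and the
    letters of word are basis indices and e_f^*(e_j) is the Kronecker delta."""
    out = {}
    for (f, word), c in elt.items():
        if f == word[0]:                   # f(a_1) = delta_{f, a_1}
            out[word[1:]] = out.get(word[1:], 0) + c
    return {w: c for w, c in out.items() if c}

# e_1^* (x) [e_1, e_2] = e_1^* (x) (e_1(x)e_2 - e_2(x)e_1)
elt = {(1, (1, 2)): 1, (1, (2, 1)): -1}
```

Applying `phi_12` to `elt` returns the single word $e_2$ with coefficient $1$.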
\begin{theorem}[Satoh, \cite{Sa}]\label{thm-Satoh} \ Suppose $k \ge 2$ and $n \ge k+2$.
\begin{enumerate}[$(i)$]
\item The homomorphism $\pi_k \circ \Phi_{12}: H^* \otimes_{\mathbb{Z}} \mathop{\mathcal{L}}\nolimits_n(k+1) \rightarrow \mathop{\mathcal{C}}\nolimits_n(k)$ is surjective.
\item We have $\mathop{\mathrm{Im}}\nolimits\tau'_{k}=\mathop{\mathrm{Ker}}\nolimits(\pi_k \circ \Phi_{12})$, namely $\mathop{\mathrm{Coker}}\nolimits(\tau'_{k}) \cong \mathop{\mathcal{C}}\nolimits_n(k)$.
\end{enumerate}
\end{theorem}
\noindent
Formulas of the $\mathop{\mathrm{GL}}\nolimits$-irreducible decompositions of $\mathop{\mathcal{C}}\nolimits_n^\mathbb{Q}(k)$ and $\mathop{\mathrm{Im}}\nolimits(\tau'_{k,\mathbb{Q}})$ are given in \cite{ES1}.
\begin{rem}
Recently Darn\'{e} \cite{D} showed that the natural map $i_k:\mathop{\mathrm{gr}}\nolimits^k(\mathcal{A}'_n) \to \mathop{\mathrm{gr}}\nolimits^k(\mathcal{A}_n)$ is surjective
for $n \ge k+2$. This means that the stable $k$th cokernel $\mathrm{Coker}(\tau_k)$ coincides with $\mathop{\mathrm{Coker}}\nolimits(\tau'_{k})$.
Namely, in the stable range, the Johnson cokernels for $\mathop{\mathrm{Aut}}\nolimits{F_n}$ are completely determined over $\mathbb{Z}$.
\end{rem}
\subsection{A generating set of $\mathop{\mathrm{Im}}\nolimits\tau_{k}'$}
Let $e_1, \ldots ,e_n$ be the standard basis of $H=\mathop{F_n^{\text{ab}}}$ induced from the basis $x_1, \ldots ,x_n$ of $F_n$, and $e_1^*, \ldots, e_n^*$ the dual basis of $H^*$.
For any $a_1, a_2, \ldots, a_k \in H$, we set
\[ [a_1, a_2, \ldots, a_k] := [\cdots[[a_1, a_2], a_3], \ldots, a_k] \in \mathcal{L}_n(k). \]
This is called a $k$-simple commutator.
We have a generating set of $\mathop{\mathrm{Im}}\nolimits\tau_{k}'$ as a $\mathbb{Z}$-module in a stable range.
\begin{prop}\label{T-Gen}
Suppose $k \ge 2$ and $n \ge k+2$.
Then the image of $\tau_{k}'$ is generated as a $\mathbb{Z}$-module by the following four types of elements in $H^* \otimes_{\mathbb{Z}} \mathop{\mathcal{L}}\nolimits_n(k+1)$:
\begin{enumerate}[$(i)$]
\item[$(K_1)$] $e_i^* \otimes [e_{i_1},e_{i_2}, \ldots ,e_{i_{k+1}}]$
for any $1 \le i, i_1, \ldots , i_{k+1} \le n$ such that $i_1, \ldots , i_{k+1} \neq i$.
\item[$(K_2)$] $e_i^* \otimes [e_{i_1}, e_{i_2}, \ldots, e_{i_k}, e_i]$
for any $1 \le i, i_1, \ldots , i_k \le n$ such that $i_1, \ldots , i_k \neq i$.
\item[$(K_3)$] $e_i^* \otimes [e_i,e_{i_1}, \ldots ,e_{i_k}]-e_j^* \otimes [e_j,e_{i_k}, e_{i_1}, \ldots, e_{i_{k-1}}]$
for any $1 \le i, j, i_1, \ldots , i_k \le n$ such that $i,j \neq i_1, \ldots ,i_k$ (possibly $i=j$).
\item[$(K_4)$] $\displaystyle e_i^* \otimes [e_{i_1},e_{i_2}, \ldots ,e_{i_{k+1}}] -\sum_{j=1}^{k+1}\delta_{i,i_j}e_m^*\otimes [e_{i_1}, \ldots ,e_{i_{j-1}},e_m,e_{i_{j+1}}, \ldots ,e_{i_k},e_{i_{k+1}}]$
for any $1 \le i, m, i_1, \ldots , i_{k+1} \le n$ such that $i=i_j$ for some $1 \le j \le k+1$ and $m \neq i_1, \ldots ,i_{k+1}$.
\end{enumerate}
\end{prop}
\begin{proof}
It is easily seen that these elements belong to $\mathop{\mathrm{Ker}}\nolimits(\pi_k \circ \Phi_{12})$.
In \S 3.2 in \cite{Sa}, it was shown that these elements belong to $\mathop{\mathrm{Im}}\nolimits\tau_{k}'$.
Furthermore, the argument in the proof of $\mathop{\mathrm{Im}}\nolimits\tau'_{k} \supset \mathop{\mathrm{Ker}}\nolimits(\pi_k \circ \Phi_{12})$
shows that the above elements generate $\mathop{\mathrm{Ker}}\nolimits(\pi_k \circ \Phi_{12})$ as a $\mathbb{Z}$-module.
Since $\mathop{\mathrm{Ker}}\nolimits(\pi_k \circ \Phi_{12}) = \mathop{\mathrm{Im}}\nolimits\tau'_{k}$, we obtain the required result.
\end{proof}
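As a sanity check (ours, not from \cite{Sa}), one can verify numerically that a $(K_3)$ element lies in $\mathop{\mathrm{Ker}}\nolimits(\pi_k\circ\Phi_{12})$ for $k=2$: with the expansion $[y,a,b]=y\otimes a\otimes b-a\otimes y\otimes b-b\otimes y\otimes a+b\otimes a\otimes y$ hard-coded, $\Phi_{12}$ sends $e_1^*\otimes[e_1,e_2,e_3]$ to $e_2\otimes e_3$ and $e_4^*\otimes[e_4,e_3,e_2]$ to $e_3\otimes e_2$, and their difference vanishes in $\mathop{\mathcal{C}}\nolimits_n(2)$:

```python
def expand(y, a, b):
    """k = 2 expansion: [y,a,b] = yab - ayb - bya + bay, as {word: coeff}."""
    return {(y, a, b): 1, (a, y, b): -1, (b, y, a): -1, (b, a, y): 1}

def phi_12(f, elt):
    """Contract e_f^* against the first tensor factor (delta pairing)."""
    out = {}
    for word, c in elt.items():
        if word[0] == f:
            out[word[1:]] = out.get(word[1:], 0) + c
    return out

def project_cyclic(elt):
    """Pass to C_n(k): identify each word with its least cyclic rotation."""
    out = {}
    for w, c in elt.items():
        rep = min(w[i:] + w[:i] for i in range(len(w)))
        out[rep] = out.get(rep, 0) + c
    return {w: c for w, c in out.items() if c}

# The (K_3) element e_1^* (x) [e_1,e_2,e_3] - e_4^* (x) [e_4,e_3,e_2] (i=1, j=4):
first = phi_12(1, expand(1, 2, 3))
second = phi_12(4, expand(4, 3, 2))
diff = {w: first.get(w, 0) - second.get(w, 0) for w in set(first) | set(second)}
```

The cyclic projection of `diff` is the zero element, as predicted by Theorem~\ref{thm-Satoh}.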
We remark that each $\mathop{\mathrm{gr}}\nolimits^k(\mathcal{A}'_n)$ is finitely generated since $\mathop{\mathrm{IA}}\nolimits_n$ is finitely generated.
We should also remark that, due to a recent work by Church, Ershov and Putman \cite{CEP}, each of $\mathcal{A}_n'(k)$ and $\mathcal{A}_n(k)$
is finitely generated in a stable range. However, it seems to be still open to describe an explicit finite generating set for them.
\subsection{Contractions and $\mathop{\mathrm{Im}}\nolimits\tau'_{k}$}
\label{sec:Cont}
We generalize the contraction map $\Phi_{12}$ in \S \ref{subsec:JAut}.
For each $1\le \ell \le k+1$, we consider the contraction map
$\Phi_{1,\ell+1}: H^* \otimes_\mathbb{Z} H^{\otimes{k+1}} \to H^{\otimes{k}}$ defined by the formula
\[
\Phi_{1,\ell+1}( f \otimes a_1 \otimes \cdots \otimes a_{k+1} )
= f(a_\ell) a_1 \otimes \cdots \otimes a_{\ell-1} \otimes a_{\ell+1} \otimes \cdots \otimes a_{k+1},
\]
where $f\in H^*$ and $a_i \in H$.
We denote its restriction to $H^* \otimes_\mathbb{Z} \mathop{\mathcal{L}}\nolimits_n(k+1)$
by the same letter: thus we obtain the map
$$\Phi_{1,\ell+1}: H^* \otimes_\mathbb{Z} \mathop{\mathcal{L}}\nolimits_n(k+1) \to H^{\otimes k}.$$
For $y \in H$ and $e_{i_j}$ for $1 \leq j \leq k$,
in order to describe the expansion of the simple commutator $[y, e_{i_1}, \ldots, e_{i_k}]$ in $H^{\otimes k}$,
we introduce the following notation.
For an ordered subset $S=(j_1, j_2, \ldots, j_l)$ of the ordered set $(i_1, i_2, \ldots, i_k)$, define
\[\begin{split}
e_{\overrightarrow{S}} := e_{j_1} \otimes e_{j_2} \otimes \cdots \otimes e_{j_l}, \hspace{1em}
e_{\overleftarrow{S}} := e_{j_l} \otimes e_{j_{l-1}} \otimes \cdots \otimes e_{j_1}.
\end{split}\]
Let $S^c$ be the ordered complement of $S$. For example, if $S$ is the ordered subset $(2,4,5)$ of $(1, 2, \ldots, 6)$, we have
$S^c =(1,3,6)$ and
\[\begin{split}
e_{\overrightarrow{S}} & =e_{2} \otimes e_{4} \otimes e_{5}, \hspace{1em} e_{\overleftarrow{S}}=e_{5} \otimes e_{4} \otimes e_{2}, \\
e_{\overrightarrow{S^c}} & =e_{1} \otimes e_{3} \otimes e_{6}, \hspace{1em} e_{\overleftarrow{S^c}}=e_{6} \otimes e_{3} \otimes e_{1}.
\end{split}\]
Then, we have
\[ [y, e_{i_1}, \ldots, e_{i_k}] = \sum_S (-1)^{|S|} e_{\overleftarrow{S}} \otimes y \otimes e_{\overrightarrow{S^c}} \]
where $S$ ranges over all ordered subsets of $(i_1, i_2, \ldots, i_k)$, and $|S|$ denotes the number of elements in $S$.
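For example, in the case $k=2$ the sum runs over the four ordered subsets $\emptyset$, $(i_1)$, $(i_2)$ and $(i_1, i_2)$ of $(i_1, i_2)$, and the formula reads
\[
[y, e_{i_1}, e_{i_2}] = y \otimes e_{i_1} \otimes e_{i_2} - e_{i_1} \otimes y \otimes e_{i_2}
- e_{i_2} \otimes y \otimes e_{i_1} + e_{i_2} \otimes e_{i_1} \otimes y,
\]
which coincides with the direct expansion of $[[y, e_{i_1}], e_{i_2}]$ in the tensor algebra.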
We can easily obtain the following lemma.
\begin{lemma}\label{L-HKT}
With the notation above,
for any $1 \leq \ell \leq k+1$, if $i \neq i_1, i_2, \ldots, i_k$ then we have
\[ \Phi_{1,\ell+1}(e_i^* \otimes [e_i, e_{i_1}, \ldots, e_{i_k}]) = \sum_{\substack{S \subset (i_1, i_2, \ldots, i_k)\\[1pt] |S|=\ell-1}} (-1)^{\ell-1} e_{\overleftarrow{S}} \otimes e_{\overrightarrow{S^c}}. \]
\end{lemma}
For any $1\leq \ell \leq k+1$,
define the homomorphism
$$\varpi_\ell:H^{\otimes{k}} \to \mathop{\mathcal{C}}\nolimits_n(\ell-1) \otimes \mathop{\mathcal{C}}\nolimits_n(k-\ell+1)$$
by
\[
\varpi_\ell(
a_1 \otimes \cdots \otimes a_{k})= \pi_{\ell-1}(a_1 \otimes \cdots \otimes a_{\ell-1}) \otimes \pi_{k-\ell+1}(a_\ell \otimes \cdots \otimes a_k),
\]
and set
\[
\Theta_\ell:=\varpi_\ell \circ \Phi_{1,\ell+1}:H^* \otimes_\mathbb{Z} H^{\otimes k+1} \to \mathop{\mathcal{C}}\nolimits_n(\ell-1) \otimes \mathop{\mathcal{C}}\nolimits_n(k-\ell+1). \]
We denote the restriction of this map to $H^* \otimes_\mathbb{Z} \mathop{\mathcal{L}}\nolimits_n(k+1)$ by the same letter:
$$\Theta_\ell: H^* \otimes_\mathbb{Z} \mathop{\mathcal{L}}\nolimits_n(k+1) \to \mathop{\mathcal{C}}\nolimits_n(\ell-1) \otimes \mathop{\mathcal{C}}\nolimits_n(k-\ell+1).$$
\begin{theorem}\label{tC}
Suppose $k\ge 2$ and $n \geq k+2$.
For any $1 \leq \ell \leq k+1$, we have
\[
\mathop{\mathrm{Ker}}\nolimits(\Theta_1) \subset \mathop{\mathrm{Ker}}\nolimits(\Theta_\ell)
\]
in $H^* \otimes_{\mathbb{Z}} \mathop{\mathcal{L}}\nolimits_n(k+1)$.
\end{theorem}
\begin{proof}
By Proposition {\rmfamily \ref{T-Gen}} and $\mathop{\mathrm{Ker}}\nolimits(\Theta_1) = \mathop{\mathrm{Ker}}\nolimits(\pi_k \circ \Phi_{12}) = \mathop{\mathrm{Im}}\nolimits\tau'_{k}$, it suffices to show that
all the generators of type $K_1$, $K_2$, $K_3$ and $K_4$ of $\mathop{\mathrm{Im}}\nolimits\tau'_{k}$ belong to $\mathop{\mathrm{Ker}}\nolimits(\Theta_\ell)$ for any $1 \le \ell \le k+1$.
Clearly, generators of type $K_1$ belong to $\mathop{\mathrm{Ker}}\nolimits(\Theta_\ell)$.
Consider a generator of type $K_2$. We have
\[\begin{split}
\Phi_{1,\ell+1} & (e_i^* \otimes ([e_{i_1}, e_{i_2}, \ldots, e_{i_k}] \otimes e_i - e_i \otimes [e_{i_1}, e_{i_2}, \ldots, e_{i_k}])) \\
&= \begin{cases}
0 \hspace{1em} & \mathrm{if} \hspace{1em} \ell \neq 1, k+1, \\
\pm [e_{i_1}, e_{i_2}, \ldots, e_{i_k}] & \mathrm{if} \hspace{1em} \ell = 1, k+1.
\end{cases}
\end{split}\]
This shows that generators of type $K_2$ belong to $\mathop{\mathrm{Ker}}\nolimits(\Theta_\ell)$,
since $\mathop{\mathcal{L}}\nolimits_n(k)$ is in the kernel of the projection $\pi_k: H^{\otimes k} \to \mathop{\mathcal{C}}\nolimits_n(k)$.
Next, consider a generator
\[ X=e_i^* \otimes [e_i,e_{i_1}, \ldots ,e_{i_k}]-e_j^* \otimes [e_j,e_{i_k}, e_{i_1}, \ldots, e_{i_{k-1}}] \]
of type $K_3$. By Lemma {\rmfamily \ref{L-HKT}}, we have
\[ \Phi_{1,\ell+1}(X)=(-1)^{\ell -1}
\begin{bmatrix} \displaystyle \sum_{\substack{S \subset (i_1, i_2, \ldots, i_k)\\[1pt] |S|=\ell-1}} e_{\overleftarrow{S}} \otimes e_{\overrightarrow{S^c}}
\,\, - \sum_{\substack{T \subset (i_k, i_1, \ldots, i_{k-1})\\[1pt] |T|=\ell-1}} e_{\overleftarrow{T}} \otimes e_{\overrightarrow{T^c}}
\end{bmatrix}. \]
Here the first sum is written as
\[ \sum_{\substack{i_k \in S \\[1pt] |S|=\ell-1}} e_{\overleftarrow{S}} \otimes e_{\overrightarrow{S^c}}
+ \sum_{\substack{i_k \not\in S \\[1pt] |S|=\ell-1}} e_{\overleftarrow{S}} \otimes e_{\overrightarrow{S^c}}, \]
and the second sum is written as
\[ \sum_{\substack{i_k \in T \\[1pt] |T|=\ell-1}} e_{\overleftarrow{T}} \otimes e_{\overrightarrow{T^c}}
+ \sum_{\substack{i_k \not\in T \\[1pt] |T|=\ell-1}} e_{\overleftarrow{T}} \otimes e_{\overrightarrow{T^c}}. \]
Then we have
\[\begin{split}
\sum_{\substack{i_k \in S \\[1pt] |S|=\ell-1}} & e_{\overleftarrow{S}} \otimes e_{\overrightarrow{S^c}}
- \sum_{\substack{i_k \in T \\[1pt] |T|=\ell-1}} e_{\overleftarrow{T}} \otimes e_{\overrightarrow{T^c}} \\
& = \sum_{\substack{S_0 \subset (i_1, \ldots, i_{k-1}) \\[1pt] |S_0|=\ell-2}} e_{i_k} \otimes e_{\overleftarrow{S_0}} \otimes e_{\overrightarrow{S_0^c}}
- \sum_{\substack{T_0 \subset (i_1, \ldots, i_{k-1}) \\[1pt] |T_0|=\ell-2}} e_{\overleftarrow{T_0}} \otimes e_{i_k} \otimes e_{\overrightarrow{T_0^c}} \\
& \overset{\varpi_{\ell}}{\longmapsto} 0
\end{split}\]
since $\pi_{\ell-1}(e_{i_k} \otimes e_{\overleftarrow{S_0}}) = \pi_{\ell-1}(e_{\overleftarrow{S_0}} \otimes e_{i_k})$.
Similarly, the remaining terms
\[
\sum_{\substack{i_k \not\in S \\[1pt] |S|=\ell-1}} e_{\overleftarrow{S}} \otimes e_{\overrightarrow{S^c}}
-
\sum_{\substack{i_k \not\in T \\[1pt] |T|=\ell-1}} e_{\overleftarrow{T}} \otimes e_{\overrightarrow{T^c}}
\]
are annihilated by $\varpi_{\ell}$.
This shows that $\Phi_{1, \ell+1}(X)$ is in the kernel of $\varpi_\ell$,
and thus generators of type $K_3$ belong to $\mathop{\mathrm{Ker}}\nolimits(\Theta_\ell)$ for any $1 \le \ell \le k+1$.
Finally, consider a generator
\[
X=e_i^* \otimes [e_{i_1},e_{i_2}, \ldots ,e_{i_{k+1}}]
-\sum_{j=1}^{k+1}\delta_{i,i_j}e_m^*\otimes [e_{i_1}, \ldots ,e_{i_{j-1}},e_m,e_{i_{j+1}}, \ldots ,e_{i_k},e_{i_{k+1}}]
\]
of type $K_4$.
Assume $i_{j_1}= \cdots = i_{j_t} =i$.
For any $1 \leq \ell \leq k+1$, we can calculate $\Phi_{1, \ell+1}(e_i^* \otimes [e_{i_1},e_{i_2}, \ldots ,e_{i_{k+1}}])$
by taking all contractions between $e_i^*$ and $e_{i_{j_s}}$ for $1 \leq s \leq t$. In particular, the contribution
of the contraction between $e_i^*$ and $e_{i_{j_s}}$ for a fixed $s$ is equal to that of
\[ \Phi_{1, \ell+1}(e_m^*\otimes [e_{i_1}, \ldots ,e_{i_{j_s -1}},e_m,e_{i_{j_s +1}}, \ldots ,e_{i_k},e_{i_{k+1}}]). \]
Therefore we see that $\Phi_{1,\ell+1}(X)=0$.
This completes the proof of Theorem {\rmfamily \ref{tC}}.
\end{proof}
\section{Structures of the Johnson Cokernels of $\mathcal{M}_{g,1}$}
In this section, we turn our attention to the mapping class group $\mathcal{M}_{g,1}$ and prove Theorem \ref{t1}
in Introduction.
\subsection{Johnson homomorphisms for $\mathcal{M}_{g,1}$}
\label{sec:JMG}
We review the Johnson homomorphisms and their cokernels of $\mathcal{M}_{g,1}$, following \cite{ES2}.
Given a base point on the boundary,
the fundamental group $\pi_1(\Sigma_{g,1})$ of the surface $\Sigma_{g,1}$ is a free group $F_{2g}$ of rank $2g$.
Take a basis $x_1, x_2, \ldots, x_{2g}$ of $\pi_1(\Sigma_{g,1})$ such that the product $\prod_{i=1}^g [x_i,x_{i+g}]$ is parallel to the boundary component.
The homology classes $e_1, \ldots ,e_{2g}$ of $x_1, \ldots ,x_{2g}$ form a symplectic basis of the first homology group $H=H_1(\Sigma_{g,1},\mathbb{Z})$.
The natural action of the mapping class group $\mathcal{M}_{g,1}$ on $\pi_1(\Sigma_{g,1})$ induces the Dehn-Nielsen embedding
\[
\varphi:\mathcal{M}_{g,1} \to \mathop{\mathrm{Aut}}\nolimits(\pi_1(\Sigma_{g,1})) \cong \mathop{\mathrm{Aut}}\nolimits{F_{2g}}.
\]
Recall from \S \ref{subsec:JAut} the surjective homomorphism
$\pi:\mathop{\mathrm{Aut}}\nolimits{F_{2g}} \rightarrow \mathop{\mathrm{GL}}\nolimits(2g,\mathbb{Z})$.
The image of $\pi_\mathcal{M}:=\pi \circ \varphi: \mathcal{M}_{g,1} \to \mathop{\mathrm{GL}}\nolimits(2g,\mathbb{Z})$ coincides with the integral symplectic group
\[
\mathop{\mathrm{Sp}}\nolimits(2g,\mathbb{Z}):=\{A \in \mathop{\mathrm{GL}}\nolimits(2g,\mathbb{Z}) ; {}^tA J A=J\},
\]
where
$J=\left(
\begin{array}{cc}
0 & J_g \\
-J_g & 0
\end{array}
\right)$ and the $(g \times g)$-matrix
$J_g$ is equal to $\left(\begin{array}{ccc} O & & 1 \\
& \rotatebox{75}{$\ddots$} & \\
1 & & O
\end{array}\right)$.
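As an elementary illustration of the defining condition, in the case $g=1$ we have $J_g=(1)$, and a direct computation gives
\[
{}^tA J A = (\det A)\, J
\quad \text{for} \quad
A=\left(\begin{array}{cc} a & b \\ c & d \end{array}\right),
\]
so that ${}^tAJA=J$ is equivalent to $\det A=1$; in other words, $\mathop{\mathrm{Sp}}\nolimits(2,\mathbb{Z})=\mathop{\mathrm{SL}}\nolimits(2,\mathbb{Z})$.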
The kernel of $\pi_\mathcal{M}$ is nothing but the Torelli group $\mathcal{I}_{g,1}$.
We obtain the following commutative diagram.
\[
\xymatrix{
1\ar[r] & \mathop{\mathrm{IA}}\nolimits_{2g}\ar[r] & \mathop{\mathrm{Aut}}\nolimits{F_{2g}}\ar[r]^{\pi} & \mathop{\mathrm{GL}}\nolimits(2g,\mathbb{Z})\ar[r] & 1 \\
1\ar[r] & \mathcal{I}_{g,1}\ar[r]\ar@{^(->}[u]^{\varphi|_{\mathcal{I}_{g,1}}} & \mathcal{M}_{g,1}\ar[r]_{\pi_\mathcal{M}}\ar@{^(->}[u]^{\varphi} & \mathop{\mathrm{Sp}}\nolimits(2g,\mathbb{Z})\ar[r]\ar@{^(->}[u] & 1
}
\]
For any $k\ge 1$ we set $\mathcal{M}_{g,1}(k):=\mathcal{M}_{g,1} \cap \mathcal{A}_{2g}(k)$, where $\mathcal{A}_{2g}(k)$ is the $k$th term of the Andreadakis-Johnson filtration of $\mathop{\mathrm{IA}}\nolimits_{2g}$.
Let $\mathcal{I}_{g,1}=\mathcal{M}'_{g,1}(1) \supset \mathcal{M}'_{g,1}(2) \supset \cdots$ be the lower central series of $\mathcal{I}_{g,1}$.
Then we obtain the two homomorphisms
\[
\tau_k^\mathcal{M}:\mathop{\mathrm{gr}}\nolimits^k\mathcal{M}_{g,1}=\mathcal{M}_{g,1}(k)/\mathcal{M}_{g,1}(k+1) \hookrightarrow H^* \otimes_\mathbb{Z} \mathop{\mathcal{L}}\nolimits_{2g}(k+1)
\]
and
\[
{\tau'_k}^\mathcal{M}:\mathop{\mathrm{gr}}\nolimits^k\mathcal{M}'_{g,1}=\mathcal{M}'_{g,1}(k)/\mathcal{M}'_{g,1}(k+1) \rightarrow H^* \otimes_\mathbb{Z} \mathop{\mathcal{L}}\nolimits_{2g}(k+1)
\]
induced from the Dehn-Nielsen embedding and the Johnson homomorphisms of $\mathop{\mathrm{Aut}}\nolimits{F_n}$.
We call $\tau_k^\mathcal{M}$ and ${\tau'_k}^\mathcal{M}$ the $k$th Johnson homomorphisms for $\mathcal{M}_{g,1}$.
By an argument similar to that of the Johnson homomorphisms of $\mathop{\mathrm{Aut}}\nolimits{F_n}$,
we see that the group $\mathop{\mathrm{Sp}}\nolimits(2g,\mathbb{Z})$ acts naturally on the source and the target of the maps $\tau_k^\mathcal{M}$ and ${\tau'_k}^\mathcal{M}$,
and that $\tau_k^\mathcal{M}$ and ${\tau'_k}^\mathcal{M}$ are $\mathop{\mathrm{Sp}}\nolimits(2g,\mathbb{Z})$-equivariant
homomorphisms.
We remark that the homomorphism ${\tau'_k}^\mathcal{M}$ is not necessarily injective.
However, the following seminal work of Hain \cite{Hai} shows that the rational images of $\tau_k^\mathcal{M}$ and ${\tau'_k}^\mathcal{M}$ are equal.
\begin{theorem}[Hain \cite{Hai}]\label{thm-Hain}
We have $\mathop{\mathrm{Im}}\nolimits\tau_{k,\mathbb{Q}}^\mathcal{M}=\mathop{\mathrm{Im}}\nolimits{\tau'_{k,\mathbb{Q}}}^{\hspace{-3mm}\mathcal{M}}$ in $H_\mathbb{Q}^* \otimes_\mathbb{Q} \mathop{\mathcal{L}}\nolimits_{2g}^\mathbb{Q}(k+1)$.
\end{theorem}
The space $H^*$ is canonically isomorphic to $H$ by the Poincar\'{e} duality and we can identify $H^* \otimes \mathop{\mathcal{L}}\nolimits_{2g}(k+1)$ with $H \otimes \mathop{\mathcal{L}}\nolimits_{2g}(k+1)$.
In \cite{Mo1}, Morita proved that $\mathop{\mathrm{Im}}\nolimits\tau_{k}^\mathcal{M} \subset \mf{h}_{g,1}(k)$,
where $\mf{h}_{g,1}(k)$ is the kernel of the left bracketing homomorphism
\[
H \otimes \mathop{\mathcal{L}}\nolimits_{2g}(k+1) \rightarrow \mathop{\mathcal{L}}\nolimits_{2g}(k+2), \quad X\otimes u \mapsto [X,u].
\]
\subsection{Enomoto-Satoh's obstructions}
\label{subsec:ESobs}
In \cite{ES2}, Enomoto and Satoh introduced new classes in the Johnson cokernels.
These classes are defined by the $\mathop{\mathrm{Sp}}\nolimits$-homomorphism
\[
c_k:\mf{h}^\mathbb{Q}_{g,1}(k) \hookrightarrow H_\mathbb{Q} \otimes_\mathbb{Q} \mathop{\mathcal{L}}\nolimits_{2g}^\mathbb{Q}(k+1) \cong H_\mathbb{Q}^* \otimes_\mathbb{Q} \mathop{\mathcal{L}}\nolimits_{2g}^\mathbb{Q}(k+1) \overset{\Theta_1}{\twoheadrightarrow} \mathop{\mathcal{C}}\nolimits_{2g}^\mathbb{Q}(k),
\]
where $\Theta_1$ has been introduced in \S \ref{sec:Cont}.
The following commutative diagram holds:
\[
\xymatrix{
& & \mathop{\mathrm{Im}}\nolimits(\tau'_{k,\mathbb{Q}})\ar@{^(->}[rr] & & H_\mathbb{Q}^* \otimes_\mathbb{Q} \mathop{\mathcal{L}}\nolimits_{2g}^\mathbb{Q}(k+1)\ar@{->>}[r]^{\hspace{2.5em} \Theta_1} & \mathop{\mathcal{C}}\nolimits_{2g}^\mathbb{Q}(k) \\
\mathop{\mathrm{Im}}\nolimits(\tau_{k,\mathbb{Q}}^\mathcal{M})\ar@{=}[rr]^{\!\rm{Thm. \, \ref{thm-Hain}}} & & \mathop{\mathrm{Im}}\nolimits({\tau'}_{k,\mathbb{Q}}^{\hspace{0.5mm}\mathcal{M}})\ar@{^(->}[r]\ar@{^(->}[u] &
\mf{h}_{g,1}^{\mathbb{Q}}(k)\ar@{^(->}[r]\ar@<0.5ex>@{.>}[rru]^(.35){c_k} & H_\mathbb{Q} \otimes_\mathbb{Q} \mathop{\mathcal{L}}\nolimits_{2g}^\mathbb{Q}(k+1)\ar@{=}[u]\ar@{->>}[r] & \mathop{\mathcal{L}}\nolimits_{2g}^\mathbb{Q}(k+2)
}
\]
By using Theorem \ref{thm-Satoh} and Theorem \ref{thm-Hain}, they \cite{ES2} proved that
\[
\mathop{\mathrm{Im}}\nolimits(\tau_{k,\mathbb{Q}}^\mathcal{M})\subset \mathop{\mathrm{Ker}}\nolimits(c_k) \subset \mf{h}_{g,1}^\mathbb{Q}(k).
\]
\subsection{Kawazumi-Kuno's obstructions}
\label{subsec:KKobs}
In \cite{KK1}, Kawazumi and Kuno introduced another type of classes in the Johnson cokernels
by using some topological consideration on self-intersections of loops on the surface $\Sigma_{g,1}$.
In more detail, they considered an operation called the Turaev cobracket, and showed that its graded version $\delta^{\text{alg}}$ gives rise to an obstruction for the Johnson image.
(For more details, see \cite{KK1} and \cite{KK2}.)
The map $\delta^{\text{alg}}$ is homogeneous of degree $(-2)$ and the degree $k$ part
\[
\delta_k^{\text{alg}}: H_\mathbb{Q}^{\otimes k+2} \to \bigoplus_{\substack{p,q\ge 1, \\ p+q=k}}\mathop{\mathcal{C}}\nolimits^\mathbb{Q}_{2g}(p) \otimes \mathop{\mathcal{C}}\nolimits_{2g}^\mathbb{Q}(q)
\]
sends $a_1 \otimes \cdots \otimes a_{k+2}$ to
\[
\sum_{\substack{1 \le i<j \le k+2, \\ 1<j-i<k+1}}a_i^*(a_j)
\left\{
\begin{array}{l}
\pi(a_{i+1} \otimes \cdots \otimes a_{j-1})\otimes \pi(a_{j+1} \otimes \cdots \otimes a_{k+2} \otimes a_1 \otimes \cdots \otimes a_{i-1}) \\
-\pi(a_{j+1} \otimes \cdots \otimes a_{k+2} \otimes a_1 \otimes \cdots \otimes a_{i-1})\otimes \pi(a_{i+1} \otimes \cdots \otimes a_{j-1})
\end{array}
\right\}.
\]
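For example, in the lowest case $k=2$, the only pairs satisfying the conditions are $(i,j)=(1,3)$ and $(2,4)$, and the formula reads
\[
\delta_2^{\text{alg}}(a_1 \otimes a_2 \otimes a_3 \otimes a_4)
= a_1^*(a_3)\left\{ \pi(a_2) \otimes \pi(a_4) - \pi(a_4) \otimes \pi(a_2) \right\}
+ a_2^*(a_4)\left\{ \pi(a_3) \otimes \pi(a_1) - \pi(a_1) \otimes \pi(a_3) \right\}.
\]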
Here, $a_i^* \in H^*_\mathbb{Q}$ is the element corresponding to $a_i\in H_\mathbb{Q}$ through the Poincar\'e duality $H^*_\mathbb{Q}=H_\mathbb{Q}$,
and $\pi$ denotes the projection $\pi_l: H_{\mathbb{Q}}^{\otimes l} \to \mathop{\mathcal{C}}\nolimits^\mathbb{Q}_{2g}(l)$ when it is applied to $H_{\mathbb{Q}}^{\otimes l}$.
By restriction (and using the same letter), we obtain the map
\[
\delta_k^{\text{alg}}: \mf{h}_{g,1}(k) \to
\bigoplus_{\substack{p,q\ge 1, \\ p+q=k}}\mathop{\mathcal{C}}\nolimits^\mathbb{Q}_{2g}(p) \otimes \mathop{\mathcal{C}}\nolimits_{2g}^\mathbb{Q}(q).
\]
In \cite{KK1}, it was shown that
\[
\mathop{\mathrm{Im}}\nolimits(\tau_k^\mathcal{M}) \subset \mathop{\mathrm{Ker}}\nolimits(\delta^{\text{alg}}_k) \subset \mf{h}_{g,1}(k).
\]
\subsection{Proof of Theorem \ref{t1}}
\label{subsec:pft1}
Here we give a proof of Theorem \ref{t1}.
Recall from \S \ref{sec:Cont} the homomorphism $\Theta_\ell: H_\mathbb{Q}^* \otimes H_\mathbb{Q}^{\otimes k+1} \to \mathop{\mathcal{C}}\nolimits_{2g}^\mathbb{Q}(\ell-1) \otimes \mathop{\mathcal{C}}\nolimits_{2g}^\mathbb{Q}(k-\ell+1)$.
We can regard it as a map from $H_\mathbb{Q}^{\otimes k+2} = H_\mathbb{Q} \otimes H_\mathbb{Q}^{\otimes k+1}$ by the Poincar\'e duality.
\begin{proof}[Proof of Theorem \ref{t1}]
Let $\zeta$ be the cyclic permutation of the components of $H_\mathbb{Q}^{\otimes k+2}$ given by $\zeta(a_1\otimes a_2\otimes \cdots \otimes a_{k+2}):=a_2\otimes \cdots \otimes a_{k+2}\otimes a_1$ and set $\displaystyle \zeta_{k+2}:=\sum_{i=0}^{k+1} \zeta^i \in {\rm End}(H_\mathbb{Q}^{\otimes k+2})$.
Then, we see that
\[
\delta^{\text{alg}}_k =(\Theta_2+\cdots+\Theta_k) \zeta_{k+2}
\]
on $H^{\otimes k+2}_\mathbb{Q}$.
Since any element of $\mf{h}_{g,1}^\mathbb{Q}(k)$ is $\zeta$-invariant in $H_\mathbb{Q}^{\otimes k+2}$
(for instance, see \cite[Proposition 5.2]{ES2}), one has
$\delta^{\text{alg}}_k=(k+2)(\Theta_2+\cdots +\Theta_k)$ on $\mf{h}_{g,1}^\mathbb{Q}(k)$.
The homomorphism $\Theta_1$ is nothing but the trace map $c_k$,
and hence $\mathop{\mathrm{Ker}}\nolimits{c_k}=\mathop{\mathrm{Ker}}\nolimits{\Theta_1}$.
By Theorem \ref{tC}, $\mathop{\mathrm{Ker}}\nolimits{\Theta_1} \subset \mathop{\mathrm{Ker}}\nolimits{\Theta_\ell}$ for any $\ell \ge 2$.
Therefore, $\mathop{\mathrm{Ker}}\nolimits{c_k} \subset \mathop{\mathrm{Ker}}\nolimits{\delta_k^{\mathrm{alg}}}$ on $\mf{h}_{g,1}(k)$.
\end{proof}
\begin{rem}
There is a refinement of $\delta^{\text{alg}}_k$ which uses the same formula but
allows $j-i$ to be $1$ or $k+1$, so that $p$ and $q$ can be zero in the target.
This map comes from a framed version of the Turaev cobracket and actually carries
the same information as $c_k$.
For more detail, see \cite{AKKN} and \cite{Ka}.
\end{rem}
\section{Proof of Theorem \ref{t2}}
In this section, we prove Theorem \ref{t2}.
We consider polynomial representations of $\mathop{\mathrm{GL}}\nolimits(2g,\bb{Q})$ and rational representations of $\mathop{\mathrm{Sp}}\nolimits(2g,\bb{Q})$.
The isomorphism classes of $\mathop{\mathrm{GL}}\nolimits$-irreducible polynomial representations are parametrized by partitions $\lambda$ such that their lengths $\ell(\lambda)$ are at most $2g$.
We denote by $(\lambda)$ the $\mathop{\mathrm{GL}}\nolimits$-irreducible polynomial representation corresponding to a partition $\lambda$.
The isomorphism classes of $\mathop{\mathrm{Sp}}\nolimits$-irreducible rational representations are parametrized by partitions $\lambda$ such that their lengths $\ell(\lambda)$ are at most $g$.
We denote by $[\lambda]$ the $\mathop{\mathrm{Sp}}\nolimits$-irreducible rational representation corresponding to a partition $\lambda$.
\subsection{Anti-Morita obstruction $[1^k]$}
In this subsection, we prove Theorem \ref{t2}(i).
First, we recall the anti-Morita obstruction. In \cite{ES2}, we have the following result.
\begin{theorem}[Enomoto and Satoh {\cite[Theorem 1]{ES2}}]
Suppose $g \ge k+1$, $k \equiv 1 \pmod{4}$ and $k \ge 5$.
The multiplicity of the $\mathop{\mathrm{Sp}}\nolimits$-irreducible representation $[1^k]$ is exactly one in
$\mf{h}_{g,1}^\bb{Q}(k)/\mathop{\mathrm{Ker}}\nolimits(c_k)$.
\end{theorem}
We also recall the $\mathop{\mathrm{GL}}\nolimits$-irreducible decomposition of $\mathop{\mathcal{C}}\nolimits^\mathbb{Q}_{2g}(k)$
obtained by \cite{ES1}.
\begin{lemma}[{\cite[Corollary 4.2(2)]{ES1}}]\label{lem:c}
Suppose $2g \ge k$.
The multiplicity $[\mathop{\mathcal{C}}\nolimits_{2g}^\bb{Q}(k):(1^k)]$ of $(1^k)$ in $\mathop{\mathcal{C}}\nolimits_{2g}^{\bb{Q}}(k)$
is equal to $1$ if $k$ is odd, and $0$ if $k$ is even.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{t2}(i).]
Note that $g \ge k+1$ implies $2g \ge k$.
Assume $k \equiv 1 \pmod{4}$ and $k \ge 5$.
To prove that the $\mathop{\mathrm{Sp}}\nolimits$-homomorphism $\delta_k^{\text{alg}}: \mf{h}_{g,1}(k) \to \bigoplus_{\substack{p,q\ge 1, \\ p+q=k}}\mathop{\mathcal{C}}\nolimits^\mathbb{Q}_{2g}(p) \otimes \mathop{\mathcal{C}}\nolimits_{2g}^\mathbb{Q}(q)$
annihilates the $\mathop{\mathrm{Sp}}\nolimits$-irreducible component $[1^k]$ in $\mf{h}_{g,1}(k)/\mathop{\mathrm{Ker}}\nolimits(c_k)$,
it is sufficient to show that $[1^k]$ does not appear in any $\mathop{\mathcal{C}}\nolimits^\mathbb{Q}_{2g}(p) \otimes \mathop{\mathcal{C}}\nolimits_{2g}^\mathbb{Q}(q)$ for $p,q \ge 1$ and $p+q=k$.
By the $\mathop{\mathrm{GL}}\nolimits$-$\mathop{\mathrm{Sp}}\nolimits$ branching rule,
it is enough to show that there is no $\mathop{\mathrm{GL}}\nolimits(2g, \bb{Q})$-irreducible representation $(1^k)$ in
$\mathop{\mathcal{C}}\nolimits_{2g}^\bb{Q}(p) \otimes \mathop{\mathcal{C}}\nolimits_{2g}^{\bb{Q}}(q)$ for $p+q=k$ and $p,q\ge 1$.
For partitions $\mu$ and $\nu$ of $p$ and $q$ respectively, suppose $(\mu) \otimes (\nu)$ has the $\mathop{\mathrm{GL}}\nolimits$-irreducible representation $(1^k)$.
If $\ell(\mu)<p$ or $\ell(\nu)<q$, we have $\ell(\mu)+\ell(\nu)<k$. Then by the Littlewood-Richardson rule, there is no $\mathop{\mathrm{GL}}\nolimits$-irreducible representation $(1^k)$ in $(\mu) \otimes (\nu)$. Hence, we consider $\ell(\mu)=p$ and $\ell(\nu)=q$.
This case is nothing but $\mu=(1^p)$ and $\nu=(1^q)$. Since $p+q=k \equiv 1 \pmod{4}$, the parities of $p$ and $q$ are different.
By Lemma \ref{lem:c}, there is no component $(1^p) \otimes (1^q)$ in $\mathop{\mathcal{C}}\nolimits_{2g}^\bb{Q}(p) \otimes \mathop{\mathcal{C}}\nolimits_{2g}^{\bb{Q}}(q)$. This is a contradiction.
\end{proof}
\begin{rem}\label{rem:c}
In particular, for $5\le k \equiv 1 \pmod{4}$ and $g\ge k+1$,
an $\mathop{\mathrm{Sp}}\nolimits$-irreducible component $[1^k]$ appears in $\mathop{\mathrm{Ker}}\nolimits(\Theta_2)/\mathop{\mathrm{Ker}}\nolimits(c_k)$,
thus $\mathop{\mathrm{Ker}}\nolimits(c_k) \neq \mathop{\mathrm{Ker}}\nolimits(\Theta_2)$.
\end{rem}
\subsection{A hook-type component $[3,1^5]$}
\quad In this subsection, we prove Theorem \ref{t2} (ii). \\
\quad First, in \cite[Theorem 1.1]{EE},
several series of hook-type $\mathop{\mathrm{Sp}}\nolimits$-irreducible components $[r+1,1^{k-r-1}]$
are detected in the $k$th Johnson cokernel $\mf{h}_{g,1}(k)/\mathop{\mathrm{Ker}}\nolimits(c_k)$.
The $\mathop{\mathrm{Sp}}\nolimits$-irreducible representation $[3,1^5]$, for $k=8$ and $r=2$, is one such component.
\begin{prop}
For $g\ge 9$, an $\mathop{\mathrm{Sp}}\nolimits$-irreducible component $[3,1^5]$ appears in $\mf{h}_{g,1}(8)/\mathop{\mathrm{Ker}}\nolimits(c_8)$.
\end{prop}
Note that the multiplicity of $[3,1^5]$ is larger than or equal to $1$ in each
$\mathop{\mathcal{C}}\nolimits_{2g}^\bb{Q}(p) \otimes \mathop{\mathcal{C}}\nolimits_{2g}^{\bb{Q}}(8-p)$ for $1 \le p \le 7$.
Therefore, to prove that $[3,1^5]$ lies in
$\mathop{\mathrm{Ker}}\nolimits(\delta_8^{\text{alg}})/\mathop{\mathrm{Ker}}\nolimits(c_8)$,
we need an approach different from that of the previous subsection.
We consider a maximal vector which gives a component $[3,1^5]$ in $\mf{h}_{g,1}(8)/\mathop{\mathrm{Ker}}\nolimits(c_8)$ and
prove that it lies in $\mathop{\mathrm{Ker}}\nolimits(\Theta_\ell)$ for $2 \le \ell \le 8$.
As in \S \ref{sec:JMG} we fix a symplectic basis $\{e_1, \ldots ,e_g,e_{g+1}, \ldots ,e_{2g}\}$ of $H_\bb{Q}$.
Set $i':=2g-i+1$ for each integer $1 \le i \le 2g$.
We see that
\begin{eqnarray*}
\langle e_i,e_j\rangle=0=\langle e_{i'},e_{j'}\rangle,\quad
\langle e_i,e_{j'}\rangle=\delta_{ij}=-\langle e_{j'},e_{i}\rangle, \quad (1 \le i,j \le g).
\end{eqnarray*}
For each integer $1 \le i \le 2g$, we define
$
e_i^*=\left\{
\begin{array}{ll}
e_{i'}, & (1 \le i \le g), \\
-e_{i'}, & (g+1 \le i \le 2g).
\end{array}
\right.
$
Then
$\langle e_i,e_j^* \rangle=\delta_{ij}$ for any $i,j$.
Set $\omega=\displaystyle \sum_{i=1}^{2g}e_i \otimes e_i^* \in H_{\bb{Q}}^{\otimes{2}}$.
We identify $H_\bb{Q}$ with $H_\bb{Q}^*$ by $v \mapsto \langle v,\bullet \rangle$.
Note that $\langle e_{r'},e_r\rangle e_{r'}^*=e_r$ for $1 \le r \le 2g$. \\
\quad We define
\[
v_{[3,1^5]}:=\omega \otimes (e_1 \wedge e_2 \wedge e_3 \wedge e_4 \wedge e_5 \wedge e_6) \otimes e_1 \otimes e_1
\in H_{\mathbb{Q}}^{\otimes 10}
\]
where $e_1 \wedge e_2 \wedge \cdots \wedge e_6$
is the anti-symmetrizer $\displaystyle \sum_{\sigma \in \mf{S}_6}\mathop{\mathrm{sgn}}\nolimits(\sigma) (e_{\sigma(1)} \otimes e_{\sigma(2)} \otimes \cdots \otimes e_{\sigma(6)})
\in H_{\bb{Q}}^{\otimes{6}}$. \\
\quad Let $s_i$ be the transposition of $i$ and $i+1$.
By the Brauer-Schur-Weyl duality, the set of elements
$\{v_{[3,1^5]} \cdot \tau \cdot \theta \cdot \zeta_{10} \ (\tau \in \mf{S}_{10})\}$ generates
the space of $\mathop{\mathrm{Sp}}\nolimits$-maximal vectors corresponding to $\mathop{\mathrm{Sp}}\nolimits$-irreducible components $[3,1^5]$ in $\mf{h}_{g,1}(8)$,
where $\theta=(1-s_2)(1-s_3s_2) \cdots (1-s_9s_8 \cdots s_2)$ is the Dynkin-Specht-Wever idempotent and
$\zeta_{10} \in {\rm End}(H_\mathbb{Q}^{\otimes 10})$
is defined in the proof of Theorem \ref{t1}. \\
\quad In \cite{EE}, a component $[3,1^5]$ is detected in $\mf{h}_{g,1}(8)/\mathop{\mathrm{Ker}}\nolimits(c_8)$ by proving the following claim.
\begin{prop}[{\cite[Proposition 3.8]{EE}}]
$c_8(v_{[3,1^5]}\theta\zeta_{10}) \neq 0$.
\end{prop}
Recall from \S \ref{subsec:pft1} that, up to scalar, $\delta_{8}^{\text{alg}}$ is equal to $\Theta_2+ \cdots +\Theta_8$.
Therefore the following theorem implies Theorem \ref{t2} (ii).
\begin{theorem}
For $2 \le \ell \le 8$, we have $\Theta_\ell(v_{[3,1^5]}\theta\zeta_{10})=0$.
\end{theorem}
\begin{proof}
Note that it is sufficient to prove the claim for $\ell=2,3,4,5$.
We use the following notations. The $(i,j)$-expansion operator $D_{ij}:H_\bb{Q}^{\otimes{k}} \to H_\bb{Q}^{\otimes k+2}$ is given by
\[
(v_1 \otimes \cdots \otimes v_k)D_{ij}=
\sum_{r=1}^{2g}v_1 \otimes \cdots \otimes v_{i-1} \otimes e_r \otimes v_i \otimes \cdots \otimes v_{j-2} \otimes e_r^* \otimes v_{j-1} \otimes \cdots \otimes v_k.
\]
The element $\Lambda_{a,b}\in H_{\mathbb{Q}}
^{\otimes 8}$ is given by
\[
\sum_{\sigma \in \mf{S}_6}\mathrm{sgn}(\sigma)
e_{\sigma(1)} \otimes \cdots \otimes e_{\sigma(a-1)} \otimes e_1 \otimes e_{\sigma(a)} \otimes \cdots \otimes e_{\sigma(b-2)} \otimes e_1 \otimes e_{\sigma(b-1)} \otimes \cdots \otimes e_{\sigma(6)}.
\]
In \cite[Proposition 3.3]{EE}, we have
\begin{eqnarray*}
v_{[3,1^5]}\theta&=&
(e_1 \wedge \cdots \wedge e_6) \otimes e_1^{\otimes 2} \cdot (D_{12}-3D_{14}+3D_{16}-D_{18}) \\
&{}&+e_1 \otimes (e_1 \wedge \cdots \wedge e_6) \otimes e_1 \cdot (-2D_{13}+6D_{15}-6D_{17}+2D_{19}) \\
&{}&+e_1^{\otimes 2} \otimes (e_1 \wedge \cdots \wedge e_6) \cdot (D_{14}-3D_{16}+3D_{18}-D_{1,10}).
\end{eqnarray*}
Let us denote the three terms in the right hand side by
$\boldsymbol{v}_1,\boldsymbol{v}_2$ and $\boldsymbol{v}_3$. \\
\quad For the $13$-contraction operator $\Phi_{13}$, we obtain
\begin{eqnarray*}
&{}&\Phi_{13}(\boldsymbol{v}_1)=2\Lambda_{1,8}+2\Lambda_{6,7}-2\Lambda_{2,3}+3\Lambda_{1,4}-3\Lambda_{1,6}, \\
&{}&\Phi_{13}(\boldsymbol{v}_2)=(-4g-2)\Lambda_{1,8}+(-4g-2)\Lambda_{1,2}-4\Lambda_{6,8}+4\Lambda_{2,4}, \\
&{}&\Phi_{13}(\boldsymbol{v}_3)=2\Lambda_{1,2}-2\Lambda_{3,4}+2\Lambda_{7,8}-3\Lambda_{1,4}+3\Lambda_{1,6}.
\end{eqnarray*}
Then we have
\[
\Phi_{13}(v_{[3,1^5]}\theta\zeta_{10})=
-(4g)(\Lambda_{1,2}+\Lambda_{1,8})
+2(\Lambda_{6,7}-\Lambda_{2,3})+4(\Lambda_{2,4}-\Lambda_{6,8})+2(\Lambda_{7,8}-\Lambda_{3,4}).
\]
The first term is in the kernel of $\varpi_2:H_\bb{Q}^{\otimes{8}} \to C_{2g}(1) \otimes C_{2g}(7)$ because
$\Lambda_{1,2}$ and $\Lambda_{1,8}$ are of the form
$e_1 \otimes (\text{a maximal vector with weight} \ (2,1^5) \ \text{in} \ H_{\bb{Q}}^{\otimes{7}})$, and $(2,1^5)$ does not appear in $C_{2g}(7)$
(\cite[Corollary 4.2]{ES2}).
The remaining three terms are also in the kernel of $\varpi_2$ because they cancel each other in $C_{2g}(1) \otimes C_{2g}(7)$.
Hence we obtain $v_{[3,1^5]}\theta\zeta_{10} \in \mathop{\mathrm{Ker}}\nolimits(\Theta_2)$. \\
\quad For the $14$-contraction operator $\Phi_{14}$, we have
\begin{eqnarray*}
\Phi_{14}(v_{[3,1^5]}\theta\zeta_{10})&=&
(4g)\Lambda_{1,2}+(-6g+1)(\Lambda_{3,4}+\Lambda_{7,8}) \\
&{}& -2\Lambda_{3,5}-2\Lambda_{4,5}+6\Lambda_{4,6}-6\Lambda_{5,6}+6\Lambda_{5,7}-2\Lambda_{6,7}-2\Lambda_{6,8}.
\end{eqnarray*}
The first term is in the kernel of $\varpi_3$ because $[1^6]$ does not appear in $C_{2g}(6)$. All the other terms are in the kernel of $\varpi_3$
because each term is of the form $e_i \otimes e_j \otimes v-e_j \otimes e_i \otimes v \in H_\bb{Q}^{\otimes{8}}$.
Hence we obtain $v_{[3,1^5]}\theta\zeta_{10} \in \mathop{\mathrm{Ker}}\nolimits(\Theta_3)$. \\
\quad For the $15$-contraction operator $\Phi_{15}$, we have
\begin{eqnarray*}
\Phi_{15}(v_{[3,1^5]}\theta\zeta_{10})&=&12g(\Lambda_{1,8}+\Lambda_{3,4}) \\
&{}&+4(\Lambda_{4,5}-\Lambda_{7,8})+4(\Lambda_{5,6}-\Lambda_{6,7})+8(\Lambda_{6,8}-\Lambda_{4,6})
.
\end{eqnarray*}
In the first term, by dividing
$\sum_{\sigma \in \mf{S}_6}$ into $\sum_{\sigma(1) \ \text{or} \ \sigma(2)=1}
+\sum_{\sigma(3) \ \text{or} \ \sigma(6)=1}
+\sum_{\sigma(4) \ \text{or} \ \sigma(5)=1}$, one sees that it lies in the kernel of $\varpi_4$.
The remaining three terms are also in the kernel of $\varpi_4$ because they cancel each other in $C_{2g}(3) \otimes C_{2g}(5)$.
Thus we obtain $v_{[3,1^5]}\theta\zeta_{10} \in \mathop{\mathrm{Ker}}\nolimits(\Theta_4)$. \\
\quad For the $16$-contraction operator $\Phi_{16}$, we have
\begin{eqnarray*}
\Phi_{16}(v_{[3,1^5]}\theta\zeta_{10})&=&-(6g+1)(\Lambda_{1,2}+\Lambda_{3,4}-\Lambda_{5,6}-\Lambda_{7,8}) \\
&{}& +2(\Lambda_{6,7}-\Lambda_{2,3}+\Lambda_{2,4}-\Lambda_{6,8}+\Lambda_{1,3}-\Lambda_{5,7})
.
\end{eqnarray*}
Since the projection $H_\bb{Q}^{\otimes{4}} \to \mathop{\mathcal{C}}\nolimits_{2g}^\mathbb{Q}(4)$ annihilates the elements $e_i \wedge e_j \wedge e_k \wedge e_\ell$,
all the terms are in the kernel of $\varpi_5$.
Therefore we obtain $v_{[3,1^5]}\theta\zeta_{10} \in \mathop{\mathrm{Ker}}\nolimits(\Theta_5)$.
\end{proof}
\section*{Acknowledgments}\label{S-Ack}
The first author is supported by JSPS KAKENHI 26870368 and 18K03204.
He would also like to thank Hikoe Enomoto for his careful support with some computer calculations.
The second author is supported by JSPS KAKENHI 26800044 and 18K03308.
The third author is supported by JSPS KAKENHI 24740051 and 16K05155.
\bibliographystyle{amsplain}
\section{Introduction}
It is well known that classical General Relativity is quite a successful phenomenological theory at laboratory, solar system, galactic and extragalactic scales, and in general for length scales $l \gg l_{Pl}\approx 10^{-33}\,\mathrm{cm}$, where $l_{Pl}$ is the Planck length. The singularity problems of Einstein's equations at the Planck length and the quantum behaviour of matter and energy at small distances (high energies) suggest that a quantum version of the gravitational field (Quantum Gravity) should be found. There are many different approaches to Quantum Gravity: String Theory, Loop Quantum Gravity, Non-Commutative Geometry, Causal Dynamical Triangulations, Poset Theory, Asymptotic Safety, etc.
Since Newton's constant has a negative mass dimension, the perturbative quantization of General Relativity leads to a (perturbatively) non-renormalizable theory. In general, perturbatively non-renormalizable theories have a number of counterterms which increases with the loop order. This implies that the renormalization process introduces infinitely many parameters, so that the resulting theory does not have any predictive power \cite{lauscherreuter2001}. This is not a dead end, because a perturbatively non-renormalizable theory might be renormalizable under a generalized notion of renormalizability based on non-perturbative arguments. This non-perturbative renormalizability, introduced by K. Wilson \cite{wilsonfourQFT}, is related to the existence of a Non-Gaussian Fixed Point (NGFP), which guarantees the finiteness of the theory in the ultraviolet limit \cite{NiederReuter}.
The {\it Asymptotic Safety} conjecture dates back to Weinberg \cite{1979weinberg}. He suggested that General Relativity might be a non-perturbatively renormalizable Quantum Field Theory if the gravitational RG-flow approaches a non-trivial fixed point in the high energy limit. He himself proved that a NGFP exists in $2+\epsilon$ dimensions \cite{1979weinberg}. In $d=4$ a NGFP exists in the case of the Einstein-Hilbert truncation \cite{2001ReuterSaueressig}. The main idea of this approach starts from a classical action of Gravity, in the Riemannian case, with coupling constants $a_{i}$ attached to operators $O^{i}(x,g)$, where $x$ and $g$ denote, respectively, the space-time coordinates and the metric tensor \cite{Guarnieriphd},
\begin{equation}
S(M,g)=\int_{M}d^{4}x\sqrt{g}\sum^{\infty}_{i=0}a_{i}O^{i}(x,g)\;\;\;\;,
\label{gen}
\end{equation}
where $M$ is the four dimensional differentiable manifold. The renormalization group flow is defined once one fixes an infrared cutoff $k$ and writes the renormalization group equations in terms of the dimensionless coupling constants $\tilde{a}_{i}(k)$ and the $\beta$-functions in the following manner \cite{Guarnieriphd}
\begin{equation}
k\partial_{k}\tilde{a}_{i}(k)=\beta_{i}(\tilde{a}_{1}(k), \tilde{a}_{2}(k), \tilde{a}_{3}(k)...)
\label{RG}
\end{equation}
A point $\tilde{a}_{\star}$ is a NGFP if it is a non-trivial zero of the beta-functions, that is $\beta_{i}(\tilde{a}_{\star})=0 \;\; \forall i$ and $\tilde{a}_{\star}\neq 0$.
Once one has found the NGFP, the next step is to linearize the previous equation \cite{Guarnieriphd}
\begin{equation}
k\partial_{k}\tilde{a}_{i}(k)=\sum_{j}B_{ij}\left(\tilde{a}_{j}(k)-\tilde{a}_{\star j}\right)\;\;,
\label{lirg}
\end{equation}
where one has assumed the following definitions:
\begin{equation}
B_{ij}\equiv\partial_{j}\beta_{i}(\tilde{a}_{\star})\;\;\;\;,\;\;\;\;B=\left(B_{ij}\right)\;\;\;\;.
\label{pos}
\end{equation}
The general solution to the previous linear equation can be written in the following form
\begin{equation}
\tilde{a}_{i}(k)=\tilde{a}_{\star i} +\sum_{I}C_{I}V^{I}_{i}\left( \frac{k_{0}}{k} \right)^{\Theta _{I}}\;\;\;\;,
\label{gena}
\end{equation}
where $V^{I}$ are right-eigenvectors, solutions of the eigenvalue equation (matrix equation)
\begin{equation}
B\;V^{I}=-\Theta_{I}V^{I}
\label{eigen}
\end{equation}
$\Theta_{I}$ being the critical exponents. Now, notice that requiring $\tilde{a}_{i}(k) \longmapsto \tilde{a}_{\star i}$ when $k\mapsto \infty$ imposes $C_{I}=0 \;\; \forall I$ such that
$\mathrm{Re} \;\Theta_{I} <0$. The Ultra-Violet (UV) critical surface $S_{UV}$ is defined as the set of independent renormalization group trajectories hitting the fixed point as $k\mapsto \infty$. The dimension $\Delta_{UV}$ of this surface is the number of independent attractive directions or, equivalently, the number of critical exponents with $\mathrm{Re} \;\Theta_{I} >0$. The resulting quantum theory has $\Delta_{UV}$ free parameters. If this number is finite, then the theory is as predictive as a perturbatively renormalizable model with $\Delta_{UV}$ renormalizable couplings.
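The counting above can be made concrete with a small numerical sketch. The two beta functions below are invented purely for illustration (they are not the gravitational ones): one locates their non-trivial zero, builds the stability matrix $B$ by finite differences, and reads off the critical exponents $\Theta_{I}=-\mathrm{eig}(B)$ and the dimension of the UV critical surface.

```python
import numpy as np

# Toy two-coupling flow with a Non-Gaussian Fixed Point.
# These beta functions are hypothetical, chosen only to illustrate the
# linearization procedure; they are not the gravitational beta functions.
def beta(a):
    a1, a2 = a
    return np.array([2*a1 - 4*a1**2, -2*a2 + a1*a2 + a1**2])

def jacobian(f, a, eps=1e-6):
    # central finite-difference Jacobian, J[i, j] = d f_i / d a_j
    return np.array([(f(a + eps*e) - f(a - eps*e)) / (2*eps)
                     for e in np.eye(len(a))]).T

# Newton iteration locating the non-trivial zero of the beta functions
a = np.array([0.4, 0.1])
for _ in range(50):
    a = a - np.linalg.solve(jacobian(beta, a), beta(a))

B = jacobian(beta, a)                 # stability matrix B_ij
theta = -np.linalg.eigvals(B)         # critical exponents Theta_I
dim_uv = int(np.sum(theta.real > 0))  # attractive directions = Delta_UV
```

For this toy flow the fixed point is $(1/2,1/6)$ and both critical exponents are positive, so every direction is UV-attractive and $\Delta_{UV}=2$.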
These considerations hold in general, but have been introduced for the perturbative renormalization group (RG). In the non-perturbative case one starts from a Wilson-type, coarse-grained, free energy functional
\begin{equation}
\Gamma_{k}\left[g_{\mu\nu}\right]\;\;\;\;,
\label{fe}
\end{equation}
where $k$ is the infrared cut-off. $\Gamma_{k}$ contains all the quantum fluctuations with momenta $p>k$, but not yet those with $p<k$. The modes with $p<k$ are suppressed in the path-integral by a mass-square type term $R_{k}(p^{2})$.
The behavior of the free-energy functional interpolates between $\Gamma_{k\mapsto \infty}=S$, $S$ being the classical (bare) action, and $\Gamma_{k\mapsto 0}=\Gamma$, $\Gamma$ being the standard effective action. $\Gamma_{k}$ satisfies the RG-equation, called also the Wetterich equation \cite{WetterichFRG},
\begin{equation}
k\partial_{k}\Gamma_{k}=\frac{1}{2}Tr\left[(\delta^2\Gamma_k + R_k)^{-1}k\partial_k R_k\right]
\label{Wett}
\end{equation}
In general, since this RG-equation is quite complicated, one adopts a powerful non-perturbative approximation scheme: truncate the space of the action functionals and project the RG flow onto a finite dimensional space. That is to say, one considers that the free energy functional $\Gamma_{k}$, formally, can be expanded in the following way
\begin{equation}
\Gamma_{k}[\cdot]=\sum_{i=0}^{N}g_{i}(k)k^{d_i}I_i[\cdot]\;\;\;\;,
\label{svilop}
\end{equation}
where $I_{i}[\cdot]$ are given ``local or non local functionals'' of the fields and $g_{i}(k)$ are the corresponding running coupling constants. In the case of gravity, the following truncation ansatz is usually made:
\begin{equation}
I_{0}[g]=\int d^{4}x\sqrt{g}\;\;\;\;,I_{1}[g]=\int d^{4}x\sqrt{g}R\;\;\;\;,I_{2}[g]=\int d^{4}x\sqrt{g}R^{2}\;\;\;\;,\mathrm{etc.}
\label{trunca}
\end{equation}
The simplest truncation is the Einstein-Hilbert truncation which looks like
\begin{equation}
\Gamma_{k}=-\frac{1}{16\pi G_{k}} \int d^{4}x \sqrt{g}\left( R-2 \bar{\lambda}_{k} \right) +\mathrm{g.f.} +\mathrm{g.t.}\;\;\;\;,
\label{truncaEH}
\end{equation}
here g.f. means classical gauge fixing terms, while g.t. are ghost terms. There are two running parameters: the Newton constant $G_{k}$, which can be written in a dimensionless way as $g(k)=k^{2}G_{k}$, and the cosmological constant $\bar{\lambda}_{k}$, which becomes $\lambda(k)=\bar{\lambda}_{k}/k^{2}$.
Inserting this ansatz into the flow (Wetterich) equation, one obtains ``a projection'' onto a finite dimensional space \cite{NiederReuter}
\begin{equation}
Tr[...]=(...)\int \sqrt{g} +(...)\int \sqrt{g}R +...\;\;\;\;,
\label{expo}
\end{equation}
and then the following finite-dimensional RG equations
\begin{eqnarray}
k\partial_{k}g(k)=\beta_{g}(g,\lambda)\\ \nonumber
k\partial_{k}\lambda(k)=\beta_{\lambda}(g,\lambda)\;.
\label{finite}
\end{eqnarray}
The solutions of these equations provide the scaling relations for the dimensionless gravitational constant $g(k)$ and the dimensionless cosmological constant $\lambda(k)$.
\section{Modified Einstein-Hilbert Action and Lorentzian ADM Asymptotic Safe Gravity}
It is an old idea, which dates back to Dirac \cite{1972Dirac}, to consider that the gravitational constant $G$ is not really a constant but varies as a function of the Space-Time coordinates, $G=G(x)$. The first realization of this idea is Brans-Dicke theory \cite{BransDicke}, in which gravity is coupled to a scalar field $\phi(x)$ of the type $\phi(x)\equiv 1/G(x)$ \cite{ReuterWeyer}.
The method proposed here is quite different from the usual Brans-Dicke theory. In fact, in Brans-Dicke theory the scalar field $\phi(x)$ is a true dynamical variable with a kinetic term, whose equation of motion is determined by varying the action with respect to the field $\phi(x)$ \cite{ReuterWeyer}. Instead, here, one aims to look for a modified theory of General Relativity. Following the general guideline of the Asymptotic Safety approach to Quantum Gravity, the first step is to find the $k$-dependence of the coupling constants, in our case $G$ and $\Lambda$, by the Renormalization Group approach as explained just above. The second step is to fix the dependence of the infrared cutoff $k$ on the space-time coordinates $x$, that is $k=k(x)$. This identification is generally made on the basis of either symmetry or physical arguments. Therefore $G(k(x))=G(x)$ and $\Lambda(k(x))=\Lambda(x)$ become Space-Time functions that cannot be determined on the basis of a Lagrangian dynamics \cite{ReuterWeyer}. They behave, technically, either as external or, equivalently, as non-geometrical fields. Then the variation of the modified Einstein-Hilbert Lagrangian under the variation of the metric tensor $g$ does not affect $G(x)$ and $\Lambda(x)$, which should be considered given functions. Reuter and Weyer \cite{ReuterWeyer} remark that the modified Einstein equations, derived from the modified Einstein-Hilbert Lagrangian, should contain some extra integrability conditions which should put constraints on $G(x)$ and $\Lambda (x)$, or further constraints on the cutoff identification $k(x)$. Reuter and Weyer \cite{ReuterWeyer} start from the following modified Einstein-Hilbert action
\begin{equation}
S_{mEH}[g,G(x),\Lambda(x)]\equiv \frac{1}{16\pi}\int d^{4}x\sqrt{-g}\left(\frac{R}{G(x)}-2\frac{\Lambda(x)}{G(x)}\right)\;\;\;\;.
\label{mEH}
\end{equation}
Following \cite{Manrique}, one starts from a Lorentzian manifold $(M,g)$ and considers an ADM metric decomposition
\begin{equation}
g=-(N^{2}-N_{i}N^{i})dt \otimes dt +N_{i}(dx^{i} \otimes dt
+dt \otimes dx^{i})+h_{ij}dx^{i} \otimes dx^{j}\;\;\;\;,
\label{metricADM}
\end{equation}
where $N=N(x)$ is a function on the four dimensional space-time called the ``lapse'', the $N^{i}(x)$ are called ``shifts'', and $h_{ij}(x)$ is the three-metric on the space-like surfaces $\Sigma$ of the ADM foliation \cite{menotti}. In this context, the regulator $R_{k}$ depends on the Laplacian on the three-dimensional spatial surfaces $\Sigma$: the infrared cut-off of the RG transformations is built from the spectrum of the Laplacian operator defined on $\Sigma$.
\section{ADM Analysis of Modified Einstein-Hilbert Lagrangian}
One considers, from now on, a Space-Time $(M,g)$ such that $M\equiv \Re \times \Sigma $, $\Re$ being the time-like direction, and $\Sigma$ the space-like three-dimensional surface. The metric tensor $g$ inherits the ADM decomposition form given by \eqref{metricADM}. The extrinsic curvature $K_{ij}$ is defined on the three-dimensional surface $\Sigma$ as
\begin{equation}
K_{ij}=\frac{1}{2N}\left(-\frac{\partial h_{ij}}{\partial t}+{\bar \nabla}_{i}N_{j}+{\bar \nabla}_{j}N_{i}\right)
\label{curvextrin}
\end{equation}
where ${\bar \nabla}$ is the covariant derivative defined on the three-dimensional spatial surfaces $\Sigma$ through the three-dimensional spatial metric $h_{ij}$. The four-dimensional Ricci scalar ${}^{4}R$ is decomposed in the ADM foliation in the following way \cite{AlfioGiampieroRubano}
\begin{equation}
\sqrt{-g}{}^{4}R=N\sqrt{h}\left(K_{ij}K^{ij} - K^{2} + {}^{3}R\right) -2 \left(K \sqrt{h} \right),_{\;0} + 2 f^{i},_{i}\;\;\;\;\;,
\label{scompongo}
\end{equation}
where
\begin{equation}
f^{i}\equiv \sqrt{h} \left(K N^{i} - h^{ij} N,_{j}\right)\;\;\;\;.
\label{bondo}
\end{equation}
It is useful to have in mind the following identities \cite{AlfioGiampieroRubano}
\begin{eqnarray}
{1 \over G}\left(K\sqrt{h}\right),_{0}&=&{G,_{0} \over G^{2}}K\sqrt{h}+\left(K\sqrt{h} \over G \right)_{,0} \label{idento1}\\
{1 \over G}{\partial f^{i} \over \partial x^{i}}&=&{G_{,i} \over G^{2}}f^{i} + {\partial \over \partial x^{i}} \left(f^{i} \over G \right)
\label{idento}
\end{eqnarray}
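Both identities are elementary consequences of the Leibniz rule; as a quick consistency check, they can be verified symbolically (here \texttt{Ksh} and \texttt{f} are placeholder functions standing for $K\sqrt{h}$ and for one representative component $f^{i}$):

```python
import sympy as sp

t, x = sp.symbols('t x')
G = sp.Function('G')(t, x)
Ksh = sp.Function('Ksh')(t, x)  # placeholder for K*sqrt(h)
f = sp.Function('f')(t, x)      # placeholder for one component f^i

# identity (idento1): (1/G) d_t(K sqrt(h)) = (G_,0/G^2) K sqrt(h) + d_t(K sqrt(h)/G)
res1 = sp.diff(Ksh, t)/G - (sp.diff(G, t)/G**2 * Ksh + sp.diff(Ksh/G, t))
# identity (idento):  (1/G) d_i f^i = (G_,i/G^2) f^i + d_i(f^i/G)
res2 = sp.diff(f, x)/G - (sp.diff(G, x)/G**2 * f + sp.diff(f/G, x))
```

Both residuals vanish identically.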
Having introduced these definitions, it is quite straightforward to write down the Einstein-Hilbert action in ADM coordinates, $S_{ADM}(h_{ij}, N, N^{i})$, with the York boundary term. The latter is needed to make the variational principle of the Einstein-Hilbert action well posed under variations of the metric tensor \cite{menotti}
\begin{equation}
S_{ADM}[h_{ij},N, N^{i}]=\frac{1}{16\pi}\int_{R \times \Sigma}dt d^{3}x \sqrt{h}N \frac{1}{G(t,x)}\left({}^{4}R-2 \Lambda(t,x)\right)+\frac{1}{8\pi} \int_{\partial M}{K \sqrt{h}\over G(t,x)}d^{3}x\;\;.
\label{ADMnormal}
\end{equation}
This action can be simplified a lot if one uses the identities above \eqref{idento1}, \eqref{idento} and supposes that $\Sigma$ is a closed manifold (so that the total spatial divergence of $f^{i}/G$ resulting from \eqref{scompongo} and \eqref{idento} yields zero contribution, having taken $\partial \Sigma=\emptyset$). So one gets, finally,
\begin{equation}
S_{ADM}(h_{ij},N, N^{i})={1\over 16\pi}\int_{R \times\Sigma} \left[{N \sqrt{h}\over G}(K_{ij}K^{ij}
-K^{2}+{ }^{(3)}R-2 \Lambda)
-2{G_{,0}\over G^{2}}K \sqrt{h}
+2{G_{,i}f^{i}\over G^{2}}\right]dtd^{3}x\;\;\;\;.
\label{extendoADM}
\end{equation}
Until now one has considered this theory in the Lagrangian formalism. One wants to pass from the Lagrangian formalism with the variables $(N, N^{i}, h_{ij})$ to the Hamiltonian formalism, in which there are position and momentum coordinates. Therefore the first step in this process is to define the Lagrangian density ${\cal L}_{ADM}$ from the previous equation (\ref{extendoADM}):
\begin{equation}
{\cal L}_{ADM}\equiv {1\over 16\pi} \left[{N \sqrt{h}\over G}(K_{ij}K^{ij}
-K^{2}+{ }^{(3)}R-2 \Lambda)
-2{G_{,0}\over G^{2}}K \sqrt{h}
+2{G_{,i}f^{i}\over G^{2}}\right]\;\;\;\;.
\label{densiLagr}
\end{equation}
From the previous definition, one gets the ``spatial momenta'' $\pi^{ij}$
\begin{equation}
{\pi}^{ij}={\partial{{\cal L}_{ADM}} \over \partial{\dot h}_{ij}}=
-\frac{\sqrt h}{16\pi G}\left(K^{ij}-h^{ij}K\right) +\frac{{\sqrt h}\;h^{ij}}{16\pi N G^{2}}\left(G_{,0}-G_{,k}N^{k}\right)
\label{momenta}
\end{equation}
Of course, it is straightforward to notice that these are momentum densities rather than momenta; their integration on the three-dimensional surfaces $\Sigma$ gives the momenta \cite{menotti}. The formula above \eqref{momenta} does not look very encouraging, since there is no direct relation between the momentum densities and the extrinsic curvature as in the standard ADM formalism \cite{menotti}. But one can define new variables ${\tilde \pi}^{ij}$ in the following way
\begin{equation}
{\tilde \pi}^{ij}=\pi^{ij}-\frac{{\sqrt h}\;h^{ij}}{16\pi N G^{2}}\left(G_{,0}-G_{,k}N^{k}\right)
=-\frac{\sqrt h}{16\pi G}\left(K^{ij}-h^{ij}K\right)
\label{newmomnta}
\end{equation}
which have the usual relation with the extrinsic curvature $K_{ij}$ \cite{espositolibro}. Of course, one can define the momenta $\pi$ and $\pi_{i}$ conjugate to the lapse function $N$ and to the shift functions $N^{i}$. Following Dirac's constraint theory, they vanish weakly on the constraint surface and are called primary constraints \cite{menotti}
\begin{equation}
\pi=\frac{\partial {\cal L}_{ADM}}{\partial \dot {N}}\approx 0\;\;\;\;\;\;\pi_i=\frac{\partial {\cal L}_{ADM}}{\partial \dot {N}^i}\approx 0
\label{vincprima}
\end{equation}
Then one may reasonably ask if the following change of variables
\begin{equation}
\left (N, N^{i}, h_{ij},\pi, \pi_{i}, \pi^{ij}\right)\mapsto \left (N, N^{i}, h_{ij}, \pi, \pi^{i}, {\tilde \pi}^{ij}\right)
\label{cambio}
\end{equation}
is a canonical transformation in the Hamiltonian formalism \cite{Arnoldlibro}. A change of variables of the kind (\ref{cambio}) is canonical if the symplectic two-form $\Omega=dq^{i} \wedge dp_{i}$ is invariant under the transformation. Applying this definition, a change of variables $(q^i, p_i) \mapsto (Q^i, P_i)$ is a canonical transformation if the following conditions hold \cite{Arnoldlibro}
\begin{eqnarray}
F\equiv \frac{\partial (q^1,\ldots,q^n,p_1,\ldots,p_n)}{\partial (Q^1,\ldots,Q^n,P_1,\ldots,P_n)}&\;\;\;\; &
J\equiv
\begin{pmatrix}
0 & I \\
-I & 0
\end{pmatrix}\\
&F^{T}JF=J&
\label{canonical}
\end{eqnarray}
where $F$ is the Jacobian of the transformation of variables. It is a straightforward calculation to check that the new definition of the momenta \eqref{newmomnta} defines a change of variables \eqref{cambio} satisfying the requirement \eqref{canonical}.
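The check can be illustrated in a finite-dimensional analogue. The new momenta \eqref{newmomnta} are obtained from the old ones by a shift that depends only on the configuration variables; when such a shift is a gradient in configuration space (as in the sketch below, with a hypothetical generating function $W$ chosen only for illustration), the condition $F^{T}JF=J$ holds identically:

```python
import numpy as np

# Finite-dimensional analogue of the shift (cambio): (q, p) -> (q, p - grad W(q)),
# with W(q) = q1^2 q2 a hypothetical generating function used for illustration.
def grad_W(q):
    return np.array([2*q[0]*q[1], q[0]**2])

def transform(z):
    q, p = z[:2], z[2:]
    return np.concatenate([q, p - grad_W(q)])

# Jacobian F of the transformation at a sample phase-space point
z0 = np.array([0.7, -1.3, 0.4, 2.1])
eps = 1e-5
F = np.array([(transform(z0 + eps*e) - transform(z0 - eps*e)) / (2*eps)
              for e in np.eye(4)]).T

# symplectic matrix J and the canonicity condition F^T J F = J
J = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.eye(2), np.zeros((2, 2))]])
is_canonical = np.allclose(F.T @ J @ F, J, atol=1e-8)
```

The check succeeds because the Jacobian of a gradient shift is symmetric, which is exactly what makes the off-diagonal blocks cancel in $F^{T}JF$.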
One is entitled to define the "canonical" Hamiltonian density ${\cal H}_{ADM}$ \cite{espositolibro} on the manifold fixed by the primary constraints \eqref{vincprima},
\begin{equation}
{\cal H}_{ADM}={\pi}^{ab}{\dot h}_{ab}-{\cal L}_{ADM}\;\;\;\;,
\label{cano}
\end{equation}
from this Hamiltonian density, one can define the Hamiltonian density in the new canonical coordinates \eqref{cambio} through substitution \eqref{newmomnta} \cite{Arnoldlibro}.
So implementing these transformations one gets
\begin{eqnarray}
{\cal H}_{ADM}&=&N\left((16\pi G)G_{abcd}{\tilde \pi}^{ab}{\tilde \pi}^{cd}-\frac{{\sqrt h}({}^{3}R-2\Lambda)}{16\pi G}\right)+2{\tilde \pi}^{ab}{\bar \nabla}_{a}N_{b}\\ \nonumber &+&
\frac{{\sqrt h}(G_{,0} -G_{,k} N^k){\bar \nabla}_{a}N^{a}}{8\pi G^2 N}
+\frac{G_{,i}{\sqrt h}h^{ij}}{8\pi G^{2}}N_{,j}
\label{hamiltonianagrande}
\end{eqnarray}
in which $G_{abcd}$ is the DeWitt supermetric \cite{espositolibro}, defined in the following way
\begin{equation}
G_{abcd}=\frac{1}{2 \sqrt h}\left (h_{ac}h_{bd}+h_{ad}h_{bc}-h_{ab}h_{cd}\right)
\label{DeWittmetric}
\end{equation}
One is now in a position to define the total Hamiltonian $H_{T}$ as \cite{dirac1966}
\begin{equation}
H_T=\int_{\Sigma}\left(\lambda \pi + \lambda^{i}\pi_{i}+{\cal H}_{ADM}\right)d^{3}x\;\;\;\;,
\label{hamilttotal}
\end{equation}
where $\lambda$ and $\lambda^{i}$ are Lagrange multipliers. Basic and well known considerations, integrations by parts, and comparison with the standard
Hamiltonian analysis of General Relativity \cite{menotti} suggest to define the Hamiltonian constraint $\cal H$ and the momentum constraints ${\cal H}_i$ through the preservation of the primary constraints \eqref{vincprima}
\begin{equation}
{\cal H}\equiv \left\{\pi,H_T\right\}\;\;,\;\;{\cal H}_i\equiv \left\{\pi_i,H_T\right\}.
\label{defino}
\end{equation}
Recall that the Poisson brackets are defined as
\begin{equation}
\{A,B\}=\int d^{3}x\left(\frac{\delta A}{\delta h_{ij}}\frac{\delta B}{\delta {\tilde\pi}^{ij}} - \frac{\delta A}{\delta {\tilde\pi}^{ij}}\frac{\delta B}{\delta h_{ij}}\right)\;\;,
\label{parentesis}
\end{equation}
therefore the Hamiltonian constraint $\cal H$ turns out to be
\begin{equation}
{\cal H}=(16\pi G)G_{abcd}{\tilde \pi}^{ab}{\tilde \pi}^{cd}-\frac{{\sqrt h}({}^{3}R-2\Lambda)}{16\pi G}-\frac{{\sqrt h}(G_{,0} -G_{,k} N^k){\bar \nabla}_{a}N^{a}}{8\pi G^2 N^2}-\nabla_{j}\left(\frac{G_{,i}{\sqrt h}h^{ij}}{8\pi G^{2}}\right)
\label{vinchamiltgrande}
\end{equation}
while the momenta constraints ${\cal H}_i$ are
\begin{equation}
{\cal H}_{i}=-2{\bar \nabla}^{a}{\tilde \pi}_{ai}+\frac{{\sqrt h}(-G_{,i}){\bar \nabla}_{a}N^{a}}{8\pi G^2 N}-{\sqrt h}{\bar\nabla_{i}}\left(\frac{G,_{0}-G_{,k}N^k}{8\pi G^2 N}\right)\;\;\;\;.
\label{vincconstrgrande}
\end{equation}
The previous expressions of the Hamiltonian constraint $\cal H$ and the momentum constraints ${\cal H}_i$ appear quite complicated.
The first check one can do is to see how these functions behave under gauge transformations. Following \cite{menotti} one can calculate the following (gauge) transformation on the three-dimensional spatial surfaces $\Sigma$ and gets
\begin{equation}
\{h_{ij},\int d^{3}x \tilde{N}^{i}{\cal{H}}_{i}\}={\cal{L}}_{\mathbf {\tilde{N}}} h_{ij}\;\;\;\;,
\label{tremetric}
\end{equation}
where ${\bf \tilde{N}}=\left( {\tilde{N}}^{i} \right)$ is a generic three-dimensional vector field on $\Sigma$. Therefore the momentum constraints ${\cal H}_i$ are still the generators of the gauge transformations on the three-dimensional metric $h_{ij}$. Following the same reasoning for the momenta ${\tilde \pi}^{ij}$, one obtains
\begin{equation}
\left\{{\tilde \pi}^{ij},\int d^{3}x {\tilde N}^{i}{\cal{H}}_{i}\right\}=\int d^{3}x{\cal{L}}_{\bf {\tilde{N}}} {\tilde\pi}^{ij}+{\bar \nabla}_{a}\left[\frac{{\tilde N}^{s}}{2}\left(\frac{G_{,\;s}}{8\pi G^2 N}\right)N^{a}h^{ij}\sqrt{h}\right]\;\;\;\;,
\label{momentatrespace}
\end{equation}
so the momentum constraints ${\cal H}_{i}$ remain the generators of the diffeomorphism transformations on $\Sigma$ for the momentum densities ${\tilde \pi}^{ij}$ if (sufficient condition) $G_{,s}=0$, that is $G=G(t)$.
Taking into account the standard results of the Hamiltonian theory of Einstein General Relativity as regards the constraint algebra \cite{menotti}, one finds that the non-zero part of the Poisson brackets of the Hamiltonian constraints reduces to
\begin{equation}
\{\int d^{3}x{\tilde N}(x){\cal{H}}(x),\int d^{3}x'{\tilde N}'(x'){\cal{H}}(x')\}=\int d^3y({\tilde N}'{\bar\nabla}_{i}{\tilde N}-{\tilde N}{\bar\nabla}_{i}{\tilde N}')\frac{G_{,0}N^i}{GN^2}\tilde\pi\;.
\label{commutogrande}
\end{equation}
Clearly this says that, in order to preserve the Hamiltonian constraint and hence time-diffeomorphisms in the ADM splitting, one has to impose the sufficient condition $N^{a}\approx0$. Furthermore, looking at the Hamiltonian constraint \eqref{vinchamiltgrande} and the momentum constraints \eqref{vincconstrgrande}, the only possibility for them to stay first class constraints is to impose strongly $N^{a}=0$ and $N=N(t)$. This implies that the right ADM metric to start from is not \eqref{metricADM} but one with reduced gauge invariance
\begin{equation}
g=-N^{2}(t)dt \otimes dt+h_{ij}dx^{i} \otimes dx^{j}
\label{normalcoordinates}
\end{equation}
in which the shifts $N^{i}$ are put to zero and $N=N(t)$. This is the ADM metric in Gaussian normal coordinates \cite{Wiltshire}. The ADM-Hamiltonian density ${\cal H}_{ADM}$ reduces to
\begin{equation}
{\cal H}_{ADM}=N\left((16\pi G)G_{abcd}{\tilde \pi}^{ab}{\tilde \pi}^{cd}-\frac{{\sqrt h}({}^{(3)}R-2\Lambda)}{16\pi G}\right)\;\;\;\;,
\label{Hamilrido}
\end{equation}
One has only the primary constraint $\pi \approx 0$ and the Hamiltonian constraint ${\cal H}$
\begin{equation}
{\cal H}=\left((16\pi G)G_{abcd}{\tilde \pi}^{ab}{\tilde \pi}^{cd}-\frac{{\sqrt h}({}^{(3)}R-2\Lambda)}{16\pi G}\right)\;\;\;\;,
\label{constraint}
\end{equation}
and they are first class. The Hamiltonian constraint is preserved as in General Relativity \cite{menotti} and the Hamiltonian analysis does not impose any restriction on the functional form of $G(x)$.
\section{Cosmologies of the Sub-Planck Era}
As a straightforward application of all previous considerations, one can study a cosmological minisuperspace model based on FLRW metric
\begin{equation}
ds^2 = -N(t)^2 dt^2 +\frac{a(t)^2}{1-K r^2} dr^2 +a(t)^2 (r^2 d\theta^2 + r^2\sin^2\theta\, d\phi^2)\;\;\;\;,
\label{FLRWmetric}
\end{equation}
where $N(t)$ is the lapse function discussed in the considerations above, $a(t)$ is the scale factor of the universe and $K$ assumes the values $-1, 0, 1$ depending on the topology one considers, that is, respectively, hyperbolic-open Universes, flat-infinite Universes or closed-elliptic Universes. All the details on this particular cosmological case can be found in the reference \cite{AlfiomeAlessia}. Notice that the FLRW metric is an ADM-metric in Gaussian normal coordinates \eqref{normalcoordinates}. Following the discussion above, one considers a renormalization group modified Einstein-Hilbert action $S$
\begin{equation}
S =\int_{M} d^4 x \sqrt{-g} \, \left\{\frac{R-2\Lambda(k)}{16\pi G(k)} + \mathcal{L}_m\right\}+\frac{1}{8\pi} \int_{\partial M}{K \sqrt{h}\over G(k)}d^{3}x\;\;\;\;.
\label{actionpunto}
\end{equation}
This action, with respect to formula (\ref{mEH}), does not contain the dependence of the infrared cut-off $k$ on the space-time coordinates $x$, $k=k(x)$. $G(k)$ is the gravitational constant and $\Lambda(k)$ the cosmological constant; they both depend on $k$. $M$ is a Lorentzian Manifold with boundary $\partial M$, and a York term has been added in the previous formula. $\mathcal{L}_{m}$ is the Lagrangian for the matter fields.
The very fact of using a homogeneous and isotropic FLRW Universe implies that the infrared cutoff $k$, for symmetry reasons, can depend only on the cosmological time $t$, $k=k(t)$, and so $G$ and $\Lambda$ are functions of $t$ only
\begin{equation}
G\equiv G(k(t)),\;\;\;\; \Lambda\equiv \Lambda(k(t)).
\label{equivo}
\end{equation}
In principle this dependence of $k$ on $t$ could be either explicit or implicit via the scale factor $a(t)$, $k=k(t,a(t),{\dot a}(t), {\ddot a}(t)...)$ \cite{AlfioGiampieroRubano}, \cite{2002PhRDAlfioReuter}, \cite{2002PhLBAlfioReuter}. As already explained in section 2, recent work \cite{Manrique} has shown that in the ADM formalism \cite{1960JMPADM} \cite{1961PhRDADM} \cite{1960PhR1ADM} \cite{1960PhR2ADM} the infrared cutoff of the RG transformations is built from the spectrum of the Laplacian operator defined on the three-dimensional surfaces $\Sigma$. In particular $k\sim a^{-1}$ in the case of the FLRW metric.
Let us also assume that the matter fields are described by a perfect fluid of energy density $\rho$ and pressure $p$. In this case the relation between $\rho$ and $p$ is parametrized by an equation of state of the type $p=w\rho$, where $w$ is a constant. Therefore the conservation of the matter stress-energy tensor, ${T^{\mu\nu}}_{;\nu}=0$, with the metric
\eqref{FLRWmetric} fixes the functional form
\begin{equation}
\label{rhof}
\rho(a) = m \, a^{-3-3w}
\end{equation}
\noindent where $m$ is an arbitrary integration constant. It is now clear that $\mathcal{L}_m=-mNa^{-3w}$ \cite{2014greci}. One thus obtains the following Lagrangian without the York term \cite{1986York}
\begin{equation} \label{lag1}
\mathcal{L}=\, -\frac{3 \, a{\dot a }^{2}}{8\pi N(t)G(a)}+\frac{3 \, a N K}{8\pi G(a)} -\frac{ a^{3}N\Lambda(a)}{8\pi G(a)}-\frac{2 Nm}{a^{3w}} +\frac{3 \, a^{2}{\dot a }^{2}G'(a)}{8\pi N G(a)^{2}}+ \frac{d}{dt} \left(\frac{3 a^2{\dot a }^{2}}{8 \pi NG(a)}\right)
\end{equation}
where $G'(a)$ stands for the derivative of $G$ with respect to $a$. The York term \cite{1986York}, added as prescribed in \eqref{actionpunto}, cancels the total derivative.
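The functional form \eqref{rhof} can be double-checked against the FLRW continuity equation $\dot\rho+3\frac{\dot a}{a}(1+w)\rho=0$, which is the content of ${T^{\mu\nu}}_{;\nu}=0$ for a perfect fluid with $p=w\rho$; a symbolic sketch:

```python
import sympy as sp

t, m, w = sp.symbols('t m w')
a = sp.Function('a')(t)

rho = m*a**(-3 - 3*w)            # the ansatz (rhof)
H = sp.diff(a, t)/a              # Hubble rate
# residual of the FLRW continuity equation for p = w*rho
residual = sp.simplify(sp.diff(rho, t) + 3*H*(1 + w)*rho)
```

The residual vanishes for every equation-of-state parameter $w$.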
\section{Constraint Analysis}
The constraints analysis of this minisuperspace model is a lower dimensional application of the general field theoretical analysis performed in the previous paragraphs. One gets a primary constraint
\begin{equation}
p_N=\frac{\partial \mathcal{L}}{\partial {\dot N}}\approx 0\quad\mapsto\quad
\phi_N(N,a,p_N,p_a)=p_N\approx 0 \;\;,
\label{primary}
\end{equation}
\noindent therefore one defines the canonical Hamiltonian $H_{C}$ as
\begin{equation}
H_C\equiv p_i {\dot q}^i -\mathcal{L}\big|_{M}=p_a{\dot a}-\mathcal{L} \;\;.
\end{equation}
Here the momentum $p_a$ associated to the generalized coordinate $a(t)$ is given by
\begin{equation}
p_a\equiv\frac{\partial \mathcal{L}}{\partial {\dot a}}=-\frac{6 \, a\,{\dot a }}{8\pi N G(a)} \, (\eta(a)+1)
\end{equation}
where $\eta\equiv k\partial_k \ln G_k$, with $k\sim a^{-1}$, defines the ``anomalous dimension'' of Newton's constant as a function of the scale factor
\begin{equation}
\label{anod}
\eta(a)=-\frac{a\,G'(a)}{G(a)} \;\;.
\end{equation}
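The momentum just quoted can be checked directly from the velocity-dependent part of the Lagrangian \eqref{lag1} (the total time derivative, cancelled by the York term, does not contribute); a symbolic sketch:

```python
import sympy as sp

a, adot, N = sp.symbols('a adot N', positive=True)
G = sp.Function('G')(a)

# velocity-dependent part of the minisuperspace Lagrangian (lag1)
L_kin = (-3*a*adot**2/(8*sp.pi*N*G)
         + 3*a**2*adot**2*sp.diff(G, a)/(8*sp.pi*N*G**2))

p_a = sp.diff(L_kin, adot)                    # canonical momentum of a(t)
eta = -a*sp.diff(G, a)/G                      # anomalous dimension (anod)
expected = -6*a*adot*(eta + 1)/(8*sp.pi*N*G)  # momentum as quoted in the text
residual = sp.simplify(p_a - expected)
```

The two kinetic terms combine into the single factor $(\eta(a)+1)$, which is where the anomalous dimension enters the Hamiltonian.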
The canonical Hamiltonian thus reads
\begin{equation}
H_C=-\frac{2 \pi NG(a)^2p^2_a}{3a(G(a)-aG'(a))}-\frac{3aNK}{8\pi G(a)}+\frac{a^3 \Lambda(a)N}{8\pi G(a)}+\frac{2Nm}{a^{3w}}\;.
\label{Ham}
\end{equation}
Pursuing Dirac's constraint analysis, one gets the Hamiltonian constraint as in \eqref{constraint}
\begin{equation}
\mathcal{H}=-\frac{2 \pi G(a)^2p^2_a}{3a(G(a)-aG'(a))}-\frac{3aK}{8\pi G(a)}+\frac{a^3 \Lambda(a)}{8\pi G(a)}+\frac{2m}{a^{3w}} \;\;.
\label{Hamconst}
\end{equation}
The total Hamiltonian $H_{T}$ is then
\begin{equation}
H_T=N\mathcal{H} +\lambda_N\phi_N
\label{total}
\end{equation}
Imposing the gauge $N=1$ as a constraint, $N-1 \approx 0$, one finds that $\phi_{N}$ becomes a second class constraint and $\lambda_{N}=0$:
\begin{eqnarray}
\{N-1,\phi_{N}\}=1\;\;\;\;
\frac{d}{dt}(N-1)=\left\{N-1,H_T\right\}=\lambda_N=0\;\;.
\label{determino}
\end{eqnarray}
\section{Bouncing and Emergent Cosmologies from Asymptotic Safety}
The Hamiltonian constraint of the previous section provides the following RG-improved Friedmann equation
\begin{equation}
\frac{K}{a^2 H^2}-\frac{8\pi G(a)\, \rho+\Lambda(a)}{3 H^2}+\eta(a)+1=0 \;, \label{1fe}
\end{equation}
this implies an evolution equation for $a(t)$, provided $\eta(a) +1 \neq 0$:
\begin{equation}
\dot{a}^2=-\tilde{V}_K(a)\equiv\frac{V(a)-K}{\eta(a)+1}\;\;\;\;\mathrm{where} \;\;\;\;
V(a)=\frac{{a}^2}{3}(8\pi G(a)\, \rho+\Lambda(a))
\label{evolvo}
\end{equation}
in which the scalings of $G(a)$ and $\Lambda(a)$ are determined by the RG flow. It has been shown \cite{2000BonannoReuter} that under certain approximations the beta function for the Newton constant can be solved analytically and the beta function for the cosmological constant numerically. The running of the couplings in the early stages of the universe is completely captured by their behaviour around the NGFP. One can approximate
\begin{align}
&G(a)\simeq G_0 \left(1+G_0\, g_\ast^{-1} a^{-2}\right)^{-1} \label{grun}\\
&\Lambda(a)\simeq \Lambda_0 + \lambda_\ast a^{-2} \label{lrun} \;\;,
\end{align}
where equation \eqref{grun} is the expression of the running Newton coupling constant found in \cite{2000BonannoReuter} and \eqref{lrun} is the linearization of the beta function of the cosmological constant around the NGFP $(\lambda^{\star}, g^{\star})$. $G_{0}$ and $\Lambda_{0}$ are the infrared values of the gravitational and cosmological constants and coincide with the observed values.
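A quick numerical check that \eqref{grun} and \eqref{lrun}, with the identification $k\sim a^{-1}$, indeed interpolate between NGFP scaling in the ultraviolet and the observed values in the infrared: the dimensionless couplings $g(k)=k^{2}G(k)$ and $\lambda(k)=\Lambda(k)/k^{2}$ must tend to $g_\ast$ and $\lambda_\ast$. The numerical values below are illustrative (in particular $G_0=1$ is an assumption of this sketch, not a value taken from the text):

```python
import numpy as np

# illustrative parameter values; G0 = 1 is an assumption for this sketch
G0, Lambda0, g_star, lam_star = 1.0, 2e-4, 0.1, -0.5

def G_of_k(k):        # eq. (grun) with a = 1/k
    return G0/(1.0 + G0*k**2/g_star)

def Lam_of_k(k):      # eq. (lrun) with a = 1/k
    return Lambda0 + lam_star*k**2

k_uv = 1e6
g_uv = k_uv**2 * G_of_k(k_uv)       # dimensionless Newton coupling in the UV
lam_uv = Lam_of_k(k_uv)/k_uv**2     # dimensionless cosmological constant in the UV
G_ir = G_of_k(0.0)                  # infrared limit of the Newton constant
```

Deep in the ultraviolet the dimensionless couplings sit at $(g_\ast,\lambda_\ast)=(0.1,-0.5)$, while $G(k\to0)=G_0$.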
Cosmological implications of this analysis can be studied in the region $\tilde{V}_K(a)\leq 0$, as one may promptly check in equation \eqref{evolvo}. In particular, if $\tilde{V}_K(a)=0$ admits real solutions at some $a=a_b>0$, then $\tilde{V}_K(a)$ may give rise to either an emergent universe scenario or a bouncing model.
The equation $\tilde{V}_K(a)=0$ implies
%
\begin{equation}
\left(a^2+\frac{G_0}{g_\ast}\right) \left(a^2+\frac{\lambda _\ast-3 K}{\Lambda_0}\right)+\frac{8\pi m\,G_0}{\Lambda_0}\,a^{1-3 w}
=0 \;\;. \label{poteqq}
\end{equation}
Bouncing solutions exist for some values of $w$. To simplify the discussion, one may restrict to the case of a radiation-dominated universe, $w=1/3$. Then eq.~\eqref{poteqq} has (at most) two solutions with non-negative real part. The number of such solutions determines the cosmological scenario arising from $\tilde{V}_K(a)$. In fact, no solution implies no bounce, and the universe has a singularity in the past, at $a=0$. This case corresponds to the blue line in Fig.~\ref{fig1}. On the other hand, a bouncing universe is realized when $\tilde{V}_K(a)$ has two different zeros. In particular, if $\tilde{V}_K''(a)>0$ the scale factor oscillates between a minimum and a maximum value. On the contrary, if
$\tilde{V}_K''(a)<0$, the universe has a bounce at either a minimum \textit{or} a maximum value of the scale factor (black line in Fig.~\ref{fig1}). Only in the former case is the initial singularity avoided.
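This root counting is easy to reproduce numerically. For $w=1/3$, eq.~\eqref{poteqq} is a quadratic in $x=a^{2}$; the sketch below uses the parameters of Fig.~\ref{fig1} with the infrared Newton constant set to $G_0=1$ (an assumption, since its value is not stated in the text):

```python
import numpy as np

# Fig. 1 parameters; G0 = 1 is assumed for illustration
G0, g_star, lam_star, K, m = 1.0, 0.1, -0.5, 0, 3.0

def positive_zeros(Lambda0):
    # eq. (poteqq) with w = 1/3 is a quadratic in x = a^2: x^2 + b x + c = 0
    b = G0/g_star + (lam_star - 3*K)/Lambda0
    c = G0*(lam_star - 3*K)/(g_star*Lambda0) + 8*np.pi*m*G0/Lambda0
    roots = np.roots([1.0, b, c])
    return [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]

n_black = len(positive_zeros(2e-4))    # two zeros: bouncing universe
n_blue = len(positive_zeros(1.5e-3))   # no zeros: singular universe
```

With these (partly assumed) parameters the $\Lambda_0$ of the black curve yields two positive zeros and that of the blue curve none, consistent with the bouncing and singular scenarios described above.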
\begin{figure}
\begin{center}
\includegraphics[width=0.55\textwidth]{fig1}
\caption{The effective potential $\tilde{V}_K(a)$ for a
bouncing universe (black), emergent universe (red), singular universe (blue), for $K=0$, $w=1/3$, $g_\ast=0.1$, $\lambda_\ast=-0.5$ and $m=3$. Black, red and blue correspond to $\Lambda_0=2 \times 10^{-4}$, $\Lambda_0= 8.3 \times 10^{-4}$ and $\Lambda_0=1.5 \times 10^{-3}$ respectively.\label{fig1}}
\end{center}
\end{figure}
The outcome of an emergent universe from this model represents its most interesting feature. It consists of a \textit{past-eternal} inflationary phase which follows an initial quasi-static state. This universe starts at some minimum scale factor $a_{b}>0$, later inflates, and then expands according to standard cosmology and the laws of General Relativity. The requirements to have an emergent universe at $a_{b}>0$ are $\tilde{V}_K''(a)<0$ and the double zero $\dot{a}_b=\ddot{a}_b=0$. This case is represented by the red line in Fig.~\ref{fig1}.
If one considers an early universe dominated by radiation ($w=\frac{1}{3}$), equation \eqref{poteqq} can be solved:
\begin{equation}
a_b^2 = -\frac{G_0 \Lambda_0 +g_\ast(\lambda_\ast-3K)}{2 g_\ast \Lambda_0} \pm\sqrt{\left(\frac{G_0 \Lambda_0-g_\ast(\lambda_\ast-3K)}{2 g_\ast \Lambda_0}\right)^2-\frac{8\pi m\,G_0}{\Lambda_0}} \;\;.
\end{equation}
Imposing that the previous equation has two coincident solutions (emergent universe condition), one determines the value of $m$. Furthermore $a_b^2$ has to be positive, which means
\begin{equation}
\label{cond}
\lambda_\ast-3K<-\frac{G_0\Lambda_0}{g_\ast} \;\;.
\end{equation}
In the classical case $\lambda_{*}=0$ ($g_{*}>0$), and then, since the bare cosmological constant $\Lambda_{0}$ and the bare gravitational constant ${G_{0}}$ are positive, an emergent universe is possible only for values of the spatial curvature $K>0$. In reference \cite{AlfiomeAlessia} it is discussed that the Asymptotic Safety Scenario is based on the evidence that there exists a NGFP such that $\lambda_{*}\neq 0$. In particular there exist cases in which $\lambda_{*}$ is negative enough \cite{2017Alessia} \cite{AlfiomeAlessia} to allow also the cases $K=0$ and $K=-1$.
Assuming that condition \eqref{cond} holds, in the case of an emergent universe eq. \eqref{evolvo} becomes
\begin{equation}
\dot{a}^2=\frac{4 {g_\ast} a_b^2 \Lambda_0}{3 \left({g_\ast}a_b^2-{G_0} \right)} (a-a_b)^2
\end{equation}
in which the minimum scale factor $a_{b}$ is
\begin{equation}
a_b=\sqrt{-\frac{G_0 \Lambda_0 +g_\ast(\lambda_\ast-3K)}{2 g_\ast \Lambda_0}} \;\;.
\end{equation}
Condition $\tilde{V}_K(a)\leq0$ implies
\begin{equation}
a^2\geq a_b^2>\frac{G_0}{g_\ast}\;\;.
\end{equation}
(see also \cite{AlfiomeAlessia} for further details and discussions).
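The algebra above admits a quick numerical check. The sketch below, with illustrative, non-physical parameter values chosen only for convenience, imposes the coincident-root condition on the quadratic for $a_b^2$, recovers the corresponding value of $m$, and verifies condition \eqref{cond} together with the bound $a_b^2>G_0/g_\ast$:

```python
import math

# Illustrative (non-physical) parameter values satisfying condition (cond):
# lam_star - 3K = -4 < -G0*Lam0/g_star = -1
G0, Lam0, g_star, K, lam_star = 1.0, 1.0, 1.0, 1.0, -1.0

A = (G0 * Lam0 + g_star * (lam_star - 3 * K)) / (2 * g_star * Lam0)
B = (G0 * Lam0 - g_star * (lam_star - 3 * K)) / (2 * g_star * Lam0)

# Double-root (emergent-universe) condition fixes m via B^2 - 8*pi*m*G0/Lam0 = 0
m = Lam0 * B**2 / (8 * math.pi * G0)
discriminant = B**2 - 8 * math.pi * m * G0 / Lam0

ab2 = -A  # the coincident root a_b^2, matching the closed form for a_b
```

For these values $a_b^2=1.5$, which indeed exceeds $G_0/g_\ast=1$.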
One can investigate the behaviour of the early emergent universe close to $a_{b}$ by linearizing the quantum equation \eqref{evolvo} around $a_{b}$. The approximate equation is then
\begin{equation}
\dot{a}^2=\frac{4 {g_\ast} a_b^2 \Lambda_0}{3 \left({g_\ast}a_b^2-{G_0} \right)} (a-a_b)^2\;\;,
\end{equation}
whose general solution is
\begin{equation}
a(t)=a_b+\epsilon\,\mathrm{exp}\left\{\sqrt{\frac{4 {g_\ast} a_b^2 \Lambda_0}{3 \left({g_\ast} a_b^2-{G_0}\right)}}\; t\right\} \;\;,
\label{emergo}
\end{equation}
$\epsilon$ being an integration constant. It is evident that \eqref{emergo} exhibits an emergent-universe scenario with exponential evolution of the scale factor, hence there is no need for a model with \textit{ad hoc} inflation. The density parameter can be written as
\begin{equation}
\Omega-1=\frac{3 \left({g_\ast}a_b^2-{G_0} \right) K}{4 {g_\ast} a_b^4 \Lambda_0}\;e^{-2N_e}\;\;.
\end{equation}
The number $N_{e}$ of e-folds is
\begin{equation}
N_e\simeq\mathrm{log}\left(\frac{\epsilon}{a_b}\,\mathrm{exp}\left\{\sqrt{\frac{4 {g_\ast} a_b^2 \Lambda_0}{3 \left({g_\ast} a_b^2-{G_0}\right)}}\; t_e\right\}\right) \;\;,
\end{equation}
where $t_{e}$ is the cosmic time at the inflation exit.
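The exponential solution \eqref{emergo} can be verified numerically. The following sketch, with arbitrary illustrative values $g_\ast=\Lambda_0=G_0=1$ and $a_b^2=1.5$, integrates the linearized equation with a fourth-order Runge-Kutta scheme and compares the result with the closed form:

```python
import math

# Arbitrary illustrative values: g_star = Lam0 = G0 = 1, a_b^2 = 1.5
g_star, Lam0, G0 = 1.0, 1.0, 1.0
ab = math.sqrt(1.5)
# Growth rate appearing in the exponent of the solution (emergo); here it equals 2
rate = math.sqrt(4 * g_star * ab**2 * Lam0 / (3 * (g_star * ab**2 - G0)))

# RK4 integration of the linearized equation da/dt = rate * (a - a_b) up to t = 1
eps, dt, steps = 1e-3, 1e-4, 10000
a = ab + eps                       # initial condition a(0) = a_b + eps
f = lambda x: rate * (x - ab)
for _ in range(steps):
    k1 = f(a)
    k2 = f(a + 0.5 * dt * k1)
    k3 = f(a + 0.5 * dt * k2)
    k4 = f(a + dt * k3)
    a += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

a_exact = ab + eps * math.exp(rate * 1.0)   # closed-form solution (emergo) at t = 1
```

The numerically integrated scale factor agrees with the closed form to high accuracy, confirming the exponential (inflationary) growth around $a_b$.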
\section{Conclusions and Open Questions}
A Hamiltonian (ADM) analysis of the RG-improved Einstein--Hilbert action, with $G$ and $\Lambda$ as external, non-geometrical fields, has been performed. It has been shown that if one requires this theory to behave like the Hamiltonian theory of Einstein's General Relativity, that is, that the momentum constraints and the Hamiltonian constraint generate, respectively, the space diffeomorphisms on $\Sigma$ and the time diffeomorphisms, then one cannot start from the ADM metric \eqref{metricADM}, but must start from the ADM metric in Gaussian normal coordinates \eqref{normalcoordinates}.
An immediate application of the above considerations is FLRW cosmology in the minisuperspace approach, using Dirac's constraint analysis. It generates sub-Planckian cosmological models via Asymptotic Safety, which exhibit bouncing and emergent Universes. The latter are solutions of the equations of motion also in the cases $K=-1,0$, which cannot be obtained from classical General Relativity.
Although this analysis shows that the RG-improved Einstein--Hilbert action with $G$ and $\Lambda$ as external fields can be cast in Hamiltonian form only in the case of the ADM metric in Gaussian normal coordinates, one can still legitimately ask whether there exist cases and/or particular foliations in which one does not need to lose space diffeomorphisms in order to make sense of the Hamiltonian formalism. In order to shed light on this issue, it could be useful, following the suggestions of section $2$, to study the Hamiltonian formalism of the Brans--Dicke theory.
In the same direction, the ADM formalism for black holes could prove quite enlightening. Here a completely different symmetry implies a different ADM foliation, which could eventually help to answer the previous questions.
\section{Acknowledgment}
G. Gionti thanks the organizers of the Corf\'u Summer Workshop on Testing the Fundamental Physics Principles. He is grateful to A. Bonanno and A. Platania for stimulating discussions and collaboration on this work. He has also enjoyed the hospitality at Osservatorio Astrofisico di Catania while part of this research has been carried out.
\bibliographystyle{JHEP}
\section{Introduction}
The spin-orbit coupling (SOC) arising from the interaction of a particle's spin with its motion plays a crucial role in various fields of physics~\cite{Galitski2013,zhai2015degenerate}, inspiring numerous studies of fascinating phenomena, such as the quantum spin Hall effect~\cite{zhu2006spin,liu2007optically,Beeler2013}, topological superfluidity~\cite{sato2009non,jiang2011majorana,liu2012probing,liu2012topological,*liu2013topological,liu2014realization} and exotic bosonic phases of matter~\cite{wang2010spin,ho2011bose,li2012quantum,hu2012spin}. Over the last decade, this effect has been synthesized in ultracold neutral atoms~\cite{bloch2008many} through the atom-light interaction~\cite{liu2009effect,spielman2009raman}. The realization of Raman-laser-induced one-dimensional SOC~\cite{lin2011spin,wang2012spin,cheuk2012spin,dalibard2011colloquium,goldman2014light}, also referred to as equal Rashba and Dresselhaus SOC (ERDSOC), and of two-dimensional SOC for both Bose~\cite{wu2016realization,sun2017long} and Fermi gases~\cite{huang2016experimental}, has provided a versatile platform for understanding the interplay between SOC and quantum many-body physics~\cite{li2012sum,martone2012anisotropic,zheng2013properties,zhang2012collective,ji2014experimental,ji2015softening}.
In a three-dimensional weakly interacting Bose gas with ERDSOC, the system sequentially exhibits three exotic condensation phases at zero temperature, i.e., the stripe (ST), plane-wave (PW), and zero-momentum (ZM) phases, as the Rabi frequency of the Raman laser beams gradually rises~\cite{lin2011spin,li2012sum,martone2012anisotropic,zheng2013properties}. Previous investigations have mainly focused on the last two phases. The problems that have been addressed include the ground-state phase diagram~\cite{li2012quantum,zheng2013properties}, quantum and thermal fluctuations~\cite{ozawa2012stability,cui2013enhancement,liao2014spin,chen2017quantum}, collective excitations~\cite{khamehchi2014measurement,chen2017collective}, and superfluidity and critical velocities~\cite{zhu2012exotic,zhou2012opposite,yu2017landau}. However, only a handful of works have involved the stripe phase, which has attracted great attention after being directly observed in recent experiments with ultracold atomic gases~\cite{leonard2017supersolid,li2017stripe}.
Theoretically, the existence of a stripe phase was first predicted in Refs.~\cite{wang2010spin,ho2011bose,li2012quantum} using a first-order stripe ansatz. Later, by employing an improved high-order ansatz and calculating the static structure factor, one of the present authors (Y. L.) and her collaborators characterized the spin and density responses of the stripe phase and found two gapless modes in the elementary excitation spectrum~\cite{li2013superstripes}. This calculation clearly indicates the importance of including high-order harmonics in the trial ansatz for the purpose of \emph{quantitatively} characterizing the stripe phase. Unfortunately, apart from the elementary phonon excitation spectrum, none of the other physical properties of the stripe phase has so far been investigated using an ansatz with high-order harmonics.
Experimentally, in a SOC Bose gas of $^{87}$Rb atoms, the phase space for the stripe phase is small. Martone \textit{et al.} tried to enhance the stripe contrast and make it visible and stable under realistic experimental conditions by theoretically considering the loading of atoms into a two-dimensional bilayer configuration~\cite{martone2014approach}. Most recently, Li and co-workers achieved the effective SOC in optical superlattices and observed for the first time the exotic stripe phase with supersolid properties using Bragg spectroscopy~\cite{li2016spin,li2017stripe}. This experimental breakthrough provides a great opportunity to test and verify the theoretical predictions on the stripe phase.
In this work, motivated by previous theoretical studies and recent experiments, we explore the fascinating stripe phase and aim to make quantitative predictions for several fundamental properties of this phase using the high-order stripe ansatz. Within the Bogoliubov approximation, we first consider the dependence of the density distribution and the excitation spectrum on the tunable Rabi frequency. By introducing high-order harmonics and comparing the free energies of different trial ansätze, we then obtain an improved critical Rabi frequency for the transition from the stripe to the plane-wave phase. We numerically calculate the depletion of the condensate induced by quantum fluctuations, with which one can straightforwardly characterize the first-order ST-PW transition and the second-order PW-ZM transition. Finally, by developing a phase-twist method, we discuss the superfluidity of a SOC Bose gas by calculating the superfluid density in all three phases.
This paper is organized as follows. We describe the model Hamiltonian and the theoretical framework in Sec.~\ref{sec:theory}. In Sec.~\ref{sec:results}, we present the density profile and the Bogoliubov excitation spectrum of the stripe phase (see Fig.~\ref{fig1}) and show how to improve the prediction of the critical ST-PW transition by taking high-order harmonics in the ansatz (Figs.~\ref{fig2} and~\ref{fig3}). We then calculate the quantum depletion as a function of Rabi frequency (Fig.~\ref{fig4}). The analytic expression for the superfluid density is derived for all three phases using the first-order ansatz. In the stripe phase, the analytic prediction is compared with the more accurate numerical result with the high-order ansatz (see Fig.~\ref{fig5}). A summary and outlook are given in Sec.~\ref{sec:summary}.
\section{Theoretical Frameworks\label{sec:theory}}
\subsection{The model Hamiltonian}
We consider a three-dimensional weakly interacting Bose gas with Raman-induced spin-orbit coupling, the same as in our previous work~\cite{chen2017quantum}. The system can be described by a two-component (i.e., two-energy-level) Hamiltonian, $\hat{H}=\hat{H}_{0}+\hat{H}_{\mathrm{int}}$, where the single-particle Hamiltonian $\hat{H}_{0}$ and the interaction Hamiltonian $\hat{H}_{\mathrm{int}}$ read, respectively ($\hbar=1$)~\cite{lin2011spin,li2012quantum,zheng2013properties}
\begin{eqnarray}
\hat{H}_{0} & = & \int d^{3}{\bf r}[\hat{\Phi}_{\uparrow}^{\dagger}({\bf r}),\hat{\Phi}_{\downarrow}^{\dagger}({\bf r})]\mathcal{H}_{\mathrm{s}}(\hat{{\bf p}})\left[\begin{array}{c}
\hat{\Phi}_{\uparrow}({\bf r})\\
\hat{\Phi}_{\downarrow}({\bf r})
\end{array}\right],\label{eq:single-particle}\\
\hat{H}_{\mathrm{int}} & = & \int d^{3}{\bf r}\sum_{\sigma,\sigma^{\prime}=\uparrow,\downarrow}\frac{g_{\sigma\sigma^{\prime}}}{2}\hat{\Phi}_{\sigma}^{\dagger}({\bf r})\hat{\Phi}_{\sigma^{\prime}}^{\dagger}({\bf r})\hat{\Phi}_{\sigma^{\prime}}({\bf r})\hat{\Phi}_{\sigma}({\bf r}).\label{eq:interaction}
\end{eqnarray}
Here, $\mathcal{H}_{\mathrm{s}}(\hat{{\bf p}})$ is given by
\begin{equation}
\mathcal{H}_{\mathrm{s}}(\hat{{\bf p}})=\frac{(\hat{{\bf p}}-k_{\mathrm{r}}\hat{{\bf e}}_{x}\sigma_{z})^{2}}{2m}+\frac{\Omega}{2}\sigma_{x}+\frac{\delta}{2}\sigma_{z},
\end{equation}
with the canonical momentum operator $\hat{{\bf p}}=-i\nabla$. It is worth noting that, in the Raman SOC scheme, the physical momenta of the two pseudo-spin states are $\hat{{\bf p}}-k_{\mathrm{r}}\hat{{\bf e}}_{x}$ for pseudo-spin-up atoms and $\hat{{\bf p}}+k_{\mathrm{r}}\hat{{\bf e}}_{x}$ for pseudo-spin-down atoms. $g_{\sigma\sigma^{\prime}}=4\pi a_{\sigma\sigma^{\prime}}/m$ are the intraspecies ($\sigma=\sigma^{\prime}$) and interspecies ($\sigma\neq\sigma^{\prime}$) interaction strengths, and $a_{\sigma\sigma^{\prime}}$ are the corresponding $s$-wave scattering lengths. $k_{\mathrm{r}}\hat{{\bf e}}_{x}$ is the recoil momentum of the Raman lasers along the $x$ axis, with recoil energy $E_{\mathrm{r}}=k_{\mathrm{r}}^{2}/(2m)$. The detuning of the Raman lasers is assumed to be zero, $\delta=0$, and the Rabi frequency $\Omega$ can be flexibly tuned, in accord with recent experiments~\cite{ji2014experimental,ji2015softening}. Previous works have shown that the difference between intra- and interspecies interactions is essential to the emergence of the spin-mixed stripe phase~\cite{li2012quantum,martone2014approach}. Hence, we will set $g_{_{\uparrow\uparrow}}=g_{_{\downarrow\downarrow}}=g>g_{_{\uparrow\downarrow}}$ in the following investigations of the stripe phase.
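The structure of the single-particle spectrum can be illustrated numerically. Diagonalizing $\mathcal{H}_{\mathrm{s}}(p)$ at $\delta=0$ for momentum $p$ along $x$ gives the lower band $(p^{2}+k_{\mathrm{r}}^{2})/2m-\sqrt{(pk_{\mathrm{r}}/m)^{2}+\Omega^{2}/4}$, whose minima sit at $p=\pm k_{\mathrm{r}}\sqrt{1-(\Omega/4E_{\mathrm{r}})^{2}}$ for $\Omega<4E_{\mathrm{r}}$ and merge at $p=0$ above. A minimal sketch, in illustrative units with $\hbar=1$ and $E_{\mathrm{r}}=1$:

```python
import numpy as np

m, kr = 0.5, 1.0                       # hbar = 1; recoil energy E_r = kr**2/(2m) = 1
Er = kr**2 / (2 * m)

def lower_band(p, Omega):
    # Lower eigenvalue of H_s(p) at delta = 0, for momentum p along the SOC (x) axis
    return (p**2 + kr**2) / (2 * m) - np.sqrt((p * kr / m) ** 2 + Omega**2 / 4)

p = np.linspace(-2, 2, 40001)          # momentum grid, step 1e-4
p_weak = p[np.argmin(lower_band(p, 2.0))]    # Omega = 2 E_r < 4 E_r: double-well band
p_strong = p[np.argmin(lower_band(p, 5.0))]  # Omega = 5 E_r > 4 E_r: single minimum
```

The grid minimum reproduces $|p|=k_{\mathrm{r}}\sqrt{1-(2/4)^{2}}\simeq0.866\,k_{\mathrm{r}}$ in the weak-Rabi case and $p=0$ in the strong-Rabi case.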
\subsection{The Gross-Pitaevskii equation and Bogoliubov theory}
In this work, we employ the quasiparticle formalism at the Bogoliubov level to describe a weakly interacting dilute Bose gas with SOC at zero temperature~\cite{griffin1996conserving,dodd1998collective,buljan2005incoherent,dalfovo1999theory,Pitaevskii2003Book}. Following the standard procedure~\cite{chen2015collective,chen2017quantum}, the Bose field operator $\hat{\Phi}_{\sigma}({\bf r},t)$ ($\sigma=\uparrow,\downarrow$) can be rewritten as a combination of the condensate wave function $\phi_{\sigma}$ and the noncondensate fluctuation operator $\hat{\eta}_{\sigma}$ as
\begin{equation}
\hat{\Phi}_{\sigma}({\bf r},t)=\phi_{\sigma}({\bf r},t)+\hat{\eta}_{\sigma}({\bf r},t).\label{eq:newBosefield}
\end{equation}
Using a Bogoliubov transformation, the fluctuation operator $\hat{\eta}_{\sigma}({\bf r},t)$ and its conjugate can be expanded as
\begin{subequations} \label{eq:eta}
\begin{eqnarray}
\hat{\eta}_{\sigma} &=& \underset{j}{\sum}\left[u_{j\sigma}({\bf r})e^{-i\varepsilon_{j}t}\hat{a}_{j}+v_{j\sigma}^{*}({\bf r})e^{i\varepsilon_{j}t}\hat{a}_{j}^{\dagger}\right],\\
\hat{\eta}_{\sigma}^{\dagger} &=& \underset{j}{\sum}\left[u_{j\sigma}^{*}({\bf r})e^{i\varepsilon_{j}t}\hat{a}_{j}^{\dagger}+v_{j\sigma}({\bf r})e^{-i\varepsilon_{j}t}\hat{a}_{j}\right],
\end{eqnarray}
\end{subequations}
in terms of the quasiparticle amplitudes $u$ ($u^{*}$), $v$ ($v^{*}$) and the quasiparticle frequencies $\varepsilon_{j}$. Here, $j\equiv({\bf q},\tau)$ labels the quasiparticle energy levels, with quasimomentum ${\bf q}$ and branch index $\tau$. $\hat{a}^{\dagger}$ ($\hat{a}$) are the creation (annihilation) operators for quasiparticles, satisfying the bosonic commutation relations:
\begin{equation}
\left[\hat{a}_{i},\hat{a}^{\dagger}_{j}\right]=\delta_{ij},~\left[\hat{a}_{i}^{\dagger},\hat{a}_{j}^{\dagger}\right]=\left[\hat{a}_{i},\hat{a}_{j}\right]=0.
\end{equation}
After substituting Eq.~\eqref{eq:newBosefield} and Eqs.~\eqref{eq:eta} into the equations of motion
\begin{equation}
i\partial_{t}\hat{\Phi}_{\sigma}({\bf r},t)=\left[\hat{\Phi}_{\sigma},\hat{H}\right],
\end{equation}
and applying the mean-field decoupling of the cubic terms in $\hat{\Phi}$ and $\hat{\Phi}^\dagger$~\cite{griffin1996conserving}, we obtain two coupled equations as in Ref.~\cite{chen2017quantum}.
The first equation is the modified Gross-Pitaevskii (GP) equation for the condensate,
\begin{equation}
\left[\mathcal{H}_{\mathrm{s}}(\hat{{\bf p}})+\mathrm{diag}(\mathcal{L}_{\uparrow},\mathcal{L}_{\downarrow})\right]\phi=\mu\phi,\label{eq:gp}
\end{equation}
where we have introduced the spinor $\phi\equiv(\phi_{\uparrow},\phi_{\downarrow})^{T}$, the chemical potential $\mu$, and the diagonal element ($\sigma\neq\bar{\sigma}$)
\begin{equation}
\mathcal{L}_{\sigma}\equiv gn_{\sigma}+g_{_{\uparrow\downarrow}}n_{\bar{\sigma}}.
\end{equation}
The second equation is the coupled Bogoliubov equation for quasiparticles,
\begin{subequations} \label{eq:bogoliubov}
\begin{eqnarray}
\left[\mathcal{H}_{\mathrm{s}}(\hat{{\bf p}})-\mu+\mathcal{A}_\uparrow\right]U_j +\mathcal{B}V_j&=&\varepsilon_jU_j, \\
-\mathcal{B}U^*_j -\left[\mathcal{H}_{\mathrm{s}}(\hat{{\bf p}})-\mu+\mathcal{A}_\downarrow\right]V^*_j&=&\varepsilon_jV^*_j,
\end{eqnarray}
\end{subequations}
where $U_{j}\equiv(u_{j\uparrow},u_{j\downarrow})^{T}$, $V_{j}\equiv(v_{j\uparrow},v_{j\downarrow})^{T}$, and
\begin{equation}
\mathcal{A}_{\sigma}\equiv\left[\begin{array}{cc}
2gn_{\sigma}+g_{_{\uparrow\downarrow}}n_{\bar{\sigma}} & g_{_{\uparrow\downarrow}}\phi_{\sigma}\phi_{\bar{\sigma}}\\
g_{_{\uparrow\downarrow}}\phi_{\bar{\sigma}}\phi_{\sigma} & 2gn_{\bar{\sigma}}+g_{_{\uparrow\downarrow}}n_{\sigma}
\end{array}\right],
\end{equation}
\begin{equation}
\mathcal{B}\equiv\left[\begin{array}{cc}
g\phi_{\uparrow}^{2} & g_{_{\uparrow\downarrow}}\phi_{\uparrow}\phi_{\downarrow}\\
g_{_{\uparrow\downarrow}}\phi_{\uparrow}\phi_{\downarrow} & g\phi_{\downarrow}^{2}
\end{array}\right].
\end{equation}
After solving the GP and Bogoliubov equations, Eqs.~\eqref{eq:gp} and \eqref{eq:bogoliubov}, one obtains straightforwardly the ground-state wave function $\phi({\bf r})$ and the Bogoliubov excitation spectrum $\varepsilon_{j}$, as a function of Rabi frequency $\Omega$. At zero temperature, the total energy of the system can be written in terms of $\phi({\bf r})$ as~\cite{li2012quantum}
\begin{equation}
\begin{aligned}E= & \int d^{3}{\bf r}\left[\left(\phi_{\uparrow}^{\dagger}({\bf r}),\phi_{\downarrow}^{\dagger}({\bf r})\right)\mathcal{H}_{\mathrm{s}}(\hat{{\bf p}})\left(\begin{array}{c}
\phi_{\uparrow}({\bf r})\\
\phi_{\downarrow}({\bf r})
\end{array}\right)\right.\\
& \left.+\frac{1}{2}g\left(|\phi_{\uparrow}({\bf r})|^{4}+|\phi_{\downarrow}({\bf r})|^{4}\right)+g_{\uparrow\downarrow}|\phi_{\uparrow}({\bf r})|^{2}|\phi_{\downarrow}({\bf r})|^{2}\right].
\end{aligned}
\label{eq:mean-field-energy}
\end{equation}
In the above derivations, the anomalous densities (i.e., $\langle\hat{\eta}^{\dagger}\hat{\eta}^{\dagger}\rangle$ and $\langle\hat{\eta}\hat{\eta}\rangle$) are omitted, as we use the Popov approximation to ensure a gapless spectrum~\cite{griffin1996conserving,chen2015collective}. In other words, we set the density of the condensate to be the total density, $n_{c}=\bar{n}=N/V$. In addition, the thermal density $\langle\hat{\eta}_{\sigma}^{\dagger}\hat{\eta}_{\sigma}\rangle$ and the spin-flip term $\langle\hat{\eta}_{\sigma}^{\dagger}\hat{\eta}_{\bar{\sigma}}\rangle$ vanish at zero temperature and are therefore not taken into account in our calculations. Nevertheless, the condensate is still depleted by a small fraction of the total density, even at zero temperature, due to quantum fluctuations. This is the so-called quantum depletion,
\begin{equation}
n_{\mathrm{qd}}=\frac{1}{V}\sum_{{\bf q},\tau,\sigma}|v_{\sigma{\bf q}}^{(\tau)}|^{2},\label{eq:quantum_depletion}
\end{equation}
typically involving about $1\%$ of the total density; it will be explored thoroughly in the next section. Quantum fluctuations also lead to the well-known Lee-Huang-Yang (LHY) correction to the total energy, beyond the mean-field approximation. For the self-consistency of the theory, we do not include in our calculations the LHY energy correction, which is of the order of the square root of the gas parameter, $(\bar{n}a^3)^{1/2}$~\cite{zheng2013properties}. Otherwise, the energy correction at the same order due to the anomalous densities would have to be taken into account on an equal footing, which is clearly beyond the scope of this work.
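The structure of Eq.~\eqref{eq:quantum_depletion} can be sanity-checked in a simple limit. For a single uniform Bogoliubov gas (no SOC), $|v_{q}|^{2}=(\epsilon_{q}+g\bar{n}-E_{q})/(2E_{q})$ with $E_{q}=\sqrt{\epsilon_{q}(\epsilon_{q}+2g\bar{n})}$, and the momentum sum reproduces the textbook LHY depletion $(8/3\sqrt{\pi})\,\bar{n}(\bar{n}a^{3})^{1/2}$. The sketch below, in illustrative units, performs this check by direct numerical integration:

```python
import math
import numpy as np

# Single-component Bogoliubov gas as a sanity-check limit of Eq. (quantum_depletion)
m, n, a = 1.0, 1.0, 0.01            # illustrative units (hbar = 1); n*a^3 = 1e-6
g = 4 * math.pi * a / m             # s-wave coupling constant
gn = g * n

q = np.linspace(1e-6, 200.0, 400001)
eps = q**2 / (2 * m)
E = np.sqrt(eps * (eps + 2 * gn))   # Bogoliubov excitation spectrum
v2 = (eps + gn - E) / (2 * E)       # |v_q|^2

# n_qd = (1 / 2 pi^2) * int q^2 |v_q|^2 dq, via the trapezoidal rule
f = q**2 * v2
dq = q[1] - q[0]
n_qd = (f.sum() - 0.5 * (f[0] + f[-1])) * dq / (2 * math.pi**2)

# Lee-Huang-Yang closed form for the depletion of a uniform Bose gas
n_lhy = (8 / (3 * math.sqrt(math.pi))) * n * math.sqrt(n * a**3)
```

The numerical sum agrees with the closed form to well below a percent; the full SOC calculation in the text replaces $E_{q}$ and $|v_{q}|^{2}$ with their two-branch Bogoliubov counterparts.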
\subsection{The plane-wave ansatz}
The magnetic plane-wave phase and the nonmagnetic zero-momentum phase have been extensively investigated in the previous works~\cite{martone2012anisotropic,zheng2013properties,zhang2016superfluid,chen2017quantum} by using a \textit{plane-wave ansatz} at the momentum $P_{x}$ in Eq.~\eqref{eq:newBosefield}:
\begin{equation}
\phi({\bf r})=\sqrt{\bar{n}}\left(\begin{array}{c}
\cos\theta\\
-\sin\theta
\end{array}\right)e^{iP_{x}x}.\label{eq:plane-wave}
\end{equation}
Here, $\bar{n}=N/V$ is the uniform average density, and the variational angle $\theta$ in the range $[0,\pi/4]$ weighs the spin components of the condensate. In free space, the quasiparticle amplitudes $u_{j\sigma}({\bf r})$, $v_{j\sigma}({\bf r})$ with index $j=({\bf q},\tau)$ and spin component $\sigma$ can be expanded as $u_{j\sigma}({\bf r})=u_{{\bf q}\sigma}^{\tau}e^{i{\bf qr}}$ and $v_{j\sigma}({\bf r})=v_{{\bf q}\sigma}^{\tau}e^{i{\bf qr}}$, where the normalization condition for each branch of two physical solutions ($\tau=\pm$) is given by,
\begin{equation}
\sum_{{\bf q}\sigma}(|u_{{\bf q}\sigma}^{\tau}|^{2}-|v_{{\bf q}\sigma}^{\tau}|^{2})=1.
\end{equation}
In this case, the ground-state energy per particle in Eq.~\eqref{eq:mean-field-energy}
becomes~\cite{li2012quantum,chen2017quantum}
\begin{equation} \label{eq:PW-energy}
\begin{aligned}\frac{E^{(\mathrm{PW})}}{N}= & \frac{P_{x}^{2}+k_{\mathrm{r}}^{2}-2P_{x}k_{\mathrm{r}}\cos{2\theta}}{2m}-\frac{1}{2}\Omega\sin{2\theta}\\
& +\frac{2g\bar{n}-(g-g_{_{\uparrow\downarrow}})\bar{n}\sin^{2}{2\theta}}{4}.
\end{aligned}
\end{equation}
The minimization of the energy gives rise to two solutions: the plane-wave phase, where the condensate occurs at momentum $P_{x}=\pm k_{\mathrm{r}}\sqrt{1-\Omega^{2}/[4E_{\mathrm{r}}-(g-g_{\uparrow\downarrow})\bar{n}]^{2}}$ for $\Omega\leq4E_{\mathrm{r}}-(g-g_{\uparrow\downarrow})\bar{n}$, and the zero-momentum phase with $P_{x}=0$ for $\Omega>4E_{\mathrm{r}}-(g-g_{\uparrow\downarrow})\bar{n}$~\cite{li2012quantum,chen2017quantum}. In the lowest-lying excitation spectrum, a typical feature of the plane-wave phase is the emergence of the roton-maxon structure, while the zero-momentum phase exhibits only the linear phonon mode~\cite{martone2012anisotropic,zheng2013properties,chen2017quantum}.
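This minimization is easy to reproduce numerically. The sketch below performs a brute-force grid minimization of Eq.~\eqref{eq:PW-energy} over $(\theta,P_{x})$, dropping the $\theta$-independent $g\bar{n}/2$ constant and using the illustrative values $E_{\mathrm{r}}=1$ and $G_{2}=(g-g_{\uparrow\downarrow})\bar{n}/4=0.1E_{\mathrm{r}}$, and recovers the analytic condensation momentum on both sides of the transition:

```python
import numpy as np

m, kr = 0.5, 1.0                       # hbar = 1, so E_r = kr**2/(2m) = 1
Er, G2 = 1.0, 0.1                      # illustrative interaction parameter

def e_pw(theta, P, Omega):
    # Eq. (PW-energy) per particle, without the constant g*nbar/2 term
    return ((P**2 + kr**2 - 2 * P * kr * np.cos(2 * theta)) / (2 * m)
            - 0.5 * Omega * np.sin(2 * theta) - G2 * np.sin(2 * theta) ** 2)

theta = np.linspace(0, np.pi / 4, 801)[:, None]   # variational angle grid
P = np.linspace(-1.5, 1.5, 6001)[None, :]         # momentum grid

def optimal_P(Omega):
    energy = e_pw(theta, P, Omega)
    return P[0, np.unravel_index(np.argmin(energy), energy.shape)[1]]

Oc2 = 4 * Er - 4 * G2                  # PW-ZM boundary
P_pw = abs(optimal_P(2.0))             # Omega = 2 E_r < Oc2: plane-wave phase
P_zm = abs(optimal_P(4.0))             # Omega = 4 E_r > Oc2: zero-momentum phase
P_analytic = kr * np.sqrt(1 - (2.0 / Oc2) ** 2)
```

The grid minimum matches $P_{x}=k_{\mathrm{r}}\sqrt{1-\Omega^{2}/\Omega_{c2}^{2}}$ below the transition and collapses to $P_{x}=0$ above it.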
\subsection{The stripe ansatz}
In contrast to the well-studied plane-wave and zero-momentum phases, in this work we concentrate on the exotic stripe phase, which was recently observed in ultracold atomic systems~\cite{leonard2017supersolid,li2017stripe}. To understand the key properties of the stripe phase, a \emph{first-order stripe ansatz} is often adopted~\cite{li2012quantum,martone2014approach,ji2014experimental,yu2014equation}:
\begin{equation}
\phi(\mathbf{r})=\sqrt{\frac{\bar{n}}{2}}\left[\left(\begin{array}{c}
\sin\theta\\
-\cos\theta
\end{array}\right)e^{-iP_{x}x}+\left(\begin{array}{c}
\cos\theta\\
-\sin\theta
\end{array}\right)e^{iP_{x}x}\right].\label{eq:1st_stripe}
\end{equation}
This ansatz is an equal superposition of two plane waves with momentum $\pm P_{x}$, in contrast to the single-plane-wave ansatz in Eq.~\eqref{eq:plane-wave}.
By substituting this trial wave function, Eq.~\eqref{eq:1st_stripe}, into the model Hamiltonian and minimizing the ground-state energy per particle, which takes the form~\cite{li2012quantum},
\begin{equation}
\begin{aligned}\frac{E^{(\mathrm{1st})}}{N}= & \frac{P_{x}^{2}+k_{\mathrm{r}}^{2}-2P_{x}k_{\mathrm{r}}\cos{2\theta}}{2m}-\frac{1}{2}\Omega\sin{2\theta}\\
& +\frac{(g+g_{_{\uparrow\downarrow}})\bar{n}}{4}\left(1+\frac{1}{2}\sin^{2}{2\theta}\right),
\end{aligned}
\label{eq:1stST-energy}
\end{equation}
one can straightforwardly determine the critical Rabi frequencies separating the three exotic phases in the appropriate interaction regime (i.e., $G_{2}>0$~\footnote{\label{note1}This condition is necessary for the existence of the exotic stripe phase; the stricter condition is $E_{\mathrm{r}}>2G_{2}+2G_{2}^{2}/G_{1}$ in Ref.~\cite{li2012quantum}.}), which are respectively given by~\cite{li2012quantum}
\begin{equation}
\Omega_{c1}=2\left[(2E_{\mathrm{r}}+G_{1})(2E_{\mathrm{r}}-2G_{2})\frac{2G_{2}}{G_{1}+2G_{2}}\right]^{1/2}\label{eq:omega1}
\end{equation}
for the ST-PW phase transition, and
\begin{equation}
\Omega_{c2}=4E_{\mathrm{r}}-4G_{2}\label{eq:omega2}
\end{equation}
for the PW-ZM phase transition. Here, the two interaction parameters are $G_{1}=(g+g_{\uparrow\downarrow})\bar{n}/4$ and $G_{2}=(g-g_{\uparrow\downarrow})\bar{n}/4$. It is worth mentioning that, in our previous work~\cite{chen2017quantum}, we determined the ST-PW transition at zero temperature using the criterion of a vanishing roton energy gap. The critical Rabi frequency determined there is in good agreement with $\Omega_{c1}$, and it agrees even better if quantum fluctuations are neglected in the calculations.
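Eqs.~\eqref{eq:omega1} and \eqref{eq:omega2} are straightforward to evaluate. The sketch below reproduces both the critical frequencies for the $^{87}$Rb parameters quoted later in the text ($\Omega_{c1}\simeq0.2E_{\mathrm{r}}$, $\Omega_{c2}\simeq4.0E_{\mathrm{r}}$) and those used in Sec.~\ref{sec:results} ($\Omega_{c1}\simeq2.27E_{\mathrm{r}}$, $\Omega_{c2}=3.60E_{\mathrm{r}}$ for $G_{1}=0.5E_{\mathrm{r}}$, $G_{2}=0.1E_{\mathrm{r}}$):

```python
import math

Er = 1.0                                   # recoil energy as the energy unit

def critical_rabi(G1, G2):
    # Critical Rabi frequencies from Eqs. (omega1) and (omega2)
    Oc1 = 2 * math.sqrt((2 * Er + G1) * (2 * Er - 2 * G2) * 2 * G2 / (G1 + 2 * G2))
    Oc2 = 4 * Er - 4 * G2
    return Oc1, Oc2

# Interaction parameters used in Sec. III: G1 = 0.5 E_r, G2 = 0.1 E_r
Oc1_a, Oc2_a = critical_rabi(0.5, 0.1)

# 87Rb parameters quoted in the text: g*n = 0.38 E_r, g_ud/g = 100.99/101.20
gn, ratio = 0.38, 100.99 / 101.20
Oc1_b, Oc2_b = critical_rabi(gn * (1 + ratio) / 4, gn * (1 - ratio) / 4)
```

The two parameter sets yield $(\Omega_{c1},\Omega_{c2})\simeq(2.27,3.60)E_{\mathrm{r}}$ and $(0.19,4.00)E_{\mathrm{r}}$, respectively.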
The critical Rabi frequency in Eq.~\eqref{eq:omega1} for the ST-PW boundary is accurate at sufficiently weak interaction strengths (i.e., $G_{1}/E_{\textrm{r}},G_{2}/E_{\textrm{r}}\rightarrow0$). However, when the interactions become stronger, a stripe ansatz with high-order harmonics (i.e., plane waves with wave vectors $\pm3P_{x}$, $\pm5P_{x}$, etc.) needs to be considered~\cite{li2013superstripes}. In this work, we take the following stripe ansatz that includes high-order terms~\cite{li2013superstripes},
\begin{equation}
\phi(\mathbf{r})=\sqrt{\frac{\bar{n}}{2}}\sum_{\gamma=\pm}\sum_{\alpha=1}^{N_{\mathrm{L}}}\left(\begin{array}{c}
\phi_{\uparrow}^{(\gamma\alpha)}\\
\phi_{\downarrow}^{(\gamma\alpha)}
\end{array}\right)e^{i\gamma(2\alpha-1)P_{x}x},\label{eq:high-order_stripe}
\end{equation}
which possesses the symmetry $\phi_{\uparrow}^{(\alpha)}=-[\phi_{\downarrow}^{(-\alpha)}]^{*}$ and is periodic in real space. Here, $\alpha$ is the index of the stripe order and is smaller than or equal to a cutoff integer $N_{\mathrm{L}}$. After solving for the wave function $\phi({\bf r})$, the ground-state energy $E^{(N_{\mathrm{L}})}$ can be numerically calculated using Eq.~\eqref{eq:mean-field-energy}. At $N_{\mathrm{L}}=1$, the energy $E^{(N_{\mathrm{L}}=1)}$ recovers the analytic expression for the first-order stripe ansatz, $E^{(\mathrm{1st})}$ in Eq.~\eqref{eq:1stST-energy}.
To investigate the low-energy excitations, the Bogoliubov quasiparticle amplitudes $u$, $v$ for the index $j=({\bf q},\tau)$ and spin $\sigma$ can be simply expanded in a Bloch form as~\cite{li2013superstripes}
\begin{subequations} \label{eq:uv_nth}
\begin{eqnarray}
u_{j\sigma}({\bf r})&=&e^{i{\bf qr}}\sum_{\gamma=\pm}\sum_{\beta=1}^{N_\mathrm{M}} u^{(\gamma\beta,\tau)}_{\sigma}e^{i\gamma(2\beta-1)P_xx}, \\
v_{j\sigma}({\bf r})&=&e^{i{\bf qr}}\sum_{\gamma=\pm}\sum_{\beta=1}^{N_\mathrm{M}} v^{(\gamma\beta,\tau)}_{\sigma}e^{i\gamma(2\beta-1)P_xx},
\end{eqnarray}
\end{subequations}
where ${\bf q}$ is the quasimomentum and $\beta$ is the expansion order index and is smaller than or equal to the cutoff integer $N_{\mathrm{M}}$. We substitute Eqs. \eqref{eq:uv_nth} into the Bogoliubov equations, Eqs. \eqref{eq:bogoliubov}, to determine the expansion coefficients $u_{\sigma}^{(\gamma\beta,\tau)}$ and $v_{\sigma}^{(\gamma\beta,\tau)}$.
In recent experiments~\cite{lin2011spin,ji2014experimental,ji2015softening}, $^{87}$Rb atoms are used. The typical interaction energy is $gn=0.38E_{\mathrm{r}}$ at the peak density $n=0.46k_{\mathrm{r}}^{3}$ in harmonic traps, and the ratio of the interspecies to the intraspecies interaction is very close to unity, i.e., $g_{_{\uparrow\downarrow}}/g=100.99/101.20$~\cite{ji2015softening}. With these parameters, the two critical Rabi frequencies are $\Omega_{c1}=0.2E_{\mathrm{r}}$ and $\Omega_{c2}=4.0E_{\mathrm{r}}$, respectively [see Eqs.~\eqref{eq:omega1} and~\eqref{eq:omega2}], characterizing the first-order ST-PW and the second-order PW-ZM phase transitions at zero temperature. The stripe phase is thus energetically favored only in a small window of Rabi frequency, $\Omega\leq0.2E_{\mathrm{r}}$, and the contrast of the stripe density is not large enough to be resolved in the laboratory~\cite{lin2011spin,ji2014experimental,martone2014approach}. In our calculations, we will therefore consider a relatively large ratio of inter- to intraspecies interaction strengths (i.e., large $G_{1}$ and $G_{2}$), in order to enlarge the window for the stripe phase in the phase diagram.
\subsection{The phase-twist method\label{sec:phase-twist}}
Microscopically, by imposing a phase twist ${\bf Q}=Q_{x}\hat{\bf e}_x+Q_{y}\hat{\bf e}_y+Q_{z}\hat{\bf e}_z$, i.e., a supercurrent, on the order parameter
\begin{equation}
\phi({\bf r})\rightarrow e^{i{\bf Q\cdot r}}\phi({\bf r}),
\end{equation}
the superfluid will flow with a velocity ${\bf v}_{\mathrm{s}}=\hbar{\bf Q}/m$. In the limit ${\bf Q}\to0$, the variation of free energy $\Delta\mathcal{F}({\bf Q})\equiv\mathcal{F}({\bf Q})-\mathcal{F}(0)$ is approximately given by the extra kinetic energy of the imposed supercurrent,
\begin{equation}
\begin{aligned}
\Delta\mathcal{F}({\bf Q})\approx & \sum_{i,j=x,y,z}\frac{Q_iQ_j}{2}\lim_{Q_{_{i,j}}\to0}\frac{d^{2}\mathcal{F}({\bf Q})}{dQ_idQ_j}\\
&\equiv\sum_{i,j}\frac{1}{2}n^{(ij)}_{\mathrm{s}}mv^{(i)}_{\mathrm{s}}v^{(j)}_{\mathrm{s}}V.
\end{aligned}
\end{equation}
Therefore, the ratio of the superfluid-density tensor element $n^{(ij)}_{\mathrm{s}}$ to the total density $\bar{n}=N/V$ can be expressed as~\cite{fisher1973helicity,taylor2006pairing,he2018realizing}
\begin{equation} \label{eq:ns_fraction}
\frac{n^{(ij)}_{\mathrm{s}}}{\bar{n}}\equiv\frac{m}{N}\lim_{Q_{_{i,j}}\to0}\frac{d^{2}\mathcal{F}({\bf Q})}{dQ_idQ_j},~i,j=x,y,z.
\end{equation}
In the presence of SOC along the $x$ axis, the superfluid density can be written in tensor form~\cite{zhang2016superfluid}
\begin{equation}
\hat{n}_{\mathrm{s}}=n^{(x)}_{\mathrm{s}}\hat{\bf e}_x\hat{\bf e}_x+n^{(\perp)}_{\mathrm{s}}(\hat{\bf e}_y\hat{\bf e}_y+\hat{\bf e}_z\hat{\bf e}_z),
\end{equation}
where the tensor elements with $i\neq j$ vanish due to the reflection symmetry of the Hamiltonian, and $n^{(x)}_{\mathrm{s}}$ and $n^{(\perp)}_{\mathrm{s}}$ denote the superfluid components along the $x$ direction and in the perpendicular directions, respectively. At zero temperature, without loss of generality, we start from the first-order ansatz in Eq.~\eqref{eq:1st_stripe}, whose energy per particle $\epsilon(\theta,P_{x})\equiv E/N$ is a function of the two variational parameters $(\theta,P_{x})$ (see also Ref.~\cite{li2012quantum}). Fixing the total particle number $N$ and imposing a phase twist $Q_i$ on the equilibrium state, $\left(\theta_{0},P_{0}\right)\to\left(\theta(Q_i),P_{0}(Q_i)\right)$~\footnote{In this work, the phase twist is along the direction of SOC, i.e., the $x$ direction, or in the perpendicular $y$-$z$ plane.}, after some straightforward manipulations of Eq.~\eqref{eq:ns_fraction} the superfluid fraction can be explicitly expressed as~\cite{he2018realizing}
\begin{equation} \label{eq:ns_SOC}
\frac{n^{(i=x,\perp)}_{\mathrm{s}}}{\bar{n}}=\frac{m}{N}\left[\frac{\partial^{2}\mathcal{F}}{\partial Q_i^{2}}-\left(\frac{\partial^{2}\mathcal{F}}{\partial\theta\partial Q_i}\right)^{2}/\left(\frac{\partial^{2}\mathcal{F}}{\partial\theta^{2}}\right)\right]_{Q_i\to0}.
\end{equation}
This expression is also applicable for the plane-wave and zero-momentum phases.
By substituting Eq.~\eqref{eq:1stST-energy} of the ground-state energy for the stripe phase and Eq.~\eqref{eq:PW-energy} for the plane-wave and zero-momentum phases into Eq.~\eqref{eq:ns_SOC}, we obtain the analytic fraction of the superfluid component in the respective regimes as
\begin{eqnarray}
\left(\frac{n^{(x)}_{\mathrm{s}}}{\bar{n}}\right)_{\textrm{ST}} & = & 1-\frac{2E_{\mathrm{r}}}{(2E_{\mathrm{r}}+G_{1})(4E_{\mathrm{r}}+2G_{1})^{2}/\Omega^{2}-G_{1}},\label{eq:ns_ST}\\
\left(\frac{n^{(x)}_{\mathrm{s}}}{\bar{n}}\right)_{\textrm{PW}} & = & 1-\frac{E_{\mathrm{r}}}{(E_{\mathrm{r}}-G_{2})\Omega_{c2}^{2}/\Omega^{2}+G_{2}},\label{eq:ns_PW}\\
\left(\frac{n^{(x)}_{\mathrm{s}}}{\bar{n}}\right)_{\textrm{ZM}} & = & 1-\frac{4E_{\mathrm{r}}}{\Omega+4G_{2}},\label{eq:ns_ZM}
\end{eqnarray}
in the direction of SOC, i.e., $x$ axis, and
\begin{equation} \label{eq:ns_yz}
\left(\frac{n^{(\perp)}_{\mathrm{s}}}{\bar{n}}\right)_{\textrm{ST}} =\left(\frac{n^{(\perp)}_{\mathrm{s}}}{\bar{n}}\right)_{\textrm{PW}}=\left(\frac{n^{(\perp)}_{\mathrm{s}}}{\bar{n}}\right)_{\textrm{ZM}} = 1
\end{equation}
in the perpendicular plane. The expression in Eq.~\eqref{eq:ns_ST} should be understood as an approximate result for the superfluid fraction of the stripe phase along the SOC direction; it can be improved by taking high-order harmonics in the stripe ansatz. The next two expressions, Eqs.~\eqref{eq:ns_PW} and \eqref{eq:ns_ZM}, were first obtained in Ref.~\cite{zhang2016superfluid}. It is worth noting that the variational parameters $(\theta,P_{x})$ are independent of the perpendicular twist $Q_{\perp}$ in Eq.~\eqref{eq:ns_SOC}, giving rise to the unaffected superfluid fraction $n^{(\perp)}_{\mathrm{s}}/\bar{n}=1$ in the perpendicular directions, the same as in an ordinary Bose gas~\cite{zhang2016superfluid}.
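As a consistency check, the analytic fractions can be evaluated directly: Eqs.~\eqref{eq:ns_PW} and \eqref{eq:ns_ZM} both vanish at $\Omega=\Omega_{c2}$, so the longitudinal superfluid fraction is continuous (and fully suppressed) at the PW-ZM transition, while Eq.~\eqref{eq:ns_ST} tends to unity as $\Omega\to0$. A minimal sketch with the illustrative parameters $G_{1}=0.5E_{\mathrm{r}}$, $G_{2}=0.1E_{\mathrm{r}}$:

```python
Er, G1, G2 = 1.0, 0.5, 0.1      # illustrative parameters, E_r as the energy unit
Oc2 = 4 * Er - 4 * G2           # PW-ZM boundary

def ns_ST(O):   # Eq. (ns_ST): stripe phase, along the SOC (x) direction
    return 1 - 2 * Er / ((2 * Er + G1) * (4 * Er + 2 * G1) ** 2 / O**2 - G1)

def ns_PW(O):   # Eq. (ns_PW): plane-wave phase
    return 1 - Er / ((Er - G2) * Oc2**2 / O**2 + G2)

def ns_ZM(O):   # Eq. (ns_ZM): zero-momentum phase
    return 1 - 4 * Er / (O + 4 * G2)
```

Evaluating these at $\Omega=\Omega_{c2}$ confirms $n^{(x)}_{\mathrm{s}}=0$ from both sides of the PW-ZM transition, and $n^{(x)}_{\mathrm{s}}/\bar{n}\to1$ in the limits $\Omega\to0$ (ST) and $\Omega\to\infty$ (ZM).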
\section{Results and Discussion\label{sec:results}}
We are now ready to perform numerical calculations. We take the cutoffs $N_{\mathrm{L}}=N_{\mathrm{M}}\geq14$ to ensure that our results are cutoff independent (see Appendix~\ref{appendix} for the check on the cutoff dependence).
\subsection{Density profile and Bogoliubov excitation spectrum\label{sec:density}}
In this section, we study the density profile and corresponding excitation spectrum of the stripe phase. At zero temperature, we assume the typical interaction energies $G_{1}=0.5E_{\mathrm{r}}$ and $G_{2}=0.1E_{\mathrm{r}}$ with the average density $\bar{n}=1.0k_{\mathrm{r}}^{3}$, which give rise to the critical Rabi frequencies, $\Omega_{c1}=2.27E_{\mathrm{r}}$ and $\Omega_{c2}=3.60E_{\mathrm{r}}$ in Eqs.~\eqref{eq:omega1} and~\eqref{eq:omega2}, within the first-order stripe ansatz.
\begin{figure}
\begin{centering}
\includegraphics[width=0.48\textwidth]{fig1a}
\par\end{centering}
\begin{centering}
\includegraphics[width=0.48\textwidth]{fig1b}
\par\end{centering}
\centering{}\caption{(Upper panel) The high-order stripe density profile $n$ for spin-up atoms, spin-down atoms, and total atoms along the SOC direction at two Rabi frequencies $\Omega=0.1E_{\mathrm{r}}$ (a) and $\Omega=1.0E_{\mathrm{r}}$ (b). (Lower panel) The corresponding excitation spectrum $\varepsilon_{j}$ for the lowest five branches. Here, we take $G_{1}=0.5E_{\mathrm{r}}$ and $G_{2}=0.1E_{\mathrm{r}}$. $d=\pi/P_{x}$ is the spatial periodicity of stripes. The two dashed lines in (c) show the phonon dispersion of a conventional two-component Bose gas in the limit of $\Omega=0$.}
\label{fig1}
\end{figure}
In Fig.~\ref{fig1}, by setting two different Rabi frequencies $\Omega=0.1E_{\mathrm{r}}$ and $1.0E_{\mathrm{r}}$, we present the respective density distributions and their lowest excitation branches in different colors. In contrast to the plane-wave and zero-momentum phases, in the stripe regime the condensate density acquires a spatially periodic modulation governed by the SOC strength. The total density contrast of the stripe can be estimated using the first-order stripe ansatz and is given by~\cite{li2012quantum,martone2014approach}
\begin{equation}
\mathcal{C}\equiv\frac{n_{\mathrm{max}}-n_{\mathrm{min}}}{n_{\mathrm{max}}+n_{\mathrm{min}}}=\frac{\Omega}{2(2E_{\mathrm{r}}+G_{1})}.
\end{equation}
At $\Omega=0.1E_{\mathrm{r}}$ and $1.0E_{\mathrm{r}}$, the modulation amplitude $\mathcal{C}$ is about $0.02$ and $0.2$ of the total average density $\bar{n}$, respectively. These estimations agree well with the high-order density profiles illustrated in Figs.~\ref{fig1}(a) and \ref{fig1}(b).
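The contrast estimate above amounts to simple arithmetic; a minimal check (illustrative only) reproduces the quoted values:

```python
def stripe_contrast(Omega, G1, Er=1.0):
    # First-order estimate of the stripe density contrast,
    # C = Omega / (2 (2 E_r + G1)); energies in units of E_r.
    return Omega / (2.0 * (2.0 * Er + G1))

# With G1 = 0.5 E_r, as in Fig. 1:
print(round(stripe_contrast(0.1, 0.5), 3))  # -> 0.02
print(round(stripe_contrast(1.0, 0.5), 3))  # -> 0.2
```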
\begin{figure}[t]
\centering{}\includegraphics[width=0.48\textwidth]{fig2} \caption{The ground-state energy as a function of the Rabi frequency in the first-order (dashed-blue), high-order (solid-black) stripe ansatz and the plane-wave ansatz (dotted-red). The interaction parameters are the same as in Fig.~\ref{fig1}.}
\label{fig2}
\end{figure}
The previous investigations have shown that the lowest-lying excitation spectrum in the plane-wave phase exhibits an intriguing roton-maxon structure due to the degenerate double minimum in the single-particle dispersion~\cite{martone2012anisotropic,zheng2013properties,mivehvar2015enhanced,chen2017quantum}. As the Rabi frequency decreases towards the ST-PW transition, the roton structure becomes much clearer and the roton energy gap gradually approaches zero, indicating a critical ST-PW Rabi frequency~\cite{chen2017quantum}. However, in the stripe phase, the density modulation spontaneously breaks the spatial translational symmetry, giving rise to infinitely many gapped branches as a function of the quasimomentum $q_{x}\in[0,2P_{x}]$. The lowest two branches are gapless~\cite{li2013superstripes}, as indicated by two linear phonon modes, i.e., the red and blue curves in Figs.~\ref{fig1}(c) and \ref{fig1}(d). As we decrease the Rabi frequency towards the limit $\Omega\rightarrow0$, the gap between different excitation branches vanishes and one recovers the Bogoliubov excitation spectrum~\cite{abad2013study}
\begin{equation}
\omega_{\pm}({\bf k})=\sqrt{\frac{{\bf k}^{2}}{2m}\left[\frac{{\bf k}^{2}}{2m}+(g\pm g_{_{\uparrow\downarrow}})\bar{n}\right]},
\end{equation}
anticipated for a conventional two-component Bose gas {[}see the two dashed black curves in Fig.~\ref{fig1}(c){]}.
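The phonon character of the two branches can be verified numerically: at small $k$, $\omega_{\pm}(k)/k$ approaches the sound speeds $c_{\pm}=\sqrt{(g\pm g_{\uparrow\downarrow})\bar{n}/(2m)}$. The sketch below (units with $m=1$; coupling values hypothetical, chosen only for illustration) checks this limit.

```python
import math

def omega_branch(k, g_eff, nbar, m=1.0):
    """Bogoliubov branch with effective coupling g_eff = g +/- g_ud."""
    ek = k * k / (2.0 * m)
    return math.sqrt(ek * (ek + g_eff * nbar))

g, g_ud, nbar = 1.0, 0.4, 1.0   # hypothetical couplings and density
for g_eff in (g + g_ud, g - g_ud):
    c = math.sqrt(g_eff * nbar / 2.0)         # expected sound speed
    k = 1e-6
    # omega/k and c nearly coincide in the phonon (small-k) regime:
    print(omega_branch(k, g_eff, nbar) / k, c)
```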
\subsection{Critical Rabi frequency for the ST-PW phase transition}
\begin{figure}[t]
\centering{}\includegraphics[width=0.48\textwidth]{fig3}\caption{(a) Contour plot of the shift $\delta\Omega_{1}=\Omega_{c1}^{\mathrm{(new)}}-\Omega_{c1}$ for the ST-PW transition as functions of the interaction energy strengths $G_{1}$ and $G_{2}$. (b) The dependence of $\delta\Omega_{1}$ on $G_{2}$ at $G_{1}=0.5E_{\mathrm{r}}$.}
\label{fig3}
\end{figure}
The two critical Rabi frequencies, $\Omega_{c1}$ and $\Omega_{c2}$, are determined by comparing the total energy $E^{(\mathrm{1st})}$ of the first-order stripe and $E^{(\mathrm{PW})}$ of the plane-wave ansatz [see, e.g., Eqs.~\eqref{eq:1stST-energy} and~\eqref{eq:PW-energy}]. By taking into account the high-order harmonics in Eq.~\eqref{eq:high-order_stripe}, the stripe phase may become energetically more favorable, and as a consequence, the first-order ST-PW transition point may shift to a relatively larger Rabi frequency. This is indeed confirmed in Fig.~\ref{fig2}. At the same interaction parameters, the ground-state energy of the stripe phase becomes lower with the inclusion of high-order harmonics, i.e., $E^{(N_{\mathrm{L}})}\leq E^{(\mathrm{1st})}$. The window for the stripe phase is therefore enlarged, with a larger critical Rabi frequency $\Omega_{c1}^{\mathrm{(new)}}=2.47E_{\mathrm{r}}>\Omega_{c1}=2.27E_{\textrm{r}}$ for the first-order ST-PW transition.
In Fig.~\ref{fig3}(a), we show the dependence of the relative shift of the ST-PW transition, $\delta\Omega_{1}\equiv\Omega_{c1}^{\mathrm{(new)}}-\Omega_{c1}$, on the interaction parameters $G_{1}$ (the vertical axis) and $G_{2}$ (the horizontal axis). Figure~\ref{fig3}(b) reports the shift as a function of $G_{2}$ at $G_{1}=0.5E_{\mathrm{r}}$.
\begin{figure*}
\centering{}\includegraphics[width=0.96\textwidth]{fig4}\caption{Quantum depletion $n_{\mathrm{qd}}/\bar{n}$ as a function of the Rabi frequency $\Omega$ in the ST phase (red dotted line) and in the PW and ZM phases (blue dashed line). The blue diamonds show the depletion of a uniform single-component weakly interacting Bose gas with the same interaction parameters, while the red circles give the depletion of a two-component Bose gas. The vertical dashed and dotted curves indicate the critical $\Omega_{c1}^{(\mathrm{new})}$ and $\Omega_{c2}$, respectively. Here, we take the interaction energies $G_{1}=0.5E_{\mathrm{r}}$ and $G_{2}=0.01E_{\textrm{r}}$ (a), $0.04E_{\mathrm{r}}$ (b), and $0.07E_{\mathrm{r}}$ (c).}
\label{fig4}
\end{figure*}
In general, at sufficiently small $G_{2}$ (i.e., $\ll0.01E_{\mathrm{r}}$), the stripe phase occupies only a very narrow range of Rabi frequencies, where the density is slightly modulated with negligible contrast. We find that the difference $\delta\Omega_{1}$, as shown in Fig.~\ref{fig3}(b), is close to zero. This implies a negligible contribution of the high-order harmonics, and the state of the system can be well described by the dominant first-order stripe trial wave function. When the interaction energy $G_{2}$ becomes relatively large, the density modulation becomes significant and the difference $\delta\Omega_{1}$ is sizable, as shown by the yellow and red regions in Fig.~\ref{fig3}(a). Meanwhile, as $G_{1}$ increases, $\delta\Omega_{1}$ becomes much more pronounced. The significant shift of the ST-PW phase-transition position indicates the crucial role played by the high-order harmonics in Eq.~\eqref{eq:high-order_stripe}. Our results suggest that they have to be accounted for in future theoretical investigations, particularly at relatively large values of the interaction parameters.
\subsection{Quantum depletion at zero temperature}
Using the mean-field Bogoliubov theory, it is straightforward to obtain the quantum depletion using Eq.~\eqref{eq:quantum_depletion}. For a single-component Bose gas, the quantum depletion was recently measured~\cite{lopes2017quantum}. In Fig.~\ref{fig4}, we present the $\Omega$ dependence of the quantum depletion at zero temperature across all three phases, at $G_{1}=0.5E_{\mathrm{r}}$ and at three different values of $G_{2}$ (i.e., $0.01E_{\mathrm{r}}$, $0.04E_{\mathrm{r}}$, and $0.07E_{\mathrm{r}}$).
The quantum depletion in the plane-wave and zero-momentum phases has been studied in Ref.~\cite{zhang2016superfluid}. We show its behavior in detail in Figs.~\ref{fig4}(a)--\ref{fig4}(c) (i.e., the dashed-blue curves). The depletion is a nonmonotonic function of the Rabi frequency. In the plane-wave phase, the contribution to quantum depletion comes from both phonons and rotons in the lowest-lying excitation spectrum~\cite{ji2015softening}. As one increases $\Omega$, the roton energy gap becomes larger and the phonon mode dominates the contribution. This leads to a maximum in the depletion at the PW-ZM transition $\Omega_{c2}$ (i.e., the vertical dotted curve)~\cite{zheng2013properties}. As $\Omega$ continues to increase in the zero-momentum phase, the depletion decreases, since the roton contribution to the depletion disappears and the only contribution is from the phonon mode.
The behavior of depletion in the stripe regime (i.e., the dotted-red curves) has not been studied before. It increases slowly and monotonically, as $\Omega$ increases up to the ST-PW transition $\Omega_{c1}^{(\mathrm{new})}$. This can be understood from the smooth softening of the two phonon modes, as illustrated in Figs.~\ref{fig1}(c) and \ref{fig1}(d). It is worth noting that the depletion at $\Omega_{c1}^{(\mathrm{new})}$ experiences a jump due to the first-order character of the transition~\cite{li2012quantum,martone2012anisotropic,chen2017quantum}. The size of the discontinuity becomes significant as we increase $G_{2}$.
\begin{figure*}
\centering{}\includegraphics[width=0.96\textwidth]{fig5}\caption{Superfluid fraction $n^{(x)}_{\mathrm{s}}/\bar{n}$ as a function of the Rabi frequency $\Omega$ in the ST phase (red dashed line -- first-order ansatz; green dotted line -- high-order ansatz), the PW and ZM phases (blue dashed line). The solid-black lines are the component $n^{(\perp)}_{\mathrm{s}}/\bar{n}$ in the perpendicular plane. The two vertical lines indicate the critical Rabi frequency of the phase transition. The interaction parameters are the same as in Fig.~\ref{fig4}.}
\label{fig5}
\end{figure*}
The behavior of the quantum depletion can be further understood with two analytic results. In the absence of SOC, the quantum depletion of a conventional homogeneous single-component Bose gas is of the order of $\sqrt{\bar{n}a^{3}}$ and can be analytically written as~\cite{pethick2002bose,Pitaevskii2003Book}
\begin{equation}
\frac{n_{\mathrm{qd}}^{(\mathrm{1c})}}{\bar{n}}=\frac{8}{3\sqrt{\pi}}(\bar{n}a^{3})^{1/2}.
\end{equation}
This result can be extended to the two-component case with equal spin
density as
\begin{eqnarray}
\frac{n_{\mathrm{qd}}^{(\mathrm{2c})}}{\bar{n}} & = & \frac{8}{3\sqrt{\pi}}\left[(\bar{n}a_{+}^{3})^{1/2}+(\bar{n}a_{-}^{3})^{1/2}\right],
\end{eqnarray}
where the scattering length $a_{\pm}=(a\pm a_{\uparrow\downarrow})/2$ and $a\equiv a_{\uparrow\uparrow}=a_{\downarrow\downarrow}$. In Fig.~\ref{fig4}, we have checked that by starting with the plane-wave ansatz in Eq.~\eqref{eq:plane-wave}, towards the limit $\Omega\rightarrow0$, the depletion coincides with the single-component $n_{\mathrm{qd}}^{(\mathrm{1c})}$ (see blue diamonds), as the plane-wave phase tends to be fully spin polarized. Similarly, the depletion predicted using the stripe ansatz in Eq.~\eqref{eq:high-order_stripe} (i.e., equal combination of two spins) recovers the two-component $n_{\mathrm{qd}}^{(\mathrm{2c})}$ (see red circles) in the limit $\Omega\rightarrow0$.
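These two limits can be sketched numerically. In the snippet below, the density and scattering lengths are hypothetical values chosen only to illustrate the consistency check; note the purely algebraic fact that for $a_{\uparrow\downarrow}=a$ one has $a_{-}=0$, so the two-component formula collapses to the single-component one.

```python
import math

def depletion_1c(nbar, a):
    """Single-component depletion, n_qd/nbar = (8 / 3 sqrt(pi)) (nbar a^3)^(1/2)."""
    return 8.0 / (3.0 * math.sqrt(math.pi)) * math.sqrt(nbar * a ** 3)

def depletion_2c(nbar, a, a_ud):
    """Two-component depletion with a_pm = (a +/- a_ud) / 2."""
    ap, am = (a + a_ud) / 2.0, (a - a_ud) / 2.0
    return 8.0 / (3.0 * math.sqrt(math.pi)) * (
        math.sqrt(nbar * ap ** 3) + math.sqrt(nbar * am ** 3))

nbar, a = 1.0, 0.01   # hypothetical values
# For a_ud = a the two results coincide (a_- vanishes):
print(depletion_1c(nbar, a), depletion_2c(nbar, a, a))
```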
\subsection{Superfluid fraction at zero temperature}
We now turn to discuss the superfluidity of the system in the presence of SOC, which can be characterized by the superfluid density. In Fig.~\ref{fig5}, we report the behavior of the superfluid density as a function of Rabi frequency in three phases at the same interaction energy strengths as in Fig.~\ref{fig4}.
It is apparent from the figure that the superfluid density $n^{(x)}_{\mathrm{s}}$ along the SOC direction in the plane-wave and zero-momentum regimes (i.e., the dashed-blue curves), calculated by using Eqs.~\eqref{eq:ns_PW} and~\eqref{eq:ns_ZM}, exhibits an intriguing behavior. It decreases monotonically in the plane-wave phase, touches zero at the PW-ZM transition $\Omega_{c2}$, and then rises again in the zero-momentum phase. This behavior exactly recovers the result in Ref.~\cite{zhang2016superfluid}, where the normal density of the system was calculated using the transverse current response function at zero temperature and the nonzero normal density was explained using a sum rule together with a gapped branch in the elementary excitation spectrum.
By contrast, in the stripe phase the superfluid density can be evaluated using the first-order stripe ansatz. The resulting analytic prediction [see Eq.~\eqref{eq:ns_ST}] is shown by the red dashed curves in Fig.~\ref{fig5}. Using the same phase-twist method but with the high-order stripe ansatz, we obtain the green dotted curves in the figure. The difference between the two results (i.e., first-order vs high-order) increases with increasing $\Omega$. The difference also becomes significant if we use a large interaction parameter $G_{2}$. The suppression of the superfluid density at nonzero $\Omega$ may be understood from the softening of the lowest two phonon modes in the low-energy excitation spectrum [see Figs.~\ref{fig1}(c) and \ref{fig1}(d)]. The critical velocity decreases as the phonon modes soften, which means that it becomes energetically easier to create excitations that destroy the superfluidity of the system~\cite{Pitaevskii2003Book}. As anticipated, the superfluid density also exhibits a discontinuity at $\Omega_{c1}^{(\mathrm{new})}$, because of the first-order ST-PW transition.
It is worth noting that in the perpendicular plane, there is no density modulation due to the absence of the spin-orbit coupling. As a result, the superfluid fraction $n^{(\perp)}_{\mathrm{s}}/\bar{n}$ is 100\% (see the solid-black curves), the same as in a conventional spinless Bose gas.
\section{Conclusions and Outlook\label{sec:summary}}
In conclusion, we have applied the mean-field Gross-Pitaevskii equation and Bogoliubov theory to characterize the stripe phase of a Raman-type spin-orbit-coupled Bose gas at zero temperature. The stripe density of the condensate, which is significantly modulated by spin-orbit coupling, has been calculated, and the corresponding low-energy excitation spectrum has been explored over a large range of the Rabi frequency. By using a high-order stripe ansatz, we have determined more accurately the critical Rabi frequency for the transition between the stripe and plane-wave phases. We have calculated the quantum depletion in all three phases of the system. We have also derived an explicit but approximate expression for the superfluid density within a phase-twist approach. This first-order analytic prediction has been compared with the more accurate numerical result obtained with a stripe ansatz that involves high-order harmonics. Open questions, such as finite-temperature effects and the behavior of other physical observables such as the moment of inertia~\cite{stringari2017diffused}, remain to be investigated in order to better understand the exotic stripe phase.
\begin{acknowledgments}
Our research was supported through the Australian Research Council's (ARC) Discovery Projects No. FT140100003 and No. DP180102018 (X.J.L.), and No. FT130100815 and No. DP170104008 (H.H.).
\end{acknowledgments}
\section{Introduction}
The 1970s saw a major advance in the combinatorial approach to enumerative geometry when M.-P.~Sch\"{u}tzenberger proved the Littlewood-Richardson rule for describing the cohomology rings of Grassmannians. Since then, the modern Schubert calculus has turned to extending this understanding in two different directions: on the one hand to replace the Grassmannian with a more complicated homogeneous space, and on the other hand to replace ordinary cohomology with a richer generalized cohomology theory. Along these lines, the goal of this paper is to begin unraveling the $K$-theoretic Schubert calculus of Kac-Moody homogeneous spaces. Our results are purely combinatorial in nature, but allow us to conjecture explicit Littlewood-Richardson-style rules in this geometric context.
Let $G$ be a complex Kac-Moody group with Borel and opposite Borel subgroups $B_+$ and $B_-$, respectively. Let $B_+ \subseteq P \subset G$ be a parabolic subgroup. The homogeneous space $X = G/P$ is a {\bf Kac-Moody flag variety}. The \mbox{$B_-$-orbits} give a cell decomposition of $X$, and their Zariski closures are the {\bf Schubert varieties} $\{ X_w \}_{w \in W^P}$; here, $W^P$ denotes the set of minimal-length representatives of the quotient $W/W_P$, where $W$ is the Weyl group of $G$ and $W_P$ is the parabolic Weyl group for $P$. The cohomology ring $H^\star(G/P)$ thereby has a distinguished Schubert basis $\{ \sigma_w \}_{w \in W^P}$, where $\sigma_w$ is Poincar\'e dual to $X_w$. Thus, to determine multiplication in $H^\star(X)$, it suffices to determine the {\bf Schubert structure constants} $c_{u,v}^w$ defined by
\begin{equation}\label{eq:LR}
\sigma_u \cdot \sigma_v = \sum_{w \in W^P} c_{u,v}^w \sigma_w.
\end{equation}
In the case that $X = {\rm Gr}_k(\mathbb{C}^n)$ is a Grassmannian, the parameter space of $k$-dimensional linear subspaces of $\mathbb{C}^n$, this problem is solved in a positive combinatorial manner by any of the various Littlewood-Richardson rules (e.g., \cite{Littlewood.Richardson,Sch77,Vakil}). For a general Kac-Moody flag variety $X$, these $c_{u,v}^w$ are also non-negative integers, but it is generally a major open problem to give an analogous Littlewood-Richardson-style rule to determine them.
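For concreteness, the smallest nontrivial instance in $H^\star({\rm Gr}_2(\mathbb{C}^4))$, written in the usual partition indexing of $W^P$, is
\[
\sigma_{(1)} \cdot \sigma_{(1)} = \sigma_{(2)} + \sigma_{(1,1)},
\]
so that $c_{(1),(1)}^{(2)} = c_{(1),(1)}^{(1,1)} = 1$, each constant counted by a single Littlewood-Richardson tableau.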
For $X = {\rm Gr}_k(\mathbb{C}^n)$, M.-P.~Sch\"{u}tzenberger's Littlewood-Richardson rule is stated in terms of the \emph{jeu de taquin} for standard Young tableaux \cite{Sch77} fitting inside a $k \times (n-k)$ rectangle. One may realize this rectangle as a subposet of positive roots for ${\rm GL}_n(\mathbb{C})$ in such a way that the inversion set of $w \in W^P$ is an order ideal in this subposet. One may further realize standard Young tableaux as linear extensions of intervals in this poset. Using this perspective, H.~Thomas and A.~Yong \cite{Thomas.Yong:minuscule} gave a uniform extension of Sch\"{u}tzenberger's rule to compute all cohomological Schubert structure constants for the larger family of \emph{minuscule varieties}. This was further extended by P.-E.~Chaput and N.~Perrin \cite{CP12} to a positive combinatorial formula for computing certain \emph{$\Lambda$-minuscule} Schubert structure constants for general Kac-Moody $X$. In the Chaput-Perrin rule, the role of the $k \times (n-k)$ rectangle is played by the \emph{$d$-complete posets} introduced by R.~Proctor \cite{Proctor:JACO,Proctor:algebra}; $d$-complete posets are exactly those posets encoding the containment relations among $\Lambda$-minuscule Schubert varieties.
Much work in the modern Schubert calculus has been devoted to studying homogeneous spaces through richer cohomology theories. In these theories, there are Schubert bases analogous to the cohomological $\sigma_w$ and the structure constants defined analogously to Equation~(\ref{eq:LR}) enjoy various positivity properties. Hence, it makes sense to attempt to develop positive combinatorial formulas for these structure constants in the style of the classical Littlewood-Richardson rules. In the Grassmannian case, one has, for example: the equivariant cohomology rule of A.~Knutson and T.~Tao \cite{Knutson.Tao}; the $K$-theory rule of A.~Buch \cite{Buch:K}; the equivariant $K$-theory rule of O.~Pechenik and A.~Yong \cite{Pechenik.Yong:KT}; the quantum cohomology rule of A.~Buch, A.~Kresch, K.~Purbhoo, and H.~Tamvakis \cite{Buch.Kresch.Purbhoo.Tamvakis}; and the equivariant quantum cohomology rule of A.~Buch \cite{Buch:quantum}. Our interest is in the ordinary $K$-theory ring $K(X)$ of the Kac-Moody flag variety $X$, where the $K$-theoretic Schubert classes $\{[\mathcal{O}_{X_w}]\}_{w \in W^P}$ are represented by the structure sheaves of the Schubert varieties. Specifically, we are interested in the structure constants $K_{u,v}^w$ of $K(X)$ defined by
\begin{equation}\label{eq:KLR}
[\mathcal{O}_{X_u}] \cdot [\mathcal{O}_{X_v}] = \sum_{w \in W^P} K_{u,v}^w [\mathcal{O}_{X_w}].
\end{equation}
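A standard small example shows how Equation~(\ref{eq:KLR}) differs from its cohomological counterpart: in $K({\rm Gr}_2(\mathbb{C}^4))$, in partition indexing, one has
\[
[\mathcal{O}_{X_{(1)}}] \cdot [\mathcal{O}_{X_{(1)}}] = [\mathcal{O}_{X_{(2)}}] + [\mathcal{O}_{X_{(1,1)}}] - [\mathcal{O}_{X_{(2,1)}}].
\]
The lowest-degree terms reproduce the cohomological product, while the correction term $K_{(1),(1)}^{(2,1)} = -1$ illustrates the alternating-sign positivity $(-1)^{\ell(w)-\ell(u)-\ell(v)} K_{u,v}^w \geq 0$ characteristic of $K$-theoretic structure constants.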
For Grassmannians, various alternatives to Buch's original rule \cite{Buch:K} for $K_{u,v}^w$ are now known \cite{Vakil,Thomas.Yong:K,Pechenik.Yong:genomic}. However, only the rule of H.~Thomas and A.~Yong \cite{Thomas.Yong:K} is currently known to extend to all of the minuscule varieties \cite{Buch.Ravikumar,Clifford.Thomas.Yong,BS16}. This Thomas-Yong rule is based on a jeu de taquin theory for \emph{increasing tableaux}. This combinatorial theory displays a number of additional subtleties when compared to Sch\"{u}tzenberger's jeu de taquin for standard tableaux. In particular, a key ingredient is the need to identify increasing tableaux with the \emph{unique rectification target} property. (These combinatorial notions are reviewed in Section~\ref{Section posets, skew shapes, and rectifications}.)
In \cite[Problem 9.1]{Thomas.Yong:K} and \cite[Remark 3.24]{BS16}, the authors ask to what extent their combinatorial theory extends to the case of $d$-complete posets. The missing ingredient is that it is not currently known whether general $d$-complete posets have ``enough'' unique rectification targets. We conjecture, however, that they do.
\begin{conjecture}\label{conj:URTs}
Let $\mathcal{P}$ be a $d$-complete poset and let $\lambda \subseteq \mathcal{P}$ be an order ideal. Then there is an (explicitly-defined) unique rectification target supported on $\lambda$.
\end{conjecture}
We initiate a study of the existence and structure of unique rectification targets in the $d$-complete posets. As shown by R.~Proctor \cite{Proctor:JACO}, every $d$-complete poset can be constructed by gluing together (in prescribed ways) certain irreducible $d$-complete posets. These irreducible pieces are classified in \cite{Proctor:JACO} and include all of the minuscule posets (i.e., the posets describing the Schubert stratification of minuscule varieties).
The informal version of our main result is the following special case of Conjecture~\ref{conj:URTs}.
\begin{theorem}\label{thm:main}
Conjecture~\ref{conj:URTs} holds in the case that $\mathcal{P}$ is built from minuscule posets.
\end{theorem}
\begin{figure}[ht]
\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=10pt}, every draw/.append style={black, thick},anchor=base,baseline,node distance=.8cm]
\node (Shape1a1) at (0,0) {};
\node [above left of=Shape1a1] (Shape1a2) {};
\node [above left of=Shape1a2] (Shape1a3) {};
\node [above left of=Shape1a3] (Shape1a4) {};
\node [above right of=Shape1a1](Shape1d1) {};
\node [above left of=Shape1d1] (Shape1d2) {};
\node [above left of=Shape1d2] (Shape1d3) {};
\node [above left of=Shape1d3] (Shape1d4) {};
\draw (Shape1a1)--(Shape1a2);
\draw (Shape1a2)--(Shape1a3);
\draw (Shape1a3)--(Shape1a4);
\draw (Shape1d1)--(Shape1d2);
\draw (Shape1d2)--(Shape1d3);
\draw (Shape1d3)--(Shape1d4);
\draw (Shape1a1)--(Shape1d1);
\draw (Shape1a2)--(Shape1d2);
\draw (Shape1a3)--(Shape1d3);
\draw (Shape1a4)--(Shape1d4);
\node [above left = 1cm and 2.8cm of Shape1a4](Bat1a1) {};
\node [above left of=Bat1a1](Bat1a2) {};
\node [above left of=Bat1a2](Bat1a3) {};
\node [above left of=Bat1a3](Bat1a4) {};
\node [above left of=Bat1a4](Bat1a5) {};
\node [above left of=Bat1a5](Bat1a6) {};
\node [above right of=Bat1a4](Bat1b1) {};
\node [above left of=Bat1b1](Bat1b2) {};
\node [above left of=Bat1b2](Bat1b3) {};
\node [above right of=Bat1b2](Bat1c1) {};
\node [above left of=Bat1c1](Bat1c2) {};
\node [above left of=Bat1c2](Bat1c3) {};
\node [above right of=Bat1c1](Bat1d1) {};
\node [above left of=Bat1d1](Bat1d2) {};
\node [above left of=Bat1d2](Bat1d3) {};
\node [above left of=Bat1d3](Bat1d4) {};
\node [above left of=Bat1d4](Bat1d5) {};
\node [above right of=Bat1d1](Bat1e1) {};
\node [above left of=Bat1e1](Bat1e2) {};
\node [above left of=Bat1e2](Bat1e3) {};
\node [above left of=Bat1e3](Bat1e4) {};
\node [above left of=Bat1e4](Bat1e5) {};
\node [above right of=Bat1e4](Bat1f1) {};
\node [above left of=Bat1f1](Bat1f2) {};
\node [above of=Bat1f2](Bat1f3) {};
\node [above of=Bat1f3](Bat1f4) {};
\node [above of=Bat1f4](Bat1f5) {};
\draw (Shape1a4)--(Bat1a1);
\draw (Bat1a1)--(Bat1a2);
\draw (Bat1a2)--(Bat1a3);
\draw (Bat1a3)--(Bat1a4);
\draw (Bat1a4)--(Bat1a5);
\draw (Bat1a5)--(Bat1a6);
\draw (Bat1b1)--(Bat1b2);
\draw (Bat1b2)--(Bat1b3);
\draw (Bat1c1)--(Bat1c2);
\draw (Bat1c2)--(Bat1c3);
\draw (Bat1d1)--(Bat1d2);
\draw (Bat1d2)--(Bat1d3);
\draw (Bat1d3)--(Bat1d4);
\draw (Bat1d4)--(Bat1d5);
\draw (Bat1e1)--(Bat1e2);
\draw (Bat1e2)--(Bat1e3);
\draw (Bat1e3)--(Bat1e4);
\draw (Bat1e4)--(Bat1e5);
\draw (Bat1f1)--(Bat1f2);
\draw (Bat1a4)--(Bat1b1);
\draw (Bat1a5)--(Bat1b2);
\draw (Bat1a6)--(Bat1b3);
\draw (Bat1c1)--(Bat1b2);
\draw (Bat1c2)--(Bat1b3);
\draw (Bat1c1)--(Bat1d1);
\draw (Bat1c2)--(Bat1d2);
\draw (Bat1c3)--(Bat1d3);
\draw (Bat1e1)--(Bat1d1);
\draw (Bat1e2)--(Bat1d2);
\draw (Bat1e3)--(Bat1d3);
\draw (Bat1e4)--(Bat1d4);
\draw (Bat1e5)--(Bat1d5);
\draw (Bat1e4)--(Bat1f1);
\draw (Bat1e5)--(Bat1f2);
\draw (Bat1f3)--(Bat1f2);
\draw (Bat1f3)--(Bat1f4);
\draw (Bat1f5)--(Bat1f4);
\node [above = 1cm of Shape1a4] (Diamond1b2) {};
\node [circle,above of=Diamond1b2] (Diamond1b0) {};
\node [circle,above of=Diamond1b0] (Diamond1bottom) {};
\node [circle,above left of=Diamond1bottom] (Diamond1left) {};
\node [circle,above right of=Diamond1bottom] (Diamond1right) {};
\node [circle, above right of=Diamond1left] (Diamond1top) {};
\node [circle,above of =Diamond1top] (Diamond1t0) {};
\node [circle, above of =Diamond1t0] (Diamond1t2) {};
\draw (Diamond1t2) -- (Diamond1t0);
\draw (Diamond1top) -- (Diamond1t0);
\draw (Diamond1top) -- (Diamond1left);
\draw (Diamond1top) -- (Diamond1right);
\draw (Diamond1right) -- (Diamond1bottom);
\draw (Diamond1left) -- (Diamond1bottom);
\draw (Diamond1b0) -- (Diamond1bottom);
\draw (Diamond1b2) -- (Diamond1b0);
\draw (Diamond1b2)--(Shape1a4);
\node [above left= 1cm of Diamond1left] (ShiftedShape1a1) {};
\node [above left of=ShiftedShape1a1] (ShiftedShape1b1) {};
\node [above left of=ShiftedShape1b1] (ShiftedShape1c1) {};
\node [above left of=ShiftedShape1c1] (ShiftedShape1d1) {};
\node [above left of=ShiftedShape1d1] (ShiftedShape1e1) {};
\node [above left of=ShiftedShape1e1] (ShiftedShape1f1) {};
\node [above right of=ShiftedShape1b1] (ShiftedShape1b2) {};
\node [above right of=ShiftedShape1c1] (ShiftedShape1c2) {};
\node [above right of=ShiftedShape1c2] (ShiftedShape1c3) {};
\node [above right of=ShiftedShape1d1] (ShiftedShape1d2) {};
\node [above right of=ShiftedShape1d2] (ShiftedShape1d3) {};
\node [above right of=ShiftedShape1d3] (ShiftedShape1d4) {};
\node [above right of=ShiftedShape1e1] (ShiftedShape1e2) {};
\node [above right of=ShiftedShape1e2] (ShiftedShape1e3) {};
\node [above right of=ShiftedShape1e3] (ShiftedShape1e4) {};
\node [above right of=ShiftedShape1e4] (ShiftedShape1e5) {};
\node [above right of=ShiftedShape1f1] (ShiftedShape1f2) {};
\node [above right of=ShiftedShape1f2] (ShiftedShape1f3) {};
\node [above right of=ShiftedShape1f3] (ShiftedShape1f4) {};
\node [above right of=ShiftedShape1f4] (ShiftedShape1f5) {};
\draw (ShiftedShape1a1) -- (ShiftedShape1b1);
\draw (ShiftedShape1b1) -- (ShiftedShape1c1);
\draw (ShiftedShape1c1) -- (ShiftedShape1d1);
\draw (ShiftedShape1d1) -- (ShiftedShape1e1);
\draw (ShiftedShape1e1) -- (ShiftedShape1f1);
\draw (ShiftedShape1b2) -- (ShiftedShape1c2);
\draw (ShiftedShape1c2) -- (ShiftedShape1d2);
\draw (ShiftedShape1d2) -- (ShiftedShape1e2);
\draw (ShiftedShape1e2) -- (ShiftedShape1f2);
\draw (ShiftedShape1c3) -- (ShiftedShape1d3);
\draw (ShiftedShape1d3) -- (ShiftedShape1e3);
\draw (ShiftedShape1d4) -- (ShiftedShape1e4);
\draw (ShiftedShape1e3) -- (ShiftedShape1f3);
\draw (ShiftedShape1e4) -- (ShiftedShape1f4);
\draw (ShiftedShape1e5) -- (ShiftedShape1f5);
\draw (ShiftedShape1b1) -- (ShiftedShape1b2);
\draw (ShiftedShape1c1) -- (ShiftedShape1c2);
\draw (ShiftedShape1c3) -- (ShiftedShape1c2);
\draw (ShiftedShape1d1) -- (ShiftedShape1d2);
\draw (ShiftedShape1d3) -- (ShiftedShape1d2);
\draw (ShiftedShape1d3) -- (ShiftedShape1d4);
\draw (ShiftedShape1e1) -- (ShiftedShape1e2);
\draw (ShiftedShape1e3) -- (ShiftedShape1e2);
\draw (ShiftedShape1e3) -- (ShiftedShape1e4);
\draw (ShiftedShape1e5) -- (ShiftedShape1e4);
\draw (ShiftedShape1f1) -- (ShiftedShape1f2);
\draw (ShiftedShape1f3) -- (ShiftedShape1f2);
\draw (ShiftedShape1f3) -- (ShiftedShape1f4);
\draw (ShiftedShape1f5) -- (ShiftedShape1f4);
\draw (Diamond1left) -- (ShiftedShape1a1);
\node [above right= 1cm of Diamond1right](Cayley1a1){};
\node [above right of=Cayley1a1](Cayley1a2) {};
\node [above right of=Cayley1a2](Cayley1a3) {};
\node [above right of=Cayley1a3](Cayley1a4) {};
\node [above right of=Cayley1a4](Cayley1a5) {};
\node [above left of=Cayley1a3](Cayley1b1) {};
\node [above right of=Cayley1b1](Cayley1b2) {};
\node [above right of=Cayley1b2](Cayley1b3) {};
\node [above left of=Cayley1b2](Cayley1c1) {};
\node [above right of=Cayley1c1](Cayley1c2) {};
\node [above right of=Cayley1c2](Cayley1c3) {};
\node [above left of=Cayley1c1](Cayley1d1) {};
\node [above right of=Cayley1d1](Cayley1d2) {};
\node [above right of=Cayley1d2](Cayley1d3) {};
\draw (Cayley1a1)--(Cayley1a2);
\draw (Cayley1a2)--(Cayley1a3);
\draw (Cayley1a3)--(Cayley1a4);
\draw (Cayley1a4)--(Cayley1a5);
\draw (Cayley1b1)--(Cayley1b2);
\draw (Cayley1b2)--(Cayley1b3);
\draw (Cayley1c1)--(Cayley1c2);
\draw (Cayley1c2)--(Cayley1c3);
\draw (Cayley1d1)--(Cayley1d2);
\draw (Cayley1d2)--(Cayley1d3);
\draw (Cayley1a3)--(Cayley1b1);
\draw (Cayley1a4)--(Cayley1b2);
\draw (Cayley1a5)--(Cayley1b3);
\draw (Cayley1c1)--(Cayley1b2);
\draw (Cayley1c2)--(Cayley1b3);
\draw (Cayley1c1)--(Cayley1d1);
\draw (Cayley1c2)--(Cayley1d2);
\draw (Cayley1c3)--(Cayley1d3);
\draw (Diamond1right) -- (Cayley1a1);
\node [above right =1cm and .8cm of Shape1d1](Cayley2a1) {};
\node [above of=Cayley2a1](Cayley2a2) {};
\node [above of=Cayley2a2](Cayley2a3) {};
\node [above left of=Cayley2a3](Cayley2a4) {};
\node [above left of=Cayley2a4](Cayley2a5) {};
\node [above right of=Cayley2a3](Cayley2b1) {};
\node [above left of=Cayley2b1](Cayley2b2) {};
\node [above left of=Cayley2b2](Cayley2b3) {};
\node [above right of=Cayley2b2](Cayley2c1) {};
\node [above left of=Cayley2c1](Cayley2c2) {};
\node [above left of=Cayley2c2](Cayley2c3) {};
\node [above right of=Cayley2c1](Cayley2d1) {};
\node [above left of=Cayley2d1](Cayley2d2) {};
\node [above left of=Cayley2d2](Cayley2d3) {};
\node [above right of=Cayley2d3](Cayley2d4) {};
\draw (Cayley2a1)--(Cayley2a2);
\draw (Cayley2a2)--(Cayley2a3);
\draw (Cayley2a3)--(Cayley2a4);
\draw (Cayley2a4)--(Cayley2a5);
\draw (Cayley2b1)--(Cayley2b2);
\draw (Cayley2b2)--(Cayley2b3);
\draw (Cayley2c1)--(Cayley2c2);
\draw (Cayley2c2)--(Cayley2c3);
\draw (Cayley2d1)--(Cayley2d2);
\draw (Cayley2d2)--(Cayley2d3);
\draw (Cayley2d3)--(Cayley2d4);
\draw (Cayley2a3)--(Cayley2b1);
\draw (Cayley2a4)--(Cayley2b2);
\draw (Cayley2a5)--(Cayley2b3);
\draw (Cayley2c1)--(Cayley2b2);
\draw (Cayley2c2)--(Cayley2b3);
\draw (Cayley2c1)--(Cayley2d1);
\draw (Cayley2c2)--(Cayley2d2);
\draw (Cayley2c3)--(Cayley2d3);
\draw (Shape1d1)--(Cayley2a1);
\node [above right =1cm and 2.1cm of Shape1d1](ShiftedShape2a1) {};
\node [above right of=ShiftedShape2a1] (ShiftedShape2b1) {};
\node [above right of=ShiftedShape2b1] (ShiftedShape2c1) {};
\node [above right of=ShiftedShape2c1] (ShiftedShape2d1) {};
\node [above right of=ShiftedShape2d1] (ShiftedShape2e1) {};
\node [above right of=ShiftedShape2e1] (ShiftedShape2f1) {};
\node [above left of=ShiftedShape2b1] (ShiftedShape2b2) {};
\node [above left of=ShiftedShape2c1] (ShiftedShape2c2) {};
\node [above left of=ShiftedShape2c2] (ShiftedShape2c3) {};
\node [above left of=ShiftedShape2d1] (ShiftedShape2d2) {};
\node [above left of=ShiftedShape2d2] (ShiftedShape2d3) {};
\node [above left of=ShiftedShape2d3] (ShiftedShape2d4) {};
\node [above left of=ShiftedShape2e1] (ShiftedShape2e2) {};
\node [above left of=ShiftedShape2e2] (ShiftedShape2e3) {};
\node [above left of=ShiftedShape2e3,] (ShiftedShape2e4) {};
\node [above left of=ShiftedShape2e4] (ShiftedShape2e5) {};
\node [above left of=ShiftedShape2f1] (ShiftedShape2f2) {};
\node [above left of=ShiftedShape2f2] (ShiftedShape2f3) {};
\node [above left of=ShiftedShape2f3] (ShiftedShape2f4) {};
\node [above left of=ShiftedShape2f4] (ShiftedShape2f5) {};
\node [above left of=ShiftedShape2f5] (ShiftedShape2f6) {};
\draw (ShiftedShape2a1) -- (ShiftedShape2b1);
\draw (ShiftedShape2b1) -- (ShiftedShape2c1);
\draw (ShiftedShape2c1) -- (ShiftedShape2d1);
\draw (ShiftedShape2d1) -- (ShiftedShape2e1);
\draw (ShiftedShape2e1) -- (ShiftedShape2f1);
\draw (ShiftedShape2b2) -- (ShiftedShape2c2);
\draw (ShiftedShape2c2) -- (ShiftedShape2d2);
\draw (ShiftedShape2d2) -- (ShiftedShape2e2);
\draw (ShiftedShape2e2) -- (ShiftedShape2f2);
\draw (ShiftedShape2c3) -- (ShiftedShape2d3);
\draw (ShiftedShape2d3) -- (ShiftedShape2e3);
\draw (ShiftedShape2e3) -- (ShiftedShape2f3);
\draw (ShiftedShape2e5) -- (ShiftedShape2f5);
\draw (ShiftedShape2b1) -- (ShiftedShape2b2);
\draw (ShiftedShape2c1) -- (ShiftedShape2c2);
\draw (ShiftedShape2c3) -- (ShiftedShape2c2);
\draw (ShiftedShape2d1) -- (ShiftedShape2d2);
\draw (ShiftedShape2d3) -- (ShiftedShape2d2);
\draw (ShiftedShape2d3) -- (ShiftedShape2d4);
\draw (ShiftedShape2e1) -- (ShiftedShape2e2);
\draw (ShiftedShape2e3) -- (ShiftedShape2e2);
\draw (ShiftedShape2e3) -- (ShiftedShape2e4);
\draw (ShiftedShape2e5) -- (ShiftedShape2e4);
\draw (ShiftedShape2f1) -- (ShiftedShape2f2);
\draw (ShiftedShape2f3) -- (ShiftedShape2f2);
\draw (ShiftedShape2f3) -- (ShiftedShape2f4);
\draw (ShiftedShape2f5) -- (ShiftedShape2f4);
\draw (ShiftedShape2f5) -- (ShiftedShape2f6);
\draw (ShiftedShape2d4) -- (ShiftedShape2e4);
\draw (ShiftedShape2e4) -- (ShiftedShape2f4);
\draw (Shape1d1)--(ShiftedShape2a1);
\node [above = 0.9cm of ShiftedShape2f1](Bat2a1){};
\node [above left of=Bat2a1](Bat2a2) {};
\node [above left of=Bat2a2](Bat2a3) {};
\node [above left of=Bat2a3](Bat2a4) {};
\node [above left of=Bat2a4](Bat2a5) {};
\node [above left of=Bat2a5](Bat2a6) {};
\node [above right of=Bat2a4](Bat2b1) {};
\node [above left of=Bat2b1](Bat2b2) {};
\node [above left of=Bat2b2](Bat2b3) {};
\node [above right of=Bat2b2](Bat2c1) {};
\node [above left of=Bat2c1](Bat2c2) {};
\node [above left of=Bat2c2](Bat2c3) {};
\node [above right of=Bat2c1](Bat2d1) {};
\node [above left of=Bat2d1](Bat2d2) {};
\node [above left of=Bat2d2](Bat2d3) {};
\node [above left of=Bat2d3](Bat2d4) {};
\node [above left of=Bat2d4](Bat2d5) {};
\node [above right of=Bat2d1](Bat2e1) {};
\node [above left of=Bat2e1](Bat2e2) {};
\node [above left of=Bat2e2](Bat2e3) {};
\node [above left of=Bat2e3](Bat2e4) {};
\node [above left of=Bat2e4](Bat2e5) {};
\node [above right of=Bat2e4](Bat2f1) {};
\node [above left of=Bat2f1](Bat2f2) {};
\draw (Bat2a1)--(Bat2a2);
\draw (Bat2a2)--(Bat2a3);
\draw (Bat2a3)--(Bat2a4);
\draw (Bat2a4)--(Bat2a5);
\draw (Bat2a5)--(Bat2a6);
\draw (Bat2b1)--(Bat2b2);
\draw (Bat2b2)--(Bat2b3);
\draw (Bat2c1)--(Bat2c2);
\draw (Bat2c2)--(Bat2c3);
\draw (Bat2d1)--(Bat2d2);
\draw (Bat2d2)--(Bat2d3);
\draw (Bat2d3)--(Bat2d4);
\draw (Bat2d4)--(Bat2d5);
\draw (Bat2e1)--(Bat2e2);
\draw (Bat2e2)--(Bat2e3);
\draw (Bat2e3)--(Bat2e4);
\draw (Bat2e4)--(Bat2e5);
\draw (Bat2f1)--(Bat2f2);
\draw (Bat2a4)--(Bat2b1);
\draw (Bat2a5)--(Bat2b2);
\draw (Bat2a6)--(Bat2b3);
\draw (Bat2c1)--(Bat2b2);
\draw (Bat2c2)--(Bat2b3);
\draw (Bat2c1)--(Bat2d1);
\draw (Bat2c2)--(Bat2d2);
\draw (Bat2c3)--(Bat2d3);
\draw (Bat2e1)--(Bat2d1);
\draw (Bat2e2)--(Bat2d2);
\draw (Bat2e3)--(Bat2d3);
\draw (Bat2e4)--(Bat2d4);
\draw (Bat2e5)--(Bat2d5);
\draw (Bat2e4)--(Bat2f1);
\draw (Bat2e5)--(Bat2f2);
\draw (ShiftedShape2f1)--(Bat2a1);
\node [above right= 1cm and 1cm of ShiftedShape2f1](Shape2a1) {};
\node [above left of=Shape2a1] (Shape2a2) {};
\node [above left of=Shape2a2] (Shape2a3) {};
\node [above left of=Shape2a3] (Shape2a4) {};
\node [above right of=Shape2a1](Shape2b1) {};
\node [above left of=Shape2b1] (Shape2b2) {};
\node [above left of=Shape2b2] (Shape2b3) {};
\node [above left of=Shape2b3] (Shape2b4) {};
\node [above right of=Shape2b1](Shape2c1) {};
\node [above left of=Shape2c1] (Shape2c2) {};
\node [above left of=Shape2c2] (Shape2c3) {};
\node [above left of=Shape2c3] (Shape2c4) {};
\node [above right of=Shape2c1](Shape2d1) {};
\node [above left of=Shape2d1] (Shape2d2) {};
\node [above left of=Shape2d2] (Shape2d3) {};
\node [above left of=Shape2d3] (Shape2d4) {};
\draw (Shape2a1)--(Shape2a2);
\draw (Shape2a2)--(Shape2a3);
\draw (Shape2a3)--(Shape2a4);
\draw (Shape2b1)--(Shape2b2);
\draw (Shape2b2)--(Shape2b3);
\draw (Shape2b3)--(Shape2b4);
\draw (Shape2c1)--(Shape2c2);
\draw (Shape2c2)--(Shape2c3);
\draw (Shape2c3)--(Shape2c4);
\draw (Shape2d1)--(Shape2d2);
\draw (Shape2d2)--(Shape2d3);
\draw (Shape2d3)--(Shape2d4);
\draw (Shape2a1)--(Shape2b1);
\draw (Shape2a2)--(Shape2b2);
\draw (Shape2a4)--(Shape2b4);
\draw (Shape2c1)--(Shape2b1);
\draw (Shape2c2)--(Shape2b2);
\draw (Shape2c4)--(Shape2b4);
\draw (Shape2c1)--(Shape2d1);
\draw (Shape2c2)--(Shape2d2);
\draw (Shape2c4)--(Shape2d4);
\draw (Shape2a3)--(Shape2b3);
\draw (Shape2c3)--(Shape2b3);
\draw (Shape2c3)--(Shape2d3);
\draw (ShiftedShape2f1)--(Shape2a1);
\end{tikzpicture}
\caption{The Hasse diagram of a representative $d$-complete poset $\mathcal{P}$ that is ``built from minuscule posets'' in the sense of Theorem~\ref{thm:main}. In $\mathcal{P}$, every order ideal $\lambda \subseteq \mathcal{P}$ has a unique rectification target, provided by Theorem~\ref{thm:main}.}
\label{fig:big_poset}
\end{figure}
For an example of such a poset covered by Theorem~\ref{thm:main}, see Figure~\ref{fig:big_poset}.
We also demonstrate the extent to which Conjecture~\ref{conj:URTs} is sensitive to the poset $\mathcal{P}$ being $d$-complete. We establish general results on the failure of Conjecture~\ref{conj:URTs} to extend to posets that are slight deformations of $d$-complete posets.
For any $\mathcal{P}$ satisfying Conjecture~\ref{conj:URTs}, one obtains (as in \cite[\textsection 3.5]{BS16}) a corresponding combinatorially-defined associative commutative unital algebra $K(\mathcal{P})$ with a basis $\{ \lambda \}$ indexed by order ideals of $\mathcal{P}$. The structure constants $t_{\lambda,\mu}^\nu$ of $K(\mathcal{P})$ are defined in such a way as to transparently alternate in degree. (This construction is discussed in Section~\ref{Section $d$-complete posets}.)
For $w \in W^P$ $\Lambda$-minuscule, the interval $[{\rm id}, w]$ in Bruhat order is isomorphic to the poset of order ideals of a certain $d$-complete poset $\mathcal{P}_w$ constructed from $w$.
Building on Conjecture~\ref{conj:URTs}, we propose the following.
\begin{conjecture}\label{conj:geometry}
Let $X=G/P$ be a Kac-Moody flag variety and let $m \in W^P$ be $\Lambda$-minuscule.
Then, for $u,v,w \leq m$ in Bruhat order, we have the equality of structure constants \[
K_{u,v}^w = t_{\lambda,\mu}^\nu,
\]
between the rings $K(X)$ and $K(\mathcal{P}_m)$,
where the order ideals $\lambda, \mu, \nu \subseteq \mathcal{P}_m$ correspond to the Weyl group elements $u,v,w \in W^P$, respectively.
\end{conjecture}
In Section~\ref{Section $d$-complete posets}, we give the precise versions of Conjecture~\ref{conj:URTs} and Theorem~\ref{thm:main}, as well as the details necessary for a precise understanding of Conjecture~\ref{conj:geometry}.
In light of Conjecture~\ref{conj:geometry}, Theorem~\ref{thm:main} should be understood as giving a conjectural positive combinatorial rule for certain $K$-theoretic Schubert structure constants of Kac-Moody flag varieties.
Several cases of Conjecture~\ref{conj:geometry} are known to be true or have been previously conjectured. If the flag variety $X$ is minuscule, then Conjecture~\ref{conj:geometry} reduces to the main theorem of \cite{BS16}. If, on the other hand, $X$ is general but $|\nu| - |\lambda| - |\mu| = 0$, then Conjecture~\ref{conj:geometry} reduces to \cite[Conjecture~1.1]{CP12}, many cases of which are proved in \cite[Theorem~1.3]{CP12}. Assuming one followed the general structure utilized by \cite{CP12,BS16}, the main ingredients one would need in a proof of Conjecture~\ref{conj:geometry} are
\begin{itemize}
\item[(1.)] a proof of the remaining cases of Conjecture~\ref{conj:URTs} and
\item[(2.)] {\it ad hoc} geometric verifications of Conjecture~\ref{conj:geometry} for special $u$ lying in a generating set of classes.
\end{itemize}
For a large class of such Schubert problems, Theorem~\ref{thm:main} provides the necessary first ingredient, so it only remains to establish the second in those cases.
Another potential application of Theorem~\ref{thm:main} (or more generally Conjecture~\ref{conj:URTs}) is to establishing plane partition identities. In \cite{HPPW}, the authors use the existence of unique rectification targets in minuscule posets to give bijective proofs of the equinumerosity of various classes of plane partitions, in particular resolving a 1983 question of R.~Proctor \cite{Proctor:trap}. The main technology of \cite{HPPW} applies equally to any $d$-complete poset satisfying Conjecture~\ref{conj:URTs}; hence, we expect Theorem~\ref{thm:main} to yield analogous identities. Further discussion may appear elsewhere.
This paper is organized as follows. In Section~\ref{Section posets, skew shapes, and rectifications}, we fix notation for posets and describe the Thomas-Yong theory of jeu de taquin for increasing tableaux. We then recall the definition of unique rectification targets (URTs).
Section~\ref{Section adding to minimum and maximal elements} studies the behavior of URTs when two posets are combined via Proctor's \emph{slant sum} operation. Section~\ref{Section slant sum} builds on Section~\ref{Section adding to minimum and maximal elements} by introducing the notion of a \emph{$p$-chain URT}, a stronger version of a URT that we will need later. Section~\ref{Section doubled tailed diamonds} establishes the necessary technical fact that all increasing tableaux of straight shape in a double-tailed diamond poset are $p$-chain URTs. In Section~\ref{Section $d$-complete posets}, we first recall background on $d$-complete posets. We also recall the notions needed to make Conjectures~\ref{conj:URTs} and~\ref{conj:geometry} precise. We then apply the results of Sections~\ref{Section slant sum} and~\ref{Section doubled tailed diamonds} to the study of $d$-complete posets and prove Theorem~\ref{thm:main}, our main result.
\section{Posets, skew shapes, and rectifications}
\label{Section posets, skew shapes, and rectifications}
All posets in this paper will be finite, nonempty, and connected. These assumptions are made for convenience and clarity only; many of our results do not fundamentally rely on these properties, although the statements and proofs become messier without them. Moreover, the original definition of $d$-complete posets (which we follow in this paper) requires finiteness. Although there is now a more general notion of infinite $d$-complete posets (see the ``Added Notes'' at the very end of \cite{Proctor.Scoppetta} for discussion), we will not explicitly consider such objects. In this section, $\mathcal{P}$ will denote an otherwise arbitrary poset.
We begin by fixing necessary terminology regarding posets. For $x,z \in \mathcal{P}$, we say that $z$ \textbf{covers} $x$ (written $x \lessdot z$) if $x<z$ and there does not exist a $y \in \mathcal{P}$ with $x < y < z$.
Let $x,y \in \mathcal{P}$. If $x < y$, we say that $x$ is an \textbf{ancestor} of $y$ and that $y$ is a \textbf{descendant} of $x$. If $x \lessdot y$, we say that $x$ is a \textbf{parent} of $y$ and that $y$ is a \textbf{child} of $x$.
Adding ``\textbf{weak}'' to any of these terms also allows for equality, e.g.\ $x$ is a \textbf{weak descendant} of $y$ if $x \geq y$.
We denote the minimum element of the poset $\mathcal{P}$ (if it exists) by $\hat{0}_\mathcal{P}$, and we say that $\mathcal{P}$ has a $\hat{0}_\mathcal{P}$ to mean that such a minimum element exists.
We often visualize posets using Hasse diagrams, where each element is represented by a circle, and $a \lessdot b$ if there is a line that goes up from $a$ to $b$.
\begin{example}\label{ex:Q}
Let $\mathcal{Q}$ be the poset on the elements
$$\{(1,1),(1,2),(1,3),(2,1),(2,2),(3,1)\}$$
of $\mathbb{Z}^2$ under the natural order $(a,b) \leq (c,d)$ if both $a \leq c$ and $b \leq d$.
As a Hasse diagram, we have
\[
\mathcal{Q} =
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {};
\node [above right of=21] (31) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {};
\node [above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}
\]
We will use this $\mathcal{Q}$ as a running example throughout this section.
\end{example}
A \textbf{shape} $\nu$ of $\mathcal{P}$ is any subset of $\mathcal{P}$. The shape $\nu$ has a natural poset structure given by restricting that of $\mathcal{P}$.
A shape $\nu$ of $\mathcal{P}$ is called an \textbf{order ideal} of $\mathcal{P}$ if it is closed downwards, i.e.\ if $y \in \nu$ and $x < y$ together imply $x \in \nu$. Similarly, an \textbf{order filter} of $\mathcal{P}$ is a subset that is closed upwards. For historical reasons, we will also refer to the order ideals of $\mathcal{P}$ as \textbf{straight shapes}.
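As a quick sanity check of the downward-closure condition (our own script and helper names, not part of the paper), one can enumerate the order ideals of the running-example poset $\mathcal{Q}$ by brute force:

```python
from itertools import combinations

# The running-example poset Q: six points of Z^2 under the
# componentwise order (a,b) <= (c,d) iff a <= c and b <= d.
Q = [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (3, 1)]

def leq(p, q):
    return p[0] <= q[0] and p[1] <= q[1]

def is_order_ideal(subset):
    """An order ideal is a downward-closed subset: every element of Q
    below a member of the subset must itself be a member."""
    s = set(subset)
    return all(x in s for y in s for x in Q if leq(x, y))

ideals = [s for r in range(len(Q) + 1)
          for s in combinations(Q, r) if is_order_ideal(s)]
print(len(ideals))  # 14 ideals in total, 13 of them nonempty
```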
\begin{example}\label{ex:straight}
The following are all the straight shapes of the poset $\mathcal{Q}$ from Example~\ref{ex:Q}. For greater visual context, we represent elements not in the straight shape with solid black circles.
\[
\begin{array}{lllll}
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [fill=black, above right of=11] (21) {};
\node [fill=black, above right of=21] (31) {};
\node [fill=black, above left of=11] (12) {};
\node [fill=black, above left of=12] (13) {};
\node [fill=black, above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [ above right of=11] (21) {};
\node [fill=black, above right of=21] (31) {};
\node [fill=black, above left of=11] (12) {};
\node [fill=black, above left of=12] (13) {};
\node [fill=black, above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [fill=black, above right of=11] (21) {};
\node [fill=black, above right of=21] (31) {};
\node [ above left of=11] (12) {};
\node [fill=black, above left of=12] (13) {};
\node [fill=black, above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [ above right of=11] (21) {};
\node [fill=black, above right of=21] (31) {};
\node [ above left of=11] (12) {};
\node [fill=black, above left of=12] (13) {};
\node [fill=black, above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [fill=black, above right of=11] (21) {};
\node [fill=black, above right of=21] (31) {};
\node [ above left of=11] (12) {};
\node [ above left of=12] (13) {};
\node [fill=black, above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}
\\
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [ above right of=11] (21) {};
\node [above right of=21] (31) {};
\node [fill=black, above left of=11] (12) {};
\node [fill=black, above left of=12] (13) {};
\node [fill=black, above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [ above right of=11] (21) {};
\node [fill=black, above right of=21] (31) {};
\node [ above left of=11] (12) {};
\node [above left of=12] (13) {};
\node [fill=black, above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [ above right of=11] (21) {};
\node [above right of=21] (31) {};
\node [ above left of=11] (12) {};
\node [fill=black, above left of=12] (13) {};
\node [fill=black, above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [ above right of=11] (21) {};
\node [fill=black, above right of=21] (31) {};
\node [ above left of=11] (12) {};
\node [fill=black, above left of=12] (13) {};
\node [above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [ above right of=11] (21) {};
\node [fill=black, above right of=21] (31) {};
\node [ above left of=11] (12) {};
\node [above left of=12] (13) {};
\node [above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}
\\
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [ above right of=11] (21) {};
\node [above right of=21] (31) {};
\node [ above left of=11] (12) {};
\node [fill=black, above left of=12] (13) {};
\node [above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [ above right of=11] (21) {};
\node [above right of=21] (31) {};
\node [ above left of=11] (12) {};
\node [ above left of=12] (13) {};
\node [fill=black,above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [ above right of=11] (21) {};
\node [above right of=21] (31) {};
\node [ above left of=11] (12) {};
\node [ above left of=12] (13) {};
\node [above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}&
\end{array}
\]
\end{example}
If $\lambda \subseteq \nu$ are straight shapes of $\mathcal{P}$, then the shape $\nu \setminus \lambda$ is called a \textbf{skew shape} of $\mathcal{P}$ and is denoted $\nu/\lambda$. Note that every straight shape $\lambda$ can also be realized as the skew shape $\lambda / \emptyset$.
An element $x \in \lambda$ is called an \textbf{inner corner} of the skew shape $\nu / \lambda$ if $x$ is maximal in $\lambda$. We write $\ensuremath{\mathrm{IC}}(\nu / \lambda)$ to denote the set of inner corners of $\nu / \lambda$.
Clearly, we have the following.
\begin{lemma}
\label{Corollary inner corner ancestors}
Let $\nu/\lambda$ be a skew shape of the poset $\mathcal{P}$. If $c \in \ensuremath{\mathrm{IC}}(\nu / \lambda)$ and $b < c$, then $b \notin \ensuremath{\mathrm{IC}}(\nu / \lambda)$. \qed
\end{lemma}
For a skew shape $\nu/\lambda$ of $\mathcal{P}$, a function $T: \nu/\lambda \to \mathbb{Z}_{>0}$ is called a \textbf{skew increasing $\mathcal{P}$-tableau} of shape $\nu/\lambda$ if $T$ is a strictly order preserving map, i.e.\ if $x < y$ implies $T(x) < T(y)$.
If, in addition, $T$ is a bijection onto an initial segment of $\mathbb{Z}_{>0}$, we say $T$ is a \textbf{skew standard $\mathcal{P}$-tableau}. In both cases, if $\nu/\lambda$ is a straight shape, we drop the word ``skew.''
We depict a skew increasing $\mathcal{P}$-tableau $T$ using Hasse diagrams with labels. For an element $x \in \mathcal{P}$, we put the value of $T(x)$ in the circle of the Hasse diagram corresponding to $x$. Also, to make clear what the ambient poset is, we represent skewed-out elements (the elements in $\lambda$) with unlabeled hollow circles.
\begin{example}
If $\mathcal{P}$ is the numbers $1,2,3,4$ with the usual order and $\nu/\lambda = \mathcal{P}$ and $T(x) = x+5$, then $T$ can be visualized as
\[
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (1) at (0,0) {6};
\node [ above of=1] (2) {7};
\node [ above of=2] (3) {8};
\node [ above of=3] (4) {9};
\draw (1) -- (2);
\draw (2) -- (3);
\draw (3) -- (4);
\end{tikzpicture}
\]
\end{example}
\begin{example}
The following is an example of a skew increasing $\mathcal{Q}$-tableau $T$ of shape $\nu/\lambda$, where
\[\nu =
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {};
\node [above right of=21] (31) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {};
\node [above left of=21] (22) {};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}, \quad
\lambda = \begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\end{tikzpicture}
,
\quad T = \begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {1};
\node [above right of=21] (31) {2};
\node [above left of=11] (12) {3};
\node [above left of=12] (13) {4};
\node [above left of=21] (22) {4};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}.\]
\end{example}
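Since strict inequalities compose along chains, and every chain between two elements of $\nu/\lambda$ stays inside $\nu/\lambda$ ($\lambda$ is down-closed, so intermediates avoid it; $\nu$ is down-closed, so intermediates lie in it), the increasing condition need only be checked on cover relations within the shape. A sketch of this check, in our own notation, verifying the tableau above:

```python
# Check that a (skew) tableau is increasing by testing only the cover
# relations lying inside the shape; strict inequalities along covers
# compose to give T(x) < T(y) for all comparable x < y. (Our notation.)
def is_increasing(covers, T):
    """covers: (lower, upper) cover pairs of the ambient poset;
    T: dict mapping shape elements to labels."""
    return all(T[a] < T[b] for a, b in covers if a in T and b in T)

# Cover relations of the running-example poset Q, element (a,b) named "ab".
Q_COVERS = [("11", "12"), ("11", "21"), ("12", "13"),
            ("12", "22"), ("21", "22"), ("21", "31")]
```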
For a skew shape $\nu/\lambda$ of $\mathcal{P}$, we say a function $T: \nu/\lambda \to \mathbb{Z}_{>0} \cup \{\bullet\}$ is a \textbf{skew dotted increasing $\mathcal{P}$-tableau} of shape $\nu/\lambda$ if there is a rational number $q$ such that $T$ becomes a strictly order preserving map ($\nu/\lambda \to \mathbb{Q}$) when we replace each $\bullet$ with that fixed $q$.
\begin{example}
The following is a skew dotted increasing $\mathcal{Q}$-tableau
\[
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {1};
\node [above right of=11] (21) {$\bullet$};
\node [above right of=21] (31) {2};
\node [above left of=11] (12) {$\bullet$};
\node [above left of=12] (13) {3};
\node [above left of=21] (22) {2};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}
\]
and the following is not (because one cannot replace the $\bullet$s with a single fixed $q$ and obtain a strictly order preserving map)
\[
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {1};
\node [above right of=11] (21) {$\bullet$};
\node [above right of=21] (31) {3};
\node [above left of=11] (12) {3};
\node [above left of=12] (13) {$\bullet$};
\node [above left of=21] (22) {4};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}
\]
\end{example}
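Validity of a dotted filling reduces to feasibility of strict bounds on the single value $q$: each cover inside the shape either compares two numbers, bounds $q$ on one side, or (two dots) is immediately infeasible. A sketch of this check, in our own notation:

```python
DOT = "*"

def is_dotted_increasing(covers, T):
    """Can every DOT in T be replaced by one rational q so that T becomes
    strictly order preserving? Checking covers inside the shape suffices,
    since strict inequalities compose along chains."""
    lo, hi = None, None  # accumulated open bounds: lo < q < hi
    for a, b in covers:
        if a not in T or b not in T:
            continue
        va, vb = T[a], T[b]
        if va == DOT and vb == DOT:
            return False                            # would force q < q
        if va == DOT:
            hi = vb if hi is None else min(hi, vb)  # q < vb
        elif vb == DOT:
            lo = va if lo is None else max(lo, va)  # va < q
        elif not va < vb:
            return False                            # numeric violation
    # a rational q in the open interval (lo, hi) exists iff lo < hi
    return lo is None or hi is None or lo < hi
```

Run on the two dotted fillings of the example above, the first is accepted (any $q$ with $1 < q < 2$ works) and the second rejected.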
\begin{definition}
Let $T$ be a skew increasing $\mathcal{P}$-tableau of shape $\nu/\lambda$. If $\gamma$ is a nonempty set of inner (or outer) corners of $\nu/\lambda$, then $\mathsf{AddDots}_\gamma(T)$ is the skew dotted increasing $\mathcal{P}$-tableau $S$ of shape $\nu/\lambda \cup \gamma$ defined by
\[S(x) =
\begin{cases}
T(x), & \text{ if } x \in \nu/\lambda; \\
\bullet, & \text{ if } x \in \gamma.
\end{cases}
\]
\end{definition}
\begin{example}
For
\[T =
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [fill=SkyBlue,above right of=11] (21) {};
\node [above right of=21] (31) {3};
\node [fill=SkyBlue,above left of=11] (12) {};
\node [above left of=12] (13) {1};
\node [above left of=21] (22) {2};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}
\]
let $\gamma$ be the set of blue shaded inner corners. Then, we have
\pushQED{\qed}
\[ \mathsf{AddDots}_\gamma(T) =
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {$\bullet$};
\node [above right of=21] (31) {3};
\node [above left of=11] (12) {$\bullet$};
\node [above left of=12] (13) {1};
\node [above left of=21] (22) {2};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture} \qedhere \popQED \]
\let\qed\relax
\end{example}
\begin{definition}
Let $T$ be a skew dotted increasing $\mathcal{P}$-tableau. For $n \in \mathbb{Z}_{>0}$, $\mathsf{Swap}_{\bullet,n}(T)$ is the skew dotted increasing $\mathcal{P}$-tableau $S$ defined by
\[S(x) =
\begin{cases}
n, & \text{ if } T(x) = \bullet \text{ and } T(y) = n \text{ for some $y \gtrdot x$}; \\
\bullet, & \text{ if } T(x) = n \text{ and } T(y) = \bullet \text{ for some $y \lessdot x$}; \\
T(x), & \text{ otherwise.}
\end{cases}
\]
\end{definition}
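The case analysis in this definition is mechanical; the following Python sketch (our own helper names, with cover relations listed explicitly) reproduces the worked examples below: a dot takes the label $n$ from an element covering it, and a label $n$ with a dotted element below it becomes a dot.

```python
# Sketch (our notation, not the paper's) of Swap_{dot,n} on a shape
# given by explicit cover relations.
DOT = "*"

def swap(covers, T, n):
    """covers: (lower, upper) cover pairs of the ambient poset;
    T: dict mapping shape elements to labels or DOT."""
    above = {x: [] for x in T}  # elements of the shape covering x
    below = {x: [] for x in T}  # elements of the shape covered by x
    for a, b in covers:
        if a in T and b in T:
            above[a].append(b)
            below[b].append(a)
    S = {}
    for x, v in T.items():
        if v == DOT and any(T[y] == n for y in above[x]):
            S[x] = n          # dot absorbs an adjacent label n from above
        elif v == n and any(T[y] == DOT for y in below[x]):
            S[x] = DOT        # label n vacates toward an adjacent dot below
        else:
            S[x] = v
    return S
```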
\begin{example}
We have
\ytableausetup{centertableaux}
\begin{align*}
&\mathsf{Swap}_{\bullet,1}\left( \;
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {$\bullet$};
\node [above right of=21] (31) {3};
\node [above left of=11] (12) {$\bullet$};
\node [above left of=12] (13) {1};
\node [above left of=21] (22) {2};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture} \; \right) =
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {$\bullet$};
\node [above right of=21] (31) {3};
\node [above left of=11] (12) {1};
\node [above left of=12] (13) {$\bullet$};
\node [above left of=21] (22) {2};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}
&\mathsf{Swap}_{\bullet,2}\left( \;
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {1};
\node [above right of=21] (31) {2};
\node [above left of=11] (12) {$\bullet$};
\node [above left of=12] (13) {2};
\node [above left of=21] (22) {2};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}
\; \right) =
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {1};
\node [above right of=21] (31) {2};
\node [above left of=11] (12) {2};
\node [above left of=12] (13) {$\bullet$};
\node [above left of=21] (22) {$\bullet$};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}\\
&\mathsf{Swap}_{\bullet,1}\left( \;
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {$\bullet$};
\node [above right of=21] (31) {2};
\node [above left of=11] (12) {$\bullet$};
\node [above left of=12] (13) {1};
\node [above left of=21] (22) {1};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}
\right) =
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {1};
\node [above right of=21] (31) {2};
\node [above left of=11] (12) {1};
\node [above left of=12] (13) {$\bullet$};
\node [above left of=21] (22) {$\bullet$};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}
&\mathsf{Swap}_{\bullet,1}\left( \;
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {$\bullet$};
\node [above right of=21] (31) {1};
\node [above left of=11] (12) {$\bullet$};
\node [above left of=12] (13) {1};
\node [above left of=21] (22) {1};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}
\; \right) =
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {1};
\node [above right of=21] (31) {$\bullet$};
\node [above left of=11] (12) {1};
\node [above left of=12] (13) {$\bullet$};
\node [above left of=21] (22) {$\bullet$};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}\\
&\mathsf{Swap}_{\bullet,1}\left( \;
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {$\bullet$};
\node [above right of=21] (31) {3};
\node [above left of=11] (12) {1};
\node [above left of=12] (13) {2};
\node [above left of=21] (22) {3};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}
\; \right) =
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above right of=11] (21) {$\bullet$};
\node [above right of=21] (31) {3};
\node [above left of=11] (12) {1};
\node [above left of=12] (13) {2};
\node [above left of=21] (22) {3};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}& \\
\end{align*}
\ytableausetup{nocentertableaux}
\end{example}
\begin{definition}
Let $T$ be a skew dotted increasing $\mathcal{P}$-tableau. Let $\mathcal{Q}$ be the subset of $\mathcal{P}$ whose elements $T$ maps to positive integers, i.e.\
\[\mathcal{Q} = \{x : T(x) \in \mathbb{Z}_{>0} \}.\]
Then, we define $\mathsf{RemoveDots}(T) = T|_\mathcal{Q}$.
\end{definition}
\begin{example}
For example,
\[\mathsf{RemoveDots} \left( \;
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {1};
\node [above right of=11] (21) {2};
\node [above right of=21] (31) {$\bullet$};
\node [above left of=11] (12) {2};
\node [above left of=12] (13) {$\bullet$};
\node [above left of=21] (22) {$\bullet$};
\draw (11) -- (12);
\draw (11) -- (21);
\draw (13) -- (12);
\draw (22) -- (12);
\draw (22) -- (21);
\draw (31) -- (21);
\end{tikzpicture}
\; \right) = \begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {1};
\node [above right of=11] (21) {2};
\node [above left of=11] (12) {2};
\draw (11) -- (12);
\draw (11) -- (21);
\end{tikzpicture}.\]
\end{example}
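Computationally, $\mathsf{RemoveDots}$ is simply a restriction of the map $T$. The following Python sketch (illustrative only, not part of the formalism) represents a dotted tableau as a dictionary from poset elements to entries, with a sentinel value standing in for $\bullet$; the names \texttt{DOT} and \texttt{remove\_dots} are our own choices.

```python
DOT = object()  # sentinel marking a dotted cell (the role of the bullet)

def remove_dots(tableau):
    """Restrict a dotted tableau (dict: poset element -> entry) to the
    cells carrying positive-integer entries, discarding the dotted cells."""
    return {x: v for x, v in tableau.items() if v is not DOT}
```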
\begin{definition}
Let $T$ be a skew increasing $\mathcal{P}$-tableau of shape $\nu/\lambda$ and let $\gamma \subseteq \ensuremath{\mathrm{IC}}(\nu/\lambda)$. Let $n = \max(\ensuremath{\mathrm{Range}}(T))$. The \textbf{slide} of $\gamma$ in $T$ is the skew increasing $\mathcal{P}$-tableau
\[\mathsf{Slide}_\gamma(T) = \mathsf{RemoveDots} \circ \mathsf{Swap}_{\bullet,n} \circ \cdots \circ \mathsf{Swap}_{\bullet,1} \circ \mathsf{AddDots}_\gamma(T).\]
We will also use the notation $\mathsf{Slide}_{\gamma_1, \ldots , \gamma_k}$ to denote iterated slides, i.e.\
\[\mathsf{Slide}_{\gamma_1, \ldots, \gamma_k}(T) = \mathsf{Slide}_{\gamma_k} \circ \cdots \circ \mathsf{Slide}_{\gamma_1}(T).\]
\end{definition}
\begin{example}
As an example, let $T$ be the skew increasing $\mathcal{P}$-tableau
\[T = \begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {1};
\node [above left of=13] (14) {2};
\node [above right of=11] (21) {};
\node [above left of=21] (22) {2};
\node [above left of=22] (23) {3};
\node [above right of=21] (31) {2};
\node [above left of=31] (32) {3};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture}, \;
\mathcal{P} = \begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {a};
\node [above left of=12] (13) {};
\node [above left of=13] (14) {};
\node [above right of=11] (21) {b};
\node [above left of=21] (22) {};
\node [above left of=22] (23) {};
\node [above right of=21] (31) {};
\node [above left of=31] (32) {};
\node [above right of=31] (41) {};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture}\]
where $a$ and $b$ are the two inner corners of $T$.
We compute $\mathsf{Slide}_\gamma$ for various values of $\gamma$, beginning after $\mathsf{AddDots}_\gamma$, showing the intermediate swaps, and ending after $\mathsf{RemoveDots}$:
\begin{align*}
\gamma &= \{a\} &
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {$\bullet$};
\node [above left of=12] (13) {1};
\node [above left of=13] (14) {2};
\node [above right of=11] (21) {};
\node [above left of=21] (22) {2};
\node [above left of=22] (23) {3};
\node [above right of=21] (31) {2};
\node [above left of=31] (32) {3};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture} &&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {1};
\node [above left of=12] (13) {$\bullet$};
\node [above left of=13] (14) {2};
\node [above right of=11] (21) {};
\node [above left of=21] (22) {2};
\node [above left of=22] (23) {3};
\node [above right of=21] (31) {2};
\node [above left of=31] (32) {3};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture}&&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {1};
\node [above left of=12] (13) {2};
\node [above left of=13] (14) {$\bullet$};
\node [above right of=11] (21) {};
\node [above left of=21] (22) {2};
\node [above left of=22] (23) {3};
\node [above right of=21] (31) {2};
\node [above left of=31] (32) {3};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture}&&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {1};
\node [above left of=12] (13) {2};
\node [above right of=11] (21) {};
\node [above left of=21] (22) {2};
\node [above left of=22] (23) {3};
\node [above right of=21] (31) {2};
\node [above left of=31] (32) {3};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture} \\
\gamma &= \{b\} &
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {1};
\node [above left of=13] (14) {2};
\node [above right of=11] (21) {$\bullet$};
\node [above left of=21] (22) {2};
\node [above left of=22] (23) {3};
\node [above right of=21] (31) {2};
\node [above left of=31] (32) {3};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture}&&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {1};
\node [above left of=13] (14) {2};
\node [above right of=11] (21) {2};
\node [above left of=21] (22) {$\bullet$};
\node [above left of=22] (23) {3};
\node [above right of=21] (31) {$\bullet$};
\node [above left of=31] (32) {3};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture}&&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {1};
\node [above left of=13] (14) {2};
\node [above right of=11] (21) {2};
\node [above left of=21] (22) {3};
\node [above left of=22] (23) {$\bullet$};
\node [above right of=21] (31) {3};
\node [above left of=31] (32) {$\bullet$};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture}&&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {1};
\node [above left of=13] (14) {2};
\node [above right of=11] (21) {2};
\node [above left of=21] (22) {3};
\node [above right of=21] (31) {3};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\end{tikzpicture}\\
\gamma &= \{a,b\} &
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {$\bullet$};
\node [above left of=12] (13) {1};
\node [above left of=13] (14) {2};
\node [above right of=11] (21) {$\bullet$};
\node [above left of=21] (22) {2};
\node [above left of=22] (23) {3};
\node [above right of=21] (31) {2};
\node [above left of=31] (32) {3};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture}&&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {1};
\node [above left of=12] (13) {$\bullet$};
\node [above left of=13] (14) {2};
\node [above right of=11] (21) {$\bullet$};
\node [above left of=21] (22) {2};
\node [above left of=22] (23) {3};
\node [above right of=21] (31) {2};
\node [above left of=31] (32) {3};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture}&&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {1};
\node [above left of=12] (13) {2};
\node [above left of=13] (14) {$\bullet$};
\node [above right of=11] (21) {2};
\node [above left of=21] (22) {$\bullet$};
\node [above left of=22] (23) {3};
\node [above right of=21] (31) {$\bullet$};
\node [above left of=31] (32) {3};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture}&&
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {1};
\node [above left of=12] (13) {2};
\node [above left of=13] (14) {$\bullet$};
\node [above right of=11] (21) {2};
\node [above left of=21] (22) {3};
\node [above left of=22] (23) {$\bullet$};
\node [above right of=21] (31) {3};
\node [above left of=31] (32) {$\bullet$};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture}\\
(c&ontinued) &
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {1};
\node [above left of=12] (13) {2};
\node [above right of=11] (21) {2};
\node [above left of=21] (22) {3};
\node [above right of=21] (31) {3};
\node [above right of=31] (41) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (31) -- (41);
\draw (12) -- (22);
\end{tikzpicture}
\end{align*}
\end{example}
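Operationally, $\mathsf{Slide}_\gamma$ is just the composition given in the definition. The Python sketch below assumes the caller supplies implementations of $\mathsf{AddDots}_\gamma$, $\mathsf{Swap}_{\bullet,i}$, and $\mathsf{RemoveDots}$ (the parameter names are illustrative); it only wires them together.

```python
def slide(T, gamma, add_dots, swap, remove_dots):
    """Compute Slide_gamma(T) = RemoveDots o Swap_{.,n} o ... o Swap_{.,1}
    o AddDots_gamma(T), where n = max(Range(T))."""
    n = max(T.values())        # largest entry of T
    U = add_dots(T, gamma)     # place dots at the chosen inner corners
    for i in range(1, n + 1):  # swap dots past 1, then 2, ..., then n
        U = swap(U, i)
    return remove_dots(U)
```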
For a tableau $T$ of shape $\nu / \lambda$, we use the notation $\ensuremath{\mathrm{IC}}(T)$ to mean $\ensuremath{\mathrm{IC}}(\nu / \lambda)$.
\begin{definition}
Let $T$ be a skew increasing $\mathcal{P}$-tableau.
We define its rectification step sets, $S_i$, recursively.
First, $S_0 = \{T\}$. Next,
\[S_{n+1} = \{\mathsf{Slide}_\gamma(S) : S \in S_{n} \text{ and } \emptyset \neq \gamma \subseteq \ensuremath{\mathrm{IC}}(S) \}.\]
The \textbf{rectifications} of $T$ are the elements of the \textbf{rectification set}
\[\mathsf{rects}(T) =\{U : U \in S_n \text{ for some } n \in \mathbb{Z}_{\geq 0} \text{ and } U \text{ is of straight shape}\}.\]
To denote that $U$ is a rectification of $T$ given by sliding the sequence of sets of inner corners $(\gamma_1, \ldots, \gamma_n)$, we write $T \xrightarrow{\gamma_1, \ldots, \gamma_n} U$.
\end{definition}
\begin{example}
We give an example with two rectifications.
Let
\[\mathcal{P} =
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {};
\node [above left of=13] (14) {};
\node [above right of=11] (21) {};
\node [above left of=21] (22) {};
\node [above left of=22] (23) {};
\node [above right of=21] (31) {};
\node [above left of=31] (32) {};
\node [above left of=32] (33) {};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (32) -- (33);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (23) -- (33);
\draw (13) -- (23);
\end{tikzpicture}, \;
T =
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {};
\node [above left of=13] (14) {2};
\node [above right of=11] (21) {};
\node [above left of=21] (22) {};
\node [above left of=22] (23) {2};
\node [above right of=21] (31) {1};
\node [above left of=31] (32) {3};
\node [above left of=32] (33) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (32) -- (33);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (23) -- (33);
\draw (13) -- (23);
\end{tikzpicture}.
\]
One possible rectification follows this path:
\[
\begin{tikzpicture} [arrow/.style = {thick,-stealth}]
\node (T1) at (0,0){
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,anchor=base,baseline,]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {$\bullet$};
\node [above left of=13] (14) {2};
\node [above right of=11] (21) {};
\node [above left of=21] (22) {};
\node [above left of=22] (23) {2};
\node [above right of=21] (31) {1};
\node [above left of=31] (32) {3};
\node [above left of=32] (33) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (32) -- (33);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (23) -- (33);
\draw (13) -- (23);
\end{tikzpicture}
};
\node[right= -.2cm of T1.east](T1r){};
\node [right = 4cm of T1.south, anchor=south] (T2) {
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,anchor=base,baseline,]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {2};
\node [above right of=11] (21) {};
\node [above left of=21] (22) {};
\node [above left of=22] (23) {4};
\node [above right of=21] (31) {1};
\node [above left of=31] (32) {3};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture}
};
\node [right = .01cm of T1r -| T2.west](T2l) {};
\node [right = -.2cm of T2l -| T2.east] (T2r) {};
\draw [->] (T1r) -- (T2l);
\node [right= 1cm of T2] (T3){
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,anchor=base,baseline,]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {2};
\node [above right of=11] (21) {};
\node [above left of=21] (22) {$\bullet$};
\node [above left of=22] (23) {4};
\node [above right of=21] (31) {1};
\node [above left of=31] (32) {3};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (13) -- (23);
\end{tikzpicture}
};
\node [right = .01cm of T2r -| T3.west](T3l) {};
\node [right = -.2cm of T3l -| T3.east] (T3r) {};
\draw [->] (T2r) -- (T3l);
\node [right=1cm of T3] (T4){
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,anchor=base,baseline,]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {2};
\node [above right of=11] (21) {};
\node [above left of=21] (22) {3};
\node [above left of=22] (23) {4};
\node [above right of=21] (31) {1};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\draw (13) -- (23);
\end{tikzpicture}
};
\node [right = .01cm of T3r -| T4.west](T4l) {};
\draw [->] (T3r) -- (T4l);
\node [below right = 3cm and .15cm of T1.south, anchor=south] (S1){
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,anchor=base,baseline,]
\node (11) at (0,0) {};
\node [above left of=11] (12) {$\bullet$};
\node [above left of=12] (13) {2};
\node [above right of=11] (21) {$\bullet$};
\node [above left of=21] (22) {3};
\node [above left of=22] (23) {4};
\node [above right of=21] (31) {1};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\draw (13) -- (23);
\end{tikzpicture}
};
\node [above = .32cm of S1.north] (step){};
\node [above = -.1cm of S1.east] (S1r) {};
\node [below left = 3cm and .24cm of T2.south, anchor=south](S2){
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,anchor=base,baseline,]
\node (11) at (0,0) {};
\node [above left of=11] (12) {2};
\node [above left of=12] (13) {4};
\node [above right of=11] (21) {1};
\node [above left of=21] (22) {3};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (11) -- (21);
\draw (12) -- (22);
\end{tikzpicture}
};
\node [right = .01cm of S1r -| S2.west](S2l) {};
\node [right = -.2cm of S2l -| S2.east] (S2r) {};
\draw [->] (S1r) -- (S2l);
\node [below left = 3cm and .24cm of T3.south, anchor=south] (S3){
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,,anchor=base,baseline,]
\node (11) at (0,0) {$\bullet$};
\node [above left of=11] (12) {2};
\node [above left of=12] (13) {4};
\node [above right of=11] (21) {1};
\node [above left of=21] (22) {3};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (11) -- (21);
\draw (12) -- (22);
\end{tikzpicture}\;
};
\node [right = .01cm of S2r -| S3.west](S3l) {};
\node [right = -.2cm of S3l -| S3.east] (S3r) {};
\draw [->] (S2r) -- (S3l);
\node [below left = 3cm and .26cm of T4.south, anchor=south] (S4){
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,anchor=base,baseline,]
\node (11) at (0,0) {1};
\node [above left of=11] (12) {2};
\node [above left of=12] (13) {4};
\node [above right of=11] (21) {3};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (11) -- (21);
\end{tikzpicture}
};
\node [right = .01cm of S3r -| S4.west](S4l) {};
\node [right = -.2cm of S4l -| S4.east] (S4r) {};
\draw [->] (S3r) -- (S4l);
\draw[->] (T4.south) |- (step) -| (S1.north);
\end{tikzpicture}
\]
so one rectification of $T$ is \begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {1};
\node [above left of=11] (12) {2};
\node [above left of=12] (13) {4};
\node [above right of=11] (21) {3};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (11) -- (21);
\end{tikzpicture}.
Another rectification follows this path:
\[
\begin{tikzpicture} [arrow/.style = {thick,-stealth}]
\node (T1) at (0,0) {
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,anchor=base,baseline,]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {$\bullet$};
\node [above left of=13] (14) {2};
\node [above right of=11] (21) {};
\node [above left of=21] (22) {$\bullet$};
\node [above left of=22] (23) {2};
\node [above right of=21] (31) {1};
\node [above left of=31] (32) {3};
\node [above left of=32] (33) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (13) -- (14);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (31) -- (32);
\draw (32) -- (33);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\draw (22) -- (32);
\draw (23) -- (33);
\draw (13) -- (23);
\end{tikzpicture}
};
\node[right= -.2cm of T1.east](T1r){};
\node [right = 4cm of T1.south, anchor=south] (T2) {
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,anchor=base,baseline,]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {2};
\node [above right of=11] (21) {};
\node [above left of=21] (22) {2};
\node [above left of=22] (23) {4};
\node [above right of=21] (31) {1};
\node [above left of=31] (32) {3};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (23) -- (22);
\draw (23) -- (13);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\draw (22) -- (32);
\end{tikzpicture}
};
\node [right = .01cm of T1r -| T2.west](T2l) {};
\node [right = -.2cm of T2l -| T2.east] (T2r) {};
\draw [->] (T1r) -- (T2l);
\node [right = 1cm of T2] (T3) {
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,anchor=base,baseline,]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {2};
\node [above right of=11] (21) {$\bullet$};
\node [above left of=21] (22) {2};
\node [above left of=22] (23) {4};
\node [above right of=21] (31) {1};
\node [above left of=31] (32) {3};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (23) -- (22);
\draw (23) -- (13);
\draw (31) -- (32);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\draw (22) -- (32);
\end{tikzpicture}
};
\node [right = .01cm of T2r -| T3.west](T3l) {};
\node [right = -.2cm of T3l -| T3.east] (T3r) {};
\draw [->] (T2r) -- (T3l);
\node [right = 1cm of T3] (T4) {
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,anchor=base,baseline,]
\node (11) at (0,0) {};
\node [above left of=11] (12) {};
\node [above left of=12] (13) {2};
\node [above right of=11] (21) {1};
\node [above left of=21] (22) {2};
\node [above left of=22] (23) {4};
\node [above right of=21] (31) {3};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (23) -- (22);
\draw (23) -- (13);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\end{tikzpicture}
};
\node [right = .01cm of T3r -| T4.west](T4l) {};
\node [right = -.2cm of T4l -| T4.east] (T4r) {};
\draw [->] (T3r) -- (T4l);
\node [below right = 3cm and .24cm of T1.south, anchor=south] (S1){
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,anchor=base,baseline,]
\node (11) at (0,0) {};
\node [above left of=11] (12) {$\bullet$};
\node [above left of=12] (13) {2};
\node [above right of=11] (21) {1};
\node [above left of=21] (22) {2};
\node [above left of=22] (23) {4};
\node [above right of=21] (31) {3};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (22) -- (23);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\draw (13) -- (23);
\end{tikzpicture}
};
\node [above = .32cm of S1.north] (step){};
\node [above = -.1cm of S1.east] (S1r) {};
\node [below right = 3cm and .01cm of T2.south, anchor=south](S2){
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,anchor=base,baseline,]
\node (11) at (0,0) {};
\node [above left of=11] (12) {2};
\node [above left of=12] (13) {4};
\node [above right of=11] (21) {1};
\node [above left of=21] (22) {4};
\node [above right of=21] (31) {3};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\end{tikzpicture}
};
\node [right = .01cm of S1r -| S2.west](S2l) {};
\node [right = -.2cm of S2l -| S2.east] (S2r) {};
\draw [->] (S1r) -- (S2l);
\node [below right = 3cm and .001cm of T3.south, anchor=south] (S3){
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,,anchor=base,baseline,]
\node (11) at (0,0) {$\bullet$};
\node [above left of=11] (12) {2};
\node [above left of=12] (13) {4};
\node [above right of=11] (21) {1};
\node [above left of=21] (22) {4};
\node [above right of=21] (31) {3};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (11) -- (21);
\draw (21) -- (31);
\draw (12) -- (22);
\end{tikzpicture}
};
\node [right = .01cm of S2r -| S3.west](S3l) {};
\node [right = -.2cm of S3l -| S3.east] (S3r) {};
\draw [->] (S2r) -- (S3l);
\node [below left = 3cm and .01cm of T4.south, anchor=south] (S4){
\begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=24pt, font=\Large}, every draw/.append style={black, thick}, node distance = 1.5cm,,anchor=base,baseline,]
\node (11) at (0,0) {1};
\node [above left of=11] (12) {2};
\node [above left of=12] (13) {4};
\node [above right of=11] (21) {3};
\node [above left of=21] (22) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (11) -- (21);
\draw (12) -- (22);
\end{tikzpicture}
};
\node [right = .01cm of S3r -| S4.west](S4l) {};
\node [right = -.2cm of S4l -| S4.east] (S4r) {};
\draw [->] (S3r) -- (S4l);
\draw[->] (T4.south) |- (step) -| (S1.north);
\end{tikzpicture}
\]
so another rectification of $T$ is \begin{tikzpicture} [every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (11) at (0,0) {1};
\node [above left of=11] (12) {2};
\node [above left of=12] (13) {4};
\node [above right of=11] (21) {3};
\node [above left of=21] (22) {4};
\draw (11) -- (12);
\draw (12) -- (13);
\draw (21) -- (22);
\draw (11) -- (21);
\draw (12) -- (22);
\end{tikzpicture}. One can check that these two are the only rectifications of $T$.
\end{example}
We say a skew increasing $\mathcal{P}$-tableau \textbf{rectifies uniquely} if it has exactly one rectification. We say an increasing $\mathcal{P}$-tableau $T$ of straight shape is a \textbf{unique rectification target (URT)} if every skew increasing $\mathcal{P}$-tableau which rectifies to $T$ rectifies uniquely.
\section{Unique rectification in slant sums}
\label{Section adding to minimum and maximal elements}
In this section, we explore the structure of URTs in posets that are built out of smaller posets by a slant sum operation.
\begin{definition}
\cite{Proctor:JACO}
Let $\mathcal{P}, \mathcal{Q}$ be disjoint posets. Assume $\mathcal{Q}$ has a minimum element $\hat{0}_\mathcal{Q}$. Let $p \in \mathcal{P}$.
The \textbf{slant sum} of $\mathcal{Q}$ to $\mathcal{P}$ at $p$, denoted $\mathcal{P} \, _{p}/^{\hat{0}_\mathcal{Q}} \, \mathcal{Q}$, is the poset on $\mathcal{P} \sqcup \mathcal{Q}$ induced by imposing $\hat{0}_\mathcal{Q} \gtrdot p$ together with the orders on $\mathcal{P}$ and $\mathcal{Q}$. Because a poset's minimum is unique, we will usually drop the $\hat{0}_\mathcal{Q}$ and denote the slant sum as $\mathcal{P} \slantsum{p} \mathcal{Q}$. If $\mathcal{Q}_1, \dots , \mathcal{Q}_n$ are pairwise disjoint posets which each have minima, then $\mathcal{P} \slantsum{p} (\mathcal{Q}_1, \dots ,\mathcal{Q}_n)$ denotes the iterated slant sum of posets at $p$. (Clearly, the order of the $\mathcal{Q}_i$s does not matter.)
Finally, given $p_1, \dots, p_m \in \mathcal{P}$ and pairwise disjoint posets $\mathcal{Q}_i^j$ with minima, we write
\[\mathcal{P} \slantsum{p_1} (\mathcal{Q}_1^1, \dots, \mathcal{Q}_{r_1}^1) \slantsum{p_2} \cdots \slantsum{p_m} (\mathcal{Q}_1^m, \dots, \mathcal{Q}_{r_m}^m)\] to denote
the result of slant summing each $\mathcal{Q}_i^j$ onto $p_j$ (in any order).
\end{definition}
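As a minimal illustration of this definition (our own, with hypothetical elements $a, b, x, y$), consider slant-summing one two-element chain onto another:

```latex
% Illustration (hypothetical elements): slant sums of two chains.
% Let P = {a < b} and Q = {x < y}, so that \hat{0}_Q = x.
% Slant summing Q onto b imposes the cover x \gtrdot b, yielding a chain:
\[
\{a < b\} \slantsum{b} \{x < y\} = \{a < b < x < y\}.
\]
% Slant summing Q onto a instead yields a poset that is not a chain:
% a is covered by both b and x, and b is incomparable to both x and y.
```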
We say that a poset $\mathcal{P}$ is a \textbf{chain} if all pairs of elements of $\mathcal{P}$ are comparable, that is, if $\mathcal{P}$ is a total order. The size of a chain $\mathcal{P}$ is the number of its elements.
\begin{proposition}\label{thm:adding min}
Let $\mathcal{P}$ be a poset with $\hat{0}_\mathcal{P}$ and let $\mathcal{C} = \{ c\}$ be a chain of size $1$. Let $\mathcal{R} = \mathcal{C} \slantsum{c} \mathcal{P}$.
If $T$ is a skew increasing $\mathcal{P}$-tableau of shape $\nu/\lambda$ that rectifies uniquely (in $\mathcal{P}$), then the skew increasing $\mathcal{R}$-tableau $T_\mathcal{R}$ of shape $(\nu \sqcup \{c\})/(\lambda \sqcup \{c\})$ defined by $T_\mathcal{R}(x) = T(x)$ rectifies uniquely (in $\mathcal{R}$).
\end{proposition}
\begin{proof}
Suppose $U$ is the unique rectification of $T$. Let $p$ be the minimum of $\mathcal{P}$.
When we rectify $T_\mathcal{R}$ in $\mathcal{R}$, since $c$ is covered only by $p$, the last step of rectification is to slide $c$. Just before this final step, one has a tableau $S$ whose shape is an order ideal in $\mathcal{P} \subset \mathcal{R}$.
Clearly, a process of slides producing $S$ from $T_\mathcal{R}$ (in $\mathcal{R}$) corresponds to a sequence of slides rectifying $T$ (in $\mathcal{P}$).
Thus, since $T$ rectifies uniquely to $U$, we have $S(x) = U(x)$ for all $x \in \mathcal{P}$.
The final slide is necessarily $\mathsf{Slide}_{\{c\}}$, since $c$ is the only inner corner of $S$. Hence, there are no further choices to make and thus $T_\mathcal{R}$ rectifies uniquely.
\end{proof}
\begin{remark}
\label{Remark counterexample tail}
The converse of Proposition~\ref{thm:adding min} is false.
Let
\[\text {$\mathcal{P} = \begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (b) at (0,0) {};
\node [above left of=b] (l) {};
\node [above right of=b] (r) {};
\node [above right of=l] (t) {};
\node [above of =t] (t1) {};
\draw (b) -- (l);
\draw (b) -- (r);
\draw (t) -- (r);
\draw (t) -- (l);
\draw (t) -- (t1);
\end{tikzpicture}$ and $\mathcal{R} = \begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (b1) at (0,0) {};
\node [above of=b1] (b) {};
\node [above left of=b] (l) {};
\node [above right of=b] (r) {};
\node [above right of=l] (t) {};
\node [above of =t] (t1) {};
\draw (b1) -- (b);
\draw (b) -- (l);
\draw (b) -- (r);
\draw (t) -- (r);
\draw (t) -- (l);
\draw (t) -- (t1);
\end{tikzpicture}$.}\]
For the skew increasing $\mathcal{P}$-tableau $T_\mathcal{P} = \begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (b) at (0,0) {};
\node [above left of=b] (l) {};
\node [above right of=b] (r) {};
\node [above right of=l] (t) {1};
\node [above of =t] (t1) {2};
\draw (b) -- (l);
\draw (b) -- (r);
\draw (t) -- (r);
\draw (t) -- (l);
\draw (t) -- (t1);
\end{tikzpicture}$, we have
\[
\mathsf{rects}(T_\mathcal{P}) = \left\{ \text{$\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (b) at (0,0) {1};
\node [above left of=b] (l) {2};
\node [above right of=b] (r) {};
\node [above right of=l] (t) {};
\node [above of =t] (t1) {};
\draw (b) -- (l);
\draw (b) -- (r);
\draw (t) -- (r);
\draw (t) -- (l);
\draw (t) -- (t1);
\end{tikzpicture}$, $\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (b) at (0,0) {1};
\node [above left of=b] (l) {};
\node [above right of=b] (r) {2};
\node [above right of=l] (t) {};
\node [above of =t] (t1) {};
\draw (b) -- (l);
\draw (b) -- (r);
\draw (t) -- (r);
\draw (t) -- (l);
\draw (t) -- (t1);
\end{tikzpicture}$, and $\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (b) at (0,0) {1};
\node [above left of=b] (l) {2};
\node [above right of=b] (r) {2};
\node [above right of=l] (t) {};
\node [above of =t] (t1) {};
\draw (b) -- (l);
\draw (b) -- (r);
\draw (t) -- (r);
\draw (t) -- (l);
\draw (t) -- (t1);
\end{tikzpicture}$} \right\}.
\]
However, the skew increasing $\mathcal{R}$-tableau $T_\mathcal{R} = \begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (b1) at (0,0) {};
\node [above of=b1] (b) {};
\node [above left of=b] (l) {};
\node [above right of=b] (r) {};
\node [above right of=l] (t) {1};
\node [above of =t] (t1) {2};
\draw (b1) -- (b);
\draw (b) -- (l);
\draw (b) -- (r);
\draw (t) -- (r);
\draw (t) -- (l);
\draw (t) -- (t1);
\end{tikzpicture}$ rectifies uniquely to $\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (b1) at (0,0) {1};
\node [above of=b1] (b) {2};
\node [above left of=b] (l) {};
\node [above right of=b] (r) {};
\node [above right of=l] (t) {};
\node [above of =t] (t1) {};
\draw (b1) -- (b);
\draw (b) -- (l);
\draw (b) -- (r);
\draw (t) -- (r);
\draw (t) -- (l);
\draw (t) -- (t1);
\end{tikzpicture}$.
\end{remark}
\begin{remark}\label{rem:URT_not_pchain}
Extending a poset by a new maximum element does not preserve unique rectification targets. For example, it is easy to check that
$\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (b) at (0,0) {1};
\node [above left of=b] (l) {2};
\node [above right of=b] (r) {};
\node [above right of=l] (t) {};
\draw (b) -- (l);
\draw (b) -- (r);
\draw (t) -- (r);
\draw (t) -- (l);
\end{tikzpicture}$
is a unique rectification target in the poset $\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (b) at (0,0) {};
\node [above left of=b] (l) {};
\node [above right of=b] (r) {};
\node [above right of=l] (t) {};
\draw (b) -- (l);
\draw (b) -- (r);
\draw (t) -- (r);
\draw (t) -- (l);
\end{tikzpicture}$.
However, as shown in Remark~\ref{Remark counterexample tail}, $\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (b) at (0,0) {1};
\node [above left of=b] (l) {2};
\node [above right of=b] (r) {};
\node [above right of=l] (t) {};
\node [above of =t] (t1) {};
\draw (b) -- (l);
\draw (b) -- (r);
\draw (t) -- (r);
\draw (t) -- (l);
\draw (t) -- (t1);
\end{tikzpicture}$ is not a unique rectification target in the poset $\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}, baseline={([yshift=-.5ex]current bounding box.center)}]
\node (b) at (0,0) {};
\node [above left of=b] (l) {};
\node [above right of=b] (r) {};
\node [above right of=l] (t) {};
\node [above of =t] (t1) {};
\draw (b) -- (l);
\draw (b) -- (r);
\draw (t) -- (r);
\draw (t) -- (l);
\draw (t) -- (t1);
\end{tikzpicture}$.
\end{remark}
We now introduce some notation and terminology that we will need. Suppose $\mathcal{Q}$ is a shape of $\mathcal{P}$, and $T$ is a skew increasing $\mathcal{P}$-tableau. Then, the \textbf{restriction} of $T$ to $\mathcal{Q}$, denoted $T|_\mathcal{Q}$, is the increasing $\mathcal{Q}$-tableau given by restricting the domain of $T$ to those elements that are in $\mathcal{Q}$.
\begin{definition}
Suppose $\mathcal{F}$ is an order filter of a poset $\mathcal{P}$ with a $\hat{0}_\mathcal{F}$. We say $\mathcal{F}$ is a \textbf{funnel} if, for all $p \in \mathcal{P} \setminus \mathcal{F}$ with $p<f$ for some $f \in \mathcal{F}$, we have $p<\hat{0}_\mathcal{F}$.
\end{definition}
Note that, in particular, the embedded copy of $\mathcal{Q}$ in any slant sum $\mathcal{P} \, _{p}/^{\hat{0}_\mathcal{Q}} \, \mathcal{Q}$ forms a funnel.
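To illustrate the definition on a small hypothetical poset (our own example, not from the surrounding text):

```latex
% Illustration (hypothetical poset): let P = {a, b, c} with a < c and
% b < c, where a and b are incomparable.
% The order filter F = {c} is a funnel: \hat{0}_F = c, and every
% p \in P \setminus F with p < c (namely a and b) satisfies p < \hat{0}_F.
% The order filter F' = {b, c} has \hat{0}_{F'} = b, but it is NOT a
% funnel: a < c with c \in F', yet a \not< b = \hat{0}_{F'}.
```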
\begin{definition}
\label{definition corresponding}
Let $\mathcal{F}$ be a funnel of a poset $\mathcal{P}$. Suppose $T$ is a skew increasing $\mathcal{P}$-tableau of shape $\nu/\lambda$ with a rectification $U$.
Then, the \textbf{corresponding} skew increasing $\mathcal{F}$-tableau of $T$ for $U$, denoted as $(T \rightarrow U)|_\mathcal{F},$ is the increasing $\mathcal{F}$-tableau defined by
\[(T \rightarrow U)|_\mathcal{F} = T|_\mathcal{E},\]
where
\[\mathcal{E} \coloneqq
\begin{cases}
{\{x \in \nu/\lambda \cap \mathcal{F} : T(x) \geq U(\hat{0}_\mathcal{F})\}}, & \text{ if } \hat{0}_\mathcal{F} \in \nu/\lambda; \\
\emptyset, & \text{ if } \hat{0}_\mathcal{F} \notin \nu/\lambda.
\end{cases}
\]
\end{definition}
\begin{proposition}
\label{Theorem corresponding respects slides}
Let $\mathcal{F}$ be a funnel of a poset $\mathcal{P}$ and let $T$ be a skew increasing $\mathcal{P}$-tableau.
If $U$ is a rectification of $T$, then $U|_\mathcal{F}$ is a rectification of $(T \rightarrow U)|_\mathcal{F}$.
\end{proposition}
\begin{proof}
For concision, write $F$ for $(T \rightarrow U)|_\mathcal{F}$. The proposition is trivial if $F$ is the empty tableau, so we assume $F$ is supported on at least one element of $\mathcal{F}$.
Let $T \xrightarrow{\gamma_1, \dots, \gamma_n} U$.
Using the sequence of sets of inner corners $\gamma_1, \dots ,\gamma_n$, we will recursively construct sets of inner corners $\theta_1, \dots ,\theta_n$ that rectify $F$ to $U|_\mathcal{F}$.
We write $T_i^j$ to represent $T$ \emph{after} $i$ slides and just \emph{before} the $j$th swap; that is,
\[T_i^j \coloneqq \begin{cases}
\mathsf{Swap}_{\bullet,j-1} \circ \cdots \circ \mathsf{Swap}_{\bullet,1} \circ \mathsf{AddDots}_{\gamma_{i+1}} \circ \mathsf{Slide}_{\gamma_1, \ldots, \gamma_{i}}(T), & \text{ if } j \geq 1; \\
\mathsf{Slide}_{\gamma_1, \ldots, \gamma_{i}}(T), &\text{ if } j = 0.
\end{cases} \]
In particular, $T_0^0 = T$ and $T_0^1 = \mathsf{AddDots}_{\gamma_1}(T)$.
Let $F_0 \coloneqq F$.
For $1\leq i \leq n$, recursively define $\theta_i$ and $F_i$ as follows:
\begin{align*}
\theta_i \coloneqq \{ & c \in \ensuremath{\mathrm{IC}}(F_{i-1}) : \text{there exists a $j$ with $T_{i-1}^j(c) = \bullet$ and} \\
& \text{$T_{i-1}^{j+1}(f) = \bullet$ for some $f \gtrdot c$ with $f \in \ensuremath{\mathrm{Dom}}(F_{i-1})$}\}
\end{align*}
and
\[F_i \coloneqq \mathsf{Slide}_{\theta_1, \ldots, \theta_i}(F).\]
Similarly to $T_i^j$, we let $F_i^j$ be
\[F_i^j \coloneqq \begin{cases}
\mathsf{Swap}_{\bullet,j-1} \circ \cdots \circ \mathsf{Swap}_{\bullet,1} \circ \mathsf{AddDots}_{\theta_{i+1}}(F_i), & \text{ if } j \geq 1; \\
F_i, &\text{ if } j = 0.
\end{cases}\]
Finally, we define certain subsets of $\theta_i$ which will be useful in our analysis. Let $\theta_i^0 = \emptyset$ and, for $k > 0$, let
\begin{align*}
\theta_i^k \coloneqq \{ & c \in \ensuremath{\mathrm{IC}}(F_{i-1}) : \text{there exists a $j^\star \geq k$ with $T_{i-1}^{j^\star}(c) = \bullet$ and} \\
& \text{$T_{i-1}^{j^\star+1}(f) = \bullet$ for some $f \gtrdot c$ with $f \in \ensuremath{\mathrm{Dom}}(F_{i-1})$}\}.
\end{align*}
Note that $\theta_i^1 = \theta_i$.
Set
$m \coloneqq U(\hat{0}_\mathcal{F}) = \min \ensuremath{\mathrm{Range}}(F).$
For the remainder of this proof, we say $F_i^j$ and $T_i^j$ {\bf N-agree} if $F_i^j$ and $T_i^j$ agree on all numeric labels within $\mathcal{F}$ greater than or equal to $m$; that is, for all $f \in \mathcal{F}$, if $T_i^j(f) \geq m$ or $F_i^j(f) \in \mathbb{Z}$, then $T_i^j(f) =F_i^j(f)$.
Additionally, for the remainder of this proof, we say that $F_i^j$ and $T_i^j$ {\bf agree} if they satisfy all of the following conditions:
\begin{enumerate}
\item[(A.0)] $F_i^j$ and $T_i^j$ N-agree;
\item[(A.1)] $F_i^j|_{\ensuremath{\mathrm{Dom}}(F_i)} = T_i^j|_{\ensuremath{\mathrm{Dom}}(F_i)}$;
\item[(A.2)] for all $c \in \ensuremath{\mathrm{IC}} (F_i)$, $F_i^j(c) = \bullet$ if and only if $c \in \theta_{i+1}^j$.
\end{enumerate}
We now establish inductively that $F_i^j$ and $T_i^j$ agree for all $i$ and $j$.
First, note that, since $F_0^0 = F = (T \rightarrow U)|_\mathcal{F}$ is by definition a restriction of $T = T_0^0$ to the subposet of $\mathcal{F}$ consisting of those $x \in \mathcal{F}$ with $T_0^0(x) \geq m$, $F_0^0$ and $T_0^0$ satisfy (A.0) and (A.1). Furthermore, $\theta_1^0 = \emptyset$ and $F_0^0$ has no $\bullet$s, which proves (A.2).
Now, inductively assume that $F_\ell^h$ and $T_\ell^h$ agree for all $\ell \leq i$ and $h \leq j$.
Let
\[M = \max \ensuremath{\mathrm{Range}} (T|_\mathcal{F}) = \max \ensuremath{\mathrm{Range}} (F).\]
\medskip
\noindent
{\sf (Case 1: $j =M+1$):}
Since the largest label in $F$ and $T|_\mathcal{F}$ is $M$, our inductive step is to show that $F_{i+1}^0$ and $T_{i+1}^0$ agree.
We have $F_{i+1}^0 = \mathsf{RemoveDots}(F_i^{M+1})$ and $T_{i+1}^0 = \mathsf{RemoveDots}(T_i^{M+1})$. Hence, since $F_i^{M+1}$ and $T_i^{M+1}$ satisfy (A.0) and removing $\bullet$s does not change numeric values, $F_{i+1}^0$ and $T_{i+1}^0$ satisfy (A.0).
Since $F_{i+1}^0$ and $T_{i+1}^0$ satisfy (A.0) and $F_{i+1}^0$ has no $\bullet$s, we have that $F_{i+1}^0$ and $T_{i+1}^0$ satisfy (A.1). Furthermore, by definition $\theta_{i+1}^0 = \emptyset$ and $F_{i+1}^0$ has no $\bullet$s, proving (A.2).
\bigskip
\noindent
{\sf (Case 2: $j =0$):}
In this case, $F_{i}^1 = \mathsf{AddDots}_{\theta_{i+1}}(F_i^0)$ and $T_{i}^1 = \mathsf{AddDots}_{\gamma_{i+1}}(T_i^0)$. (We note that, by definition, $\theta_{i+1}$ is a subset of $F_i$'s inner corners, so $\mathsf{AddDots}_{\theta_{i+1}}(F_i^0)$ is valid.)
To show that $F_{i}^1$ and $T_{i}^1$ agree, we must verify (A.0)--(A.2).
{\bf Proof of (A.0):}
Since $F_i^0$ and $T_i^0$ N-agree by assumption and $\mathsf{AddDots}_{\theta_{i+1}}$ does not affect numerical labels, it is clear that $F_{i}^1$ and $T_{i}^1$ N-agree.
{\bf Proof of (A.1):}
Let $f \in \ensuremath{\mathrm{Dom}}(F_i)$. Then $F_i(f) \in \mathbb{Z}$, so
\[F_i^1(f) = F_i(f) = T_i^0(f) = T_i^1(f) \in \mathbb{Z},\]
where the first and last equalities hold because adding $\bullet$s does not affect numeric labels, and the middle equality is by (A.0) for $F_i^0$ and $T_i^0$.
{\bf Proof of (A.2):} Since $F_i^1 = \mathsf{AddDots}_{\theta_{i+1}}(F_i)$, we have, for all $c \in \ensuremath{\mathrm{IC}}(F_i)$, that $F_i^1(c) = \bullet$ if and only if $c \in \theta_{i+1} = \theta_{i+1}^1$.
\bigskip
\noindent
{\sf (Case 3: $0 < j < M+1$):}
We must verify (A.0)--(A.2) for $F_i^{j+1}$ and $T_i^{j+1}$. First, we prove some helpful claims.
\begin{claim}
\label{claim F bullet implies T bullet or large covers}
For all $c \in \ensuremath{\mathrm{IC}}(F_i)$, if $F_i^j(c) = \bullet$, then either $T_i^j(c) = \bullet$ or else, for all $f \in \ensuremath{\mathrm{Dom}}(F_i)$ with $c \lessdot f$, we have $F_i^j(f) = T_i^j(f) > j$.
\end{claim}
\begin{proof}[Proof of Claim~\ref{claim F bullet implies T bullet or large covers}]
Let $c \in \ensuremath{\mathrm{IC}}(F_i)$ and suppose $F_i^j(c) = \bullet$.
Then, by the inductive (A.2), $c \in \theta_{i+1}^j$,
so there exists a $j^\star \geq j$ such that $T_i^{j^\star}(c) = \bullet$ and
$T_i^{j^\star+1}(f) = \bullet$ for some $f \gtrdot c$ with $f \in \ensuremath{\mathrm{Dom}}(F_i)$. There are two cases to consider: either $j^\star = j$ or $j^\star > j$.
First, suppose $j^\star = j$. Then $T_i^j(c) = T_i^{j^\star}(c) = \bullet$, as desired.
Otherwise, suppose $j^\star > j$. Since $T_i^{j^\star}(c) = \bullet$, we have, for all $c' \in \mathcal{P}$ with $c' \gtrdot c$, that $T_i^j(c') \geq j^\star > j$. Thus, by the inductive (A.1), we have, for all $f \in \ensuremath{\mathrm{Dom}}(F_i)$ with $f \gtrdot c$, that $F_i^j(f) = T_i^j(f) > j$.
\end{proof}
\begin{claim}
\label{claim ic or domain}
Suppose $p \in \mathcal{P} \setminus \left( \ensuremath{\mathrm{Dom}}(F_i) \cup \ensuremath{\mathrm{IC}}(F_i) \right)$. Further suppose $p \lessdot f$ with $f \in \mathcal{F}$ and $F_i^j(f) = j$. Then, $F_i^j(p) \neq \bullet$ and $T_i^j(p) \neq \bullet$.
\end{claim}
\begin{proof}[Proof of Claim~\ref{claim ic or domain}]
Since $p \notin \ensuremath{\mathrm{Dom}}(F_i) \cup \ensuremath{\mathrm{IC}}(F_i)$, we have $p \notin \ensuremath{\mathrm{Dom}}(F_i^j)$, and so it is not the case that $F_i^j(p) = \bullet$.
Suppose $T_i^j(p) = \bullet$.
We will obtain a contradiction by deriving that $p \in \ensuremath{\mathrm{IC}}(F_i)$. Suppose $p' \gtrdot p$ and $p' \notin \ensuremath{\mathrm{Dom}}(F_i)$.
If $p' \in \ensuremath{\mathrm{Dom}}(T_i^j)$, then since $T_i^j(p) = \bullet$, we have that $T_i^j(p') \geq j$ (since it is above a $\bullet$ after the $(j-1)$st swap), and we know $j \geq m$ since $F_i^j(f) = j$ and $m$ is the least label in $F$.
Hence, by the inductive (A.1), we have $T_i^j(p') = F_i^j(p') \geq j$. Therefore, $F_i^j(p') = F_i(p')$, so $p' \in \ensuremath{\mathrm{Dom}}(F_i)$, in violation of our assumption.
Thus, $p' \notin \ensuremath{\mathrm{Dom}}(T_i^j)$.
Let $p'' > p'$.
Because the domain of $T_i^j$ is a skew shape and $p' \notin \ensuremath{\mathrm{Dom}} (T_i^j)$, we know that $p'' \notin \ensuremath{\mathrm{Dom}}(T_i^j)$.
Thus, since $\ensuremath{\mathrm{Dom}}(F_i) \subseteq \ensuremath{\mathrm{Dom}}(T_i) \subseteq \ensuremath{\mathrm{Dom}}(T_i^j)$, we have $p'' \notin \ensuremath{\mathrm{Dom}}(F_i)$.
Thus, for any $p' \gtrdot p$ with $p' \notin \ensuremath{\mathrm{Dom}}(F_i)$, we then have, for all $p'' > p'$, that $p'' \notin \ensuremath{\mathrm{Dom}}(F_i)$. Thus, $p \in \ensuremath{\mathrm{IC}}(F_i)$, which is our desired contradiction.
\end{proof}
{\bf Proof of (A.0)}:
By the inductive (A.0), $F_i^j$ and $T_i^j$ agree on numeric labels greater than or equal to $m$. Since $F_i^{j+1}$ and $T_i^{j+1}$ are just $F_i^{j}$ and $T_i^{j}$ after applying $\mathsf{Swap}_{\bullet, j}$, it suffices to consider the movement of the $j$ labels. If $j< m$, we are done. Hence, assume $j \geq m$. We know by (A.0) that, for all $f \in \mathcal{F}$, $T_i^j(f) = j$ if and only if $F_i^j(f)=j$. Hence, it suffices to show that whenever $f \in \mathcal{F}$ with $F_i^j(f) =T_i^j(f) = j$ and whenever $\hat{f} \lessdot f$, we have $F_i^j(\hat{f}) = \bullet$ if and only if $T_i^j(\hat{f}) = \bullet$.
Fix $f \in \mathcal{F}$ and $\hat{f} \lessdot f$. Suppose $T_i^j(f) = F_i^j(f) = j$. We must show $F_i^j(\hat{f}) = \bullet$ if and only if $T_i^j(\hat{f}) = \bullet$. Either $\hat{f} \in \ensuremath{\mathrm{Dom}}(F_i)$, $\hat{f} \in \ensuremath{\mathrm{IC}}(F_i)$, or $\hat{f} \notin \ensuremath{\mathrm{Dom}}(F_i) \cup \ensuremath{\mathrm{IC}}(F_i)$.
Suppose $\hat{f} \in \ensuremath{\mathrm{Dom}}(F_i)$. Then, the inductive (A.1) directly gives that $F_i^j(\hat{f}) = \bullet$ if and only if $T_i^j(\hat{f}) = \bullet$, as desired.
Suppose $\hat{f} \notin \ensuremath{\mathrm{Dom}}(F_i) \cup \ensuremath{\mathrm{IC}}(F_i)$. Then, Claim~\ref{claim ic or domain} gives that $F_i^j(\hat{f}) \neq \bullet$ and $T_i^j(\hat{f}) \neq \bullet$.
Finally, suppose $\hat{f} \in \ensuremath{\mathrm{IC}}(F_i)$. If $F_i^j(\hat{f}) = \bullet$, then Claim~\ref{claim F bullet implies T bullet or large covers} gives that $T_i^j(\hat{f}) = \bullet$. Conversely, if $T_i^j(\hat{f}) = \bullet$, then $T_i^{j+1}(f) = \bullet$ by the definition of swapping, so by definition $\hat{f} \in \theta_{i+1}^j$. Hence, $F_i^j(\hat{f}) = \bullet$ by the inductive (A.2).
{\bf Proof of (A.1)}:
Let $f \in \ensuremath{\mathrm{Dom}}(F_i)$. Then by the inductive (A.0), we have that $f \in \ensuremath{\mathrm{Dom}}(T_i)$, so either $T_i^{j+1}(f) \in \mathbb{Z}$ or $T_i^{j+1}(f) = \bullet$.
If $T_i^{j+1}(f) \in \mathbb{Z}$, then by construction of the swapping process $T_i^{j+1}(f) \geq T_i(f)$. Moreover, $T_i(f) = F_i(f) \geq m$ by the inductive (A.0). Hence, $T_i^{j+1}(f) \geq m$. Thus, we have $F_i^{j+1}(f) = T_i^{j+1}(f)$ by (A.0) for $F_i^{j+1}$ and $T_i^{j+1}$ (which has already been established at this point).
Now, suppose that $T_i^{j+1}(f) = \bullet$. If $F_i^{j+1}(f) \neq \bullet$, then $F_i^{j+1}(f) \geq m$. Hence, by (A.0) for $F_i^{j+1}$ and $T_i^{j+1}$ (which has already been established at this point), we have
\[\bullet \neq F_i^{j+1}(f) = T_i^{j+1}(f) = \bullet, \] which is a contradiction. Thus, $F_i^{j+1}(f) = \bullet$.
{\bf Proof of (A.2)}:
We have
\[
\{f \in \ensuremath{\mathrm{IC}}(F_i) : F_i^{j+1}(f) = \bullet \} = \{f \in \ensuremath{\mathrm{IC}}(F_i): F_i^j(f) = \bullet \text{ and $F_i^j(f') \neq j$ for all $f' \gtrdot f$}\}.
\]
By the inductive (A.2), this equals
\[
\{f \in \theta_{i+1}^j: \text{$F_i^j(f') \neq j$ for all $f' \gtrdot f$}\},
\]
which in turn equals
\[
\{f \in \theta_{i+1}^j: \text{$T_i^j(f') \neq j$ for all $f' \in \ensuremath{\mathrm{Dom}}(F_i)$ with $f' \gtrdot f$}\}
\]
by the inductive (A.0).
As this last set is precisely $\theta_{i+1}^{j+1}$,
this completes the proof of (A.2) and hence the induction.
As a consequence of our induction, we have $F_n^0 = T_n^0|_{\mathcal{F}} = U|_\mathcal{F}$. Hence,
\[
F \xrightarrow{\theta_1, \dots, \theta_n} U|_\mathcal{F},
\]
as desired.
\end{proof}
We now derive a few straightforward corollaries of Proposition~\ref{Theorem corresponding respects slides}. We will not use these corollaries in the sequel; however, they seem interesting to us for elucidating some of the structure of URTs among collections of related posets.
\begin{corollary}
\label{Corollary agree on filters with minimums}
Let $T$ be a skew increasing $\mathcal{P}$-tableau and let $\mathcal{F}$ be a funnel of $\mathcal{P}$.
Let $U, V$ be rectifications of $T$.
If $U|_\mathcal{F}$ is a URT in $\mathcal{F}$ and $U(\hat{0}_\mathcal{F}) = V(\hat{0}_\mathcal{F})$, then $U|_\mathcal{F} = V|_\mathcal{F}$.
\end{corollary}
\begin{proof}
By definition, $(T \rightarrow U)|_\mathcal{F} = (T \rightarrow V)|_\mathcal{F}$, since $U(\hat{0}_\mathcal{F}) = V(\hat{0}_\mathcal{F})$.
By Proposition~\ref{Theorem corresponding respects slides}, $(T \rightarrow U)|_\mathcal{F}$ rectifies to $U|_\mathcal{F}$ and $(T \rightarrow V)|_\mathcal{F}$ rectifies to $V|_\mathcal{F}$.
Since by assumption $U|_\mathcal{F}$ is a URT in $\mathcal{F}$, it follows that $U|_\mathcal{F} = V|_\mathcal{F}$.
\end{proof}
\begin{definition}
The \textbf{bottom chain} $\mathcal{C}$ of a poset $\mathcal{P}$ is the order ideal of $\mathcal{P}$ constructed as follows.
Define
\[\min(\mathcal{P}) \coloneqq
\begin{cases}
\{ \hat{0}_\mathcal{P} \}, & \text{ if $\mathcal{P}$ has a $\hat{0}_\mathcal{P}$}; \\
\emptyset, & \text{ otherwise}.
\end{cases}
\]
We construct the shapes $C_i$ recursively.
Let $C_0 \coloneqq \min(\mathcal{P})$ and let $C_{i+1} \coloneqq C_i \cup \min(\mathcal{P} \setminus C_i)$.
Finally, the bottom chain of $\mathcal{P}$ is \[\mathcal{C} \coloneqq \bigcup_{n \in \mathbb{Z}_{\geq 0}} C_n.\]
\end{definition}
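A small illustration of this construction (our own, on a hypothetical four-element "diamond" poset):

```latex
% Illustration (hypothetical poset): let P = {a, b, c, d} with a < b,
% a < c, b < d, c < d, where b and c are incomparable.
% Then C_0 = min(P) = {a}. The poset P \setminus C_0 = {b, c, d} has
% no minimum (b and c are incomparable minimal elements), so
% min(P \setminus C_0) = \emptyset and C_1 = C_0.
% Hence the bottom chain of P is {a}.
% By contrast, the bottom chain of a chain poset is the whole chain.
```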
\begin{lemma}
\label{Lemma unique rectification in bottom chain}
Let $\mathcal{C}$ be the bottom chain of a poset $\mathcal{P}$. Let $U,V$ be rectifications of a skew increasing $\mathcal{P}$-tableau $T$. Then $U|_\mathcal{C} = V|_\mathcal{C}$.
\end{lemma}
\begin{proof}
In any rectification $W$ of $T$, the labels of $W|_\mathcal{C}$ must be just the smallest numbers in the range of $T$ in increasing order.
\end{proof}
\begin{corollary}
\label{Corollary slant sum tree base case}
Let $\mathcal{P}_1, \dots, \mathcal{P}_n $ be pairwise disjoint posets with minimum elements $\hat{0}_{\mathcal{P}_k}$.
Let $\mathcal{C} = \{ c \}$ be a chain of size 1.
Construct the slant sum $\mathcal{R} \coloneqq \mathcal{C} \slantsum{c} (\mathcal{P}_1, \dots ,\mathcal{P}_n)$. For $U$ a straight-shaped increasing $\mathcal{R}$-tableau, if $U|_{\mathcal{P}_k}$ is a URT in $\mathcal{P}_k$ for each $1\leq k \leq n$, then $U$ is a URT in $\mathcal{R}$.
\end{corollary}
\begin{proof}
Suppose $U,V$ are rectifications of some skew increasing $\mathcal{R}$-tableau $T$.
Since $c$ is in the bottom chain of $\mathcal{R}$, $U|_\mathcal{C} = V|_\mathcal{C}$ by Lemma~\ref{Lemma unique rectification in bottom chain}.
Let $m \coloneqq U(c) = V(c)$.
It remains to show that $U|_{\mathcal{P}_k} = V|_{\mathcal{P}_k}$ for all $k$. Fix some $\mathcal{P}_k$. By the increasingness of $U$ and $V$, we have $U(\hat{0}_{\mathcal{P}_k}) > m$ and $V(\hat{0}_{\mathcal{P}_k}) > m$; moreover, each of $U(\hat{0}_{\mathcal{P}_k})$ and $V(\hat{0}_{\mathcal{P}_k})$ is less than every label its tableau assigns to the elements of $\mathcal{P}_k$ above $\hat{0}_{\mathcal{P}_k}$. Thus, since $\hat{0}_{\mathcal{P}_k}$ covers only $c$ and the $\mathcal{P}_k$ are pairwise disjoint funnels, it is easy to see that
\[U(\hat{0}_{\mathcal{P}_k}) = V(\hat{0}_{\mathcal{P}_k}) = \min (\ensuremath{\mathrm{Range}}(T|_{\mathcal{P}_k}) \setminus \{m\}).\]
Finally, since $U(\hat{0}_{\mathcal{P}_k}) = V(\hat{0}_{\mathcal{P}_k})$, we have $U|_{\mathcal{P}_k} = V|_{\mathcal{P}_k}$ by applying Corollary~\ref{Corollary agree on filters with minimums}.
\end{proof}
We will say a poset $\mathscr{T}$ is a \textbf{tree} if $\mathscr{T}$ has a $\hat{0}_\mathscr{T}$ and each other $x \in \mathscr{T}$ covers exactly one element (its unique parent).
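For concreteness, an example and a non-example of this notion of tree (ours, on hypothetical small posets):

```latex
% Illustration (hypothetical posets): the chain a < b < c is a tree:
% it has \hat{0} = a, and each of b and c covers exactly one element.
% The "diamond" poset {a, b, c, d} with a < b, a < c, b < d, c < d
% is NOT a tree: the top element d covers both b and c, so d has
% two parents.
```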
\begin{corollary}
Let $\mathscr{T}$ be a tree. Let $t_1, \dots, t_n$ be (not necessarily distinct) elements of $\mathscr{T}$. Let $\mathcal{P}_1, \ldots, \mathcal{P}_n$ be pairwise disjoint posets with minimum elements.
Define
\[\mathcal{R}_0 \coloneqq \mathscr{T}\]
and
\[\text{ $\mathcal{R}_{i+1} = \mathcal{R}_i \slantsum{t_{i+1}} \mathcal{P}_{i+1}$ for $i = 0, \dots, n-1$}.\]
Let $U$ be a straight-shaped increasing $\mathcal{R}_n$-tableau.
If $U|_{\mathcal{P}_k}$ is a URT in $\mathcal{P}_k$ for all $1\leq k \leq n$, then $U$ is a URT in $\mathcal{R}_n$.
\end{corollary}
\begin{proof}
By repeated application of Corollary~\ref{Corollary slant sum tree base case}.
\end{proof}
\section{Restricted rectifications and $p$-chain URTs}
\label{Section slant sum}
Most of the results in this section are fairly technical lemmas that we will need later.
\begin{definition}
Let $\mathcal{P}$ be a poset, let $\mathcal{Q}$ be a subset of $\mathcal{P}$, and let $T$ be a skew increasing $\mathcal{P}$-tableau.
Then, the \textbf{rectifications of $T$ restricted to $\mathcal{Q}$} are the elements of the set
\[\mathsf{rects}|_\mathcal{Q} (T) \coloneqq \{U|_\mathcal{Q} : U \in \mathsf{rects}(T) \}.\]
\end{definition}
\begin{lemma}\label{Lemma poset "sees" chain}
\hspace{1in} \\ \vspace{-.2in}
\begin{enumerate}
\item Let $\mathcal{P}$ be a poset, fix $p \in \mathcal{P}$, and let $k \coloneqq |\{x \in \mathcal{P}: x \leq p\}|$ be the cardinality of the principal order ideal generated by $p$. Let $\mathcal{C}$ be a chain poset of size $k$. If
\[\mathcal{R} = \mathcal{P} \slantsum{p} (\mathcal{Q}_1, \dots, \mathcal{Q}_n)\] for pairwise disjoint posets $\mathcal{Q}_1, \dots, \mathcal{Q}_n$ with minimum elements
and $T$ is a skew increasing $\mathcal{R}$-tableau, then there is a skew increasing $(\mathcal{P} \slantsum{p} \mathcal{C})$-tableau $C$ such that
\[\mathsf{rects}|_\mathcal {P} (T) = \mathsf{rects}|_\mathcal{P} (C).\]
\item More generally, if $p_1, \dots, p_n \in \mathcal{P}$ are distinct, define $k_i \coloneqq |\{x \in \mathcal{P}: x \leq p_i\}|$ and let $\mathcal{C}_i$ be the chain poset of size $k_i$. Then, if
\[\mathcal{R} = \mathcal{P} \slantsum{p_1} (\mathcal{Q}_1^1, \dots ,\mathcal{Q}_{m_1}^1) \slantsum{p_2} \dots \slantsum{p_n} (\mathcal{Q}_1^n,\dots, \mathcal{Q}^n_{m_n})\] for pairwise disjoint posets $\mathcal{Q}^1_1, \dots , \mathcal{Q}^n_{m_n}$ with minimum elements
and $T$ is a skew increasing $\mathcal{R}$-tableau, then there is a skew increasing $(\mathcal{P} \slantsum{p_1} \mathcal{C}_1 \slantsum{p_2}\dots\slantsum{p_n} \mathcal{C}_n)$-tableau $C$ such that
\[\mathsf{rects}|_\mathcal {P} (T) = \mathsf{rects}|_\mathcal{P} (C).\]
\end{enumerate}
\end{lemma}
\begin{proof}
We first prove (1).
Since there are $k$ elements weakly below $p$, at most $k$ distinct labels from $\bigcup_h \mathcal{Q}_h$ can swap into $\mathcal{P}$ during rectification of the skew increasing $\mathcal{R}$-tableau $T$.
Let $q_1 < \dots < q_m \in \mathbb{Z}_{>0}$ be the $m$ smallest labels of elements in
$T|_{\bigcup_{h}\mathcal{Q}_h},$
where $m$ is the lesser of $k$ and the number of distinct labels in $T|_{\bigcup_{h}\mathcal{Q}_h}$.
Fix a chain $\mathcal{C} = \{ c_1 <\dots < c_k\}$ of size $k$.
Define the skew increasing $\mathcal{P} \slantsum{p} \mathcal{C}$-tableau $C$ as follows:
\[C(x) =
\begin{cases}
T(x), & x \in \mathcal{P} \cap \ensuremath{\mathrm{Dom}}(T); \\
q_r, & x = c_r \text{ and } 1 \leq r \leq m.
\end{cases}
\]
We will show $\mathsf{rects}|_\mathcal {P} (T) = \mathsf{rects}|_\mathcal{P} (C)$. First, we show that $\mathsf{rects}|_\mathcal {P} (T) \subseteq \mathsf{rects}|_\mathcal{P} (C)$. To do this, suppose $T \xrightarrow{\gamma_1, \dots, \gamma_n} U$ in $\mathcal{R}$. We must show that $U|_\mathcal{P} \in \mathsf{rects}|_\mathcal{P} (C)$. We use the sequence of inner corners $\theta_i \coloneqq \gamma_i \cap \mathcal{P}$ to rectify $C$.
Write $T_i^j$ to represent $T$ \emph{after} $i$ slides and just \emph{before} the $j$th swap; that is,
\[T_i^j \coloneqq \begin{cases}
\mathsf{Swap}_{\bullet,j-1} \circ \cdots \circ \mathsf{Swap}_{\bullet,1} \circ \mathsf{AddDots}_{\gamma_{i+1}} \circ \mathsf{Slide}_{\gamma_1, \ldots, \gamma_{i}}(T), & \text{ if } j \geq 1; \\
\mathsf{Slide}_{\gamma_1, \ldots, \gamma_{i}}(T), &\text{ if } j = 0.
\end{cases} \]
Similarly, let $C_i^j$ be
\[C_i^j \coloneqq \begin{cases}
\mathsf{Swap}_{\bullet,j-1} \circ \cdots \circ \mathsf{Swap}_{\bullet,1} \circ \mathsf{AddDots}_{\theta_{i+1}} \circ \mathsf{Slide}_{\theta_1, \ldots, \theta_{i}}(C), & \text{ if } j \geq 1; \\
\mathsf{Slide}_{\theta_1, \ldots, \theta_{i}}(C), &\text{ if } j = 0.
\end{cases} \]
For the rest of this proof, we say $T_i^j$ and $C_i^j$ are {\bf similar} if they satisfy both of the following conditions:
\begin{enumerate}
\item[(S.0)] $T_i^j|_\mathcal{P} = C_i^j|_\mathcal{P}$;
\item[(S.1)] $\{T_i^j(q) \in \mathbb{Z} : q \in \bigcup_h \mathcal{Q}_h \text{ and } T_i^j(q) \leq q_m\} = \{C_i^j(c) \in \mathbb{Z}: c \in \mathcal{C} \text{ and } C_i^j(c) \leq q_m\}$.
\end{enumerate}
We show by induction that $T_i^j$ and $C_i^j$ are similar for all $i$ and $j$. In particular, this will yield $T_i^j|_\mathcal{P} = C_i^j|_\mathcal{P}$, proving that $U|_\mathcal{P} = T_n^0|_\mathcal{P} = C_n^0|_\mathcal{P} \in \mathsf{rects}|_\mathcal{P}(C)$, as desired.
By construction, we have that $T_0^0|_\mathcal{P} = T|_\mathcal{P} = C|_\mathcal{P} = C_0^0|_\mathcal{P}$, so $T_0^0$ and $C_0^0$ satisfy (S.0). Condition (S.1) for $T_0^0$ and $C_0^0$ also holds by construction.
Now, inductively assume $T_i^j$ and $C_i^j$ are similar. For concision, write $\mathbb{S}_i^j$ for the set \[\{T_i^j(q) \in \mathbb{Z} : q \in \bigcup_h \mathcal{Q}_h \text{ and } T_i^j(q) \leq q_m\} = \{C_i^j(c) \in \mathbb{Z}: c \in \mathcal{C} \text{ and } C_i^j(c) \leq q_m\}\] considered in the inductive (S.1) condition.
Let
\[M = \max \ensuremath{\mathrm{Range}} (T) \geq \max \ensuremath{\mathrm{Range}} (C).\]
\medskip
\noindent
{\sf (Case 1: $j =M+1$):}
We have that
\begin{align*}
T_{i+1}^0|_\mathcal{P} &= \mathsf{RemoveDots}(T_{i}^{M+1})|_\mathcal{P}= \mathsf{RemoveDots}(T_{i}^{M+1}|_\mathcal{P}) \\
&=\mathsf{RemoveDots}(C_{i}^{M+1}|_\mathcal{P}) = \mathsf{RemoveDots}(C_{i}^{M+1})|_\mathcal{P} \\
&= C_{i+1}^0|_\mathcal{P},
\end{align*}
so $T_{i+1}^0$ and $C_{i+1}^0$ satisfy (S.0).
Removing $\bullet$s does not affect the numerical labels in $\bigcup_h \mathcal{Q}_h$ or $\mathcal{C}$, so (S.1) for $T_{i+1}^0$ and $C_{i+1}^0$ is immediate from (S.1) for $T_{i}^{M+1}$ and $C_{i}^{M+1}$. This completes this case.
Before turning to the next case, note that if none of the weak ancestors of $p$ are skewed out of $T_i^0|_\mathcal{P} = C_i^0|_\mathcal{P}$, then (S.0) and (S.1) continue holding in perpetuity since no elements of $\bigcup_h \mathcal{Q}_h$ or $\mathcal{C}$ will be involved in any swaps. Thus, for the remaining cases, assume at least one weak ancestor of $p$ is skewed out.
Further, note that if $\mathbb{S}_i^j = \emptyset$, then since at least one weak ancestor of $p$ is skewed out, we have $\{T_i^j(q) \in \mathbb{Z} : q \in \bigcup_h \mathcal{Q}_h\} = \{C_i^j(c) \in \mathbb{Z}: c \in \mathcal{C} \} = \emptyset$ by the definition of $q_m$. Hence if $\mathbb{S}_i^j = \emptyset$, then (S.0) and (S.1) continue to hold in perpetuity, since neither tableau has labels outside of $\mathcal{P}$. Thus, for the remaining cases, we may further assume $\mathbb{S}_i^j \neq \emptyset$.
\medskip
\noindent
{\sf (Case 2: $j =0$):} First, we verify that $\theta_{i+1} \subseteq \ensuremath{\mathrm{IC}}(C_i^0)$. Let $t \in \theta_{i+1}$. Since $t \in \theta_{i+1}$, we have that $t \in \gamma_{i+1} \cap \mathcal{P}$, so $t$ is an inner corner in $T_i^0|_\mathcal{P} = C_i^0|_\mathcal{P}$. (This equality is by the inductive (S.0).) Hence, since $C$ has skewed out nodes only in $\mathcal{P}$, it follows that $t \in \ensuremath{\mathrm{IC}}(C_i^0)$. Thus, $\theta_{i+1} \subseteq \ensuremath{\mathrm{IC}}(C_i^0)$.
Since $\theta_{i+1} = \gamma_{i+1} \cap \mathcal{P}$, we have
\begin{align*}
T_i^{1}|_\mathcal{P} &= \mathsf{AddDots}_{\gamma_{i+1}} (T_i^{0})|_\mathcal{P}
= \mathsf{AddDots}_{\theta_{i+1}} (T_i^{0})|_\mathcal{P} \\
&= \mathsf{AddDots}_{\theta_{i+1}} (T_i^{0}|_\mathcal{P})
= \mathsf{AddDots}_{\theta_{i+1}} (C_i^{0}|_\mathcal{P}) \\
&= C_i^1|_\mathcal{P},
\end{align*}
proving (S.0) for $T_i^1$ and $C_i^1$.
Similarly to the previous case, since adding $\bullet$s does not affect the numerical labels in $\bigcup_h \mathcal{Q}_h$ or $\mathcal{C}$, (S.1) for $T_i^1$ and $C_i^1$ is immediate from (S.1) for $T_i^0$ and $C_i^0$.
\medskip
\noindent
{\sf (Case 3: $0 < j < M+1$):}
We show (S.0) first.
Observe that the structure of $\mathcal{R}$, $\mathcal{P} \slantsum{p} \mathcal{C}$, and $\mathcal{P}$ easily ensures that if $q \in \mathcal{P}$, then
\begin{itemize}
\item[(F.1)] for all $\hat{q} \lessdot q$ (in either $\mathcal{R}$ or $\mathcal{P} \slantsum{p} \mathcal{C}$), $\hat{q} \in \mathcal{P}$, and
\item[(F.2)] if $q \neq p$, then for all $q' \gtrdot q$ (in either $\mathcal{R}$ or $\mathcal{P} \slantsum{p} \mathcal{C}$) we have that $q' \in \mathcal{P}$.
\end{itemize}
Now (F.1), (F.2), the fact that $T_i^j|_\mathcal{P} = C_i^j|_\mathcal{P}$, and the local nature of the swapping process, together ensure that $T_i^{j+1}(q) = C_i^{j+1}(q)$ for all $q \in \mathcal{P}$ with $q \neq p$. Furthermore, if $T_i^j(p) \neq \bullet$, then (F.1) ensures $T_i^{j+1}(p) = C_i^{j+1}(p)$.
Hence, it remains to consider the situation where $T_i^j(p) =C_i^j(p)=\bullet$. Then
\begin{align*}
T_i^{j+1}(p) &= \begin{cases}
j, & \text{if $j = \min \{T_i^j(p'') \in \mathbb{Z}: p'' \in \mathcal{R}$ and $p'' > p$\}}\\
\bullet, & \text{otherwise}
\end{cases} \\
&= \begin{cases}
j, & \text{if $j = \min \{C_i^j(p'') \in \mathbb{Z}: p'' \in \mathcal{P} \slantsum{p} \mathcal{C}$ and $p'' > p$\}}\\
\bullet, & \text{otherwise}
\end{cases} \\
&=C_i^{j+1}(p).
\end{align*}
Here, the first and third equalities are by the definition of the swapping process and the increasingness of the tableaux. The second equality is because
\[
\min \text{$\{T_i^j(p'') \in \mathbb{Z}: p'' \in \mathcal{R}$ and $p'' > p$\}}
= \min \text{$\{C_i^j(p'') \in \mathbb{Z}: p'' \in \mathcal{P} \slantsum{p} \mathcal{C}$ and $p'' > p$\}},
\]
as follows from the inductive (S.0), (S.1), and the assumption that $\mathbb{S}_i^j \neq \emptyset$. This proves (S.0).
It remains to show (S.1). The swapping process only affects labels with value $j$. We already have \[\{T_i^j(q) \in \mathbb{Z} : q \in \bigcup_h \mathcal{Q}_h \text{ and } T_i^j(q) \leq q_m\} = \{C_i^j(c) \in \mathbb{Z}: c \in \mathcal{C} \text{ and } C_i^j(c) \leq q_m\},\] so it remains only to show that
\begin{align*}
j \in \{C_i^{j+1}(c) \in \mathbb{Z}: c \in \mathcal{C} &\text{ and } C_i^{j+1}(c) \leq q_m\} \\
&\Updownarrow \\
j \in \{T_i^{j+1}(q) \in \mathbb{Z} : q \in \bigcup_h \mathcal{Q}_h &\text{ and } T_i^{j+1}(q) \leq q_m\}.
\end{align*}
Either $T_i^{j+1}(p) = j$ or not.
If $T_i^{j+1}(p) = j$, then (S.0) for $T_i^{j+1}$ and $C_i^{j+1}$ (which is already established at this point) implies $C_i^{j+1}(p) =j$. Hence, the increasingness of $T_i^{j+1}$ and $C_i^{j+1}$ ensures that
\[
j \notin \{T_i^{j+1}(q) \in \mathbb{Z} : q \in \bigcup_h \mathcal{Q}_h \text{ and } T_i^{j+1}(q) \leq q_m\}
\]
and
\[
j \notin \{C_i^{j+1}(c) \in \mathbb{Z}: c \in \mathcal{C} \text{ and } C_i^{j+1}(c) \leq q_m\}.
\]
Otherwise, $T_i^{j+1}(p) \neq j$. Then, nothing could have been swapped into $p$ at this stage. Thus, since $p$ is the only element connecting $\mathcal{P}$ to $\bigcup_h \mathcal{Q}_h$ or $\mathcal{C}$, in this situation (S.1) for $T_i^{j+1}$ and $C_i^{j+1}$ is immediate from (S.1) for $T_i^j$ and $C_i^j$.
This completes the induction and shows that $\mathsf{rects}|_\mathcal {P} (T) \subseteq \mathsf{rects}|_\mathcal{P} (C)$.
To show the reverse containment $\mathsf{rects}|_\mathcal {P} (C) \subseteq \mathsf{rects}|_\mathcal{P} (T)$, we follow the same strategy, except that we first remove any skewed out nodes in $\bigcup_h \mathcal{Q}_h$. Suppose $C \xrightarrow{\theta_1, \dots, \theta_n} V$. We must find a sequence of inner corners that yields a rectification $U$ of $T$ such that $U|_\mathcal{P} = V|_\mathcal{P}$. First, we remove any skewed nodes in $\bigcup_h \mathcal{Q}_h$, as follows. Let $T_0 = T$. Recursively define
\[\alpha_{i+1} \coloneqq \ensuremath{\mathrm{IC}}(T_i) \cap (\bigcup_h \mathcal{Q}_h)\] and
\[T_{i+1} \coloneqq \mathsf{Slide}_{\alpha_{i+1}}(T_i).\]
Let $N$ be least such that $\alpha_N = \emptyset$.
Then $T_N$ has no skewed out nodes in $\bigcup_h \mathcal{Q}_h$. Finally, let
$\gamma_{i} \coloneqq \theta_i \cap \mathcal{P}$. Then $U \coloneqq \mathsf{Slide}_{\gamma_1, \dots, \gamma_n}(T_N)$ satisfies $U|_\mathcal{P} = V|_\mathcal{P}$. The proof is exactly the same as before except $C$, $V$, $T_N$, $\theta_i$, and $\gamma_i$ play the respective roles of $T$, $U$, $C$, $\gamma_i$, and $\theta_i$.
This completes the proof of (1).
The proof for (2) is by induction on $n$.
The base case $n=1$ is the previously proven statement (1). For $n>1$, let
\[
\mathcal{P}' = \mathcal{P} \slantsum{p_1} (\mathcal{Q}_1^1, \dots ,\mathcal{Q}_{m_1}^1)
\]
and let
\[
\mathcal{S} = \mathcal{P}' \slantsum{p_2} \mathcal{C}_2 \slantsum{p_3}\dots\slantsum{p_n} \mathcal{C}_n.
\]
By the inductive hypothesis, there is a skew increasing $\mathcal{S}$-tableau $T_\mathcal{S}$ such that
\[
\mathsf{rects}|_{\mathcal{P}'}(T_\mathcal{S}) = \mathsf{rects}|_{\mathcal{P}'}(T),
\]
so furthermore by restriction we have
\[
\mathsf{rects}|_{\mathcal{P}}(T_\mathcal{S}) = \mathsf{rects}|_{\mathcal{P}}(T).
\]
Observe that
\[
\mathcal{S} =
\mathcal{P}' \slantsum{p_2} \mathcal{C}_2 \slantsum{p_3}\dots\slantsum{p_n} \mathcal{C}_n = (\mathcal{P} \slantsum{p_2} \mathcal{C}_2 \slantsum{p_3}\dots\slantsum{p_n} \mathcal{C}_n) \slantsum{p_1} (\mathcal{Q}_1^1, \dots ,\mathcal{Q}_{m_1}^1).
\]
Then, by (1), there is a skew increasing $\mathcal{P} \slantsum{p_1} \mathcal{C}_1 \slantsum{p_2} \mathcal{C}_2 \slantsum{p_3}\dots\slantsum{p_n} \mathcal{C}_n$-tableau $T_\mathcal{C}$ such that
\[
\mathsf{rects}|_{(\mathcal{P} \slantsum{p_2} \mathcal{C}_2 \slantsum{p_3}\dots\slantsum{p_n} \mathcal{C}_n)}(T_\mathcal{S}) = \mathsf{rects}|_{(\mathcal{P} \slantsum{p_2} \mathcal{C}_2 \slantsum{p_3}\dots\slantsum{p_n} \mathcal{C}_n)}(T_\mathcal{C})
\]
so furthermore by restriction
\[
\mathsf{rects}|_{\mathcal{P}}(T_\mathcal{S}) = \mathsf{rects}|_{\mathcal{P}}(T_\mathcal{C}).
\]
Thus,
\[
\mathsf{rects}|_{\mathcal{P}}(T_\mathcal{C}) = \mathsf{rects}|_{\mathcal{P}}(T_\mathcal{S}) = \mathsf{rects}|_{\mathcal{P}}(T),
\]
as desired.
\end{proof}
\begin{definition}
Let $\mathcal{P}$ be a poset and fix $p \in \mathcal{P}$.
Let $U$ be a URT in $\mathcal{P}$.
Then $U$ is a \textbf{$p$-chain unique rectification target} in $\mathcal{P}$ if $U$ is a URT in $\mathcal{P} \slantsum{p} \mathcal{C}$ for every chain poset $\mathcal{C}$. More generally, $U$ is a {\bf $\{p_1,\dots,p_n\}$-chain URT} in $\mathcal{P}$ if $U$ is a URT in $\mathcal{P} \slantsum{p_1} \mathcal{C}_1 \slantsum{p_2} \cdots \slantsum{p_n} \mathcal{C}_n$ for all pairwise disjoint chains $\mathcal{C}_1, \dots , \mathcal{C}_n$.
\end{definition}
Being a $p$-chain URT is a strictly stronger notion than being a URT. For an example of a URT that is not a $p$-chain URT, see Remark~\ref{rem:URT_not_pchain}.
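To unpack the definition in the smallest nontrivial case, consider the following sketch (the two-element poset and its labels are our own illustration, not taken from the surrounding text):

```latex
% Illustration (ours): the smallest instance of the definition.
For $\mathcal{P} = \{a < b\}$ with tableau $U(a) = 1$, $U(b) = 2$, and any
chain $\mathcal{C} = \{c_1 < \dots < c_k\}$, the slant sum
\[
\mathcal{P} \slantsum{b} \mathcal{C}
\;=\; a < b < c_1 < \dots < c_k
\]
is again a chain. A skew increasing tableau on a chain has at most one
inner corner at each stage, so every slide is forced and rectification
is unique. Hence $U$ is a URT in $\mathcal{P} \slantsum{b} \mathcal{C}$
for every chain $\mathcal{C}$; that is, $U$ is a $b$-chain URT in
$\mathcal{P}$.
```

More interesting behavior only appears once the underlying poset branches, which is the subject of the results below.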
\begin{proposition}
\label{Theorem p-chain URT rectifies uniquely on restriction}
Let $\mathcal{R}$ be the slant sum
\[
\mathcal{R} \coloneqq \mathcal{P} \slantsum{p_1} (\mathcal{Q}_1^1, \dots ,\mathcal{Q}_{m_1}^1) \slantsum{p_2} \cdots \slantsum{p_n} (\mathcal{Q}_1^n,\dots, \mathcal{Q}^n_{m_n}),
\]
for $p_i$ distinct and $\mathcal{Q}_i^j$ all pairwise disjoint with minimum elements.
Let $T$ be a skew increasing $\mathcal{R}$-tableau with rectifications $U$ and $V$.
If $U|_\mathcal{P}$ is a $\{p_1,\dots,p_n\}$-chain URT in $\mathcal{P}$, then $U|_\mathcal{P} = V|_\mathcal{P}$.
\end{proposition}
\begin{proof}
By Lemma~\ref{Lemma poset "sees" chain}, there exist chain posets $\mathcal{C}_1,\dots,\mathcal{C}_n$ and a skew increasing $\mathcal{P} \slantsum{p_1}\mathcal{C}_1 \slantsum{p_2}\cdots \slantsum{p_n} \mathcal{C}_n$-tableau $T_\mathcal{C}$ such that
\[
\mathsf{rects}|_\mathcal{P} (T) = \mathsf{rects}|_\mathcal{P} (T_\mathcal{C}).
\]
Since $U|_\mathcal{P}$ is a $\{p_1,\dots,p_n\}$-chain URT, we know $|\mathsf{rects}|_\mathcal{P} (T_\mathcal{C})| = 1 $, so $|\mathsf{rects}|_\mathcal{P} (T)| = 1$.
Hence $U|_\mathcal{P} = V|_\mathcal{P}$.
\end{proof}
\begin{proposition}
\label{slant sums with AB-chain URTs}
\hspace{1in} \\ \vspace{-.2in}
\begin{enumerate}
\item Let $\mathcal{R}$ be the slant sum $\mathcal{P} \slantsum{p}\mathcal{Q}$ and let $U$ be an increasing $\mathcal{R}$-tableau of straight shape.
Suppose $A \subseteq \mathcal{P}$ and $B \subseteq \mathcal{Q}$.
If $p \in A$ and $U|_\mathcal{P}$ is an $A$-chain URT in $\mathcal{P}$ and $U|_\mathcal{Q}$ is a $B$-chain URT in $\mathcal{Q}$, then $U$ is an $(A \cup B)$-chain URT in $\mathcal{R}$.
\item More generally, let
\[ \mathcal{R} \coloneqq \mathcal{P} \slantsum{p_1} (\mathcal{Q}_1^1, \dots ,\mathcal{Q}_{m_1}^1) \slantsum{p_2} \cdots \slantsum{p_n} (\mathcal{Q}_1^n,\dots, \mathcal{Q}^n_{m_n})\] and
let $U$ be an increasing $\mathcal{R}$-tableau of straight shape.
Suppose $\{p_1,\dots, p_n\} \subseteq A \subseteq \mathcal{P}$ and $B_i^j \subseteq \mathcal{Q}_i^j$.
Set
\[D \coloneqq A \cup \left( \bigcup_{i,j} B_i^j \right).\]
If $U|_\mathcal{P}$ is an $A$-chain URT in $\mathcal{P}$ and $U|_{\mathcal{Q}_i^j}$ is a $B_i^j$-chain URT in $\mathcal{Q}_i^j$ for each $i,j$, then $U$ is a $D$-chain URT in $\mathcal{R}$.
\end{enumerate}
\end{proposition}
\begin{proof}
For simplicity, we only explicitly prove part (1). The proof of part (2) follows the same strategy.
Let $\mathcal{C}$ be a poset formed by taking $\mathcal{R}$ and slant summing chains on top of elements in $A \cup B$. We must show that $U$ is a URT in $\mathcal{C}$. Hence, suppose some skew increasing $\mathcal{C}$-tableau $C$ rectifies to $U$ and $V$. Then, we must show $U = V$.
Since $U$ is an $A$-chain URT in $\mathcal{P}$, by Proposition \ref{Theorem p-chain URT rectifies uniquely on restriction}, we have that $U|_\mathcal{P} = V|_\mathcal{P}.$
It is easy to see that since $U$ and $V$ agree on $\mathcal{P}$, they must also agree on any chain $\mathcal{C}_a$ slant summed onto an element $a \in A \subseteq \mathcal{P}$. This is because, in any such chain, the labels of $\mathcal{C}_a$ in $U$ and $V$ must be exactly those labels of $\mathcal{C}_a$ in $C$ that have values greater than $U(a) = V(a)$. By increasingness of $U$ and $V$, these are necessarily written in increasing order along $\mathcal{C}_a$ in both tableaux. Thus, $U|_{\mathcal{C}_a} = V|_{\mathcal{C}_a}$.
Let $\mathcal{Q}_C$ be the principal order filter of $\mathcal{C}$ generated by $\hat{0}_\mathcal{Q}$.
Since $\mathcal{Q}_C$ is a funnel in $\mathcal{C}$, we may consider the tableaux $(C \to U)|_{\mathcal{Q}_C}$ and $(C \to V)|_{\mathcal{Q}_C}$.
By Definition~\ref{definition corresponding}, we have $(C \to U)|_{\mathcal{Q}_C} = (C \to V)|_{\mathcal{Q}_C}$, since both tableaux are defined to be the restriction of $C$ to the set
\[\mathcal{E} \coloneqq \{q \in \mathcal{Q} : C(q) > U(p) \} = \{q \in \mathcal{Q} : C(q) > V(p) \},\] where the second equality follows by recalling $U(p) = V(p)$ (since $p \in \mathcal{P}$) and noting that $p$ is the only element of $\mathcal{C}$ covered by $\hat{0}_\mathcal{Q}$.
By Proposition~\ref{Theorem corresponding respects slides}, $U|_{\mathcal{Q}_C}$ and $V|_{\mathcal{Q}_C}$ are rectifications of $(C \to U)|_{\mathcal{Q}_C} = (C \to V)|_{\mathcal{Q}_C}$.
However, since $U|_{\mathcal{Q}}$ is a $B$-chain URT in $\mathcal{Q}$, we have that $U|_{\mathcal{Q}}$ is a URT in $\mathcal{Q}_C$. Hence $U|_{\mathcal{Q}_C} = V|_{\mathcal{Q}_C}$.
Thus, we have shown that $U(c) = V(c)$ for all $c \in \mathcal{C}$, so $U=V$, as desired.
\end{proof}
The following result follows inductively from Corollary~\ref{Corollary slant sum tree base case}; however, we prove it here as a useful demonstration of working with $p$-chain URTs, in preparation for more sophisticated uses later.
\begin{corollary}
\label{Trees everything URT}
Let $\mathscr{T}$ be a tree. Let $U$ be any increasing $\mathscr{T}$-tableau of straight shape. Then $U$ is a URT in $\mathscr{T}$.
\end{corollary}
\begin{proof}
Let $n = |\mathscr{T}|$. Define $\mathscr{T}_1 \subseteq \dots \subseteq \mathscr{T}_n$ such that for all $i$, $\mathscr{T}_i$ is an order ideal of $\mathscr{T}$ and $|\mathscr{T}_i| = i$. In particular, we have $\mathscr{T}_1 = \{\hat{0}_\mathscr{T}\}$ and $\mathscr{T}_n = \mathscr{T}$. We claim that for all $i$, $U|_{\mathscr{T}_i}$ is a $\mathscr{T}_i$-chain URT in $\mathscr{T}_i$. We work by induction on $i$.
First, we note that in any singleton poset $\mathcal{P}$, every increasing $\mathcal{P}$-tableau of straight shape is a $\mathcal{P}$-chain URT by Lemma \ref{Lemma unique rectification in bottom chain}.
Thus, since $|\mathscr{T}_1| = 1$, $U|_{\mathscr{T}_1}$ is a $\mathscr{T}_1$-chain URT in $\mathscr{T}_1$. Now suppose $i>1$. Let $t$ be the unique element in $\mathscr{T}_i \setminus \mathscr{T}_{i-1}$ and let $p$ be the unique parent of $t$ in $\mathscr{T}$. Then $\mathscr{T}_i = \mathscr{T}_{i-1} \; \slantsum{p} \{t\}$. By the inductive hypothesis, $U|_{\mathscr{T}_{i-1}}$ is a $\mathscr{T}_{i-1}$-chain URT in $\mathscr{T}_{i-1}$. Because $|\{t\}|=1$, $U|_{\{t\}}$ is a $\{t\}$-chain URT in $\{t\}$. Thus by Proposition \ref{slant sums with AB-chain URTs}, $U|_{\mathscr{T}_i}$ is a $\mathscr{T}_i$-chain URT in $\mathscr{T}_i$. This completes the induction. Hence $U|_{\mathscr{T}_n} = U$ is a URT in $\mathscr{T}_n = \mathscr{T}$.
\end{proof}
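For orientation, the filtration in the proof above can be made concrete in the smallest branching case (the specific tree and its elements are our own illustration, not taken from the surrounding text):

```latex
% Illustration (ours): \mathscr{T} has minimum \hat{0}_\mathscr{T}
% covered by two incomparable leaves x and y.
\[
\mathscr{T}_1 = \{\hat{0}_\mathscr{T}\}
\;\subseteq\;
\mathscr{T}_2 = \{\hat{0}_\mathscr{T}, x\}
\;\subseteq\;
\mathscr{T}_3 = \{\hat{0}_\mathscr{T}, x, y\} = \mathscr{T},
\]
% with each \mathscr{T}_i an order ideal of \mathscr{T},
% \mathscr{T}_2 = \mathscr{T}_1 \slantsum{\hat{0}_\mathscr{T}} \{x\}, and
% \mathscr{T}_3 = \mathscr{T}_2 \slantsum{\hat{0}_\mathscr{T}} \{y\}.
```

Each step adds a single new element as a singleton slant sum on its parent, exactly as in the induction.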
Trees are a particularly simple subfamily of the $d$-complete posets studied in this paper. Corollary~\ref{Trees everything URT} should be understood as a particularly strong version of Theorem~\ref{thm:main} for this special subfamily.
\section{Double-tailed diamonds}
\label{Section doubled tailed diamonds}
In this section, we investigate the $p$-chain unique rectification targets of certain posets, called double-tailed diamonds. This special family of $d$-complete posets plays a central role in the study of general $d$-complete posets. We will apply the results developed here to the general case in Section~\ref{Section $d$-complete posets}.
For $k \geq 3$, a {\bf double-tailed diamond} $\mathcal{D}(k)$ has $2k - 2$ elements, two of which are incomparable elements in the middle with chains of size $k-2$ above and below them. Figure~\ref{fig:double-tailed} illustrates the Hasse diagrams of some of these posets. It is easy to work out that any increasing tableau on any order ideal of a double-tailed diamond is a URT. (This is even explicitly observed in \cite[Proof of Theorem~3.12]{BS16}.) For application in Section~\ref{Section $d$-complete posets}, we need to strengthen this observation to the setting of $p$-chain URTs.
\begin{figure}[h]
\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (top) at (0,0) {};
\node [below left of=top] (left) {};
\node [below right of=top](right) {};
\node [below right of=left] (bottom) {};
\node [draw=white, below of=bottom] {$\mathcal{D}(3)$};
\draw [black, thick] (top) -- (left);
\draw [black, thick] (top) -- (right);
\draw [black, thick] (right) -- (bottom);
\draw [black, thick] (left) -- (bottom);
\end{tikzpicture}
\qquad
\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (top) at (0,0) {};
\node [above of=top] (t1) {};
\node [below left of=top] (left) {};
\node [below right of=top] (right) {};
\node [below right of=left] (bottom) {};
\node [below of=bottom] (b1) {};
\node [draw=white, below of=b1] {$\mathcal{D}(4)$};
\draw [black, thick] (top) -- (t1);
\draw [black, thick] (top) -- (left);
\draw [black, thick] (top) -- (right);
\draw [black, thick] (right) -- (bottom);
\draw [black, thick] (left) -- (bottom);
\draw [black, thick] (b1) -- (bottom);
\end{tikzpicture}
\qquad
\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node (top) at (0,0) {};
\node [above of=top] (t1) {};
\node [above of=t1] (t2) {};
\node [below left of=top] (left) {};
\node [below right of=top] (right) {};
\node [below right of=left] (bottom) {};
\node [below of=bottom] (b1) {};
\node [below of=b1] (b2) {};
\node [draw=white, below of=b2] {$\mathcal{D}(5)$};
\draw [black, thick] (t2) -- (t1);
\draw [black, thick] (top) -- (t1);
\draw [black, thick] (top) -- (left);
\draw [black, thick] (top) -- (right);
\draw [black, thick] (right) -- (bottom);
\draw [black, thick] (left) -- (bottom);
\draw [black, thick] (b1) -- (bottom);
\draw [black, thick] (b1) -- (b2);
\end{tikzpicture}
\qquad
\caption{The Hasse diagrams of the three smallest double-tailed diamonds.}\label{fig:double-tailed}
\end{figure}
To study the $p$-chain URTs of double-tailed diamonds, we introduce the \textbf{chained double-tailed diamond}, formed by slant summing a chain onto each of the two middle elements of a double-tailed diamond. We index the elements of a chained double-tailed diamond as shown in Figure~\ref{fig:chained_DTD}. We refer to the set of elements indexed $\ell_i$ as the {\bf left chain} of the poset, and those indexed $r_i$ as the {\bf right chain}. In this notation, a chained double-tailed diamond corresponds to a triple of positive integers $m,n,p \geq 1$; we denote it by $\mathcal{D}(m,n,p)$. In particular, $\mathcal{D}(k) = \mathcal{D}(1,k-2,1)$.
\begin{figure}[h]
\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node[label=right:{$t_n$}] (tn) at (0,0) {};
\node[draw=white,below of=tn] (tdots) {\vdots};
\node[label=right:{$t_2$}, below of=tdots] (t2) {};
\node[label=right:{$t_1$}, below of=t2] (t1) {};
\node[label=left:{$\ell_1$}, below left of=t1] (l1) {};
\node[label= right:{$r_1$}, below right of=t1] (r1) {};
\node[label=left:{$b_1$}, below right of=l1] (b1) {};
\node[label=left:{$b_2$}, below of=b1] (b2) {};
\node[draw=white,below of=b2] (bdots) {\vdots};
\node[label=left:{$b_n$}, below of=bdots] (bn) {};
\node[label=left:{$\ell_2$}, above left of=l1] (l2) {};
\node [draw=white, circle, above left of=l2,rotate=135] (ldots) {\Large\ldots};
\node[label=left:{$\ell_m$}, above left of=ldots] (lm) {};
\node[label=right:{$r_2$}, above right of=r1] (r2) {};
\node [draw=white,circle, above right of=r2,rotate=45] (rdots) {\Large\ldots};
\node[label=right:{$r_p$},above right of=rdots] (rp) {};
\draw [black, thick] (tn) -- (tdots);
\draw [black, thick] (t1) -- (t2);
\draw [black, thick] (tdots) -- (t2);
\draw [black, thick] (t1) -- (l1);
\draw [black, thick] (l2) -- (l1);
\draw [black, thick] (l2) -- (ldots);
\draw [black, thick] (ldots) -- (lm);
\draw [black, thick] (t1) -- (r1);
\draw [black, thick] (r2) -- (r1);
\draw [black, thick] (rdots) -- (r2);
\draw [black, thick] (rdots) -- (rp);
\draw [black, thick] (b1) -- (r1);
\draw [black, thick] (b1) -- (l1);
\draw [black, thick] (b1) -- (b2);
\draw [black, thick] (bdots) -- (bn);
\draw [black, thick] (bdots) -- (b2);
\end{tikzpicture}
\caption{Our standard indexing of the nodes of the chained double-tailed diamond poset $\mathcal{D}(m,n,p)$.}\label{fig:chained_DTD}
\end{figure}
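As a quick sanity check on this indexing, the element counts work out as follows (a routine computation, recorded for the reader's convenience):

```latex
\[
|\mathcal{D}(m,n,p)|
\;=\; \underbrace{m}_{\text{left chain}}
\;+\; \underbrace{p}_{\text{right chain}}
\;+\; \underbrace{2n}_{\text{top and bottom tails}},
\qquad
|\mathcal{D}(k)| \;=\; 2k - 2 .
\]
```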
\begin{proposition}
\label{Theorem chained double tailed diamond rectifies uniquely}
Let $T$ be a skew increasing $\mathcal{D}(m,n,p)$-tableau.
Then, $T$ rectifies uniquely.
\end{proposition}
\begin{proof}
Suppose $T$ has shape $\nu/\lambda$.
If $\{\ell_1, r_1\} \not \subseteq \lambda$, then there are no choices to be made during rectification and hence $T$ rectifies uniquely.
Thus, assume $\{\ell_1, r_1\} \subseteq \lambda$.
We can repeatedly perform slides on inner corners not equal to $\ell_1$ or $r_1$ because all the descendants of $\ell_1$ and $r_1$ are in disjoint chains and hence these slides clearly commute.
Thus, we may assume $T$ has exactly two inner corners $\ell_1$ and $r_1$. Write $I \coloneqq \{ \ell_1, r_1 \}$.
Let $s \in \mathbb{Z}$ be largest such that $t_s \in \nu$ (set $s=0$ if $t_1 \notin \nu$).
We induct on $s$.
If $s=0$, then $\nu$ is a tree, and so $T$ rectifies uniquely by Corollary~\ref{Trees everything URT}.
Assume $s \geq 1$ and that the proposition holds for smaller $s$.
\medskip
\noindent
{\sf (Case 1: $T(t_1) \neq \min \ensuremath{\mathrm{Range}}(T)$):}
Without loss of generality, we may assume $T(\ell_2) < T(t_1)$.
There are three possibilities for a nonempty $\gamma \subseteq I$: either $\gamma = \{\ell_1\}$, $\gamma = \{r_1\}$, or $\gamma = \{\ell_1, r_1\}$.
If we choose $\gamma = \{\ell_1\}$, then after $\mathsf{Slide}_{\{\ell_1\}}$ is completed, $r_1$ will be the unique inner corner; thus, the following slide is necessarily at $r_1$.
Similarly, if we choose $\gamma = \{r_1\}$, the next slide is necessarily at $\ell_1$.
By routine case analysis, one checks that
\[\mathsf{Slide}_{\{\ell_1\}, \{r_1\}}(T) = \mathsf{Slide}_{\{r_1\}, \{\ell_1\}}(T) = \mathsf{Slide}_{\{r_1,\ell_1\}}(T)\]
in any of the various cases: $T(t_1) < T(r_2),T(t_1) = T(r_2),$ or $T(t_1) > T(r_2)$.
Thus, any choice made at this first step of rectifying $T$ yields the same tableau after one or two slides. That latter tableau has a unique rectification, as there are no further choices to be made. Hence, $T$ rectifies uniquely.
\medskip
\noindent
{\sf (Case 2: $T(t_1) = \min \ensuremath{\mathrm{Range}}(T)$):}
\medskip
\noindent
{\sf (Case 2.1: $s=1$):}
Then $T$ looks like the following.
\[
\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=20pt}, every draw/.append style={black, thick},node distance=1.25cm]
\node[circle] (t1) at (0,0) {$T(t_1)$};
\node[circle, below left of=t1] (l1) {};
\node[circle, below right of=t1] (r1) {};
\node[circle, below right of=l1] (b1) {};
\node[circle, below of=b1] (b2) {};
\node[draw=white,below of=b2] (bdots) {\vdots};
\node[circle, label=left:{$b_n$}, below of=bdots] (bn) {};
\node[circle, above left of=l1] (l2) {$T(\ell_2)$};
\node [circle,draw=white,above left of=l2,rotate=135] (ldots) {\Large\ldots};
\node[circle, above left of=ldots] (lm) {$T(\ell_m)$};
\node[circle, above right of=r1] (r2) {$T(r_2)$};
\node [circle,draw=white, above right of=r2,rotate=45] (rdots) {\Large\ldots};
\node[circle, above right of=rdots] (rp) {$T(r_p)$};
\draw [black, thick] (t1) -- (l1);
\draw [black, thick] (l2) -- (l1);
\draw [black, thick] (l2) -- (ldots);
\draw [black, thick] (ldots) -- (lm);
\draw [black, thick] (t1) -- (r1);
\draw [black, thick] (r2) -- (r1);
\draw [black, thick] (rdots) -- (r2);
\draw [black, thick] (rdots) -- (rp);
\draw [black, thick] (b1) -- (r1);
\draw [black, thick] (b1) -- (l1);
\draw [black, thick] (b1) -- (b2);
\draw [black, thick] (bdots) -- (b2);
\draw [black, thick] (bdots) -- (bn);
\end{tikzpicture}
\]
Consider any rectifications $U$ and $V$ of $T$.
By Lemma~\ref{Lemma unique rectification in bottom chain}, for all $k$ we have that $U(b_k) = V(b_k)$ is the $(n-k+1)$st smallest element of $\ensuremath{\mathrm{Range}}(T)$. Thus, since $T(t_1) = \min \ensuremath{\mathrm{Range}}(T)$ by assumption, we have $U(b_n)=T(t_1)$. Hence, $t_1 \notin \ensuremath{\mathrm{Dom}}(U)$ and so $U$ looks like:
\[
\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=20pt}, every draw/.append style={black, thick},node distance=1.25cm]
\node[circle] (b1) at (0,0) {$U(b_1)$};
\node[circle, above left of=b1] (l1) {$U(\ell_1)$};
\node[circle, above right of=b1] (r1) {$U(r_1)$};
\node[circle, below of=b1] (b2) {$U(b_2)$};
\node[draw=white,below of=b2] (bdots) {\Large{\vdots}};
\node[circle, label=left:{$b_n$}, below of=bdots] (bn) {$U(b_n)$};
\node[circle, above left of=l1] (l2) {$U(\ell_2)$};
\node [circle,draw=white,above left of=l2,rotate=135] (ldots) {\Large\ldots};
\node[circle, above left of=ldots] (lm) {$U(\ell_m)$};
\node[circle, above right of=r1] (r2) {$U(r_2)$};
\node [circle,draw=white, above right of=r2,rotate=45] (rdots) {\Large\ldots};
\node[circle, above right of=rdots] (rp) {$U(r_p)$};
\draw [black, thick] (l2) -- (l1);
\draw [black, thick] (l2) -- (ldots);
\draw [black, thick] (ldots) -- (lm);
\draw [black, thick] (r2) -- (r1);
\draw [black, thick] (rdots) -- (r2);
\draw [black, thick] (rdots) -- (rp);
\draw [black, thick] (b1) -- (r1);
\draw [black, thick] (b1) -- (l1);
\draw [black, thick] (b1) -- (b2);
\draw [black, thick] (bdots) -- (b2);
\draw [black, thick] (bdots) -- (bn);
\end{tikzpicture}
\]
Consulting these pictures, one observes that
\[U(\ell_1) = \min \{T(\ell_i): 2 \leq i \leq m \text{ and } T(\ell_i) > U(b_1)\},\] and similarly for $V$.
Since $U(b_1) = V(b_1)$ by Lemma~\ref{Lemma unique rectification in bottom chain}, this means $U(\ell_1) = V(\ell_1)$.
Clearly, the labels of $U$ in the left chain of $\mathcal{D}(m,n,p)$ are exactly the labels on the left chain of $\mathcal{D}(m,n,p)$ in $T$ that are at least $U(\ell_1)$ written in increasing order. Since the same is true for $V$, we have $U(\ell_q) = V(\ell_q)$ for all $q$. The same argument shows $U(r_q) = V(r_q)$.
Thus, $U=V$ and $T$ rectifies uniquely.
\medskip
\noindent
{\sf (Case 2.2: $s \geq 2$):}
Let $U,V$ be rectifications of $T$.
As in Case 2.1, we have $U(b_n) = V(b_n) = T(t_1)$.
Let $\mathcal{Q} \coloneqq \mathcal{D}(m,n,p) \setminus \{b_n\}$. We must show $U|_\mathcal{Q} = V|_\mathcal{Q}$. Since $n \geq s \geq 2$, $\mathcal{Q}$ has a minimum and is a funnel of $\mathcal{D}(m,n,p)$. Then by Proposition~\ref{Theorem corresponding respects slides}, $U|_\mathcal{Q}$ is a rectification of $S_U \coloneqq (T \to U)|_\mathcal{Q}$ and $V|_\mathcal{Q}$ is a rectification of $S_V \coloneqq (T \to V)|_\mathcal{Q}$.
Since $U(b_n)= V(b_n) = T(t_1)$, it follows from the definition of corresponding tableaux that $S_U = S_V$. In fact, $S_U$ and $S_V$ are merely $T$ with all labels of value $T(t_1)$ deleted. Hence, write $S \coloneqq S_U = S_V$. It remains to show that $S$ rectifies uniquely in $\mathcal{Q}$.
Since $S$ is $T$ with all labels of value $T(t_1) = \min \ensuremath{\mathrm{Range}}(T)$ deleted and the inner corners of $T$ are exactly $\{\ell_1, r_1 \}$, it follows that the inner corners of $S$ are exactly those elements $q \in \mathcal{Q}$ with $T(q)=T(t_1)$. The structure of $\mathcal{Q}$ ensures that $\ell_2$ and $r_2$ are the only two nodes $q$ besides $t_1$ that could possibly have the label $T(t_1)$. Let $J \subseteq \{t_1, \ell_2, r_2\}$ be the set of inner corners of $S$. Clearly, since the various slides only affect disjoint chains, for any set partitions $(\gamma_1, \dots, \gamma_h)$ and $(\delta_1, \dots, \delta_k)$ of $J$, we have \[\mathsf{Slide}_{\gamma_h} \circ \dots \circ \mathsf{Slide}_{\gamma_1}(S) = \mathsf{Slide}_{\delta_k} \circ \dots \circ \mathsf{Slide}_{\delta_1}(S).\] Hence, without loss of generality, we may assume that we perform $\mathsf{Slide}_{\{t_1\}}$ first. That is, set $S' \coloneqq \mathsf{Slide}_{\{t_1\}}(S)$ and observe that $\mathsf{Rects}(S') = \mathsf{Rects}(S).$
Finally, we must show that $S'$ rectifies uniquely. Let the shape of $S'$ be $\eta/\theta$. Recall, $s$ is defined to be the largest integer with $t_s \in \ensuremath{\mathrm{Dom}}(T)$, so by the construction of $S$, we also have that $s$ is the largest integer with $t_s \in \ensuremath{\mathrm{Dom}}(S)$. Hence, since $S' \coloneqq \mathsf{Slide}_{\{t_1\}}(S)$, we have that $t_s \notin \ensuremath{\mathrm{Dom}}(S')$. Since $s \leq n$, this ensures that $\eta$ is an order ideal of
\[\mathcal{Q} \setminus \{t_n\} = \mathcal{D}(m,n,p) \setminus \{b_n, t_n\} = \mathcal{D}(m,n-1,p),\]
where the first equality is by the definition of $\mathcal{Q}$ and the second equality follows from $n \geq s \geq 2$.
Hence, $S'$ is a skew increasing $\mathcal{D}(m,n-1,p)$-tableau. Moreover, the largest $i$ such that $S'(t_i)$ is defined is $s-1$, so by the inductive hypothesis, $S'$ rectifies uniquely in $\mathcal{D}(m,n-1,p)$. Thus, $S$ rectifies uniquely in $\mathcal{Q}$, and so $W|_\mathcal{Q}$ is the same for all rectifications $W$ of $T$, so $T$ rectifies uniquely.
\end{proof}
\begin{corollary}
\label{Corollary chained double tailed diamond URTs}
Every increasing $\mathcal{D}(m,n,p)$-tableau of straight shape is a URT.
\end{corollary}
\begin{proof}
Immediate from Proposition~\ref{Theorem chained double tailed diamond rectifies uniquely}.
\end{proof}
\begin{corollary}
\label{everything is chain URT in DTD}
Every increasing $\mathcal{D}(n)$-tableau of straight shape is an $\{\ell_1, r_1\}$-chain URT.
\end{corollary}
\begin{proof}
Immediate from Corollary~\ref{Corollary chained double tailed diamond URTs}.
\end{proof}
Proposition~\ref{Theorem chained double tailed diamond rectifies uniquely} is a special case of the following more general conjecture, for which we have some additional experimental evidence. (For the definition of `$d$-complete', see Section~\ref{Section $d$-complete posets}.)
\begin{conjecture}\label{conj:bottom_tree}
Let $\mathcal{P}$ be a $d$-complete poset with bottom tree $\mathcal{B}$. If $T$ is a skew increasing $\mathcal{P}$-tableau with rectifications $R$ and $S$, then we have $R|_\mathcal{B} = S|_\mathcal{B}$.
\end{conjecture}
Special cases of Conjecture~\ref{conj:bottom_tree} are key lemmas in \cite{Thomas.Yong:K} and \cite{Clifford.Thomas.Yong}. These lemmas have additional combinatorial applications \cite{Thomas.Yong:Plancherel,Pechenik:frames}; Conjecture~\ref{conj:bottom_tree} might have similar applications.
\section{$d$-complete posets and minuscule posets}
\label{Section $d$-complete posets}
In this section, we recall the definition of $d$-complete posets following \cite{Proctor:JACO}, and prove our main result Theorem~\ref{thm:main} regarding slant sum trees of minuscule posets. (We use, however, the convention of \cite{Proctor:algebra} regarding the orientation of our posets; the paper \cite{Proctor:JACO} uses the opposite convention, so the posets in \cite{Proctor:JACO} are the duals of ours.) The proofs presented in this section are all straightforward, relying on the technical results of the previous sections. We also develop appropriate terminology here to give precise interpretations of Conjectures~\ref{conj:URTs} and \ref{conj:geometry}.
If $x, y \in \mathcal{P}$, the {\bf interval} $[x, y]$ is the set $\{z \in \mathcal{P}: x \leq z \leq y\}$. We call an interval $[x, y]$ in $\mathcal{P}$ a {\bf $\mathcal{Q}$-interval} if it is isomorphic to the poset $\mathcal{Q}$.
We will be especially interested in $\mathcal{D}(k)$-intervals ($k \geq 3$). Let $\mathcal{D}_0(k) \coloneqq \mathcal{D}(k) \setminus \{t\}$, where $t$ is the minimal element of $\mathcal{D}(k)$. We will also be interested in $\mathcal{D}_0(k)$-intervals ($k \geq 4$).
Examples of $\mathcal{D}_0(k)$-intervals are shown in Figure~\ref{fig:partial_DTD}; the corresponding posets $\mathcal{D}(k)$ are shown in Figure~\ref{fig:double-tailed}.
\begin{figure}[ht]
\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node [circle] (top) at (0,0) {};
\node [above of=top] (t1) {};
\node [circle,below left of=top] (left) {};
\node [circle,below right of=top] (right) {};
\node [circle,below right of=left] (bottom) {};
\node [draw=white,below of=bottom] {$\mathcal{D}_0(4)$};
\draw [black, thick] (top) -- (left);
\draw [black, thick] (top) -- (right);
\draw [black, thick] (right) -- (bottom);
\draw [black, thick] (left) -- (bottom);
\draw [black, thick] (t1) -- (top);
\end{tikzpicture}
\qquad
\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick}]
\node [circle] (top) at (0,0) {};
\node [circle, above of=top] (t1) {};
\node [circle, above of=t1] (t2) {};
\node [circle,below left of=top] (left) {};
\node [circle,below right of=top] (right) {};
\node [circle,below right of=left] (bottom) {};
\node [circle,below of=bottom] (b1) {};
\node [draw=white,below of=b1] {$\mathcal{D}_0(5)$};
\draw [black, thick] (top) -- (t1);
\draw [black, thick] (top) -- (left);
\draw [black, thick] (top) -- (right);
\draw [black, thick] (right) -- (bottom);
\draw [black, thick] (left) -- (bottom);
\draw [black, thick] (b1) -- (bottom);
\draw [black, thick] (t1) -- (t2);
\end{tikzpicture}
\caption{The Hasse diagrams of some small truncated double-tailed diamonds $\mathcal{D}_0(k)$.}\label{fig:partial_DTD}
\end{figure}
\begin{definition}
A poset $\mathcal{P}$ is {\bf $\mathcal{D}(3)$-complete} if it satisfies the following three conditions:
\begin{enumerate}
\item anytime an element $z$ covers two distinct elements $x$ and $y$, there exists a fourth
element $w$ that $x$ and $y$ both cover;
\item if $[w,z]$ is a $\mathcal{D}(3)$-interval in $\mathcal{P}$ with elements $\{w,x,y,z\}$, then $w$ is only covered by $x$ and $y$ in $\mathcal{P}$; and
\item in such a $\mathcal{D}(3)$-interval, there is no $w' \neq w$ that both $x$ and $y$ cover.
\end{enumerate}
\end{definition}
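As an illustration, condition (1) above (any two elements covered by a common $z$ must themselves cover a common $w$) can be checked mechanically on a small poset presented by its cover relations. The following Python sketch is our own illustration with hypothetical helper names, not part of the formal development.

```python
def diamond_complete(cover_pairs):
    """Check condition (1) of D(3)-completeness.

    cover_pairs: a set of pairs (a, b) meaning b covers a."""
    covered_by = {}  # z -> set of elements that z covers
    for a, b in cover_pairs:
        covered_by.setdefault(b, set()).add(a)
    for z, down in covered_by.items():
        for x in down:
            for y in down:
                # x and y are both covered by z; they must cover a common w
                if x != y and not (covered_by.get(x, set())
                                   & covered_by.get(y, set())):
                    return False
    return True

# The diamond w < x, y < z satisfies the condition; the "V" poset with
# x, y < z and no common lower cover does not.
print(diamond_complete({("w", "x"), ("w", "y"), ("x", "z"), ("y", "z")}))  # True
print(diamond_complete({("x", "z"), ("y", "z")}))  # False
```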
Let $k \geq 4$. Suppose $[x, z]$ is a $\mathcal{D}_0(k)$-interval in which $y$ is the unique element with $y \gtrdot x$.
If there is no $w \in \mathcal{P}$ with $w \lessdot x$ such that $[w,z]$ is a $\mathcal{D}(k)$-interval, then $[x, z]$ is an {\bf incomplete} $\mathcal{D}_0(k)$-interval. If there exists $x' \neq x$ with $y \gtrdot x'$ such that $[x',z]$ is also a $\mathcal{D}_0(k)$-interval, then we say that $[x, z]$ and $[x', z]$ {\bf overlap}.
\begin{definition}
For any $k \geq 4$, a poset $\mathcal{P}$ is {\bf $\mathcal{D}(k)$-complete} if it satisfies the following three conditions:
\begin{enumerate}
\item there are no incomplete $\mathcal{D}_0(k)$-intervals;
\item if $[w,z]$ is a $\mathcal{D}(k)$-interval, then $w$ is covered by only one element in $\mathcal{P}$; and
\item there are no overlapping $\mathcal{D}_0(k)$-intervals.
\end{enumerate}
A poset $\mathcal{P}$ is {\bf $d$-complete} if it is $\mathcal{D}(k)$-complete for every $k \geq 3$.
\end{definition}
Briefly, the algebraic context of $d$-complete posets is as follows. (For further details, see \cite{Stembridge:FC,Proctor:algebra,CP12}.) Let $\Lambda$ be a dominant integral weight of a Kac-Moody Lie algebra $\mathfrak{g}$ with (generally infinite) Weyl group $W$. The Weyl group element $w \in W$ is called {\bf $\Lambda$-minuscule} if it can be written as a reduced word in the simple reflections as
\[
w = s_{i_1} s_{i_2} \cdots s_{i_\ell},
\]
so that for all $j$
\[(s_{i_{j+1}} \cdots s_{i_\ell} - s_{i_j} \cdots s_{i_\ell})\Lambda = \alpha_{i_j},
\]
where $\alpha_{i_j}$ is the simple root for $s_{i_j}$. (In fact, this property is independent of the choice of reduced word \cite[Proposition~2.1]{Stembridge:minuscule}.) Now, if $w$ is $\Lambda$-minuscule, then the interval $[{\rm id}, w]$ in Bruhat order is a distributive lattice. A poset $\mathcal{P}$ is $d$-complete if and only if it is isomorphic to the poset of join irreducibles of such a `$\Lambda$-minuscule distributive lattice'; equivalently, a poset $\mathcal{Q}$ is isomorphic to a Bruhat interval $[{\rm id}, w]$ for some $\Lambda$-minuscule $w$ if and only if $\mathcal{Q}$ is isomorphic to the poset of order ideals of a $d$-complete poset. Since Bruhat order on $W$ also describes containment of Schubert varieties in the Kac-Moody homogeneous space $X = G/B$, we have for $u,v \leq w$ all $\Lambda$-minuscule that the inclusion of Schubert varieties $X_u \subseteq X_v$ is equivalent to the reverse inclusion $\lambda_v \subseteq \lambda_u$ of the corresponding order ideals in the $d$-complete poset for $w$. In addition to their algebraic relations, $d$-complete posets enjoy a number of beautiful combinatorial properties, including an analogue of the classical hook-length formula (for a full proof of this fact, see \cite{Kim.Yoo}). Figure~\ref{fig:big_poset} shows an example of a reasonably large $d$-complete poset.
Say a $d$-complete poset is {\bf irreducible} if it is not the slant sum of two $d$-complete posets.
R.~Proctor \cite{Proctor:JACO} showed that all $d$-complete posets can be uniquely decomposed as a slant sum of irreducible $d$-complete components. In this decomposition, irreducible components are only slant summed onto special nodes of other irreducible components, called \emph{acyclic nodes} \cite{Proctor:JACO}; that is, if $\mathcal{P} = \mathcal{Q} \slantsum{q}\mathcal{R}$ is $d$-complete and $\mathcal{R}$ is irreducible, then $q$ is an acyclic node of its irreducible component. (We avoid giving the somewhat technical definition of acyclic nodes, as it is sufficient for our purposes to use Proctor's explicit identification \cite{Proctor:JACO} of all acyclic nodes of all irreducible $d$-complete posets.) The irreducible $d$-complete posets are classified into $15$ (mostly infinite) families; we follow Proctor's numbering and naming conventions for these families from \cite{Proctor:JACO}. Of these $15$ families, only the components from families $1$--$9$ and $11$ have any acyclic nodes.
For a poset $\mathcal{P}$, we say an increasing $\mathcal{P}$-tableau $T$ of straight shape $\lambda \subseteq \mathcal{P}$ is {\bf minimally-labeled} if
it is minimal among all increasing $\mathcal{P}$-tableaux of shape $\lambda$ under nodewise comparison of labels; that is, if $U$ is another increasing tableau of shape $\lambda$, then $U(x) \geq T(x)$ for all $x \in \lambda$. It is easy to see that there exists a unique minimally-labeled $\mathcal{P}$-tableau of each straight shape $\lambda$. We write $M_\lambda$ for this unique tableau. The precise version of Conjecture~\ref{conj:URTs} is the following.
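For straight shapes, the minimally-labeled tableau can be computed greedily: each node receives the smallest label strictly exceeding every label below it, so minimal elements get label $1$. The Python sketch below (our own helper names, consistent with the minimality just described, but not notation from the paper) realizes this on the $2 \times 2$ rectangle $\mathcal{C}_2 \times \mathcal{C}_2$.

```python
def minimal_labeling(elements, less):
    """Return the minimally-labeled increasing tableau of shape `elements`.

    less(a, b) holds exactly when a < b in the poset."""
    label = {}
    # Process elements along a linear extension (sort by number of elements below).
    for x in sorted(elements, key=lambda x: sum(less(y, x) for y in elements)):
        below = [label[y] for y in elements if less(y, x)]
        label[x] = 1 + max(below, default=0)
    return label

# The 2x2 rectangle with componentwise order:
rect = [(a, b) for a in (1, 2) for b in (1, 2)]
lt = lambda p, q: p != q and p[0] <= q[0] and p[1] <= q[1]
print(minimal_labeling(rect, lt))
# {(1, 1): 1, (1, 2): 2, (2, 1): 2, (2, 2): 3}
```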
\begin{conjecture}\label{conj:URT_precise}
Let $\mathcal{P}$ be $d$-complete and let $\lambda \subseteq \mathcal{P}$ be an order ideal. Then, the minimally-labeled increasing $\mathcal{P}$-tableau $M_\lambda$ of shape $\lambda$ is a unique rectification target.
\end{conjecture}
In light of the slant sum structure of $d$-complete posets, Conjecture~\ref{conj:URT_precise} would follow from Proposition~\ref{slant sums with AB-chain URTs} together with information about ($p$-chain) URTs in the $15$ families of irreducible $d$-complete posets. Specifically, it remains to show that
\begin{itemize}
\item for each irreducible $d$-complete poset $\mathcal{Q}$ with acyclic nodes, $M_\lambda$ is a $p$-chain URT for each order ideal $\lambda \subseteq \mathcal{Q}$ and each acyclic node $p \in \mathcal{Q}$; and
\item for each irreducible $d$-complete poset $\mathcal{Q}$ without acyclic nodes, $M_\lambda$ is a URT for each order ideal $\lambda \subseteq \mathcal{Q}$.
\end{itemize}
Unfortunately, we are unable to establish the necessary results for some of these families; hence, we can only leverage
Proposition~\ref{slant sums with AB-chain URTs} to prove a weaker version of Conjecture~\ref{conj:URT_precise}, namely Theorem~\ref{thm:main}. First, we recall the \emph{minuscule posets}, a special subset of $d$-complete posets. Except for some trivial instances, all minuscule posets are irreducible.
Algebraically, one obtains the minuscule posets as follows. Suppose the Kac-Moody group $G$ is in fact complex reductive. Put a partial order on the positive roots $\Phi^+$ of $G$ by taking the transitive closure of the covering relation $\alpha \lessdot \beta$ if and only if $\beta - \alpha$ is a simple root. The simple root $\delta$ is a {\bf minuscule root} if for every positive root $\alpha \in \Phi^+$, the multiplicity of $\delta^\vee$ in the simple coroot expansion of $\alpha^\vee$ is at most $1$. For each minuscule root, one obtains a corresponding {\bf minuscule poset} $\mathcal{P}_\delta$ by restricting the partial order on $\Phi^+$ to those positive roots that use $\delta$ in their simple root expansion. There is also a corresponding {\bf minuscule variety} obtained as the quotient $G / P_\delta$, where $P_\delta$ is the maximal parabolic subgroup associated to the minuscule root $\delta$. The minuscule poset $\mathcal{P}_\delta$ encodes the Schubert stratification of $G / P_\delta$; specifically, the Schubert varieties are naturally indexed by the order ideals of $\mathcal{P}_\delta$, and inclusions of order ideals correspond to reverse inclusions of Schubert varieties.
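The indexing of Schubert varieties by order ideals can be made concrete by brute-force enumeration. The snippet below (our own illustration, with hypothetical helper names) counts the order ideals of the $2 \times 2$ rectangle, which is the minuscule poset of the Grassmannian $\mathrm{Gr}(2,4)$, recovering its $\binom{4}{2} = 6$ Schubert varieties.

```python
from itertools import combinations

def order_ideals(elements, less):
    """Enumerate all downward-closed subsets of the poset.

    less(a, b) holds exactly when a < b in the poset."""
    ideals = []
    for r in range(len(elements) + 1):
        for sub in combinations(elements, r):
            s = set(sub)
            # s is an order ideal iff it contains everything below its members
            if all(y in s for x in s for y in elements if less(y, x)):
                ideals.append(frozenset(s))
    return ideals

# The 2x2 rectangle with componentwise order:
rect = [(a, b) for a in (1, 2) for b in (1, 2)]
lt = lambda p, q: p != q and p[0] <= q[0] and p[1] <= q[1]
print(len(order_ideals(rect, lt)))  # 6
```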
\begin{table}[ht]
\begin{center}
\begin{tabular} {|l|l|l|}
\hline
Minuscule poset & Minuscule variety & Irreducible $d$-complete classification \\
\hline
\hline
rectangle & Grassmannian & shapes (family 1) \\
\hline shifted staircase & orthogonal Grassmannian & shifted shapes (family 2) \\
\hline double-tailed diamond & quadric hypersurface & insets (family 4--special case)\\
\hline Cayley-Moufang swivel & octonion projective plane & swivels (family 8--special case)\\
\hline bat & Freudenthal variety & bat (family 12)\\
\hline
\end{tabular}
\end{center}
\caption{The $5$ families of minuscule posets are named in the first column. The second column identifies the corresponding minuscule homogeneous space. The third column shows how the minuscule posets fall into R.~Proctor's classification of irreducible $d$-complete posets from \cite{Proctor:JACO}.}
\label{table:minuscules}
\end{table}
Combinatorially, the minuscule posets are completely classified. Minuscule posets consist of three infinite families together with a pair of exceptional examples. This classification is given in Table~\ref{table:minuscules}, with examples shown in Figure~\ref{fig:minuscule}. One infinite family of minuscule posets is the {\bf rectangles}; combinatorially, these are the products $\mathcal{C}_i
\times \mathcal{C}_j$ of two chain posets. Another infinite family is the double-tailed diamonds studied in Section~\ref{Section doubled tailed diamonds}. The final infinite family is the {\bf shifted staircases}; identifying the chain $\mathcal{C}_i$ with the natural order on $\{ 1, \dots, i\}$, shifted staircases are of the form
\[
\{ (x_1,x_2) \in \mathcal{C}_i \times \mathcal{C}_i : x_1 \geq x_2 \},
\]
with the order structure restricted from $\mathcal{C}_i \times \mathcal{C}_i$. For convenience, we will assume that shifted staircases have at least $10$ nodes, as the smaller shifted staircases coincide with small rectangles/double-tailed diamonds. Lastly, for the definitions of the exceptional {\bf Cayley-Moufang swivel} and {\bf bat}, see their Hasse diagrams depicted in the second row of Figure~\ref{fig:minuscule}. The acyclic nodes of the minuscule posets are also shown in Figure~\ref{fig:minuscule}; we will use the indexing of these nodes as $L$ and $R$, as in that figure.
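As a sanity check on this description, the shifted staircase on $\mathcal{C}_i$ has $\binom{i+1}{2}$ nodes, so the convention above excludes exactly the cases $i \leq 3$. The following Python sketch (our own helper names, not from the paper) enumerates the poset directly from the definition.

```python
def shifted_staircase(i):
    """Elements (x1, x2) of C_i x C_i with x1 >= x2, identifying C_i with {1,...,i}."""
    return [(x1, x2) for x1 in range(1, i + 1)
                     for x2 in range(1, i + 1) if x1 >= x2]

def covers(p, q):
    """For p, q in the staircase, q covers p exactly when they differ
    by 1 in a single coordinate (the order is restricted from C_i x C_i)."""
    (a1, a2), (b1, b2) = p, q
    return (b1 - a1, b2 - a2) in {(1, 0), (0, 1)}

# i = 4 is the smallest case meeting the "at least 10 nodes" convention.
print(len(shifted_staircase(4)))  # 10
```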
\begin{figure}[ht]
\begin{align*}
\Scale[0.8]{\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick},anchor=base,baseline,]
\node (a1) at (0,0) {};
\node [above left of=a1] (a2) {};
\node [draw=white, above left of=a2,rotate=135] (a3) {\Large\ldots};
\node [fill=red,above left of=a3, label=left:{L}] (a4) {};
\node [above right of=a1](b1) {};
\node [above left of=b1] (b2) {};
\node [draw=white, above left of=b2,rotate=135] (b3) {\Large\ldots};
\node [above left of=b3] (b4) {};
\node [draw=white, above right of=b1, rotate = 45](c1) {\Large\ldots };
\node [draw=white,above left of=c1, rotate = 45] (c2) {\Large\ldots};
\node [draw=white, above left of=c2, rotate = 45] (c3) {};
\node [draw=white, above left of=c3, rotate = 45] (c4) {\Large\ldots};
\node [above right of=c1, fill=red, label= right:{R}](d1) {};
\node [above left of=d1] (d2) {};
\node [draw=white, above left of=d2,rotate=135] (d3) {\Large\ldots};
\node [above left of=d3] (d4) {};
\node [draw=white, below of= a1] {\Large rectangle};
\draw (a1)--(a2);
\draw (a2)--(a3);
\draw (a3)--(a4);
\draw (b1)--(b2);
\draw (b2)--(b3);
\draw (b3)--(b4);
\draw (d1)--(d2);
\draw (d2)--(d3);
\draw (d3)--(d4);
\draw (a1)--(b1);
\draw (a2)--(b2);
\draw (a4)--(b4);
\draw (c1)--(b1);
\draw (c2)--(b2);
\draw (c4)--(b4);
\draw (c1)--(d1);
\draw (c2)--(d2);
\draw (c4)--(d4);
\end{tikzpicture}}&&
\Scale[0.8]{\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick},anchor=base,baseline,]
\node (a1) at (0,0) {};
\node [ above right of= a1] (b1) {};
\node [above right of= b1] (c1) {};
\node [draw=white,above right of= c1,rotate=45] (d1) {\Large\ldots};
\node [above right of= d1] (e1) {};
\node [fill=red,above right of= e1, label=right:{R}] (f1) {};
\node [fill=white,above left of= b1, label=left:{}] (b2) {};
\node [above left of= c1] (c2) {};
\node [above left of= c2] (c3) {};
\node [draw=white,above left of= d1,rotate=45] (d2) {\Large\ldots};
\node [draw=white,above left of= d2,rotate=45] (d3) {\Large\ldots};
\node [draw=white,above left of= d3,rotate=90] (d4) {\Large\ldots};
\node [above left of= e1] (e2) {};
\node [above left of= e2] (e3) {};
\node [draw=white,above left of= e3,rotate=135] (e4) {\Large\ldots};
\node [above left of= e4] (e5) {};
\node [above left of= f1] (f2) {};
\node [above left of= f2] (f3) {};
\node [draw=white,above left of= f3,rotate=135] (f4) {\Large\ldots};
\node [above left of= f4] (f5) {};
\node [above left of= f5] (f6) {};
\node [draw=white, below right = -0.1 and 0.7 of a1] {\Large shifted staircase};
\draw (a1) -- (b1);
\draw (b1) -- (c1);
\draw (c1) -- (d1);
\draw (d1) -- (e1);
\draw (e1) -- (f1);
\draw (b2) -- (c2);
\draw (c2) -- (d2);
\draw (d2) -- (e2);
\draw (e2) -- (f2);
\draw (c3) -- (d3);
\draw (d3) -- (e3);
\draw (e3) -- (f3);
\draw (e5) -- (f5);
\draw (b1) -- (b2);
\draw (c1) -- (c2);
\draw (c3) -- (c2);
\draw (e1) -- (e2);
\draw (e3) -- (e2);
\draw (e3) -- (e4);
\draw (e5) -- (e4);
\draw (f1) -- (f2);
\draw (f3) -- (f2);
\draw (f3) -- (f4);
\draw (f5) -- (f4);
\draw (f5) -- (f6);
\end{tikzpicture}}
&&
\Scale[0.8]{\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick},anchor=base,baseline,]
\node (b2) at(0,0) {};
\node [draw=white,above of =b2] (b1) {\Large \vdots};
\node [circle,above of=b1] (b0) {};
\node [circle,above of=b0] (bottom) {};
\node [circle,fill=red,above left of=bottom, label=left:{L}] (left) {};
\node [circle,fill=red,above right of=bottom, label=right:{R}] (right) {};
\node [circle, above right of=left] (top) {};
\node [circle,above of =top] (t0) {};
\node [draw=white,above of =t0] (t1) {\Large \vdots};
\node [circle, above of =t1] (t2) {};
\node [draw=white,below of=b2] {\Large double-tailed diamond};
\draw (t2) -- (t1);
\draw (t0) -- (t1);
\draw (top) -- (t0);
\draw (top) -- (left);
\draw (top) -- (right);
\draw (right) -- (bottom);
\draw (left) -- (bottom);
\draw (b0) -- (bottom);
\draw (b1) -- (b0);
\draw (b1) -- (b2);
\end{tikzpicture}} \\
\Scale[0.8]{\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick},anchor=base,baseline, ]
\node (a1) at (0,0) {};
\node [above left of= a1](a2) {};
\node [above left of= a2](a3) {};
\node [above left of= a3](a4) {};
\node [above left of= a4](a5) {};
\node [above right of= a3](b1) {};
\node [above left of= b1](b2) {};
\node [above left of= b2](b3) {};
\node [above right of= b2](c1) {};
\node [above left of= c1](c2) {};
\node [above left of= c2](c3) {};
\node [above right of= c1](d1) {};
\node [above left of= d1](d2) {};
\node [above left of= d2](d3) {};
\node [above left of= d3](d4) {};
\node [above left of= d4](d5) {};
\node [below left of=a1, draw=white] {\Large Cayley-Moufang swivel};
\draw (a1)--(a2);
\draw (a2)--(a3);
\draw (a3)--(a4);
\draw (a4)--(a5);
\draw (b1)--(b2);
\draw (b2)--(b3);
\draw (c1)--(c2);
\draw (c2)--(c3);
\draw (d1)--(d2);
\draw (d2)--(d3);
\draw (d3)--(d4);
\draw (d4)--(d5);
\draw (a3)--(b1);
\draw (a4)--(b2);
\draw (a5)--(b3);
\draw (c1)--(b2);
\draw (c2)--(b3);
\draw (c1)--(d1);
\draw (c2)--(d2);
\draw (c3)--(d3);
\end{tikzpicture}} &&
\Scale[0.8]{\begin{tikzpicture}[every node/.append style={circle, draw=black, inner sep=0pt, minimum size=16pt}, every draw/.append style={black, thick},anchor=base,baseline,]
\node (a1) at (0,0) {};
\node [above right of= a1](a2) {};
\node [above right of= a2](a3) {};
\node [above right of= a3](a4) {};
\node [above left of= a4](a5) {};
\node [above left of= a5](a6) {};
\node [above right of= a4](b1) {};
\node [above left of= b1](b2) {};
\node [above left of= b2](b3) {};
\node [above right of= b2](c1) {};
\node [above left of= c1](c2) {};
\node [above left of= c2](c3) {};
\node [above right of= c1](d1) {};
\node [above left of= d1](d2) {};
\node [above left of= d2](d3) {};
\node [above left of= d3](d4) {};
\node [above left of= d4](d5) {};
\node [above right of= d1](e1) {};
\node [above left of= e1](e2) {};
\node [above left of= e2](e3) {};
\node [above left of= e3](e4) {};
\node [above left of= e4](e5) {};
\node [above right of= e4](f1) {};
\node [above left of= f1](f2) {};
\node [above left of= f2](f3) {};
\node [above left of= f3](f4) {};
\node [above left of= f4](f5) {};
\node [below right = 0.2 and 0.4 of a1, draw=white] {\Large bat};
\draw (a1)--(a2);
\draw (a2)--(a3);
\draw (a3)--(a4);
\draw (a4)--(a5);
\draw (a5)--(a6);
\draw (b1)--(b2);
\draw (b2)--(b3);
\draw (c1)--(c2);
\draw (c2)--(c3);
\draw (d1)--(d2);
\draw (d2)--(d3);
\draw (d3)--(d4);
\draw (d4)--(d5);
\draw (e1)--(e2);
\draw (e2)--(e3);
\draw (e3)--(e4);
\draw (e4)--(e5);
\draw (f1)--(f2);
\draw (f2)--(f3);
\draw (f3)--(f4);
\draw (f4)--(f5);
\draw (a4)--(b1);
\draw (a5)--(b2);
\draw (a6)--(b3);
\draw (c1)--(b2);
\draw (c2)--(b3);
\draw (c1)--(d1);
\draw (c2)--(d2);
\draw (c3)--(d3);
\draw (e1)--(d1);
\draw (e2)--(d2);
\draw (e3)--(d3);
\draw (e4)--(d4);
\draw (e5)--(d5);
\draw (e4)--(f1);
\draw (e5)--(f2);
\end{tikzpicture}}
\end{align*}
\caption{Examples of the $5$ families of minuscule posets. The labeled red nodes mark the acyclic nodes of these posets. The exceptional posets of the bottom row have no acyclic nodes.}
\label{fig:minuscule}
\end{figure}
We will only use the following lemma in the case $k=1$ of rectangles; however, for possible future use, we note that it is equally true for four of Proctor's other families: \emph{birds} (family 3), \emph{tailed insets} (family 5), \emph{banners} (family 6), and \emph{nooks} (family 7).
\begin{lemma}
\label{irreducibles that make URTs into p-chain URTs}
Let $k \in \{1,3,5,6,7\}$.
Let $\mathcal{P}$ be an irreducible $d$-complete poset from family $k$ and let $A \subseteq \mathcal{P}$ be the set of all acyclic nodes in $\mathcal{P}$.
If a straight-shaped increasing $\mathcal{P}$-tableau $U$ is a URT for all posets in family $k$, then $U$ is an $A$-chain URT in all such posets.
\end{lemma}
\begin{proof}
Suppose $A = \{a_1, \dots, a_m\}$. Let $i_1, \dots, i_m$ be arbitrary positive integers.
Let $\mathcal{R}$ be the iterated slant sum $\mathcal{P} \slantsum{a_1} \mathcal{C}_{i_1} \slantsum{a_2} \dots \slantsum{a_m} \mathcal{C}_{i_m}$ of $\mathcal{P}$ with a collection of chains. Observe that $\mathcal{R}$ is an order ideal of a larger poset in the same irreducible family. Thus, $U$ is a URT in $\mathcal{R}$, as desired.
\end{proof}
\begin{theorem}[\cite{BS16}]
\label{Minimally labeled URTs in Minuscules}
Let $\mathcal{P}$ be a minuscule poset. Then, for every order ideal $\lambda \subseteq \mathcal{P}$, the minimally-labeled increasing $\mathcal{P}$-tableau $M_\lambda$ of shape $\lambda$ is a URT in $\mathcal{P}$. \qed
\end{theorem}
\begin{corollary}
\label{Grassmainian chain URT}
Let $\mathcal{P}$ be a rectangle.
Let $M_\lambda$ be a minimally-labeled $\mathcal{P}$-tableau of straight shape.
Then, $M_\lambda$ is an $\{L,R\}$-chain URT in $\mathcal{P}$.
\end{corollary}
\begin{proof}
This follows from Lemma~\ref{irreducibles that make URTs into p-chain URTs} and Theorem~\ref{Minimally labeled URTs in Minuscules}.
\end{proof}
\begin{corollary}
\label{large OG chain URT}
Let $\mathcal{P}$ be a shifted staircase with at least $10$ nodes.
If $M_\lambda$ is a minimally-labeled $\mathcal{P}$-tableau of straight shape,
then $M_\lambda$ is an $\{R\}$-chain URT in $\mathcal{P}$.
\end{corollary}
\begin{proof}
If $\mathcal{S}$ is the slant sum $\mathcal{P} \slantsum{R} \mathcal{C}_j$ of $\mathcal{P}$ with a chain, then $\mathcal{S}$ is an order ideal of a larger shifted staircase, in which minimally-labeled tableaux are URTs by Theorem~\ref{Minimally labeled URTs in Minuscules}.
\end{proof}
In order to state the following, we adopt the convention that an \emph{$\emptyset$-chain URT} in $\mathcal{P}$ is just a URT in $\mathcal{P}$.
\begin{proposition}
\label{A chain URTs for minuscules}
Let $\mathcal{P}$ be a minuscule poset. Let $A$ be the set of acyclic nodes in $\mathcal{P}$.
Let $M_\lambda$ be a minimally-labeled increasing $\mathcal{P}$-tableau of straight shape.
Then, $M_\lambda$ is an $A$-chain URT in $\mathcal{P}$.
\end{proposition}
\begin{proof}
If $\mathcal{P}$ is the Cayley-Moufang swivel or the bat, then it has no acyclic nodes, so $A = \emptyset$. Hence, in these cases, it suffices to verify that $M_\lambda$ is a URT in $\mathcal{P}$. This fact is a special case of Theorem~\ref{Minimally labeled URTs in Minuscules}.
If $\mathcal{P}$ is a rectangle, then $A = \{L,R\}$, and $M_\lambda$ is an $\{L,R\}$-chain URT in $\mathcal{P}$ by Corollary~\ref{Grassmainian chain URT}.
If $\mathcal{P}$ is a double-tailed diamond, then $A = \{L,R\}$, and $M_\lambda$ is an $\{L,R\}$-chain URT in $\mathcal{P}$ by Corollary~\ref{everything is chain URT in DTD}.
Finally, if $\mathcal{P}$ is a shifted staircase with at least $10$ nodes,
then $M_\lambda$ is an $A$-chain URT in $\mathcal{P}$ by Corollary~\ref{large OG chain URT}.
\end{proof}
Proposition~\ref{slant sums with AB-chain URTs} allows us to extend Proposition~\ref{A chain URTs for minuscules} to show that minimally-labeled tableaux are unique rectification targets in iterated slant sums of minuscule posets.
\begin{theorem}
\label{Thm slant sum of minuscule URTs}
Let $\mathcal{P}$ be a $d$-complete poset. If $\mathcal{P}$ is an iterated slant sum of minuscule posets, then all minimally-labeled increasing $\mathcal{P}$-tableaux of straight shape are unique rectification targets.
\end{theorem}
\begin{proof}
We prove the stronger statement that all
minimally-labeled increasing $\mathcal{P}$-tableaux of straight shape are $A$-chain URTs in $\mathcal{P}$, where $A$ denotes the set of acyclic nodes in $\mathcal{P}$.
We induct on the number $n$ of irreducible components in the slant sum decomposition of $\mathcal{P}$. For $\mathcal{Q}$ an irreducible component of $\mathcal{P}$, write $A_\mathcal{Q}$ for the set of acyclic nodes of $\mathcal{Q}$.
The base case, $n=1$, is provided by Proposition~\ref{A chain URTs for minuscules}.
Otherwise, $\mathcal{P}$ is the slant sum of irreducible components. One of these components contains the minimum $\hat{0}_\mathcal{P}$; call this component $\mathcal{M}$. By R.~Proctor's classification of acyclic nodes \cite{Proctor:JACO}, $\mathcal{M}$ has at most two acyclic nodes.
Then,
\[\mathcal{P} = \mathcal{M} \slantsum{L} \{ \mathcal{L}_1, \dots, \mathcal{L}_\ell \} \slantsum{R} \{\mathcal{R}_1, \dots, \mathcal{R}_r\},\]
where $L$ and $R$ are the acyclic nodes of $\mathcal{M}$ (if $L$ or $R$ is not an acyclic node, then we have $\ell = 0$ or $r=0$ respectively), and $\mathcal{L}_1, \dots, \mathcal{L}_\ell$ and $\mathcal{R}_1, \dots, \mathcal{R}_r$ are disjoint $d$-complete posets that are slant sum trees of minuscule components. Note that each $\mathcal{R}_i$ and $\mathcal{L}_j$ is a slant sum of strictly fewer than $n$ irreducible components.
Suppose $T$ is a minimally-labeled increasing $\mathcal{P}$-tableau of straight shape. Then, $T|_{\mathcal{R}_i}$ is a minimally-labeled $\mathcal{R}_i$-tableau of straight shape (modulo shifting the alphabet), so by the inductive hypothesis, $T|_{\mathcal{R}_i}$ is an $A_{\mathcal{R}_i}$-chain URT in $\mathcal{R}_i$ for all $i$. Similarly, $T|_{\mathcal{L}_i}$ is an $A_{\mathcal{L}_i}$-chain URT in $\mathcal{L}_i$ for all $i$. Finally, $T|_\mathcal{M}$ is a minimally-labeled $\mathcal{M}$-tableau of straight shape, so by the inductive hypothesis it is an $A_\mathcal{M}$-chain URT in $\mathcal{M}$. Thus, by Proposition~\ref{slant sums with AB-chain URTs}, we have that $T$ is an $A$-chain URT in $\mathcal{P}$, where $A$ is the set of acyclic nodes in $\mathcal{P}$.
\end{proof}
The following is the precise version of Theorem~\ref{thm:main}.
\begin{corollary}
\label{Cor slant sum of minuscule ideals URTs}
Let $\mathcal{P}$ be a $d$-complete poset. If $\mathcal{P}$ is an iterated slant sum of minuscule posets and $\mathcal{Q} \subseteq \mathcal{P}$ is an order ideal, then all minimally-labeled increasing $\mathcal{Q}$-tableaux of straight shape are unique rectification targets.
\end{corollary}
\begin{proof}
Let $M_\lambda$ be a minimally-labeled increasing $\mathcal{Q}$-tableau of straight shape. Since $\mathcal{Q}$ is an order ideal of $\mathcal{P}$, $M_\lambda$ is also a minimally labeled increasing $\mathcal{P}$-tableau of straight shape. Hence by Theorem~\ref{Thm slant sum of minuscule URTs}, $M_\lambda$ is a unique rectification target in $\mathcal{P}$, so it is a unique rectification target in $\mathcal{Q}$.
\end{proof}
Finally, we recall the construction necessary to make precise sense of Conjecture~\ref{conj:geometry}. Let $\mathcal{P}$ be any poset satisfying the conclusion of Conjecture~\ref{conj:URT_precise}. Then, as in \cite[\textsection 3.5]{BS16}, we construct a \emph{combinatorial $K$-theory ring} associated to $\mathcal{P}$. Let $K(\mathcal{P})$ be the free abelian group on the set of order ideals of $\mathcal{P}$. Define a product structure on $K(\mathcal{P})$ by setting
\[
\lambda \cdot \mu \coloneqq \sum_\nu t_{\lambda,\mu}^\nu \; \nu,
\]
where the Greek letters denote order ideals of $\mathcal{P}$ and
$
t_{\lambda,\mu}^\nu
$
is defined to be $(-1)^{|\nu| - |\lambda| - |\mu|}$ times the number of skew increasing $\mathcal{P}$-tableaux of shape $\nu / \lambda$ that rectify to the minimally-labeled tableau $M_\mu$. (Since $M_\mu$ is by hypothesis a URT in $\mathcal{P}$, this number is well-defined.) By \cite[Proposition~3.17]{BS16}, this product structure makes $K(\mathcal{P})$ into a commutative associative algebra with the empty order ideal as multiplicative identity. Conjecture~\ref{conj:geometry} claims then that, when $\mathcal{P}$ is $d$-complete, the structure constants of the algebra $K(\mathcal{P})$ coincide with corresponding $\Lambda$-minuscule Schubert structure constants of the $K$-theory ring $K(X)$, where $X = G/P$ is a Kac-Moody homogeneous space, $w \in W^P$ is a $\Lambda$-minuscule Weyl group element for $P$, and $\mathcal{P}$ is the poset of join irreducibles of the distributive lattice $[{\rm id}, w]$.
\section*{Acknowledgements}
The paper derives from a Summer 2017 DIMACS REU project. R.I. and M.Z. participated in this project under the direction of O.P. with funding provided by the Mathematics Department at Rutgers University. We would like to thank Anders Buch, Lazaros Gallos, and Parker Hund for their roles in organizing and running the REU program.
O.P. is grateful to Bob Proctor and Alexander Yong for inspiring conversations, and to Jake Levinson for helpful comments on exposition.
O.P. was partially supported by an NSF Mathematical Sciences Postdoctoral Research Fellowship \#1703696.
\bibliographystyle{alpha}
\nocite{*}
\section{Introduction and main results}
The parallel-flow heat exchanger, which facilitates the transfer of heat between two fluids flowing parallel to each other in a coaxial tube, is described as follows:
\begin{align}\label{delay}
\left\{
\begin{array}{lll}
\frac{\partial}{\partial t}\theta_1(t,x)=-\frac{\partial}{\partial x}\theta_1(t,x)+h_1(\theta_2(t,x)-\theta_1(t,x)), 0<x<l, t>0, & \hbox{ } \\
\frac{\partial}{\partial t}\theta_2(t,x)=-\frac{\partial}{\partial x}\theta_2(t,x)+h_2(\theta_1(t,x)-\theta_2(t,x)), 0<x<l, t>0, & \hbox{ } \\
\theta_1(t,0)=u_1(t),\theta_2(t,0)=u_2(t), t\geq 0,& \hbox{ } \\
y_1(t)=\theta_2(t-\tau,l), y_2(t)=\theta_1(t-\tau,l), t\geq\tau, & \hbox{ } \\
\theta_1(0,x)=\theta_{10}(x), \theta_2(0,x)=\theta_{20}(x),
\end{array}
\right.
\end{align}
where $l$ is the length of the two tubes, $\theta_1(t,x)$, $\theta_2(t,x)\in R$ are the temperature variations at time $t$ and at the point $x\in[0,l]$,
$h_1, h_2>0$ denote the heat exchange rates, $u_1(t)$ and $u_2(t)$ are the boundary controls, $\tau>0$ is a (known) constant time delay, and $y_1(t)$ and $y_2(t)$ are the observations, which suffer from the time delay $\tau>0$.
The author in \cite{Sano2016} proved that system (\ref{delay}) with $u_1(t)=0$ and $u_2(t)=-ky_2$ is exponentially stable whenever $k^2<\frac{h_2}{h_1}$ and $h_1l<\tau<\frac{h_2l}{k^2}$. However, the exponential stability in the cases $\tau<h_1l$ and $\tau>\frac{h_2l}{k^2}$ is unknown even when $k_1=0$; see the Conclusion of \cite{Sano2016}. {\it Motivated by this}, in this paper we use the observer- and predictor-based scheme developed by Guo and Yang \cite{Guo2009} to stabilize the equation with arbitrarily delayed observation.
Let $A=A_0+A_1$ with
$A_0=\left(
\begin{array}{cc}
-\frac{\partial}{\partial x} & 0\\
0 & -\frac{\partial}{\partial x}\\
\end{array}
\right)$, $A_1=\left(
\begin{array}{cc}
-h_1 & h_1 \\
h_2 & -h_2 \\
\end{array}
\right),$ $D(A)=\bigg\{\left(
\begin{array}{cc}
f\\
g\\
\end{array}
\right)\in H^1(0,l)\times H^1(0,l): f(0)=g(0)=0\bigg\}.$ By \cite{Chen2010}, $A$ generates a $C_0$-semigroup denoted by $\{e^{At}\}_{t\geq 0}$.
Denote by $B$ and $C$ the control operator and the observation operator, respectively, of the delay-free system ($\tau=0$) corresponding to (\ref{delay}).
The corresponding state space is $L^2(0,l)\times L^2(0,l)$; the input space and the observation space are $R^2$. We denote by $\|z\|$ the norm of $z$ on the associated Hilbert space. By \cite{Chen2010}, $B$ is admissible for $A$, that is,
for zero initial value the current state depends continuously on the $L^2$ norm of the input.
A simple computation shows that the system
\begin{align*}
\left\{
\begin{array}{ll}
\frac{\partial}{\partial t}\theta_1(t,x)=-\frac{\partial}{\partial x}\theta_1(t,x), 0<x<l, t>0 & \hbox{ } \\
\frac{\partial}{\partial t}\theta_2(t,x)=-\frac{\partial}{\partial x}\theta_2(t,x), 0<x<l, t>0 & \hbox{ } \\
\theta_1(t,0)=0,\theta_2(t,0)=0, t\geq 0,& \hbox{ } \\
\theta_1(0,x)=\theta_{10}(x), \theta_2(0,x)=\theta_{20}(x), 0<x<l,& \hbox{ }
\end{array}
\right.
\end{align*}
implies
$\int_0^l|\theta_1(t,l)|^2dt=\int_0^l|\theta_{10}(l-t)|^2dt\leq\|\theta_{10}\|^2,$ $\int_0^l|\theta_2(t,l)|^2dt=\int_0^l|\theta_{20}(l-t)|^2dt\leq\|\theta_{20}\|^2.$ Hence $C$ is admissible for $A_0$. Since $A_1$ is a bounded linear operator, it follows from \cite{Weiss1989} that $C$ is admissible for $A$.
The transfer function of the delay-free system is given by
\begin{align*}
G(s)=\frac{1}{h_1+h_2}\left(
\begin{array}{cc}
h_2e^{-sl}-h_2e^{-(h_1+h_2+s)l} & h_2e^{-(h_1+h_2+s)l}+h_1 e^{-sl} \\
h_2e^{-sl}+h_1e^{-(h_1+h_2+s)l} & -h_1e^{-(h_1+h_2+s)l}+h_1 e^{-sl} \\
\end{array}
\right)
\end{align*}
Obviously, $G(s)$ is bounded on ${\rm Re}\, s>0$. By \cite{Weiss1994}, this implies that the delay-free system corresponding to (\ref{delay}) is well-posed, that is, the current state and the output depend continuously on the initial state and the input. By the same procedure as \cite[Theorem 2.1]{Guo2009}, for system (\ref{delay}) with $\tau=0$, {\it an input belonging to $L^2(\tau,\infty)\times L^2(\tau,\infty)$ implies an output belonging to $L^2(0,\infty)\times L^2(0,\infty)$}.
This is very important, because the output is considered as an input in the observer design.
Moreover, $G(s)\rightarrow 0$ as $s\rightarrow +\infty$. This implies by \cite{Weiss1994} that $A-BKC_\Lambda$ generates a $C_0$-semigroup and $B$ is admissible for $A-BKC_\Lambda$, where $K=\left(
\begin{array}{cc}
k_1 & 0 \\
0 & k_2 \\
\end{array}
\right)
$ and $C_\Lambda$ is some extension of the operator $C$. Moreover, $A-BKC_\Lambda$ is the system operator under the feedback
$\big(u_1(t),u_2(t)\big)^T =-K\big(y_1(t),y_2(t)\big)^T$.
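The two properties of $G(s)$ used above, boundedness on ${\rm Re}\, s>0$ and decay as $s\rightarrow +\infty$, are easy to spot-check numerically from the closed-form entries. A minimal sketch, assuming illustrative values for $h_1$, $h_2$ and $l$ (these numbers are not from the text):

```python
import numpy as np

def G(s, h1=1.0, h2=2.0, l=1.0):
    # Closed-form transfer function of the delay-free exchanger,
    # entries copied from the text; h1, h2, l are illustrative values.
    c = h1 + h2
    e1 = np.exp(-s * l)          # e^{-s l}
    e2 = np.exp(-(c + s) * l)    # e^{-(h1+h2+s) l}
    return (1.0 / c) * np.array([
        [h2 * e1 - h2 * e2, h2 * e2 + h1 * e1],
        [h2 * e1 + h1 * e2, -h1 * e2 + h1 * e1],
    ])

# Every entry is a combination of decaying exponentials in s, so the
# norm stays bounded for Re s > 0 and vanishes as s -> +infinity.
norms = [np.linalg.norm(G(s)) for s in (0.1, 1.0, 10.0, 100.0)]
```

The decay of `norms` along increasing $s$ mirrors $G(s)\rightarrow 0$.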
We design the observer and predictor as follows:\\
\textbf{Observer}: Because of the existence of delay, the state
$\{\big(\theta_1(s,x),\theta_2(s,x)\big), s\in [0,t-\tau], t>\tau\}$ should be estimated from the known observation $\{\big(y_1(s+\tau),y_2(s+\tau)\big),s\in[0,t-\tau],t>\tau\}$. A Luenberger observer is designed by
\begin{align}\label{observe}
\left\{
\begin{array}{lll}
\frac{\partial}{\partial s}\hat{\theta}^+_1(s,x)=-\frac{\partial}{\partial x}\hat{\theta}^+_1(s,x)+h_1(\hat{\theta}^+_2(s,x)-\hat{\theta}^+_1(s,x)), 0<x<l, 0<s<t-\tau, & \hbox{ } \\
\frac{\partial}{\partial s}\hat{\theta}^+_2(s,x)=-\frac{\partial}{\partial x}\hat{\theta}^+_2(s,x)+h_2(\hat{\theta}^+_1(s,x)-\hat{\theta}^+_2(s,x)), 0<x<l, 0<s<t-\tau, & \hbox{ } \\
\hat{\theta}^+_1(s,0)=-k_1[\hat{\theta}^+_2(s,l)-y_1(s+\tau)]+u_1(s), s\geq 0,& \hbox{ } \\
\hat{\theta}^+_2(s,0)=-k_2[\hat{\theta}^+_1(s,l)-y_2(s+\tau)]+u_2(s), s\geq 0,& \hbox{ } \\
\hat{\theta}^+_1(0,x)=\hat{\theta}_{10}(x),\hat{\theta}^+_2(0,x)=\hat{\theta}_{20}(x),& \hbox{ }
\end{array}
\right.
\end{align}
where $\hat{\theta}_{10}(x)$ and $\hat{\theta}_{20}(x)$ are the (arbitrarily assigned)
initial state of the observer. System (\ref{observe}) can be written in the form $\dot{z}(s)=(A-BKC_\Lambda)z(s)+B\big[\big(u_1(s),u_2(s)\big)^T-K\big(y_1(s+\tau),y_2(s+\tau)\big)^T\big]$. The
admissibility of $B$ for $A-BKC_\Lambda$ implies that the current state of system (\ref{observe}) depends continuously on the initial state and the $L^2$ norms of $\big(u_1(\cdot),u_2(\cdot)\big)^T$ and $\big(y_1(\cdot+\tau),y_2(\cdot+\tau)\big)^T$.\\
\textbf{Predictor}: Predict $\{\big(\theta_1(s,x),\theta_2(s,x)\big),\ s\in(t-\tau,t],t>\tau\}$ by $\{\big(\hat{\theta}^{-}_1(s,t,x),\hat{\theta}^{-}_2(s,t,x)\big),\ s\in(t-\tau,t],t>\tau\}$.
For this purpose, we solve (\ref{delay}) with the estimated initial values $\big(\hat{\theta}^+_1(t-\tau,x),\hat{\theta}^+_2(t-\tau,x)\big)$ obtained from (\ref{observe}):
\begin{align}\label{observer}
\left\{
\begin{array}{lll}
\frac{\partial}{\partial s}\hat{\theta}^{-}_1(s,t,x)=-\frac{\partial}{\partial x}\hat{\theta}^{-}_1(s,t,x)+h_1(\hat{\theta}^{-}_2(s,t,x)-\hat{\theta}^{-}_1(s,t,x)), 0<x<l, t-\tau<s<t,& \hbox{ } \\
\frac{\partial}{\partial s}\hat{\theta}^{-}_2(s,t,x)=-\frac{\partial}{\partial x}\hat{\theta}^{-}_2(s,t,x)+h_2(\hat{\theta}^{-}_1(s,t,x)-\hat{\theta}^{-}_2(s,t,x)), 0<x<l, t-\tau<s<t, & \hbox{ } \\
\hat{\theta}^{-}_1(s,t,0)=u_1(s), t-\tau\leq s\leq t,& \hbox{ } \\
\hat{\theta}^{-}_2(s,t,0)=u_2(s), t-\tau\leq s\leq t,& \hbox{ } \\
\hat{\theta}^{-}_1(t-\tau,t,x)=\hat{\theta}^+_1(t-\tau,x),\hat{\theta}^{-}_2(t-\tau,t,x)=\hat{\theta}^+_2(t-\tau,x),0\leq x\leq l. & \hbox{ }
\end{array}
\right.
\end{align}
With the above {\it observer} and {\it predictor}, we design the {\it estimated state feedback control law} by
\begin{align*}
u_1=\left\{
\begin{array}{ll}
-k_1 \hat{\theta}^{-}_2(t,t,l), t>\tau,& \hbox{} \\
0, 0\leq t\leq \tau, & \hbox{}
\end{array}
\right.
u_2=\left\{
\begin{array}{ll}
-k_2 \hat{\theta}^{-}_1(t,t,l), t>\tau, & \hbox{} \\
0, 0\leq t\leq \tau. & \hbox{}
\end{array}
\right.
\end{align*}
Denote $\varepsilon^+_1(s,x)=\hat{\theta}^+_1(s,x)-\theta_1(s,x), \varepsilon^+_2(s,x)=\hat{\theta}^+_2(s,x)-\theta_2(s,x),\ 0\leq s\leq t-\tau;$ $\varepsilon^{-}_1(s,t,x)=\hat{\theta}^{-}_1(s,t,x)-\theta_1(s,x), \varepsilon^{-}_2(s,t,x)=\hat{\theta}^{-}_2(s,t,x)-\theta_2(s,x), t-\tau\leq s\leq t$.
The closed-loop system is transformed into the following partial differential equations:
\begin{align}\label{del}
\left\{
\begin{array}{lll}
\frac{\partial}{\partial t}\theta_1(t,x)=-\frac{\partial}{\partial x}\theta_1(t,x)+h_1(\theta_2(t,x)-\theta_1(t,x)), 0<x<l, t>\tau, & \hbox{ } \\
\frac{\partial}{\partial t}\theta_2(t,x)=-\frac{\partial}{\partial x}\theta_2(t,x)+h_2(\theta_1(t,x)-\theta_2(t,x)), 0<x<l, t>\tau, & \hbox{ } \\
\theta_1(t,0)=-k_1\varepsilon^{-}_2(t,t,l)-k_1\theta_2(t,l), t>\tau,& \hbox{ } \\
\theta_2(t,0)=-k_2\varepsilon^{-}_1(t,t,l)-k_2\theta_1(t,l), t>\tau,& \hbox{ }
\end{array}
\right.
\end{align}
\begin{align}\label{efuu}
\left\{
\begin{array}{lll}
\frac{\partial}{\partial s}\varepsilon^+_1(s,x)=-\frac{\partial}{\partial x}\varepsilon^+_1(s,x)+h_1(\varepsilon^+_2(s,x)-\varepsilon^+_1(s,x)), 0<x<l, 0<s<t-\tau, t>\tau, & \hbox{ } \\
\frac{\partial}{\partial s}\varepsilon^+_2(s,x)=-\frac{\partial}{\partial x}\varepsilon^+_2(s,x)+h_2(\varepsilon^+_1(s,x)-\varepsilon^+_2(s,x)), 0<x<l, 0<s<t-\tau, t>\tau, & \hbox{ } \\
\varepsilon^+_1(s,0)=-k_1\varepsilon^+_2(s,l), 0\leq s\leq t-\tau, t>\tau,& \hbox{ } \\
\varepsilon^+_2(s,0)=-k_2\varepsilon^+_1(s,l), 0\leq s\leq t-\tau, t>\tau,& \hbox{ } \\
\varepsilon^+_1(0,x)=\hat{\theta}_{10}(x)-\theta_{10}(x),\varepsilon^+_2(0,x)=\hat{\theta}_{20}(x)-\theta_{20}(x), 0\leq x\leq l,& \hbox{ }
\end{array}
\right.
\end{align}
\begin{align}\label{observer12}
\left\{
\begin{array}{lll}
\frac{\partial}{\partial s}\varepsilon^{-}_1(s,t,x)=-\frac{\partial}{\partial x}\varepsilon^{-}_1(s,t,x)+h_1(\varepsilon^{-}_2(s,t,x)-\varepsilon^{-}_1(s,t,x)), 0<x<l, t-\tau<s<t, t>\tau, & \hbox{ } \\
\frac{\partial}{\partial s}\varepsilon^{-}_2(s,t,x)=-\frac{\partial}{\partial x}\varepsilon^{-}_2(s,t,x)+h_2(\varepsilon^{-}_1(s,t,x)-\varepsilon^{-}_2(s,t,x)), 0<x<l, t-\tau<s<t, t>\tau, & \hbox{ } \\
\varepsilon^{-}_1(s,t,0)=0, t-\tau\leq s\leq t,t>\tau, & \hbox{ } \\
\varepsilon^{-}_2(s,t,0)=0, t-\tau\leq s\leq t,t>\tau, & \hbox{ } \\
\varepsilon^{-}_1(t-\tau,t,x)=\varepsilon^+_1(t-\tau,x),\varepsilon^{-}_2(t-\tau,t,x)=\varepsilon^+_2(t-\tau,x), 0\leq x\leq l,t>\tau.& \hbox{ }
\end{array}
\right.
\end{align}
Obviously, the system operator of (\ref{efuu}) is $A-BKC_\Lambda$. By the same procedure as in \cite[Example 2]{Villegas2009}, we can easily prove that there exist positive constants $M$ and $\gamma_0$ such that
\begin{align}\label{ABKC}
\|e^{(A-BKC_\Lambda)t}\|\leq Me^{-\gamma_0 t},\ \forall \ t\geq 0,
\end{align}
provided $k_1^2<\frac{h_1}{h_2}, k_2^2<\frac{h_2}{h_1}$ hold.
This implies that our observer (\ref{observe}) converges and that the inverse operator of $A-BKC_\Lambda$, denoted by $(A-BKC_\Lambda)^{-1}$, exists.
Our main result is stated in the following theorem.
\begin{theorem}
Assume that $0<k_1<\sqrt{\frac{h_1}{h_2}},\ 0<k_2<\sqrt{\frac{h_2}{h_1}}$ and $t>\tau$. Then,
for $\tau>l$, system (\ref{del}) decays exponentially for any
initial value; for $0<\tau\leq l$ and
$\left(
\begin{array}{c}
w_1 \\
w_2\\
\end{array}
\right)=\left(
\begin{array}{c}
\hat{\theta}_{10}-\theta_{10} \\
\hat{\theta}_{20}-\theta_{20}\\
\end{array}
\right)
\in D(A-BKC_\Lambda),$ system (\ref{del}) decays exponentially in the sense that
\begin{align*}
\bigg\|\left(
\begin{array}{c}
\theta_1(t,\cdot) \\
\theta_2(t,\cdot) \\
\end{array}
\right)\bigg\|
\leq D_\tau e^{-\gamma t} \bigg[\bigg\|\left(
\begin{array}{c}
\theta_{10}(\cdot) \\
\theta_{20}(\cdot) \\
\end{array}
\right)\bigg\|
+C_\tau^0
\bigg\|
(A-BKC_\Lambda)\left(
\begin{array}{c}
w_1 \\
w_2\\
\end{array}
\right)\bigg\|\bigg],
\end{align*}
where $D_\tau$ and $C_\tau^0$ are positive constants independent of $t$, and $\gamma$ is any given constant with $0<\gamma<\gamma_0$.
\end{theorem}
\section{The proof of Theorem 1.1}
\textbf{Proof.} System (\ref{observer12}) tells us $\left(
\begin{array}{c}
\varepsilon^{-}_1(t,t,\cdot) \\
\varepsilon^{-}_2(t,t,\cdot) \\
\end{array}
\right)=e^{A\tau}
\left(
\begin{array}{c}
\varepsilon^{+}_1(t-\tau,\cdot) \\
\varepsilon^{+}_2(t-\tau,\cdot) \\
\end{array}
\right).$ By simple computation, we obtain
\begin{align}\label{aaaaa}
\left\{
\begin{array}{lll}
\varepsilon^{-}_1(t,t,x)=&\frac{1}{h_1+h_2}\big[
(h_2+h_1e^{-(h_1+h_2)\tau})\varepsilon^{+}_1(t-\tau,x-\tau)+h_1(1-e^{-(h_1+h_2)\tau})\varepsilon^{+}_2(t-\tau,x-\tau)\big]\\
\varepsilon^{-}_2(t,t,x)=& \frac{1}{h_1+h_2}\big[
h_2(1-e^{-(h_1+h_2)\tau})\varepsilon^{+}_1(t-\tau,x-\tau)+(h_1+h_2e^{-(h_1+h_2)\tau})\varepsilon^{+}_2(t-\tau,x-\tau)\big]
\end{array}
\right.
\end{align}
provided $x\geq \tau$, and $\varepsilon^{-}_1(t,t,x)=\varepsilon^{-}_2(t,t,x)=0$ provided $x< \tau$. This implies that, for $\tau>l$, $\varepsilon^{-}_1(t,t,l)=\varepsilon^{-}_2(t,t,l)=0.$
Accordingly, system (\ref{del}) is of the form $\dot{z}(t)=(A-BKC_\Lambda)z(t), t>\tau$, and is therefore exponentially stable for any initial value.
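As a sanity check on (\ref{aaaaa}): along characteristics the transport terms drop out and only the coupling matrix $A_1$ acts, so the coefficient matrix above should equal the matrix exponential $e^{A_1\tau}$. A minimal numerical sketch, with illustrative values of $h_1$, $h_2$, $\tau$ (not taken from the text):

```python
import numpy as np
from scipy.linalg import expm

h1, h2, tau = 1.3, 0.7, 0.5             # illustrative values
c = h1 + h2
A1 = np.array([[-h1, h1], [h2, -h2]])   # coupling part of the generator

# Coefficient matrix appearing in the formula for
# (eps_1^-(t,t,x), eps_2^-(t,t,x)) when x >= tau:
E = (1.0 / c) * np.array([
    [h2 + h1 * np.exp(-c * tau), h1 * (1 - np.exp(-c * tau))],
    [h2 * (1 - np.exp(-c * tau)), h1 + h2 * np.exp(-c * tau)],
])

# Should agree with e^{A1 tau} up to floating-point error.
err = np.linalg.norm(expm(A1 * tau) - E)
```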
Below we consider the case $0<\tau\leq l$ and
$\left(
\begin{array}{c}
w_1 \\
w_2\\
\end{array}
\right)=\left(
\begin{array}{c}
\hat{\theta}_{10}-\theta_{10} \\
\hat{\theta}_{20}-\theta_{20}\\
\end{array}
\right)
\in D(A-BKC_\Lambda)$. In such case, (\ref{aaaaa}) indicates $\varepsilon^{-}_1(t,t,l)=\frac{1}{h_1+h_2}\bigg[
h_2\varepsilon^{+}_1(t-\tau,l-\tau)+h_1\varepsilon^{+}_2(t-\tau,l-\tau)
+e^{-(h_1+h_2)\tau}\big(h_1\varepsilon^{+}_1(t-\tau,l-\tau)-h_1\varepsilon^{+}_2(t-\tau,l-\tau)\big)\bigg],$
$\varepsilon^{-}_2(t,t,l)=\frac{1}{h_1+h_2}\bigg[
h_2\varepsilon^{+}_1(t-\tau,l-\tau)+h_1\varepsilon^{+}_2(t-\tau,l-\tau)
+e^{-(h_1+h_2)\tau}\big(-h_2\varepsilon^{+}_1(t-\tau,l-\tau)+h_2\varepsilon^{+}_2(t-\tau,l-\tau)\big)\bigg].$
Hence there exist positive constants $p_1,p_2,q_1,q_2$ independent of $t$ such that
\begin{align}\label{ttl}
\left\{
\begin{array}{ll}
|\varepsilon^{-}_1(t,t,l)|\leq p_1|\varepsilon^{+}_1(t-\tau,l-\tau)|+p_2|\varepsilon^{+}_2(t-\tau,l-\tau)|, & \hbox{} \\
|\varepsilon^{-}_2(t,t,l)|\leq q_1|\varepsilon^{+}_1(t-\tau,l-\tau)|+q_2|\varepsilon^{+}_2(t-\tau,l-\tau)|. & \hbox{}
\end{array}
\right.
\end{align}
Denote
$\left(
\begin{array}{c}
\psi_1(t,\tau,\cdot) \\
\psi_2(t,\tau,\cdot) \\
\end{array}
\right)=e^{(A-BKC_\Lambda)(t-\tau)}
(A-BKC_\Lambda)\left(
\begin{array}{c}
w_1 \\
w_2\\
\end{array}
\right).$
Then, system (\ref{efuu}) tells us that
$\left(
\begin{array}{c}
\varepsilon^{+}_1(t-\tau,\cdot) \\
\varepsilon^{+}_2(t-\tau,\cdot) \\
\end{array}
\right)=(A-BKC_\Lambda)^{-1}\left(
\begin{array}{c}
\psi_1(t,\tau,\cdot) \\
\psi_2(t,\tau,\cdot) \\
\end{array}
\right)$ and it is of the following form
\begin{align*}
\varepsilon^{+}_1(t-\tau,x)
=&\frac{1}{h_1+h_2}\int_0^x\bigg[h_1e^{(h_1+h_2)(\sigma-x)}\big(\psi_2(t,\tau,\sigma)-\psi_1(t,\tau,\sigma)\big)
\\
&-h_2\psi_1(t,\tau,\sigma)-h_1\psi_2(t,\tau,\sigma)\bigg]d\sigma +\int_0^l\big(\alpha_1+\alpha_2e^{(h_1+h_2)\sigma}\big)\psi_1(t,\tau,\sigma)d\sigma \\
& +\int_0^l\big(\alpha_3+\alpha_4e^{(h_1+h_2)\sigma}\big)\psi_2(t,\tau,\sigma)d\sigma,\\
\varepsilon^{+}_2(t-\tau,x)
=&\frac{-1}{h_1+h_2}\int_0^x\bigg[h_2e^{(h_1+h_2)(\sigma-x)}\big(\psi_2(t,\tau,\sigma)-\psi_1(t,\tau,\sigma)\big)\\
&+h_2\psi_1(t,\tau,\sigma)+h_1\psi_2(t,\tau,\sigma) \bigg]d\sigma +\int_0^l\big(\beta_1+\beta_2e^{(h_1+h_2)\sigma}\big)\psi_1(t,\tau,\sigma)d\sigma \\
&+\int_0^l\big(\beta_3+\beta_4e^{(h_1+h_2)\sigma}\big)\psi_2(t,\tau,\sigma)d\sigma,
\end{align*}
where $\alpha_j,\beta_j$, $j=1,2,3,4$, are nonnegative constants independent of $t$.
Accordingly, there exist positive constants $m_1,m_2,n_1,n_2$ independent of $t$ such that
\begin{align}\label{tl}
\left\{
\begin{array}{ll}
|\varepsilon^{+}_1(t-\tau,l-\tau)|\leq m_1\|\psi_1(t,\tau,\cdot)\|+m_2\|\psi_2(t,\tau,\cdot)\|, & \hbox{} \\
|\varepsilon^{+}_2(t-\tau,l-\tau)|\leq n_1\|\psi_1(t,\tau,\cdot)\|+
n_2\|\psi_2(t,\tau,\cdot)\|. & \hbox{}
\end{array}
\right.
\end{align}
Combining (\ref{ttl}) and (\ref{tl}), we get
$|\varepsilon^{-}_1(t,t,l)|\leq (p_1m_1+p_2n_1)\|\psi_1(t,\tau,\cdot)\|+(p_1m_2+p_2n_2)\|\psi_2(t,\tau,\cdot)\|$ and
$|\varepsilon^{-}_2(t,t,l)|\leq (q_1m_1+q_2n_1)\|\psi_1(t,\tau,\cdot)\|+(q_1m_2+q_2n_2)\|\psi_2(t,\tau,\cdot)\|.$
Use (\ref{ABKC}) to obtain
\begin{align}\label{eq}
\nonumber \bigg| \left(
\begin{array}{c}
\varepsilon^{-}_1(t,t,l) \\
\varepsilon^{-}_2(t,t,l) \\
\end{array}
\right)\bigg|\leq M_1\bigg\|\left(
\begin{array}{c}
\psi_1(t,\tau,\cdot) \\
\psi_2(t,\tau,\cdot) \\
\end{array}
\right)\bigg\|\
=&M_1\bigg\|e^{(A-BKC_\Lambda)(t-\tau)}
(A-BKC_\Lambda)\left(
\begin{array}{c}
w_1 \\
w_2\\
\end{array}
\right)\bigg\|\\
&\leq M_1Me^{-\gamma_0(t-\tau)}\bigg\|
(A-BKC_\Lambda)\left(
\begin{array}{c}
w_1 \\
w_2\\
\end{array}
\right)\bigg\|,
\end{align}
where $M_1$ is a positive constant depending on $p_1,p_2,q_1,q_2,m_1,m_2,n_1,n_2$.
The abstract form of system (\ref{del}) is described as follows
\begin{align*}
\frac{d}{dt}\left(
\begin{array}{c}
\theta_1(t,\cdot) \\
\theta_2(t,\cdot) \\
\end{array}
\right)=
(A-BKC_\Lambda)\left(
\begin{array}{c}
\theta_1(t,\cdot) \\
\theta_2(t,\cdot) \\
\end{array}
\right)+B\left(\begin{array}{c}
\varepsilon^{-}_1(t,t,l) \\
\varepsilon^{-}_2(t,t,l) \\
\end{array}
\right).
\end{align*}
Let $0<\gamma<\gamma_0$ and $Y(t)=e^{\gamma t}\left(
\begin{array}{c}
\theta_1(t,\cdot) \\
\theta_2(t,\cdot) \\
\end{array}
\right)$. We obtain
$$\dot{Y}(t)=(\gamma+A-BKC_\Lambda)Y(t)+Be^{\gamma t}\left(\begin{array}{c}
\varepsilon^{-}_1(t,t,l) \\
\varepsilon^{-}_2(t,t,l) \\
\end{array}
\right), t>\tau $$
with $\{e^{(\gamma+A-BKC_\Lambda)s}\}_{s\geq 0}$ being an exponentially stable $C_0$-semigroup.
Since $B$ is admissible for $A-BKC_\Lambda$, $B$ is also admissible for $\gamma+A-BKC_\Lambda$; see \cite{Weiss1994}. Combining the admissibility, the exponential stability of the semigroup $\{e^{(\gamma+A-BKC_\Lambda)s}\}_{s\geq 0}$, and (\ref{eq}), we obtain
\begin{align*}
\|Y(t)\|=&\bigg\|e^{(\gamma+A-BKC_\Lambda)(t-\tau)}Y(\tau)+\int_\tau^t e^{(\gamma+A-BKC_\Lambda)(t-s)}Be^{\gamma s} \left(\begin{array}{c}
\varepsilon^{-}_1(s,s,l) \\
\varepsilon^{-}_2(s,s,l) \\
\end{array}
\right)ds\bigg\| \\
\leq & C_\tau \|Y(\tau)\|+\bigg(\int_\tau^t\bigg\|e^{\gamma s}\left(\begin{array}{c}
\varepsilon^{-}_1(s,s,l) \\
\varepsilon^{-}_2(s,s,l) \\
\end{array}
\right)\bigg\|^2ds\bigg)^{\frac{1}{2}}\\
\leq & C_\tau \|Y(\tau)\|+MM_2\bigg(\int_\tau^t|e^{\gamma s}e^{-\gamma_0(s-\tau)}|^2ds\bigg)^{\frac{1}{2}}
\bigg\|
(A-BKC_\Lambda)\left(
\begin{array}{c}
w_1 \\
w_2\\
\end{array}
\right)\bigg\|,
\end{align*}
where $C_\tau$ and $M_2$ are positive constants independent of $t$.
Hence
\begin{align*}
\bigg\|\left(
\begin{array}{c}
\theta_1(t,\cdot) \\
\theta_2(t,\cdot) \\
\end{array}
\right)\bigg\|
\leq D_\tau e^{-\gamma t} \bigg[\bigg\|\left(
\begin{array}{c}
\theta_{10}(\cdot) \\
\theta_{20}(\cdot) \\
\end{array}
\right)\bigg\|
+C_\tau^0
\bigg\|
(A-BKC_\Lambda)\left(
\begin{array}{c}
w_1 \\
w_2\\
\end{array}
\right)\bigg\|\bigg],
\end{align*}
where $D_\tau=C_\tau e^{\gamma\tau}\|e^{A\tau}\|$ and $C_\tau^0=\frac{MM_2}{C_\tau\sqrt{\gamma_0-\gamma}\|e^{A\tau}\|}$. The proof is therefore completed.
\section{Introduction}
In this paper, we consider a family of $N$ linear multi-variable systems governed by the equations
\begin{equation}\label{eqsys}
\Sigma_i:\begin{cases}
\dot{x}_i(t) = A_ix_i(t) + B_iu_i(t) + E_i w(t), \; \;\;x_i(0)=x_{i,0} \\
y_i(t) = C_{y,i}x_i(t) + D_{y,i}u_i(t) + H_{y,i}w(t) \\
e_i(t) = C_{e,i}x_i(t) + D_{e,i}u_i(t) + H_{e,i}w(t)
\end{cases}
\end{equation}
where, for all $t \ge 0$, the signal $x_i(t)\in {\mathbb{R}}^{n_i}$ is the state,
$u_i(t)\in {\mathbb{R}}^{m_i}$ is the control input,
$y_i(t)\in {\mathbb{R}}^{p_i}$ is the measured output, and
$e_i(t)\in {\mathbb{R}}^{\rho_i}$ is the regulated output of the $i$-th system, for $i \in \{1, \dots, N\}$.
The exogenous signal $w(t)\in {\mathbb{R}}^{q}$ represents a reference signal to be tracked or a disturbance signal to be rejected, and is assumed to be generated by an exosystem
\begin{equation} \label{exos}
\dot w = Sw, \quad w(0) = w_0
\end{equation}
All matrices appearing in \eqref{eqsys} are constant matrices of appropriate dimensions. We assume the $N$ agents are divided into two groups. The first, the {\it informed group}, consists of the systems $\Sigma_i$, for $i \in \{1,\dots,l\}$,
that can access information about $w$ from the measured output $y_i$, which implies $H_{y,i} \neq 0$. The second, the {\it uninformed group}, consists of the systems $\Sigma_i$, for $i \in \{l+1, \dots, N\}$, for which $H_{y,i} = 0$; these cannot directly access information about $w$.
The problem of cooperative output regulation for multi-agent systems involves designing control inputs $u_i$ such that the overall system is asymptotically stable for the case $w = 0$, and such that the tracking errors $e_i$ all converge to zero, ensuring the outputs of all the agents converge asymptotically to the desired reference signal. For the special case of a single system ($N=1$), with access to measurements of the exogenous signal, the problem reduces to the classic problem of output feedback regulation. This problem is central to modern control theory. Solvability conditions and extensive compilations of results are given in \cite{Saberi-SS-00}. It is assumed that the measured output $y_i$ is available for controller design.
The problem of output regulation of multi-agent systems has been the subject of a number of papers recently \cite{HGCH}-\cite{SH12b}. As some of the agents cannot access the exogenous signal, the problem cannot be solved by the methods of classical output regulation.
In \cite{SH12a}, Su and Huang considered the system (\ref{eqsys}) under the assumption that all states of each system can be measured and are available for use in the control input; this occurs when $p_i = n_i$.
They proposed a distributed dynamic state feedback control scheme and gave conditions under which the multi-agent cooperative regulation problem could be solved. They showed that their problem framework and controller architecture could accommodate the methods of \cite{HGCH} and \cite{XWL} as special cases. In \cite{SH12b}, Su and Huang extended the state feedback methods of \cite{SH12a} to the case where $p_i < n_i$ using a distributed dynamic measurement feedback control architecture.
For many control systems there is a need to avoid undesirable transient phenomena such as high-frequency oscillations and large magnitudes of the output \cite{GL96}. For a multi-agent example system, we may consider the lateral and directional control of a research aircraft known as MuPAL-$\alpha$. The flight dynamics of this aircraft were described in \cite{S09}, and \cite{YW13} considered the control of four such aircraft within a network. The control objective was for all the aircraft to simultaneously track a given sideways velocity and a given roll angle. Exceeding the desired sideways velocity in a platoon may cause some aircraft to fly too close together, and possibly collide. If an aircraft exceeds its desired roll angle, its flight may become unstable and the aircraft may crash.
Thus a desirable transient response should seek to minimise, or else avoid entirely, overshoot in the tracking signal. The problem of overshoot is related to the problem of string stability for automated platoons of vehicles \cite{PWN14,MBL18}. Such platoons are usually assumed to be subject to disturbances which should be rejected. Moreover, one of the objectives for them is to track a reference velocity. Obviously, if any of the vehicles in the platoon overshoot in their velocity, collisions might occur.
Numerous papers have appeared recently seeking to improve the transient performance in the tracking control of multi-agent systems, including the use of consensus protocols \cite{PKS17}-\cite{MKD17}, composite nonlinear feedback control \cite{LSY16}, travelling waves \cite{MHS17}, iterative learning control \cite{YXL16}
and transient synchronization \cite{SWA16}. We note however that none of these papers offered a method for entirely avoiding overshoot in all outputs for all the agents.
The design of control laws to achieve a nonovershooting step response for a single linear time invariant (LTI) plant was considered in the paper \cite{SN10} by the first author of the present paper. Several methods were given for the design of a linear state feedback control law to deliver a nonovershooting step response for an LTI multiple-input multiple-output (MIMO) system. This requires the closed-loop system to be stable, and that the tracking error of the step response converges to zero without changing sign in any of its components. In \cite{SN12}, the methods were adapted to the problem of avoiding undershoot in the step response, and in \cite{SN14} the methods were used to achieve nonovershooting output regulation. The design methods of \cite{SN10} and \cite{SN12} have been incorporated into a public domain MATLAB$^{\textrm{\tiny{\textregistered}}}$ toolbox, known as \textbf{NOUS} \cite{PS12}.
In this paper, we consider how to combine the nonovershooting tracking control methods of \cite{SN10} with the distributed control scheme of \cite{SH12b} to solve the multi-agent cooperative output regulation problem in such a manner that all agents achieve exact output regulation with a nonovershooting transient response. The principal contribution of the paper is to identify the necessary system assumptions and information required in order for the control scheme to deliver a nonovershooting response. The authors believe that this is the first paper offering a control scheme to avoid overshoot in all outputs of all agents of a multi-agent system.
The paper is organised as follows. In Section \ref{secmath}, we introduce some elementary notions from graph theory that enable us to define our multi-agent problem. In Section \ref{secpf}, we introduce the dynamic measurement output feedback control architecture introduced by \cite{SH12b}, and define our nonovershooting cooperative output regulation problem. In Section \ref{secnous}, we briefly discuss the nonovershooting controller design methods of \cite{SN10}. The main result of the paper is presented in Section \ref{secps}, where we show how the methods of \cite{SN10} can be employed within the controller architecture of \cite{SH12b} to solve our problem.
Section \ref{secex} demonstrates the application of the control method to the lateral and directional control of a network of research aircraft known as MuPAL-$\alpha$, as discussed in \cite{YW13}. Our simulations demonstrate that the methods introduced in this paper can effectively avoid overshoot in all the outputs of all the agents involved in the flight simulation. Finally Section \ref{secconc} offers some concluding thoughts.
{\bf Notation.} $I_n$ is the $n$-dimensional identity matrix, and ${\bf 0}_{N\times l}$ denotes an $N \times l$ matrix with zero entries.
For a square matrix $A$, we use $\rho(A)$ to denote its spectrum. We say that a square matrix $A$ is {\it Hurwitz} if $\rho(A)$ lies within the open left half of the complex plane.
$Re(\lambda)$ denotes the real part of a complex scalar $\lambda$, and
$\otimes$ denotes the Kronecker product of matrices.
\section{Mathematical Preliminaries}
\label{secmath}
\subsection{Graph Theory} \label{secgraph}
Graph theory \cite{graph2} has been widely used to describe the topology of networked systems by means of vertices and edges. Let ${\cal G}({\cal V},{\cal E}, a)$ denote a {\it weighted digraph}, in which ${\cal V}$ is the finite set of nodes, ${\cal E}$ is the set of directed edges, and $a$ represents the set of weights for each edge. Directed edges have a head node and a tail node. We use
$(j,i) $ to denote the edge in ${\cal E}$ directed from tail node $j$ to head node $i$, and $a_{ij}$ denotes the weighting assigned to this edge. For node $i \in {\cal V}$, we use ${\cal N}_i$ to denote all nodes $j\in {\cal V}$ for which there exists an edge from tail node $j$ to head node $i$. Thus
\begin{equation}
{\cal N}_i = \{ j \in {\cal V}: (j,i)\in {\cal E} \}
\end{equation}
We refer to the nodes in ${\cal N}_i$ as the {\it neighbours} of node $i$. A digraph has a {\it spanning tree} if there exists at least one node having a directed path to all the other nodes. The {\it in-degree} of a node, denoted by $d_{in}(i) $, is the sum of the weights of the edges with heads at that node, and is given by
\begin{equation}
d_{in}(i) =\sum_{j \in {\cal N}_i} \ a_{ij}
\end{equation}
The {\it degree matrix} of a digraph is a diagonal matrix ${\cal D}$, whose diagonal entries are the in-degrees of the nodes of the digraph from which it is derived.
The weighted {\it adjacency matrix} ${\cal A}$ for a digraph has entries ${\cal A}_{ij}$ given by
\begin{equation} \label{eq:adj}
{\cal A}_{ij}=\begin{cases}
a_{ij}, & (j,i)\in {\cal E} \\
0, & \mbox{ otherwise}
\end{cases}
\end{equation}
The information contained within the degree and adjacency matrices of a graph may also be captured within a single matrix known as the {\it Laplacian matrix}, which is defined as
\begin{equation} \label{eq:laplace}
{\cal L}={\cal D}-{\cal A}
\end{equation}
The $N$ systems of \eqref{eqsys} with the exosystem \eqref{exos} can be viewed as a leader-follower multi-agent system of $N+1$ agents with the exosystem as its leader. To model such systems with graphs, we consider a digraph ${{\cal G}}$ with nodes ${\cal V} = \{0,1,\dots,N\}$ in which node $0$ represents the exosystem and the remaining nodes represent the $N$ agents. The set of edges ${\cal E}$ represents the information available to the $i$-th agent for the design of its control law $u_i$. Thus if $(0,2) \in {\cal E}$, then agent 2 is able to see the state $w$ of the exosystem, and $a_{20} =1$. If $(3,2) \notin {\cal E}$, then agent 2 is not able to see the state $x_3$ of agent 3, and $a_{23}=0$.
\begin{lemma} \cite{SH12b} \label{SHlem1}
Let ${\cal G}$ be a digraph with Laplacian ${\cal L}$, and partition ${\cal L}$ according to
\begin{equation}
{\cal L} = \left[ \begin{array}{c|cc} {\bf 0}_{1\times 1} & {\bf 0}_{1\times l} & {\bf 0}_{1\times (N-l)} \\
\hline
{\cal L}_{21} & {\cal L}_{22} & {\cal L}_{23} \\
{\cal L}_{31} & {\cal L}_{32} & {\cal L}_{33}
\end{array} \right]
\end{equation}
where ${\cal L}_{22} \in {\mathbb{R}}^{l \times l}$ and ${\cal L}_{33} \in {\mathbb{R}}^{(N-l) \times (N-l)}$. Then ${\cal L}_{33}$ is nonsingular if and only if ${\cal G}$ contains a directed spanning tree with node 0 as the root. If ${\cal L}_{33}$ is nonsingular, then all its eigenvalues have positive real parts.
\end{lemma}
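Lemma~\ref{SHlem1} can be illustrated numerically on a small leader-follower digraph; the specific graph below (one informed agent, two uninformed agents reached along a path from the leader) is an assumed example, not taken from the text:

```python
import numpy as np

# Digraph on nodes {0,1,2,3}: node 0 is the exosystem (leader); agent 1
# is informed (l = 1); agents 2 and 3 are uninformed and receive
# information along the path 1 -> 2 -> 3. All weights are 1.
A = np.zeros((4, 4))
for j, i in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = 1.0
L = np.diag(A.sum(axis=1)) - A

l = 1
L33 = L[1 + l:, 1 + l:]          # block for the uninformed agents
eigs = np.linalg.eigvals(L33)

# Node 0 roots a directed spanning tree, so L33 should be nonsingular
# with all eigenvalues having positive real parts.
```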
\subsection{Exponentially decaying sinusoids. }
Our analysis will require some discussion of the properties of exponentially decaying sinusoids.
\begin{definition} For any positive integer $n $, let $\{\mu_i : i \in \{1, \dots, n\}\}$, $\{\omega_i: i \in \{1, \dots, n\}\}$, $\{\alpha_i: i \in \{1,\dots, n\}\}$ and $\{\beta_i: i \in \{1, \dots, n\}\}$ be sets of real numbers such that for all $i \in \{1,\dots, n\}$ we have $\mu_i < 0$ and $\omega_i \geq 0$. Let $f: {\mathbb{R}} \rightarrow {\mathbb{R}}$ be given by
\begin{equation} \label{EDS}
f(t) = \sum_{i=1}^n \ e^{\mu_i t} [\alpha_i \sin(\omega_i t) + \beta_i \cos(\omega_i t) ]
\end{equation}
Also let $\mu <0 $ be given by
\begin{equation}
\mu = \max\{\mu_i: i \in \{1, \dots, n\}\}
\end{equation}
We say that the scalar function $f$ is the sum of exponentially decaying sinusoidal (SEDS) functions with rate $\mu$. If $v: {\mathbb{R}} \rightarrow {\mathbb{R}}^m $ is a vector-valued function with $v(t) = [v_1(t) \dots v_m(t)]^T$, and each component $v_j $ is a SEDS function of rate $\mu_j <0$, then we say that $v$ is a SEDS function with rate $\mu = \max\{\mu_j: j \in \{1, \dots, m\}\}$.
If $f$ is such that $\omega_i = 0$ for all $ i \in \{1, \dots, n\}$, then we say that $f$ is the sum of exponentially decaying (SED) functions.
\end{definition}
We note some straightforward properties of SEDS functions; proofs are given in the Appendix.
\begin{lemma}\label{lem61}
Let $f_1: {\mathbb{R}} \rightarrow {\mathbb{R}}$ and $f_2: {\mathbb{R}} \rightarrow {\mathbb{R}}$ be SEDS functions with rates $\mu_1<0$ and $\mu_2 <0$ respectively. Then $f_1+f_2$ and $f_1f_2$ are SEDS functions with rates $\mu = \max\{\mu_1,\mu_2\}$ and $\mu = \mu_1 + \mu_2$, respectively.
\end{lemma}
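As a one-term illustration of the product case of Lemma \ref{lem61} (the pair of factors below is our own example), the product-to-sum identity gives
\begin{equation*}
e^{\mu_1 t} \sin(\omega_1 t) \, e^{\mu_2 t} \sin(\omega_2 t)
= \tfrac{1}{2} e^{(\mu_1+\mu_2)t} \big[ \cos((\omega_1-\omega_2)t) - \cos((\omega_1+\omega_2)t) \big]
\end{equation*}
which is again a SEDS function, now with rate $\mu_1 + \mu_2$.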
\begin{lemma} \label{lem62}
Consider the linear system
\begin{eqnarray}
\label{nomsys}
\begin{array}{rcl}
\dot{x}(t) \hspace{-1mm}&\hspace{-1mm} = \hspace{-1mm}&\hspace{-1mm} A x(t) + B u(t) , \quad x(0)= x_{0} \\
y(t) \hspace{-1mm}&\hspace{-1mm} = \hspace{-1mm}&\hspace{-1mm} C x(t) + D u(t)
\end{array}
\end{eqnarray}
where $A$ is Hurwitz. Let $\lambda_0 = \max\{Re(\lambda): \lambda \in \rho(A)\}$.
\begin{enumerate}
\item For any $x_0$, the zero input solution $x$ and zero input response $y$ arising from the input $u$ with $u(t) = 0$ for all $t \geq 0$ are SEDS functions with rate $\lambda_0$.
\item If the input $u$ is a SEDS function with rate $\mu$, then the zero state response $y$ arising from $x_0=0$ is a SEDS function with rate $\mu$.
\end{enumerate}
\end{lemma}
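The first claim of the lemma can be checked numerically; the matrix $A$ and initial state below are illustrative assumptions, not data from the paper:

```python
import numpy as np

# Illustrative (assumed) Hurwitz A with eigenvalues -1 and -2 +- 3j;
# the zero-input solution is then a SEDS function with rate -1.
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.0, -2.0,  3.0],
              [ 0.0, -3.0, -2.0]])
x0 = np.array([1.0, 1.0, -1.0])

lam, V = np.linalg.eig(A)
def x(t):
    # x(t) = V exp(Lambda t) V^{-1} x0 via the eigendecomposition
    return (V @ (np.exp(lam * t) * np.linalg.solve(V, x0))).real

lam0 = lam.real.max()   # decay rate of the SEDS solution
# |x(t)| is bounded by a constant times e^{lam0 t}
print(np.linalg.norm(x(5.0)) <= 10.0 * np.linalg.norm(x(0.0)) * np.exp(lam0 * 5.0))
```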
\begin{lemma} \label{lem63}
Let $f:{\mathbb{R}} \rightarrow {\mathbb{R}}$ be a SEDS function of the form (\ref{EDS}) with rate $\mu$, and for some positive integer $m$, let $g:{\mathbb{R}} \rightarrow {\mathbb{R}} $ be a SED function given by
\begin{equation}
g(t) = \sum_{i=1}^m \ \beta_i e^{\lambda_i t}
\end{equation}
where $\{\lambda_1, \dots, \lambda_m\}$ are distinct negative real numbers satisfying $\mu < \lambda_j$ for all $j \in \{1,\dots, m\}$, and $\{\beta_1, \dots, \beta_m\}$ are arbitrary real numbers.
Assume $g(t) \neq 0$ for all $ t\geq 0$. Then there exists a positive real number $\delta$ such that $ g(t) + \delta f(t) \neq 0$ for all $ t \geq 0$.
\end{lemma}
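A numerical sketch of the lemma's conclusion, with illustrative (assumed) choices of $g$, $f$ and $\delta$:

```python
import numpy as np

# Illustrative (assumed) instance: g decays strictly slower than f, so
# |f(t)| <= e^{-2t} <= e^{-t} = g(t), and any 0 < delta < 1 keeps the
# sum bounded away from zero on t >= 0.
t = np.linspace(0.0, 50.0, 20001)
g = np.exp(-t)                            # SED term with rate -1, never zero
f = np.exp(-2.0 * t) * np.sin(5.0 * t)    # SEDS term with faster rate -2

delta = 0.5
print(np.all(g + delta * f > 0))          # expect True
```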
\label{secintro}
\section{Problem formulation}
\label{secpf}
Su and Huang in \cite{SH12b} stated their {\it linear cooperative output regulation problem} as
\begin{problem} \label{P1}
For the system (\ref{eqsys})-(\ref{exos}) with digraph $ {\cal G}$, find suitable control laws $u_i$ of the form (\ref{ulaw1})-(\ref{ulaw3}) for each agent such that
\begin{enumerate}
\item The system matrix of the overall closed loop system is Hurwitz;
\item For any initial condition $x_{i,0}$, $\xi_{i,0}$, $\eta_{i,0}$ with $i \in \{ 1, \dots, N\}$ and $w_0$, the regulated output of the $i$-th agent achieves
\begin{equation}
\lim_{t \rightarrow \infty} \ e_i(t) = 0, \quad i \in \{1, \dots, N\}
\end{equation}
\end{enumerate}
\end{problem}
In this paper, we consider an extension of this problem, and seek control laws to achieve output regulation without overshoot in all components of the tracking error, for all agents.
Since overshoot occurs when the regulated output changes sign, we use $e_{i,j}(t)$ to denote the $j$-th regulated output component of the $i$-th agent and define our {\it linear cooperative nonovershooting output regulation problem} as follows
\begin{problem}\label{P2}
For the system (\ref{eqsys})-(\ref{exos}) with digraph ${\cal G}$ and initial conditions $x_i(0)$ and $w(0)$, find suitable linear control laws $u_i$ for each agent that solve Problem \ref{P1} and also ensure that
$e_{i}(t) \rightarrow 0$ without changing sign in any component, i.e., $\mbox{sgn}(e_{i,j}(t))$ is constant for all $t \geq 0$, for every $j \in \{1, \dots, \rho_i\}$ and for every $i \in \{1, \dots, N\}$.
\end{problem}
Next we discuss the distributed controller given in \cite{SH12b} to solve Problem \ref{P1}, and then we review the nonovershooting tracking control methods of \cite{SN10} that we will use to extend the controller of \cite{SH12b} to additionally solve Problem \ref{P2}.
\subsection{Distributed dynamic measurement output feedback control }
Su and Huang \cite{SH12b} made the following assumptions for each system $\Sigma_i$ in (\ref{eqsys})-(\ref{exos}):
\begin{itemize}
\item[(A.1)] \label{A1} The matrix $S$ has no eigenvalues with negative real parts.
\item[(A.2)] The pairs $(A_i, B_i)$ are stabilizable, for all $i \in \{1, \dots, N\}$.
\item[(A.3)] For every $i \in \{1, \dots, N\}$, there exist matrices $\Gamma_i$ and $\Pi_i$ satisfying
\begin{eqnarray}
\Pi_i\,S \hspace{-1mm}&\hspace{-1mm} = \hspace{-1mm}&\hspace{-1mm} A_i\,\Pi_i+B_i\,\Gamma_i+E_i \label{Pi1}\\
0 \hspace{-1mm}&\hspace{-1mm} = \hspace{-1mm}&\hspace{-1mm} C_{e,i}\,\Pi_i+D_{e,i}\,\Gamma_i+H_{e,i} \label{CPi2}
\end{eqnarray}
\item[(A.4)] The pairs $\Big( \left[ \begin{array}{cc} C_{y,i} & H_{y,i} \end{array} \right], \left[ \begin{array}{cc} A_i & E_i \\ 0 & S \end{array} \right] \Big)$ are detectable, for every
$i \in \{1, \dots, l\}$.
\item[(A.5)] The pairs $(C_{y,i}, A_i) $ are detectable, for every $i \in \{l+1, \dots,N\}$.
\item[(A.6)] \label{A6} The digraph $ {\cal G}$ contains a directed spanning tree with node $0$ as its root.
\end{itemize}
\begin{remark}
Assumptions (A.1)-(A.4) are standard in the output regulation literature \cite{Saberi-SS-00}, and are sufficient for the existence of a measurement feedback controller that can detect both the plant state $x_i$ and the exosystem state $w$, for the informed agents
$i \in \{1, \dots, l\}$. For the uninformed agents $i \in \{l+1, \dots, N\}$, (A.5) means that the plant state $x_i$ is detectable from the measurement output $ y_{i}$, but the exogenous signal $w$ is not detectable from $y_{i}$ because $H_{y,i} = 0$. Hence Problem \ref{P1} cannot be solved by a decentralized measurement feedback control law. \end{remark}
Using Assumptions (A.1)-(A.5), \cite{SH12b} proposed a distributed dynamic measurement output feedback controller of the form:
\begin{equation}
u_i(t) = F_i\,\xi_i(t)+ G_i\eta_i(t), \quad i \in \{1, \dots, N\} \label{ulaw1}
\end{equation}
\begin{equation}
\label{ulaw2}
\begin{cases}
\mbox{if } i \in \{1, \dots, l\} \text{ and } \xi_i(0) = \xi_{i,0} \\
\begin{bmatrix} \dot{\xi}_{i}(t) \\ \dot{\eta}_{i}(t) \end{bmatrix} =\hspace{-1mm} \begin{bmatrix} A_i & E_i \\ 0 & S \end{bmatrix}\begin{bmatrix} \xi_{i}(t) \\ \eta_{i}(t) \end{bmatrix}+\hspace{-1mm}\begin{bmatrix} B_i \\ 0 \end{bmatrix} u_i(t)\\ +\begin{bmatrix} L_{1,i} \\ L_{2,i} \end{bmatrix} \hspace{-1mm}\Big( C_{y,i} \xi_{i}(t) + D_{y,i}u_i(t) + H_{y,i} \eta_{i}(t) - y_i(t) \Big)
\end{cases}
\end{equation}
\begin{equation}
\label{ulaw3}
\begin{cases}
\mbox{if } i \in \{l+1, \dots, N\} \text{ and } \eta_i(0) = \eta_{i,0} \\
\begin{bmatrix}\dot{\xi}_{i}(t) \\ \dot{\eta}_{i}(t) \end{bmatrix} = \begin{bmatrix} A_i & E_i \\ 0 & S \end{bmatrix}\,\begin{bmatrix} \xi_{i}(t) \\ \eta_{i}(t) \end{bmatrix}+\begin{bmatrix} B_i \\ 0 \end{bmatrix} \,u_i(t)
\\+\begin{bmatrix} L_i(C_{y,i} \xi_{i}(t) + D_{y,i} u_i(t) - y_i(t)) \\ \gamma\sum_{j=1}^N \ a_{ij} (\eta_{j}(t) - \eta_{i}(t) ) \end{bmatrix}, \quad
\end{cases}
\end{equation}
where $\gamma >0$, $F_i \in {\mathbb{R}}^{m_i \times n_i}$, $G_i \in {\mathbb{R}}^{m_i \times q}$, $L_{1,i} \in {\mathbb{R}}^{n_i \times p_i}$, $L_{2,i} \in {\mathbb{R}}^{q \times p_i}$ and $L_i \in {\mathbb{R}}^{n_i \times p_i}$ are gain matrices, and the parameters $a_{ij}$ are the entries of the adjacency matrix of ${\cal G}$.
The control law (\ref{ulaw3}) combines a distributed observer with a Luenberger observer, and \cite{SH12b} described the controller (\ref{ulaw1})-(\ref{ulaw3}) as a {\it distributed dynamic measurement output feedback controller}.
Their main result was to show that their controller can solve Problem \ref{P1}:
\begin{theorem}[\cite{SH12b}, Theorem 1]
Under Assumptions (A.1)-(A.5), the cooperative output regulation Problem \ref{P1} is solvable by a distributed dynamic measurement output feedback control law of the form (\ref{ulaw1})-(\ref{ulaw3}), if and only if Assumption (A.6) holds.
\end{theorem}
\subsection{Nonovershooting tracking controller design methods}
\label{secnous}
Schmid and Ntogramatzidis \cite{SN10} used state feedback control design methods to deliver a nonovershooting step response for a single LTI plant ($N=1$). Here we discuss how these methods may be applied to the multi-agent systems $\Sigma_i$. We consider the {\em nominal systems} that arise when the exosystem (\ref{exos}) is excluded from consideration ($S=0$ and $w(0)=0$). In this case each agent in (\ref{eqsys}) simplifies to
\begin{equation}\label{nomsysi}
\Sigma_{i,nom}:
\begin{cases}
\dot{\tilde x}_i(t) = A_i\,\tilde x_i(t) + B_i\,\tilde u_i(t), & \tilde x_i(0)=\tilde x_{i,0}\\
\tilde y_i(t) = C_{y,i}\,\tilde x_i(t) + D_{y,i}\,\tilde u_i(t) & \\
\tilde e_i(t) = C_{e,i}\,\tilde x_i(t) + D_{e,i}\,\tilde u_i(t), & i \in \{ 1, \dots, N\}
\end{cases}
\end{equation}
\cite{SN10} gave several methods for the design of a linear state feedback control law $\tilde u = F \tilde x$ to deliver a nonovershooting step response for a system in the form (\ref{nomsysi}). This requires ensuring that the closed-loop system is asymptotically stable, and the tracking error $\tilde e_i$ converges to zero without overshoot; this implies $\tilde e_{i,j}(t) \rightarrow 0$ as $t \rightarrow \infty$ without changing sign in all output components $j \in \{1, \dots, \rho_i\}$.
The design method assumed that initial condition $\tilde x_{i,0} \neq 0$ of each nominal system (\ref{nomsysi}) is known and available for use in the controller design. The closed-loop eigenvalues to be assigned by the state feedback are to be selected from within a user-specified interval of the negative real line. The algorithm selects candidate sets of distinct closed-loop eigenvalues from within the specified interval and then associates them with candidate sets of closed-loop eigenvectors in such a way that only a small number (generally one or two, or at most three) of the closed-loop modes contribute to each output component. The candidate eigenvalues are associated with candidate eigenvectors and eigendirections by solving a system of equations involving the Rosenbrock matrix of the system (\ref{nomsysi}). These eigenvectors and eigendirections are used to obtain a feedback matrix via Moore's pole placement algorithm \cite{M}.
The error signal $\tilde e(t) $ is then formulated in terms of the candidate set of eigenvectors and a test is used to determine if the system response is nonovershooting in all components. If the test is not successful, then a new candidate set of eigenvalues within the specified interval is chosen, and the process is repeated. The tests are analytic in nature, and do not require simulating the system response to test for overshoot.
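The overall flow of the design, pole placement to distinct negative real eigenvalues followed by an analytic sign check on the modal expansion of the error, can be sketched on a toy example (all numbers below are our own illustrative choices, not the algorithm of \cite{SN10}):

```python
import numpy as np

# Toy sketch (our own example): place distinct negative real poles by
# state feedback, then check via the modal expansion that the
# regulated output never changes sign.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])               # regulated output e = x1
x0 = np.array([1.0, 0.0])

F = np.array([[-2.0, -3.0]])             # places poles of A + B F at -1, -2
Acl = A + B @ F
lam, V = np.linalg.eig(Acl)

# modal expansion: x(t) = V (c * exp(lam t)), with c = V^{-1} x0
c = np.linalg.solve(V, x0)
def e(t):
    return float((C @ (V @ (c * np.exp(lam * t)))).real)

ts = np.linspace(0.0, 20.0, 2001)
print(all(e(t) > 0.0 for t in ts))       # e(t) = 2e^{-t} - e^{-2t} > 0
```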
The nonovershooting controller design method can be applied to multiple-input multiple-output systems, and these may be non-minimum phase. The designer has considerable freedom to select the desired closed-loop eigenvalues, in order to accommodate requirements on the convergence rate, or to avoid actuator saturation. The algorithm involves a search for suitable feedback matrices to deliver a nonovershooting response, and a successful search cannot be guaranteed for any given system or initial condition. \cite{SN10} gives some discussion of the circumstances in which a successful search is likely; the condition given there is that
\begin{equation} \label{zerocond} n -3p \geq z \end{equation}
where $n$ is the number of states, $p$ is the number of inputs/outputs, and $z$ is the number of minimum-phase zeros.
In this paper, we shall assume the existence of feedback matrices that yield a nonovershooting response for the nominal system of each agent $\Sigma_{i,nom} $ with initial condition $\tilde x_{i,0}$ in (\ref{nomsysi}):
\begin{enumerate}
\item[(A.7)] A feedback gain matrix $F_i$
exists such that the eigenvalues of $A_i+B_i F_i$ are real, distinct and negative, and
\item[(A.8)] applying the control law $\tilde u_i = F_i \tilde x_i$ to $\Sigma_{i,nom}$, with initial condition $\tilde x_{i,0} = x_{i,0} -\Pi_i w_0 $, yields nonovershooting regulated outputs $\tilde e_i$.
\end{enumerate}
We note that condition (A.8) might be difficult to satisfy for some multi-agent systems from some initial conditions, because it seeks to avoid overshoot in all the output components of all agents. In many practical problems it may not be essential to avoid overshoot in all outputs, and in such cases it becomes easier to find suitable feedback matrices to deliver a nonovershooting response for the outputs where avoiding overshoot is important. The methods of \cite{SN10} can accommodate nonovershooting requirements for only a selection of the outputs, and the \textbf{NOUS} toolbox \cite{PS12} offers an option for the user to specify whether or not overshoot is to be avoided for each output component.
\section{Problem Solution} \label{secps}
Here we present the main results of our paper, providing a solution for Problem \ref{P2} under Assumptions (A.1)-(A.8). Thus we assume we have, for any initial condition $\tilde x_{i}(0)$ and $w(0)$, gain matrices $F_i$ such that applying the control law $\tilde u_i = F_i \tilde x_i$ to the nominal system $\Sigma_{i,nom} $ of each agent yields a nonovershooting response, from the initial condition $\tilde x_{i,0} = x_{i,0} - \Pi_i w_0$. Our task is to obtain suitable gain matrices $G_i $, $L_{1,i}$, $L_{2,i}$, and $L_i $ and parameter $\gamma$ so that the control laws (\ref{ulaw1})-(\ref{ulaw3}) will solve Problem \ref{P2}. Firstly we introduce
\begin{equation} \label{Gi}
G_i = \Gamma_i - F_i \Pi_i, \quad \mbox{for $i \in \{1, \dots, N\}$ }
\end{equation}
Define $\lambda_0 = \min\{\lambda: \lambda \in \rho(A_i+B_iF_i), \ i \in \{1,\dots, N\}\}$; then $\lambda_0$ provides a lower bound on the eigenvalues of all the closed-loop state matrices $ A_i + B_i F_i$. Next we choose $\mu_0 < \lambda_0$ and obtain suitable observer gains $L_{1,i}$, $L_{2,i}$ for $i \in \{1,\dots, l\}$, and $L_i$ for $i \in \{l+1,\dots, N\}$, such that the matrices
\begin{equation} \label{Acci}
A_{cc,i} = \begin{bmatrix} A_i + L_{1,i}C_{y,i} & E_i + L_{1,i} H_{y,i} \\
L_{2,i} C_{y,i} & S + L_{2,i} H_{y,i} \end{bmatrix}
\mbox{ and } A_i+ L_i C_{y,i}
\end{equation} have distinct stable eigenvalues all lying to the left of $ \mu_0$, i.e. for all $\mu \in \rho(A_{cc,i})$,
and for all $\mu \in \rho(A_i+ L_i C_{y,i} )$, we have $Re(\mu) \leq \mu_0$. Thus $\mu_0$ provides an upper bound on the real part of the eigenvalues of all the closed-loop observer matrices.
By Lemma \ref{SHlem1} and (A.6), we know that the real parts of the eigenvalues of $\mathcal L_{33}$ are positive, so there exists $\gamma >0$ such that
\begin{equation}\label{gammadef}
\begin{aligned}
\max \big\{Re(&\lambda_i(S) - \gamma\lambda_j({\cal L}_{33})): \\& i \in \{1, \dots, q\}, j \in \{1,\dots, N-l\} \big\} \leq \mu_0
\end{aligned}
\end{equation}
where $\lambda_i(S)$ and $\lambda_j({\cal L}_{33})$ denote the eigenvalues of $S$ and ${\cal L}_{33}$ respectively.
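A numerical sketch of choosing $\gamma$ to satisfy (\ref{gammadef}), with illustrative (assumed) $S$, ${\cal L}_{33}$ and $\mu_0$:

```python
import numpy as np

# Illustrative (assumed) data: S is a harmonic oscillator, so its
# eigenvalues +-j have zero real part, consistent with (A.1); L33 is
# a small lower-triangular follower block with eigenvalues 1 and 2.
S_eigs = np.array([1j, -1j])
L33 = np.array([[1.0, 0.0], [-1.0, 2.0]])
L33_eigs = np.linalg.eigvals(L33)

mu0 = -3.0   # assumed bound from the observer design
# any gamma >= (max Re(S) - mu0) / min Re(L33) works; add a margin of 1
gamma = (S_eigs.real.max() - mu0) / L33_eigs.real.min() + 1.0
shifts = np.array([li - gamma * lj for li in S_eigs for lj in L33_eigs])
print(np.max(shifts.real) <= mu0)   # expect True
```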
Next we introduce some notation that will allow us to compactly represent the overall closed-loop system of (\ref{eqsys})-(\ref{exos}) under control laws (\ref{ulaw1})-(\ref{ulaw3}). For $i \in \{1, \dots, l\}$, we define
$
\bar A = \operatorname{blkdiag} (A_1, \dots, A_l), \
\bar B = \operatorname{blkdiag} (B_1, \dots, B_l), \
\bar C_y = \operatorname{blkdiag} (C_{y, 1}, \dots, C_{y, l}), \
\bar C_e = \operatorname{blkdiag} (C_{e, 1}, \ \dots, \ C_{e, l}), \
\bar D_y = \operatorname{blkdiag} (D_{y, 1}, \ \dots, \ D_{y, l}), \
\bar D_e = \operatorname{blkdiag} (D_{e, 1}, \ \dots, \ D_{e, l}), \
\bar E = \operatorname{blkdiag} (E_1, \ \dots, \ E_l), \
\bar H_y = \operatorname{blkdiag} (H_{y, 1}, \ \dots, \ H_{y, l}), \
\bar H_e = \operatorname{blkdiag} (H_{e, 1}, \ \dots, \ H_{e, l}), \
\bar F = \operatorname{blkdiag} (F_{1}, \dots, \ F_{l}), \
\bar G = \operatorname{blkdiag} (G_{1}, \ \dots, \ G_{l}), \
\bar L_1 = \operatorname{blkdiag} (L_{1, 1}, \ \dots, \ L_{1, l}), \
\bar L_2 = \operatorname{blkdiag} (L_{2, 1}, \ \dots, \ L_{2, l}), \
\bar S = I_l \otimes S, \
\bar \Pi = \operatorname{blkdiag} ( \Pi_1, \ \dots, \ \Pi_l), \
\bar \Gamma = \operatorname{blkdiag} ( \Gamma_1, \dots, \Gamma_l), \
\bar x = \mbox{col}(x_1, \ \dots, x_l), \
\bar e = \mbox{col}(e_1, \ \dots, e_l), \
\bar w = \mbox{col}(w, \ \dots, \ w), \
\bar \xi = \mbox{col}(\xi_{1}, \dots, \xi_{l}), \
\bar \eta = \mbox{col}(\eta_{1}, \dots, \eta_{l}) , \
\bar \varepsilon_{1} = \bar \xi - \bar x, \
\bar \varepsilon_{2} = \bar \eta - \bar w, \
\bar \varepsilon = [\bar \varepsilon_1, \ \bar \varepsilon_2]^T$.
For $i \in \{l+1, \dots, N\}$, we similarly define matrices
$\hat A$, $\hat B$, $\hat C_y$, $\hat C_e$, $\hat D_y$, $\hat D_e$, $\hat E $, $\hat H_e$, $\hat G$,
$\hat L = \operatorname{blkdiag} (L_{l+1}, \dots, L_{N}) $, $\hat S = I_{N-l} \otimes S$, $\hat \Pi$, $\hat \Gamma$, and variables $\hat x$, $\hat e$, $\hat w$, $\hat \xi$, $\hat \eta$,
$\hat \varepsilon_1$, $\hat \varepsilon_2$ and $\hat \varepsilon$. In this form, the regulator equations (\ref{Pi1})-(\ref{CPi2}) become
\begin{eqnarray}
\bar \Pi\bar S \hspace{-1mm}&\hspace{-1mm} = \hspace{-1mm}&\hspace{-1mm} \bar A\,\bar \Pi+\bar B\,\bar \Gamma+\bar E \label{barPi1}\\
0 \hspace{-1mm}&\hspace{-1mm} = \hspace{-1mm}&\hspace{-1mm} \bar C_e\,\bar \Pi+\bar D_{e}\,\bar \Gamma+\bar H_{e} \label{barCPi2} \\
\hat \Pi\hat S \hspace{-1mm}&\hspace{-1mm} = \hspace{-1mm}&\hspace{-1mm} \hat A\,\hat \Pi+\hat B\,\hat \Gamma+\hat E \label{hatPi1}\\
0 \hspace{-1mm}&\hspace{-1mm} = \hspace{-1mm}&\hspace{-1mm} \hat C_e\,\hat \Pi+\hat D_{e}\,\hat \Gamma+\hat H_{e} \label{hatCPi2}
\end{eqnarray}
\begin{theorem} \label{mainthm}
Consider the multi-agent cooperative system $\Sigma_i$ in (\ref{eqsys}) under assumptions (A.1)-(A.6) and initial conditions $x_{i,0}$ and $w_0$. Assume that a distributed dynamic measurement output feedback controller of the form (\ref{ulaw1})-(\ref{ulaw3}) has been obtained that satisfies (A.7)-(A.8) and (\ref{Gi})-(\ref{gammadef}) for all $i \in \{1,\dots, N\}$.
Then this control law solves Problem \ref{P2}, provided the initial estimator error $(\bar \varepsilon(0), \hat \varepsilon(0))$ is sufficiently small.
\end{theorem}
\noindent{\bf{\em Proof:}\ \ } Firstly we obtain expressions for the closed loop system under the controller (\ref{ulaw1})-(\ref{ulaw3}). For the informed agents $i \in \{1,\dots, l\}$, the tracking error dynamics are given by
\begin{equation}
\begin{aligned}
\dot{\bar \varepsilon}(t)=& \left[ \begin{array}{cc} \bar A & \bar E \\ 0 & \bar S \end{array} \right]\,\left[ \begin{array}{c} \bar \xi(t) \\ \bar \eta(t) \end{array} \right]+\left[ \begin{array}{c} \bar B \\ 0 \end{array} \right] \, \bar u(t) \\& +\begin{bmatrix} \bar L_{1}\\ \bar L_{2} \end{bmatrix} \Big( \bar C_{y} \bar \xi(t) + \bar D_{y}\bar u(t) + \bar H_{y} \bar \eta(t) - \bar y(t) \Big) \\&
- \left[ \begin{array}{cc} \bar A & \bar E \\ 0 & \bar S \end{array} \right]\,\left[ \begin{array}{c} \bar x(t) \\ \bar w(t) \end{array} \right]-\left[ \begin{array}{c}\bar B \\ 0 \end{array} \right] \,\bar u(t) \\
= & \left[ \begin{array}{cc} \bar A & \bar E \\ 0 & \bar S \end{array} \right]\,\left[ \begin{array}{c} \bar \varepsilon_1(t) \\ \bar \varepsilon_2(t) \end{array} \right]
\\&+ \begin{bmatrix} \bar L_{1} \\ \bar L_{2} \end{bmatrix} \Big( \bar C_{y} \bar \xi(t) + \bar D_{y}\bar u(t) + \bar H_{y} \bar \eta(t) \\&\qquad\quad\;- ( \bar C_{y}\bar x(t) + \bar D_{y}\bar u(t) + \bar H_{y}\bar w(t) )\Big) \\
= & \left[ \begin{array}{cc} \bar A+\bar L_1\bar C_y & \bar E +\bar L_1\bar H_{y} \\ \bar L_2\bar C_y & \bar S+\bar L_2 \bar H_{y} \end{array} \right]\,\bar \varepsilon(t) \nonumber
\end{aligned}
\end{equation}
The state and exosystem dynamics are given by
\begin{equation}
\begin{aligned}
\left[ \begin{array}{c} \dot{\bar x}(t) \\ \dot{\bar w}(t) \end{array} \right] =& \left[ \begin{array}{cc} \bar A & \bar E\\ 0 &\bar S \end{array} \right]\,\left[ \begin{array}{c}\bar x(t) \\ \bar w(t) \end{array} \right]+\left[ \begin{array}{c} \bar B \\ 0 \end{array} \right] \bar u(t) \\
=& \left[ \begin{array}{cc} \bar A & \bar E \\ 0 & \bar S \end{array} \right]\,\left[ \begin{array}{c}\bar x(t) \\ \bar w(t) \end{array} \right]+\left[ \begin{array}{c} \bar B \\ 0 \end{array} \right] \,(\bar F\,\bar \xi(t)+\bar G\,\bar \eta(t)) \\
=& \begin{bmatrix} \bar A+\bar B\bar F \hspace{-1mm}&\hspace{-1mm} \bar E +\bar B\bar G \\ 0 \hspace{-1mm}&\hspace{-1mm} \bar S \end{bmatrix}\,\begin{bmatrix} \bar x(t) \\ \bar w(t) \end{bmatrix}+\begin{bmatrix} \bar B\bar F & \bar B\bar G \\ 0 & 0 \end{bmatrix} \,\begin{bmatrix} \bar \varepsilon_1(t) \\ \bar \varepsilon_2(t) \end{bmatrix} \nonumber
\end{aligned}
\end{equation}
and hence the closed-loop system for agents $i \in \{1, \dots, l\}$ is
\begin{eqnarray*}
\begin{bmatrix}
\dot{\bar x}(t) \\ \dot{\bar w}(t) \\ \dot{\bar \varepsilon}_1(t) \\ \dot{\bar \varepsilon}_2(t) \end{bmatrix} =
\left[ \begin{array}{cc|cc}
\bar A+\bar B\bar F \hspace{-1mm}&\hspace{-1mm} \bar E +\bar B\bar G \hspace{-1mm}&\hspace{-1mm} \bar B\bar F \hspace{-1mm}&\hspace{-1mm} \bar B\bar G \hspace{-1mm}\\
0 \hspace{-1mm}&\hspace{-1mm}\bar S \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}\\
\hline
0 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm} \bar A+\bar L_1 \bar C_y \hspace{-1mm}&\hspace{-1mm}\bar E +\bar L_1\bar H_{y}\hspace{-1mm}\\
0 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm} \bar L_2\bar C_y \hspace{-1mm}&\hspace{-1mm} \bar S+ \bar L_2\bar H_{y} \hspace{-1mm}\end{array} \right] \hspace{-1mm} \hspace{-1mm}\begin{bmatrix}
\bar{x}(t) \\ \bar{w}(t) \\ {\bar \varepsilon}_1(t) \\ {\bar \varepsilon}_2(t) \end{bmatrix} \\
\end{eqnarray*}
\begin{eqnarray*}
\bar e(t) \hspace{-1mm}&\hspace{-1mm} = \hspace{-1mm}&\hspace{-1mm} \left[ \begin{array}{cc|cc}
\bar C_e +\bar D_{e}\bar F & \bar D_{e}\bar G + \bar H_{e} & \bar D_{e}\bar F & \bar D_e\bar G \end{array} \right] \begin{bmatrix}
\bar{x}(t) \\ \bar{w}(t) \\ {\bar \varepsilon}_1(t) \\ {\bar \varepsilon}_2(t) \end{bmatrix}
\end{eqnarray*}
Introducing coordinates $\bar z(t) = \bar x(t)-\bar \Pi\,\bar w(t) $ and using \eqref{barPi1}, we obtain
\begin{equation}
\begin{aligned}
\dot{ \bar z}(t)
=& \dot{ \bar x}(t) - \bar \Pi \dot{ \bar w}(t) \\
=& \dot {\bar x}(t) - \bar \Pi\bar S \bar w(t) \\
=& (\bar A + \bar B\bar F)\bar x(t)+(\bar E +\bar B\bar G -\bar \Pi \bar S)\bar w(t) + \bar B\bar F\bar \varepsilon_1(t) + \bar B\bar G \bar \varepsilon_2(t)\\
=& (\bar A + \bar B\bar F)\bar x(t) - (\bar A + \bar B\bar F)\bar \Pi\bar w(t)
+ \bar B\bar F\bar \varepsilon_1(t) + \bar B\bar G \bar \varepsilon_2(t) \\
=& (\bar A + \bar B \bar F)\bar z(t)+ \bar B\bar F\bar \varepsilon_1(t) +\bar B\bar G \bar \varepsilon_2(t)
\end{aligned}
\end{equation}
Also
\begin{equation}
\begin{aligned}
(\bar C_e &+\bar D_{e}\bar F)\bar x(t) +(\bar D_{e}\bar G + \bar H_{e})\bar w(t)\\
= & (\bar C_e +\bar D_{e}\bar F)\bar x(t) + (\bar D_{e}\bar \Gamma - \bar D_{e}\bar F\bar \Pi + \bar H_{e})\bar w(t) \\
= & \bar C_e \bar x(t) + \bar D_{e}\bar F (\bar x(t) - \bar \Pi \bar w(t)) +( \bar D_{e}\bar \Gamma +\bar H_{e}) \bar w(t) \\
= & \bar C_e \bar z(t) + \bar D_{e}\bar F\bar z(t) + (\bar C_e \bar \Pi + \bar D_{e}\bar \Gamma + \bar H_{e})\bar w(t) \\
= & (\bar C_e + \bar D_{e}\bar F)\bar z(t)
\end{aligned}
\end{equation}
by (\ref{barCPi2}).
Hence we may write the closed loop system as
\begin{eqnarray}
\dot{\bar z}(t) &=& (\bar A+\bar B\bar F)\,\bar z(t) + [ \bar B\bar F \ \ \bar B\bar G ] \bar \varepsilon(t), \;\; \bar z(0)=\bar z_0 \label{clinfa} \\
\dot {\bar \varepsilon}(t) & = & \bar A_{cc} \bar \varepsilon(t), \quad \bar \varepsilon(0)= \bar \varepsilon_0 \label{clinfb} \\
\bar e(t) &=& (\bar C_e+\bar D_{e}\,\bar F)\,\bar z(t) + [\bar D_{e}\bar F \ \ \bar D_e\bar G]\bar \varepsilon(t)
\label{clinfc}
\end{eqnarray}
where
\begin{equation} \bar{A}_{cc} = \left[ \begin{array}{cc} \bar A + \bar{L}_{1}\bar{C}_{y} & \bar{E} + \bar{L}_{1} \bar{H}_{y} \\
\bar{ L}_{2} \bar{C}_{y} & \bar S + \bar{L}_{2} \bar{H}_{y} \end{array} \right]
\end{equation}
Secondly we consider the uninformed agents for $i \in \{l+1,\dots, N\}$ and denote the estimation error as
\begin{eqnarray}
\hat \varepsilon(t) = \left[ \begin{array}{c} \hat \varepsilon_1(t) \\ \hat \varepsilon_2(t) \end{array} \right] =
\left[ \begin{array}{c} \hat \xi(t) - \hat x(t) \\ \hat \eta(t) - \hat w(t)
\end{array} \right]
\end{eqnarray}
From Lemma 2 in \cite{SH12b}, we know that
\begin{equation}
\dot{\hat{\varepsilon}}_2(t) = \big(\hat S - \gamma({\cal L}_{33} \otimes I_q)\big) \hat \varepsilon_2(t) - \gamma({\cal L}_{32} \otimes I_q) \bar \varepsilon_2(t)
\end{equation}
so that
\begin{equation*}
\begin{aligned}
\dot{\hat \varepsilon}(t)=& \left[ \begin{array}{cc} \hat A & \hat E \\ 0 & \hat S \end{array} \right]\,\left[ \begin{array}{c} \hat \xi(t) \\ \hat \eta(t) \end{array} \right]+\left[ \begin{array}{c} \hat B \\ 0 \end{array} \right] \, \hat u(t) \\& +
\left[ \begin{array}{c} \hat L(\hat{C}_{y} \hat{\xi}(t) + \hat{D}_{y} \hat{u}(t) - \hat{y}(t)) \\ -\gamma({\cal L}_{33} \otimes I_q) \hat \varepsilon_2(t) - \gamma({\cal L}_{32} \otimes I_q) \bar \varepsilon_2(t) \end{array} \right] \\
& - \left[ \begin{array}{cc} \hat A & \hat E \\ 0 & \hat S \end{array} \right]\,\left[ \begin{array}{c} \hat x(t) \\ \hat w(t) \end{array} \right]-\left[ \begin{array}{c}\hat B \\ 0 \end{array} \right] \,\hat u(t) \\
= & \left[ \begin{array}{cc} \hat A & \hat E \\ 0 & \hat S \end{array} \right]\,\left[ \begin{array}{c} \hat \varepsilon_1(t) \\ \hat \varepsilon_2(t) \end{array} \right]
\\&+ \left[ \begin{array}{c} \hat L ( \hat C_{y}\hat{\xi}(t) + \hat D_{y}\hat u(t) - \hat{C}_y \hat{x}(t) - \hat D_{y} \hat{u}(t) ) \\
-\gamma({\cal L}_{33} \otimes I_q) \hat \varepsilon_2(t) - \gamma({\cal L}_{32} \otimes I_q) \bar \varepsilon_2(t) \end{array} \right]
\\
= & \begin{bmatrix} \hat A+\hat L\hat C_y & \hat E \\ 0 & \hat S - \gamma ({\cal L}_{33} \otimes I_q) \end{bmatrix}\,\hat \varepsilon(t)
-\hspace{-1mm} \begin{bmatrix} 0 \\ \gamma({\cal L}_{32} \otimes I_q) \bar \varepsilon_2(t) \end{bmatrix}
\end{aligned}
\end{equation*}
It follows that the closed-loop system for agents $i \in \{l+1, \dots, N\}$ is
\begin{eqnarray*}
\hspace{-2mm}
\begin{bmatrix}
\dot{\hat x}(t) \\ \dot{\hat w}(t) \\ \dot{\hat \varepsilon}_1(t) \\ \dot{\hat \varepsilon}_2(t) \end{bmatrix}
&\hspace{-6mm} = &
\hspace{-6mm}
\left[ \begin{array}{cc|cc}
\hat A+\hat B\hat F \hspace{-1mm}&\hspace{-1mm} \hat E +\hat B\hat G \hspace{-1mm}&\hspace{-1mm} \hat B\hat F \hspace{-1mm}&\hspace{-1mm} \hat B\hat G \hspace{-1mm}\\
0 \hspace{-1mm}&\hspace{-1mm} \hat S \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}\\
\hline
0 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm} \hat A+\hat L \hat C_y \hspace{-1mm}&\hspace{-1mm}\hat E \hspace{-1mm}\\
0 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm} \hat S - \gamma ({\cal L}_{33} \otimes I_q) \hspace{-1mm}\end{array} \right]
\hspace{-1.5mm}
\begin{bmatrix}
\hat{x}(t) \\ \hat{w}(t) \\ {\hat \varepsilon}_1(t) \\ {\hat \varepsilon}_2(t) \end{bmatrix} \\
& &- \begin{bmatrix} 0 \\ 0 \\ 0 \\ \gamma({\cal L}_{32} \otimes I_q) \bar \varepsilon_2(t) \end{bmatrix} \\
\hat e(t) & \hspace{-4mm} = &
\hspace{-4mm} \left[ \begin{array}{cc|cc}
\hat C_e +\hat D_{e}\hat F & \hat D_{e}\hat G + \hat H_{e} & \hat D_{e}\hat F & \hat D_e\hat G \end{array} \right] \left[ \begin{array}{c}
\hat {x}(t) \\ \hat {w}(t) \\ {\hat \varepsilon}_1(t) \\ {\hat \varepsilon}_2(t) \end{array} \right]
\end{eqnarray*}
Introducing the change of coordinates $\hat z(t) = \hat x(t)-\hat \Pi\,\hat w(t) $, we obtain the closed-loop system in the form
\begin{eqnarray}
\dot{\hat z}(t) &=& (\hat A+\hat B\hat F)\,\hat z(t) + [ \hat B\hat F \ \ \hat B\hat G ] \hat \varepsilon(t), \ \hat z(0)=\hat z_0 \label{clnoninfa} \\
\dot {\hat \varepsilon}(t) & = & \hat A_{cc} \hat \varepsilon(t) - \left[ \begin{array}{c} 0 \\ \gamma({\cal L}_{32} \otimes I_q) \end{array} \right] \bar \varepsilon_2(t) , \ \hat \varepsilon(0)= \hat \varepsilon_0 \label{clnoninfb} \\
\hat e(t) &=& (\hat C_e+\hat D_{e}\,\hat F)\,\hat z(t) + [\hat D_{e}\hat F \ \ \hat D_e\hat G]\hat \varepsilon(t) \label{clnoninfc}
\end{eqnarray}
where
\begin{equation} \label{hatAcc}
\hat{A}_{cc} = \left[ \begin{array}{cc} \hat A + \hat{L}\hat{C}_{y} & \hat{E} \\
0 & \hat S - \gamma({\cal L}_{33} \otimes I_q) \end{array} \right]
\end{equation}
Next we consider the form of the outputs arising from these closed-loop systems. Firstly we consider (\ref{clinfa})-(\ref{clinfc}) for the informed agents. We may decompose the state vector $\bar z$ according to $\bar z = \bar z_A + \bar z_B$ where $\bar z_A $ and $\bar z_B$ are the zero input solution and zero state solutions, respectively. Similarly we can decompose the output $\bar e$ into $\bar e = \bar e_A + \bar e_B$, the zero input response and zero state responses, given by
\begin{eqnarray}
\bar e_A(t) &=& (\bar C_e+\bar D_{e}\,\bar F)\,\bar z(t) \label{bareA} \\
\bar e_B(t) &=& [\bar D_{e}\bar F \ \ \bar D_e\bar G]\bar \varepsilon(t) \label{bareB}
\end{eqnarray}
By Assumptions (A.7)-(A.8), for each agent, $A_i + B_i F_i$ is Hurwitz with real, negative and distinct eigenvalues, and the output $\tilde e_{i}$ of the nominal system $\Sigma_{i,nom}$ in (\ref{nomsysi}) from initial condition $x_{i,0} - \Pi_i w_0$ is nonovershooting. Since $\bar e_A$ is composed of the $\tilde e_i$ outputs from all the informed agents, we conclude that $\bar z_A$ and $\bar e_A$ are SED functions and $\bar e_A(t) \rightarrow 0$ as $t \rightarrow \infty$ without changing sign in any component.
Considering the error dynamics for $\bar \varepsilon$ in (\ref{clinfb}), we know by (\ref{Acci}) that $\bar A_{cc}$ is Hurwitz and satisfies $\max\{\mu: \mu \in \rho(\bar A_{cc})\} \leq \mu_0$. By Lemma \ref{lem62}(i), $\bar \varepsilon$ is a SEDS function with rate $\mu_0$, and $|\bar \varepsilon(t)| \leq k_1|\bar \varepsilon_0|$, for some $k_1 >0$.
As $\bar \varepsilon$ is the input for
(\ref{clinfa}), by Lemma \ref{lem62}(ii), we conclude that $\bar e_B$ is a SEDS function with rate at most $\mu_0$.
We may now apply Lemma \ref{lem63} with $g = \bar e_A$ and
$f = \bar e_B$ to obtain $\delta >0$ such that
\begin{equation}
\bar e_A(t) + \delta \bar e_B(t) \neq 0 \quad \mbox{for all } t \geq 0
\end{equation}
From (\ref{clinfb}) and (\ref{bareB}), we see that $\bar e_B$ depends linearly on the initial condition $\bar \varepsilon_0$, and hence for suitably small $|\bar \varepsilon_0|$, we have $\bar e_A(t) + \bar e_B(t) \neq 0$ for all $t \geq 0$.
Thus $\bar e(t) = \bar e_A(t) +\bar e_B(t) \rightarrow 0$ as $t \rightarrow \infty$ without changing sign in any component.
Next we consider the uninformed agents (\ref{clnoninfa})-(\ref{clnoninfc}) for $i \in \{l+1, \dots, N\}$. We again decompose the state vector as $\hat z = \hat z_A + \hat z_B$ where $\hat z_A $ and $\hat z_B$ are the zero input and zero state solutions, respectively. Similarly we have $\hat e = \hat e_A + \hat e_B$ for the zero input response and zero state responses, given by
\begin{eqnarray}
\hat e_A(t) &=& (\hat C_e+\hat D_{e}\,\hat F)\,\hat z_A(t) \label{hateA} \\
\hat e_B(t) &=& [\hat D_{e}\hat F \ \ \hat D_e\hat G]\hat \varepsilon(t) \label{hateB}
\end{eqnarray}
Again by assumptions (A.7)-(A.8), we have that for each agent, $A_i + B_i F_i$ is Hurwitz with negative, real and distinct eigenvalues, and the output $\tilde e_{i}$ of the nominal system $\Sigma_{i,nom}$ from initial condition $x_{i,0} - \Pi_i w_0$ is nonovershooting.
Hence $\hat z_A$ and $\hat e_A$ are SED functions and $\hat e_A(t) \rightarrow 0$ as $t \rightarrow \infty$ without changing sign in any component.
Considering the error dynamics for $\hat \varepsilon$ in (\ref{clnoninfb}), we know by (\ref{Acci}) that $\hat A + \hat L_1 \hat C_y$ is Hurwitz and all its eigenvalues satisfy $Re(\mu) \leq \mu_0$. From Lemma 2 of \cite{SH12b}, we have
\begin{equation}
\begin{aligned}
\rho( \hat S - \gamma({\cal L}_{33}& \otimes I_q)) =
\{\lambda_i(S) - \gamma \lambda_j(\mathcal{L}_{33}): \\& \qquad i \in \{1,\dots, q\}, \ j \in \{1,\dots, N-l\}\}
\end{aligned}
\end{equation}
As $ \gamma $ satisfies (\ref{gammadef}), we know that $\hat S - \gamma({\cal L}_{33} \otimes I_q)$ is Hurwitz, and all its eigenvalues satisfy $Re(\mu) \leq \mu_0$. We conclude that $\hat{A}_{cc} $ in (\ref{hatAcc}) is Hurwitz and its eigenvalues satisfy $Re(\mu) \leq \mu_0$.
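The spectrum formula above can be checked numerically on a small hypothetical instance (the matrices $S$ and $\mathcal{L}_{33}$ below are illustrative only, not those of the aircraft example in Section \ref{secex}); with the assumed structure $\hat S = I_{N-l} \otimes S$, the eigenvalues of $\hat S - \gamma(\mathcal{L}_{33} \otimes I_q)$ are exactly the pairwise differences $\lambda_i(S) - \gamma\lambda_j(\mathcal{L}_{33})$:

```python
import numpy as np

# Hypothetical small instance: N - l = 2 uninformed agents, q = 2.
S = np.array([[0.0, 1.0], [-1.0, 0.0]])    # exosystem block, eigenvalues +/- i
L33 = np.array([[2.0, 0.0], [-1.0, 1.0]])  # Laplacian submatrix, eigenvalues {2, 1}
gamma = 3.0
q = S.shape[0]

Shat = np.kron(np.eye(L33.shape[0]), S)    # assumed structure: \hat S = I (x) S
M = Shat - gamma * np.kron(L33, np.eye(q))

# Compare the spectrum of M with the set of pairwise differences.
key = lambda z: (round(z.real, 9), round(z.imag, 9))
spec_M = sorted(np.linalg.eigvals(M), key=key)
spec_formula = sorted(
    (ls - gamma * ll
     for ls in np.linalg.eigvals(S)
     for ll in np.linalg.eigvals(L33)),
    key=key,
)
```

Since $\gamma$ exceeds $\max\{Re(\lambda)/\min_j \lambda_j(\mathcal{L}_{33})\}$ here, every eigenvalue of $M$ has negative real part, mirroring the Hurwitz conclusion drawn in the text.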
Decomposing $\hat \varepsilon = \hat \varepsilon _A + \hat \varepsilon_B$ into its zero input and zero state solutions, we observe from Lemma \ref{lem62}(i) that $ \hat \varepsilon_A$ is a SEDS function with rate at most $\mu_0$.
From above we know that $\bar \varepsilon$, and hence also $\bar \varepsilon_2$, are SEDS functions with rate $\mu_0$.
Thus by Lemma \ref{lem62}(ii), $\hat \varepsilon_B$ is also a SEDS function with rate $\mu_0$.
As $\hat \varepsilon$ is the input for
(\ref{clnoninfa}), by Lemma \ref{lem62}(ii), we conclude that $\hat e_B$ is a SEDS function with rate at most $\bar \mu$.
We may now apply Lemma \ref{lem63} with $g = \hat e_A$ and
$f = \hat e_B$ to obtain $\delta >0$ such that
\begin{equation}
\hat e_A(t) + \delta \hat e_B(t) >0 \quad \text{for all } t \geq 0
\end{equation}
From (\ref{clnoninfb}) and (\ref{hateB}), we see that $\hat e_B$ is linearly dependent upon the initial condition
$(\bar \varepsilon_0, \hat \varepsilon_0)$, and hence for suitably small $|(\bar \varepsilon_0,\hat \varepsilon_0)|$, we have $\hat e_A(t) + \hat e_B(t) >0$.
Thus $\hat e(t) = \hat e_A(t) +\hat e_B(t) \rightarrow 0$ as $t \rightarrow \infty$ without changing sign in any component.
\hspace*{\fill}~\QED\par\endtrivlist\unskip
\begin{remark}
It is worth considering the sense in which the multi-agent Problems \ref{P1} and \ref{P2} have been solved with a distributed control system: what information and assumptions are required to hold globally (for all agents), and which ones are local (information that only needs to be known by individual agents)?
The information that must be available for the purpose of controller design is as follows:
\begin{enumerate}
\item All agents require knowledge of the exosystem dynamics $S$; however, only the informed agents are able to directly detect the states of the exosystem. The uninformed agents detect the state of the exosystem using information obtained from the informed agents, via the communication network.
\item The control law (15) requires the design of the feedback matrix $F_i$ and the feedforward matrix $G_i$ for each agent.
$F_i$ requires knowledge of the plant dynamics $(A_i,B_i,C_i)$, and estimates of the initial states $x_{i,0}$ and $w_0$ of the $i$-th plant and the exosystem, while $G_i$ requires solutions to matrix equations (\ref{Pi1})-(\ref{CPi2}). Thus the design of these matrices can be done locally, provided $S$ is available.
\item For the informed agents $i \in \{1, \dots, l\}$, design of the observer gains $L_{1,i}$, $L_{2,i}$ defined in (21) can also be done locally, however there must be an agreement among the controller designers on the values of $\lambda_0$ and $\mu_0$ to be used.
\item For the uninformed agents $i \in \{l+1, \dots, N\}$, design of the observer gains $L_{i}$ defined in (21) can be done locally, provided all these agents have knowledge of the parameters $\lambda_0$ and $\mu_0$. Additionally, the controller design procedure for these agents requires knowledge of the Laplacian submatrix $\mathcal{L}_{33}$ so that suitable $\gamma$ satisfying (22) can be selected.
\end{enumerate}
Thus the controller design method of \cite{SH12b} requires global knowledge of the exosystem dynamics $S$. Assumptions of this kind are widely used in problems of multi-agent consensus tracking control, for example \cite{HGCH}-\cite{SH12a} among many others. In Section V, we provide some further discussion on the cooperative nature of the controller design method in the context of an aircraft control example.
\end{remark}
\section{Example}
\label{secex}
In order to show the effectiveness of our proposed method, we adopt an example from \cite{YW13}. In this example, we consider four networked research aircraft known as MuPAL-$\alpha$ connected as shown in Fig.~\ref{fig:topA}. It is desired for the aircraft to track a given sideways velocity and a given roll angle.
\begin{figure}
\centering
\includegraphics[scale=1]{top_aircraft1.pdf}
\caption{Network of four interconnected aircraft}
\label{fig:topA}
\end{figure}
The exosystem states are defined as $w= (r_v, r_{\phi 1 }, r_{\phi 2}, d_{\phi1},d_{\phi 2})^T$, where $r_v, r_{\phi 1 }, r_{\phi 2}$ are the states of the reference signal, and $d_{\phi1},d_{\phi 2}$ denote the sensor noise in the channel of roll angle. The matrix $S$ is defined as follows:
\begin{equation*}
S=\begin{bmatrix} 0 &0& 0& 0& 0\\
0& 0 &-\frac{2}{3}& 0 &-0.1\\
0& \frac{1}{4}& 0 &0 &0 \\
0& 0& 0& 0& 1\\
0& 0& 0& -1& 0
\end{bmatrix}
\end{equation*}
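The claim used later, when selecting $\gamma$, that $\rho(S)$ lies on the imaginary axis can be confirmed numerically for this $S$:

```python
import numpy as np

# Exosystem matrix S from the aircraft example.
S = np.array([
    [0,    0,    0,   0,    0 ],
    [0,    0, -2/3,   0, -0.1 ],
    [0,  1/4,    0,   0,    0 ],
    [0,    0,    0,   0,    1 ],
    [0,    0,    0,  -1,    0 ],
], dtype=float)

# All five eigenvalues should lie on the imaginary axis:
# {0} from the constant reference, and two conjugate pairs
# from the oscillatory reference and sensor-noise blocks.
eigs = np.linalg.eigvals(S)
max_real_part = max(abs(z.real) for z in eigs)
```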
The states of each aircraft are considered as $x_i=(v_i,r_{ri}, \phi_i, y_{ri}, T_{ai}, T_{ri})^T$, the sideways velocity, roll rate, roll angle, yaw rate, and delays of the two commands, respectively. The measured output $y_i$ is considered as $y_i=(v_i,\phi_i)^T$, and $u_i=(\delta_{a_ci},\delta_{r_ci})^T$, the aileron deflection and rudder deflection commands.
The regulated outputs $e_i$ are the tracking errors of sideways velocity and roll angle.
The state matrix of each aircraft for $i=1, \ldots,4$ is given as
\begin{equation*}
{\setlength{\arraycolsep}{2.5pt}
A_i=\begin{bmatrix}
-0.178 & 6.079 & 9.763 & -65.623 & 0 & 2.890\\
-0.057 & -3.810 & 0 & 1.343 & -10.750 & 1.187\\
0 & 1.000 & 0 & 0.094 & 0 & 0\\
0.025 & -0.062 & 0 & -0.475 & 0.345 & -2.220\\
0 & 0 & 0 & 0 & -11.111 & 0\\
0 & 0 & 0 & 0 & 0 & -11.111
\end{bmatrix}}
\end{equation*}
Also, for $ i=1, \ldots,4$ we have
\begin{equation*}
\begin{aligned}
B_i=&\begin{bmatrix} 0 &-2.8900\\
10.7500& -1.1870\\
0& 0\\
-0.3450& 2.2200\\
22.2222& 0\\
0 &22.2222 \end{bmatrix}, \
C_{y,i} =\begin{bmatrix} 1 &0&0&0&0&0\\
0& 0& 1& 0 &0& 0\end{bmatrix}\\
H_{e,i}=&\begin{bmatrix} -1 &0&0&0&0\\
0& 0 & -1 &0& 0\end{bmatrix}, \
H_{y,1}=\begin{bmatrix} 1 & -1& 0&0&0\\
1 & -1& 0& 0 & 0\end{bmatrix}
\end{aligned}
\end{equation*}
Also $C_{y,i} = C_{e,i}$ and $H_{y,2}= H_{y,3}= H_{y,4}= 0$.
Assumptions (A.1)-(A.6) are readily verified to hold; in solving (\ref{Pi1})-(\ref{CPi2}) we used
\begin{eqnarray}
\Gamma \hspace{-1mm} & = & \hspace{-1mm} \begin{bmatrix} -0.0045 & -0.0877 & 0.0472 & -0.0145 & 0.0327 \\
0.0112 & -0.0427 & -0.0139 & -0.0065 & 0.0100
\end{bmatrix} \nonumber \\
\Pi \hspace{-1mm} & = & \hspace{-1mm} \begin{bmatrix} 1.0000 & 0 & 0 & 0 & 0 \\
0.0002 & 0.2480 & -0.0138 & 0.0000 & -0.0866 \\
0 & 0 & 1.0000 & 0 & 0 \\
-0.0022 & 0.0211 & 0.1467 & -0.0002 & -0.0076 \\
-0.0089 & -0.1773 & 0.0837 & -0.0231 & 0.0659 \\
0.0223 & -0.0847 & -0.0329 & -0.011 & 0.0203
\end{bmatrix} \nonumber
\end{eqnarray}
From the network graph in Figure \ref{fig:topA}, we see that the informed group of agents consists of agent 1, while agents 2, 3 and 4 are uninformed. The Laplacian matrix of the digraph is
\begin{equation*}
\mathcal L =
\begin{bmatrix}
0& 0 &0 &0& 0\\
-2 &3 &0 &0& -1\\
0 &-2 &2& 0& 0\\
0& 0 &-2& 2& 0\\
0& 0 &-0.7 &-0.5& 1.2
\end{bmatrix}
\end{equation*}
The distributed nature of the control scheme can be understood in terms of this aircraft example system. The controller design of the matrices required for the control scheme (\ref{ulaw1})-(\ref{ulaw3}) for each aircraft can be done without knowing the identity (flight dynamics) of the other aircraft in the network, provided there is a consensus on the location of closed-loop poles. Regarding knowledge of the communication digraph $\mathcal G$, aircraft in the informed group need only know of those aircraft to whom they are directly linked by an edge of the digraph. Aircraft in the uninformed group require sufficient information to enable them to compute the Laplacian submatrix $\mathcal{L}_{33}$.
The exosystem (\ref{exos}) represents a flight maneuver that all the aircraft are to execute. The maneuver involves varying the sideways velocity and roll angle of each aircraft. Global knowledge of the $S$ matrix defining the exosystem dynamics means that all aircraft are aware of the maneuver; the purpose of the control scheme presented in [6] is to enable all aircraft in the network to {\em synchronise} their execution of the maneuver with that of the leader aircraft.
The invariant zeros of each agent system $(A_i,B_i,C_{y,i})$ are at $\{ -50.54, \ 11.11, \ 11.11\}$. Hence each system has one minimum phase zero at $-50.54$. There are six state variables, and two inputs and two outputs. Thus (\ref{zerocond}) is satisfied, indicating that a search for feedback matrices such that the state feedback $\tilde u = F \tilde x$ yields a nonovershooting response on the nominal plant (\ref{nomsysi}) is likely to succeed. It is also worth noting that the system is nonminimum phase, due to the repeated right half-plane zero at $11.11$. \cite{SP11} investigated the transient response of MIMO nonminimum phase systems, and found that, although overshoot could generally be avoided, doing so often came at the cost of undershoot, and vice versa.
To investigate the application of the nonovershooting control method proposed in this paper, we assume that estimates of the initial states of each agent and the exosystem have been obtained as follows
\begin{eqnarray}
x_{1,0} & = &\begin{bmatrix} -1 & 0 & -1 & 1& 0 & 0\end{bmatrix}^T \nonumber \\
x_{2,0} & = & \begin{bmatrix} 0 & -1 & -1 & 0 & -1 & -1 \end{bmatrix}^T \nonumber \\
x_{3,0} & = & \begin{bmatrix} -1& 0 & -1 & 1 & 0 & -1 \end{bmatrix}^T \nonumber \\
x_{4,0} & = & \begin{bmatrix} -1& 1& -1 & 1 & 0& -1 \end{bmatrix}^T \nonumber \\
w_{0} & = & \begin{bmatrix} 1 & 1 & 0& 0& 0\end{bmatrix}^T \nonumber
\end{eqnarray}
The {\textbf{NOUS}} toolbox \cite{PS12} was used to seek such feedback matrices for the nominal system (\ref{nomsysi}) of each agent, from initial conditions $\tilde x_{i,0} = x_{i,0} - \Pi_i w_0$. The toolbox asks the user to nominate a desired interval of the negative real line for the location of the closed-loop eigenvalues. We chose the interval $(-2.5, -0.3)$, and in each case the search succeeded, yielding feedback matrices
\begin{eqnarray}
F_1\hspace{-1mm} & = &\hspace{-1mm} \begin{bmatrix} 0.006 & 0.025 & -0.031 & 0.120 & 0.631 & -0.250 \\
0.061 & 0.023 & -0.354 & 1.08 & 1.38 & -2.04
\end{bmatrix} \nonumber \\
F_2\hspace{-1mm} & = & \hspace{-1mm} \begin{bmatrix} 0.005 & 0.015 & -0.025 & 0.158 & 0.578 & -0.250 \\
0.055 & 0.004 & -0.387 & 1.29 & 1.39 & -2.07
\end{bmatrix} \nonumber \\
F_3 \hspace{-1mm} & = & \hspace{-1mm} \begin{bmatrix} 0.00 & -0.007 & -0.086 & 0.485 & 0.646 & -0.230 \\
0.004 & -0.325 & -0.978 & 5.056 & 1.61 & -2.55
\end{bmatrix} \nonumber \\
F_4 \hspace{-1mm} & = &\hspace{-1mm} \begin{bmatrix} 0.004 & 0.029 & 0.041 & 0.136 & 0.506 & -0.239 \\
0.056 & 0.049 & -0.306 & 0.775 & 1.36 & -2.00
\end{bmatrix} \nonumber
\end{eqnarray}
Figure \ref{fignoerror} shows the tracking errors for the sideways velocity and roll angle for each agent, when the control law $\tilde u_i = F_i \tilde x_i$ is applied to the nominal plant (\ref{nomsysi}) with initial condition $\tilde x_{i,0} = x_{i,0} -\Pi_i w_0 $. These yield nonovershooting tracking errors $\tilde e_i$ for both outputs of all agents. This situation corresponds to the tracking errors that would be observed from the multi-agent system (\ref{eqsys}) under the distributed dynamic output feedback controller (\ref{ulaw1})-(\ref{ulaw3}) if there were no error in the estimates of the initial agent and exosystem states, so that $\bar \varepsilon(0) = 0$ and $\hat \varepsilon(0) = 0$.
\begin{figure}
\centering
\begin{tabular}{c}
\includegraphics[width=8cm]{Fig1.pdf} \\
\includegraphics[width=8cm]{Fig2.pdf}
\end{tabular}
\caption{Tracking errors with no error in the initial state estimates}
\label{fignoerror}
\end{figure}
To implement the dynamic controller (\ref{ulaw1})-(\ref{ulaw3}), we chose $\mu_0 = -12$ and $\gamma = 24$. These choices satisfy (\ref{gammadef}) as $\rho(S)$ lies on the imaginary axis, and
$\rho(\mathcal L_{33}) = \{ 1.2, \ 2, \ 2\}$. Observer gain matrices $L_{1,1}$, $L_{2,1}$ and $L_i$ for $i \in \{2,3,4\}$ to ensure the closed-loop matrices in (\ref{Acci}) have spectrum lying to the left of $\mu_0$
were obtained using the MATLAB$^{\textrm{\tiny{\textregistered}}}$ {\textbf{place}} command:
\begin{equation}
L_{1,1} = 10^6\begin{bmatrix}
-1.004 & 2.628 \\
-0.020 & 0.051\\
-1.004 & 2.628\\
-0.153 & 0.401\\
-0.000 & 0.000\\
0.000 & -0.000 \end{bmatrix},
L_{2,1} = 10^7\begin{bmatrix}
0.593 & -1.556\\
0.492 &-1.294\\
-0.108 & 0.284\\
0.427 & -1.109\\
0.402 & -1.058
\end{bmatrix} \nonumber
\end{equation}
\vspace{-.2cm}
\begin{equation}
L_2 = L_3 = L_4 =
\begin{bmatrix}
-32.5424 & -3.1068 \\
-1.8177 & -147.3808 \\
-0.1349 & -27.7723 \\
3.7878 & -17.6487 \\
0.1207 & 1.7550 \\
-0.2203 & 0.4804
\end{bmatrix} \nonumber
\end{equation}
\vspace{-.2cm}
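For readers reproducing this step outside MATLAB, an analogous observer-gain computation can be sketched in Python with SciPy's \texttt{place\_poles}. The small observable pair below is hypothetical (it is not the MuPAL-$\alpha$ model), and the sign convention here yields $L$ such that $A - LC$ has the requested spectrum:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical 3-state, 1-output observable pair (not the aircraft model).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
C = np.array([[1.0, 0.0, 0.0]])

# Observer poles chosen to the left of mu_0 = -12, mirroring the design rule above.
poles = [-13.0, -14.0, -15.0]

# Duality: placing the poles of (A^T, C^T) returns K with A^T - C^T K Hurwitz;
# L = K^T then gives A - L C the requested spectrum.
K = place_poles(A.T, C.T, poles).gain_matrix
L = K.T
obs_eigs = np.linalg.eigvals(A - L @ C)
```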
To show the effect of the initial state estimate errors $\bar \varepsilon(0)$ and $\hat \varepsilon(0)$ on the system response, we shall assume these errors are 1\% of the state estimates. Hence the initial states of systems (\ref{clinfb}) and (\ref{clnoninfb}) are
\begin{eqnarray}
\bar \xi(0) & = & 1.01 \bar x(0), \ \bar \eta(0) = 1.01 \bar w(0) \nonumber \\
\hat \xi(0) & = & 1.01 \hat x(0), \ \hat \eta(0) = 1.01 \hat w(0) \nonumber
\end{eqnarray}
Figure \ref{figwitherror} shows the tracking errors for the sideways velocity and roll angle for each agent, assuming these errors in the estimates of the initial states.
\begin{figure}
\centering
\begin{tabular}{c}
\includegraphics[width=8cm]{Fig3.pdf} \\
\includegraphics[width=8cm]{Fig4.pdf}
\end{tabular}
\caption{Tracking errors allowing for errors in the initial state estimates}
\label{figwitherror}
\end{figure}
We observe that both tracking errors from all four agents converge to zero without changing sign, and thus overshoot is avoided in both outputs of all four agents (a total of eight outputs). If the dynamic controller (\ref{ulaw1})-(\ref{ulaw3}) had been designed using the methods of \cite{SH12b} for the choice of the state feedback matrices, then the tracking errors would also converge to zero; however, the transient responses of each output component would be expected to involve some overshoot, as may be observed in Figure 3 of \cite{SH12b}. Overshoot would occur even if the initial state estimation errors were zero \cite{SH12a}.
The additional contribution of the control methods in \cite{SN10} is to choose the feedback matrices in a manner that avoids overshoot, and hence enables the transient period of the control action, during which synchronisation is being achieved and the aircraft do not yet all have the same sideways velocity, to be conducted in a smoother and potentially less dangerous manner.
\section{Conclusion}
\label{secconc}
We have investigated the problem of designing a consensus control scheme to solve the output regulation problem with a desirable transient response for a family of linear multi-agent systems. The distributed consensus output regulation scheme of Su and Huang was combined with the nonovershooting feedback control scheme of Schmid and Ntogramatzidis to achieve output regulation without overshoot for all agents, under Assumptions (A.1)-(A.8). The authors believe this to be the first control methodology to achieve a nonovershooting transient response for MIMO multi-agent consensus problems.
Theorem \ref{mainthm} guarantees the existence of a neighbourhood of the estimated initial state
$ \tilde x_{i,0} $ such that, if the actual system initial state lies within this neighbourhood, then nonovershooting output regulation will be achieved by the distributed dynamic measurement output feedback controller in (\ref{ulaw1})-(\ref{ulaw3}). Estimating the size of this neighbourhood is an open problem; however, the neighbourhood can be adjusted by the choice of $\lambda_0$, $\mu_0$ and $\gamma$ in (\ref{Acci})-(\ref{gammadef}). The neighbourhood becomes larger if the initial states of some agents are known, and also if the nonovershooting behaviour is only required in a selection of the agent outputs. In practice, the size of this neighbourhood can be estimated with the assistance of the {\textbf{NOUS}} MATLAB$^{\textrm{\tiny{\textregistered}}}$ toolbox \cite{PS12}. This toolbox allows the user to obtain state feedback matrices for a nonovershooting response from the estimated system initial state, for each agent. Combining these within (\ref{ulaw1})-(\ref{ulaw3}) and simulating the response of (\ref{eqsys}) from a range of error estimates of the initial system state enables this neighbourhood to be approximated.
\section{Acknowledgements}
The authors thank the Associate Editor and the anonymous reviewers for their extensive and insightful comments that have resulted in many improvements to the paper.
\section{Appendix}
\underline{Proof of Lemma \ref{lem63}}:
Assume firstly that $g(t) >0$ for all $t \geq 0$. Define $\lambda = \min\{\lambda_i: i \in \{1, \dots, m\}\}$. Then $ \mu < \lambda$ by assumption.
Define $f_1: {\mathbb{R}} \rightarrow {\mathbb{R}} $ with
\begin{equation}
f_1(t) = - \sum_{i=1}^n \ e^{\mu_i t} (|\alpha_i | + | \beta_i|)
\end{equation}
Then $f_1(t) \leq f(t)$ for all $t \geq 0$.
As $f_1$ and $g$ are sums of finitely many real exponential functions with negative rates, they have finitely many local extrema, and there exists a $\bar t >0$ such that both $f_1$ and $g$ are monotonic on the interval $t \geq \bar t$, and $f_1(t) \rightarrow 0$ and $g(t)\rightarrow 0$ as $ t \rightarrow \infty$.
Hence we have $\delta_1 > 0$ such that
\begin{equation}
\frac{1}{\delta_1} = \sup\left\{\frac{-f_1(t)}{g(t)} : 0 \leq t \leq \bar t\right\}
\end{equation}
and so $ 0 < g(t) + \delta_1f_1(t) $ for all $ 0 \leq t \leq \bar t$.
Consider $t > \bar t$. As $f_1$ is a SEDS function with rate $\mu$, we know that for $t > \bar t$
\begin{equation} \label{fineq}
-f_1(t) \leq |f_1(\bar t)| e^{\mu (t-\bar t)}
\end{equation}
Assume without loss of generality that the $\{\lambda_1, \dots, \lambda_m\}$ are ordered so that $\lambda_1 < \lambda_2 < \dots < \lambda_m$. Then $\beta_m e^{\lambda_m t}$ is the dominant term of $g$ as $t \rightarrow \infty$. Also the assumption that $g(t) >0$ for all $t \geq 0$ implies $\beta_m >0$. We next introduce the set of integers $T_1 = \{i \in \{1,\dots, m\}: \beta_i \beta_m >0\}$ and the exponential function
\begin{equation} g_1(t) = \sum_{i \in T_1} \ \beta_i e^{\lambda_i t }
\end{equation}
Clearly $g_1(t) >0$ for all $t \geq 0$, and as $m \in T_1$, we see that $\beta_m e^{\lambda_m t} $ is the dominant term of $g_1$.
Hence we can introduce the function $\gamma(t) $ such that $g(t) = \gamma(t)g_1(t)$. Then $ 0 < \gamma(t) \leq 1$ for all $ t \geq \bar t$, and $\gamma(t) \rightarrow 1$ as $t \rightarrow \infty$. Define $\gamma_0 > 0$ with $\gamma_0 = \inf\{\gamma(t): t \geq \bar t\}$; we then have for all $t > \bar t$ that
\begin{equation} \label{gineq}
g(t) \geq \gamma_0 g_1(t) \geq\gamma_0 g_1(\bar t) e^{\lambda_1(t - \bar t)}
\end{equation}
From (\ref{fineq}) and (\ref{gineq}), we obtain for $t > \bar t$,
\begin{eqnarray}
\frac{-f_1(t)}{g(t)}
& \leq & \frac{ |f_1(\bar t)| e^{\mu (t-\bar t)}} { \gamma_0 g_1(\bar t) e^{\lambda_1(t - \bar t)} } \\
& < & \frac{ |f_1(\bar t)|}{ \gamma_0 g_1(\bar t) }
\end{eqnarray}
as $\mu < \lambda_1$. Defining $\delta_2 = \frac{ \gamma_0 g_1(\bar t) }{ |f_1(\bar t)| } >0 $, we obtain
$ 0 < g(t) + \delta_2f_1(t) $ for all $ t > \bar t$. Finally choosing $ \delta = \min\{\delta_1,\delta_2\}$, and noting that $f_1(t) \leq f(t)$, we have $g(t) + \delta f(t) >0 $ for all $ t \geq 0$. A similar argument can be used if $g(t) < 0$ for all $t \geq 0$, and the result follows.
\hspace*{\fill}~\QED\par\endtrivlist\unskip
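A minimal numerical illustration of Lemma \ref{lem63}, with hypothetical rates: take $g(t) = e^{-t} > 0$ and $f(t) = -e^{-3t}$, so that $f$ decays strictly faster than $g$. Then $\delta = 1/2$ gives $g(t) + \delta f(t) = e^{-t}(1 - \tfrac{1}{2}e^{-2t}) \geq \tfrac{1}{2}e^{-t} > 0$ for all $t \geq 0$:

```python
import math

# Hypothetical SEDS-type functions: g stays positive and decays at rate -1,
# f is negative but decays strictly faster, at rate -3.
g = lambda t: math.exp(-t)
f = lambda t: -math.exp(-3.0 * t)

delta = 0.5
# Analytically g(t) + delta*f(t) = e^{-t}*(1 - 0.5*e^{-2t}) >= 0.5*e^{-t} > 0;
# sample the sum on a grid to confirm it never becomes nonpositive.
min_val = min(g(t) + delta * f(t) for t in (0.01 * k for k in range(2000)))
```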
\section{Introduction}
A 1.5-dimensional (1.5D) terrain $T$ is an $x$-monotone polygonal chain in $\mathbb{R}^2$ specified by $n$ vertices $V(T) = \{v_1,..., v_i,..., v_n\}$, where $v_i = (x_i, y_i)$. The vertices induce $n-1$ edges $E(T) = \{ e_1,..., e_i,..., e_{n-1} \}$ with $e_i = \overline{v_iv_{i+1}}$.
A point $p$ sees or guards $q$ if the line segment $\overline{pq}$ lies above or on $T$, or more precisely, does not intersect the open region bounded from above by $T$ and from the left and right by the downward vertical rays emanating from $v_1$ and $v_n$.
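This visibility predicate can be made concrete with a short sketch (the function names are ours; for simplicity it assumes $p$ and $q$ lie on or above $T$, so it suffices to compare the segment $\overline{pq}$ against the terrain at every vertex strictly between them):

```python
def terrain_y(terrain, x):
    """Height of the x-monotone chain `terrain` (a list of (x, y) vertices) at x."""
    for (x1, y1), (x2, y2) in zip(terrain, terrain[1:]):
        if x1 <= x <= x2:
            if x1 == x2:            # vertical edge (as in orthogonal terrains)
                return max(y1, y2)
            t = (x - x1) / (x2 - x1)
            return y1 + t * (y2 - y1)
    raise ValueError("x lies outside the terrain")

def sees(terrain, p, q, eps=1e-9):
    """True iff the segment pq lies on or above the terrain.

    Both pq and the terrain are piecewise linear, so for p, q on or above T
    it is enough to test the segment at each vertex strictly between them.
    """
    (px, py), (qx, qy) = sorted([p, q])
    if px == qx:
        return True
    for vx, vy in terrain:
        if px < vx < qx:
            t = (vx - px) / (qx - px)
            if py + t * (qy - py) < vy - eps:
                return False
    return True

# A small terrain with two peaks; the valley vertex blocks the long view.
T = [(0, 0), (1, 2), (2, 0), (3, 2), (4, 0)]
```

For instance, on this terrain the two peak vertices see each other over the valley, but the two endpoints do not see each other because the peaks block the segment.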
There are two types of terrain guarding problems: (1) the continuous terrain guarding (CTG) problem, whose objective is to determine a minimum-cardinality subset of $T$ that guards $T$, and (2) the discrete terrain guarding problem, whose objective is, given two point sets $U$ and $X$ on $T$, to determine a minimum-cardinality subset of $U$ that guards $X$.
Many studies have referred to applications of 1.5D terrain guarding in the real world~\cite{EA,finite_set,FA}. Examples include guarding or covering a road with security cameras or lights and using line-of-sight transmission networks for radio broadcasting.
\subsection{Related Work}
Ample research has focused on the 1.5D terrain guarding problem, which can be divided into the general terrain guarding problem and the orthogonal terrain guarding problem.
For 1.5D terrains, King and Krohn \cite{NP} proved that the general terrain guarding problem is NP-hard via a reduction from planar 3-SAT.
Initial studies on the 1.5D terrain guarding problem focused on the design of constant-factor approximation algorithms. Ben-Moshe et al. \cite{F-app} gave the first constant-factor approximation algorithm for the terrain guarding problem and left the complexity of the problem open. King~\cite{5-app} gave a simple algorithm claimed as a 4-approximation, which was later shown to be a 5-approximation. Recently, Elbassioni et al. \cite{4-app} gave a 4-approximation algorithm.
Gibson et al. \cite{D-PTAS} considered the discrete terrain guarding problem of finding a minimum-cardinality subset of candidate points that sees every target point, and proved the existence of a planar graph that appropriately relates the local and global optima; thus, the discrete terrain guarding problem admits a polynomial-time approximation scheme (PTAS) based on local search. Friedrichs et al. \cite{C-PTAS} showed that for the continuous 1.5D terrain guarding problem, finite guard and witness sets ($G$ and $X$, respectively) can be constructed such that an optimal guard cover $G'' \subseteq G$ covering terrain $T$ exists, and such that whenever these guards monitor all points in $X$, the entire terrain is guarded. Consequently, the continuous 1.5D terrain guarding problem also admits a PTAS: construct the finite guard and witness sets and then apply the PTAS of \cite{D-PTAS}.
Some studies have considered orthogonal terrains. $T$ is called an orthogonal terrain if each edge $e\in E(T)$ is either horizontal or vertical. An orthogonal terrain has four vertex types. If $v_i$ is a vertex of an orthogonal terrain and the angle $\angle{v_{i-1}v_iv_{i+1}}=\pi/2$, then $v_i$ is a convex vertex; otherwise, it is a reflex vertex. A convex vertex $v_i$ is left (right) convex if $\overline{v_{i-1}v_i}$ ($\overline{v_iv_{i+1}}$) is vertical. A reflex vertex $v_i$ is left (right) reflex if $\overline{v_{i-1}v_i}$ ($\overline{v_iv_{i+1}}$) is horizontal.
Katz and Roisman \cite{2-OTG} gave a 2-approximation algorithm for the problem of guarding the vertices of an orthogonal terrain. The authors constructed a chordal graph capturing the visibility relation between vertices and, building on \cite{F.Gavril}, used a minimum clique cover of this chordal graph to solve the right (left) convex vertex guarding problem.
Lyu and {\" U}ng{\"o}r~\cite{nlogm} gave a 2-approximation algorithm for the orthogonal terrain guarding problem that runs in $O(n\log m)$ time, where $m$ is the output size. The authors also gave an optimal algorithm for a subproblem of the orthogonal terrain guarding problem: based on the vertex types of the orthogonal terrain, the objective is to determine a minimum-cardinality subset of $V(T)$ guarding all right (left) convex vertices of $V(T)$; the algorithm uses stack operations to reduce the time complexity.
The $O(n\log m)$-time 2-approximation algorithm was previously the best known algorithm for the orthogonal terrain guarding problem. However, some studies have considered alternatives to approximation algorithms.
Durocher et al. \cite{O-OTG} gave a linear-time algorithm for guarding the vertices of an orthogonal terrain under a directed visibility model, where directed visibility depends on the vertex type. If $u$ is a reflex vertex, then $u$ sees a vertex $v$ of $T$ if and only if every point in the interior of the line segment $\overline{uv}$ lies strictly above $T$. If $u$ is a convex vertex, then $u$ sees a vertex $v$ of $T$ if and only if $\overline{uv}$ is a nonhorizontal line segment that lies on or above $T$. Khodakarami et al. \cite{range} considered guards with a guard range and presented a fixed-parameter algorithm that finds a minimum guarding set in $O(4^k\cdot k^2 \cdot n)$ time, where $k$ is the terrain guard range.
\subsection{Result and Problem Definition}
In this paper, we define the CTG problem with two-sided guards and propose an optimal algorithm for the 1.5D CTG problem with two-sided guards. To the best of our knowledge, the 1.5D CTG problem with two-sided guards has never been examined.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{fig1.jpg}
\caption{Point $p$ is two-sided guarded by $v_1$ and $v_n$}
\label{fig1}
\end{center}
\end{figure}
{\bf Definition 1} (Two-Sided Guarding). A point $p$ on a 1.5D terrain is two-sided guarded if there exist two distinct guards $u$, which is on or to
the left of $p$, and $v$, which is on or to the right of $p$, such that $p$ can be seen by both $u$ and $v$. Furthermore,
the guards $u$ and $v$ are called a left guard and a right guard of $p$.
Fig.~\ref{fig1} illustrates an example where vertex $v_1$ left guards $p$ and $v_n$ right guards $p$. In this paper, we define the following problem:
{\bf Definition 2} (CTGTG: Continuous Terrain Guarding with Two-Sided Guards). Given a 1.5D terrain $T$, find a vertex guard set $S$ of minimum cardinality such that every point of $T$ is two-sided guarded.
\subsection{Paper Organization}
Section 2 presents preliminaries, Section 3 demonstrates how to create a finite point set for the CTGTG model, Section 4 gives an algorithm for the CTGTG, along with its proof, and Section 5 presents our conclusions.
\section{Preliminaries}
Let $p$ and $q$ be two points on a 1.5D terrain; we write $p\prec q$ if $p$ is to the left of $q$. We denote the visible region of $p$ by
$vis(p) = \{v|v\in V(T)$ and $v$ sees $p\}$, and let $L(p)$ and $R(p)$ be the leftmost and rightmost vertices in $vis(p)$, respectively.
Given a CTGTG instance, let $OPT=\{o_1,o_2,...,o_m\}$ be an optimal guard set, where $o_k\prec o_{k+1}$ for $k=1,...,m-1$.
For a point $p$ on the terrain, let $O_R(p)$ and $O_L(p)$ be the subsets of $OPT$ such that $p$ is right guarded by every guard in $O_R(p)$ and left guarded by every guard in $O_L(p)$.
We also define $N_i^R$ as the rightmost point on the terrain that is not right guarded by
$\{ o_i,o_{i+1},...,o_m\}$ and $N_i^L$ as the leftmost point on the terrain that is not left guarded by $\{ o_1,o_2,...,o_i\}$.
An important visible property on 1.5D terrains is as follows:
\begin{lemma}[\cite{F-app}]
\label{order lemma}
Let $a$, $b$, $c$ and $d$ be four points on a terrain $T$ such that $a \prec b \prec c \prec d$. If $a$ sees $c$ and $b$ sees $d$, then $a$ sees $d$.
\end{lemma}
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{fig2.jpg}
\caption{Schematic of Lemma~\ref{order lemma}.}
\label{fig2}
\end{center}
\end{figure}
Fig.~\ref{fig2} is a schematic of Lemma~\ref{order lemma}. Because $T$ is an $x$-monotone chain, we use a straight line to indicate the relation between the $x$-coordinates of points and an arc to show the visibility relation among points on $T$. Throughout this paper, we use such straight-line schematics to simplify the explanations.
\begin{obs}
\label{obs_l}
Assume point $x$ is on $e_j$. If $x$ is left guarded by $v$, then $v_{j+1}$ is also left guarded by $v$.
\end{obs}
\begin{obs}
\label{obs_r}
Assume point $x$ is on $e_j$. If $x$ is right guarded by $v$, then $v_{j}$ is also right guarded by $v$.
\end{obs}
\section{Discretization}
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{VT_is_not_enough.jpg}
\caption{$V(T)$ is right guarded and left guarded by $\{v_1, v_2, v_4, v_5\}$, but not $T$.}
\label{V(T) is not enough}
\end{center}
\end{figure}
Even if all vertices of $V(T)$ are right guarded and left guarded, $T$ is not necessarily right guarded and left guarded. In Fig.~\ref{V(T) is not enough}, $V(T)$ is right guarded and left guarded by $\{v_1, v_2, v_4, v_5\}$ with minimum cardinality. The vertices $v_1$ and $v_2$ are left guarded by $v_1$ and right guarded by $v_2$. Vertices $v_4$ and $v_5$ are left guarded by $v_4$ and right guarded by $v_5$. Vertex $v_3$ is left guarded and right guarded by $v_2$ and $v_4$, respectively. However, only $v_3$ can right guard $p$ and left guard $q$, where $p$ is on $e_2$ and $q$ is on $e_3$, and $v_3 \notin \{v_1, v_2, v_4, v_5 \}$. This example shows that we must create a point set $X$ such that if $X$ is right guarded and left guarded, then $T$ is as well.
{\bf Definition 3} (Boundary Point). If the line $\overline{v_iv_j}$ and an edge $e_k$ have an intersection point $f\notin \{ v_k, v_{k+1} \}$, and both $v_i$ and $v_j$ can see $f$, then $f$ is a boundary point.
In Fig.~\ref{fig:7}, we provide an example with four boundary points: $f_1, f_2, f_3$ and $f_4$. Boundary point $f_1$ is induced by $v_6$ and $f_2$ by $v_4$, while boundary points $f_3$ and $f_4$ are induced by $v_1$. Thus $e_1$ has two boundary points, $f_1$ and $f_2$, and each of $e_4$ and $e_6$ has one boundary point.
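Computing a candidate boundary point reduces to intersecting the infinite line through $v_i$ and $v_j$ with the edge $e_k$; a minimal sketch of that primitive follows (the helper name is ours, and the visibility of $f$ from both $v_i$ and $v_j$ must still be checked separately):

```python
def line_segment_intersection(a, b, c, d, eps=1e-12):
    """Intersection of the infinite line through a and b with segment cd, or None."""
    (ax, ay), (bx, by), (cx, cy), (dx, dy) = a, b, c, d
    rx, ry = bx - ax, by - ay      # direction of the line through a, b
    sx, sy = dx - cx, dy - cy      # direction of the segment cd
    denom = rx * sy - ry * sx      # 2D cross product r x s
    if abs(denom) < eps:
        return None                # parallel (or degenerate): no single point
    # Solve a + t*r = c + u*s; the point lies on the segment iff 0 <= u <= 1.
    u = ((cx - ax) * ry - (cy - ay) * rx) / denom
    if not (0.0 <= u <= 1.0):
        return None
    return (cx + u * sx, cy + u * sy)
```

The candidate $f$ returned here is kept as a boundary point only if it is not an endpoint of $e_k$ and both defining vertices see it, as required by Definition 3.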
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{fig7.jpg}
\caption{Points $f_1$, $f_2$, $f_3$ and $f_4$ are boundary points on $T$.}
\label{fig:7}
\end{center}
\end{figure}
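For concreteness, Definition 3 can be evaluated directly on a small $x$-monotone terrain. The sketch below is a naive enumeration (not the $O(n^2)$ procedure cited later); the sample terrain, the helper names, and the visibility test (a point on an $x$-monotone chain is blocked exactly by a vertex strictly above the sight line) are our own illustrative assumptions.

```python
from fractions import Fraction

def cross(o, a, b):
    # z-component of (a - o) x (b - o); positive when b lies left of ray o->a
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def sees(V, p, q):
    # on an x-monotone chain, p and q see each other iff no vertex strictly
    # between them (in x) lies strictly above the segment pq
    a, b = (p, q) if p[0] < q[0] else (q, p)
    return all(cross(a, b, v) <= 0 for v in V if a[0] < v[0] < b[0])

def line_hits_edge(a, b, c, d):
    # exact intersection of the infinite line ab with the closed segment cd
    denom = (b[0]-a[0])*(d[1]-c[1]) - (b[1]-a[1])*(d[0]-c[0])
    if denom == 0:
        return None
    t = Fraction(-cross(a, b, c), denom)
    if not (0 <= t <= 1):
        return None
    return (c[0] + t*(d[0]-c[0]), c[1] + t*(d[1]-c[1]))

def boundary_points(V):
    # Definition 3: f = line(v_i v_j) intersected with e_k, f not an
    # endpoint of e_k, and both v_i and v_j see f
    pts = set()
    for i in range(len(V)):
        for j in range(i+1, len(V)):
            for k in range(len(V)-1):
                f = line_hits_edge(V[i], V[j], V[k], V[k+1])
                if f is None or f in (V[k], V[k+1]):
                    continue
                if sees(V, V[i], f) and sees(V, V[j], f):
                    pts.add(f)
    return pts
```

On the five-vertex "double valley" terrain used in the test below, the line through $v_1$ and $v_3$ produces one boundary point on the last edge, and symmetrically the line through $v_3$ and $v_5$ produces one on the first edge.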
\begin{lemma}
\label{x lemma}
For an edge $e_i$ on terrain $T$, there exist at most two points $p$ and $q$, excluding $v_i$ and $v_{i+1}$, such that $e_i$ is completely two-sided guarded whenever $p$ and $q$ are two-sided guarded.
\end{lemma}
\begin{proof}
According to the number of boundary points on $e_i$, we consider the proof in the following cases: edge $e_i$ has no boundary points, or it has one, two, or $k$ boundary points (where $k \geq$ 3).
In the first case, we assume $e_i$ has no boundary points. Let point $p \notin \{ v_i,v_{i+1} \}$ be on edge $e_i$. If $p$ is right guarded and left guarded, then edge $e_i$ is also right guarded and left guarded.
In the second case, we assume $e_i$ has one boundary point $f$. We divide the edge into two line segments $\overline{v_i f}$ and $\overline{f v_{i+1}}$, to each of which the first case applies. Therefore, we create two points: $p \notin \{ v_i,f\}$ on line segment $\overline{v_i f}$ and $q \notin \{ f,v_{i+1}\}$ on line segment $\overline{fv_{i+1}}$. If $p$ and $q$ are right guarded and left guarded, then $e_i$ is also right guarded and left guarded.
In the third case, we assume $e_i$ has two boundary points $f_1$ and $f_2$. We divide the edge into three line segments $\overline{v_if_1}$, $\overline{f_1f_2}$ and $\overline{f_2v_{i+1}}$. The line segments $\overline{v_if_1}$ and $\overline{f_2v_{i+1}}$ can be reduced to the first case. Therefore, we create two points: $p \notin \{ v_i,f_1 \}$ on line segment $\overline{v_if_1}$ and $q \notin \{ f_2,v_{i+1} \}$ on line segment $\overline{f_2v_{i+1}}$.
If $p$ and $q$ are left guarded and right guarded, then line segment $\overline{f_1f_2}$ is also left guarded and right guarded.
In the final case, we assume $e_i$ has $k$ boundary points $f_1,...,f_k$. We divide the edge into $k+1$ line segments $L = \{ \overline{v_if_1},\overline{f_1f_2},...,\overline{f_k v_{i+1}} \} $. The line segments $\overline{v_if_1}$ and $\overline{f_kv_{i+1}}$ can be reduced to the first case.
Therefore, we create two points: $p \notin \{ v_i, f_1 \}$ on line segment $\overline{v_if_1}$ and $q \notin \{ f_k, v_{i+1} \}$ on line segment $\overline{f_kv_{i+1}}$.
If $p$ and $q$ are left guarded and right guarded, then each line segment $\overline{f_cf_{c+1}} \in L$ is also left and right guarded.
\end{proof}
From the construction of Lemma~\ref{x lemma}, in order to completely two-sided guard a terrain, it is sufficient to first select a finite subset $X$ of positions from the terrain to be two-sided guarded, such that $|X|\leq 2(n-1)$.
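The discretization of Lemma~\ref{x lemma} can be made concrete: on each edge, sort its boundary points and pick one interior point of the first and one of the last subsegment (any interior points work; the midpoints below are our own arbitrary choice for illustration).

```python
from fractions import Fraction

def edge_samples(vi, vj, boundary):
    # points to two-sided guard on edge (vi, vj), per the case analysis:
    # no boundary points -> one interior point suffices; otherwise one
    # point in the first subsegment and one in the last subsegment
    def mid(a, b):
        return (Fraction(a[0]+b[0], 2), Fraction(a[1]+b[1], 2))
    if not boundary:
        return [mid(vi, vj)]
    stops = [vi] + sorted(boundary) + [vj]
    return [mid(stops[0], stops[1]), mid(stops[-2], stops[-1])]
```

Summing over all $n-1$ edges recovers the bound $|X| \leq 2(n-1)$ stated above.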
\section{Optimal Algorithm for the CTGTG}
In this section, we present an optimal algorithm for the CTGTG. The idea of the algorithm follows from Observation~\ref{include $v_1$ and $v_n$}. In each step of our algorithm, we add a vertex $v_i$ to our result $S$ such that if $v_i \notin OPT$ then $v_i$ can replace a vertex $v_j \in OPT$ and $|S| = |OPT|$.
\begin{obs}
\label{include $v_1$ and $v_n$}
The optimal solution of the CTGTG includes $v_1$ and $v_n$.
\end{obs}
This is because, in the CTGTG, $T$ must be both right guarded and left guarded, and only $v_1$ can left guard $v_1$, while only $v_n$ can right guard $v_n$.
\begin{figure}
\begin{center}\includegraphics[scale=0.8]{fig3.jpg}
\caption{Position of $R(N^R_i)$ and $g \in O_R(N^R_i)$.}
\label{fig3}
\end{center}
\end{figure}
\begin{lemma}
\label{between}
$R(N^R_i)$ and $ g \in O_R(N^R_i)$ do not lie on the right side of $o_i$.
\end{lemma}
\begin{proof}
Assume $R(N^R_i)$ (resp.\ $g \in O_R(N^R_i)$) is on the right side of $o_i$ and $x$ is on the edge $e_j =\overline{v_jR(N^R_i)}$ (resp.\ $\overline{v_jg}$). We know that $x$ is right guarded by $o_k$ and that $o_k$ is on the right side of $R(N^R_i)$ (resp.\ $g$). According to Lemma \ref{order lemma}, if $o_k$ right guards $x$, then $o_k$ also right guards $N^R_i$. This contradicts the definition of $N^R_i$, since $o_k$ sees $N^R_i$. The schematic of Lemma~\ref{between} is given in Fig~\ref{fig3}.
\end{proof}
\begin{lemma}
\label{between_l}
$L(N^L_i)$ and $g \in O_L(N^L_i)$ do not lie on the left side of $o_i$.
\end{lemma}
\begin{lemma}
\label{can't guard}
If $R(N^R_i) \notin O_R(N^R_i)$ and
$R(N^R_i) \prec x$,
then $g \in O_R(N^R_i)$ cannot left guard $x$.
\end{lemma}
\begin{proof}
We prove Lemma~\ref{can't guard} in two steps. The first step explains that if $g \in O_R(N^R_i)$ left guards $x_k \in \{x_j \mid R(N^R_i) \prec x_j \}$, then $x_k$ and $N^R_i$ can see each other. The second step explains that if $x_k$ and $N^R_i$ can see each other, and $g$ left guards $x_k$, then $N^R_i$ is right guarded by $g$.
First, let $N^R_i \prec g \prec R(N^R_i) \prec x_k$. Because $R(N^R_i)$ sees $N^R_i$, according to Lemma~\ref{order lemma}, if $x_k$ and $g$ see each other, then $x_k$ and $N^R_i$ also see each other. This is illustrated in Fig.~\ref{fig:4}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{fig4.jpg}
\caption{If $O_R(N^R_i)$ left guard $x_k$, then $x_k$ and $N^R_i$ see each other.}
\label{fig:4}
\end{center}
\end{figure}
In the second step, we assume that $x_k$ is right guarded by $o_j$ and that $x_k$ is on the edge $e_k$ on the right side of $R(N^R_i)$. We know that if $g \in O_R(N^R_i)$ left guards $x_k$, then $x_k$ and $N^R_i$ see each other. Because $o_j$ right guards $x_k$ and sees $v_k$, if $x_k$ sees $N^R_i$ then $o_j$ right guards $N^R_i$ too, as illustrated in Fig.~\ref{fig:5}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{fig5.jpg}
\caption{If $x_k$ and $N^R_i$ see each other, then $o_j$ right guards $N^R_i$.}
\label{fig:5}
\end{center}
\end{figure}
\end{proof}
\begin{lemma}
\label{can't guard l}
If $L(N^L_i) \notin O_L(N^L_i)$ and $x \prec L(N^L_i)$, then $g \in O_L(N^L_i)$ cannot right guard $x$.
\end{lemma}
\begin{lemma}
\label{can't guard_1}
If $R(N^R_i) \notin O_R(N^R_i)$, $x$ is right guarded by $o_j$ and $i \leq j \leq m$, then $x$ cannot lie between $g \in O_R(N^R_i)$ and $R(N^R_i)$.
\end{lemma}
\begin{proof}
We assume $x$ is on edge $e_k = \overline{v_kR(N^R_i)}$ and $x$ is right guarded by $o_j$. We know that $o_j$ right guards $v_k$ by Observation~\ref{obs_r}. According to Lemma~\ref{order lemma}, if $x$ is right guarded by $o_j$, then $N^R_i$ is right guarded by $o_j$. Therefore, we know that if $R(N^R_i) \notin O_R(N^R_i)$, then $x$ cannot lie between $g \in O_R(N^R_i)$ and $R(N^R_i)$.
\end{proof}
\begin{lemma}
\label{can't guard l_1}
If $L(N^L_i) \notin O_L(N^L_i)$, $x$ is left guarded by $o_j$, and $1 \leq j \leq i$, then $x$ cannot lie between $g \in O_L(N^L_i)$ and $L(N^L_i)$.
\end{lemma}
\begin{theorem}
If $R(N^R_i) \notin O_R(N^R_i)$, then $R(N^R_i)$ can replace $g \in O_R(N^R_i)$.
\end{theorem}
\begin{proof}
Based on Lemma~\ref{between}, Lemma~\ref{can't guard} and Lemma~\ref{can't guard_1},
if $g \in O_R(N^R_i)$ and $R(N^R_i) \notin O_R(N^R_i)$, then $g$ cannot left guard $x_k \in \{ x_j \mid N^R_i \prec x_j \} $. Because $g \prec R(N^R_i)$, we know $vis(R(N^R_i)) \supseteq vis(g)$ by Lemma~\ref{order lemma}.
\end{proof}
\begin{theorem}
If $L(N^L_i) \notin O_L(N^L_i)$, then $L(N^L_i)$ can replace $g \in O_L(N^L_i)$.
\end{theorem}
\section{Complexity}
Because our approach has two phases, we first discuss the complexity of the discretization. We obtain the boundary points for a vertex $v$ on $E(T)$ in $O(n)$ by \cite{map}. Therefore, we compute all boundary points for each vertex of $V(T)$ on each edge $e \in E(T)$ in $O(n^2)$. We obtain at most $2|V(T)|$ boundary points in $O(n^2)$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{fig8-1.jpg}
\caption{If $L(v)$ cannot see $x$ and $v$ sees $x$ then $v=L(x)$.}
\label{fig:8}
\end{center}
\end{figure}
Next, we demonstrate how to compute an optimal solution for the CTGTG. In step 1, we add $v_1$ and $v_n$ to our solution. In step 2, we compute $vis(v_1)$ and $vis(v_n)$. In step 3, we add $R(x)$ to our solution, where $x$ is the rightmost point that is not right guarded; we repeat step 3 until $X$ is right guarded. In step 4, we add $L(x)$ to our solution, where $x$ is the leftmost point that is not left guarded; we repeat step 4 until $X$ is left guarded. Finally, all points of $X$ are right guarded and left guarded.
We show that our algorithm for the CTGTG runs in $O(n)$ using two steps. Before the algorithm begins, we can compute $R(x)$ and $L(x)$ for each point of $X$ in $O(n)$. After this computation, the algorithm itself runs in $O(n)$. Therefore, our proposed algorithm for the CTGTG runs in $O(n)$.
\begin{algorithm}[h]
\SetAlgoNoEnd
\caption{Compute all $L(x)$}
\label{Compute all $L(x_i)$}
\KwIn{$T$: terrain, $X$: point set}
\KwOut{\{ $L(x_i)|x_i \in X$ \} }
$Q \leftarrow X \cup V(T)$
\For{$q_i \in Q$ processed from left to right } {
$q_j = q_{i-1}$
\While{$L(q_i) = \emptyset$}{
\If{$q_i$ sees $L(q_j)$}{
\If{$L(q_j)$ is not $v_1$}{
$q_j=L(q_j)$
}
\Else{
$L(q_i)=v_1$
}
}
\Else{
$L(q_i)=q_j$
}
}
}
\For{$x_i \in X$ processed from left to right } {
Return $L(x_i)$
}
\end{algorithm}
\begin{lemma}
\label{L(xi)}
If $L(v)$ cannot see $x$ and $v$ sees $x$ then $v=L(x)$.
\end{lemma}
\begin{proof}
Assume $p \prec L(v)\prec v \prec x$, $L(v)$ cannot see $x$, and $v$ can see $x$. If $p$ sees $x$ and cannot see $v$, then there exists a vertex $q$ with $p \prec q \prec L(v)$ that lies above the line $\overline{vL(v)}$, as illustrated in Fig.~\ref{fig:8}. However, this contradicts the assumption that $L(v) \neq q$.
\end{proof}
We propose Algorithm~\ref{Compute all $L(x_i)$} to compute $L(x)$ for all points of $X$ in $O(n)$ according to Lemma~\ref{L(xi)} and Lemma~\ref{order lemma}. We unite $X$ and $V(T)$ in a set $Q$. Algorithm~\ref{Compute all $L(x_i)$} finds $L(q_i) \in Q$ from left to right.
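For intuition, $L(x)$ (the leftmost vertex that sees $x$) can also be computed by a naive quadratic scan; Algorithm~\ref{Compute all $L(x_i)$} produces the same values in $O(n)$ by reusing previously computed $L$-values. The sketch below is a reference implementation under our own conventions: an $x$-monotone integer terrain, with visibility blocked exactly by a vertex strictly above the sight line.

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o)
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def sees(V, p, q):
    # visibility on an x-monotone chain: blocked only by a vertex
    # strictly between p and q (in x) lying strictly above segment pq
    a, b = (p, q) if p[0] < q[0] else (q, p)
    return all(cross(a, b, v) <= 0 for v in V if a[0] < v[0] < b[0])

def L(V, x):
    # leftmost vertex of the terrain that sees the point x
    # (x itself counts when x is a vertex); naive O(n^2) reference
    for v in V:
        if v[0] > x[0]:
            break
        if v == x or sees(V, v, x):
            return v
    return x
```

Running this on every point and comparing with the output of the linear-time pointer-chasing scheme is a convenient way to sanity-check an implementation of Algorithm~\ref{Compute all $L(x_i)$}.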
We prove that the running time of Algorithm~\ref{Compute all $L(x_i)$} is $O(n)$.
\begin{theorem}
Algorithm~\ref{Compute all $L(x_i)$} runs in O$(n)$.
\end{theorem}
\begin{proof}
We count the number of times $q_i$ sees $L(q_j)$ in the algorithm.
If $q_i$ sees $L(q_j)$, then the algorithm does not visit the vertices between $q_i$ and $L(q_j)$. Therefore, the number of times $q_i$ sees $L(q_j)$ is at most once for each point of $Q$. If $q_i$ does not see $L(q_j)$, then $q_i$ has found $L(q_i)$. Therefore, the number of times $q_i$ does not see $L(q_j)$ is also at most once for each point of $Q$.
\end{proof}
After computing $L(x_i)$ and $R(x_i)$ for $X$, we run the algorithm for the CTGTG in $O(n)$. We divide our algorithm into left-guarding and right-guarding phases; we therefore provide the algorithm for left-guarding, which can be implemented in $O(n)$.
\begin{algorithm}[h]
\SetAlgoNoEnd
\caption{Left-guarding}
\label{A2}
\KwIn{$T$: terrain}
\KwIn{$X$: point set}
\KwOut{$P$ : left-guarding set}
$P$ is null;
$V(T')$=$V(T)$;
\For{$x_i \in X$ processed from left to right } {
\While {$g(x_i)$ is null}{
$p_j$ is rightmost vertex in $P \cap V(T')$;
\If{$x_i$ is guarded by $p_j$}
{
$g(x_i)$ is $p_j$;
Remove the vertices between $x_i$ and $p_j$ from $V(T')$;
}
\ElseIf{$p_j$ on the left side of $L(x_i)$}
{
$g(x_i)$ be the vertex $L(x_i)$ ;
Add $g(x_i)$ to $P$;
Remove the vertices between $x_i$ and $L(x_i)$ from $V(T')$;
}
\Else
{Remove $p_j$ from $V(T')$;}
}
}
\Return $P$
\end{algorithm}
\begin{theorem}
Algorithm~\ref{A2} runs in $O(n)$.
\end{theorem}
\begin{proof}
For each $x_i$, we examine whether $x_i$ is guarded by $p_a \in P$ from $x_i$ to $g(x_i)$. If $g(x_i) = p_a = v_b$, then Algorithm~\ref{A2} will not visit the points and vertices between $x_i$ and $v_b$.
We count the number of times $x_i$ is not seen by $P$.
We can check $p_j$ from $x_i$ to $L(x_i)$.
If $p_j$ does not see $x_i$, then we will not check $p_j$ for $\{ x_k \mid x_i \prec x_k \}$.
Assume $p_j$ does not see $x_i$, $p_k$ sees $x_i$, and $p_k \prec p_j \prec x_i$. If $\{x_l \mid x_i \prec x_l \}$ is seen by $p_j$, then $p_k$ sees $\{x_l \mid x_i \prec x_l\}$ according to Lemma~\ref{order lemma}.
The number of times $X$ is not seen by $P$ is $|V(T)|$, and the number of times $X$ is seen by $P$ is $|X|$.
Therefore, the algorithm visits the point and vertex at most $2|X|+|V(T)|$ times. After computing all $L(x_i)$, Algorithm~\ref{A2} runs in $O(n)$.
\end{proof}
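The behavior of Algorithm~\ref{A2} can be emulated by a simple quadratic greedy reference: sweep the points left to right and, whenever the current point is not seen by any chosen guard, add $L(x)$. The sketch below is our own naive reference (it uses a quadratic visibility test and is not the $O(n)$ implementation analyzed above).

```python
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def sees(V, p, q):
    # blocked only by a vertex strictly between p and q lying above pq
    a, b = (p, q) if p[0] < q[0] else (q, p)
    return all(cross(a, b, v) <= 0 for v in V if a[0] < v[0] < b[0])

def leftmost_seer(V, x):
    # L(x): the leftmost vertex that sees x
    for v in V:
        if v == x or sees(V, v, x):
            return v
    return x

def left_guarding(V, X):
    # greedy left-guarding: add L(x) whenever x is not yet left guarded
    P = []
    for x in sorted(X):
        if not any(p == x or (p[0] <= x[0] and sees(V, p, x)) for p in P):
            P.append(leftmost_seer(V, x))
    return P
```

On the small terrain in the test, guarding all vertices requires $\{v_1, v_3\}$: $v_1$ left guards everything except the second valley vertex, which forces one more guard.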
\section{Conclusion}
In this paper, we considered the CTGTG problem and devised an algorithm that determines a minimum-cardinality vertex set that guards $T$ under two-sided guarding. We showed that the CTGTG problem can be reduced to the discrete terrain guarding problem with at most $2|V(T)|$ points in $O(n^2)$, and we solved the reduced problem with our algorithm in $O(n)$, where $n$ is the number of vertices of $T$.
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{sec:introduction}
In this paper, we consider a continuous mapping $f: X \to M$ of a topological space $X$ to a manifold $M$. We think of $f$ as a family of fibers $f^{-1}p \subseteq X$ parameterized by points $p$ in $M$. We are interested in topological properties of these fibers that are stable under small perturbations of the map $f$. Besides being of mathematical interest in its own right, this stability requirement is important for applications, where $f$ may be subject to perturbations from measurement noise or computational error. Stability is particularly appealing for data analysis, as data is inherently noisy.
We study the homology groups of the fibers $\Hfunc_j(f^{-1}p)$, and their dimensions the Betti numbers $\beta_j(f^{-1}p)$. We are mainly interested in lower bounds on the Betti numbers that continue to hold for small perturbations of $f$. Lower bounds are important because linearly independent elements of $\Hfunc_j(f^{-1}p)$ that remain linearly independent under small perturbations are regarded as interesting features of the family. The stability requirement is a serious one:
Even when $\beta_j(f^{-1}p)$ is large, there can exist perturbations $\tilde f$ arbitrarily close to $f$ such that $\beta_j \big( \tilde f^{-1}q \big)=0$ for every $q \in M$.
\paragraph{Conventions.} In this introduction, we fix a map $f:X\to M$, where $M$ is a manifold, a metric $d$ on $M$, and an orientation of $M$. All homology groups are with field coefficients and of fixed degree $j$, and all open sets considered are connected.
\paragraph{Betti number lower bound.} The simplest statement of the type of result in this paper is the following: To every open set $U\subseteq M$, we associate a nonnegative integer $\mathcal P(U)$ called the {\em persistent dimension} of $U$ with the following properties:
\begin{enumerate}
\item{\em Betti number lower bound}: $\beta_j(f^{-1}p)\geq \mathcal P(U)$ for all $p\in U$, i.e.
$\mathcal P(U)$ is a lower bound for the Betti numbers of all the fibers over $U$.
\item {\em Stability}: For every perturbed $\tilde f$ that is $\epsilon$ close to $f$, we have
$\beta_j (\tilde f^{-1}p) \geq \mathcal P(U)$ for all $p\in U$ that are more than $\epsilon$ away from the boundary of $U$. In other words, $\mathcal P(U)$ is still a lower bound for the Betti numbers of the fibers if $U$ is shrunk by~$\epsilon$.
\end{enumerate}
(The metric in which we ask that $\tilde f$ be $\epsilon$-close to $f$ is $\sup_p d(fp,\tilde fp)$.)
It follows from 2 that for all $p\in U$, there is an $\epsilon$ so that if $\tilde f$ is $\epsilon$ close to $f$, then
$\beta_j(\tilde f^{-1}p)\geq\mathcal P(U)$.
In other words, the lower bounds on Betti numbers provided by $\mathcal P(U)$ are meaningful even in the presence of small enough error in the determination of $f$.
We would like to say that the $\mathcal P(U)$-dimensional part of $\Hfunc_j(f^{-1}p)$ guaranteed by 1
forms a family over $U$.
To do that, we need to recall the idea of a local system.
\paragraph{Local Systems.} A {\em local system} $\mathcal{L}$ over a space $U$, also called a locally constant sheaf over $U$, is a ``family'' of vector spaces parameterized by points in $U$. It may be defined as the following data:
\begin{enumerate}
\item a vector space $\mathcal L_p$ for every point $p\in U$ called the stalk of $\mathcal L$ at $p$, and
\item an isomorphism $\mathcal L_\gamma: \mathcal L_p\to \mathcal L_q$ for every homotopy class
$\gamma$ of paths from $p$ to $q$ called the monodromy along $\gamma$.
\end{enumerate}
The isomorphisms $\mathcal L_\gamma$ are required to be compatible with composition of paths.
Local systems over $U$ form a category; morphisms $\mathcal L \to \mathcal L'$ are collections of
linear maps $\mathcal L_p\to \mathcal L'_p$ for each $p\in U$ that commute with the monodromy maps.
If $U'$ is a subset of $U$, a local system $\mathcal L$ over $U$ restricts to a local system $\mathcal L|U'$ over $U'$ by throwing away all the data that does not lie in $U'$.
If $U$ is connected, the vector spaces $\mathcal L_p$ all have the same dimension and if further $U$
is simply connected, they may all be identified with a single vector space $V$
so that the monodromies $\mathcal L_\gamma$ are all the identity on $V$.
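For concreteness, here is the standard first example of a nontrivial local system; the specific monodromy constant below is our own illustration. A rank-one local system on the circle is determined up to isomorphism by the monodromy along the generating loop:

```latex
% Rank-one local systems on U = S^1 over a field k: all stalks are k, and
% the monodromy along a loop winding n times is multiplication by c^n.
\mathcal L_p = k \ \text{ for all } p \in \mathbb S^1,
\qquad
\mathcal L_{\gamma^n} = (\,\cdot\; c^n)\colon k \longrightarrow k,
\qquad c \in k^\times .
% For c = 1 this is the constant local system; for k = R and c = -1 one
% obtains the "Mobius" local system, which has no nonzero global section.
```

The case $c = -1$ already shows that stalk dimension alone does not determine a local system.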
\paragraph{The Persistent Local System.} For every connected
open set $U\subseteq M$, we construct a local system $\mathcal L(U)$ over $U$ called the {\em persistent local system} of $U$ with the following properties:
\begin{enumerate}
\item{\em Relation to homology of fibers}: For every point $p\in U$, the stalk $\mathcal L(U)_p$ of $\mathcal L(U)$ at $p$ is naturally a subquotient of $\Hfunc_j(f^{-1}p)$, the $j$-th homology of the fiber over~$p$.
\item{\em Stability}: For every perturbed $\tilde f$ that is $\epsilon$ close to $f$, $\mathcal L(U)|U^\epsilon$ is naturally a subquotient of
$ \tilde{\mathcal{L}}(U^\epsilon) $ where $U^\epsilon$ is the open subset of $U$ consisting of points that are more than distance $\epsilon$ from the boundary, $\mathcal L(U)|U^\epsilon$ is the restriction of $\mathcal L(U)$ to $U^\epsilon$, and
$ \tilde{\mathcal{L}} $ is the persistent local system on $U^\epsilon$ constructed from $\tilde f$.
\end{enumerate}
We define $\mathcal P(U)$, the persistent dimension of $U$, to be the stalk dimension of $\mathcal L(U)$. The Betti number lower bounds above follow from (1) and (2) above, since the dimension of any vector space $V$ is bounded from below by the dimension of any subquotient of $V$.
\paragraph{Sheaves and Cosheaves.} It is not surprising that sheaf theory is a useful tool to study these questions. It was introduced by Leray 75 years ago precisely to study the homology of the fibers of a map. We develop the sheaf theory we need (constructible sheaves and cosheaves) in \S\ref{sec:sheaves}-\ref{sec:cosheaves} below. Local systems are examples of both sheaves and cosheaves.
The { $j$-th Leray homology cosheaf of a map} $f:X\to M$ is a cosheaf $\cosheaf{F}_j$ on $M$ that contains the information of the $j$-th homology of the fibers $\Hfunc_j(f^{-1}p)$ for all points $p\in M$, all woven together in one algebraic object. In practice, it is amenable to computation. The $j$-th homology sheaf $\sheaf{F}_j$ is a similar dual object.
\paragraph{The Case $M= \mathbb{R}$ and Persistent Homology.} If the manifold $M$ is the space of real numbers, then there is a remarkably simple construction of the persistent local systems $\mathcal L(U)$. Let $\cosheaf{F}$ be the Leray cosheaf of $f$ and $\cosheaf{F}|U$ be its restriction to $U$. Then $\mathcal L(U)$ can be characterized as the largest local system contained in $\cosheaf{F}|U$ as a direct summand. (If $\sheaf{F}$ is the Leray sheaf, $\mathcal{L}(U)$ can also be characterized as the largest local system contained in $\sheaf{F}|U$ as a direct summand.)
So $\mathcal L(U)$ constructed in this way satisfies the two properties (1) {\em Relation to homology of the fibers} and (2) {\em Stability}. This construction and these properties of it were already known to the Persistent Homology community \cite{levelset_stability, patel}.
Since $U$ is connected and simply connected, the stalks of the local system $\mathcal L(U)$ are all identified with a single vector space $V$.
Most of the persistent homology literature focuses on a special case of our situation: there is a space $Y$ with a function $h:Y\to \mathbb R$, and we are interested in the homology of the sublevel sets $h^{-1}(-\infty,r]$ as a function of $r$.
For every pair $r \leq s$, the image of the homomorphism $\Hfunc_j \big( h^{-1}(-\infty, r] \big)
\to \Hfunc_j \big( h^{-1}(-\infty, s] \big)$ is called the \emph{persistence vector space} associated to the interval $(r,s)$~\cite{Edelsbrunner2002}.
The collection of all such images, called the \emph{rank function} of $h$, uniquely defines what is called the \emph{barcode} or the \emph{persistence diagram} of $h$ \cite{CSEdH, Carlsson2009, Patel2018}.
This special case translates into a case of ours by concocting a function $f:X\to \mathbb R$ such that the sublevel sets are its fibers. Take
$X= \big\{(y,r)\in Y \times \mathbb R \; \big| \; h(y)\leq r \big\}$
and $f(y,r)=r$.
The persistence vector space of $h$ for an interval $(r,s)$ is the persistent local system
of $f$ over $(r,s)$.
In this way, the persistent local system behaves very much like the well known rank function
in persistent homology.
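As a toy illustration of the rank of a persistence vector space, consider $\Hfunc_0$ of a filtered graph (our own minimal model, with vertex heights and each edge entering the filtration at the larger of its endpoint heights). With field coefficients, the rank of the image of $\Hfunc_0 \big( h^{-1}(-\infty,r] \big) \to \Hfunc_0 \big( h^{-1}(-\infty,s] \big)$ equals the number of components at level $s$ that meet the sublevel set at level $r$, which a union-find computes directly:

```python
def h0_image_rank(heights, edges, r, s):
    # rank of H_0(sublevel r) -> H_0(sublevel s) for a filtered graph;
    # an edge (u, v) enters the filtration at max(heights[u], heights[v])
    parent = list(range(len(heights)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for u, v in edges:
        if max(heights[u], heights[v]) <= s:
            parent[find(u)] = find(v)
    # count level-s components containing a vertex of level at most r
    return len({find(u) for u, hu in enumerate(heights) if hu <= r})
```

With heights $[0,2,0,3,0]$ on a path graph, the rank over the interval $(1,s)$ drops from 3 to 2 to 1 as $s$ passes the edge heights $2$ and $3$, exactly as the barcode of this filtration would record.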
There is work on the persistent homology of circle valued functions $f : Y \to \mathbb{S}^1$ \cite{Burghelea2013}.
We believe the persistent local systems of $f$ are closely related to their invariants.
\paragraph{This paper.} This paper was motivated by our desire to generalize this very beautiful theory of persistent vector spaces to functions with values in any manifold. One might ask: why not just do the same thing? The construction of the persistent local systems $\mathcal L(U)$ described above makes sense for any manifold $M$. However, it does not work: the result does not satisfy the stability condition. This is the first indication of many aspects of the problem that are much more complicated for higher-dimensional manifolds than for $\mathbb R$. In fact, one can show that there can be no construction of persistent local systems $\mathcal L(U)$ that depends only on $\sheaf{F}$, gives the ``right'' answer for fibrations, and satisfies stability; there is similarly none for $\cosheaf{F}$.
Our construction of the persistent local systems uses both the sheaf $\sheaf{F}$ and the cosheaf $\cosheaf{F}$, plus a map between them $\Ffunc: \sheaf{F}\to \cosheaf{F}$ constructed from the orientation class of~$M$. We call this data a {\em bisheaf}. In terms of computability, a bisheaf is not much more complicated than a sheaf or a cosheaf. However, since the map $\Ffunc$ mixes objects from different categories, the theory of bisheaves is complicated. For example, bisheaves form an interesting category, but unlike sheaves and cosheaves, it is not an abelian category.
Given the bisheaf $\Ffunc: \sheaf{F} \to \cosheaf{F}$, the construction of the persistent local system $\mathcal L(U)$ proceeds in four steps. Here's an outline:
\begin{enumerate}
\item Restrict the bisheaf to $U$, $\Ffunc: \sheaf{F}|U\to \cosheaf{F}|U$.
\item Construct a canonical subsheaf $\Epi(\sheaf{F}|U) \hookrightarrow \sheaf{F}|U$.
\item Construct a canonical quotient cosheaf $\cosheaf{F}|U \twoheadrightarrow \Mono (\cosheaf{F}|U)$.
\item Then $\mathcal L(U)$ is the image of the composition
$\Epi(\sheaf{F}|U) \hookrightarrow \sheaf{F}|U\to \cosheaf{F}|U \twoheadrightarrow \Mono (\cosheaf{F}|U)$.
\end{enumerate}
We would have liked the persistent local systems $\mathcal L(U)$ to satisfy a stacky functoriality in $U$. What is true is a rather weaker statement: if $U'$ is a subset of $U$ then $\mathcal L(U)|U'$ is naturally a subquotient of $\mathcal L(U')$. The solution we found to this is the {\em persistence stack}, Def. \ref{persistence stack} which has all the functorial properties we need. We believe that the category of bisheaves, the $\Epi$ and $\Mono$ constructions, and persistence stacks are interesting new tools of sheaf theory, and we hope they will be useful in other contexts.
\section{Maps}
\label{sec:maps}
We start by defining the class of spaces and maps we will be working with. The class we consider is chosen to be general enough to include all the maps that generally come up in geometry and applied mathematics, but controlled enough to allow the powerful technology of constructible sheaf theory.
\begin{defn}{\cite{Mather}}
A \define{Thom-Mather space} is a triple $(X, \Sstrat, \Jcontrol)$ satisfying the
following nine axioms:
\begin{enumerate}
\item $X$ is a Hausdorff, locally compact, and second countable
topological space.
\item $\Sstrat$ is a set of path-connected, locally closed subsets of
$X$ such that $X$
is the disjoint union of the elements of $\Sstrat$.
\item[] The elements of $\Sstrat$ are called the \emph{strata} of $X$.
We call $\Sstrat$ the stratification of the Thom-Mather space.
\item Each stratum of $X$ is a topological manifold (in the induced topology)
provided with a $C^\infty$ smoothness structure.
\item The set $\Sstrat$ is locally finite. That is, each point $x \in X$ has an open
neighborhood that intersects finitely many strata.
\item The set $\Sstrat$ satisfies the \emph{condition of the frontier}:
if $R,S \in \Sstrat$ and $S$ has a non-empty intersection with the
closure of $R$, then $S$ is a subset of the closure of $R$.
In this case, we say $S$ is on the \emph{frontier} of $R$.
\item[] The axiom of the frontier makes $\Sstrat$ a poset with $S \leq R$ iff
$S$ is on the frontier of $R$.
\item $\Jcontrol$ is a triple $\big \{ (T_S), (\pi_S), (\rho_S) \big \}$, where
for each $S \in \Sstrat$, $T_S$ is an open neighborhood of $S$ in $X$,
$\pi_S : T_S \to S$ is a continuous retraction onto $S$, and $\rho_S : T_S \to [0,\infty)$
is a continuous function.
\item[] The open set $T_S \subseteq X$ is called the \emph{tubular neighborhood}
of $S$ in $X$,
$\pi_S$ is called the \emph{local retraction} of $T_S$ onto $S$, and
$\rho_S$ is called the \emph{tubular function} of $S$.
We call $\Jcontrol$ the \emph{control data} of the Thom-Mather space.
\item For each stratum $S \in \Sstrat$, $S = \{ x \in T_S \; | \; \rho_S(x) = 0 \}$.
\item[] For two strata $R, S \in \Sstrat$, let $T_{R,S} = T_R \cap S$,
$\pi_{R,S} = \pi_R | T_{R,S} : T_{R,S} \to S$, and $\rho_{R,S} = \rho_R | T_{R,S}:
T_{R,S} \to [0,\infty)$.
It is possible that $T_{R,S}$ is empty, in which case these maps are the
empty mappings.
\item For any strata $R,S \in \Sstrat$, the mapping
$$ (\pi_{R, S} , \rho_{R,S} ) : T_{R, S} \to R \times (0, \infty)$$
is a smooth submersion.
\item For any strata $Q,R, S \in \Sstrat$, the following diagrams commute
for all $x \in T_{Q,S} \cap T_{R,S}$ such that $\pi_{R,S}(x) \in T_{Q,R}$:
\begin{equation*}
\xymatrix{
T_{R,S} \ar[rd]^{\pi_{Q,S}} \ar[rr]^{\pi_{R,S}} && T_{Q,R} \ar[dl]^{\pi_{Q,R}} &&&
T_{R,S} \ar[rr]^{\pi_{R,S}} \ar[dr]^{\rho_{Q,S}} && T_R \ar[dl]^{\rho_{Q,R}} \\
& T_{Q,S} & &&& & [0, \infty). &
}
\end{equation*}
\end{enumerate}
\end{defn}
Let $(X, \Sstrat, \Jcontrol)$ be a Thom-Mather space.
Choose a stratum $S \in \Sstrat$ and an open ball $B \subseteq S$
such that its closure lies entirely in $S$.
For a value $r \in (0,\infty)$, let
$$B_r = \left \{ x \in T_S \; \middle | \; \rho_S(x) < r \text{ and } \pi_S(x) \in B \right\}.$$
We call $B_r$ a \emph{basic open} of $(X, \Sstrat, \Jcontrol)$ \emph{associated}
to the stratum $S$.
Let $\Basic(X, \Sstrat, \Jcontrol)$ be the poset of all basic opens
over all strata $S \in \Sstrat$ and over all $r \in (0, \infty)$ ordered by inclusion.
The union of open sets in $\Basic(X, \Sstrat, \Jcontrol)$ is $X$.
For any two $U, V \in \Basic(X, \Sstrat, \Jcontrol)$ with $x \in U \cap V$, there is a set
$W \in \Basic(X, \Sstrat, \Jcontrol)$ such that $x \in W$ and $W \subseteq U \cap V$.
This makes $\Basic(X, \Sstrat, \Jcontrol)$ a basis for the topology on $X$.
\begin{defn}
\label{defn:constructible_map}
Let $X$ and $Y$ be Hausdorff, locally compact, second countable topological spaces.
A continuous map $f : Y \to X$ is \define{$(\Sstrat, \Jcontrol)$-constructible}
if there is a Thom-Mather space $(X, \Sstrat, \Jcontrol)$ such that for every
pair $V \subseteq U$ in $\Basic(X, \Sstrat, \Jcontrol)$ associated to a common stratum, the inclusions
\begin{equation*}
\xymatrix{
\big( Y, Y - f^{-1}(U) \big) \mono \big( Y, Y - f^{-1}(V) \big)
&&
f^{-1}(V) \mono f^{-1}(U)
}
\end{equation*}
are homotopy equivalences.
A continuous map $f : Y \to X$ is \define{constructible}
if it is $(\Sstrat, \Jcontrol)$-constructible for some Thom-Mather space
$(X, \Sstrat, \Jcontrol)$.
\end{defn}
\vspace{1em}\noindent{\bf Examples} (a) Real algebraic maps, (b) real analytic maps that are ``controlled at infinity'', (c) proper piecewise linear maps that are ``controlled at infinity'', and (d) an open dense set of proper smooth maps, are all constructible.
Here ``controlled at infinity'' means that the map $Y\to X$ factorizes in the category of analytic resp.\ PL spaces as $Y\subset Z \to X$, where $Y\subset Z$ is an inclusion of an open set, $Z-Y$ is an analytic resp.\
PL subspace of $Z$, and $Z\to X$ is proper.
Proper maps are automatically controlled at infinity: set $Z=X$.
Algebraic maps are always similarly controlled at infinity.
In all four cases, the proof has three steps:
\begin{enumerate}
\item Construct a Whitney stratified structure on the map $Z\to X$ in which $Y$ is a union of strata, using \cite{Shiota} in cases (a), (b), and (c) and \cite{Gibson} in case (d).
\item Choose the Thom-Mather data on $X$ to be the one obtained from the Whitney stratification of $X$ in \cite{Mather}.
\item Use {\em moving the wall} from \cite[Chapter 4 page 70]{SMT} to show the required homotopy equivalences.
\end{enumerate}
\begin{rmk}
We expect almost any map defined by a finite process to be constructible. Non-constructible examples, like the inclusion of a Cantor set into a manifold, come from infinite or iterative processes.
\end{rmk}
We will not require the smooth structure of a Thom-Mather space until
Section \ref{sec:dilation}.
For the next few sections, all we require is a topological stratified space.
\begin{defn}{\cite{GorMac80}}
An $n$-\define{dimensional (topological) stratified space} $X$ is an $n$-step filtration
\begin{equation*}
\emptyset = X_{-1} \subseteq \cdots \subseteq X_n = X
\end{equation*}
of a second countable Hausdorff space
where for each $d$ and each point $p \in X_d - X_{d-1}$,
there is a compact $(n-d-1)$-dimensional stratified space $L$
and a filtration preserving homeomorphism
$$h: \Rspace^d \times C(L) \to U$$
such that $h (0, \bullet) = p$.
Here $\Rspace^d$ is interpreted as a filtered space with just one step and~$\bullet$ is the cone point of $C(L)$.
We call $h$ a \emph{local parameterization} of the stratified space.
Each connected component of $X_i - X_{i-1}$ is an \define{$i$-stratum}.
\end{defn}
Let $X$ be an $n$-dimensional stratified space and call $\Sstrat$ its set of strata.
The local parameterizations imply that each $i$-stratum is a
topological $i$-manifold and that the condition of the frontier is satisfied.
This makes $\Sstrat$ a poset.
We call an open set $U \subseteq X$ an \emph{$\Sstrat$-basic open} if it is the image
of a local parameterization $h : \Rspace^i \times C(L) \to X$.
An $\Sstrat$-basic open is \emph{associated} to the unique stratum in $\Sstrat$
containing $h(\Rspace^i \times \bullet)$.
Let $\Basic(X, \Sstrat) \subseteq \Open(X)$ be the poset of $\Sstrat$-basic opens.
The set $\Basic(X, \Sstrat)$ is a basis for the topology on $X$.
It will be convenient to write a stratified space as a tuple
$(X, \Sstrat)$ where $\Sstrat$ is its poset of strata.
Note that every Thom-Mather space $(X, \Sstrat, \Jcontrol)$ is a stratified space $(X, \Sstrat)$
and $\Basic(X, \Sstrat) \subseteq \Basic(X, \Sstrat, \Jcontrol)$.
\begin{defn}
A stratified space $(X, \Kstrat)$ is a \define{triangulation}
if there is a simplicial pair $(K, K_0)$ and a homeomorphism $\phi : | K - K_0 | \to X$
such that each stratum of $\Kstrat$ is the image of a simplex in $K - K_0$.
A stratified space $(X, \Sstrat)$ is \define{triangulable} if there is a triangulation
$(X, \Kstrat)$ such that for each (open) simplex $\sigma \in \Kstrat$ there is a stratum
$S \in \Sstrat$ where $\sigma \subseteq S$.
\end{defn}
We use $\sigma$ and $\tau$ to denote (open) simplices of a triangulation $(X, \Kstrat)$.
The \emph{open star} of a simplex $\sigma \in \Kstrat$ is the subposet
$\st\; \sigma := \big \{ \tau \in \Kstrat \; | \; \sigma \leq \tau \big \} \subseteq \Kstrat.$
\begin{prop}[\cite{10.2307/2042563}]
Every Thom-Mather space $(X, \Sstrat, \Jcontrol)$ is triangulable.
\end{prop}
Throughout this paper, $M$ will denote a topological $m$-manifold without boundary.
A topological manifold is a locally Euclidean Hausdorff space.
\section{Sheaves}
\label{sec:sheaves}
In this section, we develop the theory of constructible sheaves. We introduce the notions of an episheaf and epification, which we will use to study the fibers of a
constructible~map. On a technical level, the main new device is the use of basic open sets.
For a topological space $X$,
let $\Open(X)$ be its poset of open sets ordered
by inclusion~$V \subseteq U$.
An \emph{open cover} of an open set $U \subseteq X$ is
a subposet $\Ucat \subseteq \Open(X)$ of open sets whose union is $U$ and
for every $U_i, U_j \in \Ucat$, $U_i \cap U_j$ is a union of elements in $\Ucat$.
Let $\Ab$ be the category of abelian groups.
\begin{defn}
A \define{sheaf} (of abelian groups) \emph{over} $X$ is a
contravariant functor
$$\sheaf{F} : \Open(X) \to \Ab$$
satisfying the following property.
For each open set $U \subseteq X$ and for each open cover $\cov{U}$ of $U$,
the universal map $\sheaf{F}(U) \to \mylim \sheaf{F} |_{\cov{U}}$
is an isomorphism.
A \define{sheaf map} is a natural transformation of functors
$\ubar{\alpha} : \sheaf{F} \to \sheaf{G}$.
\end{defn}
\begin{defn}
Let $(X, \Sstrat)$ be a stratified space.
A sheaf $\sheaf{F}$ over $X$ is \define{$\Sstrat$-constructible}
if for every pair of $\Sstrat$-basic opens $V \subseteq U$
associated to a common stratum,
the map
$$\sheaf{F}(V \subseteq U) : \sheaf{F}(U) \to \sheaf{F}(V)$$
is an isomorphism.
A sheaf $\sheaf{F}$ over $X$ is \define{constructible} if there is a stratified space
$(X, \Sstrat)$ for which $\sheaf{F}$ is $\Sstrat$-constructible.
Let $\Sheaf(X, \Sstrat)$ be the category of $\Sstrat$-constructible
sheaves over $X$ and sheaf maps.
Let $\Sheaf(X)$ be the category of constructible sheaves
over $X$ and sheaf maps.
\end{defn}
When defining an $\Sstrat$-constructible sheaf over $X$, it is enough to specify
a contravariant functor on a small subposet of $\Open(X)$.
Let $\Acat \subseteq \Basic(X, \Sstrat)$ be any subposet that is a basis for the topology on $X$.
For example, if $(X, \Sstrat, \Jcontrol)$ is a Thom-Mather space, then we may let
$\Acat$ be $\Basic(X, \Sstrat, \Jcontrol)$.
Let $\Ffunc : \Acat \to \Ab$ be a contravariant functor
such that for each pair $V \subseteq U$ associated to a common stratum, the map
$\Ffunc(V \subseteq U)$ is an isomorphism.
Then $\Ffunc$ uniquely generates (up to an isomorphism)
an $\Sstrat$-constructible sheaf $\sheaf{F}$ as follows.
For an arbitrary open set $U \subseteq X$, let $\Acat(U) \subseteq \Acat$
be the subposet consisting of elements contained in $U$.
Let $\sheaf{F}(U) := \lim \Ffunc |_{\Acat(U)}$.
For an arbitrary pair of open sets $V \subseteq U \subseteq X$, let $\sheaf{F}(V \subseteq U)$ be the
universal morphism between the two limits.
We let the reader check that $\sheaf{F}$ is indeed an $\Sstrat$-constructible sheaf.
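As a simple instance of this generation procedure, consider the constant sheaf, which assigns to each open set $U \subseteq X$ the group of locally constant functions $U \to \Zspace$. It is $\Sstrat$-constructible for every stratified space $(X, \Sstrat)$: each $\Sstrat$-basic open is homeomorphic to $\Rspace^i \times C(L)$ and hence connected, so the restriction map between any pair of nested $\Sstrat$-basic opens is an isomorphism $\Zspace \to \Zspace$.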
If $(X, \Kstrat)$ is a triangulation and $\sheaf{F}$ a
$\Kstrat$-constructible sheaf, then $\sheaf{F}$ is uniquely determined (up to an isomorphism)
by its value on the subposet of open stars
$$\big \{ \st \; \sigma \; \big | \; \sigma \in \Kstrat \big \} \subseteq \Open(X).$$
\begin{ex}
\label{ex:relative_sheaf}
Let $f : Y \to X$ be an $(\Sstrat, \Jcontrol)$-constructible map.
Define $\sheaf{F}_\ast$ as the $\Sstrat$-constructible sheaf generated
by assigning to each $U \in \Basic(X, \Sstrat, \Jcontrol)$
the relative singular homology group
$$\sheaf{F}_\ast ( U ) := \Hgroup_{\ast} \big( Y, Y - {f}^{-1} ( U ); \Zspace \big).$$
For two $(\Sstrat, \Jcontrol)$-basic opens $V \subseteq U$ associated to a common
stratum, the map
$$\sheaf{F}_\ast(V \subseteq U) : \sheaf{F}_\ast (U) \to \sheaf{F}_\ast(V)$$
is, by definition of an $(\Sstrat, \Jcontrol)$-constructible map, an isomorphism.
Thus $\sheaf{F}_\ast$ is an $\Sstrat$-constructible sheaf.
\end{ex}
\begin{defn}
An $\Sstrat$-constructible sheaf $\sheaf{F}$ over $X$ is an
\define{episheaf} if for every pair of $\Sstrat$-basic opens
$V \subseteq U$,
the map $\sheaf{F}(V \subseteq U) : \sheaf{F}(U) \to \sheaf{F}(V)$
is surjective.
\end{defn}
\begin{prop}
\label{prop:epi_one}
Consider a sheaf map $\ubar \alpha : \sheaf{E} \to \sheaf{F}$ in $\Sheaf(X, \Sstrat)$.
If $\sheaf{E}$ is an episheaf, then the image of $\ubar \alpha$ is an $\Sstrat$-constructible episheaf.
\end{prop}
\begin{proof}
For any pair of $\Sstrat$-basic opens $V \subseteq U$,
consider the following commutative diagram:
\begin{equation*}
\xymatrix{
\sheaf{E}(U) \ar[d]_{\ubar \alpha(U)} \ar@{->>}[rr]^{\sheaf{E}(V \subseteq U)} && \sheaf{E}(V) \ar[d]^{\ubar \alpha(V)} \\
\sheaf{F}(U) \ar[rr]^{\sheaf{F}(V \subseteq U)} && \sheaf{F}(V).
}
\end{equation*}
The restriction of $\sheaf{F}(V \subseteq U)$ to the image of $\ubar \alpha(U)$
is a surjection onto the image of $\ubar \alpha (V)$.
\end{proof}
Let $\sheaf{F}$ be an $\Sstrat$-constructible sheaf over $X$.
A \emph{sub-episheaf} of $\sheaf{F}$ is an inclusion $\sheaf{E} \mono \sheaf{F}$
of an $\Sstrat$-constructible episheaf $\sheaf{E}$.
The zero sheaf $\sheaf{0} \mono \sheaf{F}$ is the smallest sub-episheaf of $\sheaf{F}$.
For any two sub-episheaves
$\sheaf{E}_1, \sheaf{E}_2 \mono \sheaf{F}$, their internal sum
$\sheaf{E}_1 \uplus \sheaf{E}_2$ is also a sub-episheaf.
Let $\Pcat$ be the poset of sub-episheaves of $\sheaf{F}$ ordered by inclusion.
For any chain
\begin{equation*}
\xymatrix{
\sheaf{E}_1 \ar@{^{(}->}[r] \ar@{^{(}->}[rd] & \sheaf{E}_2 \ar@{^{(}->}[r] \ar@{^{(}->}[d] &
\sheaf{E}_3 \ar@{^{(}->}[r] \ar@{^{(}->}[ld] & \cdots \\
& \sheaf{F} &&
}
\end{equation*}
in $\Pcat$, the sub-episheaf $\biguplus \sheaf{E}_i$ contains them all.
By Zorn's Lemma, $\Pcat$ has a maximal element and therefore
$\sheaf{F}$ has a maximal sub-episheaf.
Consider a sheaf map $\ubar{\alpha} : \sheaf{F} \to \sheaf{G}$ in~$\Sheaf(X, \Sstrat)$.
Suppose $\sheaf{D} \mono \sheaf{F}$ and $\sheaf{E} \mono \sheaf{G}$
are maximal sub-episheaves.
By Proposition \ref{prop:epi_one},
the image of the composition
\begin{equation*}
\xymatrix{
\sheaf{D} \ar@{^{(}->}[r] & \sheaf{F} \ar[r]^{\ubar{\alpha}} & \sheaf{G}
}
\end{equation*}
is a sub-episheaf of $\sheaf{G}$.
By maximality of $\sheaf{E}$, this image is contained in $\sheaf{E}$, thus inducing
a map $\sheaf{D} \to \sheaf{E}$ that makes the following diagram commute:
\begin{equation*}
\xymatrix{
\sheaf{D} \ar@{^{(}->}[d] \ar@{-->}[rr] && \sheaf{E} \ar@{^{(}->}[d] \\
\sheaf{F} \ar[rr]^{\ubar{\alpha}} && \sheaf{G}.
}
\end{equation*}
Thus the assignment sending each $\Sstrat$-constructible sheaf
to its maximal sub-episheaf is functorial.
\begin{defn}
The \define{epification} of $\Sstrat$-constructible sheaves over $X$ is the functor
$$\Epi : \Sheaf(X, \Sstrat) \to \Sheaf( X, \Sstrat)$$
that sends each sheaf to its maximal sub-episheaf.
Let $\ubar{\eta} : \Epi \Rightarrow \id_{\Sheaf(X, \Sstrat)}$ be the inclusion
natural transformation.
\end{defn}
\section{Cosheaves}
\label{sec:cosheaves}
Cosheaves are ``dual'' to their better known cousins, sheaves. In this section, whose parallel structure to the last one reflects that ``duality'', we develop the theory of constructible cosheaves. We introduce the notions of a monocosheaf and monofication.
\begin{defn}
A \define{cosheaf} (of abelian groups) \emph{under} $X$ is a covariant functor
$$\cosheaf{F} : \Open(X) \to \Ab$$
satisfying the following property.
For each open set $U \subseteq X$ and for each open cover $\cov{U}$ of $U$,
the universal map
$\colim\; \cosheaf{F} |_{\cov{U}} \to \cosheaf{F}(U)$
is an isomorphism.
A \define{cosheaf map} is a natural transformation of functors $\lbar{\alpha} : \cosheaf{F} \to \cosheaf{G}$.
\end{defn}
\begin{defn}
Let $(X, \Sstrat)$ be a stratified space.
A cosheaf $\cosheaf{F}$ under $X$ is \define{$\Sstrat$-constructible}
if for every pair of open sets $V \subseteq U$ in $\Basic(X, \Sstrat)$ associated
to a common stratum, the map
$$\cosheaf{F}(V \subseteq U) : \cosheaf{F}(V) \to \cosheaf{F}(U)$$
is an isomorphism.
A cosheaf $\cosheaf{F}$ under $X$ is \define{constructible} if it is
$\Sstrat$-constructible for some stratified space $(X, \Sstrat)$.
Let $\Cosheaf(X, \Sstrat)$ be the category of $\Sstrat$-constructible
cosheaves under $X$ and cosheaf maps.
Let $\Cosheaf(X)$ be the category of constructible cosheaves under $X$ and cosheaf maps.
\end{defn}
When defining an $\Sstrat$-constructible cosheaf under $X$, it is enough to specify
a covariant functor on a small subposet of $\Open(X)$.
Let $\Acat \subseteq \Basic(X, \Sstrat)$ be any subposet that is a basis for the
topology on $X$.
For example, if $(X, \Sstrat, \Jcontrol)$ is a Thom-Mather space, then we may let
$\Acat$ be $\Basic(X, \Sstrat, \Jcontrol)$.
Let $\Ffunc : \Acat \to \Ab$ be a covariant functor
such that for each pair $V \subseteq U$ associated to a common stratum, the map
$\Ffunc(V \subseteq U)$ is an isomorphism.
Then $\Ffunc$ uniquely generates (up to an isomorphism) an $\Sstrat$-constructible cosheaf
$\cosheaf{F}$ as follows.
For an arbitrary open set $U \subseteq X$, let $\Acat(U) \subseteq \Acat$
be the subposet consisting of elements contained in $U$.
Let $\cosheaf{F}(U) := \colim \Ffunc |_{\Acat(U)}$.
For an arbitrary pair of open sets $V \subseteq U \subseteq X$,
let $\cosheaf{F}(V \subseteq U)$ be the
universal morphism between the two colimits.
We let the reader check that $\cosheaf{F}$ is indeed an $\Sstrat$-constructible cosheaf.
If $(X, \Kstrat)$ is a triangulation and $\cosheaf{F}$ a
$\Kstrat$-constructible cosheaf, then $\cosheaf{F}$ is uniquely determined (up to an isomorphism)
by its value on the subposet of open stars
$$\big \{ \st \; \sigma \; \big | \; \sigma \in \Kstrat \big \} \subseteq \Open(X).$$
\begin{ex}
\label{ex:relative_cosheaf}
Let $f : Y \to X$ be an $(\Sstrat, \Jcontrol)$-constructible map.
Define $\cosheaf{F}^\ast$ as the $\Sstrat$-constructible cosheaf generated by assigning
to each $(\Sstrat, \Jcontrol)$-basic open $U \subseteq X$ the
singular relative cohomology group
$$\cosheaf{F}^\ast ( U ) := \Hgroup^\ast \big( Y, Y - f^{-1} (U) ; \Zspace \big).$$
For two $(\Sstrat, \Jcontrol)$-basic opens $V \subseteq U$ associated to a common stratum,
$$\cosheaf{F}^\ast(V \subseteq U) : \cosheaf{F}^\ast (V) \to \cosheaf{F}^\ast(U)$$
is, by definition of an $(\Sstrat, \Jcontrol)$-constructible map, an isomorphism.
Thus $\cosheaf{F}^\ast$ is an $\Sstrat$-constructible cosheaf.
\end{ex}
\begin{ex}
\label{ex:ordinary_cosheaf}
Let $f : Y \to X$ be an $(\Sstrat, \Jcontrol)$-constructible map.
Define $\cosheaf{F}_\ast$ as the $(\Sstrat, \Jcontrol)$-constructible cosheaf generated by assigning
to each $(\Sstrat, \Jcontrol)$-basic open $U \subseteq X$ the
singular homology group
$$\cosheaf{F}_\ast ( U ) := \Hgroup_\ast \big( f^{-1}(U) ; \Zspace \big).$$
For two $(\Sstrat, \Jcontrol)$-basic opens $V \subseteq U$ associated to a common stratum,
the map
$$\cosheaf{F}_\ast(V \subseteq U) : \cosheaf{F}_\ast (V ) \to \cosheaf{F}_\ast( U )$$
is, by definition of an $(\Sstrat, \Jcontrol)$-constructible map,
an isomorphism.
Thus $\cosheaf{F}_\ast$ is an $\Sstrat$-constructible cosheaf.
\end{ex}
\begin{ex}
\label{ex:orientation_cosheaf}
Let $(M, \Sstrat)$ be a stratified space where $M$ is an $m$-manifold
without boundary and $\Sstrat$ consists of a single stratum namely $M$.
Note that every open $m$-ball of $M$ is an $\Sstrat$-basic open.
The \emph{local orientation cosheaf} under $M$ is the $\Sstrat$-constructible
cosheaf $\cosheaf{O}$
that assigns to each open $m$-ball $U \subseteq M$
the top dimensional singular relative cohomology group
$$\cosheaf{O}( U ) := \Hfunc^m \big(M, M - U ; \Zspace \big) \cong \Zspace.$$
For two $m$-balls $V \subseteq U$,
the map
$$\cosheaf{O}(V \subseteq U) : \cosheaf{O} (V) \to \cosheaf{O} (U)$$
is an isomorphism.
Thus $\cosheaf{O}$ is an $\Sstrat$-constructible cosheaf.
Moreover, $\cosheaf{O}$ is a local system.
The manifold $M$ is \emph{orientable} if $\cosheaf{O}(M) \cong \Zspace$.
If $M$ is orientable, then an \emph{orientation} of $M$ is the choice of a
generator of $\cosheaf{O}(M)$.
\end{ex}
\begin{defn}
An $\Sstrat$-constructible cosheaf $\cosheaf{M}$ under $X$ is a \define{monocosheaf} if for every pair of
$\Sstrat$-basic opens $V \subseteq U$,
the map $\cosheaf{M}(V \subseteq U): \cosheaf{M}(V) \to \cosheaf{M}(U)$
is injective.
\end{defn}
\begin{prop}
\label{prop:mono_one}
Consider a cosheaf map $\lbar \alpha : \cosheaf{F} \to \cosheaf{M}$ in $\Cosheaf(X, \Sstrat)$.
If $\cosheaf{M}$ is a monocosheaf, then the image of $\lbar \alpha$ is an $\Sstrat$-constructible
monocosheaf.
\end{prop}
\begin{proof}
For any pair of $\Sstrat$-basic opens $V \subseteq U$,
consider the following commutative diagram:
\begin{equation*}
\xymatrix{
\cosheaf{F}(U) \ar[d]_{\lbar \alpha (U)}
&& \cosheaf{F}(V) \ar[d]^{\lbar \alpha(V)} \ar[ll]_{\cosheaf{F}(V \subseteq U)} \\
\cosheaf{M}(U) && \cosheaf{M}(V) \ar@{^{(}->}[ll]_{\cosheaf{M}(V \subseteq U)}
}
\end{equation*}
The restriction of $\cosheaf{M}(V \subseteq U)$ to the image of $\lbar \alpha(V)$
is an injection into the image of $\lbar \alpha(U)$.
\end{proof}
Let $\cosheaf{F}$ be an $\Sstrat$-constructible cosheaf under $X$.
A \emph{quotient-monocosheaf} of $\cosheaf{F}$ is a
surjection $\cosheaf{F} \epi \cosheaf{M}$ to an $\Sstrat$-constructible monocosheaf $\cosheaf{M}$.
The zero cosheaf $\cosheaf{F} \to \cosheaf{0}$ is the largest quotient-monocosheaf of $\cosheaf{F}$
because its kernel is all of $\cosheaf{F}$.
For any two quotient-monocosheaves $\cosheaf{F} \epi \cosheaf{M}_1$ and
$\cosheaf{F} \epi \cosheaf{M}_2$, let
$\cosheaf{K}_1, \cosheaf{K}_2 \subseteq \cosheaf{F}$ be their kernels.
Then
$\cosheaf{F} \epi \sfrac{\cosheaf{F}}{\cosheaf{K}_1 \cap \cosheaf{K}_2}$
is a quotient-monocosheaf of $\cosheaf{F}$.
Let $P$ be the poset of kernels of quotient-monocosheaves of $\cosheaf{F}$
ordered by containment.
For any chain of quotient-monocosheaves
\begin{equation*}
\xymatrix{
&& \cosheaf{F} \ar@{->>}[ld] \ar@{->>}[d] \ar@{->>}[rd] && \\
\cdots \ar@{->>}[r] & \cosheaf{M}_3 \ar@{->>}[r] & \cosheaf{M}_2 \ar@{->>}[r] & \cosheaf{M}_1,
}
\end{equation*}
the corresponding chain of kernels in $P$ has, by taking the intersection, a lower bound in~$P$.
By Zorn's Lemma, $P$ has a minimal element and therefore $\cosheaf{F}$ has a minimal quotient-monocosheaf.
Consider a cosheaf map $\lbar{\alpha} : \cosheaf{F} \to \cosheaf{G}$ in $\Cosheaf(X, \Sstrat)$ and suppose
$\cosheaf{F} \epi \cosheaf{M}$ and $\cosheaf{G} \epi \cosheaf{N}$
are minimal quotient-monocosheaves.
By Proposition \ref{prop:mono_one},
the image of the composition
\begin{equation*}
\xymatrix{
\cosheaf{F} \ar[r]^{\lbar \alpha} & \cosheaf{G} \ar@{->>}[r] & \cosheaf{N}
}
\end{equation*}
is a quotient-monocosheaf of $\cosheaf{F}$.
By minimality of $\cosheaf{M}$, the kernel of $\cosheaf{F} \epi \cosheaf{M}$
is contained in the kernel of the above composition
inducing a map
$\cosheaf{M} \to \cosheaf{N}$ that makes the following diagram commute:
\begin{equation*}
\xymatrix{
\cosheaf{F} \ar@{->>}[d] \ar[rr]^{\lbar{\alpha}} && \cosheaf{G} \ar@{->>}[d] \\
\cosheaf{M} \ar@{-->}[rr] && \cosheaf{N}.
}
\end{equation*}
Thus the assignment sending each $\Sstrat$-constructible cosheaf
to its minimal quotient-monocosheaf is functorial.
\begin{defn}
The \define{monofication} of $\Sstrat$-constructible cosheaves under $X$ is the functor
$$\Mono : \Cosheaf(X, \Sstrat) \to \Cosheaf(X, \Sstrat)$$
that sends each cosheaf to its minimal quotient-monocosheaf.
Let $\lbar{\eta} : \id_{\Cosheaf(X, \Sstrat)} \Rightarrow \Mono$ be the quotient
natural transformation.
\end{defn}
\section{Bisheaves}
\label{sec:bisheaves}
We now have both a sheaf theoretic and a cosheaf theoretic approach to studying the fibers
of a constructible map. As mentioned in \S\ref{sec:introduction}, neither of these alone is enough to produce the stability results we want.
We now combine the two approaches with the ideas of a bisheaf and an isobisheaf.
\begin{defn}
A \define{bisheaf} \emph{around} $X$ is a triple
$\bisheaf{F} := \big( \sheaf{F}, \cosheaf{F}, \Ffunc \big)$
where $\sheaf{F}$ is a sheaf over $X$, $\cosheaf{F}$ is a cosheaf under $X$,
and $\Ffunc := \big \{ \Ffunc(U): \sheaf{F}(U) \to \cosheaf{F}(U) \big \}$ is a set of maps satisfying the following property.
For each pair of open sets $V \subseteq U \subseteq X$, the following diagram commutes:
\begin{equation*}
\xymatrix{
\sheaf{F}(U) \ar[d]_{\Ffunc(U)} \ar[rr]^{\sheaf{F}(V \subseteq U )} && \sheaf{F}(V) \ar[d]^{\Ffunc(V)} \\
\cosheaf{F}(U) && \cosheaf{F}(V) \ar[ll]^{\cosheaf{F}(V \subseteq U)}.
}
\end{equation*}
A \define{bisheaf map} $\ulbar{\alpha} : \bisheaf{F} \to \bisheaf{G}$ is a pair of maps
$\big( \ubar{\alpha}, \lbar{\alpha} \big)$ where
$\ubar{\alpha}: \sheaf{F} \to \sheaf{G}$ is a sheaf map and $\lbar{\alpha} : \cosheaf{G} \to \cosheaf{F}$
is a cosheaf map satisfying the following property.
For every open set $U \subseteq X$,
the following diagram commutes:
\begin{equation*}
\xymatrix{
\sheaf{F}(U) \ar[d]_{\Ffunc(U)} \ar[rr]^{\ubar{\alpha}(U)}&& \sheaf{G}(U) \ar[d]^{\Gfunc(U)} \\
\cosheaf{F}(U) && \cosheaf{G}(U) \ar[ll]^{\lbar{\alpha}(U)}.
}
\end{equation*}
\end{defn}
\begin{defn}
A bisheaf $\bisheaf{F} = \big( \sheaf{F}, \cosheaf{F}, \Ffunc \big)$ around $X$
is \define{$\Sstrat$-constructible}
if both $\sheaf{F}$ and $\cosheaf{F}$ are $\Sstrat$-constructible.
A bisheaf is \define{constructible} if it is $\Sstrat$-constructible for some stratification
$(X, \Sstrat)$.
Let $\Bisheaf(X, \Sstrat)$ be the category of $\Sstrat$-constructible bisheaves
around $X$ and bisheaf maps.
Let $\Bisheaf(X)$ be the category of constructible bisheaves around
$X$ and bisheaf maps.
\end{defn}
\begin{ex}
Let $f : Y \to M$ be a $(\Sstrat, \Jcontrol)$-constructible map to an oriented $m$-manifold $M$.
Recall the relative homology sheaf $\sheaf{F}_{\ast+m}$ of $f$ and
the ordinary homology cosheaf $\cosheaf{F}_\ast$ of $f$; see
Examples \ref{ex:relative_sheaf} and \ref{ex:ordinary_cosheaf} respectively.
Then there is a constructible bisheaf
$$\bisheaf{F}_\ast := \Big( \sheaf{F}_{\ast + m} , \cosheaf{F}_\ast, \big \{ \Ffunc_\ast(U) \big \} \Big)$$
around $M$ where $\Ffunc_\ast(U)$ is a cap product constructed as follows.
Recall the local orientation cosheaf $\cosheaf{O}$ of $M$; see Example \ref{ex:orientation_cosheaf}.
Fix an orientation $o \in \cosheaf{O}(M)$.
Let $U \subseteq M$ be an $(\Sstrat, \Jcontrol)$-basic open and suppose $U$ is associated to a
stratum $S \in \Sstrat$.
Choose an $(\Sstrat, \Jcontrol)$-basic open $U' \subsetneq U$ that is also associated to $S$.
Then the inclusion
$$\big( f^{-1}(U), f^{-1}(U) - f^{-1}(U') \big) \mono
\big( Y, Y - f^{-1}(U') \big)$$
induces, by excision, an isomorphism on their relative singular (co)homology groups.
The inclusion
$$\big( Y, Y - f^{-1}(U) \big) \mono \big( Y, Y - f^{-1}(U') \big)$$
induces, by definition of a constructible map, an isomorphism on their singular relative (co)homology groups.
Thus the singular cap product
\begin{equation*}
\xymatrix{
\Hfunc_{d+m}\big( f^{-1}(U), f^{-1}(U) - f^{-1}(U') \big) \otimes
\Hfunc^m \big( f^{-1}(U), f^{-1}(U) - f^{-1}(U') \big) \ar[r]^-{\frown}
& \Hfunc_d \big( f^{-1}(U) \big)
}
\end{equation*}
gives rise to a map
\begin{equation*}
\xymatrix{
\sheaf{F}_{d+m}\big( U \big) \otimes
\cosheaf{F}^m \big( U \big) \ar[r]^-{\frown}
& \cosheaf{F}_d \big( U \big)
}
\end{equation*}
where $\cosheaf{F}^m$ is the cosheaf of relative cohomology groups; see Example \ref{ex:relative_cosheaf}.
For any pair of $(\Sstrat, \Jcontrol)$-basic opens $V \subseteq U$,
we have the following diagram where the vertical
maps are induced by inclusion:
\begin{equation*}
\xymatrix{
\sheaf{F}_{d + m}(U) \ar@<-5ex>[d]_{i} \otimes \cosheaf{F}^{m}(U)
\ar[r]^<<<<<{\frown} & \cosheaf{F}_{d}(U) \\
\sheaf{F}_{d + m}(V) \otimes \cosheaf{F}^{m}(V) \ar@<-5ex>[u]^{j}
\ar[r]^<<<<<{\frown} & \cosheaf{F}_{d}(V) \ar[u]^{k} .
}
\end{equation*}
For any $\mu \in \sheaf{F}_{d + m}(U)$ and $c \in \cosheaf{F}^{m}(V)$, the cap product
satisfies
\begin{equation}
\label{eq:naturality}
k \big( i(\mu) \frown c \big) = \mu \frown j(c).
\end{equation}
Let $o_U := \cosheaf{O}^{-1}(U \subseteq M)(o)$ and
$o_V := \cosheaf{O}^{-1}(V \subseteq M)(o)$.
The map $f$ induces pull-backs
\begin{equation*}
\xymatrix{
f^m_U : \cosheaf{O} \big ( U \big) \to \cosheaf{F}^m \big( U \big) &&
f^m_V : \cosheaf{O} \big ( V \big) \to \cosheaf{F}^m \big( V \big).
}
\end{equation*}
By Equation \ref{eq:naturality}, the following diagram commutes:
\begin{equation}
\label{eq:cap_square}
\xymatrix{
\sheaf{F}_{d + m}(U) \ar[d]_{\frown f^m_U(o_U)} \ar[rr]^i
&& \sheaf{F}_{d + m}(V) \ar[d]^{\frown f^m_V(o_V)} \\
\cosheaf{F}_d(U) && \cosheaf{F}_d(V). \ar[ll]_k
}
\end{equation}
Thus we have a constructible bisheaf $\bisheaf{F}_{\ast}$ for $f$ where
$\Ffunc_\ast(U) := \frown f^m_U(o_U)$ for each $(\Sstrat, \Jcontrol)$-basic open $U \subseteq M$.
\end{ex}
\begin{defn}
An $\Sstrat$-constructible bisheaf $\bisheaf{I} = \big( \sheaf{I}, \cosheaf{I}, \Ifunc \big)$
around $X$ is an \define{isobisheaf} if $\sheaf{I}$
is an episheaf and $\cosheaf{I}$ is a monocosheaf.
\end{defn}
A \emph{local system} over $X$ is a locally constant sheaf
over $X$ and a \emph{colocal system} under $X$ is a locally constant
cosheaf under $X$.
Inverting the arrows of a local system results in a colocal system.
Inverting the arrows of a colocal system results in a local system.
Thus the category of local systems over $X$ is equivalent to the category
of colocal systems under $X$.
We henceforth blur the distinction and refer to both as local systems.
\begin{prop}
\label{prop:image_local_system}
Let $\bisheaf{I} = (\sheaf{I}, \cosheaf{I}, \Ifunc)$ be an $\Sstrat$-constructible isobisheaf around $X$.
Then the image $\image \Ifunc$ of all the maps $\big \{ \Ifunc(U) \big\}$
is a local system over $X$ and the coimage $\coimage \Ifunc$ of all the maps $\big \{ \Ifunc(U) \big\}$
is also a local system over $X$.
Furthermore, $\image \Ifunc$ is isomorphic to $\coimage \Ifunc$.
\end{prop}
\begin{proof}
For a pair of $\Sstrat$-basic opens $V \subseteq U$, consider the following
commutative diagram:
\begin{equation*}
\xymatrix{
\sheaf{I}(U) \ar[d]_{\Ifunc(U)} \ar@{->>}[rr]^{\sheaf{I}(V \subseteq U)} && \sheaf{I}(V) \ar[d]^{\Ifunc(V)} \\
\cosheaf{I}(U) && \cosheaf{I}(V) \ar@{^{(}->}[ll]_{\cosheaf{I}(V \subseteq U)}.
}
\end{equation*}
The map $\cosheaf{I}(V \subseteq U)$
takes the image of $\Ifunc(V)$ isomorphically to the image of
$\Ifunc(U)$ making $\image \Ifunc$ a local system.
The map $\sheaf{I}(V \subseteq U)$ takes
the kernel of $\Ifunc(U)$ isomorphically to the kernel
of $\Ifunc(V)$ making $\coimage \Ifunc$ a local system.
We have $\image \Ifunc(U) \cong \coimage \Ifunc(U)$ for each open set $U \subseteq X$.
Thus $\image \Ifunc$ and $\coimage \Ifunc$ are isomorphic.
\end{proof}
Let $\bisheaf{F}$ be an $\Sstrat$-constructible bisheaf around $X$.
Epification of $\sheaf{F}$ and monofication of $\cosheaf{F}$ result in an isobisheaf
$\Iso\big( \bisheaf{F} \big) := \Big( \Epi\big( \sheaf{F} \big),
\Mono \big( \cosheaf{F} \big), \Iso (\Ffunc) := \lbar \eta \big( \cosheaf{F} \big) \circ
\Ffunc \circ \ubar \eta \big( \sheaf{F} \big) \Big);$
see Diagram \ref{dgm:isobisheaf_map}.
Consider a bisheaf map $\ulbar{\alpha} : \bisheaf{F} \to \bisheaf{G}$ in $\Bisheaf(X, \Sstrat)$.
The universal property of episheaves and monocosheaves induces a map of isobisheaves:
\begin{equation}
\label{dgm:isobisheaf_map}
\begin{gathered}
\xymatrix{
\Epi \big( \sheaf{F} \big) \ar@{-->}[rr]^{\Epi ( \ubar{\alpha} )} \ar@{^{(}->}[d]_{\ubar{\eta}(\sheaf{F})}
&& \Epi \big( \sheaf{G} \big) \ar@{^{(}->}[d]^{\ubar{\eta}(\sheaf{G})} \\
\sheaf{F} \ar[rr]^{\ubar{\alpha}} \ar[d]_{\Ffunc}
&& \sheaf{G} \ar[d]^{\Gfunc} \\
\cosheaf{F} \ar@{->>}[d]_{\lbar{\eta}(\cosheaf{F})} && \cosheaf{G} \ar[ll]_{\lbar{\alpha}}
\ar@{->>}[d]^{\lbar{\eta}(\cosheaf{G})} \\
\Mono \big( \cosheaf{F} \big) && \Mono \big( \cosheaf{G} \big)
\ar@{-->}[ll]_{\Mono ( \lbar{\alpha} )}.
}
\end{gathered}
\end{equation}
Thus the assignment sending each bisheaf to its isobisheaf is functorial.
\begin{defn}
The \define{isofication} of $\Sstrat$-constructible bisheaves around $X$ is the functor
$$\Iso : \Bisheaf(X, \Sstrat) \to \Bisheaf(X, \Sstrat)$$
that sends each bisheaf $\bisheaf{F}$ to its isobisheaf $\Iso \big( \bisheaf{F}\big)$.
Let $\ulbar{\eta} = \big( \ubar{\eta}, \lbar{\eta} \big): \id_{\Bisheaf(X, \Sstrat)} \Rightarrow \Iso$ be the
natural transformation induced by $\ubar{\eta}$ and $\lbar{\eta}$.
\end{defn}
\section{\'Etale Opens}
The idea of an \'etale open was introduced by Grothendieck in algebraic geometry 60 years ago as a
natural generalization of an open set.
For us, it is important to have persistent local systems $\mathcal L(U)$ (see \S\ref{sec:introduction})
not only for open sets $U$, but for \'etale opens as well.
While it is true that the image
$\image U$ of an \'etale open of $M$ is an open subset of $M$, it is not true that $\mathcal L(\image U)$ contains
all the information of $\mathcal{L}(U)$.
In fact $\mathcal L(\image U)$ can vanish while $\mathcal{L}(U)$ is still large.
If $M = \Rspace$, then every \'etale open is an open set.
This is another way in which the $1$-dimensional case is much simpler.
In this section we develop the notion of an \'etale open of a manifold $M$ without boundary.
In the last section, we saw that every constructible bisheaf around $M$ has associated to it
a local system over $M$.
Now, we pull back the bisheaf along any \'etale open $a : A \to M$ and then use the same procedure to
compute its persistent local system over $A$.
This gives us a collection of local systems, one for every \'etale open of $M$, which constitutes finer information about the bisheaf.
\begin{defn}
An \define{\'etale open} of $M$ is a continuous map
$a : A \to M$ from a Hausdorff space $A$ to our manifold $M$
that is a local homeomorphism: every point of $A$ has an open neighborhood mapped homeomorphically by $a$ onto an open subset of $M$.
An \define{\'etale map} $\mu: a \to b$ is a continuous map
$\mu : A \to B$ such that
the following diagram commutes:
\begin{equation*}
\xymatrix{
A \ar[dr]_-{a} \ar[rr]^{\mu} && B \ar[ld]^-{b} \\
& M. &
}
\end{equation*}
Let $\Etal(M)$ be the category of \'etale opens of $M$.
The initial object of $\Etal(M)$ is the empty \'etale open $\emptyset : \emptyset \to M$
and the terminal object is the identity \'etale open $\id_M : M \to M$.
Note that every open set of $M$ is an \'etale open.
\end{defn}
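For instance, every covering map onto $M$ is an \'etale open. For $M = S^1$, the map $a : \Rspace \to S^1$ given by $a(t) := (\cos 2 \pi t, \sin 2 \pi t)$ is an \'etale open that is not the inclusion of an open set.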
Let $(M, \Sstrat)$ be a stratified space and $a : A \to M$
an \'etale open.
Then $a$ pulls back $\Sstrat$ to a stratification
$a^\star \Sstrat$ of $A$.
\begin{defn}
Let $(M, \Kstrat)$ be a triangulation.
By definition of a triangulation, there is a simplicial pair $(K, K_0)$
and a homeomorphism $\phi : | K - K_0 | \to M$ such that
each stratum of $\Kstrat$ is the image of a simplex in $K - K_0$.
An \'etale open $a: A \to M$ is \define{$\Kstrat$-constructible} if
there is a simplicial pair $( L, L_0)$, a homeomorphism
$\chi : | L - L_0 | \to A$, and a simplicial map $\psi : L \to K$
that satisfies the following conditions:
\begin{itemize}
\item The following diagram commutes:
\begin{equation*}
\xymatrix{
\big | L - L_0 \big | \ar[rr]^-{\chi} \ar[d]^{ | \psi |} && A \ar[d]^a \\
\big | K - K_0 \big| \ar[rr]^{\phi } & & M.
}
\end{equation*}
\item Every simplex in $L_0$ is the face of a simplex in $L - L_0$.
\item Each $(m-1)$-simplex in $L_0$ is the face of a
single $m$-simplex in $L - L_0$.
\end{itemize}
Let $\Etal(M, \Kstrat)$ be the category of $\Kstrat$-constructible \'etale opens.
\end{defn}
\begin{prop}
\label{prop:con-etale}
Let $(M, \Kstrat)$ be a triangulation.
Then for any \'etale open $a : A \to M$, there is an \'etale map $\mu : a \to b$ to a
$\Kstrat$-constructible \'etale open $b$ satisfying the following universal property.
For any \'etale map $\nu : a \to c$ to a $\Kstrat$-constructible
\'etale open $c$, there is a unique \'etale map $\eta : b \to c$ that makes the following
diagram commute:
\begin{equation*}
\xymatrix{
a \ar[rd]_{\nu} \ar[r]^{\mu} & b \ar@{-->}[d]^{\eta} \\
& c.
}
\end{equation*}
\end{prop}
\begin{proof}
By definition of a triangulation, there is a simplicial pair $(K, K_0)$
and a homeomorphism $\phi : | K - K_0 | \to M$ such that
each stratum in $\Kstrat$ is the image of a simplex in $K - K_0$.
Take each stratum $S \in a^\star \Kstrat$ and replace it with a copy of the simplex
$a(S)$ in $\Kstrat$.
Call the resulting poset of simplices $L'$ and $\psi ' : L' \to K$
the simplicial map induced by $a$.
The poset $L'$ may not be a simplicial complex as there may be an $m$-simplex
without all its faces.
Take the closure of $L'$ by completing each $m$-simplex.
Call the resulting simplicial complex $\tilde L$ and $\tilde \psi : \tilde L \to K$
the unique extension of $\psi'$.
Note if two $m$-simplices in $L'$ do not share an $(m-1)$-simplex, then
they do not share an $(m-1)$-simplex in $\tilde L$.
Let $\tilde L_0 := \tilde L - L'$.
The map $| \tilde \psi | : | \tilde L - \tilde L_0 | \to | K - K_0|$ may not be a
$\Kstrat$-constructible \'etale open.
Let us say two simplices $\sigma, \sigma' \in \tilde L$ are related,
$\sigma \sim \sigma'$, if there is a sequence of simplices
$$\sigma= \sigma_0 \leftrightarrow \cdots
\leftrightarrow \sigma_n = \sigma_n' \leftrightarrow \cdots
\leftrightarrow \sigma_0' = \sigma'$$
in $\tilde L$ such that adjacent simplices are related by a face relation
and $\tilde \psi (\sigma_i) = \tilde \psi (\sigma_i')$ for all $i$.
Take the transitive closure of $\sim$.
Let $L := \tilde L /\sim$ and $\psi : L \to K$
the simplicial map $\tilde \psi / \sim$.
Let $B := | L - L_0 |$ and $b : B \to M$
as the underlying map of $\psi$.
The map $\mu$ is the inclusion $A \mono | L ' |$ followed by the quotient map.
Given $\nu$ and a simplex $\sigma \in L - L_0$, $\mu^{-1}(\sigma)$ is non-empty
and must map along $\nu$ to a single stratum in $c^\star \Kstrat$ otherwise $c$ would not be
an \'etale open (i.e.\ there is a sequence of simplices in $c^\star \Kstrat$ of the type above).
Let $\eta(\sigma) := \nu \big( \mu^{-1}(\sigma) \big)$.
\end{proof}
\section{Stacks}
\label{sec:stacks}
We finally get to the central construction of this paper: persistence stacks. Given a constructible bisheaf around a manifold $M$, we now have a local system
for each \'etale open of $M$.
Here we assemble these local systems into a stack. The advantage is that the persistence stack has good functorial properties which are useful, for example, in proving stability.
The whole construction of the persistent local systems can be thought of this way:
$$\genfrac\{\}{0pt}0{\mbox{Maps }}{\mbox{} X\to M }
\longrightarrow
\genfrac\{\}{0pt}0{\mbox{Bisheaves}}{\mbox{around } M }
\longrightarrow
\genfrac\{\}{0pt}0{\mbox{Persistence stacks}}{\mbox{ around } M }\longrightarrow
\genfrac\{\}{0pt}0{\mbox{Local systems for}}{\mbox{each \'etale open of }M}
$$
\vspace{1em}\begin{defn} \label{persistence stack}
An \define{$\Sstrat$-constructible persistence stack} $\Fstack$ \emph{around} $M$
is the assignment to $\Etal(M)$ of the following data, satisfying the following
axiom:
\begin{itemize}
\item To each \'etale open $a: A \to M$,
$\Fstack(a)$ is an $a^\star \Sstrat$-constructible isobisheaf
$\big( \sheaf{F}_a, \cosheaf{F}_a, \Ffunc_a \big)$.
\item To each \'etale map $\mu : a \to b$, $\Fstack(\mu) :
\mu^\star \Fstack(b) \to \Fstack(a)$ is a bisheaf map
\begin{equation}
\label{dgm:stack}
\begin{gathered}
\xymatrix{
\mu^\star \sheaf{F}_b \ar[d]_{\mu^\star \Ffunc_b} \ar@{^{(}->}[rr]^{\uFstack(\mu)}
&& \sheaf{F}_a \ar[d]^{\Ffunc_a} \\
\mu^\star \cosheaf{F}_b && \cosheaf{F}_a \ar@{->>}[ll]^{\lFstack(\mu)}
}
\end{gathered}
\end{equation}
where $\uFstack(\mu)$ is injective and $\lFstack(\mu)$ is surjective.
\item For each pair of \'etale maps $\mu : a \to b$ and $\nu : b \to c$,
$\Fstack(\nu \circ \mu) = \mu^\star \Fstack(\nu) \circ \Fstack(\mu)$.
\end{itemize}
We call the image $\image \Fstack(a) := \image \Ffunc_a$ the \define{persistent local system} of $\Fstack$ at $a$.
Let $\Fstack$ and $\Gstack$ be two constructible persistence stacks around $M$, not necessarily
constructible with respect to the same stratification.
A \define{map of constructible persistence stacks} $\Phi : \Fstack \to \Gstack$
is the following data satisfying the following axiom:
\begin{itemize}
\item To each \'etale open $a$, $\ulbar \Phi(a) : \Fstack(a) \to \Gstack(a)$
is a bisheaf map
\begin{equation}
\label{dgm:stack_map}
\begin{gathered}
\xymatrix{
\sheaf{F}_a \ar[d]_{\Ffunc_a} \ar[rr]^{\ubar \Phi(a)}
&& \sheaf{G}_a \ar[d]^{\Gfunc_a} \\
\cosheaf{F}_a && \cosheaf{G}_a \ar[ll]^{\lbar \Phi(a)}.
}
\end{gathered}
\end{equation}
Note there are no conditions on $\ubar \Phi(a)$ and $\lbar \Phi(a)$ other than
that the diagram commutes.
\item For each \'etale map $\mu : a \to b$, the following diagram commutes:
\begin{equation}
\label{dgm:stack_map_map}
\begin{gathered}
\xymatrix{
\mu^\star \sheaf{F}_b \ar[ddd]_{\mu^\star \Ffunc_b} \ar[rrrr]^{\mu^\star \ubar \Phi(b)}
\ar@{^{(}->}[rd]^{\uFstack(\mu)}
&&&& \mu^\star \sheaf{G}_b \ar[ddd]^{\mu^\star \Gfunc_b}
\ar@{^{(}->}[ld]^{\uGstack(\mu)} \\
& \sheaf{F}_a \ar[d]_{\Ffunc_a} \ar[rr]^{ \ubar \Phi(a)}
&& \sheaf{G}_a \ar[d]^{\Gfunc_a} & \\
& \cosheaf{F}_a \ar@{->>}[ld]^{ \lFstack(\mu)}
&& \cosheaf{G}_a \ar[ll]^{\lbar \Phi(a)}
\ar@{->>}[rd]^{\lGstack(\mu)} & \\
\mu^\star \cosheaf{F}_b &&&& \mu^\star \cosheaf{G}_b \ar[llll]^{\mu^\star \lbar \Phi(b)}.
}
\end{gathered}
\end{equation}
\end{itemize}
Let $\Stack(M)$ be the \define{category of constructible persistence stacks}
over $M$ and stack maps.
\end{defn}
Given a constructible persistence stack $\Fstack$ over $M$, there is a persistent
local system $\image \Fstack(a)$ for each \'etale open $a : A \to M$.
For an \'etale map $\mu : a \to b$, the two local systems $\image \Fstack(a)$
and $\image \Fstack(b)$ are related by Diagram \ref{dgm:stack}.
Let $\cosheaf{I} := \image \big(\Ffunc_a \circ \uFstack(\mu) \big)$ and
$\cosheaf{K} := \cosheaf{I} \cap \ker \lFstack(\mu)$.
Then
\begin{equation*}
\xymatrix{
\image \mu^\star \Fstack(b) & \ar@{->>}[l]_-{/ \cosheaf{K}} \cosheaf{I}
\ar@{^{(}->}[r] & \image \Fstack(a).
}
\end{equation*}
In other words, the data $\image \mu^\star \Fstack(b)$ \emph{persists} in $\image \Fstack(a)$
as a quotient of a sublocal system.
Persistent local systems satisfy Property 1 of Section \ref{sec:background}.
Given a stack map $\Phi : \Fstack \to \Gstack$ and an \'etale open $a$,
the two local systems $\image \Fstack(a)$ and $\image \Gstack(a)$ are related
by Diagram \ref{dgm:stack_map}.
Thus
\begin{equation*}
\xymatrix{
\image \Fstack(a) & \ar@{->>}[l]_-{/ \cosheaf{K}} \cosheaf{I}
\ar@{^{(}->}[r] & \image \Gstack(a),
}
\end{equation*}
where $\cosheaf{K}$ and $\cosheaf{I}$ are defined similarly.
As we will see in Section \ref{sec:stability}, this observation implies that persistent
local systems satisfy Property 2.
\begin{ex}
\label{ex:stack_example}
A constructible bisheaf $\bisheaf{F}$ over $M$ gives rise to a constructible
persistence stack $\Fstack$ as follows.
For each \'etale open $a :A \to M$, let $\Fstack(a) := \Iso \big( a^\star \bisheaf{F} \big)$.
For an \'etale map $\mu : a \to b$, we have the following commutative diagram where the top and bottom horizontal
maps are induced by the universal property of $\Epi$ and $\Mono$ respectively:
\begin{equation}
\label{dgm:quillen}
\begin{gathered}
\xymatrix{
\mu^\star \Epi \big( b^\star \sheaf{F} \big) \ar@{^{(}->}[rr]^{\ubar{\alpha}}
\ar@{^{(}->}[d]_{\mu^\star \ubar \eta ( b^\star \sheaf{F} ) }
&& \Epi \big( a^\star \sheaf{F} \big) \ar@{^{(}->}[d]^{\ubar \eta ( a^\star \Ffunc )} \\
\mu^\star b^\star \sheaf{F} \ar[rr]^{\cong}
\ar[d]_{\mu^\star b^\star \Ffunc }
&& a^\star \sheaf{F} \ar[d]^{a^\star \Ffunc} \\
\mu^\star b^\star \cosheaf{F}
\ar[d]_{\mu^\star \lbar \eta (b^\star \cosheaf{F}) }
&& \ar[ll]^{\cong} a^\star \cosheaf{F} \ar[d]^{\lbar \eta (a^\star \cosheaf{F})} \\
\mu^\star \Mono \big( b^\star \cosheaf{F} \big) && \ar@{->>}[ll]^{\lbar{\alpha}} \Mono
\big( a^\star \cosheaf{F} \big).
}
\end{gathered}
\end{equation}
Let $\Fstack(\mu) := \big( \ubar{\alpha}, \lbar{\alpha} \big)$.
A bisheaf map $\ulbar \alpha : \bisheaf{F} \to \bisheaf{G}$ gives rise
to a map of persistence stacks $\ulbar \Phi : \Fstack \to \Gstack$ as follows.
For each \'etale open $a : A \to M$, $\ulbar \Phi(a)$ is given by
Diagram \ref{dgm:isobisheaf_map}.
\end{ex}
\begin{prop}
\label{prop:constructible_stack}
Let $(M, \Kstrat)$ be a triangulation,
$\bisheaf{F}$ a $\Kstrat$-constructible bisheaf over $M$, and
$\Fstack$ its persistence stack.
For an \'etale open $a : A \to M$, let $\mu : a \to b$
be the universal \'etale map to a $\Kstrat$-constructible \'etale open $b$
in the sense of Proposition \ref{prop:con-etale}.
Then $\Fstack (a) \cong \mu^\star \Fstack (b)$.
\end{prop}
\begin{proof}
The two isobisheaves $\Fstack (a)$ and $\mu^\star \Fstack (b)$
are related by Diagram \ref{dgm:quillen}.
We must show $\ubar{\alpha}$ and $\lbar{\alpha}$ are isomorphisms.
For the construction of $b$ in Proposition \ref{prop:con-etale},
we started by replacing each stratum in $a^\star \Kstrat$ with a
copy of the simplex it maps to under $a$.
Call the resulting underlying space $\tilde A$, $\tilde a : \tilde A \to M$
the extension of $a$, and $\tilde a^\star \Kstrat$ the triangulation of $\tilde A$.
Note that the natural inclusion $A \mono \tilde A$
commutes with $a$ and $\tilde a$.
The \'etale open $b : B \to M$ is a quotient of $\tilde a : \tilde A \to M$
by an equivalence relation $\sim$.
The isobisheaf $\Iso \big( \tilde a^\star \bisheaf{F} \big)$
restricts to $\Iso \big( a^\star \bisheaf{F} \big)$ because
each simplex of $\tilde a^\star \Kstrat$ is contractible.
Let $\tau \in \tilde a^\star \Kstrat$ and suppose there are two
simplices $\tau \lneq \sigma$ and $\tau \lneq \sigma'$ such that
$\tilde a(\sigma) = \tilde a(\sigma')$.
Then $\sigma \sim \sigma'$ meaning both simplices map to the same
simplex in $b^\star \Kstrat$.
The map
$\tilde a^\star \sheaf{F}(\st\; \tau ) \to \tilde a^\star \sheaf{F}(\st\; \sigma )$
is canonically isomorphic to the map
$\tilde a^\star \sheaf{F}(\st\; \tau ) \to \tilde a^\star \sheaf{F}(\st\; \sigma')$
because they are both pull-backs of the map
$\sheaf{F}\big(\tilde a (\st\; \tau) \big) \to \sheaf{F}\big(\tilde a (\st\; \sigma) \big)$.
Thus the identification of $\sigma$ with $\sigma'$ results
in the identification of $\Epi \big( \tilde a^\star \sheaf{F}(\st\; \sigma) \big)$
with $\Epi \big( \tilde a^\star \sheaf{F}(\st\; \sigma') \big)$.
Similarly the map
$\tilde a^\star \cosheaf{F}(\st\; \sigma ) \to \tilde a^\star \cosheaf{F}(\st\; \tau )$
is canonically isomorphic to the map
$\tilde a^\star \cosheaf{F}(\st\; \sigma' ) \to \tilde a^\star \cosheaf{F}(\st\; \tau)$
because they are both pull-backs of the map
$\cosheaf{F}\big(\tilde a (\st\; \tau) \big) \to
\cosheaf{F}\big(\tilde a (\st\; \sigma) \big)$.
Thus the identification of $\sigma$ with $\sigma'$ results
in the identification of $\Mono \big( \tilde a^\star \cosheaf{F}(\st\; \sigma) \big)$
with $\Mono \big( \tilde a^\star \cosheaf{F}(\st\; \sigma') \big)$.
Thus the quotient of $\tilde A$ by~$\sim$ results in an isobisheaf over $B$
that pulls-back along $\mu$ to $\Iso \big( a^\star \bisheaf{F}\big)$.
\end{proof}
\section{Dilation}
\label{sec:dilation}
In this section, we begin the task of proving stability of the persistence stack of a map.
Dilation is a way of coarsening or smoothing the data of a constructible bisheaf.
Let $K$ be a simplicial complex.
The \emph{first subdivision} of $K$ is the simplicial complex
$K^1$
whose (open) simplices are chains
$ [\sigma_{i_0} \lneq \cdots \lneq \sigma_{i_n} ]$
of simplices in $K$.
The face relation
$$[\sigma_{i_0} \lneq \cdots \lneq \sigma_{i_n}] \leq [\sigma_{j_0} \lneq \cdots \lneq \sigma_{j_m}]$$
in $K^1$ is the subchain relation.
Similarly, the \emph{second subdivision} of $K$ is the simplicial complex
$K^2$ whose (open) simplices are chains
$$\Big[ [\sigma_{i_0} \lneq \cdots \lneq \sigma_{i_n}] \lneq \cdots \lneq [\sigma_{j_0} \lneq \cdots
\lneq \sigma_{j_m}] \Big]$$
of simplices in $K^1$.
The face relation in $K^2$ is the subchain relation.
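For example, suppose $K$ is the closed interval with vertices $v_0$, $v_1$ and edge $e$.
Then $K^1$ has the three vertices $[v_0]$, $[v_1]$, $[e]$ and the two edges
$$[v_0 \lneq e] \quad \text{and} \quad [v_1 \lneq e],$$
which is the usual barycentric subdivision of the interval at the midpoint $[e]$.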
\begin{defn}
The \define{dilation} of a simplicial complex $K$ is the simplicial map
$\Sigma : K^2 \to K^1$ defined by sending each vertex
$$\Big[ [\sigma_{i_0} \lneq \cdots \lneq \sigma_{i_n}] \Big ] \in K^2$$
to the vertex $[\sigma_{i_0}] \in K^1$.
Thus each simplex
$$\Big[ [\sigma_{i_0} \lneq \cdots \lneq \sigma_{i_l}] \lneq
\cdots \lneq [\sigma_{j_0} \lneq \cdots \lneq \sigma_{j_m}] \lneq \cdots \lneq
[\sigma_{k_0} \lneq \cdots \lneq \sigma_{k_n}] \Big] \in K^2$$
maps to the simplex
$ [ \sigma_{k_0} \lneq \cdots \lneq \sigma_{j_0} \lneq \cdots \lneq \sigma_{i_0} ] \in K^1.$
Note that for a simplex $\tau \in K$,
$$\Sigma^{-1}\big( [ \tau ] \big) = \cl\; \st\; \big[ [ \tau ] \big] -
\bigcup_{\sigma \lneq \tau} \big \{ \cl \; \st \; \big[ [ \sigma ] \big] \big \}.$$
Here $\cl\; \st\; \big[ [ \tau ] \big]$ means the closure of the open star
of $\big[ [ \tau ] \big]$ in $K^2$.
\end{defn}
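To illustrate, let $K$ be the closed interval with vertices $v_0$, $v_1$ and edge $e$,
and identify $|K|$ with $[0,1]$ so that the five vertices of $K^2$ sit at
$0, \tfrac14, \tfrac12, \tfrac34, 1$.
The dilation sends
$$\big[ [v_0] \big] \mapsto [v_0], \qquad \big[ [v_0 \lneq e] \big] \mapsto [v_0],
\qquad \big[ [e] \big] \mapsto [e],$$
and similarly near $v_1$, so $\Sigma$ collapses each outer quarter edge of $K^2$ to an endpoint.
Here $\cl\; \st\; \big[ [e] \big] = [\tfrac14, \tfrac34]$ while
$\cl\; \st\; \big[ [v_0] \big] = [0, \tfrac14]$ and
$\cl\; \st\; \big[ [v_1] \big] = [\tfrac34, 1]$,
so the displayed formula gives $\Sigma^{-1}\big( [e] \big) = (\tfrac14, \tfrac34)$.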
Let $(X, \Kstrat)$ be a triangulation and $\phi : | K - K_0 | \to X$ the associated
homeomorphism.
We subdivide $(X, \Kstrat)$ by subdividing $(K, K_0)$ and pushing-forward
along $\phi$.
Denote by $(X, \Kstrat^i)$ the $i$-th subdivision of $(X, \Kstrat)$.
The simplicial dilation map $\Sigma : K^2 \to K^1$
gives rise to a $\Kstrat^1$-constructible
dilation map $\Sigma : (X, \Kstrat^2) \to (X, \Kstrat^1)$.
Let $\bisheaf{F}$ be a $\Kstrat^1$-constructible bisheaf over $M$.
The dilation map pulls-back $\bisheaf{F}$ to a $\Kstrat^2$-constructible
bisheaf $\Sigma^\star \bisheaf{F}$ over $M$ as follows.
The sheaf $\sheaf{F}$ pulls-back to a $\Kstrat^2$-constructible
sheaf $\Sigma^\star \sheaf{F}$ generated by
$\Sigma^\star \sheaf{F}(\st\; \tau) := \sheaf{F}\big( \st\; \Sigma(\tau) \big)$
for each $\tau \in \Kstrat^2$.
The cosheaf $\cosheaf{F}$ pulls-back to a $\Kstrat^2$-constructible cosheaf
$\Sigma^\star \cosheaf{F}$ generated by
$\Sigma^\star \cosheaf{F}( \st\; \tau ) := \cosheaf{F} \big( \st \; \Sigma(\tau) \big)$
for each $\tau \in \Kstrat^2$.
Thus $\bisheaf{F}$ pulls-back to a $\Kstrat^2$-constructible bisheaf
$\Sigma^\star \bisheaf{F} := \big( \Sigma^\star \sheaf{F}, \Sigma^\star \cosheaf{F}, \Sigma^\star \Ffunc \big)$
where $\Sigma^\star \Ffunc$ is generated by setting
$\Sigma^\star \Ffunc(\st\; \tau) := \Ffunc \big( \st\; \Sigma(\tau) \big).$
\begin{prop}
\label{prop:dilation_bisheaf_map}
Let $(M, \Kstrat)$ be a triangulation and $\bisheaf{F}$ a $\Kstrat^1$-constructible
bisheaf.
Then there is a canonical bisheaf map $\ulbar \alpha : \Sigma^\star \bisheaf{F} \to \bisheaf{F}$.
\end{prop}
\begin{proof}
For each simplex $\tau \in \Kstrat^2$, we have $\st\; \tau \subseteq \st\; \Sigma(\tau)$,
$\Sigma^\star \sheaf{F} ( \st\; \tau) := \sheaf{F}\big( \st\; \Sigma(\tau) \big)$, and
$\Sigma^\star \cosheaf{F} ( \st\; \tau) := \cosheaf{F}\big( \st\; \Sigma(\tau) \big)$.
Let $\ubar{\alpha}\big( \st\; \tau \big ) := \sheaf{F}\big( \st\; \tau \subseteq \st\; \Sigma(\tau) \big)$
and $\lbar{\alpha}\big( \st\; \tau \big) := \cosheaf{F}\big( \st\; \tau \subseteq \st\; \Sigma(\tau) \big)$.
\end{proof}
\begin{defn}
Let $(M, \Kstrat)$ be a triangulation of a manifold
and $a : A \to M$ a $\Kstrat$-constructible \'etale open.
By definition of a $\Kstrat$-constructible \'etale open, there is a simplicial
pair $(L, L_0)$ and a homeomorphism $\phi : | L - L_0 | \to A$.
Consider the second barycentric subdivision $(L^2, L^2_0)$.
The \define{shrinking} of $a$ is the \'etale open
$\dot a : \dot A \to M$ where
$\dot A := \big | L^2 - \cl\; \st\; L^2_0 \big |$
and $\dot a$ is the restriction of $a$.
Note that $\dot a$ is a $\Kstrat^2$-constructible \'etale open
and that there is a canonical \'etale map $\dot a \to a$.
\end{defn}
\begin{prop}
\label{prop:shrinking}
Let $(M, \Kstrat)$ be a triangulation of a manifold, $\bisheaf{F}$ a $\Kstrat$-constructible bisheaf,
$a : A \to M$ a $\Kstrat$-constructible \'etale open, and $\mu : \dot a \to a$ the
canonical \'etale map from the shrinking of $a$.
Then the two persistent local systems $\image \mu^\star \Iso \big( a^\star \bisheaf{F} \big)$ and $\image \Iso \big( \dot a^\star \Sigma^\star \bisheaf{F} \big)$
are isomorphic as persistent local systems over $\dot A$.
\end{prop}
\begin{proof}
The dilation map $\Sigma : M \to M$ pulls-back to a surjective
$a^\star \Kstrat^1$-constructible map $\Lambda : \dot A \to A$.
The isobisheaf
$$\Epi \big( \dot a^\star \Sigma^\star \sheaf{F} \big) \mono
\dot a^\star \Sigma^\star \sheaf{F} \to \dot a^\star \Sigma^\star \cosheaf{F}
\epi \Mono \big( \dot a^\star \Sigma^\star \cosheaf{F} \big)$$
is the pull-back along $\Lambda$ of the isobisheaf
$$\Epi \big(a^\star \sheaf{F} \big) \mono
a^\star \sheaf{F} \to a^\star \cosheaf{F}
\epi \Mono \big( a^\star \cosheaf{F} \big).$$
For each simplex $\sigma \in \dot a^\star \Kstrat^2$,
$\Lambda(\st\; \sigma) \supseteq \mu(\st \; \sigma)$.
Thus we have the following diagram
\begin{equation*}
\xymatrix{
\Epi \big( \dot a^\star \Sigma^\star \sheaf{F} \big)(\st\; \sigma)
\ar[d] \ar@{^{(}->}[rr] && \Epi \big(a^\star \sheaf{F} \big)\big( \mu(\st \; \sigma) \big) \ar[d] \\
\Mono \big( \dot a^\star \Sigma^\star \cosheaf{F} \big)(\st\; \sigma) &&
\Mono \big( a^\star \cosheaf{F} \big)\big( \mu(\st \; \sigma) \big) \ar@{->>}[ll]
}
\end{equation*}
which induces an isomorphism between the two vertical images.
Therefore $\image \mu^\star \Iso \big( a^\star \bisheaf{F} \big)$ and
$\image \Iso \big( \dot a^\star \Sigma^\star \bisheaf{F} \big)$ are isomorphic.
\end{proof}
\section{Stability}
\label{sec:stability}
Let $M$ be a compact oriented $m$-manifold
and $\mathfrak{W}(X, M)$ the set
of all constructible maps $X \to M$ as in Definition \ref{defn:constructible_map}.
For each open set $U \subseteq X \times M$, let
$$T_U := \big\{ f \in \mathfrak{W}(X, M) \;
\big |\; \mathsf{graph}(f) \subseteq U \big \} .$$
The collection $\big\{ T_U \big \}$ over all open sets $U$ forms the basis for
the \emph{Whitney topology} on~$\mathfrak{W}(X, M)$.
\begin{thm}
\label{thm:main}
Every map $f \in \mathfrak{W}(X, M)$ has an open neighborhood $U \subseteq \mathfrak{W}(X, M)$
such that for every map $g \in U$, their bisheaves $\bisheaf{F}_\ast$ and $\bisheaf{G}_\ast$ are related by canonical bisheaf maps in~$\Bisheaf(M)$:
$$\bisheaf{F}_\ast \leftarrow \Sigma^\star \bisheaf{F}_\ast \to \bisheaf{G}_\ast.$$
\end{thm}
Note that $f$ and $g$ need not be constructible with
respect to the same stratification and therefore
$\bisheaf{F}_\ast$ and $\bisheaf{G}_\ast$
may not be constructible with respect to the same stratification.
Recall $\Sigma : M \to M$ is the dilation map and
$\Bisheaf(M)$ is the category of all constructible
bisheaves over $M$.
\begin{proof}
Suppose $f$ is $(\Sstrat, \Jcontrol)$-constructible making $\bisheaf{F}_\ast$
an $\Sstrat$-constructible bisheaf.
Choose a triangulation $(M, \Kstrat)$ of $(M, \Sstrat, \Jcontrol)$
such that the open star of each simplex in $\Kstrat$
is contained in an $(\Sstrat, \Jcontrol)$-basic open.
This makes $\bisheaf{F}_\ast$ a $\Kstrat^1$-constructible bisheaf
and $\Sigma^\star \bisheaf{F}$ a $\Kstrat^2$-constructible bisheaf.
The bisheaf map $\Sigma^\star \bisheaf{F}_\ast \to \bisheaf{F}_\ast$ follows from
Proposition \ref{prop:dilation_bisheaf_map}.
Every second-countable regular Hausdorff space, in particular the manifold $M$, is metrizable.
Choose a metric on $M$.
For each simplex $\sigma \in \Kstrat$, we have
$\st\; \big[ [ \sigma] \big] \subseteq \st\; \sigma.$
By compactness of $M$, $\Kstrat$ is finite.
Let
$$\rho := \min_{\sigma \in \Kstrat} \Haus \big( \st\; \big[ [\sigma] \big] ,
\st \; \sigma \big)$$
where $\Haus$ is the Hausdorff distance between the two sets.
The set
$$U := \Big \{ f' \in \mathfrak{W}(X, M) \Big |
\sup_{x \in X} \Dist \big( f(x), f'(x) \big) < \rho \Big \}$$
is an open neighborhood of $f$ in $\mathfrak{W}(X, M)$.
Choose a map $g \in U$ and suppose it is $(\Sstrat', \Jcontrol')$-constructible
making $\bisheaf{G}_\ast$ an $\Sstrat'$-constructible bisheaf.
Choose a triangulation $(M, \Lstrat)$ of $(M, \Sstrat', \Jcontrol')$.
For each $\tau \in \Lstrat$, we assume there is a
$$\sigma = \Big[ [\sigma_{i_0} \lneq \cdots \lneq \sigma_{i_l}] \lneq
\cdots \lneq [\sigma_{j_0} \lneq \cdots \lneq \sigma_{j_m}] \lneq \cdots \lneq
[\sigma_{k_0} \lneq \cdots \lneq \sigma_{k_n}] \Big] \in \Kstrat^2$$
such that $\st\; \tau \subseteq \st\; \sigma$.
If this is not the case, subdivide $\Lstrat$ until this is true.
Note that there may be many $\sigma$ satisfying this relation.
In this case, choose the unique top dimensional simplex~$\sigma$.
We have the following inclusions:
$$\st\; \tau \subseteq \st\; \sigma \subseteq
\st\; \big[ [\sigma_{i_0} \lneq \cdots \lneq \sigma_{i_l}] \big] \subseteq
\st\; \big[ [\sigma_{i_0} ] \big]
\subseteq \st\; \sigma_{i_0}.$$
Choose an $(\Sstrat, \Jcontrol)$-basic open $W \subseteq M$ containing
$\st \; \sigma_{i_0}$ such that both open sets are associated to a common
stratum in $\Sstrat$.
Choose an $(\Sstrat', \Jcontrol')$-basic open $V \subseteq M$ contained in
$\st \; \tau$ such that both open sets are associated to a common
stratum in $\Sstrat'$.
The above inclusions imply an inclusion $i : g^{-1}(V) \to f^{-1}(W)$.
Recall $\Sigma \big( \big[ [\sigma_{i_0} ] \big] \big) = [\sigma_{i_0}]$.
Thus we have the following commutative diagram of solid arrows:
\begin{equation*}
\xymatrix{
\Hgroup_{\ast+m} \big( X, X - f^{-1} ( W ) \big)
\ar[r]^-\cong \ar[d]^{i_{\ast+m}} &
\sheaf{F}_{\ast + m} ( \st\; \sigma_{i_0} ) \ar[r]^-{\cong} &
\Sigma^\star \sheaf{F} \big( \st\; \big[ [ \sigma_{i_0} ] \big] \big )
\ar@{-->}[d]^{\ubar{\alpha}( \st\; \tau )} \\
\Hgroup_{\ast+m} \Big( X, X - g^{-1} ( V ) \Big)
\ar[rr]^-\cong \ar[d]^{\frown} &&
\sheaf{G}_{\ast + m} (\st\; \tau ) \ar[d]^{\Gfunc_\ast ( \st\; \tau ) } \\
\Hgroup_{\ast} \Big( g^{-1} ( V ) \Big) \ar[d]^{i_\ast}
&& \ar[ll]^-\cong \cosheaf{G}_{\ast} ( \st\; \tau )
\ar@{-->}[d]^{\lbar{\alpha} ( \st\; \tau ) }\\
\Hgroup_{\ast} \big( f^{-1} ( W ) \big)
&
\ar[l]^-\cong \cosheaf{F}_{\ast} \big( \phi ( \st\; \sigma_{i_0} ) \big) & \ar[l]^-{\cong}
\Sigma^\star \cosheaf{F} \Big( \st\; \big[ [ \sigma_{i_0} ] \big] \Big)
}
\end{equation*}
The bisheaf map $\Sigma^\star \bisheaf{F}_\ast \to \bisheaf{G}_\ast$
is generated by defining, for each $\tau \in \Lstrat$, the unique maps
$\ubar{\alpha} ( \st\; \tau )$ and $\lbar{\alpha}( \st\; \tau )$
that make the above diagram commute.
\end{proof}
\begin{corr}
\label{corr:main}
Every map $f \in \mathfrak{W}(X, M)$ has an open neighborhood $U \subseteq \mathfrak{W}(X, M)$
such that for each map $g \in U$ their persistence stacks
$\Fstack_\ast$ and $\Gstack_\ast$ are related by canonical stack maps
in~$\Stack(M)$:
$$\Fstack_\ast \leftarrow \Sigma^\star \Fstack_\ast \to \Gstack_\ast .$$
\end{corr}
\begin{proof}
A bisheaf map gives rise to a canonical map of persistence stacks as constructed
in Example \ref{ex:stack_example}.
The two stack maps follow from the two bisheaf maps of Theorem \ref{thm:main}.
\end{proof}
\section{Examples}
\label{sec:examples}
We have carefully chosen three examples to illustrate key behaviors of persistent local systems.
\begin{ex}
Let $\Rspace^2$ be the plane parameterized by polar coordinates $(r, \theta)$ and
$\Sstrat$ the stratification of $\Rspace^2$ consisting of the following two strata:
the origin $(0,0)$ is the $0$-stratum and $\Rspace^2 - \{ (0,0) \}$ is the $2$-stratum.
The stratification $\Sstrat$ is a Whitney stratification of the plane thus
admitting control data $(\Rspace^2, \Sstrat, \Jcontrol)$.
Let $\Sspace^1$ be the circle parameterized by $[0, 2\pi]$ where $0 = 2\pi$
and let $X := [0, \infty) \times \Sspace^1 \times \Sspace^1$.
Define the map
$f : X \to \Rspace^2$ as $f(r, \phi, \theta) = (r, \theta)$.
The map $f$ is $(\Sstrat, \Jcontrol)$-constructible.
We now examine the bisheaf $\bisheaf{F}_1$ of $f$ in dimension one.
Let $V \subseteq U \subseteq \Rspace^2$ be two
$(\Sstrat, \Jcontrol)$-basic opens
where $U$ is associated to the $0$-stratum and $V$ to the $2$-stratum.
Then $\bisheaf{F}_1$ is uniquely determined (up to an isomorphism) by the following
commutative diagram:
\begin{equation*}
\xymatrix{
\sheaf{F}_3(U) \cong 0 \ar[rr]^0 \ar[d] &&
\Zspace \cong \sheaf{F}_3(V) \ar[d]^{\id}\\
\cosheaf{F}_1(U) \cong \Zspace \oplus \Zspace &&
\Zspace \cong \cosheaf{F}_1(V). \ar[ll]^-{1 \mapsto (1,0)}
}
\end{equation*}
Now consider the persistence stack $\Fstack_1$ of the bisheaf $\bisheaf{F}_1$.
For any \'etale open $a : A \to \Rspace^2$ that covers
the origin, $\image \Fstack_1(a) = 0$.
For any \'etale open $b : B \to \Rspace^2$ that avoids the origin,
$\image \Fstack_1(b)$ is the constant local system $\Zspace$.
Note that we can make an arbitrarily small perturbation to $f$ so
that the pre-image of the origin is empty.
The cap product picks this up and therefore for any \'etale open $a$ that covers the origin,
$\image \Fstack_1(a) = 0$.
\end{ex}
\begin{ex}
Let $\Rspace^2$ be the plane parameterized by polar coordinates $(r, \theta)$ and
$\Sstrat$ the stratification of $\Rspace^2$ consisting of the following two strata:
the origin $(0,0)$ is the $0$-stratum and $\Rspace^2 - \{ (0,0) \}$ is the $2$-stratum.
The stratification $\Sstrat$ is a Whitney stratification of the plane thus
admitting control data $(\Rspace^2, \Sstrat, \Jcontrol)$.
Let $\Sspace^1$ be the circle parameterized by $[0, 2\pi]$ where $0 = 2\pi$,
let $X := [0, \infty) \times \Sspace^1 \times \Sspace^1$, and
let $X_0 := \{ 0\} \times \Sspace^1 \times \Sspace^1$.
Define the map
$f : X/X_0 \to \Rspace^2$ as $f(r, \phi, \theta) = (r, \theta)$.
The map $f$ is $(\Sstrat, \Jcontrol)$-constructible.
We now examine the bisheaf $\bisheaf{F}_1$ of $f$ in dimension one.
Let $V \subseteq U \subseteq \Rspace^2$ be two
$(\Sstrat, \Jcontrol)$-basic opens
where $U$ is associated to the $0$-stratum and $V$ to the $2$-stratum.
Then $\bisheaf{F}_1$ is uniquely determined (up to an isomorphism) by the following
commutative diagram:
\begin{equation*}
\xymatrix{
\sheaf{F}_3(U) \cong \Zspace \ar[rr]^\id \ar[d] &&
\Zspace \cong \sheaf{F}_3(V) \ar[d]^{\id}\\
\cosheaf{F}_1(U) \cong 0 &&
\Zspace \cong \cosheaf{F}_1(V). \ar[ll]
}
\end{equation*}
Now consider the persistence stack $\Fstack_1$ of the bisheaf $\bisheaf{F}_1$.
For any \'etale open $a : A \to \Rspace^2$ that covers
the origin, $\image \Fstack_1(a) = 0$.
For any \'etale open $b : B \to \Rspace^2$ that avoids the origin,
$\image \Fstack_1(b)$ is the constant local system $\Zspace$.
\end{ex}
\begin{ex}
Let $X_0 := \Sspace^1 \times \Sspace^1$ be the torus
and
$$D := \{ (r,\theta) \subseteq \Rspace^2 \; | \; r \leq 1 \text{ and } 0 \leq \theta < 2\pi \}$$
the closed disk of radius one.
Let $x \in X_0$ be the distinguished point $(0,0)$.
Once again, we are using polar coordinates to label points in the plane.
Let $A$ and $B$ be two copies of $D$.
Glue the boundary of $A$ to $X_0$ along the map
$\phi_A : (1,\theta) \to (\theta, 0)$ and
glue the boundary of $B$ to $X_0$ along the map
$\phi_B : (1, \theta) \to (0,\theta)$.
Call the resulting space
$X := X_0 \cup_{\phi_A} A \cup_{\phi_B} B$.
Let $\Sspace^2 := \Rspace^2 \cup \{ \infty \}$ be the $2$-sphere
with the following stratification.
Let $S_0 \subset \Sspace^2$ be the point $(1,0)$, $S_1 \subset \Sspace^2$ the arc
$\{ (1, \theta) \; | \; 0 < \theta < 2\pi \}$,
$S_2$ the connected component of $\Sspace^2 - S_1$ containing the origin,
and $S_3$ the connected component of $\Sspace^2 - S_1$ containing infinity.
The poset $\Sstrat := \{ S_0, S_1, S_2, S_3 \}$ is a
Whitney stratification of $\Sspace^2$ thus admitting control data
$(\Sspace^2, \Sstrat, \Jcontrol)$.
Finally, define $f : X \to \Sspace^2$ as the $(\Sstrat, \Jcontrol)$-constructible
map that takes $x$ to $S_0$, $A$ to $S_2$, $B$ to $S_3$, and the torus
$X_0$ to $S_1$.
See Figure \ref{fig:example_3}.
\begin{figure}[h]
\centering
\includegraphics{example_3}
\caption{Here we have an illustration of the torus $X_0$ as a square with opposite sides glued.
The boundary of the disk $A$ is glued to the torus along the vertical circle and the boundary of $B$ is glued
to the torus along the horizontal circle as indicated.
The map $f$ restricted to the torus $X_0$ can be seen as the projection to the diagonal where the
distinguished point $x$ maps to the $0$-stratum $S_0$ and the rest to the arc $S_1$.
}
\label{fig:example_3}
\end{figure}
Now consider the bisheaf $\bisheaf{F}_0$ of $f$ in dimension zero.
Let $U_0 \subseteq \Sspace^2$ be an $\Sstrat$-basic open associated to the stratum $S_0$,
$U_1 \subseteq \Sspace^2$ an $\Sstrat$-basic open associated
to the stratum $S_1$, $U_2 \subseteq U_1$ an $\Sstrat$-basic open
associated to the stratum $S_2$, and $U_3 \subseteq U_1$ an $\Sstrat$-basic
open associated to the stratum $S_3$.
Then $\bisheaf{F}_0$ is uniquely determined (up to an isomorphism) by
the following commutative diagram:
\begin{equation*}
\xymatrix{
&& \sheaf{F}_2 (U_0) \cong \Zspace \oplus \Zspace \ar[lld]^{(1,0)} \ar[d]^{\id} \ar[rrd]^{(1,0)} && \\
\sheaf{F}_2(U_2) \cong \Zspace \ar[d]^{\id}
&& \sheaf{F}_2(U_1) \cong \Zspace \oplus \Zspace \ar[d]^{ (1,0) } \ar[ll]^{(1,0)}
\ar[rr]^{ (1, 0)}
&& \sheaf{F}_2(U_3) \cong \Zspace \ar[d]^{\id} \\
\cosheaf{F}_0(U_2) \cong \Zspace \ar[rr]^{\id} \ar[rrd]^{\id} &&
\cosheaf{F}_0(U_1) \cong \Zspace \ar[d]^{\id}
&& \cosheaf{F}_0(U_3) \cong \Zspace \ar[ll]^{\id} \ar[lld]^{\id} \\
&& \cosheaf{F}_0(U_0) \cong \Zspace &&
}
\end{equation*}
Let $\Fstack_0$ be the persistence stack of $\bisheaf{F}_0$.
For any \'etale open $a : A \to \Sspace^2$, $\image \Fstack_0(a)$
is the constant local system $\Zspace$ over $A$.
We now construct a second constructible map $h : X \to \Sspace^2$.
Let $\Sstrat'$ be the stratification of $\Sspace^2$ consisting of the origin
as the $0$-stratum $S'_1$ and $\Sspace^2 - S'_1$ as the $2$-stratum $S_2'$.
Once again, $(\Sspace^2, \Sstrat')$ is a Whitney stratification and
therefore admits control data $(\Sspace^2, \Sstrat', \Jcontrol')$.
Define $h$ as the map that takes $B$ to $S_2'$ and the rest of $X$ to
the origin $S_1'$.
Now consider the bisheaf $\bisheaf{H}_0$ of $h$ in dimension $0$.
Let $U \subseteq \Sspace^2$ be an $\Sstrat'$-basic open associated
to $S_1'$ and $V \subseteq U$ an $\Sstrat'$-basic open associated
to $S_2'$.
Then $\bisheaf{H}_0$ is uniquely determined (up to an isomorphism) by
the following commutative diagram:
\begin{equation*}
\xymatrix{
\sheaf{H}_2(U) \cong \Zspace \ar[rr]^0 \ar[d]^{0} &&
\sheaf{H}_2(V) \cong \Zspace \ar[d]^{\id} \\
\cosheaf{H}_0(U) \cong \Zspace &&
\cosheaf{H}_0(V) \cong \Zspace.
\ar[ll]^{\id}
}
\end{equation*}
Let $\Hstack_0$ be the persistence stack of $\bisheaf{H}_0$.
For any \'etale open $a : A \to \Sspace^2$ that covers the origin,
$\image \Hstack_0(a)$ is zero.
This zero is explained by the fact that we may perturb $h$
by an arbitrarily small amount so that the pre-image
of the origin is empty.
This is picked up by the cap product.
By Theorem \ref{thm:main}, $f$ has an open neighborhood $U \subseteq \mathfrak{W}(X, \Sspace^2)$
such that for each map $g \in U$ their bisheaves
are related by canonical bisheaf maps
$$\bisheaf{F}_0 \leftarrow \Sigma^\star \bisheaf{F}_0 \to \bisheaf{G}_0.$$
By Corollary \ref{corr:main}, their persistence stacks are related by
canonical stack maps
$$\Fstack_0 \leftarrow \Sigma^\star \Fstack_0 \to \Gstack_0.$$
Consider the \'etale open $\id : \Sspace^2 \to \Sspace^2$.
The shrinking of $\id$ is $\id$ itself.
As a consequence, Corollary \ref{corr:main} is saying that the two local systems
$\image \Fstack_0(\id)$ and $\image \Gstack_0(\id)$ are isomorphic.
However, $\image \Hstack_0(\id) = 0$.
Therefore $h$ cannot be in the open set $U$.
Our stability theorem is inherently local
and cannot be extended to a global statement.
\end{ex}
\newpage
\bibliographystyle{acm}
\section{Introduction}
\label{sec:introduction}
\vspace{-0.15cm}
Over the past decades, computer designers have steadily used \emph{Dynamic Random Access Memory} (DRAM) as the main memory due to its prominent features such as high performance and low cost per GB.
Despite its performance and cost efficiency, DRAM suffers from frequent refresh requirements and limited scalability.
Refreshing DRAM cells every few milliseconds imposes a significant power overhead, no matter how many accesses are dispatched to the main memory.
This power overhead is even more pronounced when the system is mostly idle.
In addition, the limited scalability of DRAM caps the maximum main memory size that can be used in a computer system \cite{itrs}.
To alleviate the limitations of DRAM, \emph{Non-Volatile Memories} (NVMs) have emerged in recent studies, offering zero leakage current for data retention and better scalability than DRAM.
Among the NVMs proposed in recent years, \emph{Phase-Change Memory} (PCM), \emph{Spin-Transfer Torque RAM} (STT-RAM),
and \emph{Resistive RAM} (ReRAM) are recognized as the most promising NVMs for main memory \cite{raey}.
Despite these prominent features, NVMs have serious shortcomings, such as high dynamic write power and long write latency (similar to solid-state drives \cite{tiering}), which prevent them from entirely replacing the DRAM technology.
NVMs have asymmetric characteristics for read and write requests.
In most emerging NVMs, write requests take longer to complete, so performance drops in write-dominant workloads.
From the power perspective, write requests also consume more power than read requests.
In addition, NVMs sustain a very limited number of write cycles compared to DRAM, despite several efforts to increase their lifetime \cite{sadegh,7155523}.
Due to the shortcomings of DRAM, several studies have attempted to employ NVMs in the main memory of computer systems.
A few of these studies explore the possibility of entirely replacing DRAM with NVMs \cite{10.1109/ISPASS.2013.6557176,Lee:2009:APC:1555754.1555758}.
A recent study shows that NVMs cannot reach the performance and power consumption of DRAM in the near future \cite{10.1109/ISPASS.2013.6557176}.
Other studies investigate a hybrid main memory composed of both DRAM and NVM and its implications for the \emph{Operating System} (OS)
\cite{clockdwf,Dhiman:2009:PHP:1629911.1630086,
Qureshi:2009:SHP:1555754.1555760}.
Hybrid memories try to exploit the complementary characteristics of DRAM and NVM in order to improve performance or power consumption compared to a DRAM-only main memory.
Clock-DWF \cite{clockdwf} is one of the most recent studies in this field; it uses two clock algorithms, one for managing DRAM and another for managing NVM.
The technique moves data pages between the two memories in order to reduce power consumption while maintaining almost the same performance level.
Clock-DWF outperforms previous work such as CLOCK-PRO and CAR, making it the state-of-the-art technique in the literature.
However, the reported results of Clock-DWF on a hybrid DRAM-NVM memory do not account for the cost of migrations between the DRAM and NVM memories; the effect of moving data pages between the main memory and the secondary storage is likewise neglected.
There are also several studies that employ a hybrid memory architecture in on-chip memories \cite{Sampaio:2014:EAA:2691365.2691395}, whose discussion is beyond the scope of this work.
This paper presents a data migration scheme in a hybrid memory architecture employing both DRAM and NVM in the main memory.
The main aim of the proposed scheme is reducing the number of non-beneficial data migrations between DRAM and NVM memories to improve both performance and power efficiency.
To this end, we use two \emph{Least Recently Used} (LRU) queues (one for DRAM and one for NVM) and optimize the LRU queue for NVM to prevent non-beneficial migrations to DRAM.
The optimizations in the LRU queue are minimal and therefore the proposed scheme will have almost the same hit ratio as an unmodified LRU.
Contrary to Clock-DWF, where each write hit results in moving the page to DRAM, in the proposed scheme every hit in the NVM LRU is treated like an ordinary LRU hit, with one difference: if a page stays among the top pages of the LRU for more than a threshold number of accesses, it is considered hot and is moved to DRAM.
Since the cost of moving a data page between two memories is high, using this threshold will prevent non-beneficial migrations that are very likely to occur in previous studies such as Clock-DWF.
Both the proposed scheme and previous studies have been simulated using a framework modeled after the Linux memory management layer.
The performance and power characteristics are extracted from the same source as previous studies.
We also used PARSEC to run the experiments \cite{parsec}.
Since the multi-level caches of the CPU affect the distribution of accesses dispatched to the main memory, we used the COTSon full-system simulator \cite{cotson}, which is able to simulate a multi-core system with multiple cache levels.
The experimental results show that the proposed scheme can reduce the power consumption up to 48\% (14\% on average), improve performance up to 70\% (48\% on average), and improve endurance up to 93\% (64\% on average) compared to previous studies.
As compared to a DRAM-based main memory, the power consumption is reduced up to 79\% (43\% on average).
The rest of the paper is organized as follows.
Section \ref{sec:workloadchar} presents our model for evaluating the performance and power consumption in hybrid memories.
The motivation of this work is discussed in Section \ref{sec:prevwork}.
The proposed data migration scheme is presented in Section \ref{sec:proposed}.
Experimental results are reported in Section \ref{sec:experiment}.
Finally, Section \ref{sec:conclusion} concludes the paper.
\vspace{-0.2cm}
\section{Performance and Power Models in a Hybrid Memory}
\label{sec:workloadchar}
\vspace{-0.15cm}
This section presents a model for performance and power consumption of hybrid memories.
The proposed model tries to consider all aspects of computer systems which influence the performance and/or the power consumption.
In addition to the traditional page movements on a miss or an eviction, hybrid memories also have migrations between the two memories.
The migration between two memories depends on the architecture of the hybrid memory.
For the sake of generality, we consider separate memory modules for DRAM and NVM that communicate through \emph{Direct Memory Access} (DMA).
If both memory types can be assembled in one module, the migrations can be done more effectively.
The integrated memory, however, requires hardware modification which is out of scope of this paper.
In the following sections, the performance and power models will be presented.
\begin{table}[t]
\caption{Parameters Description}
\label{tbl:paramdesc}
\scriptsize
\centering
\begin{tabular}{|c|c|}
\hline
\textbf {Parameter} & \textbf{Description} \\ \hline
$P_{Hit_{DRAM}}$ & DRAM Memory Hit Probability \\ \hline
$P_{Hit_{NVM}}$ & NVM Memory Hit Probability \\ \hline
$P_{R_{DRAM}}$ & DRAM Read Access Probability \\ \hline
$P_{R_{NVM}}$ & NVM Read Access Probability \\ \hline
$P_{W_{DRAM}}$ & DRAM Write Access Probability \\ \hline
$P_{W_{NVM}}$ & NVM Write Access Probability \\ \hline
$P_{Miss}$ & Main Memory Miss Probability \\ \hline
$P_{Mig_{D}}$ & Probability of NVM to DRAM Migration\\ \hline
$P_{Mig_{N}}$ & Probability of DRAM to NVM Migration \\ \hline
$P_{DiskToD}$ & Probability of Moving Page to DRAM due to Page Faults\\ \hline
$P_{DiskToN}$ & Probability of Moving Page to NVM due to Page Faults\\ \hline
$T_{R_{DRAM}}$ & DRAM Memory Read Latency $(s)$ \\ \hline
$T_{R_{NVM}}$ & NVM Memory Read Latency $(s)$ \\ \hline
$T_{W_{DRAM}}$ & DRAM Memory Write Latency $(s)$ \\ \hline
$T_{W_{NVM}}$ & NVM Memory Write Latency $(s)$ \\ \hline
$T_{Disk}$ & Disk Access Latency $(s)$ \\ \hline
$Po_{R_{DRAM}}$ & DRAM Read Dynamic Power $(nJ)$ \\ \hline
$Po_{W_{DRAM}}$ & DRAM Write Dynamic Power $(nJ)$ \\ \hline
$Po_{R_{NVM}}$ & NVM Read Dynamic Power $(nJ)$ \\ \hline
$Po_{W_{NVM}}$ & NVM Write Dynamic Power $(nJ)$ \\ \hline
$PageFactor$ & \# of accesses to memory to write a data page \\ \hline
$AvgStaticPower$ & Prorated Static Power Over All Requests \\ \hline
$StperPage$ & Static Power Consumption of a Page $(nJ/s)$ \\ \hline
$AccessperPage$ & Average Number of Accesses to Each Page $(1/s)$ \\ \hline
\end{tabular}
\vspace{-0.5cm}
\end{table}
\subsection{Performance Model}
\label{sec:perfmodel}
\vspace{-0.15cm}
The performance model depends on the delay of DRAM and NVM, granularity of eviction, and the delay of migration between memories.
For measuring performance, we use \emph{Average Memory Access Time} (AMAT).
The overhead of migrations will be prorated between all accesses to the memory.
Equation \ref{eq:perf} shows the formula for AMAT.
The description of the parameters is available in Table \ref{tbl:paramdesc}.
In this equation, the first two terms calculate AMAT for all hit accesses in either DRAM or NVM.
The third term considers the page faults.
Since transferring a data page from disk to memory is done via DMA, the delay of
writing data blocks to memory overlaps with reading the next data block from the disk.
Therefore, the OS only observes the disk delay, which is why this term considers only the disk latency.
\begin{figure}[h]
\scriptsize
\vspace{-0.5cm}
\begin{flalign}
\label{eq:perf}
\hspace{-2cm} AMAT &= \nonumber \\
& P_{Hit_{DRAM}} * (P_{R_{DRAM}}*T_{R_{DRAM}}+ P_{W_{DRAM}} * T_{W_{DRAM}})\nonumber \\
&+ P_{Hit_{NVM}}* (P_{R_{NVM}} * T_{R_{NVM}} + P_{W_{NVM}} * T_{W_{NVM}}) \nonumber \\
&+ P_{Miss} * T_{Disk} \nonumber \\
&+ P_{Mig_{D}} *PageFactor * (T_{R_{NVM}} + T_{W_{DRAM}})\nonumber \\
&+ P_{Mig_{N}}* PageFactor * (T_{R_{DRAM}} + T_{W_{NVM}})
\end{flalign}
\vspace{-0.5cm}
\end{figure}
The last two terms calculate the migration cost between two memories.
When a migration occurs, a data page is read from one memory and written to the other.
Since the granularity of data pages is much larger than that of individual memory accesses (typically 4 to 16 B), we use $PageFactor$, a coefficient
that converts the movement of one data page into the required number of memory accesses.
The granularity of the moves between disk and memory modules and between two memories is a data page which is typically 4KB or 8KB.
In this paper, we assume 4KB data pages.
Moving a data page from disk to either memory might itself trigger a migration between the two memories, depending on the algorithm employed for managing the hybrid memory.
The proposed performance model takes this type of migration into account.
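To make the model concrete, the AMAT of Equation \ref{eq:perf} can be evaluated numerically. The Python sketch below is illustrative only: the latencies follow Table \ref{tbl:memory}, while the access probabilities and the 8-byte access granularity are assumptions made for this example.

```python
# Illustrative evaluation of the AMAT model (Eq. 1).
# Latencies follow Table III (DRAM 50 ns r/w, NVM 100/350 ns r/w,
# disk 5 ms); all probabilities are hypothetical.

PAGE_SIZE = 4096                         # 4 KB pages
ACCESS_SIZE = 8                          # assumed bytes per memory access
PAGE_FACTOR = PAGE_SIZE // ACCESS_SIZE   # 512 accesses to move one page

def amat(p_hit_dram, p_hit_nvm, p_miss, p_mig_d, p_mig_n,
         p_r_dram=0.7, p_w_dram=0.3, p_r_nvm=0.7, p_w_nvm=0.3,
         t_r_dram=50e-9, t_w_dram=50e-9,
         t_r_nvm=100e-9, t_w_nvm=350e-9, t_disk=5e-3):
    """Average Memory Access Time in seconds (Eq. 1, term by term)."""
    return (p_hit_dram * (p_r_dram * t_r_dram + p_w_dram * t_w_dram)
            + p_hit_nvm * (p_r_nvm * t_r_nvm + p_w_nvm * t_w_nvm)
            + p_miss * t_disk
            + p_mig_d * PAGE_FACTOR * (t_r_nvm + t_w_dram)
            + p_mig_n * PAGE_FACTOR * (t_r_dram + t_w_nvm))

# Halving the migration probabilities noticeably lowers AMAT,
# which is the effect the proposed scheme targets.
base = amat(0.30, 0.699, 0.001, 0.01, 0.01)
fewer_migrations = amat(0.30, 0.699, 0.001, 0.005, 0.005)
```

Even with these made-up probabilities, the migration terms are comparable to the page-fault term, which previews why non-beneficial migrations matter.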
\subsection{Power Model}
\label{sec:powermodel}
\vspace{-0.15cm}
The proposed power model tries to consider every aspect of the hybrid memories in order to provide more accurate and more realistic estimations for the power consumption of computer systems.
While static power is consumed regardless of the number of requests arriving at the memory, dynamic power is consumed per request sent to the memory.
Our power model considers the migration between two memories and moving pages from disk to either of the memory modules as well as static and dynamic power for servicing requests.
The dynamic power consumption is calculated per access to the memory.
This makes the power model independent of the application runtime and the memory size.
Therefore, we introduce \emph{Average Power Per Request} (APPR) as a metric for measuring the power as shown in Equation \ref{eq:powerdynamic}.
Similar to the performance model, the first two terms calculate the power for all hit accesses to the memories.
The third and fourth terms consider the write power for moving a data page from disk to a memory module.
The last two terms take into account the power effect of the migrations between two memories.
\begin{figure}[h]
\scriptsize
\vspace{-0.5cm}
\begin{align}
\label{eq:powerdynamic}
\hspace{-0.5cm}APPR & = \nonumber \\
& P_{Hit_{DRAM}} * (P_{R_{DRAM}}* Po_{R_{DRAM}}+ P_{W_{DRAM}} * Po_{W_{DRAM}}) \nonumber \\
& + P_{Hit_{NVM}}* (P_{R_{NVM}}* Po_{R_{NVM}}+ P_{W_{NVM}} * Po_{W_{NVM}}) \nonumber \\
& + P_{Miss} * P_{DiskToD} * PageFactor * Po_{W_{DRAM}}\nonumber \\
& + P_{Miss} * P_{DiskToN} * PageFactor * Po_{W_{NVM}}\nonumber \\
& + P_{Mig_{D}} * PageFactor *(Po_{R_{NVM}}+ Po_{W_{DRAM}}) \nonumber \\
& +P_{Mig_{N}}* PageFactor *(Po_{R_{DRAM}} + Po_{W_{NVM}})
\end{align}
\vspace{-0.5cm}
\end{figure}
Since static power consumption is independent of requests, we introduce a new parameter,
$AvgStaticPower$, which prorates the static power consumption over all requests that arrive at the memory in a given time interval.
The rationale for prorating the static power over all requests is that, from the OS perspective, the main memory consumes power (both static and dynamic) to service requests, so both sources of power consumption should be counted as the cost of servicing the requests.
For a specific workload, $AvgStaticPower$ is calculated according to Equation \ref{eq:powerstatic}.
\begin{figure}[h]
\scriptsize
\vspace{-0.5cm}
\begin{equation}
\label{eq:powerstatic}
AvgStaticPower = \frac{StperPage}{AccessperPage}
\end{equation}
\vspace{-0.5cm}
\end{figure}
Here, $AvgStaticPower$ can be combined with the dynamic power to form an APPR that models all power aspects of hybrid memories.
It is worth mentioning that the dynamic power consumption remains independent of memory size and workload, while, as expected, the static power per request still depends on the memory size and the request service rate.
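Analogously, the dynamic APPR of Equation \ref{eq:powerdynamic} and the prorated static power of Equation \ref{eq:powerstatic} can be evaluated with a short sketch. The per-access energies follow Table \ref{tbl:memory}; the disk-to-memory terms use the write energies, as described in the text, and all probabilities and page-level parameters are hypothetical.

```python
# Illustrative evaluation of the APPR model (Eq. 2) plus the prorated
# static power of Eq. 3. Energies in nJ follow Table III; probabilities
# and the page-level static figures are assumptions.

PAGE_FACTOR = 4096 // 8  # accesses to move one 4 KB page (8 B accesses assumed)

def appr(p_hit_dram, p_hit_nvm, p_miss, p_disk_to_d, p_disk_to_n,
         p_mig_d, p_mig_n,
         p_r_dram=0.7, p_w_dram=0.3, p_r_nvm=0.7, p_w_nvm=0.3,
         po_r_dram=3.2, po_w_dram=3.2, po_r_nvm=6.4, po_w_nvm=32.0):
    """Average dynamic energy Per Request, in nJ (Eq. 2, term by term)."""
    return (p_hit_dram * (p_r_dram * po_r_dram + p_w_dram * po_w_dram)
            + p_hit_nvm * (p_r_nvm * po_r_nvm + p_w_nvm * po_w_nvm)
            + p_miss * p_disk_to_d * PAGE_FACTOR * po_w_dram
            + p_miss * p_disk_to_n * PAGE_FACTOR * po_w_nvm
            + p_mig_d * PAGE_FACTOR * (po_r_nvm + po_w_dram)
            + p_mig_n * PAGE_FACTOR * (po_r_dram + po_w_nvm))

def avg_static_power(st_per_page, access_per_page):
    """Eq. 3: static energy prorated over the accesses each page receives."""
    return st_per_page / access_per_page

dynamic = appr(0.30, 0.699, 0.001, 1.0, 0.0, 0.01, 0.01)
# A 4 KB DRAM page at 1 J/GB.s consumes roughly 3.8 nJ/s of static
# energy; 0.5 accesses/page/s is an assumed service rate.
total = dynamic + avg_static_power(st_per_page=3.8, access_per_page=0.5)
```

With these assumed figures, the prorated static term is of the same order as the dynamic term, which matches the model's point that both must be charged to requests.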
\vspace{-0.2cm}
\section{Motivation}
\label{sec:prevwork}
\vspace{-0.15cm}
Designing hybrid memories that employ both DRAM and NVM has been discussed in many previous studies.
A group of previous studies used DRAM as a caching layer for NVM \cite{Qureshi:2009:SHP:1555754.1555760,6799122,7059057}.
As with other caching techniques, if the locality of the requests drops below a threshold, the performance of the cache decreases.
In addition, the algorithms employed in the DRAM cache can be moved into the \emph{Last Level Cache} (LLC) of CPU in order to evict mostly read-dominant data pages \cite{cachereplacementpcm}.
Another group of previous studies, similar to our proposed scheme, use DRAM and NVM at the same level in the memory hierarchy \cite{clockdwf, Dhiman:2009:PHP:1629911.1630086,Ramos:2011:PPH:1995896.1995911,6855546}.
Many of these studies require hardware modifications in the memory module controllers \cite{Ramos:2011:PPH:1995896.1995911,Dhiman:2009:PHP:1629911.1630086}.
There are also very few software-driven techniques that try to use the existing interfaces between OS and memory modules \cite{6855546,clockdwf}.
CLOCK-DWF \cite{clockdwf}, one of the most effective previous techniques, is closely related to this work and outperforms many earlier studies such as CLOCK-PRO \cite{clock-pro}.
Hence, we provide an in-depth analysis of its performance and power.
CLOCK-DWF uses two clock algorithms, one for each memory module.
Upon a page fault, if the request causing the fault is a write, the page is moved to DRAM; otherwise, it is moved to NVM.
The modifications in the clock algorithm enable CLOCK-DWF to identify popular, write-dominant data pages and move them to DRAM.
If a write request arrives for a data page residing in the NVM memory, the data page will be moved to DRAM.
Migrating pages between the two memories requires many accesses to both memories,
but this effect is not considered in CLOCK-DWF, which makes its model inaccurate.
In the remainder of this section, we analyze CLOCK-DWF with respect to the proposed performance and power models.
Before examining CLOCK-DWF, we will calculate the maximum power saving that can be achieved by reducing the static power consumption.
The proposed power model can be used for modeling homogeneous memories.
Hence, a DRAM-only main memory can also be characterized by the proposed model.
Considering a DRAM-only main memory with LRU algorithm as the eviction policy, Fig. \ref{fig:drampower} shows the composition of the power consumption sources for various workloads.
Since static power accounts for 60-80\% of the total power consumption of a DRAM main memory, reducing the static power consumption will have a significant effect on the overall system power consumption.
As shown in Fig. \ref{fig:drampower}, the \emph{streamcluster} benchmark does not behave similarly to the other workloads.
According to Table \ref{tbl:workloads}, this workload has a large burst of accesses and a small memory footprint which will result in higher dynamic power consumption.
Workloads with a high hit ratio in the LLC of the CPU have higher static power consumption per request,
because fewer requests reach the main memory and the static power is prorated over a smaller number of requests.
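A back-of-the-envelope calculation illustrates why static power dominates in a DRAM-only memory. The DRAM characteristics (3.2 nJ per access, 1 J per GB per second of static power) follow Table \ref{tbl:memory}; the memory size and the request rate below are assumptions for illustration only.

```python
# Back-of-the-envelope check that static power dominates in a DRAM-only
# main memory. DRAM figures follow Table III; the memory size and the
# main-memory request rate are assumed for illustration.

DRAM_SIZE_GB = 4.0
STATIC_J_PER_GB_S = 1.0        # Table III: 1 J per GB per second
DYNAMIC_NJ_PER_ACCESS = 3.2    # Table III: DRAM read/write energy

requests_per_second = 5e8      # assumed main-memory request rate

# Static energy prorated over requests, in nJ per request.
static_nj_per_request = (DRAM_SIZE_GB * STATIC_J_PER_GB_S * 1e9) / requests_per_second
static_share = static_nj_per_request / (static_nj_per_request + DYNAMIC_NJ_PER_ACCESS)
# With these assumptions the static share is roughly 70%, in line with
# the 60-80% range observed in Fig. 2.
```

Lowering the request rate (e.g., for workloads with a high LLC hit ratio) raises the static share further, matching the observation above.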
CLOCK-DWF maintains two clock algorithms for DRAM and NVM.
The clock algorithm in NVM is the traditional clock algorithm with one difference:
if a write access arrives for a data page in NVM, that page is moved to DRAM.
Therefore, no write access is ever served by NVM.
The main aim of this method is to reduce the number of writes in NVM.
Although this prevents any write from reaching NVM, each write access to a data page in NVM results in a data page migration between the two memories.
The clock algorithm for DRAM, however, is different: it tries to keep write-dominant data pages in DRAM and evicts the mostly read-dominant ones.
This is motivated by the fact that read-only pages have a better performance-power trade-off in NVM than write-accessed pages.
Upon occurrence of a page fault, if the request is a read, the corresponding data page is moved to NVM; if it is a write, the page is moved to DRAM.
\begin{figure}
\centering
\includegraphics[scale=0.45]{drampower}
\caption{DRAM Power Breakdown}
\label{fig:drampower}
\vspace{-0.5cm}
\end{figure}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=.3\textwidth]{clockpower}%
\label{fig:clockpower}}
\hfil
\subfloat[]{\includegraphics[width=.3\textwidth]{clockperf}%
\label{fig:clockperf}}
\hfil
\subfloat[]{\includegraphics[width=.3\textwidth]{clockwrite}%
\label{fig:clockwrite}}
\caption{a) CLOCK-DWF Power Breakdown Normalized to DRAM Power Consumption b) Normalized AMAT of CLOCK-DWF Compared to DRAM-Only Memory c) Number of Writes in CLOCK-DWF Normalized to NVM-Only Memory}
\label{fig:clock}
\vspace{-0.5cm}
\end{figure*}
\subsection{Power Analysis}
\vspace{-0.15cm}
Fig. \ref{fig:clockpower} depicts the normalized power consumption of CLOCK-DWF compared to power consumption of a DRAM-only memory.
In all workloads, the static power consumption is reduced by 80\%, which shows the effectiveness of hybrid memories in reducing static power.
Although CLOCK-DWF can decrease the power consumption in many workloads, there are workloads in which CLOCK-DWF fails to improve power consumption and has worse power efficiency compared to DRAM-only memory.
The \emph{streamcluster} benchmark is read-dominant, and CLOCK-DWF moves its read-only data pages to NVM.
Therefore, the DRAM area stays almost idle and NVM serves most of the requests,
causing the dynamic power consumption to be higher than that of a DRAM-only main memory.
The two other benchmarks with higher power consumption than DRAM are \emph{canneal} and \emph{fluidanimate}.
Although these two workloads are read-intensive, their access behaviour causes CLOCK-DWF to migrate data pages to NVM and, after a short time, bring them back to DRAM.
It is worth noting that \emph{blackscholes} is a read-only benchmark; its dynamic power consumption resembles that of a DRAM-only memory because,
while DRAM is not yet full, data pages are moved to DRAM regardless of the request type.
In many of the workloads examined in this paper, migrations contribute more than 40\% of the power consumption.
This is because, when DRAM is full, each write access to a data page in NVM triggers a migration from NVM to DRAM and
a corresponding eviction-driven migration from DRAM to NVM.
\subsection{Performance Analysis}
\vspace{-0.15cm}
In terms of performance, the sources of latency observable by applications are the delay of responding to requests, the delay of migrations,
and the delay of page faults.
Similar to the power analysis, the performance analysis shows how much latency must be paid in order to use a hybrid memory.
Fig. \ref{fig:clockperf} shows the contribution of each source of delay on the AMAT.
AMAT is normalized based on AMAT of a DRAM-only main memory.
The calculated AMAT for requests is very close to the results reported in the CLOCK-DWF study.
Migrations, however, have not been considered in the CLOCK-DWF study.
Based on the proposed model, the delay caused by migrations is considerable and contributes more than 60\% of the total AMAT.
Therefore, performance, like power, is greatly degraded by non-beneficial migrations.
If the hybrid memory algorithm identifies and prevents these migrations, it will reduce the migration cost in terms of performance, power, and endurance.
The beneficial migrations, however, should be allowed to exploit the benefits of hybrid memories.
\subsection{Endurance Analysis}
\vspace{-0.15cm}
As mentioned earlier, CLOCK-DWF does not issue any write requests to NVM; all writes are served by DRAM.
Therefore, the only sources of writes in NVM are migrations from DRAM to NVM and the movement of data pages from disk to NVM on page faults caused by read requests.
Although the data pages in NVM are read-dominant, each write request to a data page in NVM results in a large number of physical writes, since the granularity of moving a data page is typically up to three orders of magnitude larger than that of CPU requests.
Fig. \ref{fig:clockwrite} shows the contribution of various sources of writes in NVM.
The number of writes is normalized compared to an NVM-only main memory to see how much CLOCK-DWF can reduce the total number of writes.
In most of the workloads, writes issued for migrations contribute more than 50\% of the total writes in NVM.
This excessive use of migrations makes the overall number of writes even higher than in an NVM-only main memory.
Hence, the lifetime of NVM is heavily penalized by CLOCK-DWF.
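The write amplification behind this observation can be quantified with a small sketch. The 8-byte access granularity is an assumption; the page size follows the 4 KB used throughout the paper.

```python
# Write amplification of a CLOCK-DWF-style migration: a single CPU write
# to a page in NVM moves the whole 4 KB page to DRAM, and the page's
# eventual eviction writes it back to NVM in full. An 8 B access
# granularity is assumed for illustration.

PAGE_SIZE = 4096
ACCESS_SIZE = 8
writes_per_page_move = PAGE_SIZE // ACCESS_SIZE   # 512 access-sized writes

# Serving the write in place costs one NVM write; migrating avoids that
# write now, but the later eviction back to NVM costs a full page write.
in_place_cost = 1
migrate_then_evict_cost = writes_per_page_move

amplification = migrate_then_evict_cost / in_place_cost
```

Unless the migrated page absorbs many future writes in DRAM, this roughly 512x amplification is why migration-heavy policies can exceed the write count of an NVM-only memory.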
\section{Proposed Data Migration Scheme}\label{sec:proposed}
\vspace{-0.15cm}
Non-beneficial migrations are the biggest flaw in CLOCK-DWF and other previous work.
Therefore, in the proposed scheme, we try to identify and prevent this type of migrations.
In addition, the proposed scheme aims to maintain almost the same hit ratio as conventional algorithms in order to remain competitive with a DRAM-only main memory managed by LRU.
The proposed scheme consists of two LRU queues, one queue for DRAM and another queue for NVM.
In order to maintain a high hit ratio, the algorithm employed in both queues is LRU without any modification.
The proposed scheme manages the migrations between two memories and moves pages from/to disk.
Therefore, both memories work with LRU and the proposed scheme decides when a data page should be migrated to another memory.
Furthermore, upon moving a data page to a memory, it is treated according to that memory's algorithm, e.g., inserted at the head of the LRU
queue, with the last page in the queue being evicted.
This is one of the main differences between this work and the previous studies.
In previous studies, the algorithms for managing pages in the memories had to be changed, which results in a lower hit ratio.
In order to identify the data pages whose migration would improve power consumption and performance (with respect to the migration cost),
the proposed scheme stores additional information, such as read and write counters, in the NVM LRU queue.
Note that this additional information does not interfere with LRU; the LRU algorithm does not need to be aware of this housekeeping information.
For each data page in the NVM queue, two counters record the number of read and write accesses to the page since it entered the queue.
Fig. \ref{fig:twolru} shows the architecture of the proposed data migration scheme consisting of two LRU queues.
Dashed lines depict actions performed by the proposed technique, and solid lines represent the traditional LRU management algorithm.
Dark data pages are more frequently accessed and are considered as hot data pages.
Contrary to CLOCK-DWF, which places pages faulted in by read requests in NVM, the proposed scheme moves all pages from disk to the DRAM area.
This is motivated by the fact that, since DRAM is always full, moving a faulted page to DRAM triggers an eviction to NVM,
so the cost of moving the page to either NVM or DRAM is the same in terms of NVM writes.
Newly accessed data pages have a higher probability of access than older ones, so moving a new page to DRAM increases the DRAM hit ratio rather than the NVM hit ratio.
This improves both performance and power efficiency, since DRAM is superior in terms of dynamic power and delay.
\begin{figure}
\centering
\includegraphics[scale=0.3]{twolru}
\caption{Proposed Data Migration Scheme in a Hybrid Memory Architecture}
\label{fig:twolru}
\vspace{-0.5cm}
\end{figure}
The overhead of storing the housekeeping information is not considerable and is about 0.04\% for 4KB data pages.
However, keeping counters for all pages in NVM has a few drawbacks.
First, it would require an ordering scheme to identify pages that are cold but accessed once in a long while: such pages reside in NVM long enough to accumulate high counter values and would therefore be moved to DRAM, where they cannot compete with hot pages and soon return to NVM, making their migration useless.
Second, there would be no distinction between pages that are frequently accessed and typically stay near the head of the NVM LRU queue and pages that merely go back and forth in the queue.
The proposed scheme adds another mechanism to handle both of the above issues: the housekeeping information is stored only for a small percentage of the top positions in the NVM LRU queue.
Once a data page falls below this selected percentage, its counters are reset to zero.
This both provides the required ordering and identifies bursty accesses.
Since NVMs have different read and write costs in terms of power and performance, the proposed scheme treats them differently.
Write-dominant data pages should have higher priority than read-dominant ones for migration to DRAM, since they cost more in NVM.
Therefore, the $writeperc$ and $writethreshold$ parameters are set to higher values than $readperc$ and $readthreshold$.
Algorithm \ref{alg:hybrid} shows the flow of the proposed scheme when a request arrives.
Since DRAM contains the hottest data pages, the proposed scheme searches DRAM first; if the page is not found there, it searches NVM.
Finding the data page in DRAM will result in a normal LRU housekeeping.
Otherwise, the extra housekeeping information in NVM will be updated based on the request type.
The read and write counters are stored only for the top $readperc$ and $writeperc$ fractions of the NVM queue, respectively.
Therefore, on a hit, the read and write counters of data pages that drop out of these top positions are cleared.
Lines \ref{alg:hybrid:nvm5} through \ref{alg:hybrid:nvm4} update the counters for the corresponding data page.
If the counter of a data page in NVM exceeds $read\,threshold$ or $write\,threshold$ (depending on the request type), the page is migrated
to DRAM.
Inserting a new data page into memory and eviction policies are unchanged from LRU and therefore, such details are omitted from the algorithm for the sake of brevity.
The values of $read\,threshold$ and $write\,threshold$ determine how aggressively the scheme prevents migrations with a low probability of being
useful.
They are closely related to the cost of a migration between DRAM and NVM, which in turn depends on the performance and power characteristics of the
employed NVM.
\begin{algorithm}[t]
\scriptsize
\SetAlgoNoLine
Search for $request\,address$ in $DRAM\,LRU$\;
\eIf{$request\,address$ is found in DRAM}{
Update $DRAM\,LRU$\;}
{Search for $request\,address$ in $NVM\,LRU$\; \label{alg:hybrid:nvm}
\eIf{$request\,address$ is found in NVM}{
Update $NVM\,LRU$\; \label{alg:hybrid:nvm2}
Reset read counter for page in position $readperc$\;\label{alg:hybrid:nvm3}
Reset write counter for page in position $writeperc$\;
\eIf{$request$ is read}{\label{alg:hybrid:nvm5}
\eIf{$request$ is within $readperc$}{
$page\,read\,counter= page\,read\,counter + 1$\;
}{
$page\,read\,counter=1$\;
}
}{
\eIf{$request$ is within $writeperc$}{
$page\,write\,counter= page\,write\,counter + 1$\;}
{$page\,write\,counter=1$\;
}
}\label{alg:hybrid:nvm4}
\If{($request$ is read and $page\,read\,counter > read\,threshold$) or
($request$ is write and $page\,write\,counter > write\,threshold$)}{
Migrate page to DRAM\;
}
}{
Issue page fault from Disk to DRAM\;
Migrate from DRAM to NVM if necessary\;
}
}
\caption{Data Migration in a Hybrid Memory}
\label{alg:hybrid}
\end{algorithm}
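For illustration, the flow of Algorithm \ref{alg:hybrid} can also be sketched in Python. This is a simplified, hypothetical model: the queues are plain ordered dictionaries, counter resets are approximated by restarting a counter when its page is accessed outside the tracked top fraction, eviction details are elided (as in the algorithm itself), and all parameter values are illustrative.

```python
from collections import OrderedDict

class HybridMemory:
    """Simplified sketch of the proposed two-LRU migration scheme.

    Counters are kept only for pages within the top read_perc/write_perc
    fraction of the NVM queue; a page whose counter exceeds the matching
    threshold is migrated to DRAM. All parameter values are illustrative.
    """

    def __init__(self, dram_pages, nvm_pages,
                 read_perc=0.2, write_perc=0.4,
                 read_threshold=2, write_threshold=4):
        self.dram = OrderedDict()      # MRU at the end
        self.nvm = OrderedDict()
        self.counters = {}             # page -> (reads, writes)
        self.dram_pages, self.nvm_pages = dram_pages, nvm_pages
        self.read_perc, self.write_perc = read_perc, write_perc
        self.read_threshold, self.write_threshold = read_threshold, write_threshold

    def _top(self, perc):
        """Pages currently within the given fraction of the NVM MRU end."""
        k = max(1, int(len(self.nvm) * perc))
        return set(list(self.nvm)[-k:])

    def access(self, page, is_write):
        if page in self.dram:
            self.dram.move_to_end(page)        # plain LRU housekeeping
            return "dram_hit"
        if page in self.nvm:
            in_read_top = page in self._top(self.read_perc)
            in_write_top = page in self._top(self.write_perc)
            self.nvm.move_to_end(page)
            r, w = self.counters.get(page, (0, 0))
            if is_write:
                w = w + 1 if in_write_top else 1   # restart outside top fraction
            else:
                r = r + 1 if in_read_top else 1
            self.counters[page] = (r, w)
            if (is_write and w > self.write_threshold) or \
               (not is_write and r > self.read_threshold):
                self._migrate_to_dram(page)
                return "migrated"
            return "nvm_hit"
        # Page fault: always fetch into DRAM (details elided).
        self._insert_dram(page)
        return "page_fault"

    def _migrate_to_dram(self, page):
        del self.nvm[page]
        self.counters.pop(page, None)
        self._insert_dram(page)

    def _insert_dram(self, page):
        self.dram[page] = True
        self.dram.move_to_end(page)
        if len(self.dram) > self.dram_pages:
            victim, _ = self.dram.popitem(last=False)   # evict LRU to NVM
            self.nvm[victim] = True
            if len(self.nvm) > self.nvm_pages:
                self.nvm.popitem(last=False)            # evict LRU to disk

# Tiny demo: page 1 is evicted to NVM, then repeated writes migrate it back.
mem = HybridMemory(dram_pages=2, nvm_pages=4, write_threshold=2)
for p in (1, 2, 3):
    mem.access(p, False)
results = [mem.access(1, True) for _ in range(3)]
```

In the demo, the first two writes are served in place by NVM ("nvm_hit") and only the third, having exceeded the threshold, triggers a migration; this is precisely how the threshold filters out non-beneficial migrations.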
\section{Experimental Results}
\label{sec:experiment}
\vspace{-0.15cm}
In this section, we present the experimental setup used to extract traces from the workloads, together with the experimental results for both the proposed
method and previous studies.
\vspace{-0.1cm}
\subsection{Experimental Setup}
\label{sec:setup}
\vspace{-0.15cm}
The proposed scheme and previous studies are evaluated based on the proposed performance and power models.
For further accuracy of the evaluation, we used COTSon \cite{cotson} which is a full system simulator to obtain memory traces.
The memory traces are extracted by running the actual benchmark programs in a Linux virtual machine inside COTSon; only memory accesses from the \emph{Region Of Interest} (ROI) of each benchmark are considered.
PARSEC-3.0 \cite{parsec} has been selected as the benchmarking suite.
The input of all benchmarks was set to the largest available dataset in order to minimize the effect of starting from a cold memory\footnote{The \emph{swaptions} workload is not included in the results due to compilation issues on our platform.}.
The COTSon simulator was configured with a quad-core CPU, two levels of cache, and 4GB of main memory running the Ubuntu operating system.
Using a quad-core CPU ensures that there are always enough requests issued to the memory to simulate a production server.
The detailed configuration of the simulated hardware is reported in Table \ref{tbl:cotsonconfig}.
In order to fully understand the effect of different parameters of the workloads on the output of the hybrid memories, the main features of the workloads are presented in Table \ref{tbl:workloads} and will be discussed in the next subsection.
In the experiments, the total memory size is set to 75\% of the total pages and the DRAM size is set to 10\% of the total memory size, similar to previous studies \cite{clockdwf}.
The performance and power characteristics of DRAM and NVM, reported in Table \ref{tbl:memory}, are obtained from the same source as CLOCK-DWF in order to have a fair comparison.
\begin{table}[t]
\caption{COTSon Configuration}
\scriptsize
\label{tbl:cotsonconfig}
\centering
\begin{tabular}{|c|c|}
\hline
CPU & Quad-core with MOESI Protocol \\ \hline
L1 Data Cache & 32KB WB 4-way set associative with 64B line size \\ \hline
L1 Instruction Cache & 32KB WB 4-way set associative with 64B line size \\ \hline
Last-Level Cache & 2MB WB 16-way set associative with 64B line size \\ \hline
Main Memory & 2x 2GB DDR2 \\ \hline
Secondary Storage & HDD with 5 milliseconds response time \\ \hline
\end{tabular}
\vspace{-0.25cm}
\end{table}
\begin{table}[t]
\caption{Workload Characterization}
\scriptsize
\label{tbl:workloads}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Workload & Working Set Size (KB) & \# of Read Requests& \# of Write Requests\\ \hline
Blackscholes & 5,188& 26,242 (100\%) & 0 (0\%) \\ \hline
Bodytrack & 25,304& 658,606 (62\%) & 403,835 (38\%) \\ \hline
Canneal &164,768 & 24,432,900 (98\%) & 653,623 (2\%)\\ \hline
Dedup & 512,460& 17,187,130 (71\%) & 6,998,314 (29\%)\\ \hline
Facesim &210,368 & 11,730,278 (66\%) & 6,137,519 (34\%) \\ \hline
Ferret &68,904 & 54,538,546 (89\%) & 7,033,936 (11\%) \\ \hline
Fluidanimate &266,120 & 9,951,202 (69\%)& 4,492,775 (31\%) \\ \hline
Freqmine & 156,108& 8,427,181 (69\%)& 3,947,122 (31\%)\\ \hline
Raytrace & 57,116& 1,807,142 (83\%)& 370,573 (17\%)\\ \hline
Streamcluster &15,452 & 168,666,464 (99.8\%)& 448,612 (0.2\%)\\ \hline
Vips & 115,380& 5,802,657 (59\%) &4,117,660 (41\%)\\ \hline
X264 & 80,232& 14,669,353 (74\%)& 5,220,400 (26\%)\\ \hline
\end{tabular}
\vspace{-0.1cm}
\end{table}
\begin{table}[t]
\caption{Memory Characteristics \cite{clockdwf}}
\scriptsize
\label{tbl:memory}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Memory & Latency r/w $(ns)$ & Power r/w $(nJ)$ & Static Power $(\frac{J}{GB \cdot s})$ \\ \hline
DRAM & 50/50 & 3.2/3.2 & 1 \\ \hline
NVM (PCM) & 100/350 & 6.4/32 & 0.1 \\ \hline
\end{tabular}
\vspace{-0.5cm}
\end{table}
\vspace{-0.5cm}
\subsection{Experimental Results}
Fig. \ref{fig:proposedpower} depicts the normalized power consumption of CLOCK-DWF and the proposed scheme compared to a DRAM-only main memory.
For each workload, the left and right bars represent CLOCK-DWF and the proposed scheme, respectively.
In most of the workloads, the proposed scheme has better power efficiency compared to CLOCK-DWF, with a few exceptions that will be addressed later in this section.
As shown in Fig. \ref{fig:proposedpower}, the power consumption of the proposed scheme is up to 48\% (14\% on average\footnote{Average numbers reported throughout the paper are geometric means.}) less than CLOCK-DWF.
In addition, the proposed scheme can reduce the total power consumption of the main memory up to 79\% (43\% on average) compared to using a DRAM-only main memory.
The static power consumption is the same for both methods since they are evaluated using the same DRAM and NVM size.
The main benefit of the proposed scheme is that the power consumption for migrations is decreased significantly compared to CLOCK-DWF.
The migration cost is decreased up to 80\% by using the proposed scheme.
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=.3\textwidth]{proposedpower}%
\label{fig:proposedpower}}
\hfil
\subfloat[]{\includegraphics[width=.3\textwidth]{proposedwrite}%
\label{fig:proposedwrite}}
\hfil
\subfloat[]{\includegraphics[width=.3\textwidth]{proposedperf}%
\label{fig:proposedperf}}
\caption{a) Power Breakdown of CLOCK-DWF (Left Bar) and the Proposed Scheme (Right Bar) Normalized to DRAM Power Consumption, b) Number of Writes in CLOCK-DWF (Left Bar) and the Proposed Scheme (Right Bar) Normalized to NVM-Only Memory, and c) Normalized AMAT of the Proposed Scheme Compared to CLOCK-DWF }
\label{fig:hitratio}
\vspace{-0.5cm}
\end{figure*}
Among the benchmark programs, \emph{canneal}, \emph{fluidanimate}, and \emph{streamcluster} have unusual characteristics, such as a small footprint or a lack of read-dominant data pages, which increase the dynamic and migration power and make them poorly suited to hybrid memories.
Contrary to the other workloads, in the \emph{raytrace} workload the migration cost of the proposed scheme is higher than that of CLOCK-DWF.
Our analysis shows that the optimal values for $readthreshold$ and $writethreshold$ in this workload differ from those of the other workloads, which causes many non-beneficial migrations between the two memories.
It is worth noting that adaptive threshold prediction can further improve the efficiency of the proposed scheme.
This is part of our ongoing research.
One of the main differences between CLOCK-DWF and the proposed scheme is how they treat write requests attempting to access data pages in NVM.
CLOCK-DWF moves such data pages to DRAM, while the proposed scheme tries to serve the request from NVM.
Fig. \ref{fig:proposedwrite} shows the number of writes arriving at NVM, normalized to an NVM-only main memory.
If migrations are not considered, CLOCK-DWF reduces the number of writes dispatched to NVM.
Once migrations are taken into account, however, CLOCK-DWF issues up to 3.7x more writes to NVM than an NVM-only main memory, which significantly degrades the lifetime of NVM.
The proposed scheme, on the other hand, limits the number of migrations between memories and therefore issues fewer writes to NVM.
This tradeoff between dispatching requests to NVM and migrating data pages to DRAM affects the contribution of the different sources of writes in NVM.
The proposed scheme favours issuing writes to NVM instead of migrating the whole data page to DRAM while CLOCK-DWF does the opposite.
This change in policy results in significant decrease (up to 93\%) in the number of writes in NVM compared to CLOCK-DWF.
In addition, the proposed scheme reduces the number of writes in NVM by up to 75\% (49\% on average) compared to an NVM-only main memory, which prolongs its lifetime by up to 4x.
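The 4x lifetime figure follows directly from the reported write reduction, under the assumption (made here for illustration, not stated explicitly in the text) that NVM lifetime is inversely proportional to the number of writes it absorbs: a fractional reduction $r$ in writes extends lifetime by a factor of $1/(1-r)$. A minimal sketch:

```python
def lifetime_factor(write_reduction):
    """Relative NVM lifetime when the write count drops by `write_reduction`,
    assuming lifetime is inversely proportional to the number of writes."""
    return 1.0 / (1.0 - write_reduction)

# A 75% reduction in NVM writes quadruples the expected lifetime.
assert lifetime_factor(0.75) == 4.0
# The 49% average reduction corresponds to roughly a 2x improvement.
assert abs(lifetime_factor(0.49) - 1.96) < 0.01
```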
In the \emph{streamcluster} and \emph{vips} benchmark programs, CLOCK-DWF performs slightly better, since burst accesses to data pages in these workloads lie near the threshold at which a migration becomes beneficial, and the proposed scheme may make wrong decisions in such cases.
From a performance perspective, as concluded in Section \ref{sec:prevwork}, migrations add significant delay to the average request response time in CLOCK-DWF.
Fig. \ref{fig:proposedperf} depicts the normalized AMAT of the proposed scheme compared to CLOCK-DWF.
The proposed scheme successfully limits the number of migrations, and migrations contribute less than 50\% of the AMAT in most of the workloads.
Limiting migrations improves the AMAT of the proposed scheme by up to 70\% (48\% on average) compared to CLOCK-DWF.
Preventing non-beneficial migrations is not the only reason that the proposed scheme has superior performance compared to CLOCK-DWF.
The policy for selecting migration targets is another reason for the higher performance of the proposed scheme, since placing hot data pages in DRAM improves AMAT.
In the \emph{raytrace} and \emph{vips} benchmarks, CLOCK-DWF has better AMAT, since the proposed scheme issues a high number of migrations in these workloads.
\vspace{-0.1cm}
\section{Conclusion}
\label{sec:conclusion}
\vspace{-0.15cm}
NVMs are emerging memory technologies that, unlike DRAM, do not suffer from high leakage power and do not depend on the power supply to retain data.
NVMs, however, have their own limitations, which prevent them from entirely replacing DRAM.
Hybrid memories try to reduce the power consumption of the main memory while maintaining high performance.
Previous studies fail to consider all aspects of hybrid memories, and the inaccuracy of their models results in inefficient hybrid memories.
In this paper, we first presented both performance and power models for the hybrid memories.
Using the proposed models, we identified the shortcomings of previous studies and proposed a novel data migration scheme for hybrid memory.
The proposed scheme consists of two LRU queues with efficient algorithms to manage data migration.
The experimental results show that the proposed scheme reduces power consumption by up to 79\% compared to a DRAM-only memory and by up to 48\% compared to previous studies.
\bibliographystyle{IEEEtran}
\vspace{-0.3cm}
\section{Introduction}
Game theory provides ways of formally representing strategic interactions
between multiple players, as well as a variety of {\em solution concepts}
for the resulting games. The best-known solution concept is that of Nash
equilibrium~\citep{Nash50:Eq}, where each player plays a best response to
all the other players' strategies. The computational complexity of, given
a game in normal form, computing a (any) Nash equilibrium, remained open
for a long time and was accorded significant
importance~\citep{Papadimitriou01:Algorithms}. (I will give a brief
introduction to / review of computational complexity in
Section~\ref{se:complexity}; the reader unfamiliar with it may prefer to
read this section first.) An elegant algorithm for the two-player case, the
Lemke-Howson algorithm~\citep{Lemke64:Equilibrium}, was proved to require
exponential time on some game families by~\cite{Savani04:Exponentially}.
Finally, in a breakthrough series of papers, the problem was established to
be PPAD-complete, even in the two-player
case~\citep{Daskalakis09:Complexity,Chen09:Settling}.\footnote{Depending on
the precise formulation, the problem can actually be FIXP-complete for
more than 2 players~\citep{Etessami10:Complexity}.}
Not all Nash equilibria are created equal; for example, one can
Pareto-dominate another. Moreover, generally, the set of Nash equilibria
does not satisfy {\em interchangeability}. That is, if player 1 plays her
strategy from one Nash equilibrium, and player 2 plays his strategy from
another Nash equilibrium, the result is not guaranteed to be a Nash
equilibrium. This leads to the dreaded {\em equilibrium selection
problem}: if one plays a game for the first time, how is one to know
according to which equilibrium to play? This problem is arguably
exacerbated by the fact that determining whether equilibria with particular
properties, such as placing probability on a particular pure strategy or
having at least a certain level of social welfare, exist is NP-complete in
two-player games (and associated optimization problems are inapproximable
unless P=NP)~\citep{Gilboa89:Nash,Conitzer03:Nash}. In any case,
equilibria are often seen as a state to which play could reasonably
converge, rather than an outcome that can necessarily be arrived at
immediately by deduction.
In this paper, we consider the concept of {\em evolutionarily stable
strategies}, a solution concept for symmetric games with two players.
$s$ will denote a pure strategy and $\sigma$ a mixed
strategy, where $\sigma(s)$ denotes the probability that
mixed strategy $\sigma$ places on pure strategy $s$.
$u(s,s')$ is the utility that a player playing $s$ obtains when playing
against a player playing $s'$, and $$u(\sigma,\sigma') = \sum_{s,s'}
\sigma(s)\sigma'(s')u(s,s')$$ is the natural extension to mixed strategies.
\begin{defin}[\cite{Price73:Logic}]
Given a symmetric two-player game, a mixed strategy $\sigma$ is said to
be an {\em evolutionarily stable strategy (ESS)} if both of the following
properties hold.
\begin{enumerate}
\item (Symmetric Nash equilibrium property) For any mixed strategy $\sigma'$, we have $u(\sigma,\sigma) \geq
u(\sigma',\sigma)$.
\item For any mixed strategy $\sigma'$ ($\sigma' \neq \sigma$) for which $u(\sigma,\sigma) =
u(\sigma',\sigma)$, we have $u(\sigma,\sigma')>u(\sigma',\sigma')$.
\end{enumerate}
\end{defin}
\noindent The intuition behind this definition is that a population of players
playing $\sigma$ cannot be successfully ``invaded'' by a small population
of players playing some $\sigma' \neq \sigma$, because they will perform
{\em strictly} worse than the players playing $\sigma$ and therefore
they will shrink as a fraction of the population. They perform strictly
worse either because (1) $u(\sigma,\sigma) > u(\sigma',\sigma)$, and
because $\sigma$ has dominant presence in the population this outweighs
performance against $\sigma'$; or because (2) $u(\sigma,\sigma) =
u(\sigma',\sigma)$ so the second-order effect of performance against
$\sigma'$ becomes significant, but in fact $\sigma'$ performs worse against
itself than $\sigma$ performs against it, that is,
$u(\sigma,\sigma')>u(\sigma',\sigma')$.\\
\noindent {\bf Example (Hawk-Dove game~\citep{Price73:Logic}).}
Consider the following symmetric two-player game:
\begin{center}
\begin{tabular}{r|c|c|}
&$\text{Dove}$ &$\text{Hawk}$ \\ \hline
$\text{Dove}$ & 1,1 & 0,2\\ \hline
$\text{Hawk}$ & 2,0 & -1,-1\\ \hline
\end{tabular}\\
\end{center}
The unique symmetric Nash equilibrium $\sigma$ of this game is 50\% Dove, 50\%
Hawk. For any $\sigma'$, we have $u(\sigma,\sigma) =
u(\sigma',\sigma) = 1/2$. That is, everything is a best response to $\sigma$.
We also have $u(\sigma, \sigma') = 1.5 \sigma'(\text{Dove}) - 0.5
\sigma'(\text{Hawk}) = 2 \sigma'(\text{Dove}) - 0.5$, and
$u(\sigma', \sigma') = 1 \sigma'(\text{Dove})^2 + 2
\sigma'(\text{Hawk})\sigma'(\text{Dove}) + 0
\sigma'(\text{Dove})\sigma'(\text{Hawk}) - 1 \sigma'(\text{Hawk})^2 =
-2 \sigma'(\text{Dove})^2+4 \sigma'(\text{Dove})-1$. The difference
between the former and the latter expression is $2 \sigma'(\text{Dove})^2 -
2\sigma'(\text{Dove}) + 0.5 = 2 (\sigma'(\text{Dove}) - 0.5)^2$. This
difference is strictly positive for all $\sigma' \neq \sigma$, implying that $\sigma$ is
an ESS.\\
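Both ESS conditions for the Hawk-Dove equilibrium can also be checked numerically. The sketch below (illustrative, not part of the original argument) sweeps candidate invaders over a grid of mixed strategies and verifies the definition directly:

```python
# Row-player payoffs in the Hawk-Dove game (Dove = 0, Hawk = 1).
U = [[1, 0],
     [2, -1]]

def u(p, q):
    """Expected row-player payoff when row plays mixed p, column plays q."""
    return sum(p[s] * q[t] * U[s][t] for s in range(2) for t in range(2))

sigma = (0.5, 0.5)
for i in range(101):
    d = i / 100.0
    mut = (d, 1.0 - d)
    if mut == sigma:
        continue
    # Condition 1: sigma is a best response to itself.
    assert u(sigma, sigma) >= u(mut, sigma) - 1e-12
    # Condition 2: on ties, sigma does strictly better against the invader.
    if abs(u(mut, sigma) - u(sigma, sigma)) < 1e-12:
        assert u(sigma, mut) > u(mut, mut)
```

Every candidate invader ties against $\sigma$ here, so the strict second condition is what repels all of them, exactly as the closed-form calculation shows.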
Intuitively, the problem of computing an ESS appears significantly harder
than that of computing a Nash equilibrium, or even a Nash equilibrium with
a simple additional property such as those described earlier. In the
latter type of problem, while it may be difficult to find the solution,
once found, it is straightforward to verify that it is in fact a Nash
equilibrium (with the desired simple property). This is not so for the
notion of ESS: given a candidate strategy, it does not appear
straightforward to figure out whether there exists a strategy that
successfully invades it. However, appearances can be deceiving; perhaps
there is a not entirely obvious, but nevertheless fast and elegant way of
checking whether such an invading strategy exists. Even if not, it is not
immediately clear whether this makes the problem of {\em finding} an ESS
genuinely harder. Computational complexity provides the natural toolkit
for answering these questions.
The complexity of computing whether a game has an evolutionarily stable
strategy (for an overview, see Chapter 29 of the Algorithmic Game Theory
book~\citep{Suri07:Evolutionary}) was first studied by
\cite{Etessami08:Computational}, who proved that the problem is both
NP-hard and coNP-hard, as well as that the problem is contained in
$\Sigma_2^P$ (the class of decision problems that can be solved in
nondeterministic polynomial time when given access to an NP oracle; see
also Section~\ref{se:complexity}). \cite{Nisan06:Note}
subsequently\footnote{An early version of \cite{Etessami08:Computational}
appeared in 2004.} proved the stronger hardness result that the problem
is co$D^P$-hard. He also observed that it follows from his reduction that
the problem of determining whether a given strategy is an ESS is coNP-hard
(and \cite{Etessami08:Computational} then pointed out that this also
follows from their reduction). \cite{Etessami08:Computational} also showed
that the problem of determining the existence of a {\em regular} ESS is
NP-complete. As was pointed out in both papers, all of this still leaves
the main question of the exact complexity of the general ESS problem open.
In this paper, this is settled: the problem is in fact
$\Sigma_2^P$-complete. After the review of computational complexity
(Section~\ref{se:complexity}), I will briefly discuss the significance of
this result (Section~\ref{se:significance}).
The remainder of the paper---to which the reader not interested in a review
of computational complexity or a discussion of the significance of the
result is welcome to jump---contains the proof, which is structured as
follows. In Section~\ref{se:restricted}, Lemma~\ref{th:restricted} states
that the slightly more general problem of determining whether an ESS exists
whose support is restricted to a subset of the strategies is
$\Sigma_2^P$-hard. This is the main part of the proof. Then, in
Section~\ref{se:without}, Lemma~\ref{le:no_duplicates} points out that if
two pure strategies are exact duplicates, neither of them can occur in the
support of any ESS. By this, we can disallow selected strategies from
taking part in any ESS simply by duplicating them. Combining this with the
first result, we arrive at the main result, Theorem~\ref{th:main}.
One may well complain that Lemma~\ref{le:no_duplicates} is a bit of a
cheat; perhaps we should just consider duplicate strategies to be ``the
same'' strategy and merge them back into one. As the reader probably
suspects, such a hasty and limited patch will not avoid the hardness
result. Even something a little more thorough, such as iterated
elimination of very weakly dominated strategies (in some order), will not
suffice: in Appendix~\ref{se:appendix} I show, with additional analysis and
modifications, that the result holds even in games where each pure strategy
is the unique best response to some mixed strategy.
\section{Brief Background on Computational Complexity}
\label{se:complexity}
Much of theoretical computer science is concerned with designing algorithms
that solve computational problems {\em fast} (as well as, of course,
correctly). For example, one computational problem is the following: given
a two-player game in normal form, determine whether there exists a Nash
equilibrium in which player $1$ obtains utility at least $1$. A specific
two-player normal-form game would be an {\em instance} of that problem.
What does it mean to solve a problem fast? This is fundamentally about how
the runtime scales with the size of the input (e.g., the size of the game).
The focus is generally primarily on whether the runtime scales as a {\em
polynomial} function of the input, which is considered fast (or {\em
efficient})---as opposed to, say, an exponential function.
For many problems, including the one described in the previous paragraph,
we do not have any efficient algorithm, nor do we have a proof that no such
algorithm exists. However, in these situations, we can often prove that
the problem is at least as hard as any other problem in a large class.
That is, we can prove that if the problem under consideration admits an
efficient algorithm, then so do all other problems in a large class. The
most famous such class is NP, which consists of {\em decision problems},
i.e., problems for which every instance has a ``yes'' or ``no'' answer.
Specifically, it consists of decision problems that are such that for every
``yes'' instance, there is a succinct proof (that can be efficiently
checked) that the answer is ``yes.'' A problem that is at least as hard as
any problem in NP is said to be NP-hard. If an NP-hard problem is also in
the class NP, it is said to be NP-complete; thus, in a sense, all
NP-complete problems are equally hard.
Many problems of interest are NP-complete. The paradigmatic NP-complete
problem is the {\em satisfiability} problem, which asks, given a
propositional logic formula, whether there is a way to set the variables in
this formula to {\em true} or {\em false} in such a way that the formula as
a whole evaluates to {\em true}. For example, the formula
$(x_1 \lor x_2) \land (\lnot x_2)$ is a ``yes'' instance, because setting
$x_1$ to {\em true} and $x_2$ to {\em false} results in the formula
evaluating to {\em true}. The succinct proof that an instance is a ``yes''
instance consists simply of values that the variables can take to make the
formula evaluate to {\em true}. As it turns out, the problem introduced at
the beginning of this section is NP-complete. It is in NP because given
the supports of the strategies in a Nash equilibrium with high utility for
player $1$, we can easily reconstruct such an equilibrium; therefore, the
supports serve as the proof that it is a ``yes'' instance. Many similar
problems are also NP-complete~\citep{Gilboa89:Nash,Conitzer03:Nash}.
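The reconstruction step alluded to above can be sketched concretely: given candidate supports, the indifference conditions form a linear system. This is an illustrative sketch (function name and encoding are assumptions), restricted to equal-size supports and omitting the additional check that strategies outside the supports are not better responses:

```python
import numpy as np

def equilibrium_from_supports(A, B, S1, S2):
    """Given payoff matrices A (player 1) and B (player 2) and candidate
    supports S1, S2 of equal size, solve the indifference conditions:
    player 2's mixture y equalizes player 1's payoffs over S1, and
    player 1's mixture x equalizes player 2's payoffs over S2."""
    k = len(S1)
    if len(S2) != k:
        return None
    # Solve for y: (A y)_i equal for all i in S1, probabilities sum to 1.
    M = np.zeros((k, k)); b = np.zeros(k)
    for r in range(k - 1):
        M[r] = A[S1[r], S2] - A[S1[r + 1], S2]
    M[k - 1] = 1.0; b[k - 1] = 1.0
    ys = np.linalg.solve(M, b)
    # Solve for x symmetrically using player 2's payoffs.
    N = np.zeros((k, k)); c = np.zeros(k)
    for r in range(k - 1):
        N[r] = B[S1, S2[r]] - B[S1, S2[r + 1]]
    N[k - 1] = 1.0; c[k - 1] = 1.0
    xs = np.linalg.solve(N, c)
    if (xs < -1e-9).any() or (ys < -1e-9).any():
        return None  # negative "probabilities": no equilibrium here
    x = np.zeros(A.shape[0]); x[S1] = xs
    y = np.zeros(A.shape[1]); y[S2] = ys
    return x, y

# Matching pennies: the unique equilibrium mixes uniformly.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = equilibrium_from_supports(A, -A, [0, 1], [0, 1])
assert np.allclose(x, [0.5, 0.5]) and np.allclose(y, [0.5, 0.5])
```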
A standard way to prove that a problem $A$ is NP-hard is to take another
problem $B$ that is already known to be NP-hard, and {\em reduce} it to
problem $A$. A reduction here is an efficiently computable function that
maps every instance of $B$ to some instance of $A$ with the same truth
value (``yes'' or ``no''). Given such a reduction, an efficient algorithm
for $A$ could be used to solve $B$ as well, proving that in the relevant
sense, $A$ is at least as hard as $B$.
There are other classes of interest besides NP, with hardness and
completeness defined similarly. For example, coNP consists
of problems where there is a succinct proof of an instance being a ``no''
instance. The class $\Sigma_2^P$ is most easily illustrated by a standard
complete problem for it. As in the satisfiability problem, we are given a
propositional logic formula, but this time, the variables are split into
two sets, $X_1$ and $X_2$. We are asked whether there exists a way to set
the variables in $X_1$ such that {\em no matter how} the variables in $X_2$
are set, the formula evaluates to {\em true}. (Note here the similarity to
the ESS problem, where we are asked whether there exists a strategy
$\sigma$ such that {\em no matter which} $\sigma'$ invades, the invasion
is repelled.) Similarly, a complete problem for the class $\Pi_2^P$ (which
equals co$\Sigma_2^P$) asks whether no matter how the variables in $X_1$
are set, there is a way to set the variables in $X_2$ so that the formula
evaluates to {\em true}. These classes are said to be at the {\em second
level of the polynomial hierarchy}, and the generalization to higher
levels is straightforward.
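The complete problem just described admits an obvious exponential-time brute-force decision procedure, sketched below (illustrative only; the function name and boolean encoding are assumptions):

```python
from itertools import product

def exists_forall(formula, n1, n2):
    """Decide whether some assignment to the n1 variables in X1 makes
    `formula` true for every assignment to the n2 variables in X2.
    `formula` takes two tuples of booleans. Exponential brute force."""
    return any(all(formula(x1, x2) for x2 in product([False, True], repeat=n2))
               for x1 in product([False, True], repeat=n1))

# (x1 or y1) and (x1 or not y1): setting x1 = True works for every y1.
f = lambda x, y: (x[0] or y[0]) and (x[0] or not y[0])
assert exists_forall(f, 1, 1) is True
# x1 xor y1: no fixed x1 works for both values of y1.
g = lambda x, y: x[0] != y[0]
assert exists_forall(g, 1, 1) is False
```

The nesting of `any` over `all` mirrors the $\exists\forall$ quantifier structure that the ESS problem shares: there exists a $\sigma$ such that every invader $\sigma'$ is repelled.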
\section{Significance of the Result}
\label{se:significance}
What is the significance of establishing the $\Sigma_2^P$-completeness of
deciding whether an evolutionarily stable strategy exists? When the
computational problem of determining the existence of an ESS comes up, it
is surely more satisfying to be able to simply state the exact complexity
of the problem than to have to state that it is hard for some classes,
included in another, and the exact complexity is unknown. Moreover, the
latter situation also left open the possibility that the ESS problem
exposed a fundamental gap in our understanding of computational complexity
theory. It could even have been the case that the ESS problem required the
definition of an entirely new complexity class for which the problem was
complete.\footnote{In the case of computing one Nash equilibrium, the class
PPAD had previously been defined~\citep{Papadimitriou94:On}, but it did
not have much in the way of known complete problems before the Nash
equilibrium result---and the standing of the class was quite diminished
by this lack of natural problems known to be complete for it.} The
result presented here implies that this is not the case; while $\Sigma_2^P$
is not as well known as NP, it is a well-established complexity class.
Additionally, some of the significance of the result is in the irony that a
key solution concept in evolutionary game theory, which is often taken to
be a model of how equilibria might actually be reached in practice by a
simple process, is actually computationally significantly less tractable
(as far as our current understanding of computational complexity goes) than
the concept of Nash equilibrium. This was already implied by the earlier
hardness results referenced in the introduction, but the result obtained
here shows the gap to be even wider. This perhaps suggests that modified
solution concepts are called for, and more generally that the computational
complexity of solution concepts should be taken into account in assessing
their reasonableness for the purpose at hand. On the other hand, it is
important to note that it may yet be possible to find evolutionarily stable
strategies fast for most games actually encountered in practice. Games
encountered in practice may have additional structure that puts the problem
in a lower complexity class, possibly even P. If so, this would clearly
reduce the force of the call for new solution concepts.
\section{Hardness with Restricted Support}
\label{se:restricted}
Having completed a review of the relevant computational complexity theory and a
discussion of the significance of the result, we now begin the technical
part of the paper. As outlined earlier, we first introduce a slightly
different problem, which we will then show is $\Sigma_2^P$-hard. From
this, it will be fairly easy to show, in Section~\ref{se:without}, that the
main problem is $\Sigma_2^P$-hard.
\begin{defin}
In ESS-RESTRICTED-SUPPORT, we are given a symmetric two-player
normal-form game $G$ with strategies $S$, and a subset $T \subseteq S$.
We are asked whether there exists an evolutionarily stable strategy of
$G$ that places positive probability only on strategies in $T$ (but not
necessarily on all strategies in $T$).
\end{defin}
We will establish $\Sigma_2^P$-hardness by reduction from (the complement
of) the following problem.
\begin{defin}[MINMAX-CLIQUE]
We are given a graph $G=(V,E)$, sets $I$ and $J$, a partition of $V$ into
subsets $V_{ij}$ for $i \in I$ and $j \in J$, and a number $k$. We are
asked whether it is the case that for every function $t: I \rightarrow
J$, there is a clique of size (at least) $k$ in the subgraph induced on
$\bigcup_{i \in I} V_{i,t(i)}$. (Without loss of generality, we will
require $k>1$.)
\end{defin}
\noindent {\bf Example.} Figure~\ref{fi:ess_figure} shows a tiny MINMAX-CLIQUE
instance (let $k=2$).
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=2.75in]{ess_figure}
\caption{An example MINMAX-CLIQUE instance (with $k=2$), for which the
answer is ``no.''}
\label{fi:ess_figure}
\end{center}
\vspace{-.28in}
\end{figure}
The answer to this instance is ``no'' because for $t(1)=2,t(2)=1$, the
graph induced
on $\bigcup_{i \in I} V_{i,t(i)} = V_{12} \cup V_{21} = \{v_{12},v_{21}\}$
has no
clique of size at least $2$.\\
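MINMAX-CLIQUE itself can be decided by an exponential-time brute force over all selector functions $t$. The sketch below (illustrative; the edge set of the example instance is an assumption, read off the example game matrix given later in the reduction) reproduces the ``no'' answer for the instance above:

```python
from itertools import combinations, product

def minmax_clique(V_parts, E, k):
    """Decide a MINMAX-CLIQUE instance by brute force: for every selector
    t: I -> J, check whether the subgraph induced on the chosen cells
    contains a clique of size k. V_parts maps (i, j) -> set of vertices;
    E is a set of frozenset edges."""
    I = sorted({i for (i, j) in V_parts})
    J = sorted({j for (i, j) in V_parts})
    for choice in product(J, repeat=len(I)):
        chosen = set().union(*(V_parts[(i, j)] for i, j in zip(I, choice)))
        has_clique = any(
            all(frozenset({u, v}) in E for u, v in combinations(c, 2))
            for c in combinations(sorted(chosen), k))
        if not has_clique:
            return False  # this selector admits no k-clique
    return True

# The example instance: singleton cells; edges as implied by the figure.
V_parts = {(1, 1): {"v11"}, (1, 2): {"v12"},
           (2, 1): {"v21"}, (2, 2): {"v22"}}
E = {frozenset(e) for e in [("v11", "v21"), ("v11", "v22"), ("v12", "v22")]}
assert minmax_clique(V_parts, E, 2) is False  # t(1)=2, t(2)=1 has no edge
```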
We have the following known hardness result for this problem. (Recall that
$\Pi_2^P = \text{co}\Sigma_2^P$.)
\begin{known}[\citep{Ko95:Complexity}]
MINMAX-CLIQUE is $\Pi_2^P$-complete.
\end{known}
We are now ready to present the main part of the proof.
\begin{lemma1}
ESS-RESTRICTED-SUPPORT is $\Sigma_2^P$-hard.
\label{th:restricted}
\end{lemma1}
\begin{proof1}
We reduce from the complement of MINMAX-CLIQUE. That is, we show how to
transform any instance of MINMAX-CLIQUE into a symmetric two-player
normal-form game with a distinguished subset $T$ of its strategies, so
that this game has an ESS with support in $T$ if and only if the answer
to the MINMAX-CLIQUE instance is ``no.''
{\bf The Reduction.}
For every $i \in I$ and every $j \in J$, create a strategy $s_{ij}$.
For every $v \in V$, create a strategy $s_v$.
Finally, create a single additional strategy $s_0$.
\begin{itemize}
\item For all $i \in I$ and $j \in J$, $u(s_{ij},s_{ij})=1$.
\item For all $i \in I$ and $j, j' \in J$ with $j \neq j'$,
$u(s_{ij},s_{ij'})=0$.
\item For all $i, i' \in I$ with $i \neq i'$ and $j, j' \in J$, $u(s_{ij},s_{i'j'})=2$.
\item For all $i \in I$, $j \in J$, and $v \in V$, $u(s_{ij},s_v)=2-1/|I|$.
\item For all $i \in I$ and $j \in J$, $u(s_{ij},s_0)=2-1/|I|$.
\item For all $i \in I$, $j \in J$, and $v \in V_{ij}$, $u(s_v,s_{ij})=2-1/|I|$.
\item For all $i \in I$, $j,j' \in J$ with $j \neq j'$, and $v \in V_{ij}$,
$u(s_v,s_{ij'})=0$.
\item For all $i,i' \in I$ with $i \neq i'$, $j,j' \in J$, and $v \in V_{ij}$, $u(s_v,s_{i'j'})=2-1/|I|$.
\item For all $v \in V$, $u(s_v,s_v)=0$.
\item For all $v,v' \in V$ with $v \neq v'$ where $(v,v') \notin E$, $u(s_v,s_{v'})=0$.
\item For all $v,v' \in V$ with $v \neq v'$ where $(v,v') \in E$, $u(s_v,s_{v'}) = (k/(k-1))(2-1/|I|)$.
\item For all $v \in V$, $u(s_v,s_0)=0$.
\item For all $i \in I$ and $j \in J$, $u(s_0,s_{ij})=2-1/|I|$.
\item For all $v \in V$, $u(s_0,s_v)=0$.
\item $u(s_0,s_0)=0$.
\end{itemize}
We are asked whether there exists an ESS that places positive probability only on strategies $s_{ij}$ with $i \in I$ and $j \in J$. That is, $T=\{s_{ij} : i \in I, j \in J\}$.
{\bf Example.} Consider again the MINMAX-CLIQUE instance from
Figure~\ref{fi:ess_figure}.
The game to which the reduction maps this instance is:
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|}
& $s_{11}$ & $s_{12}$ & $s_{21}$ & $s_{22}$ & $s_{v_{11}}$ &
$s_{v_{12}}$ & $s_{v_{21}}$ & $s_{v_{22}}$ & $s_0$ \\
\hline
$s_{11}$ & 1 & 0 & 2 & 2 & 3/2
& 3/2 & 3/2 & 3/2 & 3/2 \\ \hline
$s_{12}$ & 0 & 1 & 2 & 2 & 3/2
& 3/2 & 3/2 & 3/2 & 3/2 \\ \hline
$s_{21}$ & 2 & 2 & 1 & 0 & 3/2
& 3/2 & 3/2 & 3/2 & 3/2 \\ \hline
$s_{22}$ & 2 & 2 & 0 & 1 & 3/2
& 3/2 & 3/2 & 3/2 & 3/2 \\ \hline
$s_{v_{11}}$ & 3/2 & 0 & 3/2 & 3/2 & 0
& 0 & 3 & 3 & 0 \\ \hline
$s_{v_{12}}$ & 0 & 3/2 & 3/2 & 3/2 & 0
& 0 & 0 & 3 & 0 \\ \hline
$s_{v_{21}}$ & 3/2 & 3/2 & 3/2 & 0 & 3
& 0 & 0 & 0 & 0 \\ \hline
$s_{v_{22}}$ & 3/2 & 3/2 & 0 & 3/2 & 3
& 3 & 0 & 0 & 0 \\ \hline
$s_0$ & 3/2 & 3/2 & 3/2 & 3/2 & 0 &
0 & 0 & 0 & 0 \\ \hline
\end{tabular}\\
\end{center}
It has an ESS $\sigma$ with weight $1/2$ on each of $s_{12}$ and $s_{21}$.
In contrast, (for example) $\sigma'$ with weight $1/2$ on each of $s_{11}$
and $s_{21}$ is invaded by the strategy $\sigma''$ with weight $1/2$ on
each of $s_{v_{11}}$ and $s_{v_{21}}$, because $u(\sigma'', \sigma') =
u(\sigma', \sigma') = 3/2$ and $u(\sigma'',\sigma'') = u(\sigma', \sigma'')
= 3/2$.
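The payoff definitions of the reduction can be constructed mechanically. The following sketch (illustrative; the strategy encoding and the example's edge set are assumptions inferred from the printed matrix) rebuilds the example game and checks it against the table above:

```python
from fractions import Fraction

def build_reduction(I, J, V_parts, E, k):
    """Construct the symmetric game of the reduction. Returns the list of
    strategy labels and the row-player payoff matrix as exact Fractions."""
    part_of = {v: ij for ij, cell in V_parts.items() for v in cell}
    Vs = sorted(part_of)
    base = 2 - Fraction(1, len(I))          # the recurring 2 - 1/|I|
    edge = Fraction(k, k - 1) * base        # payoff across an edge
    strats = ([("s", i, j) for i in I for j in J]
              + [("v", v) for v in Vs] + [("0",)])

    def u(a, b):
        if a[0] == "s" and b[0] == "s":
            if a[1] == b[1]:
                return 1 if a[2] == b[2] else 0
            return 2
        if a[0] == "s":                      # s_{ij} vs s_v or s_0
            return base
        if a[0] == "v" and b[0] == "s":
            i, j = part_of[a[1]]
            return 0 if i == b[1] and j != b[2] else base
        if a[0] == "v" and b[0] == "v":
            if a[1] != b[1] and frozenset({a[1], b[1]}) in E:
                return edge
            return 0
        if a[0] == "0" and b[0] == "s":
            return base
        return 0                             # remaining s_0 / s_v pairings

    return strats, [[u(a, b) for b in strats] for a in strats]

I, J = [1, 2], [1, 2]
V_parts = {(1, 1): {"v11"}, (1, 2): {"v12"},
           (2, 1): {"v21"}, (2, 2): {"v22"}}
E = {frozenset(e) for e in [("v11", "v21"), ("v11", "v22"), ("v12", "v22")]}
strats, U = build_reduction(I, J, V_parts, E, 2)
h = Fraction(3, 2)
expected = [
    [1, 0, 2, 2, h, h, h, h, h],
    [0, 1, 2, 2, h, h, h, h, h],
    [2, 2, 1, 0, h, h, h, h, h],
    [2, 2, 0, 1, h, h, h, h, h],
    [h, 0, h, h, 0, 0, 3, 3, 0],
    [0, h, h, h, 0, 0, 0, 3, 0],
    [h, h, h, 0, 3, 0, 0, 0, 0],
    [h, h, 0, h, 3, 3, 0, 0, 0],
    [h, h, h, h, 0, 0, 0, 0, 0],
]
assert U == expected
```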
{\bf Proof of equivalence.}
Suppose there exists a function $t: I \rightarrow J$ such that
every clique in the subgraph induced on $\bigcup_{i \in I} V_{i,t(i)}$ has size strictly less than $k$. We will show that the mixed strategy $\sigma$ that places probability $1/|I|$ on $s_{i,t(i)}$ for each $i \in I$ (and $0$ everywhere else) is an ESS.
First, we show that $\sigma$ is a best response against itself.
For any $s_{ij}$ in the support of $\sigma$, we have $u(s_{ij},\sigma) = (1/|I|) \cdot 1 + (1-1/|I|) \cdot 2 = 2 - 1/|I|$, and hence we also have $u(\sigma,\sigma)=2-1/|I|$.
For $s_{ij}$ not in the support of $\sigma$, we have
$u(s_{ij},\sigma) = (1/|I|) \cdot 0 + (1-1/|I|) \cdot 2 =
2-2/|I| < 2-1/|I|$. For all $i \in I$, for all $v \in V_{i,t(i)}$, we have
$u(s_v,\sigma) = (1/|I|) \cdot (2-1/|I|) + (1-1/|I|) \cdot
(2-1/|I|) = 2-1/|I|$.
For all $i \in I$, $j \in J$ with $j \neq t(i)$,
and $v \in V_{ij}$,
we have $u(s_v,\sigma) = (1/|I|) \cdot 0 + (1-1/|I|) \cdot
(2-1/|I|) = (1-1/|I|)
(2-1/|I|) < 2-1/|I|$.
Finally, $u(s_0,\sigma) = 2-1/|I|$. So $\sigma$ is a best response to itself.
It follows that if there were a strategy $\sigma' \neq \sigma$ that could successfully invade $\sigma$, then $\sigma'$ must put probability only on best responses to $\sigma$. Based on the calculations in the previous paragraph, these best responses are $s_0$, and, for any $i$, $s_{i,t(i)}$ and, for all $v \in V_{i,t(i)}$, $s_v$. The expected utility of $\sigma$ against any of these is $2-1/|I|$ (in particular, for any $i$, we have
$u(\sigma,s_{i,t(i)}) = (1/|I|) \cdot 1 + (1-1/|I|) \cdot 2 =
2 - 1/|I|$). Hence, $u(\sigma,\sigma') =
2 - 1/|I|$, and to successfully invade, $\sigma'$ must attain
$u(\sigma',\sigma') \geq 2 - 1/|I|$.
We can write $\sigma' = p_0s_0+ p_1 \sigma_1' + p_2 \sigma_2'$, where $p_0+p_1+p_2=1$,
$\sigma_1'$ only puts positive probability on the
$s_{i,t(i)}$ strategies, and $\sigma_2'$ only
puts positive probability on the $s_v$ strategies with
$v \in V_{i,t(i)}$.
The strategy that results from conditioning $\sigma'$ on $\sigma_1'$ not being played may be written as
\begin{equation*}
\begin{aligned}
& (p_0/(p_0+p_2)) s_0 + (p_2/(p_0+p_2)) \sigma_2' &
\end{aligned}
\end{equation*}
and thus we may write
\begin{equation*}
\begin{aligned}
u(\sigma',\sigma') = &
\ p_1^2 u(\sigma_1',\sigma_1')+
p_1(p_0+p_2) u(\sigma_1', (p_0/(p_0+p_2)) s_0 + (p_2/(p_0+p_2))
\sigma_2') \\
& +
(p_0+p_2)p_1 u((p_0/(p_0+p_2)) s_0 + (p_2/(p_0+p_2)) \sigma_2', \sigma_1')
\\ &
+
(p_0+p_2)^2
u((p_0/(p_0+p_2)) s_0 + (p_2/(p_0+p_2)) \sigma_2',
(p_0/(p_0+p_2)) s_0 + (p_2/(p_0+p_2)) \sigma_2') &
\end{aligned}
\end{equation*}
Now, if we shift
probability mass from $s_0$ to $\sigma_2'$, i.e., we decrease $p_0$ and increase $p_2$ by the same amount,
this will not affect any of the coefficients in the previous expression; it will
not affect any of
\begin{equation*}
\begin{aligned}
& u(\sigma_1',\sigma_1'), & \\
& u(\sigma_1', (p_0/(p_0+p_2)) s_0 + (p_2/(p_0+p_2)) \sigma_2') & \\
& \ \ \ \text{(because $u(s_{ij},s_v) = u(s_{ij},s_0) = 2 - 1/|I|$), and} & \\
& u((p_0/(p_0+p_2)) s_0 + (p_2/(p_0+p_2)) \sigma_2', \sigma_1') & \\
& \ \ \ \text{(because $u(s_0, s_{ij}) = u(s_v, s_{ij}) = 2 - 1/|I|$ when $v \in V_{ij}$
or $v \in V_{i'j'}$ with $i' \neq i$);} &
\end{aligned}
\end{equation*}
and it will not decrease
\begin{equation*}
\begin{aligned}
& u((p_0/(p_0+p_2)) s_0 + (p_2/(p_0+p_2)) \sigma_2',
(p_0/(p_0+p_2)) s_0 + (p_2/(p_0+p_2)) \sigma_2') & \\
& \ \ \ \text{(because
for any $v \in V$, $u(s_0,s_0)=u(s_0,s_v)=u(s_v,s_0)=0$).} &
\end{aligned}
\end{equation*}
Therefore, we may assume without loss of generality that $p_0=0$, and hence $\sigma' = p_1 \sigma_1' + p_2 \sigma_2'$.
It follows that we can write
\begin{equation*}
\begin{aligned}
& u(\sigma',\sigma')= p_1^2
u(\sigma_1',\sigma_1') + p_1p_2 u(\sigma_1',\sigma_2') + p_2p_1
u(\sigma_2',\sigma_1') + p_2^2 u(\sigma_2',\sigma_2') &
\end{aligned}
\end{equation*}
We first note that
$u(\sigma_1',\sigma_1')$ can be at most $2 - 1/|I|$. Specifically,
\begin{equation*}
\begin{aligned}
& u(\sigma_1',\sigma_1') = (\sum_i \sigma_1'(s_{i,t(i)})^2) \cdot 1 + (1 -
\sum_i \sigma_1'(s_{i,t(i)})^2) \cdot 2 &
\end{aligned}
\end{equation*}
and this expression is uniquely
maximized by setting each $\sigma_1'(s_{i,t(i)})$ to $1/|I|$.
$u(\sigma_1',\sigma_2')$ is easily seen to also be $2-1/|I|$, and
$u(\sigma_2',\sigma_1')$ is easily seen to be at most $2-1/|I|$ (in fact,
it is exactly that). Thus, to obtain $u(\sigma',\sigma') \geq 2 - 1/|I|$,
we must have either $p_1 = 1$ or $u(\sigma_2',\sigma_2') \geq 2 - 1/|I|$.
However, in the former case, we would require $u(\sigma_1',\sigma_1') = 2 -
1/|I|$, which can only be attained by setting each $\sigma_1'(s_{i,t(i)})$
to $1/|I|$---but this would result in $\sigma'=\sigma$. Thus, we can
conclude $u(\sigma_2',\sigma_2') \geq 2 - 1/|I|$. But then $\sigma_2'$
would also successfully invade $\sigma$. Hence, we can assume without loss
of generality that $\sigma' = \sigma_2'$, i.e., $p_0=p_1=0$ and $p_2 = 1$.
That is, we can assume that $\sigma'$ only places positive probability on strategies $s_v$ with $v \in \bigcup_{i \in I} V_{i,t(i)}$. For any $v,v' \in V$, we have $u(s_v,s_{v'}) = u(s_{v'},s_v)$.
Specifically, $u(s_v,s_{v'}) = u(s_{v'},s_v) = (k/(k-1))(2-1/|I|)$ if $v
\neq v'$ and $(v,v') \in E$, and $u(s_v,s_{v'}) = u(s_{v'},s_v) = 0$
otherwise. Now, suppose that $\sigma'(s_v)>0$ and $\sigma'(s_{v'}) >0$ for
$v \neq v'$ with $(v,v') \notin E$. We can write $\sigma' = p_0 \sigma'' +
p_1 s_v + p_2 s_{v'}$, where $p_0$, $p_1 = \sigma'(s_v)$, and $p_2 =
\sigma'(s_{v'})$ sum to $1$. We have
\begin{equation*}
\begin{aligned}
& u(\sigma',\sigma') =p_0^2
u(\sigma'',\sigma'') + 2p_0p_1 u(\sigma'', s_v) + 2p_0p_2 u(\sigma'',
s_{v'}) &
\end{aligned}
\end{equation*}
(because $u(s_v,s_v) = u(s_{v'},s_{v'}) = u(s_v,s_{v'}) = 0$).
Suppose, without loss of generality, that $u(\sigma'', s_v) \geq
u(\sigma'', s_{v'})$. Then, if we shift all the mass from $s_{v'}$ to
$s_v$ (so that the mass on the latter becomes $p_1+p_2$), this can only
increase $u(\sigma',\sigma')$, and it reduces the size of the support of
$\sigma'$ by $1$. By repeated application, we can assume without loss of
generality that the support of $\sigma'$ corresponds to a clique of the
induced subgraph on $\bigcup_{i \in I} V_{i,t(i)}$. We know this clique
has size $c$ where $c < k$. $u(\sigma',\sigma')$ is maximized if $\sigma'$
randomizes uniformly over its support, in which case
\begin{equation*}
\begin{aligned}
& u(\sigma',\sigma') =
((c-1)/c) (k/(k-1)) (2-1/|I|) < ((k-1)/k) (k/(k-1)) (2-1/|I|) = 2-1/|I|
&
\end{aligned}
\end{equation*}
But this contradicts the assumption that $\sigma'$ successfully invades $\sigma$. It
follows that $\sigma$ is indeed an ESS.
Conversely, suppose that there
exists an ESS $\sigma$ that places positive probability only on strategies $s_{ij}$ with $i \in I$ and $j \in J$.
We must have $u(\sigma, \sigma) \geq 2-1/|I|$, because
otherwise $s_0$ would be a better response to $\sigma$.
First suppose that for every $i \in I$, there is at most one
$j \in J$ such that $\sigma$ places positive probability on
$s_{ij}$ (we will shortly show that this must be the case).
Let $t(i)$ denote the $j \in J$ such that $\sigma(s_{ij}) >0$ (if there is no such $j$ for some $i$, then choose an arbitrary $j$ to equal $t(i)$). Then, $u(\sigma, \sigma)$ is uniquely maximized by setting $\sigma(s_{i,t(i)}) = 1/|I|$ for all
$i \in I$, resulting in
\begin{equation*}
\begin{aligned}
& u(\sigma, \sigma) = (1/|I|) \cdot 1 + (1 - 1/|I|) \cdot 2 = 2 - 1/|I| &
\end{aligned}
\end{equation*}
Hence, this is the only way to ensure that $u(\sigma, \sigma) \geq 2-1/|I|$, under the assumption that for every $i \in I$, there is at most one
$j \in J$ such that $\sigma$ places positive probability on
$s_{ij}$.
Now, let us consider the case where there exists an $i \in I$ such that there exist $j, j' \in J$ with $j \neq j'$, $\sigma(s_{ij}) > 0$, and $\sigma(s_{ij'}) > 0$, to show that such a strategy cannot obtain a utility of $2-1/|I|$ or more against itself.
We can write $\sigma = p_0 \sigma' + p_1 s_{ij} + p_2 s_{ij'}$,
where $\sigma'$ places probability zero on $s_{ij}$ and $s_{ij'}$. We observe that
$u(\sigma', s_{ij}) = u(s_{ij}, \sigma')$ and
$u(\sigma', s_{ij'}) = u(s_{ij'}, \sigma')$, because
when the game is restricted to these strategies, each player always gets the same payoff as the other player.
Moreover, $u(\sigma',s_{ij}) = u(\sigma',s_{ij'})$,
because $\sigma'$ does not place positive probability on
either $s_{ij}$ or $s_{ij'}$.
Hence, we have that
\begin{equation*}
\begin{aligned}
&
u(\sigma,\sigma) = p_0^2 u(\sigma',\sigma') +
2p_0(p_1+p_2) u(\sigma', s_{ij})
+ p_1^2 + p_2^2 &
\end{aligned}
\end{equation*}
But then, if we
shift all the mass from $s_{ij'}$ to $s_{ij}$ (so that
the mass on the latter becomes $p_1+p_2$) to obtain strategy
$\sigma''$, it follows that $u(\sigma'',\sigma'') >
u(\sigma,\sigma)$. By repeated application, we can find
a strategy $\sigma'''$ such that
$u(\sigma''',\sigma''')>
u(\sigma,\sigma)$ and for every $i \in I$, there is at most one
$j \in J$ such that $\sigma'''$ places positive probability on
$s_{ij}$. Because we showed previously that the latter type of strategy can obtain expected utility at most $2-1/|I|$ against itself, it follows that it is in fact the {\em only} type of strategy (among those that randomize only over the $s_{ij}$ strategies)
that can obtain expected utility $2-1/|I|$ against itself. Hence, we can conclude that the ESS $\sigma$ must have,
for each $i \in I$, exactly one $j \in J$ (to which
we will refer as $t(i)$) such that $\sigma(s_{i,t(i)})
= 1/|I|$, and that $\sigma$ places probability $0$ on every other
strategy.
Finally, suppose, for the sake of contradiction, that there exists a
clique of size $k$ in the induced subgraph on $\bigcup_{i \in I} V_{i,t(i)}$.
Consider the strategy $\sigma'$ that places probability $1/k$
on each of the corresponding strategies $s_v$.
We have that
$u(\sigma,\sigma)=u(\sigma,\sigma')=u(\sigma',\sigma)=
2-1/|I|$. Moreover,
\begin{equation*}
\begin{aligned}
&
u(\sigma',\sigma')=
(1/k)\cdot 0 +
((k-1)/k) \cdot (k/(k-1))(2-1/|I|) = 2-1/|I|
&
\end{aligned}
\end{equation*}
It follows that $\sigma'$ successfully invades $\sigma$---but this contradicts $\sigma$ being an ESS. It follows,
then, that $t$ is such that
every clique in the induced graph on $\bigcup_{i \in I} V_{i,t(i)}$ has size strictly less than $k$.
\end{proof1}
\section{Hardness without Restricted Support}
\label{se:without}
All that remains is to reduce the modified problem to the main problem
of determining whether a game has an ESS. The following lemma makes this
fairly straightforward.
\begin{lemma1}[No duplicates in ESS]
Suppose that strategies $s_1$ and $s_2$ ($s_1 \neq s_2$) are
duplicates, i.e., for all $s$, $u(s_1,s)=u(s_2,s)$.\footnote{It is fine to
require $u(s,s_1)=u(s,s_2)$ as well, and we will do so in
the proof of Theorem~\ref{th:main},
but it is not necessary for this
lemma to hold.}
Then no ESS places positive probability on
$s_1$ or $s_2$.
\label{le:no_duplicates}
\end{lemma1}
\begin{proof1}
For the sake of contradiction, suppose $\sigma$ is an ESS
that places positive probability on $s_1$ or $s_2$ (or both).
Then, let $\sigma' \neq \sigma$ be identical to $\sigma$
with the
exception that $\sigma'(s_1) \neq \sigma(s_1)$ and
$\sigma'(s_2) \neq \sigma(s_2)$ (but it must be that
$\sigma'(s_1)+\sigma'(s_2)=\sigma(s_1)+\sigma(s_2)$).
That is, $\sigma'$ redistributes some mass between $s_1$ and
$s_2$. Then, $\sigma$ cannot repel $\sigma'$,
because $u(\sigma,\sigma)=u(\sigma',\sigma)$
and $u(\sigma,\sigma')=u(\sigma',\sigma')$.
\end{proof1}
We now formally define the main problem:
\begin{defin}
In ESS, we are given a symmetric two-player
normal-form game $G$. We are asked whether there exists
an evolutionarily stable strategy of $G$.
\end{defin}
We now obtain the main result as follows.
\begin{theorem1}
ESS is $\Sigma_2^P$-complete.
\label{th:main}
\end{theorem1}
\begin{proof1}
\cite{Etessami08:Computational} proved membership in $\Sigma_2^P$.
We prove hardness by reduction from ESS-RESTRICTED-SUPPORT, which is hard
by Lemma~\ref{th:restricted}.
Given the game $G$ with strategies $S$ and subset of
strategies $T\subseteq S$ that can receive positive probability, construct a modified game $G'$ by duplicating
all the strategies in $S \setminus T$. (At this point, for duplicate
strategies $s_1$ and $s_2$, we require $u(s,s_1)=u(s,s_2)$ as well as $u(s_1,s)=u(s_2,s)$.)
If $G$ has an ESS $\sigma$ that places positive probability only on strategies in
$T$, this will still be an ESS in $G'$, because
any strategy that uses the new duplicate strategies will still be repelled,
just as its equivalent strategy that does not use the new duplicates was
repelled in the original game. (Here, it should be noted that the equivalent
strategy in the original game cannot turn out to be $\sigma$, because $\sigma$ does not
put any probability on a strategy that is duplicated.)
On the other hand, if $G'$ has an ESS, then by Lemma~\ref{le:no_duplicates}, this ESS can place positive
probability only on strategies in $T$. This ESS will still be an ESS in $G$ (all of whose strategies also exist in $G'$), and naturally it will still place
positive
probability only on strategies in $T$.
\end{proof1}
\section{Introduction}
\defcitealias{1995Storey}{SH95}
Emission lines from gaseous nebulae contain a wealth of information on the
conditions in the plasma, but to interpret observed spectra theoretical models of
line fluxes are required. The construction of new radio telescopes such as MeerKAT,
ALMA and LOFAR provides new opportunities to study emission lines with improved
sensitivity at low frequencies. Fig.~\ref{fig:alphatrans} shows the H$n\alpha$
transitions (hydrogen transitions between principal quantum numbers $n+1
\rightarrow n$) that will be observable with each of these telescopes. Low
frequency ($\leq 1.4$\,GHz) detections of hydrogen recombination lines are rare
\citep{2002Anantharamaiah}, but carbon lines are commonly detected at these
frequencies. Transitions in carbon produce lines at frequencies different from
those of hydrogen, but reliable calculations of lines with $n \gtrsim 300$ are
needed to interpret results from instruments such as LOFAR.
\begin{figure}
\includegraphics[width=\columnwidth]{alphatrans}
\caption{The black line shows the frequencies (wavelengths) of H$n\alpha$
transitions as a function of $n$ for hydrogen. The horizontal shaded bands depict
the operating frequency bands of ALMA (blue), MeerKAT (red) and LOFAR (green).
The lighter shaded bands are planned additions to the telescopes, but are not
operational yet. The vertical grey shaded areas indicate the ranges of principal
quantum number for which the H$n\alpha$ transitions will technically fall into
the observing frequencies of the respective telescopes. A colour version of this
graph is available in the electronic copy of this article.}
\label{fig:alphatrans}
\end{figure}
\citet{1966goldberg} showed that at low frequencies where stimulated processes
are important the accuracy of the calculation of atomic level populations has a
substantial influence on the theoretical intensities of recombination lines. This
sentiment is echoed 50 years later by \citet{2017SanchezContreras}, who also point
out that the results of various authors disagree, with no one set of values
clearly being the correct one. The need for accurate values is even
more important now that radio telescopes with high sensitivity are being constructed.
\citet{1976Burgess} included stimulated emission and absorption terms for the
bound-bound and bound-free transitions in an $n$-model that went up to principal
quantum number $n = 500$. They found that stimulated processes can have a
significant effect on the values of the departure coefficients. This work was
expanded by \citet{1977Summers} who resolved the angular momentum states for
levels with $n \leq 35$, but only considered how the $^1$S, $^2$S and $^2$P levels
of hydrogen were affected by a stellar radiation field. \citet{1977CoPhC..13...39B}
and \citet{1979Salem} published their programme and tables based on the $n$-model
of \citet{1970Brocklehurst} that gave departure coefficients for $50 \leq n \leq
300$. The programme included updated collisional cross sections from \citet{1976Gee},
collisions due to protons, as well as radiative processes involving an external
field, but did not consider angular momentum changing collisions. The programme
was modified by \citet{1990Walmsley} to generate results down to $n =20$.
The results of \citet{1995Storey} (hereafter \citetalias{1995Storey}) are
considered to be the definitive values for optical/IR recombination lines, but
stimulated processes are not included in their model. However, the values of
\citetalias{1995Storey} are being used to study low frequency lines
\citep[see for example][]{2006Fujiyoshi,2017Bendo,2017SanchezContreras}. In this
paper we present results using a model that includes stimulated and absorption
processes in the bound-bound and bound-free transitions in an $nl$-model, and use
updated numerical methods. Our results are compared to those of
\citetalias{1995Storey}. The current calculated values differ significantly from
their results for levels with principal quantum number $n\gtrsim30$.
In section~\ref{sec:model} we describe our model and calculational details. A
stopping criterion for an iterative method of solution is derived in
section~\ref{sec:stopping_criterion}, and the results of including this in our
calculations are compared with those of \citetalias{1995Storey}. The use of a
direct solver is also discussed here. The stimulating effects of continuum
radiation fields within a nebula are considered in section~\ref{sec:cont_fields}.
Finally, the main conclusions of this article are outlined in
section~\ref{sec:conclusions}. Details of the atomic calculations used in our
model are presented in the appendix.
\section{The model}
\label{sec:model}
To determine theoretical line intensities, the populations $N_{nl}$ of all bound
states need to be calculated. These level populations are set by statistical
equilibrium, which requires that the total rate of all transitions into any
particular level equal the total rate of the transitions out of that level.
\citet{1937Menzel} introduced a correction factor, denoted by $b_{nl}$, to
compensate for the degree of departure from local thermodynamic equilibrium (LTE)
of the level population from the LTE value $N_{nl}^*$ so that
\begin{equation}
b_{nl}=\frac{N_{nl}}{N_{nl}^*}\,.
\end{equation}
A departure coefficient $b_{nl}$ that is equal to unity indicates that the level
$nl$ is in LTE with the electron gas. In this scheme, the Saha-Boltzmann equation
for hydrogen becomes
\begin{equation}
N_{nl}=b_{nl}N_\mathrm{e} N_\mathrm{p} \left( \frac{h^2}{2\pi m_\mathrm{e}k_\mathrm{B}
T_\mathrm{e}}\right)^{3/2} (2l+1)
\exp\left(\frac{\chi_{nl}}{k_\mathrm{B}T_\mathrm{e}}\right)\,,\label{eq:sahaboltznl}
\end{equation}
where $N_\mathrm{e}$ and $N_\mathrm{p}$ are the number density of electrons and
protons respectively, $h$ is Planck's constant, $m_\mathrm{e}$ is the electron
mass, $k_\mathrm{B}$ is the Boltzmann constant and $T_\mathrm{e}$ is the kinetic
temperature of the free electron gas. The ionisation potential of level $nl$ for
hydrogen is given by $\chi_{nl}=R_\mathrm{H}hc/n^2$ where $R_\mathrm{H}$ is the
Rydberg constant for hydrogen and $c$ the speed of light in a vacuum.
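As a sanity check, the Saha--Boltzmann relation can be transcribed directly. The sketch below (in Python, SI units) is illustrative only: the function name and the sample conditions are our own choices, and the constants are CODATA values. It verifies that the population is linear in $b_{nl}$ and that the Boltzmann factor $e^{\chi_{nl}/k_\mathrm{B}T_\mathrm{e}}$ decreases with increasing $n$.

```python
import numpy as np

# Physical constants in SI units (CODATA values)
h    = 6.62607015e-34    # Planck constant [J s]
m_e  = 9.1093837015e-31  # electron mass [kg]
k_B  = 1.380649e-23      # Boltzmann constant [J/K]
chi1 = 2.1787e-18        # hydrogen ionisation energy R_H h c [J]

def level_population(b_nl, n, l, N_e, N_p, T_e):
    """Saha-Boltzmann population N_nl of level (n, l), in m^-3."""
    chi_nl = chi1 / n**2                                      # ionisation potential of level nl
    lam3 = (h**2 / (2.0 * np.pi * m_e * k_B * T_e))**1.5      # thermal de Broglie volume
    return b_nl * N_e * N_p * lam3 * (2 * l + 1) * np.exp(chi_nl / (k_B * T_e))

# Illustrative nebular conditions: T_e = 1e4 K, N_e = N_p = 1e4 cm^-3 = 1e10 m^-3
N_e, N_p, T_e = 1e10, 1e10, 1e4
```

Doubling $b_{nl}$ doubles $N_{nl}$, and the population per statistical weight falls with $n$ at fixed conditions, as expected from $\chi_{nl} \propto 1/n^2$.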
Collisional processes within the plasma become more efficient than their radiative
counterparts as the principal quantum number $n$ of the level increases. Therefore,
for each set of physical conditions there will be an $n=n^*$ where the collisional
processes dominate completely and set up Boltzmann distributions with temperature
$T_\mathrm{e}$ among the levels so that $b_{nl}=1$ for $n\geq n^*.$
\citet{1966goldberg} introduced an amplification factor $\beta_{n,n'}$ with $n>n'$
that is given by
\begin{equation}
\beta_{n,n'} = \frac{\left(1-\frac{b_{n}}{b_{n'}} e^{-h\nu/k_\mathrm{B}T_\mathrm{e}}
\right)}{\left(1-e^{-h\nu/k_\mathrm{B}T_\mathrm{e}}\right)}
\label{eq:betadef}
\end{equation}
where $\nu$, the frequency associated with the $n$-$n'$ transition, is given by
\begin{equation}
\nu=R_\mathrm{H} c \left( \frac{1}{n^{'2}}-\frac{1}{n^2} \right)\,.
\end{equation}
The amplification factors give the departure of the ratios of level populations
from what is expected in LTE, as opposed to the departure coefficients that
represent the departure of individual level populations from LTE. A value of
$\beta_{n,n'}<0$ indicates a population inversion between levels $n$ and
$n'$, with an increased amount of inversion for smaller values of $\beta_{n,n'}$.
Therefore, stimulated emission will be important for levels for which
$\beta_{n,n'} \ll 0$. An illustrative discussion of the amplification factor can
be found in \citet{1996Strelnitski}.
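As a numerical illustration (the frequency, temperature and departure-coefficient values below are purely illustrative), the sketch implements the amplification factor written with the upper-to-lower ratio $b_n/b_{n'}$, the form for which $\beta_{n,n'}<0$ corresponds to a population inversion. At radio frequencies, where $h\nu \ll k_\mathrm{B}T_\mathrm{e}$, even a 0.1 per cent excess of $b_n$ over $b_{n'}$ drives $\beta$ strongly negative:

```python
import numpy as np

h   = 6.62607015e-34   # Planck constant [J s]
k_B = 1.380649e-23     # Boltzmann constant [J/K]

def beta(b_upper, b_lower, nu, T_e):
    """Goldberg amplification factor for the transition n -> n' (upper -> lower)."""
    x = h * nu / (k_B * T_e)   # h nu / k T, tiny at radio frequencies
    return (1.0 - (b_upper / b_lower) * np.exp(-x)) / (1.0 - np.exp(-x))

T_e = 1e4        # kinetic temperature [K]
nu  = 4.87e9     # roughly the H110alpha frequency [Hz]
```

With $b_n/b_{n'} = 1.001$ at these values the factor is of order $-40$, while equal departure coefficients give $\beta = 1$ exactly.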
\citet{1938BakerMenzel} introduced two simple assumptions which are referred to
as Case A and Case B. For Case A, the nebula is taken to be optically thin to all
line radiation. For Case B, it is assumed that all photons produced by Lyman
transitions are optically thick and are absorbed close to the point where they
are emitted (this is called the on-the-spot approximation). From a calculational
perspective, this means that all transitions to the $n = 1$ level are
ignored. \citet{1962Osterbrock} concluded that Case B is a good quantitative
approximation for nebular conditions.
Substituting the Saha-Boltzmann equation~(\ref{eq:sahaboltznl}) into the rate
equation of all processes due to statistical balance yields
\begin{align}
& \left(\frac{h^2}{2\pi m_\mathrm{e}k_\mathrm{B}T_\mathrm{e}}\right)^{-3/2}
\left(\alpha_{nl}^\mathrm{r} + \alpha_{nl}^\mathrm{s} + N_\mathrm{e}C_{i,nl}\right)
\ + \nonumber\\
& \underset{m>n}{\sum}\;\underset{l'=l\pm1}{\sum}b_{ml'}(2l'+1)
e^{\chi_{ml'}/k_\mathrm{B}T_\mathrm{e}} \left( A_{ml',\,nl} + B_{ml',\,nl}J_{\nu}
+ N_\mathrm{e}C_{ml',\,nl} \right) \nonumber\\
& + \underset{k<n}{\sum}\;\underset{l^{''}=l\pm1}{\sum}b_{kl^{''}}(2 l^{''} +1)
e^{\chi_{kl^{''}}/k_\mathrm{B}T_\mathrm{e}} \left(B_{kl^{''},\,nl}J_{\nu} +
N_\mathrm{e}C_{kl^{''},\,nl} \right)\nonumber\\
& +\underset{l'=l\pm1}{\sum}b_{nl'}(2l'+1)e^{\chi_{nl'}/k_\mathrm{B}
T_\mathrm{e}}\sum_{q} N_qC^q_{nl',\,nl}\nonumber\\
& = \ b_{nl}(2l+1)e^{\chi_{nl}/k_\mathrm{B}T_\mathrm{e}}\left[\alpha_{nl}^\mathrm{p}
+ N_\mathrm{e}C_{nl,i} \right.\nonumber\\
& \left.+\underset{k<n}{\sum}\;\underset{l^{''}=l\pm1}{\sum} \left(A_{nl,\,kl^{''}}
+ B_{nl,\,kl^{''}}J_{\nu} + N_\mathrm{e}C_{nl,\,kl^{''}}\right)
\right. \nonumber\\
& \left.+\underset{m>n}{\sum}\;\underset{l'=l\pm1}{\sum}\left(B_{nl,\,ml'}
J_{\nu} + N_\mathrm{e}C_{nl,\,ml'}\right) \ + \underset{l'=l\pm1}{\sum}\sum_{q}N_q
C^q_{nl,\,nl'} \right]\,.
\label{eq:saharatenl}
\end{align}
The left-hand side contains all processes that populate level $nl$. The terms
represent radiative recombination ($\alpha_{nl}^\mathrm{r}$), stimulated recombination
($\alpha_{nl}^\mathrm{s}$), three-body recombination ($C_{i,nl}$), spontaneous emission
($A_{nl,\,n'l'}$), stimulated emission ($B_{nl,\,n'l'}$), collisional de-excitation
($C_{nl,\,n'l'}$), absorption ($B_{n'l',\,nl}$), collisional excitation
($C_{n'l',\,nl}$) and elastic collisions ($C^q_{nl,\,nl'}$). The mean intensity
of incident radiation fields is given by $J_\nu$.
The right-hand side includes all processes that depopulate level $nl$. The terms
represent photoionisation ($\alpha_{nl}^\mathrm{p}$), collisional ionisation
($C_{nl,i}$), spontaneous emission, stimulated emission, collisional de-excitation,
collisional excitation, absorption and elastic collisions, respectively.
The $N_q$ represent the number densities of the different species interacting via
elastic collisions with the bound electrons. In this work, electrons, protons and
He$^+$ ions are taken to induce these collisions, with the proton number density
$N_\mathrm{p} = 0.909 N_\mathrm{e}$ and the He$^+$ number density
$N_{\mathrm{He}^+} = 0.090 N_\mathrm{e}$. These are the same values used by
\citetalias{1995Storey}.
Equation~(\ref{eq:saharatenl}) represents a system of linear equations in the
$b_{nl}$ values. Therefore, it can be written in matrix form as
\begin{equation}
\mathbfss{A}\mathbf{\cdot}\mathbf{b}=\mathbf{y}\label{eq:nlsys}
\end{equation}
where $\mathbf{b}$ is a vector with the $b_{nl}$ values as entries. The diagonal
entries of the matrix $\mathbfss{A}$ represent the processes depopulating a level
$nl$ and the off-diagonal entries in a given row are the bound-bound processes
that are populating that level. Only dipole transitions are considered in this
work which results in $\mathbfss{A}$ being sparse, i.e.~most of the entries in
the $\mathbfss{A}$ matrix are equal to zero. The vector $\mathbf{y}$ contains the
first term of equation~(\ref{eq:saharatenl}) as well as the populating contributions
of levels with $n>n_{\mathrm c}$.
The values of the $b_{nl}$'s for a specific set of conditions are found in a
two-step process. First, a set of equations in which the angular momentum states
are not resolved is solved to obtain values for $b_n$. The values produced by this
method are valid for $n>n_\mathrm{c}$, where equation~(\ref{eq:statpop}) holds.
This is referred to as the $n$-model.
Next, equations~(\ref{eq:saharatenl}) are solved explicitly for $n\leq n_\mathrm{c}$
to obtain the $b_{nl}$ values for this range. This part is referred to as the
$nl$-model and the $b_n$'s for this range are obtained via equations~(\ref{eq:bnsum}).
Sections~\ref{sec:nmodel} and \ref{sec:nlmodel} provide details of the two parts
of the model; calculational details of the individual rates in
equation~(\ref{eq:saharatenl}) are given in Appendix~\ref{sec:calcs}.
\subsection{The $n$-model}
\label{sec:nmodel}
The approach of \citet{1970Brocklehurst} was followed for the $n$-model. In
addition to the processes in the Brocklehurst model, our $n$-model includes
stimulated emission and absorption terms. In principle, an isolated atom has an
infinite number of energy levels. To make the mathematics computationally viable,
an upper cut-off $n_{\text{max}}$ was introduced for the highest $n$ level for
which the rate equations were solved explicitly. The contributions to the sums
above $n_{\text{max}}$ were converted into an integral using the trapezoidal rule.
This integral was then approximated using a 20-point Gaussian quadrature. The
remaining rate equations were cast into matrix form.
Because the departure coefficients vary smoothly and slowly with $n$, the Lagrange
interpolation technique of \citet{1969Burgess} was employed to reduce the number
of equations to be solved fivefold. The condensed rate equations were solved by
direct Gaussian elimination with partial pivoting. The resulting
condensed vector containing the $b_n$ values was interpolated to give the full
set of values. The $b_n$ values found using this method compared very well with
the results of solving the full system of equations.
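The condense-and-interpolate idea can be illustrated generically. The sketch below is a toy example, not the Burgess scheme itself: the smooth curve standing in for $b_n$ is invented, and simple linear interpolation replaces Lagrange interpolation. Solving on every fifth point and interpolating back recovers the full set to better than $10^{-3}$:

```python
import numpy as np

# Toy stand-in for a smooth, slowly varying departure-coefficient curve
n_full = np.arange(50, 301)              # full range of principal quantum numbers
b_full = 1.0 - np.exp(-n_full / 50.0)    # invented smooth "b_n" values

# Condense: keep every 5th level, mimicking the five-fold reduction
n_coarse = n_full[::5]
b_coarse = b_full[::5]

# Interpolate back onto the full grid (linear here; Burgess used Lagrange)
b_interp = np.interp(n_full, n_coarse, b_coarse)

max_err = np.max(np.abs(b_interp - b_full))
```

The interpolated values agree with the condensed solution exactly at the retained nodes and closely everywhere else, which is the property that makes the condensation safe for smooth $b_n$ curves.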
\subsection{The $nl$-model}
\label{sec:nlmodel}
To obtain the departure coefficients for $n\leq n_\mathrm{c}$, the rate
equations~(\ref{eq:saharatenl}) have to be solved simultaneously. The main
difference between the $b_n$'s from the $n$-model and the ones obtained from the
$nl$-model and equation~(\ref{eq:bnsum}) is the inclusion of elastic collisions
between bound electrons and free particles. The effects of these collisions are
important for the populations of mid- to high-energy levels, where the $b_n$'s
calculated with the $nl$-model can differ significantly from those of the $n$-model.
The semi-classical impact-parameter formulation developed by
\citet{1964PengellySeaton} for the rates of angular momentum changing collisions
has been considered definitive for many decades. Recently, \citet{2012Vrinceanu}
presented updated formulae for these transition rates for both the quantum and
semi-classical case. \citet{2016Guzman} did an in-depth analysis of the two
approaches and concluded that the analytic equations presented in
\citet{1964PengellySeaton} are much faster to compute and agree very well with
the exact quantum mechanical probabilities of \citet{2012Vrinceanu}. This supports
an earlier conclusion of \citet{2015StoreySochi}. In this work, the formalism of
\citet{1964PengellySeaton} was followed with the modification of \citet{2016Guzman}
to get the partial rates directly. Details are given in section~\ref{sec:elascol}.
There will be an $n=n_\mathrm{c}<n^*$ where the collisional processes are much
faster than the radiative processes but radiative effects are still evident. For
$n_\mathrm{c}< n<n^*$, the angular momentum states are populated statistically
according to
\begin{equation}
N_{nl}=\frac{2l+1}{n^2}N_n\,.
\label{eq:statpop}
\end{equation}
The departure coefficient, $b_n$, that represents the departure from LTE for an
energy level $n$ is defined as the weighted sum of the $b_{nl}$'s
\begin{equation}
b_n = \frac{1}{n^2} \sum_{l=0}^{n-1} (2l+1) b_{nl} \,.
\label{eq:bnsum}
\end{equation}
Therefore, it follows that $b_n = b_{nl}$ for $n > n_\mathrm{c}$.
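A minimal consistency check of equation~(\ref{eq:bnsum}), with illustrative values: the statistical weights $2l+1$ sum to $n^2$, so a set of uniform $b_{nl}$'s is left unchanged by the weighted average, consistent with $b_n = b_{nl}$ above $n_\mathrm{c}$.

```python
n = 30
weights = [2 * l + 1 for l in range(n)]   # statistical weights 2l+1, l = 0..n-1
total = sum(weights)                       # sum of 2l+1 over l equals n^2

# If every b_nl equals the same constant c, the weighted mean b_n equals c.
c = 0.87
b_n = sum(w * c for w in weights) / n**2
```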
Equation~(\ref{eq:saharatenl}) represents a total of $n_\mathrm{c}(n_\mathrm{c}+1)/2$
equations that have to be solved simultaneously. This value is generally ${\sim}10^4$.
The populating contributions of the first $20$ energy levels beyond $n_\mathrm{c}$
are calculated explicitly and incorporated into the vector $\mathbf{y}$ in
equation~(\ref{eq:nlsys}). The populating effects of levels $n>n_\mathrm{c}+20$
are approximated with a 20-point Gaussian quadrature in the same way as the
contributions of $n>n_{\text{max}}$ are handled in the $n$-model and added to
$\mathbf{y}$. The depopulating effects of levels with $n_\mathrm{c}<n\leq
n_\mathrm{c}+20$ and $n>n_\mathrm{c}+20$ are treated similarly, but added to the
total depopulation rate for each level contained in the diagonal of the matrix
$\mathbfss{A}$.
The level $n_\mathrm{c}$ is determined empirically to ensure the results of the
$nl$-model are used up to a sufficiently high principal quantum number so that
they match the results of the $n$-model to four significant digits. The level
above which equation~(\ref{eq:statpop}) is valid was found to be weakly dependent
on temperature and is calculated using
\begin{equation}
n_\mathrm{c} = 350 - 15\ln(N_\mathrm{e})
\end{equation}
and rounding up to the nearest multiple of $5$.
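For reference, a direct transcription of this prescription (assuming, as elsewhere in this paper, that $N_\mathrm{e}$ is given in cm$^{-3}$; the function name is our own):

```python
import math

def n_c(N_e):
    """Level n_c = 350 - 15 ln(N_e), rounded up to the nearest multiple of 5."""
    return 5 * math.ceil((350.0 - 15.0 * math.log(N_e)) / 5.0)
```

For $N_\mathrm{e}=10^4$ cm$^{-3}$ this gives $n_\mathrm{c}=215$, and for $N_\mathrm{e}=10^2$ cm$^{-3}$ it gives $285$.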
\section{Stopping criterion}
\label{sec:stopping_criterion}
\subsection{Background}
Because the values of $b_{nl}$ do not vary smoothly, the matrix condensation
technique used in the $n$-model cannot be employed at this step. The standard
method of solving the large number of simultaneous equations in the $nl$-model is
to use an iterative method \citep[e.g.][]{1971Brocklehurst,1991Smits,1995Storey,
2017Salgado}. An important aspect of an iterative solution is a test to indicate
when convergence of the departure coefficients has been achieved, and to stop the
iterations. In most previous work, the stopping criterion has not been explicitly
stated, except for \citet{2017Salgado}, who terminate their iterative procedure
when the difference between two successive iterations is less than 1 per cent.
We show here that such a stopping criterion is not appropriate and can halt the
iterative process before the values of the departure coefficients have converged
to within a known error. If the iterative procedure is making only small
corrections to the $b_{nl}$'s, the convergence rate can be slow, so a stopping
criterion such as the one used by \citet{2017Salgado} can signal that convergence
has occurred when many more iterations are in fact required. The small
per-iteration corrections can accumulate to a significant amount, as will
be shown below. For the conditions considered in this work, the matrix
$\mathbfss{A}$ has a condition number $\kappa(\mathbfss{A}) \sim 10^7$; the matrix
is therefore severely ill-conditioned, which results in a slow
convergence rate \citep{Ill16}.
\subsection{Derivation}
To derive a stopping criterion, we define the following quantities and notation.
Let $\mathbf{b}^{(i)}$ be the vector containing the departure coefficients as
entries after $i$ iterations. The residual $\mathbf{r}^{(i)}$ after $i$ iterations
is given by
\begin{equation}
\mathbf{r}^{(i)}=\mathbfss{A}\mathbf{\cdot}\mathbf{b}^{(i)}-\mathbf{y}
\label{eq:residual}
\end{equation}
and provides an indication of the quality of $\mathbf{b}^{(i)}$. The vector of
errors $\mathbf{e}^{(i)}$ after $i$ iterations is the difference between
$\mathbf{b}^{(i)}$ and the true solution $\mathbf{b}$, i.e.
\begin{equation}
\mathbf{e}^{(i)}=\mathbf{b}^{(i)}-\mathbf{b}\,.\label{eq:error}
\end{equation}
Of course, $\mathbf{b}$ is generally unknown and therefore $\mathbf{e}^{(i)}$
cannot be calculated directly. However, if an upper bound for the error can be
defined then a stopping criterion can be constructed such that the iterations will
only stop after it is guaranteed that the errors are smaller than some predefined
number.
A norm of a vector or matrix, indicated using double bars $\| \cdot \|$, is a
non-negative number that gives a measure of the magnitude of the vector or matrix.
There are various types of norms that can be defined, all of which obey a specified
set of properties. In particular, for a matrix $\mathbfss{M}$ and vector
$\mathbf{v}$ the inequality
\begin{equation}
\| \mathbfss{M}\mathbf{v} \| \leq \| \mathbfss{M} \| \|
\mathbf{v} \| \label{eq:submult}
\end{equation}
will hold for any consistent pair of vector and matrix norms. Note
that the quantity called the condition number mentioned above is defined as
\begin{equation}
\kappa(\mathbfss{M}) = \| \mathbfss{M}^{-1} \| \ \| \mathbfss{M} \|\, . \label{eq:conno}
\end{equation}
If $\kappa(\mathbfss{M}) \gg 1$ then the matrix is called ill-conditioned.
Using the tools developed above, an appropriate stopping criterion for an iterative
solution to the set of equations represented by equation~(\ref{eq:nlsys}) can now
be derived. Substituting equation~(\ref{eq:nlsys}) into equation~(\ref{eq:error})
and using equation~(\ref{eq:residual}) gives
\begin{align}
\mathbf{e}^{(i)} & = \mathbf{b}^{(i)}-\mathbfss{A}^{-1}\mathbf{y}\nonumber\\
& =\mathbfss{A}^{-1}(\mathbfss{A}\mathbf{\cdot}\mathbf{b}^{(i)}-\mathbf{y})\nonumber\\
& =\mathbfss{A}^{-1}\mathbf{r}^{(i)}.
\end{align}
Taking the norm on both sides and using the submultiplicative property given in
equation~(\ref{eq:submult}) leads to
\begin{equation}
\| \mathbf{e}^{(i)}\| = \| \mathbfss{A}^{-1}\mathbf{r}^{(i)}\| \leq\|
\mathbfss{A}^{-1}\| \| \mathbf{r}^{(i)}\| \,.
\end{equation}
Therefore, stopping the iterative procedure only after
\begin{equation}
\| \mathbf{r}^{(i)}\| \leq\epsilon\mathbf{\cdot}\frac{\| \mathbf{b}^{(i)}\| }{\|
\mathbfss{A}^{-1}\| }\label{eq:stop}
\end{equation}
will guarantee that the relative error is smaller than some predefined tolerance
$\epsilon\ll1$. That is, equation~(\ref{eq:stop}) implies
\begin{equation}
\frac{\| \mathbf{e}^{(i)}\| }{\| \mathbf{b}^{(i)}\| }\leq\frac{\|
\mathbfss{A}^{-1}\| \| \mathbf{r}^{(i)}\| }{\| \mathbf{b}^{(i)}\| }\leq\epsilon\,.
\label{eq:conv_criterion}
\end{equation}
Equation~(\ref{eq:conv_criterion}) is true for any consistent pair of vector and
submultiplicative matrix norms.
One appropriate set of such norms is the $l_1$ norm of a vector and the
corresponding operator norm of a matrix. For a general $n$-vector $\mathbf{v}$
with components $v_i$ and a general $n \times n$ matrix $\mathbfss{M}$ with
entries $m_{ij}$, these norms are given respectively by
\begin{equation}
\| \mathbf{v} \|_1 = \sum_{i=1}^n |v_i | \qquad \mathrm{and}
\qquad \| \mathbfss{M}\|_1 = \max_{1\leq j\leq n} \sum_{i=1}^n\left|m_{ij}\right|\,.
\label{eq:normdefs}
\end{equation}
The matrix norm $\| \mathbfss{M} \|_1$ corresponds to the maximum sum of the
absolute values of the individual columns of $\mathbfss{M}$.
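These definitions can be checked against standard library routines. In the sketch below (an illustrative matrix, not the physical $\mathbfss{A}$), `numpy.linalg.norm` with `ord=1` reproduces the sum of absolute values for a vector and the maximum absolute column sum for a matrix:

```python
import numpy as np

M = np.array([[1.0, -2.0, 3.0],
              [0.0,  4.0, -1.0],
              [2.0, -1.0, 0.5]])
v = np.array([1.0, -2.0, 0.5])

# l1 vector norm: sum of absolute values
v_norm = np.linalg.norm(v, 1)                 # = 1 + 2 + 0.5 = 3.5

# matrix 1-norm: maximum absolute column sum
col_sums = np.sum(np.abs(M), axis=0)          # column sums: 3.0, 7.0, 4.5
M_norm = np.linalg.norm(M, 1)                 # = 7.0
```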
Calculating a matrix inverse is notoriously expensive. The $l_1$ norms are used
here because an algorithm exists that estimates $\| \mathbfss{A}^{-1}\|_1$
without inverting the matrix: the algorithm of \citet{1984hager}, as refined by
\citet{1988higham}. For the types of matrices considered here, this algorithm
usually gives the exact value of $\| \mathbfss{A}^{-1}\|_1$ and, at worst, an
order-of-magnitude estimate \citep{1988higham}. The values of $\|
\mathbf{b}^{(i)}\|_1$ and $\| \mathbf{r}^{(i)}\|_1$ can be calculated directly.
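For illustration, a minimal transcription of Hager's estimator: this simplified sketch accesses $\mathbfss{A}^{-1}$ only through linear solves and omits the refinements of \citet{1988higham}, such as safeguards against revisiting search directions. The $2\times2$ test matrix is our own choice, picked because its inverse norm is known analytically.

```python
import numpy as np

def est_inv_norm_1(A, max_iter=10):
    """Estimate ||A^{-1}||_1 via Hager's algorithm, using solves instead of inversion."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)              # start with the uniform vector, ||x||_1 = 1
    est = 0.0
    for _ in range(max_iter):
        y = np.linalg.solve(A, x)        # y = A^{-1} x
        est = np.linalg.norm(y, 1)       # lower bound on ||A^{-1}||_1
        xi = np.sign(y)                  # subgradient of the l1 norm at y
        xi[xi == 0] = 1.0
        z = np.linalg.solve(A.T, xi)     # z = A^{-T} xi
        j = np.argmax(np.abs(z))
        if np.abs(z[j]) <= z @ x:        # no ascent direction left: done
            break
        x = np.zeros(n)                  # move to the most promising unit vector e_j
        x[j] = 1.0
    return est

# Ill-conditioned example with a known answer: ||A^{-1}||_1 = 1/(1 - a) = 1000
a = 0.999
A = np.array([[1.0, -a], [-a, 1.0]])
est = est_inv_norm_1(A)                  # the estimate is exact for this matrix
```

Each pass costs only two solves, so with a factorization of $\mathbfss{A}$ in hand the estimate is far cheaper than forming $\mathbfss{A}^{-1}$.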
\subsection{Effects on departure coefficients}
In this section an iterative procedure is applied to the linear
system~(\ref{eq:saharatenl}) in order to show the effect of the stopping criterion
given in equation~(\ref{eq:stop}) on the departure coefficients. The procedure is
a derivative of the Gauss-Seidel method and is described in \citet{1971Brocklehurst}.
The same method was used by \citetalias{1995Storey} and \citet{2017Salgado}.
The results of the $n$-model are refined using this iterative
scheme as follows. The departure coefficients $b_n$ from the $n$-model are used
as the initial values so that $b_{nl}=b_n$ at the start of the algorithm. The rate
equation~(\ref{eq:saharatenl}) is then solved to obtain $b_{nl}$ for decreasing
values of $n$ starting with $n=n_\mathrm{c}$. The $b_{nl}$ values for a given $n$
are solved for simultaneously as an $n \times n$ linear system, with all the terms
that do not depend on these specific $b_{nl}$ values placed on the right-hand side
of equation~(\ref{eq:nlsys}). Each newly calculated set of $b_{nl}$'s is used in
subsequent calculations down to $n=2$ for Case A and $n=3$ for Case B. This
constitutes one iteration and the process is repeated until the stopping
criterion~(\ref{eq:stop}) is met.
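The iteration loop described above can be sketched as follows. This is not the production code of the paper: it is a plain Gauss-Seidel sweep on a small dense system, with a residual-based stopping rule of the form $\| \mathbfss{A}^{-1}\|_1 \, \| \mathbf{r}\|_1 / \| \mathbf{b}\|_1 \leq \epsilon$ standing in for the criterion of equation~(\ref{eq:stop}); the matrix, right-hand side, and tolerance are illustrative.

```python
import numpy as np

def gauss_seidel(A, y, b0, eps=5e-5, max_iter=10000):
    """Gauss-Seidel iteration with a residual-based stopping rule (sketch).

    inv_norm is the 1-norm of A^{-1}; for large sparse systems it would be
    estimated rather than computed from the explicit inverse.
    """
    b = b0.astype(float).copy()
    n = len(b)
    inv_norm = np.max(np.sum(np.abs(np.linalg.inv(A)), axis=0))
    for it in range(1, max_iter + 1):
        for i in range(n):
            # update b[i] using the newest available values
            s = A[i] @ b - A[i, i] * b[i]
            b[i] = (y[i] - s) / A[i, i]
        r = y - A @ b
        if inv_norm * np.sum(np.abs(r)) / np.sum(np.abs(b)) <= eps:
            return b, it
    return b, max_iter

# Illustrative diagonally dominant system
b_sol, iters = gauss_seidel(np.array([[4.0, 1.0], [1.0, 3.0]]),
                            np.array([1.0, 2.0]), np.zeros(2))
```

As in the text, a small change between successive iterates is not used as the stopping test; the rule above bounds the actual error through the residual.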
Fig.~\ref{fig:iterations} shows the evolution of a subset of departure coefficients
as the iterative procedure progresses for a gas at $T_\mathrm{e}=10^4$\,K with
density $N_\mathrm{e}=10^4$\cmc\ for Case B with no external radiation field
present. The values on the left-hand side of the graph show the values after one
iteration and the values on the far right indicate the values after convergence
is reached. For this case, the $b_{nl}$'s converged to four significant digits
($\epsilon=5\times10^{-5}$) after $1\,375$ iterations. The dashed
vertical line at 3 iterations indicates the first point in the process where the
departure coefficients from two successive iterations change by less than 1 per
cent. From the graph it is clear that a small change in the departure coefficients
during the procedure is not sufficient to indicate convergence.
\begin{figure}
\includegraphics[width=\columnwidth]{iterationsplots.pdf}
\caption{The evolution of five departure coefficients for
$T_\mathrm{e}=10^4$\,K, $N_\mathrm{e}=10^4$\cmc\ Case B as the iterative procedure
of the $nl$-model progresses. Convergence is reached after 1\,375
iterations. The dashed line at 3 iterations shows the first point
where the values of the $b_{nl}$'s change by less than 1 per cent.}
\label{fig:iterations}
\end{figure}
\section{Comparison to previous results}
\label{sec:comp}
A comparison of the current results with those of \citetalias{1995Storey} for a
range of temperatures at a fixed electron density of $N_\mathrm{e}=10^4$\cmc\ is
shown in Fig.~\ref{fig:tempdep}. The largest discrepancy occurs at intermediate
principal quantum numbers. This is unsurprising, since the behaviour of the $b_{nl}$'s
at low and high $n$ is not governed by the $nl$-model. At low energy levels,
radiative processes dominate and the departure coefficients would be largely
unaffected by the elastic collisions introduced in the $nl$-model. Therefore, at
these levels the final $b_{nl}$'s will be very close to their initial values
obtained from the $n$-model. At high energy levels, the collisional processes
dominate completely and all $b_{nl}$'s will tend towards unity, reducing the
difference between the two sets of calculations.
\begin{figure}
\includegraphics[width=\columnwidth]{tempdep.pdf}
\caption{Departure coefficients $b_n$ for Case B for a variety of
temperatures at electron density $N_\mathrm{e}=10^4$\cmc. Dashed lines represent
the values obtained by \citetalias{1995Storey} and solid lines in the same colour
show the current results for the same conditions. A colour version of this figure
is available in the electronic version of the article.}
\label{fig:tempdep}
\end{figure}
Fig.~\ref{fig:ldep} shows the difference between the unsummed $b_{nl}$ values of
\citetalias{1995Storey} and the current work. As expected, the differences are
very small for low values of $n$, but become more pronounced as $n$ and $l$
increase. This effect is due to elastic collisions becoming the dominant process
at intermediate and high $n$ levels.
\begin{figure}
\includegraphics[width=\columnwidth]{angularmomdep.pdf}
\caption{Partial departure coefficients $b_{nl}$ for a selected
number of principal quantum numbers $n$ for $T_\mathrm{e}=10^3$\,K and
$N_\mathrm{e}=10^4$\cmc, Case B. Solid lines represent the calculations of this
work, dashed lines of the same colour show the results of \citetalias{1995Storey}
for the same $n$. A colour version of this figure is available in the electronic
version of the article.}
\label{fig:ldep}
\end{figure}
The results show that line enhancement by stimulated emission of H$n\alpha$
transitions for intermediate $n$ are less extreme than previously thought, with
the point of maximum inversion occurring at a lower level. Fig.~\ref{fig:betas}
illustrates the amplification factor as defined in equation~(\ref{eq:betadef})
for H$n\alpha$ transitions for the same parameters as Fig.~\ref{fig:tempdep}.
The lines that fall in the optical range are largely unaffected, with the largest
deviation from previous results in the radio regime.
\begin{figure}
\includegraphics[width=\columnwidth]{betaplot.pdf}
\caption{The amplification factor $\beta_{n,n+1}$ for H$n\alpha$
transitions for electron density $N_\mathrm{e}=10^4$\cmc\ and a range of temperatures
under Case B conditions. Solid lines show the current results and dashed lines
are those of \citetalias{1995Storey}. A colour version of this figure is available
in the electronic version of the article.}
\label{fig:betas}
\end{figure}
The discrepancy between our values and those of \citetalias{1995Storey}
increases as the temperature decreases and the density increases; there is a
stronger dependence on temperature than density. For $T_\mathrm{e} = 500$\,K and
$N_\mathrm{e} = 10^6$\cmc\ the maximum relative difference is 22.5 per cent. We can
match the values of \citetalias{1995Storey} within a per cent by terminating our
procedure prematurely. Storey (private communication) has noted that the
\citetalias{1995Storey} model converges extremely fast (within 10 iterations) and
that the departure coefficients do not change significantly if either the number
of iterations or $n_\mathrm{c}$ is increased.
At this stage we do not understand why the
\citetalias{1995Storey} model converges so fast and to values different from ours.
The differences in inelastic collisional rates between the models certainly play
a role, but do not account fully for the discrepancy. We have performed many
tests on our model, but cannot get it to converge as fast or to the same values
as \citetalias{1995Storey}. Clearly, this is an issue that will need further
investigation.
Obtaining a solution that requires thousands of iterative steps takes a
significant amount of time. To
speed up the solution, a direct solver using the
PARDISO\footnote{\url{http://pardiso-project.org}} package \citep{pardiso-6.0b,
pardiso-6.0a} was tested. It is a sophisticated solver for systems of linear
equations that exploits the sparsity of $\mathbfss{A}$ to solve the system in a
very efficient manner. The direct solver is considerably faster at solving the
set of linear equations than the iterative method.
The departure coefficients shown in the rest of the paper were obtained using the
above direct solver package. The $b_{nl}$'s obtained from the iterative method
with the given stopping criterion match those using the direct solver. An error
estimate of the quality of the results of the direct solver was obtained using
\begin{equation}
\epsilon \approx \| \mathbfss{A} \cdot \mathbf{b} - \mathbf{y} \| / \| \mathbf{b} \|\,.
\end{equation}
It was found that $\epsilon \leq 10^{-4}$ in all cases.
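A minimal sketch of this direct-solve-plus-error-estimate step, using NumPy's dense solver as a stand-in for the sparse PARDISO package (the matrix and right-hand side here are illustrative, not the rate matrix of the paper):

```python
import numpy as np

# Small stand-in for the sparse system A.b = y
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
y = np.array([1.0, 2.0, 3.0])

# Direct solve (PARDISO exploits sparsity; the idea is the same)
b = np.linalg.solve(A, y)

# A posteriori error estimate: eps ~ ||A.b - y||_1 / ||b||_1
eps = np.sum(np.abs(A @ b - y)) / np.sum(np.abs(b))
```

For a well-conditioned system the residual-based estimate `eps` is at machine-precision level, comfortably below the $10^{-4}$ quoted in the text.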
\section{Stimulating effects of continuum radiation}
\label{sec:cont_fields}
In this section we examine the effects of the continuum radiation fields on the
population structure of hydrogen in nebular environments. Specifically, their
role in stimulating transitions between electronic states.
The continuum radiation in the diffuse ISM consists of different components, each
of which is examined to determine its effect on the departure coefficients. The
radiation fields considered here are the stellar radiation field, the free-free
radiation field generated by the electrons in the gas, the cosmic microwave
background radiation (CMBR) and emission from dust.
Fig.~\ref{fig:cont_rad_fields} shows the mean intensity $J_{\nu}$ of these fields
as a function of the frequency associated with H$n\alpha$ transitions. The
population inversion of the excited electrons is most pronounced for energy levels
$30\lesssim n\lesssim80$ (refer to Fig.~\ref{fig:betas}), so that electrons in
these levels are susceptible to being stimulated.
\begin{figure}
\includegraphics[width=\columnwidth]{cont_emission_plot.pdf}
\caption{Spectrum of continuum radiation fields $J_{\nu}$ within
a model ionised nebula at $T_\mathrm{e}=10^4$\,K. The free-free field is shown
for three different densities. The horizontal axis shows the principal quantum
number for H$n\alpha$ transitions, and the vertical axis shows the mean
intensities of the various radiation fields.}
\label{fig:cont_rad_fields}
\end{figure}
\subsection{Stellar radiation field}
Photoionised nebulae require a nearby hot source of ultraviolet radiation to
ionise atoms in the gas. In Fig.~\ref{fig:cont_rad_fields} the blackbody radiation
field from a $T=50\,000$\,K source with dilution factor $W=10^{-12}$ is shown on
the far left. As can be seen, the intensity of the ionising radiation from such
a hot star drops off very quickly with decreasing frequency (increasing $n$ in
the diagram) and is negligible at frequencies low enough to stimulate transitions
in hydrogen. An O or B star has to be a distance of $\lesssim10\,\mathrm{AU}$ from
the gas to have any noticeable effect on the departure coefficients and, therefore,
can be neglected in these calculations.
\subsection{Cosmic microwave background radiation}
The CMBR has a low temperature ($T = 2.7$\,K) but is undiluted blackbody radiation.
Coincidentally, the intensity of the radiation peaks around a frequency
corresponding to H$n\alpha$ with $n \sim 40$ which is where the population
inversion is strongest.
For a density of $N_\mathrm{e} = 10^2$\cmc\ stimulated emissions due to the CMBR make up
in excess of 10 per cent of the downward $\mathrm{H}n\alpha$ transitions for $40
\lesssim n\lesssim60$. As the density increases, the effect of stimulated emission
decreases because the influence of elastic collisions increases at these levels.
The correction that the inclusion of the CMBR in the model provides to the summed
$b_{nl}$'s is typically less than 1 per cent.
\subsection{Free-free radiation}
Continuous free-free radiation is produced by a plasma as charged free particles
interact with each other without capture taking place. The free particles are
assumed to be in thermodynamic equilibrium at a temperature $T_\mathrm{e}$. Disregarding
background radiation, the specific intensity of the free-free radiation is given
by
\begin{equation}
J_{\nu}^{\mathrm{ff}} =B_{\nu}(T_\mathrm{e})\left(1-e^{-\tau_{\nu}^{\mathrm{ff}}}\right)\,,
\end{equation}
where $B_{\nu}$ is the Planck function at frequency $\nu$, and $\tau_{\nu}^\mathrm{ff}$ is
the optical depth of this radiation.
Following \citet{2003Dickinson}, the appropriate optical depth for the free-free
radiation is given by
\begin{align}
\tau_{\nu}^{\mathrm{ff}} & = -3.014 \times 10^{-2}\, T_\mathrm{e}^{-3/2} \left(
\frac{10^{9}}{\nu} \right)^2 \nonumber\\
& \times \left[ \ln\left(4.955 \times 10^{-2} \nu^{-1}\right) + 1.5\ln\left(T_\mathrm{e}
\right) \right](\mathrm{EM})\,,
\end{align}
where EM is the emission measure in cm$^{-6}$\,pc. For a homogeneous gas, EM =
$N_\mathrm{e}^2$\,cm$^{-6}$\,pc.
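The free-free field can be evaluated numerically as follows. This sketch takes $\nu$ in Hz throughout, so the bracketed logarithm is negative and the leading minus sign yields a positive optical depth; the constants are cgs, and the chosen frequency, temperature, and EM are illustrative.

```python
import numpy as np

H = 6.62607015e-27    # Planck constant [erg s]
KB = 1.380649e-16     # Boltzmann constant [erg / K]
C = 2.99792458e10     # speed of light [cm / s]

def planck(nu, T):
    # Planck function B_nu(T) in cgs units
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def tau_ff(nu, Te, EM):
    # Free-free optical depth as written in the text; nu in Hz, EM in cm^-6 pc
    return (-3.014e-2 * Te**-1.5 * (1.0e9 / nu) ** 2
            * (np.log(4.955e-2 / nu) + 1.5 * np.log(Te)) * EM)

def j_ff(nu, Te, EM):
    # J_nu^ff = B_nu(Te) * (1 - exp(-tau_ff))
    return planck(nu, Te) * -np.expm1(-tau_ff(nu, Te, EM))
```

At $T_\mathrm{e}=10^4$\,K and $\mathrm{EM}=10^8$\,cm$^{-6}$\,pc (a homogeneous gas with $N_\mathrm{e}=10^4$\cmc\ over 1\,pc), the gas is optically thick at 1\,GHz and the optical depth falls off as roughly $\nu^{-2}$.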
The intensity of the free-free radiation within a nebula is strongly dependent on
the electron density $N_\mathrm{e}$. The effect of this radiation on the population
structure of hydrogen is negligible for low densities, but can become significant
for $N_\mathrm{e} > 10^4$\cmc. For example, the departure coefficients of a gas
with $N_\mathrm{e} = 10^6$\cmc\ and $T_\mathrm{e} = 10^4$\,K will be affected by
up to 12 per cent around $n=20$ by the inclusion of the free-free radiation field
due to stimulated processes, as illustrated in Fig.~\ref{fig:effect_ff}. The
free-free radiation affects departure coefficients for principal quantum numbers
in the range $10\lesssim n\lesssim60$.
\begin{figure}
\includegraphics[width=\columnwidth]{noradvsffdiff.pdf}
\caption{The effects of the free-free radiation field on the
population structure of hydrogen for $T_\mathrm{e}=10^4$\,K, and a range of
densities in Case B\@. The relative difference between the departure coefficients
calculated when no radiation field is present and when the free-free field is
included in the calculations are shown for each principal quantum number. }
\label{fig:effect_ff}
\end{figure}
\subsection{Dust}
Dust grains form an important component of the ISM and have been found to be
intermixed with the ionised media of \HII\ regions and planetary nebulae
\citep{1993Barlow,1997Kingdon}. Emission from dust grains can dominate the spectrum
from \HII\ regions and PNe at long wavelengths, outshining the free-free specific
intensity by orders of magnitude.
To model the emission from dust within the cloud, a modified blackbody spectrum
of the form
\begin{equation}
J_{\nu}^\mathrm{d} = \tau_{\nu}^\mathrm{d} B_{\nu}(T_\mathrm{d})\,,
\end{equation}
was used, where $\tau_{\nu}^\mathrm{d}$ is an optical depth and $T_\mathrm{d}$ is
the dust temperature \citep{2014Planck}. The angle-averaged optical depth
\begin{equation}
\tau_{\lambda}^\mathrm{d} =1.5 \times 10^{-3} \left(\frac{100\,\mu\mathrm{m}}{\lambda}
\right)^{1.7}
\end{equation}
from \citet{2011Draine} was used.
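A corresponding sketch for the dust field, combining the modified blackbody with the \citet{2011Draine} optical depth (the evaluation frequency and the default $T_\mathrm{d}$ are illustrative):

```python
import numpy as np

H = 6.62607015e-27    # Planck constant [erg s]
KB = 1.380649e-16     # Boltzmann constant [erg / K]
C = 2.99792458e10     # speed of light [cm / s]

def planck(nu, T):
    # Planck function B_nu(T) in cgs units
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def tau_dust(lam_um):
    # Angle-averaged optical depth: 1.5e-3 * (100 um / lambda)^1.7
    return 1.5e-3 * (100.0 / lam_um) ** 1.7

def j_dust(nu, Td=50.0):
    # Modified blackbody: J_nu^d = tau_nu^d * B_nu(T_d)
    lam_um = C / nu * 1.0e4    # wavelength in micron
    return tau_dust(lam_um) * planck(nu, Td)
```

The steep $\lambda^{-1.7}$ dependence means the dust field fades quickly towards the low frequencies where the populations are strongly inverted, consistent with its limited effect on the $b_n$ values.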
The effect of the dust radiation on the $b_n$ values is very limited as the
populations are only weakly inverted at the frequencies where this radiation can
stimulate transitions. In Fig.~\ref{fig:cont_rad_fields} a dust temperature of
$T_\mathrm{d} = 50$\,K \citep{2003Dupac} is shown, but the result is independent
of $T_\mathrm{d}$ since this has little effect on the frequency range of the field.
In this work we have only considered the effect of the radiation
from dust on the level populations. \citet{1992Hummer} have shown that absorption
by dust can affect the Case B recombination spectrum, possibly with a greater
effect on the departure coefficients than emission.
\section{Description of tables}
The programme described here calculates departure coefficients $b_{nl}$ for level
$nl$ in hydrogen. From this, theoretical spectral line intensities can be
calculated. The formulae required to calculate values that can be compared with
observed lines are presented below, and then the entries in the tables are
explained.
\begin{table*}
\ttfamily
\resizebox{\textwidth}{!}{%
\begin{tabular}{rrrrrrrrrrrrrrrrrrrr}
\multicolumn{20}{l}{NE = 10000 TE = 10000 CASE B NC = 210 NO RAD}\\
\multicolumn{20}{l}{-------------------------------------------------------------------}\\
\multicolumn{20}{l}{BN'S}\\
3 & 2.374e-01 & 4 & 2.895e-01 & 5 & 3.769e-01 & 6 & 4.616e-01 & 7 & 5.328e-01 & 8 & 5.897e-01 & 9 & 6.344e-01 & 10 & 6.694e-01 & 11 & 6.968e-01 & 12 & 7.184e-01\\
13 & 7.355e-01 & 14 & 7.489e-01 & 15 & 7.595e-01 & 16 & 7.678e-01 & 17 & 7.741e-01 & 18 & 7.787e-01 & 19 & 7.818e-01 & 20 & 7.836e-01 & 21 & 7.842e-01 & 22 & 7.838e-01\\
$\boldsymbol{\vdots}$ & & & & & & & & & & & & & & & & & & & \\
\multicolumn{14}{l}{BNL'S} & & & & & & \\
\multicolumn{14}{l}{n = 3} & & & & & & \\
0 & 1.022e+00 & 1 & 2.336e-01 & 2 & 8.271e-02 & & & & & & & & & & & & & & \\
\multicolumn{14}{l}{n = 4} & & & & & & \\
0 & 1.132e+00 & 1 & 3.300e-01 & 2 & 1.446e-01 & 3 & 2.553e-01 & & & & & & & & & & & & \\
\multicolumn{14}{l}{n = 5} & & & & & & \\
0 & 1.206e+00 & 1 & 3.973e-01 & 2 & 1.891e-01 & 3 & 3.235e-01 & 4 & 4.239e-01 & & & & & & & & & & \\
$\boldsymbol{\vdots}$ & & & & & & & & & & & & & & & & & & & \\
\multicolumn{20}{l}{JNM'S}\\
\multicolumn{20}{l}{n = 4}\\
1 & 1.217e-25 & 2 & 9.879e-27 & 3 & 3.289e-27 & & & & & & & & & & & & & & \\
\multicolumn{20}{l}{n = 5}\\
1 & 5.304e-26 & 2 & 4.633e-27 & 3 & 1.602e-27 & 4 & 7.710e-28 & & & & & & & & & & & & \\
\multicolumn{20}{l}{n = 6}\\
1 & 2.830e-26 & 2 & 2.563e-27 & 3 & 8.903e-28 & 4 & 4.426e-28 & 5 & 2.432e-28 & & & & & & & & & & \\
$\boldsymbol{\vdots}$ & & & & & & & & & & & & & & & & & & & \\
\multicolumn{20}{l}{KMN'S}\\
\multicolumn{20}{l}{n = 4}\\
3 & 3.271e-23 & & & & & & & & & & & & & & & & & & \\
\multicolumn{20}{l}{n = 5}\\
3 & 1.003e-23 & 4 & -8.418e-25 & & & & & & & & & & & & & & & & \\
\multicolumn{20}{l}{n = 6}\\
3 & 4.463e-24 & 4 & 2.892e-24 & 5 & -1.518e-23 & & & & & & & & & & & & & & \\
$\boldsymbol{\vdots}$ & & & & & & & & & & & & & & & & & & & \\
\end{tabular}}
\caption{\label{tab:sample_table}An extract of one of the output tables for the
conditions $T_\mathrm{e}=10^4$\,K and $N_\mathrm{e}=10^4$\cmc, Case B, with no external radiation
present. The values of the departure coefficients ($b_n$), the partial departure
coefficients ($b_{nl}$), the emission coefficients ($j_{nn'}$) and the absorption
coefficients ($\kappa_{n'n}$) are tabulated for each case.}
\end{table*}
The emission coefficient $j_{nn'}$ for line radiation is given by
\begin{equation}
j_{nn'}=\frac{h\nu}{4\pi}\sum_{l=0}^{n-1}\sum_{l'=l\pm1}b_{nl}N_{nl}^*A_{nl,n'l'}
\label{eq:emcoef}
\end{equation}
and the absorption coefficient by
\begin{equation}
\kappa_{n'n} = \frac{h\nu}{4\pi} \sum_{l=0}^{n-1}\sum_{l'=l\pm1}\left[ b_{nl}
N_{nl}^*B_{n'l',nl} \left( 1-\frac{b_{nl}}{b_{n'l'}} e^{-h\nu/k_\mathrm{B}T_\mathrm{e}} \right)
\right]\,.
\label{eq:abscoef}
\end{equation}
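Equations~(\ref{eq:emcoef}) and (\ref{eq:abscoef}) amount to a double sum over the allowed $l \to l' = l \pm 1$ pairs. Below is a minimal sketch for a single $(n, n')$ pair; the departure coefficients, LTE populations, and Einstein coefficients are passed in as hypothetical arrays rather than computed.

```python
import numpy as np

H = 6.62607015e-27    # Planck constant [erg s]
KB = 1.380649e-16     # Boltzmann constant [erg / K]

def emission_absorption(nu, n, nprime, b_up, nstar_up, b_low, A, B, Te):
    """Sketch of equations (eq:emcoef) and (eq:abscoef) for one (n, n') pair.

    b_up[l], nstar_up[l] : b_{nl} and LTE populations N*_{nl}, l = 0..n-1
    b_low[lp]            : b_{n'l'} for the lower level, lp = 0..n'-1
    A[l][lp], B[lp][l]   : Einstein coefficients (hypothetical inputs)
    """
    pref = H * nu / (4.0 * np.pi)
    boltz = np.exp(-H * nu / (KB * Te))
    j = 0.0
    kappa = 0.0
    for l in range(n):
        for lp in (l - 1, l + 1):          # only l' = l +/- 1 is allowed
            if 0 <= lp < nprime:
                j += pref * b_up[l] * nstar_up[l] * A[l][lp]
                kappa += pref * b_up[l] * nstar_up[l] * B[lp][l] * (
                    1.0 - (b_up[l] / b_low[lp]) * boltz)
    return j, kappa
```

With all departure coefficients set to unity, the stimulated-emission factor reduces to $1 - e^{-h\nu/k_\mathrm{B}T_\mathrm{e}}$, so $\kappa_{n'n}$ is positive; a population inversion ($b_{nl} > b_{n'l'}$) can drive it negative, which is the maser amplification discussed in Section~\ref{sec:comp}.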
The tables containing our results are available online in machine-readable ASCII
format and can be downloaded from the article webpage. The file names are a
concatenation of the significands and exponents of the electron density and
temperature, the case (A or B) and any ambient radiation that has been included
in the model, all separated by an underscore. Free-free radiation is indicated by
`FF', the CMBR by `CMB', and no radiation field is designated with the number `0'.
For example, the file named `13\_14\_B\_0.dat' contains the results for Case B
with $N_\mathrm{e}=10^4$\cmc\ and $T_\mathrm{e}=10^4$\,K with no ambient
radiation present. The header in each file contains the same data as in the file name,
as well as the value of $n_\mathrm{c}$.
Each file contains the $b_n$ values calculated using equation~(\ref{eq:bnsum})
for each value of $n$ from $n=2$ for Case A and $n=3$ for Case B to $n=500$. This
is followed by the partial departure coefficients $b_{nl}$ with the appropriate
values of $l$ tabulated after each value of $n$ up to $n=100$. The emission
coefficients $j_{nn'}$ and absorption coefficients $\kappa_{n'n}$ as defined by
equations~(\ref{eq:emcoef}) and (\ref{eq:abscoef}), respectively, are given next.
The coefficients are tabulated next to the lower level $n'$ for ascending values
of the upper level $n$. An extract of one of the data files is shown in table~\ref{tab:sample_table}.
\section{Conclusions}
\label{sec:conclusions}
Updated calculations of departure coefficients for hydrogen atoms under nebular
conditions are presented. The elastic collision rates of \citet{2016Guzman} have
been used, which differ from values used in previous models. We have also included
stimulated and absorption processes in our equations of statistical equilibrium.
The model used to do the calculations is similar to that of \citetalias{1995Storey}.
A stopping criterion has been derived and employed to determine when to terminate
the iterative procedure, which ensures that the $b_{nl}$ values have converged to
a predefined accuracy. This criterion demands many more iterative steps
before a solution is reached than were used in previous works. In practice,
we found that it is more time efficient to use a direct solver rather than the
iterative method to achieve an acceptable accuracy.
We investigated the effects of stimulated emission due to various components of
the continuum field within a nebula on the population structure of hydrogen. Even
though the stellar radiation field and emission from dust dominate the continuum
spectrum at certain frequencies, the effects on the population structure of both
fields are negligible. The free-free radiation field has the largest influence on
the departure coefficients, increasing as the density increases. The CMBR typically
has an effect of less than 1 per cent on the departure coefficients.
Our results give departure coefficients that are consistently larger (closer to
LTE) than those of \citetalias{1995Storey}. The current results produce
amplification factors $\beta_{n,n+1}$ that are smaller than in previous
calculations, producing less extreme population inversions. The value of $n_\mathrm{c}$,
the level at which populations become statistically distributed, has been
determined empirically. The value, which depends on the electron density $N_\mathrm{e}$,
is much larger than the values used in other models.
Results for He and metal atoms and ions will be considered in a separate publication.
\section*{Acknowledgements}
We thank the reviewer, Prof P. J. Storey, for his insightful comments on our paper.
AP thanks Unisa for the funding she has received from the Academic Qualification
Incentive Programme.
\bibliographystyle{mnras}
\section{Introduction}
The $k$-center problem is a classic discrete optimization problem with numerous applications.
Given a metric space $(X,d)$ and a positive integer $k$, the objective is to choose a subset $S\subseteq X$ of at most $k$ points
such that $\max_{v\in X} d(v,S)$ is minimized, where $d(v,S) = \min_{u\in S} d(v,u)$.
Informally, the problem is to open $k$ centers to serve all points, minimizing the maximum distance to service.
This problem has been studied for at least 50 years~\cite{Hakimi64, Hakimi65}, is NP-hard to approximate
to a factor better than $2$~\cite{HN79}, and has a simple $2$-approximation algorithm~\cite{Gon85, HS85a}.
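That simple $2$-approximation is the farthest-first traversal of \cite{Gon85}: repeatedly open the point farthest from the centers chosen so far. A minimal sketch (the points and metric below are illustrative):

```python
import math

def gonzalez_kcenter(points, k, d):
    """Farthest-first traversal: the classic 2-approximation for k-center."""
    centers = [points[0]]                  # start from an arbitrary point
    while len(centers) < k:
        # open the point farthest from the current set of centers
        far = max(points, key=lambda p: min(d(p, c) for c in centers))
        centers.append(far)
    # the objective value: max over points of distance to nearest center
    radius = max(min(d(p, c) for c in centers) for p in points)
    return centers, radius

def euclid(p, q):
    return math.dist(p, q)
```

Each newly opened center is at distance equal to the current radius from the existing centers, which is exactly the argument showing the final radius is at most twice the optimum.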
In many applications one is interested in a nuanced version of the problem where instead of serving all points in $X$, the objective is to serve at least a certain number of points. This is the so-called $k$-center with outliers version, or the
{\em robust $k$-center} problem. This problem was first studied by Charikar~{\em et al}.~ in~\cite{CKMN01}
who give a $3$-approximation for the problem. A best possible $2$-approximation algorithm was recently given by Chakrabarty~{\em et al}.~ in~\cite{CGK16} (see also the paper~\cite{HPST17} by Harris~{\em et al}.~).
Another generalization of the $k$-center problem arises when the location of centers has more restrictions.
For instance, each point in $X$ may have a different weight, with the constraint that the total weight of the centers opened is at most $k$.
This problem, now called the {\em knapsack center} problem, was studied by Hochbaum and Shmoys in~\cite{HS86}, who give a $3$-approximation for the problem. To take another instance, the points of $X$ could be high-dimensional vectors and the centers picked must be linearly independent. This motivates the {\em matroid center} problem where the set of centers must be an independent set in a matroid. Chen {\em et al}.~ give a $3$-approximation for this problem in~\cite{CLLW13}.
Naturally, the two aforementioned generalizations can be taken together. Indeed, for the {\em robust matroid center} problem, that is, the problem of picking centers which form an independent set when only $m$ points need to be served, there is a $7$-approximation algorithm in~\cite{CLLW13}. This was recently improved to a $3$-approximation in~\cite{HPST17}. The {\em robust knapsack center} problem, however, had no non-trivial approximation algorithm until this work. Both~\cite{CLLW13} and~\cite{HPST17} give {\em bi-criteria} $3$-approximation algorithms which violate the knapsack constraint by $(1+\varepsilon)$ (the running time of their algorithms is exponential in $1/\varepsilon$).
\paragraph*{Our Contributions}
Motivated by the state-of-affairs of the robust knapsack center problem, we study a broad generalization of the problems mentioned above.
Let $\mathscr{F}$ be a general down-closed\footnote{if $A\in \mathscr{F}$ and $B\subseteq A$, then $B\in \mathscr{F}$.} family of subsets over $X$.
In the {\em robust $\mathscr{F}$-center} problem we are given a metric space $(X,d)$, a parameter $m$, and the objective is to select a subset $S\in \mathscr{F}$ such that $\min_{T\subseteq X, |T|=m} \max_{v\in T} d(v,S)$ is minimized. That is, the maximum distance of service of the closest $m$ points is minimized.
Observe that if $\mathscr{F} := \{A: w(A) \leq k\}$ then we get the robust knapsack center problem, and if $\mathscr{F}$ is the collection of independent sets of a matroid, then we get the robust matroid center problem. But this generalization captures a host of other problems. For instance, one can consider multiple (but constant) knapsack constraints. Indeed, this was studied in both~\cite{HS86} and~\cite{CLLW13}. The former\footnote{The complete proofs can be found in the STOC 1984 version of ~\cite{HS86}} only looks at the version {\em without} outliers and gives a polynomial time $3$-approximation in the case when the weights are all polynomially bounded. The latter proves that when the weights are not polynomially bounded, there can be no approximation algorithm via a reduction to the {\sc Subset Sum} problem, and gives a $3$-approximation violating each knapsack constraint by at most $(1+\varepsilon)$ multiplicative factor.
Another instance is a single knapsack constraint along with a single matroid constraint. To our knowledge, this problem has not been studied earlier even in the case when outliers are not allowed.
This problem seems natural: for instance, when the points are high dimensional vectors with weights and the collection of centers needs to be a linearly independent set with total weight at most $k$.
The complexity of the robust $\mathscr{F}$-center problem naturally depends on the complexity of $\mathscr{F}$. To understand this, we define the following optimization problem which depends only on the set-system $(X,\mathscr{F})$. We call it the $\mathscr{F}${\em-maximization under partition constraints} or simply $\calF\textrm{-PCM}$. In this problem, one is given an arbitrary partition $\mathscr{P}$ of $X$ along with $\mathscr{F}$, and a {\em poly-bounded} (the range is at most a polynomial in $|X|$) value ${\mathsf{val}}(x)$ on each $x\in X$. The objective is to find a set $S\in \mathscr{F}$
maximizing ${\mathsf{val}}(S)$ such that $S$ contains at most one element from each part of $\mathscr{P}$. Our main result stated colloquially (and formally stated as Theorem~\ref{thm:1} and Theorem~\ref{thm:2} in Section~\ref{sec:prelims}) is the following dichotomy theorem\footnote{We are deliberately being inaccurate here. We should state the theorem for the more general {\em supplier} version where the set $X$ is partitioned into $F\cup C$ and only the points in $C$ need to be covered and only the centers in $F$ can be opened. Being more general, the algorithmic results are therefore stronger. On the other hand, we weren't able (and didn't try too hard) to make our hardness go through for the center version. In the Introduction we stick with the center version and switch to the supplier in the more formal subsequent sections.}.
\begin{theorem*}
{\em For any down-closed family $(X,\mathscr{F})$, the robust $\mathscr{F}$-center problem has an efficient $3$-approximation algorithm if the $\calF\textrm{-PCM}$ problem
can be solved in polynomial time. Otherwise, there is no efficient {\em non-trivial} approximation algorithm for the robust $\mathscr{F}$-center problem.}
\end{theorem*}
Note that in general, we are not concerned about how $\mathscr{F}$ is represented, because the only place the algorithm checks if a set $S$ is in $\mathscr{F}$ is perhaps for solving the $\calF\textrm{-PCM}$ problem. So one can choose a representation that works best for the $\calF\textrm{-PCM}$ solver.
A series of corollaries follow from the above theorem. These are summarized in Table~\ref{tbl:1}.
\begin{asparaitem}
\item When $\mathscr{F} =\{A: w(A) \leq k\}$, the $\calF\textrm{-PCM}$ problem can be solved in polynomial time via dynamic programming. This crucially uses the fact that ${\mathsf{val}}$
is poly-bounded. Therefore we get a $3$-approximation for the robust knapsack center problem. (Theorem~\ref{thm:rkn})
\item When $\mathscr{F}$ is the independent set of a matroid, then the $\calF\textrm{-PCM}$ problem is a matroid intersection problem. Therefore we get a $3$-approximation for the robust matroid center problem recovering the result from~\cite{HPST17}. (Theorem~\ref{thm:rmc})
\item When $\mathscr{F} = \{A: w_1(A) \leq k_1, w_2(A) \leq k_2, \ldots, w_d(A) \leq k_d\}$ is defined by $d$ weight functions {\em and} each weight function $w_i$ is {\em poly-bounded}, then $\calF\textrm{-PCM}$ can be solved efficiently using dynamic programming.
Therefore we get a $3$-approximation algorithm for the robust multi-knapsack center problem, extending the result in~\cite{HS86} to the case with outliers. (Theorem~\ref{thm:rmkn})
\item When $\mathscr{F}$ is given by the intersection of a single knapsack and a single matroid constraint, we do not know the complexity of the $\calF\textrm{-PCM}$ problem. However, when the weight function $w(\cdot)$ is poly-bounded and the matroid is representable, we can give a {\em randomized} algorithm for the $\calF\textrm{-PCM}$ problem via a reduction to the exact matroid intersection problem.
Therefore, we get a randomized $3$-approximation for this special case of robust knapsack-and-matroid center problem (Theorem~\ref{thm:knandm}).
\end{asparaitem} \smallskip
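The dynamic program behind the first and third bullets is a multiple-choice knapsack: sweep the parts of $\mathscr{P}$ and, for each weight budget, keep the best value achievable using at most one element per part. A minimal sketch for a single knapsack constraint (integer weights are assumed; the instance below is illustrative):

```python
def knapsack_pcm(parts, k):
    """F-PCM for F = {A : w(A) <= k}, at most one element per part.

    parts: list of parts, each a list of (weight, value) pairs with
    non-negative integer weights.  Returns the maximum total value.
    """
    NEG = float("-inf")
    best = [NEG] * (k + 1)     # best[c] = max value using total weight exactly c
    best[0] = 0.0
    for part in parts:
        new = best[:]          # option: take nothing from this part
        for w, v in part:
            for cap in range(w, k + 1):
                # extend states from *before* this part: at most one per part
                if best[cap - w] != NEG and best[cap - w] + v > new[cap]:
                    new[cap] = best[cap - w] + v
        best = new
    return max(x for x in best if x != NEG)
```

The table has $O(k)$ entries per part, so the running time is polynomial as long as $k$ (or, for the multi-knapsack case, each budget) is poly-bounded, which is exactly the condition in the theorem statements.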
\noindent
{\bf Remark 1: The Zero Outlier Case.}
At this juncture, the reader may wonder about the complexity of the $\mathscr{F}$-center problem which doesn't allow any outliers.
This is related to the following decision problem. Given $(X,\mathscr{F})$ and an arbitrary sub-partition $\mathscr{P}$ of $X$, the problem asks whether there is a set $S\in \mathscr{F}$ such that $S$ contains {\em exactly} one element from each part of $\mathscr{P}$. We call this the {\em $\mathscr{F}$-feasibility under partition constraints} or simply the $\calF\textrm{-PCF}$ problem. Analogous to the informal theorem from earlier, the $\mathscr{F}$-center problem (without outliers) has an efficient $3$-approximation algorithm if the $\calF\textrm{-PCF}$ problem can be solved efficiently; otherwise the $\mathscr{F}$-center problem has no non-trivial approximation algorithm. Indeed, this theorem is much simpler to prove and arguably the roots of this lie in~\cite{HS86}.
This raises the main open question from our paper: {\em what is the relation between the $\calF\textrm{-PCF}$ and the $\calF\textrm{-PCM}$ problem?} Clearly, the $\calF\textrm{-PCF}$ problem is as easy as the $\calF\textrm{-PCM}$ problem (set all values equal to one in the latter). But is there an $\mathscr{F}$ such that $\calF\textrm{-PCM}$ is ``hard'' while $\calF\textrm{-PCF}$ is ``easy''? One concrete example is the corollary discussed in the last bullet point above. When $\mathscr{F}$ is a single knapsack constraint and a single matroid constraint, then
the $\calF\textrm{-PCF}$ problem is solvable in polynomial time by minimizing a linear function over a matroid polytope and another partition matroid {\em base} polytope.
As noted above, we don't know the complexity of the $\calF\textrm{-PCM}$ problem in this case.
\noindent
{\bf Remark 2: Handling Approximations.}
If the $\calF\textrm{-PCM}$ problem is NP-hard, then the robust $\mathscr{F}$-center has no non-trivial approximation algorithm.
However approximation algorithms for $\calF\textrm{-PCM}$ translate to bi-criteria approximation algorithms for the robust $\mathscr{F}$-center problem.
More precisely, if we have a $\rho$-approximation for the $\calF\textrm{-PCM}$ problem ($\rho \leq 1$), then we get a $(3,\rho)$-{\em bi-criteria} approximation algorithm for the robust $\mathscr{F}$-center problem. That is, we return a solution $S\in \mathscr{F}$ such that the maximum distance among the closest $\rho\cdot m$ points is at most $3$ times the optimum value.
A different notion of approximation is also possible for the $\calF\textrm{-PCM}$ problem. Given an instance, there may be an algorithm which returns a set $S$ whose value is at least the optimum value but $S\in \mathscr{F}^R$ for some $\mathscr{F}^R\supseteq \mathscr{F}$ which is a `relaxation' of $\mathscr{F}$.
For instance, if $\mathscr{F}$ is the intersection of multiple (constantly many) knapsack constraints which are not poly-bounded, then for any constant $\varepsilon> 0$ the $\calF\textrm{-PCM}$ problem can be solved~\cite{CVZ11, GRSZ14} returning a set with value at least the optimum but violating each constraint by a multiplicative $(1+\varepsilon)$ factor. We can use this to obtain a polynomial time $3$-approximation for the robust multiple knapsack-center problem if we are allowed to violate the knapsack constraints by $(1+\varepsilon)$.
\begin{table}[!ht]
\setlength{\arrayrulewidth}{0.2mm}
\setlength{\tabcolsep}{8pt}
\renewcommand{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{|c | c | c |}
\hline
{\bf The constraint system $\mathscr{F}$}& {\bf Without Outliers} & {\bf Robust (With Outliers)} \\
\hline
Knapsack Constraint & 3~\cite{HS86} & {\bf 3} (Theorem~\ref{thm:rkn}) \\
\hline
Matroid Constraint & 3~\cite{CLLW13} & 3~\cite{HPST17} \\
\hline
\makecell{Multiple Knapsack \\ (poly-bounded weights)} & 3~\cite{HS86} & {\bf 3} (Theorem~\ref{thm:rmkn}) \\
\hline
Knapsack and Matroid & {\bf 3} (Theorem~\ref{thm:knandmcenter}) & \makecell{Open \\ {\bf 3} in special case} (Theorem~\ref{thm:knandm}) \\
\hline
\makecell{Multiple Knapsacks \\ and Matroid Constraint}& \makecell{No uni-criteria \\ approximation} & {\bf 3}, $(1+\varepsilon)$ violating (Theorem~\ref{thm:multiknapsackmat})\\
\hline
\end{tabular}
\end{center}
\caption{All the above results can be obtained as corollaries of, or simple extensions to, our main result.
The numbers in bold indicate new results.
}
\label{tbl:1}
\end{table}
\paragraph*{Our Technique}
Although our theorem statement is quite general, its proof is rather simple. Let us begin with the $\mathscr{F}$-center problem without outliers.
For this, we follow the algorithmic `partitioning' idea of Hochbaum and Shmoys~\cite{HS86}. As is standard, we guess the optimum distance, which we assume to be $1$ by scaling. Initially, all points are marked uncovered. Subsequently, we pick {\em any} uncovered point $x$ and consider the subset $B_x$ of points within distance $1$ from it. Note that the optimum solution {\em must} pick at least one point from each $B_x$ to serve $x$. Next, we call $x$ ``responsible'' for all uncovered points within distance $2$ from it, and mark all these points covered. Observe that all the newly covered points are within distance $3$ from {\em any} point in $B_x$.
We continue the above procedure till all points are marked covered. Also observe that the $B_x$'s form a sub-partition $\mathscr{P}$ of the universe where each part has a responsible point.
By the above two observations, we see that
the $\calF\textrm{-PCF}$ problem must have a feasible solution with respect to $\mathscr{P}$, and any solution to the $\calF\textrm{-PCF}$ problem gives a $3$-approximation to the $\mathscr{F}$-center problem.
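This partitioning step can be sketched as follows; the sketch is a minimal illustration with assumed names (`partition_balls`, `dist`), not the paper's code, and the guessed optimum is scaled to $1$. Any $S\in\mathscr{F}$ that picks one point per returned ball is within distance $3$ of every point, by two applications of the triangle inequality.

```python
def partition_balls(points, dist):
    """Greedy partitioning: each picked uncovered point x contributes the
    ball B_x of points within distance 1 and covers everything within 2."""
    uncovered = set(points)
    balls = []  # the sub-partition P: one pair (x, B_x) per responsible x
    while uncovered:
        x = min(uncovered)  # any uncovered point works; min for determinism
        ball = {p for p in points if dist(x, p) <= 1}
        balls.append((x, ball))
        uncovered -= {p for p in uncovered if dist(x, p) <= 2}
    return balls
```

Since responsible points are pairwise at distance more than $2$, the balls $B_x$ are pairwise disjoint, i.e., they form a sub-partition.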
Handling outliers is a bit trickier. The above argument doesn't work since the `responsible' point may be an outlier in the optimal solution, and we can no longer assert that the optimal solution must contain a point from each part. Indeed, the nub of the problem seems to be figuring out which points should be outliers. The $3$-approximation algorithm of Charikar {\em et al.}~\cite{CKMN01} (see also~\cite{AF+10}) cleverly chooses the partitioning via a greedy procedure, but their argument seems hard to generalize to other constraints.
A different line of attack, used by Chakrabarty {\em et al.}~\cite{CGK16} and Harris {\em et al.}~\cite{HPST17}, is to write an LP relaxation and use the LP solution to recognize the outliers. At a high level, the LP assigns each point $x$ a variable (in this paper we call it $\mathsf{cov}(x)$) that indicates the extent to which $x$ is served. Subsequently, the partitioning procedure described in the first paragraph is run, except that the responsible points are considered in decreasing order of $\mathsf{cov}(x)$. The hope is that points assigned higher $\mathsf{cov}(x)$ in the LP solution are less likely to be outliers, and therefore the partition returned by the procedure can be used to recover a $3$-approximate solution.
This idea does work for the natural LP relaxation of the robust matroid center problem but fails for the natural LP relaxation of the robust knapsack center problem. Indeed, the latter has unbounded integrality gap.
Our solution is to use the round-or-cut framework that has recently been a powerful tool in designing many approximation algorithms
(see~\cite{CKK17,L16,L15,AMO14,CFLP00}).
We consider the following ``coverage polytope'' for the robust $\mathscr{F}$-center problem: the variables are $\mathsf{cov}(x)$
denoting the extent to which $x$ is covered by a convex combination of sets $S\in \mathscr{F}$.
Of course, we cannot hope to efficiently check whether a particular $\mathsf{cov}$ lies in this polytope.
Nevertheless, we show that for any $\mathsf{cov}$ in the coverage polytope, the partitioning procedure when run in the decreasing order of $\mathsf{cov}$, has the property that there {\em exists} a solution $S\in \mathscr{F}$ intersecting each part at most once which covers at least $m$ points. We can then use the algorithm for $\calF\textrm{-PCM}$ to find this set. Furthermore, and more crucially, if the partitioning procedure does not have this property, then we can efficiently
find a {\em hyperplane separating} $\mathsf{cov}$ from the coverage polytope. Therefore, we can run the ellipsoid algorithm on the coverage polytope, each time either obtaining a separating hyperplane or obtaining a $\mathsf{cov}$ that leads to a desired partition, and therefore a $3$-approximation.
\section{Preliminaries}\label{sec:prelims}
In this section we give formal definitions and statements of our results.
As mentioned in a footnote in the Introduction, we focus on the supplier version of the problem.
\begin{definition}[$\mathscr{F}$-Supplier Problem]\label{deffc}
The input is a metric space $(X,d)$ on a set of points $X=F\cup C$ with distance function $d:X \times X \longrightarrow \mathbb{R}_{\geq 0}$ and $\mathscr{F} \subseteq 2^F$ a down-closed family of subsets of $F$. The objective is to find $S \in \mathscr{F}$ such that $\max_{v \in C} d(v,S)$ is minimized.
\end{definition}
\begin{definition}[{{Robust $\mathscr{F}$-Supplier}} Problem]
The input is an instance of the $\mathscr{F}$-supplier problem along with an integer parameter $m \in \{0,1,\ldots, |C|\}$.
The objective is to find $S \in \mathscr{F}$ and $T\subseteq C$ for which $\lvert T \rvert \geq m$, and $\max_{u \in T} d(u,S)$ is minimized.
\end{definition}
\noindent
Thus an instance $\mathscr{I}$ of the robust $\mathscr{F}$-supplier problem is defined by the tuple $(F,C,d,m,\mathscr{F})$.
In the definitions above, $F$ and $C$ are often called the set of \emph{facilities} and \emph{customers} respectively.
Given the set system $\mathscr{F}$ defined over $F$, we define the following optimization problem.
\begin{definition}[{$\calF\textrm{-PCM}$} problem]
The input is $\mathscr{J} = (F,\mathscr{F},\mathscr{P},{\mathsf{val}})$ where $F$ is a finite set, $\mathscr{F} \subseteq 2^F$ is a down-closed family, $\mathscr{P} \subseteq 2^F$ is a sub-partition of $F$, and ${\mathsf{val}}:F \longrightarrow \{0,1,2,\cdots\}$ is an
integer-valued function, whose maximum value we denote by $|{\mathsf{val}}|$,
satisfying: $\forall f_1,f_2 \in A \in \mathscr{P}$, ${\mathsf{val}}(f_1) = {\mathsf{val}}(f_2)$. The objective is to find:
\[
{\mathsf{opt}}(\mathscr{J})= \max\limits_{S \in \mathscr{F}} ~~~ {\mathsf{val}}(S) : ~~~ \lvert S \cap A \rvert \leq 1, ~~\forall A \in \mathscr{P}
\]
\end{definition}
\noindent
The next theorem is the main result of the paper.
\begin{theorem}\label{mainthrm}\label{thm:1}
Given a {{Robust $\mathscr{F}$-Supplier}} instance $\mathscr{I} = (F,C,d,m,\mathscr{F})$, let $\mathcal{A}$ be an algorithm that solves any {$\calF\textrm{-PCM}$} instance $\mathscr{J} = (F,\mathscr{F},\mathscr{P},{\mathsf{val}})$, with $|{\mathsf{val}}| \leq |C|$, in time bounded by $T_{\mathcal{A}}(\mathscr{J})$. Then, there is a $3$-approximation algorithm for the {{Robust $\mathscr{F}$-Supplier}} instance that runs in time $\mathrm{poly}(\lvert\mathscr{I}\rvert)T_{\mathcal{A}}(\mathscr{J})$.
\end{theorem}
The next theorem is the (easier) second part of the dichotomy theorem. We show that if {$\calF\textrm{-PCM}$} cannot be solved efficiently, then the corresponding {{Robust $\mathscr{F}$-Supplier}} problem admits no non-trivial approximation.
\begin{theorem}\label{compthrm}\label{thm:2}
Given any non-trivial approximation algorithm $\mathcal{B}$ for the {{Robust $\mathscr{F}$-Supplier}} problem that runs in time $\text{T}_{\mathcal{B}}(|\mathscr{I}|)$ on instance $\mathscr{I}$, any {$\calF\textrm{-PCM}$} instance $\mathscr{J} = (F,\mathscr{F},\mathscr{P},{\mathsf{val}})$ can be solved in time $\mathrm{poly}(\lvert\mathscr{J}\rvert)\text{T}_{\mathcal{B}}(|\mathscr{I}|)$, where $|\mathscr{I}| = \mathrm{poly}(|\mathscr{J}|)$.
\end{theorem}
\begin{proof}
Given $\mathscr{J}$ we construct an instance $\mathscr{I}$ of the {{Robust $\mathscr{F}$-Supplier}} problem. The set of facilities is $F$.
We describe the set of customers $C$ next. Extend $\mathscr{P}$ to a partition of $F$ denoted by $\mathcal{Q} = \mathscr{P} \cup \{\{f\}: f\in F, \nexists A \in \mathscr{P} : f \in A\}$. By definition of the $\calF\textrm{-PCM}$ problem, for any $A \in \mathcal{Q}$, there exists a number $n_A \in \{0,1,2,\cdots\}$ such that ${\mathsf{val}}(f) = n_A$ for all $f \in A$.
For each $A \in \mathcal{Q}$, we add $n_A$ customers to $C$ and call this set $\phi(A)$.
We now describe the distance function. For each $A\in \mathcal{Q}$, for each pair $u,v\in A$ and each pair $u,v\in \phi(A)$, we set $d(u,v) = 0$.
For each $u\in A$ and $v\in \phi(A)$, we have $d(u,v) = 1$. All other distances are $\infty$. Observe that $d$ satisfies the triangle inequality.
Finally, we let $m$ be our guess of the value of ${\mathsf{opt}}(\mathscr{J})$. This completes the description of $\mathscr{I} = (F,C,d,m,\mathscr{F})$.
Suppose algorithm $\mathcal{B}$ finds $S \in \mathscr{F}$ and $T \subseteq C$ such that $\lvert T \rvert \geq m$ and $\max_{v \in T} d(v,S) \leq \alpha {\mathsf{opt}}(\mathscr{I}) = \alpha$. Without loss of generality, we can assume $\lvert S \cap A \rvert \leq 1$ for all $A \in \mathscr{P}$, which implies that $S$ is a feasible solution for $\mathscr{J}$. Indeed, if there exist $f_1,f_2\in S$ with $f_1,f_2\in A \in \mathscr{P}$, then $S\backslash \{f_2\}$ is still an $\alpha$-approximate solution for $\mathscr{I}$: since $\mathscr{F}$ is down-closed, $S \backslash \{f_2\} \in \mathscr{F}$, and since $d(f_1,f_2) = 0$, the set $S\backslash \{f_2\}$ covers all the customers that $S$ covers.
Next, we assert that ${\mathsf{val}}(S) \geq m = {\mathsf{opt}}(\mathscr{J})$ since
$m \leq \lvert T \rvert \leq \lvert \{v \in C : d(v,S) \leq \alpha\} \rvert = \sum_{A \in \mathcal{Q}: \lvert S \cap A\rvert = 1} \lvert \phi(A)\rvert = \sum_{f \in S} {\mathsf{val}}(f),$
where the first equality uses the fact that for $v \in C$ and $f \in A \in \mathcal{Q}$, $d(v,f) \leq \alpha$ only if $v \in \phi(A)$.
Finally, since ${\mathsf{val}}$ is poly-bounded, ${\mathsf{opt}}(\mathscr{J})$ is bounded by $\mathrm{poly}(\lvert\mathscr{J}\rvert)$, so one can guess $m$ by iterating over all possible values of ${\mathsf{opt}}(\mathscr{J})$.
\end{proof}
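For concreteness, the instance built in this reduction can be sketched in code; the data and names below (`build_instance`, `parts`, string facility names) are illustrative, not from the paper. Each part $A$ contributes $n_A$ co-located customers $\phi(A)$ at distance $1$ from the facilities of $A$, with infinite cross-part distances.

```python
import math

def build_instance(parts, val):
    """parts: list of facility lists (the partition Q); val[i] = common
    value n_A of part i. Returns customers (tagged by part) and metric d."""
    customers = [(i, j) for i, n in enumerate(val) for j in range(n)]
    part_of = {f: i for i, A in enumerate(parts) for f in A}

    def d(u, v):
        # facilities are dict keys; customers are (part, index) tuples
        pu = part_of[u] if u in part_of else u[0]
        pv = part_of[v] if v in part_of else v[0]
        if pu != pv:
            return math.inf
        # same part: 0 within facilities or within customers, 1 across
        return 0 if (u in part_of) == (v in part_of) else 1

    return customers, d
```

One can check on such an instance that the customers within distance $1$ of a set $S$ hitting each part at most once number exactly $\sum_{f\in S}{\mathsf{val}}(f)$, as in the proof.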
We end this section by fixing some notation used in the remainder of the paper.
For any $u\in F\cup C$ we let $B_C(u,r)$ be the customers in a ball of radius $r$ around $u$, i.e., $B_C(u,r) = \{v \in C : d(u,v) \leq r\}$. Similarly, define $B_F(u,r)$ as the facilities in a ball of radius $r$ around $u$, i.e., for $u\in F \cup C$, $B_F(u,r) = \{f \in F : d(u,f) \leq r\}$.
\section{Algorithm and Analysis : Proof of Theorem~\ref{thm:1}}\label{mainApp}
We fix $\mathscr{I} = (F,C,d,m,\mathscr{F})$, the instance of the {{Robust $\mathscr{F}$-Supplier}} problem.
We use ${\widehat{\opt}}$ to denote \emph{our guess} of the value of the optimal solution. Without loss of generality, we can assume ${\widehat{\opt}}=1$; otherwise we scale $d$ to meet this criterion.
Our objective henceforth is to either find a set $S \in \mathscr{F}$ such that $\lvert \{ v \in C: d(v,S) \leq 1 \} \rvert \geq m$, or prove
that ${\mathsf{opt}}(\mathscr{I}) > 1$.
There are two parts to our proof. The first part is a partitioning procedure which given an assignment $\mathsf{cov}(v)\in \mathbb R_{\geq 0}$ for every customer
$v\in C$, constructs an instance $\mathscr{J}$ of $\calF\textrm{-PCM}$. We call $\mathsf{cov}$ {\em valuable} if $\mathscr{J}$ has optimum value $\geq m$. Our procedure ensures that if $\mathsf{cov}$ is valuable, then we get a $3$-approximate solution for $\mathscr{I}$.
This is described in Section~\ref{subsecRed}.
The second part contains the proof of Theorem~\ref{thm:1}. In particular, we show how, using the round-or-cut methodology
with polynomially many calls to $\mathcal{A}$ (recall this is the algorithm for $\calF\textrm{-PCM}$), we can either prove ${\mathsf{opt}}(\mathscr{I}) > 1$, or
find a valuable $\mathsf{cov}$. This is described in Section~\ref{sec:rndandcut}.
\subsection{Reduction to \texorpdfstring{$\calF\textrm{-PCM}$}{Lg}}\label{subsecRed}
Algorithm~\ref{alg:1} takes as input an assignment $\{\mathsf{cov}(v) \in \mathbb R_{\geq 0} :v\in C\}$.
It returns a sub-partition $\mathscr{P}$
of $F$ and assigns ${\mathsf{val}}:F\to \{0,1,\cdots,|C|\}$ such that all the facilities in the same part of $\mathscr{P}$ get the same ${\mathsf{val}}$. That is,
it returns an $\calF\textrm{-PCM}$ instance $\mathscr{J} = (F,\mathscr{F}, \mathscr{P}, {\mathsf{val}})$ with $|{\mathsf{val}}| \leq |C|$.
The algorithm maintains a set of \emph{uncovered} customers $U\subseteq C$ initialized to $C$ (Line~\ref{ln:1}).
In each iteration, it picks the customer $v \in U$ with maximum $\mathsf{cov}$ (Line~\ref{ln:greedy}) and adds it to set $\mathsf{Reps}_{\cov}$ (Line~\ref{ln:rep}).
We add the set of facilities $B_F(v,1)$ at distance $1$ from $v$ to $\mathscr{P}$ (Line~\ref{ln:nearby-fac},~\ref{ln:form-part}).
For each such $v$, we carve out the subset ${\mathsf{Chld}}(v) = B_C(v,2) \cap U$ of currently uncovered customers ``represented'' by $v$ (Line~\ref{ln:chld}).
For every facility $f\in B_F(v,1)$ we define its \emph{value} to be: ${\mathsf{val}}(f) = \lvert {\mathsf{Chld}}(v) \rvert$ (Line~\ref{ln:set-val}).
At the end of the iteration, ${\mathsf{Chld}}(v)$ is removed from $U$ (Line~\ref{ln:12}) and the loop continues till $U$ becomes $\emptyset$.
This way, the algorithm partitions $C$ into $\{{\mathsf{Chld}}(v) : v \in \mathsf{Reps}_{\cov}\}$ (see Fact~\ref{partfact}).
Claim~\ref{subpartclm} shows that $\mathscr{P}$ is a sub-partition of $F$.
\begin{algorithm}[!ht]
\caption{$\mathscr{F}$-PCM instance construction}
\label{constAlgo}\label{alg:1}
\begin{algorithmic}[1]
\Require {{Robust $\mathscr{F}$-Supplier}} instance $(F,C,d,m,\mathscr{F})$ and assignment $\{\mathsf{cov}(v) \in \mathbb R_{\geq 0} :v\in C\}$
\Ensure $\mathscr{F}$-PCM instance $(F,\mathscr{F},\mathcal{P},{\mathsf{val}})$
\State $U \leftarrow C$ \Comment{The set of uncovered customers} \label{ln:1}
\State $\mathsf{Reps}_{\cov} \leftarrow \emptyset$ \Comment{The set of representatives}
\State $\mathcal{P} \leftarrow \emptyset$ \Comment{The sub-partition of $F$ that will be returned}
\While{ $U \neq \emptyset$}
\State $v \leftarrow \arg\max_{v\in U} \mathsf{cov}(v)$ \Comment{The first customer in $U$ in non-increasing $\mathsf{cov}$ order} \label{ln:6} \label{ln:greedy}
\State $\mathsf{Reps}_{\cov} \leftarrow \mathsf{Reps}_{\cov} \cup \{v\}$ \label{ln:7} \label{ln:rep}
\State $B_F(v,1) \leftarrow \{f \in F: d(f,v) \leq 1\}$ \Comment{Facilities that can cover $v$ with a ball of radius 1} \label{ln:8} \label{ln:nearby-fac}
\State $\mathcal{P} \leftarrow \mathcal{P} \cup \{B_F(v,1)\}$ \label{ln:9} \label{ln:form-part}
\State ${\mathsf{Chld}}(v) \leftarrow \{u \in U: d(u,v) \leq 2\}$\Comment{Equal to $B_C(v,2)\cap U$} \label{ln:chld} \label{ln:10}
\State ${\mathsf{val}}(f) \leftarrow \lvert {\mathsf{Chld}}(v) \rvert \ \ \forall f \in B_F(v,1)$ \label{ln:11} \label{ln:set-val}
\State $U \leftarrow U \backslash {\mathsf{Chld}}(v)$ \label{ln:12} \label{ln:remove-from-U}
\EndWhile
\end{algorithmic}
\end{algorithm}
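For concreteness, Algorithm~\ref{alg:1} can be sketched in Python as follows; this is an illustrative rendering (assuming the metric and $\mathsf{cov}$ are given as a plain function and dictionary), with ties in the $\arg\max$ broken arbitrarily.

```python
def build_pcm_instance(F, C, d, cov):
    """Returns (P, val, reps): the sub-partition of F, the facility values,
    and the chosen representatives, mirroring Algorithm 1."""
    U = set(C)                                    # uncovered customers
    reps, P, val = [], [], {f: 0 for f in F}
    while U:
        v = max(U, key=lambda u: cov[u])          # pick max cov in U
        reps.append(v)
        ball = [f for f in F if d(f, v) <= 1]     # B_F(v, 1)
        P.append(ball)
        chld = {u for u in U if d(u, v) <= 2}     # Chld(v) = B_C(v,2) ∩ U
        for f in ball:
            val[f] = len(chld)                    # val(f) = |Chld(v)|
        U -= chld
    return P, val, reps
```

On a tiny line metric, the returned parts are disjoint and the $\mathsf{Chld}$ sizes partition $C$, matching Fact~\ref{partfact} and Claim~\ref{subpartclm}.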
\begin{fact}\label{partfact}
$\{{\mathsf{Chld}}(v) : v \in \mathsf{Reps}_{\cov}\}$ is a partition of $C$.
\end{fact}
\begin{fact}\label{greedyrule}
For $v \in \mathsf{Reps}_{\cov}$ and any $u \in {\mathsf{Chld}}(v)$, Line~\ref{ln:greedy} of the algorithm implies $\mathsf{cov}(v) \geq \mathsf{cov}(u)$.
\end{fact}
\begin{claim}\label{subpartclm}
$\mathcal{P}$ constructed by Algorithm~\ref{constAlgo} is a sub-partition of $F$.
\end{claim}
\begin{proof}
By Lines~\ref{ln:greedy} and~\ref{ln:remove-from-U} of the algorithm, any two distinct $u,v \in \mathsf{Reps}_{\cov}$ satisfy $d(u,v) > 2$; hence, by the triangle inequality, $B_F(u,1) \cap B_F(v,1) = \emptyset$, implying $\mathcal{P}$ is a sub-partition of $F$.
\end{proof}
\begin{claim}\label{bll3clm}
For each $v \in \mathsf{Reps}_{\cov}$ and $f \in B_F(v,1)$, ${\mathsf{Chld}}(v) \subseteq B_C(f,3)$.
\end{claim}
\begin{proof}
For any $u\in {\mathsf{Chld}}(v)$, we have $d(u,v) \leq 2$ and since $d(f,v) \leq 1$, the fact that $d$ is metric implies $d(f,u) \leq 3$.
\end{proof}
\begin{definition}\label{ISdef}
For $S \subseteq F$ let $R(S) = \{v \in \mathsf{Reps}_{\cov} : B_F(v,1) \cap S \neq \emptyset \}$, be the set of representative customers in $\mathsf{Reps}_{\cov}$ that are covered by balls of radius 1 around the facilities in $S$.
\end{definition}
\begin{claim}\label{valclm}
Let $S \in \mathscr{F}$ be any feasible solution of the $\mathscr{F}$-PCM instance constructed by Algorithm~\ref{constAlgo}. Then,
$\sum_{f \in S} {\mathsf{val}}(f) = \sum_{v\in R(S)} \lvert {\mathsf{Chld}}(v) \rvert.$
\end{claim}
\begin{proof}
For an $f \in S$, according to Line~\ref{ln:set-val} of the algorithm, ${\mathsf{val}}(f) > 0$ only if $f \in B_F(v,1)$ for some $v \in \mathsf{Reps}_{\cov}$. Also, by definition of the $\mathscr{F}$-PCM problem, $\lvert B_F(v,1) \cap S\rvert \leq 1$ for any $v \in \mathsf{Reps}_{\cov}$. That is, there is exactly one $f \in B_F(v,1) \cap S$ for each $v \in R(S)$, and again by Line~\ref{ln:set-val}, ${\mathsf{val}}(f) = \lvert {\mathsf{Chld}}(v) \rvert$. Summing this equality over all $v \in R(S)$ and the corresponding $f \in B_F(v,1) \cap S$ proves the claim.
\end{proof}
\begin{claim}\label{slnConstLma}
Let $\mathscr{I} = (F,C,d,m,\mathscr{F})$ be a {{Robust $\mathscr{F}$-Supplier}} instance and let $\mathsf{cov}:C\to \mathbb{R}_{\geq 0}$ be a coverage function.
Let $\mathscr{J} = (F,\mathscr{F},\mathscr{P},{\mathsf{val}})$ be the $\calF\textrm{-PCM}$ instance returned by Algorithm~\ref{constAlgo} on input $\mathscr{I}$ and $\mathsf{cov}$.
Given any feasible solution $S$ to $\mathscr{J}$, we can cover at least ${\mathsf{val}}(S)$ customers of $C$ by opening balls of radius $3$ around the facilities in $S$.
\end{claim}
\begin{proof}
By considering $R(S)$ from Definition~\ref{ISdef}, Claim~\ref{valclm} gives:
$\sum_{v\in R(S)} \lvert {\mathsf{Chld}}(v) \rvert=\sum_{f \in S} {\mathsf{val}}(f)$.
From Fact~\ref{partfact}, we get that for all $u,v \in \mathsf{Reps}_{\cov}, {\mathsf{Chld}}(u) \cap {\mathsf{Chld}}(v) = \emptyset$.
Thus, $\lvert \bigcup_{v\in R(S)} {\mathsf{Chld}}(v)\rvert = \sum_{v \in R(S)} \lvert {\mathsf{Chld}}(v) \rvert = {\mathsf{val}}(S)$.
Furthermore, by
Claim~\ref{bll3clm}, $\{ v \in C: d(v,S) \leq 3 \} \supseteq \bigcup_{u\in R(S)} {\mathsf{Chld}}(u)$
implying the size of the former is at least ${\mathsf{val}}(S)$, thus proving the claim.
\end{proof}
The above claim motivates the following definition of {\em valuable} $\mathsf{cov}$ assignments, and the subsequent lemma.
\begin{definition}
An assignment $\{\mathsf{cov}(v) \in \mathbb R_{\geq 0} :v\in C\}$
is \emph{valuable} with respect to a {{Robust $\mathscr{F}$-Supplier}} instance $\mathscr{I} = (F,C,d,m,\mathscr{F})$ iff ${\mathsf{opt}}(\mathscr{J}) \geq m$, where $\mathscr{J}$ is the $\mathscr{F}$-PCM instance returned by Algorithm~\ref{constAlgo} from $\mathscr{I}$ and $\mathsf{cov}$.
\end{definition}
\begin{lemma}\label{lem:summary}
Given an instance $\mathscr{I}$ of the {{Robust $\mathscr{F}$-Supplier}} problem with ${\mathsf{opt}}(\mathscr{I}) = 1$, and a valuable assignment $\mathsf{cov}$ with respect to it, we can obtain a $3$-approximate solution in time $\mathrm{poly}(|\mathscr{I}|) + T_\mathcal{A}(\mathscr{J})$ where $\mathscr{J}$ is the instance constructed by Algorithm~\ref{constAlgo} from $\mathscr{I}$ and $\mathsf{cov}$.
\end{lemma}
\begin{proof}
Since $\mathsf{cov}$ is valuable, ${\mathsf{opt}}(\mathscr{J})\geq m$.
We use solver $\mathcal{A}$ to return an optimal solution $S\in \mathscr{F}$ with ${\mathsf{val}}(S) \geq m$.
Claim~\ref{slnConstLma} implies that $S$ is a $3$-approximate solution to $\mathscr{I}$.
\end{proof}
\subsection{The Round-or-Cut Approach}\label{subsecRound}\label{sec:rndandcut}
If the guess ${\widehat{\opt}} = 1$ for $\mathscr{I} = (F,C,d,m,\mathscr{F})$ is at least ${\mathsf{opt}}(\mathscr{I})$, then the following polytope must be non-empty.
To see this, let $S^* \in \mathscr{F}$ be an optimal solution to $\mathscr{I}$ and set $z_{S^*} := 1$ and $z_S := 0$ for all $S \in \mathscr{F} \backslash \{S^*\}$.
\begin{alignat}{4}
\mathscr{P}^\cI_\cov = \{(\mathsf{cov}(v): v\in C) :
&& \sum\limits_{v\in C} \mathsf{cov}(v) & \geq & ~~m \tag{$\mathscr{P}^\cI_\cov$.1} \label{eq:P1} \\
\forall v\in C, && ~~\mathsf{cov}(v) - \sum\limits_{\substack{S \in \mathscr{F}: d(v,S) \leq 1}} z_S &=& ~~0 \tag{$\mathscr{P}^\cI_\cov$.2} \label{eq:P2} \\
&& \sum\limits_{S \in \mathscr{F}} z_S & = & ~~ 1 \tag{$\mathscr{P}^\cI_\cov$.3} \label{eq:P3} \\
\forall S\in \mathscr{F}, && z_S &\geq &0\} \notag \tag{$\mathscr{P}^\cI_\cov$.4} \label{eq:P4}
\end{alignat}
Even though $\mathscr{P}^\cI_\cov$ has exponentially many auxiliary variables ($z_S$ for all $S\in \mathscr{F}$), its dimension is still $|C|$. The following lemma gives a family of valid inequalities for $\mathscr{P}^\cI_\cov$ via Farkas' lemma.
\begin{lemma}\label{prehypLma}
Let $\lambda(v)\in \mathbb{R}$ for every $v\in C$ be such that
\begin{equation}\label{prop1}
\sum\limits_{ \substack{v \in C:\\ d(v,S) \leq 1}} \lambda(v) \leq m \ \ \ \forall S \in \mathscr{F} \tag{V1}
\end{equation}
Then any $\mathsf{cov}\in \mathscr{P}^\cI_\cov$ satisfies
\begin{equation}\label{prop2}
\sum\limits_{v \in C} \lambda(v)\mathsf{cov}(v) \leq m \tag{V2}
\end{equation}
\end{lemma}
\begin{proof}
Given $\mathsf{cov} \in \mathscr{P}^\cI_\cov$, there exists $\{z_S: S\in \mathscr{F}\}$ such that together they satisfy \eqref{eq:P1}-\eqref{eq:P4}.
\begin{align*}
\sum\limits_{v \in C} \lambda(v)\mathsf{cov}(v) &~~=_{\eqref{eq:P2}} ~~~ \sum\limits_{v \in C} \lambda(v)\sum\limits_{ \substack{S \in \mathscr{F}:\\ d(v,S) \leq 1}} z_S ~~= \sum\limits_{S \in \mathscr{F}} z_S\sum\limits_{ \substack{v \in C:\\ d(v,S) \leq 1}} \lambda(v)\\
&~~\leq_{\eqref{prop1},\eqref{eq:P4}} ~~~ m \sum\limits_{S \in \mathscr{F}} z_S =_{\eqref{eq:P3}} ~~~m
\end{align*}
\end{proof}
\noindent
The next lemma shows that all $\mathsf{cov}$'s in $\mathscr{P}^\cI_\cov$ are valuable.
\begin{lemma}
\label{hypLma}
Suppose an assignment $\{\mathsf{cov}(v) \in \mathbb R_{\geq 0} :v\in C\}$ is not valuable with respect to $\mathscr{I}= (F,C,d,m,\mathscr{F})$.
Then there is a hyperplane separating it from $\mathscr{P}^\cI_\cov$ that can be constructed in polynomial time.
\end{lemma}
\begin{proof}
If $\sum_{v\in C} \mathsf{cov}(v) < m$, this inequality itself is a separating hyperplane and we are done. So we may assume $\sum_{v\in C} \mathsf{cov}(v)\geq m$.
Let $\mathscr{J} = (F,\mathscr{F},\mathscr{P},{\mathsf{val}})$ be the $\mathscr{F}$-PCM instance constructed by Algorithm~\ref{constAlgo} from $\mathscr{I}$ and $\mathsf{cov}$. Fix $S \in \mathscr{F}$ and recall from Definition~\ref{ISdef} that $R(S) = \{v \in \mathsf{Reps}_{\cov} : B_F(v,1) \cap S \neq \emptyset \}$. Pick an arbitrary $T\subseteq S$ for which $\lvert B_F(v,1) \cap T\rvert = 1$ for all $v \in R(S)$. Observe that by down-closedness of $\mathscr{F}$, we have $T\in \mathscr{F}$, which implies $T$ is a feasible solution for $\mathscr{J}$; since $\mathsf{cov}$ is not valuable, ${\mathsf{val}}(T) < m$.
Furthermore, Claim~\ref{valclm} applied to $T$ gives ${\mathsf{val}}(T) = \sum_{v \in R(T)} |{\mathsf{Chld}}(v)|$.
Since $R(S)=R(T)$ and $|{\mathsf{Chld}}(v)|$ is integer-valued, we get:
\begin{equation}\label{lowp}
\sum_{v\in R(S)} \lvert {\mathsf{Chld}}(v) \rvert \leq m -1
\end{equation}
Let $\alpha = \frac{m}{m - 0.5} > 1$. Define $\lambda(v)$ for $v \in C$ as:
\[\lambda(v) = \begin{cases}
\alpha \lvert{\mathsf{Chld}}(v)\rvert & v \in \mathsf{Reps}_{\cov}\\
0 & \textrm{otherwise}
\end{cases} \]
Now observe that for any $S\in \mathscr{F}$:
\begin{equation*}
\sum\limits_{ \substack{v \in C: d(v,S) \leq 1}} \lambda(v) = \sum\limits_{ \substack{v \in \mathsf{Reps}_{\cov}:d(v,S) \leq 1}} \alpha \lvert{\mathsf{Chld}}(v)\rvert = \alpha \sum\limits_{v \in R(S)} \lvert{\mathsf{Chld}}(v)\rvert \leq \alpha(m-1) < m
\end{equation*}
where the penultimate inequality uses~\eqref{lowp}. That is, the $\lambda(v)$'s satisfy~\eqref{prop1}.
Now we show that~\eqref{prop2} is violated; hence it can be used to separate $\mathsf{cov}$ from $\mathscr{P}^\cI_\cov$.
\begin{align}
\sum\limits_{v \in C} \lambda(v)\mathsf{cov}(v) & = & \alpha\sum\limits_{v \in \mathsf{Reps}_{\cov}} \lvert{\mathsf{Chld}}(v)\rvert\mathsf{cov}(v) & ~~= & \alpha\sum\limits_{v \in \mathsf{Reps}_{\cov}}\sum\limits_{u \in {\mathsf{Chld}}(v)}\mathsf{cov}(v) \notag \\
& \geq_{\textrm{Fact}~\ref{greedyrule}} & \alpha\sum\limits_{v \in \mathsf{Reps}_{\cov}}\sum\limits_{u \in {\mathsf{Chld}}(v)}\mathsf{cov}(u) & ~~=_{\textrm{Fact}~\ref{partfact}} & \alpha\sum\limits_{v \in C}\mathsf{cov}(v) \geq \alpha m > m \notag
\end{align}
\end{proof}
\begin{proof}[{\bf Proof of Theorem~\ref{mainthrm}}]
Given the guess ${\widehat{\opt}}$ which is scaled to $1$, we use the ellipsoid algorithm to check if $\mathscr{P}^\cI_\cov$ is empty or not.
Whenever ellipsoid asks if a given $\mathsf{cov}$ is in $\mathscr{P}^\cI_\cov$ or not, run Algorithm~\ref{constAlgo} for this given $\mathsf{cov}$ to construct the corresponding $\mathscr{F}$-PCM instance $\mathscr{J}$ and use algorithm $\mathcal{A}$, promised in the statement of Theorem~\ref{mainthrm}, to solve it.
If ${\mathsf{opt}}(\mathscr{J}) \geq m$, then Lemma~\ref{lem:summary} implies that we have a $3$-approximate solution.
Otherwise, $\mathsf{cov}$ is not valuable, and we can use Lemma~\ref{hypLma} to find a separating hyperplane.
In polynomial time, either we get a $\mathsf{cov}\in \mathscr{P}^\cI_\cov$ which by Lemma~\ref{hypLma} has to be valuable, or we prove $\mathscr{P}^\cI_\cov$ is empty and we modify our ${\widehat{\opt}}$ guess. For the correct guess, the latter case won't occur and we get a $3$-approximate solution.
\end{proof}
\section{Applications and Extensions}\label{secApp}
In this section we elaborate on the applications and extensions stated in the Introduction.
We begin by looking at specific instances of $\mathscr{F}$ which have been studied in the literature, and some which have not. \medskip
\noindent
{\bf Single and Multiple Knapsack Constraints.} We look at
\[\mathscr{F}_{\mathsf{KN}} := \{S\subseteq F: \text{ for $i=1,\ldots, d$}, ~\sum_{v \in S} w_i(v) \leq k_i\}\]
where there are $d$ weight functions over $F$ and $k_i$'s are upper bounds on these weights.
Of special interest is the case $d=1$ in which we get the robust knapsack supplier problem also called the weighted $k$-supplier problem with outliers.
The $\calF\textrm{-PCM}$ problem for the above $\mathscr{F}_{\mathsf{KN}}$ has the following complexity. When $d=1$, the problem can be solved in polynomial time. Indeed, given a sub-partition $\mathscr{P}$, since ${\mathsf{val}}(u) = {\mathsf{val}}(v)$ for all $u,v$ in the same part, any solution which picks a facility from a part $A \in \mathscr{P}$ may as well pick the one with the smallest weight in that part. Thus, the problem boils down to the usual knapsack problem in which we have $|\mathscr{P}|$ items, where the item corresponding to part $A\in \mathscr{P}$ has weight $\min_{v\in A} w(v)$ and value equal to the common value ${\mathsf{val}}(v)$, $v\in A$.
Since the values are poly-bounded, this problem is solvable in polynomial time. Thus, we get the following corollary to Theorem~\ref{thm:1} resolving the open question raised in~\cite{CLLW13} and~\cite{HPST17}.
\begin{theorem}\label{thm:rkn}
There is a polynomial time $3$-approximation to the robust knapsack center problem.
\end{theorem}
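The single-knapsack case can be sketched as follows; this is an illustrative solver (names and data are ours, not the paper's) that collapses each part to its minimum-weight facility and runs the classic min-weight-per-value knapsack DP, which is polynomial because the total value is bounded by $|C|$.

```python
def pcm_single_knapsack(parts, w, val, k):
    """parts: list of facility lists; w, val: per-facility dicts; k: budget.
    Returns the maximum total value picking at most one facility per part."""
    items = []
    for A in parts:
        f = min(A, key=lambda x: w[x])   # cheapest facility in the part
        items.append((w[f], val[f]))     # all of A shares the same val
    V = sum(v for _, v in items)
    INF = float('inf')
    dp = [0] + [INF] * V                 # dp[t] = min weight to reach value t
    for wt, v in items:
        for t in range(V, v - 1, -1):    # 0/1 knapsack: iterate downwards
            if dp[t - v] + wt < dp[t]:
                dp[t] = dp[t - v] + wt
    return max(t for t in range(V + 1) if dp[t] <= k)
```

The DP over values (rather than weights) is what makes poly-bounded ${\mathsf{val}}$ essential here; the weights may be arbitrary.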
When $d > 1$, the $\calF\textrm{-PCM}$ problem is NP-hard even when ${\mathsf{val}}$ is poly-bounded. However, if the $w_i$'s are also poly-bounded (in fact, one of them may be general), then the $\calF\textrm{-PCM}$ problem can be solved in polynomial time using dynamic programming. This problem was in fact studied in~\cite{HS86} (the conference version), where it is called the {\em suitcase} problem.
Thus, we get the following corollary to Theorem~\ref{thm:1} extending the result in~\cite{HS86}.
\begin{theorem}\label{thm:rmkn}
There is a polynomial time $3$-approximation to the robust multiple-knapsack center problem if the number of weights is a constant and all but possibly one weight function are poly-bounded.
\end{theorem}
\noindent
{\bf Single and Multiple Matroid Constraints.}
We look at
\[\mathscr{F}_{\mathsf{Mat}} := \{S\subseteq F: S\in \mathscr{I}_{M_i}, ~\forall i=1,\ldots,d\}\]
where $M_1,\ldots,M_d$ are matroids on ground set $F$ and $\mathscr{I}_{M_i}$ denotes the family of independent sets of $M_i$.
When $d=1$, we get the robust matroid center problem. The $\calF\textrm{-PCM}$ problem reduces to finding a maximum-value common independent set of $M$ and the partition matroid induced by $\mathscr{P}$. This is solvable in polynomial time even when ${\mathsf{val}}$ is general and not poly-bounded, and even when $M$ is given via an independence oracle.
Thus, we get the following corollary to
Theorem~\ref{thm:1} obtaining the result in~\cite{HPST17}.
\begin{theorem}[{Theorem 1.1 in~\cite{HPST17}}]\label{thm:rmc}
There is a polynomial time $3$-approximation to the robust matroid center problem even when the matroid is described as an independent set oracle.
\end{theorem}
When there are $d>1$ matroids, the $\calF\textrm{-PCM}$ problem is NP-hard. Therefore, Theorem~\ref{thm:2} implies, for instance, that there is {\em no} uni-criteria approximation for the robust matroid-intersection center problem. \medskip
\noindent
{\bf Single Knapsack and Single Matroid Constraint.}
We look at
\[\mathscr{F}_{\mathsf{KN}\cap\mathsf{Mat}} := \{S\subseteq F: \sum_{v\in S} w(v) \leq k, ~~ S\in \mathscr{I}_{M}\}\]
which is the intersection of a single matroid and a single knapsack constraint. To the best of our knowledge, the resulting {{Robust $\mathscr{F}$-Supplier}} problem has not been studied before. One natural instantiation is when $F$ is a collection of high-dimensional vectors with weights and the constraint on the centers is to pick a linearly independent set with total weight at most $k$.
The corresponding $\calF\textrm{-PCM}$ problem asks us, given a sub-partition $\mathscr{P}$ and poly-bounded values ${\mathsf{val}}$, to find a set $S\in \mathscr{I}_\mathcal{M} \cap \mathscr{I}_{\mathscr{P}}$ of maximum value such that $w(S)\leq k$, where $\mathscr{I}_{\mathscr{P}}$ is the partition matroid induced by $\mathscr{P}$. We don't know if this problem can be solved in polynomial time, even when $\mathcal{M}$ is another partition matroid.
\def\mathbf{w}{\mathbf{w}}
\def\mathbf{W}{\mathbf{W}}
However, the above problem is related to the {\em exact matroid intersection} problem. In this problem, we are given two matroids $\mathcal{M}_1$ and $\mathcal{M}_2$, a weight function $\mathbf{w}$ on the ground elements, and a budget $\mathbf{W}$. The objective is to decide whether or not there is a set $S \in \mathscr{I}_{\mathcal{M}_1} \cap \mathscr{I}_{\mathcal{M}_2}$ such that $\mathbf{w}(S) = \mathbf{W}$. Understanding the complexity of this problem is a long-standing challenge~\cite{Cam92,MVV87,PY82}. When the matroids are representable over the same field, \cite{Cam92} gives a randomized pseudopolynomial time algorithm for the problem. The following claim shows the relation between $\calF\textrm{-PCM}$ and the exact matroid intersection problem; this claim is essentially present in~\cite{BerBFS08}.
\begin{claim}\label{clm:ipco}
Given an algorithm for the exact matroid intersection problem, one can solve the $\calF\textrm{-PCM}$ problem in polynomial time when the weights $w$ are poly-bounded.
\end{claim}
\begin{proof}
We guess $\mathsf{V}^*$ to be the optimum value of the $\calF\textrm{-PCM}$ problem; since ${\mathsf{val}}$ is poly-bounded, there are only polynomially many guesses.
We also guess $k^* \leq k$ to be the total $w$-weight of the optimum set. Again, since $w$ is poly-bounded, there are polynomially many guesses.
We define a weight function $\mathbf{w}$ as follows. Let $\phi = w(F) + 1$ be a large enough upper-bound on the possible values of $w(S), S\subseteq F$. Define $\mathbf{w}(f) = \phi {\mathsf{val}}(f) + w(f)$ for all $f\in F$ and $\mathbf{W} = \phi \mathsf{V}^* + k^*$.
We claim that there is a set $S$ in $\mathscr{I}_\mathcal{M} \cap \mathscr{I}_{\mathscr{P}}$ with $\mathbf{w}(S) = \mathbf{W}$ iff ${\mathsf{val}}(S) = \mathsf{V}^*$ and $w(S) = k^*$.
The if-direction is trivial.
On the other hand if $\mathbf{w}(S) = \mathbf{W}$ we get
$k^* = \mathbf{w}(S) - \phi \mathsf{V}^* = \phi {\mathsf{val}}(S) + w(S) - \phi \mathsf{V}^*$. Now if ${\mathsf{val}}(S) \neq \mathsf{V}^*$, then since ${\mathsf{val}}$ is integer-valued and since $\phi > w(S)$ for any $S\subseteq F$,
the RHS is either negative or greater than $w(F)$. In either case it cannot equal $k^*$. Therefore, we must have ${\mathsf{val}}(S) = \mathsf{V}^*$, which implies $w(S) = k^*$.
\end{proof}
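The scalarization $\mathbf{w}(f) = \phi\,{\mathsf{val}}(f) + w(f)$ in the proof works because $\phi$ exceeds every achievable knapsack weight, so the pair $({\mathsf{val}}(S), w(S))$ can be recovered from $\mathbf{w}(S)$ by division with remainder. A minimal numerical sketch (the ground set and its values are hypothetical):

```python
from itertools import combinations

# Hypothetical ground set: element -> (val, w), both poly-bounded non-negative integers.
elements = {"a": (3, 2), "b": (1, 4), "c": (2, 1), "d": (5, 3)}

phi = sum(w for _, w in elements.values()) + 1  # phi = w(F) + 1 > w(S) for every S

def scalarized(S):
    """bw(S) = phi * val(S) + w(S), the combined weight from the proof."""
    val = sum(elements[e][0] for e in S)
    w = sum(elements[e][1] for e in S)
    return phi * val + w, val, w

# Since 0 <= w(S) < phi, divmod recovers (val(S), w(S)) from bw(S); hence
# bw(S) = phi*V + k holds iff val(S) = V and w(S) = k, for every subset S.
for r in range(len(elements) + 1):
    for S in combinations(elements, r):
        bw, val, w = scalarized(S)
        assert divmod(bw, phi) == (val, w)
```

This is exactly why a single exact-weight query with budget $\mathbf{W} = \phi \mathsf{V}^* + k^*$ suffices.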
Armed with the non-trivial result about exact matroid intersection from~\cite{Cam92}, we get the following.
\begin{theorem}\label{thm:knandm}
Given a linear matroid $\mathcal{M}$ and a poly-bounded weight function, there is a
randomized polynomial time $3$-approximation to the robust knapsack-and-matroid center problem.
\end{theorem}
\subsection{The Case of No Outliers}\label{subsecapx}
The $\mathscr{F}$-supplier problem, that is, the case $m = |C|$, may be of special interest. In this case the problem is easier, and its complexity is determined by the complexity of the following decision problem.
\begin{definition}[{$\calF\textrm{-PCF}$} problem]
The input is $\mathscr{J} = (F,\mathscr{F},\mathscr{P})$ where $F$ is a finite set, $\mathscr{F} \subseteq 2^F$ is a down-closed family and $\mathscr{P} \subseteq 2^F$ is an arbitrary sub-partition of $F$. The objective is to decide whether there {\em exists} a set $S\in \mathscr{F}$ such that
$\lvert S \cap A \rvert = 1, ~~\forall A \in \mathscr{P} $.
\end{definition}
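Since $\mathscr{F}$ is down-closed, any feasible $S$ can be shrunk to one consisting of exactly one representative per part, so on tiny instances $\calF\textrm{-PCF}$ can be decided by enumerating systems of representatives. A brute-force sketch (the membership oracle below, a single knapsack family, is just an illustrative choice):

```python
from itertools import product

def pcf_exists(parts, in_family):
    """Decide F-PCF by brute force: try every choice of one
    representative per part of the sub-partition P. Down-closedness
    of F means no other candidate sets need to be examined."""
    return any(in_family(frozenset(reps)) for reps in product(*parts))

# Illustrative down-closed family: subsets of total weight at most 5.
weight = {1: 4, 2: 1, 3: 3, 4: 2}
in_family = lambda S: sum(weight[e] for e in S) <= 5

print(pcf_exists([{1, 2}, {3, 4}], in_family))  # True (e.g. S = {2, 3} has weight 4)
```

This enumeration is of course exponential in $|\mathscr{P}|$; the point of the theory above is to identify families $\mathscr{F}$ for which the decision can be made in polynomial time.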
\begin{theorem}\label{thm:1b}
If the $\calF\textrm{-PCF}$ problem can be solved efficiently for any sub-partition $\mathscr{P}$, then
the $\mathscr{F}$-supplier problem has a polynomial time $3$-approximation. Otherwise, there is no non-trivial approximation possible
for the $\mathscr{F}$-supplier problem.
\end{theorem}
\begin{proof}[Sketch]
Run Algorithm~\ref{constAlgo} with an arbitrary assignment $\mathsf{cov}$ (and ignore the ${\mathsf{val}}$'s). Let $\mathscr{J} = (F,\mathscr{F},\mathscr{P})$ be the resulting
$\calF\textrm{-PCF}$ instance. If the guess ${\widehat{\opt}}=1$ is correct, then note that the optimum solution $S^*$ must satisfy $S^*\cap A \neq \emptyset$
for all $A\in \mathscr{P}$; otherwise, the corresponding $v\in \mathsf{Reps}_{\cov}$ cannot be served. Conversely, any $S$ satisfying $S\cap A\neq\emptyset$ for all $A\in \mathscr{P}$ yields a $3$-approximate solution.
Therefore, an algorithm for $\calF\textrm{-PCF}$ can either give a $3$-approximate solution or prove the guess ${\widehat{\opt}}$ is too low.
\end{proof}
Theorem~\ref{thm:1} and Theorem~\ref{thm:1b} raise the question: is there any set of constraints for which the problem without outliers is significantly easier than the problem with outliers? We do not know the answer to this question, although we conjecture that it is yes.
For this, it suffices to design a set system for which $\calF\textrm{-PCF}$ is easy but $\calF\textrm{-PCM}$ is hard (perhaps NP-hard). To see the difference between these problems, consider the $\mathscr{F}_{\mathsf{KN}\cap\mathsf{Mat}}$ family described in the previous subsection. We do not know whether $\calF\textrm{-PCM}$ is easy or hard, but $\calF\textrm{-PCF}$ is easy: it amounts to minimizing $w(S)$ over $S\in \mathscr{I}_\mathcal{M} \cap \mathcal{B}_\mathscr{P}$, where $\mathcal{B}_\mathscr{P}$ is the base polytope induced by $\mathscr{P}$. This can be done in polynomial time, and therefore we get the following corollary.
\begin{theorem}\label{thm:knandmcenter}
There is a polynomial time $3$-approximation to the knapsack-and-matroid center problem.
\end{theorem}
\subsection{Handling Approximation}
The technique used to prove Theorem~\ref{thm:1} is robust enough to translate approximation algorithms for the $\calF\textrm{-PCM}$ problem to
{\em bi-criteria} approximation algorithms for the {{Robust $\mathscr{F}$-Supplier}} problem. There are two notions of approximation algorithms for the $\calF\textrm{-PCM}$ problem
and they lead to two notions of bi-criteria approximation.
The first is the standard notion: a $\rho$-approximation algorithm (for $\rho \le 1$) takes an instance $\mathscr{J}$ of $\calF\textrm{-PCM}$ and returns a solution
$S\in \mathscr{F}$ of value ${\mathsf{val}}(S) \ge \rho \cdot {\mathsf{opt}}(\mathscr{J})$. The corresponding bi-criteria approximation notion for the {{Robust $\mathscr{F}$-Supplier}} problem is the following: an $(\alpha,\beta)$-approximation algorithm for an instance $\mathscr{I}$ of {{Robust $\mathscr{F}$-Supplier}} returns a solution which opens centers at $S\in \mathscr{F}$
and the distance of at least $\beta m$ customers to $S$ is $\leq \alpha \cdot {\mathsf{opt}}(\mathscr{I})$. The proof of Theorem~\ref{thm:1} in fact implies the following.
\begin{theorem}
\label{thm:1c}
Let $\mathcal{A}$ be a polynomial time $\rho$-approximate algorithm for the $\calF\textrm{-PCM}$ problem. Then there is a
polynomial time $(3,\rho)$-bi-criteria approximation algorithm for the {{Robust $\mathscr{F}$-Supplier}} problem.
\end{theorem}
The second notion of approximation for the $\calF\textrm{-PCM}$ problem is one which satisfies the constraints approximately. This notion is more problem-dependent and makes sense only if there is a notion of an approximate relaxation $\mathscr{F}^R$ of the set $\mathscr{F}$. For example, a $(1+\varepsilon)$-relaxation of $\mathscr{F}_{\mathsf{KN}}$ could be the collection of subsets $S$ with
$w_i(S) \leq (1+\varepsilon)\cdot k_i$ for all $i$. A $\rho$-violating algorithm for an instance $\mathscr{J}$ of $\calF\textrm{-PCM}$ would then return a set $S$ with ${\mathsf{val}}(S)\geq {\mathsf{opt}}(\mathscr{J})$ but $S\in \mathscr{F}^R$, where $\mathscr{F}^R$ is a $\rho$-relaxation of $\mathscr{F}$.
This defines a different bi-criteria approximation notion for the {{Robust $\mathscr{F}$-Supplier}} problem. An $\alpha$-approximate $\beta$-violating algorithm for the {{Robust $\mathscr{F}$-Supplier}} problem takes an instance $\mathscr{I}$ and returns a solution $S\in \mathscr{F}^R$ which is a $\beta$-relaxation for $\mathscr{F}$ such that
at least $m$ customers in $C$ are at distance at most $\alpha\cdot {\mathsf{opt}}(\mathscr{I})$ to $S$.
\begin{theorem}
\label{thm:1d}
Let $\mathcal{A}$ be a polynomial time $\rho$-violating algorithm for the $\calF\textrm{-PCM}$ problem. Then there is a
polynomial time $3$-approximate-$\rho$-violating algorithm for the {{Robust $\mathscr{F}$-Supplier}} problem.
\end{theorem}
When $\mathscr{F}$ is described by a constant number $d$ of knapsack constraints (with general weights) and a single matroid constraint, for any constant $\varepsilon>0$,
Chekuri~{\em et al.}~give a $(1+\varepsilon)$-approximation algorithm for $\calF\textrm{-PCM}$ in~\cite{CVZ11}.
Without the matroid constraint, Grandoni~{\em et al.}~give a $(1+\varepsilon)$-violating algorithm in~\cite{GRSZ14}.
Together, these give the following corollary; the latter part recovers a result from~\cite{CLLW13}.
\begin{theorem}\label{thm:multiknapsackmat}
Fix any constant $\varepsilon > 0$. There is a polynomial time $(3,(1+\varepsilon))$-bi-criteria approximation algorithm for the robust supplier problem with
constant many knapsack constraints and one matroid constraint. There is a polynomial time $3$-approximate $(1+\varepsilon)$-violating algorithm for the robust supplier problem with constant many knapsack constraints.
\end{theorem}
\bibliographystyle{alpha}
\section{Introduction}\label{sec:introduction}
\noindent
This paper considers systems of agents continuously evolving on $\mathbb{S}^{d-1}$, where $d \geq 2$. The interactions between the agents are changing as a function of time.
For such systems we are analyzing a large class of distributed synchronization/consensus control laws. The analysis tool is a lifting method, where an equivalent consensus protocol is analyzed in the ambient space that embeds the sphere. In comparison to projection methods that have been used in this context---e.g., the gnomonic projection---the proposed method is not locally but globally defined on the unit sphere. The control action is performed in the tangent plane. Only relative information between neighboring agents is used in the control laws. Under the assumption that the time-varying graph is uniformly quasi-strongly connected, we show that the consensus manifold is globally uniformly asymptotically stable relative to any closed ball on the sphere contained in an open hemisphere.
Synchronization on the circle, i.e., $d = 2$, is closely related to synchronization of oscillators~\citep{dorfler2014synchronization} and is equivalent to synchronization on $\mathsf{SO}(2)$, with applications such as flocking in nature and alignment of multi-robot systems. For the two-dimensional sphere, i.e., $d = 3$, there are also several applications such as formation flying and flocking of birds; consider, for example, a multi-robot system in 3D where the relative directions between the robots are available and the goal is to align them. For higher-dimensional spheres there are related problems such as distributed eigenvector computation, and further concrete applications may arise in the future.
The control laws at hand---and slight variations or restrictions on the graph topologies, switchings of the graphs, dimensions of the sphere, and the nonlinear weights in the control laws etc.---have been studied from various perspectives~\citep{scardovi2007synchronization,Sarlette2009,Olfati-Saber2006,Li2014,li2015collective}. There has recently been new developments~\citep{pereira2015,pereira2016,markdahl2016towards,markdahl2016}.
In~\cite{markdahl2017almost}, almost global consensus is shown by characterizing all equilibrium points when the graph is symmetric and constant (time-invariant). It is shown that the equilibria not in the consensus manifold are unstable and the equilibria in the consensus manifold are stable. A similar technique is used in \cite{Tron2012} to show that a consensus protocol on $\mathsf{SO}(3)$ is almost globally asymptotically stable. Now, the above-mentioned results about almost global convergence come at a price: static undirected graph topologies are assumed, as well as more restrictive classes of weights in the control protocols. Furthermore, compared to \cite{markdahl2017almost}, the right-hand sides of the system dynamics are not necessarily intrinsic gradients and the linearization matrices at equilibria are not necessarily symmetric. Hence, we cannot use the result due to~\cite{lojasiewicz1982trajectoires} about point convergence for gradient flows. This inspired us to take a closer look at methods that transform the consensus problem on the unit sphere (or a subset thereof) into an equivalent consensus problem in $\mathbb{R}^{d}$.
Before we address the method---referred to as a lifting method---we briefly make some connections to the related problem of consensus on $\mathsf{SO}(3)$.
The problem of consensus on $\mathsf{SO}(3)$ has been extensively studied~\citep{Sarlette2009a,Ren2010,Sarlette2010,Tron2013,Tron2014,Deng2016,johan02}.
There is a connection between that problem and the problem of consensus on $\mathbb{S}^3$
when the unit quaternions are used to represent the rotations. For those, the gnomonic projection can be used to show consensus on the unit-quaternion sphere~\citep{thunberg2014distributed,thunbergaut}. In another line of research, several methods have been introduced where control laws based on only relative information have been augmented with additional auxiliary (or estimation) variables, which are communicated between neighboring agents. By doing so, results about almost global convergence to the consensus manifold are achieved~\citep{AS-RS:09,thunberg2017dynamic}. The latter of these two publications provides a control protocol for Stiefel manifolds, with the unit sphere and $\mathsf{SO}(d)$ as extreme cases. A similar technique had previously been used for the sphere~\citep{scardovi2007synchronization}. The idea of introducing auxiliary variables also extends to the related distributed optimization problem in~\cite{thunberg2017distributed}. In contrast to the mentioned works, in this paper we are not assuming additional communication between the agents by means of auxiliary variables. Instead only relative information is used in the protocols. In a practical setting (considering the case $d = 3$), such information can be measured by for example a vision sensor and requires no explicit communication between the agents.
In the proposed lifting method, we lift the states from the $(d-1)$-dimensional sphere into $\mathbb{R}^{d}$. The non-negative weights in the consensus protocol for the states in the lifted space are nonlinear functions. Each agent moves in a direction that is a weighted combination of the directions to its neighbors. The weights contain rational functions of the norms of the states of the agents. Since these rational functions are not well-defined at the origin, fundamental questions arise about existence, uniqueness, and invariance of sets. We answer these questions in the affirmative.
The hope is that this lifting method will serve as a stepping-stone to future analysis on (almost) global convergence to the consensus manifold on the unit sphere. Compared to the approach in \cite{markdahl2017almost} where all the ``bad'' equlibria on $\mathbb{S}^{d-1}$ were characterized, we only need to characterize one point, which is the origin in the ``lifted space''. If we were to show that this point has a region of attraction that is of measure zero, we would have equivalently shown the desired result about almost global convergence on the unit sphere (assuming $d \geq 3$). However, the non-differentiability of this point remains an additional challenge.
\section{Preliminaries}\label{sec:preliminaries}
\noindent
We begin this section with some \textit{set-definitions}.
The $(d-1)$-dimensional unit sphere is
$$\mathbb{S}^{d-1} = \{y \in \mathbb{R}^{d}: \|y\|_2 = 1\}.$$
The special orthogonal group in dimension $d$ is
$$\mathsf{SO}(d) = \{Q \in \mathbb{R}^{d \times d} : Q^T = Q^{-1}, \text{det}(Q) = 1\}.$$
The set of skew symmetric matrices in dimension $d$ is
$$\mathsf{so}(d) = \{\Omega \in \mathbb{R}^{d \times d} : \Omega^T = - \Omega \}.$$
The set $\mathcal{H} \subset \mathbb{S}^{d-1}$ is an open \textit{hemisphere} if there is $v \in \mathbb{S}^{d-1}$ such that $\mathcal{H} = \{w \in \mathbb{S}^{d-1}: w^Tv > 0\}$.
We consider a multi-agent system with $n$ agents.
Each agent has a corresponding state $x_i(t) \in \mathbb{S}^{d-1}$ for
$t \in [0, \infty)$. The initial state of each agent $i$ at time $0$ is
$x_{i0} \in \mathbb{S}^{d-1}$. Another way to represent the states of the agents is to use rotation matrices. Let $R_i(t) \in \mathsf{SO}(d)$ satisfy $R_i(t)p = x_i(t)$ for all $i$ and $t \in [0, \infty)$, where $p = [1, 0, \ldots, 0]^T$ is the \textit{north pole}; we also define $-p$ as the \textit{south pole}. Let $R_{i0}p = x_{i0}$ for all $i$, where $R_{i0}$ is the initial $R_i$-matrix at time $0$. The $R_i$-matrices can be interpreted as transformations from body coordinate frames---denoted by $\mathcal{F}_i$'s---of the agents to a world coordinate frame $\mathcal{F}_W$. They are transforming the unit vector $p$ in the body frames to the corresponding unit vector (or point on the unit sphere) in the world coordinate frame.
The $R_i$'s
and their dynamics are not uniquely defined, but this is not of importance for the analysis. We choose to define the dynamics of the $R_i$'s according to \eqref{eqq:3} below.
The dynamics of the $x_i$-vectors are given by
\begin{equation}
\label{eqq:1}
\dot{x}_i = (I - x_ix_i^T)R_i[0, v_i^T]^T = R_i[0,v_i^T]^T,
\end{equation}
where $v_i(t)\in \mathbb{R}^{d-1}$ for all $t$. The $v_i$-vectors are the controllers for the agents and those are defined in the body coordinate frames, i.e., the $\mathcal{F}_i$'s. For the $R_i$-matrices the dynamics is
\begin{align}
\label{eqq:3}
\dot{R}_i & = R_i\begin{bmatrix}
0 & -v_i^T \\
v_i & 0
\end{bmatrix}.
\end{align}
The matrix on the right-hand side of $R_i$
in \eqref{eqq:3} is an element of $\mathsf{so}(d)$.
The control is performed
in the tangent space of the sphere, which means that there are $d-1$
degrees of freedom for the control. This is the reason why the $v_i$-vectors are $(d-1)$-dimensional. Before we proceed, we provide some additional explanation for the expression on the right-hand side of \eqref{eqq:3}. According to its definition, the first column of $R_i$ is equal to $x_i$, and by multiplying $\dot x_i$ by $R_i^T$ from the left we obtain---due to \eqref{eqq:1}---the following expression
$$R_i^T\dot x_i = [0, v_i^T]^T.$$
This means that $$R_i^T\dot R_i = \begin{bmatrix}
0 & \star \\
v_i & \star
\end{bmatrix},$$
where the $\star$-parts are left to be chosen. We know that the matrix in the right-hand side above needs to be skew symmetric, since $R_i$ is a rotation matrix. We also know that the first column of it must be equal to $[0, v_i^T]^T$. The matrix of minimum Euclidean norm that fulfills these two requirements is equal to $$\begin{bmatrix}
0 & -v_i^T \\
v_i & 0
\end{bmatrix},$$
i.e., the one we chose in the right-hand side of \eqref{eqq:3}.
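As a quick numerical check of this construction, the sketch below (NumPy, with an arbitrary $v_i$) builds the completed matrix and verifies that it is skew symmetric and that its first column is $[0, v_i^T]^T$:

```python
import numpy as np

def min_norm_completion(v):
    """The skew-symmetric matrix of minimum Euclidean norm whose
    first column is [0, v^T]^T, as chosen on the right-hand side of (eqq:3)."""
    d = len(v) + 1
    Omega = np.zeros((d, d))
    Omega[1:, 0] = v
    Omega[0, 1:] = -v
    return Omega

v = np.array([0.3, -1.2, 0.5])                    # arbitrary control input, d = 4
Omega = min_norm_completion(v)
assert np.allclose(Omega.T, -Omega)               # element of so(d)
assert np.allclose(Omega[:, 0], np.r_[0.0, v])    # first column is [0, v^T]^T
```

Any other admissible completion would add nonzero entries in the lower-right $(d-1)\times(d-1)$ block, which can only increase the Euclidean norm.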
We will study a class of distributed synchronization/consensus control laws on the unit sphere, where the agents are moving in
directions comprising conical combinations of directions to neighbors. In this protocol only local and relative information is used. Before we
provide these control laws we introduce directed graphs and time-varying directed graphs.
A directed graph $\mathcal{G}$ is a pair $(\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{1, \ldots, n\}$ is the node-set and $\mathcal{E} \subset \mathcal{V} \times \mathcal{V}$ is the edge-set. Each node in the node set corresponds to a unique agent.
The set $\mathcal{N}_i \subseteq \mathcal{V}$ is the neighbor set or neighborhood
of agent $i$, where $j \in \mathcal{N}_i$
if and only if $(i,j) \in \mathcal{E}$. We continue with the
following definitions addressing connectivity of directed graphs.
In a directed graph $\mathcal{G}$, a \textit{directed path} is a sequence of distinct nodes, such that any consecutive pair of nodes in the sequence comprises an edge in the graph.
We say that $i$ is \textit{connected to} $j$ if there is a directed path from $i$ to $j$.
We say that the graph is \textit{quasi-strongly connected} if there is at least one node that is a center or a root node in the sense that all the other nodes are connected to it. We say that the graph is \textit{strongly connected} if for all $(i,j) \in \mathcal{V} \times \mathcal{V}$ it holds that $i$ is connected to $j$.
Now we define \textit{time-varying graphs}. We define those by first defining \textit{time-varying neighborhoods}. The time-varying neighborhood $\mathcal{N}_i(t)$ of agent $i$
is a piece-wise constant right-continuous set-valued function that maps from $\mathbb{R}$ to $2^{\mathcal{V}}$. We assume that there is $\tau_D >0$ such that
$\inf_{k}(\tau_{i({k+1})} - \tau_{ik}) > \tau_D$ for all $i$, where $\{\tau_{ik}\}_{k = -\infty}^{\infty}$ is the set of time points of discontinuity of $\mathcal{N}_i(t)$. The constant $\tau_D$ serves as a lower bound on the dwell time between any two consecutive switches of the topology.
We define the \textit{time-varying graph} $\mathcal{G}(t) = (\mathcal{V},\mathcal{E}(t))$ as
$$\mathcal{G}(t) = (\mathcal{V},\mathcal{E}(t)) = (\mathcal{V},\bigcup_i\bigcup_{j \in \mathcal{N}_i(t)}\{(i,j)\}).$$
Furthermore, the \textit{union graph} of $\mathcal{G}(t)$ during the time
interval $[t_1,t_2)$ is defined by
\begin{equation*}
\mathcal{G}([t_1, t_2))
= \textstyle\bigcup_{t\in[t_1, t_2)} \mathcal{G}(t)
= (\mathcal{V},\textstyle\bigcup\nolimits_{t\in[t_1, t_2)}\mathcal{E}(t)),
\end{equation*}
where $t_1 < t_2 \leq \infty$.
We say that the graph $\mathcal{G}(t)$ is \textit{uniformly
(quasi-) strongly connected} if there exists a constant $T >0$ such that the
union graph $\mathcal{G}([t, t + T))$ is (quasi-) strongly connected for all $t$.
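On finite examples these connectivity notions are easy to check. A sketch (assuming the edge convention above, $(i,j)\in\mathcal{E}$ when $j\in\mathcal{N}_i$): the union graph over a window has the union of the edge sets, and it is quasi-strongly connected iff some root $r$ admits a directed path from every node.

```python
def reaches(edges, n, target):
    """Set of nodes with a directed path to `target` (including target itself)."""
    rev = {v: set() for v in range(n)}
    for i, j in edges:
        rev[j].add(i)
    seen, stack = {target}, [target]
    while stack:
        v = stack.pop()
        for u in rev[v] - seen:
            seen.add(u)
            stack.append(u)
    return seen

def quasi_strongly_connected(edges, n):
    # Some root r is reachable from all nodes.
    return any(len(reaches(edges, n, r)) == n for r in range(n))

# Union of two switching topologies on 3 nodes:
E1, E2 = {(0, 1)}, {(2, 1)}
print(quasi_strongly_connected(E1 | E2, 3))  # True: node 1 is a root
```

Note that neither $E_1$ nor $E_2$ alone is quasi-strongly connected here; only their union over the window is, which is exactly the situation the uniform connectivity assumption permits.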
Now we provide the synchronization protocol to be studied. For each agent $i$, the controller $v_i$ is defined by
\begin{align}
\label{eqq:4}
\begin{bmatrix}
0 \\
v_i
\end{bmatrix} & = \begin{bmatrix}
0 & 0 \\
0 & I_{d-1}
\end{bmatrix} \sum_{j \in \mathcal{N}_i(t)}f_{ij}(\|x_{ij} - p\|)x_{ij},
\end{align}
where $x_{ij} = R_i^Tx_j = R_i^TR_jp$, which is $x_j$ represented in the frame $\mathcal{F}_i$. The $x_{ij}$'s are what we refer to as relative information and the control law \eqref{eqq:4} is constructed by only such information.
For each $(i,j)$, it holds that $f_{ij}: \mathbb{R} \rightarrow \mathbb{R}$.
The $f_{ij}$-functions are assumed to be Lipschitz and attain positive values for positive arguments. The $\mathcal{N}_i(t)$'s are neighborhoods of a time-varying directed graph $\mathcal{G}(t)$, whose connectivity is at least uniformly quasi-strong. These control laws will be analyzed in the paper.
The expressions in \eqref{eqq:4} are more easily understood if they are expressed in the world frame $\mathcal{F}_{W}$. We define
\begin{align}
\label{eqq:5}
u_i = (I - x_ix_i^T) \sum_{j \in \mathcal{N}_i(t)}f_{ij}(\|x_j - x_i\|)(x_j - x_i),
\end{align}
for all $i$,
which is $[0, v_i^T]^T$ expressed in the frame $\mathcal{F}_W$. The vector $u_i$ is the sum of the positively weighted directions to the neighbors of agent $i$, projected onto the tangent space at the point $x_i$. Also for analysis purposes, \eqref{eqq:5} is easier to work with than \eqref{eqq:4}. The closed loop system is
\begin{align}
\label{eqq:6}
\dot{x}_i & = (I - x_ix_i^T)\sum_{j \in \mathcal{N}_i(t)}f_{ij}(\|x_j - x_i\|)(x_j - x_i), \\
\nonumber
& = (I - x_ix_i^T)\sum_{j \in \mathcal{N}_i(t)}f_{ij}(\|x_j - x_i\|)x_j,
\end{align}
for all $i$.
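A forward-Euler simulation of \eqref{eqq:6} illustrates the behavior. The constant weights $f_{ij}\equiv 1$, the complete graph, the step size, and the number of steps below are illustrative choices, and the explicit renormalization merely compensates for discretization drift off the sphere:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, dt = 5, 3, 0.02
# initial states in the open hemisphere around p = e_1
X = rng.normal(size=(n, d))
X[:, 0] = np.abs(X[:, 0]) + 0.5
X /= np.linalg.norm(X, axis=1, keepdims=True)

def spread(X):
    return max(np.linalg.norm(xi - xj) for xi in X for xj in X)

s0 = spread(X)
for _ in range(1000):
    U = np.zeros_like(X)
    for i in range(n):
        u = sum(X[j] - X[i] for j in range(n) if j != i)  # f_ij = 1, complete graph
        U[i] = u - X[i] * (X[i] @ u)                      # project onto tangent space at x_i
    X = X + dt * U
    X /= np.linalg.norm(X, axis=1, keepdims=True)         # retract back onto the sphere

final = spread(X)  # pairwise distances shrink toward zero: the agents synchronize
```

The states remain unit vectors throughout and the maximal pairwise distance decreases, consistent with the attractivity of $\mathcal{A}$ established below for initial conditions on an open hemisphere.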
Let $x = [x_1^T, x_2^T, \ldots, x_n^T]^T$ and $x_0 = [x_{10}^T, x_{20}^T, \ldots, x_{n0}^T]^T$.
We define the set
$$\mathcal{A} = \{x: x_i = x_j \text{ for all }i,j\},$$
which is the synchronization/consensus set. Throughout the paper we assume that the closed-loop dynamics of the
system is given by \eqref{eqq:6}.
We study the convergence of $x(t)$ to the consensus set $\mathcal{A}$. When we talk about convergence we refer to the concepts below.
For the system \eqref{eqq:6}, we say that the set $\mathcal{A}$ is \textit{attractive} relative to a forward invariant set $\mathcal{S} \subset (\mathbb{S}^{d-1})^n$ if $$(x_{0} \in \mathcal{S}) \Longrightarrow (\text{dist}(x(t), \mathcal{A}) \rightarrow 0\text{ as } t \rightarrow \infty)$$
where $\text{dist}(v,\mathcal{A}) = \inf_{w \in \mathcal{A}}\|w - v\|$. Furthermore, we say that the set $\mathcal{A}$ is \textit{globally uniformly asymptotically stable} relative to a forward invariant compact set $\mathcal{S} \subset (\mathbb{S}^{d-1})^n$ if
\begin{enumerate}
\item for every $\epsilon > 0$ there is $T(\epsilon) > 0$ such that
$(x_0 \in \mathcal{S}) \Longrightarrow (\text{dist}(x(t),\mathcal{A}) \leq \epsilon \text{ for all } t \geq T(\epsilon));$ \\
\item for every $\epsilon >0$ there is $\delta(\epsilon) > 0$ such that \\
$(x_0 \in \mathcal{S} \text{ and } \text{dist}(x_0,\mathcal{A}) \leq \delta) \Longrightarrow (\text{dist}(x(t),\mathcal{A}) \leq \epsilon$
for all $t \geq 0$).
\end{enumerate}
The equivalent definitions to the above will also be used (after changing the sets $\mathcal{S}$ and $\mathbb{S}^{d-1}$) for other systems evolving in $(\mathbb{R}^d)^n$
or linear subspaces thereof. \textit{Forward invariance}, or simply \textit{invariance}, of a set means that if the initial state is contained in the set, then the state is contained in the set for all future times.
The concepts of global convergence and almost global convergence relative to a forward invariant set $\mathcal{S}$ refer, respectively, to the situations where convergence occurs for all initial points in $\mathcal{S}$, and where convergence occurs for all initial points in a set $\mathcal{B}$ such that $\mathcal{S} - \mathcal{B}$ has measure zero.
\section{Projection methods}
\noindent
Before we continue to present the lifting method, we show how projection
based methods can be used to analyze consensus on hemispheres. In particular we consider two such methods.
The two methods are such that the $x_i$-vectors are projected down onto a $(d-1)$-dimensional linear subspace of $\mathbb{R}^{d}$. The symbol $y_i$ is used to denote the projection variable for $x_i$ in both methods.
\subsection{Equatorial plane projection}
\noindent
The equatorial plane projection simply projects all the states onto a $(d-1)$-dimensional hyperplane (that contains the origin). This plane separates the sphere into two hemispheres. If all the agents are positioned on one of those hemispheres, one can easily show that they reach consensus provided that the graph has strong connectivity. This projection method is appealing because the projections are simple and the convergence proof is straightforward. It is interesting that results from the literature about convergence on hemispheres (and slightly more general ones where the graph is assumed to be time-varying) can easily be shown with this simple projection.
Now, formally, the $x_i$-states are projected onto the equatorial plane whose normal is equal to $p = [1,0, \ldots, 0]^T$ in the world coordinate frame $\mathcal{F}_W$.
The projected state $y_i$ is defined by
\begin{equation}
\begin{bmatrix}
0 \\
y_i
\end{bmatrix} = P_{\text{equ}}x_i = \begin{bmatrix}
0 & 0 \\
0 & I_{d-1}
\end{bmatrix}x_i.
\end{equation}
This is illustrated in Fig.~\ref{fig:1} for the dimension $d = 3$. Points on the northern hemisphere, i.e., the $x_i$'s satisfying $p^Tx_i > 0$, are projected down onto the equatorial plane. For each point there is a blue dotted line between the point and its projection.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{sphere1}
\caption{Illustration of the equatorial plane projection.}
\label{fig:1}
\end{figure}
On the northern hemisphere $x_i \mapsto y_i$ is a diffeomorphism. The mapping $y_i \mapsto x_i$
is defined by
\begin{align}
[x_i]_1 & = \sqrt{1 - \|y_i\|^2}\\
[x_i]_k & = [y_i]_k, \text{ for } k\geq 2,
\end{align}
where $[x_i]_k$ and $[y_i]_k$ are the $k$'th elements of $x_i$ and $y_i$, respectively.
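A numerical sanity check of this correspondence (NumPy, with an arbitrary point on the northern hemisphere):

```python
import numpy as np

def equ_project(x):
    """Equatorial plane projection of x with p^T x > 0: drop the first coordinate."""
    return x[1:]

def equ_lift(y):
    """Inverse map back to the northern hemisphere, ||y|| < 1."""
    return np.r_[np.sqrt(1.0 - y @ y), y]

x = np.array([2.0, 1.0, -0.5])
x /= np.linalg.norm(x)                       # a unit vector with p^T x > 0
assert np.allclose(equ_lift(equ_project(x)), x)   # round trip recovers x
```

The round trip fails precisely on the equator ($p^Tx = 0$), which is why the projection is only a diffeomorphism on the open hemisphere.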
By using this projection one obtains a local convergence result for hemispheres.
\begin{prop}\label{prop:1}
Suppose controller \eqref{eqq:4} is used for each agent $i$ and suppose $\mathcal{G}(t)$ is uniformly strongly connected. If $p^Tx_{i0} > 0$ for all $i$, it holds that $\mathcal{A}$ is attractive for the closed loop system \eqref{eqq:6}.
\end{prop}
\emph{Proof:} \quad
We will use Theorem 1 in \cite{Johan_lyap_2017}. Suppose the graph is uniformly strongly connected, suppose any closed disc (or ball) in the equatorial plane with radius less than $1$ is forward invariant for the $y_i$'s, and suppose there is a function $V: \mathbb{R}^{d-1} \rightarrow \mathbb{R}^+$ such that 1) $V$ is positive definite, 2) $\max_{i \in \mathcal{V}}V(y_i(t))$ is non-increasing as a function of $t$, and 3) $\dot V(y_i(t))$ is strictly negative if $i \in \arg\max_{j \in \mathcal{V}}\{V(y_j)\}$ and there is $j \in \mathcal{N}_i(t)$ such that $y_i \neq y_j$. Then the $y_i$'s converge to a consensus formation. This in turn implies, since $y_i \mapsto x_i$ is a diffeomorphism, that the $x_i$'s converge to a consensus formation, i.e., the set $\mathcal{A}$ is attractive.
Let $V = \|\cdot\|^2$, i.e., $V(y_i) = y_i^Ty_i$, which obviously satisfies condition 1). For $\|y_i\| < 1$ it holds that
\begin{align}
\nonumber
& \quad \: \dot{V}(y_i) \\
\nonumber
& = \sum_{j \in \mathcal{N}_i(t)} g_{ij}(y_i, y_j)\bigg (\left(\sqrt{1 - \|y_i\|^2}\sqrt{1 - \|y_i\|^2}\right )y_i^Ty_j \\
\label{eqq:8}
& \quad \:- \left(\sqrt{1- \|y_i\|^2} \sqrt{1- \|y_{\smash{j}}\|^2}\right )\|y_i\|^2 \bigg),
\end{align}
where $g_{ij}(y_i, y_j) = \\ f_{ij}(\|[\sqrt{1- \|y_j\|^2} - \sqrt{1- \|y_i\|^2}, (y_j - y_i)^T]^T\|)$.
Now, at time $t$, assume $i \in \text{arg}\max_{j \in \mathcal{V}}\{V(y_j)\}$ and assume $\|y_i\| < 1$. The following observations imply that conditions 2) and 3) hold. It holds that $g_{ij}(y_i,y_j) \geq 0$, with strict inequality if $y_j \neq y_i$. It holds that $\|y_i\|^2 \geq y_i^Ty_j$, with strict inequality if $y_j \neq y_i$. It holds that
$$\left(\sqrt{1- \|y_i\|^2} \sqrt{1- \|y_{\smash{j}}\|^2}\right ) \geq \left(\sqrt{1 - \|y_i\|^2}\right )^2 \geq 0.$$
We also see that any closed disc with radius less than $1$
is forward invariant for the $y_i$'s.
\hfill $\blacksquare$
By a change of coordinates, we obtain the following generalization.
\begin{cor}\label{cor:1}
Suppose controller \eqref{eqq:4} is used for each agent $i$ and suppose $\mathcal{G}(t)$ is uniformly strongly connected. If all the $x_i$'s are contained in an open hemisphere it holds that $\mathcal{A}$ is attractive for the closed loop system \eqref{eqq:6}.
\end{cor}
A main problem with the equatorial plane projection is that the convex hull of the projected variables is not necessarily forward invariant. This means that the projected variables do not follow a consensus protocol. This is also the reason why we settle for the strong connectivity assumption on the graph, i.e., that it is uniformly strongly connected. However, the projected variables under the gnomonic projection---introduced in the subsequent section---do follow a consensus protocol, which, in turn, allows for more general convergence results.
\subsection{The gnomonic projection}
\noindent
The gnomonic projection projects an open hemisphere onto a tangent plane at a point on the sphere. We will use the convention of projecting the points on the southern hemisphere defined as $\{x \in \mathbb{S}^{d-1}: p^Tx < 0\}$ onto the tangent plane at the south-pole, i.e., at the point $-p$. The projection of $x_i$ is the intersection between the tangent plane and the line that passes through the origin and $x_i$. This projection is illustrated in Fig.~\ref{fig:2}, where several points are projected.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{sphere2}
\caption{Illustration of the gnomonic projection.}
\label{fig:2}
\end{figure}
The gnomonic projection has the property that segments of great circles on the sphere (geodesics) correspond to straight line segments in the projection plane.
One can show that a consensus algorithm on the open hemisphere corresponds to a consensus protocol for the projected states. It should be emphasized that the gnomonic projection method is not new. It is claimed to have been invented by the Greek philosopher Thales of Miletus, who lived around 624--546 BCE~\citep{alsina2015mathematical}. Its first appearance in a subject related to the one addressed in this paper was probably in \cite{Hartley2011} and subsequently in \cite{hartley2013rotation} in the context of rotation averaging.
Later the gnomonic projection was used as a tool to show consensus on the open hemisphere~\citep{thunberg2014distributed,johan02}. In those latter works, the three-dimensional (unit-quaternion) sphere was considered in the context of attitude synchronization. Recently, the gnomonic projection was also considered for arbitrary dimensions~\citep{lageman2016consensus}. It should be emphasized that the graph was not time-varying in that context.
Formally we define $y_i$, the projection of $x_i$, by
\begin{equation}
\begin{bmatrix}
-1 \\
y_i
\end{bmatrix} = \frac{1}{|[x_i]_1|}x_i,
\end{equation}
which is a diffeomorphism from the open southern hemisphere to the tangent plane of the south pole.
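As a concrete numerical illustration of this diffeomorphism, the following sketch implements the projection and its inverse. For concreteness we assume the pole is $p = e_1$, so that $[x]_1 = p^Tx$; this choice of pole is ours and is not fixed by the text above.

```python
import numpy as np

def gnomonic(x):
    # Project x on the open southern hemisphere ([x]_1 < 0, taking p = e_1)
    # onto the tangent plane at the south pole: [-1, y]^T = x / |[x]_1|.
    assert x[0] < 0, "point must lie on the open southern hemisphere"
    v = x / abs(x[0])        # first coordinate becomes -1
    return v[1:]             # tangent-plane coordinates y

def gnomonic_inverse(y):
    # Map tangent-plane coordinates y back to the unit sphere.
    v = np.concatenate(([-1.0], y))
    return v / np.linalg.norm(v)

x = np.array([-1.0, 1.0, 2.0]) / np.sqrt(6.0)   # a point with x_1 < 0
y = gnomonic(x)                                  # y = (1, 2)
x_back = gnomonic_inverse(y)                     # recovers x (diffeomorphism)
```

The round trip recovers the original point, illustrating that the map is a bijection between the open southern hemisphere and the tangent plane.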
Suppose controller
\eqref{eqq:4} is used, i.e., the closed-loop dynamics is given by \eqref{eqq:6}. If $p^Tx_i < 0$ for all $i$, i.e., all the $x_i$'s are located on the southern hemisphere, it holds that the dynamics of the $y_i$'s is of the form
\begin{align}
\label{eqq:nisse:1}
\dot{y}_i = \sum_{j \in \mathcal{N}_i(t)}h_{ij}(y)(y_j - y_i),
\end{align}
where $y = [y_1^T, y_2^T, \ldots, y_n^T]^T$, and it can be shown that the $h_{ij}(y)$'s are locally Lipschitz and globally Lipschitz on any bounded set. This can be used (as an alternative to using the lifting method) to prove the result in Proposition~\ref{prop:10} in the next section, which is stronger than that in Corollary~\ref{cor:1}.
\section{The lifting method}
\noindent
In this section we propose a method where the $x_i$'s are not projected onto a ($d-1$)-dimensional plane, but rather relaxed to be elements in $\mathbb{R}^{d}$. These elements, which we call $z_i$'s, are then projected down onto the sphere $\mathbb{S}^{d-1}$ to create the $y_i$'s (which in this case are equivalent to the $x_i$'s). The method can thus be seen as the inverse procedure to the two in the previous section. Provided $z_i \neq 0$, the projection is given by
\begin{equation}
y_i = \frac{z_i}{\|z_i\|}.
\end{equation}
This projection as well as the lifting is illustrated in Figure~\ref{fig:3}. Points in $\mathbb{R}^d$ are projected down onto the sphere in the sense of minimizing the least squares distance.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{sphere3}
\caption{Illustration of the lifting method.}
\label{fig:3}
\end{figure}
We let $z_i(t) \in \mathbb{R}^{d}$ be governed by the following dynamical system
\begin{align}
\label{eqq:11}
\dot{z}_i = \sum_{j \in \mathcal{N}_i(t)}f_{ij}\left(\left\|\frac{z_j}{\|z_j\|} - \frac{z_i}{\|z_i\|}\right\|\right)\frac{\|z_i\|}{\|z_{\smash{j}}\|}(z_j - z_i),
\end{align}
for all $i$.
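To make the lifted dynamics \eqref{eqq:11} concrete, here is a minimal forward-Euler sketch for a complete interaction graph with constant weights $f_{ij} \equiv 1$; the graph, step size, and horizon are illustrative choices of ours, not prescribed by the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, dt, steps = 5, 3, 1e-2, 4000        # illustrative parameters
f = lambda s: 1.0                          # constant weight functions f_ij

# initial states on the open hemisphere x_1 < 0; lifted states z_i(0) = x_i(0)
z = rng.normal(size=(n, d))
z[:, 0] = -np.abs(z[:, 0])
z /= np.linalg.norm(z, axis=1, keepdims=True)

for _ in range(steps):                     # one forward-Euler step of (11)
    norms = np.linalg.norm(z, axis=1)
    y = z / norms[:, None]                 # projections y_i = z_i / ||z_i||
    dz = np.zeros_like(z)
    for i in range(n):                     # complete interaction graph
        for j in range(n):
            if j != i:
                w = f(np.linalg.norm(y[j] - y[i])) * norms[i] / norms[j]
                dz[i] += w * (z[j] - z[i])
    z += dt * dz

y = z / np.linalg.norm(z, axis=1, keepdims=True)
spread = max(np.linalg.norm(y[i] - y[j]) for i in range(n) for j in range(n))
```

With all initial states on one open hemisphere, the origin lies outside the convex hull of the $z_{i0}$'s, so the projected states reach consensus, in line with Proposition~\ref{prop:7}.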
Let the initial state of the system be $z_0 = [z_{10}^T, z_{20}^T, \ldots, z_{n0}^T]^T$.
Equation \eqref{eqq:11} describes a consensus protocol with nonlinear weights that contain rational functions of the norms of the states.
The question is how this dynamical system is related to \eqref{eqq:6}. The following proposition provides the answer.
\begin{prop}\label{prop:5}
Suppose that none of the $z_{i0}$'s is equal to zero. On the time interval $[0, \infty)$ the dynamics for the $y_i$'s is given by
\begin{equation}\label{eqq:301}
\dot{y}_i = (I - y_iy_i^T)\sum_{j \in \mathcal{N}_i(t)}f_{ij}(\|y_j - y_i\|)(y_j - y_i),
\end{equation}
i.e., it is the same as \eqref{eqq:6}.
\end{prop}
\emph{Proof:} \quad
Proposition~\ref{prop:3} below provides the result that the solution to \eqref{eqq:11} is well-defined on the interval $[0, \infty)$ and $(\mathbb{R}^d\backslash \{0\})^n$ is forward invariant. Given that result, the $y_i$'s and their derivatives are well defined. Now,
\begin{align*}
\dot{y}_i & = \frac{1}{\|z_i\|}(I - y_iy_i^T)\dot{z}_i \\
& = (I - y_iy_i^T)\sum_{j \in \mathcal{N}_i(t)}f_{ij}(\left\|y_j - y_i\right\|)\left(y_j - y_i\frac{\|z_i\|}{\|z_j\|}\right) \\
& = (I - y_iy_i^T)\sum_{j \in \mathcal{N}_i(t)}f_{ij}(\|y_j - y_i\|)(y_j - y_i). \hfill \quad \quad \blacksquare
\end{align*}
\begin{prop}\label{prop:3}
Suppose the dynamics for the $z_i$'s is governed by \eqref{eqq:11} and that there is no $i$ such that $z_{i0} = 0$. Let $t_f$ be such that the solution exists during $[0,t_f)$, and let $H(t,z_0)$ be the convex hull of the $z_{i}(t)$'s at time $t \in [0, t_f)$ when the initial condition is $z_0$.
Then the solution to \eqref{eqq:11} exists and is unique for all times $t > 0$, $(H(t,z_0))^n$ is forward invariant, and the set $(\mathbb{R}^d\backslash \{0\})^n$ is forward invariant.
\end{prop}
\emph{Proof:} \quad
We first address the claim that $(H(t,z_0))^n$ is forward invariant. It suffices to verify that for each $i$, the right-hand side of \eqref{eqq:11} is either inward-pointing relative to $H(t,z_0)$, or equal to $0$. This is indeed the case, since each $\dot{z}_i$ is a conical combination of the directions $z_j - z_i$, which point from $z_i$ into the convex hull.
Now we address the invariance of $(\mathbb{R}^d\backslash \{0\})^n$, and, by doing that, obtain the existence and uniqueness result for the solution during $[0, \infty)$ for free, since the right-hand side of \eqref{eqq:11} is locally Lipschitz on $(\mathbb{R}^d\backslash \{0\})^n$. Now, suppose there is $i_1 \in \mathcal{V}$ and a finite time $t_1 > 0$ such that $\lim_{t \uparrow t_1} z_{i_1}(t) = 0$ and there is no $j$ and $t_0 \in [0,t_1)$ such that $\lim_{t \uparrow t_0} z_{j}(t) = 0$. This means that there is a first finite time $t_1$ for which at least one state, $z_{i_1}$ that is, attains the value $0$. The assumption is equivalent to assuming that $(\mathbb{R}^d\backslash \{0\})^n$ is not forward invariant.
For $i \in \mathcal{V}$ and $t \in [0, t_1)$ it holds that
\begin{align*}
\frac{d}{dt}{\|z_i\|} & = \sum_{j \in \mathcal{N}_i(t)}g_{ij}(z_i, z_j)\left(\|z_i\|\theta_{ij} - \frac{\|z_i\|^2}{\|z_j\|}\right), \text{ for all } i,
\end{align*}
where $g_{ij}(z_i,z_j) = f_{ij}(\|\frac{z_j}{\|z_j\|} - \frac{z_i}{\|z_i\|}\|)$
and $\theta_{ij} = \frac{z_i^Tz_j}{\|z_i\|\|z_j\|}$. For $z_i, z_j \neq 0$, it also holds that
\begin{align*}
\frac{d}{dt}\frac{\|z_i\|}{\|z_j\|} & = \sum_{k \in \mathcal{N}_i(t)}g_{ik}(z_i,z_k)\left(\frac{\|z_i\|}{\|z_j\|}\theta_{ik} - \frac{~\|z_i\|^2}{\|z_j\|\|z_k\|}\right) \\
&\quad - \sum_{l \in \mathcal{N}_j(t)}g_{jl}(z_j,z_l)\left(\frac{\|z_i\|}{\|z_j\|}\theta_{jl} - \frac{\|z_i\|}{\|z_l\|}\right).
\end{align*}
We define $v_{kl} = \frac{\|z_k\|}{\|z_l\|}$ for all $k,l$ and write the
equation above as
\begin{align*}
\dot{v}_{ij} & = \sum_{k \in \mathcal{N}_i(t)}g_{ik}(z_i,z_k)(v_{ij}\theta_{ik} - v_{ij}v_{ik}) \\
& \quad - \sum_{l \in \mathcal{N}_j(t)}g_{jl}(z_j,z_l)(v_{ij}\theta_{jl} - v_{il}).
\end{align*}
Let $\alpha > 0$ be an upper bound for the $f_{ij}$'s,
which is equivalent to an upper bound for the $g_{ij}$'s. Such a bound must exist (since the set $\mathbb{S}^{d-1} \times \mathbb{S}^{d-1}$ is compact and the function $(f_{ij} \circ \text{dist})$ is continuous on $\mathbb{S}^{d-1} \times \mathbb{S}^{d-1}$, where $\text{dist}(\cdot,\cdot)$ is the function that returns the Euclidean distance between two points in $\mathbb{R}^d$).
Let $V(t) = \max\limits_{(i,j) \in \mathcal{V} \times \mathcal{V}}v_{ij}(t)$. On $[0, t_1)$ it holds that
\begin{align}
\label{eqq:12}
D^+{V} \leq 3\alpha n V,
\end{align}
where $D^+$ is the upper Dini derivative. By using the Comparison Lemma for \eqref{eqq:12}, we can conclude that $V$ is bounded from above by $e^{3\alpha nt_1}V(0)$ on $[0,t_1)$.
Now, for $i_1$ and $t \in [0, t_1)$ it holds that
\begin{align*}
\frac{d}{dt}{\|z_{ i_1}\|} & = \sum_{j \in \mathcal{N}_{i_1}(t)}g_{i_1 j}(z_{i_1}, z_j)\left(\|z_{ i_1}\|\theta_{i_1 j} - \frac{\|z_{ i_1}\|^2}{\|z_j\|}\right) \\
& \geq -n\alpha(e^{3\alpha nt_1}V(0) + 1)\|z_{i_1}\|.
\end{align*}
By using the Comparison Lemma, we can conclude that $\|z_{ i_1}(t)\| \geq \|z_{i_1 0}\|e^{-n\alpha(e^{3\alpha nt_1}V(0) + 1)t}$ on $[0, t_1)$. But this, in turn, means that $\lim_{t \uparrow t_1}\|z_{i_1}(t)\| > 0$, which is a contradiction.
\hfill $\blacksquare$
In the following proposition we make use of $H(t,z_0)$, which was defined in Proposition~\ref{prop:3}.
\begin{prop}\label{prop:7}
Suppose the dynamics for the $z_i$'s are governed by \eqref{eqq:11} and $\mathcal{G}(t)$ is uniformly quasi-strongly connected. Suppose the initial state is $w \in \mathbb{R}^{nd}$ and that $0 \in \mathbb{R}^d$ is not contained in the convex hull $H(0,w)$. Then the consensus set
$\mathcal{A}_z$---defined as the set where all the $z_i$'s are equal in $(H(0,w))^n$---is globally uniformly asymptotically stable relative to $(H(0,w))^n$. Furthermore, there is a point $\bar z \in \mathbb{R}^d$ that all the $z_i$'s converge to.
\end{prop}
\emph{Proof:} \quad
Invariance of $(H(t,z_0))^n$ is an indirect consequence of the
fact that \eqref{eqq:11} is a consensus protocol. On this set the right-hand side of \eqref{eqq:11} is Lipschitz continuous in $z$ and piece-wise continuous in $t$. The procedure in the rest of the proof is
analogous to the one in the proof of Proposition~\ref{prop:10}. Since the right-hand side of \eqref{eqq:11} is Lipschitz continuous in $z$ and piece-wise continuous in $t$, we can use Theorem 2 in \cite{Johan_lyap_2017} to find a continuously differentiable function ${W}: \mathbb{R}^{d} \times \mathbb{R}^{d} \rightarrow \mathbb{R}^+$ such that 1) $\max_{(i,j) \in \mathcal{V} \times \mathcal{V}}W(z_i(t),z_j(t))$ is decreasing as a function of $t$; and 2) $\dot{W}(z_i(t),z_j(t))$ is strictly negative if $(i,j) \in \text{arg}\max_{(i,j) \in \mathcal{V} \times \mathcal{V}}W(z_i(t),z_j(t))$ and there is $k \in \mathcal{N}_i(t)$ such that $z_i \neq z_k$ or there is $l \in \mathcal{N}_j(t)$ such that $z_j \neq z_l$. The existence of such a function guarantees that the consensus set $\mathcal{A}_z$ is globally uniformly asymptotically stable relative to $(H(0,z_0))^n$. The function $\|z_i - z_j\|^2$ is such a $W$-function. Convergence to a point for all the $z_i$'s can be shown by using the facts that $(H(t,z_0))^n$ is forward invariant for all $t$ and that $z$ converges to $\mathcal{A}_z$.
\hfill $\blacksquare$
As a remark to the previous proposition, we should add that more restrictive results about attractivity of $\mathcal{A}$ can be shown by using the results in~\cite{shi2009global,lin2007state}.
\begin{prop}\label{prop:10}
Suppose the graph $\mathcal{G}(t)$ is uniformly quasi-strongly connected. Then for any closed ball $B$ contained in the hemisphere, the consensus set $\mathcal{A}$ is globally uniformly asymptotically stable relative to $B^n$ under \eqref{eqq:6}.
\end{prop}
\emph{Proof:} \quad
Forward invariance holds for $B^n$ due to the structure of the right-hand side of \eqref{eqq:6}.
Let $z_0 = x_0$.
Due to Proposition \ref{prop:7} we know that the consensus set
$\mathcal{A}_z$ is globally uniformly asymptotically stable relative to $(H(0,z_0))^n$ and that there is a point $\bar z \in \mathbb{R}^d$ that all the $z_i$'s converge to. We also know that the projected $y_i$-variables follow the protocol \eqref{eqq:301}, which is the same as \eqref{eqq:6}. The norms of the $z_i$'s are uniformly bounded on $(H(0,z_0))^n$ and $(H(0,z(T)))^n$ is forward invariant for all $T > 0$. Thus the desired result readily follows.
\hfill $\blacksquare$
\begin{cor}
Suppose the dynamics for the $z_i$'s is governed by \eqref{eqq:11} and suppose the graph $\mathcal{G}(t)$ is quasi-strongly connected. If the $z_i$'s converge to a point $\bar{z} \in \mathbb{R}^d$ that is not equal to zero, then the $y_i$'s converge to a point $\bar{y} \in \mathbb{S}^{d-1}$. Furthermore, if the convex hull of the $z_{i0}$'s does not contain the point zero,
then the $y_i$'s converge to a point $\bar{y} \in \mathbb{S}^{d-1}$.
\end{cor}
\emph{Proof:} \quad
Straightforward application of Proposition~\ref{prop:5}, Proposition~\ref{prop:3}, and Proposition~\ref{prop:10}.
\hfill $\blacksquare$
Variations of Proposition~\ref{prop:10} have appeared in the literature before. The idea of using the gnomonic projection to show consensus on the hemisphere was used in~\cite{thunberg2014distributed,johan02}, where restricted versions of Proposition~\ref{prop:10} were given for the dimension $d = 4$ in the context of attitude synchronization. Recently, the attractivity of $\mathcal{A}$ relative to open hemispheres was established under quasi-strong graph connectivity~\citep{lageman2016consensus} using the gnomonic projection. The graph was not time-varying in that context.
To get a better understanding of Proposition~\ref{prop:10}, a numerical example is provided in Fig.~\ref{fig:10}. In this example there are five agents with a uniformly quasi-strongly connected interaction graph. The agents were initially uniformly distributed on a hemisphere and the $f_{ij}$-functions were chosen to be constant, either equal to $1$ or $2$. In the figure, the red discs denote the initial positions and the yellow disc denotes the final consensus point. We have also marked two points on the trajectories where the graph switches.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{simulation_sphere1.pdf}
\caption{Convergence on a hemisphere.}
\label{fig:10}
\end{figure}
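A simulation of this kind can be reproduced directly from \eqref{eqq:6}; the sketch below uses forward Euler with renormalization onto the sphere, a static chain graph rooted at the first agent (quasi-strongly connected at all times), and weights switching between $1$ and $2$. These are our own illustrative choices and do not reproduce the exact data behind Fig.~\ref{fig:10}.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, dt, steps = 5, 3, 1e-2, 8000
x = rng.normal(size=(n, d))
x[:, 0] = -np.abs(x[:, 0])                    # start on one open hemisphere
x /= np.linalg.norm(x, axis=1, keepdims=True)

for k in range(steps):
    f_const = 1.0 if (k // 100) % 2 == 0 else 2.0   # switching weight f_ij
    dx = np.zeros_like(x)
    for i in range(1, n):                     # chain rooted at agent 0:
        j = i - 1                             # N_i = {i-1}, agent 0 is fixed
        P = np.eye(d) - np.outer(x[i], x[i])  # tangent-space projection
        dx[i] = f_const * P @ (x[j] - x[i])
    x += dt * dx
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # stay on the sphere

spread = max(np.linalg.norm(x[i] - x[0]) for i in range(n))
```

In accordance with Proposition~\ref{prop:10}, all agents converge to a common point on the sphere, here the (stationary) root of the chain.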
Now, the case when $0 \in \mathbb{R}^d$ is contained in the convex hull of the $z_i$'s is more intriguing. We provide the following result.
\begin{prop}\label{prop:4}
Suppose the dynamics for the $z_i$'s is governed by \eqref{eqq:11} and $\mathcal{G}(t)$ is uniformly strongly connected. Suppose $0 \in \mathbb{R}^d$ is contained in the convex hull of the $z_{i0}$'s, i.e., in $H(0,z_0)$, and there is no $i$ such that $z_{i0} = 0$. Furthermore, suppose that the $z_{i0}$'s are contained in a compact set where the $f_{ij}$'s are
bounded from below by a positive constant $K_d$.
Then the set
$\mathcal{A}_z$---defined as the set where all the $z_i$'s are equal in the convex hull of the $z_{i0}$'s---is attractive. Furthermore, there is a fixed point $\bar z \in \mathbb{R}^d$ that all the $z_i$'s converge to.
\end{prop}
\emph{Proof:} \quad
We need to prove that the $z_i$'s converge to $\mathcal{A}_z$.
In light of Proposition~\ref{prop:7}, the only case left to consider is when the point $0 \in \mathbb{R}^d$ is contained in the convex hull of the $z_i(t)$'s for all $t >0$, i.e., it is contained in $H(t,z_0)$ for all $t > 0$. We will thus only consider this case in the following, where we need to prove that all the $z_i$'s converge to $0$. We partition this case into two sub-cases: \\ \\
\textbf{1)} the omega-limit set, denoted by $\Omega(z_0)$, does not contain a point $\bar z = [\bar z_1^T, \bar z_2^T, \ldots, \bar z_n^T]^T$ for which a $\bar z_i = 0$. \\
\noindent
\textbf{2)} the omega-limit set $\Omega(z_0)$ contains at least one point $\bar z = [\bar z_1^T, \bar z_2^T, \ldots, \bar z_n^T]^T$ where at least one of the $\bar z_i$'s is equal to zero. \\
\textbf{We begin by considering 1)}. There must be a ball $B$ around the origin such that there is no time $t$ for which a $z_i(t)$ is contained in the ball. This is proven in the following way. Proposition~\ref{prop:3} guarantees that no $z_i(t)$ can reach the origin in finite time. Thus, at any finite time $t_f$ there exists a largest open ball with radius $\epsilon(t_f)$ such that no $z_i(t)$ is contained in the ball during $[0,t_f]$. Assume that $\lim_{t_f \rightarrow \infty}\epsilon(t_f) = 0$. This implies that there is a point $\bar z = [\bar z_1^T, \bar z_2^T, \ldots, \bar z_n^T]^T$ that is in the closure of $\Omega(z_0)$ for which one of the $\bar z_i$'s is equal to zero. But $\Omega(z_0)$ is compact, hence such a point is also contained in $\Omega(z_0)$. This contradicts the assumption that such points are not contained in $\Omega(z_0)$.
Now, in the set $B^n$ we replace the weights
$f_{ij}(z_i,z_j)\frac{\|z_i\|}{\|z_j\|}$ in the right-hand side of \eqref{eqq:11}
by functions $h_{ij}(z_i,z_j)$ such that the total weights,
consisting of $f_{ij}(z_i,z_j)\frac{\|z_i\|}{\|z_j\|}$ outside $B^n$ and $h_{ij}(z_i,z_j)$ inside $B^n$, are
globally Lipschitz on a set containing $(H(0,z_0))^n$
in the interior. Furthermore, $h_{ij}(z_i,z_j)$ is chosen to be positive when $z_i \neq z_j$.
Now let us study the solution of this modified system, with the replaced weights in $B^n$, starting at $z_0$ at time $0$. At any finite time this solution is the same as that of the original system. However, we can use the results in \cite{Johan_lyap_2017} to show that the solution to the modified system converges to $\mathcal{A}_z$ and, in particular, that all the $z_i$'s converge to a fixed point that is nonzero. This means that after some
finite time $T$, the point $0$ is not contained in $H(t,z_0)$ for the modified as well as for the original system. But this contradicts our assumption that the point $0 \in \mathbb{R}^d$ is contained in the convex hull of the $z_i(t)$'s for all $t >0$.\\ \\
\noindent
\textbf{Now we consider 2)}. Let us first introduce $L(z) = \max_{i \in \mathcal{V}}\|z_i\|$. The function $L(z(t))$ is continuous and monotonically decreasing. We assume that $\lim_{t \rightarrow \infty}L(z(t)) = \bar L > 0$. This means that for any $t \geq 0$, it
holds that the set $\{j: \|z_j(t)\| \geq \bar L\}$ is nonempty.
We will show that this assumption leads to a contradiction at the
end of the proof. This, in turn, means that all the $z_i$'s converge to $0$.
Since the continuous $f_{ij}$'s are defined on a compact set, there is $K_u >0$ such that the $f_{ij}$'s are bounded from above by $K_u$.
Furthermore, there is also an $M > 0$ such that $\|z_i(t)\| \leq M$ for all $i$ and $t \geq 0$. Take for example $M = L(z_0)$.
We continue by formulating a series of claims, each of which is followed by a proof. After these claims have been introduced, they are used as building blocks in the final part of the proof. Roughly, the claims can be understood as follows. The first claim says that if a $z_i$ is close to the origin, it will remain so for a specified time interval. The second claim says that if a $z_i$ has a neighbor that is close to the origin, it will be ``dragged'' to the origin by this neighbor. The third claim simply says that there must be a $z_i$ close to the origin at some time. Then we show that the $z_i$ that is close to the origin drags all the other states close to the origin; so close that their distances to the origin are smaller than $\bar L$, which, in turn, is a contradiction. \\
\noindent \textbf{Claim 1:} There is $\epsilon > 0$ satisfying $\epsilon^{\frac{1}{2}} \ll 1$ and $\epsilon^{\frac{1}{2}} \ll \bar L$ such that if at some time $\bar t \geq 0$
there is $i$ such that $\|z_i(\bar t)\| \leq \epsilon$, then $\|z_{{i}}(t)\| \leq \epsilon^{\frac{1}{2}}$ for $t \in [\bar t, \bar t + (n+1)(T + \tau_D)]$, where $\tau_D$ is the lower bound on the dwell time and $T$ is the length of the time interval such that the union graph $\mathcal{G}([t, t + T))$ is guaranteed to be strongly connected, see Section~\ref{sec:preliminaries}. \\
It is assumed that $\epsilon^{\frac{1}{2}} \ll 1$ and $\epsilon^{\frac{1}{2}} \ll \bar L$. Suppose there is $\bar i$ and $\bar t$ such that $\|z_{\bar i}(\bar t)\| \leq \epsilon$.
Let us consider the dynamics for $\|z_{\bar i}\|^2$. It is
\begin{align*}
\frac{d}{dt}\|z_{\bar{i}}\|^2 & = 2\sum_{j \in \mathcal{N}_{\bar i}(t)}f_{\bar ij}\frac{\|z_{\bar i}\|}{\|z_{\smash{j}}\|}(z_{\bar i}^Tz_j - \|z_{\bar i}\|^2) \\
& \leq 2\sum_{j \in \mathcal{N}_{\bar i}(t)}f_{\bar ij}\frac{\|z_{\bar i}\|}{\|z_{\smash{j}}\|}z_{\bar i}^Tz_j \leq 2\sum_{j \in \mathcal{N}_{\bar i}(t)}K_u\|z_{\bar i}\|^2 \\
& \leq 2nK_u\|z_{\bar i}(t)\|^2.
\end{align*}
Now we can use the Comparison Lemma to deduce that $\|z_{\bar i}(t)\| \leq \epsilon e^{nK_u(t -\bar t)}$ for $t > \bar t$. Now, if $\epsilon$ is sufficiently small, the expression $\epsilon e^{nK_u(t -\bar t)}$ will be smaller than $\epsilon^{\frac{1}{2}}$ during $[\bar{t}, \bar{t} + (n+1)(T + \tau_D)]$. \\
\noindent \textbf{Claim 2:} There is $\epsilon > 0$ satisfying $\epsilon^{\frac{1}{3}} \ll 1$, $\epsilon^{\frac{1}{3}} \ll \bar L$ such that for any time $\bar t \geq 0$, if an agent $\bar i$ has a neighbor $\bar j$ during $[\bar t, \bar t + T]$ for which it holds that $\|z_{\bar j}(\bar t)\| \leq \epsilon$, then there is a time $\bar t_2$ during $[\bar t, \bar t + T]$ such that $\min_{t \in [\bar t_2, \bar t_2 + \tau_D]}\|z_{\bar i}(t)\| \leq \max\{\epsilon^{\frac{1}{3}},Me^{{-\alpha \tau_D}}\}$ where $\alpha(\epsilon) = (K_d + nK_u) - K_u\frac{\epsilon^{\frac{1}{3}}}{{\epsilon^{\frac{1}{2}}}}$. \\
Suppose that there is no $\bar t_2 \in [\bar t, \bar t + T]$ such that $\min_{t \in [\bar t_2, \bar t_2 + \tau_D]}\|z_{\bar i}(t)\| \leq \epsilon^{\frac{1}{3}}$.
There is $\bar t_3 \in [\bar{t}, \bar t + T]$ such that $\bar{j} \in \mathcal{N}_{\bar i}(t)$ during the time interval $[\bar{t}_3, \bar t_3 +\tau_D] \subset [\bar{t}, \bar t + T + \tau_D]$, see the assumptions on the graph $\mathcal{G}(t)$ in Section~\ref{sec:preliminaries}. We assume that $\epsilon$ is small enough so that $\|z_{\bar{j}}(t)\| \leq \epsilon^{\frac{1}{2}}$ for $t \in [\bar t, \bar t + (n+1)(T + \tau_D)]$, see Claim 1.
During the time interval $[\bar t_3, \bar t_3 + \tau_D]$ it holds that
\begin{align}
\nonumber
& \frac{d}{dt}\|z_{\bar{i}}\|^2 = 2\sum_{j \in \mathcal{N}_{\bar i}(t)}f_{\bar ij}\frac{\|z_{\bar i}\|}{\|z_{\smash{j}}\|}(z_{\bar i}^Tz_j - \|z_{\bar i}\|^2) \\
\nonumber
& \leq 2f_{\bar i \bar j}\|z_{\bar i}\|^2(1 - \frac{\|z_{\bar i}\|}{\epsilon^{\frac{1}{2}}}) \\
\label{eq:new:3}
& \quad +2\sum_{j \in \mathcal{N}_{\bar i}(t)\backslash \bar j}f_{\bar ij}\frac{\|z_{\bar i}\|}{\|z_{\smash{j}}\|}(z_{\bar i}^Tz_j - \|z_{\bar i}\|^2).
\end{align}
Now we take a look at the first expression on the right-hand side of ``$\leq$'' in \eqref{eq:new:3}. We use the fact that $\|z_{\bar i}\| \geq \epsilon^{\frac{1}{3}}$ to obtain
\begin{align*}
2f_{\bar i \bar j}\|z_{\bar i}\|^2(1 - \frac{\|z_{\bar i}\|}{\epsilon^{\frac{1}{2}}}) & \leq 2f_{\bar i \bar j}\|z_{\bar i}\|^2(1 - \frac{\epsilon^{\frac{1}{3}}}{{\epsilon^{\frac{1}{2}}}}) \\
& \leq 2K_d\|z_{\bar i}\|^2(1 - \frac{\epsilon^{\frac{1}{3}}}{{\epsilon^{\frac{1}{2}}}}).
\end{align*}
Now we take a look at the (last) sum expression on the right-hand side of ``$\leq$'' in \eqref{eq:new:3}. For any $j$ such that $\|z_j\| \leq \epsilon^{\frac{1}{3}}$ the corresponding expression
\begin{equation}\label{eq:new:2}
f_{\bar ij}\frac{\|z_{\bar i}\|}{\|z_{\smash{j}}\|}(z_{\bar i}^Tz_j - \|z_{\bar i}\|^2)
\end{equation}
is negative. For any $j \in \mathcal{N}_i(t)$ such that the expression in \eqref{eq:new:2} is positive, the following must hold
\begin{align*}
& f_{\bar ij}\frac{\|z_{\bar i}\|}{\|z_{\smash{j}}\|}(z_{\bar i}^Tz_j - \|z_{\bar i}\|^2) \leq K_u\|z_{\bar i}\|^2
\end{align*}
By using the above inequalities, we can conclude that
\begin{align*}
& \frac{d}{dt}\|z_{\bar{i}}\|^2 \leq 2(K_d + nK_u)\|z_{\bar i}(t)\|^2 - 2K_u\frac{\epsilon^{\frac{1}{3}}}{{\epsilon^{\frac{1}{2}}}}\|z_{\bar i}(t)\|^2
\end{align*}
during the time interval $[\bar t_3, \bar t_3 + \tau_D]$.
By using the Comparison Lemma we can conclude that
\begin{align*}
& \|z_{\bar i}(\bar t_2 + \tau_D)\|^2 \leq \|z_{\bar i}(\bar t_2)\|^2 e^{-2\alpha \tau_D} \leq M^2e^{-2\alpha \tau_D},
\end{align*}
where $\alpha = (K_d + nK_u) - K_u\frac{\epsilon^{\frac{1}{3}}}{{\epsilon^{\frac{1}{2}}}}$. \\
\noindent \textbf{Claim 3:} For any $\epsilon > 0$ there is $\bar i \in \mathcal{V}$ and a corresponding
$\bar t$ such that $\|z_{\bar{i}}(\bar t)\| \leq \epsilon$. \\
Claim 3 is a consequence of the fact that the omega-limit set $\Omega(z_0)$ contains at least one point $\bar z = [\bar z_1^T, \bar z_2^T, \ldots, \bar z_n^T]^T$ where at least one of the $\bar z_i$'s is equal to zero. Thus there is $\bar i \in \mathcal{V}$ and a corresponding unbounded sequence $\{\bar t_n\}$ such that $\lim_{n \rightarrow \infty }\|z_{\bar i}(\bar t_n)\| = 0$.\\
Now, we can use the three claims above to obtain the sought contradiction concerning $L$'s convergence to $\bar L > 0$. Due to Claim 3, for any $\epsilon_1 > 0$ we know that there is a time $\bar t$ and an $\bar i_1$ for which $\|z_{\bar i_1}(\bar t)\| \leq \epsilon_1$.
Let $\epsilon_2(\epsilon_1) = \max\{\epsilon_1^{\frac{1}{3}},Me^{{-\alpha(\epsilon_1) \tau_D}}\}$, where $\alpha$ is defined in Claim 2. By choosing $\epsilon_1$ small enough we can ensure, due to Claim 1, that $\epsilon_2^{\frac{1}{2}} \ll 1$ and $\epsilon_2^{\frac{1}{2}} \ll \bar L$ and that there is $\bar i_2$ such that for $\bar i_1$ and $\bar i_2$, the norms of both $z_{\bar i_1}$ and $z_{\bar i_2}$ are smaller than $\epsilon_2^{\frac{1}{2}}$ during $[\bar{t} + T + \tau_D, \bar{t} + (n+1)(T + \tau_D)]$.
Now let $\epsilon_3(\epsilon_2) = \max\{\epsilon_2^{\frac{1}{3}},Me^{{-\alpha(\epsilon_2) \tau_D}}\}$. Since $\epsilon_2$ is a function of $\epsilon_1$, by choosing $\epsilon_1$ small enough we can ensure, due to Claim 1, that $\epsilon_3^{\frac{1}{2}} \ll 1$ and $\epsilon_3^{\frac{1}{2}} \ll \bar L$ and that there is $\bar i_3$ such that for $\bar i_1$, $\bar i_2$, and $\bar i_3$, the norms of $z_{\bar i_1}$, $z_{\bar i_2}$, and $z_{\bar i_3}$ are smaller than $\epsilon_3^{\frac{1}{2}}$ during $[\bar{t} + 2(T + \tau_D), \bar{t} + (n+1)(T + \tau_D)]$.
By continuing in this manner, one can finally show that there is an $\epsilon_n < \bar L$ such that for all the $i$'s it holds that $\|z_i\| < \bar L$ at time $\bar{t} + (n+1)(T + \tau_D)$. But $L(t)$ is monotonically decreasing to $\bar L$ from above. Hence we have two contradictory statements.
\hfill $\blacksquare$
The point $0 \in \mathbb{R}^d$ plays a crucial role for this lifting method. If the $z_i$'s converge to a point that is not equal to $0$, then the $y_i$'s converge to a consensus formation. On the other hand, we do not know when convergence to $0$ for the $z_i$'s implies non-convergence to a consensus formation for the $y_i$'s.
It has recently been shown that if the graph $\mathcal{G}(t)$ is static (or time-invariant) and symmetric, the $f_{ij}$'s fulfill certain differentiability assumptions, and $f_{ij} = f_{ji}$ for all $i,j$, then $\mathcal{A}$ is almost globally attractive under \eqref{eqq:6}~\citep{markdahl2017almost} (a simple choice of such $f_{ij}$'s is when $f_{ij} = f_{ji} = a_{ij} = a_{ji} > 0$, i.e., the $a_{ij}$'s are positive scalars). On the other hand, the result does not hold for dimension $d = 2$.
This means that under the same conditions on the graph and the $f_{ij}$'s as in \cite{markdahl2017almost}, the region of attraction of the point $0$ has measure zero when $d \geq 3$ and has positive measure when $d = 2$. Linking those results to the lifting method, e.g., by means of a geometric interpretation, remains an open problem.
\section{Conclusions}
\noindent
This paper addresses distributed synchronization or consensus on the unit sphere. A large
class of consensus control laws is considered in which only relative information is used between neighbors over a time-switching interaction graph. We investigate how a lifting method can be used in the convergence analysis for these control laws.
The proposed method is new in this context. It lifts the states from $\mathbb{S}^{d-1}$ to $\mathbb{R}^{d}$. In the higher-dimensional space, the dynamics of the states is described by a consensus protocol, where each agent moves along a conical combination of the directions to its neighbors. The weights in the conical combination contain rational functions of the norms of the agents' states.
The paper provides more general convergence results than have been reported before for hemispheres and furthermore provides convergence results for a consensus protocol with rational weights of the norms of the states. However, an additional purpose of the paper was---by introducing the lifting method---to serve as a stepping-stone towards future research on global convergence results for the considered consensus control laws.
\bibliographystyle{agsm}
\section{Introduction}
Reachable sets have attracted the attention of many mathematicians for a long
time, both in theoretical and in numerical analysis.
The approaches for the numerical computation of reachable sets mainly split into
two classes: those for reachable sets up to a given time and those
for reachable sets at a given end time. We will give here only exemplary references,
since the literature is very rich (more references are given in an early version of this paper in \cite{BLe}). There are methods based on overestimation and
underestimation of reachable sets based on ellipsoids \cite{KV}, zonotopes \cite{Alth,GlGM} or on approximating the reachable set with support functions resp.~supporting points \cite{BL,Alth}. Other popular and well-studied approaches involve level-set methods, semi-Lagrangian schemes and the computation of an associated Hamilton-Jacobi-Bellman equation, see~e.g.~\cite{BCD,BFZ,F} or are based on the viability concept~\cite{ABS-P} and the viability kernel algorithm~\cite{S-P}. Further methods \cite{BL,BLcham,BPhD} are set-valued generalizations of quadrature methods and Runge-Kutta methods initiated by the works~\cite{D2F,V_int,DF,W,DV}.
Here, we will focus on set-valued quadrature methods and set-valued Runge-Kutta methods
with the help of support functions or supporting points,
since they do not suffer from the wrapping effect or from an exploding number of vertices,
and the error of restricting the computations to only finitely many directions
can be easily estimated. Furthermore, they are among the most efficient and fastest methods
(see~\cite[Sec.~3.1]{Alth}, \cite[Chap.~9, p.~128]{lG}) for linear control problems
to which we restrict the computation of the minimum time function $T(x)$.
We refer to~\cite{BPhD,BL,lG} (and references therein) for technical details on the numerical implementation, although we will lay out the main ideas of this approach for the reader's convenience.
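To indicate what such a support-function computation looks like, here is a minimal sketch for a linear system $\dot{x} = Ax + Bu$, $u \in U$, using the formula $\delta^*(l, \mathcal{R}(T)) = \delta^*(e^{A^TT}l, X_0) + \int_0^T \delta^*(B^Te^{A^T(T-s)}l, U)\,ds$ discretized by the composite trapezoidal rule. The double-integrator data and the grid size are our own toy choices, not taken from the cited works.

```python
import numpy as np

# Toy linear system: double integrator xdot = A x + B u, u in U = [-1, 1],
# starting set X0 = {0}, end time T.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
T, N = 1.0, 200                       # end time and quadrature subintervals

def expm_AT(t):
    return np.eye(2) + A.T * t        # e^{A^T t}; exact here since A^2 = 0

def support_reach(l):
    # delta*(l, R(T)) = int_0^T |B^T e^{A^T (T-s)} l| ds  (since X0 = {0}
    # and delta*(w, [-1,1]) = |w|), composite trapezoidal rule on N cells.
    s = np.linspace(0.0, T, N + 1)
    vals = np.array([abs((B.T @ expm_AT(T - si) @ l).item()) for si in s])
    h = s[1] - s[0]
    return h * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)

sig_v = support_reach(np.array([0.0, 1.0]))   # velocity direction: exactly T
sig_x = support_reach(np.array([1.0, 0.0]))   # position direction: T^2 / 2
```

Evaluating `support_reach` in finitely many directions $l$ yields an outer polyhedral approximation of the convex reachable set, which is the basic building block of the methods discussed above.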
In optimal control theory the regularity of the minimum time function has been studied
intensively, see e.g.~\cite{CMW,CNN} and references therein. For the error estimates in this paper it will be essential to single out example classes for which the minimum time function is Lipschitz continuous (no order reduction of the set-valued method) or H\"older-continuous with exponent $\frac{1}{2}$ (order reduction by the square root).
Minimum time functions are usually computed by solving the Hamilton-Jacobi-Bellman (HJB) equations and by the dynamic programming principle, see e.g.~\cite{BBZ,BFZ,BF1,BF2,BFS,CL,GL}. In this approach, the minimal requirement on the regularity of $T(x)$ is continuity, see e.g.~\cite{BF1,CL,GL}. The solution of an HJB equation with suitable boundary conditions gives immediately -- after a transformation -- the minimum time function, and its level sets provide a description of the reachable sets. A natural question is whether it is also possible to go the other way around, i.e., to reconstruct the minimum time function $T(x)$ from the knowledge of the reachable sets. One such attempt was made in \cite{BBZ,BFZ}, where the approach is based on PDE solvers and on the reconstruction of the optimal control and solution via the value function.
On the other hand, our approach in this work is completely different. It is based on very efficient quadrature methods for convex reachable sets as described in Section~3.
In this article we present a novel approach for calculating the minimum time function.
The basic idea is to use set-valued methods for approximating reachable sets at a given
end time with computations based on support functions resp.~supporting points.
By reversing time and starting from the convex target as the initial set, we compute
the reachable sets for times on a (coarser) time grid.
Due to the strictly expanding condition for reachable sets,
the corresponding end time is assigned to all boundary points of the computed
reachable sets. Since we discretize in time and in space (by choosing a finite number
of outer normals for the computation of supporting points), the vertices of the polytopes
forming the fully discrete reachable sets are considered as data points of an irregular
triangulated domain. On this simplicial triangulation, a piecewise linear approximation
yields a fully discrete approximation of the minimum time function.
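A deliberately simple one-dimensional sketch of this reconstruction idea: for $\dot{x} = u$, $|u| \leq 1$, and target $\{0\}$, the backward reachable set at time $t$ is $[-t,t]$ and the minimum time function is $T(x) = |x|$. The time grid and evaluation points below are our own choices for illustration.

```python
import numpy as np

# Backward reachable set of the target {0} at time t is [-t, t]; by strict
# expansion, its boundary points -t and t carry the value t. A piecewise
# linear interpolant through these data points is the fully discrete
# approximation of the minimum time function T(x) = |x|.
h = 0.1
times = np.arange(0.0, 1.0 + h / 2, h)            # coarse time grid
pts = np.concatenate((-times[::-1], times[1:]))   # -1, ..., -h, 0, h, ..., 1
vals = np.concatenate((times[::-1], times[1:]))   #  1, ...,  h, 0, h, ..., 1

T_hat = lambda q: np.interp(q, pts, vals)         # piecewise linear interpolant
xs = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(T_hat(xs) - np.abs(xs)))      # interpolation error
```

Since $|x|$ is itself piecewise linear with a kink at a data point, the reconstruction is exact here; in general the interpolation error is governed by the mesh size and the regularity of $T$, as quantified by the estimates discussed above.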
The well-known interpolation error and the convergence results for the set-valued method can be applied to yield an easy-to-prove error estimate by taking into account the regularity of the minimum time function. It requires at least H\"older continuity and
involves the maximal diameter of the simplices in the used triangulation. A second error estimate is proved without explicitly assuming the continuity of the minimum time function and depends only on the time interval between the computed (backward) reachable sets. The computation does not need a nonempty interior of the target set, in contrast to the Hamilton-Jacobi-Bellman approach; for singletons the error estimate even improves. It is also able to compute discontinuous minimum time functions, since the underlying set-valued method can also compute lower-dimensional reachable sets. There is no explicit dependence of the algorithm and the error estimates on the smoothness of optimal solutions or controls. Further results are devoted to reconstructing discrete
optimal trajectories which reach a set of supporting points from a given target
for a class of linear control problems and also proving the convergence of discrete
optimal controls by the use of nonsmooth and variational analysis.
The main tool is Attouch's theorem, which allows us to benefit from the convergence of the discrete reachable sets to the time-continuous one.
The plan of the article is as follows: in Section \ref{sec:preliminaries} we collect notations, definitions and basic properties of convex analysis, set operations, reachable sets and the minimum time function. The convexity of the reachable set for linear control problems and the characterization of its boundary via the level-set of the minimum time function is the basis for the algorithm formulated in the next section. In Section \ref{sec:construction}, we briefly introduce the reader to set-valued quadrature methods and Runge-Kutta methods and their implementation and discuss the convergence order for the fully discrete approximation of reachable sets at a given time both in time and in space.
In the next subsection we present the error estimate for the fully discrete minimum time function which depends on the regularity of the minimum time function and on the
convergence order of the underlying set-valued method. Another error estimate expresses
the error solely in terms of the time period between the calculated reachable sets. The last subsection discusses the construction of discrete optimal trajectories and the convergence of discrete optimal controls. A series of accompanying examples can be found in Section \ref{sec:num_tests}. We compare the error of the minimum time function with respect to time and space discretization, studying the influence of its regularity and of the smoothness of the support functions of the corresponding set-valued integrands.
We first consider several linear examples with various target
and control sets and study different levels of regularity of the corresponding
minimum time function. The nonlinear example in Subsection~\ref{subsec_nonlin} demonstrates that this approach is not restricted to the class of linear control systems.
Although first numerical experience is gathered there, the theoretical justification is deferred to a forthcoming paper. In Subsection~\ref{subsec_non_exp_prop}
an example demonstrates the need for the strict expanding property of (the union of) reachable
sets for characterizing boundary points of the reachable set via time-minimal points. The section ends with a collection of examples which either are more challenging for numerical
calculations or partially violate our assumptions. Finally, a discussion of our approach and possible improvements can be found in Section~\ref{sec:concl}.
\section{Preliminaries}\label{sec:preliminaries}
In this section we recall some notation and definitions as well as basic facts from convex analysis and control theory for later use. Let $\mathcal{C}(\mathbb{R}^n)$ be the set of convex, compact, nonempty subsets of $\mathbb{R}^n$,
$\| \cdot \|$ be the Euclidean norm and $\scal{\cdot}{\cdot}$ the inner product in $\mathbb{R}^n$,
$B_r(x_0)$ be the closed (Euclidean) ball with radius $r>0$ centered at $x_0$ and $S_{n-1}$ be the unit sphere in $\mathbb{R}^n$.
Let $A$ be a subset of $\mathbb{R}^n$, $M$ be an $n\times n$ real matrix, then $B_r(A):= \bigcup_{x\in A}B_r(x)$, $\norm{M}$ denotes the \emph{lub-norm}
of $M$ with respect to $\|\cdot\|$, i.e., the spectral norm.
The \emph{convex hull}, the \emph{boundary}, the \emph{interior} and the \emph{diameter} of a set $A$ are signified by $\co(A),\,\partial A,\, \inter(A),\, \diam(A)$ respectively. We define the support function, the supporting points in a given direction and the set arithmetic operations as follows.
\begin{definition}
Let $A\in \mathcal{C}(\mathbb{R}^n),\, l \in \R^n$. The \emph{support function} and the \emph{supporting face} of $A$ in the direction $l$ are defined as, respectively,
\begin{equation*}
\begin{aligned}
\delta^*(l,A):= \max_{x\in A} \,\scal{l}{x},\,\,
\Y(l,A):=\bb{x\in A \colon \scal{l}{x}=\delta^*(l,A)}.
\end{aligned}
\end{equation*}
An element of the supporting face is called \emph{supporting point}.
\end{definition}
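In all computations below the sets involved are polytopes given by finitely many points, for which both quantities reduce to a maximization over the stored point list. The following Python sketch (helper names are our own, not from any library) illustrates the definition:

```python
import numpy as np

def support_function(l, A):
    """delta*(l, A) = max_{x in A} <l, x> for A = conv(rows of A)."""
    return float(np.max(A @ l))

def supporting_face(l, A, tol=1e-12):
    """Y(l, A): all stored points of A attaining the maximum of <l, .>."""
    vals = A @ l
    return A[vals >= vals.max() - tol]

# unit square conv{(0,0),(1,0),(0,1),(1,1)} in direction l = (1,0):
square = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
l = np.array([1.0, 0.0])
print(support_function(l, square))   # 1.0
print(supporting_face(l, square))    # the edge vertices (1,0) and (1,1)
```

For a polytope, the supporting face in direction $l$ is the convex hull of the points returned by this maximization.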
Known properties of the convex hull, the support function and the supporting points when applied to the set operations introduced above can be found, e.g., in~\cite[Chap.~0]{AC}, \cite[Sec.~4.6, 18.2]{ABS-P}, \cite{BPhD,lG,Alth}.
In particular, the convexity of the arithmetic set operations becomes obvious.
We also recall the definition of the Hausdorff distance, which is the main tool for measuring the error of the reachable set approximation.
\begin{definition}
Let $C,D\in \mathcal{C}(\mathbb{R}^n),\,x\in \mathbb{R}^n$. Then the \emph{distance function} from $x$ to $D$ is $\di(x,D):=\min_{d\in D} \norm{x-d}$
and the Hausdorff distance between $C$ and $D$ is defined as
\begin{align*}
\dH(C,D)&:=\max \bb{\max_{x\in C} \di(x,D),\max_{y\in D} \di(y,C)}.
\end{align*}
\end{definition}
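For the finite point sets arising from the space discretization later on, the Hausdorff distance can be evaluated directly from pairwise distances; a minimal NumPy sketch (function name ours):

```python
import numpy as np

def hausdorff(C, D):
    """dH(C, D) for finite point sets C, D given as rows of arrays,
    using dist(x, D) = min_{d in D} ||x - d||."""
    dist = np.linalg.norm(C[:, None, :] - D[None, :, :], axis=2)
    return max(dist.min(axis=1).max(), dist.min(axis=0).max())

C = np.array([[0.0, 0.0], [1.0, 0.0]])
D = np.array([[0.0, 0.0], [3.0, 0.0]])
print(hausdorff(C, D))  # 2.0, attained at (3,0) with closest point (1,0)
```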
Now we will recall some basic notions of control theory, see e.g.,~\cite[Chap.~IV]{BCD} for more details. Consider the following linear time-variant control dynamics in $ \mathbb{R}^n$
\begin{equation}\label{LCDyn}
\begin{cases}
\begin{array}{r@{\,}l@{\quad}l}
\dot y(t) & =A(t)y(t)+B(t)u(t)
& \text{ for a.e. } t \in [t_0, \infty), \\
u(t) & \in U
& \text{ for a.e. } t \in [t_0, \infty), \\
y(t_0) & = y_0. &
\end{array}
\end{cases}
\end{equation}
The coefficients $ A(t), B(t) $ are $n\times n$ and $n\times m$ matrices respectively, $y_0 \in \mathbb{R}^n$ is the initial value,
$U\in \mathcal{C}(\R^m)$ is the set of control values. Under standard assumptions, existence and uniqueness of the solution of \eqref{LCDyn} are guaranteed for any measurable function $u(\cdot)$ and any $y_0 \in \mathbb{R}^n$. Let $\mathcal{S}\subset \mathbb{R}^n$, a nonempty compact set, be the
\emph{target} and
$$
\mathcal{U}:=\bb{ u \colon [t_0,\infty) \rightarrow U \text{ measurable}},
$$
the set of \emph{admissible controls}
and $y(t,y_0,u)$ be the solution of \eqref{LCDyn}.
We define the \emph{minimum time starting from $y_0 \in \mathbb{R}^n$ to reach the target $\mathcal{S}$} for a given $u \in \mathcal{U}$ as
$$
t(y_0,u)=\min \,\bb{t\ge t_0:\ y(t,y_0,u)\in \mathcal{S}},
$$
with the convention $t(y_0,u)=\infty$ if this set is empty.
The \emph{minimum time function to reach $\mathcal{S}$ from $y_0$} is defined as
$
T(y_0)=\inf_{u\in \mathcal{U}} \,\bb{t(y_0,u)},
$
see e.g.,~\cite[Sec.~IV.1]{BCD}.
We also define the
\emph{reachable sets for fixed end time} $t> t_0$, \emph{up to time $t$}
resp.~\emph{up to a finite time} as follows:
\begin{align*}
&\mathcal{R}(t) := \bb{y_0 \in \R^n: \textit{ there exists } u\in \mathcal{U},\,y(t,y_0,u)\in \mathcal{S}},\\
&\RSU(t) := \mbox{} \bb{y_0 \in \R^n: \textit{ there exists } u\in \mathcal{U},\,y(s,y_0,u)\in \mathcal{S} \text{ for some } s\in [t_0,t]}\\& \quad\quad\quad=\bigcup_{\substack s \in [t_0,t]}\mathcal{R}(s), \\
&\mathcal{R} := \bb{y_0 \in \R^n: \textit{ there exists some finite time $ t \geq t_0$ with }y_0 \in \mathcal{R}(t) } = \bigcup_{t\in [t_0,\infty)} \mathcal{R}(t).
\end{align*}
By definition
\begin{equation} \label{eq:reach_leq_sub_level_set}
\RSU(t) = \bb{y_0\in \mathbb{R}^n \colon T(y_0)\le t}
\end{equation}
is a sublevel set
of the minimum time function, while
for a given maximal time $ t_f > t_0$ and some $t \in I := [t_0, t_f]$,
$\mathcal{R}(t)$ is the set of points \emph{reachable from the target in time} $ t $
by the \emph{time-reversed system}
\begin{align} \label{eq:time_rev_cp}
\dot{y}(t) & =\bar A(t)y(t) + \bar B(t)u(t), \\
y(t_0) & \in \mathcal{S}, \label{InCond}
\end{align}
where $\bar A(t):= -A(t_0+ t_f-t),\,\bar B(t):=-B(t_0+ t_f-t)$ to shorten notation.
In other words, $\mathcal{R}(t)$ equals the set of starting points from
which the system can reach the target in time $ t $.
Sometimes $\mathcal{R}(t)$ is called the \emph{backward reachable set} which is also
considered in~\cite{BBZ} for computing the minimum time function by solving
a Hamilton-Jacobi-Bellman equation.
The following standing hypotheses are assumed to be fulfilled in the sequel.
\begin{assumption}\label{standassum}\mbox{}
\begin{enumerate}
\item[ (i) ] $ A(t),\,B(t) $ are $ n \times n $, $ n \times m $ real-valued matrices defining integrable functions on any compact interval of $[t_0,\infty) $.
\item[ (ii) ] The control set $U\subset \mathbb{R}^m$ is convex, compact and nonempty, i.e., $U \in \mathcal{C}(\R^m)$.
\item[ (iii) ] The target set $\mathcal{S}\subset \mathbb{R}^n$ is convex, compact and nonempty, i.e., $\mathcal{S} \in \mathcal{C}(\R^n)$. \\
Especially, the target set can be a singleton.
\item[ (iv) ] $\mathcal{R}(t)$ is \emph{strictly expanding} on the compact interval $[t_0, t_f]$, i.e., $\mathcal{R}(t_1) \subset \inter \mathcal{R}(t_2)$ for all $t_0\le t_1<t_2\le t_f$.
\end{enumerate}
\end{assumption}
\begin{remark}
The reader can find sufficient conditions for Assumption \ref{standassum}(iv) for $\mathcal{S}=\bb{0}$ in \cite[Chap.~17]{HL}, \cite[Sec.~2.2--2.3]{LM}. Under this assumption, it is obvious that
\label{rem:strict_expand}
\begin{equation*}
\begin{aligned}
\mathcal{R}(t)= \RSU(t) .
\end{aligned}
\end{equation*}
\end{remark}
Under our standard hypotheses, the control problem~\eqref{eq:time_rev_cp} can
equivalently be replaced by
the following linear differential inclusion
\begin{equation}\label{InLCDyn}
\dot y(t)\in \bar A(t)y(t)+\bar B(t)U \ \ \text{ for a.e. } t \in [t_0, \infty)
\end{equation}
with absolutely continuous solutions $y(\cdot)$ (see \cite[Appendix~A.4]{Tol}).
All the solutions of \eqref{InCond}--\eqref{InLCDyn} are represented as
\begin{equation*}
y(t)=\Phi(t,t_0)y_0+\int_{t_0}^{t} \Phi(t,s)\bar B(s)u(s)ds
\end{equation*}
for all $y_0 \in \mathcal{S},\, u\in \mathcal{U}$, and $t_0\le t<\infty$, where $\Phi(t,s)$ is the \emph{fundamental solution matrix} of the homogeneous system
\begin{equation}\label{fundSol}
\dot y(t)=\bar A(t)y(t),
\end{equation}
with $\Phi(s,s)=I_n$, the $n\times n$ identity matrix.
Using Minkowski addition and Aumann's integral \cite{Aum}, the reachable set can be described as follows:
\begin{equation}\label{Rt}
\mathcal{R}(t)=\Phi(t,t_0)\mathcal{S}+\int_{t_0}^{t} \Phi(t,s)\bar B(s)Uds.
\end{equation}
For time-invariant systems,
i.e., $ \bar A(t) = \bar A $, we have $\Phi(t,t_0)=e^{\bar A(t-t_0)}$.
For the linear control system \eqref{LCDyn}, under Assumptions \ref{standassum}(i)--(iii), the reachable set at a fixed end time is
convex, which allows us to apply support functions or supporting points for its approximation.
Furthermore, the reachable sets change continuously with respect to the end time.
The following proposition provides the connection between $\mathcal{R}(t)$ and the level set of $T(\cdot)$ at time $t$ which is essential for this approach.
We will benefit from
the sublevel representation in~\eqref{eq:reach_leq_sub_level_set}.
The result is related to~\cite[Theorem~2.3]{BBZ}, where the minimum time at $x$ is
characterized as the smallest time for which $x$ lies on the zero-level set bounding the backward reachable set.
\begin{proposition}
Let Assumption \ref{standassum} be fulfilled and $t > t_0$. Then
\label{prop:bd_descr_monotone_case_w_level_set}
\begin{equation}\label{ReachLevel}
\partial \mathcal{R}(t)=\bb{y_0 \in \mathbb{R}^n \colon T(y_0)=t}.
\end{equation}
\end{proposition}
\begin{proof}
"$\subset$":
Assume that there exists $x\in \partial \mathcal{R}(t)$ with $x\notin \bb{y_0\in \mathbb{R}^n \colon T(y_0)=t}$.
Clearly, $x \in \RSU(t)$ and~\eqref{eq:reach_leq_sub_level_set} shows that $T(x) \leq t$.
By definition there exists
$s \in [t_0,t]$ with $x \in \mathcal{R}(s)$. If $s < t$, Assumption~\ref{standassum}(iv) yields the
contradiction $x \in \mathcal{R}(s) \subset \inter \mathcal{R}(t)$; hence $s = t$, so $T(x) = t$, contradicting the choice of $x$.\\
"$\supset$":
Assume that there exists $x$ with $T(x)=t$ such that $x\notin \partial \mathcal{R}(t)$. Since
$x\in \mathcal{R}(t)$ by~\eqref{eq:reach_leq_sub_level_set} and $x\notin \partial \mathcal{R}(t)$, we have $x\in \inter( \mathcal{R}(t))$.
Hence, there exists $\varepsilon > 0$ with
\begin{align*}
x + \varepsilon B_1(0) \subset \mathcal{R}(t).
\end{align*}
The continuity of $\mathcal{R}(\cdot)$ ensures the existence of $\delta > 0$ such that for all $t_1 \in [t - \delta, t+\delta]
\cap I$
\begin{align*}
\dH(\mathcal{R}(t), \mathcal{R}(t_1)) & \leq \frac{\varepsilon}{2}.
\end{align*}
Hence,
\begin{align*}
x + \varepsilon B_1(0) \subset \mathcal{R}(t) & \subset \mathcal{R}(t_1)
+ \frac{\varepsilon}{2} B_1(0).
\end{align*}
The order cancellation law in~\cite[Theorem~3.2.1]{PalUrb}
can be applied, since $\mathcal{R}(t_1)$ is convex and all sets are compact. Therefore,
\begin{align*}
(x + \frac{\varepsilon}{2} B_1(0)) + \frac{\varepsilon}{2} B_1(0)
& \subset \mathcal{R}(t_1) + \frac{\varepsilon}{2} B_1(0)
\end{align*}
which implies $ x + \frac{\varepsilon}{2} B_1(0) \subset \mathcal{R}(t_1) $.
Hence, $x \in \inter( \mathcal{R}(t_1))$ with $t_1 < t$ so that $T(x) \leq t_1 < t$
which is again a contradiction. Therefore, $\bb{y_0\in \mathbb{R}^n \colon T(y_0)=t} \subset \partial \mathcal{R}(t)$.
This completes the proof.
\end{proof}
In the previous characterization of the boundary of the reachable set at a fixed end time,
the strict expansion property of the reachable sets played a crucial role.
As stated in Remark~\ref{rem:strict_expand}, Assumption \ref{standassum}(iv) also guarantees that the union of reachable sets
coincides with the reachable set at the largest end time and is trivially convex.
If we drop this assumption,
relaxing the expanding property~(iv) while still demanding convexity,
we can only characterize the boundary of the \emph{union} of reachable sets up to a given time, as the following proposition shows.
\begin{proposition}
Let $t > t_0$, Assumptions \ref{standassum}(i)--(iii) and Assumption
\begin{quote}
(iv)' \quad $\RSU(t)$ has convex images and is strictly expanding on the compact interval $[t_0, t_f]$, i.e.,
$
\RSU(t_1) \subset \inter \RSU(t_2) \quad \text{for all } t_0\le t_1<t_2 \le t_f.
$
\end{quote} holds. Then
\label{prop:bd_descr_w_level_set}
\begin{equation} \label{equ:bd_union_reach_sets}
\partial \RSU(t)
= \bb{x\in \R^n \colon T(x)=t}.
\end{equation}
\end{proposition}
\begin{proof}
\emph{The proof can be found in~\cite[Proposition~7.1.4]{Le}.}
\end{proof}
\begin{remark}\label{rem:inclusion}
Assumption~(iv)' implies that the considered system is small-time controllable, see~\cite[Chap.~IV, Definition 1.1]{BCD}. Moreover, small-time controllability implies the nonemptiness of the interior of $\mathcal{R}$ and the continuity of the minimum time
function in $\mathcal{R}$, see~\cite[Chap.~IV, Propositions~1.2,~1.6]{BCD}.
Assumption~(iv)' is essentially weaker than~(iv), since under~(iv) the convexity of $\RSU(t)$
and the strict expansion of $\RSU(\cdot)$ follow from Remark~\ref{rem:strict_expand}.
The inclusion for $\RSU(\cdot)$ in this assumption is equivalent to small-time controllability
(STC) for time-invariant systems; sufficient conditions for STC in this case, via generalized
Petrov and second-order conditions, are discussed in~\cite{AM}.
Under one of these two conditions the minimal time function is either continuous or
H\"older continuous with exponent $\frac{1}{2}$.
Extensions of the continuity property
to $\varphi$-convexity can be found in~\cite{CMW}.
In the previous proposition we may allow $\RSU(t)$ to be lower-dimensional and are
still able to prove the inclusion "$\supset$" in~\eqref{equ:bd_union_reach_sets},
since the interior of $\RSU(t)$ is then empty
and $x$ cannot lie in it, which again yields the desired contradiction.
For the other inclusion "$\subset$" the nonemptiness of the interior of $\mathcal{R}(t)$
in Proposition~\ref{prop:bd_descr_monotone_case_w_level_set}
resp.~the one of $\RSU(t)$ in Proposition~\ref{prop:bd_descr_w_level_set} is essential. Therefore, the expanding property
in~Assumptions~(iv) resp.~(iv)' cannot be relaxed by assuming only monotonicity in the sense
\begin{align} \label{ex:relaxed_expand}
\mathcal{R}(s) \subset \mathcal{R}(t) \quad\text{or}\quad
\RSU(s) \subset \RSU(t)
\end{align}
for $s < t$ as Example \ref{ex:counter_ex_1} shows.
\end{remark}
\section{Approximation of the minimum time function}\label{sec:construction}
\subsection{Set-valued discretization methods}\label{subsec:sv_discr_meth}
Consider the linear control dynamics \eqref{LCDyn}. For a given $x\in \mathbb{R}^n$, the problem of computing approximately the minimum time $T(x)$ to reach $\mathcal{S}$ by following the dynamics \eqref{LCDyn} has been investigated in depth in the literature. It is usually solved via the associated discrete Hamilton-Jacobi-Bellman (HJB) equation, see, for instance, \cite{BF1,F,CL,GL}; neglecting the space discretization, one obtains an approximation of $T(x)$. In this paper, we introduce another approach to this problem, based on approximating the reachable set of the corresponding linear differential inclusion. The approximate minimum time function is not derived from a PDE solver, but from iterative set-valued methods or a direct discretization of control problems.
Our aim now is to compute $\mathcal{R}(t)$ numerically up to a maximal time $t_f$ based on the representation~\eqref{Rt} by means of set-valued methods to approximate Aumann's integral. There are many approaches to achieving this goal. We will describe three known options for discretizing the reachable set which are used in the following.
For simplicity of notation, consider an equidistant grid over the interval $I=[t_0,t_f]$ with $N$ subintervals, step size $h = \frac{t_f - t_0}{N}$ and grid points $t_i=t_0+ih$, $i=0,\ldots,N$.
\begin{enumerate}
\item[(I)] Set-valued quadrature methods with the exact knowledge of the fundamental solution matrix of \eqref{fundSol} (see e.g.,~\cite{V_int,D2F,BL}, \cite[Sec.~2.2]{BPhD}): as in the pointwise case, we replace the integral $\int_{t_0}^{t} \Phi(t,s)\bar B(s)Uds$ by some quadrature scheme of order $p$ with non-negative weights.
Therefore, \eqref{Rt} is approximated by
\begin{equation} \label{eq:sv_quad_meth_global}
\mathcal{R}_h(t_N)=\Phi(t_N,t_0)\mathcal{S}+h \sum_{i=0}^{N}c_i \Phi(t_N,t_i)\bar B(t_i)U
\end{equation}
with weights $c_i\ge 0,\,i=0,\ldots,N$. Moreover, the following error estimate holds:
\begin{equation*}
\dH(\int_{t_0}^{t_N} \Phi(t_N,s)\bar B(s)Uds,h \sum_{i=0}^{N}c_i \Phi(t_N,t_i)\bar B(t_i)U)\le Ch^p.
\end{equation*}
\item[(II)] Set-valued combination methods (see e.g.,~\cite{BL}, \cite[Sec.~2.3]{BPhD}): we replace $\Phi(t_N,t_i)$ in method $ (I) $ by its approximation (e.g.,~via ODE solvers of the corresponding matrix equation) such that
\begin{enumerate}
\item[a)]$\Phi_h(t_{m+n},t_0)=\Phi_h(t_{m+n},t_m)\Phi_h(t_m,t_0)$ for all $m \in \{0,\ldots,N\}$, $n \in \{0,\ldots,N-m\}$.
\item[b)]$\sup_{0\le i\le N} \norm{\Phi(t_N,t_i)-\Phi_h(t_N,t_i)}\le Ch^p.$
\end{enumerate}
Then, the discrete reachable set is represented globally resp.~locally recursively as
\begin{align}
\mathcal{R}_h(t_N) & =\Phi_h(t_N,t_0)\mathcal{S}+h\sum_{i=0}^{N}c_i \Phi_h(t_N,t_i)\bar B(t_i)U, \label{eq:sv_comb_meth_global}\\
\mathcal{R}_h(t_0) & = \mathcal{S}, \label{eq:sv_quad_meth_local_start}
\\
\mathcal{R}_h(t_{i+1}) & =\Phi_h(t_{i+1},t_{i})\mathcal{R}_h(t_{i})+ h \sum_{j=0}^{1} \widetilde{c}_{ij} \Phi_h(t_{i+1},t_{i+j})\bar B(t_{i + j})U.
\label{semigroupcombmethRh}
\end{align}
\item[(III)] Set-valued Runge-Kutta methods (see e.g.,~\cite{DF,W,V,B}): \\
We can approximate \eqref{InLCDyn} by set-valued analogues of Runge-Kutta schemes.
The discrete reachable set is computed recursively with the starting condition \eqref{eq:sv_quad_meth_local_start} for the set-valued Euler scheme (see e.g.,~\cite{DF}) as
\begin{equation} \label{eq:sv_euler_rec}
\mathcal{R}_h(t_{i+1})= \Phi_h(t_{i+1},t_i) \mathcal{R}_h(t_i)+h\bar B(t_i)U,
\end{equation}
for the set-valued Heun's scheme with piecewise constant selections
(see e.g., \cite{V}) as
\begin{equation} \label{eq:sv_heun_rec}
\mathcal{R}_h(t_{i+1})= \Phi_h(t_{i+1},t_i) \mathcal{R}_h(t_i)
+\frac{h}{2}\Big( (I + h \bar A(t_{i+1}))\bar B(t_i)+\bar B(t_{i+1})\Big)U.
\end{equation}
\end{enumerate}
For the time-invariant case with constant matrices $\bar A,\,\bar B$, examples of $\mathcal{R}_h$ for different choices of numerical methods are as follows:
\begin{equation*}
\mathcal{R}_h(t_{j+1}) = \begin{cases}e^{h\bar A} \mathcal{R}_h(t_j) + h e^{h\bar A} \bar{B} U & \mbox{set-valued Riemann sum}, \\
(I+h\bar A) \mathcal{R}_h(t_j) + h (I+h\bar A) \bar{B} U & \mbox{Riemann sum combined with Euler}, \\
(I+h\bar A)\mathcal{R}_h(t_j) + h \bar{B} U & \mbox{set-valued Euler}.
\end{cases}
\end{equation*}
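To make the recursion concrete, the following Python sketch (purely illustrative, with our own variable names, not the implementation behind Section~\ref{sec:num_tests}) runs the set-valued Euler scheme \eqref{eq:sv_euler_rec} with $\Phi_h(t_{i+1},t_i)\approx I+h\bar A$ on vertex clouds for the time-reversed double integrator, target $\mathcal{S}=\{0\}$ and $U=[-1,1]$; after each step the cloud is pruned to supporting points in finitely many directions, as in the algorithm of the next subsection.

```python
import numpy as np

# Time-reversed double integrator: ydot = Abar y + Bbar u, u in U = [-1, 1],
# target S = {0}.  Illustrative sketch of the set-valued Euler scheme.
Abar = np.array([[0.0, -1.0], [0.0, 0.0]])
Bbar = np.array([[0.0], [-1.0]])
U_vertices = np.array([[-1.0], [1.0]])           # vertices of U

def supporting_points(V, directions):
    """Pick one supporting point of conv(rows of V) per direction."""
    return np.unique(V[np.argmax(V @ directions.T, axis=0)], axis=0)

def euler_step(V, h, directions):
    """R_h(t_{j+1}) = (I + h*Abar) R_h(t_j) + h*Bbar*U on vertex clouds,
    pruned to supporting points to keep the cloud small."""
    M = np.eye(2) + h * Abar
    W = np.vstack([M @ v + h * (Bbar @ u)
                   for v in V for u in U_vertices])
    return supporting_points(W, directions)

N_R = 32                                         # number of directions
angles = 2 * np.pi * np.arange(N_R) / N_R
dirs = np.column_stack([np.cos(angles), np.sin(angles)])

V, h, N = np.zeros((1, 2)), 0.01, 100            # t_f - t_0 = N*h = 1
for _ in range(N):
    V = euler_step(V, h, dirs)
# V now holds supporting points of the discrete reachable set R_h(1.0)
```

Since the second state satisfies $\dot y_2=-u$ in reversed time, the support of $\mathcal{R}_h(1)$ in direction $(0,\pm 1)$ equals $Nh=1$, which serves as a sanity check for the sketch.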
The purpose of this paper is not to focus on the set-valued numerical schemes themselves, but on the approximative construction of $T(\cdot)$. Thus, without loss of generality, we mainly utilize the scheme described in (II) to present our idea from now on. In practice, there are several strategies in control problems to discretize the set of controls $\mathcal{U}$, see e.g.,~\cite{BBCG}. Here we choose a \emph{piecewise constant} approximation $\mathcal{U}_h$ for the sake of simplicity, which corresponds to using only one selection on each subinterval $[t_i,t_{i+1}]$ in the corresponding set-valued quadrature method.
Depending on the choice of the method, we can find a subset $\mathcal{U}_h$ of $ U $, usually the piecewise constant controls so that in the case (II), for instance, we have
\begin{align*}
\mathcal{R}_h(t_N)=\bb{y\in \mathbb{R}^n \colon \text{ there exists a piecewise constant control } u_h \in \mathcal{U}_h \text{ and } y_0 \in \mathcal{S}\\
\text{ such that } y=\Phi_h(t_N,t_0)y_0+h\sum_{i=0}^{N}c_i \Phi_h(t_N,t_i)\bar B(t_i)u_h(t_i)},
\end{align*}
or equivalently
$
\mathcal{R}_h(t_N)=\Phi_h(t_N,t_0)\mathcal{S}+h\sum_{i=0}^{N}c_i \Phi_h(t_N,t_i)\bar B(t_i)U.
$
We set
\begin{equation*}
t_h(y_0,y,u_h) = \min \bb{ t_n \colon n\in \mathbb{N},\,\, y=\Phi_h(t_n,t_0)y_0+h\sum_{i=0}^{n}c_i \Phi_h(t_n,t_i)\bar B(t_i) u_h(t_i) }
\end{equation*}
for some $y\in \mathbb{R}^n,\,y_0\in \mathcal{S}$ and a piecewise constant grid function $u_h$ with $u_h(t_i) = u_i \in U$, $i=0,\ldots,n$. If no such grid control $u_h$ reaches $y$ from $y_0$ along the corresponding discrete trajectory, we set $t_h(y_0,y,u_h)=\infty$. The discrete minimum time function $T_h(\cdot)$ is then defined as
\begin{equation*}
T_h(y)=\min\limits_{\substack{ u_h\in \mathcal{U}_h\\[0.3ex] y_0\in \mathcal{S}} }\, t_h(y_0,y,u_h).
\end{equation*}
Notice that the definitions of $\mathcal{R}_h$ and $t_h$ for the remaining cases (I) and (III) can be derived in a similar way by using the corresponding expressions of $y$.
\begin{proposition}
In all of the constructions (I)--(III) described above, $\mathcal{R}_h(t_N)$ is a convex, compact and nonempty set.
\end{proposition}
\begin{proof}
The key idea of the proof of this proposition is to employ the linearity of \eqref{InLCDyn}, in conjunction with the convexity of $\mathcal{S},\,U$ and the arithmetic operations for convex sets. In particular, it follows analogously to the proof of \cite[Proposition~3.3]{BBCG}.
\end{proof}
\begin{theorem}\label{dHRRh}
Consider the linear control problem \eqref{InCond}--\eqref{InLCDyn}. Assume that the set-valued quadrature method and the ODE solver have the same order $p$. Furthermore, assume that $\bar A(\cdot)$ and $\delta^*(l,\Phi(t_f,\cdot)\bar B(\cdot)U)$ have absolutely continuous $(p-2)$-nd derivative, the $(p-1)$-st derivative is of bounded variation uniformly with respect to all $l\in S_{n-1}$ and $\sum_{i=0}^{N}c_i\norm{B(t_i)U}$ is uniformly bounded for $N\in \mathbb{N}$. Then
\begin{equation}
\dH(\mathcal{R}(t_N),\mathcal{R}_h(t_N))\le Ch^p,
\end{equation}
where $C$ is a non-negative constant.
\end{theorem}
\begin{proof}
See \cite[Theorem 3.2]{BL}.
\end{proof}
\begin{remark}
For $p=2$ the requirements of Theorem \ref{dHRRh} are fulfilled if $A(\cdot),\,B(\cdot)$ are absolutely continuous and $A'(\cdot),\,B'(\cdot)$ are of bounded variation (see~\cite{DV}, \cite[Secs.~1.6, 2.3]{BPhD}).
\end{remark}
The next subsection is devoted to the full discretization of the reachable set, i.e., we consider the space discretization as well. Since we will work with supporting points, we do this implicitly by discretizing the set $S_{n-1}$ of normed directions.
This error will be adapted to the error of the set-valued numerical scheme caused by the time discretization to preserve its order of convergence with respect to time step size as stated in Theorem~\ref{dHRRh}. Then we will describe in detail the procedure to construct the graph of the minimum time function based on the approximation of the reachable sets. We will also provide the corresponding overall error estimate.
\subsection{Implementation and error estimate of the reachable set approximation}\label{subsec:Algorithm}
For a particular problem, according to its smoothness in an appropriate sense, we first choose a difference method of suitable order, say $O(h^p)$ for some $p>0$, to solve \eqref{fundSol} numerically and efficiently, for instance the Euler, Heun or a Runge-Kutta scheme. Then we approximate Aumann's integral in~\eqref{Rt} by a quadrature formula of the same order, for instance the Riemann sum, the trapezoid rule or Simpson's rule, to obtain a discrete scheme of global order $O(h^p)$.
We implement the set arithmetic operations in~\eqref{semigroupcombmethRh}
only approximately as indicated in \cite[Proposition~3.4]{BBCG}
and work with finitely many normed directions
\begin{equation} \label{eq:discr_unit_spheres}
\begin{array}{r@{\,}l@{\,}r@{\,}l@{\,}r@{\,}l}
S_{\mathcal{R}}^{\Delta} & := \{ & l^k & \,:\, k=1,\ldots,N_{\mathcal{R}} & \} & \subset S_{n-1}, \\
S_{U}^{\Delta} & := \{ & \eta^r & \,:\, r=1,\ldots,N_U & \} & \subset S_{m-1}
\end{array}
\end{equation}
satisfying
$
\dH(S_{n-1},S_{\mathcal{R}}^{\Delta}) \le C h^p,\,
\dH(S_{m-1},S_{U}^{\Delta}) \le C h^p
$
to preserve the order of the considered scheme approximating the reachable set.
It is well known that convex sets can be described via support functions or supporting points in all directions. With this approximation we generate a finite set of supporting points of
$\mathcal{R}_h(\cdot)$ and, via its convex hull, the fully discrete reachable set
$\mathcal{R}_{h\Delta}(\cdot)$.
To this end, we also discretize the target set $\mathcal{S}$ and the control set $U$
appearing in~\eqref{eq:sv_quad_meth_local_start} and~\eqref{semigroupcombmethRh},
e.g.,~along the line of \cite[Proposition 3.4]{BBCG}:
\begin{equation}\label{SUdelta}
\begin{array}{r@{\,}l@{\,}r@{\,}l@{\,}r@{\,}l}
\widetilde{\mathcal{S}}_{\Delta} & :=\bigcup _ {l^k \in S_{\mathcal{R}}^{\Delta}} & \bb{y(l^k,\mathcal{S})}, & \ \, \mathcal{S}_{\Delta} := \co(\widetilde{\mathcal{S}}_{\Delta}), \\
\widetilde{U}_{\Delta}& :=\bigcup _ {\eta^r \in S_{U}^{\Delta}} & \bb{y(\eta^r,U)} , & \ \, U_{\Delta} := \co(\widetilde{U}_{\Delta}).
\end{array}
\end{equation}
Hence, $\mathcal{S}_{\Delta},\,U_{\Delta}$ are polytopes approximating
$\mathcal{S}$ resp.~$U$ in the Hausdorff distance with error term $\Orem{h^p}$.
Let $T_{h\Delta}(\cdot)$ be the fully discrete version of $T(\cdot)$ (it will be
defined in detail later). Our aim is to construct the graph of $T_{h\Delta}(\cdot)$ up to a given time $t_{f}$ based on the knowledge of the reachable set approximation. We divide $[t_0, t_f]$ into $K$ subintervals each of length $\Delta t$:
$$\Delta t=\frac{t_f-t_0}{K},\,h=\frac{\Delta t}{N},$$
where $t_f - t_0 = K N h$, and successively compute the sets of supporting points $Y_{h\Delta}(\Delta t)$,\ldots, $Y_{h\Delta}(t_f)$ by the algorithm described below, yielding the fully discrete reachable sets
$\mathcal{R}_{h\Delta}(i \Delta t)$, $i=1,\ldots,K$. Here $K$ determines how many sublevel sets of $T_{h\Delta}(\cdot)$ we compute, and $h$ is the step size of the numerical scheme computing $Y_{h\Delta}(i\Delta t)$ starting from $Y_{h\Delta}((i-1)\Delta t)$.
Due to \eqref{Rt} and \eqref{ReachLevel}, the description of each sublevel set of $T(\cdot)$ can be formulated only with its boundary points, i.e., the supporting points of the reachable sets at the corresponding time. For the discrete setting, at each step, we will determine the value of $T_{h\Delta}(x)$ for $x \in Y_{h\Delta}(\cdot)$. Therefore, we only store this information for constructing the graph of $T_{h\Delta}(\cdot)$ on the subset $[t_0,t_f]$ of its range.
\begin{algorithm}
\label{algorithm}~
\begin{enumerate}
\item[ step 1:] Set $Y_{h\Delta}(t_0)=\widetilde{\mathcal{S}}_{\Delta}$, $\mathcal{R}_{h\Delta}(t_0):= \mathcal{S}_{\Delta}$ as in \eqref{SUdelta}, $i=0$.
\item[ step 2:] Compute $\widetilde{Y}_{h\Delta}(t_{i+1})$ as follows
\begin{align*}
\widetilde{Y}_{h\Delta}(t_{i+1}) & =\Phi_h\big(t_{i+1},t_{i}\big)Y_{h\Delta}\big(t_{i}\big)+h\sum_{j=0}^{N}c_j\Phi_h(t_{i+1},t_{ij})\bar B(t_{ij}) \widetilde U_{\Delta}, \\
\widetilde{\mathcal{R}}_{h\Delta}(t_{i+1}) & = \co\big( \widetilde{Y}_{h\Delta}(t_{i+1}) \big),
\end{align*}
where
\begin{align} \label{eq:time_steps}
t_i & =t_0+i\Delta t,\ t_{ij}=t_i+jh \quad(j=0,1,\ldots,N).
\end{align}
\item[ step 3:] Compute the set of the supporting points $ \bigcup_{l^k\in S_{\mathcal{R}}^{\Delta}} \bb{y(l^k,\widetilde{ \mathcal{R}}_{h\Delta}(t_{i+1}))}$ and set
\begin{align} \label{eq:comp_supp_pts_in_fixed_dir}
Y_{h\Delta}(t_{i+1}) & = \bigcup\limits_{l^k\in S_{\mathcal{R}}^{\Delta}} \big\{ y\big(l^k,\widetilde{ \mathcal{R}}_{h\Delta}(t_{i+1})\big) \big\},
\end{align}
where $ y(l^k,\widetilde{ \mathcal{R}}_{h\Delta}(t_{i+1}))$ is an arbitrary element of $\Y(l^k,\widetilde{ \mathcal{R}}_{h\Delta}(t_{i+1}))$ and set $$\mathcal{R}_{h\Delta}(t_{i+1}):=\co(Y_{h\Delta}(t_{i+1})).$$
\item[step 4:] If $i<K-1$, set $i=i+1$ and go back to step 2. Otherwise, go to step 5.
\item[step 5:] Construct the graph of $T_{h\Delta}(\cdot)$ by
the (piecewise) linear interpolation based on the values $t_i$ at the points $Y_{h\Delta}(t_i)$, $i=0,\ldots,K$.
\end{enumerate}
\end{algorithm}
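Schematically, the algorithm propagates the supporting points macro step by macro step and labels every newly computed point with its time level; the sketch below (in Python, with a hypothetical \texttt{step} routine standing in for steps 2--3) records exactly the point/time pairs used for the interpolation in step 5.

```python
import numpy as np

def build_min_time_graph(Y0, step, K, dt):
    """Label the supporting points Y_hD(t_i) with t_i = t_0 + i*dt
    (here t_0 = 0); `step` is a placeholder advancing the points by dt."""
    graph = [(float(y), 0.0) for y in Y0]   # T_hD = t_0 on the target
    Y = Y0
    for i in range(1, K + 1):
        Y = step(Y)                         # steps 2-3 of the algorithm
        graph += [(float(y), i * dt) for y in Y]
    return graph                            # input data for step 5

# 1-d illustration: ydot = u, u in [-1,1], S = {0}, so R(t) = [-t, t]
# and T(y) = |y|; one macro step moves the extreme points outward by dt.
step = lambda Y: np.array([Y.min() - 0.25, Y.max() + 0.25])
graph = build_min_time_graph(np.array([0.0]), step, K=4, dt=0.25)
# every stored pair (y, t) satisfies t = |y|
```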
The algorithm computes the set of vertices $Y_{h\Delta}(t_i)$ of the polytope
$\mathcal{R}_{h\Delta}(t_i)$ which are supporting points in the directions $l^k \in
S_{\mathcal{R}}^{\Delta}$.
The following proposition provides the error estimate between the fully discrete reachable set $\mathcal{R}_{h\Delta}(\cdot)$ and $\mathcal{R}(\cdot)$.
\begin{proposition}\label{dHR_deltahl}
Let Assumptions \ref{standassum}(i)--(iii), together with
\begin{equation}\label{semiR}
\dH\Big(\mathcal{R}_{h}(t_i),\mathcal{R}(t_i )\Big)\le C_{s} h^p
\end{equation}
for the set-valued combination method~\eqref{eq:sv_comb_meth_global} in (II), be valid.
Furthermore, finitely many directions $S_U^{\Delta} \subset S_{m-1}$, $S_{\mathcal{R}}^{\Delta} \subset S_{n-1}$ are chosen
with
$$\max(\dH(S_{m-1},S_U^{\Delta}),\dH(S_{n-1},S_{\mathcal{R}}^{\Delta}))\le C_{\Delta} h^p.$$
Then, for $h$ small enough,
\begin{equation}\label{dHRRdeltahlglobal}
\begin{aligned}
& \dH\Big(\mathcal{R}_{h\Delta}(t_i),\mathcal{R}_h(t_i )\Big)\le C_{f} h^p,\\
& \dH\Big(\mathcal{R}_{h\Delta}(t_i),\mathcal{R}(t_i)\Big)\le C_{f} h^p,
\end{aligned}
\end{equation}
where $C_s,\,C_{\Delta},\, C_f$ are some positive constants and $t_i=t_0+i\Delta t,\,i=0,\ldots,K$.
\end{proposition}
\begin{proof}
\emph{The proof can be found in~\cite[Proposition~7.2.5]{Le}.}
\end{proof}
\begin{remark}
If $\mathcal S$ is a singleton, we do not need to discretize the target set.
The overall error estimate in~\eqref{dHRRdeltahlglobal} even improves in this case, since
$\dH\big(\widetilde{\mathcal{R}}_{h\Delta}(t_0),\mathcal{R}_h(t_0)\big)=0$.
\end{remark}
As we have seen in this subsection, the convexity of the reachable set plays a vital role. Therefore, this approach can only be extended to special classes of nonlinear control systems
with convex reachable sets.
In the following subsection, we provide the error estimate for $T_{h\Delta}(\cdot)$ obtained by the indicated approach under Assumptions~\ref{standassum}, regularity assumptions on $T(\cdot)$ and properties of the numerical approximation.
\subsection{Error estimate of the minimum time function}
After computing the fully discrete reachable sets in Subsection~\ref{subsec:Algorithm}, we obtain the values of $T_{h\Delta}(x)$
for all $x\in \bigcup_{i=0,\ldots,K} Y_{h\Delta}(t_i)$, $t_i= t_0 + i\Delta t$.
For boundary points $x \in \partial \mathcal{R}_{h\Delta}(t_i)$, $i=1,\ldots,K$, we define
\begin{align}
T_{h\Delta}(x) & = t_i \text{ for }
x\in \partial \mathcal{R}_{h\Delta}(t_i) ,
\label{eq:discr_min_time_bd_pt}
\intertext{together with the initial condition}
T_{h\Delta}(x) & = t_0 \ \, \text{ for } x \in \mathcal{S}_{\Delta}. \nonumber
\end{align}
The task is now to define a suitable value of $T_{h\Delta}(x)$
in the computational domain
$$
\Omega := \bigcup_{i=0,\ldots,K} \mathcal{R}_{h\Delta}(t_i),
$$
if $x$ is neither a boundary point of one of the computed reachable sets nor lies inside the target set.
First we construct a simplicial triangulation
$\bb{\Gamma_j}_{j=1,\ldots,M}$
over the set
$\Omega \setminus \inter(\mathcal{S})$ of points
with grid nodes in $ \bigcup_{i=0,\ldots,K} Y_{h\Delta}(t_i) $.
Hence,
\begin{itemize}
\item $\Gamma_j \subset \R^n$ is a simplex for $j=1,\ldots,M$,
\item $\Omega \setminus \inter(\mathcal{S}) = \bigcup_{j=1,\ldots,M}\Gamma_j$,
\item the intersection of two different simplices is either empty
or a common face,
\item all supporting points in the sets $\bb{Y_{h\Delta}(t_i)}_{i=0,\ldots,K}$
are vertices of some simplex,
\item all the vertices of each simplex have to belong either to
the fully discrete reachable set
$\mathcal{R}_{h\Delta}(t_i)$ or to $\mathcal{R}_{h\Delta}(t_{i+1})$ for some $i=0,1,\ldots,K-1$.
\end{itemize}
For the triangulation as in Figure~\ref{fig:part_triang},
we introduce the maximal diameter of simplices as
$$\Delta_{\Gamma}:= \max_{j=1,\ldots,M} \diam(\Gamma_j) .$$
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.25]{Grid}
\end{center}
\caption{Part of the triangulation}
\label{fig:part_triang}
\end{figure}
Assume that $x$ is neither a boundary point of one of the computed discrete
reachable sets $\bb{\mathcal{R}_{h\Delta}(t_i)}_{i=0,\ldots,K}$ nor an element of
the target set $\mathcal{S}$ and
let $ \Gamma_j $ be the simplex containing $x$.
Then
\begin{align} \label{eq:cpa_def}
T_{h\Delta}(x) & =\sum_{ \nu =1}^{ n+1 }\lambda_{ \nu } T_{h\Delta}(x_{ \nu} ),
\end{align}
where $x=\sum_{ \nu =1}^{ n+1 }\lambda_{ \nu } x_{ \nu },\,\sum_{ \nu =1}^{ n+1 }\lambda_{ \nu }=1$ with $\lambda_{ \nu }\ge 0$
and $\bb{x_{ \nu }}_{ \nu =1, \ldots,n+1 }$ being the vertices of $ \Gamma_j $.
If $x$ lies in the interior of $\Gamma_j$, the index $j$ of this simplex is unique.
Otherwise, $x$ lies on the common face of two or more simplices due to our
assumptions on the simplicial triangulation and~\eqref{eq:cpa_def} is well-defined. Let $i$ be the index such that $\Gamma_j \subset \mathcal{R}_{h\Delta}(t_i) \setminus \inter(\mathcal{R}_{h\Delta}(t_{i-1}))$.
Since $T_{h\Delta}(x_\nu)$ is either $t_i$ or $t_{i-1}$ due to~\eqref{eq:discr_min_time_bd_pt}, we have
\begin{align*}
T_{h\Delta}(x) & = \sum_{\nu=1}^{n+1}\lambda_\nu T_{h\Delta}(x_\nu) \le t_i, \\
\partial \mathcal{R}_{h\Delta}(t_i) & = \bb{y \in \R^n: T_{h\Delta}(y) = t_i}.
\end{align*}
The latter holds, since the convex combination is bounded by $t_i$ and equality to $t_i$
only holds, if all vertices with positive coefficient $\lambda_\nu$ lie on the
boundary of the reachable set $\mathcal{R}_{h\Delta}(t_i)$.
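To make the convex-combination rule~\eqref{eq:cpa_def} concrete, the following sketch (our own illustration, not part of the algorithm above; all coordinates and time values are hypothetical) evaluates $T_{h\Delta}$ on a single planar simplex via barycentric coordinates.

```python
# Illustrative sketch: piecewise linear interpolation T_hD(x) =
# sum_nu lambda_nu * T_hD(x_nu) on a single 2-D simplex (triangle),
# with barycentric coordinates lambda_nu >= 0 summing to 1.

def barycentric_2d(x, v):
    """Barycentric coordinates (l1, l2, l3) of x w.r.t. triangle v[0], v[1], v[2]."""
    (x1, y1), (x2, y2), (x3, y3) = v
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (x[0] - x3) + (x3 - x2) * (x[1] - y3)) / det
    l2 = ((y3 - y1) * (x[0] - x3) + (x1 - x3) * (x[1] - y3)) / det
    return (l1, l2, 1.0 - l1 - l2)

def interpolate_T(x, vertices, values):
    """Convex combination of the vertex values T_hD(x_nu)."""
    lam = barycentric_2d(x, vertices)
    assert all(l >= -1e-12 for l in lam), "x must lie in the simplex"
    return sum(l * t for l, t in zip(lam, values))
```

With vertex values $t_{i-1}$ and $t_i$ as in~\eqref{eq:discr_min_time_bd_pt}, the interpolated value never exceeds $t_i$, in accordance with the bound above.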
The following theorem provides the error estimate of the minimum time function obtained by this approach.
\begin{theorem}\label{errT}
Assume that $T(\cdot)$ is continuous with a non-decreasing modulus $\omega(\cdot)$ in $\mathcal{R}$, i.e.,
\begin{equation}
| T(x)-T(y)|\le \omega(\norm{x-y})\,\,\text{ for all } x,y \in \mathcal{R}.
\end{equation}
Let Assumptions~\ref{standassum} be fulfilled, furthermore assume that
\begin{equation}\label{dHRRhlll}
\dH(\mathcal{R}_{h\Delta}(t_i),\mathcal{R}(t_i))\le Ch^p
\quad \text{for $i=1,\ldots,K$}
\end{equation}
holds. Then
\begin{equation}\label{fullestT}
\norm{T-T_{h\Delta}}_{\infty,\, \Omega }\le \omega( \Delta_{\Gamma} )+ \omega(Ch^p),
\end{equation}
where $\norm{\cdot}_{\infty,\, \Omega }$ is the supremum norm taken over $ \Omega $.
\end{theorem}
\begin{proof}
We divide the proof into two cases.
\begin{enumerate}
\item[case 1:] $x\in \partial \mathcal{R}_{h\Delta}(t_i)$ for some $i=1,\ldots,K$. \\
Let us choose a best approximation
$\bar{x} \in \partial\mathcal{R}(t_i)$ of $x$ so that
$$\norm{x-\bar x} = \di(x,\partial \mathcal{R}(t_i)) \leq \dH(\partial \mathcal{R}_{h\Delta}(t_i),\partial \mathcal{R}(t_i))
= \dH(\mathcal{R}_{h\Delta}(t_i),\mathcal{R}(t_i)),$$
where we used \cite{Wil} in the latter equality. Clearly, \eqref{ReachLevel},
\eqref{eq:cpa_def} show that
$$
T_{h\Delta}(x)=T(\bar{x})=t_i.
$$
Then
\begin{align}
|T(x)-T_{h\Delta}(x)|&\le |T(x)-T(\bar{x})|+|T(\bar{x})-T_{h\Delta}(x)| \nonumber \\
& \le \omega(\norm{x-\bar{x}}) \le \omega\big(\dH(\mathcal{R}_{h\Delta}(t_i),\mathcal{R}(t_i))\big)\le \omega(Ch^p) \label{eq:estim_T}
\end{align}
due to \eqref{dHRRhlll}.
\item[ case 2:]
$x\in \inter\big(\mathcal{R}_{h\Delta}(t_i)\big)\setminus \mathcal{R}_{h\Delta}(t_{i-1})$
for some $i=1,\ldots,K$. \\
Let $ \Gamma_j $ be a simplex containing $x$ with
the set of vertices $\bb{x_j}_{j=1,\ldots, n+1 }$. Then $$T_{h\Delta}(x)=\sum_{j=1}^{ n+1 }\lambda_j T_{h\Delta}(x_j),$$ where $x=\sum_{j=1}^{ n+1 }\lambda_j x_j,\,\sum_{j=1}^{ n+1 } \lambda_j=1, \lambda_j\ge 0$.
We obtain
\begin{align*}
&|T(x)-T_{h\Delta}(x)|= |T(x)-\sum_{j=1}^{ n+1 }\lambda_j T_{h\Delta}(x_j)|\\
&\le |T(x)-\sum_{j=1}^{ n+1 }\lambda_j T( x_j)|+|\sum_{j=1}^{ n+1 }\lambda_j T(x_j)-\sum_{j=1}^{ n+1 }\lambda_j T_{h\Delta}(x_j)| \\
& \le \sum_{j=1}^{ n+1 }\lambda_j \bigg( |T(x)-T( x_j)|+ |T(x_j) - T_{h\Delta}(x_j)| \bigg) \le \omega(\Delta_\Gamma)+ \omega(Ch^p),
\end{align*}
where we applied the continuity of $T(\cdot)$ for the first term and the error estimate~\eqref{eq:estim_T} of case~1 for the other.
\end{enumerate}
Combining the two cases and noticing that $T(x)=T_{h\Delta}(x)=t_0$ if $x\in \mathcal{S}_{\Delta}$, we get
\begin{equation}
\norm{T-T_{h\Delta}}_{\infty,\,\Omega}
:= \max_{x \in \Omega} |T(x) -T_{h\Delta}(x)|
\le \omega(\Delta_{\Gamma})+ \omega(Ch^p).
\end{equation}
The proof is completed.
\end{proof}
\begin{remark}\label{Rem_errT}
Theorem \ref{dHRRh} provides sufficient conditions for set-valued combination methods such that \eqref{dHRRhlll} holds.
See also, e.g., \cite{DF} for the set-valued Euler method resp.~\cite{V} for Heun's method.
If the minimum time function is H\"older continuous on $\Omega $, \eqref{fullestT} becomes
\begin{equation}\label{errorHolder}
\norm{T-T_{h\Delta}}_{\infty,\, \Omega }\le C\Big((\Delta_{\Gamma} )^{\frac{1}{k}}+ h^{\frac{p}{k}}\Big)
\end{equation}
for some positive constant $C$. The inequality \eqref{errorHolder} shows that the error
estimate is improved in comparison with the one obtained in~\cite{CL}
and does not assume explicitly the regularity of optimal solutions as in~\cite{BF2}.
One possibility to define the modulus of continuity satisfying the required property of non-decrease in Theorem~\ref{errT} is as follows:
\begin{equation*}
\omega(\delta) = \sup \{ \lvert T(x) - T(y)\rvert : \norm{x-y} \le \delta\}.
\end{equation*}
An advantage of the methods of Volterra type studied in~\cite{CL} which benefit from
non-standard selection strategies is that the discrete reachable sets converge with higher
order than 2. The order 2 is an order barrier for set-valued Runge-Kutta methods
with piecewise constant controls or independent choices of controls,
since many linear control problems with intervals or boxes for the control values
are not regular enough for higher order approximations (see~\cite{V}).
Moreover, notice that there are many different triangulations based on the same data. Among them, we can always choose the one with a smaller diameter close to the Hausdorff distance of the two sets by applying standard grid generators.
\end{remark}
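The supremum-based modulus from the remark above can also be estimated numerically. Below is a minimal Python sketch (our own, with a hypothetical H\"older-type stand-in for $T$); by construction the estimate is non-decreasing in $\delta$, as required in Theorem~\ref{errT}.

```python
# Illustrative sketch: estimating the modulus of continuity
# omega(delta) = sup{|T(x)-T(y)| : |x-y| <= delta} on a finite sample.
# T below is a hypothetical stand-in (Hoelder continuous with exponent 1/2).

def empirical_modulus(T, points, delta):
    """Supremum of |T(x)-T(y)| over sampled pairs with |x-y| <= delta."""
    best = 0.0
    for x in points:
        for y in points:
            if abs(x - y) <= delta:
                best = max(best, abs(T(x) - T(y)))
    return best

T = lambda x: abs(x) ** 0.5              # hypothetical example function
pts = [k / 100.0 for k in range(101)]    # uniform sample of [0, 1]
```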
\begin{proposition}\label{errTt2}
Let the conditions of Theorem~\ref{errT} be fulfilled.
Furthermore assume that the step size $h$ is so small such that $C h^p$ in~\eqref{dHRRhlll}
is smaller than $\frac{\varepsilon}{3}$, where
\begin{align}\label{inclusionR}
\mathcal{R}(t_i) + \varepsilon B_1(0)
& \subset \inter \mathcal{R}(t_{i+1}) \quad\mbox{for all $i=0,\ldots,K-1$}.
\end{align}
Then
\begin{align}\label{inclusionRh}
\mathcal{R}_{h\Delta}(t_i) + \frac{\varepsilon}{3} B_1(0)
& \subset \inter \mathcal{R}_{h\Delta}(t_{i+1})
\end{align}
and
\begin{equation}\label{fullestTt2}
\norm{T-T_{h\Delta}}_{\infty,\, \Omega }\le 2 \Delta t,
\end{equation}
where $\norm{\cdot}_{\infty,\, \Omega }$ is the supremum norm taken over $ \Omega $.
\end{proposition}
\begin{proof}
For each $i = 0,\ldots,K-1$ we choose a constant $M_{i+1} > 0$ such that
$\mathcal{R}(t_{i+1}) \subset M_{i+1} B_1(0)$.
Since $\mathcal{R}(t_i)$ and the complement of
$ \inter \mathcal{R}(t_{i+1}) $ intersected with $M_{i+1} B_1(0)$ are disjoint compact sets,
there exists $\varepsilon > 0$ such that
\begin{align}
\mathcal{R}(t_i) + \varepsilon B_1(0)
& \subset \inter \mathcal{R}(t_{i+1}) \subset M_{i+1} B_1(0). \label{eq:eps_incl}
\end{align}
We will show that a similar inclusion as~\eqref{eq:eps_incl} holds for the
discrete reachable sets for small step sizes. If the step size $h$ is so small
that $C h^p$ in~\eqref{dHRRhlll} is smaller than $\frac{\varepsilon}{3}$, then we have
the following inclusions:
\begin{align}
\inter \mathcal{R}(t_{i+1})
& \subset \inter \big( \mathcal{R}_{h\Delta}(t_{i+1}) + C h^p B_1(0) \big)
= \inter \mathcal{R}_{h\Delta}(t_{i+1}) + C h^p \inter B_1(0),\nonumber \\
\mathcal{R}(t_i) + \varepsilon B_1(0)
& \subset \inter \mathcal{R}(t_{i+1})
\subset \inter \mathcal{R}_{h\Delta}(t_{i+1}) + \frac{\varepsilon}{3} B_1(0). \nonumber
\intertext{By the order cancellation law of convex compact sets in~\cite[Theorem~3.2.1]{PalUrb}}
\mathcal{R}(t_i) + \frac{2}{3}\varepsilon B_1(0)
& \subset \inter \mathcal{R}_{h\Delta}(t_{i+1}), \nonumber \\
\mathcal{R}_{h\Delta}(t_i) + \frac{\varepsilon}{3} B_1(0)
& \subset \big( \mathcal{R}(t_i) + \frac{\varepsilon}{3} B_1(0) \big)
+ \frac{\varepsilon}{3} B_1(0)
\subset \inter \mathcal{R}_{h\Delta}(t_{i+1}) \label{est2levfull}.
\end{align}
By \eqref{eq:cpa_def} and the triangle inequality, we have
\begin{equation}\label{estcase2b}
|T(x)-T_{h\Delta}(x)|\le \sum_{j=1}^{ n+1 }\lambda_j|T(x)- T_{h\Delta}(x_j)|.
\end{equation}
In order to obtain the estimate, we observe that
\begin{enumerate}
\item[1)] if $x_j \in \partial \mathcal{R}_{h\Delta}(t_i)$, then
$t_\nu \le T(x_j)\le t_{i+1}$ with $\nu=\max\{0, i-1\}$;
\item[2)] if $x\in \inter(\mathcal{R}_{h\Delta}(t_i))\setminus \mathcal{R}_{h\Delta}(t_{i-1})$,
then $t_\nu < T(x)\le t_{i+1}$ with $\nu=\max\{0, i-2\}$.
\end{enumerate}
To prove 1), the inequality
$T(x_j) \ge t_0$ is clear. Assume that $T(x_j) < t_{i-1}$ for some $i > 1$. Then $x_j
\in \mathcal{R}(t_{i-1})$. By the estimates~\eqref{dHRRhlll},~\eqref{est2levfull} and $C h^p<\frac{\varepsilon}{3}$, it follows that
\begin{align*}
x_j & \in \mathcal{R}_{h\Delta}(t_{i-1}) + C h^p B_1(0) \subset \inter \mathcal{R}_{h\Delta}(t_i)
\end{align*}
which is a contradiction to the assumption $x_j \in \partial \mathcal{R}_{h\Delta}(t_i)$. Hence, $T(x_j) \geq t_{i-1}$. Assume that $T(x_j) > t_{i+1}$. Then, $x_j \notin \mathcal{R}(t_{i+1})$. Furthermore, $x_j$ cannot be an element of $\mathcal{R}_{h\Delta}(t_i)$, since otherwise
a contradiction to $x_j \notin \mathcal{R}(t_{i+1})$ follows:
\begin{align*}
x_j & \in \mathcal{R}_{h\Delta}(t_i) \subset \mathcal{R}(t_i) + C h^p B_1(0)
\subset \inter \mathcal{R}(t_{i+1}).
\end{align*}
Therefore, $x_j \notin \mathcal{R}_{h\Delta}(t_i)$ which
contradicts $x_j \in \partial \mathcal{R}_{h\Delta}(t_i)$. Hence, the starting assumption $T(x_j) > t_{i+1}$ must be wrong which proves
$T(x_j) \leq t_{i+1}$. \\
To prove 2), if we assume $T(x) \leq t_{i-2}$ for some $i \geq 2$, then $x \in \mathcal{R}(t_{i-2})$
and
\begin{align*}
x & \in \mathcal{R}_{h\Delta}(t_{i-2}) + C h^p B_1(0) \subset \inter \mathcal{R}_{h\Delta}(t_{i-1})
\end{align*}
by estimate~\eqref{dHRRhlll}.
But this contradicts $x \notin \mathcal{R}_{h\Delta}(t_{i-1})$. Therefore, $T(x) > t_{i-2}$.
Assuming $T(x) > t_{i+1}$ for some $i < K-1$, then $x \notin \mathcal{R}(t_{i+1})$.
Furthermore, if $x$ is an element of $\mathcal{R}_{h\Delta}(t_i)$,
\begin{align*}
x & \in \mathcal{R}_{h\Delta}(t_i) \subset \mathcal{R}(t_i) + C h^p B_1(0)
\subset \inter \mathcal{R}(t_{i+1})
\end{align*}
which is a contradiction to $x \notin \mathcal{R}(t_{i+1})$. \\
Therefore, $x \notin \mathcal{R}_{h\Delta}(t_i)$ which
contradicts $x\in \inter(\mathcal{R}_{h\Delta}(t_i))\setminus \mathcal{R}_{h\Delta}(t_{i-1})$.
Hence, the starting assumption $T(x) > t_{i+1}$ must be wrong which proves
$T(x) \leq t_{i+1}$. Consequently, 1) and 2) are proved. Notice that
\begin{enumerate}
\item[a)] the case 1) means
\begin{alignat*}{2}
T(x_j) & \in [t_{i-1}, t_{i+1}] & \quad & (i \geq 1), \\
T(x_j) & =t_0 & \quad & (i = 0)
\end{alignat*}
and $| T(x_j) - T_{h\Delta}(x_j) | \leq \Delta t $ due to $ T_{h\Delta}(x_j)=t_i,\,i=0,\ldots,K$.
\item[b)] from the case 2), we obtain
\begin{align*}
T(x ) & \in (t_{i-2}, t_{i+1}] \quad (i \geq 2), \\
T_{h\Delta}(x_j)& - T(x) < t_i - t_{i-2} = 2 \Delta t, \\
T_{h\Delta}(x_j) & - T(x) > t_{i-1} - t_{i+1} = -2 \Delta t.
\end{align*}
Therefore, $ | T(x) - T_{h\Delta}(x_j) | \leq 2 \Delta t$
for $i \geq 2$ (similarly with estimates for $i=0,1$).
\end{enumerate}
Altogether, \eqref{fullestTt2} is proved.
\end{proof}
\section{Convergence and reconstruction of discrete optimal controls}
\label{sec:converg}
In this section we first prove the convergence of the normal cones of $\mathcal{R}_{h\Delta}(\cdot)$ to the ones of the continuous-time reachable set $\mathcal{R}(\cdot)$ in an appropriate sense. Using this result, we will be able to reconstruct discrete optimal trajectories to reach the target from a set of given points and also derive the proof of $L^1$-convergence of discrete optimal controls.
In the following, only convergence under weaker assumptions is proved, and no convergence of order 1 as in~\cite{ABGL-13}
(see the references therein for the classical field of direct discretization methods).
We also restrict ourselves to linear minimum time problems.\\
Some basic notions of nonsmooth and variational analysis which are needed in constructing and proving the convergence of controls can be found in \cite{CLSW,Rockaf}.
Let $A$ be a subset of $\R^n$ and $f: A \rightarrow \R \cup \{\infty\}$ be a function. The \emph{indicator function} of $A$ and the \emph{epigraph} of $f$ are defined as
\begin{equation}
\begin{aligned}
I_A(x)=
\begin{cases}
0 & \quad \text{ if } x\in A\\
+\infty & \quad \text{ otherwise}
\end{cases}, \quad \epi f =\bb{(x,r) \in \R^n \times \R \colon x\in A, \ r\ge f(x)}.
\end{aligned}
\end{equation}
The definitions of normal cone and subdifferential in the convex case are taken from \mbox{\cite[Sec.~8.C]{Rockaf}}.
With reference to \mbox{\cite[Definition 7.1]{Rockaf}} for epi-convergence and \cite[Definition 5.32]{Rockaf} for graphical convergence, let us recall Attouch's theorem in a reduced version which plays an important role for convergence results of discrete optimal controls and solutions.
\begin{theorem}[see~\mbox{\cite[Theorem~12.35]{Rockaf}}] \label{theo:attouch}
Let $(f^i)_i$ and $f$ be lower semicontinuous, convex, proper functions from $\R^n$
to $\R \cup \{\infty\}$. \\
Then the epi-convergence of $(f^i)_{i \in \N}$ to $f$ is equivalent to the graphical convergence
of the subdifferential maps $(\partial f^i)_{i \in \N}$ to $\partial f$.
\end{theorem}
The following theorem plays an important role in this reconstruction and will deal with the convergence of the normal cones. If the normal vectors of $\mathcal{R}_{h\Delta}(\cdot)$ converge to the corresponding ones of $\mathcal{R}(\cdot)$, the discrete optimal controls can be computed with the discrete Pontryagin Maximum Principle under suitable assumptions.\\
For the remaining part of this subsection let us consider a
fixed index $i\in \bb{1,2,\ldots,K}$.
We choose a space discretization $\Delta = \Delta(h)$ with $\Orem{\Delta} = \Orem{h^p}$
(compare with~\cite[Sec.~3.1]{BPhD}) and often suppress the index $\Delta$ for the approximate solutions and controls.
\begin{theorem}\label{theo:normaconver}
Consider a discrete approximation of reachable sets of type (I)--(III) with
\begin{align} \label{eq:conv_reach_sets}
\lim_{h \downarrow 0} \dH(\mathcal{R}_{h\Delta}(t_i),\mathcal{R}(t_i)) & = 0.
\end{align}
Under Assumptions~\ref{standassum}, the set-valued maps $x \mapsto N_{\mathcal{R}_{h\Delta}(t_i)}(x)$ converge graphically to the set-valued map $x \mapsto N_{\mathcal{R}(t_i)}(x)$
for $i=1,\ldots,K$.
\end{theorem}
\begin{proof}
Let us recall that, under Assumptions \ref{standassum} and by the construction in Subsec.~\ref{subsec:sv_discr_meth}, $\mathcal{R}_{h\Delta}(t_i)$, $\mathcal{R}(t_i)$ are
convex, compact and nonempty sets.
Moreover, we also have that the indicator functions
$I_{\mathcal{R}_{h\Delta}(t_i)}(\cdot),I_{\mathcal{R}(t_i)}(\cdot)$ are lower semicontinuous convex functions (see \cite[Exercise~2.1]{CLSW}).
By \cite[Example~4.13]{Rockaf} the convergence in~\eqref{eq:conv_reach_sets} with respect to the
Hausdorff distance also implies the set convergence in the sense of Painlev\'{e}-Kuratowski (see \cite[Sec.~4.A--4.B]{Rockaf}).
Hence, \cite[Proposition~7.4(f)]{Rockaf} applies and shows that the corresponding
indicator functions converge epi-graphically. Since the subdifferential of the (convex)
indicator functions coincides with the normal cone by~\cite[Exercise~8.14]{Rockaf},
Attouch's Theorem \ref{theo:attouch} yields the graphical convergence of the corresponding
normal cones.
\end{proof}
The remainder deals with the reconstruction of discrete optimal trajectories and
the proof of convergence of optimal controls in the \emph{$L^1$-norm},
i.e., $\int_{0}^{t_i}\|\hat{u}(t)-\hat{u}_h(t) \|_1dt\rightarrow 0$ as $h\downarrow 0$
for $\hat{u}(\cdot),\,\hat{u}_h(\cdot)$ being defined later, where
the \emph{$\ell_1$-norm} is defined for $x\in\R^n$ as $ \|x\|_1=\sum_{i=1}^{n}|x_i|$.
To illustrate the
idea, we confine ourselves to a special form of the target and control set, i.e., $\mathcal S=\bb{0},\,U=[-1,1]^m,\,t\in [0,t_i]$,
and the time-invariant, time-reversed linear system
\begin{align}
\label{eq:invardyn}
\begin{cases}
\dot{y}(t)&=\bar{A}y(t)+\bar Bu(t),\ u(t)\in [-1,1]^m,\\
y(0)&=0.
\end{cases}
\end{align}
Algorithm \ref{algorithm} can be interpreted pointwise in this context as follows. For any $y_{(i-1)N}\in Y_{h\Delta}(t_i)$ there exists a sequence of controls $\bb{u_{kj}}_{k=0,\ldots,i-1,\,j=0,\ldots,N}$ such that
\begin{equation}\label{eq:numpointw}
\begin{cases}
y_{(k-1)N}&=\Phi_h\big(t_k,t_{k-1}\big)y_{(k-1)0}+h\sum_{j=0}^{N}c_{kj}\Phi_h(t_k,t_{(k-1)j})\bar Bu_{(k-1)j},\\
y_{00}&=0,
\end{cases}
\end{equation}
for $k=1,\ldots,i$. Thus $y_{(i-1)N}=h\sum_{k=1}^{i}\sum_{j=0}^{N}c_{kj}\Phi_h(t_i,t_{(k-1)j})\bar Bu_{(k-1)j}.$
The continuous-time adjoint equation of~\eqref{eq:invardyn} written for $n$-row vectors reads as
\begin{equation}\label{eq:adjoint}
\begin{cases}
\dot{\eta}(t)&=- \eta(t)\bar{A},\\
\eta(t_i)&=\zeta
\end{cases}
\end{equation}
and its discrete version, approximated by the same method (see~\cite[Chap.~5]{Ge}) as the one used
to discretize \eqref{eq:invardyn}, i.e., \eqref{eq:numpointw}, can be written as follows. For $k=i-1,i-2,\ldots,0$ and $j=N,N-1,\ldots,1$,
\begin{equation}\label{eq:disadjoint}
\begin{cases}
\eta_{k(j-1)}&= \eta_{kj} \Phi _h(t_{kj},t_{k(j-1)}) \\
\eta_{(i-1)N}&=\zeta_{h},
\end{cases}
\end{equation}
where $\zeta ,\,\zeta_h $ will be clarified later.
By the definition of $t_{kj}$ (see Algorithm~\ref{algorithm}) the index $k0$ can be replaced by $(k-1)N$, the solution of \eqref{eq:disadjoint} in backward time is therefore possible. Here, the end condition will be chosen subject to certain transversality conditions, see the latter reference for more details.
Due to well-known arguments (see e.g.,~\cite[Sec.~2.2]{LM})
the end point of the time-optimal solution lies on the boundary of the reachable set and the adjoint solution
$\eta(\cdot)$ is an outer normal at this end point.
Similarly, this also holds in the discrete case. The following proposition formulates this fact as a discrete version of \cite[Sec.~2.2, Theorem~2]{LM}. The proof is a translation of the proof of the cited theorem in \cite{LM} to the discrete setting. For the sake of clarity, we will formulate and prove it in detail.
\begin{proposition}
Consider the system \eqref{eq:invardyn} in $\R^n$ with its adjoint problem~\eqref{eq:adjoint} as well as their discrete pendants \eqref{eq:numpointw}, \eqref{eq:disadjoint} respectively. Let $ \bb{u_{kj}} $ be a sequence of controls, $ \bb{y_{kj}} $ be its corresponding discrete solution. Then under Assumptions \ref{standassum}, for $h$ small enough,
$y_{(i-1)N} \in Y_{h\Delta}(t_i)$ if and only if there exists a nontrivial solution $\bb{\eta_{kj}}$ of \eqref{eq:disadjoint} such that
$$\eta_{kj}\bar B u_{kj}=\max_{u\in U} \bb{\eta_{kj} \bar Bu}$$
for $k=0,...,i-1,\,\,j=0,...,N$, where $Y_{h\Delta}(t_i)$ is defined as in Algorithm \ref{algorithm}.
\end{proposition}
\begin{proof}
Assume that $\bb{u_{kj}}$ is such that $y_{(i-1)N}$ is reached by the response
$$y_{(i-1)N}=h\sum_{k=1}^{i}\sum_{j=0}^{N}c_{kj}\Phi_h(t_i,t_{(k-1)j})\bar Bu_{(k-1)j}.$$
Since $\mathcal R_{h\Delta}(t_i)$ is a compact and convex set by construction, there exists a supporting hyperplane to $\mathcal R_{h\Delta}(t_i)$ at $y_{(i-1)N}$. Let $\zeta_h$ be a corresponding outer normal vector of $\mathcal R_{h\Delta}(t_i)$ at $y_{(i-1)N}$. Define the nontrivial discrete adjoint response by \eqref{eq:disadjoint}, i.e.,
\begin{equation*}
\begin{cases}
\eta_{k(j-1)}&= \eta_{kj} \Phi _h(t_{kj},t_{k(j-1)}), \\
\eta_{(i-1)N}&=\zeta_{h},
\end{cases}
\end{equation*}
Then $\eta_0=\eta_{(i-1)N} \Phi _h(t_i, 0)=\zeta_h\, \Phi _h(t_i, 0)$. Noticing that $\Phi _h(t_{kj},t_{k(j-1)} )$ is a perturbation of the identity matrix $I_n$, there exists $\bar{h}$ such that $\Phi _h(t_{kj},t_{k(j-1)} )$ is invertible for $h\in [0,\bar{h}]$ and so is $\Phi _h(t_i, 0)$. Therefore, $\eta_{(i-1)N}=\eta_0 \Phi _h^{-1}(t_i, 0)$. Now we compute the inner product of $\eta_{(i-1)N},\,y_{(i-1)N}$:
\begin{equation*}
\begin{aligned}
\eta_{(i-1)N}&\,y_{(i-1)N}= \displaystyle \eta_0 \Phi _h^{-1}(t_i, 0) \Big(h\sum_{k=1}^{i}\sum_{j=0}^{N}c_{kj}\Phi_h(t_i,t_{(k-1)j})\bar Bu_{(k-1)j}\Big)\\
&=h\sum_{k=1}^{i}\sum_{j=0}^{N}c_{kj} \eta_0 \Phi _h^{-1}(t_i, 0) \Phi_h(t_i,t_{(k-1)j})\bar Bu_{(k-1)j}\\
&=h\sum_{k=1}^{i}\sum_{j=0}^{N}c_{kj} \eta_0 \Phi _h^{-1}(t_{(k-1)j}, 0) \Phi _h^{-1}(t_i, t_{(k-1)j}) \Phi_h(t_i,t_{(k-1)j})\bar Bu_{(k-1)j}\\
&=h\sum_{k=1}^{i}\sum_{j=0}^{N}c_{kj} \eta_0 \Phi _h^{-1}(t_{(k-1)j}, 0) \bar Bu_{(k-1)j}=h\sum_{k=1}^{i}\sum_{j=0}^{N}c_{kj} \eta_{(k-1)j} \bar Bu_{(k-1)j}.\\
\end{aligned}
\end{equation*}
Now assume that $\eta_{kj}\bar B u_{kj}< \max_{u\in U} \bb{\eta_{kj} \bar Bu}$ for some indices $k,\,j$. Then define another sequence of controls as follows:
\begin{equation*}
\tilde{u}_{kj}=
\begin{cases}
u_{kj} &\text{ if } \eta_{kj}\bar B u_{kj}=\max_{u\in U} \bb{\eta_{kj} \bar Bu},\\
v_{kj} &\text{ otherwise},
\end{cases}
\end{equation*}
where $v_{kj}\in U$ is a maximizer of $u \mapsto \eta_{kj}\bar B u$ over $U$.
Let $\tilde{y}_{(i-1)N}$ be the end point of the discrete trajectory following $\bb{\tilde{u}_{kj}}$. We have
\begin{equation*}
\eta_{(i-1)N}\,\tilde{y}_{(i-1)N}=h\sum_{k=1}^{i}\sum_{j=0}^{N}c_{kj} \eta_{(k-1)j} \bar B \tilde u_{(k-1)j}
\end{equation*}
which implies
$\eta_{(i-1)N}\, y_{(i-1)N}<\eta_{(i-1)N}\,\tilde{y}_{(i-1)N}$ or $\eta_{(i-1)N}( \tilde{y}_{(i-1)N}-y_{(i-1)N})>0$ which contradicts the construction of $\eta_{(i-1)N}=\zeta_h$, an outer normal vector of $\mathcal R_{h\Delta}(t_i)$ at $y_{(i-1)N}$. Therefore, $\eta_{kj}\bar B u_{kj}= \max_{u\in U} \bb{\eta_{kj} \bar Bu}$.\\
Conversely, assume that for some nontrivial discrete adjoint response $${\eta_{(i-1)N}=\eta_0 \Phi _h^{-1}(t_i, 0)},$$ the controls satisfy
\begin{equation}\label{eq:asscontrol}
\eta_{kj}\bar B u_{kj}= \max_{u\in U} \bb{\eta_{kj}\bar B u}
\end{equation}
for all indices $k=0,...,i-1,\,j=0,...,N$. We will show that the end point $y_{(i-1)N}$ of the corresponding trajectory $\bb{y_{kj}}$ lies on the boundary of $\mathcal R_{h\Delta}(t_i)$ and not in its interior. Suppose, by contradiction, that $y_{(i-1)N}$ lies in the interior of
$\mathcal R_{h\Delta}(t_i)$. Let $\tilde{y}_{(i-1)N}$ be a point in $\mathcal R_{h\Delta}(t_i)$ reached by a sequence of controls $\bb{\tilde{u}_{kj}}$ such that
\begin{equation}\label{eq:ineqcontrass}
\eta_{(i-1)N}y_{(i-1)N} < \eta_{(i-1)N}\tilde{y}_{(i-1)N}.
\end{equation}
Our assumption \eqref{eq:asscontrol} implies that
\begin{equation}\label{eq:ineqcontr}
\eta_{kj}\bar B \tilde{u}_{kj}\le \eta_{kj} \bar Bu_{kj}
\end{equation} for all $k,j$. As above, due to \eqref{eq:ineqcontr}, we show that
$$\eta_{(i-1)N}\tilde{y}_{(i-1)N}\le \eta_{(i-1)N}y_{(i-1)N}$$ which is a contradiction to \eqref{eq:ineqcontrass}. Consequently, $y_{(i-1)N} \in \partial \mathcal R_{h\Delta}(t_i)=Y_{h\Delta}(t_i)$.
\end{proof}
Motivated by the outer normality of the adjoints in continuous resp.~discrete time and the
maximum conditions, we
define the optimal controls $\hat{u}(t),\,\hat{u}_h(t)$ as follows
\begin{equation}\label{def:contr}
\left\{
\begin{aligned}
\hat{u}(t) & =\sign (\eta(t)\bar B )^\top & & \text{for } t \in [0,t_i], \\
\hat{u}_h(t)&=\hat{u}_{kj} & & \text{if } t\in [t_{kj},t_{k(j+1)}),\,k=0,...,i-1,\, \\
& & & j=0,...,N-1,\\
\hat{u}_h(t_{(i-1)N})&=\hat{u}_{(i-1)(N-1)} & & \text{for } t=t_{(i-1)N},
\end{aligned}
\right.
\end{equation}
where $\hat{u}_{kj}=\sign (\eta_{kj}\bar B)^\top,\,k=0,...,i-1,\,j=0,...,N$
and
\begin{equation*}
w := \sign(v) \text{ with } w_\mu =
\begin{cases}
1 &\text{ if } v_\mu>0,\\
0 &\text{ if } v_\mu=0,\\
-1 &\text{ if } v_\mu<0
\end{cases}
\end{equation*}
is the \emph{signum function} and $v,w \in \R^m$, $\mu=1,\ldots,m$.
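For $U=[-1,1]^m$ the definition in~\eqref{def:contr} is easy to realize numerically: the componentwise signum of $\eta\bar B$ picks a maximizer of $u \mapsto \eta \bar B u$ over $U$. The following Python sketch is our own illustration (plain lists as matrices, hypothetical data).

```python
# Illustrative sketch: componentwise signum and the bang-bang control
# u_hat = sign(eta * Bbar)^T from (def:contr).

def sign_vec(v):
    """Componentwise signum as defined in the text."""
    return [1 if c > 0 else (-1 if c < 0 else 0) for c in v]

def control_from_adjoint(eta, Bbar):
    """For U = [-1,1]^m, sign(eta*Bbar)^T maximizes u -> eta*Bbar*u over U."""
    m = len(Bbar[0])
    etaB = [sum(eta[i] * Bbar[i][j] for i in range(len(eta))) for j in range(m)]
    return sign_vec(etaB)
```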
Owing to Theorem~\ref{theo:normaconver}, we have that the set-valued maps $(N_{\mathcal{R}_{h\Delta}(t_i)}(\cdot))_h$ converge graphically to $N_{\mathcal{R}(t_i)}(\cdot)$
which implies
that for every
sequence $(y_{(i-1)N},\eta_{(i-1)N})_N$
in the graphs there exists an element $(y(t_i),\eta(t_i))$
of the graph such that
\begin{equation}\label{eq:convernorvec}
(y_{(i-1)N},\eta_{(i-1)N}) \rightarrow (y(t_i),\eta(t_i)) \text{ as } h \downarrow 0,
\end{equation}
where $\eta_{(i-1)N} \in N_{\mathcal{R}_{h\Delta}(t_i)}(y_{(i-1)N}),\,\eta(t_i)\in N_{\mathcal{R}(t_i)}(y(t_i))$. Thus $\zeta,\,\zeta_h$ are chosen such that \eqref{eq:convernorvec} is realized.
Then it is obvious that $\eta_{kj} \rightarrow \eta(t_{kj})$ as $h \downarrow 0$ with $k=0,...,i-1$ uniformly in $j=0,...,N$.
For a function $g \colon I \rightarrow \R^m$, we denote the total variation by $V(g,I):=\sum_{i=1}^{m}V(g_i,I)$, where $V(g_i,I)$ is the usual total variation of the $i$-th component of $g$ over a bounded interval $I\subset \R$.
Now if we assume that the system \eqref{eq:invardyn}
is normal, $\hat{u}_h(t)$ converges to $\hat{u}(t)$ in
the $L^1$-norm.
\begin{proposition}
Consider the minimum time problem with the dynamics~\eqref{eq:invardyn} in $\R^n$. Assume that the normality condition holds, i.e.,
\begin{equation}\label{eq:rank}
\rk \bb{B\omega,AB\omega,\ldots,A^{n-1}B\omega}=n
\end{equation}
for each (nonzero) vector $\omega$
along an edge of $U=[-1,1]^m$ or along the two end points of the interval $U=[-1,1]$ if $m=1$. Then, under Assumptions \ref{standassum}, $\int_{0}^{t_{i}} \|\hat{u}(t)-\hat{u}_h(t)\|_1dt \rightarrow 0$ as $h\rightarrow 0$ for any $i\in \bb{1,\ldots,K}$.
\end{proposition}
\begin{proof}
Due to \eqref{eq:rank}, $\hat{u}(t)$ defined as in \eqref{def:contr} on $t_0\le t\le t_i$ is the optimal control to reach the state $\hat{y}(t_i)$ of the corresponding optimal solution from the origin. Moreover, it has a finite number of switchings, see \cite[Sec.~2.5, Corollary~2]{LM}. Therefore, the total variation $V(\hat{u}(t),[t_0,t_i])$ is bounded. Let $\displaystyle I_{kj}=[t_{kj},t_{k(j+1)})$ for $k=0,\ldots,i-1,\,j=0,\ldots,N-1$, except for ${I_{(i-1)(N-1)}=[t_{(i-1)(N-1)},t_{(i-1)N}]}$. Then
\begin{equation}
\begin{aligned}
&\int_{I_{kj}}\|\hat{u}(t)-\hat{u}_h(t)\|_1 dt\le \int_{I_{kj}}(\|\hat{u}(t)-\hat{u}(t_{kj})\|_1+ \|\hat{u}(t_{kj})-\hat{u}_h(t_{kj})\|_1)dt\\
&\le hV(\hat{u}(t),I_{kj})+h\|\sign (\eta(t_{kj})\bar B)^\top-\sign (\eta_{kj}\bar B)^\top \|_1.
\end{aligned}
\end{equation}
Taking a sum over $k=0,\ldots,i-1,\, j=0,\ldots,N-1$ we obtain
\begin{align*}
& \int_{t_0}^{t_i}\|\hat{u}(t)-\hat{u}_h(t)\|_1dt \\
\le \ & hV(\hat{u}(t),[t_0,t_i])+h\sum_{k=0}^{i-1}\sum_{j=0}^{N-1} \|\sign ( \eta(t_{kj})\bar B)^\top-\sign (\eta_{kj}\bar B)^\top \|_1.\\
\end{align*}
Since $\hat{u}(t)$ has a finite number of switchings and
$\eta_{kj}, \eta(t_{kj})$ are non-trivial with the convergence
$\eta_{kj} \rightarrow \eta(t_{kj})$ as $h\rightarrow 0$
for $k=0,\ldots,i,\, j=0,\ldots,N$, the variation $V(\hat{u}(t),[t_0,t_i])$ and the sum $\sum_{k=0}^{i-1}\sum_{j=0}^{N-1} \|\sign (\eta(t_{kj})\bar B)^\top-\sign (\eta_{kj}\bar B)^\top \|_1$ are bounded. Therefore,
\begin{equation*}
\int_{t_0}^{t_i} \|\hat{u}(t)-\hat{u}_h(t) \|_1dt \rightarrow 0 \text{ as } h\rightarrow 0.
\end{equation*}
The proof is completed.
\end{proof}
\section{Numerical tests}\label{sec:num_tests}
The following examples should serve as a collection of academic test examples
for calculating
the minimum time function for several, mainly linear control problems
which were previously discussed in the literature.
The examples also illustrate the error behavior of our proposed approach.
The space discretization follows the approach presented in
Subsection \ref{subsec:Algorithm}
and uses supporting points in directions
\begin{align*}
l^k & := \bigg( \cos\bigg(2 \pi \frac{k-1}{N_{\mathcal{R}}-1}\bigg), \ \sin\bigg(2 \pi \frac{k-1}{N_{\mathcal{R}}-1}\bigg) \bigg)^\top, \ k=1,\ldots,N_{\mathcal{R}}, \\
\eta^r & := \begin{cases}
-1 + 2(r-1) & \quad \text{if $U=[-1,1]$},\ r=1,\ldots,N_U, \\
l^r & \quad \text{if $U \subset \R^2$},\ r=1,\ldots,N_U \\
\end{cases}
\end{align*}
and normally choose either $N_U = 2$ for one-dimensional control sets or $N_U = N_{\mathcal{R}}$ for $U \subset \R^2$ in the discretizations of the unit sphere \eqref{eq:discr_unit_spheres}.
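The direction sets are straightforward to generate; the sketch below (our own) implements the formula for $l^k$ with $n=2$ and also shows that, as stated, $l^1$ and $l^{N_{\mathcal{R}}}$ coincide (angles $0$ and $2\pi$).

```python
import math

# Illustrative sketch: the directions l^k from the formula above, n = 2.
def directions(N_R):
    """Return l^k = (cos(2*pi*(k-1)/(N_R-1)), sin(...)) for k = 1..N_R."""
    out = []
    for k in range(1, N_R + 1):
        a = 2.0 * math.pi * (k - 1) / (N_R - 1)
        out.append((math.cos(a), math.sin(a)))
    return out
```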
The comparison of the two applied methods is done by computing the error with respect to the
$L^{\infty}$-norm of the difference between the approximate and the true minimum time
function evaluated at test points.
The true minimum time function is obtained
analytically by tools from control theory.
The test grid points are distributed uniformly over the domain
$\mathcal{G}=[-1,1]^2$ with step size $\Delta x= 0.02$.
\subsection{Linear examples}
In the linear, two-dimensional, time-invariant Examples~\ref{ex:1}--\ref{ex:3b} we can
check Assumption \ref{standassum}(iv)
\begin{quote}
$\mathcal{R}(t)$ is \emph{strictly expanding} on the compact interval $[t_0,t_f]$, i.e., ${\mathcal{R}(t_1) \subset \inter \mathcal{R}(t_2)}$ for all $t_0\le t_1<t_2\le t_f$.
\end{quote}
in several ways. Firstly, from the numerical calculations
we can observe this property in the shown figures for the fully discrete reachable sets.
Secondly, we can use the available analytical formula for the minimum time function
resp.~the reachable sets or check
the Kalman rank condition
$
\rk\Big[ B, A B \Big] = 2
$
for time-invariant systems if the target is the origin (see~\cite[Theorems~17.2 and~17.3]{HL}).
The control sets in the linear examples are either one- or two-dimensional polytopes (a segment or a square)
or balls and are varied to study different regularity allowing high or low order of convergence
for the underlying set-valued quadrature method.
In all linear examples, we apply a set-valued combination method of order 1 and 2 (the set-valued
Riemann sum combined with Euler's method resp.~the set-valued trapezoidal rule with Heun's method).
We start with an example having a Lipschitz continuous minimum time function and verify the
error estimate in Theorem \ref{errT}. Observe that the numerical error here is only
contributed by the spatial discretization of the target set or control set.
\begin{example}
\label{ex:1}
Consider the control dynamics (see \cite{BFS,GL})
\begin{equation}\label{example1}
\dot{x}_1=u_1,\,\,\dot{x}_2=u_2,\,\, (u_1,u_2)^\top \in U \text{ with $U:= B_1(0)$ or $U := [-1,1]^2$ }.
\end{equation}
We consider either the small ball $B_{0.25}(0)$ or the origin as target set $\mathcal{S}$. This is a simple time-invariant example with $\bar A=\begin{bmatrix}
0& 0 \\[0.3em]
0 & 0
\end{bmatrix}$, $\bar B=\begin{bmatrix}
-1& 0 \\[0.3em]
0 & -1
\end{bmatrix}$.
Its fundamental solution matrix is the identity matrix, therefore
\begin{equation*}
\mathcal{R}(t)=\Phi(t,t_0)S+\int_{t_0}^{t}\Phi(t,s)\bar B(s)U\,ds=S+(t_0-t)U,
\end{equation*}
and any method from (I)--(III) gives the exact solution, i.e.,
$$
\mathcal{R}_h(t)=\mathcal{R}(t)
=S+(t-t_0)U
$$
due to the symmetry of $U$. For instance, the set-valued Euler scheme with ${h=\frac{t_{j+1}-t_j}{N}}$ yields
\begin{equation*}
\begin{cases}
\mathcal{R}_h(t_{j+1})=\mathcal{R}_h(t_j)+h(\bar A \mathcal{R}_h(t_j)+\bar B U)=\mathcal{R}_h(t_j)-hU,\\
\mathcal{R}_h(t_0)=S,
\end{cases}
\end{equation*}
therefore, $\mathcal{R}_{h}(t_N)=S-NhU=S+( t_N -t_0)U$ and the error is only due to the space discretizations $\mathcal{S}_\Delta \approx \mathcal{S}$, $U_\Delta \approx U$ and does not depend on $h$
(see Table~\ref{tab:1}). The error would be the same for finer step size $h$ and $\Delta t$
in time or if a higher-order method is applied. Note that the error for the origin as target
set (no space discretization error) is in the magnitude of the rounding errors of floating
point numbers.
We choose $t_f = 1,\,K=10$ and $N=2$
and the set-valued Riemann sum combined with Euler's method
for the computations.
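For this example the set-valued Euler iteration can be carried out directly on support functions. The following Python sketch (our illustration, not the paper's implementation) assumes $\mathcal{S} = B_{0.25}(0)$ and $U = [-1,1]^2$, and confirms that the scheme reproduces $\delta^*(l, \mathcal{S} + (t_N - t_0)U)$ exactly, independently of $h$:

```python
import math

def support_target(l):
    # support function of the target ball B_{0.25}(0)
    return 0.25 * math.hypot(l[0], l[1])

def support_U(l):
    # support function of the control set U = [-1,1]^2
    return abs(l[0]) + abs(l[1])

def euler_support(l, t0=0.0, tf=1.0, N=100):
    # set-valued Euler scheme on support functions; since A = 0 and U is
    # symmetric, each step simply adds h * delta^*(l, U)
    h = (tf - t0) / N
    s = support_target(l)
    for _ in range(N):
        s += h * support_U(l)
    return s

# exact reachable set: R(t) = S + (t - t0) U, so
# delta^*(l, R(tf)) = 0.25 * ||l|| + (tf - t0) * (|l1| + |l2|)
l = (math.cos(0.3), math.sin(0.3))
exact = support_target(l) + 1.0 * support_U(l)
print(abs(euler_support(l) - exact))  # exact up to rounding
```

This mirrors the recursion $\mathcal{R}_h(t_{j+1})=\mathcal{R}_h(t_j)-hU$ above: only the spatial discretization of $\mathcal{S}$ and $U$ contributes to the error in Table~\ref{tab:1}.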
It is easy to check that the minimum time function is Lipschitz continuous, since
one of the equivalent Petrov conditions in~\cite{P}, \cite[Chap.~IV, Theorem~1.12]{BCD} with $U=B_1(0)$ or $[-1,1]^2$ holds:
\begin{align*}
0 & > \min_{(u_1,u_2)^{\top}\in U} \ang{\nabla d(x,\mathcal{S}),(u_1,u_2)^\top}, \\
0 & \in \inter\bigg( \bigcup_{u \in U} f(0,u) \bigg) \quad\text{with $f(x,u) = A x + B u$.}
\end{align*}
Moreover, the support function with respect to the time-reversed dynamics \eqref{example1}
\begin{align*}
\delta^* (l,\Phi(t,\tau)\bar B(\tau)U) & = \begin{cases}
\|l\| & \quad\text{if $U = B_1(0)$}, \\
|l_1|+|l_2| & \quad\text{if $U = [-1,1]^2$}
\end{cases}
\end{align*}
is constant with respect to the time $t$, so it is trivially arbitrarily often continuously differentiable with respect to $t$ with derivatives bounded uniformly for all $l\in S_{1}$.
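The two support functions above are easy to evaluate and can be cross-checked by discretizing the control set; the sketch below (an illustration with names of our choosing) compares the closed forms against a sampled maximum of $\ang{l, u}$:

```python
import math

def support_by_sampling(l, points):
    # delta^*(l, co(points)) = max over the sampled points u of <l, u>
    return max(l[0] * u1 + l[1] * u2 for (u1, u2) in points)

# U = [-1,1]^2: the support function |l1| + |l2| is attained at a vertex
square = [(sx, sy) for sx in (-1, 1) for sy in (-1, 1)]

# U = B_1(0): the support function ||l|| is approximated by sampling the circle
circle = [(math.cos(2 * math.pi * k / 720), math.sin(2 * math.pi * k / 720))
          for k in range(720)]

l = (0.6, -0.8)
print(support_by_sampling(l, square), abs(l[0]) + abs(l[1]))   # equal
print(support_by_sampling(l, circle), math.hypot(*l))          # nearly equal
```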
\begin{table}[h]
\small
\begin{tabular}{|c|c|c|c|}
\hline
$ N_{\mathcal{R}} = N_U $ & $U=B_1(0)$,
& $U=[-1,1]^2$,
& $U=[-1,1]^2$, \rule{0ex}{3ex} \\
& $\mathcal{S}=B_{0.25}(0)$
& $\mathcal{S}=B_{0.25}(0)$
& $\mathcal{S}=\bb{0}$ \rule[-1.5ex]{0ex}{1.5ex} \\
\hline
$100$& $ 6.14\times 10^{-4}$ & $ 4.9 \times 10^{-4} $
& $ 8.9 \times 10^{-16} $ \rule{0ex}{3ex} \\
\hline
$50$ & $ 24\times 10^{-4}$ & $ 19 \times 10^{-4} $
& $ 8.9 \times 10^{-16} $ \rule{0ex}{3ex} \\
\hline
$25$ & $ 0.0258 $ & $ 0.0073 $
& $ 8.9 \times 10^{-16} $ \rule{0ex}{3ex} \\
\hline
\end{tabular}\\[2ex]
\caption{Error estimates for Example~\ref{ex:1} with different control and target sets}
\label{tab:1}
\end{table}
In Fig.~\ref{fig:1_ball} the minimum time functions are plotted
for Example~\ref{ex:1} for two different control sets $U = B_1(0)$ (left) and $U = [-1,1]^2$ (right) with the same two-dimensional target set $\mathcal{S} = B_{0.25}(0)$.
The minimum time function is in general not differentiable everywhere. Since it is
zero in the interior of the target, one has at most Lipschitz continuity at
the boundary of $\mathcal{S}$.
In Fig.~\ref{fig:1_square_target_origin} the minimum time function is plotted
for the same control set as in Fig.~\ref{fig:1_ball} (right), but this time the target set
is the origin and not a small ball.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.34]{plotT_exp11}
\includegraphics[scale=0.34]{plotT_exp12}
\caption{Minimum time functions for Example~\ref{ex:1} with different control sets}
\label{fig:1_ball}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.475]{plotT_exp13}
\caption{Minimum time function for Example~\ref{ex:1} with ${U = [-1,1]^2}$, $\mathcal{S}=\{0\}$}
\label{fig:1_square_target_origin}
\end{center}
\end{figure}
\end{example}
We now study well-known dynamics such as the double integrator and the harmonic oscillator
in which the control set is one-dimensional. The classical rocket car example with
H\"older-continuous minimum time function was already computed
by the Hamilton-Jacobi-Bellman approach in~\cite[Test~1]{F} and \cite{CL,GL}, where numerical calculations
are carried out by enlarging the target (the origin) by a small ball.
\begin{example}
\label{ex:2}
a) The following dynamics is the \emph{double integrator} (see, e.g.,~\cite{CL}):
\begin{equation}\label{example3}
\dot{x}_1=x_2,\,\dot{x}_2=u,\,\,u\in U := [-1,1].
\end{equation}
We consider either the small ball $B_{0.05}(0)$ or the origin as target set $\mathcal{S}$.
Then the minimum time function is $\frac{1}{2}$--H\"older continuous
for the first choice of $\mathcal{S}$ (see \cite{AM,CL}), and the support function for the time-reversed dynamics \eqref{example3}
$$\delta^* (l,\Phi(t,\tau)\bar B(\tau)[-1,1])=\delta^* \Bigg(l,\begin{bmatrix}
1& -(t-\tau) \\[0.3em]
0 & 1
\end{bmatrix} \begin{bmatrix}
0 \\[0.3em]
-1
\end{bmatrix}[-1,1]\Bigg)=\big|(t-\tau,-1) \cdot l \big|$$
is only absolutely continuous with respect to $\tau$ for some directions $l \in S_1$
with $l_1\neq 0$. Hence, we can expect that the convergence order
for the set-valued quadrature method is at most $2$.
We fix $t_f = 1$ as maximal computed value for the minimum time function
and $N = 5$.
In Table~\ref{tab:2} the error estimates for two set-valued combination methods
are compared (order 1 versus order 2). Since the minimum time function is only
$\frac{1}{2}$--H\"older continuous we expect as overall convergence order $\frac{1}{2}$
resp.~$1$. A least squares approximation of the function $C h^{p}$ for the error term
reveals $C = 1.37606$, $p = 0.4940$ for Euler scheme combined with set-valued Riemann sum
resp.~$C = 22.18877$, $p = 1.4633$ (if $p=1$ is fixed, then $C= 2.62796$) for Heun's method combined with set-valued trapezoidal
rule. Hence, the approximated error term is close to the expected one
by Theorem \ref{errT} and Remark \ref{Rem_errT}.
Very similar results are obtained with the Runge-Kutta methods of order 1 and 2
in Table~\ref{tab:22} in which the set-valued Euler method is slightly better than the
combination method of order 1 in Table~\ref{tab:2}, and the set-valued Heun's method
coincides with the combination method of order 2, since both methods use the same approximations of the given dynamics.
Here we have chosen to double the number of directions $N_{\mathcal{R}}$ each time the step size
is halved, which is suitable for a first-order method. For a second-order method
we should have multiplied $N_{\mathcal{R}}$ by 4 instead. From this point of view it is not surprising
that there is no improvement of the error in the fifth row for step size $h = 0.0025$.
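The least-squares fit of $C h^{p}$ mentioned above amounts to linear regression in log-log coordinates. The following Python sketch (our illustration; the data are the Euler/Riemann column of Table~\ref{tab:2}) reproduces the reported values $C \approx 1.376$, $p \approx 0.494$:

```python
import math

# observed L^inf errors from Table 2 (Euler scheme & Riemann sum)
h   = [0.04, 0.02, 0.01, 0.005, 0.0025]
err = [0.2951, 0.1862, 0.1332, 0.1132, 0.0683]

def fit_Chp(h, err):
    # least-squares fit of err ~ C * h^p, i.e. of
    # log(err) ~ log(C) + p * log(h) in log-log coordinates
    x = [math.log(v) for v in h]
    y = [math.log(v) for v in err]
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    p = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
        sum((xi - xm) ** 2 for xi in x)
    C = math.exp(ym - p * xm)
    return C, p

C, p = fit_Chp(h, err)
print(C, p)  # roughly C = 1.376, p = 0.494
```

The same routine applied to the second-order column yields the exponent near $1.46$ quoted in the text.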
\begin{table}[h]
\begin{tabular}{|l|c|c|c|}
\hline
\mbox{ }\ \,$h$ & $N_{\mathcal{R}}$ & \makecell{\textbf{Euler scheme} \\ \& \textbf{Riemann sum}}
& \makecell{\textbf{Heun's scheme} \\ \& \textbf{trapezoid rule}} \\
\hline
$0.04$ & $50$ & $0.2951$ & $0.2265$ \\
\hline
$0.02$ & $100$ & $0.1862$ & $0.1180$ \\
\hline
$0.01$ & $200$ & $0.1332$ & $0.0122$ \\
\hline
$0.005$ & $400$ & $0.1132$ & $0.0062$ \\
\hline
$0.0025$ & $800$ & $0.0683$ & $0.0062$ \\
\hline
\end{tabular}\\[2ex]
\caption{Error estimates for Ex.~\ref{ex:2} a) for combination methods of order 1 and 2}
\label{tab:2}
\end{table}
\begin{table}[h]
\small
\begin{tabular}{|l|c|c|c|}
\hline
\mbox{ }\ \,$h$ & $N_{\mathcal{R}}$ & \textbf{set-valued Euler method}
& \textbf{set-valued Heun method} \\
\hline
$0.04$ & $50$ & $0.2330$ & $0.2265$ \\
\hline
$0.02$ & $100$ & $0.1681$ & $0.1180$ \\
\hline
$0.01$ & $200$ & $0.1149$ & $0.0122$ \\
\hline
$0.005$ & $400$ & $0.0753$ & $0.0062$ \\
\hline
$0.0025$ & $800$ &$0.0318$ & $0.0062$ \\
\hline
\end{tabular}\\[2ex]
\caption{Error estimates for Ex.~\ref{ex:2} a)
for Runge-Kutta meth.\ of order 1 and 2}
\label{tab:22}
\end{table}
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.375]{plotT_exp21}
\hspace*{-4ex}
\includegraphics[scale=0.375]{plotT_exp20}
\caption{Minimum time function for Example~\ref{ex:2}a) with target set $\{0\}$ resp.\ $B_{0.05}(0)$}\label{exam_31}
\label{fig:2a_small_ball}
\label{fig:2a_origin}
\end{center}
\end{figure}
As in Example~\ref{ex:1} we can consider the dynamics \eqref{example3} with the origin as a target (see the minimum time function in Fig.~\ref{fig:2a_origin}~(left)). In this case, the numerical computation by PDE approaches, i.e., the solution of the associated Hamilton-Jacobi-Bellman equation (see e.g.,~\cite{F}), requires the replacement of the target point $0$ by a small ball $B_\varepsilon(0)$ for suitable $\varepsilon>0$. This replacement surely increases the error of the calculation (compare the minimum time function in Fig.~\ref{fig:2a_small_ball} for $\varepsilon = 0.05$). However, our proposed approach works perfectly regardless of whether $\mathcal{S}$ is a two-dimensional set or a singleton.
\\[1ex]
b) The \emph{harmonic oscillator} dynamics (see~\cite[Chap.~1, Section~1.1, Example 3]{LM}) is given by
\begin{equation}\label{example4}
\dot{x}_1=x_2,\,\dot{x}_2=-x_1+u,\,\,u\in U :=[-1,1].
\end{equation}
Since the Kalman rank condition
$
\rk\Big[ B, A B \Big] = 2
$
holds, the minimum time function $T(\cdot)$ is also continuous.
The plot for $T(x)$ for the harmonic oscillator with the origin as
target, $ t_f=6,\,N_{\mathcal{R}} = 100,\,
N=5 $ and $K=40$ is shown in Fig.~\ref{fig:2b_origin}.
According to Section \ref{sec:converg} we construct open-loop time-optimal controls for the discrete
problem with target set $\mathcal{S} = \{0\}$ by Euler's method. In Fig.~\ref{fig:exam_32} the corresponding discrete open-loop time-optimal
trajectories for Examples~\ref{ex:2}a)~(left) and b)~(right) are depicted.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.27]{plotT_exp22}
\caption{Minimum time functions for Example~\ref{ex:2}b)}
\label{fig:2b_origin}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=2.4in,height=2.1in]{set_valued_rocket_trajectories}
\includegraphics[width=2.5in,height=2.1in]{set_valued_harmonic_trajectories}
\caption{Approximate optimal trajectories for Example~\ref{ex:2}a) resp.~b)}\label{fig:exam_32}
\end{center}
\end{figure}
\end{example}
The following two examples exhibit smoothness of the support functions and would even allow
for methods with order higher than two with respect to time discretization.
The first
one has a special linear dynamics and is smooth, although the control set is a unit square.
\begin{example}
\label{ex:3a}
In the third linear two-dimensional example the reachable set for various end times $t$
is always a polytope with four vertices and coinciding outer normals at its faces.
Therefore, it is a smooth example which would even justify the use of methods with higher order than 2 to
compute the reachable sets (see \cite{BLcham,BL}).
It is similar to Example~\ref{ex:counter_ex_1}, but has an additional column in matrix $B$ and is a variant of \cite[Example~2]{BLcham}.
Again, we fix $t_f = 1$ as maximal time value and compute the result with $N=2$.
We choose $N_{\mathcal{R}} = 50$ normed directions, since the reachable set has only four different
vertices.
\begin{equation}\label{example15}
\begin{bmatrix}
\dot x_1 \\[0.3em]
\dot x_2
\end{bmatrix}=\begin{bmatrix}
0 & -1 \\[0.3em]
2 & 3
\end{bmatrix}\begin{bmatrix}
x_1 \\[0.3em]
x_2
\end{bmatrix}+\begin{bmatrix}
1 & -1 \\[0.3em]
-1 & 2
\end{bmatrix}\begin{bmatrix}
u_1 \\[0.3em]
u_2
\end{bmatrix},
\end{equation}
where $(u_1,\,u_2)^\top \in [-1,1]^2$. Let the origin be the target set $\mathcal{S}$.
The fundamental solution matrix of the time-reversed dynamics of \eqref{example15} is given by
\begin{align*}
\Phi(t,\tau) & = \begin{bmatrix}
2 e^{-(t-\tau)} - e^{-2(t-\tau)}
& e^{-(t-\tau)} - e^{-2(t-\tau)} \\[0.3em]
-2 e^{-(t-\tau)} + 2 e^{-2(t-\tau)}
& -e^{-(t-\tau)} + 2 e^{-2(t-\tau)}
\end{bmatrix}.
\end{align*}
This is a smooth example in the sense that the support function for the time-reversed
set-valued dynamics of \eqref{example15},
\begin{align*}
\delta ^*(l,\Phi(t,\tau)\bar B(\tau)[-1,1]^2)=e^{-(t-\tau)}\vert l_1-l_2 \vert + e^{-2(t - \tau)}\vert l_1-2l_2 \vert,
\end{align*}
is smooth with respect to $\tau$ uniformly in $l \in S_1$.
The analytical formula for the (time-continuous) minimum time function is as follows:
\begin{equation*}
\begin{aligned}
T((x_1,x_2)^\top)=\max \bb{& t \colon \ t \geq 0 \text{ is the solution of one of the equations }\\
& x_2=-2x_1\pm (e^{-t}-1),\,x_2=-x_1\pm 1/2(1-e^{-2t})}.
\end{aligned}
\end{equation*}
A least squares approximation of the function $C h^{p}$ for the error term
reveals ${C = 2.14475}$, $p = 0.8395$ for the set-valued combination method of order~1
and $C = 23.9210$, $p= 1.7335$ (if $p=2$ is fixed, then $C=70.1265$)
for the one of order~2. The values are similar to the expected ones from Remark \ref{Rem_errT},
since the minimum time function (see Fig.~\ref{fig:3a}~(left)) is Lipschitz (see~\cite[Sec.~IV.1, Theorem~1.9]{BCD}).
Similarly, another variant of this example with a one-dimensional control can be constructed by deleting
the second column in matrix $B$. The resulting (discrete and continuous-time) reachable sets
would be line segments. Thus, the algorithm would compute the fully discrete minimum time
function on this one-dimensional subspace. The absence of interior points in the reachable
sets is not problematic for this approach, in contrast to common approaches based on the
Hamilton-Jacobi-Bellman equation, as shown in Example~\ref{ex:counter_ex_1}.
\begin{table}[h]
\small
\begin{tabular}{|l|c|c|}
\hline
\mbox{ }\ \ \,$h$ & \textbf{Euler scheme \& Riemann sum}
& \textbf{Heun's scheme \& trapezoid rule} \\
\hline
$0.05$ & $0.170\phantom{0}$ & $0.1153\phantom{000}$ \\
\hline
$0.025$ & $0.095\phantom{0}$ & $0.0470\phantom{000}$ \\
\hline
$0.0125$ & $0.0599$ & $0.0133\phantom{000}$ \\
\hline
$0.00625$ & $0.0285$ & $0.0032\phantom{000}$ \\
\hline
\end{tabular}\\[2ex]
\caption{Error estimates for Example~\ref{ex:3a}
for methods of order 1 and 2}
\label{tab:3}
\end{table}
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.35]{plotT_exp15}
\ \,
\includegraphics[scale=0.4]{plotT_exp31}
\caption{ Minimum time functions for Examples~\ref{ex:3a} and~\ref{ex:3b}}
\label{fig:3a}
\end{center}
\end{figure}
\end{example}
The next example involves a ball as control set and leads naturally to a smooth problem.
\begin{example}
\label{ex:3b}
The following smooth example is very similar to the previous example.
It is given in \cite[Example~4.2]{BBCG}, \cite[Example~4.4]{BL}
\begin{equation}\label{example16}
\begin{bmatrix}
\dot{x}_1 \\[0.3em]
\dot{x}_2
\end{bmatrix}=\begin{bmatrix}
0 & -1 \\[0.3em]
2 & 3
\end{bmatrix}\begin{bmatrix}
x_1 \\[0.3em]
x_2
\end{bmatrix}+B_1(0)
\end{equation}
and uses a ball as control set. This is a less academic example than Example~\ref{ex:3a} (in which the matrix $B(t)$ was carefully chosen), since a ball as control
set often allows the use of higher order methods for the computation of reachable sets
(see~\cite{BL,B}). Here, no analytic formula for the minimum time function is available
so that we can study only numerically the minimum time function (see Fig.~\ref{fig:3a}~(right)).
Obviously, the support function is again smooth with respect to $\tau$ uniformly in all
normed directions $l$, since
\begin{align*}
\delta ^*(l,\Phi(t,\tau) B_1(0)) & = \| \Phi(t,\tau)^\top l \|.
\end{align*}
\end{example}
\subsection{A nonlinear example}
\label{subsec_nonlin}
The following special bilinear example with convex reachable sets gives hope that
our approach can be extended to some classes of nonlinear dynamics.
We approximate the time-reversed dynamics of Example~\ref{ex:4}
by Euler's and Heun's method.
\begin{example}
\label{ex:4}
The nonlinear dynamics is one of the examples in \cite{GL}.
\begin{equation}\label{example5}
\dot{x}_1=-x_2+x_1 u,\,\,\dot{x}_2=x_1+x_2 u,\,\,u\in [-1,1].
\end{equation}
With this dynamics, after computing the true minimum time function we observe that $T(\cdot)$ is Lipschitz continuous and its sublevel set, which is exactly the reachable set at the corresponding time, satisfies the required properties. The target set $\mathcal{S}$ is $B_{0.25}(0)$. \\
We fix $t_f = 1$ as maximal computed value for the minimum time function
and $N=2$.
Estimating the error term $C h^{p}$ in Table~\ref{tab:4} by least squares approximation yields the values
$C = 0.3293133$, $p= 1.8091$ for the set-valued Euler method
and $C = 0.5815318$, $p= 1.9117$ for the Heun method.
The unexpectedly good behavior of Euler's method stems from the specific behavior of the trajectories.
Although the distance of the end point of the Euler iterates to the true end point is only
halved when the step size is halved, the distance of the Euler iterates to the boundary of the
true reachable set shrinks almost by a factor of 4 due to the specific tangential
approximation.
In Fig.~\ref{exam_5} the Euler iterates are marked with \textasteriskcentered\ in red color,
while Heun's iterates are shown with $\circ$ marks in blue color. The symbol $ \bullet $ marks the end point of the corresponding true solution.
Observe that the dynamics originates from the following system in polar coordinates
\begin{equation*}
\dot{r}=r u,\,\,\dot{\varphi}=1 ,\,\,u\in [-1,1].
\end{equation*}
Hence, the reachable set will grow exponentially with increasing time.
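This exponential growth can be observed numerically. The sketch below (illustrative only, names of our choosing) applies the explicit Euler method to \eqref{example5} with the constant control $u \equiv 1$, which maximizes the radial growth $\dot r = r u$:

```python
import math

def euler_radius(r0=1.0, t=1.0, N=1000):
    # explicit Euler for the dynamics x1' = -x2 + x1*u, x2' = x1 + x2*u
    # with the constant control u = 1 (maximal radial growth)
    h = t / N
    x1, x2 = r0, 0.0
    for _ in range(N):
        # simultaneous update of both components
        x1, x2 = x1 + h * (-x2 + x1), x2 + h * (x1 + x2)
    return math.hypot(x1, x2)

# in polar coordinates r' = r with u = 1, so the exact radius at
# time t is r0 * e^t; the Euler radius converges to this value
print(euler_radius(), math.e)
```

The endpoint radius approaches $r_0 e^{t}$ as $h \to 0$, in line with the exponential growth of the reachable sets noted above.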
\begin{table}[h]
\small
\captionsetup{width=.95\linewidth}
\begin{tabular}{|l|c|c|c|}
\hline
\mbox{ }\ \,$h$ & $N_{\mathcal{R}}$ & \textbf{set-valued Euler scheme}
& \textbf{set-valued Heun's scheme} \\
\hline
$0.5$ & $ 50 $&$0.0848\phantom{00}$ & $0.1461\phantom{00}$ \\
\hline
$0.1$ & $100 $&$0.0060\phantom{00}$ & $0.0076\phantom{00}$ \\
\hline
$0.05$ &$200$ & $0.0015\phantom{00}$ & $0.0020\phantom{00}$ \\
\hline
$0.025$& $400$ & $0.00042\phantom{0}$ & $0.000502$ \\
\hline
$0.0125$ &$800$ & $0.000108$ & $0.000126$ \\
\hline
\end{tabular}\\[2ex]
\caption{Error estimates for Example~\ref{ex:4} with set-valued methods
of order 1 and 2}
\label{tab:4}
\end{table}
\noindent
The minimum time function for this example is shown in Fig.~\ref{exam_5}.
\begin{figure}[htp]
\captionsetup{width=.95\linewidth}
\begin{center}
\includegraphics[scale=0.3]{explainnonlinear2_col}
\includegraphics[width=2.53in,height=2.4in]{plotT_exp5}
\caption{Euler and Heun's iterates, minimum time function for Example~\ref{ex:4} resp.}
\label{exam_5}
\end{center}
\end{figure}
\end{example}
\subsection{Non-strict expanding property of reachable sets}
\label{subsec_non_exp_prop}
The next example violates the continuity of the minimum time function (the dynamics is not normal).
Nevertheless, the proposed Algorithm \ref{algorithm} is able to provide a good
approximation of the discontinuous minimum time function.
This example also shows that boundary points of the reachable set
can no longer be characterized via time-minimal points
(compare Propositions~\ref{prop:bd_descr_monotone_case_w_level_set}
and~\ref{prop:bd_descr_w_level_set}), if the strict expanding property
of (the union of) reachable sets is not satisfied.
\begin{example}
Consider the dynamics
\label{ex:counter_ex_1}
\begin{equation}
\begin{bmatrix}
\dot{x}_1 \\[0.3em]
\dot{x}_2
\end{bmatrix}=\begin{bmatrix}
0 & -1 \\[0.3em]
2 & 3
\end{bmatrix}\begin{bmatrix}
x_1 \\[0.3em]
x_2
\end{bmatrix}+ u_1 \begin{bmatrix}
1 \\[0.3em]
-1
\end{bmatrix}
\end{equation}
with $u_1 \in U = [-1,1]$, $\mathcal{S} = \{0\}$ and $t \in I=[0, t_f]$.
The Kalman rank condition yields
$
\rk\Big[ B, A B \Big] = 1 < 2,
$
so that the normality of the system is not fulfilled. The fundamental system $\Phi(t,\tau)$ (for the time-reversed system)
is the same as in Example~\ref{ex:3a} so that
\begin{align*}
& \delta ^*(l,\Phi(t,\tau)\bar B(\tau)[-1,1])=e^{\tau-t}\vert l_1-l_2 \vert
= e^{\tau-t} \delta ^*(l, V), \\
\intertext{with the line segment $V = \co(\begin{bmatrix}
-1 \\[0.3em]
1
\end{bmatrix}, \begin{bmatrix}
1 \\[0.3em]
-1
\end{bmatrix})$.
Since}
& \int_0^t \delta^*(l, \Phi(t,\tau)\bar B(\tau)[-1,1]) d\tau
= e^{\tau-t} \bigg|_{\tau=0}^t \cdot \delta ^*(l, V) \\
& = (1 - e^{-t}) \cdot \delta ^*(l, V) = \delta ^*(l, (1 - e^{-t}) V), \\
\mathcal{R}(t) & = \int_0^t \Phi(t,\tau)\bar B(\tau)[-1,1] d\tau = (1 - e^{-t}) V.
\end{align*}
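The closed form $\int_0^t e^{\tau-t}\,d\tau = 1 - e^{-t}$ used above can be checked with a simple quadrature sketch (our illustration only):

```python
import math

def quad_support_factor(t, M=100000):
    # midpoint rule for the integral of e^(tau - t) over [0, t]
    h = t / M
    return h * sum(math.exp((k + 0.5) * h - t) for k in range(M))

# closed form: 1 - e^(-t)
print(quad_support_factor(2.0), 1.0 - math.exp(-2.0))
```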
Hence, the reachable set is an increasing line segment (and always part of the same line
in $\R^2$, i.e., it is one-dimensional so that the interior is empty). Clearly,
both inclusions
\begin{align*}
\mathcal{R}(s) \subset \mathcal{R}(t) \quad\text{or}\quad
\RSU(s) \subset \RSU(t) \quad\text{for $t_0 \le s \le t \le t_f$},
\end{align*}
i.e., \eqref{ex:relaxed_expand},
hold, but not the strictly expanding property of $\overline{\mathcal{R}}(\cdot)$
on $[t_0, t_f]$ in Assumptions \ref{standassum}(iv) and~(iv)', i.e.,
\begin{align}
\overline{\mathcal{R}}(t_1) & \subset \inter \overline{\mathcal{R}}(t_2) \text{\ for all $t_0\le t_1<t_2\le t_f$, where \rule{0ex}{4ex}} \label{eq:strict_exp_prop} \\
\overline{\mathcal{R}}(t) & = \begin{cases}
\mathcal{R}(t) & \text{for Assumption (iv)}, \\
\RSU(t) & \text{for Assumption (iv)'}.
\end{cases}\nonumber
\rule[-5.5ex]{0ex}{5.5ex}
\end{align}
The strict inclusion only holds in the relative interior. By \cite[Sec.~IV.1, Proposition~1.2]{BCD} the minimum time function is discontinuous (it has infinite values outside the line segment).
The plots of the two continuous-time reachable sets $\mathcal{R}(t)$ for $t=1,2$ together with the true minimum time function (in red)
and its discrete analogue (in green) obtained by the Euler scheme with $h = 0.025$ are shown
in Fig.~\ref{fig:counter_exam}:
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.34]{plot_example_2_18}
\quad
\includegraphics[scale=0.35]{reach_set_cont_discr_ex_4_1}
\end{center}
\caption{Reachable sets and minimum time functions for Example~\ref{ex:counter_ex_1}}
\label{fig:counter_exam}
\end{figure}
The two red points are the end points of the line segment for a smaller time $t_1 = 1$,
the two blue points are the end points of the line segment for a larger time $t_2 = 2 > t_1$.
The blue line segment is the reachable set for time $t_2$ (also the reachable set up to
time $t_2$). \\
All four points are on the boundary of the blue set $\mathcal{R}(t_2)$, but the minimum time to reach
the two blue points is $t_2$, while the minimum time to reach
the two red points is $t_1 < t_2$ which is a contradiction to
Proposition \ref{prop:bd_descr_w_level_set}.
\end{example}
\subsection{Problematic examples}
The first two examples show linear systems with hidden stability properties so that the discrete
reachable sets converge to a bounded convex set if the time goes to infinity (or is large enough
in numerical experiments). For larger end times the numerical calculation gets more demanding,
since the step size must be chosen small enough according to Proposition \ref{errTt2}.
The remaining part of the subsection
contains examples that violate Assumptions \ref{standassum}(iv) and (iv)'
from Proposition~\ref{prop:bd_descr_monotone_case_w_level_set}.
The examples demonstrate that a target or a control set not containing the origin
(as a relative interior point) might lead to non-monotone behavior
of the (union of) reachable sets. In all of these examples the union of reachable sets is no longer convex.
\begin{example}\label{ex:n1}
We consider the following time-dependent linear dynamics:
\begin{equation}\label{examplen1}
\dot{x}_1=-x_2,\,\,\dot{x}_2=x_1 - \frac{1}{t^2} u,\,\,u\in [-1,1].
\end{equation}
The reachable sets converge towards a final, bounded, convex set due to
the scaling factor $\frac{1}{t^2}$ in the matrix $B(t)$, see~Fig.~\ref{fig:exam_n1}~(left).
From a formal point of view the strict expanding condition \eqref{inclusionR} in Proposition \ref{errTt2}
is satisfied, but the positive number $\varepsilon$ tends to zero for increasing end time. On the other hand
we would stop the calculations if the Hausdorff distance of two consecutive discrete reachable sets
is below a certain threshold.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.34]{exam_13e_euler_T_multiple_pi_plus_1_legend}
\ \,
\includegraphics[scale=0.34]{exam_r12b_euler_T_multiple_legend}
\caption{Reachable sets with various end times $t_f$ for Examples~\ref{ex:n1} and \ref{ex:n2}}
\label{fig:exam_n1}
\end{center}
\end{figure}
\end{example}
\begin{example}\label{ex:n2}
We reconsider Example~\ref{ex:3a} on the larger time interval $[t_0,t_f]=[0,100]$. The matrix $\bar{A}$ has the negative eigenvalues $-1$ and $-2$.
Hence, the reachable sets converge towards a final, bounded, convex set,
see~Fig.~\ref{fig:exam_n1}~(right). We experience the same numerical problems as
in Example~\ref{ex:n1}.
\end{example}
\begin{example}
\label{ex:n4}
Let the dynamics be given by
\begin{equation}\label{examplen4}
\dot{x}_1=x_2 + u_1,\,\,\dot{x}_2=-x_1 + u_2,\,\,u\in B_1(0).
\end{equation}
In case a) the reachable sets for a given end time are always balls around the origin
(see~Fig.~\ref{fig:exam_n4} (left)),
if the target set is chosen as the origin.
In case b) the point $(2,2)^\top$ is considered as target set. Fig.~\ref{fig:exam_n4}~(right) shows
that the union of reachable sets is no longer convex.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.37]{exam_r13f_euler_T_multiple_legend_v2}
\quad
\includegraphics[scale=0.37]{plot_reach_set_ex_4_10_union_up_to_2_pi_x0_2_2_50_sets_cropped}
\caption{Reachable sets with various end times and different target sets for Example~\ref{ex:n4}}
\label{fig:exam_n4}
\end{center}
\end{figure}
\end{example}
\begin{example}\label{ex:n5}
Let us reconsider the dynamics \eqref{example3} of Example \ref{ex:2}, i.e.,
\begin{equation*}
\dot{x}_1=x_2,\,\dot{x}_2=u,\,\,u\in U.
\end{equation*}
In the first case, let $\mathcal{S}=\bb{0},\,\,U=[0,1]$. From the numerical calculations, we observe that $\mathcal{R}(t)$, $\mathcal{R}_{\le}(t)$ are still convex and satisfy \eqref{ex:relaxed_expand} in Remark \ref{rem:inclusion}, but violate the strictly expanding property \eqref{eq:strict_exp_prop} as shown in Fig.~\ref{fig:exam_n5}~(left). In the other case, $U=[1,2]$ is chosen. The convex reachable set $\mathcal{R}(t)$ is not only enlarging, but also moving which results in the nonconvexity of $\mathcal{R}_{\le}(t)$. Moreover, both \eqref{ex:relaxed_expand} in Remark \ref{rem:inclusion} and \eqref{eq:strict_exp_prop} are not fulfilled in this example as depicted in Fig.~\ref{fig:exam_n5}~(right).
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.34]{BSP_R11a3_euler_series_K_5_N_100_M_200_v2}
\quad
\includegraphics[scale=0.34]{BSP_R11a4_euler_series_K_5_N_100_M_200_v3}
\caption{Reachable sets with various end times and different control sets for Example~\ref{ex:n5}}
\label{fig:exam_n5}
\end{center}
\end{figure}
\end{example}
\section{Conclusions}
\label{sec:concl}
Although the underlying set-valued method approximating reachable sets in linear control
problems is very efficient,
the numerical implementation is a first realization only and can still be considerably improved.
In particular, step~3 in Algorithm \ref{algorithm}
can be computed more efficiently than in our
test implementation. Furthermore, higher order methods like the set-valued Simpson's rule
combined with the Runge-Kutta(4) method are an interesting option in examples where the underlying
reachable sets can be computed with higher order of convergence than 2, especially if
the minimum time function is Lipschitz. But even if it is merely H\"older continuous
with exponent $\frac{1}{2}$, the higher order of the set-valued quadrature method
can balance the missing regularity of the minimum time function and improve the error
estimate.
We are currently working on extending this first approach for linear control problems
without the monotonicity assumption on reachable sets and for nonlinear control problems.
\section*{Acknowledgements}
The authors want to express their thanks to Giovanni Colombo, especially for pointing us to Attouch's theorem, and to Lars Gr\"une. Both of them supported us with helpful suggestions and motivating questions. They are also grateful to Matthias Gerdts for his comments on optimal control.
\section{Introduction}
Nowadays, due to advances in imaging devices, large scale images have become increasingly available, and the need for parallel algorithms for image processing has arisen.
One suitable approach for parallel computation is the domain decomposition method~(DDM), in which a problem is solved by splitting its domain into several smaller subdomains and solving the smaller problems in the subdomains separately.
We consider the Rudin--Osher--Fatemi (ROF) model~\cite{ROF:1992} as a model problem,
which is a classical and effective model for image denoising:
\begin{equation}
\label{ROF}
\min_{u \in BV(\Omega)} \frac{\alpha}{2}\int_{\Omega} {(u-f)^2 \,dx} + TV(u),
\end{equation}
where $\Omega$ is the rectangular domain of an image, $f \in L^2 (\Omega)$ is an observed noisy image, $\alpha$ is a positive denoising parameter,
and $TV(u)$ is the total variation measure defined by
\begin{equation*}
\label{TV}
TV(u) = \sup \left\{ \int_{\Omega} {u \mathrm{div} \mathbf{q} \,dx} : \mathbf{q} \in (C_0^1 (\Omega))^2 , |\mathbf{q} | \leq 1 \right\}.
\end{equation*}
Here, $| \mathbf{q}| \leq 1$ means that $|\mathbf{q} (x) | \leq 1$ for a.e.\ $x \in \Omega$.
The solution space $BV(\Omega)$ denotes the space of the functions in $L^1 (\Omega)$ with finite total variation,
which is a Banach space equipped with the norm $\|u \|_{BV(\Omega)} = \| u \|_{L^1 (\Omega)} + |Du|(\Omega)$.
It is well known that the ROF model has an anisotropic diffusion property so that it preserves edges and discontinuities in images~\cite{SC:2003}.
While overlapping DDMs for image restoration were considered in~\cite{FLS:2010,XTW:2010}, nonoverlapping DDMs for the total variation minimization were proposed in \cite{FS:2009, HL:2013}.
However, Lee and Nam~\cite{LN:2017} gave a counterexample showing that an overlapping DDM need not converge to the global minimizer.
In~\cite{LLWY:2016}, Lee~et al.\ suggested DDMs with the primal-dual stitching technique.
In \cite{CTWY:2015, HL:2015, LN:2017}, DDMs based on the dual total variation minimization were proposed.
In particular, Chang~et al.~\cite{CTWY:2015} showed that the overlapping subspace correction methods for the dual ROF model have $O(1/n)$ convergence.
There are several major difficulties in designing DDMs for~\cref{ROF}.
First, the energy functional in~\cref{ROF} is nonsmooth, which makes the design of solvers hard.
In addition, the energy functional is nonseparable in the sense that it cannot be expressed as the sum of the local energy functionals in the subdomains due to the total variation term.
Finally, the solution space $BV(\Omega)$ allows discontinuities of a solution on the subdomain interfaces, so that it is difficult to design an appropriate interface condition of a solution.
One way to overcome such difficulties is to consider the Fenchel--Rockafellar dual problem as in \cite{CTWY:2015, HL:2015, LN:2017},
which is stated as
\begin{equation}
\label{dual_ROF_old}
\min_{\mathbf{p} \in (C_0^1 (\Omega))^2} \frac{1}{2\alpha} \int_{\Omega} ( \mathrm{div} \mathbf{p} + \alpha f )^2 \,dx \hspace{0.5cm}
\textrm{subject to } |\mathbf{p}| \leq 1.
\end{equation}
Even though the inequality constraint $|\mathbf{p}| \leq 1$ is cumbersome to treat, \cref{dual_ROF_old} is more suitable for DDMs,
since the energy functional is separable and the solution space $(C_0^1 (\Omega))^2$ has some regularity on the subdomain interfaces.
The desired primal solution $u$ is recovered from the dual solution $\mathbf{p}$ of~\cref{dual_ROF_old} by the following relation:
\begin{equation*}
u = f + \frac{1}{\alpha} \mathrm{div} \mathbf{p}.
\end{equation*}
Faster algorithms for solving \cref{dual_ROF_old} were developed in~\cite{BT:2009, Nesterov:2005}.
In the existing works~\cite{Chambolle:2004, CTWY:2015, HL:2015, LN:2017} for~\cref{dual_ROF_old}, the problems were discretized in the finite difference framework.
Each pixel in an image was treated as a discrete point on a grid, and the dual variable was considered as a vector-valued function on the grid.
The discrete gradient and divergence operators were defined by finite difference approximations of the continuous gradient and divergence operators.
In this paper, we propose a finite element discretization for \cref{dual_ROF_old}, which is more suitable for the DDMs than the existing ones.
Each pixel in an image is treated as a square finite element, and the problem \cref{dual_ROF_old} is discretized by using the conforming lowest order Raviart--Thomas element~\cite{RT:1977}.
Based on the proposed discretization, we propose a primal DDM which is similar to the classical Schur complement method for the second order elliptic problems.
Eliminating the interior degrees of freedom in each subdomain yields a minimization problem on the subdomain interfaces that is equivalent to the full-dimensional problem.
The functional of the resulting minimization problem has enough regularity to apply FISTA~\cite{BT:2009}.
Thus, the proposed primal DDM achieves $O(1/n^2)$ convergence, and to the best of our knowledge, it is the best rate among the existing DDMs for the ROF model.
In addition, we propose a primal-dual DDM based on an equivalent saddle point problem.
The continuity of a solution on the subdomain interfaces is enforced by the method of Lagrange multipliers as in \cite{DCT:2016, FLP:2000, FR:1991}, and it yields an equivalent saddle point problem of the original variable (primal) and the Lagrange multipliers (dual).
The local problems for the proposed primal-dual DDM can be solved at a linear convergence rate, which makes the method very fast.
The rest of the paper is organized as follows.
In \cref{Sec:dual_ROF}, a conforming discretization of the dual ROF model with a Raviart--Thomas finite element space is introduced.
A primal DDM based on an equivalent minimization problem on the subdomain interfaces is presented in \cref{Sec:primal_DD}.
A primal-dual DDM based on an equivalent saddle point problem is considered in \cref{Sec:pd_DD}.
We present numerical results for the proposed methods in various settings in \cref{Sec:numerical}.
Finally, we conclude the paper with some remarks in \cref{Sec:conclusion}.
\section{The Dual ROF Model}
\label{Sec:dual_ROF}
\subsection{Preliminaries}
We review some preliminaries about the dual ROF model.
The space $H(\div; \Omega)$ is defined as
\begin{equation*}
H(\div; \Omega) = \left\{ \mathbf{p} \in (L^2 (\Omega))^2 : \mathrm{div} \mathbf{p} \in L^2 (\Omega) \right\}.
\end{equation*}
It is a Hilbert space equipped with an inner product
\begin{equation*}
\left< \mathbf{p}, \mathbf{q} \right>_{H(\div; \Omega)} = \int_{\Omega} \mathbf{p} \cdot \mathbf{q} \,dx + \int_{\Omega} \mathrm{div} \mathbf{p} \, \mathrm{div} \mathbf{q} \,dx,
\end{equation*}
and its induced norm is called the $H(\div; \Omega)$ graph norm.
A remarkable property of $H(\div; \Omega)$ is that, for a vector function $\mathbf{p} \in H(\div; \Omega)$, the normal component $\mathbf{p} \cdot \mathbf{n}$ on $\partial \Omega$ is well-defined \cite{BBF:2013, GR:2012}.
We define $H_0 (\div ; \Omega)$ as the subspace of $H(\div; \Omega)$ with vanishing normal component on $\partial \Omega$.
It can be shown that the space $H_0 (\div ; \Omega)$ is the closure of $(C_0^{\infty}(\Omega))^2$ in the $H(\div; \Omega)$ graph norm~\cite{Monk:2003}.
Thus, it is natural to consider the following alternative formulation of~\cref{dual_ROF_old} using $H_0 (\div ; \Omega)$ as the solution space:
\begin{equation}
\label{dual_ROF}
\min_{\mathbf{p} \in H_0 (\div ; \Omega)} \left\{ \mathcal{J}(\mathbf{p}) := \frac{1}{2\alpha} \int_{\Omega} (\mathrm{div} \mathbf{p} + \alpha f)^2 \,dx \right\} \hspace{0.5cm}
\textrm{subject to } |\mathbf{p}| \leq 1.
\end{equation}
We notice that this formulation was also considered in \cite{CTWY:2015}.
\subsection{Finite Element Discretizations}
A digital image consists of a number of rows and columns of pixels, each holding a value that represents the intensity at a specific point.
We regard each pixel as a unit square and an image as a piecewise constant function in which each piece is a single pixel.
In this sense, we regard each pixel in a digital image as a square finite element whose side length equals~1.
Let $\mathcal{T}$ be the collection of all elements in $\Omega$, i.e. pixels.
We define the space $X$ for the image by
\begin{equation*}
X = \left\{ u \in L^2 (\Omega) : u|_{T}\textrm{ is constant } \forall T \in \mathcal{T} \right\}.
\end{equation*}
Then it is clear that $X \subset BV(\Omega )$, which means that the discretization is conforming.
Each degree of freedom of $X$ lies in an element (see \cref{Fig:dofs}(a)), and its corresponding basis function is
\begin{equation*}
\phi_T (x) = \begin{cases} 1 & \textrm{ if } x \in T , \\ 0 & \textrm{ if } x \not\in T , \end{cases} \hspace{0.5cm} T \in \mathcal{T}.
\end{equation*}
For $u \in X$ and $T \in \mathcal{T}$, let $(u)_T$ denote the degree of freedom of $u$ associated with the basis function $\phi_T$.
With a slight abuse of notation, let $\mathcal{T}$ also indicate the set of indices of the basis functions for $X$;
then we can represent $u$ by
$$u = \sum_{T \in \mathcal{T}} (u)_T \phi_T.$$
\begin{figure}[]
\centering
\subfloat[][Degrees of freedom for $X$]{ \includegraphics[height=3.8cm]{./X_dofs.jpg} }
\hspace{1.5cm}
\subfloat[][Degrees of freedom for $Y$]{ \includegraphics[height=3.8cm]{./Y_dofs.jpg} }
\caption{Degrees of freedom for the spaces $X$ and $Y$}
\label{Fig:dofs}
\end{figure}
It is natural to determine the space $Y$ for the dual variable $\mathbf{p}$ such that the divergence of each element in $Y$ is in $X$.
A suitable choice to meet this condition is the lowest order Raviart--Thomas elements~\cite{RT:1977}.
We define $Y$ by
\begin{equation*}
Y = \left\{ \mathbf{q} \in H_0 (\div ; \Omega) : \mathbf{q}|_{T} \in \mathcal{RT}_0(T) \hspace{0.2cm}\forall T \in \mathcal{T} \right\},
\end{equation*}
where $\mathcal{RT}_0(T)$ is the collection of the vector functions $\mathbf{q}$:~$T \rightarrow \mathbb{R}^2$ of the form
$$\mathbf{q} (x_1 , x_2) = \begin{bmatrix} a_1 + b_1 x_1 \\ a_2 + b_2 x_2 \end{bmatrix}.$$
In order for a piecewise $\mathcal{RT}_0(T)$-function to be in $H_0 (\div ; \Omega)$, a particular condition on the element interfaces should be satisfied, which is given in the following proposition~\cite{Nedelec:1980}.
\begin{proposition}
\label{Prop:FEM_interface}
A vector function $\mathbf{q}$\emph{:} $\Omega \rightarrow \mathbb{R}^2$ is in $H(\div; \Omega)$ if and only if
the restriction of $\mathbf{q}$ to each $T \in \mathcal{T}$ is in $H(\mathrm{div} ; T)$, and
for each common edge $e = \bar{T_1} \cap \bar{T_2}$, we have
$$ \mathbf{q} \cdot \mathbf{n} |_{T_1} + \mathbf{q} \cdot \mathbf{n} |_{T_2} = 0 \textrm{ on } e, $$
where $\mathbf{n} |_{T_i}$ is the outer normal to $\partial T_i$ on $e$, $i= 1,2$, so that $\mathbf{n}|_{T_1} = - \mathbf{n}|_{T_2}$.
\end{proposition}
\Cref{Prop:FEM_interface} gives a natural way to choose the degrees of freedom of the space $Y$.
Let $\mathbf{q} \in Y$.
Then the value of $\mathbf{q} \cdot \mathbf{n}$ is well-defined on each common edge of elements,
where the direction of $\mathbf{n}$ is chosen as in \cref{Fig:dofs}(b).
Therefore, we choose the degrees of freedom of $Y$ by the values of $\mathbf{q} \cdot \mathbf{n}$ on the element interfaces.
To construct the corresponding basis functions, we consider a reference square $T_{\mathrm{ref}} = [0, 1]^2$.
The outer normal component of a basis function $\bm{\psi}_{\mathrm{ref}}$ has the value $1$ on one edge, say $x=1$, and $0$ on the other edges.
Such $\bm{\psi}_{\mathrm{ref}}$ is unique and given by $\bm{\psi}_{\mathrm{ref}}(x_1 , x_2 ) = (x_1 , 0)$.
Similarly, the other basis functions on $T_{\mathrm{ref}}$ are given by $(1-x_1, 0)$, $(0, x_2)$, and $(0, 1-x_2)$.
Now, let $\mathcal{I}$ be the set of indices of the basis functions for $Y$, and let $\left\{\bm{\psi}_i \right\}_{i \in \mathcal{I}}$ be the basis.
Also, for $\mathbf{p} \in Y$ and $i \in \mathcal{I}$, let $(\mathbf{p})_i$ denote the degree of freedom of $\mathbf{p}$ associated with the basis function $\bm{\psi}_i$; then we can write
$$ \mathbf{p} = \sum_{i \in \mathcal{I}} {(\mathbf{p})_i \bm{\psi}_i}.$$
Next, we determine the norms and the inner products with which $X$ and $Y$ will be equipped.
In $X$, the $L^2 (\Omega)$-inner product agrees with the Euclidean inner product, so it is natural to choose the inner product as
$$ \left< u, v \right>_X = \int_{\Omega} {uv \,dx} = \sum_{T \in \mathcal{T}} {(u)_T (v)_T} $$
and the norm as its induced norm $$\| u \|_X^2 = \left< u, u \right>_X.$$
We set the inner product for $Y$ by the usual Euclidean inner product
$$ \left< \mathbf{p}, \mathbf{q} \right>_Y = \sum_{i \in \mathcal{I}} {(\mathbf{p})_i (\mathbf{q})_i} $$
and the norm by its induced norm $$\| \mathbf{p} \|_Y^2 = \left< \mathbf{p}, \mathbf{p} \right>_Y.$$
\begin{remark}
We equipped $Y$ with the Euclidean inner product rather than the $(L^2 (\Omega))^2$-inner product.
The reason is that if we equip $Y$ with the $(L^2 (\Omega))^2$-inner product, then the $(L^2 (\Omega))^2$-mass matrix appears in the resulting algorithms, making computation more cumbersome.
In the following, we show that using the Euclidean inner product instead of the $(L^2 (\Omega))^2$-inner product affects neither the quality of image denoising nor the rate of convergence.
Assume that the image consists of $M \times N$ pixels.
Consider an $n \times n$ symmetric tridiagonal matrix~$\mathrm{trid}_n (\alpha , \beta)$ whose diagonal entries are~$\alpha$ and off-diagonal entries are~$\beta$.
Under an appropriate ordering of the degrees of freedom of~$Y$, one can see that the $(L^2 (\Omega))^2$-mass matrix is a block-diagonal matrix composed of~$N$ $\mathrm{trid}_{M-1} (\frac{2}{3}, \frac{1}{6})$-blocks and $M$ $\mathrm{trid}_{N-1} (\frac{2}{3}, \frac{1}{6})$-blocks.
Hence, all eigenvalues are
\begin{equation*}
\frac{2}{3} + \frac{1}{3} \cos \left(\frac{k\pi}{M}\right), \hspace{0.5cm} k=1, \ldots ,M-1,
\end{equation*}
and
\begin{equation*}
\frac{2}{3} + \frac{1}{3} \cos \left(\frac{k\pi}{N}\right), \hspace{0.5cm} k=1, \ldots ,N-1.
\end{equation*}
See section~C.7 in~\cite{LeVeque:2007} for details.
The $(L^2 (\Omega))^2$-mass matrix is therefore spectrally equivalent to the identity matrix, which can be obtained from the $(L^2 (\Omega))^2$-mass matrix by diagonal lumping with proper scaling.
Therefore, one can conclude that the overall performance remains the same even if we use the Euclidean inner product.
\end{remark}
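The eigenvalue formula above can be verified numerically. The following sketch (the block size $M$ is a small hypothetical value of ours) builds one $\mathrm{trid}_{M-1}(\frac{2}{3}, \frac{1}{6})$ block and compares its spectrum with the closed form.

```python
import numpy as np

def trid(n, alpha, beta):
    # Symmetric tridiagonal n x n matrix with alpha on the diagonal
    # and beta on the off-diagonals.
    return (np.diag(np.full(n, alpha))
            + np.diag(np.full(n - 1, beta), 1)
            + np.diag(np.full(n - 1, beta), -1))

M = 8  # hypothetical block size (one image dimension)
A = trid(M - 1, 2.0 / 3.0, 1.0 / 6.0)
computed = np.sort(np.linalg.eigvalsh(A))

# Closed-form eigenvalues: 2/3 + (1/3) cos(k*pi/M), k = 1, ..., M-1.
exact = np.sort([2.0 / 3.0 + np.cos(k * np.pi / M) / 3.0 for k in range(1, M)])

assert np.allclose(computed, exact)
# Every eigenvalue lies in (1/3, 1), so the mass matrix is uniformly
# spectrally equivalent to the identity, independently of the image size.
assert computed.min() > 1.0 / 3.0 and computed.max() < 1.0
```

Since the eigenvalues are bounded away from $0$ and $\infty$ uniformly in $M$ and $N$, replacing the mass matrix by the identity changes all relevant constants only by a fixed factor.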
For a pixel $T = T_{ij} \in \mathcal{T}$ on the $i$th row and the $j$th column of the $M \times N$ image, let $\iota_{T, 1} \in \mathcal{I}$ be the index corresponding to the degree of freedom of~$Y$ on the edge shared by~$T_{ij}$ and~$T_{i+1,j}$.
Similarly, let $\iota_{T, 2} \in \mathcal{I}$ be the one on the edge shared by~$T_{ij}$ and $T_{i, j+1}$.
To treat the inequality constraints in~\cref{dual_ROF}, for $1< p < \infty$, we define the subset $C^p$ of $Y$ by
\begin{equation}
\label{Cp}
C^p = \left\{ \mathbf{p} \in Y : |(\mathbf{p})_{\iota_{T, 1}}|^{q} + |(\mathbf{p})_{\iota_{T, 2}}|^{q} \leq 1 \hspace{0.2cm} \forall T \in \mathcal{T} \right\},
\end{equation}
where $q$ is the H\"{o}lder conjugate of $p$ and the convention $(\mathbf{p})_{\iota_{T_{M, j}, 1}} = (\mathbf{p})_{\iota_{T_{i, N}, 2}} = 0$ is adopted.
Also, for $p=1$, we define
\begin{equation}
\label{C1}
C^1 = \left\{ \mathbf{p} \in Y : |(\mathbf{p})_i| \leq 1 \hspace{0.2cm} \forall i \in \mathcal{I} \right\}.
\end{equation}
Clearly, for $1 \leq p < \infty$, $C^p$ is nonempty and convex.
The orthogonal projection of $\mathbf{p} \in Y$ onto $C^p$ can be easily computed by
\begin{equation}
\label{proj_Cp}
(\mathrm{proj}_{C^p} \mathbf{p})_{\iota_{T, k}} = \frac{(\mathbf{p})_{\iota_{T, k}}}{\max \left\{ 1, \left( |(\mathbf{p})_{\iota_{T, 1}}|^{q} + |(\mathbf{p})_{\iota_{T, 2}}|^{q} \right)^{\frac{1}{q}} \right\}}
\hspace{0.5cm} \forall T \in \mathcal{T}, \hspace{0.1cm} k=1,2
\end{equation}
for $1<p<\infty$ and
\begin{equation}
\label{proj_C1}
(\mathrm{proj}_{C^1} \mathbf{p})_i = \frac{(\mathbf{p})_i}{\max \left\{ 1, |(\mathbf{p})_i| \right\}} \hspace{1cm} \forall i \in \mathcal{I}
\end{equation}
for $p = 1$.
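For concreteness, these pointwise projections take only a few lines to implement. The sketch below (the array layout and function names are our own) applies the componentwise projection for $p = 1$ and, for $p = q = 2$, scales each per-pixel pair by the maximum of $1$ and its Euclidean norm, so that already-feasible values are left unchanged.

```python
import numpy as np

def proj_C1(p):
    # Componentwise projection onto C^1: p_i / max(1, |p_i|).
    return p / np.maximum(1.0, np.abs(p))

def proj_C2(p1, p2):
    # Per-pixel projection onto the Euclidean unit ball (the case p = q = 2):
    # scale the pair (p1, p2) at each pixel by max(1, sqrt(p1^2 + p2^2)).
    scale = np.maximum(1.0, np.sqrt(p1 ** 2 + p2 ** 2))
    return p1 / scale, p2 / scale

p = np.array([-3.0, 0.5, 2.0])
q = proj_C1(p)
assert np.allclose(q, [-1.0, 0.5, 1.0])   # infeasible entries are clipped

q1, q2 = proj_C2(np.array([3.0, 0.3]), np.array([4.0, 0.4]))
# (3, 4) has norm 5 and is scaled to (0.6, 0.8); (0.3, 0.4) is feasible.
assert np.allclose(q1, [0.6, 0.3]) and np.allclose(q2, [0.8, 0.4])
```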
Finally, we are ready to state a finite element version of problem \cref{dual_ROF}:
\begin{equation}
\label{d_dual_ROF}
\min_{\mathbf{p} \in Y} \mathcal{J}(\mathbf{p}) + \chi_{C^p} (\mathbf{p}),
\end{equation}
where $\chi_{C^p}$ is the characteristic function of $C^p$ which is defined as
\begin{equation*}
\chi_{C^p} (\mathbf{p}) = \begin{cases}0 & \textrm{ if } \mathbf{p} \in C^p,\\ \infty & \textrm{ if } \mathbf{p} \not\in C^p. \end{cases}
\end{equation*}
We provide a relation between \cref{d_dual_ROF} and the conventional finite difference discretization of the ROF model.
\begin{figure}[]
\centering
\subfloat[][$\mathrm{div}$]{ \includegraphics[height=3.5cm]{./div.jpg} }
\hspace{1.2cm}
\subfloat[][$\mathrm{div}^*$]{ \includegraphics[height=3.5cm]{./div_star.jpg} }
\caption{Action of the operators $\mathrm{div}$ and $\mathrm{div}^*$ on an element}
\label{Fig:div}
\end{figure}
\begin{theorem}
\label{Thm:equiv}
Let $\mathbf{p}^* \in Y$ be a solution of~\cref{d_dual_ROF}.
If we identify $X$ with the Euclidean space of the functions from the $M \times N$ discrete points $\{1, \ldots, M\} \times \{1, \ldots, N\}$ into~$\mathbb{R}$, then $u^* = f + \frac{1}{\alpha} \mathrm{div} \mathbf{p}^*$ is a solution of the finite difference ROF model
\begin{equation*}
\min_{u \in X } \frac{\alpha}{2} \| u - f \|_2^2 + \| | D u|_p \|_1, \hspace{0.5cm} (1 \leq p < \infty)
\end{equation*}
where $D u$ is the forward finite difference operator
\begin{eqnarray*}
(Du)_{ij}^1 &=& \begin{cases} u_{i+1, j} - u_{ij} & \textrm{ if } i=1, \ldots, M-1, \\ 0 & \textrm{ if } i = M,\end{cases} \\
(Du)_{ij}^2 &=& \begin{cases} u_{i, j+1} - u_{ij} & \textrm{ if } j=1, \ldots, N-1, \\ 0 & \textrm{ if } j = N\end{cases}
\end{eqnarray*}
and $(|Du|_p)_{ij} = \left( |(Du)_{ij}^1|^p + |(Du)_{ij}^2|^p \right)^{\frac{1}{p}}$.
\end{theorem}
\begin{proof}
By the primal-dual equivalence, $u^*$ is a solution of the Fenchel--Rockafellar dual of~\cref{d_dual_ROF} given by
\begin{equation*}
\min_{u \in X} \left\{ \frac{\alpha}{2} \int_{\Omega} (u-f)^2 \,dx + \sup_{\mathbf{p} \in C^p} \int_{\Omega} u \mathrm{div} \mathbf{p} \,dx \right\}.
\end{equation*}
Then, we have
\begin{align*}
\frac{\alpha}{2} \int_{\Omega} (u-f)^2 \,dx + \sup_{\mathbf{p} \in C^p} \int_{\Omega} u \mathrm{div} \mathbf{p} \,dx &= \frac{\alpha}{2} \| u - f \|_2^2 + \sup_{\mathbf{p} \in C^p} \left< u , \mathrm{div} \mathbf{p} \right>_X \\
&= \frac{\alpha}{2} \| u - f \|_2^2 + \sup_{\mathbf{p} \in C^p} \left< \mathrm{div}^* u , \mathbf{p} \right>_Y,
\end{align*}
where $\mathrm{div}^*$:~$X \rightarrow Y$ is defined as
\begin{equation*}
\left< \mathrm{div}^* u , \mathbf{p} \right>_Y = \left< u , \mathrm{div} \mathbf{p} \right>_X \hspace{0.5cm} \forall u \in X, \mathbf{p} \in Y.
\end{equation*}
Observe that the $\mathrm{div}^*$ operator acts as the negative of the forward finite difference operator (see \cref{Fig:div}(b)).
Indeed, we can see that
\begin{eqnarray*}
(\mathrm{div}^* u)_{\iota_{T, 1}} &=& u_{ij} - u_{i+1, j} = - (Du)_{ij}^1\\
(\mathrm{div}^* u)_{\iota_{T, 2}} &=& u_{ij} - u_{i, j+1} = - (Du)_{ij}^2
\end{eqnarray*}
for $T = T_{ij} \in \mathcal{T}$ with the convention $u_{Mj} - u_{M+1, j} = u_{iN} - u_{i,N+1} = 0$.
Assume $1<p<\infty$ and let $q$ be the H\"{o}lder conjugate of $p$, i.e., $\frac{1}{p} + \frac{1}{q} = 1$.
Then, by the duality between the spaces $l^p$ and $l^q$ in each pixel, we get
\begin{align*}
\sup_{\mathbf{p} \in C^p} \left< \mathrm{div}^* u , \mathbf{p} \right>_Y &= \sum_{T = T_{ij} \in \mathcal{T}} \sup_{|(\mathbf{p})_{\iota_{T, 1}}|^{q} + |(\mathbf{p})_{\iota_{T, 2}}|^{q} \leq 1} \left[ (Du)_{ij}^1 (\mathbf{p})_{\iota_{T, 1}} + (Du)_{ij}^2 (\mathbf{p})_{\iota_{T, 2}} \right] \\
&= \sum_{T = T_{ij} \in \mathcal{T}} \left[ |(Du)_{ij}^1|^p + |(Du)_{ij}^2|^p \right]^{\frac{1}{p}} = \| |Du|_p \|_1,
\end{align*}
which concludes the proof.
The case for $p=1$ is straightforward.
\end{proof}
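The identities used in the proof can be checked numerically. The sketch below (our own array-based encoding, with one entry of $\mathbf{p}$ per interior edge) implements $\mathrm{div}^* u = -Du$, takes the divergence as its transpose, and verifies the defining adjoint relation $\left< \mathrm{div}^* u , \mathbf{p} \right>_Y = \left< u , \mathrm{div} \mathbf{p} \right>_X$ on random data.

```python
import numpy as np

M, N = 5, 7  # hypothetical small image size
rng = np.random.default_rng(0)

def div_star(u):
    # (div* u)_{T,1} = u_{ij} - u_{i+1,j}, (div* u)_{T,2} = u_{ij} - u_{i,j+1}:
    # minus forward differences, one value per interior edge.
    return u[:-1, :] - u[1:, :], u[:, :-1] - u[:, 1:]

def div(p1, p2):
    # Transpose of div_star: divergence of a lowest-order Raviart-Thomas
    # field with vanishing normal component on the boundary, one value per pixel.
    d = np.zeros((M, N))
    d[:-1, :] += p1  # +u_{ij} coefficient from (div* u)_{T,1}
    d[1:, :] -= p1   # -u_{i+1,j} coefficient
    d[:, :-1] += p2
    d[:, 1:] -= p2
    return d

u = rng.standard_normal((M, N))
p1 = rng.standard_normal((M - 1, N))  # DOFs on horizontal interior edges
p2 = rng.standard_normal((M, N - 1))  # DOFs on vertical interior edges

g1, g2 = div_star(u)
lhs = np.sum(g1 * p1) + np.sum(g2 * p2)  # <div* u, p>_Y
rhs = np.sum(u * div(p1, p2))            # <u, div p>_X
assert np.isclose(lhs, rhs)
```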
\Cref{Thm:equiv} means that, by choosing the set $C^p$ appropriately, the finite element model~\cref{d_dual_ROF} can express various versions of discrete total variation, for example, an anisotropic one for $p=1$ and an isotropic one for $p=2$.
Hereafter, for the sake of simplicity, we treat the case for $p=1$ only; generalization to the other cases is straightforward.
We drop the superscript and write $C = C^1$.
Next, note that the divergence operator in the continuous setting is well-defined on $Y$, and its image is contained in $X$.
That is, the divergence of a function in $Y$ is piecewise constant.
This means that we do not need to define a discrete divergence operator as in previous works,
and some good properties of the continuous setting are inherited by our discretization.
For instance, for a nonoverlapping domain decomposition $\left\{ \Omega_s \right\}_{s=1}^{\mathcal{N}}$ of $\Omega$ and $\mathbf{p} \in Y$,
the following \textit{splitting property} of $\mathcal{J}(\mathbf{p})$ holds:
\begin{equation}
\label{splitting}
\frac{1}{2\alpha} \int_{\Omega} (\mathrm{div} \mathbf{p} + \alpha f)^2 \,dx = \sum_{s=1}^{\mathcal{N}} \frac{1}{2\alpha} \int_{\Omega_s} ( \mathrm{div}(\mathbf{p}|_{\Omega_s}) + \alpha f) ^2 \,dx .
\end{equation}
\Cref{splitting} will be our main tool in designing the DDMs in \cref{Sec:primal_DD,Sec:pd_DD}.
\begin{remark}
The discrete divergence operator proposed in \cite{Chambolle:2004, LN:2017}, which was designed in the finite difference framework, does not satisfy~\cref{splitting}.
\end{remark}
\subsection{Solvers for the Finite Element ROF Model}
The proposed discrete problem~\cref{d_dual_ROF} can be solved by the existing solvers for the total variation minimization
using either dual approaches~\cite{BT:2009, Chambolle:2004} or primal-dual approaches~\cite{CP:2011}.
We give some results about~\cref{d_dual_ROF} which help to set the parameters for the solvers.
\begin{proposition}
\label{Prop:div_norm}
The operator norm of $\mathrm{div}$\emph{:} $Y \rightarrow X$ has a bound such that $\| \mathrm{div} \|_{Y \rightarrow X}^2 \leq 8$.
\end{proposition}
\begin{proof}
Fix $\mathbf{p} \in Y$.
For a pixel $T \in \mathcal{T}$, let $p_{T, 1}$, $p_{T, 2}$, $p_{T, 3}$, and $p_{T, 4}$ be the degrees of freedom of $\mathbf{p}$ on the top, bottom, left, and right edges of $T$, respectively (see \cref{Fig:div}).
We set $p_{T, j} = 0$ if the corresponding edge lies on $\partial \Omega$.
Then, we have
\begin{align*}
(\mathrm{div} \mathbf{p})_T^2 &= (-p_{T, 1} + p_{T, 2} - p_{T, 3} + p_{T,4})^2\\
&\leq 4(p_{T, 1}^2 + p_{T, 2}^2 + p_{T, 3}^2 + p_{T,4}^2 ).
\end{align*}
Summation over all $T \in \mathcal{T}$ yields
\begin{align*}
\| \mathrm{div} \mathbf{p} \|_X^2 = \sum_{T \in \mathcal{T}} {(\mathrm{div} \mathbf{p})_T^2} &\leq 4 \sum_{T \in \mathcal{T}} {(p_{T, 1}^2 + p_{T, 2}^2 + p_{T, 3}^2 + p_{T,4}^2 )}\\
&\leq 8 \sum_{i \in \mathcal{I}} {(\mathbf{p})_i^2} = 8 \| \mathbf{p} \|_Y^2.
\end{align*}
For the second inequality, the fact that every edge is shared by at most two elements is used.
Therefore, $\| \mathrm{div} \|_{Y \rightarrow X}^2 \leq 8$.
\end{proof}
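The bound can also be confirmed numerically on a small grid. The following sketch (a hypothetical $6 \times 6$ image; the matrix assembly is our own encoding of the edge degrees of freedom) assembles the matrix of $\mathrm{div}$ column by column and checks that its squared spectral norm stays below $8$.

```python
import numpy as np

def div_matrix(M, N):
    # Matrix of div: Y -> X on an M x N pixel grid.  Each column is the
    # divergence of one edge basis function: +1 on one adjacent pixel,
    # -1 on the other.
    idx = lambda i, j: i * N + j  # pixel (i, j) -> row index
    cols = []
    for i in range(M - 1):        # edge shared by T_{ij} and T_{i+1,j}
        for j in range(N):
            c = np.zeros(M * N)
            c[idx(i, j)], c[idx(i + 1, j)] = 1.0, -1.0
            cols.append(c)
    for i in range(M):            # edge shared by T_{ij} and T_{i,j+1}
        for j in range(N - 1):
            c = np.zeros(M * N)
            c[idx(i, j)], c[idx(i, j + 1)] = 1.0, -1.0
            cols.append(c)
    return np.array(cols).T

D = div_matrix(6, 6)
# ||div||^2 is the largest eigenvalue of D D^T (a graph Laplacian of the
# pixel grid), which the proposition above bounds by 8.
op_norm_sq = np.linalg.eigvalsh(D @ D.T).max()
assert 0.0 < op_norm_sq <= 8.0
```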
\begin{proposition}
\label{Prop:d_dual_ROF_Lipschitz}
The gradient of $\mathcal{J} (\mathbf{p})$ is given by
$$\nabla \mathcal{J} (\mathbf{p}) = \frac{1}{\alpha} \mathrm{div}^* ( \mathrm{div}\mathbf{p} + \alpha f)$$
and it is Lipschitz continuous with a Lipschitz constant $8/\alpha$.
\end{proposition}
\begin{proof}
Take any $\mathbf{p} \in Y$, and let $\mathbf{q} \in Y$ with $\|\mathbf{q} \|_Y = 1$, and $h > 0$.
Then we have
\begin{align*}
\left| \mathcal{J} (\mathbf{p} + h\mathbf{q}) - \mathcal{J}(\mathbf{p}) - \left< \frac{1}{\alpha} \mathrm{div}^* ( \mathrm{div}\mathbf{p} + \alpha f ), h\mathbf{q} \right>_Y \right|
&= \frac{h^2}{2\alpha} \int_{\Omega} (\mathrm{div} \mathbf{q})^2 \,dx \\
&\leq \frac{h^2}{2\alpha} \| \mathrm{div} \|_{Y \rightarrow X}^2 \| \mathbf{q} \|_Y^2 \leq \frac{4h^2}{\alpha} .
\end{align*}
Therefore, $\nabla \mathcal{J} (\mathbf{p}) = \frac{1}{\alpha} \mathrm{div}^* (\mathrm{div}\mathbf{p} + \alpha f )$.
Furthermore, for any $\mathbf{p}$, $\mathbf{q} \in Y$,
\begin{align*}
\left\| \nabla \mathcal{J} (\mathbf{p}) - \nabla \mathcal{J} (\mathbf{q}) \right\|_Y &= \left\| \frac{1}{\alpha} \mathrm{div}^* \left( \mathrm{div} (\mathbf{p} - \mathbf{q}) \right) \right\|_Y \\
&\leq \frac{1}{\alpha} \| \mathrm{div} \|_{Y \rightarrow X}^2 \| \mathbf{p} - \mathbf{q} \|_Y \leq \frac{8}{\alpha} \| \mathbf{p} - \mathbf{q} \|_Y.
\end{align*}
In the last line, we used \cref{Prop:div_norm} to bound $\| \mathrm{div} \|_{Y \rightarrow X}$.
From the above computations, we conclude that $\nabla \mathcal{J}$ is Lipschitz continuous with a Lipschitz constant $8/\alpha$.
\end{proof}
We notice that the proof of \cref{Prop:div_norm} given here is essentially the same as the proof of Theorem~3.1 of~\cite{Chambolle:2004}.
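The two results above supply everything needed to run a projected FISTA directly on~\cref{d_dual_ROF} with $p=1$: a gradient formula and the Lipschitz constant $L = 8/\alpha$, hence the step size $1/L = \alpha/8$. The following is a minimal self-contained sketch on a random synthetic image; the image size, the value of $\alpha$, and the iteration count are arbitrary choices of ours, not values from the paper.

```python
import numpy as np

M, N, alpha = 16, 16, 10.0  # hypothetical image size and fidelity weight
rng = np.random.default_rng(1)
f = rng.standard_normal((M, N))  # synthetic noisy image

def div(p1, p2):
    # Piecewise constant divergence; one DOF per interior edge.
    d = np.zeros((M, N))
    d[:-1, :] += p1; d[1:, :] -= p1
    d[:, :-1] += p2; d[:, 1:] -= p2
    return d

def div_star(u):
    # Adjoint of div: minus forward differences.
    return u[:-1, :] - u[1:, :], u[:, :-1] - u[:, 1:]

def J(p1, p2):
    # J(p) = (1/(2*alpha)) * ||div p + alpha f||^2
    return np.sum((div(p1, p2) + alpha * f) ** 2) / (2.0 * alpha)

def grad_J(p1, p2):
    # grad J(p) = (1/alpha) div*(div p + alpha f)
    return div_star(div(p1, p2) / alpha + f)

proj = lambda p: p / np.maximum(1.0, np.abs(p))  # projection onto C = C^1

L = 8.0 / alpha  # Lipschitz constant of grad J from the proposition above
p1 = np.zeros((M - 1, N)); p2 = np.zeros((M, N - 1))
q1, q2, t = p1, p2, 1.0
J0 = J(p1, p2)
for _ in range(200):  # FISTA iterations
    g1, g2 = grad_J(q1, q2)
    n1, n2 = proj(q1 - g1 / L), proj(q2 - g2 / L)
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    q1 = n1 + (t - 1.0) / t_new * (n1 - p1)
    q2 = n2 + (t - 1.0) / t_new * (n2 - p2)
    p1, p2, t = n1, n2, t_new

u = f + div(p1, p2) / alpha  # recovered (denoised) primal image
assert J(p1, p2) <= J0                                       # energy decreased
assert np.abs(p1).max() <= 1.0 and np.abs(p2).max() <= 1.0   # feasibility
```

The same iteration, applied to the interface functional $\mathcal{J}_{\Gamma}$ instead of $\mathcal{J}$, underlies the primal DDM of the next section.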
\section{A Primal Domain Decomposition Method}
\label{Sec:primal_DD}
In this section, we propose a primal DDM for the proposed discretization which resembles the Schur complement method,
one of the most primitive nonoverlapping DDMs for second order elliptic problems.
We note that the method proposed in this section is not a DDM for the ``primal'' total variation minimization problem,
but a ``primal'' DDM for the ``dual'' total variation minimization problem.
In the Schur complement method for second order elliptic problems, the degrees of freedom in the interior of the subdomains are eliminated so that only the degrees of freedom on the subdomain interfaces remain.
The remaining system on the subdomain interfaces is called the Schur complement system, and it is solved by an iterative solver like the conjugate gradient method.
Similarly, in the proposed method, the interior degrees of freedom are eliminated and we solve a resulting minimization problem on the subdomain interfaces.
Every finite-dimensional Hilbert space~$H$ appearing in \cref{Sec:primal_DD,Sec:pd_DD} is equipped with the Euclidean inner product~$\left< \cdot , \cdot \right>_H$ and the induced norm~$\| \cdot \|_H$.
\begin{figure}[]
\centering
\subfloat[][Primal DD]{ \includegraphics[height=3.8cm]{./primal_dd.jpg} }
\hspace{2cm}
\subfloat[][Primal-dual DD]{ \includegraphics[height=4cm]{./pd_dd.jpg} }
\caption{Primal and primal-dual domain decomposition}
\label{Fig:DD}
\end{figure}
We decompose the image domain $\Omega$ into $\mathcal{N} = N \times N$ disjoint square subdomains $\left\{ \Omega_s \right\}_{s=1}^{\mathcal{N}}$ in a checkerboard fashion (see \cref{Fig:DD}(a)).
From now on, the letters $s$ and $t$ stand for indices of subdomains, that is, $s$ and $t$ run from $1$ to $\mathcal{N}$.
We denote the outer normal to $\partial \Omega_s$ by $\mathbf{n}_s$.
For two adjacent subdomains $\Omega_s$ and $\Omega_t$ with $s < t$, let $\Gamma_{st} = \partial \Omega_s \cap \partial \Omega_t$ be the subdomain interface between them.
The subdomain interface $\Gamma_{st}$ is oriented in the way that the normal $\mathbf{n}_{st}$ to $\Gamma_{st}$ is given by $\mathbf{n}_{st} = \mathbf{n}_s = -\mathbf{n}_t$.
Also, we define the union of the subdomain interfaces $\Gamma$ by $\Gamma = \bigcup_{s<t} \Gamma_{st}$.
For the discrete setting, let $\mathcal{T}_s$ be the collection of all elements in $\Omega_s$.
We define the local dual function space $Y_s$ by
\begin{equation}
\label{Y_s}
Y_s = \left\{ \mathbf{q}_s \in H_0(\mathrm{div} ; \Omega_s) : \mathbf{q}_s |_{T} \in \mathcal{RT}_0 (T) \hspace{0.2cm} \forall T \in \mathcal{T}_s \right\}.
\end{equation}
Also, let~$\mathcal{I}_s$ be the set of indices of the basis functions for $Y_s$.
In addition, we define $Y_I$ as the direct sum of all local dual function spaces, that is,
\begin{equation*}
Y_I = \bigoplus_{s=1}^{\mathcal{N}} Y_s.
\end{equation*}
One can observe that, for $\mathbf{p}_I = \bigoplus_{s=1}^{\mathcal{N}} \mathbf{p}_s$ and $\mathbf{q}_I = \bigoplus_{s=1}^{\mathcal{N}} \mathbf{q}_s$, we have
\begin{equation*}
\left< \mathbf{p}_I , \mathbf{q}_I \right>_{Y_I} = \sum_{s=1}^{\mathcal{N}} \left< \mathbf{p}_s , \mathbf{q}_s \right>_{Y_s}.
\end{equation*}
Next, we denote by $\mathcal{I}_{\Gamma}$ the set of indices of the degrees of freedom of $Y$ on $\Gamma$,
and define the interface function space $Y_{\Gamma}$ by
\begin{equation*}
Y_{\Gamma} = \mathrm{span} \left\{ \bm{\psi}_i \right\}_{i \in \mathcal{I}_{\Gamma}}.
\end{equation*}
The interface function space~$Y_\Gamma$ is equipped with the inner product defined by
\begin{equation*}
\left< \mathbf{p}_{\Gamma} , \mathbf{q}_{\Gamma} \right>_{Y_{\Gamma}} = \left< \mathbf{p}_{\Gamma} , \mathbf{q}_{\Gamma} \right>_Y
\end{equation*}
and its induced norm
\begin{equation*}
\| \mathbf{p}_{\Gamma} \|_{Y_{\Gamma}}^2 = \left< \mathbf{p}_{\Gamma} , \mathbf{p}_{\Gamma} \right>_{Y_{\Gamma}}.
\end{equation*}
As we readily see, $Y = Y_I \oplus Y_{\Gamma}$.
For $\mathbf{p} \in Y$, there exists a unique decomposition
$$\mathbf{p} = \mathbf{p}_I \oplus \mathbf{p}_{\Gamma} = \left( \bigoplus_{s=1}^{\mathcal{N}} \mathbf{p}_s \right) \oplus \mathbf{p}_{\Gamma}$$
with $\mathbf{p}_s \in Y_s$ and $\mathbf{p}_{\Gamma} \in Y_{\Gamma}$.
Thanks to the splitting property \cref{splitting}, we have
\begin{equation} \begin{split}
\label{primal_DD_splitting}
\mathcal{J}(\mathbf{p}) &= \frac{1}{2\alpha} \int_{\Omega} (\mathrm{div} \mathbf{p} + \alpha f)^2 \,dx \\
&= \sum_{s=1}^{\mathcal{N}} \frac{1}{2\alpha} \int_{\Omega_s} (\mathrm{div}(\mathbf{p}_s + \mathbf{p}_{\Gamma}|_{\Omega_s}) + \alpha f)^2 \,dx.
\end{split} \end{equation}
To treat the inequality constraints, as we did in~\cref{C1}, we define the subset $C_s$ of $Y_s$ by
\begin{equation}
\label{C_s}
C_s = \left\{ \mathbf{p}_s \in Y_s : |(\mathbf{p}_s)_i| \leq 1 \hspace{0.2cm} \forall i \in \mathcal{I}_s \right\},
\end{equation}
and we set $C_I$ as the direct sum of all $C_s$'s:
\begin{equation*}
\label{C_I}
C_I = \bigoplus_{s=1}^{\mathcal{N}} C_s.
\end{equation*}
In addition, let $C_{\Gamma}$ be the subset of $Y_{\Gamma}$ satisfying the inequality constraints:
\begin{equation*}
\label{C_Gamma}
C_{\Gamma} = \left\{ \mathbf{p}_{\Gamma} \in Y_{\Gamma} : |(\mathbf{p}_{\Gamma})_i| \leq 1\hspace{0.2cm} \forall i \in \mathcal{I}_{\Gamma} \right\}.
\end{equation*}
Similarly to~\cref{proj_C1}, the projections onto $C_s$ and $C_{\Gamma}$ can be computed by the pointwise Euclidean projection:
\begin{subequations}
\begin{equation}
\label{proj_C_s}
(\mathrm{proj}_{C_s} \mathbf{p}_s )_i = \frac{(\mathbf{p}_s)_i}{\max \left\{ 1, |(\mathbf{p}_s)_i| \right\}} \hspace{0.5cm} \forall i \in \mathcal{I}_s,
\end{equation}
\begin{equation}
(\mathrm{proj}_{C_{\Gamma}} \mathbf{p}_{\Gamma} )_i = \frac{(\mathbf{p}_{\Gamma})_i}{\max \left\{ 1, |(\mathbf{p}_{\Gamma})_i| \right\}} \hspace{0.5cm} \forall i \in \mathcal{I}_{\Gamma}.
\end{equation}
\end{subequations}
Now, for $\mathbf{p}_{\Gamma} \in C_{\Gamma}$, we consider the following minimization problem:
\begin{equation}
\label{primal_DD_harmonic}
\min_{\mathbf{p}_I \in Y_I} \left\{ \mathcal{J} (\mathbf{p}_I \oplus \mathbf{p}_{\Gamma}) + \chi_{C_I}(\mathbf{p}_I) \right\}.
\end{equation}
We note that, with the help of~\cref{primal_DD_splitting}, a solution of \cref{primal_DD_harmonic} can be obtained by solving
\begin{equation}
\label{primal_DD_harmonic_local}
\min_{\mathbf{p}_s \in Y_s} \left\{ \frac{1}{2\alpha}\int_{\Omega_s} (\mathrm{div}(\mathbf{p}_s + \mathbf{p}_{\Gamma}|_{\Omega_s}) + \alpha f)^2 \,dx + \chi_{C_s}(\mathbf{p}_s) \right\}
\end{equation}
and taking the direct sum of the solutions of~\cref{primal_DD_harmonic_local} over $s=1, \ldots, \mathcal{N}$.
The local problem~\cref{primal_DD_harmonic_local} can be solved independently in each subdomain.
That is, no communication among processors is required, so the resulting algorithm is well-suited to parallel computation.
With a slight abuse of notation, we denote a solution of~\cref{primal_DD_harmonic} by $\mathcal{H}_I \p_{\Gamma} \in C_I$.
Although $\mathcal{H}_I \p_{\Gamma}$ is not unique in general, $\mathrm{div} (\mathcal{H}_I \p_{\Gamma})$ is uniquely determined and we will deal with $\mathrm{div} (\mathcal{H}_I \p_{\Gamma})$ only.
Finally, we present the minimization problem for the proposed primal DDM:
\begin{equation}
\label{primal_DD}
\min_{\mathbf{p}_{\Gamma} \in Y_{\Gamma}} \mathcal{J}_{\Gamma}(\mathbf{p}_{\Gamma}) + \chi_{C_{\Gamma}} (\mathbf{p}_{\Gamma}),
\end{equation}
where the functional $\mathcal{J}_{\Gamma}(\mathbf{p}_{\Gamma})$ on $Y_{\Gamma}$ is defined as
\begin{equation}
\label{J_Gamma}
\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}) = \mathcal{J} (\mathcal{H}_I \p_{\Gamma} \oplus \mathbf{p}_{\Gamma}).
\end{equation}
The functional $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma})$ can be regarded as the result of elimination of interior degrees of freedom $\mathbf{p}_I$ from $\mathcal{J} (\mathbf{p})$.
The same technique is widely used in DDMs for second order elliptic problems.
The following proposition shows a relation between~\cref{d_dual_ROF} and~\cref{primal_DD}.
\begin{proposition}
\label{Prop:primal_DD_equiv}
If $\mathbf{p}^* \in Y$ is a solution of~\cref{d_dual_ROF}, then $\mathbf{p}_{\Gamma}^* = \mathbf{p}^* |_{Y_{\Gamma}}$ is a solution of~\cref{primal_DD}.
Conversely, if~$\mathbf{p}_{\Gamma}^* \in Y_{\Gamma}$ is a solution of~\cref{primal_DD}, then $\mathbf{p}^* = \mathcal{H}_I \p_{\Gamma}^* \oplus \mathbf{p}_{\Gamma}^*$ is a solution of~\cref{d_dual_ROF}.
\end{proposition}
\begin{proof}
Let $\mathbf{p}^* \in Y$ be a solution of~\cref{d_dual_ROF} and $\mathbf{p}_{\Gamma}^* = \mathbf{p}^* |_{Y_{\Gamma}}$.
Clearly, $\mathbf{p}_{\Gamma}^* \in C_{\Gamma}$.
We show that $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}) \geq \mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}^*)$ for all $\mathbf{p}_{\Gamma} \in C_{\Gamma}$.
Take any $\mathbf{p}_{\Gamma} \in C_{\Gamma}$.
Then, by the minimization property of $\mathbf{p}^*$ with respect to~\cref{d_dual_ROF}, we have
\begin{equation*}
\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}) = \mathcal{J} (\mathcal{H}_I \p_{\Gamma} \oplus \mathbf{p}_{\Gamma}) \geq \mathcal{J}(\mathbf{p}^*).
\end{equation*}
Also, by the minimization property of $\mathcal{H}_I$ with respect to~\cref{primal_DD_harmonic}, we have
\begin{align*}
\mathcal{J}(\mathbf{p}^*) &= \mathcal{J}(\mathbf{p}^* |_{Y_I} \oplus \mathbf{p}_{\Gamma}^*) \\
&\geq \mathcal{J}(\mathcal{H}_I \p_{\Gamma}^* \oplus \mathbf{p}_{\Gamma}^*) = \mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}^*).
\end{align*}
Therefore, $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}) \geq \mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}^*)$, so that $\mathbf{p}_{\Gamma}^*$ is a solution of~\cref{primal_DD}.
Conversely, let $\mathbf{p}_{\Gamma}^* \in Y_{\Gamma}$ be a solution of~\cref{primal_DD} and $\mathbf{p}^* = \mathcal{H}_I \p_{\Gamma}^* \oplus \mathbf{p}_{\Gamma}^* \in C$.
It suffices to show that $\mathcal{J}(\mathbf{p}) \geq \mathcal{J}(\mathbf{p}^*)$ for all $\mathbf{p} \in C$.
Take any $\mathbf{p} \in C$.
By the minimization property of $\mathcal{H}_I$ with respect to~\cref{primal_DD_harmonic}, we have
\begin{align*}
\mathcal{J}(\mathbf{p}) &= \mathcal{J}(\mathbf{p} |_{Y_I} \oplus \mathbf{p}|_{Y_{\Gamma}}) \\
&\geq \mathcal{J}(\mathcal{H}_I \mathbf{p} |_{Y_{\Gamma}} \oplus \mathbf{p}|_{Y_{\Gamma}}) = \mathcal{J}_{\Gamma} (\mathbf{p}|_{Y_{\Gamma}}),
\end{align*}
while
\begin{equation*}
\mathcal{J}_{\Gamma} (\mathbf{p}|_{Y_{\Gamma}}) \geq \mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}^*) = \mathcal{J}(\mathbf{p}^*)
\end{equation*}
by the minimization property of $\mathbf{p}_{\Gamma}^*$ with respect to~\cref{primal_DD}.
Therefore, $\mathbf{p}^*$ is a solution of~\cref{d_dual_ROF}.
\end{proof}
By \cref{Prop:primal_DD_equiv}, it is enough to solve~\cref{primal_DD} to obtain a solution of~\cref{d_dual_ROF}.
As we noted in~\cref{primal_DD_harmonic_local}, \cref{primal_DD} has an intrinsic domain decomposition structure,
so that the parallelization of the algorithm at the subdomain level is straightforward regardless of the choice of solver for the minimization problem.
In this paper, we adopt FISTA~\cite{BT:2009} as the solver for~\cref{primal_DD}, which is known to have $O(1/n^2)$ convergence.
To the best of our knowledge, there have been no DDMs for the ROF model with a convergence rate better than~$O(1/n)$.
In particular, Chang~et al.~\cite{CTWY:2015} showed that the subspace correction methods for the dual ROF model have the theoretical convergence rate~$O(1/n)$ even in the overlapping domain decomposition case.
To show the suitability of FISTA for~\cref{primal_DD}, we must ensure that the functional $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma})$ in \cref{J_Gamma} is differentiable and that its gradient is Lipschitz continuous.
The following lemmas are the ingredients for establishing this regularity of $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma})$.
First, \cref{Lem:primal_DD_div_norm} shows that the norm bound of the $\mathrm{div}$ operator from \cref{Prop:div_norm} can be improved when its domain is restricted to~$Y_{\Gamma}$.
\begin{lemma}
\label{Lem:primal_DD_div_norm}
Assume that each subdomain consists of at least $2 \times 2$ pixels.
Then, the operator norm of $\mathrm{div}$\emph{:} $Y_{\Gamma} \rightarrow X$ satisfies $\| \mathrm{div} \|_{Y_{\Gamma} \rightarrow X}^2 \leq 4$.
\end{lemma}
\begin{proof}
Fix $\mathbf{p}_{\Gamma} \in Y_{\Gamma}$ and let $\mathbf{p} = \mathbf{0}_I \oplus \mathbf{p}_{\Gamma} \in Y$, which is an extension of $\mathbf{p}_{\Gamma}$ to $Y$.
We clearly have
$$ \mathrm{div} \mathbf{p} = \mathrm{div} \mathbf{p}_{\Gamma}.$$
For a pixel $T \in \mathcal{T}$, similarly to \cref{Prop:div_norm}, let $p_{T, 1}$, $p_{T, 2}$, $p_{T, 3}$, and $p_{T, 4}$ be the degrees of freedom of $\mathbf{p}$ on the top, bottom, left, and right edges of $T$, respectively.
Since~$\partial T \cap \Gamma$ consists of at most two element edges (when $T$ is at a subdomain corner),
at most two of the $p_{T, i}$'s are nonzero.
Thus, we have
\begin{align*}
(\mathrm{div} \mathbf{p})_T^2 &= (-p_{T, 1} + p_{T, 2} - p_{T, 3} + p_{T, 4})^2\\
&\leq 2(p_{T, 1}^2 + p_{T, 2}^2 + p_{T, 3}^2 + p_{T, 4}^2 ),
\end{align*}
where we use the Cauchy--Schwarz inequality.
Summation over all $T \in \mathcal{T}$ yields
\begin{align*}
\| \mathrm{div} \mathbf{p} \|_X^2 = \sum_{T \in \mathcal{T}} {(\mathrm{div} \mathbf{p})_T^2} &\leq 2 \sum_{T \in \mathcal{T}} {(p_{T, 1}^2 + p_{T, 2}^2 + p_{T, 3}^2 + p_{T, 4}^2 )}\\
&\leq 4 \sum_{i \in \mathcal{I}} {(\mathbf{p})_i^2} \\
&= 4 \sum_{i \in \mathcal{I}_{\Gamma}} (\mathbf{p}_{\Gamma})_i^2= 4 \| \mathbf{p}_{\Gamma} \|_{Y_{\Gamma}}^2.
\end{align*}
Therefore, $\| \mathrm{div} \|_{Y_{\Gamma} \rightarrow X}^2 \leq 4$.
\end{proof}
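For comparison, the analogous full-space bound for the standard finite difference discretization is $\| \mathrm{div} \|^2 \leq 8$. The following Python sketch (our own illustration; it uses forward differences rather than the Raviart--Thomas discretization of this paper, and all names are ours) estimates this squared operator norm by power iteration on $\mathrm{div} \, \mathrm{div}^* = \nabla^* \nabla$:

```python
import numpy as np

def grad(u):
    # forward differences with zero at the last row/column (Neumann-type boundary)
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def grad_adjoint(gx, gy):
    # adjoint of grad; the discrete divergence is div = -grad_adjoint
    d = np.zeros_like(gx)
    d[1:, :] += gx[:-1, :]
    d[:-1, :] -= gx[:-1, :]
    d[:, 1:] += gy[:, :-1]
    d[:, :-1] -= gy[:, :-1]
    return d

def estimate_div_norm_sq(m, n, n_iter=500, seed=0):
    # power iteration on grad_adjoint∘grad; its largest eigenvalue is ||div||^2
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((m, n))
    for _ in range(n_iter):
        u = grad_adjoint(*grad(u))
        u /= np.linalg.norm(u)
    v = grad_adjoint(*grad(u))
    return float(u.ravel() @ v.ravel())  # Rayleigh quotient

est = estimate_div_norm_sq(64, 64)
```

On a $64 \times 64$ grid the estimate lies just below the theoretical value $8$; the point of the lemma above is that restricting to the interface variables halves this constant.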
Now we provide the main tool for showing the regularity of $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma})$, which is stated in a more general setting.
We note that \cref{Lem:smooth} can be regarded as a generalization of the smoothness property of the Moreau envelope~\cite{CP:2016}.
\begin{lemma}
\label{Lem:smooth}
Suppose that $H$, $H_1$, and $H_2$ are finite-dimensional Hilbert spaces.
Let $A$:~$H_1 \rightarrow H$, $B$:~$H_2 \rightarrow H$ be linear operators and $c \in H$.
Also, let $g$:~$H_2 \rightarrow \bar{\mathbb{R}}$ be a proper, convex, and lower semicontinuous functional.
Then a functional $F$:~$H_1 \rightarrow \mathbb{R}$ defined as
\begin{equation*}
F(x) = \min_{y \in H_2} \left\{ f(x, y) := \frac{1}{2} \| Ax + By + c \|_H^2 + g(y) \right\}
\end{equation*}
is differentiable and its gradient is given by
\begin{equation*}
\nabla F(x) = A^* (Ax + B y^*(x) + c),
\end{equation*}
where $y^* (x) = \argmin_{y \in H_2} f(x, y)$.
Furthermore, $\nabla F$ is Lipschitz continuous with modulus $L = \| A \|_{H_1 \rightarrow H}^2$.
\end{lemma}
\begin{proof}
For $x \in H_1$, let
\begin{equation*}
d(x) = A^* (Ax + By^* (x) + c).
\end{equation*}
One can easily verify that $d(x)$ is single-valued even though $y^*(x)$ may not be.
Take any $x_1$, $x_2 \in H_1$ and write $y_1 = y^* (x_1)$, $y_2 = y^* (x_2)$.
Then, by the minimization property of $y_1$, we get
\begin{equation} \begin{split}
\label{ub}
F(x_1) &= \frac{1}{2} \| Ax_1 + By_1 + c \|_H^2 + g(y_1) \\
&\leq \frac{1}{2} \| Ax_1 + By_2 + c \|_H^2 + g(y_2) \\
&= g(y_2) + \frac{1}{2} \| Ax_2 + By_2 + c \|_H^2 + \left< Ax_2 + By_2 + c , A(x_1 - x_2) \right>_H \\
&\quad+ \frac{1}{2} \| A(x_1 - x_2) \|_H^2 \\
&\leq F(x_2) + \left< d(x_2) , x_1 - x_2 \right>_{H_1} + \frac{L}{2} \| x_1 - x_2 \|_{H_1}^2 .
\end{split} \end{equation}
On the other hand, the optimality condition of $y_2$ reads as
\begin{equation*}
\label{y2_opt}
g(y) \geq g(y_2) + \left< Ax_2 + By_2 + c , B(y_2 - y)\right>_H \hspace{0.5cm} \forall y \in H_2.
\end{equation*}
Thus, it follows that
\begin{equation} \begin{split}
\label{lb_temp}
F(x_1) &= \frac{1}{2} \| Ax_1 + By_1 + c \|_H^2 + g(y_1) \\
&= g(y_1) + \frac{1}{2} \| Ax_2 + By_1 + c \|_H^2 + \left< Ax_2 + By_1 + c , A(x_1 - x_2) \right>_H \\
&\quad + \frac{1}{2} \| A(x_1 -x_2) \|_H^2 \\
&\geq g(y_2 ) + \left< Ax_2 + By_2 + c , B(y_2 - y_1) \right>_{H} + \frac{1}{2} \| Ax_2 + By_1 + c \|_H^2 \\
&\quad + \left< Ax_2 + By_1 + c , A(x_1 - x_2) \right>_H + \frac{1}{2} \| A(x_1 -x_2) \|_H^2 .
\end{split} \end{equation}
By the vector identity
\begin{equation*}
\left< a +b , b \right> + \frac{1}{2} \| a \|_2^2 = \frac{1}{2} \| a +b \|_2^2 + \frac{1}{2} \| b \|_2^2,
\end{equation*}
equation~\cref{lb_temp} can be rewritten as
\begin{equation} \begin{split}
\label{lb}
F(x_1) &\geq g(y_2) + \frac{1}{2} \| Ax_2 + By_2 + c \|_H^2 + \frac{1}{2} \| B(y_1 - y_2) \|_H^2 \\
&\quad + \left< Ax_2 + By_1 + c , A(x_1 - x_2) \right>_H + \frac{1}{2} \| A(x_1 -x_2) \|_H^2 \\
&= F(x_2) + \frac{1}{2} \| B(y_1 - y_2) \|_H^2 + \left< Ax_2 + By_2 + c, A(x_1 - x_2) \right>_H \\
&\quad + \left< B(y_1 - y_2) , A(x_1 - x_2)\right>_H + \frac{1}{2} \| A(x_1 - x_2) \|_H^2 \\
&= F(x_2) + \left< d(x_2) , x_1 -x_2 \right>_{H_1} + \frac{1}{2} \| (Ax_1 + By_1 + c) - (Ax_2 + By_2 + c) \|_H^2 \\
&\geq F(x_2) + \left< d(x_2) , x_1 -x_2 \right>_{H_1} + \frac{1}{2L} \| d(x_1) - d(x_2) \|_{H_1}^2 .
\end{split} \end{equation}
From \cref{ub,lb}, we conclude that $F$ is differentiable with $\nabla F = d$.
Now, it remains to show that $\nabla F$ is Lipschitz continuous.
Interchanging $x_1$ and $x_2$ in~\cref{lb} yields
\begin{equation}
\label{lb2}
F(x_2) \geq F(x_1) - \left< d(x_1) , x_1 - x_2 \right>_{H_1} + \frac{1}{2L} \| d(x_1) - d(x_2) \|_{H_1}^2.
\end{equation}
Summing~\cref{lb,lb2}, we obtain
\begin{align*}
\frac{1}{L} \| d(x_1) - d(x_2) \|_{H_1}^2 &\leq \left< d(x_1) - d(x_2) , x_1 - x_2 \right>_{H_1} \\
&\leq \| d(x_1) - d(x_2) \|_{H_1} \| x_1 - x_2 \|_{H_1},
\end{align*}
which means that $d$ is Lipschitz continuous with modulus~$L$.
\end{proof}
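As a quick numerical sanity check of the gradient formula in \cref{Lem:smooth} (illustrative only; the random matrices, the simple choice $g(y) = \frac{1}{2}\|y\|^2$, and all names are ours, not the paper's), the following Python snippet compares $\nabla F(x) = A^* (Ax + By^*(x) + c)$ against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n1, n2 = 6, 4, 3
A = rng.standard_normal((m, n1))
B = rng.standard_normal((m, n2))
c = rng.standard_normal(m)

def y_star(x):
    # inner minimizer for the illustrative choice g(y) = 0.5*||y||^2:
    # setting the y-gradient of 0.5*||Ax+By+c||^2 + 0.5*||y||^2 to zero
    return -np.linalg.solve(B.T @ B + np.eye(n2), B.T @ (A @ x + c))

def F(x):
    y = y_star(x)
    return 0.5 * np.linalg.norm(A @ x + B @ y + c) ** 2 + 0.5 * np.linalg.norm(y) ** 2

def grad_F(x):
    # gradient formula from the lemma
    return A.T @ (A @ x + B @ y_star(x) + c)

x = rng.standard_normal(n1)
eps = 1e-6
fd = np.array([(F(x + eps * e) - F(x - eps * e)) / (2 * eps) for e in np.eye(n1)])
err = float(np.max(np.abs(fd - grad_F(x))))
```

The finite-difference gradient and the formula agree to high accuracy, and by the lemma $\nabla F$ is Lipschitz with modulus $\|A\|^2$.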
Now, we obtain the desired regularity result of $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma})$ as a direct consequence of \cref{Lem:smooth}.
\begin{corollary}
\label{Cor:smooth}
The gradient of $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma})$ is given by
$$\nabla \mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}) = \frac{1}{\alpha} \mathrm{div}^* ( \mathrm{div}(\mathcal{H}_I \p_{\Gamma} \oplus \mathbf{p}_{\Gamma}) + \alpha f) |_{Y_{\Gamma}},$$
which is Lipschitz continuous with a Lipschitz constant $4/\alpha$.
\end{corollary}
\begin{proof}
In \cref{Lem:smooth}, we set $H = X$, $H_1 = Y_{\Gamma}$, and $H_2 = Y_I$.
Taking $A = \mathrm{div}$:~$Y_{\Gamma} \rightarrow X$, $B = \mathrm{div}$:~$Y_I \rightarrow X$, and $g = \chi_{C_I}$ yields the conclusion.
In this case, we have $L = 4/\alpha$ due to \cref{Lem:primal_DD_div_norm}.
\end{proof}
\Cref{Cor:smooth} guarantees that FISTA is appropriate for~\cref{primal_DD}.
The proposed primal DDM for the dual ROF model is summarized in \cref{Alg:primal_DD}.
\begin{algorithm}[]
\caption{Primal DDM}
\begin{algorithmic}[]
\label{Alg:primal_DD}
\STATE Choose $L \geq 4$. Let $\mathbf{q}_{\Gamma}^{(0)} = \mathbf{p}_{\Gamma}^{(0)} = \mathbf{0}_{\Gamma}$ and $t_0 = 1$.
\FOR{$n=0,1,2,...$}
\STATE $\displaystyle \mathcal{H}_I \q_{\Gamma}^{(n)} \in \argmin_{\mathbf{q}_I \in Y_I} \left\{ \mathcal{J} (\mathbf{q}_I \oplus \mathbf{q}_{\Gamma}^{(n)}) + \chi_{C_I}(\mathbf{q}_I) \right\}$
\STATE $\displaystyle \mathbf{p}_{\Gamma}^{(n+1)} = \mathrm{proj}_{C_{\Gamma}} \left( \mathbf{q}_{\Gamma}^{(n)} - \frac{1}{L} \mathrm{div}^* \left(\mathrm{div} (\mathcal{H}_I \q_{\Gamma}^{(n)} \oplus \mathbf{q}_{\Gamma}^{(n)}) + \alpha f \right) \Big|_{Y_{\Gamma}}\right)$
\STATE $\displaystyle t_{n+1} = \frac{1 + \sqrt{1+4t_n^2}}{2}$
\STATE $\displaystyle \mathbf{q}_{\Gamma}^{(n+1)} = \mathbf{p}_{\Gamma}^{(n+1)} + \frac{t_n - 1}{t_{n+1}}(\mathbf{p}_{\Gamma}^{(n+1)} - \mathbf{p}_{\Gamma}^{(n)})$
\ENDFOR
\end{algorithmic}
\end{algorithm}
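The update loop above is a standard projected FISTA iteration. It can be sketched generically in Python as follows (a minimal stand-in of our own: the toy least-squares problem, names, and parameters are ours, with `proj` and `grad` playing the roles of $\mathrm{proj}_{C_{\Gamma}}$ and $\nabla \mathcal{J}_{\Gamma}$):

```python
import numpy as np

def fista(grad, proj, L, p0, n_iter=5000):
    """Projected FISTA with the t_n momentum rule used in the primal DDM."""
    p = p0.copy()
    q = p0.copy()
    t = 1.0
    for _ in range(n_iter):
        p_new = proj(q - grad(q) / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        q = p_new + ((t - 1.0) / t_new) * (p_new - p)
        p, t = p_new, t_new
    return p

# toy stand-in problem: min 0.5*||Ap - b||^2 subject to ||p||_inf <= 1
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)
grad = lambda p: A.T @ (A @ p - b)
proj = lambda p: np.clip(p, -1.0, 1.0)
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
p = fista(grad, proj, L, np.zeros(5))
```

At convergence the iterate is a fixed point of the projected gradient map, which is the optimality condition for the constrained problem.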
As we noted in~\cref{primal_DD_harmonic_local}, $\mathcal{H}_I \q_{\Gamma}^{(n)}$ in \cref{Alg:primal_DD} can be obtained independently in each subdomain.
Indeed, $\mathcal{H}_I \q_{\Gamma}^{(n)} = \bigoplus_{s=1}^{\mathcal{N}} \mathbf{q}_s^{(n)}$ where $\mathbf{q}_s^{(n)}$ is a solution of
\begin{equation}
\label{primal_local}
\min_{\mathbf{q}_s \in Y_s} \left\{ \frac{1}{2\alpha} \int_{\Omega_s} \left(\mathrm{div}(\mathbf{q}_s + \mathbf{q}_{\Gamma}^{(n)} |_{\Omega_s} ) + \alpha f \right)^2 \,dx + \chi_{C_s}(\mathbf{q}_s)\right\}.
\end{equation}
Since $\mathbf{q}_{\Gamma}^{(n)} |_{\Omega_s}$ enters~\cref{primal_local} only as an essential boundary condition,
existing solvers for the ROF model can be utilized to obtain $\mathbf{q}_s^{(n)}$ with little modification.
Convergence analysis for \cref{Alg:primal_DD} is straightforward~\cite{BT:2009}.
\begin{theorem}
\label{Thm:primal_DD}
Let $\{ \mathbf{p}_{\Gamma}^{(n)} \}$ be the sequence generated by \cref{Alg:primal_DD}, and let $\mathbf{p}_{\Gamma}^*$ be a solution of~\cref{primal_DD}.
Then for any $n \geq 1$,
\begin{equation*}
\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}^{(n)}) - \mathcal{J}_{\Gamma}(\mathbf{p}_{\Gamma}^*) \leq \frac{2L \| \mathbf{p}_{\Gamma}^{(0)} - \mathbf{p}_{\Gamma}^* \|_{Y_{\Gamma}}^2}{(n+1)^2}.
\end{equation*}
\end{theorem}
\section{A Primal-Dual Domain Decomposition Method}
\label{Sec:pd_DD}
In the primal DDM introduced in \cref{Sec:primal_DD},
the continuity of a solution on the subdomain interfaces is imposed directly.
Alternatively, motivated by existing DDMs in structural mechanics~\cite{FLP:2000,FR:1991}, the continuity can be enforced by the method of Lagrange multipliers, which results in a saddle point problem in the ``primal'' variable $\mathbf{p}$ and the Lagrange multipliers $\lambda$, also known as the ``dual'' variable.
We name the algorithm proposed in this section ``primal-dual DDM'' because it solves the saddle point problem of $\mathbf{p}$ and $\lambda$ by the primal-dual algorithm~\cite{CP:2011}.
We begin with the same domain decomposition setting as in \cref{Sec:primal_DD}.
At first, we state a proposition which suggests how to treat the continuity of the solution on the subdomain interfaces.
\begin{proposition}
\label{Prop:DD_interface}
A vector function $\mathbf{q}$\emph{:} $\Omega \rightarrow \mathbb{R}^2$ is in $H_0 (\mathrm{div} ; \Omega)$ if and only if
the restriction $\mathbf{q}_s = \mathbf{q} |_{\Omega_s}$ to each subdomain $\Omega_s$ is in $H(\mathrm{div}; \Omega_s)$
satisfying the boundary condition $\mathbf{q}_s \cdot \mathbf{n}_s = 0$ on $\partial \Omega_s \cap \partial \Omega$
and the interface condition $\mathbf{q}_s \cdot \mathbf{n}_{st} - \mathbf{q}_t \cdot \mathbf{n}_{st} = 0$ on $\Gamma_{st}$, $s<t$.
\end{proposition}
\begin{proof}
Applying \cref{Prop:FEM_interface} to a coarse mesh $\left\{ \Omega_s \right\}_{s=1}^{\mathcal{N}}$ of $\Omega$ yields the conclusion.
\end{proof}
We introduce the local function space $\tilde{Y}_s$, defined by
\begin{equation*}
\label{tY_s}
\tilde{Y}_s = \left\{ \tilde{\mathbf{q}}_s \in H(\mathrm{div} ; \Omega_s) : \tilde{\mathbf{q}}_s \cdot \mathbf{n}_s = 0 \textrm{ on } \partial \Omega_s \setminus \Gamma \textrm{, }
\tilde{\mathbf{q}}_s |_{T} \in \mathcal{RT}_0 (T) \hspace{0.2cm} \forall T \in \mathcal{T}_s \right\}.
\end{equation*}
The difference between $Y_s$ in~\cref{Y_s} and $\tilde{Y}_s$ is that the essential boundary condition $\tilde{\mathbf{q}}_s \cdot \mathbf{n}_s = 0$ is not imposed on $\Gamma \cap \partial \Omega_s$ for $\tilde{Y}_s$.
That is, $\tilde{Y}_s$ has degrees of freedom on $\partial \Omega_s \cap \Gamma$ as shown in \cref{Fig:DD}(b), while $Y_s$ does not.
Let $\tilde{\mathcal{I}}_s$ be the set of indices of the basis functions for $\tilde{Y}_s$.
Similarly to~\cref{C_s}, we define the inequality-constrained subset $\tilde{C}_s$ of $\tilde{Y}_s$ by
\begin{equation*}
\label{tC_s}
\tilde{C}_s = \left\{ \tilde{\mathbf{p}}_s \in \tilde{Y}_s : |(\tilde{\mathbf{p}}_s)_i| \leq 1 \hspace{0.2cm} \forall i \in \tilde{\mathcal{I}}_s \right\}.
\end{equation*}
Clearly, the projection onto $\tilde{C}_s$ is given by
\begin{equation*}
\label{proj_tC_s}
(\mathrm{proj}_{\tilde{C}_s} \tilde{\mathbf{p}}_s )_i = \frac{(\tilde{\mathbf{p}}_s)_i}{\max \left\{ 1, |(\tilde{\mathbf{p}}_s)_i| \right\}} \hspace{0.5cm} \forall i \in \tilde{\mathcal{I}}_s.
\end{equation*}
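Componentwise, this projection is just a rescaling into the unit interval; in Python (our notation) it reads:

```python
import numpy as np

def proj_box(p):
    # componentwise projection onto {|p_i| <= 1}: p_i / max(1, |p_i|)
    return p / np.maximum(1.0, np.abs(p))

q = proj_box(np.array([0.5, -3.0, 2.0]))
```

Entries already inside the box are unchanged, while the others are clipped to $\pm 1$.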
Also, we denote by $\tilde{Y}$ the direct sum of the local function spaces,
\begin{equation*}
\tilde{Y} = \bigoplus_{s=1}^{\mathcal{N}} \tilde{Y}_s ,
\end{equation*}
and by $\tilde{C}$ the set
\begin{equation*}
\label{tC}
\tilde{C} = \bigoplus_{s=1}^{\mathcal{N}} \tilde{C}_s.
\end{equation*}
For $\tilde{\mathbf{p}} = \bigoplus_{s=1}^{\mathcal{N}} \tilde{\mathbf{p}}_s$, we define the energy functional $\tilde{\mathcal{J}} (\tilde{\mathbf{p}})$ on $\tilde{Y}$ by
\begin{equation}
\label{tJ}
\tilde{\mathcal{J}} (\tilde{\mathbf{p}}) = \sum_{s=1}^{\mathcal{N}} \frac{1}{2\alpha} \int_{\Omega_s} (\mathrm{div} \tilde{\mathbf{p}}_s + \alpha f)^2 \,dx.
\end{equation}
In addition, we define the operator $B$: $\tilde{Y} \rightarrow \mathbb{R}^{|\mathcal{I}_{\Gamma}|}$, which measures the jump of the normal component of functions in $\tilde{Y}$ across the subdomain interfaces, by
\begin{equation}
\label{B}
B\tilde{\mathbf{p}}|_{\Gamma_{st}} = \tilde{\mathbf{p}}_s \cdot \mathbf{n}_{st} - \tilde{\mathbf{p}}_t \cdot \mathbf{n}_{st}, \hspace{0.5cm} s<t.
\end{equation}
Since each degree of freedom in the Raviart--Thomas elements represents the value of the normal component on the corresponding edge,
the matrix representation of $B$ consists only of entries $-1$, $0$, and $1$.
Thus, an application of $B$ reduces to a series of scalar additions and subtractions.
By \cref{Prop:DD_interface}, there is an isomorphism between the two spaces~$Y$ and~$\ker B \subset \tilde{Y}$, say~$\Phi$:~$Y \rightarrow \ker B$, defined by
\begin{equation}
\label{isomorphism}
\Phi \mathbf{p} = \bigoplus_{s=1}^{\mathcal{N}} \mathbf{p}|_{\Omega_s}, \hspace{0.5cm} \mathbf{p} \in Y.
\end{equation}
By such an isomorphism, \cref{d_dual_ROF} is equivalent to
\begin{equation}
\label{pd_constrained}
\min_{\tilde{\mathbf{p}} \in \tilde{Y}} \tilde{\mathcal{J}} (\tilde{\mathbf{p}}) + \chi_{\tilde{C}}(\tilde{\mathbf{p}}) \hspace{0.5cm}
\textrm{subject to } B\tilde{\mathbf{p}} = 0.
\end{equation}
Treating the constraint $B\tilde{\mathbf{p}} = 0$ in~\cref{pd_constrained} with the method of Lagrange multipliers yields the following proposition.
\begin{proposition}
\label{Prop:pd_DD_equiv}
If $\mathbf{p}^* \in Y$ is a solution of \cref{d_dual_ROF},
then $\Phi \mathbf{p}^*$ is a primal solution of the saddle point problem
\begin{equation}
\label{pd_DD}
\min_{\tilde{\mathbf{p}} \in \tilde{Y}} \max_{\lambda \in \mathbb{R}^{|\mathcal{I}_{\Gamma}|}}
\left\{ \mathcal{L}(\tilde{\mathbf{p}}, \lambda ) := \tilde{\mathcal{J}} (\tilde{\mathbf{p}}) + \chi_{\tilde{C}}(\tilde{\mathbf{p}}) + \left< B\tilde{\mathbf{p}}, \lambda \right>_{\mathbb{R}^{|\mathcal{I}_{\Gamma}|}} \right\},
\end{equation}
where~$\Phi$:~$Y \rightarrow \ker B$ was defined in~\cref{isomorphism}.
Conversely, if $\tilde{\mathbf{p}}^* \in \ker B \subset \tilde{Y}$ is a primal solution of \cref{pd_DD},
then $\Phi^{-1} \tilde{\mathbf{p}}^*$ is a solution of \cref{d_dual_ROF}.
\end{proposition}
Since the functional $\tilde{\mathcal{J}} (\tilde{\mathbf{p}})$ in \cref{tJ} is convex but not uniformly convex, the $O(1/n)$ convergent primal-dual algorithm can be utilized to solve~\cref{pd_DD}~\cite{CP:2011}.
To estimate a valid range of parameters for the primal-dual algorithm, \cref{Lem:pd_DD_B_norm} gives a norm bound of the operator $B:\tilde{Y} \rightarrow \mathbb{R}^{|\mathcal{I}_{\Gamma}|}$.
\begin{lemma}
\label{Lem:pd_DD_B_norm}
The operator norm of $B$\emph{:} $\tilde{Y} \rightarrow \mathbb{R}^{|\mathcal{I}_{\Gamma}|}$ defined in~\cref{B} satisfies $\| B \|_{\tilde{Y} \rightarrow \mathbb{R}^{|\mathcal{I}_{\Gamma}|}}^2 \leq 2$.
\end{lemma}
\begin{proof}
Fix $\tilde{\mathbf{p}} = \bigoplus_{s=1}^{\mathcal{N}} \tilde{\mathbf{p}}_s \in \tilde{Y}$.
Let $(B\tilde{\mathbf{p}})_i$ be a degree of freedom of $B\tilde{\mathbf{p}}$ on $\Gamma_{st}$ for some $s<t$,
and let $(\tilde{\mathbf{p}}_s)_i$, $(\tilde{\mathbf{p}}_t)_i$ be degrees of freedom of $\tilde{\mathbf{p}}_s$, $\tilde{\mathbf{p}}_t$ adjacent to $(B\tilde{\mathbf{p}})_i$, respectively.
Then
$$ (B\tilde{\mathbf{p}})_i = (\tilde{\mathbf{p}}_s)_i - (\tilde{\mathbf{p}}_t)_i .$$
By applying the Cauchy--Schwarz inequality, we get
$$ (B\tilde{\mathbf{p}})_i^2 \leq 2((\tilde{\mathbf{p}}_s)_i^2 + (\tilde{\mathbf{p}}_t)_i^2).$$
Summation over every $i$ and $s<t$ yields $\| B\tilde{\mathbf{p}} \|_{\mathbb{R}^{|\mathcal{I}_{\Gamma}|}}^2 \leq 2 \| \tilde{\mathbf{p}} \|_{\tilde{Y}}^2$.
\end{proof}
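The structure of $B$ makes this bound easy to verify numerically: when each interface degree of freedom pairs exactly two distinct local degrees of freedom and each local degree of freedom contributes to exactly one jump, the rows of $B$ are orthogonal with squared norm $2$, so $\| B \|^2 = 2$ exactly. A small Python illustration (our own construction, not the paper's code):

```python
import numpy as np

k = 5  # number of interface degrees of freedom (assumed count, for illustration)
# each jump row takes the difference of two distinct local dofs;
# each local dof contributes to exactly one jump
B = np.zeros((k, 2 * k))
for i in range(k):
    B[i, 2 * i] = 1.0       # dof from subdomain s
    B[i, 2 * i + 1] = -1.0  # matching dof from subdomain t
sigma_max = np.linalg.svd(B, compute_uv=False)[0]
norm_sq = float(sigma_max ** 2)
```

In this configuration $B B^* = 2I$, so every singular value equals $\sqrt{2}$.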
Thanks to \cref{Lem:pd_DD_B_norm}, the primal-dual algorithm for~\cref{pd_DD} is given in \cref{Alg:pd_DD}.
We note that the primal-dual algorithm was also used for DDMs in~\cite{DCT:2016}.
\begin{algorithm}[]
\caption{Primal-dual DDM}
\begin{algorithmic}[]
\label{Alg:pd_DD}
\STATE Choose $L \geq 2$, $\tau, \sigma > 0$ with $\tau \sigma = \frac{1}{L}$.
Let $\tilde{\mathbf{p}}^{(-1)} = \tilde{\mathbf{p}}^{(0)} = \mathbf{0}$ and $\lambda^{(0)} = 0$.
\FOR{$n=0,1,2,...$}
\STATE $\displaystyle \lambda^{(n+1)} = \lambda^{(n)} + \sigma B (2\tilde{\mathbf{p}}^{(n)} - \tilde{\mathbf{p}}^{(n-1)} )$
\STATE $\displaystyle \tilde{\mathbf{p}}^{(n+1)} \in \argmin_{\tilde{\mathbf{p}} \in \tilde{Y}} \left\{ \tilde{\mathcal{J}} (\tilde{\mathbf{p}}) + \chi_{\tilde{C}}(\tilde{\mathbf{p}}) + \frac{1}{2\tau} \int_{\Omega} (\tilde{\mathbf{p}} - \hat{\mathbf{p}})^2 \,dx \right\}$,\\
\quad where $\displaystyle \hat{\mathbf{p}} = \tilde{\mathbf{p}}^{(n)} - \tau B^* \lambda^{(n+1)}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
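The loop above is the first-order primal-dual iteration of~\cite{CP:2011}. The following Python sketch mirrors its ordering (dual ascent with primal extrapolation, then a proximal primal step) on a toy stand-in problem of our own: projecting a point onto $\ker B$ for a small jump-type matrix $B$, with a closed-form prox in place of the local ROF solves.

```python
import numpy as np

def primal_dual(prox, B, sigma, tau, p0, lam0, n_iter=20000):
    """Chambolle-Pock iteration in the ordering of the primal-dual DDM:
    dual ascent with primal extrapolation, then a proximal primal step."""
    p, p_prev, lam = p0.copy(), p0.copy(), lam0.copy()
    for _ in range(n_iter):
        lam = lam + sigma * (B @ (2.0 * p - p_prev))
        p_prev = p
        p = prox(p_prev - tau * (B.T @ lam), tau)
    return p, lam

# toy saddle point: min_p 0.5*||p - b||^2 + <Bp, lam>, i.e. project b onto ker B
rng = np.random.default_rng(0)
b = rng.standard_normal(4)
B = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, -1.0]])  # jump-type constraint matrix, ||B||^2 = 2
prox = lambda v, tau: (v + tau * b) / (1.0 + tau)  # prox of tau*0.5*||.-b||^2
sigma = tau = 0.5  # sigma*tau*||B||^2 = 0.5 < 1
p, lam = primal_dual(prox, B, sigma, tau, np.zeros(4), np.zeros(2))
```

Since the primal objective here is strongly convex and the step sizes satisfy $\sigma \tau \| B \|^2 < 1$, the iterates converge to the unique saddle point, whose primal part is the orthogonal projection of $b$ onto $\ker B$.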
We note that the primal problem for $\tilde{\mathbf{p}}^{(n+1)}$ in \cref{Alg:pd_DD} can be solved independently in each subdomain.
Indeed, $\tilde{\mathbf{p}}^{(n+1)}$ can be obtained as the direct sum of $\tilde{\mathbf{p}}_s^{(n+1)}$'s, where $\tilde{\mathbf{p}}_s^{(n+1)}$ is a solution of
\begin{equation}
\label{pd_DD_local}
\min_{\tilde{\mathbf{p}}_s \in \tilde{Y}_s} \left\{ \frac{1}{2\alpha} \int_{\Omega_s} (\mathrm{div} \tilde{\mathbf{p}}_s + \alpha f)^2 \,dx + \chi_{\tilde{C}_s}(\tilde{\mathbf{p}}_s) + \frac{1}{2\tau} \int_{\Omega_s} (\tilde{\mathbf{p}}_s - \hat{\mathbf{p}}_s)^2 \,dx \right\},
\end{equation}
where $\hat{\mathbf{p}}_s = \tilde{\mathbf{p}}_s^{(n)} - \tau B^* \lambda^{(n+1)} |_{\Omega_s}$.
Now, we state the convergence analysis for \cref{Alg:pd_DD}.
See Theorem~5.1 of~\cite{CP:2016} for details.
\begin{theorem}
\label{Thm:pd_DD}
Let $\left\{ \tilde{\mathbf{p}}^{(n)}, \lambda^{(n)} \right\}$ be the sequence generated by \cref{Alg:pd_DD}.
Then, it converges to a saddle point of~\cref{pd_DD} and satisfies that
$$
\mathcal{L} \left( \frac{1}{n}\sum_{k=1}^{n}\tilde{\mathbf{p}}^{(k)}, \lambda \right) -
\mathcal{L} \left( \tilde{\mathbf{p}}, \frac{1}{n}\sum_{k=1}^{n}\lambda^{(k)} \right)
\leq \frac{1}{n} \left( \frac{1}{\tau} \| \tilde{\mathbf{p}} - \tilde{\mathbf{p}}^{(0)} \|_{2, \tilde{Y}}^2 + \frac{1}{\sigma} \| \lambda - \lambda^{(0)} \|_{2, \mathbb{R}^{|\mathcal{I}_{\Gamma}|}}^2 \right)
$$
for any $\tilde{\mathbf{p}} \in Y$ and $\lambda \in \mathbb{R}^{|\mathcal{I}_{\Gamma}|}$.
\end{theorem}
Even though the convergence rate in \cref{Thm:pd_DD} is the same as that of the existing methods (see, e.g.,~\cite{CTWY:2015}),
the proposed primal-dual DDM has an advantage over the existing methods in the convergence rate of its local problems.
With the help of a $\frac{1}{\tau}$-uniformly convex term
$$\frac{1}{2\tau} \int_{\Omega_s} (\tilde{\mathbf{p}}_s - \hat{\mathbf{p}}_s)^2 \,dx$$
in~\cref{pd_DD_local}, linearly convergent algorithms such as~\cite[Algorithm~3]{CP:2011} and~\cite[Algorithm~5]{CP:2016} can be adopted, while the known optimal convergence rate of the existing methods for the ROF model is only $O(1/n^2)$, which is far slower than linear convergence.
The following is the linearly convergent primal-dual algorithm~\cite[Algorithm~3]{CP:2011} applied to~\cref{pd_DD_local}.
\begin{algorithm}[]
\renewcommand{\thealgorithm}{}
\caption{Linearly convergent local solver for \cref{Alg:pd_DD}}
\begin{algorithmic}[]
\label{Alg:pd_DD_local}
\STATE Choose $L \geq 8$, $\gamma \leq \alpha$, and $\delta \leq \frac{1}{\tau}$.
\STATE Set $\mu = \frac{2\sqrt{\gamma \delta}}{L}$, $\tau_0 = \frac{\mu}{2\gamma}$, $\sigma_0 = \frac{\mu}{2\delta}$, and $\theta_0 \in \left[ \frac{1}{1+\mu}, 1\right]$.
Let $\bar{u}_s^{(0)} = u_s^{(0)} = 0$ and $\tilde{\mathbf{p}}_s^{(0)} = \mathbf{0}$.
\FOR{$n=0,1,2,...$}
\STATE $\displaystyle \tilde{\mathbf{p}}_s^{(n+1)} = \mathrm{proj}_{\tilde{C}_s} \left( \frac{\tau (\tilde{\mathbf{p}}_s^{(n)} - \sigma_0 \mathrm{div}^* \bar{u}_s^{(n)}) + \sigma_0 \hat{\mathbf{p}}_s}{\tau + \sigma_0}\right)$
\STATE $\displaystyle u_s^{(n+1)} = \frac{(u_s^{(n)} + \tau_0 \mathrm{div} \tilde{\mathbf{p}}_s^{(n+1)}) + \tau_0 \alpha f}{1 + \tau_0 \alpha}$
\STATE $\bar{u}_s^{(n+1)} = u_s^{(n+1)} + \theta_0 (u_s^{(n+1)} - u_s^{(n)})$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\section{Numerical Results}
\label{Sec:numerical}
In this section, we present numerical results for the algorithms introduced in the previous sections.
All the algorithms were implemented in MATLAB~R2018a,
and all the computations were performed on a desktop equipped with Intel Core i5-8600K CPU (3.60GHz), 16GB memory, and the OS Windows 10 Pro 64-bit.
Two test images ``Peppers $512\times512$'' and ``Boat $2048 \times 3072$,'' shown in \cref{Fig:test_images}, were used in the numerical experiments.
Each image was corrupted by additive Gaussian noise with mean $0$ and variance $0.05$.
As a measure of denoising quality, the peak signal-to-noise ratio (PSNR), defined by
\begin{equation*}
\mathrm{PSNR} = 10 \log_{10} \left( \frac{\mathrm{MAX}^2 \cdot |\Omega|}{\| u-f_{\mathrm{orig}}\|_{X}^2} \right),
\end{equation*}
where $\mathrm{MAX}$ is the maximum possible pixel value of the image ($\mathrm{MAX} = 1$ in our experiments), $f_{\mathrm{orig}}$ is the original clean image, and $u$ is the denoised image, is computed for each output of the experiments.
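Since $\| u - f_{\mathrm{orig}} \|_X^2 / |\Omega|$ is the mean squared error, the PSNR can be computed as follows (a straightforward Python sketch):

```python
import numpy as np

def psnr(u, f_orig, max_val=1.0):
    # PSNR = 10*log10(MAX^2 * |Omega| / ||u - f_orig||^2) = 10*log10(MAX^2 / MSE)
    mse = np.mean((u - f_orig) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# a uniform error of 0.1 on a [0, 1] image gives a PSNR of 20 dB
val = psnr(np.zeros((4, 4)), 0.1 * np.ones((4, 4)))
```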
We set~$\alpha = 10$ heuristically in~\cref{ROF}.
\begin{figure}[]
\centering
\subfloat[][Peppers $512 \times 512$]{ \includegraphics[height=3.8cm]{./peppers.png} }
\hspace{1.2cm}
\subfloat[][Boat $2048 \times 3072$]{ \includegraphics[height=3.8cm]{./boat.png} }
\caption{Test images for the numerical experiments}
\label{Fig:test_images}
\end{figure}
First, we compare the proposed methods with other existing DDMs for the ROF model.
Thanks to~\cref{Thm:equiv}, direct comparisons with existing methods based on the finite difference discretization are possible in terms of the primal energy functional defined as
\begin{equation}
\label{primal_energy}
\mathcal{E} (u) = \frac{\alpha}{2} \| u - f \|_2^2 + \| | Du|_1 \|_1.
\end{equation}
The following algorithms are used for our numerical experiments:
\begin{itemize}
\item ALG1: Primal DDM described in \cref{Alg:primal_DD}, $L=4$.
\item ALG2: Primal-dual DDM described in \cref{Alg:pd_DD}, $L=2$, $\sigma=0.02$, $\sigma \tau = 1/L$.
\item HL--RJ: Relaxed block Jacobi~(parallel) method proposed by Hinterm{\"u}ller and Langer~\cite{HL:2015}, relaxation parameter:~$1/3$ (see Remark~3.3 of~\cite{LN:2017}).
\item HL--GS: Block Gauss--Seidel~(successive) method proposed by Hinterm{\"u}ller and Langer~\cite{HL:2015}.
\item LN--RJ: Relaxed block Jacobi method proposed by Lee and Nam~\cite{LN:2017}, relaxation parameter:~$1/3$.
\item LN--GS: Block Gauss--Seidel method proposed by Lee and Nam~\cite{LN:2017}.
\end{itemize}
The number of subdomains~$\mathcal{N}$ is fixed at~$4\times4$.
Local problems are solved by the~$O(1/n^2)$ convergent primal-dual algorithm~\cite[Algorithm~2]{CP:2011} with the parameters $L=8$, $\gamma = 0.125\alpha$, $\tau_0 = 0.01$, and $\sigma_0 \tau_0 = 1/L$ for all of the algorithms above except ALG2.
For ALG2, the linearly convergent primal-dual algorithm~\cite[Algorithm~3]{CP:2011} with the parameters~$L=8$, $\gamma = 0.5\alpha$, and $\delta = 1/\tau$ is used.
Local problems are solved until the following stop criterion is satisfied:
\begin{equation*}
\frac{\| \mathbf{p}_s^{(n+1)} - \mathbf{p}_s^{(n)} \|_2}{\| \mathbf{p}_s^{(n+1)}\|_2} < 10^{-8}.
\end{equation*}
To evaluate the performance of DDMs, whose iterations are carried out in the dual variables~$\left\{\mathbf{p}^{(n)} \right\}$, in terms of the primal energy~\cref{primal_energy}, we must define the primal iterates~$\left\{u^{(n)} \right\}$ appropriately.
For HL--RJ and HL--GS, we define $u^{(n)}$ as
\begin{equation*}
u^{(n)} = f + \frac{1}{\alpha} \mathrm{div} \mathbf{p}^{(n)}.
\end{equation*}
Also, for ALG1 and ALG2, $u^{(n)}$ is defined as
\begin{equation}
\label{primal_DD_u}
u^{(n)} = f + \frac{1}{\alpha} \mathrm{div} (\mathcal{H}_I \q_{\Gamma}^{(n)} \oplus \mathbf{q}_{\Gamma}^{(n)})
\end{equation}
and
\begin{equation}
\label{pd_DD_u}
u^{(n)} = f + \frac{1}{\alpha} \bigoplus_{s=1}^{\mathcal{N}} \mathrm{div} \tilde{\mathbf{p}}_s^{(n)} ,
\end{equation}
respectively.
Meanwhile, the minimum value of the primal energy~$\mathcal{E} (u^*)$ is computed approximately by 10,000 iterations of the~$O(1/n^2)$ convergent primal-dual algorithm applied to the full-dimensional problem~\cref{d_dual_ROF}.
\begin{figure}[]
\centering
\subfloat[][Peppers $512 \times 512$]{ \includegraphics[height=4.9cm]{./comp_peppers.png} }
\subfloat[][Boat $2048 \times 3072$]{ \includegraphics[height=4.9cm]{./comp_boat.png} }
\caption{Decay of the values of~$\frac{\mathcal{E} (u^{(n)}) - \mathcal{E}(u^*) }{\mathcal{E} (u^*)}$ in various DDMs for the ROF model}
\label{Fig:comp}
\end{figure}
\Cref{Fig:comp} shows the decay of the relative primal energy functional~$\frac{\mathcal{E} (u^{(n)}) - \mathcal{E}(u^*) }{\mathcal{E} (u^*)}$ during 1,000 outer iterations for various DDMs.
It can be observed that the primal energy of ALG1 decreases as fast as the block Gauss--Seidel methods.
ALG1 has an advantage over the block Gauss--Seidel methods in terms of parallel computation: all local problems of ALG1 can be solved in parallel, while only local problems of the same color can be solved in parallel in the block Gauss--Seidel methods.
In~\cref{Fig:comp}, there are oscillations of the primal energy of ALG1 when the value of~$\frac{\mathcal{E} (u^{(n)}) - \mathcal{E}(u^*) }{\mathcal{E} (u^*)}$ is close to~$10^{-10}$.
This is because local problems are solved inexactly by iterative methods.
\begin{table}[]
\centering
\begin{tabular}{| c | c c c c c c |} \hline
Test image & ALG1 & ALG2 & HL--RJ & HL--GS & LN--RJ & LN--GS \\
\hline
Peppers
& 2923 & 271 & 2925 & 2928 & 2925 & 2929 \\
\hline
Boat
& 8613 & 263 & 8613 & 8613 & 8613 & 8614 \\
\hline
\end{tabular}
\caption{Maximum numbers of inner iterations in various DDMs for the ROF model}
\label{Table:max_inner_iter}
\end{table}
Even though the primal energy of ALG2 does not decrease faster than that of the existing methods, ALG2 has the advantage that its local problems can be solved much faster.
\Cref{Table:max_inner_iter} shows the maximum numbers of inner iterations during~1,000 outer iterations for various DDMs.
ALG1 shows similar behavior on inner iterations compared to the existing DDMs.
On the other hand, as we explained in~\cref{Sec:pd_DD}, ALG2 can adopt linearly convergent algorithms as local solvers, while the other algorithms cannot.
Thus, the maximum number of inner iterations of ALG2 is much smaller than that of the other algorithms.
This phenomenon makes ALG2 practically efficient.
For example, in the case of the test image ``Boat $2048\times3072$,'' a single outer iteration of ALG2 is approximately 32 times faster than the other methods.
Next, we present numerical results for the proposed methods, which emphasize their efficiency as parallel solvers.
To evaluate the parallel efficiency, the \textit{virtual wall-clock time} is measured, which assumes that the algorithms run in parallel in each subdomain.
That is, it ignores the communication time among processors.
We first present the numerical results for \cref{Alg:primal_DD}.
We set the parameter $L = 4$.
We note that, from the viewpoint of image restoration, the stop criteria for the proposed methods need not be too strict.
We use the following stop criterion:
\begin{equation}
\label{stop_outer}
\left| \frac{\mathcal{E}(u^{(n+1)}) - \mathcal{E}(u^{(n)})}{\mathcal{E}(u^{(n+1)})} \right| < 10^{-3} ,
\end{equation}
where $u^{(n)}$ was defined in~\cref{primal_DD_u}.
Local problems are solved by the $O(1/n^2)$ convergent primal-dual algorithm with the parameters $L=8$, $\gamma = 0.125\alpha$, $\tau_0 = 0.01$, and $\sigma_0 \tau_0 = 1/L$ and the stop criterion
\begin{equation}
\label{stop_inner}
\frac{\| \mathbf{p}_{s}^{(n+1)} - \mathbf{p}_{s}^{(n)} \|_{Y_{s}}}{\| \mathbf{p}_{s}^{(n+1)} \|_{Y_{s}}} < 10^{-5} .
\end{equation}
\begin{table}[]
\centering
\begin{tabular}{| c | c | c c c c |} \hline
Test image & $\mathcal{N}$ & PSNR & iter & \begin{tabular}{c}max\\inner iter\end{tabular} & \begin{tabular}{c}Virtual\\wall-clock\\time (sec)\end{tabular} \\
\hline
\multirow{4}{*}{\shortstack{\phantom{1} \\ \phantom{2} \\ Peppers \\ $512 \times 512$}}
& 1 & 24.41 & - & 526 & 4.90 \\ \cline{2-6}
& $2 \times 2$ & 24.41 & 2 & 532 & 0.77 \\
& $4 \times 4$ & 24.41 & 2 & 584 & 0.26 \\
& $8 \times 8$ & 24.41 & 5 & 590 & 0.22 \\
& $16 \times 16$ & 24.41 & 7 & 573 & 0.14 \\
\hline
\multirow{4}{*}{\shortstack{\phantom{1} \\ \phantom{2} \\ Boat \\ $2048 \times 3072$}}
& 1 & 24.75 & - & 995 & 273.48 \\ \cline{2-6}
& $2 \times 2$ & 24.75 & 2 & 1145 & 91.72 \\
& $4 \times 4$ & 24.75 & 2 & 1408 & 21.03 \\
& $8 \times 8$ & 24.75 & 2 & 1415 & 3.42 \\
& $16 \times 16$ & 24.75 & 2 & 1492 & 1.31 \\
\hline
\end{tabular}
\caption{Performance of the primal DDM \cref{Alg:primal_DD}}
\label{Table:primal_DD}
\end{table}
\Cref{Table:primal_DD} shows the performance of \cref{Alg:primal_DD}.
For the single subdomain case, the $O(1/n^2)$ convergent primal-dual algorithm is used.
The PSNRs of the resulting denoised images do not differ from the single subdomain case.
Thus, we can conclude that the results of \cref{Alg:primal_DD} agree with the single subdomain case, as proven in \cref{Prop:primal_DD_equiv}.
With sufficiently many subdomains, the virtual wall-clock time is much less than the wall-clock time of the single subdomain case.
This demonstrates the value of \cref{Alg:primal_DD} as a parallel algorithm.
Next, we consider the primal-dual DDM.
For \cref{Alg:pd_DD}, we set the parameters $L=2$, $\sigma = 0.02$, and $\sigma\tau = 1/L$.
We use the same stop criterion~\cref{stop_outer} for the outer iterations as in~\cref{Alg:primal_DD} with~$u^{(n)}$ defined in~\cref{pd_DD_u}.
For the local solver, the parameters $L=8$, $\gamma = 0.5\alpha$, and $\delta = 1/\tau$ are used.
The stop criterion~\cref{stop_inner} for local problems is used for~$\tilde{\mathbf{p}}_s^{(n)}$.
\begin{table}[]
\centering
\begin{tabular}{| c | c | c c c c |} \hline
Test image & $\mathcal{N}$ & PSNR & iter & \begin{tabular}{c}max\\inner iter\end{tabular} & \begin{tabular}{c}Virtual\\wall-clock\\time (sec)\end{tabular} \\
\hline
\multirow{4}{*}{\shortstack{\phantom{1} \\ \phantom{2} \\ Peppers \\ $512 \times 512$}}
& 1 & 24.41 & - & 526 & 4.90 \\ \cline{2-6}
& $2 \times 2$ & 24.41 & 22 & 144 & 2.09 \\
& $4 \times 4$ & 24.41 & 24 & 147 & 0.66 \\
& $8 \times 8$ & 24.41 & 26 & 150 & 0.28 \\
& $16 \times 16$ & 24.41 & 30 & 154 & 0.19 \\
\hline
\multirow{4}{*}{\shortstack{\phantom{1} \\ \phantom{2} \\ Boat \\ $2048 \times 3072$}}
& 1 & 24.75 & - & 995 & 273.48 \\ \cline{2-6}
& $2 \times 2$ & 24.75 & 12 & 138 & 95.84 \\
& $4 \times 4$ & 24.75 & 18 & 140 & 24.59 \\
& $8 \times 8$ & 24.75 & 20 & 144 & 3.38 \\
& $16 \times 16$ & 24.75 & 24 & 146 & 1.74 \\
\hline
\end{tabular}
\caption{Performance of the primal-dual DDM \cref{Alg:pd_DD}}
\label{Table:pd_DD}
\end{table}
As \cref{Table:pd_DD} shows, the solution of \cref{Alg:pd_DD} is consistent with the single subdomain case regardless of the number of subdomains.
Since the local solver has a linear convergence rate, which is much faster than the standard algorithms for the ROF model,
the maximum number of inner iterations of \cref{Alg:pd_DD} is smaller than that of \cref{Alg:primal_DD} in all cases.
For example, in the experiments with the test image ``Boat $2048 \times 3072$,'' local problems of \cref{Alg:pd_DD} are solved approximately 10 times faster than those of \cref{Alg:primal_DD}.
Consequently, even though the convergence rate of \cref{Alg:pd_DD} is only~$O(1/n)$, the virtual wall-clock time of \cref{Alg:pd_DD} is as small as that of~\cref{Alg:primal_DD} in the case of sufficiently many subdomains.
\begin{figure}[]
\centering
\subfloat[][Noisy ``Peppers $512\times512$'' \\ \centering(PSNR: 19.11)]{ \includegraphics[width=3.8cm]{./peppers_noised.png} }
\subfloat[][Primal DDM, $\mathcal{N} = 16\times16$ \\ \centering(PSNR: 24.41)]{ \includegraphics[width=3.8cm]{./primal_peppers16.png} }
\subfloat[][Primal-dual DDM,\\ \centering$\mathcal{N} = 16\times16$ (PSNR: 24.41)]{ \includegraphics[width=3.8cm]{./pd_peppers16.png} }
\subfloat[][Noisy ``Boat $2048\times3072$'' \\ \centering(PSNR: 19.10)]{ \includegraphics[width=3.8cm]{./boat_noised.png} }
\subfloat[][Primal DDM, $\mathcal{N} = 16\times16$ \\ \centering(PSNR: 24.75)]{ \includegraphics[width=3.8cm]{./primal_boat16.png} }
\subfloat[][Primal-dual DDM,\\ \centering$\mathcal{N} = 16\times16$ (PSNR: 24.75)]{ \includegraphics[width=3.8cm]{./pd_boat16.png} }
\caption{Results of \cref{Alg:primal_DD,Alg:pd_DD} for test images}
\label{Fig:denoised_images}
\end{figure}
Finally, we display the denoised images produced by the proposed DDMs in \cref{Fig:denoised_images}.
We only provide the images for the case $\mathcal{N} = 16 \times 16$ since all the resulting images are visually the same regardless of the number of subdomains.
One can observe that there are no artifacts on the subdomain interfaces, even when the number of subdomains is quite large.
\section{Conclusion}
\label{Sec:conclusion}
In this paper, we proposed an alternative discretization~\cref{d_dual_ROF} for the dual ROF model using a conforming Raviart--Thomas basis.
We mentioned that the proposed discretization naturally satisfies the splitting property~\cref{splitting} of the energy functional.
Thanks to the splitting property, we proposed two DDMs for the dual ROF model: the primal one and the primal-dual one.
We showed that the proposed primal DDM has an $O(1/n^2)$ convergence rate, which is the best among the existing DDMs.
Also, we showed that the local problems in the proposed primal-dual DDM can be solved at a linear convergence rate by using the accelerated primal-dual algorithm.
Numerical results demonstrate the superiority of the proposed DDMs.
We conclude the paper with a remark on the primal-dual DDM.
Since we did not use any regularity of the dual ROF energy functional to prove convergence of the primal-dual DDM, we expect that the primal-dual DDM can be generalized to more advanced imaging problems with total variation, for example, total variation minimization with~$L^1$-fidelity term~\cite{CE:2005}.
\bibliographystyle{siamplain}
\section{Introduction}
In this paper we are interested in prediction of the future given the
past. We assume that a sequence $x_{1}^{m}=x_{1},x_{2},\dots,x_{m}$
has been observed and the goal is to predict the next symbols $x_{m+1}^{n}=x_{m+1},x_{m+2},\dots,x_{n}$
in the sense that we will assign a probability or a probability density
to this sequence. The prediction is compared with iid models given
by a parametrized family $\left(P_{\theta}\right)_{\theta\in\Theta}$
of probability distributions that assign probability $\prod_{i=1}^{n}P_{\theta}\left(x_{i}\right)$
(or the corresponding density) to the sequence $x_{1}^{n}.$ One may
think of the elements of the family $\left(P_{\theta}\right)_{\theta\in\Theta}$
as the models that some experts can choose among. For the techniques
used in this paper the restriction to iid models is crucial, but some
of the results may generalize to non-iid models.
All measures will be described by their density with respect to a
dominating measure $\lambda.$ Data are assumed to lie in $\mathcal{X}\subseteq\mathbb{R}^{d}$
and vectors will be marked with bold face. Assume that $\left(P_{\boldsymbol{\theta}}\right)_{\boldsymbol{\theta}\in\Theta}$
is a natural exponential family with
\begin{align*}
\frac{\mathrm{d}P_{\boldsymbol{\theta}}}{\mathrm{d}\lambda}\left(\boldsymbol{x}\right) & =\frac{\exp\left(\boldsymbol{\theta}\cdot\boldsymbol{x}\right)}{Z\left(\boldsymbol{\theta}\right)}=\exp\left(\boldsymbol{\theta}\cdot\boldsymbol{x}-A\left(\boldsymbol{\theta}\right)\right).
\end{align*}
Here $Z\left(\boldsymbol{\theta}\right)=\int\exp\left(\boldsymbol{\theta}\cdot\boldsymbol{x}\right)\,\mathrm{d}\lambda\left(\boldsymbol{x}\right)$
is the \emph{moment generating function} and $A\left(\boldsymbol{\theta}\right)=\ln\left(Z\left(\boldsymbol{\theta}\right)\right)$
is the \emph{cumulant generating function}. If the parameter has value
$\boldsymbol{\theta}$ then the mean value is $\boldsymbol{\mu}_{\boldsymbol{\theta}}=\nabla A\left(\boldsymbol{\theta}\right).$
The density $\frac{\mathrm{d}P_{\boldsymbol{\theta}}}{\mathrm{d}\lambda}$
will be denoted $p_{\boldsymbol{\theta}}$, but sometimes we will
also use $p_{\boldsymbol{\theta}}$ for iid sequences.
One approach is the frequentist approach where the sequence $\boldsymbol{x}_{1}^{n}$
is generated by the distribution $P_{\boldsymbol{\theta}}$ for some
true but unknown value of $\boldsymbol{\theta}.$ The sequence $\boldsymbol{x}_{1}^{m}$
is used to make inference about the value of $\boldsymbol{\theta}$
in terms of a confidence region. In a Bayesian approach one has a
prior distribution $\pi$ on the true parameter $\boldsymbol{\theta}$
and the sequence $\boldsymbol{x}_{1}^{m}$ is used to calculate a
posterior distribution of $\boldsymbol{\theta}$ as
\[
\frac{p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{m}\right)\pi\left(\boldsymbol{\theta}\right)}{\int_{\Theta}p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{m}\right)\pi\left(\boldsymbol{\theta}\right)\,\mathrm{d}\boldsymbol{\theta}}.
\]
Then the posterior distribution of $\boldsymbol{x}_{m+1}^{n}$ is
given by
\begin{align}
p_{\pi}\left(\boldsymbol{x}_{m+1}^{n}\mid\boldsymbol{x}^{m}\right) & =\int_{\Theta}p_{\boldsymbol{\theta}}\left(\boldsymbol{x}_{m+1}^{n}\right)\,\mathrm{d}\pi\left(\boldsymbol{\theta}\mid\boldsymbol{x}^{m}\right)\label{eq:posteriorpredic}\\
& =\int_{\Theta}p_{\boldsymbol{\theta}}\left(\boldsymbol{x}_{m+1}^{n}\right)\frac{p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{m}\right)\pi\left(\boldsymbol{\theta}\right)}{\int_{\Theta}p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{m}\right)\pi\left(\boldsymbol{\theta}\right)\,\mathrm{d}\boldsymbol{\theta}}\,\mathrm{d}\boldsymbol{\theta}\nonumber
\end{align}
One of the main problems in Bayesian statistics is the question of
how to determine the prior distribution $\pi.$
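As a concrete instance of the posterior predictive~(\ref{eq:posteriorpredic}), the following sketch computes $p_{\pi}\left(\boldsymbol{x}_{m+1}^{n}\mid\boldsymbol{x}^{m}\right)$ for the Bernoulli family with a Beta prior; the Beta-Bernoulli pair and the prior parameters ($a=b=\nicefrac{1}{2}$, i.e.\ Jeffreys' prior for this family) are an illustrative choice, not the general setting of the paper.

```python
from scipy.stats import beta
from scipy.integrate import quad

def posterior_predictive(future, past, a=0.5, b=0.5):
    """p(x_{m+1}^n | x^m) = integral of prod_i p_theta(x_i) over the
    posterior of theta, for Bernoulli data with a Beta(a, b) prior."""
    k, m = sum(past), len(past)
    post = beta(a + k, b + m - k)          # posterior of theta given x^m
    def integrand(t):
        lik = 1.0
        for x in future:
            lik *= t if x == 1 else 1.0 - t
        return lik * post.pdf(t)
    val, _ = quad(integrand, 0.0, 1.0)
    return val

past = [1, 0, 1, 1]
# Closed form for predicting one more '1': (a + k) / (a + b + m)
print(posterior_predictive([1], past))  # approximately 0.7
```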
The moment generating function $Z$ is related to the Laplace transform
of the measure $\lambda$, so either of the functions $Z$ and $A$ can
be used to reconstruct $\lambda$. The \emph{Hessian matrix} of $A$
with respect to $\boldsymbol{\theta}$ equals the \emph{covariance
matrix} $Cov\left(\boldsymbol{\mu}_{\boldsymbol{\theta}}\right)$.
The Fisher information matrix with respect to the natural parameter
is $Cov\left(\boldsymbol{\mu}_{\boldsymbol{\theta}}\right)$ so that
\emph{Jeffreys' prior} is proportional to $\left|Cov\left(\boldsymbol{\mu}_{\boldsymbol{\theta}}\right)\right|^{\nicefrac{1}{2}}.$
Therefore \emph{Jeffreys' posterior} distribution of the parameter
$\boldsymbol{\theta}$ after observing a sequence of length $m$ with
average $\bar{\boldsymbol{x}}$ is proportional to
\[
\exp\left(m\cdot\left(\boldsymbol{\theta}\cdot\bar{\boldsymbol{x}}-A\left(\boldsymbol{\theta}\right)\right)\right)\cdot\left|Cov\left(\boldsymbol{\mu}_{\boldsymbol{\theta}}\right)\right|^{\nicefrac{1}{2}}.
\]
One motivation for using Jeffreys' prior is that it is considered
as an uninformative prior. Another motivation is that if one restricts
to a bounded subset whose closure is in the interior of the full parameter
space, then the use of Jeffreys' prior is asymptotically optimal in
an MDL sense \cite{Grunwald2007}.
A covariance matrix is positive semi-definite, so the cumulant generating
function is convex. The \emph{convex conjugate} of the cumulant generating
function $A$ is $A^{*}\left(\boldsymbol{x}\right)=\sup_{\boldsymbol{\theta}}\left\{ \boldsymbol{\theta}\cdot\boldsymbol{x}-A\left(\boldsymbol{\theta}\right)\right\} .$
The conjugate parameter $\boldsymbol{x}^{*}$ equals the value of
$\boldsymbol{\theta}$ such that $P_{\boldsymbol{\theta}}$ has mean
value $\boldsymbol{x}$, i.e. $\boldsymbol{x}^{*}$ is the solution
to the equation $\nabla A\left(\boldsymbol{\theta}\right)=\boldsymbol{x}$.
Usually the conjugate parameter $\boldsymbol{x}^{*}$ is denoted $\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}\right)$
and is called the maximum likelihood estimate of $\boldsymbol{\theta}.$
We can define the \emph{conjugated exponential family} (if it exists)
as the exponential family with sufficient statistic $\boldsymbol{\theta}$
and with cumulant generating function $A^{*}\left(\boldsymbol{x}\right).$
\begin{rem}
For an exponential family the conjugated exponential family gives
a set of ``conjugated priors'' as this concept is defined in the
literature on Bayesian statistics (see \cite{Raiffa1961} and \cite[Sec. 12.2.6]{Liu2014}),
but a set of ``conjugated priors'' need not coincide with the conjugated
exponential family as it is defined in this paper.
\end{rem}
The Bregman divergence generated by the convex function $A$ is defined
by
\[
D_{A}\left(\boldsymbol{\theta}_{2},\boldsymbol{\theta}_{1}\right)=A\left(\boldsymbol{\theta}_{2}\right)-\left(A\left(\boldsymbol{\theta}_{1}\right)+\left(\boldsymbol{\theta}_{2}-\boldsymbol{\theta}_{1}\right)\cdot\nabla A\left(\boldsymbol{\theta}_{1}\right)\right).
\]
Using convex conjugation the divergence can also be written as
\[
D_{A}\left(\boldsymbol{\theta}_{2},\boldsymbol{\theta}_{1}\right)=D_{A^{*}}\left(\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}\right).
\]
The information divergence can be calculated as
\begin{multline*}
D\left(\left.P_{\boldsymbol{\theta}_{1}}\right\Vert P_{\boldsymbol{\theta}_{2}}\right)=E_{\boldsymbol{\theta}_{1}}\left[\ln\left(\frac{\mathrm{d}P_{\boldsymbol{\theta}_{1}}}{\mathrm{d}P_{\boldsymbol{\theta}_{2}}}\right)\right]\\
=E_{\boldsymbol{\theta}_{1}}\left[\left(\boldsymbol{\theta}_{1}\cdot\boldsymbol{X}-A\left(\boldsymbol{\theta}_{1}\right)\right)-\left(\boldsymbol{\theta}_{2}\cdot\boldsymbol{X}-A\left(\boldsymbol{\theta}_{2}\right)\right)\right]\\
=D_{A}\left(\boldsymbol{\theta}_{2},\boldsymbol{\theta}_{1}\right).
\end{multline*}
The conjugated exponential family gives posterior distributions on
the parameter $\boldsymbol{\theta},$ such that the maximum likelihood
estimate $\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}\right)$ is
unbiased in the sense that it equals the mean value of $\boldsymbol{\theta}$
with respect to the posterior distribution of $\boldsymbol{\theta}$
given $\boldsymbol{x}$. Therefore the use of the conjugated exponential
family implies that the maximum likelihood estimator equals the Bayes
estimator with respect to the loss function $D_{A}$ or any other
Bregman divergence.
The likelihood function can be written as
\begin{multline*}
p_{\boldsymbol{\theta}}\left(\boldsymbol{x}\right)=\exp\left(\boldsymbol{\theta}\cdot\boldsymbol{x}-A\left(\boldsymbol{\theta}\right)\right)=\\
\exp\left(-A\left(\boldsymbol{\theta}\right)+\left(A\left(\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}\right)\right)+\left(\boldsymbol{\theta}-\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}\right)\right)\cdot\nabla A\left(\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}\right)\right)\right)\right)\\
\cdot p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}\right)}\left(\boldsymbol{x}\right)\\
=\exp\left(-D_{A}\left(\boldsymbol{\theta},\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}\right)\right)\right)\cdot p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}\right)}\left(\boldsymbol{x}\right).
\end{multline*}
As a consequence we have the following robustness property \cite[Section 19.3, Eq. 19.12]{Grunwald2007}
of the exponential family
\begin{equation}
\frac{\mathrm{d}P_{\boldsymbol{\theta}}}{\mathrm{d}P_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}\right)}}\left(\boldsymbol{x}\right)=\exp\left(-D_{A}\left(\boldsymbol{\theta},\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}\right)\right)\right).\label{eq:robust}
\end{equation}
The likelihood function after observing the sequence $\boldsymbol{x}^{m}$
is
\begin{multline*}
\prod_{i=1}^{m}p_{\boldsymbol{\theta}}\left(\boldsymbol{x}_{i}\right)=\prod_{i=1}^{m}\exp\left(\boldsymbol{\theta}\cdot\boldsymbol{x}_{i}-A\left(\boldsymbol{\theta}\right)\right)\\
=\exp\left(\boldsymbol{\theta}\cdot\sum_{i=1}^{m}\boldsymbol{x}_{i}-m\cdot A\left(\boldsymbol{\theta}\right)\right)\\
=\exp\left(m\cdot\left(\boldsymbol{\theta}\cdot\bar{\boldsymbol{x}}-A\left(\boldsymbol{\theta}\right)\right)\right)\\
=\exp\left(-m\cdot D_{A}\left(\boldsymbol{\theta},\hat{\boldsymbol{\theta}}\left(\bar{\boldsymbol{x}}\right)\right)\right)\\
\cdot\exp\left(m\left(\hat{\boldsymbol{\theta}}\left(\bar{\boldsymbol{x}}\right)\cdot\bar{\boldsymbol{x}}-A\left(\hat{\boldsymbol{\theta}}\left(\bar{\boldsymbol{x}}\right)\right)\right)\right).
\end{multline*}
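The factorization of the likelihood through the Bregman divergence can be checked numerically; here for the Gaussian location family with unit variance relative to the standard normal base measure, where $A(\theta)=\theta^{2}/2$, $\hat{\theta}(\bar{x})=\bar{x}$, and $D_{A}(\theta_{1},\theta_{2})=(\theta_{1}-\theta_{2})^{2}/2$ (an illustrative one-dimensional choice).

```python
import numpy as np

A = lambda t: 0.5 * t**2    # cumulant generating function of N(theta, 1)
mle = lambda xbar: xbar     # theta-hat solves A'(theta) = xbar
# Bregman divergence D_A(t1, t2) = A(t1) - A(t2) - (t1 - t2) * A'(t2), A'(t2) = t2
D_A = lambda t1, t2: A(t1) - A(t2) - (t1 - t2) * t2

rng = np.random.default_rng(0)
x = rng.normal(size=20)
theta, m, xbar = 0.3, len(x), x.mean()

# Likelihood of the sequence with respect to the base measure
lhs = np.exp(m * (theta * xbar - A(theta)))
# Factorized form: exp(-m D_A(theta, theta-hat)) times the maximized likelihood
th = mle(xbar)
rhs = np.exp(-m * D_A(theta, th)) * np.exp(m * (th * xbar - A(th)))
print(np.isclose(lhs, rhs))  # True
```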
In the minimum description length (MDL) approach to statistical inference
there is no assumption about a true value of $\boldsymbol{\theta}$,
and the quality of a prediction is compared with the maximum likelihood
estimate of $\boldsymbol{\theta}$ in terms of a difference in code
length. For a data sequence $\boldsymbol{x}^{n}$ the \emph{regret}
of predicting $p\left(\boldsymbol{x}_{m+1}^{n}\mid\boldsymbol{x}^{m}\right)$
is
\[
-\ln\left(p\left(\boldsymbol{x}_{m+1}^{n}\mid\boldsymbol{x}^{m}\right)\right)-\left(-\ln\left(p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}\left(\boldsymbol{x}^{n}\right)\right)\right).
\]
Here the predictor $p\left(\cdot\mid\boldsymbol{x}^{m}\right)$ is
used to code the future $\boldsymbol{x}_{m+1}^{n}$ while the expert
is coding the whole sequence $\boldsymbol{x}^{n}$, but the expert
is allowed to choose the model $\boldsymbol{\theta}=\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)$
that gives the best fit to data. We take the maximum over all possible
data sequences and the predictor that minimizes the maximal regret
is called the \emph{conditional normalized maximum likelihood} predictor
(CNML) \cite{Rissanen2007} and is given by
\begin{multline}
p_{cnml}^{n}\left(\boldsymbol{x}_{m+1}^{n}\mid\boldsymbol{x}^{m}\right)\\
=\frac{p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}\left(\boldsymbol{x}^{n}\right)}{\int_{\mathcal{X}^{n-m}}p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{m}\boldsymbol{y}^{n-m}\right)}\left(\boldsymbol{x}^{m}\boldsymbol{y}^{n-m}\right)\,\mathrm{d}\lambda^{n-m}\left(\boldsymbol{y}^{n-m}\right)}.\label{eq:CNML}
\end{multline}
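For discrete data the integral in~(\ref{eq:CNML}) becomes a sum over all continuations, and CNML can be computed by brute-force enumeration. The following sketch does this for the Bernoulli family; the sequence lengths are illustrative, and sequences of all zeros or all ones are assigned maximized likelihood $1$.

```python
from itertools import product

def p_ml(seq):
    """Maximized likelihood p_{theta-hat(seq)}(seq) for a Bernoulli sequence."""
    n, k = len(seq), sum(seq)
    th = k / n
    return th**k * (1.0 - th)**(n - k) if 0 < k < n else 1.0

def cnml(future, past):
    """CNML predictor: normalize p_ml over all continuations of `past`."""
    Z = sum(p_ml(past + y) for y in product((0, 1), repeat=len(future)))
    return p_ml(past + future) / Z

past = (1, 0, 1)
probs = cnml((0,), past), cnml((1,), past)
print(probs, sum(probs))  # the two probabilities sum to 1
```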
\section{Main results}
The essence of the following lemma was already present in \cite[Lem. 3]{Bartlett2013}.
\begin{lem}
\label{lem:key}Assume that $\left(P_{\boldsymbol{\theta}}\right)_{\boldsymbol{\theta}\in\Theta}$
is a natural exponential family. Assume that $m$ is a number such
that CNML and Bayesian prediction based on a prior $\pi$ give equal
prediction strategies for sequences $\boldsymbol{x}_{m+1}^{n}$ for
all $n>m.$ Then for any $n>m$ the integral
\[
\int_{\Theta}\frac{p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}{p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}\left(\boldsymbol{x}^{n}\right)}\pi\left(\boldsymbol{\theta}\right)\,\mathrm{d}\boldsymbol{\theta}
\]
is constant as a function of the data sequence $\boldsymbol{x}^{n}=\boldsymbol{x}_{1}\boldsymbol{x}_{2}\dots\boldsymbol{x}_{n}$.
\end{lem}
\begin{rem}
Prediction with CNML and prediction based on Jeffreys' prior can only
be equal if they are both defined. The values of $m$ for which these
prediction methods are defined, may in principle be different and
may depend on the data sequence \cite{Harremoes2013}.
\end{rem}
\begin{IEEEproof}
For all $\boldsymbol{x}^{n}\in\mathcal{X}^{n}$ we must have
\[
p_{\pi}\left(\boldsymbol{x}_{m+1}^{n}\mid\boldsymbol{x}^{m}\right)=p_{cnml}^{n}\left(\boldsymbol{x}_{m+1}^{n}\mid\boldsymbol{x}^{m}\right).
\]
Using (\ref{eq:posteriorpredic}) and (\ref{eq:CNML}) we get
\begin{multline*}
\int_{\Theta}p_{\boldsymbol{\theta}}\left(\boldsymbol{x}_{m+1}^{n}\right)\frac{p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{m}\right)\pi\left(\boldsymbol{\theta}\right)}{\int_{\Theta}p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{m}\right)\pi\left(\boldsymbol{\theta}\right)\,\mathrm{d}\boldsymbol{\theta}}\,\mathrm{d}\boldsymbol{\theta}\\
=\frac{p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}\left(\boldsymbol{x}^{n}\right)}{\int_{\mathcal{X}^{n-m}}p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{m}\boldsymbol{y}^{n-m}\right)}\left(\boldsymbol{x}^{m}\boldsymbol{y}^{n-m}\right)\,\mathrm{d}\lambda^{n-m}\left(\boldsymbol{y}^{n-m}\right)}
\end{multline*}
and
\begin{multline*}
\frac{\int_{\Theta}p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)\pi\left(\boldsymbol{\theta}\right)\,\mathrm{d}\boldsymbol{\theta}}{p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}\left(\boldsymbol{x}^{n}\right)}\\
=\frac{\int_{\Theta}p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{m}\right)\pi\left(\boldsymbol{\theta}\right)\,\mathrm{d}\boldsymbol{\theta}}{\int_{\mathcal{X}^{n-m}}p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{m}\boldsymbol{y}^{n-m}\right)}\left(\boldsymbol{x}^{m}\boldsymbol{y}^{n-m}\right)\,\mathrm{d}\lambda^{n-m}\left(\boldsymbol{y}^{n-m}\right)}.
\end{multline*}
The quantity on the left side is a function of $\boldsymbol{x}^{n}$
while the quantity on the right side is a function of the sub-string
$\boldsymbol{x}^{m}.$ Since the model is invariant under permutations
of the elements in the string $\boldsymbol{x}^{n}$ both sides must
equal a constant. Finally we note that
\[
\frac{\int_{\Theta}p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)\pi\left(\boldsymbol{\theta}\right)\,\mathrm{d}\boldsymbol{\theta}}{p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}\left(\boldsymbol{x}^{n}\right)}=\int_{\Theta}\frac{p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}{p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}\left(\boldsymbol{x}^{n}\right)}\pi\left(\boldsymbol{\theta}\right)\,\mathrm{d}\boldsymbol{\theta}\,,
\]
which proves the lemma.
\end{IEEEproof}
Note that we have not really used that the parametrized family is
an exponential family, so a similar result holds as long as the parametrization
is sufficiently smooth. In that case
one can also prove that the prior must be proportional to Jeffreys'
prior. We conjecture that if conditional MDL is a Bayesian prediction
for some smoothly parametrized family where the parameter space is
finite dimensional, then the family must be exponential. Recall
that the saddle point approximation \cite{Daniels1954} for the exponential
family is
\[
\exp\left(-nD_{A}\left(\boldsymbol{\theta},\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)\right)\right)\frac{\left|Cov\left(\boldsymbol{\mu}_{\boldsymbol{\theta}}\right)\right|^{\nicefrac{1}{2}}}{\tau^{\nicefrac{d}{2}}}\,,
\]
where $\tau$ is short for $2\pi.$
\begin{thm}
\label{thm:Main}Assume that $\left(P_{\boldsymbol{\theta}}\right)_{\boldsymbol{\theta}\in\Theta}$
is a natural exponential family. Then the following conditions are
equivalent:
\begin{itemize}
\item CNML is a Bayesian prediction strategy.
\item Jeffreys' posterior distributions are elements of the conjugated exponential family.
\item The renormalized saddle-point approximation is exact for the conjugated exponential family.
\end{itemize}
\end{thm}
\begin{IEEEproof}
According to Lemma \ref{lem:key} we may define a constant
$C_{n}$ by
\[
C_{n}=\int_{\Theta}\frac{p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}{p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}\left(\boldsymbol{x}^{n}\right)}\pi\left(\boldsymbol{\theta}\right)\,\mathrm{d}\boldsymbol{\theta}.
\]
Then
\begin{equation}
\frac{p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}{p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}\left(\boldsymbol{x}^{n}\right)}\cdot\frac{\pi\left(\boldsymbol{\theta}\right)}{C_{n}}\label{eq:conj}
\end{equation}
is a probability density function for $\boldsymbol{\theta}$. We will
demonstrate that the family of probability measures (\ref{eq:conj})
parametrized by $\boldsymbol{x}^{n}$ is the conjugated exponential
family with $\boldsymbol{\theta}$ as sufficient statistic. We have
\begin{multline*}
\frac{p_{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}{p_{\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)}\left(\boldsymbol{x}^{n}\right)}\cdot\frac{\pi\left(\boldsymbol{\theta}\right)}{C_{n}}\\
=\frac{\exp\left(n\left(\boldsymbol{\theta}\cdot\bar{\boldsymbol{x}}-A\left(\boldsymbol{\theta}\right)\right)\right)}{\exp\left(n\left(\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)\cdot\bar{\boldsymbol{x}}-A\left(\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)\right)\right)\right)}\cdot\frac{\pi\left(\boldsymbol{\theta}\right)}{C_{n}}\\
=\exp\left(n\left(\boldsymbol{\theta}\cdot\bar{\boldsymbol{x}}-A^{*}\left(\bar{\boldsymbol{x}}\right)\right)\right)\cdot\frac{\pi\left(\boldsymbol{\theta}\right)}{\exp\left(nA\left(\boldsymbol{\theta}\right)\right)C_{n}}.
\end{multline*}
According to the robustness property (\ref{eq:robust}) the density
can be rewritten as
\[
\exp\left(-nD_{A}\left(\boldsymbol{\theta},\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}^{n}\right)\right)\right)\cdot\frac{\pi\left(\boldsymbol{\theta}\right)}{C_{n}}.
\]
Since this should hold as $n$ tends to infinity, the saddle point
approximation implies that $\pi\left(\boldsymbol{\theta}\right)$
is proportional to $\left|Cov\left(\boldsymbol{\mu}_{\boldsymbol{\theta}}\right)\right|^{\nicefrac{1}{2}}$.
Therefore the density in the exponential family is proportional to
the saddle point approximation.
\end{IEEEproof}
\begin{cor}
If any of the equivalent conditions of Theorem \ref{thm:Main} are
fulfilled the exponential family is steep and the parameter space
is maximal.
\end{cor}
The goal is now to identify exponential families where Jeffreys' posterior
distributions form exponential families with exact renormalized saddle
point approximations. In \cite{Blesild1985} it was proved that under
certain regularity conditions the renormalized saddle point approximation
is exact for \emph{reproductive exponential families}. The reproductive
exponential families were defined and described in detail in \cite{Barndorff-Nielsen1983}
where it was proved that in one dimension the following families are reproductive:
the Gaussian location families, the Gamma exponential families and
the Inverse Gaussian families. The idea of reproductive exponential
families can be used to construct reproductive exponential families
in higher dimensions by combining reproductive exponential families
in lower dimensions. Five non-trivial examples of 2-dimensional (strongly)
reproductive exponential families obtained by combining reproductive
1-dimensional families were listed in \cite{Barndorff-Nielsen1983}.
For each reproductive exponential family the conjugate exponential
family (if it exists) will satisfy the conditions of Theorem \ref{thm:Main}.
We will illustrate how this works for 1-dimensional reproductive exponential
families.
The only 1-dimensional natural exponential families where the renormalized
saddle point approximation is exact are the three reproductive exponential
families mentioned above \cite{Daniels1980}; this can be proved
by solving ordinary differential equations \cite{Blesild1985}. A
complete classification of exponential families with exact renormalized
saddle point approximation in dimension 2 or higher would require
solving some complicated partial differential equations. Therefore
a complete catalog of families for which the equivalent conditions
of Theorem \ref{thm:Main} are fulfilled seems inaccessible.
For the 1-dimensional reproductive exponential families the function
$A^{*}$ is exactly the one used in \cite{Barndorff-Nielsen1983}
to prove that the exponential family is reproductive. Exploration
of this fact in higher dimensions will be covered in a future paper.
\section{The Gamma family}
A Gamma distribution can be parametrized by the shape parameter $\alpha$
and the rate parameter $\beta.$ With these parameters the Gamma distribution
$\Gamma\left(\alpha,\beta\right)$ has density
\[
\frac{\beta^{\alpha}x^{\alpha-1}}{\Gamma\left(\alpha\right)}\exp\left(-\beta x\right)=\frac{x^{\alpha-1}}{\Gamma\left(\alpha\right)}\exp\left(-\beta x+\alpha\ln\left(\beta\right)\right)
\]
for $x>0.$ For a fixed value of $\alpha$ this is a natural exponential
family with natural parameter $\theta=-\beta<0$. Therefore $A\left(\theta\right)=-\alpha\ln\left(-\theta\right).$
The mean value is $\mu=-\alpha/\theta$ so that $\theta=-\alpha/\mu.$
The variance is $Var=\alpha\theta^{-2}$, so that the variance function
is $V\left(\mu\right)=\frac{\mu^{2}}{\alpha}.$ In terms of the parameter
$\beta$ the mean value is $\mu=\alpha/\beta$ and the variance is
$Var=\alpha\cdot\beta^{-2}$. Jeffreys' prior has density proportional
to $\frac{\alpha^{\nicefrac{1}{2}}}{\beta},$ which cannot be normalized.
The Bregman divergence is
\begin{multline*}
D_{A}\left(\theta_{1},\theta_{2}\right)\\
=\alpha\ln\left(-\frac{1}{\theta_{1}}\right)-\left(\alpha\ln\left(-\frac{1}{\theta_{2}}\right)+\left(\theta_{1}-\theta_{2}\right)\cdot\frac{-\alpha}{\theta_{2}}\right)\\
=\alpha\left(\frac{\theta_{1}}{\theta_{2}}-1-\ln\left(\frac{\theta_{1}}{\theta_{2}}\right)\right).
\end{multline*}
For $\alpha=1$ this Bregman divergence is called the \emph{Itakura-Saito
divergence}.
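A minimal numerical sketch of this divergence follows; the parameter values are illustrative, and the natural parameters are negative as required.

```python
import numpy as np

def gamma_bregman(theta1, theta2, alpha=1.0):
    """D_A(theta1, theta2) = alpha * (theta1/theta2 - 1 - ln(theta1/theta2))
    for the Gamma family with natural parameter theta < 0; alpha = 1
    gives the Itakura-Saito divergence."""
    r = theta1 / theta2
    return alpha * (r - 1.0 - np.log(r))

print(gamma_bregman(-2.0, -2.0))        # 0.0 at equal arguments
print(gamma_bregman(-1.0, -3.0) > 0.0)  # True: positive otherwise
```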
The convex conjugate of $A$ is
\begin{multline*}
A^{*}\left(x\right)=\sup_{\theta}\left\{ x\cdot\theta-A\left(\theta\right)\right\} =x\cdot\left(-\frac{\alpha}{x}\right)-A\left(-\frac{\alpha}{x}\right)\\
=-\alpha+\alpha\ln\left(\frac{\alpha}{x}\right)=-\alpha+\alpha\ln\left(\alpha\right)-\alpha\ln\left(x\right).
\end{multline*}
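The closed form for $A^{*}$ can be checked against a direct numerical maximization of $x\cdot\theta-A\left(\theta\right)$ over $\theta<0$; the parameter values and the bounded scalar minimizer are an illustrative choice.

```python
import numpy as np
from scipy.optimize import minimize_scalar

alpha, x = 2.0, 1.5
A = lambda th: -alpha * np.log(-th)                 # cumulant generating function
A_star_closed = -alpha + alpha * np.log(alpha / x)  # derived closed form

# sup_{theta < 0} (x*theta - A(theta)), computed numerically; the supremum
# is attained at theta = -alpha/x, well inside the bracket below.
res = minimize_scalar(lambda th: -(x * th - A(th)),
                      bounds=(-50.0, -1e-8), method="bounded")
print(A_star_closed, -res.fun)  # the two values agree
```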
We see that the conjugated exponential family of $\beta=-\theta$
is again a Gamma exponential family with shape parameter $\alpha$,
i.e. the Gamma exponential family is \emph{self-conjugated}. If $x$
is observed the posterior distribution of $\beta$ has rate parameter
$x$. If a sequence of length $m$ has been observed then the posterior
distribution is a Gamma distribution with shape parameter $m\alpha$
and rate parameter $m\bar{x}.$
Since the density of a Gamma distribution equals the re-normalized
saddle point approximation we have that the conditions in Theorem
\ref{thm:Main} are fulfilled and the CNML predictor equals Bayesian
prediction based on Jeffreys' prior. This also holds for exponential
families like the inverse Gamma family, the Pareto family, the Nakagami
family, and the Weibull family where the sufficient statistic is a
smooth 1-to-1 function of the sufficient statistic in a Gamma family.
We will now look at the consequences of self-conjugation for calculations
of one-sided credible intervals and one-sided confidence intervals.
Let $G$ denote the distribution function of $\Gamma\left(m\alpha,m\bar{x}\right)$,
i.e. the posterior distribution of $\beta$ if the average is observed
to be $\bar{x}$. Then $\left[0,G^{-1}\left(1-\tilde{\alpha}\right)\right]$
is a $1-\tilde{\alpha}$ \emph{credible interval} for $\beta.$ We
can write
\begin{align*}
G^{-1}\left(1-\tilde{\alpha}\right) & =\frac{F^{-1}\left(1-\tilde{\alpha}\right)}{\bar{x}}
\end{align*}
where $F$ is the distribution function of $\Gamma\left(m\alpha,m\right).$
If $X_{i}\sim\Gamma\left(\alpha,\beta\right)$ then $\sum_{i=1}^{m}X_{i}\sim\Gamma\left(m\alpha,\beta\right)$
and $\frac{1}{m}\sum_{i=1}^{m}X_{i}\sim\Gamma\left(m\alpha,m\beta\right)$
so that $\text{\ensuremath{\beta\bar{X}\sim\Gamma\left(m\alpha,m\right)}.}$
Therefore
\begin{align*}
P\left(\beta\in\left[0,\frac{F^{-1}\left(1-\tilde{\alpha}\right)}{\bar{X}}\right]\right) & =P\left(\bar{X}\in\left[0,\frac{F^{-1}\left(1-\tilde{\alpha}\right)}{\beta}\right]\right)\\
& =1-\tilde{\alpha}
\end{align*}
so that the $1-\tilde{\alpha}$ credible interval $\left[0,\frac{F^{-1}\left(1-\tilde{\alpha}\right)}{\bar{x}}\right]$
is also a $1-\tilde{\alpha}$ \emph{confidence interval} for $\beta$
as defined in the frequentist approach to statistics.
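This coincidence of credible and confidence intervals can be checked by simulation; the parameter values, sample size, and number of trials below are illustrative, and scipy's Gamma distribution is parametrized by a scale equal to the reciprocal rate.

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

rng = np.random.default_rng(0)
alpha, beta, m, conf = 2.0, 3.0, 10, 0.9

# F is the distribution function of Gamma(m*alpha, rate m);
# scipy's `scale` parameter is 1/rate.
q = gamma_dist.ppf(conf, a=m * alpha, scale=1.0 / m)

# Frequentist coverage of the credible interval [0, F^{-1}(conf)/xbar]
# over repeated samples drawn with the fixed true rate `beta`.
trials = 20000
xbars = rng.gamma(shape=alpha, scale=1.0 / beta, size=(trials, m)).mean(axis=1)
coverage = np.mean(beta <= q / xbars)
print(coverage)  # close to 0.9
```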
\section{The Gaussian location family}
If the parameter space equals $\mathbb{R}^{d}$ the notion of self-conjugation
becomes very simple. The proof of the following lemma is an easy exercise.
\begin{lem}
Let $B:\mathbb{R}^{d}\to\mathbb{R}^{d}$ denote a linear invertible
self-adjoint mapping. If $G$ is a convex function and $F=G\circ B$
then $F^{*}=G^{*}\circ B^{-1}$.
\end{lem}
The Gaussian location model has density
\[
\frac{\exp\left(-\frac{1}{2}\left(\boldsymbol{x}-\boldsymbol{\mu}\right)\cdot B^{-1}\left(\boldsymbol{x}-\boldsymbol{\mu}\right)\right)}{\tau^{\nicefrac{d}{2}}\cdot\left|B\right|^{^{\nicefrac{1}{2}}}}
\]
where $\boldsymbol{\mu}$ is the mean and $B$ denotes the covariance
matrix.
\begin{thm}
If an exponential family has a cumulant generating function $A:\mathbb{R}^{d}\to\mathbb{R}$
that satisfies $A^{*}=A\circ B$ for some positive definite linear
function $B:\mathbb{R}^{d}\to\mathbb{R}^{d}$ then the exponential
family is a Gaussian location model where $B$ can be identified with
the covariance matrix.
\end{thm}
\begin{IEEEproof}
Define $F=A\circ B^{\nicefrac{1}{2}}.$ Then
\begin{multline*}
F^{*}=A^{*}\circ\left(B^{\nicefrac{1}{2}}\right)^{-1}=A\circ B\circ B^{-\nicefrac{1}{2}}=A\circ B^{\nicefrac{1}{2}}=F\,.
\end{multline*}
Since $F$ is self-conjugated and defined on $\mathbb{R}^{d}$ we
can apply \cite[Prop. 29a]{Moreau1965} to get $F\left(\boldsymbol{x}\right)=\frac{1}{2}\left\Vert \boldsymbol{x}\right\Vert ^{2}.$
Therefore
\begin{multline*}
A\left(\boldsymbol{x}\right)=F\left(B^{-\nicefrac{1}{2}}\left(\boldsymbol{x}\right)\right)=\frac{1}{2}B^{-\nicefrac{1}{2}}\left(\boldsymbol{x}\right)\cdot B^{-\nicefrac{1}{2}}\left(\boldsymbol{x}\right)\\
=\frac{1}{2}\boldsymbol{x}\cdot B^{-1}\left(\boldsymbol{x}\right).
\end{multline*}
It is easy to prove that the Gaussian location model also has cumulant
generating function $\frac{1}{2}\boldsymbol{x}\cdot B^{-1}\left(\boldsymbol{x}\right).$
\end{IEEEproof}
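The conjugation identity in the theorem can be checked numerically. The sketch below (ours, not part of the paper; the sample matrix is arbitrary) computes the convex conjugate of $A(\boldsymbol{x})=\frac{1}{2}\boldsymbol{x}\cdot B^{-1}\boldsymbol{x}$ by direct maximization and compares it with $(A\circ B)(\boldsymbol{y})=\frac{1}{2}\boldsymbol{y}\cdot B\boldsymbol{y}$:

```python
import numpy as np
from scipy.optimize import minimize

# For A(x) = x . B^{-1} x / 2 the convex conjugate
# A*(y) = sup_x (y . x - A(x)) should equal y . B y / 2, i.e. A* = A o B.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
B = M @ M.T + 3.0 * np.eye(3)          # a positive definite "covariance"
Binv = np.linalg.inv(B)

def A(x):
    return 0.5 * x @ Binv @ x

def conjugate(y):
    # sup_x (y . x - A(x)) computed as -min_x (A(x) - y . x)
    res = minimize(lambda x: A(x) - y @ x, np.zeros(3))
    return -res.fun

y = rng.normal(size=3)
assert abs(conjugate(y) - 0.5 * y @ B @ y) < 1e-6
```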
Since the saddle point approximation is exact for the Gaussian location
family, the conditions of Theorem \ref{thm:Main} are fulfilled. For
the Gaussian location family the Bregman divergence is symmetric in
its arguments and inference reduces to the principle of least squares.
In Bayesian statistics a $1-\tilde{\alpha}$ \emph{credible region}
for the mean value parameter can be calculated as a divergence ball
\begin{equation}
\left\{ \boldsymbol{\theta}\in\mathbb{R}^{d}\mid D_{A}\left(\boldsymbol{\theta},\hat{\boldsymbol{\theta}}\left(\boldsymbol{x}\right)\right)\leq r\right\} \label{eq:ball}
\end{equation}
where the radius $r$ is chosen so that the ball has probability $1-\tilde{\alpha}$.
Using that the exponential family is self-conjugated we see that the
ball (\ref{eq:ball}) is also a $1-\tilde{\alpha}$ \emph{confidence
region} as defined in frequentist statistics.
\section{The Poisson-exponential family}
The saddle point approximation is exact for the inverse Gaussian family
with density
\[
\left(\frac{\kappa}{\tau\beta^{3}}\right)^{\nicefrac{1}{2}}\exp\left(-\kappa\frac{\left(\beta-\beta_{0}\right)^{2}}{2\beta_{0}^{2}\beta}\right),
\]
where $\beta$ is the sufficient statistic and $\beta_{0}$ denotes
the mean value of the distribution and $\kappa$ denotes the \emph{shape
parameter}. We are going to identify the conjugated exponential family.
First we rewrite
\begin{multline*}
\left(\frac{\kappa}{\tau\beta^{3}}\right)^{\nicefrac{1}{2}}\exp\left(-\kappa\frac{\left(\beta-\beta_{0}\right)^{2}}{2\beta_{0}^{2}\beta}\right)\\
=\left(\frac{\kappa}{\tau\beta^{3}}\right)^{\nicefrac{1}{2}}\exp\left(-\frac{\kappa}{2\beta}\right)\exp\left(\frac{-\kappa}{2\beta_{0}^{2}}\cdot\beta+\frac{\kappa}{\beta_{0}}\right).
\end{multline*}
The natural parameter is $\theta=\frac{-\kappa}{2\beta_{0}^{2}}$
and the cumulant generating function is $A\left(\theta\right)=-\left(-2\kappa\theta\right)^{\nicefrac{1}{2}}.$
The convex conjugate is
\begin{align*}
A^{*}\left(\beta\right) & =\sup_{\theta<0}\left\{ \beta\cdot\theta-A\left(\theta\right)\right\} \\
 & =\beta\cdot\frac{-\kappa}{2\beta^{2}}+\left(-2\kappa\cdot\frac{-\kappa}{2\beta^{2}}\right)^{\nicefrac{1}{2}}=\frac{\kappa}{2\beta}.
\end{align*}
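As a numerical sanity check (ours, not from the paper), the supremum defining $A^{*}$ can be evaluated directly; with the sign convention $A(\theta)=-(-2\kappa\theta)^{\nicefrac{1}{2}}$ on $\theta<0$, for which the supremum is attained at an interior point, the value agrees with $\kappa/(2\beta)$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Legendre transform of A(theta) = -sqrt(-2*kappa*theta) on theta < 0;
# the parameter values are arbitrary.
kappa, beta = 1.7, 0.9

def A(theta):
    return -np.sqrt(-2.0 * kappa * theta)

# A*(beta) = sup_theta (beta*theta - A(theta)) = -min_theta (A(theta) - beta*theta)
res = minimize_scalar(lambda t: A(t) - beta * t,
                      bounds=(-50.0, -1e-9), method="bounded")
A_star = -res.fun
assert abs(A_star - kappa / (2.0 * beta)) < 1e-6
```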
One can identify an exponential family with this function as cumulant
generating function by taking the inverse Laplace transform, but it
is more instructive to identify it by calculating the variance function.
We have
\begin{align*}
\left(A^{*}\right)'\left(\beta\right) & =-\frac{\kappa}{2\beta^{2}}\,\textrm{and}\,\left(A^{*}\right)''\left(\beta\right)=\frac{\kappa}{\beta^{3}}.
\end{align*}
Thus $\hat{\theta}\left(\beta\right)=-\frac{\kappa}{2}\beta^{-2}$
so that $\beta\left(\theta\right)=\left(-\frac{\kappa}{2\theta}\right)^{\nicefrac{1}{2}}$
and $V\left(\theta\right)=\kappa\left(\beta\left(\theta\right)\right)^{-3}=2^{\nicefrac{3}{2}}\kappa^{-\nicefrac{1}{2}}\left(-\theta\right)^{\nicefrac{3}{2}}=\phi\cdot\left(-\theta\right)^{\nicefrac{3}{2}}$
where $\phi=2^{\nicefrac{3}{2}}\kappa^{-\nicefrac{1}{2}}$. Since
the variance function is a power function of order $\nicefrac{3}{2}$
one says that the corresponding exponential family is a \emph{Tweedie
family} of order $p=\nicefrac{3}{2}$. Jeffreys' prior for this
family is proportional to
\[
\left(\left(A^{*}\right)''\left(\beta\right)\right)^{\nicefrac{1}{2}}=\kappa^{\nicefrac{1}{2}}\cdot\beta^{-\nicefrac{3}{2}},
\]
which cannot be normalized. Credible intervals and confidence intervals
can be calculated using the $\mathtt{tweedie}$ and $\mathtt{statmod}$
packages in R, but the $1-\tilde{\alpha}$ credible intervals
do not coincide with the $1-\tilde{\alpha}$ confidence intervals
reflecting that the Poisson-exponential family is not self-conjugated.
One cannot calculate the density of elements of the Tweedie family
of order $p=\nicefrac{3}{2}$ exactly, but they can be obtained by
the following construction. Let $N$ denote a random variable with
a Poisson distribution $Po\left(\lambda\right)$. Let $X_{1},X_{2},\dots$
denote a sequence of iid random variables each exponentially distributed
$Exp\left(\beta\right)$. Then we may define
\[
Y=\sum_{n=1}^{N}X_{n}\,.
\]
The distribution of $Y$ is then a compound Poisson distribution.
Distributions where $X_{i}$ are Gamma distributions were called Poisson-gamma
distributions in \cite{Smyth1996}, so we will call the distribution
of $Y$ a \emph{Poisson-exponential distribution} when $X_{i}$ are
exponential. The density of $\sum_{n=1}^{\alpha}X_{n}$ is
\[
\frac{\beta^{\alpha}x^{\alpha-1}\exp\left(-\beta x\right)}{\Gamma\left(\alpha\right)}.
\]
Therefore the Poisson-exponential distribution has a point mass at
0 of weight $\exp\left(-\lambda\right)$ and it has density
\[
\sum_{\alpha=1}^{\infty}\frac{\lambda^{\alpha}\exp\left(-\lambda\right)}{\alpha!}\cdot\frac{\beta^{\alpha}x^{\alpha-1}\exp\left(-\beta x\right)}{\Gamma\left(\alpha\right)}
\]
for $x>0$. We introduce $\kappa=2\lambda\beta$
so that the density can be written as
\[
\sum_{\alpha=1}^{\infty}\frac{\left(\frac{\kappa}{2}\right)^{\alpha}x^{\alpha-1}}{\alpha!\Gamma\left(\alpha\right)}\cdot\exp\left(-\beta\cdot x-\frac{\kappa}{2\beta}\right).
\]
This is a natural exponential family with natural parameter $-\beta$
and cumulant generating function $\kappa/\left(2\beta\right)$. Except
for a change of sign it is the conjugated exponential family of the
inverse Gaussian family.
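The compound construction above is easy to simulate. The following sketch (ours; parameter values are arbitrary) checks the point mass $\exp(-\lambda)$ at $0$ and the mean $\lambda/\beta$ of the Poisson-exponential distribution by Monte Carlo:

```python
import numpy as np

# Y = sum_{n=1}^{N} X_n with N ~ Po(lambda) and X_n ~ Exp(beta) i.i.d.
rng = np.random.default_rng(1)
lam, beta = 2.0, 1.5
n_samples = 100_000

N = rng.poisson(lam, n_samples)
Y = np.array([rng.exponential(1.0 / beta, n).sum() for n in N])

# Point mass at 0 has weight exp(-lambda); the mean is lambda/beta.
assert abs((Y == 0).mean() - np.exp(-lam)) < 0.01
assert abs(Y.mean() - lam / beta) < 0.03
```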
Since the saddle point approximation is exact for the inverse Gaussian
family, prediction for the Poisson-exponential family based on CNML
equals prediction based on Jeffreys' prior, and the Jeffreys posterior
is an inverse Gaussian distribution.
The Poisson-exponential families have been used to model the accumulated
amount of rain in rainfalls, where the amount of rain in each rainfall
is modeled by an exponential distribution and the number of rainfalls
is modeled by a Poisson distribution \cite{Thompson1984,Revfeim1984}.
This application dates back to Cornish and Fisher. References to other
applications, as well as a derivation of the basic properties of Poisson-gamma
distributions, can be found in \cite{Withers2011}. Note that the Poisson-exponential
family is a Tweedie family of order $p=\nicefrac{3}{2}$ and that
some of the literature on applications of the Poisson-exponential
family treats the order $p$ as a free parameter that should be estimated
in order to give a good fit with data. According to our results the
value $p=\nicefrac{3}{2}$ is special with respect to statistical
inference, so that $p$ cannot be considered as a free parameter if
we want to have the properties developed here.
\section*{Acknowledgement}
I would like to thank Wojciech Kot\l owski for useful comments on
this paper.
\section{Introduction}
From general relativity (GR), we know that photons are deflected from their straight path when they pass close to a compact and massive body; this deflection of light was first observed in 1919 by Dyson, Eddington and Davidson \cite{Dyson}. The effect resulting from the deflection of light rays in a gravitational field is known as gravitational lensing, and the object causing a detectable deflection is usually called a gravitational lens \cite{Einstein}. Gravitational lensing can help us extract information about distant stars that are too dim to be observed, acting as a natural, large telescope. An astrophysical object (such as a black hole) can deflect light rays passing close to it by a large angle, and can even make them complete loops around the object before reaching the observer, resulting in two infinite sets of the so-called relativistic images on each side of the object. With these relativistic images we could not only extract information about black holes in the universe, but also test alternative theories of gravity in their strong field regime \cite{Vir,Vir1,Vir2,Vir3,Fritt,Bozza,Bozza2,Eirc,Eirc1,whisk,Gyulchev,Bhad1,TSa1,AnAv,gr1,Kraniotis,JH}. Therefore, strong gravitational lensing is regarded as a powerful probe of the physical nature of central celestial objects, and it has been studied extensively in various theories of gravity in recent years \cite{schen}.
Several theories \cite{Brans,Buchdahl,Gia,Jacob,Ferraro,Kezban} were proposed to generalize GR in order to accommodate the observation that the universe is going through a phase of accelerated expansion \cite{Adam,S} without requiring the existence of a cosmological constant, dark energy or dark matter. A widely known model, which is a well-developed case of the infrared modification of gravity and nicely illustrates all of these points, is massive gravity \cite{Dubovsky}. Fierz and Pauli \cite{Pauli} made the first attempt to include a mass for the graviton in 1939. However, the massive gravity theory received little attention until the vigorous development of quantum field theory in the early 1970s.
Since then, many significant massive gravity models have been proposed as modified Einstein gravity theories (see the reviews on the subject \cite{Rham,Kurt,Michael}), especially in recent years. In this paper, we focus on a static vacuum spherically symmetric solution in massive gravity obtained in \cite{Michael1,Comelli}. This solution has attracted considerable attention. Sharmanthie Fernando \cite{Sharmanthie,Sharmanthie1} studied quasinormal modes of scalar and massless Dirac perturbations of this theory. Fabio Capela \cite{Fabio1,Fabio} checked the validity of the laws of thermodynamics in massive gravity by making use of the exact black hole solution, and studied equilibrium states and phase structures of such a solution enclosed in a spherical surface kept at a fixed temperature. Furthermore, the geodesic structure of this solution was discussed in Ref. \cite{zhang}. In this paper, we will study the strong gravitational lensing of this solution.
Scalar fields have received much attention in recent years, for several reasons. First and foremost, scalar fields, both as fundamental fields and as effective fields, are well motivated by standard model particle physics. Second, we want to explore different field contents in order to check whether the ``No Hair Theorem'' \cite{Ruffini} holds and to explore the structure of black holes; scalar fields are one of the simplest types of ``matter'' often considered by physicists. Finally, the presence of scalar fields leads to black hole spacetimes different from those of GR, and may engender new phenomena; we hope such deviations could be detected in astrophysical observations. Moreover, G. Aad et al. \cite{Aad,Chatrchyan} discovered a scalar particle at the Large Hadron Collider at CERN, which has been identified as the standard model Higgs boson since 2012. This observation proves that fundamental scalar fields exist in nature. Many examples of scalar hairy black holes \cite{Martinez,Nadalini,Anabalon,Herdeiro} have been constructed by bypassing the long-standing ``No Hair Theorem''. It is therefore important to study strong gravitational lensing by black holes with scalar hair. This paper shows that the scalar hair has a great influence on strong gravitational lensing.
The outline of this paper is as follows. In Sec.II we study the physical properties of strong gravitational lensing by the black hole with scalar charge in massive gravity and probe the effects of the deformation parameters on the radius of the photon sphere, the minimum impact parameter and the deflection angle. In Sec.III we suppose that the gravitational field of the supermassive black hole at the center of our galaxy can be described by this metric, and obtain numerical results for the main observables in strong gravitational lensing, such as the angular image position, the angular image separation and the relative magnifications of the images. In Sec.IV we obtain, by numerical calculation, the time delay of light traveling from the source to the observer. We end the paper with a summary.
\section{Deflection angle in massive gravity}
We will use the following massive gravity model \cite{Dubovsky:2004sg}
\begin{eqnarray}
\label{eq:action}
\mathcal{S} = \int \textrm{d} x^{4} \sqrt{- g}
\left[ - \textrm{M}_{\textrm{pl}}^2 \mathcal{R} + \mathcal{L}_{\textrm{m}} +
\Lambda^{4} \mathcal{F} \right] ,
\end{eqnarray}
where the first two terms, the curvature and the Lagrangian of the
minimally coupled ordinary matter, comprise the standard GR action, and the third term describes four scalar fields $\phi^0$, $\phi^i$
whose space-time dependent vacuum expectation values spontaneously
break the Lorentz symmetry. The function ${\cal F}$
depends on two particular combinations of the derivatives of the
Goldstone fields, $\mathcal{F} = \mathcal{F} \left( X, W^{ij}
\right)$, with
\begin{eqnarray*}
X &=& \dfrac{\partial^{\mu} \phi^0 \partial_{\mu} \phi^0}{\Lambda^4} ,~~
W^{ij} =\dfrac{\partial^{\mu} \phi^i
\partial_{\mu}\phi^j}{\Lambda^4}
- \dfrac{\partial^{\mu} \phi^i \partial_{\mu}\phi^0\,
\partial^{\nu} \phi^j \partial_{\nu}\phi^0}{\Lambda^8 X} ,
\end{eqnarray*}
where the constant $\Lambda$ has the dimension of mass.
A static spherically symmetric solution in massive gravity is described by \cite{Michael1,Comelli}
\begin{equation}\label{metric1}
ds^2=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}(d\theta^{2}+\sin^{2}\theta d\varphi^{2}),
\end{equation}
with
\begin{equation}\label{metric2}
f(r)=1-\frac{2M}{r}-\frac{S}{r^{\lambda}},
\end{equation}
where $M$ accounts for the gravitational mass of the body, $\lambda$ is a parameter of the model which depends on the potential, and $S$ is a scalar charge whose presence reflects the modification of the gravitational interaction as compared to GR; we recover the standard Schwarzschild solution when $S=0$. The solution (\ref{metric1}) has an attractive behavior at large distances for positive $M$. For negative $M$, the corresponding Newton potential is repulsive at large distances and attractive near the horizon; the latter has no corresponding case in GR, so we only consider the case $M>0$. For $S>0$, the modified black hole has an attractive gravitational potential at all distances and the event horizon size is larger than $2M$. For $S<0$, the event horizon exists only for sufficiently small $|S|$; when the event horizon exists, the gravitational field is attractive all the way down to the horizon, while the attraction is weaker than in the case of the usual Schwarzschild black hole and the horizon size is smaller. Fortunately, the massive gravity theory with Lorentz violation gives an asymptotically flat spherically symmetric space with finite total energy, featuring an asymptotic behavior slower than $1/r$ and generically of the form $1/r^{\lambda}$, which makes the black hole solution far richer than in GR due to the presence of the ``hair'' $\lambda$. The solution does not describe asymptotically flat space when $\lambda<0$, and the Arnowitt-Deser-Misner (ADM) mass is infinite when $0<\lambda<1$. For $\lambda>1$, the solution recovers the standard Schwarzschild term at large distances and the ADM mass is equal to $M$. So we will limit ourselves to the case $\lambda>1$ in the following. It should be noted that the form of this metric is the same as the Reissner-Nordstr\"{o}m metric when $\lambda=2$ and $S<0$.
To study the gravitational lensing, we consider only the equatorial plane ($\theta=\frac{\pi}{2}$), as usual. This means that both the observer and the source lie in the equatorial plane and the whole trajectory of the photon is confined to this plane. For simplicity, we rescale $r/2M\rightarrow r$ and $S/(2M)^{\lambda}\rightarrow S$ in the following calculations. The metric (\ref{metric1}) can then be rewritten as
\begin{equation}\label{metric3}
ds^2=-A(r)dt^2+B(r)dr^2+C(r)d\phi^2,
\end{equation}
with
\begin{equation}\label{metric4}
A(r)=f(r), ~~B(r)=\frac{1}{f(r)}, ~~C(r)=r^2,~~f(r)=1-\frac{1}{r}-\frac{S}{r^{\lambda}}.
\end{equation}
From Eq.(\ref{metric3}) we know that the event horizons $r_{H(\lambda)}$ for different $\lambda$ are given by
\begin{eqnarray}
r_{H(2)}&=&\frac{1}{2}(1+\sqrt{1+4S}),\\\label{horizon2}
r_{H(3)}&=&\frac{1}{3}(1+\frac{2^{\frac{1}{3}}}{(2+27S+3\sqrt{3}\sqrt{4S+27S^{2}})^{\frac{1}{3}}}+
\frac{(2+27S+3\sqrt{3}\sqrt{4S+27S^{2}})^{\frac{1}{3}}}{2^{\frac{1}{3}}}),\\\label{horizon3}
r_{H(4)}&=&\frac{1}{4}+\frac{1}{4}\sqrt{1-16L+\frac{4S}{3L}}+\frac{1}{2}\sqrt{\frac{1}{2}+4L-\frac{S}{3L}+
\frac{1}{2\sqrt{1-16L+\frac{4S}{3L}}}} ,\label{horizon4}
\end{eqnarray}
with
\begin{eqnarray}
L=\frac{(\frac{2}{3})^{\frac{1}{3}}S}{(-9S+\sqrt{3}\sqrt{27S^{2}+256S^{3}})^{\frac{1}{3}}}.\nonumber
\end{eqnarray}
The limits on $S$ can be drawn from the above equations: for the event horizon to exist, we must require $S>-0.25$, $S>-0.148$ and $S>-0.105$ for $\lambda=2$, $\lambda=3$ and $\lambda=4$, respectively. Obviously, when $S=0$ the radius of the event horizon is always $1$ for any fixed $\lambda$, the same value as in the Schwarzschild case.
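As a quick sanity check (ours, not part of the paper), the closed form for $r_{H(2)}$ can be verified against the defining equation $f(r)=0$ for several admissible values of $S$:

```python
import numpy as np

# For lambda = 2, f(r) = 1 - 1/r - S/r^2 = 0 gives r^2 - r - S = 0,
# whose largest root is r_H(2) = (1 + sqrt(1 + 4S)) / 2.
for S in (-0.2, -0.05, 0.1, 0.5):
    rH = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * S))
    assert abs(1.0 - 1.0 / rH - S / rH**2) < 1e-12
```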
The equation of the photon sphere is given by \cite{Vir2,Vir3}
\begin{equation}\label{u1}
\frac{C'(r)}{C(r)}=\frac{A'(r)}{A(r)},
\end{equation}
which admits at least one positive solution; the largest real root of Eq.(\ref{u1}) is defined as the radius of the photon sphere $r_{ps(\lambda)}$. The values of $r_{ps(\lambda)}$ for $\lambda=2,~3$ and $4$ are given by
\begin{eqnarray}
r_{ps(2)}&=&\frac{1}{4}(3+\sqrt{9+32S}),\\\label{U2}
r_{ps(3)}&=&\frac{1}{2}+\frac{1+(1+10S-2\sqrt{5}\sqrt{S+5S^2})^{\frac{2}{3}}}
{2(1+10S-2\sqrt{5}\sqrt{S+5S^2})^{\frac{1}{3}}}\\\label{u3}
r_{ps(4)}&=&\frac{3}{8}+\frac{1}{8}\sqrt{9-\frac{128S}{K}+8K}+\frac{1}{2}\sqrt{\frac{9}{8}+\frac{8S}{K}-\frac{K}{2}+
\frac{27}{8\sqrt{9-\frac{128S}{K}+8K}}},\label{u4}
\end{eqnarray}
with
\begin{eqnarray}
K=(-27S-\sqrt{729S^2+4096S^3})^{\frac{1}{3}}.\nonumber
\end{eqnarray}
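The closed form for $r_{ps(2)}$ can likewise be checked numerically (our sketch): for $\lambda=2$, Eq.(\ref{u1}) reduces to $2-3/r-4S/r^{2}=0$, the vanishing of the coefficient $p_{2}(r_{0})$ introduced later.

```python
import numpy as np

# r_ps(2) = (3 + sqrt(9 + 32S)) / 4 should solve 2 - 3/r - 4S/r^2 = 0,
# and reduce to the Schwarzschild value 1.5 at S = 0.
for S in (-0.25, -0.1, 0.0, 0.3):
    r = 0.25 * (3.0 + np.sqrt(9.0 + 32.0 * S))
    assert abs(2.0 - 3.0 / r - 4.0 * S / r**2) < 1e-12
```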
\begin{figure}[!htb]
\includegraphics[width=0.96\textwidth ]{fig1.eps}
\caption{(color online) The radius of the photon sphere changes with parameter $S$ for different $\lambda$ in massive gravity.}
\end{figure}
The limits on $S$ can also be drawn from the above equations and the figure: $S>-0.28$, $S>-0.2$ and $S>-0.1779$ for $\lambda=2$, $\lambda=3$ and $\lambda=4$, respectively. Obviously, this constraint is weaker than that for the event horizon. Therefore, we adopt the limits $S>-0.25$, $S>-0.148$ and $S>-0.105$ for $\lambda=2$, $\lambda=3$ and $\lambda=4$. It is not hard to find that the radius of the photon sphere is always 1.5 for fixed $\lambda$ when $S=0$, the same value as in the Schwarzschild case \cite{Bozza}.
Following Ref. \cite{Einstein}, we can define the deflection angle for a photon coming from infinity in massive gravity as
\begin{equation}\label{angle1}
\alpha(r_{0})=I(r_{0})-\pi,
\end{equation}
where $r_{0}$ is the closest approach distance and $I(r_{0})$ \cite{Vir} is
\begin{equation}\label{angle2}
I(r_{0})=2\int^{\infty}_{r_{0}}\frac{\sqrt{B(r)}dr}{\sqrt{C(r)}\sqrt{\frac{C(r)A(r_{0})}{C(r_{0})A(r)}-1}}.
\end{equation}
The deflection angle increases as $r_{0}$ decreases. For a special value of $r_{0}$ the deflection angle becomes $2\pi$, which means that the light ray makes a complete loop around the compact object before reaching the observer. Moreover, the deflection angle diverges and the photon is captured when $r_{0}$ decreases to the radius of the photon sphere $r_{ps}$.
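The integral $I(r_{0})$ can be evaluated numerically. The sketch below (ours, not from the paper) does this for the Schwarzschild case $S=0$ in units $2M=1$, using the substitution $z=1-r_{0}/r$ introduced below followed by $z=t^{2}$, which removes the square-root singularity of the integrand at $r=r_{0}$; far from the photon sphere the result should be close to the weak-field value $4M/r_{0}=2/r_{0}$ in these units.

```python
import numpy as np
from scipy.integrate import quad

def f(r):
    # Schwarzschild metric function in units 2M = 1
    return 1.0 - 1.0 / r

def alpha(r0):
    f0 = f(r0)

    def integrand(t):
        z = t * t
        r = r0 / (1.0 - z)
        core = (r * r * f0) / (r0 * r0 * f(r)) - 1.0
        # dr = r0 / (1 - z)**2 dz and dz = 2 t dt
        return 2.0 * t * (r0 / (1.0 - z) ** 2) / (np.sqrt(f(r)) * r * np.sqrt(core))

    I, _ = quad(integrand, 0.0, 1.0)
    return 2.0 * I - np.pi        # alpha(r0) = I(r0) - pi, with I = 2 * integral

r0 = 50.0
assert abs(alpha(r0) - 2.0 / r0) / (2.0 / r0) < 0.05
```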
In order to find the behavior of the deflection angle very close to the photon sphere, we adopt the evaluation method for the integral (\ref{angle2}) proposed by Bozza
\footnotemark[2]\footnotetext[2]{Virbhadra \cite{Vir5} who coined the term relativistic images showed that deflection angles and image positions of
relativistic images for the Schwarzschild lensing obtained by Bozza \cite{Bozza1} have about $0.5\%$ error.
However, we found that Bozza's more recent analytic formalism \cite{Bozza} has
errors of about $0.38\%$. Therefore, we adopt here Bozza's new (instead of
his old) formalism \cite{Bozza} to study the deflection angles and image
positions for relativistic images. However, as shown by Virbhadra
\cite{Vir5}, Bozza's method \cite{Bozza2} gives huge errors, about $-31.8\%$ to $20.0\%$,
for time delays among relativistic images. Virbhadra supported these
results by truly convincing physical arguments. Therefore, for time
delays of relativistic images we adopt Virbhadra and Keeton formalism
\cite{Vir4,Vir5}. Virbhadra \cite{Vir5} also showed that Bozza's results have large
errors for magnifications of relativistic images as well.}\cite{Bozza}.
Let us define a variable
\begin{equation}\label{variable}
z=1-\frac{r_{0}}{r},
\end{equation}
then we will obtain
\begin{equation}\label{angle3}
I(r_{0})=\int^{1}_{0}R(z,r_{0})f(z,r_{0})dz,
\end{equation}
where
\begin{equation}\label{R}
R(z,r_{0})=\frac{2r^{2}\sqrt{A(r)B(r)C(r_{0})}}{r_{0}C(r)},\nonumber
\end{equation}
\begin{equation}\label{f}
f(z,r_{0})=\frac{1}{\sqrt{A(r_{0})-\frac{A(r)C(r_{0})}{C(r)}}}.
\end{equation}
Moreover, the function $R(z,r_{0})$ is regular for all values of $z$ and $r_{0}$, while the function $f(z,r_{0})$ diverges as $z$ tends to zero, i.e., as the photon approaches the photon sphere. Thus, the integral (\ref{angle3}) can be split into
\begin{eqnarray}\label{IDR}
I_{D}(r_{0})&=&\int^{1}_{0}R(0,r_{ps})f_{0}(z,r_{0})dz, \\
I_{R}(r_{0})&=&\int^{1}_{0}[R(z,r_{0})f(z,r_{0})-R(0,r_{ps})f_{0}(z,r_{0})]dz,
\end{eqnarray}
where $I_{D}(r_{0})$ and $I_{R}(r_{0})$ denote the divergent and regular parts in the integral (\ref{angle3}), respectively. For the purpose of finding the order of divergence of the integrand, we expand the argument of the square root in $f(z,r_{0})$ to the second order in $z$, then we have
\begin{equation}\label{f0}
f_{0}(z,r_{0})=\frac{1}{\sqrt{p_{\lambda}(r_{0})z+q_{\lambda}(r_{0})z^{2}}},
\end{equation}
where
\begin{eqnarray}\label{pq}
\nonumber p_{2}(r_{0})&=&2-\frac{3}{r_{0}}-\frac{4S}{r_{0}^{2}},~~~~
q_{2}(r_{0})=-1+\frac{3}{r_{0}}+\frac{6S}{r_{0}^{2}},\\\nonumber
p_{3}(r_{0})&=&2-\frac{3}{r_{0}}-\frac{5S}{r_{0}^{3}},~~~~\nonumber
q_{3}(r_{0})=-1+\frac{3}{r_{0}}+\frac{10S}{r_{0}^{3}}, \\
p_{4}(r_{0})&=&2-\frac{3}{r_{0}}-\frac{6S}{r_{0}^{4}},~~~~
q_{4}(r_{0})=-1+\frac{3}{r_{0}}+\frac{15S}{r_{0}^{4}}.
\end{eqnarray}
Obviously, the coefficient $p_{\lambda}(r_{0})$ vanishes and the leading term of the divergence in $f_{0}(z,r_{0})$ is $z^{-1}$ when $r_{0}$ equals the radius of the photon sphere $r_{ps}$ for each $\lambda$, which means that the integral (\ref{angle3}) diverges logarithmically. Therefore the deflection angle in the strong field region can be expanded in the form
\begin{equation}\label{angle4}
\alpha(\theta)=-\bar{a}\log(\frac{\theta D_{OL}}{u_{ps}}-1)+\bar{b}+o(u-u_{ps}),
\end{equation}
with
\begin{eqnarray}\label{ab}
\nonumber \bar{a}&=&\frac{R(0,r_{ps})}{2\sqrt{q_{\lambda}(r_{ps})}},\\
\nonumber \bar{b}&=&-\pi+b_{R}+\bar{a}\log\frac{r_{ps}^{2}[C''(r_{ps})A(r_{ps})-C(r_{ps})A''(r_{ps})]}{u_{ps}\sqrt{A^{3}(r_{ps})C(r_{ps})}},\\
\nonumber b_{R}&=&I_{R}(r_{ps}),\\
u_{ps}&=&\frac{r_{ps}}{\sqrt{A(r_{ps})}},
\end{eqnarray}
where $D_{OL}$ is the distance between the observer and the gravitational lens and $\theta$ is the angular separation between the optical axis and the image; they satisfy $u=\theta D_{OL}$. $u_{ps}$ is the impact parameter $u$ evaluated at $r_{ps}$, called the minimum impact parameter. $\bar{a}$ and $\bar{b}$ are the so-called strong field limit coefficients, which depend on the metric functions evaluated at $r_{ps}$; they are theoretical predictions, not observables. Making use of Eqs.(\ref{angle4}) and (\ref{ab}), we can study the properties of strong gravitational lensing in massive gravity.
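These coefficients are easy to check in the Schwarzschild limit. The sketch below (ours) uses the fact that for this metric $A(r)B(r)=1$ and $C(r)=r^{2}$, so $R(z,r_{0})=2$ and $\bar{a}=1/\sqrt{q_{\lambda}(r_{ps})}$, and recovers the values $\bar{a}=1$ and $u_{ps}=2.598$ quoted later for $S=0$:

```python
import numpy as np

# Schwarzschild limit S = 0 in units 2M = 1: r_ps = 1.5.
S, r_ps = 0.0, 1.5
f_ps = 1.0 - 1.0 / r_ps - S / r_ps**2
q = -1.0 + 3.0 / r_ps + 6.0 * S / r_ps**2     # q_2(r_ps)
a_bar = 1.0 / np.sqrt(q)                       # R(0, r_ps) = 2 cancels the 2
u_ps = r_ps / np.sqrt(f_ps)

assert abs(a_bar - 1.0) < 1e-12
assert abs(u_ps - 2.598) < 1e-3                # 1.5 * sqrt(3)
```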
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1.]{fig2.eps}
\end{center}
\caption{(color online) Variation of the strong field limit coefficients $\bar{a}$ (left) and $\bar{b}$ (right) with parameters $S$ and $\lambda$ in the massive gravity. The intersection of $S=0$ stands for Schwarzschild case.}
\label{ab1}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1.]{fig3.eps}
\end{center}
\caption{(color online) The left one is the variation of the minimum impact parameter $u_{ps}$ with parameters $S$ and $\lambda$ in the massive gravity. The right one is the deflection angle evaluated at $u=u_{ps}+0.003$ as function of $S$ for different $\lambda$ in massive gravity. The intersection of $S=0$ stands for Schwarzschild case.}
\label{u ar}
\end{figure}
In Fig.(\ref{ab1}), we plot the change of the coefficients $\bar{a}$ and $\bar{b}$ with the scalar charge $S$ for different $\lambda$ in massive gravity. We can see that, with the increase of $S$ for fixed $\lambda$, the coefficient $\bar{a}$ decreases, while $\bar{b}$ first increases and, after reaching a maximum, decreases slowly. Moreover, $\bar{a}$ increases with increasing $\lambda$ for $S<0$ but decreases for $S>0$; the variation of $\bar{b}$ with $\lambda$ is opposite to that of $\bar{a}$. Fig.(\ref{u ar}) tells us that the minimum impact parameter $u_{ps}$ grows with the increase of $S$ for fixed $\lambda$, and that it increases with increasing $\lambda$ for $S<0$ but decreases for $S>0$. We also present in Fig.(\ref{u ar}) the deflection angle $\alpha(\theta)$ evaluated at $u=u_{ps}+0.003$, which shows that in the strong field limit the deflection angle has properties similar to those of the coefficient $\bar{a}$. This means that the deflection angle of the light rays is dominated by the logarithmic term in strong gravitational lensing. From all of the above figures, we can see clearly that all parameters behave differently as they vary with $\lambda$ for $S<0$ and $S>0$. This is because different values of the scalar charge $S$ lead to different physical effects. When $S>0$, the gravitational potential is attractive and stronger than that of the Schwarzschild black hole. When $S<0$, even though the Newton potential is always attractive from the event horizon to large distances, the attraction is weaker than that of the Schwarzschild black hole. Furthermore, it is not hard to find that for any $\lambda$, the curves of $\bar{a}$, $\bar{b}$, $u_{ps}$ and $\alpha(\theta)$ intersect at $S=0$, where they recover the results of the standard Schwarzschild case, i.e., $\bar{a}=1$, $\bar{b}=-0.4002$, $u_{ps}=2.598$ and $\alpha(\theta)=6.28$ \cite{Bozza}.
We should note that the metric is the same as the Reissner-Nordstr\"{o}m metric when $\lambda=2$ and $S<0$; therefore the blue lines of Figs. (\ref{ab1}) and (\ref{u ar}) in the region $\lambda=2$, $S<0$ coincide with the results for the Reissner-Nordstr\"{o}m black hole \cite{Bozza,Eirc}.
\section{Observables in the strong deflection limit}
Now let us see how the parameters $S$ and $\lambda$ affect the observables in strong gravitational lensing. Consider a source and an observer far enough from the lens, with the source, lens and observer highly aligned. Then the lens equation can be written as \cite{Bozza}
\begin{equation}\label{gamma}
\beta=\theta-\frac{D_{LS}}{D_{OS}}\triangle\alpha_{n},
\end{equation}
where $D_{LS}$ is the distance between the lens and the source, and $D_{OS}$ is the distance between the observer and the source, with $D_{OS}=D_{LS}+D_{OL}$. $\beta$ is the angle between the direction of the source and the optical axis, called the angular source position. It is well known that a light ray can pass close to the photon sphere and go around the lens once, twice, thrice, or many times, depending on the impact parameter $u$, before reaching the observer. Therefore, we take $\triangle\alpha_{n}=\alpha-2n\pi$ as the offset of the deflection angle, where $n$ represents the number of loops the light makes. Thus, a massive compact lens gives rise to an infinite sequence of images on both sides of the optic axis. Virbhadra and Ellis in their PRD paper \cite{Vir1} called these images, which are formed by the bending of light through more than $\frac{3}{2}\pi$, relativistic images, as the light rays giving rise to them pass through a strong gravitational field before reaching the observer.
We can find that the angular separation between the lens and the $n$-th relativistic image is
\begin{equation}\label{theta}
\theta_{n}\simeq\theta^{0}_{n}+\frac{u_{ps}e_{n}
(\beta-\theta_{n}^{0})D_{OS}}{\bar{a}D_{LS}D_{OL}},
\end{equation}
with
\begin{equation}\label{theta1}
\theta_{n}^{0}=\frac{u_{ps}}{D_{OL}}(1+e_{n}),~~~~
e_{n}=e^{\frac{\bar{b}-2n\pi}{\bar{a}}},
\end{equation}
where the quantity $\theta_{n}^{0}$ is the angular image position corresponding to $\alpha=2n\pi$.
The magnification of the $n$-th relativistic image is given by
\begin{equation}\label{magnification}
\mu_{n}=\frac{1}{\frac{\beta}{\theta}\frac{\partial\beta}{\partial\theta}}\mid_{\theta^{0}_{n}}
=\frac{u_{ps}^{2}e_{n}(1+e_{n})D_{OS}}{\bar{a}\beta D^{2}_{OL}D_{LS}}.
\end{equation}
Obviously, the first relativistic image is the brightest, and the magnification decreases exponentially with $n$. Hence we separate the outermost and brightest image $\theta_{1}$ from all the others, which are packed together at $\theta_{\infty}$ \cite{Bozza,Bozza2}. As $n\rightarrow\infty$, we find from Eq. (\ref{theta1}) that $e_{n}\rightarrow0$, which implies that the minimum impact parameter $u_{ps}$ and the asymptotic position of the set of images $\theta_{\infty}$ obey the simple relation
\begin{equation}\label{theta2}
u_{ps}=D_{OL}\theta_{\infty}.
\end{equation}
Thus the angular image separation $s$ between the first image and the other ones, and the ratio $\mathcal{R}$ of the flux from the first image to that from all other images, can be expressed as
\begin{eqnarray}\label{s r}
\nonumber s&=&\theta_{1}-\theta_{\infty}=\theta_{\infty} e^{\frac{\bar{b}-2\pi}{\bar{a}}},\\
\mathcal{R}&=&\frac{\mu_{1}}{\Sigma^{\infty}_{n=2}\mu_{n}}=e^{\frac{2\pi}{\bar{a}}}.
\end{eqnarray}
These two formulas can be easily inverted to give
\begin{eqnarray}\label{ab2}
\nonumber \bar{a}&=&\frac{2\pi}{\log \mathcal{R}},\\
\bar{b}&=&\bar{a}\log(\frac{\mathcal{R}s}{\theta_{\infty}}).
\end{eqnarray}
By measuring $s$, $\theta_{\infty}$ and $\mathcal{R}$, we can obtain the strong deflection limit coefficients $\bar{a}$ and $\bar{b}$ and the minimum impact parameter $u_{ps}$. Comparing their values with those predicted by theoretical models, we can obtain information about the parameters of the lens object that is stored in them.
Suppose that there is a supermassive black hole in the galactic center \cite{Genzel} with mass $M=4.4\times10^{6}M_{\odot}$, situated at a distance $D_{OL}=8.5\,\mathrm{kpc}$ from the Earth; then the ratio of the mass to the distance is $\frac{M}{D_{OL}}\approx2.4734\times10^{-11}$. Taking advantage of Eqs.(\ref{angle4}), (\ref{theta2}) and (\ref{s r}), we can estimate the values of the coefficients and observables for gravitational lensing in the strong field limit. The numerical values of the angular image position $\theta_{\infty}$, the angular image separation $s$ and the relative magnifications $r_{m}$ (related to $\mathcal{R}$ by $r_{m}=2.5\log \mathcal{R}$) of the relativistic images are listed in Table I and plotted in Fig.(\ref{thsrm}).
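The $S=0$ row of Table I can be reproduced from the Schwarzschild strong-field coefficients quoted above. The sketch below (ours, not from the paper) converts $u_{ps}=2.598$ (in units of $2M$) to $\mu$arcsec using $M/D_{OL}\approx2.4734\times10^{-11}$ and applies Eqs.(\ref{theta2}) and (\ref{s r}):

```python
import numpy as np

M_over_DOL = 2.4734e-11
a_bar, b_bar, u_ps = 1.0, -0.4002, 2.598      # Schwarzschild (S = 0) values

rad_to_muas = (180.0 / np.pi) * 3600.0 * 1e6
theta_inf = u_ps * 2.0 * M_over_DOL * rad_to_muas   # u_ps is in units of 2M
s = theta_inf * np.exp((b_bar - 2.0 * np.pi) / a_bar)
r_m = 2.5 * np.log10(np.exp(2.0 * np.pi / a_bar))

assert abs(theta_inf - 26.51) < 0.05
assert abs(s - 0.0332) < 0.0005
assert abs(r_m - 6.82) < 0.01
```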
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1.]{fig4.eps}
\end{center}
\caption{(color online) Gravitational lensing by the galactic center black hole. Variation of the values of the angular image position $\theta_{\infty}$, the angular image separation $s$ and relative magnitudes $r_{m}$ with parameters $S$ and $\lambda$. All angles are expressed in \emph{$\mu$arcsec}. The intersection of $S=0$ stands for Schwarzschild case.}
\label{thsrm}
\end{figure}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\hline
&\multicolumn{3}{c|}{$\theta_{\infty}$($\mu$arcsec)}
&\multicolumn{3}{c|}{$s$($\mu$ arcsec)}
&\multicolumn{3}{c|}{$r_{m}$(magnitude)} \\
\hline
$S$&$\lambda=2$&$\lambda=3$&$\lambda=4$&$\lambda=2$&$\lambda=3$&
$\lambda=4$&$\lambda=2$&$\lambda=3$&$\lambda=4$\\
\hline
-0.10& 24.58&25.15&25.58&0.0449&0.0577& 0.0629&6.44&6.12&5.94\\
\hline
-0.05&25.59&25.88&26.09&0.0379&0.0420&0.0435&6.65&6.53&6.47\\
\hline
0& 26.51&26.51&26.51&0.0332&0.0332&0.0332&6.82&6.82&6.82 \\
\hline
0.05& 27.36&27.07&26.88&0.0298&0.0275&0.0268&6.96&7.05&7.09\\
\hline
0.10&28.16&27.57&27.21&0.0272&0.0236&0.0225&7.08&7.23&7.30\\
\hline
\hline
\end{tabular}
\end{center}
\label{tab1} \caption{Numerical estimation for main observation parameters in
the strong field limit for the black hole at the center of our galaxy, which is supposed to be described by the black hole with scalar charge in massive gravity.}
\end{table}
Clearly, for fixed $\lambda$, as $S$ increases the angular image position $\theta_{\infty}$ and the relative magnification $r_{m}$ of the relativistic images increase, while the angular image separation $s$ decreases. Moreover, as $S$ and $\lambda$ change, the angular image position $\theta_{\infty}$ varies in the same way as the minimum impact parameter $u_{ps}$, and the angular image separation $s$ varies in the same way as the deflection angle $\alpha(\theta)$. The relative magnification $r_{m}$ increases with $\lambda$ for $S>0$, but behaves in the opposite way for $S<0$. From Fig.(\ref{thsrm}), it is interesting to find that for different $\lambda$ the curves of $\theta_{\infty}$, $s$ and $r_{m}$ all intersect at $S=0$, which recovers the standard Schwarzschild case with $\theta_{\infty}=26.5095\,\mu as$, $s=0.0331\,\mu as$ and $r_{m}=6.82$ \cite{Bozza}. The blue line of Fig.(\ref{thsrm}) in the region $S<0$, $\lambda=2$ gives the result for the Reissner-Nordstr\"{o}m black hole \cite{Bozza,Eirc}.
\section{Time delay in massive gravity}
It is well known that the time delay of light traveling from the source to the observer with the closest distance of approach $r_{0}$ is defined as the difference between the light travel time for the actual ray in the gravitational field of the lens (deflector) and the travel time for the straight path between the source and the observer in the absence of the lens (i.e., if there were no gravitational fields). Following the method used in \cite{Weinberg}, Virbhadra and Keeton \cite{Vir4} first obtained the time required for light to go from one point with $\{r, \theta=\frac{\pi}{2}, \varphi=\varphi_{1}\}$ to a second point $\{r_{0}, \theta=\frac{\pi}{2}, \varphi=\varphi_{2}\}$ as
\begin{equation}\label{time1}
t(r,r_{0})=t(r_{0},r)=\int^{r}_{r_{0}}\sqrt{\frac{A(r)/B(r)}{1-
\frac{B(r)}{B(r_{0})}(\frac{C(r_{0})}{C(r)})^{2}}}dr.
\end{equation}
Using this result, the time delay of images in massive gravity can be expressed as \cite{Vir5}
\begin{equation}\label{time2}
\tau(r_{0})=2M\left[\int^{\chi_{s}}_{r_{0}}\frac{dr}{F(r)}+
\int^{\chi_{o}}_{r_{0}}\frac{dr}{F(r)}\right]-D_{OS}\sec \beta,
\end{equation}
with
\begin{eqnarray}\label{time4}
F(r)=f(r)\sqrt{1-\frac{f(r)}{f(r_{0})}(\frac{r_{0}}{r})^{2}},~~~~
\chi_{s}=\frac{D_{OS}}{2M}\sqrt{(\frac{D_{LS}}{D_{OS}})^{2}+\tan^{2}\beta},~~~~
\chi_{o}=\frac{D_{OL}}{2M},
\end{eqnarray}
where the first and second terms with positive sign are, respectively, the travel time of the light from the source to the point of closest approach and from that point to the observer, and the third term with a minus sign is the light travel time from the source to the observer in the absence of any gravitational field.
In this part, we study only the primary relativistic image, which lies on the same side as the source and does not loop around the lens ($n=0$). We use the same mass and distance as in the previous part to estimate the time delay, with the additional assumption $D_{OL}=D_{LS}=\frac{1}{2}D_{OS}$. The angular image position $\theta_{sch}$, deflection angle $\alpha_{sch}$ and time delay $\tau_{sch}$ for $S=0$, the well-known Schwarzschild case, are shown in Table II. A similar calculation was done in \cite{Vir4,Vir5} with different mass and distance. We then compare the values of $\tau_{\lambda}$ ($\lambda=2,3,4$) with $\tau_{sch}$ and show the results in Table III. We calculate the time delay $\tau$ only for $\lambda=2,3,4$ because the time delay is easier to observe than the angular image position and the deflection angle.
From Table II we find that, for the Schwarzschild black hole, the angular image position increases while the deflection angle and time delay decrease as the angular source position increases. This is because, as the angular source position $\beta$ increases, the closest distance of approach $r_{0}$ increases, so the effect of the lens on the light is weakened. It is easy to see from Table III that as $\lambda$ increases the differential time delay $\Delta\tau=\tau_{\lambda}-\tau_{sch}$ decreases significantly, down to about $10^{-5}$ minutes. This is because the black hole with scalar charge in massive gravity is closer to the Schwarzschild black hole when $\lambda$ is larger. Moreover, for each fixed $\lambda$, $\Delta\tau$ is symmetric in the scalar charge: a positive scalar charge increases the time delay and a negative scalar charge decreases it by the same magnitude, and the influence of $S=\pm0.1$ is nearly twice that of $S=\pm0.05$. Furthermore, for each fixed scalar charge $S$ and parameter $\lambda$, $\Delta\tau$ decreases as the angular source position $\beta$ increases, for the same reason as in the Schwarzschild case: the lensing effect is weakened.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\hline
$\beta$&$\theta_{sch}$&$\alpha_{sch}$&$\tau_{sch}$\\
\hline
$0$&1.4512174&2.9024348&18.11567180388526151721\\
\hline
$10^{-6}$&1.4512179&2.9024338&18.11567080981492372465\\
\hline
$10^{-5}$&1.4512224&2.9024248&18.11566186319729594447\\
\hline
$10^{-4}$&1.4512674&2.9023348&18.11557239854682350187\\
\hline
$10^{-3}$&1.4517174&2.9014349&18.11467790460512913207\\
\hline
$10^{-2}$&1.4562259&2.8924519&18.10574820398524547348\\
\hline
$10^{-1}$&1.5020782&2.8041564&18.01795757500632361835\\
\hline
$1$&2.0349350&2.0698701&17.27351876798165832134\\
\hline
$2$&2.7623908&1.5247817&16.66483944815783166894\\
\hline
$3$&3.5871078&1.1742156&16.20694418018656670937\\
\hline
$4$&4.4710356&0.9420713&15.84711090705742086357\\
\hline
$5$&5.3906774&0.7813548&15.55355445096342729969\\
\hline
\hline
\end{tabular}
\end{center}
\label{tab2} \caption{Numerical estimation for angular image position $\theta$, deflection angle $\alpha$, time delay $\tau$ for the black hole at the center of our galaxy, which is supposed to be described by Schwarzschild black hole. $\beta$ stands for the angular source position. Here, all angles are expressed in \emph{arcsec} and time delay is expressed in \emph{minutes}.}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\hline
&\multicolumn{2}{c|}{$\tau_{2}-\tau_{sch}$($10^{-7}$)}
&\multicolumn{2}{c|}{$\tau_{3}-\tau_{sch}$($10^{-13}$)}
&\multicolumn{2}{c|}{$\tau_{4}-\tau_{sch}$($10^{-18}$)} \\
\hline
$\beta$&$S=\pm0.05$&$S=\pm0.1$&$S=\pm0.05$&$S=\pm0.1$&$S=\pm0.05$&$S=\pm0.1$\\
\hline
0&$\pm$5.9&$\pm$11.9&$\pm$23.8&$\pm$47.6&$\pm$12.3&$\pm$24.6\\
\hline
$10^{-6}$&$\pm$5.9&$\pm$11.9&$\pm$23.8&$\pm$47.6&$\pm$12.3&$\pm$24.6\\
\hline
$10^{-5}$&$\pm$5.9&$\pm$11.9&$\pm$23.8&$\pm$47.6&$\pm$12.3&$\pm$24.6\\
\hline
$10^{-4}$&$\pm$5.9&$\pm$11.9&$\pm$23.8&$\pm$47.6&$\pm$12.3&$\pm$24.6\\
\hline
$10^{-3}$&$\pm$5.9&$\pm$11.9&$\pm$23.8&$\pm$47.6&$\pm$12.3&$\pm$24.6\\
\hline
$10^{-2}$&$\pm$5.9&$\pm$11.9&$\pm$23.6&$\pm$47.3&$\pm$12.2&$\pm$24.4\\
\hline
$10^{-1}$&$\pm$5.7&$\pm$11.7&$\pm$22.2&$\pm$44.4&$\pm$11.1&$\pm$22.2\\
\hline
1&$\pm$4.2&$\pm$8.5&$\pm$12.1&$\pm$24.2&$\pm$4.4&$\pm$8.9\\
\hline
2&$\pm$3.1&$\pm$6.2&$\pm$6.5&$\pm$13.1&$\pm$1.7&$\pm$3.5\\
\hline
3&$\pm$2.4&$\pm$4.8&$\pm$3.8&$\pm$7.7&$\pm$0.8&$\pm$1.6\\
\hline
4&$\pm$1.9&$\pm$3.8&$\pm$2.5&$\pm$5.0&$\pm$0.4&$\pm$0.8\\
\hline
5&$\pm$1.6&$\pm$3.2&$\pm$1.7&$\pm$3.4&$\pm$0.2&$\pm$0.4\\
\hline
\hline
\end{tabular}
\label{tab3} \caption{Numerical estimation for differential time delay $\Delta\tau=\tau_{\lambda}-\tau_{sch}$ for the black hole at the center of our galaxy, which is supposed to be described by the black hole with scalar charge in massive gravity. Here, the angular source position $\beta$ is expressed in \emph{arcsec} and differential time delay $\Delta\tau$ is expressed in \emph{minutes}.}
\end{center}
\end{table}
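The symmetry and near-linearity in $S$ described above can be checked directly against the Table III entries; a minimal sketch over the $\beta = 0$ row (values copied from the table):

```python
# |Delta tau| at beta = 0, copied from Table III; units are
# 1e-7, 1e-13 and 1e-18 minutes for lambda = 2, 3, 4 respectively.
dtau_S005 = {2: 5.9, 3: 23.8, 4: 12.3}   # S = +/-0.05
dtau_S010 = {2: 11.9, 3: 47.6, 4: 24.6}  # S = +/-0.10

for lam in (2, 3, 4):
    r = dtau_S010[lam] / dtau_S005[lam]
    # doubling the scalar charge nearly doubles the time-delay shift
    assert abs(r - 2.0) < 0.05, (lam, r)
```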
\section{Summary}
We investigated strong gravitational lensing by the black hole with scalar charge in massive gravity. We first determined the admissible range of the scalar charge $S$ for fixed $\lambda$ from the existence region of the event horizon. We then found that the scalar charge $S$ and the model parameter $\lambda$ affect the event horizon $r_{H}$, the radius of the photon sphere $r_{ps}$, the deflection angle $\alpha(\theta)$, the strong field limit coefficients $\bar{a}$ and $\bar{b}$, the minimum impact parameter $u_{ps}$ and the main observables of the relativistic images in strong gravitational lensing, such as the angular image position $\theta_{\infty}$, the angular image separation $s$ and the relative magnification $r_{m}$. We also obtained the time delay by numerical calculation, without taking either the weak or the strong field approximation. The coefficient $\bar{a}$, the deflection angle $\alpha(\theta)$ and the angular image separation $s$ decrease as $S$ increases for fixed $\lambda$; they increase with $\lambda$ for $S<0$ but decrease with $\lambda$ for $S>0$. The coefficient $\bar{b}$ behaves somewhat differently: as $S$ increases for fixed $\lambda$, it first increases and then decreases slowly after reaching a maximum, and its variation with $\lambda$ is opposite to that of $\bar{a}$. Two other quantities, the minimum impact parameter $u_{ps}$ and the angular image position $\theta_{\infty}$ of the relativistic images, change in a similar way: they rise as $S$ increases for any $\lambda$, while as $\lambda$ increases they increase for $S<0$ and decrease for $S>0$. Finally, the relative magnification $r_{m}$ increases with $S$ for fixed $\lambda$; it grows with $\lambda$ for $S>0$ and behaves in the opposite way for $S<0$.
The reason why all parameters behave differently under variation of $\lambda$ for $S<0$ and $S>0$ is that different scalar charges $S$ lead to different physical effects. In the $S>0$ case, the gravitational potential is attractive and stronger than that of the Schwarzschild black hole. In the $S<0$ case, even though the Newtonian potential is always attractive from the event horizon out to large distances, the attraction is weaker than that of the Schwarzschild black hole. The time delay decreases as the angular source position $\beta$ increases, because a larger $\beta$ implies a larger closest distance of approach, which weakens the effect of the lens on the light. Since the black hole with scalar charge in massive gravity is closer to the Schwarzschild case when $\lambda$ is larger, the differential time delay $\tau_{\lambda}-\tau_{sch}$ decreases remarkably as $\lambda$ increases.
The influence of the scalar charge $S$ on the time delay is very small but regular compared with that of $\beta$ and $\lambda$. It should be pointed out that this black hole and the results recover the Schwarzschild case at $S=0$, i.e., $r_{H}=1$, $r_{ps}=1.5$, $\bar{a}=1$, $\bar{b}=-0.4002$, $u_{ps}=2.598$, $\alpha(\theta)=6.28$, $\theta_{\infty}=26.5095\,\mu as$, $s=0.0331\,\mu as$, $r_{m}=6.82$, and that the black hole takes the same form as the Reissner-Nordstr\"{o}m case when $\lambda=2$ and $S<0$; thus the blue line of every figure in the region $\lambda=2$, $S<0$ gives the result for the Reissner-Nordstr\"{o}m black hole.
\begin{acknowledgments}
{{We thank Xiaokai He and Xiongjun Fang for suggestions and numerical calculation. This work is supported by the National Natural Science Foundation
of China under Grant Nos. 11475061 and 11305058; the SRFDP under Grant No.
20114306110003; the Open Project Program of State Key Laboratory of
Theoretical Physics, Institute of Theoretical Physics, Chinese
Academy of Sciences, China (No.Y5KF161CJ1); the Hunan Provincial Innovation Foundation for postgraduate (Grant No.CX2016B164).}}
\end{acknowledgments}
\section{Introduction}
The $G_\delta$-modification $X_\delta$ of a topological space $X$ is the space on the same underlying set
generated by, i.e. having as a basis, the collection of all $G_\delta$ subsets of $X$.
Bella and Spadaro recently investigated
in \cite{BS} the connection between the values of various cardinal functions taken on $X$ and
$X_\delta$, respectively. In their paper, as Question 2, they raised the following problem:
Is $t(X_\delta) \le 2^{t(X)}$ true for every (compact) $T_2$ space $X$? Note that this is actually two
questions.
In this note we answer both questions: In the compact case affirmatively and in the non-compact case negatively.
Actually, for the compact case we prove something stronger: We show that for every regular
Lindelöf space $X$ we have $t(X_\delta) \le 2^{t(X)}$. In the non-compact case we shall show that it is consistent with ZFC that no upper bound
exists for the tightness of the $G_\delta$-modifications of countably tight, even Frechet spaces.
We shall use standard notation and terminology from set theory and general topology. In particular,
concerning cardinal functions, we follow the notation and terminology of \cite{J}.
It will be useful to denote by $G_\delta(X)$ the family of all $G_\delta$ subsets of a space $X$.
So, as we said, $G_\delta(X)$ is a basis for $X_\delta$.
For any $A \subs X$ we shall use $\overline{A}^\delta$ to denote the closure of $A$ in $X_\delta$.
Just like in \cite{BS}, our proofs will often use elementary submodels of appropriate ``initial segments''
of the form $H(\lambda)$ of the universe. Most readers will be
familiar enough with these notions, and for those who are not they are surveyed e.g. in \cite{D}.
\section{Bounds for the tightness of $G_\delta$-modifications}
\begin{theorem}\label{tm:LCT}
If $X$ is a regular Lindelöf space then $t(X_\delta) \le 2^{t(X)}$.
\end{theorem}
\begin{proof}
Assume that $X$ is a regular Lindelöf space, $p \in X$ and $A \subs X$ are such that $p \in \overline{A}^\delta$.
Let us put $\kappa = 2^{t(X)}$ and choose an elementary submodel $M$ of an appropriate $H(\lambda)$ such that
$|M| = \kappa, \, M^{t(X)} \subs M$, moreover $\{X, A, p\} \subs M$. We shall show that then $p \in \overline{A \cap M}^\delta$,
which by $|A \cap M| \le \kappa$ will complete our proof.
To see this, assume that $p \in H \in G_\delta(X)$, i.e. $H = \bigcap_{n < \omega} V_n$ where each $V_n$ is open in $X$.
We have to prove that $H \cap A \cap M \ne \emptyset$.
For every point $x \in \overline{A \cap M}$ (note that this is the closure in $X$) there is a subset
$B_x \subs A \cap M$ with $|B_x| \le t(X)$ such that $x \in \overline{B_x}$. Then $M^{t(X)} \subs M$ implies $B_x \in M$.
Clearly, if $x \ne p$ then $B_x$ can be chosen so that $p \notin \overline{B_x}$. In this case, by the regularity of $X$,
there are open sets $U_x \supset \overline{B_x}$ and $W_x$ with $p \in W_x$ such that $U_x \cap W_x = \emptyset$, moreover
as both $\overline{B_x}$ and $p$ belong to $M$, we may assume that $U_x$ and $W_x$ also belong to $M$.
Now, the closed subspace $\overline{A \cap M} \setm V_n$ of $X$ is Lindelöf for each $n < \omega$,
hence there is a countable subset $C_n \subs \overline{A \cap M} \setm V_n$ such that
$\overline{A \cap M} \setm V_n \subs \bigcup_{x \in C_n} U_x$. This clearly implies that if we put $W_n = \bigcap_{x \in C_n} W_x$ then
$$A \cap M \cap W_n \subs V_n \,.$$
Although we do not know if $C_n \in M$, we do know that $W_n \in M$ because $W_x \in M$ for each $x \in C_n$
and $M$ is countably closed.
It follows then that $W = \bigcap_{n < \omega} W_n \in M \cap G_\delta(X)$, hence $p \in \overline{A}^\delta$ and $p \in W$ imply
$W \cap A \ne \emptyset$ and, by elementarity, $W \cap A \cap M \ne \emptyset$ as well. But then we have
$\emptyset \ne W \cap A \cap M \subs H \cap A \cap M$, which completes our proof.
\end{proof}
The one-point compactification of an uncountable discrete space has countable tightness, it is even Frechet,
and its $G_\delta$-modification clearly has tightness $\omega_1$. This, of course, shows that Theorem \ref{tm:LCT}
is sharp for countably tight compact spaces under CH. But what happens if the continuum $\mathfrak{c}$ is large?
Actually, we do not know the full answer to this question.
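For concreteness, the tightness computation in the one-point compactification example above can be spelled out directly (a routine verification, not taken from \cite{BS}): let $X = D \cup \{\infty\}$, where $D$ is discrete with $|D| = \omega_1$. Every $G_\delta$ set containing $\infty$ is an intersection of countably many co-finite sets, hence co-countable, so
$$\infty \in \overline{D}^\delta, \quad \text{while} \quad \infty \notin \overline{B}^\delta \ \text{ for every countable } B \subs D,$$
since $X \setm B$ is then a $G_\delta$ neighborhood of $\infty$. Therefore $t(\infty, X_\delta) = \omega_1$.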
However, we happen to have a ready-made consistent answer in \cite{CH*}, where a weakening of CH called CH* was
introduced. It was shown there that CH* holds in any model obtained by adding any number of Cohen reals to a
ground model that satisfies CH. Thus CH* is consistent with $\mathfrak{c}$ being anything it can be.
Let us denote by $\ell_{\omega_1}(A)$ the set of all points obtainable as the limit of a converging $\omega_1$-sequence
of points of $A$ in a space $X$. We permit constant sequences, hence $A \subs \ell_{\omega_1}(A)$.
It is obvious that we always have $\ell_{\omega_1}(A) \subs \overline{A}^\delta$. Now, for countably tight compacta,
the proof of Theorem 3.2 of \cite{CH*} actually establishes the following converse of this.
\begin{theorem}\label{tm:CCT}
CH* implies that if $X$ is any countably tight compactum and $A \subs X$ then $\ell_{\omega_1}(A) \supset \overline{A}^\delta$,
hence $\ell_{\omega_1}(A) = \overline{A}^\delta$. Consequently, if $X_\delta$ is non-discrete then $t(X_\delta) = \omega_1$.
\end{theorem}
Although the statement of Theorem 3.2 in \cite{CH*} is slightly weaker than this, the reader may easily check
that actually this is proved there.
This result then leads us to the following natural and intriguing question.
\medskip
PROBLEM 1.
Is it consistent to have a countably tight compactum $X$ for which $t(X_\delta) > \omega_1$?
\medskip
It turns out by our next two ZFC results that the, somewhat surprising, equality $\ell_{\omega_1}(A) = \overline{A}^\delta$
may occur in other situations as well. It will be useful to introduce the following notation:
If $X$ is a space and ${\kappa}$ is a cardinal then we write
\begin{displaymath}
\dis_{\kappa}(X)=\{D\in [X]^{\kappa} : \text{$D$ is discrete}\}.
\end{displaymath}
A countably tight compact, or just Lindelöf space $X$ contains no uncountable free sequences,
i.e. satisfies $F(X) = \omega$. This makes the assumption $F(X) = \omega$ in our following result fitting with
the topic of this paper.
\begin{theorem}\label{tm:psi=w}
Let $X$ be a regular space such that $F(X) = \omega$ and for every $D \in \dis_\omega(X)$
we have $\psi(\overline{D}) \le \omega$. Then for every $A \subs X$ we have $\,\ell_{\omega_1}(A) = \overline{A}^\delta$.
\end{theorem}
\begin{proof}
Assume that $p \in X$ and $A \subs X$ are such that $p \in \overline{A}^\delta \setm A$.
By induction on ${\alpha}<{\omega}_1$ we shall define closed $G_{\delta}$ sets $H_{\alpha}$ containing $p$
and points $x_{\alpha}\in A\cap H_{\alpha}$ as follows.
Assume that $\{H_{\beta}:{\beta}<{\alpha}\}$ and $Y_{\alpha}=\{x_{\beta}:{\beta}<{\alpha}\}$
have been defined, moreover $Y_{\alpha}$ is a free sequence in $X\setm \{p\}$, hence $Y_{\alpha} \in \dis_{\omega}(X)$.
Then either $p \notin \overline {Y_{\alpha}}$ or $\psi(p,\overline {Y_{\alpha}})\le {\omega}$.
But in both cases there is a closed $G_{\delta}$ set
$H$ containing $p$ such that $H\cap (\{p\}\cup \overline {Y_{\alpha}}) = \{p\}$.
We then let $H_{\alpha}=H\cap\bigcap_{{\beta}<{\alpha}}H_{\beta} \in G_\delta(X)$ and use
$p \in \overline{A}^\delta$ to pick the point $x_\alpha \in A \cap H_\alpha$.
Thus we have constructed $\{x_{\beta}:{\beta}<{\omega}_1\} \subs A$.
The sequence $\{x_{\beta}:{\beta}<{\omega}_1\}$ is free in $X\setm \{p\}$ because
for every ${\alpha}<{\omega}_1$ we have $\overline{Y_{\alpha}}\cap H_{\alpha} \subs \{p\}$ and
$\overline{\{x_{\beta}:{\beta}\ge {\alpha}\}}\subs H_{\alpha}$.
If $U$ is any open set containing $p$ then $\{x_{\alpha}:x_{\alpha}\notin U\}$ is free in $X$, hence it is countable.
In other words, $U$ contains a tail of $\{x_{\alpha}:{\alpha}<{\omega}_1\}$, i.e. the $\omega_1$-sequence
$\{x_{\alpha}:{\alpha}<{\omega}_1\} \subs A$ indeed converges to $p$.
\end{proof}
Clearly, the condition $\psi(\overline{D}) \le w(\overline{D}) = \omega$ is satisfied for
any countable subset $D$ of the $\Sigma$-product $\Sigma(\kappa)$ taken inside the Tychonov cube of weight $\kappa$.
Also, compact subspaces of such $\Sigma$-products, i.e. Corson-compacta are Frechet, hence do not contain
uncountable free sequences. Thus we immediately obtain the following corollary of Theorem \ref{tm:psi=w}:
\begin{corollary}\label{Corson}
For every subset $A$ of a Corson-compact space $X$ we have $\,\ell_{\omega_1}(A) = \overline{A}^\delta$.
\end{corollary}
To facilitate the formulation of our next result, we introduce the notation $CAP(\kappa)$ to denote the class
of all spaces in which every subset of cardinality $\kappa$ has a complete accumulation point.
\begin{theorem}\label{tm:psi=w1}
Assume that $X \in CAP(\omega_1)$ is a countably tight regular space such that $\psi(\overline{S}) \le \omega_1$
for every countable subset $S \subs X$. Then for every $A \subs X$ we have $\,\ell_{\omega_1}(A) = \overline{A}^\delta$.
\end{theorem}
\begin{proof}
Consider any point $p \in \overline{A}^\delta \setm A$ and then choose an $\omega_1$-chain $\<N_{\alpha}:{\alpha}<{\omega}_1\>$ of countable elementary submodels
of an appropriate $H(\lambda)$ such that\\
\smallskip
(i) $\{X, A, p\} \subs N_0$;\\
\smallskip
(ii) for every $\beta < \omega_1$ we have $\<N_{\alpha}:{\alpha}<{\beta}\>\in N_{\beta}$.
\smallskip
Let $N=\bigcup_{{\alpha}<{\omega}_1}N_{\alpha}$.
Since $X$ has countable tightness, for every $\alpha < \omega_1$ we have $p \in \overline{N_{\alpha} \cap A}$,
hence $\psi(p,\overline{N_{\alpha}\cap A})\le {\omega}_1$. It follows
that there is a family $\mc U_{\alpha} \in N_{{\alpha}+1}$ of open sets with $|\mc U_{\alpha}| \le \omega_1$ such that
\begin{displaymath}
\{p\}=\bigcap \mc U_{\alpha}\cap \overline{N_{\alpha}\cap A}.
\end{displaymath}
Note that then $\mc U_{\alpha} \in N_{{\alpha}+1} \subs N$ and $|\mc U_{\alpha}|\le {\omega}_1$ imply $\mc U_{\alpha}\subs N$.
Consequently, for all $\alpha < \omega_1$ we have
\begin{displaymath}
\{p\}=\bigcap \{U\in N\cap \tau(X): p\in U \}\cap \overline{N_{\alpha}\cap A},
\end{displaymath}
where $\tau(X)$ denotes the topology of $X$.
Since $X$ has countable tightness, we also have $\overline{N\cap A}=\bigcup_{{\alpha}<{\omega}_1}\overline{N_\alpha \cap A}$,
hence
\begin{displaymath}
\{p\}=\bigcap \{U\in N\cap \tau(X) : p\in U \}\cap \overline{N\cap A}.
\end{displaymath}
Let us now put
\begin{displaymath}
H_{\alpha}=\bigcap \{U\in N_{\alpha}\cap \tau(X) : p\in U \}.
\end{displaymath}
Then $H_\alpha$ is a $G_\delta$ set that we claim is closed in $X$.
Indeed, this is because for every $U\in N_\alpha\cap \tau(X)$ with $p\in U$ there is, by the regularity of $X$ and by elementarity,
some $V\in N_\alpha\cap \tau(X)$ with $p\in V$ such that $\overline{V} \subs U$.
Since $N\cap \tau(X) = \bigcup_{\alpha < \omega_1} N_\alpha\cap \tau(X)$, it follows that
\begin{displaymath}
\{p\}=\bigcap_{\alpha < \omega_1}H_\alpha \cap \overline{N\cap A}.
\end{displaymath}
Now $p \in \overline{A}^\delta$ and
$p \in H_{\alpha} \in N$ imply that for every $\alpha < \omega_1$ we can pick a point
\begin{displaymath}
x_{\alpha}\in N\cap A\cap H_{\alpha}.
\end{displaymath}
We claim that the sequence $S = \{x_{\alpha}:{\alpha}<{\omega}_1\} \subs A$ converges to $p$.
Indeed, if $q\ne p$ then there is a $\beta < \omega_1$ with $q \notin H_\beta \cap \overline{N\cap A}$.
Then $X \setm H_\beta \cap \overline{N\cap A}$ is a neighborhood of $q$ that misses the final segment
$\{x_{\alpha} : \beta \le {\alpha}<{\omega}_1\}$, hence $q$ is not a complete accumulation point of $S$.
But $X \in CAP(\omega_1)$ then implies that $p$ is the unique complete accumulation point of $S$,
and hence $S$ indeed converges to $p$.
\end{proof}
\section{A large cardinal bound for the tightness of the $G_\delta$-modifications}
The first result of this section shows not only that the answer to the original question of Bella and Spadaro
in the non-compact (or non-Lindelöf) case is negative; in fact, it shows that no reasonable bound is
provable in ZFC.
\begin{theorem}\label{tm:nob}
Assume that $S$ is a non-reflecting stationary set of $\omega$-limits in an uncountable regular cardinal $\kappa$.
Then there is a 0-dimensional Frechet topology $\tau$ on $\kappa + 1 = \kappa \cup \{\kappa\}$ such that for the space $X = (\kappa + 1, \tau)$
we have $t(X_\delta) = \kappa$.
\end{theorem}
\begin{proof}
Let us denote by $\mathcal{V}$ the family of all subsets $V$ of $\kappa$ having the property that for every
$\alpha \in S$ there is $\beta < \alpha$ with $(\beta, \alpha) = \alpha \setm \beta \subs V$. We define $\tau$
to be the topology on $\kappa + 1$ for which all points in $\kappa$ are isolated and $\{V \cup \{\kappa\} : V \in \mathcal{V}\}$
forms a neighborhood base for the point $\kappa$. More precisely,
$\tau = \mathcal{P}(\kappa) \cup \{V \cup \{\kappa\} : V \in \mathcal{V}\}$.
It is simple to verify that $\tau$ is indeed a 0-dimensional $T_2$ topology.
To see that $\tau$ is Frechet, observe that if $A \subs \kappa$ accumulates to the point $\kappa$ then there is
an $\alpha \in S$ such that $\sup (A \cap \alpha) = \alpha$. But then there is an increasing $\omega$-sequence
$B = \{\beta_n : n < \omega\} \subs A \cap \alpha$ with $\sup B = \alpha$ and clearly $B$ converges to the point $\kappa$.
Since $S$ is stationary, it is an immediate consequence of Fodor's theorem that every set $V \in \mathcal{V}$
includes a final segment of $\kappa$. By $cf(\kappa) > \omega$ it follows then that every $G_\delta$ set
containing the point $\kappa$ also includes a final segment of $\kappa$, hence the set $\kappa$ accumulates to
the point $\kappa$ in the space $X_\delta$. Thus, since $\kappa$ is regular, to prove $t(X_\delta) = \kappa$ it will suffice to show that
no proper initial segment of $\kappa$ accumulates to the point $\kappa$ in the space $X_\delta$.
This, of course, is equivalent to showing that for each $\eta < \kappa$ the initial segment $\eta$ is an
$F_\sigma$ set in $X$. We do this by transfinite induction on $\eta < \kappa$. Thus assume that $\eta < \kappa$
and we know that for every $\zeta < \eta$ the initial segment $\zeta$ is an
$F_\sigma$ set in $X$, i.e. $\zeta = \bigcup_{n < \omega} F_{\zeta,n}$ with each $F_{\zeta,n}$ closed.
Of course, if $cf(\eta) \le \omega$ then it is trivial that $\eta$ is also an $F_\sigma$ set.
So, assume that $cf(\eta) > \omega$ and, using that $S$ is non-reflecting, fix a closed unbounded subset
$C$ in $\eta$ with $0 \in C$ that is disjoint from $S$. For every $\gamma \in C$ let $\gamma^+$ denote the least
member of $C$ above $\gamma$.
Let us now define for each $n < \omega$ the set $F_{\eta,n}$ as follows:
$$F_{\eta,n} = \bigcup \{F_{\gamma^+,n} \cap [\gamma, \gamma^+) : \gamma \in C\}\,.$$
Then it is obvious that every $F_{\eta,n}$ is closed in $X$,
i.e. the point $\kappa$ is not in its closure, and the union of the $F_{\eta,n}$'s equals $\eta$,
hence $\eta$ is indeed an $F_\sigma$ set.
\end{proof}
Consistently, Theorem \ref{tm:nob} yields a very strong negative answer to the question of Bella and Spadaro
in the non-compact case. Indeed, for example, in the constructible universe $L$, or in fact in any
set generic extension of $L$, there is a proper class of regular cardinals that
contain non-reflecting stationary sets of $\omega$-limits.
On the other hand, we do not know the answer to the following natural question:
\medskip
PROBLEM 2.
Is there a ZFC example of a countably tight Hausdorff (or regular, or Tychonov) space $X$ for which $t(X_\delta) > 2^\omega$?
\medskip
It is known that if there is a non-reflecting stationary set of $\omega$-limits in a regular cardinal $\kappa$,
then $\kappa$ is less than the first strongly compact cardinal $\lambda$, provided that it exists.
On the other hand, modulo some large cardinals, it is also known that there may be ZFC models in which such a $\lambda$ exists and
the cardinals $\kappa$ admitting a non-reflecting stationary set of $\omega$-limits are cofinal in $\lambda$, see e.g. \cite{BM}.
Consequently, our second result implies that the first one is in some sense sharp.
In this we shall use the following characterization of strongly compact cardinals, see \cite{K}:
The cardinal $\lambda$ is strongly compact iff for every set $A$ having cardinality at least $\lambda$
there is a $\lambda$-complete and fine free ultrafilter $\mathcal{U}$ on the set $[A]^{< \lambda}$.
That $\mathcal{U}$ is fine
means that for every element $a \in A$ we have $$\{B \in [A]^{< \lambda} : a \in B\} \in \mathcal{U}.$$
\begin{theorem}\label{tm:scp}
Let $\lambda$ be a strongly compact cardinal. Then for every topological space $X$ satisfying
$t(p,X) < \lambda$ for every $p \in X$ we have $t(X_\delta) \le \lambda$.
\end{theorem}
\begin{proof}
Assume that $A \subs X$ and $p \in X$ is a point such that for every $B \in [A]^{< \lambda}$
we have $p \notin \overline{B}^\delta$. We claim that then $p \notin \overline{A}^\delta$ as well.
This clearly implies $t(p,X_\delta) \le \lambda$, hence as $p \in X$ was arbitrary,
$t(X_\delta) \le \lambda$.
To prove our claim, let us fix for each set $B \in [A]^{< \lambda}$ a countable collection
$\{V_{B,n} : n < \omega\}$ of open neighborhoods of $p$ such that $\bigcap \{V_{B,n} : n < \omega\} \cap B = \emptyset$.
This allows us to define the function $f_B : A \to \omega$ with the stipulation
$$f_B(x) = \min \{n : x \notin V_{B,n}\}.$$
Since $A$ clearly can be assumed to have cardinality $\ge \lambda$,
we may next fix a $\lambda$-complete and fine free ultrafilter $\mathcal{U}$ on $[A]^{< \lambda}$. Then we may define the function $f : A \to \omega$ with the stipulation
$$f(x) = n \,\Leftrightarrow \, \{B \in [A]^{< \lambda} : x \in B \mbox{ and } f_B(x) = n\} \in \mathcal{U}.$$
This makes sense because $\mathcal{U}$ is $\lambda$-complete and for every $x \in A$ we have
$$\{B \in [A]^{< \lambda} : x \in B\} \in \mathcal{U}.$$
Next we show that for every $n < \omega$ the closure in $X$ of the set $A_n = f^{-1}(n)$ misses
the point $p$. This clearly will imply that $p \notin \overline{A}^\delta$.
Now let $\mu = t(p,X) < \lambda$; then it will suffice to show that for every
$J \in [A_n]^\mu$ we have $p \notin \overline{J}$. As $\mathcal{U}$ is fine and $\lambda$-complete,
we have $$U = \{B \in [A]^{< \lambda} : J \subs B\} \in \mathcal{U}.$$
Moreover, this clearly implies that we also have
$$W = \{B \in U : \forall\, x \in J \, \big(f(x) = f_B(x)\big)\} \in \mathcal{U}.$$
But for any $B \in W$ we then have $J \cap V_{B,n} = \emptyset$, hence $p \notin \overline{J}$.
This completes our proof.
\end{proof}
{\bf Acknowledgements}. In the research on and preparation of this paper the second, third and fourth named authors
were supported by NKFIH grant no. K113047. The second author would also like to thank the support from the
Mathematics Department of UNC Charlotte and in particular Professor Alan Dow.
\newpage
\section{Introduction}
\label{sec:firstpage}
In 2001, inspired by the physiology of living systems, Paul Horn presented an IBM proposal for the future of computer systems \cite{Horn:2001}. He argued that the effort specialists devote to the maintenance, control and operation of computer systems could be minimized and, consequently, that their costs could be reduced dramatically. The community, composed mainly of researchers, continued to advance this knowledge domain, which became the paradigm Horn named \textbf{Autonomic Computing}.
Contributions have been expanded by multidisciplinary research groups and the results have been surprising \cite{Movahedi:2012}. A number of applications, particularly in software, have enabled, for example, the technology of space probes \cite{Sterritt:2005b}, or rather Unmanned Space Vehicles (USVs) \cite{Insaurralde:2015}.
The interest aroused has led to the application of Intelligent Agents, or Intelligent Elements (IEs), to the Internet infrastructure, concentrating basically on protocols; techniques such as Software Defined Networking (SDN) \cite{Shukla:2014, Nadeau:2013, Wickboldt:2015} will encourage the development of new initiatives in this direction. It is important to develop case studies and experiments across the entire spectrum of applications for the Internet infrastructure. In this sense, the renewed experience and expansion of research groups will make an effective contribution to improving and consolidating the studies carried out to date, especially under the principle of interdisciplinary cooperation.
Section 2 presents the abstract IE model and its application domain, and proceeds from the abstract model to the implementation model with its physical characteristics. Section 3 describes the architecture for knowledge acquisition and its functional components. Section 4 presents the steps involved in the process of knowledge acquisition and the research questions involved. Section 5 presents the conclusions and identifies future problems. Section 6 presents the authors' acknowledgments.
\section{The Application Domain and its characteristics}
The Autonomous Architecture for Restricted Domains (A2RD) model is presented in Figure \ref{fig:ModeloAbstrato_EN} and is divided into four layers, described below. The model serves the interest of establishing an architecture of intelligent elements under the administrative domain of ASs, the designation given to the networks that form the Internet.
\begin{figure*}[ht]
\centering
\includegraphics[width=.9\textwidth]{ModeloAbstrato_EN}
\caption{Four Layer Abstract Model of Autonomous Architecture for Restricted Domains (A2RD). Source: \cite{Braga:2015}}
\label{fig:ModeloAbstrato_EN}
\end{figure*}
The model can exist in any of the $2^{32}$ possible ASs \cite{Hawkinson:1996}. However, on 02/03/2017 there were 56,710 active ASs on the Internet (originating traffic), according to the CIDR report\footnote{http://www.cidr-report.org/as2.0/}. The number of an AS is unique, controlled by the Regional Internet Registries (RIRs) and/or National Internet Registries (NIRs), and is called the Autonomous System Number (ASN). Thus, the largest possible value of $x$ was 56710, corresponding to AS56710, at the date above. There is no conflict between the model being deployed in any AS environment and being domain-restricted. In fact, the implementations are independent, but with a high degree of interoperability and, of course, intense cooperation, because AS administrators depend on the behavior of all the others. The IANA has reserved two contiguous ranges of AS numbers for private use \cite{Mitchell:2013}: 64512--65534 and 4200000000--4294967294. Conveniently, these AS numbers can be used to designate Intelligent Elements in applications that need to represent subdomains.
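As a small illustration (ours, not part of the paper), the private-use check just described can be sketched in Python; the function name is hypothetical:

```python
# Illustrative sketch: test whether an AS number falls in one of the two
# private-use ranges reserved by IANA, as cited above.
PRIVATE_ASN_RANGES = ((64512, 65534), (4200000000, 4294967294))

def is_private_asn(asn: int) -> bool:
    """Return True if `asn` lies in a private-use range."""
    return any(lo <= asn <= hi for lo, hi in PRIVATE_ASN_RANGES)
```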
The first of the four layers hosts the Intelligent Element (IE) named \textbf{Controller}. Its identification is unique and definitive: \textbf{x:0}, that is, the number \textbf{0} placed to the right of the symbol \textbf{:}, which follows the ASN hosting the model. Sometimes, to make clear which IE is being referenced, \textbf{IE} is prefixed to the identification, as for example when affirming that the IE Controller is \textbf{IEx:0}. Thus, if \textbf{AS5} is the host domain of the model, then the controlling element is \textbf{IE5:0}. No IE of the lower layers can exist without the prior consent of the IE Controller. It has the property of keeping itself organized (self-organization) and of ensuring the self-organization of every IE of the lower layers.
The second layer contains the so-called Specialized IEs. These elements are identified by suffixes that range from \textbf{1} to \textbf{9999}. The Specialized IEs support the IE Controller in specific activities necessary for its functionality. These activities range from ensuring the interoperability of the entire system of implemented IEs to the establishment of specific functionalities, such as servers with end-to-end characteristics \cite{Saltzer:1984}, access to databases and semantic repositories, proprietary software (similar to southbound SDN APIs) \cite{Wickboldt:2015}, and features required by lower-layer IEs. However, support for the IE Controller is the primary objective of the Specialized IEs, and this objective determines the functionalities of the second layer. It is assumed that some Specialized IEs may be Autonomic Elements, or intelligent elements that execute automatic processes, such as proprietary software and procedures associated with legacy systems. A Specialized IE can also be created for functions that concern only the IE Controller, especially when it depends on the functionalities of third-layer IEs.
The third layer contains the largest agglomeration of IEs, which is why it is called the layer of \textbf{IE Colonies}. Elements of this layer can be Autonomous, Autonomic or Automatic (but not Legacy) and are directly responsible for the most important activities of the application, including software reuse. They act under a high degree of interoperability and cooperation among themselves and with IEs of other layers and other domains/subdomains. They do not directly participate in interconnections or exchange messages with IEs outside the domain, but do so through the IEs of the upper layers. There is intense semantic interoperability activity among these IEs, which have a high capacity for self-learning due to continuous interaction with the domain environment, and they improve the knowledge of other IEs of the colony itself and of the IEs of the upper layers, up to the IE Controller. In other words, these IEs favor the learning of the entire cluster of IEs in the layered model being described here. The IEs of the colonies receive identifications with numeric suffixes ranging from \textbf{10000} to \textbf{4294967295}.
The fourth layer contains the \textbf{Auxiliary IEs}. This layer exists in order to allow the transfer of computing demands to a new set of IEs (recursiveness of the model). It reproduces, successively, the first, second, third and a new fourth layer. This new IE sequence has an additional suffix \textbf{:j:0} for a new IE Controller responsible for the following four new layers. In the new second, third and fourth layers, the IDs of the IEs are suffixed with \textbf{:j:id}, where \textbf{j} is the number of the colony IE that originated the new fourth layer and \textbf{id} is a number with the specifications above. A typical application of the fourth layer is subdomains, such as home networks (homenets).
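The identification scheme of the first three layers can be sketched as follows (an illustrative sketch; the function and constant names are ours, not part of the A2RD specification):

```python
# Illustrative sketch of the IE identification scheme described above,
# with the suffix ranges of the first three layers.
LAYERS = (
    ("Controller", 0, 0),            # first layer: IEx:0
    ("Specialized", 1, 9999),        # second layer
    ("Colony", 10000, 4294967295),   # third layer
)

def layer_of(suffix: int) -> str:
    """Return the layer name associated with an IE suffix."""
    for name, lo, hi in LAYERS:
        if lo <= suffix <= hi:
            return name
    raise ValueError(f"suffix {suffix} out of range")

def ie_id(asn: int, suffix: int) -> str:
    """Format the identifier of an IE hosted in AS `asn`, e.g. 'IE5:0'."""
    return f"IE{asn}:{suffix}"
```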
Figure \ref{fig:A2RD-ModeloImplementacao_EN} shows the A2RD implementation model, where the small colored rectangles are IEs. The IEs are arranged and distributed among the layers as described above for the abstract model; in this example, the IEs are implemented in the domain of an AS whose number is \textbf{x}. The figure also shows that the IEs functionally important for inter-domain operations reside in the upper layers.
\begin{figure}[ht]
\centering
\includegraphics[width=.46\textwidth]{A2SD-ModeloImplementacao_EN}
\caption{A2RD Implementation Model. Source: \cite{Braga:2015a} \cite{Braga:2015}}
\label{fig:A2RD-ModeloImplementacao_EN}
\end{figure}
A classification of relevance observed in the implementation model is the intensity with which an IE aggregates the \textbf{self-*} properties. If an IE has some self-organizing capability, it must participate directly linked to the IE Controller; even if it participates in the layer of Auxiliary IEs, there may be a new IE Controller that logically builds a new layered architecture.
On the other hand, the representation of the model is logical (an abstraction of the physical implementation). Physically, locating an IE in the domain environment is essential. The best alternative is IP addressing, preferably IPv6 for reasons of availability. The IE Controller must maintain a table associating each logical reference with the IP address designated by the IE Controller itself, on the premise that an IPv6 block is available at the beginning of the implementation. However, this is not a fundamental issue because, as will be seen in the next section, in the name of security the mapping of IP addresses to IE IDs will be available in a primitive Domain Name System (DNS): a hosts file allocated internally, with a direct link to the IE Controller.
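The hosts-file mapping just described can be sketched as follows; the class and method names, as well as the example address, are our own illustrative choices:

```python
# Illustrative sketch of the primitive DNS described above: a hosts-file
# style table kept by the IE Controller, mapping each logical IE ID to the
# IPv6 address the Controller assigned to it.
class HostsTable:
    def __init__(self) -> None:
        self._entries: dict[str, str] = {}

    def register(self, ie_id: str, ipv6: str) -> None:
        """Record the address the IE Controller designates for an IE."""
        self._entries[ie_id] = ipv6

    def resolve(self, ie_id: str) -> str:
        """Look up the IPv6 address of an IE (KeyError if unregistered)."""
        return self._entries[ie_id]
```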
\section{A2RD Use Case}
The A2RD implementation model has been tested on an Internet Routing Registry\footnote{http://www.irrd.net/} (IRR) server. The IRR is an important database for AS administrators; but because it depends on systematic updates by human beings, it becomes an unreliable database. By eliminating human intervention and letting the IEs perform the updates, the IRR becomes, surprisingly, a reliable database. Figure \ref{fig:IRRmodell_EN} shows that there is no logical difference between the two cases.
\begin{figure}[ht]
\centering
\includegraphics[width=.23\textwidth]{IRRmodel_EN}
\includegraphics[width=.23\textwidth]{IRR-IEmodel_EN}
\caption{IRR model. Objects created by human beings (L) and objects created by IEs (R)}
\label{fig:IRRmodell_EN}
\end{figure}
Numerous IEs have been created: some specialized in the creation of objects, others in the systematic updating of objects, and others specialized in capturing information from the various components that make the AS operational.
\section{Model for knowledge acquisition by IEs}
This experience with the A2RD use case has led to the need to make IEs more autonomous. The knowledge required by IEs is available in unstructured bases, in particular those produced by the IETF Working Groups (WGs), but it is also present in other, unofficial documents related to equipment and techniques associated with the resources and facilities available in the domain of ASes. Among these available bases, one of the most important is the one maintained by the RFC Editor\footnote{\url{https://www.rfc-editor.org/}}, comprising more than 8,000 documents called Requests for Comments (RFCs).
Figure \ref{fig:tese-modeloprincipal_EN} shows the whole process of acquiring knowledge from the unstructured bases (3): they pass through an adaptation step (4) so that they can be submitted to appropriate tools and techniques in (5), such as IBM Watson techniques \cite{ferrucci2004building, ferrucci2010building}; the preliminary result is stored in an intermediate knowledge base (6), which, after interference from other more elaborate tools (7) and from the IEs themselves (1), gives rise to an appropriate knowledge base (2).
\begin{figure*}[ht]
\centering
\includegraphics[width=.9\textwidth]{tese-modeloprincipal_EN}
\caption{Global knowledge capture model for IEs in a restricted domain. Source: \cite{Braga:2015a}}
\label{fig:tese-modeloprincipal_EN}
\end{figure*}
\section{Research questions involved in the process of knowledge acquisition}
The RFCs repository is constantly updated. Any effective use of this repository implies a continuous and permanent process of manipulation, involving four steps, three of which represent research questions that must be studied.
\begin{enumerate}
\item Capture of the RFC Editor repository in a systematic and continuous way.
\item Lexical refinement of RFCs (\textbf{Research question 1})
\item Semantic distillation and construction of the knowledge base. (\textbf{Research question 2})
\item Knowledge base used by IEs. (\textbf{Research question 3})
\end{enumerate}
These steps are represented and identified in Figure \ref{fig:ModeloGlobalQuestoesPesquisa_EN}.
\begin{figure}[!htb]
\centering
\includegraphics[width=.48\textwidth]{ModeloGlobalQuestoesPesquisa_EN}
\caption{Characterization of the problem and its research questions. Source: \cite{Braga:2015a}}
\label{fig:ModeloGlobalQuestoesPesquisa_EN}
\end{figure}
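Step 1 above, the systematic capture of the RFC Editor repository, can be sketched as follows; the element names are our assumptions about the shape of the XML index, and the inline sample fragment is ours so the sketch stays self-contained:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch: parse a fragment shaped like the RFC Editor's XML
# index and build a doc-id -> title map for later processing steps.
SAMPLE_INDEX = """<rfc-index>
  <rfc-entry><doc-id>RFC0791</doc-id><title>Internet Protocol</title></rfc-entry>
  <rfc-entry><doc-id>RFC8200</doc-id><title>IPv6 Specification</title></rfc-entry>
</rfc-index>"""

def index_entries(xml_text: str) -> dict:
    """Map each doc-id to its title for every rfc-entry in the index."""
    root = ET.fromstring(xml_text)
    return {entry.findtext("doc-id"): entry.findtext("title")
            for entry in root.iter("rfc-entry")}
```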
\subsection{WordIETF}
In step 2, which corresponds to research question 1, there is an immediate result: a lexical and syntactic base called \mbox{\textbf{WordIETF}}, with much similarity to WordNet \cite{fellbaum1998wordnet}. \textbf{WordIETF} is built on the eXtensible Markup Language\footnote{https://www.w3.org/TR/REC-xml/} (XML) and takes full advantage of the RFC Editor's publication of the RFC repository in XML\footnote{https://www.rfc-editor.org/rfc-index.xml}.
The construction of \textbf{WordIETF} has been tested using a series of techniques and recommendations produced by various authors. ``Computing is a natural science,'' as Peter Denning said \cite{Denning:2007:CNS:1272516.1272529}.
Semantics is essential for living organisms and defines the relationship between the mind and the world \cite{dodig2008semantics}.
More recently, authors have considered several techniques using computational morphology, and there is a belief that the contribution of a database such as \textbf{WordIETF} will be fundamental for the evolution of research questions 2 and 3 \cite{dodig2016nature, dodig2011dialogue, dodig2016information}.
\section{Conclusions}
There are many challenges to the progress of the project; among them is the construction of \textbf{WordIETF}, which can meet the demand for the construction of ontologies in the area of Internet infrastructure. The development of \textbf{WordIETF} should be cooperative, and although there are initiatives by researchers in the area of the Internet of Things (IoT) \cite{Hachem:2011} and by those interested in the Internetware paradigm of the Context Aware Supporting Environment for Internetware\footnote{https://code.google.com/p/casei/} (CASEi), the authors believe it should have central coordination, such as by the Internet Research Task Force (IRTF), through a multi-stakeholder group.
Beyond these proposals, it is necessary to develop methodologies for the construction of IEs, identified in this case as \textbf{Intelligent Objects} (IOs). An IO is an intelligent element built with quality, reusable, and preserving the knowledge of its life cycle in the context of A2RD model applications, strengthening the security environment proposed by the DTS model and other related models. Constructing IOs in an adequate and standardized manner induces interdisciplinary cooperation and the rapid development of IEs. The challenge would be to use the INTERA methodology \cite{BragaJ:2015}, adapted to the construction of IOs. INTERA is a successful methodology used for Learning Objects.
\section{Authors' Acknowledgments}
This work was conducted during a scholarship supported by the International Cooperation Program CAPES at the University of Saskatchewan, Canada, financed by CAPES, the Brazilian Federal Agency for Support and Evaluation of Graduate Education within Brazil's Ministry of Education.
\bibliographystyle{abbrv}
\section{Introduction}
\subsection{Gauged Linear Sigma models}
One of the major advances in the subject of Gromov--Witten theory is
the development of the so called FJRW-theory by the third author and
his collaborators.
The Gromov--Witten theory of a Calabi--Yau hypersurface of a weighted
projective space is conjectured to be equivalent to its FJRW-dual via
the LG/CY correspondence, a famous duality from physics.
In physics, the Gromov--Witten theory corresponds to a nonlinear sigma
model while FJRW-theory corresponds to a Landau--Ginzburg model.
Back in 1993, Witten gave a physical derivation of the LG/CY
correspondence by constructing a family of theories which was known as
\emph{gauged linear sigma model} or GLSM \cite{Wi93}.
By varying the parameters of GLSM, Witten argued that GLSM converges
to a nonlinear sigma model at a certain limit of parameters and a
Landau-Ginzburg orbifold at a different limit.
Hence, they are related by analytic continuation.
Several years ago, GLSM was put on a firm mathematical footing by Fan, Jarvis
and the third author \cite{FJR18}.
Let us briefly describe the construction.
The input data of a GLSM is an LG-space
\begin{equation*}
W\colon V \mathbin{\mkern-3mu/\mkern-6mu/\mkern-3mu}_{\theta} G\rightarrow \mathbb C
\end{equation*}
for a GIT quotient $V \mathbin{\mkern-3mu/\mkern-6mu/\mkern-3mu}_{\theta} G$ with a $\mathbb C^*$-action
$\mathbb C^*_R\curvearrowright V$ (called the R-charge) such that $W$ is
homogeneous of degree one.
Moreover, we assume that the critical locus
$\mathrm{Crit}_W=\{\diff W=0\}\subset V \mathbin{\mkern-3mu/\mkern-6mu/\mkern-3mu}_{\theta}G$ is compact.
The most famous example is
\begin{equation*}
W = p(x^5_1+x^5_2+x^5_3+x^5_4+x^5_5)\colon \mathbb C^5\times \mathbb C\rightarrow \mathbb C
\end{equation*}
with $\mathbb C^*$-action of weight $(1,1,1,1,1,-5)$.
Here $(x_1, x_2, x_3, x_4, x_5)$ are the coordinates of $\mathbb C^5$ and $p$
is the coordinate of $\mathbb C$.
Furthermore, the R-charge has the weight $(0,0,0,0,0, 1)$.
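As a quick check of the degree-one condition in this example: the R-charge rescales only $p$, so
\begin{equation*}
W(x_1,\dots,x_5,\lambda p)=\lambda\, p\,(x_1^5+\cdots+x_5^5)=\lambda\, W(x_1,\dots,x_5,p),
\end{equation*}
while the gauge $\mathbb C^*$ of weight $(1,1,1,1,1,-5)$ leaves $W$ invariant: $W(\mu x_1,\dots,\mu x_5,\mu^{-5}p)=\mu^{-5}p\sum_i \mu^5 x_i^5=W(x_1,\dots,x_5,p)$, so $W$ descends to the GIT quotient.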
The GIT-quotient $(\mathbb C^5\times \mathbb C)\mathbin{\mkern-3mu/\mkern-6mu/\mkern-3mu}_{\theta} \mathbb C^*$ has two chambers
or phases depending on the character
\begin{equation*}
\theta(z)=z^n\colon \mathbb C^*\rightarrow \mathbb C^*.
\end{equation*}
If $\theta>0$ (i.e., $n>0$), then the unstable locus is
$(0,0,0,0,0)\times \mathbb C$ and we have the GIT quotient
$((\mathbb C^5-\{(0,0,0,0,0)\})\times \mathbb C)\mathbin{\mkern-3mu/\mkern-6mu/\mkern-3mu}_{\theta} \mathbb C^*\cong
\cO_{\mathbb{P}^4}(-5).$ When $\theta<0$, the unstable locus is
$\mathbb C^5\times \{0\}$ and the GIT quotient is
$(\mathbb C^5\times \mathbb C^*)\mathbin{\mkern-3mu/\mkern-6mu/\mkern-3mu}_{\theta}\mathbb C^*\cong [\mathbb C^5/\ZZ_5]$: setting $p=1$ leaves the residual stabilizer $\ZZ_5=\{\mu\in \mathbb C^* : \mu^5=1\}$ acting on the $x$-coordinates.
This GLSM is supposed to be equivalent to the Gromov--Witten theory of
the quintic 3-fold $X_5=\{ x^5_1+x^5_2+x^5_3+x^5_4+x^5_5=0\}$ in the
chamber $\theta>0$ and FJRW-theory of the LG orbifold
$$F = x^5_1+x^5_2+x^5_3+x^5_4+x^5_5\colon [\mathbb C^5/\ZZ_5]\rightarrow \mathbb C$$
in the chamber $\theta<0$.
Let us use this example to illustrate Fan--Jarvis--Ruan's algebraic
GLSM theory.
The geometric data for the above GLSM is
\begin{equation*}
\scrM^\theta = \{(\cC, \cL, (s_1, s_2,s_3, s_4,s_5)\in H^0(\cL^{\oplus 5}), p\in H^0(\cL^{-5}\otimes \omega_{\log})): \dots\}
\end{equation*}
satisfying a certain stability condition, where $\cC$ is a pre-stable
curve and $\cL$ is a line bundle over $\cC$.
For $\theta>0$, the stability condition implies that
$(s_1, s_2, s_3, s_4, s_5)$ define a stable quasimap into $\mathbb{P}^4$ and
we obtain a variant of Chang--Li's p-field moduli space \cite{CL12}.
For $\theta<0$, the stability condition implies that the zeros of $p$
form an effective divisor $D$, and that $p$ defines a weighted 5-spin
structure $\cL^5\cong \omega_{\log,\cC}(-D)$.
In both cases, $\scrM^\theta$ is a DM-stack with two-term perfect
obstruction theory and has a virtual cycle in the Chow group.
However, it is not proper (compact).
To obtain a virtual cycle which we can integrate, we use $\diff W$ to
define a cosection
$$\sigma\colon \obs_{\scrM^\theta} \rightarrow \cO_{\scrM^\theta}$$
and apply Kiem--Li's cosection localization technique \cite{KiLi13}
to define a localized virtual cycle $[\scrM]^\mathrm{vir}_\sigma$ with support
on the \emph{compact} sub-locus
$\scrM^\theta(\sigma)\subset \scrM^\theta$ satisfying the condition
$(s_1, s_2, s_3, s_4, s_5, p)\in \mathrm{Crit}_W$.
The above construction is beautiful.
However, it is not directly useful for computational purposes.
In many ways, we would like to have an alternative construction which
is more friendly towards effective computation.
To that end, we would like to avoid using a cosection.
In the same paper, Kiem--Li showed that if $\scrM$ is a compact moduli
space with a two-term perfect obstruction theory and a cosection
$\sigma$, then
\begin{equation*}
\deg([\scrM]^\mathrm{vir}) = \deg([\scrM]^\mathrm{vir}_\sigma).
\end{equation*}
This suggests that one should try to compactify the GLSM moduli space
$\scrM^\theta$ in a way such that its cosection extends without
additional degeneracy loci.
The main purpose of this and its subsequent articles is to construct
such a compactification.
\subsection{The logarithmic approach}
\subsubsection{Stable maps relative to boundary divisors}
The theory of stable maps relative to a smooth boundary divisor was
first introduced in symplectic geometry in the 90's by Li--Ruan
\cite{LR01} and Ionel--Parker \cite{IP03, IP04}.
Since then, it has become one of main tools in the subject of
Gromov--Witten theory.
During the last twenty years, its algebraic geometric version using
expansions was first developed by Jun Li \cite{Li01, Li02}.
A combination of expansions with logarithmic geometry was introduced
by Kim \cite{Kim10}, and one with orbifold structure was introduced by
Abramovich--Fantechi \cite{AF16}.
In the general logarithmic setting relative to toroidal boundary
divisors, the theory of stable log maps was developed by
Abramovich--Chen--Gross--Siebert \cite{AC14, Ch14, GS13}
\emph{without} using expansions.
A different approach using exploded manifolds was introduced by Brett
Parker \cite{Par11, Par12, Par15}.
In this and the subsequent articles, we will apply the techniques of stable log maps to compactify the gauged linear sigma model (GLSM) of Fan-Jarvis-Ruan \cite{FJR18}, and study their virtual cycles.
\subsubsection{Log maps}
A \emph{stable log map} to a separated log Deligne--Mumford stack $Y$
is a morphism of log stacks $f \colon \cC \to Y$ over a log scheme $S$
where $\cC \to S$ is a \emph{twisted log curve} and the underlying
twisted map $\underline{f}$ obtained by removing log structures is
stable in the usual sense. For our purpose, we will only consider the
case that $\cM_Y$ is of \emph{Deligne--Faltings type of rank one}.
This amounts to saying that the logarithmic boundary of $Y$ is a
Cartier divisor, see Section \ref{sss:rank-one}.
The central object of log maps is the stack $\scrM(Y,\beta)$ parameterizing stable log maps to $Y$ with a given collection of discrete data $\beta$ (Section \ref{sss:log-moduli-set-up}). The case where $Y$ is a log scheme has been developed in \cite{AC14, Ch14, GS13}. The same method applies to the case of log Deligne--Mumford targets. Due to a lack of references, in Section \ref{sec:logmap} we record a proof of algebraicity of $\scrM(Y,\beta)$ together with many useful properties needed in our construction.
\subsubsection{Modular principalization of the boundary}
A stable log map is \emph{degenerate} if it maps a component of the
source curve to the boundary of $Y$.
Denote by $\Delta \subset \scrM(Y,\beta)$ the locus consisting of
degenerate fibers.
In general, it is virtually a toroidal divisor, which becomes a major
difficulty for the construction of a reduced perfect obstruction
theory of the compactified GLSM.
The key to overcome this difficulty is the following \emph{modular
principalization} of $\Delta$.
Let $f\colon \cC \to Y$ be a stable log map over a geometric log point
$S$.
For each irreducible component $Z \subset \cC$ we may associate an
unique element $e_{Z} \in \oM_{S} := \cM_S/\cO^*_S$ called the
\emph{degeneracy} of $Z$ (Section~\ref{sss:map-at-generic-pt}).
As elements of the monoid $\oM_{S}$, they carry a natural partial
ordering such that $e_{Z_1} \poleq e_{Z_2}$ iff
$(e_{Z_2} - e_{Z_1}) \in \oM_S$.
Intuitively $e_Z$ measures the ``speed'' of degeneracy of $Z$ into the
boundary of $Y$, and $e_{Z_1} \poleq e_{Z_2}$ means that $Z_2$
degenerates ``faster'' than $Z_1$.
The stable log map $f$ is said to have \emph{uniform maximal
degeneracy} if the set of degeneracies has a unique maximal element.
It turns out that having uniform maximal degeneracy is an open
condition and is stable under base change.
Let $\scrU(Y,\beta) \subset \scrM(Y,\beta)$ be the sub-category
fibered over log schemes consisting of objects with the uniform
maximal degeneracy.
In Section \ref{sec:UMD}, we establish the following theorem.
\begin{theorem}[Theorem \ref{thm:max-moduli}] \label{thm:max-moduli-intro}
The canonical morphism $\scrU(Y,\beta) \to \scrM(Y,\beta)$ is a proper, representable and log \'etale morphism of log Deligne--Mumford stacks.
\end{theorem}
The maximal degeneracy defines naturally a \emph{virtual} Cartier
divisor $\Delta_{\max} \subset \scrU(Y,\beta)$ whose support is
precisely the locus of degenerate log maps, see
Section~\ref{sss:boundary-torsor}.
\begin{remark}
The category $\scrU(Y,\beta)$ is indeed the largest sub-category of
$\scrM(Y,\beta)$ to which our construction of reduced perfect
obstruction theory of compactified GLSM applies.
Consequently, our construction applies to subcategories of
$\scrU(Y,\beta)$ including aligned logarithmic structures of
\cite[8.1]{ACFW13}.
The general construction of this paper allows us to work with
various subcategories of $\scrU(Y,\beta)$ to carry out the
computation of the GLSM virtual cycle.
This will be a task of \cite{CJR18P2}.
\end{remark}
\subsection{The $r$-spin case}
Since the technique is quite involved, for the reader's benefit it
makes sense to work it out in full detail in a first nontrivial but
simple example.
This is another main purpose of the current article.
Our example of choice is the $r$-spin theory, which corresponds to the
GLSM of
$$W=x^rp\colon [(\mathbb C\times \mathbb C)/\mathbb C^*] \rightarrow \mathbb C,$$
where the coordinates on $\mathbb C \times \mathbb C$ are $(x, p)$, the weight of
action is $(1, -r)$ and $R$-charge is $(0,1)$.
Similarly to the case of quintic 3-folds, this model has two chambers
as well.
The relevant chamber for $r$-spin curve theory is the Landau--Ginzburg
chamber $\theta<0$, where the stable locus is $\mathbb C\times \mathbb C^*$.
Furthermore, we choose a stability condition such that $p$ has no
zero.
By the previous discussion, $p$ can be interpreted as defining an
isomorphism $\cL^r\cong \omega_{\log, \cC}$ and the GLSM moduli space
is
$$\USF^{\circ}_{g,k}=\{(\cC, \cL, s\in H^0(\cL), \cL^r\cong \omega_{\log, \cC})\}.$$
Let $(\cC/S, \cL)$ be an $r$-spin curve consisting of a log curve $\cC \to S$ and an $r$-spin bundle $\cL$ over $\underline{\cC}/\underline{S}$. Denote by $0_{\cP}$ and $\infty_{\cP}$ the zero and infinity sections of $\mathbb{P} := \mathbb{P}(\cL\oplus\cO_{\underline{\cC}})$ respectively, and $\cM_{\cP_{\infty}}$ the log structure on $\mathbb{P}$ associated to $\infty_{\cP}$. Consider the log stack $\cP = (\mathbb{P}, \cM_{\cC}|_{\mathbb{P}}\oplus_{\cO^*}\cM_{\cP_{\infty}})$ with the projection $\cP \to \cC$. A {\em log field} is a section $f \colon \cC \to \cP$. It is {\em stable} if $\omega^{\log}_{\cC/S}\otimes\cO(f^*0_{\cP})^{k}$ is positive for $k \gg 0$.
Denote by $\SF_{\beta}^{1/r}$ the stack of stable log fields with discrete data $\beta = (g, \vgamma, \bf{c})$ consisting of the genus $g$, the monodromy $\vgamma$ of the spin bundle along the markings, and the contact order $\bf{c}$ of each marking with $\infty_{\cP}$. We first achieve the compactification:
\begin{theorem}[Theorem \ref{thm:spin-fields-moduli}]
$\SF_{\beta}^{1/r}$ is represented by a proper log Deligne--Mumford stack.
\end{theorem}
\begin{remark}
The compactification of the moduli of abelian and meromorphic
differentials using log stable maps has been studied previously in
\cite{CC16, Gu16}.
The compactification considered in this paper (in the case $r = 1$)
is different from loc.\ cit.\ in that we do not put the log
structure on $\cP$ induced by the zero section.
\end{remark}
\begin{remark}
It is worth emphasizing that the properness of $\SF_{\beta}^{1/r}$ is, interestingly, a non-trivial fact. As shown in Section \ref{sss:properness-failure}, the limit of a one-parameter family of meromorphic sections of spin bundles may not exist regardless of the stability conditions. Log structures play an important role in the existence of the underlying limiting section!
\end{remark}
Note that a log field $f \colon \cC \to \cP$ is equivalent to a log map $f' \colon \cC \to (\mathbb{P}, \cM_{\cP_{\infty}})$ whose underlying map is a section of $\mathbb{P} \to \underline{\cC}$. Since $\cM_{\cP_{\infty}}$ is of Deligne--Faltings type of rank one, we may consider the stack $\USF_{\beta}^{1/r}$ of stable log fields with uniform maximal degeneracy with respect to $\cM_{\cP_{\infty}}$. Theorem \ref{thm:max-moduli-intro} implies that $\USF_{\beta}^{1/r}$ is a proper log Deligne--Mumford stack as well.
Next, we consider its virtual cycle.
The stack $\USF_{\beta}^{1/r}$ admits a natural two term perfect
obstruction theory and hence a virtual cycle
$[\USF_{\beta}^{1/r}]^{\mathrm{vir}}$.
But this virtual cycle is different from the cosection localized
virtual cycle.
The main result of the paper is the following:
\begin{theorem}[Proposition \ref{prop:reduced-obs} and \ref{prop:comparison}]
\label{thm:main}
Under the condition that all markings are narrow and of trivial
contact order, the space $\USF_{\beta}^{1/r}$ carries an alternative
``reduced'' two term perfect obstruction theory together with a
cosection $\sigma^{\mathrm{red}}_{\USF/\fU}$ on $\USF_{\beta}^{1/r}$ that
has no additional degeneracy loci.
Furthermore, suppose $[\USF_{\beta}^{1/r}]^{\mathrm{red}}$ is the virtual
cycle of the reduced perfect obstruction theory, then
\begin{equation*}
i_*[\USF^{\circ}]^{\mathrm{vir}}_{\mathrm{loc}}=[\USF_{\beta}^{1/r}]^{\mathrm{red}},
\end{equation*}
where
$i\colon \overline{\scrM}^{1/r}_{g,\vgamma}\rightarrow
\USF_{\beta}^{1/r}$ is the inclusion of the zero section, and
$\USF^{\circ} = \USF_{\beta}^{1/r} \setminus \Delta_{\max}$.
\end{theorem}
\begin{remark}
We remark that the reduced perfect obstruction theory has the same
virtual dimension as the original one.
Therefore, it is \emph{not} a traditional reduced virtual cycle,
which changes the virtual dimension.
Instead, the perfect obstruction theory is only ``reduced'' along
the boundary of the moduli space.
\end{remark}
\subsection{History of the $r$-spin virtual cycle}
There was a long line of works constructing both the moduli space of
$r$-spin structures and its virtual cycle.
Spin curves were proposed by Witten \cite{Wi93} in an effort to
generalize his famous conjecture that the intersection theory of the
moduli space of stable curves is governed by the KdV-hierarchy.
The compactification was first constructed by Jarvis \cite{Ja98} using
torsion-free sheaves and later by Abramovich--Jarvis \cite{AbJa03}
using line bundles on twisted curves.
The first construction of the virtual cycle is due to
Polishchuk--Vaintrob \cite{PoVa01}.
From the modern point of view, their construction is better viewed as
a quantum K-theoretic construction from which one can obtain a virtual
cycle by taking some kind of Chern character (see~\cite{Ch08}).
The picture was clarified significantly by Fan--Jarvis--Ruan with a
vast generalization (FJRW-theory) of $r$-spin theory.
The input data of FJRW theory is a nondegenerate quasi-homogeneous
polynomial $W$ together with a so-called \emph{admissible} finite
automorphism group $G$ of $W$.
The $r$-spin theory is simply the case $W=z^r$ and
$G=\ZZ/r \ZZ$.
The state space of the $r$-spin theory corresponds to the monodromy at
the marked point, and is indexed by an integer $0\leq m<r$.
The insertion $m>0$ corresponds to the so-called \emph{narrow} sector
in FJRW-theory and the corresponding virtual cycle was constructed as
a localized topological Euler class.
The role of $m=0$ was clarified in general FJRW-theory as a new type
of insertion called \emph{broad}. Fan--Jarvis--Ruan showed that broad
insertions are irrelevant in $r$-spin theory but a source of
difficulty in the general case.
Fan--Jarvis--Ruan's construction is analytic in nature although there
is an algebraic construction of Polishchuk and Vaintrob using matrix
factorizations \cite{PoVa16}.
However, it is not clear that these two are equivalent in the most
general case.
The last piece of the puzzle before the present work was provided by
Chang--Li--Li in \cite{CLL15}, where they gave yet another algebraic
geometric construction of FJRW virtual cycle for narrow sectors.
This is the construction that we use in this article.
Furthermore, they proved that the constructions of
Polishchuk--Vaintrob, Chiodo, Fan--Jarvis--Ruan and Chang--Li--Li are
all equivalent.
Finally, the $A_r$-generalization of Witten's integrable hierarchies
conjecture was proved by Faber--Shadrin--Zvonkine \cite{FSZ10} while
the $D_n, E_{6,7,8}$-generalization was proved by Fan--Jarvis--Ruan
\cite{FJR13}.
\subsection{Effective $r$-spin structures}
A key input that led us to propose this new construction of the
$r$-spin virtual cycle is a conjectural formula of
$[\overline{\scrM}^{1/r}_{g,n}]^\mathrm{vir}$ by the second author.
This formula was motivated by the recent study of the cycle of the
locus of holomorphic differentials and of double ramification cycles.
We outline here this train of thought.
We consider the open sub-stack
$\scrM_{g,\vgamma}^{1/r}\subset\overline{\scrM}_{g,\vgamma}^{1/r}$ of
$r$-spin structures on smooth orbifold curves.
An $r$-spin structure $(\cC/S,\cL)\in \scrM_{g,\vgamma}^{1/r}$ is
called \emph{effective} if $h^0(\cL)>0$.
We denote by $S_0\subset \scrM_{g,\vgamma}^{1/r}$ the locus of
effective $r$-spin structures and by
$\overline{S}_0 \subset \overline{\scrM}_{g,\vgamma}^{1/r}$ its
closure.
A.~Polishchuk studied the geometry of effective $r$-spin structures
(see~\cite{Po06}) and asked the following question: can we express the
restriction of the $r$-spin virtual cycle to $\scrM_{g,\vgamma}^{1/r}$
in terms of the cycle $[\overline{S}_0]$ and other natural cycles?
This problem was left aside until a precise conjecture was recently stated (see~\cite[Conjecture~A.1]{PPZ16P}). This conjecture can be re-stated as follows:
for large values of $r$, we have
\begin{equation*}\label{conj:holodiff}
\epsilon_*\left( \frac{1}{r}[\overline{\scrM}_{g,\vgamma}^{1/r}]^\mathrm{vir} + [\overline{S}_0] \right) = \alpha(r)\in A^*(\overline{\scrM}_{g,n})
\end{equation*}
where $\alpha(r)$ is a polynomial in $r$ (here
$\epsilon\colon \overline{\scrM}_{g,\vgamma}^{1/r}\to \overline{\scrM}_{g,n}$ stands for the forgetful map of the spin structure).
\begin{remark}
Note that the conventions for the value of
$[\overline{\scrM}_{g,\vgamma}^{1/r}]^{\mathrm{vir}}$ are different
in~\cite{CLL15},~\cite{Po06}, and~\cite{PPZ16P}.
\end{remark}
This conjecture is very similar to a conjectural expression by Pixton for the so-called double-ramification (DR) cycles that was proved by Pandharipande, Pixton, and the second and fifth authors (see~\cite{PPZ16P}). The main tool of their proof is the virtual localization formula of Graber and Pandharipande (see~\cite{GrPa99}).
In order to prove the new conjecture of~\cite{PPZ16P}, the second author built a localization formula by analogy with the proof of the expression of DR cycles. In this conjectural localization formula, the role of DR cycles is replaced by cycles of effective $r$-spin structures. The second author checked the consistency of this formula by various computations in low genera.
From this point, our main problem was to construct the space where the conjectural localization formula should hold. The effort to pin down the geometry underlying this formula led us to use the machinery of log geometry in this article.
In work in progress \cite{CJRSZ18P2}, we prove the localization formula and show that it implies \cite[Conjecture~A.1]{PPZ16P}.
\subsection{Plan}
The paper is organized as follows.
In Section~\ref{sec:logmap}, we discuss the general set-up of log
stable maps in the orbifold setting.
In Section~\ref{sec:UMD}, we introduce the new notion of log
structures of ``uniform maximal degeneracy'', which is crucial for the
construction of the reduced virtual cycle.
This is applied in Section~\ref{sec:logfields}, to construct the
compactification of the moduli space of $r$-spin curves with a
field. Finally, in Section~\ref{sec:virtual}, we construct the reduced
perfect obstruction theories and cosections, and we prove
Theorem~\ref{thm:main}.
\subsection*{Acknowledgement}
The first author was partially supported by NSF grant DMS 1560830 and
DMS 1700682.
The second author was partially supported by an AMS Simons Travel
Grant.
The third author was partially supported by NSF grant DMS 1405245 and
NSF FRG grant DMS 1159265.
The second author is indebted to Dimitri Zvonkine for many discussions
at the Institut de Mathématiques de Jussieu that have directly led to
a conjectural formula for Witten's class, whose proof was the main
motivation for starting the current project.
Discussions at the ``RTG conference on Witten's class and related
topics'' at the University of Michigan formed the start of this
collaboration.
The second and third authors would like to thank Shuai Guo for the
collaboration which provided motivation for the current work.
The authors would like to thank Rahul Pandharipande for the continuous
support and the wonderful ``Workshop on higher genus'' at ETH Zürich
where our entire program was presented for the first time.
We would also like to thank Dawei Chen, Alessandro Chiodo, J\'er\'emy
Gu\'er\'e, Davesh Maulik, Jonathan Wise and Dimitri Zvonkine for
useful discussions.
Finally, the second and third authors would like to thank MSRI for the
hospitality where the paper was finished during a visit supported by
NSF grant DMS-1440140.
\section{Moduli of twisted stable log maps}
\label{sec:logmap}
In this section, we introduce the set-up of stable log maps needed for
compactifying the GLSM.
It was defined with prestable source curves in \cite{AC14, Ch14,
GS13}.
We take the opportunity to extend it to the orbifold setting.
\subsection{Twisted log maps}
\subsubsection{Twisted curves}\label{sss:twisted-curve}
Recall from \cite{AV02} that a {\em twisted $n$-pointed curve} over $\underline{S}$ consists of the following data
\[
(\underline{\cC} \to \underline{C} \to \underline{S}, \{\sigma_i\}_{i=1}^n)
\]
where
\begin{enumerate}
\item $\underline{\cC}$ is a proper Deligne--Mumford stack, and is \'etale locally a nodal curve over $\underline{S}$;
\item $\sigma_i \subset \underline{\cC}$ are disjoint closed substacks in the smooth locus of $\underline{\cC} \to \underline{S}$;
\item $\sigma_i \to \underline{S}$ are \'etale gerbes banded by the multiplicative group $\mu_{r_i}$ for some positive integer $r_i$;
\item the morphism $\underline{\cC} \to \underline{C}$ is the coarse moduli morphism;
\item along each stacky critical locus of $\underline{\cC} \to \underline{S}$, the group action of $\mu_{r_i}$ is balanced;
\item $\underline{\cC} \to \underline{C}$ is an isomorphism over $\underline{\cC}_{gen}$, where $\underline{\cC}_{gen}$ is the complement of the markings $\sigma_i$ and the stacky critical locus of $\underline{\cC} \to \underline{S}$.
\end{enumerate}
Given a twisted curve as above, by \cite[4.11]{AV02} the coarse space
$\underline{C} \to \underline{S}$ is a family of $n$-pointed usual pre-stable curves
over $\underline{S}$ with the markings determined by the images of
$\{\sigma_i\}$.
We define the \emph{genus} of the twisted curve as the genus of the
corresponding coarse pre-stable curve.
When there is no danger of confusion, we will simply write
$\underline{\cC} \to \underline{S}$ for a family of twisted curves.
Twisted curves can only have stacky structure along markings and nodes. We recall the local stacky structure below.
\subsubsection{Stacky structure along nodes}
Let $\underline{\cC} \to \underline{S}$ be a family of twisted curves with the coarse moduli $\underline{\cC} \to \underline{C}$. Let ${\bar q} \to \underline{C}$ be a geometric point of a node. Shrinking $\underline{S}$ if necessary, there exists an \'etale neighborhood $\underline{U} \to \underline{C}$ of $\bar{q}$ with an \'etale morphism
\[
\underline{U} \to \spec \big( \cO_{\underline{S}}[x,y]/(xy = t)\big)
\]
for some $t \in \cO_{\underline{S}}$. Then the pull-back $\underline{\cC}\times_{\underline{C}}\underline{U}$ is given by the stack quotient
\begin{equation}\label{equ:node-local}
[\spec\big( \cO_{\underline{U}}[\tilde{x},\tilde{y}]/(\tilde{x}\tilde{y} = t', \tilde{x}^r = x, \tilde{y}^{r} = y) \big)/\mu_{r}]
\end{equation}
for some $t' \in \cO_{\underline{S}}$. Here for a generator $\gamma \in \mu_{r}$, the $\mu_{r}$-action is given by $\gamma(\tilde{x}) = \zeta \tilde{x}$ and $\gamma(\tilde{y}) = \zeta' \tilde{y}$ for some primitive $r$-th roots of unity $\zeta$ and $\zeta'$. The {\em balanced} condition implies that $\zeta' = \zeta^{-1}$.
\subsubsection{Stacky structure along markings}
Let ${\bar p} \to \underline{C}$ be a geometric point of a marking corresponding to $\sigma_i$. Shrinking $\underline{S}$ if necessary, there exists an \'etale neighborhood $\underline{V} \to \underline{C}$ of ${\bar p}$ with an \'etale morphism
\[
\underline{V} \to \spec \cO_{\underline{S}}[z].
\]
The pull-back $\underline{\cC}\times_{\underline{C}}\underline{V}$ is given by the stack quotient
\begin{equation}\label{equ:marking-local}
\big[\spec\big( \cO_{\underline{U}}[\tilde{z}]/(\tilde{z}^{r_i} = z) \big) \big/ \mu_{r_i} \big]
\end{equation}
and for each $\zeta \in \mu_{r_i}$, the action is given by $\tilde{z} \mapsto \zeta\tilde{z}$.
\subsubsection{Logarithmic twisted curves}\label{sss:log-twisted-curve}
A {\em log twisted $n$-pointed curve} over a fine and saturated log scheme $S$ in the sense of \cite{Ol07} consists of
\[
(\pi\colon \cC \to S, \{\sigma_i\}_{i=1}^n)
\]
such that
\begin{enumerate}
\item The underlying data $(\underline{\cC} \to \underline{C} \to \underline{S}, \{\sigma_i\}_{i=1}^n)$ is a twisted $n$-pointed curve over $\underline{S}$, where $\underline{\cC} \to \underline{C}$ is the coarse moduli morphism.
\item $\pi$ is a proper, logarithmically smooth, and integral morphism of fine and saturated logarithmic stacks.
\item If $\underline{U} \subset \underline{\cC}$ is the non-critical locus of $\underline{\pi}$, then $\oM_{\cC}|_{\underline{U}} \cong \pi^*\oM_{S}\oplus\bigoplus_{i=1}^{n}\NN_{\sigma_i}$ where $\NN_{\sigma_i}$ denotes the constant sheaf over $\sigma_i$ with fiber $\NN$.
\end{enumerate}
For simplicity, we may refer to $\pi\colon \cC \to S$ as a log twisted curve.
The {\em pull-back} of a log twisted curve $\pi\colon \cC \to S$ along an arbitrary morphism of fine and saturated log schemes $T \to S$ is the log twisted curve $\pi_T\colon \cC_T:= \cC\times_S T \to T$.
\subsubsection{The combinatorial structure of log twisted curves}\label{curvecombinatorial}
Given a log twisted curve $(\pi\colon \cC \to S, \{\sigma_i\}_{i=1}^n)$ over $S$, we have an induced morphism of sheaves of monoids
\[
\bar{\pi}^{\flat}\colon \underline{\pi}^{*}\oM_{S} \to \oM_{\cC}
\]
on $\underline{\cC}$. The structure of $\bar{\pi}^{\flat}$ is similar to the case of log curves without twists, which we describe below.
The morphism $\bar{\pi}^{\flat}$ is an isomorphism away from nodes and marked points.
At a marked point $p \to \cC$ with $s = \pi(p)$, we have the stalk $\oM_{\cC, p} \cong \underline{\pi}^*\oM_{S,s}\oplus\NN$, and the morphism $\bar{\pi}^{\flat}_{p}\colon \underline{\pi}^*\oM_{S,s} \to \underline{\pi}^*\oM_{S,s}\oplus\NN$ is the inclusion to the first factor.
At a node $q \to \cC$ with the geometric point $\bar{q} \to q$ and image $\bar{s} = \pi(\bar{q})$, we have the stalk $\oM_{\cC, \bar{q}} \cong \underline{\pi}^*\oM_{S,\bar{s}}\oplus_{\NN}\NN^{2}$, where the direct sum is determined by a map
\begin{equation}\label{node-characteristic}
\NN \to \oM_{S,\bar{s}}, \ \ \ 1 \mapsto \rho_q
\end{equation}
and the diagonal map $\NN \to \NN^{2}$. Indeed, the diagonal map is induced by the relation $t' = \tilde{x}\tilde{y}$ in the local chart (\ref{equ:node-local}). The two generators $(1,0), (0,1) \in \NN^2$ correspond to the local coordinates $\tilde{x}, \tilde{y}$ of the two branches meeting at the node. The element $\rho_{q}$ corresponds to the local section $t'$. The morphism $\bar{\pi}^{\flat}_{\bar{q}}\colon \underline{\pi}^*\oM_{S,\bar{s}} \to \underline{\pi}^*\oM_{S,\bar{s}}\oplus_{\NN}\NN^2$ in this case is again the inclusion to the first factor.
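As a simple sanity check, which we include for orientation, take $Q = \oM_{S,\bar{s}} = \NN$ with $\rho_q = 1$. Then the pushout $\NN\oplus_{\NN}\NN^{2}$ along $1 \mapsto \rho_q$ and the diagonal is isomorphic to $\NN^{2}$, and $\bar{\pi}^{\flat}_{\bar{q}}$ becomes the diagonal map
\begin{equation*}
\NN \to \NN^{2}, \qquad 1 \mapsto (1,1),
\end{equation*}
matching the relation $t' = \tilde{x}\tilde{y}$ in the local chart (\ref{equ:node-local}).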
\subsubsection{The stack of log twisted curves}\label{sss:curve-stack}
Denote by $\fM^{\mathrm{tw}}_{g,n}$ the category of genus $g$ log twisted
curves with $n$ marked points, fibered over the category of log schemes.
By \cite{Ol07}, the fibered category $\fM^{\mathrm{tw}}_{g,n}$ is represented
by a log algebraic stack.
Indeed, the underlying stack $\underline{\fM}^{\mathrm{tw}}_{g,n}$ is the stack
parameterizing twisted curves with the same discrete data.
The boundary of $\underline{\fM}^{\mathrm{tw}}_{g,n}$ parameterizing singular curves
is a normal crossings divisor whose associated divisorial log
structure defines the log structure of $\fM^{\mathrm{tw}}_{g,n}$.
\subsubsection{Log stable maps with twisted source curves}
We fix a log algebraic stack $Y$ as the target.
\begin{definition}
A {\em log map} to $Y$ over a fine and saturated log scheme $S$ consists of the following data
\[
(\pi\colon \cC \to S, f\colon \cC \to Y)
\]
where $\cC \to S$ is a log twisted curve over $S$, and $f$ is a
morphism of log stacks.
The \emph{pull-back} of a log map along an arbitrary morphism of log
schemes is defined via the pull-back of log twisted curves as usual.
Note that we do not require a log map to be representable.
When $Y$ is a separated log Deligne--Mumford stack, a log map is
\emph{stable} if the underlying twisted map is stable in the usual
sense.
In particular, a stable log map is representable.
\end{definition}
For simplicity, we may write $f\colon \cC \to Y$ for a log map.
\subsubsection{Deligne--Faltings targets of rank one}\label{sss:rank-one}
Throughout this paper, we will mainly focus on the following type of targets.
\begin{definition}
A log algebraic stack $Y$ is of {\em Deligne--Faltings type of rank one} if there is a morphism of sheaves of monoids $\NN_{Y} \to \oM_Y$ that locally lifts to a chart of $\cM_Y$. Here $\NN_{Y}$ denotes the constant sheaf over $Y$ with fiber $\NN$.
\end{definition}
For a fine and saturated monoid $P$, denote by $\cA_{P}$ the log algebraic stack with the underlying stack $\big[\spec(k[P])/\spec(k[P^{gp}]) \big]$ and the log structure induced by the affine toric variety $\spec(k[P])$.
For simplicity, denote by $\cA := \cA_{\NN}$. Let $\infty_{\cA}\subset \cA$ be the boundary divisor associated to the log structure $\cM_\cA$. The log stack $\cA$ has the universal property that if $Y$ is of Deligne--Faltings type of rank one, then there is a strict morphism $Y \to \cA$.
\subsection{The combinatorial structure of twisted log maps}\label{ss:log-combinatorial}
The combinatorial structure of log maps with twisted source curves is similar to the case without twists as in \cite{GS13,AC14, Ch14}. We introduce it following \cite{ACGS17}. For our purposes, we assume $Y$ is of Deligne--Faltings type of rank one.
\subsubsection{The induced morphism of sheaves of monoids}
Let $(\pi\colon \cC \to S, f\colon \cC \to Y)$ be a log map over $S$. First consider the case where $\underline{S}$ is a geometric point with $\oM_{S} = Q$. Denote by $\cM := \underline{f}^*\cM_{Y}$. Thus, $\cM$ is a Deligne--Faltings log structure on $\underline{\cC}$ of rank one. This leads to a pair of morphisms of sheaves of monoids
\[
(\bar{\pi}^{\flat}\colon Q \to \oM_{\cC}, \bar{f}^{\flat}\colon \oM \to \oM_{\cC})
\]
where we view $Q$ as the constant sheaf of monoids on $\underline{\cC}$. The morphism $\bar{\pi}^{\flat}$ is described in Section \ref{curvecombinatorial}. We describe the behavior of $\bar{f}^{\flat}$ at generic points, marked points, and nodes of $\underline{\cC}$ as follows.
\subsubsection{The stalks of $\oM$}
Since $\cM$ is of Deligne--Faltings type of rank one, for any point $s \to \underline{\cC}$ the stalk $\oM_{s}$ is either $\NN$ or the trivial monoid $\{0\}$.
\subsubsection{The structure of $\bar{f}^{\flat}$ at generic points}\label{sss:map-at-generic-pt}
If $s = \eta$ is a generic point, then we have a local morphism of monoids $\bar{f}^{\flat}_{\eta}\colon \oM_{\eta} \to Q$.\footnote{A morphism of monoids $h\colon P \to Q$ is {\em local} if $h^{-1}(Q^{\times}) = P^{\times}$.}
If $\oM_{\eta} = \NN$, we call the irreducible component $Z \subset \underline{\cC}$ containing $\eta$ a {\em degenerate component}, and we call the element $e_{Z} := \bar{f}^{\flat}_{\eta}(1) \in Q$ the {\em degeneracy} of $f$ along $Z$.
If $\oM_{\eta} = \{0\}$, we call the irreducible component $Z \subset \underline{\cC}$ containing $\eta$ a {\em non-degenerate component}. The {\em degeneracy} of a non-degenerate component $Z$ is defined to be $e_{Z} = 0 \in Q$.
\subsubsection{The structure of $\bar{f}^{\flat}$ at marked points}\label{sss:map-at-marking}
If $s = p \to \sigma_i$ is a marked point, then we have a local morphism of monoids $\bar{f}^{\flat}_{p}\colon \oM_{p} \to Q\oplus\NN$. Consider the composition
\[
c_p\colon \oM_{p} \stackrel{\bar{f}^{\flat}_{p}}{\longrightarrow} Q\oplus\NN \stackrel{pr_2}{\longrightarrow} \NN.
\]
If $\oM_{p} = \NN$, the morphism $c_{p}$ is determined by $c_{p}(1) \in \NN$. We call $c_p$ or equivalently $c_{p}(1)$ the {\em contact order} at $p$. The marked point $p$ has the {\em trivial contact order} if $c_p(1) = 0$.
Let $\eta$ be the generic point of the irreducible component $Z$ containing $p$, and assume that $Z$ is degenerate. Since the generalization morphism $\chi_{\eta,p}\colon Q\oplus\NN \to Q$ is just the projection to the first factor, we obtain
\[
\bar{f}^{\flat}_{p}\colon \NN \to Q\oplus\NN, \ \ \ 1 \mapsto e_Z + c_{p}(1)\cdot(0,1).
\]
\subsubsection{The structure of $\bar{f}^{\flat}$ at nodal points}\label{sss:nodecombinatorial}
Suppose $s = q \to \underline{\cC}$ is a nodal point contained in the closures of two generic points $\eta_1, \eta_2$ of the two branches meeting at $q$. Using the description of nodes in Section \ref{curvecombinatorial}, we have a local morphism
\[\bar{f}^{\flat}_{q}\colon \oM_{q} \to Q\oplus_{\NN}\NN^2.\]
Let $(1,0), (0,1) \in \NN^2$ correspond to the two local coordinates
around $q$ of the two branches of $\eta_1$ and $\eta_2$ respectively.
If $\oM_{q} = \NN$, then after choosing the labeling appropriately, we may assume that
\begin{equation}\label{nodecombinatorial}
\bar{f}^{\flat}_{q}(1) = e + c_{q}\cdot (1,0)
\end{equation}
for some $c_q \in \NN$ and $e \in Q$. We call $c_{q}$ the {\em contact order} of the node $q$. Observe the following commutative diagram
\begin{equation}\label{nodegeneralization}
\xymatrix{
\oM_{q} \ar[r]^{\bar{f}^{\flat}_q} \ar[d] & \oM_{\cC, q} \ar[d] \\
\oM_{\eta_i} \ar[r]^{\bar{f}^{\flat}_{\eta_i}} & \oM_{\cC, \eta_i}.
}
\end{equation}
where the vertical arrows are the generalization morphisms. Applying the commutativity of the above diagram with $i=1$ to (\ref{nodecombinatorial}), we obtain
\begin{equation}\label{nodedegeneracy}
\bar{f}^{\flat}_{q}(1) = e_{Z_1} + c_{q}\cdot (1,0)
\end{equation}
where $e_{Z_1}$ is the degeneracy of the irreducible component $Z_1$ containing $\eta_1$. Using $i=2$, we have
\begin{equation}\label{nodalequation}
e_{Z_1} + c_{q}\rho_q = e_{Z_2}
\end{equation}
where $e_{Z_2}$ is the degeneracy of the irreducible component $Z_2$ containing $\eta_2$. This is the nodal equation as in \cite[(3.3.2)]{Ch14}.
If $\oM_{q} = \{0\}$, then $\bar{f}^{\flat}_{q}$ is necessarily trivial, and $c_q = 0$.
Since the commutativity of \eqref{nodegeneralization} holds in this
case as well, taking generalization, we obtain
$e_{Z_1} = e_{Z_2} = 0$. In particular, Equation \eqref{nodalequation}
holds for all nodes.
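To spell out the step from \eqref{nodedegeneracy} to \eqref{nodalequation}, a routine verification we include for the reader's convenience: in the local chart \eqref{equ:node-local} the coordinate $\tilde{y}$ is invertible near $\eta_2$ and $\tilde{x}\tilde{y} = t'$, so the generalization morphism $Q\oplus_{\NN}\NN^{2} \to \oM_{\cC,\eta_2} = Q$ sends $(0,1)$ to $0$ and $(1,0)$ to $\rho_q$. Applying it to \eqref{nodedegeneracy} gives
\begin{equation*}
\bar{f}^{\flat}_{\eta_2}(1) = e_{Z_1} + c_{q}\,\rho_q,
\end{equation*}
and the left-hand side is the degeneracy $e_{Z_2}$ by definition.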
\subsubsection{The natural partial ordering}\label{sss:partial-order}
For a twisted curve $\underline{\cC}$ over a geometric point, recall that its {\em dual intersection graph} $\underline{G}$ consists of the set of vertices $V(\underline{G})$ corresponding to irreducible components, the set of edges $E(\underline{G})$ corresponding to nodes, and the set of half-edges $L(\underline{G})$ corresponding to marked points.
Let $q \to \underline{\cC}$ map to a node joining two irreducible components $Z_1, Z_2$ satisfying Equation (\ref{nodalequation}), and let $v_1, v_2 \in V(\underline{G})$ be the corresponding vertices. We introduce the partial ordering $\poleq$ as follows:
\begin{enumerate}
\item If $c_q > 0$, we write $v_1 \poleq v_2$.
\item If $c_q = 0$, we write $v_1 \poleq v_2$ and $v_2 \poleq v_1$, or equivalently $v_1 \sim v_2$.
\end{enumerate}
Then $\poleq$ extends to a partial order on the set $V(\underline{G})$, called the {\em minimal partial order}.
The minimal partial order yields an orientation of $\underline{G}$ as follows. Let $l \in E(\underline{G})$ be the corresponding edge joining vertices $v_1, v_2$ associated to $Z_1, Z_2$ respectively. The edge $l$ is oriented from $v_1$ to $v_2$ if $v_1 \poleq v_2$, and is oriented both ways if $v_1 \sim v_2$.
\subsubsection{The logarithmic combinatorial type}\label{logcombinatorialtype}
We introduce the {\em log combinatorial type} of the log map $(\cC \to S, f\colon \cC \to Y)$ over a geometric point $\underline{S}$ following \cite[3.4]{Ch14} and \cite[4.1.1]{AC14}:
\begin{equation}\label{equ:combinatorial-type}
G = \big(\underline{G}, V(G) = V^{n}(G) \cup V^{d}(G), \poleq, (c_i)_{i\in L(G)}, (c_l)_{l\in E(G)} \big)
\end{equation}
where
\begin{enumerate}[(a)]
\item $\underline{G}$ is the dual intersection graph of the underlying curve $\underline{\cC}$.
\item $V^{n}(G) \cup V^{d}(G)$ is a partition of $V(G)$ where $V^{d}(G)$ consists of vertices of degenerate components.
\item $\poleq$ is the minimal partial order defined in Section \ref{sss:partial-order}.
\item Associate to a leg $i\in L(G)$ the contact order $c_i \in \NN$ of the corresponding marking $\sigma_i$.
\item Associate to an edge $l\in E(G)$ the contact order $c_l \in \NN$ of the corresponding node.
\end{enumerate}
\begin{remark}
Our definition of log combinatorial types is similar to the definition of types in \cite[1.10]{GS13} and \cite[2.3.7]{ACGS17}. Since we work with Deligne--Faltings type targets, we are able to include more combinatorial information such as the partition and partial order on $G$.
\end{remark}
These combinatorial data behave well under generalization:
\begin{proposition}\label{prop:combinatorial-generalization}
Let $f\colon \cC \to Y$ be a log map over an arbitrary log scheme $S$. Then
\begin{enumerate}
\item The contact order $c_i$ along the $i$th marking $\sigma_i$ is constant over each connected component of $S$.
\item Let $W \subset \cC$ be a connected locus of nodes in $\cC$. Then the contact order of the nodes is constant along $W$.
\end{enumerate}
\end{proposition}
\begin{proof}
The proof is identical to the case of \cite[Lemma 3.2.4, 3.2.9]{Ch14}.
\end{proof}
\subsection{Minimality}\label{sss:minimality}
\subsubsection{The monoid}\label{sss:min-monoid}
We recall the construction of minimal monoids in \cite{Ch14, AC14, GS13}. Consider a log map $(\cC \to S, f\colon \cC \to Y)$ over a geometric point $\underline{S}$ with the log combinatorial type $G$. We introduce a variable $\rho_l$ for each edge $l \in E(G)$, and a variable $e_v$ for each vertex $v \in V(G)$. Denote by $h_l$ the relation
$ e_{v'} = e_v + c_l\cdot\rho_l
$
for each edge $l$ with the two ends $v \poleq v'$ and contact order $c_l$. Denote by $h_v$ the following relation
$
e_v = 0
$
for each $v \in V^{n}(G)$. Consider the following abelian group
\[
\mathcal{G} = \big(\bigoplus_{v \in V(G)} \ZZ e_v \bigoplus_{l \in E(G)} \ZZ \rho_l \big) \big/ \langle h_v, h_l \ | \ v\in V^{n}(G), \ l \in E(G) \rangle
\]
Let $\mathcal{G}^{t} \subset \mathcal{G}$ be the torsion subgroup. Consider the following composition
\[
\big(\bigoplus_{v \in V(G)} \NN e_v \bigoplus_{l \in E(G)} \NN \rho_l\big) \to \mathcal{G} \to \mathcal{G}/\mathcal{G}^{t}
\]
Let $\oM(G)$ be the smallest saturated submonoid of $\mathcal{G}/\mathcal{G}^{t}$ containing the image of the above composition. We call $\oM(G)$ the {\em minimal monoid} associated to $G$, or associated to the log map.\footnote{The monoid $\oM(G)$ is called the {\em basic monoid} in \cite{GS13}.}
\begin{proposition}\label{prop:minimality}
There is a canonical map of monoids $\phi\colon \oM(G) \to \oM_S$ induced by sending $e_v$ to the degeneracy of the component associated to $v$, and sending $\rho_l$ to the element $\rho_{q}$ as in Equation (\ref{node-characteristic}) associated to $l$. In particular, the monoid $\oM(G)$ is fine, saturated, and sharp.
\end{proposition}
\begin{proof}
This follows from the same proof of \cite[Proposition 3.4.2]{Ch14}.
\end{proof}
For later use, we observe the following.
\begin{corollary}\label{cor:separate-non-distinct-log}
There is a canonical splitting $\oM(G) = \oM(G)' \oplus \NN^{d}$ where $d$ is the number of edges in $E(G)$ whose contact orders are trivial. In particular, the image of the element $e_{v}$ is contained in $\oM(G)'$ for all $v \in V(G)$.
\end{corollary}
\begin{proof}
Note that when $c_{l} = 0$, the element $\rho_{l}$ is not involved in the relation $h_{l}$. The collection of such $\rho_l$ generates the factor $\NN^{d}$.
\end{proof}
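For orientation, consider the following simple example, which follows directly from the definitions above: let $\underline{G}$ have two vertices $v_1 \in V^{n}(G)$ and $v_2 \in V^{d}(G)$ joined by a single edge $l$ of contact order $c_{l} = c > 0$. The relations are $e_{v_1} = 0$ and $e_{v_2} = e_{v_1} + c\,\rho_{l}$, so
\begin{equation*}
\mathcal{G} \cong \ZZ\rho_{l}
\end{equation*}
is torsion-free, and the image of $\NN e_{v_1} \oplus \NN e_{v_2} \oplus \NN \rho_{l}$ is the submonoid generated by $c\,\rho_{l}$ and $\rho_{l}$. Hence $\oM(G) = \NN\rho_{l} \cong \NN$, which is already saturated, and the splitting of Corollary \ref{cor:separate-non-distinct-log} has $d = 0$.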
\subsubsection{Minimal objects}
As in \cite{GS13, Ch14, AC14}, we define the minimal objects using the canonical morphism $\phi$.
\begin{definition}
A log map $(\cC \to S, f\colon \cC \to Y)$ over $S$ is called
\emph{minimal}\footnote{The terminology used in \cite{GS13} is {\em
basic}.} if for each of its geometric fibers, the induced
canonical morphism as in Proposition \ref{prop:minimality} is an
isomorphism.
\end{definition}
The definition is justified by the openness of minimality.
\begin{proposition}\label{prop:minimal-open}
For any family of log maps $(\cC \to S, f\colon \cC \to Y)$ over a log scheme $S$, if the fiber $f_{\bar{s}}\colon \cC_{\bar{s}} \to Y$ over a geometric point $\bar{s} \to S$ is minimal, then there is an \'etale neighborhood $U \to S$ of $\bar{s}$ such that the fiber $f_{U}\colon \cC_{U} \to Y$ is minimal.
\end{proposition}
\begin{proof}
This follows from the same proof of \cite[Proposition 3.5.2]{Ch14} and \cite[Proposition 1.22]{GS13}.
\end{proof}
Minimal objects have the following universal property which is the key to the construction of the moduli stack.
\begin{proposition}\label{prop:minimal-universal}
For any log map $f\colon \cC \to Y$ over a log scheme $S$, there exists a minimal log map $f_m\colon \cC_m \to Y$ over $S_m$ and a morphism of log schemes $\Phi\colon S \to S_m$ such that
\begin{enumerate}
\item The underlying morphism $\underline{\Phi}$ is an isomorphism.
\item $f\colon \cC \to Y$ is the pull-back of $f_m\colon \cC_m \to Y$ along $\Phi$.
\end{enumerate}
Furthermore, the pair $(f_m, \Phi)$ is unique up to a unique isomorphism.
\end{proposition}
\begin{proof}
The proof is identical to the situation of log maps with no orbifold twists on the source curves. We refer to \cite[Proposition 1.24]{GS13} and \cite[Proposition 4.1.1]{Ch14} for the details.
\end{proof}
\subsubsection{Finiteness of automorphisms}\label{sss:finite-auto}
Let $f\colon \cC \to Y$ be a log map over $S$ with $\underline{S}$ a geometric point. An {\em automorphism} of a stable log map is a pair $(\psi\colon \cC \to \cC, \theta\colon S \to S)$ of compatible automorphisms of log schemes such that $f\circ\psi = f$. Denote by $\Aut(f)$ the automorphism group of the log map $f$, and by $\Aut(\underline{f})$ the automorphism group of the corresponding underlying map. We have the following property:
\begin{proposition}\label{prop:finite-auto}
Suppose the log map $f\colon \cC \to Y$ over $S$ is stable and minimal. Then the natural group morphism $\Aut(f) \to \Aut(\underline{f})$ is injective. In particular, the group $\Aut(f)$ is finite.
\end{proposition}
\begin{proof}
The proof is identical to the case of \cite[Proposition 1.25]{GS13} and \cite[Lemma 3.8.3]{Ch14}.
\end{proof}
\subsection{The algebraicity of the stacks of twisted log maps}
\subsubsection{The set-up and statement}\label{sss:log-moduli-set-up}
Fix a separated log Deligne--Mumford stack $Y$ as the target with
$\cM_Y$ of Deligne--Faltings type of rank one.
Consider the {\em discrete data}
\begin{equation}\label{discretedata}
\beta = (g,n, {\bf c} = \{c_i\}_{i=1}^{n}, A)
\end{equation}
for twisted log maps in $Y$ where $g$ is the genus, $n$ is the number of markings,
$c_i$ is the contact order of the $i$-th marking, and $A \in H_2(\underline{Y})$ is a curve class.
Let $\beta' = (g,n, {\bf c})$ be the {\em reduced discrete data} obtained by removing the curve class, and $\underline{\beta} = (g,n, A)$ the {\em underlying discrete data} obtained by removing the contact orders.
Denote by $\scrM(Y, \beta)$ the category of stable log maps to $Y$ with the discrete data $\beta$ fibered over the category of log schemes, and $\scrM(\underline{Y}, \underline{\beta})$ the stack of usual twisted stable maps to $\underline{Y}$. For our purposes, we view $\scrM(\underline{Y}, \underline{\beta})$ as a log stack equipped with the canonical log structure given by its universal curves. Composing with the forgetful morphism $Y \to \underline{Y}$, we obtain a canonical morphism
\begin{equation}\label{equ:forgetlog}
\scrM(Y,\beta) \to \scrM(\underline{Y}, \underline{\beta}).
\end{equation}
We first establish the algebraicity following the method of \cite{AW, Wi16}.
\begin{theorem}\label{thm:algebraicity}
The morphism \eqref{equ:forgetlog} is representable by log Deligne--Mumford stacks locally of finite type.
\end{theorem}
\subsubsection{Reduction to the universal stack}\label{sss:universal-stack}
Following the universal target strategy of Abramovich--Wise \cite{AW}, consider the canonical strict morphism $Y \to \cA$. For any log map $f\colon \cC \to Y$ over $W$, the composition $\cC \to Y \to \cA$ is a log map to $\cA$ over $W$.
Denote by $\fM(\cA,\beta')$ the category of log maps to $\cA$ with the reduced discrete data $\beta'$. The above composition defines a canonical morphism
\begin{equation}\label{equ:forgetgeometry}
\scrM(Y,\beta) \to \fM(\cA,\beta').
\end{equation}
On the other hand, consider the stack $\fM_{g,n}(\underline{\cA})$
parameterizing (not necessarily representable) usual maps to
$\underline{\cA}$ from genus $g$, $n$-marked log twisted curves.
It is an algebraic stack locally of finite type by \cite[Theorem
1.2]{HR14}.
We further view $\fM_{g,n}(\underline{\cA})$ as a log stack equipped with the
canonical log structure induced by its universal twisted curve.
We have:
\begin{proposition}\label{prop:uni-min-stack}
The canonical morphism
\[
\fM(\cA,\beta') \to \fM_{g,n}(\underline{\cA})
\]
induced by the forgetful morphism $\cA \to \underline{\cA}$ is representable by log Deligne--Mumford stacks locally of finite type. In particular, the fibered category $\fM(\cA,\beta')$ is representable by log algebraic stacks locally of finite type.
\end{proposition}
\begin{proof}[Proof of Theorem \ref{thm:algebraicity}]
The underlying map $\underline{Y} \to \underline{\cA}$ of $Y \to \cA$ induces a strict morphism of log stacks
\[
\scrM(\underline{Y}, \underline{\beta}) \to \fM_{g,n}(\underline{\cA}),
\]
where both stacks are equipped with the canonical log structures from their universal curves. The two morphisms (\ref{equ:forgetlog}) and (\ref{equ:forgetgeometry}) induce
\[
\scrM(Y,\beta) \to \scrM(\underline{Y},\underline{\beta})\times_{\fM_{g,n}(\underline{\cA})}\fM(\cA,\beta'),
\]
where the fiber product is in the fine and saturated category. The above morphism is an isomorphism. Indeed, the datum of a log map to $Y$ is equivalent to the datum of an underlying map to $\underline{Y}$ and a log map to $\cA$ with compatible compositions to $\underline{\cA}$. Thus, the algebraicity of Theorem \ref{thm:algebraicity} follows from Proposition \ref{prop:uni-min-stack}. The Deligne--Mumford property is a consequence of Proposition \ref{prop:finite-auto}.
\end{proof}
\subsubsection{Proof of Proposition \ref{prop:uni-min-stack}}
The proof is essentially the one in \cite[Corollary 1.1.1]{Wi16}. Here we record the details for completeness.
Let $U \to \fM_{g,n}(\underline{\cA})$ be a strict morphism from a log scheme $U$. We will show that the product
\[
\fU := \fM(\cA, \beta')\times_{\fM_{g,n}(\underline{\cA})} U
\]
is a log algebraic stack locally of finite type.
Denote by $V \subset \Log_U$ the open substack of Olsson's log stack that parameterizes morphisms of fine and saturated log structures $\cM_U \to \cM'$, see \cite{Ol03}. Thus $V$ is a log algebraic stack locally of finite type with the tautological morphism $V \to U$.
The composition $V \to U \to \fM_{g,n}(\underline{\cA})$ defines a family of
underlying twisted maps over $\underline{V}$, denoted by
$\underline{f}\colon \underline{\cC} \to \underline{\cA}$.
Since $\fM_{g,n}(\underline{\cA})$ carries the canonical log structure from
its universal curve, pulling back the universal curve with its
canonical log structure, we obtain a log curve $\pi\colon \cC \to V$.
Write $\cM:= \underline{f}^*\cM_{\cA}$. Observe that to define a log map $f\colon \cC \to \cA$ compatible with the underlying morphism $\underline{f}$ is equivalent to defining a morphism of log structures $f^{\flat}\colon \cM \to \cM_{\cC}$. Consider the stack $\Hom_{\Sch/\underline{\cC}}(\cM, \cM_{\cC})$ over $\underline{\cC}$ which to each $\underline{\cC}$-scheme $\underline{T}$, associates the category of morphisms $\cM|_{\underline{T}} \to \cM_{\cC}|_{\underline{T}}$, where $*|_{\underline{T}}$ denotes the pull-back of $*$ to $\underline{T}$. By \cite[Proposition 2.9]{GS13} and \cite[Theorem 2.2]{Wi16}, the morphism $\Hom_{\Sch/\underline{\cC}}(\cM, \cM_{\cC}) \to \underline{\cC}$ is representable by algebraic spaces, and is quasi-compact, quasi-separated, and locally of finite presentation.
Consider the push-forward
$\underline{\fV} := \pi_*\Hom_{\Sch/\underline{\cC}}(\cM, \cM_{\cC})$ along the
underlying morphism of $\pi\colon \cC \to V$.
By \cite[Theorem 1.2]{HR14}, the stack $\underline{\fV}$ is an algebraic
stack over $\underline{V}$ locally of finite type.
Denote by $\fV \to V$ the strict morphism with underlying map
$\underline{\fV} \to \underline{V}$.
For each strict morphism $T \to \fV$, by construction we obtain a log
curve $\cC_T \to T$ by pulling back $\cC \to V$, and a morphism of log
structures $\cM|_{\cC_T} \to \cM_{\cC_T}$, hence a log map
$f_T\colon \cC_T \to \cA$.
By Propositions \ref{prop:minimal-open} and
\ref{prop:minimal-universal}, $\fU \subset \fV$ is the open substack
parameterizing minimal objects.
This completes the proof of Proposition \ref{prop:uni-min-stack}.
\subsubsection{Log smoothness of the universal stack}
The following log smoothness result of \cite[Proposition 3.2]{AW} will be useful in our later construction.
\begin{proposition}\label{prop:uni-stack-smooth}
The tautological morphism
\[
\fM(\cA,\beta') \to \fM^{\mathrm{tw}}_{g,n}
\]
obtained by taking the source log curves is log \'etale. In particular, the stack $\fM(\cA,\beta')$ is log smooth and equi-dimensional.
\end{proposition}
\begin{proof}
Observe that $\cA$ is log \'etale over $\spec \mathbb C$. Furthermore, we may view $\fM^{\mathrm{tw}}_{g,n}$ as the stack of pre-stable log maps to $\spec \mathbb C$.
The result follows by the same proof as \cite[Proposition 3.2]{AW}, since the orbifold structures on the source curves play no role.
\end{proof}
\subsection{Relative boundedness of twisted log maps}
The boundedness of stable log maps without orbifold structures has been proved in \cite{Ch14, AC14, GS13} under certain assumptions. The general situation is proved in \cite{ACMW17} by reducing to the case of \cite{AC14}. For our purposes, we will only consider the Deligne--Faltings case of rank one in the orbifold situation.
Consider the forgetful morphism of log algebraic stacks
\[
{\bf F}\colon \fM(Y,\beta) \to \fM(\underline{Y},\underline{\beta})
\]
where $\fM(\underline{Y},\underline{\beta})$ has the canonical log structure from its universal curve. For each strict morphism $W \to \fM(\underline{Y},\underline{\beta})$, consider the projection
$${\bf F}_W\colon \fM(Y,\beta)_W := \fM(Y,\beta) \times_{\fM(\underline{Y},\underline{\beta})} W \to W.$$
\begin{definition}\label{def:combinatorially-finite}
For a strict morphism $W \to \fM(\underline{Y},\underline{\beta})$, the discrete data $\beta$ is called {\em combinatorially finite over $W$} if the collection of log combinatorial types of log maps over geometric points of $\fM(Y,\beta)_W$ is finite.
\end{definition}
\begin{remark}
Observe that if $W = \scrM(\underline{Y}, \underline{\beta})$ then $\fM(Y,\beta)_W = \scrM(Y,\beta)$. Thus the above definition is compatible with the combinatorial finiteness of \cite[Definition 3.3]{GS13}.
\end{remark}
We provide a relative boundedness result for later use.
\begin{proposition}\label{prop:boundedness}
Suppose $\beta$ is combinatorially finite over $W$ for a strict morphism $W \to \fM(\underline{Y},\underline\beta)$. Then ${\bf F}_W$ is of finite type.
\end{proposition}
\begin{proof}
The proof of the statement is similar to the case of \cite[Section 5.4]{Ch14} and \cite[Section 3.2]{GS13}. For completeness, we give the details with necessary modifications to fit our situation.
First observe that the statement is local on $W$. Thus we may assume $W$ is a scheme of finite type, and we are allowed to shrink $W$ if needed. Since the morphism ${\bf F}$ is locally of finite type, it suffices to show that the underlying topological space of $\fM(Y,\beta)_W$ is quasi-compact.
\step{1}{Reduction to combinatorially constant strata}
As $\beta$ is combinatorially finite over $W$, $\fM(Y,\beta)_W$ admits finitely many strata such that the log maps over geometric points of each stratum have the same log combinatorial type. It remains to show that the underlying topological space of each such stratum is quasi-compact.
\step{2}{Construct combinatorially constant underlying strata}
We shrink and stratify $W$ such that over each stratum:
\begin{enumerate}
\item The dual graphs of the underlying curves are constant, say $\underline{G}$.
\item The partition $V(\underline{G}) = V^{n}(\underline{G})\cup V^{d}(\underline{G})$, where $V^{d}(\underline{G})$ consists of the vertices whose corresponding components have images contained in the locus of $Y$ with non-trivial log structure, is constant.
\end{enumerate}
Replace $W$ by such a stratum with the reduced scheme structure. Denote by $\underline{G}$ and $V(\underline{G}) = V^{n}(\underline{G})\cup V^{d}(\underline{G})$ the corresponding dual graph and partition over $W$. Let $G$ be a log combinatorial type of the fibers over $\fM(Y,\beta)_W$.
\step{3}{Construct source log curve}
Let $Q_1 = \NN^n$ where $n$ is the number of edges in $E(G)$.
Let $Q_2 = \oM(G)$ and consider the canonical map
$\psi\colon Q_1 \to Q_2$ induced by the edge elements $\rho_l$ as in
Section~\ref{sss:min-monoid}.
It induces a morphism of log stacks $\cA_{Q_2} \to \cA_{Q_1}$.
Consider the fiber product $S = \cA_{Q_2}\times_{\cA_{Q_1}}W$ where
$W \to \cA_{Q_1}$ is the canonical strict morphism.
Pulling back the families over $W$, we obtain a log curve
$\pi\colon \cC \to S$ and a usual pre-stable map
$\underline{f}\colon \underline{\cC} \to \underline{Y}$.
\step{4}{The morphism of characteristic sheaves}
Denote by $\cM' := \underline{f}^{*}\cM_Y$.
The log combinatorial type $G$ determines a unique morphism of
characteristic sheaves $\bar{f}^{\flat}\colon \oM' \to \oM_{\cC}$ by
the descriptions in Sections \ref{sss:map-at-generic-pt},
\ref{sss:map-at-marking}, and \ref{sss:nodecombinatorial}.
Lifting $\underline{f}$ to a log map with the combinatorial type $G$ is
equivalent to constructing a dashed arrow making the following diagram
commutative
\[
\xymatrix{
\cM' \ar@{-->}[rr]^{f^{\flat}} \ar[d]_{q_{\cM'}} && \cM_{\cC} \ar[d]^{q_{\cM_{\cC}}} \\
\oM' \ar[rr]^{\bar{f}^{\flat}} && \oM_{\cC}
}
\]
where the two vertical arrows are quotients by $\cO^*_{\underline{\cC}}$.
\step{5}{Parameterize morphisms of $\cO^*$-torsors}
We first parameterize liftings $f^{\flat}$ of $\bar{f}^{\flat}$ as
morphisms of sheaves of monoids compatible with the $\cO^*$-action.
Since $Y$ is of Deligne--Faltings type of rank one, there is a global
morphism $h\colon \NN \to \oM'$ that locally lifts to a chart.
Consider the two $\cO^*$-torsors $\cT_Y = q_{\cM'}^{-1}(h(1))$ and
$\cT_{\cC} = q_{\cM_{\cC}}^{-1}(\bar{f}^{\flat}\circ h(1))$.
Observe that for any lifting $f^{\flat}$, its restriction
$f^{\flat}|_{\cT_Y}$ factors through $\cT_{\cC}$, and uniquely
determines $f^{\flat}$.
Consider the sheaf of morphisms of $\cO^*$-torsors
$I := \Isom_{\cC}(\cT_Y, \cT_{\cC})$ which is again an $\cO^*$-torsor.
Let $L$ be the corresponding line bundle.
Using Grothendieck duality for Deligne--Mumford stacks (see for
example \cite{Ni08}), the proof of \cite[Proposition 2.2]{CL12} shows
that the push-forward $\underline{\pi}_*L $ is represented by a separated
scheme of finite type over $S$.
The push-forward $\underline{\pi}_{*}I$ parameterizes sections of $L$
avoiding the zero section of $L$.
Thus it is an open subscheme of $\underline{\pi}_*L$ over $S$.
In particular, it is of finite type.
\step{6}{Construct logarithmic lifting}
Note that $\underline{\pi}_{*}I$ associates, to each strict morphism $T \to S$, the category of isomorphisms $\cT_Y|_{\cC_T} \to \cT_{\cC}|_{\cC_T}$ where $\cC_T = \cC\times_S T$. Recall from the above discussion that such an isomorphism is equivalent to a morphism of sheaves of monoids $f^{\flat}_T\colon \cM'_{T} \to \cM_{\cC_T}$ compatible with the $\cO^*$-action, where $\cM'_{T}$ is the pull-back of $\cM'$ along $T \to S$. Such an $f^{\flat}_T$ is a morphism of log structures if it is compatible with the structure morphisms. The same argument as in \cite[Lemma 5.4.3]{Ch14} implies that the locus of $\underline{\pi}_{*}I$ with this compatibility forms a closed subset $V$. For our purposes, we may take $V$ with the reduced scheme structure, hence it is of finite type.
\smallskip
Finally observe that the universal morphism
$f^{\flat}_V\colon \cM'_V \to \cM_{\cC_V}$ over $V$ defines a family of
minimal log maps $f_V\colon \cC_V \to Y$ over $V$ with the constant
log combinatorial type $G$.
The tautological morphism $V \to \fM(Y,\beta)_W$ surjects onto the
locus with the log combinatorial type $G$.
The boundedness follows since $V$ is of finite type.
\end{proof}
\subsection{The weak valuative criterion}
We fix a discrete valuation ring $R$ with maximal ideal $\fm$ and residue field $R/\fm$. Let $K$ be the quotient field of $R$. We have the following version of the valuative criterion, which is necessary for properness.
\begin{proposition}\label{prop:log-map-valuative}
Consider a commutative diagram of solid arrows of underlying stacks
\begin{equation}\label{diag:valuative}
\xymatrix{
\spec K \ar[rr] \ar[d] && \underline{\fM(Y,\beta)} \ar[d]^{\underline{\bf F}} \\
\spec R \ar[rr] \ar@{-->}[rru] && \underline{\fM(\underline{Y},\underline{\beta})}
}
\end{equation}
Possibly after replacing $R$ by a finite extension of DVRs, and $K$ by
the induced extension of the quotient field, there exists a dashed
arrow making the above diagram commutative.
Furthermore, such a dashed arrow is unique up to a unique isomorphism.
\end{proposition}
\begin{proof}
The statement can be proved identically as in \cite{GS13, Ch14}. As the proof is quite long, we only sketch the idea, mainly following \cite{GS13}, and refer to the corresponding sections of \cite{GS13, Ch14} for details.
Denote by $f_{K}\colon \cC_K \to Y$ over $\eta$ the minimal log map induced
by the top arrow of (\ref{diag:valuative}) where
$\underline{\eta} = \spec K$.
Similarly, let $\underline{f}_R\colon \underline{\cC}_R \to \underline{Y}$ over $\underline{S} = \spec R$
be the usual pre-stable map induced by the bottom arrow.
The commutativity of the square of the solid arrows is equivalent to
the condition that the restriction $\underline{f}_R|_{\underline{\eta}}$ is the
underlying map $\underline{f}_K$.
To construct the dashed arrow, we need to extend the minimal log map
$f_{K}$ to the whole $\underline{S}$ with the underlying map given by
$\underline{f}_R$.
We need to show that possibly after a finite extension, such an
extension exists and is unique.
The first step is to extend the log combinatorial type to the closed
fiber over $\spec R/\fm$.
Following Section \ref{logcombinatorialtype}, the dual graph $\underline{G}$
and the partition $V^{n}(\underline{G})\cup V^{d}(\underline{G})$ are determined by
the closed fiber of the underlying map $\underline{f}_{R}$.
The contact orders at the markings are determined by the discrete data
$\beta$.
It suffices to determine the contact order at each node of the closed
fiber, which also determines the partial order $\poleq$ by the
discussion in Section \ref{sss:partial-order}.
The contact orders at the nodes can be determined \'etale locally around
each node by the same argument as in \cite[Section 6.2]{Ch14} or
\cite[Section 4.1]{GS13}.
This defines a unique log combinatorial type $G$.
The second step is to construct a unique log scheme $S$ with underlying scheme $\spec R$ extending the log scheme $\eta$ such that the fiber satisfies $\oM_{S}|_{\spec R/\fm} = \oM(G)$. This step can be carried out identically as in \cite[Section 4.2]{GS13}, since it only uses the complement of the markings and nodes, and the orbifold structures play no role. We then obtain a unique morphism $S_R \to S$, where $\cC_R \to S_R$ is the canonical log curve associated to the underlying curve $\underline{\cC}_R \to \underline{S}$. Thus, pulling back the canonical log curve, we obtain a unique log curve $\cC \to S$.
Finally, to extend $f_{K}$ to the closed fiber, one needs to construct a morphism $f_R\colon \cC \to Y$ lifting $\underline{f}_R$. This can be done using the same argument as in \cite[Section 4.3]{GS13}, or a similar argument as in \cite[Section 6.3]{Ch14}, by first constructing the log map \'etale locally on $\cC$ and then gluing using the canonicity of the construction.
\end{proof}
\section{Stable log maps with uniform maximal degeneracy}
\label{sec:UMD}
In this section, we introduce a configuration of log structures which
is the key to the construction of the reduced perfect obstruction
theory, and subsequently Witten's $r$-spin class.
We again fix the target $Y$ with the log structure $\cM_Y$ of rank one Deligne--Faltings type.
\subsection{Uniformed maximal degeneracy}
\subsubsection{Maximal degeneracies}\label{sss:max-deg}
Consider a log map $f\colon \cC \to Y$ over $S$ with $\underline{S}$ a geometric point. Denote by $G$ the corresponding log combinatorial type of $f$, and $\oM(G)$ the minimal monoid. Let $\phi\colon\oM(G) \to \oM_S$ be the canonical morphism as in Proposition \ref{prop:minimality}.
Consider the natural partial order $\poleq_{\oM_S}$ on $\oM_S$ such
that $e_1 \poleq_{\oM_S} e_2$ if and only if $(e_2 - e_1) \in \oM_S$.
The partial order $\poleq_{\oM_S}$ is a refinement of the partial order $\poleq$ of $G$
in the sense that $v_1 \poleq v_2$ in $V(G)$ implies
$\phi(e_{v_1}) \poleq_{\oM_S} \phi(e_{v_2})$ in $\oM_S$.
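As a toy illustration (ours, not part of the original argument): take $\oM_S = \NN^2$. Then
```latex
% Toy illustration (ours): the partial order \poleq_{\oM_S} on \NN^2.
% One comparison holds because the difference lies in the monoid, while the
% standard generators are incomparable: neither difference lies in \NN^2.
\[
(1,1) \poleq_{\oM_S} (2,1) \quad \text{since } (2,1) - (1,1) = (1,0) \in \NN^2,
\]
\[
(1,0) \not\poleq_{\oM_S} (0,1) \quad \text{and} \quad (0,1) \not\poleq_{\oM_S} (1,0).
\]
```
In particular, for $\oM_S$ of rank at least two, maximal elements of a finite subset need not be comparable.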
\begin{definition}
A degeneracy $\phi(e_v) \in \oM_S$ is called \emph{maximal} if
$\phi(e_v)$ is maximal in the set of all degeneracies
under $\poleq_{\oM_S}$.
The corresponding vertex $v \in V(G)$ is called a \emph{maximally
degenerate vertex} of $f$.
\end{definition}
As $\poleq_{\oM_S}$ is a partial order, there could be more than one maximal degeneracy in $\oM_S$. On the other hand, different vertices are allowed to have the same degeneracy in $\oM_S$.
\begin{definition}\label{def:uniform-deg}
The log map $f\colon \cC \to Y$ over $S$ is said to have \emph{uniform maximal degeneracy}
if the set of degeneracies has a supremum under $\poleq_{\oM_S}$.
A family of log maps is said to have \emph{uniform maximal
degeneracy} if each geometric fiber has uniform maximal
degeneracy.
\end{definition}
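For instance (a toy example of ours, not from the original text): over a log point with $\oM_S = \NN$, any two elements are comparable, so every log map over such a point automatically has uniform maximal degeneracy:
```latex
% Toy example (ours): \oM_S = \NN is totally ordered under \poleq_{\oM_S}, since
% for a, b \in \NN either b - a \in \NN or a - b \in \NN. Hence the finite set of
% degeneracies has a maximum, which is in particular its supremum:
\[
\sup \{\phi(e_v) \mid v \in V(G)\} \;=\; \max \{\phi(e_v) \mid v \in V(G)\} \;\in\; \NN .
\]
```
By contrast, when $\oM_S$ has higher rank the set of degeneracies may fail to have a supremum, so uniform maximal degeneracy is a genuine condition.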
The above definition for families is justified by the following.
\begin{proposition}\label{prop:um-open}
For any family of log maps $f\colon \cC \to Y$ over a log scheme $S$, if the fiber $f_{\bar{s}}\colon\cC_{\bar{s}} \to Y$ over a geometric point $s \to S$ has uniform maximal degeneracy, then there is an \'etale neighborhood $U \to S$ of $s$ such that the pull-back family $f_{U}\colon \cC_{U} \to Y$ over $U$ has uniform maximal degeneracy.
\end{proposition}
Proposition \ref{prop:um-open} follows immediately from Lemmas \ref{lem:generize-degeneracy} and \ref{lem:generize-po} below.
\subsubsection{Generalization of degeneracies and partial orders}
Consider a pre-stable log map $f\colon \cC \to Y$ over a log scheme
$S$ together with a chart $h\colon \oM_{S,s} \to \cM_S$ where
$s \to S$ is a geometric point.
Here $f$ does not necessarily have uniform maximal degeneracy.
Using the composition $\oM_{S,s} \to \cM_S \to \oM_S$, any
$e \in \oM_{S,s}$ is viewed as a section of $\oM_S$.
Let $e_{t} \in \oM_{S,t}$ be the fiber of $e$ over $t \in S$.
Denote by $G$ the log combinatorial type of $f_s$.
\begin{lemma}\label{lem:generize-degeneracy}
Notations as above, suppose $e \in \oM_{S,s}$ is the degeneracy of $v \in V(G)$. Then there is an \'etale neighborhood $U \to S$ of $s$ such that for any geometric point $t \in U$, the fiber $e_t \in \oM_{S,t}$ is a degeneracy.
\end{lemma}
\begin{proof}
Shrinking $S$ if necessary, we may choose a section $\sigma\colon \underline{S} \to \underline{\cC}$ such that $\sigma(\underline{S})$ is contained in the smooth non-marked locus of $\cC \to S$, and intersects the component of $\cC_s$ corresponding to $v$. Consider the pull-back morphism
\[
\sigma^*(\bar{f}^{\flat})\colon (f \circ \sigma)^*\oM_Y \to \sigma^*\oM_{\cC} = \oM_S.
\]
The equality on the right hand side follows from the assumption that $\sigma(\underline{S})$ avoids all nodes and markings.
Since $\oM_{Y}$ is of Deligne--Faltings type of rank one, we may choose a morphism $\NN \to \oM_Y$ which locally lifts to a chart. Denote again by $1 \in \oM_Y$ the image of $1 \in \NN$ via this morphism. By the discussion in Section \ref{sss:map-at-generic-pt}, the fiber of the image $\sigma^*(\bar{f}^{\flat})(1)_t \in \oM_{S,t}$ over each geometric point $t \in S$ is the degeneracy of the component of $\cC_t$ intersecting $\sigma(\underline{S})$. In particular, we have $\sigma^*(\bar{f}^{\flat})(1)_s = e$.
\end{proof}
Conversely, every degeneracy of a nearby fiber is the generalization of some degeneracy from the central fiber:
\begin{corollary}
Notations as above, there is an \'etale neighborhood $U \to S$ of $s$ such that for any geometric point $t \in U$ and any degeneracy $e' \in \oM_{S,t}$, there is a degeneracy $e \in \oM_{S,s}$ such that $e_t = e'$.
\end{corollary}
\begin{proof}
With notations as in the proof of Lemma \ref{lem:generize-degeneracy}, we
may further shrink $S$ and choose a finite set of extra markings
$\{\sigma_i \to \underline{\cC}\}$ avoiding nodes and the original
markings, whose union $\cup_i \sigma_i(\underline{S})$ intersects each
irreducible component of each geometric fiber of $\cC \to S$.
Applying the argument of Lemma \ref{lem:generize-degeneracy} to each
section $\sigma_i$, every degeneracy over a nearby geometric point $t$
arises as the fiber $e_t$ of some degeneracy $e \in \oM_{S,s}$.
\end{proof}
The partial order $\poleq_{\oM_{S,t}}$ is well-behaved under generalization:
\begin{lemma}\label{lem:generize-po}
Notations as above, consider a pair of elements $e_1, e_2 \in \oM_{S,s}$ with $e_1 \poleq_{\oM_{S,s}} e_2$. Then we have $e_{1,t} \poleq_{\oM_{S,t}} e_{2,t}$ in $\oM_{S,t}$ for any geometric point $t \to S$.
\end{lemma}
\begin{proof}
By assumption, we have $(e_2 - e_1) \in \oM_{S,s}$, hence
$$(e_2 - e_1)_t = (e_{2,t} - e_{1,t}) \in \oM_{S,t}.$$
\end{proof}
\subsubsection{The category of log maps with uniform maximal degeneracies}\label{sss:um-category}
Let $Y$ be a log stack of rank one Deligne--Faltings type.
We introduce the fibered category $\fU(Y,\beta')$ of pre-stable log
maps to $Y$ with uniform maximal degeneracy and reduced discrete data
$\beta'$ over the category of fine and saturated log schemes.
If furthermore $Y$ is a separated log Deligne--Mumford stack, denote
by $\scrU(Y,\beta) \subset \fU(Y,\beta')$ the sub-category of stable
log maps with discrete data $\beta$ as in \eqref{discretedata}.
By the universality as in Proposition \ref{prop:minimal-universal}, there are tautological morphisms of fibered categories given by the inclusions of subcategories:
\begin{equation}\label{equ:forget-max}
\scrU(Y,\beta) \to \scrM(Y,\beta) \ \ \mbox{and} \ \ \fU(Y,\beta') \to \fM(Y,\beta').
\end{equation}
We next introduce the minimality of the subcategory $\fU(Y,\beta')$.
\subsection{Minimality with uniform maximal degeneracy}
\subsubsection{Log combinatorial type with uniform maximal degeneracy}\label{sss:log-type-um}
Let $f\colon \cC \to Y$ be a pre-stable log map over $S$ with uniform maximal degeneracy. First assume that $\underline{S}$ is a geometric point.
Let $G$ be the log combinatorial type of $f$, and $\phi\colon \oM(G) \to \oM_S$ be the canonical morphism. Denote by $V_{\max} \subset V(G)$ the subset of vertices having the maximal degeneracy in $\oM_S$. We call $(G, V_{\max})$ the {\em log combinatorial type with uniform maximal degeneracy}.
\subsubsection{Minimal monoids with uniform maximal degeneracy}\label{sss:um-monoid}
Consider the torsion-free abelian group
\[
\big( \oM(G)^{gp}\big/ \sim \big)^{tf}
\]
where $\sim$ is given by the relations $(e_{v_1} - e_{v_2}) = 0$ for any $v_1, v_2 \in V_{\max}$. By abuse of notation, we write $e_v$ for the image of the degeneracy of the vertex $v$ in $\big( \oM(G)^{gp}\big/ \sim \big)^{tf}$. Thus, the degeneracies of all $v \in V_{\max}$ in $\big( \oM(G)^{gp}\big/ \sim \big)^{tf}$ coincide; we denote this common image by $e_{\max}$.
Let $\oM(G,V_{\max})$ be the saturated submonoid in $\big( \oM(G)^{gp}\big/ \sim \big)^{tf}$ generated by
\begin{enumerate}
\item the image of $\oM(G) \to \big( \oM(G)^{gp}\big/ \sim \big)^{tf}$, and
\item the elements $(e_{\max} - e_v)$ for any $v \in V(G)$.
\end{enumerate}
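As a toy computation (ours; purely monoid-theoretic, ignoring the geometric origin of $(G,V_{\max})$ and assuming $\oM(G)$ is freely generated by the degeneracies with no edge generators): let $\oM(G) = \NN e_{v_1} \oplus \NN e_{v_2}$ and $V_{\max} = \{v_1\}$, so $e_{\max} = e_{v_1}$. Then $\sim$ imposes no relation, and $\oM(G,V_{\max})$ is the saturated submonoid of $\oM(G)^{gp}$ generated by $e_{v_1}$, $e_{v_2}$, and $e_{v_1} - e_{v_2}$:
```latex
% Toy computation (ours): since e_{v_1} = e_{v_2} + (e_{v_1} - e_{v_2}), the monoid
% is freely generated by e_{v_2} and e_{v_1} - e_{v_2}, and is already saturated:
\[
\oM(G,V_{\max}) \;=\; \NN\, e_{v_2} \,\oplus\, \NN\,(e_{v_1} - e_{v_2}) \;\cong\; \NN^2 .
\]
```
In $\oM(G,V_{\max})$ we have $e_{v_2} \poleq e_{v_1}$: the added generators $(e_{\max} - e_v)$ force $e_{\max}$ to dominate all other degeneracies.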
By the above construction, we obtain a natural morphism of monoids $\oM(G) \to \oM(G,V_{\max})$. On the other hand, we have a canonical morphism of monoids $\phi\colon \oM(G) \to \oM_S$ by Proposition \ref{prop:minimality}. Putting these together, we observe the following canonical factorization:
\begin{proposition}\label{prop:minimize-umd}
There is a canonical morphism of monoids
\begin{equation}\label{equ:minimize-umd}
\phi_{\max}\colon \oM(G,V_{\max}) \to \oM_S
\end{equation}
such that the morphism $\phi\colon \oM(G) \to \oM_S$ factors through $\phi_{\max}$.
\end{proposition}
\begin{corollary}\label{cor:separate-non-distinct-log-U}
There is a canonical splitting
\begin{equation*}
\oM(G, V_{\max}) = \oM(G,V_{\max})' \oplus \NN^{d}
\end{equation*}
where $d$ is the number of edges in $E(G)$ whose contact order is
zero.
Furthermore, the image of the element $e_{v}$ is contained in
$\oM(G,V_{\max})'$ for all $v \in V(G)$.
\end{corollary}
\begin{proof}
This follows directly from Corollary \ref{cor:separate-non-distinct-log} and the construction of $\oM(G, V_{\max})$.
\end{proof}
\begin{definition}
We call $\oM(G,V_{\max})$ the {\em minimal monoid with uniform maximal degeneracy} associated to $(G, V_{\max})$, or simply the {\em minimal monoid} associated to $(G,V_{\max})$.
\end{definition}
\begin{definition}\label{def:umd-minimal}
A stable log map $f\colon \cC \to Y$ over $S$ with $\underline{S}$ a
geometric point is called \emph{minimal with uniform maximal
degeneracy} if \eqref{equ:minimize-umd} is an isomorphism.
A family of log maps is called \emph{minimal with uniform maximal
degeneracy} if each of its geometric fibers is so.
\end{definition}
\subsubsection{Openness of minimality with uniform maximal degeneracy}
The definition of minimal objects in families with uniform maximal degeneracy is justified by the following analogue of Proposition \ref{prop:minimal-open}:
\begin{proposition}\label{prop:min-umd-open}
For any family of log maps $f\colon \cC \to Y$ over a log scheme $S$, if the fiber $f_{s}\colon\cC_{s} \to Y$ over a geometric point $s \to S$ is minimal with uniform maximal degeneracy, then there is an \'etale neighborhood $U \to S$ of $s$ such that the family $f_{U}\colon \cC_{U} \to Y$ is minimal with uniform maximal degeneracy.
\end{proposition}
\begin{proof}
By Proposition \ref{prop:um-open}, replacing $S$ by an \'etale neighborhood of $s$, we may assume that $f\colon \cC \to Y$ over $S$ has uniform maximal degeneracy. For each geometric point $t\in S$, denote by $(G_t, V_{\max,t})$ the log combinatorial type of the fiber $f_t\colon \cC_t \to Y$ over $t$, see Section \ref{sss:log-type-um}.
Let $f_m\colon \cC_m \to Y$ over $S_m$ be the associated minimal objects as in Proposition \ref{prop:minimal-universal} such that $f$ is the pull-back of $f_m$ along a morphism $S \to S_m$. Shrinking $\underline{S}$ if necessary, we choose two charts $\oM_{S,s} \to \cM_{S}$ and $\oM_{S_m, s} \to \cM_{S_m}$. We view elements of $\oM_{S,s}$ and $\oM_{S_m,s}$ as global sections of $\oM_{S}$ and $\oM_{S_m}$ via the following compositions respectively:
\[
\oM_{S,s} \to \cM_{S} \to \oM_S \ \ \ \mbox{and} \ \ \ \oM_{S_m, s} \to \cM_{S_m} \to \oM_{S_m}.
\]
For each geometric point $t \in S$ we have a commutative diagram of solid arrows
\[
\xymatrix{
\oM_{S_m, s} \ar[r] \ar[d] & \oM_{S_m,t} \ar[d] \\
\oM(G_s, V_{\max,s}) \ar@{=}[d] \ar@{-->}[r]^{\chi} & \oM(G_t, V_{\max,t}) \ar[d]^{\phi_{\max,t}} \\
\oM_{S,s} \ar[r]^{\chi_{s,t}} & \oM_{S,t}.
}
\]
where the top and bottom horizontal arrows are the generalization morphisms given by the two charts above, the compositions of the vertical arrows are given by the morphism $S \to S_m$, and the factorization through $\oM(G_t, V_{\max,t})$ follows from Proposition \ref{prop:minimize-umd}. By the construction in Section \ref{sss:um-monoid}, the arrow on the top induces the dashed arrow $\chi$ making the above diagram commutative.
First observe that the lower commutative square in the above diagram implies that $\phi_{\max,t}$ is surjective.
Indeed, the groupification of the generalization morphism $\oM_{S,s}^{gp} \to \oM_{S,t}^{gp}$ is surjective. Since it factors through $\oM(G_t,V_{\max,t})^{gp}$, the morphism $\phi_{\max,t}^{gp}$ is also surjective. Furthermore, $\oM_{S,t}$ is the saturation of the submonoid in $\oM_{S,t}^{gp}$ generated by the image of $\oM_{S,s}$ which is precisely the image $\phi_{\max,t}(\oM(G_t,V_{\max,t}))$.
To see that $\phi_{\max,t}$ is injective, it remains to prove the injectivity of $\phi_{\max,t}^{gp}$. Consider the set
\begin{equation}\label{equ:log-kernel}
F = \{e \in \oM_{S,s} \ | \ \chi_{s,t}(e) = 0\}.
\end{equation}
By \cite[Lemma 3.5]{Ol03}, the group $F^{gp}$ is the kernel of the morphism $\oM_{S,s}^{gp} \to \oM_{S,t}^{gp}$. Let $K$ be the kernel of $\oM(G_s, V_{\max,s})^{gp} \to \oM(G_t,V_{\max,t})^{gp}$, hence $K \subset F^{gp}$. We will prove $F^{gp} = K$ by showing that the composition $F \hookrightarrow \oM(G_s, V_{\max,s}) \stackrel{\chi}{\to} \oM(G_t, V_{\max,t})$ is trivial.
Indeed, consider the fine submonoid $\oN \subset \oM(G_s, V_{\max,s})^{gp}$ generated by the degeneracy $e_v$ for each $v \in V(G_s)$, the element $\rho_l$ for each $l\in E(G_s)$, and the element $(e_{\max} - e_v)$ for each $v \in V(G_s)$. Let $e$ be one of the above three types of generators. Observe that $\chi(e) = 0$ if $\chi_{s,t}(e) = 0$ by the construction in Section \ref{sss:um-monoid}, hence $\chi(\oN\cap F) = 0$.
Since $\oM(G_s, V_{\max,s})$ is the saturation of $\oN$ in $\oM(G_s, V_{\max,s})^{gp}$, $F$ is the saturation of $\oN\cap F$. We conclude that $\chi(F) = 0$.
\end{proof}
\subsubsection{The universality}
The minimal objects in $\fU(Y,\beta')$ have a universal property
similar to that of Proposition \ref{prop:minimal-universal}:
\begin{proposition}\label{prop:min-umd-universal}
For any log map $f\colon \cC \to Y$ over a log scheme $S$ with uniform maximal degeneracy, there exists a log map $f_{mu}\colon \cC_{mu} \to Y$ over $S_{mu}$ which is minimal with uniform maximal degeneracy, and a morphism of log schemes $\Phi_{u}\colon S \to S_{mu}$ such that
\begin{enumerate}
\item The underlying morphism $\underline{\Phi}_u$ is an isomorphism.
\item $f\colon \cC \to Y$ is the pull-back of $f_{mu}\colon \cC_{mu} \to Y$ along $\Phi_u$.
\end{enumerate}
Furthermore, the pair $(f_{mu}, \Phi_u)$ is unique up to a unique isomorphism.
\end{proposition}
\begin{proof}
Let $f_m\colon \cC_m \to Y$ over $S_m$ be the associated minimal object as in Proposition \ref{prop:minimal-universal}, so that $f$ is the pull-back of $f_m$ along $\Phi\colon S \to S_m$ with $\underline{\Phi}$ the identity of $\underline{S}$.
Since the statement is local on $S$, we are free to shrink $S$ if needed. Thus, we may assume there are charts
\[
h_{S_m}\colon \oM_{S_m,s} \to \cM_{S_m} \ \ \mbox{and} \ \ h_S\colon \oM_{S,s} \to \cM_{S}
\]
for some geometric point $s \to S$.
Denote by $(G, V_{\max})$ the log combinatorial type of the fiber
$f_s$ over $s$.
By Proposition~\ref{prop:minimize-umd}, the morphism
$\phi\colon \oM(G) = \oM_{S_m,s} \to \oM_{S,s}$ factors through
$\phi_{\max}\colon Q:=\oM(G,V_{\max}) \to \oM_{S,s}$.
Write $\tilde{\phi}\colon \oM(G) \to Q$ for the canonical morphism.
Denote by $\cM_{S_{mu}}$ the log structure on $\underline{S}$ associated to
the pre-log structure defined by the composition
$h\colon Q \to \oM_{S,s} \stackrel{h_S}{\to} \cM_{S}$.
Thus, there is a morphism of log structures
$\cM_{S_{mu}} = Q\oplus_{h^{-1}\cO^*_{\underline{S}}}\cO^*_{\underline{S}} \to
\cM_S$.
Then the following assignments on the right define a unique dashed
arrow on the left which makes the diagram of log structures
commutative:
\[
\xymatrix{
& \cM_{S_m} \ar@{-->}[ld] \ar[rd] & & &h_{S_m}(e) \ar@{|-->}[ld] \ar@{|->}[rd] & \\
\cM_{S_{mu}} \ar[rr] && \cM_{S} & h \circ \tilde{\phi}(e) + v \ar@{|->}[rr] && h_S\circ \phi(e) + u
}
\]
where $u \in \cO^*$ and $v \in \cO^*$ are the unique invertible
sections making the diagram commutative.
This defines a morphism of log schemes
$S_{mu} := (\underline{S}, \cM_{S_{mu}}) \to S_m$ through which $S \to S_m$
factors.
Further observe that such a morphism depends on the choice of charts
$h_S$ and $h_{S_m}$.
However, different choices of charts induce a unique isomorphism of
$S_{mu}$ compatible with the arrows to and from $S_m$ and $S$
respectively.
Pulling back the log map over $S_m$, we obtain a log map
$f_{mu}\colon \cC \to Y$ over $S_{mu}$ which further pulls back to $f$
over $S$.
Note that the geometric fiber $f_{mu,s}$ is minimal with uniform
maximal degeneracy over $s$.
Further shrinking $\underline{S}$ and using
Proposition~\ref{prop:min-umd-open}, we obtain a family of log maps
over $S_{mu}$ minimal with uniform maximal degeneracy as needed.
\end{proof}
\subsubsection{Finiteness of automorphisms}
Consider a log map $f\colon \cC \to Y$ over $S$ with $\underline{S}$ a geometric point. Suppose $f$ is minimal with uniform maximal degeneracy. Let $f_m\colon \cC_m \to Y$ over $S_m$ be the minimal log map given by Proposition \ref{prop:minimal-universal} such that $f$ is the pull-back of $f_m$ along a morphism $\Phi\colon S \to S_m$. Let $\Aut(f)$ and $\Aut(f_m)$ be the automorphism groups introduced in Section \ref{sss:finite-auto}. They are related as follows.
\begin{proposition}\label{prop:finite-auto-umd}
Notations as above, there is an injective homomorphism of groups $\Aut(f) \to \Aut(f_m)$. In particular, $\Aut(f)$ is finite if $f$ is stable.
\end{proposition}
\begin{proof}
We first construct this group homomorphism. Consider an element $(\psi\colon \cC \to \cC, \theta\colon S \to S)$ in $\Aut(f)$. Note that $f$ can be obtained as the pull-back of $f_m$ via either $S \stackrel{\Phi}{\longrightarrow} S_m$ or the composition $S \stackrel{\theta}{\longrightarrow} S \stackrel{\Phi}{\longrightarrow} S_m$. By the canonicity in Proposition \ref{prop:minimal-universal}, there is a unique isomorphism $(\psi_m\colon\cC_m \to \cC_m, \theta_m\colon S_m \to S_m)$ in $\Aut(f_m)$ that fits into the following commutative diagram:
\[
\xymatrix{
S \ar[r]^{\theta} \ar[d]_{\Phi} & S \ar[d]^{\Phi} \\
S_m \ar[r]^{\theta_m} & S_m
}
\]
The arrow $\Aut(f) \to \Aut(f_m)$ is then defined by $(\psi,\theta) \mapsto (\psi_m,\theta_m)$.
To see the injectivity, observe that the morphism
$\cM^{gp}_{S_m} \to \cM^{gp}_{S}$ is surjective by the construction of
Section \ref{sss:um-monoid}.
Thus $\theta_m$ being the identity implies that $\theta$ is also the
identity.
\end{proof}
\subsection{The stack}
\subsubsection{The statements}
Consider the fibered categories of log maps with uniform maximal
degeneracies as in Section~\ref{sss:um-category}.
We now establish their algebraicity and properness.
By Propositions~\ref{prop:uni-min-stack}, \ref{prop:boundedness} and
\ref{prop:log-map-valuative}, it suffices to establish these
properties relative to the stack of log maps.
We first consider the case of the universal target.
\begin{theorem}\label{thm:max-uni-moduli}
The tautological morphism as in (\ref{equ:forget-max})
\[\fU(\cA, \beta') \to \fM(\cA, \beta')\]
is representable by log algebraic spaces of finite type. Furthermore, it is proper and log \'etale. In particular, the fibered category $\fU(\cA, \beta')$ is represented by a log smooth log algebraic stack locally of finite type.
\end{theorem}
Then consider the following cartesian diagram
\[
\xymatrix{
\scrU(Y,\beta) \ar[r] \ar[d] & \fU(Y,\beta') \ar[r] \ar[d] & \fU(\cA, \beta') \ar[d] \\
\scrM(Y,\beta) \ar[r] & \fM(Y,\beta') \ar[r] & \fM(\cA, \beta')
}
\]
where the vertical arrows are given by (\ref{equ:forget-max}), and the
two horizontal arrows of the right square are induced by the canonical
strict morphism $Y \to \cA$.
Note that imposing a curve class and requiring the underlying maps to
be stable are both representable by open embeddings.
The following is an immediate consequence of the above theorem.
\begin{theorem}\label{thm:max-moduli}
The canonical morphism $\scrU(Y,\beta) \to \scrM(Y,\beta)$ is a proper, representable and log \'etale morphism of log Deligne--Mumford stacks. In particular, $\scrU(Y,\beta)$ is of finite type if $\scrM(Y, \beta)$ is so.
\end{theorem}
We now give the proof of Theorem \ref{thm:max-uni-moduli}, which splits into two parts.
\subsubsection{Representability, boundedness and log \'etaleness}
For simplicity, write $\fM := \fM(\cA,\beta')$ and $\fU := \fU(\cA,\beta')$.
Consider Olsson's log stack $\Log_{\fM}$, which associates to each
strict morphism $T \to \fM$ the category of morphisms of fine log
structures $\cM_T \to \cM$ over $\underline{T}$.
By Proposition~\ref{prop:min-umd-universal}, we may view $\fU$ as the
category fibered over the category of schemes parameterizing log maps
minimal with uniform maximal degeneracy.
By Proposition~\ref{prop:min-umd-open}, the tautological morphism
$\fU \to \Log_{\fM}$ is an open embedding.
Since $\Log_{\fM}$ is algebraic, $\fU$ is a log algebraic stack
equipped with the universal minimal log structure.
By Proposition~\ref{prop:finite-auto-umd}, the morphism $\fU \to \fM$
is representable.
The log \'etaleness of $\fU \to \fM$ follows from \cite[Theorem 4.6
(ii), (iii)]{Ol03}. By Proposition \ref{prop:uni-stack-smooth}, the
stack $\fU$ is log smooth.
To prove that $\fU \to \fM$ is of finite type, consider a strict
morphism $T \to \fM$ from a log scheme $T$ of finite type, and write
$U := T\times_{\fM}\fU$.
Since being of finite type is a property local on the target, it
suffices to show that $U$ is of finite type.
Denote by $\Lambda$ the collection of log combinatorial types of log maps over $T$. Since $T$ is of finite type, the set $\Lambda$ is finite. Let $\Lambda_{um} = \{(G,V_{\max}) \ | \ G \in \Lambda, \ V_{\max} \subset V(G) \}$ be the collection of log combinatorial types of log maps over $U$ as in Section \ref{sss:log-type-um}. The set $\Lambda_{um}$ is again finite as the number of choices of $V_{\max} \subset V(G)$ for a fixed $G \in \Lambda$ is finite.
For each $(G,V_{\max}) \in \Lambda_{um}$, the canonical morphism (\ref{equ:minimize-umd}) induces a morphism of log stacks $\cA_{\oM(G,V_{\max})} \to \cA_{\oM(G)}$. Consider
\[
\cA_{\oM(G,V_{\max}), T} = T\times_{\Log}\cA_{\oM(G,V_{\max})}
\]
where $T \to \Log$ is the canonical strict morphism, and the morphism on the right is the composition $\cA_{\oM(G,V_{\max})} \to \cA_{\oM(G)} \to \Log$. By \cite[Corollary 5.25]{Ol03}, there is an \'etale morphism
\[
\cA_{\oM(G,V_{\max}), T} \to \Log_T.
\]
By the construction of $\fU$, $U$ is an open sub-stack of $\Log_T$. By Definition \ref{def:umd-minimal} and Proposition \ref{prop:min-umd-open}, $U$ is covered by the image of the finite union:
\[
\bigcup_{(G,V_{\max}) \in \Lambda_{um}} \cA_{\oM(G,V_{\max}), T} \to \Log_T.
\]
Thus $U$ is of finite type.
\subsubsection{Properness}
Since $\fU \to \fM$ is representable and of finite type, for properness it suffices to prove the weak valuative criterion.
\step{1}{The set-up of the weak valuative criterion}
Let $R$ be a discrete valuation ring, $\fm \subset R$ be its maximal ideal, and $K$ be its quotient field. Consider a commutative diagram of solid arrows of the underlying stacks
\[
\xymatrix{
\spec K \ar[rr] \ar[d] && \underline{\fU} \ar[d] \\
\spec R \ar[rr] \ar@{-->}[rru]&& \underline{\fM}
}
\]
It suffices to show that possibly after replacing $R$ by a finite
extension of discrete valuation rings, and $K$ by the corresponding
finite extension of quotient fields, there exists a unique dashed
arrow making the above diagram commutative.
Let $f$ be a minimal log map over $S = (\spec R, \cM_S)$ given by the bottom arrow of the above diagram. Denote by $s, \eta \in S$ the closed and generic points with the log structure pulled back from $S$ respectively. Let $f_{\eta_u}$ be the log map over $\eta_u = (\underline{\eta}, \cM_{\eta_{u}})$ minimal with uniform maximal degeneracy given by the top arrow. There is a canonical morphism $\eta_u \to \eta$ such that $f_{\eta_u}$ is the pull-back of $f_{\eta}$. We will construct the dashed arrow by extending $f_{\eta_u}$ to a log map over $\spec R$ which is the pull-back of $f$, and is minimal with uniform maximal degeneracy.
\step{2}{Determine the combinatorial type of the closed fiber}
Passing to a finite extension of $R$ and $K$, denote by $G$ the log
combinatorial type of the closed fiber $f_{s}$ of $f$, and by
$(G_{\eta}, V_{\max,\eta_u})$ the log combinatorial type of
$f_{\eta_u}$.
We next determine the log combinatorial type $(G, V_{\max})$ of
possible extensions of $f_{\eta_u}$.
We may assume that there exists a chart $h\colon \oM(G) \to \cM_{S}$ after taking a further base change.
For each $v\in V(G)$, denote by $e_v \in \oM(G)$ the corresponding degeneracy. Denote by $\gd$ the following composition
\begin{equation}\label{equ:close-general}
\oM(G) \stackrel{h}{\longrightarrow} \cM_S \longrightarrow \cM_{\eta} \longrightarrow \cM_{\eta_u}.
\end{equation}
By Lemma \ref{lem:generize-degeneracy}, the element $\gd(e_v)$ corresponds to a degeneracy of some vertex $v_{\eta} \in V(G_{\eta})$. Consider the subset $V' \subset V(G)$ consisting of vertices $v$ such that $\gd(e_v)$ corresponds to the degeneracy of a vertex in $V_{\max,\eta_u}$. We define a partial order on $V'$ as follows.
For any $v_1, v_2 \in V'$, observe that $\gd(e_{v_2}) - \gd(e_{v_1}) \in K^{\times}$, as it is a difference of maximal degeneracies over $\eta$. We define
\[v_1 \poleq_u v_2 \ \ \mbox{if} \ \ \big(\gd(e_{v_2}) - \gd(e_{v_1}) \big) \in R.\]
Denote by $V_{\max} \subset V'$ the collection of maximal elements under this partial order $\poleq_u$.
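Since $R$ is a discrete valuation ring, this order can be read off from the valuation $\nu$ of $R$ (notation ours): an element of $K^{\times}$ lies in $R$ exactly when its valuation is non-negative, hence
\[
v_1 \poleq_u v_2 \ \ \mbox{if and only if} \ \ \nu\big(\gd(e_{v_2}) - \gd(e_{v_1})\big) \geq 0.
\]
As the reversed difference is the inverse unit in $K^{\times}$, any two elements of $V'$ are comparable; in particular $V_{\max}$ is non-empty, and for $v_1, v_2 \in V_{\max}$ the difference $\gd(e_{v_2}) - \gd(e_{v_1})$ lies in $R^{\times}$.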
We show that $(G,V_{\max})$ is necessarily the log combinatorial type of any possible extension $f_{S_u}$ of $f_{\eta_u}$ over $S_u = (\spec R, \cM_{S_u})$ with uniform maximal degeneracy. Given such an extension, let $V'_{\max}$ be the collection of maximally degenerate vertices of the closed fiber of $f_{S_u}$. By Lemmas \ref{lem:generize-degeneracy} and \ref{lem:generize-po}, we have the inclusion $V'_{\max} \subset V'$.
Consider the canonical morphism $\psi\colon S_u \to S$ along which $f$ pulls back to $f_{S_u}$. Thus $\gd$ can also be given by the following composition
\[
\oM(G) \stackrel{h}{\longrightarrow} \cM_S \stackrel{\psi^{\flat}}{\longrightarrow} \cM_{S_u} \longrightarrow \cM_{\eta_u}.
\]
Suppose $v_2 \in V'_{\max}$. Then since $\big(\psi^{\flat}\circ h(e_{v_2}) - \psi^{\flat}\circ h(e_{v_1}) \big) \in \cM_{S_u}$ for any $v_1 \in V'$, we have $\big( \gd(e_{v_2}) - \gd(e_{v_1}) \big) \in R$. This implies $V'_{\max} \subset V_{\max}$. The other direction $V_{\max} \subset V'_{\max}$ is similar.
\step{3}{Principalize degeneracies of elements in $V_{\max}$}
Let $\cK_0 \subset \cM_{S}$ be the log ideal generated by $\{h(e_v) \ | \ v \in V_{\max}\}$. Let $\hat{S}_0 \to S$ be the log blow-up along $\cK_0$, and $f_{\hat{S}_0}$ be the pull-back of $f$. We show that $\eta_u \to S$ factors through $\hat{S}_0 \to S$ uniquely.
Indeed, let $(G_{\eta}, V_{\max,\eta_u})$ be the log combinatorial type of $f_{\eta_u}$. By Lemmas \ref{lem:generize-degeneracy} and \ref{lem:generize-po}, $\gd(e_v)$ corresponds to the maximal degeneracy of $f_{\eta_u}$ for any $v \in V_{\max}$. Thus $\cK_0$ pulls back to a locally principal log ideal over $\eta_u$ via $\eta_u \to S$. It follows from the universal property of log blow-ups that there is a unique morphism $\eta_u \to \hat{S}_0$ lifting $\eta_u \to S$.
Since the underlying morphism of $\hat{S}_0 \to S$ is projective, the underlying morphism of $\eta_u \to \hat{S}_0$ extends to a strict morphism $S_0 \to \hat{S}_0$ with underlying scheme $\underline{S}_0 = \spec R$. In particular, we obtain a morphism $\psi_0\colon \eta_u \to S_0$. Denote by $f_{S_0}$ the pull-back of $f_{\hat{S}_0}$ over $S_0$. Consider the composition
\[
\gd_0\colon \oM(G) \stackrel{h}{\longrightarrow} \cM_S \longrightarrow \cM_{S_0}.
\]
We show that the elements in $V_{\max}$ have the same degeneracy associated to the closed fiber of $f_{S_0}$ by showing that
\begin{equation}\label{equ:prin-max-deg-0}
\big(\gd_0(e_{v_2}) - \gd_0(e_{v_1})\big) \in R^{\times}, \ \ \mbox{for any} \ \ v_1, v_2 \in V_{\max}.
\end{equation}
Indeed, since $v_1$ and $v_2$ are both maximal with respect to $\poleq_u$, both $\gd(e_{v_2}) - \gd(e_{v_1})$ and $\gd(e_{v_1}) - \gd(e_{v_2})$ lie in $R$, whence
\begin{equation}\label{equ:prin-max-deg}
\big(\gd(e_{v_2}) - \gd(e_{v_1}) \big) \in R^{\times} , \ \ \mbox{for any} \ \ v_1, v_2 \in V_{\max}.
\end{equation}
Since $S_0 \to S$ factors through $\hat{S}_0 \to S$, we have $\big(\gd_0(e_{v_2}) - \gd_0(e_{v_1})\big) \in \cM_{S_0}$. Since $\gd = \psi^{\flat}_{0}\circ\gd_0$, the claim follows from the fact that
\[
\psi^{\flat}_{0}\big( \gd_0(e_{v_2}) - \gd_0(e_{v_1}) \big) = \gd(e_{v_2}) - \gd(e_{v_1}) \in R^{\times}.
\]
\step{4}{Maximize the degeneracy of elements in $V_{\max}$}
Fix $v_0 \in V_{\max}$.
Consider the finite set
$
V(G) \setminus V_{\max} = \{v_1, \cdots, v_k\}.
$
Define $\cK_{i} \subset \cM_{S_0}$ to be the log ideal generated by $\{ \gd_0(e_{v_i}), \gd_{0}(e_{v_0})\}$ for $i = 1, 2, \cdots, k$. By (\ref{equ:prin-max-deg}) the log ideal $\cK_i$ is independent of the choice of $v_0 \in V_{\max}$. Consider the following diagram
\[
\xymatrix{
\hat{S}_k \ar[r] & \hat{S}_{k-1} \ar[r] & \cdots \ar[r] & \hat{S}_1 \ar[r] & S_0 \\
&&&& \eta_u \ar[u]_{\psi_0} \ar@{-->}[lu] \ar@{-->}[lllu] \ar@/^/@{-->}[llllu]
}
\]
where $\hat{S}_{i+1} \to \hat{S}_i$ is the log blow-up of the pull-back of $\cK_i$ via $\hat{S}_i \to S_0$.
Since $\big( \gd(e_{v_0}) - \gd(e_{v_i})\big) \in \cM_{\eta_u}$, the log ideal $\cK_i$ pulls back to a locally principal log ideal over $\eta_u$ via $\psi_0$. Thus we obtain a sequence of dashed arrows $\hat{\psi}_i\colon \eta_u \to \hat{S}_i$ lifting $\psi_0$ as in the above diagram.
Since log blow-ups are projective, we obtain a strict morphism $S_k \to \hat{S}_k$ with underlying $\underline{S}_k = \spec R$ extending the underlying morphism of $\psi_{0}$. Thus for each $i$ we have morphisms $\psi_i\colon \eta_u \to S_i$ and $S_i \to S_0$. Let $f_{S_k}\colon C_{S_k} \to \cA$ over $S_{k}$ be the pull-back of $f_{S_0}$.
Consider the composition
$\gd_k\colon \oM(G) \stackrel{h}{\longrightarrow} \cM_S \longrightarrow \cM_{S_k}.$
Since the pull-back of $\cK_i$ is locally principal over $S_k$, we have that either
$\big( \gd_k(e_{v_0}) - \gd_k(e_{v_i})\big)$
or
$\big( \gd_k(e_{v_i}) - \gd_k(e_{v_0})\big)$
belongs to $\cM_{S_k}$. We next show that the latter is not possible.
Indeed, the construction in Step 2 implies that
$\big( \gd(e_{v_0}) - \gd(e_{v_i})\big) \in \cM_{\eta_{u}}\setminus R^{\times}.$
Since $\gd = \psi^{\flat}_k\circ\gd_k$, we necessarily have that $\big( \gd_k(e_{v_0}) - \gd_k(e_{v_i})\big) \in \cM_{S_k}\setminus R^{\times}$ for any $i=1,\cdots, k$. Thus $f_{S_k}$ over $S_k$ has uniform maximal degeneracy by Proposition \ref{prop:um-open}.
\step{5}{Verify the extension and uniqueness}
We show that $f_{S_k}$ is the unique extension of $f_{\eta_u}$ as needed. First observe that the pull-back of $f_{S_k}$ along $\psi_{k}$ is the log map $f_{\eta_u}$ minimal with uniform maximal degeneracy. Thus the universality of Proposition \ref{prop:min-umd-universal} implies that $\psi_k$ induces an isomorphism between the generic fiber $f_{S_k,\underline{\eta}}$ of $f_{S_k}$ and $f_{\eta_u}$. Using Proposition \ref{prop:min-umd-universal} again, we obtain a log map $f_{S_u}$ over $S_{u}$ which is minimal with uniform maximal degeneracy, and a morphism $S_{k} \to S_u$ with the identity underlying morphism, along which $f_{S_u}$ pulls back to $f_{S_k}$. This provides the desired extension of $f_{\eta_u}$.
To see the uniqueness, let $f_{S_u}$ over $S_u$ be any extension of $f_{\eta_u}$. Note that there is a canonical morphism $S_{u} \to S$ along which $f$ pulls back to $f_{S_u}$. Since the log combinatorial type $(G,V_{\max})$ is unique as shown in Step 2, the log ideal $\cK_0$ as in Step 3 pulls back to a locally principal log ideal over $S_u$, hence there is a unique morphism $S_u \to S_0$ such that $f_{S_u}$ is the pull-back of $f_{S_0}$.
By Condition (2) of Section \ref{sss:um-monoid}, the log ideal $\cK_i$ as in Step 4 pulls back to a locally principal log ideal over $S_u$, hence a unique morphism $S_u \to S_k$ such that $f_{S_u}$ is the pull-back of $f_{S_k}$. Applying the universality of Proposition \ref{prop:min-umd-universal} one more time, we obtain an isomorphism $S_u \to S_k$ compatible with pull-back of log maps.
\smallskip
This completes the proof of Theorem \ref{thm:max-uni-moduli}. \qed
\subsection{The logarithmic twist}\label{ss:log-twists}
Here we introduce the {\em log twist}, which is the key to extending the cosection across the boundary.
Consider the stack $\fU := \fU(\cA, \beta')$ with its universal pre-stable log map
$
f_{\fU}\colon \cC_{\fU} \to \cA
$
and the projection $\pi_{\fU}\colon \cC_{\fU} \to \fU.$
\subsubsection{The boundary torsor of $\fU$}\label{sss:boundary-torsor}
Consider the global section
$
e_{\max} \in \Gamma(\fU,\oM_{\fU})
$
which is the maximal degeneracy over each geometric point. Consider the $\cO_{\fU}^{*}$-torsor over $\fU$
\begin{equation}\label{equ:boundary-torsor}
\cT_{\max} := e_{\max}\times_{\oM_{\fU}}\cM_{\fU}
\end{equation}
and the corresponding line bundle
$
\bL_{\max} \supset \cT_{\max}.
$
The composition
\[
\cT_{\max} \to \cM_{\fU} \to \cO_{\fU}
\]
induces a morphism of line bundles
\begin{equation}\label{equ:boundary-complex}
\bL_{\max} \longrightarrow \cO_{\fU}.
\end{equation}
Since $\fU$ is log smooth by Theorem \ref{thm:max-uni-moduli}, the dual of the above defines a section of $\bL_{\max}^{\vee}$ whose vanishing locus is a Cartier divisor $\Delta_{\max} \subset \fU$ such that $\bL_{\max}^{\vee} \cong \cO_{\fU}(\Delta_{\max})$.
\subsubsection{The torsor from the target}
By Section \ref{sss:rank-one}, the characteristic sheaf $\oM_{\cA}$ admits a global section
$
\delta_{\infty} \in \Gamma(\cA, \oM_{\cA})
$
which is a local generator of $\oM_{\cA}$ at each point. Consider the $\cO^*$-torsor over $\cA$:
\begin{equation}\label{equ:target-torsor}
\cT_{\infty} := \delta_{\infty}\times_{\oM_{\cA}}\cM_{\cA}
\end{equation}
and the corresponding line bundle $\cO_{\cA}(-\infty_{\cA}) \supset \cT_{\infty}$. The composition
\[
\cT_{\infty} \to \cM_{\cA} \to \cO_{\cA}
\]
corresponds to the canonical embedding
$
\cO_{\cA}(-\infty_{\cA}) \to \cO_{\cA}.
$
\subsubsection{The universal twist}
We construct the {\em log twist} as follows.
\begin{lemma}\label{lem:torsor-twist}
Suppose all contact orders in $\beta'$ are trivial. Then $f_{\fU}^{\flat}$ induces a morphism compatible with the $\cO^*_{\cC_{\fU}}$-action
\begin{equation*}
\tf_{\fU}^{\flat}\colon (\pi^{*}_\fU\cT_{\max})\otimes (f^{*}_{\fU}\cT^{\vee}_{\infty}) \to \cM_{\cC_{\fU}}, \ \ a\otimes (-b) \mapsto a - f_{\fU}^{\flat}(b)
\end{equation*}
where $\cT^{\vee}_{\infty}$ is the dual torsor of $\cT_{\infty}$.
\end{lemma}
\begin{proof}
Consider the sequence of inclusions
\[
\pi^{*}_\fU\cT_{\max} \subset \pi^{*}_\fU\cM_{\fU} \subset \cM_{\cC_{\fU}},
\]
and the composition
\[
f^{*}_{\fU}\cT^{\vee}_{\infty} \subset f^{*}_{\fU}\cM^{gp}_{\cA} \to \cM^{gp}_{\cC_{\fU}},
\]
where the last arrow is induced by $f_{\fU}^{\flat}$.
Putting these together, we obtain
\[
(\pi^{*}_\fU\cT_{\max})\otimes (f^{*}_{\fU}\cT^{\vee}_{\infty}) \to \cM^{gp}_{\cC_{\fU}}, \ \ \ \ a\otimes (-b) \mapsto a - f_{\fU}^{\flat}(b).
\]
To see that this morphism factors through $\cM_{\cC_{\fU}}$, it suffices to show that the image of the composition
\[
(\pi^{*}_\fU\cT_{\max})\otimes (f^{*}_{\fU}\cT^{\vee}_{\infty}) \to \cM^{gp}_{\cC_{\fU}} \to \oM^{gp}_{\cC_{\fU}}
\]
is contained in $\oM_{\cC_{\fU}}$. Note the image is of the form $e_{\max} - \bar{f}_{\fU}^{\flat}(\delta_{\infty})$. Since $e_{\max}$ is the maximal degeneracy and the contact orders are all trivial, we have $e_{\max} - \bar{f}_{\fU}^{\flat}(\delta_{\infty}) \in \oM_{\cC_{\fU}}$ by the description in Section \ref{ss:log-combinatorial}.
\end{proof}
\begin{proposition}\label{prop:log-twist}
Suppose the contact orders in $\beta'$ are all trivial. Then there is a natural morphism of line bundles over $\cC_{\fU}$
\begin{equation}\label{equ:log-twist}
\tf_{\fU}\colon \pi^{*}_{\fU}\bL_{\max}\otimes f^{*}_{\fU}\cO(\infty_{\cA}) \to \cO_{\cC_{\fU}}
\end{equation}
such that $\tf_{\fU}$ vanishes along non-maximally degenerate components, and is surjective everywhere else.
\end{proposition}
\begin{proof}
The morphism $\tf_{\fU}$ is obtained by composing $\tf_{\fU}^{\flat}$ as in Lemma \ref{lem:torsor-twist} with the structural morphism $\cM_{\cC_{\fU}} \to \cO_{\cC_{\fU}}$, and using the corresponding line bundles $\cT_{\infty} \subset \cO_{\cA}(-\infty_{\cA})$ and $\cT_{\max} \subset \bL_{\max}$.
Consider a non-maximally degenerate component $Z$ with degeneracy $e_Z$. Then over the generic point of $Z$ we have $$e_{\max} - \bar{f}_{\fU}^{\flat}(\delta_{\infty})=e_{\max} - e_Z \in \oM_{\fU}\setminus \{ 0 \}$$ as $e_{\max}$ is the maximal degeneracy. Since the target of $\tf_{\fU}$ is the trivial line bundle, we conclude that $\tf_{\fU}$ vanishes over the non-maximally degenerate components.
Then observe that $e_{\max} - \bar{f}_{\fU}^{\flat}(\delta_{\infty}) = 0$ in $\oM_{\cC_{\fU}}$ over the maximally degenerate components except those nodes joining maximally degenerate components with non-maximally degenerate components.
\end{proof}
\subsection{A partial expansion}\label{ss:partial-expansion}
Denote by $\cN_{\max} \subset \cM_{\fU}$ the sub-log structure
generated by $\cT_{\max}\subset \cM_{\fU}$.
Then $e_{\max} \in \Gamma(\fU, \oM_{\fU})$ is a global section
whose image in $\oN_{\max}$ is a local generator.
Denote by $\cA_{\max} := \cA$ the log stack with the boundary divisor $\Delta$ given by the origin. The inclusion $\cN_{\max} \hookrightarrow \cM_{\fU}$ defines a morphism of log stacks $\fm\colon \fU \to \cA_{\max}$ with $\fm^{-1}(\Delta) = \Delta_{\max}$.
Let
$
\fb\colon \cA^{e} \to \cA\times\cA_{\max}
$
be the blow-up of $\infty_{\cA}\times\Delta$ with the naturally induced log structure. Indeed, there is a unique open dense point in $\cA^{e}$ with the trivial log structure, whose complement is a simple normal crossings divisor in $\cA^e$. The divisorial log structure associated to this simple normal crossings divisor is $\cM_{\cA^e}$. Furthermore, $\fb$ is log \'etale.
Let $\cE_{\fb} \subset \cA^e$ be the exceptional divisor of $\fb$ and
$\infty_{\cA^{e}} \subset \cA^e$ be the proper transform of
$\infty_{\cA}\times\cA_{\max} \subset \cA\times \cA_{\max}$.
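In a smooth-local model (coordinates ours), with $x$ cutting out $\infty_{\cA}$ and $t$ cutting out $\Delta$, the center $\infty_{\cA}\times\Delta$ is the origin $\{x = t = 0\}$, and $\fb$ is the ordinary blow-up
\[
\fb\colon \mathrm{Bl}_{(0,0)}\spec k[x,t] \longrightarrow \spec k[x,t],
\]
under which $\cE_{\fb}$ is the exceptional curve, $\infty_{\cA^{e}}$ is the proper transform of $\{x = 0\}$, and the full boundary is the simple normal crossings union of these two with the proper transform of $\{t = 0\}$.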
\begin{lemma}\label{lem:lift-max-deg}
Suppose the contact orders in $\beta'$ are all trivial. Then there is a commutative diagram of log stacks
\[
\xymatrix{
\cC_{\fU} \ar[rr]^{f^{e}_{\fU}} \ar[rd]_{f_{\fU}\times\fm} && \cA^{e} \ar[ld]^{\fb} \\
&\cA\times\cA_{\max}&
}
\]
such that
\begin{enumerate}
\item The inverse image $(f^e_{\fU})^{-1}(\infty_{\cA^e})$ is empty.
\item For any geometric point $w \to \fU$, an irreducible component $Z \subset \cC_{w}$ over $w$ dominates $\cE_{\fb}$ via $f^e_{\fU}$ if and only if $Z$ is maximally degenerate with non-trivial degeneracy.
\end{enumerate}
\end{lemma}
\begin{proof}
We first construct the morphism $f^e_{\fU}$. Denote by
$$\cK \subset \cM_{\cA\times\cA_{\max}} := \cM_{\cA}\oplus_{\cO^*}\cM_{\cA_{\max}}$$
the log ideal generated by $\cT_{\max}$ and $\cT_{\infty}$.
The pull-back
$(f_{\fU}\times \fm)^{\bullet}\cK \subset \cM_{\cC_{\fU}}$ is the log
ideal generated by $\cT_{\max}$ and the image
$f_{\fU}^{\flat}(\cT_{\infty})$.
Since $\fb$ is the log blow-up of $\cK$, to show that
$f_{\fU}\times \fm$ lifts to $f^e_{\fU}$, it suffices to show that
$(f_{\fU}\times \fm)^{\bullet}\cK$ is locally principal, which follows
from Lemma \ref{lem:torsor-twist}.
Now consider geometric points $w \to \Delta_{\max}$ and $x \to \cC_w$. Denote by $e'_{\max}$ and $\delta'$ the corresponding local generators of $\cT_{\max}$ and $f_{\fU}^{\flat}(\cT_{\infty})$ in a neighborhood $W \subset \cC_{w}$ of $x$ respectively. Then by Lemma \ref{lem:torsor-twist} we have $e'_{\max} - \delta' \in \cM_{\cC_{\fU}}$. Let $\alpha(e'_{\max} - \delta') \in \cO_{W}$ be the corresponding image.
By construction of $\fb$, locally in the smooth topology we can choose a coordinate of $\cE_{\fb}\setminus \infty_{\cA^{e}}$ mapping to $\alpha(e'_{\max} - \delta')$ via $(f^e_{\fU})^*$. Thus $f^e_{\fU}(W)$ dominates $\cE_{\fb} \setminus \infty_{\cA^{e}}$ if and only if $\alpha(e'_{\max} - \delta') \neq 0$ on $W$. Statement (2) follows from the fact that $\alpha(e'_{\max} - \delta')$ vanishes only along non-maximally degenerate components of $\cC_w$.
To see (1), observe that $(f^e_{\fU,w})^{-1}(\infty_{\cA^e})$ is supported on the poles of the section $\alpha(e'_{\max} - \delta')$ over the maximally degenerate components of $\cC_w$. But $\alpha(e'_{\max} - \delta')$ has no poles by Lemma \ref{lem:torsor-twist}.
\end{proof}
We give another description of \eqref{equ:log-twist}. Since $\cE_{\fc} := \pi^*_{\fU}\Delta_{\max} - \cE_{\fb}$ is effective, there is a natural inclusion $(f^e_{\fU})^*\cO(\cE_{\fb}) \to \pi^*_{\fU}\cO(\Delta_{\max})$, hence
\begin{equation}\label{equ:log-twisted-expansion}
\pi^*_{\fU}\bL_{\max}\otimes (f^e_{\fU})^*\cO(\cE_{\fb}) \cong \pi^*_{\fU}\cO(-\Delta_{\max})\otimes (f^e_{\fU})^*\cO(\cE_{\fb}) \to \cO_{\cC_{\fU}}.
\end{equation}
\begin{lemma}\label{lem:compare-universal-log-twist}
The two morphisms (\ref{equ:log-twisted-expansion}) and \eqref{equ:log-twist} are identical.
\end{lemma}
\begin{proof}
Since $\fb^*[\infty_{\cA}\times\cA_{\max}] = [\cE_{\fb}] + [\infty_{\cA^e}]$ and $f_{\fU}\times\fm = \fb\circ f^e_{\fU}$, we may rewrite \eqref{equ:log-twist} as
\[
\pi^*_{\fU}\bL_{\max}\otimes (f^e_{\fU})^*\cO(\cE_{\fb} + \infty_{\cA^e}) \to \cO_{\cC_{\fU}}.
\]
By Lemma \ref{lem:lift-max-deg}, the above morphism becomes
\[
\pi^*_{\fU}\bL_{\max}\otimes (f^e_{\fU})^*\cO(\cE_{\fb}) \to \cO_{\cC_{\fU}},
\]
which is (\ref{equ:log-twisted-expansion}).
\end{proof}
\section{Logarithmic fields}
\label{sec:logfields}
\subsection{$r$-spin curves and their moduli}
The case of stable $r$-spin curves has been studied in \cite{Ja98, Ja00, AbJa03, Ch08}. Following the strategy of \cite{AbJa03}, we extend $r$-spin structures to twisted pre-stable curves.
\subsubsection{$r$-spin structures}
\begin{definition}\label{def:rspin}
An $n$-marked, genus $g$, $r$-spin curve over a scheme $S$ consists of the following data
\[
(\cC \to S, \cL, \cL^{r} \cong \omega^{\log}_{\cC/S})
\]
where
\begin{enumerate}
\item $\cC \to S$ is a family of genus $g$, $n$-marked twisted pre-stable curves.
\item $\cL$ is a representable line bundle over $\cC$ with a given isomorphism $\cL^r \cong \omega^{\log}_{\cC/S}$ where $\omega^{\log}_{\cC/S}$ is the log cotangent bundle of the log smooth morphism $\cC \to S$.
\end{enumerate}
The pull-back of $r$-spin curves is defined in the usual sense. For simplicity, we may write $(\cC \to S, \cL)$ for an $r$-spin curve over $S$.
\end{definition}
\begin{notation}
For the purposes of this paper, we would like to view the family of curves $\cC \to S$ as a family of log curves equipped with the canonical log structure pulled back from the stack of log curves as in Section \ref{sss:curve-stack}. This avoids adding extra underlines to both $\cC$ and $S$.
\end{notation}
\begin{notation}
Unlike the usual notations in logarithmic geometry, the log cotangent bundle of $\cC\to S$ in this paper is denoted by $\omega^{\log}_{\cC/S}$ rather than $\omega_{\cC/S}$. We reserve the notation $\omega_{\cC/S}$ for the dualizing line bundle of the family $\cC \to S$. This choice of notations is compatible with the commonly used notations in FJRW theory.
\end{notation}
\subsubsection{Monodromy representation along markings and nodes}
Consider an $r$-spin curve $(\cC \to S, \cL)$ and its $i$-th marking
$\sigma_i \subset \cC$ with stabilizer the cyclic group $\mu_{r_i}$.
As the line bundle $\cL$ is representable, the action of $\mu_{r_i}$
on $\cL|_{\sigma_i}$ is given by an injective group homomorphism
\[
\gamma_i\colon \mu_{r_i} \hookrightarrow \mathbb{G}_{m}
\]
which is called the {\em monodromy representation} along $\sigma_{i}$.
In this paper, we use $\vgamma = (\gamma_i)_{i=1}^{n}$ to denote the collection of monodromy representations along the $n$ marked points. This is a discrete invariant of $r$-spin curves.
\bigskip
Consider a geometric point $q \to \cC$ which is a node. \'Etale locally around $q$, we have the model (\ref{equ:node-local}). Denote by $\cC_{q+}$ and $\cC_{q-}$ the two components intersecting at $q$ with respect to the two coordinates $x$ and $y$ respectively. We obtain two monodromy representations
\[
\gamma_{q\pm}\colon \mu_{r} \to \mathbb{G}_{m}
\]
of $\cL|_{q}$ at $q \in \cC_{q\pm}$ respectively. The representability of $\cL$ implies that both $\gamma_{q+}$ and $\gamma_{q-}$ are injective. The balanced condition of $\cC$ at the node $q$ implies that the composition
\[
\mu_r \stackrel{\gamma_{q+}\times\gamma_{q-}}{\longrightarrow} \mathbb{G}_{m}\times\mathbb{G}_{m} \longrightarrow \mathbb{G}_{m}
\]
is trivial, where the second arrow is the multiplication morphism.
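Spelled out, triviality of this composition means that $\gamma_{q+}(\zeta)\cdot\gamma_{q-}(\zeta) = 1$ for every $\zeta \in \mu_r$, that is,
\[
\gamma_{q-} = \gamma_{q+}^{-1}.
\]
Thus the monodromy representation on one branch of the node determines the one on the other branch.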
\subsubsection{$r$-spin structure as twisted stable maps}
Given an $r$-spin curve $(\cC \to S, \cL)$ we obtain a unique commutative diagram as follows:
\begin{equation}\label{diag:spin-map-correspondence}
\xymatrix{
&& (C, \omega^{\log}_{C/S})^{1/r} \ar[d] \\
\cC \ar[rru] \ar[rr] \ar[d] && C \ar[d] \\
S_1 \ar[rr] && S_2.
}
\end{equation}
where
\begin{enumerate}
\item $\cC \to C$ is the coarsification. Here we equip both $\cC \to S_1$ and $C \to S_2$ with their canonical log structures as families of log curves. The morphism $\cC \to C$ is log \'etale. Furthermore, the bottom morphism $S_1 \to S_2$ induces an identity of the underlying structure $\underline{S_1} = \underline{S_2} = \underline{S}$, see \cite[Theorem~1.9]{Ol07}.
\item $(C, \omega^{\log}_{C/S})^{1/r} \to C$ is strict and \'etale with the underlying morphism given by taking the $r$-th root stack of $\omega^{\log}_{C/S}$ over $C$.
\item $\cC \to (C, \omega^{\log}_{C/S})^{1/r}$ is induced by the $r$-spin structure $\cL^r \cong \omega^{\log}_{\cC/S}$.
\end{enumerate}
Our description of the $r$-spin structure is similar to the case of \cite[Section~1.5]{AbJa03} except that we equip the two families of curves with their canonical log structure for later use.
Conversely, by pulling back the universal $r$-th root along $\cC \to (C, \omega^{\log}_{C/S})^{1/r}$ we obtain an $r$-spin bundle over $\cC$. To summarize, we have
\begin{lemma}\label{lem:spin-twist-stable-map}
The data of an $r$-spin curve $(\cC \to S, \cL)$ is equivalent to the diagram (\ref{diag:spin-map-correspondence}).
\end{lemma}
\subsubsection{The stack of $r$-spin structures}
Denote by $\fM_{g,\vgamma}^{1/r}$ the stack of genus $g$, $n$-marked, $r$-spin curves with monodromy data $\vgamma$ along the markings. It can be viewed as a fibered category over the category of usual schemes, as the log structures on the curves are the canonical ones.
\begin{proposition}\label{prop:curve-stack}
The stack $\fM_{g,\vgamma}^{1/r}$ is a smooth, log smooth algebraic
stack locally of finite presentation.
Furthermore, the tautological morphism removing the $r$-spin
structures
\[
\fM_{g,\vgamma}^{1/r} \to \fM^{\mathrm{tw}}_{g, n}
\]
is locally of finite type, quasi-separated, strict, and log \'etale.
\end{proposition}
\begin{proof}
Denote by $\pi\colon \fC \to \fM^{\mathrm{tw}}_{g, n}$ the universal curve,
and $\fC \to C$ the universal coarse moduli morphism.
Also, denote by $(\fC, \omega_{\fC/\fM^{\mathrm{tw}}_{g, n}}^{\log})^{1/r}$ the
root stack over $\fC$ parameterizing $r$-th roots of
$\omega_{\fC/\fM^{\mathrm{tw}}_{g, n}}^{\log}$.
As
$\omega_{C/\fM^{\mathrm{tw}}_{g, n}}^{\log}|_{\fC} \cong
\omega_{\fC/\fM^{\mathrm{tw}}_{g, n}}^{\log}$, we observe that
$\tilde{\fC} := (C, \omega_{C/\fM^{\mathrm{tw}}_{g,
n}}^{\log})^{1/r}\times_{C}\fC \cong (\fC,
\omega_{\fC/\fM^{\mathrm{tw}}_{g, n}}^{\log})^{1/r}$ with an \'etale
projection $\tilde{\fC} \to \fC$.
Consider $S \to \fM^{\mathrm{tw}}_{g, n}$ with the pull-back family
$\tilde{\fC}_S \to \fC_S \to S$.
By the description of \eqref{diag:spin-map-correspondence}, giving
an $r$-spin bundle $\cL_S$ over $\fC_S$ is equivalent to giving a
section $s$ of the projection $\tilde{\fC}_S \to \fC_S$ such that
the composition
$\fC_S \to \tilde{\fC}_S \to (C, \omega_{C/\fM^{\mathrm{tw}}_{g,
n}}^{\log})^{1/r}_S$ is representable.
Thus the stack $\fM_{g,\vgamma}^{1/r}$ is an open substack of the
stack $\pi_*\tilde{\fC}$ parameterizing sections of the morphism
$\tilde{\fC} \to \fC$ over $\fM^{\mathrm{tw}}_{g, n}$ with discrete data
$\vgamma$.
By \cite[Theorem~1.3]{HR14}, the stack $\fM_{g,\vgamma}^{1/r}$ is
algebraic, and the tautological morphism
$\fM_{g,\vgamma}^{1/r} \to \fM^{\mathrm{tw}}_{g, n}$ is locally of finite
type and quasi-separated.
As $\fM^{\mathrm{tw}}_{g, n}$ carries the canonical locally free log
structure, it remains to show that the morphism
$\fM_{g,\vgamma}^{1/r} \to \fM^{\mathrm{tw}}_{g, n}$ is \'etale in the usual
sense.
We check it using the infinitesimal lifting property.
Let $A \to B$ be a small extension of Artin rings, and consider the commutative diagram of solid arrows
\[
\xymatrix{
\spec B \ar[r] \ar[d] & \fM_{g,\vgamma}^{1/r} \ar[d] \\
\spec A \ar[r] \ar@{-->}[ru]& \fM^{\mathrm{tw}}_{g, n}
}
\]
It suffices to show that there is a unique dashed arrow making the above diagram commutative. Pulling back the universal families, it remains to construct the section given by the dashed arrow fitting in the commutative diagram of solid arrows
\[
\xymatrix{
\tilde{\fC}_{\spec B} \ar[r] \ar[d] & \tilde{\fC}_{\spec A} \ar[d] \\
\fC_{\spec B} \ar[r] \ar@/^1pc/[u] & \fC_{\spec A} \ar@/_1pc/@{-->}[u]
}
\]
But since the vertical arrows are \'etale, by the infinitesimal lifting property of \'etale morphisms, such a dashed arrow exists and is unique.
\end{proof}
The following is an analogue of \cite[Corollary 2.2.2]{AbJa03}.
\begin{corollary}\label{cor:r-spin-finite}
The tautological morphism $\fM^{1/r}_{g,\vgamma} \to \fM_{g,n}$ is proper and quasi-finite.
\end{corollary}
\begin{proof}
By viewing $r$-spin curves as twisted stable maps, the properness follows from \cite[Theorem 1.4.1]{AV02}. Since the morphism $\fM_{g,\vgamma}^{1/r} \to \fM^{\mathrm{tw}}_{g, n}$ is \'etale and $\fM^{\mathrm{tw}}_{g, n}\to \fM_{g,n}$ has zero dimensional fibers, we conclude that the composition $\fM_{g,\vgamma}^{1/r} \to \fM^{\mathrm{tw}}_{g, n} \to \fM_{g,n}$ is quasi-finite.
\end{proof}
\subsubsection{Log $r$-spin curves and their stacks}
\begin{definition}\label{def:log-rspin}
A {\em log $r$-spin curve} over a log scheme $S$ consists of
\[
(\cC \to S, \cL)
\]
where $\cC \to S$ is a log curve (not necessarily equipped with the canonical log structure), and $\cL$ is an $r$-spin structure over the canonical log curve of $\cC \to S$. The {\em pull-back} of the log $r$-spin curve is defined as usual using fiber products in the fine and saturated category.
\end{definition}
As every log curve is obtained by a unique pull-back from its associated canonical log curve, we obtain:
\begin{corollary}
The log stack $\fM_{g,\vgamma}^{1/r}$ with its canonical log structure given by its universal curve represents the category of log $r$-spin curves fibered over the category of log schemes.
\end{corollary}
\subsection{Log $r$-spin fields and their moduli}
\subsubsection{Log $r$-spin fields}\label{sss:spin-field}
Given a log $r$-spin curve $(\cC \to S, \cL)$, consider the $\mathbb{P}^1$-bundle
\[
\underline{\cP} := \mathbb{P}(\cL\oplus\cO_{\cC}) \to \underline{\cC}.
\]
Denote by $0_{\cP}$ and $\infty_{\cP}$ the zero and infinity sections of the above $\mathbb{P}^1$-bundle, with normal bundles $\cL$ and $\cL^{\vee}$ respectively. Let $\cM_{\infty_{\cP}}$ be the log structure over $\underline{\cP}$ associated to the Cartier divisor $\infty_{\cP}$. It is of Deligne--Faltings type of rank one; see Section \ref{sss:rank-one}.
Denote by $\cP' = (\underline{\cP}, \cM_{\infty_{\cP}})$ and
$\cP = (\underline{\cP},
\cM_{\cC}|_{\underline{\cP}}\oplus_{\cO^*}\cM_{\infty_{\cP}})$ the
corresponding log stacks where $\cM_{\cC}|_{\underline{\cP}}$ is the
pull-back of $\cM_{\cC}$.
There is a natural projection
\begin{equation}\label{equ:compact-r-spin}
\cP \to \cC.
\end{equation}
\begin{definition}\label{def:r-spin-field}
A \emph{log $r$-spin field} over a log scheme $S$ consists of
\[
(\cC \to S, \cL, f\colon \cC \to \cP)
\]
where
\begin{enumerate}
\item $(\cC \to S, \cL)$ is a log $r$-spin curve over $S$.
\item $f$ is a section of $\cP \to \cC$.
\end{enumerate}
It is called \emph{stable} if
$\omega^{\log}_{\cC/S}\otimes f^*\cO(0_{\cP})^{k}$ is positive for
$k \gg 0$.
The \emph{pull-back} of a log $r$-spin field is defined as usual via
pull-back of log curves.
\end{definition}
\subsubsection{Associated log map of log $r$-spin fields}
Note that giving a log $r$-spin field $f\colon \cC \to \cP$ is equivalent to giving an \emph{associated log map}
\begin{equation}\label{equ:reduce-target}
\cC \to \cP'.
\end{equation}
In fact, the inclusion $\cM_{\infty_{\cP}} \to \cM_{\cC}|_{\underline{\cP}}\oplus_{\cO^*}\cM_{\infty_{\cP}}$ defines a natural morphism $\cP \to \cP'$. Thus (\ref{equ:reduce-target}) is given by the composition
$
\cC \to \cP \to \cP'.
$
On the other hand, given a morphism (\ref{equ:reduce-target}) we recover the $r$-spin field $f$ via
$
\cC \to \cP'\times_{\underline{\cC}}\cC =: \cP.
$
For convenience, we may use $f$ for the corresponding log map (\ref{equ:reduce-target}) when there is no danger of confusion.
\begin{definition}\label{def:umd-spin-field}
A log $r$-spin field has {\em uniform maximal degeneracy} if its associated log map has {\em uniform maximal degeneracy}.
It is called {\em minimal (with uniform maximal degeneracy)} if the associated log map (\ref{equ:reduce-target}) is minimal (with uniform maximal degeneracy).
\end{definition}
\subsubsection{The discrete data of log $r$-spin fields}
The discrete data of log $r$-spin fields is given by
\begin{equation}\label{equ:spin-data}
\beta:=(g, \vgamma = (\gamma_i)_{i=1}^{n}, {\bf c} = (c_i)_{i=1}^{n})
\end{equation}
where
\begin{enumerate}
\item $g$ is the genus.
\item $\gamma_i$ is the monodromy representation at the $i$-th marking.
\item $c_i$ is the contact order of the associated log map at the $i$-th marking.
\end{enumerate}
Compared to the discrete data in (\ref{discretedata}), the above (\ref{equ:spin-data}) does not specify the curve class. However, since we only allow sections, the curve class is uniquely determined by the collection of contact orders ${\bf c}$:
\begin{equation}\label{equ:fields-class}
A = [0_{\cP}] + \sum_{i=1}^{n} c_i \cdot [\cP_{\sigma_i}]
\end{equation}
where $0_{\cP}$ is the zero section of the projection $\cP \to \cC$, and $\cP_{\sigma_i}$ is the fiber over the $i$-th marking $\sigma_i$.
\subsubsection{Automorphisms of minimal stable log $r$-spin fields}
An automorphism of a log $r$-spin field can be defined similarly as in Section \ref{sss:finite-auto} by taking into account the automorphisms on the target $\cP$ induced by the automorphisms of the curve.
\begin{proposition}\label{prop:r-spin-field-finite-auto}
Consider a log $r$-spin field $f\colon \cC \to \cP$ over $S$ with $\underline{S}$ a geometric point. Suppose it is minimal (with uniform maximal degeneracy). Then its automorphism group is finite.
\end{proposition}
\begin{proof}
By Propositions~\ref{prop:finite-auto} and
\ref{prop:finite-auto-umd}, it suffices to show that the underlying
structure
$(\underline{\cC}, \cL, \underline{f}\colon \underline{\cC} \to
\underline{\cP})$ has finite automorphisms.
To simplify the exposition, we abuse notation and write
$(\cC, \cL, f\colon \cC \to \cP)$ for
$(\underline{\cC}, \cL, \underline{f}\colon \underline{\cC} \to
\underline{\cP})$ throughout this proof.
For this, it suffices to prove that $f_i\colon \cC_i \to \cP$ has
finitely many automorphisms for any irreducible component
$\cC_i \subset \cC$ with all special points of $\cC_i$ marked.
Since $\cP \to \cC$ is representable, $f_i$ has finitely many
automorphisms when $\cC_i$ is a stable curve.
It remains to consider the situation when $\cC_i$ is of genus zero
and has at most two special points.
In these cases $\omega_{\cC_i}^{\log}$ and hence $\cL$ have
non-positive degree.
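Indeed, if $s \le 2$ denotes the number of special points of such a genus-zero component, then
\[
\deg \cL|_{\cC_i} = \frac{1}{r}\deg \omega_{\cC_i}^{\log} = \frac{2\cdot 0 - 2 + s}{r} \le 0.
\]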
Furthermore, $f_i$ cannot be the zero or infinity section, since
$f_i^*\cO(0_{\cP})$ would then have non-positive degree.
For stability, $f_i^*\cO(0_{\cP})$ must have positive degree, so that
$f_i$ intersects the zero section properly.
Suppose first that $\cC_i$ has at most one special point.
Then $\cL$ has negative degree along $\cC_i$, so $f_i(\cC_i)$ must
also intersect the infinity section properly in at least one point.
We may view these intersection point(s) as additional marking(s),
since they are preserved by automorphisms.
It therefore suffices to consider the case that $\cC_i$ has exactly
two special points, in which case $\cL$ is trivial, and we may view
$f_i$ as a stable map to $\mathbb{P}^1$.
But since $f_i$ has non-trivial intersection with the zero section,
this stable map and hence $f_i$ have finitely many automorphisms.
\end{proof}
\subsubsection{The stacks of log $r$-spin fields}\label{sss:r-spin-field-stack}
Let $\SF_{\ddata}^{1/r}$ be the category of stable log $r$-spin fields with the discrete data $\beta$, fibered over the category of log schemes. Let $\USF_{\ddata}^{1/r} \subset \SF_{\ddata}^{1/r}$ be the subcategory consisting of objects with uniform maximal degeneracy. We next show:
\begin{theorem}\label{thm:spin-fields-moduli}
The two categories $\USF_{\ddata}^{1/r}$ and $\SF_{\ddata}^{1/r}$ are represented by proper log Deligne--Mumford stacks.
\end{theorem}
For later use, we introduce the stack $\cS$ over
$\fM_{g,\vgamma}^{1/r}$, which associates, to each strict morphism
$T \to \fM_{g,\vgamma}^{1/r}$, the category of sections
$\underline{f}$ of the underlying projective bundle
$\underline{\cP}_T := \mathbb{P}(\cL_{T}\oplus\cO_{\underline{\cC}_T}) \to
\underline{\cC}_T$ with the curve class given by
\eqref{equ:fields-class}.
Here $(\underline{\cC}_T \to \underline{T}, \cL_{T})$ is the spin
structure given by $T \to \fM_{g,\vgamma}^{1/r}$.
We may view $\cS$ as a log stack with the strict morphism to
$\fM_{g,\vgamma}^{1/r}$.
Note that $\cS$ is an open substack of the stack parameterizing
twisted stable maps with the family of targets
$\underline{\cP}_{\fM_{g,\vgamma}^{1/r}} \to \fM_{g,\vgamma}^{1/r}$,
as requiring $\underline{f}$ to be a section of
$\underline{\cP}_T \to \underline{\cC}_T$ is an open condition.
The following is a consequence of \cite[Theorem 1.4.1]{AV02}:
\begin{lemma}\label{lem:algebraicity-usual-section}
The stack $\cS$ is algebraic and locally of finite type. Furthermore, the tautological morphism $\cS \to \fM_{g,\vgamma}^{1/r}$ is proper and of Deligne--Mumford type.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm:spin-fields-moduli}]
By Theorem \ref{thm:max-uni-moduli}, the tautological morphism
\[
\USF_{\ddata}^{1/r} \to \SF_{\ddata}^{1/r}
\]
is proper, log \'etale, and representable by algebraic spaces of finite type. Thus to prove Theorem \ref{thm:spin-fields-moduli}, it remains to prove the statements for $\SF_{\ddata}^{1/r}$ only. We first verify the representability.
Consider the tautological morphism obtained by removing the log structures
\[
\SF_{\ddata}^{1/r} \to \cS.
\]
By Proposition \ref{prop:uni-min-stack}, this morphism is representable by algebraic stacks locally of finite type. The algebraicity of $\cS$ in Lemma \ref{lem:algebraicity-usual-section} implies that the stack $\SF_{\ddata}^{1/r}$ is also algebraic and locally of finite type. Proposition \ref{prop:r-spin-field-finite-auto} further implies that $\SF_{\ddata}^{1/r}$ is a Deligne--Mumford stack.
It remains to prove the properness. We will divide this into two parts: the boundedness part will be proved in Section \ref{ss:r-spin-field-boundedness}, and the valuative criterion will be checked in Section \ref{ss:valuative}.
\end{proof}
\subsection{Boundedness}\label{ss:r-spin-field-boundedness}
We next prove the following result:
\begin{proposition}\label{prop:spin-field-boundedness}
The stack $\SF_{\ddata}^{1/r}$ is of finite type.
\end{proposition}
Consider the tautological morphism
\begin{equation}\label{equ:take-coarse-curve}
\SF^{1/r}_{\ddata} \to \fM_{g,n}
\end{equation}
by taking the corresponding coarse curves. Using the above morphism, the proof of Proposition \ref{prop:spin-field-boundedness} splits into the following two lemmas.
\begin{lemma}
The tautological morphism (\ref{equ:take-coarse-curve}) is of finite type.
\end{lemma}
\begin{proof}
Note that the morphism (\ref{equ:take-coarse-curve}) is given by the composition
\[
\SF^{1/r}_{\ddata} \to \cS \to \fM_{g,\vgamma}^{1/r} \to \fM_{g,n}
\]
where the middle arrow is of finite type by Lemma \ref{lem:algebraicity-usual-section}, and the right arrow is of finite type by Corollary \ref{cor:r-spin-finite}. It remains to show that the morphism $\SF^{1/r}_{\ddata} \to \cS$ is of finite type.
Let $T \to \cS$ be any strict morphism from a log scheme $T$ of finite type, and write $\SF_T := \SF^{1/r}_{\ddata}\times_{\cS}T$. It suffices to show that $\SF_T$ is of finite type. By Proposition \ref{prop:boundedness}, it suffices to show that the discrete data $\beta$ is combinatorially finite over $T$; see Definition \ref{def:combinatorially-finite}. We prove this by applying a strategy similar to \cite[Proposition 5.3.1]{Ch14}.
Denote by $\underline{f}_T$ the universal section of
$\underline{\cP}_T \to \underline{\cC}_T$ over $\underline{T}$.
As $T$ is of finite type, there are finitely many dual graphs for
geometric fibers of the source curve
$\underline{\cC}_T \to \underline{T}$.
Let $\underline{G}$ be any such dual graph of $\underline{\cC}_t$ for
some geometric point $t \to T$.
It remains to show that there are only finitely many choices of log
combinatorial types as in (\ref{equ:combinatorial-type}) with the
given dual graph $\underline{G}$.
Note that the partition
$V(\underline{G}) = V^n(\underline{G}) \sqcup V^{d}(\underline{G})$ as
in (\ref{equ:combinatorial-type}) is uniquely determined by the
underlying section $\underline{f}_t$.
Indeed, $V^{d}(\underline{G})$ consists of the vertices corresponding
to irreducible components whose images under $\underline{f}_t$ are
contained in the infinity section of $\cP_{t}$.
The contact orders along the marked points are determined by $\beta$.
Since $\underline{G}$ is a finite graph, there are also only finitely
many choices of the partial ordering $\poleq$ on $V(\underline{G})$.
We fix such a choice, denoted again by $\poleq$.
It remains to show that there are finitely many choices of contact
orders at the nodes.
Let $Z \subset \underline{\cC}_t$ be an irreducible component. Let $q \in \underline{\cC}_t$ be a nodal point joining $Z$ with another irreducible component $Z'$. We call $q$ an {\em incoming node} if $Z' \poleq Z$, and an {\em outgoing node} if $Z \poleq Z'$. The same discussion as in \cite[Proposition 5.2.4]{Ch14} implies that
\begin{equation}\label{equ:degree-contact-orders}
\deg \underline{f}^*(\infty_{\cP})|_{Z} = \sum_{q\text{ incoming node}} c_q - \sum_{q\text{ outgoing node}} c_q,
\end{equation}
where $c_q$ is the contact order at the node $q$. Note that if a node $q$ is both incoming and outgoing, then necessarily $c_q = 0$.
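As an illustration of how (\ref{equ:degree-contact-orders}) constrains the contact orders: if a component $Z$ carries a single node $q$ and $q$ is incoming, then the formula reduces to
\[
c_q = \deg \underline{f}^*(\infty_{\cP})|_{Z},
\]
so $c_q$ is determined by the degree of $\underline{f}^*(\infty_{\cP})$ along $Z$. The inductive argument below propagates this finiteness through a partition of $V(\underline{G})$.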
To bound the choices of contact orders at the nodes, we construct a partition:
\[
V(\underline{G}) = V_1 \sqcup V_2 \sqcup \cdots \sqcup V_k
\]
inductively as follows. First, we choose $V_1$ to be the collection of
largest elements in $V(\underline{G})$ with respect to $\poleq$.
Supposing that $V_1, \cdots, V_i$ are chosen, we choose
$V_{i+1} \subset V(\underline{G}) \setminus (\cup_{j=1}^{i}V_j)$ to be
the collection of largest elements with respect to $\poleq$.
By construction, a node $q$ joining component(s) in the same $V_i$
must have $c_q = 0$. Let $Z_1$ be any component corresponding to an
element in $V_1$. Then $Z_1$ has only incoming node(s).
By (\ref{equ:degree-contact-orders}), the choices of contact orders at
these nodes are finite, as contact orders are non-negative integers.
In particular, there are finitely many choices for the contact orders
of the outgoing nodes attached to components of $V_2$.
Now suppose the number of choices of contact orders of the outgoing
nodes attached to components of $V_i$ is finite.
Using (\ref{equ:degree-contact-orders}) and the condition that contact
orders are non-negative integers again, we conclude that the incoming
nodes of components of $V_i$, hence the outgoing nodes of components
of $V_{i+1}$ have finitely many choices of contact orders.
By induction, the number of choices of contact orders at each node is
finite.
This finishes the proof.
\end{proof}
\begin{lemma}
The image of the morphism $\SF^{1/r}_{\ddata} \to \fM_{g,n}$ is
contained in an open substack of finite type.
\end{lemma}
\begin{proof}
To bound the image of $\SF^{1/r}_{\ddata} \to \fM_{g,n}$, it suffices to show that the number of rational components of the fibers over $\SF^{1/r}_{\ddata}$ is bounded. For this, it suffices to show that the numbers of unstable components of the source curves are bounded.
Consider any geometric point $t \to \SF^{1/r}_{\ddata}$ with the fiber $f_t\colon \cC_t \to \cP_t$. By the stability of Definition \ref{def:r-spin-field}, the line bundle $f_t^*(\cO(0_{\cP_t}))$ has non-negative degree along each component of $\cC_t$, and positive degree along each unstable component. Furthermore, since the spin bundle $\cL_t$ is representable, the degree of $f_t^*(\cO(0_{\cP_t}))$ along each unstable component is at least $\frac{1}{r}$. Since $\deg f_t^*(\cO(0_{\cP_t})) = (2g-2 + n)/r$, the number of unstable components of $\cC_t$ is at most $2g-2 + n$.
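The count may be summarized by the estimate
\[
\#\{\text{unstable components of } \cC_t\}\cdot\frac{1}{r} \;\le\; \deg f_t^*(\cO(0_{\cP_t})) \;=\; \frac{2g-2+n}{r},
\]
where the inequality uses that $f_t^*(\cO(0_{\cP_t}))$ has non-negative degree on every component.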
\end{proof}
\subsection{Valuative criterion}\label{ss:valuative}
Let $R$ be a discrete valuation ring, $m_R \subset R$ be its maximal ideal, and $K$ be its quotient field. Let $( \cC_{\eta} \to \eta, \cL, f_{\eta}\colon \cC_{\eta} \to \cP_{\eta})$ be a minimal stable object over $\eta = (\spec K, \cM_{\eta})$. Possibly after a finite extension of $R$, we wish to uniquely extend $f_{\eta}$ to a family $f\colon \cC \to \cP$ over $S = (\spec R, \cM_{S})$.
\subsubsection{Reduce to the case of nondegenerate irreducible generic fiber.}
By Proposition \ref{prop:log-map-valuative}, it suffices to extend the underlying section $\underline{f}_{\eta}$ to a family of sections $\underline{f}$ over $\spec R$. Taking the partial normalization along the splitting nodes of $\underline{\cC}_{\eta}$ and labeling them, it suffices to extend $\underline{f}_{\eta}$ over each irreducible component. Thus we may assume that $\underline{\cC}_{\eta}$ is irreducible.
We may further assume that the image of $\underline{f}_{\eta}$ is not entirely contained in $0_{\cP_{\eta}}$ or $\infty_{\cP_{\eta}}$, as otherwise we may simply extend $\underline{f}_{\eta}$ as $0_{\cP_{\eta}}$ or $\infty_{\cP_{\eta}}$ respectively. Passing to a finite extension if necessary, we may assume that $\underline{f}_{\eta}$ intersects $0_{\cP_{\eta}}$ and $\infty_{\cP_{\eta}}$ properly along $\underline{\eta}$-points of $\underline{\cC}_{\eta}$.
As the log structures are irrelevant for extending the underlying structure, we will drop the underline in this section for simplicity, and all stacks are assumed to be underlying stacks unless otherwise specified. It remains to prove the following result.
\begin{proposition}\label{prop:valuative}
Let $(\cC_{\eta}, \cL_{\eta})$ be an irreducible $r$-spin curve, and let $f_{\eta}$ be a section of $\cP_{\eta} := \mathbb{P}(\cL_{\eta}\oplus\cO_{\cC_{\eta}}) \to \cC_{\eta}$. Denote by $0_{\cP_{\eta}}$ and $\infty_{\cP_{\eta}}$ the zero and infinity sections of $\cP_{\eta}$. Suppose that
\begin{enumerate}
\item $f_{\eta}$ is neither the zero nor the infinity section.
\item $f_{\eta}$ intersects the infinity section only along marked points.
\item $\omega^{\log}_{\cC_{\eta}}\otimes f_{\eta}^{*}(\cO(0_{\cP_{\eta}}))^{k}$ is positive for $k\gg 0$.
\end{enumerate}
Possibly after a finite extension, there is a unique $r$-spin curve $(\cC, \cL)$ over $\spec R$ and a section $f$ of $\cP := \mathbb{P}(\cL\oplus\cO_{\cC}) \to \cC$ extending the triple $(\cC_{\eta}, \cL_{\eta}, f_{\eta})$ such that $\omega^{\log}_{\cC}\otimes f^{*}(\cO(0_{\cP}))^{k}$ is positive for $k\gg 0$.
\end{proposition}
\begin{remark}
In the above proposition, marked points are allowed to be broad, namely the inertia group along the marking can be trivial.
\end{remark}
\begin{notation}
In this section, we call $(\cC_i, \cL_i, f_i)$ an \emph{$r$-spin
triple} if $(\cC_i, \cL_i)$ is an $r$-spin curve over $\spec R$,
and $f_i$ is a section of $\cP_i := \mathbb{P}(\cL_i\oplus \cO) \to \cC_i$.
Their generic fibers over $\eta$ are decorated by subscripts $\eta$.
\end{notation}
We state a useful tool:
\begin{lemma}\label{lem:map-extending}
Consider an $r$-spin curve $(\cC_{\eta}, \cL_{\eta})$ with coarse moduli $\cC_{\eta} \to C_{\eta}$, together with a section $f_{\eta}$ of $\mathbb{P}(\cL_{\eta}\oplus \cO_{\cC_{\eta}}) \to \cC_{\eta}$. Let $C \to \spec R$ be a pre-stable curve extending $C_{\eta}$. Possibly after a finite base change, there is a unique $r$-spin curve $(\cC, \cL)$ over $\spec R$ extending $(\cC_{\eta}, \cL_{\eta})$, and a unique twisted stable map $f\colon \cC' \to \cP := \mathbb{P}(\cL\oplus \cO_{\cC})$ over $\spec R$ extending $f_{\eta}$.
\end{lemma}
\begin{proof}
To prove the statement, we apply properness of twisted stable maps twice, see \cite{AV02}. First, we extend the $r$-spin structure using the twisted stable map point of view as in (\ref{diag:spin-map-correspondence}). We then extend $f_{\eta}$ to $f$ as twisted stable maps.
\end{proof}
\subsubsection{Construct an extension with auxiliary markings}
Denote by $\Lambda$ the set of markings of $\cC_{\eta}$. Taking a finite base change if necessary, we may assume that $f_{\eta}$ intersects $0_{\cP_{\eta}}$ properly along $\eta$-points of $\cC_{\eta}$. Denote by $\Lambda_0$ the set of these intersection points which are non-marked in $\cC_{\eta}$. Let $\cC'_{\eta}$ be the marked curve given by $\cC_{\eta}$ together with the set of markings $\Lambda_0 \cup \Lambda$.
Let $\cC'_{\eta} \to C'_{\eta}$ be the coarse moduli morphism. Possibly after a finite base change, let
\begin{equation}\label{equ:any-extension}
C'_1 \to \spec R
\end{equation}
be any family of pre-stable curves with the set of markings
$\Lambda \cup \Lambda_0$ extending $C'_{\eta}$.
Let $C_1 \to \spec R$ be the family of pre-stable curves obtained by
removing the set of markings $\Lambda_0$ from $C'_1$.
By Lemma~\ref{lem:map-extending}, we obtain an $r$-spin curve
$(\cC_1, \cL_1) \to \spec R$ extending $(\cC_{\eta}, \cL_{\eta})$ with
the coarse moduli $\cC_1 \to C_1$.
Let $\tilde{\cC}_1 \to \cC_1$ be the $r$-th root stack along the
markings in $\Lambda_{0}$.
Then $\tilde{\cC}_1$ has the set of markings $\Lambda\cup \Lambda_0$.
Consider the line bundle over $\tilde{\cC}_1$:
\begin{equation}\label{equ:spin-extra-twist}
\tilde{\cL}_1 = \cL_{1}|_{\tilde{\cC}_1}\otimes\cO_{\tilde{\cC}_1}(\sum_{x\in\Lambda_0}x).
\end{equation}
One verifies directly that
$(\tilde{\cL}_1)^{\otimes r} \cong \omega^{\log}_{\tilde{\cC}_1/\spec
R}$, hence $(\tilde{\cC}_1, \tilde{\cL}_1)$ is an $r$-spin curve
over $\spec R$.
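Explicitly, write $\pi\colon \tilde{\cC}_1 \to \cC_1$ for the projection, and abuse $x$ to denote both a marking in $\Lambda_0$ and the corresponding divisor on $\tilde{\cC}_1$, so that $\pi^{*}x = rx$ by the definition of the root stack. Then
\[
\tilde{\cL}_1^{\otimes r}
\cong \cL_{1}^{\otimes r}\big|_{\tilde{\cC}_1}\otimes\cO_{\tilde{\cC}_1}\Big(r\sum_{x\in\Lambda_0}x\Big)
\cong \omega^{\log}_{\cC_1/\spec R}\big|_{\tilde{\cC}_1}\otimes\cO_{\tilde{\cC}_1}\Big(\sum_{x\in\Lambda_0}\pi^{*}x\Big)
\cong \omega^{\log}_{\tilde{\cC}_1/\spec R},
\]
where the last isomorphism holds because $\omega^{\log}_{\tilde{\cC}_1/\spec R}$ takes the markings in $\Lambda_0$ into account.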
Denote by $\tilde{\cP}_1 := \mathbb{P}(\tilde{\cL}_1\oplus\cO)$. As $\tilde{\cC}_{1,\eta}$ is irreducible, the section $f_{\eta}$ induces a section $\tilde{f}_{1,\eta}$ of $\tilde{\cP}_{1,\eta} \to \tilde{\cC}_{1,\eta}$, which is neither the zero nor the infinity section by assumption.
\begin{lemma}\label{lem:ext-with-extra-marking}
With the notation above, possibly after a finite extension, there is an $r$-spin triple $(\tilde{\cC}_2, \tilde{\cL}_2,\tilde{f}_2)$ extending $(\tilde{\cC}_{1,\eta}, \tilde{\cL}_{1,\eta}, \tilde{f}_{1,\eta})$.
\end{lemma}
\begin{proof}
Observe that $\tilde{f}_{1,\eta}$ intersects the zero and infinity sections of $\tilde{\cP}_{1,\eta}$ only along markings in $\Lambda\cup\Lambda_0$. Possibly after a further finite base change, we obtain a twisted stable map $\tilde{f}_2\colon \tilde{\cC}_2 \to \tilde{\cP}_1$ extending $\tilde{f}_{1, \eta}$.
We claim that the composition $\tilde{\cC}_2 \to \tilde{\cP}_1 \to \tilde{\cC}_1$ contracts only rational components with precisely two special points. Let $Z \subset \tilde{\cC}_2$ be a contracted component. Then $Z$ cannot be a rational tail:
otherwise, stability of the map would force $\tilde{f}_2|_{Z}$ to
surject onto a fiber of $\tilde{\cP}_1 \to \tilde{\cC}_1$, and since
over the generic point all intersections with the zero and infinity
sections occur along markings, $Z$ would contain at least two special
points.
Suppose $Z$ has at least three special points. Then two of these special points are either marked points or nodes joining $Z$ with trees of rational components contracted to a point of $\tilde{\cC}_1$. The above discussion implies that each such tree contains at least one marked point. This is impossible, since $\tilde{\cC}_2 \to \tilde{\cC}_1$ preserves marked points.
Since $\tilde{\cC}_2 \to \tilde{\cC}_1$ contracts only rational
bridges and is compatible with markings, we have
$\omega^{\log}_{\tilde{\cC}_2/\spec R} =
\omega^{\log}_{\tilde{\cC}_1/\spec R}|_{\tilde{\cC}_2}$.
We check that
$(\tilde{\cC}_2, \tilde{\cL}_2 := \tilde{\cL}_1|_{\tilde{\cC}_{2}})$
is an $r$-spin curve over $\spec R$, hence
$\tilde{\cP}_1|_{\tilde{\cC}_2} = \tilde{\cP}_2 :=
\mathbb{P}(\tilde{\cL}_2\oplus\cO)$.
Thus $\tilde{f}_1$ pulls back to a section $\tilde{f}_2$ of
$\tilde{\cP}_2 \to \tilde{\cC}_2$ as needed.
\end{proof}
\subsubsection{Remove auxiliary markings}
\begin{lemma}\label{lem:remove-extra-marking}
Let $(\tilde{\cC}_2, \tilde{\cL}_2,\tilde{f}_2)$ be as in
Lemma~\ref{lem:ext-with-extra-marking}.
Let $\tilde{\cC}_2 \to \cC_2$ be obtained by first rigidifying along
markings in $\Lambda_0$, then removing
$\Lambda_0$ from the set of markings.
Then there is an $r$-spin triple $(\cC_2, \cL_2, f_2)$ extending
$(\cC_{\eta}, \cL_{\eta}, f_{\eta})$ such that
\begin{enumerate}
\item $\tilde{\cL}_2 = \cL_2|_{\tilde{\cC}_2}\otimes\cO_{\tilde{\cC}_2}(\sum_{x\in\Lambda_0}x)$,
\item $f_{2}$ and $\tilde{f}_2$ are isomorphic away from the sections in $\Lambda_0$,
\item $f_{2}$ sends sections in $\Lambda_0$ to the zero section of $\cP_2 := \mathbb{P}(\cL_2\oplus \cO)$.
\end{enumerate}
\end{lemma}
\begin{proof}
We first construct the spin bundle $\cL_2$. Let $\cC_2 \to C_2$ be
the coarse moduli morphism.
Then $C_2$ over $\spec R$ extends $C_{\eta}$ as a family of
pre-stable curves with the set of markings $\Lambda$.
By Lemma~\ref{lem:map-extending}, we obtain an $r$-spin curve
$(\cC_3, \cL_3)$ over $\spec R$ extending $(\cC_{\eta}, \cL_{\eta})$.
Let $\tilde{\cC}_3 \to \cC_3$ be the $r$-th root construction along
sections in $\Lambda_0$, and view $\tilde{\cC}_3$ as a family of
pre-stable curves with markings $\Lambda\cup\Lambda_0$.
Consider the line bundle
$ \tilde{\cL}_3 :=
\cL_{3}|_{\tilde{\cC}_3}\otimes\cO_{\tilde{\cC}_3}(\sum_{x \in
\Lambda_0}x)$ over $\tilde{\cC}_3$.
We check that $\tilde{\cL}_3$ is an $r$-spin bundle over
$\tilde{\cC}_3$.
Since
$\tilde{\cC}_{3,\eta} = \tilde{\cC}_{1,\eta} =
\tilde{\cC}_{2,\eta}$, the $r$-spin structure
$(\tilde{\cC}_3, \tilde{\cL}_3)$ over $\spec R$ extends
$(\tilde{\cC}_{2,\eta}, \tilde{\cL}_{2,\eta} =
\tilde{\cL}_{1,\eta})$.
As both $\tilde{\cC}_2$ and $\tilde{\cC}_3$ have the same coarse
curve $C_2$, by the uniqueness of Lemma~\ref{lem:map-extending}, we
conclude that
$(\tilde{\cC}_{3},\tilde{\cL}_3) \cong
(\tilde{\cC}_{2},\tilde{\cL}_2)$, hence $\cC_3 = \cC_2$ and
$\cL_2 := \cL_3$.
This proves (1).
We now construct the section $f_2$. Possibly after a finite extension, $f_{\eta}$ extends to a twisted stable map $\hat{f}_2\colon \hat{\cC}_2 \to \cP_2$. Consider the following commutative diagram
\[
\xymatrix{
\tilde{\cP}_2 \ar@{-->}[rr] \ar[d] && \cP_2 \ar[d] && \\
\tilde{\cC}_2 \ar@/^1pc/[u]^{\tilde{f}_2} \ar[rr] && \cC_2 && \hat{\cC}_2 \ar[ll] \ar[llu]_{\hat{f}_2}
}
\]
where the dashed arrow is a rational map which is well-defined and
isomorphic away from the fibers over sections in $\Lambda_0$.
Thus away from the preimages of $\hat{\cC}_2 \to \cC_2$ over
$\Lambda_0$, the morphism $\hat{f}_2$ is isomorphic to $\tilde{f}_2$.
The morphism $\hat{\cC}_2 \to \cC_2$ is a contraction of rational
components over $\Lambda_0$ of the closed fiber.
Let $Z_x \subset \hat{\cC}_2$ be the preimage of a point $x \in \cC_{2,\spec R/m_R}$ in $\Lambda_0$. Suppose $Z_x$ is not a point. Then $\hat{f}_2|_{Z_x}$ surjects onto a fiber of $\cP_2 \to \cC_2$, hence intersects the infinity section non-trivially. This is impossible, as such intersection point(s) must be markings in $\Lambda$, which are disjoint from the sections in $\Lambda_0$. Thus $\hat{\cC}_2 \to \cC_2$ is an isomorphism, and $f_2 := \hat{f}_2$ is a section of $\cP_2 \to \cC_2$. This proves (2).
The third statement follows from the fact that $f_{2,\eta} = f_{\eta}$ sends sections in $\Lambda_0$ to the zero section of $\cP_{2} \to \cC_2$.
\end{proof}
\subsubsection{Contract unstable components}
Let $\cC_2 \to C_2$ be the coarse moduli morphism where $C_2$ is a
family of pre-stable curves over $\spec R$ with the set of markings
$\Lambda$.
An irreducible component $\cZ \subset \cC_2$ is \emph{unstable} if
$\omega^{\log}_{\cC_2}\otimes f^{*}(\cO(0_{\cP_2}))^{k}$ fails to be
positive on $\cZ$ for $k\gg 0$.
Let $Z \subset C_2$ be the image of $\cZ$.
Then $Z$ is \emph{unstable} if $\cZ$ is so.
Note that all unstable components are over the closed point
$\spec R/m_R$, and are rational components with at most two markings.
\begin{lemma}\label{lem:stabilize-bridge}
Let $C_2 \to C_3$ be a contraction of an unstable component $Z$ with two special points. Possibly after a further finite base change, we obtain an $r$-spin triple $(\cC_3, \cL_3,f_3)$ extending $(\cC_{\eta}, \cL_{\eta}, f_{\eta})$ such that
\begin{enumerate}
\item $\cC_3 \to C_3$ is the coarse moduli morphism.
\item $\cC_2 \to \cC_3$ contracts $\cZ \subset \cC_2$ to a point.
\item $\cP_{2} = \cP_{3}\times_{\cC_3}\cC_2$ and $f_{2}$ is the pull-back of $f_3$.
\end{enumerate}
Here we do not require that $f_2$ is of the form in Lemma \ref{lem:remove-extra-marking}.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:map-extending}, we obtain an $r$-spin curve $(\cC_3, \cL_{3})$ over $\spec R$ extending $(\cC_{\eta}, \cL_{\eta})$ with the coarse moduli morphism $\cC_3 \to C_3$. Consider the cartesian diagram of solid arrows
\[
\xymatrix{
\cC_2 \ar@{-->}[r] \ar@{-->}[rd] & \hat{\cC}_2 \ar[r] \ar[d] & (C_2, \omega^{\log}_{C_2/\spec R})^{1/r} \ar[d] \ar[r] & C_2 \ar[d] \\
& \cC_3 \ar[r] & (C_3, \omega^{\log}_{C_3/\spec R})^{1/r} \ar[r] & C_3
}
\]
The square on the right is cartesian as
$\omega^{\log}_{C_2/\spec R} = \omega^{\log}_{C_3/\spec R}|_{C_2}$.
Let $\cC''_2 \to \hat{\cC}_2$ be the twisted stable map extending the
isomorphism $\cC_{2,\eta} \to \cC_{3,\eta}$.
By pulling back the universal $r$-th root via
$\cC''_2 \to \hat{\cC}_2 \to (C_2, \omega^{\log}_{C_2/\spec
R})^{1/r}$, we obtain an $r$-spin bundle $\cL''_2$ over $\cC''_2$
extending $\cL_{\eta}$. By uniqueness of Lemma
\ref{lem:map-extending}, we obtain
$(\cC''_2, \cL''_2) = (\cC_2,\cL_2)$ hence the dashed horizontal arrow
as above. The skew dashed arrow is then the composition
$\cC_2 \to \hat{\cC}_2 \to \cC_3$. Thus (2) follows.
It follows from the above construction that $\cL_2 = \cL_{3}|_{\cC_2}$. Hence $\cP_{2}$ is the pull-back of $\cP_{3}$ via $\cC_2 \to \cC_3$, and $f_2$ is the pull-back of $f_3$. This proves (3).
\end{proof}
We next remove unstable rational tails. We first prove
\begin{lemma}\label{lem:unstable-tail-vanishing}
Choosing the extension (\ref{equ:any-extension}) appropriately, we may assume that all unstable rational tails of $\cC_2$ in Lemma \ref{lem:remove-extra-marking} are contained in the zero section of $\cP_2$ via $f_2$.
\end{lemma}
\begin{proof}
Let $\cZ \subset \cC_2$ be an unstable rational tail not contained in the zero section of $\cP_2$. Then $\cZ$ contains no section from $\Lambda_0$. Since $\cZ$ comes from a component $\tilde{\cZ} \subset \tilde{\cC}_2$ and $\tilde{f}_2$ is a twisted stable map, $\cZ$ maps to a rational tail $Z \subset C_1$. Blowing down $Z$, we obtain another extension (\ref{equ:any-extension}) with the same set of sections $\Lambda\cup\Lambda_0$.
\end{proof}
We then contract the unstable rational tails inductively as follows.
\begin{lemma}\label{lem:stabilize-tail}
Let $(\cC_2, \cL_2,f_2)$ be an $r$-spin triple extending $(\cC_{\eta},\cL_{\eta}, f_{\eta})$. Suppose all unstable rational tails of $\cC_2$ are contained in the zero section of $\cP_{2} \to \cC_2$ via $f_2$. Let $\cC_2 \to C_2$ be the coarse moduli morphism, and $C_2 \to C_3$ be the contraction of an unstable rational tail $Z$. Then there is a triple $(\cC_3, \cL_3, f_3)$ extending $(\cC_{\eta},\cL_{\eta}, f_{\eta})$ with coarse moduli morphism $\cC_3 \to C_3$ such that all unstable rational tails of $\cC_3$ are contained in the zero section of $\cP_{3} = \mathbb{P}(\cL_3 \oplus \cO)$ via $f_3$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:map-extending}, we obtain the $r$-spin curve $(\cC_3, \cL_3)$ extending $(\cC_{\eta}, \cL_{\eta})$ with the coarse moduli morphism $\cC_3 \to C_3$, and a twisted stable map $f_4\colon \cC_4 \to \cP_3$ over $\spec R$ extending $f_{\eta}$. Let $x$ be the image of $Z \to C_3$. As the pairs $(\cC_2, \cL_2)$ and $(\cC_3, \cL_3)$ are isomorphic away from the preimage of $Z$ and $x$, $f_4$ and $f_2$ are isomorphic away from the preimage of $Z$ and $x$. Thus the composition $\cC_4 \to \cP_3 \to \cC_3$ is a contraction of rational components $\cZ_x$ over $x \in \cC_3$. Suppose $\cZ_x$ is not a point. Since $f_4$ is a twisted stable map, $f_4|_{\cZ_x}$ must intersect the infinity section along a marking in $\Lambda$. This is a contradiction.
\end{proof}
We start with an $r$-spin triple as in Lemmas \ref{lem:remove-extra-marking} and \ref{lem:unstable-tail-vanishing}, and inductively apply Lemmas \ref{lem:stabilize-bridge} and \ref{lem:stabilize-tail} by contracting unstable components. After finitely many steps, we obtain $(\cC, \cL, f)$ as in Proposition \ref{prop:valuative}.
\subsubsection{Separatedness}
Consider stable extensions $(\cC_i, \cL_i, f_i)$ of
$(\cC_{\eta}, \cL_{\eta}, f_{\eta})$ for $i=1,2$.
Let $\cC_i \to C_i$ be the coarse moduli for $i=1,2$.
By Lemma \ref{lem:map-extending}, it suffices to show that there is an
isomorphism $C_1 \cong C_2$ extending the one over $\eta$.
Let $C_3$ be a family of prestable curves over $\spec R$ extending
$C_{\eta}$ with dominant morphisms $C_3 \to C_i$ for $i=1,2$.
We may assume that $C_3$ has no rational components with at most two
special points that are contracted in both $C_1$ and $C_2$, by further
contracting any such components.
Let $\cC_3' \to \cC_1\times \cC_2\times C_3$ be the family of twisted stable maps over $\spec R$ extending the obvious one $\cC_\eta \to \cC_1\times \cC_2\times C_3$. Observe that the composition $\cC_3' \to \cC_1\times \cC_2\times C_3 \to C_3$ is the coarse moduli morphism. Indeed, if there is a component of $\cC_3'$ contracted in $C_3$, then it will be contracted in both $\cC_1$ and $\cC_2$ as well.
Let $\cC_3 \to (\cC_3',\omega^{\log}_{\cC_3'/\spec R})^{1/r}$ be the twisted stable map extending the spin structure over $\eta$. Then we obtain a (not necessarily representable) spin bundle $\cL_3$ over $\cC_3$. We next compare $C_3$ and $C_i$ for $i=1,2$.
Set $U_i^{(0)} = C_3$ for $i=1,2$. Let $U_i^{(k+1)}$ be obtained by
removing from $U_i^{(k)}$ the rational components that have precisely
one special point in $U_i^{(k)}$ and are contracted in $C_i$.
Note that these removed rational components need not be proper, and
their closures may have more than one special point in $C_3$.
We observe that this process must stop after finitely many steps.
Denote by $U_i \subset C_3$ the resulting open subset.
\begin{lemma}\label{lem:cover}
\begin{enumerate}
\item $U_1 \cup U_2 = C_3$.
\item $C_3 \setminus U_1 \subset U_2$ and $C_3 \setminus U_2 \subset U_1$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose there is a point $z \in C_3 \setminus (U_1\cup U_2)$. Then there is a tree of rational curves in $C_3$ attached to $z$ and contracted in both $C_1$ and $C_2$. This contradicts the assumption on $C_3$. Statement (2) follows from (1).
\end{proof}
Consider the coarse moduli morphism $\cC_3 \to C_3$.
Denote by $\mathcal{U}_i := \cC_3\times_{C_3}U_i$ for $i=1,2$.
Since $U_{i} \to C_i$ contracts only rational components with two
special points in $U_i$, we have
$\omega^{\log}_{C_i/\spec R}|_{U_i} = \omega^{\log}_{U_i/\spec R}$
which further pulls back to
$\omega^{\log}_{\cC_i/\spec R}|_{\mathcal{U}_i} = \omega^{\log}_{\mathcal{U}_i/\spec
R}$.
Thus the pull-back $\cL_{i}|_{\mathcal{U}_i}$ is an $r$-th root of
$\omega^{\log}_{\mathcal{U}_i/\spec R}$.
Recall the $r$-spin bundle $\cL_{3}|_{\mathcal{U}_i}$.
Note that $\mathcal{U}_{i,\eta} = \cC_{\eta}$ and
$\cL_{3}|_{\mathcal{U}_{i,\eta}} \cong \cL_{i,\eta}$.
This isomorphism extends to $\cL_{3}|_{\mathcal{U}_{i}} = \cL_{i}|_{\mathcal{U}_i}$
uniquely for $i=1,2$.
This allows us to glue the pull-backs $f_{i}|_{\mathcal{U}_i}$ to an $r$-spin field
$f_3\colon \cC_3 \to \cP_3$.
\begin{lemma}\label{lem:degree-comparison}
$\deg f_{3,s}^*\cO(0_{\cP_3})|_{\overline{\mathcal{U}_{i,s}}} \geq \deg f_{i,s}^*\cO(0_{\cP_i})$, for $i=1,2$.
\end{lemma}
\begin{proof}
Note that $\omega^{\log}_{\cC_3/\spec R}|_{\overline{\mathcal{U}_{i,s}}} = \omega^{\log}_{\cC_i/\spec R}|_{\overline{\mathcal{U}_{i,s}}}(D')$ for some effective divisor $D'$ supported on the special points $\overline{\mathcal{U}_{i,s}} \setminus \mathcal{U}_{i,s}$ of $\cC_3$. Further note that $\cL_{3}$ and $\cL_{i}$ are the $r$-th roots of $\omega^{\log}_{\cC_3/\spec R}$ and $\omega^{\log}_{\cC_i/\spec R}$ respectively and $\cL_{3}|_{\mathcal{U}_{i}} = \cL_{i}|_{\mathcal{U}_i}$.
Thus there exists an effective divisor $D$ supported on
$\overline{\mathcal{U}_{i,s}} \setminus \mathcal{U}_{i,s}$ such that
$\cL_3|_{\overline{\mathcal{U}_{i,s}}} \cong
\cL_i|_{\overline{\mathcal{U}_{i,s}}}(D)$.
If $D = 0$, then we are done.
Suppose $D \neq 0$. If one of $f_{3,s}|_{\overline{\mathcal{U}_{i,s}}}$ and
$f_{i,s}|_{\overline{\mathcal{U}_{i,s}}}$ maps into the infinity section,
then so does the other, since they agree on $\mathcal{U}_{i,s}$;
in that case both $\cL_3|_{\overline{\mathcal{U}_{i,s}}}$ and
$\cL_i|_{\overline{\mathcal{U}_{i,s}}}$ are trivial, forcing $D = 0$, a
contradiction. Thus neither $f_{3,s}|_{\overline{\mathcal{U}_{i,s}}}$ nor
$f_{i,s}|_{\overline{\mathcal{U}_{i,s}}}$ is the infinity
section. By comparing degrees component-wise, we may assume $\overline{\mathcal{U}_{i,s}}$ is irreducible, so that $f_{3,s}|_{\overline{\mathcal{U}_{i,s}}}$ and
$f_{i,s}|_{\overline{\mathcal{U}_{i,s}}}$ may be viewed as rational sections of
$\vb(\cL_3|_{\overline{\mathcal{U}_{i,s}}})$ and
$\vb(\cL_i|_{\overline{\mathcal{U}_{i,s}}})$, respectively.
Since they agree on $\mathcal{U}_{i,s}$ and $\cL_3|_{\overline{\mathcal{U}_{i,s}}} \cong
\cL_i|_{\overline{\mathcal{U}_{i,s}}}(D)$ for an effective $D$, we have
\begin{equation*}
\deg f_{3,s}^*\cO(0_{\cP_3})|_{\overline{\mathcal{U}_{i,s}}} - \deg f_{i,s}^*\cO(0_{\cP_i})|_{\overline{\mathcal{U}_{i,s}}} \geq 0.
\end{equation*}
\end{proof}
Suppose $U_i \neq C_3$ for some $i$, say $i=1$. By construction each connected component of $C_3 \setminus U_1$ is a tree of proper rational curves contained in $U_2$ with no marked points, hence $\cT := \cC_3 \setminus \mathcal{U}_1 \subset \mathcal{U}_2$.
By construction, the composition $\cT \to \cC_3 \to \cC_2$ is a closed
immersion and $f_{3}|_{\cT} = f_{2}|_{\cT}$.
Since $\deg \omega^{\log}_{\cC_3/\spec R}|_{\cT} < 0$ (unless $\cT = \emptyset$), the stability of $f_2$ implies
\begin{equation*}
\deg f_3^*\cO(0_{\cP_3})|_{\cT} = \deg f_2^*\cO(0_{\cP_2})|_{\cT} > 0.
\end{equation*}
Using Lemma \ref{lem:degree-comparison}, we calculate
\[
\begin{aligned}
\deg f_{3,s}^*\cO(0_{\cP_3}) &= \deg f_{3,s}^*\cO(0_{\cP_3})|_{\overline{\mathcal{U}_{1,s}}} + \deg f_{3,s}^*\cO(0_{\cP_3})|_\cT \\
&\geq \deg f_{1,s}^*\cO(0_{\cP_1}) + \deg f_{3,s}^*\cO(0_{\cP_3})|_\cT.
\end{aligned}
\]
Since $\deg f_{3,s}^*\cO(0_{\cP_3}) = \deg f_{1,s}^*\cO(0_{\cP_1})$, we conclude that $\cT = \cC_3 \setminus \mathcal{U}_1 = \emptyset$.
Observe that $C_3 = U_1 \to C_1$ contracts proper rational components
with precisely two special points.
Let $Z \subset C_3$ be such a component, and let
$\cZ = Z \times_{C_3}\cC_3$.
Since $f_3|_{\cC_3 = \mathcal{U}_1}$ is the pull-back of $f_1$, we have
\begin{equation}\label{equ:bridge-trivial-degree}
\deg f_3^*\cO(0_{\cP_3})|_{\cZ} = 0.
\end{equation}
On the other hand, since $Z$ has two special points in $C_3$ and is
contracted in $C_1$, it is not contracted in $C_2$.
Denote by $\cZ' \subset \cC_2$ the component dominating the image of
$Z$ in $C_2$.
Then $\cZ'$ has precisely two special points.
Furthermore $f_2|_{\cZ'}$ and $f_3|_{\cZ}$ coincide away from the two
special points.
Using \eqref{equ:bridge-trivial-degree}, we observe that
$\deg f_2^*\cO(0_{\cP_2})|_{\cZ'} = 0$, which contradicts
the stability of $f_2$.
Thus $C_3 \to C_1$ is an isomorphism.
This completes the proof of Proposition \ref{prop:valuative}. \qed
\subsubsection{Failure of properness without log structure along $\infty_{\cP}$}\label{sss:properness-failure}
As our target has the non-trivial log structure $\cM_{\infty_{\cP}}$
along the infinity section (see Section~\ref{sss:spin-field}), a
non-degenerate component can only intersect $\infty_{\cP}$ along nodes
or markings.
Hence we have condition (2) in Proposition~\ref{prop:valuative}.
It turns out that this condition is necessary for proving the weak
valuative criterion for the moduli of meromorphic sections of the spin
bundle.
We exhibit this necessity using the following example.
Consider the case $r = 1$. Let $C = \mathbb{P}^1$ with three marked points $z = 1, 2, \infty$, where $z = u/v$ for fixed homogeneous coordinates $[u:v]$ of $\mathbb{P}^1$. Consider the family of meromorphic differentials $f_t = t\frac{dz}z$ on $C$, where $t$ is the parameter of a punctured disc $\Delta$. Observe that $f_t$ intersects the infinity section transversally at the single non-marked point $z=0$. We claim that the limit as $t \to 0$ does not exist as a section of $\mathbb{P}(\omega^{\log}\oplus\cO)$ with finite automorphisms.
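The transversal intersection at $z = 0$ can be checked directly on divisors. On $C = \mathbb{P}^1$ we have $\omega^{\log}_C = \omega_C([1]+[2]+[\infty])$ and $\operatorname{div}(dz/z) = -[0]-[\infty]$, so as a rational section of $\omega^{\log}_C$,
\[
\operatorname{div}\Big(t\,\frac{dz}{z}\Big) = \big(-[0]-[\infty]\big) + \big([1]+[2]+[\infty]\big) = [1]+[2]-[0].
\]
Thus $f_t$ has a simple pole exactly at the non-marked point $z = 0$, i.e.\ it meets the infinity section of $\mathbb{P}(\omega^{\log}\oplus\cO)$ transversally there, and meets the zero section at $z = 1, 2$.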
Suppose that, possibly after a finite base change, the family $f_t$ extends to a family of sections of $\mathbb{P}(\omega^{\log}_{C_{\Delta}/\Delta}\oplus\cO) \to C_{\Delta}$ over a family of prestable curves $C_{\Delta} \to \Delta$.
Consider the closed fiber $f_0$ over $C_0$. As our main focus will be the limiting behavior around $z=0 \in C$, and as the three marked points do not collide, we may assume that there is a component $M \subset C_0$ containing all three markings. Observe that the restriction $f_0|_{M}$ is the zero section. Furthermore, there is a connected tree of rational curves $T \subset C_0$ glued to the point $z = 0 \in M$, hence $C_0 = M \cup T$.
First consider a rational tail $Z \subset C_0$ contained in $T$.
If $f_0$ has finite automorphisms, then $f_0|_{Z}$ cannot be the
infinity section.
Suppose $Z$ does not intersect the infinity section.
Then for degree reasons $f_0|_{Z}$ is the zero section, in which case
$f_0$ again has infinite automorphisms.
Thus $C_0$ has a unique rational tail $Z \subset T$, which intersects
the infinity section transversally at a single point.
In particular, $T$ is a chain of rational curves.
Consider the component $Z' \subset T$ glued to $z = 0 \in M$. Suppose $Z' \neq Z$. The restriction $f_0|_{Z'}$ cannot be the zero section, as otherwise $f_0$ would have infinite automorphisms by scaling $Z'$. Thus $f_0|_{Z'}$ must intersect the infinity section. Then both $f_{0}|_{Z}$ and $f_{0}|_{Z'}$ intersect the infinity section, contradicting the fact that the generic fiber $f_t$ intersects the infinity section transversally at the single point $z = 0$. Consequently $T = Z$, and $f_{0}|_{Z}$ intersects the zero section as well.
Finally, observe that
$\mathbb{P}(\omega^{\log}_{C_{\Delta}/\Delta}\oplus\cO)|_{Z}$ is the
Hirzebruch surface $H_1$ with zero section given by the $(-1)$-curve.
A straightforward calculation shows that there is no section of
$H_1 \to Z$ that intersects the infinity section transversally at a
single point and meets the zero section at another point.
Alternatively, such a section would correspond to a meromorphic
differential on $Z$ holomorphic outside a single point and with a
simple pole at this point.
By the residue theorem such a differential does not exist.
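Concretely, writing $Z \cong \mathbb{P}^1$ and letting $\eta$ be such a differential with its simple pole at a point $p_0 \in Z$, the residue theorem would give
\[
0 = \sum_{p \in Z} \operatorname{Res}_p \eta = \operatorname{Res}_{p_0} \eta \neq 0,
\]
since the residue of a differential at a simple pole is non-zero. This contradiction completes the example.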
\section{Cosections and the reduced virtual cycle}
\label{sec:virtual}
\subsection{The logarithmic perfect obstruction theory}
The perfect obstruction theory of stable log maps has been formulated in \cite{GS13, AW} in different but equivalent ways using the log cotangent complexes of \cite{LogCot}. Here we will follow the method of \cite{AW}.
\subsubsection{The canonical perfect relative obstruction theory}\label{ss:log-obs}
Let $\SF:=\SF_{\ddata}^{1/r}$ be the stack of stable log $r$-spin fields with the discrete data $\beta$ as in (\ref{equ:spin-data}). Let $\beta'$ be the reduced discrete data as in Section \ref{sss:log-moduli-set-up}. Consider the universal stack $\fM(\cA,\beta')$ as in Section \ref{sss:universal-stack}. Consider the fiber product in the fine and saturated category
\begin{equation}\label{equ:log-base-stack}
\fM:=\fM(\cA,\beta')\times_{\fM^{\mathrm{tw}}_{g, n}}\fM^{1/r}_{g,\vgamma}
\end{equation}
where the two arrows to $\fM^{\mathrm{tw}}_{g, n}$ are the tautological
morphisms. By Propositions \ref{prop:uni-stack-smooth} and
\ref{prop:curve-stack}, $\fM$ is log smooth and equi-dimensional.
By \eqref{equ:forgetgeometry}, we have the tautological morphism
\[
\SF \to \fM
\]
induced by the associated log maps (\ref{equ:reduce-target}) and the spin structures. This leads to the following commutative diagram
\[
\xymatrix{
\cC_{\SF} \ar@/^1pc/[rrd]^{=} \ar[rd]^{f_{\SF}} \ar@/_1pc/[ddr]_{h}&&& \\
& \cP_{\SF} \ar[r] \ar[d] & \cC_{\SF} \ar[r]^{\pi_{\SF}} \ar[d] & \SF \ar[d] \\
& \cP_{\fM} \ar[r] & \cC_{\fM} \ar[r]_{\pi_{\fM}} & \fM
}
\]
where $f_{\SF}\colon \cC_{\SF} \to \cP_{\SF}$ is the universal log
$r$-spin field, $\pi_{\SF}\colon \cC_{\SF} \to \SF$ and
$\pi_{\fM}\colon \cC_{\fM} \to \fM$ are the universal log curves.
Note that the two squares are both cartesian with vertical strict
arrows.
We reserve the letter $\LL$ for the log cotangent complexes of
\cite{LogCot}, and the letter $\TT$ for its dual.
For what follows, without further decoration all functors are
automatically in the derived sense.
Observe that the left and right Cartesian squares imply
\[
f^*_{\SF}\LL_{\cP_{\SF}/\cC_{\SF}} \cong h^*\LL_{\cP_{\fM}/\cC_{\fM}}, \ \ \ \mbox{and} \ \ \ \pi^*_{\SF}\LL_{\SF/\fM} \cong \LL_{\cC_{\SF}/\cC_{\fM}}
\]
respectively. By the commutativity of arrows to $\cC_{\fM}$, we have
\[
h^*\LL_{\cP_{\fM}/\cC_{\fM}} \to \LL_{\cC_{\SF}/\cC_{\fM}},
\]
hence the morphism
\[
f^*_{\SF}\LL_{\cP_{\SF}/\cC_{\SF}} \to \pi^*_{\SF}\LL_{\SF/\fM}.
\]
Since $\cP_{\SF} \to \cC_{\SF}$ is log smooth and integral, we have
$\LL_{\cP_{\SF}/\cC_{\SF}} \cong \Omega_{\cP_{\SF}/\cC_{\SF}}$ the log
cotangent bundle.
Pushing forward along $\pi_{\SF}$, and using the adjunction morphism
$\pi_{\SF,*} \pi^*_{\SF} \LL_{\SF/\fM} \to \LL_{\SF/\fM}$, we obtain
\begin{equation}\label{equ:log-perfect-obs-cotangent}
\pi_{\SF,*}f_{\SF}^*\Omega_{\cP_{\SF}/\cC_{\SF}} \to \LL_{\SF/\fM}.
\end{equation}
Since the morphism $\SF \to \fM$ is strict, we have that
\[
\LL_{\SF/\fM} \cong \LL_{\underline{\SF}/\underline{\fM}}
\]
where $\LL_{\underline{\SF}/\underline{\fM}}$ is the cotangent complex in the usual sense.
\begin{proposition}
The morphism (\ref{equ:log-perfect-obs-cotangent}) defines a perfect obstruction theory for $\SF \to \fM$ in the sense of Behrend-Fantechi \cite[Section 7]{BF97}.
\end{proposition}
\begin{proof}
Note that $\pi_{\SF,*}f_{\SF}^*\Omega_{\cP_{\SF}/\cC_{\SF}} $ is a two-term complex perfect in $[-1, 0]$. It suffices to show that (\ref{equ:log-perfect-obs-cotangent}) is an obstruction theory.
Let $S_0 \to S$ be a strict closed embedding induced by a square-zero ideal. Given a commutative diagram of solid arrows
\[
\xymatrix{
S_0 \ar[r] \ar[d] & \SF \ar[d] \\
S \ar[r] \ar@{-->}[ur] & \fM
}
\]
we want to study the dashed arrow lifting the bottom arrow. Using the
associated log map \eqref{equ:reduce-target}, the above diagram of
solid arrows translates to the following commutative diagram of solid
arrows, and the dashed arrow translates to the dashed arrow below:
\[
\xymatrix{
\cC_{S_0} \ar[r] \ar[d] & \cP'_{\cC_{S}} \ar[d] \\
\cC_{S} \ar[r] \ar@{-->}[ur] & \cA\times\underline{\cC}_{S}
}
\]
Note that the vertical arrow on the right hand side is strict and
smooth with the tangent bundle
$T_{\cP'_{\cC_{S}}/\cA\times\underline{\cC}_S} \cong
T_{\cP_{\cC_S}/\cC_S}$.
Now the statement follows from the deformation theory of log morphisms
in \cite[Theorem 5.9]{LogCot}.
\end{proof}
Observe that the above construction of perfect obstruction theory is compatible with arbitrary base changes.
\begin{lemma}\label{lem:obs-base-change}
For any morphism $S \to \fM$, consider $\SF_{S} := S\times_{\fM}\SF$ with the pull-back $f_{\SF_S}\colon \cC_{\SF_S} \to \cP_{\SF_S}$. Then the perfect obstruction theory (\ref{equ:log-perfect-obs-cotangent}) of $\SF \to \fM$ pulls back to a perfect obstruction theory
\[
\pi_{\SF_S,*}f_{\SF_S}^*\Omega_{\cP_{\SF_S}/\cC_{\SF_S}} \to \LL_{\SF_S/S}.
\]
of the strict morphism $\SF_S \to S$.
\end{lemma}
\subsubsection{The case of maps with uniform maximal degeneracy}
Replacing $\fM(\cA,\beta')$ by $\fU(\cA,\beta')$ in (\ref{equ:log-base-stack}), we obtain
\begin{equation}\label{equ:log-base-max-deg}
\fU:=\fU(\cA,\beta')\times_{\fM^{\mathrm{tw}}_{g, n}}\fM^{1/r}_{g,\vgamma}
\end{equation}
By Proposition \ref{prop:uni-stack-smooth}, the natural projection
$
\fU \to \fM
$
is log \'etale. Thus $\fU$ is log smooth and equi-dimensional.
Now consider $\USF := \USF_{\ddata}^{1/r}$ and the universal log $r$-spin field $f_{\USF}\colon \cC_{\USF} \to \cP_{\USF}$ over $\USF$. Since $\USF = \fU\times_{\fM}\SF$, applying Lemma \ref{lem:obs-base-change}, we obtain a relative perfect obstruction theory
\[
\pi_{\USF,*}f_{\USF}^*\Omega_{\cP_{\USF}/\cC_{\USF}} \to \LL_{\USF/\fU}.
\]
For later use, denote by $T_{\cP_{\USF}/\cC_{\USF}} = \Omega^{\vee}_{\cP_{\USF}/\cC_{\USF}}$ the log tangent bundle of the projection $\cP_{\USF} \to \cC_{\USF}$. Taking the dual of the above morphism, we obtain
\begin{equation}\label{equ:log-perfect-obs}
\TT_{\USF/\fU} \to \pi_{\USF,*}f_{\USF}^*T_{\cP_{\USF}/\cC_{\USF}} =: \EE_{\USF/\fU}.
\end{equation}
The following result will be useful for later calculation.
\begin{lemma}\label{lem:pull-back-log-tangent}
Let $f\colon \cC \to \cP$ be a log $r$-spin field over a log scheme $S$, and $\cL$ the corresponding spin bundle over $\cC$. Then we have
\[
f^*T_{\cP/\cC} \cong f^*(\cO_{\cP}(0_{\cP})) \cong \cL\otimes f^*(\cO_{\cP}(\infty_{\cP})).
\]
\end{lemma}
\begin{proof}
Note that the usual tangent bundle is
$T_{\underline{\cP}/\underline{\cC}} = \cO_{\cP}(0_{\cP} +
\infty_{\cP})$.
The log tangent bundle
$T_{\cP/\cC} \subset T_{\underline{\cP}/\underline{\cC}}$ is the
subsheaf consisting of vector fields vanishing along
$\infty_{\cP}$. Thus we have $T_{\cP/\cC} = \cO_{\cP}(0_{\cP})$
which proves the first equality.
The second equality follows from the observation
that
$$\cO_{\cP}(0_{\cP} - \infty_{\cP}) = \cL|_{\cP},$$
where $\cL|_{\cP}$ is the pull-back of $\cL$ along the morphism
$\cP \to \cC$.
\end{proof}
\subsection{The relative cosection}
Consider the universal log $r$-spin fields
\[
f_{\USF}\colon \cC_{\USF} \to \cP_{\USF}
\]
over $\USF := \USF_{\ddata}^{1/r}$. Denote by $\pi_{\USF}\colon \cC_{\USF} \to \USF$ the projection, and $\cL_{\USF}$ the universal $r$-spin bundle over $\cC_{\USF}$.
Throughout the rest of this section, we impose the following condition, which is necessary for the cosection construction.
\begin{assumption}\label{ass:narrow}
All marked points are narrow with zero contact order in $\beta$.
\end{assumption}
\begin{notation} For a vector bundle $V$ over a log stack $X$, write $\vb(V)$ for the geometric vector bundle associated to $V$, equipped with the strict morphism $\vb(V) \to X$. For any morphism $Y \to X$, denote by $V|_{Y}$ and $\vb(V)|_{Y}$ the pull-backs of $V$ and $\vb(V)$ respectively.
\end{notation}
\subsubsection{The twisted spin section}
Consider the canonical inclusion
\begin{equation}\label{equ:P-zero-section}
\iota\colon \cO_{\cP_{\USF}} \to \cO_{\cP_{\USF}}(0_{\cP}).
\end{equation}
Using the isomorphisms
\[
\cO_{\cP_{\USF}}(0_{\cP}) \cong \cL_{\USF}|_{\cP_{\USF}} \otimes \cO_{\cP_{\USF}}(1) \cong \cL_{\USF}|_{\cP_{\USF}} \otimes \cO_{\cP_{\USF}}(\infty_{\cP}),
\]
we obtain
\begin{equation}\label{equ:underlying-twist}
f^*_{\USF}\iota\colon \cO_{\cC_{\USF}} \to \cL_{\USF}\otimes f^*_{\USF}\cO_{\cP_{\USF}}(\infty_{\cP}).
\end{equation}
By Assumption \ref{ass:narrow}, we may pull back \eqref{equ:log-twist} via $\USF \to \fU$ and obtain
\begin{equation}\label{equ:log-twist-spin}
\tf_{\USF}\colon \pi^*_{\USF}\bL_{\max}\otimes f^*_{\USF}\cO(\infty_{\cP}) \to \cO_{\cC_\USF}.
\end{equation}
By abuse of notation, $\bL_{\max}$ denotes the pull-back of the corresponding line bundle over $\fU$. Then we obtain a morphism
\begin{equation}\label{equ:tangent-twist}
f_{\USF}^*T_{\cP_{\USF}/\cC_{\USF}} \cong \cL_{\USF}\otimes f^*_{\USF} \cO(\infty_{\cP}) \stackrel{\otimes\tf^{\vee}_{\USF}}{\longrightarrow} \cL_{\USF} \otimes \pi^*_{\USF} \bL_{\max}^{\vee}.
\end{equation}
Composing with (\ref{equ:underlying-twist}), we obtain the {\em twisted spin section}
\begin{equation}\label{equ:twisted-spin-section}
\bs_{\USF} := (\otimes\tf^{\vee}_{\USF})\circ (f^*_{\USF}\iota)\colon \cO_{\cC_{\USF}} \to \cL_{\USF}\otimes \pi^*_{\USF}\bL_{\max}^{\vee},
\end{equation}
or equivalently a morphism $\bs_{\USF}\colon \cC_{\USF} \to \vb(\cL_{\USF}\otimes \pi^*_{\USF}\bL_{\max}^{\vee}).$
\subsubsection{The twisted superpotential and its differentiation}
Write for simplicity
\[
\omega_{\log,\USF} := \omega^{\log}_{\cC_{\USF}/\USF} \ \ \ \mbox{and} \ \ \ \omega_{\USF} := \omega_{\cC_{\USF}/\USF}.
\]
The $r$-spin structure $\cL_{\USF}^{r}\cong \omega_{\log,\USF}$ defines an isomorphism
\[
(\cL_{\USF}\otimes \pi^*_{\USF}\bL_{\max}^{\vee})^{\otimes {r}} \cong \omega_{\log,\USF}\otimes \pi^*_{\USF}\bL_{\max}^{\otimes {(-r)}}
\]
hence a non-linear morphism
\[
W\colon \vb(\cL_{\USF}\otimes \pi^*_{\USF}\bL^{\vee}_{\max}) \to \vb(\omega_{\log,\USF}\otimes \pi^*_{\USF}\bL_{\max}^{\otimes (-{r})})
\]
called the \emph{twisted superpotential}. For convenience, we may equip both the source and the target of $W$ with the log structures pulled back from $\cC_{\USF}$. In particular, $W$ is a strict morphism. Differentiating $W$, we have
\[
\diff W\colon T_{\vb(\cL_{\USF}\otimes \pi^*_{\USF}\bL^{\vee}_{\max})/\cC_{\USF}} \to W^* T_{\vb(\omega_{\log,\USF}\otimes \pi^*_{\USF}\bL_{\max}^{\otimes {(-r)}})/\cC_{\USF}}.
\]
Pulling back $\diff W$ via the twisted spin section \eqref{equ:twisted-spin-section} gives
\begin{equation}\label{equ:before-twist}
\bs_{\USF}^*(\diff W) \colon \cL_{\USF}\otimes \pi^*_{\USF}\bL_{\max}^{\vee} \to \omega_{\log,\USF}\otimes \pi^*_{\USF}\bL_{\max}^{\otimes -r}.
\end{equation}
Pushing forward along $\pi_{\USF}$ and applying the projection formula, we have
\begin{equation}\label{equ:push-diff-potential}
\pi_{\USF,*}\bs_{\USF}^*(\diff W)\colon \pi_{\USF,*}(\cL_{\USF})\otimes \bL_{\max}^{\vee} \to \pi_{\USF,*}(\omega_{\log,\USF})\otimes \bL_{\max}^{\otimes -r}.
\end{equation}
Denote by $\Sigma := \sum_i \sigma_i$ the sum of marked points of
$\cC_{\USF}$. Since all the markings are narrow, recall from \cite[Lemma 3.2]{CLL15}
that the push-forward of the natural inclusion
$\cL_{\USF}(-\Sigma) \hookrightarrow \cL_{\USF}$ is an isomorphism
\begin{equation}\label{equ:narrow-iso}
\pi_{\USF,*}\cL_{\USF}(-\Sigma) \stackrel{\cong}{\longrightarrow} \pi_{\USF,*}\cL_{\USF}.
\end{equation}
Twisting down (\ref{equ:before-twist}) by the markings and pushing forward, we have
\begin{equation}\label{equ:after-twist}
\pi_{\USF,*}\bs_{\USF}^*(\diff W)(-\Sigma) \colon \pi_{\USF,*}\big(\cL_{\USF}(-\Sigma)\big)\otimes\bL_{\max}^{\vee} \to \pi_{\USF,*}\omega_{\USF}\otimes\bL_{\max}^{\otimes -r}.
\end{equation}
The two morphisms (\ref{equ:push-diff-potential}) and (\ref{equ:after-twist}) fit in a commutative diagram
\begin{equation}\label{diag:differentiate-potential}
\xymatrix{
\pi_{\USF,*}\big(\cL_{\USF}(-\Sigma)\big)\otimes\bL_{\max}^{\vee} \ar[d]_{\cong} \ar[rrr]^{\pi_{\USF,*}\bs_{\USF}^*(\diff W)(-\Sigma)} &&& \pi_{\USF,*}\omega_{\USF}\otimes\bL_{\max}^{\otimes -r} \ar[d] \\
\pi_{\USF,*}(\cL_{\USF})\otimes \bL_{\max}^{\vee} \ar[rrr]^{\pi_{\USF,*}\bs_{\USF}^*(\diff W)} &&& \pi_{\USF,*}(\omega_{\log,\USF})\otimes \bL_{\max}^{\otimes -r}
}
\end{equation}
where the vertical arrows are induced by twisting down by $\Sigma$, and the left vertical arrow is the isomorphism (\ref{equ:narrow-iso}).
\bigskip
We give a point-wise description of $R^1\pi_{\USF,*}\bs_{\USF}^*(\diff W)(-\Sigma)$. Consider a geometric point $w \to \USF$ with the pullback $(\cC_w/w, \cL_{w}, f_w)$. Using Serre duality and the spin structure $\cL_{w}^{r} \cong \omega_{\log, w}$, we have
\begin{align*}
H^1\big(\cL_{w}(-\Sigma)\big)\otimes{\bL_{\max,w}^{\vee}} &\cong H^{0}\big( \cL^{\vee}_{w}(\Sigma)\otimes\omega_{w}\big)^\vee \otimes {\bL_{\max,w}^{\vee}}\\
&\cong H^{0}\big( \cL^{\vee}_{w}\otimes\omega_{\log,w}\big)^\vee \otimes {\bL_{\max,w}^{\vee}} \\
&\cong H^0(\cL_{w}^{r-1})^{\vee}\otimes\bL^{\vee}_{\max,w}.
\end{align*}
The fiber $R^1\pi_{\USF,*}\bs_{\USF}^*(\diff W)(-\Sigma)_w$ is then given by
\begin{equation}\label{equ:fiber-diff-W}
H^0(\cL_{w}^{r-1})^{\vee}\otimes {\bL}^{\vee}_{\max, w} \to {\bL}^{\otimes -r}_{\max, w}, \ \ \ {\dot{s}} \mapsto {r} \bs_{w}^{r-1} \cdot {\dot{s}}.
\end{equation}
where $\bs_w$ is the fiber of (\ref{equ:twisted-spin-section}) at $w$,
hence
${\bs}_{w}^{r-1} \in H^0(\cL_{w}^{r-1})\otimes {\bL}^{\otimes (1 -
r)}_{\max, w}$.
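For clarity, we note where the factor $r\bs_{w}^{r-1}$ comes from: fiberwise the twisted superpotential is the $r$-th power map, so its differential at $\bs_w$ is given by the power rule,
\[
W(x) = x^{\otimes r}, \qquad \diff W|_{x = \bs_w}(\dot{s}) = r\, \bs_w^{\otimes (r-1)}\otimes \dot{s},
\]
which under the identifications above is precisely (\ref{equ:fiber-diff-W}).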
\subsubsection{The relative cosection}
Pushing forward (\ref{equ:tangent-twist}), we obtain
\begin{equation}\label{equ:push-tangent-twist}
\EE_{\USF/\fU} \to \pi_{\USF,*}(\cL_{\USF}) \otimes \bL_{\max}^{\vee}.
\end{equation}
Composing with the left and top arrows of (\ref{diag:differentiate-potential}), we obtain
\begin{equation}\label{equ:derived-cosection}
\EE_{\USF/\fU} \to \pi_{\USF,*}\omega_{\USF}\otimes\bL_{\max}^{\otimes -r},
\end{equation}
whose $H^1$ defines the {\em relative cosection}
\begin{equation}\label{equ:relative-cosection}
\sigma_{\USF/\fU}\colon \obs_{\USF/\fU} := H^1(\EE_{\USF/\fU}) \to R^1\pi_{\USF,*}\omega_{\USF}\otimes\bL_{\max}^{\otimes -r} \cong \bL_{\max}^{\otimes -r}.
\end{equation}
By abuse of notation, denote by $\Delta_{\max} \subset \USF$ the pre-image of $\Delta_{\max}\subset \fU$. Let $\USF^{\circ} := \USF \setminus \Delta_{\max}$. Then $\USF^{\circ}$ is the stack parameterizing sections of the $r$-spin bundle. Note that $\USF^{\circ}$ is the stack $X$ as in \cite[Section 3]{CLL15}.
\begin{lemma}\label{lem:old-compatible}
In the $r$-spin case, the restriction of \eqref{equ:log-perfect-obs} to $\USF^{\circ}$ is the perfect obstruction theory in \cite[(3.2)]{CLL15}, and $\sigma_{\USF/\fU}|_{\USF^{\circ}}$ is the relative cosection in \cite[(3.5)]{CLL15}.
\end{lemma}
\begin{proof}
By assumption, we have that
\[
f^*_{\USF^{\circ}}\cO(\infty_{\cP}) = \cO_{\cC_{\USF^{\circ}}} \ \ \ \mbox{and} \ \ \ \bL_{\max}|_{\USF^{\circ}} = \cO_{\USF^{\circ}}.
\]
Then the statement follows from the construction of $\sigma_{\USF/\fU}$.
\end{proof}
\subsubsection{Surjectivity of $\sigma_{\USF/\fU}$ along the boundary}
\begin{lemma}\label{lem:narrow-marking-vanishing}
Suppose a narrow marking has the trivial contact order. Then its image via $f_{\USF}$ is contained in the zero section $0_{\cP} \subset \cP_{\USF}$.
\end{lemma}
\begin{proof}
We first show that the image of such a narrow marking avoids the
infinity section.
Since this is an open condition, it suffices to check this over a
geometric fiber $w \to \USF$ with the log twisted field
$f_w\colon \cC_w \to \cP_w$.
Suppose $f_w(\sigma_i) \in \infty_{\cP}$ for a narrow marking
$\sigma_i \in \cC_w$.
Then there is an irreducible Zariski neighborhood $V \subset \cC_w$
of $\sigma_i$ such that
$\oM_{\cC_w} \cong \pi_w^*\oM_{w}\oplus \sigma_{i,*}\NN_{\sigma_i}$,
see Section~\ref{sss:log-twisted-curve}.
Since the contact order of $\sigma_i$ is trivial, $f^{\flat}_w|_{V}$
induces a morphism of $\cO^*$-torsors over $V$ of the form
$ f^{\flat}_w|_{V}\colon f^*_{w}\cT_{\infty_{\cP}}|_{V} \to
\cT_{e_V} $ where $e_V \in \pi_w^*\oM_{w}$ is the degeneracy of
$f_w$ along $V$,
$\cT_{e_V} = e_V \times_{\oM_{\cC_w}} \cM_{\cC_w}$, and
$\cT_{\infty_{\cP}} \subset \cM_{\cP_{w}}$ is the preimage of the
torsor $\cT_{\infty}$ as in \eqref{equ:target-torsor} via
$\cP_{w} \to \cA$.
Taking the corresponding line bundles, we obtain a morphism
$f^*\cO(-\infty_{\cP})|_{V} \to \cO_{V}$ whose dual
$\cO_V \to f^*\cO(\infty_{\cP})|_{V}$ is non-vanishing at $\sigma_i$ since
the contact order at $\sigma_i$ is trivial.
Since $f^*\cO(\infty_{\cP})|_{V} \cong \cL^{\vee}_w|_{V}$, we
thus obtain a local section of $\cL^{\vee}_w$ non-vanishing at a
narrow marking.
But by \cite[Lemma 3.2]{CLL15} and \cite[Proposition 3.0.3]{AbJa03}
such a local section vanishes at $\sigma_i$.
This is a contradiction.
Since $f_{\USF}(\sigma_i)$ avoids $\infty_{\cP}$, locally $f_{\USF}$ is a section of $\cL_{\USF}$ around $\sigma_{i}$, hence vanishes along $\sigma_{i}$. This completes the proof.
\end{proof}
We next prove the surjectivity of $\sigma_{\USF/\fU}$ along $\Delta_{\max}$.
\begin{proposition}\label{prop:cosection-boundary-surjective}
The vanishing locus $(\sigma_{\USF/\fU} = 0) \subset \USF$ is given by the locus along which $f_{\USF}$ is the zero section.
\end{proposition}
\begin{proof}
By Lemma \ref{lem:old-compatible} and \cite[Lemma 3.6]{CLL15}, $\sigma_{\USF/\fU}|_{\USF^{\circ}}$ vanishes along the locus where $f_{\USF}$ is the zero section. It remains to show that $\sigma_{\USF/\fU}$ is surjective along $\Delta_{\max}$. Since $\bL_{\max}^{\otimes -r}$ is a line bundle, the image of $\sigma_{\USF/\fU}$ is a torsion-free sub-sheaf of $\bL_{\max}^{\otimes -r}$. Thus it suffices to show that $\sigma_{\USF/\fU}$ is surjective at each geometric point of $\Delta_{\max}$.
Let $w \in \Delta_{\max}$ be a geometric point with $(\cC_w/w, \cL_{w}, f_w)$. Taking $H^1$ of (\ref{equ:tangent-twist}) over $w$, we have
\begin{equation}\label{equ:H1-tangent-twist}
H^{1}\big(\cL_{w}\otimes f^*_{w} \cO(\infty_{\cP_w})\big) \to H^{1}(\cL_{w}) \otimes {\bL}_{\max, {w}}^{\vee}.
\end{equation}
By construction $\sigma_{\USF/\fU,w}$ is the following composition
\begin{align*}
H^{1}\big(\cL_{w}\otimes f^*_{w} \cO(\infty_{\cP_w})\big) & \stackrel{}{\longrightarrow} H^{1}(\cL_{w}) \otimes {\bL}_{\max, {w}}^{\vee} \\
\big(\mbox{by \eqref{equ:narrow-iso}} \big) \ \ \ \ \ & \stackrel{\cong}{\longleftarrow} H^1\big(\cL_{w}(-\Sigma)\big)\otimes{\bL_{\max,w}^{\vee}} \\
\big( \mbox{by \eqref{equ:after-twist}} \big) \ \ \ \ \ & \stackrel{}{\longrightarrow} {\bL}^{\otimes -r}_{\max, w}
\end{align*}
where the first arrow is (\ref{equ:H1-tangent-twist}). Applying Serre duality and taking the dual, we have $\sigma^{\vee}_{\USF/\fU,w}$:
\begin{align*}
H^0\big(\cL^{\vee}_{w}\otimes f^*_{w} \cO(-\infty_{\cP_w})\otimes\omega_w\big) &\longleftarrow H^{0}(\cL^{\vee}_{w}\otimes\omega_{w})\otimes {\bL}_{\max, {w}}\\
&\stackrel{\cong}{\longrightarrow} H^0(\cL_{w}^{r-1})\otimes\bL_{\max,w} \\
&\stackrel{}{\longleftarrow} \bL^{\otimes r}_{\max, w}.
\end{align*}
where the first and last arrows are given by the duals of (\ref{equ:H1-tangent-twist}) and (\ref{equ:fiber-diff-W}), respectively. We describe $\sigma^{\vee}_{\USF/\fU,w}$ via the above composition as follows.
Suppose $v_0 \in \bL^{\otimes r}_{\max, w}$ is a non-zero vector. Applying the dual of (\ref{equ:fiber-diff-W}), we obtain a vector $v_1:=(r \bs_{w}^{r-1})^{\vee}(v_0) \in H^0(\cL_{w}^{r-1}\otimes\bL_{\max,w})$. By Proposition \ref{prop:log-twist}, the section $v_1$ is non-trivial along the sub-curve $Z \subset \cC_w$ consisting of maximally degenerate components, and vanishes along $\cC_w \setminus Z$.
Since $Z$ contains no markings by Lemma \ref{lem:narrow-marking-vanishing}, we have $(\cL_w|_{Z})^{r} \cong \omega_{w}|_{Z}$. Thus $v_1$ is a section of $\cL^{\vee}_{w}\otimes\omega_{w}\otimes {\bL}_{\max, {w}}$ that is non-trivial along $Z$ and vanishes along $\cC_w \setminus Z$.
Finally observe that the dual of (\ref{equ:H1-tangent-twist}) is given by
\[
\cL^{\vee}_{w}\otimes\omega_{w}\otimes {\bL}_{\max, {w}} \stackrel{\otimes\tf_{\USF}}{\longrightarrow} \cL^{\vee}_{w}\otimes f^*_{w} \cO(-\infty_{\cP_w})
\]
hence $\sigma^{\vee}_{\USF/\fU,w}(v_0) = v_1\otimes\tf_{\USF}$. By Proposition \ref{prop:log-twist} again, $\tf_{\USF}$, and hence $v_1\otimes\tf_{\USF}$, is non-trivial along $Z$. In particular, $\sigma^{\vee}_{\USF/\fU,w}(v_0) \neq 0$.
The above analysis implies that $\sigma^{\vee}_{\USF/\fU,w}$ is injective, hence $\sigma_{\USF/\fU,w}$ is surjective. This completes the proof.
\end{proof}
\subsection{Factorization of the relative obstruction}
\subsubsection{An auxiliary twist}\label{sss:auxiliary-twist}
Write $\cL_{\fU,-}:= \cL_{\fU}(-\Sigma)$ for simplicity. Similarly to the construction of $\cP_{\fU}$ in Section \ref{sss:spin-field}, we form the stack $\cP_{\fU,-}$ with $\cL_{\fU}$ replaced by $\cL_{\fU,-}$. The log structure of $\cP_{\fU,-}$ is defined to be
\[
\cM_{\cP_{\fU,-}} := \cM_{\cC_{\fU}}|_{\cP}\oplus_{\cO^*}\cM_{\infty_{\cP_{-}}}
\]
where $\infty_{\cP_{-}} \subset \cP_{\fU,-}$ is the corresponding infinity section. The natural morphism $\vb(\cL_{\fU,-}) \to \vb(\cL_{\fU})$ induces a birational map of log stacks $\xymatrix{\cP_{\fU,-} \ar@{-->}[r] & \cP_{\fU}}$, which is an isomorphism away from the fibers over the marked points. Denote by $\cP_{\fU,reg} \subset \cP_{\fU,-}$ the open sub-stack where the above rational map is well-defined. Let $\bt\colon \cP_{\fU,reg} \to \cP_{\fU}$ be the corresponding morphism. Denote by $\cP_{\USF,reg}$ the pull-back of $\cP_{\fU,reg}$, with the corresponding morphism $\bt\colon \cP_{\USF,reg} \to \cP_{\USF}$.
\begin{lemma}
There is a canonical factorization
\[
\xymatrix{
\cC_{\USF} \ar[rd]_{f_{\USF,-}} \ar[rr]^{f_{\USF}} && \cP_{\USF} \\
&\cP_{\USF,reg} \ar[ru]_{\bt}&
}
\]
\end{lemma}
\begin{proof}
Note that $\cP_{\fU,reg} \subset \cP_{\fU,-}$ is obtained by removing the fiber of $\infty_{\cP_{-}}$ over marked points. The statement follows from Lemma \ref{lem:narrow-marking-vanishing}.
\end{proof}
Denote by $\cP_{\fU,-} \to \cA$ the morphism of log stacks such that $\cM_{\infty_{\cP_{-}}}$ is the pull-back of $\cM_{\cA}$. Consider the natural morphism
$
\USF \to \fU
$
induced by the composition of $f_{\USF,-}$ with $\cP_{\fU,-} \to \cA$. The above lemma implies that $\USF$ can be viewed as the log stack parameterizing log twisted sections $f_{T,-}\colon \cC_{T} \to \cP_{T,-}$ for any $T \to \fU$. The same construction as in Section \ref{ss:log-obs} provides a perfect obstruction theory of $\USF \to \fU$:
\begin{equation}\label{equ:-perfect-obs}
\TT_{\USF/\fU} \to \pi_{\USF,*}f^*_{\USF,-}T_{\cP_{\USF,reg}/\cC_{\USF}} \cong \pi_{\USF,*}f^*_{\USF,-}T_{\cP_{\USF,-}/\cC_{\USF}} =: \EE_{\USF/\fU, -}.
\end{equation}
On the other hand since $f^{*}_{\USF}\cO(\infty_{\cP}) \cong f^*_{\USF,-}\cO(\infty_{\cP,-})$, we calculate
\begin{align}
f^*_{\USF,-}T_{\cP_{\USF,-}/\cC_{\USF}} &\cong \cL_{\USF,-} \otimes f^*_{\USF,-}\cO(\infty_{\cP,-}) \nonumber \\
&\cong \cL_{\USF}(-\Sigma)\otimes f^{*}_{\USF}\cO(\infty_{\cP}) \label{equ:twist-log-tangent} \\
&\cong f^*_{\USF}T_{\cP_{\USF}/\cC_{\USF}}(-\Sigma).\nonumber
\end{align}
Using (\ref{equ:narrow-iso}) and Lemma \ref{lem:narrow-marking-vanishing}, we have
\[
\pi_{\USF,*}f^*_{\USF,-}T_{\cP_{\USF,reg}/\cC_{\USF}} \cong \pi_{\USF, *}f^*_{\USF}T_{\cP_{\USF}/\cC_{\USF}}.
\]
To summarize:
\begin{lemma}\label{lem:twist-obs}
The two perfect obstruction theories (\ref{equ:log-perfect-obs}) and (\ref{equ:-perfect-obs}) are identical.
\end{lemma}
From now on, we view $\USF$ as equipped with the universal family $f_{\USF,-}\colon \cC_{\USF} \to \cP_{\USF,-}$.
\subsubsection{Partial expansion and contraction}
The morphism $\fm\colon \fU(\cA,\beta') \to \cA_{\max}$ from Section \ref{ss:partial-expansion} induces a morphism $\fU \to \cA_{\max}$ which will again be denoted by $\fm$ by abuse of notation. Consider the following cartesian diagram of fine log stacks
\begin{equation}\label{equ:bundle-partial-expansion}
\xymatrix{
\cP^{e}_{\fU,-} \ar[r] \ar[d]_{\fb} & \cA^{e} \ar[d]^{\fb} \\
\cP_{\fU,-} \ar[r] & \cA \times \cA_{\max}
}
\end{equation}
where the bottom is the product of $\fm$ and $\cP_{\fU,-} \to \cA$.
By construction, one checks that the bottom arrow satisfies the flatness conditions in \cite[Proposition (4.1)]{KKato}, hence is integral in the sense of \cite[Definition (4.3)]{KKato}. In particular, the underlying structure of the above cartesian diagram is a cartesian diagram of the underlying algebraic stacks. We remark that the above diagram is indeed cartesian in the fine and saturated category. Since the saturation plays no role in the following discussion, we omit the details here.
In the above diagram, since the right vertical arrow is log \'etale,
the left vertical arrow is again log \'etale.
By abuse of notation, we denote both vertical arrows by $\fb$.
Let $\infty_{\cP^{e}_{-}} \subset \cP^{e}_{\fU,-}$ be the pre-image of
$\infty_{\cA^e} \subset \cA^e$, and write
$\cP^{e,\circ}_{\fU,-} := \cP^{e}_{\fU,-}\setminus
\infty_{\cP^{e}_{-}}$.
Denote by $\cE_{\fb} \subset \cP^{e}_{\fU,-}$ the exceptional divisor
contracted by $\fb$.
In the following, we view the (relative) normal bundle
$\cN_{\infty_{\cP^e_-}}$ of $\infty_{\cP^e_-}$ as
a line bundle over $\cC_{\fU}$:
\begin{lemma}
$
\cN^{\vee}_{\infty_{\cP^e_{-}}} \cong \cL_{\fU,-}\otimes \pi^*_{\fU}\bL^{\vee}_{\max}.
$
\end{lemma}
\begin{proof}
Observe that
$\fb^*[\infty_{\cP_{-}}] = [\infty_{\cP^e_{-}}] + [\cE_{\fb}]$ where
$[*]$ denotes the corresponding divisor class.
Pulling back to $\cC_{\fU}$ via the identification
$\cC_{\fU} \cong \infty_{\cP_{-}} \cong \infty_{\cP^e_{-}}$, we
obtain
\[
\cO(\infty_{\cP_{-}})|_{\infty_{\cP_{-}}} \cong \big(\cO(\infty_{\cP^e_{-}})\otimes \cO(\cE_{\fb})\big)|_{\infty_{\cP^e_{-}}}.
\]
Using
$\cO(\infty_{\cP_{-}})|_{\infty_{\cP_{-}}} \cong
\cN_{\infty_{\cP_{-}}} \cong \cL^{\vee}_{\fU,-}$ and
$\cO(\infty_{\cP^e_{-}})|_{\infty_{\cP^e_{-}}} \cong
\cN_{\infty_{\cP^e_{-}}}$, we obtain
\[
\cL^{\vee}_{\fU,-} \cong \cN_{\infty_{\cP^e_{-}}}\otimes \cO(\cE_\fb)|_{\infty_{\cP^e_{-}}}.
\]
Finally, observe that
$\cO(\cE_{\fb})|_{\infty_{\cP^e_{-}}} \cong
\pi^*_{\fU}\bL_{\max}^{\vee}$, which leads to the desired
isomorphism.
\end{proof}
\begin{lemma}\label{lem:contraction}
There is a commutative diagram of log stacks
\[
\xymatrix{
\cP_{\fU,-}^{e,\circ} \ar[rr]^{\fc} \ar[rd] && \vb(\cL_{\fU,-}\otimes \pi^*_{\fU}\bL^{\vee}_{\max}) \ar[ld] \\
&\cC_{\fU}&
}
\]
where $\fc$ is a birational morphism contracting $\cE_{\fc}$, the proper transform of $\cP_{\fU,-}\times_{\cC_{\fU}}\Delta_{\max}$, to the zero section of $\vb(\cL_{\fU,-}\otimes \pi^*_{\fU}\bL^{\vee}_{\max})$.
\end{lemma}
\begin{proof}
Note that once the underlying morphism of $\fc$ is defined, the morphism on the level of log structures is automatically obtained, since the right slanted arrow is strict.
We may assume for simplicity that all stacks in the rest of this proof have the trivial log structure.
Note that $3 [\infty_{\cP^e_{-}}]$ is a relatively nef divisor of the
family of nodal rational curves $\cP^e_{\fU,-} \to \cC_{\fU}$.
Let $\bar{\fc}\colon \cP^e_{\fU,-} \to \cP^{c}_{\fU,-}$ be the induced
contraction, and $\cE_{\fc} \subset \cP^e_{\fU,-}$ the exceptional
locus contracted by $\bar{\fc}$.
Then $\cE_{\fc}$ is the proper transform of
$\cP_{\fU,-}\times_{\fU}\Delta_{\max}$.
Observe that the resulting projection $\cP^{c}_{\fU,-} \to \cC_{\fU}$
is again a smooth $\mathbb{P}^1$-fibration since the contracted locus
consists of a family of $(-1)$-curves over $\cC_{\fU}$.
Furthermore, note that $\bar{\fc}$ induces an embedding $\cN_{\infty_{\cP^e_{-}}} \to \cP^{c}_{\fU,-}$ over $\cC_{\fU}$ whose complement $0_{\cP^{c}_{-}} := \cP^{c}_{\fU,-}\setminus \cN_{\infty_{\cP^e_{-}}}$ is the image of the zero section $0_{\cP^e_{-}} \subset \cP^e_{\fU,-}$. We thus obtain
\[\cP^c_{\fU,-} \cong \mathbb{P}(\cN_{\infty_{\cP^e_{-}}}\oplus \cO) \cong \mathbb{P}(\cO \oplus \cN^{\vee}_{\infty_{\cP^e_{-}}}).\]
Thus $\fc$ is obtained from $\bar{\fc}$ by removing $\infty_{\cP^e_{-}}$ and its image in $\cP^c_{\fU,-}$.
\end{proof}
Consider the canonical morphism induced by the divisor $\cE_{\fc}$
\begin{equation}\label{equ:C-twist}
\iota_{\cE_{\fc}}\colon \cO_{\cP^e_{\fU,-}} \to \cO_{\cP^e_{\fU,-}}(\cE_{\fc}) \cong \bL_{\max}^{\vee}(-\cE_{\fb}),
\end{equation}
and the morphism of log tangent bundles
\[
\diff\fc\colon T_{\cP^{e,\circ}_{\fU,-}/\cC_{\fU}} \to \fc^*T_{\vb(\cL_{\fU,-}\otimes \pi^*_{\fU}\bL^{\vee}_{\max})/\cC_{\fU}}.
\]
\begin{lemma}\label{lem:diff-c}
$\diff\fc = \otimes\iota_{\cE_{\fc}}$.
\end{lemma}
\begin{proof}
Consider the morphism of log cotangent bundles
\[
\fc^*\colon \Omega_{\vb(\cL_{\fU,-}\otimes \pi^*_{\fU}\bL^{\vee}_{\max})/\cC_{\fU}} \to \fc^*\Omega_{\cP^{e,\circ}_{\fU,-}/\cC_{\fU}}.
\]
Note that $\fc$ is an isomorphism away from the divisor $\cE_{\fc}$.
Furthermore, the contraction $\fc$ is the blow-up of the zero section
of $\vb(\cL_{\fU,-}\otimes \pi^*_{\fU}\bL^{\vee}_{\max})$.
A local coordinate calculation shows that
$\fc^* = \otimes \iota_{\cE_{\fc}}^{\vee}$.
Taking dual, we obtain the desired equality.
\end{proof}
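For the reader's convenience, we sketch the local coordinate calculation; the chart notation here is ours. Let $t$ be a local equation on $\cC_{\fU}$ of the locus over which the blow-up is performed, and let $u$ be a local fiber coordinate of $\vb(\cL_{\fU,-}\otimes \pi^*_{\fU}\bL^{\vee}_{\max})$, so that $\fc$ is locally the blow-up of $\{t = u = 0\}$. In the chart $u = tv$ the exceptional divisor $\cE_{\fc}$ is cut out by $t = 0$, and since $t$ is pulled back from $\cC_{\fU}$, relatively over $\cC_{\fU}$ we have
\[
\fc^*(\mathrm{d}u) = \mathrm{d}(tv) = t\,\mathrm{d}v + v\,\mathrm{d}t \equiv t\,\mathrm{d}v,
\]
i.e.\ $\fc^*$ is multiplication by a local equation of $\cE_{\fc}$, which is $\otimes\iota^{\vee}_{\cE_{\fc}}$; dualizing recovers $\diff\fc = \otimes\iota_{\cE_{\fc}}$.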
\subsubsection{Twisted spin section via partial expansion}
Consider the commutative diagram of solid arrows
\begin{equation}\label{diag:lift-log-fields}
\xymatrix{
\cC_{\USF} \ar@/^/[rrd] \ar@/_/[ddr]_{f_{\USF,-}} \ar@{.>}[rd]|-{f^{e}_{\USF,-}} && \\
&\cP^{e,\circ}_{\USF,-} \ar[r] \ar[d]^{\fb} & \cA^{e,\circ} \ar[d] \\
&\cP_{\USF,-} \ar[r] & \cA\times\cA_{\max}
}
\end{equation}
where the square is the pull-back of (\ref{equ:bundle-partial-expansion}) via $\USF \to \fU$. Since the square is cartesian, we obtain the dashed arrow $f^e_{\USF,-}$.
Consider the following composition
\begin{equation}\label{equ:minus-section}
\bs_{\USF,-}\colon \cC_{\USF} \stackrel{f^e_{\USF,-}}{\longrightarrow} \cP^{e,\circ}_{\USF,-} \stackrel{\fc_\USF}{\longrightarrow} \vb(\cL_{\USF,-}\otimes \pi^*_{\USF}\bL^{\vee}_{\max})
\end{equation}
where $\fc_\USF$ is the pull-back of the contraction $\fc$ as in Lemma \ref{lem:contraction}.
\begin{lemma}\label{lem:twisted-spin-marking-vanishing}
The section $\bs_{\USF}$ in (\ref{equ:twisted-spin-section}) is given by the composition
\[
\cC_{\USF} \stackrel{\bs_{\USF,-}}{\longrightarrow} \vb(\cL_{\USF,-}\otimes \pi^*_{\USF}\bL^{\vee}_{\max}) \longrightarrow \vb(\cL_{\USF}\otimes \pi^*_{\USF}\bL^{\vee}_{\max}).
\]
\end{lemma}
\begin{proof}
Since $\bt\colon \cP_{\USF,reg} \to \cP_{\USF}$ is well-defined along the zero section, pulling back (\ref{equ:P-zero-section}) we have
\[
\bt^*\iota\colon \cO_{\cP_{\USF,reg}} \to \cO_{\cP_{\USF,reg}}(0_{\cP_{-}})\otimes \cO_{\cC_{\USF}}(\Sigma)|_{\cP_{\USF,reg}}.
\]
Since $\cO_{\cP_{\USF,reg}}(0_{\cP_{-}}) \cong \cL_{\USF,-}|_{\cP_{\USF,reg}}\otimes \cO_{\cP_{\USF,reg}}(\infty_{\cP_{-}})$, further pulling back to $\cP_{\USF,reg}^e = \fb^{-1}(\cP_{\USF,reg})$, we have
\[
\fb^*\bt^*\iota\colon \cO_{\cP^e_{\USF,reg}} \to \cL_{\USF,-}|_{\cP^e_{\USF,reg}}\otimes \cO_{\cP^e_{\USF,reg}}(\infty_{\cP^e_{-}}+\cE_{\fb}) \otimes \cO_{\cC_{\USF}}(\Sigma)|_{\cP^e_{\USF,reg}}
\]
which is naturally the restriction of
\[
\fb^*\bt^*\iota\colon \cO_{\cP^e_{\USF,-}} \to \cL_{\USF,-}|_{\cP^e_{\USF,-}}\otimes \cO_{\cP^e_{\USF,-}}(\infty_{\cP^e_{-}}+\cE_{\fb}) \otimes \cO_{\cC_{\USF}}(\Sigma)|_{\cP^e_{\USF,-}}.
\]
Since $f_{\USF}$ factors through $f^e_{\USF,-}$, we have $(f^e_{\USF,-})^*(\fb^*\bt^*\iota) = f^*_{\USF}\iota$.
By Lemma \ref{lem:compare-universal-log-twist}, we have $(f^e_{\USF,-})^*(\otimes\iota_{\cE_{\fc}}) = (\otimes\tf^{\vee}_{\USF})$ in (\ref{equ:tangent-twist}). Putting things together, we have
\begin{align*}
\bs_{\USF} = (\otimes\tf^{\vee}_{\USF}) \circ f^*_{\USF}\iota &= (f^e_{\USF,-})^*(\otimes\iota_{\cE_{\fc}}) \circ (f^e_{\USF,-})^*(\fb^*\bt^*\iota) \\
&= (f^e_{\USF,-})^*\big((\otimes\iota_{\cE_{\fc}}) \circ (\fb^*\bt^*\iota) \big).
\end{align*}
Note that $(\otimes\iota_{\cE_{\fc}}) \circ (\fb^*\bt^*\iota)$ is the following morphism
\[
\cO_{\cP^e_{\USF,-}} \to \cL_{\USF,-}|_{\cP^e_{\USF,-}}\otimes \cO_{\cP^e_{\USF,-}}(\infty_{\cP^e_{-}})\otimes\bL_{\max}^{\vee}\otimes\cO_{\cC_{\USF}}(\Sigma)|_{\cP^e_{\USF,-}},
\]
which factors through the natural morphism
\begin{equation}\label{equ:twisted-spin-Pe}
\cO_{\cP^e_{\USF,-}} \to \cL_{\USF,-}|_{\cP^e_{\USF,-}}\otimes \cO_{\cP^e_{\USF,-}}(\infty_{\cP^e_{-}})\otimes\bL_{\max}^{\vee}.
\end{equation}
Write $V = \vb(\cL_{\USF,-}\otimes \pi^*_{\USF}\bL^{\vee}_{\max})$ for simplicity.
The section $\bs_{\USF,-}$ is the pull-back, via $\bs_{\USF,-}$ itself, of the canonical morphism
\[
\iota_{-}\colon \cO_{V} \to \cO_{V}(0_{V}) \cong (\cL_{\USF,-}\otimes \pi^*_{\USF}\bL^{\vee}_{\max})|_{V}.
\]
This pulls back to
\[
\fc^* \iota_{-}\colon \cO_{\cP^{e,\circ}_{\USF,-}} \to \cL_{\USF,-}|_{\cP^{e,\circ}_{\USF,-}} \otimes\bL_{\max}^{\vee},
\]
which is the restriction of (\ref{equ:twisted-spin-Pe}). Since $\bs_{\USF}$ factors through $f^{e}_{\USF,-}$, the section (\ref{equ:twisted-spin-Pe}) pulls back to $\bs_{\USF,-}$ via $f^{e}_{\USF,-}$. This finishes the proof.
\end{proof}
\subsubsection{Relative cosection via partial expansion}
For simplicity, write
\[
\tilde{\cL}_{\USF,-} := \cL_{\USF,-}\otimes \pi^*_{\USF}\bL^{\vee}_{\max} \ \ \mbox{and} \ \ \tilde{\omega}_{\USF} := \omega_{\USF} \otimes \pi^*_{\USF}\bL^{-\otimes r}_{\max}
\]
Consider the following composition
\begin{equation}\label{equ:twist-composition}
\xymatrix{
\cP^{e,\circ}_{\USF,-} \ar[r]^{\fc \ \ \ \ } & \vb(\tilde{\cL}_{\USF,-}) \ar[r]^{W\ \ \ \ } \ar@/_1.5pc/[rr]_{W_{-}} & \vb\big(\tilde{\omega}_{\USF}((1-r)\Sigma)\big) \ar[r] & \vb(\tilde{\omega}_{\USF}).
}
\end{equation}
Taking differentials, we obtain
\[
T_{\cP^{e,\circ}_{\USF,-}/\cC_{\USF}} \stackrel{\diff \fc}{\longrightarrow} \fc^*T_{\vb(\tilde{\cL}_{\USF,-})/\cC_{\USF}} \stackrel{\diff W_{-}}{\longrightarrow} (W_{-}\circ\fc)^*T_{\vb(\tilde{\omega}_{\USF})/\cC_{\USF}}.
\]
Using \eqref{equ:twist-log-tangent} and pulling back to $\cC_{\USF}$, we have
\[
\cL_{\USF,-}\otimes f^{*}_{\USF}\cO(\infty_{\cP}) \stackrel{(f^e_{\USF,-})^*\diff \fc}{\longrightarrow} \tilde{\cL}_{\USF,-} \stackrel{\bs_{\USF,-}^*\diff W_{-}}{\longrightarrow} \tilde{\omega}_{\USF}.
\]
Further pushing forward, we obtain
\begin{equation}\label{equ:relative-cosection-via-expansion}
\begin{aligned}
\EE_{\USF/\fU} := \pi_{\USF,*}\big(\cL_{\USF,-}\otimes f^{*}_{\USF}\cO(\infty_{\cP})\big) &\stackrel{\pi_{\USF,*}(f^e_{\USF,-})^*\diff \fc}{\longrightarrow} \pi_{\USF,*}\tilde{\cL}_{\USF,-} \\
&\stackrel{\pi_{\USF,*}\bs_{\USF,-}^*\diff W_{-}}{\longrightarrow} \pi_{\USF,*}\tilde{\omega}_{\USF}
\end{aligned}
\end{equation}
\begin{proposition}\label{prop:relative-cosection-comparison}
The composition \eqref{equ:relative-cosection-via-expansion} is \eqref{equ:derived-cosection}. In particular, the relative cosection \eqref{equ:relative-cosection} is the $H^1$ of \eqref{equ:relative-cosection-via-expansion}.
\end{proposition}
\begin{proof}
By Lemma \ref{lem:compare-universal-log-twist}, we have $(f^e_{\USF,-})^*(\otimes\iota_{\cE_{\fc}}) = (\otimes\tf^{\vee}_{\USF})$, where $\iota_{\cE_{\fc}}$ is as in \eqref{equ:C-twist}. By Lemma \ref{lem:diff-c}, $(f^e_{\USF,-})^*\diff \fc$ is obtained by tensoring \eqref{equ:tangent-twist} by $\cO_{\cC_{\USF}}(-\Sigma)$. By \eqref{equ:narrow-iso}, the arrow $\pi_{\USF,*}(f^e_{\USF,-})^*\diff \fc$ is \eqref{equ:push-tangent-twist}. Further observe that the arrow $\pi_{\USF,*}\bs_{\USF,-}^*\diff W_{-}$ is \eqref{equ:after-twist}. This proves the statement.
\end{proof}
\subsubsection{The twisted Hodge bundle}
Denote by $\tilde{\omega}_{\fU} := \omega_{\fU} \otimes \pi^*_{\fU}\bL^{-\otimes r}_{\max}$. Consider the direct image cone
$
\bC(\pi_{\fU,*}\tilde{\omega}_{\fU})
$
as in \cite[Definition 2.1]{CL12}. This is an algebraic stack over $\fU$ parameterizing sections of $\tilde{\omega}_{\fU}$, see \cite[Proposition 2.2]{CL12}. We further equip it with the log structure pulled back from $\fU$. For simplicity, we write $\fH := \bC(\pi_{\fU,*}\tilde{\omega}_{\fU})$, and denote by $\bs_{\fH}\colon \cC_{\fH} \to \vb(\tilde{\omega}_{\fH})$ the universal section over $\fH$.
By \cite[Proposition 2.5]{CL12}, the strict morphism $\fH \to \fU$ has a perfect obstruction theory
\begin{equation}\label{equ:Hodge-perfect-obs}
\TT_{\fH/\fU} \to \EE_{\fH/\fU} := \pi_{\fH,*}\tilde{\omega}_{\fH}.
\end{equation}
By the projection formula, we have
\begin{equation}\label{equ:fake-obs}
R^1\pi_{\fU,*} \tilde{\omega}_{\fU} = (R^1\pi_{\fU, *}\omega_{\fU}\otimes \bL^{-\otimes r}_{\max}) \cong \bL^{-\otimes r}_{\max}.
\end{equation}
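The isomorphism $R^1\pi_{\fU,*}\omega_{\fU} \cong \cO_{\fU}$ used in \eqref{equ:fake-obs} is the standard consequence of relative duality, recalled here for convenience under the assumption that $\pi_{\fU}\colon \cC_{\fU} \to \fU$ is a family of connected nodal curves:
\[
R^1\pi_{\fU,*}\,\omega_{\fU} \cong \big(\pi_{\fU,*}\cO_{\cC_{\fU}}\big)^{\vee} \cong \cO_{\fU},
\]
where the second isomorphism holds since the fibers are connected and reduced. Tensoring by the line bundle $\bL^{-\otimes r}_{\max}$ and applying the projection formula then gives \eqref{equ:fake-obs}.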
This implies that
$R^0\pi_{\fU,*} \tilde{\omega}_{\fU} \cong R^0\pi_{\fU,*}\omega_{\fU}\otimes \bL^{-\otimes r}_{\max}$ is indeed a locally free sheaf whose associated geometric vector bundle is precisely $\fH$.
In particular, the morphism $\fH \to \fU$ is strict and smooth.
Thus $\TT_{\fH/\fU}$ is a vector bundle over $\fH$ concentrated in
degree zero, and the following morphism is trivial:
\[
0 = H^1(\TT_{\fH/\fU}) \stackrel{0}{\longrightarrow} R^1\pi_{\fH,*} \tilde{\omega}_{\fH} \cong \bL^{-\otimes r}_{\max}.
\]
The section $\bs_{\USF}$ as in \eqref{equ:twisted-spin-section} defines a section
\[
\bs_{\USF}^{\otimes r}\colon \cC_{\USF} \to \vb(\cL_{\USF}^{\otimes r}\otimes \pi^*_{\USF}\bL^{-\otimes r}_{\max}) \cong \vb(\omega_{\log,\USF}\otimes \pi^*_{\USF}\bL^{-\otimes r}_{\max}).
\]
By Lemma \ref{lem:twisted-spin-marking-vanishing}, $\bs_{\USF}$ is a global section of $\cL_{\USF}(-\Sigma)\otimes \pi^*_{\USF}\bL^{\vee}_{\max}$. Thus $\bs_{\USF}^{\otimes r}$ factors through a section
\[
\cC_{\USF} \to \vb(\omega_{\USF}\otimes \pi^*_{\USF}\bL^{-\otimes r}_{\max}),
\]
which is again denoted by $\bs_{\USF}^{\otimes r}$. This induces a morphism
\[\USF \to \fH\]
such that $\bs_{\USF}^{\otimes r}$ is the pull-back of $\bs_{\fH}$.
\subsubsection{Obstruction factorization}
\begin{lemma}\label{lem:obs-commute}
There is a canonical commutative diagram
\begin{equation}\label{diag:rel-obs-commute}
\xymatrix{
\TT_{\USF/\fU} \ar[r] \ar[d] & \TT_{\fH/\fU}|_{\USF} \ar[d] \\
\EE_{\USF/\fU} \ar[r] & \EE_{\fH/\fU}|_{\USF}
}
\end{equation}
where the bottom arrow is \eqref{equ:relative-cosection-via-expansion}, and the left and right vertical arrows are the perfect obstruction theories (\ref{equ:log-perfect-obs}) and (\ref{equ:Hodge-perfect-obs}) respectively.
\end{lemma}
\begin{proof}
Consider the following commutative diagram over $\cC_{\fU}$:
\[
\xymatrix{
\cC_\USF \ar[rr] \ar[d]_{f_{\USF,-}^e} && \cC_\fH \ar[d]^{s_\fH} \\
\cP^{e,\circ}_{\fU,-} \ar[rr]^{W_{-} \circ \fc} && \vb(\tilde{\omega}_{\fU})
}
\]
By abuse of notation, $s_\fH$ denotes the composition $ \cC_\fH \to \vb(\tilde{\omega}_{\fH}) \to \vb(\tilde{\omega}_{\fU})$. We obtain a commutative diagram of log tangent complexes
\begin{equation*}
\xymatrix{
\pi_{\USF}^* \TT_{\USF/\fU} \cong \TT_{\cC_\USF/\cC_\fU} \ar[rr] \ar[d] && \TT_{\cC_\fH/\cC_\fU}|_{\cC_{\USF}} \ar[d]\\
(f_{\USF,-}^e)^* \TT_{\cP^{e,\circ}_{\fU,-}/\cC_\fU} \ar[rr]^{(f_{\USF,-}^e)^*\diff (W_{-} \circ \fc)} && (s_\fH)^* \TT_{\vb(\tilde{\omega}_{\fU})/\cC_\fU}|_{\cC_\USF}
}
\end{equation*}
Since $\fb$ in \eqref{diag:lift-log-fields} is log \'etale, we have
$(f_{\USF,-}^e)^* \TT_{\cP^{e,\circ}_{\fU,-}/\cC_\fU} \cong
(f_{\USF,-})^* \TT_{\cP_{\fU,-}/\cC_\fU}$.
Applying $\pi_{\USF,*}$ and using adjunction, we obtain
\begin{equation*}
\xymatrix{
\TT_{\USF/\fU} \ar[rrr] \ar[d] &&& \TT_{\fH/\fU}|_\USF \ar[d] \\
\pi_{\USF,*}(f_{\USF,-})^* \TT_{\cP_{\fU,-}/\cC_\fU} \ar[rrr]^{\pi_{\USF,*} (f_{\USF,-}^e)^*\diff (W_{-} \circ \fc)} &&& \pi_{\USF,*}(s_\fH)^* \TT_{\vb(\tilde{\omega}_{\fU})/\cC_\fU}|_{\cC_\USF}
}
\end{equation*}
which is \eqref{diag:rel-obs-commute}.
\end{proof}
\begin{proposition}\label{prop:rel-obs-factorization}
The injection $H^1(\TT_{\USF/\fU}) \to \obs_{\USF/\fU}$ factors through the kernel of the relative cosection $\sigma_{\USF/\fU}$ in (\ref{equ:relative-cosection}).
\end{proposition}
\begin{proof}
By Lemma \ref{lem:obs-commute}, taking $H^1$ of (\ref{diag:rel-obs-commute}), we obtain a commutative diagram
\[
\xymatrix{
H^1(\TT_{\USF/\fU}) \ar[r] \ar[d] & H^1(\TT_{\fH/\fU}) = 0 \ar[d] \\
\obs_{\USF/\fU} \ar[r]^{\sigma_{\USF/\fU}} & \bL_{\max}^{-\otimes r}
}
\]
where $H^1(\TT_{\fH/\fU}) = 0$ follows from the smoothness of $\fH \to \fU$.
\end{proof}
\subsection{The reduced relative perfect obstruction theory}
The dual of \eqref{equ:boundary-complex} induces a complex with amplitude $[0,1]$ over $\fU$:
\[
\FF := \cO_{\fU} \stackrel{\epsilon}{\longrightarrow} \bL_{\max}^{-\otimes r}.
\]
Since $\fH \to \fU$ is log smooth, $\epsilon$ is injective. Consider the cokernel $\cok \epsilon$. Then $\FF = \cok\epsilon[-1]$ in the derived category. The composition
\[
\EE_{\fH/\fU} \to H^1(\EE_{\fH/\fU})[-1] \cong \bL^{-\otimes r}_{\max}[-1] \twoheadrightarrow \cok\epsilon[-1]
\]
defines a morphism of complexes
$
\EE_{\fH/\fU} \to \FF|_{\fH},
$
and hence a triangle
\begin{equation}\label{tri:reduce-Y}
\EE_{\fH/\fU}^{\mathrm{red}} \longrightarrow \EE_{\fH/\fU} \longrightarrow \FF|_{\fH} \stackrel{[1]}{\longrightarrow}
\end{equation}
where the notation $|_{*}$ stands for derived pull-back to $*$.
\begin{lemma}\label{lem:red-O}
$H^1(\EE^{\mathrm{red}}_{\fH/\fU}) \cong \cO_{\fH}$.
\end{lemma}
\begin{proof}
Taking the long exact sequence of (\ref{tri:reduce-Y}) and using $H^0(\FF) = 0$, we have an exact sequence
\[
0 \to H^1(\EE_{\fH/\fU}^{\mathrm{red}}) \to H^1(\EE_{\fH/\fU}) \to H^1(\FF|_{\fH})\to 0.
\]
Since $H^1(\EE_{\fH/\fU}) \to H^1(\FF|_{\fH})$ is precisely the surjection $\bL^{-\otimes r}_{\max} \twoheadrightarrow \cok\epsilon$, whose kernel is the image of $\epsilon$, it follows that $H^1(\EE^{\mathrm{red}}_{\fH/\fU}) \cong \cO_{\fH}$.
\end{proof}
The composition
\begin{equation}\label{equ:composing-to-moduli}
\EE_{\USF/\fU} \to \EE_{\fH/\fU}|_{\USF} \to \FF|_{\USF},
\end{equation}
yields a triangle
\begin{equation}\label{tri:reduce-X}
\EE_{\USF/\fU}^{\mathrm{red}} \longrightarrow \EE_{\USF/\fU} \longrightarrow \FF|_{\USF} \stackrel{[1]}{\longrightarrow}
\end{equation}
\begin{lemma}\label{lem:red-obs-factor}
The perfect obstruction theories $\TT_{\fH/\fU} \to \EE_{\fH/\fU}$ and $\TT_{\USF/\fU} \to \EE_{\USF/\fU}$ factor through $\TT_{\fH/\fU} \to \EE_{\fH/\fU}^{\mathrm{red}}$ and $\TT_{\USF/\fU} \to \EE_{\USF/\fU}^{\mathrm{red}}$ respectively. Furthermore, they fit in a commutative diagram
\begin{equation}\label{diag:red-obs-commute}
\xymatrix{
\TT_{\USF/\fU} \ar[r] \ar[d] & \TT_{\fH/\fU}|_{\USF} \ar[d] \\
\EE^{\mathrm{red}}_{\USF/\fU} \ar[r] & \EE^{\mathrm{red}}_{\fH/\fU}|_{\USF}
}
\end{equation}
\end{lemma}
\begin{proof}
By Lemma \ref{lem:obs-commute}, we have a commutative diagram of solid arrows:
\begin{equation}\label{diag:reduce-X-Y}
\xymatrix{
\TT_{\USF/\fU} \ar@/^1pc/[rrd] \ar@{-->}[rd] \ar[dd] &&&& \\
& \EE^{\mathrm{red}}_{\USF/\fU} \ar[dd] \ar[r] & \EE_{\USF/\fU} \ar[dd] \ar[r] & \FF|_{\USF} \ar[r]^{[1]} \ar@{=}[dd] & \\
\TT_{\fH/\fU}|_{\USF} \ar@/^1pc/[rrd]|{\ \ \ \hole \ \ } \ar@{-->}[rd] &&&& \\
& \EE^{\mathrm{red}}_{\fH/\fU}|_{\USF} \ar[r] & \EE_{\fH/\fU}|_{\USF} \ar[r] & \FF|_{\USF} \ar[r]^{[1]} &
}
\end{equation}
where the two horizontal lines are triangles (\ref{tri:reduce-X}) and (\ref{tri:reduce-Y}), and the two curved arrows are the corresponding perfect obstruction theories.
Since $\fH \to \fU$ is representable and smooth, the complex $\TT_{\fH/\fU}$ is represented by the relative tangent bundle $T_{\fH/\fU}$ in degree zero. Thus the composition $\TT_{\fH/\fU} \to \EE_{\fH/\fU} \to \FF|_{\fH}$ is the zero morphism. This yields the lower dashed arrow $\TT_{\fH/\fU} \to \EE^{\mathrm{red}}_{\fH/\fU}$.
Now by the commutativity, the composition $\TT_{\USF/\fU} \to \EE_{\USF/\fU} \to \FF|_{\USF}$ coincides with $\TT_{\USF/\fU} \to \TT_{\fH/\fU}|_{\USF} \to \FF|_{\USF}$, hence is trivial. Thus, we obtain the top dashed arrow $\TT_{\USF/\fU} \to \EE^{\mathrm{red}}_{\USF/\fU}$.
\end{proof}
\begin{lemma}\label{lem:red-perfect}
The two complexes $\EE^{\mathrm{red}}_{\USF/\fU}$ and $\EE^{\mathrm{red}}_{\fH/\fU}$ are perfect with tor-amplitude in $[0,1]$.
\end{lemma}
\begin{proof}
Since $\EE_{\fH/\fU}$, $\EE_{\USF/\fU}$ and $\FF$ are perfect in $[0,1]$, the complexes $\EE^{\mathrm{red}}_{\fH/\fU}$ and $\EE^{\mathrm{red}}_{\USF/\fU}$ are at least perfect in $[0,2]$. It suffices to show that $H^2(\EE^{\mathrm{red}}_{\fH/\fU}) = 0$ and $H^2(\EE^{\mathrm{red}}_{\USF/\fU}) = 0$.
Taking the long exact sequence of (\ref{tri:reduce-Y}), we have an exact sequence
\[
H^{1}(\EE_{\fH/\fU}) \to H^1(\FF|_{\fH}) \to H^2(\EE^{\mathrm{red}}_{\fH/\fU}) \to 0.
\]
Since the left arrow is the surjection $\bL^{-\otimes r}_{\max} \twoheadrightarrow \cok\epsilon$, we have $H^2(\EE^{\mathrm{red}}_{\fH/\fU}) = 0$.
Similarly using (\ref{tri:reduce-X}), we have an exact sequence
\[
H^{1}(\EE_{\USF/\fU}) \to H^1(\FF|_{\USF}) \to H^2(\EE^{\mathrm{red}}_{\USF/\fU}) \to 0.
\]
By \eqref{equ:composing-to-moduli}, the left arrow is the following composition
\[
H^{1}(\EE_{\USF/\fU}) \to H^{1}(\EE_{\fH/\fU}|_{\USF}) \twoheadrightarrow H^1(\FF|_{\USF}).
\]
By construction, $\FF|_{\USF\setminus\Delta_{\max}} = 0$. It thus suffices to show that the above composition is surjective in a neighborhood of $\Delta_{\max}$, which follows from Proposition \ref{prop:cosection-boundary-surjective} and Lemma \ref{lem:obs-commute}.
\end{proof}
\begin{lemma}\label{lem:red-obs}
The two arrows $\TT_{\fH/\fU} \to \EE_{\fH/\fU}^{\mathrm{red}}$ and $\TT_{\USF/\fU} \to \EE_{\USF/\fU}^{\mathrm{red}}$ define perfect obstruction theories of $\fH \to \fU$ and $\USF \to \fU$ respectively.
\end{lemma}
\begin{proof}
We verify the case of $\TT_{\USF/\fU} \to \EE_{\USF/\fU}^{\mathrm{red}}$. The case of $\TT_{\fH/\fU} \to \EE_{\fH/\fU}^{\mathrm{red}}$ is similar. By the triangle \eqref{tri:reduce-X} and the factorization of Lemma \ref{lem:red-obs-factor}, we have a surjection $H^0(\TT_{\USF/\fU}) \twoheadrightarrow H^0(\EE^{\mathrm{red}}_{\USF/\fU})$ and an injection $H^1(\TT_{\USF/\fU}) \hookrightarrow H^1(\EE^{\mathrm{red}}_{\USF/\fU})$. Since $\FF$ is perfect in $[0,1]$, (\ref{tri:reduce-X}) implies that $H^0(\TT_{\USF/\fU}) \to H^0(\EE^{\mathrm{red}}_{\USF/\fU})$ is also injective, and hence an isomorphism.
\end{proof}
The proof of the above lemma leads to the following
\begin{corollary}\label{cor:red-cos-surj}
\begin{enumerate}
\item $H^0(\EE_{\USF/\fU}^{\mathrm{red}}) = H^0(\EE_{\USF/\fU})$.
\item Diagram (\ref{diag:reduce-X-Y}) induces a morphism between long exact sequences
\[
\xymatrix{
0 \ar[r] & H^0(\FF|_{\USF}) \ar[r] \ar[d]^{\cong} & H^1(\EE^{\mathrm{red}}_{\USF/\fU}) \ar[r] \ar[d]^{\sigma^{\mathrm{red}}_{\USF/\fU}} & H^1(\EE_{\USF/\fU}) \ar[r] \ar[d]^{\sigma_{\USF/\fU}} & H^1(\FF|_{\USF}) \ar[r] \ar[d]^{\cong}& 0 \\
0 \ar[r] & H^0(\FF|_{\USF}) \ar[r] & H^1(\EE^{\mathrm{red}}_{\fH/\fU}) \ar[r] & H^1(\EE_{\fH/\fU}) \ar[r] & H^1(\FF|_{\USF}) \ar[r] & 0
}
\]
where the morphism $\sigma^{\mathrm{red}}_{\USF/\fU}$ is surjective along $\Delta_{\max}$.
\end{enumerate}
\end{corollary}
\begin{proof}
It remains to verify the surjectivity of $\sigma^{\mathrm{red}}_{\USF/\fU}$ along $\Delta_{\max}$. This follows from the surjectivity of $\sigma_{\USF/\fU}$ along $\Delta_{\max}$ by Proposition \ref{prop:cosection-boundary-surjective}.
\end{proof}
We summarize our construction below.
\begin{proposition}\label{prop:reduced-obs}
The morphism $\USF \to \fU$ admits a {\em reduced perfect obstruction theory}
\begin{equation}\label{equ:reduced-relative-obs}
\TT_{\USF/\fU} \to \EE^{\mathrm{red}}_{\USF/\fU},
\end{equation}
and a {\em reduced relative cosection}
\begin{equation}\label{equ:reduced-relative-cosection}
\sigma^{\mathrm{red}}_{\USF/\fU}\colon \obs^{\mathrm{red}}_{\USF/\fU} := H^1(\EE^{\mathrm{red}}_{\USF/\fU}) \to \cO_{\USF}
\end{equation}
with the following properties
\begin{enumerate}
\item $\EE^{\mathrm{red}}_{\USF/\fU}|_{\USF\setminus\Delta_{\max}} = \EE_{\USF/\fU}|_{\USF\setminus\Delta_{\max}}$.
\item $\sigma^{\mathrm{red}}_{\USF/\fU}|_{\USF\setminus\Delta_{\max}} = \sigma_{\USF/\fU}|_{\USF\setminus\Delta_{\max}}$.
\item $\sigma^{\mathrm{red}}_{\USF/\fU}$ is surjective along $\Delta_{\max}$.
\end{enumerate}
In particular, $\sigma^{\mathrm{red}}_{\USF/\fU}$ and $\sigma_{\USF/\fU}$ have the same degeneracy loci.
\end{proposition}
\begin{proof}
The perfect obstruction theory has been verified in Lemmas \ref{lem:red-perfect} and \ref{lem:red-obs}. The formation of $\sigma^{\mathrm{red}}_{\USF/\fU}$ and its surjectivity along $\Delta_{\max}$ follow from Corollary \ref{cor:red-cos-surj}.
Finally, (1) follows from the observation that $\FF|_{\USF\setminus \Delta_{\max}} = 0$. Statement (2) follows from (1) and \eqref{diag:reduce-X-Y}.
\end{proof}
\begin{notation}\label{not:red-virtual-cycle}
Since $\fU$ is equi-dimensional, denote by $[\USF]^{\mathrm{red}}$ the virtual fundamental class of $\USF$ defined by the relative perfect obstruction theory \eqref{equ:reduced-relative-obs}, see \cite{BF97}.
\end{notation}
\subsection{The reduced absolute perfect obstruction theory}
\subsubsection{Resolution of the base}
\begin{lemma}\label{lem:base-resolution}
Let $\fV \subset \fU$ be a finite type open substack, and write $\Delta_{\max, \fV} = \Delta_{\max} \cap \fV$. Then there exists a birational, log \'etale, projective morphism of log stacks
$
\tilde{\phi} \colon \tilde{\fV} \to \fV
$
such that
\begin{enumerate}
\item $\tilde{\phi}|_{\fV\setminus \Delta_{\max,\fV}}$ is an isomorphism onto $\fV\setminus \Delta_{\max,\fV}$.
\item The log structure of $\tilde{\fV}$ is locally free. In particular, the underlying stack of $\tilde{\fV}$ is smooth.
\end{enumerate}
\end{lemma}
\begin{proof}
Recall from Corollary~\ref{cor:separate-non-distinct-log-U} that
there is a canonical splitting
$\cM_{\fV} = \cM'_{\fV}\oplus_{\cO^*}\cM''_{\fV}$ where
$\oM''_{\fV,s} = \NN^d$ is the factor corresponding to nodes with
the trivial contact order for each geometric point $s \to \fV$.
Indeed, given a node over $s$ with trivial contact order, in a
neighborhood of $s$ the node can either be smoothed out or remain a
node with trivial contact order.
Observe that $\cM'_{\fV}$ is trivial along
$\fV\setminus \Delta_{\max,\fV}$ as the curves have no degenerate
components away from $\Delta_{\max}$.
Denote by $\cA'_{\fV}$ and $\cA''_{\fV}$ the Artin fans associated
to the log structures $\cM'_{\fV}$ and $\cM''_{\fV}$ respectively,
see \cite[Proposition 3.1.1]{ACMW17}.
By Theorem \ref{thm:max-uni-moduli}, we have a strict, smooth
morphism of log stacks $\fV \to \cA'_{\fV}\times \cA''_{\fV}$.
Let $\cY \to \cA'_{\fV}$ be the projective sub-division provided by
\cite[Theorem 4.4.2]{ACMW17}.
In particular, it is projective and log \'etale, and $\cM_{\cY}$ is
locally free.
Consider the induced projective, log \'etale morphism
\[
\tilde{\phi} \colon \tilde{\fV} := \fV\times_{\cA'_{\fV}\times\cA''_{\fV}}(\cY \times \cA''_{\fV}) \to \fV.
\]
Now (2) follows from the construction.
Since $\cM'_{\fV}$ is the trivial log structure on
$\fV\setminus \Delta_{\max,\fV}$, $\tilde{\phi}$ is an isomorphism
away from $\Delta_{\max,\fV}$.
This proves (1).
\end{proof}
Let $\fV \subset \fU$ be a finite type open substack containing the
image of $\USF$.
We fix a resolution $\tilde{\phi} \colon \tilde{\fV} \to \fV$ as in
Lemma~\ref{lem:base-resolution}.
Consider the fiber products
\[
\tilde{\fH} := \fH\times_{\fU}\tilde{\fV} \ \ \ \mbox{and} \ \ \ \tilde{\USF} := \USF\times_{\fU}\tilde{\fV}.
\]
The perfect obstruction theories $\TT_{\fH/\fU} \to \EE_{\fH/\fU}^{\mathrm{red}}$ and $\TT_{\USF/\fU} \to \EE_{\USF/\fU}^{\mathrm{red}}$ in Lemma \ref{lem:red-obs} pull back to perfect obstruction theories
\[
\TT_{\tilde{\fH}/\tilde{\fV}} \to \EE_{\tilde{\fH}/\tilde{\fV}}^{\mathrm{red}} \ \ \ \mbox{and} \ \ \ \TT_{\tilde{\USF}/\tilde{\fV}} \to \EE_{\tilde{\USF}/\tilde{\fV}}^{\mathrm{red}}.
\]
Since $\tilde{\fV}$ is equi-dimensional, let $[\tilde{\USF}]^{\mathrm{red}}$
be the virtual cycle of $\tilde{\USF}$ defined by the above perfect
obstruction theory as in \cite{BF97}. By Lemma
\ref{lem:base-resolution} and the virtual push-forward of \cite{Co06,
Ma12}, we obtain:
\begin{lemma}\label{lem:push-along-resolution}
$\tilde{\phi}_{*}[\tilde{\USF}]^{\mathrm{red}} = [\USF]^{\mathrm{red}}$.
\end{lemma}
\subsubsection{The absolute reduced theory and cosection}
Consider the morphism of triangles:
\begin{equation}\label{diag:abs-obs-H}
\xymatrix{
\TT_{\tilde{\fH}/\tilde{\fV}} \ar[r] \ar[d] & \TT_{\tilde{\fH}} \ar[r] \ar[d] & \TT_{\tilde{\fV}}|_{\tilde{\fH}} \ar[r]^{[1]} \ar[d]^{\cong} & \\
\EE^{\mathrm{red}}_{\tilde{\fH}/\tilde{\fV}} \ar[r] & \EE^{\mathrm{red}}_{\tilde{\fH}} \ar[r] & \TT_{\tilde{\fV}}|_{\tilde{\fH}} \ar[r]^{[1]} &
}
\end{equation}
\begin{lemma}\label{lem:lift-cosection}
The induced morphism $H^1(\EE^{\mathrm{red}}_{\tilde{\fH}/\tilde{\fV}}) \to H^1(\EE^{\mathrm{red}}_{\tilde{\fH}})$ is an isomorphism and $H^1(\EE^{\mathrm{red}}_{\tilde{\fH}}) \cong \cO_{\tilde{\fH}}$.
\end{lemma}
\begin{proof}
Since $\tilde{\fV}$ is smooth, we have $H^1(\TT_{\tilde{\fV}}) = 0$. Consider the induced morphism between long exact sequences
\[
\xymatrix{
H^{0}(\TT_{\tilde{\fH}}) \ar[r] \ar[d]^{\cong} & H^{0}(\TT_{\tilde{\fV}}|_{\tilde{\fH}}) \ar[r] \ar[d]^{\cong} & H^{1}(\TT_{\tilde{\fH}/\tilde{\fV}}) \ar[r] \ar[d] & H^{1}(\TT_{\tilde{\fH}}) \ar[r] \ar[d] & 0 \\
H^{0}(\EE^{\mathrm{red}}_{\tilde{\fH}}) \ar[r] & H^{0}(\TT_{\tilde{\fV}}|_{\tilde{\fH}}) \ar[r] & H^{1}(\EE^{\mathrm{red}}_{\tilde{\fH}/\tilde{\fV}}) \ar[r] & H^{1}(\EE^{\mathrm{red}}_{\tilde{\fH}}) \ar[r] & 0
}
\]
Since $\tilde{\fH} \to \tilde{\fV}$ is smooth, $H^{0}(\TT_{\tilde{\fH}}) \to H^{0}(\TT_{\tilde{\fV}}|_{\tilde{\fH}})$ and $H^{0}(\EE^{\mathrm{red}}_{\tilde{\fH}}) \to H^{0}(\TT_{\tilde{\fV}}|_{\tilde{\fH}})$ are both surjective. Thus $H^1(\EE^{\mathrm{red}}_{\tilde{\fH}/\tilde{\fV}}) \to H^1(\EE^{\mathrm{red}}_{\tilde{\fH}})$ is an isomorphism. Lemma \ref{lem:red-O} implies that $H^1(\EE^{\mathrm{red}}_{\tilde{\fH}}) \cong \cO_{\tilde{\fH}}$.
\end{proof}
Now consider the morphism of triangles:
\begin{equation}\label{diag:abs-obs-X}
\xymatrix{
\TT_{\tilde{\USF}/\tilde{\fV}} \ar[r] \ar[d] & \TT_{\tilde{\USF}} \ar[r] \ar[d]^{\varphi_{\tilde{\USF}}} & \TT_{\tilde{\fV}} \ar[r]^{[1]} \ar[d]^{\cong} & \\
\EE^{\mathrm{red}}_{\tilde{\USF}/\tilde{\fV}} \ar[r] & \EE^{\mathrm{red}}_{\tilde{\USF}} \ar[r] & \TT_{\tilde{\fV}} \ar[r]^{[1]} &
}
\end{equation}
By \cite[Proposition A.1 (1)]{BL00}, we obtain a perfect obstruction theory $\TT_{\tilde{\USF}} \to \EE^{\mathrm{red}}_{\tilde{\USF}}$ of $\tilde{\USF}$ with the corresponding virtual cycle $[\tilde{\USF}]^{\mathrm{red}}$.
The bottom morphism in (\ref{diag:red-obs-commute}) induces a morphism of triangles
\[
\xymatrix{
\EE^{\mathrm{red}}_{\tilde{\USF}/\tilde{\fV}}|_{\tilde{\USF}} \ar[r] \ar[d] & \EE^{\mathrm{red}}_{\tilde{\USF}}|_{\tilde{\USF}} \ar[r] \ar[d] & \TT_{\tilde{\fV}}|_{\tilde{\USF}} \ar[r]^{[1]} \ar[d] & \\
\EE^{\mathrm{red}}_{\tilde{\fH}/\tilde{\fV}} \ar[r] & \EE^{\mathrm{red}}_{\tilde{\fH}} \ar[r] & \TT_{\tilde{\fV}}|_{\tilde{\USF}} \ar[r]^{[1]} &
}
\]
Taking $H^1$ and applying Lemma \ref{lem:lift-cosection}, we have a commutative diagram
\[
\xymatrix{
H^1(\EE^{\mathrm{red}}_{\tilde{\USF}/\tilde{\fV}}) \ar@{->>}[r] \ar[d]_{\sigma^{\mathrm{red}}_{\tilde{\USF}/\tilde{\fV}}} & H^1(\EE^{\mathrm{red}}_{\tilde{\USF}}) \ar[d]^{\sigma^{\mathrm{red}}_{\tilde{\USF}}} \\
\cO \ar[r]^{=} & \cO.
}
\]
Observe that $\sigma^{\mathrm{red}}_{\tilde{\USF}/\tilde{\fV}}$ is the pull-back of $\sigma^{\mathrm{red}}_{\USF/\fU}$ in \eqref{equ:reduced-relative-cosection}. We call $\sigma^{\mathrm{red}}_{\tilde{\USF}}$ the {\em absolute reduced cosection}.
\subsubsection{Proof of Theorem \ref{thm:main}}
Denote by $\tilde{\Delta}_{\max} := \tilde{\USF}\times_{\USF}\Delta_{\max}$. By Lemma \ref{lem:base-resolution} (1), we have the identity $\USF^{\circ} := \tilde{\USF} \setminus \tilde{\Delta}_{\max} = \USF \setminus \Delta_{\max}$. Consider the open embedding $\iota\colon \USF^{\circ} \hookrightarrow \tilde{\USF}$ with the trivial perfect obstruction theory. Thus the virtual pull-back $\iota^{!}$ in the sense of \cite{Ma12} is just the flat pull-back.
Denote by $\sigma_{\USF^{\circ}} = \sigma_{\tilde{\USF}}^{\mathrm{red}}|_{\USF^{\circ}}$. By Lemma \ref{lem:base-resolution}, \ref{lem:old-compatible}, and Proposition \ref{prop:reduced-obs} (2), the morphism $\sigma_{\USF^{\circ}}$ is the absolute cosection in \cite[Proposition 3.4]{CLL15} in the $r$-spin case. We then obtain:
\begin{lemma}
$[\USF^{\circ}]_{\sigma_{\USF^{\circ}}}$ is {\em Witten's top Chern class} as in \cite[Definition-Proposition 3.9]{CLL15}.
\end{lemma}
On the other hand, let $\tilde{\USF}(\sigma_{\tilde{\USF}}^{\mathrm{red}})$ (respectively $\USF^{\circ}(\sigma_{\USF^{\circ}})$) be the degeneracy loci of $\sigma_{\tilde{\USF}}^{\mathrm{red}}$ (respectively $\sigma_{\USF^{\circ}}$). Since $\sigma^{\mathrm{red}}_{\tilde{\USF}/\tilde{\fV}}$ is the pull-back of $\sigma^{\mathrm{red}}_{\USF/\fU}$, Proposition \ref{prop:reduced-obs} (3) implies that $\sigma_{\tilde{\USF}}^{\mathrm{red}}$ is surjective along $\tilde{\Delta}_{\max}$, hence $\tilde{\USF}(\sigma_{\tilde{\USF}}^{\mathrm{red}}) = \USF^{\circ}(\sigma_{\tilde{\USF}}^{\mathrm{red}})$.
Let $\iota^{!}_{\sigma_{\tilde{\USF}}^{\mathrm{red}}}$ be the cosection localized virtual pull-back as in \cite{CKL17}. Since $\iota^{!}_{\sigma_{\tilde{\USF}}^{\mathrm{red}}} = \iota^{!}$ and $\tilde{\USF}(\sigma_{\tilde{\USF}}^{\mathrm{red}}) = \USF^{\circ}(\sigma_{\USF^{\circ}})$, applying \cite[Theorem 2.6]{CKL17} we have the following equalities in $A_*(\USF^{\circ}(\sigma_{\USF^{\circ}}))$:
\[
[\tilde{\USF}]^{\mathrm{red}}_{\sigma_{\tilde{\USF}}^{\mathrm{red}}} = \iota^{!}_{\sigma_{\tilde{\USF}}^{\mathrm{red}}}[\tilde{\USF}]^{\mathrm{red}}_{\sigma_{\tilde{\USF}}^{\mathrm{red}}} = [\USF^{\circ}]_{\sigma_{\USF^{\circ}}}.
\]
Let $\tilde{i}\colon \USF^{\circ}(\sigma_{\USF^{\circ}}) \to \tilde{\USF}$ be the closed embedding. By \cite{KiLi13}, we have:
\begin{lemma}
$\tilde{i}_{*}[\USF^{\circ}]_{\sigma_{\USF^{\circ}}} = [\tilde{\USF}]^{\mathrm{red}}$.
\end{lemma}
Finally, let $i = \tilde{\phi}\circ \tilde{i}\colon \USF^{\circ}(\sigma_{\USF^{\circ}}) \to \USF$ be the closed embedding. Applying Lemma \ref{lem:push-along-resolution}, we have:
\begin{proposition}\label{prop:comparison}
$i_{*}[\USF^{\circ}]_{\sigma_{\USF^{\circ}}} = [\USF]^{\mathrm{red}}$.
\end{proposition}
This completes the proof of Theorem \ref{thm:main}.
\section{Introduction}
\label{sec:intro}
\input{sec_introduction}
\section{Preliminaries}
\label{sec:preliminaries}
\input{sec_preliminaries}
\section{Problem Formulation}
\label{sec:problemFormulation}
\input{sec_problem}
\section{Heuristic solutions}
\label{sec:alg}
\input{sec_algorithms}
\section{Experimental Evaluation}
\label{sec:exp}
\input{sec_experimentalEvaluation}
\section{Conclusion and Future work}
\label{sec:con}
This paper investigates the market-based aggregation problem using the FO model to capture flexible charging loads of EVs.
It proposes $3$ market-based FO aggregation techniques that efficiently aggregate loads from thousands of EVs while taking
into account real market requirements.
Consequently, the techniques produce aggregated FOs that can be transformed into flexible orders and traded in the energy market.
The paper financially evaluates the proposed techniques based on real electricity prices and shows that a $27$\% cost reduction on energy purchase can be achieved via flexible orders.
In our future work,
we will enrich our techniques considering pricing forecast models and uncertainty in patterns of driving behavior.
Furthermore, we will investigate more variations of the proposed algorithm and prove theoretical lower bounds for their complexity.
Moreover, we will examine a price-maker market scenario and different market strategies for the BRPs.
\begin{acks}
This work was supported in part by the TotalFlex project funded by the ForskEL program of Energinet.dk and the GoFLEX project funded under the Horizon 2020 program.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\subsection{Heuristic Market-based Aggregation Main Algorithm}
The goal of \ma{} is to produce AFOs that respect the flexible order requirements while avoiding the high complexity of the problem and, at the same time, providing good results in terms of bid energy amount.
Thus, given a set of FOs $F$, \ma{} (\refalg{ha}) performs incremental binary aggregations so that the produced AFOs increase the captured energy in each step.
In addition, the algorithm maps the flexible order requirements to threshold parameters that must be respected during the performed aggregations.
Consequently, it introduces 3 thresholds, namely, the slice power ($\mathit{spt}$), time flexibility ($\mathit{tft}$), and power profile ($\mathit{ppt}$) thresholds that correspond to flexible order requirements.
It sets $\mathit{spt}$ to $100$ since flexible orders must have power amounts that are multiples of $100$kW.
Moreover, \ma{} assigns $1$ and $23$ to $\mathit{tft}$ and $\mathit{ppt}$, respectively, since flexible orders must have a time interval of $1$ hour and a duration of at most $23$ hours.
The permitted amount deviation is represented by $e$, which takes values from $0$kW to $5$kW.
\begin{algorithm}[tb]
\begin{algorithmic}[1]
\Require{$F$ - set of FO{}s,
$e$ - amount deviation}
\Ensure{$\mathit{AF}$ - set of AFO{}s}
\State $continue\leftarrow true$, $\mathit{AF}\leftarrow \emptyset$
\While {$continue=true$}\label{lin:habodystart}
\State $\mathit{ppt}\leftarrow23$, $\mathit{spt}\leftarrow100$
\State{$\mathit{PF,UF},f_{ini},\mathit{tft}\leftarrow$Initialize($F$)} \label{lin:haini}
\State{$\mathit{PF,AF} \leftarrow$Process($\mathit{PF},\mathit{AF}, f_{ini},\mathit{tft},\mathit{ppt},\mathit{spt},e$)} \label{lin:haprocess}
\State{$F, continue\leftarrow$Examine($\mathit{PF,UF},\mathit{AF},continue$)} \label{lin:haexamine}
\EndWhile\label{lin:habodyend}
\State\Return Top5EnergyAFOs($\mathit{AF})$\label{lin:hareturn}
\end{algorithmic}
\caption{Heuristic Market-Based Aggregation}\label{alg:ha}
\end{algorithm}
\begin{algorithm}[tb]
\begin{algorithmic}[1]
\Function{Initialize}{$F$}
\State $f_{ini}\leftarrow$SelectAmongLongestTheMostFlexibleFO($\mathit{F}$)\label{lin:lpini}
\State\Return $ \mathit{F}\setminus f_{ini},\emptyset,f_{ini},1$\label{lin:returnini}
\EndFunction
\end{algorithmic}
\caption{Largest Profile - Initialization phase}\label{alg:ln}
\end{algorithm}
The body of \ma{} consists of $3$ phases (functions), i.e.,
{\em initialization}, {\em processing}, and {\em examination} (\refalg{ha},~\reflinto{habodystart}{habodyend}).
During the initialization phase (\reflin{haini}), \ma{} identifies the FO with which to start binary aggregations ($f_{ini}$) and the subset of the FOs ($\mathit{PF}$) that participates in the aggregations.
Then, during the processing phase (\reflin{haprocess}), it produces all the potential binary aggregations between $f_{ini}$ and the FOs in $\mathit{PF}$ to produce AFOs that fulfill the flexible order requirements.
Afterwards, during the examination phase (\reflin{haexamine}), \ma{} examines whether it shall restart using the remaining FOs or terminate.
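To make the control flow concrete, the three-phase loop can be sketched in Python. This is an illustrative simplification, not the actual implementation: the processing phase is collapsed into a placeholder that promotes the seed FO directly to an AFO, and `ma`, `fos`, and `energy` are hypothetical names; only the initialize/process/examine structure and the top-$5$ energy cut-off mirror \ma{}.

```python
def ma(fos, energy, top_k=5):
    """Schematic MA loop. fos: list of FO ids; energy: id -> captured energy."""
    afos = []
    remaining = list(fos)
    while remaining:
        # Initialization (placeholder): seed with the FO capturing the most energy.
        seed = max(remaining, key=lambda f: energy[f])
        remaining.remove(seed)
        # Processing (placeholder): the seed alone becomes the next AFO.
        afos.append((seed, energy[seed]))
        # Examination: stop once the leftovers cannot beat the k-th largest AFO.
        if len(afos) >= top_k:
            kth = sorted(e for _, e in afos)[-top_k]
            if sum(energy[f] for f in remaining) < kth:
                break
    afos.sort(key=lambda t: -t[1])
    return afos[:top_k]  # the AFOs to be transformed into flexible orders
```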
\subsection{Main Algorithm variants}
The initialization phase is crucial to the outcome of the algorithm, as it largely defines the solution space that the algorithm explores.
Hence, we introduce $3$ variants of \ma{} that have different initialization phases, namely, the {\em Largest Profile} (LP), {\em Dynamic Profile} (DP), and {\em Dynamic Time Flexibility} (DTF).
LP focuses on producing AFOs with many slices because a long FO usually captures large energy amounts.
On the other hand,
given an FO with many slices, it is very difficult to fulfill the flexible order amount requirements and, especially, the required slice amount equality.
For this reason, DP excludes from aggregation the FOs with extremely large profiles (outliers).
DTF focuses on the time flexibility of the FOs, which plays a prominent role in aggregation since it is directly correlated with the alignments.
Thus, DTF takes into account the time flexibility distribution of the initial set and gradually excludes from aggregation the FOs with low time flexibility compared to the initial set.
\textbf{LP - Initialization phase.}
LP starts by selecting the most
flexible FO among the ones with the largest profile size (\refalg{ln},~\reflin{lpini}).
An FO with large profile size and high time flexibility has high probability to time-wise overlap with profiles of other FOs.
So, AFOs that fulfill the flexible order requirements through different alignments can be produced.
LP uses the initial set $F$ as the processing set $\mathit{PF}$ (\reflin{returnini})
and then executes the processing and examination phase.
\begin{algorithm}[tb]
\begin{algorithmic}[1]
\Function{Initialize}{$F$}
\State $\mathit{uf}\leftarrow$UpperFenceProfileSize($F$)\label{lin:dpuf}
\State $\mathit{PF}\leftarrow$FOsWithProfileAtMostUF($\mathit{F,uf}$)\label{lin:dpal}
\State $f_{ini}\leftarrow$SelectTheMostFlexibleFOAmongLongest($\mathit{PF}$)\label{lin:dpselectfnom}
\State\Return $\mathit{PF}\setminus f_{ini}, F\setminus\mathit{PF}, f_{ini},1$\label{lin:dpreturn}
\EndFunction
\end{algorithmic}
\caption{Initialization phase - Dynamic Profile algorithm}\label{alg:dp}
\end{algorithm}
\textbf{DP - Initialization phase.}
During the initialization phase, DP divides the initial set $F$ into $2$ subsets.
First, DP computes the upper fence ($\mathit{uf}$)~\cite{boxplot} of the power profile size of the FOs in $F$ (\refalg{dp}~\reflin{dpuf}).
Then, it stores in $\mathit{PF}$ the FOs that have profile size of at most $\mathit{uf}$ (\reflin{dpal}).
It selects as $f_{ini}$ the most flexible FO in $\mathit{PF}$ among the ones with the longest profile and removes it from $\mathit{PF}$ (\reflinto{dpselectfnom}{dpreturn}).
For instance, given the set $F$ in~\reffig{dldtfex}a
(\begin{math}\{f_1,\dots,f_6\}\end{math}), $\mathit{uf}$ is $4$, see~\reffig{dldtfex}b.
DP excludes $f_1$, which has a very long profile compared to the other FOs (red circle in~\reffig{dldtfex}b), from $F$ and selects FO $f_6$ as $f_{ini}$.
FOs with very long profiles have difficulty satisfying the slice equality, and they are likely to have small time flexibility due to their long profiles (e.g., many charging hours for the EVs).
Thus, they have less potential alignments to further satisfy the flexible order requirements.
Then, DP continues aggregation with the processing and examination phase using $\mathit{PF}$, i.e., \begin{math}F\setminus \{f_{ini}, f_1\}\end{math}.
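Assuming the upper fence of~\cite{boxplot} is the standard Tukey fence $Q_3 + 1.5\,\mathit{IQR}$, the DP split can be sketched as follows. The helper name `dp_split` and the profile sizes in the test are hypothetical; they are chosen only so that one outlier FO is excluded, as in the example above.

```python
def dp_split(profile_sizes):
    """profile_sizes: dict FO id -> profile length. Returns (uf, PF, excluded)."""
    s = sorted(profile_sizes.values())
    def q(p):
        # linear-interpolation quantile over the sorted sizes
        k = (len(s) - 1) * p
        lo = int(k)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (k - lo) * (s[hi] - s[lo])
    uf = q(0.75) + 1.5 * (q(0.75) - q(0.25))  # Tukey upper fence
    pf = {f for f, n in profile_sizes.items() if n <= uf}
    return uf, pf, set(profile_sizes) - pf
```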
\textbf{DTF - Initialization phase.}
DTF takes into account the time flexibility distribution of the initial set $F$ and excludes FOs with low time flexibility compared to the initial set.
It computes the lower fence of time flexibility distribution of $F$ and sets the time flexibility threshold ($\mathit{tft}$) equal to the lower fence~\cite{boxplot} (\refalg{dtf}~\reflin{dtflf}).
It splits $F$ based on the lower fence of the time flexibility distribution in the set.
It stores the FOs with time flexibility at least $\mathit{tft}$ in $\mathit{PF}$ (\reflin{dtfsplit1}).
DTF then selects $f_{ini}$ from $\mathit{PF}$ (\reflin{dtfini}).
As a result, the algorithm excludes the FOs that have very small time flexibility.
For instance, given the set $F$ in~\reffig{dldtfex}a, $\mathit{tft}$ equals $6$,~\reffig{dldtfex}c.
Thus, DTF excludes $f_3$, which has very low time flexibility compared to the other FO{}s in the set, from $F$, see the blue circle in~\reffig{dldtfex}c.
DTF then sets $\mathit{tft}$ to $6$, selects FO $f_1$ as $f_{ini}$, and continues aggregation with $\mathit{PF}$, i.e., \begin{math}F\setminus \{f_{ini}, f_3\}\end{math}.
FOs with small time flexibility have a lower probability of contributing to aggregation due to the low number of alignments that they allow.
Moreover, by setting $\mathit{tft}$ equal to the lower fence, DTF reduces the number of examined alignments and consequently the complexity of the algorithm.
Thus, AFOs with greater time flexibility are more likely to be produced.
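The DTF rule is the mirror image of the DP one: the lower fence $Q_1 - 1.5\,\mathit{IQR}$ of the time-flexibility distribution (again assuming Tukey fences) both filters the FOs and becomes the new $\mathit{tft}$. The helper name and the flexibility values in the test are hypothetical.

```python
def dtf_threshold(flex):
    """flex: dict FO id -> time flexibility. Returns (tft, PF)."""
    s = sorted(flex.values())
    def q(p):
        # linear-interpolation quantile over the sorted flexibilities
        k = (len(s) - 1) * p
        lo = int(k)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (k - lo) * (s[hi] - s[lo])
    tft = q(0.25) - 1.5 * (q(0.75) - q(0.25))  # Tukey lower fence
    pf = {f for f, t in flex.items() if t >= tft}
    return tft, pf
```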
\begin{figure}[tb]
\begin{tabular}{ c c }
\includegraphics[width=0.145\textwidth]{Alg_dp_df}&
\hspace{-1.5em}
\includegraphics[width=0.35\textwidth]{boxplots}\\
(a) & (b) \hspace{4.8em} (c) \\
\end{tabular}
\caption{DP and DTF example, profile size and time flexibility box plots \label{fig:dldtfex}}
\end{figure}
\begin{algorithm}[tb]
\begin{algorithmic}[1]
\Function{Initialize}{$F$}
\State $\mathit{tft}\leftarrow$LowerFenceTimeFlexibility($F$)\label{lin:dtflf}
\State $\mathit{PF}\leftarrow$FOsWithTimeFlexibilityAtLeast$\mathit{tft}$($F,\mathit{tft}$) \label{lin:dtfsplit1}
\State $f_{ini}\leftarrow$SelectTheMostFlexibleFOAmongLongest($\mathit{PF}$)\label{lin:dtfini}
\State\Return $ \mathit{PF}\setminus f_{ini}, F\setminus\mathit{PF},f_{ini},\mathit{tft}$
\EndFunction
\end{algorithmic}
\caption{Initialization phase - Dynamic Time Flexibility}\label{alg:dtf}
\end{algorithm}
\textbf{Processing phase.}
In the processing phase, \ma{}
examines all the potential binary aggregations between $f_{ini}$ and the FOs in $\mathit{PF}$ defined in the initialization phase.
The FOs are examined in descending order according to their time flexibility.
FOs with high time flexibility have more potential to participate in an aggregation that fulfills the flexible order requirements because of their high number of alignments.
\begin{algorithm}[tb]
\begin{algorithmic}[1]
\Function{Process}{$\mathit{PF},\mathit{AF}, f_{ini},\mathit{tft},\mathit{ppt},\mathit{spt},e$}
\State $\mathit{PF}_{tmp} \leftarrow \emptyset$, $\mathit{f}_{a} \leftarrow$ null
\ForAll { $f\in \mathit{PF}$ }\label{lin:ppfirstfor}
\State$f_{cand}\leftarrow$ null, $bestCV\leftarrow \infty$
\ForAll {alignment $al$ of $\{f_{ini}, f\}$ }\label{lin:ppsecondfor}
\State $f_x\leftarrow$BinaryAggregation($f_{ini},f,al,\mathit{tft},\mathit{ppt}$)
\If{RMSE($f_x, \mathit{spt}$)$<$RMSE($f_{ini}, \mathit{spt}$)}\label{lin:pprmse}
\If{CV($f_x$)$<bestCV$}
\State $bestCV\leftarrow$CV($f_x$)\label{lin:ppcv}, $f_{cand}\leftarrow f_{x}$
\EndIf
\EndIf\label{lin:ppconend}
\EndFor
\If{ $f_{cand} \neq$ null}
\State $\mathit{PF}_{tmp} \leftarrow \mathit{PF}_{tmp} \cup f$\label{lin:pptmp}, $f_{ini} \leftarrow f_{cand} $
\EndIf
\If{$\forall s \in P(f_{ini}),\ \mathit{spt}-e < s.p < \mathit{spt}+e$}\label{lin:tlcond}
\State $f_a \leftarrow f_{ini}$\label{lin:florder}
\State $\mathit{PF} \leftarrow \mathit{PF} \setminus \mathit{PF}_{tmp}$ \label{lin:ppdel}
\State $\mathit{PF}_{tmp} \leftarrow \emptyset$, $\mathit{spt} \leftarrow \mathit{spt} $$+$$100$ \label{lin:tlincrease}
\EndIf
\EndFor
\State \Return $ \mathit{PF}$, $\mathit{AF} \cup f_a$\label{lin:ppreturn}
\EndFunction
\end{algorithmic}
\caption{Processing phase}\label{alg:pp}
\end{algorithm}
\begin{algorithm}[tb]
\begin{algorithmic}[1]
\Function{Examine}{$\mathit{PF,UF, AF}, continue$}
\If{$\mathit{PF\cup UF}$$=$$\emptyset$ \textbf{or} ($\mathit{|AF|}$$\geq$$5$ \textbf{and} \par\hskip\algorithmicindent
totalEnergy($\mathit{PF}$$\cup$$\mathit{UF}$)$<$Energy5\textsuperscript{th}AFO($\mathit{AF}$))} \label{lin:exfirstif}
\State $continue \leftarrow false$\label{lin:setcontinuefalse}
\EndIf
\State\Return $\mathit{PF\cup UF}$, $ continue$\label{lin:exreturn}
\EndFunction
\end{algorithmic}
\caption{Examination phase}\label{alg:ex}
\end{algorithm}
\ma{} examines, through the potential alignments, all the binary aggregations that fulfill the time flexibility $\mathit{tft}$ and the power profile thresholds $\mathit{ppt}$ (\refalg{pp},~\reflinto{ppfirstfor}{ppsecondfor}).
Among the AFOs that reduce the root mean square error (RMSE) between $f_{ini}$ and the slice power threshold $\mathit{spt}$, it chooses the one with the minimum coefficient of variation (CV) (\reflinto{pprmse}{ppconend}).
By promoting the reduction of RMSE, the produced AFO $f_{cand}$ has a power profile closer to $\mathit{spt}$.
In particular, the use of RMSE during aggregation prevents the increase of profile length of the potential AFO and contributes to the production of slices with values closer to $\mathit{spt}$.
Consequently, alignments that lead to power profiles that time-wise overlap each other are preferred for aggregation.
Moreover, because the slices of an AFO might have power deviations, the second condition of CV (\reflin{ppcv}) is used.
A low CV of $f_{cand}$ contributes to the elimination of power profile deviations and to the production of AFOs with slice power amounts closer to each other.
For instance, given the FOs in~\reffig{alignments} and $\mathit{spt}$ equal to $3$, the RMSE between the slices of AFO $f_{12}$ and $\mathit{spt}$ is equal to $1$ and lower than the RMSE between the longest FO $f_{123}$ and $\mathit{spt}$, which is $1.8$.
Similarly, $f_{12}$ and $f_{123}$ have CV equal to $0$ and $0.4$, respectively, with $f_{12}$ having no power fluctuations.
Thus, the reduction of RMSE and CV leads to AFOs that fulfill the flexible order energy requirements.
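The two criteria can be written out directly. The sketch below uses the population standard deviation for CV, which is our assumption; the slice values in the test are hypothetical and are not claimed to reproduce the profiles of $f_{12}$ and $f_{123}$ in the figure.

```python
def rmse(slices, spt):
    """Root mean square error of the slice powers against the threshold spt."""
    return (sum((s - spt) ** 2 for s in slices) / len(slices)) ** 0.5

def cv(slices):
    """Coefficient of variation (population std / mean) of the slice powers."""
    mean = sum(slices) / len(slices)
    var = sum((s - mean) ** 2 for s in slices) / len(slices)
    return var ** 0.5 / mean
```

For instance, a flat profile such as `[2, 2, 2]` with `spt = 3` gives an RMSE of $1.0$ and a CV of $0$, so it would be preferred over any candidate with larger slice deviations.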
\begin{figure*}[tb]
\begin{center}
\begin{tabular}{cccc}
\hspace{-1em}
\includegraphics[width=0.24\textwidth]{AvgTimeFlexBoxPlot}& \includegraphics[width=0.24\textwidth]{AvgLengthAFOBoxPlot}&
\includegraphics[width=0.24\textwidth]{PercentageParticipation} &
\includegraphics[width=0.24\textwidth]{PercentageBiddenEnergyTop5} \\
(a) Average time flexibility &
(b) Average profile length&
(c) Participation in aggregation &
(d) Traded energy\\
\end{tabular}
\caption{Average time flexibility, average profile length, participation of FOs, and traded energy}
\label{fig:expFig1}
\end{center}
\end{figure*}
When an AFO with power amounts around $\mathit{spt}$ is produced,
an $e$ kW deviation per slice is permitted (\refalg{pp}~\reflin{tlcond}).
At that point, an AFO $f_a$ that fulfills the flexible order criteria is produced (\reflin{florder}).
The FOs that participate in aggregation are temporarily stored (\reflin{pptmp}) and, when an AFO $f_a$ is produced, they are removed from $\mathit{PF}$ (\reflin{ppdel}).
Then, $\mathit{spt}$ is increased by 100 (\reflin{tlincrease}) so that AFOs with larger energy are produced during the following aggregation.
As a result, the processing phase produces an AFO that captures large amounts of energy and fulfills the time flexibility and power amount requirements of a flexible order.
When all the FOs in $\mathit{PF}$ are processed, \ma{}
returns both $\mathit{PF}$ and
the output set $\mathit{AF}$ with the aggregated FO $f_a$ (\reflin{ppreturn}).
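The acceptance test and the threshold update can be sketched as follows (the helper name is hypothetical). Every slice power must lie strictly within $e$ kW of the current $\mathit{spt}$; on success, $\mathit{spt}$ advances by $100$ so that the next aggregation targets a larger order.

```python
def accept_and_advance(slices, spt, e):
    """Return (accepted, new_spt) for a candidate AFO's slice powers."""
    ok = all(spt - e < s < spt + e for s in slices)  # strict e-kW tolerance band
    return ok, (spt + 100 if ok else spt)            # advance spt only on success
```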
\textbf{Examination phase.}
During the examination phase, \ma{} first examines if there are any FOs in either $\mathit{PF}$ or $\mathit{UF}$ to further continue aggregation (\refalg{ex}~\reflin{exfirstif}).
If the total energy of the remaining FOs is larger than the energy of the $5$\textsuperscript{th} largest AFO,
\ma{} continues using the remaining FOs (\reflin{exreturn}).
Otherwise,~\ma{} does not continue the execution (\reflin{setcontinuefalse}).
As a result, the algorithm ensures that the remaining FOs cannot produce an AFO with energy greater than one of the $5$ produced AFOs.
Since the $5$ AFOs with the most energy will be transformed into flexible orders, the algorithm terminates (\refalg{ha}~\reflin{hareturn}).
\subsection{Experimental setup}
We consider a BRP managing a portfolio of EVs represented by FO{}s.
The BRP utilizes our proposed aggregation algorithms to produce AFOs that respect the flexible order requirements.
The BRP transforms the $5$ AFOs
which capture the highest amount of energy into flexible orders and trades them in Elspot.
In order to examine the scalability of our proposed algorithms,
we create $8$ differently-sized FO{} datasets, from $5$K to $40$K FO{}s (multiples of $5$K),
with characteristics based on the probability distributions suggested in~\cite{7098444}.
Moreover, we consider that all EVs use the charging option described in~\refsec{EVmodel} and need to be fully charged.
Thus, the initial SOC of all EVs is within [$20$\%, $85$\%], while they must be charged up to $90$\%.
Details about the characteristics of the datasets are in~\reftab{characteristics}.
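Under these settings, the energy each EV must draw is determined by its capacity and initial SOC. A minimal sketch (hypothetical helper; lossless charging to the fixed $90$\% target is assumed):

```python
def charge_energy_kwh(capacity_kwh, initial_soc, target_soc=0.90):
    """Energy (kWh) to charge one EV from initial_soc to target_soc."""
    assert 0.20 <= initial_soc <= 0.85  # SOC range used in the datasets
    return capacity_kwh * (target_soc - initial_soc)
```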
We compare our techniques with two baseline aggregation techniques~\cite{TKDEEManolis}.
We use Start-Alignment (SA) aggregation, see~\refsec{SA} and Start-Alignment Grouping (SAG) aggregation.
SAG groups together FOs that have both the same earliest start charging time and the same time flexibility and then applies SA on each group.
As a result, it produces one AFO per group.
We evaluate our techniques in terms of output size (\#AFOs), participation of FOs in aggregation, percentage of energy traded in the market, running time, and both time flexibility and profile length of AFOs.
\subsection{Market-based aggregation results}
\textbf{Output size.}
SA always produces one AFO whereas SAG produces more than $100$ AFOs in all cases.
Both LP and DP produce less than or equal to $5$ AFOs in all cases.
DTF produces more than $5$ AFOs in $75$\% of the cases as
the energy threshold is activated in a later step compared to the other techniques due to the division of the processed set.
\begin{figure*}[tb]
\begin{center}
\begin{tabular}{cccc}
\hspace{-1em}\includegraphics[width=0.24\textwidth]{ProcTime}&
\includegraphics[width=0.24\textwidth]{OverallInitializedAgg} & \includegraphics[width=0.24\textwidth]{ScheduleAndPricingAvg}&
\includegraphics[width=0.23\textwidth]{CostReductionOverallEnergy}\\
(a) Processing time&
(b) \# of initialization phases&
(c) Charging time vs pricing &
(d) Cost reduction\\
\end{tabular}
\caption{Processing time, number of initialization phases, charging times and pricing for 40K
FOs dataset, and cost reduction for all datasets}
\label{fig:expFig2}
\end{center}
\end{figure*}
\textbf{Time flexibility and profile length.}
Regarding the baseline techniques,
SA produces long AFOs with very low time flexibility as it aggregates all FOs into one.
On the contrary, SAG produces short and time flexible AFOs due to the grouping phase it applies, see~\reffig{expFig1}a, b.
LP uses as initial FO ($f_{\mathit{ini}}$) the longest FO of the dataset.
Usually, such an FO has low time flexibility and so do the produced AFOs.
Due to the long profile of $f_{\mathit{ini}}$, LP might utilize all the time flexibility of the remaining FOs to produce an AFO that reduces the distance to the power profile threshold ($\mathit{ppt}$).
Consequently, LP produces long AFOs with very low time flexibility, see~\reffig{expFig1}a, b.
The AFOs produced by DP are more flexible than the ones from LP since DP applies a dynamic profile size approach and excludes from aggregation very long FOs.
As a result, FOs with similar profiles are aggregated together and less time flexibility is required to find a proper alignment that minimizes the distance to $\mathit{ppt}$.
Consequently, AFOs with fewer slices compared to LP are produced, see~\reffig{expFig1}b.
Finally, DTF produces the most flexible AFOs among our proposed techniques.
We see in~\reffig{expFig1}a that the average time flexibility of the produced AFOs is greater than $4$ in all datasets.
DTF achieves it by utilizing the time flexibility threshold.
However, DTF produces long AFOs, similar to LP, because it also selects as $f_{\mathit{ini}}$ the longest FO of the processed set, see~\reffig{expFig1}b.
\begin{table}[tb]
\centering
\begin{tabular}{ |c ||c |c |c |c| c|}
\hline
& Distr.& Mean &St. dev & Min & Max \\ \hline \hline
Battery capacity (kWh)& UD$*$ & $23$& 4 & 16& 30 \\ \hline
Arrival time& TGD$*$ & $19$$:$$00$& 2h & $16$$:$$00$ & $1$$:$$00$ \\ \hline
Departure time& TGD$*$ & $7$$:$$00$& 2h & $5$$:$$00$ & $12$$:$$00$ \\ \hline
Initial Battery SOC (\%)& TGD$*$ & $75$&$25$& $20$& $85$ \\ \hline
\end{tabular}
$*$ UD: uniform distribution, TGD: truncated Gaussian distribution
\caption{EV data probability distribution\label{tab:characteristics}}
\end{table}
\textbf{Participation and traded energy.}
In order to quantify the participation of FOs in aggregation,
we take into account only the FOs that participate in the aggregation of the $5$ (or fewer) AFOs that are largest in energy,
i.e., the AFOs that are transformed into flexible orders.
Similarly, we compute the traded energy by taking into account only the energy captured by the AFOs that are transformed into flexible orders.
SA aggregates all FOs into one AFO and thus participation in aggregation is $100$\%, see~\reffig{expFig1}c.
The slices of the AFO have very high power differences and since a flexible order requires a flat power profile,
the power of the highest slice is considered for the whole profile of the AFO.
As a result, on average, $2.5$ times the energy captured by that AFO is traded, see~\reffig{expFig1}d where $100$\% is the energy needed for all the FOs.
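The over-purchase follows directly from the flat-profile requirement: the traded energy is the peak slice power times the duration, while the AFO only captures the sum of its slices. A sketch with illustrative slice values (the surplus factor of $2.5$ reported above depends on the actual datasets and is not reproduced here):

```python
def surplus_factor(slices):
    """Ratio of flat-order energy to the energy the AFO actually captures."""
    traded = max(slices) * len(slices)  # flat profile priced at the peak power
    needed = sum(slices)                # energy the AFO really captures
    return traded / needed
```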
On the contrary, SAG produces too many AFOs and since only the $5$ largest are traded,
we see a very low participation percentage and
the lowest percentage of traded energy among the techniques ($69.7$\% on average).
In general, the longest AFOs capture more energy as they have more slices and more FOs participate in their aggregation.
Thus, LP, which produces the longest AFOs, obtains both the highest participation percentage ($98.6$\%) and traded energy percentage ($97.5$\%) in all the cases among our proposed techniques, see in~\reffig{expFig1}c, d.
DTF follows with an average participation value of $94.4\%$ and $91.7\%$ percentage of traded energy.
DP has the lowest percentage in both participation and traded energy,
$94.2$\% and $88.8$\% on average respectively.
The reason is that DP excludes very long FOs, which usually capture large energy, from aggregation.
\textbf{Processing time.}
Both SA and SAG are fast techniques with processing times below one second as they examine a very small solution space and do not consider the market requirements.
LP is the fastest among all our proposed techniques since it efficiently activates the energy threshold, see~\reffig{expFig2}a.
The processing time of DP follows a close to linear growth rate.
DTF has an increasing trend for processing time, but it shows similar processing times for datasets with different sizes, e.g., for datasets with $30$K and $35$K FOs.
The reason is that the processing time is highly driven by the number of initialization phases.
The size of the dataset might increase, but the new added FOs might lead to less initialization phases and therefore to less aggregation comparisons.
That is why we also notice that both processing time and number of initialization phases follow similar patterns.
Whenever the number of initialization phases is increased compared to the previous dataset, processing time also increases.
For instance, we see in~\reffig{expFig2}b that when the size of the dataset is increased from $15$K to $20$K for both LP and DTF, the number of initialization phases is reduced.
As a result, the processing time is similar for both the datasets and slightly increases for the $25$K.
Eventually, when the size of the dataset is further increased, it becomes more difficult for DTF to fulfill the market requirements and thus both the initialization phases and the processing time are highly increased.
\subsection{Financial evaluation}
Since the overall goal of a BRP is to trade the AFOs in the market using flexible orders, we financially evaluate our aggregation techniques.
We compare the cost of buying the energy needed to charge the EVs based on plug-in time (traditional approach)
with the cost of charging the EVs by utilizing flexible orders.
Moreover, in order to compare our techniques with the optimal solution,
we consider a {\em non-realizable in practice} scenario where each FO directly participates in the market without aggregation
and each EV is charged when the charging cost is minimized.
Due to the fact that flexibility appears during the night~\cite{6461500},
we consider a $48$ hours trading period with a repetition of the $24$h Elspot average prices of $2017$~\cite{Nordpool}, see price curve in~\reffig{expFig2}c.
In the same figure, we illustrate the time and the energy amount
used to charge the $40$K dataset based on our techniques, the two baseline techniques, the plug-in times of the EVs, and the optimal charging.
We see that the charging of the EVs based on the plug-in time occurs when the prices are still high and it does not take advantage of the price drop that occurs in the night of the first $24$ hours.
SA and SAG produce AFOs that do not fulfill the market requirements.
As a result, more energy than needed has to be traded in the market.
In particular, SA trades $1.52$ times more energy than needed to charge the EVs.
Thus, the surplus energy is traded in the regulation market and it results in losses for the BRP, see negative cost reduction in~\reffig{expFig2}d.
Regarding SAG, the produced AFOs capture a low percentage of the energy needed and they also require extra energy to be traded in order to fulfill the market requirements.
Consequently, the cost reduction due to the flexible orders trading is eliminated by the losses from the surplus energy trading.
As a result, we see only $1.1$\% cost reduction on average when SAG is applied.
On the contrary, the optimal charging option charges all the EVs when the price has the lowest value.
That is why we see a spike in the graph reaching $180$MW after the $24$\textsuperscript{th} hour.
Our proposed aggregation techniques also take advantage of the lowest prices.
LP produces long AFOs which expand over many hours and have low time flexibility.
That is why we see in~\reffig{expFig2}c that part of the charging occurs when the prices are high.
DTF produces AFOs that are also long, but they are more flexible than the AFOs produced by LP.
Therefore, EVs are charged when prices are a bit lower and DTF achieves a higher cost reduction,~\reffig{expFig2}d.
Finally, DP produces short and flexible AFOs.
As a result, it takes advantage of the lowest prices occurring only for a few hours, see~\reffig{expFig2}c.
When the energy for the $40$K FOs dataset is purchased based on the plug-in times of the EVs, it costs 8,612 euros.
In contrast, when LP, with the highest participation, is applied to the $40$K dataset, 39,584 FO{}s participate in aggregation, see first bar ($98.96\%$) in~\reffig{expFig1}c.
The 39,584 FO{}s produce $5$ AFO{}s
which are further transformed into flexible orders.
The cost of purchasing the energy needed for the $5$ AFO{}s is computed based on the flexible orders trading and
it is 6,670 euros, see~\reffig{expFig2}c.
The price also includes the cost ($0.46$ euro) of the imbalances ($62$kW) of the flexible orders, see~\refsec{marketFramework}.
The energy needed for the remaining $416$ (40,000$-$39,584) FO{}s is bought based on their plug-in time and costs $126$ euros.
Thus, the overall energy bought to charge $40$K EVs, when LP is used, costs
6,670$+$126$=$6,796 euros.
Therefore, LP achieves a $21$\% cost reduction in energy purchase, see LP bar for $40$K dataset in~\reffig{expFig2}d.
We see in~\reffig{expFig2}d that DP achieves on average a 24.4\% cost reduction.
DTF follows with 20.2\% and LP with 19.1\% average cost reduction.
The cost reduction based on the optimal solution is 27.4\% on average.
Thus, LP, DTF, and DP achieve $69.8$\%, $73.7$\%, and $88.9$\%
of the optimal cost reduction, respectively.
Notably, the cost reduction that DP achieves {\em only} for the FO{}s that participate in aggregation
is on average $98.3$\% of the optimal one.
In~\reffig{expFig4}, we illustrate the cost reduction that DP achieves during $2017$.
We consider $364$ trading periods of $48$ hours.
The first trading period includes both the $1$\textsuperscript{st} and the $2$\textsuperscript{nd} day of $2017$.
The second trading period includes the $2$\textsuperscript{nd} and the $3$\textsuperscript{rd} day of $2017$ and so on.
The average cost reduction is $28$\% and, interestingly, we notice at the end of the year a cost reduction of more than $800$\%.
The reason is that for several consecutive days, Elspot prices were negative early in the morning and even reached $-50$ euros/MWh on the $24$\textsuperscript{th} of December at $2$:$00$.
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\textwidth]{CostReductionOverallEnergyDP}
\caption{Yearly cost reduction based on DP\label{fig:expFig4} }
\end{figure}
\textbf{Uncertainty on FOs forecast.}
The driving behavior can usually be forecasted with very high precision and accuracy.
However, there might be cases where the actual flexible load is smaller than anticipated because
some EVs are not plugged in as expected.
Thus, we consider an uncertainty scenario for the $40$K FOs set in which
a percentage of the FOs that participated in aggregation no longer need the purchased energy
after the flexible orders derived from DP are placed.
As a result, the BRP has to sell back the surplus purchased energy at a lower (regulating) price.
The difference between the initial cost of purchasing this energy via flexible orders and the revenue from reselling the surplus is a loss for the BRP.
Moreover, the BRP has to distribute (assuming no profit) the purchased energy, and thus the cost, among fewer customers (EV owners) than initially estimated.
We see in~\reffig{unc} that the price paid by the EV owners based on plug-in time is fixed.
On the other hand, the price for the energy purchased via flexible orders increases as the percentage of the EVs that do not participate in aggregation increases.
The reason is that the cost is higher due to the imbalances and, at the same time, fewer consumers use the purchased energy.
We see in~\reffig{unc} that the cost for the consumers via flexible orders is greater than the plug-in charging cost when more than $23$\% of the EVs were imprecisely forecast.
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\textwidth]{uncertainty}
\caption{Energy price paid by consumers\label{fig:unc} }
\end{figure}
\textbf{Summary:} By applying our proposed techniques on the aforementioned $364$ trading periods,
DP achieves the highest cost reduction in $66$\% of the periods and DTF achieves
the highest cost reduction in the remaining $34$\% of the periods.
The reason is that the financial impact of the techniques is highly correlated with the pricing curve of the trading period.
Thus, in cases where the price drops for only a few hours close to the plug-in charging time, DP is the most suitable technique.
On the other hand, when the price drops for longer periods but much later than the plug-in charging time, DTF achieves a higher reduction than DP.
\begin{comment}
We have three different techniques, i.e., Longest Profile (LP), Dynamic Length (DL), and Dynamic Time flexibility (DT).
We see in~\reffig{numAFOs} that the techniques are producing more than 5 aggregated flex-offers (AFOs), but only the five with the maximum total energy are bidden in the market.
During each step, the techniques produce an AFO.
When they produce 5 AFOs they apply a pruning step.
According to the pruning step, they compute the minimum energy captured by the individual AFOs.
Aggregation stops if the remaining (to be) FOs cannot produce an AFO with energy greater than the minimum among the 5 already AFOs.
That is the reason why LP has very large processing times in dataset 15K and 40K.
The 5 AFOs do not let the pruning step to be applied, see~\reffig{ProcTime}.
LP produces long AFOs with time flexibility almost 1.
DL produces the ``shortest'' AFOs with more time flexibility than LP, but less than DT.
DT produces long AFOs and very flexible, see~\reffig{avgLT}.
We also see the amount profiles of the top 5 AFOs in~\reffig{AFOtop5profiles40K}.
3 different market scenarios regarding pricing.
\begin{itemize}
\item
Two continuous days with prices equal to the average prices of 2015 of elspot market.
\item
Two continuous days with the second one having the highest standard deviation (SD) of prices.
The first one is the regular day of the calendar before the inspected one (with the highest (SD).
High SD means extreme values within the day.
\item
Two continuous days with the second one having the highest coefficient of variances (CV) of prices.
The first one is the regular day of the calendar before the inspected one (with the highest (CV).
High CV means a lot of price fluctuations within the day.
\end{itemize}
\textbf{Participation.}
We see in~\reffig{PercentageBiddedEnergy}a that LP bids the largest energy volumes followed by DT and DL.
Longest profiles of AFOs lead to largest energy volumes.
LP bids on average 81.9\% of the total energy, DL 61.5\% and DT 71.2\%, see~\reffig{PercentageBiddedEnergy}a.
Although DL bids the smallest amount of energy in all the cases, it has high participation of FOs into aggregation, see~\reffig{PercentageBiddedEnergy}b.
Average participation of LP is 89.9\%, of DL 79.6\%, and 81.7\% for DT.
Participation is higher than percentage of bidden energy because FOs with shorter profiles are used during aggregation.
\textbf{Cost reduction.}
All the techniques succeed similar cost reduction when average pricing is taken into account,
see~\reffig{scheduleAndPricingAvg}.
Cost reduction is larger when large standard deviation is considered in pricing, see~\reffig{scheduleAndPricingSD}.
DT succeeds the greatest reduction in all cases since the lowest prices occur for longer periods.
However, when there are pricing fluctuations and the lowest prices occur for short periods, DL succeeds better reduction or at least similar to DT, see~\reffig{scheduleAndPricingCV}b.
When the prices are negative, there is gain from charging the EVs, see~\reffig{scheduleAndPricingCV}a.
We can also see in~\reffig{CostReductionBiddedEnergyAvg} the cost reduction for the FOs which participated in aggregation where DL is a clear winner in all the scenarios and all the pricing cases.
The reason is that DL produces short and big AFOs and even their small flexibility is sufficient enough to reduce the cost of buying the energy.
\reffig{costBuyingEnergy} shows the cost for buying energy.
\end{comment}
\subsection{Electric vehicle model}
\label{sec:EVmodel}
We consider the energy used to charge EVs to be appropriate for flexible energy trading.
The reason is that the lithium-ion batteries of EVs are devices with a substantial power demand, and their charging can be time-shifted when the EVs are plugged in for more hours than needed for charging.
We consider EVs that can be continuously charged with a constant-power, constant-voltage (CP-CV) option~\cite{7098444} and whose charging takes place in the range of 20\% to 90\%
state of charge (SOC) so that the battery life is preserved~\cite{6345063}.
As a result, when an EV is plugged in for charging, its SOC is at least 20\% and the user would like to fully charge it (to 90\%) for his/her next trip.
The SOC is computed according to the following formula based on~\cite{7098444}:
\begin{math}
\mathit{SOC_{final}} = \mathit{SOC_{ini}} + \frac{\eta_c \cdot \eta_b\cdot P \cdot time_{cha}}{C}
\end{math} (1)
where \begin{math}\mathit{SOC_{final}}\end{math} is the final state of charge equal to 90\% of the total battery capacity ($C$).
Parameters \begin{math} \eta_c\end{math} and \begin{math}\eta_b\end{math} represent the efficiency of the charger
and the internal resistance of the battery, respectively.
We represent with $P$ the power used to charge an EV
that is constant over the [20\%, 90\%] interval of SOC and $ time_{cha}$ is the time needed to charge it up to $\mathit{SOC_{final}}$.
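For reference, solving the SOC formula above for the charging time (a direct rearrangement, using only the symbols already defined) gives
\begin{math}
time_{cha} = \frac{(\mathit{SOC_{final}} - \mathit{SOC_{ini}})\cdot C}{\eta_c \cdot \eta_b\cdot P}
\end{math},
i.e., the charging duration grows linearly with the battery capacity and the SOC gap, and shrinks with the charging power and the efficiencies.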
In our work, because we take time-shifted loads into account, we use the flex-offer (FO) concept~\cite{TKDEEManolis}, introduced in the MIRABEL project~\cite{mirabelonline,pub:mirabel_endm2012}
to represent the charging of a flexible EV.
An FO captures flexibility from different dimensions (e.g., time, energy, and/or combined)~\cite{Valsomatzis15},
from different devices~\cite{Siksnys:2016:DFS:2934328.2934339}, and
can be used for different purposes, e.g., tackle electrical grid bottlenecks~\cite{7778808}.
Thus, we define an {\em FO{} $f$ }to be a tuple \begin{math}f=(T(f), P(f))\end{math} where $T(f)$ is the start charging flexibility interval and
$P(f)$ is the power profile.
\begin{math}T(f)=[t_{es}, t_{ls}]\end{math}
where $t_{es}$ and $t_{ls}$ are the {\em earliest start charging time} and {\em latest start charging time}, respectively.
We define time flexibility ($\mathit{tf}$) to be the difference between $t_{ls}$ and $t_{es}$.
The {\em power profile} is a sequence of (\begin{math}m\in\mathbb{N}_{>0}\end{math}) consecutive slices,
\begin{math}P(f)=\langle s^{(1)},\dots, s^{(m)}\rangle\end{math} where a {\em slice} $s^{(i)}$ has a power value $p$ measured in kW.
The duration of slices is 1 hour.
For instance,
an EV is plugged in at a house between $1$ and $8$ a.m.
The EV continuously utilizes $3.7$kW for $3.3$ hours to be charged.
However, energy trading is performed per hour and we also use hourly resolution to model the EVs charging.
To respect the hourly granularity, we split the total energy needed during the first and the (partial) last charging hours equally between those two hours, which also reduces power fluctuations in the model.
Therefore, we assume that the
EV consumes $2.4$kWh during both the first and the last charging hours and $3.7$kWh during the hours in-between.
The EV can be modeled by an FO
$f$$=$$([1,4],\langle2.4, 3.7, 3.7, 2.4\rangle)$, see~\reffig{FOandFO}a.
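The hourly redistribution in the example above can be sketched in a few lines of Python (our own illustration, not code from the paper; the function name and tuple representation are assumptions):

```python
# Sketch: model the example EV as a flex-offer f = ([1, 4], <2.4, 3.7, 3.7, 2.4>).

def hourly_profile(power_kw, duration_h):
    """Distribute a constant-power charge of non-integer duration onto an
    hourly grid: middle hours keep power_kw; the combined energy of the first
    full hour and the partial last hour is split equally between the two edge
    hours, as described in the text."""
    full_hours = int(duration_h)          # e.g. 3 full hours for 3.3 h
    partial = duration_h - full_hours     # e.g. 0.3 h remaining
    edge = (power_kw * 1.0 + power_kw * partial) / 2
    return [round(edge, 1)] + [power_kw] * (full_hours - 1) + [round(edge, 1)]

profile = hourly_profile(3.7, 3.3)        # -> [2.4, 3.7, 3.7, 2.4]
fo = ([1, 4], profile)                    # ([t_es, t_ls], power profile)
```

The total energy is preserved up to rounding: the profile sums to $12.2$kWh versus $3.7 \cdot 3.3 = 12.21$kWh.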
Next, we describe the market framework where such FOs shall be traded.
\subsection{Market framework}
\label{sec:marketFramework}
The Nordic/Baltic market for electrical energy named Nord Pool is considered in our work.
Nord Pool is one of the most mature energy markets~\cite{Modeling}
and Europe's leading power market~\cite{Nordpool}.
It consists of the day-ahead (Elspot) and intra-day markets.
We focus on Elspot because it has one of
the largest turnovers in the Nordic system and it also supports flexible energy trading~\cite{Biegel2014354}.
Trading in Elspot occurs daily through orders (bids).
Each day before 12 p.m., the balance responsible parties (BRPs) place their purchase and/or selling orders (bids) in Elspot for the following day.
The orders specify the energy amount a BRP desires to buy/sell and the price the BRP is willing to pay/be paid for the corresponding energy.
Since 2016,
Elspot supports 4 different order types:
single hourly orders (price dependent or independent),
block orders,
exclusive groups, and
{\em flexible orders}~\cite{Nordpool}.
We focus on flexible orders that support flexibility trading.
When a BRP places a flexible order in Elspot, it states the {\em name}, the {\em time interval}, the {\em price limit}, the {\em volume}, and the {\em duration} of the order.
The time unit is one hour and volume is expressed in MW.
The duration expresses the number of hours during which the order can be activated over the interval \begin{math}[1, 23]\end{math}.
The time interval must exceed the duration by at least one hour and expresses the potential activation times of the order.
The volume is positive if the order is a purchase order and negative if it is a sell order.
A BRP can place $5$ flexible orders during a trading day.
{\em Hypothetically}, a BRP could purchase the energy needed to charge the above mentioned flexible EV, represented by FO $f$
through a flexible order.
The duration of a flexible order is mapped to the number of slices of $f$, the volume to the power of the slices, and the time interval to the time flexibility of $f$.
For instance, a BRP could place a flexible order named ``F1'', with duration $4$ hours and time interval from $1$ to $8$.
The volume of F1 is $0.0037$MW (in order to satisfy all the slices) and its price limit is 35 euros/MWh.
However, the energy needed to charge a single EV is (much) too small to be traded in Elspot.
In particular, the minimum contract size and the volume trade lot for a flexible order are both $100$kW, while the power used by an EV is a few kW.
Moreover, when the duration of a flexible order is more than one hour, the volume needed for these hours shall be constant.
As a result, it is necessary to {\em aggregate} FOs to trade the flexible loads of the EVs through flexible orders in Elspot market.
\begin{figure}[tb]
\begin{tabular}{cc}
\hspace{-0.15in}
\includegraphics[width=0.24\textwidth]{FO} &
\includegraphics[width=0.24\textwidth]{flexibleorder}\\
(a) & (b) \\
\end{tabular}
\caption{An example of an FO and a flexible purchase order\label{fig:FOandFO}}
\end{figure}
The flexible
order is activated in the time
interval that optimizes social welfare provided that the price is respected~\cite{Nordpool}.
Given F1 in a liquid market, the order is activated when the cost of buying the required energy is minimized.
For instance, we see in~\reffig{FOandFO}b that F1 is activated in time slots $3$, $4$, $5$, and $6$ where the price is $25$ euros/MWh.
Thus, the energy needed to charge the EV costs \begin{math}25\cdot0.0037\cdot4=0.37\end{math} euros.
In contrast,
if the time flexibility of the EV is disregarded, its charging occurs based on a price-independent order and its plug-in time (time slots $1$--$4$ in~\reffig{FOandFO}b).
As a result, the energy needed to charge the EV is purchased based on a price independent order and the price is set by Elspot.
In that case and according to~\reffig{FOandFO}b, the cost is \begin{math}33\cdot0.0037\cdot2 + 25\cdot0.0037\cdot2=0.4292\end{math} euros, $16$\% more than the cost achieved by flexible order F1.
Therefore, a flexible order has a higher probability to achieve a better price than a price independent order because it takes into account the time flexibility of the flexible loads and thus can be favored by price reductions.
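The cost arithmetic of this example can be reproduced directly (a sketch using only the prices and volume given in the text; not a market simulation):

```python
# Prices and volume from the example: EV charging power is 3.7 kW.
volume_mw = 0.0037

# Flexible order F1: activated in the 4 cheapest consecutive slots (25 eur/MWh).
cost_flexible = 25 * volume_mw * 4       # 0.37 euros

# Price-independent order at plug-in time: 2 slots at 33 and 2 at 25 eur/MWh.
cost_plugin = 33 * volume_mw * 2 + 25 * volume_mw * 2   # 0.4292 euros

# Plug-in-time charging is about 16% more expensive here.
extra = cost_plugin / cost_flexible - 1
```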
The absolute difference (imbalance) between the purchased energy and the energy needed is traded in the balance market and usually for a higher price than the one in Elspot.
Consequently, the BRP desires to be as precise as possible regarding the purchased energy from Elspot.
Regarding the communication between an EV and a BRP,
we assume an Information and Communication Technology (ICT) infrastructure~\cite{Albano2015133}.
When an EV is plugged in, an FO is generated, requiring minimal interaction with the owner of the EV.
The FO generation takes into account the historical use of the EV, the
$\mathit{SOC_{ini}}$ of the EV, the charging characteristics of the EV, and the technical characteristics of the charging station, e.g., a home charging installation~\cite{Neupane:2017:GEF:3077839.3077850}.
Identifying the time flexibility of an FO is challenging, but appropriate forecast techniques can be designed taking into account daily/weekly driving patterns~\cite{6465724}.
\subsection{FO aggregation}
\label{sec:SA}
Based on~\cite{TKDEEManolis}, FO aggregation is the function that given a set of FOs $F$, produces a set of aggregated FOs $\mathit{AF}$ where
\begin{math}\mathit{|AF|}\leq |F|\end{math}.
The produced AFOs capture large amounts of energy that can be traded in the market.
Due to the time flexibility of the FOs, there are different alignment combinations that can lead to different AFOs.
According to start-alignment FO aggregation,
the earliest start charging time of an aggregated FO (AFO) $f_{a}$ is the minimum earliest start charging time
among all the FOs that produced it, i.e.,
\begin{math}f_{a}.t_{es} = min_{f\in F'}(f.t_{es}),F'\subseteq F\end{math}.
The latest start charging time of $f_{a}$ is the sum of its $t_{es}$ and the minimum time flexibility among all the FOs in $F'$, i.e., \begin{math}f_{a}.t_{ls} = f_{a}.t_{es} + min_{f\in F'}(tf(f))\end{math}.
The power profile of $f_{a}$ is produced by summing up the power profiles of the FOs when they are aligned according to their earliest start charging time.
For instance, we see in~\reffig{alignments}a three FOs,
\begin{math}f_1=([1,5], \langle1,1\rangle)\end{math},
\begin{math}f_2=([2,3], \langle1,1\rangle)\end{math}, and
\begin{math}f_3=([4,5], \langle1\rangle)\end{math}, that produce AFO $f_{123}$
where \begin{math}f_{123}.t_{es} = f_{1}.t_{es}=1\end{math} and
$f_{123}.t_{ls}$ is the sum of $f_{123}.t_{es}$
and time flexibility of $f_{2}$ or $f_{3}$,
i.e., $f_{123}.t_{ls}=2$.
The power profile of $f_{123}$ is produced by summing up the power profiles of $f_1$, $f_2$, and $f_3$ based on their alignments.
Thus,
\begin{math}f_{123}.s^{(1)}.p = f_{1}.s^{(1)}.p =1\end{math},
\begin{math}f_{123}.s^{(2)}.p = f_{1}.s^{(2)}.p + f_{2}.s^{(1)}.p=2\end{math},
\begin{math}f_{123}.s^{(3)}.p = f_{2}.s^{(2)}.p =1\end{math}, and
\begin{math}f_{123}.s^{(4)}.p = f_{3}.s^{(1)}.p =1\end{math}.
Since different alignment combinations lead to different AFOs, consider the $3$ FOs \begin{math}f_1,f_2,f_3\end{math} in~\reffig{alignments} with time flexibilities 4, 1, and 1, respectively: there are 20
\begin{math}(5\cdot2\cdot2)\end{math} alignment combinations.
As a result, based on different alignments, time flexibility of the FOs can be adjusted accordingly and different power profiles for the AFOs are produced.
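Start-alignment aggregation as described above can be sketched as follows (our own illustration with an assumed FO representation `(t_es, t_ls, profile)`; it reproduces the $f_{123}$ example):

```python
# Sketch of start-alignment FO aggregation (not the authors' implementation).

def aggregate_start_aligned(fos):
    t_es = min(f[0] for f in fos)                 # minimum earliest start time
    tf = min(f[1] - f[0] for f in fos)            # minimum time flexibility
    horizon = max(f[0] - t_es + len(f[2]) for f in fos)
    profile = [0] * horizon
    for es, _ls, p in fos:                        # align each FO at its t_es
        for i, power in enumerate(p):
            profile[es - t_es + i] += power
    return (t_es, t_es + tf, profile)

f1 = (1, 5, [1, 1])
f2 = (2, 3, [1, 1])
f3 = (4, 5, [1])
f123 = aggregate_start_aligned([f1, f2, f3])      # -> (1, 2, [1, 2, 1, 1])

# Number of alignment combinations: product of (tf + 1) over the FOs.
alignments = 1
for t_es_, t_ls_, _ in (f1, f2, f3):
    alignments *= t_ls_ - t_es_ + 1               # 5 * 2 * 2 = 20
```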
\begin{figure}[tb]
\centering
\includegraphics[width=0.47\textwidth]{Alignments}
\caption{Flex-offer aggregation according to different alignments \label{fig:alignments}}
\end{figure}
Moreover, a set of FOs can be partitioned and each subset can produce an AFO.
Consequently, the output size of aggregation can be greater than one.
For instance, we see in~\reffig{alignments}b that the output of aggregation is $2$ AFOs, i.e., $f_{12}$ and $f_{3}$.
In particular, FO $f_1$ is aligned with $f_2$ and time flexibility of $f_1$ is adjusted so that $f_1.t'_{es}$ is equal to $f_2.t_{es}$.
Consequently, the power profiles of $f_1$ and $f_2$ are summed up and they produce AFO $f_{12}$.
\subsection{Market-based FO aggregation}
\label{sec:MBFO}
Given a portfolio, the goal of a BRP is to maximize its profit by purchasing, for the minimum price, the energy that it sells to its customers.
We consider flexible EVs to be part of a BRP's portfolio and, since the energy purchase takes place through orders, we examine if the energy needed to charge the flexible EVs can be purchased through flexible orders.
The purchasing strategy of a BRP depends on many different factors, e.g., the content of the portfolio (factories, households, etc.) and pricing forecast.
The strategy is out of the scope of this work and left for future work.
However, since a flexible order has in general a higher probability to achieve a lower purchase price,
we consider the goal of a BRP to be the maximization of the purchased energy through flexible orders.
In our work, we introduce {\em market-based FO aggregation} (MAGG)
to be the aggregation that given a set of FOs, outputs between one and five AFOs that fulfill the flexible order requirements, see~\refequto{eq1}{eq2}.
AFOs summarize the energy requirements and the flexibilities in amount and time, subject to the
technical requirements of a flexible order.
In order for an FO to fulfill the flexible order requirements, the FO must have ($1$) a time flexibility of at least one (\refequ{eq3}) and ($2$) between $1$ and $23$ slices (\refequ{eq4}).
Moreover, since the minimum contract size and the trade lot of a flexible order are both $100$kW, ($3$) the values of the slices of the FOs shall be multiples of $100$kW.
For illustration purposes, we assume in our example below that {\em both the volume and the trade lot for a flexible order are $2$kW instead of $100$kW}.
For instance, we see in~\reffig{alignments} that none of the individual FOs fulfills the power profile requirements of a flexible order ($2$kW).
Thus, market-based FO aggregation is necessary.
In that case, market-based FO aggregation produces
AFO $f_{12}$ that fulfills the flexible-order requirements
since its time flexibility is 1 and the power of both its slices equals $2$, see~\reffig{alignments}b.
FO $f_3$ is also part of the aggregation output, but it is not a valid AFO
because it does not fulfill the power profile requirement, i.e., its slice power is lower than $2$kW.
The flexible EVs are represented by a set of FOs.
For instance, $5000$ EVs that are part of a BRP's portfolio are represented by a set of FOs $F$.
Each EV is an FO $f$ of the set, i.e.,
\begin{math} f \in F, f=(T(f),P(f)), T(f)=[t_{es}, t_{ls}], P(f) = \langle s^{(1)}, \dots, s^{(m)}\rangle\end{math}.
A BRP must aggregate the FOs to produce AFOs that fulfill the flexible order requirements and can be then placed in the market as flexible orders.
The volume of energy is expressed through the sum of the slices of the FOs and the power of each slice must be a multiple of 100kW (\refequ{eq6}).
However, due to technical charging characteristics (EV power demand is in the interval [3.7kW,11kW] for household charging),
we take into account a power range to define the valid power amounts.
Thus, instead of requiring exact multiples of $100$kW for the power amount of each slice, we permit an insignificant deviation of $e$kW per slice, e.g., $5$kW (\refequ{eq6}).
When the financial evaluation of market-based aggregation occurs, the deviated amount is considered to be traded in the balance market, see~\refsec{marketFramework}.
Hence,
the problem of maximizing the energy bid through flexible orders given a set of FOs is formulated as follows:
\begin{subequations}
\begin{align}
&{\text{Maximize}}
& &\sum_{f_a\in \mathit{AF}} \sum_{s\in P(f_a)} s.p\label{equ:eq1}\\
& \text{subject to}
& & \mathit{AF} = \mathit{MAGG(F)}, 1\leq|\mathit{AF}|\leq5 \label{equ:eq2}\\
& & & \forall f_a\in \mathit{AF}, \mathit{tf(f_a)}\geq 1 \label{equ:eq3}\\
& & & \forall f_a\in \mathit{AF}, 1\leq|P(f_a)|\leq 23 \label{equ:eq4}\\
& & & \forall f_a \in \mathit{AF}, \forall s\in P(f_a), \label{equ:eq5}\\
& & &s.p= x\cdot 100\text{kW} \pm e \text{kW}, x\in \mathbb{N}_{> 0}, e\in [0,5] \label{equ:eq6}
\end{align}
\end{subequations}
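The constraints above can be checked mechanically. The following Python sketch (our own illustration, with an assumed AFO representation `(t_es, t_ls, profile_kw)`) validates an AFO against the time-flexibility, profile-length, and near-multiple-of-$100$kW requirements:

```python
# Sketch of a feasibility check for an AFO against the flexible-order
# requirements; the representation and example AFOs are illustrative.

def fulfills_flexible_order(afo, e_kw=5.0):
    t_es, t_ls, profile = afo
    if t_ls - t_es < 1:                  # time flexibility of at least one
        return False
    if not 1 <= len(profile) <= 23:      # between 1 and 23 hourly slices
        return False
    for p in profile:                    # each slice within +/- e kW of a
        nearest = max(1, round(p / 100)) * 100   # positive multiple of 100 kW
        if abs(p - nearest) > e_kw:
            return False
    return True

ok = fulfills_flexible_order((1, 2, [200.0, 304.0]))   # True: 4 kW deviation
bad = fulfills_flexible_order((1, 2, [200.0, 310.0]))  # False: 10 kW deviation
```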
\subsection{Market-based FO aggregation complexity}
Given a set of FO{}s $F$,
there are $\stirling{|F|}{k}$ ways (Stirling numbers of the second kind~\cite{graham1994concrete}) to partition the $|F|$ FO{}s into $k$
subsets.
Applying aggregation on each subset produces an AFO.
In market-based FO aggregation, the size of the output is between $1$ and $5$.
Thus, $k$ can be assigned values from $1$ to $5$.
Therefore, there are $\stirling{|F|}{1}$ ways to partition $|F|$ FO{}s into $1$ non-empty subset of FO{}s.
There are $\stirling{|F|}{2}$ ways to partition the $|F|$ FO{}s into $2$ non-empty subsets, where the aggregated FO{}s are $2$ and so on.
Thus, given $|F|$ FO{}s, there are
\begin{math}\stirling{|F|}{1}+\stirling{|F|}{2}+\dots+\stirling{|F|}{5}=\sum_{k=1}^{5}\stirling{|F|}{k}\end{math} ways to partition the FO{}s.
Moreover, the number of the different aggregated FO{}s depends on the alignments of the FO{}s and thus on their time flexibility.
In particular, given a set of FO{}s $\mathit{SF}$ ($\mathit{SF}$$\subseteq$$F$) with time flexibility \begin{math}\mathit{tf(f_1),\dots,tf(f_{|\mathit{SF}|})}\end{math} respectively,
the number of the aggregation results (aggregated FO{}s) that can be produced is:
\begin{math}\prod_{i=1}^{|\mathit{SF}|}\mathit{tf(f_i)}\end{math}.
Hence, given an average number of alignments per partition (\begin{math}\mathit{avg(al)}\end{math}),
there are
\begin{math}\left(\sum_{k=1}^{5}\stirling{|F|}{k}\right)\times\mathit{avg(al)}\end{math} potential aggregation results.
Furthermore,
the complexity of the problem, as an Integer Linear Programming problem, is too high to be solved by state-of-the-art solvers~\cite{LPsolvers}.
\begin{example}
Given a set with $100$ FO{}s, there are \begin{math}\sum_{k=1}^{5}\stirling{100}{k}= 6.5738\cdot10^{67}\end{math} potential partitions that can produce from 1 to 5 AFOs.
Assuming $20$ alignments per partition on average,
there are in total
\begin{math}20\cdot6.5738\cdot10^{67}=1.3148\cdot10^{69}\end{math} (approximately the estimated number of atoms in the Milky Way Galaxy)
potential aggregation results that have to be examined in order to find the optimal one.
\end{example}
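The count in the example can be reproduced with the standard recurrence for Stirling numbers of the second kind (a verification sketch, not part of the paper):

```python
# S(n, k) = k * S(n-1, k) + S(n-1, k-1): number of ways to partition
# n items into k non-empty subsets.
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# Sum over output sizes 1..5 for |F| = 100 FOs.
partitions = sum(stirling2(100, k) for k in range(1, 6))
# partitions is about 6.5738e67; with 20 alignments per partition on average,
# 20 * partitions is about 1.31e69 potential aggregation results.
```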
{
"timestamp": "2018-05-28T02:01:26",
"yymm": "1805",
"arxiv_id": "1805.02301",
"language": "en",
"url": "https://arxiv.org/abs/1805.02301"
}
\section*{Abstract}
The results of a detailed analysis of SMA, VLA, and IRAM observations of the region of massive star formation S255N in CO(2---1), N$_2$H$^+$ (3---2), NH$_3$ (1,1), C$^{18}$O (2---1) and some other lines are presented. Combining interferometer and single-dish data has enabled a more detailed investigation
of the gas kinematics in the molecular core on various spatial scales. There are no signs of rotation or
isotropic compression on the scale of the region as a whole. The largest fragments of gas ($\approx$0.3 pc) are located
near the boundary of the regions of ionized hydrogen S255 and S257. Some smaller-scale fragments are
associated with protostellar clumps. The kinetic temperatures of these fragments lie in the range 10---
80 K. A circumstellar torus with inner radius R$_{in}$ $\approx$ 8000 AU and outer radius R$_{out}$ $\approx$ 12\,000 AU has been
detected around the clump SMA1. The rotation profile indicates the existence of a central object with mass
$\approx 8.5/\sin^2(i)$ M$_\odot$. SMA1 is resolved into two clumps, SMA1---NE and SMA1---SE, whose temperatures
are $\approx$150 K and $\approx$25 K, respectively. To all appearances, the torus is involved in the accretion of surrounding
gas onto the two protostellar clumps.
\section{Introduction}
Studies of molecular clouds have attracted considerable attention. Much effort has gone into both
simulations and observations of these objects (see, e.g., \citep{rotcluster,obs}). However, there remain relatively few well-studied regions of high-mass star formation. They are encountered more rarely than regions of low-mass star formation and are located at large distances from the Sun, complicating observations of them. For detailed studies of the processes occurring in their cores, the resolution of single dishes is insufficient, while interferometers have limited sensitivity to extended structures.
The object considered in the current study is part of a molecular cloud \citep{oldie,lev} located between the two zones of ionized hydrogen S255 and S257 (Fig.~\ref{fig:cyg}). It is supposed that the entire S254---S258 complex was formed by successive star formation \citep{bieging}: expansion of the HII zones led to the compression of gas in the molecular cloud. This cloud contains three cores: S255IR, S255S, and S255N. The first is at a later stage of evolution \citep{wang,ojha}: its radiation is appreciably brighter in the IR, and water-vapor and Class II methanol maser emission is detected in this
region \citep{zinhr,kurtz}. The core S255S is the youngest of the three, as evidenced by its lower brightness in the IR and in the 1.3~mm lines, as well as the compactness of the outflow in the core \citep{wang}. A smaller number of 1.3~mm molecular lines is detected [6], and there are no signs of massive-star formation; we accordingly did not consider it in the current study.
The molecular cloud with which S255N (Sh2-255 FIR1, G192.60-MM1) is associated has been observed on several single-dish radio telescopes (OSO 20 m, IRAM 30 m, NRAO 12 m) \citep{zin2009}; the core of S255N has been observed with the SMA and VLA interferometers \citep{zinhr}. We present here combined data for the first time. The mass of the S255N core indicated by single-dish observations is $\sim$300 M$_\odot$, with n$_{H_{2}}$~$\sim$2$\times$10$^5$ cm$^{-3}$, T$_k$~$\sim$40K, $\Delta$V~$\sim$2km/s. The luminosity of the core is of order 10$^5$ L$_\odot$ \citep{minier}. An ultracompact HII region is observed in this region \citep{kurtz94}, as well as H$_2$O and Class I CH$_3$OH masers. These characteristics of the core provide evidence for the formation of massive stars in this region. A lack of coincidence in the positions of the emission peaks in the CO(2---1), HCN(1---0), HNC(1---0), HCO$^+$(1---0), C$^{18}$O (2---1), and C$^{34}$S(5---4) molecular lines has also been demonstrated \citep{zin2009}. The presence of hot, massive protostellar clumps in the core was shown in \citep{zinhr,cyg2007}, with the velocities of half of these clumps differing from the velocity of the ambient gas. The mass of the brightest of these clumps is $\approx$16 M$_\odot$ \citep{zinhr} for a distance of 1.78 kpc \citep{burns}. The presence of two spectral components in the (1, 1) ammonia lines in directions not coincident with a hot clump was detected in \citep{zinhr}. Several high-velocity outflows are present in the core, which also indicates rich gas kinematics. We attempted in the current study to estimate the motion of gas on scales from the size of the core to the sizes of the clumps, based on the collected data. We also used non-standard data-analysis methods that are robust to noise to analyze emission that is weak against the background of the hot-clump emission.
We used original methods to detect and identify kinematic fragments of the core, which were traced in position–velocity diagrams in a number of molecular lines.
\section{Observational data and analysis}
We considered data obtained on the SMA\citep{wang,zinhr} and VLA\citep{zinhr} interferometric arrays and the 30-m IRAM telescope \citep{wang,zinhr}. Table~\ref{t:mol} presents a list of the observed molecules, the frequencies of their transitions, and the instruments used. All the data were reduced anew, and self-calibration was applied to the data of \citep{zinhr}. The CO(2---1), C$^{18}$O (2---1), CH$_3$CN(12---11), and SO(6$_{5}$---5$_{4}$) lines were observed twice in the compact and once in the extended configuration of the SMA. These data were inverted into a single image for each line. The width of the synthesized antenna beam was $\approx$ 1.14$''$ at 217 GHz and $\approx$ 3.4$''$ for the remaining observations (the compact configuration of the SMA). The width of the VLA beam was $\approx$ 2.55$''$ (23.7 GHz), and the width of the IRAM beam was 12$''$ (217 GHz). The coordinates of the phase center for the SMA observations were $\alpha$(2000)=06$^h$12$^m$53.800$^s$, $\delta$(2000)=17$^\circ$59$'$22.097$''$.
\subsection{Data reduction}
Since one of the main aims of our study was to investigate the kinematics of the protostellar core on various spatial scales, we considered objects detected earlier in the S255N core with various extents from 2.5$\times$10$^{-3}$ pc to 0.48 pc [11]. The maximum angular sizes to which the instruments used at the various frequencies are sensitive are 66$''$ for the VLA and 25$''$ for the SMA, which corresponds to linear sizes of 0.52 and 0.2 pc at a distance of 1.78 kpc \citep{minier}. These limits hinder the interpretation of observations over a wide range of spatial scales. An important role is also played by the loss of flux by the interferometer, which we compensated for by combining the interferometric and single-dish data. As an example, the profile of the CO(2---1) line indicated by the SMA observations differs strongly from the profile observed on the IRAM 30-m telescope in the same direction. Therefore, we consolidated the SMA data with the observations on
the 30-m IRAM antenna. This was carried out for the N$_2$H$^+$ (3---2), CO(2---1), and SiO(5---4) lines, as well as the 1.3-mm continuum observations. The spectral-line data were combined in the visibility domain based on the relative integrated-flux calibration
in the region of overlapping spatial frequencies; the reconstructed continuum images were consolidated via a Fourier transform using standard procedures of the MIRIAD package \citep{miriad}. The ``wings'' of the CO(2---1) line at 3--7 km/s and 11--15 km/s from the single-dish data were used for the relative flux calibration of the synthesized visibilities at the frequency of this line.
The CO(2---1), C$^{18}$O(2---1), SO(6$_{5}$---5$_{4}$), DCO$^+$(4---3), and DCN(3---2) data were self-calibrated using the observations in the CH$_3$OH ($4_{2}$---$3_{1}$) line \citep{zinhr}. The self-calibration based on this line was
fairly effective, since a single maser source was observed in the telescope field \citep{selfcal}.
\subsection{Identification of kinematic fragments of the core}
The presence of at least two spectral components with different Doppler velocities in the S255N region
was noted earlier \citep{zinhr}; however, their spatial distribution was not studied. In our current study, we carried out a careful analysis of the data considered, with the aim of conducting a detailed study of the spatial--kinematic structure of the core.
Application of the moment method for the identification of kinematic fragments does not always provide the best solution, especially for lines of molecules with hyperfine structure, such as NH$_3$ and N$_2$H$^+$,
since such estimates can be shifted due to asymmetry of the line caused by its hyperfine structure. In addition, in the case of observations of two or more spectral components, the first moment yields an estimate of their mean velocity. Determining the spatial boundaries of components precisely is problematic, since they can interact, overlap, and be represented differently in different molecular lines, depending on whether or not the conditions in the components are the same. Therefore, we estimated the minimum number of boundaries from the number of peaks in a histogram of the line intensities as a function of their velocities (Fig.~\ref{fig:hist}). Thus, instead of a spatial distribution, we obtained the distribution of the line intensities in various velocity ranges. This characterizes the distribution of the density, temperature, or some other physical parameter, depending on what quantity is traced by the studied line. We chose three molecular lines for this analysis: CO(2---1), N$_2$H$^+$ (3---2), and NH$_3$ (1,1). We expected to observe signs of fragments in the outer, rarefied layers of the core in the CO line. The emission in NH$_3$ and N$_2$H$^+$ is excited in quiescent and dense regions of the core. It is possible to obtain more accurate velocity estimates using these lines, since their transitions display hyperfine structure. Their spectra are fitted fairly well with the local thermodynamic equilibrium (LTE) model presented below. The shape of the histogram (Fig.~\ref{fig:hist}) provides information about the internal structure of the core -- whether it is homogeneous or is appreciably perturbed, and also whether the gas can be described as forming isolated or overlapping velocity components (that is, as kinematic fragments of the core), or whether the gas is involved in some process with a single spectral distribution.
We estimated the positions of the line centers by fitting model line profiles for the transition to the observed spectra, taking into account hyperfine structure in an LTE approximation with a Gaussian optical-depth profile:
\begin{equation}
T(v)=\sum_{n_g=1}^{m} (J(T_{ex}^{n_g}) - J(T_{bg}))(1-e^{-\tau_{n_g}(v)}),
\end{equation}
\begin{equation}
\tau_{n_g}(v) = \sum_{i=1}^{n_{hfs}} 16\pi^3\nu_{ref}^4\mu^2 \frac{S g_i N_{u}^{n_g} } {3 c^3 J(T_{ex}^{n_g}) } \phi_{n_gi}(v),
\end{equation}
\begin{equation}
\phi_{n_gi}(v) = \frac{1}{\sqrt{2 \pi} \sigma_{n_g} } e^{\frac{-(v-v_i-v_{n_g})^2}{2 \sigma_{n_g}^2}},
\end{equation}
\noindent where $T_{ex}^{n_g}$ is the excitation temperature of spectral component $n_g$, $N_{u}^{n_g}$ the column density of the molecules in the upper level for the transition considered, $\sigma_{n_g}$ the line width, $v_{n_g}$ the position of the line-component center (the subscript $n_g$ denotes the desired parameters of the model used in the fitting), $J(T)=\frac{h\nu/k}{\exp(h\nu/kT)-1}$, $T_{bg} = 2.73$ K is the cosmic microwave background temperature, $\nu_{ref}$ the transition frequency, $\mu$ the dipole moment, $S$ the line strength, $g_i$ the statistical weight of transition $i$ of the hyperfine structure, $n_{hfs}$ the number of components of the hyperfine structure (18 for NH$_3$(1,1), 24 for NH$_3$(2,2), 38 for N$_2$H$^+$(3---2)), $v_i$ the Doppler velocity of the transition $i$ in the line, and $m$ the number of spectral components in the line.
The values of $S$, $g_i$, and $v_i$ were taken from the CDMS database \citep{cdms}. The approximation was applicable for the NH$_3$ (1,1) and (2,2) and N$_2$H$^+$(3---2) lines. At a number of positions on the map, two peaks were observed in the spectrum; accordingly, we allowed for the possibility of fitting lines containing $m$ independent spectral components. In our data sets, $m$ did not exceed two. The choice between one or two components was based on a comparison of the rms deviations in the spectrum after subtracting the model. If the ratio of the deviations for $m$=2 and $m$=1 was less than 1.4, we used the two-component model.
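The model of Eqs. (1)--(3) can be sketched numerically as follows. This is a minimal illustration rather than the fitting code used in this work: for simplicity, the optical depth of each component is parametrized directly by a total depth $\tau_0$ instead of being computed from $N_u^{n_g}$ via Eq. (2), and the hyperfine offsets and weights in the test are hypothetical.

```python
import numpy as np

H = 6.62607e-27    # Planck constant, erg s (cgs)
K_B = 1.38065e-16  # Boltzmann constant, erg/K

def j_nu(temp, nu):
    # Radiation temperature J(T) = (h*nu/k) / (exp(h*nu/kT) - 1)
    x = H * nu / K_B
    return x / np.expm1(x / temp)

def model_spectrum(v, components, hfs_v, hfs_w, nu=23.6944955e9, t_bg=2.73):
    """Brightness temperature of m spectral components with hyperfine
    structure, in the spirit of Eqs. (1)-(3) of the text.
    components: iterable of (t_ex, tau0, v0, sigma) tuples, one per component;
    tau0 is the summed optical depth (a simplification of Eq. (2)).
    hfs_v, hfs_w: hyperfine velocity offsets (km/s) and relative weights."""
    t = np.zeros_like(v, dtype=float)
    for t_ex, tau0, v0, sigma in components:
        tau = np.zeros_like(v, dtype=float)
        for v_i, w_i in zip(hfs_v, hfs_w):
            # Gaussian optical-depth profile of Eq. (3), per hyperfine transition
            tau += tau0 * w_i * np.exp(-(v - v_i - v0)**2 / (2.0 * sigma**2))
        t += (j_nu(t_ex, nu) - j_nu(t_bg, nu)) * (1.0 - np.exp(-tau))
    return t
```

Fitting then amounts to minimizing the residual between `model_spectrum` and the observed profile over $(T_{ex}, \tau_0, v_0, \sigma)$ for $m$ = 1 or 2 components, as described above.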
Thus, after fitting a line at each point in the map, we had estimates of the Doppler shifts of one or two spectral components. However, the number of kinematic components over the entire map can be appreciably greater than two. The number of such components can be estimated from a histogram of the velocity distribution of the line intensities. In the simplest case, the integrated intensities of the spectral components whose Doppler velocities lay within a given bin of the histogram were added. There also exist a large number of methods enabling choice of the bin width based on the data set considered, two
of which we used in our analysis: the rule of Knuth \citep{knuth} and the Bayesian block method \citep{bb}, as realized
in the astropy library \citep{astropy}. Figure~\ref{fig:hist} presents examples of applying these various methods.
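Astropy exposes both rules directly (`astropy.stats.histogram(data, bins='knuth')` and `bins='blocks'`). As an illustration of the principle behind the first of them, the numpy-only sketch below maximizes Knuth's Bayesian posterior for the number of equal-width bins over a synthetic bimodal velocity sample; it is not the production code used here.

```python
import numpy as np
from math import lgamma

def knuth_log_posterior(data, m):
    # Relative log posterior for m equal-width bins (Knuth 2006)
    n = len(data)
    counts, _ = np.histogram(data, bins=m)
    return (n * np.log(m) + lgamma(m / 2.0) - m * lgamma(0.5)
            - lgamma(n + m / 2.0) + sum(lgamma(c + 0.5) for c in counts))

def knuth_bins(data, max_bins=60):
    # Pick the bin count that maximizes the posterior
    return max(range(1, max_bins + 1),
               key=lambda m: knuth_log_posterior(data, m))
```

For a bimodal velocity distribution this criterion chooses enough bins to keep the two peaks separated, which is exactly the property needed when counting kinematic components from the histogram.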
Since a large fraction of a map contains lines with intensities at the detection threshold (3---5$\sigma$), and the channel width is of order the line width (5 in a group, 25 in a line, in the case of ammonia), we estimated the velocity using the k nearest neighbors (kNN) method \citep{knn}, since least-squares fitting for these regions either diverged or converged to a local minimum, with the suppression of one component by the other. Based on the fitting at each point in the map, regions were chosen whose $\chi^2$ values for the LTE approximation were less than unity. The spectra at these points were used as a reference set of objects relative to which the remaining regions were considered. The velocities were estimated as
\begin{equation}
v_k=\frac{\sum_{i=1}^n v_i \, d(s_k,s_i)^2}{\sum_{i=1}^n d(s_k,s_i)^2}
\end{equation}
where $v_k$ is the velocity in a spectrum of interest, $v_i$ the velocity for the known (reference) spectrum, $s_k$, $s_i$ a spectrum of interest and a spectrum whose estimated velocity is known, $n=5$ the number of nearby spectra used to estimate the velocity, $d(s_1,s_2) = \sqrt{\sum_{j=1}^m (s_{1_j} - s_{2_j})^2}$ the distance in intensity space between the two spectra (analogous to the least-squares residual), $m$ the number of channels, and $s_{i_j}$ the intensity of the spectrum in channel $j$. In this form, the method resembles least-squares fitting, but the algorithm for the descent of the parameters to those yielding the minimum residual is replaced by a search for the $n$ ``similar'' spectra from the reference set. Examples of fitting a spectrum using the least-squares method and the first-nearest-neighbor method are presented in Fig.~\ref{fig:fit}. The $\chi^2$ value for the LTE approximation is 266 (Jy/beam)$^2$, and for the first-nearest-neighbor method it is 72 (Jy/beam)$^2$. A more detailed analysis of the k nearest neighbors method will be described in future publications.
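The nearest-neighbor estimate can be sketched as follows. Two caveats: the reference spectra in the example are synthetic Gaussians, and for clarity the sketch uses the common inverse-squared-distance weighting of the $n$ most similar reference spectra, which is an assumption on our part rather than a verbatim transcription of Eq. (4).

```python
import numpy as np

def knn_velocity(spectrum, ref_spectra, ref_velocities, n=5, eps=1e-12):
    """Estimate the line-center velocity of `spectrum` by kNN regression on a
    reference set of spectra with known velocities. Distances are Euclidean
    in intensity space (the d(s1, s2) of the text); weights follow the common
    inverse-squared-distance convention."""
    d = np.sqrt(((ref_spectra - spectrum)**2).sum(axis=1))
    idx = np.argsort(d)[:n]               # the n most similar reference spectra
    w = 1.0 / (d[idx]**2 + eps)           # closer spectra get larger weight
    return float((w * ref_velocities[idx]).sum() / w.sum())
```

Because no gradient descent is involved, the estimate cannot diverge or fall into a local minimum on weak lines, which is the motivation given above for using it near the detection threshold.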
\subsection{Analysis of methanol observations}
Estimates of the physical parameters (kinetic temperature and density of molecular hydrogen, specific column density (column density of molecules per unit velocity interval), and relative methanol abundance) were carried out using the database of methanol energy-level populations \citep{sali2004}. This database contains methanol energy-level populations calculated at nodes of a parameter grid. The parameter values were varied from those corresponding to a dark molecular cloud to values that may be realized in regions heated by an embedded object or excited by the passage of a shock. The kinetic temperature was varied from 10 to 220 K, the density of molecular hydrogen from $10^3$ to $10^9$ cm$^{-3}$, the abundance of methanol relative to hydrogen from $10^{-9}$ to $10^{-6}$, and the specific methanol column density from $10^8$ to $10^{13}$ cm$^{-3}$s. The computations of the methanol level populations in the database were carried out in the LVG approximation. A model for the excitation of the methanol molecule developed for the conditions characteristic for star-forming regions, including regions of massive-star formation, was used in the computations \citep{sobolev}.
\section{Results}
\subsection{Spatial--kinematic structure of the core}
To investigate the kinematics of the gas inside the core, we considered first and foremost the emission in the N$_2$H$^+$ (3---2) and NH$_3$ (1,1) lines, since these trace dense, quiescent gas. In addition, both lines contain hyperfine structure, making it possible to accurately measure Doppler shifts, and thereby the velocity of the gas. We compared the N$_2$H$^+$ (3---2) data with the data for the CO(2---1), SO(6$_{5}$---5$_{4}$), and C$^{18}$O (2---1) lines in a position–velocity diagram (see Fig. \ref{fig:pvcore}).
The S255N core is part of a more extended structure \citep{oldie}, which is also observed in our data. The CO(2---1) line emission has no boundary at the north and south of the map, as can be seen in Fig. \ref{fig:mco} (combined data). The map of the first moment of this line on scales exceeding 20$''$ corresponds to the map presented in \citep{lev}. However, a comparison of the first and second moments of the CO line (Fig. \ref{fig:cocorr}) indicates that the line width grows nearly linearly toward the center of the velocity range. The direction of the velocity variations is close to the direction toward the ionized regions S255 and S257 from the center of the map, with the mean gas velocity observed at the map
center. There is no clearly distinguished symmetry axis, such as would be characteristic for the presence of rotation, as was shown in \citep{rotcluster}. Such a distribution would not be characteristic for an isotropic collapse. However, the broadening of lines toward the center of the velocity range may indicate mixing of two portions of gas driven by the expansion of envelopes.
As can be seen in Fig. \ref{fig:kn2h}, the kinematics of the core are very complex. The presence of two spectral components of ammonia in gas that is not associated with protostellar clumps was indicated in \citep{zinhr}, as we also found in our data. Shifts in the component velocities from point to point are observed. The overall direction of the velocity gradient lies from the southwest to the northeast and east, from $\sim$6.7 km/s to $\sim$11 km/s. However, the velocity does not vary uniformly. There are regions where there is essentially no shift in the line. Figure~\ref{fig:nvh} presents the velocity distribution of the pixels for maps of the CO(2---1), N$_2$H$^+$(3---2) and NH$_3$(1,1) lines. The greatest coincidence of peaks in the distributions for the various lines occurs at a velocity of $\sim$7.8 km/s. The gas with such velocities is localized primarily in the southwestern part of the map. There is also a modest peak at $\sim$7.1 km/s located at the southern edge of the map, which can also be identified in the velocity slice marked 1 in Fig. \ref{fig:pvcore}. According to the data of \citep{lev,zinhr}, the velocity of the S255N core is 8.9 km/s, which corresponds to peak 4 in Fig. \ref{fig:nvh}. However, its position in CO is shifted by $\sim$0.3 km/s relative to its position in N$_2$H$^+$ and NH$_3$. Modest peaks are also present at 8.3 km/s and 9.6 km/s. We denote each kinematic fragment using numbers from 1 to 5, beginning with the fragment with the lowest velocity, according to the histogram in Fig. \ref{fig:nvh}.
Thus, the gas in the core is distributed inhomogeneously. There exist some regions within which there are no significant velocity shifts, which we will refer to as kinematic fragments of the core. Figure \ref{fig:pvcore} presents a frequency slice through the data cube in the CO(2---1), N$_2$H$^+$(3---2), SO(6$_{5}$---5$_{4}$) and C$^{18}$O(2---1) lines along a path passing along the periphery of the core and crossing through these fragments. This path is shown by the red line in Fig.~\ref{fig:nh3_pv}. The shape of the CO spectral slice overall repeats the spatial–frequency structure of the N$_2$H$^+$ and NH$_3$ lines; however, appreciably more compact structures are traced in the SO and C$^{18}$O lines. Fragments 1, 3, and 4 are correlated with both of these lines, while fragment 2 is predominantly correlated with C$^{18}$O. CO emission at $\sim$7.9 km/s is observed along the entire path of the slices. We suggest that this is associated with the diffuse gas surrounding the core observed in \citep{oldie}. In contrast to CO, the N$_2$H$^+$ and NH$_3$ emission at this velocity is localized only in the southeastern part of the core. This region is depicted in Fig. \ref{fig:pvcore} at the beginning and end of the slice path. Two spectral components in both the N$_2$H$^+$ and NH$_3$ lines are observed at a distance of 50$''$ from the beginning of the path, as was also noted earlier in \citep{zinhr}. The red component is not obviously represented in the CO line. There is likewise no boundary between the fragments in the CO and N$_2$H$^+$ lines at angular and spectral resolutions of 1.2$''$ and 0.5 km/s, apart from the fifth fragment. It is also noteworthy that, in spite of the different spatial positions of the emission maxima in the N$_2$H$^+$ and NH$_3$ lines, their spatial--frequency distributions are otherwise similar to each other. The large extent of the N$_2$H$^+$ emission can be explained by the addition of the single-dish data to the interferometric data.
The velocities of the clumps SMA1, SMA2, and SMA3 presented in \citep{zinhr} are close to the gas velocities observed in the N$_2$H$^+$ (3---2) line, which is not the case for SMA4, SMA5, and SMA6. In addition, a continuous, elongated filament of gas in the C$^{18}$O (2---1) line can be traced between clumps SMA1--3; the velocity gradient along this filament is reflected in the position–velocity diagram in Fig. \ref{fig:psma123}. It is striking that the DCO$^+$ (4---3) line is shifted relative to the position of the C$^{18}$O (2---1) emission peak observed in the clumps. The peak of the DCO$^+$ emission, which is located at the edge of the map, has coordinates (7$''$, --34$''$) relative to the phase center, and a full width at half maximum of 0.7 km/s with its center at 8.8 km/s and an intensity of $\approx$3 K. The DCO$^+$, CO, and N$_2$H$^+$ spectra are presented in Fig. \ref{fig:dcospecs}. The CO line has a red wing, suggesting a high-velocity outflow. An IR source with similar coordinates is presented in \citep{ir,spitzer}, which in all likelihood is identified with an outflow and the DCO$^+$ peak. The spatial–velocity structure of the gas is more complex in the vicinity of the clump SMA1 than in the rest of the core.
\subsection{Central region}
The region of intersection of kinematic fragments 2, 3, and 5 in the vicinity of SMA1 is of the most interest. The peak of the 1.3-mm continuum emission also lies in this direction. According to the data presented in \citep{zinhr}, the Doppler velocity of the brightest clump, associated with a peak in the dust emission of S255N---SMA1, is 8 km/s; however, the presence of two spatially unresolved clumps inside SMA1 is proposed in \citep{cyg2007}. The existence of a bipolar outflow in the CO(2---1) and SiO(5---4) lines is also known. Evidence for rotation of the clump SMA1 with a radius of $\sim$1.5$''$, corresponding to 2500 AU at a distance of 1.78 kpc, is presented in \citep{wang}, but the hypothesis that two clumps are present was not considered. We observed an appreciably more extended structure toward SMA1 in the ammonia data (Fig. \ref{fig:nh3}) than the structure described in \citep{wang,zinhr}, oriented perpendicular to the outflow, whose velocity varies from 9 to 10.4 km/s, primarily along the structure. Figure 12 presents a velocity map together with a position-velocity diagram for this clump in the C$^{18}$O (2---1) and DCO$^+$ (4---3) lines with the velocities estimated from the ammonia line indicated.
The position–velocity diagram in the (1, 1) ammonia line is presented in Fig.~\ref{fig:nh3_pv}. Such a velocity profile is characteristic for a Keplerian torus (see, e.g., \citep{torus,t2}) with its rotation plane close to the line of sight, with inner and outer radii R$_{in}\approx$ 8000 AU and R$_{out}\approx$ 12000 AU, and a central mass of M$_{c}~\approx$~8.5 M$_\odot/\sin^2(i)$, where $i$ is the angle between the torus axis and the direction toward the observer. The shape of the ammonia emission contours and the linear radial dependence of the velocity in the inner part of the torus testify that the inclination of the torus is close to 90$^\circ$. We constructed a position–velocity diagram similarly to the method proposed in the appendix of \citep{pvd}, but without taking into account the radial velocity. We compared the shapes of the observed and model contours in the position–velocity diagram. The sizes of the outer and inner radii can also be traced in the observed position–velocity diagram, as the linear and Keplerian parts of the diagram (Fig.~\ref{fig:nh3_pv}). The inhomogeneous radial distribution of the ammonia intensity, and also the presence of emission at distances from the clump exceeding the outer radius of the torus, are noteworthy. Ammonia emission shifted by 0.8 km/s is observed to the north (the positive direction in Fig. 13).
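The expected position--velocity envelope of such a torus can be sketched as follows. The parameters (R$_{in}\approx$ 8000 AU, R$_{out}\approx$ 12000 AU, M$_c\approx$ 8.5 M$_\odot$) are those quoted above; the edge-on geometry ($i \approx 90^\circ$) is assumed, and the systemic radial velocity is ignored, as in the construction described in the text.

```python
import numpy as np

G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33   # solar mass, g
AU = 1.496e13      # astronomical unit, cm

def pv_envelope(x_au, m_c=8.5, r_in=8000.0, r_out=12000.0):
    """Extreme line-of-sight velocity (km/s) at projected offset x_au (AU)
    along the midline of an edge-on Keplerian torus.
    |x| <  r_in          : the largest projected velocity comes from gas at
                           r_in, giving the linear inner part of the diagram;
    r_in <= |x| <= r_out : the Keplerian envelope v = sqrt(G M / x);
    |x| >  r_out         : no torus material on the line of sight."""
    x = abs(float(x_au))
    if x > r_out:
        return 0.0
    v_kep = lambda r_au: np.sqrt(G * m_c * M_SUN / (r_au * AU)) / 1e5  # km/s
    if x < r_in:
        return (x / r_in) * v_kep(r_in)
    return v_kep(x)
```

With these numbers the envelope peaks at $v_{kep}(8000\,{\rm AU}) \approx 1$ km/s, of the same order as the $\pm$0.7 km/s half-spread of the 9--10.4 km/s velocity range quoted above for $\sin i \approx 1$.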
Figure~\ref{fig:gridmap}a shows that the C$^{18}$O (2---1) emission between the clumps SMA1 and SMA3 is continuous both spatially and in frequency. In addition, we found that SMA1 consists of two individual clumps, which we will refer to as SMA1-NE and SMA1-SW. SMA1-NE is brighter in C$^{18}$O (2---1) and has a Doppler velocity of 7 km/s, while SMA1-SW has a Doppler velocity of 9.8 km/s. According to the C$^{18}$O (2---1) data, their coordinates are $\alpha$(2000)=06$^h$12$^m$53.715$^s\pm$0.005$^s$, $\delta$(2000)=18$^\circ$00$'$27.73$''\pm$0.07$''$ and $\alpha$(2000)=06$^h$12$^m$53.588$^s\pm$0.007$^s$, $\delta$(2000)=18$^\circ$00$'$26.2$''\pm$0.1$''$, respectively.
\subsection{Physical properties}
\subsubsection{Analysis of methanol observations}
\label{ch3oh}
Emission in seven methanol lines was detected in S255N: three Class I maser lines, one Class II maser line, and three quasi-thermal lines (see Table~\ref{tab1}). However, to all appearances, the emission of the Class II maser line is thermal.
We detected a bright source with the coordinates 06$^h$12$^m$53.70$^s$ +18$^\circ$00$'$24.7$''$ and with a velocity of 11.2 km/s in the $8_{-1}$--$7_{0}$ and $9_{-1}$--$8_{0}$ methanol transitions. The spectra of this source are shown in Fig.~\ref{fig:maser}. The intensity of the lines is $\approx$7 Jy/beam, appreciably higher than the intensity in the direction of the outflow with coordinates 06$^h$12$^m$54.2$^s$ +18$^\circ$00$'$14.9$''$. This large difference in flux densities suggests that this is a maser source. Maser emission at the frequencies of these transitions has also been reported, for example, in \citep{m1,m2,m3}. The maser source is localized at the southeastern boundary of the clump SMA1 and a bipolar outflow, in both space and frequency, as can be seen in Fig. \ref{fig:pvcore}. We detected only one bright source, rather than the two reported in \citep{kurtz}. Class I emission is also observed for the other maser sources reported in \citep{kurtz}. However, the insufficient sensitivity hinders drawing unambiguous conclusions about the nature of the molecular excitation. The line is fairly narrow and shows signs of maser emission in the directions toward points 5, 6, and 7, but the shape of the spectrum in the direction of the blue part of the bipolar outflow resembles the observed profiles of the SO(6$_5$---5$_4$) and SiO(5---4) lines.
The positions of the points where our estimates were carried out are shown in Fig.~\ref{fig:mcco}. In most cases, the maximum methanol line emission is shifted relative to the centers of the clumps presented in \citep{zinhr}. Methanol emission peaks arranged along the line of the high-velocity outflow can also be distinguished \citep{wang}. The coordinates of these maxima are presented in Table~\ref{pos}.
At all the peaks, the line profiles differ from Gaussian. Two spectral peaks at $\sim$10 km/s and $\sim$6 km/s are clearly visible at positions (1) and (2), while there is no obvious division into spectral components at positions (3) and (4). The spectra at positions (1) and (2) were fitted with two Gaussians, and estimates were obtained separately for each of the components. Only one component was fitted at positions (3) and (4), at velocities of 7 and 8 km/s, respectively. Our estimates of the physical parameters of the gas are presented in Table~\ref{params}.
\subsubsection{Other lines}
The angular separation of the two emission peaks in the CH$_3$CN (12$_{0}$---11$_{0}$) and C$^{18}$O (2---1) lines within SMA1, identified with NE and SW, is 1.5$''$, which corresponds to a linear size in the plane of the sky of $\sim$2700 AU at a distance of 1.78 kpc \citep{burns}.
Our re-reduction of the data for the CH$_3$CN (12---11) transitions \citep{wang,zinhr} leads to a separation of SMA1--NE and SMA1--SW in space and frequency, as can be seen in Fig.~\ref{fig:ch3cn_pv}. Rotational diagrams \citep{ch3cn} were used to estimate the kinetic temperatures of the gas (T$_k$) for SMA1--NE and SMA1--SW, which were found to be 150$\pm$5 K and 25$^{+5}_{-10}$ K, respectively. We used the first five transitions of the K ladder for the estimate for SMA1--NE, and the first three transitions for SMA1--SW. The clump temperatures indicated by the analysis of the methanol data range from 20 K to 100 K and from 45 K to 100 K, respectively, close to the estimates based on the CH$_3$CN data. The coordinates of the IR source indicated in \citep{spitzer} are appreciably closer to SMA1--NE than to SMA1--SW.
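The rotational-diagram (Boltzmann plot) estimate reduces to a linear fit of $\ln(N_u/g_u)$ against $E_u/k$, whose slope is $-1/T_{rot}$. A minimal sketch follows; the upper-level energies in the test are approximate, illustrative values for the CH$_3$CN K ladder, and the synthetic populations replace the actual measured intensities.

```python
import numpy as np

def rotation_diagram_temperature(e_u_kelvin, ln_nu_over_gu):
    """Rotational temperature from a Boltzmann plot: in LTE,
    ln(N_u/g_u) = ln(N_tot/Q) - E_u/(k T_rot), so the slope of a
    straight-line fit against E_u/k (in K) equals -1/T_rot."""
    slope, _intercept = np.polyfit(np.asarray(e_u_kelvin),
                                   np.asarray(ln_nu_over_gu), 1)
    return -1.0 / slope
```

Using five K-ladder transitions for the hot clump and three for the cool one, as described above, simply changes how many points enter this fit.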
We constructed a map of the distribution of T$_k$ based on the (2, 2) and (1, 1) ammonia lines (Fig.~\ref{fig:tkin}). Typical values lie in the range 10--60 K. Figure~\ref{fig:gridmap} shows that the ammonia emission is associated with the SW clump. Temperature estimates for SMA1--SW based on the NH$_3$ data are close to the estimates from the CH$_3$CN data, and there is no indication of hot gas associated with SMA1--NE from the ammonia data. The DCN(3---2) and DCO$^+$ (4---3) emission to the south of the clumps also correlates with the region of low temperatures around SMA1--SW, but there is no DCN emission to the north. The DCO$^+$ emission toward the north (Fig.~\ref{fig:gridmap}) is associated with gas interacting with SMA1--NE, since it lies in space and frequency inside the part of the region of C$^{18}$O (2---1) emission where there is no ammonia. The temperature of the gas in the direction of SMA2 indicated by the ammonia data grows to 54 K. A cool region with a temperature of about 10 K with its peak at the coordinates 6$^h$12$^m$53$^s$, 18$^\circ$01$'$01$''$ is also observed at the northern boundary of the map. It is striking that the temperature grows to the east and west of the ammonia emission peak S255N--NH$_3$, consistent with the ratio of the line intensities for these two transitions at these coordinates. We used the velocity distribution of the temperatures indicated by the ammonia data to estimate the temperature range characteristic for each fragment, presented in Table \ref{tbl:vcmp}. The estimates of the kinetic temperatures based on the methanol and ammonia data are similar at points 5, 8, and NH$_3$ of Table~\ref{pos}.
The estimated mass of the filament joining the clumps SMA2, SMA1, and SMA3 is $\sim$8M$_\odot$. We used the data for the C$^{18}$O (2---1) line to calculate this mass. The column density of C$^{18}$O was obtained using equation (15.28) of \citep{Rohlfs2004}, and the mass of gas was calculated from the abundance of this isotope of CO, 1.7$\times$10$^{-7}$ \citep{Freeking1982}.
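The final step of that mass estimate, converting a C$^{18}$O column density into a gas mass via the abundance ratio, can be sketched as follows. The numbers in the example are illustrative rather than the measured values, and the mean molecular weight of 2.8 per H$_2$ (accounting for helium) is our assumption; the abundance 1.7$\times$10$^{-7}$ is the value from \citep{Freeking1982} used in the text.

```python
def mass_from_c18o(n_c18o_cm2, area_cm2, x_c18o=1.7e-7, mu=2.8):
    """Gas mass (M_sun) from a mean C18O column density over an area:
    N(H2) = N(C18O) / X(C18O),  M = mu * m_H * N(H2) * A,
    where mu = 2.8 is the mean molecular weight per H2 including helium."""
    m_h = 1.6735e-24   # hydrogen atom mass, g
    m_sun = 1.989e33   # solar mass, g
    n_h2 = n_c18o_cm2 / x_c18o
    return mu * m_h * n_h2 * area_cm2 / m_sun
```

For example, a mean column density of 10$^{15}$ cm$^{-2}$ over a (0.1 pc)$^2$ patch yields a little over one solar mass; the $\sim$8 M$_\odot$ quoted above corresponds to the actual column densities and area of the SMA2--SMA1--SMA3 filament.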
\section{Discussion}
The core S255N has a fairly complex kinematic structure. To all appearances, it is being compressed by expanding HII regions \citep{bieging}. The region of N$_2$H$^+$ (3---2) emission, which traces dense gas, is located between the regions of radiation-heated dust observed in the IR around the ionized regions S255 and S257. The large-scale kinematic fragments are oriented parallel to the boundary of the gas and dust envelopes, but there is no clear indication of the nature of these fragments. According to \citep{spitzer}, 13 Class I and II protostellar IR sources were detected in the core. Seven large protostellar clumps with masses of 1--16 M$_\odot$ were also detected in the core, which are partially identified with IR sources. The presence of so many sources provides evidence of active formation of both low-mass and high-mass stars in the core. The absence of velocity profiles characteristic of compression suggests that the core has not yet made the transition to the phase of collapse of its periphery.
Note that the different spatial scales traced by the different molecules share a common kinematic structure. However, several characteristic features are observed. The outer, dispersed regions of the core traced by the CO (2---1) line contain a component at a velocity of $\sim$8 km/s, which appears to be related to unperturbed gas surrounding the core. In addition, three directions for variations of the velocity can be distinguished from the center: toward the northeast, the southwest, and the southeast. The last two correspond to the directions toward the ionized regions S255 and S257, suggesting that some kinematic fragments may come about as a consequence of an external interaction. Velocity slices for molecules tracing dense gas (NH$_3$ (1,1), N$_2$H$^+$ (3---2), C$^{18}$O (2---1)) resemble the section in the CO(2---1) line, apart from the component at $\sim$8 km/s, and also the region around the clumps. However, not all of the gas fragments can be formed by external factors. In particular, an extended structure connecting the clumps SMA1, SMA2 and SMA3 and kinematically related to these clumps can be seen in the C$^{18}$O line. In addition, two fragments are observed along the line of sight in the direction of the blue wing of the bipolar outflow from SMA1 in the N$_2$H$^+$ and NH$_3$ lines; one of these is not associated with the dispersed gas of the core, since it is not represented in the CO(2---1) line.
The N$_2$H$^+$, NH$_3$, C$^{18}$O, and SO data suggest that the matter in the core is distributed fairly inhomogeneously, with kinematically distinct interacting gas components being present. Clumps are not present in all of these, as can be seen in the velocity distribution in Fig.~\ref{fig:nvh}.
The dense, compact gas observed in C$^{18}$O (2---1) has an elongated structure. This emission is appreciably correlated with the emission in the N$_2$H$^+$ (3---2) line, but the portion that is not is located in the region of overlap of the kinematic fragments of the core. Figure~\ref{fig:gridmap} shows that SMA1 is surrounded by matter mainly to the north and south, as can be seen in the position–velocity diagram. The line spectrum at the currently available resolution varies smoothly, separating into two components toward SMA1 (NE and SW) -- a sign of interaction of the elongated structure with the clumps. It is also important that the emission at 8--9 km/s is appreciably weaker at the center of SMA1 than in the filament. This may indicate that a large fraction of the filament gas in the vicinity of the clumps is interacting with them (Fig.~\ref{fig:pvcore}).
The C$^{18}$O, CH$_3$CN, CH$_3$OH and DCO$^+$ data unambiguously indicate the presence of two clumps, which were not resolved earlier and have been interpreted as the object S255N SMA1. We have also continued to use the notation SMA1 when referring to the most evolved, kinematically rich central region of the core. It is noteworthy that the mean velocity of the detected giant torus coincides with the velocity of SMA1--SW, although its size is much larger than the distance between NE and SW (in the plane of the sky). Compact regions of DCO$^+$ emission that are kinematically related to both clumps are observed. An appreciable fraction of the DCO$^+$ emission is located outside the clumps, but inside the torus, correlating with C$^{18}$O (Fig. \ref{fig:gridmap}). The NE clump is associated with a bipolar outflow (Fig.~\ref{fig:outflow}) observed in the wings of the CO(2---1), SO(6$_{5}$---5$_{4}$) and SiO(5---4) lines, and also with a water maser \citep{cyg2007} and with the methanol masers from \citep{kurtz} and detected by us. Regions of DCO$^+$ emission are present in the gas interacting with NE. This effect is not observed for SW. It is noteworthy that SW is located practically on the line of the bipolar outflow from NE. This feature could quite plausibly result from the action of the outflow on the gas in the core. However, the relevance of the torus in this picture remains unclear. Signs of the torus are also observed in the C$^{18}$O (2---1) data. Asymmetry of the CO velocity profiles relative to NE can be traced in Fig. \ref{fig:outflow}.
Analysis of the position–velocity diagram in the C$^{18}$O, SO, NH$_3$ and DCO$^+$ lines suggests that the torus traced in the ammonia lines is related to the accretion of the more extended, dense material of the elongated, filament-like structure onto the two young protostellar clumps. The position--velocity diagram in the C$^{18}$O line in the vicinity of SMA1 indicates an active transfer of matter between the clumps and the ambient gas in the rotational plane of the torus, with quasi-Keplerian radial profiles (Figs. \ref{fig:gridmap} a,b,c).
The detected torus is one of several dozen known disk-like objects around massive protostars \citep{disk2, diskoutflow, disk3,quanz,chini,sako}. However, a large fraction of known disks have smaller sizes or are located around less massive protostars. Tori that are larger than the disks and have appreciable thicknesses are also known \citep{t2,torus5,torus2,torus3,torus4,torus}. The object G28.20-0.05 is closest in terms of its size and physical parameters. The object we have discovered is large ($\sim$24 000 AU in diameter, as opposed to $\le$12 000 AU for G28.20-0.05). The two regions both host an ultracompact HII region. The temperatures for the two objects estimated from ammonia lines are similar\citep{torus2}. However, warmer, chemically rich torus-like rotating structures are also observed \citep{torus3,torus4}. The presence of a dust component in a similar object is also known \citep{quanz}. No signs of such a component have been detected in the case of S255N SMA1 \citep{ir,wang,zinhr}.
Theoretical models predict the appearance of torus-like objects that are gravitationally unstable \citep{transtorus}, together with an important role for the late accretion of matter onto a protostar as a transitional structure \citep{diskf}. The detection of a torus is consistent with the hypothesis of the formation of massive stars via disk accretion \citep{accretion,massive}. Observations of the region with better angular resolution and simultaneously higher sensitivity are needed for estimates of the accretion rates and detailed studies of the interaction of the torus and the ambient gas.
\section{Conclusion}
We have considered the kinematic structure of the S255N core on various spatial scales, from the outer layers of the core to dense clumps in the core. Our analysis has led to the following discoveries.
\begin{enumerate}
\item The core does not show characteristic signs of either isotropic collapse or rotation. Overall, the large-scale motions are in agreement with those described in \citep{lev}; however, the structure of the core is very inhomogeneous.
\item At least five extended parts of the core are observed, within which the gas velocities are similar.
\item Large fragments of the core ($\sim$1/3 the size of the core itself) are oriented parallel to the boundaries of the HII regions S255 and S257.
\item A large fraction of smaller fragments is associated with the SMA1 clump. One of these fragments is located to the southeast of SMA1 along the line of a bipolar outflow, and is elongated along this outflow. Another fragment connecting the clumps SMA3-2-1 is also elongated, and it is fairly massive ($\sim 8\,$M$_\odot$).
\item We have detected a circumstellar torus rotating around SMA1, with inner radius R$_{in}\approx$8000 AU and outer radius R$_{out}\approx$ 12000 AU. The rotation profile is characteristic of Keplerian motion around a central mass of $\sim 8.5/\sin^2 i$ M$_\odot$.
\item SMA1 is spatially resolved into two separate clumps, NE and SW. They actively interact with the torus and a filament. One of the clumps is fairly cool ($\sim$25 K), while the other is much hotter ($\sim$150 K). The bipolar outflow is associated with the hot source.
\item A Class I maser source was detected in the CH$_3$OH $4_{2}$--$3_{1}$, $8_{-1}$--$7_{0}$ and $9_{-1}$--$8_{0}$ lines, with coordinates $\alpha$(2000)=06$^h$12$^m$53.69$^s$ $\delta$(2000)=+18$^\circ$00$'$25.0$''$ and radial velocity $\approx$11.2 km/s. This source is located at the northeast boundary of the bipolar outflow from SMA1--NE.
\item The kinetic temperature and number density of the gas in the clumps and outflows vary in the ranges 15--220 K and 10$^4$--10$^5$ cm$^{-3}$. The filling factor for the sources in the outflows is low, suggesting strong fragmentation of the gas.
\item The temperature of the gas in the core is distributed inhomogeneously both in terms of the parameters of different fragments and inside the fragments themselves. Fragments that do not contain clumps have lower temperatures and smaller temperature ranges.
\item We detected a clump with the coordinates (7$''$, -34$''$ ) relative to SMA1, together with a related high-velocity outflow.
\end{enumerate}
\section{Acknowledgments}
We thank L.E. Pirogov and A.V. Lapinov for useful discussions of material and methods. We also thank the referee for thoughtful comments that have helped improve this paper.
This work was supported by the Russian Foundation for Basic Research (grants 16-32-00873, 16-02-00761, 15-02-06098), the Russian Science Foundation (grant 17-12-01256), decree No. 211 of the Government of the Russian Federation (contract 02.A03.21.0006), and the Ministry of Education and Science of the Russian Federation (grant RK AAAA-A17-117030310283-7).
petez@ipfran.ru
\newpage
\section{Introduction}
Among the approaches to modeling dark matter, those exploiting a
Bose-Einstein condensate (BEC) are very popular; see
e.g.~\cite{Sin,Boehmer,Harko,Arbey,Hu,Kain} and the
review~\cite{review}. They share the nice features of cold dark matter
and show a number of advantages, but also encounter their own
difficulties, e.g. the problem of gravitational
collapse~\cite{Guzman}, an overestimated dark halo mass~\cite{Harko},
etc.
As the true nature of the dark matter constituents is unknown, many
exotic candidates have been considered, e.g. axionic~\cite{axion} or even
stringy ones~\cite{stringy}. In some papers, the authors exploit
certain models of nonstandard thermostatistics with the aim of describing
basic objects of quantum cosmology~\cite{Tavayef}, the physics of
dark matter~\cite{Infinite,Dil} or black
holes~\cite{Strominger,Ng,Zare}. It is worthwhile to examine various
nontrivial models of nonstandard thermostatistics as possible
candidates for modeling, at least effectively, the main properties of
dark matter, in order to choose the most adequate one.
In this paper we explore the $\mu$-Bose gas model as a possible
model of dark matter, and arrive at some important results. The $\mu$-Bose
gas model was first proposed in~\cite{GR-intercepts,GM-correl},
where the intercepts of the second- and higher-order correlation
functions were derived. The study of $\mu$-Bose gas thermodynamics started
in~\cite{RGK} and used the special so-called $\mu$-calculus, which
allowed one to explore the basic quantities, e.g. $\mu$-analogs of
elementary and special functions.
There are diverse deformed Bose gas models, see
e.g.~\cite{GR-intercepts,GM-correl,Martin91,Manko,
Chaichian,Monteiro,ShuChen02,AdGavr,GavrSigma,AlginFibOsc,ScarfoneSwamy09,GR-12}.
As usual, the deformations are based on the respective deformed
oscillator (DO) models, such as
$q$-oscillators~\cite{ArikCoon76,Biedenharn} or the two-parameter
$p,q$-deformed (or Fibonacci) oscillators~\cite{Chakrabarti91}.
Plenty of nonstandard one-parameter DOs exist~\cite{Plethora08},
along with polynomially deformed ones~\cite{Polynomially}.
Among the so-called quasi-Fibonacci oscillators~\cite{Kachurik} we find the
$\mu$-oscillator~\cite{Jann}. Note that, unlike DOs of polynomial
type, the $\mu$-oscillator (and $\mu$-bosons) belong to the less
studied class of rational-type DOs. The DO models often possess
unusual properties, e.g. energy level degeneracies, nontrivial
recurrence relations for energy spectra, etc. These nontrivial features of
DOs motivate their application in diverse fields of quantum physics.
The physical meaning of the deformation parameter(s) of a deformed
model depends on its specific application to a physical system. For
instance, when the model of an ideal gas of {\it deformed} bosons is
applied to compute the intercepts of the momentum correlation
functions~\cite{GR-intercepts,AdGavr,GavrSigma}, one effectively
takes into account the non-zero proper volume of the
particles~\cite{av95}, their internal structure, or their
compositeness~\cite{GKM2011,GM2012}. The experimental data on the
two-pion correlation-function intercepts reveal~\cite{Abelev} a
non-Bose type behavior of pions, and the use of a deformed Bose gas
model (BGM) has proven efficient in its description~\cite{GavrSigma,AGP,GM_NP}.
There also exists an application of the $q$-Bose gas setup to the
description of the phonon spectrum of $^4$He, with good agreement
with experiment~\cite{Monteiro}.
In the $\mu$-BGM and other deformed analogs of the Bose gas model,
the quantum statistical interaction gets
modified~\cite{GR-12,AlginSenay}. Moreover, a deformation can also
absorb~\cite{ScarfoneSwamy09} an interaction present in the
initially non-deformed system.
The paper is organized as follows. In Sec.~2 we give a setup of the
$\mu$-deformed Bose gas model and of the $\mu$-calculus. Thermodynamical
quantities are considered in Sec.~3: the total number of particles
is given explicitly, and from it the partition function (all with
explicit $\mu$-dependence). In Sec.~4, within the geometric approach to
thermodynamics (see e.g.~\cite{Ruppeiner,Janyszek,Quevedo,Ubriaco}),
we confirm the existence of Bose-like condensation in the $\mu$-Bose
gas. The critical temperature of condensation and its dependence on the
deformation parameter $\mu$ are studied. Other aspects of the
thermodynamical functions, useful for the application, are
considered in the next section. A discussion of the most important
features and virtues of the $\mu$-Bose gas enabling its application to
model dark matter, and the concluding remarks, are given in the
final section of the paper.
\section{Deformed analogs of Bose gas model}
As in other works on deformed oscillators, see e.g.~\cite{Monteiro,ScarfoneSwamy09,AlginFibOsc},
we deal in fact with a (system of) deformed bosons. The important virtue of such a deformation is
its ability to provide an effective account of the interaction between particles, their non-zero volume,
their inner (composite) structure, etc.
The $\mu$-deformed BGM associated with $\mu$-oscillator~\cite{Jann}
was introduced in~\cite{GR-intercepts, GM-correl}. Therein and in
this paper the thermal average of the operator $\mathcal{O}$ is
determined by the formula
\begin{equation}\label{eq.2}
\langle \mathcal{O} \rangle=\frac{\mathrm{Tr}(\mathcal{O}e^{-\beta H})}{Z},
\end{equation}
$Z$ being the grand canonical partition function. Its logarithm is
\begin{equation}\label{eq.3}
\ln Z=-\sum_i\ln(1-ze^{-\beta\varepsilon_i})
\end{equation}
with the fugacity $z=e^{\beta\widetilde{\mu}}$\, ($\widetilde{\mu}$
is chemical potential). The familiar formula
\begin{equation}\label{eq.4}
N=z\frac{d}{dz}\ln Z
\end{equation}
for the number of particles will be modified (deformed), see below.
To study the $\mu$-BGM as the model for the system of deformed bosons, we use
the Hamiltonian with chemical potential $\widetilde{\mu}$
\begin{equation}\label{eq.1}
H=\sum_i(\varepsilon_i-\widetilde{\mu})N_i\, .
\end{equation}
Here $\varepsilon_i$ is the kinetic energy of a particle in the state
``$i$'' and $N_i$ is the particle number (occupation number) operator
corresponding to the state ``$i$''.
\underline{\it Elements of $\mu$-calculus}.
To develop the $\mu$-analog of BGM, we extend (deform) the notion of derivative.
The so-called $\mu$-derivative, introduced in~\cite{RGK}, differs from the well-known
Jackson or $q$-derivative~\cite{Kac} and from its $p,q$-extension (used in~\cite{GR-12}).
The easiest way to define the $\mu$-extension is to apply the rule
\begin{equation}\label{eq.10}
\mathcal{D}^{(\mu)}_x x^n\!=\![n]_{\mu}x^{n\!-\!1}, \quad
[n]_{\mu}\!\equiv\!\frac{n}{1+\mu n} \quad \mbox{($\mu$-bracket)}\,
\end{equation}
so that the $\mu$-derivative involves $\mu$-bracket from the work on $\mu$-oscillator~\cite{Jann}.
If $\mu\!\rightarrow\! 0$, then $[n]_{\mu}\to n$ and the $\mu$-extension
$\mathcal{D}^{(\mu)}_x$ reduces to ordinary $d/dx$.
The formula for the $\mu$-derivative acting on monomials $x^n$ is enough for us in this work,
while the general rule for the $\mu$-derivative action on a function $f(x)$ is
\begin{equation}\label{eq.11}
\mathcal{D}^{(\mu)}_xf(x)=\int^1_0dtf'_x(t^{\mu}x), \qquad
f'_x(t^{\mu}x)=\frac{d f(t^{\mu}x)}{d x}.
\end{equation}
Clearly, formula (\ref{eq.10}) stems from this general definition.
Note that the inverse $\bigl(\mathcal{D}^{(\mu)}_x\bigr)^{-1}$ of
the $\mu$-derivative $\mathcal{D}^{(\mu)}_x$ in (\ref{eq.10}) and
(\ref{eq.11}) is also known (we omit it).
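As a quick numerical cross-check (our illustration, not part of the original derivation), one can verify that the integral representation (\ref{eq.11}) reproduces the $\mu$-bracket rule (\ref{eq.10}) on monomials; the midpoint quadrature below is an implementation choice of ours:

```python
def mu_bracket(n, mu):
    """mu-bracket [n]_mu = n/(1 + mu*n); reduces to n as mu -> 0."""
    return n / (1.0 + mu * n)

def mu_derivative_monomial(n, x, mu, steps=100000):
    """D^(mu)_x x^n via the integral representation (eq. 11):
    int_0^1 d/dx (t^mu x)^n dt = n x^(n-1) int_0^1 t^(mu*n) dt,
    evaluated here with a simple midpoint rule."""
    h = 1.0 / steps
    integral = sum(((k + 0.5) * h) ** (mu * n) for k in range(steps)) * h
    return n * x ** (n - 1) * integral
```

For any $n$ and $\mu\ge 0$ the quadrature result agrees with $[n]_{\mu}x^{n-1}$, and for $\mu=0$ the ordinary derivative $n x^{n-1}$ is recovered.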
For $k$th power of $\mu$-derivative acting on $x^n$ we have
\begin{equation}\label{eq.12}
(\mathcal{D}^{(\mu)}_x)^kx^n=\frac{[n]_{\mu}!}{[n-k]_{\mu}!}x^{n-k},
\qquad [n]_{\mu}!\equiv\frac{n!}{(n;\mu)}
\end{equation}
where $(n;\mu)\equiv(1+\mu)(1+2\mu)\cdots(1+n\mu)$.
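The $\mu$-factorial in (\ref{eq.12}) can equivalently be computed as the product $[1]_{\mu}[2]_{\mu}\cdots[n]_{\mu}$; a small numerical sketch (ours) confirming the equivalence of the two forms:

```python
from math import factorial

def mu_bracket(n, mu):
    """[n]_mu = n/(1 + mu*n)."""
    return n / (1.0 + mu * n)

def mu_factorial_product(n, mu):
    """[n]_mu! as the product [1]_mu [2]_mu ... [n]_mu."""
    out = 1.0
    for k in range(1, n + 1):
        out *= mu_bracket(k, mu)
    return out

def mu_factorial_closed(n, mu):
    """Closed form n!/(n;mu), with (n;mu) = (1+mu)(1+2mu)...(1+n*mu)."""
    prod = 1.0
    for k in range(1, n + 1):
        prod *= 1.0 + k * mu
    return factorial(n) / prod
```

At $\mu=0$ both forms reduce to the ordinary factorial $n!$.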
There exist certain $(q;\mu)$- or $(p,q;\mu)$-deformed extensions
of the $\mu$-derivative:
instead of $(d/dx)f(t^{\mu}x)$ in (\ref{eq.11}), take
$\mathcal{D}^{q}_xf(t^{\mu}x)$ or $\mathcal{D}^{(p,q)}_xf(t^{\mu}x)$.
These two extensions correspond to the quasi-Fibonacci $(q;\mu)$-DO and
$(p,q;\mu)$-DO of~\cite{Kachurik}.
So, to develop the $\mu$-Bose gas thermodynamics we replace,
where necessary, the usual $d/dz$ with the $\mu$-derivative $\mathcal{D}^{(\mu)}_z$.
Through the $\mu$-derivative, the basic parameter $\mu$ enters the treatment:
the system gets $\mu$-deformed.
For $\mu \ll 1$, the usual and $\mu$-deformed
derivatives of a function have similar behavior, which
is easily seen by acting with the $\mu$-derivative and the usual one
on monomial, logarithmic, exponential functions, etc.
This property of the $\mu$-derivative justifies
its use in developing the thermodynamics of the $\mu$-Bose gas.
The $\mu$-bracket $[n]_{\mu}$ and the $\mu$-factorial
$[n]_{\mu}!$, see (\ref{eq.10}), (\ref{eq.12}), generate
$\mu$-deformed analogs~\cite{RGK} of familiar functions:
the $\mu$-exponential $\exp_{\mu}(x)$ and the $\mu$-logarithm $\ln_{\mu}(x)$
(built with the $\mu$-numbers $[n]_{\mu}$ and the $\mu$-factorial
$[n]_{\mu}!=[n]_{\mu}[n-1]_{\mu}\cdots[2]_{\mu}[1]_{\mu}$).
New special functions, e.g. $\mu$-polylogarithms,
also appear; see~\cite{RGK} and below.
The formula for $\mathcal{D}^{(\mu)}_x$ operating on a general product $f(x)\cdot g(x)$ is also known.
\section{Thermodynamics of $\mu$-Bose gas model}
The thermodynamics of the $\mu$-BGM is based on the $\mu$-calculus. We consider
the gas of non-relativistic particles for the regimes
\cite{Pathria} of both high and low temperatures\footnote{These
regimes are respectively given by the inequalities
$\frac{v}{\lambda^3}\gg 1$ and $\frac{v}{\lambda^3}\ll 1$, which
involve the specific volume $v=V/N$ and the thermal wavelength
$\lambda$; see (\ref{eq.22}) and below.}.
\underline{\it Total number of particles}.
The usual relation for the total number of Bose gas particles is given in (\ref{eq.4}).
For the $\mu$-BGM, the total number of particles $N\equiv N^{(\mu)}$ is defined as
$N^{(\mu)}=z\mathcal{D}^{(\mu)}_z\ln Z$, using the $\mu$-derivative
$\mathcal{D}^{(\mu)}$ from (\ref{eq.10}). Applying it, for $\mu\geq
0$, to the ($\log$ of the) partition function in (\ref{eq.3}) we get
\begin{equation}\label{eq.16}
N^{(\mu)}\!=\!
\sum_i\!\sum_{n=1}^{\infty}\frac{[n]_{\mu}}{n}e^{-\beta\varepsilon_in}z^n\, .
\end{equation}
Here we assume $0\leq|ze^{-\beta\varepsilon_i}|<1$ in (\ref{eq.16}).
For non-relativistic particles the energy $\varepsilon_i$
is\footnote{Similarly to refs.~\cite{AlginFibOsc,GR-12} and others,
we initially take the particle kinetic energy to be that of a non-relativistic {\it free particle}.
But the very particles in the model are not the usual bosons, because of the deformation
of the thermodynamics that uses the $\mu$-calculus. As a result, all thermodynamical quantities,
including the mean kinetic energy of a particle, become dependent on the
deformation parameter $\mu$.}
\begin{equation}\label{eq.17}
\varepsilon_i=\frac{\overrightarrow{p}_i\overrightarrow{p}_i}{2m}
=\frac{p_i^2}{2m}\, ,
\end{equation}
involving 3-momentum $\overrightarrow{p}_i$ of particle of mass $m$
in the $i$-th state.
At $z\rightarrow 1$ the summand in (\ref{eq.16}) diverges for the term with $p_i=0$, $i=0$.
Suppose that the $i=0$ ground state admits a macroscopically large occupation number.
For $z\neq 1$ we likewise separate the term with $p_i=0$ from the remaining sum:
\begin{equation}\label{eq.18}
N^{(\mu)}={\sum_i} ' \sum_{n=1}^{\infty}\frac{[n]_{\mu}}{n}(e^{-\beta\varepsilon_i})^nz^n +
\sum_{n=1}^{\infty}\frac{[n]_{\mu}}{n}z^n.
\end{equation}
The symbol ${\sum_i}'$ means that the $i\!=\!0$ term is dropped from the sum.
For a large volume $V$ and large $N$ the spectrum of single-particle states is almost continuous,
so we replace the sum $\sum_i\rightarrow \frac{V}{(2\pi \hbar)^3}\int d^3p$
(the ground state, $p=0$, does not contribute to the integral, which starts from zero).
Further calculation of $N^{(\mu)}$ proceeds (in $d=3$) similarly to
the case of $p,q$-Bose gas~\cite{GR-12} (see also ref.~\cite{RGK}
for details), and the final result reads:
\begin{equation}\label{eq.22}
N^{(\mu)}=\frac{V}{\lambda^3}g_{3/2}^{(\mu)}(z)+g_0^{(\mu)}(z),
\qquad g_0^{(\mu)}(z)=N_0^{(\mu)} \, .
\end{equation}
Here $\lambda\!=\!\sqrt{\frac{2\pi\hbar^2}{mkT}}$ is the thermal wavelength; $g_0^{(\mu)}(z)$
and $g_{3/2}^{(\mu)}(z)$ are $\mu$-analogs of polylogarithm $g_l(z)\!=\!\sum_{n=1}^{\infty}z^n/n^l$,
or $\mu$-polylogarithms, defined as
\begin{equation}\label{eq.23}
g_{l}^{(\mu)}(z)=\sum_{n=1}^{\infty}\frac{[n]_{\mu}}{n^{l+1}}z^n.
\end{equation}
For real $\mu>0$ the convergence properties are not spoiled (as for the usual $g_l(z)$,
one requires $|z|<1$). If $\mu\rightarrow 0$, we recover the polylogarithm $g_l(z)$.
For further needs, it is convenient to rewrite the expression
(\ref{eq.22}) in terms of volume per particle $v\equiv V/N^{(\mu)}$.
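A direct truncated-series evaluation of (\ref{eq.23}) (an illustrative sketch of ours, with the truncation level chosen arbitrarily) shows the stated $\mu\to 0$ reduction, e.g. $g_1^{(0)}(z)=-\ln(1-z)$:

```python
from math import log

def mu_polylog(l, z, mu, terms=2000):
    """Truncated series g_l^{(mu)}(z) = sum_{n>=1} [n]_mu / n^{l+1} * z^n,
    with [n]_mu = n/(1 + mu*n); valid for |z| < 1."""
    return sum((n / (1.0 + mu * n)) / n ** (l + 1) * z ** n
               for n in range(1, terms + 1))
```

Since $[n]_{\mu}<n$ for $\mu>0$, deformation always lowers the value of $g_l^{(\mu)}(z)$ relative to the ordinary polylogarithm at the same $l$ and $z$.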
\underline{\it Deformed grand partition function}.
In the $\mu$-BGM, we use relations between the thermodynamic functions
similar to those of the usual Bose gas thermodynamics, but in our case all the
thermodynamic functions, like the partition function etc., become
$\mu$-dependent.
The deformed partition function $\ln Z^{(\mu)}$ satisfies
\begin{equation}\label{eq.25}
N^{(\mu)}=z\frac{d}{dz}\ln Z^{(\mu)}
\qquad {\rm or } \qquad
\ln Z^{(\mu)}=\Bigl(z\frac{d}{dz}\Bigr)^{-1}N^{(\mu)}.
\end{equation}
To apply $\bigl(z\frac{d}{dz}\bigr)^{-1}$, we use the
property $f\Bigl(z\frac{d}{dz}\Bigr)z^k=f(k)z^k$ for a function
$f(X)$ which admits power series expansion. From (\ref{eq.25}),
(\ref{eq.22}), (\ref{eq.23}) we infer
\begin{equation}\label{eq.28}
\ln Z^{(\mu)}=\frac{V}{\lambda^3}
\sum_{n=1}^{\infty}\frac{[n]_{\mu}}{n^{5/2}}(n)^{-1}z^n
+\sum_{n=1}^{\infty}\frac{[n]_{\mu}}{n}(n)^{-1}z^n .
\end{equation}
In a more compact form,
\begin{equation}\label{eq.29}
Z^{(\mu)}(z,T,V)=\exp\biggl(\frac{V}{\lambda^3}g^{(\mu)}_{5/2}(z)+g^{(\mu)}_1(z)\!\biggr).
\end{equation}
Formulas (\ref{eq.28})--(\ref{eq.29}) provide the $\mu$-partition function
and play a basic role: from them
we can derive the other thermodynamical functions and relations.
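Relation (\ref{eq.25}) linking (\ref{eq.29}) to (\ref{eq.22}) can be checked numerically; the sketch below (ours, with an arbitrary illustrative value of $V/\lambda^3$) compares a finite-difference $z\,d/dz$ of $\ln Z^{(\mu)}$ with the series for $N^{(\mu)}$:

```python
def mu_series(power, z, mu, terms=2000):
    """sum_{n>=1} [n]_mu / n**power * z**n, with [n]_mu = n/(1 + mu*n);
    power = l + 1 gives the mu-polylogarithm g_l^{(mu)}(z)."""
    return sum((n / (1.0 + mu * n)) / n ** power * z ** n
               for n in range(1, terms + 1))

def ln_Z(z, mu, v_ratio):
    """ln Z^{(mu)} = (V/lambda^3) g_{5/2}^{(mu)}(z) + g_1^{(mu)}(z), eq. (29)."""
    return v_ratio * mu_series(3.5, z, mu) + mu_series(2.0, z, mu)

def N_mu(z, mu, v_ratio):
    """N^{(mu)} = (V/lambda^3) g_{3/2}^{(mu)}(z) + g_0^{(mu)}(z), eq. (22),
    away from the condensation point."""
    return v_ratio * mu_series(2.5, z, mu) + mu_series(1.0, z, mu)
```

The agreement holds term by term, since $z\,d/dz$ lowers each index $l$ by one in (\ref{eq.23}).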
\section{Geometric approach to $\mu$-Bose gas model}
We study the $\mu$-thermodynamics using thermodynamic geometry
in the space of the two parameters $\beta, \gamma$, where $\gamma=-\beta\tilde{\mu}$ and
$\tilde{\mu}$ is the chemical potential.
The (scalar) curvature in the space of thermodynamic parameters
provides efficient and elegant tools for exploring the
thermodynamical properties of the system under
study~\cite{Ruppeiner,Janyszek,Ubriaco}. This geometric construction
is formed, see below, by the derivatives of a thermodynamic
potential that determines a surface in the space of thermodynamic
parameters. The curvature reveals the extremal points of this
surface, identified~\cite{Quevedo} with phase transitions: its
singularity gives a sufficient condition for the existence of
phase transition point(s). Moreover, the curvature is nothing but a
thermodynamic measure of the interaction within the system, and its sign
indicates the attractive or repulsive character of this interaction.
The components of (symmetric) metric in the Fisher-Rao
representation are
\begin{equation}
G_{\beta\beta}=\frac{\partial^2\ln Z^{(\mu)}}{\partial\beta^2},\quad
G_{\beta\gamma}=\frac{\partial^2\ln Z^{(\mu)}}{\partial\gamma\,\partial\beta},\quad
G_{\gamma\gamma}=\frac{\partial^2\ln Z^{(\mu)}}{\partial\gamma^2}.
\label{met}
\end{equation}
Using the thermodynamic relations, we also have
$$
G_{\beta\beta}=-\left(\frac{\partial U}{\partial \beta}\right)_{\gamma},\quad
G_{\beta\gamma}=-\left(\frac{\partial N}{\partial\beta}\right)_{\gamma},\quad
G_{\gamma\gamma}=-\left(\frac{\partial N}{\partial \gamma}\right)_{\beta}.
$$
Remembering (\ref{eq.29}), we arrive at the metric (given by
$\mu$-polylogarithms):
\begin{eqnarray}
G_{\beta\beta}&=&\frac{15}{4}\frac{V}{\lambda^3\beta^2}\,g^{(\mu)}_{\frac{5}{2}}(z),
\hspace{11mm}
G_{\beta\gamma}=\frac{3}{2}\frac{V}{\lambda^3\beta}\,g^{(\mu)}_{\frac{3}{2}}(z),\\
G_{\gamma\gamma}&=&\frac{V}{\lambda^3}\,g^{(\mu)}_{\frac{1}{2}}(z)+g^{(\mu)}_{-1}(z).
\end{eqnarray}
The determinant of the metric $g\equiv\det|G_{ij}|$ results as
(with the identification $\beta\leftrightarrow 1,\, \gamma\leftrightarrow 2$)
\begin{equation}
g\!=\!\frac{3V}{4\lambda^3\beta^2}\left(5g^{(\mu)}_{\frac{5}{2}}(z)
g^{(\mu)}_{-1}(z)+\frac{V}{\lambda^3}\left(5g^{(\mu)}_{\frac{5}{2}}(z)
g^{(\mu)}_{\frac{1}{2}}(z)\!-\!3g^{(\mu)}_{\frac{3}{2}}(z)
g^{(\mu)}_{\frac{3}{2}}(z)\right)\right)
\end{equation}
and the inverse metric is
$G^{11}=G_{22}/g$, $G^{12}=-G_{12}/g$, $G^{22}=G_{11}/g$.
As the metric components are given
by derivatives of the partition function, the Christoffel symbols
and the Riemann tensor are found as
\begin{eqnarray}
\Gamma_{\lambda\sigma\nu}&=&\frac{1}{2}\bigl(\ln Z^{(\mu)}\bigr)_{, \lambda\sigma\nu},\\
R_{\lambda\sigma\nu\rho}&\equiv&
G^{\kappa\tau}\left(\Gamma_{\kappa\lambda\rho}\Gamma_{\tau\sigma\nu}
- \Gamma_{\kappa\lambda\nu}\Gamma_{\tau\sigma\rho} \right).
\end{eqnarray}
Calculation of the Christoffel symbols yields
\begin{eqnarray}
\Gamma_{\beta\beta\beta}&=& -\frac{105}{16}\frac{V}{\lambda^3\beta^3}\,g^{(\mu)}_{\frac{5}{2}}(z),\\
\Gamma_{\beta\beta\gamma}&=&\Gamma_{\beta\gamma\beta}=\Gamma_{\gamma\beta\beta}
=-\frac{15}{8}\frac{V}{\lambda^3\beta^2}\,g^{(\mu)}_{\frac{3}{2}}(z),\\
\Gamma_{\beta\gamma\gamma}&=&\Gamma_{\gamma\beta\gamma}=\Gamma_{\gamma\gamma\beta}
=-\frac{3}{4}\frac{V}{\lambda^3\beta}\,g^{(\mu)}_{\frac{1}{2}}(z),\\
\Gamma_{\gamma\gamma\gamma}&=&-\frac{1}{2}\left(\frac{V}{\lambda^3}\,g^{(\mu)}_{-\frac{1}{2}}(z) +
g^{(\mu)}_{-2}(z) \right).
\end{eqnarray}
Since the scalar curvature in a $2$-dimensional space is determined by
a single component of the Riemann tensor, $R=2R_{\beta\gamma\beta\gamma}/g$, we obtain our main
result:
\begin{equation}
R=\frac52\frac{ 5g^{(\mu)}_{\frac{3}{2}} g^{(\mu)}_{\frac{3}{2}}
g^{(\mu)}_{-1} -7g^{(\mu)}_{\frac{5}{2}} g^{(\mu)}_{\frac{1}{2}}
g^{(\mu)}_{-1} + 2g^{(\mu)}_{\frac{3}{2}} g^{(\mu)}_{\frac{5}{2}}
g^{(\mu)}_{-2} +\frac{V}{\lambda^3} \, {\cal W}}
{\left(5g^{(\mu)}_{\frac{5}{2}} g^{(\mu)}_{-1} +
\frac{V}{\lambda^3}\left(5g^{(\mu)}_{\frac{5}{2}}
g^{(\mu)}_{\frac{1}{2}} - 3g^{(\mu)}_{\frac{3}{2}}
g^{(\mu)}_{\frac{3}{2}}\right)\right)^2}\, ,
\end{equation}
$$
{\cal W}\equiv2g^{(\mu)}_{\frac{3}{2}} g^{(\mu)}_{\frac{5}{2}}
g^{(\mu)}_{-\frac{1}{2}} -4g^{(\mu)}_{\frac{5}{2}}
g^{(\mu)}_{\frac{1}{2}} g^{(\mu)}_{\frac{1}{2}}
+2g^{(\mu)}_{\frac{3}{2}} g^{(\mu)}_{\frac{3}{2}}
g^{(\mu)}_{\frac{1}{2}}.
$$
The $\mu$-polylogarithm $g^{(\mu)}_l(z)$ involved in $R$ is singular
at $z\to 1$ for $l \le 1$ in the case $\mu = 0$, and for $l \le 0$
when $\mu \ne 0$. The curvature $R(z)$ in an isothermal process has
characteristic properties as a function of the
fugacity\footnote{The positive sign of the curvature corresponds to
attraction among the particles, and the magnitude of the curvature is
a growing function of $\mu$. That is natural, since the deformation
parameter $\mu$ provides an effective account of the quantum-statistical
inter-particle interactions.}.
In the case $v/\lambda^3 \ll 1$, $R(z)$ has no singularities, as seen in
Fig.~1 (left panel).
If $v/\lambda^3$ is sufficiently large (see Fig.~1, right panel) to
neglect the terms which do not contain this factor, the curvature
$R(z)$ is singular at $z\to 1$. We conclude that, in the latter
situation, the system undergoes a phase transition, and hence
Bose-like condensation takes place. That is, the $\mu$-Bose gas
model {\it satisfies the basic necessary property}.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\linewidth]{mu-l.eps}\qquad
\includegraphics[width=0.45\linewidth]{mu-gr.eps}
\end{center}
\vspace*{-3mm} \caption{Scalar curvature $R(z)$ in isothermal
process $\beta={\rm const}$ for various values of deformation
parameter $\mu = 0, \frac{1}{2}, 1$. Left panel: $v/\lambda^3 \ll 1
$. Right panel: $v/\lambda^3 \gg 1$.}
\end{figure}
\section{Critical temperature of condensation}
For low temperatures and high densities we can obtain~\cite{RGK}, as
for the $p,q$-Bose gas \cite{GR-12}, the critical temperature
$T^{(\mu)}_c$ of condensation in the considered $\mu$-BGM.
At vanishing $N_0^{(\mu)}$ in (\ref{eq.22}),
the critical temperature $T_c^{(\mu)}$ of the $\mu$-Bose gas is
determined by the equation $\lambda^3/{v}=g^{(\mu)}_{3/2}(1)$, which
gives \cite{RGK}
\begin{equation}\label{eq.37}
T_c^{(\mu)}=\frac{2\pi\hbar^2/mk}{\bigl(vg^{(\mu)}_{3/2}(1)\bigr)^{2/3}}
\quad {\rm and} \quad
\frac{T_c^{(\mu)}}{T_c}=\Biggl(\frac{2.61}{g^{(\mu)}_{3/2}(1)}\Biggr)^{2/3}\, ,
\end{equation}
the latter being the ratio of the $\mu$-critical
$T_c^{(\mu)}$ to the critical $T_c$ of the usual Bose gas~\cite{Pathria}.
As is seen, the ratio $T_c^{(\mu)}/{T_c}$
has an important feature: the stronger the deformation (measured by
$\mu$), the higher is $T_c^{(\mu)}$. For instance, for $\mu=0.06$ we have
$T_c^{(\mu)}\simeq 1.22\, T_c$.
If $\mu\!\rightarrow\!0$ (the no-deformation limit), the ratio is $T_c^{(\mu)}/{T_c}=1$,
i.e. the $\mu$-critical temperature tends to the usual one,
$T_c^{(\mu)}\rightarrow T_c$ (a kind of consistency check).
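The quoted number can be reproduced numerically; the following sketch (ours; the series is truncated at a large but finite order, and $g_{3/2}(1)\approx 2.612$ is taken as in the text) evaluates the ratio in (\ref{eq.37}):

```python
def g32_mu_at_1(mu, terms=200000):
    """g_{3/2}^{(mu)}(1) = sum_{n>=1} 1/(n^{3/2} (1 + mu*n));
    for mu > 0 the terms fall off like n^{-5/2}, so truncation is mild."""
    return sum(1.0 / (n ** 1.5 * (1.0 + mu * n)) for n in range(1, terms + 1))

def tc_ratio(mu):
    """T_c^{(mu)} / T_c = (2.612 / g_{3/2}^{(mu)}(1))^{2/3}, eq. (37)."""
    return (2.612 / g32_mu_at_1(mu)) ** (2.0 / 3.0)
```

Since $[n]_{\mu}<n$, the denominator $g^{(\mu)}_{3/2}(1)$ decreases with $\mu$, so the ratio grows monotonically above unity.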
The existence of a condensate of $\mu$-bosons is {\it the crucial
property} for the use of the $\mu$-BGM in the
modeling of dark matter.
Let $T$ lie in the interval $0\!<\!T\!<\!T^{(0)}_c\!\leq
T^{(\mu)}_c$. In the $\mu$-deformed case we have
$\frac{U^{(\mu)}}{T}=\frac25 c^{(\mu)}_v= \frac35 S^{(\mu)}$, in
analogy with the pure Bose case, i.e. $\frac{U}{T}=\frac25 c_v=
\frac35 S$. From these we infer two useful relations:
$\frac{U^{(\mu)}}{c^{(\mu)}_v}=\frac{U}{c_v}$
and $\frac{U^{(\mu)}}{S^{(\mu)}}=\frac{U}{S}$.
\section{Other issues important for modeling dark matter}
Above, exploiting thermodynamic geometry, we
verified the main property of the $\mu$-BGM needed for its ability to model
dark matter: the appearance of Bose-like condensation is
confirmed.
As mentioned in \cite{Harko}, the dark matter surrounding dwarf
galaxies must be a ``{\it strongly coupled, dilute system of
particles}''. In the case of the $\mu$-Bose gas, we emphasize that the
interaction between $\mu$-bosons (of quantum-statistical origin) is
also attractive and, due to the deformation, can even be stronger than
that for pure bosons. To see this, compare the two second virial
coefficients: the $\mu$-dependent $V_2^{(\mu)}$~\cite{RGK} and the
standard $V_2^{(Bose)}\!=\!2^{-5/2}$ (with the minus sign dropped):
$$
V_2^{(\mu)}-V_2^{(Bose)}= 2^{-7/2} \biggl(
\frac{[2]_{\mu}}{[1]^2_{\mu}} - 2 \biggr)
=2^{-5/2}\frac{\mu^2}{1+2\mu} > 0 .
$$
This enhanced attraction (of quantum origin) implies that the
$\mu$-bosons are ``more bosonic'' than usual bosons. This property is
good for providing a {\it strongly coupled} system of
(quasi)particles.
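The algebra behind this difference (our verification, using $[1]_{\mu}=1/(1+\mu)$ and $[2]_{\mu}=2/(1+2\mu)$) can be checked numerically:

```python
def mu_bracket(n, mu):
    """[n]_mu = n/(1 + mu*n)."""
    return n / (1.0 + mu * n)

def virial_shift(mu):
    """2^{-7/2} ([2]_mu/[1]_mu^2 - 2): excess of V_2^{(mu)} over V_2^{(Bose)}."""
    return 2.0 ** -3.5 * (mu_bracket(2, mu) / mu_bracket(1, mu) ** 2 - 2.0)

def virial_shift_closed(mu):
    """Equivalent closed form 2^{-5/2} mu^2 / (1 + 2*mu)."""
    return 2.0 ** -2.5 * mu ** 2 / (1.0 + 2.0 * mu)
```

Both forms coincide and are strictly positive for $\mu>0$, vanishing only in the undeformed case.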
We note that at large $\mu$ one has
$g^{(\mu)}_l(z)\to\mu^{-1}g_{l+1}(z)$,
where $g_l(z)$ is the usual polylogarithm. Then the internal energy per
particle would not depend on $\mu$, while the total one does, due to the
scale factor.
However, unlimited growth of the $\mu$-parameter (the strength of the
attraction) could lead to a collapse of the studied quantum system.
To prevent that, we can find a bound on the values of $\mu$, say,
by requiring that the pressure not become negative.
We take the (virial expansion of the) equation of state~\cite{RGK}
to second order, i.e.
\begin{equation}\label{EoS}
\frac{P v}{k T}=1-\frac{\ \ \ [2]_{\mu}}{2^{7/2}[1]^2_{\mu}}\frac{\lambda^3}{v}\, .
\end{equation}
Imposing $P=0$, or
$\frac{2^{-7/2}[2]_{\mu}}{[1]^2_{\mu}}\frac{\lambda^3}{v} = 1$, we
find the critical strength $\bar{\mu}$ of the deformation:
\begin{equation}
\bar{\mu} = \kappa -1 + \sqrt{(\kappa -1)\kappa},
\hspace{10mm}
\kappa\equiv 2^{5/2} \frac{v}{\lambda^3}\, .
\end{equation}
With $\mu \leq \bar{\mu}$ we avoid the collapse. Obviously,
$\kappa\!=\!1$ means $\bar{\mu}=0$ and so $\mu=0$, that is, the pure Bose
case. On the other hand, the bound taken as, say, $\bar{\mu}=1$ with
$0 < \mu \le 1$ corresponds to the value $\kappa=\frac{4}{3}$.
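A quick numerical consistency check (our sketch) of $\bar{\mu}$ against the $P=0$ condition in (\ref{EoS}), written in terms of $\kappa$:

```python
from math import sqrt

def mu_bar(kappa):
    """Critical deformation strength kappa - 1 + sqrt((kappa-1)*kappa), kappa >= 1."""
    return kappa - 1.0 + sqrt((kappa - 1.0) * kappa)

def pv_over_kT(mu, kappa):
    """Pv/kT from the second-order equation of state (EoS),
    with lambda^3/v expressed as 2^{5/2}/kappa."""
    b1 = 1.0 / (1.0 + mu)        # [1]_mu
    b2 = 2.0 / (1.0 + 2.0 * mu)  # [2]_mu
    return 1.0 - b2 / (2.0 ** 3.5 * b1 ** 2) * (2.0 ** 2.5 / kappa)
```

The pressure indeed vanishes exactly at $\mu=\bar{\mu}(\kappa)$, and $\kappa=4/3$ reproduces $\bar{\mu}=1$.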
The equation of state (\ref{EoS}), in its application to dark matter,
involves temperatures in the range $0<T\leq T^{(\mu)}_c$, with a
non-vanishing pressure $P$ of the $\mu$-BEC. At the same time, the dark
matter pressure within the ordinary BEC concept is also supposed to
be non-zero, $P=2\pi a\hbar^2\rho^2/m^3$, even at $T=0$, because of
the assumed scattering, with length $a$, of bosons with mass
density $\rho$. Since the effect of the $\mu$-deformed statistics and
the scattering represent two non-identical processes, both can
be taken into account together within an extended, unified model of
dark matter in a future study.
There exists another, ``characteristic'' value ${\mu}_0$ of ${\mu}$
for which the deformed entropy (see eq.~(41) and Fig.~6 in
\cite{RGK}) satisfies $\frac{S^{(\mu_0)}\lambda^3}{V k_B}=1$, or
simply
$\frac{S^{(\mu_0)}\lambda^3}{V}=k_B$.
With this ${\mu}_0$, we bound our interval as
$0<\mu\leq{\mu}_0$.
At this deformation strength, ${\mu}_0\simeq 1.895$, we obtain the
relation
\begin{equation}
g_{3/2}(1) = 3.3535 \ g_{3/2}^{(\mu=\mu_0)}(1)
\end{equation}
for the polylogarithm and the $\mu$-polylogarithm. Then we find that
the critical volume-per-particle (divided by the cube of the thermal
wavelength) in the $\mu$-BGM is related to the analogous quantity of the
usual Bose gas by the formula
$$ \Bigl(\frac{v^c}{\lambda^3}\Bigr)_{\mu = \mu_0}=
3.3535\, \Bigl(\frac{v^c}{\lambda^3}\Bigr)_{Bose} \, . $$
Hence, we can estimate this ratio in our model using the
respective estimates from the Bose-condensate case~\cite{Harko}.
In Ref.~\cite{Harko}, the BEC concept, using the Gross-Pitaevskii
equation in the Thomas-Fermi approximation, is applied to find the
spatial distribution, or density profile, of the dark matter. It gives
the radius and total mass of the dark matter halo:
\begin{equation}\label{RM}
R=\pi\sqrt{\frac{\hbar^2a}{Gm^3}},\qquad
M=\frac{4}{\pi}R^3\rho^{(c)},\label{MR}
\end{equation}
which depend on the (repulsive) $s$-wave scattering length $a$, the
gravitational constant $G$, the mass $m$ of the constituent particle and the
central mass density $\rho^{(c)}$.
By varying $R$ and $\rho^{(c)}$, their model is able to reproduce the
characteristics of the rotation curves in the outer region of the
galaxies and the total mass $M$. However, it neglects a
supplementary interaction in the inner region and leads to some
overestimation of $M$ compared with the observational data, e.g.
those discussed in~\cite{Harko}.
To resolve this, we attempt to account for an additional attraction,
as compared to the BEC model, gained due to the $\mu$-deformed statistics
and influencing the value of $\rho^{(c)}$. Omitting here the dynamical
aspects\footnote{The issues concerning the respective modification of
the Gross-Pitaevskii equation and its implications are under study,
and we hope to report on them in the near future.} and determining
$\rho^{(c)}$ by the ratio $m/v$, we include an effective interaction
by defining the (critical) volume-per-particle in the case
of $\mu$-deformed thermodynamics: $v=\lambda^3/g^{(\mu)}_{3/2}(1)$.
It means that
$\rho^{(c)}_{(\mu)}=\bigl(g^{(\mu)}_{3/2}(1)/g^{(0)}_{3/2}(1)\bigr)\rho^{(c)}$
and therefore $M^{(\mu)}=\bigl(g^{(\mu)}_{3/2}(1)/g^{(0)}_{3/2}(1)\bigr)M$
will play the role of the new, corrected characteristics. Since
$g^{(\mu)}_{3/2}(1)<g^{(0)}_{3/2}(1)$ at $\mu>0$, these predictions
can give, instead of $M^{BEC}\equiv M^{(0)}$, a better agreement with
the observational data.
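A numerical sketch (ours; the series is truncated at a finite order, and $g_{3/2}(1)\approx 2.612$ is used as elsewhere in the text) of the resulting correction factor $M^{(\mu)}/M$:

```python
def g32_mu_at_1(mu, terms=200000):
    """g_{3/2}^{(mu)}(1) = sum_{n>=1} 1/(n^{3/2} (1 + mu*n))."""
    return sum(1.0 / (n ** 1.5 * (1.0 + mu * n)) for n in range(1, terms + 1))

def mass_correction(mu):
    """M^{(mu)}/M = g_{3/2}^{(mu)}(1) / g_{3/2}(1), with g_{3/2}(1) ~ 2.612."""
    return g32_mu_at_1(mu) / 2.612
```

For the illustrative value $\mu=0.06$ used earlier, `mass_correction` gives about 0.74, i.e. a reduction of the predicted halo mass by roughly a quarter; the factor decreases monotonically with growing $\mu$.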
\section{Concluding remarks}
The thermodynamics of the $\mu$-Bose gas -- the total mean number
of particles $N^{(\mu)}$, the ($\log$ of the) $\mu$-deformed partition function, etc. --
involves the $\mu$-generalization $g^{(\mu)}_{k}(z)$ of the polylogarithms
$g_{k}(z)$. This fact influences the other results of the paper.
The metric tensor, the Christoffel symbols and hence the Riemann curvature
are expressed through $\mu$-polylogarithms. The positive sign of the
curvature and its divergence as $z\!\to\!1$ witness the attraction
between the $\mu$-particles and confirm the Bose-like condensation.
The formula for the $\mu$-critical temperature $T_c^{(\mu)}$ is
compared with the usual Bose case, with the infinite statistics system, and
with $T_c^{(p,q)}$ of the $p,q$-Bose gas model~\cite{GR-12} (where
$T_c^{(p,q)}$ can be both higher and lower, depending on the values
of the parameters $p$ and $q$).
The ratio $T_c^{(\mu)}/{T_c}$ as a function of the $\mu$-parameter shows that
the critical temperature $T_c^{(\mu)}$
exceeds\footnote{Concerning this property, it would be very
interesting to carry out a detailed analysis for diverse species
of cold atoms (Rb, K, Ca, etc.)
for which the existence of BEC
is already confirmed. However,
this goes beyond the scope of the present paper.}
the critical $T_c$ of the usual Bose gas, in contrast with the system of infinite
statistics, which shows a lower critical temperature~\cite{Infinite} than
the usual Bose $T_c$. We consider this property as one of the
virtues from the viewpoint of applying the $\mu$-BGM to modeling dark
matter: in our case the stability of the condensate extends to higher
temperatures than the usual $T_c$. Other facts important for the
modeling and valid for the infinite statistics system~\cite{Infinite},
e.g. the smallness of the particle mass, can be demonstrated for $\mu$-bosons as well.
Further remarkable properties of the $\mu$-Bose gas model (say, the
decreasing behavior of the entropy per volume versus the $\mu$-parameter,
which means decreasing chaoticity with growing deformation strength),
when used for modeling dark matter, can also lead to interesting
implications.
In the context of dark matter, the inner structure of its
constituents at a given deformation plays a notable role in their
response to external perturbations. The parameter $\mu$ is determined by
the specific conditions of the dark matter in each galaxy or
local region of the Universe.
Moreover, since the deformed critical temperature obeys the condition
$T^{(\mu)}_c\geq T^{(0)}_c>T$, the present model of dark matter may
also be valid for the interstellar environment, where the temperature
$T$, the concentration of dark matter particles $\varrho$
($T_c\sim\varrho^{2/3}$) and the parameter $\mu$ can vary.
Thus, the $\mu$-Bose gas model has its own virtues and, like the
infinite-statistics system, can be used to effectively model basic
properties of dark matter.
In that domain, the present $\mu$-Bose gas model may turn out to be just
as successful as the $(\tilde{\mu},q)$-deformed analog of the Bose gas model
has shown itself in the effective description, see Fig.~5 in~\cite{GM_NP},
of the observed (in the STAR/RHIC experiments) non-Bose-like
behavior of the intercepts of two-pion correlations. Next, as
in~\cite{GM_NP}, the {\it compositeness} of the (quasi-)particles that
constitute dark matter may be of importance and can be accounted for
by extending the $\mu$-BGM using the results
of~\cite{GKM2011,GM2012}. Clearly, further study is needed to find
new arguments in favor of the proposed model of dark matter.
\section*{Acknowledgements}
This work was partly supported by The National Academy of Sciences
of Ukraine (project No. 0117U000237), and partly by the Grant
(M.V.Kh.) for Young Scientists of National Academy of Sciences of
Ukraine (No.~0117U003534). We would also like to thank the anonymous
referee(s) for the useful and constructive remarks.
\section{Introduction}
The continuous technological advancement in the fields of genetics, genomics and bioinformatics has led to large data sets on gene clusters, gene-gene interactions and the association of genes with diseases. One of the challenges is to use this wealth of information by combining the data from different sources and experiments to get a better (intuitive) understanding of the biological processes underpinning disease. GeneVis is a web-based visualization tool to explore high-level gene clusters (e.g.\ Kyoto Encyclopedia of Genes and Genomes\footnote{http://www.kegg.jp/}, KEGG, pathway functions), gene-to-gene interactions (e.g.\ the Human Integrated Protein-Protein Interaction Reference\footnote{http://cbdm-01.zdv.uni-mainz.de/\~{}mschaefer/hippie/}, HIPPIE) and gene-disease associations (e.g.\ Genome-Wide Association Studies\footnote{http://www.gwascentral.org/}, GWAS). Data from these sources are generated in a domain-specific way (i.e.\ in their own formats, using different methods), which makes it difficult to directly relate data from the different sources to each other. GeneVis was designed to overcome this problem by generating insightful visualizations that create a higher-level understanding of multi-domain research results.
GeneVis uses force-directed graphs as a basis to visualize relations between gene clusters, between clusters and individual genes, and between individual genes (figure~\ref{fig:02}). Force-directed graphs are a general way to visualize relational data~\citep{eades,tollis,thomas}, although their effectiveness can suffer from visual clutter when the number of data items gets large. To reduce this visual clutter, GeneVis introduces a higher-level view in the form of a cluster graph, called the Cluster view, where sets of genes are used as graph nodes instead of individual genes (left panel, figure~\ref{fig:02}). This reduces the number of nodes in the graph and hence the number of visual items to inspect and comprehend. This higher-level cluster graph gives the user a concise but insightful view of the whole gene dataset, using node deformations to display the average gene association to the cluster and the cluster size, and color maps to display the average association with a disease (left panel, figure~\ref{fig:02}). GeneVis also gives the user the ability to "zoom" into a cluster of interest and reveal the underlying gene subset and gene interactions. This level is called the Gene View (right panel, figure~\ref{fig:02}). In both Cluster and Gene View, gene sets can be explored by zooming, selecting and highlighting genes or clusters. Finally, a visual scheme (or color map) over both views, called the "Disease Mode", allows quick visual inspection of the association of genes or gene clusters with a specific disease or trait, using data from gene-disease association datasets (GWAS data by default).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./clusterNodeUitleg.png}
\caption{Cluster node representation. This shows that the minor radius (radius inner circle) is defined by the number of genes and the major radius is defined by the minor radius multiplied by the average gene association score.}\label{fig:01}
\end{figure}
\section{Method}
\label{secMethod}
\subsection{Datasets}
\label{secData}
GeneVis allows a simultaneous visualization of three complementary types of datasets: a gene cluster dataset, a gene-gene interaction dataset and a gene-disease association dataset. All datasets must be provided in either tab- or comma-separated tabular file formats. Example file formats are given in the Appendix.
\subsubsection{Gene cluster dataset}
To visualize gene clusters with GeneVis, the gene cluster dataset should contain a list of gene names with the associated gene clusters. The cluster data can be based on either a hard (0 or 1) or a soft probabilistic clustering (between 0 and 1), both of which are handled appropriately by GeneVis. Each cluster must contain a subset of genes, where each gene has an association value ${a_c}$, where $c$ is the cluster to which the gene is associated. This association value ${a_c}$ can either be a given $p$-value from a dataset (for a soft-clustered dataset) or an automatically computed fraction ${a_c=\frac{1}{N^c_g}}$ (for a hard-clustered dataset), determined by the number of clusters ${N^c_g}$ a specific gene is associated with. Each row within the dataset, except the first header row, represents a gene. The columns represent the gene Entrez ID (geneEntrezId), the gene name (geneName) and all the gene clusters. All the columns must be named accordingly in a header row. The association values between genes and clusters are stored in the cells of the tabular structure (see table~\ref{geneclusterformat}).
\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|l|c|c|c|c|}
\hline
\textbf{geneEntrezId} & \textbf{geneName} & \textbf{GLYCOLYSIS ...} & \textbf{CITRATE\_CYCLE ...} & \textbf{PENTOSE ...} & \textbf{...} \\
\hline
873 & CBR1 & 0.2 & 0.4 & 0.9 & ... \\
\hline
2026 & ENO2 & 0.6 & 0.6 & 0.2 & ... \\
\hline
2665 & GDI2 & 0.1 & 0.2 & 0.1 & ... \\
\hline
... & ... & ... & ... & ... & ... \\
\hline
\end{tabular}%
}
\caption{Gene cluster dataset format example}
\label{geneclusterformat}
\end{table}
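For a hard-clustered dataset the association value ${a_c=\frac{1}{N^c_g}}$ can be computed directly from the tabular format above. The following Python sketch (the function name and the inline sample are ours and purely illustrative; GeneVis itself is written in JavaScript) parses a small tab-separated fragment:

```python
import csv
import io

def hard_cluster_associations(tsv_text):
    """For a hard (0/1) clustering, compute a_c = 1 / N_g^c per gene,
    where N_g^c is the number of clusters the gene belongs to."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    clusters = [c for c in reader.fieldnames
                if c not in ("geneEntrezId", "geneName")]
    assoc = {}
    for row in reader:
        member = [c for c in clusters if float(row[c]) == 1.0]
        a_c = 1.0 / len(member) if member else 0.0
        assoc[row["geneName"]] = {c: a_c for c in member}
    return assoc

data = ("geneEntrezId\tgeneName\tGLYCOLYSIS\tCITRATE_CYCLE\n"
        "873\tCBR1\t1\t1\n"
        "2026\tENO2\t0\t1\n")
print(hard_cluster_associations(data))
# CBR1 sits in 2 clusters -> a_c = 0.5 for each; ENO2 in 1 -> a_c = 1.0
```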
\subsubsection{Gene-gene interaction dataset}
GeneVis can visualize any form of gene-gene interaction that can be scored as an interaction strength between 0 and 1. To visualize gene-gene interactions, GeneVis requires a tabular dataset of genes containing 3 columns: the source gene Entrez ID (SourceGeneId), the target gene Entrez ID (TargetGeneId) and the interaction score (score) (see table~\ref{genegeneformat}). All the columns must be named accordingly in a header row. Each row entry contains the interaction strength (score, varying between 0 and 1) between two individual genes (source and target), which affects the representation in Gene View, see section~\ref{secGeneView}. Different types of gene-gene interactions can be visualized in this way, for instance protein-protein interactions discovered with co-immunoprecipitation assays or yeast-2-hybrid~\citep{stelzl,likw}.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{SourceGeneId} & \textbf{TargetGeneId} & \textbf{score} \\
\hline
216 & 216 & 0.75 \\
\hline
3679 & 1134 & 0.73 \\
\hline
55607 & 71 & 0.65 \\
\hline
... & ... & ... \\
\hline
\end{tabular}%
\caption{Gene-gene interaction dataset format example}
\label{genegeneformat}
\end{table}
\subsubsection{Gene-disease association dataset}
The gene-disease association dataset is a tabular dataset containing diseases with their associated genes and scores. Each row in this dataset represents a study of a certain disease with its associated gene(s) and that study's $p$-value (see table~\ref{genediseaseformat}). This dataset must contain at least the following columns: gene names (Genes), disease/trait (Disease/Trait) and the $p$-value of the association (p-Value). This dataset is visually represented in the Disease Mode, see section~\ref{secDiseaseMode}.
\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|l|c|c|}
\hline
\textbf{Disease/Trait} & \textbf{Genes} & \textbf{p-Value} & \textbf{...} \\
\hline
depressive disorder & CBX4 & 0.0000002 & ... \\
\hline
depressive disorder & PDZD2 & 0.0000003 & ... \\
\hline
depressive disorder & CTC-497E21.5 & 0.0000007 & ... \\
\hline
... & ... & ... & ... \\
\hline
\end{tabular}%
}
\caption{Gene-disease association dataset format example}
\label{genediseaseformat}
\end{table}
\subsection{Cluster View}
\label{secClusterView}
GeneVis offers two levels of visualization. In Cluster View (left panel, figure~\ref{fig:02}) all gene clusters present in the gene cluster dataset, together with their overlap, are shown. The layout of the cluster graph is determined by a force-directed graph layout algorithm~\citep{tollis}. Each gene cluster is represented by an ellipse-shaped node: the minor (inner circle) radius represents the number of genes present in the cluster, and the major radius is scaled inversely with the average gene association score within the cluster, multiplied by the minor radius (figure~\ref{fig:01}). Thus nearly spherical clusters have a higher average gene association score than more elliptical ones, and proportionally smaller nodes contain fewer associated genes than proportionally bigger nodes. Each node is assigned a random color; while the Disease Mode is active, nodes are instead colored according to their disease association, see section~\ref{secDiseaseMode} and figures~\ref{fig:02} and~\ref{fig:03}.
An edge between two cluster nodes describes the gene overlap between the clusters. The size and the color intensity of the edge both scale with the amount of overlap between the clusters, meaning the wider and brighter the edge, the more overlap there is between the connected clusters, in terms of numbers of genes.
Clusters can be inspected by highlighting them, which reveals the underlying genes associated with the cluster in Gene View.
\subsection{Gene View}
\label{secGeneView}
The Gene View (right panel, figure~\ref{fig:02}) visualizes the individual gene interactions within a selected cluster. Each gene is represented as a node and each edge is an interaction between genes. The gene nodes are represented as pie charts, where the size represents that gene's association with the selected cluster and the pie pieces represent all the clusters in which that gene is present, proportionally to cluster membership. The edge width and color represent the interaction score between genes.
The Gene View has an extensive interaction model for the user to explore the gene datasets. Individual genes can be highlighted and queried by hovering and clicking, respectively. There are three methods to highlight the gene interactions:
1) "Level of connectedness highlighting" which highlights all the gene interactions connected to the highlighted gene through different levels of connectivity, e.g. a level of connectedness of 2 reveals all the connected genes to the highlighted gene as level 1 and all the genes connected to the genes in level 1 as level 2.
2) "Link threshold highlighting" progresses through all the connected gene interactions and highlights those interactions whose interaction score is above the given threshold.
3) "Top-n link highlighting" highlights the top n highest interaction links and its genes starting from the highlighted gene.
\subsection{Disease Mode}
\label{secDiseaseMode}
The Disease Mode will change the color and opacity representation of the nodes and edges of both the Cluster View and the Gene View based on a selected disease from the gene-disease association dataset.
In Cluster View, cluster nodes and edges that do not contain any genes associated with the selected disease keep their color and size but are rendered at a lower opacity. This reveals a disease sub-cluster graph, which gives a sparse overview of the disease and cluster relations. Each cluster node is colored according to that cluster's enrichment score: highly significant scores (${p<0.05}$) appear red, weakly significant scores (${0.05<p<0.1}$) appear orange and non-significant scores (${p>0.1}$) appear white. The enrichment score is a modified one-tailed Fisher exact $p$-value called the EASE score.
The Gene View gene nodes are color-coded according to the $p$-values of the studies behind the selected disease, obtained from the gene-disease association dataset.
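For reference, the EASE score can be computed as a one-tailed Fisher exact $p$-value in which one gene is removed from the overlap before evaluating the hypergeometric tail. The Python sketch below assumes this common formulation (the exact contingency-table convention used by GeneVis may differ):

```python
from math import comb

def fisher_upper_tail(k, n_cluster, n_disease, n_background):
    """One-tailed Fisher exact p-value: P(overlap >= k) under the
    hypergeometric null for a cluster of n_cluster genes drawn from
    n_background genes, of which n_disease are disease-associated."""
    total = comb(n_background, n_cluster)
    hi = min(n_cluster, n_disease)
    return sum(comb(n_disease, i) * comb(n_background - n_disease, n_cluster - i)
               for i in range(k, hi + 1)) / total

def ease_score(k, n_cluster, n_disease, n_background):
    """EASE: the Fisher exact p-value with one gene removed from the
    overlap, giving a more conservative enrichment estimate."""
    return fisher_upper_tail(max(k - 1, 0), n_cluster, n_disease, n_background)

p = fisher_upper_tail(4, 4, 5, 10)   # ~0.0238 (5/210)
e = ease_score(4, 4, 5, 10)          # ~0.262 (55/210), more conservative
print(p, e)
```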
\section{Application}
\label{secApplication}
In figures~\ref{fig:02} and~\ref{fig:03} an example of the tool is given for the gene cluster dataset KEGG, the gene-gene interaction dataset HIPPIE and the gene-disease association dataset GWAS. Figure~\ref{fig:02} shows the UI of GeneVis, with the clusters in the left panel and, in the right panel, the genes associated with the Cytokine receptor cluster. Both are color-coded by their association with Crohn's disease (where red is ${p<0.05}$, orange is ${0.05<p<0.1}$ and white is ${p>0.1}$). The first row of figure~\ref{fig:03} shows an in-depth comparison of the Cluster View color-mapped with 3 different diseases: Crohn's disease, Migraine without Aura and Prostate cancer. The following 3 rows show an in-depth comparison of the Gene View of 3 different clusters (Cytokine receptor interaction, Pathways in cancer and PI3K-Akt signaling pathway) color-mapped by the same diseases. In this example GeneVis shows which of the clusters are associated with the currently selected disease. These cluster-disease associations can then be further explored on the gene level in the Gene View. In turn, the Gene View shows which genes that have not yet been associated with the disease are connected to disease-associated genes. This way the user can exploratively find starting points for potential new studies of, for example, gene-disease associations.
\section{Implementation}
\label{secImplementation}
GeneVis is a web-based visualization tool written in HTML, CSS and JavaScript, using jQuery\footnote{http://jquery.com/} and the D3.js framework\footnote{http://d3js.org/}. Roughly \texttildelow80\% of the application uses D3.js, a versatile data-driven HTML DOM element manipulation framework. D3.js makes it possible to give users full interactive control over their gene cluster data at interactive frame rates.
\begin{sidewaysfigure*}[p]
\centering
\includegraphics[keepaspectratio=true,width=\linewidth]{./UIcytokite1920.png}
\caption{GeneVis. This image is the full interface of the GeneVis. The left panel is the Cluster View and the right panel is the Gene View. The bottom of the interface is the interaction bar, which contains all the data set loading and graph interaction buttons.}\label{fig:02}
\end{sidewaysfigure*}
\begin{figure*}[p]
\centering
\includegraphics[keepaspectratio=true,height=\textheight]{./clusterandgenescombinedv2.png}
\caption{The Cluster View (first row) and the Gene View (last 3 rows). The first row shows a comparison of 3 Cluster View representations for 3 different diseases: Crohn's Disease, Migraine without Aura and Prostate Cancer. In the Cluster View row the three different pathways are annotated to show the comparison between these pathways relative to the selected disease. For example, the Cytokine receptor interaction pathway is an active pathway in inflammatory diseases, and as Crohn's disease is an inflammatory bowel disease it has a significant enrichment score and is color-coded accordingly in red. The 3 Gene View rows show 9 comparative Gene View Disease Mode representations for the 3 different diseases and 3 different pathways. Just as in the Cluster View, a clear difference can be seen between the diseases in the different pathways.}\label{fig:03}
\end{figure*}
\section{Introduction} The Nash-Cournot oligopolistic market model is one of the fundamental models in economics and has earned the attention of many authors; see e.g. \cite{Au1,Fa1,Fa2,Fu2,Ko1,Ku1,Mur1} and the references cited therein. In this model it is assumed that there are $N$ firms producing a common homogeneous commodity. Each firm $i$ has a strategy set $D_i \subset\Bbb R_{+}$
and a profit function $f_i$ defined on the strategy set $D:= D_1 \times\cdots\times D_N$ of the model. Let $x_i\in D_i$ be the corresponding production level
of firm $i$. Each firm seeks to maximize its profit by choosing its production level under the presumption that the production levels of the
other firms are parametric inputs. A commonly used approach to this model is based upon the famous Nash equilibrium concept.\\
\indent We recall that a point (strategy) $x^* = (x^*_1,\ldots,x^*_N) \in D$ is said to be a Nash equilibrium point of this Nash-Cournot
oligopolistic market model if $$ f_i(x^*) \geq f_i(x^*[x_i]) \ \forall x_i \in D_i, \ \forall i,$$
where the vector $x^*[x_i]$ is obtained from $x^*$ by replacing $x^*_i$ with $x_i$.\\
\indent In the linear Nash-Cournot model the profit function of firm $i$ is given by
\begin{equation}\label{1}
f_i(x) = (\alpha - \beta\sum_{j=1}^{N} x_j)x_i - h_i(x_i) \ (i=1,\ldots,N),
\end{equation}
where $\beta > 0$, $\alpha > 0$ and, for every $i$, the cost function $h_i$ is affine and depends only on the quantity $x_i$ of firm $i$. In this linear case, it has been shown (see e.g. \cite{Ko1}) that the model has a unique Nash equilibrium point, which is the unique solution of a strongly convex quadratic program. When $h_i$ is differentiable and convex, the problem of finding a Nash equilibrium point can be formulated as a monotone variational inequality \cite{Fa1,Mur1}, which can be solved by available methods for monotone variational inequalities.\\
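For the linear model \eqref{1} with affine costs $h_i(x_i)=\mu_i x_i$, the equilibrium can also be reached numerically by damped best-response iteration using the first-order condition $2\beta x_i + \beta\sum_{j\neq i}x_j = \alpha - \mu_i$. A self-contained Python sketch (all parameter values and the damping factor are illustrative, not taken from the paper):

```python
def cournot_best_response(alpha, beta, mu, lower, upper, t=0.5, iters=200):
    """Damped best-response dynamics for the linear Nash-Cournot model
    f_i(x) = (alpha - beta*sum(x))*x_i - mu_i*x_i on a box strategy set.
    The first-order condition gives BR_i = (alpha - mu_i - beta*S_{-i})/(2*beta)."""
    n = len(mu)
    x = [0.0] * n
    for _ in range(iters):
        s = sum(x)
        br = [(alpha - mu[i] - beta * (s - x[i])) / (2.0 * beta) for i in range(n)]
        br = [min(max(b, lower[i]), upper[i]) for i, b in enumerate(br)]
        x = [(1 - t) * x[i] + t * br[i] for i in range(n)]  # damping avoids oscillation
    return x

# Symmetric example: the equilibrium is x_i = (alpha - mu)/(beta*(N+1)) = 9/4.
x = cournot_best_response(alpha=10.0, beta=1.0, mu=[1.0, 1.0, 1.0],
                          lower=[0.0] * 3, upper=[10.0] * 3)
print(x)  # -> [2.25, 2.25, 2.25]
```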
\indent In some practical applications, the cost of producing a unit of the commodity decreases as the production quantity gets larger. The cost function then is concave rather than convex. Nash-Cournot oligopolistic models with concave cost functions were considered in a recent paper by Bigi and Passacantando \cite{Bi2}. For these models, it is shown in \cite{MHQ} that the problem can be formulated as a mixed variational inequality
of the form
$$\text{Find}\ x^* \in D: \langle F(x^*), x-x^*\rangle + \varphi (x) - \varphi(x^*) \geq 0\ \forall x\in D.$$
In this problem $F$ is not monotone and $\varphi$ may not be convex, and therefore the existing methods for monotone variational inequalities cannot be
applied. In \cite{MHQ} an algorithm is proposed for finding a global equilibrium point of the model when some of the cost
functions are piecewise linear concave. However, that algorithm is efficient only when the number of piecewise linear concave
cost functions is relatively small. In \cite{QM1} a proximal point method was described for finding a stationary point of the model.
However, a stationary point may be neither a global nor even a local equilibrium point.\\
\indent In this paper we continue our work in \cite{MHQ} and \cite{QM1} by considering Nash-Cournot models where some of the cost functions are separable
concave and the remaining costs are affine. Namely, we approximate the model with concave cost functions by models with piecewise linear concave costs that can be solved by the existing Search-and-Check algorithm in \cite{MHQ}. Thanks to the fact that the strategy set is a rectangle (box) and the cost functions are separable and increasing, the model has particular features that can be exploited to develop efficient algorithms for solving it. We propose two algorithms. The
first one is a search-check-branch procedure that approximates the model with concave cost functions by models with piecewise linear concave cost functions. Thanks to the affine price function and the separability of the concave cost functions, the latter models can be equivalently formulated as strongly convex quadratic problems. In order to obtain better approximate solutions the algorithm uses an adaptive rectangular bisection, which is performed
only in the space of the concave variables. Computational results on a large number of randomly generated instances show that this algorithm is efficient for models with a medium number ($\leq 40$) of firms having concave cost functions; the total number of variables may be much larger. In order to solve models with a larger number of firms having concave cost functions, we again use the convex envelope of a concave function over a box to
develop an algorithm for obtaining a local equilibrium point.\\
\indent The remaining part of the paper is organized as follows. In the next section we define a gap function that can serve as a stopping criterion for the algorithms. The third section is devoted to the description of the algorithms and the analysis of their convergence. We close the paper with some computational
results and experiences.
\section{A Gap Function as a Stopping Criterion}
In this section, we define a gap function for Nash-Cournot models involving concave cost functions. This gap function will serve as a stopping criterion for checking whether a point is an equilibrium or not. To be precise, we consider the Nash-Cournot oligopolistic market model presented above under the assumption that each profit function $f_j$ is defined by \eqref{1}, where $h_j$, $j=1,\ldots,n$ with $n\leq N$, is increasing concave while $h_i$ with
$i>n$ is increasing affine. This assumption is motivated by the fact that for some firms the cost consists of both production and transportation costs, while for the other ones no transportation is needed. In practice the transportation cost function is concave (see the example in \cite{Bi2}).\\
\indent First, we define the bifunction $\phi$ by taking
\begin{equation}\label{4}
\phi(x, y) := \langle \tilde{B_1}x - a, y - x \rangle
+ y^TB_1y - x^TB_1x + h(y) -
h(x)\end{equation}
where $$a :=(\alpha, \alpha,\ldots,\alpha)^T,$$
$$B_1 := \begin{pmatrix} \beta &0&0&\ldots&0\cr
0&\beta &0& \ldots&0 \cr
\ldots&\ldots&\ldots&\ldots&\ldots\cr
0&0&0&\ldots&\beta \end{pmatrix}, \
\tilde{B_1} := \begin{pmatrix} 0&\beta &\beta &\ldots&\beta \cr
\beta &0&\beta &\ldots&\beta \cr
\ldots&\ldots&\ldots&\ldots&\ldots\cr
\beta &\beta&\beta&\ldots&0\end{pmatrix},$$
and we suppose that
$$h(x) := \sum_{i = 1}^N h_i(x_i).$$
Then the problem of finding an equilibrium point for the model can be formulated as the mixed variational inequality problem MV$(D)$ of the form (see e.g.\ \cite{MHQ})
$$ \begin{cases} \text{find a point} \ x\in D \ \text{such that}\\
\Phi(x, y):= \langle \tilde{B_1}x - a, y - x \rangle + \varphi(y)
- \varphi(x) \geq 0 \ \forall y\in D,
\end{cases}\eqno MV(D)$$
where $\varphi (y):= y^TB_1y + h(y)$. Clearly, $\varphi$ is a separable DC function if each $h_i$ is concave; in the particular case where each $h_i$ is affine, $\varphi$ is a separable strongly convex quadratic function. In the latter case every local equilibrium point is a global one, and we have the following lemma.
\begin{lemma}\cite{Ko1,MHQ}\label{2.1a}
Suppose that the cost function $h$ is affine (classical model), given as
$h(x) := \mu^Tx + \xi$. Then variational inequality $MV(D)$ can be
equivalently formulated as the convex quadratic programming problem
$$\min\Big\{ \frac{1}{2}x^T (2B_1 +\tilde{B_1}) x + (\mu -a)^Tx: x\in D\Big\}.$$
\end{lemma}
\indent Gap functions are commonly used to provide stopping rules in optimization, variational inequality and equilibrium problems, as well as to reformulate them as mathematical programming problems. Following this idea, we now define a gap function for the Nash-Cournot equilibrium models with separable concave cost functions. Namely, for Problem MV$(D)$ we define a gap function by taking, for each $x\in D$,
\begin{equation}\label{gap1}
g(x):=-\min\{\Phi(x,y): y\in D\}.
\end{equation}
\begin{lemma} \label{3.1} Suppose that the cost function $h_i$ is continuous on $D_i$ for all $i=1, 2, \ldots, N$. Then\\
\indent (i) the function $g$ is well defined, continuous and $g(x) \geq 0 \ \forall x\in D$;\\
\indent (ii) a point $x^*\in D$ is an equilibrium of the model if and only if $g(x^*)=0$.
\end{lemma}
{\it Proof.}
This lemma can be derived from Theorem 2.1 in \cite{Fu2}. Here we give a direct proof for MV$(D)$.\\
\indent (i) Since $D$ is compact and, for each $x\in D$, $\Phi(x, .)$ is continuous on $D$, $\Phi(x,.)$ attains its minimum
on $D$. Further, from the property $\Phi(x,x)=0$, it follows that $g(x) \geq 0$ for every $x\in D$.\\
\indent (ii) Suppose that $x^* \in D$ is an equilibrium point, then
$$\Phi(x^*,y) \geq 0 \ \text{for all} \ y\in D,$$
which implies $g(x^*) \leq 0.$ Hence $g(x^*)= 0$. Conversely, if
$g(x^*)=0$, then from the definition of $g$ one has
$\Phi(x^*,y)\geq 0$ for all $y\in D$, which means that $x^*$ is an
equilibrium point of the model. $\hfill \square$\\
\indent Motivated by this lemma, we call a point $x_\epsilon$ an $\epsilon$-equilibrium point if $g(x_\epsilon) \leq \epsilon$.\\
We rewrite the bifunction $\Phi$ as
$$\Phi(x, y) = \langle \tilde{B_1}x - a, y - x \rangle
+ \beta \sum_{i=1}^N y^2_i + \sum_{i=1}^N h_i(y_i) -
\beta\sum_{i=1}^N
x^2_i - \sum_{i=1}^N h_i(x_i),$$
the gap function $g$ then can be rewritten as
\begin{equation}\label{g2}
g(x)= - \min_{y\in D}\Big \{ \langle \tilde{B_1}x - a, y - x
\rangle+ \beta\sum_{i=1}^N y^2_i +\sum_{i=1}^N h_i(y_i)\Big \} +
\beta \sum_{i=1}^Nx_i^2 +\sum_{i=1}^N h_i(x_i)
\end{equation}
Since $D$ is the box of the form
$$D:= \{ x^T = (x_1,\ldots,x_N): 0\leq l_i \leq x_i \leq u_i, \ i=1,\ldots,N\}$$
we can further write $g(x)$ as
\begin{equation}\label{g3}
\begin{aligned}
g(x)= - &\sum_{i=1}^N \min_{l_i\leq y_i\leq u_i}\Big \{ (
\tilde{B_1}x
- a)_i (y_i-x_i) + \beta y^2_i + h_i(y_i)\Big\} \\
&+ \beta \sum_{i=1}^N x_i^2 +\sum_{i=1}^N h_i(x_i).
\end{aligned}
\end{equation}
A simple rearrangement of \eqref{g3} yields
\begin{equation}\label{g4}
\begin{aligned}
g(x)= - &\sum_{i=1}^N \min_{l_i \leq y_i\leq u_i} \Big \{ \beta
y^2_i + \Big{(}\beta \sigma^{(-i)}(x)
- \alpha \Big{)}y_i + h_i(y_i)\Big\} \\
& + \beta \big{(}\sum_{i=1}^N x_i\big{)}^2 - a^T x + \sum_{i=1}^N h_i(x_i),
\end{aligned}
\end{equation}
where $\sigma^{(-i)}(x):= \sum_{j\neq i}^N x_j$. From \eqref{g4}
it follows that to evaluate $g(x)$, for each $x\in D$, one needs to
solve $N$ optimization problems, each of which is a one-variable
minimization problem of the form
\begin{equation}\label{2.8}
\min_{l_i \leq y_i\leq u_i} \Big \{ \beta y^2_i + \Big{(}\beta
\sigma^{(-i)}(x)
- \alpha \Big{)}y_i + h_i(y_i)\Big\}, \ i=1, 2, \ldots, N.
\end{equation}
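When each concave cost is quadratic, $h_i(y)=\nu_i y - d_i y^2$ with $d_i \geq 0$ (the form used for the Bertrand example below), every subproblem \eqref{2.8} is a one-variable quadratic, so its box-constrained minimum is attained either at the clipped vertex (convex case) or at an endpoint (concave case). A Python sketch of evaluating the gap function this way (the function name and parameter layout are ours):

```python
def gap(x, alpha, beta, nu, d, lower, upper):
    """Gap function g(x) = -min_y Phi(x, y) for quadratic concave costs
    h_i(y) = nu_i*y - d_i*y**2 on the box [lower, upper]."""
    n = len(x)
    s = sum(x)
    # constant part: beta*sum(x_i^2) + sum_i h_i(x_i)
    total = beta * sum(xi * xi for xi in x)
    total += sum(nu[i] * x[i] - d[i] * x[i] ** 2 for i in range(n))
    for i in range(n):
        sig = s - x[i]                 # sum_{j != i} x_j
        lin = beta * sig - alpha       # i-th component of (B~ x - a)
        # one-variable objective: lin*(y - x_i) + beta*y^2 + h_i(y)
        q2 = beta - d[i]
        q1 = lin + nu[i]
        q0 = -lin * x[i]
        cands = [lower[i], upper[i]]
        if q2 > 0:                     # convex quadratic: also check the clipped vertex
            cands.append(min(max(-q1 / (2 * q2), lower[i]), upper[i]))
        total -= min(q2 * y * y + q1 * y + q0 for y in cands)
    return total

# Affine costs (d_i = 0): the Nash point of the 2-firm model with
# alpha=10, beta=1, nu_i=1 is x = (3, 3), where the gap vanishes.
g0 = gap([3.0, 3.0], 10.0, 1.0, [1.0, 1.0], [0.0, 0.0], [0.0, 0.0], [10.0, 10.0])
print(g0)  # -> 0.0
```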
In order to compare the Cournot model presented above with existing models, let us consider the Bertrand model. In a Bertrand model the firms produce a common homogeneous commodity. In contrast to the Cournot model, here each firm sets its price rather than its production quantity. So, in such a model, the demand is a function of the price, and the customers buy from the firms with the lowest price. However, this assumption is often not realistic, since usually the products of the firms are not entirely interchangeable, and thus some consumers may prefer one product to another even if it costs somewhat more.\\
\indent Suppose that the quantity level $x_i$ produced by firm $i$ depends on the prices and is given by
$$x_i(p) = \gamma_i - \sigma_i p_i + \sum_{j\neq i}^n \lambda_{ij}p_j, \ i=1, \ldots, n,$$
where $\gamma_i, \ \sigma_i > 0$ and $\lambda_{ij} \geq 0$ $(j\neq i)$. The condition $\sigma_i >0$ means that the demand for firm $i$ decreases as its price increases, while $\lambda_{ij} \geq 0$ $(j\neq i)$ means that the demand for firm $i$ increases when the other firms increase their prices.\\
\indent The profit function of firm $i$ then is given as
$$f_i(p) := p_ix_i - h_i(x_i),$$
where, following \cite{Bi2}, we assume that the cost $h_i(.)$ is a concave function of the production level and is given by
$$h_i(x_i) = \nu_ix_i - d_ix_i^2 \ \text{with} \ d_i \geq 0.$$
Then an elementary computation shows that the cost, as a function of the price, is
$$ \begin{array}{lll}
h_i(p) & =& -d_i\sigma_i^2p_i^2 + \sigma_i\big{[}2d_i \big{(} \gamma_i + \sum_{j\neq i}^n\lambda_{ij}p_j\big{)}-\nu_i\big{]}p_i + \nu_i\big{(} \gamma_i + \sum_{j\neq i}^n\lambda_{ij}p_j\big{)} \\
&& - d_i\big{(} \gamma_i + \sum_{j\neq i}^n\lambda_{ij}p_j\big{)}^2
\end{array}$$
The profit function then takes the form
$$\begin{array}{lll}
f_i(p) & = & \sigma_i(d_i\sigma_i - 1)p_i^2 + \big{[} \sigma_i \nu_i + (\gamma_i + \sum_{j\neq i}^n\lambda_{ij}p_j)(1-2d_i \sigma_i)\big{]}p_i \\
& &+ d_i \big{(}\gamma_i + \sum_{j\neq i}^n\lambda_{ij}p_j \big{)}^2 - \nu_i (\gamma_i + \sum_{j\neq i}^n\lambda_{ij}p_j)
\end{array}$$
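These expansions are straightforward to sanity-check numerically. The throwaway Python sketch below (all parameter values are illustrative) compares $h_i(x_i(p))$ with its expanded quadratic-in-$p_i$ form:

```python
def check_bertrand_cost(gamma, sigma, d, nu, lam_dot_p, p_i):
    """Compare h_i(x_i(p)) with its expansion as a quadratic in p_i,
    where x_i = gamma - sigma*p_i + sum_{j!=i} lambda_ij p_j."""
    A = gamma + lam_dot_p                     # gamma_i + sum_{j!=i} lambda_ij p_j
    x = A - sigma * p_i
    direct = nu * x - d * x * x               # h_i(x_i) = nu_i x_i - d_i x_i^2
    expanded = (-d * sigma ** 2 * p_i ** 2
                + sigma * (2 * d * A - nu) * p_i
                + nu * A - d * A * A)
    return direct, expanded

direct, expanded = check_bertrand_cost(gamma=3.0, sigma=0.5, d=0.2,
                                       nu=1.5, lam_dot_p=2.0, p_i=4.0)
print(direct, expanded)  # the two values agree (up to floating point)
```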
Each firm $i$ attempts to maximize its profit by choosing a corresponding price level in its strategy set $[0, T_i]$ by solving the optimization problem
$$f_i(p) = \max_{y_i \in [0,T_i]} f_i(p[y_i]), \ \ \forall i=1, \ldots, n,$$
where $p[y_i]$ is the vector obtained from $p$ by replacing $p_i$ with $y_i$.\\
\indent By the same technique as in the Nash-Cournot model, the problem of finding a Nash equilibrium point of this Bertrand model can be formulated as a mixed variational inequality of the form
$$\begin{array}{lll} \text{Find} \ \ p\in T:= [0,T_1]\times\ldots\times[0,T_n]:
\Phi(p,y):= \langle Gp - r, y-p \rangle + \psi(y)-\psi(p)\geq 0 \ \forall y\in T,
\end{array}$$
where
$$ G = \left ( \begin{array} {ccccc}
0 & \lambda_{12}(1-2d_1 \sigma_1) & \ldots & \lambda_{13}(1-2d_1 \sigma_1)&\lambda_{1n}(1-2d_1 \sigma_1) \cr
\lambda_{21}(1-2d_2 \sigma_2) & 0 & \ldots & \lambda_{23}(1-2d_2 \sigma_2)&\lambda_{2n}(1-2d_2 \sigma_2) \cr
\ldots& \ldots& \ldots& \ldots&\ldots \cr
\lambda_{n1}(1-2d_n \sigma_n) & \ldots& \ldots&\lambda_{n,n-1}(1-2d_n \sigma_n)& 0
\end{array} \right )
$$
with
$$r_i = \gamma_i(1-2d_i \sigma_i), \ i=1, \ldots, n, \qquad
\psi(y) = \sum_{i=1}^n \sigma_i(d_i\sigma_i -1)y_i^2.$$
Thus, like the Nash-Cournot model, the Bertrand model can be formulated as a mixed variational inequality of the type $MV(D)$.
Note that since $\sigma_i(d_i \sigma_i -1)$, $i=1, \ldots, n$, may be negative, the function $\psi(.)$ may not be convex.
\section{An Algorithm for Global Equilibria}
In this section we describe an algorithm for approximating a global equilibrium point of the model. The idea of the proposed algorithm is quite natural: it uses
the convex envelope of the concave cost function to approximate the original model with the one having piecewise linear concave costs. The latter can be solved by an algorithm developed in \cite{MHQ} to obtain an approximate equilibrium point $x$. Then by evaluating the gap function we can
check whether or not the obtained point $x$ is an $\epsilon$-equilibrium point. If not, we use an adaptive rectangular bisection to get a better approximate point. Thanks to the rectangular structure of the strategy set and separability of the cost function, the proposed algorithm can be implemented easily.
\subsection{A Search-Check-Branch Algorithm}
First we recall from \cite{Ho1} that the convex envelope of a function $\varphi$ on a convex set $C$ is the convex function on $C$, denoted by $\co_C\varphi$, such that $\co_C\varphi(x) \leq \varphi(x)$ for every $x\in C$, and if $\xi$ is any convex function on $C$ satisfying $\xi(x) \leq \varphi(x)$ for
every $x\in C$, then $\xi(x) \leq \co_C\varphi(x)$ for every $x\in C$. It is well known \cite{Ho1} that the convex envelope of a concave function is affine, and that if $C= C_1\times \ldots\times C_N$ is compact and $\varphi$ is separable, i.e., $\varphi(x_1,\ldots,x_N)= \sum_{j=1}^N \varphi_j(x_j)$ then $\co\varphi(x) = \sum_{j=1}^N\co\varphi_j(x_j)$ where $\co\varphi_j$ is the convex envelope of $\varphi_j$ over $C_j$. Clearly, since $h_i$, $i > n$ is affine,
$h_i \equiv \co h_i$ on every convex set.\\
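Explicitly, since the convex envelope of a concave function over an interval is affine, for a concave $\varphi_j$ on $C_j = [a_j, b_j]$ it is simply the chord joining the endpoint values:
$$\co_{C_j}\varphi_j(t) = \varphi_j(a_j) + \frac{\varphi_j(b_j)-\varphi_j(a_j)}{b_j-a_j}\,(t-a_j), \quad t\in [a_j, b_j].$$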
\indent The algorithm we are going to describe is a search-check-branch procedure. For a given tolerance $\epsilon \geq 0$, at each iteration,
the algorithm consists of three steps. The search-step requires solving convex quadratic programs for the approximate model with piecewise linear concave
cost functions to obtain an approximate equilibrium point. The check-step uses the gap function presented in the preceding section to check whether or not the obtained solution is an $\epsilon$-equilibrium point. The branch-step employs an adaptive rectangular bisection performed in the space of concave variables to obtain a better approximation for the model.\\
\indent To be precise, suppose that the strategy set is $D := D_1\times \cdots\times D_N$. Let
$$I^0:= D_1\times \ldots\times D_n, \ J^0:= D_{n+1}\times \cdots\times D_N. $$
For an $n$-dimensional subbox $I \subseteq I^0 $, define
\begin{equation}\label{DI}
D_I:=\{x^T:= (x_1,\ldots, x_N): (x_1,\ldots,x_n) \in I, (x_{n+1},\ldots, x_N)\in J^0 \}
\end{equation}
and consider the convex mixed variational inequality CMV($D_I$) defined as
$$ \begin{array}{l}
\text{Find} \ x^{D_I}\in D_I \ \text{such that:}\\
\langle \tilde{B}_1x^{D_I}- \alpha, y-x^{D_I} \rangle + y^TB_1y + \co_{I}h(y)
+ \sum_{j=n+1}^N h_j(y)\\ - ((x^{D_I})^TB_1x^{D_I} + \co_{I}h(x^{D_I})+ \sum_{j=n+1}^N h_j(x^{D_I})
) \geq 0\ \forall y\in D_I.
\end{array}
$$
In what follows we write $x^{D_I}= ( x^I,x^J)$ with $x^I \in I$,
$x^J\in J^0$.\\
\indent Since $\co_{I}h(.)$ is affine, by Lemma \ref{2.1a}, this problem is reduced to the strongly convex quadratic program
$$\min_{x\in D_I}\{ x^TQx + (c^I)^Tx\}, \eqno(QD_I)$$
where $Q:= \dfrac{1}{2}\tilde{B_1} + B_1$, $c^I = (c^{I_1}, \ldots,
c^{I_N})^T$ with $c^{I_j}:= (a^{I_j} - \alpha) (j=1, 2, \ldots,N)$.\\
\indent Suppose that each strategy set $D_j$ ($j=1,\ldots,n$) has been divided into intervals $D_{j,1},\ldots, D_{j,k_j}$, on each of
which the cost function is affine. Let $\Delta$ be the set of $n$-dimensional subboxes defined as
$$\Delta:=\{ B:= I_1\times\cdots\times I_n: I_j \in \{D_{j,1}, \ldots , D_{j, k_j}\}, j = 1,\ldots,n\}.$$
Define $\Sigma$ as the family of $N$-dimensional subboxes by taking
$$\Sigma:=\{ I= B\times J^0: B \in \Delta\}.$$
\indent Let us define the gap function for the model with piecewise linear concave cost functions, that is
\begin{equation}\label{gapb}
\bar{g}(x):= - \min_{y\in D}\bar{\phi}(x, y)
\end{equation}
where
\begin{equation}\label{phib}
\bar{\phi}(x, y) := \langle \tilde{B_1}x - a, y - x \rangle
+ y^TB_1y - x^TB_1x + \bar{h}(y) -
\bar{h}(x),
\end{equation}
where $\bar{h}$ is the piecewise linear concave function obtained by
taking the convex envelope of $h$ on each element of $\Sigma$.\\
\indent Note that, since $h_i$ is affine on $D_i$ for every $i=n+1,\ldots,N$, the convex envelope of $h_i$ on any subbox coincides
with $h_i$ for every $i= n+1,\ldots,N$. In particular, $\co_D h$ is affine and
$$\co_D h(x) = \sum_{j=1}^n\co_{I^0} h_j(x) + \sum_{i=n+1}^N h_i(x).$$
\indent First we briefly describe the algorithm in \cite{MHQ} as follows.\\
\noindent {\bf Algorithm 1} (Search-and-Check). Choose a tolerance $\epsilon \geq 0$.\\
\noindent {\bf Step 1}: Select a subbox $I\in \Sigma$.\\
\noindent {\bf Step 2}: Solve the strongly convex quadratic problem $(QD_I)$ to obtain its unique solution $x^{D_I}$.\\
\noindent {\bf Step 3}:\\
\indent a) If $\bar{g}(x^{D_I}) \leq \epsilon$, terminate: $x^{D_I}$ is an $\epsilon$-equilibrium point for the piecewise linear concave cost model.\\
\indent b) If $\bar{g}(x^{D_I}) > \epsilon$ and $\Sigma = \emptyset$, then terminate: the model has no equilibrium point. Otherwise, replace
$\Sigma$ by $\Sigma \setminus \{I\}$ and return to {\bf Step 1}.\\
\indent It is obvious that in the worst case, the algorithm searches all subboxes in $\Sigma$; however, the computational results reported in \cite{MHQ} show that by using the gap function, in general, the algorithm finds an $\epsilon$-equilibrium point without searching all elements of $\Sigma$.\\
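To fix ideas, the search-and-check loop can be sketched in Python. This is an illustration only (names are ours): we assume, purely for the example, a diagonal quadratic part, so that each subproblem $(QD_I)$ separates coordinate-wise and has a closed-form clamped solution; in general $(QD_I)$ would be handed to a convex-QP solver.

```python
def solve_box_qp_diag(q, c, lo, hi):
    """Unique minimizer of sum_i (q_i x_i^2 + c_i x_i) over the box
    prod_i [lo_i, hi_i], with q_i > 0.  A diagonal quadratic separates
    coordinate-wise, so each x_i is the clamped unconstrained minimizer."""
    return [min(max(-ci / (2.0 * qi), li), ui)
            for qi, ci, li, ui in zip(q, c, lo, hi)]

def search_and_check(subboxes, solve_qp, gap, eps=1e-6):
    """Algorithm 1 (Search-and-Check): scan the subboxes of Sigma, solve
    the convex QP on each, and stop at the first point certified by the
    gap function to be an eps-equilibrium; None if Sigma is exhausted."""
    for box in subboxes:
        x = solve_qp(box)          # Step 2: unique solution of (QD_I)
        if gap(x) <= eps:          # Step 3a: eps-equilibrium found
            return x
    return None                    # Step 3b: no equilibrium point
```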
\indent Using Algorithm $1$ described above we can develop an algorithm for approximating an equilibrium point of the model where some of the cost functions are concave. The idea is quite natural. In fact, at each iteration we use the convex envelope of the concave cost function to obtain a model with
piecewise linear concave cost functions to which we can apply the search-and-check Algorithm $1$ to obtain an approximate equilibrium point. If the obtained point is not yet an $\epsilon$-equilibrium point, we use an adaptive rectangular bisection (Rule $1$ below) to reduce the difference between the concave function and its convex envelope, thereby obtaining a better approximate equilibrium point for the original model, and so on.\\
\noindent {\bf An adaptive rectangular bisection} (Rule 1). Let $I$ be a given $n$-dimensional subbox of $D_1\times\ldots\times D_n$. For $x^I\in I$, define
$$j_{\max}:= \argmax_{1\leq j \leq n} \{ h_j(x^I_j)-\co h_j(x^I_j)\}.$$ Then we bisect $I$ into two boxes via the middle point of the edge $I_{j_{\max}}$. We call this middle point the {\it bisection point} and $j_{\max}$ the {\it bisection index}.\\
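As an illustration, Rule 1 can be sketched in a few lines of Python (the function names are ours; the chord formula exploits the fact that the convex envelope of a concave function on an interval is affine):

```python
def chord(h, a, b):
    """Convex envelope of a concave h on [a, b]: the chord through the
    endpoint values (h(a), h(b))."""
    ha, hb = h(a), h(b)
    return lambda t: ha + (hb - ha) * (t - a) / (b - a)

def bisect_box(h_list, box, x):
    """Rule 1: split the edge where the gap h_j - co h_j at x is largest.

    h_list : list of concave functions h_j
    box    : list of intervals (a_j, b_j)
    x      : point (x_1, ..., x_n) in the box
    Returns the bisection index, the bisection point, and the two sub-boxes.
    """
    gaps = [h(x_j) - chord(h, a, b)(x_j)
            for h, (a, b), x_j in zip(h_list, box, x)]
    j = max(range(len(gaps)), key=gaps.__getitem__)   # bisection index
    a, b = box[j]
    mid = 0.5 * (a + b)                               # bisection point
    left = box[:j] + [(a, mid)] + box[j + 1:]
    right = box[:j] + [(mid, b)] + box[j + 1:]
    return j, mid, left, right
```

For an affine $h_j$ the gap vanishes identically, so such edges are never selected for bisection.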
\indent For this bisection we have the following lemma whose proof can be found, e.g., in \cite {MO1,Mu1}.
\begin{lemma}\label{bi} Let $\{I^k\}$ be an infinite sequence of boxes generated by the adaptive rectangular bisection Rule $1$ such that $I^{k+1}\subset
I^k$ for every $k$. Let $b^k$ be the bisection point and $j_k$ be the bisection index for $I^k$. Then $\lim_{k\to \infty}( h_{j_k}(b^k)- \co_{I^k}h_{j_k}(b^k) ) = 0$. Consequently, $\{I_{j_k}\}$ tends to a singleton, provided $h_{j_k}$ is concave but not affine on $I_{j_k}$ for every $j_k$.
\end{lemma}
\indent For each subbox $I$ having $n$ edges $I_j$ $(j=1,\ldots,n)$ we define
$$\rho(I_j):= \max_{t\in I_j} \{ h_j(t) - \co h_j(t)\}$$
and
\begin{equation}\label{roI}
\rho(I):=\max\{ \rho(I_j): \ j=1,\ldots,n\}.
\end{equation}
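Numerically, $\rho(I_j)$ is easy to evaluate: for concave $h_j$ the difference between $h_j$ and its chord (its convex envelope) is concave, so a ternary search suffices. A minimal sketch, with names and iteration counts of our choosing:

```python
def rho_edge(h, a, b, iters=100):
    """rho(I_j): maximum over [a, b] of h minus its chord (the convex
    envelope of a concave h).  The difference is concave, so ternary
    search converges to the maximizer."""
    ha, hb = h(a), h(b)
    gap = lambda t: h(t) - (ha + (hb - ha) * (t - a) / (b - a))
    lo, hi = a, b
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if gap(m1) < gap(m2):
            lo = m1
        else:
            hi = m2
    return gap(0.5 * (lo + hi))

def rho_box(h_list, box):
    """rho(I): maximum of rho(I_j) over the n edges of the subbox."""
    return max(rho_edge(h, a, b) for h, (a, b) in zip(h_list, box))
```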
The algorithm now can be described as follows:\\
\noindent {\bf Algorithm 2} (Search-Check-Branch for global equilibria).\\
\noindent {\it Initial step}. Choose a tolerance $\epsilon \geq 0$, take the initial box $I^0 := D_1\times\ldots\times D_n$. Solve the convex mixed
variational inequality CMV($D$) defined as
$$\text {Find}\ x \in D: \overline{\Phi}_0(x,y):= \langle \tilde{B_1}x - a, y - x \rangle
+ y^TB_1y - x^TB_1x + \co_D h(y) -
\co_D h(x)\geq 0 \ \ \forall y\in D,$$
which is equivalent to the strongly convex quadratic program $(QD_{I^0})$ to obtain its unique solution $u^0$.\\
\indent Let $\Sigma_0:= \{I^0\}$ and $x^0 := u^0$.\\
\noindent {\bf Iteration $k$ } ($k= 0, 1,\ldots$)\\
\indent At the beginning of each iteration $k$ we have:\\
\indent $\bullet$ $\Sigma_k$: a finite family of $n$-dimensional subboxes of $ I^0$;\\
\indent $\bullet$ $ u^k = (u^{k_1},u^{k_2})$ with $u^{k_1} \in I^0$, $u^{k_2} \in J^0$: the equilibrium point of the model with piecewise linear
concave cost functions;\\
\indent $\bullet$ $x^k\in D$: the currently best feasible point, i.e., $g(x^k)$ is smallest among the obtained
feasible points so far.\\
\noindent {\bf Step 1}.\\
\indent a) If $g(x^k) \leq \epsilon$, terminate: $x^k$ is an $\epsilon$-equilibrium point of the original model.\\
\indent b) If $g(x^k) > \epsilon$, choose $I^k \in \ \Sigma_k$ such that
$$ \rho(I^k) = \max \{ \rho(I); \ I\in \Sigma_k\}. $$
\noindent {\bf Step 2}. Use the bisection Rule 1 described above to bisect $I^k$ into two
boxes $I^{k^+}$ and $I^{k^-}$. Let $j_k$ be the bisection index for $I^k$.\\
\noindent {\bf Step 3}. Solve the strongly convex quadratic program $(QD_I)$
with $I = I^{k^+}$ and $I= I^{k^-}$ to obtain $x^{k+}$ and $x^{k-}$ respectively.\\
\noindent {\bf Step 4}. If either $g(x^{k+}) \leq \epsilon$ or $g(x^{k-})\leq \epsilon$, terminate.\\
\noindent Otherwise, update $x^k$, $\Sigma_k$ and the piecewise linear concave cost function by taking respectively
$$x^{k+1} \in \{x^k, x^{k+}, x^{k-}\} \ \text{ such that}\ g(x^{k+1}) = \min \{g(x^k), g(x^{k-}), g(x^{k+})\}, $$
$$\Sigma_{k+1} =(\Sigma_k \setminus \{ I^k\}) \cup\{I^{k^-}, I^{k^+}\}.$$
\noindent {\bf Step 5}. Compute the convex envelope of the function $h_{j_k}$ on the edge $j_k$ of the subboxes $I^{k^-}$, $I^{k^+}$, thereby obtaining the new approximation bifunction
$$\overline{\Phi}_{k+1}(x,y):= \langle \tilde{B_1}x - a, y - x \rangle
+ y^TB_1y - x^TB_1x + \co_{k+1} h(y) -
\co_{k+1} h(x),$$
where $\co_{k+1}h$ is the convex envelope of $h$ obtained by replacing the convex envelope of $h_{j_k}$ on the edge $j_k$ of $I^k$ by the convex envelope of $h_{j_k}$ on the edge $j_k$ of $I^{k^-}$ and $I^{k^+}$. Then use Algorithm 1 with the just obtained piecewise linear concave cost function to solve the newly approximated piecewise linear concave model to obtain $u^{k+1}$.\\
Increase $k$ by one and go to {\bf Step 1} of iteration $k$.\\
\noindent Suppose that every model with piecewise linear concave cost function has an $\epsilon$-equilibrium point for any $\epsilon > 0$. Then we have the
following convergence result.\\
\noindent {\bf Convergence Theorem.}\\
{\it \indent (i) If the algorithm terminates at iteration $k$ then $x^k$ is an $\epsilon$-equilibrium point.\\
\indent (ii) If the algorithm does not terminate, it generates an infinite sequence $\{ x^k\}$ any cluster point of which is an equilibrium point whenever the model has an equilibrium point.
Furthermore $g(x^k) \searrow 0$ as $k\to \infty$.}\\
\noindent {\it Proof.} The statement (i) is obvious.\\
\noindent To prove statement (ii) we suppose that the algorithm never terminates. Let $x^*$ be any cluster point of $\{x^k\}$. Then
there exists a subsequence $\{x^{k_q}\}$ that tends to $x^*$. The corresponding sequence of selected boxes has a nested subsequence, which, by taking a further subsequence if necessary, we also denote by $\{I^{k_q}\}$. Since $I^{k_q}$ is the box to be bisected at iteration $k_q$, by Lemma \ref{bi}, $\{I^{k_q}\}$ tends to a singleton, which implies that $h_{j_q}(x_{j_q}) - \co h_{j_q}(x_{j_q}) \to 0$ as $q\to \infty$ ($j_q$ being the bisection index at iteration $k_q$). By the rule for selecting the bisection index, we have $h_j(x_j ) - \co h_j(x_j ) \to 0$ for every $j$. Since $u^{k_q}$ is an equilibrium point of the model with piecewise linear concave cost function, we have $\bar{g}_{k_q}(u^{k_q}) = 0$ for every $q$, where $\bar{g}_{k_q}$ is the gap function for the piecewise linear concave cost model at iteration $k_q$. By the definition of the gap function $g$ for the original model and of $\bar{g}$ for the approximate model, and the rule for selecting the bisection index, we can write
$$\bar{g}(u^{k_q}) - 2\sigma_{k_q} \leq g(u^{k_q}) \leq \bar{g}(u^{k_q}) + 2\sigma_{k_q} \ \forall q.$$
Letting $q\to \infty$, since $\sigma_{k_q} \to 0$ and (passing to a further subsequence if necessary) $u^{k_q} \to u^*$, by continuity of $g$ we obtain $g(u^*) = 0$.\\
On the other hand, since $x^{k_q}$ is the currently best feasible point obtained at iteration $k_q$, we have $0 \leq g(x^{k_q}) \leq g(u^{k_q})$. Letting $q \to \infty$, by continuity of $g$, we obtain $0 \leq g(x^*) \leq g(u^*) = 0$, which means that $x^*$ is an equilibrium point of the model. Note that, since $x^k$ is the currently best feasible point obtained at iteration $k$, by definition the sequence $\{g(x^k)\}$ is nonincreasing. Since the whole sequence $\{x^k\}$ is bounded, it has a subsequence $\{x^{k_j}\}$ converging to some $\bar{x}$. Then, as we have just shown, $\bar{x}$ is an equilibrium point, which implies $g(\bar{x}) = 0$. Hence the whole sequence $\{g(x^k)\}$ tends to $0$ as well. $\hfill \square$
\begin{Remark}\label{4.2}
In order to save memory, we may use a criterion to delete every subbox that does not contain an equilibrium point.
\end{Remark}
The following lemma gives a criterion that can be used to check whether a subbox
contains an equilibrium point or not. In fact, for a subbox $D_I:= \{x\in D: l^I \leq x\leq u^I\}$,
let us define the number $$\tilde{g}(D_I):= - \min_{y\in D_I}\{\langle \tilde{B}_1u^I - a, y
\rangle +y^TB_1y +h(y)\} - (l^I)^T B_1 l^I + a^T u^I - h(l^I). $$
Then we have the following lemma:
\begin{lemma} \label {del} Suppose $x^{D_I}$ is an optimal solution of Problem $(QD_I)$.\\
\indent (i) If $\co_Ih(x^{D_I}) = h(x^{D_I})$ then $x^{D_I}$ is an equilibrium point of the model restricted to $D_I$.\\
\indent (ii) If $\tilde{g}(D_I) > 0$, the subbox $D_I$ contains no
equilibrium point of the model.
\end{lemma}
{\it Proof.}\\
\indent (i) Since $x^{D_I}$ is the solution of $(QD_I)$, we have
$$\langle \tilde{B}_1x^{D_I} - a, y-x^{D_I} \rangle + y^TB_1y + \co_Ih(y) - (x^{D_I})^TB_1x^{D_I} - \co_Ih(x^{D_I}) \geq 0, \ \forall y\in D_I.$$
Note that $h(y) \geq \co_Ih(y) \ \forall y\in D_I$ and, by the assumption, $\co_Ih(x^{D_I}) = h(x^{D_I})$; hence we obtain
$$\langle \tilde{B}_1x^{D_I} - a , y-x^{D_I} \rangle + y^TB_1y + h(y) -
(x^{D_I})^TB_1x^{D_I} - h(x^{D_I}) \geq 0$$
for every $y\in D_I$, which means that $x^{D_I}$ is an equilibrium point of the model restricted to $D_I$.\\
\indent (ii) We now prove that $g(x) > 0 $ for all $x\in D_I$. Indeed, by definition
$$\Phi(x, y) = \langle \tilde{B}_1x - a, y \rangle +y^TB_1y +h(y) - x^TB_1x + a^T x - h(x).$$
Since $y\geq 0, \tilde{B}_1$ and $B_1$ are non-negative matrices, $h_i(.) (i=1, 2, \ldots, n)$ are increasing functions and $l^I \leq x \leq u^I $ for every $x \in D_I$, we can write, for every $y\in D$ and $x\in D_I$,
\begin{equation}\label{gg}
\begin{aligned}
\Phi(x,y) & = \langle \tilde{B}_1x - a, y \rangle +y^TB_1y +
h(y) - x^TB_1x + a^T x - h(x)\\
& \leq \langle \tilde{B}_1u^I - a, y \rangle +y^TB_1y +h(y) -
(l^I)^TB_1l^I + a^Tu^I - h(l^I) .
\end{aligned}
\end{equation}
By the definition of $\tilde{g}(D_I)$, it follows from \eqref{gg} that
$$g(x) := - \min_{y\in D} \Phi(x,y)\geq - \min_{y\in D_I} \Phi(x,y) \geq \tilde{g}(D_I)> 0 \ \forall\ x\in D_I,$$
which implies that $D_I$ does not contain an equilibrium point. $\hfill \square$
\section{An Algorithm for Local Equilibria }
Using the fact that a point $x^*\in D$ is an equilibrium point of the model if and only if the gap function $g(x^*) = 0$, we
say that a point $\bar{x}$ is a {\it local equilibrium point} of the model if there exists an open set $B \subset D$ such that $\bar{x}\in B$ and $g_B(\bar{x}) = 0$, where $g_B$ stands for the gap function of the model restricted to $B$. Note that, because of the concavity of the cost functions, in this Nash-Cournot equilibrium model a local equilibrium point may not be a global one.\\
\indent In this section, we propose an algorithm for approximating a local equilibrium point of the model by using again the gap function.\\
Namely, for a subbox $$I:=\{ x=(x_1,\ldots,x_n)^T : l_i \leq x_i \leq
u_i, \ i=1,\ldots,n\}, $$ let, as before, $D_I$ be the subbox of $D$
consisting of all points $x^T =(x_1,\ldots, x_n,\ldots, x_N)$ such that $(x_1,\ldots,x_n) \in I$. That
is
$$D_I =\{ x=(x_1,\ldots,x_N)^T: \ l_i\leq x_i \leq u_i, \ i=1,\ldots, N\}.$$
Then define the gap function $g_{D_I}$ restricted on $D_I$ by taking
\begin{equation}\label{glo}
\begin{aligned}
g_{D_I}(x)= - &\sum_{i=1}^N \min_{l_i \leq y_i\leq u_i} \Big \{
\beta y^2_i + \Big{(}\beta \sigma^{(-i)}(x)
- \alpha \Big{)}y_i + h_i(y_i)\Big\} \\
& + \beta \sum_{i=1}^N x^2_i - a^T x + \sum_{i=1}^N h_i(x_i),
\end{aligned}
\end{equation}
where $\sigma^{(-i)}(x):= \sum_{j\neq i}^N x_j$.
As before we use the convex envelope of the concave function $h$ on each subbox $D_I$ to obtain a convex mixed variational inequality
whose solution can be obtained by solving a strongly convex quadratic over $D_I$. If it happens that at the obtained solution
the values of the cost function and its convex envelope on $D_I$ coincide, this solution is a local equilibrium point of the model.
Otherwise we bisect $I$ to reduce the difference between the cost function and its convex envelope on $ D_I$. Note that if
$g_{D_I}(x) = 0$ for some $x\in D_I$, then $x$ is a local equilibrium point. Thus, if $x\in D_I$ and $g_{D_I}(x) \leq \epsilon$, then $x$ is an $\epsilon$-local equilibrium point. Since $x^{D_I}$ is the equilibrium point of the model with respect
to $D_I$, from the definitions of the convex envelope of $h$ and the gap function restricted on $D_I$, it follows that $h(x^{D_I}) -
\co_I h(x^{D_I}) = 0$ implies $g_{D_I}(x^{D_I}) = 0$. The algorithm now can be described as follows.\\
\noindent{\bf Algorithm 3} (Search-Check-Branch for local equilibria).
\noindent{\it Initial step}. Choose a tolerance $\epsilon > 0$ and solve Problem (Q$D$)
to obtain its optimal solution $x^{I^0}$.\\
\noindent Compute $\rho_0 := \rho(I^0)$ and $\epsilon_0:= g_D(x^{I^0})$. Set the initial box $I^0$ and let
$\Gamma_0 :=\{I^0\}$.\\
\noindent {\it Iteration} $k \ (k=0, 1, \ldots).$ At the beginning of each iteration $k$ we have:\\
\indent $\bullet$ $\Gamma_k$: a finite family of $n$-dimensional subboxes of $I^0$;\\
\indent $\bullet$ $\epsilon_k = \min \{ g_{D_I}(x^{D_I}): I\in \Gamma_k\}$, where $x^{D_I}$ is the optimal solution of the convex quadratic
program (Q$D_I)$;\\
\noindent {\bf Step 1.} (Stopping criterion) If $\epsilon_k \leq \epsilon$, terminate: $x^{D_I}$ with $\epsilon_k = g_{D_I}(x^{D_I})$ is an $\epsilon$-local equilibrium point.
\noindent {\bf Step 2.} (Selection) Choose $I^k \in \Gamma_k$ such that
$$\rho_k := \rho(I^k) = \max\{\rho(I): I\in \Gamma_k\}.$$
\noindent {\bf Step 3.} (Bisection): Divide the subbox $I^k$ into
two subboxes $I^{k^+}$ and $I^{k^-}$ by the bisection Rule $1$.\\
\noindent {\bf Step 4.} Solve the strongly convex quadratic programs (Q$D_I$)
with $I:= I^{k^+}$
and $I:= I^{k^-}$ to obtain the optimal solutions
$x^{k^+}$ and $x^{k^-}$ respectively.
Compute $\rho(I^{k^+})$ and $\rho(I^{k^-})$.\\
\noindent {\bf Step 5.} Let $ \epsilon_{k+1}:= \min \{\epsilon_k,
g_{D_I}(x^{D_I}) \ \text{with}\ I = I^{k^-}\ \text{and} \ I = I^{k^+} \}.$\\
\noindent {\bf Step 6.} (Updating) If $\tilde{g}(D_I) > 0$ (see Lemma \ref{del}), delete $I$ from
further consideration.\\
\indent Let $\Gamma_{k+1}$ be the remaining
set. If $\Gamma_{k+1} = \emptyset$, terminate: the model has no
$\epsilon$-local equilibrium point. Otherwise, go to iteration $k$ with
$k:=k+1$.\\
\noindent {\bf Convergence.} {\it The algorithm terminates after finitely many iterations, yielding an $\epsilon$-local equilibrium point whenever
one exists}.\\
\indent The proof of this convergence result is evident because of the fact that $\epsilon > 0$, that the sequence of selected boxes tends to a
singleton and that the gap function is continuous.
\begin{Remark}\label{4.3} If for every $i$, the cost function $h_i$ satisfies the condition
\begin{equation}\label{ex}
\nabla^2h_i(y_i) \geq - 2\beta_i \ \forall y_i \in D_i,
\end{equation}
then the model admits a solution.\\
Indeed, for each $x\in D$, let $H_i(x)$ be the solution set of the problem
$$\min_{y_i\in D_i}\Big {\{} \varphi_i(x_{-i},y_i):= \beta y_i^2 + \Big{(} \beta \sum_{j\neq i}^Nx_j - \alpha_i\Big{)}y_i +h_i(y_i)\Big{\}}. \eqno(QD_i(x)).$$
It is easy to check that condition \eqref{ex} ensures that the objective function of this problem is convex in $y_i$. Thus $H_i(x)$ is a closed convex subset of the interval $D_i$. Since the objective function of this problem is continuous and the feasible set is compact, the solution-set mapping $H_i(\cdot)$ is upper semicontinuous from $D$ into itself; by the well-known Kakutani fixed point theorem, the mapping $H(x):= H_1(x)\times \ldots\times H_N(x)$ has a fixed point $x^*$, which is also an equilibrium point of the model.\\
\indent Note that both the cost function
$$h_i(y_i) = \ell_iy_i - d_i y_i^2$$
with $\beta_i > d_i > 0$ for all $i=1, \ldots, n$,
used in \cite{Bi2}, and the cost function
$$h_i(y_i) = \mu_iy_i + \ln(1+ \gamma_iy_i),$$
with $\gamma_i > 0$ and $\gamma_i^2 \leq 2\beta_i, \ \forall i = 1, \ldots, n$
satisfy condition \eqref{ex}.
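Indeed, condition \eqref{ex} can be verified directly: for the logarithmic cost and $y_i \geq 0$,
$$\nabla^2 h_i(y_i) = -\frac{\gamma_i^2}{(1+\gamma_i y_i)^2} \geq -\gamma_i^2 \geq -2\beta_i,$$
while for the quadratic cost $\nabla^2 h_i(y_i) = -2d_i > -2\beta_i$ whenever $\beta_i > d_i$.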
\end{Remark}
\section{Computational Results and Experiments} The proposed two algorithms
were implemented in MATLAB. The programs were executed on a
PC Core 2 Duo $2\times 2.0$ GHz, RAM 2GB. We tested the program on
different groups of problems, each of which contains ten problems of
different sizes $N$ and $n$, with randomly generated input data.
Namely, for each problem, the numbers $\alpha$, $\beta$, $\mu_i$
($i=n+1,\ldots,N$) are randomly generated in the intervals $[20, 30]$,
$[0.001, 0.005]$ and $[10, 20]$ respectively. We take the cost
functions of the form
\begin{equation}\label{excost}
h_j(x_j) = a_j x_j + \ln(1+ \gamma_j x_j), \ (j=1,\ldots,n), \quad
h_i(x_i):= \mu_i x_i \ (i = n+1,\ldots,N),
\end{equation}
where $\gamma_j $ and $a_j$ are randomly generated in
$[7,15]$ and $[2,7]$ respectively. The strategy set of firm $i$ is
$D_i := [0, u_i]$ where
each $u_i$ is randomly generated
in the interval $[100, 500]$.
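For reproducibility, the data generation just described can be sketched as follows (a paraphrase of the text, not the original MATLAB code; the dictionary layout and seed handling are ours):

```python
import random

def random_instance(N, n, seed=0):
    """Generate one random test problem as described in the text:
    cost h_j(x) = a_j x + ln(1 + gamma_j x) for the n concave firms,
    h_i(x) = mu_i x for the remaining N - n firms, strategy sets [0, u_i]."""
    rng = random.Random(seed)
    alpha = rng.uniform(20.0, 30.0)
    beta = rng.uniform(0.001, 0.005)
    mu = [rng.uniform(10.0, 20.0) for _ in range(N - n)]
    gamma = [rng.uniform(7.0, 15.0) for _ in range(n)]
    a = [rng.uniform(2.0, 7.0) for _ in range(n)]
    u = [rng.uniform(100.0, 500.0) for _ in range(N)]
    return {"alpha": alpha, "beta": beta, "mu": mu,
            "gamma": gamma, "a": a, "u": u}
```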
The obtained results are reported in Table $4.1$ below, where we use the following headings:
\begin{itemize}
\item $N$: number of the firms;
\item $n$: number of the firms having concave (but not affine) cost;
\item {\it Average time}: the average time (in seconds) needed to solve one
problem;
\item {\it Average iter}: the average number of
iterations for one problem.
\item {\it Glob-GSCB}: number of problems for which an equilibrium point
was obtained by the Search-Check-Branch Algorithm for global equilibria.
\item {\it Glob-LSCB}: number of problems for which the equilibrium point
obtained by the Search-Check-Branch Algorithm for local equilibria was in fact a global one.
\end{itemize}
\begin{center}
\begin{tabular}{|c|c|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|}
\hline
\multicolumn{2}{|c|}{Size} & \multicolumn{3}{|c|}{GSCB-Alg.} &
\multicolumn{3}{|c|}{LSCB-Alg.} \\ \hline N & n & Average time
& Average iter. &Glob-GSCB & Average time & Average iter & Glob-LSCB \\
\hline
5 & 5 & 0.00 & 1 & 10 & 0.03 &1& 10 \\
50 & 5 & 8.98 & 133 & 10 & 0.06 & 1&10 \\
100 & 5 & 17.89 & 171 & 10 & 0.18 & 2&8 \\
200 & 5 &1.78 & 7 & 10 & 0.29 & 2&8 \\ \hline
10 & 10 & 9.65 & 308 & 10 & 0.05 &1& 8 \\
50 & 10 & 82.35 & 1141 & 10 & 0.22 & 4 & 4 \\
100 & 10 & 47.05 & 445 & 10 & 0.43 & 5& 7 \\
200 & 10 & 41.06 & 203 & 10 & 0.33 & 2& 7 \\ \hline
20 & 20 & 127.15 & 2478 & 10 & 1.29 & 24&1\\
50 & 20 & 98.10 & 1231 & 10 & 0.50 & 7 &3\\
100 & 20 & 105.00 & 914 & 10 & 1.72 & 16 &3 \\
200 & 20 & 440.88 & 2216 & 10 & 1.94 & 11& 5 \\ \hline
30 & 30 & 286.57 & 3754 & 10 & 0.89 & 13 & 2 \\
50 & 30 & 246.44 & 2901 & 10 & 1.23 & 17& 1\\
100 & 30 & 872.27 & 7193 & 10 & 0.73 & 7 & 2 \\
200 & 30 & 750.72 & 3514 & 10 & 2.70 & 15 & 4 \\ \hline
40 & 40 & 515.10 & 5944 & 10 & 3.09 & 40 & 2 \\
50 & 40 & 1332.10 & 14820 & 9 & 7.69 & 97 & 0 \\
100 & 40 & 646.53 & 5213 & 10& 2.85 & 26 & 0 \\
200 & 40 & 898.09 & 4169 & 9& 3.83 & 21 & 1 \\ \hline
100 & 100 & Skip & - & - & 20.21 & 148 & 0 \\
200 & 100 & Skip & - & - & 132.64 & 568 & 0 \\
200 & 200 & Skip & - & -& 107.63 & 400 & 0 \\
300 & 200 & Skip & - & - & 252.67 & 579 & 0 \\ \hline
\end{tabular}
\end{center}
\centerline{Table 4.1}
\indent From the results reported in Table $4.1$ we can draw the
following conclusions for the tested concave cost functions given in \eqref{excost}.\\
\indent $\bullet$ Algorithm $2$ for global equilibrium points can solve
models with a moderate number ($n\leq 40$) of concave cost
functions, while Algorithm $3$ can solve models where the number of
concave cost functions is much larger.\\
\indent $\bullet$ For models where the number of the firms having concave
cost is somewhat large ($n \geq 40$), the local equilibrium
point obtained by the local algorithm is often not a global one.
\section{Conclusion}
A Nash-Cournot oligopolistic equilibrium
model involving concave cost functions may have local equilibrium
points that are not global ones. We have approximated such a model
by one having piecewise linear concave cost functions, using the
convex envelope of a separable concave function over a box. Based
upon this approximation we have proposed two algorithms for
approximating global as well as local equilibrium points, which
employ a gap function as a stopping criterion for the algorithms, and
an adaptive rectangular bisection to improve the approximation.
Some computational results have been reported showing the efficiency of
the proposed algorithms for models where the number of concave
(but not affine) cost functions is not large ($n\leq 40$ for the
global algorithm, and $n \leq 200$ for the local one). An open question
that would be interesting for further investigation is to find a
differentiable gap function, to which local optimization
algorithms such as the descent methods in \cite{Fu1a} or DCA in
\cite{Ph1} could be applied efficiently.
\section*{Acknowledgements}
This work is supported by the National Foundation for Science and Technology Development (NAFOSTED), Vietnam.
{
"timestamp": "2018-05-08T02:10:22",
"yymm": "1805",
"arxiv_id": "1805.02171",
"language": "en",
"url": "https://arxiv.org/abs/1805.02171"
}
\section{Introduction}
The identification of genuine properties of a system, from single molecules to complex composite systems, represents a primary goal for physical and chemical analysis. As metrological requirements become increasingly demanding in terms of performance, understanding the ultimate precision achievable in the estimation of a parameter represents a key issue. In this respect, quantum metrology, aiming at designing protocols to perform optimal measurements, stands out as one of the most appealing and intriguing fields of research and applications~\cite{giovannetti2004,giovannetti2006}.
Phase estimation has long represented the heart of quantum metrology~\cite{Paris08,giovannetti2011,RafalReview}: in a large number of technological areas the estimation problem is concerned with determining a single parameter, and this is typically manifested as a phase shift of the quantum state describing the probe. The engineering of such a state then aims at providing the optimal choice for an enhanced sensitivity in the estimation: particular families of states, such as squeezed~\cite{caves1981,breitenbach1997,rozema2014} or {\it N}00{\it N} states~\cite{lee2002,afek2010,giovannetti2011,joo2011}, are often used to feed interferometers, showing how nonclassicality represents the primary ingredient of the probe states. Nevertheless, an increase in sensitivity is traded against the robustness of the quantum state: the more informative these resources are, the more difficult they are to obtain and the more fragile they become. Their metrological yield can then be spoilt by ungoverned or spurious couplings, unavoidable in any real experiment~\cite{dorner2009,kacprowicz2010,peter2011,datta2011,escher2011,knysh2011,demko2012,gill2000,macchiavello2003,ballester2004,pinel2013,genoni2013,sergey2013,crowley2014,Birchall2016}.
In the context of noisy quantum metrology, several attempts have been made to restore a quantum advantage~\cite{Matsuzaki2011,Chin12, Chaves2013,Kessler14,Arrad14,Dur14, Brask2015, Sekatski2015a,Plenio2016,Gefen2016, Smirne16, Haase2017, Gorecka2017,Layden2017,Sekatski2017,Matsuzaki2017,Zhou2018,Albarelli2018}; in all these metrological schemes, however, a proper characterization of the noise affecting the system is required. Such characterization cannot always be performed in advance: for instance, in time-varying cases the noise process itself can change, and it is then important to design strategies that treat the assessment of both unitary parameters, such as phases, and dissipative parameters, including loss or phase diffusion, on an equal footing by adopting a multiparameter approach. Such extended characterization is akin in spirit to channel tomography~\cite{orieux2013,rozema2014,zhou2015}, aside from the important difference that one allows for a single choice of probes, and not for a tomographically complete family. Multiparameter estimation has been the subject of intensive research over the last years, and this has highlighted the emergence of a trade-off in the achievable precision on individual parameters in many practical instances~\cite{vaneph2013,Vidrighin14,pezze2017,magdalena2017,roccia2017a}. On the other hand, working in a multiparameter setting also brings the advantage of making the estimation process more robust against small deviations of the designed probes from the optimal states~\cite{Vidrighin14}.
Here we present an estimation experiment in which the multiparameter approach is followed to obtain the value of a phase shift and, at the same time, a reliable estimate of the quality of the probe that actually investigates the material, corresponding in our case to the mode indistinguishability of two input Fock states.
In contrast to \cite{Birchall2016}, where phase estimation with not perfectly indistinguishable photons is investigated, here the resources are devoted to both estimation tasks. Hence, by estimating both quantities at the same time, it is possible to reduce biases due to uncertain knowledge of the probe.
We apply this method to chiral aqueous solutions of fructose, probed by two-photon {\it N}00{\it N} states. The theoretical generalization to higher photon numbers $N$ demonstrates the presence of a trade-off in the scaling associated with the precision on phase and mode distinguishability.
\section{Two-parameter estimation of phase and visibility}
A common setup in quantum phase estimation uses single-photon pairs produced via a spontaneous parametric down conversion (SPDC) process: in the typical scheme, the two photons are first combined on a beam splitter (BS), so that Hong-Ou-Mandel interference~\cite{HOM1987} produces a $N$00$N$ state with $N{=}2$, {\it i.e.} a superposition of both photons being present in one mode and none in the other. The monitored element, imparting a phase shift $\phi$, is then inserted on one of the modes, with the other left unperturbed. In the detection scheme the two modes are recombined on a second BS, with photon counters on the outputs. The combination of the nonclassicality of the state and of the optimality of the measurement choice results in oscillations of the photon counting probabilities occurring with a phase $2\phi$, hence in a precision superior to that attainable with classical light of the same average energy. This strategy, although effective, clashes with the non-ideal visibility $v$ of the two-photon interference on the two BSs: a second characteristic parameter to be estimated is then introduced. The value of $v$ is limited both by the distinguishability of the two photons in their spectral and spatial degrees of freedom and by dephasing or depolarisation mechanisms taking place inside the sample. Therefore a preliminary calibration performed under conditions which do not reflect those present at the time of phase estimation might weaken the metrological capabilities of the protocol. We have then explored the alternative approach of assessing the values of $\phi$ and $v$ simultaneously.
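For concreteness, a minimal numerical model (our illustration, not the full analysis of the experiment) takes the two counting probabilities as $p_\pm = (1 \pm v\cos 2\phi)/2$, from which the classical Fisher information matrix for the pair $(\phi, v)$ follows directly:

```python
import math

def detection_probs(phi, v):
    """Two-outcome detection model for a two-photon N00N interferometer:
    counting probabilities oscillate at twice the phase, with fringe
    visibility v (a common parametrization; the exact detector mapping
    is setup-dependent)."""
    p_plus = 0.5 * (1.0 + v * math.cos(2.0 * phi))
    return p_plus, 1.0 - p_plus

def fisher_matrix(phi, v):
    """2x2 classical Fisher information matrix for (phi, v) under the
    model above; entries are sums over the two outcomes of
    (d p / d theta_a)(d p / d theta_b) / p."""
    p, q = detection_probs(phi, v)
    dp_dphi = -v * math.sin(2.0 * phi)   # d p_plus / d phi
    dp_dv = 0.5 * math.cos(2.0 * phi)    # d p_plus / d v
    F = [[0.0, 0.0], [0.0, 0.0]]
    for prob, sign in ((p, 1.0), (q, -1.0)):   # d q = -d p for both params
        F[0][0] += (sign * dp_dphi) ** 2 / prob
        F[0][1] += (sign * dp_dphi) * (sign * dp_dv) / prob
        F[1][1] += (sign * dp_dv) ** 2 / prob
    F[1][0] = F[0][1]
    return F
```

In this model, at $v = 1$ the phase entry equals $N^2 = 4$ per detected pair, recovering the ideal two-photon sensitivity, while a reduced visibility degrades it.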
\begin{figure}[t]
\centering
{\includegraphics[width=0.95\linewidth]{Fig1_2.pdf}}
\caption{Experimental set-up: each of the two single photons (wavelength 810~nm) of the pair, generated via type-I SPDC from a Beta Barium Borate (BBO, 3~mm length) non-linear crystal excited by a continuous-wave (80~mW power) pump laser, passes through a half-wave plate (HWP1 at 0$^{\circ}$ and HWP2 at 45$^{\circ}$) before being combined on a polarizing beam splitter (PBS1). These photons are used to estimate the birefringent phase imparted by the optical activity of a chiral solution. A wave plate (HWP3) and a second polarizer (PBS2) project the outcoming photons onto different polarizations. In the calibration procedure, an additional HWP, not sketched here, replaces the solution to impart a well-defined phase.}
\label{fig:setup}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.435\textwidth]{Fig2a.pdf}
\includegraphics[width=0.4\textwidth]{Fig2b.pdf}
\includegraphics[width=0.4\textwidth]{Fig2c.pdf}\hspace{0.5cm}
\hspace{0.5cm}\includegraphics[width=0.4\textwidth]{Fig2d.pdf}
\caption{Multiparameter Bayesian estimation for setup calibration. Panel (a): estimated phase (blue triangles, left scale) and visibility (green circles, right scale) vs. calibration phase. Dashed lines are linear fits of the data. Panels (b) and (c): estimated variance (times the number of resources $M$) for visibility (b) and phase (c) as a function of the imparted phase. The dashed lines represent the corresponding CRBs. Panel (d): estimated covariance for the visibility and phase as a function of the imparted phase. The dashed line represents the corresponding CRB. All covariance matrices have been estimated from $M\simeq70$K repetitions. Error bars are smaller than the marker size for all data.}
\label{fig:calibration}
\end{figure*}
Figure~\ref{fig:setup} shows the experimental apparatus we used to implement the phase estimation. Two photons with mutually orthogonal polarizations, horizontal ($H$) and vertical ($V$), are combined on a polarizing beam splitter (PBS). Having very similar spectra, the two photons are highly indistinguishable, and their perfect interference would produce the {\it N}00{\it N} state in the left- ($L$) and right-circular ($R$) polarization modes:
\begin{equation}
\begin{aligned}
\hat{a}^{\dagger}_H \hat{a}^{\dagger}_V \vert0\rangle=&\frac{1}{2}\left(({\hat{a}^{\dagger}_R})^2-({ \hat{a}^{\dagger}_L})^2\right)\vert0\rangle\\
=&\frac{1}{\sqrt{2}}\left(\vert2_R,0_L\rangle-\vert0_R,2_L\rangle\right).
\end{aligned}
\end{equation}
Introducing a phase $\phi$ on the $R$-mode is equivalent to rotating a linear polarization by an angle $\phi/2$, and modifies the state as
\begin{equation}
\vert \psi \rangle=\cos{\phi}\, \hat{a}^{\dagger}_H \hat{a}^{\dagger}_V \vert0\rangle-\sin{\phi}\,\frac{(\hat{a}^{\dagger}_H)^2-(\hat{a}^{\dagger}_V)^2}{2} \vert0\rangle.
\label{eq:state}
\end{equation}
The phase $\phi$ modulates the populations in the states $\vert \uparrow \rangle = \vert 1_H,1_V\rangle$ and $\vert \downarrow \rangle = \left(\vert2_H,0_V\rangle-\vert0_H,2_V\rangle\right)/\sqrt{2}$, which represent the basis of an effective two-level system, {\it i.e.} a qubit. The detection scheme consists of a half wave plate (HWP) and a second PBS, allowing the selection of arbitrary linear polarizations via the angular position $\theta$ of the HWP. Photon counting is performed by fiber-coupled avalanche photodiodes (APD) placed on each of the two output arms of the PBS. In the realistic case when the modulations in the state defined in \eqref{eq:state} occur with visibility $v$, the relevant detection probabilities are:
\begin{equation}
\begin{aligned}
p_1(\theta|\phi,v)&=\frac{1}{1+v}\left(1+v\cos(8\theta-2\phi)\right)\\
p_2(\theta|\phi,v)&=\frac{v}{1+v}\sin^2(4\theta-\phi).
\end{aligned}
\label{eqn:prob_2par}
\end{equation}
where $p_1(\theta|\phi,v)$ describes the probability of a coincidence count between the two arms (associated to $\vert \uparrow \rangle$), and $p_2(\theta|\phi,v)$ is the probability of finding two photons in either arm (both events are associated to $\vert \downarrow \rangle$). Because of the underlying single-qubit structure of the state in \eqref{eq:state}, at least two settings of $\theta$ must be chosen to resolve the two parameters: this amounts to performing a positive operator-valued measure (POVM) with 2$\times$3 outcomes. Furthermore, since our detectors cannot resolve the photon number, we have actually adopted four settings of $\theta$ (viz. $\theta=\{0,\;\pi/16,\;\pi/8,\; 3\pi/16\}$), and used the post-selected probabilities
\begin{equation}
p(\theta|\phi,v)=\frac{1}{4}\left(1+v\cos(8\theta-2\phi)\right),
\end{equation}
which only consider the coincidence events for each setting. In the post-selection picture, the probability above treats $\theta$ as the outcome of the measurement scheme. Assuming that the four settings are in fact performed randomly, each one with probability 1/4, Eq. (4) quantifies the probability that the coincidence event detected corresponds to the particular setting $\theta$. Data are collected in the form of a vector $\bar n$, formed by four coincidence count rates $n_\theta$ associated to each setting $\theta=\{0,\;\pi/16,\;\pi/8,\; 3\pi/16\}$: therefore, we post-select 4 out of the possible 4$\times$3 outcomes.
An experimental joint distribution for the measured values of $\phi$ and $v$ is obtained by Bayesian estimation. This consists in using Bayes's theorem to update the {\it a priori} joint probability $P_A(\phi,v)$, based on the knowledge of the measured values $n_\theta$: $P_B(\phi,v | \bar n) = {\mathcal N} P_A(\phi,v) \prod_\theta p(\theta|\phi,v)^{n_\theta}$ ($ {\mathcal N}$ is a normalization constant).
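The Bayesian update described above can be sketched numerically. In this minimal Python example, the ``true'' parameter values, the flat prior and its ranges, and the number of repetitions are illustrative choices, not the experimental ones; coincidence counts are simulated from the post-selected probabilities and the posterior is accumulated on a grid:

```python
import math
import random

# Minimal sketch of the Bayesian update P_B(phi, v | n).  The four HWP
# settings are those quoted in the text; the "true" parameter values,
# prior ranges and number of repetitions below are illustrative.
settings = [0.0, math.pi / 16, math.pi / 8, 3 * math.pi / 16]
phi_true, v_true, M = 0.4, 0.9, 20000

def p(theta, phi, v):
    # Post-selected coincidence probability for one setting
    return 0.25 * (1.0 + v * math.cos(8 * theta - 2 * phi))

# Simulate the count vector n_theta (the four probabilities sum to 1)
random.seed(1)
counts = [0, 0, 0, 0]
for _ in range(M):
    r, acc = random.random(), 0.0
    for i, t in enumerate(settings):
        acc += p(t, phi_true, v_true)
        if r <= acc:
            counts[i] += 1
            break

# Flat prior on a (phi, v) grid; the update is done in log space to
# avoid numerical underflow for large counts
phis = [0.2 + 0.4 * k / 99 for k in range(100)]
vs = [0.7 + 0.3 * k / 99 for k in range(100)]
logpost = [[sum(n * math.log(p(t, ph, vv)) for n, t in zip(counts, settings))
            for vv in vs] for ph in phis]
top = max(max(row) for row in logpost)
post = [[math.exp(x - top) for x in row] for row in logpost]
norm = sum(map(sum, post))

# First moments of the marginals: the Bayesian estimates of phi and v
phi_B = sum(ph * sum(row) for ph, row in zip(phis, post)) / norm
v_B = sum(vs[j] * post[i][j] for i in range(100) for j in range(100)) / norm
```

With these illustrative settings the estimates concentrate around the simulated values; in the experiment, the measured counts $n_\theta$ replace the simulated ones.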
\section{Experimental Results}
We have tested the performance of our experiment with a calibration step, by inserting an additional HWP between the two PBSs: this imparts a set phase $\varphi$ depending on its angle setting, and provides a test of the metrological capabilities of our multiparameter strategy. Fig.~\ref{fig:calibration}a shows, as a function of the imparted phase, the results of the measurement of $\phi$ and $v$ from $P_B(\phi,v|\bar n)$, quantified as the first moments of the marginal distributions $\phi_B$ and $v_B$:
\begin{equation}
\begin{aligned}
&\phi_B=\int \phi\,P_B(\phi,v|\bar n) d\phi\,dv,\\
&v_B=\int v\,P_B(\phi,v|\bar n) d\phi\,dv,
\end{aligned}
\end{equation}
with the integration limits set by the width of $P_A(\phi,v)$. A linear regression of the values highlights the quality of the phase estimation: the slope is $s_\phi=1.011\pm0.004$, in agreement with the expected value 1. Concerning the visibility, the estimation instead appears to be affected by fluctuations around a constant mean value, as the slope of the linear fit of those data confirms, $s_v=-0.001\pm0.003$. Such fluctuations can be considered as the manifestation of spurious effects not accounted for in our modelling, such as a different optical coupling of the initial $H$ and $V$ photons into the two different fibers.
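The slope check above can be reproduced with a short least-squares sketch; the data here are synthetic (unit slope plus small Gaussian scatter), standing in for the measured calibration points:

```python
import random

# Least-squares slope of estimated phase vs. imparted phase.
# Synthetic calibration data with unit slope and a small Gaussian
# scatter stand in for the measured values.
random.seed(0)
imparted = [0.1 * k for k in range(30)]
estimated = [x + random.gauss(0.0, 0.01) for x in imparted]

n = len(imparted)
mx = sum(imparted) / n
my = sum(estimated) / n
s_phi = (sum((x - mx) * (y - my) for x, y in zip(imparted, estimated))
         / sum((x - mx) ** 2 for x in imparted))
# s_phi is close to the ideal slope of 1
```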
\begin{figure*}[t]
\centering
\includegraphics[width=0.47\textwidth]{Fig3_a.pdf}
\includegraphics[width=0.47\textwidth]{Fig3_b.pdf}
\caption{Bayesian joint probabilities for visibility ($V$) and phase ($\Phi$) (upper panels) and their difference (right-hand colored scale) with respect to the one at the CRB (lower panels) for fructose (sucrose) in aqueous solution. A number $M\simeq50$K ($M\simeq75$K) of repetitions has been employed.}
\label{fig:fructose}
\end{figure*}
A more stringent test in metrology is the verification of the Cram{\'e}r-Rao bound (CRB). This sets a lower bound to the covariance matrix $\Sigma$ of the estimated parameters, whose elements are defined as the second moments of $P_B(\phi,v|\bar n)$:
\begin{equation}
\begin{aligned}
&\Delta^2\phi = \Sigma_{\phi,\phi}=\int (\phi-\phi_B)^2 P_B(\phi,v|\bar n)d\phi\,dv,\\
&\Delta^2v = \Sigma_{v,v}=\int (v-v_B)^2 P_B(\phi,v|\bar n)d\phi\,dv,\\
&\Sigma_{\phi,v}=\Sigma_{v,\phi}=\int (\phi-\phi_B)(v-v_B) P_B(\phi,v|\bar n) d\phi\,dv.
\label{sigma}
\end{aligned}
\end{equation}
The measurement strategy is characterised by its Fisher information matrix ${\mathcal F}$, whose elements are:
\begin{equation}
\mathcal{F}_{ij}=\sum_\theta \frac{\partial_i p(\theta|\phi,v)\,\partial_j p(\theta|\phi,v)}{p(\theta|\phi,v)}
\label{eq:fisher}
\end{equation}
where $i$ and $j$ each run over $\phi$ and $v$. The CRB asserts that, given a number of trials $M$, the covariance matrix is bounded as
\begin{equation}
\Sigma \geq \mathcal{F}^{-1}/M.
\label{crb}
\end{equation}
This matrix inequality holds in the asymptotic limit of a large number of trials, and sets lower bounds for the individual precisions $\Delta^2\phi$ and $\Delta^2v$, as well as on their covariance $\Sigma_{\phi,v}$.
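For the two-parameter problem at hand, the Fisher matrix and the bound above are straightforward to evaluate numerically; in this Python sketch the working point $(\phi, v, M)$ is an illustrative choice:

```python
import math

# Numerical sketch: Fisher matrix of the post-selected probabilities
# and the corresponding Cramer-Rao bound (parameter order: phi, v).
# The working point (phi = 0.4, v = 0.9, M = 70000) is illustrative.
settings = [0.0, math.pi / 16, math.pi / 8, 3 * math.pi / 16]

def fisher(phi, v):
    F = [[0.0, 0.0], [0.0, 0.0]]
    for t in settings:
        prob = 0.25 * (1.0 + v * math.cos(8 * t - 2 * phi))
        grad = [0.5 * v * math.sin(8 * t - 2 * phi),  # d prob / d phi
                0.25 * math.cos(8 * t - 2 * phi)]     # d prob / d v
        for i in range(2):
            for j in range(2):
                F[i][j] += grad[i] * grad[j] / prob
    return F

def crb(phi, v, M):
    # CRB = F^{-1} / M for the 2x2 Fisher matrix
    F = fisher(phi, v)
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return [[F[1][1] / (det * M), -F[0][1] / (det * M)],
            [-F[1][0] / (det * M), F[0][0] / (det * M)]]

# Lower bounds on Delta^2 phi, Delta^2 v and their covariance
Sigma_bound = crb(0.4, 0.9, 70000)
```

The diagonal entries of `Sigma_bound` are the lower bounds on $\Delta^2\phi$ and $\Delta^2 v$ at that working point.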
The conditional probabilities in~\eqref{eq:fisher}, describing our experiment, correspond to the post-selected coincidence events of our detectors. The effect of post-selection on estimation precision has been described in detail in the literature \cite{gendra2013,combes2014}, and the consequences on our experimental results are discussed further in the Appendix. It is important to notice that, while post-selection has typically been investigated as a tool to enhance the estimation precision, in our case it is a consequence of the limitations of the experimental setup.
In Fig.~\ref{fig:calibration} we report the measured uncertainties and covariance, in very good agreement with the values predicted by the CRB in \eqref{crb} with $M=\sum_\theta n_\theta$. Oscillations of the attainable precisions in \eqref{sigma} can be observed: the available information is distributed between the phase and the visibility, depending on the value of $\phi$; the covariance is modulated as well, and the best estimation of either individual parameter corresponds to minimal correlation.
As an application of our protocol, we perform the estimation of the phase imparted by aqueous solutions of sugars. Sugars are a well-known example of chiral molecules, able to rotate an initial linear polarization. Monitoring their optical activity via light-matter coupling can thus represent a valuable approach to infer information on their interaction with the surroundings. The most relevant environment for their application is the aqueous solution: an investigation with quantum light has been undertaken in \cite{Tischlere2016} in a single-parameter approach. Fig.~\ref{fig:fructose} reports the Bayesian joint probability distributions for two different sugar aqueous solutions, namely of fructose (F) and sucrose (S), at the same nominal concentration of $c=0.3$ g/ml. The upper 3D plots show the reconstructed distributions, which yield the average values $\phi_F=-0.145$ rad and $\phi_S=0.089$ rad, consistent with the values obtained by using classical light of a close wavelength (808 nm) in the same apparatus. The underlying contour plots show the difference between the reconstructed distribution and the expected Gaussian saturating the CRB; for both solutions, the deviations remain of the order of 0.01. In order to assess quantitatively how close our estimation lies to the CRB, we adopt the likelihood ratio test predicting that, under the null hypothesis that $\Sigma$ saturates \eqref{crb}, the variable
\begin{equation}
l = M^2 \text{Tr}\left(F\cdot\Sigma\right)-M\left(\ln\det(\Sigma)+\ln\det(M\,F)\right)-2M
\end{equation}
is distributed as a $\chi^2$ variable with 3 degrees of freedom~\cite{anderson2003}. The measured values for the two solutions are $l_{F}=2.63$ and $l_{S}=0.10$, both below the critical value $7.81$ at the 95\% confidence level.
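A sketch of this test is given below. The statistic is written here in the form $M\,(M\,\mathrm{Tr}(F\Sigma) - \ln\det(M F \Sigma) - 2)$, chosen so that it vanishes exactly when $\Sigma$ saturates the bound, as required for a comparison with the $\chi^2_3$ critical value; the Fisher matrix used is an illustrative stand-in:

```python
import math

# Likelihood-ratio check that a 2x2 covariance matrix saturates the
# bound Sigma0 = F^{-1}/M.  The statistic below is zero exactly at
# saturation and is compared with the chi-squared critical value for
# 3 degrees of freedom (7.81 at 95% confidence).

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def tr_prod(A, B):
    # Tr(A.B) for 2x2 matrices
    return sum(A[i][j] * B[j][i] for i in range(2) for j in range(2))

def lrt(F, Sigma, M):
    return M * (M * tr_prod(F, Sigma)
                - math.log(M * M * det2(F) * det2(Sigma)) - 2.0)

# Illustrative Fisher matrix; at exact saturation the statistic is ~0,
# well below the 95% critical value 7.81
F = [[2.7, 0.03], [0.03, 0.84]]
M = 70000
d = det2(F)
Sigma0 = [[F[1][1] / (d * M), -F[0][1] / (d * M)],
          [-F[1][0] / (d * M), F[0][0] / (d * M)]]
l0 = lrt(F, Sigma0, M)
```

Any excess variance in $\Sigma$ makes the statistic grow rapidly, which is what gives the test its discriminating power.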
\section{Scaling laws for multiparameter estimation}
\begin{figure*}[t]
\includegraphics[width=0.325\textwidth]{Fig4a.pdf}
\includegraphics[width=0.325\textwidth]{Fig4b.pdf}
\includegraphics[width=0.315\textwidth]{Fig4c.pdf}
\caption{Scaling of the Fisher information: (a) effective Fisher information for the distinguishability parameter $\epsilon$. The points correspond to numerical results, and the solid lines correspond to $2N^2$ (black) and $N$ (red). (b) effective Fisher information for the phase $\phi$. The points correspond to numerical results, and the solid lines correspond to $2N(N+1)$ (black) and $N$ (red). (c) Trade-off in the optimality of individual estimations quantified by $\Upsilon$. In all plots: {\color{blu} $\bullet$}: $\epsilon=0.14$, {\color{ocra} $\blacksquare$}: $\epsilon=0.23$, {\color{verde} $\blacklozenge$}: $\epsilon=0.32$, {\color{rosso} $\blacktriangle$}: $\epsilon=0.50$, {\color{viola} $\blacktriangledown$}: $\epsilon=1$.}
\label{fig:scaling}
\end{figure*}
The usefulness of quantum resources is typically assessed by looking at how the precision on given parameters scales with the number of photons $N$ in the probe. For phase estimation, quantum probes can reach a scaling of the Fisher information as $N^2$, while classical resources are limited to $N$. For loss, the Fisher information grows as $N$ for both classical and quantum probes~\cite{monras2007,genoni2011}. For these purposes we generalized our analysis to states with $2N$ photons: we consider Holland-Burnett (HB) states~\cite{Holland1993}, obtained by the quantum interference of two $N$-photon states arriving on input modes with creation operators $a_{H}^\dagger$ and $b_{V}^\dagger$. A phase $\phi$ is then inserted, and the detection scheme considers a second interference, followed by photon-number resolving detectors on each arm. In order to account for distinguishability, we take the standard decomposition $b_{V}^\dagger=\sqrt{1-\epsilon^2}\,a_V^\dagger+\epsilon\,q_V^\dagger$, where $a_V^\dagger$ interferes perfectly with $a_{H}^\dagger$, while $q_V^\dagger$ does not. The parameter $\epsilon$ defines the distinguishability of the two modes: from $\epsilon=0$ for perfect indistinguishability to $\epsilon=1$ for complete distinguishability. We are interested in how the available Fisher information associated to $\phi$ and to $\epsilon$ scales with the number of photons $N$: we use as quantifiers the effective values $\tilde F_{i,i}=1/(F^{-1})_{i,i}$ for $i=\phi,\epsilon$, optimized over all possible phases.
The results of our numerical simulations are reported in Fig.~\ref{fig:scaling}a and b. For moderate distinguishability, the effective Fisher information on $\phi$ decreases with respect to its value $2N(N+1)$ at $\epsilon=0$~\cite{Holland1993}, but retains a quicker growth than the classical scaling as $N$, obtained for $\epsilon=1$.
Nevertheless, while the effective Fisher information $\tilde{F}_{\phi\phi}$ is reduced due to the presence of correlations between the two parameters, the plot leads us to conjecture that, as observed in \cite{Birchall2016} for the phase-estimation-only problem, an asymptotic quadratic scaling is maintained also for distinguishability $0<\epsilon<1$. Regarding the distinguishability $\epsilon$, we note a non-monotonic behavior: the information initially decreases with respect to the linear scaling, but a quadratic behavior $2N^2$ is eventually observed in the limit $\epsilon=1$. These optimal values, however, are obtained for different phases $\phi$: in general, it is not possible to satisfy the optimality conditions for both parameters at once. In order to understand how the information is partitioned, we adopt the parameter:
\begin{equation}
\Upsilon =\max_\gamma \left(\frac{\tilde F_{\phi,\phi}(\gamma)}{\max_\alpha \tilde F_{\phi,\phi}(\alpha)}+\frac{\tilde F_{\epsilon,\epsilon}(\gamma)}{\max_\beta \tilde F_{\epsilon,\epsilon}(\beta)}\right).
\end{equation}
This figure of merit quantifies the overall effectiveness of the measurement scheme: varying the phase $\phi$, we calculate the sum of the ratios between the effective Fisher information for $\phi$ and $\epsilon$, respectively, and their maximum values, each reached at a particular value of $\phi$. The corresponding results are shown in Fig.~\ref{fig:scaling}c: for each value of $\epsilon$ there exists a value of $N$ that achieves the best compromise in the jointly attainable precision. Our numerical results also suggest that, while for highly noisy probes (large values of $\epsilon$) the optimal value occurs for small $N$, for nearly-ideal probes (small values of $\epsilon$) optimality is reached for larger values of $N$, where the quantum enhancement in the phase estimation is more prominent.
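To make the definition of $\Upsilon$ concrete, the following toy sketch evaluates it for two hypothetical effective-Fisher-information curves that peak at different phases (the functional forms are invented purely for illustration):

```python
import math

# Toy evaluation of the trade-off quantifier Upsilon.  The two
# effective Fisher informations are modelled by hypothetical
# phase-dependent curves peaking at different values of the phase.
phases = [2 * math.pi * k / 200 for k in range(200)]

def F_phi_eff(g):
    return 2.0 + 1.5 * math.cos(2 * g)   # peaks at g = 0

def F_eps_eff(g):
    return 1.0 - 0.8 * math.cos(2 * g)   # peaks at g = pi/2

max_phi = max(F_phi_eff(g) for g in phases)
max_eps = max(F_eps_eff(g) for g in phases)

# Upsilon: best jointly attainable fraction of the two separate optima;
# it ranges between 1 (full trade-off) and 2 (no trade-off)
upsilon = max(F_phi_eff(g) / max_phi + F_eps_eff(g) / max_eps
              for g in phases)
```

Because the two toy curves peak at different phases, $\Upsilon$ falls strictly between 1 and 2, mirroring the trade-off discussed above.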
\section{Conclusions}
Multiparameter estimation can be an effective way to tackle the problem of operating quantum sensors in the presence of noise, an unavoidable challenge in realistic conditions. We have applied such an approach to integrate phase estimation with a simultaneous characterization of the probe, by measuring phase and visibility of interference fringes at once. Depending on the value of the phase, oscillations in the achieved precision on individual parameters are observed, and correlations are introduced. The measurement scheme has been used to investigate the optical activity of fructose solutions. Numerical simulations have been undertaken to study how the precisions scale with the photon number in Holland-Burnett states.
Our results highlight the presence of trade-off conditions, as part of the information needs to be devoted to determining the quality of the probe, at the expense of the precision on the phase. Realising the promise of quantum sensing will require understanding the price of achieving robust operation in unfavourable conditions: our study is an important step in this direction.
\section*{Acknowledgments}
We thank P. Aloe, F. Somma, A. Sodo, F. Bruni, and M.G.A Paris for useful discussions. This work has been funded by the European Commission via the Horizon 2020 Programme (Grant Agreement No. 665148 QCUMbER). MGG acknowledges support from Marie Sk\l{}odowska-Curie Action H2020-MSCA-IF-2015 (project ConAQuMe, grant no. 701154). The Grant of Excellence Departments, MIUR (ARTICOLO 1, COMMI 314--337 LEGGE 232/2016), is gratefully acknowledged.
\section{Introduction}
Circular or highly elliptical light pulses in the extreme ultraviolet spectral range offer numerous applications in chiral-sensitive light-matter interactions in both gas and condensed phase, ranging from chiral recognition \cite{Cireasa2015, Beaulieu2016} to time-resolved magnetization dynamics
and spin currents~\cite{Boeglin2010,Cavalieri2007, Graves2013, Bigot2013, Bigot2009, Stanciu2007, Kirilyuk2010}, not only in
condensed matter but also in isolated atoms \cite{barth2014hole}.
Until recently,
such radiation has only been available at large-scale facilities (e.g. synchrotrons, XFELs), with ultrafast time resolution requiring
free electron lasers.
On the other hand, laboratory-based sources of pulses of attosecond duration ($1$~as $=10^{-18}$~s)
are becoming broadly available, allowing one
to monitor and control electronic dynamics at their intrinsic (attosecond) timescales \cite{Hentschel2001, Drescher2002, Goulielmakis2010, Sansone2010, Krausz2009, Calegari2014, Gruson2016}. However, the available attosecond pulses are typically linearly polarized, which limits their applicability, making them unable to probe and control processes such as ultrafast spin and chiral dynamics. The latter require the generation of attosecond pulses with a controllable degree of polarization.
Ultrashort XUV/X-Ray pulses are obtained via the high harmonic generation (HHG) process~\cite{Krausz2009},
and their production is linked to the recombination of the electron promoted to
the continuum with the hole it has left in the parent atom or molecule \cite{Smirnova2009}.
Such recombination is more likely when the electron is driven by a linearly polarized laser field,
leading to linearly polarized harmonics. The suppression of electron return to the parent ion in strongly elliptic fields means that a brute-force approach to the generation of highly elliptically polarized attosecond pulses by using a highly elliptical driver fails. One practical solution to this problem is to use
two-color driving fields in the so-called bicircular configuration, originally proposed
in \cite{Eichmann1995,Zuo1995, Milosevic2000}.
Following elegant recent experiments \cite{Fleischer2014}, this proposal has finally attracted the attention it deserves \cite{Fleischer2014, Pisanty2014, Ivanov2014, Hickstein2015, Medisauskas2015, Kfir2015, Ferre2015, Fan2015,Jimenez-Galan2017, Pisanty2017},
including schemes for the generation of chiral attosecond pulses~\cite{Medisauskas2015,Kfir2016,Hernandez-Garcia2016, Bandrauk2016, Odzak2016, Baykusheva2016}.
The bicircular configuration consists of combining a circularly polarized fundamental field
with its counter-rotating second harmonic. The Lissajous figure of the resulting field is shown in the
inset of Fig.~\ref{fig:Spectra}(a). It is symmetric with respect to
a rotation by 120$^{\degree}$. Each of the leaves in the trefoil generates an attosecond burst, totalling three bursts per laser cycle. In the frequency domain, this field produces circularly polarized harmonic peaks at $(3N+1)\omega_{\mathrm{IR}}$ and $(3N+2)\omega_{\mathrm{IR}}$, with the helicity of the fundamental field and of the second harmonic, respectively, while the $3N\omega_{\mathrm{IR}}$ harmonic lines are
symmetry-forbidden (see e.g. ~\cite{Milosevic2000, Fleischer2014, Pisanty2014, Ivanov2014, Hickstein2015, Medisauskas2015, Kfir2015, Jimenez-Galan2017, Alon1998}).
The alternating helicity of the allowed harmonics is, however, an important obstacle en
route to producing circularly polarized attosecond pulses or pulse trains.
Indeed, to achieve this goal one set of the harmonic lines, e.g. $3N+1$, with well defined helicity,
must dominate over the other set (e.g. $3N+2$), across a wide range of spectral energies. This is not
always the case, see e.g. \cite{Eichmann1995, Baykusheva2016} for measurements in argon or helium or
Fig.~\ref{fig:Spectra}(a) for calculations in helium.
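Both the three-fold symmetry of the bicircular field and the resulting selection rule can be checked with a few lines of code; the equal field strengths below are an illustrative choice, not the experimental values:

```python
import math

# Sketch: the bicircular field E(t) = F1 (cos wt, sin wt)
#                                   + F2 (cos 2wt, -sin 2wt)
# is invariant under a 120-degree rotation combined with a time shift
# of one third of the fundamental period.  This dynamical symmetry is
# what forbids the 3N harmonic lines.
F1, F2, w = 1.0, 1.0, 1.0
T = 2 * math.pi / w

def E(t):
    return (F1 * math.cos(w * t) + F2 * math.cos(2 * w * t),
            F1 * math.sin(w * t) - F2 * math.sin(2 * w * t))

def rot(v, a):
    x, y = v
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# E(t + T/3) equals E(t) rotated by +120 degrees, for any t
sym_ok = all(
    abs(E(t + T / 3)[k] - rot(E(t), 2 * math.pi / 3)[k]) < 1e-9
    for t in (0.13 + j * T / 7 for j in range(7)) for k in (0, 1))

# Photon bookkeeping: absorbing n1 photons at w (spin +1) and n2 at 2w
# (spin -1), with net spin +/-1 for the emitted photon, gives harmonic
# order q = n1 + 2*n2 with q mod 3 in {1, 2}; q = 3N never occurs.
orders = {(n1 + 2 * n2) % 3
          for n1 in range(1, 20) for n2 in range(1, 20)
          if abs(n1 - n2) == 1}
```

The set `orders` contains only the residues 1 and 2, reproducing the selection rule quoted above.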
Kfir and collaborators reported substantial suppression of
the (3N+2) harmonic lines of neon by optimizing phase matching conditions in a gas-filled hollow fiber~\cite{Kfir2015,Kfir2016}.
Further analysis demonstrated that such suppression appears already at the microscopic level when the system is
ionized from the 2p orbital of neon (see Fig.~\ref{fig:Spectra}(b))~\cite{Medisauskas2015,Milosevic2015, Baykusheva2016},
opening the way to practical generation of polarization-controlled attosecond bursts.
Recently, Ref.~\cite{Dorney2017} reported that changing the intensity ratio between the two circular drivers offers a possibility to suppress
even further the $3N+2$ lines.
The exact physical origin of the suppression at the atomic level, and the mechanisms responsible for it, are, however, not completely understood.
While Ref.~\cite{Medisauskas2015} argued that the origin of the suppression
is linked to the initial angular momentum of the ionizing orbital, Ref.
\cite{Baykusheva2016} suggested the Cooper-like suppression of the
recombination step to be chiefly responsible.
Here, we first present experimental results for neon, showing control over the contrast between the $3N+1$ and $3N+2$ harmonic lines for different relative intensities between the two drivers.
Motivated by these experimental results, we focus on the theoretical investigation of the underlying physical mechanism at the single-atom level, providing a detailed analytical and numerical analysis. We demonstrate
the interplay of three fundamental mechanisms responsible for the different contrast between neighboring harmonic lines.
Their identification offers clear insight into the underlying electronic dynamics
and the possibilities to control the ellipticity of the generated
attosecond pulses.
The first mechanism at play is based on the Fano-Bethe type propensity rules in one-photon
recombination \cite{Fano1985}.
The second mechanism is traced to the Barth-Smirnova type propensity rules in strong-field ionization
\cite{Barth2011, Barth2013, kaushal2015opportunities}.
The third mechanism is based on the impact of the two driving laser
fields on the continuum-continuum transitions, i.e. the electron dynamics between
ionization and recombination.
The interplay of these three effects links the observed spectral features to the specific
aspects of the sub-cycle electronic dynamics.
In particular, we show that the suppression of the (3N+2) harmonic lines can be controlled
by varying the relative intensity between the two fields, thus providing relatively easy means for
controlling the ellipticity of the attosecond bursts, as suggested in \cite{Dorney2017}.
We confirm our theoretical analysis with the numerical
solution of the time-dependent Schr\"odinger equation (TDSE) and measurements in neon.
Furthermore, as inferred in \cite{Dorney2017}, we show and explain that the suppression is not restricted to systems with
outer-electrons in $p$ orbitals. Our theory also predicts, and the full solution of the TDSE confirms,
the possibility of significant suppression of the $(3N+2)$ lines in the HHG spectrum of helium,
starting with the ground s-orbital. The suppression is achieved when the intensity of the fundamental field is
sufficiently stronger than that of the second harmonic, eliminating the
misconception that the desired suppression cannot happen when the electron is emitted from a
spherically symmetric orbital, or that it requires the Cooper-minimum-like suppression of
the recombination matrix elements \cite{Baykusheva2016}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Fig1-crop.pdf}
\caption{Numerical TDSE HHG spectrum of (a) helium and (b) neon.
The inset in (a) shows the y vs x components of the driving electric field. For helium, the fundamental
and second harmonic drivers have field strengths
of $F_{\omega} = F_{2\omega} = 0.056$~a.u., and
duration of 12~fs. For neon, the field strengths of the drivers are
$F_{\omega} = F_{2\omega} = 0.073$~a.u. and
their duration is 12~fs. The red lines indicate the $\hat{\mathbf{e}}_+$ spherical component,
which corresponds to light rotating with the fundamental driver (counter-clockwise), while the blue line indicates the
$\hat{\mathbf{e}}_-$ spherical component, which corresponds to light
rotating with the second harmonic (clockwise). Angular
momentum conservation imposes that the $3N+1$ harmonic lines rotate
with the fundamental field, the $3N+2$ lines rotate with the second
harmonic, and the $3N$ lines are forbidden. The inset in (b) shows the
cut-off harmonics for which the blue lines start to dominate in the spectrum of neon.}
\label{fig:Spectra}
\end{figure}
\section{Experimental observation}
We begin with the description of our experimental results, which motivate our numerical and analytical analysis.
To generate the two-colour bi-circular
fields, we have used a Ti:sapphire-based laser system with a single stage regenerative amplifier
producing 35 fs pulses with up to 4 mJ energy and central wavelength of $\sim795$ nm at 1 kHz repetition rate.
The carrier-envelope phase (CEP) of the pulses was not locked.
The laser beam was directed into the optical setup shown in Fig.~\ref{fig:setup}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{setup.png}
\caption{Schematic of the optical setup. The BBO (beta-barium-borate) crystal was used for the generation of the second harmonic of the fundamental beam, MCP denotes the detector for the XUV radiation, and BS is the beam splitter.}
\label{fig:setup}
\end{figure}
The original beam was split into two beams with a 60/40 ratio. The first, stronger, beam was sent onto a 0.1~mm thin BBO crystal to generate the second harmonic at $\sim400$~nm with a pulse energy of up to 0.8~mJ. The second, weaker, beam remained at the fundamental wavelength of 795~nm with a pulse energy of up to 1.5~mJ.
The intensity was estimated based on comparing the observed cutoff position with the theoretical calculations
(which is a standard approach, see e.g. \cite{Smirnova2009}), and taking into account the ratio of pulse energies and focusing conditions for the two incident driving fields.
We could also smoothly tune the energies of the two pulses and their ratio by changing the original input pulse energy and by using a reflective attenuator (Fig.~\ref{fig:setup}).
Both beams were passed through achromatic broadband $\lambda/2$ and $\lambda/4$ waveplates,
yielding nearly circular polarization ($\varepsilon\simeq0.97$) for both the fundamental ($``red"$)
and its counter-rotating second-harmonic ($``blue"$) beams.
The beams were carefully combined in collinear geometry and focused with a single Ag-mirror at f/100 into a {2-mm}-long gas cell containing the target gas. The cell was initially sealed with a metal foil, which was burned through by the laser beam at the start of the experiment. The resulting opening $d_0\cong60~\mu$m was similar to the focal spot size, allowing us to keep the gas pressure inside the cell at $\approx 20$--$40$~mbar and the vacuum inside the interaction chamber at the level of $P_{rest}\approx10^{-3}$~mbar.
After passing the gas cell, the driving beams were blocked by a $200$ nm thick aluminum foil.
The transmitted XUV radiation was directed towards the XUV spectrometer (for details see \cite{Jimenez-Galan2017}).
The XUV-spectra generated in neon in Fig.~\ref{fig:NeSp-1} show the characteristic structure for the bi-circular scheme.
The harmonics with order $3N$ are strongly suppressed. The allowed $3N+1$ and $3N+2$ harmonics are nearly
circularly polarised (see results of Ref.~\cite{Fleischer2014, Kfir2015}), rotating in the same direction as the $``red"$ and the $``blue"$ beams, respectively.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{NeSp-1-crop.pdf}
\caption{High harmonics generated in Neon for different peak intensities of the fundamental $I_\omega$ and second harmonic $I_{2\omega}$ circularly polarized fields. Intensities (in units of $10^{14}$ $W/cm^2$) are estimated from
the pulse energy, focal diameter, and the cutoff position:
(a) $I_{\omega}\sim 4$, $I_{2\omega}\sim 3.3 $; (b) $I_{\omega}\sim 2.6 $, $I_{2\omega}\sim 3.3 $; (c) $I_{\omega}\sim2.5 $, $I_{2\omega}\sim 2.0$; (d) $I_{\omega}\sim 1.6$, $I_{2\omega}\sim 2.0$.
The ratios of the intensities of the adjacent harmonics with orders $3N+1$ and $3N+2$ are marked in the figure near the corresponding harmonic pairs.}
\label{fig:NeSp-1}
\end{figure}
Figures ~\ref{fig:NeSp-1} and ~\ref{fig:NeSp-2} show spectra of the XUV
radiation generated for different relative intensities of the fundamental and the second harmonic fields.
The driving fields were changed in such a way that the intensity of one colour was fixed while the intensity of the other was varied.
In Fig.~\ref{fig:NeSp-1} we have varied the intensity of the fundamental field while in Fig.~\ref{fig:NeSp-2}
the intensity of the second harmonic field was varied.
In addition, the left and right columns in Figs. ~\ref{fig:NeSp-1} and ~\ref{fig:NeSp-2} present results for different values
of the fixed fields.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{NeSp-2-crop.pdf}
\caption{High harmonics generated in Neon for different peak intensities of the fundamental $I_\omega$ and
second harmonic $I_{2\omega}$ fields. Intensities, in units of $10^{14}$ $W/cm^2$, were estimated from
the pulse energy, focal diameter, and the cutoff position:
(a) $I_\omega\sim1.2$, $I_{2\omega}\sim0.7 $; (b) $I_{\omega}\sim 1.2 $, $I_{2\omega}\sim 2.8 $; (c) $I_\omega\sim 1.7 $,
$I_{2\omega}\sim 0.9 $; (d) $I_{\omega}\sim 1.7 $, $I_{2\omega}\sim 2.0 $.
The ratios of the intensities of the adjacent harmonics with orders $3N+1$ and $3N+2$
are marked in the figure near the corresponding harmonic pairs.}
\label{fig:NeSp-2}
\end{figure}
Figures~\ref{fig:NeSp-1} and~\ref{fig:NeSp-2} demonstrate the dependence of the relative intensities
of the $3N+1$ and $3N+2$ harmonics on the ratio between $I_{\omega}$ and $I_{2\omega}$.
The increase of one driving field intensity relative to the other,
independently of the total field strength, appears to enhance the generation of high harmonics co-rotating
with it. Therefore, increasing the intensity of, e.g., the red field will change the ratio between the
neighbouring harmonics and will affect the overall chirality of the generated pulse train.
Note that if the ratio $I_{\omega}$/$I_{2\omega}$ is kept constant and both intensities are
changed, the overall harmonic asymmetry is not affected, but the harmonic cut-off scales with the total intensity.
We also noted that when the intensity of the ``blue'' beam becomes larger than the intensity of the ``red'' beam, the
forbidden $3N$ harmonics become more prominent in the spectrum
(see Figs.~\ref{fig:NeSp-1}(b,d) and \ref{fig:NeSp-2}(b,d)).
We discuss this phenomenon in detail in a separate publication, showing that
it is related to the breaking of the dynamical symmetry in the system due to excitation of Rydberg states
by the strong blue driver~\cite{Jimenez-Galan2017}.
These experimental results demonstrate a simple practical
way of controlling the degree of circularity of the attosecond pulse train by favouring
harmonics with particular helicity, via changing the intensity ratio of the two driving fields.
Of course, macroscopic effects may play a role in these findings. For example,
free electrons contribute to phase matching, and their number changes as the
intensities of the two driving fields increase. Nevertheless,
the experimental trends in Figs.~\ref{fig:NeSp-1} and \ref{fig:NeSp-2} remain consistent
across a broad range of intensities of both fields.
Below we show that the dominant role of the $3N+1$ harmonics co-rotating with the red driver
and the control over the ratio of $3N+1$ vs $3N+2$ harmonics originate
already at the single-atom (microscopic) level, in accord with the results
of~\cite{Jimenez-Galan2017, Medisauskas2015}.
We provide detailed analysis of this control and its physical origins using
a theoretical
model based on the strong field approximation (SFA), and then
use time-dependent Schr\"odinger equation calculations to further corroborate our
predictions.
\section{Method} \label{section_method}
We consider a single active electron moving in the binding potential $V_0$ and interacting
with the light field in the dipole approximation. In length gauge,
\begin{equation}\label{eq:hamiltonian}
H= \frac{\hat{\mathbf{p}}^{ 2}}{2} + \hat{\mathbf{r}} \cdot \mathbf{E}(t) + V_0(\hat{\mathbf{r}}),
\end{equation}
where $\hat{\mathbf{p}}$ is the momentum operator, $\hat{\mathbf{r}}$ is the
position operator, and $\mathbf{E}(t)$ is the time-dependent electromagnetic
field for the bicircular configuration, defined by
\begin{equation}\label{eq:electric_field}
\mathbf{E}(t) =\mathrm{Re} \left\{ -F_{\omega} e^{-i\omega t} \hat{\mathbf{e}}_+
+ F_{2\omega} e^{-i 2\omega t} \hat{\mathbf{e}}_{-} \right\}.
\end{equation}
We use atomic units unless otherwise stated. In the above,
$F_{\omega}$ and $F_{2\omega}$ are the electric field strengths
of the fundamental and second harmonic fields, respectively, $\omega$
is the frequency of the fundamental field, and
$\hat{\mathbf{e}}_{\pm} = \mp (\hat{\mathbf{e}}_x \pm i\,\hat{\mathbf{e}}_y)/\sqrt{2}$
are the unit vectors describing the polarization state of the driving fields.
The unit vector $\hat{\mathbf{e}}_+$ corresponds to light rotating in
the counter-clockwise direction, while the unit
vector $\hat{\mathbf{e}}_-$ corresponds to light rotating in the clockwise direction.
In the following, we will use the same notation to indicate the polarization of the generated
high harmonics, and will depict the $\hat{\mathbf{e}}_{+}$/$\hat{\mathbf{e}}_{-}$ components as red/blue lines in the figures.
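As a quick numerical illustration (a sketch, not part of the analysis; the field strengths and the fundamental frequency are placeholder values), one can verify that the bicircular field of Eq.~\eqref{eq:electric_field} traces a pattern that repeats, rotated by $120^\circ$, every third of the fundamental period:

```python
import numpy as np

# Sketch: three-fold symmetry of the bicircular field defined in the text.
# F1, F2 and w are placeholder values (atomic units), not taken from the text.
F1, F2 = 0.056, 0.056   # F_omega, F_2omega
w = 0.057               # fundamental frequency, roughly 800 nm (assumed)

def field(t):
    """Return E_x + i E_y for the bicircular field, with
    e_+ = -(x + iy)/sqrt(2) and e_- = (x - iy)/sqrt(2)."""
    Ex = (F1 * np.cos(w * t) + F2 * np.cos(2 * w * t)) / np.sqrt(2)
    Ey = (F1 * np.sin(w * t) - F2 * np.sin(2 * w * t)) / np.sqrt(2)
    return Ex + 1j * Ey

T = 2 * np.pi / w
t = np.linspace(0.0, T, 400)
# Advancing t by T/3 rotates the field by 120 degrees: E(t + T/3) = e^{2i*pi/3} E(t)
assert np.allclose(field(t + T / 3), np.exp(2j * np.pi / 3) * field(t))
```

This three-fold dynamical symmetry is what underlies the $3N\pm1$ selection rules derived below.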
For the analytical description of the problem, we begin by assuming
the core potential $V_0(\hat{\mathbf{r}})$ in Eq.~\eqref{eq:hamiltonian}
to be short-range, defined by the conditions:
\begin{equation}\label{eq:short_range_potential}
\begin{split}
&H_0 | \psi_{\ell m} \rangle = -I_p | \psi_{\ell m} \rangle, \\
&\langle \mathbf{r} | \psi_{\ell m} \rangle = C_{\kappa \ell} \,\kappa^{3/2}\,(\kappa r)^{-1}\,e^{-\kappa r}\,Y_{\ell m} (\theta_r,\phi_r).
\end{split}
\end{equation}
This approach, based on explicitly using the asymptotic behavior of the
ground state wavefunction
outside the range of the potential (rather than the potential itself),
is common to many analytical
treatments~\cite{Perelomov1966, Barth2011, Barth2013, frolov2003model, frolov2008effective}, particularly
with the effective range method developed by Frolov, Manakov, and
Starace~\cite{frolov2003model,frolov2008effective} and used extensively to study high harmonic
generation~\cite{frolov2009analytic, frolov2008wavelength, frolov2011high}.
With this procedure we neglect atom-specific
features such as Cooper minima or Fano resonances, which can play a role for some atoms in some energy regions~\cite{Baykusheva2016}.
In the above, $\psi_{\ell m}$ is the ground state of the atomic system, with ionization
potential $I_p$, $\kappa = \sqrt{2 I_p}$, $Y_{\ell m}$ is the spherical harmonic with angular and
magnetic quantum numbers $\ell$ and $m$, respectively, and $C_{\kappa \ell}$ is a dimensionless real
constant whose exact value is not relevant for our purposes.
The induced temporal dipole, proportional to the response of an individual atom or molecule to the laser field, can be written as
\cite{Vrakking2014}
\begin{equation}
\label{eq:induced_dipole}
\begin{split}
&\langle \Psi (t) | \hat{\mathbf{r}} | \Psi (t) \rangle =
\sum_{m=-\ell}^{\ell} -i \int_{t_0}^{t} dt' e^{i I_p (t'-t)} \times \\
&\times \int d\mathbf{p}\,e^{-i S_V (\mathbf{p},t,t')} \mathbf{d}_{\ell m}^{({\text {rec}})} [\mathbf{p} + \mathbf{A}(t)] \Upsilon_{\ell m} [\mathbf{p} + \mathbf{A}(t')] + {\rm c.c.},
\end{split}
\end{equation}
where `c.c.' denotes the complex conjugate of the preceding expression,
the sum over $m$ takes into account the $m$-degeneracy of the ground state,
$ \mathbf{p}$ is the canonical momentum,
and $\mathbf{A} (t) = -\int dt\,\mathbf{E}(t)$ is the vector potential of the field
given by Eq.~\eqref{eq:electric_field}.
The Volkov phase $S_V (\mathbf{p},t,t')$ is
\begin{equation}
S_V (\mathbf{p},t,t') = \frac{1}{2} \int_{t'}^t d\tau\,[\mathbf{p} + \mathbf{A}(\tau)]^2,
\end{equation}
and the (scalar) ionization factor is
\begin{equation}
\Upsilon_{\ell m} [\mathbf{p} + \mathbf{A} (t') ] = \left[ \frac{[\mathbf{p} + \mathbf{A}(t')]^2}{2} + I_p \right] \langle \mathbf{p} + \mathbf{A}(t') | \psi_{\ell m} \rangle.
\end{equation}
For the plane-wave continuum, the plane waves are defined as
\begin{equation}
\langle \mathbf{r} | \mathbf{p} + \mathbf{A}(t) \rangle = \frac{e^{i \mathbf{v} (t) \cdot \mathbf{r}}}{(2\pi)^{3/2}},
\end{equation}
where $\mathbf{v} (t) = \mathbf{p} + \mathbf{A}(t)$ is the kinetic momentum.
The recombination dipoles are computed for the instantaneous electron
velocity at the moment of emission $t$,
\begin{equation}
\mathbf{d}_{\ell m}^{({\text{rec}})} [\mathbf{p} + \mathbf{A}(t)] = \langle \psi_{\ell m} | \hat{\mathbf{r}} | \mathbf{p} + \mathbf{A}(t) \rangle.
\end{equation}
We note that this theoretical approach is well adapted to incorporate
exact recombination dipoles (see e.g. \cite{frolov2008wavelength,frolov2011high}).
This expression for the induced dipole is a direct extension of the Perelomov, Popov, and Terent'ev approach
to high harmonic generation~\cite{Vrakking2014}. The expression includes only the processes where the electron recombines
to the same orbital from which it was ionized, and assumes that there is no permanent dipole in the ground state.
The HHG spectrum at the frequency $N\omega$ is proportional to the induced frequency dipole,
$\mathbf{D}(N\omega)$,
\begin{equation}\label{eq:def_frequency_dipole}
\begin{split}
&I (N\omega) \propto (N\omega)^4 |\mathbf{D} (N\omega)|^2,\\
&\mathbf{D}(N\omega) = \int dt \langle \Psi (t) | \hat{\mathbf{r}} | \Psi (t) \rangle \,e^{i N\omega t}.
\end{split}
\end{equation}
The five integrals in Eq.~\eqref{eq:def_frequency_dipole} can be computed using the saddle point method
(see e.g.~\cite{Vrakking2014}). Amongst all possible electron trajectories,
the saddle point method selects
those that contribute the most to the five-fold integral in
Eq.~\eqref{eq:def_frequency_dipole}. These are termed `quantum trajectories' (see e.g. \cite{salieres2001feynman}): they have
complex times of ionization and recombination, reflecting the purely quantum
under-the-barrier motion~\cite{Lewenstein1994} of the otherwise classical trajectories.
This leads to complex ionization and recombination velocities and to complex ionization and recombination angles, which are crucial to
the analysis of the high harmonic spectrum and, especially, to the contrast of the
adjacent harmonic lines with opposite helicity.
Applying the saddle point method, we can express the induced frequency dipole as
\begin{equation}\label{eq:FrequencyDipole}
\begin{split}
&\mathbf{D}(N\omega) = \sum_j^{n_s} \mathbf{D}^{(j)} (N\omega), \\
&\mathbf{D}^{(j)}(N\omega) \approx \sum_{m=-\ell}^{\ell} \mathbf{d}_{\ell m} ^{\text{ (rec)}} [\mathbf{p}_s^{(j)} + \mathbf{A}(t_r^{(j)})] \times \\
& \times e^{-i S(\mathbf{p}_s^{(j)},t_r^{(j)},t_i^{(j)})} \Upsilon_{\ell m} [\mathbf{p}_s^{(j)} + \mathbf{A} (t_i^{(j)})] e^{i N\omega t_r^{(j)}}.
\end{split}
\end{equation}
The quantities $\mathbf{p}_s$, $t_r$ and $t_i$ are the complex-valued saddle point solutions for the momentum and for the times of recombination and ionization, respectively.
The sum over the index $j$ runs over all $n_s$ stationary points. We will consider
only those stationary points that correspond to the so-called short trajectories,
which we can easily identify \cite{Milosevic2000}.
In this case, $n_s = 3M$, where $M$ is the number of laser cycles.
Let us consider the contribution of
a single laser cycle. The spectral intensity is the coherent sum of the
three consecutive bursts. Since we are interested in the contrast between the two helicities
of the emitted light ($\hat{\mathbf{e}}_+$ and $\hat{\mathbf{e}}_-$) in the spectra, it is useful to
separate the intensity of the emitted light into the two helical components,
\begin{equation}\label{eq:SpectrumIntensity}
I(N\omega) = I_{+}(N\omega) + I_{-} (N\omega), \\
\end{equation}
where
\begin{equation}
I_\pm (N\omega) = \left | A_\pm^{(1)} e^{i\phi_{\pm}^{(1)}} + A_{\pm}^{(2)} e^{i\phi_\pm^{(2)}} + A_\pm^{(3)} e^{i\phi_\pm^{(3)}} \right |^2,
\end{equation}
with the amplitude and phase of the bursts corresponding to the amplitude and
the phase of the frequency dipole in Eq.~\eqref{eq:FrequencyDipole},
\begin{equation}\label{eq:SpectrumIntensity_Definitions}
A_{\pm}^{(j)} = | D_{\pm}^{(j)}(N\omega) | \quad \text{and} \quad \phi_\pm^{(j)} = \arg \left[D_{\pm}^{(j)}(N\omega) \right].
\end{equation}
Above, $D_{\pm}$ is the $\hat{\mathbf{e}}_{\pm}$ helical component of the dipole vector $\mathbf{D}$, i.e., $D_{\pm} = \mathbf{D} \cdot \hat{\mathbf{e}}^*_\pm$, with $\hat{\mathbf{e}}_{\pm} \cdot \hat{\mathbf{e}}^*_{\pm}=1$.
In the long pulse limit, we have $A_\pm^{(1)} = A_\pm^{(2)} = A_\pm^{(3)} \equiv A_\pm$, and $\phi_\pm^{(j)} - \phi_\pm^{(j-1)} = (2\pi/3) (N \mp 1)$ (see Appendix \ref{app:amp&phase}), so
that the intensity can be written as
\begin{equation}\label{eq:intensity}
I_\pm (N\omega) = A_\pm ^2 \left | 1 + 2\cos \left[\frac{2\pi}{3} (N \mp 1)\right] \right|^2.
\end{equation}
The second term on the right-hand side of \eqref{eq:intensity} is responsible for
the well-known selection rules, i.e., harmonics $3N\pm1$ are favored for $\hat{\mathbf{e}}_\pm$ and $3N$ harmonics are suppressed. The relative strength of the harmonic lines,
however, and thus the contrast between the $(3N+1)$ and $(3N-1)$ lines, is
contained exclusively in the amplitude $A_\pm$.
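The selection rules encoded in Eq.~\eqref{eq:intensity} can be checked numerically. The following sketch (an illustration, not part of the derivation) evaluates the three-burst interference factor $\left|1+2\cos\left[\frac{2\pi}{3}(N\mp1)\right]\right|^2$ and confirms that harmonics $3N+1$ survive only in the $\hat{\mathbf{e}}_+$ component, harmonics $3N+2$ only in the $\hat{\mathbf{e}}_-$ component, and harmonics $3N$ are suppressed in both:

```python
import numpy as np

# Sketch: evaluate the three-burst interference factor
# |1 + 2 cos(2*pi*(N -/+ 1)/3)|^2 for each helicity and harmonic order N.
def interference(N, helicity):
    """helicity = +1 for the e_+ component, -1 for the e_- component."""
    return abs(1 + 2 * np.cos(2 * np.pi * (N - helicity) / 3)) ** 2

# N = 3k+1 survives only in e_+ (factor 9), N = 3k+2 only in e_-,
# and the forbidden N = 3k harmonics vanish in both components.
assert np.isclose(interference(7, +1), 9) and np.isclose(interference(7, -1), 0)
assert np.isclose(interference(8, +1), 0) and np.isclose(interference(8, -1), 9)
assert np.isclose(interference(9, +1), 0) and np.isclose(interference(9, -1), 0)
```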
According to Eq.~\eqref{eq:FrequencyDipole}, we may write,
\begin{equation}\label{eq:dipole_reduced}
\begin{split}
&A_{\pm}^2 = e^{2(\Im \{S\} - N\omega \Im\{t_r\})} \times \\
&\times \left| \sum_{m=-\ell}^{\ell} \left(\text{d}_{\ell m, \pm} ^{({\rm rec})} [\mathbf{p}_s + \mathbf{A}(t_r)] \right)\,\Upsilon_{\ell m} [\mathbf{p}_s + \mathbf{A} (t_i)] \right|^2,
\end{split}
\end{equation}
where $\Im\{x\}$ denotes the imaginary part of $x$ and we have assumed that both the imaginary return time and the action do not change considerably from burst to burst, i.e., $\Im \{t_r^{(1)}\} \approx \Im \{t_r^{(2)}\} \approx \Im \{t_r^{(3)}\} \approx \Im \{t_r\}$ and $\Im \{S^{(1)}\} \approx \Im \{S^{(2)}\} \approx \Im \{S^{(3)}\} \approx \Im \{S\}$. Since $\exp\left[2(\Im \{S\} - N\omega \Im\{t_r\})\right]$ is a scalar independent of $m$, it does not influence the contrast
between the two helical components in the spectrum, and we can ignore it
for our purposes.
\section{Propensity rules in two-color HHG}
In this section we will analyze how and why the strength of harmonic lines varies across the HHG spectrum of atoms emitting $s$-shell and $p$-shell electrons in the bicircular scheme.
In particular, we will concentrate on two systems: helium and neon, whose HHG spectrum
obtained by solving numerically the TDSE is
shown in Fig.~\ref{fig:Spectra}.
To solve the TDSE we used the code described in \cite{Patchkovskii2016}. To simulate the neon atom, we used the 3D single-active-electron pseudo-potential given in \cite{Tong2005}, with a value of $z_{\text{eff}} = 10$
at the point $r=0$. To eliminate the contribution of long trajectories and bound states, and to save computational time, we used a small radial box of 70~a.u. The total number of points was $n_r=2500$, with a uniform grid spacing of 0.02~a.u. and a complex boundary absorber at $50$~a.u. The time grid had a spacing of $dt = 0.0025$~a.u., and the maximum angular momentum included in the expansion was $\ell_{\text{max}} = 60$. All the discretization parameters have been checked for convergence.
Eq.~\eqref{eq:dipole_reduced} has now reduced our problem to the study of the ionization and recombination matrix elements within a single burst. Let us focus first
on the one-photon recombination matrix element. To begin with, we
point out one key aspect which distinguishes
one-photon recombination in high harmonic emission from the standard
one-photon ionization or recombination process.
This key aspect is often overlooked and implicitly ignored
when one states that the recombination step in HHG is
the (time-reversed) analog of one-photon ionization. The difference, however,
is clear: in HHG, recombination is conditioned on the electron's return to
the parent core and therefore carries the imprint of the ionization step.
In the quantum trajectory analysis, this imprint is encoded into
the complex-valued recombination time $t_r$ and (in general)
the complex-valued velocity ${\bf v}(t_r)$. Below we will denote the real and
imaginary parts of these and associated quantities with one and two primes,
respectively, e.g. $\Re\{t_r\}=t_r'$, $\Im\{t_r\}=t_r''$, etc.
For the one-photon dipole
transition we apply the
Wigner-Eckart theorem and separate the radial part from the angular part:
\begin{equation}\label{eq:recombination_matrix_element}
\begin{split}
&\text{d}_{\ell m, \pm}^{(\rm rec)}[\mathbf{p}_s + \mathbf{A}(t_r)] \equiv \langle \psi_{\ell m} | \hat{\mathbf{d}} \cdot \hat{\mathbf{e}}^*_\pm | \mathbf{v} (t_r) \rangle = \\
& = \sum_{\ell'=0}^{\infty}\,\mathcal{R}_{v(t_r), \ell'} e^{-i (m\pm1) \phi_{v(t_r)}'} \tj{\ell}{\ell'}{1}{-m}{(m\pm 1)}{\mp 1} \times \\
&\times \sqrt{\frac{(\ell'-m \mp 1) !}{(\ell'+m \pm 1) !}}\,P_{\ell'}^{(m\pm 1)}\left(\cos(\theta_{v (t_r)})\right)\, e^{(m \pm 1)\phi''_{v(t_r)}},
\end{split}
\end{equation}
where $\mathbf{v}(t_r) = \mathbf{p}_s + \mathbf{A}(t_r)$ is the saddle point complex velocity vector at the time of recombination, whose radial, polar and azimuthal components are $v(t_r)$, $\theta_{v(t_r)}$ and $\phi_{v(t_r)}$, respectively. As pointed out above, the prime and
double prime superscripts indicate, respectively, the real and imaginary parts of the
corresponding quantity.
The radial factor $\mathcal{R}_{v(t_r),\ell'}$ is the same for $\hat{\mathbf{e}}_\pm$ and does not depend on the initial quantum number $m$, and hence does not influence the contrast between the lines (see Appendix \ref{app:IonRec}). The sum over the index $\ell'$ runs over all angular momenta of the continuum states from which one-photon recombination occurs. Finally, the 3j-coefficients reflect standard angular momentum algebra for one-photon transitions. For recombination to an $s$ state, the 3j-coefficients are zero unless $\ell'=1$, while for recombination to a $p$ state, they are zero unless $\ell'=0$ or $\ell'=2$.
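These angular selection rules can be verified independently. The sketch below (an illustrative check using the Gaunt coefficient, i.e., the integral of three spherical harmonics, rather than the bare 3j symbol of Eq.~\eqref{eq:recombination_matrix_element}) confirms that a one-photon dipole transition couples $\ell$ only to $\ell'=\ell\pm1$:

```python
from sympy.physics.wigner import gaunt

# Sketch: one-photon angular selection rules via the Gaunt coefficient,
# the integral of Y_{l1 m1} Y_{l2 m2} Y_{l3 m3} over the sphere (nonzero only
# for |l1-l2| <= l3 <= l1+l2, l1+l2+l3 even, and m1+m2+m3 = 0).
# Recombination to an s state (l = 0): only the l' = 1 partial wave contributes.
assert gaunt(0, 1, 1, 0, 0, 0) != 0    # l' = 1 allowed
assert gaunt(0, 1, 0, 0, 0, 0) == 0    # l' = 0 forbidden
assert gaunt(0, 1, 2, 0, 0, 0) == 0    # l' = 2 forbidden (parity)
# Recombination to a p state (l = 1): only l' = 0 and l' = 2 contribute.
assert gaunt(1, 1, 0, 1, -1, 0) != 0   # l' = 0 allowed
assert gaunt(1, 1, 1, 1, 0, -1) == 0   # l' = 1 forbidden (parity)
assert gaunt(1, 1, 2, 1, -1, 0) != 0   # l' = 2 allowed
```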
The $z$-component of the recombination velocity is negligible for collinear bicircular fields,
so that $\cos \theta_{v(t_r)} = 0$.
The difference between the recombination matrix element in Eq.~\eqref{eq:recombination_matrix_element} and its inverse photoionization counterpart is
now clear. The inverse photo-absorption process occurs from a continuum
state with real velocity. The recombination process, because the electron
is conditioned to return to the same place and state from which it was ionized, occurs
from a continuum state with a complex velocity vector -- only its square has to be real-valued.
This difference leads to the appearance of the last exponential factor in Eq.~\eqref{eq:recombination_matrix_element}, which is absent in the inverse photo-absorption process.
We may thus identify two separate contributions to the recombination dipole.
The first is common with the photoionization process, and includes all of the terms
in Eq.~\eqref{eq:recombination_matrix_element} except for the last exponential. The second is
due to the recombination condition, which is given by the last exponential.
Each of these two contributions is governed by a different propensity rule.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Fig2-crop.pdf}\caption{Top: permitted dipole transitions in recombination to (a) $s$, (b) $p+$ and (c) $p-$ states. Thick arrows indicate those favored by the Fano-Bethe propensity rule. Center: recombination matrix element (Eq.~\eqref{eq:recombination_matrix_element}), in arbitrary units, for final (d) $s$, (e) $p+$ and (f) $p-$ states. Bottom: Numerical TDSE high harmonic spectra when an electron is emitted from (g) $s$, (h) $p+$ and (i) $p-$ states. Colored lines indicate the helicity of the emitted light: red for $\hat{\mathbf{e}}_+$ (counter-clockwise) and blue for $\hat{\mathbf{e}}_-$ (clockwise).}\label{fig:Recombination}
\end{figure}
\subsection{Propensity 1: The Fano-Bethe rule.}
The first contribution, i.e., the emission analogue of the photo-absorption process, follows the propensity rule given by Fano and Bethe \cite{Fano1985, Bethe1964}. This states that in one-photon transitions, the helicity of the absorbed/emitted photon preferentially co-rotates with the final state (see Fig.~\ref{fig:Recombination}(a), Fig.~\ref{fig:Recombination}(b)). Mathematically, in part this is a consequence of
the fact that the 3-j coefficient in Eq.~\eqref{eq:recombination_matrix_element} is larger
when $m=1$ for $\hat{\mathbf{e}}_+$ and when $m=-1$ for $\hat{\mathbf{e}}_-$. In Figs.~\ref{fig:Recombination}(d)-(f)
we show the full recombination matrix element for
the $s$ ($\ell=0,m=0$), $p+$ ($\ell=1, m=1$) and $p-$ ($\ell=1,m=-1$) states, as
a function of the harmonic order.
To extract the recombination velocities, we solved the corresponding saddle point equations
and used the results for the wavefunction in Eq.~\eqref{eq:short_range_potential}.
For helium (Figs.~\ref{fig:Recombination}(d,g)), we considered an $s$-symmetric ground state with $I_p=0.9$~a.u. and driving fields of $12$~fs duration and field strengths $F_{\omega} = F_{2\omega} = 0.056$~a.u. For neon (Figs.~\ref{fig:Recombination}(e,f,h,i)), we considered a $p$-symmetric ground state with $I_p=0.797$~a.u. and driving pulses of $12$~fs duration and field strengths of $F_{\omega} = F_{2\omega} = 0.074$~a.u. The pulse parameters and $I_p$ were chosen to match those used in the TDSE calculations.
As stated earlier, the $\hat{\mathbf{e}}_\pm$ component of the high harmonic lines are depicted as red/blue lines to highlight that they have the same helicity as that of the fundamental/second-harmonic driver.
For $p+$ (Figs.~\ref{fig:Recombination}(b,e,h)), the red lines co-rotate with the final state, while the blue lines counter-rotate with it. As predicted by the Fano-Bethe propensity rule, the red lines dominate the spectrum, with the exception of lower energies. The reason for the latter is simple. Recombination to a final state $p+$ happens from the two partial waves $s$ and $d$, as shown in Fig.~\ref{fig:Recombination}(b). While recombination from a $d$ partial wave can occur to states co-rotating and counter-rotating with the field, recombination from an
$s$ wave is only possible to states counter-rotating with the field.
At lower energies, the probability of the electron recombining from an $s$ wave is higher,
and this is reflected in the spectrum as a dominance of the counter-rotating (blue) lines.
For a final $p-$ state (Figs.~\ref{fig:Recombination}(c,f,i)), the argument is analogous to that for $p+$, only now the red lines counter-rotate with the final state, while the blue lines co-rotate with it.
It is worth noticing that the Fano-Bethe propensity rule cannot be modified by
altering the parameters of the laser pulses, since it only depends on the modulus of
the velocity, which is always fixed by the saddle point solutions,
i.e., $v(t_r) = \sqrt{2(N\omega - I_p)}$ \cite{Vrakking2014}.
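For reference, the recombination speed entering the Fano-Bethe rule can be tabulated directly (a sketch; the neon $I_p$ is quoted in the text, while the fundamental frequency is an assumed value corresponding to roughly 800~nm drivers):

```python
import numpy as np

# Sketch: the recombination speed fixed by the saddle-point solutions,
# v(t_r) = sqrt(2*(N*w - Ip)), depends only on the harmonic order N, which is
# why the Fano-Bethe rule cannot be tuned via the pulse parameters.
Ip = 0.797   # ionization potential of neon (a.u.), as quoted in the text
w = 0.057    # fundamental frequency (a.u.); assumed, not given in the text

def v_rec(N):
    """Recombination speed (a.u.) for an above-threshold harmonic order N."""
    return np.sqrt(2 * (N * w - Ip))

for N in (19, 31, 43):
    print(f"harmonic {N}: v(t_r) = {v_rec(N):.3f} a.u.")
```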
\subsection{Propensity 2: The Recombination condition.}
If the recombination process in HHG were only the inverse of photoionization, then
the recombination matrix elements for states $p+$ and $p-$ would have been equivalent, up to
the corresponding change in the helicity of the emitted light.
A comparison of panels e and f in Fig.~\ref{fig:Recombination} shows that this is not the case.
While the counter-rotating lines are the same (blue for $p+$ and red for $p-$), the
co-rotating lines (red for $p+$ and blue for $p-$) are clearly different: the co-rotating line is stronger in $p+$ than in $p-$.
This difference is the consequence of the recombination condition, i.e.,
the factor $e^{(m\pm 1) \phi''_{v(t_r)}}$ in Eq.~\eqref{eq:recombination_matrix_element}.
This factor equals unity for the counter-rotating lines (for which $m\pm1=0$),
and thus plays no role there, but it enhances or suppresses
the co-rotating lines, depending on $\phi''_{v(t_r)}$.
If $\phi''_{v(t_r)}$ is positive, which is the case for the vast majority of relevant harmonic
orders and pulse parameters, then the red lines are enhanced in $p+$ while
the blue lines are damped in $p-$. Intuitively, this propensity rule accounts for
which photon is more likely to be absorbed or emitted in the free-free transition.
As mentioned in \cite{Dorney2017}, to lowest order, absorption of one extra photon of frequency $\omega$ (``red'') leads to the
emission of the ``red'' harmonic line. Absorption of one extra photon of frequency
$2\omega$ (``blue'') leads to the emission of the ``blue'' harmonic line.
For equal intensities of the $\omega$ and $2\omega$ fields, the low energy photons are more
likely to cause free-free transitions~\cite{Jimenez-Galan2016}, and thus the red lines dominate the harmonic spectrum.
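The lowest-order photon-counting picture can be made explicit. In the sketch below (an illustration of the stated conservation rules, not a calculation from this work), a harmonic of order $N$ is built from $n_{\rm red}$ photons of frequency $\omega$ (spin $+1$) and $n_{\rm blue}$ photons of frequency $2\omega$ (spin $-1$); energy conservation fixes $n_{\rm red}+2n_{\rm blue}=N$, and conservation of spin angular momentum requires $n_{\rm red}-n_{\rm blue}=\pm1$, matching the helicity of the emitted harmonic:

```python
# Sketch: lowest-order photon counting for the bicircular scheme. Each
# absorbed "red" photon (frequency w) carries spin +1, each "blue" photon
# (frequency 2w) carries spin -1; the emitted harmonic photon carries
# spin +/-1, fixing the net absorbed spin.
def emitted_helicity(N):
    """Return +1 (e_+, "red"), -1 (e_-, "blue"), or 0 (forbidden) for harmonic N."""
    for n_red in range(N + 1):
        for n_blue in range(N + 1):
            if n_red + 2 * n_blue == N and abs(n_red - n_blue) == 1:
                return n_red - n_blue
    return 0  # no solution: the forbidden 3N harmonics

assert emitted_helicity(13) == +1   # 3k+1 harmonics co-rotate with the red driver
assert emitted_helicity(14) == -1   # 3k+2 harmonics co-rotate with the blue driver
assert emitted_helicity(15) == 0    # 3k harmonics are forbidden
```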
In contrast to the Fano-Bethe propensity rule described above, this propensity rule (in particular $\phi''_{v(t_r)}$) depends on the parameters of the driving fields through the saddle
point equations, and thus offers excellent opportunities for control.
When the electron is emitted from a single orbital, e.g., $s$, $p+$ or $p-$, the sum in Eq.~\eqref{eq:dipole_reduced} runs over one single index, and the ionization term $\Upsilon_{\ell m} [\mathbf{p}_s + \mathbf{A} (t_i)]$ acts as a global factor. In this case, therefore,
only the two propensity rules mentioned above are responsible for the contrast
between the harmonic lines in the spectrum. Indeed, in Figs.~\ref{fig:Recombination}(g)-(i) we
show the solution of the TDSE for the case of ionization from $s$, $p+$ and $p-$ orbitals,
respectively. The contrast between the two helicities (red and blue lines) follows nicely
that predicted by our model, and thus follows the two simple propensity
rules highlighted above.
In particular, the numbers above the harmonic lines in Figs.~\ref{fig:Recombination}(h,i) depict the ratio between the maxima of consecutive $3N+1$ and $3N+2$ lines in $p+$
and between consecutive $3N+2$ and $3N+1$ lines in $p-$. The fact that the red lines are stronger in $p+$ than
the blue lines are in $p-$ is a manifestation of the recombination-conditioned propensity rule.
However, for all noble gas atoms except helium, the electron is emitted
from both the $p+$ and $p-$ orbitals. In this case, the ionization factor
now plays a crucial role. In essence, it acts as an additional weighting factor for the contribution
of $p+$ and $p-$ electrons. Indeed, in this case the $\hat{\mathbf{e}}_\pm$ components of the intensity
observed in the $p$ spectrum are proportional to
\begin{equation}\label{eq:intensity_2p}
\begin{split}
I_\pm &\propto \left| \langle \psi_{11} | \text{d}_\pm | \mathbf{v} (t_r) \rangle \right|^2 \left| \Upsilon_{11} [\mathbf{v} (t_i)] \right|^2 + \\
&+ \left| \langle \psi_{1-1} | \text{d}_\pm | \mathbf{v} (t_r) \rangle \right|^2 \left| \Upsilon_{1-1} [\mathbf{v} (t_i)] \right|^2 + \\
&+F_{int} \cos\left[ 2(\phi_{v(t_i)}' - \phi_{v (t_r)}') \right] e^{\pm 2\phi_{v (t_r)}''},
\end{split}
\end{equation}
where the first two terms above are the spectra observed when the
electron is emitted from the $p+$ and $p-$ orbitals, respectively, and
the last term is the interference term which is different for each helicity component $\hat{\mathbf{e}}_\pm$.
The phase of the interference term is composed of twice the phase difference
between the ionization and recombination angles, and thus depends on the pulse parameters
through the saddle point equations. However, the influence of the interference term on the contrast between the lines, while slightly enhancing the ``red'' lines at low energies, is small
compared to the preceding two contributions in Eq.~\eqref{eq:intensity_2p} (see Fig.~\ref{fig:Ionization}). It is therefore not
a crucial term for our purposes, and we will not comment on it further here.
We note, nonetheless, that if one wishes to go beyond the plane wave approximation and
consider the real scattering states, then the interference term will contain
the interplay between the scattering phases, which could lead to additional effects,
as pointed out in \cite{Baykusheva2016}.
Let us now concentrate on the first two terms in Eq.~\eqref{eq:intensity_2p} and pose the
question: which orbital has the stronger ionization factor, $p+$ or $p-$?
The answer is given by the third propensity rule, described below.
Note that while the ionization factor depends on the complex-valued ionization
time $t_i$ and the velocity ${\bf v}(t_i)$, it is also conditioned on the electron
return to the parent core.
\subsection{Propensity 3: The Barth-Smirnova rule}
Barth and Smirnova predicted that tunnel ionization
driven by circular fields preferentially removes electrons from states
counter-rotating with the field~\cite{Barth2011, Barth2013}.
The same rule applies in the bicircular case \cite{ayuso2017attosecond, Milosevic2016}, when
the electron is required to return to the core.
Incidentally, we note that this tunneling propensity rule is opposite to the
photoionization one given by the Fano-Bethe propensity rule. In Fig.~\ref{fig:Ionization}(a)
we show the ionization factor, as a function of the harmonic number,
for the electrons emitted from $p+$ and $p-$ states, and for the same pulse parameters
as those used in Fig.~\ref{fig:Recombination}.
The $p+$ state, which counter-rotates with respect to the total bicircular field,
dominates over most of the relevant (more intense)
part of the spectrum. The ionization factor, however, depends on the energy,
and close to harmonic 40 emission from the $p-$ state becomes dominant.
Again, we stress that this propensity rule depends on the pulse parameters, and
can therefore be altered.
To calculate the contrast of red to blue lines in the $p$ spectra within our model, we multiply the
$\hat{\mathbf{e}}_+$ component of the recombination factor in the $p+$ spectrum (red line in Fig.~\ref{fig:Recombination}(e)) by the $p+$ ionization factor (magenta line in Fig.~\ref{fig:Ionization}(a)), and sum it with the $\hat{\mathbf{e}}_+$ component of the recombination factor in the $p-$ spectrum (red line in Fig.~\ref{fig:Recombination}(f)) times the $p-$ ionization factor (green line in Fig.~\ref{fig:Ionization}(a)), according to Eq.~\eqref{eq:intensity_2p}. This gives us the amplitude of the $\hat{\mathbf{e}}_+$ (red) component in the $p$ spectrum. Analogously, we obtain the $\hat{\mathbf{e}}_-$ (blue) component.
The $\hat{\mathbf{e}}_+$ and $\hat{\mathbf{e}}_-$ amplitudes obtained this way are plotted in Fig.~\ref{fig:Ionization}(b),
as a function of the harmonic order. As commented earlier, Fig.~\ref{fig:Ionization}(b) shows that neglecting the interference term
(last term in Eq.~\eqref{eq:intensity_2p}) gives essentially the same ``red''-to-``blue'' contrast, except at the lower orders, where the SFA is not accurate in the first place. The red and blue lines in the spectrum calculated from the TDSE (Fig.~\ref{fig:Ionization}(c))
follow those predicted by the model (square of the solid red and blue lines in Fig.~\ref{fig:Ionization}(b)) and coincide also with those predicted by the SFA in the rotating frame of reference~\cite{Pisanty2017}.
We now have all the ingredients to understand the features in the HHG spectrum of
neon, which we reproduce again in Fig.~\ref{fig:Ionization}(c) for clarity. The
spectrum shows a clear dominance of the red lines in the plateau region, as our
model predicts. For increasing harmonic orders, the blue lines become comparable to
the red lines and, near the cut-off region ($\simeq$ HH45), the blue lines dominate.
This feature is also well reproduced by our simple model. For low-energy harmonics
the agreement is not so good, which is expected since the strong field approximation is
not suited to treat below-threshold and close-to-threshold harmonics.
Nonetheless, the higher contribution of blue lines at harmonics close to threshold,
due to the stronger contribution of the $s$-wave to the
recombination process as highlighted earlier, is observed in both the spectrum and our model.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Fig3-crop.pdf}\caption{(a) SFA-calculated ionization factor, in arbitrary units, of electrons emitted from the $p+$ (magenta) and $p-$ (green) orbitals of neon. (b) SFA-calculated square of the product of the ionization factor and recombination matrix element in the spectrum of neon, separated into the two helical components (in arbitrary units), including (dashed lines) and neglecting (solid lines) the interference term in Eq.~\eqref{eq:intensity_2p}. (c) Spectrum of neon calculated by solving the TDSE, separated into the two helical components (same as Fig.~\ref{fig:Spectra}(b)).}\label{fig:Ionization}
\end{figure}
\section{Control of the helicity and harmonic contrast in bi-circular fields}
In the previous section we have outlined the three propensity rules
responsible for the relevant features in the spectra. In this section we will use
them to exert control over the ratio between the red and blue harmonic lines.
As we have pointed out, the Fano-Bethe propensity rule cannot be altered by changing
the pulse parameters, but we will now show how the recombination and
ionization propensity rules modify the spectrum when the intensities of the two pulses are varied.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Fig4-crop.pdf}\caption{Control of helicity of
attosecond pulses generated in Helium, using the bicircular driving field.
Each of the two columns represents a different intensity ratio between the fundamental and
the second harmonic: $I_{\omega}/I_{2\omega}=1$ for panels (a,c), and
$I_{\omega}/I_{2\omega}=2.25$ for panels (b,d).
Top row: SFA-calculated recombination matrix element for helium, as a function of harmonic order, in arbitrary units. Center row: TDSE-calculated spectrum of helium in linear (top) and logarithmic (bottom) scale. Bottom row:
attosecond pulse trains (APT) generated by taking an energy window from harmonic 9 to harmonic 42. The degree of circularity $e$ of the generated APTs is given (see text for details) and electric field strengths are expressed in arbitrary units. The pulse parameters used in both the SFA and TDSE calculations are the same.}\label{fig:SpectrumHelium}
\end{figure}
Let us first consider helium. As stated previously, in this case
the ionization factor does not play any role in the contrast between the two helicities, so
this contrast gives us direct access to the propensity rules in the recombination process.
Figs.~\ref{fig:SpectrumHelium}(a) and \ref{fig:SpectrumHelium}(b) show the recombination factor
of helium,
$\langle \psi_{0 0} | \hat{\mathbf{d}} \cdot \hat{\mathbf{e}}^*_\pm | \mathbf{v} (t_r) \rangle$,
for two intensity ratios between the fundamental and second harmonic,
$I_{\omega}/I_{2\omega}=1$ and $2.25$, respectively. The
duration of both driving pulses was set to $12$~fs and the field strength of the second harmonic
was always fixed at $F_{2\omega}=0.056$~a.u. As the ratio increases,
so does the contrast between the red and blue lines in both the recombination amplitude
and in the HHG spectrum (Figs.~\ref{fig:SpectrumHelium}(b) and (d)). Higher red intensities translate
into a higher number of ``red'' photons to be absorbed or emitted in the
continuum and, consequently, to a stronger dominance of red harmonic lines in the
spectrum.
In Figs.~\ref{fig:SpectrumHelium}(e) and \ref{fig:SpectrumHelium}(f) we show the attosecond pulse train (APT)
computed by inverse Fourier transform of the corresponding spectra, taking an energy window from harmonic 9 up to harmonic 42.
When the intensity of the second harmonic driver is the same as that of the fundamental, there are three linearly polarized bursts per laser cycle ($T \approx 110$~a.u.).
When the intensity of the fundamental is 2.25 that
of the second harmonic (Fig.~\ref{fig:SpectrumHelium}(f)), the bursts from the APT are highly
elliptically polarized and rotating in the $\hat{\mathbf{e}}_+$ (counter-clockwise) direction. We can calculate the degree of circularity of the generated attosecond pulse
train by integrating the two helical components $E_{\pm} = \mp (E_x\pm iE_y)/\sqrt{2}$ of the electric field over a temporal window, which we choose from $t_1=-70$~a.u. to $t_2 = 70$~a.u.
(same interval as that shown in Figs.~\ref{fig:SpectrumHelium}(e) and (f)).
The degree of circularity can then be defined as $e=(|E_+|^2 - |E_-|^2)/(|E_+|^2 + |E_-|^2)$.
Positive (negative) $e$ will yield an attosecond pulse train elliptically polarized in the counter-clockwise (clockwise) direction; $e=0$ corresponds to a linearly polarized field, while $|e|=1$ corresponds to a circularly
polarized field. The value of $e$ for the corresponding generated APT is shown in Figs.~\ref{fig:SpectrumHelium}(e) and (f). We can control
the polarization by simply changing the intensity of the red driving field with
respect to the second harmonic. Note that in this case, the generated bursts from the APT will always have the same helicity as the fundamental driving field,
regardless of the frequency window applied to the spectrum to obtain them.
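The circularity measure defined above can be sketched numerically. The following minimal Python snippet is only illustrative: it assumes the helical components $E_\pm = \mp(E_x\pm iE_y)/\sqrt{2}$ are built from the analytic (positive-frequency) signals of the real field components, and the sampling grid and test fields are arbitrary choices on our part.

```python
import numpy as np

def analytic_signal(x):
    """Positive-frequency (analytic) representation of a real signal via FFT."""
    n = len(x)
    spec = np.fft.fft(x)
    weight = np.zeros(n)
    weight[0] = 1.0
    if n % 2 == 0:
        weight[n // 2] = 1.0
        weight[1:n // 2] = 2.0
    else:
        weight[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * weight)

def degree_of_circularity(ex, ey):
    """e = (|E+|^2 - |E-|^2)/(|E+|^2 + |E-|^2), with the helical components
    E_pm = -/+ (Ex +/- i Ey)/sqrt(2) built from the analytic signals and
    summed (integrated) over the temporal window."""
    exa, eya = analytic_signal(ex), analytic_signal(ey)
    e_plus = -(exa + 1j * eya) / np.sqrt(2.0)
    e_minus = (exa - 1j * eya) / np.sqrt(2.0)
    i_plus = np.sum(np.abs(e_plus) ** 2)
    i_minus = np.sum(np.abs(e_minus) ** 2)
    return (i_plus - i_minus) / (i_plus + i_minus)

# illustrative test fields over eight optical cycles
t = np.linspace(0.0, 16.0 * np.pi, 4096, endpoint=False)
e_ccw = degree_of_circularity(np.cos(t), np.sin(t))         # counter-clockwise circular
e_lin = degree_of_circularity(np.cos(t), np.zeros_like(t))  # linearly polarized
```

As a consistency check, a counter-clockwise circular field yields $e \simeq 1$ and a linearly polarized field yields $e \simeq 0$, matching the limiting values quoted in the text.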
It is important to stress that the contrast between red and blue lines in helium appears because of the recombination-conditioned propensity rule, i.e., due to the fact that the trajectories
propagate in complex space-time. If the electron trajectories moved in real-valued space-time,
no contrast would be observed, and the APT generated in helium would
always be linearly polarized, irrespective of the intensity ratio between the
fundamental driver and the second harmonic.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Fig5b-crop.pdf}\caption{
Top row: SFA-calculated matrix elements for recombination to the
(a) $p+$ (b) $p-$ orbitals of neon in arbitrary units.
Bottom row: (c) SFA-calculated ionization factor of electrons emitted from the
$p+$ (magenta) and $p-$ (green) orbitals of neon and (d) SFA-calculated product of
the ionization and recombination dipoles in the spectrum of neon, in arbitrary units. For all panels, solid red (blue) lines indicate emission
of light with $\hat{\mathbf{e}}_+$ ($\hat{\mathbf{e}}_-$) helicity using a driver intensity
ratio of $I_{\omega}/I_{2\omega} = 0.5$. The dashed red (blue)
lines indicate emission of light with $\hat{\mathbf{e}}_+$ ($\hat{\mathbf{e}}_-$) helicity using a driver
intensity ratio of $I_{\omega}/I_{2\omega}= 1$.}\label{fig:RecIon_2wstronger}
\end{figure}
We now turn our attention to neon. In Fig.~\ref{fig:RecIon_2wstronger}
we show the recombination factor for $p+$ and $p-$ (panels a and b),
the ionization factor (panel c), and the resulting ratio between red and blue harmonic lines in the spectrum (panel d), when $I_{\omega}/I_{2\omega} = 0.5$.
In this case, there are more ``blue'' photons to be exchanged with the field during the free-free transitions,
and hence the blue lines would be stronger than in the case of equal intensities.
This is what we observe in the recombination factor,
shown in Figs.~\ref{fig:RecIon_2wstronger}(a) and (b) for the $p+$ and $p-$ spectra, respectively. The ionization factor now favors the $p+$ spectrum throughout the below-threshold and plateau
harmonics (Fig.~\ref{fig:RecIon_2wstronger}(c)). The $p+$ spectrum, however, has
extended the dominance of the blue lines to higher
harmonic orders. This, added to a decrease in the red/blue ratio, is detrimental to generating highly circular APTs in the plateau.
Additionally, increasing the intensity of the second harmonic increases the contribution of forbidden harmonics due to Rydberg state population, as seen in Figs.~\ref{fig:NeSp-1} and \ref{fig:NeSp-2} and also in~\cite{Jimenez-Galan2017}.
In this case, the small radial box, used to reduce the computational time and the influence of long trajectories, eliminates the contribution of Rydberg orbits.
The predicted contrast is given in Fig.~\ref{fig:RecIon_2wstronger}(d), which is in
good agreement with what the TDSE spectrum shows (Fig.~\ref{fig:SpectrumNeon}(a)).
The pulses in this case were 12~fs long with a field strength of $F_{\omega} = F_{2\omega}/\sqrt{2} = 0.052$~a.u.
The bursts of the APT generated in this case by filtering everything but the plateau region are close to linear ($e=0.23$, Fig.~\ref{fig:SpectrumNeon}(e)).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Fig5a-crop.pdf}\caption{Same as Fig.~\ref{fig:RecIon_2wstronger}, but solid lines now indicate results using a driver intensity ratio of $I_{\omega}/I_{2\omega} = 2$.}\label{fig:RecIon_wstronger}
\end{figure}
What happens when the fundamental field is stronger than the second harmonic?
Again, we see a
stronger dominance of red harmonic lines in the recombination factor in
both the $p+$ and $p-$ spectra due to the higher number of ``red'' photons
exchanged in the continuum, as we saw in helium.
As for the ionization factor, the $p-$ spectrum now dominates throughout the relevant harmonic region.
The latter is responsible for an extended dominance of red lines down to lower harmonic orders, as illustrated in Fig.~\ref{fig:RecIon_wstronger}(d). The interplay between the ionization and recombination factors leads to a higher and more extended red/blue contrast in the spectrum in this case, as compared to the case of equal intensities. We observe this in both our model prediction (Fig.~\ref{fig:RecIon_wstronger}(d)) and the HHG spectrum (Fig.~\ref{fig:SpectrumNeon}(c)), both computed with the same pulse parameters. In particular, we used a field strength of $F_{\omega} = \sqrt{2} F_{2\omega} = 0.1$~a.u.
Therefore, just like in the case of helium, increasing the fundamental intensity
with respect to the second harmonic dramatically increases the circularity
of the APT generated with the plateau harmonics to the value $e=0.77$, which we show in Fig.~\ref{fig:SpectrumNeon}(g).
In Fig.~\ref{fig:SpectrumNeon} (d) we show the intensity ratio of the consecutive harmonics $H_{3N+1}/H_{3N+2}$ for three different $I_{\omega}/I_{2\omega}$ ratios: 0.5 (blue diamonds), 1 (green crosses) and 2 (red squares). The ratio between consecutive harmonics clearly increases as the $I_{\omega} / I_{2\omega}$ ratio is increased for the most intense high harmonics.
Furthermore, the higher total intensity in this case allows us to look at the cut-off harmonics more clearly. The change from dominance of ``red'' lines to ``blue'' lines in this region is evident, as also predicted by our model (see Fig.~\ref{fig:RecIon_wstronger}(d)). In Fig.~\ref{fig:SpectrumNeon}(h) we show the APT obtained by filtering out all harmonics below HH48. The bursts are remarkably elliptical and very clearly separated. The helicity of the bursts in this case is opposite to that obtained by filtering the plateau energies only, i.e., the bursts rotate in the $\hat{\mathbf{e}}_-$ direction, with a degree of circularity of $e=-0.45$. Therefore, by spectral filtering alone, we can obtain, within a single experiment, two circular attosecond pulse trains rotating in opposite directions.
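The selection of opposite-helicity APTs by spectral filtering can be mimicked with a toy spectrum of two counter-rotating harmonic combs. The following sketch is purely illustrative: the line positions and unit amplitudes are assumptions on our part, not values taken from the TDSE data.

```python
import numpy as np

# one period of the fundamental, sampled so FFT bins are integer harmonic orders
n = 8192
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

def comb_field(orders_plus, orders_minus):
    """Synthetic field: unit-amplitude counter-clockwise (e+) lines at the
    orders in orders_plus and clockwise (e-) lines at orders_minus."""
    ex = sum(np.cos(q * t) for q in orders_plus) + sum(np.cos(q * t) for q in orders_minus)
    ey = sum(np.sin(q * t) for q in orders_plus) - sum(np.sin(q * t) for q in orders_minus)
    return ex, ey

def circularity_in_window(ex, ey, qmin, qmax):
    """Keep only harmonic orders in [qmin, qmax] (positive frequencies, doubled
    to form the analytic signals) and return the degree of circularity
    e = (|E+|^2 - |E-|^2)/(|E+|^2 + |E-|^2)."""
    orders = np.fft.fftfreq(n, d=1.0 / n)       # integer harmonic order per FFT bin
    keep = (orders >= qmin) & (orders <= qmax)
    exa = np.fft.ifft(np.where(keep, np.fft.fft(ex), 0.0) * 2.0)
    eya = np.fft.ifft(np.where(keep, np.fft.fft(ey), 0.0) * 2.0)
    i_plus = np.sum(np.abs(exa + 1j * eya) ** 2) / 2.0
    i_minus = np.sum(np.abs(exa - 1j * eya) ** 2) / 2.0
    return (i_plus - i_minus) / (i_plus + i_minus)

# red (3N+1) lines dominate the plateau; only blue (3N+2) lines near the cut-off
ex, ey = comb_field(orders_plus=[10, 13, 16, 19], orders_minus=[11, 14, 49, 52])
e_plateau = circularity_in_window(ex, ey, 9, 42)   # red-dominated window -> e > 0
e_cutoff = circularity_in_window(ex, ey, 48, 60)   # blue lines only -> e = -1
```

Windowing the red-dominated plateau gives $e>0$, while windowing the cut-off region, which here contains only blue lines, gives $e=-1$: two pulse trains of opposite helicity from the same synthetic spectrum.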
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{Fig6-crop.pdf}\caption{Control of helicity of
attosecond pulses generated in Neon, using the bicircular driving field. TDSE-calculated HHG spectra in neon (left panels, with linear scale in the top and logarithmic scale in the bottom) and APT obtained from the spectra (right panels) for three different driver intensity ratios: (a,e) $I_{\omega}/I_{2\omega} = 0.5$, (b,f) $I_{\omega}/I_{2\omega} = 1$, (c,g,h) $I_{\omega}/I_{2\omega} = 2$. The APTs in panels (e,f,g) were obtained by inverse Fourier transform of the plateau harmonics in panels (a,b,c), respectively, while the APT in panel (h) was obtained from the cut-off harmonics of panel (c). Electric field strengths are expressed in arbitrary units. Panel (d) shows the ratio between the peak intensity of $H_{3N+1}$ and $H_{3N+2}$ for an $I_{\omega}/I_{2\omega}$ ratio of 0.5 (blue diamonds), 1 (green crosses) and 2 (red squares). A factor of 0.1 and 0.5 has been applied to $H_{13}/H_{14}$ and $H_{16}/H_{17}$, respectively, in the case of $I_{\omega}/I_{2\omega} = 2$. The black line indicates equal intensity of $H_{3N+1}$ and $H_{3N+2}$ harmonics.}\label{fig:SpectrumNeon}
\end{figure*}
\section{Conclusions} \label{section_conclusions}
In conclusion, we have developed a comprehensive analytical understanding of the
electron dynamics occurring in the high harmonic generation process driven
by two-color counter-rotating fields, and have linked these dynamics to the
features observed in the experimental spectrum. We have identified the three
propensity rules responsible for the contrast between the 3N+1 and 3N+2 harmonic
lines in the HHG spectra of noble gas atoms, and demonstrated how these
rules depend on the laser parameters and can be used to shape the
polarization properties of the emitted attosecond pulses.
In particular, we were able to show that for atoms emitting from an $s$-shell (e.g., helium),
the contrast between the two light helicities in the HHG spectra
follows the one-photon recombination matrix element but still encodes the
condition of the electron return. This offers a
unique opportunity to obtain the characteristics of
what is, formally, recombination from a continuum state with complex-valued
velocity.
For atoms emitting $p$ electrons, information on the different ionization rates of $p+$ and $p-$ electrons can similarly be obtained. This understanding of the dynamics demonstrates an easy-to-implement and efficient mechanism to control the polarization of the generated attosecond pulse trains. By increasing the ratio of intensities between the fundamental and second harmonic drivers, the contrast between the two adjacent harmonic lines in the spectrum dramatically increases, leading to more circular attosecond bursts. Moreover, we have shown that, for $p$ states, when the fundamental field is stronger than the second harmonic, the APT generated from the plateau harmonics rotates with the fundamental driver, while that generated by the cut-off harmonics rotates with the second harmonic. In this way, we are able to obtain, from the same experiment, two circularly polarized APTs with opposite helicity. Contrary to the common view, we have shown that it is possible to generate highly chiral attosecond bursts by using initial $s$-orbitals. Furthermore, we have linked this possibility to the fact that the electron carries a complex velocity at the time of recombination, which thus constitutes a measurable confirmation of the electron's propagation in complex space-time during the HHG process.
\section{Acknowledgements}
AJG, NZ, and MI acknowledge financial support from the DFG QUTIF grant IV 152/6-1.
DA and OS acknowledge support from the DFG grant SM 292/2-3.
EP acknowledges financial support from MINECO grants
FISICATEAMO (FIS2016-79508-P) and Severo Ochoa (SEV-2015-0522), Fundaci\'o Cellex, Generalitat de Catalunya (2014 SGR 874 and CERCA/Program), and ERC grants EQuaM (FP7-ICT-2013-C No. 323714), QUIC (H2020-FETPROACT-2014 No. 641122) and OSYRIS (ERC-2013-ADG No. 339106).
\section{Introduction} The intriguing possibility that the merging of compact massive objects can lead to the emission of gravitational wave (GW) echoes, possibly detectable by the LIGO-VIRGO interferometers, has been investigated by various authors~\cite{Abedi:2016hgu, Abedi:2017isz, Abedi:2018npz}, but remains a controversial topic, see for example~\cite{Ashton:2016xff, Westerweck:2017hus}. The emission mechanism of GW echoes relies on the existence of a very massive post-merger object of mass $M$, featuring a photon-sphere, see~\cite{Weinberg:1972kfs, Misner:1974qy, Claudel:2000yi, Virbhadra:1999nm}, leading to partial GW trapping. The photon-sphere is a surface located at $R= 3M$ where circular photon orbits are possible thanks to an angular potential barrier. It is a feature of both black holes, see for example the discussion in~\cite{Shapiro:1983du}, and of ultracompact stars~\cite{1985CQGra...2..219I, Nemiroff:1993zz}.
For black holes (BHs), GW echoes require a second reflection surface to avoid GW absorption; this surface is related to quantum effects close to the BH horizon, see for example~\cite{Barcelo:2017lnx}. As discussed in~\cite{Ferrari:2000sr}, GW echoes can also be produced by ultracompact stars featuring a photon-sphere. In this case, there is no need for an internal reflection surface because, unlike BHs, the ultracompact star is not capable of absorbing a sizable fraction of the GWs.
The GW170817 event~\cite{TheLIGOScientific:2017qsa} has been interpreted as the merging of two neutron stars (NSs) with an estimated total mass $M \approx 2.7 M_\odot$. The final stellar object has not been firmly established: it can be a massive compact star or a BH. The possible presence of GW echoes in the GW170817 event has been analyzed in \cite{Abedi:2018npz}, where it is claimed that a signal at a frequency $\approx 72 $ Hz with a 4.2$\sigma$ significance level is present. The authors interpret this signal as originating from quantum effects close to the BH horizon.
An interpretation of this echo signal as originating from an ultracompact star was first proposed in~\cite{Pani:2018flj}. This preliminary analysis, conducted with a simplified incompressible EoS, has shown that to produce a signal at such a low frequency the stellar object formed in the coalescence of the NSs should be very compact, close to the Buchdahl limit radius~\cite{Buchdahl:1959zz} $R_B = 9/4\, M$. Thus, the compact stellar object produced in the NS merging should have a compactness $M/R$ larger than $1/3$, to have a photon-sphere, and smaller than (but very close to) $4/9$, to emit GW echoes at a frequency of tens of Hz.
Since strange stars are known to be very compact~\cite{Alcock:1986hz,Haensel:1986qb}, we examine the possibility that the ultracompact object produced in the GW170817 event is a strange star and evaluate the frequency of the corresponding GW echoes. In particular, we study whether strange stars may have a photon-sphere and approach the Buchdahl limit. In our approach we assume that the conversion of nuclear matter to deconfined quark matter is driven by the extremely high densities produced in the NS merging. An important aspect is, indeed, that the analysis of the GW170817 tidal deformability suggests that the EoS of the merging NSs cannot be too stiff~\cite{TheLIGOScientific:2017qsa, Annala:2017llu, Most:2018hfd,Lim:2018bkq}, see also~\cite{Radice:2017lry} for an analysis based on multimessenger observations. Thus, the merging stellar objects can well be two standard NSs, or a NS and a hybrid star~\cite{Nandi:2017rhy, Burgio:2018yix}, characterized by a not-too-stiff EoS. However, if the final stellar object emits GW echoes it has to be very compact, and therefore described by a different, very stiff EoS. For this reason we assume that the source of the GW echoes is a strange star \textcolor{black}{produced by the merging of the two NSs.} To have the most compact configuration we assume a simple MIT bag model~\cite{Farhi:1984qu} EoS with the largest possible stiffness, corresponding to a speed of sound equal to the speed of light.
\textcolor{black}{The formation of a strange star would certainly be accompanied by a release of energy, as discussed in the framework of supernova explosions, see for example~\cite{Benvenuto:1989qr, Pagliara:2013tza, Ouyed:2017nuy}, possibly affecting the gamma and neutrino emissions associated with the merging of NSs. The GW post-merger emission could also be different, but we are not aware of any simulation of merging NSs leading to the formation of a strange star. In the present paper we limit our analysis to the post-merger GW echo signal.}
Although the strange star is initially hot and presumably in a highly excited state, possibly rotating at high frequency, we neglect both temperature and spinning effects, considering a static configuration of cold quark matter. We will later argue that both effects should be negligible in the present context. However, it may be of interest that the excited strange star could also relax by emitting radio waves at kHz frequencies (or lower)~\cite{Mannarelli:2014ija, Mannarelli:2015jia, Flores:2017kte}.
The present study could, in principle, lead to interesting information on the quark matter EoS and on the possible realization of the Bodmer and Witten hypothesis~\cite{Bodmer:1971we,Witten:1984rs} that standard nuclei are not the ground state of matter. We remark that although the current astrophysical observations of masses and radii of NSs can in principle constrain the EoS of matter at supra-saturation densities, simultaneous mass and radius observations are difficult, meaning that several model EoSs, obtained considering rather different matter compositions and interactions, are capable of describing a wealth of astrophysical data. The observation of NSs with a gravitational mass $M\simeq 2 M_\odot$~\cite{Demorest:2010bx,Antoniadis:2013pzd} has challenged nuclear EoSs, excluding those that are too soft. If a compact star with an even larger mass, say of about $2.5 M_\odot$, is the final stellar object resulting from the NS merging associated with the GW170817 event, although still compatible with extreme nuclear matter EoSs, it would certainly exclude a larger number of models, possibly challenging the present understanding of core-collapse neutron star formation~\cite{Lattimer:2012nd}. As we will see, requiring that this compact object emits GW echoes further constrains the model EoSs, excluding the known nuclear EoSs, as already shown in~\cite{1985CQGra...2..219I,Pani:2018flj}, and constraining the quark matter EoS to be very stiff. Actually, even considering extreme strange star models with a very stiff quark matter EoS we can only marginally cross the photon-sphere radius line, obtaining GW echo frequencies of the order of tens of kHz.
The present paper is organized as follows. In Sec.~\ref{sec:Model} we discuss the strange star model and obtain the corresponding mass-radius diagram, comparing strange stars with nuclear EoSs. In Sec.~\ref{sec:Frequency} we evaluate the typical GW echo frequency emitted by the last stable strange star configuration. We draw our conclusions in Sec.~\ref{sec:Conclusions}. We use geometrized units, with $G=c=1$.
\section{ The model}\label{sec:Model}
We consider a simple bag model EoS with energy density
\begin{equation}\label{eq:simpleEoS}
\rho= p + 4 B\,,
\end{equation}
where $p$ is the pressure, $B$ is the bag constant and the speed of sound has been set equal to $1$. For simplicity we neglect the stellar rotation; thus the stellar structure can be obtained by solving the Tolman-Oppenheimer-Volkoff (TOV) equations
\begin{align}\label{eq:TOV0}
\frac{ d \Phi}{dr} & =-\frac{1}{\rho+p} \frac{d p}{d r}\,, \\
\frac{d m}{d r}& = 4 \pi \rho r^2\,,\label{eq:dm}\\
\frac{d p}{d r} &= (\rho+p) \frac{m + 4 \pi p r^3}{2 m r -r^2} \label{eq:dp}\,,
\end{align}
where $m(r)$ is the gravitational mass within the radius $r$ and $\Phi(r)$ is the gravitational potential.
The first equation follows from hydrostatic equilibrium and can be used to determine the gravitational field inside the star once the pressure, and hence the energy density, has been determined by solving Eqs.~\eqref{eq:dm} and~\eqref{eq:dp} iteratively. In Fig.~\ref{fig:MvsR} we report the obtained masses and radii for two different values of the bag constant: $B_1=(145 \text{ MeV})^4$ (a typical bag model value) and $B_2=(185 \text{ MeV})^4$, corresponding to the curves SS1 and SS2, respectively. With this extreme EoS, the $M(R)$ curves cross the photon-sphere line $M= R/3$, but do not approach the Buchdahl limit line. The reason is that for small masses and radii, the stellar mass is expected to grow as $R^3$,
because strange quark matter is self-bound. Therefore, for small radii the $M(R)$ curve of strange stars lies below the photon-sphere line. It can only approach it when the $M(R)$ curve bends, which happens for sufficiently large masses. For large masses the gravitational pull helps to compress the structure; however, it eventually leads to an unstable branch, when a central density increase leads to a gravitational mass reduction~\cite{Shapiro:1983du}. The last stable configurations, with the largest masses, correspond to the tips of the $M(R)$ curves in the mass-radius diagram of Fig.~\ref{fig:MvsR}. These are also the most compact stable configurations. Thus, it seems that strange stars cannot reach the Buchdahl limit line. The considered values of the bag constant lead to maximum masses of $M_\text{max}\approx 2 M_\odot$ for SS2 and $M_\text{max}\approx 3.3 M_\odot$ for SS1. Intermediate maximum masses can be obtained for values of the bag constant in the range $B_1<B<B_2$, which can be easily inferred considering that the maximum mass scales as~\cite{Witten:1984rs}
\begin{equation}
M_\text{max} \propto B^{-1/2} \,.\end{equation} Thus, for values of the bag constant in the above range, one spans maximum masses compatible with the $2 M_\odot$ observations~\cite{Demorest:2010bx,Antoniadis:2013pzd} and the GW170817 estimated total mass of $2.7 M_\odot$ \cite{TheLIGOScientific:2017qsa}. To make clear how extreme these cases are, consider that the central baryonic densities of these strange stars are about $25$ times the nuclear saturation density. Actually, such extreme values of the baryonic densities are in agreement with the results obtained by simple models of NS collapse~\cite{1991A&A...252..651G} and by numerical simulations including rotation, see for example~\cite{Shibata:2003iw, Baiotti:2004wn}. In these works, polytropic EoSs are used to mimic nuclear matter. Instead, in our approach we assume, perhaps more reasonably, that at such large densities quark matter is liberated~\cite{Cabibbo:1975ig} and thus the collapse of two NSs leads to the formation of a strange star. Whether the strange star is the final stellar object or it collapses to a black hole depends, in our very simple model, on the value of the bag constant. Small values of the bag constant indeed allow for strange stars with a large mass. Hereafter we assume that the final stellar object is a strange star, but we will comment on the possible collapse of a strange star to a black hole.
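The scaling $M_\text{max} \propto B^{-1/2}$ follows from the exact scale invariance of the TOV equations with the EoS of Eq.~\eqref{eq:simpleEoS}, and can be verified with a minimal numerical sketch in geometrized units. The Euler integrator, step size and central-pressure grid below are illustrative choices of ours, not the numerical setup used for Fig.~\ref{fig:MvsR}.

```python
import numpy as np

def tov_max_mass(B, dr_scale=2e-5):
    """Maximum gravitational mass for the maximally stiff bag EoS
    rho = p + 4B, obtained by integrating the TOV equations (G = c = 1)."""
    def total_mass(pc):
        dr = dr_scale / np.sqrt(B)          # radii scale as B**(-1/2)
        r, p = dr, pc
        m = (4.0 / 3.0) * np.pi * (pc + 4.0 * B) * r**3
        while p > 1e-12 * B:                # integrate out to the surface p = 0
            rho = p + 4.0 * B
            dp = -(rho + p) * (m + 4.0 * np.pi * p * r**3) / (r * (r - 2.0 * m))
            p += dp * dr
            m += 4.0 * np.pi * rho * r**2 * dr
            r += dr
        return m
    # scan central pressures (in units of B) and pick the maximum of M(p_c)
    return max(total_mass(x * B) for x in np.logspace(-1.0, 2.0, 25))

m_b1 = tov_max_mass(1.0)
m_b4 = tov_max_mass(4.0)
ratio = m_b1 / m_b4   # scale invariance predicts B**(-1/2), i.e. ratio = 2
```

Because the EoS, the central-pressure grid and the step size all rescale consistently with $B$, the computed maximum-mass ratio reproduces the $B^{-1/2}$ scaling even at modest numerical accuracy.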
One may expect that a different quark matter EoS could provide a structure approaching the Buchdahl limit line in Fig.~\ref{fig:MvsR}. A very general parameterization of the quark matter EoS is~\cite{Alford:2004pf}
\begin{equation}
P = \frac{3}{4 \pi^2} a_4 \mu^4 - \frac{3}{4 \pi^2} a_2 \mu^2 - B\,,
\label{eq:EoS_quark_matter}
\end{equation}
where $a_4$, $a_2$ are parameters independent of the average quark chemical potential $\mu$. Varying these parameters we obtain last stable strange stars that are less compact than those reported in Fig.~\ref{fig:MvsR}, basically because the EoS in Eq.~\eqref{eq:EoS_quark_matter} is less stiff than the simple parameterization in Eq.~\eqref{eq:simpleEoS}. See for example the mass-radius diagram reported in~\cite{Mannarelli:2014ija} for some $M(R)$ results obtained with the parameterization in Eq.~\eqref{eq:EoS_quark_matter}.
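The relative softness of this parameterization can be made explicit through the thermodynamic relations $n = dP/d\mu$, $\rho = \mu n - P$ and $c_s^2 = dP/d\rho$. The short sketch below (with illustrative parameter values) shows that for $a_2 = 0$ the EoS of Eq.~\eqref{eq:EoS_quark_matter} has the conformal sound speed $c_s^2 = 1/3$, well below the $c_s = 1$ assumed in Eq.~\eqref{eq:simpleEoS}.

```python
import numpy as np

def quark_eos(mu, a4, a2, B):
    """Pressure, energy density and squared sound speed for
    P = (3/(4 pi^2)) (a4 mu^4 - a2 mu^2) - B."""
    c = 3.0 / (4.0 * np.pi**2)
    P = c * (a4 * mu**4 - a2 * mu**2) - B
    n = c * (4.0 * a4 * mu**3 - 2.0 * a2 * mu)       # quark density, n = dP/dmu
    rho = mu * n - P                                 # thermodynamic identity
    drho_dmu = mu * c * (12.0 * a4 * mu**2 - 2.0 * a2)
    cs2 = n / drho_dmu                               # cs^2 = (dP/dmu)/(drho/dmu)
    return P, rho, cs2

# illustrative values: a2 = 0 recovers the conformal limit cs^2 = 1/3
_, _, cs2 = quark_eos(mu=3.0, a4=1.0, a2=0.0, B=1.0)
```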
Regarding standard nuclear matter, as already noted in~\cite{1985CQGra...2..219I,Pani:2018flj}, the $M(R)$ curves obtained by the nuclear EoSs approach the photon-sphere line from below, but do not cross it. As representative examples we consider in Fig.~\ref{fig:MvsR} the BBB2 \cite{Baldo:1997ag}, the SLy4 \cite{Douchin:2001sv} and the MS1~\cite{Mueller:1996pm} EoSs, which at the largest possible mass values have a speed of sound in the central region close to $1$, but nonetheless are not sufficiently compact to cross the photon-sphere line.
\begin{figure}
\includegraphics[width=0.48\textwidth]{MvsR_2.eps}
\caption{Mass-radius diagram for various compact star models. The emission of GW echoes can only happen for those stellar models that cross the photon-sphere line. Standard NSs do not seem to be possible candidates. Strange stars with a maximally stiff EoS are marginally compatible with this requirement. }
\label{fig:MvsR}
\end{figure}
\section{Frequency of the gravitational wave echoes}\label{sec:Frequency}
In the proposed model the GWs emitted by the stellar object are partially reflected back by the angular potential barrier at the photon-sphere. \textcolor{black}{One may indeed conceive the photon-sphere as a trap for GWs, with characteristic frequencies of the order of the inverse of the length scale of the trap}. Thus, the smaller the trap, {\it i.e.}, the closer the stellar solution is to the photon-sphere line in Fig.~\ref{fig:MvsR}, the larger the GW echo frequency. Even considering the last stable strange stars, corresponding to the most compact configurations, we obtain solutions that do not approach the Buchdahl limit. For this reason we expect that only GW echoes at large frequencies are produced.
The typical echo time can be evaluated as the light travel time from the center of the star to the photon-sphere, see~\cite{Pani:2018flj}, corresponding to
\begin{equation}\label{eq:tau_echo}
\tau_\text{echo} = \bigintss_0^{3M} \hspace{-.5cm}\frac{dr}{\sqrt{e^{2\Phi (r)}\left(1- \frac{2 m(r)}{r}\right)}}\,,
\end{equation}
where $m(r)$ and $\Phi(r)$ are determined by solving the TOV equations, Eqs.~(\ref{eq:TOV0}-\ref{eq:dp}). We are assuming, quite reasonably, that GWs are not absorbed by the strange star. The GW echo frequency can be approximated by $\omega_\text{echo}=\pi/\tau_\text{echo}$~\cite{Cardoso:2016rao,Cardoso:2016oxy,Cardoso:2017cqb,Cardoso:2017njb,Mark:2017dnq}. \textcolor{black}{In~\cite{Abedi:2018npz} the estimated frequency is given by $1/(2 \tau_\text{echo})$, which should actually correspond to the repetition frequency of the echo signal. The argument underlying our approximation is that the echo frequency corresponds to that of standing waves inside the photon-sphere, see for example the discussion in~\cite{Kokkotas:1995av} and~\cite{Andersson:1995ez}. Thus, it is assumed that during the merger of the NSs these modes are excited and partially trapped inside the photon-sphere. After some time, they leak outside with approximately the same frequency as the standing waves. The frequency of the GW echo is therefore determined by the eigenmodes of the photon-sphere trap, and is not related to the frequency of the GW emission during the inspiral. }
Most of the contribution to the integral in Eq.~\eqref{eq:tau_echo} comes from the strange star interior and
for the two considered models we obtain that the lowest frequencies are of the order of tens of kHz. In particular, for the last stable massive stars, corresponding to the tips of the SS1 and SS2 curves in Fig.~\ref{fig:MvsR}, we obtain $\omega_\text{1,echo}\simeq 17$ kHz and $\omega_\text{2,echo}\simeq 27$ kHz, respectively. Values of the bag constant \textcolor{black}{lying between $B_1$ and $B_2$} lead to intermediate values of the echo frequency.
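The dependence of $\tau_\text{echo}$ on the compactness can be illustrated with the constant-density (interior Schwarzschild) toy model of~\cite{Pani:2018flj}, for which $m(r) = M(r/R)^3$ and $e^{\Phi(r)} = \tfrac{3}{2}\sqrt{1-2M/R} - \tfrac{1}{2}\sqrt{1-2Mr^2/R^3}$ inside the star. A minimal numerical evaluation of Eq.~\eqref{eq:tau_echo} follows; the radii used are illustrative.

```python
import numpy as np

def tau_echo(mass, radius, n=200_000):
    """Light travel time from the center to the photon-sphere r = 3M for a
    uniform-density star (interior Schwarzschild metric), in units G = c = 1."""
    r = np.linspace(1e-8, 3.0 * mass, n)
    ins = r <= radius
    exp_phi = np.empty_like(r)
    m = np.empty_like(r)
    exp_phi[ins] = (1.5 * np.sqrt(1.0 - 2.0 * mass / radius)
                    - 0.5 * np.sqrt(1.0 - 2.0 * mass * r[ins]**2 / radius**3))
    m[ins] = mass * (r[ins] / radius) ** 3
    exp_phi[~ins] = np.sqrt(1.0 - 2.0 * mass / r[~ins])   # exterior Schwarzschild
    m[~ins] = mass
    f = 1.0 / np.sqrt(exp_phi**2 * (1.0 - 2.0 * m / r))
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))    # trapezoidal rule

M = 1.0
tau_compact = tau_echo(M, 2.26 * M)   # close to the Buchdahl limit R_B = 2.25 M
tau_loose = tau_echo(M, 2.60 * M)     # a less compact configuration
# the more compact star traps the signal longer, hence a lower echo frequency
omega_compact, omega_loose = np.pi / tau_compact, np.pi / tau_loose
```

The closer the star is to the Buchdahl limit, the larger $\tau_\text{echo}$ and the smaller $\omega_\text{echo}=\pi/\tau_\text{echo}$, consistent with the fact that only configurations very close to that limit can push the echo frequency down to tens of Hz.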
\section{ Conclusion}\label{sec:Conclusions}
We have examined the possibility that a strange star has been produced in the GW170817 merging event and has emitted a GW echo.
Considering extreme strange star models having a speed of sound equal to $1$, we have found that the most compact structures do cross the photon-sphere line, which is a necessary condition for producing GW echoes. However, the considered models do not approach the Buchdahl limit line corresponding to $R_B = 9/4\, M$, which would lead to a GW echo emission at a frequency close to the values estimated in~\cite{Pani:2018flj} and thus approaching the frequency reported in~\cite{Abedi:2018npz}.
With our model the typical frequencies are of the order of $10$ kHz.
The basic reason for the discrepancy between our results and those of~\cite{Pani:2018flj} is that strange quark matter is self-bound, but is not incompressible. Incompressible matter is characterized by a superluminal (actually infinite) speed of sound. In our approach we have instead assumed a speed of sound equal to the speed of light. In this case it is still possible to cross the photon-sphere line, but the star cannot be too compact because gravitational effects then become large, leading to gravitational collapse. This leads to the typical behavior depicted in Fig.~\ref{fig:MvsR}, with the last stable compact configurations close to the photon-sphere line.
We have neglected the stellar rotation and possible temperature effects on the EoS.
Regarding the stellar rotation, we have solved the TOV equations assuming a static stellar model. However, including rotation is expected to slightly change the GW echo frequency, see for example the estimates reported in~\cite{Pani:2018flj}. Those estimates apply to the present model for the basic reason that strange stars are hardly deformable.
Regarding the temperature effects, one should compare the temperatures expected in the NS merger with the corresponding quark chemical potentials. Since in strange stars the quark chemical potential is of the order of hundreds of MeV, it seems unlikely that such a high temperature scale is reached in the merger or in the post-merger environment.
We have restricted our analysis to strange stars, but different exotic ultracompact star models have been proposed, including boson stars~\cite{Wheeler:1955zz, Colpi:1986ye, Jetzer:1991jr}, see~\cite{Carignano:2016lxe, Brandt:2018bwq} for recent studies, and the so-called Q-stars~\cite{Bahcall:1989ff}, both having a similar self-bound EoS. Whether they are sufficiently compact to approach Buchdahl's limit line is a topic that will be considered in future work.
An interesting possibility is that the strange star produced by the merging of NSs is in the unstable branch. Since stars in the unstable branch are more compact than stable stars, they may lead to GW echoes at lower frequencies. In this case the star would quickly collapse to a black hole, but it might have enough time to produce a GW echo signal. The estimated time for a NS to collapse to a black hole is of the order of a millisecond~\cite{1991A&A...252..651G, Shibata:2003iw, Baiotti:2004wn}, and it strongly depends on how far from equilibrium the initial stellar configuration is.
\textcolor{black}{A delayed collapse, on timescales of $10-100$ ms, is obtained for differentially rotating stars, see for example~\cite{Shapiro:2000zh}, and for stiff EoSs~\cite{Hotokezaka:2013iia}. We are not aware of any simulation of merging NSs leading to the formation of an unstable strange star, however, since the EoS in~\eqref{eq:EoS_quark_matter} is extremely stiff, it may lead to collapsing times of the order of $100$ ms or more. In this case, the collapsing time could be longer than $\tau_\text{echo}$, thus allowing, at least in principle, the emission of GW echoes at lower frequencies than those obtained in the present work. Note that for realistic estimates of the echo timescale one should evaluate Eq.~\eqref{eq:tau_echo} considering that the density and the pressure of the collapsing ultracompact star change with time.}
\acknowledgments
We thank Valeria Ferrari and Viviana Fafone for useful discussions.
\section{Introduction}
An arrangement of hyperplanes is a finite collection of codimension one affine subspaces in a finite dimensional vector space. Associated to these spaces,
there is a plethora of algebraic, combinatorial and topological invariants. Arrangements are easily defined, but they lead to
deep and beautiful results that connect various areas of mathematics. We refer the reader to the seminal text \cite{orlterao} for a comprehensive account of this subject.
One of the main goals in the study of hyperplane arrangements is to decide whether a given invariant is combinatorially determined and, if so, to express it explicitly in terms of the intersection lattice of the arrangement.
We describe the new package \textbf{arrangements}, which computes several combinatorial invariants
(like the lattice of intersections and its flats, and the Poincar\'e, characteristic and Tutte polynomials)
and algebraic ones (like the Orlik-Terao and Solomon-Terao ideals)
of a hyperplane arrangement for the software CoCoA (\cite{CoCoALib}, \cite{AbbottBigatti2016} and \cite{COCOA}).
Moreover, several functions for the class of free hyperplane arrangements are implemented. In addition, this package
also allows computations with multiarrangements.
Finally, several known families of arrangements (like classic reflection arrangements, Shi arrangements, Catalan arrangements, Shi-Catalan arrangements, graphical arrangements
and signed graphical ones) can be easily constructed.
We introduce this package via several examples.
Specifically, in Section 2 we first recall the definitions of various combinatorial invariants of a given arrangement and then describe how to compute them.
In Section 3, we describe how to work with free hyperplane arrangements, and in Section 4
how to define the Orlik-Terao ideal and the Solomon-Terao one.
Finally, in Section 5 we describe the class of multiarrangements with particular emphasis to the free ones.
This package will be part of the official release CoCoA-5.2.4.
\section{Combinatorics of arrangements}
Let $V$ be a vector space of dimension $l$ over a field $K$. Fix a system of coordinates $(x_1,\dots, x_l)$ of $V^\ast$.
We denote by $S = S(V^\ast) = K[x_1,\dots, x_l]$ the symmetric algebra. A finite set of affine hyperplanes $\ensuremath{\mathcal{A}} =\{H_1, \dots, H_n\}$ in $V$ is called a \textbf{hyperplane arrangement}.
For each hyperplane $H_i$ we fix a defining equation $\alpha_i\in S$ such that $H_i = \alpha_i^{-1}(0)$,
and let $Q(\ensuremath{\mathcal{A}})=\prod_{i=1}^n\alpha_i$. An arrangement $\ensuremath{\mathcal{A}}$ is called \textbf{central} if each $H_i$ contains the origin of $V$.
In this case, the defining equation $\alpha_i\in S$ is linear homogeneous, and hence $Q(\ensuremath{\mathcal{A}})$ is a homogeneous polynomial of degree $n$.
The operation of \textbf{coning} allows one to transform any arrangement $\ensuremath{\mathcal{A}}$ of $V$ with $n$ hyperplanes
into a central arrangement $c\ensuremath{\mathcal{A}}$ in a vector space of dimension $l+1$ with $n+1$ hyperplanes, see \cite{orlterao}.
Notice that in CoCoA, to compute the cone of an arrangement $\ensuremath{\mathcal{A}}$, the homogenizing variable needs to be already present in the ring in which the equations of $\ensuremath{\mathcal{A}}$ are defined.
For example, we can construct the cone of the Shi arrangement of type $A$ in CoCoA as follows:
\begin{lstlisting}[alsolanguage=cocoa]
/**/ use S::=QQ[x,y,z,w];
/**/ A := ArrShiA(S, 3); A;
[x-y, x-z, y-z, x-y-1, x-z-1, y-z-1]
/**/ ArrCone(A, w);
[x-y, x-z, y-z, x-y-w, x-z-w, y-z-w, w]
\end{lstlisting}
Let $L(\ensuremath{\mathcal{A}})=\{\bigcap_{H\in\mathcal{B}}H \mid \mathcal{B}\subseteq\ensuremath{\mathcal{A}}\}$ be the \textbf{lattice of intersection} of $\ensuremath{\mathcal{A}}$.
Define a partial order on $L(\ensuremath{\mathcal{A}})$ by $X\le Y$ if and only if $Y\subseteq X$, for all $X,Y\in L(\ensuremath{\mathcal{A}})$.
Note that this is the reverse inclusion. The elements of $L(\ensuremath{\mathcal{A}})$ are called \textbf{flats} of $\ensuremath{\mathcal{A}}$. Define a rank function on $L(\ensuremath{\mathcal{A}})$ by $\rk(X)=\codim(X)$.
$L(\ensuremath{\mathcal{A}})$ plays a fundamental role in the study of hyperplane arrangements; in fact, it determines the combinatorics of the arrangement.
We can compute the flats in the lattice of intersection of the reflection arrangement of type $D$ in CoCoA in the following way:
\begin{lstlisting}[alsolanguage=cocoa]
/**/ use S::=QQ[x,y,z];
/**/ A := ArrTypeD(S,3); A;
[x-y, x+y, x-z, x+z, y-z, y+z]
/**/ ArrFlats(A);
[[ideal(0)],
[ideal(x-y), ideal(x+y), ideal(x-z), ideal(x+z),
ideal(y-z), ideal(y+z)],
[ideal(x, y), ideal(x-z, y-z), ideal(x+z, y+z),
ideal(x-z, y+z), ideal(x+z, y-z), ideal(x, z),
ideal(y, z)],
[ideal(x, y, z)]]
\end{lstlisting}
Let $\mu\colon L(\ensuremath{\mathcal{A}})\to\mathbb{Z}$ be the \textbf{M\"obius function} of $L(\ensuremath{\mathcal{A}})$ defined by
$$\mu(X)=
\begin{cases}
1 & \text{for } X=V,\\
-\sum_{Y<X}\mu(Y) & \text{if } X>V.
\end{cases}$$
The \textbf{Poincar\'e polynomial} of $\ensuremath{\mathcal{A}}$ is defined by $$\pi(\ensuremath{\mathcal{A}},t) = \sum_{X\in L(\ensuremath{\mathcal{A}})}\mu(X)(-t)^{\rk(X)},$$
and it satisfies the formula
$$\pi(c\ensuremath{\mathcal{A}},t)=(t+1)\pi(\ensuremath{\mathcal{A}},t).$$
We now verify the previous result for the Shi arrangement of type $A$ in CoCoA.
\begin{lstlisting}[alsolanguage=cocoa]
/**/ use S::=QQ[x,y,z,w];
/**/ A := ArrShiA(S, 3);
/**/ pi_A := ArrPoincarePoly(A); pi_A;
9*t^2 +6*t +1
/**/ cA := ArrCone(A, w);
/**/ pi_cA := ArrPoincarePoly(cA); pi_cA;
9*t^3+15*t^2+7*t+1
/**/ pi_A := ArrPoincarePoly(A);
/**/ t := indets(RingOf(pi_A),1);
/**/ pi_cA = (1+t)*pi_A;
true
\end{lstlisting}
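Outside CoCoA, the M\"obius recursion defining $\mu$ is also easy to implement directly. As an illustrative sketch of ours (not part of the package), the following Python fragment computes $\mu$ on the intersection lattice of the Boolean arrangement $\{x_1=0,x_2=0,x_3=0\}$, whose flats correspond to subsets of $\{1,2,3\}$ ordered by inclusion, and recovers the Poincar\'e polynomial $(1+t)^3$.

```python
from itertools import combinations

# Flats of the Boolean arrangement in K^3: subsets S of {1,2,3}, where S
# encodes the flat {x_i = 0 for i in S}; rank(S) = |S|, and S <= T iff S is a subset of T.
flats = [frozenset(c) for r in range(4) for c in combinations({1, 2, 3}, r)]

# Moebius recursion: mu(V) = 1 for the empty set (the whole space),
# mu(X) = -sum of mu(Y) over all flats Y strictly below X.
mu = {}
for X in sorted(flats, key=len):
    mu[X] = 1 if not X else -sum(mu[Y] for Y in flats if Y < X)

# Poincare polynomial: the coefficient of t^k is the sum of mu(X)*(-1)^rank(X)
# over the flats X of rank k.
poincare = [0, 0, 0, 0]
for X in flats:
    poincare[len(X)] += mu[X] * (-1) ** len(X)

print(poincare)  # [1, 3, 3, 1], i.e. pi(A, t) = (1 + t)^3
```

This matches the general fact that the Boolean arrangement in dimension $l$ has Poincar\'e polynomial $(1+t)^l$.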
For any flat $X\in L(\ensuremath{\mathcal{A}})$ define the subarrangement $\ensuremath{\mathcal{A}}_X$ of $\ensuremath{\mathcal{A}}$ by
$$\ensuremath{\mathcal{A}}_X=\{H\in\ensuremath{\mathcal{A}}~|~X\subseteq H\}.$$
Similarly, define the \textbf{restriction} of $\ensuremath{\mathcal{A}}$ to $X$ as the arrangement $\ensuremath{\mathcal{A}}^X$ in $X$
$$\ensuremath{\mathcal{A}}^X=\{X\cap H~|~H\in\ensuremath{\mathcal{A}}\setminus\ensuremath{\mathcal{A}}_X \text{ and } X\cap H\ne\emptyset\}.$$
The \textbf{characteristic polynomial} of $\ensuremath{\mathcal{A}}$ is $$\chi(\ensuremath{\mathcal{A}},t) =t^l\pi(\ensuremath{\mathcal{A}},-t^{-1})= \sum_{X\in L(\ensuremath{\mathcal{A}})}\mu(X)t^{\dim(X)}.$$
The characteristic polynomial satisfies the following recursive relation (deletion-restriction)
$$\chi(\ensuremath{\mathcal{A}}, t) = \chi(\ensuremath{\mathcal{A}}\setminus\{H\}, t)-\chi(\ensuremath{\mathcal{A}}^H, t),$$
for any $H\in \ensuremath{\mathcal{A}}$.
We verify the previous result in CoCoA for the Shi-Catalan arrangement $\ensuremath{\mathcal{A}}^{[-1,2]}$ of type $A$.
\begin{lstlisting}[alsolanguage=cocoa]
/**/ use S ::= QQ[x,y,z];
/**/ A := ArrShiCatalanA(S, 3, [-1, 2]); A;
[x-y, x-z, y-z, x-y-1, x-z-1, y-z-1, x-y+1, x-y+2,
x-z+1, x-z+2, y-z+1, y-z+2]
/**/ A_1 := ArrDeletion(A,4); A_1;
[x-y, x-z, y-z, x-z-1, y-z-1, x-y+1, x-y+2, x-z+1,
x-z+2, y-z+1, y-z+2]
/**/ A_2 := ArrRestriction(A,4); A_2;
[y[1]-y[2]+1, y[1]-y[2], y[1]-y[2]-1, y[1]-y[2]+2,
y[1]-y[2]+3]
/**/ ArrCharPoly(A) = ArrCharPoly(A_1) - ArrCharPoly(A_2);
true
\end{lstlisting}
For $i=0,\dots,l$ we define the \textbf{$i$-th Betti number} $b_i(\ensuremath{\mathcal{A}})$ by the formula
$$\chi(\ensuremath{\mathcal{A}},t)=\sum_{i=0}^l(-1)^ib_i(\ensuremath{\mathcal{A}})t^{l-i}.$$
The importance of the characteristic polynomial in combinatorics is justified by the following result
from \cite{crapo1970foundations}, \cite{orlik1980combinatorics} and \cite{zaslavsky1975facing}.
\begin{Theorem}
We have that
\begin{enumerate}
\item If $\ensuremath{\mathcal{A}}$ is an arrangement in $\mathbb{F}_q^l$ (vector space over a finite field $\mathbb{F}_q$), then $|\mathbb{F}^l_q\setminus \bigcup_{H\in\ensuremath{\mathcal{A}}}H|=\chi(\ensuremath{\mathcal{A}}, q)$.
\item If $\ensuremath{\mathcal{A}}$ is an arrangement in $\mathbb{C}^l$, then the topological $i$-th Betti number of the complement is $b_i(\mathbb{C}^l \setminus \bigcup_{H\in\ensuremath{\mathcal{A}}}H)=b_i(\ensuremath{\mathcal{A}})$.
\item If $\ensuremath{\mathcal{A}}$ is an arrangement in $\ensuremath{\mathbb{R}}^l$, then $|\chi(\ensuremath{\mathcal{A}},-1)|$ is the number of chambers and $|\chi(\ensuremath{\mathcal{A}}, 1)|$ is the number of bounded chambers.
\end{enumerate}
\end{Theorem}
\begin{figure}[htbp]
\begin{tikzpicture}[scale=0.5]
\draw(-5,-5) -- (5,5);
\draw(-2,-5) -- (-2,5);
\draw(2,-5) -- (2,5);
\draw(-5,-2) -- (5,-2);
\draw(-5,2) -- (5,2);
\coordinate [label=right:{$x=0$}] (V0) at (-2,-4.5);
\coordinate [label=right:{$x=1$}] (V0) at (2,-4.5);
\coordinate [label=right:{$y=0$}] (V0) at (5,-2);
\coordinate [label=right:{$y=1$}] (V0) at (5,2);
\coordinate [label=right:{$x=y$}] (V0) at (5,4.8);
\end{tikzpicture}
\caption{A line arrangement in $\mathbb{R}^2$}\label{Fig:linearr}
\end{figure}
Using the previous statements, we can compute the Betti numbers, the number of chambers and the number of bounded chambers of the arrangement in Figure \ref{Fig:linearr} in CoCoA.
\begin{lstlisting}[alsolanguage=cocoa]
/**/ use S ::=QQ[x,y];
/**/ A := [x,x-1,y,y-1,x-y];
/**/ ArrBettiNumbers(A);
[1, 5, 6]
/**/ NumChambers(A);
12
/**/ NumBChambers(A);
2
\end{lstlisting}
A third polynomial can naturally be associated to each hyperplane arrangement.
The \textbf{Tutte polynomial} of $\ensuremath{\mathcal{A}}$ is
$$T_\ensuremath{\mathcal{A}}(x,y)=\sum_{\substack{\mathcal{B}\subseteq\ensuremath{\mathcal{A}} \\ \mathcal{B} \text{ central }}}(x-1)^{\rk(\ensuremath{\mathcal{A}})-\rk(\mathcal{B})}(y-1)^{|\mathcal{B}|-\rk(\mathcal{B})}.$$
As shown in \cite{ardila2007computing}, it turns out that the Tutte and the characteristic polynomials are related by
$$\chi(\ensuremath{\mathcal{A}},t)=(-1)^{\rk(\ensuremath{\mathcal{A}})}t^{|\ensuremath{\mathcal{A}}|-\rk(\ensuremath{\mathcal{A}})}T_{\ensuremath{\mathcal{A}}}(1-t,0).$$
We verify the previous result for the Boolean arrangement in CoCoA. Notice that here,
since the Tutte and the characteristic polynomials live in different rings, we need to construct a ring
homomorphism, with the command \texttt{PolyRingHom}, to check the required equality.
\begin{lstlisting}[alsolanguage=cocoa]
/**/ use S ::= QQ[x,y,z];
/**/ A := ArrBoolean(S, 3);
/**/ Tutte_A := ArrTuttePoly(A); Tutte_A;
t[1]^3
/**/ char_A := ArrCharPoly(A);
/**/ R:=RingOf(Tutte_A);
/**/ P:=RingOf(char_A);
/**/ t:=indets(P,1);
/**/ psi := CanonicalHom(BaseRing(R),P);
/**/ phi := PolyRingHom(R, P, psi, [1-t,0]);
/**/ char_A=(-1)^3*t^(len(A)-3)*phi(Tutte_A);
true
\end{lstlisting}
\section{Free hyperplane arrangements}
In the theory of hyperplane arrangements, the freeness of an arrangement is a very important algebraic property.
In fact, freeness implies several interesting geometric and combinatorial properties of the arrangement itself.
See for example \cite{terao1980arrangementsI}, \cite{yoshinaga2012freeness}, \cite{abe2016divisionally-im}, \cite{Gin-freearr} and \cite{palezzato2018free}.
We denote by $\Der_{V} =\{\sum_{i=1}^l f_i\partial_{x_i}~|~f_i\in S\}$ the $S$-module of \textbf{polynomial vector fields} on $V$ (or $S$-derivations).
Let $\delta = \sum_{i=1}^l f_i\partial_{x_i}\in \Der_{V}$. If $f_1, \dots, f_l$ are homogeneous polynomials of degree~$d$ in $S$, then $\delta$ is said to be \textbf{homogeneous of polynomial degree} $d$.
In this case, we write $\pdeg(\delta) = d$.
For any central arrangement $\ensuremath{\mathcal{A}}$ we define the \textbf{module of logarithmic vector fields} tangent to $\ensuremath{\mathcal{A}}$ by
$$D(\ensuremath{\mathcal{A}}) = \{\delta\in \Der_{V}~|~ \delta(\alpha_i) \in \ideal{\alpha_i} S, \forall i\}.$$
The module $D(\ensuremath{\mathcal{A}})$ is obviously a graded $S$-module and we have that $D(\ensuremath{\mathcal{A}})= \{\delta\in \Der_{V}~|~ \delta(Q(\ensuremath{\mathcal{A}})) \in \ideal{ Q(\ensuremath{\mathcal{A}})} S\}$.
\begin{Definition}
A central arrangement $\ensuremath{\mathcal{A}}$ is said to be \textbf{free with exponents $(e_1,\dots,e_l)$}
if and only if $D(\ensuremath{\mathcal{A}})$ is a free $S$-module and there exists a basis $\delta_1,\dots,\delta_l \in D(\ensuremath{\mathcal{A}})$
such that $\pdeg(\delta_i) = e_i$, or equivalently $D(\ensuremath{\mathcal{A}})\cong\bigoplus_{i=1}^lS(-e_i)$.
\end{Definition}
Let $\delta_1,\dots,\delta_l \in D(\ensuremath{\mathcal{A}})$.
Then $\det(\delta_i(x_j))$ is divisible by $Q(\ensuremath{\mathcal{A}})$.
One of the most famous characterizations of freeness is due to Saito \cite{saito}; it uses the determinant of the coefficient
matrix of $\delta_1,\dots,\delta_l$ to check whether the arrangement $\ensuremath{\mathcal{A}}$ is free or not.
\begin{Theorem}[Saito's criterion]\label{theo:saitocrit}
Let $\delta_1, \dots, \delta_l \in D(\ensuremath{\mathcal{A}})$. Then the following facts are equivalent
\begin{enumerate}
\item $D(\ensuremath{\mathcal{A}})$ is free with basis $\delta_1, \dots, \delta_l$, i.e.\ $D(\ensuremath{\mathcal{A}}) = S\cdot\delta_1\oplus \cdots \oplus S \cdot\delta_l$.
\item $\det(\delta_i(x_j))=c Q(\ensuremath{\mathcal{A}})$, where $c\in K\setminus\{0\}$.
\item $\delta_1, \dots, \delta_l$ are linearly independent over $S$ and $\sum_{i=1}^l\pdeg(\delta_i)=n$.
\end{enumerate}
\end{Theorem}
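To make Saito's criterion concrete outside CoCoA, here is a small symbolic check (a hypothetical sketch of ours, using Python's \texttt{sympy}) that the derivations $\delta_i = x_i\partial_{x_i}$ form a basis of $D(\ensuremath{\mathcal{A}})$ for the Boolean arrangement with $Q(\ensuremath{\mathcal{A}})=xyz$, so that this arrangement is free with exponents $(1,1,1)$.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
gens = (x, y, z)
alphas = [x, y, z]          # defining forms of the Boolean arrangement
Q = x * y * z               # Q(A)

# Candidate basis of D(A): the derivations delta_i = x_i * d/dx_i, represented
# by their coefficient vectors (delta_i(x), delta_i(y), delta_i(z))
deltas = [(x, 0, 0), (0, y, 0), (0, 0, z)]

# Each delta_i is logarithmic: delta_i(alpha_j) lies in the ideal (alpha_j)
for coeffs in deltas:
    for alpha in alphas:
        image = sum(f * sp.diff(alpha, v) for f, v in zip(coeffs, gens))
        quotient, remainder = sp.div(image, alpha, *gens)
        assert remainder == 0

# Saito's criterion: det(delta_i(x_j)) = c * Q(A) with c a nonzero constant,
# and the polynomial degrees sum to n = 3
saito_det = sp.Matrix(deltas).det()
print(sp.expand(saito_det - Q))   # 0, so c = 1 and A is free with exponents (1,1,1)
```

The same check fails for a set of derivations that is not a basis: the determinant then either vanishes or acquires degree different from $n$.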
Given a simple graph $G$, we can define the graphical arrangement $\ensuremath{\mathcal{A}}(G)$, see \cite{orlterao}. In \cite{stanley2007introduction}, Stanley showed
that $\ensuremath{\mathcal{A}}(G)$ is free if and only if $G$ is a chordal graph. See also \cite{suyama2015vertex-weighted-a} and \cite{suyama2017signed} for more general results.
We verify this result for a given graphical arrangement in CoCoA.
\begin{lstlisting}[alsolanguage=cocoa]
/**/ use S::=QQ[x,y,z,w];
/**/ G:=[[1,2],[1,3],[1,4],[2,4],[3,4]];
/**/ A:=ArrGraphical(S,G);
/**/ ArrDerMod(A);
matrix( /*RingWithID(18935, "QQ[x,y,z,w]")*/
[[1, 0, 0, 0],
[1, x-y, 0, 0],
[1, x-z, x*z-z^2-x*w+z*w, x*y-y*z-x*w+z*w],
[1, x-w, 0, x*y-x*w-y*w+w^2]])
/**/ ArrExponents(A);
[0, 1, 2, 2]
/**/ B:=ArrDeletion(A,3);
/**/ IsArrFree(B);
false
\end{lstlisting}
\section{Algebras}
In \cite{orlik1994commutative}, Orlik and Terao introduced a commutative analogue of the Orlik-Solomon algebra in order to answer a question of Aomoto
related to cohomology groups of a certain ``twisted'' de Rham chain complex.
The crucial difference between the Orlik-Solomon algebra and Orlik-Terao algebra is not the difference between the exterior algebra and symmetric algebra,
but rather the fact that the Orlik-Terao algebra actually records the ``weights'' of the dependencies among the hyperplanes.
Let $\ensuremath{\mathcal{A}} =\{H_1, \dots, H_n\}$ be an arrangement in $V$ and $\Lambda\subseteq \{1,\dots,n\}$. If $\bigcap_{i\in\Lambda}H_i\ne\emptyset$ and $\codim(\bigcap_{i\in\Lambda}H_i)<|\Lambda|$,
then we say that $\Lambda$ is \textbf{dependent}. If $\Lambda$ is dependent, then there exist $c_i\in K$, not all zero, such that $\sum_{i\in\Lambda}c_i\alpha_i=0$.
\begin{Definition} Let $R$ be the ring $K[y_1,\dots, y_n]$. For each dependent set $\Lambda=\{i_1,\dots, i_k\}$, let $r_\Lambda=\sum_{j=1}^kc_{i_j}y_{i_j} \in R$. Define now
$$f_\Lambda=\partial(r_\Lambda)=\sum_{j=1}^kc_{i_j}(y_{i_1}\cdots \hat{y}_{i_j}\cdots y_{i_k}),$$ and let $I$ be the ideal of $R$ generated by the $f_\Lambda$.
This ideal is called the \textbf{Orlik-Terao ideal} of $\ensuremath{\mathcal{A}}$.
The \textbf{Orlik-Terao algebra} $\OT(\ensuremath{\mathcal{A}})$ is the quotient $R/I$. The \textbf{Artinian Orlik-Terao algebra} $\AOT(\ensuremath{\mathcal{A}})$ is the quotient of
$\OT(\ensuremath{\mathcal{A}})$ by the square of the variables.
\end{Definition}
These algebras and their Betti diagrams carry a lot of information about the given arrangement, for example about its formality. See for example \cite{schenck2009orlik}.
We can construct the Orlik-Terao ideal, its Artinian version and the Betti diagram of the Orlik-Terao algebra of the Braid arrangement in CoCoA as follows:
\begin{lstlisting}[alsolanguage=cocoa]
/**/ use S ::= QQ[x,y,z];
/**/ A:=ArrBraid(S,3);
/**/ OT_A := OrlikTeraoIdeal(A); OT_A;
ideal(y[1]*y[2]-y[1]*y[3]+y[2]*y[3])
/**/ PrintBettiDiagram(RingOf(OT_A)/OT_A);
0 1
---------------
0: 1 -
1: - 1
---------------
Tot: 1 1
/**/ ArtinianOrlikTeraoIdeal(A);
ideal(y[1]*y[2]-y[1]*y[3]+y[2]*y[3], y[1]^2, y[2]^2, y[3]^2)
\end{lstlisting}
In \cite{abe2018solomon}, the authors introduced a new algebra associated to a hyperplane arrangement. This algebra can be considered as a generalization of
the coinvariant algebras in the setting of hyperplane arrangements and it contains the cohomology rings of regular nilpotent Hessenberg varieties.
\begin{Definition} Let $\ensuremath{\mathcal{A}}$ be an arrangement in $V$ and $f\in S$. Then the ideal $\mathfrak{a}(\ensuremath{\mathcal{A}},f)=\{\delta(f)~|~\delta\in D(\ensuremath{\mathcal{A}})\}$ is called the \textbf{Solomon-Terao ideal}
of $\ensuremath{\mathcal{A}}$ with respect to $f$. The \textbf{Solomon-Terao algebra} of $\ensuremath{\mathcal{A}}$ with respect to $f$ is the quotient $\ST(\ensuremath{\mathcal{A}},f)=S/\mathfrak{a}(\ensuremath{\mathcal{A}},f)$.
\end{Definition}
We can construct in CoCoA the Solomon-Terao ideal of the reflection arrangement of type $D$ with respect to
$f$, the sum of the squares of the variables, in the following way:
\begin{lstlisting}[alsolanguage=cocoa]
/**/ use S ::= QQ[x,y,z];
/**/ A:=ArrTypeD(S,3);
/**/ f:=x^2+y^2+z^2;
/**/ SolomonTeraoIdeal(A,f);
ideal(2*x^2+2*y^2+2*z^2, 6*x*y*z, 2*x^2*y^2-2*y^4+2*x^2*z^2-2*z^4)
\end{lstlisting}
\section{Multiarrangements of hyperplanes}
A \textbf{multiarrangement} is a pair $(\ensuremath{\mathcal{A}}, m)$ of an arrangement $\ensuremath{\mathcal{A}}$ with a map $m\colon\ensuremath{\mathcal{A}}\to\mathbb{Z}_{\ge0}$, called the \textbf{multiplicity}.
An arrangement $\ensuremath{\mathcal{A}}$ can be identified with a multiarrangement with constant multiplicity $m\equiv 1$, which is sometimes called a simple arrangement.
Define $Q(\ensuremath{\mathcal{A}},m) = \prod_{i=1}^n\alpha_i^{m(H_i)}$ and $|m| = \sum_{i=1}^nm(H_i)$.
With this notation, the main object is the \textbf{module of logarithmic vector fields} tangent to $\ensuremath{\mathcal{A}}$ with multiplicity $m$,
defined by
$$D(\ensuremath{\mathcal{A}},m) = \{\delta\in \Der_{V}~|~ \delta(\alpha_i) \in \ideal{\alpha_i}^{m(H_i)} S, \forall i\}.$$
The module $D(\ensuremath{\mathcal{A}},m)$ is a graded $S$-module. In general, contrary to the case of simple arrangements,
$D(\ensuremath{\mathcal{A}},m)$ does not coincide with $\{\delta\in \Der_{V}~|~ \delta(Q(\ensuremath{\mathcal{A}},m)) \in \ideal{ Q(\ensuremath{\mathcal{A}},m)} S\}$.
\begin{Definition}
Let $\ensuremath{\mathcal{A}}$ be a central arrangement. The multiarrangement $(\ensuremath{\mathcal{A}},m)$ is said to be \textbf{free with exponents $(e_1,\dots,e_l)$}
if and only if $D(\ensuremath{\mathcal{A}},m)$ is a free $S$-module and there exists a basis $\delta_1,\dots,\delta_l \in D(\ensuremath{\mathcal{A}},m)$
such that $\pdeg(\delta_i) = e_i$, or equivalently $D(\ensuremath{\mathcal{A}},m)\cong\bigoplus_{i=1}^lS(-e_i)$.
\end{Definition}
As for simple arrangements, if $\delta_1,\dots,\delta_l \in D(\ensuremath{\mathcal{A}},m)$, then $\det(\delta_i(x_j))$ is divisible by $Q(\ensuremath{\mathcal{A}},m)$.
Moreover, we can generalize Theorem \ref{theo:saitocrit}.
\begin{Theorem}[Generalized Saito's criterion]
Let $\delta_1, \dots, \delta_l \in D(\ensuremath{\mathcal{A}},m)$. Then the following facts are equivalent
\begin{enumerate}
\item $D(\ensuremath{\mathcal{A}},m)$ is free with basis $\delta_1, \dots, \delta_l$, i.e.\ $D(\ensuremath{\mathcal{A}},m) = S\cdot\delta_1\oplus \cdots \oplus S \cdot\delta_l$.
\item $\det(\delta_i(x_j))=c Q(\ensuremath{\mathcal{A}},m)$, where $c\in K\setminus\{0\}$.
\item $\delta_1, \dots, \delta_l$ are linearly independent over $S$ and $\sum_{i=1}^l\pdeg(\delta_i)=|m|$.
\end{enumerate}
\end{Theorem}
Given a simple arrangement $\ensuremath{\mathcal{A}}$ and $H$ one of its hyperplanes, we can naturally define \textbf{Ziegler's multirestriction} (see \cite{ziegler1986multiarrangements})
as the multiarrangement $(\ensuremath{\mathcal{A}}^H, m^H)$, where the function $m^H\colon \ensuremath{\mathcal{A}}^H\to\mathbb{Z}_{>0}$ is defined by
$$X\in\ensuremath{\mathcal{A}}^H\mapsto\#\{H'\in\ensuremath{\mathcal{A}}~|~H'\supset X\}-1.$$
\begin{Theorem}[\cite{ziegler1986multiarrangements}] Let $\ensuremath{\mathcal{A}}$ be a central arrangement. If $\ensuremath{\mathcal{A}}$ is free with exponents $(1,e_2,\dots,e_l)$,
then $(\ensuremath{\mathcal{A}}^{H_1}, m^{H_1})$ is free with exponents $(e_2,\dots,e_l)$.
\end{Theorem}
In general, the converse of the previous theorem is false. However, we have the following
\begin{Theorem}[\cite{yoshinaga2004characterization}] Assume $l\ge 4$. Then a central arrangement $\ensuremath{\mathcal{A}}$ is free with exponents $(1,e_2,\dots,e_l)$ if and only if the following conditions are satisfied.
\begin{enumerate}
\item $\ensuremath{\mathcal{A}}$ is locally free along $H_1$, i.e. $\ensuremath{\mathcal{A}}_X$ is free for any $X\in L(\ensuremath{\mathcal{A}})$ with $X\subset H_1$ and $X\ne\emptyset$,
\item the Ziegler's multirestriction $(\ensuremath{\mathcal{A}}^{H_1}, m^{H_1})$ is a free multiarrangement with exponents $(e_2,\dots,e_l)$.
\end{enumerate}
\end{Theorem}
We can construct the Ziegler's multirestriction of a given arrangement and verify the previous statements in CoCoA as follows:
\begin{lstlisting}[alsolanguage=cocoa]
/**/ use S ::= QQ[x,y,z];
/**/ A:=[x,y,z,x-y,x-y-z,x-y+2*z];
/**/ A_1:=MultiArrRestrictionZiegler(A,z);A_1;
[[y[1], 1], [y[2], 1], [y[1]-y[2], 3]]
/**/ MultiArrDerMod(A_1);
matrix( /*RingWithID(18, "QQ[y[1],y[2]]")*/
[[y[1]*y[2], y[1]^3],
[y[1]*y[2], 3*y[1]^2*y[2]-3*y[1]*y[2]^2+y[2]^3]])
/**/ MultiArrExponents(A_1);
[2, 3]
/**/ ArrExponents(A);
[1, 2, 3]
\end{lstlisting}
\section{Introduction}
This paper is concerned with link invariants defined from diagrams. We use standard notation and terminology: A (tame, classical) \emph{link} $L=K_1 \cup \dots \cup K_{\mu}$ has $\mu$ disjoint components, each of which is a \emph{knot}, i.e., a piecewise smooth copy of $\mathbb{S}^1$ in $\mathbb{S}^3$. A \emph{diagram} $D$ of $L$ in the plane is obtained from a projection with only finitely many singularities, all of which are double points called \emph{crossings}. At each crossing, $D$ distinguishes the underpassing component by removing two short segments, one on each side of the crossing. Removing these segments splits $D$ into a finite number of arc components. The set of arc components of $D$ is denoted $A(D)$, and the set of crossings of $D$ is denoted $C(D)$. We also use standard notation for rings of Laurent polynomials with integer coefficients, $\Lambda=\mathbb{Z}[t^{\pm1}]$ and $\Lambda_{\mu}=\mathbb{Z}[t_1^{\pm1},\dots,t_{\mu}^{\pm1}]$.
The idea of a \emph{quandle} or \emph{distributive groupoid} was introduced in the 1980s by Joyce \cite{J} and Matveev \cite{M}. In the intervening decades a sizable literature has developed, involving many different generalizations and special cases of the quandle idea. In this paper we generalize one of these special cases.
\begin{definition}
\label{aquandle}
An \emph{Alexander quandle} is a module $M$ over the ring $\Lambda$. The quandle operation is given by
\[
a_2 \triangleright a_1=(1-t) \cdot a_1 + t \cdot a_2.
\]
\end{definition}
Notice that for an Alexander quandle, the quandle operation is determined by the addition and scalar multiplication operations of the module. As we do not refer to any non-Alexander quandles in this paper, we use notation and terminology for modules rather than quandles. For instance, the following definition is equivalent to the definition of Alexander quandle colorings in the literature, even though the definition does not include the word ``quandle.''
\begin{definition}
\label{aquandlecolor}
Let $D$ be a link diagram, and $M$ a $\Lambda$-module. An \emph{Alexander coloring} of $D$ with values in $M$ is given by a function $f:A(D) \to M$ such that at every crossing $c$ as indicated in Figure~\ref{crossfig}, the following equation is satisfied:
\[
f(a_3)=(1-t) \cdot f(a_1) + t \cdot f(a_2).
\]
\end{definition}
\begin{figure}
\centering
\begin{tikzpicture} [>=angle 90]
\draw [thick] [->] (1,1) -- (-1,-1);
\draw [thick] (-1,1) -- (-.2,0.2);
\draw [thick] (0.2,-0.2) -- (1,-1);
\node at (1.3,1.3) {$a_1$};
\node at (-1.3,1.3) {$a_2$};
\node at (1.3,-1.3) {$a_3$};
\end{tikzpicture}
\caption{The arcs incident at a crossing.}
\label{crossfig}
\end{figure}
Here is a multivariate version of Definition~\ref{aquandlecolor}.
\begin{definition}
\label{aquandlecolormultiv}
Let $D$ be a diagram of a link $L=K_1 \cup \dots \cup K_{\mu}$, and let $M$ be a module over the ring $\Lambda_{\mu}$. Let $\kappa:A(D) \to \{1,\dots,\mu\}$ be the map with $\kappa(a)=i$ if and only if $a$ is an arc of $K_i$. Then a \emph{multivariate Alexander coloring} of $D$ with values in $M$ is given by a function $f:A(D) \to M$ such that at every crossing $c$ as indicated in Figure~\ref{crossfig}, the following equation is satisfied:
\[
f(a_3)=(1-t_{\kappa(a_2)}) \cdot f(a_1) + t_{\kappa(a_1)} \cdot f(a_2).
\]
The set of all multivariate Alexander colorings of $D$ with values in $M$ is denoted $\mathrm{Color}_A(D,M)$.
\end{definition}
Here are several remarks about these definitions.
1. Definition \ref{aquandlecolormultiv} includes Definition \ref{aquandlecolor}. If $M$ is a $\Lambda$-module then $M$ is also a $\Lambda_{\mu}$-module, with $t_i \cdot m = t \cdot m$ $\forall m \in M$ $\forall i \in \{1,\dots,\mu\}$. In particular, there is no difference between Definitions \ref{aquandlecolor} and \ref{aquandlecolormultiv} when $\mu=1$.
2. When we refer to Definition \ref{aquandlecolor} we sometimes use the phrase ``standard Alexander coloring'' to emphasize that we are not discussing Definition \ref{aquandlecolormultiv}.
3. Definition \ref{aquandlecolormultiv} does not seem to be associated with a notion of ``multivariate Alexander quandles'' analogous to the notion of standard Alexander quandles. There is no quandle structure on $M$ because $\kappa$ is defined on $A(D)$, not $M$.
4. Nosaka has pointed out that he mentioned the possibility of defining link colorings in $\Lambda_{\mu}$-modules in \cite[Remark 2.6]{N}. This idea was also mentioned by Manturov and Ilyutko \cite[Theorem 3.14]{Ma}, in the more general context of virtual links. These authors did not develop the results we present below, though.
5. For each $m \in M$, the constant function $f(a)=m$ satisfies Definition \ref{aquandlecolor} and the nonconstant function $f(a)=(1-t_{\kappa(a)}) \cdot m$ satisfies Definition \ref{aquandlecolormultiv}.
6. $\mathrm{Color}_A(D,M)$ is itself a module over $\Lambda_{\mu}$, using pointwise addition and scalar multiplication. That is, if $f_1,f_2 \in \mathrm{Color}_A(D,M)$ and $\lambda \in \Lambda_{\mu}$ then $(f_1+f_2)(a) = f_1(a)+f_2(a) \text{ and } (\lambda \cdot f_1)(a) = \lambda \cdot f_1(a)$ $\forall a \in A(D)$.
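As a concrete (hypothetical) illustration of Definition~\ref{aquandlecolor}, one can count the Alexander colorings of the standard trefoil diagram over a finite module by brute force. Take $M=\mathbb{Z}/p$ with $t$ acting as $-1$; the coloring equation $f(a_3)=(1-t)\cdot f(a_1)+t\cdot f(a_2)$ then becomes the Fox coloring rule $f(a_3)=2f(a_1)-f(a_2)$. The arc and crossing labeling below is our own convention for the standard three-crossing diagram.

```python
from itertools import product

# Standard trefoil diagram: arcs 0, 1, 2; at crossing i the overpass (a1) is
# arc i, the incoming underpass (a2) is arc i+1, the outgoing underpass (a3)
# is arc i+2 (indices mod 3).
crossings = [(i, (i + 1) % 3, (i + 2) % 3) for i in range(3)]

def count_colorings(p, t):
    """Count maps f: arcs -> Z/p with f(a3) = (1-t)f(a1) + t f(a2) at each crossing."""
    count = 0
    for f in product(range(p), repeat=3):
        if all((f[a3] - (1 - t) * f[a1] - t * f[a2]) % p == 0
               for a1, a2, a3 in crossings):
            count += 1
    return count

print(count_colorings(3, -1))   # 9: the Fox 3-colorings of the trefoil
print(count_colorings(5, -1))   # 5: only the constant colorings survive mod 5
```

The nine colorings mod $3$ reflect the fact that the Alexander polynomial $t^2-t+1$ of the trefoil vanishes at $t=-1$ modulo $3$, while mod $5$ only the five constant colorings of remark 5 survive.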
Before stating results, we briefly recall some basic information about Alexander modules. We refer to the literature for more thorough discussions of these famous invariants of classical links \cite{BZ, CF, F, H}.
Each oriented link diagram $D$ has an associated \emph{Alexander matrix} $M(D)$. The columns of $M(D)$ are indexed by $A(D)$, and the rows of $M(D)$ are indexed by $C(D)$. Suppose $c$ is a crossing with the incident arcs indexed as in Figure~\ref{crossfig}. (N.b. The underpassing arcs are indexed using the orientation of $a_1$: $a_2$ is on the right side of an observer facing forward on $a_1$, and $a_3$ is on the left side.) If $a_2 \neq a_3$, then the row of $M(D)$ corresponding to $c$ has these entries:
\[
M(D)_{ca}=
\begin{cases}
1-t_{\kappa(a_2)}\text{,} & \text{if }a=a_1 \\
t_{\kappa(a_1)}\text{,} & \text{if }a=a_2 \\
-1\text{,} & \text{if }a=a_3 \\
0\text{,} & \text{if }a \notin\{a_1,a_2,a_3\}
\end{cases}
\]
If $a_2 = a_3$, then the row of $M(D)$ corresponding to $c$ has these entries:
\[
M(D)_{ca}=
\begin{cases}
1-t_{\kappa(a_2)}\text{,} & \text{if }a=a_1 \\
t_{\kappa(a_1)}-1\text{,} & \text{if }a=a_2=a_3 \\
0\text{,} & \text{if }a \notin\{a_1,a_2\}
\end{cases}
\]
The reader familiar with the free differential calculus will recognize that the entries of the $c$ row of $M(D)$ are the images in $\Lambda_{\mu}$ of the free derivatives of the Wirtinger relator $a_1a_2a_1^{-1}a_3^{-1}$ corresponding to the crossing $c$.
\begin{definition}
\label{almod}
If $D$ is a diagram of $L$ then the \emph{Alexander module} $M_A(L)$ is the $\Lambda_{\mu}$-module presented by $M(D)$.
\end{definition}
That is to say, if $D$ is a diagram of $L$ and $\Lambda_{\mu}^{A(D)}$ is the free $\Lambda_{\mu}$-module on the set $A(D)$, then $M_A(L)$ is isomorphic to the quotient of $\Lambda_{\mu}^{A(D)}$ by the submodule $S$ generated by all elements of the form
\[
(1-t_{\kappa(a_2)}) \cdot a_1 + t_{\kappa(a_1)} \cdot a_2 - a_3
\]
where the arcs $a_1,a_2,a_3$ appear at a crossing of $D$ as in Figure \ref{crossfig}.
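As an illustrative sketch (using our own labeling of the standard trefoil diagram, and Python's \texttt{sympy}), one can assemble $M(D)$ from the rows above and recover the Alexander polynomial from a maximal minor. Since the trefoil is a knot, $\mu=1$ and all the variables $t_i$ collapse to a single $t$.

```python
import sympy as sp

t = sp.symbols("t")

# Standard trefoil diagram, mu = 1: arcs 0, 1, 2; at crossing i the overpass
# (a1) is arc i, the incoming underpass (a2) is arc i+1, and the outgoing
# underpass (a3) is arc i+2 (indices mod 3).
rows = []
for i in range(3):
    a1, a2, a3 = i, (i + 1) % 3, (i + 2) % 3
    row = [0, 0, 0]
    row[a1] += 1 - t        # coefficient of the overpassing arc
    row[a2] += t            # coefficient of the incoming underpassing arc
    row[a3] += -1           # coefficient of the outgoing underpassing arc
    rows.append(row)
M = sp.Matrix(rows)

# The full matrix presents the Alexander module; deleting one column and taking
# a maximal minor yields the Alexander polynomial up to a unit of Lambda.
minor = M[0:2, 0:2].det()
print(sp.expand(minor))     # t**2 - t + 1, the Alexander polynomial of the trefoil
```

Deleting a different column changes the minor only by a unit $\pm t^k$, as expected for Alexander polynomials.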
If $M$ is a $\Lambda_{\mu}$-module and $f:A(D) \to M$ is an arbitrary function, then $f$ defines a $\Lambda_{\mu}$-linear map $\widehat{f}:\Lambda_{\mu}^{A(D)} \to M$. This map $\widehat{f}$ defines a $\Lambda_{\mu}$-linear map with domain $M_A(L)$ if and only if $S \subseteq \ker(\widehat{f})$. We deduce the following result, which we call the Fundamental Theorem of Alexander colorings.
\begin{theorem}
\label{main1}
Let $D$ be a diagram of $L=K_1 \cup \dots \cup K_{\mu}$, and let $M$ be a module over $\Lambda_{\mu}$. If $M_A(L)$ is the Alexander module of $L$, then
\[
\mathrm{Color}_A(D,M) \cong \mathrm{Hom}_{\Lambda_{\mu}}(M_A(L),M) .
\]
\end{theorem}
Many authors have discussed the fact that standard Alexander colorings are connected to the Alexander module, or to the Alexander polynomials \cite{B, EN, HHO, I, I2, J, KL, L, M, N}. In particular, Inoue \cite{I2} stated the following version of the fundamental theorem for standard Alexander colorings. Inoue's result involves the \emph{reduced} Alexander module $M_A^{red}(L)$, i.e., the $\Lambda$-module presented by a matrix obtained from an Alexander matrix $M(D)$ by replacing $t_1,\dots,t_{\mu}$ with a single variable, $t$.
\begin{corollary} (\cite{I2})
\label{maincor1}
Let $D$ be a diagram of a link $L$, and let $M$ be a $\Lambda$-module. Then the $\Lambda$-module of Alexander colorings of $D$ with values in $M$ is isomorphic to $\mathrm{Hom}_{\Lambda}(M_A^{red}(L),M)$.
\end{corollary}
There is no easily computable set of complete invariants for modules over $\Lambda$ and $\Lambda_{\mu}$, so these modules can be difficult to work with. Theorem~\ref{main1} and Corollary~\ref{maincor1} yield more convenient results when $M$ is both a module over $\Lambda_{\mu}$ and a vector space over a field, because a vector space is characterized up to isomorphism by its dimension. Before stating results we recall a standard definition of classical knot theory.
\begin{definition}
Let $D$ be a diagram of a link $L=K_1 \cup \dots \cup K_{\mu}$. Then the \emph{elementary ideals} $E_j(L)$ are ideals of $\Lambda_{\mu}$, indexed by integers $j \geq 0$.
\begin{itemize}
\item If $j \geq |A(D)|$, then $E_j(L)=\Lambda_{\mu}$.
\item If $|A(D)|>j\geq |A(D)|-|C(D)|$, then $E_j(L)$ is the ideal of $\Lambda_{\mu}$ generated by the determinants of $(|A(D)|-j) \times (|A(D)|-j)$ submatrices of $M(D)$.
\item If $j<|A(D)|-|C(D)|$, then $E_j(L)=(0)$.
\end{itemize}
\end{definition}
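For example (our illustration, using standard facts), the usual three-arc, three-crossing diagram of the trefoil knot $3_1$ has a $3 \times 3$ Alexander matrix whose determinant is $0$ and whose $2 \times 2$ submatrices all have determinant $\pm(t^2-t+1)$, so the definition gives

```latex
E_0(3_1)=(0), \qquad E_1(3_1)=(t^2-t+1), \qquad E_j(3_1)=\Lambda_1 \text{ for } j \geq 2.
```

In particular, the familiar Alexander polynomial $\Delta_1(3_1)=t^2-t+1$ generates $E_1(3_1)$.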
The elementary ideals of links have been studied thoroughly; see \cite{H} for a detailed account of the theory.
Suppose $F$ is a field, $\varphi:\Lambda_{\mu} \to F$ is a homomorphism of rings with unity, and $M$ is a vector space over $F$. Let $M_\varphi$ denote the $\Lambda_{\mu}$-module obtained from $M$ using $\varphi$. (That is, if $\lambda \in \Lambda_{\mu}$ and $m \in M$ then $\lambda \cdot m = \varphi(\lambda) \cdot m$.) In Section 2 we prove the following.
\begin{theorem}
\label{main2}
Let $F$ be a field, let $\varphi:\Lambda_{\mu} \to F$ be a homomorphism of rings with unity, and let $M$ be a vector space over $F$. Let $L$ be a $\mu$-component link, and let $j_0$ be the smallest index with $\varphi(E_{j_0}(L)) \neq 0$. Then for any diagram $D$ of $L$,
\[
\mathrm{Color}_A(D,M_\varphi) \cong \mathrm{Hom}_F(F^{j_0},M)_\varphi.
\]
It follows that $\mathrm{Color}_A(D,M_\varphi)$ is a vector space over $F$ of dimension $j_0 \cdot \mathrm{dim}_F(M)$.
\end{theorem}
In case $M=F$, we have the following.
\begin{corollary}
\label{maincor2}
Let $F$ be a field, and let $\varphi:\Lambda_{\mu} \to F$ be a homomorphism of rings with unity. Let $L$ be a $\mu$-component link, and let $j_0$ be the smallest index with $\varphi(E_{j_0}(L)) \neq 0$. Then for any diagram $D$ of $L$,
\[
\mathrm{Color}_A(D,F_\varphi) \cong \mathrm{Hom}_F(F^{j_0},F)_\varphi.
\]
It follows that $\mathrm{Color}_A(D,F_\varphi)$ is a vector space over $F$ of dimension $j_0$.
\end{corollary}
If $\varphi(t_i)=\varphi(t_j)$ $\forall i,j$, then the colorings described in Corollary \ref{maincor2} are standard Alexander colorings. These colorings have been studied by Kauffman and Lopes \cite{KL}, who refer to them as colorings by linear Alexander quandles. The most familiar instances are the Fox colorings, which correspond to homomorphisms with $\varphi(t_i)=-1$ $\forall i$.
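As a quick illustration of Corollary \ref{maincor2} (our example, using standard facts about the trefoil), take $L=3_1$, $F=GF(3)$, and the homomorphism $\varphi:\Lambda \to GF(3)$ with $\varphi(t)=-1$. The trefoil has $E_0=(0)$, $E_1=(t^2-t+1)$ and $E_j=\Lambda$ for $j \geq 2$, so

```latex
\varphi(E_0(3_1))=0, \qquad \varphi(E_1(3_1))=(1-(-1)+1)=(3)=(0), \qquad \varphi(E_2(3_1))=GF(3).
```

Hence $j_0=2$, and any diagram of the trefoil has a $2$-dimensional space of Fox tricolorings over $GF(3)$, i.e., $9$ colorings in all, in agreement with the classical count.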
As far as we know, the precise statement of Corollary \ref{maincor2} has not appeared before, although a version of the special case for Fox colorings was announced recently \cite{STW}. Inoue \cite{I, I2} stated a similar result for standard Alexander colorings, with the elementary ideals replaced by the higher Alexander polynomials. In Section 3 we show that Inoue's version of Corollary \ref{maincor2} is incorrect even in the simplest case, i.e., Fox colorings of knots with $F=GF(3)$, the field of three elements.
After discussing examples in Sections 3 -- 5, we outline the extension of Theorem \ref{main2} from fields to principal ideal domains in Section 6.
\section{Proof of Theorem \ref{main2}}
Our proof of Theorem \ref{main2} begins with two lemmas, which provide useful properties of tensor products in conjunction with ring homomorphisms. Full accounts of the general theory of tensor products may be found in standard algebra texts, like \cite{La}.
If $\varphi:\Lambda_{\mu} \to R$ is a homomorphism of commutative rings with unity and $M$ is an $R$-module, then we denote by $M_\varphi$ the $\Lambda_{\mu}$-module on $M$ with $\lambda \cdot m = \varphi(\lambda) \cdot m$ for $\lambda \in \Lambda_{\mu}$ and $m \in M$.
\begin{lemma} \label{adj}
Let $\varphi:\Lambda_{\mu} \to R$ be a homomorphism of commutative rings with unity, and let $M$ be an $R$-module. If $D$ is a diagram of a $\mu$-component link $L$, then
\[
\mathrm{Color}_A(D,M_\varphi) \cong \mathrm{Hom}_{R}(R_\varphi \otimes_{\Lambda_{\mu}} \hspace{-0.25 em} M_A(L),M)_\varphi.
\]
\end{lemma}
\begin{proof}
The isomorphism
\begin{equation}
\label{isom}
\mathrm{Hom}_{R}(R_\varphi \otimes_{\Lambda_{\mu}} \hspace{-0.25 em} M_A(L),M)_\varphi \cong \mathrm{Hom}_{\Lambda_{\mu}}(M_A(L),M_\varphi)
\end{equation}
is a special case of the general property that $\mathrm{Hom}$ and $\otimes$ define adjoint functors. This particular type of adjointness is mentioned (for instance) by Lang \cite[p. 637]{La}. The lemma follows from (\ref{isom}) and the Fundamental Theorem (Theorem \ref{main1}).
\end{proof}
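To make the adjunction concrete (our sketch of a standard construction, not spelled out in the original), the isomorphism (\ref{isom}) may be given explicitly by

```latex
g \;\longmapsto\; \big( x \mapsto g(1 \otimes x) \big)
\qquad \text{for } g \in \mathrm{Hom}_{R}(R_\varphi \otimes_{\Lambda_{\mu}} M_A(L), M),
```

with inverse sending $h \in \mathrm{Hom}_{\Lambda_{\mu}}(M_A(L), M_\varphi)$ to the $R$-linear map determined by $r \otimes x \mapsto r \cdot h(x)$. One checks that this inverse is well defined because $h(\lambda x) = \varphi(\lambda) h(x)$ for all $\lambda \in \Lambda_{\mu}$.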
\begin{lemma} \label{pres} Suppose $D$ is a diagram of a $\mu$-component link $L$ and $\varphi:\Lambda_{\mu} \to R$ is a homomorphism of commutative rings with unity. Then $\varphi(M(D))$ is a presentation matrix for the $R$-module $R_\varphi \otimes_{\Lambda_{\mu}} \hspace{-0.25 em} M_A(L)$.
\end{lemma}
\begin{proof} As $M(D)$ is a presentation matrix for $M_A(L)$, there is an exact sequence
\[
\Lambda_{\mu}^{C(D)} \xrightarrow{f} \Lambda_{\mu}^{A(D)} \xrightarrow{g} M_A(L) \xrightarrow{} 0 \text{,}
\]
where $f$ is the homomorphism represented by the matrix $M(D)$. A standard property of tensor products is the fact that for any set $S$,
\[
R_\varphi \otimes_{\Lambda_{\mu}} \hspace{-0.25 em} \Lambda_{\mu}^{S} \cong R^S \text{}
\]
with $1 \otimes s$ corresponding to $s$ for each $s \in S$. Moreover, if $\mathrm{id}$ is the identity map of $R$ then the homomorphism
\[
\mathrm{id} \otimes f: R_\varphi \otimes_{\Lambda_{\mu}} \hspace{-0.25 em} \Lambda_{\mu}^{C(D)} \to R_\varphi \otimes_{\Lambda_{\mu}} \hspace{-0.25 em} \Lambda_{\mu}^{A(D)}
\]
is represented by the matrix $\varphi(M(D))$, with respect to the natural bases.
Another standard property of tensor products is right exactness. This property guarantees that
\[
R_\varphi \otimes_{\Lambda_{\mu}} \hspace{-0.25 em} \Lambda_{\mu}^{C(D)} \xrightarrow{\mathrm{id}\otimes f} R_\varphi \otimes_{\Lambda_{\mu}} \hspace{-0.25 em} \Lambda_{\mu}^{A(D)} \xrightarrow{\mathrm{id}\otimes g} R_\varphi \otimes_{\Lambda_{\mu}} \hspace{-0.25 em} M_A(L) \xrightarrow{} 0
\]
is an exact sequence of $R$-modules. It follows that $\varphi(M(D))$ is a presentation matrix for $R_\varphi \otimes_{\Lambda_{\mu}} \hspace{-0.25 em} M_A(L)$.
\end{proof}
\begin{corollary}
\label{lastcor}
Suppose $L$ is a $\mu$-component link, $F$ is a field and $\varphi:\Lambda_{\mu} \to F$ is a homomorphism of rings with unity. Let $j_0$ be the smallest integer with $\varphi(E_{j_0}(L)) \neq 0$. Then $j_0$ is the dimension of $F_\varphi \otimes_{\Lambda_{\mu}} \hspace{-0.25 em} M_A(L)$ as a vector space over $F$.
\end{corollary}
\begin{proof}
If $D$ is a diagram of $L$ then Lemma \ref{pres} tells us that $\varphi(M(D))$ is a presentation matrix for the $F$-vector space $F_\varphi \otimes_{\Lambda_{\mu}} \hspace{-0.25 em} M_A(L)$. For a vector space, the only isomorphism-invariant information provided by a presentation matrix is the dimension: an $m \times n$ matrix of rank $r$ is a presentation matrix for a vector space of dimension $n-r$.
The rank $r$ of $\varphi(M(D))$ is the size of the largest square submatrix with nonzero determinant. Determinants commute with ring homomorphisms, in the sense that every square $\Lambda_{\mu}$-matrix $X$ has $\varphi(\det X)=\det(\varphi(X))$. It follows that the rank $r$ of $\varphi(M(D))$ is the largest size of a square submatrix $X$ of $M(D)$ with $\varphi(\det X) \neq 0$. If $j_0$ is the smallest index with $\varphi(E_{j_0}(L)) \neq 0$ then the largest size of a square submatrix $X$ of $M(D)$ with $\varphi(\det X) \neq 0$ is $|A(D)|-j_0$, so
\[
\mathrm{dim}_F(F_\varphi \otimes_{\Lambda_{\mu}} \hspace{-0.25 em} M_A(L))=|A(D)|-r=|A(D)|-(|A(D)|-j_0)=j_0.
\]
\end{proof}
Theorem \ref{main2} follows from Lemma \ref{adj} and Corollary \ref{lastcor}.
\section{Two knots}
Inoue~\cite{I} asserted that ``the number of all quandle homomorphisms of a knot quandle to an Alexander quandle is completely determined by Alexander polynomials of the knot.'' Corollary \ref{maincor2} implies a similar assertion, with `Alexander polynomials' replaced by `elementary ideals.' In this section we observe that for Fox tricolorings of the knots pictured in Figure \ref{knotsfig}, Corollary \ref{maincor2} is correct and Inoue's assertion is incorrect.
\begin{figure} [bht]
\centering
\begin{tikzpicture} [scale=0.6]
\draw [thick] (-13+1,0+0.5) -- (-13+1,1+0.5);
\draw [thick] (-13+1,4+0.5) -- (-13+1,3+0.5);
\draw [->] [>=angle 90] [thick] (-13+1,0+0.5) -- (-13+5.5,0+0.5);
\draw [thick] (-13+10,0+0.5) -- (-13+5.5,0+0.5);
\draw [thick] (-13+1,4+0.5) -- (-13+10,4+0.5);
\draw [thick] (-13+1,1+0.5) -- (-13+1.8,1.8+0.5);
\draw [thick] (-13+2.2,2.2+0.5) -- (-13+3,3+0.5);
\draw [thick] (-13+3,3+0.5) -- (-13+5,1+0.5);
\draw [thick] (-13+1,3+0.5) -- (-13+3,1+0.5);
\draw [thick] (-13+3,1+0.5) -- (-13+3.8,1.8+0.5);
\draw [thick] (-13+4.2,2.2+0.5) -- (-13+5,3+0.5);
\draw [thick] (-13+7,1+0.5) -- (-13+5,3+0.5);
\draw [thick] (-13+5,1+0.5) -- (-13+5.8,1.8+0.5);
\draw [thick] (-13+6.2,2.2+0.5) -- (-13+7,3+0.5);
\draw [thick] (-13+9,1+0.5) -- (-13+7,3+0.5);
\draw [thick] (-13+7,1+0.5) -- (-13+7.8,1.8+0.5);
\draw [thick] (-13+8.2,2.2+0.5) -- (-13+9,3+0.5);
\draw [thick] (-13+9,3+0.5) -- (-13+11,3+0.5);
\draw [thick] (-13+9.8,1+0.5) -- (-13+9,1+0.5);
\draw [thick] (-13+10,0+0.5) -- (-13+10,2.8+0.5);
\draw [thick] (-13+10,4+0.5) -- (-13+10,3.2+0.5);
\draw [thick] (-13+11,3+0.5) -- (-13+11,1+0.5);
\draw [thick] (-13+10.2,1+0.5) -- (-13+11,1+0.5);
\node at (-13+1.5,1+0.1) {$u$};
\node at (-13+1.5,3.4+0.5) {$v$};
\node at (-13+3,3.4+0.5) {$w$};
\node at (-13+5,3.4+0.5) {$x$};
\node at (-13+7,3.4+0.5) {$y$};
\node at (-13+10.7,2+0.5) {$z$};
\draw [->] [>=angle 90] [thick] (0,0) -- (4,0);
\draw [thick] (8,0) -- (4,0);
\draw [thick] (0,5) -- (8,5);
\draw [thick] (0,5) -- (0,3);
\draw [thick] (0,0) -- (0,2);
\draw [thick] (2,2) -- (0,2);
\draw [thick] (1,2.2) -- (1,2.8);
\draw [thick] (2,3) -- (0,3);
\draw [thick] (1,1) -- (1,1.8);
\draw [thick] (1,3.2) -- (1,4);
\draw [thick] (4,1) -- (4,4);
\draw [thick] (3,4) -- (4,4);
\draw [thick] (3,1) -- (4,1);
\draw [thick] (1.5,3) -- (0,3);
\draw [thick] (2,3) -- (2.3,3.3);
\draw [thick] (2.7,3.7) -- (3,4);
\draw [thick] (2,2) -- (2.3,1.7);
\draw [thick] (3,1) -- (2.7,1.3);
\draw [thick] (2,1) -- (3,2);
\draw [thick] (2,1) -- (1,1);
\draw [thick] (3,2) -- (3.8,2);
\draw [thick] (5,2) -- (4.2,2);
\draw [thick] (2,4) -- (1,4);
\draw [thick] (2,4) -- (3,3);
\draw [thick] (3.8,3) -- (3,3);
\draw [thick] (4.2,3) -- (5,3);
\draw [thick] (5,3) -- (6,2);
\draw [thick] (6,3) -- (7,2);
\draw [thick] (7,3) -- (8,2);
\draw [thick] (6,2) -- (6.3,2.3);
\draw [thick] (7,3) -- (6.7,2.7);
\draw [thick] (5,2) -- (5.3,2.3);
\draw [thick] (6,3) -- (5.7,2.7);
\draw [thick] (7,2) -- (7.3,2.3);
\draw [thick] (8,3) -- (7.7,2.7);
\draw [thick] (8,3) -- (8,5);
\draw [thick] (8,2) -- (8,0);
\node at (7.6,0.6) {$a$};
\node at (7.6,4.4) {$b$};
\node at (7,1.6) {$c$};
\node at (4.8,3.4) {$d$};
\node at (4.8,1.6) {$e$};
\node at (3.5,4.4) {$g$};
\node at (1.5,4.4) {$h$};
\node at (1.5,0.6) {$i$};
\node at (0.7,2.5) {$j$};
\end{tikzpicture}
\caption{The knots $6_1$ and $9_{46}$.}
\label{knotsfig}
\end{figure}
The Alexander polynomials and elementary ideals of the knots $6_1$ and $9_{46}$ were calculated by Crowell and Fox~\cite[Chapter VIII, Examples (4.5) and (4.6)]{CF}. The two knots have the same Alexander polynomials: $\Delta_1=2t^2-5t+2$ and $\Delta_k=1$ for $k>1$. Both knots also have $E_k=(\Delta_k)$ for $k \neq 2$. For $6_1$, $E_2=(\Delta_2)=(1)$ but for $9_{46}$, $E_2=(2-t,1-2t)$. Notice that if $GF(3)$ is the field of three elements then the homomorphism $\varphi:\Lambda \to GF(3)$ with $\varphi(t)=-1$ has
\[
\varphi(2t^2-5t+2)=\varphi(2-t)=\varphi(1-2t)=0.
\]
We see that with respect to this homomorphism $\varphi$, $6_1$ has $j_0=2$ and $9_{46}$ has $j_0=3$.
A Fox tricoloring \cite[Exercise VI.6]{CF} of a link diagram $D$ is a function $f:A(D) \to GF(3)$. At each crossing as in Figure \ref{crossfig}, the sum $f(a_1)+f(a_2)+f(a_3)$ must be $0$ in $GF(3)$. (This is simply the requirement that the coloring satisfies Definition \ref{aquandlecolor}, with $M=GF(3)_\varphi$.) We leave it to the reader to verify the following descriptions of the spaces of Fox tricolorings of $6_1$ and $9_{46}$.
\begin{itemize}
\item Every Fox tricoloring of $6_1$ is given by arbitrary values of $f(u)$ and $f(v)$ in $GF(3)$, with $f(w)=f(z)=-f(u)-f(v)$, $f(x)=f(u)$, and $f(y)=f(v)$.
\item Every Fox tricoloring of $9_{46}$ is given by arbitrary values of $f(a),f(b)$ and $f(g)$ in $GF(3)$, with $f(c)=-f(a)-f(b)$, $f(d)=f(b)$, $f(e)=f(a)$, $f(h)=-f(b)-f(g)$, $f(i)=-f(a)-f(g)$ and $f(j)=f(g)$.
\end{itemize}
It follows that the space of Fox tricolorings of $6_1$ has dimension $2$ over $GF(3)$, and the space of Fox tricolorings of $9_{46}$ has dimension $3$ over $GF(3)$. We see that $6_1$ and $9_{46}$ have different numbers of Fox tricolorings, even though all of their Alexander polynomials are the same.
\section{Two links}
In this section we apply Corollary \ref{maincor2} to the torus link $T_{(2,8)}$ and Whitehead's link $W$, pictured in Figure \ref{linksfig}.
\begin{figure} [bht]
\centering
\begin{tikzpicture} [scale=0.8]
\draw [->] [>=angle 90] [thick] (0,0+0.5) -- (0,2);
\draw [thick] (1,1+0.5) -- (1,2);
\draw [thick] (1,1+0.5) -- (1.3,0.7+0.5);
\draw [thick] (1.7,0.3+0.5) -- (2,0+0.5);
\draw [thick] (2,1+0.5) -- (2.3,0.7+0.5);
\draw [thick] (2.7,0.3+0.5) -- (3,0+0.5);
\draw [thick] (3,1+0.5) -- (3.3,0.7+0.5);
\draw [thick] (3.7,0.3+0.5) -- (4,0+0.5);
\draw [thick] (4,1+0.5) -- (4.3,0.7+0.5);
\draw [thick] (4.7,0.3+0.5) -- (5,0+0.5);
\draw [thick] (0,0+0.5) -- (1,0+0.5);
\draw [thick] (2,1+0.5) -- (1,0+0.5);
\draw [thick] (3,1+0.5) -- (2,0+0.5);
\draw [thick] (4,1+0.5) -- (3,0+0.5);
\draw [thick] (5,1+0.5) -- (4,0+0.5);
\draw [thick] (5,4-0.5) -- (6,4-0.5);
\draw [thick] (6,0+0.5) -- (6,4-0.5);
\draw [thick] (6,0+0.5) -- (5,0+0.5);
\draw [thick] (0,4-0.5) -- (0,2);
\draw [thick] (0,4-0.5) -- (1,4-0.5);
\draw [->] [>=angle 90] [thick] (1,3-0.5) -- (1,2);
\draw [thick] (1,4-0.5) -- (1.3,3.7-0.5);
\draw [thick] (1.7,3.3-0.5) -- (2,3-0.5);
\draw [thick] (2,4-0.5) -- (2.3,3.7-0.5);
\draw [thick] (2.7,3.3-0.5) -- (3,3-0.5);
\draw [thick] (3,4-0.5) -- (3.3,3.7-0.5);
\draw [thick] (3.7,3.3-0.5) -- (4,3-0.5);
\draw [thick] (4,4-0.5) -- (4.3,3.7-0.5);
\draw [thick] (4.7,3.3-0.5) -- (5,3-0.5);
\draw [thick] (5,1+0.5) -- (5,3-0.5);
\draw [thick] (2,4-0.5) -- (1,3-0.5);
\draw [thick] (3,4-0.5) -- (2,3-0.5);
\draw [thick] (4,4-0.5) -- (3,3-0.5);
\draw [thick] (5,4-0.5) -- (4,3-0.5);
\draw [->] [>=angle 90] [thick] (8,0.8) -- (8,2);
\draw [thick] (8,3.2) -- (8,2);
\draw [thick] (8,3.2) -- (8.8,3.2);
\draw [thick] (9.2,3.2) -- (12,3.2);
\draw [thick] (8,0.8) -- (10.8,0.8);
\draw [thick] (12,0.8) -- (11.2,0.8);
\draw [thick] (12,0.8) -- (12,3.2);
\draw [thick] (9,1) -- (11,3);
\draw [thick] (11,1) -- (10.2,1.8);
\draw [thick] (9,3) -- (9.8,2.2);
\draw [thick] (11,1) -- (11,0);
\draw [->] [>=angle 90] [thick] (9,0) -- (10,0);
\draw [thick] (10,0) -- (11,0);
\draw [thick] (9,0) -- (9,0.6);
\draw [thick] (9,3) -- (9,4);
\draw [thick] (11,4) -- (9,4);
\draw [thick] (11,4) -- (11,3.4);
\end{tikzpicture}
\caption{$T_{(2,8)}$ and Whitehead's link.}
\label{linksfig}
\end{figure}
With the indicated orientations, the elementary ideals of these links are $E_j(T_{(2,8)})=E_j(W)=\Lambda_2$ for $j>1$, $E_j(T_{(2,8)})=E_j(W)=(0)$ for $j<1$, $E_1(W)=(\Delta_1(W))\cdot(t_1-1,t_2-1)=(t_{1}-1)(t_{2}-1)\cdot(t_1-1,t_2-1)$, and
\[
E_1(T_{(2,8)})=(\Delta_1(T_{(2,8)}))\cdot(t_1-1,t_2-1)=(t_{1}^{3}+t_{1}^{2}t_{2}+t_{1}t_{2}^{2}+t_{2}^{3})\cdot(t_1-1,t_2-1).
\]
The elementary ideals may be confirmed using the Alexander matrices obtained from Figure \ref{linksfig}, as described in the introduction. The Alexander polynomials $\Delta_1(T_{(2,8)})$ and $\Delta_1(W)$ may also be verified on the LinkInfo website \cite{Cha}, where the two links are labeled L5a1\{1\} and L8a14\{1\}.
\begin{table} [bht]
\centering
\begin{tabular} {cccc}
$\varphi(t_1)$ & $\varphi(t_2)$ & $j_0(T_{(2,8)})$ & $j_0(W)$ \\[5 pt]
1 & 1 & 2 & 2 \\
1 & -1 & 2 & 2 \\
-1 & -1 & 1 & 1
\end{tabular}
\caption{Values of $j_0$ for homomorphisms $\varphi:\Lambda_2 \to GF(3)$.}
\label{tabl1}
\end{table}
Both links have $E_0=(0)$ and $E_2=\Lambda_2$, so both links have $j_0 \in \{1,2\}$ for every instance of Corollary \ref{maincor2}; if $F$ is a field then a ring homomorphism $\varphi:\Lambda_2 \to F$ yields $j_0=2$ if and only if $\varphi(E_1)=(0)$. Table \ref{tabl1} gives the $j_0$ values for homomorphisms $\varphi:\Lambda_2 \to GF(3)$. (All of the elementary ideals of both links are symmetric with respect to the transposition $t_1 \leftrightarrow t_2$, so we do not need to list the homomorphism with $\varphi(t_1)=-1$ and $\varphi(t_2)=1$; it yields the same $j_0$ values as the homomorphism with $\varphi(t_1)=1$ and $\varphi(t_2)=-1$.) We see that $\textrm{Color}_A(T_{(2,8)},GF(3)_\varphi) \cong \textrm{Color}_A(W,GF(3)_\varphi)$ for every homomorphism of rings with unity $\varphi:\Lambda_2 \to GF(3)$.
The $j_0$ values for homomorphisms $\varphi:\Lambda_2 \to GF(5)$ appear in Table \ref{tabl2}. In the first four rows, we see that $\textrm{Color}_A(T_{(2,8)},GF(5)_\varphi) \cong \textrm{Color}_A(W,GF(5)_\varphi)$ for every homomorphism of rings with unity $\varphi:\Lambda_2 \to GF(5)$ that has $\varphi(t_1)=\varphi(t_2)$. In the last three rows, we see that there are homomorphisms with $\varphi(t_1) \neq \varphi(t_2)$ and $\textrm{Color}_A(T_{(2,8)},GF(5)_\varphi) \ncong \textrm{Color}_A(W,GF(5)_\varphi)$.
\begin{table} [bht]
\centering
\begin{tabular} {cccc}
$\varphi(t_1)$ & $\varphi(t_2)$ & $j_0(T_{(2,8)})$ & $j_0(W)$ \\ [5 pt]
1 & 1 & 2 & 2 \\
2 & 2 & 1 & 1 \\
3 & 3 & 1 & 1 \\
4 & 4 & 1 & 1 \\
1 & 2 & 2 & 2 \\
1 & 3 & 2 & 2 \\
1 & 4 & 2 & 2 \\
2 & 3 & 2 & 1 \\
2 & 4 & 2 & 1 \\
3 & 4 & 2 & 1
\end{tabular}
\caption{Values of $j_0$ for homomorphisms $\varphi:\Lambda_2 \to GF(5)$.}
\label{tabl2}
\end{table}
The links $T_{(2,8)}$ and $W$ are of interest because for every abelian group $A$, they have isomorphic groups of Fox colorings in $A$. This fact was verified using Goeritz matrices in \cite[Section 6]{Tcol}, but we can also deduce it from Corollary \ref{maincor3} below, because the homomorphism $\varphi:\Lambda_2 \to \mathbb{Z}$ defined by $\varphi(t_1)=\varphi(t_2)=-1$ has $\varphi(E_j(T_{(2,8)}))=\varphi(E_j(W))$ $\forall j$.
\section{A non-invertible link}
The Laurent polynomial ring $\Lambda_{\mu}=\mathbb{Z}[t_1^{\pm1},\dots,t_{\mu}^{\pm1}]$ has an automorphism given by $t_i \mapsto t_{i}^{-1}$ $\forall i$. This automorphism is sometimes called \emph{conjugation}, and denoted by an overline. Here are two important properties of conjugation.
\begin{enumerate}
\item Let $L^{inv}$ be the \emph{inverse} of an oriented link $L$, obtained by reversing the orientation of every component of $L$. Then $E_j(L)=\overline{E_j(L^{inv})}$ $\forall j$.
\item If $K$ is a knot then $E_j(K)=\overline{E_j(K)}$ $\forall j$.
\end{enumerate}
To verify property 1, let $D$ be a diagram of $L$ and let $D^{inv}$ be the diagram of $L^{inv}$ obtained from $D$ by reversing the orientation of every component. The effect of the orientation reversals is to interchange the indices of the arcs $a_2$ and $a_3$ at every crossing as indicated in Figure \ref{crossfig}. Observe that the effect of (a) interchanging $a_2$ and $a_3$ at every crossing and (b) replacing every $t_i$ with $t_{i}^{-1}$ in the resulting matrix is the same as the effect of (c) multiplying the $a$ column of $M(D)$ by $t_{\kappa(a)}$ for each $a \in A(D)$ and (d) multiplying the $c$ row of $M(D)$ by $-t_{\kappa(a_1)}^{-1}t_{\kappa(a_2)}^{-1}$ for each crossing as indicated in Figure \ref{crossfig}. Property 1 follows because operations (c) and (d) involve multiplying rows and columns by units of $\Lambda_{\mu}$, and hence do not affect the elementary ideals.
Verifying property 2 is more difficult; see \cite[Chapter IX]{CF}.
\begin{figure} [bht]
\centering
\begin{tikzpicture} [scale=1.0]
\draw [->] [>=angle 90] [thick] (0,3) -- (0,5.5);
\draw [thick] (0,8) -- (0,5.5);
\draw [thick] (0,8) -- (3,8);
\draw [thick] (3.8,3) -- (0,3);
\draw [thick] (3,7.2) -- (3,8);
\draw [thick] (3,6.2) -- (3,6.8);
\draw [thick] (3,5.8) -- (3,4);
\draw [thick] (8,4) -- (3,4);
\draw [thick] (8,4) -- (8,1.2);
\draw [thick] (8,0) -- (8,0.8);
\draw [thick] (10,0) -- (8,0);
\draw [thick] (10,0) -- (10,2);
\draw [thick] (10,2) -- (8.2,2);
\draw [thick] (7.8,2) -- (6.2,2);
\draw [thick] (5.8,2) -- (4.2,2);
\draw [thick] (3.8,2) -- (2,2);
\draw [thick] (2,2.8) -- (2,2);
\draw [thick] (2,3.2) -- (2,7);
\draw [thick] (7,7) -- (2,7);
\draw [thick] (7,7) -- (7,4.2);
\draw [thick] (7,3.8) -- (7,3);
\draw [thick] (6.2,3) -- (7,3);
\draw [thick] (4.2,3) -- (5.8,3);
\draw [thick] (4,0) -- (4,3.8);
\draw [thick] (4,0) -- (6,0);
\draw [thick] (6,0.8) -- (6,0);
\draw [thick] (6,1.2) -- (6,3.8);
\draw [thick] (1,1) -- (3.8,1);
\draw [thick] (9,1) -- (4.2,1);
\draw [thick] (9,1) -- (9,1.8);
\draw [->] [>=angle 90] [thick] (9,6) -- (9,4);
\draw [thick] (9,4) -- (9,2.2);
\draw [thick] (7.2,6) -- (9,6);
\draw [thick] (6.8,6) -- (6,6);
\draw [thick] (5.18,5.26) -- (6,6);
\draw [thick] (4.8,4.9) -- (4,4.2);
\draw [thick] (2.2,6) -- (4,6);
\draw [thick] (6,4.2) -- (4,6);
\draw [thick] (1,6) -- (1.8,6);
\draw [thick] (1,6) -- (1,3.2);
\draw [thick] (1,2.8) -- (1,1);
\node at (5,7.3) {$K_2$};
\node at (8,6.3) {$K_1$};
\node at (9.7,0.3) {$a$};
\node at (9.2,5) {$b$};
\node at (7,0.7) {$c$};
\node at (8.3,3.3) {$d$};
\node at (7,1.7) {$e$};
\node at (6.3,5.7) {$g$};
\node at (6.7,3.3) {$h$};
\node at (6.2,2.5) {$i$};
\node at (5,3.3) {$j$};
\node at (5,1.7) {$k$};
\node at (5,0.3) {$l$};
\node at (3,0.7) {$m$};
\node at (3,1.7) {$n$};
\node at (0.3,7.7) {$o$};
\node at (1.3,5) {$p$};
\node at (3.7,5.7) {$q$};
\node at (3.3,6.5) {$r$};
\node at (4.3,4.8) {$s$};
\node at (7.3,5) {$u$};
\end{tikzpicture}
\caption{Turaev's non-invertible link, $T$.}
\label{linkfig}
\end{figure}
Properties 1 and 2 indicate that the elementary ideals cannot detect non-invertibility of knots. However the elementary ideals can sometimes detect non-invertibility of links. An example is the two-component link $T$ pictured in Figure \ref{linkfig}, which was discussed by Turaev \cite{T2}. With the indicated component indices and orientations, $T$ has the elementary ideals $E_3(T)=\Lambda_2$ and $E_2(T)=(t_1-3,t_2-1,7)$. (We do not present detailed calculations.) Notice that if $\varphi:\Lambda_2 \to GF(7)$ is the ring homomorphism with $\varphi(t_1)=3$ and $\varphi(t_2)=1$ then $\varphi(E_2(T)) = 0$ but $\varphi(\overline{E_2(T)})$ includes the nonzero element $\varphi(t_{1}^{-1}-3)=5-3=2$. It follows that $E_2(T) \neq E_2(T^{inv})$, so $T$ is not invertible.
Corollary \ref{maincor2} tells us that multivariate Alexander colorings detect the non-invertibility of $T$: the dimension of $\mathrm{Color}_A(D,GF(7)_\varphi)$ over $GF(7)$ is $3$, but the dimension of $\mathrm{Color}_A(D^{inv},$ $GF(7)_\varphi)$ is no more than $2$. We leave it to the reader to verify the following explicit descriptions of these spaces.
\begin{itemize}
\item Every $f \in \mathrm{Color}_A(D,GF(7)_\varphi)$ is given by arbitrary values of $f(a),f(b)$ and $f(i)$ in $GF(7)$, with $f(c)=f(b)-2f(a)$, $f(d)=f(j)=f(k)=-2f(a)$, $f(a)=f(e)=f(h)=f(n)=f(o)=f(r)=f(u)$, $f(g)=2f(a)+f(b)$, $f(l)=4f(a)-2f(b)+3f(i)$, $f(m)=f(i)$, $f(p)=2f(a)+f(i)$, $f(q)=4f(a)+f(i)$ and $f(s)=f(a)-2f(b)+3f(i)$.
\item Every $f \in \mathrm{Color}_A(D^{inv},GF(7)_\varphi)$ is given by arbitrary values of $f(b)$ and $f(i)$ in $GF(7)$, with $f(a)=f(d)=f(e)=f(h)=f(j)=f(k)=f(n)=f(o)=f(r)=f(u)=0$, $f(c)=f(g)=f(b)$, $f(l)=f(s)=3f(b)+5f(i)$, and $f(m)=f(p)=f(q)=f(i)$.
\end{itemize}
\section{Principal ideal domains}
The special theory of modules over principal ideal domains is explained in many algebra books, like \cite{Ja, La, Se}. We summarize the ideas briefly.
Suppose $R$ is a principal ideal domain and $X$ is an $m \times n$ matrix with entries from $R$. Define the elementary ideals $E_j(X)$ as follows: if $j \geq n$, then $E_j(X)=R$; if $n>j\geq \mathrm{max}\{0,n-m\}$, then $E_j(X)$ is the ideal of $R$ generated by the determinants of $(n-j) \times (n-j)$ submatrices of $X$; and if $j<\mathrm{max}\{0,n-m\}$, then $E_j(X)=(0)$. As $R$ is a principal ideal domain, for each integer $j$ there is an $e_j(X) \in R$ such that $E_j(X)$ is the principal ideal generated by $e_j(X)$. Determinants satisfy the Laplace expansion property, so these elements $e_j(X)$ form a sequence of divisors: $e_{j+1}(X) \mid e_j(X)$ $\forall j$. The quotients $d_j(X)=e_j(X)/e_{j+1}(X)$ are the \emph{invariant factors} of $X$. Like the $e_j$, the $d_j$ are well-defined only up to associates, i.e. the principal ideals $(d_j(X))$ are invariants of $X$, but the particular elements $d_j(X)$ are not. The invariant factors also form a sequence of divisors: $d_{j+1}(X) \mid d_j(X)$ $\forall j$. The \emph{Smith normal form} of $X$ is the $m \times n$ matrix obtained from the diagonal matrix
\begin{equation*}%
\begin{pmatrix}
d_{0}(X) & 0 & 0 & 0\\
0 & d_{1}(X) & 0 & 0\\
0 & 0 & \ddots & 0\\
0 & 0 & 0 & d_{n-1}(X)
\end{pmatrix}
\end{equation*}
by adjoining $m-n$ rows of zeroes if $n<m$, and removing $n-m$ rows of zeroes if $n>m$.
The Smith normal form of $X$ is equivalent to $X$, i.e., there are invertible matrices $P,Q$ such that $PXQ$ is equal to the Smith normal form of $X$. It follows that if $X$ is a presentation matrix for the $R$-module $M$, then the Smith normal form of $X$ is also a presentation matrix for $M$. That is, if $X$ is a presentation matrix for $M$ then
\begin{equation}
\label{sumform}
M \cong \bigoplus_{j=0}^{n-1} \text{ } R/(d_j(X)).
\end{equation}
The fact that the $d_j(X)$ form a sequence of divisors implies that $(d_j(X)) \subseteq (d_{j+1}(X))$ $\forall j$. In particular, if $(d_i(X))=R$ then $(d_j(X))=R$ $\forall j\geq i$. Notice that values of $j$ with $(d_j(X))=R$ contribute nothing of significance to the direct sum of (\ref{sumform}).
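A small numerical illustration (ours, not from the original) may help fix the conventions. Let $R=\mathbb{Z}$ and

```latex
X=\begin{pmatrix} 2 & 4 \\ -2 & 6 \end{pmatrix}, \qquad
E_0(X)=(\det X)=(20), \quad E_1(X)=(2,4,-2,6)=(2), \quad E_2(X)=\mathbb{Z}.
```

Then $e_0(X)=20$, $e_1(X)=2$ and $e_2(X)=1$, so the invariant factors are $d_0(X)=20/2=10$ and $d_1(X)=2/1=2$. With the ordering displayed above (so that $d_1 \mid d_0$), the Smith normal form is the diagonal matrix with entries $10$ and $2$, and (\ref{sumform}) identifies the presented module as $\mathbb{Z}/(10) \oplus \mathbb{Z}/(2)$.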
Using Lemmas \ref{adj} and \ref{pres}, we deduce the following theorem from (\ref{sumform}).
\begin{theorem}
\label{main3}
Let $R$ be a principal ideal domain, and let $\varphi:\Lambda_{\mu} \to R$ be a homomorphism of rings with unity. Suppose $D$ is a diagram of a $\mu$-component link $L$, and $d_0,d_1,\dots,d_{|A(D)|-1}$ are the invariant factors of $\varphi(M(D))$. Then for any $R$-module $M$,
\begin{equation*}
\mathrm{Color}_A(D,M_\varphi) \cong \bigoplus_{j=0}^{|A(D)|-1} \mathrm{Hom}_R(R/(d_j),M)_\varphi.
\end{equation*}
\end{theorem}
The direct sum of Theorem \ref{main3} seems to vary from one diagram to another, but the invariance of the Alexander module guarantees that if $D$ and $D'$ are diagrams of the same link and $|A(D)|<|A(D')|$ then the invariant factors $d'_j$ of $\varphi(M(D'))$ with $j \geq |A(D)|-1$ all generate the same principal ideal, $(d'_j)=R$. It follows that these invariant factors contribute nothing of significance to the direct sum of Theorem \ref{main3}.
\begin{corollary}
\label{maincor3}
Let $R$ be a principal ideal domain, let $\varphi:\Lambda_{\mu} \to R$ be a homomorphism of rings with unity, and let $M$ be an $R$-module. Then for any diagram $D$ of a $\mu$-component link $L$, the $\Lambda_{\mu}$-module $\mathrm{Color}_A(D,M_\varphi)$ is determined up to isomorphism by $M$ and the images under $\varphi$ of the elementary ideals of $L$.
\end{corollary}
The examples of Section 3 show that if we replace ``elementary ideals'' by ``Alexander polynomials'' in Corollary \ref{maincor3} then the resulting statement is false, in general.
Theorem \ref{main3} implies Theorem \ref{main2} in two different ways. (i) Suppose $F$ is the field of quotients of a principal ideal domain $R$. (Perhaps $F=R$.) If $M$ is a vector space over $F$, then $\mathrm{Hom}_R(R/(d_j),M)_\varphi$ is isomorphic to either $(0)$ (if $d_j \neq 0$) or $M_\varphi$ (if $d_j=0$). (ii) Suppose $I$ is a maximal ideal of a principal ideal domain $R$, $F=R/I$ and $M$ is a vector space over $F$. Then $\mathrm{Hom}_R(R/(d_j),M)_\varphi$ is isomorphic to either $(0)$ (if $d_j \notin I$) or $M_\varphi$ (if $d_j \in I$).
We close with thanks to an anonymous reader, who provided helpful comments on the first version of the paper.
\section{Introduction}\label{intro}
\section{Introduction}\label{intro}
Humanity is currently moving towards widespread industrialization of quantum computers. As such, practitioners of quantum mechanics, a field that has long been treated as a high-level science, are being pushed towards a more engineering-oriented approach. While quantum mechanics is essential to central concepts of electrical engineering, such as electrical conduction \cite{olsen2010} and semiconductor devices \cite{steele1957}, simplified versions of these concepts are usually taught to engineers. For instance, the Drude model \cite{drude1900}, which was actually invented \emph{before} quantum mechanics, is still often used to describe electrical conduction. Similarly, semiconductor devices have long been taught using band theory \cite{nussbaum1962} without ever showing how these bands arise from quantum interference \cite{olsen2010}.
Since quantum mechanics is fundamental to the operation of quantum computers, such oversimplification is unlikely to serve the new breed of electrical and computer engineers who will be designing and using superconducting quantum computers \cite{mermin,merminbook}. The media are rife with excessively simple explanations of the basic principles of quantum computing, such as the claim that a Qbit \cite{mermin,merminbook} can be both zero and one at the same time, so the computer does every possible computation at once. These kinds of statements may be useful for journalists to hint at the power of quantum computing, but they are confusing and misleading for the students who should one day be designing these powerful computers. For instance, while it may be true that an ideal quantum computer can be made to perform every possible computation at the same time, it is also true that only one result can ever be measured. It is only through careful engineering of quantum interference (in the case of gate-model architectures) or quantum tunneling (in the case of annealing architectures) that correct answers to difficult problems can be obtained more efficiently than with classical computers.
The lumped element model has long been used as the simplest model of classical circuits. A method of quantizing lumped element circuits described by \citet{devoret} has been used by the superconducting quantum computing field for decades. Curiously, the relationship between the circuit topology and the component types determines whether or not a circuit can be quantized using this method. For instance, the two circuits shown in Figure \ref{topo} have identical topologies; they differ only in the placement of the components within this topology. But only circuit (a) may be quantized. And yet classically, both circuits have valid solutions.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.8]{Fig1_topo.pdf}
\caption{Two lumped element circuits with the same topology. Circuit (a) can be quantized using current methods \cite{devoret}, while circuit (b) cannot. }
\label{topo}
\end{figure}
It is an inarguable truth that \emph{reality is quantum}. The correspondence principle says that classical descriptions are always an approximation of that quantum reality. The fact that some lumped element circuits cannot be quantized using current methods means that there must be some fundamental piece of our quantum reality missing from the lumped element model that cannot be approximated away if we wish to describe the quantum mechanics of the circuit.
Here, we show how to quantize lumped element circuits, demonstrating improvements to the method of \citet{devoret} that allow all lumped element circuits to be quantized, as long as the parasitic components that arise from the geometry of the circuit are always included. Our method permits simple and automated quantization of every lumped element circuit. An online quantization tool that uses our method can be accessed by students in order to explore and understand quantum circuits \footnote{The automated online quantization tool can be found at this website: https://app.qspicelabs.com}.
Our finding that the circuit geometry must be included for a complete quantum description of circuits is an eminently intuitive result with deep physical meaning. A circuit controls the flow of currents and strength of fields, but the fields arise from the properties of free space itself. Thus it makes physical sense that complete description of the circuit must describe not only the topology - how the components are connected - but the geometrical arrangement of those connections in free space. Practitioners of quantum circuits have long known through experience that the circuit layout is extremely important to the noise characteristics of the final circuits. Our results provide a quantitative way to connect this intuition to the circuit design process.
Understanding the method described here requires a basic background in classical mechanics, specifically Lagrangian mechanics, in which the dynamics of a physical system are fully defined by the action of the system. The constant of quantum mechanics, $\hbar$, is in units of action. Any system may be quantized by defining its Lagrangian. Application of this familiar technique of system quantization to the lumped element circuit model is a natural way to demonstrate principles of quantum mechanics by using a system very familiar to undergraduate electrical engineers. Thus the work described here would work well as part of a course on quantum electrical and computer engineering. We suggest the textbook ``Quantum Mechanics for Scientists and Engineers'' by Miller \cite{qeng}, the introduction to the Lagrangian by Hanc et al. \cite{hanc2004}, which is appropriate for undergraduate students, and the work by Mermin introducing quantum computing to computer scientists \cite{mermin,merminbook}.
We begin by reviewing background information, including lumped element circuits, the process of quantization, and the reason why some lumped element circuits cannot be quantized in Section \ref{back}. Next we assess the assumptions of the lumped element model in Section \ref{assum} to identify those which might be revised to create the most simple, but complete model of quantum circuits. In Section \ref{geom}, we show how geometric components can be added to the lumped element model to make it fully quantizable. Finally, to validate the model we show the results of automated numerical simulations in Section \ref{results}.
\section{Background}\label{back}
Quantum circuits are made out of superconducting wires and components. This superconductivity suppresses the natural dissipation of energy, and thus the dissipation of quantum information. As such, the quantum circuit stays in the same quantum state, subject to time evolution under the rules of quantum mechanics, until an interaction changes that state. That interaction may be intentional (through a control or measurement process), or an uncontrolled noisy interaction with the external environment. The goal of this work is to show how a quantum circuit evolves under the rules of quantum mechanics between interaction events.
We need only consider capacitive and inductive components to discuss quantization of the circuit. Thus in the present work, we neglect dissipative components (like resistors), sources, and non-linear components (like Josephson junctions), whose effects on quantum circuits have already been well-described by \citet{devoret}. The issue of choosing a coordinate system, which requires the choice of ground node and a spanning tree, has also been well-described by \citet{devoret} and is discussed here only when necessary.
\subsection{Lumped element circuits} \label{classical}
In this Section, we review the traditional, classical methods of solving for the time evolution of lumped element circuits using coupled differential equations.
As a first approximation, circuits can simply be treated as a set of interconnected components using the lumped element model. Each component has a type that determines its behavior - inductors store energy in magnetic fields, while capacitors store energy in electric fields - and a number of terminals that can be connected to other components. A circuit topology must also be defined; to be a single circuit, wires must interconnect all the terminals in such a way that a path can be found between any two components. Using this description of a circuit along with Kirchhoff's circuit laws, the behavior of the corresponding real circuit being modeled can often be accurately predicted and understood.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.8]{Fig2_unquan.pdf}
\caption{A lumped element circuit that cannot be quantized in any representation using current methods. (a) The circuit. The node flux and loop charge variables used in the solution are shown. (b) The circuit reduced to a representation which can be quantized, but with loss of information about node 3 and loop 1.}
\label{unquan}
\end{figure}
These lumped element circuits are described by only a few degrees of freedom (DOF). Consider the circuit shown in Figure \ref{unquan}(a), which is composed of four components, $c=4$. In the classical lumped element model, either the voltages ($V$) across or the currents ($i$) through each component can be used as the degrees of freedom. For each component type, different equations relate these two quantities: for a capacitor with capacitance $C$, $i=C\dot V$, and for an inductor with inductance $L$, $V=L \dot i$.
But these component degrees of freedom are highly dependent on one another. Independent DOF can be defined instead. Each circuit has $n$ nodes and $l$ loops. In the classical lumped element circuit model, the DOF are defined as the voltages $V_n$ at the $n$ circuit nodes and the currents $i_l$ around the $l$ circuit loops. One of the nodes is always defined as the ground node, which is at a constant voltage of zero, leaving $n-1$ node variables. The number of loops is equal to $l=c+1-n$, for a total of $c$ independent degrees of freedom.
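This counting argument is easy to automate. The following minimal sketch (our own helper, not code from the paper) tallies the independent degrees of freedom for a circuit with $c$ components and $n$ nodes:

```python
# A small helper (our sketch, not from the paper) that counts the
# independent degrees of freedom of a lumped element circuit with c
# components and n nodes: n - 1 node variables (one node is ground)
# plus l = c + 1 - n loop variables, for a total of c.
def dof_count(c, n):
    """Return (node_vars, loop_vars, total) for c components, n nodes."""
    node_vars = n - 1       # one node is always the ground node
    loop_vars = c + 1 - n   # number of independent loops
    return node_vars, loop_vars, node_vars + loop_vars

# The four-component example circuit: c = 4, n = 3 (including ground).
print(dof_count(4, 3))  # -> (2, 2, 4)
```

For the four-component circuit discussed below, this gives two node variables and two loop variables, for a total of four.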
Here we will find it more convenient to modify the node voltages and loop currents slightly to node fluxes and loop charges by taking time integrals,
\begin{eqnarray}
\label{phidef}\phi_n&=& \int_{-\infty} ^t V_n(t') dt',\\
\label{Qdef}Q_l&=& \int_{-\infty} ^t i_l(t') dt'.
\end{eqnarray}
In our four component circuit, there are two non-grounded nodes and two loops. The node fluxes and loop charges are indicated on the diagram in Fig. \ref{unquan}(a).
Let us consider the classical solution of the lumped element circuit shown in Figure \ref{unquan}(a). Working in the node representation, we write equations using Kirchhoff's current law, which says that the currents arriving at each node sum to zero. For our circuit, these equations at nodes 2 and 3 are,
\begin{subequations}
\begin{eqnarray}
\label{cn1}i_1+i_2-i_3=C_1 \ddot \phi_2 +C_2 \ddot \phi_2 - \frac{1}{L_3} (\phi_3-\phi_2) &=& 0, \\
\label{cn2}i_4+i_3=\frac{1}{L_4} \phi_3 + \frac{1}{L_3} (\phi_3-\phi_2) &=& 0,
\end{eqnarray}
\end{subequations}
where $i_1,i_2\dots$ are the currents through each component.
To get the flux (or voltage) across a particular component, we have used differences between the flux at the nodes the component is attached to. For instance, the inductor $L_3$ goes between nodes 3 and 2, thus the flux across the inductor is $\phi_3-\phi_2$. Here are all the definitions for the circuit.
\begin{subequations}
\begin{eqnarray}
\label{fc1} \phi_{C1}&=& \phi_2 \\
\phi_{C2}&=& \phi_2 \\
\phi_{L3}&=& \phi_3-\phi_2 \\
\label{fl4} \phi_{L4}&=& \phi_3
\end{eqnarray}
\end{subequations}
Solving Eq. \ref{cn2} for $\phi_2$ and inserting into Eq. \ref{cn1}, we obtain the following equations,
\begin{eqnarray}
-(C_1 +C_2) \ddot \phi_2 &=& \frac{1}{L_3+L_4} \phi_2 , \\
\phi_3&=&\frac{L_4}{L_3+L_4} \phi_2,
\end{eqnarray}
for which the solution is,
\begin{subequations}
\begin{eqnarray}
\label{omega} \omega&=&\frac{1}{\sqrt{(C_1+C_2)(L_3+L_4)}}, \\
\label{sol} \phi_2&=&A \cos{(\omega t + \theta)}, \\
\phi_3&= &\frac{L_4}{L_3+L_4} A \cos{(\omega t + \theta)},
\end{eqnarray}
\end{subequations}
where $A$ and $\theta$ are constants to be determined by the initial conditions.
Note that even though there are two independent node variables ($V_2,V_3$ or $\phi_2,\phi_3$), the solution has only one frequency and the solution for node 3 is simply a constant times the solution of node 2. This happens because there are no capacitors attached to node 3, only inductors. Thus there is no differential equation for $\phi_3$. Variables like this are often referred to as \emph{passive} variables \cite{devoret}. We can remove the passive node by combining the series inductors and parallel capacitors in the circuit in Figure \ref{unquan}(a) to obtain the circuit shown in Figure \ref{unquan}(b). This circuit has the same solution as Eq. \ref{sol}.
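The classical solution can also be checked numerically. Below is a minimal sketch (with illustrative nH- and pF-scale component values of our own choosing, not taken from the paper) verifying that $\phi_2 = A\cos(\omega t + \theta)$ and $\phi_3 = \frac{L_4}{L_3+L_4}\phi_2$ satisfy both node equations at arbitrary times:

```python
import math

# Numerical check (illustrative values, not from the paper) that the
# closed-form solution satisfies Kirchhoff's current law at both nodes:
# at node 2, (C1+C2)*phi2'' balances the current arriving through L3;
# at node 3, the currents through L3 and L4 balance.
C1, C2 = 1e-12, 2e-12   # farads
L3, L4 = 1e-9, 3e-9     # henries
A, theta = 1.0, 0.3     # arbitrary initial conditions

w = 1.0 / math.sqrt((C1 + C2) * (L3 + L4))

def residuals(t):
    phi2 = A * math.cos(w * t + theta)
    phi3 = L4 / (L3 + L4) * phi2
    ddphi2 = -w ** 2 * phi2                 # second time derivative
    node2 = (C1 + C2) * ddphi2 - (phi3 - phi2) / L3
    node3 = phi3 / L4 + (phi3 - phi2) / L3
    return node2, node3

for t in (0.0, 1e-10, 5e-10):
    r2, r3 = residuals(t)
    assert abs(r2) < 1e-3 and abs(r3) < 1e-3
```

Both residuals vanish to floating point precision, as the analytic solution requires.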
For comparison, we invite the reader to find the solution for the circuit shown in Figure \ref{topo}(a) instead. For this circuit, one finds there are two frequencies - one for each independent variable - and that the solution for node 3 is a differential function of the solution for node 2. None of the nodes are passive because there are capacitors attached to all nodes.
Classical lumped element circuits can also be solved using the loop representation. For each loop, we write equations using Kirchhoff's voltage law, which says that the voltages across each of the components that surround a loop sum to zero. For our circuit, these equations are,
\begin{subequations}
\begin{eqnarray}
\label{lp1}-V_1+V_2&=& \frac{1}{C_1} (Q_1+Q_2) + \frac{1}{C_2} Q_1 = 0, \\
\label{lp2}-V_1-V_3+V_4&=& \frac{1}{C_1} (Q_1+Q_2) + L_3 \ddot Q_2 + L_4 \ddot Q_2 = 0.
\end{eqnarray}
\end{subequations}
We leave it as an exercise for the reader to complete the solution in the loop representation, showing that the frequency $\omega$ and the solutions are identical to those in the node representation.
But we can immediately see within Eqs. \ref{lp1} and \ref{lp2} that there is no differential equation for $Q_1$. This makes loop 1 a passive variable. Loops are passive when they have no inductors around them.
\subsection{Lagrangian mechanics for lumped element circuits}
Next, we present an alternative method to solve for a lumped element circuit using Lagrangian mechanics. To discuss the method, we will consider the simplest possible circuit, shown in Fig. \ref{unquan}(b), consisting of only one capacitor and one inductor connected in parallel. The circuit has one loop and one non-grounded node.
The Lagrangian is defined as the kinetic energy minus the potential energy,
\begin{eqnarray}
\mathcal{L}=KE-PE
\end{eqnarray}
where the potential energy depends only on the variables themselves and the kinetic energy depends on the time derivative of the variables.
Working in the node flux representation and using the classical definitions for the energy stored in a capacitor or inductor
\begin{eqnarray}
E_C&=&\frac{CV^2}{2}=\frac{Q^2}{2 C}, \\
E_L&=&\frac{\phi^2}{2 L}=\frac{LI^2}{2},
\end{eqnarray}
the Lagrangian for our simple circuit becomes,
\begin{eqnarray}
\label{lsimp} \mathcal{L}=\frac{(C_1+C_2) \dot{\phi}^2}{2}-\frac{ \phi^2}{2 (L_3+L_4)}.
\end{eqnarray}
The classical evolution of the system is determined by the Euler-Lagrange equation,
\begin{equation}
\frac{d}{dt} \left( \frac{\partial \mathcal{L}}{\partial \dot{\phi}} \right) = \frac{\partial \mathcal{L}}{\partial \phi}.
\end{equation}
For a more complicated Lagrangian with many degrees of freedom, there will be an Euler-Lagrange equation for each variable. For our simple circuit, the Euler-Lagrange equation is
\begin{equation}
\ddot{\phi}=-\frac{1}{(C_1+C_2)(L_3+L_4)} \phi,
\end{equation}
which has the same solution shown in Eq. \ref{sol}. Thus we have shown that the Lagrangian method produces the same answer as the classical method used in Section \ref{classical} for our circuit.
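The statement that the Euler-Lagrange equation singles out the true trajectory can be illustrated numerically: the action is stationary on the classical solution and not on other paths. The sketch below (our own check, with illustrative component values) discretizes the action for the reduced circuit and compares the first variation along an endpoint-fixed perturbation on the true path against a path with the wrong frequency:

```python
import math

# Numerical sketch (ours, with illustrative values): the action
# S = int (C/2 phidot^2 - phi^2/(2L)) dt is stationary on the true
# solution phi = cos(w t), but not on a path with the wrong frequency.
C = 3e-12                  # C1 + C2, farads
L = 4e-9                   # L3 + L4, henries
w = 1.0 / math.sqrt(L * C)
T = 2 * math.pi / w        # one oscillation period
N = 20000
dt = T / N
ts = [k * dt for k in range(N + 1)]

def action(path):
    """Midpoint-rule estimate of S = int (C/2 phidot^2 - phi^2/(2L)) dt."""
    S = 0.0
    for k in range(N):
        phidot = (path[k + 1] - path[k]) / dt
        phi = 0.5 * (path[k + 1] + path[k])
        S += (0.5 * C * phidot ** 2 - phi ** 2 / (2 * L)) * dt
    return S

def first_variation(path, bump, eps=1e-3):
    """Central difference of the action along an endpoint-fixed bump."""
    plus = action([p + eps * b for p, b in zip(path, bump)])
    minus = action([p - eps * b for p, b in zip(path, bump)])
    return (plus - minus) / (2 * eps)

bump = [math.sin(math.pi * t / T) for t in ts]    # vanishes at endpoints
true_path = [math.cos(w * t) for t in ts]         # the classical solution
wrong_path = [math.cos(1.3 * w * t) for t in ts]  # wrong frequency

dS_true = first_variation(true_path, bump)
dS_wrong = first_variation(wrong_path, bump)
assert abs(dS_true) < 1e-6 < abs(dS_wrong)
```

The first variation vanishes (to discretization accuracy) only on the classical path, which is exactly the content of the Euler-Lagrange equation.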
\subsection{Quantization of lumped element circuits}
The guiding principle of quantum circuits is that some macroscopic systems, those composed of effectively infinite numbers of particles, can still be accurately described by only a few degrees of freedom (DOF) evolving according to the rules of quantum mechanics. It is not the reduction of a large system to a few DOF that makes the description quantum; we already reduced the system to a few DOF using the classical treatment in Section \ref{classical}.
What makes the circuit quantum is that these DOF can be expressed as pairs of canonically conjugate variables which must then be related to one another by the Heisenberg uncertainty principle. The Heisenberg uncertainty principle relates the precision with which we can know the value of one variable (quantified by standard deviation $\delta \phi$) to the precision with which we can know the value of its conjugate variable by,
\begin{equation}
\label{heis} \delta \phi \delta \mathbbm{q} \ge \frac{\hbar}{2}.
\end{equation}
To be a conjugate pair, one variable of the pair must be the derivative of the action with respect to the other variable of the pair. The action is defined using the Lagrangian as,
\begin{equation}
S=\int_{t_1}^{t_2} \mathcal{L} dt ,
\end{equation}
and has dimensions of energy$\times$time. That is the reason why pairs of flux (dimensions of (energy$\times$time)/charge) and charge variables are used to quantize a circuit - because they are conjugate pairs. In contrast, voltage and current do not form a conjugate pair.
In this work, we consider only time independent Lagrangians. In this case, the conjugate variables can be obtained directly from the Lagrangian as,
\begin{equation}
\label{conjn} \mathbbm{q}_n=\frac{\partial \mathcal{L}}{\partial \dot{\phi_n}}.
\end{equation}
For the Lagrangian in Eq. \ref{lsimp}, we obtain the conjugate variable,
\begin{equation}
\mathbbm{q}=(C_1+C_2) \dot \phi.
\end{equation}
From this definition, we can see that the physical interpretation of the conjugate variable $\mathbbm{q}$ is the charge stored on the plates of all of the capacitors attached to that node. As such, the $\mathbbm{q}_n$ variables are called capacitive node charges.
The Hamiltonian, which gives the total energy of the system, is obtained from the Lagrangian through the Legendre transformation using the conjugate variables,
\begin{equation}
H= \sum_n \mathbbm{q}_n \dot{\phi_n} -\mathcal{L}.
\end{equation}
To quantize the system, one simply replaces each pair of conjugate variables $\phi,\mathbbm{q}$ with a pair of non-commuting operators $\hat{\phi},\hat{\mathbbm{q}}$,
\begin{equation}
[\hat\phi, \hat{\mathbbm{q}}] = i \hbar.
\end{equation}
The Hamiltonian itself, $H$, is also replaced with an operator $\hat{H}$. The time evolution of the system can then be calculated using the Hamiltonian according to the rules of quantum mechanics.
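As a concrete, if truncated, illustration of this replacement, the sketch below (our own construction, not part of the paper's method) represents $\hat\phi$ and $\hat{\mathbbm{q}}$ for the reduced LC oscillator as finite matrices in the energy eigenbasis, using the standard ladder-operator forms $\hat\phi=\sqrt{\hbar Z/2}\,(\hat a+\hat a^\dagger)$ and $\hat{\mathbbm{q}}=-i\sqrt{\hbar/2Z}\,(\hat a-\hat a^\dagger)$ with $Z=\sqrt{L/C}$, and verifies the canonical commutator numerically:

```python
import math

# Truncated-matrix illustration (ours, not from the paper) of the
# canonical commutator [phi_hat, q_hat] = i*hbar for an LC oscillator.
# Truncation to N levels corrupts only the last diagonal entry.
hbar = 1.0545718e-34
L, C = 4e-9, 3e-12            # illustrative henries / farads
Z = math.sqrt(L / C)          # characteristic impedance
N = 8                         # truncation dimension

def zeros():
    return [[0j] * N for _ in range(N)]

# annihilation operator: a|n> = sqrt(n)|n-1>
a = zeros()
for n in range(1, N):
    a[n - 1][n] = math.sqrt(n)
adag = [[a[j][i].conjugate() for j in range(N)] for i in range(N)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def lin(c1, A, c2, B):
    return [[c1 * A[i][j] + c2 * B[i][j] for j in range(N)] for i in range(N)]

s = math.sqrt(hbar * Z / 2)
phi = lin(s, a, s, adag)                       # flux operator
r = math.sqrt(hbar / (2 * Z))
q = lin(-1j * r, a, 1j * r, adag)              # conjugate charge operator

comm = lin(1, mul(phi, q), -1, mul(q, phi))    # [phi, q]
for n in range(N - 1):                         # last entry: truncation artifact
    assert abs(comm[n][n] - 1j * hbar) < 1e-40
```

Every diagonal entry of the commutator equals $i\hbar$ except the last, which is an artifact of the finite truncation.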
\subsection{Unquantizable lumped element circuits} \label{quant}
Now let us return to the circuit shown in Figure \ref{unquan}(a) and see why it cannot be quantized. The Lagrangian for this circuit is,
\begin{eqnarray}
\label{lag1n} \mathcal{L}=\frac{C_1 \dot{\phi_2}^2}{2}+\frac{C_2 \dot{\phi_2}^2}{2}-\frac{ (\phi_3-\phi_2)^2}{2 L_3}-\frac{ \phi_3^2}{2 L_4}.
\end{eqnarray}
We then obtain for the conjugate variables,
\begin{subequations}
\begin{eqnarray}
\label{q2} \mathbbm{q}_2&=& (C_1+C_2) \dot\phi_2,\\
\label{q3} \mathbbm{q}_3 &=& 0.
\end{eqnarray}
\end{subequations}
Now we can see the problem with this particular circuit; the conjugate variable for node 3 is not actually a variable, it is simply always equal to zero. Since we cannot identify a pair of canonical conjugate variables for node 3, the circuit cannot be quantized and we cannot determine the time evolution of this variable according to the rules of quantum mechanics.
Let us discuss the physical meaning of this problem. A quantum description of the circuit means that pairs of canonical conjugate variables are related to one another by the Heisenberg uncertainty principle (Eq. \ref{heis}). If the conjugate variable is always precisely equal to zero, then its measurement error is also $\delta \mathbbm{q}_3=0$. Inserting this in Eq. \ref{heis}, we obtain an infinite measurement error, $\delta \phi_3=\infty$, for the flux at node 3. This means that the flux at node 3 is completely undetermined. If we measure it, we might get anything. This is unphysical nonsense, so there must be something we are missing.
What is the origin of this issue? Recall that the physical interpretation of $\mathbbm{q}_n$ is the charge stored on the plates of all of the capacitors attached to that node. Thus the reason that the conjugate variable at node 3 is undefined is because the node is passive - there are no capacitors attached to it.
(One might think to avoid this problem by making node 3 the ground node. We leave it as an exercise for the reader to show why this solution does not work.)
Now let us attempt to instead quantize the circuit in the loop charge representation. Using the derivative of Eq. \ref{Qdef}, $i_l=\dot Q_l$, the Lagrangian is
\begin{eqnarray}
\mathcal{L}=\frac{L_3 \dot{Q_2}^2}{2}+\frac{L_4 \dot{Q_2}^2}{2}-\frac{ (Q_1-Q_2)^2}{2 C_1}-\frac{ Q_1^2}{2 C_2}.
\end{eqnarray}
Note that in the node flux representation, the kinetic energy is the energy stored in the capacitors. In contrast, here in the loop charge representation, the kinetic energy is the energy stored in the inductors. The conjugate variables,
\begin{equation}
\label{conjl} \mathcal{\Phi}_l=\frac{\partial \mathcal{L}}{\partial \dot{Q_l}},
\end{equation}
are called inductive loop fluxes. For our circuit, they are
\begin{subequations}
\begin{eqnarray}
\label{p1} \mathcal{\Phi}_1&=& 0,\\
\label{p2} \mathcal{\Phi}_2 &=& (L_3+L_4) \dot Q_2.
\end{eqnarray}
\end{subequations}
In this representation, we find that the conjugate variable for loop 1 is undefined. Similarly, this is because loop 1 is passive - there are no inductors around the loop. Since no conjugate variable is defined for the charge around loop 1, we cannot determine the time evolution of this variable according to the rules of quantum mechanics.
If we want to solve for the properties of this particular circuit, we can reduce it to the circuit in Figure \ref{unquan}(b) by combining the two capacitors and the two inductors. This reduced circuit has a capacitor connected to every node and so is quantizable in the node flux representation, as we showed in the last Section. Similarly, the reduced circuit has an inductor around every loop, so is quantizable in the loop charge representation.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.8]{Fig3_unquan_unred.pdf}
\caption{Irreducible lumped element circuit that cannot be quantized. }
\label{unquan2}
\end{figure}
However, by reducing the circuit in this way, we lose information about one of the degrees of freedom in our solution. And not every unquantizable circuit can be reduced to a circuit which is solvable. Figure \ref{unquan2} shows another circuit which cannot be quantized. It cannot be quantized in the node flux representation because the central node does not have a capacitor connected to it. Likewise, it cannot be quantized in the loop charge representation because the outermost loop, the one that runs around the outside edge of the circuit, does not have an inductor around it. And, this circuit cannot be reduced to a simpler circuit which is solvable.
As such, we cannot count on being able to deal with the problem of unquantizable circuits by reducing the circuit to a simpler circuit that is solvable. However, we have chosen a circuit that \emph{can} be reduced so that we have a way to verify the method we will present in the rest of this manuscript, which allows us to quantize any lumped element circuit whose geometry is defined.
To summarize the problem, using the present method for quantizing circuits, we are unable to quantize circuits in the node flux representation when there are passive nodes which are not attached to any capacitor. Similarly, we are unable to quantize circuits in the loop charge representation when there are passive loops which do not go through an inductor. Circuits which contain both passive nodes and passive loops cannot be quantized in any representation using this method, even though we can write classical equations determining the time evolution of the circuit. Quantum mechanics is a more fundamental theory than classical mechanics. If we can describe the system classically, there must be a way to describe it quantum mechanically.
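The quantizability criterion just summarized is easy to automate. The sketch below (our own construction; the function name and data layout are ours, not the paper's online tool) checks the node flux representation by flagging nodes with no attached capacitor, for which the conjugate charge $\mathbbm{q}_n=\partial\mathcal{L}/\partial\dot\phi_n$ would vanish identically; the loop charge representation can be checked analogously by flagging loops with no inductor:

```python
# Sketch (ours, not the paper's tool): in the node flux representation,
# a circuit is quantizable with the standard method only if every
# non-ground node has at least one capacitor attached, i.e. its
# conjugate charge is not identically zero.
# Components are (type, node_a, node_b) tuples; node 1 is ground.

def passive_nodes(components, nodes, ground=1):
    """Return nodes whose conjugate charge vanishes (no attached capacitor)."""
    has_cap = {n: False for n in nodes if n != ground}
    for kind, a, b in components:
        if kind == "C":
            for n in (a, b):
                if n in has_cap:
                    has_cap[n] = True
    return sorted(n for n, ok in has_cap.items() if not ok)

# The four-component example circuit: C1, C2 from node 2 to ground,
# L3 between nodes 2 and 3, L4 from node 3 to ground.
circuit = [("C", 2, 1), ("C", 2, 1), ("L", 2, 3), ("L", 3, 1)]
print(passive_nodes(circuit, {1, 2, 3}))  # -> [3]: node 3 is passive
```

Running the check on the example circuit correctly flags node 3 as passive.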
\section{Discarding assumptions of the lumped element model} \label{assum}
In the classical world of circuits, it is already well known that the lumped element model is nothing more than an approximation. To better understand why we are having issues quantizing some lumped element circuits, let us review the three assumptions of the lumped element model to see which may need to be discarded in order to quantize all lumped element circuits.
The first assumption of the lumped element model is that the length scale of the circuit $r$ is smaller than the wavelength of the radiation,
\begin{equation}
r \ll \lambda.
\end{equation}
In typical quantum circuits, the sizes of the circuit elements are on the micrometer scale. Inductances are at the nano-Henry scale, while capacitances are at the pico-Farad scale. This leads to GHz scale frequencies; $f=\omega/2\pi=1/(2\pi\sqrt{LC})\approx 5$ GHz. These microwaves have wavelengths in the centimeter range, which is indeed orders of magnitude larger than the length scale of the circuit. Thus there does not appear to be an issue with this assumption.
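These order-of-magnitude claims are easy to verify. A quick numerical check (with representative values of 1 nH and 1 pF, chosen by us for illustration):

```python
import math

# Order-of-magnitude check (illustrative values): nH-scale inductance
# and pF-scale capacitance give GHz frequencies and centimeter-scale
# wavelengths, far larger than the micrometer-scale circuit.
L = 1e-9            # 1 nH
C = 1e-12           # 1 pF
c0 = 2.998e8        # speed of light, m/s

f = 1.0 / (2 * math.pi * math.sqrt(L * C))   # resonance frequency, Hz
lam = c0 / f                                  # free-space wavelength, m

assert 1e9 < f < 1e10          # ~5 GHz
assert lam > 1e-2              # centimeters >> micrometers
```

This gives $f \approx 5$ GHz and a wavelength of about 6 cm, confirming that the first assumption is safe.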
The second assumption of the lumped element model is that outside of the conducting wires, the magnetic flux $\Phi_B=\vec{B} \cdot \vec{A}$ can be taken to be constant,
\begin{equation}
\label{assumf} \frac{\partial \Phi_B }{\partial t} \simeq 0.
\end{equation}
This assumption is normally used to obtain Kirchhoff's voltage law. Here we will show the derivation of a modified form of Kirchhoff's voltage law by taking this term to be small, but not necessarily zero. The derivation starts with the integral form of the Maxwell-Faraday equation,
\begin{equation}
\label{mxf}\oint_{\partial S} \vec{E} \cdot d\vec{r} = \iint_S \frac{\partial \vec{B} }{\partial t} \cdot d\vec{A} = \frac{ \partial {\Phi}_l}{\partial t},
\end{equation}
where $S$ is a surface bounded by the contour $\partial S$, with the contour $\partial S$ chosen to run around a loop in the circuit and the orientation of $d\vec{A}$ chosen so that the usual minus sign of Faraday's law is absorbed. Thus ${\Phi}_l(t)$ is the magnetic flux through the loop. (Note that unlike the inductive loop flux $\mathcal{\Phi}$ defined in Eq. \ref{conjl}, which is the flux in the inductors that the loop of wire passes through, ${\Phi}_l(t)$ is the magnetic flux going through the surface $S$.)
Since the contour $\partial S$ goes around a loop in the circuit, it must pass through at least two components. The voltage across each component around the loop can be defined as,
\begin{equation}
V_{c}=\int_{c\in \partial S}\vec{E} \cdot d\vec{r} .
\end{equation}
Substituting this into Eq. \ref{mxf}, we obtain Kirchhoff's voltage law,
\begin{equation}
\label{kvl} \sum_{c\ around \ l} V_{c} = \frac{ \partial {\Phi}_l}{\partial t} \simeq 0,
\end{equation}
where the r.h.s. is normally taken to be identically zero in the lumped element model. As we explained in Section \ref{classical}, we prefer to work with charge and flux variables rather than current and voltage variables because they form conjugate pairs. We can integrate both sides of Eq. \ref{kvl} to obtain the following `flux law'.
\begin{equation}
\label{mkvl} \sum_{c\ around \ l} \phi_{c} = {\Phi}_l.
\end{equation}
The lumped element model (specifically Kirchhoff's voltage law) assumes that ${\Phi}_l$ is a constant. If we discard this assumption, then ${\Phi}_l$ becomes a time varying value.
The third assumption of the lumped element model is that inside the conducting wires, the charge $q$ is constant,
\begin{equation}
\label{assumq} \frac{\partial q }{\partial t}\simeq 0
\end{equation}
This assumption is normally used to obtain Kirchhoff's current law. Here we will show the derivation of a modified form of Kirchhoff's current law by taking this term to be small, but not necessarily zero. The derivation starts with the charge continuity equation (which is itself derived from Ampere's law),
\begin{equation}
\label{cont} \nabla \cdot \vec{J} =-\frac{\partial \rho }{\partial t},
\end{equation}
where $\rho$ is the charge density and $\vec{J}$ is the current density. Let us choose a small volume $V$ that completely encloses a node in the circuit (but does not enclose any part of any component) with bounding surface $\partial V$,
\begin{equation}
\label{vol2} \iiint_V \nabla \cdot \vec{J} \ dV =-\iiint_V \frac{\partial \rho }{\partial t} dV.
\end{equation}
Now carry out the volume integral on the right hand side of Eq. \ref{vol2}, and apply the divergence theorem to the left hand side, obtaining
\begin{eqnarray}
\label{cont2} \oiint_{\partial V} \vec{J} \cdot d\vec{A} =-\frac{\partial q_n(t) }{\partial t},
\end{eqnarray}
where $q_n(t)$ is the charge at the node. (Note that unlike the capacitive node charge $\mathbbm{q}$ defined in Eq. \ref{conjn}, which is the charge on the capacitors attached to the node, $q_n(t)$ is the charge collected at the node itself.) Since the surface $\partial V$ encloses a node in the circuit, it must cut through at least two conductors that connect the node to its attached components. The current flowing into the node through each component can be defined as,
\begin{equation}
i_{c}=-\int_{c\in \partial V} \vec{J} \cdot d\vec{A} ,
\end{equation}
where the minus sign appears because $d\vec{A}$ points out of the volume while $i_c$ is defined flowing into the node.
Substituting this into Eq. \ref{cont2}, we obtain the usual form of Kirchhoff's current law,
\begin{equation}
\label{kcl} \sum_{c \ attached \ n} i_{c} = \frac{\partial q_n }{\partial t} \simeq 0,
\end{equation}
where the r.h.s. is normally taken to be identically zero in the lumped element model. As we described in the last Section, we prefer to work with charge and flux variables rather than current and voltage variables because they form conjugate pairs. We can integrate both sides of Eq. \ref{kcl} to obtain the following `charge law'.
\begin{equation}
\label{mkcl} \sum_{c \ attached \ n} Q_{c} = q_n .
\end{equation}
The lumped element model (specifically Kirchhoff's current law) assumes that $q_n$ is a constant. If we discard this assumption, then $q_n$ becomes a time varying value.
In this section, we have discussed the second and third assumptions of the lumped element model, which are used in the classical lumped element model to derive Kirchhoff's voltage and current laws. In the quantum lumped element model, where we deal with conjugate flux and charge variables instead of voltage and current variables, these assumptions amount to taking the flux through a loop and the charge at a node to be constants. If we discard these assumptions, then the flux through a loop and the charge at a node must be treated as variables, even though they are very small. In the next Section, we show how they can be included in the Lagrangian.
\section{Geometric components} \label{geom}
Now that we have reviewed the assumptions of the lumped element model, let us return to our unquantizable circuit. In Section \ref{quant}, we were unable to quantize a circuit containing a loop which does not have any inductive component around it. However, \emph{any} loop of wire has a self-inductance associated with it, which is a function only of the geometry of the loop. This is often referred to as a `parasitic' inductance, terminology that refers to the assumption that these inductances are so small that they are normally neglected in the lumped element model. Indeed, loops with sizes on the order of micrometers will have self-inductances at the pico-Henry scale, many orders of magnitude smaller than the components in these circuits which are designed to be inductive.
But let us consider the fact that an inductor is a component specifically designed to enhance the natural ability of free space to store energy in a magnetic field in response to a changing electrical current. As such, it is not so much that the self-inductances are particularly small; rather, it is that the inductances of the components designed to be inductive are particularly large.
Similarly, in Section \ref{quant}, we were unable to quantize a circuit containing a node which does not have any capacitor attached to it. However, \emph{any} two electrical conductors have a capacitance between them, which is a function only of the relative geometry of the conductors. However, these are normally so small that they can safely be neglected. Nodes on the opposite sides of a circuit loop with sizes on the order of micrometers will have a capacitance between them of only femto or even atto-Farads.
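Both scales can be checked with standard textbook estimates. The sketch below uses the external self-inductance of a circular wire loop, $L \approx \mu_0 r(\ln(8r/a)-2)$, and a parallel-plate estimate for the capacitance; the geometries and dimensions are illustrative choices of ours, not values from the paper:

```python
import math

# Rough estimates (our illustrative numbers) of the 'geometric'
# components for a micrometer-scale circuit, using standard
# textbook formulas.
mu0 = 4 * math.pi * 1e-7       # permeability of free space, H/m
eps0 = 8.854e-12               # permittivity of free space, F/m

# Circular loop of radius r made of wire of radius a:
# external self-inductance L ~ mu0 * r * (ln(8 r / a) - 2)
r, a = 10e-6, 1e-6
L_geo = mu0 * r * (math.log(8 * r / a) - 2)

# Parallel-plate-style estimate for two 1 um x 1 um conductors
# separated by 10 um: C ~ eps0 * A / d
A, d = 1e-12, 10e-6
C_geo = eps0 * A / d

assert 1e-12 < L_geo < 1e-10    # tens of picohenries
assert 1e-19 < C_geo < 1e-17    # sub-femtofarad (attofarad scale)
```

These give roughly 30 pH and 0.9 aF, consistent with the scales quoted above: many orders of magnitude below the engineered nH inductors and pF capacitors.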
Since both of these so-called `parasitic' components depend only on the geometry of the circuit, we refer to them here as `geometric components' instead. To include these geometric components in the description of the circuit, we must do two things. The first thing is quite straightforward; we must include the energy stored in these geometric components in the Lagrangian. For the flux representation, after adding the geometric components to the Lagrangian in Eq. \ref{lag1n}, we obtain
\begin{eqnarray}
\nonumber \mathcal{L}_n&=&\frac{C_1 \dot{\phi_2}^2}{2}+\frac{C_2 \dot{\phi_2}^2}{2} + \frac{C_{g32} (\dot{\phi_3}-\dot{\phi_2})^2}{2} + \frac{C_{g21} \dot{\phi_2}^2}{2} + \frac{C_{g31} \dot{\phi_3}^2}{2}- \\
&&\frac{ (\phi_3-\phi_2)^2}{2 L_3}-\frac{ \phi_3^2}{2 L_4} - \frac{ \Phi_1^2}{2 L_{g1}} - \frac{ {\Phi}_2^2}{2 L_{g2}}.
\end{eqnarray}
Now let us check what the conjugate variables, $ \mathbbm{q}_n=\partial \mathcal{L} / \partial \dot{\phi_n}$, have become.
\begin{subequations}
\begin{eqnarray}
\mathbbm{q}_2&=& (C_1+C_2+C_{g21}+C_{g32}) \dot\phi_2 - C_{g32} \dot\phi_3,\\
\mathbbm{q}_3 &=& (C_{g32}+C_{g31}) \dot\phi_3 - C_{g32} \dot\phi_2, \\
\mathbbm{Q}_1 &=& \frac{\partial \mathcal{L} }{\partial \dot{{\Phi}}_1}=0, \\
\mathbbm{Q}_2 &=& \frac{\partial \mathcal{L} }{\partial \dot{{\Phi}}_2}=0.
\end{eqnarray}
\end{subequations}
Now each node variable has a conjugate variable defined, and so they may be quantized. But we have added two new variables, which give the flux through the loops. These do not have conjugate variables defined. So we appear to have simply pushed the problem back a stage.
One way that we could deal with this problem is to not treat the loop fluxes as variables, but instead treat them as constants, as other authors do~\citep{devoret}. This is justifiable as they are very small - indeed they are usually treated as constants to derive Kirchhoff's voltage and current laws. Below in Section \ref{results} we show simulations that use this method.
However, it is also informative to see what we can learn if we treat them as variables. We have not yet explicitly considered Kirchhoff's laws, either here or in Section \ref{back} where we first wrote the Lagrangian for the circuit. Recall the flux law, Eq. \ref{mkvl}, which we derived in Section \ref{assum} from Maxwell's equations
\begin{eqnarray}
\nonumber \sum_{c\ around \ l} \phi_{c} = {\Phi}_l.
\end{eqnarray}
Though we did not explicitly explain this, in Section \ref{back} we simply took ${\Phi}_l(t)$ to always be zero (making the same assumption used to derive Kirchhoff's voltage law) by defining the flux across each component in Eqs. \ref{fc1}-\ref{fl4} to be the difference between the flux at the two nodes on either side of the component. For instance, consider the second loop, which goes through components $C_1$, $L_3$, and $L_4$. Using Eqs. \ref{fc1}-\ref{fl4}, we obtain,
\begin{eqnarray}
\nonumber \sum_{c\ around \ l=2} \phi_{c} &=& \phi_{C1} + \phi_{L3} - \phi_{L4} \\
\label{kir2}&=& (\phi_2)+(\phi_3-\phi_2)-(\phi_3) = 0
\end{eqnarray}
But by including geometric inductances, the fluxes through the loops have become non-zero, time-varying quantities. Eqs. \ref{fc1}-\ref{fl4} must be redefined to include these new variables using our flux law. Here, we will use the following definitions,
\begin{subequations}
\begin{eqnarray}
\phi_{C1}&=& \phi_2 \\
\phi_{C2}&=& \phi_2 -{\Phi}_1 \\
\label{fluxl3} \phi_{L3}&=& \phi_3-\phi_2 \\
\label{fluxl4} \phi_{L4}&=& \phi_3 +{\Phi}_2
\end{eqnarray}
\end{subequations}
With these new definitions, Eq. \ref{kir2} becomes,
\begin{eqnarray}
\nonumber \sum_{c\ around \ l=2} \phi_{c} = (\phi_2)+(\phi_3-\phi_2)-(\phi_3+{\Phi}_2) = -{\Phi}_2,
\end{eqnarray}
thus fulfilling Kirchhoff's flux law (with the negative sign reflecting the fact that ${\Phi}_2$ has been defined in Figure \ref{unquan}(a) with a left-handed circulation). The reader is invited to check that the flux law is also fulfilled for the first loop in the circuit. \citet{devoret} gives a more in-depth discussion of how these definitions can be determined for any arbitrary circuit by choosing a spanning tree.
Our Lagrangian becomes,
\begin{eqnarray}
\nonumber \mathcal{L}_n&=&\frac{C_1 \dot{\phi_2}^2}{2}+\frac{C_2 (\dot \phi_2 -\dot{\Phi}_1)^2}{2} + \frac{C_{g32} (\dot{\phi_3}-\dot{\phi_2})^2}{2} + \frac{C_{g21} \dot{\phi_2}^2}{2} + \frac{C_{g31}(\dot \phi_3 +\dot{\Phi}_2)^2}{2}- \\
&&\frac{ (\phi_3-\phi_2)^2}{2 L_3}-\frac{ (\phi_3 +{\Phi}_2)^2}{2 L_4} - \frac{ \Phi_1^2}{2 L_{g1}} - \frac{ {\Phi}_2^2}{2 L_{g2}},
\end{eqnarray}
and the conjugate variables are now all defined,
\begin{subequations}
\begin{eqnarray}
\mathbbm{q}_2&=& (C_1+C_2+C_{g21}+C_{g32}) \dot\phi_2 - C_{g32} \dot\phi_3 -C_2 \dot{\Phi}_1,\\
\mathbbm{q}_3 &=& (C_{g32}+C_{g31}) \dot\phi_3 - C_{g32} \dot\phi_2 + C_{g31} \dot\Phi_2, \\
\mathbbm{Q}_1 &=& C_2 \dot{\Phi}_1 - C_2 \dot\phi_2 ,\\
\mathbbm{Q}_2 &=& C_{g31} \dot{\Phi}_2 + C_{g31} \dot\phi_3.
\end{eqnarray}
\end{subequations}
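The conjugate variables above follow from differentiating the kinetic part of the Lagrangian (the potential terms carry no velocities and drop out). As a quick numerical cross-check, one can compare finite-difference derivatives against the closed-form expressions. The component values below are illustrative placeholders, since the geometric capacitances $C_{g21}$ and $C_{g31}$ are not specified in the text:

```python
# Cross-check q_n = dL/d(phi_n_dot) by finite differences of the
# kinetic part of the Lagrangian. Component values are placeholders.
C1, C2, Cg21, Cg32, Cg31 = 2e-12, 4e-12, 1e-16, 8.9e-20, 1e-16

def kinetic(p2d, p3d, P1d, P2d):
    """Kinetic energy terms, including the geometric capacitors."""
    return (C1 * p2d**2 / 2 + C2 * (p2d - P1d)**2 / 2
            + Cg32 * (p3d - p2d)**2 / 2 + Cg21 * p2d**2 / 2
            + Cg31 * (p3d + P2d)**2 / 2)

def ddx(f, x, h=1e-3):
    # central difference; exact (up to rounding) for quadratic functions
    return (f(x + h) - f(x - h)) / (2 * h)

p2d, p3d, P1d, P2d = 3.0, -1.5, 0.7, 2.2   # arbitrary velocity point

q2 = ddx(lambda v: kinetic(v, p3d, P1d, P2d), p2d)
q3 = ddx(lambda v: kinetic(p2d, v, P1d, P2d), p3d)
Q1 = ddx(lambda v: kinetic(p2d, p3d, v, P2d), P1d)
Q2 = ddx(lambda v: kinetic(p2d, p3d, P1d, v), P2d)

# closed-form conjugate variables, as derived by hand
q2_a = (C1 + C2 + Cg21 + Cg32) * p2d - Cg32 * p3d - C2 * P1d
q3_a = (Cg32 + Cg31) * p3d - Cg32 * p2d + Cg31 * P2d
Q1_a = C2 * P1d - C2 * p2d
Q2_a = Cg31 * P2d + Cg31 * p3d
```

Agreement between the numerical and closed-form values confirms that each conjugate variable collects exactly the velocity terms of the capacitors attached to its node or loop.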
Now that the conjugate variables have been defined, the circuit can be quantized.
Note that we have not included here the effect of parasitic mutual inductances between different loops in the circuit, a topic which the reader is invited to explore by defining an inductance matrix. The reader may also be interested to find the solution for the circuit in the loop charge representation by including the geometric components in the Lagrangian. (Hint: a condition defining the net charge of the circuit must be used to solve for the system in the loop charge representation.)
\section{Results} \label{results}
Here we show solutions for both circuits of Figure \ref{topo} using the following values for the components: $C_1$=2 pico-Farads (pF), $C_2$=4 pF, $L_3$=1 nano-Henry (nH), $L_4$=3 nH. The initial conditions are 2 milli-Volts (mV) across each capacitor in the circuit and 0 nano-Amperes (nA) through each inductor in the circuit.
In the previous Section, we showed that any circuit can be fully quantized by treating the loop fluxes (node charges) as variables in addition to the node fluxes (loop charges). But we also showed that taking the loop fluxes (node charges) as constants is equivalent to using Kirchhoff's voltage and current laws. In the simulations shown here, we will use this latter method. This topic will be discussed further in the next Section.
\begin{table}
\caption{Solution parameters for the circuit shown in Fig. \ref{topo}(a), which has no passive variables, in both the node flux and loop charge representations.}
\begin{center}
\begin{tabular}{|c|c c c c||}
\hline
representation&freq. (GHz) & variable & freq. (GHz) & variable \\ [0.5ex]
\hline\hline
node & 3.56 & $\phi_2$ & 2.51 & $\phi_3$ \\
loop & 2.51 & $Q_1$ & 3.56 & $Q_2$ \\
\hline
\end{tabular}
\end{center}
\label{table1}
\end{table}
Figure \ref{qres} shows the results of simulations for the lumped element circuit shown in Fig. \ref{topo}(a) and Table \ref{table1} shows the parameters of the solution. This is the circuit with no passive nodes or loops - there is a capacitor attached to each node and an inductor around each loop. We invite the reader to derive the Hamiltonian for this circuit and compare it to our solution. Table \ref{table1} shows that the solution describes oscillators with frequencies of 2.51 and 3.56 GHz. These frequencies do not depend on whether the node flux or loop current representation is used.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.8]{Fig_results_quan.pdf}
\caption{Quantum simulations of the circuit shown in Fig. \ref{topo}(a), which has no passive variables. Solutions were calculated by time evolving the Hamiltonian calculated in both the node flux and loop charge representations.}
\label{qres}
\end{figure}
However, these two oscillators are strongly coupled together. As such, the solutions shown in Figure \ref{qres} are not just simple oscillators. The voltage across $C_2$ has a complex shape with a frequency of approximately 1 GHz, which is the difference between the frequency of the two oscillators. The voltage across $C_1$ also has a complex shape, with a frequency larger than the frequency of either oscillator.
Figure \ref{unqres} shows the results of simulations for the lumped element circuit shown in Fig. \ref{topo}(b) and Table \ref{table2} shows the parameters of the solution. This is the circuit that has a passive node and a passive loop, and so cannot be quantized unless geometric components are included in the solution.
For comparison, a simulation for the `reduced' circuit shown in Fig. \ref{unquan}(b) is also shown. This reduced version of the circuit has only one capacitor with capacitance 6 pF - combining the two parallel capacitors in Fig. \ref{unquan}(a). Likewise, there is only one inductor with inductance 4 nH - combining the two series inductors in Fig. \ref{unquan}(a). Referring to Eq. \ref{omega}, we find that this produces a frequency of 6.46 rad/ns or 1.03 GHz. The circuit solution plotted in Figure \ref{unqres} does indeed show a simple oscillator with a frequency of 1.03 GHz. However, the solution for this `reduced' circuit loses information about the individual components that were combined to create the simplified circuit.
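The reduced-circuit frequency can be reproduced in a few lines, assuming Eq. \ref{omega} is the standard LC resonance $\omega = 1/\sqrt{LC}$:

```python
import math

# Reduced circuit: the two parallel capacitors combine to C = C1 + C2
# and the two series inductors to L = L3 + L4.
C = 2e-12 + 4e-12   # 6 pF
L = 1e-9 + 3e-9     # 4 nH

omega = 1.0 / math.sqrt(L * C)   # rad/s
f = omega / (2 * math.pi)        # Hz

print(omega * 1e-9)  # ~6.46 rad/ns
print(f * 1e-9)      # ~1.03 GHz
```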
\begin{table}
\caption{Solution parameters for the circuit shown in Fig. \ref{topo}(b), which contains passive variables. Solutions of the full circuit in both the node flux and loop charge representations are compared to the solution for the `reduced' circuit shown in Fig. \ref{unquan}(b).}
\begin{center}
\begin{tabular}{|c|c c c c||}
\hline
representation&freq. (GHz) & variable & freq. (GHz) & variable \\ [0.5ex]
\hline\hline
node, reduced & 1.03 & $\phi_2$ & & \\
node & 2.05 & $\phi_2$ & 18221 & $\phi_3$ \\
loop & 1.80 & $Q_1$ & 1378 & $Q_2$ \\
\hline
\end{tabular}
\end{center}
\label{table2}
\end{table}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.8]{Fig_results_unquan.pdf}
\caption{Quantum simulations of the circuit shown in Fig. \ref{topo}(b), which contains passive variables. Simulations of the full circuit in both the node flux and loop charge representations are compared to the solution for the `reduced' circuit shown in Fig. \ref{unquan}(b).}
\label{unqres}
\end{figure}
Figure \ref{unqres} also shows solutions calculated using the methods presented here to quantize the full circuit. A geometric capacitance of $C_{g23}$=8.9$\times 10^{-8}$ pF and a geometric self-inductance of $L_{g1}$=1.0$\times 10^{-6}$ nH were used in these solutions. These are reasonable values for nodes and loops with length scales on the order of a hundred micrometers composed of wires several micrometers wide. Table \ref{table2} shows that the solutions for the full circuit have oscillator frequencies very different from the frequency of the reduced circuit. Nevertheless, with the correct coupling between these different oscillators, the solutions for the voltage across $C_1$ shown in Figure \ref{unqres}(a) are nearly identical.
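The scale of the passive-node frequency in Table \ref{table2} can be estimated from these parasitic values alone. The sketch below assumes the passive node resonates against the parallel combination of $L_3$ and $L_4$ - a simplification that ignores the coupling included in the full solution - and recovers the tens-of-THz scale:

```python
import math

# Rough scale of the passive-node frequency: the tiny geometric capacitance
# at node 3 resonates against the inductance attached to it. Taking L3
# parallel L4 as the effective inductance is an order-of-magnitude
# assumption; the full solution includes coupling to the other oscillators.
L3, L4 = 1e-9, 3e-9
Cg = 8.9e-8 * 1e-12          # 8.9e-8 pF in Farads

L_eff = L3 * L4 / (L3 + L4)  # 0.75 nH
f = 1.0 / (2 * math.pi * math.sqrt(L_eff * Cg))

print(f * 1e-9)  # ~2e4 GHz, i.e. tens of THz -- the same scale as Table 2
```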
However, the inclusion of the geometric components does have an important impact on the solution for a circuit with passive nodes or loops. Figure \ref{unqres}(b) shows the voltage across each of the inductors in the circuit. The net voltage across both inductors is a simple oscillator with a frequency of approximately 1 GHz. But the voltage across each individual inductor shows high frequency oscillations. These originate from tiny fluctuations in the charge of node 3. Since the capacitance attached to this node is very small and $V=Q/C$, these tiny charge fluctuations cause visible high frequency noise in the voltage.
Similarly, Figure \ref{unqres}(c) shows the current through each of the capacitors in the circuit. The net current through both capacitors is a simple oscillator with a frequency of approximately 1 GHz. But the current through each individual capacitor shows high frequency oscillations. These originate from tiny fluctuations in the flux through loop 1. Since the self-inductance of this loop is tiny and $I=\phi/L$, these small flux fluctuations cause visible high frequency noise in the current.
Here it is important to point out that the simulations in different representations \emph{do not produce the same result} for the circuit with passive nodes and loops. The large-scale behavior is the same, but the high frequency noise depends not only on the representation (node or loop), but also on the spanning tree used (though we have not shown simulations with different trees here - the reader is invited to test some themselves.) When the node representation is used, high frequency noise appears only in the voltage at node 3. When the loop representation is used, high frequency noise appears only in the current around loop 1.
\section{Discussion} \label{summary}
In this paper we have demonstrated something that everyone who works with quantum circuits already knows through experience - that the geometry of the circuit can have a significant impact on noise in the circuit, making it difficult to predict accurately through simulation exactly how the real circuit will respond. This occurs through the same mechanisms that create noise in classical circuits - the parasitic components that arise from the circuit layout and the approximations contained within Kirchhoff's laws.
However, for students who come from a background in classical circuits and are just learning quantum mechanics, it is quite informative to see how this noise appears in the mathematics of the Hamiltonian. Because the values of the geometric components tend to be orders of magnitude smaller than the lumped element components and because frequency is inversely proportional to these small component values, the frequency of passive nodes and loops tends to be much larger than the frequencies of the active nodes and loops. A similar result is found if the full solution is considered, with loop fluxes (node charges) quantized rather than treated as constants. The loop fluxes (node charges) also have much larger frequencies than the node fluxes (loop charges).
High frequency corresponds to high energy. For instance, the frequencies of the passive variables in Table \ref{table2} are in the THz regime, corresponding to temperatures from dozens to hundreds of Kelvin. In contrast, quantum circuits are generally operated at sub-Kelvin temperatures. Thus the population of these high energy states will generally be quite small.
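The temperature equivalents follow from $T = hf/k_B$; a quick check for the two passive-variable frequencies of Table \ref{table2}:

```python
# Temperature equivalent T = h f / k_B of the passive-variable
# frequencies listed in Table 2 (1378 GHz and 18221 GHz).
h = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23    # Boltzmann constant, J/K

def temp(f_ghz):
    return h * f_ghz * 1e9 / kB

print(temp(1378))    # ~66 K
print(temp(18221))   # ~874 K
```

Both values sit far above the sub-Kelvin operating temperatures of quantum circuits, so the thermal population of these modes is negligible.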
\begin{figure}[ht!]
\centering
\includegraphics[scale=1.0]{Fig_realInductor.pdf}
\caption{Layout of a superconducting inductor drawn with KLayout.}
\label{real}
\end{figure}
However, frequencies quite as high as those we have found here are unphysical. Consider the realistic `S'-shaped inductor layout shown in Figure \ref{real}. Instead of treating this inductor as a single lumped element, it would be more accurate to treat it as many inductors in series. Nodes in the middle of this series of inductors are passive, unless we consider the capacitances between them. Each curve of the `S' does have a small capacitance with the curve above and below it. This is modeled as a small capacitor in parallel with each inductor. Though these capacitors are small, they are not quite so small as the capacitor that goes between the node at the very top of the inductor and the node at the very bottom of the inductor. Thus high frequencies will still appear in the solution, but they will not be at quite such high frequencies as the THz scale.
What is the physical source of these high frequency components? Consider the depiction of the electric and magnetic fields of an LC circuit shown in Figure \ref{physical}. When there is a current in the circuit, as depicted on the right side of the Figure, then energy is stored within a strong magnetic field inside the inductor. A current in a wire loop always creates a magnetic field inside that loop. An inductor is constructed with many overlapping loops in order to strongly concentrate the magnetic field inside of it. But no matter how perfect the inductor is made, the magnetic field can never be entirely confined within the inductor coil. Magnetic flux will always appear within the loops of the circuit itself.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.8]{Fig_Phys.pdf}
\caption{Depiction of the electric and magnetic fields in and around an LC circuit during a single period of the oscillation between the lumped element capacitor and the lumped element inductor.}
\label{physical}
\end{figure}
Moreover, this magnetic flux within the circuit loop cannot simply be considered to be leakage from the inductor. The inductor coil depicted in the Figure creates a magnetic field that lies parallel to the page, while the circuit loop creates a magnetic field that lies perpendicular to the page.
When energy is stored within a strong electric field inside the capacitor, as depicted on the left side of the Figure, this means that there is a voltage between the two nodes of the circuit. Voltage is defined as the line integral of the electric field between the two nodes. If the path of that line integral goes through the wires and through the capacitor itself, then the voltage is acquired through a strong electric field integrated over the short path between the two plates of the capacitor. But if the path goes through the center of the circuit loop instead, then the voltage will only be nonzero if the electric field outside of the capacitor is not zero. The voltage is instead acquired through a weak electric field integrated over the path between the nodes, which is much longer than the distance between the capacitor plates. No matter how perfect the capacitor is made, the electric field can never be entirely confined within its plates. There will always be small electric fields between the nodes of the circuit.
At first thought, this stray electric field could simply be considered leakage from the capacitor. However, our solutions show that these stray fields oscillate with a frequency much larger than the frequency of the designed components and thus with a frequency much larger than the variation of a leakage field from the capacitor. Stray electric fields could also arise from the collection of small charges at the nodes. Normally it is assumed in the derivation of Kirchhoff's laws that the net charge inside a conductor is always exactly zero, and so these node charges must be zero. But we discarded this assumption when deriving Eq. \ref{kcl}.
The reasoning behind the assumption that node charges must be zero is that the electrons repel one another and so tend to end up as far away from one another as they can get, such as on the plates of the lumped element capacitors. But this reasoning only applies at steady state. The electrons move so fast that this steady state is usually achieved very quickly, much more quickly than the period of the oscillatory motions between the designed components. But it is precisely in the case of quick changes between different steady states that high frequency components come into play when a Fourier expansion of the time changing field is considered.
Therefore, small electric and magnetic fields appear outside the components, within the loops of the circuit itself. These are not just stray fields that leak from the components, but a system of small loop flux and node charge variables that are normally neglected by Kirchhoff's laws. The high frequency oscillations arise as the small electric and magnetic fields within the circuit loops change and interact with one another and with the large low frequency oscillations within the components of the circuit. They are a fundamental source of noise that arises from the geometry of the circuit itself.
\section{Introduction}
\label{sec:introduction}
The development of better information retrieval systems is driven by how improvements are measured. The design of test collections and evaluation metrics that started with the Cranfield paradigm in the early 1960s allowed researchers to analyze the quality of different retrieval models in an automated and cost-effective way. Since then, many
evaluation metrics have been proposed to measure the \emph{effectiveness} of information retrieval systems~\cite{sparck1976information, voorhees2005trec, scholer2016information}.
Selecting a suitable set of metrics for a specific task is challenging.
Comparing metrics empirically against user satisfaction or search effectiveness
requires data that is often unavailable. Moreover, findings may be biased by the subjects, retrieval systems, or other experimental factors.
An alternative consists of modeling theoretically the desirable properties of retrieval systems, as well as the abstraction of the expected users' behavior when performing a specific task. For instance,
a metric that looks at how early the relevant document is retrieved in the ranking -- such as Reciprocal Rank~\cite{Voorhees99thetrec-8} -- would be an appropriate metric to analyze the performance of systems on a single-item navigational task.
However, it is often challenging to come up with the proper evaluation tools for more complex search scenarios, as is the case of search result diversification~\cite{santos2015search}. In this context, the ranking of retrieved documents must be optimized in such a way that diverse query aspects are captured in the first positions. The challenge is that the evaluation of system outputs is affected by multiple variables, such as the depth of ranking positions, the number of documents in the ranking related to the same query aspect, relevance grades, the diversity of query aspects captured by single documents, or the user's effort when inspecting the ranking.
Axiomatic analysis has been shown to be an effective methodology to better understand the fundamentals of evaluation metrics~\cite{van1974foundation,busin2013axiometrics,Amigo_2013, Ferrante:2015}. In the context of evaluation, axiomatic approaches consist of a verifiable set of formal constraints that reflect which quality factors are captured by metrics, facilitating metric selection in specific scenarios.
To our knowledge, there is no comprehensive axiomatic analysis of the behavior of diversity metrics in the literature. This paper provides a set of ten formal constraints that focus on both retrieval and diversity quality dimensions.
We found that every constraint is satisfied by at least one metric. However, none of the existing diversity metrics satisfies all the proposed constraints simultaneously.
To fill this gap, we define the metric \emph{Rank-Biased Utility (RBU)}, which integrates components from different metrics in order to satisfy every formal constraint. RBU is an adaptation of the well-known Rank-Biased Precision metric~\cite{RBP} that incorporates redundancy and the user's effort associated with the inspection of documents in the ranking. Our experiments using standard diversity test collections validate our axiomatic analysis. Results show that,
satisfying every constraint with a single metric leads to \textit{unanimous} evaluation decisions when compared against other existing metrics,
i.e., RBU captures quality criteria which are reflected by different metrics. Therefore, this metric offers a solution in the absence of knowledge about the specific characteristic of a diversity-oriented retrieval scenario. Moreover, the theoretical framework presented in this paper helps to decide which metric should be used.
The paper is organized as follows. Section~\ref{sec:relatedwork} describes related work on evaluation of evaluation metrics. Section~\ref{sec:constraints} introduces the formal constraints that we propose to analyze relevance and diversity properties of metrics. Section~\ref{sec:metric:analysis} provides a comprehensive analysis of existing diversity metrics according to these constraints and Section~\ref{sec:rbu} defines the proposed RBU metric. Section~\ref{sec:metaevaluation} details the results of our experiments. Finally, Section~\ref{sec:conclusions} concludes the work.
\section{Related Work}
\label{sec:relatedwork}
There is no consensus on meta-evaluation criteria for search result diversification. Some works inherit meta-evaluation criteria from ad-hoc metrics, such as \emph{sensitivity} to system differences~\cite{sakai2011evaluating, Luo-13, Sakai-10, Golbus-13}. This methodology, however, does not indicate to what extent metrics capture diversity properties. {\citet{Smucker-12}} studied the correspondence between metric scores and user effort when exploring document rankings. This methodology has the advantage of being realistic -- effort is calibrated from historical log data -- but only focuses on partial quality aspects.
Most work on diversity metrics is supported by descriptive analysis. In 2008, {\citet{Clarke-08}} meta-evaluated $\alpha$-nDCG by analyzing the effect of modifying the diversity parameter $\alpha$ under different datasets.
One year later, {\citet{Agrawal-09}} checked the \emph{intent-aware} scheme for diversification by studying the evaluation results of three search engines.
{\citet{Clarke-09}} proposed Novelty- and Rank-Biased Precision (NRBP), an extension of RBP~\cite{RBP} for diversification,
joining properties of the original RBP metric, $\alpha$-nDCG and intent-aware metrics.
In 2010, {\citet{Sakai-10}} compared their proposed approach to $\alpha$-nDCG and NRBP, in terms of metric agreement under different
parameters. The authors considered some meta-evaluation criteria such as interpretability, computability or capability to
accommodate graded relevance and score ranges.
Three years later, \citet{chandar2013preference} evaluated their approach by studying correlation with previous metrics while reflecting other ranking quality issues.
{\citet{Luo-13}} proposed the Cube Test metric. They studied the effect
of the metric parameters under synthetic system outputs, in the same manner as {\citet{Clarke-08}}. \citet{Tangsomboon-14} in 2014, and \citet{Yu-17} in 2017, supported their proposed metrics in terms of agreement and disagreement with previous metrics.
Not many works define a way of quantifying the suitability of metrics to capture diversity. An exception is the work by \citet{Golbus-13}
who defined \emph{Document Selection Sensitivity}. This meta-measure
reports to what extent metrics are sensitive to document rankings containing relevant documents but different grades of diversity. Within this line, we define in this work \emph{Metric Unanimity} (\MU), which quantifies to what extent a metric is sensitive to quality aspects captured by other existing metrics.
On the other hand, metrics have been successfully analyzed in terms of formal constraints in ad-hoc retrieval scenarios
\cite{Amigo_2013, moffat:2013, Ferrante:2015}. The axiomatic methodology
consists of identifying theoretical situations in which metrics should behave in a particular manner. This methodology
has several strengths: it is objective, independent of datasets, and it facilitates the interpretation of metrics. We found only a few initial works in the context of formal constraints for search result diversification. For instance, {\citet{Leelanupab-13}} reviewed the appropriateness of intent-aware metrics, stating an extreme particular situation in which ERR-IA does not behave as expected. In our work, we meta-evaluate existing metrics on the basis of ten constraints that formalize desirable properties for ranking and diversity effectiveness.
\section{Axiomatic Constraints}
\label{sec:constraints}
\subsection{Problem Formalization}
We formalize the output of a document retrieval system
as an ordered list of documents $\vec{d}=(d_1,\ldots,d_n)$ of length $n$, extracted from a collection of documents $\mathcal{D}$. In order to express formal constraints,
we use $\vec{d}_{i \leftrightarrow j}$ to denote the result of
swapping documents between positions $i$ and $j$. Likewise,
$\vec{d}_{d \leftrightarrow d'}$ denotes the result of replacing the document $d$
with the document $d'$ in the ranking $\vec{d}$.
For search result diversification, we consider a set of query aspects
$\mathcal{T}=\{t_1,\ldots,t_m\}$. For instance, users searching for a restaurant may be interested in the menu, the offers, opening times, etc. Each aspect has an associated \emph{weight} $w(t_j)$ and
the sum of all aspect weights adds up to $1$: $\sum_{j=1}^m w(t_j)=1$.
On the other hand, $r(d_i,t_j)\in[0\ldots 1]$ represents the graded \textit{relevance} of document $d_i$ to the aspect $t_j$. We assume the user's behavior follows the \emph{cascade model}, i.e., the user inspects the ranking sequentially from the top to the bottom, until either (i) the user's information needs get satisfied or (ii) the user stops looking (i.e., user's patience is exhausted). Following the same user model as the one used by Expected Reciprocal Rank~\cite{chapelle2009expected}, we consider \textit{relevance} as the suitability of the document to satisfy the user needs, which has a negative correspondence with the probability of exploring more documents. Finally, we use $Q(\vec{d})$ to denote the ranking quality score, i.e., the score given by applying an evaluation metric $Q$ to a given ranking $\vec{d}$.
Our axiomatic approach consists of a set of ten
formal constraints that evaluation metrics may satisfy. These constraints are grouped into two sets: \emph{relevance-oriented} and \emph{diversity-oriented}, that we describe below.
In the definition of the constraints, we may refer to the following conditions:
\emph{single aspect} $(|\mathcal{T}|=1)$;
\emph{balanced aspects} $(\forall t \in \mathcal{T} \ldotp w(t)=1/|\mathcal{T}|)$;
\emph{binary relevance} $(\forall t,d \ldotp r(d,t) \in \{0,r_c\})$;
\emph{no aspect overlap} $(r(d,t) > 0 \Rightarrow \forall t' \neq t \ldotp r(d,t') = 0)$;
and \emph{relevance contribution} $\left( r \left( d, t \right) \ll 1 \right)$. The last condition means that finding new relevant documents about the same topic is always effective. In other words, there is always room for new documents to fully satisfy the user needs.
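A minimal sketch of this formalization, with illustrative aspect names and relevance grades (none of these values come from the paper):

```python
# A minimal encoding of the formal setup: a ranking is a list of
# documents, each mapping aspects to graded relevance in [0, 1].
# Aspect names and grades here are illustrative only.

def swap(d, i, j):
    """The ranking d with positions i and j exchanged (1-indexed)."""
    d = list(d)
    d[i - 1], d[j - 1] = d[j - 1], d[i - 1]
    return d

# three aspects with balanced weights: w(t) = 1/|T|
aspects = ["menu", "offers", "opening_times"]
w = {t: 1.0 / len(aspects) for t in aspects}

# documents: aspect -> graded relevance
d1 = {"menu": 0.8}
d2 = {"offers": 0.5, "opening_times": 0.3}   # covers two aspects
d3 = {}                                       # non-relevant
ranking = [d1, d2, d3]

assert abs(sum(w.values()) - 1.0) < 1e-12     # weights add up to 1
# "no aspect overlap" holds for d1 and d3, but not for d2:
no_overlap = all(len([t for t in d if d[t] > 0]) <= 1 for d in (d1, d3))
print(no_overlap, swap(ranking, 1, 3)[0] is d3)
```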
\subsection{Relevance-Oriented Constraints}
In order to isolate relevance from diversity and redundancy, for these constraints we will assume
\emph{single aspect}
and \emph{relevance contribution}.
For the sake of legibility, we use the notation:
$r(d)=r(d,t)$. We also denote $d^{\mbox{\it rel}}$ and $d^{\neg
\mbox{\it rel}}$ as relevant and non-relevant documents, respectively. That is:
$\forall i\in 1..n \ldotp r(d_i^{\neg rel})=0$ and $r(d_i^{rel})=r_{c}$.
Under these assumptions, we import the five
constraints proposed by {\citet{Amigo_2013}} which capture previous axiomatic
properties~\cite{moffat:2013, Ferrante:2015}.
\begin{cons}
[Priority, Pri] Swapping items in concordance with
their relevance increases the ranking quality score. Being $k>0$:
\begin{equation}\label{eq:Pri}
r\left(d_{i+k}\right)>r\left(d_i\right)\Longrightarrow Q\left(\vec{d}_{i\leftrightarrow i+k}\right)>Q\left(\vec{d}\right)
\end{equation}
\end{cons}
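As an illustration, RBP~\cite{RBP} (which RBU extends) satisfies Pri, since its geometric position discount is strictly decreasing; a toy single-aspect check with illustrative relevance grades:

```python
# Check the Priority constraint for Rank-Biased Precision,
# Q(d) = (1 - p) * sum_i r(d_i) * p^(i-1).
p = 0.8

def rbp(rels):
    return (1 - p) * sum(r * p**i for i, r in enumerate(rels))

rels = [0.2, 0.0, 0.9, 0.4]       # toy single-aspect relevance grades
# swapping positions 1 and 3 moves the more relevant document earlier
swapped = [0.9, 0.0, 0.2, 0.4]

print(rbp(swapped) > rbp(rels))   # True: Pri is satisfied here
```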
The next constraint is based on the intuition that the
effect of relevance depends on the document ranking position.
This constraint is also referred to as \textit{top-heaviness}:
\begin{cons}
[Deepness, Deep] Correctly swapping contiguous items has more effect
in early ranking positions:
\begin{equation}\label{eq:Deep}
\resizebox{.9\hsize}{!}{%
$r(d_i)=r(d_j) < r(d_{i+1})=r(d_{j+1}) \Longrightarrow
Q\left(\vec{d}_{i\leftrightarrow i+1}\right) > Q\left(\vec{d}_{j\leftrightarrow j+1}\right)$
}
\end{equation}
where $i<j$.
\end{cons}
The next constraint reflects that the effort spent by the user to inspect
a long (deep) list of search results is limited. In other words, there is an
area of the ranking that may never get explored by the user:
\begin{cons}
[Deepness Threshold, DeepTh] Assuming binary relevance, there
exists a value $n$ large enough such that retrieving only one relevant document
at the top of the ranking is better than retrieving $n$ relevant documents
after $n$ non-relevant documents:
\begin{equation}\label{eq:DeepTh}
\exists n\in\mathbb{N}^+ \ldotp Q\left(d_1^{\mbox{rel}},\ldots\right) > Q\left(d_1^{\neg \mbox{rel}}, \ldots , d_n^{\neg \mbox{rel}} , d_1^{ \mbox{rel}}, \ldots , d_n^{ \mbox{rel}}\right)
\end{equation}
\end{cons}
On the other hand, we can assume that there exists a (short) ranking area which is
always explored by the user. In other words, at least a few documents are inspected by the user
with a minimum effort. This means that, at the top of the ranking, the amount of
captured relevant documents is more important than their relative rank positions.
\begin{cons}
[Closeness Threshold, CloseTh] Assuming binary relevance, there
exists a value $m$ small enough such that retrieving one relevant document
in the first position is worse than $m$ relevant documents after $m$ non-relevant documents:
\begin{equation}\label{eq:CloseTh}
\exists m\in\mathbb{N}^+ \ldotp Q\left(d_1^{\mbox{rel}}, \ldots\right) < Q \left(d_1^{\neg \mbox{rel}}, \ldots , d_m^{\neg \mbox{rel}}, d_1^{\mbox{rel}}, \ldots , d_m^{\mbox{rel}} \right)
\end{equation}
\end{cons}
In some particular scenarios, however, this may not hold. For instance, in audio-only search scenarios, results may be delivered sequentially, one at a time.
Finally, the number of documents returned is also an aspect of system quality.
In the same manner that capturing diversity in the first positions is desirable,
adding non-relevant documents to the end of the ranking should be penalized by metrics.
In other words, the cutoff used by the system to \emph{stop} returning search results also has an impact on users. Therefore, adding noise at the bottom of the ranking should decrease its effectiveness.
\begin{cons}
[Confidence, Conf] Adding non-relevant documents decreases the
score:
\begin{equation}\label{eq:Conf}
Q\left(\vec{d}\right) > Q\left(\vec{d}, d^{\neg \mbox{rel}}\right)
\end{equation}
\end{cons}
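To make this concrete, a constraint such as \Conf can be checked mechanically against any metric implementation. A minimal Python sketch, using P@$k$ and a toy utility-style metric as examples (the metric bodies and relevance values below are illustrative assumptions, not definitions taken from this paper):

```python
def p_at_k(ranking, k=10):
    # P@k: number of relevant documents in the first k positions, divided by k.
    return sum(ranking[:k]) / k

def toy_utility(ranking, effort=0.1):
    # A toy utility-style metric: total gain minus a fixed cost per inspected document.
    return sum(rel - effort for rel in ranking)

def satisfies_conf(metric, ranking):
    # Conf: appending a non-relevant document must strictly lower the score.
    return metric(ranking) > metric(ranking + [0])

ranking = [1, 1, 0, 1]  # binary relevance grades of a toy ranking
print(satisfies_conf(p_at_k, ranking))      # False: numerator and fixed k are unchanged
print(satisfies_conf(toy_utility, ranking)) # True: each extra document costs effort
```

The same harness applies to any of the constraints expressed as score comparisons between two rankings.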
\subsection{Diversity-Oriented Constraints}
The first diversity-oriented constraint is related to the fact that the metric should be sensitive
to the \emph{novelty} of aspects covered by a single document:
\begin{cons}
[Query Aspect Diversity, \Nov] Covering more aspects in the same document (i.e., without additional effort of inspecting more documents) increases the score. Assuming relevance contribution $(\forall d,t \ldotp r(d,t)\ll 1)$:
\begin{equation}\label{eq:Nov}
\forall t\in\mathcal{T} \ldotp
\left(r(d_i',t)>r(d_i,t)\right) \Longrightarrow
Q\left(\vec{d}_{d_i \leftrightarrow d_i'}\right) > Q \left(\vec{d}\right)
\end{equation}
\end{cons}
To calculate the \emph{gain} obtained by observing a new relevant document in the ranking, most of the existing diversity metrics take into account
the number of previously observed documents
that are related to the same aspect.
The more an aspect has been covered earlier in the ranking, the less a new document relevant to this aspect contributes to the gain. Formally:
\begin{cons}
[Redundancy, Red]
Assuming binary relevance, balanced aspects and no aspect overlap, and being $d$ and $d'$ documents relevant to different aspects such that $r(d,t)=r(d',t')=r_c$, then:
\begin{equation}\label{eq:Red}
\begin{split}
|\{d_i\in\vec{d} \ldotp r(d_i,t)=r_c\}| & > |\{d_i\in\vec{d} \ldotp r(d_i,t')= r_c\}| \Longrightarrow\\
Q\left(\vec{d},d'\right) & > Q\left(\vec{d},d\right)
\end{split}
\end{equation}
\end{cons}
The \Red constraint assumes binary relevance by counting relevant documents for each
query aspect. In order to consider graded relevance
in previously observed documents, we can apply the monotonicity
principle. That is, if an aspect $t$ is captured to a greater
extent than a second aspect $t'$ in every previously observed document,
then the ranking is more redundant w.r.t. $t$ than $t'$. Formally:
\begin{cons}
[Monotonic Redundancy, MRed]
Assuming two balanced aspects ($\mathcal{T}=\{t,t'\}$), relevance contribution, and being $d$ and $d'$ documents exclusively relevant to each
aspect, $0<r(d,t)=r(d',t')\ll 1$ and $r(d,t')=r(d',t)=0$:
\begin{equation}\label{eq:MRed}
\forall d_i\in \vec{d} \ldotp \left(r(d_i,t)>r(d_i,t')\right)
\Longrightarrow
Q\left(\vec{d},d'\right) > Q\left(\vec{d},d\right)
\end{equation}
\end{cons}
Intuitively, just as the user's exploration capacity or patience is limited, the user's information need is also finite.
This means that there should exist a certain point at which
a single new piece of information completely satisfies the user's information need,
in such a way that retrieving any other document addressing the same query
aspect is not beneficial. Formally:
\begin{cons}
[Aspect Relevance Saturation, Sat]
Assuming no aspect overlap, there exists a finite relevance value $r_{max}$
large enough such that:
\begin{align}\label{eq:Sat}
\begin{split}
(r(d_{n},t) = r_{max}) ~~\wedge ~~&(r(d_{n+1},t) > 0) \Longrightarrow \\
Q\left(\vec{d}\right)~\ge~&Q\left(\vec{d}, d_{n+1} \right)
\end{split}
\end{align}
\end{cons}
Finally, the following constraint captures the relative weight of aspects $w(t)$
w.r.t. the user's information need:
\begin{cons}
[Aspect Relevance, AspRel] Aspects with higher weights have a greater effect
on the ranking quality score. Formally, assuming no aspect overlap, and being $d_i$ and $d_i'$ documents
that are relevant to different aspects that have not been observed before,
$\forall j<i \ldotp r(d_j,t)=r(d_j,t')=0$, and $r(d_i,t)=r(d_i',t')>0$ then:
\begin{equation}\label{eq:AspRel}
w(t) < w(t') \Longrightarrow
Q\left( \vec{d}_{d_i\leftrightarrow d_i'} \right) > Q\left(\vec{d}\right)
\end{equation}
\end{cons}
In summary, we have defined a total of ten constraints: five relevance-oriented constraints
(\Pri, \Deep, \DeepTh, \CloseTh and \Conf), and five constraints for search result diversification (\Nov, \Red, \MRed, \Sat, and \AspRel). The next section provides an axiomatic analysis of the most popular retrieval and diversity metrics using these constraints.
\section{Metric Analysis}
\label{sec:metric:analysis}
In this section, we firstly analyze standard metrics designed to evaluate retrieval systems in non-diversified scenarios (i.e., single-aspect). Then we analyze the \emph{intent-aware} family of metrics, as well as a number of popular diversity metrics.
\subsection{Standard Metrics for Ad-hoc Retrieval}
We analyze here metrics that do not consider multiple aspects of a query or topic, including:
Precision at a cutoff $k$ (P@$k$),
Reciprocal Rank (RR)~\cite{Voorhees99thetrec-8},
Average Precision (AP),
Rank-Biased Precision (RBP)~\cite{RBP},
Expected Reciprocal Rank (ERR@$k$)~\cite{chapelle2009expected} and
Normalized Discounted Cumulative Gain (nDCG@$k$)~\cite{DCG}.
RBP uses a parameter $p$ that defines the user's \emph{patience}, modeled as the probability
that the user inspects the next document in the ranking. P@$k$, ERR and nDCG
include a cutoff $k$ that limits the rank
positions considered in the evaluation measurement.\footnote{
Due to lack of space, here we focus on the formal properties of the metrics and we provide references to the definition and explanation
of the metrics.}
The upper part of Table~\ref{tab:constraints} summarizes the properties for the retrieval effectiveness metrics.
\input{04_analysis_metrics_tbl}
The constraints defined by {\citet{Amigo_2013}} assume that relevance judgments are binary.
However, our axiomatic framework defines the constraints
\Pri and \Deep over graded relevance (Eq.~\ref{eq:Pri} and~\ref{eq:Deep}, respectively).
Therefore, RR, AP and
P@$k$ become undefined.\footnote{{\citet{Amigo_2013}}'s analysis shows
that P@$k$ does not satisfy the \Pri and \Deep constraints, given that it does not consider the order of documents before position $k$.}
The rest of the analysis is in line with the one presented by {\citet{Amigo_2013}}:
The other metrics (nDCG@$k$,ERR@$k$ and RBP) satisfy \Pri and \Deep constraints
by applying a relevance discounting factor depending on the depth of the ranking position.
With regards to \DeepTh (Eq.~\ref{eq:DeepTh}) and \CloseTh (Eq.~\ref{eq:CloseTh}) constraints,
metrics that reward relevance in deep ranking positions, such as AP or nDCG@$k$, satisfy \CloseTh but not \DeepTh, while metrics
that focus on the top of the ranking (P@$k$, RR and ERR@$k$) satisfy
\DeepTh but not \CloseTh. RBP satisfies both \CloseTh and \DeepTh.
The reason is that RBP is supported by a probabilistic user behavior model that takes into account
the limitations of the ranking exploration process (i.e., user's patience).
None of these metrics satisfy \Conf.
This family of metrics is not applicable in the context of multiple
query aspects. Therefore, they do not satisfy the diversity-oriented constraints.
\subsection{Intent-Aware Metrics}
\label{sec:analaysis:intent-aware}
The \emph{intent-aware} scheme~\cite{Agrawal-09} extends standard metrics such as AP or ERR to make them applicable to diversification scenarios.
Firstly, each query aspect is evaluated independently and then a weighted average considering query aspect weights is computed. Being $M_t(\vec{d})$ the score of $\vec{d}$
according to the metric $M$ when only the relevance to aspect $t$ is considered:
\begin{equation*}
M\mbox{-IA}(\vec{d}) = \sum_{ t \in \mathcal{T}} w(t) M_t(\vec{d})
\end{equation*}
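This aggregation can be sketched in a few lines of Python, using an unnormalized DCG as the base metric $M$ (the aspect names, grades and weights below are illustrative):

```python
import math

def dcg(rels):
    # Unnormalized DCG with the usual log2(i + 1) position discount.
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

def intent_aware(base_metric, ranking, weights):
    """M-IA: weighted average of the per-aspect scores M_t.

    ranking: list of dicts mapping aspect -> relevance grade r(d, t).
    weights: dict mapping aspect -> w(t) (assumed to sum to 1)."""
    return sum(w * base_metric([doc.get(t, 0.0) for doc in ranking])
               for t, w in weights.items())

ranking = [{"t1": 1.0}, {"t2": 0.5}, {"t1": 0.5, "t2": 1.0}]
weights = {"t1": 0.6, "t2": 0.4}
dcg_ia = intent_aware(dcg, ranking, weights)
```

Any base metric that maps a list of relevance grades to a score can be plugged into `intent_aware` in the same way.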
The central part of Table~\ref{tab:constraints} includes the properties for the intent-aware version of the metrics discussed before. Intent-aware metrics converge to the corresponding standard effectiveness metric when the query has only one aspect. Consequently, they inherit the properties of the original metric over the relevance-oriented constraints \Pri, \Deep, \DeepTh and \CloseTh.
Let us now analyze the diversification-oriented constraints. Besides AP-IA@$k$, RR-IA and P-IA@$k$, which are undefined in the context of graded relevance judgments, the intent-aware metrics nDCG-IA@$k$, ERR-IA@$k$ and RBP-IA satisfy the \Nov constraint. If a document is relevant for several aspects, then the averaged score across query aspects increases.
Most of the metrics do not satisfy \Red and \MRed.
In the case of P-IA@$k$, the precision averaged across aspects at a certain cutoff $k$ is independent of which particular aspect the documents are relevant to.\footnote{For instance, being $n_i$ the amount of
relevant documents for the aspect $t_i$, the average P@$k$ across
aspects is: $\frac{1}{|{\mathcal T}|}\sum_{t_i\in{\mathcal T}}\frac{n_i}{k}\propto \sum_{t_i\in{\mathcal T}} n_i$.}
RR-IA@$k$ does not satisfy \Red either, given that it is sensitive only to the first relevant document for each query aspect. In the case of AP-IA@$k$,
the relevance contribution of a document to the aspect $t$ is
higher if relevant documents for $t$ have been observed earlier in the
ranking.\footnote{The contribution
of a relevant document in AP is proportional
to the precision achieved at the document's position, which
is higher when relevant documents appear in the
previous positions. For instance, being $N_r$ the fixed
amount of relevant documents for every aspect in the collection, and being
$d_{t}$, $d'_{t}$ two documents related to aspect $t$, and $d_{t'}$ a document
related to aspect $t'$, then: $\mbox{AP-IA@}2(d_t,d'_t)=1\cdot\frac{1}{N_r}+1\cdot\frac{1}{N_r}>
1\cdot\frac{1}{N_r}+\frac{1}{2}\cdot\frac{1}{N_r}=\mbox{AP-IA@}2(d_t,d_{t'})$}
nDCG-IA@$k$ and RBP-IA also fail to satisfy the \Red constraint. These two metrics are not sensitive to the relevance of
previously observed documents. The contribution of documents depends on
the rank position and the amount of relevant documents
in the collection.
On the other hand,
the metric ERR-IA@$k$ satisfies both \Red and \MRed,
due to the component $\prod_{j<i}(1-r(d_j,t))$ which estimates the probability of the user
to be satisfied by previously observed documents according to graded relevance levels.
The \Sat constraint is not satisfied by P-IA@$k$, AP-IA@$k$, nDCG-IA@$k$ nor RBP-IA. The reason is that
all these metrics reward new relevant documents regardless of the gain obtained by previously observed documents. However, the saturation relevance for RR-IA@$k$ and ERR-IA@$k$
is $1$. Finally, the \AspRel constraint is satisfied by all the intent-aware metrics analyzed in this work,
given that they all consider the first relevant
document for each aspect in the ranking and all of them consider aspect weights $w(t)$.
\subsection{Other Diversity Metrics}
Besides the intent-aware metrics ($M\mbox{-IA}$), other metrics have been proposed to evaluate the effectiveness of search result diversification systems~\cite{santos2015search}.
{\citet{Zhai-03}} proposed Subtopic Recall (S-Recall@$k$), which measures the number
of aspects captured in the first $k$ positions. Given that the metric only measures the coverage of aspects, it does not satisfy the \Pri, \Deep, \CloseTh and \Conf relevance-oriented constraints. The only diversity-oriented constraint that it satisfies is \Sat, given that S-Recall@$k$ considers only the first relevant document for each query aspect; it does not consider aspect weights either. Likewise, the metric S-RR@$100\%$ -- an extension to RR also proposed by~{\citet{Zhai-03}}, defined as the inverse of the rank position
at which a complete coverage of aspects is obtained -- satisfies the same properties as S-Recall@$k$.
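S-Recall@$k$ is simple enough to sketch directly; in the snippet below each document is represented by the set of aspects it covers (an illustrative encoding):

```python
def s_recall(ranking, n_aspects, k):
    """S-Recall@k: fraction of the query aspects covered in the top-k.

    ranking: list of sets with the aspect labels covered by each document."""
    covered = set().union(*ranking[:k])
    return len(covered) / n_aspects

# Toy ranking over three aspects {t1, t2, t3}; the labels are illustrative.
ranking = [{"t1"}, {"t1", "t2"}, set(), {"t3"}]
print(s_recall(ranking, 3, 2))  # 2/3: only t1 and t2 appear in the top-2
print(s_recall(ranking, 3, 4))  # 1.0: all aspects are covered
```

Note that neither relevance grades nor aspect weights appear anywhere in the computation, which is exactly why the constraints discussed above are not satisfied.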
{\citet{Clarke-08}} proposed Novelty-Biased
Discounted Cumulative Gain ($\alpha$-nDCG@$k$).\footnote{Note that given that the proposed formal constraints and experiments in this work compare metrics at topic (or query) level, the normalization factor in metrics such as $\alpha$-nDCG@$k$ can be ignored.} This metric is defined as:
\begin{equation*}\label{eq:alphandcg}
\mbox{$\alpha$-nDCG}@k (\vec{d}) =
\sum_{i=1}^{k}
\frac{\sum_{t\in\mathcal{T}}r(d_i,t)(1-\alpha)^{c(i,t)}}
{\log(i+1)}
\end{equation*}
where $c(i,t)$ represents the number of previously
observed documents that capture aspect $t$.
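A minimal Python sketch of the (unnormalized) formula above, assuming log base 2 and the dict-based document encoding used in the previous snippets:

```python
import math

def alpha_dcg(ranking, aspects, alpha=0.5, k=10):
    """Unnormalized alpha-nDCG@k; the per-topic normalization factor is
    ignored (cf. the footnote above).
    ranking: list of dicts mapping aspect -> relevance grade."""
    score, seen = 0.0, {t: 0 for t in aspects}  # seen[t] plays the role of c(i, t)
    for i, doc in enumerate(ranking[:k], start=1):
        gain = sum(doc.get(t, 0) * (1 - alpha) ** seen[t] for t in aspects)
        score += gain / math.log2(i + 1)
        for t in aspects:
            seen[t] += doc.get(t, 0)  # update coverage counts after scoring
    return score

# Redundancy in action: covering a new aspect beats repeating a covered one.
redundant = [{"t1": 1}, {"t1": 1}]
diverse = [{"t1": 1}, {"t2": 1}]
assert alpha_dcg(diverse, ["t1", "t2"]) > alpha_dcg(redundant, ["t1", "t2"])
```

The final assertion illustrates how the $(1-\alpha)^{c(i,t)}$ factor discounts repeated aspects.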
Similarly to the original
nDCG, it satisfies the \Pri, \Deep and \CloseTh constraints. However, unlike nDCG,
\DeepTh is also satisfied due to the redundancy factor $(1-\alpha)^{c(i,t)}$, which also
allows it to satisfy \Red. \Nov is satisfied due to the additive relevance across aspects.
In contrast, $\alpha$-nDCG@$k$ does not satisfy the constraints \MRed and \Sat. The reason is that the redundancy component $(1-\alpha)^{c(i,t)}$ does not consider the relevance grade of previously observed documents.
Finally, this metric does not consider the weight of aspects and therefore \AspRel is not satisfied.
{\citet{Clarke-09}} proposed
Novelty- and Rank-Biased Precision (NRBP), an adaptation of RBP for search
result diversification, defined as:
\begin{equation*}
\mbox{NRBP}(\vec{d}) = \sum_{i=1}^{\infty}p^{i-1}\sum_{t\in\mathcal{T}} r(d_i,t)(1-\alpha)^{c(i,t)}
\end{equation*}
Similarly to the original RBP, NRBP satisfies
all relevance-oriented constraints except \Conf, given that only relevant documents affect
the score. In terms of diversity-oriented constraints, NRBP behaves similarly to $\alpha$-nDCG@$k$ given that diversification is modeled in a similar manner.
{\citet{sakai2011evaluating}} proposed the D\#-Measure, which combines a D-Measure (e.g., D-nDCG~\cite{Sakai-10}) with the ratio of aspects captured in the first
$k$ positions (modeled by S-Recall@$k$):
\begin{equation*}
\mbox{D\#-Measure}@k(\vec{d})=\lambda \cdot \mbox{S-Recall}@k(\vec{d})+(1-\lambda)\cdot\mbox{D-Measure}@k(\vec{d})
\end{equation*}
The D\#-Measure inherits the properties from nDCG-IA@$k$, which already satisfies \DeepTh and \AspRel. Therefore, the S-Recall@$k$ component does not contribute any additional
constraint satisfaction.
None of the previous metrics satisfy \Conf. However, there exist in the literature \emph{utility}-oriented metrics that penalize non-relevant documents at the end of the ranking. Two examples are the Normalized Discounted Cumulated Utility (nDCU)~\cite{Yang-07}, and its generalized version Expected Utility (EU)~\cite{Yang-09}. EU is very similar to $\mbox{$\alpha$-nDCG}@k (\vec{d})$ but includes a cost factor. Being $e$ the estimated effort for accessing one document, EU can be expressed as:
\begin{equation*}\label{eq:eu}
\mbox{EU} (\vec{d}) =
\sum_{i=1}^{|\vec{d}|}\frac{1}{1+\log(i)}\left(
\sum_{t\in\mathcal{T}}r(t) r(d_i,t)(1-\alpha)^{c(i,t)}-e\right)
\end{equation*}
EU inherits the $\alpha\mbox{-nDCG}@k (\vec{d})$ properties, but additionally captures \AspRel and \Conf. However, EU still does not satisfy \MRed and \Sat.
The Cube Test metric (CT@$k$)~\cite{Luo-13} satisfies \Sat by adding a saturation factor. Assuming a linear time effort w.r.t. the amount of inspected documents, CT@$k$ can be expressed as:
\begin{equation*}
\mbox{CT@}k( \vec{d})
= \sum_{i=1}^{|\vec{d}|} \frac{1}{i}
\sum_{t\in\mathcal{T}} r(t) r(d_i,t) (1-\alpha) ^ {c(i,t)} f_{\mbox{\it Sat}}
\end{equation*}
where $f_{\mbox{Sat}}$ is 0 or 1 depending on whether the cumulative relevance of documents for the aspect exceeds a certain saturation level. The reciprocal rank discounting factor $\left(\frac{1}{i}\right)$ affects the \CloseTh constraint, rewarding the positions of documents over the amount of relevant documents in the top area. In addition, \Conf is not satisfied either: there is no contribution or penalty for documents with zero relevance.
Table~\ref{tab:constraints} also includes the proposed metric \textit{Rank-Biased Utility} ($\mbox{RBU}$), which we describe below.
\section{Rank-Biased Utility}
\label{sec:rbu}
The quality of a diversified ranking depends (at least) on the following factors: (i)~the position of relevant documents in the ranking; (ii)~the redundancy regarding each of the aspects covered by previously observed documents;
(iii)~the weights of the aspects seen in the ranking and (iv) the \emph{effort} -- in terms of user cost or time -- derived from inspecting relevant or non-relevant documents. The analysis described in Section~\ref{sec:metric:analysis} shows that none of the existing metrics take into account all these factors.
To fill this gap, we propose \textit{Rank-Biased Utility} (RBU), which satisfies all the relevance- and diversity-oriented formal constraints (see proofs in the appendix).
The analysis shows that RBP~\cite{RBP} is the only metric that satisfies the
first four relevance constraints, while ERR-IA@$k$~\cite{Agrawal-09, chapelle2009expected} is the only metric that satisfies all five diversity-oriented constraints. Expected Utility (EU) is the only one that satisfies \Conf, capturing the suitability of the ranking cutoff.
In order to satisfy every constraint, RBU combines the user
exploration deepness model from RBP with the redundancy model from ERR-IA@$k$, and also adds the \emph{user effort} component $e$ from EU to satisfy the \Conf constraint.
The metrics RBP and ERR-IA@$k$ can be combined under the following user behavior assumptions:
(i)~the user has a probability
$p$ of exploring the next document, and (ii)~the user has a probability
$r(d_j,t)$ of obtaining gain from document $d_j$ for aspect $t$.
Similarly to ERR-IA@$k$, the probability of being satisfied by document
$d_i$ after observing the documents that occur earlier in the ranking is:
$$
r(d_i,t)\prod_{j=1}^{i-1}(1-r(d_j,t))
$$
Analogously to the user model followed by RBP, the resulting
contribution of a document $d_i$ at position $i$ must be weighted
by $p^i$:
\begin{equation*}
p^i r(d_i,t) \prod_{j=1}^{i-1}(1-r(d_j,t))
\end{equation*}
In order to satisfy \AspRel, the weighted sum of contributions across aspects in $\mathcal{T}$ is:
\begin{equation*}
p^i \sum_{t\in\mathcal{T}} w(t) r(d_i,t) \prod_{j=1}^{i-1}(1-r(d_j,t))
\end{equation*}
The cumulative gain across rank positions up to $k$ is then:
\begin{equation*}
\mbox{RBU}@\mbox{k}(\vec{d})=\sum_{i=1}^k p^i \sum_{t\in\mathcal{T}}w(t)r(d_i,t)\prod_{j=1}^{i-1}\left(1-r(d_j,t)\right)
\end{equation*}
Similarly to EU, we define RBU in utility terms in order to capture \Conf.
Being $e$ the effort of observing a document,
the rank-biased accumulated effort is weighted according to $p^i$, that is: $\left(\sum_{i=1}^k p^i e \right)$.
Finally, combining the relevance contribution with the cumulative effort, we obtain:
\begin{equation}\label{eq:rbu}
\resizebox{0.9\hsize}{!}{$\displaystyle
\mbox{RBU$@$k}(\vec{d})=\sum_{i=1}^kp^i \left( \sum_{t\in\mathcal{T}}\left(w(t) r(d_i,t)\prod_{j=1}^{i-1}\left(1-r(d_j,t)\right) \right) - e \right)
$}
\end{equation}
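The full metric takes only a few lines to implement; a minimal Python sketch of the formula above (the dict-based input format is an illustrative choice):

```python
def rbu(ranking, weights, p=0.99, e=0.05, k=None):
    """Rank-Biased Utility RBU@k as defined above.

    ranking: list of dicts mapping aspect -> relevance probability r(d_i, t).
    weights: dict mapping aspect -> aspect weight w(t)."""
    k = len(ranking) if k is None else k
    unsat = {t: 1.0 for t in weights}  # prod_{j < i} (1 - r(d_j, t)) per aspect
    score = 0.0
    for i, doc in enumerate(ranking[:k], start=1):
        gain = sum(weights[t] * doc.get(t, 0.0) * unsat[t] for t in weights)
        score += p ** i * (gain - e)
        for t in weights:
            unsat[t] *= 1.0 - doc.get(t, 0.0)  # update dissatisfaction probabilities
    return score

# Conf in action: appending a non-relevant document strictly lowers the score.
ranking = [{"t1": 0.8}, {"t2": 0.6}]
weights = {"t1": 0.5, "t2": 0.5}
assert rbu(ranking + [{}], weights) < rbu(ranking, weights)
```

Setting $e=0$ and $p=1$ in this sketch reproduces the special cases discussed next.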
RBU@$k$ matches the RBP-IA metric (up to a constant factor) when assuming zero
effort ($e=0$) and a small contribution of documents in terms of gain for query aspects:
$$r(d_i,t)\ll1\Longrightarrow\prod_{j=1}^{i-1}\left(1-r(d_j,t)\right)\simeq 1\Longrightarrow $$
$$ \mbox{RBU$@$k}(\vec{d})= \sum_{t\in \mathcal T} w(t) \sum_{i=1}^{k} p^{i}\, r(d_i,t) \propto \sum_{t\in\mathcal{T}} w(t) \hspace{0.1em}\mbox{RBP}_t(\vec{d})$$
On the other hand, RBU@$k$ is equivalent to the metric ERR-IA@$k$ when the effort component is zero ($e=0$), and the probability of exploring the next document is maximal ($p=1$):
$$\sum_{i=1}^k 1^i \left( \sum_{t\in\mathcal{T}} \left( w(t) r(d_i,t) \prod_{j=1}^{i-1}(1-r(d_j,t) ) \right) -0 \right) = \sum_{t\in\mathcal{T}} w(t) \hspace{0.1em}\mbox{ERR}_t@k(\vec{d})$$
We now discuss the role of the effort component $e$, which
represents the cost inherently associated with inspecting a new document in the ranking.\footnote{In this work, the effort of inspecting or judging a relevant or non-relevant document is the same. We leave for future work the definition of formal constraints that consider these differences~\cite{turpin2009including, Smucker-12}.}
For instance, if $e=0.1$, the inspected document $d_i$ has a relevance of $0.1$ to aspect $t$, and no previously observed document is relevant to $t$, then the actual gain is zero:
$$r(d_i,t)\prod_{j<i}\left(1-r(d_j,t)\right)-e=
0.1\prod_{j<i}\left(1-0\right)-0.1=0$$
We have introduced RBU@$k$ and shown that the proposed metric satisfies all the relevance- and diversity-oriented formal constraints. The experiments described in the following sections compare RBU@$k$ to other metrics in the context of standard evaluation campaigns for search result diversification.
\section{Experiments}
\label{sec:metaevaluation}
We start by defining our meta-evaluation metric. Then we evaluate the metrics
in different scenarios based on the TREC Web Track 2014 ad-hoc retrieval
task~\cite{collins2015trec}, which includes search result diversification. Finally, we corroborate our results in the context of the TREC
Dynamic Domain task~\cite{yang2015overview}.\footnote{Releasable data and scripts used in these experiments are available at \url{https://github.com/jCarrilloDeAlbornoz/RBU}. Diversity metrics and RBU are also included in the EvALL evaluation framework~\cite{amigo2017evall} \url{http://evall.uned.es/}.}
\subsection{Meta-evaluation: Metric Unanimity}
We aim to quantify the ability of metrics to
capture diversity in addition to traditional ranking quality aspects.
For this purpose, we define the \emph{Metric Unanimity (\MU)}.
\MU quantifies to what extent a metric is sensitive to quality
aspects captured by other existing metrics. It follows a similar concept used by \textit{Strictness},\footnote{Strictness checks to what extent a metric can outscore metrics that achieve a low score according to other metrics.} proposed by~{\citet{Amigo_2013}} for the ad-hoc retrieval scenario.
Our intuition is that, if a system improves over another system on every quality criterion, this should be \emph{unanimously} reflected by every metric. A metric that captures all quality criteria should
reflect these improvements.
\input{06_experiments_scenarios_tbl}
Considering the space of system output pair comparisons (i.e., $Q(\vec{d})>Q(\vec{d}')$) and a set of metrics, \MU can be formalized as the Point-wise Mutual Information (PMI) between the decisions of a metric and the
improvements reported simultaneously by the rest of the metrics in the set. Formally, let $m$ be a metric, $\mathcal{M}$ the rest of the metrics, and $\mathcal{S}$ a set of system outputs. Being $\Delta m_{i,j}$ and $\Delta{\mathcal M}_{i,j}$ statistical variables over system pairs $(\vec{d_i},\vec{d_j}) \in \mathcal{S}^2$, indicating a system improvement according to the metric and to the rest of the metrics, respectively:\footnote{The a priori probability of a system improvement for every metric is fixed at $P(\Delta m_{i,j})=\frac{1}{2}$. That is, for the cases on which two system outputs obtain the same score $m(\vec{d_i})=m(\vec{d_j})$, we add $0.5$ to the statistical count.}
\begin{align*}
&\Delta m_{i,j}\equiv m(\vec{d_i})> m(\vec{d_j})\\
&\Delta{\mathcal M}_{i,j}\equiv\forall m\in \mathcal{M} \ldotp\left(m(\vec{d_i})\ge m(\vec{d_j})\right)
\end{align*}
Then \MU is formalized as:
\begin{equation*}\label{eq:um}
\resizebox{.9\hsize}{!}{%
$\mbox{MU}_{\mathcal{M},\mathcal{S}}(m) = \mbox{PMI} \left(
\Delta m_{i,j}, \Delta {\mathcal M_{i,j}} \right)
=
\log \left(
\frac
{P(\Delta m_{i,j},\Delta {\mathcal M_{i,j}})}
{P(\Delta m_{i,j})\cdot P(\Delta {\mathcal M_{i,j}})}
\right)$
}
\end{equation*}
Let us consider the following example, illustrated by the table below:
\begin{center}
\begin{tabular}{ c c c c }
\toprule
& $m^1$ & $m^2$ & $m^3$\\
\midrule
$S_1$ & 1 & 0.8 & 1 \\
$S_2$ & 0.5 & 0.3 & 0.2 \\
$S_3$ & 0.2 & 0.4 & 0.5 \\
\bottomrule
\end{tabular}
\end{center}
\vspace{0.6em}
The example consists of three metrics and three system outputs. We now compute the \MU of the metric $m^1$ with respect to the rest of the metrics $\mathcal{M}=\{m^2, m^3\}$. Here, there are 6 ordered pairs of system outputs: $(S_1,S_2)$, $(S_2,S_1)$, $(S_1,S_3)$, etc. The improvements reported by $m^1$ are: $\Delta m^1_{1,2}$, $\Delta m^1_{1,3}$, and $\Delta m^1_{2,3}$. The improvements reported simultaneously by the other metrics are: $\Delta \mathcal{M}_{1,2}$, $\Delta \mathcal{M}_{1,3}$, and $\Delta \mathcal{M}_{3,2}$. $m^1$ agrees with $\mathcal{M}$ in two cases. Therefore $\MU_{\mathcal{M}}(m^1)=\log_2\left(\frac{2/6}{3/6\cdot 3/6}\right)=0.415$.
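The computation in this example can be reproduced with a short Python sketch (PMI with logarithm base 2; representing score tables as nested dicts is an illustrative choice):

```python
import itertools
import math

def metric_unanimity(m, rest, scores):
    """MU of metric m with respect to the metric set `rest`.

    scores: dict system -> dict metric -> score.
    Ties contribute 0.5 to the counts for m, as in the footnote above."""
    pairs = list(itertools.permutations(scores, 2))  # all ordered system pairs
    d_m = {(a, b): 1.0 if scores[a][m] > scores[b][m]
           else 0.5 if scores[a][m] == scores[b][m] else 0.0
           for a, b in pairs}
    d_M = {(a, b): all(scores[a][o] >= scores[b][o] for o in rest)
           for a, b in pairs}
    n = len(pairs)
    p_m = sum(d_m.values()) / n                       # P(delta m) = 1/2 by design
    p_M = sum(d_M.values()) / n                       # P(delta M)
    p_joint = sum(d_m[pair] for pair in pairs if d_M[pair]) / n
    return math.log2(p_joint / (p_m * p_M))

scores = {"S1": {"m1": 1.0, "m2": 0.8, "m3": 1.0},
          "S2": {"m1": 0.5, "m2": 0.3, "m3": 0.2},
          "S3": {"m1": 0.2, "m2": 0.4, "m3": 0.5}}
print(round(metric_unanimity("m1", ["m2", "m3"], scores), 3))  # 0.415
```

The printed value matches the hand computation in the worked example.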
\MU has four properties that we describe below.
\begin{description}
\item[{\it Property 1.}] Capturing every unanimous improvement maximizes \MU regardless of the other
decisions, given that $P(\Delta m_{i,j})=\frac{1}{2}$ is fixed and $k=P(\Delta {\mathcal M_{i,j}})$ is constant:
$$\mbox{MU}_{\mathcal{M},\mathcal{S}}(m)=\log\left(\frac{P(\Delta m_{i,j},\Delta {\mathcal M_{i,j}})}{\frac{1}{2}\cdot k}\right)\propto P(\Delta m_{i,j},\Delta {\mathcal M_{i,j}})$$
\item[{\it Property 2.}] A metric $m_{\mbox{\it rand}}$ which assigns random or constant scores
to every system output achieves a zero \MU, capturing the \emph{sensitivity} of metrics:
$$
\mbox{MU}_{\mathcal{M},\mathcal{S}}(m_{\mbox{\it rand}})
= \log \left(
\frac
{\frac{1}{2}\cdot P(\Delta {\mathcal M_{i,j}})}
{\frac{1}{2}\cdot P(\Delta {\mathcal M_{i,j}})}
\right)
= \log(1) = 0
$$
\item[{\it Property 3.}] \MU is asymmetric. A metric $m$ can be \textit{unanimous} regarding the rest of metrics,
while the rest of metrics are not.
$$\mbox{MU}_{\{m_2,m_3\}}(m_1) \neq \mbox{MU}_{\{m_1,m_3\}}(m_2)\neq \mbox{MU}_{\{m_1,m_2\}}(m_3)$$
\item[{\it Property 4.}] \MU is not affected
by the predominance of a certain family of metrics in the set $\mathcal{M}$:
$$
\mbox{MU}_{\mathcal{M}\cup\{m'\},\mathcal{S}}(m)
=
\mbox{MU}_{\mathcal{M}\cup\{m',m',\ldots, m'\},\mathcal{S}}(m)
$$
\end{description}
\subsection{Experiment 1: TREC Web Track 2014}
\label{sec:experiments:official}
This first experiment aims to measure \MU in a standard diversification evaluation campaign: the TREC Web Track 2014 ad-hoc retrieval task~\cite{collins2015trec}. In this benchmark, systems need to perform ad-hoc retrieval from the ClueWeb-12 collection for a total of 50 test topics, returning the top 10,000 documents. Some of the topics have multiple aspects -- therefore, diversified rankings may be more effective. We use the 30 official runs submitted to the ad-hoc retrieval task, which are available on TREC's website.
Using our own implementation of the metrics, we execute over the official runs the following metrics:
AP, RR, AP-IA and RR-IA, which do not require any parameter;
P@$k$, ERR@$k$, nDCG@$k$ and their corresponding intent-aware variants,
using $k \in \{10, 20, 50,$ $100, 1000\}$;
S-Recall@$k$, RBP, NRBP and $\alpha$-nDCG@$k$, with $p\in\{0.8,0.9,0.99\}$
and $\alpha\in\{0.1,0.25,0.5,0.75\}$;
EU and our proposed metric RBU with the effort parameter $e \in \{0.001,0.05,0.1,0.5\}$.
For metrics that do not accept multiple query aspects, we consider the maximum relevance across aspects: $r(d)=max_{t\in{\mathcal T}}(r(d,t))$.
The first column in Table~\ref{tab:results} shows the metrics ranked by \MU. For the sake of clarity, the table includes for each metric the variant with highest \MU.
Results show that metrics that satisfy only a few constraints,
such as P@$k$ or S-Recall@$k$, are substantially less unanimous than
the rest of the metrics. This means that metrics with higher scores cover the same quality criteria captured by P@$k$ or S-Recall@$k$, but these two metrics do not capture other criteria captured
by the rest of the metrics.
Our second observation is that a metric with a shallow cutoff (e.g., ERR@$50$) -- i.e., one that takes into account only a few documents in the ranking -- has a lower \MU score than its deep counterpart (e.g., ERR@$1000$). This behavior is consistent across all metrics and variants. Likewise, higher values for the patience parameter $p$ in RBP obtain higher \MU scores. Intuitively, the shallower the metric is, the less likely it is to capture improvements in deep ranking positions.
RBU obtains the highest scores when $p=0.99$ (i.e., the metric considers deep positions in the ranking), for all the tested values of the effort component $e$.
\input{06_experiments_official_tbl}
\subsection{Experiment 2: Simulating Alternative Scenarios}
\label{sec:experiments:simulations}
In order to study the behavior of metrics under different situations and to corroborate our findings, we repeat the experiment described before after artificially modifying some parameters of the official TREC Web Track experimental setup.
The second column in Table~\ref{tab:results}
shows the results when:
\begin{enumerate}
\item Enforcing all relevance judgments to be graded: we replace each
discrete relevance value $r(d)$ by a random value between zero and
$r(d)$: $r'(d)=\mbox{\it rand}(0..r(d))$. This is related to the \MRed constraint.
\item Randomly assigning a weight to each aspect $t$ in such a way that the sum
of the weights for each topic (or query) adds up to 1: $w'(t)=\mbox{\it rand}(0..1)$ with $\sum_{t \in {\mathcal T}}w'(t)=1$. This is related to the \AspRel constraint.
\item The ranking of documents returned by each system is manipulated by randomly reducing its length: $|\vec{d}|=\mbox{\it rand}(0..|\vec{d}|)$.
This variation simulates the situation in which
systems should cut their output rankings
according to their confidence of retrieving (or not) more relevant documents.
This tuning is related to the \Conf constraint, which is only satisfied by EU and the proposed metric.
\end{enumerate}
As a result, the difference in \MU scores between RBU and the other metrics is larger in this simulated scenario. The experiment suggests that this effect is not due to satisfying any single constraint, but to satisfying several constraints simultaneously.
Although EU satisfies \Conf and ERR-IA@$k$ satisfies \MRed and \Sat, RBU outperforms both metrics in terms of \MU.
In all the previous experiments, we have seen that \MU rewards
considering deeper positions in the ranking. In order
to isolate this variable, the next simulation (Table~\ref{tab:results}, third column) reduces the length of the rankings substantially,
by defining a random cutoff between $0$ and $50$: $|\vec{d}|=\mbox{rand}(0..50)$. Consequently,
metrics that use a cutoff equal to or greater than $k=50$ will not be rewarded by \MU. Remarkably, all the RBU variants with an effort parameter $e$ higher than zero obtain the highest \MU scores -- RBU with $e=0$ (omitted in the table) achieves a 0.7709 \MU score.
This suggests that the effort component $e$ plays an important role when evaluating rankings with different lengths.
\subsection{Experiment 3: Considering Metrics and Default Parameters used in Official Evaluation}
\MU scores depend on the set of metrics under consideration. Therefore, the results could be biased by the selected metric set $\mathcal{M}$ and variants. In order to avoid this bias, we consider the official metrics and parameters used by the
TREC Web Track organizers. In addition, to avoid the effect of implementation variations or bugs, we compare RBU (implemented by ourselves) against the official evaluation scores
released by TREC (first column in Table~\ref{tab:results2}).
In this case, AP-IA gets the highest \MU score. In terms of RBU, we can see that $p$ values and \MU scores are correlated. This shows again that \MU is biased by the amount of documents in the ranking that are \emph{visible} to the metric. Note that most of the metrics proposed by the organizers use a cutoff no greater than $k=20$. That is, most of the metrics
receive less information than AP-IA or NRBP, which take into account \emph{all} the documents in the ranking.
In order to avoid this effect, we focus on metrics that apply the cutoff $k=20$, and we apply the same cutoff to RBU: RBU@$20$.\footnote{In this experiment
we use the official evaluation scores. Therefore, we cannot adapt
AP-IA nor NRBP to this cutoff.}
Keeping the number of documents visible to the metrics constant, RBU achieves the same \MU score (0.9556) for all the tested variants, obtaining the highest \MU score among the metrics. This suggests that the performance of RBU in terms of \MU is not due to differences in the length of the observed ranking.
The high \MU scores of RBU could possibly be due to its explicit user effort component (the $e$ parameter),
rather than to its ability to capture other quality aspects such as diversity and redundancy.
In order to isolate this variable, we consider only three RBU variants with the effort parameter set to zero
$\left(e=0, p=\left\{0.8,0.9,0.99\right\}\right)$. Results at the bottom of the second column in Table~\ref{tab:results2} show that RBU also outperforms the rest of the metrics when $e=0$.
\subsection{Experiment 4: Validation using TREC Dynamic Domain Track}
In order to check the robustness of our
empirical conclusions, we repeat the same experiment over
TREC Dynamic Domain 2015~\cite{yang2015overview}, which includes 23 official runs.
This track consists of an interactive search scenario.
Systems receive aspect-level feedback iteratively and need to
dynamically retrieve as many relevant documents for aspects
as possible, using as few iterations as possible.
An important particularity of this task is that the system
must predict the optimal ranking cutoff, which is
closely related to the \Conf constraint.
The official metrics used in this track are Cube Test (CT@$k$) and
Averaged Cube Test (ACT@$k$)~\cite{Luo-13}, which are included in our experiments.
The rightmost column in Table~\ref{tab:results2} shows that we obtain similar results: all the RBU variants are at the top of the metrics ranking. In this case, the
user effort parameter $e$ is important: it is necessary
for RBU to outperform metrics such as CT@$k$ and ACT@$k$. In addition, we again obtained the same
result when considering only one RBU variant, which appears at
the top in terms of \MU scores.
\section{Conclusions}
\label{sec:conclusions}
We defined an axiomatic framework to analyze diversity metrics and found that none of the existing metrics satisfy all the constraints. Inspired by this analysis, we proposed Rank-Biased Utility (RBU, Equation~\ref{eq:rbu}), which satisfies all the formal constraints. Our experiments over standard diversity evaluation campaigns show that the proposed metric has more \emph{unanimity} than the official metrics used in the campaigns, i.e., RBU captures more quality criteria than the ones captured by other metrics.
We believe our contributions will help researchers and practitioners define their evaluation framework (e.g., which evaluation metric should be used?) when analyzing the effectiveness of systems in search result diversification scenarios. Future work includes a further parameter sensitivity analysis of metrics, as well as the study of other meta-evaluation criteria such as sensitivity or robustness against noise.
\section*{Appendix: Formal Proofs}
\begin{proof}
\textit{Rank-Biased Utility (\RBU, Eq.~\ref{eq:rbu}) satisfies the constraints: \Pri (Eq.~\ref{eq:Pri}), \Deep (Eq.~\ref{eq:Deep}), \DeepTh (Eq.~\ref{eq:DeepTh}) and \CloseTh (Eq.~\ref{eq:CloseTh}).}
{\small
\RBU is defined as:
$$\mbox{RBU$@$k}(\vec{d})=\sum_{i=1}^kp^i \left( \sum_{t\in\mathcal{T}}\left(w(t)r(d_i,t)\prod_{j=1}^{i-1}(1-r(d_j,t))\right)-e \right)$$
In the context of these constraints, it is assumed that there is only a single aspect $t$ for a given query or topic. Therefore,
\RBU can be expressed as:
$$\mbox{RBU$@$k}(\vec{d})=\sum_{i=1}^kp^i \left( \left(r(d_i,t)\prod_{j=1}^{i-1}(1-r(d_j,t))\right)-e \right)$$
In addition, the \emph{relevance contribution} condition is assumed, i.e., the relevance
of a single document does not completely cover the user information need: $r(d,t)\ll 1$. Therefore, we can assume
that $$\prod_{j=1}^{i-1}(1-r(d_j,t))\simeq \prod_{j=1}^{i-1} 1=1$$
Finally, the four constraints compare rankings with the same length. This means
that we can eliminate the user cost component $e$, which is $e\sum_{i=1}^kp^i$
for every ranking in comparison. Under all
these assumptions, \RBU is equivalent to the traditional $\mbox{RBP}$ metric~\cite{RBP}:
$$\mbox{RBU$@$k}(\vec{d})\propto\sum_{i=1}^k p^i r(d_i)=\mbox{RBP$@$k}(\vec{d})$$
According to the study by {\citet{Amigo_2013}}, $\mbox{RBP}$ satisfies the four constraints enumerated above.
}
\end{proof}
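The reduction above can be checked numerically: under a single aspect, near-zero relevance grades, and equal ranking lengths, RBU score differences coincide with RBP differences (the effort terms cancel). The following Python sketch is our own illustration, with hypothetical function names.

```python
def rbp(rels, p=0.8):
    """Rank-Biased Precision: sum_i p^i * r(d_i)."""
    return sum(p ** (i + 1) * r for i, r in enumerate(rels))

def rbu_single_aspect(rels, p=0.8, e=0.01):
    """RBU with a single aspect and w(t) = 1, as in the proof's reduction."""
    remaining, score = 1.0, 0.0
    for i, r in enumerate(rels, start=1):
        score += p ** i * (r * remaining - e)
        remaining *= 1.0 - r  # residual novelty prod_{j<i} (1 - r_j)
    return score
```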
\begin{proof}
\textit{\RBU satisfies the \Conf constraint (Eq.~\ref{eq:Conf}).}
{\small
\RBU can be expressed as:
{\scriptsize
$$ \mbox{RBU$@$k}(\vec{d})=\sum_{i=1}^kp^i \sum_{t\in\mathcal{T}}\left(w(t)r(d_i,t)\prod_{j=1}^{i-1}(1-r(d_j,t))\right)-e\sum_{i=1}^kp^i$$
}
then
{\scriptsize
\begin{align*}
&\mbox{RBU}@k\left(\vec{d}\right)>\mbox{RBU}@k\left(\vec{d}, d^{\neg rel}\right)\Leftrightarrow\\
&\mbox{RBU$@$k}(\vec{d})>\mbox{RBU$@$k}(\vec{d})-p^{n+1}e\Leftrightarrow 0>-p^{n+1}e \\
\end{align*}
}
}
\end{proof}
\begin{proof}
\textit{\RBU satisfies the \Nov constraint (Eq.~\ref{eq:Nov}).}
{\small
Under the constraint conditions:
$\mbox{RBU}\left(\vec{d}_{d_i\leftrightarrow d_i'}\right)>\mbox{RBU}\left(\vec{d}\right)$ is equivalent to:
{\scriptsize
\begin{align*}
&p^i\sum_{t\in\mathcal{T}}\left(w(t)r(d_i',t)\prod_{j=1}^{i-1}(1-r(d_j,t))\right)>
p^i\sum_{t\in\mathcal{T}}\left(w(t)r(d_i,t)\prod_{j=1}^{i-1}(1-r(d_j,t))\right)\Leftrightarrow\\
&\sum_{t\in\mathcal{T}}\left(w(t)r(d_i',t)\right)>
\sum_{t\in\mathcal{T}}\left(w(t)r(d_i,t)\right)\Leftarrow \sum_{t\in\mathcal{T}}\left(r(d_i',t)\right)>
\sum_{t\in\mathcal{T}}\left(r(d_i,t)\right)\Leftarrow\\
&\forall t\in\mathcal{T} \ldotp r(d_i',t)>r(d_i,t)
\end{align*}
}}
\end{proof}
\begin{proof}
\textit{\RBU satisfies the \Red constraint (Eq.~\ref{eq:Red}).}
{\small
Under the constraint conditions:
{\scriptsize
\begin{align*}
\mbox{RBU}
\left(\vec{d},d'\right) &> \mbox{RBU}\left(\vec{d},d\right) \Leftrightarrow\\
& w(t')r(d',t')\prod_{j=1}^{|\vec{d}|}(1-r(d_j,t'))>w(t)r(d,t)\prod_{j=1}^{|\vec{d}|}(1-r(d_j,t))\Leftrightarrow\\
&\prod_{j=1}^{|\vec{d}|}(1-r(d_j,t'))>\prod_{j=1}^{|\vec{d}|}(1-r(d_j,t))\Leftrightarrow\\
&(1-r_c)^{\left|\left\{d_i\in\vec{d} | r(d_i,t')=r_c\right\}\right|}>(1-r_c)^{\left|\left\{d\in\vec{d} | r(d,t)=r_c\right\}\right|}\Leftrightarrow\\
&\left|\left\{d_i\in\vec{d} | r(d_i,t)=r_c\right\}\right|>\left|\left\{d\in\vec{d} | r(d,t')=r_c\right\}\right|
\end{align*}
}
}
\end{proof}
\begin{proof}
\textit{\RBU satisfies the \MRed constraint (Eq.~\ref{eq:MRed}).}
{\small
Under the constraint conditions:
{\scriptsize
\begin{align*}
\mbox{RBU}\left(\vec{d},d'\right)&>\mbox{RBU}\left(\vec{d},d\right)\Leftrightarrow\\
& w(t')r(d',t')\prod_{j=1}^{|\vec{d}|}(1-r(d_j,t'))>w(t)r(d,t)\prod_{j=1}^{|\vec{d}|}(1-r(d_j,t))\Leftrightarrow\\
&\prod_{j=1}^{|\vec{d}|}(1-r(d_j,t'))>\prod_{j=1}^{|\vec{d}|}(1-r(d_j,t))\Leftarrow \forall d_i\in \vec{d} \ldotp r(d_i,t)>r(d_i,t')
\end{align*}
}
}
\end{proof}
\begin{proof}
\textit{\RBU satisfies the \Sat constraint (Eq.~\ref{eq:Sat}).}
{\small
Let $d_n$ fully satisfy aspect $t$, i.e., $r(d_n,t)=r_{max}=1$.
Then
{\scriptsize
\begin{align*}
\mbox{RBU}\left(\vec{d},d_{n+1}\right)&=\sum_{i=1}^np^i \sum_{t'\in\mathcal{T}}\left(w(t')r(d_i,t')\prod_{j=1}^{i-1}(1-r(d_j,t'))\right)-e\sum_{i=1}^np^i+\\
&\sum_{t'\in\mathcal{T}}\left(w(t')r(d_{n+1},t')(1-r(d_n,t'))\prod_{j=1}^{n-1}(1-r(d_j,t'))\right)-ep^{n+1}\\
\end{align*}
}
Given that $\forall t'\neq t \ldotp r(d_{n+1},t')=0$, it is equivalent to:
{\scriptsize
\begin{align*}
\mbox{RBU}\left(\vec{d},d_{n+1}\right)&=\sum_{i=1}^np^i \sum_{t'\in\mathcal{T}}\left(w(t')r(d_i,t')\prod_{j=1}^{i-1}(1-r(d_j,t'))\right)-e\sum_{i=1}^np^i+\\
&\left(w(t)r(d_{n+1},t)(1-r(d_n,t))\prod_{j=1}^{n-1}(1-r(d_j,t))\right)-ep^{n+1}\\
\end{align*}
}
Given that $1-r(d_n,t)=0$, we obtain:
{\scriptsize
\begin{align*}
\mbox{RBU}\left(\vec{d},d_{n+1}\right)&=\sum_{i=1}^np^i \sum_{t'\in\mathcal{T}}\left(w(t')r(d_i)\prod_{j=1}^{i-1}(1-r(d_j,t'))\right)-e\sum_{i=1}^np^i+0=\mbox{RBU}\left(\vec{d}\right)\\
\end{align*}
}
}
\end{proof}
\vspace{-1.2em}
\begin{proof}
\textit{\RBU satisfies the \AspRel constraint (Eq.~\ref{eq:AspRel}).}
{\small
Under the constraint conditions:
{\scriptsize
\begin{align*}
& \mbox{RBU}\left(\vec{d}_{d_i\leftrightarrow d_i'}\right)>\mbox{RBU}\left(\vec{d}\right)\Leftrightarrow\\
& w(t')r(d_i',t')\prod_{j=1}^{i-1}(1-r(d_j,t'))>w(t)r(d_i,t)\prod_{j=1}^{i-1}(1-r(d_j,t))\Leftrightarrow \\
& w(t')r(d_i',t')\prod_{j=1}^{i-1}1>w(t)r(d_i,t)\prod_{j=1}^{i-1}1\Leftrightarrow w(t')>w(t)
\end{align*}
}
\vspace{-3em}
}
\end{proof}
\section{Introduction}
The covariance matrix and the precision matrix (the inverse of the covariance matrix) are among the most fundamental quantities in statistics, as they describe the dependence between the variables (components) of a multivariate observation. Not surprisingly, they play pivotal roles in many statistical problems, including graphical models, classification, clustering, and regression, which are used extensively in application areas such as biology, engineering, and finance. Take the Gaussian graphical model (GGM) as an example. The precision matrix provides great insight into the conditional dependence structure of a graph, since the conditional independence of the $i$-th and $j$-th variables of an undirected Gaussian Markov random field is equivalent to the $(i,j)$-th entry of the precision matrix being zero, {see a recent review by \citet{pourahmadi2013high}}. Such results have helped researchers to identify complex network structures in applications such as high-throughput biological data, {for example, in \citet{wille2004sparse}. }
Estimating the precision matrix, especially under the high dimensional setting where the variable dimension $p$ can possibly be larger than the sample size $n$, is a particularly challenging problem. Given the current prevalence of high dimensional data and the wide utility of precision matrix, this problem has received significant attention in recent literature. When the sample covariance matrix is positive definite, its inverse is a natural estimator for the precision matrix. However, the inverse of sample covariance matrix as an estimator is demonstrated to have poor performance in numerous studies \citep{johnstone2001distribution,paul2007asymptotics,pourahmadi2013high}.
Moreover, when $p >n$, the precision matrix estimation problem is ill-posed without further restricting assumptions. {One of the most commonly used assumptions to remedy this issue is to assume that the precision matrix is sparse, i.e., a large majority of its entries are zero \citep{dempster1972covariance}, which turns out to be quite useful in practice in the aforementioned GGM owing to its interpretability. Another possibility is to assume a sparse structure on the covariance matrix through, for example, a sparse factor model \citep{carvalho2008high, fan2008high, fan2011high, buhlmann2011statistics, pourahmadi2013high, rovckova2016fast}, to obtain a sparse covariance matrix estimator, and invert it to estimate the precision matrix. However, the precision matrix estimator obtained from this strategy is not guaranteed to be sparse, which is important for interpretability in our context. }%
Regularization provides a general framework for dealing with high dimensional problems. There are two major approaches that utilize regularization to estimate the precision matrix and its sparse structure.
The first one is \emph{regression based approach} where a sparse regression model is estimated separately for each column to identify and estimate the nonzero elements of that column in the precision matrix $\Theta$ \citep{meinshausen2006high,peng2009partial,Zhou09,khare2015convex}. This approach focuses more on the sparse selection of the entries, and the estimated precision matrix is generally not positive definite.
The other is \emph{likelihood based approach} which aims to optimize the negative log-likelihood function \eqref{eq:loglik} together with an element-wise penalty term on $\Theta$ \citep{yuan2007model,banerjee2008model,friedman2008sparse,fan2009network}. Among these methods, Graphical Lasso (GLasso) \citep{friedman2008sparse} is the most commonly used owing to its scalability. GLasso estimator for the precision matrix is also not guaranteed to be positive definite. \cite{mazumder2012graphical} proposed algorithms that modify GLasso and ensure positive definiteness of the estimated precision matrix. Apart from these two general approaches, regularization can be applied with other forms of loss functions, an example of which is the CLIME estimator proposed by \cite{cai2011constrained}.
{Theoretical properties of the likelihood based methods for Gaussian graphical models have been studied in the literature.} In \citet{rothman2008sparse}, \citet{lam2009sparsistency} and \citet{loh2015regularized}, estimation error rates in Frobenius norm have been established for likelihood based estimators with Lasso and SCAD penalties. For GLasso, stronger results in entrywise maximum norm are obtained by \cite{ravikumar2011high} under a restrictive assumption on $\Theta$, called the irrepresentable assumption, { when the multivariate distribution of the observations has an exponential tail (such as sub-Gaussian distributions). A slower rate is shown when the distribution has a polynomial tail (such as $t$-distributions with sufficiently large degrees of freedom).} Similar results on estimation error rate in maximum norm are shown by \cite{loh2014support} for non-convex penalized estimators under sub-Gaussian distributions but their results require beta-min conditions. \cite{cai2011constrained} provide such results for CLIME estimator both under exponential and polynomial tails with the assumption that all the absolute column sums of $\Theta$ are bounded.
The precision matrix estimation problem is less studied under the Bayesian framework possibly due to the high computational cost associated with MCMC when $p$ is large. \cite{marlin2009sparse} proposed a Bayesian model and a variational Bayes algorithm for GGMs with a block structure. \cite{wang2012bayesian} proposed a Bayesian version of GLasso and the associated posterior computation algorithms. \cite{carvalho2009objective}, \cite{dobra2011bayesian}, \cite{Wangli2012} and \cite{Mohammadi2015} used G-Wishart priors and proposed stochastic search methods for the computation. \cite{banerjee2015bayesian} studied a Bayesian approach with mixture prior distributions that have a point-mass and a Laplace distribution. They provide posterior consistency results and a computational approach using Laplace approximation. With the exception of \cite{banerjee2015bayesian}, theoretical properties of Bayesian methods for sparse precision matrix estimation have not been studied. The results of \cite{banerjee2015bayesian} are on estimation error rate in Frobenius norm similar to those of \cite{rothman2008sparse}, but assume the underlying distribution to be Gaussian.
In this paper, we propose a new Bayesian approach for estimation and structure recovery for GGMs. Specifically, to achieve adaptive shrinkage, we model the off-diagonal elements of $\Theta$ using a continuous spike-and-slab prior with a mixture of two Laplace distributions, {which is known as the spike-and-slab Lasso prior in \citet{rockova2015bayesian}, \citet{rovckova2016fast} and \citet{rovckova2016spike}}. Continuous spike-and-slab priors are commonly used for high dimensional regression \citep{George93, Ishwaran05,Narisetty14} and a Gibbs sampling algorithm is often used for posterior computation. However, such a Gibbs sampler for our problem has an extremely high computational burden and instead {we propose a novel EM algorithm for computation, which is motivated by the EM algorithm for linear regression from \citet{rovckova2014emvs} and the one for factor models from \cite{rovckova2016fast}.} Our novel computational and theoretical contributions in the paper are summarized as follows:
\begin{itemize}
\item We propose a new approach for precision matrix estimation, named BAGUS, short for
``\textbf{BA}yesian regularization for
\textbf{G}raphical models with \textbf{U}nequal \textbf{S}hrinkage." The adaptive (unequal) shrinkage is due to the non-convex penalization by our Bayesian formulation.
\item Although the Gaussian likelihood is used in our Bayesian formulation, our theoretical results hold beyond GGMs. We show that our procedure enjoys the optimal estimation error rate of $O_p\left(\sqrt{\frac{\log p}{n}} \right)$ in the entrywise maximum norm and selection consistency under both exponential and polynomial tail distributions with very mild conditions. Our theoretical result is stronger than the best existing result by \cite{cai2011constrained}, as we assume boundedness of $\Theta$ in operator norm, which is weaker than the assumption of bounded absolute column sums of $\Theta$.
\item We propose a fast EM algorithm which produces a maximum a posteriori (MAP) estimate of the precision matrix and (approximate) posterior probabilities on all edges that can be used to learn the graph structure. The EM algorithm has computational complexity comparable to the state-of-the-art GLasso algorithm \citep{mazumder2012graphical}.
\item Our algorithm is guaranteed to produce a symmetric and positive definite estimator unlike many existing estimators including CLIME.
\end{itemize}
The remaining part of the paper is organized as follows. In Section 2, we present our model and prior set-up in the Bayesian framework along with a discussion on its penalized likelihood perspective. In Section 3, we provide our theoretical consistency results followed by the details of the EM algorithm in Section 4. Section 5 presents numerical results in extensive simulation studies and a real application for predicting telephone center call arrivals. Proofs, technical details, and R code used for empirical results can be found in Online Supplementary Material.
\subsection*{Notation}
For a $p \times q$ matrix $A=[a_{ij}]$, we denote its Frobenius norm by $\|A\|_{F}=\sqrt{\sum_{(i,j)}a_{ij}^2}$, its entrywise $\ell_{\infty}$ norm (i.e., maximum norm) by $ \| A \|_{\infty}={\max_{(i,j)}|a_{ij}|}$, and its spectral norm by $ \| A \|_2=\sup \{ \| A\mathbf{x}\|: \mathbf{x} \in \mathbb{R}^q, \|\mathbf{x} \|\le 1\}$, where $\| \mathbf{x}\|$ denotes the $\ell_2$ norm of the vector $\mathbf{x}$.
For a
$p \times p$ square matrix $A$, let $A^{-}$ denote the off-diagonal elements of $A$, $A^{+}$ the diagonal elements of $A$, and $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ the smallest and the largest eigenvalues, respectively. For a square symmetric matrix $A$, {its spectral norm is equal to its maximum eigenvalue, that is, $\| A \|_2 = \lambda_{\max}(A)$, and}
its maximum absolute column sum (i.e., the $\ell_{1}/\ell_{1}$ operator norm) is the same as its maximum absolute row sum (i.e., the $\ell_{\infty}/\ell_{\infty}$ operator norm), denoted by $\vertiii{A}_{\infty}={\max_{1\le j\le p}\sum_{i=1}^{p}|a_{ij}|}$.
Let $\Theta^0=[\theta^{0}_{ij}]$ and $\Sigma^0=[\sigma^{0}_{ij}]$ denote the true precision matrix and covariance matrix, respectively. Let ${S^0}=\{(i,j):\theta_{ij}^0\ne0 \}$ denote the index set of all nonzero entries in $\Theta^0$ and ${{S^0}^{c}}$ is its complement.
Define ${\theta^0_{\max}}=\max_{ij}|\theta^0_{ij}|$ and $M_{\Sigma^0}=\vertiii{\Sigma^0}_{\infty}$. Define $\Gamma=\Theta^{-1}\otimes\Theta^{-1}$ as the Hessian matrix of $g:=-\log\det(\Theta)$. {$\Gamma_{(j,k),(l,m)}$ corresponds to the second partial derivative $\frac{\partial^2 g}{\partial \theta_{jk}\partial \theta_{lm}}$, and for any two subsets $T_1$ and $T_2$ of $\{(i,j):1\le i,j\le p\}$, we use $\Gamma_{T_1T_2}$ to denote the matrix with rows and columns of $\Gamma$ indexed by $T_1$ and $T_2$ respectively.}
We further denote $M_{\Gamma^0}=\vertiii{{\Gamma^{0}}^{-1}_{{S^0S^0}}}_{\infty}=\vertiii{({\Theta^0}\otimes{\Theta^0})_{{S^0S^0}}}_{\infty}$. Define the column sparsity $d=\max_{i=1,2,\dots,p} \mathrm{card}\{j: \theta_{ij}^{0}\ne 0\}$ and the off-diagonal sparsity $s = \mathrm{card}({S^0}) - p$, where $\mathrm{card}$ denotes the cardinality of the set in its argument.
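For concreteness, the matrix norms used throughout can be sketched in a few lines of Python, operating on matrices represented as nested lists (an illustration only; the helper names are ours):

```python
def frobenius(A):
    """Frobenius norm ||A||_F: square root of the sum of squared entries."""
    return sum(a * a for row in A for a in row) ** 0.5

def max_norm(A):
    """Entrywise maximum norm ||A||_inf."""
    return max(abs(a) for row in A for a in row)

def op_inf_norm(A):
    """Maximum absolute row sum, the l_inf/l_inf operator norm |||A|||_inf."""
    return max(sum(abs(a) for a in row) for row in A)
```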
\section{Bayesian Regularization for Graphical Models}
Our data consist of a random sample of $n$ observations $Y_1, \dots, Y_n$ which are assumed to be $iid$ $p$-variate random vectors following a multivariate distribution with mean zero and precision matrix $\Theta$. In short, we use the following notation:
$$Y_1, \dots, Y_n \overset{iid}{\sim} N(0,\Theta^{-1}).$$ Our primary goal is to estimate $\Theta$ and identify the sparse structure in the elements of $\Theta$. For the Bayesian framework, we work with the Gaussian $\log$-likelihood given by
\begin{equation} \label{eq:loglik}
\ell(\Theta) = \log f(Y_1, \dots, Y_n | \Theta)=\frac{n}{2}\Big(\log \det (\Theta) - \text{tr} (S\Theta)\Big)
\end{equation}
where $S=[s_{ij}] = \frac{1}{n}\sum Y_i Y_i^t$ denotes the sample covariance matrix of the data. We note that in spite of working with the Gaussian likelihood, we allow the observations to have non-Gaussian distributions including those with polynomial tails.
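For a concrete reading of \eqref{eq:loglik}, the following Python sketch evaluates the Gaussian log-likelihood for $2\times 2$ matrices (illustrative only; matrices are nested lists and the determinant is computed directly):

```python
import math

def gaussian_loglik(theta, s, n):
    """l(Theta) = (n/2) * (log det(Theta) - tr(S Theta)) for 2x2 matrices.

    theta: precision matrix Theta; s: sample covariance S; n: sample size.
    """
    det = theta[0][0] * theta[1][1] - theta[0][1] * theta[1][0]
    # tr(S Theta) = sum over i, j of s_ij * theta_ji
    trace = sum(s[i][j] * theta[j][i] for i in range(2) for j in range(2))
    return 0.5 * n * (math.log(det) - trace)
```

For example, with $\Theta = S = I_2$ and $n=10$, $\log\det(\Theta)=0$ and $\mathrm{tr}(S\Theta)=2$, so the log-likelihood is $-n$.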
\subsection{Bayesian Formulation}
Next we describe our prior specification on the following two groups of parameters: the diagonal entries $\{\theta_{ii} \}$ and the off diagonal entries, where the latter is reduced to the upper triangular entries $\{\theta_{ij}: i<j \}$ due to symmetry.
On the upper triangular entries $\theta_{ij}$ ($i < j$), we place the following spike-and-slab prior, {known as the spike-and-slab Lasso prior developed in a series of work by \citet{rockova2015bayesian}, \citet{rovckova2016fast} and \citet{rovckova2016spike}}:
\begin{equation} \label{def:SS:prior}
\pi(\theta_{ij}) = \frac{\eta}{2v_1} \exp \Big \{ -\frac{|\theta_{ij}|}{v_1} \Big \} + \frac{1-\eta}{2v_0} \exp \Big \{ -\frac{|\theta_{ij}|}{v_0} \Big \} ,
\end{equation}
which is a mixture of two Laplace distributions with different scales $v_0$ and $v_1$, where $v_1 > v_0 > 0.$ The mixture distribution (\ref{def:SS:prior}) represents our prior belief that $\theta_{ij}$ either takes a value of relatively large magnitude, modeled by the Laplace distribution with scale parameter $v_1$ (i.e., the \enquote{slab} component), or takes a value of very small magnitude, modeled by the Laplace distribution with scale parameter $v_0$ (i.e., the \enquote{spike} component).
In the traditional spike-and-slab prior, the \enquote{spike} component is set to be a point mass at zero, which corresponds to our setting with $v_0=0.$ Here we use a continuous version of the spike-and-slab prior, in which $v_0$ is set to be nonzero but relatively small compared with $v_1$. Continuous spike-and-slab priors with normal components were proposed by \cite{George93} in the linear regression context and their high dimensional shrinkage properties were studied by \cite{Ishwaran05} and \cite{Narisetty14}. {\citet{rockova2015bayesian} and \citet{rovckova2016spike} considered the spike-and-slab Lasso prior given by \eqref{def:SS:prior} for linear regression and
studied the adaptive shrinkage property of such priors as well as various asymptotic properties concerning the posterior mode.} An advantage of continuous spike-and-slab priors is that the continuous prior distributions on $\theta_{ij}$ allow the use of efficient algorithms that do not require switching the active dimension of the parameter.
For the diagonal entries $\theta_{ii}$ of the precision matrix, a weakly informative Exponential prior is specified since $\theta_{ii}$ do not need to be shrunk to zero: $$ \pi(\theta_{ii}) = \tau\exp(-\tau\theta_{ii}) \mathbbm{1}(\theta_{ii} > 0).$$
Although $\Theta$ can be fully parameterized by these two groups of parameters, they are not independent as the determinant of $\Theta$ needs to be positive. Therefore, the support for the joint prior distribution on elements of $\Theta$ is restricted such that $\Theta$ is positive definite, i.e., $\Theta\succ 0.$ In addition, we constrain the spectral norm of $\Theta$ to be upper bounded: $\| \Theta\|_2\le B$. Such a constraint is not very restrictive since it often appears in the assumptions for theoretical studies of precision matrix estimation anyway: a large spectral norm of $\Theta$ implies high correlation among variables, a setup in which most methods fail. An important consequence of this constraint will be discussed in Section \ref{sec:convexity}.
So our prior distribution on $\Theta$ is given by
\begin{equation} \label{eq:overall:prior}
\pi(\Theta)= \prod_{i<j} \pi(\theta_{ij}) \prod_i \pi(\theta_{ii}) \mathbbm{1}(\Theta\succ 0)\mathbbm{1}( \| \Theta \|_2\le B).
\end{equation}
\subsection{The Penalized Likelihood Perspective}\label{sec:penalize}
If estimation of $\Theta$ is of main interest, then a natural choice is the MAP estimator $\tilde{\Theta}$ that maximizes the posterior distribution $\pi(\Theta | Y_1, \cdots, Y_n)$. This is equivalent to minimizing the following objective function under the constraint $\|\Theta\|_2\le B$ and $\Theta\succ 0$:
\begin{eqnarray}
L(\Theta) &=& -\log\pi(\Theta | Y_1, \cdots, Y_n) \nonumber \\
&= & -\ell(\Theta) - \sum_{i<j}\log \pi(\theta_{ij} | \eta) - \sum_{i}\log\pi(\theta_{ii}|\tau) + \text{Const.}\nonumber \\
&= & \frac{n}{2} \Big (\text{tr}(S\Theta) -\log \det(\Theta) \Big ) + \sum_{i<j} \text{pen}_{SS}(\theta_{ij}) + \sum_{i} \text{pen}_1(\theta_{ii}) + \text{Const.} \label{eq:Loss}
\end{eqnarray}
where
\begin{equation}
\text{pen}_{SS}(\theta) = -\log \left [ \Big(\frac{\eta}{2v_1}\Big) e^{-\frac{|\theta|}{v_1}} +\Big(\frac{1-\eta}{2v_0} \Big) e^{-\frac{|\theta|}{v_0}} \right ] \label{eq:pen_SS}
\end{equation}
and $\text{pen}_1(\theta)= \tau|\theta|. $
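To see the shape of $\text{pen}_{SS}$ numerically, consider the following Python sketch of \eqref{eq:pen_SS} (an illustration with our own default hyperparameter choices $\eta=0.5$, $v_0=0.1$, $v_1=10$, and shifted so the penalty at zero is zero):

```python
import math

def pen_ss(theta, eta=0.5, v0=0.1, v1=10.0):
    """Spike-and-slab Lasso penalty pen_SS, shifted so that pen_ss(0) = 0."""
    def neg_log_mix(t):
        slab = (eta / (2.0 * v1)) * math.exp(-abs(t) / v1)
        spike = ((1.0 - eta) / (2.0 * v0)) * math.exp(-abs(t) / v0)
        return -math.log(slab + spike)
    return neg_log_mix(theta) - neg_log_mix(0.0)
```

Near zero the penalty grows steeply (slope close to $1/v_0$), while for large $|\theta|$ the slab dominates and the slope flattens to $1/v_1$, which is the near-$L_0$ behavior visible in Figure~\ref{fig:pencom}.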
If viewed from the penalized likelihood perspective, the objective function $L(\Theta)$ employs two penalty functions induced by our Bayesian formulation. The penalty function on the diagonal entries, $\text{pen}_1(\theta)$, is the same as the Lasso penalty. The hyperparameter $\tau$ is suggested to be small, so that the Lasso penalty mainly shrinks the estimates of $\theta_{ii}$ rather than truncating them to zero.
More importantly, the penalty function on the off-diagonal entries, $\text{pen}_{SS}(\theta)$, coming from the spike-and-slab prior has an interesting shrinkage property. To highlight the difference between this penalty and the Lasso penalty, we plotted them in Figure \ref{fig:pencom}. We also compare our spike-and-slab penalty with the spike-and-slab penalty that arises by using a mixture of two normal distributions \citep{george1997approaches} instead of Laplace distributions:
\[ \text{pen}_{NSS}(\theta) = -\log \left [ \Big(\frac{\eta}{\sqrt{2 \pi v_1}}\Big) e^{-\frac{\theta^2}{2 v_1}} +\Big(\frac{1-\eta}{\sqrt{2 \pi v_0}} \Big) e^{-\frac{\theta^2}{2 v_0}} \right ],
\]
where ``$NSS$" in the subscript stands for normal spike-and-slab prior. In Figure \ref{fig:pencom}, we set $v_0=0.1$ and $v_1=10$ for both $\text{pen}_{SS}(\theta)$ and $\text{pen}_{NSS}(\theta)$. Also, we subtract their values at $0$ so the corresponding penalty at $\theta=0$ is zero. We can see that the penalty function we use, $\text{pen}_{SS}(\theta)$, provides the best continuous approximation of the $L_0$ penalty among the three.
To gain more insight about the penalty functions, we plot the derivatives/subgradient of the spike-and-slab penalty $\text{pen}_{SS}(\theta)$ in Figure \ref{fig:devcom}.
A simple calculation reveals that
\begin{equation} \label{eq:bias}
\frac{\partial}{\partial |\theta|} \text{pen}_{SS}(\theta) =\frac{1}{v_1} \frac{\frac{\eta}{2v_1}e^{-\frac{|\theta|}{v_1}}}{\pi(\theta)} +\frac{1}{v_0} \frac{\frac{1-\eta}{2v_0}e^{-\frac{|\theta|}{v_0}}}{\pi(\theta)} = \frac{w (\theta)}{v_1} + \frac{1-w (\theta)}{v_0},
\end{equation}
which is a weighted average of $1/v_1$ and $1/v_0$ with the weight $w (\theta)$ being the conditional probability of $\theta$ belonging to the ``slab'' component \citep{rovckova2016spike}.
Recall that the derivative of a penalty function should ideally attain its maximum at zero and then decay gradually to 0 (asymptotically), because a derivative that does not decay with $|\theta|$ introduces bias for large coefficients and affects the performance in finite sample settings \citep{fan2001variable,loh2014support}.
This is the case with $\text{pen}_{SS}(\theta)$:
As $|\theta|$ becomes larger, the mixing weight $w$ gets larger, which leads to a smooth transition from a large penalty $1/v_0$ produced from the ``spike'' component, to a smaller penalty $1/v_1$ from the ``slab'' component. From Figure \ref{fig:devcom}, we can see that $\text{pen}_{NSS}(\theta)$ does not have this desired property, and neither does the Lasso penalty.
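The adaptive behavior of \eqref{eq:bias} can be sketched directly: the weight $w(\theta)$ below is the conditional slab probability, and the returned derivative interpolates between $1/v_0$ (near zero) and $1/v_1$ (for large $|\theta|$). This is illustrative code with our own default hyperparameters:

```python
import math

def shrinkage_weight(theta, eta=0.5, v0=0.1, v1=10.0):
    """Return (w(theta), derivative of pen_SS) as in Eq. bias.

    w(theta): conditional probability that theta comes from the slab component;
    derivative: w/v1 + (1 - w)/v0, the effective amount of shrinkage at theta.
    """
    slab = (eta / (2.0 * v1)) * math.exp(-abs(theta) / v1)
    spike = ((1.0 - eta) / (2.0 * v0)) * math.exp(-abs(theta) / v0)
    w = slab / (slab + spike)
    return w, w / v1 + (1.0 - w) / v0
```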
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=1\linewidth,height=0.27\linewidth]{pen.pdf}
\end{center}
\caption{Plot of different penalty functions. (a): penalty induced from the spike-and-slab prior with a mixture of Laplace distributions; (b): penalty induced from the spike-and-slab prior with a mixture of normal distributions; (c): Lasso penalty. }\label{fig:pencom}
\end{figure}
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=1\linewidth]{pen_derivative.pdf}
\end{center}
\caption{Plot of the derivative/subgradient of the penalty functions
}
\label{fig:devcom}
\end{figure}
\subsection{Posterior Maximization and Local Convexity}\label{sec:convexity}
The non-convexity of our spike-and-slab penalty $\text{pen}_{SS}(\theta)$ leads to the desired shrinkage and selection behavior, but it could bring additional computational challenges, as the posterior objective function $L(\Theta)$ is no longer convex and may have multiple local optima. However, this is not a problem in our case, thanks to the upper bound on the spectral norm of $\Theta$ in (\ref{eq:overall:prior}). More specifically, the following theorem ensures that the optimization of $L(\Theta)$ with the spectral norm constraint is a convex optimization problem; that is, locally within the spectral norm ball, we are dealing with convex optimization, resulting in a unique MAP estimate. This result is motivated by Lemma 6 of \cite{loh2014support}.
\begin{thm} \label{lemma:convex}
If $B < (2nv_0)^{\frac{1}{2}}$, then $\min_{\Theta\succ 0,\|\Theta\|_2\le B}L(\Theta)$
is a strictly convex problem.
\end{thm}
\begin{proof}
Decompose $L(\Theta)$ as the sum of the following two terms: $-\ell(\Theta)-\frac{1}{8v_0}\|\Theta\|_F^2$ and $ \sum_{i<j} \text{pen}_{SS}(\theta_{ij}) + \sum_{i} \text{pen}_1(\theta_{ii}) +\frac{1}{8v_0}\|\Theta\|_F^2$. We prove the theorem by checking that the second-order subgradient of each term in the decomposition of $L(\Theta)$ is positive, which implies that both terms are strictly convex.
The second-order subgradient of the first term is given by $ -\nabla^2\ell(\Theta) - \frac{1}{4v_0}$, where $-\nabla^2\ell(\Theta)=\frac{n}{2}(\Theta\otimes\Theta)^{-1}$. The smallest eigenvalue of $-\nabla^2\ell(\Theta) $ can be bounded as:
$$\lambda_{\min} \left(-\nabla^2\ell(\Theta) \right) =\frac{n}{2}\lambda_{\max}^{-1}(\Theta\otimes\Theta) =\frac{n}{2}\lambda_{\max}^{-2}(\Theta) > \frac{1}{4v_0},$$
where the last inequality holds because $\|\Theta\|_2\le B<(2nv_0)^{\frac{1}{2}}$ implies that $\lambda^2_{\max}(\Theta)< 2nv_0$, which leads to $\frac{n}{2}\lambda_{\max}^{-2}(\Theta)>\frac{1}{4v_0}.$ Therefore, the first term in the decomposition is strictly convex.
We now consider the second-order subgradient of $\text{pen}_{SS}(\theta_{ij})$:
\begin{equation*}
|\text{pen}_{SS}{''}(\theta_{ij})|=\frac{(\frac{1}{v_0}-\frac{1}{v_1})\frac{\eta v_0}{(1-\eta)v_1}e^{\frac{\theta_{ij}}{v_0}-\frac{\theta_{ij}}{v_1}}}{(\frac{\eta v_0}{(1-\eta)v_1}e^{\frac{\theta_{ij}}{v_0}-\frac{\theta_{ij}}{v_1}}+1)^2} \leq \frac{1}{4} \left(\frac{1}{v_0}-\frac{1}{v_1} \right) <\frac{1}{4v_0},
\end{equation*}
where the first inequality holds because $\frac{|x|}{(1+|x|)^2} \leq \frac{1}{4}$ for any $x$. Since $|\text{pen}_{SS}{''}(\theta_{ij})|<\frac{1}{4v_0}$ while the term $\frac{1}{8v_0}\|\Theta\|_F^2$ contributes a second derivative of $\frac{1}{4v_0}$ in each entry, the second term in the decomposition of $L(\Theta)$ is also strictly convex, and the theorem is proved.
\end{proof}
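As a numerical sanity check (not part of the original argument), the key eigenvalue bound in the proof is easy to verify: for any $\Theta$ with $\|\Theta\|_2 < (2nv_0)^{1/2}$, the smallest eigenvalue of $-\nabla^2\ell(\Theta)$ strictly exceeds $1/(4v_0)$. A minimal sketch in Python, with illustrative values of $n$, $p$, and $v_0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, v0 = 100, 5, 0.01          # illustrative values, not from the paper
B = np.sqrt(2 * n * v0)          # the spectral-norm bound of the theorem

# Draw a random positive definite Theta and rescale so that ||Theta||_2 < B.
A = rng.standard_normal((p, p))
Theta = A @ A.T + p * np.eye(p)
Theta *= 0.9 * B / np.linalg.eigvalsh(Theta).max()

# lambda_min of (n/2)(Theta kron Theta)^{-1} equals (n/2) * lambda_max(Theta)^{-2}.
lam_min_hess = (n / 2) / np.linalg.eigvalsh(Theta).max() ** 2
assert lam_min_hess > 1 / (4 * v0)   # the strict-convexity condition holds
```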
\subsection{Uncovering the Sparse Structure} \label{sec:sparse_structure}
In many applications, identifying the zero entries in $\Theta$ (referred to as structure estimation or graph selection) is also of major interest along with the estimation of $\Theta$. Inference on the latent sparse structure of $\Theta$ or equivalently the sparse structure of a graph can be directly induced from our spike-and-slab prior. We can re-express the spike-and-slab prior (\ref{def:SS:prior}) as the following two-level hierarchical prior:
\begin{equation}
\left \{ \begin{array}{lcl}
\theta_{ij} \mid r_{ij}=0 & \sim & \textsf{DE}(0,v_0) \\
\theta_{ij} \mid r_{ij}=1 & \sim & \textsf{DE}(0,v_1)
\end{array} \right. \label{eq:spike:slab}
\end{equation}
where $r_{ij}$ follows
\begin{equation}
\label{prior:r_ij}
r_{ij} \mid \eta \ \sim \textsf{Bern}(\eta).
\end{equation}
Here $\textsf{DE}(0, v)$ denotes the double exponential (Laplace) distribution with scale $v$, and $\textsf{Bern}(\eta)$ denotes the Bernoulli distribution with success probability $\eta$.
We can view the binary variable $r_{ij}$ as the indicator for the sparsity pattern: $r_{ij}
=1$ implies $\theta_{ij}$ being the \enquote{signal} (i.e., from the slab component), and $r_{ij}=0$ implies $\theta_{ij}$ being the \enquote{noise} (i.e., from the spike component). {In the fully Bayesian approach, the posterior inclusion probability for an edge connecting $i$ and $j$ is given by }
\[ \mathbb{P}(r_{ij}=1 | Y_1, \dots, Y_n ) = \int \mathbb{P}(r_{ij}=1 | \theta_{ij} ) \pi(\theta_{ij} | Y_1, \dots, Y_n) d \theta_{ij},
\]
{
which is the integrated probability of $\theta_{ij}$ being from the slab component (corresponding to $r_{ij}=1$) with respect to the posterior distribution of $\theta_{ij}$. In our analysis, we approximate this probability by using the MAP estimator $\tilde{\Theta}$ as follows:}
\begin{equation} \label{eq:p_ij}
p_{ij} = \mathbb{P}(r_{ij}=1 | \tilde{\theta}_{ij} ) = \frac{ \left ( \frac{\eta}{2v_1} \right) e^{-\frac{|\tilde{\theta}_{ij}|}{v_1}}}{\left (\frac{\eta}{2v_1}\right) e^{-\frac{|\tilde{\theta}_{ij}|}{v_1}} +\left (\frac{1-\eta}{2v_0} \right) e^{-\frac{|\tilde{\theta}_{ij}|}{v_0}} }.
\end{equation}
We can then threshold $p_{ij}$ to identify the edges: if $p_{ij}$ is greater than a pre-specified threshold such as 0.5, then the $(i,j)$ pair is identified as an edge.
Denote $\mathbb{P}(r_{ij}=1 | \tilde{\theta}_{ij} = 0)$ by $p^{\star}(0)$. The quantity $\frac{1}{p^{\star}(0)}-1 = v_1 (1-\eta)/(v_0 \eta)$ represents the interplay of all the parameters $(v_0, v_1, \eta)$ and it plays an important role both in our asymptotic analysis for precision matrix estimation that will be presented in the next section, and also in the analysis of \citet{rovckova2016spike} and \citet{rockova2015bayesian} for high-dimensional linear regression.
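To make the thresholding rule concrete, the sketch below (hyper-parameter values are illustrative, and the function name is ours) evaluates $p_{ij}$ from (\ref{eq:p_ij}) at a MAP entry $\tilde{\theta}_{ij}$ and checks the identity $\frac{1}{p^{\star}(0)}-1 = v_1(1-\eta)/(v_0\eta)$ noted above:

```python
import numpy as np

def inclusion_prob(theta, v0, v1, eta):
    """p_ij from eq. (p_ij), evaluated at a MAP entry theta = tilde-theta_ij."""
    slab  = (eta / (2 * v1)) * np.exp(-np.abs(theta) / v1)
    spike = ((1 - eta) / (2 * v0)) * np.exp(-np.abs(theta) / v0)
    return slab / (slab + spike)

v0, v1, eta = 0.01, 1.0, 0.5     # illustrative hyper-parameter values

# A clearly nonzero entry is classified as an edge; a zero entry is not.
assert inclusion_prob(0.5, v0, v1, eta) > 0.5
assert inclusion_prob(0.0, v0, v1, eta) < 0.5

# The identity 1/p*(0) - 1 = v1 (1 - eta) / (v0 eta) from the text.
p_star_0 = inclusion_prob(0.0, v0, v1, eta)
assert abs((1 / p_star_0 - 1) - v1 * (1 - eta) / (v0 * eta)) < 1e-9
```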
\section{Theoretical Results}
Let $\tilde{\Theta}$ denote the MAP estimator, the unique minimizer of the loss function (\ref{eq:Loss}). In this section, we provide theoretical results on the estimation accuracy of $\tilde{\Theta}$. We also show that the structure selected by thresholding the posterior probabilities $p_{ij}$ matches the true sparse structure with probability going to one.
\subsection{Conditions}
\subsubsection{Tail Conditions on the Distribution of $Y$}
In our analysis, we do not restrict ourselves to the situation where the true distribution of $Y$ is Gaussian. Instead, we analyze two cases according to the tail conditions on the true distribution of a $p$-variate random vector $Y = (Y^{(1)},Y^{(2)},...,Y^{(p)})$.
\begin{enumerate}
\item[(C1)] Exponential tail condition: Suppose that there exists some $0<\eta_1<1/4$ such that $\frac{\log p}{n}<\eta_1$ and
\begin{equation}
Ee^{t{Y^{(j)}}^2}\le K \text{ for all }|t|\le\eta_1, \text{ for all } j=1, \dots, p
\end{equation}
where $K$ is a finite constant.
\item[(C2)]Polynomial tail condition: Suppose that for some $\gamma$, $c_1>0$, $p\le c_1n^\gamma$, and for some $\delta_0>0$,
\begin{equation}
E|Y^{(j)}|^{4\gamma+4+\delta_0}\le K, \text{ for all } j=1, \dots, p.
\end{equation}
\end{enumerate}
Note that when $Y$ follows a Gaussian or a sub-Gaussian distribution, condition (C1) is satisfied. { When $p=n$, condition (C2) is satisfied for $t$-distributions with degrees of freedom greater than $8$. When $p=n^2$, condition (C2) is satisfied for $t$-distributions with degrees of freedom greater than $12$.} The same tail conditions are also considered by \cite{cai2011constrained} and \cite{ravikumar2011high}.
\subsubsection{Conditions on $\Theta^0$}
We make the following assumption on the true precision matrix $\Theta^0$ for studying estimation accuracy.
\begin{enumerate}
\item[(A1)] $\lambda_{\max}(\Theta^0)\le 1/k_1<\infty$ or equivalently $0<k_1\le\lambda_{\min}(\Sigma^0)$,
{ where $k_1$ is some constant greater than $0$.}
\end{enumerate}
{Note that because the largest eigenvalue of $\Theta^0$ is bounded, all the elements of $\Theta^0$ are bounded, and cannot grow with $p$ and $n$.}
In addition, we make the minimum signal assumption below for studying sparse structure recovery.
\begin{enumerate}
\item[(A2)] The minimal \enquote{signal} entry satisfies $\label{cond:C}
{\underset{(i,j)\in {S^0}}{\min}{|\theta^0_{ij}|}}\geq K_0 {\sqrt{\frac{\log p}{n}}}$,
where $K_0>0$ is a sufficiently large constant not depending on $n$.
\end{enumerate}
Similar and in some cases stronger assumptions are imposed in other theoretical analysis of precision matrix estimation and sparse structure recovery \citep{rothman2008sparse, lam2009sparsistency, ravikumar2011high, cai2011constrained, loh2014support}. For a comparison of various theoretical results, see the discussion in Section \ref{sec:compare}.
\subsection{Theoretical Results}
The following theorem gives estimation accuracy under the entrywise $\ell_{\infty}$ norm. In particular, the following theorem implies that with an appropriate choice of $(v_0, v_1, \eta, \tau)$ and $B$, we could achieve the $O_p\left(\sqrt{\frac{\log p}{n}}\right)$ error rate for distributions with an exponential or a polynomial tail.
\begin{thm}(Estimation accuracy in entrywise $\ell_{\infty}$ norm)\label{Thm:estimate}\\
Assume condition (A1) holds. For any pre-defined constants $C_3>0$, $\tau_0>0$, define $C_1=\eta_1^{-1}(2+\tau_0+\eta_1^{-1}K^2)$ when the exponential tail condition (C1) holds, and $C_1=\sqrt{({\theta^0_{\max}}+1)(4+\tau_0)}$ when the polynomial tail condition (C2) holds. Assume that\\
i) the prior hyper-parameters $v_0,v_1,\eta,$ and $\tau$ satisfy
\begin{equation}
\begin{cases}
\label{eq:hyper}
\frac{1}{nv_1}={C_3}\sqrt{\frac{\log p}{n}}(1-\varepsilon_1), ~~ \frac{1}{nv_0}>C_4\sqrt{\frac{\log p}{n}} \\
\frac{v_1^2(1-\eta)}{v_0^2\eta}\le p^\varepsilon, ~\text{ and } ~ \tau\le C_3\frac{n}{2}\sqrt{\frac{\log p}{n}}
\end{cases}
\end{equation}
for some constants $ \varepsilon_1>0$, $C_4 > C_3$ and some sufficiently small $\varepsilon$, \\
ii) the spectral norm $B$ satisfies $\frac{1}{k_1}+2d(C_1+C_3)M_{\Gamma^0}\sqrt{\frac{\log p}{n}}<B<(2nv_0)^{\frac{1}{2}}$, and\\
iii) the sample size $n$ satisfies $\sqrt{n}\ge M\sqrt{\log p}$,\\
~~~where $M=\max\Big\{2d(C_1+C_3)M_{\Gamma^0}{\max\Big({3M_{\Sigma^0}},{3M_{\Gamma^0}{M_{\Sigma^0}}^3},\frac{2}{k_1^2}\Big)},\frac{2C_3\varepsilon_1}{k_1^2}\Big\}$.\\
Then, the MAP estimator $\tilde{\Theta}$ satisfies
\begin{equation} \label{eq:elementwise:l_infinity}
\|\tilde{\Theta}-\Theta^0\|_{\infty}\le2(C_1+C_3)M_{\Gamma^0}\sqrt{\frac{\log p}{n}}
\end{equation}
with probability greater than $1-\delta_1$, where $\delta_1=2p^{-\tau_0}$ when condition (C1) holds, and $\delta_1=O(n^{-\delta_0/8}+p^{-\tau_0/2})$ when condition (C2) holds.
\end{thm}
Theorem \ref{Thm:estimate} shows that the estimation error of our MAP estimator $\tilde{\Theta}$ can be controlled through an interplay between the parameters $(v_0,v_1,\eta, \tau)$ and $B$. To help readers understand this result, we provide an explanation of the required conditions.
In our proof, the term $\frac{1}{n} \text{pen}{'}_{SS}(\theta)$, which decreases from $1/(n v_0)$ to $1/(n v_1)$ when $|\theta|$ increases from zero to infinity, serves as an adaptive thresholding value. The conditions in \eqref{eq:hyper} ensure the following properties of this adaptive thresholding rule: 1) to eliminate noise, $1/(nv_0)$ is set to be bigger than $\sqrt{(\log p)/n}$, the typical noise level in high-dimensional analysis; 2) to reduce bias due to thresholding, $1/(nv_1)$ is set to be of a constant order of $\sqrt{(\log p)/n}$, or much smaller by varying $\varepsilon_1$; 3) the thresholding level should be close to $1/(n v_1)$ when $\theta$ is of a certain order bigger than the noise level $\sqrt{(\log p)/n}$, which is ensured by the upper bound on $\frac{v_1^2(1-\eta)}{v_0^2 \eta}$.
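The adaptive-thresholding behavior described above can be illustrated numerically. For $\theta>0$, $\frac{1}{n}\text{pen}'_{SS}(\theta)$ equals $\big(p^{\star}(\theta)/v_1 + (1-p^{\star}(\theta))/v_0\big)/n$, where $p^{\star}(\theta)$ is the conditional probability of the slab component. The sketch below (hyper-parameter values and function names are ours) checks that it decreases from roughly $1/(nv_0)$ toward $1/(nv_1)$:

```python
import numpy as np

n, v0, v1, eta = 100, 0.01, 1.0, 0.5   # illustrative values

def slab_prob(theta):
    # Conditional probability that theta comes from the slab component.
    slab  = (eta / (2 * v1)) * np.exp(-np.abs(theta) / v1)
    spike = ((1 - eta) / (2 * v0)) * np.exp(-np.abs(theta) / v0)
    return slab / (slab + spike)

def adaptive_threshold(theta):
    # (1/n) * pen'_SS(theta) for theta > 0: a probability-weighted mixture
    # of the two Laplace rates 1/v1 and 1/v0.
    p = slab_prob(theta)
    return (p / v1 + (1 - p) / v0) / n

thetas = np.linspace(0.0, 0.5, 200)
thr = adaptive_threshold(thetas)
assert np.all(np.diff(thr) <= 1e-12)        # decreasing in |theta|
assert thr[0] <= 1 / (n * v0) + 1e-12       # starts near 1/(n v0)
assert thr[-1] >= 1 / (n * v1) - 1e-12      # settles near 1/(n v1)
```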
The upper bound on $B$ in condition $ii)$ is to ensure that our objective function $L(\Theta)$ is strictly convex. However, $B$ cannot be too small, otherwise, even if $L(\Theta)$ is convex, the constrained local mode cannot achieve the desired estimation accuracy $\|\tilde{\Theta}-\Theta^0\|_{\infty}=O_p \Big (\sqrt{\log p/n} \Big )$.
\begin{comment}
{Our proof is inspired by the proofs from \cite{rothman2008sparse}
and \cite{ravikumar2011high}. We use a constructive proof approach to prove the results following three steps:
\begin{itemize}
\item \emph{Step 1:} Construct a solution set $\mathcal{A}$ for the constraint problem:
$$
\arg\min_{\Theta\succ 0,\|\Theta\|_2\le B,\Theta_{\mathcal{B}^c}=0}L\left(\Theta\right),$$ by defining
\begin{equation*}
\mathcal{A} =\left \{ \Theta: \frac{n}{2}\left(S-\Theta^{-1}\right)_{\mathcal{B}}+Z_{\mathcal{B}} =0, \Theta\succ 0, \| \Theta\|_2\le B \right \},
\end{equation*}
where $\mathcal{B}= \{(i,j):|\theta_{ij}^0|>2(C_1+C_3)M_{\Gamma^0}\sqrt{\log p/n}\}\cup \{(i,j):i=j\}$.
\item \emph{Step 2:} Show that there exists a $\tilde{\Theta} \in \mathcal{A}$ satisfying
$\|\tilde{\Theta}-\Theta^0\|_{\infty}=O_p \Big (\sqrt{\log p/n} \Big ).$
The lower bound on $B$ in condition $ii)$ is used to ensure: when $\tilde{\Theta}$ satisfies $\|\tilde{\Theta}-\Theta^0\|_{\infty}=O_p \Big (\sqrt{\log p/n} \Big )$, its spectral norm is bounded by $B$, i.e., $||\tilde{\Theta}||_2 < B$.
\item \emph{Step 3:} Prove that $\tilde{\Theta}$, which is positive definite by construction, is a local minimizer of the loss function $L(\Theta)$ by showing $L(\Theta) \ge L(\tilde{\Theta})$ for any $\Theta$ in a small neighborhood of $\tilde{\Theta}$.
The interplay between $(v_0,v_1,\eta)$ is used to achieve the ideal curvature of $\text{pen}_{SS}(\cdot)$ and prove this result. We require $1/(nv_0)$ to be strong enough to control the sampling error $S_{ij} - (\Theta^0)^{-1}_{ij}$ and estimation error $(\Theta^0)^{-1}_{ij} - (\tilde{\Theta}^{-1})_{ij}$ when $(i,j) \in \mathcal{B}^c$, and let $1/(nv_1)$ to be weak, i.e., grow at any rate slower than $O(C_3\sqrt{\log p/n})$. In order to prove $\tilde{\Theta}$ to be local minimal, we also need the first and second derivative of the penalty function $\text{pen}_{SS}(\theta_{ij})$ bounded when $(i,j) \in \mathcal{B}$, we achieve that by specifying $1/(nv_1)$ and $v_1^2(1-\eta)/(v_0^2\eta)$ as in the theorem.
In addition, since $L(\Theta)$ is strictly convex with the upper bound on $B$ in condition $ii)$, we conclude that the estimator $\tilde{\Theta}$ we constructed is the unique minimizer such that $\|\tilde{\Theta}-\Theta^0\|_\infty=O_p\left(\sqrt{\log p/n}\right)$.
\end{itemize}}
\end{comment}
When $M_{\Gamma^0}, M_{\Sigma^0}$ remain constant as a function of $(n,p,d)$, Theorem \ref{Thm:estimate} guarantees that with proper tuning, an estimation error bound of $O(\sqrt{\log p/n})$ in $\ell_\infty$ norm can be achieved for the MAP estimator $\tilde{\Theta}$ with high probability. Similar results can be found in \citet{ravikumar2011high} and \citet{loh2014support} when $M_{\Gamma^{0}}, M_{\Sigma^{0}}$ are constants. { If $M_{\Gamma^0}, M_{\Sigma^0}$ are of the order $O(p)$, then we require the sample size $n$ to grow faster than the order $O(p)$.}
Theorem \ref{Thm:estimate} follows from a more general result stated as Theorem \ref{thm:proof} in Appendix A from the Online Supporting Material. The specific definition for $C_4$ and the one for $\varepsilon$ are also provided in Theorem \ref{thm:proof} in the Online Supporting Material.
We now present the following result on estimation accuracy of $\tilde{\Theta}$ in terms of Frobenius norm, spectral norm and $\ell_{\infty}/\ell_{\infty}$ operator norm. This result is based on Theorem \ref{Thm:estimate} and Lemma \ref{lemma:5} from Appendix A.
\begin{thm}(Estimation accuracy in other norms)\label{thm:corollary}\\
Under the same conditions of Theorem \ref{Thm:estimate}, \\
(i) if the exponential tail condition (C1) holds, then
\begin{equation}
\begin{split}
& \| \tilde{\Theta}-\Theta^0 \|_{F}<2\Big(\eta_1^{-1}(2+\tau_0+\eta_1^{-1}K^2)+C_3\Big)M_{\Gamma^0}\sqrt{\frac{(p+s)\log p}{n}},\\
&\vertiii{\tilde{\Theta}-\Theta^0}_{\infty}, \| \tilde{\Theta}-\Theta^0\|_{2}<2\Big(\eta_1^{-1}(2+\tau_0+\eta_1^{-1}K^2)+C_3\Big)M_{\Gamma^0}\min\{d,\sqrt{p+s}\}\sqrt{\frac{\log p}{n}},
\end{split}
\end{equation}
with probability greater than $1-2p^{-\tau_0}$; \\
(ii) if the polynomial tail condition (C2) holds, then
\begin{equation} \label{eq:cond:elementwies:l_infinity}
\begin{split}
&\| \tilde{\Theta}-\Theta^0 \|_{F}<2(\sqrt{({\theta^0_{\max}}+1)(4+\tau_0)}+C_3)M_{\Gamma^0}\sqrt{\frac{(p+s)\log p}{n}},\\
&\vertiii{\tilde{\Theta}-\Theta^0}_{\infty},\|\tilde{\Theta}-\Theta^0\|_{2}<2(\sqrt{({\theta^0_{\max}}+1)(4+\tau_0)}+C_3)M_{\Gamma^0}\min\{d,\sqrt{p+s}\}\sqrt{\frac{\log p}{n}},
\end{split}
\end{equation}
with probability greater than $1-O(n^{-\delta_0/8}+p^{-\tau_0/2}).$
\end{thm}
Next, we discuss selection consistency for the sparse structure before providing a comparison of our results with the existing results in Section \ref{sec:compare}.
As discussed in Section \ref{sec:sparse_structure}, we propose to estimate ${S^0}$, the set of nonzero elements of $\Theta$, by thresholding the inclusion probabilities $p_{ij}$ defined in (\ref{eq:p_ij}). The following theorem shows that {$\hat{S^0}=\{(i,j): p_{ij}\ge T\}$}, the set of edges with posterior probability at least $T$, is a consistent estimator of ${S^0}$ for any $0<T<1$.
\begin{thm}(Selection consistency)
Assume the same conditions in Theorem \ref{Thm:estimate} and condition (A2) with the following restriction:
\begin{equation}
\label{eq:gap}
\epsilon_0< \frac{1}{\log p}\log \left(\frac{v_1(1-\eta)}{v_0\eta} \right)<(C_4-C_3)\big ( K_0-2(C_1+C_3)M_{\Gamma^0} \big )
\end{equation}
for some arbitrary small constant $\epsilon_0>0$. Then, for any $T$ such that $0<T<1$, we have
\begin{equation*}
\mathbb{P} \Big ( \hat{{S^0}}={S^0} \Big ) \rightarrow 1.
\end{equation*}\label{Thm:select}
\vspace{-2ex}
\end{thm}
A proof of Theorem \ref{Thm:select} is provided in Appendix B.
{In our model, sparsity is induced by an interplay between the parameters $v_0,v_1$ and $\eta$ through $\log\left(v_1\left(1-\eta\right)/(v_0\eta)\right)$. When $\log\left(v_1\left(1-\eta\right)/(v_0\eta)\right)$ falls in the gap mentioned in Equation \eqref{eq:gap}, the selection consistency can be achieved.}
\subsection{Comparison with Existing Results}\label{sec:compare}
We compare our results with those of GLasso \citep{ravikumar2011high}, CLIME \citep{cai2011constrained} and the non-convex regularization based method by \cite{loh2014support}.
In \cite{ravikumar2011high}, the irrepresentable condition, $\vertiii{\Gamma_{{{S^0}^c}{S^0}}\Gamma_{{S^0}{S^0}}^{-1}}_{\infty} \le 1-\alpha$, is needed to establish the rate of convergence in entrywise $\ell_{\infty}$ norm. Such an assumption is quite restrictive, and is not needed for our results. In addition, under the polynomial tail condition, the rate of convergence established in \cite{ravikumar2011high} is $O_p \left (\sqrt{\frac{p^c}{n}}\right)$, slower than our rate $O_p \left (\sqrt{\frac{\log p}{n}} \right)$.
The theoretical results for CLIME \citep{cai2011constrained} are similar to ours in terms of estimation accuracy. However, the main difference lies in the assumption on $\Theta^0$. We assume boundedness of the largest eigenvalue of $\Theta^0$, which is strictly weaker than boundedness of $\vertiii{\Theta^0}_\infty$ (the $\ell_{\infty}/\ell_{\infty}$ operator norm), the assumption imposed for CLIME. That our assumption is weaker follows from H\"older's inequality. To illustrate the strict difference between these assumptions, we consider the following precision matrix as an example:
\begin{equation} \label{eq:star:structure}
\theta^0_{ii} = 1, \ \forall i; \quad \theta^0_{1,i}= \theta^0_{i,1} = \frac{1}{\sqrt{p}} \text{ if } i \ne 1; \quad \theta^0_{ij}=0 \text{ if } i \ne j, \ i \ne 1, \text{ and } j \ne 1.
\end{equation}
The precision matrix above has the so-called star structure, which is frequently observed in networks with a hub. In Figure \ref{star}, we plot the maximum eigenvalue and the maximum of the absolute row sum of this matrix with varying dimension $p$. We can see that it is easy to satisfy the upper bound on maximum eigenvalue, but not the upper bound on the $\ell_{\infty}/\ell_{\infty}$ operator norm, since the latter is diverging with $p$.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.8\linewidth]{star_norm.pdf}
\end{center}
\caption{Plots of the maximum eigenvalue (solid line) and the $\ell_{\infty}/\ell_{\infty}$ operator norm (dashed line) for precision matrices with the star structure (\ref{eq:star:structure}). {Our model assumption corresponds to an upper bound on the solid line, while the one for CLIME corresponds to an upper bound on the dashed line.}}\label{star}
\end{figure}
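The contrast shown in Figure \ref{star} is easy to reproduce. The sketch below (function name is ours) builds the star-structured precision matrix (\ref{eq:star:structure}) and compares the two norms as $p$ grows:

```python
import numpy as np

def star_precision(p):
    # Star structure: unit diagonal, first row/column equal to 1/sqrt(p).
    Theta = np.eye(p)
    Theta[0, 1:] = Theta[1:, 0] = 1.0 / np.sqrt(p)
    return Theta

lam_max, op_inf = [], []
for p in (50, 200, 800):
    Theta = star_precision(p)
    lam_max.append(np.linalg.eigvalsh(Theta).max())
    op_inf.append(np.abs(Theta).sum(axis=1).max())   # l_inf/l_inf operator norm

assert max(lam_max) < 2.0            # largest eigenvalue stays bounded
assert op_inf[-1] > 3 * op_inf[0]    # operator norm grows, roughly like sqrt(p)
```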
The major difference between our results and those from \cite{loh2014support} is also in the weakness of the assumptions. The beta-min condition (minimal signal strength) is needed for the rate of estimation accuracy established in \cite{loh2014support}, while we do not require this assumption for estimation consistency. In addition, their results are only available for sub-Gaussian distributions, while we consider a much broader class of distributions, i.e., distributions with exponential or polynomial tails.
\section{Computation with EM Algorithm}
We now describe how to compute the MAP estimate $\tilde{\Theta}.$ Directly optimizing the negative log of the posterior distribution (\ref{eq:Loss}) is not easy. One numerical complication comes from the penalty term (\ref{eq:pen_SS}): it has a summation inside the logarithm due to the mixture prior distribution on $\theta_{ij}.$ The expectation-maximization (EM) algorithm is a popular tool in handling such a complication.
Recall the two-level hierarchical representation of the prior on $\theta_{ij}$ introduced in Section \ref{sec:sparse_structure}. Define $R$ as the $p \times p$ matrix with binary entries $r_{ij}$. Then the full posterior distribution $\pi(\Theta,R | Y_1, \cdots, Y_n)$ is proportional to
\begin{equation} \label{eq:joint}
f(Y_1, \dots, Y_n |\Theta) \cdot \Big [ \prod_{i<j}\pi(\theta_{ij}|r_{ij})\pi(r_{ij}|\eta) \Big ] \cdot \Big [ \prod_{i}\pi(\theta_{ii}|\tau)\Big]\mathbbm{1}(\Theta\succ 0)\mathbbm{1}(\|\Theta\|_2\le B).
\end{equation}
We treat $R$ as latent and derive an EM algorithm to obtain the MAP estimate of $\Theta$ from the M-step and the posterior distribution of $R$ from the E-step upon convergence. The E-step of our algorithm is inspired by the {EM algorithm for linear regression from \citet{rovckova2014emvs} and the one for factor models from \cite{rovckova2016fast}}, and the M-step of our algorithm is inspired by the optimization procedure used by GLasso \citep{banerjee2008model, friedman2008sparse,mazumder2012graphical}.
\subsection{The E-step}
At the E-step, we first compute the distribution of $R$ given the parameter value from the previous iteration $\Theta^{(t)}$. Note that the binary indicator $r_{ij}$ does not appear in the likelihood function, and only appears in \eqref{eq:spike:slab} and \eqref{prior:r_ij} in the prior specification. It is easy to show that $r_{ij} \mid \Theta^{(t)}, Y_1, \dots, Y_n$ follows $\textsf{Bern}(p_{ij})$ with
\begin{equation} \label{P}
\log \frac{p_{ij}}{1 - p_{ij}} = \log\frac{v_0}{v_1}+\log\frac{\eta}{1 - \eta} -\frac{|\theta_{ij}^{(t)}|}{v_1}+\frac{|\theta_{ij}^{(t)}|}{v_0}.
\end{equation}
Next we evaluate the expectation of $\log \pi(\Theta,R | Y_1, \cdots, Y_n)$ with respect to $\pi(R | \Theta^{(t)}, Y_1, \dots, Y_n)$, which gives rise to the so-called $Q$ function:
\begin{equation}
\begin{aligned}
\label{Q}
Q(\Theta|\Theta^{(t)})
&= \Big\{\frac{n}{2} \log \det(\Theta)-\frac{n}{2} \text{tr}(S\Theta) + \sum_{i}(\log\tau-\tau\theta_{ii}) \\
&+\sum_{i<j}p_{ij} \Big [ -\log(2v_1)-\frac{|\theta_{ij}|}{v_1} +\log\eta \Big ] \\
&+\sum_{i<j}(1-p_{ij})\Big [ -\log(2v_0)-\frac{|\theta_{ij}|}{v_0} +\log(1-\eta) \Big ] \Big\}\mathbbm{1}(\Theta\succ 0)\mathbbm{1}(\|\Theta\|_2\le B).
\end{aligned}
\end{equation}
\subsection{The M-step}
At the M-step of the $(t+1)$th iteration, we sequentially update $\Theta$ in a column by column fashion to maximize $Q(\Theta|\Theta^{(t)}).$
Without loss of generality, we describe the updating rule for the last column of $\Theta$ while fixing the others.
For convenience, partition the covariance matrix $W$ and the precision matrix $\Theta$ as follows:
\[
W= \begin{bmatrix}
W_{11} & w_{12} \\
w_{12}^T &w _{22}\\
\end{bmatrix}
\quad \quad
\Theta= \begin{bmatrix}
\Theta_{11} & \theta_{12} \\
\theta_{12} ^T & \theta_{22}\\
\end{bmatrix}
\]
where $W_{11}$ is the $(p-1) \times (p-1)$ sub-matrix, $w_{12}$ is the $(p-1) \times 1$ vector at the last column of $W$ and $w_{22}$ is the diagonal entry at the bottom-right corner.
The sample covariance matrix $S$, the binary indicator matrix $R=[r_{ij}]$, and the conditional probability matrix $P=[p_{ij}]$ where $p_{ij}$ is defined in (\ref{P}) are also partitioned similarly. We list the following equalities from $W \Theta = \mathbf{I}_p$ which will be used in our algorithm:
\begin{equation} \label{eq:W:Theta:equalities}
\begin{bmatrix}
W_{11} & w_{12} \\
\cdot & w_{22}\\
\end{bmatrix}
=
\begin{bmatrix}
\Theta_{11}^{-1}+\frac{\Theta_{11}^{-1}\theta_{12}\theta_{12}^T\Theta_{11}^{-1}}{\theta_{22}-\theta_{12}^T\Theta_{11}^{-1}\theta_{12}} & -\frac{\Theta_{11}^{-1}\theta_{12}}{\theta_{22}-\theta_{12}^T\Theta_{11}^{-1}\theta_{12}} \\
\cdot &\frac{1}{\theta_{22}-\theta_{12}^T\Theta_{11}^{-1}\theta_{12}}
\end{bmatrix}.
\end{equation}
Given $\Theta_{11}$, to update the last column $(\theta_{12}, \theta_{22})$, we set the subgradient of $Q$ with respect to $(\theta_{12}, \theta_{22})$ to zero.
First, take the subgradient of $Q$ with respect to $\theta_{22}$:
\begin{equation}
\frac{\partial Q}{\partial \theta_{22}} =\frac{n}{2}\frac{1}{\theta_{22}-\theta_{12}^T\Theta_{11}^{-1}\theta_{12}}-\frac{n}{2} s_{22}-\tau =0.
\label{eq:identity}
\end{equation}
Due to Equations \eqref{eq:W:Theta:equalities} and \eqref{eq:identity}, we have
\begin{equation*}
w_{22} = \frac{1}{\theta_{22}-\theta_{12}^T\Theta_{11}^{-1}\theta_{12}} = s_{22}+\frac{2}{n}\tau,
\end{equation*}
which leads to the following update for $\theta_{22}$:
\begin{equation}
\theta_{22} \leftarrow \frac{1}{w_{22}}+\theta_{12}^T\Theta_{11}^{-1}\theta_{12}. \label{theta22}
\end{equation}
Next take the subgradient of $Q$ with respect to $\theta_{12}$: \begin{equation}
\begin{split}
\frac{\partial Q}{\partial \theta_{12}}=& \frac{n}{2} \Big ( \frac{-2\Theta_{11}^{-1}\theta_{12}}{\theta_{22}-\theta_{12}^T\Theta_{11}^{-1}\theta_{12}}-2s_{12} \Big )-\Big(\frac{1}{v_1}p_{12}+\frac{1}{v_0}(1-p_{12})\Big)\odot \text{sign}(\theta_{12})\\
=&n(-\Theta_{11}^{-1}\theta_{12}w_{22}-s_{12})-\Big(\frac{1}{v_1}p_{12}+\frac{1}{v_0}(1-p_{12})\Big)\odot \text{sign}(\theta_{12})=0,
\label{der1}
\end{split}
\end{equation}
where $A \odot B$ denotes the element-wise multiplication of two matrices. Here the second line of \eqref{der1} is due to the identities in (\ref{eq:W:Theta:equalities}). To update $\theta_{12}$, we then solve the following stationary equation for $\theta_{12}$ with coordinate descent, under the constraint $\|\Theta\|_2\le B$:
\begin{equation}
\begin{split}
&ns_{12}+nw_{22}\Theta_{11}^{-1}\theta_{12}+\Big(\frac{1}{v_1}P_{12}+ \frac{1}{v_0}(1-P_{12})\Big)\odot \text{sign}(\theta_{12})=0.
\end{split}
\label{eq:cor}
\end{equation}
The coordinate descent algorithm for updating $\theta_{12}$ is summarized in Algorithm \ref{Algo2}.
Since only one column is changed, checking the bound $\|\Theta\|_2\le B$ is computationally feasible (see Appendix C in the Supplementary Material for more details). In practice, we could also replace the constraint on $\|\Theta\|_2$ with a constraint on the largest absolute value of the elements in $\Theta$. In our empirical studies, this relaxation performs quite well.
\begin{algorithm}[!htbp]
\begin{algorithmic}
\State \textbf{Initialize} $\theta_{12}$ from the previous iteration as the starting point.
\Repeat
\For{$j$ in $1:(p-1)$}
\State Solve the following equation for ${\theta_{12}}_{j}$:
$${ns_{12}}_j+nw_{22}{\Theta_{11}^{-1}}_{j,\setminus j}{\theta_{12}}_{\setminus j}+nw_{22}{\Theta_{11}^{-1}}_{j,j}{\theta_{12}}_j+\Big[\Big(\frac{1}{v_1}P_{12}+ \frac{1}{v_0}(1-P_{12})\Big)\odot \text{sign}(\theta_{12})\Big]_{j}=0.$$
\EndFor
\Until Converge or Max Iterations Reached.
\State If $\|\Theta\|_2> B:$
\textbf{Return} $\theta_{12}$ from the previous iteration
\State Else: \textbf{Return} $\theta_{12}$
\end{algorithmic}
\caption{Coordinate Descent for $\theta_{12}$}
\label{Algo2}
\end{algorithm}
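In each coordinate update of Algorithm \ref{Algo2}, the stationary equation is piecewise linear in ${\theta_{12}}_j$ and therefore has a closed-form soft-thresholding solution: writing the equation as $c_j + a_j {\theta_{12}}_j + \lambda_j \,\text{sign}({\theta_{12}}_j) = 0$ with $a_j = n w_{22} {\Theta_{11}^{-1}}_{jj}$ and $c_j$ collecting the terms not involving ${\theta_{12}}_j$, the solution is ${\theta_{12}}_j = \mathcal{S}(-c_j, \lambda_j)/a_j$. A minimal Python sketch under these assumptions (function and variable names are ours; the spectral-norm check at the end of Algorithm \ref{Algo2} is omitted):

```python
import numpy as np

def soft_threshold(z, lam):
    """S(z, lam) = sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def update_theta12(theta12, inv_Theta11, s12, w22, lam, n,
                   n_iter=200, tol=1e-10):
    """Coordinate descent for the stationary equation (eq:cor).

    lam[j] = p12[j]/v1 + (1 - p12[j])/v0 is the entrywise penalty weight.
    """
    theta12 = theta12.copy()
    for _ in range(n_iter):
        max_change = 0.0
        for j in range(theta12.size):
            # c collects all terms of the j-th equation not involving theta12[j].
            c = n * s12[j] + n * w22 * (inv_Theta11[j] @ theta12
                                        - inv_Theta11[j, j] * theta12[j])
            a = n * w22 * inv_Theta11[j, j]
            new = soft_threshold(-c, lam[j]) / a
            max_change = max(max_change, abs(new - theta12[j]))
            theta12[j] = new
        if max_change < tol:
            break
    return theta12

# Demo on a tiny synthetic problem: with lam = 0 the update reduces to
# solving n*s12 + n*w22*inv_Theta11 @ theta12 = 0 exactly (Gauss-Seidel).
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)          # SPD stand-in for Theta11^{-1}
s12 = rng.standard_normal(3)
theta = update_theta12(np.zeros(3), A, s12, 1.0, np.zeros(3), 10.0)
assert np.allclose(theta, -np.linalg.solve(A, s12), atol=1e-6)
```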
When updating $(\theta_{12}, \theta_{22})$, we need $\Theta_{11}^{-1}$. Instead of directly computing the inverse of $\Theta_{11}$, we compute it from
$$\Theta_{11}^{-1} = W_{11}-w_{12}w_{12}^T/w_{22},$$
which is derived from (\ref{eq:W:Theta:equalities}).
After the update of $(\theta_{12}, \theta_{22})$ is completed, we ensure that $W \Theta = \mathbf{I}_p$ holds by updating $W_{11}$ and $w_{12}$ via the identities in (\ref{eq:W:Theta:equalities}). Therefore, we always keep a copy of the most up-to-date covariance matrix $W$ in our algorithm. {Note that we do not update $w_{22}$ here because the identity involving $w_{22}$ in $W \Theta = \mathbf{I}_p$ already holds; that is, if $w_{22}$ were updated using (\ref{eq:W:Theta:equalities}), its value would remain unchanged.}
\subsection{The Output}
The entire algorithm, BAGUS, is summarized in Algorithm \ref{Algo1}. After convergence, we extract the following output: the matrix $P$ of posterior inclusion probabilities for the sparse structure from the E-step, and the MAP estimator $\tilde{\Theta}$ from the M-step.
\begin{algorithm}[!htbp]
\begin{algorithmic}
\State \textbf{Initialize} $W=\Theta$=$\mathbf{I}$
\Repeat
\State Update $P$ with each entry $p_{ij}$ updated as $
\log \frac{p_{ij}}{1 - p_{ij}} \leftarrow\Big(\log\frac{v_0}{v_1}+\log\frac{\eta}{1 - \eta} -\frac{|\theta_{ij}^{(t)}|}{v_1}+\frac{|\theta_{ij}^{(t)}|}{v_0}\Big). $
\For{$j$ in $1:p$}
\State Move the $j$-th column and $j$-th row to the end (implicitly), namely $\Theta_{11}:=\Theta_{\setminus j \setminus j}$, $\theta_{12}:=\theta_{\setminus j j}$, $\theta_{22}:=\theta_{jj}$
\State Update $w_{22}$ using $w_{22} \leftarrow s_{22}+\frac{2}{n}\tau$
\State Update $\theta_{12}$ by solving (\ref{eq:cor}) with Coordinate Descent for $\theta_{12}$.
\State Update $\theta_{22}$ using $\theta_{22}\leftarrow \frac{1}{w_{22}}+\theta_{12}^T\Theta_{11}^{-1}\theta_{12}.$
\State Update {$W_{11}$, $w_{12}$} using (\ref{eq:W:Theta:equalities})
\EndFor
\Until Converge\\
\textbf{Return} $\Theta$, $P$
\end{algorithmic}
\caption{BAGUS}
\label{Algo1}
\end{algorithm}
To obtain an estimate of the sparse structure in $R$, we threshold the entries of $P$, namely:
$$ \hat{r}_{ij} = 1, \text{ if } P_{ij} \ge 0.5; \quad \hat{r}_{ij} = 0, \text{ otherwise.}$$
As shown in Theorem \ref{Thm:select}, thresholding entries of $P$ with any number $T$ such that $0<T<1$ could recover the true sparse structure with probability converging to $1.$
For many existing algorithms, the positive definiteness of the estimate of $\Theta$ is not guaranteed. For example, GLasso \citep{friedman2008sparse} can only ensure the positive definiteness of the estimate of the covariance matrix $W$, but not of the estimate of the precision matrix $\Theta$, as shown in \cite{mazumder2012graphical}. The following theorem shows that the MAP estimate $\tilde{\Theta}$ returned by our algorithm is guaranteed to be symmetric and positive definite.
\begin{thm}(Symmetry and positive definiteness) The estimate of $\Theta$ returned by BAGUS is always symmetric, and it is also positive definite if the initial value $\Theta^{(0)}$ is positive definite. \label{thm:sym}
\end{thm}
A proof is given in the Supplementary Material.
\subsection{Remarks}
\begin{itemize}
\item[] \textbf{Computation Cost.} In BAGUS, the computation cost is $O(p^2)$ for updating one column. There are $p$ columns in $\Theta$ to update, so the overall computational complexity of our algorithm is $O(p^3)$, which matches the computation cost for GLasso.
\item[] \textbf{Parameter Tuning.} BAGUS involves the following hyperparameters: $\eta$, $\tau$, $v_0$, and $v_1$. We always set $\eta = 0.5$ and $\tau = v_0$ so that only two parameters, $v_0$ and $v_1$, need to be tuned. {Parameter tuning here has an empirical Bayes flavor. In our simulations, we use the theoretical results to set a rough range for the hyper-parameters, and then use a BIC-like criterion to tune them: }
\begin{equation}
\begin{split}
&\text{BIC}=n\Big(\text{tr}(S\hat{\Theta})-\log \det(\hat{\Theta})\Big)+\log(n)\times\#\{(i,j):1\le i<j\le p, \hat{\theta}_{ij}\ne 0\}.
\end{split}
\end{equation}
The same BIC criterion is used by \cite{yuan2007model} while a similar BIC criterion with a regression based working likelihood is used by \cite{peng2009partial}.
\end{itemize}
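The BIC criterion above is straightforward to compute from an estimate $\hat{\Theta}$; a small sketch (function and variable names are ours):

```python
import numpy as np

def bic(S, Theta_hat, n):
    """BIC-like criterion for tuning (v0, v1): Gaussian fit term plus
    a log(n) charge per selected off-diagonal edge."""
    fit = n * (np.trace(S @ Theta_hat) - np.linalg.slogdet(Theta_hat)[1])
    p = Theta_hat.shape[0]
    iu = np.triu_indices(p, k=1)
    n_edges = np.count_nonzero(Theta_hat[iu])
    return fit + np.log(n) * n_edges

# Sanity check: for S = Theta_hat = I (no selected edges) the criterion is n*p.
assert abs(bic(np.eye(3), np.eye(3), 100) - 300.0) < 1e-9
```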
\section{Empirical Results}\label{sec:emp}
In this section, we compare our method with competitive alternatives on both simulated and real datasets and study the performance of our approach.
\subsection{Twelve Simulation Settings}
Following the simulation studies from related work \citep{yuan2007model,friedman2008sparse,peng2009partial,cai2011constrained}, we generate data $Y$ from a multivariate Gaussian distribution with mean $0$ and precision matrix $\Theta^0 = ( \theta_{ij}^0 )$.
We consider four different models, i.e., four different forms of $\Theta^0$. The first three have been considered in \cite{yuan2007model} and the fourth one is similar to the set-up in \cite{peng2009partial}.
\begin{enumerate}
\item Model 1 (star model): $\theta^0_{ii}$ = 1, $\theta^0_{1i}= \theta^0_{i1}$ = $\frac{1}{\sqrt{p}}$ for $i \ne 1$.
\item Model 2 ($AR(2)$ model): $\theta^0_{ii}$ = 1, $\theta^0_{i,i-1}= \theta^0_{i-1,i}$ = 0.5 and $\theta^0_{i,i-2} = \theta^0_{i-2,i}$ = 0.25.
\item Model 3 (circle model): $\theta^0_{ii}$ = 2, $\theta^0_{i,i-1}$ = $\theta^0_{i-1,i}$ = 1, and $\theta^0_{1p}=\theta^0_{p1}$=0.9.
\item Model 4 (random graph): The true precision matrix $\Theta^0$ is set as follows.
\begin{enumerate}
\item Set $\theta^0_{ii}=1$.
\item Randomly select $1.5\times p$ of the off-diagonal entries $\theta^0_{ij}$ ($i \ne j$) and draw their values uniformly from $[0.4,1]\cup[-1,-0.4]$; {set the remaining off-diagonal entries to zero.}
\item Calculate the sum of absolute values of the off-diagonal entries for each column, and then divide each off-diagonal entry by $1.1$ times the corresponding column sum. Average this rescaled matrix with its transpose to obtain a symmetric and positive definite matrix.
\item Multiply each entry by $\sigma^2$, which is set to be $3$.
\end{enumerate}
\end{enumerate}
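The Model 4 construction above can be sketched as follows. This is a best-effort reading of steps (a)--(d); the exact sampling details may differ from our implementation, and the guard against empty columns is our addition:

```python
import numpy as np

def random_graph_precision(p, sigma2=3.0, seed=0):
    rng = np.random.default_rng(seed)
    Theta = np.eye(p)                                    # (a) unit diagonal
    # (b) place 1.5*p nonzero off-diagonal values, uniform on +/-[0.4, 1]
    iu = np.column_stack(np.triu_indices(p, k=1))
    picks = iu[rng.choice(len(iu), size=int(1.5 * p), replace=False)]
    vals = rng.uniform(0.4, 1.0, len(picks)) * rng.choice([-1.0, 1.0], len(picks))
    for (i, j), v in zip(picks, vals):
        Theta[i, j] = Theta[j, i] = v
    # (c) divide each off-diagonal entry by 1.1 times its column's absolute
    #     off-diagonal sum, then average with the transpose to symmetrize
    off = Theta - np.eye(p)
    col_sums = np.abs(off).sum(axis=0)
    col_sums[col_sums == 0] = 1.0        # guard for empty columns (our addition)
    off = off / (1.1 * col_sums)
    Theta = np.eye(p) + (off + off.T) / 2
    # (d) multiply each entry by sigma^2
    return sigma2 * Theta

Theta0 = random_graph_precision(50)
assert np.allclose(Theta0, Theta0.T)       # symmetric by construction
assert np.allclose(np.diag(Theta0), 3.0)   # diagonal equals sigma^2
```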
For each model, we consider three cases with different values for $p$:
$$1)\ p=50; \quad 2)\ p=100; \quad 3) \ p=200.$$
So, we consider a total of $12$ simulation settings. In each setting, $n = 100$ observations are generated, and results are aggregated based on $50$ replications.
For estimation accuracy of $\Theta^0$, we use Frobenius norm (denoted as Fnorm). For selection accuracy, we consider three criteria: sensitivity, specificity and MCC (Matthews correlation coefficient):
$$\text{Specificity}=\frac{\text{TN}}{\text{TN+FP}}, \qquad \text{Sensitivity}=\frac{\text{TP}}{\text{TP+FN}},~~ \text{and}$$
$$\text{MCC}=\frac{\text{TP}\times \text{TN}-\text{FP}\times \text{FN}}{\sqrt{\text{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}},$$
where TP (true positive), FP (false positive), TN (true negative), and FN (false negative) are based on detection of edges in the graph corresponding to the true precision matrix $\Theta^0$. { MCC returns a value between $-1$ and $+1$, and the higher the MCC, the better the structure recovery. A coefficient of $+1$ represents a perfect structure recovery; we note that recovering all the edges simultaneously is very challenging and none of the existing methods is able to ensure that. In addition, we note} that it may not be meaningful to compare results across graphs with different values of $p$, because the level of sparsity changes with $p$, which makes it difficult to assess the difficulty of a setting based on $p$ alone. For instance, for most models considered in our simulation study, the level of sparsity increases with $p$, because of which all the methods have their specificity increasing as $p$ gets larger (see Tables \ref{model1}-\ref{model5}). So we recommend against comparing the results as $p$ changes and instead suggest comparing the results across different methods within the same setting.
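As an illustration, the four criteria can be computed from an estimated and a true precision matrix as in the following sketch; the threshold \texttt{tol} for declaring an entry nonzero is our assumption.

```python
import numpy as np

def graph_metrics(theta_hat, theta0, tol=1e-8):
    """Fnorm plus edge-recovery criteria; edges are nonzero off-diagonal entries."""
    fnorm = np.linalg.norm(theta_hat - theta0, "fro")
    off = ~np.eye(theta0.shape[0], dtype=bool)
    truth = np.abs(theta0[off]) > tol          # true edges
    est = np.abs(theta_hat[off]) > tol         # estimated edges
    tp = np.sum(est & truth)
    tn = np.sum(~est & ~truth)
    fp = np.sum(est & ~truth)
    fn = np.sum(~est & truth)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return fnorm, specificity, sensitivity, mcc
```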
In the simulation study, we compare our method, denoted as BAGUS, with the following alternatives: GLasso from \cite{friedman2008sparse}, SPACE from \cite{peng2009partial} and CLIME from \cite{cai2011constrained}. They are all shown to have estimation consistency under various conditions as discussed in Section \ref{sec:compare}. We also considered the regression based method from \cite{meinshausen2006high}, but the results are not presented here because tuning the parameters as suggested in \cite{meinshausen2006high} gave us ``NA'' for MCC in multiple scenarios considered here.
For each simulated data set, tuning for our model uses the aforementioned BIC criterion with the parameter set $\eta=0.5$,
{$v_0=\tau=(0.4,2,4,20)\times\sqrt{\frac{1}{n\log p}}$,}
and $v_1$ ranging over $v_0\times (1.5, 3, 5, 10)$. The tuning parameters for GLasso are chosen with 10-fold CV, the tuning parameters for SPACE are chosen using the BIC-like criterion proposed in \citet{peng2009partial}, and the tuning and estimation for the CLIME estimator are done using the R package \texttt{flare} \citep{flare} as suggested on the homepage\footnote{\url{http://www-stat.wharton.upenn.edu/~tcai/paper/html/Precision-Matrix.html}} of \cite{cai2011constrained}. For cross validation, the number of $\lambda$ values is set to 40. Results for all the simulated cases are summarized in Tables \ref{model1}-\ref{model5}.
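The hyper-parameter search just described amounts to a small grid. A hypothetical sketch is given below; the callables \texttt{fit} and \texttt{bic} stand in for the BAGUS fitting routine and the BIC criterion, neither of which is shown here.

```python
import itertools
import math

def tune_bagus(fit, bic, n, p):
    """Grid search over the hyper-parameter set quoted above (eta fixed at 0.5)."""
    base = math.sqrt(1.0 / (n * math.log(p)))
    best = None
    for c0, mult in itertools.product([0.4, 2, 4, 20], [1.5, 3, 5, 10]):
        v0 = tau = c0 * base          # v0 = tau = c0 * sqrt(1 / (n log p))
        v1 = v0 * mult                # v1 ranges over v0 * (1.5, 3, 5, 10)
        estimate = fit(v0=v0, v1=v1, tau=tau, eta=0.5)
        score = bic(estimate)
        if best is None or score < best[0]:
            best = (score, estimate, (v0, v1, tau))
    return best
```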
In almost all the settings considered, our method BAGUS performs the best in terms of both selection accuracy, i.e., MCC, and estimation accuracy, i.e., Fnorm. We believe that this is due to the adaptive nature of the Bayesian penalization and the weaker conditions under which the consistency results hold for BAGUS. Other than BAGUS, SPACE usually performs well in terms of sparse selection and GLasso performs well in terms of estimation accuracy. However, SPACE has a large estimation error in most cases and GLasso tends to have a smaller MCC. In our simulation study, the CLIME estimator did not perform very well. It is particularly worth noting that for the star graph, where the assumption for CLIME fails (see the discussion in Section \ref{sec:compare}), the performance of CLIME is particularly poor.
In Figure \ref{fig:ROC}, we plot the receiver operating characteristic (ROC) curves for all the methods considered under the different models by varying the hyper (tuning) parameters for the case with $p=50$. This shows the performance of the different methods with the effect of tuning removed. Our method BAGUS remains at the top in all the settings considered in terms of area under the ROC curve (AUC). The plot suggests that, except for the star graph, the performance of CLIME is not as poor as indicated by the selected graph, so the performance of CLIME could be improved by better tuning. However, for the star graph, CLIME still performs particularly poorly even in view of the ROC curve.
We also recorded the average of the estimated structures from the 50 replicates and compared it with the truth to get a visual understanding of the performance of the different methods, shown in Figures
\ref{fig:star}-\ref{fig:random}.
It is noticeable that GLasso and CLIME provide noisier estimates than BAGUS by selecting many entries that are zero in the true matrix; BAGUS and SPACE are sparser and appear closer to the true precision matrix. However, SPACE usually produces noisier estimates than BAGUS (for Models 1-3) and misses many true signals for Model 4. In summary, BAGUS provides a highly competitive performance across the models considered.
\begin{sidewaysfigure}[!htbp]
\caption{Average of the estimated precision matrices for the model with the {\bf star structure} }\vspace{2ex}
\includegraphics[width=1\linewidth]{Star.png}\vspace{4ex}
\label{fig:star}
\caption{Average of the estimated precision matrices for the model with the {\bf AR(2) structure}}\vspace{2ex}
\includegraphics[width=1\linewidth]{AR2.png}
\label{fig:AR2}
\end{sidewaysfigure}
\begin{sidewaysfigure}[!htbp]
\caption{Average of the estimated precision matrices for the model with the {\bf circle structure}}\vspace{2ex}
\includegraphics[width=1\linewidth]{Circle.png}\vspace{4ex}
\label{fig:circle}
\caption{Average of the estimated precision matrices for the model with the {\bf random structure}}\vspace{2ex}
\includegraphics[width=1\linewidth]{random.png}
\label{fig:random}
\end{sidewaysfigure}
\begin{figure}[!htbp]
\includegraphics[width=\linewidth]{ROC.pdf}
\caption{ROC Curves for different methods and different data generating models with $p=50.$}
\label{fig:ROC}
\end{figure}
\subsection{Real Application: Telephone Call Center Data}
We now apply our method to the analysis of data from a telephone call center at a major U.S. northeastern financial organization. The data consist of the arrival times of phone calls received in 2002, every day from 7 AM until midnight, except for six days when the data-collecting machine was out of order. More details about these data can be found in \citet{shen2005analysis}.
Following the pre-processing suggested by \citet{huang2006covariance} and \cite{fan2009network} for this data set, we divide each day into $102$ $10$-minute intervals and count the number of call arrivals in each interval, denoted as $N_{it}$, where $t=1,\dots,102$ and $i=1,\dots,239$. Only 239 days of data are considered here, after removing holidays and days when the data-collecting machine was faulty. Represent the observations on the $i$-th day as $Y_i = (Y_{i1}, \dots, Y_{i,102})^T$, a $102 \times 1$ vector with $Y_{it}=\sqrt{N_{it}+\frac{1}{4}}$, a variance-stabilizing transformation of the number of calls. Let $\mu$ and $\Theta$ denote the mean vector and precision matrix of the 102-dimensional vector $Y$.
We apply all the methods considered on the first 205 days of data to estimate $\Theta$, as well as $\mu$, and use the remaining 34 days of data to evaluate the performance. The performance evaluation is carried out as follows. First, divide the $102$ observations for each day into two parts, $Z_{i1}$ and $Z_{i2}$, where $Z_{i1}$ is a $51\times 1$ vector containing data from the first $51$ intervals of the $i$-th day and $Z_{i2}$ is a $51\times 1$ vector containing the remaining $51$ observations; then partition the mean vector $\mu$ and the precision matrix $\Theta$ accordingly. Under the multivariate Gaussian assumption, the best mean squared error forecast of $Z_{i2}$ given $Z_{i1}$ is given by
\begin{equation} \label{eq:blup}
\mathbb{E}(Z_{i2} | Z_{i1})=\mu_{2}-\Theta_{22}^{-1}\Theta_{21}(Z_{i1}-\mu_1),
\end{equation}
which is also the best linear unbiased predictor for non-Gaussian data. So plugging the estimates of $\mu$ and $\Theta$ based on the first 205 days into (\ref{eq:blup}), we evaluate the prediction accuracy for $Z_{i2}$ for the remaining 34 days. We adopt the same
criterion used by \citet{fan2009network}, the average absolute forecast error (AAFE), to measure the prediction performance:
\begin{equation}
\textsf{AAFE}_t=\frac{1}{34}\sum_{i=206}^{239}|\hat{Y}_{it}-Y_{it}|,
\end{equation}
where $\hat{Y}_{it}$ and $Y_{it}$ denote the predicted and observed values, respectively.
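This evaluation can be sketched as follows, assuming estimates of $\mu$ and $\Theta$ have already been computed; the function names are illustrative.

```python
import numpy as np

def forecast_second_half(z1, mu, theta, k=51):
    """Best MSE forecast of Z2 given Z1: mu2 - Theta22^{-1} Theta21 (Z1 - mu1)."""
    mu1, mu2 = mu[:k], mu[k:]
    theta21, theta22 = theta[k:, :k], theta[k:, k:]
    return mu2 - np.linalg.solve(theta22, theta21 @ (z1 - mu1))

def aafe(y_hat, y_obs):
    """Average absolute forecast error at each interval, averaged over days."""
    return np.mean(np.abs(y_hat - y_obs), axis=0)
```

For a bivariate Gaussian with covariance $\begin{pmatrix}2&1\\1&2\end{pmatrix}$, for instance, the formula reduces to predicting $Z_2$ by $\mu_2+(Z_1-\mu_1)/2$.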
We compare the prediction performance based on estimates from our method BAGUS, the inverse of the sample covariance matrix (denoted as ``Sample"), GLasso and CLIME. The prediction errors for these methods at all $51$ time points are shown in Figure \ref{fig:pred}. Their average AAFE values are displayed in Table \ref{tab:pred}, along with the average AAFE values for Adaptive Lasso and SCAD taken from \cite{fan2009network}.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{predict_2.pdf}
\caption{Prediction error for the call center data: $\text{AAFE}_t$ on the $y$-axis and $t$ on the $x$-axis.}\label{fig:pred}
\end{center}
\end{figure}
\begin{table}[!htbp]
\begin{center}
\caption{Average prediction error for different methods} \label{tab:pred}
\resizebox{0.8\textwidth}{!}{\begin{minipage}{\textwidth}
\begin{tabular}{lllllllll}
\hline
&Sample& GLasso & Adaptive Lasso & SCAD & CLIME&BAGUS \\
Average AAFE&1.46&1.38&1.34&1.31&1.14&{\bf1.00}\\ \hline
\end{tabular}
\end{minipage}}
\end{center}
\end{table}
From the results, we see that BAGUS and CLIME significantly improve prediction accuracy compared with the other methods. To look further into the estimates provided by these methods, we present the sparsity structures estimated by GLasso, CLIME, and BAGUS in Figure \ref{fig:heat}. In this figure, yellow points (light tone when converted to grayscale) indicate signals and blue points (dark tone in grayscale) indicate noise. In the Gaussian graphical model context, a yellow point suggests that the call arrivals in the corresponding two time intervals are conditionally dependent. It is interesting to find that a strong autoregressive type of dependence structure is present in the estimates from all methods. However, the methods differ in the degree of autoregression suggested by their estimates. The estimated structure from {BAGUS} is the sparsest and suggests a smaller degree of autoregression than those of GLasso and CLIME. That is, BAGUS indicates that telephone call arrivals depend mainly on recent history, while the others indicate dependence over a long history. Based on the prediction accuracies of the different methods, the sparser dependence structure suggested by {BAGUS} seems sufficient to provide good prediction, although it is difficult to know which structure is, in reality, closer to the underlying precision matrix. In terms of practical utility, this supports storing and managing a smaller amount of historical data, which could potentially reduce the cost of data management.
\begin{landscape}
\begin{multicols*}{2}
\begin{table}[H]
\resizebox{0.54\linewidth}{!}{\begin{minipage}{\linewidth}
\caption{Model 1: Star}
\label{model1}
\begin{tabular}{llllllllll}
\hline
\multicolumn{5}{c}{$n=100,p=50$} \\ \hline
&Fnorm& Specificity & Sensitivity& MCC \\
GLasso & 2.301(0.126) &0.687(0.015)& 0.998(0.004)& 0.339(0.011)\\
CLIME & 3.387(0.401) &0.452(0.051)& 0.971(0.023)& 0.168(0.021) \\
SPACE & 2.978(0.244) &0.972(0.039)& 1.000(0.003)& 0.824(0.163)\\
BAGUS&\textbf{1.053(0.107)}&1.000(0.000)& 1.000(0.000)&\textbf{1.000(0.000)}\\
\hline
\multicolumn{5}{c}{$n=100,p=100$} \\ \hline
&Fnorm& Specificity & Sensitivity& MCC \\
GLasso &4.219(0.118)&0.715(0.007)& 0.989(0.008)& 0.260(0.005)\\
CLIME&4.818(0.449)& 0.998(0.004)& 0.336(0.000)& 0.131(0.067)\\
SPACE &3.207(0.311)&0.987(0.022)& 0.996(0.024)& 0.842(0.162)\\
BAGUS&\textbf{1.499(0.138)}&1.000(0.000)& 1.000(0.000)&\textbf{1.000(0.000)}\\
\hline
\multicolumn{5}{c}{$n=100,p=200$} \\ \hline
&Fnorm& Specificity & Sensitivity& MCC \\
GLasso &3.028(0.068)&0.947(0.003)& 0.999(0.002)& 0.389(0.009)\\
CLIME&5.595(0.528)&0.978(0.018)& 0.000(0.000)& -0.014(0.006)\\
SPACE &3.735(0.294)&0.985(0.007)& 1.000(0.000)& 0.656(0.138)\\
BAGUS &\textbf{2.006(0.100)}&1.000(0.000)& 1.000(0.001)& \textbf{1.000(0.001)}
\\
\hline\\
\end{tabular}
\caption{Model 2: $AR(2)$ \label{model3}}
\begin{tabular}{lllllllll}
\hline
\multicolumn{5}{c}{$n=100,p=50$} \\ \hline
&Fnorm& Specificity & Sensitivity& MCC \\
GLasso &{\bf3.361(0.240)} & 0.479(0.056)&0.981(0.015)&0.251(0.028) \\
CLIME&3.758(0.381)&0.822(0.054)& 0.906(0.039)&0.472(0.053)\\
SPACE & 5.903(0.070) & 0.982(0.004)&0.608(0.038)&0.656(0.029) \\
BAGUS&3.671(0.291)&0.997(0.002)& 0.551(0.032)& \textbf{0.707(0.025)}\\
\hline
\multicolumn{5}{c}{$n=100,p=100$} \\ \hline
&Fnorm& Specificity & Sensitivity& MCC \\
GLasso & 8.130(0.035)& 0.901(0.007)& 0.745(0.028)&0.382(0.017)\\
CLIME&5.595(1.578)&0.837(0.075)& 0.821(0.191)& 0.371(0.085)\\
SPACE &9.819(0.083)&0.991(0.002)&0.566(0.025)&0.625(0.021)\\
BAGUS&\textbf{5.330(0.369)}& 0.998(0.001)& 0.549(0.018)& \textbf{0.707(0.022)}\\
\hline
\multicolumn{5}{c}{$n=100,p=200$} \\ \hline
&Fnorm& Specificity & Sensitivity& MCC \\
GLasso & 11.728(0.045)&0.990(0.001)& 0.478(0.017)& 0.481(0.014)\\
CLIME&11.552(0.382)&0.989(0.004)& 0.580(0.031)& 0.539(0.028)\\
SPACE &13.696(0.079)&0.995(0.000)& 0.518(0.018)& 0.588(0.013)\\
BAGUS&\textbf{8.214(0.548)}&0.998(0.001)& 0.543(0.015)& \textbf{0.677(0.027)}\\
\hline
\end{tabular}
\end{minipage}}
\end{table}
\begin{table}[H]
\resizebox{0.54\linewidth}{!}{\begin{minipage}{\linewidth}
\centering
\caption{Model 3: Circle \label{model4}}
\begin{tabular}{lllllllll}
\hline
\multicolumn{5}{c}{$n=100,p=50$} \\ \hline
&Fnorm& Specificity & Sensitivity& MCC\\
GLasso &\textbf{4.319(0.174)}& 0.492(0.064)& 1.000(0.000)&0.196(0.024) \\
CLIME&5.785(0.440)&0.555(0.026)& 1.000(0.000)& 0.221(0.010)\\
SPACE &19.402(0.232)& 0.930(0.006)&1.000(0.000)&0.595(0.019)\\
BAGUS&\textbf{4.253(0.578)}&0.993(0.004)& 0.964(0.029)& \textbf{0.903(0.049)}\\
\hline
\multicolumn{5}{c}{$n=100,p=100$} \\ \hline
&Fnorm& Specificity & Sensitivity&MCC\\
GLasso &6.981(0.192) & 0.647(0.005)& 1.000(0.000)& 0.189(0.002) \\
CLIME& 19.282(2.802)&0.224(0.226)& 0.995(0.015)& 0.069(0.058)\\
SPACE &27.737(0.345)&0.975(0.010)& 0.994(0.008)&0.674(0.062)\\
BAGUS&\textbf{6.012(0.513)}&0.996(0.002)& 0.957(0.032)& \textbf{0.895(0.055)}\\
\hline
\multicolumn{5}{c}{$n=100,p=200$} \\ \hline
&Fnorm& Specificity & Sensitivity& MCC \\
GLasso &\textbf{7.664(0.209)}&0.752(0.003)& 1.000(0.000)& 0.172(0.001)\\
CLIME&33.009(0.535)&0.857(0.154)& 0.769(0.167)& 0.209(0.052)\\
SPACE &32.142(0.832)&0.981(0.012)& 0.783(0.212)& 0.485(0.129)\\
BAGUS &10.378(1.001)&0.995(0.001)& 0.886(0.033)& \textbf{0.752(0.028)}\\
\hline\\
\end{tabular}
\caption{Model 4: Random Graph \label{model5}}
\begin{tabular}{lllllllll}
\hline
\multicolumn{5}{c}{$n=100,p=50$} \\ \hline
&Fnorm& Specificity & Sensitivity& MCC \\
GLasso & 7.017(0.256)&0.877(0.010)& 0.766(0.039)& 0.417(0.027)\\
CLIME&11.347(0.452)&0.971(0.012)& 0.614(0.068)& 0.572(0.042)\\
SPACE &12.278(0.183)&1.000(0.000)& 0.073(0.031)& 0.257(0.051)\\
BAGUS&\textbf{5.811(0.357)}&0.999(0.001)& 0.443(0.032)& \textbf{0.637(0.027)}\\
\hline
\multicolumn{5}{c}{$n=100,p=100$} \\ \hline
&Fnorm& Specificity & Sensitivity& MCC \\
GLasso & 11.851(0.900)& 0.837(0.047)& 0.720(0.049)&0.285(0.033)\\
CLIME&12.649(1.587)&0.735(0.153)& 0.761(0.120)& 0.243(0.123)\\
SPACE &17.706(0.203)& 1.000(0.000)&0.068(0.015)& 0.236(0.028)\\
BAGUS&\textbf{8.754(0.366)}&0.999(0.001)& 0.400(0.022)& \textbf{0.598(0.022)}\\
\hline
\multicolumn{5}{c}{$n=100,p=200$} \\ \hline
&Fnorm& Specificity & Sensitivity& MCC \\
GLasso & 15.054(0.356)&0.951(0.012)& 0.633(0.029)& 0.307(0.017)\\
CLIME&23.568(0.954)&0.993(0.004)& 0.469(0.048)&0.492(0.038)\\
SPACE &24.997(0.213)&0.999(0.000)& 0.090(0.014)& 0.221(0.024)\\
BAGUS&\textbf{13.096(0.522)}& 0.999(0.000)& 0.382(0.050)& \textbf{0.565(0.032)}\\
\hline
\end{tabular}
\end{minipage}}
\end{table}
\end{multicols*}
\end{landscape}
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=1\linewidth]{heatmap.png}
\end{center}
\caption{Sparsity structures estimated for different methods for the call center data}\label{fig:heat}
\end{figure}
\section{Conclusion}
In high dimensional data analysis, there is a large frequentist literature on penalization, focusing mainly on Lasso-based convex penalties and some non-convex penalties such as SCAD. On the other hand, in the Bayesian framework, a variety of shrinkage and sparsity-inducing prior distributions have been proposed. In the context of graphical models, our work demonstrates that spike-and-slab priors with Laplace distributions provide adaptive penalization that leads to better theoretical and empirical performance compared to state-of-the-art methods. Since some recent papers \citep{rovckova2016fast, deshpande2017simultaneous} have also found spike-and-slab Lasso priors to be useful in other high dimensional contexts, we believe that our strategy of Bayesian regularization will be advantageous in a broad range of high dimensional problems and that its success demonstrated in our work will motivate further interest in this direction.
\newpage
\begin{center}
{\large\bf SUPPLEMENTARY MATERIAL}
\end{center}
\spacingset{1.48}
\section*{Appendix A: Proofs of the Main Theorems}\label{sec:proof}
For convenience, we introduce the following additional notation that will be used throughout the Appendix.
\begin{enumerate}[i.]
\item Let $\tilde{W}$ denote the difference between the sample covariance matrix $S$ and the true covariance matrix $\Sigma^0 = \left(\Theta^{0}\right)^{-1}$ and $\Delta$ the difference between an estimate $\tilde{\Theta}$ and the true precision matrix $\Theta^{0}$. That is,
\begin{eqnarray*}
\tilde{W} &=& S- \Sigma^{0} \\
\Delta &=& \tilde{\Theta}- \Theta^{0}.
\end{eqnarray*}
\item Let $R\left(\Delta\right)$ denote the difference between $n\tilde{\Theta}^{-1}/2$, the gradient of $n\log\det(\tilde{\Theta})/2$, and its first-order Taylor expansion at $\Theta^{0}$:
\begin{equation*} R\left(\Delta\right) = \frac{n}{2}\left(\tilde{\Theta}^{-1}-\Sigma^{0}+\Sigma^{0}\Delta\Sigma^{0}\right).
\end{equation*}
\item Recall our objective function
\[
L(\Theta) = \frac{n}{2} \Big ( \text{tr}(S\Theta) - \log \det(\Theta) \Big ) + \frac{1}{2} \sum_{i, j} \text{pen}_{SS}(\theta_{ij}) +\sum_{i} \text{pen}_1(\theta_{ii}), \]
where
\begin{eqnarray*}
\text{pen}_{SS}(\theta_{ij}) =
- \log \Big [ \Big(\frac{\eta}{2v_1}\Big )e^{-\frac{|\theta_{ij}|}{v_1}} +\Big(\frac{1-\eta}{2v_0} \Big ) e^{-\frac{|\theta_{ij}|}{v_0}}\Big ] , ~ \text{and }~ \text{pen}_{1}(\theta_{ii}) = \tau | \theta_{ii}|
\end{eqnarray*}
denote the penalty terms on $\theta_{ij}$ $(i \ne j)$ and $\theta_{ii}$, respectively.
Let $Z_{ij}$ denote the subgradient of the penalty term with respect to $\theta_{ij}$:
\[Z_{ij}= Z_{ij}(\theta_{ij}) = \begin{cases}
\tau & \quad \text{if } i=j \\
\frac{1}{2}\text{pen}{'}_{SS}(\theta_{ij}) & \quad \text{if } i\ne j,\quad \theta_{ij}\ne0 \\
\left[-1,1\right]\times \frac{\frac{\eta}{2v_1^2}+\frac{1-\eta}{2v_0^2}}{\frac{\eta}{v_1}+\frac{1-\eta}{v_0}} & \quad \text{if } i\ne j,\quad \theta_{ij}=0
\end{cases}
\]
where
\[
\text{pen}{'}_{SS}\left(\theta_{ij}\right)=\frac{\frac{\eta}{2v_1^2}e^{-\frac{|\theta_{ij}|}{v_1}}+\frac{1-\eta}{2v_0^2}e^{-\frac{|\theta_{ij}|}{v_0}}}{\frac{\eta}{2v_1}e^{-\frac{|\theta_{ij}|}{v_1}}+\frac{1-\eta}{2v_0}e^{-\frac{|\theta_{ij}|}{v_0}}}\text{sign}\left(\theta_{ij}\right).
\]
Let $Z=[Z_{ij}]$, then the subgradient of the objective function $L(\Theta)$ is
\begin{equation*}
\partial L(\Theta) = \frac{n}{2}\left(S-\Theta^{-1}\right)+Z.
\end{equation*}
\item We denote the index set of diagonal entries as $\mathcal{D}:= \{(i,j):i=j\}$.
{For any subset {$\mathcal{S}$} of $\{(i,j):1\le i,j\le p\}$ and $p \times p$ matrix $A$, we use $A_{\mathcal{S}}$ to denote the submatrix of $A$ with entries indexed by {$\mathcal{S}$}.}
\end{enumerate}
In this Appendix, we first prove the following main result.
\begin{thm}
\label{thm:proof}
Assume condition (A1) and $\|\tilde{W}\|_{\infty}=\max_{ij}|s_{ij}-\sigma^0_{ij}|\le C_1\sqrt{\log p/n}$. If \\
(i) the prior hyper-parameters $v_0,v_1, \eta$ and $\tau$ satisfy:
\begin{equation}
\begin{cases}
\frac{1}{nv_1}={C_3}\sqrt{\frac{\log p}{n}}(1-\varepsilon_1),\text{ where } C_3<C_2 , \varepsilon_1>0,\\
\frac{1}{nv_0}>C_4\sqrt{\frac{\log p}{n}},\\
\frac{v_1^2(1-\eta)}{v_0^2\eta}\le \varepsilon_1 p^{2(C_2-C_3)M_{\Gamma^0}[C_4-C_3]},\\
\tau\le C_3\frac{n}{2}\sqrt{\frac{\log p}{n}},
\end{cases}
\end{equation}
where $C_4=C_1+2M_{\Sigma^0}^2(C_1+C_3)M_{\Gamma^0}+6(C_1+C_3)^2dM_{\Gamma^0}^2M_{\Sigma^0}^3/M$, \\
(ii) the spectral norm $B$ satisfies $1/k_1+2d(C_1+C_3)M_{\Gamma^0}\sqrt{\log p/n}<B<(2nv_0)^{\frac{1}{2}}$, and \\
(iii) the sample size $n$ satisfies $\sqrt{n}\ge M\sqrt{\log p}$,
where
$$M=\max\Big\{2d(C_1+C_3)M_{\Gamma^0}{\max\Big({3M_{\Sigma^0}},{3M_{\Gamma^0}{M^3_{\Sigma^{0}}}},2/k_1^2\Big)},2C_3\varepsilon_1/k_1^2\Big\},$$ then the MAP estimator $\tilde{\Theta}$ satisfies
\begin{equation*}
\|\tilde{\Theta}-\Theta^0\|_{\infty}<2(C_1+C_3)M_{\Gamma^0}\sqrt{\frac{\log p}{n}}.
\end{equation*}
\end{thm}
\medskip
Before presenting our proof, we list two preliminary results as lemmas and list some properties of the penalty function $\text{pen}_{SS}(\delta)$, which will be useful. {Proofs of these lemmas are in Appendix B.}
\begin{lemma}
\label{lemma:4}
Define $r:=\max\left\{2M_{\Gamma^0} \Big(\|\tilde{W}\|_{\infty}+\frac{2}{n}\max (\frac{1}{2}\text{pen}^{'}_{SS}(\delta),\tau)\Big),2(C_1+C_3)M_{\Gamma^0}\sqrt{\frac{\log p}{n}}\right\}$, and $\mathcal{A}:=\left \{ \Theta: \frac{n}{2}\left(S-\Theta^{-1}\right)_{\mathcal{B}}+Z_{\mathcal{B}} =0, \Theta\succ 0, \| \Theta\|_2\le B \right \}$ with $\mathcal{B}=\{(i,j):|\theta_{ij}^0|>2(C_1+C_3)M_{\Gamma^0}\sqrt{\log p/n}\}\cup \mathcal{D}$.
If parameters $r$ and $B$ satisfy:
\begin{equation*}
\begin{cases}
r \le \min \left\{\frac{1}{3M_{\Sigma^0}d},\frac{1}{3dM_{\Gamma^0}M_{\Sigma^0}^3} \right\},\\
\min{|\theta_{\mathcal{B}\cap \mathcal{D}^c}^{0}|}\ge r+\delta,\\
1/k_1+dr<B,
\end{cases}
\end{equation*}
for some $\delta >0$, {where $k_1$ is the lower bound on $\lambda_{\min}(\Sigma^0)$,}
then the set $\mathcal{A}$ is non-empty. Moreover, there exists a $\tilde{\Theta}\in \mathcal{A}$ such that
$\|\Delta\|_{\infty} :=\|\tilde{\Theta}-\Theta^0\|_{\infty}\le r.$
\end{lemma}
\begin{lemma}
\label{lemma:5}
Suppose that $\|\tilde{\Theta}-\Theta^0\|_{\infty}\le r$,
then
\begin{eqnarray}
&&\|\tilde{\Theta}-\Theta^0\|_{F}\le r\sqrt{p+s}, \label{eq1:lemma5}\\
&&\vertiii{\tilde{\Theta}-\Theta^0}_{\infty}, \|\tilde{\Theta}-\Theta^0\|_{2}\le r\min\{d,\sqrt{p+s}\}, \text{ and} \label{eq2:lemma5}\\
&&\| \tilde{\Theta}^{-1}-\Sigma^0 \|_{\infty}\le M_{\Sigma^0}^2r+\frac{3}{2}dM^3_{\Sigma^{0}}r^2.
\label{eq3:lemma5}
\end{eqnarray}
\end{lemma}
\noindent \textbf{Properties of $\text{pen}_{SS}(\delta)$} \vspace{1ex}\\
We now provide some useful results on the penalty function $\text{pen}_{SS}(\delta)$.
\begin{itemize}
\item Bound on the magnitude of the first derivative of $\text{pen}_{SS}(\delta)$:
\begin{eqnarray}
\frac{1}{n}|\text{pen}{'}_{SS}(\delta)|&=\frac{\frac{\eta}{2v_1^2}e^{-\frac{|\delta|}{v_1}}+\frac{1-\eta}{2v_0^2}e^{-\frac{|\delta|}{v_0}}}{n\left(\frac{\eta}{2v_1}e^{-\frac{|\delta|}{v_1}}+\frac{1-\eta}{2v_0}e^{-\frac{|\delta|}{v_0}}\right)} \nonumber \\
&=\frac{1}{nv_1}+\frac{\frac{1}{n}(\frac{1}{v_0}-\frac{1}{v_1})}{\frac{\eta v_0}{(1-\eta)v_1}e^{\frac{|\delta|}{v_0}-\frac{|\delta|}{v_1}}+1} \nonumber \\
&<\frac{1}{nv_1} \left(1+\frac{\frac{v_1^2(1-\eta)}{v_0^2\eta}}{e^{\frac{|\delta|}{v_0}-\frac{|\delta|}{v_1}}} \right).
\label{eq:firstderv}
\end{eqnarray}
Choose $1/\left(nv_0\right)>C_4\sqrt{\log p/n}$ and $1/\left(nv_1\right)<C_3\sqrt{\log p/n}$ as in Theorem \ref{thm:proof}, and further let $v_1^2\left(1-\eta\right)/\left(v_0^2\eta\right)=\xi p^{\psi[C_4-C_3]}$. Then, when $\delta\ge\psi\sqrt{\log p/n}$, we have
\begin{equation}
\frac{\frac{v_1^2\left(1-\eta\right)}{v_0^2\eta}}{e^{\frac{|\delta|}{v_0}-\frac{|\delta|}{v_1}}}\le \frac{\xi p^{\psi[C_4-C_3]}} {p^{\psi[C_4-C_3]}}\le \xi.
\label{eq:reason}
\end{equation}
Let $\xi$ be sufficiently small, i.e., $\xi<\varepsilon_1$; then we have \[ \frac{1}{n}|\text{pen}{'}_{SS}(\delta)| < C_3\sqrt{ \frac{\log p}{n}}.\]
\item Bound on the magnitude of the second derivative of $\text{pen}_{SS}(\delta)$: \vspace{1ex}\\
With the same choice of $v_0$ and $v_1$ as in Theorem \ref{thm:proof}, when $\delta\ge\psi\sqrt{\log p/n}$, we have
\begin{eqnarray}
\frac{1}{2n}|\text{pen}_{SS}^{''}(\delta)|& = & \frac{\left(\frac{1}{v_0}-\frac{1}{v_1}\right)\frac{\eta v_0}{(1-\eta)v_1}e^{\frac{\delta}{v_0}-\frac{\delta}{v_1}}}{2n \left(\frac{\eta v_0}{(1-\eta)v_1}e^{\frac{\delta}{v_0}-\frac{\delta}{v_1}}+1 \right)^2} \nonumber \\
&< &\frac{ \left(\frac{1}{v_0}-\frac{1}{v_1} \right)}{2n \left( \frac{\eta v_0}{(1-\eta)v_1}e^{\frac{\delta}{v_0}-\frac{\delta}{v_1}}+1 \right)} \nonumber \\
&< & \frac{(1-\eta)v_1}{2nv_0^2\eta e^{\frac{\delta}{v_0}-\frac{\delta}{v_1}}}<\frac{\xi}{2nv_1} \label{eq:secondderv} \\
&< & \frac{C_3}{2}\xi\sqrt{\frac{\log p}{n}}< \frac{C_3}{2}\varepsilon_1\sqrt{\frac{\log p}{n}}, \label{eq:secondder:2}
\end{eqnarray}
where (\ref{eq:secondderv}) is due to (\ref{eq:reason}). In addition, when $n$ satisfies the condition $(iii)$ in Theorem \ref{thm:proof}, (\ref{eq:secondder:2}) is always upper bounded by $\frac{1}{4}k_1^2$.
\end{itemize}
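The two derivative bounds above can be checked numerically. The following sketch implements $\text{pen}_{SS}$ and its closed-form derivative and verifies the latter by finite differences; the hyper-parameter values are illustrative only.

```python
import math

def pen_ss(theta, eta=0.5, v0=0.02, v1=0.1):
    """Spike-and-slab penalty: negative log of a two-component Laplace mixture."""
    mix = (eta / (2 * v1)) * math.exp(-abs(theta) / v1) \
        + ((1 - eta) / (2 * v0)) * math.exp(-abs(theta) / v0)
    return -math.log(mix)

def pen_ss_prime(theta, eta=0.5, v0=0.02, v1=0.1):
    """Closed-form derivative of pen_ss, valid for theta != 0."""
    e1, e0 = math.exp(-abs(theta) / v1), math.exp(-abs(theta) / v0)
    num = (eta / (2 * v1**2)) * e1 + ((1 - eta) / (2 * v0**2)) * e0
    den = (eta / (2 * v1)) * e1 + ((1 - eta) / (2 * v0)) * e0
    return (num / den) * math.copysign(1.0, theta)
```

The derivative interpolates between $1/v_1$ for large $|\theta|$ and roughly $1/v_0$ near the origin, which is the adaptivity exploited in the bound (\ref{eq:firstderv}).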
\begin{proof}[\textbf{Proof of Theorem \ref{thm:proof}}]
Our proof is inspired by the techniques from \cite{rothman2008sparse}
and \cite{ravikumar2011high}.
Here is the outline of the proof.
\begin{itemize}
\item \emph{Step 1:} Construct a solution set $\mathcal{A}$ for the constraint problem:
$$
\arg\min_{\Theta\succ 0,\|\Theta\|_2\le B,\Theta_{\mathcal{B}^c}=0}L\left(\Theta\right),$$ by defining
\begin{equation*}
\mathcal{A} =\left \{ \Theta: \frac{n}{2}\left(S-\Theta^{-1}\right)_{\mathcal{B}}+Z_{\mathcal{B}} =0, \Theta\succ 0, \| \Theta\|_2\le B \right \},
\end{equation*}
where $\mathcal{B}= \{(i,j):|\theta_{ij}^0|>2(C_1+C_3)M_{\Gamma^0}\sqrt{\log p/n}\}\cup \mathcal{D}$.
{For $\theta^0_{ij}\in \mathcal{B}\cap \mathcal{D}^c$, define $\min\left(|\theta_{ij}^0|\right)$ as $2(C_1+C_2)M_{\Gamma^0}\sqrt{\log p/n}.$ We then have $|\theta_{ij}^0|\ge2(C_1+C_2)M_{\Gamma^0}\sqrt{\log p/n}$ when $\theta^0_{ij}\in \mathcal{B}\cap \mathcal{D}^c$ and $|\theta_{ij}^0|\le2(C_1+C_3)M_{\Gamma^0}\sqrt{\log p/n}$ when $\theta^0_{ij}\in \mathcal{B}^c\cap\mathcal{D}^c.$}
\item \emph{Step 2:} Prove that $\mathcal{A}$ is not empty and further show that there exists $\tilde{\Theta} \in \mathcal{A}$ satisfying
$\|\tilde{\Theta}-\Theta^0\|_{\infty}=O_p \Big (\sqrt{\log p/n} \Big ).$
\item \emph{Step 3:} Finally prove that $\tilde{\Theta}$, which is positive definite by construction, is a local minimizer of the loss function $L(\Theta)$ by showing $L(\Theta) \ge L(\tilde{\Theta})$ for any $\Theta$ in a small neighborhood of $\tilde{\Theta}$. Since $L(\Theta)$ is strictly convex when $B<(2nv_0)^{\frac{1}{2}}$, we then conclude that $\tilde{\Theta}$ is the unique minimizer such that $\|\tilde{\Theta}-\Theta^0\|_\infty=O_p\left(\sqrt{\log p/n}\right)$.
\end{itemize}
At \emph{Step 2}, we apply Lemma \ref{lemma:4}. First we check its conditions.
\begin{enumerate}
\item Consider $r=2(C_1+C_3)M_{\Gamma^0}\sqrt{\log p/n}$. For ${\theta^0_{ij}}\in \mathcal{B}\cap\mathcal{D}^c$, we have $|{\theta^0_{ij}}|\ge r+2(C_2-C_3)M_{\Gamma^0}\sqrt{\log p/n}$. That is, the $\delta$ defined in Lemma \ref{lemma:4} is greater than or equal to $2(C_2-C_3)M_{\Gamma^0}\sqrt{\log p/n}$.
\item Recall the properties of $\text{pen}_{SS}(\delta)$.
We have $|\text{pen}{'}_{SS}(\delta)|/n<C_3\sqrt{\log p/n}$. With the bound of $\|\tilde{W}\|_\infty$ and the condition on sample size $n$, we have
\begin{eqnarray*}
2M_{\Gamma^0} \left(\|\tilde{W}\|_{\infty}+\max \left(\frac{1}{n} \text{pen}^{'}_{SS}(\delta),\frac{2}{n}\tau\right)\right)&\le& 2(C_1+C_3)M_{\Gamma^0}\sqrt{\frac{\log p}{n}}\\
&\le & \min \left \{\frac{1}{3M_{\Sigma^0}d},\frac{1}{\frac{3}{2}dM_{\Gamma^0}M_{\Sigma^0}^3} \right\}.
\end{eqnarray*}
\end{enumerate}
Thus, conditions for Lemma \ref{lemma:4} are all satisfied. By Lemma \ref{lemma:4}, we conclude that there exists a solution $\tilde{\Theta} \in \mathcal{A}$ satisfying
\begin{equation*}
\|\tilde{\Theta}-\Theta^0\|_{\infty}=\|\Delta\|_{\infty}\le2(C_1+C_3)M_{\Gamma^0}\sqrt{\frac{\log p}{n}}.
\end{equation*}
That is, the solution $\tilde{\Theta}$ we constructed is $O_p\left(\sqrt{\log p/n}\right)$ from the truth in entrywise $l_{\infty} $ norm.
At \emph{Step 3}, we need to show that the solution $\tilde{\Theta}$ we constructed is indeed a local minimizer of the objective function $L(\Theta)$. It suffices to show that
$$G(\Delta_1)=L(\tilde{\Theta}+\Delta_1)-L(\tilde{\Theta}) \ge 0$$
for any $\Delta_1$ with $\|\Delta_1\|_{\infty} \le \epsilon$. Re-organize $G(\Delta_1)$ as follows:
\begin{equation*}
\begin{split}
G(\Delta_1) =~ &\frac{n}{2}\Big(tr\left(\Delta_1\left(S-\tilde{\Theta}^{-1}\right)\right)-\left(\log|\tilde{\Theta}+\Delta_1|-\log|\tilde{\Theta}|\right)+tr\left(\Delta_1\tilde{\Theta}^{-1}\right)\Big)\\
&-\sum_{i<j}\log\left(\frac{\eta}{2v_1}e^{-\frac{|\tilde{\theta}_{ij}+{\Delta_1}_{ij}|}{v_1}}+\frac{1-\eta}{2v_0}e^{-\frac{|\tilde{\theta}_{ij}+{\Delta_1}_{ij}|}{v_0}}\right)\\
&+\sum_{i<j}\log\left(\frac{\eta}{2v_1}e^{-\frac{|\tilde{\theta}_{ij}|}{v_1}}+\frac{1-\eta}{2v_0}e^{-\frac{|\tilde{\theta}_{ij}|}{v_0}}\right)+\tau\sum_{i}\left(\tilde{\theta}_{ii}+{\Delta_1}_{ii}-\tilde{\theta}_{ii}\right)\\
&=\text{(I)} + \text{(II)}+ \text{(III)},\\
\end{split}
\end{equation*}
where
\begin{eqnarray*}
\text{(I)}&=&\frac{n}{2}\Big(tr\left({\Delta_1}\left(S-\tilde{\Theta}^{-1}\right)\right)-\left(\log|\tilde{\Theta}+{\Delta_1}|-\log|\tilde{\Theta}|\right)+tr\left({\Delta_1}\tilde{\Theta}^{-1}\right)\Big),\\
\text{(II)}&=&-\frac{1}{2}\sum_{i\ne j}\log\left(\frac{\eta}{2v_1}e^{-\frac{|\tilde{\theta}_{ij}+{\Delta_1}_{ij}|}{v_1}}+\frac{1-\eta}{2v_0}e^{-\frac{|\tilde{\theta}_{ij}+{\Delta_1}_{ij}|}{v_0}}\right)\\
&& +\frac{1}{2}\sum_{i\ne j}\log\left(\frac{\eta}{2v_1}e^{-\frac{|\tilde{\theta}_{ij}|}{v_1}}+\frac{1-\eta}{2v_0}e^{-\frac{|\tilde{\theta}_{ij}|}{v_0}}\right),\\
\text{(III)}&=&\tau\sum_{i}\left(\tilde{\theta}_{ii}+{\Delta_1}_{ii}-\tilde{\theta}_{ii}\right)=\tau\sum_{i}{\Delta_1}_{ii}.
\end{eqnarray*}
Bound (I) as follows.
\begin{eqnarray*}
&&\log|\tilde{\Theta}+{\Delta_1}|-\log|\tilde{\Theta}| \\
&=&\text{tr} \left({\Delta_1}\tilde{\Theta}^{-1}\right)-\text{vec}\left(\Delta_1\right)^{T}\Big(\int_{0}^{1}(1-v)(\tilde{\Theta}+v{\Delta_1})^{-1}\otimes (\tilde{\Theta}+v{\Delta_1})^{-1}dv\Big)\text{vec}({\Delta_1}) \\
&\le& \text{tr} \left({\Delta_1}\tilde{\Theta}^{-1}\right)-\frac{1}{4}k_1^2\|{\Delta_1}\|_F^2.
\end{eqnarray*}
where the last inequality can be shown with the same argument as in the proof of Theorem 1 in \cite{rothman2008sparse}, using $\sqrt{n}\ge 4(C_1+C_3)dM_{\Gamma^0}\sqrt{\log p}/k_1^2$. Thus,
\begin{eqnarray*}
\text{(I)}&\ge& \frac{n}{2}\Big(tr\left({\Delta_1}\left(S-\tilde{\Theta}^{-1}\right)\right)+\frac{1}{4}k_1^2\|{\Delta_1}\|_F^2\Big)\\
&= & \frac{n}{2}\Big(\sum_{i,j}\left({\Delta_1}_{ij}\left(s_{ij}-{\tilde{\Theta}^{-1}}_{ij}\right)\right)+\frac{1}{4}k_1^2\|{\Delta_1}\|_F^2\Big).
\end{eqnarray*}
Next consider $\text{(II)}$. For any $(i,j)$ $\notin$ $\mathcal{B}$, $\tilde{\theta}_{ij}=0$, $|\tilde{\theta}_{ij}+{\Delta_1}_{ij}|=|{\Delta_1}_{ij}|$, and therefore
\begin{eqnarray*}
&&-\log\left(\frac{\eta}{2v_1}e^{-\frac{|\tilde{\theta}_{ij}+{\Delta_1}_{ij}|}{v_1}}+\frac{1-\eta}{2v_0}e^{-\frac{|\tilde{\theta}_{ij}+{\Delta_1}_{ij}|}{v_0}}\right)+\log\left(\frac{\eta}{2v_1}e^{-\frac{|\tilde{\theta}_{ij}|}{v_1}}+\frac{1-\eta}{2v_0}e^{-\frac{|\tilde{\theta}_{ij}|}{v_0}}\right)\\
&=&\log\frac{\Big(\frac{\eta}{2v_1}e^{-\frac{|0|}{v_1}}\Big)+\Big(\frac{1-\eta}{2v_0}e^{-\frac{|0|}{v_0}}\Big)}{\Big(\frac{\eta}{2v_1}e^{-\frac{|{\Delta_1}_{ij}|}{v_1}}\Big)+\Big(\frac{1-\eta}{2v_0}e^{-\frac{|{\Delta_1}_{ij}|}{v_0}}\Big)}\\
&=&\frac{|{\Delta_1}_{ij}|}{v_0}-\log\Big(\frac{v_0\eta e^{\frac{|{\Delta_1}_{ij}|}{v_0}-\frac{|{\Delta_1}_{ij}|}{v_1}}+v_1{(1-\eta)}}{v_0\eta+v_1{(1-\eta)}}\Big).
\end{eqnarray*}
For any $(i,j)$ $\in$ $\mathcal{B}$ and $i\ne j$, applying Taylor expansion, for some $v\in(0,1), $ we have
\begin{equation*}
\begin{split}
&-\log\left(\frac{\eta}{2v_1}e^{-\frac{|\tilde{\theta}_{ij}+{\Delta_1}_{ij}|}{v_1}}+\frac{1-\eta}{2v_0}e^{-\frac{|\tilde{\theta}_{ij}+{\Delta_1}_{ij}|}{v_0}}\right)+\log\left(\frac{\eta}{2v_1}e^{-\frac{|\tilde{\theta}_{ij}|}{v_1}}+\frac{1-\eta}{2v_0}e^{-\frac{|\tilde{\theta}_{ij}|}{v_0}}\right)\\
&=\text{pen}_{SS}{'}(\tilde{\theta}_{ij}){\Delta_1}_{ij}+\frac{1}{2}\text{pen}_{SS}{''}\left(\tilde{\theta}_{ij}+v{\Delta_1}_{ij}\right){\Delta_1}_{ij}^2.
\end{split}
\end{equation*}
Combining the results above, we have
\begin{equation*}
\begin{split}
G(\Delta_1)\ge& \frac{n}{2}\Big(\sum_{i,j}\left({\Delta_1}_{ij}(s_{ij}-{\tilde{\Theta}^{-1}}_{ij})\right)+\frac{1}{4}k_1^2\|{\Delta_1}\|_F^2\Big)+\sum_{(i,i)\in \mathcal{B}}\tau{\Delta_1}_{ii}\\
&+\frac{1}{2}\sum_{(i,j)\in \mathcal{B}, i\ne j}\left(\text{pen}_{SS}^{'}(\tilde{\theta}_{ij}){\Delta_1}_{ij}+\frac{1}{2}\text{pen}_{SS}^{''}(\tilde{\theta}_{ij}+v{\Delta_1}_{ij}){\Delta_1}_{ij}^2\right)\\
&-\frac{1}{2}\sum_{(i,j)\notin \mathcal{B}}\left(-\frac{|{\Delta_1}_{ij}|}{v_0}+\log\Big(\frac{v_0\eta e^{\frac{|{\Delta_1}_{ij}|}{v_0}-\frac{|{\Delta_1}_{ij}|}{v_1}}+v_1{(1-\eta)}}{v_0\eta+v_1{(1-\eta)}}\Big)\right)\\
&=\text{(A)}+\text{(B)}+\text{(C)},
\end{split}
\end{equation*}
where
\begin{equation*}
\begin{split}
\text{(A)}=&\frac{n}{2}\Big(\sum_{(i,j)\in \mathcal{B}}{\Delta_1}_{ij}(s_{ij}-{\tilde{\Theta}^{-1}}_{ij}+\frac{2}{n}Z_{ij})\Big),\\
\text{(B)}=&\frac{n}{2}\left(\sum_{(i,j)\notin \mathcal{B}}\left({\Delta_1}_{ij}(s_{ij}-{\tilde{\Theta}^{-1}}_{ij})-\frac{1}{n}\Big(-\frac{|{\Delta_1}_{ij}|}{v_0}+\log\frac{v_0\eta e^{\frac{|{\Delta_1}_{ij}|}{v_0}-\frac{|{\Delta_1}_{ij}|}{v_1}}+v_1{(1-\eta)}}{v_0\eta+v_1{(1-\eta)}}\Big)\right)\right),\\
\text{(C)}=&\frac{n}{8}k_1^2\|{\Delta_1}\|_F^2+\sum_{(i,j)\in \mathcal{B}, i\ne j}\frac{1}{4}\text{pen}_{SS}{''}\left(\tilde{\theta}_{ij}+v{\Delta_1}_{ij}\right){\Delta_1}_{ij}^2.
\end{split}
\end{equation*}
Next, we show that all three terms, (A), (B), and (C), are non-negative.
\begin{itemize}
\item $\text{(A)}=0$ because of the way $\tilde{\Theta}$ is constructed.
\item $\text{(C)} \ge 0$ by the property of $\text{pen}_{SS}{''}(\delta)$ stated before.
\item For term (B), we will first bound $s_{ij}-{\tilde{\Theta}^{-1}}_{ij}$:
\begin{equation*}
\begin{split}
|s_{ij}-{\tilde{\Theta}^{-1}}_{ij}|&\le |s_{ij}-{\sigma}^0_{ij} |+|{\tilde{\Theta}^{-1}}_{ij}-\sigma^0_{ij}|\\
&\le C_1\sqrt{\frac{\log p}{n}}+2M_{\Sigma^0}^2\left(C_1+C_3\right)M_{\Gamma^0}\sqrt{\frac{\log p}{n}}+\frac{3}{2}dM^3_{\Sigma^{0}}\left(2(C_1+C_3)M_{\Gamma^0}\sqrt{\frac{\log p}{n}}\right)^2\\
&\le \left(C_1+2M_{\Sigma^0}^2\left(C_1+C_3\right)M_{\Gamma^0}+
6\left(C_1+C_3\right)^2dM_{\Gamma^0}^2M_{\Sigma^0}^3/M\right)\sqrt{\frac{\log p}{n}},
\end{split}
\end{equation*}
where the second line is due to Lemma \ref{lemma:5}.
Next, we bound the fraction inside the $\log$ in $\text{(B)}$. For simplicity, denote it by $f({\Delta_1}_{ij})$. Since $1/v_0-1/v_1>0$, $f({\Delta_1}_{ij})$ is a monotone function of ${\Delta_1}_{ij}$, and $f({\Delta_1}_{ij})$ tends to $1$ as ${\Delta_1}_{ij}$ tends to $0$. That is, $\log f({\Delta_1}_{ij})$ can be made arbitrarily close to $0$ when ${\Delta_1}_{ij}$ is sufficiently small, so the second term in the summation can be made arbitrarily close to $|{\Delta_1}_{ij}|/(nv_0)$.
So if we choose $1/(nv_0)>C_1+2M_{\Sigma^0}^2(C_1+C_3)M_{\Gamma^0}+
6(C_1+C_3)^2dM_{\Gamma^0}^2M_{\Sigma^0}^3/M$ and $\epsilon>0$ sufficiently small, we have $\text{(B)}\ge0$ when $\|\Delta_1\|_{\infty}\le \epsilon$.
\end{itemize}
Combining the results above, we have shown that
there exists a small $\epsilon>0$ such that $G(\Delta_1)\ge 0$ whenever $\|\Delta_1\|_{\infty}\le \epsilon$. That is, $\tilde{\Theta}$ is a local minimizer, which proves Theorem \ref{thm:proof}.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{Thm:estimate}}]
\cite{cai2011constrained} showed that the sample noise $\tilde{W}$ is bounded by a constant multiple of $\sqrt{\frac{\log p}{n}}$ with high probability, for both exponential and polynomial tails (see the proofs of their Theorems 1 and 4). That is,
\begin{itemize}
\item When condition (C1) holds,
$$\|\tilde{W}\|_{\infty} \le\eta_1^{-1}(2+\tau_0+\eta_1^{-1}K^2)\sqrt{\frac{\log p}{n}}$$
with probability greater than $1-2p^{-\tau_0}.$
\item When condition (C2) holds,
$$\|\tilde{W}\|_{\infty} \le\sqrt{({\theta^0_{max}}+1)(4+\tau_0)\frac{\log p}{n}}, \quad {\theta^0_{max}}=\max_{ij}\theta^0_{ij},$$
with probability greater than $1-O(n^{-\delta_0/8}+p^{-\tau_0/2})$.
\end{itemize}
With the results above on $\|\tilde{W}\|_{\infty}$ and Theorem \ref{thm:proof}, we have proven Theorem \ref{Thm:estimate}.
\end{proof}
\section*{Appendix B: Other Proofs}
\begin{proof}[\textbf{Proof of Lemma \ref{lemma:4}}] We show that both $\|\Delta_{\mathcal{B}}\|_{\infty}$ and $\|\Delta_{\mathcal{B}^c}\|_{\infty}$ are bounded by $r$; thus, $\|\Delta\|_{\infty} \le r.$
\begin{enumerate}
\item By construction,
$$\|\Delta_{\mathcal{B}^c}\|_{\infty}\le 2(C_1+C_3)M_{\Gamma^0}\sqrt{\log p/n}\le r.$$
\item The proof for $\|\Delta_{\mathcal{B}}\|_{\infty} \le r$ is inspired by \cite{ravikumar2011high}. Define $G(\Theta_{\mathcal{B}})=n\left(-\Theta_{\mathcal{B}}^{-1}+S_{\mathcal{B}}\right)/2+Z_{\mathcal{B}}$. By definition, the set of $\Theta_{\mathcal{B}}$ that satisfies $G(\Theta_{\mathcal{B}})=0$ is the set $\mathcal{A}$.
Consider a mapping $F$ from $\mathbb{R}^{|{\mathcal{B}}|}\rightarrow \mathbb{R}^{|{\mathcal{B}}|}$:
\begin{equation}
F\left(\text{vec}\left(\Delta_{\mathcal{B}}\right)\right)=\frac{2}{n}\Big(-{\Gamma^{0}}_{{\mathcal{B}}{\mathcal{B}}}^{-1}\text{vec}\left(G\left(\Theta_{\mathcal{B}}^0+\Delta_{\mathcal{B}}\right)\right)\Big)+\text{vec}\left(\Delta_{\mathcal{B}}\right).
\end{equation}
By construction, $F\left(\text{vec}\left(\Delta_{\mathcal{B}}\right)\right)=\text{vec}\left(\Delta_{\mathcal{B}}\right)$ if and only if $G\left(\Theta_{\mathcal{B}}^0+\Delta_{\mathcal{B}}\right)=G\left(\Theta_{\mathcal{B}}\right)=0.$
Let $\mathbb{B}\left(r\right)$ denote the $\ell_{\infty}$ ball in $\mathbb{R}^{|{\mathcal{B}}|}$.
If we could show that $F\left(\mathbb{B}\left(r\right)\right)\subseteq \mathbb{B}\left(r\right)$, then because $F$ is continuous and $\mathbb{B}\left(r\right)$ is convex and compact, by Brouwer's fixed point theorem, there exists a fixed point $\text{vec}(\Delta_{\mathcal{B}})\in \mathbb{B}(r)$. Thus $\|\Delta_{\mathcal{B}}\|_{\infty}\le r$.
Let $\Delta\in\mathbb{R}^{p\times p}$ denote the zero-padded matrix, equal to $\Delta_{\mathcal{B}}$ on ${\mathcal{B}}$ and zero on ${\mathcal{B}}^c$.
\begin{equation*}
\begin{split}
F\left(\text{vec}\left(\Delta_{\mathcal{B}}\right)\right)&=\frac{2}{n}\Big(-{\Gamma^{0}}_{{\mathcal{B}}{\mathcal{B}}}^{-1}\text{vec}\left(G\left(\Theta_{\mathcal{B}}^0+\Delta_{\mathcal{B}}\right)\right)\Big)+\text{vec}\left(\Delta_{\mathcal{B}}\right)\\
&=-{\Gamma^{0}}_{{\mathcal{B}}{\mathcal{B}}}^{-1}\Big(\left(-(\Theta^0+\Delta)_{\mathcal{B}}^{-1}+S_{\mathcal{B}}\right)+\frac{2}{n}Z_{\mathcal{B}}\Big)+\text{vec}\left(\Delta_{\mathcal{B}}\right)\\
&=-{\Gamma^{0}}_{{\mathcal{B}}{\mathcal{B}}}^{-1}\Big(-\left(\Theta^0+\Delta\right)_{\mathcal{B}}^{-1}+{\Theta^{0}_{\mathcal{B}}}^{-1}-{\Theta^{0}_{\mathcal{B}}}^{-1}+S_{\mathcal{B}}+\frac{2}{n}Z_{\mathcal{B}}\Big)+\text{vec}\left(\Delta_{\mathcal{B}}\right)\\
&={\Gamma^{0}}_{{\mathcal{B}}{\mathcal{B}}}^{-1}\text{vec}\left(\Theta^{0^{-1}}\Delta\Theta^{0^{-1}}\Delta J\Theta^{0^{-1}}\right)_{\mathcal{B}}-{\Gamma^{0}}_{{\mathcal{B}}{\mathcal{B}}}^{-1}\left(\text{vec}\left(W_{\mathcal{B}}+\frac{2}{n}Z_{\mathcal{B}}\right)\right).
\end{split}
\end{equation*}
Denote
\begin{eqnarray*}
\mathbf{I} &=& {\Gamma^{0}}_{{\mathcal{B}}{\mathcal{B}}}^{-1}\text{vec}\left(\Theta^{0^{-1}}\Delta\Theta^{0^{-1}}\Delta J\Theta^{0^{-1}}\right)_{\mathcal{B}} \\
\mathbf{II}&=&{\Gamma^{0}}_{{\mathcal{B}}{\mathcal{B}}}^{-1}\left(\text{vec}\left(W_{\mathcal{B}}+\frac{2}{n}Z_{\mathcal{B}}\right)\right).
\end{eqnarray*}
Then $\|F\left(\text{vec}\left(\Delta_{\mathcal{B}}\right)\right)\|_{\infty}\le \|\mathbf{I}\|_{\infty}+\|\mathbf{II}\|_{\infty}$. So it suffices to show $\|\mathbf{I}\|_{\infty}+\|\mathbf{II}\|_{\infty}\le r$.
For the first relationship, we have
\begin{equation*}
\begin{split}
\|\mathbf{I}\|_{\infty}&\le \vertiii{{\Gamma^{0}}^{-1}_{{\mathcal{B}}{\mathcal{B}}}}_{\infty}\|\text{vec}(\Theta^{0^{-1}}\Delta\Theta^{0^{-1}}\Delta J\Theta^{0^{-1}})_{\mathcal{B}}\|_{\infty}\\
&\le M_{\Gamma^0}\|R(\Delta)\|_{\infty}\\
&\le \frac{3}{2}dM_{\Gamma^0}M_{\Sigma^0}^3\|\Delta\|_{\infty}^2,
\end{split}
\end{equation*}
where the last inequality is due to $\|\Delta\|_{\infty}\le r\le 1/(3M_{\Sigma^0}d)$ and Lemma 5 from \cite{ravikumar2011high}. Since $r\le1/(3dM_{\Gamma^0}M_{\Sigma^0}^3)$, we further have $\|\mathbf{I}\|_{\infty}\le r/2.$
By assumption, $\min{|\theta_{\mathcal{B} \cap \mathcal{D}^c}^{0}|}\ge r+\delta$; thus, when $\|\Delta\|_{\infty}\le r$, we have $\min|{\theta}_{\mathcal{B}\cap {\mathcal{D}^c}}|\ge\delta$. Since $\text{pen}^{'}_{SS}\left(|\theta|\right)$ is monotonically decreasing, $\|Z_{\mathcal{B}\cap\mathcal{D}^c}\|_{\infty}\le\frac{1}{2}\text{pen}^{'}_{SS}(\delta)$. Thus, for the second relationship, we have
\begin{equation*}
\begin{split}
\|\mathbf{II}\|_{\infty}&\le\vertiii{{\Gamma^{0}}^{-1}_{{\mathcal{B}}{\mathcal{B}}}}_{\infty} \Big(\|W\|_{\infty}+\frac{2}{n}\max \left( \frac{1}{2}\text{pen}^{'}_{SS}(\delta),\tau\right)\Big)\\
& \le M_{\Gamma^0}\left(\|W\|_{\infty}+\frac{2}{n}\max \left( \frac{1}{2}\text{pen}^{'}_{SS}(\delta),\tau\right)\right) \le r/2
\end{split}
\end{equation*}
by assumption.
\end{enumerate}
Thus, there exists a point $\tilde{\Theta}$ such that $\|\tilde{\Theta}-\Theta^0\|_\infty\le r.$
Because $\|\tilde{\Theta}\|_2\le \|\tilde{\Theta}-\Theta^0\|_2+\|\Theta^0\|_2$ and $\|\tilde{\Theta}-\Theta^0\|_2\le \vertiii{\tilde{\Theta}-\Theta^0}_\infty\le dr$, we have $\|\tilde{\Theta}\|_2\le 1/k_1+dr<B.$ Because $dr < \frac{1}{3M_{\Sigma^0}} < \frac{1}{3}\lambda_{\min}(\Theta^0)$, we have $\lambda_{\min}(\tilde{\Theta}) >0$. So $\tilde{\Theta}$ is inside $\mathcal{A}$ by assumption; that is, $\mathcal{A}$ is non-empty.
\end{proof}
\begin{proof} [\textbf{Proof of Lemma \ref{lemma:5}}]
Since there are only $p+s$ nonzero entries, (\ref{eq1:lemma5}) follows:
$$\|\tilde{\Theta}-\Theta^0\|_{F}=\sqrt{\sum_{(i,j)\in S_g}(\tilde{\theta}_{ij}-\theta_{ij}^0)^2}\le r\sqrt{p+s}.$$
Since there are at most $d$ nonzero entries in each column of $\Theta$ and $\Theta$ is symmetric,
$$\|\tilde{\Theta}-\Theta^0\|_{2}\le \vertiii{\tilde{\Theta}-\Theta^0}_{\infty}\le rd.$$
In addition, since the entrywise $\ell_\infty$ norm is bounded by the Frobenius norm, (\ref{eq2:lemma5}) follows. We omit the proof of (\ref{eq3:lemma5}), which is nearly identical to that of Corollary 4 in \cite{ravikumar2008high}. \end{proof}
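As a quick numerical sanity check (illustrative Python, not part of the proof), the two norm facts used in this lemma, that the spectral norm of a symmetric matrix is bounded by its $\ell_\infty/\ell_\infty$ operator norm (the maximum absolute row sum) and that the entrywise maximum is bounded by the Frobenius norm, can be verified on random symmetric matrices:

```python
import numpy as np

def linf_op_norm(a):
    """l_inf/l_inf operator norm: the maximum absolute row sum."""
    return np.abs(a).sum(axis=1).max()

rng = np.random.default_rng(0)
for _ in range(100):
    a = rng.standard_normal((8, 8))
    a = (a + a.T) / 2.0                                     # random symmetric matrix
    assert np.linalg.norm(a, 2) <= linf_op_norm(a) + 1e-10  # spectral <= op-inf norm
    assert np.abs(a).max() <= np.linalg.norm(a, "fro") + 1e-10
```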
\begin{proof}[\textbf{Proof of Theorem \ref{Thm:select}}](Selection consistency)\\
Recall
\begin{equation}
\begin{split}
\log\frac{p_{ij}}{1-p_{ij}}&=\Big(\log\frac{v_0\eta}{v_1(1-\eta)}-\frac{|\tilde{\theta}_{ij}|}{v_1}+\frac{|\tilde{\theta}_{ij}|}{v_0}\Big)\\
&=\Big(-\log\frac{v_1(1-\eta)}{v_0\eta}-\frac{|\tilde{\theta}_{ij}|}{v_1}+\frac{|\tilde{\theta}_{ij}|}{v_0}\Big).
\end{split}
\end{equation}
\begin{itemize}
\item When $\theta^0_{ij}=0$,
by construction, $\tilde{\theta}_{ij}=0.$ Then with our choice of $v_1(1-\eta)/\left(v_0\eta\right)$,
$$\log\frac{p_{ij}}{1-p_{ij}}\rightarrow -\infty.$$
\item When $\theta^0_{ij}\ne0$, we have
\begin{equation}
\begin{split}
\log\frac{p_{ij}}{1-p_{ij}}&=\Big(\log\frac{v_0\eta}{v_1(1-\eta)}-\frac{|\tilde{\theta}_{ij}|}{v_1}+\frac{|\tilde{\theta}_{ij}|}{v_0}\Big)\\
&\ge \left(-\log\frac{v_1(1-\eta)}{v_0\eta}+\left(\frac{1}{v_0}-\frac{1}{v_1}\right)\left(|\theta^0_{ij}|-|\theta^0_{ij}-\tilde{\theta}_{ij}|\right)\right)\\
&\ge -\log\frac{v_1(1-\eta)}{v_0\eta}+(C_4-C_3)\left(K_0-2(C_1+C_3)M_{\Gamma^0}\right)\log p.
\end{split}
\end{equation}
Then with our choice of $v_1(1-\eta)/ (v_0\eta)$,
$$\log\frac{p_{ij}}{1-p_{ij}}\rightarrow +\infty.$$
\end{itemize}
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{thm:sym}}]
The estimate of the precision matrix is symmetric by construction.
Next, we show that the estimate is guaranteed to be positive definite. Assume that $\Theta^{(t)}$, the $t$-th update of the estimate, is positive definite. This assumption holds for $t=0$, since the initial estimate $\Theta^{(0)}$ is positive definite.
Then it suffices to show that $\det(\Theta^{(t+1)})>0$. Without loss of generality, assume we update the last column of $\Theta$ in the $(t+1)$-th iteration. Using the Schur complement, we have
$$ \det\left(\Theta^{(t+1)}\right)=\det\left(\Theta_{11}^{(t)}\right)\left(\theta_{22}^{(t+1)}-\theta_{12}^{{(t+1)^T}}\Theta_{11}^{{(t)}^{-1}}\theta_{12}^{(t+1)}\right).$$
Because $\Theta^{(t)}\succ0$, we have $\det\left(\Theta_{11}^{(t)}\right)>0.$ Further,
the updating rule of our algorithm ensures that
$$\left(\theta_{22}^{(t+1)}-\theta_{12}^{{(t+1)^T}}\Theta_{11}^{{(t)}^{-1}}\theta_{12}^{(t+1)}\right)=\frac{1}{w_{22}^{(t+1)}}>0.$$
Thus, $\det\left(\Theta^{(t+1)}\right) >0$.
\end{proof}
\section*{Appendix C: Checking $\|\Theta\|_2\le B$.}
Algorithm 1 involves checking the spectral norm constraint $\|\Theta\|_2\le B$ after every column update of $\Theta.$ Computing $\|\Theta\|_2$ can be computationally intensive; however, since we change only one column (and the corresponding row) at a time, the constraint can be checked without recomputing $\|\Theta\|_2$ each time. Suppose we know $\|\Theta^{(t)}\|_2$ (or an upper bound on it) from the previous step, and let $\Delta^{(t)}:= \Theta^{(t+1)} - \Theta^{(t)}$ denote the difference between the estimates after one column update. To verify the bound, it suffices to check that $\|\Theta ^{(t)}\|_2+\|\Delta^{(t)}\|_2<B$. This check is cheap because $\Delta^{(t)}$ is a rank-two matrix whose maximum absolute eigenvalue is available in closed form. Only when $\|\Theta ^{(t)}\|_2+\|\Delta^{(t)}\|_2$ exceeds $B$ do we need to recompute $\|\Theta^{(t+1)}\|_2$.
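A minimal sketch of this check (illustrative Python, not the authors' implementation): if column $j$ of $\Theta$ changes by a vector $c$ (with the matching row change), then $\Delta = u e_j^T + e_j u^T$ with $u = c - (c_j/2)e_j$, the nonzero eigenvalues of $\Delta$ are $u_j \pm \|u\|_2$, and hence $\|\Delta\|_2 = |u_j| + \|u\|_2$:

```python
import numpy as np

def rank2_update_norm(c, j):
    """Spectral norm of Delta = c e_j^T + e_j c^T - c_j e_j e_j^T, the symmetric
    matrix obtained by changing column/row j by the vector c.  Writing
    Delta = u e_j^T + e_j u^T with u = c - (c_j/2) e_j, the nonzero eigenvalues
    are u_j +/- ||u||_2, so ||Delta||_2 = |u_j| + ||u||_2."""
    u = np.asarray(c, dtype=float).copy()
    u[j] -= c[j] / 2.0
    return abs(u[j]) + np.linalg.norm(u)

# Compare the closed form against a direct dense computation on a random update.
rng = np.random.default_rng(1)
p, j = 6, 2
c = rng.standard_normal(p)
e_j = np.eye(p)[j]
delta = np.outer(c, e_j) + np.outer(e_j, c)
delta[j, j] -= c[j]                      # avoid double-counting the diagonal entry
closed = rank2_update_norm(c, j)
direct = np.linalg.norm(delta, 2)
assert abs(closed - direct) < 1e-10
```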
\bibliographystyle{apalike}
|
{
"timestamp": "2018-05-22T02:13:12",
"yymm": "1805",
"arxiv_id": "1805.02257",
"language": "en",
"url": "https://arxiv.org/abs/1805.02257"
}
|
\section{Introduction}
As the end products of binary evolution, double white dwarf (WD) binaries are
good probes for testing stellar and binary evolutionary theory \citep{mars95,toon14}.
In particular, they are thought to be
progenitors of Type Ia supernovae \citep{iben84, webb84},
AM CVn systems \citep{bree12, kili14}, and R CrB stars \citep{webb84}.
Furthermore, close double WDs are believed to be the dominant Galactic gravitational-wave sources
in the frequency range of $10^{-4}$ to 0.1 Hz, and will be detectable by the \emph{Laser Interferometer Space Antenna}
\citep{hils90,nele01, herm12}.
Based on the Sloan Digital Sky Survey (SDSS; \cite{eise06, york00}) subspectra,
the double WD binary SDSS J125733.63$+$542850.5 (hereafter J1257) was first
discovered by the Sloan WD Radial velocity Mining Survey \citep{bade09}.
Its radial-velocity variations, with a semi-amplitude of 323 $\rm km\,s^{-1}$, were interpreted to
originate from a 0.9 ${\rm M_\odot}$ WD, while the companion was suggested to be a neutron star or black hole \citep{bade09}.
Two distinct components were revealed in the
\emph{B}- and \emph{R}-band spectra,
and the Balmer lines with large radial-velocity variations were identified to
come from a cool, extremely low-mass WD with mass less than 0.3 ${\rm M_\odot}$ \citep{kulk10, mars11}.
Recently, \cite{bour15} fit both the Hubble Space Telescope Cosmic Origins Spectrograph
and the Space Telescope Imaging Spectrograph spectra, and the SDSS \textit{ugriz} flux
with a Markov Chain Monte Carlo approach.
Their results indicate that the massive component has a surface gravity log $g_1\sim8.73\pm0.05$,
and an effective temperature $T_1\sim13030\pm70$ {\rm K}.
Detailed evolutionary models reveal the mass to be $M_1=1.06\pm0.05~{\rm M_\odot}$,
and a corresponding cooling age of $\tau_1=1.0$ {\rm Gyr} or 1.2 {\rm Gyr} for carbon/oxygen and oxygen/neon
WD models respectively \citep{kowa06,alth07,trem11}.
However, the low-mass WD with $M_2<0.24~{\rm M_\odot}$
has a temperature of $T_2\sim6400\rm ~K$ and a cooling age of $\tau_2\geq5$ Gyr \citep{mars11,bour15}.
The two cooling ages are in contradiction with binary stellar evolutionary theory.
The progenitor of the low-mass He WD probably had a mass of
$1-2~{\rm M_\odot}$ \citep{istr14}, and should have evolved much more slowly
than the $5-6~{\rm M_\odot}$ progenitor of the more massive WD.
After the formation of the low-mass WD, CNO flashes could cause it to fill its Roche lobe,
and accretion heating would alter the thermal structure of the massive WD
with a duration of $\sim10^{6}~\rm yr$ \citep{bild06}.
However, this mass transfer timescale ($\sim100~\rm yr$) is too short to influence the cooling history of the massive WD.
In this paper, we propose that the difference in the cooling ages
originates from the heating process during the formation of a strange dwarf.
We describe the strange dwarf scenario in section 2.
In section 3, we discuss the changes of the orbital parameters during phase transition
and the possible spin-down process of the massive WD.
Employing the MESA code, we simulate the evolutionary history of J1257 in section 4.
A brief summary and discussion are presented in section 5.
\section{Strange dwarf Scenario}
Based on the hypothesis that strange quark matter may be the most stable state of matter, \cite{witt84}
proposed that pulsars may be strange quark stars (SSs) rather than neutron stars (NSs).
Following this idea, the concept of strange dwarfs (SDs) was introduced by \cite{glen95a, glen95b} as the strange counterparts of WDs.
They pointed out that the inner density of ordinary stable WDs is always below the critical density $\rho_{\rm c}\sim10^9~{\rm g~cm^{-3}}$,
which is the central density of a maximum-mass ($\sim1~{\rm M_\odot}$) WD \citep{baym71}.
The structure and thermal evolution of SDs with different masses have been studied by \cite{benv96} in detail.
Their computation indicates that the thermal evolution of an SD with mass larger
than $1~{\rm M_\odot}$ is similar to that of a WD with the same mass.
Following these studies, we assume that if the central density of a WD exceeds the critical density, a strange quark core will emerge
in the central region and the star will evolve into an SD.
During this phase-transition (PT, hereafter) process, the mass of the star decreases slightly (by a few percent):
part of this mass is converted into the binding energy of the more compact SD,
and the rest is lost from the star.
The energy released during this process may heat the star and result in a hot SD.
Under the assumptions mentioned above,
we outline the evolutionary stages of J1257 as follows, which is also illustrated in Fig. 1.
\begin{itemize}
\item We start from a compact binary (with an orbital period $P_i\sim 1.5$ days),
which consists of a white dwarf with $M_1\sim0.6-0.8~{\rm M_\odot}$
and a main sequence star with $M_2\sim1.5~{\rm M_\odot}$.
\item Roche-lobe overflow. The WD accretes around $0.3-0.5~{\rm M_\odot}$
of material from the donor star and spins up to nearly breakup rotation.
The spin-up would also significantly broaden the hydrogen line profiles, as reported by \cite{kulk10}.
\item Formation of the second WD.
The mass transfer terminates and the secondary star evolves into a low-mass WD.
The central density of the primary WD is centrifugally diluted by the rapid rotation attained through accretion;
after the mass transfer ends, the WD starts to spin down.
Meanwhile, the second WD gradually cools.
\item PT. After several ${\rm Gyr}$, the spin-down of the massive WD raises its central density above the critical density;
PT then takes place in its core, and the nascent SD is heated up to $\sim10^8~{\rm K}$.
\item Cooling of the massive SD.
According to \cite{benv96}, the cooling process of the SD is similar to that of a normal WD with the same mass.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth,trim={30 0 80 0}]{f1.eps}
\caption{\label{fig:sch} Illustration of our strange dwarf evolution from the compact white dwarf binary to the present observed system J1257.}
\end{figure}
The temperature of the nascent SD can be estimated as follows.
Assuming that the difference between the gravitational masses of a WD and an SD with the same quark number is $\Delta M$,
and that a fraction $\alpha$ ($<1$) of the rest energy of the mass lost during PT is used to heat the nascent SD,
the thermal energy received by the SD is:
\begin{equation}
Q=\alpha\Delta Mc^2.
\end{equation}
The internal energy of the SD with a temperature $T$ can be estimated as:
\begin{equation}
U_{\rm SD}=\frac{3}{2}{k_{\rm B}}NT,
\end{equation}
where $k_{\rm B}$ is Boltzmann's constant and $N=M_{\rm SD}/(Am_{\rm u})$ is the total number of quarks and electrons.
Here $M_{\rm SD}$ is the mass of the SD, $m_{\rm u}$ is the atomic mass unit, and $A$ is the relative particle mass.
Following \cite{alco86}, we assume that the SD is composed of roughly equal numbers of up, down, and strange quarks and a small (negligible) number of electrons.
Considering that the mass of the SD decreases slightly during PT while the quark number remains constant, we take $A\sim0.3$.
Since the effective temperature of the WD before PT was much lower than that of the nascent SD,
the internal energy of the WD, $U_{\rm WD}$ ($\ll U_{\rm SD}$), is negligible.
Because $Q=U_{\rm SD} - U_{\rm WD}\approx U_{\rm SD}$, the temperature of the nascent SD can be written as:
\begin{equation}
T=\frac{2}{3}\frac{\alpha\Delta Mc^2}{k_{\rm B}N}=\frac{2}{3}\frac{\alpha\Delta Mc^2Am_{\rm u}}{k_{\rm B}M_{\rm SD}}.
\end{equation}
PT between NSs and SSs has been extensively investigated for different equations of state.
Several works suggested that the difference between the gravitational masses of a NS and an
SS with the same baryon number is roughly $M_{\rm NS}-M_{\rm SS}\approx 0.15~{\rm M_\odot}$
for a NS with mass $\sim1.5~{\rm M_\odot}$ \citep[e.g.,][]{bomb00, drag07, marq17}\footnote{A lower value is also possible; for example,
the gravitational-mass difference between a NS and a hyperon star given by \cite{scha02} is $\sim0.03~\rm {M_\odot}$.}.
Considering the difference in the mass and compactness between a WD and a NS, in this work we take
$\Delta M\sim 0.05~{\rm M_\odot}$.
Considering that most of the energy liberated during PT is taken away by neutrinos (and anti-neutrinos),
as in supernova explosions \citep{kipp12}, and only a small fraction is used to heat the nascent SD, we set $\alpha\sim 0.001$ as a lower limit.
Taking $A\sim0.3$ and $M_{\rm SD}=1.05~\rm M_{\odot}$, the nascent SD had an initial temperature $T\simeq10^8~{\rm K}$.
In comparison, the temperature of a $1.0 ~\rm M_\odot$ WD is $<10^7~\rm K$.
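Equation (3) can be evaluated directly; the following sketch (illustrative Python with CGS constants hardcoded, not the authors' code) reproduces the $T\simeq10^8~{\rm K}$ estimate for the fiducial values quoted above:

```python
# Direct evaluation of Eq. (3) in CGS units, with the fiducial values quoted
# in the text (alpha = 1e-3, Delta M = 0.05 Msun, A = 0.3, M_SD = 1.05 Msun).
C_LIGHT = 2.99792458e10   # speed of light [cm/s]
K_B = 1.380649e-16        # Boltzmann constant [erg/K]
M_U = 1.66053907e-24      # atomic mass unit [g]

def nascent_sd_temperature(alpha=1.0e-3, dm_msun=0.05, a_rel=0.3, m_sd_msun=1.05):
    """T = (2/3) alpha DeltaM c^2 A m_u / (k_B M_SD); only the mass ratio
    DeltaM / M_SD enters, so solar masses cancel."""
    return (2.0 / 3.0) * alpha * (dm_msun / m_sd_msun) * C_LIGHT**2 * a_rel * M_U / K_B

T = nascent_sd_temperature()   # ~1e8 K
```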
\section{Orbital Change during PT and the Spin Down of Progenitor WD}
\subsection{Orbital change during PT}
We first discuss the influence of PT on the eccentricity of the binary. Considering that the PT in the core of the WD took place quickly,
and that a kick velocity $V_{\rm k}$ was imparted to the newborn SD, one can solve for the post-PT orbital parameters following \cite{shao16}.
Because of the long-lasting mass transfer, the orbit of the binary before PT can be taken to be circular.
Let $\phi$ be the position angle of $V_{\rm k}$ with respect to the pre-PT orbital plane
and $\theta$ the angle between $V_{\rm k}$ and the pre-PT orbital velocity $V_{\rm 0}$ ($=(2\pi GM_{\rm 0}/P_{\rm orb, 0})^{1/3}$).
The ratio between the semi-major axes before and after PT is \citep{hill83, dewi03}:
\begin{equation}
\frac{a_0}{a}=2-\frac{M_{0}}{M_{0}-\Delta{M}}(1+\nu^2+2\nu~{\rm cos}~\theta),
\end{equation}
where $\nu=V_{\rm k}/V_{\rm 0}$, and
$M_{0}$ and $P_{\rm orb, 0}$ are the total mass of the binary and orbital period of the binary before PT, respectively.
Under the influence of mass loss and kick, the eccentricity after PT can be written as \citep{hill83, dewi03}:
\begin{multline}
1-e^2=\frac{a_{\rm 0}M_{\rm 0}}{a({M_{0}-\Delta{M})}}[1+ 2\nu~{\rm cos}~\theta \\+ \nu^2({\rm cos}^2\theta + {\rm sin}^2\theta~{\rm sin}^2\phi)].
\end{multline}
Taking $M_{0}=M_{2}+M_{1}=1.3~{\rm M_\odot}$, $\Delta{M}=0.05~{\rm M_\odot}$, $P_{\rm orb, 0}=0.22~{\rm day}$,
we simulated the probability of obtaining a small eccentricity ($e<0.01$) after PT for different $V_{\rm k}$ in the range of $0-50~{\rm km~s^{-1}}$.
For each $V_{\rm k}$, we drew $10^7$ independent random values of ${\rm cos}\,\theta$ and $\phi$,
uniformly distributed in the intervals $[-1,1]$ and $[0,\pi]$, respectively.
According to the observations of \cite{bade09} and \cite{mars11},
the current orbit of J1257 is circular, and a WD binary with $e<0.01$ could evolve into a circular orbit on a timescale of $\sim1~{\rm Gyr}$.
Fig.~2 shows the probability of obtaining an eccentricity less than 0.01 for different kick velocities.
When $V_{\rm k}\leq5~{\rm km~s^{-1}}$ or $V_{\rm k}\geq50~{\rm km~s^{-1}}$, the probabilities of $e<0.01$ are less than $0.1\%$ and $0.2\%$, respectively.
However, the probability is $\geq1\%$ when $6\leq V_{\rm k}\leq20~{\rm km~s^{-1}}$,
and reaches as high as $10\%$ for kick velocities of $8-9~\rm km\, s^{-1}$.
Similar to the case of accretion-induced collapse into a neutron star \citep{hurl10}, the nascent SD should receive a low kick velocity.
Therefore, the PT process has a relatively high probability of resulting in a nearly circular orbit.
According to the relation between the pre-PT and post-PT orbital separations, $a_0/(1+e)\leq a \leq a_0/(1-e)$ \citep{flan75},
one can show that the change in the orbital period is $\leq2\%$ when $e<0.01$.
Since the changes are relatively small, we ignore the orbital change of the binary during PT in our simulation.
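The Monte Carlo experiment above can be sketched as follows (illustrative Python, not the authors' code; the helper `ecc_prob` and its unit conventions, masses in $\rm M_\odot$, period in days, kick in $\rm km\,s^{-1}$, are assumptions for this sketch):

```python
import numpy as np

G = 6.674e-8        # gravitational constant [cgs]
MSUN = 1.989e33     # solar mass [g]
DAY = 86400.0       # [s]

def ecc_prob(v_kick_kms, n=200_000, m0=1.3, dm=0.05, porb_day=0.22, seed=0):
    """Fraction of random kick orientations leaving the post-PT orbit bound
    with eccentricity e < 0.01, using the post-kick relations above."""
    rng = np.random.default_rng(seed)
    # pre-PT relative orbital velocity V_0 = (2 pi G M_0 / P_orb)^(1/3)
    v0 = (2.0 * np.pi * G * m0 * MSUN / (porb_day * DAY)) ** (1.0 / 3.0)
    nu = v_kick_kms * 1.0e5 / v0
    cos_t = rng.uniform(-1.0, 1.0, n)    # cos(theta), uniform in [-1, 1]
    phi = rng.uniform(0.0, np.pi, n)     # phi, uniform in [0, pi]
    mu = m0 / (m0 - dm)
    a0_over_a = 2.0 - mu * (1.0 + nu**2 + 2.0 * nu * cos_t)
    one_m_e2 = a0_over_a * mu * (
        1.0 + 2.0 * nu * cos_t
        + nu**2 * (cos_t**2 + (1.0 - cos_t**2) * np.sin(phi)**2))
    e2 = 1.0 - one_m_e2
    ok = (a0_over_a > 0.0) & (e2 >= 0.0) & (e2 < 1.0e-4)  # bound orbit, e < 0.01
    return np.count_nonzero(ok) / n
```

With the parameters of J1257, small kicks cannot cancel the eccentricity induced by the mass loss, while kicks near $8-9~\rm km\,s^{-1}$ with suitable orientation can, consistent with the trend in Fig. 2.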
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth,trim={30 0 80 0},clip]{f2.eps}
\caption{\label{fig:sch} Probability of obtaining an eccentricity less than 0.01 for different values of the kick velocity.}
\end{figure}
\subsection{Spin down of massive WD }
Similar to pulsars, we assume that the spin-down of the WD, with angular velocity $\Omega=2\pi/P_{\rm s}$,
is dominated by magnetic dipole radiation\footnote{Gravitational-wave radiation might be an efficient mechanism for
extracting angular momentum from fast-rotating WDs due to the r-mode instability
on a short timescale ($<10^{8}~\rm yr$; Yoon \& Langer 2004).}, and the energy loss rate is:
\begin{equation}
\dot{E}_{\rm d}=-\frac{2}{3c^{3}}\mu^{2}\Omega^{4},
\end{equation}
where $\mu=BR^3=B_{7}R_{9}^{3}\times 10^{34}~{\rm G~cm^3}$ is the magnetic dipole moment,
$B_{7}$ is the surface magnetic field in units of $10^7~{\rm G}$, and $R_9$ is the radius of the WD in units of $10^9~{\rm cm}$.
The rotational energy of WD changes at a rate:
\begin{equation}
\dot{E}_{\rm s}=I\Omega\dot{\Omega},
\end{equation}
where $I\sim MR^2\approx10^{51}R_{9}^{2}~{\rm g~cm}^{2}$ is the moment of inertia of WD.
If we assume that the braking torque on the WD originates entirely from magnetic dipole radiation, the spin period of the WD changes at a rate:
\begin{equation}
\dot{P}_{\rm s}=\frac{8\pi^2}{3c^3}\frac{\mu^2}{IP_{\rm s}}=K/{P_{\rm s}},
\end{equation}
where $K=8\pi^2\mu^2/(3c^3I)\sim B_{7}^{2}R_{9}^{4}\times{10^{-13}}{\rm ~s}$.
By direct integration, one obtains the spin-down timescale of the WD from the initial spin period $P_{\rm s, 0}$
to the spin period $P_{\rm s}$ at PT:
\begin{equation}
\tau_{\rm SD}=\frac{P_{\rm s}^{2}-P_{\rm s,0}^{2}}{2K}\approx \frac{P_{\rm s}^{2}-P_{\rm s,0}^{2}}{6B_{7}^{2}R_{9}^{4}}~{\rm Myr}.
\end{equation}
Based on the theory of accretion disk--magnetic field interaction developed by \cite{GL1979},
\cite{kulk10} inferred that the magnetic field of J1257 was $\sim 10^{5}$ G when it spun up to its current spin period of $\sim60{\rm ~s}$.
However, \cite{cumm02} showed that rapid accretion can reduce the field strength at the surface of an accreting WD,
because the field is advected into the interior by the accretion flow.
Therefore, many apparently non-magnetic WDs ($B\la 10^{5}$ G) may harbor submerged magnetic fields if they
accreted at rates greater than the critical rate $\dot{M}_{\rm cr}=1-5\times 10^{-10}~\rm M_\odot\,yr^{-1}$.
The magnetic field re-emerges after the mass transfer terminates, on a timescale of:
\begin{equation}
\tau_{\rm re}\simeq 300\times(\frac{\Delta M_{\rm acc}}{0.1 {\rm ~M_\odot}})^{7/5}{\rm Myr},
\end{equation}
where $\Delta M_{\rm acc}$ is the accreted mass of the WD. Based on the model of \cite{cumm02} for magnetic field evolution, we propose that the surface field of the more massive WD in J1257 decreased from $\sim10^7 {\rm ~G}$ to $\sim10^5 {\rm ~G}$ due to rapid accretion.
According to the simulation in the next section, the accreted mass of the massive WD is $\Delta M_{\rm acc}\approx 0.45{\rm ~M_\odot}$, so the re-emergence timescale $\tau_{\rm re}\approx2.5\rm ~Gyr$.
Taking $P_{\rm s, 0}=10~{\rm s}$ (close to breakup rotation), $P_{\rm s}=60~{\rm s}$, $B_{7}=1$, and $R_{9}=0.6$,
one derives a spin-down timescale for the massive WD of about $4.5~{\rm Gyr}$.
Adding the re-emergence timescale of the magnetic field, $\tau_{\rm re}\approx 2.5~\rm Gyr$,
the total duration of $\sim$7 Gyr before the PT of the massive WD is approximately in agreement with the cooling age of the low-mass WD.
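The two timescales can be checked in a few lines (a sketch using the approximate formulas of Eqs. (9) and (10), with spin periods in seconds; not the authors' code):

```python
def spin_down_timescale_myr(p_s, p_s0, b7=1.0, r9=0.6):
    """Eq. (9): tau_SD ~ (P_s^2 - P_s0^2) / (6 B_7^2 R_9^4) Myr, P in seconds."""
    return (p_s**2 - p_s0**2) / (6.0 * b7**2 * r9**4)

def reemergence_timescale_myr(dm_acc_msun):
    """Eq. (10): tau_re ~ 300 (Delta M_acc / 0.1 Msun)^(7/5) Myr."""
    return 300.0 * (dm_acc_msun / 0.1) ** 1.4

tau_sd = spin_down_timescale_myr(60.0, 10.0)   # about 4.5 Gyr
tau_re = reemergence_timescale_myr(0.45)       # about 2.5 Gyr
total_gyr = (tau_sd + tau_re) / 1000.0         # about 7 Gyr in total
```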
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth,trim={30 0 80 0},clip]{f3.eps}
\caption{\label{fig:sch} Evolution of the orbital period for WD binaries consisting of a $1.5{\rm~ M_\odot}$ donor star and a $0.65{\rm ~M_\odot}$ WD.
The solid, dashed, and dotted curves correspond to metallicities $z=0.0001, 0.0005$, and $0.001$, respectively.
Numbers along the curves denote the initial orbital periods.}
\end{figure}
\section{Numerical Simulation}
Using the MESA code \citep{paxt15}, we have simulated the evolution of compact WD binaries consisting of a WD and a main-sequence star,
to test whether it is possible to reproduce the characteristics of J1257.
According to the estimates in the previous section, the orbital change during PT can be neglected, while the mass growth of the WD is taken into account.
For an accreting WD, hydrogen and helium shell flashes always trigger nova outbursts,
which blow off the accreted matter and may even result in convective dredge-up.
Therefore, the mass accumulation efficiency for accreted hydrogen should be less than 1.
During the mass transfer, the mass growth rate of the accreting WD is described as follows:
\begin{equation}
\dot{M}_{\rm 1}=\eta_{\rm He}\eta_{\rm H}|\dot{M}_{2}|,
\end{equation}
where $\dot{M}_{2}$ is the mass-transfer rate of the donor star,
and $\eta_{\rm H}$ and $\eta_{\rm He}$ are the accumulation efficiencies
during hydrogen burning and helium burning, respectively.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth,trim={30 0 80 0},clip]{f4.eps}
\caption{\label{fig:sch} Evolutionary tracks of the effective temperature of the donor star for WD binaries consisting of a $1.5{\rm~ M_\odot}$ donor star
and a $0.65{\rm ~M_\odot}$ WD. The three cases are the same as in Figure 3.}
\end{figure}
For the accumulation efficiency of hydrogen, a prescription given by
\cite{hach99} and \cite{han04} was adopted, i.e.
\begin{equation}
\eta_{\rm H}=
\begin{cases}
\dot{M}_{\rm cr}/|\dot{M}_{2}| &\text{$|\dot{M}_2|>\dot{M}_{\rm cr}$}, \\
1 &\text{$\dot{M}_{\rm cr}>|\dot{M}_2|>0.125\dot{M}_{\rm cr}$},\\
0 &\text{$|\dot{M}_2|<0.125\dot{M}_{\rm cr}$}.
\end{cases}
\end{equation}
In equation (12), $\dot{M}_{\rm cr}$ is a critical mass-accretion rate:
\begin{equation}
\dot{M}_{\rm cr}=5.3\times10^{-7}\frac{1.7-X}{X}(M_{1}-0.4)~{\rm M}_{\odot}~{\rm yr}^{-1},
\end{equation}
where $X$ is the mass abundance of hydrogen in the accreted matter.
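The accumulation-efficiency prescription above is simple enough to transcribe directly. The following Python sketch implements $\eta_{\rm H}$ and $\dot{M}_{\rm cr}$ as given above, for illustration only; the function names are ours, and the default hydrogen fraction $X=0.7$ is merely an illustrative choice, not a value from the paper.

```python
# Sketch of the hydrogen accumulation-efficiency prescription above.
# Names are ours; X = 0.7 is an illustrative default.

def m_dot_crit(m_wd, x_h):
    """Critical accretion rate (M_sun/yr) for a WD of mass m_wd (M_sun)
    accreting matter with hydrogen mass fraction x_h."""
    return 5.3e-7 * (1.7 - x_h) / x_h * (m_wd - 0.4)

def eta_h(m_dot_2, m_wd, x_h=0.7):
    """Piecewise hydrogen accumulation efficiency eta_H."""
    m_cr = m_dot_crit(m_wd, x_h)
    rate = abs(m_dot_2)
    if rate > m_cr:            # excess transferred matter is lost in winds
        return m_cr / rate
    if rate > 0.125 * m_cr:    # steady shell burning: full accumulation
        return 1.0
    return 0.0                 # weak flashes eject all accreted matter
```

For example, a $1.0~{\rm M_\odot}$ WD with $X=0.7$ gives $\dot{M}_{\rm cr}\approx 2.7\times10^{-7}~{\rm M_\odot\,yr^{-1}}$.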
For the accumulation efficiency $\eta_{\rm He}$ during helium burning, the prescription given by \cite{kato04} was adopted.
The mass loss $(1-\eta_{\rm H}\eta_{\rm He})\dot{M}_{2}$ during hydrogen and
helium burning is assumed to be ejected in the vicinity of the WD in the form of isotropic winds,
carrying away the specific angular momentum of the WD \citep{hach96, sobe97}.
In addition, we also consider angular momentum loss caused by gravitational radiation and
magnetic braking (with $\gamma=3.0$, \cite{rapp83, paxt15}).
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth,trim={30 0 80 0},clip]{f5a.eps}
\includegraphics[width=0.9\linewidth,trim={30 0 80 0},clip]{f5b.eps}
\caption{\label{fig:mass} Evolution of the donor-star mass (top panel) and the WD mass (bottom panel) for WD binaries including a $1.5{\rm~ M_\odot}$ donor star
and a $0.65{\rm ~M_\odot}$ WD. The three cases are the same as in Figure 3.}
\end{figure}
To study the progenitor properties of J1257, we simulated the evolution of a large number of WD binaries.
A binary is regarded as a progenitor candidate of J1257 if the following three
conditions are satisfied: (1) the current orbital period is 4.6 hr at an age within the Hubble time;
(2) the binary evolves into a detached system when the accreting WD's mass is about $1.05-1.15{\rm ~M_\odot}$
(because the WD must experience a long-term spin-down process,
the donor star should not overflow its Roche lobe once the WD mass has grown to $1.05-1.15~{\rm M_\odot}$);
(3) the effective temperature of the donor star is near 6400 K.
Figures 3-5 show an example evolution in which the initial donor mass and the initial WD mass are $1.5{\rm~ M_\odot}$ and $0.65{\rm ~M_\odot}$, respectively.
We change the initial metallicity in order to fit the observed parameters of J1257.
As shown in Fig. 3, because material is transferred from the more massive donor star to the less massive WD, the orbital period first decreases.
After the mass ratio is reversed, the orbital period increases until the binary evolves into a detached system.
Subsequently, the donor star gradually evolves into a WD and enters the cooling stage,
and magnetic braking and gravitational radiation produce a compact double-WD binary.
Once the mass transfer ceases, the massive WD spins down due to magnetic dipole radiation, eventually triggering the PT during the cooling of the low-mass WD.
The heating process during PT results in the formation of a hot SD.
As shown in Fig. 4, the simulated effective temperatures of the low-mass WD are always
higher than the observed value within the Hubble time, except for $z=0.0001$,
in which case the donor star orbits the WD with an initial orbital period of 1.492 days.
Fig. 5 shows the evolutionary tracks of the donor-star mass and the accreting WD's mass.
It is clear that our simulated donor-star masses are consistent with the observed data for all three metallicities.
In our calculations, donor stars with higher metallicity produce systems with lower-mass secondary WDs.
This difference should arise from the metallicity dependence of stellar wind mass loss,
which tends to reduce the mass of the donor star.
\section{Summary and Discussion}
Assuming that WDs and SDs are different stages of stellar evolution,
in this work we propose an SD scenario to interpret the puzzling cooling ages of the two WDs in J1257.
The massive WD is thought to be an SD originating from the PT of a $1.05-1.15~{\rm M_\odot}$ WD,
so its higher effective temperature can be interpreted as a result of heating during the PT.
A simple estimate indicates that a mass loss of $\Delta M\sim 0.05~{\rm M_\odot}$ during the PT can heat the nascent SD up to $10^8~{\rm K}$.
Based on these assumptions, we use the MESA code to simulate the evolution of a large number of WD binaries consisting of a $0.65~{\rm ~M_\odot}$ WD
and a $1.5~{\rm M_\odot}$ main-sequence star for different initial orbital periods and metallicities.
Our simulations indicate that metallicity has an important influence on the effective temperature of the donor star.
When $z=0.0001$, the calculated orbital period, donor-star mass, and effective temperature of the donor star are consistent with the observed data.
Therefore, we propose that the PT of a massive WD may be responsible for the puzzling cooling ages of the two WDs.
We expect further detailed multi-waveband observations for this source to obtain more precise constraints.
\section*{Acknowledgments}
This work was supported by the National Natural Science Foundation of China under grant numbers 11573016, 11733009, 11773015, and 11605110,
the National Key Research and Development Program of China (2016YFA0400803), and the Program for Innovative Research Team
(in Science and Technology) at the University of Henan Province.
\section{Introduction}
In this paper we introduce and analyze a general semi-random multigraph process, arising from an interplay between a sequence of random choices on the one hand, and a strategy of our choice (that may also involve randomness) on the other.
The process is defined as follows. We start with an empty graph on the vertex set $[n]$. In each round, \textit{Builder} is offered a vertex $v$, chosen uniformly at random (u.a.r.\ for brevity) with replacement from the set $[n]$, independently of all previous choices. Builder then irrevocably chooses an additional vertex $u$ and adds the edge $uv$ to his (multi)graph, with the possibility of creating multiple edges and loops.
The (possibly randomized) algorithm that Builder uses in order to add edges throughout this process is called the \textit{strategy} of Builder.
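To make the round structure concrete, the process can be written as a short simulation loop. This is only a sketch; the code and the toy strategy below are our own illustration and do not appear in the paper.

```python
import random

def semi_random_process(n, m, strategy, seed=0):
    """Play m rounds: a vertex v is offered u.a.r., Builder's strategy
    picks the second endpoint u, and the edge uv is added.
    Multiple edges and loops are allowed, as in the model above."""
    rng = random.Random(seed)
    edges = []
    for _ in range(m):
        v = rng.randrange(n)            # offered u.a.r. with replacement
        u = strategy(v, edges, n, rng)  # Builder's (possibly random) choice
        edges.append((u, v))
    return edges

# A toy strategy: connect every offered vertex to vertex 0, so Builder
# assembles a star (with loops whenever vertex 0 itself is offered).
star_strategy = lambda v, edges, n, rng: 0
```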
As a special case, we also show how the process can be used to approximate (using suitable strategies) some well-known random graph models such as the Erd\H{o}s--R\'enyi random graph model (see~\cite{ER}), the random multigraph model (see~\cite{HKK}), the $k$-out model (see~\cite{walkup1980matchings}), and the min-degree process (see~\cite{Wormald}).
Given a positive integer $n$ and a monotone increasing graph property $\mathcal P$, we consider the one-player game in which Builder's goal is to build a multigraph with vertex set $[n]$ satisfying $\mathcal P$ as quickly as possible; we denote this game by $(\cp, n)$. The general problem discussed in this paper is to determine the typical number of rounds Builder needs in order to construct such a multigraph under optimal play. We mostly focus on the \emph{online version} of the game, where in each round Builder is presented with the next random vertex only after he chose a vertex in the previous round and added the corresponding edge to his graph, but also consider the \emph{offline version}, in which Builder is given the entire sequence of random vertex choices before the game starts.
\paragraph*{A formal treatment.}
Suppose that Builder follows some fixed strategy $\mathcal S$. Let $\mathcal S(n,m)$ denote the resulting multigraph if Builder follows $\mathcal S$ for $m$ rounds. That is, $\mathcal S(n,m)$ is the probability space of all multigraphs with vertex set $[n]$ and with $m$ edges, where each of these edges is chosen as follows. First a vertex $v \in [n]$ is chosen u.a.r., and then another vertex $u$ is chosen, according to $\mathcal S$, and the edge $uv$ is added to the graph. Sometimes, when $n$ and $\mathcal S$ are clear from the context, we use $G_m$ to denote Builder's multigraph after $m$ rounds.
For the online version of the game $(\cp, n)$, for $0 \leq p \leq 1$ and for a strategy $\mathcal{S}$, we define $\tau_p(\mathcal{S})$ to be the smallest integer $m$ for which $G_m \sim \mathcal S(n,m)$ satisfies $\cp$ with probability at least $p$. If no such integer $m$ exists, then we define $\tau_p(\mathcal{S})$ to be $+\infty$. Furthermore, we define $\tau_{p}(\mathcal{P}, n)$ to be the minimum of $\tau_p(\mathcal{S})$, taken over all possible strategies $\mathcal{S}$ for $(\mathcal{P}, n)$. In other words, $\tau_p(\cp, n)$ can be seen as the smallest number of rounds Builder needs in order to build a multigraph which satisfies property $\cp$ with probability (at least) $p$, provided he adopts a best possible strategy for this purpose.
For the offline version of the game $(\cp, n)$ we define $\tau'_p(\mathcal{S})$ and $\tau'_p(\cp, n)$ in an analogous manner. Note that $\tau'_p(\cp, n) \leq \tau_p(\cp, n)$ holds for every $\cp$, $n$ and $p$.
For a given monotone increasing graph property $\cp$, our prime objective for the online version of the game $(\cp, n)$ is to obtain tight upper bounds on $\tau_{1-o(1)}(\cp, n)$ and tight lower bounds on $\tau_{o(1)}(\cp, n)$, where $o(1)$ is a positive function tending to zero as $n$ tends to infinity. With some abuse of notation, we say that these bounds are, respectively, upper and lower bounds on $\tau(\cp, n)$ which hold with high probability (w.h.p.~for brevity). Note that in order to prove that w.h.p. $\tau(\cp,n) \leq m$, it suffices to present a strategy $\mathcal S$ such that w.h.p. $G_m \sim \mathcal S(n,m)$ satisfies $\cp$. On the other hand, in order to prove that w.h.p. $\tau(\cp,n) \geq m$, one has to show that for any strategy $\mathcal S$, w.h.p. the graph $G_m \sim \mathcal S(n,m)$ does not satisfy $\cp$. Our prime objective for the offline version of the game is analogous, namely, to obtain tight upper and lower bounds on $\tau'(\cp, n)$ which hold with high probability.
In this paper we will establish such lower and upper bounds on $\tau(\cp, n)$ and on $\tau'(\cp, n)$ for several natural graph properties $\cp$.
In the next three subsections we describe the main contributions of this paper: In Subsection \ref{subsec:connections} we explain how our model is connected to other random graph models; Subsection \ref{subsec:offline_results} is dedicated to our results on the offline version; and finally, in Subsection \ref{subsec:online_results} we state tight upper and lower bounds for several properties of interest in the online version.
\subsection{Connections to other random graph models}
\label{subsec:connections}
The first few results we obtain in this paper show that, in some sense, our process generalizes several well-known random graph models. Namely, given a certain model of random graphs $\mathcal G$, we prove that there exists an appropriate choice of a strategy ${\mathcal S}_{\mathcal G}$ for Builder such that ${\mathcal S}_{\mathcal G}(n,m)$ can be coupled to $\mathcal G$. Such results have independent interest, but will also enable us to use these well-studied models to draw conclusions about our semi-random graph process.
\medskip
\paragraph{\textbf{The random graph and multigraph models.}}
We first look at two classical random graph processes. The first process $\{G_m\}_{m=0}^{\binom{n}{2}}$ is defined as follows. Let $G_0$ be the empty graph with vertex set $[n] := \{1, \ldots, n\}$ and, for every $m \geq 0$, let $G_{m+1} = G_m \cup e$, where $e \in \binom{[n]}{2} \setminus E(G_m)$ is chosen u.a.r. It is well-known and easy to see that $G_m \sim G(n,m)$ for every $0 \leq m \leq \binom{n}{2}$, that is, this random graph process generates the Erd\H{o}s--R\'enyi random graph model~\cite{ER}. The second process we consider is the random multigraph process $\{M(n,m)\}_{m \geq 0}$ that was introduced in~\cite{HKK}. This process is similar to the first one except that, in each round, the edge we add is chosen u.a.r.~from $\binom{[n]}{2}$, allowing the graph to have multiple edges. For this reason, the process is also not limited in length. We prove that our semi-random graph process can be used to generate the second of these processes and to approximate the first.
\begin{proposition}\label{pro:Mnm}
There exists a strategy $\mathcal S_{M}$ for Builder such that the probability space $\mathcal S_{M}(n,m)$ is the same as the probability space $M(n,m)$.
Furthermore, there exists a strategy $\mathcal S_{G}$ for Builder such that if $m=o(n^2)$, then $H\sim G(n,m)$ and $H'\sim \mathcal S_{G}(n,(1+o(1))m)$ can be coupled
in such a way that w.h.p.\ $H\subseteq H'$. Finally, if $m = o(n)$, then the strategy $\mathcal S_{G}$ is such that $H\sim G(n,m)$ and $H'\sim \mathcal S_{G}(n,m)$ can be coupled in such a way that w.h.p.\ $H= H'$.
\end{proposition}
The following two results are immediate corollaries of Proposition~\ref{pro:Mnm}.
\begin{corollary}\label{cor:Gnm}
Let $m_{\mathcal P}$ be a positive integer for which w.h.p.\ $H \sim G(n,m_{\mathcal P})$ satisfies the monotone increasing graph property $\mathcal P$. If $m_{\mathcal P} = o \left(n^2 \right)$, then w.h.p.\ $\tau(\mathcal P,n) \leq (1+o(1)) m_{\mathcal P}$.
\end{corollary}
\begin{corollary}\label{cor:Mnm}
Let $m_{\mathcal P}$ be a positive integer for which w.h.p.\ $M \sim M(n, m_{\mathcal P})$ satisfies the monotone increasing graph property $\mathcal P$. Then w.h.p.\ $\tau(\mathcal P,n) \leq m_{\mathcal P}$.
\end{corollary}
\paragraph{\textbf{The $k$-out model.}}
Given a graph $G$ with minimum degree $\delta$ and a positive integer $k \leq \delta$, let $\mathcal G_{k\text{-out}}(G)$ denote the probability space of subgraphs $H$ of $G$ obtained via the following procedure: each vertex $v \in V(G)$ chooses $k$ out-neighbors uniformly at random among its neighbors in $G$ to create a digraph $D$; then, $H$ is obtained by ignoring orientations in $D$ and replacing multiple edges with single edges. We abbreviate $\mathcal G_{k\text{-out}}(K_n)$ as $\mathcal G_{k\text{-out}}(n)$. This model first appeared in ``The Scottish Book"~\cite{Scottish} and was also introduced by Walkup in 1980~\cite{walkup1980matchings}, where he proved that for every sufficiently large integer $n$, a graph $H \sim \mathcal G_{2\text{-out}}(K_{n,n})$ typically admits a perfect matching.
The following result asserts that a typical $G \sim \mathcal G_{k\text{-out}}(n)$ can be approximated using our semi-random graph process (another related result will be discussed in Subsection~\ref{subsec::kout}).
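For concreteness, $\mathcal G_{k\text{-out}}(n)$ can be sampled directly from its definition; the sketch below is our own illustration and is unrelated to the coupling used to prove Proposition~\ref{pro:Kout}.

```python
import random

def k_out_graph(n, k, seed=0):
    """Sample H ~ G_{k-out}(n): every vertex picks k distinct random
    out-neighbours; orientations are dropped and parallel edges merged."""
    rng = random.Random(seed)
    edges = set()
    for v in range(n):
        others = [w for w in range(n) if w != v]
        for u in rng.sample(others, k):      # k distinct out-neighbours
            edges.add(frozenset((u, v)))
    return edges
```

Since each vertex contributes $k$ distinct incident edges, the resulting graph always has minimum degree at least $k$.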
\begin{proposition}\label{pro:Kout}
Fix a positive integer $k$. There exists a strategy $\mathcal S_{out}$ for Builder such that $H \sim \mathcal G_{k\text{-out}}(n)$ and $G \sim \mathcal S_{out}(n, k n + o(n))$ can be coupled in such a way that w.h.p.\ $H \subseteq G$.
\end{proposition}
The following result is an immediate corollary of Proposition~\ref{pro:Kout}.
\begin{corollary} \label{cor:Kout}
Let $k_{\mathcal P}$ be a positive integer for which w.h.p.\ $H \sim \mathcal G_{k_{\mathcal P}\text{-out}}(n)$ satisfies the monotone increasing graph property $\mathcal P$. Then w.h.p.\ $\tau(\mathcal P,n) \leq (k_{\mathcal P} + o(1)) n$.
\end{corollary}
Corollary~\ref{cor:Kout} has several consequences. For example, it implies an upper bound on the duration of the online Hamiltonicity game and the perfect matching game. This is further discussed in Subsection~\ref{subsec::kout} and Section~\ref{sec::openprob}.
\medskip
\paragraph{\textbf{A min-degree process.}} The graph process $\{G_{\min}(n,m)\}_{m \geq 0}$, introduced
by Wormald in 1995~\cite{Wormald}, is defined as follows. Let $G_{\min}(n, 0)$ be the empty graph with vertex set $[n]$ and, for every $m \geq 0$, let $G_{\min}(n, m+1)$ be obtained from $G_{\min}(n,m)$ by first choosing a vertex $u$ of minimum degree in $G_{\min}(n,m)$ u.a.r., and then connecting it by a new edge to a vertex $v \in [n] \setminus \{u\}$ chosen u.a.r.~among all vertices which are not connected to $u$ by an edge of $G_{\min}(n,m)$. For more about this process (and other related processes), see, e.g.,~\cite{KKRL, KS, Wormald}.
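The definition above translates directly into a simulation; the following sketch of $G_{\min}(n,m)$ is our own illustration.

```python
import random

def min_degree_process(n, m, seed=0):
    """Sketch of G_min(n, m): in each round, choose a minimum-degree
    vertex u u.a.r., then join it to a uniform non-neighbour."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for _ in range(m):
        d_min = min(len(nbrs) for nbrs in adj.values())
        u = rng.choice([v for v in adj if len(adj[v]) == d_min])
        w = rng.choice([x for x in adj if x != u and x not in adj[u]])
        adj[u].add(w)
        adj[w].add(u)
    return adj
```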
We show that this process can be approximated using the semi-random graph process.
\begin{proposition}\label{pro:Gmin}
If $m = o \left(n^2 \right)$, then there exists a strategy $\mathcal S_{min}$ for Builder such that $H \sim G_{\min}(n,m)$ and $G \sim \mathcal S_{min}(n, (1+o(1))m)$ can be coupled
in such a way that w.h.p.\ $H \subseteq G$.
\end{proposition}
The next result is an immediate corollary of Proposition~\ref{pro:Gmin}.
\begin{corollary}\label{cor:Gmin}
Let $m_{\mathcal P}$ be a positive integer for which w.h.p.\ $H \sim G_{min}(n,m_{\mathcal P})$ satisfies the monotone increasing graph property $\mathcal P$. If $m_{\mathcal P} = o \left(n^2 \right)$, then w.h.p.\ $\tau(\mathcal P, n) \leq (1+o(1)) m_{\mathcal P}$.
\end{corollary}
In order to prove Proposition~\ref{pro:Gmin} we use two other min-degree processes (which we also approximate using the semi-random graph process), presented in Subsection~\ref{subsec:min_deg_process}.
\medskip
Corollaries~\ref{cor:Gnm}, \ref{cor:Mnm}, \ref{cor:Kout} and~\ref{cor:Gmin} show that building a graph which satisfies some monotone increasing graph property $\mathcal P$ in our semi-random graph process is at least as fast as building such a graph in various other well-known graph processes. As we will see in the next two subsections (see Theorems~\ref{thm:OffFix} and~\ref{th::fixedGraphUpperBound}), for some properties the semi-random graph process is much faster than all of these other processes.
\subsection{Offline games}
\label{subsec:offline_results}
Here we state the upper and lower bounds on $\tau'(\mathcal{P}, n)$ that we obtain for various natural graph properties $\mathcal{P}$. Given a fixed graph $H$, let $\mathcal{P}_H$ denote the property of containing $H$ as a subgraph. Our next result determines the order of magnitude of $\tau'(\mathcal{P}_H, n)$ for every fixed graph $H$.
\begin{theorem}\label{thm:OffFix}
Let $H$ be a fixed graph and let $r = \min \{\Delta^+(D) : D \textrm{ is an orientation of } H\}$. Let $f, g : \mathbb{N} \to \mathbb{R}$ be functions such that $f(n)$ tends to zero arbitrarily slowly as $n$ tends to infinity and $g(n)$ tends to infinity arbitrarily slowly as $n$ tends to infinity. Then w.h.p. $f(n) \cdot n^{(r-1)/r} \leq \tau'(\mathcal{P}_H, n) \leq g(n) \cdot n^{(r-1)/r}$.
\end{theorem}
Given a positive integer $k$, let ${\cd}_k$ denote the property of having minimum degree at least $k$. Our next result determines $\tau'({\cd}_k, n)$ asymptotically for every fixed positive integer $k$.
\begin{theorem} \label{th::MinDegkOffline}
Let $k$ be a positive integer and let $\alpha_k$ be the unique positive root of $f_k(x) := \sum_{i=0}^{k-1} \left(k-i \right) \frac{x^i}{i!} - x e^x$. Then w.h.p. $\tau'({\cd}_k, n) = (\alpha_k + o(1)) n$.
\end{theorem}
\begin{remark}
It is not hard to see that, for every $k$, $f_k(x)$ indeed has exactly one positive root. Moreover, $\alpha_k = k/2 + o(1)$ as $k \rightarrow \infty$, that is, in the offline version, when $k$ is sufficiently large, Builder has a strategy to build a graph with minimum degree $k$ in roughly $kn/2$ rounds; this is asymptotically best possible, since a graph with minimum degree $k$ has at least $kn/2$ edges and each round adds a single edge.
\end{remark}
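Since $f_k(0) = k > 0$ and $f_k(x) \to -\infty$ as $x \to \infty$, the constant $\alpha_k$ can be computed numerically by bisection. The sketch below is our own illustration; for example, $\alpha_1$ is the root of $x e^x = 1$, so $\alpha_1 \approx 0.5671$.

```python
import math

def f_k(k, x):
    """f_k(x) = sum_{i=0}^{k-1} (k - i) x^i / i!  -  x e^x."""
    s = sum((k - i) * x**i / math.factorial(i) for i in range(k))
    return s - x * math.exp(x)

def alpha(k, tol=1e-10):
    """Unique positive root of f_k, located by bisection."""
    lo, hi = 0.0, float(k)
    while f_k(k, hi) > 0:        # ensure a sign change on [lo, hi]
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f_k(k, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Numerically, $\alpha_1 \approx 0.567$ and $\alpha_2 \approx 1.06$, and the ratio $\alpha_k/k$ indeed approaches $1/2$.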
\subsection{Online games}
\label{subsec:online_results}
In this subsection we state the upper and lower bounds on $\tau(\mathcal{P}, n)$ that we obtain for various natural graph properties $\mathcal{P}$.
\begin{theorem} \label{th::fixedGraphUpperBound}
Let $H$ be a fixed graph and let $d$ be its degeneracy. Then w.h.p. $\tau({\mathcal P}_H, n) \leq g(n) \cdot n^{(d-1)/d}$, where $g : \mathbb{N} \to \mathbb{R}$ is a function which tends to infinity arbitrarily slowly as $n$ tends to infinity.
\end{theorem}
It seems plausible that the upper bound stated in Theorem~\ref{th::fixedGraphUpperBound} is of the correct order of magnitude for every fixed graph $H$. At the moment, we can only prove this for cliques.
\begin{theorem} \label{thm:lowerCliqueOnLine}
For a positive integer $d$, let ${\mathcal P}_d$ denote the graph property of containing a copy of $K_d$ as a subgraph. Then w.h.p. $\tau({\mathcal P}_d, n) \geq f(n) \cdot n^{(d-2)/(d-1)}$ for every $d \geq 2$, where $f : \mathbb{N} \to \mathbb{R}$ is a function which tends to zero arbitrarily slowly as $n$ tends to infinity.
\end{theorem}
A consequence of Theorems~\ref{thm:lowerCliqueOnLine} and~\ref{thm:OffFix} is that, perhaps not surprisingly, $\tau(\mathcal P_d, n) \gg \tau'(\mathcal P_d, n)$ for every $d \geq 3$.
\medskip
Our next result determines $\tau({\cd}_k, n)$ asymptotically for every fixed positive integer $k$.
\begin{theorem} \label{th::OnMinDegk}
For every positive integer $k$, there exists a positive real number $h_k$ such that w.h.p. $\tau({\cd}_k, n) = (h_k + o(1)) n$.
\end{theorem}
Note that the real numbers $h_k$ appearing in the statement of Theorem~\ref{th::OnMinDegk} can be (and were) calculated using Wormald's differential equations method~\cite{Wormald, Wormald2}. We discuss this in greater detail in Subsection~\ref{subsec:online_min_deg_k}.
\medskip
Given a positive integer $k$, let ${\mathcal C}_k$ denote the property of being $k$-vertex-connected. Clearly, $\tau({\mathcal C}_k, n) \geq \tau({\mathcal D}_k, n)$ holds for every $k$. Our next result shows that, as in several other random graph models, ${\mathcal C}_k$ and ${\mathcal D}_k$ occur at roughly the same time in our semi-random graph process.
\begin{theorem} \label{thm:OnkCon}
Let $k \geq 3$ be a fixed integer. Then w.h.p. $\tau({\mathcal C}_k, n) = (h_k + o(1)) n$.
\end{theorem}
Note that, trivially, $\tau({\mathcal C}_1, n) = n-1$. On the other hand, the best bounds we currently have for $\tau({\mathcal C}_2, n)$ stem from the fact that $(h_2 + o(1))n = \tau({\mathcal D}_2, n) \leq \tau({\mathcal C}_2, n) \leq \tau({\mathcal C}_3, n) = (h_3 + o(1))n$.
\medskip
We summarize the results of the last two subsections in the following table.
\begin{table}[h!]
\centering
\begin{tabular}{||c|| c c | c c||}
\hline
Property & Online Bounds & & Offline Bounds & \\ [0.5ex]
\hline\hline
\multirow{2}{*}{$\mathcal{P}_H$} & \multirow{2}{*}{ $\leq g(n) \cdot n^{(d-1)/d}$} & \multirow{2}{*}{(\ref{th::fixedGraphUpperBound})} & $\geq f(n) \cdot n^{(r-1)/r}$ & \multirow{2}{*}{(\ref{thm:OffFix})}
\\ & & & $\leq g(n) \cdot n^{(r-1)/r}$ &
\\
\hline
${\mathcal P}_d$ & $\geq f(n) \cdot n^{(d-2)/(d-1)}$ & (\ref{thm:lowerCliqueOnLine}) & & \\
\hline
${\cd}_k$ & $= (h_k + o(1)) n$ & (\ref{th::OnMinDegk}) & $= (\alpha_k + o(1)) n$ & (\ref{th::MinDegkOffline}) \\
\hline
${\mathcal C}_k$ & $= (h_k + o(1)) n$ &(\ref{thm:OnkCon}) & & \\
\hline
\multicolumn{5}{c}{}\\
\multicolumn{5}{c}{$g$ tends to infinity arbitrarily slowly as $n$ tends to infinity}\\
\multicolumn{5}{c}{$f$ tends to zero arbitrarily slowly as $n$ tends to infinity}\\
\multicolumn{5}{c}{ $h_k$ is a known constant }\\
\multicolumn{5}{c}{$\alpha_k$ is the unique positive root of $f_k(x) := \sum_{i=0}^{k-1} \left(k-i \right) \frac{x^i}{i!} - x e^x$}
\end{tabular}
\end{table}
\subsection{Organization}
The rest of the paper is structured as follows. In Subsection~\ref{subsec::notation} we list some, mostly standard, notation that will be used throughout the paper. In Subsections~\ref{subsec::prob_tools} and~\ref{sec::BallsBins} we collect various useful probabilistic tools that will be used in later sections. In Section~\ref{sec:OtherModels} we show how to use our semi-random process to approximate various random graph models; in particular, we prove Propositions~\ref{pro:Mnm}, \ref{pro:Kout}, and~\ref{pro:Gmin}. In Section~\ref{sec:off} we study the offline version of the process; in particular, we prove Theorems~\ref{thm:OffFix} and~\ref{th::MinDegkOffline}. In Section~\ref{sec:on} we study the online version of the process; in particular, we prove Theorems~\ref{th::fixedGraphUpperBound}, \ref{thm:lowerCliqueOnLine}, \ref{th::OnMinDegk}, and~\ref{thm:OnkCon}. Finally, Section~\ref{sec::openprob} contains a discussion of our semi-random graph process, including open problems and suggestions for future research.
\section{Preliminaries} \label{sec::prelims}
\subsection{Notation} \label{subsec::notation}
Let $\mathbb{N}$ denote the set of all non-negative integers and let $\mathbb{R}$ denote the set of all real numbers. For a positive integer $n$ let $[n] = \{1, \ldots, n\}$.
In a (multi)graph $G=(V,E)$, $V$ is the set of vertices and $E$ is a
multiset of elements from $\binom {V}2$.
Let $d_G(v)=|\{e\in E\ :\ v\in e \}|$
denote the degree of $v$ in $G$, that is, the number of edges incident to $v$ including multiplicities.
Let $\Delta(G)$ denote the maximum degree of $G$ and $\delta(G)$ the minimum degree of $G$.
For a vertex $v\in V$, let $N_G(v)=\{u\in V\setminus\{v\}\ :\ uv\in E \}$ denote the set of distinct neighbors of the vertex $v$. Note that for every $v\in V$, $|N_G(v)|\leq d_G(v)$.
For a directed graph $D$ and a vertex $v \in V(D)$, let $d^+_D(v)$ denote the out-degree of $v$ in $D$ and let $\Delta^+(D) = \max \{d_D^+(v) : v \in V(D)\}$ denote the maximum out-degree of $D$. Often, if there is no risk of confusion, we abbreviate $d_D^+(v)$ as $d^+(v)$.
A graph $G$ is called $k$-degenerate if every subgraph of $G$ has a vertex of degree at most $k$. Equivalently, a graph $G$ is $k$-degenerate if there exists an ordering $(v_1, \ldots, v_n)$ of its vertices such that $v_i$ has at most $k$ neighbors in the set $\{v_1, \ldots, v_{i-1}\}$ for every $2 \leq i \leq n$. The \textit{degeneracy} of a graph $G$ is the smallest value of $k$ for which $G$ is $k$-degenerate.
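The second characterization yields the standard greedy algorithm for computing the degeneracy: repeatedly delete a vertex of minimum degree and record the largest degree observed at deletion time. The following sketch is our own illustration.

```python
def degeneracy(adj):
    """Degeneracy of a graph given as {vertex: set of neighbours}:
    repeatedly delete a minimum-degree vertex; the answer is the
    largest degree seen at deletion time."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        d = max(d, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return d
```

For instance, every tree has degeneracy $1$, and $K_d$ has degeneracy $d-1$.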
For a monotone increasing property $\mathcal P$ and a vertex $v \in V(K_n)$, at any point during the game $(\mathcal P, n)$, let $\off(v)$ denote the number of times $v$ was offered up to that point.
\subsection{Probabilistic tools}
\label{subsec::prob_tools}
The following well-known bounds on the tails of the binomial distribution, due to Chernoff (see, e.g.,
\cite{AloSpe2008}, \cite{JLR}), will be used extensively.
\begin{lemma}\label{Che}
Let $X\sim \ensuremath{\textrm{Bin}}(n,p)$, $\mu=\mathbb{E}(X)$ and $a\geq 0$, then
\begin{enumerate}
\item $\Pr\left[X\leq \mu-a\right]\leq \exp\left(-\frac{a^2}{2\mu}\right)$;
\item $\Pr\left[X\geq \mu+a\right]\leq \exp\left(-\frac{a^2}{2(\mu+\frac a3)}\right)$.
\end{enumerate}
\end{lemma}
The following is a well-known concentration inequality due to Azuma~\cite{Azuma}.
\begin{theorem} \cite[Theorem 2.27]{JLR} \label{th::Azuma}
Suppose that $Z_1, \ldots, Z_m$ are independent random variables taking their values in the set $[n]$. Suppose further that $X = f(Z_1, \ldots, Z_m)$, where $f : [n]^m \to {\mathbb R}$ is a function such that there exist constants $c_1, \ldots, c_m$ for which the following condition holds:
\begin{description}
\item [(a)] If $z, z' \in [n]^m$ differ only in the $k$th coordinate, then $|f(z') - f(z)| \leq c_k$.
\end{description}
Then for every $t \geq 0$ we have
$$
\Pr (|X - \mathbb{E}(X)| \geq t) \leq 2 \exp \left\{- \frac{t^2}{2 \sum_{k=1}^m c_k^2} \right\} \,.
$$
\end{theorem}
The following is a simplified version of a concentration inequality due to Talagrand~\cite{Talagrand}.
\begin{theorem} \cite[page 81]{MR} \label{th::Talagrand}
Let $X$ be a non-negative random variable, not identically $0$, which is determined by $n$ independent trials $T_1, \ldots, T_n$, and satisfying the following for some $c, r > 0$:
\begin{description}
\item [(a)] changing the outcome of any one trial can affect $X$ by at most $c$, and
\item [(b)] for any $s$, if $X \geq s$, then there is a set of at most $r s$ trials whose outcomes certify that $X \geq s$,
\end{description}
then for any $0 \leq t \leq \mathbb{E}(X)$
$$
\Pr \left(|X - \mathbb{E}(X)| > t + 60 c \sqrt{r \mathbb{E}(X)} \right) \leq 4 \exp \left\{ -\frac{t^2}{8 c^2 r \mathbb{E}(X)} \right\} \,.
$$
\end{theorem}
\subsection{Balls Into Bins} \label{sec::BallsBins}
Consider $m$ balls, placed into $n$ bins labeled $1, 2, \ldots, n$, where for each ball, we choose a bin u.a.r.\ and independently from all previous choices. For every $1 \leq i \leq m$, let $Z_i$ denote the bin chosen for ball $i$. Note that $Z_1, \ldots, Z_m$ are independent random variables. For every non-negative integer $k$, let $X_k^m = X_k^m(n)$ be the random variable counting the number of bins containing exactly $k$ balls and let $f(n, m, k) = \mathbb{E}(X_k^m)$. It is evident that
\begin{equation} \label{eq::expectationXkmn}
f(n,m,k) = n \binom{m}{k} \lf( \frac{1}{n} \rt)^{k} \lf( 1 - \frac{1}{n} \rt)^{m-k}.
\end{equation}
If $k = o(\min \{n, \sqrt{m}\})$, then~\eqref{eq::expectationXkmn} takes the following simpler form:
\begin{equation} \label{e:exactly-k}
f(n,m,k) = (1 + o(1)) \frac{e^{- m/n}}{k!} \cdot \frac{m^k}{n^{k-1}}.
\end{equation}
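Both formulas are easy to check numerically. The following sketch (our own illustration) compares the exact expectation, its Poisson-type approximation, and a Monte Carlo estimate.

```python
import math
import random

def f_exact(n, m, k):
    """Exact expected number of bins containing exactly k of the m balls."""
    return n * math.comb(m, k) * (1 / n)**k * (1 - 1 / n)**(m - k)

def f_approx(n, m, k):
    """Poisson-type approximation, valid for k = o(min(n, sqrt(m)))."""
    return math.exp(-m / n) / math.factorial(k) * m**k / n**(k - 1)

# Monte Carlo estimate of E[X_k^m] for comparison.
rng = random.Random(0)
n, m, k, trials = 50, 100, 2, 2000
mc_estimate = 0.0
for _ in range(trials):
    counts = [0] * n
    for _ in range(m):
        counts[rng.randrange(n)] += 1   # throw each ball into a uniform bin
    mc_estimate += counts.count(k) / trials
```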
The following bound on the maximum number of balls in any single bin is an immediate consequence of~\eqref{e:exactly-k}.
\begin{corollary} \label{cor::maxLoad}
If $m = O(n)$, then w.h.p. $\max \{k : X_k^m(n) > 0\} \leq \log n$.
\end{corollary}
\begin{remark}
Much more accurate bounds on $\max \{k : X_k^m(n) > 0\}$ are known for a wider range of values of $m$ (see, e.g.,~\cite{RS}). However, Corollary~\ref{cor::maxLoad} will suffice for the purposes of this paper.
\end{remark}
We next prove that $X_k^m$ is concentrated around its mean.
\begin{lemma} \label{lem::Balls}
For every $t \geq 0$
$$
\Pr(|X_k^m - f(n,m,k)| \geq t) \leq 2 \exp \{- t^2/(8 m)\}.
$$
\end{lemma}
\begin{proof}
It is evident that $X_k^m = f(Z_1, \ldots, Z_m)$ for some function $f : [n]^m \to {\mathbb R}$. Moreover, since moving one ball from one bin to another can change $X_k^m$ by at most $2$, it follows that Property (a) from Theorem~\ref{th::Azuma} holds with $c_1 = \ldots = c_m = 2$. Applying Theorem~\ref{th::Azuma} to $X_k^m$, we conclude that
$$
\Pr(|X_k^m - f(n,m,k)| \geq t) \leq 2 \exp \{- t^2/(8 m)\}.
$$
\end{proof}
Lemma~\ref{lem::Balls} can be used to show that $X_k^m$ is concentrated around its mean whenever $f(n,m,k) = \omega(\sqrt{m})$. For smaller values of $f(n,m,k)$, we use the following lemma.
\begin{lemma} \label{lem::BallsTalagrand}
If $\sum_{i = k+1}^m X_i^m = 0$, then
$$
\Pr \left(|X_k^m - f(n,m,k)| > t + 120 \sqrt{k f(n,m,k)} \right) \leq 4 \exp \left\{ -\frac{t^2}{32 k f(n,m,k)} \right\},
$$
for every $0 \leq t \leq f(n,m,k)$.
\end{lemma}
\begin{proof}
It is evident that $X_k^m$ is a non-negative random variable which is not identically $0$, and that $X_k^m$ is determined by $m$ independent trials. Moreover, since moving one ball from one bin to another can change $X_k^m$ by at most $2$, it follows that Property (a) from Theorem~\ref{th::Talagrand} holds with $c = 2$. Now, if $X_k^m \geq s$, then there are $s$ bins, each containing exactly $k$ balls. Since $\sum_{i = k+1}^m X_i^m = 0$ by assumption, it follows that there are $k s$ balls which certify that $X_k^m \geq s$. Therefore, Property (b) from Theorem~\ref{th::Talagrand} holds with $r = k$. We can thus apply Theorem~\ref{th::Talagrand} to deduce that
$$
\Pr \left(|X_k^m - f(n,m,k)| > t + 120 \sqrt{k f(n,m,k)} \right) \leq 4 \exp \left\{ -\frac{t^2}{32 k f(n,m,k)} \right\}
$$
holds for every $0 \leq t \leq f(n,m,k)$ as claimed.
\end{proof}
\section{Connections to other random graph models} \label{sec:OtherModels}
\subsection{Random graph model and random multigraph model}
\begin{proof}[Proof of Proposition~\ref{pro:Mnm}]
Builder's strategies for approximating both processes are similar.
\textbf{Strategy $\mathcal S_M$:} When offered some vertex $v \in [n]$, he chooses a vertex $u$ u.a.r.~from $[n] \setminus \{v\}$. Then, Builder adds the edge $uv$ to his graph.
\textbf{Strategy $\mathcal S_G$:}
When offered some vertex $v \in [n]$, he chooses a vertex $u$ u.a.r.~from $[n] \setminus \{v\}$. Then, Builder adds $uv$. If $uv$ was added before, then he considers this round as a failure (which will not be part of the graph he aims to build).
It is easy to see that, following $\mathcal S_M$, the probability of any given edge being chosen in any given round is $2 \cdot \frac{1}{n} \cdot \frac{1}{n-1} = \binom{n}{2}^{-1}$, which is precisely the probability of this edge being chosen in $\{M(n,m)\}_{m \geq 0}$. That is, the two processes are identical.
As for the classical random graph process, the two processes $G(n,m)$ and $\mathcal S_G(n,m)$ begin to differ as soon as the first multiple edge is claimed. We begin by showing that w.h.p. this does not happen if $m = o(n)$. Indeed, fix $m = o(n)$ and assume we have run $\mathcal S_G(n,k)$ for some $0 \leq k < m$. The probability that the edge chosen in round $k+1$ already exists in $G_k\sim \mathcal S_G(n,k)$ is at most $k \binom{n}{2}^{-1}$. Hence, the probability that $G_m$ contains multiple edges is at most $\sum_{i=0}^{m-1} i \binom{n}{2}^{-1} = \binom{m}{2} \binom{n}{2}^{-1} = o(1)$.
As $m$ gets larger, some failures are expected, that is, w.h.p. Builder's (multi)graph will contain multiple edges. However, their number will be negligible assuming that $m = o(n^2)$. Indeed, by the above calculation, the expected number of failures after $m$ rounds of running our process according to $\mathcal S_G$ is at most $\sum_{i=0}^{m-1} i \binom{n}{2}^{-1} = \binom{m}{2} \binom{n}{2}^{-1} = o(m)$. Hence, it follows by Markov's inequality that w.h.p.\ there are $o(m)$ failures.
\end{proof}
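To make the failure estimate above concrete, here is a minimal Python sketch (illustrative, with names of our choosing) of strategy $\mathcal S_G$: it runs $m$ rounds and counts the rounds whose chosen edge was already present.

```python
import random

def s_g_failures(n, m, rng):
    """Run Builder's strategy S_G for m rounds; count the rounds whose
    chosen edge was already claimed (failures)."""
    edges = set()
    failures = 0
    for _ in range(m):
        v = rng.randrange(n)       # offered vertex, u.a.r. from [n]
        u = rng.randrange(n - 1)   # chosen u.a.r. from [n] \ {v}
        if u >= v:
            u += 1
        e = (min(u, v), max(u, v))
        if e in edges:
            failures += 1
        else:
            edges.add(e)
    return failures

rng = random.Random(0)
n = 2000
fails = s_g_failures(n, 4 * n, rng)
print(fails)  # expected at most C(m,2)/C(n,2), i.e. about 16 here
```

For $m = \Theta(n)$ the failure count stays bounded, in line with the bound $\binom{m}{2}\binom{n}{2}^{-1}$ from the proof.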
\subsection{The $k$-out model} \label{subsec::kout}
In this subsection we prove Proposition~\ref{pro:Kout} by describing a strategy $\mathcal S_{out}$ for which $\mathcal S_{out}(n,m)$ can be coupled to the well-studied random graph model $\mathcal G_{k\text{-out}}(G)$ (where $G$ is a graph on $n$ vertices, $k \leq \delta(G)$ is a positive integer, and $m$ roughly equals $k n$).
The strategy $\mathcal S_{out}$ is roughly described as follows. Suppose that the set of vertices is $[n]$. For every $1 \leq t \leq n$, we consider $k$ consecutive rounds of the process, and in all such rounds we connect the offered vertex to $t$. If at least one of these rounds adds a loop or an edge that already exists, we run $r$ extra rounds (for a suitable $r$), and connect all vertices offered in these rounds to $t$ as well. As it turns out, for every fixed $k$, the total number of extra rounds required is w.h.p. $o(n)$.
\begin{proof} [Proof of Proposition~\ref{pro:Kout}]
For positive integers $k$ and $r$, let $\mathcal G'_{(k,r)\text{-out}}(n)$ be the probability space of multigraphs $H$ obtained via the following procedure:
each vertex $i \in [n]$ chooses $k$ out-neighbors, one by one, u.a.r.\ with replacement among all the vertices in $[n]$ (including $i$). If, during these $k$ choices, $i$ chose itself or the same vertex more than once, then $i$ chooses $r$ additional out-neighbors, using the same procedure, to create a digraph $D$. Finally, $H$ is obtained from $D$ by ignoring orientations (note that $H$ might contain loops and multiple edges).
Given positive integers $k$ and $r$, let $\mathcal S_{out} = \mathcal S_{out}(k,r)$ be the following strategy of Builder.
\medskip
\noindent \textbf{Strategy $\mathcal S_{out}$:} For every $i \geq 1$, let $u_i$ denote the vertex Builder is offered in the $i$th round. In the first $k$ rounds, Builder claims the edges $1 u_1, \ldots, 1 u_k$. If $1 \in \{u_1, \ldots, u_k\}$ or $u_i = u_j$ for some $1 \leq i < j \leq k$, then in the next $r$ rounds Builder claims the edges $1 u_{k+1}, \ldots, 1 u_{k+r}$. Builder now chooses the ``out-neighbors'' of $t$ for every $2 \leq t \leq n$ similarly. Namely, assume that for some $2 \leq t \leq n$ Builder has already chosen the ``out-neighbors'' of $j$ for every $1 \leq j \leq t-1$ (for $t = 2$, this is precisely the process described above) and that this took him $m_{t-1}$ rounds. In rounds $m_{t-1} + 1, \ldots, m_{t-1} + k$, Builder claims the edges $t u_{m_{t-1} + 1}, \ldots, t u_{m_{t-1} + k}$. If $t \in \{u_{m_{t-1} + 1}, \ldots, u_{m_{t-1} + k}\}$ or $u_i = u_j$ for some $m_{t-1} + 1 \leq i < j \leq m_{t-1} + k$, then in the next $r$ rounds Builder claims the edges $t u_{m_{t-1} + k + 1}, \ldots, t u_{m_{t-1} + k + r}$.
\medskip
For every $1 \leq i \leq n$, let $A_i$ denote the multiset of ``out-neighbors'' Builder chooses for $i$ when playing according to $\mathcal S_{out}$; we refer to $A_i$ as the \emph{$i$th block}. Observe that for every $1 \leq i \leq n$, we have $|A_i| = k$ (in which case we will say that $A_i$ is \emph{small}) or $|A_i| = k+r$ (in which case we will say that $A_i$ is \emph{big}). Let $n'$ denote the number of big blocks.
\begin{observation}\label{obs:kout}
The probability space $\mathcal S_{out}(n, k n + r n')$ is the same as the probability space $\mathcal G'_{(k,r)\text{-out}}(n)$.
\end{observation}
\begin{claim}\label{claim:1kout}
Following $\mathcal S_{out}$ with $r=2$, w.h.p.\ $A_i \setminus \{i\}$ contains at least $k$ different vertices for every $1 \leq i \leq n$.
\end{claim}
\begin{proof}
A block $A_i$ is called \emph{bad of type I} if some vertex $u \in [n]$ appears in $A_i$ at least three times. Similarly, a block $A_i$ is called \emph{bad of type II} if there are two distinct vertices $u, v \in [n]$, each appearing twice in $A_i$. Finally, a block is called \emph{bad} if it is bad of type I or II. For every $1 \leq i \leq n$, let $p_i$ denote the probability that $A_i$ is bad of type I. Clearly
$$
p_i \leq \binom{|A_i|}{3} \frac{1}{n^2} \leq \frac{k(k+1)(k+2)}{6 n^2}
$$
for every $1 \leq i \leq n$. Therefore the probability that there exists a block which is bad of type I is at most
$$
\sum_{i=1}^n p_i \leq \frac{k(k+1)(k+2)}{6 n} = o(1).
$$
An analogous argument shows that the probability that there exists a block which is bad of type II is at most
$$
\sum_{i=1}^n \binom{|A_i|}{2} \binom{|A_i|-2}{2} \frac{1}{n^2} \leq \frac {k^2 (k+2)^2}{4n} = o(1).
$$
It follows that w.h.p.\ there are no bad blocks. We conclude that w.h.p.\ $A_i \setminus \{i\}$ contains at least $k$ different vertices for every $1 \leq i \leq n$ as claimed.
\end{proof}
\begin{claim}\label{claim:2kout}
Following $\mathcal S_{out}$ (with any fixed $k$ and $r$), w.h.p.\ $n' = o(n)$.
\end{claim}
\begin{proof}
Our aim is to show that w.h.p.\ the number of big blocks is negligible compared to $n$. A block $A_i$ is called \emph{big of type I} if some vertex $u \in [n]$ appears in $A_i$ at least twice. Similarly, a block $A_i$ is called \emph{big of type II} if $i \in A_i$. Observe that a block is big if and only if it is big of type I or II. For every $1 \leq i \leq n$, let $p_i$ denote the probability that $A_i$ is big of type I. Clearly
$$
p_i \leq \binom{|A_i|}{2} \frac{1}{n} \leq \frac{(k+r)^2}{2n}
$$
for every $1 \leq i \leq n$. Let $X_1$ denote the number of blocks which are big of type I. Then $X_1$ is stochastically dominated by a random variable $Y_1 \sim \ensuremath{\textrm{Bin}} \left(n, \frac{(k+r)^2}{2n} \right)$. Therefore, by Lemma~\ref{Che} we have
$$
\Pr[X_1 \geq \log n] \leq \Pr[Y_1 \geq \log n] \leq e^{- \log n} = o(1).
$$
Thus w.h.p.\ the number of blocks which are big of type I is at most $\log n$.
An analogous argument shows that w.h.p.\ the number of blocks which are big of type II is at most $\log n$ as well. We conclude that w.h.p.\ there are $o(n)$ big blocks.
\end{proof}
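Claim~\ref{claim:2kout} can be observed in simulation. In the sketch below (illustrative; function names are ours) each block is generated from its first $k$ offers alone, which is all that determines whether it is big; the extra $r$ rounds of a big block do not affect its status.

```python
import random

def count_big_blocks(n, k, rng):
    """Count big blocks of strategy S_out: block i is big when its first
    k offers repeat a vertex (type I) or contain i itself (type II)."""
    big = 0
    for i in range(n):
        offers = [rng.randrange(n) for _ in range(k)]
        if i in offers or len(set(offers)) < k:
            big += 1
    return big

rng = random.Random(0)
n, k = 10000, 3
big = count_big_blocks(n, k, rng)
print(big / n)  # a vanishing fraction: each block is big with prob. O(1/n)
```

The expected number of big blocks is at most $\binom{k}{2} + k$, a constant, which is far stronger than the $o(n)$ bound needed in the claim.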
\begin{claim}\label{claim:3kout}
Let $k$ be a positive integer. Then $H \sim \mathcal G_{k\text{-out}}(n)$ and $H'\sim \mathcal G'_{(k,2)\text{-out}}(n)$ can be coupled in such a way that w.h.p.\ $H\subseteq H'$.
\end{claim}
\begin{proof}
Consider the oriented versions of $\mathcal G_{k\text{-out}}(n)$ and $\mathcal G'_{(k,2)\text{-out}}(n)$, denoted by $\vec{\mathcal G}_{k\text{-out}}(n)$ and by $\vec{\mathcal G}'_{(k,2)\text{-out}}(n)$, respectively. In order to prove the claim it suffices to show that $D \sim \vec{\mathcal G}_{k\text{-out}}(n)$ and $D'\sim \vec{\mathcal G}'_{(k,2)\text{-out}}(n)$ can be coupled in such a way that w.h.p.\ $D \subseteq D'$.
For every $1 \leq i \leq n$, let $X^{(i)}_1, X^{(i)}_{2}, \ldots$ be an infinite sequence of independent random variables, each having the uniform distribution on $[n]$. We can generate $D \sim \vec{\mathcal G}_{k\text{-out}}(n)$ as follows. For every vertex $1 \leq i \leq n$, let $\ell_i$ denote the smallest integer for which $\left \{X^{(i)}_1, X^{(i)}_{2}, \ldots, X^{(i)}_{\ell_i} \right\}$ contains exactly $k$ different elements from $[n] \setminus \{i\}$. These $k$ elements will be the $k$ out-neighbors of vertex $i$. To generate $D' \sim \vec{\mathcal G}'_{(k,2)\text{-out}}(n)$, for every vertex $1 \leq i \leq n$, we look at $\left\{X^{(i)}_1, X^{(i)}_{2}, \ldots, X^{(i)}_{k} \right\}$; if it contains exactly $k$ different elements from $[n] \setminus \{i\}$, then these $k$ elements will be the $k$ out-neighbors of vertex $i$. Otherwise, the $k+2$ elements of $\{X^{(i)}_1, X^{(i)}_{2}, \ldots, X^{(i)}_{k+2}\}$ will be the out-neighbors of vertex $i$ (as a directed multigraph).
Since w.h.p.\ $\ell_i \leq k+2$ holds for every $1 \leq i \leq n$ by Observation~\ref{obs:kout} and by Claim~\ref{claim:1kout}, it follows that w.h.p.\ $D \subseteq D'$.
\end{proof}
Combining Observation~\ref{obs:kout} with Claim~\ref{claim:3kout} implies that $H \sim \mathcal G_{k\text{-out}}(n)$ and $G \sim \mathcal S_{out}(n, k n + 2 n')$ can be coupled in such a way that w.h.p.\ $H \subseteq G$. Since by Claim~\ref{claim:2kout} we have that $n' = o(n)$, this concludes the proof of Proposition~\ref{pro:Kout}.
\end{proof}
It was proved by Bohman and Frieze~\cite{bohman2009hamilton} that w.h.p.\ $G \sim \mathcal G_{\text{3-out}}(n)$ admits a Hamilton cycle (and, moreover, $3$ is the smallest integer for which this holds). Hence, it readily follows from Corollary~\ref{cor:Kout} that w.h.p.\ $\tau(\mathcal{H}, n) \leq (3 + o(1)) n$, where $\mathcal{H}$ is the property of admitting a Hamilton cycle (we would like to thank Michael Krivelevich for pointing this out). Similarly, for every fixed positive integer $k$, we have that w.h.p.\ $\tau(\cd_k,n) \leq (k + o(1)) n$, since the minimum degree of $\mathcal G_{k\text{-out}}(n)$ is at least $k$ (however, this bound is weaker than the result stated in Theorem~\ref{th::OnMinDegk}). In~\cite{frieze1986maximum}, Frieze extended the result of Walkup that we mentioned in Subsection~\ref{subsec:connections} by showing that w.h.p.\ $\mathcal G_{2\text{-out}}(n)$ admits a perfect matching, provided that $n$ is even. Combining this result with Corollary~\ref{cor:Kout} implies that w.h.p.\ $\tau(\mathcal {PM},n) \leq (2 + o(1)) n$, where $\mathcal {PM}$ is the property of admitting a perfect matching. Frieze's result was further improved by Karo{\'n}ski, Overman and Pittel~\cite{KOP}, who proved that w.h.p.\ $\mathcal G_{(1+2e^{-1})\text{-out}}(K_{n,n})$ admits a perfect matching, where $\mathcal G_{(1+2e^{-1})\text{-out}}(K_{n,n})$ is obtained as follows. First, pick a random element of $\mathcal G_{1\text{-out}}(K_{n,n})$ and then give every vertex that has been chosen as a neighbor by at most one other vertex a `second chance' to pick another random neighbor. In the model $\mathcal G_{(1+e^{-1})\text{-out}}(K_{n,n})$, where only vertices that were not chosen at all get a
`second chance', w.h.p.\ there is no perfect matching~\cite{KOP}. We will discuss the games
$(\mathcal{H}, n)$ and $(\mathcal{PM}, n)$ further in Section~\ref{sec::openprob}.
In the remainder of this section we briefly explain how to approximate $\mathcal G_{(1+2e^{-1})\text{-out}}(K_{n,n})$ using our semi-random graph process. For simplicity we will restrict our attention to graphs with an even number of vertices.
\begin{proposition}\label{pro:KoutBip}
There exists a strategy $\mathcal S'_{out}$ for Builder such that $H \sim \mathcal G_{(1+2e^{-1})\text{-out}}(K_{n/2, n/2})$ and $G \sim \mathcal S'_{out}(n,(1+2e^{-1}+o(1))n)$ can be coupled in such a way that w.h.p.\ $H \subseteq G$.
\end{proposition}
\begin{proof}
Let $[n] = X_0 \cup X_1$ be an arbitrary equipartition; denote $X_0 = \left\{v^{(0)}_1, \ldots, v^{(0)}_{n/2} \right\}$ and $X_1 = \left\{v^{(1)}_1, \ldots, v^{(1)}_{n/2} \right\}$. For $i \in \{0,1\}$ and every positive integer $k$, let $x_i(k)$ denote the number of times a vertex of $X_i$ was offered during the first $k$ rounds; clearly $x_0(k) + x_1(k) = k$. We are now ready to describe Builder's strategy.
\medskip
\noindent \textbf{Strategy $\mathcal S'_{out}$:} For every positive integer $k$, let $v^{(i)}_{j_k}$ (where $i \in \{0,1\}$ and $1 \leq j_k \leq n/2$) denote the vertex Builder is offered in round $k$. The strategy is divided into the following two phases.
\smallskip
\noindent \textbf{Phase I:} Let $f_0(n)$ be the number of rounds until $\min \{x_0(k), x_1(k)\} \geq n/2$ first occurs. For every $1 \leq k \leq f_0(n)$ Builder plays the $k$th round as follows:
\begin{enumerate}[$(1)$]
\item If $x_i(k) \leq n/2$, then he connects $v^{(i)}_{j_k}$ to $v^{(1-i)}_{x_i(k)}$ (that is, he connects the vertex he is offered to the $x_i(k)$th vertex from the other part).
\item If $x_i(k) > n/2$ but $x_{1-i}(k) < n/2$, then he connects $v^{(i)}_{j_k}$ to $v^{(1-i)}_{1}$.
\end{enumerate}
\noindent \textbf{Phase II:} At the beginning of this phase, for $i \in \{0,1\}$, let $Y_i$ be the set of vertices of $X_i$ that were offered at most once during Phase I. For $i \in \{0,1\}$ let $y_i = |Y_i|$ and let $Y_i = \left\{u^{(i)}_1, \ldots, u^{(i)}_{y_i} \right\}$. Let $y = \max \{y_0, y_1\}$. For $i \in \{0,1\}$ and every positive integer $k$, let $y_i(k)$ denote the number of times a vertex of $X_i$ was offered during the first $k$ rounds of Phase II. Let $f_1(n)$ be the number of rounds in Phase II until $\min \{y_0(k), y_1(k)\} \geq y$ first occurs. For every $1 \leq k \leq f_1(n)$ Builder plays the $k$th round of Phase II as follows:
\begin{enumerate}[$(i)$]
\item If $y_i(k) \leq y$, then he connects $v^{(i)}_{j_k}$ to $u^{(1-i)}_{y_i(k)}$ (that is, he connects the vertex he is offered to the $y_i(k)$th vertex of $Y_{1-i}$).
\item If $y_i(k) > y$ but $y_{1-i}(k) < y$, then he connects $v^{(i)}_{j_k}$ to $u^{(1-i)}_{1}$.
\end{enumerate}
Let $G \sim \mathcal S'_{out}(n, f_0(n) + f_1(n))$ and let $H$ be the graph obtained from $G$ by removing all the edges Builder has claimed in steps $(2)$ and $(ii)$. Then $H \sim \mathcal G_{(1+2e^{-1})\text{-out}}(K_{n/2,n/2})$.
It remains to prove that w.h.p.\ $f_0(n) + f_1(n) = (1 + 2e^{-1} + o(1)) n$. Indeed, for $i \in \{0,1\}$, let $R_i$ denote the number of times a vertex of $X_i$ was offered during the first $n + \sqrt{n \log n}$ rounds of the game. Clearly $R_i \sim \ensuremath{\textrm{Bin}}(n + \sqrt{n \log n}, 1/2)$ and thus $\Pr(R_i < n/2) \leq \Pr \left(R_i \leq \mathbb{E}(R_i) - \frac{\sqrt{n \log n}}{2} \right) = o(1)$ holds by Lemma~\ref{Che}. It follows that w.h.p.\ $f_0(n) \leq n + \sqrt{n \log n}$.
Similarly, for $i \in \{0,1\}$, let $N_i$ denote the number of vertices of $X_i$ that were offered at most once during the first $f_0(n)$ rounds. For every $1 \leq j \leq n/2$, let $I_j$ be the indicator random variable for the event ``$v_j^{(0)}$ was offered at most once during the first $f_0(n)$ rounds''. Then, for every $1 \leq j \leq n/2$, it holds that
$$
\Pr(I_j = 1) = (1 - 1/n)^{f_0(n)} + f_0(n) \cdot 1/n \cdot (1 - 1/n)^{f_0(n)-1} = (2+o(1)) e^{-1},
$$
where the last equality holds by the concentration result we proved for $f_0(n)$ (note that, by definition, $f_0(n) \geq n$). Therefore
$$
\mathbb{E}(N_0) = \sum_{j=1}^{n/2} \mathbb{E}(I_j) = (1+o(1)) e^{-1} n.
$$
Observe that changing the offered vertex in any single round of the process can change $N_0$ by at most 1. Hence, applying Theorem~\ref{th::Azuma}, we deduce that
$$
\Pr(|N_0 - \mathbb{E}(N_0)| \geq \sqrt{n \log n}) \leq 2 \exp \left\{- \frac{n \log n}{2 f_0(n)} \right\} = o(1),
$$
where the last equality holds by the concentration result we proved for $f_0(n)$. An analogous argument shows that $\mathbb{E}(N_1) = (1+o(1)) e^{-1} n$ and that $\Pr(|N_1 - \mathbb{E}(N_1)| \geq \sqrt{n \log n}) = o(1)$. Hence, w.h.p.\ $y = (1 + o(1)) y_i = (1 + o(1)) \mathbb{E}(N_i) = (1+o(1)) e^{-1} n$ for $i \in \{0,1\}$. Finally, an analogous argument to the one we used to prove the concentration of $R_i$, implies that w.h.p.\ $f_1(n) = (2 + o(1)) y = (2+o(1)) e^{-1} n$. We conclude that w.h.p.\ $f_0(n) + f_1(n) = (1 + 2e^{-1} + o(1)) n$ as claimed.
\end{proof}
Proposition~\ref{pro:KoutBip} implies the following improved upper bound on the duration of the game $(\mathcal{PM}, n)$.
\begin{corollary} \label{cor:KoutBip}
Assume that w.h.p.\ $H \sim \mathcal G_{(1+2e^{-1})\text{-out}}(K_{n/2,n/2})$ satisfies the monotone increasing graph property $\mathcal P$. Then w.h.p.\ $\tau(\mathcal P, n) \leq (1+2e^{-1}+o(1)) n$. In particular, w.h.p.\ $\tau(\mathcal{PM}, n) \leq (1+2e^{-1}+o(1)) n$.
\end{corollary}
\subsection{The min-degree process}
\label{subsec:min_deg_process}
Recall the min-degree graph process. Let $G_{\min}(n, 0)$ be the empty graph with vertex set $[n]$ and, for every $m \geq 0$, let $G_{\min}(n, m+1)$ be obtained from $G_{\min}(n,m)$ by first choosing a vertex $u$ of minimum degree in $G_{\min}(n,m)$ u.a.r., and then connecting it by a new edge to a vertex $v \in [n] \setminus \{u\}$ chosen u.a.r.~among all vertices which are not connected to $u$ by an edge of $G_{\min}(n,m)$.
For proving Proposition~\ref{pro:Gmin}, we will use the following related models $\{G'_{\min}(n,m)\}_{m \geq 0}$ and $\{G''_{\min}(n,m)\}_{m \geq 0}$ which are defined as follows. Let $\{G'_{\min}(n,m)\}_{m \geq 0}$ be the same as $\{G_{\min}(n,m)\}_{m \geq 0}$ except that we allow multiple edges, that is, we choose $v$ u.a.r.~among all vertices of $[n] \setminus \{u\}$. Similarly, $\{G''_{\min}(n,m)\}_{m \geq 0}$ is the same as $\{G_{\min}(n,m)\}_{m \geq 0}$ except that we allow loops and multiple edges, that is, we choose $v$ u.a.r.~among all vertices of $[n]$. We first prove that our semi-random multigraph process can be used to generate $\{G''_{\min}(n,m)\}_{m \geq 0}$.
\begin{proposition}\label{pro:G''min}
There exists a strategy $\mathcal S_{min}$ for Builder such that the probability space $\mathcal S_{min}(n,m)$ is the same as the probability space $G''_{\min}(n,m)$.
\end{proposition}
The semi-random process can also be used to approximate $\{G'_{\min}(n,m)\}_{m \geq 0}$.
\begin{proposition}\label{pro:G'min}
If $m = o \left(n^2 \right)$, then the strategy $\mathcal S_{min}$ is such that $H \sim G'_{\min}(n,m)$ and $G \sim \mathcal S_{min}(n, (1+o(1))m)$ can be coupled
in such a way that w.h.p.\ $H \subseteq G$.
\end{proposition}
The following are immediate corollaries of Propositions~\ref{pro:G''min} and~\ref{pro:G'min}.
\begin{corollary}\label{cor:G''min}
Let $m_{\mathcal P}$ be a positive integer for which w.h.p.\ $H \sim G''_{\min}(n,m_{\mathcal P})$ satisfies the monotone increasing graph property $\mathcal P$. Then w.h.p.\ $\tau(\mathcal P, n) \leq m_{\mathcal P}$.
\end{corollary}
\begin{corollary}\label{cor:G'min}
Let $m_{\mathcal P}$ be a positive integer for which w.h.p.\ $H \sim G'_{\min}(n,m_{\mathcal P})$ satisfies the monotone increasing graph property $\mathcal P$. If $m_{\mathcal P} = o \left(n^2 \right)$, then w.h.p.\ $\tau(\mathcal P, n) \leq (1+o(1)) m_{\mathcal P}$.
\end{corollary}
\begin{proof}[Proof of Proposition~\ref{pro:G''min}]
The strategy $\mathcal S_{\min}$ used by Builder is the following.
\textbf{Strategy $\mathcal S_{\min}$:} Whenever Builder is offered some vertex $v$, he connects it to a vertex $u$, chosen u.a.r.~among all vertices of minimum degree (observe that this could result in loops and multiple edges). Here the minimum degree refers to Builder's graph as it was \textit{before} $v$ was offered in the current round.
Fix a non-negative integer $r$ (that may depend on $n$), an arbitrary multigraph $G$ with vertex set $[n]$ and $r$ edges, and arbitrary indices $1 \leq i, j \leq n$. In order to prove that our process generates $\{G''_{\min}(n,m)\}_{m \geq 0}$, it suffices to prove that the probability of $ij$ being added to $G$ in round $r+1$ of our process is equal to the probability of $ij$ being added to $G$ in round $r+1$ of $\{G''_{\min}(n,m)\}_{m \geq 0}$.
Indeed, in $\{G''_{\min}(n, m)\}$ the first vertex of the added edge is picked uniformly at random from those vertices that have minimum degree before the round, and the second vertex is picked u.a.r. from all vertices. On the other hand, in our process the offered vertex is picked u.a.r. from all vertices, while the second vertex is picked u.a.r. from the vertices that had minimum degree before the round. The statement follows.
\end{proof}
We are now ready to prove Proposition~\ref{pro:Gmin}.
Note that a similar and, in fact, slightly simpler process can be used to approximate $\{G'_{\min}(n,m)\}_{m \geq 0}$ (thus proving Proposition~\ref{pro:G'min}).
\begin{proof}[Proof of Proposition~\ref{pro:Gmin}]
We apply $\mathcal S_{\min}$ as described in Subsection \ref{subsec:connections}. Whenever Builder claims a loop or a multiple edge, he ignores this edge and considers this round to be a failure. We continue running this process until there are $m$ edges in Builder's graph which are not failures.
That is, for every $m$ we will be able to generate $G_{\min}(n,m)$ by $\mathcal S_{\min}(n,m+f(m))$, where $f(m)$ is the number of failures. We now prove that, if $m = o(n^2)$, then w.h.p. $f(m) = o(m)$. Consider a specific round of our process which starts with a (simple) graph $G$ with $r$ edges. Let $j$ denote the second vertex we choose in this round. By the description of our process, we must have $d_G(j) = \delta(G) \leq 2r/n$. Hence, the probability that the first vertex we choose in this round is $j$ or one of its neighbors in $G$ is at most $\frac{1 + 2r/n}{n} = \frac{1}{n} + \frac{2r}{n^2}$. For every positive integer $m$, let $Y_m$ denote the random variable which counts the number of failures that occur during the first $m$ rounds of running our process according to $\mathcal S_{\min}$. Then
$$
\mathbb{E}(Y_m) \leq \frac{m}{n} + \sum_{r=1}^m \frac{2r}{n^2} = O(m/n + m^2/n^2) = o(m),
$$
where the last equality holds by our assumption that $m = o(n^2)$. Applying Markov's inequality to $Y_m$, we conclude that indeed w.h.p. the number of failures is $o(m)$.
\end{proof}
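The failure bound in the proof of Proposition~\ref{pro:Gmin} is easy to confirm empirically. The sketch below (an illustrative simulation, not part of the paper; names are ours) runs $\mathcal S_{\min}$ until $m$ edges are claimed and counts the failure rounds.

```python
import random

def s_min_failures(n, m, rng):
    """Run strategy S_min until m (simple) edges are claimed; count the
    failure rounds, i.e. rounds producing a loop or an existing edge."""
    deg = [0] * n
    edges = set()
    failures = claimed = 0
    while claimed < m:
        v = rng.randrange(n)                     # offered vertex
        dmin = min(deg)
        u = rng.choice([w for w in range(n) if deg[w] == dmin])
        e = (min(u, v), max(u, v))
        if u == v or e in edges:                 # loop or multiple edge
            failures += 1
            continue
        edges.add(e)
        deg[u] += 1
        deg[v] += 1
        claimed += 1
    return failures

rng = random.Random(0)
fails = s_min_failures(1000, 2000, rng)
print(fails)  # expected O(m/n + m^2/n^2), i.e. o(m) when m = o(n^2)
```

With $n = 1000$ and $m = 2n$ only a handful of failures occur, matching the bound $\mathbb{E}(Y_m) = O(m/n + m^2/n^2)$.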
\section{Offline games} \label{sec:off}
Theorems~\ref{thm:OffFix} and~\ref{th::MinDegkOffline} (as well as a few other results which will be discussed in Section~\ref{sec::openprob}) are consequences of a general result. Before stating it, we need to introduce some notation.
For a directed graph $D$ with vertex set $\{w_1, \ldots, w_r\}$, let $d_i^+$ denote the out-degree of $w_i$ in $D$ for every $1 \leq i \leq r$. For a given sequence $S=(v_1, v_2, \ldots)$ of vertices of $[n]$, let $m(D)$ denote the smallest integer $j$ such that the initial segment $S'=(v_1,v_2,\dots,v_j)$ contains $r$ distinct vertices $u_1, \ldots, u_r \in [n]$ such that, for every $1 \leq i \leq r$, $u_i$ appears at least $d_i^+$ times in $S'$. For an undirected graph $H$, let $m(H) = \min \{m(D) : D \textrm{ is an orientation of } H\}$.
\begin{proposition}\label{prop:FindingH}
Let $H$ be a graph on at most $n$ vertices, let $S=\{v_i\}_{i\in \mathbb{N}}$ be a sequence of vertices from $[n]$, chosen independently and uniformly at random with replacement, and let $\mathcal P_H$ be the graph property of containing $H$ as a subgraph. Then $\tau'(\mathcal{P}_H, n) = m(H)$.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop:FindingH}]
Starting with the lower bound, suppose that Builder has a strategy $\mathcal{S}$ to construct a copy of $H$ in $\ell$ rounds. During the game, played according to $\mathcal{S}$, orient each edge claimed by Builder from the vertex he was offered to the vertex he chose. Fix some copy of $H$ in $G_{\ell}$ and let $D'$ be its orientation according to the aforementioned rule. Then $\ell \geq m(D') \geq m(H)$.
As for the upper bound, we will describe a strategy for Builder to construct a copy of $H$ in $m(H)$ rounds. Fix an arbitrary orientation $D$ of $H$ such that $m(D) = m(H)$. Let $\{v_1, \ldots, v_r\}$ denote the vertex set of $H$ and let $u_1, \ldots, u_r$ be vertices in $[n]$ such that $\off(u_i) \geq d_D^+(v_i)$ for every $1 \leq i \leq r$ after $m(H)$ rounds; such vertices exist by the definition of $m(D)$. For every $1 \leq i \leq r$, let $v_{i,1}, \ldots, v_{i,d_D^+(v_i)}$ denote the out-neighbors of $v_i$ in $D$. Let $\varphi$ be the function which maps $v_i$ to $u_i$ for every $1 \leq i \leq r$. In every round, if there exist $1 \leq i \leq r$ and $1 \leq j \leq d_D^+(v_i)$ such that the vertex offered in this round is $u_i$ and this is the $j$th time it is offered, then Builder claims the edge $u_i \varphi(v_{i,j})$, if it is free. In any other case, he plays arbitrarily. It is evident that, if Builder follows this strategy, then, after $m(H)$ rounds, $G_{m(H)}[\{u_1, \ldots, u_r\}]$ contains a copy of $H$.
\end{proof}
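The quantity $m(D)$ can be evaluated mechanically: after each offer, check whether distinct vertices can be assigned to the out-degree demands, and note that matching the largest offer counts to the largest demands is an exact (Hall-type) test. A small illustrative Python sketch (function names are ours):

```python
def satisfied(counts, demands):
    # Can r distinct vertices be assigned to the demands d_1^+, ..., d_r^+?
    # Pairing sorted counts with sorted demands decides this exactly.
    top = sorted(counts.values(), reverse=True)[:len(demands)]
    dem = sorted(demands, reverse=True)
    return len(top) == len(dem) and all(c >= d for c, d in zip(top, dem))

def m_of_D(demands, seq):
    """m(D): smallest j such that the first j offers contain distinct
    vertices u_1, ..., u_r with u_i offered at least d_i^+ times."""
    counts = {}
    for j, v in enumerate(seq, start=1):
        counts[v] = counts.get(v, 0) + 1
        if satisfied(counts, demands):
            return j
    return None

# A cyclically oriented triangle has out-degree sequence (1, 1, 1), so
# m(D) is the first round by which three distinct vertices were offered.
print(m_of_D([1, 1, 1], [5, 5, 2, 2, 7]))  # -> 5
```

Since $m(D)$ depends only on the out-degree sequence of $D$, computing $m(H)$ amounts to minimizing over the (finitely many) degree sequences realizable by orientations of $H$.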
\subsection{Offline fixed graphs} \label{subsec::OffFixGraph}
\begin{proof}[Proof of Theorem~\ref{thm:OffFix}]
Let $r, f$, and $g$ be as in the statement of the theorem. It follows by Proposition~\ref{prop:FindingH} that $m(H)=\tau'(\mathcal{P}_H, n)$ and so it remains to prove that w.h.p.\ $f(n) \cdot n^{(r-1)/r} \leq m(H) \leq g(n) \cdot n^{(r-1)/r}$. Fix some $m < f(n) \cdot n^{(r-1)/r}$. The expected number of vertices which were offered at least $r$ times during the first $m$ rounds is
\begin{eqnarray} \label{eq::fnMarkov}
\sum_{k = r}^m f(n,m,k) &=& \sum_{k = r}^m n \binom{m}{k} \left(\frac{1}{n} \right)^k \left(1 - \frac{1}{n} \right)^{m-k}
\leq \sum_{k = r}^m \left(\frac{e}{k} \right)^k \frac{m^k}{n^{k-1}} \nonumber \\
&\leq& 3 \sum_{k = r}^m (f(n))^k \cdot n^{(k(r-1)/r) - k + 1} \leq 3 (f(n))^r + \sum_{k = r+1}^m n^{1 - k/r} = o(1).
\end{eqnarray}
Since every orientation of $H$ contains a vertex of out-degree at least $r$, it follows from~\eqref{eq::fnMarkov} and from Markov's inequality that w.h.p. $m(H) > m$.
Next, set $m = g(n) \cdot n^{(r-1)/r}$. Clearly we can assume that $g(n) \leq \log n$ and thus, in particular, $m = o(n^{r/(r+1)})$. Hence, a simple calculation which is very similar to the one in~\eqref{eq::fnMarkov}, shows that w.h.p. no vertex was offered more than $r$ times during the first $m$ rounds. On the other hand, the expected number of vertices which were offered exactly $r$ times is
$$
f(n,m,r) = (1 + o(1)) \frac{e^{- m/n}}{r!} \cdot \frac{m^r}{n^{r-1}} \geq C (g(n))^r = \omega(1),
$$
where the first equality holds by~\eqref{e:exactly-k} from Subsection~\ref{sec::BallsBins} and $C$ is some constant which depends on $r$. Therefore, by Lemma~\ref{lem::BallsTalagrand}, w.h.p. at least $v(H)$ vertices were offered exactly $r$ times each. Hence, $m(H) \leq m$ as claimed.
\end{proof}
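The threshold scale $n^{(r-1)/r}$ in the proof above can be observed directly; for $r = 2$ it is the familiar birthday-paradox scale. A small simulation (illustrative only; names are ours):

```python
import random

def first_r_fold(n, r, rng):
    # Round in which some vertex is offered for the r-th time.
    counts = {}
    t = 0
    while True:
        t += 1
        v = rng.randrange(n)
        counts[v] = counts.get(v, 0) + 1
        if counts[v] == r:
            return t

rng = random.Random(0)
n, r = 10 ** 6, 2
avg = sum(first_r_fold(n, r, rng) for _ in range(50)) / 50
ratio = avg / n ** ((r - 1) / r)
print(ratio)  # of constant order, matching m(H) = Theta(n^((r-1)/r))
```

For $r = 2$ and $n = 10^6$ the average hitting time sits near $n^{1/2}$, as the theorem predicts for graphs whose every orientation has maximum out-degree $2$.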
\subsection{Offline Minimum Degree $k$}
\begin{proof}[Proof of Theorem~\ref{th::MinDegkOffline}]
For a positive integer $k$ and a vertex $v \in V(K_n)$, at any point during the game $({\cd}_k, n)$,
let $\off_k(v) = \min \{2 \off(v), k + \off(v)\}$, where $\off(v)$ is the number of times $v$ was offered up to that point. The idea behind this definition of $\off_k(v)$ is that when Builder claims an edge which is incident with $v$, he advances the sum of degrees in his graph towards having minimum degree $k$ by $2$ if $d(v) < k$, and by $1$ otherwise (in both cases, this is only if he chooses the other endpoint of this edge wisely). Therefore, this parameter allows us to track the evolution of Builder's graph and show that, in the offline version of the game $({\cd}_k, n)$, Builder can construct a graph with minimum degree $k$ as soon as $\sum_{v \in V(K_n)} \off_k(v) \geq k n$ first occurs, but not sooner. We begin by stating and proving the following two simple auxiliary claims.
\begin{claim} \label{cl::Yk}
Let $Y_k^r = \sum_{i=0}^{k-1} (k - i) X_i^r$, where $X_i^r$ is the random variable which counts the number of vertices that were offered precisely $i$ times during the first $r$ rounds. Then $Y_k^r \leq r$ if and only if $\sum_{v \in V(K_n)} \off_k(v) \geq k n$.
\end{claim}
\begin{proof}
Our claim readily follows from the following calculation:
\begin{align*}
\sum_{v \in V(K_n)} \off_k(v) &= \sum_{\stackrel{v \in V(K_n)}{\off(v) \leq k}} 2 \off(v) + \sum_{\stackrel{v \in V(K_n)}{\off(v) > k}} (k + \off(v)) \\
&= 2 \sum_{i=0}^k i X_i^r + k \sum_{i = k+1}^r X_i^r + \sum_{i = k+1}^r i X_i^r \\
&= \sum_{i=0}^r i X_i^r + k \sum_{i=0}^r X_i^r - \sum_{i=0}^{k-1} (k - i) X_i^r \\
&= r + k n - Y_k^r.
\end{align*}
\end{proof}
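The identity proved in Claim~\ref{cl::Yk} is purely deterministic, so it can be checked mechanically on random offer sequences; the following sketch (names are ours) does exactly that.

```python
import random

def check_identity(n, r, k, rng):
    """Draw r uniform offers over [n] and verify the identity
    sum_v off_k(v) = r + k*n - Y_k^r from Claim cl::Yk."""
    off = [0] * n
    for _ in range(r):
        off[rng.randrange(n)] += 1
    off_k_sum = sum(min(2 * o, k + o) for o in off)
    # Y_k^r = sum_{i<k} (k - i) * X_i^r, with X_i^r the number of
    # vertices offered exactly i times.
    Y = sum((k - i) * sum(1 for o in off if o == i) for i in range(k))
    return off_k_sum == r + k * n - Y

rng = random.Random(0)
print(all(check_identity(50, 120, 3, rng) for _ in range(100)))  # -> True
```

The check passes for every offer sequence, reflecting that the claim is an algebraic identity rather than a probabilistic statement.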
\begin{claim} \label{cl::dk}
For a vertex $v \in V(K_n)$, at any point during the process, let $d_k(v) = \min \{d(v), k\}$. Then $\sum_{v \in V(K_n)} \off_k(v) \geq \sum_{v \in V(K_n)} d_k(v)$ holds at any point during the process.
\end{claim}
\begin{proof}
We will prove the claim by induction on the number of rounds in the process. It is clearly true at the beginning of the process when $\sum_{v \in V(K_n)} \off_k(v) = \sum_{v \in V(K_n)} d_k(v) = 0$. Assume that $\sum_{v \in V(K_n)} \off_k(v) \geq \sum_{v \in V(K_n)} d_k(v)$ holds immediately after the $i$th round of the process, for some $i \geq 0$, and consider the $(i+1)$st round. Let $u$ be the vertex Builder is offered in the $(i+1)$st round and let $v$ be the vertex he connects to $u$ in this round. If $u$ has degree at least $k$ in the beginning of the $(i+1)$st round, then $\off_k(u)$ is increased by $1$ and $\off_k(x)$ is unchanged for every $x \in V(K_n) \setminus \{u\}$. Moreover, regardless of Builder's strategy, $d_k(v)$ is increased by at most $1$ and $d_k(x)$ is unchanged for every $x \in V(K_n) \setminus \{v\}$. Similarly, if $u$ has degree at most $k-1$ in the beginning of the $(i+1)$st round, then $\off_k(u)$ is increased by $2$ and $\off_k(x)$ is unchanged for every $x \in V(K_n) \setminus \{u\}$. Moreover, regardless of Builder's strategy, $d_k(u)$ is increased by $1$, $d_k(v)$ is increased by at most $1$, and $d_k(x)$ is unchanged for every $x \in V(K_n) \setminus \{u,v\}$. In both cases $\sum_{v \in V(K_n)} \off_k(v) \geq \sum_{v \in V(K_n)} d_k(v)$ holds immediately after the $(i+1)$st round.
\end{proof}
Returning to the proof of Theorem~\ref{th::MinDegkOffline}, assume first that $\sum_{v \in V(K_n)} \off_k(v) < k n$. It follows from Claim~\ref{cl::dk} that $\sum_{v \in V(K_n)} d_k(v) \leq \sum_{v \in V(K_n)} \off_k(v) < k n$. Clearly this means that the minimum degree in Builder's graph is strictly less than $k$.
Assume now that $\sum_{v \in V(K_n)} \off_k(v) \geq k n$ holds after $r$ rounds of the process. We will show that Builder has a strategy to ensure that the minimum degree in his graph will be at least $k$. For every $1 \leq i \leq r$, let $u_i$ denote the vertex which Builder was offered in the $i$th round. Immediately after the $r$th round, for every $0 \leq j \leq k-1$ let $L_j$ denote the set of vertices which were offered exactly $j$ times. We will construct $k$ bipartite graphs $H_1 = (A_1 \cup B_1, E_1), \ldots, H_k = (A_k \cup B_k, E_k)$. Moreover, for every $1 \leq j \leq k$, we will construct a matching $M_j$ in $H_j$ which saturates $B_j$. This is done recursively as follows. $A_1 = \{u_1, \ldots, u_r\}$ (note that $A_1$ is a multiset of size $r$ in which every $v \in V(K_n)$ appears precisely $\off(v)$ times), $B_1 = L_0 \cup \ldots \cup L_{k-1}$ and $E_1 = \{uv : u \in A_1, v \in B_1 \textrm{ and } u \neq v\}$. Note that $|B_1| = \sum_{j=0}^{k-1} |L_j| = \sum_{j=0}^{k-1} X_j^r \leq Y_k^r \leq r = |A_1|$, where the last inequality holds by Claim~\ref{cl::Yk}. Let $u \in A_1$ and $v \in B_1$ be arbitrary vertices and let $0 \leq j \leq k-1$ be the unique integer such that $v \in L_j$. By definition, if $uv \notin E_1$, then $u = v$. Since $v \in L_j$, it follows that $d_{H_1}(v) = |A_1| - j \geq |A_1| - k$. Similarly, $d_{H_1}(u) \geq |B_1| - 1$. Since, moreover, $r \gg k$ for sufficiently large $n$, a straightforward application of Hall's Theorem shows that $H_1$ has a matching which saturates $B_1$; let $M_1$ be such a matching chosen arbitrarily. Assume we have already constructed $H_1, \ldots, H_i$ and $M_1, \ldots, M_i$ for some $1 \leq i < k$. Let $Z_i$ denote the set of vertices of $A_i$ that were matched in $M_i$ and let $A_{i+1} = A_i \setminus Z_i$ (again, $A_{i+1}$ is a multiset and so, if some vertex appears $\ell_1$ times in $A_i$ and $\ell_2$ in $Z_i$, then it will appear $\ell_1 - \ell_2$ times in $A_{i+1}$). 
Let $B_{i+1} = L_0 \cup \ldots \cup L_{k-i-1}$. Let $F_i^1 = \{uv \in E_i : u \in Z_i \textrm{ or } v \in L_{k-i}\}$ and let $F_i^2 = \{uv \in E_i : \exists u' \in Z_i \textrm{ such that } u'v \in M_i \textrm{ and } u' \textrm{ is a copy of } u\}$. Finally, let $E_{i+1} = E_i \setminus (F_i^1 \cup F_i^2)$. It remains to prove that $H_{i+1}$ has a matching which saturates $B_{i+1}$. We first claim that $|B_{i+1}| \leq |A_{i+1}|$. Indeed
\begin{eqnarray*}
|A_{i+1}| - |B_{i+1}| &=& |A_1| - \sum_{j=1}^i |M_j| - |B_{i+1}| = r - \sum_{j=1}^{i+1} |B_j| \\
&=& r - \sum_{j=1}^i j |L_{k-j}| - (i+1) \sum_{j=i+1}^k |L_{k-j}| \geq r - \sum_{j=0}^{k-1} (k-j) X_j^r \\
&=& r - Y_k^r \geq 0,
\end{eqnarray*}
where the last inequality holds by Claim~\ref{cl::Yk}.
Now, consider arbitrary vertices $u \in A_{i+1}$ and $v \in B_{i+1}$. By definition, if $uv \notin E_{i+1}$, then either $u = v$, or $uv \in M_1 \cup \ldots \cup M_i$, or $u'v \in M_1 \cup \ldots \cup M_i$ for some $u'$ which is a copy of $u$. Therefore
$$
d_{H_{i+1}}(v) \geq |A_{i+1}| - \off(v) - i \cdot \max \{\off(z) : z \in V(K_n)\} \geq |A_{i+1}| - k \log n,
$$
where the last inequality holds by Corollary~\ref{cor::maxLoad}. Similarly,
$$
d_{H_{i+1}}(u) \geq |B_{i+1}| - 1 - \off(u) \geq |B_{i+1}| - 1 - \log n.
$$
Since, moreover, $|A_{i+1}| \geq |B_{i+1}| \gg \log n$ for sufficiently large $n$, it follows by a straightforward application of Hall's Theorem that $H_{i+1}$ indeed has a matching which saturates $B_{i+1}$; let $M_{i+1}$ be such a matching chosen arbitrarily. This proves the existence of the matching $M_j$ in $H_j$ for every $1 \leq j \leq k$.
We are now ready to describe Builder's strategy. For every $1 \leq i \leq r$, Builder plays the $i$th round as follows. If $u_i$ is unmatched in $M_1 \cup \ldots \cup M_k$, then Builder claims an arbitrary free edge which is incident to $u_i$. Otherwise, let $1 \leq j \leq k$ be the unique integer such that $u_i$ is matched in $M_j$. Builder claims the unique edge of $M_j$ which is incident to $u_i$ if it is free, and an arbitrary free edge which is incident to $u_i$ otherwise. We claim that, by following this strategy, after $r$ rounds the minimum degree in Builder's graph is at least $k$. Indeed, let $v \in V(K_n)$ be an arbitrary vertex. If $\off(v) \geq k$, then the degree of $v$ in Builder's graph is at least $k$ regardless of his strategy. Assume then that $\off(v) = \ell$ for some $0 \leq \ell \leq k-1$. By Builder's strategy, $v$ is matched in $M_j$ for every $1 \leq j \leq k - \ell$. Hence, its degree in Builder's graph is at least $\ell + (k - \ell) = k$.
Finally, we are in a position to determine $\tau({\cd}_k, n)$ asymptotically. It is the smallest integer $r$ for which $\sum_{v \in V(K_n)} \off_k(v) \geq k n$. By Claim~\ref{cl::Yk} it is then also the smallest $r$ for which $Y_k^r \leq r$. It follows by Lemma~\ref{lem::Balls} that $Y_k^r$ is concentrated around its expectation and thus $(1 + o(1)) \mathbb{E}(Y_k^r) \leq r$. By linearity of expectation and by~\eqref{e:exactly-k} the latter inequality translates to
$$
\alpha n \geq (1 + o(1)) \mathbb{E}(Y_k^r) = (1 + o(1)) \sum_{i=0}^{k-1} (k - i) \mathbb{E}(X_i^r) = (1 + o(1)) \sum_{i=0}^{k-1} \left(k - i \right) \frac{e^{- \alpha}}{i!} \alpha^i n,
$$
where $\alpha := r/n$. Since $f_k(x) = \sum_{i=0}^{k-1} \left(k-i \right) \frac{x^i}{i!} - x e^x$ is a continuous function, it follows that $\alpha = \alpha_k + o(1)$ where $\alpha_k$ is the unique positive root of $f_k(x)$, as claimed.
\end{proof}
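The constants $\alpha_k$ have no closed form in general, but they are easy to approximate numerically. The following short Python sketch is purely illustrative and plays no role in the argument; it locates the unique positive root of $f_k$ by bisection. For $k = 1$ it recovers $\alpha_1 = W(1) \approx 0.5671$, where $W$ denotes the Lambert $W$ function (since $f_1(x) = 1 - x e^x$).

```python
import math

def f(k, x):
    # f_k(x) = sum_{i=0}^{k-1} (k - i) x^i / i!  -  x e^x
    return sum((k - i) * x**i / math.factorial(i) for i in range(k)) - x * math.exp(x)

def alpha(k, tol=1e-12):
    # f_k(0) = k > 0 and f_k(k) < 0, so bisection on [0, k]
    # brackets the unique positive root alpha_k.
    lo, hi = 0.0, float(k)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(k, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# alpha_1 solves x e^x = 1, i.e. alpha_1 = W(1) ~ 0.5671 (Lambert W).
print([round(alpha(k), 4) for k in (1, 2, 3)])
```

The upper endpoint $k$ of the bracket works because $x e^x$ dominates the polynomial part of $f_k$ at $x = k$.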
\section{Online games}\label{sec:on}
\subsection{Online fixed graph game}
\begin{proof}[Proof of Theorem~\ref{th::fixedGraphUpperBound}]
We will describe a strategy for Builder to build a copy of $H$ and will then prove that w.h.p.\ building such a copy using this strategy will take him at most $g(n) \cdot n^{(d-1)/d}$ rounds. Let $(v_1, \ldots, v_r)$ be a degeneracy ordering of the vertices of $H$, that is, an ordering such that $v_k$ has at most $d$ neighbors in $\{v_1, \ldots, v_{k-1}\}$ for every $2 \leq k \leq r$. We refer to these neighbors as the \textit{back neighbors} of $v_k$. We will define a mapping $\varphi : V(H) \to [n]$ such that $G_{m_k}[\{\varphi(v_1), \ldots, \varphi(v_k)\}]$ contains a copy of $H[\{v_1, \ldots, v_k\}]$ for every $1 \leq k \leq r$, where $m_k$ is the number of the round in which this is achieved for the first time. We will do so inductively as follows. Let $u_1$ be the vertex Builder is offered in the first round of the game and let $\varphi(v_1) = u_1$. Assume now that for some $1 \leq k \leq r-1$, Builder has already built a graph $G_{m_k}$ and defined $\varphi(v_i)$ for every $1 \leq i \leq k$ such that $G_{m_k}[\{\varphi(v_1), \ldots, \varphi(v_k)\}]$ contains a copy of $H[\{v_1, \ldots, v_k\}]$. Builder would now wish to define $\varphi(v_{k+1})$; he does so as follows. Let $v_{i_1}, \ldots, v_{i_\ell}$ be the back neighbors of $v_{k+1}$ in $H$.
If $\ell = 0$, then Builder defines $\varphi(v_{k+1}) = u$ for an
arbitrary vertex $u \in [n] \setminus \{\varphi(v_1), \ldots,
\varphi(v_k)\}$ and $m_{k+1} = m_k$. Assume then that $1 \leq \ell
\leq d$ and observe that $\varphi(v_{i_j})$ was already defined for
every $1 \leq j \leq \ell$.
For every $i > m_k$, let $u_i$ denote the vertex Builder is offered in the $i$th round. If $u_i \in \{\varphi(v_1), \ldots, \varphi(v_k)\}$, then Builder plays arbitrarily. Otherwise, let $t$ denote the total number of times $u_i$ was offered in rounds $m_k + 1, \ldots, i$. Builder claims the edge $u_i \varphi(v_{i_t})$. If, moreover, $t = \ell$, then Builder defines $\varphi(v_{k+1}) = u_i$ and $m_{k+1} = i$ (in particular, $1 \leq t \leq \ell$ and so this strategy is well-defined).
Now, observe that $m_{k+1} - m_k$ is the smallest number of rounds until some $u_i \in [n] \setminus \{\varphi(v_1), \ldots, \varphi(v_k)\}$ is offered $\ell$ times (counting offers in rounds $m_k + 1, \ldots, m_{k+1}$). By Builder's strategy, no vertex is offered more than $\ell$ times during those $m_{k+1} - m_k$ rounds and so the conditions of Lemma~\ref{lem::BallsTalagrand} are satisfied. Since, moreover, $r$ is a constant and $\ell \leq d$, it follows from Lemma~\ref{lem::BallsTalagrand} that w.h.p.\ $m_{k+1} - m_k < g(n)/r \cdot n^{(d-1)/d}$. This is true for every $1 \leq k < r$ (and $m_1 = 1$) and so w.h.p.\ the entire game lasts at most $g(n) \cdot n^{(d-1)/d}$ rounds as claimed.
\end{proof}
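The key quantitative step in the proof above, waiting until some fresh vertex has been offered $\ell$ times, is a balls-into-bins (birthday-type) waiting time of order $n^{(\ell-1)/\ell}$. The following small Python experiment is purely illustrative (it is not used in the proof, and it counts offers to all vertices rather than only the unused ones, which changes nothing asymptotically since $r$ is constant); for $\ell = 2$ it exhibits the familiar $\Theta(\sqrt n)$ birthday scaling.

```python
import random

def rounds_until_l_offers(n, l, seed=0):
    """Offer vertices of [n] u.a.r. and return the first round in which
    some vertex has been offered l times; this mirrors the waiting time
    m_{k+1} - m_k analyzed in the proof above."""
    rng = random.Random(seed)
    count = [0] * n
    t = 0
    while True:
        t += 1
        v = rng.randrange(n)
        count[v] += 1
        if count[v] == l:
            return t

# For l = 2 this is the birthday problem: the waiting time is of order sqrt(n).
n = 10**6
trials = [rounds_until_l_offers(n, 2, seed=s) for s in range(20)]
print(sum(trials) / len(trials))  # typically around sqrt(pi * n / 2) ~ 1250
```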
\begin{proof}[Proof of Theorem~\ref{thm:lowerCliqueOnLine}]
The assertion of the theorem trivially holds for $d=2$. Hence, for the remainder of the proof we may assume that $d \geq 3$.
The main ingredient of the proof is the following claim which upper bounds the number of copies of $K_\ell$, for some integer $\ell\geq3$, in Builder's graph up to some specific round.
\begin{claim} \label{claim:NumberOfCopiesOfKk}
For an integer $\ell \geq 3$, a positive integer $m = m(n)$, and a strategy $\mathcal S$ of Builder, let $Z_{m, \ell}^{\mathcal S}$ denote the number of copies of $K_\ell$ in $G_m$ when Builder is playing according to the strategy ${\mathcal S}$. Then w.h.p.\ $Z_{m,\ell}^{\mathcal S} \leq a(n) \cdot \frac {m^{\ell-1}}{n^{\ell-2}}$, for any function $a(n)$ which tends to infinity as $n$ tends to infinity.
\end{claim}
\begin{proof}
For positive integers $\ell \geq 3$ and $m = m(n)$ and a strategy ${\mathcal S}$, let $Y_{m, \ell}^{\mathcal S}$ denote the number of copies of $K_{\ell}$ that Builder creates in the $m$th round when he plays according to ${\mathcal S}$. Note that $Z^{\mathcal S}_{m,\ell}=\sum_{i=1}^mY^{\mathcal S}_{i,\ell}$.
Let ${\mathcal S}$ be an arbitrary strategy of Builder. We prove by induction on $\ell$ that $\mathbb{E}(Y^{\mathcal S}_{m,\ell}) \leq \left(\frac {(\ell-1) m}{n} \right)^{\ell-2}$. First we introduce some useful notation. For an integer $m$, let $G'$ be a copy of $K_\ell$ in $G_m$. For $v\in V(G') \subseteq V(G_m)$, we denote by $(K_{\ell-1},v)_{G'}$ the ordered pair consisting of the copy of $K_{\ell-1}$ in $G'$ that does not contain $v$, and the remaining vertex $v$. For the induction basis $\ell = 3$, it is evident that in order to create a copy of $K_3$ in some round $r$, we must touch a vertex $v$ that belongs to a copy $G'$ of $K_2$. If $v$ was offered to Builder in round $r$, then the pair $(K_1,v)_{G'}$ will lie in at most one copy of $K_3$ after round $r$ (it is impossible to create two different copies of $K_3$, both containing $G'$, in one round). The number of such potential pairs after $m$ rounds is at most $2m$ and the probability that a specific vertex $v$ will be offered in round $r$ is $\frac 1n$. Therefore, $\mathbb{E}(Y^{\mathcal S}_{m,3})\leq 2m\cdot \frac 1n=\frac {2m}{n}$. Now, for the induction step, assume that $\mathbb{E}(Y^{\mathcal S}_{m,\ell}) \leq \left(\frac {(\ell-1) m}{n} \right)^{\ell-2}$. In order to create a copy of $K_{\ell+1}$, Builder must touch a vertex $v$ that belongs to a copy $G'$ of $K_\ell$. If $v$ was offered to Builder in some round $r$, then the pair $(K_{\ell-1},v)_{G'}$ will lie in at most one copy of $K_{\ell+1}$ immediately after the $r$th round. Since for each copy $G'$ of $K_\ell$ there are $\ell$ ordered pairs $(K_{\ell-1},v)_{G'}$, the expected number of such potential pairs after $m-1$ rounds is ${\ell}\cdot\mathbb{E}(Z^{\mathcal S}_{m-1,\ell})$ and the probability that a specific vertex $v$ will be offered in round $r$ is $\frac 1n$. Therefore,
\begin{align*}
\mathbb{E}(Y^{\mathcal S}_{m,\ell+1})&\leq\ell\cdot\mathbb{E}(Z^{\mathcal S}_{m-1,\ell})\cdot \frac 1n
\leq\frac \ell n \sum_{i=1}^m \mathbb{E}(Y^{\mathcal S}_{i,\ell})\\
&\leq \frac \ell n \sum_{i=1}^m (\ell-1)^{\ell-2}\left(\frac {i}n\right)^{\ell-2}
=\frac {\ell(\ell-1)^{\ell-2}}{n^{\ell-1}}\sum_{i=1}^{m}i^{\ell-2}\\
&\leq \frac {\ell(\ell-1)^{\ell-2}}{n^{\ell-1}}\cdot m^{\ell-1}
\leq \frac {\ell^{\ell-1}}{n^{\ell-1}}\cdot m^{\ell-1}.
\end{align*}
This concludes the induction, from which it follows that
\begin{equation}\label{eq:LowerOnLineFix}
\mathbb{E}(Z^{\mathcal S}_{m,\ell})=\sum_{i=1}^m \mathbb{E}(Y^{\mathcal S}_{i,\ell})\leq (\ell-1)^{\ell-2}\cdot\frac {m^{\ell-1}}{n^{\ell-2}}.
\end{equation}
Now, let ${\mathcal S}^m_{op}$ be a strategy for Builder which maximizes the probability that the number of copies of $K_\ell$ in $G_m$ exceeds $a(n) \cdot \frac{m^{\ell-1}}{n^{\ell-2}}$.
Let $Z_{m,\ell}^{op}$ be the number of copies of $K_\ell$ in $G_m$ when Builder plays according to ${\mathcal S}^m_{op}$. It follows by \eqref{eq:LowerOnLineFix} that $\mathbb{E}(Z_{m,\ell}^{op})\leq C\cdot\frac {m^{\ell-1}}{n^{\ell-2}}$, where $C = C(\ell)$ is a constant. By Markov's inequality we have that
$$\lim_{n\to\infty}\Pr\left[Z_{m,\ell}^{op}>a(n)\cdot \frac {m^{\ell-1}}{n^{\ell-2}} \right]= 0$$ for any function $a(n)\to \infty$. Since $$\Pr\left[Z_{m,\ell}^{{\mathcal S}}>a(n)\cdot \frac {m^{\ell-1}}{n^{\ell-2}} \right]\leq \Pr\left[Z_{m,\ell}^{op}>a(n)\cdot \frac {m^{\ell-1}}{n^{\ell-2}} \right]$$ holds for every strategy ${\mathcal S}$, our claim follows.
\end{proof}
Now, let $m =
(a(n))^{-1} n^{(d-2)/(d-1)}$. Then, by Claim~\ref{claim:NumberOfCopiesOfKk}, w.h.p.
$Z_{m,d}^{\mathcal S} < 1$ for any strategy ${\mathcal S}$. That is,
if $m \leq (a(n))^{-1} n^{(d-2)/(d-1)}$, then w.h.p. $G_m$ does not
contain a copy of $K_d$.
\end{proof}
\subsection{Online Minimum Degree $k$} \label{subsec:online_min_deg_k}
Our main goal in this subsection is to prove Theorem~\ref{th::OnMinDegk}. In fact, we will study three variants of the minimum degree $k$ game: $(\cd_k, n)$, where loops and multiple edges are not counted when calculating the degree of each vertex; $(\cd'_k, n)$, where multiple edges are counted but loops are not; and $(\cd''_k, n)$, where all edges, including multiple edges and loops, are counted (every loop increases the degree of the vertex by two).
$(\cd'_k, n)$ will be useful in the next subsection, concerned with $k$-connectivity.
Recall the strategy $\mathcal S_{\min}$ presented in Subsection \ref{subsec:min_deg_process}. We show that a simple variant of this strategy, denoted $\mathcal{S}^\dagger_{\min}$, is optimal for the game $(\cd''_k, n)$. Utilizing Proposition \ref{pro:Gmin}, we conclude that $\mathcal{S}_{\min}$ is almost optimal for all of these three games in some precise sense.
Before stating our results, we need the following additional notation and terminology.
For two random variables $X$ and $Y$, taking values in $\mathbb{N}$, an integer $\ell \geq 0$, and a real number $\varepsilon \geq 0$, we say that $X$ \textit{$(\ell, \varepsilon)$-dominates} $Y$ if $\Pr(X \leq t+\ell) \geq \Pr(Y \leq t) - \varepsilon$ for any $t$. In our context, we identify each strategy $\mathcal{S}$ for a given game $\mathcal{G} = ({\mathcal P}, n)$ with its hitting time $H_\mathcal{G}(\mathcal{S})$ for this game, i.e., the random variable representing the number of rounds required for $\mathcal{S}$ to win $\mathcal{G}$. We say that $\mathcal{S}$ \textit{$(\ell, \varepsilon)$-dominates} another strategy $\mathcal{S}'$ if $H_\mathcal{G}(\mathcal{S})$ $(\ell, \varepsilon)$-dominates $H_\mathcal{G}(\mathcal{S}')$; in the special case $\ell = \varepsilon = 0$, we simply say that $\mathcal{S}$ \textit{dominates} $\mathcal{S}'$. $\mathcal{S}$ is \textit{$(\ell, \varepsilon)$-optimal} for a given game if it $(\ell, \varepsilon)$-dominates any other strategy $\mathcal{S}'$ for this game; if $\mathcal{S}$ is $(0, 0)$-optimal, we simply say that it is \textit{optimal}.
\begin{theorem} \label{th::MinDegkOnline}
For every fixed positive integer $k$, the strategy $\mathcal S_{\min}$ is $(o(n), o(1))$-optimal for all three games $(\cd_k, n)$, $(\cd'_k, n)$, and $(\cd''_k, n)$.
\end{theorem}
Consider the following strategy, denoted $\mathcal S^\dagger_{\min}$, which is a slight variant of $\mathcal S_{\min}$. In any given round, let $G$ denote Builder's graph immediately before this round starts. Once a vertex $v$ is offered to Builder, we increase its degree by $1$, and only then choose a vertex $u$ u.a.r. among all vertices of minimum degree at this point. Hence, unlike in $\mathcal S_{\min}$, it is possible that $v$ was a vertex of minimum degree before it was offered but is not after we increase its degree by $1$, and so will surely not be chosen as the second vertex in this round.
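To make the definition concrete, here is a minimal Python simulation of $\mathcal S^\dagger_{\min}$ in the game $(\cd''_k, n)$, where every claimed edge, including loops and multiple edges, counts towards the degree (a loop contributing two). It is illustrative only and plays no role in the proofs; for $k = 1$ the normalized hitting time concentrates around $h_1 = \ln 2 \approx 0.6931$, consistent with the values of $h_k$ discussed below.

```python
import random

def s_dagger_min_rounds(n, k, seed=0):
    """Play the game (D''_k, n) with the strategy S†_min and return the
    number of rounds until every vertex has degree at least k, where all
    edges (multiple edges and loops included) count towards the degree."""
    rng = random.Random(seed)
    deg = [0] * n
    rounds = 0
    while min(deg) < k:
        v = rng.randrange(n)   # the offered vertex, chosen u.a.r.
        deg[v] += 1            # v's degree grows regardless of Builder's choice
        m = min(deg)           # minimum degree *after* this increment
        u = rng.choice([w for w in range(n) if deg[w] == m])
        deg[u] += 1            # u == v is possible, in which case the edge is a loop
        rounds += 1
    return rounds

# For k = 1 the ratio rounds/n should be close to h_1 = ln 2 ~ 0.693.
print(s_dagger_min_rounds(2000, 1) / 2000)
```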
The main technical ingredient in the proof of Theorem~\ref{th::MinDegkOnline} is the following lemma.
\begin{lemma} \label{lem:MinDegK''}
$\mathcal S^\dagger_{\min}$ is optimal for $(\cd''_k, n)$.
\end{lemma}
The next lemma compares the performance of $\mathcal S_{\min}$ and $\mathcal S^{\dagger}_{\min}$, asserting that $\mathcal S_{\min}$ is indeed essentially optimal.
\begin{lemma} \label{lem:S_min}
$\mathcal{S}_{\min}$ $(o(n), o(1))$-dominates $\mathcal{S}^{\dagger}_{\min}$ for $(\cd''_k, n)$.
\end{lemma}
Theorem~\ref{th::MinDegkOnline} is an immediate corollary of Lemmas~\ref{lem:MinDegK''} and~\ref{lem:S_min}, and Propositions~\ref{pro:G''min} and~\ref{pro:Gmin}.
Before proceeding to the proofs of these lemmas, we briefly discuss previous results on the behavior of $\mathcal{S}_{\min}$.
Wormald showed, using his seminal differential equations method~\cite{Wormald, Wormald2}, that for every positive integer $k$ there exists a constant $h_k$ such that w.h.p. the min-degree process reaches minimum degree $k$ after $h_k n + o(n)$ rounds. This was explicitly shown in~\cite{Wormald, Wormald2} for $\{G'_{min}(n,m)\}_{m \geq 0}$ (corresponding to the game $(\cd'_k, n)$), but it is easy to show that it still holds for $\{G_{min}(n,m)\}_{m \geq 0}$ and $\{G''_{min}(n,m)\}_{m \geq 0}$ as well. Theorem~\ref{th::OnMinDegk} is thus an immediate corollary of Theorem~\ref{th::MinDegkOnline}. In fact, we obtain the following more general result as implied by Propositions~\ref{pro:Gmin} and~\ref{pro:G''min}.
\begin{corollary}
Let $k$ be a positive integer. Then w.h.p. $\tau({\cd}_k, n) = (h_k + o(1)) n$. The same is true for $\tau(\cd'_k, n)$ and $\tau(\cd''_k, n)$.
\end{corollary}
The first few $h_k$'s were explicitly calculated in~\cite{KKRL}; it was shown there that
\begin{eqnarray*}
&& h_1 = \ln 2 \approx 0.6931 \\
&& h_2 = \ln 2 + \ln (1 + \ln 2) \approx 1.2197 \\
&& h_3 = \ln ((\ln 2)^2 + 2(1 + \ln 2)(1 + \ln(1 + \ln 2))) \approx 1.7316.
\end{eqnarray*}
Calculating $h_k$ for $k > 3$ can be carried out in a straightforward manner, by iteratively solving a simple differential equation with suitable initial conditions. For more details, see Subsections 3.1 and 3.2 of~\cite{Wormald2}.
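As a quick numerical sanity check (not part of the derivation), the closed-form expressions above can be evaluated directly in Python:

```python
import math

ln2 = math.log(2)
h1 = ln2
h2 = ln2 + math.log(1 + ln2)
h3 = math.log(ln2**2 + 2 * (1 + ln2) * (1 + math.log(1 + ln2)))

# Reproduces the decimal approximations quoted above.
print(round(h1, 4), round(h2, 4), round(h3, 4))
```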
We now proceed to the proofs of Lemmas~\ref{lem:MinDegK''} and~\ref{lem:S_min}.
\begin{proof}[Proof of Lemma~\ref{lem:MinDegK''}]
For any integer $i \geq 0$, a strategy $\mathcal{S}$ of Builder is called \textit{$i$-minimizing} if in each of the first $i$ rounds, Builder chooses to connect the vertex he is offered to a vertex of minimum degree. $\mathcal{S}$ is said to be \textit{minimizing} if it is $i$-minimizing for every $i$.
In order to prove the lemma, it suffices to show that any $i$-minimizing strategy for $(\cd''_k, n)$ is dominated by some $(i+1)$-minimizing strategy. Indeed, seeing that domination is a transitive relation and that, trivially, any strategy is $0$-minimizing, the last statement implies that any strategy is dominated by a minimizing strategy. Moreover, any two minimizing strategies $\mathcal{S}$ and $\mathcal{S}'$ are clearly equivalent (in the sense that $\mathcal{S}$ dominates $\mathcal{S}'$ and $\mathcal{S}'$ dominates $\mathcal{S}$). Since $\mathcal S^{\dagger}_{\min}$ is a minimizing strategy, we conclude that it is optimal.
Let $\mathcal{S}$ be an $i$-minimizing strategy, and consider the following $(i+1)$-minimizing strategy $\mathcal{S}'$:
$\mathcal{S}'$ is identical to $\mathcal{S}$ in the first $i$ rounds.
Conditioned on the degree sequence of Builder's graph immediately after the first $i$ rounds are completed and the vertex of round $i+1$, say $v_{i+1}$, is offered, for any vertex $v$ of the graph let $q_v$ denote the probability that, when playing according to $\mathcal{S}$, Builder chooses $v$ as the second vertex in round $i+1$.
At this point, let $w$ be an arbitrary vertex of minimum degree and let $u$ be a vertex chosen randomly according to the distribution induced by $\mathcal{S}$, that is, for any vertex $v$, the probability that $u = v$ is $q_v$. In round $i+1$, when playing according to $\mathcal{S}'$, Builder claims an edge connecting $w$ and $v_{i+1}$.
In the remainder of the game $\mathcal{S}'$ instructs Builder to play as follows.
As long as $d_G(w) \leq d_G(u)$ (where $G$ denotes Builder's graph at any point during the game), $\mathcal{S}'$ imitates the behavior of $\mathcal{S}$ under the assumption that the second vertex chosen in round $i+1$ was $u$ (and not $w$, as was actually instructed by $\mathcal{S}'$).
If, at some point, the degree of $w$ in Builder's graph exceeds that of $u$, then $\mathcal{S}'$ ``switches roles'' between these two vertices, that is, from now on, whenever $\mathcal{S}$ dictates that the offered vertex should be connected to $u$, $\mathcal{S}'$ dictates that it should be connected to $w$ instead, and vice versa. The behavior of $\mathcal{S}'$ with respect to other vertices is identical to that of $\mathcal{S}$.
Clearly, $\mathcal{S}'$ is $(i+1)$-minimizing, and it is not hard to see that it dominates $\mathcal{S}$.
Indeed, let $E$ denote the event that a role switch occurs (between some two vertices $u$ and $w$ as described) at some point when following the strategy $\mathcal{S}'$. Conditioning on $E$, the distribution of the hitting time of $\mathcal{S}$ is identical to that of $\mathcal{S}'$.
On the other hand, conditioning on the complement of $E$, at any point during the game, and for any non-negative integer $\ell$, the probability that both vertices $u$ and $w$ have degree at least $\ell$ when playing according to $\mathcal{S}$ is at most the probability that both vertices have degree at least $\ell$ when playing according to $\mathcal{S}'$. In fact, if $u \neq w$, then after round $i+1$ the probability that $\min \{d_G(u), d_G(w)\} \geq \ell$ when playing according to $\mathcal{S}$ is equal to the probability that $\min \{d_G(u), d_G(w)\} \geq \ell+1$ when playing according to $\mathcal{S}'$. Hence, $\mathcal{S}'$ dominates $\mathcal{S}$ in this case as well.
\end{proof}
In the remainder of this subsection, a graph $H$ with vertex set $\{u_1, \ldots, u_t\}$ is said to be \textit{degree-dominated} by a graph $G$ with vertex set $\{v_1, \ldots, v_t\}$ if there exists a permutation $\pi : [t] \to [t]$ such that $d_H(u_i) \leq d_G(v_{\pi(i)})$ for every $1 \leq i \leq t$.
For the proof of Lemma \ref{lem:S_min}, we will need the following fact, which can be straightforwardly proved by induction.
\begin{observation}
\label{obs:degree_dominate}
Suppose that $H$ and $G$ are graphs on the same number of vertices, such that $G$ degree-dominates $H$. Let $X_H$ (respectively, $X_G$) be the random variable representing the number of rounds required for Builder to reach minimum degree $k$ when following the strategy $\mathcal{S}_{\min}$, starting from the graph $H$ (respectively, $G$). Then $X_G$ dominates $X_H$. The same holds for $\mathcal S^{\dagger}_{\min}$.
\end{observation}
\begin{proof}[Proof of Lemma \ref{lem:S_min}]
A round of the game played according to $\mathcal{S}_{\min}$ is considered to be a \textit{failure} if both of the following conditions are met.
\begin{enumerate}
\item [$(a)$] The vertex $v$ offered in this round is of minimum degree.
\item [$(b)$] $\mathcal S_{\min}$ instructs Builder to connect $v$ to itself in this round.
\end{enumerate}
We first show that the number of failures when playing according to $\mathcal{S}_{\min}$ is $o(n)$ w.h.p., and then we show how this implies the statement of the lemma.
Let $N$ denote the number of vertices of minimum degree in Builder's graph immediately before a given round begins. The probability that both (a) and (b) above hold is $\frac{N}{n} \cdot \frac{1}{N} = \frac{1}{n}$. Since $\mathcal{S}_{\min}$ always reaches minimum degree $k$ after at most $kn$ rounds, the expected number of failures is bounded by $kn / n = k$, and thus, by Markov's inequality, the total number of failures is w.h.p. $o(n)$.
Suppose now that we play a round of $\mathcal S_{\min}$ and of $\mathcal S^{\dagger}_{\min}$ in parallel, starting from the same graph $G$, and using the same source of randomness. Let $v$ be the vertex offered in this round. Observe that, conditioning on the event that this round is \textit{not} a failure for $\mathcal{S}_{\min}$, the distribution of the second vertex chosen in this round according to $\mathcal S_{\min}$ is identical to that of $\mathcal S^\dagger_{\min}$. On the other hand, suppose that this round is a failure, and one runs another round of $\mathcal S_{\min}$, increasing by one the degree of some vertex $u \neq v$ that was a minimum-degree vertex of $G$. The resulting graph, obtained by running two rounds of $\mathcal{S}_{\min}$ starting from $G$, degree-dominates any graph generated by one round of $\mathcal S^{\dagger}_{\min}$ starting from $G$. It thus follows, by Observation~\ref{obs:degree_dominate} and the fact that w.h.p. the number of failures is $o(n)$, that $\mathcal{S}_{\min}$ $(o(n), o(1))$-dominates $\mathcal{S}^{\dagger}_{\min}$.
\end{proof}
\subsection{Online $k$-connectivity}
In this subsection we prove Theorem~\ref{thm:OnkCon}. Note that the lower bound is an immediate corollary of Theorem~\ref{th::OnMinDegk}. For the upper bound, we utilize a slightly modified min-degree process.
Consider a multigraph process $\{G^*_{\min}(n,m)\}_{m \geq 0}$, which is defined exactly like the process $\{G'_{\min}(n,m)\}_{m \geq 0}$ except that, instead of choosing the first vertex among all vertices of minimum degree, we choose it among all vertices with the smallest number of distinct neighbors. These two processes are identical as long as there are no multiple edges. Once there are multiple edges, the neighborhood of an endpoint of such edges could have minimum size while the degree of that endpoint could be strictly larger than the minimum degree.
Consider the strategy $\mathcal{S}^*_{\min}$ defined as follows.
\textbf{Strategy $\mathcal S^{*}_{\min}$:} Whenever Builder is offered some vertex $v$, he connects it to a vertex $u$, chosen u.a.r. among all vertices of $[n] \setminus \{v\}$ that have the smallest number of distinct neighbors.
The following result is analogous to Proposition \ref{pro:G'min}, and its proof (which is omitted) is essentially the same as that of Proposition \ref{pro:G'min}.
\begin{proposition}\label{pro:G*min}
If $m = o \left(n^2 \right)$, then $H \sim G^*_{\min}(n,m)$ and $G \sim \mathcal S^*_{\min}(n, (1+o(1))m)$ can be coupled in such a way that w.h.p.\ $H \subseteq G$.
\end{proposition}
For every positive integer $k$, let $H^*_k = H^*_k(n)$ denote the hitting time for the property that every vertex of $G^*_{\min}(n,m)$ has at least $k$ distinct neighbors, i.e.
$$
H^*_k = \min \{m : |N(u)| \geq k \textrm{ for every } u \in V(G^*_{\min}(n,m))\}.
$$
Furthermore, for every positive integer $k$, let $H_k$ denote the hitting time for the property that the minimum degree of $G_{\min}(n,m)$ is at least $k$, i.e.
$$
H_k = \min \{m : \delta(G_{\min}(n,m)) \geq k\}.
$$
We stress that $G_{\min}$ refers here to the min-degree process that does not allow multiple edges (as opposed to $G'_{\min}$) or loops. Recall the notion of $(\ell,\varepsilon)$-domination from Subsection~\ref{subsec:online_min_deg_k}.
\begin{lemma}\label{lem::G*}
Fix a positive integer $k$. Then $H^{*}_k$ $(\log n, o(1))$-dominates $H_k$.
\end{lemma}
\begin{remark}
The $\log n$ term was chosen arbitrarily; it can be replaced with any function that tends to infinity with $n$.
\end{remark}
\begin{proof}
Consider a round of $G^{*}_{\min}$ to be a \textit{failure} if a multiple edge is chosen in this round.
For any multigraph $G$, let $simp(G)$ denote the simple graph $H$ such that $uv$ is an edge of $H$ if and only if $uv$ appears at least once as an edge of $G$.
It suffices to prove the following two statements.
\begin{enumerate}
\item For any multigraph $G$, the following two edge distributions are identical.
\begin{enumerate}
\item The edge distribution of a single round of $G_{\min}$ starting from $simp(G)$.
\item The edge distribution of a single round of $G^*_{\min}$ starting from $G$, conditioned on the event that this round is not a failure.
\end{enumerate}
\item For any fixed $k$, the number of failures of $G^{*}_{\min}$ until the point that any vertex in the generated graph has at least $k$ distinct neighbors is w.h.p.\ at most $\log n$.
\end{enumerate}
We start by proving the first statement. Note that a vertex $v$ has minimum degree in $simp(G)$ if and only if it has a minimum number of distinct neighbors in $G$; let $V_{\min}$ denote the set of all such vertices. Let $N = |V_{\min}|$ and let $\delta$ denote the degree of the vertices of $V_{\min}$ in $simp(G)$. The probability that an edge $uv$ is chosen according to each of the two distributions is $|\{u, v\} \cap V_{\min}| / (N (n-1-\delta))$, so the distributions are indeed identical.
To prove the second statement, observe that the probability for a multiple edge to be chosen in a single round of $G^*_{\min}$ is bounded from above by $k / (n-1)$, and thus the total expected number of failure rounds in $G^{*}_{\min}(n, m)$ is $O(k^2)$ as long as $m = O(kn)$. Putting, say, $m = nk+\log{n}$, we deduce by Markov's inequality that w.h.p.\ (for fixed $k$) the total number of failures in $G^{*}_{\min}(n, m)$ is bounded from above by $\log n$. Conditioning on this event, we know that the generated graph had at least $m - \log{n} = kn$ successful rounds among its first $m$ rounds. Hence, it must already hold that any vertex has at least $k$ distinct neighbors at this point, concluding the proof.
\end{proof}
Building on Proposition~\ref{pro:G*min} and Lemma~\ref{lem::G*}, Theorem~\ref{thm:OnkCon} is now an immediate corollary of the following result which strengthens a result from~\cite{KKRL}; our proof builds on their method.
\begin{theorem} \label{th::k-con}
Let $k \geq 3$ be a fixed integer. Then w.h.p. $G^*_{\min}(n, \alpha n)$ is $k$-connected if $\alpha > h_k$ and is not $k$-connected if $\alpha < h_k$.
\end{theorem}
\begin{remark}
It follows from the results in~\cite{KKRL} that $k$ cannot be taken to be smaller than $3$ in Theorem~\ref{th::k-con}.
\end{remark}
In the proof of Theorem~\ref{th::k-con} we will make use of the following auxiliary lemma.
\begin{lemma} \label{lem::smallSets}
Let $k$ be a positive integer and let $G \sim G^*_{\min}(n, \alpha n)$, where $\alpha \leq k$. Then w.h.p. $e_G(A) \leq |A|$ for every set $A \subseteq [n]$ of size $1 \leq |A| \leq 101 k^2$.
\end{lemma}
\begin{proof}
Fix some integer $k \geq 1$ and let $M = 200 k^2$. For every $1 \leq t \leq M$, a round of the process is said to be of \emph{type $t$} if at the start of that round, the number of vertices whose neighborhood is of minimum size is larger than $n^{(t-1)/M}$ and is at most $n^{t/M}$. Since $\alpha \leq k$, it follows that $\delta(G) \leq 2k$ holds throughout the process. Moreover, it follows by the description of the process that in every round we increase the number of neighbors of some vertex whose neighborhood is of minimum size or we choose a multiple edge. Since, by the proof of Lemma~\ref{lem::G*}, w.h.p. there are at most $\log n$ rounds in which we choose a multiple edge, it follows that w.h.p., throughout the process there are at most $2k n^{t/M} + \log n \leq 3k n^{t/M}$ rounds of type $t$ for every $1 \leq t \leq M$.
Fix an integer $1 \leq i \leq 101 k^2$ and a set $A \subseteq [n]$ of size $i$. We would like to bound from above the probability that $e_G(A) \geq i+1$. Let $S_i = \left\{(s_1, \ldots, s_M) \in {\mathbb N}^M : \sum_{t=1}^M s_t = i+1 \right\}$. For every $\bar{s} = (s_1, \ldots, s_M) \in S_i$, let $p_{i,\bar{s}}$ denote the probability that, for every $1 \leq t \leq M$, at least $s_t$ edges with both endpoints in $A$ were claimed during rounds of type $t$. Then
\begin{eqnarray*}
Pr(e_G(A) \geq i+1) &\leq& \sum_{\bar{s} \in S_i} p_{i, \bar{s}} \\
&\leq& \sum_{(s_1, \ldots, s_M) \in S_i} \prod_{t=1}^M \binom{3k n^{t/M}}{s_t} \left(\frac{i}{n^{(t-1)/M}} \right)^{s_t} \left(\frac{i}{n} \right)^{s_t}\\
&\leq& \sum_{(s_1, \ldots, s_M) \in S_i} c_k n^{\sum_{t=1}^M (t s_t/M - (t-1) s_t/M - s_t)} = c'_k n^{(i+1)(1/M - 1)},
\end{eqnarray*}
where $c_k$ and $c'_k$ are appropriate constants, depending on $k$ but not on $n$.
A union bound over all relevant choices of $A$ then shows that the probability that there exists a set $A \subseteq [n]$ such that $|A| = i$ for some $1 \leq i \leq 101 k^2$ and $e_G(A) \geq i+1$ is at most
$$
\sum_{i=1}^{101 k^2} \binom{n}{i} c'_k n^{(i+1)(1/M - 1)} \leq \sum_{i=1}^{101 k^2} c'_k n^{i + (i+1)(1/M - 1)} \leq c'_k\cdot 101 k^2 \cdot n^{-1} \cdot n^{(101 k^2 + 1)/(200 k^2)} = o(1),
$$
where the last inequality holds since $M = 200 k^2$ and $i \leq 101 k^2$.
\end{proof}
We are now in a position to prove Theorem~\ref{th::k-con}.
\begin{proof}
If $G$ is $k$-connected, then, in particular, every vertex of $G$ has at least $k$ distinct neighbors. It follows by the definitions of $H^*_k$ and $h_k$ that w.h.p. $G^*_{\min}(n, \alpha n)$ is not $k$-connected if $\alpha < h_k$. Assume then that $\alpha > h_k$. Since $k$-connectivity is a monotone increasing property and $h_k < k$, we can assume that $\alpha \leq k$. In order to prove that $G \sim G^*_{\min}(n, \alpha n)$ is w.h.p. $k$-connected, we will show that the probability that there exist pairwise disjoint sets $S, R$, and $T$ such that $[n] = S \cup R \cup T$, $|R| = k-1$, $1 \leq |S| \leq |T|$, and $E_G(S,T) = \emptyset$, tends to $0$ as $n$ tends to infinity. Since, by the definitions of $H^*_k$ and $h_k$, w.h.p. every vertex of $G$ has at least $k$ distinct neighbors, we can restrict our attention to the case $|S| \geq 2$.
Fix a triple $S, R, T$ as above, where $|S| = s$ for some $2 \leq s \leq 100 k^2$. Let $A = S \cup R$ and observe that $|A| = s + k - 1 \leq 101 k^2$. It follows by Lemma~\ref{lem::smallSets} that w.h.p. $e_G(A) \leq s + k - 1$ and $e_G(S) \leq s$. Since, moreover, w.h.p. every vertex of $G$ has at least $k$ distinct neighbors, if $E_G(S,T) = \emptyset$, then $e_G(A) \geq k s - s$. Since $k \geq 3$ and $s \geq 2$, this is a contradiction unless $k = 3$ and $s = 2$. In the latter case each of the two vertices of $S$ must be adjacent to all three remaining vertices of $A$, so $|A| = 4$ and $e_G(A) \geq 5$, which again contradicts Lemma~\ref{lem::smallSets}.
Now, fix a triple $S, R, T$ as above, where $100 k^2 \leq |S| \leq (n-k+1)/2$. A round of the process is said to be \emph{bad} if, in that round, the first vertex is chosen from $R$, \emph{good} if it is chosen from $T$, and \emph{great} if it is chosen from $S$. It suffices to prove that the probability that no edges between $S$ and $T$ were claimed in any round which is not bad is $o(1)$. Since $\alpha \leq k$, it follows that $\delta(G) \leq 2k$ holds throughout the process. Since, moreover, the first vertex chosen in every round has the least number of distinct neighbors and there are at most two edges between any pair of vertices by Lemma~\ref{lem::smallSets}, there can be at most $2 \cdot 2 k |R| \leq 4k^2$ bad rounds. Let $X_S$ be the random variable which counts the number of great rounds. Since $\delta(G) \geq k$ holds w.h.p. at the end of the process, and there are at most $4k^2$ bad rounds, if $E_G(S,T) = \emptyset$, then $X_S \geq k |S|/2 - 2k^2$. Therefore, the probability that $S, R, T$ as above exist is at most
\begin{eqnarray} \label{eq::largeSets}
&& \sum_{s = 100 k^2}^{(n-k+1)/2} \sum_{i = k s/2 - 2 k^2}^{\alpha n} \binom{n}{s} \binom{n}{k-1} Pr(X_S = i) Pr(E_G(S,T) = \emptyset \; | \; X_S = i) \nonumber \\
&\leq& n^{k-1} \sum_{s = 100 k^2}^{(n-k+1)/2} \sum_{i = k s/2 - 2k^2}^{\alpha n} \binom{n}{s} \left(\frac{s+k-2}{n-1} \right)^i \left(\frac{n-s-1}{n-1} \right)^{\alpha n - i - 4 k^2}
\end{eqnarray}
It follows from Stirling's formula that $\binom{n}{s} \leq \frac{n^n}{s^s (n-s)^{(n-s)}}$ for every $n$ and $s$. Hence, a straightforward calculation shows that
$$
\binom{n}{s} \left(\frac{s+k-2}{n-1} \right)^s \left(\frac{n-s-1}{n-1} \right)^{\alpha n - s - 4 k^2} \leq e^k \left(1 - \frac{s}{n}\right)^{(\alpha - 1) n - 4 k^2} < e^{-s/2},
$$
where the last inequality holds since $h_k \geq h_3 > 1.7$ holds for every $k \geq 3$. Therefore~\eqref{eq::largeSets} can be bounded from above by
\begin{eqnarray*}
&& n^{k-1} \sum_{s = 100 k^2}^{(n-k+1)/2} \sum_{i = k s/2 - 2k^2 - s}^{\alpha n - s} \left(\frac{s+k-2}{n-s-1} \right)^i e^{-s/2} \\
&\leq& n^{k-1} \sum_{s = 100 k^2}^{\log^2 n} \sum_{i = k s/2 - 2k^2 - s}^{\alpha n - s} \left(\frac{s+k-2}{n-s-1} \right)^i + \alpha n^k \sum_{s = \log^2 n}^{(n-k+1)/2} e^{-s/2} \\
&\leq& \alpha n^k \log^2 n \left(\frac{2 \log^2 n}{n} \right)^{48 k^2} + \alpha n^{k+1} e^{- \log^2 n/2} = o(1),
\end{eqnarray*}
where the last inequality holds for every $k \geq 3$.
\end{proof}
\section{Concluding remarks and open problems}\label{sec::openprob}
In this paper we have initiated the research on the semi-random graph process, leading to many intriguing open questions. We mention just a few of the possible directions for future research.
\medskip
\noindent \textbf{Online fixed graph.} Let $H$ be an arbitrary fixed graph and let $d$ be the degeneracy of $H$. It is proved in Theorem~\ref{th::fixedGraphUpperBound} that w.h.p. $\tau({\mathcal P}_H, n) = O(n^{(d-1)/d})$. For the special case $H = K_{d+1}$, a lower bound of the same order of magnitude is proved in Theorem~\ref{thm:lowerCliqueOnLine}. We believe that such a bound holds for any graph $H$.
\begin{conjecture} \label{conj::fixedGraph}
Let $H$ be an arbitrary fixed graph and let $d$ be the degeneracy of $H$. Then w.h.p. $\tau({\mathcal P}_H, n) = \Theta(n^{(d-1)/d})$.
\end{conjecture}
Note that the assertion of Conjecture~\ref{conj::fixedGraph} is trivially true for $d=1$, that is, when $H$ is a forest.
\medskip
\noindent \textbf{Perfect matching.} Recall that $\mathcal{PM}$ denotes the property of containing a perfect matching. It follows from our results that w.h.p.
\begin{equation} \label{eq::PM}
(\ln 2 + o(1)) n = \tau'(\mathcal{PM}, n) \leq \tau(\mathcal{PM}, n) \leq (1 + 2/e + o(1)) n,
\end{equation}
where the last inequality holds by Corollary~\ref{cor:KoutBip}. On the other hand, the equality is a simple corollary of Proposition~\ref{prop:FindingH}. Indeed, let $H$ be a matching consisting of $n/2$ edges (for convenience, we will assume that $n$ is even). Observe that in every orientation of $H$ there are precisely $n/2$ vertices of out-degree $1$ and precisely $n/2$ vertices of out-degree $0$. Therefore, using the notation of Subsection \ref{sec::BallsBins}, we have $\tau'(\mathcal{PM}, n) = m(H) = \min \{m : X_0^m \leq n/2\}$. The required equality now follows by~\eqref{e:exactly-k} and since $X_0^m$ is concentrated around its mean by Lemma~\ref{lem::Balls}. We believe that neither the lower nor the upper bound in~\eqref{eq::PM} is tight. It would be interesting to close or at least reduce the gap between these two bounds.
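The constant $\ln 2$ can be checked numerically; a minimal sketch, assuming the standard balls-into-bins estimate that $X_0^m$ concentrates around $n e^{-m/n}$ for $m = cn$, so that the threshold $X_0^m \le n/2$ becomes $e^{-c} = 1/2$:

```python
import math

# Sketch, assuming the standard balls-into-bins estimate: after m = c*n rounds,
# the number X_0^m of vertices never offered concentrates around n * exp(-c).
# The threshold X_0^m <= n/2 then reads exp(-c) = 1/2, solved here by bisection.
def empty_fraction(c):
    return math.exp(-c)

lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if empty_fraction(mid) > 0.5:
        lo = mid
    else:
        hi = mid

print(lo)  # ~0.693147..., i.e. ln 2
```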
\medskip
\noindent \textbf{Hamilton cycle.} Recall that $\ch$ denotes the property of admitting a Hamilton cycle. Similarly to the case of a perfect matching, it is not hard to show that w.h.p.
\begin{equation} \label{eq::HAMoff}
\tau'(\ch, n) = (\alpha + o(1)) n,
\end{equation}
where $\alpha = 1.14619...$ is the unique positive real number satisfying $1 = (2 + \alpha) e^{-\alpha}$ (it is straightforward to verify that there exists a unique positive real number which satisfies this equation). The equality~\eqref{eq::HAMoff} is a simple corollary of Proposition~\ref{prop:FindingH}. Indeed, let $H$ be a cycle of length $n$. Observe that in every orientation of $H$, there are $r$ vertices of out-degree $1$, $(n-r)/2$ vertices of out-degree $2$, and $(n-r)/2$ vertices of out-degree $0$, for some integer $0 \leq r \leq n$. Therefore, a necessary and sufficient condition for Builder to construct a Hamilton cycle is $\sum_{k=2}^n X_k^m \geq (n - X_1^m)/2$ which is equivalent to $n - 2 X_0^m - X_1^m \geq 0$. Setting $m = (c + o(1)) n$ and using~\eqref{e:exactly-k} and Lemma~\ref{lem::Balls}, we conclude that w.h.p. the aforementioned necessary and sufficient condition holds for $c$ which satisfies $0 = (1 - 2 e^{-c} - c e^{-c}) n$ as required.
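The constant $\alpha$ can be located numerically, since $(2 + a)e^{-a}$ is strictly decreasing for $a > 0$; a minimal bisection sketch:

```python
import math

# The offline Hamiltonicity constant alpha is the unique positive root of
# 1 = (2 + a) * exp(-a); the function f below is strictly decreasing on a > 0,
# so bisection applies.
def f(a):
    return (2 + a) * math.exp(-a) - 1

lo, hi = 0.5, 2.0  # f(0.5) > 0 and f(2.0) < 0
for _ in range(80):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid

alpha = (lo + hi) / 2
print(alpha)  # ~1.14619
```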
For the online Hamilton cycle game we have the following bounds (which hold w.h.p.):
\begin{equation} \label{eq::HAMon}
(h_2 - o(1)) n \leq \tau(\ch, n) \leq (3 + o(1)) n,
\end{equation}
where $h_2 = \log 2 + \log (1 + \log 2) \approx 1.219736$ (as explicitly calculated in \cite{Wormald2}). Indeed, the lower bound holds by Theorem~\ref{th::OnMinDegk} since minimum degree at least $2$ is a trivial necessary condition for Hamiltonicity. On the other hand, the upper bound holds by Corollary~\ref{cor:Kout} and the well-known result asserting that a random graph generated by the $3$-out model is w.h.p.\@ Hamiltonian~\cite{bohman2009hamilton}. It would be interesting to close or at least reduce the gap between the lower and upper bounds in~\eqref{eq::HAMon}.
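The value of $h_2$ is straightforward to verify numerically:

```python
import math

# h_2 = log 2 + log(1 + log 2): the lower-bound constant for minimum degree 2.
h2 = math.log(2) + math.log(1 + math.log(2))
print(h2)  # ~1.219736
```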
\medskip
\noindent \textbf{Bounded degree graphs.} Let $H$ be a graph with vertex set $[n]$ and with bounded maximum degree. Observe that $\tau({\mathcal P}_H, n) \leq (1 + o(1)) n \log n$. Indeed, Builder can construct $H$ as follows. For every $1 \leq i \leq n$, let $j_1, \ldots, j_{d_i}$ be an arbitrary ordering of the neighbors of $i$ in $H$. In each round, if Builder is offered some vertex $i$ for the $r$th time for some $1 \leq r \leq d_i$, then Builder claims the edge $i j_r$, otherwise he claims an arbitrary edge. Using this strategy, it is evident that $\tau({\mathcal P}_H, n) \leq m$, where $m$ is the smallest integer such that, during the first $m$ rounds, every $1 \leq i \leq n$ is offered at least $\Delta(H)$ times. Using~\eqref{e:exactly-k} and Lemma~\ref{lem::Balls}, a straightforward calculation shows that $m = (1 + o(1)) n \log n$.
Note that $\tau({\mathcal P}, n) = O(n)$ holds for every property ${\mathcal P}$ we considered in this paper. This observation has led Noga Alon to ask us the following question.
\begin{question} \label{q::linearUpperBound}
Is there a graph $H$ on $n$ vertices with bounded maximum degree such that $\tau({\mathcal P}_H, n) = \omega(n)$?
\end{question}
\medskip
Another possible direction of future research is the study of natural variations of our process. These include the following:
\medskip
\noindent \textbf{A digraph process.} Consider the same semi-random graph process, except that whenever Builder claims an edge, he must orient it from the vertex he was offered to the vertex he chose. His goal now is to build a digraph which satisfies some predetermined increasing property as soon as possible. Consider for example the aim of building a directed Hamilton cycle. Let $H$ be an undirected cycle on $n$ vertices and let $D_1$ and $D_2$ be orientations of $H$ such that in $D_1$ the out-degree of every vertex is $1$ and in $D_2$ the out-degree of every vertex is either $0$ or $2$ (assume for convenience that $n$ is even). It is not hard to see that $\tau'({\mathcal P}_{D_1}, n) = \tau({\mathcal P}_{D_1}, n) = (1 + o(1)) n \log n$. Both equalities follow from the fact that Builder can construct $D_1$ as soon as every vertex is offered at least once but not sooner. Indeed, if some vertex is never offered, then its out-degree in Builder's graph will be $0$ (both in the offline and in the online games). On the other hand, in the online game, Builder can play as follows: for every $1 \leq i \leq n$, the first time vertex $i$ is offered, Builder connects it to vertex $(i \mod n) + 1$; in any other round he plays arbitrarily. This proves the first equality. Using~\eqref{e:exactly-k} and Lemma~\ref{lem::Balls}, a straightforward calculation proves the second equality. In light of~\eqref{eq::HAMoff} and~\eqref{eq::HAMon}, this shows that, in general, the digraph process behaves very differently than the graph process. Now, consider constructing $D_2$. Similarly to the case of an undirected Hamilton cycle, one can show that $\tau'({\mathcal P}_{D_2}, n) = \Theta(n)$. This shows that (at least in the offline case) the digraph process in which Builder aims to build some digraph $D$ might really depend on $D$ and not just on its underlying undirected graph. As for the online case, the following question seems plausible.
\begin{question} \label{q::DirectedHamCycle}
Is it true that for every $\varepsilon > 0$ there exist constants $C$ and $n_0$ such that $\tau({\mathcal P}_D, n) \leq C n$ holds for every $n \geq n_0$ and every orientation $D$ of the $n$-cycle $H$ in which the number of vertices of out-degree $0$ is at least $\varepsilon n$?
\end{question}
\medskip
\noindent \textbf{Non-uniform sampling.} In the process we studied in this paper, the vertex Builder was offered in every round was chosen u.a.r. One could also study a similar process where the vertices Builder is offered are chosen according to some other probability distribution (which can differ between rounds). For example, consider the following random process which was studied in~\cite{RW1, RW2, RW3}. For a positive integer $d$, the random $d$-process $\{G_i\}_{i=0}^N$, where $N = \lfloor n d/2 \rfloor$, is defined as follows. $G_0$ is the empty graph on $n$ vertices and, for every $i \geq 0$, $G_{i+1} = G_i \cup e_{i+1}$, where $e_{i+1} = uv$ is chosen u.a.r.~from the set of all non-edges of $G_i$ for which $\max \{d_{G_i}(u), d_{G_i}(v)\} < d$. While we cannot use our process as is to approximate the random $d$-process, we can easily do so if the vertices Builder is offered in every round are chosen u.a.r.~from the set of all vertices whose degree is strictly smaller than $d$. Another example of a random graph process we can approximate by offering Builder vertices according to an appropriate probability distribution is the min-min random graph process~\cite{CK}. One can of course consider various probability distributions for the aforementioned digraph process as well.
\medskip
\noindent \textbf{Delaying increasing graph properties.} In this paper, we studied $\tau({\mathcal P}, n)$ (and, similarly, $\tau'({\mathcal P}, n)$ for the offline game) which is the \textbf{smallest} number of rounds in the online game Builder needs in order to build a graph on $n$ vertices which satisfies the increasing graph property ${\mathcal P}$. Instead, we can have Builder try to avoid satisfying ${\mathcal P}$ for as long as possible. Formally, we define $T({\mathcal P}, n)$ (and, similarly, $T'({\mathcal P}, n)$ for the offline game) to be the \textbf{largest} number of rounds in the online game for which Builder can maintain a graph on $n$ vertices which does not satisfy the increasing graph property ${\mathcal P}$. Note that in order for this to make sense, we can no longer allow Builder to create loops or multiple edges. Consider for example the property ${\mathcal P}_t$ of containing a connected component on at least $t$ vertices. It is trivial that $\tau({\mathcal P}_t, n) = \tau'({\mathcal P}_t, n) = t-1$. On the other hand, studying $T({\mathcal P}_t, n)$ (and to some extent also $T'({\mathcal P}_t, n)$) seems to have merit, especially with relation to the phase transition in the size of the largest component. Another interesting example is the property ${\mathcal P}_{\Delta}$ of being triangle-free. The problem of determining $T({\mathcal P}_{\Delta}, n)$ and $T'({\mathcal P}_{\Delta}, n)$ is related to classical problems in extremal graph theory and to other restricted random graph processes (see, e.g.,~\cite{Bohman, BK, FGM}).
\section*{Acknowledgements}
We would like to thank the anonymous referees for helpful comments, Peleg Michaeli for suggesting the study of this model, and Noga Alon and Michael Krivelevich for helpful discussions. The research on this project was initiated during a joint research workshop of Tel Aviv University and the Free University of Berlin on Positional Games and Extremal Combinatorics, held in Berlin in 2016; we would like to thank both institutions for their support.
\section{Introduction}
Revenue maximization in online advertising is an important line of development for leading Internet companies (such as real-time ad exchanges~\cite{2015-ManagSci-Balseiro}, search engines~\cite{2013-IJCAI-He}, and social networks), where a large part of ad inventory is sold via widely applicable second-price auctions~\cite{2013-IJCAI-He,2014-ICML-Mohri}, including their generalizations GSP~\cite{2014-ECRA-Sun} and VCG~\cite{1981-MOR-Myerson}.
The optimization of revenue in these auctions is mostly controlled by means of reserve prices, whose proper setting is studied both by game-theoretical methods~\cite{1981-MOR-Myerson,2009-Book-Krishna} and by machine learning approaches~\cite{2007-Book-Nisan,2013-SODA-Cesa-Bianchi,2014-ECRA-Sun,2014-ICML-Mohri,2017-NIPS-Medina,2017-WWW-Drutsa}.
A large number of online auctions, for example in ad exchanges, involve only a single buyer~\cite{2013-NIPS-Amin,2014-NIPS-Mohri,2014-NIPS-Amin,2017-WWW-Drutsa}; in this case, a second-price auction with reserve reduces to a \emph{posted-price auction}~\cite{2003-FOCS-Kleinberg}, where the seller sets a reserve price for a good (e.g., an advertisement space) and the buyer decides whether to accept or reject it (i.e., to bid above or below the price).
In our study, we focus on the scenario in which the seller \emph{repeatedly} interacts through a posted-price mechanism with the \emph{same} strategic buyer, who holds a \emph{fixed} private valuation for a good and seeks to maximize his cumulative surplus.
At each round of this game, the seller is able to choose the price based on the previous decisions of the buyer: he applies a deterministic online learning algorithm that is announced to the buyer in advance~\cite{2014-NIPS-Mohri}.
While previous studies on this scenario~\cite{2013-NIPS-Amin,2014-NIPS-Mohri,2017-WWW-Drutsa} provide the seller with pricing algorithms that guarantee \emph{lower bounds} on his cumulative revenue for any buyer valuation (via worst-case strategic regret minimization), we search for pricing algorithms that \emph{exactly} maximize the \emph{expectation} of the seller's cumulative revenue over a given distribution of buyer valuations.
The cumulative utilities (surplus for the buyer and revenue for the seller) are defined as discounted sums of the corresponding instant utilities gained at each round, which allows us to cover a wide range of games (including games with an infinite number of rounds and finite games without discounting).
We begin our study by addressing the case in which both the seller and the buyer have the same discount.
We show that the constant pricing algorithm with the Myerson price $p^\ast=\mathop{\mathrm{argmax}}_pH_D(p),$ where $ H_D(p) = p \cdot \mathbb{P}_{V\sim D}[V\ge p]$ and $D$ is the valuation distribution, maximizes our optimization objective (see Theorem~\ref{maintheorem}).
This result tells us that any dynamic learning of prices based on the previous decisions of the buyer can\emph{not} increase the expected cumulative revenue of the seller with respect to a much simpler approach that offers the optimal constant price over all rounds.
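As a quick numerical illustration of the Myerson price, the sketch below uses a hypothetical $\mathrm{Exp}(1)$ valuation distribution (not one appearing in the paper), for which $H_D(p) = p\,e^{-p}$ and $p^\ast = 1$:

```python
import math

# Hypothetical illustration: for an Exp(1) valuation distribution (not one used
# in the paper), H_D(p) = p * P[V >= p] = p * exp(-p), maximized at p* = 1.
def H(p):
    return p * math.exp(-p)

# Plain grid search for the Myerson price p* = argmax H_D(p).
grid = [i / 10000 for i in range(100001)]
p_star = max(grid, key=H)
print(p_star)  # ~1.0 (with H(p*) = 1/e)
```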
Further, we show that the above-mentioned optimal pricing is not unique.
Namely, there exists an optimal pricing algorithm (referred to as ``big deal'') that proposes the following choice to the buyer: pay a large price at the first round and get all goods in the subsequent rounds for free, or otherwise get nothing (see Prop.~\ref{prop_non_uniqu}).
The same discount for both participants of the game assumes that we do not give any advantage to each of them over the other one. However, in many real applications, there exists an imbalance between the sides in the patience to wait for utility. This asymmetry is often modeled by different discounts for them~\cite{2013-NIPS-Amin,2014-NIPS-Amin,2014-NIPS-Mohri}.
In our work, we address both the case of less patient seller and the case of less patient buyer.
First, in the case when the buyer's discount rate is larger than the seller's, we find that the algorithm ``big deal'' with a specific price at the first round can still be effectively applied by the seller (i.e., with an optimal outcome). Namely, it allows the seller to ``accumulate'' all his revenue at the first round and, in this way, to avoid the uncomfortable discounting in the future rounds; this discounting makes the constant algorithm with Myerson's price suboptimal (see Sec.~\ref{sec_NonEqualDisc_BgS}).
Second, in the inverse case, when the buyer's discount rate is lower than the seller's, the optimization problem becomes surprisingly more complicated. In this case, we reduce it to the optimization of a bilinear form in ${\mathbf{v}} = \{v_j\}_j$ and $\{\mathbb{P}_{V\sim D}[V \ge v_j]\}_j$ (see Theorem~\ref{th_problem_equivalence}).
This functional constitutes a multivariate analogue of the one-dimensional function $H_D(p)$ widely used in static auctions to find the optimal pricing.
Our reduction does not admit a closed-form solution in general, but it allows us to find the optimal algorithm by means of state-of-the-art numerical optimization techniques (e.g., gradient methods).
In contrast to the previous cases, the optimal algorithm in this case of less patient buyer is non-trivial and its prices depend on both the valuation distribution and the discounts.
Finally, we numerically solve the above-mentioned reduced problem for a series of representative discounts and analyze the properties of the obtained optimal algorithms (see Sec.~\ref{sec_NonEqualDisc_BlS}). In this way, we show, in particular, that an optimal algorithm may be non-consistent\footnote{A consistent algorithm never sets prices lower (higher) than earlier accepted (rejected, resp.) ones.} and may provide revenue larger than that of the constant algorithm with Myerson's price.
The most important conclusion is the following.
Only in the case of equal discounts is the seller unable to advantageously use the ability to change prices in a dynamic fashion (i.e., to learn them) w.r.t.\ the static approach. But, both in the case when the seller is far more willing to wait for revenue than the buyer and, more surprisingly, in the inverse case, the seller can boost his revenue w.r.t.\ the one obtained by the optimal constant algorithm.
Overall, the above-described thorough study of optimal pricing algorithms for repeated auctions with different discounts constitutes the main contribution of our work.
The ideas behind our techniques of theoretical analysis are simple and, to the best of our knowledge, novel; they might thus serve as a foundation for future studies of repeated auctions, e.g., ones with multiple buyers.
\section{Preliminaries, problem statement and related work}
\label{sec_Prelim}
\subsection{Setup of repeated posted-price auctions}
\label{subsec_Setup}
We consider the following standard mechanism of \emph{repeated posted-price auctions}~\cite{2013-NIPS-Amin,2014-NIPS-Mohri,2016-SSRN-Chen,2017-WWW-Drutsa,2017-ArXiV-Drutsa}.
The seller repeatedly proposes goods (e.g., advertisement spaces) to a single buyer over a sequence of rounds (one good per round).
The buyer holds a \emph{fixed private valuation} $v \in [0; +\infty)$ for a good, i.e., the valuation $v$ is unknown to the seller and is equal for goods offered in all rounds.
At each round $t \in \mathbb{N}$, the seller offers a price $p_t$ for a good, and the buyer makes his allocation decision $a_t \in \{0, 1\}$: to buy the currently offered good ($a_t = 1$), or not ($a_t = 0$).
In our setting, the seller's price $p_t$, $t\in\mathbb{N}$, depends on the previous answers $a_1, \ldots, a_{t - 1}$ of the buyer (a.k.a.\ the history up to round $t$), i.e., the seller uses \emph{a pricing algorithm} $\mathcal{A}$ to set prices in a deterministic online learning manner~\cite{2013-NIPS-Amin,2014-NIPS-Mohri,2017-WWW-Drutsa}.
The sequence of the buyer's answers is denoted by $\mathbf{a} = \{a_t\}_{t = 1}^\infty$ and is referred to as \emph{a buyer strategy}.
Hence, given an algorithm $\mathcal{A}$ and a strategy $\mathbf{a}$, the price sequence $\{p_t\}_{t = 1}^\infty$ is uniquely determined.
The \emph{instant surplus} $a_t (v - p_t)$ and the \emph{instant revenue} $a_t p_t$ are thus gained by the buyer and the seller, respectively, at each round $t\in\mathbb{N}$.
An instant surplus (or revenue) obtained in different rounds may contribute differently to the total (cumulative) profit of the buyer (or the seller, respectively).
We model this by discount factors $\gamma^\mathtt{B}_t$ and $\gamma^\mathtt{S}_t$ at each round $t\in\mathbb{N}$ and get \emph{the total discounted surplus} and \emph{the total discounted revenue} of the following form:
\vspace{-0.2cm}
\begin{equation}
\label{eq_Sur_Rev}
\mathrm{Sur}_{{\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, v, \mathbf{a}) := \sum_{t = 1}^{\infty}\gamma^\mathtt{B}_t a_t (v - p_t) \quad \text{and}\quad \mathrm{Rev}_{{\boldsymbol\g}^\mathtt{S}}(\mathcal{A}, \mathbf{a}) := \sum_{t = 1}^{\infty} \gamma^\mathtt{S}_t a_t p_t , \quad \hbox{respectively.}
\vspace{-0.2cm}
\end{equation}
We assume that \emph{the discount sequences} ${\boldsymbol\g}^\mathtt{B} = \{\gamma^\mathtt{B}_t\}_{t = 1}^\infty$ and ${\boldsymbol\g}^\mathtt{S} = \{\gamma^\mathtt{S}_t\}_{t = 1}^\infty$ are non-negative, $\gamma^\mathtt{B}_{t},\gamma^\mathtt{S}_{t}\ge0$, $\:\forall t\in\mathbb{N}$, and summable: $\Gamma^\mathtt{B}\!\!:=\!\!\sum_{t = 1}^{\infty} \gamma^\mathtt{B}_t < \infty$ and $\Gamma^\mathtt{S}\!\!:=\!\!\sum_{t = 1}^{\infty} \gamma^\mathtt{S}_t < \infty$. We also assume that there are no zeros between positive numbers in the sequences ${\boldsymbol\g}^\mathtt{B}$ and ${\boldsymbol\g}^\mathtt{S}$.
Note that discounts allow us to consider a general setting, which covers a wide range of cases including finite games without discounting (i.e., $\gamma^\mathtt{B}_t = \gamma^\mathtt{S}_t=\mathbb{I}_{\{t\le T\}}$\footnote{$\mathbb{I}_{B}$ denotes the indicator of the condition $B$, i.e., $\mathbb{I}_{B} = 1$, when $B$ holds, and $0$, otherwise.} for some horizon $T\in\mathbb{N}$) and infinite games with
discount rates that decrease geometrically (i.e., $\gamma^\mathtt{B}_t = \gamma^\mathtt{S}_t=\gamma^{t-1}$ for some $\gamma\in(0,1)$)~\cite{2013-NIPS-Amin}.
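As a sanity check of the definitions in Eq.~(\ref{eq_Sur_Rev}), the following sketch evaluates the discounted sums for a geometric discount and hypothetical price/answer sequences (all numbers are illustrative only):

```python
# Toy evaluation of the discounted sums defined above, with a geometric
# discount gamma^(t-1) for both sides and hypothetical price/answer sequences.
gamma = 0.9
v = 1.0
prices = [0.8, 0.8, 0.8, 0.8]
answers = [1, 0, 1, 1]  # the buyer's accept/reject decisions a_t

surplus = sum(gamma ** t * a * (v - p)
              for t, (a, p) in enumerate(zip(answers, prices)))
revenue = sum(gamma ** t * a * p
              for t, (a, p) in enumerate(zip(answers, prices)))
print(surplus, revenue)
```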
Both the seller and the buyer may have the same discount (${\boldsymbol\g}^\mathtt{B} = {\boldsymbol\g}^\mathtt{S}$), which is a reasonable assumption since it does not give any privilege to either party over the other. For instance, money inflation, a common interpretation of the discount factor, equally affects the preferences of both participants for current gains versus future ones.
The case when the discounts are different (${\boldsymbol\g}^\mathtt{B} \neq {\boldsymbol\g}^\mathtt{S}$) is important for real applications as well~\cite{2013-NIPS-Amin}.
The discounting can also be considered as a model for the participants' uncertainty about the total number of rounds of their interaction (i.e., the factor $\gamma_t$ is the a priori probability that the repeated auctions will last exactly $t$ rounds).
Following a standard assumption in mechanism design, which matches the practice in ad exchanges~\cite{2014-NIPS-Mohri}, the pricing algorithm $\mathcal{A}$ used by the seller \emph{is announced to the buyer in advance}~\cite{2013-NIPS-Amin,2017-WWW-Drutsa}. In this case, the buyer is able to act strategically against this algorithm, i.e., to choose \emph{the optimal strategy} $\mathbf{a}^{Opt}(\mathcal{A}, v, {\boldsymbol\g}^\mathtt{B})$ in the set of all possible strategies $\mathfrak{S} := \{0, 1\}^\mathbb{N}$: $ \mathbf{a}^{Opt}(\mathcal{A}, v, {\boldsymbol\g}^\mathtt{B}) = \mathop{\mathrm{argmax}}_{\mathbf{a} \in \mathfrak{S}} \mathrm{Sur}_{{\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, v, \mathbf{a})$\footnote{\label{fn_SRev}We show existence of the maximum in Appendix~\ref{app_subsec_optstrat_exist}. If there is a tie, i.e., more than one optimal strategy, the buyer selects one of them arbitrarily (as in~\cite{1981-MOR-Myerson,2009-Book-Krishna}).}.
This leads us to the definition of the \emph{strategic revenue} of the pricing algorithm $\mathcal{A}$, which faces the strategic buyer with a valuation $v\in[0,\infty)$:
\begin{equation}
\label{eq_SRev}
\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S},{\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, v): = \mathrm{Rev}_{{\boldsymbol\g}^\mathtt{S}}(\mathcal{A}, \mathbf{a}^{Opt}(\mathcal{A}, v, {\boldsymbol\g}^\mathtt{B})).
\end{equation}
\subsection{Notation and auxiliary definitions}
\label{subsec_Notations}
Following~\cite{2003-FOCS-Kleinberg,2014-NIPS-Mohri,2017-WWW-Drutsa}, we associate a deterministic pricing algorithm with a complete infinite binary tree $\mathfrak{T}$ in which each vertex is labeled with a price. The algorithm offers the price from a current node (starting from the root) and moves to the left (right) child of the node if the buyer answers $a_t=0$ ($=1$, respectively).
Clearly, the buyer's decisions at rounds $1, \ldots, t$ bijectively encode paths from the root to tree nodes and, thus, the nodes as well. Hence, we use short notations for the nodes by means of the dictionary of finite strings $\mathfrak{N} := \{\mathtt{0}, \mathtt{1}\}^\ast$: the root is the empty string $\mathfrak{e}$, its left child is $\mathtt{0}$, the right one is $\mathtt{1}$, the right child of $\mathtt{0}$ is $\mathtt{0}\mathtt{1}$, etc. (e.g., $\mathtt{0}^k$ denotes the string of $k$ zeros).
Similarly, to save space, we denote buyer strategies by infinite strings over the alphabet $\{\mathtt{0}, \mathtt{1}\}$\footnote{We purposely use a different typeface for the numbers \emph{zero} and \emph{one} to distinguish their use in numerical expressions (as $0$, $1$) from their use in strings that encode nodes or strategies (as elements of the alphabet $\{\mathtt{0}, \mathtt{1}\}$).} (e.g., the buyer that follows $\mathtt{1}\mathtt{0}^\infty$ accepts the price at the first round, $a_1=1$, and rejects all remaining ones, $a_t=0,\, t>1$).
Overall, the set of pricing algorithms $\mathfrak{A}$ is equivalent to the set of mappings from the nodes $\mathfrak{N}$ to $[0; +\infty)$, and we thus use them interchangeably: $\mathfrak{A} = [0; +\infty)^\mathfrak{N}$. The price offered by an algorithm $\mathcal{A}\in\mathfrak{A}$ at a node $\mathfrak{n}\in\mathfrak{N}$ is denoted by $\mathcal{A}(\mathfrak{n})$.
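The tree representation can be sketched in a few lines; the (truncated) mapping from nodes to prices below is hypothetical:

```python
# Sketch of the tree representation: a (truncated) pricing algorithm as a
# mapping from nodes -- binary strings over {0, 1}, root = "" -- to prices.
# All price values here are hypothetical.
tree = {"": 5.0, "0": 4.0, "1": 6.0, "00": 3.0, "01": 5.0, "10": 5.0, "11": 7.0}

def prices_along(tree, answers):
    """Prices offered when the buyer gives the answer sequence a_1, a_2, ..."""
    node, offered = "", []
    for a in answers:
        offered.append(tree[node])
        node += str(a)
    return offered

print(prices_along(tree, [1, 0]))  # price at the root, then at node "1"
```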
\subsection{Problem statement}
Let possible buyer valuations be distributed on $[0,+\infty)$ according to some distribution $D$, i.e., the buyer valuation $v$ (fixed over all rounds) is a realization of a random variable $V\sim D$.
Following a standard assumption in classical auction theory~\cite{2007-Book-Nisan,2009-Book-Krishna}, \emph{the valuation distribution} $D$ is known by the seller.
We also assume that the distribution $D$ has finite expectation, i.e., $\mathbb{E}_{V\sim D}[V]<\infty$, and is continuous; these assumptions are standard in auction theory as well~\cite{1981-MOR-Myerson,2009-Book-Krishna}.
So, we consider the problem of finding a pricing algorithm $\mathcal{A}^\ast \in \mathfrak{A}$ that maximizes the expected strategic revenue\footnote{Note that, in repeated auctions, revenue is usually compared to the one that would have been earned by offering the buyer's valuation $v$ if it was known in advance to the seller, resulting in the notion of the strategic regret $\mathrm{SReg}_{{\boldsymbol\g}^\mathtt{S},{\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, v):= \Gamma^\mathtt{S} v \!-\! \mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S},{\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, v)$. Regret is a powerful instrument to obtain lower bounds on revenue~\cite{2003-FOCS-Kleinberg,2013-NIPS-Amin,2017-WWW-Drutsa}, but, in our setup, minimization of the expected strategic regret is equivalent to our problem.}: $\mathbb{E}_{V\sim D} [ \mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S},{\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V) ] \rightarrow \max$.
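For a finite horizon, the buyer's best response and the resulting strategic revenue can be computed by brute force over all $2^T$ strategies; a sketch with hypothetical discounts and a constant-price algorithm (this only illustrates the definitions, not the optimization method developed later):

```python
from itertools import product

# Brute-force sketch of the buyer's best response (and the resulting strategic
# revenue) in a finite T-round game. The constant-price algorithm and all
# numbers are hypothetical; this only illustrates the definitions.
T, v, price = 4, 1.0, 0.6
gB = [0.5 ** t for t in range(T)]  # buyer's discount sequence
gS = [0.9 ** t for t in range(T)]  # seller's discount sequence

def surplus(a):
    return sum(gB[t] * a[t] * (v - price) for t in range(T))

best = max(product([0, 1], repeat=T), key=surplus)
srev = sum(gS[t] * best[t] * price for t in range(T))
print(best, srev)  # since v > price, accepting in every round is optimal
```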
From a game-theoretic viewpoint, we consider a two-player non-zero-sum repeated game with incomplete information and unlimited supply, in which the seller commits to the pricing (since he announces the algorithm before the auctions take place).
An attentive reader may also note that, due to the commitment and the presence of only one buyer, our setting can be formalized as a two-stage game.
The common knowledge here consists of the discounts ${\boldsymbol\g}^\mathtt{B}$, ${\boldsymbol\g}^\mathtt{S}$, and the prior distribution $D$ of the private valuation $V$, while the realization $v$ of $V$ is known only to the buyer.
At the first stage, the seller picks a pricing algorithm $\mathcal{A}\in\mathfrak{A}$, and his choice is announced to the buyer;
at the second stage, the buyer picks a strategy $\mathbf{a}\in\mathfrak{S}$.
The buyer's utility is the surplus and the seller's is the expected revenue (see Eq.~(\ref{eq_Sur_Rev})).
Thus, if some pricing $\mathcal{A}^*\in\mathfrak{A}$ is a solution to our problem, then the pair $(\mathcal{A}^*,\mathbf{a}^{Opt}(\mathcal{A}^*,v,{\boldsymbol\g}^\mathtt{B}))$ is an equilibrium of the above-described game.
\begin{remark}
\label{remark_first_discount}
Note that both an optimal buyer strategy and an optimal pricing algorithm remain optimal if the discount ${\boldsymbol\g}^\mathtt{B}$ or ${\boldsymbol\g}^\mathtt{S}$ is multiplied by any positive constant. Hence, from here on, we assume w.l.o.g.\ that $\gamma^\mathtt{B}_1 = 1$ and $\gamma^\mathtt{S}_1 = 1$.
\end{remark}
\subsection{Related work}
\label{subsec_RelWork}
Optimization of seller revenue in auctions has generally been reduced to the selection of proper reserve prices for buyers\footnote{Of course, there are other options to optimize revenue, such as quality scores for advertisements in ad auctions~\cite{2013-IJCAI-He}, but they are significantly less popular. Revenue optimization has also been considered in other contexts, such as trade-offs between auction stakeholders~\cite{2014-WWW-Goel} or between auction properties (e.g., simplicity, expressivity~\cite{2015-NIPS-Morgenstern}, and revenue monotonicity~\cite{2014-WWW-Goel}).} (e.g., in VCG~\cite{1981-MOR-Myerson}, GSP~\cite{2014-ECRA-Sun}, and other auctions~\cite{2016-WWW-Paes}).
In such setups, these prices usually depend on the distributions of buyer bids or valuations~\cite{1981-MOR-Myerson}, which were in turn estimated by machine learning techniques~\cite{2013-IJCAI-He,2014-ECRA-Sun,2016-WWW-Paes}, while alternative approaches learned reserve prices directly~\cite{2014-ICML-Mohri,2017-NIPS-Medina}.
In contrast to these works, we consider an online deterministic learning framework for repeated auctions.
Revenue optimization for repeated auctions has mainly concentrated on algorithmic reserve prices that are updated in an online fashion over time; this line of research is also known as dynamic pricing, see the extensive survey~\cite{2015-SORMS-den-Boer} of this field.
On the one hand, dynamic pricing has been studied from a game-theoretic viewpoint in the context of different aspects such as
budget constraints~\cite{2015-ManagSci-Balseiro,2016-EC-Balseiro},
mean field equilibria~\cite{2011-ECOMexch-Iyer,2015-ManagSci-Balseiro},
strategic buyer behavior~\cite{2015-EC-Chen,leme2012sequential},
multi-period contracts~\cite{1985-EL-Besanko}, etc.
A series of studies~\cite{1993-JET-Schmidt,2015-SODA-Devanur,2017-EC-Immorlica} close to ours considered repeated sales where the seller does not commit to a pricing policy (in contrast to our setting), which thus required special approaches (such as the concept of perfect Bayesian equilibrium) to address the revenue optimization problem. Those studies showed that the seller earns less in settings without commitment than with it.
Another line of work~\cite{pavan2014dynamic,2013-OR-Kakade} studied auction environments of a general form and aimed to find revenue-optimal mechanisms that are incentive compatible (truthful). In contrast to these studies, we consider the specific mechanism of repeated posted-price auctions and do not require its truthfulness (e.g., the algorithms in Sec.~\ref{subsec_finite_game_studies} and~\ref{subsec_infinite_game_studies}).
Finally, our work can be considered as further development of classical auction theory~\cite{2007-Book-Nisan,2009-Book-Krishna}: in particular, in the case of a more patient seller, to address the optimal pricing problem we derive a multidimensional optimization functional, defined in Eq.~(\ref{prop_problem_reduction_eq_2}), which is a multivariate analogue of the classical one, $p \cdot \mathbb{P}_{V\sim D}[V\ge p]$, used to determine the optimal reserve price in static auctions.
Overall, to the best of our knowledge, optimal pricing in our scenario of repeated posted-price auctions with different discounts for the seller and the buyer has never been considered in existing studies, and we believe that the key ideas behind our analysis may serve as a foundation for future work on repeated auctions.
On the other hand, revenue optimization in dynamic pricing has been considered from algorithmic and learning perspectives:
as bandit problems~\cite{2011-COLT-Amin,2015-NIPS-Zoghi,2015-NIPS-Lin}
(e.g., UCB-like pricing~\cite{2015-TEC-Babaioff}, bandit feedback models~\cite{2016-JMLR-Weed});
from the buyer side (valuation learning~\cite{2011-ECOMexch-Iyer,2016-JMLR-Weed}, competition between buyers and optimal bidding~\cite{2014-WWW-Hummel,2016-JMLR-Weed}, interaction with several sellers~\cite{2016-ICML-Heidari}, etc.);
from the seller side against several buyers~\cite{2013-SODA-Cesa-Bianchi,2017-SSRN-Kanoria,2016-EC-Roughgarden,2016-NIPS-Feldman};
and a single buyer with stochastic valuation (myopic~\cite{2003-FOCS-Kleinberg,2016-SODA-Chawla} and strategic buyers~\cite{2013-NIPS-Amin,2014-NIPS-Amin,2014-NIPS-Mohri,2016-SSRN-Chen}, feature-based pricing~\cite{2014-NIPS-Amin,2016-EC-Cohen}, limited supply~\cite{2015-TEC-Babaioff}).
The most relevant studies from these works on online learning are \cite{2013-NIPS-Amin,2014-NIPS-Mohri,2017-WWW-Drutsa,2017-ArXiV-Drutsa}, where our scenario of the strategic buyer with a fixed private valuation is considered.
Amin et al.~\cite{2013-NIPS-Amin} proposed seeking algorithms that have the lowest possible upper bound on the strategic regret for the \emph{worst-case} buyer valuation, i.e., $\sup_{v\in[0,1]}[\mathrm{SReg}_{{\boldsymbol\g}^\mathtt{S},{\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, v, \mathbf{a})]\le O(f(T))$, where $T$ is the finite game horizon.
This problem was recently solved in~\cite{2017-WWW-Drutsa}, where the algorithm PRRFES with a tight regret bound in $\Theta(\log\log T)$ was proposed. Some extensions of this algorithm were proposed in~\cite{2017-ArXiV-Drutsa}.
In contrast to these studies, first, we search for a pricing algorithm that maximizes the strategic revenue \emph{expected} over buyer valuations, i.e., $\mathbb{E}_v[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S},{\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, v)]$, (equivalently, s.t.\ $\mathbb{E}_v[\mathrm{SReg}_{{\boldsymbol\g}^\mathtt{S},{\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, v)]\rightarrow\min$), which matches the practice of ad exchanges and optimization goals in classical auction theory~\cite{2009-Book-Krishna}.
Second, our revenue optimization problem is solved \emph{exactly} (not approximately and not via optimization of lower/upper bounds).
Third, our study considers a more general setup in which not only the buyer's surplus is discounted over rounds, but so is the seller's revenue.
\section{Constant pricing algorithms}
\label{sec_ConstAlg}
We start our investigation of the problem with the study of \emph{constant algorithms}, i.e., algorithms that propose only one price over all rounds, independently of the buyer's decisions.
\begin{definition}
A pricing algorithm $\mathcal{A}$ is said to be \emph{constant}, if there exists a price $ p \in [0; +\infty)$ s.t., at each node $\mathfrak{n} \in \mathfrak{N}$, the algorithm's price $\mathcal{A}(\mathfrak{n})$ equals $p$. This price $p$ is referred to as \emph{the algorithm price} and is denoted by $p(\mathcal{A})$. \emph{The set of all constant algorithms} is denoted by $\mathfrak{A}_0 \subset \mathfrak{A}$.
\end{definition}
Note that
since a constant algorithm $\mathcal{A} \in \mathfrak{A}_0$ offers a price $p=p(\mathcal{A})$ that is independent of the buyer's decisions, the buyer has no incentive to lie and thus behaves truthfully. Hence, the buyer either rejects the price in all rounds or accepts it in all rounds (in our notation, applies the strategy $\mathtt{0}^\infty$ or $\mathtt{1}^\infty$, resp.), depending on whether his valuation $v$ is lower than $p$ or not.
Since $\mathrm{Rev}_{{\boldsymbol\g}^\mathtt{S}}(\mathcal{A}, \mathtt{0}^\infty)=0$ and $\mathrm{Rev}_{{\boldsymbol\g}^\mathtt{S}}(\mathcal{A}, \mathtt{1}^\infty)=p\sum_{t = 1}^{\infty} \gamma^\mathtt{S}_t$, the expectation of the strategic revenue of the constant algorithm $\mathcal{A}$ is
\begin{equation*}
\mathbb{E}_{V\sim D} \left[ \mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S},{\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V) \right] = \mathbb{P} [V < p] \cdot \mathrm{Rev}_{{\boldsymbol\g}^\mathtt{S}}(\mathcal{A}, \mathtt{0}^\infty) + \mathbb{P} [V \ge p] \cdot \mathrm{Rev}_{{\boldsymbol\g}^\mathtt{S}}(\mathcal{A}, \mathtt{1}^\infty) = \mathbb{P} [V \ge p] \cdot p \cdot \Gamma^\mathtt{S}.
\end{equation*}
It is easy to see that a constant algorithm $\mathcal{A}$ is optimal if its price $p(\mathcal{A})$ is the global maximum point of the function $H_D(p) := \mathbb{P} [V \ge p] \cdot p$, which is well known in the theory of non-repeated auctions~\cite{1981-MOR-Myerson,2007-Book-Nisan,2009-Book-Krishna}.
The existence of a global maximum point of $H_D(p)$ for our distribution $D$ is shown in Appendix~\ref{app_subsec_globmax_H_D}, and we refer to the leftmost one of them as \emph{the Myerson price} $p^\ast(D)$~\cite{1981-MOR-Myerson}.
Note that this price can be found via the first-order necessary condition $p = (1-F_D(p))/f_D(p)$ when the distribution $D$ has a continuous probability density $f_D$ ($F_D$ is its cumulative distribution function).
\begin{definition}
The constant algorithm $\mathcal{A} \in \mathfrak{A}_0$ with the price $p(\mathcal{A})$ equal to the Myerson price $p^\ast(D)$ of the distribution $D$ is called \emph{the optimal constant algorithm} and is denoted by $\mathcal{A}^\ast_D$.
\end{definition}
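As a concrete sanity check (purely illustrative, not part of the formal development), the sketch below computes the Myerson price numerically for the uniform distribution $D = U[0,1]$, for which $H_D(p) = p(1-p)$ and $p^\ast(D) = 1/2$; the function names, the choice of distribution, and the discount sequence are our own assumptions.

```python
# Numerical sketch (illustration only): the Myerson price p*(D) as the
# (leftmost) global maximizer of H_D(p) = P[V >= p] * p, and the expected
# strategic revenue P[V >= p*] * p* * Gamma^S of the optimal constant
# algorithm.  D is taken to be uniform on [0, 1].

def G(p):                      # G(p) = P[V >= p] for V ~ U[0, 1]
    return max(0.0, min(1.0, 1.0 - p))

def H(p):                      # H_D(p) = G(p) * p
    return G(p) * p

def myerson_price(grid_size=100001):
    # grid search over [0, 1]; Python's max returns the leftmost maximizer
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    return max(grid, key=H)

def constant_algorithm_revenue(p, gamma_S):
    # E[SRev] = P[V >= p] * p * Gamma^S, with Gamma^S the sum of discounts
    return G(p) * p * sum(gamma_S)

p_star = myerson_price()
# truncated geometric seller discount with ratio 0.5: Gamma^S ~ 2
gamma_S = [0.5 ** t for t in range(20)]
revenue = constant_algorithm_revenue(p_star, gamma_S)
```

For $U[0,1]$ the grid search recovers $p^\ast = 1/2$ with $H_D(p^\ast) = 1/4$, so the optimal constant algorithm earns $\Gamma^\mathtt{S}/4$ in expectation.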
\section{Equal discounts of the seller and the buyer}
\label{sec_EqDiscounts}
In this section, we study the case when the seller and the buyer discount their utilities equally, i.e., ${\boldsymbol\g} := {\boldsymbol\g}^\mathtt{S} = {\boldsymbol\g}^\mathtt{B}$, and we use the following notation for the strategic revenue: $\mathrm{SRev}_{\boldsymbol\g} := \mathrm{SRev}_{{\boldsymbol\g}, {\boldsymbol\g}}$.
First of all, we summarize some useful properties of surplus and revenue as functions of the valuation $v$.
\begin{remark}
\label{remark_strat_prop}
Let a pricing algorithm $\mathcal{A}\in\mathfrak{A}$ and the discount sequence ${\boldsymbol\g}$ be given. For simplicity, we will use the following short notations of surpluses as mappings from the valuation domain:
$S_{\mathbf{a}}(v):=\mathrm{Sur}_{\boldsymbol\g}(\mathcal{A},v,\mathbf{a})$ and $S(v):=\mathrm{Sur}_{\boldsymbol\g}(\mathcal{A},v,\mathbf{a}^{Opt}(\mathcal{A}, v, {\boldsymbol\g}))$, for which the following hold:
\begin{enumerate}
\item for each strategy $\mathbf{a} \in \mathfrak{S}$, the surplus $S_{\mathbf{a}}$ w.r.t.\ this strategy is a linear function of $v$ of the form $S_{\mathbf{a}}(v)=q_{\mathbf{a}} v - r_{\mathbf{a}}$, where $q_{\mathbf{a}} = \sum_{t=1}^\infty\gamma_ta_t$ is the \emph{discounted quantity} of purchased goods and $r_{\mathbf{a}}$ is the discounted revenue of the seller (i.e., $r_{\mathbf{a}} = \mathrm{Rev}_{\boldsymbol\g}(\mathcal{A},\mathbf{a})$);
\item the strategic (optimal) surplus $S$ is convex as a function of $v$, because it is the maximum of a set of linear functions: $S(v)=\max_{\mathbf{a}\in\mathfrak{S}}S_{\mathbf{a}}(v)$ (by definition);
\item the strategic surplus $S(v)$ is non-negative for any $v \ge 0$ since, for the strategy $\mathbf{a} = \mathtt{0}^\infty$, we have $ S_\mathbf{a}(v) = 0$, which implies in turn that $S(v) \ge S_\mathbf{a}(v) = 0, \:\forall v\ge 0$;
\item the derivative $S'(v)$ exists for almost all $v \in[0; +\infty)$ (i.e., it does not exist on a set of Lebesgue measure zero), because $S(v)$ is convex and is thus absolutely continuous.
\end{enumerate}
\end{remark}
\begin{lemma}\label{Rlemma}
For any pricing algorithm $\mathcal{A}\in\mathfrak{A}$, the strategic revenue $R(v):=\mathrm{SRev}_{\boldsymbol\g}(\mathcal{A},v)$ is increasing on the valuation domain $[0; +\infty)$, it starts at zero (i.e., $R(0) = 0$), and the random variable $R(V)$ thus has a finite non-negative expectation (i.e., $0 \le \mathbb{E}\left[R(V)\right] < +\infty$).
\end{lemma}
\begin{proof}
We prove only the first claim since the utilized technique will be useful further.
The other claims are quite simple and are deferred to Appendix~\ref{app_subsec_proof_RLemma} due to space constraints.
For any two valuations $v_1$ and $v_2\in[0; +\infty)$ s.t. $v_1<v_2$, and two corresponding optimal strategies $\mathbf{a}^1$ and $\mathbf{a}^2\in\mathfrak{S}$, i.e., such that $S(v_j) = S_{\mathbf{a}^j}(v_j)$, $j=1,2,$ (using the notations from Remark~\ref{remark_strat_prop}), we have
\begin{equation*} \label{increasepair}
S_{\mathbf{a}^1}(v_1) \ge S_{\mathbf{a}^2}(v_1) \quad \hbox{and}
\quad S_{\mathbf{a}^2}(v_2) \ge S_{\mathbf{a}^1}(v_2).
\end{equation*}
Therefore, since $S_{\mathbf{a}^j}, j=1,2,$ are linear, they either coincide (then $r_{\mathbf{a}^1}=r_{\mathbf{a}^2}$), or have an intersection point $w$ in $[v_1,v_2]\!\subset\![0; +\infty)$. In the latter case, one gets $S_{\mathbf{a}^1}(v)\!\ge\!S_{\mathbf{a}^2}(v) \:\forall v\in [0,w]$, which implies $-r_{\mathbf{a}^1}\ge -r_{\mathbf{a}^2}$ when $v=0$. Hence, we obtain $R(v_2)=r_{\mathbf{a}^2}\ge r_{\mathbf{a}^1}=R(v_1)$ for any $v_2>v_1\ge 0$.
\end{proof}
Similarly to the optimal surplus function $S(\cdot)$ and the strategic revenue function $R(\cdot)$, we introduce \emph{the strategic purchased quantity} $Q(\cdot)$ as a map from the valuation domain, i.e., $Q(v) := \sum_{t = 1}^\infty \gamma_t a^O_t(v)$, where $\{a^O_t(v)\}_{t = 1}^\infty=\mathbf{a}^{Opt}(\mathcal{A}, v, {\boldsymbol\g})$. Note that $S(v)=Q(v)v-R(v)$ for each $v\in[0,+\infty)$.
\begin{lemma}\label{QSlemma}
Assume that, for a given $v \ge 0$, the derivative $S^\prime(v)$ exists. Then $Q(v)$ is uniquely defined and equals $S^\prime(v)$ for any optimal strategy $\mathbf{a}$ of the buyer that holds the valuation $v$.
\end{lemma}
The proof of this lemma is simple and rather technical; it is also deferred to Appendix~\ref{app_subsec_proof_QSlemma} due to space constraints.
Lemma~\ref{QSlemma} together with the identity $\mathrm{SRev}_{\boldsymbol\g}(\mathcal{A}, v) \!=\! R(v)\!=\! Q(v) v \!-\! S(v)$ gives us:
\begin{corollary}
\label{corollary_uniqDefSRev}
For almost all $v\in[0; +\infty)$, the strategic revenue $\mathrm{SRev}_{\boldsymbol\g}(\mathcal{A},v)$ is uniquely defined for any optimal strategy $\mathbf{a}$ of the buyer that holds the valuation $v$\footnote{Recall that the strategic revenue may not be uniquely defined (see Footnote~\ref{fn_SRev} near the definition of the strategic revenue).}.
\end{corollary}
\begin{remark} \label{Qremark}
The function $Q(v)$ is defined almost everywhere and is non-decreasing on its domain, since $Q^\prime(v) = S^{\prime \prime}(v)$, which is also defined almost everywhere and is non-negative because $S$ is convex on its domain\footnote{Note that this fact can also be proved directly, as in Lemma~\ref{Rlemma}.}. Also, by definition, $Q(v) \le \Gamma$ and, thus, $Q(+\infty)$ is finite.
\end{remark}
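The properties collected above can be observed on a toy example. The sketch below (illustrative only; the two-round game and its prices are arbitrary choices of ours) enumerates all four buyer strategies of a decision-independent price sequence and checks that $S$ is convex and non-negative, that $R$ and $Q$ are non-decreasing, and that $S(v)=Q(v)v-R(v)$ holds.

```python
# Toy sketch (illustration only): a 2-round game whose price sequence is
# fixed in advance (prices independent of the buyer's decisions).  Each
# strategy a yields a linear surplus S_a(v) = q_a v - r_a with
# q_a = sum_t gamma_t a_t and r_a = sum_t gamma_t a_t p_t.

from itertools import product

gamma = [1.0, 0.5]        # buyer discount, gamma_1 = 1 (see the scaling remark)
prices = [0.6, 0.3]       # illustrative prices

strategies = list(product((0, 1), repeat=2))

def q(a):                 # discounted quantity of purchased goods
    return sum(g * x for g, x in zip(gamma, a))

def r(a):                 # discounted revenue of the seller
    return sum(g * x * p for g, x, p in zip(gamma, a, prices))

def opt(v):               # optimal surplus S(v), revenue R(v), quantity Q(v)
    a = max(strategies, key=lambda a: q(a) * v - r(a))
    return q(a) * v - r(a), r(a), q(a)

grid = [i / 200 for i in range(301)]                  # v in [0, 1.5]
S = [opt(v)[0] for v in grid]
R = [opt(v)[1] for v in grid]
Q = [opt(v)[2] for v in grid]

S_nonneg = all(s >= -1e-12 for s in S)
S_convex = all(2 * S[i] <= S[i - 1] + S[i + 1] + 1e-12
               for i in range(1, len(S) - 1))
R_monotone = all(x <= y + 1e-12 for x, y in zip(R, R[1:]))
Q_monotone = all(x <= y + 1e-12 for x, y in zip(Q, Q[1:]))
identity_ok = all(abs(s - (qq * v - rr)) < 1e-12
                  for v, s, rr, qq in zip(grid, S, R, Q))
```

In this toy game the optimal strategy switches from $\mathtt{00}$ to $\mathtt{01}$ at $v=0.3$ and to $\mathtt{11}$ at $v=0.6$, so $Q$ and $R$ jump exactly at the kinks of the piecewise-linear convex $S$.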
\subsection{Optimality of the constant algorithm with the Myerson price}
\label{subsec_OptAlgProof}
We use the following notation for the distribution functions: $F(v) := \mathbb{P}[V \le v]$ and $G(v) := 1 - F(v) = \mathbb{P}[V > v]$.
\begin{lemma}
\label{lemma_ExpR_to_Int_dQ_dS}
For the mappings $S(v)$, $R(v)$, and $Q(v)$ the following identity holds:
\begin{equation*}
\label{eq_lemma_ExpR_to_Int_dQ_dS}
\mathbb{E} \left[R(V)\right] = \int\limits_{[0; +\infty)} G(v) Q(v) dv + \int\limits_{[0; +\infty)} G(v) v dQ(v) - \int\limits_{[0; +\infty)} G(v) dS(v).
\end{equation*}
\end{lemma}
The proof is rather technical, relies on the properties of $S$, $R$, and $Q$ established in the above statements, and is thus deferred to Appendix~\ref{app_subsec_proof_lemma_dQ_dS}.
\begin{theorem}
\label{maintheorem}
Assume the valuation $V \sim D$ and the discount sequence ${\boldsymbol\g} = \{\gamma_t\}_{t = 1}^\infty$ satisfy the aforementioned conditions (see Sec.~\ref{sec_Prelim}). Then the expected strategic revenue of an arbitrary pricing algorithm $\mathcal{A} \in \mathfrak{A}$ is not greater than the one of the optimal constant algorithm $\mathcal{A}^\ast_D$:
\begin{equation} \label{maininequality}
\forall \mathcal{A} \in \mathfrak{A} \quad \hbox {we have} \quad \mathbb{E} \left[ \mathrm{SRev}_{\boldsymbol\g}(\mathcal{A}, V) \right] \le \mathbb{E} \left[ \mathrm{SRev}_{\boldsymbol\g}(\mathcal{A}^\ast_D, V) \right].
\end{equation}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{maintheorem}]
Consider an arbitrary algorithm $\mathcal{A} \in \mathfrak{A}$ and use the notations $S$, $R$, and $Q$ introduced above. From Lemma~\ref{lemma_ExpR_to_Int_dQ_dS}, we have
\begin{equation}
\label{eq_maintheorem_proof_1}
\mathbb{E} \left[R(V)\right] = \!\!\!\!\!\int\limits_{[0; +\infty)}\!\!\!\!\! G(v) Q(v) dv + \!\!\!\!\!\int\limits_{[0; +\infty)}\!\!\!\!\! G(v) v dQ(v) - \!\!\!\int\limits_{[0; +\infty)}\!\!\!\!\! G(v) dS(v) = \!\!\!\!\!\int\limits_{[0; +\infty)}\!\!\!\!\! G(v) v dQ(v),
\end{equation}
where the latter identity of Eq.~(\ref{eq_maintheorem_proof_1}) holds due to the facts that $S$ is absolutely continuous on its domain (see Remark \ref{remark_strat_prop}), and thus $\int_{[0; +\infty)}G(v) dS(v) = \int_{[0; +\infty)} G(v) S^\prime(v) dv$, and that $S'(v) = Q(v)$ almost everywhere (see Lemma~\ref{QSlemma}). By definition, we have $H_D(v)=G(v) v$, $\:\forall v\ge0,$ and, hence, Eq.~(\ref{eq_maintheorem_proof_1}) implies that $\mathbb{E} \left[R(V)\right] = \int_{[0; +\infty)} H_D(v) dQ(v)$ can be upper bounded by the expression
\begin{equation}
\label{eq_maintheorem_proof_2}
H_D(p^\ast(D)) \cdot \int_{[0; +\infty)} \!\!\!\!\!\!1 dQ(v) = H_D(p^\ast(D)) \cdot (Q(+\infty) - Q(0)) \le H_D(p^\ast(D)) \cdot \Gamma,
\end{equation}
where the inequality holds since $H_D(v)$ is bounded by its maximum $H_D(p^\ast(D))$, the first identity is due to the fact that $Q$ is non-decreasing in $v$, and the non-negative $Q(v)$ is bounded by $\Gamma$ for all $v\ge 0$ (see Remark~\ref{Qremark}).
Finally, recall that the expected strategic revenue $\mathbb{E}\left[\mathrm{SRev}_{\boldsymbol\g}(\mathcal{A}^\ast_D, V)\right]$ of the optimal constant algorithm $\mathcal{A}^\ast_D$ equals the right-hand side of Eq.~(\ref{eq_maintheorem_proof_2}) (see Sec.~\ref{sec_ConstAlg}).
\end{proof}
Th.~\ref{maintheorem} states that the optimal constant algorithm $\mathcal{A}^\ast_D$ is, in fact, optimal among all pricings~$\mathfrak{A}$.
\subsection{Non-uniqueness of the optimal algorithm: ``big deal" pricing}
\label{subsec_OptAlgNonUnique}
It appears that \emph{the optimal constant algorithm $\mathcal{A}^\ast_D$ is not the unique optimal one}.
We provide an example of applying a general technique for building optimal algorithms of a certain form.
\begin{proposition}
\label{prop_non_uniqu}
Let the game have at least 2 rounds (i.e., $\Gamma > \gamma_1$). If an algorithm $\mathcal{A}_1$ sets the first price $p_1$ equal to $\Gamma p^\ast(D)/ \gamma_1$ and sets all further prices either to $p_t = 0$, $t\ge2$, if the buyer accepts the first offer ($a_1=1$), or to $p_t = 2 \gamma_1 p_1/(\Gamma - \gamma_1)$, $t\ge2$, otherwise, then the algorithm $\mathcal{A}_1$ is optimal.
\end{proposition}
\begin{proof}
First, note that the buyer has no incentive to lie after the first round since the algorithm prices $p_t, t\ge3,$ do not depend on his decisions $a_t, t\ge2$. Hence, possible candidates for optimal strategies are $\mathtt{0}^\infty$, $\mathtt{1}^\infty$, $\mathtt{0}\mathtt{1}^\infty$, and $\mathtt{1}\mathtt{0}^\infty$.
It is easy to see that the optimal buyer strategy in response to $\mathcal{A}_1$ is $\mathtt{1}^\infty$ in the case $v > p^\ast(D)$ and $\mathtt{0}^\infty$ in the case $v < p^\ast(D)$.
Indeed, if the buyer accepts $p_1$, further offers are for free goods that will be accepted. If the buyer rejects $p_1$, then, for any strategy $\mathbf{a} \in \mathfrak{S}$ s.t. $a_1=0$, we have
\begin{equation}
\label{eq_prop_non_uniqu_proof_1}
S_\mathbf{a}(v) \le (\Gamma -\gamma_1) ( v - 2 \gamma_1 p_1/(\Gamma - \gamma_1) )
< \Gamma v - 2 \gamma_1 p_1 < \Gamma v - \gamma_1 p_1 = S_{\mathtt{1}^\infty}(v).
\end{equation}
Thus, if $S_{\mathtt{1}^\infty}(v) > 0 = S_{\mathtt{0}^\infty}(v)$, then $\mathtt{1}^\infty$ is the optimal strategy, and, if $S_{\mathtt{1}^\infty}(v) < 0$, then Eq.~(\ref{eq_prop_non_uniqu_proof_1}) implies the optimality of $\mathtt{0}^\infty$.
Finally, note that $S_{\mathtt{1}^\infty}(v) = \Gamma v - \gamma_1 p_1 = \Gamma (v - p^\ast(D))$, which implies $S_{\mathtt{1}^\infty}(v) > 0 \Leftrightarrow v > p^\ast(D)$.
Hence, the expected strategic revenue of $\mathcal{A}_1$ is
\begin{equation}
\label{eq_prop_non_uniqu_proof_2}
\mathbb{E}\left[\mathrm{SRev}_{\boldsymbol\g}(\mathcal{A}_1, V)\right]
= \mathbb{P}[p^\ast(D) \le V] \cdot \gamma_1 \Gamma p^\ast(D) /\gamma_1
= H_D(p^\ast(D)) \Gamma
= \mathbb{E}\left[\mathrm{SRev}_{\boldsymbol\g}(\mathcal{A}^\ast_D, V)\right].
\vspace{-0.3cm}
\end{equation}
\end{proof}
\vspace{-0.2cm}
The key idea behind the algorithm $\mathcal{A}_1$ is quite simple. Roughly speaking, the seller ``accumulates'' all his revenue at the first round by offering the buyer a ``big deal'': to pay a large price at the first round and get all goods in the subsequent rounds for free, or, otherwise, get nothing\footnote{A similar pricing was proposed by~\cite{2013-OR-Kakade} for a class of mechanism environments with multiplicative separability and zero production cost. Their mechanism charges an up-front payment (before the rounds start) and posts a zero price each round, thus obtaining truthfulness. In contrast to that study, the ``big deal'' pricing posts a large price at the first round (our setup does not allow an up-front payment) and is not truthful (since the price $p_1 = \Gamma p^\ast(D)/ \gamma_1$ is accepted by the strategic buyer whose valuation $v > p^\ast(D)$, not $v > p_1$).}.
Note that this optimal pricing algorithm depends both on the discounting ${\boldsymbol\g}$ and the valuation distribution $D$:
the price $p_1$ is calculated based on the knowledge of the total discounted revenue $\Gamma p^\ast(D) $ that is earned by $\mathcal{A}^\ast_D$ from selling all goods.
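The mechanics of the ``big deal'' can be checked numerically. The sketch below is an illustration under assumptions of our own (namely $D = U[0,1]$, so $p^\ast(D)=1/2$, and a truncated equal geometric discount): it evaluates the four candidate strategies from the proof and confirms the acceptance threshold $p^\ast(D)$ and the expected revenue $H_D(p^\ast(D))\,\Gamma$.

```python
# Sketch (illustration only): the "big deal" algorithm A_1 for D = U[0,1]
# and an equal geometric discount gamma_t = gamma**(t-1), truncated at T
# rounds.  A_1 offers p_1 = Gamma * p* / gamma_1 up front; on acceptance
# all further goods are free, on rejection the prices become prohibitive.

gamma, T = 0.5, 60
Gamma = sum(gamma ** t for t in range(T))   # ~ 1 / (1 - gamma) = 2
p_star = 0.5                                # Myerson price of U[0, 1]
p1 = Gamma * p_star / 1.0                   # gamma_1 = 1

def best_surplus(v):
    # candidate strategies from the proof: 0^inf, 1^inf, 01^inf, 10^inf
    s_all_reject = 0.0
    s_all_accept = Gamma * v - p1           # free goods after round 1
    p_rej = 2 * p1 / (Gamma - 1)            # price offered after a rejection
    s_reject_then_accept = (Gamma - 1) * (v - p_rej)
    s_accept_then_reject = v - p1
    return max(s_all_reject, s_all_accept,
               s_reject_then_accept, s_accept_then_reject)

def accepts(v):
    # the buyer takes the deal iff S_{1^inf}(v) = Gamma * (v - p_star) > 0
    return Gamma * v - p1 > 0

# the seller is paid p1 at round 1 exactly when v > p_star, hence
expected_revenue = (1 - p_star) * p1        # = H_D(p_star) * Gamma
```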
An attentive reader may note that the idea behind the aforementioned technique, in fact, allows one to build more variants of optimal algorithms by ``spreading'' the revenue $\Gamma p^\ast(D)$ in a certain way along the rightmost path of the tree $\mathfrak{T}$.
In Sections~\ref{sec_NonEqualDisc_BgS} and~\ref{sec_NonEqualDisc_BlS}, we show that \emph{$\mathcal{A}_1$ may remain optimal in the cases when the constant algorithm $\mathcal{A}^\ast_D$ is no longer optimal}.
\section{Less patient seller}
\label{sec_NonEqualDisc_BgS}
Now we are ready to study the cases when the seller and the buyer discounts are different.
Further, we argue that \emph{the constant algorithm $\mathcal{A}^\ast_D$ is no longer optimal among all algorithms $\mathfrak{A}$} in these cases.
We start our investigation with a seller who is less patient than the buyer in his willingness to wait for revenue. We consider the case when ${\boldsymbol\g}^\mathtt{S} \le {\boldsymbol\g}^\mathtt{B}$ (i.e., $\gamma_t^\mathtt{S} \le \gamma_t^\mathtt{B} \:\forall t\in\mathbb{N}$); e.g., when the discounts decrease geometrically: ${\boldsymbol\g}^\mathtt{S} = \{\gamma_\mathtt{S}^{t - 1}\}_{t = 1}^\infty$ and ${\boldsymbol\g}^\mathtt{B} = \{\gamma_\mathtt{B}^{t - 1}\}_{t = 1}^\infty$, where $0<\gamma_\mathtt{S}\le\gamma_\mathtt{B}<1$.
\begin{lemma}
\label{lemma_BgS_upperbound}
Let $\mathcal{A}\in\mathfrak{A}$, then the following upper bound for its expected strategic revenue holds:
\begin{equation}
\mathbb{E} \left[ \mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V) \right] \le \Gamma^\mathtt{B} \cdot H_D(p^\ast(D))
\end{equation}
\end{lemma}
\begin{proof}
Let $\mathbf{a}^{Opt}(\mathcal{A}, v, {\boldsymbol\g}^\mathtt{B}) = \{a^O_t\}_{t = 1}^\infty$; then, using the independence of $\mathbf{a}^{Opt}$ from the seller's discount, we get
$\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, v) = \sum_{t = 1}^{\infty} \gamma^\mathtt{S}_t a^O_t p_t \le \sum_{t = 1}^{\infty} \gamma^\mathtt{B}_t a^O_t p_t = \mathrm{SRev}_{{\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{B}} (\mathcal{A}, v) = \mathrm{SRev}_{{\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, v).$ Finally,
$
\mathbb{E} \left[ \mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V) \right] \le \mathbb{E} \left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V)\right] \le \Gamma^\mathtt{B} \cdot H_D(p^\ast(D)),
$ where Theorem~\ref{maintheorem} is applied with ${\boldsymbol\g}={\boldsymbol\g}^\mathtt{B}$ to infer the latter inequality.
\end{proof}
\begin{proposition}
\label{prop_BgS_A1}
Let ${\boldsymbol\g}^\mathtt{S}$ and ${\boldsymbol\g}^\mathtt{B}$ be the seller and the buyer discounts, respectively, s.t.\ ${\boldsymbol\g}^\mathtt{S} \le {\boldsymbol\g}^\mathtt{B}$. Then the algorithm $\mathcal{A}_1$ from Proposition~\ref{prop_non_uniqu} with ${\boldsymbol\g}$ set to ${\boldsymbol\g}^\mathtt{B}$ (i.e., with $p_1 = \Gamma^\mathtt{B} p^\ast(D)$) is optimal in $\mathfrak{A}$.
\end{proposition}
\begin{proof} Since the optimal strategy is independent of the seller's discount, the beginning of the proof is similar to that of Prop.~\ref{prop_non_uniqu} up to Eq.~(\ref{eq_prop_non_uniqu_proof_2}), where the seller's discount is used for the first time. In our case of different discounts, the identity in Eq.~(\ref{eq_prop_non_uniqu_proof_2}) for the expected strategic revenue takes the form
$
\mathbb{E}\left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\mathcal{A}_1, V)\right]
= \mathbb{P}[p^\ast(D) \le V] \cdot \gamma^\mathtt{S}_1 \Gamma^\mathtt{B} p^\ast(D) /\gamma^\mathtt{B}_1
= H_D(p^\ast(D)) \Gamma^\mathtt{B},
$
where we used $\gamma^\mathtt{S}_1=\gamma^\mathtt{B}_1=1$ (see Remark~\ref{remark_first_discount}).
We see that $\mathcal{A}_1$ achieves the upper bound of Lemma~\ref{lemma_BgS_upperbound} and is thus optimal.
\end{proof}
The relative expected revenue of the optimal algorithm $\mathcal{A}_1$ w.r.t.\ the optimal constant one $\mathcal{A}^\ast_D$ is $\Gamma^\mathtt{B}/\Gamma^\mathtt{S}$, which is greater than $1$ when ${\boldsymbol\g}^\mathtt{S} < {\boldsymbol\g}^\mathtt{B}$; i.e., \emph{the optimal revenue is larger than the one obtained by constantly offering the Myerson price} (in contrast to the equal-discount case).
For instance, for geometric discounts ${\boldsymbol\g}^\mathtt{S} = \{\gamma_\mathtt{S}^{t - 1}\}_{t = 1}^\infty$ and ${\boldsymbol\g}^\mathtt{B} = \{\gamma_\mathtt{B}^{t - 1}\}_{t = 1}^\infty$, this revenue improvement ratio $\Gamma^\mathtt{B}/\Gamma^\mathtt{S}$ is equal to $ (1-\gamma_\mathtt{S})/(1-\gamma_\mathtt{B})$ and goes to $+\infty$ as $\gamma_\mathtt{B}\rightarrow 1-$ for a fixed $\gamma_\mathtt{S}$.
Moreover, the algorithm $\mathcal{A}_1$ provides exactly the same expected revenue as if the seller played the game with the same discount as the buyer's ${\boldsymbol\g}^\mathtt{B}$.
\emph{This result is quite surprising}: the dominance of the buyer's discount ${\boldsymbol\g}^\mathtt{B}$ over the seller's ${\boldsymbol\g}^\mathtt{S}$ suggests the hypothesis that the seller should earn less than with ${\boldsymbol\g}^\mathtt{B}$ (e.g., see the revenue of $\mathcal{A}^\ast_D$). But the seller's ability to apply the trick of ``accumulating'' all his revenue at the first round (see Sec.~\ref{subsec_OptAlgNonUnique}) allows him to collect at the first round the payments for all goods discounted by the buyer's ${\boldsymbol\g}^\mathtt{B}$, and thus to boost his revenue over constant pricing.
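The comparison above can be illustrated numerically. The sketch below is a toy setup of our own ($D = U[0,1]$ and truncated geometric discounts): it evaluates the expected revenue of the ``big deal'' algorithm, the expected revenue of constant Myerson pricing, and the improvement ratio $(1-\gamma_\mathtt{S})/(1-\gamma_\mathtt{B})$.

```python
# Sketch (illustration only; D = U[0,1], truncated geometric discounts):
# with a less patient seller (g_S < g_B), the "big deal" algorithm A_1
# with p_1 = Gamma^B * p* earns Gamma^B * H_D(p*) in expectation, while
# constant Myerson pricing earns only Gamma^S * H_D(p*); the improvement
# ratio is Gamma^B / Gamma^S ~ (1 - g_S) / (1 - g_B).

g_S, g_B, T = 0.5, 0.9, 200
Gamma_S = sum(g_S ** t for t in range(T))   # ~ 1 / (1 - g_S) = 2
Gamma_B = sum(g_B ** t for t in range(T))   # ~ 1 / (1 - g_B) = 10
p_star = 0.5                                # Myerson price of U[0, 1]
H_star = (1 - p_star) * p_star              # H_D(p_star) = 0.25

# A_1: the buyer accepts iff v > p_star (his best response depends only
# on g_B), and the seller is then paid p_1 = Gamma_B * p_star at round 1
revenue_big_deal = (1 - p_star) * Gamma_B * p_star
revenue_constant = H_star * Gamma_S         # constant Myerson pricing
ratio = revenue_big_deal / revenue_constant
```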
\section{Less patient buyer}
\label{sec_NonEqualDisc_BlS}
In contrast to the previous cases, finding an optimal pricing here is a much more difficult problem, since the technique used in Sec.~\ref{sec_EqDiscounts} and~\ref{sec_NonEqualDisc_BgS} to upper bound the expected strategic revenue is no longer applicable (because it relies on the condition ${\boldsymbol\g}^\mathtt{S} \le {\boldsymbol\g}^\mathtt{B}$).
As we will see further, in the studied case, the obtained optimal algorithms are non-trivial and require the derivation of a multivariate analogue of the functional $H_D(\cdot)$ defined on a multidimensional space. We obtain this functional in Sec.~\ref{subsec_finite_games_theory} and use it to provide an extensive analysis of optimal algorithms in Sec.~\ref{subsec_finite_game_studies} and~\ref{subsec_infinite_game_studies}.
\begin{definition}
\label{def_DiscRate}
For a discount sequence ${\boldsymbol\g} = \{\gamma_t\}_{t = 1}^\infty$, we define \emph{the discount rate} sequence ${\boldsymbol\nu}({\boldsymbol\g}) := \{\nu_t({\boldsymbol\g})\}_{t = 1}^\infty$ as the sequence of the ratios of consecutive components of ${\boldsymbol\g}$: $\nu_t({\boldsymbol\g}) := \gamma_{t + 1}/\gamma_t$ when $\gamma_t > 0$, and $\nu_t({\boldsymbol\g}) := 0$ when $\gamma_t = 0$\footnote{Recall that if $\gamma_t = 0$ then $\gamma_{t'}=0$ for any $t' \ge t$, i.e., ${\boldsymbol\g}$ has no zeros between positive components (see Sec.~\ref{subsec_Setup}). Hence, the discount rate sequence ${\boldsymbol\nu}({\boldsymbol\g})$ has no zeros between positive components as well.}.
\end{definition}
\begin{remark}
\label{remark_DiscRateInequality}
Let ${\boldsymbol\g}^1 = \{\gamma^1_t\}_{t = 1}^\infty$ and ${\boldsymbol\g}^2 = \{\gamma^2_t\}_{t = 1}^\infty$ be discount sequences. Then, the condition ${\boldsymbol\nu}({\boldsymbol\g}^2) \ge {\boldsymbol\nu}({\boldsymbol\g}^1)$ is equivalent to the condition that the sequence $\{\gamma^2_t/\gamma^1_t\}_{t = 1}^\infty$ is non-decreasing (formally, treating $0/0$ as $+\infty$). The proof of this statement follows straightforwardly from Definition~\ref{def_DiscRate}.
\end{remark}
From here on in this section we consider the discounts ${\boldsymbol\g}^\mathtt{S} $ and ${\boldsymbol\g}^\mathtt{B}$ such that ${\boldsymbol\nu}({\boldsymbol\g}^\mathtt{S}) \ge {\boldsymbol\nu}({\boldsymbol\g}^\mathtt{B})$.
This condition means that the seller is more patient than the buyer \emph{locally at each round} (see Remark~\ref{remark_DiscRateInequality}). In particular, ${\boldsymbol\nu}({\boldsymbol\g}^\mathtt{S}) \ge {\boldsymbol\nu}({\boldsymbol\g}^\mathtt{B})$ implies that ${\boldsymbol\g}^\mathtt{S} \ge {\boldsymbol\g}^\mathtt{B}$, i.e., the seller is globally more patient than the buyer as well, but the converse implication is not true\footnote{We believe that the studied case of ${\boldsymbol\nu}({\boldsymbol\g}^\mathtt{S}) \ge {\boldsymbol\nu}({\boldsymbol\g}^\mathtt{B})$ covers a large variety of discount sequences (e.g., the geometric ones) that describe a more patient seller. Nonetheless, the study of the case when ${\boldsymbol\g}^\mathtt{S} \ge {\boldsymbol\g}^\mathtt{B}$ and ${\boldsymbol\nu}({\boldsymbol\g}^\mathtt{S}) \not\ge {\boldsymbol\nu}({\boldsymbol\g}^\mathtt{B})$ is interesting and is left for future work. A possible direction for studying this case is based on the following insight: if the buyer is locally more patient than the seller at some round $t$ (i.e., $\nu_t({\boldsymbol\g}^\mathtt{S}) < \nu_t({\boldsymbol\g}^\mathtt{B})$), then a trick similar to the one used in the ``big deal'' algorithm can be applied at this round $t$ to get an optimal algorithm.}.
A typical example of the studied case is a pair of geometric discounts: ${\boldsymbol\g}^\mathtt{S} = \{\gamma_\mathtt{S}^{t - 1}\}_{t = 1}^\infty$ and ${\boldsymbol\g}^\mathtt{B} = \{\gamma_\mathtt{B}^{t - 1}\}_{t = 1}^\infty$, where $0<\gamma_\mathtt{B}\le\gamma_\mathtt{S}<1$.
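The definitions above can be exercised on geometric discounts. The following sketch (an illustration with arbitrarily chosen ratios of ours) verifies the local-patience condition, the equivalent non-decreasing-ratio formulation of Remark~\ref{remark_DiscRateInequality}, and the implied global dominance.

```python
# Sketch (illustration only): discount-rate sequences nu(gamma) from the
# definition above, checked on truncated geometric discounts with
# g_B <= g_S (the concrete values below are arbitrary choices).

def rates(gamma):
    # nu_t = gamma_{t+1} / gamma_t, with nu_t = 0 when gamma_t = 0
    return [g2 / g1 if g1 > 0 else 0.0 for g1, g2 in zip(gamma, gamma[1:])]

T = 10
g_S, g_B = 0.9, 0.6
gamma_S = [g_S ** t for t in range(T)]      # gamma^S_t = g_S**(t-1), gamma_1 = 1
gamma_B = [g_B ** t for t in range(T)]

# the seller is locally more patient at each round: nu(gamma^S) >= nu(gamma^B)
locally_more_patient = all(
    nS >= nB for nS, nB in zip(rates(gamma_S), rates(gamma_B)))
# equivalent form of the Remark: gamma^S_t / gamma^B_t is non-decreasing
ratios = [s / b for s, b in zip(gamma_S, gamma_B)]
ratio_nondecreasing = all(x <= y + 1e-12 for x, y in zip(ratios, ratios[1:]))
# implied global dominance: gamma^S >= gamma^B componentwise
globally_more_patient = all(s >= b for s, b in zip(gamma_S, gamma_B))
```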
\begin{definition}
Let ${\boldsymbol\g}$ be a discount sequence, then an algorithm $\mathcal{A}\in\mathfrak{A}$ is said to be \emph{completely active for ${\boldsymbol\g}$}, if for any strategy $\mathbf{a}\in\mathfrak{S}$ there exists a valuation $v \in [0; +\infty)$ such that $S_\mathbf{a}(v) = S(v)$, where $S$ and $S_\mathbf{a}$ are defined in Remark~\ref{remark_strat_prop}, i.e., the surplus function $S_\mathbf{a}$ is tangent to the optimal surplus function $S$. We denote the set of all completely active algorithms for ${\boldsymbol\g}$ by $\tilde \mathfrak{A}({\boldsymbol\g})$.
\end{definition}
In the next subsection, we obtain the central results of our study. We do so for the case of a finite number of rounds; then, in Sec.~\ref{subsec_infinite_game_studies}, we show how to use these results to obtain approximately optimal algorithms for the case of an infinite number of rounds.
\subsection{Finite games: multivariate optimization functional}
\label{subsec_finite_games_theory}
In this section, we consider the game with a finite time horizon $T\in\mathbb{N}$: in particular, in this case, seller algorithms, buyer strategies, and all discounts (including ${\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}$) are considered as their $T$-length variants (defined in a natural way, similarly to their infinite analogues). For simplicity of presentation, we assume that all discounts are strictly positive in all $T$ rounds.
\begin{definition}
\label{def_RegularDiscount}
A discount sequence ${\boldsymbol\g}$ is said to be \emph{regular}\footnote{The reasons to introduce this class of discounts are discussed in Remark~\ref{remark_RegularDiscount}.}, if ${\boldsymbol\g}\cdot\mathbf{a}^1 \neq {\boldsymbol\g}\cdot\mathbf{a}^2$ for any pair of strategies $\mathbf{a}^1, \mathbf{a}^2\in\mathfrak{S}$, i.e., any buyer strategy $\mathbf{a}\in\mathfrak{S}$ results in a unique discounted quantity of purchased goods.
Here we used the short notation for the scalar product: $\mathbf{a}\cdot\mathbf{b}:=\sum_ta_tb_t$.
\end{definition}
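For finite sequences, regularity can be verified by brute force. The sketch below (illustrative only; the toy sequences are choices of ours) checks the definition directly by enumerating all $2^T$ discounted quantities.

```python
# Sketch (illustration only): in a T-round game, a discount sequence gamma
# is regular iff all 2^T discounted quantities gamma . a, over strategies
# a in {0,1}^T, are pairwise distinct.

from itertools import product

def is_regular(gamma):
    quantities = {sum(g * x for g, x in zip(gamma, strat))
                  for strat in product((0, 1), repeat=len(gamma))}
    return len(quantities) == 2 ** len(gamma)

# (1, 1/2, 1/4) is regular: the subset sums are distinct binary expansions;
# (1, 1/2, 1/2) is not: strategies (1,0,0) and (0,1,1) both yield 1.
```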
In the following important proposition, we show that any algorithm can be transformed into a completely active one for the discount ${\boldsymbol\g}^\mathtt{B}$ with no loss in the expected strategic revenue.
\begin{proposition} \label{prop_RatesCompleteActiveness}
In a $T$-round game, let ${\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}$ be discounts s.t.\ ${\boldsymbol\nu}({\boldsymbol\g}^\mathtt{B}) \le {\boldsymbol\nu}({\boldsymbol\g}^\mathtt{S})$ and ${\boldsymbol\g}^\mathtt{B}$ is a regular one. Then, for any pricing algorithm $\mathcal{A} \in \mathfrak{A}$, there exists a completely active algorithm $\tilde \mathcal{A} \in \tilde \mathfrak{A}({\boldsymbol\g}^\mathtt{B})$ s.t.\
\begin{equation}
\label{prop_RatesCompleteActiveness_eq1}
\mathbb{E}\left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V)\right] \le \mathbb{E}\left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\tilde \mathcal{A}, V)\right].
\end{equation}
\end{proposition}
\begin{proof}
For a given algorithm and a given discount ${\boldsymbol\g}^\mathtt{S}$, we will use the notation $r_\mathbf{a} := \mathrm{Rev}_{{\boldsymbol\g}^\mathtt{S}}(\mathcal{A}, \mathbf{a})$ for any $\mathbf{a}\in\mathfrak{S}$ (similarly to Remark~\ref{remark_strat_prop}, but indicating the seller's discount explicitly).
The main idea of the proof consists in the following technique.
We will consider all strategies $\mathbf{a}$ s.t. $S_\mathbf{a}(v) < S(v)$ $\:\forall v \in [0; +\infty)$ (referred to as \emph{non-active}), and, consequently, for each of them denoted by $\mathbf{a}$, we apply the following procedure of modifying the source algorithm $\mathcal{A}$: define a transformation $\mathcal{A}'$ that does not change $S_\mathbf{b}$ for $\mathbf{b} \in\mathfrak{S}\setminus\{\mathbf{a}\}$, moves $S_\mathbf{a}$ to the left until it is tangent to $S$ in some $v \in [0; +\infty)$, decreases $r_\mathbf{a}$, and does not decrease $r_\mathbf{b}$ for $\mathbf{b} \in\mathfrak{S}\setminus\{\mathbf{a}\}$. That will imply that the expected strategic revenue of the transformed algorithm $\mathcal{A}'$ is no lower than the one of the source algorithm $\mathcal{A}$. In this way, we will (one-by-one) make all strategies active.
Let us consider the set of all non-active strategies. If it is empty, then $\mathcal{A}\in \tilde \mathfrak{A}({\boldsymbol\g}^\mathtt{B})$ and Eq.~(\ref{prop_RatesCompleteActiveness_eq1}) holds.
Otherwise, note that the ``always-reject'' strategy $\mathbf{a} = \mathtt{0}^T$ is always active, since $S_\mathbf{a}(0) = 0 = S(0)$. Hence, one can order all non-active strategies by ``the last $\mathtt{1}$ index'' $t_1(\mathbf{a}) = \max \{t|\ a_t = 1\}$.
We take a non-active strategy $\mathbf{a}$ with the smallest $t_1(\mathbf{a})$, denoting $t_1 := t_1(\mathbf{a})$ and the node \mbox{$\mathfrak{n} := a_1 a_2 \dots a_{t_1 - 1}$}, and construct a new algorithm $ \mathcal{A}'$ based on the source one $\mathcal{A}$ in the following way.
Set $\mathcal{A}' = \mathcal{A}$ and transform the prices $ \mathcal{A}'(\mathfrak{n}), \mathcal{A}'(\mathfrak{r}(\mathfrak{n})), \ldots, \mathcal{A}'(\mathfrak{l}^{T - t_1-1}(\mathfrak{r}(\mathfrak{n})))$ as follows:
\begin{enumerate}
\item decrease $\mathcal{A}'(\mathfrak{n})$ until the function $S_\mathbf{a}$ is tangent to the function $S$ in some $v \in [0; +\infty)$;
\item if $t_1 < T$, increase $ \mathcal{A}'(\mathfrak{l}^j(\mathfrak{r}(\mathfrak{n})))$ for $j = 0, \dots, T - t_1-1$ in such a way that
\begin{equation}
\label{prop_RatesCompleteActiveness_proof_eq1}
\gamma^\mathtt{B}_{t_1} \cdot \mathcal{A}'(\mathfrak{n}) + \gamma^\mathtt{B}_{t_1 + j+1} \cdot \mathcal{A}'(\mathfrak{l}^j(\mathfrak{r}(\mathfrak{n}))) = \mathrm{const}.
\end{equation}
\end{enumerate}
Since we chose $\mathbf{a}$ with the smallest $t_1(\mathbf{a})$ among non-active strategies, the price $\mathcal{A}'(\mathfrak{n})$ obtained in step~1 is non-negative (and, thus, this step is correct).
Indeed, substitute the $t_1$-th component of $\mathbf{a}$ by $0$ and denote the obtained strategy by $\mathbf{b}$. Due to the selection of $\mathbf{a}$, the strategy $\mathbf{b}$ is active.
Now suppose $\mathcal{A}'(\mathfrak{n})$ were decreased to $0$; then the function $S_\mathbf{a}(v)$ would become equal to $S_\mathbf{b}(v) + \gamma^\mathtt{B}_{t_1} v$ by the definition. Since $S_\mathbf{b}$ is tangent to $S$, the increase of its slope by $\gamma^\mathtt{B}_{t_1}$ would result in an intersection with $S$. This means that $S_\mathbf{a}$ becomes tangent to $S$ before $\mathcal{A}'(\mathfrak{n})$ reaches $0$.
Now let us prove that the transformation $\mathcal{A}'$ satisfies the properties announced at the beginning of the proof.
Let $\mathbf{b} \in\mathfrak{S}\setminus\{\mathbf{a}\}$.
Step~2 implies that the transformation does not change $S_\mathbf{b}$.
For a strategy $\mathbf{b}$ that does not come through the node $\mathfrak{r}(\mathfrak{n})$, the revenue $r_\mathbf{b}$ remains the same, since the algorithm prices that contribute to $r_\mathbf{b}$ are not altered. For $\mathbf{b} \neq \mathbf{a}$ that comes through the node $\mathfrak{r}(\mathfrak{n})$, let us prove that $r_\mathbf{b}$ can only increase. Since $\mathbf{b} \neq \mathbf{a}$, there is a round $t=t_1+j+1$, $j\ge0$, where $b_t=1$. Let $j$ be such that this $t$ is the first round of acceptance after reaching the node $\mathfrak{r}(\mathfrak{n})$, and let us denote the node where this acceptance takes place by $\mathfrak{m}:=\mathfrak{l}^j(\mathfrak{r}(\mathfrak{n}))$. Then, one can write the following expression for the increment of $r_\mathbf{b}$:
$$
\gamma^\mathtt{S}_{t_1} \left( \mathcal{A}'(\mathfrak{n}) - \mathcal{A}(\mathfrak{n}) + ({\gamma^\mathtt{S}_{t_1 + j+1}}/{\gamma^\mathtt{S}_{t_1}}) \big( \mathcal{A}'(\mathfrak{m}) - \mathcal{A}(\mathfrak{m}) \big) \right) = \gamma^\mathtt{S}_{t_1} \left(-({\gamma^\mathtt{B}_{t_1 + j+1}}/{\gamma^\mathtt{B}_{t_1}})\big(\mathcal{A}'(\mathfrak{m}) - \mathcal{A}(\mathfrak{m})\big) + ({\gamma^\mathtt{S}_{t_1 + j+1}}/{\gamma^\mathtt{S}_{t_1}}) \big(\mathcal{A}'(\mathfrak{m}) - \mathcal{A}(\mathfrak{m})\big)\right) \ge 0,
$$
where we used Eq.~(\ref{prop_RatesCompleteActiveness_proof_eq1}) to obtain the first equation and used ${\boldsymbol\nu}({\boldsymbol\g}^\mathtt{B}) \le {\boldsymbol\nu}({\boldsymbol\g}^\mathtt{S})$ to obtain the last inequality. So, $r_\mathbf{b}$ can only increase for $\mathbf{b} \in\mathfrak{S}\setminus\{\mathbf{a}\}$.
Finally, since $S_\mathbf{a}$ becomes tangent to $S$, which is convex (see Remark~\ref{remark_strat_prop}), the function $S_\mathbf{a}$ either equals $S$ at exactly one point $v\in[0; +\infty)$ or coincides with $S_\mathbf{b}$ for some $\mathbf{b} \in\mathfrak{S}\setminus\{\mathbf{a}\}$. The latter case is impossible, since the functions $S_\mathbf{b}$ have different slopes for different strategies $\mathbf{b}$ due to the regularity of ${\boldsymbol\g}^\mathtt{B}$. Therefore, the optimal strategy does not change for a buyer with any valuation $v$ except the single one s.t.\ $S_\mathbf{a}(v)=S(v)$, and the expectation of the strategic revenue is not affected by the decrease of $r_\mathbf{a}$ (due to the continuity of the valuation distribution $D$).
Thus, $\mathbb{E}\left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V)\right] \le \mathbb{E}\left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}( \mathcal{A}', V)\right]$ and the number of non-active strategies of $\mathcal{A}'$ is reduced by one w.r.t.\ $\mathcal{A}$. After that, we repeatedly apply the above described transformation to $\mathcal{A}'$ until the resulting algorithm has no non-active strategies. In this way, we get $\tilde\mathcal{A}\in\tilde\mathfrak{A}({\boldsymbol\g}^\mathtt{B})$ that satisfies Eq.~(\ref{prop_RatesCompleteActiveness_eq1}).
\end{proof}
An attentive reader may note that the finiteness of the game is crucially used in the assumption that any (non-active) strategy $\mathbf{a}$ has ``the last $\mathtt{1}$ index'' $t_1(\mathbf{a})$. This is certainly untrue for infinite strategies, since there are ones that accept the offer in infinitely many rounds. Therefore, we consider the validity of the statement of Prop.~\ref{prop_RatesCompleteActiveness} (or its analogue) for the infinite game to be an open research question and a possible direction for future work.
\begin{corollary}
\label{cor_completely_active_corollary}
In a $T$-round game, let ${\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}$ be discounts s.t.\ ${\boldsymbol\nu}({\boldsymbol\g}^\mathtt{B}) \le {\boldsymbol\nu}({\boldsymbol\g}^\mathtt{S})$ and ${\boldsymbol\g}^\mathtt{B}$ is a regular one. If there exists an optimal pricing algorithm $\mathcal{A}^* \in \mathfrak{A}$, then there exists an optimal completely active algorithm $\tilde \mathcal{A}^* \in \tilde \mathfrak{A}({\boldsymbol\g}^\mathtt{B})$. Thus, $\max_{\mathcal{A} \in \mathfrak{A}} \mathbb{E}[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}} (\mathcal{A}, V)] = \max_{\tilde\mathcal{A} \in \tilde \mathfrak{A}({\boldsymbol\g}^\mathtt{B})} \mathbb{E}[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}} (\tilde\mathcal{A}, V)]$.
\end{corollary}
This corollary easily follows from the previous proposition and tells us that one can search for an optimal pricing algorithm within the class of completely active algorithms $\tilde\mathfrak{A}({\boldsymbol\g}^\mathtt{B})$.
Our next goal is to show that this class of algorithms $\tilde\mathfrak{A}$ can be linearly parametrized by the set $\Delta^k:= \{{\mathbf{v}} = \{v_j\}_{j=1}^k\in \mathbb{R}^k|\ 0 \le v_1 \le \dots \le v_k \}$, where $k := k(T) := 2^T - 1$. In order to do this, first of all, we introduce several matrix and vector notations.
First, from here on in our paper we fix an order of nodes $\mathfrak{N} = \{\mathfrak{n}_1,\ldots,\mathfrak{n}_k\}$\footnote{E.g., a consistent order: the nodes from the left subtree come before the root node $\mathfrak{e}$, and the ones from the right subtree come after the root $\mathfrak{e}$; then we recursively repeat this rule for the left and right subtrees.}, and, given this, we represent an algorithm $\mathcal{A}\in\mathfrak{A}$ as the vector of its prices $\mathcal{A} = (\mathcal{A}(\mathfrak{n}_1),\ldots,\mathcal{A}(\mathfrak{n}_k))$; note we use the same notation both for the algorithm and its vector representation, since the object type could be easily restored from the context where it is used.
We also introduce the map ${\mathbf{p}}: \mathfrak{S}\times\mathfrak{A} \rightarrow \mathbb{R}^T$, where ${\mathbf{p}}(\mathbf{a}, \mathcal{A})$ is the vector of consecutively offered prices by the algorithm $\mathcal{A}\in\mathfrak{A}$ along the path $\mathbf{a}\in\mathfrak{S}$.
Second, given a regular discount ${\boldsymbol\g}$, we introduce the notion of the \emph{${\boldsymbol\g}$-dependent natural order} of the buyer strategies $\mathfrak{S} = \{\mathtt{0},\mathtt{1}\}^T$: $\mathbf{a}\prec_{\boldsymbol\g}\mathbf{b} \Leftrightarrow {\boldsymbol\g} \cdot \mathbf{a} < {\boldsymbol\g} \cdot \mathbf{b}$ for any $\mathbf{a}, \mathbf{b}\in\mathfrak{S}$. The important property of this order is that the slope of the ${\boldsymbol\g}$-discounted surplus function $S_\mathbf{a}$ is lower than that of $S_\mathbf{b}$ when $\mathbf{a}\prec_{\boldsymbol\g}\mathbf{b}$. Using this order, we index the strategies: $\mathfrak{S} = \{\mathbf{a}^0,\ldots,\mathbf{a}^k\}$; note that the strategy $\mathtt{0}^T$ is always the first one, $\mathbf{a}^0$, while the strategy $\mathtt{1}^T$ is the last one, $\mathbf{a}^k$.
Third, given another discount ${\boldsymbol\g}'$, we introduce the payment vector ${\mathbf{r}}({\boldsymbol\g}', {\boldsymbol\g}, \mathcal{A})$, whose $j$-th component is $r_j({\boldsymbol\g}', {\boldsymbol\g}, \mathcal{A}) := {\boldsymbol\g}' \cdot {\mathbf{p}}(\mathbf{a}^j, \mathcal{A})$ for $j = 1,\ldots,k$ (note that we exclude the zero payment corresponding to the zeroth strategy $\mathbf{a}^0$).
We treat all vectors as vector-columns in our matrix operations.
Finally, we introduce the following $k \times k$ matrices:
\begin{itemize}
\item $J_T$ is a two-diagonal matrix with $1$ on the diagonal and $-1$ under the diagonal;
\item $Z_T({\boldsymbol\g}) = \mathrm{diag}(z_1,\ldots,z_k)$, where $z_j = ({\boldsymbol\g} \cdot \mathbf{a}^j - {\boldsymbol\g} \cdot \mathbf{a}^{j- 1})^{-1}$ for $j = 1,\ldots,k$;
\item $K_T({\boldsymbol\g}, {\boldsymbol\g}')=((\kappa_{ij}))_{i,j = 1,\ldots,k}$, where $\kappa_{ij} = \gamma'_t a^i_t$ if the path $\mathbf{a}^i\in\mathfrak{S}$ passes through the node $\mathfrak{n}_j\in\mathfrak{N}$ whose round is $t$\footnote{In other words, the node $\mathfrak{n}_j$ can be represented in the string notation as $a^i_1\ldots a^i_{t-1}$ for some $1\le t \le T$ (see Sec.~\ref{subsec_Notations}).}, and $\kappa_{ij}=0$ otherwise. Note that, by the definition, the $i$-th component of the vector $K_T({\boldsymbol\g}, {\boldsymbol\g}') \mathcal{A}$ equals $\sum_{t = 1}^T \gamma'_t a^i_t \mathcal{A}(a^i_1 \dots a^i_{t - 1})$.
\end{itemize}
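To make these notations concrete, the matrices $J_T$ and $Z_T({\boldsymbol\g})$ can be built numerically from the ${\boldsymbol\g}$-ordered strategies. A minimal sketch under the assumption of a finite geometric discount (the function and variable names are ours, for illustration only):

```python
import itertools
import numpy as np

def ordered_strategies(g):
    """All strategies in {0,1}^T sorted by the discounted quantity g . a
    (the gamma-dependent natural order); a^0 = (0,...,0) comes first."""
    T = len(g)
    return sorted(itertools.product((0, 1), repeat=T),
                  key=lambda a: np.dot(g, a))

def J_matrix(k):
    """Two-diagonal matrix: 1 on the diagonal, -1 under it, so (J r)_j = r_j - r_{j-1}."""
    return np.eye(k) - np.eye(k, k=-1)

def Z_matrix(g):
    """Diagonal matrix with z_j = (g.a^j - g.a^{j-1})^{-1} over the ordered strategies."""
    q = [np.dot(g, a) for a in ordered_strategies(g)]
    return np.diag([1.0 / (q[j] - q[j - 1]) for j in range(1, len(q))])

g = np.array([1.0, 0.5])   # geometric discount, T = 2, rate 0.5
print(J_matrix(3))
print(Z_matrix(g))         # diag(2, 2, 2): consecutive discounted quantities differ by 0.5
```

For $T=2$ and rate $0.5$, the ordered discounted quantities are $0, 0.5, 1, 1.5$, so all $k=3$ diagonal entries of $Z_T$ equal $2$.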
\begin{lemma}
\label{lemma_linear_transormation}
In a $T$-round game, let ${\boldsymbol\g}$ be a regular discount, let the strategies $\mathfrak{S}$ be naturally ordered by ${\boldsymbol\g}$ (as above), and let the matrix and vector notations be introduced as above. Then the set of completely active pricing algorithms $\tilde \mathfrak{A}({\boldsymbol\g})$ (i.e., the set of their vector representations) is linearly mapped onto $\Delta^{k(T)}$ by the matrix $W_T({\boldsymbol\g}) := Z_T({\boldsymbol\g})J_TK_T({\boldsymbol\g}, {\boldsymbol\g})$, which is correctly defined and invertible.
\end{lemma}
\begin{proof}
First, by the definition of the matrix $K_T({\boldsymbol\g}, {\boldsymbol\g})$ and the vector $\mathcal{A}$, we have that the payment vector ${\mathbf{r}}({\boldsymbol\g}, {\boldsymbol\g}, \mathcal{A}) = K_T({\boldsymbol\g}, {\boldsymbol\g})\mathcal{A}$.
Second, let us denote the intersection point of the lines $S_{\mathbf{a}^j}$ and $S_{\mathbf{a}^{j - 1}}$ by $v_j$ for $j = 1, \dots, k$ and combine them in the vector ${\mathbf{v}} = (v_1,\ldots,v_k)$.
From the identities
$$
{\boldsymbol\g} \cdot \mathbf{a}^j v_j - r_j({\boldsymbol\g}, {\boldsymbol\g}, \mathcal{A}) = S_{\mathbf{a}^j}(v_j) = S_{\mathbf{a}^{j-1}}(v_j) = {\boldsymbol\g} \cdot \mathbf{a}^{j - 1} v_j - r_{j-1}({\boldsymbol\g}, {\boldsymbol\g}, \mathcal{A}), \qquad j = 1, \dots, k,
$$
where we set $r_0({\boldsymbol\g}, {\boldsymbol\g}, \mathcal{A}) := 0$ (the zeroth strategy $\mathbf{a}^0 = \mathtt{0}^T$ yields no payment),
by simple arithmetic calculations, one can show that these intersection points can be expressed via the payment vector in the following matrix form: ${\mathbf{v}} = Z_T({\boldsymbol\g}) J_T {\mathbf{r}}({\boldsymbol\g}, {\boldsymbol\g}, \mathcal{A})$. Combining with the previous finding, we have that ${\mathbf{v}} = Z_T({\boldsymbol\g}) J_TK_T({\boldsymbol\g}, {\boldsymbol\g})\mathcal{A}$. So, we obtain in this way the linear map ${\mathbf{w}}_{\boldsymbol\g} (\mathcal{A}) := W_T({\boldsymbol\g})\mathcal{A} : \mathfrak{A} \rightarrow \mathbb{R}^k$ that depends on ${\boldsymbol\g}$.
The proof of the statement that ${\mathbf{w}}_{\boldsymbol\g} (\mathcal{A}) \in \Delta^{k(T)}$ if and only if $\mathcal{A} \in \tilde \mathfrak{A}({\boldsymbol\g})$ could be made via two inductions and is rather technical. Hence, it is deferred to Appendix~\ref{app_subsec_proof_lemma_linear_transormation} due to space constraints.
The matrices $Z_T$, $J_T$, and $K_T$ are invertible\footnote{This fact is trivial for matrices $Z_T$ and $J_T$. To show this for $K_T$, just apply the induction. By rearranging of rows and columns of $K_T$ (it does not affect the property of invertibility) one can obtain a block diagonal matrix with two blocks. Each of these blocks is based on a matrix with the form like $K_{T-1}$.}, thus, both the matrix $W_T$ and the map ${\mathbf{w}}_{\boldsymbol\g}: \mathfrak{A} \rightarrow \mathbb{R}^k$ are invertible as well. Hence, $\tilde \mathfrak{A}({\boldsymbol\g})$ is linearly mapped onto $\Delta^{k(T)}$ by ${\mathbf{w}}_{\boldsymbol\g}$.
\end{proof}
\begin{proposition}
\label{prop_problem_reduction}
In a $T$-round game, let ${\boldsymbol\g}^\mathtt{S}$ be a discount, let ${\boldsymbol\g}^\mathtt{B}$ be a regular discount, let the strategies $\mathfrak{S}$ be naturally ordered by ${\boldsymbol\g}^\mathtt{B}$ (as above), and let the matrix and vector notations be introduced as above.
Then there exists an invertible linear transformation ${\mathbf{w}}_{{\boldsymbol\g}^\mathtt{B}}:\tilde \mathfrak{A}({\boldsymbol\g}^\mathtt{B}) \to \Delta^{k}, k = k(T)$ s.t., for any completely active pricing algorithm $\mathcal{A} \in \tilde \mathfrak{A}({\boldsymbol\g}^\mathtt{B})$, its expected strategic revenue has the form
\begin{equation}
\label{prop_problem_reduction_eq_1}
\mathbb{E}_{V\sim D}\left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V)\right] = L_{D,{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}({\mathbf{v}}) \quad \hbox{for} \quad {\mathbf{v}} := {\mathbf{w}}_{{\boldsymbol\g}^\mathtt{B}}(\mathcal{A}),
\end{equation}
where
\begin{equation}
\label{prop_problem_reduction_eq_2}
L_{D,{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}({\mathbf{v}}) := (1 - F_D({\mathbf{v}}))^\intercal \Xi_T({\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}){\mathbf{v}}, \qquad {\mathbf{v}}\in\Delta^{k},
\end{equation}
$\Xi_T({\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}):= J_T \cdot K_T({\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{S}) K_T({\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{B})^{-1} J_T^{-1} Z_T({\boldsymbol\g}^\mathtt{B})^{-1}$ is the invertible $k\times k$ matrix that depends only on the discounts, the vector $(1 - F_D({\mathbf{v}})) \in \mathbb{R}^{k}$ has the $i$-th component equal to $1 - F_D(v_i)$, and $F_D$ is the cumulative distribution function of the variable $V$.
\end{proposition}
\begin{proof}
Let us take the transformation ${\mathbf{w}}_{{\boldsymbol\g}^\mathtt{B}}$ defined by ${\mathbf{w}}_{{\boldsymbol\g}^\mathtt{B}} (\mathcal{A}) := W_T({\boldsymbol\g}^\mathtt{B})\mathcal{A}$ (as in the proof of Lemma~\ref{lemma_linear_transormation}) and ${\mathbf{v}} = {\mathbf{w}}_{{\boldsymbol\g}^\mathtt{B}} (\mathcal{A})$. Recall that, in this case, the $j$-th component of ${\mathbf{v}}$ is the intersection point of the straight-line functions $S_{\mathbf{a}^j}$ and $S_{\mathbf{a}^{j - 1}}$. It is evident that the strategic buyer chooses the strategy $\mathbf{a}^j$ when his valuation $v$ is in the segment $[v_{j}; v_{j + 1})$ for $j \ge 0$ (to be formally correct, we set $v_0 := 0$ and $v_{k + 1} := +\infty$). Thus, the expected strategic revenue equals
$$
\mathbb{E}\left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V)\right] = \sum_{j = 1}^k (F_D(v_{j + 1}) - F_D(v_j)) ({\boldsymbol\g}^\mathtt{S} \cdot {\mathbf{p}}(\mathbf{a}^j, \mathcal{A})) = \sum_{j = 1}^k (F_D(v_{j + 1}) - F_D(v_j)) r_j({\boldsymbol\g}^\mathtt{S},{\boldsymbol\g}^\mathtt{B} , \mathcal{A}),
$$
see the definitions of ${\mathbf{p}}$ and ${\mathbf{r}}$ before Lemma~\ref{lemma_linear_transormation}.
Let us denote by $dF({\mathbf{v}})$ the $k$-dimensional vector with $F_D(v_{j + 1}) - F_D(v_j)$ in the $j$-th component; then, using the identity $dF({\mathbf{v}}) = J_T^\intercal (1 - F_D({\mathbf{v}}))$ (which holds since $(J_T^\intercal x)_j = x_j - x_{j+1}$ for $j < k$, $(J_T^\intercal x)_k = x_k$, and $F_D(v_{k+1}) = F_D(+\infty) = 1$), we have
$$
\mathbb{E}\left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V)\right] = dF({\mathbf{v}})^\intercal {\mathbf{r}}({\boldsymbol\g}^\mathtt{S},{\boldsymbol\g}^\mathtt{B} , \mathcal{A}) = (1 - F_D({\mathbf{v}}))^\intercal J_T {\mathbf{r}}({\boldsymbol\g}^\mathtt{S},{\boldsymbol\g}^\mathtt{B} , \mathcal{A}).
$$
From the definition of the matrix $K_T$, one can obtain ${\mathbf{r}}({\boldsymbol\g}^\mathtt{S},{\boldsymbol\g}^\mathtt{B} , \mathcal{A}) = K_T({\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{S}) \mathcal{A}$ (as in the proof of Lemma~\ref{lemma_linear_transormation}).
Finally, we have $\mathcal{A} = W_T({\boldsymbol\g}^\mathtt{B})^{-1} {\mathbf{v}} = K_T({\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{B})^{-1} J_T^{-1} Z_T({\boldsymbol\g}^\mathtt{B})^{-1} {\mathbf{v}}$ due to ${\mathbf{v}} = W_T({\boldsymbol\g}^\mathtt{B})\mathcal{A}$ and invertibility of ${\mathbf{w}}_{\gamma^{\mathtt{B}}}$.
So, let us combine all together:
$$
\mathbb{E}\left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V)\right] = (1 - F_D({\mathbf{v}}))^\intercal J_T \cdot K_T({\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{S}) K_T({\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{B})^{-1} J_T^{-1} Z_T({\boldsymbol\g}^\mathtt{B})^{-1} {\mathbf{v}},
$$
where the matrix standing between $(1 - F_D({\mathbf{v}}))^\intercal$ and ${\mathbf{v}}$ is exactly the matrix $\Xi_T({\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B})$.
\end{proof}
Corollary~\ref{cor_completely_active_corollary} and Proposition~\ref{prop_problem_reduction} immediately imply the following key result of our study.
\begin{theorem}
\label{th_problem_equivalence}
In a $T$-round game, let ${\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}$ be discounts s.t.\ ${\boldsymbol\nu}({\boldsymbol\g}^\mathtt{B}) \le {\boldsymbol\nu}({\boldsymbol\g}^\mathtt{S})$ and ${\boldsymbol\g}^\mathtt{B}$ is a regular one.
The optimization problem of finding an optimal algorithm is equivalent to maximization of the multivariate functional $L_{D,{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\cdot)$ over the set $\Delta^k = \{{\mathbf{v}} \in \mathbb{R}^k|\ 0 \le v_1 \le \dots \le v_k \}$, $k = 2^T-1$, i.e.,
\begin{equation}
\label{th_problem_equivalence_eq1}
\max_{\mathcal{A} \in \mathfrak{A}} \mathbb{E}_{V\sim D}\left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}} (\mathcal{A}, V)\right] = \max_{{\mathbf{v}} \in \Delta^k} L_{D,{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}({\mathbf{v}}),
\end{equation}
where $L_{D,{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}$ is defined in Eq.~(\ref{prop_problem_reduction_eq_2}) and depends only on the discounts and the distribution $D$ of the valuation variable $V$.
\end{theorem}
It is quite important to emphasize that the $k$-dimensional functional $L_{D,{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}$ is a \emph{bilinear form} applied to the vectors ${\mathbf{v}}$ and $1-F_D({\mathbf{v}})$.
This bilinear form is independent of the distribution $D$ and is defined by the matrix $\Xi_T({\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B})$.
In this view, we note that there is a strong relationship between our optimization functional $L_{D,{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}$ and the function $H_D$ (see Sec.~\ref{sec_ConstAlg}).
In other words, the functional $L_{D,{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}$ constitutes the key basis of optimal algorithms and is fundamental for them as the function $H_D(p) = p\mathbb{P}_{V\sim D}[V\ge p ]$ is fundamental for optimal pricing in static auctions.
\begin{figure}
\centering
\vspace{-2mm}
\includegraphics[width=\columnwidth]{T2Uniform.eps}
\vspace{-8mm}
\caption{$2$-round game. The prices $\mathcal{A}^\ast(\mathtt{0}), \mathcal{A}^\ast(\mathfrak{e}), \mathcal{A}^\ast(\mathtt{1})$ and the relative expected strategic revenue (w.r.t.\ $\mathcal{A}^*_D$) of the optimal algorithm $\mathcal{A}^\ast$ for discount rates:
(a) $\gamma_\mathtt{S} = 0.8$ and various $\gamma_\mathtt{B}$;
(b) $\gamma_\mathtt{B} = 0.2$ and various $\gamma_\mathtt{S}$.}
\label{img_T2_Uniform}
\vspace{-2mm}
\end{figure}
\begin{remark}[Th.~\ref{maintheorem} as a special case of Th.~\ref{th_problem_equivalence}]
\label{remark_Lfunctional_forequal}
Let us consider the case of equal discounts, ${\boldsymbol\g}^\mathtt{S} = {\boldsymbol\g}^\mathtt{B}$, then $K_T({\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{S})= K_T({\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{B})$ and the matrix $\Xi_T({\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}) = J_T \cdot K_T({\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{S}) K_T({\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{B})^{-1} J_T^{-1} Z_T({\boldsymbol\g}^\mathtt{B})^{-1}$ becomes equal just to the diagonal matrix $ Z_T({\boldsymbol\g}^\mathtt{B})^{-1}=\mathrm{diag}(\alpha_1,\ldots,\alpha_k)$, $\alpha_j \!\!=\!\! {\boldsymbol\g}^\mathtt{B} \cdot \mathbf{a}^j - {\boldsymbol\g}^\mathtt{B} \cdot \mathbf{a}^{j- 1}$. Hence,
$$\textstyle L_{D,{\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{B}} ({\mathbf{v}}) = (1 - F_D({\mathbf{v}}))^\intercal Z_T({\boldsymbol\g}^\mathtt{B})^{-1}{\mathbf{v}} = \sum_{j=1}^k(1-F_D(v_j))\alpha_jv_j = \sum_{j=1}^kH_D(v_{j})\alpha_j.$$
Since $\alpha_j>0$ (which follows from the ordering of $\{\mathbf{a}^j\}_j$ by ${\boldsymbol\g}^\mathtt{B}$) and $H_D(v) \le H_D(p^*(D))$ $\:\forall v$ (see Sec.~\ref{sec_ConstAlg}), we infer that the sum above is maximal when $v_1 = \ldots = v_k = p^*(D)$.
Thus, in the case of equal discounts, the optimization of the functional $L_{D,{\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{B}}$ reduces to the maximization of the function $H_D$ used to find Myerson's price $p^*(D)$. This is expected and \emph{additionally highlights the strong similarity of our optimization functional for the dynamic pricing to the one for the static pricing}.
\end{remark}
So, in the particular case of equal discounts, the optimization of $L_{D,{\boldsymbol\g}^\mathtt{B}, {\boldsymbol\g}^\mathtt{B}}$ has no closed-form solution, since it reduces to the optimization of $H_D$. Hence, we expect that, in the other cases, our optimization problem generally does not admit a closed-form solution either.
In the next subsections, we numerically find the maximum of $L_{D,{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}$ for several representative games and show that the obtained optimal algorithms are no longer constant and significantly outperform the optimal constant pricing in terms of the expected strategic revenue.
\begin{remark}[on regularity of ${\boldsymbol\g}^\mathtt{B}$]
\label{remark_RegularDiscount}
The regularity of the discount ${\boldsymbol\g}^\mathtt{B}$ is used in two cases, namely, to get:
(1)~the uniqueness of ${\boldsymbol\g}$-dependent natural order of the strategies $\mathfrak{S}$;
(2)~zero probability of the set of the valuations for which the optimal buyer strategy is not unique.
The case~(1) is used in Lemma~\ref{lemma_linear_transormation} and Prop.~\ref{prop_problem_reduction}; there, regularity is just needed for simplicity of presentation of the proofs; these statements (possibly with a slight change) will certainly hold without this restriction on ${\boldsymbol\g}^\mathtt{B}$.
The case~(2) is used in Prop.~\ref{prop_RatesCompleteActiveness} to guarantee that the strategic buyer will not prefer (with non-zero probability) a strategy that had been non-active before the transformation. So, Prop.~\ref{prop_RatesCompleteActiveness} may not hold without regularity of ${\boldsymbol\g}^\mathtt{B}$. However, we believe that one can obtain a similar result for a series of algorithms that ``converges'' to an algorithm from $\tilde \mathfrak{A}$ and use this series to obtain the statement of Th.~\ref{th_problem_equivalence}.
In any case, the restriction to regular discounts ${\boldsymbol\g}^\mathtt{B}$ does not harm the main conclusions of our work, because, for a finite horizon, regular discounts are far more frequent than non-regular ones; e.g., there is just a finite number of non-regular geometric discounts for a finite horizon. Hence, our qualitative results from Sec.~\ref{subsec_finite_game_studies} and~\ref{subsec_infinite_game_studies} are not affected by this restriction.
\end{remark}
\begin{figure}
\centering
\vspace{-2mm}
\includegraphics[width=\columnwidth]{T3Uniform.eps}
\vspace{-8mm}
\caption{$3$-round game. The prices $\mathcal{A}^\ast(\mathfrak{n})$, for nodes $\mathfrak{n}\in\mathfrak{N}$ s.t.\ $|\mathfrak{n}|\le 2$, and relative expected strategic revenue (w.r.t.\ $\mathcal{A}^*_D$) of the optimal algorithm $\mathcal{A}^\ast$ for discounts:
(a) $\gamma_\mathtt{S} = 0.8$ and various $\gamma_\mathtt{B}$;
(b) $\gamma_\mathtt{B} = 0.2$ and various $\gamma_\mathtt{S}$.}
\label{img_T3_Uniform}
\vspace{-4mm}
\end{figure}
\subsection{Finite games: case study}
\label{subsec_finite_game_studies}
In this subsection, based on several representative game settings, we demonstrate how to find optimal algorithms using the functional $L_{D,{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}$ and show the key properties of these algorithms.
We consider finite geometric discounts ${\boldsymbol\g}^\mathtt{B} = \{\gamma_\mathtt{B}^{t - 1} \mathbb{I}_{\{t \le T\}}\}_{t = 1}^\infty, {\boldsymbol\g}^\mathtt{S} = \{\gamma_\mathtt{S}^{t - 1} \mathbb{I}_{\{t \le T\}}\}_{t = 1}^\infty$ for $0 < \gamma_\mathtt{B} < \gamma_\mathtt{S} < 1$ and
the valuation $V$ uniformly\footnote{During our experimentation, we also analyzed some other distributions. Since the results for them are found to be similar to the ones for uniform, we present these results in Appendix~\ref{app_sec_numsulutions_diff_distr}.} distributed in $[0,1]$. For $2$- and $3$-round games, according to Th.~\ref{th_problem_equivalence}, we find the optimal pricing algorithms by maximizing the functional $L_{D,{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}$ from Eq.~(\ref{prop_problem_reduction_eq_2}).
Their expected revenues are compared with the expected revenue $H_D(p^*(D))\Gamma^\mathtt{S}$ of the optimal constant pricing $\mathcal{A}^*_D$ (see Sec.~\ref{sec_ConstAlg}), which is \emph{treated as the baseline} from here on in this paper.
{\bf The case of $T=2$.}
The maximum of the $3$-variate functional $L_{D,{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}$ can be found in the hyperplane $v_2 = v_3$ (the proof is provided in Appendix~\ref{app_subsec_dimension_reduction}).
Thus, for $T=2$ the maximization problem is reduced\footnote{This case shows that, even though the dimension of the problem on the right-hand side of Eq.~(\ref{th_problem_equivalence_eq1}) can be reduced, in general it cannot be reduced to a one-dimensional problem. We observe the same in the case of $T=3$.} to a $2$-variate optimization of the function $L_2:\Delta^2 \to \mathbb{R}$, where
$L_2(v_1,v_2) = (1 - F_D({\mathbf{v}}))^\intercal \Upsilon_2(\gamma_\mathtt{S}, \gamma_\mathtt{B}){\mathbf{v}}$, ${\mathbf{v}}\in\Delta^2$, and $\Upsilon_2(\gamma_\mathtt{S}, \gamma_\mathtt{B}) =
\begin{pmatrix}
\gamma_\mathtt{S} & 0 \\
- (\gamma_\mathtt{S} - \gamma_\mathtt{B}) & 1 + \gamma_\mathtt{S} - \gamma_\mathtt{B}
\end{pmatrix} $.
Note that, for the uniform distribution $D = U[0; 1]$, $F_D(v) = v$ and the optimized functional $L_2$ thus becomes quadratic. Hence, the problem can be solved by means of quadratic programming (QP). We solve it numerically using the Sequential Least Squares Programming (SLSQP) method.
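A minimal sketch of this optimization for the uniform case, using SciPy's SLSQP implementation (the matrix $\Upsilon_2$ is taken from the text above; the function name \texttt{optimal\_v} and the starting point are our own choices):

```python
import numpy as np
from scipy.optimize import minimize

def optimal_v(gS, gB):
    """Maximize L_2(v1, v2) = (1 - F(v))^T Upsilon_2 v over 0 <= v1 <= v2
    for V ~ U[0, 1], i.e. F(v) = min(v, 1)."""
    U = np.array([[gS, 0.0],
                  [-(gS - gB), 1.0 + gS - gB]])   # Upsilon_2(gS, gB) from the text
    L2 = lambda v: (1.0 - np.minimum(v, 1.0)) @ (U @ v)
    res = minimize(lambda v: -L2(v), x0=[0.4, 0.6], method="SLSQP",
                   bounds=[(0.0, 1.0)] * 2,
                   constraints=[{"type": "ineq", "fun": lambda v: v[1] - v[0]}])
    return res.x

print(optimal_v(0.5, 0.5))   # equal discounts: optimum approaches Myerson's price (0.5, 0.5)
```

For equal discounts, $L_2$ reduces to $\gamma(1-v_1)v_1 + (1-v_2)v_2$, whose maximizer $v_1=v_2=1/2$ is Myerson's price for $U[0;1]$, consistent with Remark~\ref{remark_Lfunctional_forequal}.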
So, for several pairs of $(\gamma_\mathtt{S}, \gamma_\mathtt{B})$, we find the optimal algorithm $\mathcal{A}^*$ and depict in Fig.~\ref{img_T2_Uniform} both its prices $\mathcal{A}^*(\mathfrak{n})$ for all nodes $\mathfrak{n}$ and its relative expected strategic revenue (w.r.t.\ $\mathcal{A}^*_D$). Namely, Fig.~\ref{img_T2_Uniform}(a) contains results for $\gamma_\mathtt{S} = 0.8$ and $\gamma_\mathtt{B}\in\{0.01 + i \cdot 0.005\}_{i = 0}^{148}$, while Fig.~\ref{img_T2_Uniform}(b) contains results for $\gamma_\mathtt{B} = 0.2$ and $\gamma_\mathtt{S}\in\{0.2 + i \cdot 0.005\}_{i = 0}^{159}$.
First, at the bottom of Fig.~\ref{img_T2_Uniform} we see that \emph{the optimal algorithm outperforms the baseline optimal constant pricing for any observed pair of discounts}.
Second, the top part of Fig.~\ref{img_T2_Uniform} demonstrates that, for any pair of discounts, the optimal algorithm is a \emph{consistent pricing}, i.e., one which never sets prices lower (higher) than earlier accepted (rejected, resp.) ones~\cite{2017-WWW-Drutsa}. In fact, this property is theoretically guaranteed for the studied case; namely, it easily follows from the relation between the optimal prices and the optimum ${\mathbf{v}}$: $\mathcal{A}^*(\mathtt{0}) = v_1$, $\mathcal{A}^*(\mathfrak{e}) = \gamma_\mathtt{B} v_1 + (1 - \gamma_\mathtt{B}) v_2 $, and $\mathcal{A}^*(\mathtt{1}) = v_3$.
Third, the obtained optimal algorithms appear to be continuous in $\gamma_\mathtt{S}$ and $\gamma_\mathtt{B}$. Moreover, if the distance between the discount rates $\gamma_\mathtt{S}$ and $\gamma_\mathtt{B}$ converges to $0$, then the optimal algorithm $\mathcal{A}^*$ converges to the optimal constant one $\mathcal{A}^*_D$ (which experimentally supports Remark~\ref{remark_Lfunctional_forequal}).
{\bf The case of $T=3$.}
In a similar way as was done for the previous case, the dimensionality of the optimization problem can be lowered from $7$ to $4$ when $\gamma_\mathtt{B} < (\sqrt 5 - 1)/2$, and to $5$ when $\gamma_\mathtt{B} > (\sqrt 5 - 1)/2$\footnote{The different cases result from a change in the order of the values $\{{\boldsymbol\g}^\mathtt{B} \cdot \mathbf{a}|\mathbf{a}\in\mathfrak{S}\}$ at the border point $(\sqrt 5 - 1)/2$.}.
The method to solve the optimization problem and the set of $(\gamma_\mathtt{S}, \gamma_\mathtt{B})$ are the same as in the case of $T=2$. Fig.~\ref{img_T3_Uniform} is arranged similarly to Fig.~\ref{img_T2_Uniform}.
Analogously to the case of $T=2$, in Fig.~\ref{img_T3_Uniform}, we observe the superiority of the optimal algorithm $\mathcal{A}^*$ over the baseline $\mathcal{A}^*_D$ for any pair of discount rates, as well as convergence to $\mathcal{A}^*_D$ as $|\gamma_\mathtt{S} - \gamma_\mathtt{B}|\to 0$ and the continuity of $\mathcal{A}^*$ in $\gamma_\mathtt{S}$ and $\gamma_\mathtt{B}$.
But, in contrast to the case of $T=2$, \emph{the optimal algorithm may be non-consistent}: the condition of consistency is violated by the reverse order of the prices $\mathcal{A}^*(\mathfrak{e}) < \mathcal{A}^*(\mathtt{0}\mathtt{1})$ for $\gamma_\mathtt{B} \gtrsim 0.54$ (as seen in Fig.~\ref{img_T3_Uniform}(a)), i.e., if the buyer rejects the first price but accepts the one at the second round, the seller then offers a price larger than the first-round one.
There are many other interesting observations: e.g., pairs of equal prices when $\gamma_\mathtt{B} \to 0$ (see Fig.~\ref{img_T2_Uniform} and~\ref{img_T3_Uniform}); a specific region of pairs $(\gamma_\mathtt{S}, \gamma_\mathtt{B})$ where some of the algorithm's prices become equal (see Fig.~\ref{img_T3_Uniform}); etc. These effects are also visible in Fig.~\ref{img_T4_UniformInf}, and a thorough study of them is deferred to future work.
\begin{figure}
\centering
\vspace{-2mm}
\includegraphics[width=\columnwidth]{T4UniformInf.eps}
\vspace{-8mm}
\caption{Infinite game. The prices $\mathcal{A}^\ast_4(\mathfrak{n})$, for nodes $\mathfrak{n}\in\mathfrak{N}$ s.t.\ $|\mathfrak{n}|\le 3$, of the optimal $4$-step algorithm $\mathcal{A}^\ast_4$ and the relative expected strategic revenue (w.r.t.\ $\mathcal{A}^*_D$) of the optimal $\tau$-step algorithm $\mathcal{A}^\ast_\tau$, $\tau=2,\ldots,6$, for discounts:
(a) $\gamma_\mathtt{S} = 0.8$ and various $\gamma_\mathtt{B}$;
(b) $\gamma_\mathtt{B} = 0.2$ and various $\gamma_\mathtt{S}$.}
\label{img_T4_UniformInf}
\vspace{-4mm}
\end{figure}
\subsection{Infinite game: approximately optimal algorithms and case study}
\label{subsec_infinite_game_studies}
Let us return to the case of the infinite game with ${\boldsymbol\nu}({\boldsymbol\g}^\mathtt{B}) < {\boldsymbol\nu}({\boldsymbol\g}^\mathtt{S})$. In this case, we have no powerful instrument to find an optimal pricing (unlike the case of finite games).
However, one can approximate the optimal algorithm by an optimal one in some finite-dimensional subclass of $\mathfrak{A}$.
Namely, for $\tau\in\mathbb{N}$, we say that $\mathcal{A}$ is a \emph{$\tau$-step pricing algorithm} if \mbox{$\:\forall \mathbf{a}, t > \tau:\ \mathcal{A}(\mathbf{a}_{1: t - 1}) = \mathcal{A}(\mathbf{a}_{1: \tau - 1}), $} i.e., at rounds $t>\tau$, it offers the same price as the one offered at round $\tau$.
The set of all $\tau$-step algorithms is denoted by $\mathfrak{A}_{\tau}$, and we also refer to any $\mathcal{A}\in\mathfrak{A}_{\tau}$ as a \emph{finite algorithm}.
An attentive reader may note that the problem of finding the optimal $\tau$-step algorithm for the infinite game is equivalent to finding the optimal algorithm in the $\tau$-round game with the finite discounts $\tilde {\boldsymbol\g}^\mathtt{S}$ and $\tilde {\boldsymbol\g}^\mathtt{B}$,
where $\tilde{\boldsymbol\g}$ is defined by: $\tilde\gamma_t := \gamma_t,\ t<\tau$;
$\tilde\gamma_\tau :=\sum_{t = \tau}^\infty \gamma_t$; and $\tilde\gamma_t := 0,\ t > \tau$. The condition ${\boldsymbol\nu}({\boldsymbol\g}^\mathtt{B}) < {\boldsymbol\nu}({\boldsymbol\g}^\mathtt{S})$ implies ${\boldsymbol\nu}(\tilde{\boldsymbol\g}^\mathtt{B}) < {\boldsymbol\nu}(\tilde{\boldsymbol\g}^\mathtt{S})$ (see Remark~\ref{remark_DiscRateInequality}). Hence, one can apply the optimization technique from Theorem~\ref{th_problem_equivalence}.
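For geometric discounts $\gamma_t = \gamma^{t-1}$ the truncated sequence $\tilde{\boldsymbol\g}$ has a closed form; the following minimal sketch (illustrative values only, our own) builds it and verifies that the total discount mass is preserved:

```python
def truncated_discounts(gamma, tau):
    """Build tilde{gamma} for geometric discounts gamma_t = gamma^{t-1}:
    keep gamma^{t-1} for t < tau and fold the whole tail
    sum_{t=tau}^inf gamma^{t-1} = gamma^{tau-1} / (1 - gamma) into round tau."""
    head = [gamma ** (t - 1) for t in range(1, tau)]
    tail = gamma ** (tau - 1) / (1.0 - gamma)
    return head + [tail]

tg = truncated_discounts(0.8, tau=4)  # 4-round surrogate for the infinite game
```

By construction $\sum_t \tilde\gamma_t = \sum_t \gamma_t = 1/(1-\gamma)$, which is the invariant the reduction relies on.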
The following proposition (the proof is presented in Appendix~\ref{app_subsec_proof_prop_approx_by_finite_alg}) formally states that the expected strategic revenue of the optimal $\tau$-step algorithm $\mathcal{A}^*_\tau$ converges to that of the optimal pricing $\mathcal{A}^*$ as $\tau\to\infty$.
\begin{proposition}
\label{prop_approx_by_finite_alg}
Let ${\boldsymbol\g}^\mathtt{S}$, ${\boldsymbol\g}^\mathtt{B}$ be discounts s.t.\ ${\boldsymbol\nu}({\boldsymbol\g}^\mathtt{B}) \le {\boldsymbol\nu}({\boldsymbol\g}^\mathtt{S})$ and let $\Gamma^\mathtt{S}_\tau := \sum_{t = \tau + 1}^\infty \gamma^\mathtt{S}_t$ for $\tau\in\mathbb{N}$. Then the following bounds hold:
$$
\max_{\mathcal{A} \in \mathfrak{A}_\tau} \mathbb{E}\left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V)\right] \le \max_{\mathcal{A} \in \mathfrak{A}} \mathbb{E}\left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V)\right] \le \max_{\mathcal{A} \in \mathfrak{A}_\tau} \mathbb{E}\left[\mathrm{SRev}_{{\boldsymbol\g}^\mathtt{S}, {\boldsymbol\g}^\mathtt{B}}(\mathcal{A}, V)\right] + \Gamma^\mathtt{S}_\tau \mathbb{E}\left[V\right].
$$
\end{proposition}
Finally, let us consider geometric discounts ${\boldsymbol\g}^\mathtt{B} = \{\gamma_\mathtt{B}^{t - 1} \}_{t = 1}^\infty, {\boldsymbol\g}^\mathtt{S} = \{\gamma_\mathtt{S}^{t - 1} \}_{t = 1}^\infty$ for $0 < \gamma_\mathtt{B} < \gamma_\mathtt{S} < 1$ and
the valuation $V$ uniformly distributed in $[0,1]$. Following the procedure described in Sec.~\ref{subsec_finite_game_studies}, we numerically find the optimal $\tau$-step algorithms $\mathcal{A}^*_\tau$, $\tau=2,\ldots,6$, for the same set of pairs $(\gamma_\mathtt{S}, \gamma_\mathtt{B})$ as in the case of $T=2$ in Sec.~\ref{subsec_finite_game_studies}. The prices of $\mathcal{A}^*_4$ obtained in this way and the relative expected revenue of $\mathcal{A}^*_\tau$, $\tau=2,\ldots,6$, are arranged in Fig.~\ref{img_T4_UniformInf} similarly to Fig.~\ref{img_T2_Uniform}.
We see that the expected strategic revenue of $\mathcal{A}^*_\tau$ converges quite quickly to the optimal one. This observation provides empirical support for Prop.~\ref{prop_approx_by_finite_alg}, which suggests that the convergence rate equals $\gamma_\mathtt{S}$. We can also make observations similar to the ones made for the optimal algorithms in Sec.~\ref{subsec_finite_game_studies}. In particular, we see that, in the case of the infinite game as well, \emph{the baseline optimal constant algorithm is significantly outperformed by algorithms with noticeably non-static pricing}.
\section{Conclusions}
We studied online learning algorithms that maximize expected cumulative revenue of repeated posted-price auctions in the scenario with a strategic buyer that holds a fixed private valuation.
More precisely, we investigated the situation in which the seller and the buyer may have different levels of patience to wait for utility, which is modeled via separate discounts in the cumulative utilities of the buyer and the seller.
Surprisingly, we found that only in the case of equal discounts is the seller unable to advantageously use the ability to change prices in a dynamic fashion (i.e., to learn them) with respect to the static approach.
Namely, the case of equal discounts admits two optimal algorithms: one of them constantly offers the Myerson price, while the other one proposes a ``big deal'': pay for all goods in advance (at the first round) or get nothing.
First, in the case of a more patient buyer, the ``big deal'' pricing algorithm was shown to outperform the constant pricing.
Second, in the inverse case, when the seller's discount rate is larger than the buyer's,
we reduced the problem of finding an optimal algorithm to a multidimensional optimization problem with a multivariate analogue of the functional used to determine Myerson's price. Our reduction does not admit a closed-form solution in general (similarly to the case of revenue-optimal static auctions), but can be solved by state-of-the-art numerical optimization techniques (e.g., gradient-based ones).
We conducted an extensive analysis of the numerically found optimal algorithms to demonstrate that they are non-trivial, may be non-consistent, and generate larger expected revenue than the constant pricing with the Myerson price.
Overall, this work provided clear techniques for obtaining guarantees on the seller's revenue in repeated posted-price auctions that may help in future studies on more sophisticated scenarios and auction mechanisms.
\bibliographystyle{abbrv}
\section{Introduction }
The last decades have seen an increasing interest in the study of
``Manifolds with Density'', i.e., manifolds in which both perimeter and
volume carry the same weight. For an idea of the
possible applications of this subject one can consult, for instance, \cite{Mo}, \cite{Mo1}
and the references therein.
In particular, much attention has been devoted
to finding, for a given manifold with density, its isoperimetric set (see, e.g.,
\cite{BCMR}, \cite{BBCLT}, \cite{BCM, BCM2, BCM3, BMP, XR},
\cite{CMV}, \cite{CJQW}, \cite{Cham}, \cite{DDNT}, \cite{Howe},
\cite{KZ}, \cite{MadernaSalsa}, \cite{Mo1}, \cite{Mo2}).
On the other hand, many authors have studied isoperimetric problems
when volume and perimeter carry two different weights.
A remarkable example is obtained when
the manifold is $\mathbb R^{N}$ and the two weights are two
different powers of the distance from the origin.
More precisely, given two real numbers $k$ and $l$, the problem is to find the set $G$ in $\mathbb R^{N}$
which minimizes the weighted perimeter $\displaystyle\int_{\partial G } |x|^k \, {\mathcal H}_{N-1} (dx) $
once the weighted volume $\displaystyle\int_{ G } |x|^l \, dx $ is prescribed.
Such a problem is far from being artificial, since its solution allows one to compute, for instance,
the best constants in the well-known Caffarelli-Kohn-Nirenberg inequalities, as well as
to establish the radiality of the corresponding minimizers. Several partial results have been obtained on this issue (see, e.g.,
\cite{ABCMP}, \cite{BBMP2}, \cite{C}, \cite{diGiosia_etal}, \cite{DHHT}, \cite{Howe}, \cite{Mo}) and a
complete solution is contained in the recent paper
\cite{diGiosia_etal}. There the authors find the full range of the
parameters $k$ and $l$ for which the isoperimetric set is the ball centered at the origin.
The first step of their proof consists in reducing the problem
to a two-dimensional one by means of spherical symmetrization (also known as
foliated Schwarz symmetrization).
\\
Let
$\mathbb{R}^{N} _{+} := \{ x \in \mathbb{R}^N :\, x_N >0\} $.
The problem that we address here is the following:
Given $k,l \in \mathbb{R}$, $\alpha > 0$,
\medskip
\noindent {\sl Minimize $\displaystyle\int_{\partial \Omega } |x|^k x_N^\alpha \, {\mathcal H}_{N-1} (dx) $
among all smooth sets
$\Omega \subset \mathbb{R} ^{N}_{+}$ satisfying $\displaystyle\int_{\Omega } |x|^lx_N^\alpha \, dx =1$.}
\medskip
Let $B_R$ denote the ball of $\mathbb R^N$ of radius $R$ centered at the origin
and let $B$ and $\Gamma$ denote the Beta and the Gamma function, respectively.
Our main result, contained in Section 5, is the following.
\begin{theorem}
\label{maintheorem}
Let $N\in \mathbb{N} $, $N\geq 2$,
$k,l \in \mathbb{R} $, $\alpha > 0$ and $l+N+\alpha >0$.
Further, assume that one of the following conditions holds:
\\
{\bf (i)} $l+1\leq k $;
\\
{\bf (ii)} $k\leq l+1$ and $ l\frac{N+\alpha-1}{N+\alpha} \leq k\leq 0$;
\\
{\bf (iii)} $N\geq 2$, $ 0\leq k\leq l+1$ and
\begin{equation}\label{l_1N3}
l\le l_1 (k,N,\alpha ) := \frac{(k+N+\alpha-1)^3 }{(k+N+\alpha-1)^2 - \frac{(N+\alpha-1)^2 }{N+\alpha} } -N -\alpha\,.
\end{equation}
\\
Then
\begin{equation}
\label{mainineq}
\displaystyle\int_{\partial \Omega } |x|^k x_N^\alpha\, {\mathcal H}_{N-1} (dx)
\geq
C_{k,l,N, \alpha} ^{rad}
\left(
\displaystyle\int_{\Omega } |x|^lx_N^\alpha\, dx
\right)
^{(k+N+\alpha-1)/(l+N+\alpha) } ,
\end{equation}
for all smooth sets $\Omega $ in
$\mathbb{R}^N _+ $,
where
\begin{eqnarray}
\label{defCkl}
C_{k,l,N, \alpha} ^{rad} & := &
\frac{\displaystyle\int_{\partial B_1 } |x|^k x_N^\alpha\, {\mathcal H}_{N-1} (dx)}
{\left( \displaystyle\int_{B_1 \cap \mathbb{R}^N _+}
|x|^l x_N^\alpha\, dx \right ) ^{(k+N+\alpha-1)/(l+N+\alpha) } }
\\
&= &
\nonumber
\left( l+\alpha +N\right) ^{\frac{k+N+\alpha -1}{l+N+\alpha }}\left(
B\left( \frac{N-1}{2},\frac{\alpha +1}{2}\right) \frac{\pi ^{\frac{N-1}{2}}}{
\Gamma \left( \frac{N-1}{2}\right) }\right) ^{\frac{l-k+1}{l+N+\alpha }}.
\end{eqnarray}
Equality in (\ref{mainineq}) holds if $\Omega =B_R\cap\mathbb{R}_+^N$.
\end{theorem}
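The constant in (\ref{defCkl}) is easy to evaluate numerically. The sketch below (our own verification) computes it with SciPy's Beta and Gamma functions; as a sanity check, for $k=l=\alpha=0$ and $N=2$ it recovers $\sqrt{2\pi}$, the ratio attained by half-disks in the half-plane:

```python
import math
from scipy.special import beta, gamma

def C_rad(k, l, N, alpha):
    """Evaluate C^{rad}_{k,l,N,alpha} from the closed form in the theorem:
    (l+alpha+N)^{(k+N+alpha-1)/(l+N+alpha)} * kappa^{(l-k+1)/(l+N+alpha)},
    where kappa = B((N-1)/2, (alpha+1)/2) * pi^{(N-1)/2} / Gamma((N-1)/2)."""
    kappa = (beta((N - 1) / 2, (alpha + 1) / 2)
             * math.pi ** ((N - 1) / 2) / gamma((N - 1) / 2))
    return ((l + alpha + N) ** ((k + N + alpha - 1) / (l + N + alpha))
            * kappa ** ((l - k + 1) / (l + N + alpha)))
```

For $k=l=\alpha=0$, $N=2$: a half-disk $B_R^+$ has perimeter contribution $\pi R$ and area $\pi R^2/2$, so the ratio is $\pi R / (\pi R^2/2)^{1/2} = \sqrt{2\pi}$, matching the formula.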
\noindent
Note that the weights we consider are not radial, and it seems non-trivial to use
spherical symmetrization. Hence we did not try to adapt the techniques contained in \cite{diGiosia_etal};
instead, depending on the regions where the three parameters lie, we use different methods.
The proof in the case {\bf (i)} is given in \cite{ABCMP_atti}. It is based on Gauss's Divergence Theorem.
In the case {\bf (ii)} (see Theorem \ref{th1bis}) the proof uses an appropriate change of variables,
which has been introduced in \cite{H} and \cite{HK},
together with the isoperimetric inequality with respect to the weight $x_{N}^{\alpha} $.
The case {\bf (iii)} (see Theorem \ref{th1ter}) is the most delicate and it requires several different arguments:
again a suitable change of variables, then an interpolation argument, introduced for the first time in our previous paper \cite{ABCMP} and, finally, the so-called starshaped rearrangement.
In Section 4 we provide some necessary conditions on $k$, $l$ and $\alpha$ such that
the half-ball centered at the origin is an isoperimetric set. In the proof we
first evaluate the second variation of the perimeter functional.
The claim is achieved using the fact that such a variation
at a minimizing set must be nonnegative, together with a nontrivial
weighted Poincar\'e inequality on the sphere derived in \cite{BCM2}.
\noindent Part of these results have been announced in \cite{ABCMP_atti}.
\section{Notation and preliminary results }
Throughout this article $N$ will denote a natural number with $N\geq 2$,
$k$ and $l $ are real numbers, while $\alpha$ is a nonnegative number
and
\begin{equation}
\label{ass1}
l+N+\alpha>0 .
\end{equation}
\noindent
Let us introduce some notation.
\begin{eqnarray*}
\mathbb{R}^{N}_{+}
& := &
\left\{ x \in \mathbb{R}^{N}: x_N >0 \right\},
\\
\mathbb{S}^{N-1}_{+}
& := &
\left\{ x \in \mathbb{S}^{N-1} : x_N >0 \right\},
\\
B_{R}(x_0 )
& := &
\left\{ x\in \mathbb{R}^{N}:\left\vert x-x_0 \right\vert <R\right\} , \quad (x_0 \in \mathbb{R}^N ),
\\
B_{R}
& := &
B_{R}(0), \quad (R>0),
\\
B_{R}^{+}
& := &
B_{R} \cap \mathbb{R}^{N}_{+}.
\end{eqnarray*}
Furthermore, ${\mathcal L} ^m $ will denote the $m$-dimensional Lebesgue measure, ($1\leq m\leq N$), and
\begin{eqnarray*}
\omega _N & := & {\mathcal L}^N (B_1 ),
\\
\kappa (N, \alpha ) & := & \int_{\mathbb{S} ^{N-1} _+ } x_N^\alpha \, {\mathcal H} _{N-1} (dx).
\end{eqnarray*}
Note that
\begin{equation}
\label{measSN-1+}
\kappa (N, \alpha ) = B\left( \frac{
N-1}{2},\frac{\alpha +1}{2}\right)
\frac{\pi ^{\frac{N-1}{2}}}{\Gamma \left(
\frac{N-1}{2}\right) },
\end{equation}
where $B$ and $\Gamma$ are the Beta function and the Gamma function,
respectively,
(see \cite{BCM3}).
\\
We will use frequently $N$-dimensional spherical coordinates $(r, \theta)$ in $\mathbb{R} ^N$:
$$
\mathbb{R}^N \ni x = r\theta , \quad \mbox{where $r=|x|$, and $\theta = x|x|^{-1} \in \mathbb{S}^{N-1} $.}
$$
If $M$ is any set in $\mathbb{R}^N _{+}$, then $\chi _M $ will denote its characteristic function.
\noindent Next, let $k$ and $l$ be real
numbers satisfying (\ref{ass1}). We define a measure $\mu _{l, \alpha}$ by
\begin{equation}
d\mu _{l, \alpha}(x)=|x|^{l} x_N^\alpha\,dx.
\label{dmu}
\end{equation}
If $M \subset $ ${\mathbb R}^{N}_{+}$ is a measurable set with finite
$\mu _{l, \alpha} $-measure, then we define $M^{\star }$, the
$\mu_{l,\alpha }$-symmetrization of $M$,
as follows:
\begin{equation}
M^{\star } := B_{R}^{+}
\hspace{.2 cm}
\text{with } R \text{ such that }
\mu_{l, \alpha}
\left( B_{R}^{+} \right) =
\mu _{l, \alpha} \left( M\right) = \int_M d\mu _{l, \alpha} (x) .
\label{mu_(M)}
\end{equation}
If $u: \mathbb{R}^{N}_{+} \rightarrow \mathbb{R} $ is a measurable function such that
$$
\mu_{l, \alpha} \left( \left\{ |u(x)|>t\right\} \right) <\infty \qquad \forall t>0,
$$
then let $u^{ \star }$ denote the weighted Schwarz symmetrization of $u$, or,
in short, the $\mu_{l, \alpha} $-symmetrization of $u$, which is given by
\begin{equation}
u^{ \star }(x)=\sup \left\{ t\geq 0:\mu_{l, \alpha}
\left( \left\{ |u(x)| >t\right\} \right)
>
\mu _{l, \alpha} \left( B_{\left\vert x\right\vert }^{+} \right) \right\} .
\label{u_star}
\end{equation}
Note that $u^{\star }$ is radial and radially non-increasing,
and if $M$ is a measurable set with finite $\mu _{l, \alpha} $-measure, then
$$
\left( \chi _M \right) ^{\star} = \chi _{M^{ \star }} .
$$
The {\sl $\mu _{k, \alpha}$--perimeter\/} of a measurable set $M $ is given by
\begin{equation}
P_{\mu _{k, \alpha}}(M ):=\sup \left\{ \int_{M }\mbox{div}\,\left(x_N^\alpha |x|^{k}
\mathbf{v}\right) \,dx:\,\mathbf{v}\in C_{0}^{1}(\mathbb{R}^N ,\mathbb{R}^{N}),\,|
\mathbf{v}|\leq 1\mbox{ in }\, M \right\} .
\end{equation}
\noindent It is well-known that the above
\textsl{distributional definition} of
weighted perimeter is equivalent to the following
\begin{equation}
P_{\mu_{k, \alpha} }(M )
= \left\{
\begin{array}{ccc}
\displaystyle\int_{\partial M } x_N^\alpha |x|^{k} \, {\mathcal H} _{N-1}(dx)
& \mbox{ if } &
\partial M \mbox{ is } (N-1)-\mbox{rectifiable } \\
& & \\
+ \infty \qquad
& \mbox{ otherwise,} &
\end{array}
\right.
\end{equation}
where, here and throughout, ${\mathcal H} _{N-1} $ will denote the $(N-1)$-dimensional Hausdorff-measure.
We will call a set $\Omega \subset \mathbb{R}^N _+ $ {\sl smooth},
if
for every $x_0 \in \partial \Omega \cap \mathbb{R}^N _+ $,
there is a number $r >0$ such that $B_r (x_0 )\subset \mathbb{R}^N _+ $, $B_r (x_0 ) \cap \Omega $
has exactly one connected component and $B_r (x_0 ) \cap \partial \Omega $
is the graph of a $C^1 $--function on an open set in $\mathbb{R} ^{N-1} $.
Let $\Omega \subset \mathbb{R} ^{N}_{+}$ and $p\in \left[ 1,+\infty \right) $.
We will denote by $L^{p}(\Omega ,d\mu
_{l, \alpha})$ the space of all Lebesgue measurable real valued functions $u$ such
that
\begin{equation}
\left\Vert u \right\Vert
_{L^{p}(\Omega ,d\mu _{l, \alpha})}
:=\left(
\int_{\Omega
}
\left\vert
u\right\vert
^{p}d\mu _{l, \alpha} (x)
\right) ^{1/p}
<+\infty .
\label{Norm_Lp}
\end{equation}
\\
By $W^{1,p}(\Omega ,d\mu _{l, \alpha})$ we denote the weighted Sobolev space
consisting of all functions which together with their weak derivatives $u_{x_{i}}$, ($i=1,...,N$),
belong to $L^{p}(\Omega ,d\mu _{l, \alpha})$.
This space will be equipped with the norm
\begin{equation}
\left\Vert u\right\Vert _{W^{1,p}(\Omega ,d\mu _{l, \alpha})}:=\left\Vert
u\right\Vert _{L^{p}(\Omega ,d\mu _{l, \alpha})}+\left\Vert \nabla u\right\Vert
_{L^{p}(\Omega ,d\mu _{l, \alpha})}.
\label{Norm_Wp}
\end{equation}
Finally, ${\mathcal D} ^{1,p}( \Omega ,d\mu _{k, \alpha})$ will stand for the closure of
$C_{0}^{\infty }(\mathbb{R}^N )$ under the norm
$$
\left( \int_{\Omega } |\nabla u|^p \, d\mu _{k,\alpha } (x) \right) ^{1/p}.
$$
We will often use the following well-known {\sl Hardy-Littlewood inequality}
\begin{equation}
\label{hardylitt1}
\int_{\mathbb{R}^{N}_{+} } uv \, d\mu _{l, \alpha}(x) \leq \int_{\mathbb{R}^{N}_{+} } u^{ \star} v^{\star} \, d\mu _{l, \alpha} (x) ,
\end{equation}
which holds for any pair of functions
$u,v\in L^2 (\mathbb{R}^{N}_{+} , d\mu _{l, \alpha} )$.
Now let us recall the so-called starshaped rearrangement
(see \cite{Kaw}) which we will use in Section 5.
For later convenience, we will write $y$ for points in $\mathbb{R}^{N}_{+} $ and $(z, \theta )$ for corresponding $N$-dimensional spherical coordinates ($z= |y|$, $\theta = y|y|^{-1} $).
\\
We call a measurable set $M\subset \mathbb{R}^{N}_{+} $ {\sl starshaped\/} if the set
$$
M\cap \{ z\theta : \, z\geq 0 \}
$$
is either empty or a segment
$\{ z\theta : \, 0\leq z< m(\theta ) \} $
for some number $m(\theta ) >0 $, for almost every $\theta \in {\mathbb S} ^{N-1} $.
\\
If $M$ is a bounded measurable set in $\mathbb{R}^{N}_{+} $, and
$\theta \in {\mathbb S}^{N-1}_{+} ,$ then let
$$
M(\theta ) := M\cap \{ z\theta :\, z\geq 0\}.
$$
There is a unique number $m(\theta )\in [0,+\infty )$ such that
$$
\int_0 ^{m(\theta )} z^{N-1}\, dz = \int_{M(\theta )} z^{N-1} \, dz.
$$
We define
$$
\widetilde{M}(\theta ) := \{ z\theta : \, 0\leq z\leq m(\theta ) \} ,
\quad (\theta \in {\mathbb S} ^{N-1}_{+} ),
$$
and
$$
\widetilde{M} := \{ z\theta : \, z\in \widetilde{M}(\theta ) , \,
\theta \in {\mathbb S} ^{N-1}_{+} \} .
$$
We call the set $\widetilde{M}$ the {\sl starshaped rearrangement of $M$\/}.
\\
Note that $\widetilde{M} $ is Lebesgue measurable and starshaped, and
we have
\begin{equation}
\label{starsh1}
{\mathcal L} ^N (M) = {\mathcal L} ^N (\widetilde{M}).
\end{equation}
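Along a fixed ray, the definition above gives $m(\theta)^N = N\int_{M(\theta)} z^{N-1}\,dz$, so $m(\theta)$ is computable in closed form when $M(\theta)$ is a finite union of disjoint intervals. A minimal sketch (with hypothetical interval data, for illustration only):

```python
def starshaped_radius(intervals, N):
    """m(theta) such that int_0^m z^{N-1} dz equals the same integral over
    M(theta) given as a union of disjoint intervals [a_i, b_i] on the ray.
    Since int_a^b z^{N-1} dz = (b^N - a^N)/N, we get m = (sum(b^N - a^N))^{1/N}."""
    mass = sum(b ** N - a ** N for a, b in intervals)  # equals N * integral
    return mass ** (1.0 / N)

# M(theta) = [0, 1) union (2, 3) in the plane (N = 2): m = sqrt(1 + 5)
m = starshaped_radius([(0.0, 1.0), (2.0, 3.0)], N=2)
```

A segment $[0, R)$ is a fixed point of the construction, reflecting that starshaped sets are unchanged by the rearrangement along each ray.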
If $v:\mathbb{R}^{N}_{+} \to \mathbb{R} $ is a measurable function with compact support,
and $t\geq 0$, then let
$ E_t $ be the super-level set $\{ y: \, |v(y)| \geq t\} $. We
define
$$
\widetilde{v} (y) := \sup \{ t\geq 0 :\, y \in \widetilde{E_t } \} .
$$
We call $\widetilde{v} $ the {\sl starshaped rearrangement of $v$ \/}.
It is easy to verify that
$\widetilde{v}$ is equimeasurable with $v$,
that is, the following properties hold:
\begin{eqnarray}
\label{starsh2}
& & \widetilde{E_t} = \{ y:\, \widetilde{v} (y)\geq t\} ,
\\
\label{starsh3}
& & {\mathcal L} ^N (E_t ) =
{\mathcal L} ^N (\widetilde{E_t} ) \quad \forall t\geq 0.
\end{eqnarray}
This also implies Cavalieri's principle:
If $F\in C ([0, +\infty ))$ with $F(0)=0$ and if
$F(v) \in L^1 ( \mathbb{R}^N _+ ) $, then
\begin{equation}
\label{caval1}
\int_{\mathbb{R} ^N _+ } F(v)\, dy =
\int_{\mathbb{R} ^N _+ } F(\widetilde{v} )\, dy
\end{equation}
and if $F$ is non-decreasing, then
\begin{equation}
\label{monrearr}
\widetilde{F(v)} = F(\widetilde{v}).
\end{equation}
Note that the mapping
$$
z\longmapsto \widetilde{v} (z\theta ) , \quad (z\geq 0),
$$
is non-increasing for all $\theta \in {\mathbb S} ^{N-1} $.
\\
If $v,w\in L^2 (\mathbb{R}^{N}_{+} )$ are functions with compact support,
then there holds
Hardy-Littlewood's inequality:
\begin{equation}
\label{harlit}
\int_{\mathbb{R}^{N}_{+} } vw \, dy \leq \int_{\mathbb{R}^{N}_{+} }
\widetilde{v} \widetilde{w} \, dy.
\end{equation}
If $f:(0,+\infty) \to \mathbb{R} $ is a measurable function with compact support, then its (equimeasurable) {\sl non-increasing rearrangement }, $\widehat{f} : (0,+\infty )\to [0,+\infty )$,
is the monotone non-increasing
function such that
$$
{\mathcal L} ^1 \{ t \in [0,+\infty ) :\, |f(t )| > c\} =
{\mathcal L}^1 \{ t \in [0,+\infty ) :\, \widehat{f}(t ) > c \} \quad \forall c\geq 0,
$$
see \cite{Kaw}, Chapter 2.
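On a uniform grid, the discrete analogue of $\widehat{f}$ is obtained by sorting the sampled values of $|f|$ in non-increasing order, which preserves the measure of every super-level set. A minimal sketch (our own discretization, for illustration only):

```python
def decreasing_rearrangement(samples):
    """Discrete analogue of the non-increasing rearrangement hat{f}:
    sort the sampled values of |f| in non-increasing order. On a uniform
    grid, each super-level set {hat{f} > c} has the same number of grid
    points as {|f| > c}, mirroring the equimeasurability in the text."""
    return sorted((abs(s) for s in samples), reverse=True)

f_hat = decreasing_rearrangement([0.2, 1.5, 0.7, 1.5, 0.0])
```

Equimeasurability here is just the statement that sorting permutes the sample values without changing their multiset.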
A general P\'{o}lya-Szeg\"o principle for non-increasing rearrangement has been given in \cite{Lan}, Theorem 2.1. For later reference we will only need a special case:
\begin{lemma}
\label{Landes}
Let $\delta \geq 0$, and let $f:(0,+\infty ) \to \mathbb{R} $ be a bounded, locally Lipschitz continuous function with bounded support, such that
$$
\int_0 ^{+\infty } t ^{\delta } |f' (t ) |\, dt <+\infty .
$$
Then $\widehat{f} $ is locally Lipschitz continuous and
\begin{equation}
\label{landes1}
\int_0 ^{+\infty } t^{\delta } |\widehat{f}' (t ) |\, dt
\leq
\int_0 ^{+\infty } t^{\delta } |f' (t ) |\, d t.
\end{equation}
\end{lemma}
\bigskip
\section{The functionals ${\mathcal R}_{k,l,N,\alpha}$ and ${\mathcal Q}_{k,l,N,\alpha}$ }
Throughout this section we assume, in addition to (\ref{ass1}), that $k+N+\alpha-1>0$, i.e.
\begin{equation*}
k+N+\alpha-1 >0 \ \mbox{ and } \ l+N+\alpha>0 .
\end{equation*}
If $M $ is any measurable subset of $\mathbb R^{N}_{+}$, with $0<\mu _{l,\alpha} (M)<+\infty $, we set
\begin{equation}
\label{rayl1}
{\mathcal R}_{k,l,N, \alpha} (M) :=
\frac {P_{ \mu_{k , \alpha}} (M) }
{ \left( \mu_{l,\alpha} (M) \right)^{(k+N+\alpha-1)/(l+N+\alpha)} }.
\end{equation}
Note that
\begin{equation}
\label{Rklsmooth}
{\mathcal R} _{k,l,N,\alpha} (M ) =
\frac{
\displaystyle\int_{\partial M }x_N^\alpha |x|^k \, {\mathcal H}_{N-1} (dx)
}{
\left( \displaystyle\int_{M }x_N^\alpha |x|^l \, dx \right) ^{(k+N+\alpha-1)/(l+N+\alpha)} }
\end{equation}
if the set $M$ is smooth.
If $u\in C_0 ^1 (\mathbb{R} ^N_+ )\setminus \{ 0\} $, we set
\begin{equation}
\label{rayl2}
{\mathcal Q}_{k,l,N, \alpha} (u ) := \frac{\displaystyle\int_{\mathbb{R} ^N_+ }x_N^\alpha |x|^k |\nabla u| \, dx}{
\left( \displaystyle\int_{\mathbb{R} ^N_+ } x_N^\alpha|x|^l |u| ^{(l+N+\alpha)/(k+N+\alpha-1)} \, dx \right) ^{(k+N+\alpha-1)/(l+N+\alpha)}}.
\end{equation}
Finally, we define
\begin{equation}
\label{isopco}
C_{k,l,N, \alpha}^{rad} := {\mathcal R}_{k,l,N, \alpha}(B_1 \cap {\mathbb{R} ^N_+ }).
\end{equation}
We study the following isoperimetric problem:
\\[0.5cm]
{\sl Find the constant $C_{k,l,N, \alpha} \in [0, + \infty )$, such that}
\begin{equation}
\label{isopproblem}
C _{k,l,N, \alpha} := \inf \{ {\mathcal R}_{k,l,N, \alpha} (M):\,
\mbox{{\sl $M$ is measurable with $0<\mu _{l,\alpha} (M) <+\infty $.}} \}
\end{equation}
Moreover, we are interested in conditions on $k$, $l$ and $\alpha$ such that
\begin{equation}
\label{isoradial}
{\mathcal R}_{k,l,N, \alpha} (M) \geq {\mathcal R}_{k,l,N, \alpha} (M^{ \star} )
\end{equation}
holds for all measurable sets $M\subset {\mathbb{R} ^N_+ }$ with $ 0<\mu _{l,\alpha}(M)<+\infty $.
\\[0.1cm]
Let us begin with some immediate observations.
\\
If $M$ is a measurable subset of $\mathbb R^{N}_{+}$ with finite $\mu _{l,\alpha}$-measure and $\mu_{k,\alpha} $-perimeter,
then there exists a sequence of smooth sets
$\{ M_n \} $
such that
$$\lim_{n\to \infty } \mu _{l,\alpha} (M_n \Delta M) =0 \,\,\,\, \text{and} \,\,
\lim_{n\to \infty } P_{\mu _{k,\alpha} } (M_n ) = P_{\mu _{k,\alpha}} (M) .
$$
This property is well-known for Lebesgue measure (see for instance
\cite{G}, Theorem 1.24)
and its proof carries over to the weighted case. This implies that we also have
\begin{equation}
\label{CklNsmooth}
C_{k,l,N, \alpha} = \inf \{ {\mathcal R}_{k,l,N, \alpha} (\Omega ):\, \Omega \subset \mathbb{R} ^{N}_+, \, \Omega
\mbox{ smooth} \} .
\end{equation}
The functionals ${\mathcal R}_{k,l,N, \alpha } $ and ${\mathcal Q}_{k,l,N,\alpha } $
have the following homogeneity properties,
\begin{eqnarray}
\label{hom1}
{\mathcal R}_{k,l,N, \alpha } (M ) & = & {\mathcal R}_{k,l,N, \alpha } (tM ) ,
\\
{\mathcal Q}_{k,l,N, \alpha } (u) & = & {\mathcal Q}_{k,l,N,\alpha } (u^t ),
\end{eqnarray}
where $t>0$, $M $ is a measurable set with $0<\mu_{l, \alpha} (M)<+\infty $,
$u\in C_0 ^1 (\mathbb{R}^N_+ )\setminus \{ 0\}$,
\\
$tM := \{tx:\, x\in M \} $
and $u^t (x):= u(tx) $, ($x\in \mathbb{R} ^N_+ $), and there holds
\begin{equation}
\label{isopconst2}
C_{k,l,N, \alpha} ^{rad} = {\mathcal R}_{k,l,N, \alpha} (B_1^{+} ).
\end{equation}
Hence we have that
\begin{equation}
\label{relCC}
C_{k,l,N, \alpha} \leq C_{k,l,N, \alpha} ^{rad} ,
\end{equation}
and (\ref{isoradial}) holds if and only if
$$
C_{k,l,N,\alpha } = C_{k,l,N,\alpha } ^{rad} .
$$
Finally, we recall the following weighted isoperimetric inequality, proved, for example, in \cite{BCM2} (see also \cite{XR} and \cite{MadernaSalsa}).
\begin{proposition}\label{BCM2}
For all measurable sets $M\subset \mathbb{R} ^N_+$ with $0< \mu _{0, \alpha} (M)<+\infty $, the following inequality holds:
\begin{equation}\label{isopclass}
{\mathcal R} _{0,0,N, \alpha} (M) :=
\frac {P_{ \mu_{0, \alpha}} (M) }
{ \left( \mu_{0,\alpha} (M) \right)^{(N+\alpha-1)/(N+\alpha)} } \geq C_{0,0,N, \alpha} ^{rad}:=
\frac {P_{ \mu_{0, \alpha}} (M^{ \star}) }
{ \left( \mu_{0,\alpha} (M^{ \star}) \right)^{(N+\alpha-1)/(N+\alpha)} } \,,
\end{equation}
where $M^{\star}=B_{R}^{+} $ with $R$ such that $\mu_{0, \alpha}(M)=\mu_{0, \alpha}(M^{ \star})$.
\end{proposition}
We recall that the isoperimetric constant $C_{0,0,N, \alpha} ^{rad}$ is explicitly computed in \cite{BCM2}, see also \cite{MadernaSalsa} for the case $N=2$.
\begin{lemma}
\label{hardylitt}
Let $l>l' >-N -\alpha$. Then
\begin{equation}
\label{hardylitt2}
\frac{\left( \mu _{l, \alpha} (M) \right)
^{1/(l+N+\alpha)}
}{
\left( \mu _{l', \alpha} (M) \right)
^{1/(l'+N+\alpha)}
}
\geq \frac{\left( \mu _{l, \alpha} (M^{ \star}) \right)
^{1/(l+N+\alpha)}
}{
\left( \mu _{l', \alpha} (M^{ \star}) \right)
^{1/(l'+N+\alpha)}
}
\end{equation}
for all measurable sets $M\subset \mathbb{R} ^N_+ $ with $0<\mu_{l, \alpha}(M)<+\infty $.
Equality holds only for half-balls $B_{R}^{+} $, ($R>0$).
\end{lemma}
{\sl Proof: } Let $M^{ \star} $ be the
$\mu_{l, \alpha} $-symmetrization of $M$. Then we obtain, using the Hardy-Littlewood inequality,
\begin{eqnarray*}
\mu _{l' , \alpha} (M) =\int_Mx_N^\alpha |x| ^{l'} \, dx & = & \int_{\mathbb{R} ^N_+ } |x|^{l'-l} \chi _M (x)\, d\mu _{l, \alpha} (x)
\\
& \leq &
\int_{\mathbb{R} ^N _+} \left( |x|^{l'-l} \right) ^{ \star} \left( \chi _M \right) ^{ \star} (x)\, d\mu _{l, \alpha} (x)
\\
& = &
\int_{\mathbb{R} ^N_+ } |x|^{l'-l} \chi _{M^{ \star} } (x)\, d\mu _{l, \alpha} (x)
\\
& = &
\int_{M^{ \star} } x_N^\alpha |x|^{l' }\, dx =\mu _{l', \alpha } (M^{ \star} ).
\end{eqnarray*}
This implies (\ref{hardylitt2}).
\noindent Next assume that equality holds in (\ref{hardylitt2}). Then we must have
$$
\int_M |x|^{l'-l} \, d\mu _{l, \alpha} (x) = \int_{M^{ \star} } |x|^{l'-l} d\mu _{l, \alpha} (x) ,
$$
that is,
$$
\int_{M\setminus M^{ \star} } |x|^{l'-l} \, d\mu _{l, \alpha} (x) = \int_{M^{ \star} \setminus M} |x|^{l'-l} d\mu _{l, \alpha} (x) .
$$
Since $l'-l<0$, this means that
$ \mu _{l, \alpha} ( M\Delta M^{ \star} )=0$. The lemma is proved.
$\hfill \Box $
\begin{lemma}
\label{rangekl1}
Let $k,l, \alpha$ satisfy (\ref{ass1}). Assume that $l>l' >-N-\alpha$ and
$C_{k,l,N, \alpha} = C_{k,l,N, \alpha} ^{rad} $.
Then we also have
$C_{k,l',N, \alpha} = C_{k,l',N, \alpha} ^{rad} $.
Moreover, if $
{\mathcal R}_{k,l',N, \alpha} (M ) = C_{k,l',N, \alpha} ^{rad} $
for some measurable set $M\subset \mathbb{R} ^N_+ $, with $0< \mu _{l' , \alpha} (M) <+\infty $,
then $M = B_{R}^{+}$ for some $R>0$.
\end{lemma}
{\sl Proof:} By our assumptions and Lemma \ref{hardylitt} we have for every measurable set $M$ with
$0<\mu _{l, \alpha}(M) <+\infty $,
\begin{eqnarray*}
{\mathcal R}_{k,l',N, \alpha} (M ) & = & {\mathcal R}_{k,l,N, \alpha} (M )
\cdot
\left[
\frac{
\left(
\mu _{l, \alpha} (M)
\right) ^{1/(l+N+\alpha)}
}{
\left( \mu_{l', \alpha} (M)
\right) ^{1/(l'+N+\alpha)}
}
\right] ^{k+N+\alpha-1}
\\
& \geq &
C_{k,l',N, \alpha}^{rad},
\end{eqnarray*}
with equality only if
$M = B^{+}_{R}$ for some $R>0$.
$\hfill \Box $
\begin{lemma}
\label{R2}
Assume that $k \leq l+1$. Then
\begin{equation}
\label{ineqQR}
C_{k,l,N, \alpha}
=
\inf \left\{ {\mathcal Q}_{k,l,N, \alpha} (u) :\, u\in C_0 ^1 (\mathbb{R}_+ ^N
)\setminus \{ 0\} \right\} .
\end{equation}
\end{lemma}
{\sl Proof: }
The proof uses classical arguments (see, e.g., \cite{FleRi}).
We may restrict ourselves to nonnegative functions $u$.
By (\ref{isopproblem}) and the coarea formula we obtain,
\begin{eqnarray}
\label{coarea1}
\int_{\mathbb{R} ^N_+ }x_N^\alpha |x|^k |\nabla u| \, dx & = &
\int _0 ^{\infty } \int\limits_{u=t } x_N^\alpha |x|^k \, {\mathcal H} _{N-1 } (dx) \, dt
\\
\nonumber
& \geq & C_{k,l,N, \alpha} \int_0 ^{\infty } \left( \int_{u>t } x_N^\alpha |x|^l \, dx
\right) ^{(k+N+\alpha-1)/(l+N+\alpha)} \, dt.
\end{eqnarray}
Further,
Cavalieri's principle gives
\begin{equation}
\label{cavalieri}
u(x)= \int_0 ^{\infty } \chi _{\{ u>t\} } (x)\, dt , \quad (x\in \mathbb{R} ^N ).
\end{equation}
Hence (\ref{cavalieri}) and Minkowski's inequality for integrals (see \cite{Stein}) lead to
\begin{eqnarray}
\label{ineqmeas}
& &
\\
\nonumber
&&\int_{\mathbb{R} ^N_+ }x_N^\alpha |x|^l |u|^{(l+N+\alpha)/(k+N+\alpha-1)} \, dx \qquad \qquad \\
\nonumber &=& \int_{\mathbb{R} ^N _+}x_N^\alpha |x|^l \left| \int_0 ^{\infty}
\chi_{\{ u>t\} } (x)\, dt \right| ^{(l+N+\alpha)/(k+N+\alpha-1)} \, dx
\\
\nonumber
& \leq & \left( \int_0 ^{\infty } \left(
\int_{\mathbb{R}^N_+ }x_N^\alpha |x|^l \chi _{\{ u>t \} } (x) \, dx \right)
^{(k+N+\alpha-1)/(l+N+\alpha)} \, dt \right) ^{(l+N+\alpha)/(k+N+\alpha-1)}
\\
\nonumber
& = & \left( \int_0 ^{\infty } \left( \int_{ u>t }x_N^\alpha |x|^l \, dx \right)
^{(k+N+\alpha-1)/(l+N+\alpha)} dt \right) ^{(l+N+\alpha)/(k+N+\alpha-1)} .
\end{eqnarray}
Now (\ref{coarea1}) and (\ref{ineqmeas}) yield
\begin{equation}
\label{ineqQ1}
{\mathcal Q}_{k,l,N, \alpha} (u) \geq C_{k,l,N, \alpha} \quad \forall u\in C_0 ^1 (\mathbb{R}_+ ^N )\setminus \{ 0\} .
\end{equation}
To show (\ref{ineqQR}),
let $\varepsilon >0$, and choose a smooth set
$\Omega $ such that
\begin{equation}
\label{ineqR1}
{\mathcal R}_{k,l,N,\alpha} (\Omega ) \leq C_{k,l,N,\alpha } +\varepsilon .
\end{equation}
It is well-known that there exists a sequence $\{ u_n \} \subset
C_0 ^{\infty } (\mathbb{R} ^N )\setminus \{ 0\} $
such that
\begin{eqnarray}
\label{limperim}
\lim_{n\to \infty } \int_{\mathbb{R}^N _+} x_N^\alpha |x|^k |\nabla u_n | \, dx =
\int_{\partial \Omega }x_N^\alpha |x|^k \, {\mathcal H} _{N-1} (dx) ,
\\
\label{limmeas}
\lim_{n\to \infty } \int_{\mathbb{R}_+ ^N } x_N^\alpha |x|^l
|u_n |^{(l+N+\alpha)/(k+N+\alpha-1)} \, dx = \int_{ \Omega } x_N^\alpha|x|^l \, dx.
\end{eqnarray}
To do this, one may choose mollifiers of $\chi _{\Omega } $
as $u_n $ (see e.g. \cite{Talenti1}).
Hence, for large enough $n$ we have
\begin{equation}
\label{ineqQ2}
{\mathcal Q}_{k,l,N,\alpha } (u_n ) \leq C_{k,l,N,\alpha } + 2\varepsilon .
\end{equation}
Since $\varepsilon $ was arbitrary, (\ref{ineqQR}) now
follows from (\ref{ineqQ1}) and (\ref{ineqQ2}).
$\hfill \Box $
\section{Necessary conditions}
In this section we assume that
\begin{equation*}
k+N+\alpha-1 >0 \ \mbox{ and } \ l+N+\alpha>0 .
\end{equation*}
The main result is Theorem \ref{R4} which highlights
the phenomenon of symmetry breaking.
\noindent The following result holds true.
\begin{lemma}
\label{R3}
A necessary condition for
\begin{equation}
\label{C>0}
C_{k,l,N, \alpha} >0
\end{equation}
is
\begin{equation}
\label{k_l_ineq1}
l \frac{N+\alpha-1}{N+\alpha} \leq k .
\end{equation}
\end{lemma}
{\sl Proof:} Assume that $k<l(N+\alpha-1 )/(N+\alpha)$, and let
$te_1 = (t, 0, \ldots , 0)$, ($t>2$). Since $t-1\le |x|\le t+1$ for any $x\in B_1 (te_1 )$, we have
$$
{\mathcal R}_{k,l,N, \alpha} (B_1 (te_1 ) ) \leq
D \frac{ (t +1)^k}{ (t-1) ^{l (k+N+\alpha-1)/(l+N+\alpha)} } ,
$$
where the positive constant $D= D(k,l, N, \alpha) $ is given by
$$
D=\frac{
\displaystyle\int_{\partial (B_1 (te_1 )\cap \mathbb{R}_+^{N})}x_N^\alpha \, {\mathcal H}_{N-1} (dx)
}{
\left( \displaystyle\int_{B_1 (te_1 )\cap \mathbb{R}_+^{N} }x_N^\alpha \, dx \right) ^{(k+N+\alpha-1)/(l+N+\alpha)} }
$$
Since $k-l(k+N+\alpha-1)/(l+N+\alpha) <0$, it follows that
$$
\lim_{t\to \infty } {\mathcal R}_{k,l,N,\alpha } (B_1 (te_1 ) ) =0.
$$
$\hfill \Box $
\begin{theorem}
\label{R4}
A necessary condition for
\begin{equation}
\label{isop1}
C_{k,l,N, \alpha} = C_{k,l,N, \alpha} ^{rad}
\end{equation}
is
\begin{equation}
\label{k_l_ineq2}
l+1 \leq k + \frac{N+\alpha-1}{ k+N+\alpha-1} .
\end{equation}
\end{theorem}
\medskip
\begin{remark} \rm
Theorem \ref{R4} means that if $l+1 > k + \frac{N+\alpha-1}{ k+N+\alpha-1}$,
then symmetry breaking occurs, that is, $C_{k,l,N, \alpha} < C_{k,l,N, \alpha} ^{rad} $.
Our proof relies on the fact that the second variation of the perimeter for smooth volume-preserving
perturbations from the ball $B_{1}^{+} $ is non-negative if and only if (\ref{k_l_ineq2}) holds. Note that this also
follows from a general second variation formula with volume and perimeter densities, see \cite{Mo2}.
\end{remark}
{\sl Proof:} First we assume $N\geq 2$. Let $(r, \theta )$ denote
$N$--dimensional spherical coordinates, such that
$$
\theta _1 = \arccos \frac{x_N}{|x|} , \quad\theta_1 \in [0, \pi /2],
$$
and let
$u \in C^2 (\mathbb{S}_{+}^{N-1} )$ and $s\in C^2 (\mathbb{R})$ with $s(0)=0$,
and define
$$
U(t ) := \{ x=r\theta \in \mathbb{R}_+ ^N : \, 0\leq r < 1+ t u(\theta ) + s(t) \} ,
\quad (t\in \mathbb{R} ).
$$
Note that $U(0)= B_{1}^{+} $.
By the Implicit Function Theorem, we may choose $s$ in such a way that
\begin{equation}
\label{intid1}
\int_{U(t)}x_N^\alpha |x|^l \, dx = \int _{B_{1}^{+} } x_N^\alpha |x|^l \, dx \quad \mbox{for $|t|<t_0$},
\end{equation}
for some number $t_0 >0$. We set $s_1 := s'(0) $ and $s_2 := s^{\prime \prime} (0)$.
Let $d \Theta$ be the surface element on the sphere
and
\begin{equation}
\label{h}
h:= h(\theta_1) = \cos^{\alpha} \theta_1 = \left( \frac{x_N}{|x|} \right)^{\alpha}.
\end{equation}
Since
$$
\int_{U(t)} x_N^\alpha|x|^l \, dx = \int_{\mathbb{S}_+ ^{N-1 } } h
\int_0 ^{1+ t u(\theta ) + s(t)} \rho ^{l+N+\alpha-1} \, d\rho \, d\Theta,
$$
a differentiation at $t=0$ of (\ref{intid1}) leads to
\begin{eqnarray}
\label{intid2}
0 & = & \int_{\mathbb{S}_+ ^{N-1} } (u+ s_1 )\, h d\Theta \quad \mbox{and }
\\
\label{intid3}
0 & = & (l+N+\alpha-1) \int_{\mathbb{S}_+^{N-1} } (u+ s_1 )^2 h \, d\Theta + s_2
\int_{\mathbb{S}_+^{N-1} } h \, d\Theta .
\end{eqnarray}
Next we consider the perimeter functional
\begin{eqnarray}
\label{perim}
J(t) & := & \int_{\partial U(t)} x_N^\alpha |x|^k \, {\mathcal H}_{N-1} (dx)
\\
\nonumber
& = & \int_{\mathbb{S}_+ ^{N-1} } (1+tu + s(t) )^{k+N+\alpha-2}
\sqrt{ (1+ tu+s(t) )^2 + t^2 |\nabla _{\theta } u|^2 } \, h \, d\Theta ,
\end{eqnarray}
where $\nabla _{\theta }$ denotes the gradient on the sphere.
Differentiation at $t=0$ of (\ref{perim}) leads to
\begin{eqnarray*}
J'(0) & = & (k+N+\alpha-1) \int_{\mathbb{S} _+^{N-1} } (u+ s_1 ) \, h \, d\Theta , \quad \mbox{and }
\\
J^{\prime \prime } (0) & = &
(k+N+\alpha-2) (k+N+\alpha-1) \int_{\mathbb{S} _+^{N-1} } (u+s_1 )^2 \, h \, d\Theta +
\\
& & + (k+N+\alpha-1) s_2 \int_{\mathbb{S} _+^{N-1} } \, h \, d\Theta + \int_{\mathbb{S}_+ ^{N-1} }
|\nabla _{\theta } u|^2 \, h \, d\Theta .
\end{eqnarray*}
By (\ref{intid2}) and (\ref{intid3}) this implies
\begin{equation}
\label{Jprime}
J'(0) = 0,
\end{equation}
and
\begin{equation}
\label{Jprimeprime}
J^{\prime \prime } (0)
=(k+N+\alpha-1) (k-l-1) \int_{\mathbb{S}_+^{N-1} } (u+s_1 )^2 \, h \, d\Theta +
\int_{\mathbb{S}_+^{N-1} } |\nabla _{\theta } u|^2 \, h \, d\Theta .
\end{equation}
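In more detail, (\ref{intid3}) yields
\begin{equation*}
s_2 \int_{\mathbb{S}_+^{N-1} } h \, d\Theta = -(l+N+\alpha-1) \int_{\mathbb{S}_+^{N-1} } (u+s_1 )^2 \, h \, d\Theta ,
\end{equation*}
so that the coefficient of $\int_{\mathbb{S}_+^{N-1} } (u+s_1 )^2 \, h \, d\Theta $ in $J^{\prime \prime }(0)$ becomes
$(k+N+\alpha-1)\left[ (k+N+\alpha-2)-(l+N+\alpha-1)\right] =(k+N+\alpha-1)(k-l-1)$.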
Now assume that (\ref{isop1}) holds.
Then we have ${\mathcal R}_{k,l,N,\alpha} (U(t)) \geq {\mathcal R}_{k,l,N,\alpha} (B_{1}^{+} )$
for all $t$ with $|t|<t_0 $. In view of (\ref{intid1})
this means that $J(t) \geq J(0) $ for $|t|<t_0 $, that is,
\begin{equation}
\label{Jderiv}
J^{\prime \prime } (0) \geq 0 = J'(0).
\end{equation}
The second condition is (\ref{Jprime}), and the first condition
implies, in view of (\ref{intid2}) and
(\ref{Jprimeprime}), that
\begin{eqnarray}
\label{intineq1}
0 & \leq & (k+N+\alpha-1)(k-l-1) \int_{\mathbb{S}_+^{N-1} } v^2 \, h \,d\Theta +
\int_{\mathbb{S} _+^{N-1} } |\nabla _{\theta } v|^2 \, h \, d\Theta
\\
\nonumber
& & \forall v\in C^2 (\mathbb{S} _+^{N-1} ) \ \mbox{ with } \
\int_{\mathbb{S}_+^{N-1} } v \, h \, d\Theta =0.
\end{eqnarray}
Applying Proposition 2.1 in \cite{BCM2}, we get
$$
\int_{\mathbb{S} _+^{N-1} } |\nabla _{\theta } v|^2 \, h \, d\Theta \ge (N+\alpha-1)
\int_{\mathbb{S}_+^{N-1} } v^2 \, h \, d\Theta
$$
for any $ v\in C^2 (\mathbb{S} _+^{N-1} ) $ with
$ \displaystyle\int_{\mathbb{S}_+^{N-1} }h v \, d\Theta =0$.
The conclusion follows.
$\hfill \Box $
\section{The case of negative $\alpha$}
\noindent
In this section we first show that the relative isoperimetric
problem in $\mathbb{R}_{+}^{2}$ for $\alpha \in \left( -1,0\right) $ and $k=l=0$ has
no solution. Nevertheless, in Theorem \ref{St_Not_Iso} we prove that the second variation
of the perimeter with respect to volume-preserving smooth perturbations
at the half circle is nonnegative for such values
of the parameters.
\noindent Throughout this section the points in $\mathbb{R}_{+}^{2}$
will be simply denoted by $(x,y)$.
\bigskip
\begin{theorem}
\label{Not_Ex}
Let
\begin{equation}
N=2,\text{ }\alpha \in \left( -1,0\right) \,\, \text{and } k=l=0 .
\label{H_NE}
\end{equation}
Then there is no constant $C\in \left( 0,+\infty \right) $ such that
\begin{equation*}
\int_{\partial \Omega \backslash \left\{ y=0\right\} }y^{\alpha }dl\geq
C\left( \displaystyle\int\limits_{\Omega }y^{\alpha }dxdy\right) ^{\frac{
\alpha +1}{\alpha +2}},\text{ for any set }\Omega \subset \mathbb{R}_{+}^{2}.
\end{equation*}
\end{theorem}
\noindent {\sl Proof:}
\ \ Let $0<a<b$ and
\begin{equation*}
\Omega _{a,b}:=\left\{ (x,y)\in \mathbb{R}_{+}^{2}:0<x<1,\text{ }a<y<b\right\} .
\end{equation*}
We have
\begin{equation*}
A_{\alpha }\left( \Omega _{a,b}\right) :=\displaystyle\int\limits_{\Omega
_{a,b}}y^{\alpha }dxdy=\int_{a}^{b}t^{\alpha }dt=\frac{b^{\alpha
+1}-a^{\alpha +1}}{\alpha +1},
\end{equation*}
while
\begin{equation*}
P_{\alpha }\left( \Omega _{a,b}\right) :=\int_{\partial \Omega
_{a,b}}y^{\alpha }dl=2\int_{a}^{b}t^{\alpha }dt+a^{\alpha }+b^{\alpha }=
\frac{2}{\alpha +1}\left( b^{\alpha +1}-a^{\alpha +1}\right) +a^{\alpha
}+b^{\alpha }.
\end{equation*}
Setting
\begin{equation*}
U:=a^{\alpha +1},\text{ \ }V:=b^{\alpha +1}-a^{\alpha +1}\hspace{0.5cm}
(U,V>0)
\end{equation*}
we have
\begin{equation*}
A_{\alpha }\left( \Omega _{a,b}\right) =\frac{V}{\alpha +1}\ \text{\ and \ }
P_{\alpha }\left( \Omega _{a,b}\right) =\frac{2}{\alpha +1}V+U^{\frac{\alpha
}{\alpha +1}}+\left( U+V\right) ^{\frac{\alpha }{\alpha +1}}.
\end{equation*}
In order to conclude the proof we claim that for every $\epsilon >0$ there exist
$0<a<b$ such that
\begin{equation*}
R_{\alpha }\left( \Omega _{a,b}\right)
\equiv
\frac{P_{\alpha }\left(
\Omega _{a,b}\right) }{\left[ A_{\alpha }\left( \Omega _{a,b}\right) \right]
^{\frac{\alpha +1}{\alpha +2}}}<\epsilon .
\end{equation*}
First choose $V$ small enough to have
\begin{equation*}
2\left( \alpha +1\right) ^{-\frac{1}{\alpha +2}}\text{ }V^{\frac{1}{\alpha +2
}}<\frac{\epsilon }{2}
\end{equation*}
and then $U$ large enough to have
\begin{equation*}
\frac{U^{\frac{\alpha }{\alpha +1}}+(U+V)^{\frac{\alpha }{\alpha +1}}}{
\left( \frac{1}{\alpha +1}\right) ^{\frac{\alpha +1}{\alpha +2}}V^{\frac{
\alpha +1}{\alpha +2}}}<\frac{\epsilon }{2}.
\end{equation*}
Then
\begin{equation*}
R_{\alpha }\left( \Omega _{a,b}\right) =2\left( \alpha +1\right) ^{-\frac{1}{
\alpha +2}}\text{ }V^{\frac{1}{\alpha +2}}+\frac{U^{\frac{\alpha }{\alpha +1}
}+(U+V)^{\frac{\alpha }{\alpha +1}}}{\left( \frac{1}{\alpha +1}\right) ^{
\frac{\alpha +1}{\alpha +2}}V^{\frac{\alpha +1}{\alpha +2}}}<\frac{\epsilon
}{2}+\frac{\epsilon }{2}=\epsilon .
\end{equation*}
\hfill $\Box $
\bigskip
\noindent Now let $\alpha \in \left( -1,0\right) $ and
consider the measure $d\nu =\cos ^{\alpha }t \, dt $.
We introduce the weighted
Sobolev space
$H^{1}\left( \left( -\frac{\pi }{2},\frac{\pi }{2}\right)
;d\nu \right) $ which is made of functions $\phi :\left( -\frac{\pi }{2}
,\frac{\pi }{2}\right) \rightarrow \mathbb{R}$ such that
\begin{eqnarray*}
\left\Vert \phi \right\Vert _{H^{1}\left( \left( -\frac{\pi }{2},\frac{\pi }{
2}\right) ; \,d\nu \right) }^{2}
&=&
\left\Vert \phi \right\Vert_{L^{2}\left( \left( -\frac{\pi }{2},\frac{\pi }{2} \right) ; \, d\nu \right)
}^{2}+\left\Vert \phi ^{\prime }\right\Vert _{L^{2}\left( \left( -\frac{\pi }{2},
\frac{\pi }{2} \right) ; \, d\nu \right) }^{2} \\
&=&
\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}
\phi (t)^{2} \, d\nu +
\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}
\phi ^{\prime }(t)^{2} \, d\nu <\infty .
\end{eqnarray*}
Finally let
\begin{equation*}
V :=\left\{ \phi \in H^{1}\left( \left( -\frac{\pi }{2},\frac{\pi }{2}
\right) ; \, d\nu \right) :\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}
}\phi \, d\nu =0\right\} .
\end{equation*}
In the following Lemma we prove that
$ V $
is compactly embedded in
$L^{2}\left( \left( -\frac{\pi }{2},\frac{\pi }{2} \right) ; \, d\nu \right) $.
\begin{lemma}
\label{embedd}
If $\left\{ w_{n}\right\} _{n\in
\mathbb{N}}\subset V $ is such that
\begin{equation*}
\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}w_{n}^{\prime }(t)^{2} \, d\nu \leq C\text{ \ }\forall n \in \mathbb{N}
\end{equation*}
then there exists $ w\in V $ such that there holds
\begin{equation*}
\lim_{n\rightarrow \infty }\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\left\vert
w_{n}(t)-w(t)\right\vert ^{2}\, d\nu =0\text{.}
\end{equation*}
\end{lemma}
\noindent {\sl Proof:} \ Note that
\begin{equation*}
\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}w_{n}^{\prime }(t)^{2}dt\leq \int_{-
\frac{\pi }{2}}^{\frac{\pi }{2}}w_{n}^{\prime }(t)^{2}\cos ^{\alpha }tdt\leq
C\text{ \ }\forall n \in \mathbb{N} .
\end{equation*}
By the definition of $V$ we can infer that for each $n\in \mathbb{N}$, there exists $
t_{n}\in (-\frac{\pi }{2},\frac{\pi }{2})$ such that, up to a subsequence, $w_{n}(t_{n})=0.$ So
we have
\begin{equation*}
w_{n}(t)=\int_{t_{n}}^{t}w_{n}^{\prime }(\sigma )d\sigma
\end{equation*}
and therefore
\begin{equation*}
\left\vert w_{n}(t)\right\vert ^{2}\leq \left( \int_{-
\frac{\pi }{2}}^{\frac{\pi }{2}}\left\vert w_{n}^{\prime }(\sigma
)\right\vert d\sigma \right) ^{2}\leq \pi \int_{-\frac{\pi }{2}}^{\frac{\pi
}{2}}\left\vert w_{n}^{\prime }(\sigma )\right\vert ^{2}d\sigma \leq C\text{
\ }\forall n \in \mathbb{N}.
\end{equation*}
So $w_{n}$ is bounded in $H^{1} \left( -\frac{\pi }{2},\frac{\pi }{2} \right)$
and, therefore, there exists
$w\in C^{0}\left( \left[ -\frac{\pi }{2},\frac{\pi }{2
}\right] \right)
\cap H^{1} \left( -\frac{\pi }{2},\frac{\pi }{2} \right) $
such that, up to a subsequence,
\begin{equation*}
w_{n}(t)\rightarrow w(t)\text{ uniformly in }\left[ -\frac{\pi }{2},\frac{
\pi }{2}\right] .
\end{equation*}
The assertion easily follows, since
\begin{equation*}
\cos ^{\alpha }t\in L^{1}\left( -\frac{\pi }{2},\frac{\pi }{2}\right) \text{
\ }\forall \alpha \in (-1,0) .
\end{equation*}
\hfill $\Box $
\bigskip
\noindent Now define the Rayleigh quotient
\begin{equation*}
Q(v):=\frac{\displaystyle\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v^{\prime }(t)^{2}\cos
^{\alpha }tdt}{\displaystyle\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v(t)^{2}\cos ^{\alpha
}tdt}, \,\,\,\ \text{with} \,\,\,\, v \in V .
\end{equation*}
\begin{lemma}
\label{W_Wirt}
There holds
\begin{equation*}
\mu :=\min_{v \in V}Q(v)=1+\alpha .
\end{equation*}
\end{lemma}
\noindent {\sl Proof:} \ \ Note that $\sin t\in V $. An integration by parts
gives
\begin{equation}
Q(\sin t)=\frac{\displaystyle\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\cos ^{\alpha +2}tdt}{
\displaystyle\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\sin ^{2}t\cos ^{\alpha }tdt}
=
\frac{ \left( \alpha +1\right) \displaystyle\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\sin
^{2}t\cos ^{\alpha }tdt}{\displaystyle\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\sin
^{2}t\cos ^{\alpha }tdt}=\alpha +1,
\label{sint}
\end{equation}
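For the reader's convenience, the second equality in (\ref{sint}) follows from an integration by parts: since $\cos (\pm \pi /2)=0$ and $\alpha +1>0$, the boundary terms vanish and
\begin{equation*}
\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\sin ^{2}t\cos ^{\alpha }tdt
=-\frac{1}{\alpha +1}\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\sin t\left( \cos
^{\alpha +1}t\right) ^{\prime }dt=\frac{1}{\alpha +1}\int_{-\frac{\pi }{2}
}^{\frac{\pi }{2}}\cos ^{\alpha +2}tdt .
\end{equation*}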
and, therefore
\begin{equation*}
\mu \leq \alpha +1.
\end{equation*}
Now, by contradiction, assume that
\begin{equation*}
\mu <1+\alpha .
\end{equation*}
By Lemma \ref{embedd} there exists a function $u\in V$ such that $Q(u)=\mu $
which satisfies the Euler equation
\begin{equation}
-\left( u^{\prime }\cos ^{\alpha }(t)\right) ^{\prime }=\mu u\cos ^{\alpha
}(t)\text{ \ on \ }\left( -\frac{\pi }{2},\frac{\pi }{2}\right) .
\label{eig_eq}
\end{equation}
We set
\begin{equation*}
R(v):=\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v^{\prime }(t)^{2}\, d \nu -\mu \int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v(t)^{2} \, d \nu ,
\text{ \ }v\in V,
\end{equation*}
and
\begin{equation*}
u_{1}(t)=\frac{u(t)-u(-t)}{2},\text{ \ }u_{2}(t)=\frac{u(t)+u(-t)}{2}.
\end{equation*}
We have
\begin{equation*}
R(u)=R(u_{1})+R(u_{2})=0.
\end{equation*}
Hence at least one of the following statements must be true
\begin{equation}
R(u_{1})\leq 0, \tag{i} \label{i}
\end{equation}
or
\begin{equation} \label{ii}
R(u_{2})\leq 0. \tag{ii}
\end{equation}
\noindent Our aim is to reach a contradiction by showing that (\ref{i})
and (\ref{ii}) are both false.
\vspace{.5cm}
\noindent \textbf{Case (i)}: Assume $R(u_{1})\leq 0.$
\noindent Since $u_{1}$ is odd we have
\begin{equation*}
v_{1}:=\frac{u_{1}(t)}{\sin t}\in C^{1}\left( \left[ -\frac{\pi }{2},\frac{
\pi }{2}\right] \right)
\end{equation*}
and
\begin{equation*}
R(u_{1})=\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\left( v_{1}^{\prime }\sin
t+v_{1}\cos t\right) ^{2}\cos ^{\alpha }tdt-\mu \int_{-\frac{\pi }{2}}^{
\frac{\pi }{2}}v_{1}^{2}\sin ^{2}t\cos ^{\alpha }tdt=
\end{equation*}
\begin{equation*}
=\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}2v_{1}^{\prime }v_{1}\sin t\cos
^{\alpha +1}tdt+\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\left( v_{1}^{\prime
}\right) ^{2}\sin ^{2}t\cos ^{\alpha }tdt +
\end{equation*}
\begin{equation*}
+\int_{-\frac{\pi }{2}}^{\frac{\pi
}{2}}v_{1}^{2}\cos ^{\alpha +2}tdt-\mu \int_{-\frac{\pi }{2}}^{\frac{\pi }{
2 }}v_{1}^{2}\sin ^{2}t\cos ^{\alpha }tdt
\end{equation*}
\begin{eqnarray*}
&=&
(\alpha +1)\int_{-\frac{\pi }{2}}^{
\frac{\pi }{2}}v_{1}^{2}\sin ^{2}t\cos ^{\alpha }tdt-\int_{-\frac{\pi }{2}
}^{ \frac{\pi }{2}}v_{1}^{2}\cos ^{\alpha +2}tdt+ \\
&&\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\left( v_{1}^{\prime }\right)
^{2}\sin ^{2}t\cos ^{\alpha }tdt+\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}
}v_{1}^{2}\cos ^{\alpha +2}tdt-\mu \int_{-\frac{\pi }{2}}^{\frac{\pi }{2}
}v_{1}^{2}\sin ^{2}t\cos ^{\alpha }tdt .
\end{eqnarray*}
Recalling the assumption $\alpha +1-\mu >0$, we have
\begin{eqnarray*}
R(u_{1}) &=&(\alpha +1)\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v_{1}^{2}\sin
^{2}t\cos ^{\alpha }tdt+\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\left(
v_{1}^{\prime }\right) ^{2}\sin ^{2}t\cos ^{\alpha }tdt-\mu \int_{-\frac{\pi
}{2}}^{\frac{\pi }{2}}v_{1}^{2}\sin ^{2}t\cos ^{\alpha }tdt \\
&=&(\alpha +1-\mu )\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v_{1}^{2}\sin
^{2}t\cos ^{\alpha }tdt+\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\left(
v_{1}^{\prime }\right) ^{2}\sin ^{2}t\cos ^{\alpha }tdt\geq 0,
\end{eqnarray*}
where equality holds if and only if $\ \mu =\alpha +1$ and $v_{1}$ is a
constant. This contradicts our assumption.
\vspace{.5cm}
\noindent \textbf{Case (ii)}: Assume $R(u_{2})\leq 0.$
\noindent Since $u_{2}$ is an even function belonging to $V$, we have
\begin{equation*}
0 = \int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}u_{2}\cos ^{\alpha }tdt = 2
\int_{0}^{\frac{\pi }{2}}u_{2}\cos ^{\alpha }tdt.
\end{equation*}
Then there exists $c\in \left( 0,\frac{\pi }{2}\right) $ such that
\begin{equation*}
u_{2}(c)=u_{2}(-c)=0.
\end{equation*}
From \eqref{eig_eq} we deduce that
\begin{equation}
\int_{-c}^{c}\left( u_{2}^{\prime }\right) ^{2}\cos ^{\alpha }tdt=
-\int_{-c}^{c}u_{2}\left( u_{2}^{\prime }\cos
^{\alpha }t\right) ^{\prime }dt=\mu \int_{-c}^{c}u_{2}^{2}\cos ^{\alpha }tdt.
\label{-c_+c}
\end{equation}
On the other hand, setting
\begin{equation*}
v_{2}:=u_{2}\cos ^{\frac{\alpha }{2}}t ,
\end{equation*}
we obtain from (\ref{-c_+c})
\begin{eqnarray}
\int_{-c}^{c}\left( u_{2}^{\prime }\right) ^{2}\cos ^{\alpha }tdt
&=&\int_{-c}^{c}\left( v_{2}^{\prime }\cos ^{-\frac{\alpha }{2}}t+\frac{
\alpha }{2}v_{2}\cos ^{-\frac{\alpha }{2}-1}t\sin t\right) ^{2}\cos ^{\alpha
}tdt \label{v2} \\
&=&\int_{-c}^{c}\left( v_{2}^{\prime }\right) ^{2}dt+\alpha
\int_{-c}^{c}v_{2}v_{2}^{\prime }\tan tdt+\frac{\alpha ^{2}}{4}
\int_{-c}^{c}v_{2}^{2}\tan ^{2}tdt. \notag
\end{eqnarray}
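For later use we note that, since $v_{2}\left( \pm c\right) =0$ and $\tan t$ is finite on $\left[ -c,c\right] $, an integration by parts gives
\begin{equation*}
\alpha \int_{-c}^{c}v_{2}v_{2}^{\prime }\tan tdt=\frac{\alpha }{2}\left[
v_{2}^{2}\tan t\right] _{-c}^{c}-\frac{\alpha }{2}\int_{-c}^{c}v_{2}^{2}\left(
1+\tan ^{2}t\right) dt=-\frac{\alpha }{2}\int_{-c}^{c}v_{2}^{2}\left( 1+\tan
^{2}t\right) dt .
\end{equation*}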
Since $v_{2}\left( \pm c\right) =0$
and
$v_{2}\in C^{1}\left[ -c,c\right] $, the
classical one-dimensional Wirtinger inequality implies that
\begin{equation}
\int_{-c}^{c}\left( v_{2}^{\prime }\right) ^{2}dt\geq \left( \frac{\pi }{2c}
\right) ^{2}\int_{-c}^{c}v_{2}^{2}dt,
\label{W_1d}
\end{equation}
where equality holds if and only if $v_{2}$
is proportional to $ \sin \left( \dfrac{\pi t}{2c} \right) $.
Inequalities (\ref{v2}) and (\ref{W_1d}) ensure
\begin{eqnarray}
\int_{-c}^{c}\left( u_{2}^{\prime }\right) ^{2}\cos ^{\alpha }tdt &\geq
&\left( \frac{\pi }{2c}\right) ^{2}\int_{-c}^{c}v_{2}^{2}dt \\
&&-\frac{\alpha }{2}\int_{-c}^{c}v_{2}^{2}\left( 1+\tan ^{2}t\right) dt+
\frac{\alpha ^{2}}{4}\int_{-c}^{c}v_{2}^{2}\tan ^{2}tdt \notag \\
&=&\left( \frac{\pi ^{2}}{4c^{2}}-\frac{\alpha }{2}\right)
\int_{-c}^{c}v_{2}^{2}dt+\left( \frac{\alpha ^{2}}{4}-\frac{\alpha }{2}
\right) \int_{-c}^{c}v_{2}^{2}\tan ^{2}tdt \notag \\
&>&\left( \frac{\pi ^{2}}{4c^{2}}-\frac{\alpha }{2}\right)
\int_{-c}^{c}v_{2}^{2}dt \notag \\
&=&\left( \frac{\pi ^{2}}{4c^{2}}-\frac{\alpha }{2}\right)
\int_{-c}^{c}u_{2}^{2}\cos ^{\alpha }tdt. \notag
\end{eqnarray}
Finally, (\ref{-c_+c}) and the assumption $\mu <1+\alpha $ imply
\begin{equation*}
1+\alpha >\mu >\frac{\pi ^{2}}{4c^{2}}-\frac{\alpha }{2}\geq 1-\frac{\alpha
}{2}
\end{equation*}
and therefore \ $\frac{3}{2}\alpha >0 ,$ a contradiction.
\hfill $\Box $
\begin{theorem}
\label{St_Not_Iso}
Let $N=2,\text{ }\alpha \in \left( -1,0\right) $ and $k=l=0$. Then
the functional $J$ defined in \eqref{perim},
satisfies $J^{\prime \prime}(0) \geq 0$.
\end{theorem}
\noindent {\sl Proof:}
The assertion follows from Lemma \ref{W_Wirt}, taking into account \eqref{Jprimeprime}.
\hfill $\Box $
\vspace{.5cm}
\section{Main results}
This section is devoted to the proof of Theorem \ref{maintheorem}, that is,
we obtain sufficient conditions on $k,l$ and $N$ such that
$ C_{k,l,N, \alpha} = C_{k,l,N, \alpha} ^{rad}$ holds, or equivalently,
\begin{equation}
\label{ineqrad}
{\mathcal R}_{k,l,N, \alpha} (M) \geq C_{k,l,N, \alpha}^{rad}
\quad \mbox{for all measurable sets $M \subset \mathbb R^{N}_{+}$ with $0< \mu _{l, \alpha} (M) <+\infty $.}
\end{equation}
The proof of Theorem \ref{maintheorem} is split into various subsections, each of which addresses
one of its cases.
First let us recall that the proof of case (i) of Theorem \ref{maintheorem} has been given
in \cite{ABCMP_atti}.
\begin{remark}
\label{sufficiency}
Condition (\ref{k_l_ineq1}), i.e. $l\frac{N+\alpha-1}{N+\alpha}\le k$ is a
necessary and sufficient condition for
$C_{k,l,N, \alpha} >0$.
\end{remark}
{\sl Proof:\/}
The necessity follows from Lemma \ref{R3}, and the sufficiency in the case
$l+1\leq k$ follows from case (i) in Theorem \ref{maintheorem}.
Finally, assume that $k< l+1$. Then (\ref{isopproblem}) is equivalent to
(\ref{ineqQR}), by Lemma \ref{R2}.
Now the main Theorem of \cite{CKN} tells us that condition
(\ref{k_l_ineq1}) is also sufficient
for $C_{k,l,N, \alpha} >0$.
$\hfill \Box $
\subsection{Proof of Theorem \ref{maintheorem}, case (ii).}
The case $k \leq 0$ and $\alpha = 0$ has been addressed in \cite{ChiHo}, Theorem 1.3.
We significantly extend such a result by considering all nonnegative values of $\alpha$ and treating,
at least for some values of the parameters,
the equality case in (\ref{isop1}).
\begin{theorem}
\label{th1bis}
Let $k,l$ satisfy
\begin{equation} \label{lk}
l \frac{N+\alpha-1}{N+\alpha} \leq k
\leq \min\{0, l+1\}.
\end{equation}
Then (\ref{isop1}) holds.
Moreover if
$l \frac{N+\alpha-1}{N+\alpha} < k$
and
\begin{equation}
\label{M=BR}
{\mathcal R}_{k,l,N, \alpha} (M) = C_{k,l,N, \alpha} ^{rad} \
\mbox{ for some measurable set $M$ with $0<\mu _{l, \alpha} (M)< +\infty $},
\end{equation}
then $M= B_{R}^{+}$ for some $R>0$.
\end{theorem}
{\sl Proof :\/}
Let $u\in C^{\infty }_0(\mathbb{R}_+ ^N)\setminus \{ 0 \} $.
We set
$$
y:=x|x|^\frac{k}{N+\alpha-1}\, , \quad v(y):=u(x)\, , \quad
s:=r^\frac{k+N+\alpha-1}{N+\alpha -1}\,
.
$$
Using $N$-dimensional spherical coordinates, and denoting by $\nabla_\theta$
the tangential part of the gradient on
${\mathbb S^{N-1}}$, we obtain
\begin{eqnarray}
\label{cambio1}
& &
\int_{\mathbb{R}_+ ^N} x_N^\alpha
|x|^l
|u|^{(l+N+\alpha)/(k+N+\alpha-1)} \, dx
\\
\nonumber
& = &
\int_{\mathbb{S}^{N-1}_+}
\int_0^{\infty}
r^{l+N+\alpha-1} |u|^{(l+N+\alpha)/(k+N+\alpha-1) }\,
h dr\, d\Theta
\\
\nonumber
& = &
\frac{N+\alpha-1}{k+N+\alpha-1}
\int_{\mathbb{S}^{N-1}_+}
\int_0^{\infty}
s^{\frac{l+N+\alpha}{k+N+\alpha-1}(N+\alpha-1)-1 }
|v|^{(l+N+\alpha)/(k+N+\alpha-1)}\,
h ds \, d\Theta
\\
\nonumber
& = &
\frac{N+\alpha-1}{k+N+\alpha-1}
\int_{\mathbb{R} _+^N} y_N^\alpha
|y|^{\frac{l+N+\alpha}{k+N+\alpha-1}(N+\alpha-1)-N-\alpha}|v|^{(l+N+\alpha)/(k+N+\alpha-1)}\, dy
\\
\nonumber
& = &
\frac{N+\alpha-1}{k+N+\alpha-1}
\int_{\mathbb{R} _+^N} y_N^\alpha
|y|^{(l(N+\alpha-1)-k(N+\alpha))/(k+N+\alpha-1)}
|v|^{(l+N+\alpha)/(k+N+\alpha-1)}\, dy
\, .
\end{eqnarray}
Further we calculate
\begin{eqnarray}
\label{cambio2}
& &
\int_{\mathbb{R} _+^N} x_N^\alpha |x|^k |\nabla_x u| \, dx
\\
\nonumber
& = &
\int_{\mathbb{S}^{N-1} _+}
\int_0^{\infty}
r^{k+N+\alpha-1}
\left(
u_r ^2 +\frac{|\nabla_\theta u|^2}{r^2}
\right) ^{1/2}h \,
dr \, d\Theta
\\
\nonumber
& = &
\int_{\mathbb{S}^{N-1}_+ }
\int_0^{\infty}
s^{N+\alpha-1}
\left(
v_s ^2+\frac{|\nabla_\theta v|^2}{s^2}
\left(
\frac{N+\alpha-1}{k+N+\alpha-1} \right) ^2 \right) ^{1/2} \, h \, ds \, d\Theta
\nonumber
\\
\nonumber
& \geq &
\int_{\mathbb{S}^{N-1}_+ }
\int_0^{\infty}
s^{N+\alpha-1}
\left(
v_s ^2 +\frac{|\nabla_\theta v|^2}{s^2} \right) ^{1/2} \, h \, ds \, d\Theta
\\
\nonumber
& = &
\int_{\mathbb{R}_+^N}y_N^\alpha |\nabla_y v| \, dy \, ,
\end{eqnarray}
where we have used (\ref{lk}).
By \eqref{cambio1} and \eqref{cambio2} we deduce,
\begin{eqnarray}
\label{Q2}
& & \hspace {1cm}
{\mathcal Q}_{k,l,N, \alpha}(u)
\\
\nonumber
& \geq &
\frac{\displaystyle
\int_{\mathbb R ^N_+} y_N^\alpha |\nabla_y v| \, dy}{\displaystyle
\left(
\int_{\mathbb R ^N_+} y_N^\alpha |y|^{l' }|v|^{(l+N+\alpha)/(k+N+\alpha-1)}\, dy \right)
^{(k+N+\alpha-1)/(l+N+\alpha)}
}
\left(
\frac{k+N+\alpha-1}{N+\alpha-1}
\right) ^{(k+N+\alpha-1)/(l+N+\alpha)}
\\
\nonumber
& = &
\left( \frac{k+N+\alpha-1}{N+\alpha-1} \right)
^{(k+N+\alpha-1)/(l+N+\alpha)}
{\mathcal Q}_{0,l' ,N, \alpha }(v)\, ,
\end{eqnarray}
where we have set $l' :=\frac{l(N+\alpha-1)-k(N+\alpha)}{k+N+\alpha-1}$.
Note that we have $-1 \leq l' \leq 0$ by the assumptions (\ref{lk}).
\\
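Indeed, writing $l' =\frac{l(N+\alpha-1)-k(N+\alpha)}{k+N+\alpha-1}$ and recalling that $k+N+\alpha-1>0$, a direct computation shows that
\begin{equation*}
l' \leq 0 \ \Longleftrightarrow \ l\, \frac{N+\alpha-1}{N+\alpha} \leq k
\quad \mbox{and} \quad
l' \geq -1 \ \Longleftrightarrow \ k \leq l+1 ,
\end{equation*}
and both conditions are contained in (\ref{lk}).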
Hence we may apply Lemma \ref{R2} to both sides of (\ref{Q2}).
This yields
\begin{equation}
\label{relationCC}
C_{k,l,N, \alpha} \geq \left( \frac{k+N+\alpha-1}{N+\alpha-1} \right) ^{(k+N+\alpha-1)/(l+N+\alpha)} C_{0,l', N, \alpha} .
\end{equation}
Furthermore, Lemma \ref{rangekl1} tells us that
\begin{equation}
\label{CCrad}
C_{0,l',N, \alpha} = C_{0,l', N, \alpha} ^{rad} .
\end{equation}
Moreover, we have
$$
\left( \frac{k+N+\alpha-1}{N+\alpha-1} \right)
^{(k+N+\alpha-1)/(l+N+\alpha)} C_{0,l',N, \alpha} ^{rad} =C_{k,l,N, \alpha}^{rad} \, .
$$
From this, (\ref{relationCC}) and (\ref{CCrad}),
we deduce that $C_{k,l,N, \alpha}\ge C_{k,l,N, \alpha}^{rad}$.
Since $C_{k,l,N, \alpha}\le C_{k,l,N, \alpha}^{rad}$ by definition, (\ref{isop1}) follows.
\\
Next assume that
${\mathcal{R}}_{k,l,N, \alpha} (M) = C_{k,l,N, \alpha} ^{rad}$ for
some measurable set $M \subset \mathbb R^{N}_{+}$ with $0<\mu _{l, \alpha} (M)<
+\infty $.
If $l(N+\alpha-1)/(N+\alpha) <k$, then Lemma \ref{rangekl1} tells us that we must have
$M=B_{R}^{+} $ for some $R>0$.
$\hfill \Box $
\begin{remark}
\rm
$\text{}$
\\
{\bf (a)} A well-known special case of Theorem \ref{th1bis} is $k=0 = l $,
see \cite{MadernaSalsa}, \cite{BCM} and \cite{XR}.
\\
{\bf (b)}
The idea to use spherical coordinates,
and in particular the inequality (\ref{cambio2}) in our last proof,
appeared already in some work of T. Horiuchi,
see \cite{H} and \cite{HK}.
\end{remark}
\subsection{Proof of Theorem \ref{maintheorem}, case (iii).}
Now we treat the case when $k$ assumes non-negative values.
Throughout this subsection we assume $k\leq l+1$.
The main result is Theorem \ref{th1ter}.
Its proof is long and requires some auxiliary results.
But the crucial idea is an interpolation argument that occurs
in the proof of the following Lemma \ref{4.3}, formula (\ref{ineq2}).
\begin{lemma}
\label{4.3}
Assume $l(N+\alpha-1)/(N+\alpha)\leq k$ and $k\geq 0$.
Let $u\in C_0 ^1 (\mathbb{R}^N_+)\setminus \{ 0 \} $, $u\geq 0$,
and define $y,z$ and $v$ by
\begin{equation}
\label{transf1}
y:= x|x| ^{\frac{k}{N+\alpha-1}} , \ z:= |y| \ \mbox{ and }\ v(y) :=
u(x), \qquad x\in \mathbb{R}_+ ^N .
\end{equation}
Then for every
$A\in \left[
0, \frac{(N+\alpha-1) ^2}{ (k+N+\alpha-1 )^2 }
\right]
$,
\begin{equation}
\label{ineq1}
{\mathcal Q}_{k,l,N, \alpha} (u) \geq \left(
\frac{k+N+\alpha-1}{N+\alpha-1}
\right)
^{ \frac{k+N+\alpha-1}{l+N+\alpha} }
\cdot
\frac{
\left(
\displaystyle\int_{\mathbb{R}_+ ^N }
y_N^\alpha|\nabla _y v| \,
\, dy
\right)
^A
\cdot
\left(
\displaystyle\int_{\mathbb{R}_+ ^N }
y_N^\alpha | v_z|
\, dy
\right)
^{1-A}
}{
\left(
\displaystyle\int_{\mathbb{R}_+ ^N }
y_N^\alpha |y|
^{ \frac{l(N+\alpha-1)-k(N+\alpha)}{k+N+\alpha-1} }
v
^{ \frac{l+N+\alpha}{k+N+\alpha-1} }
\, dy
\right)
^{ \frac{k+N+\alpha-1}{l+N+\alpha} }
}
.
\end{equation}
\end{lemma}
{\sl Proof:}
We calculate as in the proof of Theorem \ref{th1bis},
$$
\int_{\mathbb{R}_+^N }x_N^\alpha |x| ^k |\nabla _x u | \, dx
=
\int_{\mathbb{S}_+^{N-1} }
\int_0^{\infty}
s^{N+\alpha-1}
\left(
v_s ^2+\frac{|\nabla_\theta v|^2}{s^2}
\left(
\frac{N+\alpha-1}{k+N+\alpha-1} \right) ^2 \right) ^{1/2} \, h \, ds \, d\Theta \, .
$$
Since the mapping
$$
t\longmapsto \log
\left(
\int _{{\mathbb S}_+^{N-1} }
\int_0^{+\infty } z^{N+\alpha-1}
\sqrt{ v_z ^2 + t\frac{|\nabla _{\theta } v|^2 }{z^2 } } \, h \, dz\, d\Theta \right)
$$
is concave, we deduce that for every
$A\in \left[ 0, \frac{(N+\alpha-1) ^2}{ (k+N+\alpha-1 )^2 } \right] $,
\begin{eqnarray}
\label{ineq2}
& & \int_{\mathbb{R}_+^N }x_N^\alpha |x| ^k |\nabla _x u | \, dx
\\
\nonumber
& \geq &
\left(
\int_{\mathbb{S}_+^{N-1} } \int_0 ^{+\infty } z^{N+\alpha-1}
\sqrt{
v_z ^2 + \frac{|\nabla _{\theta } v|^2 }{z^2 }
}
\, h \, dz \, d\Theta \right)
^A
\cdot
\left(
\int_{\mathbb{S}_+^{N-1}}
\int_0^{+\infty } z^{N+\alpha-1} |v_z | \, h \, dz\, d\Theta
\right)
^{1-A}
\\
\nonumber
& = &
\left(
\int_{\mathbb{R}^N_+ } y_N^\alpha|\nabla _y v| \, dy
\right)
^A
\cdot
\left(
\int_{\mathbb{R}^N_+ } y_N^\alpha|v_z | \, dy
\right)
^{1-A} .
\end{eqnarray}
Finally, we have
\begin{equation}
\label{equaldenom}
\int_{\mathbb{R}^N_+ }x_N^\alpha |x| ^l u
^{ \frac{l+N+\alpha}{k+N+\alpha-1} } \, dx =
\frac{N+\alpha -1}{k+N+\alpha -1} \int_{\mathbb{R}_+^N} y_N^\alpha
|y|
^{ \frac{l(N+\alpha -1)-k(N+\alpha )}{k+N+\alpha -1} }
v
^{ \frac{l+N+\alpha }{k+N+\alpha -1} } \, dy .
\end{equation}
Now (\ref{ineq1}) follows from
(\ref{ineq2}) and (\ref{equaldenom}).
$\hfill \Box $
\\[0.1cm]
Next we want to estimate the right-hand side of (\ref{ineq1}) from below.
We will need a few more properties of the starshaped rearrangement.
\begin{lemma}
\label{4.2}
Assume $l(N+\alpha-1)/(N+\alpha)\leq k$.
Then we have for any function
$v\in C_0 ^1 (\mathbb{R}^N_+ )\setminus \{ 0 \}$ with $v\geq 0$,
\begin{eqnarray}
\label{starsh5}
& &
\int_{\mathbb{R}_+^N } y_N^\alpha |y|
^{ \frac{l(N+\alpha-1)-k(N+\alpha)}{k+N+\alpha-1} }
v
^{ \frac{l+N+\alpha}{k+N+\alpha-1} }
\, dy
\leq
\int_{\mathbb{R}_+^N } y_N^\alpha |y|
^{ \frac{l(N+\alpha-1)-k(N+\alpha)}{k+N+\alpha-1} }
\widetilde{v}
^{ \frac{l+N+\alpha}{k+N+\alpha-1} }
\, dy,
\\
\label{vL1}
& &
\frac{
y\cdot \nabla \widetilde{v} }{|y|}
\equiv
\frac{
\partial \widetilde{v} }{
\partial z }
\in L^1 (\mathbb{R}_+ ^N )
\quad \mbox{and }
\\
\label{starsh6}
& &
\int_{\mathbb{R}^N_+ }
y_N^\alpha\left| \frac{ \partial v}{\partial z} \right| \, dy
\geq
\int_{\mathbb{R}^N_+ }
y_N^\alpha\left|\frac{ \partial \widetilde{v} }{\partial z} \right|
\, dy.
\end{eqnarray}
\end{lemma}
{\sl Proof:} Let us prove (\ref{starsh5}). Set
$$
w(y):= |y|
^{ \frac{l(N+\alpha-1)-k(N+\alpha)}{l+N+\alpha} } .
$$
Since $l(N+\alpha-1)-k(N+\alpha)\leq 0$, we have $w= \widetilde{w} $.
Hence (\ref{starsh5}) follows from (\ref{harlit}) and (\ref{monrearr}).
\\
Next let $\zeta := z^N $ and define $V$ and $\widehat{V} $ by
$V(\zeta ,\theta) := v(z\theta )$, and
$\widehat{V} (\zeta ,\theta ) := \widetilde{v} (z\theta )$.
Observe that for each $\theta \in \mathbb{S}_+^{N-1} $,
$\widehat{V} (\cdot , \theta ) $ is the equimeasurable
non-increasing rearrangement of
$V (\cdot ,\theta )$. Further we have
$$
\frac{ \partial v }{ \partial z }
=
N\zeta
^{ \frac{N-1}{N} }
\frac{ \partial V}{\partial \zeta } \ \mbox{ and } \
\frac{ \partial \widetilde{v} }{ \partial z }
=
N\zeta
^{ \frac{N-1}{N} }
\frac{ \partial \widehat{V} }{ \partial \zeta }
.
$$
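Both identities follow from the substitution $\zeta = z^N $: since $z=\zeta ^{1/N}$,
$$
\frac{\partial }{\partial z}
= \frac{d\zeta }{dz}\, \frac{\partial }{\partial \zeta }
= Nz^{N-1}\, \frac{\partial }{\partial \zeta }
= N\zeta ^{ \frac{N-1}{N} }\, \frac{\partial }{\partial \zeta } ,
\qquad
z^{N+\alpha -1}\, dz = \frac{1}{N}\, \zeta ^{ \frac{\alpha }{N} }\, d\zeta ,
$$
and the two factors combine to $\zeta ^{ \frac{N+\alpha -1}{N} }\, d\zeta $ in the integrals below.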
Since $\frac{\partial v}{\partial z} \in L^{\infty } (\mathbb{R}^N )$,
Lemma \ref{Landes} tells us
that for every $\theta \in {\mathbb S}_+ ^{N-1} $,
\begin{eqnarray*}
\int_0^{+\infty } z^{N+\alpha-1} \left| \frac{\partial v}{\partial z}
(z\theta ) \right| \, dz
& = &
\int_0 ^{+\infty } \zeta
^{ \frac{N+\alpha-1}{N} }
\left| \frac{\partial V}{\partial \zeta } (\zeta ,\theta )
\right| \, d \zeta
\\
& \geq &
\int_0 ^{+\infty } \zeta
^{ \frac{N+\alpha-1}{N} }
\left| \frac{\partial \widehat{V} }{\partial \zeta }
(\zeta ,\theta )\right| \, d\zeta
\\
& = &
\int_0^{+\infty } z^{N+\alpha-1} \left|
\frac{ \partial \widetilde{v} }{\partial z} (z\theta )\right| \, dz .
\end{eqnarray*}
Integrating this over ${\mathbb S}_+ ^{N-1}$,
we obtain (\ref{starsh6}).
$\hfill \Box$
\\[0.1cm]
A final ingredient is
\begin{lemma}
\label{4.1}
Assume that $l(N+\alpha-1)/(N+\alpha)\leq k$,
and let $M \subset \mathbb R^{N}_{+}$ be a bounded starshaped set. Then
\begin{eqnarray}
\label{holder1}
& &
\left(
\int_M y_N^\alpha |y|
^{ \frac{l(N+\alpha-1) -k(N+\alpha)}{k+N+\alpha-1} }
\, dy
\right)
^{ \frac{k+N+\alpha-1}{l+N+\alpha} }
\\
\nonumber
& \leq &
d_1
\left(
\int_M y_N^\alpha\, dy
\right)
^{ \frac{(N+\alpha-1)(l-k+1) }{ l+N+\alpha} }
\cdot
\left(
\int_M y_N^\alpha|y|^{-1} \, dy
\right)
^{ \frac{k(N+\alpha)-l(N+\alpha-1) }{ l+N+\alpha} } ,
\quad \mbox{ where}
\\
& &
\label{d1}
d_1 =
\left(
\frac{k+N+\alpha-1}{l+N+\alpha}
\right)
^{ \frac{k+N+\alpha-1}{l+N+\alpha} }
\cdot
\left(
\frac{N+\alpha}{N+\alpha-1}
\right)
^{ \frac{(N+\alpha-1)(l-k+1)}{l+N+\alpha} } .
\end{eqnarray}
Moreover,
if $k<l+1$ and $l(N+\alpha-1)/(N+\alpha) <k$, then
equality in (\ref{holder1}) holds only
if $M=B_{R}^{+} $ for some $R>0$.
\end{lemma}
{\sl Proof:} Since $M$ is starshaped,
there is a bounded measurable function
$m : \mathbb{S} ^{N-1}_{+} \to [0, +\infty )$, such that
\begin{equation}
\label{Mrepresent}
M= \{ z\theta :\, 0\leq z < m (\theta ), \
\theta \in \mathbb{S} ^{N-1}_{+} \} .
\end{equation}
Using H\"older's inequality we obtain
\begin{eqnarray}
\label{chain}
& &
\hspace{1cm} \int_M y_N^\alpha|y|
^{ \frac{l(N+\alpha-1) -k(N+\alpha)}{k+N+\alpha-1} }
\, dy
\\
\nonumber
& = &
\frac{k+N+\alpha-1}{(l+N+\alpha)(N+\alpha-1)}
\int_{{\mathbb S}_+ ^{N-1} } m (\theta )
^{ \frac{(l+N+\alpha)(N+\alpha-1)}{k+N+\alpha-1} }
\, h \, d\Theta
\\
\nonumber
& = &
\frac{k+N+\alpha-1}{(l+N+\alpha)(N+\alpha-1)}
\int_{{\mathbb S}_+ ^{N-1} } m (\theta )
^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{k+N+\alpha-1}(N+\alpha-1) } m (\theta )
^{ \frac{(N+\alpha-1)(l-k+1)}{k+N+\alpha-1} (N+\alpha)}
\, h \, d\Theta
\\
\nonumber
& \leq &
\frac{k+N+\alpha-1}{(l+N+\alpha)(N+\alpha-1)}
\left(
\int_{{\mathbb S}_+ ^{N-1} }
m (\theta ) ^{N+\alpha}
\, h \, d\Theta
\right)
^{ \frac{(N+\alpha-1)(l-k+1)}{k+N+\alpha-1} }
\\
\nonumber
\qquad & \times
&
\left(
\int_{{\mathbb S}_+ ^{N-1} }
m (\theta ) ^{N+\alpha-1}
\, h \, d\Theta
\right)
^{ \frac{k(N+\alpha)- l(N+\alpha-1)}{k+N+\alpha-1} }
\\
\nonumber
& = &
\frac{k+N+\alpha-1}{(l+N+\alpha)(N+\alpha-1)}
\left(
(N+\alpha) \int_M y_N^\alpha dy
\right)
^{ \frac{(N+\alpha-1)(l-k+1)}{k+N+\alpha-1} }
\times
\\
\nonumber
\qquad & \times
&
\left(
(N+\alpha-1) \int_M |y| ^{-1} y_N^\alpha\, dy
\right)
^{ \frac{k(N+\alpha)- l(N+\alpha-1)}{k+N+\alpha-1} } ,
\end{eqnarray}
and (\ref{holder1}) follows.
If $k<l+1$ and $l(N+\alpha -1)/(N+\alpha ) < k$, then (\ref{chain}) holds
with equality only if $m (\theta )=\mbox{const }$.
$\hfill \Box $
\\[0.1cm]
Now we are ready to prove our main result.
\begin{theorem}
\label{th1ter}
Assume $0\le k\leq l+1$ and
\begin{equation}
\label{crucial}
l\leq
\frac{(k+N+\alpha-1)^3 }{(k+N+\alpha-1)^2 - \frac{(N+\alpha-1)^2 }{ N+\alpha}} -N -\alpha.
\end{equation}
Then (\ref{isop1}) holds.
Furthermore, if
inequality (\ref{crucial}) is strict,
then (\ref{M=BR}) holds only if $M=B_{R}^{+}$ for some $R>0$.
\end{theorem}
{\sl Proof: } First observe that the conditions
$k\geq 0$ and (\ref{crucial}) also imply
$l(N+\alpha-1)/(N+\alpha) \leq k$. Let $u \in C_0 ^{\infty }
(\mathbb{R}_+^N)\setminus \{ 0\} $, $u\geq 0$,
and let $v$ be given by
(\ref{transf1}).
In view of (\ref{crucial}), we may choose
$$
A=\frac{(N+\alpha)(l-k+1)}{l+N+\alpha}
$$
to obtain
\begin{eqnarray}
\label{ineq5bis}
{\mathcal Q}_{k,l,N, \alpha} (u)
& \geq &
\left( \frac{k+N+\alpha-1}{N+\alpha-1} \right)^{ \frac{k+N+\alpha-1 }{ l+N+\alpha} } \times
\\
\nonumber
& \times &
\frac{
\left(
\displaystyle\int_{\mathbb{R} ^N _+ }
y_N^\alpha |\nabla _y v|
\, dy
\right)
^{ \frac{(N+\alpha)(l-k+1) }{ l+N+\alpha} }
\cdot
\left(
\displaystyle\int_{\mathbb{R}_+ ^N }
y_N^\alpha |v_z |
\, dy
\right)
^{ \frac{k(N+\alpha)-l(N+\alpha-1) }{ l+N+\alpha} }
}{
\left(
\displaystyle\int_{\mathbb{R}_+^N } y_N^\alpha |y|
^{ \frac{l(N+\alpha-1)-k(N+\alpha ) }{ k+N+\alpha-1} }
v
^{ \frac{l+N+\alpha}{k+N+\alpha-1} }
\, dy
\right)
^{ \frac{k+N+\alpha-1}{l+N+\alpha} }
} .
\end{eqnarray}
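Let us verify that this choice of $A$ lies in the admissible interval $\left[ 0, \frac{(N+\alpha -1)^2}{(k+N+\alpha -1)^2}\right] $. Writing $K:=k+N+\alpha -1$ and $L:=l+N+\alpha $, we have $A=(N+\alpha )\frac{L-K}{L}$, and
$$
A\leq \frac{(N+\alpha -1)^2}{K^2}
\ \Longleftrightarrow \
1-\frac{K}{L} \leq \frac{(N+\alpha -1)^2}{(N+\alpha )K^2}
\ \Longleftrightarrow \
L\leq \frac{K^3}{K^2 -\frac{(N+\alpha -1)^2}{N+\alpha }} ,
$$
which is precisely (\ref{crucial}), while $A\geq 0$ is equivalent to $k\leq l+1$.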
Further, (\ref{starsh6}) and
Hardy's inequality yield
\begin{equation}
\label{ineq4}
\int_{\mathbb{R}^N _+ } y_N^\alpha |v_z| \, dy
\geq
\int_{\mathbb{R}^{N}_{+} }
y_N^\alpha |\widetilde{v}_z| \, dy
\geq (N+\alpha -1)
\int_{\mathbb{R}^{N}_{+} } y_N^\alpha\frac{\widetilde{v}}{|y|}
\, dy\,,
\end{equation}
where $\widetilde{v}$ denotes the starshaped rearrangement of $v$.
Together with (\ref{ineq5bis}) and (\ref{starsh5}) this leads to
\begin{eqnarray}
\label{ineq5final}
{\mathcal Q}_{k,l,N, \alpha} (u)
& \geq &
(N+\alpha-1)
^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{l+N+\alpha} }
\left(
\frac{k+N+\alpha-1}{N+\alpha-1}
\right)
^{ \frac{k+N+\alpha-1 }{ l+N+\alpha} }
\cdot
\\
\nonumber
& & \cdot
\frac{
\left(
\displaystyle\int_{\mathbb{R}_+ ^N }
y_N^\alpha|\nabla _y v|
\, dy
\right)
^{ \frac{(N+\alpha)(l-k+1) }{ l+N+\alpha} }
\cdot
\left(
\displaystyle\int_{\mathbb{R}_+^N }
y_N^\alpha\frac{\widetilde{v} }{|y|}
\, dy
\right)
^{ \frac{k(N+\alpha)-l(N+\alpha-1) }{ l+N+\alpha} }
}{
\left(
\displaystyle\int_{\mathbb{R}_+^N } y_N^\alpha |y|
^{ \frac{l(N+\alpha-1)-k(N+\alpha) }{ k+N+\alpha-1} }
\widetilde{v}
^{ \frac{l+N+\alpha}{k+N+\alpha-1} }
\, dy
\right)
^{ \frac{k+N+\alpha-1}{l+N+\alpha} }
} .
\end{eqnarray}
Now let $M $ be a bounded measurable subset of $\mathbb R^{N}_{+}$.
Then combining (\ref{limperim}), (\ref{limmeas})
and the argument leading to (\ref{CklNsmooth}) we deduce that
there exists a sequence of non-negative functions
$\{ u_n \} \subset C_0 ^1 (\mathbb{R}^N_+ )$ such that
\begin{equation}
\label{lim1}
\lim_{n\to \infty }
\int_{\mathbb{R}_+^N } x_N^\alpha|x| ^k |\nabla u_n | \, dx =
P_{\mu _k, \alpha } (M)
\end{equation}
and
\begin{equation}
\label{lim2}
u_n \longrightarrow \chi_{M } \quad
\mbox{ in $L^p (\mathbb{R}^{N}_{+} ) $ for every $p\geq 1 $.}
\end{equation}
We define
$M ':= \{ y= x|x|^{\frac{k}{N+\alpha-1}} : \, x \in M \} $
and $v_n (y) := u_n (x) $.
Let $\widetilde{v_n } $ and $\widetilde{M '} $
be the starshaped rearrangements of $v_n $ and $M ' $ respectively.
Then (\ref{lim1}) and (\ref{lim2}) also imply
\begin{eqnarray}
\label{lim3}
& & \lim_{n\to \infty } \int_{\mathbb{R}^N_+ } y_N^\alpha |\nabla _y v_n | \, dy =
P_{\mu _0, \alpha } (M' ),
\quad \mbox{and}
\\
\label{lim4}
& & \widetilde{v_n }\longrightarrow
\chi _{\widetilde{M ' } } \
\mbox{ in $L^p (\mathbb{R}^{N}_{+} ) $ for every $p\geq 1 $.}
\end{eqnarray}
Choosing $u=u_n $ in (\ref{ineq5final}) and passing to the limit
$n\to \infty $,
we obtain, using (\ref{lim1}), (\ref{lim2}), (\ref{lim3}),
(\ref{lim4}) and Proposition \ref{BCM2}
\begin{eqnarray}
\label{ineq6}
& & {\mathcal R}_{k,l,N, \alpha} (M )
\\
\nonumber
& \geq &
(N+\alpha-1)
^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{l+N+\alpha} }
\left(
\frac{k+N+\alpha-1}{N+\alpha-1}
\right)
^{ \frac{k+N+\alpha-1}{l+N+\alpha} }
\cdot
\\
\nonumber
& & \cdot
\frac{
\left( P_{\mu _0, \alpha } (\widetilde{M'}) \right)
^{ \frac{(N+\alpha)(l-k+1)}{l+N+\alpha} }
\cdot
\left(
\displaystyle\int_{\widetilde{M' } }
\frac{y_N^\alpha dy}{|y| }
\right)
^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{l+N+\alpha} }
}{
\left(
\displaystyle\int_{\widetilde{M '} } y_N^\alpha |y|
^{ \frac{l(N+\alpha-1)-k(N+\alpha)}{k+N+\alpha-1} }
\, dy
\right)
^{ \frac{k+N+\alpha-1}{l+N+\alpha} }
}
\\
\nonumber
& \geq &
(N+\alpha-1)
^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{l+N+\alpha} }
\left(
\frac{k+N+\alpha-1}{N+\alpha-1}
\right)
^{ \frac{k+N+\alpha-1}{l+N+\alpha} }
\left(
C_{0,0,N, \alpha} ^{rad}
\right)
^{ \frac{(N+\alpha)(l-k+1)}{l+N+\alpha} }
\times
\\
\nonumber
& & \times
\frac{
\left( \mu_{0,\alpha} (\widetilde{M'}) \right)
^{ \frac{(N+\alpha-1)(l-k+1)}{l+N+\alpha} }
\cdot
\left(
\displaystyle\int_{\widetilde{M' } }
\frac{ y_N^\alpha dy}{|y| }
\right)
^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{l+N+\alpha} }
}{
\left(
\displaystyle\int_{\widetilde{M '} } y_N^\alpha |y|
^{ \frac{l(N+\alpha-1)-k(N+\alpha)}{k+N+\alpha-1} }
\, dy
\right)
^{ \frac{k+N+\alpha-1}{l+N+\alpha} }
} .
\end{eqnarray}
In view of (\ref{holder1}) and since
$\mu _{0,\alpha } (M') = \mu_{0,\alpha } (\widetilde{M'})$
we finally get from this
\begin{eqnarray}
\label{ineq7}
& &
{\mathcal R}_{k,l,N, \alpha} (M)
\\
&\geq&
\nonumber
(N+\alpha-1)
^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{l+N+\alpha} }
\left( \frac{k+N+\alpha-1}{N+\alpha-1} \right)
^{ \frac{k+N+\alpha-1}{l+N+\alpha} }
\left(
C_{0,0,N, \alpha} ^{rad}
\right)
^{ \frac{(N+\alpha)(l-k+1)}{l+N+\alpha} } \frac{1}{d_1}
\\
\nonumber
& = & \left( \displaystyle\int_{\mathbb S^{N-1}_{+}} \,h \,d \Theta \right) ^{\frac{l-k+1}{l+N+\alpha} }
\cdot (l+N+\alpha)^{\frac{k+N+\alpha-1}{l+N+\alpha} } = C_{k,l,N, \alpha} ^{rad} ,
\end{eqnarray}
and (\ref{isop1}) follows by (\ref{CklNsmooth}).
\\
Now assume that (\ref{M=BR}) holds.
If inequality (\ref{crucial}) is strict,
then Lemma \ref{rangekl1} tells us that we must have
$M= B_{R}^{+} $ for some $R>0$.
$\hfill \Box $
\begin{remark}
\rm
Note that if $N+\alpha \geq 3$, then (\ref{crucial}) covers the important range
$$
l=0\leq k\leq 1.
$$
However, we emphasize that this is not true when $2\leq N+\alpha <3$.
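Indeed, for $l=0$ condition (\ref{crucial}) reads $f(K)\geq 0$, where $K:=k+N+\alpha -1$ and
$$
f(K):= K^3 -(N+\alpha )K^2 +(N+\alpha -1)^2 .
$$
One checks that $f(N+\alpha -1)=0$ and $f'(K)=K\left( 3K-2(N+\alpha )\right) $. Hence, if $3(N+\alpha -1)\geq 2(N+\alpha )$, that is, if $N+\alpha \geq 3$, then $f$ is non-decreasing, and hence non-negative, on $[N+\alpha -1, N+\alpha ]$; if instead $N+\alpha <3$, then $f'(N+\alpha -1)<0$, so $f$ becomes negative just to the right of $K=N+\alpha -1$.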
\end{remark}
\medskip
\noindent
\section{Applications}
In this section we provide some applications of our results.
\subsection{P\'{o}lya-Szeg\"o principle}
First we obtain a P\'{o}lya-Szeg\"o principle related to our
isoperimetric inequality (\ref{isop1}) (cf. \cite{Talenti2}).
Assume that the numbers $k, l$ and $\alpha$ satisfy (\ref{ass1}) and one of the conditions {\bf (i)}-{\bf (iii)} of
Theorem \ref{maintheorem}. Then (\ref{mainineq}) implies
\begin{equation}
\label{Isop_klalpha}
\int_{\partial \Omega }
|x|^k x_N ^{\alpha } {\mathcal H}_{N-1}(dx)
\geq
\int_{\partial \Omega ^{ \star }}
|x|^k x_N ^{\alpha }
{\mathcal H}
_{N-1}(dx)
\end{equation}
for every smooth set $\Omega \subset \mathbb{R} ^N_+ $, where $\Omega ^{\star}$ is the $\mu_{l,\alpha } $-symmetrization of $\Omega $.
We will use (\ref{Isop_klalpha}) to prove the following
\begin{theorem}
\label{ps}
(P\'{o}lya-Szeg\"o principle)
Let the numbers $k,l$ and $\alpha $ satisfy one of the conditions {\bf (i)}-{\bf (iii)}
of Theorem \ref{maintheorem}.
Further, let $p\in [1, +\infty)$ and $m:= pk+(1-p) l $. Then there holds
\begin{equation}
\int_{\mathbb{R}^N _+ }
\left\vert \nabla u\right\vert ^p
d \mu _{m,\alpha } (x)
\geq
\int_{\mathbb{R}^N _+ }\left\vert \nabla
u^{ \star }\right\vert ^{p}
d\mu_{m,\alpha } (x)
\quad
\forall u\in {\mathcal D} ^{1,p} (\mathbb{R} ^N _+ , d\mu _{m, \alpha } ),
\label{PS_k_l}
\end{equation}
where $u^{\star } $ denotes the $\mu _{l,\alpha } $-symmetrization of $u$.
\end{theorem}
{\sl Proof:}
It is sufficient to consider the case that $u$ is non-negative. Further,
by an approximation argument we may assume that
$u \in C^{\infty}_{0}(\mathbb{R}^{N} ) $.
Let
\begin{eqnarray*}
I & := &
\int_{\mathbb{R}^{N}_+ } | \nabla u| ^{p}
|x| ^{pk+(1-p)l} x_N ^{\alpha }\, dx \quad \mbox{and}\\
I ^{\star } & := &
\int_{\mathbb{R}^{N}_+ } | \nabla u^{\star} | ^{p}
|x| ^{pk+(1-p)l} x_N ^{\alpha }\, dx .
\end{eqnarray*}
The coarea formula yields
\begin{eqnarray}
\label{1coarea}
I
& = &
\int_{0}^{\infty }\int_{u=t} |\nabla
u| ^{p-1} |x| ^{pk+(1-p)l} x_N ^{\alpha }\, {\mathcal H}_{N-1}(dx)\, dt \quad \mbox{and}
\\
\label{coarea2}
I^{\star}
& = &
\int_{0}^{\infty }\int_{u^{\star } =t} |\nabla
u^{\star} | ^{p-1} |x| ^{pk+(1-p)l} x_N ^{\alpha }\, {\mathcal H}_{N-1}(dx)\, dt
.
\end{eqnarray}
Further, H\"older's inequality gives
\begin{equation}
\label{1holder}
\int_{ u=t} |x|^k x_N ^{\alpha } \, {\mathcal H} _{N-1} (dx)
\leq
\left( \int_{ u=t} |x|^{kp +l(1-p)} |\nabla u| ^{p-1} x_N ^{\alpha } \, {\mathcal H} _{N-1} (dx) \right) ^{\frac{1}{p} }
\cdot
\left( \int_{ u=t} \frac{|x|^l x_N ^{\alpha }}{|\nabla u|} \, {\mathcal H}_{N-1} (dx)
\right)
^{\frac{p-1}{p} } ,
\end{equation}
for a.e. $t\in [0, +\infty )$.
Hence (\ref{1coarea}) together with (\ref{1holder}) tells us that
\begin{equation}
\label{coarea3}
I
\geq
\int_{0}^{\infty }
\left( \int_{u=t} |x| ^{k} x_N ^{\alpha }\, {\mathcal H}
_{N-1}(dx)
\right) ^{p} \cdot
\left(
\int_{u=t}\frac{ |x| ^{l}x_N ^{\alpha }}{
| \nabla u| } \, {\mathcal H}_{N-1}(dx)
\right) ^{1-p} \, dt.
\end{equation}
Since $u^{\star} $ is a radial function, we obtain in an analogous manner,
\begin{equation}
\label{coarea4}
I^{\star}
=
\int_{0}^{\infty }
\left( \int_{u^{\star} =t} |x| ^{k} x_N ^{\alpha } \, {\mathcal H}
_{N-1}(dx)
\right) ^{p} \cdot
\left(
\int_{u^{\star} =t}\frac{ |x| ^{l}x_N ^{\alpha } }{
| \nabla u^{\star} | } \, {\mathcal H}_{N-1}(dx)
\right) ^{1-p} \, dt.
\end{equation}
Observing that
\begin{equation}
\label{meas_u>t}
\int_{u>t} |x|^{l} x_N ^{\alpha } \, dx
=
\int_{u^{\star }>t}
|x|^{l} x_N ^{\alpha } \, dx \quad \forall t\in [0, +\infty ),
\end{equation}
Fleming-Rishel's formula yields
\begin{equation}
\label{flemingrishel}
\int_{u=t } \frac{|x|^l x_N ^{\alpha }}{|\nabla u|} \, {\mathcal H}_{N-1} (dx)
=
\int_{u^{\star} =t } \frac{|x|^l x_N ^{\alpha }}{|\nabla u^{\star} |} \, {\mathcal H}_{N-1} (dx)
\end{equation}
for a.e. $t\in [0, +\infty )$.
Hence
(\ref{flemingrishel}) and (\ref{Isop_klalpha}) give
\begin{eqnarray*}
& &
\int_{0}^{\infty }
\left( \int_{u=t} |x|^k x_N ^{\alpha } \, {\mathcal H}
_{N-1}(dx) \right) ^{p}
\cdot
\left( \int_{u=t}\frac{| x| ^{l} x_N ^{\alpha } }{
| \nabla u| } \, {\mathcal H}_{N-1}(dx) \right) ^{1-p}
\, dt
\\
& \geq &
\int_{0}^{\infty }\left( \int_{u^{\star} =t} |x| ^{k} x_N ^{\alpha }
\, {\mathcal H}_{N-1}(dx) \right) ^{p} \cdot \left( \int_{u^{\star}=t}
\frac{|x|^{l} x_N ^{\alpha } }{|\nabla u^{\star} | } \, {\mathcal H}
_{N-1}(dx)\right) ^{1-p} \, dt.
\end{eqnarray*}
Now (\ref{PS_k_l}) follows from this, (\ref{coarea3}) and (\ref{coarea4}).
$\hfill \Box$
\\[0.1cm]
An important particular case of Theorem \ref{ps} is
\begin{corollary}
\label{specialcasePS}
Let $p\in [1, +\infty )$, $N+\alpha \geq 3 $, $a\geq 0 $, $u\in {\mathcal D} ^{1,p}
(\mathbb{R}^N _+ , d\mu _{ap ,\alpha }) $, and let $u^{\star } $ be the $\mu_{0,\alpha } $-symmetrization of $u$.
Then
\begin{equation}
\label{PSspecial}
\int_{\mathbb{R}^N _+ } \left| \nabla u\right|^p \, d\mu _{ap, \alpha } (x)
\geq
\int_{\mathbb{R}^N _+ } \left| \nabla u^{\star} \right|^p \, d\mu _{ap, \alpha } (x) .
\end{equation}
\end{corollary}
{\sl Proof: } We choose $k:= a $ and $l:= 0$. If $a\in [0,1]$ then $k,l$
satisfy either one of the conditions {\bf (ii)} or {\bf (iii)}, see also Remark 5.2. If $a\geq 1 $, then $k,l$ satisfy condition {\bf (i)} of Theorem \ref{maintheorem}. Hence (\ref{PSspecial}) follows from Theorem \ref{ps}.
$\hfill \Box $
\subsection{Caffarelli-Kohn-Nirenberg-type inequalities}
Next we will use Theorem \ref{ps} to obtain best constants in some
inequalities of Caffarelli-Kohn-Nirenberg-type.
Let $p,q, a, b$ be real numbers such that
\begin{eqnarray}
\label{CKNassump1}
& & 1\leq p \leq q \left\{
\begin{array}{ll}
\leq \frac{(N+\alpha )p}{N+\alpha -p} & \mbox{ if } \ p< N+\alpha
\\
< +\infty & \mbox{ if } \ p\geq N + \alpha
\end{array}
\right.
,
\\
\nonumber
& & a> 1-\frac{N+\alpha }{p}, \quad \mbox{and }
\\
\nonumber
& & b= b(a,p,q,N, \alpha ) = (N+\alpha ) \left( \frac{1}{p} -\frac{1}{q} \right) + a-1 .
\end{eqnarray}
We define
\begin{eqnarray}
\label{p*}
p^* & := & \left\{
\begin{array}{ll}
\frac{(N+\alpha )p}{N+\alpha -p} & \mbox{ if } p<N+\alpha
\\
+\infty & \mbox{ if } p\geq N+\alpha
\end{array}
\right.
,
\\
& &\nonumber
\\
\label{fctalE}
E_{a,p,q,N, \alpha } (v)
& := &
\frac{\displaystyle\int_{\mathbb{R} ^N _+ } |x|^{ap} |\nabla v|^p x_N ^{\alpha }\, dx
}{
\left( \displaystyle\int_{\mathbb{R}^N _+ } |x|^{bq} |v|^q x_N ^{\alpha } \, dx \right) ^{p/q} },
\quad v\in C_0 ^{\infty } (\mathbb{R}^N )\setminus \{ 0\} ,
\\
\label{SapqN}
S_{a,p,q,N, \alpha } & := & \inf \{ E_{a,p,q,N,\alpha } (v): \, v\in C_0 ^{\infty }
(\mathbb{R}^N ) \setminus \{ 0\} \}, \quad \mbox{and}
\\
\label{SapqNrad}
S_{a,p,q,N,\alpha } ^{rad} & := & \inf \{ E_{a,p,q,N,\alpha } (v): \, v\in C_0 ^{\infty }
(\mathbb{R}^N )\setminus \{ 0\} , \ v \mbox{ radial }\}.
\end{eqnarray}
Note that with this new notation we have
\begin{eqnarray}
\label{E=Q}
E_{k,1,\frac{l+N+\alpha }{k+N+\alpha -1} ,N, \alpha } (v) & = & {\mathcal Q}_{k,l,N,\alpha } (v) \quad \forall v\in C_0 ^{\infty } (\mathbb{R}^N )\setminus \{ 0\} ,
\\
\label{S=C}
S_{k,1,\frac{l+N+\alpha }{k+N+\alpha -1} ,N, \alpha } & = & C_{k,l,N,\alpha } \quad \mbox{and}
\\
\label{Srad=Crad}
S_{k,1,\frac{l+N+\alpha }{k+N+\alpha -1} ,N,\alpha } ^{rad} & = & C_{k,l,N,\alpha } ^{rad} .
\end{eqnarray}
\\
We are interested
in the range of values $a$ (depending on $p,q,N$ and $\alpha $) for which
\begin{equation}
\label{S=S_rad}
S_{a,p,q,N,\alpha } = S_{a,p,q,N,\alpha } ^{rad}
\end{equation}
holds.
\\
First observe that the case $1<p=q$ (which is equivalent to $a-b=1$)
corresponds to a weighted Hardy-Sobolev-type inequality. Note that inequality \eqref{eq:theorem:Hardy with weight} below was already known when $\alpha=0$ (see, for example \cite{HK} and references therein). We have:
\begin{theorem}
\label{hardysobolev}
\label{theorem:Hardy with weight}
Let $p\geq 1$, $\alpha\geq 0$ and $k\in\mathbb{R}$ be such that $N-p+\alpha +k>0$.
Then we have
\begin{equation}
\label{eq:theorem:Hardy with weight}
\int_{\mathbb{R}^N_+} |\nabla u(x)|^p \, d\mu _{k,\alpha } (x)
\geq
\left(\frac{N-p+k+\alpha }{p}\right)^p
\int_{\mathbb{R}^N_+ } \frac{| u(x)|^p }{|x|^p } \, d\mu_{k,\alpha } (x)
\end{equation}
for all $u\in {\mathcal D} ^{1,p} (\mathbb{R}^N_+ , d\mu_{k,\alpha }) $
and
\begin{equation}
\label{constant}
S_{a,p,p,N,\alpha } ^{rad} = S_{a,p,p,N,\alpha }
=\left(\frac{N-p+k+\alpha }{p}\right)^p .
\end{equation}
Moreover there is no function $u\in {\mathcal D} ^{1,p}(\mathbb{R}^N_+,d\mu_{k,\alpha } )$ satisfying equality in \eqref{eq:theorem:Hardy with weight} and such that\\
$\int _{\mathbb{R}^N_+ } |\nabla u|^p d\mu_{k,\alpha } \neq 0.$
\end{theorem}
{\sl Proof:} The first two steps follow the lines of the proof of Lemma 2.1 in \cite{GaPe}.
\\
\textit{Step 1.} Assume first that $u\in C_0^{\infty}(\mathbb{R}^N)$. Then
we have for every $x\in \mathbb{R}^N _+ $,
$$
|u(x)|^p=
- \int_1^{\infty}\frac{d}{dt}|u(tx)|^p\, dt=
- \int_1^{\infty} p|u(tx)|^{p-2}u(tx)\langle x,\nabla u(tx)\rangle \, dt .
$$
Multiplying this with $x_N ^{\alpha } |x|^{k-p} $ and integrating over $\mathbb{R}^N _+$ we find
\begin{eqnarray}
\nonumber
\int_{\mathbb{R}^N_+ }|u(x)|^p x_N ^{\alpha } |x|^{k-p} \, dx & = & - p\int_{1}^{\infty}\left[
\int_{\mathbb{R}^N_+ } |u(tx)|^{p-2} u(tx) \langle x, \nabla u(tx)\rangle x_N ^{\alpha } |x|^{k-p} \, dx
\right] \, dt
\\
\nonumber
& = &
- p\int_{1}^{\infty}\frac{1}{t^{N-p+\alpha +k+1 }}\left[
\int_{\mathbb{R}^N_+ } \frac{|u(y)|^{p-2} u(y) }{|y|^{p}}\langle y, \nabla u(y)\rangle y_N ^{\alpha } |y|^k \, dy
\right] \, dt
\\
\label{identityhardy}
& =&
- \frac{p}{N-p+\alpha +k }
\int_{\mathbb{R}^N_+} \frac{|u(x)|^{p-2} u(x) }{|x|^{p}}\langle x, \nabla u(x)\rangle x_N ^{\alpha } |x|^k \, dx .
\end{eqnarray}
Note that by a density argument (\ref{identityhardy}) still holds for functions $u\in {\mathcal D}^{1,p} (\mathbb{R}^N_+ , d\mu _{k,\alpha } )$.
In view of the inequality
\begin{equation}
\label{eq:estimate nabla u by Cauch-Sch}
- u(x) \langle x,\nabla u(x)\rangle \leq |u(x)||x| |\nabla u(x)|
\end{equation}
this leads to
\begin{equation}
\label{ineq1hardy}
\int_{\mathbb{R}^N_+ }|u(x)|^p x_N ^{\alpha }|x|^{k-p} \, dx \leq
\frac{p}{N-p+k+\alpha }
\int_{\mathbb{R}^N_+ } \frac{|u(x)|^{p-1} }{|x|^{p-1}}|\nabla u(x)| x_N ^{\alpha } |x|^k \, dx .
\end{equation}
Using H\"older's inequality, with $p'$ the conjugate exponent of $p$, we obtain (this step is not necessary if $p=1$)
\begin{eqnarray}
\nonumber
& & \int_{\mathbb{R}^N_+ } \frac{|u(x)|^{p-1}}{|x|^{p-1}}|\nabla u(x)|x_N ^{\alpha } |x|^k \, dx
\\
\nonumber
&
= &
\int_{\mathbb{R}^N_+}\left\{
\frac{|u(x)|^{p-1}}{|x|^{p-1}}\left[ x_N ^{\alpha } |x|^k \right] ^{1/p'} \right\} \left\{ |\nabla u(x)|\left[ x_N ^{\alpha } |x|^k \right] ^{1/p} \right\} \, dx
\\
\label{ineq2hardy}
& \leq &
\left(
\int_{\mathbb{R}^N_+ }|u(x)|^p x_N ^{\alpha } |x|^{k-p}\, dx
\right)^{1/p'}
\cdot \left(
\int_{\mathbb{R}^N_+ }|\nabla u(x)|^p x_N ^{\alpha } |x|^k \, dx
\right)^{1 /p} .
\end{eqnarray}
Plugging this estimate into (\ref{ineq1hardy}) proves the first statement of the theorem.
\smallskip
\textit{Step 2.} Next we show (\ref{constant}).
Let $\epsilon >0$ and define
$$
M_{\epsilon}=\frac{N-p+k+\alpha +\epsilon}{p},\qquad
u_{\epsilon}(x)=\left\{\begin{array}{rl}
1&\text{ if }|x|\leq 1
\smallskip \\
|x|^{-M_{\epsilon}}&\text{ if }|x|>1.
\end{array}\right.
$$
Note that
$$
\int_{\mathbb{R}^N_+}|\nabla u_{\epsilon}|^p x_N ^{\alpha } |x|^k \, dx ={M_{\epsilon}}^p\int_{\mathbb{R}^N_+ \backslash B_1}x_N ^{\alpha }|x|^{k-(M_{\epsilon}+1) p}\, dx.
$$
Hence, by Lemma \ref{lemma:integrability w times power} (ii) below we obtain for any $\epsilon >0$ that $u_{\epsilon}\in {\mathcal D}^{1,p}(\mathbb{R}^N_+, d\mu_{k,\alpha } ).$
On the other hand, we have that
$$
\int_{\mathbb{R}^N_+}|u_{\epsilon}(x)|^p x_N ^{\alpha }|x|^{k-p}\, dx=
\int_{\mathbb{R}^N_+ \backslash B_1} x_N ^{\alpha } |x|^{k-(M_{\epsilon}+1)p}\, dx +\beta,
$$
where, by Lemma \ref{lemma:integrability w times power} (i),
$$
\beta=\int_{B_1^+ } x_N ^{\alpha }|x|^{k-p}<\infty.
$$
Now set
\begin{displaymath}
\displaystyle{
Q_{\epsilon}
=
\frac{
\int_{\mathbb{R}^N_+ } |\nabla u_{\epsilon}|^p x_N ^{\alpha } |x|^{k} \, dx
}{
\int_{\mathbb{R}^N_+} |u_{\epsilon}|^p x_N ^{\alpha } |x|^{k-p } \, dx }=
\frac{
\int_{\mathbb{R}^N_+ \backslash B_1} x_N ^{\alpha } |x|^{k- (M_{\epsilon}+1)p}\, dx
}{
\beta +\int_{\mathbb{R}^N_+\backslash B_1} x_N ^{\alpha } |x|^{k-(M_{\epsilon}+1)p}\, dx } . }
\end{displaymath}
Note also that $(M_{\epsilon}+1)p=N+k+\alpha +\epsilon$.
Therefore we obtain from Lemma \ref{lemma:integrability w times power} (iii) that
$$
\lim_{\epsilon\to 0}Q_{\epsilon}=(M_0)^p=\left(\frac{N-p+k+\alpha }{p}\right)^p.
$$
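In more detail: since $k-(M_{\epsilon}+1)p = -N-\alpha -\epsilon $, we may write
$$
Q_{\epsilon} = (M_{\epsilon})^p\, \frac{I_{\epsilon}}{\beta +I_{\epsilon}} ,
\qquad
I_{\epsilon} := \int_{\mathbb{R}^N_+ \backslash B_1} x_N ^{\alpha } |x|^{-N-\alpha -\epsilon }\, dx ,
$$
and $I_{\epsilon}\to \infty $ as $\epsilon \to 0$ by Lemma \ref{lemma:integrability w times power}, while $M_{\epsilon}\to M_0 $, so that $Q_{\epsilon}\to (M_0 )^p $.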
This proves the second equality in (\ref{constant}). The first equality in (\ref{constant}) follows from the fact that the approximating functions $u_{\epsilon}$ are radial.
\textit{Step 3.} Let us now show that there is no nontrivial function satisfying equality in \eqref{eq:theorem:Hardy with weight}.
\\
Assume that equality holds in (\ref{eq:theorem:Hardy with weight}). Then there holds equality in (\ref{ineq1hardy}) and (\ref{ineq2hardy}). Hence we must have
\begin{eqnarray}
\label{identity3hardy}
& & -u(x) \langle x, \nabla u(x)\rangle =|u(x)||x|\, |\nabla u(x)| \quad \mbox{and}
\\
\label{identity4hardy}
& & \frac{|u(x)|}{|x|} = \frac{p}{N-p+k+\alpha } \, |\nabla u(x)| \quad \mbox{for a.e. $x\in \mathbb{R}^N _+ .$}
\end{eqnarray}
An integration of this leads to
\begin{equation}
\label{u=}
u(x) = |x|^{-(N-p+k+\alpha )/p} h\left( x|x|^{-1} \right) ,
\end{equation}
with a measurable function $h: \mathbb{S} ^{N-1} _+ \to \mathbb{R}$.
Since $|x|^{-1} u\in L^p (\mathbb{R}^N _+, d\mu_{k,\alpha }) $, this implies that $h=0$ a.e. on $\mathbb{S}^{N-1} _+ $. The claim is proved.
$\hfill \Box$
\noindent
\begin{lemma}
\label{lemma:integrability w times power}
Let $\delta >0$. Then
\begin{eqnarray*}
\mbox{(i)} & &
\int_{B_1 ^+ }x_N ^{\alpha } |x|^{-N -\alpha +\delta }\, dx <\infty, \quad \mbox{ and }
\\
\mbox{(ii)} & &
\int_{\mathbb{R}^N_+ \backslash B_1} x_N ^{\alpha } |x|^{-N -\alpha -\delta }\, dx <\infty.
\end{eqnarray*}
Further, there holds
$$
\mbox{(iii)} \qquad \lim_{\delta \to 0^+ }\int_{\mathbb{R}^N_+ \backslash B_1} x_N ^{\alpha }|x|^{-N -\alpha -\delta }\, dx =\infty.
$$
\end{lemma}
{\sl Proof: }
We use $N$-dimensional spherical coordinates to show that
\begin{align*}
\int_{B_1 ^+ } x_N ^{\alpha } |x|^{-N -\alpha +\delta }\, dx =&\int_{\mathbb{S}^{N-1}_+ }\left(
\int_0^1 \left( \frac{x_N}{|x|}\right) ^{\alpha } r^{-1+\delta } dr \right)d\mathcal{H}^{N-1}(x)
\smallskip \\
=&\int_{\mathbb{S}^{N-1}_+ } \left(\frac{x_N}{|x|}\right) ^{\alpha }d\mathcal{H}^{N-1}(x)\left(\int_{0}^1 r^{-1+\delta }dr\right).
\end{align*}
From this (i) follows. (ii) and (iii) follow similarly.
$\hfill \Box$
\\[0.1cm]
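As an independent numerical sanity check of \eqref{eq:theorem:Hardy with weight} (a sketch, not part of the proof), the script below evaluates both sides for the illustrative choice $N=2$, $\alpha =1$, $k=0$, $p=2$ and the radial trial function $u(x)=e^{-|x|^2}$ on the half-plane; the predicted constant is $\left( \frac{N-p+k+\alpha }{p}\right) ^p =\frac{1}{4}$. The cutoff and grid size are parameters of this sketch.

```python
import math

# Numerical sanity check (a sketch, not part of the proof) of the weighted
# Hardy inequality with the illustrative parameters N = 2, alpha = 1, k = 0,
# p = 2, for the radial trial function u(x) = exp(-|x|^2) on R^2_+.
# Predicted constant: ((N - p + k + alpha)/p)^p = 1/4.

def trapz(f, a, b, n=100000):
    """Composite trapezoidal rule on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

R = 12.0       # radial cutoff; the integrands decay like exp(-2 r^2)
angular = 2.0  # \int_0^pi sin(t) dt, the angular factor of the weight x_2

# Left-hand side: \int |grad u|^2 x_2 dx = angular * \int_0^R (u'(r))^2 r^2 dr
lhs = angular * trapz(lambda r: (2.0 * r * math.exp(-r * r)) ** 2 * r * r, 0.0, R)

# Right-hand side: (1/4) \int (u^2 / |x|^2) x_2 dx
#                = (1/4) * angular * \int_0^R exp(-2 r^2) dr
rhs = 0.25 * angular * trapz(lambda r: math.exp(-2.0 * r * r), 0.0, R)

print(lhs, rhs, lhs >= rhs)
```

Both integrals reduce to one-dimensional radial integrals because $u$ is radial, so the angular factor $\int_0^{\pi}\sin t \, dt =2$ can be pulled out.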
\hspace*{1cm} From now on let us assume that
\begin{equation}
\label{maincase}
1<p<q \left\{
\begin{array}{ll}
\leq p^* & \mbox{ if }\ p<N+\alpha
\\
<+\infty & \mbox{ if } \ p\geq N+\alpha
\end{array}
\right.
.
\end{equation}
We begin with the following
\begin{lemma}
\label{CKN}
Assume that $a, b, p,q,N$ and $ \alpha $ satisfy the conditions (\ref{CKNassump1}) and (\ref{maincase}).
Further, assume that there exist real numbers $k$ and $l$ which satisfy $l+N+\alpha >0$ and one of the conditions
{\bf (i)}-{\bf (iii)} of Theorem \ref{maintheorem}, and such that
\begin{eqnarray}
\label{akl}
& & ap = kp + l(1-p) \ \mbox{ and }
\\
\label{bq<l}
& & bq \leq l.
\end{eqnarray}
Then (\ref{S=S_rad}) holds.
\end{lemma}
{\sl Proof:} Let $u\in {\mathcal D} ^{1,p} (\mathbb{R} ^N _+, d\mu_{ap, \alpha } )\setminus \{ 0\} $,
and let $u^{\star} $ be the $\mu_{l,\alpha} $-symmetrization of $u$.
Then we have by Theorem \ref{ps} and (\ref{akl}),
\begin{equation}
\label{ps1}
\int_{\mathbb{R} ^N _+ } |x|^{ap} |\nabla u| ^p x_N ^{\alpha } \, dx \geq
\int_{\mathbb{R} ^N _+ } |x|^{ap} |\nabla u^{\star}| ^p x_N ^{\alpha } \, dx.
\end{equation}
Further, it follows from (\ref{hardylitt1}) and (\ref{bq<l})
that
\begin{equation}
\label{bqint}
\int_{\mathbb{R} ^N _+ } |x|^{bq} | u| ^q x_N ^{\alpha } \, dx \leq
\int_{\mathbb{R} ^N _+ } |x|^{bq} | u^{\star}| ^q x_N ^{\alpha } \, dx.
\end{equation}
Finally, (\ref{ps1}) together with (\ref{bqint}) yield
\begin{equation}
\label{E>E*}
E_{a,p,q,N,\alpha } (u) \geq E_{a,p,q,N,\alpha } (u^{\star} ),
\end{equation}
and the assertion follows.
$\hfill \Box $
\\[0.1cm]
Now we define
\begin{eqnarray}
\label{def_a1}
a_1 & := & \frac{N+\alpha -1}{q-\frac{q}{p} +1} +1 -\frac{N+\alpha }{p}, \ \ \mbox{ and }
\\
\label{def_a2}
a_2 & := &
\frac{N+\alpha -1}{(q- \frac{q}{p} +1 )\sqrt{ (N+\alpha )( \frac{1}{p} -\frac{1}{q})}} +1 -\frac{N+\alpha }{p} .
\end{eqnarray}
Observe that the conditions (\ref{maincase}) imply that
\begin{equation}
\label{a2>a1>0}
a_2\geq a_1
\geq 0,
\end{equation}
and equality in the two inequalities holds iff $p<N+\alpha $ and $q=p^* $.
\\
Moreover, an elementary calculation shows that
\begin{eqnarray}
\label{a1cond}
a_1 & = &
\max \Big\{ a: \, a= k + l\left( \frac{1}{p} -1\right) , \ bq\leq l ,
\\
\nonumber
& & \qquad \qquad
-N-\alpha < l \leq k \frac{N+\alpha }{N+\alpha -1 } \leq 0 \Big\}
\quad \mbox{ and }
\\
\label{a2cond}
a_2 & = &
\max \Big\{ a: \, a= k + l\left( \frac{1}{p} -1\right) , \ bq\leq l ,
\ k\geq 0,
\\
\nonumber
& & \qquad \qquad 0< l+ N+\alpha \leq \frac{ (k+N+\alpha -1)^3}{(k+N+\alpha -1)^2 - \frac{(N+\alpha -1)^2 }{N+\alpha } } \Big\} .
\end{eqnarray}
The main result of this section is the following
\begin{theorem}
\label{best_a}
Assume that (\ref{maincase}) holds.
Then we have
\begin{equation}
\label{s=s*}
S_{a,p,q,N,\alpha } = S_{a,p,q,N,\alpha } ^{rad} \qquad \forall a\in \Big(
1-\frac{N+\alpha }{p} ,a_2 \Big].
\end{equation}
\end{theorem}
{\sl Proof: } Let $a \in \Big(
1-\frac{N+\alpha }{p} ,a_2 \Big]$. We define
\begin{eqnarray}
\label{l}
l & := & q \left( a+ \frac{N+\alpha }{p} -1 \right) -N -\alpha , \quad
\mbox{and }
\\
\label{k}
k & := & \left( 1+ q-\frac{q}{p} \right) \left( a+ \frac{N+\alpha }{p} -1 \right) -N-\alpha +1 .
\end{eqnarray}
This implies
\begin{eqnarray*}
a & = & k+l \left(\frac{1}{p} -1 \right) ,
\\
bq & = & l \quad \mbox{and}
\\
l+ N+\alpha & = & \frac{k+ N+\alpha -1 }{ \frac{1}{q} -\frac{1}{p} +1} >0 .
\end{eqnarray*}
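Indeed, setting $s:= a+\frac{N+\alpha }{p}-1 $, the definitions (\ref{l}) and (\ref{k}) read $l+N+\alpha =qs $ and $k+N+\alpha -1 =\left( 1+q-\frac{q}{p}\right) s $, so that
$$
k+l\left( \frac{1}{p}-1\right)
= \left( 1+q-\frac{q}{p}\right) s +\left( \frac{1}{p}-1\right) qs +1-\frac{N+\alpha }{p}
= s+1-\frac{N+\alpha }{p} = a ,
$$
while $bq = q\left( a-1+\frac{N+\alpha }{p}\right) -N-\alpha = qs-N-\alpha = l $, and the quotient $\frac{k+N+\alpha -1}{l+N+\alpha } = \frac{1}{q}-\frac{1}{p}+1 $ is positive by (\ref{maincase}).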
Now we split into two cases:
\\[0.1cm]
{\bf 1.} Let $a\leq a_1 $.
\\
Then
$$
k\leq 0,
$$
and since $q\leq p^* $ if $p< N+\alpha $ and $q<+\infty $ otherwise, we have
\begin{eqnarray*}
l\frac{N+\alpha -1}{N+\alpha } -k
& = & (k+N+\alpha -1)
\frac{
-\frac{1}{N+\alpha } -\frac{1}{q} +\frac {1}{p} }{ \frac{1}{q}-\frac{1}{p} +1}
\\
& \leq & 0.
\end{eqnarray*}
Hence we are in case {\bf (ii)} of Theorem \ref{maintheorem}, so that the assertion follows by Lemma \ref{CKN}, for $a \leq a_1 $.
\\[0.1cm]
{\bf 2.} Next let $a_1 \leq a\leq a_2 $.
\\
This implies
\begin{eqnarray}
\nonumber
k & \geq & 0 \quad \mbox{and }
\\
\label{estk}
k+ N+\alpha -1 & \leq &
\frac{N+\alpha -1}{\sqrt{ (N+\alpha ) \left( \frac{1}{p}-\frac{1}{q} \right) } } .
\end{eqnarray}
Now, from (\ref{estk}) we deduce
\begin{eqnarray*}
& & l+N+\alpha - \frac{
(k+N+\alpha -1 ) ^3
}{
(k+N+\alpha -1 )^2 - \frac{(N+\alpha -1)^2 }{N+\alpha }
}
\\
& = & \frac{
(k+N+\alpha -1)
\left(
(k+N+\alpha -1)^2
\left( \frac{1}{p}-\frac{1}{q}
\right)
- \frac{(N+\alpha -1)^2 }{N+\alpha } \right)
}{
\left(
\frac{1}{q} -\frac{1}{p}+1
\right)
\left(
(k+N+\alpha -1)^2- \frac{(N+\alpha -1)^2 }{N+\alpha }
\right)
}
\\
& \leq & 0.
\end{eqnarray*}
Hence we are in case {\bf (iii)} of Theorem \ref{maintheorem}, so that the assertion follows again by Lemma \ref{CKN} .
$
\hfill \Box$
\\[0.1cm]
{\bf Remark 6.1:} The characterizations (\ref{a1cond}) and (\ref{a2cond}) and the inequalities (\ref{a2>a1>0}) show that
the bound $a_2 $
cannot be improved using our method.
\\[0.1cm]
Finally we evaluate the constants $S_{a,p,q,N,\alpha } ^{rad} $
and the corresponding radial minimizers.
\\
For any radial function $v\in C_0 ^{\infty }
(\mathbb{R}^N ) \setminus \{ 0\} $, it is easy to check the following equality
$$
E_{a,p,q,N, \alpha } (v)
=
\left[B\left( \frac{
N-1}{2},\frac{\alpha +1}{2}\right)
\right]^{1-\frac pq}
\frac{
\pi ^{\frac{N-1}{2}\frac{q-p}{q}}}{
\left[\Gamma \left(
\frac{N-1}{2}\right)\right]^\frac{q-p}{q} }
\frac{\displaystyle\int_{\mathbb{R} ^N _+ } |x|^{ap+\alpha} |\nabla v|^p \, dx
}{
\left( \displaystyle\int_{\mathbb{R}^N _+ } |x|^{bq+\alpha} |v|^q \, dx \right) ^{p/q} } .
$$
Therefore by Theorem 1.4 in \cite{Musina}, we deduce that the function
$$
U(x)=\left(1+|x|^\frac{(N-p+ap+\alpha)(q-p)}{p(p-1)}\right)^\frac{p}{p-q}
$$
achieves the infimum of $E_{a,p,q,N, \alpha }$, that is $S_{a,p,q,N,\alpha } ^{rad}=E_{a,p,q,N, \alpha } (U)$.
\subsection{Problems in an orthant} Among the possible extensions of our isoperimetric results we would like to address a problem in an orthant with monomial weights.
Let $O_+ $ denote the orthant
$$
O_+ := \{ x\in \mathbb{R} ^N :\, x_i >0 , \, i=1,\ldots , N \} ,
$$
and let $a _1 , \ldots , a _N $ be positive numbers.
Using multi-index notation
we have
\begin{eqnarray*}
{\bf a } & := & (a _1 , \ldots , a _N ),
\\
| {\bf a } | & := & a _1 + \ldots + a _N ,
\\
x^{{\bf a}} & := & x_1 ^{a_1 } \cdots x_N ^{a_N } , \quad (x\in \mathbb{R}^N ).
\end{eqnarray*}
Following the lines of proof of Theorem 1.1 we obtain the following isoperimetric result. We leave the details to the reader.
\begin{theorem}
\label{secondmaintheorem}
Let $N\in \mathbb{N} $, $N\geq 2$,
$k,l \in \mathbb{R} $, ${\bf a} = (a _1 , \ldots , a _N ) $ where $a _i >0 $, ($i=1, \ldots ,N$), and $l+N+|{\bf a }| >0$.
Further, assume that one of the following conditions holds:
\\
{\bf (i)} $l+1\leq k $;
\\
{\bf (ii)} $k\leq l+1$ and $ l\frac{N+|{\bf a}| -1}{N+|{\bf a}| } \leq k\leq 0$;
\\
{\bf (iii)} $N\geq 2$, $ 0\leq k\leq l+1$ and
\begin{equation}\label{l_1N3new}
l\leq
\frac{(k+N+|{\bf a}| -1)^3 }{(k+N+|{\bf a}|-1)^2 - \frac{(N+|{\bf a}|-1)^2 }{N+|{\bf a}| } } -N -|{\bf a}| \,.
\end{equation}
\\
Then
\begin{equation}
\label{mainineqnew}
\displaystyle\int_{\partial \Omega } |x|^k x^{{\bf a}}\, {\mathcal H}_{N-1} (dx)
\geq
D
\left(
\displaystyle\int_{\Omega } |x|^l x^{{\bf a}}\, dx
\right)
^{(k+N+|{\bf a}|-1)/(l+N+|{\bf a}|) } ,
\end{equation}
for all smooth sets $\Omega $ in
$O_+ $,
where
\begin{eqnarray}
\label{defCklnew}
D= D(k,l,N, {\bf a} ) & := &
\frac{\displaystyle\int_{\partial B_1 } |x|^k x^{{\bf a}}\, {\mathcal H}_{N-1} (dx)}
{\left( \displaystyle\int_{B_1 \cap O_+ }
|x|^l x^{{\bf a}} \, dx \right ) ^{(k+N+|{\bf a}|-1)/(l+N+|{\bf a}|) } } .
\end{eqnarray}
Equality in (\ref{mainineqnew}) holds if $\Omega =B_R\cap O_+ $.
\end{theorem}
\section*{Acknowledgements}
The authors are grateful to Gyula Csat\' o, who kindly communicated to us a proof of a general Hardy type inequality, a particular case of which is Theorem \ref{hardysobolev}. The authors would like to thank the University of Naples Federico II and the South China University of Technology of Guangzhou for supporting their visiting appointments, and their colleagues for their kind hospitality.
\bigskip
\section{Introduction}
\IEEEPARstart{I}{mage} enhancement is one of the important domains of image processing; it aims to improve the appearance of an image for display or further analysis and is an important preprocessing step in most computer vision and image processing pipelines. Computer vision aims to automate the process of visual perception, and enhancement improves a vision system's ability to detect objects, resulting in an overall improvement in accuracy. Image enhancement transforms an image on the basis of the psychophysical characteristics of the human visual system\cite{i1}. Enhancement algorithms may be classified according to the objective achieved, such as filtering, noise removal, and edge and contrast enhancement. In particular, contrast manipulation techniques are applied to images to extract information that is not easily distinguished in low-contrast images, and are widely used to achieve a wider dynamic range\cite{i2}.
Histogram specification and histogram equalization are two of the best-known contrast enhancement algorithms, owing to their simplicity and effectiveness\cite{i3}. Histogram specification (HS) redistributes the gray levels of an input image such that the distribution of the transformed gray levels closely matches a given desirable probability density function (PDF). Histogram equalization (HE) is a special case of HS in which the desired PDF is a uniform distribution. HE improves the dynamic range of the image, but conventional HE is rarely used in consumer electronic products such as TVs and mobile phones due to its inability to preserve brightness\cite{i4}.
Several algorithms have been proposed to overcome HE's disadvantages\cite{i5,i6,i7,i8,i9,i10,i11,i12,i13}. Bi-histogram equalization (BBHE) belongs to a class of algorithms that aim to preserve the mean brightness of a histogram-equalized image\cite{i4}. BBHE decomposes an image into two sub-images based on the mean brightness of the image and performs HE on each sub-image\cite{i9}. Many algorithms have been proposed to improve upon BBHE, such as dualistic sub-image histogram equalization (DSIHE), minimum mean brightness error bi-histogram equalization (MMBEBHE) and recursive mean-separate histogram equalization (RMSHE)\cite{i10,i11,i12,i13}. In general, algorithms in this class do not modify the gray-level transformation function but instead modify the region(s) in which HE is applied.
Fuzzy logic has found many applications in image processing, pattern recognition, and related fields; the imprecision associated with images can be handled efficiently by applying fuzzy set theory. Several fuzzy techniques have been proposed for contrast enhancement\cite{i14,i15,i16,i17,i18}. Fuzzy histogram equalization (FHE) extends traditional HE by using a transformation based on fuzzy weighted values assigned to each gray level\cite{i15}. In\cite{i16}, the gray levels are transformed according to a parametrised fuzzy membership function (MF); however, the MFs take on a fixed form and hence may not be appropriate for all images. In\cite{i17}, the desired histogram is modelled as a single parametrised fuzzy number, and fuzzy inference rules and operations are used to tune this number and to decide the slope of the mapping function. Although the algorithm is automatic, tuning of the fuzzy number must be performed carefully for good generalization. In\cite{i18}, a fuzzy algorithm was proposed that considers both global and local information of the image using the fuzzy entropy principle.
In conventional histogram specification, the primary difficulty is obtaining a meaningful and appropriate transformation analytically. This ambiguity may be resolved by choosing a set of functions considered suitable and providing hyperparameters that improve the generality of the algorithm. In\cite{i19,i20}, modified cosine functions and genetic algorithms are used to find suitable transform functions. In\cite{i21}, the cumulative density function (CDF) of a given image is approximated as a piecewise-linear function and mapped to a PDF for use in HS. Dynamic histogram specification (DHS), proposed in\cite{i22}, generates an appropriate PDF using the gray levels (critical points) in the histogram whose first derivative is greater than a specific gain value and whose second derivative is equal to zero. DHS may enhance image contrast without losing the shape features of the original histogram. \par
Type-2 fuzzy sets are finding wide applicability in rule-based fuzzy logic systems (FLSs) because they can model uncertainties that type-1 fuzzy sets cannot. Despite carrying the connotation of uncertainty in its name, type-1 fuzzy logic has been shown to have limitations in its ability to model and minimize the effect of uncertainties\cite{i23,i24,i25}, because a T1 FS is certain in the sense that its membership grades are crisp values. Recently, type-2 FSs\cite{i26}, characterized by MFs that are themselves fuzzy, have been attracting interest. Interval type-2 (IT2) FSs, a special case of type-2 FSs, are currently the most widely used owing to their reduced computational cost. We use IT2 fuzzy sets to estimate an appropriate PDF for a given image; the selection of the PDF is automatic, in contrast to conventional histogram specification, where the PDF must be provided.
In this paper, we propose a simple fuzzy approach for the automatic generation of a suitable PDF from the input image for contrast enhancement using HS. The proposed algorithm works in five stages: symmetric Gaussian fitting on the input histogram, generation of the IT2 fuzzy MF and FOU, obtaining the membership value (MV), PDF generation, and histogram specification. The results are validated using the average information content (AIC) index.
The rest of the paper is organised as follows. Section II provides the prerequisites for this work, including a brief introduction to fuzzy sets, histogram equalization and histogram specification. Section III gives a detailed description of the proposed algorithm and a brief discussion of its salient features in five sub-sections. Experimental results and discussion can be found in Section IV. Lastly, Section V presents the conclusions of our work and further research possibilities.
\section{Background}
The concepts used extensively in this paper are fuzzy sets, histogram equalization and histogram specification. These fundamental building blocks are briefly described as follows:
\subsection{Type-1 and Type-2 Fuzzy sets}
In classical set theory, membership of an element $x$ belonging to a domain of discourse (universe) $U$ to a set $A$ may be represented as a binary function $\mathrm{\mu}_A(x)$ which is defined as in \eqref{first}
\begin{eqnarray} \label{first}
\mu_A(x) = \begin{cases}
1 & \text{if } x \in A \\ \\
0 & \text{if } x \not\in A
\end{cases}
\end{eqnarray}
where $\mu_A(x)$ is called the \textit{membership function}. Set $A$ (which can also be treated as a subset of $U$) is mathematically equivalent to its membership function $\mu_A(x)$, in the sense that knowing $\mu_A(x)$ is the same as knowing $A$ itself\cite{b1,b2}.\par
Zadeh\cite{b3} proposed a non-binary, function-theoretic representation of set membership as a mechanism for handling uncertainty. The membership function of such a fuzzy set represents the degree of membership of an element in the set. A type-1 fuzzy set (T1 FS) $A$ comprises a domain $U$ of the real numbers (called the \textit{universe of discourse} of $A$) together with a \textit{membership function} (MF) $\mu_A : U \to[0,1]$. For each value of $x$, $\mu_A(x)$ is the \textit{degree of membership}, or \textit{membership grade}, of $x$ in $A$. When $U$ is continuous, $A$ is written as \begin{eqnarray}
A = \int_U \mu_A(x)/x
\end{eqnarray}
Fig. \ref{fig_sim1}\subref{A T1 MF} shows an example of a type-1 fuzzy membership function. A detailed discussion of fuzzy sets and operations can be found in\cite{b2}.\par
A type-2 fuzzy set (T2 FS) $\widetilde{A}$ extends the concept of a T1 FS by introducing uncertainty in the membership grades. Mathematically, a \textit{T2 FS} (also called a \textit{GT2 FS}) is described by a bivariate function $\mu:X\times[0,1] \to [0,1]$, where $X$ is the universe of discourse for the \textit{primary variable} $x$ of $\widetilde{A}$. The 3D MF of $\widetilde{A}$ is usually denoted as $\mu_{\widetilde{A}}(x,u)$, where $x\in X$ and $u\in J_x \subseteq [0,1]$, that is \begin{eqnarray}
\widetilde{A} = \left\{ \left( (x,u),\mu_{\widetilde{A}}(x,u) \right) \, | \, \forall x\in X, \, \forall u\in J_x \subseteq[0,1] \right\}
\end{eqnarray}
in which $0\leq\mu_{\widetilde{A}}(x,u)\leq 1$. $\widetilde{A}$ can also be expressed as \begin{eqnarray}
\widetilde{A} = \int_{x\in X}\int_{u \in J_x}\mu_{\widetilde{A}}(x,u)/(x,u)\, , \quad \text{where } J_x \subseteq[0,1]\end{eqnarray} where
$\int\int$ denotes union over all admissible $x$ and $u$, and $J_x$ is the range of primary membership. For discrete universes of discourse, $\int$, $X$ and $U$ are replaced by $\Sigma$, $X_d$ and $U_d$ respectively. Fig. \ref{fig_sim1}\subref{A T2 MF} gives a graphical representation of a T2 MF. A vertical slice of the T2 MF at $x = x'$ defines the secondary membership function at that point, given by\cite{b2}
\begin{eqnarray}
\mu_{\widetilde{A}}(x = x', u) = f_{x'}(u) = \int_{u \in J_{x'}}\mu_{\widetilde{A}}(x',u)/u \, , \quad \text{where } J_{x'} \subseteq[0,1]
\end{eqnarray}
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .35]{Draftpics/fig1.png}
\label{A T1 MF}}
\hfil
\subfloat[]{\includegraphics[scale = .35]{Draftpics/fig2.png}
\label{A T2 MF}}
\caption{Membership function (a) Type-1, (b) Type-2}
\label{fig_sim1}
\end{figure}
Uncertainty in the primary membership of a T2 FS is termed the \textit{footprint of uncertainty} (FOU), and the bounds of the FOU are accordingly called the upper and lower MFs. A T2 FS which does not impose any additional constraints on the secondary membership function is known as a \textit{general T2 FS}. One class of MFs that has gained immense popularity due to its mathematical tractability and ease of computation is the \textit{interval type-2 FS} (IT2). The secondary MFs of IT2 FSs are interval sets, represented as
\begin{eqnarray}
\widetilde{A} = \int_{x\in X} \frac{\int_{u\in J_x} \frac{1}{u}}{x}\, , \quad \text{where } J_x \subseteq[0,1]\end{eqnarray}
A fuzzy logic system (FLS), or fuzzy inference system, defines a mapping from input data to a scalar-valued output using fuzzy rules\cite{b1}. Typically, an FLS consists of four major components: a fuzzifier, a fuzzy rule base, an inference engine, and a defuzzifier\cite{b2}. Fig. \ref{General Structure of a type-1 FLC} shows the block diagram of a T1 FLS. The non-fuzzy input values are converted to fuzzy variables by the fuzzifier, processed by the inference engine using the fuzzy rule base, and then converted back to crisp output value(s) by the defuzzifier. The defuzzifier may contain a \textit{type-reducer} for the conversion of higher-dimensional fuzzy sets (T2 or above) to T1 sets. Type reduction of a T2 fuzzy set is generally performed by computing the centroid of the set; Karnik and Mendel\cite{b2} developed an iterative algorithm (known as the \textit{KM algorithm}) for this computation. Simpler methods, such as computing the mean of the FOU of the primary MF, are available for IT2 FSs. \par
\begin{figure}
\centering
\includegraphics[scale=.65 ]{Draftpics/fig3.png}
\caption{{General Structure of a type-1 FLC}}\label{General Structure of a type-1 FLC}
\end{figure}
Type-1 FLSs have been developed and successfully implemented in different areas of engineering, for example in the linguistic evaluation of machine tools, the modelling of operational research problems, and fault detection in gearboxes\cite{b4}. \par
T1 FSs are crisp and precise in the sense that their MFs are assumed known perfectly, which is not usually the case. T2 FSs add a layer of uncertainty by making the MFs themselves fuzzy; the MFs are three-dimensional in this case. The shaded region in Fig. \ref{fig_sim1}\subref{A T2 MF} shows the cross-section of the FOU of a T2 MF. T2 FSs have been shown to be more effective than T1 FSs in the connection admission control (CAC) method in ATM networks\cite{b5}, prediction of the Mackey-Glass chaotic time series\cite{b6}, adaptive noise cancellation of signals\cite{b7}, and modelling of radiographic tibia image data using neuro-fuzzy clustering\cite{b8}, among several other applications.
\subsection{Histogram Equalization}
Histogram Equalization (HE) is a technique for adjusting image intensities to enhance contrast.
Let $I(x,y)$ be a given image, represented as a matrix of integer pixel intensities ranging from $0$ to $L-1$, where $L$ is the number of possible intensity values (gray levels), $256$ in our case. Let $p_n$ denote the normalized histogram of $I(x,y)$ with a bin for each possible intensity, so that \begin{eqnarray}
p_n = \frac{\text{number of pixels with intensity } n}{\text{total number of pixels}}\, , \quad n = 0,1,\cdots,L-1 \end{eqnarray}
The probability of occurrence of a gray level in the input image may be approximated by the PDF $P_I(g)$ as \begin{eqnarray} P_I(g) = \frac{n(g)}{n},\,\,\,\, g\in \{ g_0,\cdots,g_{L-1} \}
\end{eqnarray}
where $g$ is a gray level, $n(g)$ is the number of pixels with gray level equal to $g$, and $n$ is the total number of pixels in the image\cite{i3}. In HE, the transformation function $T(g_i)$ mapping an input gray level $g_i$ to an output level $g_f$ is given by \begin{eqnarray}\label{dynamicrange}
g_f = T(g_i)= \displaystyle{\sum\limits_{g \leq g_i} \frac{n(g)}{n}}
\end{eqnarray}
This transformation is also known as \textit{histogram linearisation}\cite{i3} and is mathematically equivalent to the \textit{cumulative frequency histogram}. It has been shown that, in the case of continuous gray-level values, linearisation of the input gray levels produces a uniform probability distribution of output gray levels, irrespective of the form of $P_I(g)$\cite{i3}. It has also been observed that the application of \eqref{dynamicrange} increases the dynamic range of the output image histogram by \textit{flattening} the histogram\cite{i4}. \par
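The discrete HE mapping above can be sketched in a few lines of NumPy. This is illustrative only, not the implementation used in this paper; the function name and the rescaling of $T(g_i)$ to the integer range $[0, L-1]$ are our own choices:

```python
import numpy as np

def equalize(image, L=256):
    """Histogram equalization via the cumulative mapping T(g),
    rescaled to the integer range [0, L-1]."""
    hist = np.bincount(image.ravel(), minlength=L)
    cdf = np.cumsum(hist) / image.size            # T(g) = sum_{g' <= g} n(g')/n
    levels = np.round((L - 1) * cdf).astype(int)  # stretch [0,1] back to gray levels
    return levels[image]                          # apply the lookup table

img = np.array([[0, 0, 1], [1, 2, 3]], dtype=np.uint8)
out = equalize(img, L=4)
```

The lookup table `levels` is monotone by construction, so the relative ordering of gray levels is preserved while the histogram is flattened.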
HE has the following disadvantages:
\begin{itemize}
\item Brightness alteration
\item Detail loss
\item Undesirable effects when applied to images of low colour depth
\item Indiscriminate enhancement: it may increase the contrast of background noise while decreasing the usable signal
\end{itemize}
\subsection{Histogram Specification}
Histogram Specification (HS) is a technique that transforms the gray levels of an input image $I(x,y)$ such that the histogram of the transformed image approximates a given PDF; HE is the special case of HS in which the desired PDF is uniform. For the desired $P_D(g)$, we define the following transformations: \begin{eqnarray}\label{trans9}
G(g_f) = \displaystyle{\sum\limits_{g \leq g_f}P_D(g)}
\end{eqnarray}\begin{eqnarray}\label{trans10}
T(g_i) = \displaystyle{\sum\limits_{g \leq g_i}\frac{n(g)}{n}}
\end{eqnarray}
where $g_f$ represents a gray level in the output image and $g_i$ represents a gray level in the input image; all other symbols have the same meaning as before. The mapping between the output levels and the input levels is given by the transformation \begin{eqnarray}\label{trans11}
g_f = G^{-1}(T(g_i))
\end{eqnarray}\par
The procedure of HS can be summarized as follows:
\begin{enumerate}
\item Using the desired PDF, obtain the transformation function $G(g_f)$ using \eqref{trans9}
\item Equalize the gray levels of the original image using \eqref{trans10}
\item Using \eqref{trans11}, map the input gray levels to a suitable output gray level.
\end{enumerate}
\par
Due to the strong dependence of the output image on the supplied PDF, it is crucial to select an appropriate PDF so as to improve contrast enhancement.
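The three-step HS procedure can be sketched as follows. This is a minimal NumPy illustration under our own naming, in which the inverse mapping $G^{-1}$ of \eqref{trans11} is approximated by a nearest-match search in the discrete CDF:

```python
import numpy as np

def specify(image, target_pdf, L=256):
    """Three-step HS: equalize the input (T), build the desired CDF (G),
    then map via a discrete approximation of g_f = G^{-1}(T(g_i))."""
    hist = np.bincount(image.ravel(), minlength=L)
    T = np.cumsum(hist) / image.size                             # step 2
    G = np.cumsum(target_pdf)                                    # step 1
    mapping = np.searchsorted(G, T, side="left").clip(0, L - 1)  # step 3
    return mapping[image]

img = np.array([0, 0, 1, 2, 3, 3], dtype=np.uint8)
out = specify(img, np.full(4, 0.25), L=4)  # uniform target reduces HS to HE
```

With a uniform `target_pdf`, this reduces to histogram equalization, as the text notes.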
\section{Proposed Method: Automatic Fuzzy Histogram Specification using IT2 FS}
The proposed algorithm consists of five stages: symmetric Gaussian fitting on the histogram, generation of the IT2 fuzzy MF and hence the footprint of uncertainty (FOU), obtaining the membership value (MV), generation of the desired PDF, and transformation of the input image histogram via histogram specification. The rest of the section explains the five stages in detail, followed by a concise summary of the proposed method.
\subsection{Symmetric Gaussian fitting on the input histogram}
The first stage consists of the following key steps:\\
\subsubsection{Input Histogram preprocessing}
The histogram of an image is smoothed using a moving average filter. Other available smoothing methods include using a symmetric window, such as a triangular window or a hyper-cube, or fitting the data with a smoothing spline[32]. The result is then normalized and used in further processing. Fig. \ref{gfit}\subref{HistogramOrgImage} shows the initial histogram of the image given in Fig. \ref{gfit}\subref{OriginalImage}, and the smoothed and normalized histogram is shown in Fig. \ref{gfit}\subref{Smooth Normal Histogram}\\
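As an illustration, the smoothing and normalization step might look like the following sketch (the function name and window length are our own choices):

```python
import numpy as np

def smooth_normalize(hist, window=5):
    """Moving-average smoothing followed by normalization to unit sum."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(hist, kernel, mode="same")  # moving average filter
    return smoothed / smoothed.sum()                   # normalize to a PDF

raw = np.bincount(np.array([0, 1, 1, 2, 2, 2, 3]), minlength=8).astype(float)
p = smooth_normalize(raw, window=3)
```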
\subsubsection{Gaussian fitting}
A general Gaussian is given by \eqref{generalgaussian}
\begin{eqnarray} \label{generalgaussian}
G(\textbf{x}) = a\cdot \exp\left(-\frac{1}{2}(\textbf{x} - \textbf{$\mu$})^T\Sigma^{-1}(\textbf{x}-\textbf{$\mu$})\right)
\end{eqnarray}
where $a$ is the height, $\mu$ is the mean vector, and $\Sigma$ is the covariance matrix. We can model a histogram having $n$ peaks as the sum of Gaussian functions as \begin{eqnarray}
\widetilde{G}(\textbf{x}) = \sum\limits_{i = 1}^{n}G_i(\textbf{x})
\end{eqnarray}
To approximate the smoothed histogram as Gaussian functions, the objective function to be minimised is \begin{eqnarray}
J(\textbf{p}_i) = \frac{1}{2}\left(\sum\limits_{i=1}^n G_i(\textbf{x}) - H(\textbf{x})\right)^2
\end{eqnarray}
where parameter vector $\textbf{p} = (a_i,\mu_i,\Sigma_i)$ is for the $i^{th}$ Gaussian function $G_i(\textbf{x})$ and $H(\textbf{x})$ is the smoothed histogram of the input data. We can apply gradient descent method to estimate the parameter vector $\textbf{p}_i$ which uses the following update rule- \begin{eqnarray}
{\textbf{p}_i}^{new} = {\textbf{p}_i}^{old} -\rho\frac{\partial J}{\partial\textbf{p}_i}
\end{eqnarray}
where $\rho$ is a positive learning constant\cite{p1}.\par
For Gaussian functions, the partial derivatives of $J$ with respect to each component of $\textbf{p}_i$ are \begin{eqnarray}
\frac{\partial J}{\partial a_i} = \left(\sum\limits_{j=1}^n G_j(\textbf{x}) - H(\textbf{x})\right)\cdot \exp\left(-\frac{1}{2}(\textbf{x}-\textbf{$\mu$}_i)^T{\Sigma_i}^{-1} (\textbf{x} - \textbf{$\mu$}_i)\right)
\end{eqnarray}
\begin{eqnarray}
\frac{\partial J}{\partial \textbf{$\mu$}_i} = \frac{1}{2}\left(\sum\limits_{j=1}^n G_j(\textbf{x})-H(\textbf{x})\right)\cdot G_i(\textbf{x})\cdot(\textbf{x}-{\textbf{$\mu$}_i})^T\cdot({\Sigma_i}^{-T} + {\Sigma_i}^{-1})
\end{eqnarray}
\begin{eqnarray}
\frac{\partial J}{\partial \textbf{$\Sigma$}_i} = \frac{1}{2}\left(\sum\limits_{j=1}^n G_j(\textbf{x})-H(\textbf{x})\right)\cdot G_i(\textbf{x})\cdot\left({\textbf{$\Sigma$}_i}^{-T}(\textbf{x}-\textbf{$\mu$}_i)\cdot{(\textbf{x}-\textbf{$\mu$}_i)}^T{\textbf{$\Sigma$}_i}^{-T}\right)
\end{eqnarray}
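For the one-dimensional histograms used in this work, the gradient-descent update with these derivatives reduces to the following sketch for a single Gaussian (illustrative only; the learning rate and iteration count are arbitrary choices of ours):

```python
import numpy as np

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_gaussian(x, h, a, mu, sigma, rho=1e-3, steps=5000):
    """Gradient descent on J = 0.5 * sum(G(x) - H(x))^2 for one 1-D Gaussian."""
    for _ in range(steps):
        g = gaussian(x, a, mu, sigma)
        r = g - h                                            # residual G - H
        grad_a = np.sum(r * g) / a                           # dJ/da
        grad_mu = np.sum(r * g * (x - mu)) / sigma**2        # dJ/dmu
        grad_sig = np.sum(r * g * (x - mu) ** 2) / sigma**3  # dJ/dsigma
        a, mu, sigma = a - rho * grad_a, mu - rho * grad_mu, sigma - rho * grad_sig
    return a, mu, sigma

x = np.arange(64, dtype=float)
h = gaussian(x, 0.8, 30.0, 5.0)  # synthetic smoothed histogram
a, mu, sigma = fit_gaussian(x, h, 1.0, 25.0, 8.0)
```

For a multi-peak histogram the same update is applied to each component's parameter vector $\textbf{p}_i$ in turn.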
\begin{figure}[!t]
\centering
\subfloat[]{\includegraphics[scale = .33]{Draftpics/lena.png}
\label{OriginalImage}}
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lena_hist.png}
\label{HistogramOrgImage}}
\hfil
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lena_histnorm.png}
\label{Smooth Normal Histogram}}
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lena_gaussfit.png}
\label{gaussfit}}
\caption{Illustration of Gaussian fitting : (a) input image - \lq{Lena}\rq, (b) histogram of input image, (c) smoothed and normalized histogram, (d) best-fit Gaussian}
\label{gfit}
\end{figure}
For gradient descent methods, the choice of initial parameter values is critical due to the local minima problem. Therefore, we use the following heuristic approach to obtain the initial parameters\cite{p1}.
\begin{enumerate}
\item Using least squares approximation, fit a polynomial function (PF) of the lowest possible degree (to avoid over-fitting) such that the fit to each smoothed histogram has a reasonably small error.
\item Calculate the extrema (maxima and minima) of the PF from Step 1 and determine the positions of all Gaussians from the positive maxima, ignoring those with small peaks.
\item Initialise the heights of the Gaussians with the maxima (peak) values and the means of the Gaussians with the locations of these peaks. Initialise the standard deviation of each Gaussian as the shortest distance between the mean of the Gaussian and the nearest minimum or root of the PF.
\item The mid-values of identified consecutive peaks are taken as the partition point(s) ${pp}_i$, which are used further in determining membership values.
\end{enumerate}
Fig. \ref{gfit}\subref{gaussfit} shows the Gaussian fitting over the given histogram.
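The heuristic initialization above might be sketched with NumPy's polynomial fitting as follows; the degree, peak threshold and fallback width are our own illustrative choices, not values from this work:

```python
import numpy as np

def init_gaussians(hist, degree=10, min_peak=0.05):
    """Heuristic initial (height, mean, sigma) per peak of a polynomial fit."""
    g = np.arange(len(hist), dtype=float)
    poly = np.polynomial.Polynomial.fit(g, hist, degree)  # step 1: least-squares PF
    d1, d2 = poly.deriv(1), poly.deriv(2)
    crit = [r.real for r in d1.roots()
            if abs(r.imag) < 1e-8 and 0 <= r.real < len(hist)]
    # step 2: keep maxima (second derivative < 0) with non-negligible height
    peaks = sorted(c for c in crit if d2(c) < 0 and poly(c) > min_peak)
    params = []
    for mu in peaks:  # step 3: height, mean, and width from nearest critical point
        others = [c for c in crit if c != mu]
        sigma = min((abs(mu - c) for c in others), default=len(hist) / 6.0)
        params.append((float(poly(mu)), float(mu), float(sigma)))
    return params

g = np.arange(64, dtype=float)
h = np.exp(-0.5 * ((g - 16) / 4.0) ** 2) + 0.7 * np.exp(-0.5 * ((g - 44) / 5.0) ** 2)
params = init_gaussians(h)
```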
\subsection{Generation of IT2 fuzzy MF and FOU}
The values of the histogram which lie above and below the best-fit curve $G(g)$ are identified. The values exceeding this curve are used to generate the upper MF (by Gaussian fitting) and, analogously, the values below the curve are used to generate the lower MF. For the LMF, a Gaussian is fitted to the function $L(g)$ obtained by \eqref{L(g)}:
\begin{eqnarray} \label{L(g)}
L(g) = \min\, \{G(g),H(g)\} =
\begin{cases}
H(g) & \text{if } H(g) < G(g) \\ \\
G(g) & \text{if } H(g) \geq G(g)
\end{cases}
\end{eqnarray}
Similarly, for the UMF, a function $U(g)$ is defined, to which a Gaussian is fitted, using \eqref{U(g)}\begin{eqnarray} \label{U(g)}
U(g) = \max\, \{G(g),H(g)\} = \begin{cases}
G(g) & \text{if } H(g) \leq G(g) \\ \\
H(g) & \text{if } H(g) > G(g)
\end{cases}
\end{eqnarray}
The upper and lower MFs are modelled as sums of Gaussian functions. Such a sum may be expressed as \begin{eqnarray} \label{F_tilda}
\widetilde{F}(x) = \sum\limits_{i=1}^N F_i(x),
\end{eqnarray}
where $F_i(x)$ is defined as in \eqref{eq of gaussian} and $N$ is the number of functions. \par
\begin{eqnarray}\label{eq of gaussian}
F(x) = a\cdot \exp\left(-\frac{1}{2}\frac{(x-\mu)^2}{{\sigma}^2}\right)
\end{eqnarray}
where $a$ is the height, $\mu$ is the center, and $\sigma$ is the standard deviation of the Gaussian. Thus, the upper MF (UMF) represents an upper-bound functional approximation of the image histogram, while the lower MF (LMF) represents a lower-bound approximation. Fig. \ref{fig_simfou}\subref{fou} shows the obtained footprint of uncertainty.\par
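Computing the two fitting targets $L(g)$ and $U(g)$ from the histogram $H$ and the best-fit curve $G$ is a pointwise min/max, for example:

```python
import numpy as np

def fou_envelopes(H, G):
    """Targets for the LMF and UMF: L(g) = min{G(g), H(g)}, U(g) = max{G(g), H(g)}."""
    lower = np.minimum(G, H)  # histogram clipped from above by the best-fit curve
    upper = np.maximum(G, H)  # histogram pushed up to at least the best-fit curve
    return lower, upper

H = np.array([0.30, 0.40, 0.30])  # toy smoothed histogram
G = np.array([0.20, 0.50, 0.30])  # toy best-fit curve
lower, upper = fou_envelopes(H, G)
```

Gaussian sums are then refitted to `lower` and `upper` to obtain the LMF and UMF bounding the FOU.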
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .45]{Draftpics/umfit.png}
\label{fitumf}}
\hfil
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lmfit.png}
\label{fitlmf}}
\caption{Illustration of Gaussian fitting for (a) the UMF and (b) the LMF}
\label{fig_sim12}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lena_FOU.png}
\label{fou}}
\hfil
\subfloat[]{\includegraphics[scale = .35]{Draftpics/lena_mf3D.png}
\label{FOU3d}}
\caption{Illustration of IT2 membership function : (a) footprint of uncertainty (FOU) and (b) the fuzzy MF}
\label{fig_simfou}
\end{figure}
We define the \textit{reach} of a function $F_i(x)$ as the interval in which the value attained by $F_i(x)$ is greater than or equal to that of all the other symmetric Gaussian functions. Mathematically, the \textit{reach} of the function $F_i(x)$ is the closed interval $[c_{i_1},c_{i_2}]$ such that \begin{eqnarray}\label{reach}
F_i(x) \geq F_j(x), \quad \forall x\in [c_{i_1},c_{i_2}],\quad j\in \{1,\cdots,N\}.
\end{eqnarray}
where the start and end points of each interval ($c_{i_1}$ and $c_{i_2}$) are called the \textit{crossover} values of the function. From \eqref{reach}, it can be deduced that two symmetric Gaussian functions defining the histogram can have overlapping reaches only at the crossover points.\par
We define the \textit{domain} of a gray level $d(g)$ as
\begin{eqnarray}\label{domain}
d(g) = \min\,\{ i \,|\, g \in [c_{i_1},c_{i_2}] \}
\end{eqnarray}
Thus, the \textit{domain} refers to the function in whose reach a particular gray level lies. For levels which lie in multiple reaches, we choose the smallest-index function whose reach contains that point; the domain at a crossover point is likewise taken as the minimum of the possible values. The notions of reach, crossover points and domain are used in the PDF generation described in the upcoming section. \par
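In code, the domain assignment $d(g)$ can be computed directly, since a componentwise argmax (with ties going to the smallest index) realises the reach and crossover conventions above. This sketch assumes each component is a symmetric Gaussian given as an $(a, \mu, \sigma)$ triple:

```python
import numpy as np

def domains(gaussians, L=256):
    """d(g): index of the component whose value at g is largest; ties go to
    the smallest index, matching the convention for overlapping reaches."""
    g = np.arange(L, dtype=float)
    vals = np.stack([a * np.exp(-0.5 * ((g - mu) / s) ** 2)
                     for a, mu, s in gaussians])
    return np.argmax(vals, axis=0)  # argmax returns the first maximal index on ties

d = domains([(1.0, 10.0, 3.0), (1.0, 30.0, 3.0)], L=40)
```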
The sections obtained after Gaussian fitting can be assigned fuzzy labels. For example, with two fitted Gaussians the levels can be classified as dark and bright, and hence the gray levels can be clustered efficiently.
\subsection{Obtaining Membership Value (MV)}
A fuzzy \textit{Membership Value} (MV) is defined as a transformation from a fuzzy MF to a real number. Since the proposed algorithm utilizes an IT2 MF, MV computation is performed on the upper and lower MFs using piecewise linear or affine functions, thereby saving the computational expense incurred for a general T2 MF. That is, the IT2 MV of a gray level is a pair of real numbers obtained by sequentially applying a linear or affine MV computation technique to the upper and lower MFs extracted in the previous stage. The formulation of the computation technique affects the manner in which the PDF is generated, as can be seen from \eqref{upperPDF}, \eqref{lowerPDF} and \eqref{PDFfinal}. Four possible methods for membership value computation are as follows:\\
\subsubsection{Fuzzy Membership Value- Point-wise Method}
This method is based on the difference between the values of the fuzzy MF (lower/upper) and the histogram at each gray level. The membership value of a gray level $g$ with respect to the $i^{th}$ membership function $F_i$ is given by
\begin{eqnarray}\label{pointwise mv1}
M_{PW}(g,i) = 1-|F_i(g)-H(g)|
\end{eqnarray}
where $H(g)$ is the histogram value. Thus, the membership value of a gray level decreases as the difference between the fuzzy MF (the histogram approximation) and the histogram increases. The PDF generation technique takes this into account by assigning higher probabilities to smaller values, which leads to a better distribution of gray levels in the output histogram. The overall membership value $M_v(g)$ for a particular (upper/lower) MF of a gray level $g$ is given by \begin{eqnarray}\label{pointwise mv2}
M_v(g) = M_{PW}(g,d(g))
\end{eqnarray}
where $d(g)$ is the domain of level $g$ (as defined in \eqref{domain}). It can easily be seen that $M_v \in [0,1]$. Fig. \ref{fig_simPW} shows the MV obtained using this method.\\
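A minimal sketch of the point-wise MV, assuming (our convention) that the component MF values are precomputed into an array `F` of shape (components, levels) and `d` holds the domain index of each gray level:

```python
import numpy as np

def pointwise_mv(F, H, d):
    """M_v(g) = 1 - |F_{d(g)}(g) - H(g)|."""
    g = np.arange(len(H))
    return 1.0 - np.abs(F[d, g] - H)  # pick component d(g) at each level g

F = np.array([[0.5, 0.2],   # component 0 evaluated at each gray level
              [0.1, 0.4]])  # component 1
H = np.array([0.4, 0.3])    # histogram values
d = np.array([0, 1])        # domain of each gray level
mv = pointwise_mv(F, H, d)
```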
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lena_lmv2.png}
\label{PW Upper MV}}
\hfil
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lena_umv2.png}
\label{Lower MV}}
\caption{Membership value using the point-wise method : (a) upper MV and (b) lower MV}
\label{fig_simPW}
\end{figure}
\subsubsection{Fuzzy Membership Value- Center-of-weights Method}
The weighted mean (also called the center-of-weight) of a given set of values represents the point at which the entire weighted set may be assumed to be concentrated\cite{p2}. Using this, the membership value may be defined as
\begin{eqnarray}\label{cowmv1}
M_v(g) = M_{CW}(d(g))
\end{eqnarray}
where $d(g)$ is the domain of gray level $g$ and the function $M_{CW}(i)$ is defined as \begin{eqnarray}\label{cowmv2}
M_{CW}(i) = F_i(\overline{g_i})
\end{eqnarray}
where
\begin{eqnarray}\label{cowmv3}
\overline{g_i} = \frac{\sum\limits_{g = 0}^{L-1}min\{F_i(g),H(g)\}\cdot g}{\sum\limits_{g = 0}^{L-1}min\{F_i(g),H(g)\}}
\end{eqnarray}
Equation \eqref{cowmv3} defines $\overline{g_i}$ as the center-of-weight of the area of overlap between the histogram and the component function $F_i$ of the upper or lower MF, and the function $M_{CW}(i)$ computes the membership value of $\overline{g_i}$ in \eqref{cowmv2}. Since the component membership function $F_i$ is a symmetric Gaussian, the value of $M_{CW}(i)$ decreases as the distance between $\overline{g_i}$ and the center $\mu_i$ of the symmetric Gaussian increases. Equations \eqref{cowmv1} and \eqref{cowmv2} together imply that the membership value of all gray levels in the reach of $F_i$ is given by $M_{CW}(i)$. Hence, localized equalization of the input histogram may be performed according to an adjusted value (the fuzzy MV), because the PDF generation method assigns uniform probability to all gray levels in the reach interval. Fig. \ref{fig_simCW} shows the MV obtained using this method.\\
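The center-of-weights computation of \eqref{cowmv3} and \eqref{cowmv2} can be sketched as follows; the interpolation used to evaluate $F_i$ at the non-integer $\overline{g_i}$ is our own choice:

```python
import numpy as np

def cow_mv(F_i, H):
    """M_CW(i): F_i evaluated at the center-of-weight of its overlap with H."""
    g = np.arange(len(H), dtype=float)
    w = np.minimum(F_i, H)             # overlap weights
    g_bar = np.sum(w * g) / np.sum(w)  # center-of-weight of the overlap
    return np.interp(g_bar, g, F_i)    # F_i(g_bar)

F_i = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)  # symmetric component MF
mv = cow_mv(F_i, F_i.copy())  # histogram identical to the component -> mv near the peak
```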
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lena_umv3.png}
\label{CW Upper MV}}
\hfil
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lena_lmv3.png}
\label{CW Lower MV}}
\caption{Membership value using the center-of-weights method : (a) upper MV and (b) lower MV}
\label{fig_simCW}
\end{figure}
\subsubsection{Fuzzy Membership Value- Area Method}
The area method is based on the extent of overlap between a particular gray level's domain function and the normalized histogram values. The membership value of a gray level $g$ is obtained using \begin{eqnarray}\label{areamv1}
M_v(g) = M_A(d(g))
\end{eqnarray}
where $d(g)$ is the domain of the level $g$ and the function $M_A(i)$ is defined as \begin{eqnarray}\label{areamv2}
M_A(i) = \frac{\sum\limits_{g = 0}^{L-1}min\{F_i(g),H(g)\}}{\sum\limits_{g = 0}^{L-1}F_i(g)}
\end{eqnarray}
Equation \eqref{areamv2} shows that the fuzzy membership value increases as the area of overlap between $F_i(g)$ and $H(g)$ increases. The nature of this increase depends on the relative magnitude of the overlap and can be used for controlling the PDF formation.\par
If the best-fit Gaussian function is not a good approximation of the histogram in the neighbourhood of gray level $g$, the overall magnitude of the membership function is small. However, if the function sufficiently approximates the histogram over its domain, the obtained membership values tend to be large. Fig. \ref{fig_simAr} shows the upper and lower MV obtained from this method.\\
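The area-based MV of \eqref{areamv2} is essentially a one-liner over the discrete gray levels:

```python
import numpy as np

def area_mv(F_i, H):
    """M_A(i) = sum_g min{F_i(g), H(g)} / sum_g F_i(g)."""
    return np.sum(np.minimum(F_i, H)) / np.sum(F_i)

H = np.array([0.1, 0.2, 0.3])
full = area_mv(H, H)      # perfect overlap -> membership value 1
half = area_mv(2 * H, H)  # component twice as tall -> overlap is half its area
```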
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lena_umv4.png}
\label{Ar Upper MV}}
\hfil
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lena_lmv4.png}
\label{Ar Lower MV}}
\caption{Membership Value using area method : (a) upper MV and (b) lower MV}
\label{fig_simAr}
\end{figure}
\subsubsection{Fuzzy Membership Value- KM Algorithm}
We aim to divide the given histogram into $n$ appropriate divisions, which we call \textit{clusters}. We find the centroids $v_j$ $(j\in\{1,\cdots,n\})$ of these clusters using the KM algorithm, followed by type reduction from type-2 to type-1, as described in this section \cite{p3}.\par
To obtain $v_j$ (see Fig. \ref{fig_simKarMen}\subref{Center of IT2 fuzzy set}),
Karnik \textit{et al.} suggested an iterative algorithm which estimates both ends of an interval, that is, the left value $v_L$ and the right value $v_R$ of each desired centroid $v_j$. The pattern set is arranged in ascending order, and the switch point between the upper and lower memberships is located for this ordered set; an ascending-ordered pattern set is required to represent the left value $v_L$ and the right value $v_R$ of an interval cluster center. We apply the iterative algorithm to estimate the cluster center: it first estimates $v_R$ and $v_L$ for every centroid, from which $v_j$ can be calculated easily \cite{p3}.
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .6]{Draftpics/KM_exp.png}
\label{Center of IT2 fuzzy set}}
\hfil
\subfloat[]{\includegraphics[scale = .45]{Draftpics/typereduckm.png}
\label{MV_KM}}
\caption{Illustration of Karnik-Mendel method : (a) center of IT2 fuzzy set, (b) type-reduction}
\label{fig_simKarMen}
\end{figure} \par
\begin{algorithm}
\caption{Finding maximum of center $v_j$ : $v_R$}\label{algo}
\begin{algorithmic}[1]
\State Set fuzzifier $m$ to an arbitrary value (we chose $m=2$).
\State Compute centroid $v_j$ using \eqref{u(x_i)} and \eqref{compute centroid}
\begin{eqnarray}\label{u(x_i)}
u(x_i) = \frac{umf(x_i)+lmf(x_i)}{2}
\end{eqnarray}
\begin{eqnarray}\label{compute centroid}
v_j = \frac{\sum\limits_{i = 1}^N x_i\cdot {u(x_i)^m}}{\sum\limits_{i = 1}^N {u(x_i)^m}}
\end{eqnarray}
\State Sort pattern indices for all N patterns $(i=1,\cdots,N)$ in ascending order. Sorted feature = $x_1 \leq \cdots \leq x_N$
\State Set comparison = false;
\While {comparison = false}
\State Find interval index $k\,(1\leq k\leq N-1)$ such that $x_k\leq v_j\leq x_{(k+1)}$;
\ForAll {patterns}
\If {$(i\leq k)$}
\State Set primary membership $u_j(x_i)\,=\,lmf(x_i)$;
\Else
\State Set primary membership $u_j(x_i)\,=\,umf(x_i)$;
\EndIf
\EndFor
\State Compute maximum of center candidate $v_j'$ using the modified $u_j(x_i)$ obtained above;
\If {$(v_j = v_j')$}
\State Set comparison = true;
\Else
\State Set $v_j\,=\,v_j'$;
\EndIf
\EndWhile
\State Set $v_R\,=\,v_j'$
\end{algorithmic}
\end{algorithm}
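A compact Python sketch of Algorithm \ref{algo} may help fix ideas (an illustrative, unoptimized implementation under our assumptions; \texttt{x} holds the sorted patterns, and \texttt{lmf}/\texttt{umf} are the sampled lower and upper membership values at those patterns):

```python
import numpy as np

def km_right_end(x, lmf, umf, m=2, tol=1e-9, max_iter=100):
    # Sketch of Algorithm 1: iterate towards the right end v_R of the
    # interval centroid. x must be sorted in ascending order.
    u = (umf + lmf) / 2.0                      # Eq. (u(x_i))
    v = np.sum(x * u**m) / np.sum(u**m)        # initial centroid
    for _ in range(max_iter):
        k = np.searchsorted(x, v)              # x_k <= v <= x_{k+1}
        # lower memberships to the left of the switch point, upper to the right
        u = np.where(np.arange(len(x)) < k, lmf, umf)
        v_new = np.sum(x * u) / np.sum(u)      # candidate v_j'
        if abs(v_new - v) < tol:
            break                              # converged: v_R = v_j'
        v = v_new
    return v_new
```

Swapping the roles of \texttt{lmf} and \texttt{umf} in the \texttt{np.where} line yields $v_L$, as in Algorithm \ref{algo2}.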
\begin{algorithm}
\caption{Finding minimum of center $v_j$ : $v_L$ (replace the membership-assignment step of Algorithm \ref{algo} with the following)}\label{algo2}
\begin{algorithmic}[1]
\If {$(i\leq k)$}
\State Set primary membership $u_j(x_i)\,=\,umf(x_i)$;
\Else
\State Set primary membership $u_j(x_i)\,=\,lmf(x_i)$;
\EndIf
\end{algorithmic}
\end{algorithm}
\begin{figure}[H]
\centering
{\includegraphics[scale = .45]{Draftpics/mvkm.png}
\label{MV KM}}
\caption{Membership Value using Karnik-Mendel method}
\label{fig_simKMN}
\end{figure}
\par
Since we are working with IT2 fuzzy sets, a crisp value for the estimated center $v_j$ is simple to compute:
\begin{eqnarray}
v_j = \frac{v_L + v_R}{2}
\end{eqnarray}
Fig. \ref{fig_simKarMen}\subref{Center of IT2 fuzzy set} shows the center $v_j = v$ for one cluster. \par
In our proposed algorithm, we perform type reduction in order to estimate cluster centers. For this, the left memberships ${u_j}^L(x_i)$ and right memberships ${u_j}^R(x_i)$ for all patterns have already been estimated while obtaining $v_L$ and $v_R$, respectively. Therefore, type reduction can be achieved using ${u_j}^L(x_i)$ and ${u_j}^R(x_i)$ (\eqref{type reduction KM} and \eqref{TypeRed KM}), and hence the membership value of a gray level $g$ can be found using \eqref{MV KM}. Note that $C$ is the total number of clusters \cite{p3}.
\begin{eqnarray} \label{type reduction KM}
u_j(x_i) = \frac{{u_j}^L(x_i) + {u_j}^R(x_i)}{2}, \qquad \qquad j\,=\,1,\cdots, C
\end{eqnarray}
\begin{eqnarray}\label{TypeRed KM}
T(g) = \sum \limits_{j=1}^C u_j(g), \qquad \qquad g \in \text{cluster } j
\end{eqnarray}
\begin{eqnarray}\label{MV KM}
M_v(g) = T(v_j), \qquad\qquad \forall \, g \in \text{cluster } j
\end{eqnarray}
\subsection{PDF Generation}
Gray levels with a high frequency of occurrence concentrate near the peak(s) of the histogram of the image, which makes it difficult to differentiate between those levels. This problem can be solved by a desired PDF which assigns higher probability to gray levels that lie away from the peak(s) of the histogram; the assigned probability is directly proportional to the distance of the gray level from the peak(s).\par
\textbf{PDF for Point-wise, Center of weights and Area methods:}\par
We propose the following linear equation for generation of a suitable PDF for histogram specification using the fuzzy membership values (obtained through methods as described in the previous section). The un-normalized probability of occurrence ${P_D}^U(g)$ for the upper MF of a given gray level $g$ in the output histogram may be calculated as
\begin{eqnarray} \label{upperPDF}
{P_D}^U(g) = \begin{cases}
T + 2{M_v}^U(g)\left[\left(\frac{\mu_{d(g)}+c_{i_1}}{2}\right)-g\right] & \text{if } g < \mu_{d(g)} \\ \\
T - 2{M_v}^U(g)\left[\left(\frac{\mu_{d(g)}+c_{i_2}}{2}\right)-g\right] & \text{if } g \geq \mu_{d(g)}
\end{cases}
\end{eqnarray}
where $T$ is the largest gray level in the (normalized) image, i.e., $L-1$, ${M_v}^U(g)$ is the fuzzy membership value of $g$ calculated for the upper MF, ${\mu}_i$ is the center of the $i^{th}$ component membership function $F_i$, $d(g)$ is the domain function of gray level $g$, and $c_{i_1}$ and $c_{i_2}$ are the crossover points for $d(g)$.\par
In an analogous manner, the un-normalized probability of occurrence ${P_D}^L(g)$ for the lower MF may be obtained as
\begin{eqnarray}\label{lowerPDF}
{P_D}^L(g) = \begin{cases}
T + 2{M_v}^L(g)\left[\left(\frac{\mu_{d(g)}+c_{i_1}}{2}\right)-g\right] & \text{if } g < \mu_{d(g)} \\ \\
T - 2{M_v}^L(g)\left[\left(\frac{\mu_{d(g)}+c_{i_2}}{2}\right)-g\right] & \text{if } g \geq \mu_{d(g)}
\end{cases}
\end{eqnarray}
where ${M_v}^L(g)$ is the fuzzy membership value of $g$ calculated for the lower MF and all other variables have the same meaning as before. Due to the interval type-2 nature of the MF and the linear nature of the MV computation techniques, the lower and upper PDFs may be de-fuzzified without centroid computation, through a simple mean:
\begin{eqnarray}\label{PDFfinal}
P_D(g) = \frac{1}{2}\left({P_D}^U(g)+{P_D}^L(g)\right)
\end{eqnarray}
where $P_D(g)$ is the final un-normalized PDF. We note that in this case, the PDF itself represents a type-1 fuzzification of the gray level probabilities. The usage of crisp outputs is inherent in the HS algorithm, which samples the PDF at discrete gray levels. \par
Geometrically, \eqref{upperPDF} and \eqref{lowerPDF} each consider the distance between the gray level and the mean of the domain function's Gaussian peak and the crossover point, depending on which arm of the symmetric Gaussian the gray level lies on. This value is then scaled by the fuzzy membership value and shifted by an amount equal to the largest gray level in the image. Finally, we note that the distribution obtained using \eqref{PDFfinal} may be normalized to obtain a PDF $\widetilde{P_D}(g)$ suitable for histogram specification. \par
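A minimal Python sketch of \eqref{upperPDF}--\eqref{PDFfinal} (the scalar helper below evaluates one branch; the parameters \texttt{Mv}, \texttt{mu\_d}, \texttt{c1}, \texttt{c2} correspond to ${M_v}(g)$, $\mu_{d(g)}$, $c_{i_1}$, $c_{i_2}$, and \texttt{T} is the largest gray level):

```python
def pdf_value(g, Mv, mu_d, c1, c2, T=255):
    # Eqs. (upperPDF)/(lowerPDF): linear, distance-weighted PDF value for
    # gray level g; c1 and c2 are the left and right crossover points.
    if g < mu_d:
        return T + 2 * Mv * ((mu_d + c1) / 2.0 - g)
    return T - 2 * Mv * ((mu_d + c2) / 2.0 - g)

def combine(upper, lower):
    # Eq. (PDFfinal): de-fuzzify by a simple mean of upper and lower PDFs.
    return 0.5 * (upper + lower)
```

Calling \texttt{pdf\_value} with the upper and lower MVs and averaging via \texttt{combine} gives the un-normalized $P_D(g)$.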
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lena_pdf2.png}
\label{PDF PW}}
\hfil
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lena_pdf3.png}
\label{PDF COW}}
\hfil
\subfloat[]{\includegraphics[scale = .45]{Draftpics/lena_pdf4.png}
\label{PDF Ar}}
\hfil
\subfloat[]{\includegraphics[scale = .45]{Draftpics/pdfkm.png}
\label{PDF KM}}
\caption{Illustration of the suitable PDF generated for HS using : (a) point-wise, (b) center-of-weights, (c) area, and (d) KM method}
\label{fig_sim_ARKMPDF}
\end{figure}
\newpage
\textbf{PDF for the Karnik-Mendel method:}\par It can be found using concepts similar to those of the other three methods. The following formula gives the appropriate PDF
\begin{eqnarray}\label{KMPDF}
{P_D}(g) = \begin{cases}
T + 2{M_v}(g)\left[\left(\frac{v_j+start_j}{2}\right)-g\right] & \text{if } g < v_j \\ \\
T - 2{M_v}(g)\left[\left(\frac{v_j+end_j}{2}\right)-g\right] & \text{if } g \geq v_j
\end{cases}
\end{eqnarray}
where $v_j$ is the center of cluster $j$, and $start_j$ and $end_j$ are the first and last gray values of cluster $j$, which can easily be found using the partition point(s) obtained earlier.\\
\subsubsection*{Effect of Fuzzy MV Method on PDF Generation}
It is clear from \eqref{upperPDF}, \eqref{lowerPDF} and \eqref{PDFfinal} that the PDF depends crucially on the fuzzy membership values and hence on the value generation method. The proposed equations are linear in nature: they assign lower probability to gray levels that carry significant height in the histogram and higher probability to the others. This is justified because our aim is to enhance the portion of the image whose gray levels have little or no intensity; by assigning a higher PDF value to such gray levels, histogram specification makes the image clearer. Since the choice of the value computation method is left to the application, this provides flexibility in generating the output PDF and allows finer control over the distribution. For example, the \textit{point-wise} method assigns higher probabilities to less frequently occurring gray levels, leading to a better spread in the output histogram: if the difference is small, the membership value can be assigned a large value so that the histogram spreads over the gray levels more effectively. The \textit{center-of-weights} method manipulates closely spaced gray levels, thereby performing localized histogram equalization. The \textit{area} method assigns equal probability to closely spaced gray levels and also controls the degree of spreading between closely spaced regions; in general, it shows smaller variation in gray level spreading. Fig. \ref{fig_sim_ARKMPDF} shows the different PDFs obtained from the four proposed methods.
\subsection{Histogram Specification}
This is the final stage of our proposed algorithm: the usual histogram specification is performed using the PDF generated in the previous steps. Fig. \ref{fig_sim_EMARKM} shows the final results obtained from the four proposed methods. We note that the algorithm does not require any external input besides the input image during its execution. Therefore, the proposed algorithm can find suitable fuzzy MFs for a given image automatically, without prior knowledge of the number of MFs; consequently, it can find the proper desired PDF for each given image and obtain the contrast enhanced image automatically.
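The mapping step itself is classical histogram specification; a brief Python sketch (assuming an 8-bit single-channel integer image and the normalized target PDF $\widetilde{P_D}$ from the previous stage):

```python
import numpy as np

def histogram_specification(image, target_pdf, L=256):
    # Classical HS: match the source CDF to the CDF of the desired PDF.
    hist = np.bincount(image.ravel(), minlength=L).astype(float)
    src_cdf = np.cumsum(hist / hist.sum())
    tgt_cdf = np.cumsum(target_pdf / np.sum(target_pdf))
    # For each source level, pick the target level with the closest CDF value.
    mapping = np.searchsorted(tgt_cdf, src_cdf).clip(0, L - 1)
    return mapping[image]
```

When the target PDF equals the source histogram, the mapping reduces to the identity; a target concentrated at high gray levels pushes the output towards bright values.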
\subsection{Summary}
\begin{tabular}{*5l}
\toprule
Stage 1 & Smoothen and normalize the histogram of the input image. \\
& Find the appropriate partition to divide the histogram. \\
& Fit the symmetric Gaussian(s). \\
\midrule
Stage 2 & Generate the UMF and LMF and hence obtain the FOU.\\
\midrule
Stage 3 & Generate membership values from the FOU using one of four methods: \\
& - Point-wise method \\
& - Center-of-weights method \\
& - Area method \\
& - KM method \\
\midrule
Stage 4 & Generate the probability distribution function (PDF) using the provided formulae. \\
\midrule
Stage 5 & Apply histogram specification using the generated PDF. \\
\bottomrule
\end{tabular}\\ \par
In the described methodology, stage 1 smooths and normalises the histogram and fits the appropriate number of Gaussian(s) to divide the image pixels into clusters; for instance, pixels can be clustered into dark and bright. As the fitting is done over a normalised histogram, it can be regarded as calculating a membership function for each gray level. The UMF and LMF are then found to implement the interval type-2 fuzzy set and to obtain the membership values and the probability density function. The PDF equation is designed so that it gives high probability to gray levels having few pixels and low probability to gray levels having many pixels. This is how it enhances the image more accurately than histogram equalisation, which assigns equal probability to all gray levels.
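As an illustration of the smoothing and normalization in stage 1, a simple moving-average filter followed by peak normalization might look as follows (a sketch; the window size is an assumed parameter, and scaling the peak to 1 lets the result be read as a membership function over gray levels):

```python
import numpy as np

def smooth_normalize(hist, window=5):
    # Stage 1 sketch: moving-average smoothing of the histogram,
    # then scale so the peak value is 1.
    kernel = np.ones(window) / window
    smooth = np.convolve(np.asarray(hist, float), kernel, mode="same")
    return smooth / smooth.max()
```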
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .38]{Draftpics/PW_T2_G2_lena.png}
\label{EM PW}}
\hfil
\subfloat[]{\includegraphics[scale = .38]{Draftpics/CW_T2_G2_LENA.png}
\label{EM COW}}
\hfil
\\ \subfloat[]{\includegraphics[scale = .38]{Draftpics/AR_T2_G2_LENA.png}
\label{EM AR}}
\hfil
\subfloat[]{\includegraphics[scale = .30]{Draftpics/t2.png}
\label{EM KM}}
\caption{Enhanced Images after Histogram Specification : (a) point-wise, (b) center-of-weights, (c) area, and (d) KM method}
\label{fig_sim_EMARKM}
\end{figure}
\section{Experiment Results and Discussions}
In this section, we demonstrate the results obtained using the proposed algorithm. In addition, we compare the obtained results with three existing methods, namely Histogram Equalization (HE), Recursive
Mean-Separate Histogram Equalization (RMSHE) and Brightness Preserving Fuzzy Histogram Equalization (BPFHE). Quantitative analysis is done using the image quality index called \textit{Average Information Content} (AIC). The information content is related to the number of binary decisions required to find the information: the number of binary (yes/no) decisions required to find the correct element in a set of $N$ equally likely elements is
\begin{eqnarray}
n_q = \log_2 N = -\log_2 p
\end{eqnarray}
In general, the elements are not equally likely; they have different probabilities $p_i$. Tribus (1961) generalised the formula above by introducing the concept of the \textit{surprisal} $h_i$: \begin{eqnarray}
h_i = -\log_2 p_i
\end{eqnarray}
On this basis, Shannon introduced the \textit{uncertainty measure} (also called \textit{entropy}), which is the average of all surprisals $h_i$ weighted by their probabilities $p_i$: \begin{eqnarray}
H = \sum\limits_i p_ih_i = -\sum\limits_i p_i\log_2 p_i
\end{eqnarray}
Quantitative comparison of images was performed on the basis of the average information content (AIC) measure. For images, it can be written more precisely as: \begin{eqnarray}
AIC = -\sum\limits_{k = 0}^{L - 1} p(g_k)\log_2(p(g_k))
\end{eqnarray}
where $p(g_k)$ is the PDF value for the $k^{th}$ gray level. In general, AIC increases with the information content of the image; in other words, a higher score indicates a \textit{richer}, more detailed image. Table \ref{aic_value} shows the AIC values of the existing methods along with those of our proposed methods \cite{r1}. Note that the values written in bold represent the maximum AIC value among all described methods. \par
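The AIC of an image can be computed directly from its gray-level histogram; a short Python sketch:

```python
import numpy as np

def aic(image, L=256):
    # Average Information Content: Shannon entropy of the gray-level PDF.
    hist = np.bincount(np.asarray(image).ravel(), minlength=L).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                  # convention: 0 * log2(0) = 0
    return float(-(p * np.log2(p)).sum())
```

A constant image scores $0$; an image with all $L$ levels equally likely attains the maximum, $\log_2 L$.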
A bar graph shown in Fig. \ref{compAIC} depicts the comparison of AIC values of all methods.\par
We present the analysis of our algorithm and compare it with the three existing methods on three images: \lq Tank\rq , \lq Classroom\rq\ and \lq Vendovka\rq . As seen in Fig. \ref{tank orig and hist}\subref{Histogram tank}, the histogram of the image \lq Tank\rq\ is concentrated in the middle (near gray level $150$). As desired, the obtained PDFs (Fig. \ref{fig_simtank2}) take their least values around the $150^{th}$ gray level. This spreads the final histogram of the image to the left as well as the right of the $150^{th}$ gray level, enhancing the contrast of the original image.
The AIC value increases by $13\%$ compared to that of HE; the KM method works best for this image.\par
For the image \lq Classroom\rq , the histogram of the original image is concentrated below roughly the $70^{th}$ gray level (Fig. \ref{fig_simclass1}\subref{Histogram Class}). The PDFs obtained from our proposed algorithm take their least values around the $70^{th}$ gray level and increase as the gray level rises from $70$ to $255$ (Fig. \ref{fig_simclass3}). This spreads the histogram to the right, enhancing the contrast of the original image. The AIC value increases by $1.07\%$ compared to that of HE. Interestingly, the KM method works best for this image too.\par
The histogram of the image \lq Vendovka\rq\ is concentrated mostly around the $35^{th}$ gray level (Fig. \ref{fig_simven1}\subref{Histogram vendovka}). The obtained PDFs dip near the $40^{th}$ gray level and increase with the gray level (Fig. \ref{fig_simven3}). This spreads the histogram of the final image to the right, enhancing the image. The AIC value increases by $1.62\%$ in comparison to that of HE. However, contrary to the earlier cases, this image attains its best AIC value with the point-wise and center-of-weights methods.
\begin{figure}
\centering
\includegraphics[scale=.35]{Draftpics/meta-chart.png}
\caption{{Comparison of AIC value for all images}}\label{compAIC}
\end{figure} \par
\vspace*{1 cm}
\begin{table}[H]
\renewcommand{\arraystretch}{1.3}
\caption{Experimental AIC Values}
\label{aic_value}
\centering
\begin{tabular}{ M{2 cm}M{1.5cm} M{1.5cm} M{1.5cm} M{1.5cm} M{1.5cm} M{1.5cm} M{1.5cm} }
\hline \hline
{Image Name} & {HE} &{RMSHE}&{BPFHE} & {Point-Wise} & {\centering Center of Weights} &{Area} & {KM Algo.} \\
\hline
Lena &\small 5.9771 &\small 7.0824 &\small 7.2748 & \small 7.3034 &\small 7.3020 &\small \textbf{7.3035} & \small {7.3015}\\
{Classroom} &\small 5.7884 &\small 5.6951 &\small 5.3573 &\small 5.8476 &\small {5.8476} &\small 5.8476 &\small \textbf{5.8490}\\
{Wood} &\small 4.6507&\small 4.7357&\small 4.2906&\small 4.8248 &\small 4.8279&\small 4.8267&\small\textbf{4.8290}\\
{Vendovka} &\small 4.4804&\small 4.4704&\small 4.1791&\small\textbf{4.5527}&\small \textbf{4.5527}&\small{4.5485}&\small 4.5480\\
{Tank} &\small 5.3293&\small 5.8783&\small 5.8884&\small 6.0098&\small 6.0018&\small 6.0098&\small \textbf{6.0222}\\
{Park} &\small 5.6881 &\small 5.8592 &\small 5.6391&\small 6.0245&\small 6.0345&\small 6.0355&\small \textbf{6.0440}\\
{Hilly Area} &\small 4.6946 & \small 5.6468&\small 5.6507 &\small 5.6537 &\small{5.6397} &\small \textbf{5.6778} &\small 5.6499\\
{House} &\small 4.8299 &\small 4.9056 &\small 4.5675 &\small 4.9811 &\small 4.9904 &\small 4.9928 &\small\textbf{5.0093}\\
{Children} &\small 5.9743 &\small 7.0063 &\small 7.2546 &\small 7.3014 &\small 7.3039 &\small 7.3048 &\small\textbf{7.3070}\\
{Nature} &\small 5.7658&\small 6.5970 &\small 6.7286&\small 6.7435 &\small 6.7314 &\small \textbf{6.7756} &\small{6.7575}\\
\hline
\end{tabular} \par
\end{table}
\clearpage
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .72]{Draftpics/d.png}
\label{Original Tank}}
\subfloat[]{\includegraphics[scale = .6]{Draftpics/d_hist.png}
\label{Histogram tank}}
\caption{\lq{Tank}\rq image : (a) original image, (b) input histogram}
\label{tank orig and hist}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .72]{Draftpics/d_en2.png}
\label{PW Tank}}
\subfloat[]{\includegraphics[scale = .72]{Draftpics/d_en3.png}
\label{COW tank}}
\hfil
\subfloat[]{\includegraphics[scale = .72]{Draftpics/d_en4.png}
\label{ar tank}}
\subfloat[]{\includegraphics[scale = .72]{Draftpics/d_en5.png}
\label{km tank}}
\caption{Enhanced image of \lq{Tank}\rq for : (a) point-wise, (b) center-of-weights, (c) area, and (d) KM method}
\label{en_image}
\end{figure}
\newpage
\vspace*{2 cm}
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .58]{Draftpics/d_pdf2.png}
\label{PW tank pdf}}
\subfloat[]{\includegraphics[scale = .58]{Draftpics/d_pdf3.png}
\label{COW tank pdf}}
\hfil
\subfloat[]{\includegraphics[scale = .58]{Draftpics/d_pdf4.png}
\label{ar tank pdf}}
\subfloat[]{\includegraphics[scale = .58]{Draftpics/d_pdf5.png}
\label{km tank pdf}}
\caption{Generated PDF for \lq{Tank}\rq image using : (a) point-wise, (b) center-of-weights, (c) area, and (d) KM method}
\label{fig_simtank2}
\end{figure}
\newpage
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .72]{Draftpics/a.png}
\label{Original Class}}
\hfil
\subfloat[]{\includegraphics[scale = .6]{Draftpics/a_hist.png}
\label{Histogram Class}}
\caption{\lq{Classroom}\rq image : (a) original image, (b) input histogram}
\label{fig_simclass1}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .72]{Draftpics/a_en2.png}
\label{PW class}}
\subfloat[]{\includegraphics[scale = .72]{Draftpics/a_en3.png}
\label{COW class}}
\hfil
\subfloat[]{\includegraphics[scale = .72]{Draftpics/a_en4.png}
\label{ar class}}
\subfloat[]{\includegraphics[scale = .72]{Draftpics/a_en5.png}
\label{km class}}
\caption{Enhanced image of \lq{Classroom}\rq for : (a) point-wise, (b) center-of-weights, (c) area, and (d) KM method}
\label{fig_simclass2}
\end{figure}
\newpage
\vspace*{2 cm}
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .59]{Draftpics/a_pdf2.png}
\label{PW class pdf}}
\subfloat[]{\includegraphics[scale = .59]{Draftpics/a_pdf3.png}
\label{COW class pdf}}
\hfil
\subfloat[]{\includegraphics[scale = .59]{Draftpics/a_pdf4.png}
\label{ar class pdf}}
\subfloat[]{\includegraphics[scale = .59]{Draftpics/a_pdf5.png}
\label{km class pdf}}
\caption{Generated PDF for \lq{Classroom}\rq image using : (a) point-wise, (b) center-of-weights, (c) area, and (d) KM method}
\label{fig_simclass3}
\end{figure}
\newpage
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .42]{Draftpics/c.png}
\label{Original vendovka}}
\hfil
\subfloat[]{\includegraphics[scale = .45]{Draftpics/c_hist.png}
\label{Histogram vendovka}}
\caption{\lq{Vendovka}\rq image : (a) original image, (b) input histogram}
\label{fig_simven1}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .4]{Draftpics/c_en2.png}
\label{PW ven}}
\subfloat[]{\includegraphics[scale = .4]{Draftpics/c_en3.png}
\label{COW ven}}
\hfil
\subfloat[]{\includegraphics[scale = .4]{Draftpics/c_en4.png}
\label{ar ven}}
\subfloat[]{\includegraphics[scale = .4]{Draftpics/c_en5.png}
\label{km ven}}
\caption{Enhanced image of \lq{Vendovka}\rq for : (a) point-wise, (b) center-of-weights, (c) area, and (d) KM method}
\label{fig_simven2}
\end{figure}
\newpage
\vspace*{2 cm}
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale = .59]{Draftpics/c_pdf2.png}
\label{PW ven pdf}}
\subfloat[]{\includegraphics[scale = .59]{Draftpics/c_pdf3.png}
\label{COW ven pdf}}
\hfil
\subfloat[]{\includegraphics[scale = .59]{Draftpics/c_pdf4.png}
\label{ar ven pdf}}
\subfloat[]{\includegraphics[scale = .59]{Draftpics/c_pdf5.png}
\label{km ven pdf}}
\caption{Generated PDF for \lq{Vendovka}\rq image using : (a) point-wise, (b) center-of-weights, (c) area, and (d) KM method}
\label{fig_simven3}
\end{figure}
\clearpage
\section{Conclusion}
Contrast enhancement algorithms aid in extracting information which may not be easily distinguished in low contrast images. In this paper, we devised an automatic algorithm which extracts from the input image the desired probability distribution function (PDF) for histogram specification. The process involves the application of an interval type-2 fuzzy system and histogram specification. The main idea is to spread the histogram of the input image in such a way that the image becomes clear while the information content is not lost, a loss which is observed in existing algorithms such as Histogram Equalization (HE), Recursive Mean-Separate Histogram Equalization (RMSHE) and Brightness Preserving Fuzzy Histogram Equalization (BPFHE). \par
The proposed algorithm works in five stages, each of which addresses a particular aspect of contrast enhancement. Fuzzy MF extraction replaces histogram linearisation by fuzzifying discrete gray levels into a continuous spectrum, thereby overcoming the limitations imposed by discreteness and generating a functional approximation of the histogram. To obtain the footprint of uncertainty, we used Gaussian fitting, after which we proposed four methods to obtain the membership values and hence the desired PDF. Out of the four methods, MV extraction using the Karnik-Mendel (KM) method is found to be the best for most of the images. Significantly, the proposed algorithm is sensitive to local variations in the histogram and is also modular and flexible (due to the choice among MV computation techniques). The first stage and the MV computation may be computationally expensive but lead to significant improvements in the visual output and in a computer vision pipeline.\par
The proposed algorithm opens up several avenues of further research. For instance, non-linear generation of the PDF may improve smoothness in the gray level variation of the output histogram and preserve image detail. Moreover, we believe that the MV computation techniques may be considered early attempts at effectively generating general type-2 fuzzy MFs; thus, an extension of the first stage to higher dimensional (general type-2) fuzzy sets may prove beneficial. Computational complexity may be improved by optimizing the approximation of histograms or by providing an alternate framework for MV computation.\par
The proposed method can also be used as a major component of image enhancement in computer vision pipelines. We believe that applying the proposed automatic fuzzy PDF generation for HS in various detection and recognition systems may improve their performance.
\section*{Acknowledgment}
The authors would like to thank Duddu Sai Meher Karthik and Raghav Gulati (Students of Indian Institute of Technology, Guwahati, India) for giving some important suggestions. The authors thank the anonymous referees for their helpful suggestions for the improvement of the manuscript.
\section{Introduction}
Let $\mathcal{L}^\epsilon$ and $\mathcal{L}^0$ be linearized Boltzmann collision operators with and without angular cutoff, respectively. The present work aims at quantitative estimates for the asymptotic behavior of the operator $\mathcal{L}^\epsilon$ and its associated equation from angular cutoff to non-cutoff, which corresponds to the limit in which $\epsilon$ goes to zero. The main motivation comes from the fact that the following properties of the collision operator change completely in the limit process:
\begin{enumerate}
\item[(i)] For fixed $\epsilon$, $\mathcal{L}^\epsilon$ behaves like a damping term for the Boltzmann equation with angular cutoff while $\mathcal{L}^0$ behaves like a fractional Laplace operator for the equation without cutoff.
\smallskip
\item[(ii)] For moderate soft potentials ($\gamma\in [-2s,0)$), the operator $\mathcal{L}^\epsilon$ has no spectral gap for fixed $\epsilon$, but the limit point $\mathcal{L}^0$ of $\{\mathcal{L}^\epsilon\}_{\epsilon>0}$ does have one.
\end{enumerate}
Another motivation arises from the approximation problem for the Boltzmann equation: it is of great importance to find an asymptotic formula describing the limit for the nonlinear equation.
\subsection{The Boltzmann collision operator and its associated equation} We first introduce our basic assumptions and definitions on the Boltzmann collision operator and its associated equation.
\subsubsection{The Boltzmann collision operator}
The Boltzmann collision operator $Q$ is a bilinear
operator acting only on the velocity variables $v$, which is defined by,
\beno Q(g,h)(v)\eqdefa
\int_{\R^3}\int_{\SS^{2}}B(v-v_*,\sigma)(g'_*h'-g_*h)d\sigma dv_*.
\eeno Here we use the standard shorthand $h=h(v)$, $g_*=g(v_*)$,
$h'=h(v')$, $g'_*=g(v'_*)$ where $v'$, $v_*'$ are given by
\begin{eqnarray}\label{e3}
v'=\frac{v+v_{*}}{2}+\frac{|v-v_{*}|}{2}\sigma\ ,\ \ \
v'_{*}=\frac{v+v_{*}}{2}-\frac{|v-v_{*}|}{2}\sigma\
,\qquad\sigma\in\SS^{2}.
\end{eqnarray}
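As a quick check (expanding $|v'|^2$ and $|v'_*|^2$ with $|\sigma|=1$, the cross terms cancel and the parallelogram law applies), the relations \eqref{e3} conserve momentum and kinetic energy pointwise:
\beno
v'+v'_*=v+v_*,\qquad |v'|^2+|v'_*|^2=2\Big|\frac{v+v_*}{2}\Big|^2+2\Big|\frac{v-v_*}{2}\Big|^2=|v|^2+|v_*|^2.
\eeno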
The nonnegative function $B(v-v_*,\sigma)$ in the collision
operator is called the Boltzmann collision kernel. It is always
assumed to depend only on $|v-v_{*}|$ and $\langle\frac{v-v_{*}}{|v-v_{*}|},\sigma
\rangle$. It is convenient to introduce the angle variable $\theta$ through
$\cos\theta=\langle\frac{v-v_{*}}{|v-v_{*}|},\sigma
\rangle$. Without loss of generality, we may
assume that $B(v-v_{*},\sigma)$ is supported in the set
$0\leq\theta\leq\frac{\pi}{2}$, i.e.,
$\langle\frac{v-v_{*}}{|v-v_{*}|},\sigma
\rangle\ge0
$. Next we state some basic assumptions on the collision kernel.
\smallskip
(i). For the non-cutoff collision kernel, we assume that
\begin{itemize}
\item (A-1) The cross-section $B(v-v_{*},\sigma)$ takes a product form of
\beno B(v-v_{*},\sigma)= |v-v_{*}|^{\gamma}b(\cos\theta),\eeno
where $-3<\gamma \leq 2$ and $b$ is a nonnegative function satisfying
\begin{eqnarray*} K^{-1}\theta^{-1-2s} \le\sin\theta
b(\cos\theta)\le K
\theta^{-1-2s}, \quad
\mbox{with}\ 0<s<1,\ K \geq 1.\end{eqnarray*}
The parameters $\gamma$ and $s$ verify $\gamma + 2s > -1$.
\end{itemize}
For inverse repulsive potentials, one has
$\gamma=\frac{p-5}{p-1}$ and $s=\frac{1}{p-1}$. Generally, the cases $\gamma>0$, $\gamma=0$, and
$\gamma<0$ correspond to the so-called hard, Maxwellian, and soft
potentials, respectively. Then the inhomogeneous Boltzmann equation without cutoff reads:
\begin{eqnarray}\label{homb}\left\{ \begin{aligned}
&\partial _t F + v \cdot \nabla_{x} F=Q(F,F), ~~t > 0, x \in \mathbb{T}^{3}, v \in\R^3 ;\\
&F|_{t=0} = F_{0}.
\end{aligned} \right.
\end{eqnarray}
where $F(t,x,v)\geq 0$ is the distribution
function of the colliding particles which, at time
$t\geq 0$ and position $x \in \mathbb{T}^{3} \eqdefa[-\pi,\pi]^{3}$, move with velocity
$v\in\R^3$.
\medskip
(ii). For the cutoff collision kernel, we assume that
\begin{itemize}
\item (A-2) The cross-section $B^\epsilon(v-v_{*},\sigma)$ takes a product form of
\beno B^\epsilon(v-v_{*},\sigma)= |v-v_*|^\gamma b^\epsilon(\cos\theta),\eeno
with $b^{\epsilon}(\cos\theta) = b(\cos\theta)\,(1-\phi)(\sin\frac{\theta}{2}/\epsilon)$, where $0 <\epsilon \leq \frac{\sqrt{2}}{2}$ and $\phi$ is the function defined by \eqref{function-phi-psi}, which has support in $[0, 4/3]$ and equals $1$ on $[0, 3/4]$.
\end{itemize}
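A quick computation illustrates the effect of the cutoff: under (A-1) the angular kernel is not integrable, while $b^{\epsilon}$ vanishes for $\sin\frac{\theta}{2}\le \frac{3\epsilon}{4}$ (hence for $\theta\le \frac{3\epsilon}{2}$), so that
\beno
\int_0^{\pi/2} b(\cos\theta)\sin\theta\,d\theta \ \ge\ K^{-1}\int_0^{\pi/2}\theta^{-1-2s}\,d\theta=\infty, \qquad
\int_0^{\pi/2} b^{\epsilon}(\cos\theta)\sin\theta\,d\theta \ \le\ K\int_{3\epsilon/2}^{\pi/2}\theta^{-1-2s}\,d\theta\lesssim \epsilon^{-2s}.
\eeno
In particular, for each fixed $\epsilon$ the operator $Q^{\epsilon}$ splits into gain and loss terms, and the bound quantifies the blow-up rate as $\epsilon\to0$.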
Then the Boltzmann collision operator with angular cutoff and its associated equation are defined by
\beno Q^{\epsilon}(g,h)(v)\eqdefa
\int_{\R^3}\int_{\SS^{2}}B^{\epsilon}(v-v_*,\sigma)(g'_*h'-g_*h)d\sigma dv_*
\eeno
and
\begin{equation}\label{cutoffboltzmann} \left\{ \begin{aligned}
&\partial _t F + v \cdot \nabla_{x} F=Q^{\epsilon}(F,F), ~~t > 0, x \in \mathbb{T}^{3}, v \in\R^3 ;\\
&F|_{t=0} = F_{0}.
\end{aligned} \right.
\end{equation}
\smallskip
We remark that the solutions to \eqref{homb} and \eqref{cutoffboltzmann} have the fundamental physical properties of conserving total mass, momentum and kinetic energy, that is, for all $t\ge0$,
\ben\label{conserveq} \int_{\mathbb{T}^{3} \times \R^3} F(t,x,v)\phi(v)dxdv=\int_{\mathbb{T}^{3} \times \R^3} F(0,x,v)\phi(v)dxdv,\quad \phi(v)=1,v_{j},|v|^2, \quad j=1,2,3.\een
If initially $F_{0}(x,v)$ has the same mass, momentum and total energy as those of the global
Maxwellian $\mu(v) \eqdefa (2\pi)^{-\frac{3}{2}}e^{-\frac{|v|^{2}}{2}}$, then for any $t \geq 0$,
\beno \int_{\mathbb{T}^{3} \times \R^3} (F-\mu)(t)\phi dxdv=0, \quad \phi(v)=1,v_{j},|v|^2, \quad j=1,2,3.\eeno
\subsubsection{The linearized Boltzmann collision operator}
For the cutoff case, the linearized operator is defined by
\ben\label{DefLep}\qquad
\mathcal{L}^{\epsilon}g \eqdefa -\Gamma^{\epsilon}(\mu^{1/2},g) - \Gamma^{\epsilon}(g, \mu^{1/2}) \eqdefa \mathcal{L}^{\epsilon}_{1}g + \mathcal{L}^{\epsilon}_{2}g, \quad\mbox{where}\quad \Gamma^{\epsilon}(g,h) \eqdefa \mu^{-1/2} Q^{\epsilon}(\mu^{1/2}g,\mu^{1/2}h).
\een
For the non-cutoff case, the operator $\mathcal{L}^{0}$ is defined in the same manner as that for the cutoff case:
\ben\label{DefL}\qquad
\mathcal{L}^{0}g \eqdefa -\Gamma^{}(\mu^{1/2},g) - \Gamma^{}(g, \mu^{1/2}) \eqdefa \mathcal{L}^{0}_{1}g + \mathcal{L}^{0}_{2}g,
\quad\mbox{where}\quad \Gamma^{ }(g,h) \eqdefa \mu^{-1/2} Q^{ }(\mu^{1/2}g,\mu^{1/2}h). \een
It is not difficult to check that $\mathcal{N}(\mathcal{L}^\epsilon)$ and $\mathcal{N}(\mathcal{L}^0)$, the null spaces of $\mathcal{L}^\epsilon$ and $\mathcal{L}^0$ respectively, verify
\beno \mathcal{N}(\mathcal{L}^\epsilon)=\mathcal{N}(\mathcal{L}^0)=\mathcal{N}\eqdefa \mathrm{span}\{\sqrt{\mu}, \sqrt{\mu}v_1, \sqrt{\mu}v_2,\sqrt{\mu}v_3, \sqrt{\mu}|v|^2 \}. \eeno
If we set $F=\mu +\mu^{1/2}f$, then
\eqref{cutoffboltzmann} and \eqref{homb} reduce to
\begin{equation}\label{linearizedBE} \left\{ \begin{aligned}
&\partial_{t}f + v\cdot \nabla_{x} f + \mathcal{L}^{\epsilon}f= \Gamma^{\epsilon}(f,f), ~~t > 0;\\
&f|_{t=0} = f_{0},
\end{aligned} \right.
\end{equation}
and
\begin{equation}\label{linearizedNBE} \left\{ \begin{aligned}
&\partial_{t}f + v\cdot \nabla_{x} f + \mathcal{L}^{0}f= \Gamma^{}(f,f), ~~t > 0;\\
&f|_{t=0} = f_{0},
\end{aligned} \right.
\end{equation}
where $f_0$ verifies
\ben\label{Nuspace} \int_{\mathbb{T}^{3} \times \R^3} \sqrt{\mu}f_0\phi dxdv=0, \quad \phi(v)=1,v_{j},|v|^2, \quad j=1,2,3.\een
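For completeness, the reduction to \eqref{linearizedBE} is a standard one-line computation: substituting $F=\mu+\mu^{1/2}f$ into \eqref{cutoffboltzmann} and using the bilinearity of $Q^{\epsilon}$ together with $Q^{\epsilon}(\mu,\mu)=0$,

```latex
\mu^{-1/2}Q^{\epsilon}(\mu+\mu^{1/2}f,\mu+\mu^{1/2}f)
=\Gamma^{\epsilon}(\mu^{1/2},f)+\Gamma^{\epsilon}(f,\mu^{1/2})+\Gamma^{\epsilon}(f,f)
=-\mathcal{L}^{\epsilon}f+\Gamma^{\epsilon}(f,f),
```

by the definitions in \eqref{DefLep}; the non-cutoff reduction to \eqref{linearizedNBE} follows in exactly the same way from \eqref{DefL}.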
\subsection{Problems and their difficulties}
The main purpose of this paper is to understand what happens to the collision operator $\mathcal{L}^\epsilon $ and its associated equation \eqref{linearizedBE} in the limit as $\epsilon $ goes to zero. The problems addressed here can be summarized as follows:
\smallskip
{\bf Problem 1:} What is the behavior of the operator $ \mathcal{L}^\epsilon $ in the limit process?
\smallskip
We recall that $\mathcal{L}^\epsilon$ behaves like a damping term in equation \eqref{linearizedBE}, while $\mathcal{L}^0$ behaves like a fractional Laplace operator in equation \eqref{linearizedNBE}. The motivation of {\bf (P-1)} is to see clearly what kind of link exists between these two different properties in the limit process.
Obviously this is a fundamental and challenging problem.
To explain the main difficulty of the problem, we focus on the Maxwellian molecules ($\gamma=0$), which is simpler than the other cases. Previous work \cite{amuxy4,amuxy5,gs1,gs2,he2} shows that
for $\gamma=0$, there holds
\ben\label{eqiL01} \langle \mathcal{L}^0f,f\rangle_v+|f|_{L^2}^2\sim |f|^{2}_{L^{2}_{s}}+|f|^{2}_{H^{s}} + |(-\Delta_{\SS^{2}})^{s/2}f|^{2}_{L^{2}}. \een
Here $\langle f,g \rangle_v$ denotes the inner product for $v$ variable.
In the description of $\langle \mathcal{L}^0f, f\rangle_v$, the three parts in the right-hand side of the equivalence correspond to gain of weight, gain of Sobolev regularity and gain of tangential derivative on the unit sphere respectively. Considering that $ \langle \mathcal{L}^\epsilon f,f\rangle_v\rightarrow \langle \mathcal{L}^0f,f\rangle_v$ as $\epsilon \rightarrow 0$, we believe that $\langle \mathcal{L}^\epsilon f, f\rangle_v$ has a similar structure. The main difficulty, however, is to identify its three parts precisely.
To find out a good candidate, we go back to the original proof of the coercivity estimate for the collision operator in \cite{advw}. Following the computation used there, we can derive that
\ben\label{sobolevregu} \langle Q^\epsilon(g, f), f\rangle_v+|f|_{L^2}^2\ge C_g|W^\epsilon(D)f|_{L^2}^2, \een where $W^\epsilon$ is defined in \eqref{charicter function}.
Observe that \eqref{eqiL01} can be rewritten as
\ben\label{eqiL02}
\langle \mathcal{L}^0f,f\rangle_v+|f|_{L^2}^2\sim |Wf|^{2}_{L^{2}}+|W(D)f|^{2}_{L^2} + |W((-\Delta_{\SS^{2}})^{1/2})f|^{2}_{L^{2}},
\een
where $W(x)=(1+|x|^2)^{\f{s}2}$. This shows that $W$ is a characteristic function for $\mathcal{L}^0$ which captures its full structure.
Thus we conjecture that $W^\epsilon$ is the characteristic function for $\mathcal{L}^\epsilon$ in the following sense: \ben\label{conject1}\langle \mathcal{L}^\epsilon f, f\rangle_v + |f|^{2}_{L^{2}} \sim |W^\epsilon f|^{2}_{L^{2}}+|W^\epsilon(D)f|^{2}_{L^2} + |W^\epsilon((-\Delta_{\SS^{2}})^{1/2})f|^{2}_{L^{2}}. \een
Let us give some comments on conjecture \eqref{conject1}. Firstly, it is easy to see that when $\epsilon$ goes to zero, \eqref{conject1} coincides with \eqref{eqiL02}. This shows that the characteristic function $W^\epsilon$ describes the link between the operators of the cutoff case and the non-cutoff case. Secondly, in the description \eqref{conject1} of the behavior of $\mathcal{L}^\epsilon$, gain of weight only happens in the region $|v|\lesssim 1/\epsilon$ of the phase space, gain of Sobolev regularity only happens in the region $|\xi|\lesssim 1/\epsilon$ of the frequency space, and gain of tangential derivative only happens in the region where the eigenvalue $\lambda$ of the operator $(-\Delta_{\SS^{2}})^{1/2}$ verifies $\lambda\lesssim 1/\epsilon$. These properties
result from the fact that the operator $\mathcal{L}^\epsilon$ still retains the hyperbolic structure due to the cutoff assumption on the angle, that is, $\theta\gtrsim \epsilon$.
Thirdly, because of the hyperbolic structure of the operator $\mathcal{L}^\epsilon$, the methods introduced in the previous work \cite{amuxy4,amuxy5,gs1,gs2,afl,lmkx,he2,pao} cannot be applied to capture the terms
$|W^\epsilon((-\Delta_{\SS^{2}})^{1/2})f|_{L^2}$ and $|W^\epsilon f|_{L^{2}}$ in \eqref{conject1}.
Therefore we need some new idea to prove the conjecture.
\medskip
{\bf Problem 2:} What is the longtime behavior of $e^{-\mathcal{L}^\epsilon t}f$ with $f\in {\mathcal{N}(\mathcal{L^\epsilon})}^{\perp}$ for moderate soft potentials in the limit process that $\epsilon$ goes to zero?
\smallskip
As we know, for $\gamma\in [-2s,0)$, the operator $\mathcal{L}^\epsilon$ has no spectral gap for any fixed $\epsilon$, while the limiting point $\mathcal{L}^0$ of $\{\mathcal{L}^\epsilon\}_{\epsilon>0}$ does have one. It looks as if there is a jump in this property. Instead of investigating the spectral properties of the operator, which looks extremely difficult, we turn to consider the longtime behavior of $e^{-\mathcal{L}^\epsilon t}f$ with $f\in {\mathcal{N}(\mathcal{L^\epsilon})}^{\perp}$, because the spectral properties of an operator have a strong connection with the semi-group it generates.
For the operator $\mathcal{L}^0$, due to the existence of the spectral gap, it is easy to see that for any $f\in {\mathcal{N}}^{\perp}$,
\beno \|e^{-t\mathcal{L}^0}f\|_{L^2}\le e^{-c t}\|f\|_{L^2}. \eeno
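This decay is the standard energy estimate: writing $\lambda>0$ for the spectral gap of $\mathcal{L}^0$ on $\mathcal{N}^{\perp}$, and noting that $e^{-t\mathcal{L}^0}f$ remains in $\mathcal{N}^{\perp}$,

```latex
\frac{d}{dt}\|e^{-t\mathcal{L}^0}f\|_{L^2}^2
=-2\big(\mathcal{L}^0 e^{-t\mathcal{L}^0}f,\, e^{-t\mathcal{L}^0}f\big)
\le -2\lambda\,\|e^{-t\mathcal{L}^0}f\|_{L^2}^2,
```

so Gr\"onwall's inequality yields the exponential decay with $c=\lambda$.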
For the operator $\mathcal{L}^\epsilon$, by imposing the additional assumption that $f\in L^2_l$, we can derive that $e^{-t\mathcal{L^\epsilon}}f$ decays to zero at a polynomial rate. However, we have no information on the explicit rate of this relaxation for general $f\in {\mathcal{N}}^{\perp}$. By an approximation argument, we can only prove that
\beno \lim_{t\rightarrow\infty}\|e^{-t\mathcal{L}^\epsilon}f\|_{L^2}=0. \eeno
Therefore, from these two estimates, it is difficult to find the link between these two different longtime behaviors. We comment that this difficulty matches the fact that $\mathcal{L}^\epsilon$ has no spectral gap while $\mathcal{L}^0$ does.
\medskip
{\bf Problem 3:} Which kind of asymptotic formula describes the limit that $\epsilon$ goes to zero for the solutions of the nonlinear equations \eqref{linearizedBE} and \eqref{linearizedNBE}?
\smallskip
Formally, when the parameter $\epsilon$ goes to zero, the solution $f^\epsilon$ to \eqref{linearizedBE} converges to the solution $f$ to \eqref{linearizedNBE}. The motivation of {\bf (P-3)} is to justify this convergence and find an asymptotic formula describing the limit.
To clarify the relation between $f^\epsilon$ and $f$, we first look at the stationary case.
By Taylor expansion, we can prove that for any smooth function $f$,
\beno |Q^\epsilon(f,f)-Q(f,f)| = O(\epsilon^{2-2s}). \eeno
Thus it is natural to conjecture that for the nonlinear equations, there holds
$$f^\epsilon-f=O(\epsilon^{2-2s}).$$
Obviously the main difficulty of the proof lies in the behavior of $\mathcal{L}^\epsilon$ and the uniform estimates with respect to the parameter $\epsilon$.
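The order $\epsilon^{2-2s}$ can be seen heuristically, assuming the standard non-cutoff singularity $b(\cos\theta)\sin\theta\sim\theta^{-1-2s}$ near $\theta=0$: the difference $Q-Q^{\epsilon}$ only involves angles $\theta\lesssim\epsilon$ (where $\phi(\sin\frac{\theta}{2}/\epsilon)\neq0$), and there the Taylor expansion of the integrand produces a factor $\theta^{2}$, so

```latex
|Q^{\epsilon}(f,f)-Q(f,f)|
\lesssim \int_{0}^{C\epsilon}\theta^{-1-2s}\,\theta^{2}\,d\theta
\sim \epsilon^{2-2s},\qquad 0<s<1.
```

The precise statement of course requires controlling the constants by weighted Sobolev norms of $f$.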
\subsection{Notations and main results}
We first list the function spaces and notations
which we shall use throughout the paper.
\subsubsection{Basic notations}
We denote the multi-index $\alpha =(\alpha _1,\alpha _2,\alpha _3)$ with
$|\alpha |=\alpha _1+\alpha _2+\alpha _3$. We write $a\lesssim b$ to indicate that there is a
uniform constant $C,$ which may be different on different lines,
such that $a\leq Cb$. We use the notation $a\sim b$ whenever $a\lesssim b$ and $b\lesssim
a$. The notation $a^+$ denotes the maximum of $a$ and $0$, and $[a]$ denotes the largest integer not exceeding $a$. The Japanese bracket $\langle \cdot\rangle$ is defined by $\langle v\rangle=(1+|v|^2)^{\frac{1}{2}}$. Then the weight function $W_l$ is defined by $W_l(v)\eqdefa \langle v\rangle^l $. We denote by $C(\lambda_1,\lambda_2,\cdots, \lambda_n)$ a constant depending on the parameters $\lambda_1,\lambda_2,\cdots, \lambda_n$. The notations $\langle f,g\rangle_v\eqdefa \int_{\R^3}f(v)g(v)dv$ and $(f,g)\eqdefa \int_{\R^3\times\TT^3} fgdxdv$ are used to denote the inner products in the $v$ variable and in the $x,v$ variables respectively. The translation operator $T_h$ is defined by $(T_hf)(v)\eqdefa f(v+h)$, for any $h \in \R^{3}$.
As usual, $\mathbf{1}_A$ is the characteristic function of the set $A$. If $A,B$ are two operators, then $[A,B]\eqdefa AB-BA$.
\subsubsection{Function spaces} Several spaces are introduced as follows:
(1). For real numbers $n, l $, we define the weighted Sobolev space on $\R^3$
\begin{equation*}
H^{n}_l \eqdefa\bigg\{f(v)\big| |f|^2_{H^n_l}\eqdefa\int_{\R^3_v} |\langle D\rangle^n \langle v\rangle^l f(v)|^2 dv
<+\infty\bigg\}.
\end{equation*} Here $a(D)$ is the pseudo-differential operator with symbol
$a(\xi)$ defined by
\beno \big(a(D)f\big)(v)\eqdefa\f1{(2\pi)^3}\int_{\R^3}\int_{\R^3} e^{i(v-y)\xi}a(\xi)f(y)dyd\xi.\eeno
(2). The general weighted Sobolev space $W^{N,p}_l$ on $\R^3$ with $p\in [1, \infty)$ is defined as follows
\beno
W^{N,p}_l\eqdefa \bigg\{f(v)\big| |f|_{W^{N,p}_l}\eqdefa\sum_{|\alpha|\le N} \bigg(\int_{\R^3} |\partial^\alpha f(v)|^p\langle v\rangle^{lp}dv \bigg)^{1/p}<\infty
\bigg\}.
\eeno
In particular, if $N=0$, we introduce the weighted $L^p_l$ space as
\beno
L^p_l\eqdefa \bigg\{f(v)\big| |f|_{L^p_l}\eqdefa\bigg(\int_{\R^3} |f(v)|^p\langle v
\rangle^{l p}dv\bigg)^{\f1{p}}<\infty \bigg\}.
\eeno
(3). For $m\in\N$, we denote the Sobolev space on $\mathbb{T}^{3}$ by
\begin{equation*} H^{m}_{x} \eqdefa\bigg\{f(x)\big| |f|^{2}_{H^{m}_{x}}\eqdefa \sum_{|\alpha | \leq m}|\partial^{\alpha}_{x} f|^{2}_{L^{2}_{x}}<+\infty\bigg\}.
\end{equation*}
(4). For a distribution function $f(x,v)$, we define the following weighted Sobolev spaces
with weight on the velocity variable. For $m,n \in \N, l \in \R$, the weighted Sobolev space on $\mathbb{T}^{3}\times\R^{3}$ is defined by
\beno H^{m}_xH^{n}_{l} \eqdefa \bigg\{f(x,v) \big| \|f\|^{2}_{ H^{m}_xH^{n}_{l}} \eqdefa
\sum_{|\alpha| \leq m, |\beta| \leq n} \int_{\TT^3}| \partial^{\alpha}_x\pa^{\beta}_vf|^{2}_{L^{2}_l}dx < \infty\bigg\}.\eeno
For simplicity, we write $\|f\|_{H^{m}_{x}L^{2}_{l}} \eqdefa \|f\|_{ H^{m}_xH^{0}_{l} }$ if $n=0$ and $\|f\|_{L^{2}_{l}} \eqdefa \|f\|_{ H^{0}_xH^{0}_{l} }$ if $m=n=0$. We can define the homogeneous space $\dot{H}^{m}_x\dot{H}^{n}_{l}$ if we replace $|\alpha| \leq m, |\beta| \leq n$ by $|\alpha| = m, |\beta| = n$. Similarly we can introduce the partially homogeneous spaces $\dot{H}^{m}_xH^{n}_{l}$ and $H^{m}_x\dot{H}^{n}_{l}$.
\subsubsection{Dyadic decompositions} We now introduce the dyadic
decomposition. Let $B_{4/3} \eqdefa \{x\in\R^{3}: |x| \leq 4/3\}$ and $C \eqdefa \{x\in\R^{3}: 3/4 \leq |x| \leq 8/3\}$. Then one may introduce two
radial functions $\phi \in C_{0}^{\infty}(B_{4/3})$ and $\psi \in C_{0}^{\infty}(C)$ which satisfy
\ben \label{function-phi-psi} 0\leq \phi, \psi \leq 1, \text{ and } \phi(x) + \sum_{j \geq 0} \psi(2^{-j}x) =1, \text{ for all } x \in \R^{3}. \een
Now define $\varphi_{-1}(x) \eqdefa \phi(x)$ and $\varphi_{j}(x) \eqdefa \psi(2^{-j}x)$ for any $x \in \R^{3}$ and $j \geq 0$. Then one has the following dyadic decomposition
\ben \label{dyadic-decomposition} f =\sum_{j=-1}^\infty \mathcal{P}_jf\eqdefa \sum_{j=-1}^{\infty} \varphi_{j}f, \een
for any function defined on $\R^{3}$.
We will use the notations \ben\label{defphilh} f_\phi\eqdefa \phi(\epsilon D) f,\quad f^\phi\eqdefa(1-\phi(\epsilon D))f,\quad f^l=\phi(\epsilon \cdot) f,\quad f^h=(1-\phi(\epsilon\cdot))f.\een
\subsubsection{Macro-Macro decomposition} Recall the definition of $\mathcal{N}(\mathcal{L}^\epsilon)=\mathcal{N}(\mathcal{L}^0)=\mathcal{N}\eqdefa \mathrm{span}\{\sqrt{\mu}, \sqrt{\mu}v_1, \\ \sqrt{\mu}v_2,\sqrt{\mu}v_3, \sqrt{\mu}|v|^2 \}$, we introduce the projection operator $\mathbb{P}$ as follows:
\ben\label{DefProj} \mathbb{P}f=(a+b\cdot v+c|v|^2)\sqrt{\mu}, \een
where for $1\le i\le 3$, \ben\label{Defabc}
a=\int_{\R^3} (\frac{5}{2}-\frac{|v|^{2}}{2})\sqrt{\mu}fdv; \quad b_i=\int_{\R^3} v_i\sqrt{\mu}fdv; \quad c=\int_{\R^3} (\frac{|v|^2}{6}-\frac{1}{2})\sqrt{\mu}fdv.
\een
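The coefficients are determined by requiring that $\mathbb{P}f$ reproduce the first five moments of $f$ against $\sqrt{\mu}$. Using the Gaussian moments $\int_{\R^3}\mu\,dv=1$, $\int_{\R^3}|v|^2\mu\,dv=3$ and $\int_{\R^3}|v|^4\mu\,dv=15$, these conditions read

```latex
a+3c=\langle f,\sqrt{\mu}\rangle_v,\qquad
b_i=\langle f, v_i\sqrt{\mu}\rangle_v,\qquad
3a+15c=\langle f,|v|^2\sqrt{\mu}\rangle_v,
```

whose solution is $a=\frac{1}{2}\big(5\langle f,\sqrt{\mu}\rangle_v-\langle f,|v|^2\sqrt{\mu}\rangle_v\big)$ and $c=\frac{1}{6}\langle f,|v|^2\sqrt{\mu}\rangle_v-\frac{1}{2}\langle f,\sqrt{\mu}\rangle_v$; in particular $\mathbb{P}^2=\mathbb{P}$.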
\subsubsection{Characteristic function and function spaces related to the collision operator} The characteristic function $W^{\epsilon}$ associated to $\mathcal{L}^\epsilon$ is defined by
\ben\label{charicter function}
W^{\epsilon}(v) = \langle v \rangle^{s}\phi(\epsilon v) + \epsilon^{-s}(1-\phi(\epsilon v)).
\een Let $Y_l^m$ with $-l\le m\le l$ be real spherical harmonics verifying that
$ (-\triangle_{\SS^2})Y_l^m=l(l+1)Y_l^m. $
Then the operator $W^{\epsilon}((-\Delta_{\SS^{2}})^{1/2})$ is defined by: if $v = r \sigma$, then
\ben\label{DeltaWe} (W^{\epsilon}((-\Delta_{\SS^{2}})^{1/2})f)(v) &\eqdefa& \sum_{l=0}^\infty\sum_{m=-l}^{l} W^\epsilon((l(l+1))^{1/2}) Y^{m}_{l}(\sigma)f^{m}_{l}(r),
\een
where
$ f^{m}_{l}(r) = \int_{\SS^{2}} Y^{m}_{l}(\sigma) f(r \sigma) d\sigma.$
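Since $\phi$ equals $1$ on $[0,3/4]$ and is supported in $[0,4/3]$, the characteristic function interpolates between two regimes:

```latex
W^{\epsilon}(v)=\langle v\rangle^{s}\ \text{for}\ |v|\le \tfrac{3}{4\epsilon},\qquad
W^{\epsilon}(v)=\epsilon^{-s}\ \text{for}\ |v|\ge \tfrac{4}{3\epsilon},\qquad
W^{\epsilon}(v)\sim \min\{\langle v\rangle^{s},\epsilon^{-s}\}\ \text{for all}\ v.
```

In particular $W^{\epsilon}\rightarrow W=\langle\cdot\rangle^{s}$ pointwise as $\epsilon\rightarrow0$, consistent with the passage from \eqref{conject1} to \eqref{eqiL02}.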
\smallskip
Now we introduce several function spaces to catch the behavior of $\mathcal{L}^\epsilon$.
(i). {\it The space $L^2_{\epsilon,l}$.} For functions defined in $\R^3$,
the space $L^2_{\epsilon,l}$ with $l\in\R$ is defined by
\beno
L^2_{\epsilon,l}\eqdefa\bigg\{f(v)\big||f|^{2}_{\epsilon,l}\eqdefa |W^{\epsilon}((-\Delta_{\SS^{2}})^{1/2})W_{l}f|^{2}_{L^{2}} + |W^{\epsilon}(D)W_{l}f|^{2}_{L^{2}} + |W^{\epsilon}W_{l}f|^{2}_{L^{2}}<\infty\bigg\}.
\eeno
(ii). {\it The space $H^{m}_xH^{n}_{\epsilon,l}$ with $m,n\in\N$.} For functions defined in $\mathbb{T}^{3}\times\R^{3}$, the space $H^{m}_xH^{n}_{\epsilon,l}$ is defined by
\beno H^{m}_xH^{n}_{\epsilon,l}\eqdefa\bigg\{f(x,v)\big|
\|f\|^{2}_{H^{m}_xH^{n}_{\epsilon,l}} \eqdefa \sum_{|\alpha| \leq m, |\beta| \leq n}\int_{\mathbb{T}^{3}} |\pa^\alpha_x\pa^\beta_vf(x,\cdot)|^{2}_{\epsilon,l} dx<\infty\bigg\}.
\eeno
For simplicity, we set $\|f\|_{H_x^{m}L^2_{\epsilon,l}}\eqdefa \|f\|_{H^{m}_xH^{0}_{\epsilon,l}}$ if $n=0$ and $\|f\|_{L^2_{\epsilon,l}}\eqdefa \|f\|_{H^{0}_xH^{0}_{\epsilon,l}}$ if $m=n=0$. Similarly we can introduce the spaces $\dot{H}^{m}_x\dot{H}^{n}_{\epsilon,l}$, $\dot{H}^{m}_xH^{n}_{\epsilon,l}$ and $H^{m}_x\dot{H}^{n}_{\epsilon,l}$.
(iii). {\it Semi-norms related to $\mathcal{L}^\epsilon$.} We introduce
\beno&&
\mathcal{R}_g^{\epsilon,\gamma}(f) \eqdefa \int_{\R^6\times\SS^2} b^{\epsilon}(\cos\theta)| v-v_{*} |^{\gamma} g_{*} (f^{\prime}-f)^{2} d\sigma dv dv_{*}; \quad
\mathcal{R}^{\epsilon,\gamma}_{*,g}(f) \eqdefa \int_{\R^6\times\SS^2} b^{\epsilon}(\cos\theta)\langle v-v_{*} \rangle^{\gamma}\\&&\times g_{*}(f^{\prime}-f)^{2}d\sigma dv dv_{*};\quad
\mathcal{M}^{\epsilon,\gamma}(f)\eqdefa\int_{\R^6\times\SS^2} b^{\epsilon}(\cos\theta)| v-v_{*} |^{\gamma} f_{*}^{2} (\mu^{\prime 1/2}-\mu^{1/2})^{2} d\sigma dv dv_{*}.
\eeno
Let us explain where they come from. As we will show in Section 2, \beno \langle \mathcal{L}^{\epsilon} f,f \rangle_v+|f|_{L^2}^2\sim\mathcal{R}_\mu^{\epsilon,0}(f)+\mathcal{M}^{\epsilon,0}(f).\eeno Thus the quantities $\mathcal{R}_g^{\epsilon,\gamma}(f) $ and $\mathcal{M}^{\epsilon,\gamma}(f)$ correspond to gain of regularity and gain of weight respectively. Compared to $\mathcal{R}^{\epsilon,\gamma}_{g}(f) $, in the definition of $\mathcal{R}^{\epsilon,\gamma}_{*,g}(f)$, there is no singularity for the relative velocity $v-v_*$ in the integral.
\subsubsection{Main results}
We are in a position to state our main results. Our first one is on the description of the behavior of $\mathcal{L}^{\epsilon}$, which fully solves {\bf (P-1)}.
\begin{thm}\label{main1}
There exists a constant $\epsilon_0$ such that for $\epsilon\le\epsilon_0$ and any smooth function $f$,
\begin{eqnarray}\label{uniforml2}
\langle \mathcal{L}^{\epsilon}f, f\rangle_v + |f|^{2}_{L^{2}_{\gamma/2}} \sim |f|^{2}_{\epsilon,\gamma/2}=|W^{\epsilon}((-\Delta_{\SS^{2}})^{1/2}) f|^{2}_{L^{2}_{\gamma/2}} + |W^{\epsilon}(D)f|^2_{L^{2}_{\gamma/2}} + |W^{\epsilon} f|^{2}_{L^{2}_{\gamma/2}}.
\end{eqnarray}
\end{thm}
Some remarks are in order:
\begin{rmk} Theorem \ref{main1} discloses the link between the hyperbolic structure due to the cutoff assumption and the smoothing property due to the long-range interaction.
\end{rmk}
\begin{rmk} Let us focus on the description of $\mathcal{L}^\epsilon$ for moderate soft potentials (that is, $\gamma\in(-2s,0)$). In this case, we first notice that $$|W^\epsilon f|^2_{L^2_{\gamma/2}}\sim | f^l|^2_{L^2_{\gamma/2+s}}+\epsilon^{-2s}|f^h|^2_{L^2_{\gamma/2}}.$$ Obviously,
in the region $|v|\lesssim 1/\epsilon$ of the phase space, the operator $\mathcal{L}^\epsilon$ really gains the weight, while in the region $|v|\gtrsim 1/\epsilon$ the operator $\mathcal{L}^\epsilon$ still retains the hyperbolic property. This perfectly explains why $\mathcal{L}^\epsilon$ has no spectral gap for any fixed $\epsilon$ while the limiting point $\mathcal{L}^0$ of $\{\mathcal{L}^\epsilon\}_{\epsilon>0}$ does have one.
\end{rmk}
Our second result is on the diversity of the longtime behavior of $e^{-\mathcal{L}^\epsilon t}f$ with $f\in {\mathcal{N}(\mathcal{L^\epsilon})}^{\perp}=\mathcal{N}^\perp$ for moderate soft potentials.
\begin{thm}\label{main2} Suppose $\epsilon\le\epsilon_0$, $\gamma\in [-2s,0)$
and $f_0\in \mathcal{N}^{\perp}$. Then we have
\ben\label{semigroupLe1} |e^{-\mathcal{L}^\epsilon t}f_0|_{L^2}^2
\lesssim e^{-ct}|f_0^l|_{L^2}^2+|f_0^h|_{L^2}^2+\epsilon^{2s}|f_0|_{L^2}^2, \een
where $f^l $ and $f^h$ are defined in \eqref{defphilh}. Furthermore,
\begin{enumerate}
\item[(i)] if $f_0$ additionally verifies that $f_0\in L^2_{-p\gamma /2}$ with $-\gamma p/2\ge2$, then \ben\label{semigroupLe2} |e^{-\mathcal{L}^\epsilon t}f_0|_{L^2}^2
\lesssim |f_0|_{L^2}^2\big(e^{-c t}\mathrm{1}_{t\le t_* }+C(p,|f_0|_{L^2_{-p\gamma/2}})\epsilon^{2sp}(1+t)^{-p}\mathrm{1}_{t\ge t_*}\big),\een
where $c$ is a universal constant and $t_*=O(-C(p,|f_0|_{L^2_{-p\gamma/2}})2s \ln \epsilon)$;
\item[(ii)] if $f_0$ additionally verifies that $|f_0|^2_{L^2}=1$ and $|\mathcal{P}_{j}f_0|_{L^2}^2= 1-\eta$ with $\eta$ sufficiently small and $2^{j\gamma}\ll \epsilon^{2s}$ (which implies that $1/\epsilon\ll 2^j$), then for $t\in [0, C^{-1}\eta2^{-j\gamma}\epsilon^{2s}]$, where $C$ is a universal constant,
\ben\label{semigroupLe3} |e^{-\mathcal{L}^\epsilon t}f_0|_{L^2}^2\ge |\mathcal{P}_je^{-\mathcal{L}^\epsilon t}f_0|_{L^2}^2\ge 1-4\eta-C\epsilon^{2s}. \een
\end{enumerate} As a consequence, for fixed and sufficiently small $\epsilon$, the estimate $\lim\limits_{t\rightarrow\infty}|e^{-\mathcal{L}^\epsilon t}f_0|_{L^2}=0$ is sharp.
\end{thm}
Some remarks are in order:
\begin{rmk} We have three comments on estimate \eqref{semigroupLe1}. Firstly, it shows that the longtime behavior of $e^{-\mathcal{L}^\epsilon t}f_0$ depends heavily on the distribution of the energy of $f_0$. Secondly, the estimate is sharp for general data $f_0\in \mathcal{N}^{\perp}$ thanks to the estimates \eqref{semigroupLe2} and \eqref{semigroupLe3}, which deal with the case that the energy of $f_0$ is concentrated in the ball $B_{1/\epsilon}$ and the case that it is concentrated far away from the ball $B_{1/\epsilon}$.
Thirdly, by passing the limit $\epsilon\rightarrow0$, we recover from \eqref{semigroupLe1} that for all $t\ge0$,
\ben\label{semigroupL0} |e^{-\mathcal{L}^0 t}f_0|_{L^2}^2
\lesssim e^{-ct}|f_0|_{L^2}^2.\een This demonstrates that there is no jump for the facts that the operator $\mathcal{L}^\epsilon$ has no spectral gap for fixed $\epsilon$ but the limiting point $\mathcal{L}^0$ of $\{\mathcal{L}^\epsilon\}_{\epsilon>0}$ does have.
\end{rmk}
\begin{rmk} Estimates \eqref{semigroupLe2} and \eqref{semigroupL0} show that for a long time, up to a critical time $t_*=O(-c(p,f_0)2sp \ln \epsilon)$, there is no difference between the behavior of $e^{-\mathcal{L}^\epsilon t}f_0$ and that of $e^{-\mathcal{L}^0 t}f_0$ if the energy of $f_0$ is concentrated in the ball $B_{1/\epsilon}$. The difference appears only after the critical time $t_*$. In fact, after $t_*$ the hyperbolic structure takes over the behavior of the semi-group $ e^{-\mathcal{L}^\epsilon t}$, which explains the polynomial decay factor in \eqref{semigroupLe2}. To the best of our knowledge, this phenomenon is observed here for the first time. \end{rmk}
\begin{rmk} We have two remarks on \eqref{semigroupLe3}. Firstly, it reveals that for any given time interval we can construct a datum $f_0$ such that the total energy of $e^{-t\mathcal{L}^\epsilon}f_0$ is almost conserved. Such data prevent the formation of a spectral gap for $\mathcal{L}^\epsilon$ when $\epsilon$ is sufficiently small. Secondly, we want to show that there exists a datum $f_0$ verifying the assumptions in $(ii)$. Let $f$ be a function in $L^2$ with $|f|_{L^2}=1$ whose support belongs to the ring $\mathcal{C}_j=\{v\in \R^3|2^j\le |v|\le N_02^j\}$ with $N_0\ge2$. Let $f_0=(I-\mathbb{P})f $. Then $f_0$ verifies $f_0\in {\mathcal{N}(\mathcal{L^\epsilon})}^\perp$,
$ |f_0|_{L^2}\ge 1-O(e^{-\f18 2^{2j}})$ and $|\mathcal{P}_jf_0|_{L^2}\ge 1-O(e^{-\f18 2^{2j}})$.
\end{rmk}
\begin{rmk} The sharpness of the estimate $\lim\limits_{t\rightarrow\infty}|e^{-\mathcal{L}^\epsilon t}f_0|_{L^2}=0$ follows directly from \eqref{semigroupLe2} and \eqref{semigroupLe3}. Indeed, the estimate itself can be derived by approximation thanks to \eqref{semigroupLe2}. On the other hand, due to \eqref{semigroupLe3}, it is impossible to get an explicit and uniform decay rate for the above relaxation. These two facts reveal the diversity of the longtime behavior of $e^{-\mathcal{L}^\epsilon t}f_0$. Our results are comparable to those for the homogeneous Boltzmann equation with moderate soft potentials: the authors in \cite{cclu} show that the rate of convergence to equilibrium can be very slow if one only assumes that the solution conserves mass, momentum and energy.
\end{rmk}
\begin{rmk} Let us comment on the connection between the constant $c$ in \eqref{semigroupLe2} and the spectral gap $\lambda$ of the operator $\mathcal{L}^0$. Obviously $c\le \lambda$. An interesting problem is to see the dependence of $\lambda$ on $c$. Observe that $|(\mathcal{L}^\epsilon-\mathcal{L}^0)f|_{L^2}\lesssim \epsilon^{2-2s}|f|_{H^2_{\gamma+2}}$. Then if $f_0$ is smooth, \eqref{semigroupLe2} can be improved to
\beno |e^{-\mathcal{L}^\epsilon t}f_0|_{L^2}^2
\lesssim |f_0|_{L^2}^2e^{-\lambda t}+\epsilon^{2-2s} |f_0|_{H^2_{\gamma+2}}^2.\eeno
\end{rmk}
Our third result is concerned with the global well-posedness and the global dynamics for equation \eqref{linearizedBE}. As a direct consequence, we derive the asymptotic formula for the solutions to \eqref{linearizedBE} and \eqref{linearizedNBE}, which solves {\bf (P-3)}.
Let us introduce some useful notations which will be used throughout this subsection. For $J,N\in\N$ with $J\le N$, we introduce a sequence of weight functions $\{W_{m,j}\}_{m+j\le N-1}\cup \{W_{N-j,j}\}_{0\le j\le J}$ with $W_{m,j}=W_{l_{m,j}}$ verifying \ben\label{AsuWf}W_{m-1,j+1}W_{-\gamma}\le W_{m,j},W_{m,0}\le W_{0,m-1} \quad\mbox{and}\quad W_{N-J,J}\ge W_{2}.\een
Then we define
\beno
\dot{\mathcal{E}}^{N-j,j}(f)=\sum_{|\alpha|= N-j,|\beta|=j}\|W_{N-j,j}\pa^\alpha_\beta f\|_{L^2}^2; \quad \dot{\mathcal{D}}^{N-j,j}(f)=\sum_{|\alpha|= N-j,|\beta|=j}\|W_{N-j,j}\pa^\alpha_\beta f\|_{L^2_{\epsilon,\gamma/2}}^2;\\
\dot{\mathcal{E}}^{N}(f)=\sum_{j=0}^N\dot{\mathcal{E}}^{N-j,j}(f);\quad \dot{\mathcal{D}}^{N}(f)=\sum_{j=0}^N\dot{\mathcal{D}}^{N-j,j}(f);\quad
\mathcal{E}^{N,J}(f)=\sum_{j=0}^{N-1}\dot{\mathcal{E}}^{j}(f)+\sum_{j=0}^J\dot{\mathcal{E}}^{N-j,j}(f);\\ \mathcal{D}^{N,J}(f)=\sum_{j=0}^{N-1}\dot{\mathcal{D}}^{j}(f)+\sum_{j=0}^J\dot{\mathcal{D}}^{N-j,j}(f); \quad\mathcal{E}^{N}(f)=\mathcal{E}^{N,N}(f);\quad \mathcal{D}^{N}(f)=\mathcal{D}^{N,N}(f).
\eeno
\begin{thm}\label{main3}
Suppose $\epsilon\le\epsilon_0$, $\gamma\in(-3/2,0)\cap[-2s,0)$ and $\delta_0$ is a sufficiently small constant which is independent of $\epsilon$. Let $f_0$ verify \eqref{Nuspace} and $\|f_{0}\|_{H^{2}_{x}L^{2}}\leq \delta_{0}$.
\begin{enumerate}
\item{\bf (Global well-posedness)}
\eqref{linearizedBE} admits a unique global smooth solution $f^\epsilon$ in the function space $C([0,\infty);H^2_xL^2)$ which verifies
$\sup_{t\ge0}\|f^\epsilon(t)\|_{H^{2}_{x}L^{2}}\lesssim \delta_{0}$.
(i). If additionally $f_0\in H^N_xL^2_l$ with $N,l\ge2$, then \beno \sup_{t\ge0}\|f^\epsilon(t)\|_{H^N_xL^2_l}\lesssim C(\|f_0\|_{H^N_xL^2_l}).\eeno
(ii). If additionally $\mathcal{E}^{N,J}(f_0)<\infty$ with $N\ge2$, then
\beno \sup_{t\in(0,\infty)}\mathcal{E}^{N,J}(f^\epsilon(t))+\int_0^\infty \mathcal{D}^{N,J}(f^\epsilon(\tau))d\tau \lesssim C(\mathcal{E}^{N,J}(f_0)). \eeno
As a consequence, \eqref{linearizedNBE} admits a unique global solution $f$ in the space $C([0,\infty); \mathcal{E}^{N,J})$ with the same initial data $f_0$.
\item{\bf (Global dynamics)} (i). If $2^{j\gamma}\ll \epsilon^{2s}$, then for $t\in [0, C^{-1}\delta_{0}^{-1}\eta 2^{-j\gamma}\epsilon^{2s}]$, we have
\ben\label{localizedenergy} \|\mathcal{P}_jf^\epsilon(t)\|^2_{L^2}\ge \|\mathcal{P}_jf_0\|^2_{L^2}-\eta-C\epsilon^{2s}. \een
(ii). If
$f_0\in H_x^NL^2_{-p\gamma/2}$ with $-p\gamma/2\ge2$ and $N\ge2$, then there exists a critical time $t_*=O(-C(p,f_0)2s\ln\epsilon )$ such that \ben\label{decay-uniform-formula} \|f^\epsilon(t)\|^2_{H^N_xL^2}\lesssim \|f_0\|_{H^N_xL^2}^2\big(e^{-ct}\mathrm{1}_{t\le t_* }+C(p,f_0)\epsilon^{2sp}(1+t)^{-p}\mathrm{1}_{t\ge t_*}\big),\een
where $c$ is a universal constant and $C(p,f_0)$ is a constant depending on $p$ and $\|f_0\|_{H^N_{x}L^2_{-p\gamma/2}}$.
\item{\bf (Global asymptotic formula for the limit process)}
If $ \mathcal{E}^{N,2}(f_0)<\infty$ with $N\ge2$, then
\ben \label{error-function-uniform-estimate} \sup_{t \geq 0} \|f^{\epsilon}(t)-f(t)\|^{2}_{H^{N-2}_{x}L^{2}} \leq C(\mathcal{E}^{N}(f_0)) \epsilon^{2-2s}. \een
\end{enumerate}
\end{thm}
Some comments are in order:
\begin{rmk} The sequence of weight functions is designed to prove the propagation of the full regularity of the solution.
\end{rmk}
\begin{rmk} \eqref{localizedenergy} and \eqref{decay-uniform-formula} show that the picture of the behavior of the semi-group $e^{-t\mathcal{L}^\epsilon}$ can be extended to the non-linear level. In other words, even in the perturbation framework, the original solution $F$ of the Boltzmann equation converges to the equilibrium without any explicit rate: $$\lim_{t\rightarrow\infty}\bigg\|\f{F(t)-\mu}{\mu^{\f12}}\bigg\|_{L^2}=0.$$ We have two comments on this phenomenon. Firstly, if we go back to the original equation, by the energy-entropy method introduced in \cite{He-Jiang}, it holds that
\beno \lim_{t\rightarrow\infty}\|F-\mu\|_{L^2}=O(t^{-\infty}). \eeno Secondly, it is very interesting to ask what is the impact of such convergence on the hydrodynamic limit for the soft potentials.
\end{rmk}
\begin{rmk} To the best of our knowledge, these results are new for the moderate soft potentials. To keep the paper a reasonable size, we refrain from generalizing the results to the other potentials, which can be done by noticing that all the estimates involving $\mathcal{L}^\epsilon$ and $\Gamma^\epsilon$ in this paper are valid for $\gamma>-3$.
\end{rmk}
\subsection{Idea and novelty of the proof} Let us illustrate the ideas and novelties of the proof to our main theorems.
\subsubsection{Proof of Theorem \ref{main1}} For simplicity, we focus on the Maxwellian molecules. It is not difficult to prove that the description of the behavior of $\mathcal{L}^\epsilon$ can be reduced to the control of two quantities $\mathcal{M}^{\epsilon,0}(f)$ and $\mathcal{R}_\mu^{\epsilon,0}(f)$, which correspond to gain of weight and gain of regularity respectively.
\smallskip
\noindent $\bullet$ Instead of using the Carleman representation of the collision operator, we introduce a new coordinate system which enables us to make full use of the cancellation and of the law of sines to describe the behavior of $\mathcal{M}^{\epsilon,0}(f)$. The method is elementary and robust in capturing the hyperbolic structure of $\mathcal{L}^\epsilon$.
\smallskip
\noindent$\bullet$ To give a precise description of $\mathcal{R}_\mu^{\epsilon,0}(f)$, we develop some new techniques for the geometric decomposition of the operator. The first new idea is to apply the geometric decomposition to $\mathcal{R}_\mu^{\epsilon,0}(f)$ in the frequency space instead of the phase space. More precisely,
by Bobylev's equality, we have
\beno
\mathcal{R}_\mu^{\epsilon,0}(f) &=& \frac{1}{(2\pi)^{3}}\int b^{\epsilon}(\frac{\xi}{|\xi|} \cdot \sigma)\big(\hat{\mu}(0)|\hat{f}(\xi) - \hat{f}(\xi^{+})|^{2} + 2\Re((\hat{\mu}(0) - \hat{\mu}(\xi^{-}))\hat{f}(\xi^{+})\bar{\hat{f}}(\xi))\big) d\xi d\sigma
\\ &\eqdefa& \frac{\hat{\mu}(0)}{(2\pi)^{3}}\mathcal{I}_{1} + \frac{2}{(2\pi)^{3}}\mathcal{I}_{2},
\eeno
where $\xi^{+} = \frac{\xi+|\xi|\sigma}{2}$ and $\xi^{-} = \frac{\xi-|\xi|\sigma}{2}$. It is not difficult to prove that
\beno
|\mathcal{I}_{2}| \lesssim |W^{\epsilon}(D)f|^{2}_{L^{2}}\lesssim \langle\mathcal{L}^{\epsilon} f,f \rangle_v+|f|_{L^2}^2,
\eeno
thanks to \eqref{sobolevregu}. Therefore
we only need to consider the estimate of $\mathcal{I}_{1}$. By the geometric decomposition introduced in \cite{he2},
\beno
\hat{f}(\xi) - \hat{f}(\xi^{+}) = \hat{f}(\xi) - \hat{f}(|\xi|\frac{\xi^{+}}{|\xi^{+}|})+ \hat{f}(|\xi|\frac{\xi^{+}}{|\xi^{+}|}) - \hat{f}(\xi^{+}),
\eeno
we have
\beno
\mathcal{I}_{1} &=& \int b^{\epsilon}(\frac{\xi}{|\xi|} \cdot \sigma)|\hat{f}(\xi) - \hat{f}(\xi^{+})|^{2} d\xi d\sigma
\\&\geq& \frac{1}{2} \int b^{\epsilon}(\frac{\xi}{|\xi|} \cdot \sigma)|\hat{f}(\xi) - \hat{f}(|\xi|\frac{\xi^{+}}{|\xi^{+}|})|^{2} d\xi d\sigma
- \int b^{\epsilon}(\frac{\xi}{|\xi|} \cdot \sigma)|\hat{f}(|\xi|\frac{\xi^{+}}{|\xi^{+}|}) - \hat{f}(\xi^{+})|^{2} d\xi d\sigma
\\&\eqdefa& \frac{1}{2}\mathcal{I}_{1,1} - \mathcal{I}_{1,2}.
\eeno
Thanks to the fact that the Fourier transform commutes with $W^\epsilon((-\triangle_{\SS^2})^{\f12})$, we obtain the anisotropic regularity from $\mathcal{I}_{1,1}$. Now we only need to give the upper bound for $\mathcal{I}_{1,2}$. Our key observation lies in the fact that $\hat{f}(|\xi|\frac{\xi^{+}}{|\xi^{+}|})$ and $ \hat{f}(\xi^{+})$ can be localized in the same region both in the frequency space and in the phase space. This enables us to use the localization method to show that $\mathcal{I}_{1,2}$ can be bounded by the Sobolev regularity.
\smallskip
To complete the proof of Theorem \ref{main1}, we have to give the upper bound for $\langle \Gamma^\epsilon (g,h),f\rangle_v$ for the general potentials. To do that, our new idea is to separate the estimate into two regions, $|v-v_*|\le 1$ and $|v-v_*|\ge1$, to manifest the hyperbolic structure and the smoothing property of the operator.
(i). In the region $|v-v_*|\le 1$, the hyperbolic structure prevails over the anisotropic structure, which can be checked from the proof of the sharp bounds for the operator in weighted Sobolev spaces (see \cite{he2} for details). It suggests that we can use the Sobolev regularity to give the upper bounds for the operator.
(ii). In the region of $|v-v_*|\ge 1$, the operator is dominated by the anisotropic structure. We resort to the geometric decomposition in the phase space to give the corresponding upper bounds. In particular, we make full use of the symmetric property of the structure inside the operator and also the dissipation $\mathcal{R}^{\epsilon,\gamma}_{*,g}(f)$ obtained from the lower bound of the operator.
\subsubsection{The proof of Theorem \ref{main2}} We have two novelties in the proof.
\smallskip
\noindent$\bullet$ The first one lies in the localization techniques in the phase space, which are totally new and important considering that the Boltzmann equation is a non-local equation. It shows that the linear and even the non-linear Boltzmann equations can be almost localized, thanks to the commutator estimates between $\mathcal{L}^\epsilon$ and the localization function. This fact enables us to consider the evolution of the local energy, which is the key to proving the diversity of the longtime behavior of $e^{-t\mathcal{L}^\epsilon}f$.
\smallskip
\noindent$\bullet$ We reduce the longtime behavior of $e^{-t\mathcal{L}^\epsilon}f$ to some special ODE system. Based on a technical argument, we obtain the sharp estimate for the ODE system which in turn gives the precise behavior of the semi-group. The result shows that there exists a critical time $t_*$ such that the behavior is totally different before and after $t_*$ which matches the complex property of $\mathcal{L}^\epsilon$.
\subsubsection{Proof of Theorem \ref{main3}} The proof has some new features.
\smallskip
\noindent$\bullet$ Since we only impose the smallness assumption on $\|f\|_{H^2_xL^2}$, we have to find a new way to
prove the propagation of the full regularity. To do that, we first close the energy estimates for the pure $x$-regularity. Then the desired result is reduced to proving that, if we have control of $\dot{\mathcal{E}}^{N-j,j}(f)$ with $j\le N$, then by the equation we can get control of $\dot{\mathcal{E}}^{N-j-1,j+1}(f)$ with the help of the weight functions.
\smallskip
\noindent$\bullet$ To prove the global error estimate, the key idea is to regard the error equation as a linear equation since we already have the control of the solutions to \eqref{linearizedBE} and \eqref{linearizedNBE}.
\subsection{Organization of the paper} In Section 2, we prove Theorem \ref{main1} and also the general upper bounds for the nonlinear term $\Gamma^\epsilon$. Section 3 presents the proof of the longtime behavior of the semi-group $e^{-t\mathcal{L}^\epsilon}$. In Section 4, within the perturbation framework, we prove global well-posedness, global dynamics and global error estimates for the Boltzmann equation with and without angular cutoff. In the appendix, we list some useful lemmas which are of great importance for the bounds of the operator.
\section{Behavior of the Linearized Boltzmann Collision operator}
In this section, we will prove Theorem \ref{main1}. To do that, we separate the proof into three parts: lower bounds of $\langle \mathcal{L}^\epsilon f, f\rangle_v$, the general upper bounds for $\langle \Gamma^\epsilon (g,h), f\rangle_v$ and the commutator estimates between the collision operator $ \Gamma^\epsilon (g,\cdot)$ and the weight function $W_{l}$ which are crucial to Theorem \ref{main2} and Theorem \ref{main3}. Throughout this section, we assume that $\epsilon\le\epsilon_0$ with $\epsilon_0$ sufficiently small.
\subsection{Lower bounds of the collision operator $\mathcal{L}^\epsilon$} Our strategy of the proof can be summarized as follows. We first give the descriptions of $\mathcal{R}_g^{\epsilon,\gamma}(f) $ and $ \mathcal{M}^{\epsilon,\gamma}(f)$.
Then the lower bound of $\mathcal{L}^\epsilon$ is concluded by proving $\langle \mathcal{L}^{\epsilon}f, f\rangle_v + |f|^{2}_{L^{2}_{\gamma/2}} \gtrsim \mathcal{R}_g^{\epsilon,\gamma}(f) + \mathcal{M}^{\epsilon,\gamma}(f)$.
\subsubsection{Description of $\mathcal{M}^{\epsilon,\gamma}(f) $} Now we state a lemma on the description of $\mathcal{M}^{\epsilon,\gamma}(f) $.
\begin{lem}\label{lowerboundpart1} There exists $\epsilon_{0} >0$ such that for $0< \epsilon \leq \epsilon_{0}$,
\beno
\mathcal{M}^{\epsilon,\gamma}(f) + |f|^{2}_{L^{2}_{\gamma/2}} \sim |W^{\epsilon}f|^{2}_{L^{2}_{\gamma/2}}.
\eeno\end{lem}
\begin{proof} We divide the proof into two steps.
{\it Step 1: The lower bound of $\mathcal{M}^{\epsilon,\gamma}(f)$.} For simplicity, denote $M = \mu^{1/2}$; then one has $\nabla M = -\frac{M}{2} v$ and $\nabla^{2} M = \frac{M}{4} (-2I+v \otimes v)$. By Taylor expansion, we have
\beno
M(v^{\prime}) - M(v) = -\frac{M(v)}{2} v \cdot (v^{\prime}-v) + \int_{0}^{1} \frac{1-\kappa}{2} (\nabla^{2} M) (v(\kappa)):(v^{\prime}-v)\otimes(v^{\prime}-v) d \kappa,
\eeno
where $v(\kappa) = v+\kappa (v^{\prime}-v)$.
Thanks to the fact $(a-b)^{2} \geq \frac{a^{2}}{2} - b^{2}$, we have
\beno
(M(v^{\prime}) - M(v))^{2} \geq \frac{M^{2}(v)}{8} |v \cdot (v^{\prime}-v)|^{2} - \frac{1}{4}\int_{0}^{1} |(\nabla^{2} M) (v(\kappa))|^{2}|v^{\prime}-v|^{4} d \kappa.
\eeno
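For completeness, the elementary inequality $(a-b)^{2} \geq \frac{a^{2}}{2} - b^{2}$ used above follows from completing a square:
\beno
(a-b)^{2} = \frac{a^{2}}{2} - b^{2} + \big(\frac{a}{\sqrt{2}} - \sqrt{2}\,b\big)^{2} \geq \frac{a^{2}}{2} - b^{2}.
\eeno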
Let $0 <\eta <1$ and $r \geq 8/\pi$ satisfy $\epsilon \leq \eta/(2r)$. We set $A(\epsilon, \eta, r) = \{(v_{*},v,\sigma): 2r \leq |v_{*}| \leq \eta/\epsilon, |v| \leq r, 2\eta|v-v_{*}|^{-1} \leq \theta \leq 4\eta|v-v_{*}|^{-1}\}$. Then $A(\epsilon, \eta, r) $ is a non-empty subset of the original integration region $\{(v_{*},v,\sigma): \epsilon \leq \theta \leq \pi/2\}$. Indeed, if $(v_{*},v,\sigma) \in A(\epsilon, \eta, r)$, then $|v-v_{*}| \geq |v_{*}|-|v| \geq r$, which implies $4\eta|v-v_{*}|^{-1} \leq 4 r^{-1} \leq \pi/2$. On the other hand, $|v-v_{*}| \leq |v|+|v_{*}| \leq r + \eta/\epsilon \leq 3\eta/(2\epsilon)$, which implies $2\eta|v-v_{*}|^{-1} \geq 4\epsilon/3 \geq \epsilon$. Thus we have
\begin{eqnarray}\label{vsmallvstarsmall}
\mathcal{M}^{\epsilon,\gamma}(f) &\geq& \int B^{\epsilon,\gamma} \textbf{1}_{A(\epsilon, \eta, r)} f_{*}^{2} (\mu^{\prime 1/2}-\mu^{1/2})^{2} d\sigma dv dv_{*}
\\&\geq& \frac{1}{8}\int B^{\epsilon,\gamma} \textbf{1}_{A(\epsilon, \eta, r)} M^{2}(v)|v \cdot (v^{\prime}-v)|^{2} f_{*}^{2} d\sigma dv dv_{*} \nonumber
\\&& - \frac{1}{4} \int B^{\epsilon,\gamma} \textbf{1}_{A(\epsilon, \eta, r)} |(\nabla^{2} M) (v(\kappa))|^{2}|v^{\prime}-v|^{4} f_{*}^{2} d\sigma dv dv_{*} d\kappa \nonumber
\\&\eqdefa& \frac{1}{8}\mathcal{M}_{1}^{\epsilon,\gamma} (\eta , r) - \frac{1}{4}\mathcal{M}_{2}^{\epsilon,\gamma} (\eta , r). \nonumber
\end{eqnarray}
{\underline{The estimate of $\mathcal{M}_{1}^{\epsilon,\gamma} (\eta , r)$.}} For fixed $v, v_*$, we introduce an orthonormal basis $(h^{1}_{v,v_{*}},h^{2}_{v,v_{*}}, \frac{v-v_{*}}{|v-v_{*}|})$ such that $d\sigma= \sin\theta d\theta d\phi$. Then one has
\beno
\frac{v^{\prime}-v}{|v^{\prime}-v|} = \cos\frac{\theta}{2}\cos\phi h^{1}_{v,v_{*}} + \cos\frac{\theta}{2}\sin\phi h^{2}_{v,v_{*}} -\sin\frac{\theta}{2} \frac{v-v_{*}}{|v-v_{*}|},
\eeno
and
\beno
\frac{v}{|v|} = c_{1} h^{1}_{v,v_{*}} + c_{2} h^{2}_{v,v_{*}} + c_{3} \frac{v-v_{*}}{|v-v_{*}|},
\eeno
where $c_{3}=\frac{v}{|v|}\cdot \frac{v-v_{*}}{|v-v_{*}|}$ and $c_{1}, c_{2}$ are constants independent of $\theta$ and $\phi$. Then we have
\beno
\frac{v}{|v|} \cdot \frac{v^{\prime}-v}{|v^{\prime}-v|} = c_{1}\cos\frac{\theta}{2}\cos\phi + c_{2}\cos\frac{\theta}{2}\sin\phi - c_{3}\sin\frac{\theta}{2},
\eeno
and thus
\beno
|\frac{v}{|v|} \cdot \frac{v^{\prime}-v}{|v^{\prime}-v|}|^{2} &=& c^{2}_{1}\cos^{2}\frac{\theta}{2}\cos^{2}\phi + c^{2}_{2}\cos^{2}\frac{\theta}{2}\sin^{2}\phi + c^{2}_{3}\sin^{2}\frac{\theta}{2}
\\ && + 2c_{1}c_{2}\cos^{2}\frac{\theta}{2}\cos\phi\sin\phi - 2c_{3}\cos\frac{\theta}{2}\sin\frac{\theta}{2}(c_{1}\cos\phi + c_{2}\sin\phi).
\eeno
Integrating with respect to $\sigma$, we have
\beno
\int b^{\epsilon}(\cos\theta)\textbf{1}_{A(\epsilon, \eta, r)}|v \cdot (v^{\prime}-v)|^{2}d\sigma &=& \int_{0}^{\pi/2}\int_{0}^{2\pi}b^{\epsilon}(\cos\theta)\sin\theta \textbf{1}_{A(\epsilon, \eta, r)}|v \cdot (v^{\prime}-v)|^{2}d\phi d\theta
\\ &\geq& \pi(c^{2}_{1}+c^{2}_{2})|v|^{2}|v-v_{*}|^{2}
\\&& \times \int_{0}^{\pi/2} b^{\epsilon}(\cos\theta)\sin\theta \cos^{2}\frac{\theta}{2} \sin^{2}\frac{\theta}{2}\textbf{1}_{A(\epsilon, \eta, r)}d\theta
\\&\gtrsim& (c^{2}_{1}+c^{2}_{2})|v|^{2}|v-v_{*}|^{2} \textbf{1}_{B(\epsilon, \eta, r)} \int_{2\eta|v-v_{*}|^{-1}}^{4\eta|v-v_{*}|^{-1}} \theta^{1-2s}d\theta
\\&\gtrsim& \eta^{2-2s}(c^{2}_{1}+c^{2}_{2})|v|^{2}|v-v_{*}|^{2s}\textbf{1}_{B(\epsilon, \eta, r)},
\eeno
where $B(\epsilon, \eta, r) = \{(v_{*},v):2r \leq |v_{*}| \leq \eta/\epsilon, |v| \leq r\}$.
Then we arrive at
\beno
\mathcal{M}_{1}^{\epsilon,\gamma} (\eta , r) &\gtrsim& \eta^{2-2s} \int (c^{2}_{1}+c^{2}_{2})|v-v_{*}|^{\gamma+2s}|v|^{2}\textbf{1}_{B(\epsilon, \eta, r)} M^{2}(v) f_{*}^{2}dv dv_{*}
\\&=& \eta^{2-2s} \int (1-(\frac{v}{|v|}\cdot\frac{v_{*}}{|v_{*}|})^{2})|v_{*}|^{2}|v-v_{*}|^{\gamma+2s-2}|v|^{2}\textbf{1}_{B(\epsilon, \eta, r)} M^{2}(v) f_{*}^{2}dv dv_{*},
\eeno
where we have used the fact $c_1^2+c_2^2+c_3^2=1$ and the law of sines $$(1-(\frac{v}{|v|}\cdot\frac{v_{*}}{|v_{*}|})^{2})^{-1}|v-v_{*}|^{2}= (1-c_3^2)^{-1}|v_{*}|^{2}.$$
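Let us briefly justify this identity. In the triangle with vertices $0$, $v$ and $v_{*}$, the angle at the vertex $v$ has cosine $c_{3} = \frac{v}{|v|}\cdot\frac{v-v_{*}}{|v-v_{*}|}$, while the angle at the origin has cosine $\frac{v}{|v|}\cdot\frac{v_{*}}{|v_{*}|}$. Since the sides opposite to these angles have lengths $|v_{*}|$ and $|v-v_{*}|$ respectively, the law of sines gives
\beno
|v-v_{*}|^{2}(1-c_{3}^{2}) = |v_{*}|^{2}\big(1-(\frac{v}{|v|}\cdot\frac{v_{*}}{|v_{*}|})^{2}\big).
\eeno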
Note that in the region $B(\epsilon, \eta, r)$, one has $|v-v_{*}| \leq \frac{3}{2}|v_{*}|$, which implies $|v-v_{*}|^{\gamma+2s-2} \geq (\frac{3}{2}|v_{*}|)^{\gamma+2s-2}$. Denote $c_{1}(r) = \int (1-(\frac{v}{|v|}\cdot\frac{v_{*}}{|v_{*}|})^{2}) |v|^{2} M^{2}(v) \textbf{1}_{|v|\leq r}dv$; by rotational invariance, the value of this integral is independent of $v_{*}$. Thus we have
\beno
\mathcal{M}_{1}^{\epsilon,\gamma} (\eta , r) &\gtrsim& \eta^{2-2s} c_{1}(r) \int |v_{*}|^{\gamma+2s}\textbf{1}_{\{2r \leq |v_{*}|\leq \eta/\epsilon\}} f_{*}^{2} dv_{*}
\\&\gtrsim& \eta^{2-2s} c_{1}(r) (\int \langle v_{*} \rangle^{\gamma+2s}\textbf{1}_{\{|v_{*}|\leq \eta/\epsilon\}} f_{*}^{2} dv_{*}- (2r)^{2s}|f|^{2}_{L^{2}_{\gamma/2}}).
\eeno
{\underline{The estimate of $\mathcal{M}_{2}^{\epsilon,\gamma} (\eta , r)$.}} By the change of variable $v \rightarrow v(\kappa)$, we have
\beno
\mathcal{M}_{2}^{\epsilon,\gamma} (\eta , r) \lesssim \eta^{4-2s} \int \langle v_{*} \rangle^{\gamma+2s}\textbf{1}_{\{|v_{*}|\leq \eta/\epsilon\}} f_{*}^{2} dv_{*}.
\eeno
So we have, for some generic constant $C$,
\beno
\mathcal{M}^{\epsilon,\gamma}(f) \gtrsim \eta^{2-2s} (c_{1}(r)-C\eta^{2}) \int \langle v_{*} \rangle^{\gamma+2s}\textbf{1}_{\{|v_{*}|\leq \eta/\epsilon\}} f_{*}^{2} dv_{*} - c_{1}(r)(2r)^{2s}\eta^{2-2s}|f|^{2}_{L^{2}_{\gamma/2}}.
\eeno
Choosing suitable $\eta$ and $r$, we have
\begin{eqnarray}\label{lowerboundvstarsmall}
\mathcal{M}^{\epsilon,\gamma}(f)+ |f|^{2}_{L^{2}_{\gamma/2}} \gtrsim \int \langle v_{*} \rangle^{\gamma+2s}\textbf{1}_{\{|v_{*}|\leq \eta/\epsilon\}} f_{*}^{2} dv_{*}.
\end{eqnarray}
Let $R$ be large enough. We aim to prove
$\mathcal{M}^{\epsilon,\gamma}(f) \gtrsim \epsilon^{-2s}\int \langle v_{*} \rangle^{\gamma}\textbf{1}_{\{|v_{*}|\geq R/\epsilon\}} f_{*}^{2} dv_{*}.$
By direct computation, we have
\beno
\mathcal{M}^{\epsilon,\gamma}(f) &=& \int B^{\epsilon,\gamma} f_{*}^{2} (\mu^{\prime 1/2}-\mu^{1/2})^{2} d\sigma dv dv_{*}
\\&\geq& \int b^{\epsilon}|v-v_{*}|^{\gamma}\textbf{1}_{\{|v_{*}|\geq R/\epsilon\}} f_{*}^{2} \mu^{\prime} d\sigma dv dv_{*}
+ \int b^{\epsilon}|v-v_{*}|^{\gamma}\textbf{1}_{\{|v_{*}|\geq R/\epsilon\}} f_{*}^{2} \mu d\sigma dv dv_{*}
\\&&- 2\int b^{\epsilon}|v-v_{*}|^{\gamma}\textbf{1}_{\{|v_{*}|\geq R/\epsilon\}} f_{*}^{2} \mu^{\prime 1/2}\mu^{1/2} d\sigma dv dv_{*}
\\&\eqdefa&\mathcal{M}_{1}^{\epsilon,\gamma}(R)+\mathcal{M}_{2}^{\epsilon,\gamma}(R)-\mathcal{M}_{3}^{\epsilon,\gamma}(R).
\eeno
By the change of variable $v \rightarrow v^{\prime}$, with $\cos\tilde{\theta}=\f{v'-v_*}{|v'-v_*|}\cdot \sigma$, we have
\beno
\mathcal{M}_{1}^{\epsilon,\gamma}(R)&\geq& (2\pi)\int_{4\epsilon}^{\pi/4}\tilde{\theta}^{-2-2s}\sin\tilde{\theta} d\tilde{\theta} \int |v-v_{*}|^{\gamma}\textbf{1}_{\{|v_{*}| \geq R/\epsilon\}} f_{*}^{2} \mu dv dv_{*}
\\&\gtrsim& K^{-1}\epsilon^{-2s}\int \langle v_{*} \rangle^{\gamma}\textbf{1}_{\{|v_{*}| \geq R/\epsilon\}} f_{*}^{2}dv_{*}.
\eeno
Similarly, $\mathcal{M}_{2}^{\epsilon,\gamma}(R) \gtrsim K^{-1}\epsilon^{-2s}\int \langle v_{*} \rangle^{\gamma}\textbf{1}_{\{|v_{*}| \geq R/\epsilon\}} f_{*}^{2}dv_{*}.$
Since $\theta \geq \epsilon$, there holds $|v^{\prime}|+|v| \geq |v^{\prime}-v| = \sin\frac{\theta}{2}|v-v_{*}|\geq\frac{\epsilon}{\pi}|v-v_{*}|\geq \frac{\epsilon}{\pi}(|v_{*}|-|v|)$, that is $|v^{\prime}|+(1+\frac{\epsilon}{\pi})|v| \geq \frac{\epsilon}{\pi}|v_{*}| \geq \frac{R}{\pi}$. Then
$R^{2}/16 \leq 4(|v^{\prime}|+|v|)^{2} \leq 8 (|v^{\prime}|^{2}+|v|^{2}),$
which implies
\beno
\mu^{\prime 1/2}\mu^{1/2} = e^{-\frac{|v^{\prime}|^{2}+|v|^{2}}{4}} \leq e^{-\frac{|v|^{2}}{8}}e^{-\frac{R^{2}}{2^{10}}}.
\eeno
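Indeed, $R^{2}/16 \leq 8 (|v^{\prime}|^{2}+|v|^{2})$ gives $|v^{\prime}|^{2}+|v|^{2} \geq R^{2}/128$, whence
\beno
\frac{|v^{\prime}|^{2}+|v|^{2}}{4} \geq \frac{|v|^{2}}{8} + \frac{|v^{\prime}|^{2}+|v|^{2}}{8} \geq \frac{|v|^{2}}{8} + \frac{R^{2}}{2^{10}},
\eeno
which yields the exponential factors above.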
Thus we have
\beno
\mathcal{M}_{3}^{\epsilon,\gamma}(R) \lesssim e^{-\frac{R^{2}}{2^{10}}} K\epsilon^{-2s}\int \langle v_{*} \rangle^{\gamma}\textbf{1}_{\{|v_{*}| \geq R/\epsilon\}} f_{*}^{2}dv_{*}.
\eeno
Patching together the above estimates, we arrive at
\beno
\mathcal{M}^{\epsilon,\gamma}(f) \gtrsim (K^{-1} - Ke^{-\frac{R^{2}}{2^{10}}})\epsilon^{-2s}\int \langle v_{*} \rangle^{\gamma}\textbf{1}_{\{|v_{*}|\geq R/\epsilon\}} f_{*}^{2} dv_{*}.
\eeno
Note that the above estimate is valid for any $R>0$ and $\epsilon \leq 1$.
Fix $\eta>0$, and choose $R=N\eta$ where $N$ is large enough such that $K^{-1} - Ke^{-\frac{(N\eta)^{2}}{2^{10}}} \geq \frac{K^{-1}}{2}$. Then we have
\beno
\mathcal{M}^{N\epsilon,\gamma}(f) &\gtrsim& (K^{-1} - Ke^{-\frac{(N\eta)^{2}}{2^{10}}})(N\epsilon)^{-2s}\int \langle v_{*} \rangle^{\gamma}\textbf{1}_{\{|v_{*}|\geq \eta/\epsilon\}} f_{*}^{2} dv_{*}
\\&\geq& \frac{K^{-1}}{2}N^{-2s}\epsilon^{-2s}\int \langle v_{*} \rangle^{\gamma}\textbf{1}_{\{|v_{*}|\geq \eta/\epsilon\}} f_{*}^{2} dv_{*}.
\eeno
It is obvious that $\mathcal{M}^{\epsilon,\gamma}(f) \geq \mathcal{M}^{N\epsilon,\gamma}(f)$. Combining this with (\ref{lowerboundvstarsmall}), we arrive at
\beno
\mathcal{M}^{\epsilon,\gamma}(f) + |f|^{2}_{L^{2}_{\gamma/2}} \gtrsim \int \langle v_{*} \rangle^{\gamma+2s}\textbf{1}_{\{|v_{*}|\leq \eta/\epsilon\}} f_{*}^{2} dv_{*}+\epsilon^{-2s}\int \langle v_{*} \rangle^{\gamma}\textbf{1}_{\{|v_{*}|\geq \eta/\epsilon\}} f_{*}^{2} dv_{*}
\gtrsim |W^{\epsilon}f|^{2}_{L^{2}_{\gamma/2}}.
\eeno
{\it Step 2: The upper bound of $\mathcal{M}^{\epsilon,\gamma}(f)$.} First we have
\beno
\mathcal{M}^{\epsilon,\gamma}(f) &\lesssim& \int B^{\epsilon,\gamma} f_{*}^{2} (\mu^{\prime 1/4}-\mu^{1/4})^{2}(\mu^{\prime 1/2}+\mu^{1/2}) d\sigma dv dv_{*}
\\&\lesssim& \int B^{\epsilon,\gamma} f_{*}^{2} (\mu^{\prime 1/4}-\mu^{1/4})^{2}\mu^{\prime 1/2} d\sigma dv dv_{*} + \int B^{\epsilon,\gamma} f_{*}^{2} (\mu^{\prime 1/4}-\mu^{1/4})^{2}\mu^{1/2} d\sigma dv dv_{*}
\\&\eqdefa& \mathcal{M}^{\epsilon,\gamma}_{1}(f) + \mathcal{M}^{\epsilon,\gamma}_{2}(f).
\eeno
By Taylor expansion, one has
$(\mu^{\prime 1/4} - \mu^{1/4})^{2} \lesssim \min\{1,|v-v_{*}|^{2}\theta^{2}\} \sim \min\{1,|v^{\prime}-v_{*}|^{2}\theta^{2}\}.$
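Indeed, since $\mu^{1/4}$ and $\nabla \mu^{1/4}$ are bounded, and since $|v^{\prime}-v| = |v-v_{*}|\sin\frac{\theta}{2}$ and $|v^{\prime}-v_{*}| = |v-v_{*}|\cos\frac{\theta}{2}$ with $\theta \in [0,\pi/2]$, we have
\beno
(\mu^{\prime 1/4} - \mu^{1/4})^{2} \lesssim \min\{1,|v^{\prime}-v|^{2}\} \lesssim \min\{1,|v-v_{*}|^{2}\theta^{2}\},
\eeno
and the equivalence $|v-v_{*}| \sim |v^{\prime}-v_{*}|$ in this range of $\theta$ gives the second comparison.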
By Proposition \ref{symbol}, we have
$
\int b^{\epsilon}(\cos\theta) \min\{1,|v-v_{*}|^{2}\theta^{2}\} d\sigma \sim |v-v_{*}|^{2} \textbf{1}_{|v-v_{*}|\leq 2} + (W^{\epsilon})^2(|v-v_*|) \textbf{1}_{|v-v_{*}|\ge 2}.
$
After checking
\begin{eqnarray}\label{combinewithgamma}
(W^{\epsilon})^2(|v-v_*|) \lesssim (W^{\epsilon})^{2}(v)(W^{\epsilon})^{2}(v_{*}),
\end{eqnarray}
we have
$
\int b^{\epsilon}(\cos\theta) \min\{1,|v-v_{*}|^{2}\theta^{2}\} d\sigma \lesssim (W^{\epsilon})^{2}(v)(W^{\epsilon})^{2}(v_{*}).
$
Thus we have
\beno
\mathcal{M}^{\epsilon,\gamma}_{2}(f) \lesssim \int f_{*}^{2} |v-v_{*}|^{\gamma}(W^{\epsilon})^{2}(v)(W^{\epsilon})^{2}(v_{*}) \mu^{1/2} dv dv_{*} \lesssim |W^\epsilon f|_{L^2_{\gamma/2}}^2.
\eeno
The term $\mathcal{M}^{\epsilon,\gamma}_{1}(f)$ can be similarly estimated by the change of variable $v \rightarrow v^{\prime}$. Indeed, one has $\mathcal{M}^{\epsilon,\gamma}_{1}(f) \lesssim \int b^{\epsilon}(\cos(2\theta^{\prime})) |v^{\prime}-v_{*}|^{\gamma} f_{*}^{2} (\mu^{\prime 1/4}-\mu^{1/4})^{2}\mu^{\prime 1/2} d\sigma dv^{\prime} dv_{*},$
where $\theta^{\prime}$ is the angle between $v^{\prime}-v_{*}$ and $\sigma$. With the fact $\theta^{\prime} = \theta/2$, we also have
\beno
\int b^{\epsilon}(\cos(2\theta^{\prime})) \min\{1,|v^{\prime}-v_{*}|^{2}\theta^{2}\} d\sigma \lesssim (W^{\epsilon})^{2}(v^{\prime})(W^{\epsilon})^{2}(v_{*}).
\eeno
Thus by exactly the same argument as that for $\mathcal{M}^{\epsilon,\gamma}_{2}(f)$, we have $\mathcal{M}^{\epsilon,\gamma}_{1}(f) \lesssim |W^\epsilon f|_{L^2_{\gamma/2}}^2$.
The proof of the lemma is complete.
\end{proof}
\subsubsection{Description of $\mathcal{R}_\mu^{\epsilon,\gamma}(f)$ } Following the computation in \cite{advw}, we can obtain that
\beno \langle \mathcal{L}^{\epsilon}f, f\rangle_v+|f|_{L^2_{\gamma/2}}^2\gtrsim |W^\epsilon(D)f|_{L^2}^2, \eeno
where we use Proposition \ref{symbol} in the appendix.
Thus we only need to derive the anisotropic regularity from the lower bound of $\mathcal{R}_\mu^{\epsilon,\gamma}(f)$. To do that, our key idea is to apply the geometric decomposition in the frequency space rather than in the phase space. We start with three technical lemmas.
\begin{lem}\label{a-technical-lemma}
For any smooth function $f$, we have
\beno \mathcal{A} \eqdefa \int_{\R^{3}}\int_{\epsilon}^{\pi/4} \theta^{-1-2s}|f(v) - f(v/\cos\theta)|^{2} dv d\theta \lesssim |W^{\epsilon}(D)f|^{2}_{L^{2}} + |W^{\epsilon}f|^{2}_{L^{2}}.\eeno
\end{lem}
\begin{proof}
First applying dyadic decomposition in the phase space, we have
\beno
\mathcal{A} &=& \int_{\R^{3}}\int_{\epsilon}^{\pi/4} \theta^{-1-2s}| \sum_{k=-1}^{\infty}(\varphi_{k}f)(v)- \sum_{k=-1}^{\infty}(\varphi_{k}f)(v/\cos\theta)|^{2} dv d\theta
\\&\lesssim& \sum_{k=-1}^{\infty}\int_{\R^{3}}\int_{\epsilon}^{\pi/4} \theta^{-1-2s}| (\varphi_{k}f)(v)- (\varphi_{k}f)(v/\cos\theta)|^{2} dv d\theta \eqdefa\sum_{k=-1}^{\infty}\mathcal{A}_{k}.
\eeno
It is easy to check $\sum_{2^{k} \geq 1/\epsilon} \mathcal{A}_{k} \lesssim |W^{\epsilon}f|^{2}_{L^{2}}$.
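Indeed, for $2^{k} \geq 1/\epsilon$, bounding $|(\varphi_{k}f)(v)- (\varphi_{k}f)(v/\cos\theta)|^{2}$ by $2|(\varphi_{k}f)(v)|^{2}+2|(\varphi_{k}f)(v/\cos\theta)|^{2}$ and applying the change of variable $v \rightarrow v/\cos\theta$ to the second term, we get
\beno
\mathcal{A}_{k} \lesssim \big(\int_{\epsilon}^{\pi/4}\theta^{-1-2s}d\theta\big)|\varphi_{k}f|^{2}_{L^{2}} \lesssim \epsilon^{-2s}|\varphi_{k}f|^{2}_{L^{2}};
\eeno
since $W^{\epsilon}(v) \sim \epsilon^{-s}$ on the support of $\varphi_{k}$ in this regime, summing over $2^{k} \geq 1/\epsilon$ yields the claim.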
For the case $2^{k} \leq 1/\epsilon$, by Fourier transform and dyadic decomposition in the frequency space, we have
\beno
\mathcal{A}_{k}&=&
\int_{\R^{3}}\int_{\epsilon}^{\pi/4} \theta^{-1-2s}|\widehat{\varphi_{k}f}(\xi)- \cos^{3}\theta\widehat{\varphi_{k}f}(\xi\cos\theta)|^{2} d\xi d\theta
\\&\lesssim& \int_{\R^{3}}\int_{\epsilon}^{\pi/4} \theta^{-1-2s}|\widehat{\varphi_{k}f}(\xi)- \widehat{\varphi_{k}f}(\xi\cos\theta)|^{2} d\xi d\theta + |\varphi_{k}f|^{2}_{L^{2}}
\\&=& \int_{\R^{3}}\int_{\epsilon}^{\pi/4} \theta^{-1-2s}|\sum_{l=-1}^{\infty}(\varphi_{l}\widehat{\varphi_{k}f})(\xi)- \sum_{l=-1}^{\infty}(\varphi_{l}\widehat{\varphi_{k}f})(\xi\cos\theta)|^{2} d\xi d\theta + |\varphi_{k}f|^{2}_{L^{2}}
\\&\lesssim& \sum_{l=-1}^{\infty} \int_{\R^{3}}\int_{\epsilon}^{\pi/4} \theta^{-1-2s}|(\varphi_{l}\widehat{\varphi_{k}f})(\xi)- (\varphi_{l}\widehat{\varphi_{k}f})(\xi\cos\theta)|^{2} d\xi d\theta + |\varphi_{k}f|^{2}_{L^{2}}
\\&\eqdefa& \sum_{l=-1}^{\infty}\mathcal{A}_{k,l} + |\varphi_{k}f|^{2}_{L^{2}}.
\eeno
Note that $\sum_{2^{l} \geq 1/\epsilon} \mathcal{A}_{k,l} \lesssim |W^{\epsilon}(D)\varphi_{k}f|^{2}_{L^{2}}$, thus $\mathcal{A}_{k} \lesssim \sum_{2^{l} \leq 1/\epsilon} \mathcal{A}_{k,l} + |W^{\epsilon}(D)\varphi_{k}f|^{2}_{L^{2}}+|\varphi_{k}f|^{2}_{L^{2}}$.
Noticing that $\sum_{k=-1}^{\infty}|\varphi_{k}f|^{2}_{L^{2}} \lesssim |f|^{2}_{L^{2}}$ and using (\ref{decompostionpacth}), we have
\beno
\mathcal{A} \lesssim \sum_{2^{k} \leq 1/\epsilon,2^{l}\leq 1/\epsilon}\mathcal{A}_{k,l}+ |W^{\epsilon}(D)f|^{2}_{L^{2}} + |W^{\epsilon}f|^{2}_{L^{2}}.
\eeno
For each $k$ and $l$, we have
\beno
\mathcal{A}_{k,l} &=& \int_{\R^{3}}\int_{\epsilon}^{2^{-k/2-l/2}} \theta^{-1-2s}|(\varphi_{l}\widehat{\varphi_{k}f})(\xi)- (\varphi_{l}\widehat{\varphi_{k}f})(\xi\cos\theta)|^{2} d\xi d\theta
\\&&+ \int_{\R^{3}}\int_{2^{-k/2-l/2}}^{\pi/4} \theta^{-1-2s}|(\varphi_{l}\widehat{\varphi_{k}f})(\xi)- (\varphi_{l}\widehat{\varphi_{k}f})(\xi\cos\theta)|^{2} d\xi d\theta
\\&\leq& \int_{\R^{3}}\int_{\epsilon}^{2^{-k/2-l/2}} \theta^{-1-2s}|(\varphi_{l}\widehat{\varphi_{k}f})(\xi)- (\varphi_{l}\widehat{\varphi_{k}f})(\xi\cos\theta)|^{2} d\xi d\theta
+ 2^{s(l+k)}|\varphi_{l}\widehat{\varphi_{k}f}|^{2}_{L^{2}}
\\&\eqdefa&\mathcal{B}_{k,l}+ 2^{s(l+k)}|\varphi_{l}\widehat{\varphi_{k}f}|^{2}_{L^{2}}.
\eeno
By Taylor expansion,
$
(\varphi_{l}\widehat{\varphi_{k}f})(\xi)- (\varphi_{l}\widehat{\varphi_{k}f})(\xi\cos\theta) = (1-\cos\theta)\int_{0}^{1}
(\nabla \varphi_{l}\widehat{\varphi_{k}f})(\xi(\kappa))\cdot \xi d\kappa,
$
where $\xi(\kappa) = (1-\kappa)\xi\cos\theta + \kappa \xi$. Thus we obtain
\beno
\mathcal{B}_{k,l}
\lesssim \int_{0}^{1}\int_{\R^{3}}\int_{\epsilon}^{2^{-k/2-l/2}} \theta^{3-2s}|\xi|^{2}|\nabla \varphi_{l}\widehat{\varphi_{k}f}|^{2}(\xi(\kappa)) d\xi d\theta d\kappa.
\eeno
By the change of variable $\xi \rightarrow \eta = \xi(\kappa)$, we have
\beno
\mathcal{B}_{k,l}
&=& \int_{0}^{1}\int_{\R^{3}}\int_{\epsilon}^{2^{-k/2-l/2}} \theta^{3-2s}\frac{|\eta|^{2}}{((1-\kappa)\cos\theta + \kappa)^{5}}|\nabla \varphi_{l}\widehat{\varphi_{k}f}|^{2}(\eta) d\eta d\theta d\kappa
\\&\lesssim&\int_{\R^{3}}\int_{\epsilon}^{2^{-k/2-l/2}} \theta^{3-2s}|\eta|^{2}|\nabla \varphi_{l}\widehat{\varphi_{k}f}|^{2}(\eta) d\eta d\theta
\\&\lesssim& 2^{-(2-s)(l+k)} \int_{\R^{3}} |\eta|^{2}|\nabla \varphi_{l}\widehat{\varphi_{k}f}|^{2}(\eta) d\eta
\lesssim 2^{s(l+k)}2^{-2k} \int_{\R^{3}} |\nabla \varphi_{l}\widehat{\varphi_{k}f}|^{2}(\eta) d\eta.
\eeno
Note that
$
(\nabla \varphi_{l}\widehat{\varphi_{k}f})(\eta) = (\nabla \varphi_{l}) \widehat{\varphi_{k}f} + \varphi_{l} \nabla \widehat{\varphi_{k}f}
= 2^{-l}(\nabla \varphi) (\frac{\eta}{2^{l}}) \widehat{\varphi_{k}f}(\eta) - i (\varphi_{l} \widehat{v\varphi_{k}f}) (\eta),
$
thus we have
$
|(\nabla \varphi_{l}\widehat{\varphi_{k}f})|^{2}(\eta) \lesssim 2^{-2l}|\varphi|^{2}_{W^{1,\infty}} |\widehat{\varphi_{k}f}|^{2}(\eta)
+ |\varphi_{l}\widehat{v \varphi_{k}f}|^{2}(\eta),
$
which implies that
\beno
\mathcal{B}_{k,l}
\lesssim 2^{-(2-s)(l+k)} |\varphi_{k}f|^{2}_{L^{2}} + 2^{s(l+k)-2k}|\varphi_{l} \widehat{v\varphi_{k}f}|^{2}_{L^{2}}.
\eeno
We finally arrive at
\beno
\mathcal{A}_{k,l} &\lesssim& 2^{-(2-s)(l+k)} |\varphi_{k}f|^{2}_{L^{2}} + 2^{s(l+k)-2k}|\varphi_{l} \widehat{v\varphi_{k}f}|^{2}_{L^{2}} + 2^{s(l+k)}|\varphi_{l}\widehat{\varphi_{k}f}|^{2}_{L^{2}}\\&\eqdefa&\mathcal{A}_{k,l,1}+\mathcal{A}_{k,l,2}+\mathcal{A}_{k,l,3}.
\eeno
The first term is estimated by $\sum_{2^{k} \leq 1/\epsilon,2^{l}\leq 1/\epsilon} \mathcal{A}_{k,l,1} \lesssim |f|^{2}_{L^{2}}.$
For the second term, we have
\beno
\sum_{2^{k} \leq 1/\epsilon,2^{l}\leq 1/\epsilon} \mathcal{A}_{k,l,2}
&\lesssim& \sum_{j=1}^{3}\sum_{2^{k} \leq 1/\epsilon,2^{l}\leq 1/\epsilon} 2^{2sl}2^{-2k}|\varphi_{l} \widehat{v_{j}\varphi_{k}f}|^{2}_{L^{2}} + \sum_{j=1}^{3}\sum_{2^{k} \leq 1/\epsilon,2^{l}\leq 1/\epsilon} 2^{2sk}2^{-2k}|\varphi_{l} \widehat{v_{j}\varphi_{k}f}|^{2}_{L^{2}}
\\&\lesssim& \sum_{j=1}^{3}\sum_{2^{k} \leq 1/\epsilon} 2^{-2k}|W^{\epsilon}\widehat{v_{j}\varphi_{k}f}|^{2}_{L^{2}} + \sum_{j=1}^{3}\sum_{2^{k} \leq 1/\epsilon} 2^{2sk}2^{-2k}|v_{j} \varphi_{k}f|^{2}_{L^{2}}
\\&\lesssim& \sum_{j=1}^{3}\sum_{2^{k} \leq 1/\epsilon} 2^{-2k}|W^{\epsilon}(D) v_{j} \varphi_{k}f|^{2}_{L^{2}} + |W^{\epsilon}f|^{2}_{L^{2}}\lesssim |W^{\epsilon}(D)f|^{2}_{L^{2}} + |W^{\epsilon}f|^{2}_{L^{2}}.
\eeno
In the last inequality, we apply Lemma \ref{operatorcommutator1} to get that $|W^{\epsilon}(D) v_{j} \varphi_{k}f|^{2}_{L^{2}} \lesssim |v_{j} \varphi_{k}W^{\epsilon}(D)f|^{2}_{L^{2}} + |f|^{2}_{H^{s-1}}$
thanks to $W^{\epsilon}\in S^{s}_{1,0}$ and $v_{j}\varphi_{k} \in S^{1}_{1,0}$ (see Definition \ref{psuopde} for $S^m_{1,0}$).
As for the sum of the last term, we have
\beno
\sum_{2^{k} \leq 1/\epsilon,2^{l}\leq 1/\epsilon} \mathcal{A}_{k,l,3}
&\lesssim& \sum_{2^{k} \leq 1/\epsilon,2^{l}\leq 1/\epsilon} 2^{2sl}|\varphi_{l}\widehat{\varphi_{k}f}|^{2}_{L^{2}} +
\sum_{2^{k} \leq 1/\epsilon,2^{l}\leq 1/\epsilon} 2^{2sk}|\varphi_{l}\widehat{\varphi_{k}f}|^{2}_{L^{2}}
\\&\lesssim& \sum_{2^{k} \leq 1/\epsilon} |W^{\epsilon}(D)\varphi_{k}f|^{2}_{L^{2}} + \sum_{2^{k} \leq 1/\epsilon} 2^{2sk}|\widehat{\varphi_{k}f}|^{2}_{L^{2}}
\lesssim |W^{\epsilon}(D)f|^{2}_{L^{2}} + |W^{\epsilon}f|^{2}_{L^{2}}.
\eeno
The lemma follows from the above estimates.
\end{proof}
\begin{lem}\label{gammanonzerotozero}
Let $
\mathcal{Z}^{\epsilon,\gamma}(f) \eqdefa \int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)\langle u\rangle^{\gamma} (f(|u|\frac{u^{+}}{|u^{+}|})-f(u^{+}))^{2} d\sigma du
$ with $u^{+} = \frac{u + |u|\sigma}{2}$. Then
\beno
\mathcal{Z}^{\epsilon,\gamma}(f) &\lesssim& |W^{\epsilon}(D)W_{\gamma/2}f|^{2}_{L^{2}} + |W^{\epsilon}W_{\gamma/2}f|^{2}_{L^{2}}.
\eeno
\end{lem}
\begin{proof} We divide the proof into two steps.
{\it Step 1: $\gamma=0$.}
By the change of variable $(u, \sigma) \rightarrow (r, \tau, \varsigma)$ with $u=r\tau$ and $\varsigma=\f{\sigma+\tau}{|\sigma+\tau|}$, we have
\beno
\mathcal{Z}^{\epsilon, 0}(f) = 4 \int b^{\epsilon}(2(\tau\cdot\varsigma)^{2} - 1)|f(r\varsigma) - f((\tau\cdot\varsigma)r\varsigma)|^{2} (\tau\cdot\varsigma) r^{2} dr d \tau d \varsigma.
\eeno
Let $\eta = r\varsigma$, and $\theta$ be the angle between $\tau$ and $\varsigma$. Since $b^{\epsilon}(2(\tau\cdot\varsigma)^{2} - 1) = b^{\epsilon}(\cos2\theta) \sim \theta^{-2-2s}$, and $r^{2} dr d \tau d \varsigma = \sin\theta d\eta d\theta d\SS$, we have
\beno
\mathcal{Z}^{\epsilon,0}(f) &\lesssim& \int_{\R^{3}}\int_{\epsilon}^{\pi/4} \theta^{-1-2s}|f(\eta) - f(\eta\cos\theta)|^{2} d\eta d\theta
\\&\lesssim& \int_{\R^{3}}\int_{\epsilon}^{\pi/4} \theta^{-1-2s}|f(\eta) - f(\eta/\cos\theta)|^{2} d\eta d\theta
\lesssim |W^{\epsilon}(D) f|^{2}_{L^{2}} + |W^{\epsilon} f|^{2}_{L^{2}},
\eeno
where the last inequality comes from Lemma \ref{a-technical-lemma}.
{\it Step 2: general cases.} We reduce the general case to the special case $\gamma = 0$. For simplicity, denote $w = |u|\frac{u^{+}}{|u^{+}|}$, then $W_{\gamma}(u) = W_{\gamma}(w)$. Thanks to this fact, we have
\beno
&&\langle u\rangle^{\gamma} (f(w)-f(u^{+}))^{2} \\&=& \{[(W_{\gamma/2}f)(w)-(W_{\gamma/2}f)(u^{+})]
+ (W_{\gamma/2}f)(u^{+})(1-W_{\gamma/2}(w)W_{-\gamma/2}(u^{+}))\}^{2}
\\&\leq& 2 [(W_{\gamma/2}f)(u^{+})-(W_{\gamma/2}f)(w)]^{2}
+ 2 |(W_{\gamma/2}f)(u^{+})|^{2}|1-W_{\gamma/2}(w)W_{-\gamma/2}(u^{+})|^{2}.
\eeno
Thus we have
\beno
\mathcal{Z}^{\epsilon,\gamma}(f) &\lesssim& \mathcal{Z}^{\epsilon,0}(W_{\gamma/2}f)
+ \int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)|(W_{\gamma/2}f)(u^{+})|^{2}|1-W_{\gamma/2}(w)W_{-\gamma/2}(u^{+})|^{2} du d\sigma
\\&\eqdefa& \mathcal{Z}^{\epsilon,0}(W_{\gamma/2}f) + \mathcal{A}.
\eeno
By noticing that
$
|W_{\gamma/2}(w)W_{-\gamma/2}(u^{+}) - 1| \lesssim \theta^{2},
$
we have
$
|\mathcal{A}| \lesssim |W_{\gamma/2}f|^{2}_{L^{2}},
$
where the change of variable $u \rightarrow u^{+}$ is used.
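Let us record the proof of the quadratic bound on the weight ratio. Since $|w| = |u|$ and $|u^{+}| = |u|\cos\frac{\theta}{2}$ with $\theta \in [0,\pi/2]$, we have
\beno
|W_{\gamma/2}(w)W_{-\gamma/2}(u^{+}) - 1| = |\big(\frac{1+|u|^{2}}{1+|u|^{2}\cos^{2}\frac{\theta}{2}}\big)^{\gamma/4} - 1| \lesssim \frac{|u|^{2}\sin^{2}\frac{\theta}{2}}{1+|u|^{2}\cos^{2}\frac{\theta}{2}} \lesssim \theta^{2}.
\eeno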
The desired result follows from the estimates in {\it Step 1}.
\end{proof}
Next we want to show
\begin{prop} For any smooth function $f$ defined on $\SS^2$, we have \begin{eqnarray}\label{similarlemma5.5}
\int_{\SS^2\times\SS^2}\frac{|f(\sigma)-f(\tau)|^{2}}{|\sigma-\tau|^{2+2s}}\mathbf{1}_{\{|\sigma-\tau| \geq \epsilon\}} d\sigma d\tau + |f|^{2}_{L^{2}(\SS^{2})}
\sim |W^{\epsilon}((-\Delta_{\SS^{2}})^{1/2})f|^{2}_{L^{2}(\SS^{2})} + |f|^{2}_{L^{2}(\SS^{2})}.
\end{eqnarray}
As a consequence, we have
\begin{eqnarray}\label{similarlemma5.6}
\int_{\mathbb{R}_{+}\times\SS^2\times\SS^2}\frac{|f(r\sigma)-f(r\tau)|^{2}}{|\sigma-\tau|^{2+2s}} \mathbf{1}_{\{|\sigma-\tau| \geq \epsilon\}} r^{2}d\sigma d\tau dr + |f|^{2}_{L^{2}}
\sim |W^{\epsilon}((-\Delta_{\SS^{2}})^{1/2})f|^{2}_{L^{2}} + |f|^{2}_{L^{2}}.
\end{eqnarray}
\end{prop}
\begin{proof}
We prove it in the spirit of \cite{he2}. By the addition theorem for spherical harmonics, we have
\beno
\int_{\SS^2\times\SS^2}\frac{|f(\sigma)-f(\tau)|^{2}}{|\sigma-\tau|^{2+2s}}\textbf{1}_{\{|\sigma-\tau| \geq \epsilon\}} d\sigma d\tau =\sum_{l=0}^{\infty}\sum_{m=-l}^{l}(f^{m}_{l})^{2} \int_{\SS^2\times\SS^2}\frac{|Y^{m}_{l}(\sigma)-Y^{m}_{l}(\tau)|^{2}}{|\sigma-\tau|^{2+2s}}\textbf{1}_{\{|\sigma-\tau| \geq \epsilon\}} d\sigma d\tau,
\eeno where $f^m_l=\int_{\SS^2} fY^m_ld\sigma$.
For simplicity, let $\mathcal{A}^{\epsilon}_{l} = \int_{\SS^2\times\SS^2}\frac{|Y^{m}_{l}(\sigma)-Y^{m}_{l}(\tau)|^{2}}{|\sigma-\tau|^{2+2s}}\textbf{1}_{\{|\sigma-\tau| \geq \epsilon\}} d\sigma d\tau$. We now analyze $\mathcal{A}^{\epsilon}_{l}$.
{\it Case 1: $\epsilon^{2} l(l+1) \leq \eta$.} We have
\beno
\mathcal{A}^{\epsilon}_{l} &=& \int_{\SS^2\times\SS^2}\frac{|Y^{m}_{l}(\sigma)-Y^{m}_{l}(\tau)|^{2}}{|\sigma-\tau|^{2+2s}}d\sigma d\tau -\int_{\SS^2\times\SS^2}\frac{|Y^{m}_{l}(\sigma)-Y^{m}_{l}(\tau)|^{2}}{|\sigma-\tau|^{2+2s}}\textbf{1}_{\{|\sigma-\tau| \leq \epsilon\}} d\sigma d\tau. \eeno
Then Lemma 5.5 in \cite{he2} yields
\beno &&|(-\Delta_{\SS^{2}})^{s/2}Y^{m}_{l}|^{2}_{L^{2}(\SS^{2})} - |Y^{m}_{l}|^{2}_{L^{2}(\SS^{2})} - \epsilon^{2-2s}|\nabla_{\SS^{2}}Y^{m}_{l}|^{2}_{L^{2}(\SS^{2})}\\ &\lesssim& \mathcal{A}^{\epsilon}_{l}
\lesssim |(-\Delta_{\SS^{2}})^{s/2}Y^{m}_{l}|^{2}_{L^{2}(\SS^{2})}+ |Y^{m}_{l}|^{2}_{L^{2}(\SS^{2})} + \epsilon^{2-2s}|\nabla_{\SS^{2}}Y^{m}_{l}|^{2}_{L^{2}(\SS^{2})}. \eeno
Choosing $\eta$ small enough, for $\epsilon^{2}l(l+1)\leq \eta$, we have
\beno
&&[l(l+1)]^{s}(1 - 2^{-s} - \eta^{1-s})\le [l(l+1)]^{s}(1 - [l(l+1)]^{-s} - [\epsilon^{2} l(l+1)]^{1-s})\\&&=[l(l+1)]^{s}-1-\epsilon^{2-2s}l(l+1)\le\mathcal{A}^{\epsilon}_{l}
\le (2+\eta^{1-s})[l(l+1)]^{s}.
\eeno
In other words, in this case, we have $\mathcal{A}^{\epsilon}_{l}
\sim [l(l+1)]^{s}.$
{\it Case 2: $\epsilon^{2} l(l+1) \geq R^{2}$.} Let $\zeta$ be a smooth function satisfying $0\le \zeta\le1$, $\zeta(x)=1$ if $|x|\ge2$ and $\zeta(x)=0$ if $|x|\le1$. We have
\beno
\mathcal{A}^{\epsilon}_{l} &\ge& \int_{\SS^2\times\SS^2}\frac{|Y^{m}_{l}(\sigma)|^{2}+|Y^{m}_{l}(\tau)|^{2}-2Y^{m}_{l}(\sigma)Y^{m}_{l}(\tau)}{|\sigma-\tau|^{2+2s}}\zeta(\epsilon^{-1}|\sigma-\tau|) d\sigma d\tau
\\&\gtrsim& \epsilon^{-2s} - \int_{\SS^2\times\SS^2}\frac{Y^{m}_{l}(\sigma)Y^{m}_{l}(\tau)}{|\sigma-\tau|^{2+2s}}\zeta(\epsilon^{-1}|\sigma-\tau|)d\sigma d\tau
\eqdefa \epsilon^{-2s} - \mathcal{B}^{\epsilon}_{l}.
\eeno
Since $(-\Delta_{\SS^{2}})Y^{m}_{l} = l(l+1)Y^{m}_{l}$, we have
\beno
\mathcal{B}^{\epsilon}_{l} &=& [l(l+1)]^{-1}\int_{\SS^2\times\SS^2}\frac{(-\Delta_{\SS^{2}})Y^{m}_{l}(\sigma)Y^{m}_{l}(\tau)}{|\sigma-\tau|^{2+2s}}\zeta(\epsilon^{-1}|\sigma-\tau|) d\sigma d\tau
\\&\leq& C[l(l+1)]^{-1}|\nabla_{\SS^{2}}Y^{m}_{l}|_{L^{2}(\SS^{2})}|Y^{m}_{l}|_{L^{2}(\SS^{2})}\int_{\SS^2}|\sigma-\tau|^{-3-2s}\textbf{1}_{\{|\sigma-\tau| \geq \epsilon\}}d\sigma
\\&\leq& C\epsilon^{-2s}[\epsilon^{2}l(l+1)]^{-1/2}.
\eeno
Thus we have
$
\mathcal{A}^{\epsilon}_{l} \gtrsim \epsilon^{-2s}(1-C[\epsilon^{2}l(l+1)]^{-1/2}) \geq \epsilon^{-2s}(1-C/R).
$
Since $\mathcal{A}^{\epsilon}_{l}
\le 4\epsilon^{-2s}$, we obtain that $\mathcal{A}^{\epsilon}_{l}
\sim \epsilon^{-2s}$.
{\it Case 3: $\epsilon^{2}\sim ( l(l+1))^{-1} $.}
If $\epsilon^{2}l(l+1)\geq \eta$, then $(N\epsilon)^{2}l(l+1)\geq N^{2}\eta$. Applying the estimate in {\it Case 2} with $\epsilon:= N\epsilon, R:= N\sqrt{\eta}$, we obtain that
\beno
\mathcal{A}^{N\epsilon}_{l} \gtrsim N^{-2s}\epsilon^{-2s}(1-\frac{C}{N\sqrt{\eta}}).
\eeno
Choosing $N$ large enough, for $\epsilon^{2}l(l+1)\geq \eta$, we have
$
\mathcal{A}^{\epsilon}_{l} \geq \mathcal{A}^{N\epsilon}_{l} \gtrsim \epsilon^{-2s}.
$
Notice that there still holds $\mathcal{A}^{\epsilon}_{l} \lesssim \epsilon^{-2s}$ in this case. Thus we get
$\mathcal{A}^{\epsilon}_{l}\sim \epsilon^{-2s}\sim ( l(l+1))^{s}$.
Summing up all the cases, we finally obtain the desired result.
\end{proof}
Now we are in a position to capture the behavior of $\mathcal{R}_{\mu}^{\epsilon,\gamma}(f)$. We have
\begin{lem}\label{lowerboundpart2}
For any smooth function $f$, there holds
\ben\label{lowerupper1}
\mathcal{R}_{\mu}^{\epsilon,0}(f) + |W^{\epsilon}f|^{2}_{L^{2}} &\sim& |W^{\epsilon}((-\Delta_{\SS^{2}})^{1/2})f|^{2}_{L^{2}}+ |W^{\epsilon}(D)f|^{2}_{L^{2}}+|W^{\epsilon}f|^{2}_{L^{2}},\\
\mathcal{R}_{\mu}^{\epsilon,\gamma}(f) +|W^{\epsilon}f|^{2}_{L^{2}_{\gamma/2}} &\gtrsim& |W^{\epsilon}((-\Delta_{\SS^{2}})^{1/2})f|^{2}_{L^{2}_{\gamma/2}}+ |W^{\epsilon}(D)f|^{2}_{L^{2}_{\gamma/2}}. \label{lowerupper2}
\een\end{lem}
\begin{proof} The proof is split into two steps.
{\it Step 1: \eqref{lowerupper1} and \eqref{lowerupper2} with $\gamma=0$.}
By Bobylev's formula, we have
\beno
\mathcal{R}_{\mu}^{\epsilon,0}(f) &=& \frac{1}{(2\pi)^{3}}\int b^{\epsilon}(\frac{\xi}{|\xi|} \cdot \sigma)(\hat{\mu}(0)|\hat{f}(\xi) - \hat{f}(\xi^{+})|^{2} + 2\Re((\hat{\mu}(0) - \hat{\mu}(\xi^{-}))\hat{f}(\xi^{+})\bar{\hat{f}}(\xi))) d\xi d\sigma
\\ &\eqdefa& \frac{\hat{\mu}(0)}{(2\pi)^{3}}\mathcal{I}_{1} + \frac{2}{(2\pi)^{3}}\mathcal{I}_{2},
\eeno
where $\xi^{+} = \frac{\xi+|\xi|\sigma}{2}$ and $\xi^{-} = \frac{\xi-|\xi|\sigma}{2}$.
Thanks to the fact $\hat{\mu}(0) - \hat{\mu}(\xi^{-}) = \int(1-\cos(v \cdot \xi^{-}))\mu(v) dv$, we have
\beno
|\mathcal{I}_{2}| &=& |\int b^{\epsilon}(\frac{\xi}{|\xi|} \cdot \sigma) (1-\cos(v \cdot \xi^{-}))\mu(v) \Re(\hat{f}(\xi^{+})\bar{\hat{f}}(\xi)) d\sigma d\xi dv |
\\ &\lesssim& (\int b^{\epsilon}(\frac{\xi}{|\xi|} \cdot \sigma) (1-\cos(v \cdot \xi^{-}))\mu(v) |\hat{f}(\xi^{+})|^{2} d\sigma d\xi dv)^{1/2}
\\&& \times (\int b^{\epsilon}(\frac{\xi}{|\xi|} \cdot \sigma) (1-\cos(v \cdot \xi^{-}))\mu(v) |\bar{\hat{f}}(\xi)|^{2} d\sigma d\xi dv)^{1/2}.
\eeno
Observe that
$
1-\cos(v \cdot \xi^{-}) \lesssim |v|^{2}|\xi^{-}|^{2} = \frac{1}{4}|v|^{2}|\xi|^{2}|\frac{\xi}{|\xi|} - \sigma|^{2} \sim |v|^{2}|\xi^{+}|^{2}|\frac{\xi^{+}}{|\xi^{+}|} - \sigma|^{2},
$
thus
$
1-\cos(v \cdot \xi^{-}) \lesssim \min\{|v|^{2}|\xi|^{2}|\frac{\xi}{|\xi|} - \sigma|^{2},1\} \sim \min\{|v|^{2}|\xi^{+}|^{2}|\frac{\xi^{+}}{|\xi^{+}|} - \sigma|^{2},1\}.
$
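Indeed, the first bound follows from the elementary inequality $1-\cos x \leq \frac{1}{2}x^{2}$, the Cauchy-Schwarz bound $|v\cdot \xi^{-}| \leq |v||\xi^{-}|$, and the computation
\beno
|\xi^{-}|^{2} = \frac{1}{4}\Big||\xi|\frac{\xi}{|\xi|} - |\xi|\sigma\Big|^{2} = \frac{1}{4}|\xi|^{2}\Big|\frac{\xi}{|\xi|} - \sigma\Big|^{2}.
\eeno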
Noting that
$
\frac{\xi}{|\xi|} \cdot \sigma = 2(\frac{\xi^{+}}{|\xi^{+}|} \cdot \sigma)^{2} - 1,
$
by the change of variable from $\xi$ to $\xi^{+}$ and the property $W^{\epsilon}(|v||u|) \lesssim W^{\epsilon}(|v|)W^{\epsilon}(|u|)$, we have
\beno
|\mathcal{I}_{2}| \lesssim \int (W^{\epsilon})^{2}(|v||\xi|)|\hat{f}(\xi)|^{2}\mu(v)dvd\xi
\lesssim |W^{\epsilon}\mu^{1/2}|^{2}_{L^{2}} |W^{\epsilon}(D)f|^{2}_{L^{2}}\lesssim |W^{\epsilon}(D)f|^{2}_{L^{2}}.
\eeno
Now we turn to the lower bound of $\mathcal{I}_{1}$. By the geometric decomposition
\beno
\hat{f}(\xi) - \hat{f}(\xi^{+}) = \hat{f}(\xi) - \hat{f}(|\xi|\frac{\xi^{+}}{|\xi^{+}|})+ \hat{f}(|\xi|\frac{\xi^{+}}{|\xi^{+}|}) - \hat{f}(\xi^{+}),
\eeno
we have
\beno
\mathcal{I}_{1} &=& \int b^{\epsilon}(\frac{\xi}{|\xi|} \cdot \sigma)|\hat{f}(\xi) - \hat{f}(\xi^{+})|^{2} d\xi d\sigma
\\&\geq& \frac{1}{2} \int b^{\epsilon}(\frac{\xi}{|\xi|} \cdot \sigma)|\hat{f}(\xi) - \hat{f}(|\xi|\frac{\xi^{+}}{|\xi^{+}|})|^{2} d\xi d\sigma
- \int b^{\epsilon}(\frac{\xi}{|\xi|} \cdot \sigma)|\hat{f}(|\xi|\frac{\xi^{+}}{|\xi^{+}|}) - \hat{f}(\xi^{+})|^{2} d\xi d\sigma
\\&\eqdefa& \frac{1}{2}\mathcal{I}_{1,1} - \mathcal{I}_{1,2}.
\eeno
Let $\xi = r \tau$ and $\varsigma = \frac{\tau+\sigma}{|\tau+\sigma|}$, then $\frac{\xi}{|\xi|} \cdot \sigma = 2(\tau\cdot\varsigma)^{2} - 1$ and $|\xi|\frac{\xi^{+}}{|\xi^{+}|} = r \varsigma$. For the change of variable $(\xi, \sigma) \rightarrow (r, \tau, \varsigma)$, one has
$
d\xi d\sigma = 4 (\tau\cdot\varsigma) r^{2} dr d \tau d \varsigma.
$
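The Jacobian can be verified in spherical coordinates: if $\theta$ denotes the angle between $\tau$ and $\sigma$, then $\varsigma$ lies in the plane spanned by $\tau$ and $\sigma$ at angle $\theta/2$ from $\tau$, so that
\beno
d\sigma = \sin\theta \,d\theta d\phi = 4\cos\frac{\theta}{2}\sin\frac{\theta}{2}\, d\Big(\frac{\theta}{2}\Big) d\phi = 4(\tau\cdot\varsigma)\, d\varsigma,
\eeno
which together with $d\xi = r^{2} dr d\tau$ gives the stated formula.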
With the fact $b^{\epsilon}(2(\tau\cdot\varsigma)^{2} - 1) \sim |\tau - \varsigma|^{-2-2s} \mathbf{1}_{\{\epsilon \leq|\tau-\varsigma| \leq \sqrt{2-\sqrt{2}}\}}$ and the help of \eqref{similarlemma5.6}, we have
\beno
\mathcal{I}_{1,1}+|f|_{L^2}^2 &=& 4 \int b^{\epsilon}(2(\tau\cdot\varsigma)^{2} - 1)|\hat{f}(r\tau) - \hat{f}(r\varsigma)|^{2} (\tau\cdot\varsigma) r^{2} dr d \tau d \varsigma+|f|_{L^2}^2
\\&\sim& \int \frac{|\hat{f}(r\tau) - \hat{f}(r\varsigma)|^{2}}{|\tau - \varsigma|^{2+2s}}\mathbf{1}_{\{ |\tau-\varsigma| \geq \epsilon\}} r^{2} dr d \tau d \varsigma+|f|_{L^2}^2
\\&\sim& |W^{\epsilon}((-\Delta_{\SS^{2}})^{1/2})\hat{f}|^{2}_{L^{2}}+ |\hat{f}|^{2}_{L^{2}}
\sim |W^{\epsilon}((-\Delta_{\SS^{2}})^{1/2})f|^{2}_{L^{2}}+|f|^{2}_{L^{2}}.
\eeno
Here we have used Lemma \ref{comWep}.
By Lemma \ref{gammanonzerotozero}, there holds
$\mathcal{I}_{1,2} \lesssim |W^{\epsilon}(D)f|^{2}_{L^{2}} + |W^{\epsilon}f|^{2}_{L^{2}}.$
The lower bound \eqref{lowerupper2} with $\gamma=0$ then follows from the above estimates
and the estimate (see the proof in \cite{advw} with the help of Proposition \ref{symbol} or \cite{He-Jiang}) $\mathcal{R}_{\mu}^{\epsilon,0}(f) +|f|^{2}_{L^{2}} \gtrsim |W^{\epsilon}(D)f|^{2}_{L^{2}}. $
The lower bound \eqref{lowerupper2} with $\gamma=0$ implies the lower bound direction of \eqref{lowerupper1},
\beno \mathcal{R}_{\mu}^{\epsilon,0}(f) + |W^{\epsilon}f|^{2}_{L^{2}} \gtrsim |W^{\epsilon}((-\Delta_{\SS^{2}})^{1/2})f|^{2}_{L^{2}}+ |W^{\epsilon}(D)f|^{2}_{L^{2}}+|W^{\epsilon}f|^{2}_{L^{2}}. \eeno
The other direction of (\ref{lowerupper1}) follows easily from $\mathcal{R}_{\mu}^{\epsilon,0}(f) \lesssim \mathcal{I}_{1,1} + \mathcal{I}_{1,2} + |\mathcal{I}_{2}|.$
{\it Step 2: \eqref{lowerupper2} with general potentials.}
Thanks to Lemma 3.4 in \cite{he2}, which states that
\begin{eqnarray}\label{casegammazerotonon}
\mathcal{R}_\mu^{\epsilon,\gamma}(f) + |f|^{2}_{L^{2}_{\gamma/2}} \gtrsim \mathcal{R}_{\mu}^{\epsilon,0}(W_{\gamma/2}f),
\end{eqnarray}
we obtain the desired result by applying \eqref{lowerupper2} with $\gamma=0$. We complete the proof of the lemma.
\end{proof}
\subsubsection{Lower bound of $\langle \mathcal{L}^{\epsilon}f,f\rangle_v$} The aim of this subsection is to prove that $\langle \mathcal{L}^{\epsilon}f,f\rangle_v$ is bounded below by $\mathcal{R}_\mu^{\epsilon,\gamma}(f) + \mathcal{M}^{\epsilon,\gamma}(f) -|f|^{2}_{L^{2}_{\gamma/2}}$, and thus by $|f|_{\epsilon,\gamma/2}^2-|f|^{2}_{L^{2}_{\gamma/2}}$. We have
\begin{lem}\label{equivalencenorm} For $\gamma>-3$, there holds
\ben \label{intermediate-lower-bound}
\langle \mathcal{L}^{\epsilon}f, f\rangle_v + |f|^{2}_{L^{2}_{\gamma/2}} \gtrsim |f|_{\epsilon,\gamma/2}^2.
\een
\end{lem}
\begin{proof}
Thanks to the definition \eqref{DefLep} and the inequality $(a+b)^{2} \geq a^{2}/2 - b^{2}$, there holds
\ben\label{L1lowerbound1}
&& 2 \langle \mathcal{L}_{1}^{\epsilon}f,f \rangle_{v} =\int B^{\epsilon} (\mu_{*}^{1/2}f - \mu_{*}^{\prime 1/2}f^{\prime})^{2}dv dv_{*} d\sigma
= \int B^{\epsilon} (\mu_{*}^{1/2}(f - f^{\prime})\nonumber\\&&\quad+ (\mu_{*}^{1/2}- \mu_{*}^{\prime 1/2})f^{\prime})^{2} dv dv_{*} d\sigma
\ge\frac{1}{2} \mathcal{R}_\mu^{\epsilon,\gamma}(f) - \mathcal{M}^{\epsilon,\gamma}(f).
\een
On the other hand, we have $$2 \langle \mathcal{L}_{1}^{\epsilon}f,f \rangle_v = \mathcal{R}_\mu^{\epsilon,\gamma}(f) + \mathcal{M}^{\epsilon,\gamma}(f) + 2\int B^{\epsilon} (\mu_{*}^{1/2}- \mu_{*}^{\prime 1/2})\mu_{*}^{1/2}(f - f^{\prime})f^{\prime} dv dv_{*} d\sigma.$$
From this together with the fact $2(a-b)b = a^{2}-b^{2}-(a-b)^{2}$, we derive that
\beno
&&2(\mu_{*}^{1/2}- \mu_{*}^{\prime 1/2})\mu_{*}^{1/2}(f - f^{\prime})f^{\prime} =\frac{1}{2}(f^{2} -f^{\prime 2} - (f - f^{\prime})^{2}) (\mu_{*} - \mu_{*}^{\prime} + (\mu_{*}^{1/2}- \mu_{*}^{\prime 1/2})^{2})\\
&&\quad= \frac{1}{2}(f^{2} -f^{\prime 2})(\mu_{*} - \mu_{*}^{\prime}) - \frac{1}{2}(f - f^{\prime})^{2}(\mu_{*}^{1/2}- \mu_{*}^{\prime 1/2})^{2}
+\frac{1}{2}(f^{2} -f^{\prime 2})(\mu_{*}^{1/2}- \mu_{*}^{\prime 1/2})^{2}\\&&\quad -\frac{1}{2}(f - f^{\prime})^{2}(\mu_{*}- \mu_{*}^{\prime})
\eqdefa A_{1} + A_{2} + A_{3} + A_{4}.
\eeno
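Here the first equality results from the two elementary identities
\beno
2(f - f^{\prime})f^{\prime} = f^{2} -f^{\prime 2} - (f - f^{\prime})^{2}, \quad
2(\mu_{*}^{1/2}- \mu_{*}^{\prime 1/2})\mu_{*}^{1/2} = \mu_{*} - \mu_{*}^{\prime} + (\mu_{*}^{1/2}- \mu_{*}^{\prime 1/2})^{2},
\eeno
and the decomposition into $A_{1},\cdots,A_{4}$ from expanding the product of the two right-hand sides.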
By the change of variable $(v,v_{*}) \rightarrow (v^{\prime},v_{*}^{\prime})$ and the cancellation lemma introduced in \cite{advw}, we first have
\beno \bigg|\int B^{\epsilon} A_{1} dv dv_{*} d\sigma\bigg|&=& \bigg|\int B^{\epsilon} \mu_{*} (f^{2} -f^{\prime 2}) dv dv_{*} d\sigma\bigg| \leq C |f|^{2}_{L^{2}_{\gamma/2}},\\
\int B^{\epsilon} A_{3} dv dv_{*} d\sigma& =& \int B^{\epsilon} A_{4} dv dv_{*} d\sigma = 0.
\eeno Secondly we observe that \beno
\int B^{\epsilon} A_{2} dv dv_{*} d\sigma = - \int B^{\epsilon} \mu_{*} (f -f^{\prime})^{2} dv dv_{*} d\sigma +
\int B^{\epsilon} \mu_{*}^{1/2}\mu_{*}^{\prime 1/2}(f -f^{\prime})^{2} dv dv_{*} d\sigma
\ge - \mathcal{R}_\mu^{\epsilon,\gamma}(f).
\eeno
We infer that
$
2 \langle \mathcal{L}_{1}^{\epsilon}f,f \rangle_v \geq \mathcal{M}^{\epsilon,\gamma}(f) - C |f|^{2}_{L^{2}_{\gamma/2}}$.
This together with (\ref{L1lowerbound1}) implies
\beno
5 \langle \mathcal{L}_{1}^{\epsilon}f,f \rangle_v \geq \frac{1}{2}\mathcal{R}_\mu^{\epsilon,\gamma}(f) + \frac{1}{2}\mathcal{M}^{\epsilon,\gamma}(f) - \frac{3}{2}C |f|^{2}_{L^{2}_{\gamma/2}}
\gtrsim |f|_{\epsilon,\gamma/2}^2-C|f|^{2}_{L^{2}_{\gamma/2}}.
\eeno
Here we use Lemma \ref{lowerboundpart1} and Lemma \ref{lowerboundpart2}.
Due to the facts $\mathcal{L}^\epsilon=\mathcal{L}^\epsilon_1+\mathcal{L}^\epsilon_2$ and $|\langle \mathcal{L}^{\epsilon}_{2}f, f\rangle_v| \lesssim \eta |f|_{\epsilon,\gamma/2}^2+C_\eta| f|^{2}_{L^{2}_{\gamma/2}}$ (see \eqref{upgammamuff1}), we arrive at \eqref{intermediate-lower-bound}.
\end{proof}
\subsection{Upper bound for $ \Gamma^{\epsilon}(g,h)$}
In this subsection, we will give the upper bound for the nonlinear term $ \Gamma^{\epsilon}(g,h)$.
We prove it by duality.
Observe that \beno
\langle \Gamma^{\epsilon}(g,h), f\rangle_v &=& \langle Q^{\epsilon}(\mu^{1/2}g,h), f\rangle_v + \int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}(\mu_{*}^{\prime 1/2} - \mu_{*}^{1/2})g_{*} h f^{\prime} d\sigma dv_{*} dv
\\&\eqdefa& \langle Q^{\epsilon}(\mu^{1/2}g,h), f\rangle_v + \mathcal{I}(g,h,f).
\eeno
We will analyze them one by one.
\subsubsection{Upper bounds for the collision operator $Q^\epsilon$}
We perform the decomposition:
\beno
\langle Q^{\epsilon}(g,h), f\rangle_v = \langle Q^{\epsilon}_{-1}(g,h), f\rangle_v + \langle Q^{\epsilon}_{\geq 0}(g,h), f\rangle_v,
\eeno
where
$
\langle Q^{\epsilon}_{-1}(g,h), f\rangle_v \eqdefa \int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}\phi(v-v_{*}) g_{*} h (f^{\prime}-f) d\sigma dv_{*} dv,
$
and
$
\langle Q^{\epsilon}_{\geq 0}(g,h), f\rangle_v \eqdefa \int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma} (1-\phi)(v-v_{*})g_{*} h (f^{\prime}-f) d\sigma dv_{*} dv.
$ Here $\phi$ is defined in \eqref{function-phi-psi}.
\smallskip
To give an estimate for $Q_{-1}^\epsilon$, we begin with several useful lemmas.
\begin{lem}\label{hardylittlewoodsoblev}
Suppose $g, h$ and $f$ are smooth functions. Let
$
A \eqdefa \int |v-v_{*}|^{\gamma} \phi(v-v_{*}) g_{*} h f dv d v_{*}
$ and $
B \eqdefa \epsilon^{2s} \int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}\phi(v-v_{*})g_{*} h f^{\prime} d\sigma dv d v_{*}.
$ Then
\begin{itemize}
\item if $\gamma>-\frac{3}{2}$,
$ |A|+|B| \lesssim |g|_{L^{2}} |h|_{L^{2}}|f|_{L^{2}};$
\item if $\gamma=-\f32$, for any $\eta>0$, there exists a constant $C(\eta)$ such that
\beno |A|+|B|\le\left\{\begin{aligned} & C(\eta) |g|_{H^\eta}|h|_{L^{2}}|f|_{L^{2}};\\&
C(\eta) (|g|_{L^1}+|g|_{L^2})|h|_{H^{\eta}}|f|_{L^{2}};\end{aligned}\right.\eeno
\item if $\gamma\in (-3, -\f32)$, then for non-negative constants $s_1, s_2$ and $s_3$ which verify that $s_{1}+s_{2}+s_{3}=-\gamma-3/2$ if $ s_{2}+s_{3}\in (0,-\gamma-3/2]$ and $s_1=-\gamma-3/2+\eta$ if $s_2=s_3=0$,
\beno
|A|+|B| \lesssim \left\{\begin{aligned} &
|g|_{H^{s_{1}}}|h|_{H^{s_{2}}}|f|_{H^{s_{3}}};\\
& C(\eta) |g|_{H^{\eta-\frac{3}{2}-\gamma}}|h|_{L^{2}}|f|_{L^{2}};\end{aligned}\right.
\eeno
\end{itemize}
\end{lem}
\begin{proof} We first handle the term $A$. For the case of
$\gamma>-\frac{3}{2}$, the desired result comes from the inequality
\beno \int |v-v_{*}|^{\gamma} \phi(v-v_{*}) g_{*} d v_{*} \lesssim |g|_{L^{2}}.\eeno
For the case of $\gamma=-\f32$, the first result follows from Hardy's inequality
\beno \int |v-v_*|^{-\f32}\phi(|v-v_*|) |g_*|dv_*\le C(\eta)\bigg(\int |v-v_*|^{-2\eta} |g_*|^2dv_*\bigg)^{\f12}\lesssim C_\eta |g|_{H^\eta}. \eeno
The second result follows from the Hardy-Littlewood-Sobolev inequality. Indeed, one has
\beno |A|\lesssim |g|_{L^{p_1}}|h|_{L^{p_2}}|f|_{L^2}, \eeno
where $ \frac{1}{p_{1}} + \frac{1}{p_2}=1$ with $p_2>2$ and $1<p_1<2$. We get the result by the Sobolev embedding theorem and the interpolation inequality.
If $\gamma\in(-3,-\f32)$, let $s_2$ and $s_3$ be non-negative constants verifying that $s_2+s_3\in (0, -\f32-\gamma]$. Then by Hardy-Littlewood-Sobolev inequality
and Sobolev embedding theorem, there holds
\beno
|A| \lesssim |g|_{L^{p_{1}}}|h|_{L^{p_{2}}}|f|_{L^{p_{3}}} \lesssim |g|_{H^{s_{1}}}|h|_{H^{s_{2}}}|f|_{H^{s_{3}}},
\eeno
where $\f{-\gamma}3+\f1{p_1}+\f1{p_2}+\f1{p_3}=2$ and $s_{1}, s_{2}, s_{3}$ verify
$s_{1}+s_{2} + s_{3} = -\frac{3}{2}-\gamma.$ The second result follows from Hardy's inequality:
\beno \int |v-v_*|^{\gamma}\phi(|v-v_*|) |g_*|dv_*\le C(\eta)\bigg(\int |v-v_*|^{2\gamma+3-2\eta} |g_*|^2dv_*\bigg)^{\f12}\lesssim C_\eta |g|_{H^{\eta-\f32-\gamma}}. \eeno
Now we point out how to derive the same estimates for $B$. From the above proof, it suffices to show that the Hardy-Littlewood-Sobolev inequality remains valid for $B$. To see this, we observe that for $\f{-\gamma}3+\f1{p_1}+\f1{r}=2$ and $\f1{p_2}+\f1{p_3}=\f1{r}$,
\beno |B|&\lesssim& \bigg(\epsilon^{2s}\int b^\epsilon |v-v_*|^{\gamma}\phi(|v-v_*|)g_*|h|^{\f{p_2}{r}}d\sigma dv_* dv\bigg)^{\f{r}{p_2}}\\
&&\times\bigg(\epsilon^{2s}\int b^\epsilon |v-v_*|^{\gamma}\phi(|v-v_*|)g_*|f'|^{\f{p_3}{r}}d\sigma dv_* dv\bigg)^{\f{r}{p_3}}
\lesssim |g|_{L^{p_{1}}}|h|_{L^{p_{2}}}|f|_{L^{p_{3}}}.
\eeno Then we conclude the results for $B$ by repeating the argument above.
\end{proof}
\begin{lem}\label{aftercancellation}
Suppose $g, h$ and $f $ are smooth functions. Set
$
A \eqdefa \int |v-v_{*}|^{\gamma}g_{*} h f dv d v_{*}
$ and $
B \eqdefa \epsilon^{2s} \int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma} g_{*} h f^{\prime} d\sigma dv d v_{*}$.
Then
\begin{itemize}
\item if $\gamma \geq 0$,
$ |A|+|B| \lesssim |g|_{L^{1}_{\gamma}}|h|_{L^{2}_{\gamma/2}}|f|_{L^{2}_{\gamma/2}}; $
\item if $-\frac{3}{2}<\gamma<0$,
$ |A|+|B| \lesssim (|g|_{L^{2}_{|\gamma|}}+|g|_{L^{1}_{|\gamma|}})|h|_{L^{2}_{\gamma/2}}|f|_{L^{2}_{\gamma/2}};$
\item if $\gamma=-\f32$, for any $\eta>0$, there exists a constant $C(\eta)$ such that
\beno |A|+|B|\le\left\{\begin{aligned} & C(\eta) (|g|_{L^1_{|\gamma|}}+ |g|_{H^\eta_{|\gamma|}})|h|_{L^{2}_{\gamma/2}}|f|_{L^{2}_{\gamma/2}};\\&
C(\eta) (|g|_{L^1_{|\gamma|}}+|g|_{L^2_{|\gamma|}})|h|_{H^{\eta}_{\gamma/2}}|f|_{L^{2}_{\gamma/2}}.\end{aligned}\right.\eeno
\item if $\gamma\in (-3, -\f32)$, then for non-negative constants $s_1,s_2$ and $s_3$ verifying that $s_{2}, s_{3} \in (0, -\frac{3}{2}-\gamma]$ and $s_{1} + s_{2} + s_{3} = -\frac{3}{2}-\gamma$ or $\eta>0$,
\beno
|A|+|B| \lesssim \left\{\begin{aligned} & (|g|_{L^1_{|\gamma|}}+
|g|_{H^{s_{1}}_{|\gamma|}})|h|_{H^{s_{2}}_{\gamma/2}}|f|_{H^{s_{3}}_{\gamma/2}};\\
& C(\eta) (|g|_{L^1_{|\gamma|}}+|g|_{H^{\eta-\frac{3}{2}-\gamma}_{|\gamma|}})|h|_{L^{2}_{\gamma/2}}|f|_{L^{2}_{\gamma/2}};\end{aligned}\right.
\eeno
\end{itemize}
\end{lem}
\begin{proof} Let $G = gW_{|\gamma|}, H = hW_{\gamma/2}, F = fW_{\gamma/2}$. Then we have \beno
A &=& \int |v-v_{*}|^{\gamma}\langle v_{*}\rangle^{-|\gamma|}\langle v\rangle^{-\gamma}G_{*} H F dv d v_{*}
\lesssim \int (1+|v-v_{*}|^{\gamma}\phi(|v-v_*|))G_{*} H F dv d v_{*}.
\eeno
Then the lemma follows from the previous results. By a similar argument, we can conclude the results for the term $B$.
\end{proof}
Now we are ready to give the following upper bounds for $Q^\epsilon_{-1}$.
\begin{prop}\label{ubqepsilonsingular}
For any $\eta>0$ and smooth functions $g,h$ and $f$, there hold
\begin{itemize}
\item if $\gamma>-\frac{3}{2}$,
$|\langle Q^{\epsilon}_{-1}(g,h), f\rangle_v| \lesssim |g|_{L^{2}_{|\gamma|}}|W^{\epsilon}(D)h|_{L^{2}_{\gamma/2}}|W^{\epsilon}(D)f|_{L^{2}_{\gamma/2}}$;
\item if $\gamma=-\f32$, $|\langle Q^{\epsilon}_{-1}(g,h), f\rangle_v| \lesssim (|g|_{L^1_{|\gamma|}}+|g|_{L^2_{|\gamma|}})|W^{\epsilon}(D)h|_{H^\eta_{\gamma/2}}|W^{\epsilon}(D)f|_{L^2_{\gamma/2}};$
\item if $-3<\gamma\leq -\frac{3}{2}$,
$
|\langle Q^{\epsilon}_{-1}(g,h), f\rangle_v| \lesssim |g|_{H^{s_{1}}_{|\gamma|}}|W^{\epsilon}(D)h|_{H^{s_{2}}_{\gamma/2}}|W^{\epsilon}(D)f|_{H^{s_{3}}_{\gamma/2}}.
$
\end{itemize}
Here $s_1, s_2$ and $s_3$ verify that $s_{1}+s_{2}+s_{3}=-\gamma-3/2$ if $ s_{2}+s_{3}\in (0,-\gamma-3/2]$ and $s_1=-\gamma-3/2+\eta$ if $s_2=s_3=0$.
\end{prop}
\begin{proof} We divide the proof into two steps.
{\it Step 1: Estimates without weight.} Following the proof of Theorem 1.1 in \cite{he2}, we conclude that
\ben\label{Q-1} |\langle Q^{\epsilon}_{-1}(g,h), f\rangle_v|\lesssim |g|_{L^2}|h|_{H^a}|f|_{H^b}, \een
where $a+b=2s$ with $a,b\in [0,2s]$.
From this together with the decomposition:
\beno
\langle Q^{\epsilon}_{-1}(g,h), f\rangle_v = \langle Q^{\epsilon}_{-1}(g,h_\phi+h^\phi), f_\phi+f^\phi\rangle_{v},
\eeno we deduce that
$ |\langle Q^{\epsilon}_{-1}(g,h_\phi), f_\phi\rangle_{v}|\lesssim |g|_{L^2}|h_\phi|_{H^s}|f_\phi|_{H^s},
|\langle Q^{\epsilon}_{-1}(g,h_\phi), f^\phi\rangle_{v}|\lesssim |g|_{L^2}|h_\phi|_{H^{2s}}|f^\phi|_{L^2}$ and $
|\langle Q^{\epsilon}_{-1}(g,h^\phi), f_\phi\rangle_{v}|\lesssim |g|_{L^2}|h^\phi|_{L^2}|f_\phi|_{H^{2s}}.$
Thanks to the fact $|h_\phi|_{H^{2s}}\lesssim \epsilon^{-s}|h_\phi|_{H^{s}}$, we have
\beno |\langle Q^{\epsilon}_{-1}(g,h_\phi), f_\phi\rangle_{v}|+
|\langle Q^{\epsilon}_{-1}(g,h_\phi), f^\phi\rangle_{v}|+
|\langle Q^{\epsilon}_{-1}(g,h^\phi), f_\phi\rangle_{v}|\lesssim |g|_{L^2}|W^\epsilon(D)h|_{L^2}|W^\epsilon(D)f|_{L^2}.\eeno
Next we focus on the estimate of the term $
\langle Q^{\epsilon}_{-1}(g,h^\phi), f^\phi\rangle_{v}$. Thanks to Lemma \ref{hardylittlewoodsoblev}, if
$\gamma>-\frac{3}{2}$,
we have $|\langle Q^{\epsilon}_{-1}(g,h^\phi), f^\phi\rangle_{v}|
\lesssim |g|_{L^2}|W^\epsilon(D)h|_{L^2}|W^\epsilon(D)f|_{L^2}$. If $\gamma=-\f32$, $|\langle Q^{\epsilon}_{-1}(g,h^\phi), f^\phi\rangle_{v}|
\lesssim (|g|_{L^1}+|g|_{L^2})|W^\epsilon(D)h|_{H^\eta}|W^\epsilon(D)f|_{L^2}$. If $-3<\gamma< -\frac{3}{2}$,
$|\langle Q^{\epsilon}_{-1}(g,h^\phi), f^\phi\rangle_{v}|
\lesssim |g|_{H^{s_{1}}}|W^\epsilon(D)h|_{H^{s_{2}}}|W^\epsilon(D)f|_{H^{s_{3}}}. $ Here $s_1, s_2$ and $s_3$ satisfy the conditions mentioned in the proposition.
{\it Step 2: Estimates with weight.} We recall that
\beno
\langle Q^{\epsilon}_{-1}(g,h), f\rangle_v &=& \sum_{j\geq N_{0} - 1} \langle Q^{\epsilon}_{-1}(\varphi_{j}g,\tilde{\varphi}_{j}h), \tilde{\varphi}_{j}f\rangle_v + \sum_{j\leq N_{0} - 2} \langle Q^{\epsilon}_{-1}(\varphi_{j}g,\mathcal{U}_{N_{0}-1}h), \mathcal{U}_{N_{0}-1}f\rangle_v
\\&\eqdefa&\mathcal{A}_{1} + \mathcal{A}_{2},
\eeno
where $\tilde{\varphi}_{j} = \sum_{k \geq -1,|k-j| \leq N_{0}} \varphi_{k}$ and $\mathcal{U}_{N_{0}-1} = \sum_{-1 \leq k \leq N_{0}-1} \varphi_{k} $ for some fixed $N_{0}$.
Thus in the case $-3<\gamma< -\frac{3}{2}$, by {\it Step 1}, we have
\beno |\mathcal{A}_{1}| &\lesssim& \sum_{j\geq N_{0} - 1} |\langle D \rangle^{s_{1}}\varphi_{j}g|_{L^{2}}|\langle D \rangle^{s_{2}}W^{\epsilon}(D)\tilde{\varphi}_{j}h|_{L^{2}}
|\langle D \rangle^{s_{3}}W^{\epsilon}(D)\tilde{\varphi}_{j}f|_{L^{2}} \eqdefa \sum_{j\geq N_{0} - 1} \mathcal{A}_{1,j}.\eeno
For simplicity, we write $\mathcal{A}_{1,j} = \mathcal{B}_{j}\mathcal{C}_{j}\mathcal{D}_{j} $, where
$ \mathcal{B}_{j} \eqdefa 2^{-j}|\langle D \rangle^{s_{1}}2^{(- \gamma + 1) j}\langle \cdot \rangle^{\gamma}\varphi_{j}\langle \cdot \rangle^{-\gamma}g|_{L^{2}}, \mathcal{C}_{j} \eqdefa 2^{-j}|\langle D \rangle^{s_{2}}W^{\epsilon}(D)2^{(\gamma/2+1)j}\langle \cdot \rangle^{-\gamma/2}\tilde{\varphi}_{j}\langle \cdot \rangle^{\gamma/2}h|_{L^{2}}$ and $ \mathcal{D}_{j} \eqdefa 2^{-j}|\langle D \rangle^{s_{3}}W^{\epsilon}(D)2^{(\gamma/2+1)j}\langle \cdot \rangle^{-\gamma/2}\tilde{\varphi}_{j}\langle \cdot \rangle^{\gamma/2}f|_{L^{2}}$.
Thanks to $2^{(- \gamma + 1) j}\langle \cdot \rangle^{\gamma}\varphi_{j} \in S^{1}_{1,0}$ and $\langle \cdot \rangle^{s_{1}} \in S^{s_{1}}_{1,0}$, by Lemma \ref{operatorcommutator1}, we have
\beno \mathcal{B}_{j} \lesssim | \varphi_{j}\langle D \rangle^{s_{1}}\langle \cdot \rangle^{-\gamma}g|_{L^{2}} + 2^{-j}|\langle \cdot \rangle^{-\gamma}g|_{H^{s_{1}-1}}. \eeno
Similarly, by Lemma \ref{operatorcommutator1}, we have
\beno \mathcal{C}_{j} \lesssim | \tilde{\varphi}_{j}\langle D \rangle^{s_{2}} W^{\epsilon}(D)\langle \cdot \rangle^{\gamma/2}h|_{L^{2}} + 2^{-j}|\langle \cdot \rangle^{\gamma/2}h|_{H^{s_{2}+s-1}}, \eeno
\beno \mathcal{D}_{j} \lesssim| \tilde{\varphi}_{j}\langle D \rangle^{s_{3}}W^{\epsilon}(D)\langle \cdot \rangle^{\gamma/2}f|_{L^{2}}+ 2^{-j}|\langle \cdot \rangle^{\gamma/2}f|_{H^{s_{3}+s-1}}. \eeno
Thus it is not difficult to conclude that
\beno
|\mathcal{A}_{1}| \lesssim |\langle D \rangle^{s_{1}}\langle \cdot \rangle^{-\gamma}g|_{L^{2}}|\langle D \rangle^{s_{2}}W^{\epsilon}(D)\langle \cdot \rangle^{\gamma/2}h|_{L^{2}}|\langle D \rangle^{s_{3}}W^{\epsilon}(D)\langle \cdot \rangle^{\gamma/2}f|_{L^{2}}.
\eeno
The term $\mathcal{A}_{2}$ is much easier since it has only finitely many terms. Finally, we have
\beno
|\langle Q^{\epsilon}_{-1}(g,h), f\rangle_v| \lesssim |\langle D \rangle^{s_{1}}\langle \cdot \rangle^{-\gamma}g|_{L^{2}}|\langle D \rangle^{s_{2}}W^{\epsilon}(D)\langle \cdot \rangle^{\gamma/2}h|_{L^{2}}|\langle D \rangle^{s_{3}}W^{\epsilon}(D)\langle \cdot \rangle^{\gamma/2}f|_{L^{2}}.
\eeno
For the case $\gamma\ge-\frac{3}{2}$, we may repeat the above procedure to get the desired results. We complete the proof
of the proposition with the help of Lemma \ref{func}.
\end{proof}
To give the upper bound for $Q^{\epsilon}_{\geq 0}$, we need the next two lemmas.
\begin{lem}\label{crosstermsimilar}
Let
$
\mathcal{Y}^{\epsilon,\gamma}(h,f) \eqdefa \int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)\langle u \rangle^{\gamma} h(u)[f(u^{+}) - f(|u|\frac{u^{+}}{|u^{+}|})] du d\sigma,
$
then
\beno
|\mathcal{Y}^{\epsilon,\gamma}(h,f)| &\lesssim& (|W^{\epsilon}W_{\gamma/2}h|^{2}_{L^{2}}+|W^{\epsilon}(D)W_{\gamma/2}h|^{2}_{L^{2}})^{1/2}
(|W^{\epsilon}W_{\gamma/2}f|^{2}_{L^{2}}+|W^{\epsilon}(D)W_{\gamma/2}f|^{2}_{L^{2}})^{1/2}.
\eeno
\end{lem}
\begin{proof}We divide the proof into two steps.
{\it Step 1: $\gamma = 0$.} For ease of notation, we denote $\mathcal{Y} = \mathcal{Y}^{\epsilon, 0}(h,f)$. First applying dyadic decomposition in the phase space, we have
\beno
\mathcal{Y}&=& \sum_{k=-1}^{\infty}\int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma) (\tilde{\varphi}_{k}h)(u) [(\varphi_{k}f)(u^{+})- (\varphi_{k}f)(|u|\frac{u^{+}}{|u^{+}|})] du d\sigma
\eqdefa \sum_{k=-1}^{\infty} \mathcal{Y}_{k},
\eeno
where $\tilde{\varphi}_{k} = \sum_{|l-k|\leq 3} \varphi_{l}$.
We split the proof into two cases: $2^{k}\geq 1/\epsilon$ and $2^{k}\leq 1/\epsilon$.
For the case $2^{k}\geq 1/\epsilon$, we have
\beno
|\mathcal{Y}_{k}| &\leq& \bigg(\int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma) |(\tilde{\varphi}_{k}h)(u)|^{2} du d\sigma\bigg)^{\frac{1}{2}}
\bigg(\int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma) (|(\varphi_{k}f)(u^{+})|^{2} + |(\varphi_{k}f)(|u|\frac{u^{+}}{|u^{+}|})|^{2}) du d\sigma\bigg)^{\frac{1}{2}}.
\eeno
By the change of variable $u \rightarrow u^{+}$ and $u \rightarrow w= |u|\frac{u^{+}}{|u^{+}|}$ respectively, we have
$|\mathcal{Y}_{k}| \lesssim\epsilon^{-2s}|\tilde{\varphi}_{k}h|_{L^{2}}|\varphi_{k}f|_{L^{2}}$, which implies
\beno
|\sum_{2^{k}\geq 1/\epsilon} \mathcal{Y}_{k}| \lesssim \sum_{2^{k}\geq 1/\epsilon} \epsilon^{-2s}|\tilde{\varphi}_{k}h|_{L^{2}}|\varphi_{k}f|_{L^{2}} \lesssim |W^{\epsilon}h|_{L^{2}}|W^{\epsilon}f|_{L^{2}}.
\eeno
For the case $2^{k}\leq 1/\epsilon$, by Proposition \ref{fourier-transform-cross-term} and the dyadic decomposition in the frequency space, we have
\beno
\mathcal{Y}_{k} &=& \int b^{\epsilon}(\frac{\xi}{|\xi|}\cdot\sigma) [\widehat{\tilde{\varphi}_{k}h}(\xi^{+})- \widehat{\tilde{\varphi}_{k}h}(|\xi|\frac{\xi^{+}}{|\xi^{+}|})] \overline{\widehat{\varphi_{k}f}}(\xi) d\xi d\sigma
\\&=& \sum_{l=-1}^{\infty} \int b^{\epsilon}(\frac{\xi}{|\xi|}\cdot\sigma)[({\varphi}_{l}\widehat{\tilde{\varphi}_{k}h})(\xi^{+})- ({\varphi}_{l}\widehat{\tilde{\varphi}_{k}h})(|\xi|\frac{\xi^{+}}{|\xi^{+}|})] (\tilde{\varphi}_{l}\overline{\widehat{\varphi_{k}f}})(\xi) d\xi d\sigma
\eqdefa \sum_{l=-1}^{\infty} \mathcal{Y}_{k,l}.
\eeno
For the case $2^{l}\geq 1/\epsilon$, we have
$
|\mathcal{Y}_{k,l}| \lesssim \epsilon^{-2s}|{\varphi}_{l}\widehat{\tilde{\varphi}_{k}h}|_{L^{2}}|\tilde{\varphi}_{l}\widehat{\varphi_{k}f}|_{L^{2}},
$
which yields
\beno
\sum_{2^{l}\geq 1/\epsilon}|\mathcal{Y}_{k,l}| \lesssim \sum_{2^{l}\geq 1/\epsilon} \epsilon^{-2s}|{\varphi}_{l}\widehat{\tilde{\varphi}_{k}h}|_{L^{2}}|\tilde{\varphi}_{l}\widehat{\varphi_{k}f}|_{L^{2}}
\lesssim |W^{\epsilon}(D)\tilde{\varphi}_{k}h|_{L^{2}}|W^{\epsilon}(D)\varphi_{k}f|_{L^{2}}.
\eeno
Then by \eqref{decompostionpacth}, we have
$
\sum_{2^{k}\leq 1/\epsilon,2^{l}\geq 1/\epsilon}|\mathcal{Y}_{k,l}| \lesssim |W^{\epsilon}(D)h|_{L^{2}}|W^{\epsilon}(D)f|_{L^{2}}.
$
For the case $2^{l}\leq 1/\epsilon$, we have
\beno
\mathcal{Y}_{k,l} &=& \int b^{\epsilon}(\frac{\xi}{|\xi|}\cdot\sigma){\bf 1}_{\{\theta \geq 2^{-\frac{k+l}{2}} \}}[({\varphi}_{l}\widehat{\tilde{\varphi}_{k}h})(\xi^{+})- ({\varphi}_{l}\widehat{\tilde{\varphi}_{k}h})(|\xi|\frac{\xi^{+}}{|\xi^{+}|})] (\tilde{\varphi}_{l}\overline{\widehat{\varphi_{k}f}})(\xi) d\xi d\sigma
\\&& + \int b^{\epsilon}(\frac{\xi}{|\xi|}\cdot\sigma){\bf 1}_{\{\theta \leq 2^{-\frac{k+l}{2}} \}}[({\varphi}_{l}\widehat{\tilde{\varphi}_{k}h})(\xi^{+})- ({\varphi}_{l}\widehat{\tilde{\varphi}_{k}h})(|\xi|\frac{\xi^{+}}{|\xi^{+}|})] (\tilde{\varphi}_{l}\overline{\widehat{\varphi_{k}f}})(\xi) d\xi d\sigma
\\&\eqdefa& \mathcal{Y}_{k,l,1} + \mathcal{Y}_{k,l,2}.
\eeno
By the similar argument as before, we have
$
|\mathcal{Y}_{k,l,1}| \lesssim 2^{s(k+l)}|{\varphi}_{l}\widehat{\tilde{\varphi}_{k}h}|_{L^{2}}|\tilde{\varphi}_{l}\widehat{\varphi_{k}f}|_{L^{2}}.
$
Therefore we have
\beno
\sum_{2^{k}\leq 1/\epsilon,2^{l}\leq 1/\epsilon} |\mathcal{Y}_{k,l,1}| &\leq& \{\sum_{2^{k}\leq 1/\epsilon,2^{l}\leq 1/\epsilon} 2^{s(k+l)}|{\varphi}_{l}\widehat{\tilde{\varphi}_{k}h}|^{2}_{L^{2}}\}^{1/2}\{\sum_{2^{k}\leq 1/\epsilon,2^{l}\leq 1/\epsilon}2^{s(k+l)}|\tilde{\varphi}_{l}\widehat{\varphi_{k}f}|^{2}_{L^{2}}\}^{1/2}
\\&\lesssim& (|W^{\epsilon}h|^{2}_{L^{2}}+|W^{\epsilon}(D)h|^{2}_{L^{2}})^{1/2}(|W^{\epsilon}f|^{2}_{L^{2}}+|W^{\epsilon}(D)f|^{2}_{L^{2}})^{1/2}.
\eeno
By Taylor expansion,
$
({\varphi}_{l}\widehat{\tilde{\varphi}_{k}h})(\xi^{+})- ({\varphi}_{l}\widehat{\tilde{\varphi}_{k}h})(|\xi|\frac{\xi^{+}}{|\xi^{+}|}) = (1-\frac{1}{\cos\theta})\int_{0}^{1}
(\nabla \varphi_{l}\widehat{\tilde{\varphi}_{k}h})(\xi^{+}(\kappa))\cdot \xi^{+} d\kappa,
$
where $\xi^{+}(\kappa) = (1-\kappa)|\xi|\frac{\xi^{+}}{|\xi^{+}|} + \kappa \xi^{+}$. Thus, we obtain that
\beno
|\mathcal{Y}_{k,l,2}| &=& | \int_{[0,1]\times \R^{3} \times \SS^{2}} b^{\epsilon}(\frac{\xi}{|\xi|}\cdot\sigma)(1-\frac{1}{\cos\theta}){\bf 1}_{\{\theta \leq 2^{-\frac{k+l}{2}} \}}(\tilde{\varphi}_{l}\overline{\widehat{\varphi_{k}f}})(\xi)
\\&&\times(\nabla \varphi_{l}\widehat{\tilde{\varphi}_{k}h})(\xi^{+}(\kappa))\cdot \xi^{+} d\kappa d\xi d\sigma |
\\&\lesssim& \{\int_{0}^{2^{-\frac{k+l}{2}}} \theta^{1-2s} |\tilde{\varphi}_{l}\widehat{\varphi_{k}f}|^{2}(\xi) d\theta d\xi\}^{1/2}
\{\int_{0}^{2^{-\frac{k+l}{2}}} \theta^{1-2s} |\eta|^{2}|(\nabla \varphi_{l}\widehat{\tilde{\varphi}_{k}h})|^{2}(\eta) d\theta d\eta\}^{1/2}
\\&\lesssim& 2^{s(k+l)/2}|\tilde{\varphi}_{l}\widehat{\varphi_{k}f}|_{L^{2}}\{2^{-(2-s)(k+l)}\int |\eta|^{2}|(\nabla \varphi_{l}\widehat{\tilde{\varphi}_{k}h})|^{2}(\eta)d\eta\}^{1/2},
\eeno
where we use the change of variable $\xi \rightarrow \eta = \xi^{+}(\kappa)$.
It is not difficult to compute that
\beno
2^{-(2-s)(k+l)}\int |\eta|^{2}|(\nabla \varphi_{l}\widehat{\tilde{\varphi}_{k}h})|^{2}(\eta)d\eta \lesssim 2^{-(2-s)(l+k)} |\tilde{\varphi}_{k}h|^{2}_{L^{2}} + 2^{s(l+k)-2k}|\varphi_{l} \widehat{v\tilde{\varphi}_{k}h}|^{2}_{L^{2}}.
\eeno
Thus by \eqref{decompostionpacth},
$
\sum_{2^{k}\leq 1/\epsilon,2^{l}\leq 1/\epsilon}|\mathcal{Y}_{k,l,2}| \lesssim (|W^{\epsilon}h|^{2}_{L^{2}}+|W^{\epsilon}(D)h|^{2}_{L^{2}})^{1/2}(|W^{\epsilon}f|^{2}_{L^{2}}+|W^{\epsilon}(D)f|^{2}_{L^{2}})^{1/2}.
$
Patching together all the above results, we conclude that
\beno
|\mathcal{Y}| \lesssim (|W^{\epsilon}h|^{2}_{L^{2}}+|W^{\epsilon}(D)h|^{2}_{L^{2}})^{1/2}(|W^{\epsilon}f|^{2}_{L^{2}}+|W^{\epsilon}(D)f|^{2}_{L^{2}})^{1/2}.
\eeno
{\it Step 2: $\gamma \neq 0$.} For simplicity, denote $w = |u|\frac{u^{+}}{|u^{+}|}$, then $W_{\gamma/2}(u) = W_{\gamma/2}(w)$. We have
\beno
\langle u \rangle^{\gamma} h(u)[f(u^{+}) - f(w)] &=& (W_{\gamma/2}h)(u)[(W_{\gamma/2}f)(u^{+})-(W_{\gamma/2}f)(w)]
\\&& +(W_{\gamma/2}h)(u)(W_{\gamma/2}f)(u^{+})(W_{\gamma/2}(w)W_{-\gamma/2}(u^{+}) - 1).
\eeno
Thus we have
\beno
\mathcal{Y}^{\epsilon,\gamma}(h,f) &=& \mathcal{Y}^{\epsilon,0}(W_{\gamma/2}h, W_{\gamma/2}f)+ \int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)(W_{\gamma/2}h)(u) \\
&&\times(W_{\gamma/2}f)(u^{+})(W_{\gamma/2}(w)W_{-\gamma/2}(u^{+}) - 1) du d\sigma
\eqdefa \mathcal{Y}^{\epsilon,0}(W_{\gamma/2}h, W_{\gamma/2}f) + \mathcal{A}.
\eeno
Observing that
$
|W_{\gamma/2}(u)W_{-\gamma/2}(u^{+}) - 1| \lesssim \theta^{2},
$
we have
\beno
|\mathcal{A}| &\leq& \{\int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)|W_{\gamma/2}h|^{2}(u)|W_{\gamma/2}(w)W_{-\gamma/2}(u^{+}) - 1|du d\sigma\}^{1/2}
\\&&\times\{\int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)|W_{\gamma/2}f|^{2}(u^{+})|W_{\gamma/2}(w)W_{-\gamma/2}(u^{+}) - 1| du d\sigma\}^{1/2}
\lesssim |W_{\gamma/2}h|_{L^{2}}|W_{\gamma/2}f|_{L^{2}},
\eeno
where the change of variable $u \rightarrow u^{+}$ is used.
We complete the proof of the lemma.
\end{proof}
\begin{rmk} \label{exact-cross-term} Define $
\mathcal{X}^{\epsilon,\gamma}(h,f) \eqdefa \int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)|u|^{\gamma}(1-\phi)(u) h(u)[f(u^{+}) - f(|u|\frac{u^{+}}{|u^{+}|})] du d\sigma,
$
then
\beno
|\mathcal{X}^{\epsilon,\gamma}(h,f)| &\lesssim& (|W^{\epsilon}W_{\gamma/2}h|^{2}_{L^{2}}+|W^{\epsilon}(D)W_{\gamma/2}h|^{2}_{L^{2}})^{1/2}
(|W^{\epsilon}W_{\gamma/2}f|^{2}_{L^{2}}+|W^{\epsilon}(D)W_{\gamma/2}f|^{2}_{L^{2}})^{1/2}.
\eeno
To see this, thanks to the fact $|u|^{\gamma}(1-\phi)(u) = \langle u \rangle^{\gamma}(|u|^{\gamma}\langle u \rangle^{-\gamma}-1)(1-\phi)(u)+\langle u \rangle^{\gamma}(1-\phi)(u)$, we have \beno \mathcal{X}^{\epsilon,\gamma}(h,f) = \mathcal{Y}^{\epsilon,\gamma}((|\cdot|^{\gamma}\langle \cdot \rangle^{-\gamma}-1)(1-\phi)h,f)+\mathcal{Y}^{\epsilon,\gamma}((1-\phi)h,f).\eeno
Then the result follows from Lemma \ref{crosstermsimilar} and \eqref{func5}.
\end{rmk}
\begin{lem}\label{nosigularityeg}
Recall $
\mathcal{R}^{\epsilon,\gamma}_{*,g}(h) = \int b^{\epsilon}(\cos\theta)\langle v-v_{*} \rangle^{\gamma}g_{*}(h^{\prime}-h)^{2}d\sigma dv dv_{*}$ with $g\ge0$. Then
\beno
\mathcal{R}^{\epsilon,\gamma}_{*,g}(h)\lesssim \mathcal{R}^{\epsilon,0}_{g W_{|\gamma|}}(W_{\gamma/2}h) + |g|_{L^1_{|\gamma+2|}}|h|^{2}_{L^{2}_{\gamma/2}}.
\eeno
\end{lem}
\begin{proof}
Let $H = W_{\gamma/2}h$, then we have
\beno
(h^{\prime}-h)^{2} = (H^{\prime}W_{-\gamma/2}^{\prime} - H W_{-\gamma/2})^{2}
\lesssim W^{\prime}_{-\gamma}(H^{\prime}-H)^{2} + (W^{\prime}_{-\gamma/2}-W_{-\gamma/2})^{2}H^{2}.
\eeno
Observing that
$
\langle v^{\prime} \rangle^{-\gamma} \leq \langle v^{\prime} - v_{*} \rangle^{-\gamma} \langle v_{*} \rangle^{|\gamma|} \sim
\langle v - v_{*} \rangle^{-\gamma} \langle v_{*} \rangle^{|\gamma|},
$
we have
\beno
\mathcal{R}^{\epsilon,\gamma}_{*,g}(h) \lesssim \mathcal{R}^{\epsilon,0}_{g W_{|\gamma|}}(W_{\gamma/2}h) + \int b^{\epsilon}(\cos\theta)\langle v-v_{*} \rangle^{\gamma}(\langle v^{\prime} \rangle^{-\gamma/2}-\langle v \rangle^{-\gamma/2})^{2}g_{*}H^{2} d\sigma dv dv_{*}.
\eeno
By Taylor expansion, one has
$
(W^{\prime}_{-\gamma/2}-W_{-\gamma/2})^{2} \lesssim \int_{0}^{1}\langle v(\kappa) \rangle^{-\gamma-2} \langle v-v_{*} \rangle^{2} \theta^{2} d\kappa,
$
where $v(\kappa) = v + \kappa(v^{\prime}-v)$.
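For completeness, we sketch the derivation of this bound: writing
\beno
W_{-\gamma/2}(v^{\prime}) - W_{-\gamma/2}(v) = \int_{0}^{1} (\nabla W_{-\gamma/2})(v(\kappa))\cdot(v^{\prime}-v) d\kappa,
\eeno
the Cauchy-Schwartz inequality in $\kappa$ together with $|(\nabla W_{-\gamma/2})(z)| \lesssim \langle z \rangle^{-\gamma/2-1}$ and $|v^{\prime}-v| = |v-v_{*}|\sin(\theta/2) \lesssim \langle v-v_{*} \rangle \theta$ yields the stated estimate.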
Note that $\langle v-v_{*} \rangle^{\gamma+2}\sim \langle v(\kappa)-v_{*} \rangle^{\gamma+2}\lesssim \langle v(\kappa) \rangle^{\gamma+2}\langle v_{*} \rangle^{|\gamma+2|}$. Then we have
\beno \mathcal{R}^{\epsilon,\gamma}_{*,g}(h) \lesssim \mathcal{R}^{\epsilon,0}_{g W_{|\gamma|}}(W_{\gamma/2}h) + \int b^{\epsilon}(\cos\theta)\theta^2 \langle v_{*} \rangle^{|\gamma+2|} g_{*}H^{2} d\sigma dv dv_{*},\eeno
which yields the desired result.
\end{proof}
\bigskip
Now we are in a position to state the following upper bound of $Q^\epsilon_{\geq 0}$.
\begin{prop}\label{ubqepsilonnonsingular} For smooth functions $g, h$ and $f$, there holds
\beno
|\langle Q^{\epsilon}_{\geq 0}(g,h), f\rangle_v| &\lesssim& | g|_{L^{1}_{|\gamma|+2}}|h|_{\epsilon,\gamma/2}|f|_{\epsilon,\gamma/2}.
\eeno
\end{prop}
\begin{proof} Define the translation operator $T_{v_{*}}$ by $(T_{v_{*}}f)(v) = f(v_{*}+v)$.
By geometric decomposition, we have
$\langle Q^{\epsilon}_{\geq 0}(g,h), f\rangle_v = \mathcal{D}_{1} + \mathcal{D}_{2}$,
where
$ \mathcal{D}_{1} \eqdefa \int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)|u|^{\gamma}(1-\phi)(u)g_{*} (T_{v_{*}}h)(u) ((T_{v_{*}}f)(u^{+})-(T_{v_{*}}f)(|u|\frac{u^{+}}{|u^{+}|})) d\sigma dv_{*} du,$
and $
\mathcal{D}_{2} \eqdefa\int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)|u|^{\gamma}(1-\phi)(u)g_{*} (T_{v_{*}}h)(u)((T_{v_{*}}f)(|u|\frac{u^{+}}{|u^{+}|})- (T_{v_{*}}f)(u))\\ d\sigma dv_{*} du.$
We now analyze $\mathcal{D}_{1}$ and $\mathcal{D}_{2}$ one by one.
{\it Step 1: Estimate of $\mathcal{D}_{1}$.}
By Lemma \ref{crosstermsimilar} and Remark \ref{exact-cross-term}, we have
\beno
|\mathcal{D}_{1}| &\lesssim& \int |g_{*}| (|W^{\epsilon}W_{\gamma/2}T_{v_{*}}h|^{2}_{L^{2}}+|W^{\epsilon}(D)W_{\gamma/2}T_{v_{*}}h|^{2}_{L^{2}})^{1/2}
\\&&\times(|W^{\epsilon}W_{\gamma/2}T_{v_{*}}f|^{2}_{L^{2}}+|W^{\epsilon}(D)W_{\gamma/2}T_{v_{*}}f|^{2}_{L^{2}})^{1/2} dv_{*}.
\eeno
It is easy to check that
\begin{eqnarray}\label{tvstartonovstar1}
|W^{\epsilon}W_{\gamma/2}T_{v_{*}}h|_{L^{2}} \lesssim W^{\epsilon}(v_{*})W_{|\gamma|/2}(v_{*})|W^{\epsilon}W_{\gamma/2}h|_{L^{2}}.
\end{eqnarray}
By Lemma \ref{operatorcommutator1}, we have
\begin{eqnarray}\label{tvstartonovstar2}
|W^{\epsilon}(D)W_{\gamma/2}T_{v_{*}}h|_{L^{2}} &\lesssim& |W_{\gamma/2}W^{\epsilon}(D)T_{v_{*}}h|_{L^{2}} + |T_{v_{*}}h|_{H^{s-1}_{\gamma/2-1}}
\\&\lesssim& W_{|\gamma|/2}(v_{*})(|W_{\gamma/2}W^{\epsilon}(D)h|_{L^{2}} + |h|_{L^{2}_{\gamma/2-1}}) \nonumber
\\&\lesssim& W_{|\gamma|/2}(v_{*})|W^{\epsilon}(D)W_{\gamma/2}h|_{L^{2}}. \nonumber
\end{eqnarray}
Thus we get the estimate of $\mathcal{D}_{1}$ as follows
\beno
|\mathcal{D}_{1}| \lesssim | g|_{L^{1}_{|\gamma|+2}}( |W^{\epsilon}(D)W_{\gamma/2}h|_{L^{2}} + |W^{\epsilon}W_{\gamma/2}h|_{L^{2}})
( |W^{\epsilon}(D)W_{\gamma/2}f|_{L^{2}} + |W^{\epsilon}W_{\gamma/2}f|_{L^{2}}).
\eeno
{\it Step 2: Estimate of $\mathcal{D}_{2}$.}
Let $u = r \tau$ and $\varsigma = \frac{\tau+\sigma}{|\tau+\sigma|}$, then $\frac{u}{|u|} \cdot \sigma = 2(\tau\cdot\varsigma)^{2} - 1$ and $|u|\frac{u^{+}}{|u^{+}|} = r \varsigma$. By the change of variable $(u, \sigma) \rightarrow (r, \tau, \varsigma)$, one has
$
du d\sigma = 4 (\tau\cdot\varsigma) r^{2} dr d \tau d \varsigma.
$
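Indeed, $\varsigma$ is the unit bisector of $\tau$ and $\sigma$: since $|\tau+\sigma| = 2\cos(\theta/2)$, one has
\beno
\tau\cdot\varsigma = \frac{1+\cos\theta}{2\cos(\theta/2)} = \cos(\theta/2), \quad \mbox{so that} \quad 2(\tau\cdot\varsigma)^{2}-1 = \cos\theta = \frac{u}{|u|}\cdot\sigma.
\eeno
Moreover, $u^{+} = \frac{u+|u|\sigma}{2}$ points in the direction of $\varsigma$, whence $|u|\frac{u^{+}}{|u^{+}|} = r\varsigma$.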
Then
\beno
\mathcal{D}_{2} &=& 4 \int r^\gamma(1-\phi)(r)b^{\epsilon}(2(\tau\cdot\varsigma)^{2} - 1)g_{*}(T_{v_*}h)(r\tau)\big((T_{v_*}f)(r\varsigma) - (T_{v_*}f) (r\tau)\big) (\tau\cdot\varsigma) r^{2} dr d \tau d \varsigma dv_{*}\\
&=& 2 \int r^\gamma(1-\phi)(r)b^{\epsilon}(2(\tau\cdot\varsigma)^{2} - 1)g_{*}\big((T_{v_*}h)(r\tau) - (T_{v_*}h) (r\varsigma)\big)\\ &&\times \big((T_{v_*}f)(r\varsigma) - (T_{v_*}f) (r\tau)\big) (\tau\cdot\varsigma) r^{2} dr d \tau d \varsigma dv_{*}\\&=&
-\frac{1}{2}\int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)|u|^{\gamma}(1-\phi)(u)g_{*} ((T_{v_{*}}h)(|u|\frac{u^{+}}{|u^{+}|})-(T_{v_{*}}h)(u))\\ &&\times
((T_{v_{*}}f)(|u|\frac{u^{+}}{|u^{+}|})-(T_{v_{*}}f)(u)) d\sigma dv_{*} du.
\eeno
Then by Cauchy-Schwartz inequality and the fact $|u|^{\gamma}(1-\phi)(u) \lesssim \langle u \rangle^{\gamma}$, we have
\beno
|\mathcal{D}_{2}| &\lesssim& \{\int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)\langle u \rangle^{\gamma}|g_{*}| ((T_{v_{*}}h)( |u|\frac{u^{+}}{|u^{+}|})-(T_{v_{*}}h)( u))^{2} d\sigma dv_{*} du\}^{1/2}
\\&& \times \{\int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)\langle u \rangle^{\gamma}|g_{*}|
((T_{v_{*}}f)(|u|\frac{u^{+}}{|u^{+}|})-(T_{v_{*}}f)(u))^{2} d\sigma dv_{*} du\}^{1/2}
\eqdefa (\mathcal{D}_{2,h})^{1/2}(\mathcal{D}_{2,f})^{1/2}.
\eeno
Note that $\mathcal{D}_{2,h}$ and $\mathcal{D}_{2,f}$ have exactly the same structure. It suffices to focus on $\mathcal{D}_{2,f}$.
Since
\beno
((T_{v_{*}}f)(|u|\frac{u^{+}}{|u^{+}|})-(T_{v_{*}}f)(u))^{2} \leq 2 ((T_{v_{*}}f)(|u|\frac{u^{+}}{|u^{+}|})-(T_{v_{*}}f)(u^{+}))^{2} + 2 ((T_{v_{*}}f)(u^{+})-(T_{v_{*}}f)(u))^{2},
\eeno
we have
\beno
\mathcal{D}_{2,f} &\lesssim& \int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)\langle u \rangle^{\gamma}|g_{*}| ((T_{v_{*}}f)(|u|\frac{u^{+}}{|u^{+}|})-(T_{v_{*}}f)(u^{+}))^{2} d\sigma dv_{*} du
\\&&+\int b^{\epsilon}(\frac{u}{|u|}\cdot\sigma)\langle u \rangle^{\gamma}|g_{*}| ((T_{v_{*}}f)(u^{+})-(T_{v_{*}}f)(u))^{2} d\sigma dv_{*} du
\eqdefa \mathcal{D}_{2,f,1}+ \mathcal{D}_{2,f,2}.
\eeno
By Lemma \ref{gammanonzerotozero} and the facts (\ref{tvstartonovstar1}) and (\ref{tvstartonovstar2}), we have
\beno
\mathcal{D}_{2,f,1} \lesssim \int |g_{*}| \mathcal{Z}^{\epsilon,\gamma}(T_{v_{*}}f) dv_{*} \lesssim| g|_{L^{1}_{|\gamma|+2}}(|W^{\epsilon}(D)W_{\gamma/2}f|^{2}_{L^{2}}+|W^{\epsilon}W_{\gamma/2}f|^{2}_{L^{2}}).
\eeno
Thanks to Lemma \ref{nosigularityeg}, we have
\beno
\mathcal{D}_{2,f,2} \le \mathcal{R}^{\epsilon,\gamma}_{*,|g|}(f)
\lesssim \mathcal{R}^{\epsilon,0}_{|g| W_{|\gamma|}}(W_{\gamma/2}f) + | g|_{L^{1}_{|\gamma|+2}}|f|^{2}_{L^{2}_{\gamma/2}}.
\eeno
Due to the fact (see Lemma 3.3 in \cite{he2}) that
$
|\mathcal{R}^{\epsilon,0}_{g}(f)| \lesssim |g|_{L^{1}} \mathcal{R}^{\epsilon,0}_{\mu}(f) + | g|_{L^{1}_2}|W^{\epsilon}(D)f|^{2}_{L^{2}},
$
and \eqref{lowerupper1} in Lemma \ref{lowerboundpart2},
$
\mathcal{R}^{\epsilon,0}_{\mu}(f) \lesssim |W^{\epsilon}((-\Delta_{\SS^{2}})^{1/2})f|^{2}_{L^{2}} + |W^{\epsilon}(D)f|^{2}_{L^{2}} + |W^{\epsilon}f|^{2}_{L^{2}},
$
we have
\beno
\mathcal{D}_{2,f,2} \lesssim | g|_{L^{1}_{|\gamma|+2}}(|W^{\epsilon}((-\Delta_{\SS^{2}})^{1/2})W_{\gamma/2}f|^{2}_{L^{2}} + |W^{\epsilon}(D)W_{\gamma/2}f|^{2}_{L^{2}} + |W^{\epsilon}W_{\gamma/2}f|^{2}_{L^{2}}).
\eeno
Thus we have
$
\mathcal{D}_{2,f} \lesssim |g|_{L^{1}_{|\gamma|+2}}|f|^{2}_{\epsilon,\gamma/2}.
$
Similarly, $\mathcal{D}_{2,h}$ admits the same upper bound with $h$ in place of $f$, so we have
\beno
|\mathcal{D}_{2}|&\lesssim& | g|_{L^{1}_{|\gamma|+2}}|h|_{\epsilon,\gamma/2}|f|_{\epsilon,\gamma/2}.
\eeno
We complete the proof of the proposition
by patching together the estimates of $\mathcal{D}_{1}$ and $\mathcal{D}_{2}$.
\end{proof}
\medskip
Combining the previous two propositions, we are led to
\begin{thm} \label{upQepsilon}
For any $\eta>0$ and smooth functions $g,h$ and $f$, there hold
\begin{itemize}
\item if $\gamma>-\frac{3}{2}$,
$|\langle Q^{\epsilon}(g,h), f\rangle_v| \lesssim (|g|_{L^{2}_{|\gamma|}}+| g|_{L^{1}_{|\gamma|+2}})|h|_{\epsilon,\gamma/2}|f|_{\epsilon,\gamma/2}$;
\item if $\gamma=-\f32$, $|\langle Q^{\epsilon}(g,h), f\rangle_v| \lesssim (|g|_{L^1_{|\gamma|+2}}+|g|_{L^2_{|\gamma|}}) (|W^{\epsilon}(D)h|_{H^\eta_{\gamma/2}}+|h|_{\epsilon,\gamma/2})|f|_{\epsilon,\gamma/2};$
\item if $-3<\gamma< -\frac{3}{2}$,
$
|\langle Q^{\epsilon} (g,h), f\rangle_v| \lesssim |g|_{H^{s_{1}}_{|\gamma|}}|W^{\epsilon}(D)h|_{H^{s_{2}}_{\gamma/2}}|W^{\epsilon}(D)f|_{H^{s_{3}}_{\gamma/2}}+| g|_{L^{1}_{|\gamma|+2}}|h|_{\epsilon,\gamma/2}|f|_{\epsilon,\gamma/2}.
$
\end{itemize}
Here $s_1, s_2$ and $s_3$ satisfy $s_{1}+s_{2}+s_{3}=-\gamma-3/2$ if $ s_{2}+s_{3}\in (0,-\gamma-3/2]$ and $s_1=-\gamma-3/2+\eta$ if $s_2=s_3=0$.
\end{thm}
\begin{proof}
The theorem follows easily from the estimates for $\langle Q^{\epsilon}_{-1}(g,h), f\rangle_v$ in Proposition \ref{ubqepsilonsingular} and for $\langle Q^{\epsilon}_{\geq 0}(g,h), f\rangle_v$ in Proposition \ref{ubqepsilonnonsingular}.
\end{proof}
\subsubsection{Upper bounds for $\mathcal{I}(g,h,f)$}
For ease of notation, we simply write $\mathcal{I}(g,h,f)$ as $\mathcal{I}$. Notice that
\beno
\mu_{*}^{\prime 1/2} - \mu_{*}^{1/2} = (\mu_{*}^{\prime 1/4} + \mu_{*}^{1/4})(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})
= (\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})^{2} + 2\mu_{*}^{1/4}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4}),
\eeno
then we have
\beno
\mathcal{I} &=& \int B^{\epsilon,\gamma}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})^{2}g_{*} h f^{\prime} d\sigma dv_{*} dv
+ 2 \int B^{\epsilon,\gamma}\mu_{*}^{1/4}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})g_{*} h f^{\prime} d\sigma dv_{*} dv \\&=& \int B^{\epsilon,\gamma}(\mu_{*}^{\prime 1/8} + \mu_{*}^{1/8})^{2}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2}g_{*} h f^{\prime} d\sigma dv_{*} dv
\\&&+ 2 \int B^{\epsilon,\gamma}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})(\mu^{1/4}g)_{*}(h-h^{\prime})f^{\prime} d\sigma dv_{*}dv
\\&&+ 2 \int B^{\epsilon,\gamma}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})(\mu^{1/4}g)_{*} h^{\prime} f^{\prime} d\sigma dv_{*}dv
\eqdefa \mathcal{I}_{1} + \mathcal{I}_{2} + \mathcal{I}_{3}.
\eeno
\begin{lem}\label{upforI}
For any $\eta>0$ and smooth functions $g,h$ and $f$, there hold
\begin{itemize}
\item if $\gamma>-\frac{3}{2}$,
$|\mathcal{I}(g,h,f)| \lesssim |\mu^{1/8}g|_{L^{2}}|h|_{\epsilon,\gamma/2}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}$;
\item if $\gamma=-\f32$, $|\mathcal{I}(g,h,f)|\lesssim |\mu^{1/8}g|_{L^{2}}(|W^{\epsilon}(D)h|_{H^\eta_{\gamma/2}}+|h|_{\epsilon,\gamma/2})|W^{\epsilon}f|_{L^{2}_{\gamma/2}};$
\item if $-3<\gamma< -\frac{3}{2}$,
$
|\mathcal{I}(g,h,f)| \lesssim |\mu^{1/8}g|_{H^{s_{1}}}|W^{\epsilon}(D)h|_{H^{s_{2}}_{\gamma/2}}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}+| \mu^{1/8}g|_{L^{2}}|h|_{\epsilon,\gamma/2}|W^{\epsilon}f|_{L^{2}_{\gamma/2}},$
where $s_1$ and $ s_2$ satisfy $s_{1}+2s_{2}=-\gamma-3/2$ if $ s_{2} \in (0,-\gamma/2-3/4]$ and $s_1=-\gamma-3/2+\eta$ if $s_2=0$.
\end{itemize}
\end{lem}
\begin{proof} We estimate $\mathcal{I}_{1}, \mathcal{I}_{2}$ and $\mathcal{I}_{3}$ one by one. In what follows, we will
constantly use the fact:
\beno
(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2} \lesssim \min\{1,|v-v_{*}|^{2}\theta^{2}\} \sim \min\{1,|v^{\prime}-v_{*}|^{2}\theta^{2}\}\sim \min\{1,|v-v^{\prime}_{*}|^{2}\theta^{2}\}.
\eeno
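This fact can be seen as follows: the left-hand side is obviously bounded, and by the mean value theorem, using that $|\nabla \mu^{1/8}|$ is bounded and $|v_{*}^{\prime}-v_{*}| = |v-v_{*}|\sin(\theta/2)$, we get
\beno
(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2} \lesssim |v_{*}^{\prime}-v_{*}|^{2} \lesssim |v-v_{*}|^{2}\theta^{2};
\eeno
the stated equivalences then follow from $|v-v_{*}| \sim |v^{\prime}-v_{*}| \sim |v-v_{*}^{\prime}|$ for $\theta \in [0,\pi/2]$.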
{\it Step 1: Estimate of $\mathcal{I}_{1}$.}
We introduce the function $\phi$ to separate the relative velocity into two parts:
\beno
\mathcal{I}_{1} &=& \int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}(1-\phi)(v-v_{*})(\mu_{*}^{\prime 1/8} + \mu_{*}^{1/8})^{2}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2}g_{*} h f^{\prime} d\sigma dv_{*} dv
\\&&+\int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}\phi(v-v_{*})(\mu_{*}^{\prime 1/8} + \mu_{*}^{1/8})^{2}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2}g_{*} h f^{\prime} d\sigma dv_{*} dv
\\&\eqdefa& \mathcal{I}_{1,1}+\mathcal{I}_{1,2}.
\eeno
\underline{Estimate of $\mathcal{I}_{1,1}$.} By Cauchy-Schwartz inequality, we have
\beno
|\mathcal{I}_{1,1}| &\lesssim& \{\int b^{\epsilon}(\cos\theta)\langle v-v_{*}\rangle^{\gamma}(\mu_{*}^{\prime 1/8} + \mu_{*}^{1/8})^{2}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2}g^{2}_{*} h^{2} d\sigma dv_{*} dv\}^{1/2}
\\&&\times \{\int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}(\mu_{*}^{\prime 1/8} + \mu_{*}^{1/8})^{2}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2} f^{\prime 2} d\sigma dv_{*} dv\}^{1/2}
\eqdefa (\mathcal{I}_{1,1,1})^{1/2} (\mathcal{I}_{1,1,2})^{1/2}.
\eeno
We claim that
\ben \label{claim-1-L2} \mathcal{A} \eqdefa \int b^{\epsilon}(\cos\theta)\langle v-v_{*}\rangle^{\gamma}(\mu_{*}^{\prime 1/8} + \mu_{*}^{1/8})^{2}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2} d\sigma \lesssim (W^{\epsilon})^{2}(v)\langle v\rangle^{\gamma},\een
which implies
$\mathcal{I}_{1,1,1} \lesssim |g|^{2}_{L^{2}}|W^{\epsilon}h|^{2}_{L^{2}_{\gamma/2}}$.
To prove the claim, we notice that
\beno \mathcal{A} &\lesssim& \int b^{\epsilon}(\cos\theta)\langle v-v_{*}\rangle^{\gamma}\mu_{*}^{1/4}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2} d\sigma + \int b^{\epsilon}(\cos\theta)\langle v-v_{*}\rangle^{\gamma}\mu_{*}^{\prime 1/4}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2} d\sigma
\\&\eqdefa& \mathcal{A}_{1}+\mathcal{A}_{2}.
\eeno
Due to Proposition \ref{symbol} and the fact that $W^{\epsilon}(v-v_{*}) \lesssim W^{\epsilon}(v)W^{\epsilon}(v_{*})$,
we get $\mathcal{A}_{1} \lesssim \langle v-v_{*}\rangle^{\gamma}\mu_{*}^{1/4}(W^{\epsilon})^{2}(v-v_{*}) \lesssim (W^{\epsilon})^{2}(v)\langle v\rangle^{\gamma}$.
As for $\mathcal{A}_{2}$, thanks to $|v-v_{*}| \sim |v-v^{\prime}_{*}|$ and thus $\langle v-v_{*}\rangle^{\gamma} \lesssim \langle v-v^{\prime}_{*}\rangle^{\gamma} \lesssim \langle v\rangle^{\gamma}\langle v^{\prime}_{*}\rangle^{|\gamma|}$, we have
$\mathcal{A}_{2} \lesssim \langle v\rangle^{\gamma} \int b^{\epsilon}(\cos\theta) \mu_{*}^{\prime 1/8} \min\{1,|v-v_{*}|^{2}\theta^{2}\} d\sigma$.
If $|v-v_{*}|\geq 10|v|$, then there holds $|v^{\prime}_{*}| = |v^{\prime}_{*}-v+v|\geq |v^{\prime}_{*}-v| -|v| \geq (1/\sqrt{2} - 1/10)|v-v_{*}| \geq \frac{1}{5}|v-v_{*}|$, and thus $\mu_{*}^{\prime 1/8} \lesssim \mu^{1/200}(v-v_{*})$,
which indicates
\beno \mathcal{A}_{2} \lesssim \langle v\rangle^{\gamma}\mu^{1/200}(v-v_{*}) (W^{\epsilon})^{2}(v-v_{*}) \lesssim \langle v\rangle^{\gamma}.\eeno
If $|v-v_{*}|\leq 10|v|$, by Proposition \ref{symbol}, we have
\beno \mathcal{A}_{2} \lesssim \langle v\rangle^{\gamma} \int b^{\epsilon}(\cos\theta) \min\{1,|v|^{2}\theta^{2}\} d\sigma \lesssim (W^{\epsilon})^{2}(v)\langle v\rangle^{\gamma}.\eeno
This ends the proof of the claim \eqref{claim-1-L2}.
By the change of variable $(v, v_{*})\rightarrow (v^{\prime}, v_{*}^{\prime})$, we have
\beno
\mathcal{I}_{1,1,2} &=& \int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}(\mu_{*}^{\prime 1/8} + \mu_{*}^{1/8})^{2}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2} f^{2} d\sigma dv_{*} dv
\\&\leq& 2\int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}\mu_{*}^{1/4}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2} f^{2} d\sigma dv_{*} dv
\\&& + 2\int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}\mu_{*}^{\prime 1/4}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2} f^{2} d\sigma dv_{*} dv
\eqdefa \mathcal{I}_{1,1,2,1} + \mathcal{I}_{1,1,2,2}.
\eeno
With the help of (\ref{combinewithgamma}), we have
\beno
\mathcal{I}_{1,1,2,1} &\lesssim& \int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}\mu_{*}^{1/4}\min\{1,|v-v_{*}|^{2}\theta^{2}\} f^{2} d\sigma dv_{*} dv
\\&\lesssim& \int |v-v_{*}|^{\gamma}\mu_{*}^{1/8} (W^{\epsilon})^{2}(v)f^{2} dv_{*} dv
\lesssim |W^{\epsilon}f|^{2}_{L^{2}_{\gamma/2}}.
\eeno
By the fact $|v-v_{*}|\sim|v-v_{*}^{\prime}|$ and the change of variable $v_{*}\rightarrow v_{*}^{\prime}$, we have
\beno
\mathcal{I}_{1,1,2,2} \lesssim \int b^{\epsilon}(\cos\theta)|v-v_{*}^{\prime}|^{\gamma}\mu_{*}^{\prime 1/4}\min\{1,|v-v_{*}^{\prime}|^{2}\theta^{2}\} f^{2} d\sigma dv_{*}^{\prime} dv
\lesssim |W^{\epsilon}f|^{2}_{L^{2}_{\gamma/2}}.
\eeno
Therefore we have
$
\mathcal{I}_{1,1,2} \lesssim |W^{\epsilon}f|^{2}_{L^{2}_{\gamma/2}}.
$
Together with the estimate for $\mathcal{I}_{1,1,1}$, we have
\ben \label{I-1-1}
\mathcal{I}_{1,1} \lesssim |g|_{L^{2}}|W^{\epsilon}h|_{L^{2}_{\gamma/2}}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}.
\een
\underline{Estimate of $\mathcal{I}_{1,2}$.} By Cauchy-Schwartz inequality, we have
\beno
|\mathcal{I}_{1,2}| &\lesssim& \{\int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}\phi(v-v_{*})(\mu_{*}^{\prime 1/8} + \mu_{*}^{1/8})^{2}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2}|g_{*}| h^{2} d\sigma dv_{*} dv\}^{1/2}
\\&&\times \{\int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}\phi(v-v_{*})(\mu_{*}^{\prime 1/8} + \mu_{*}^{1/8})^{2}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2} |g_{*}|f^{\prime 2} d\sigma dv_{*} dv\}^{1/2}
\\&\eqdefa& (\mathcal{I}_{1,2,1})^{1/2} (\mathcal{I}_{1,2,2})^{1/2}.
\eeno
Note that the support of the function $\phi$ is contained in $B_{4/3}$. When $|v-v_{*}| \leq 4/3$, there hold
$
|v_{*}|\geq |v|-4/3$ and $ |v_{*}^{\prime}|\geq |v|-|v-v_{*}^{\prime}| \geq |v|-|v-v_{*}| \geq |v|-4/3,
$ which imply that $(\mu_{*}^{\prime 1/8} + \mu_{*}^{1/8})^{2}\lesssim \mu^{\f14}$. Recalling Proposition \ref{symbol} and the fact
$\gamma+2s > -\frac{3}{2}$, one has
\beno
\mathcal{I}_{1,2,1} &\lesssim& \int |v-v_{*}|^{\gamma+2s}\phi(v-v_{*})\mu^{1/8}|g_{*}| h^{2} dv_{*} dv
\lesssim |g|_{L^{2}}|\mu^{1/16}h|^{2}_{L^{2}}.
\eeno
A similar argument applied to $\mathcal{I}_{1,2,2}$ yields
$
\mathcal{I}_{1,2,2}
\lesssim |g|_{L^{2}}|\mu^{1/16}f|^{2}_{L^{2}}.
$
Patching together the estimates for $\mathcal{I}_{1,2,1}$ and $\mathcal{I}_{1,2,2}$, we arrive at
$
|\mathcal{I}_{1,2}| \lesssim |g|_{L^{2}}|\mu^{1/16}h|_{L^{2}}|\mu^{1/16}f|_{L^{2}}.
$
Together with the estimate \eqref{I-1-1} for $\mathcal{I}_{1,1}$, we obtain
$
|\mathcal{I}_{1}| \lesssim |g|_{L^{2}}|W^{\epsilon}h|_{L^{2}_{\gamma/2}}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}.
$
{\it Step 2: Estimate of $\mathcal{I}_{2}$.} By Cauchy-Schwartz inequality, we have
\beno
\mathcal{I}_{2} &=& 2\int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})(\mu^{1/4}g)_{*}(h-h^{\prime})f^{\prime}
d\sigma dv_{*} dv
\\&\lesssim& \{\int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}|(\mu^{1/4}g)_{*}|(h-h^{\prime})^{2}d\sigma dv_{*} dv\}^{1/2}
\\&&\times \{\int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})^{2}|(\mu^{1/4}g)_{*}|f^{\prime 2}d\sigma dv_{*} dv\}^{1/2}
\eqdefa (\mathcal{I}_{2,1})^{1/2}(\mathcal{I}_{2,2})^{1/2}.
\eeno
\underline{Estimate of $\mathcal{I}_{2,1}$.}
Noticing that
$
(h-h^{\prime})^{2} = h^{\prime 2} - h^{2} - 2h(h^{\prime}-h),
$
we have
\beno
\mathcal{I}_{2,1} &=& \int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}(\mu^{1/4}g)_{*}(h^{\prime 2} - h^{2})d\sigma dv_{*} dv - 2 \langle Q^{\epsilon}(\mu^{1/4}g, h), h\rangle
\\&\eqdefa& \mathcal{I}_{2,1,1} - 2 \langle Q^{\epsilon}(\mu^{1/4}g, h), h\rangle .
\eeno
By the Cancellation Lemma, one has
$ \mathcal{I}_{2,1,1} =C(\epsilon)\int |v-v_{*}|^{\gamma}(\mu^{1/4}g)_{*}h^{2} dv_{*} dv$ with $|C(\epsilon)|\lesssim 1$. Thus
by Lemma \ref{aftercancellation} and Theorem \ref{upQepsilon}, we have,
if $\gamma>-\frac{3}{2}$,
$|\mathcal{I}_{2,1}| \lesssim |\mu^{1/8}g|_{L^{2}}|h|_{\epsilon,\gamma/2}^2$;
if $\gamma=-\f32$, $|\mathcal{I}_{2,1}| \lesssim |\mu^{1/8}g|_{L^{2}}(|W^{\epsilon}(D)h|_{H^\eta_{\gamma/2}}+|h|_{\epsilon,\gamma/2})|h|_{\epsilon,\gamma/2};$
if $-3<\gamma< -\frac{3}{2}$,
\beno
|\mathcal{I}_{2,1}| \lesssim |\mu^{1/8}g|_{H^{s_{1}}}|W^{\epsilon}(D)h|_{H^{s_{2}}_{\gamma/2}}^2+| \mu^{1/8}g|_{L^{2}}|h|_{\epsilon,\gamma/2}^2,
\eeno
where $s_1$ and $s_2$ satisfy $s_{1}+2s_{2}=-\gamma-3/2$ if $s_{2}\in (0,-\gamma/2-3/4]$ and $s_1=-\gamma-3/2+\eta$ if $s_2=0$.
\underline{Estimate of $\mathcal{I}_{2,2}$.}
We separate the relative velocity $|v-v_*|$ into two regions by introducing the cutoff function $\phi$. If $|v-v_*|\lesssim 1$, the estimate is the same as that for $\mathcal{I}_{1,2,2}$. If $|v-v_*|\gtrsim 1$, the estimate is exactly the same as that for $\mathcal{I}_{1,1,1}$. We conclude that
$
\mathcal{I}_{2,2} \lesssim |\mu^{1/8}g|_{L^{2}}|W^{\epsilon}f|^{2}_{L^{2}_{\gamma/2}}.
$ Then we get
if $\gamma>-\frac{3}{2}$,
$|\mathcal{I}_{2}| \lesssim |\mu^{1/8}g|_{L^{2}}|h|_{\epsilon,\gamma/2}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}$;
if $\gamma=-\f32$, $|\mathcal{I}_{2}| \lesssim |\mu^{1/8}g|_{L^{2}}(|W^{\epsilon}(D)h|_{H^\eta_{\gamma/2}}+|h|_{\epsilon,\gamma/2})|W^{\epsilon}f|_{L^{2}_{\gamma/2}};$
if $-3<\gamma< -\frac{3}{2}$,
$
|\mathcal{I}_{2}| \lesssim |\mu^{1/8}g|_{H^{s_{1}}}|W^{\epsilon}(D)h|_{H^{s_{2}}_{\gamma/2}}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}+| \mu^{1/8}g|_{L^{2}}|h|_{\epsilon,\gamma/2}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}.
$
{\it Step 3: Estimate of $\mathcal{I}_{3}$.}
By the change of variables $(v,v_{*}) \rightarrow (v^{\prime},v_{*}^{\prime})$ and $(v,v_{*},\sigma) \rightarrow (v_{*},v,-\sigma)$,
\beno
\mathcal{I}_{3} = 2\int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}(\mu^{ 1/4} - \mu^{\prime 1/4})(\mu^{1/4}g)^{\prime} h_{*}f_{*} d\sigma dv_{*}dv.
\eeno
For ease of notation, let $E_{1} = \{(v,v_{*},\sigma): |v-v_{*}| \geq 1/\epsilon\}, E_{2} = \{(v,v_{*},\sigma): |v-v_{*}| \leq 1/\epsilon, |v-v_{*}|^{-1}\leq \theta \leq \pi/2\}, E_{3} = \{(v,v_{*},\sigma): |v-v_{*}| \leq 1/\epsilon, \epsilon \leq \theta \leq |v-v_{*}|^{-1}\}$. Then $\mathcal{I}_{3}$ can be decomposed into three parts
$ \mathcal{I}_{3,1}, \mathcal{I}_{3,2}$ and $\mathcal{I}_{3,3}$ which correspond to $E_{1}, E_{2}$ and $E_{3}$ respectively.
\underline{Estimate of $\mathcal{I}_{3,1}$.}
By the change of variable $v \rightarrow v^{\prime}$ and the fact $|v^{\prime}-v_{*}|\geq |v-v_{*}| /\sqrt{2}$, we have
\beno
|\mathcal{I}_{3,1}| &\lesssim& \int b^{\epsilon}(\cos\theta)|v^{\prime}-v_{*}|^{\gamma}\mathbf{1}_{|v^{\prime}-v_{*}|\geq (\sqrt{2}\epsilon)^{-1}}|(\mu^{1/4}g)^{\prime} h_{*}f_{*}| d\sigma dv_{*}dv^{\prime}
\\&\lesssim& \epsilon^{-2s} \int |v^{\prime}-v_{*}|^{\gamma}\mathbf{1}_{|v^{\prime}-v_{*}|\geq (\sqrt{2}\epsilon)^{-1}}|(\mu^{1/4}g)^{\prime} h_{*}f_{*}| dv_{*}dv^{\prime}.
\eeno
On one hand, by Cauchy-Schwartz inequality, we have
\begin{eqnarray}\label{lessthanep2s}
&&\epsilon^{-2s} \int |v^{\prime}-v_{*}|^{\gamma}\textbf{1}_{|v^{\prime}-v_{*}|\geq (\sqrt{2}\epsilon)^{-1}}|(\mu^{1/4}g)^{\prime}|dv^{\prime}
\\&\leq& |\mu^{1/8}g|_{L^{2}} \epsilon^{-2s} \{\int |v^{\prime}-v_{*}|^{2\gamma}\textbf{1}_{|v^{\prime}-v_{*}|\geq (\sqrt{2}\epsilon)^{-1}}(\mu^{1/4})^{\prime}dv^{\prime}\}^{1/2}
\lesssim |\mu^{1/8}g|_{L^{2}} \epsilon^{-2s} \langle v_{*} \rangle^{\gamma}. \nonumber
\end{eqnarray}
Here we have used the fact that
$
\langle v^{\prime}-v_{*} \rangle^{2\gamma} \lesssim \langle v^{\prime} \rangle^{|2\gamma|}\langle v_{*} \rangle^{2\gamma}.
$
On the other hand, we have
\begin{eqnarray}\label{lessthanvstar2s}
&&\epsilon^{-2s} \int |v^{\prime}-v_{*}|^{\gamma}\textbf{1}_{|v^{\prime}-v_{*}|\geq (\sqrt{2}\epsilon)^{-1}}|(\mu^{1/4}g)^{\prime}|dv^{\prime}
\\&\lesssim& \int |v^{\prime}-v_{*}|^{\gamma+2s}\textbf{1}_{|v^{\prime}-v_{*}|\geq (\sqrt{2}\epsilon)^{-1}}|(\mu^{1/4}g)^{\prime}|dv^{\prime} \nonumber
\\&\leq& |\mu^{1/8}g|_{L^{2}} \{\int |v^{\prime}-v_{*}|^{2\gamma+4s}\textbf{1}_{|v^{\prime}-v_{*}|\geq (\sqrt{2}\epsilon)^{-1}}(\mu^{1/4})^{\prime}dv^{\prime}\}^{1/2} \nonumber
\lesssim |\mu^{1/8}g|_{L^{2}} \langle v_{*} \rangle^{\gamma+2s}, \nonumber
\end{eqnarray}
thanks to $\gamma+2s > -\f32$.
With estimates (\ref{lessthanep2s}) and (\ref{lessthanvstar2s}) in hand, we have
\beno
|\mathcal{I}_{3,1}| \lesssim |\mu^{1/8}g|_{L^{2}}|W^{\epsilon}h|_{L^{2}_{\gamma/2}}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}.
\eeno
\underline{Estimate of $\mathcal{I}_{3,2}$.} Thanks to $ |v^{\prime}-v_{*}|\sim|v-v_{*}| $ and the change of variable $v \rightarrow v^{\prime}$, we get
\begin{eqnarray}\label{i32preliminary}
|\mathcal{I}_{3,2}| &\lesssim& \int b^{\epsilon}(\cos\theta)\textbf{1}_{\theta \geq (\sqrt{2}|v^{\prime}-v_{*}|)^{-1} }|v^{\prime}-v_{*}|^{\gamma}\textbf{1}_{|v^{\prime}-v_{*}|\leq 1/\epsilon}|(\mu^{1/4}g)^{\prime} h_{*}f_{*}| d\sigma dv_{*}dv^{\prime}
\\&\lesssim& \int |v^{\prime}-v_{*}|^{\gamma+2s}\textbf{1}_{|v^{\prime}-v_{*}|\leq 1/\epsilon}|(\mu^{1/4}g)^{\prime} h_{*}f_{*}| dv_{*}dv^{\prime}. \nonumber
\end{eqnarray}
On one hand, similar to the argument in (\ref{lessthanvstar2s}), we have
\begin{eqnarray}\label{i32lessthanvstar2s}
\int |v^{\prime}-v_{*}|^{\gamma+2s}\textbf{1}_{|v^{\prime}-v_{*}|\leq 1/\epsilon}|(\mu^{1/4}g)^{\prime}| dv^{\prime}
&\lesssim& |\mu^{1/8}g|_{L^{2}} \langle v_{*} \rangle^{\gamma+2s}.
\end{eqnarray}
On the other hand, if $|v_{*}|\geq 2/\epsilon$, then $|v^{\prime}| \geq |v_{*}| - |v^{\prime}-v_{*}| \geq |v_{*}|/2 \geq 1/\epsilon$, which implies $\mu^{\prime} \leq \mu_{*}^{1/4} \lesssim e^{-1/(2\epsilon^{2})}$. Then we deduce that
\begin{eqnarray}\label{i32lessthanep2s}
&&
\mathrm{1}_{|v_*|\ge \f2{\epsilon}}\int |v^{\prime}-v_{*}|^{\gamma+2s}\textbf{1}_{|v^{\prime}-v_{*}|\leq 1/\epsilon}|(\mu^{1/4}g)^{\prime}| dv^{\prime}
\\&\lesssim& \mathrm{1}_{|v_*|\ge \f2{\epsilon}}|\mu^{1/8}g|_{L^{2}} \{\int |v^{\prime}-v_{*}|^{2\gamma+4s}\textbf{1}_{|v^{\prime}-v_{*}|\leq 1/\epsilon}(\mu^{1/4})^{\prime}dv^{\prime}\}^{1/2} \nonumber
\\&\lesssim& \mathrm{1}_{|v_*|\ge \f2{\epsilon}}|\mu^{1/8}g|_{L^{2}} \mu_{*}^{1/64} (\epsilon^{-1})^{\gamma+2s+3/2} e^{-1/(32\epsilon^{2})} \nonumber
\lesssim \mathrm{1}_{|v_*|\ge \f2{\epsilon}}|\mu^{1/8}g|_{L^{2}} \mu_{*}^{1/64}. \nonumber
\end{eqnarray}
With estimates (\ref{i32lessthanvstar2s}) and (\ref{i32lessthanep2s}) in hand, we have
\beno
|\mathcal{I}_{3,2}| \lesssim |\mu^{1/8}g|_{L^{2}}|W^{\epsilon}h|_{L^{2}_{\gamma/2}}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}.
\eeno
\underline{Estimate of $\mathcal{I}_{3,3}$.}
By Taylor expansion, one has
\beno
\mu^{ 1/4} - \mu^{\prime 1/4} = (\nabla \mu^{ 1/4})(v^{\prime})\cdot(v-v^{\prime}) + \frac{1}{2}\int_{0}^{1} (1-\kappa) [(\nabla^{2} \mu^{ 1/4})(v(\kappa)):(v-v^{\prime})\otimes(v-v^{\prime})] d\kappa,
\eeno
where $v(\kappa) = v^{\prime} + \kappa(v-v^{\prime})$.
Observe that, for any fixed $v_{*}$, there holds
\beno
\int b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma} \textbf{1}_{|v-v_{*}| \leq 1/\epsilon, \epsilon \leq \theta \leq |v-v_{*}|^{-1}}(\nabla \mu^{ 1/4})(v^{\prime})\cdot(v-v^{\prime})(\mu^{1/4}g)^{\prime} d\sigma dv = 0.
\eeno
Thus we have
\beno
|\mathcal{I}_{3,3}| &=& |\int_{E_{3}\times[0,1]} b^{\epsilon}(\cos\theta)|v-v_{*}|^{\gamma}\textbf{1}_{|v-v_{*}| \leq 1/\epsilon, \epsilon \leq \theta \leq |v-v_{*}|^{-1}}
\\&&\times (1-\kappa)[(\nabla^{2} \mu^{ 1/4})(v(\kappa)):(v-v^{\prime})\otimes(v-v^{\prime})] (\mu^{1/4}g)^{\prime} h_{*}f_{*} d\kappa d\sigma dv_{*}dv|
\\&\lesssim& \int b^{\epsilon}(\cos\theta)|v^{\prime}-v_{*}|^{\gamma+2}\theta^{2}\textbf{1}_{|v^{\prime}-v_{*}| \leq 1/\epsilon, \epsilon \leq \theta \leq |v^{\prime}-v_{*}|^{-1}}|(\mu^{1/4}g)^{\prime} h_{*}f_{*}| d\sigma dv_{*}dv^{\prime}
\\&\lesssim& \int |v^{\prime}-v_{*}|^{\gamma+2s}\textbf{1}_{|v^{\prime}-v_{*}| \leq 1/\epsilon}|(\mu^{1/4}g)^{\prime} h_{*}f_{*}| dv_{*}dv^{\prime}.
\eeno
Copying the argument applied to (\ref{i32preliminary}), we have
$|\mathcal{I}_{3,3}| \lesssim|\mu^{1/8}g|_{L^{2}}|W^{\epsilon}h|_{L^{2}_{\gamma/2}}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}$.
Patching together the above upper estimates of $\mathcal{I}_{3,1}, \mathcal{I}_{3,2}$ and $\mathcal{I}_{3,3}$, we have
\beno
|\mathcal{I}_{3}| \lesssim |\mu^{1/8}g|_{L^{2}}|W^{\epsilon}h|_{L^{2}_{\gamma/2}}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}.
\eeno
The lemma follows from the above estimates of $\mathcal{I}_{1}, \mathcal{I}_{2}$ and $\mathcal{I}_{3}$.
\end{proof}
\subsubsection{Upper bounds for the nonlinear term $ \Gamma^{\epsilon}(g,h)$}
We are now ready to give the upper bound for the inner product $\langle \Gamma^{\epsilon}(g,h), f\rangle_v$.
\begin{thm}\label{upGammagh}
For any $\eta>0$ and smooth functions $g,h$ and $f$, there hold
\begin{itemize}
\item if $\gamma>-\frac{3}{2}$,
$|\langle \Gamma^\epsilon(g,h), f\rangle_v| \lesssim |\mu^{1/8}g|_{L^{2}}|h|_{\epsilon,\gamma/2}| f|_{ \epsilon,\gamma/2}$;
\item if $\gamma=-\f32$, $|\langle \Gamma^\epsilon(g,h), f\rangle_v|\lesssim |\mu^{1/8}g|_{L^{2}}(|W^{\epsilon}(D)h|_{H^\eta_{\gamma/2}}+|h|_{\epsilon,\gamma/2})| f|_{ \epsilon,\gamma/2};$
\item if $-3<\gamma< -\frac{3}{2}$,
\beno
|\langle \Gamma^\epsilon(g,h), f\rangle_v| \lesssim |\mu^{1/8}g|_{H^{s_{1}}}|W^{\epsilon}(D)h|_{H^{s_{2}}_{\gamma/2}}|W^{\epsilon}(D)f|_{H^{s_{3}}_{\gamma/2}}+| \mu^{1/8}g|_{L^{2}}|h|_{\epsilon,\gamma/2}| f|_{ \epsilon,\gamma/2},\eeno
where $s_1, s_2$ and $ s_3$ satisfy $s_{1}+s_{2}+s_3=-\gamma-3/2$ if $ s_{2}+ s_3 \in (0,-\gamma-3/2]$ and $s_1=-\gamma-3/2+\eta$ if $s_2=s_3=0$.
\end{itemize}
As a direct consequence, we have \begin{eqnarray}\label{upgammamuff1}
|\langle \Gamma^{\epsilon}(f, \mu^{1/2}), f\rangle_v| \lesssim |f|_{L^2_{\gamma/2}}|f|_{\epsilon,\gamma/2}; \quad |\langle \Gamma^{\epsilon}(\mu^{1/2},f), f\rangle_v| \lesssim |f|^{2}_{\epsilon,\gamma/2}.
\end{eqnarray}
\end{thm}
\begin{proof}
The theorem follows from Theorem \ref{upQepsilon} and Lemma \ref{upforI}.
\end{proof}
\subsection{Proof of Theorem \ref{main1}}
Now we are in a position to prove Theorem \ref{main1}.
\begin{proof} On one hand, by Lemma \ref{equivalencenorm}, we derive that
$ \langle \mathcal{L}^{\epsilon}f, f\rangle_v + |f|^{2}_{L^{2}_{\gamma/2}} \gtrsim |f|^{2}_{\epsilon,\gamma/2}$.
On the other hand, by \eqref{DefLep} and \eqref{upgammamuff1}, we have
$
|\langle \mathcal{L}^{\epsilon}f, f\rangle_v| \lesssim |f|^{2}_{\epsilon,\gamma/2},
$
which completes the proof of the theorem.
\end{proof}
As a result, we obtain the coercivity estimate of the linear operator $\mathcal{L}^{\epsilon}$:
\begin{prop}\label{coercvityforLep} For any smooth function $f$, we have
\beno
\langle \mathcal{L}^{\epsilon}f, f\rangle_v \gtrsim |(\mathbf{I}-\mathbb{P})f|^{2}_{\epsilon,\gamma/2}.
\eeno
\end{prop}
\begin{proof}
By \cite{mouhot}, there holds $\langle \mathcal{L}^{\epsilon}f, f\rangle_v \gtrsim |(\mathbf{I}-\mathbb{P})f|^{2}_{L^{2}_{\gamma/2}}.$
By the definition of $\mathbb{P}$ and Theorem \ref{main1}, we have
\beno
\langle \mathcal{L}^{\epsilon}f, f\rangle_v = \langle \mathcal{L}^{\epsilon}(\mathbf{I}-\mathbb{P})f, (\mathbf{I}-\mathbb{P})f\rangle_v \gtrsim |(\mathbf{I}-\mathbb{P})f|^{2}_{\epsilon,\gamma/2} - |(\mathbf{I}-\mathbb{P})f|^{2}_{L^{2}_{\gamma/2}},
\eeno
which ends the proof of the proposition.
\end{proof}
\subsection{Commutator estimates between $ \Gamma^{\epsilon}(g,\cdot)$ and $W_{l}$} In this subsection, we want to prove
\begin{lem}\label{commutatorgamma}
Let $l \geq 2$. Then there hold \begin{enumerate} \item if $\gamma+2\ge0$,
$|\langle \Gamma^{\epsilon}(g,W_{l}h)-W_{l}\Gamma^{\epsilon}(g,h), f\rangle_v| \lesssim |g|_{L^{2}}|W_{l+\gamma/2}h|_{L^{2}}|f|_{\epsilon,\gamma/2};$
\item if $-3 < \gamma < -2$,
\beno
|\langle \Gamma^{\epsilon}(g,W_{l}h)-W_{l}\Gamma^{\epsilon}(g,h), f\rangle_v| &\lesssim& |g|_{L^{2}}|W_{l+\gamma/2}h|_{L^{2}}|f|_{\epsilon,\gamma/2}+ |\mu^{1/32}g|_{H^{s_{1}}}|\mu^{1/32}h|_{H^{s_{2}}}|f|_{\epsilon,\gamma/2},
\eeno
where $s_{1}, s_{2} \in [0,-\gamma/2-1]$ with $s_{1} + s_{2} = -\gamma/2-1$.
\end{enumerate}
\end{lem}
This lemma is a consequence of Lemma \ref{commutatorQepsilon} and Lemma \ref{commutatorforI}.
We first prove the commutator estimates for $Q^{\epsilon}$.
\begin{lem}\label{commutatorQepsilon}
Let $l \geq 2$. Then there hold
\begin{enumerate} \item if $\gamma+2\ge0$,
$|\langle Q^{\epsilon}(\mu^{1/4}g,W_{l}h)-W_{l}Q^{\epsilon}(\mu^{1/4}g,h), f\rangle_v| \lesssim |\mu^{1/32}g|_{L^{2}}|W_{l+\gamma/2}h|_{L^{2}}|f|_{\epsilon,\gamma/2};$
\item if $-3 < \gamma < -2$,
\beno
|\langle Q^{\epsilon}(\mu^{1/4}g,W_{l}h)-W_{l}Q^{\epsilon}(\mu^{1/4}g,h), f\rangle_v| \lesssim (|\mu^{1/32}g|_{L^{2}}|W_{l+\gamma/2}h|_{L^{2}}+ |\mu^{1/32}g|_{H^{s_{1}}}|\mu^{1/32}h|_{H^{s_{2}}})|f|_{\epsilon,\gamma/2},
\eeno
where $s_{1}, s_{2} \in [0,-\gamma/2-1]$ with $s_{1} + s_{2} = -\gamma/2-1$.
\end{enumerate}
\end{lem}
\begin{proof}
We observe that
\beno &&\langle Q^{\epsilon}(\mu^{1/4}g,W_{l}h)-W_{l}Q^{\epsilon}(\mu^{1/4}g,h), f\rangle_v = \int B^{\epsilon,\gamma}(W_{l}-W^{\prime}_{l})\mu_{*}^{1/4}g_{*} h f^{\prime} d\sigma dv_{*} dv
\\&&= \int B^{\epsilon,\gamma}(W_{l}-W^{\prime}_{l})\mu_{*}^{1/4}g_{*} h (f^{\prime}-f) d\sigma dv_{*} dv
+\int B^{\epsilon,\gamma}(W_{l}-W^{\prime}_{l})\mu_{*}^{1/4}g_{*} h f d\sigma dv_{*} dv
\eqdefa\mathcal{A}_{1} + \mathcal{A}_{2}. \eeno
{\it Step 1: Estimate of $\mathcal{A}_{1}$.}
By Cauchy-Schwartz inequality, we have
\beno |\mathcal{A}_{1}| &\leq& \{\int B^{\epsilon,\gamma} \mu_{*}^{1/4}(f^{\prime}-f)^{2} d\sigma dv_{*} dv\}^{1/2}
\{\int B^{\epsilon,\gamma}(W_{l}-W^{\prime}_{l})^{2}\mu_{*}^{1/4}g^{2}_{*} h^{2} d\sigma dv_{*} dv\}^{1/2}
\\&\eqdefa&(\mathcal{A}_{1,1})^{1/2}(\mathcal{A}_{1,2})^{1/2}. \eeno
Note that $\mathcal{A}_{1,1} = \mathcal{R}^{\epsilon,\gamma}_{\mu^{1/4}}(f)$. Thus by Lemma \ref{equivalencenorm} and Theorem \ref{main1}, we have $ \mathcal{A}_{1,1} \lesssim |f|^{2}_{\epsilon,\gamma/2}$.
It is easy to derive $ \int b^{\epsilon}(W_{l}-W^{\prime}_{l})^{2}d\sigma \lesssim |v-v_{*}|^{2}\langle v \rangle^{2l-2}\langle v_{*} \rangle^{2l-2}, $
which gives $$\mathcal{A}_{1,2} \lesssim \int |v-v_{*}|^{\gamma+2}\langle v \rangle^{2l-2}\langle v_{*} \rangle^{2l-2}\mu_{*}^{1/4}g^{2}_{*} h^{2} dv_{*} dv.$$
If $\gamma+2 \geq 0$, there holds $ \mathcal{A}_{1,2} \lesssim |\mu^{1/16}g|^{2}_{L^{2}}|h|^{2}_{L^{2}_{l+\gamma/2}}.$
If $\gamma+2 < 0$, we make the decomposition,
\beno \mathcal{A}_{1,2} &\lesssim& \int |v-v_{*}|^{\gamma+2}\mathbf{1}_{|v-v_{*}|\leq 1}\langle v \rangle^{2l-2}\langle v_{*} \rangle^{2l-2}\mu_{*}^{1/4}g^{2}_{*} h^{2} dv_{*} dv
\\&&+ \int |v-v_{*}|^{\gamma+2}\mathbf{1}_{|v-v_{*}|\geq 1}\langle v \rangle^{2l-2}\langle v_{*} \rangle^{2l-2}\mu_{*}^{1/4}g^{2}_{*} h^{2} dv_{*} dv
\eqdefa \mathcal{A}_{1,2,1}+\mathcal{A}_{1,2,2}.\eeno
When $|v-v_{*}|\leq 1$, there holds $|v_{*}|\geq |v|-1$, thus $|v_{*}|^{2}\gtrsim |v|^{2}/2$ and $\mu_{*} \lesssim \mu^{1/2}$. Therefore we get
$\langle v \rangle^{2l-2}\langle v_{*} \rangle^{2l-2}\mu_{*}^{1/4} \lesssim \langle v_{*} \rangle^{4l-4}\mu_{*}^{1/8}\mu^{1/16} \lesssim \mu_{*}^{1/16}\mu^{1/16},$ which implies
\beno \mathcal{A}_{1,2,1} \lesssim \int |v-v_{*}|^{\gamma+2}\mathbf{1}_{|v-v_{*}|\leq 1}\mu_{*}^{1/16}\mu^{1/16}g^{2}_{*} h^{2}dv_{*} dv = \int |v-v_{*}|^{\gamma+2}\mathbf{1}_{|v-v_{*}|\leq 1}G_{*} H dv_{*} dv, \eeno
where $G = \mu^{1/16}g^{2}$ and $H = \mu^{1/16}h^{2}$. We claim that for $s_1,s_2\ge0$ with $s_1+s_2=-(\gamma+2)/2$, $$|\mathcal{A}_{1,2,1}|\lesssim |\mu^{1/32}g|^{2}_{H^{s_{1}}} |\mu^{1/32}h|^{2}_{H^{s_{2}}}.$$ To see that, if $s_1\in (0, -(\gamma+2)/2)$,
by Hardy-Littlewood-Sobolev inequality and Sobolev embedding theorem, we get the result. For $s_1=0$ or $s_1=-(\gamma+2)/2$, by Hardy's inequality, one has
\beno |\mathcal{A}_{1,2,1}|\lesssim |\sqrt{G}|^{2}_{H^{-(\gamma+2)/2}}|\sqrt{H}|^{2}_{L^2};\quad |\mathcal{A}_{1,2,1}|\lesssim |\sqrt{G}|^{2}_{L^2}|\sqrt{H}|^{2}_{H^{-(\gamma+2)/2}}. \eeno
This proves the claim.
When $|v-v_{*}|\geq 1$, there holds $|v-v_{*}|^{\gamma+2} \sim \langle v-v_{*} \rangle^{\gamma+2} \lesssim \langle v \rangle^{\gamma+2}\langle v_{*} \rangle^{|\gamma+2|}$, which implies
\beno \mathcal{A}_{1,2,2} \lesssim \int \langle v \rangle^{2l+\gamma}\langle v_{*} \rangle^{2l-2+|\gamma+2|}\mu_{*}^{1/4}g^{2}_{*} h^{2} dv_{*} dv
\lesssim |\mu^{1/32}g|^{2}_{L^{2}} |h|^{2}_{L^{2}_{l+\gamma/2}}.\eeno
Patching together the above estimates, we have if $\gamma+2 \geq 0$, $|\mathcal{A}_{1}| \lesssim |\mu^{1/16}g|_{L^{2}}|h|_{L^{2}_{l+\gamma/2}}|f|_{\epsilon,\gamma/2}$,
and if $\gamma+2 < 0$, $|\mathcal{A}_{1}| \lesssim (|\mu^{1/32}g|_{H^{s_{1}}} |\mu^{1/32}h|_{H^{s_{2}}}+|\mu^{1/32}g|_{L^{2}} |h|_{L^{2}_{l+\gamma/2}})|f|_{\epsilon,\gamma/2}.$
{\it Step 2: Estimate of $\mathcal{A}_{2}$.}
By Taylor expansion, one has
\beno W^{\prime}_{l} - W_{l} = (\nabla W_{l})(v)\cdot(v^{\prime}-v) +\frac{1}{2}\int_{0}^{1}(1-\kappa)(\nabla^{2}W_{l})(v(\kappa)):(v^{\prime}-v)\otimes(v^{\prime}-v)d\kappa, \eeno
where $v(\kappa) = v + \kappa (v^{\prime}-v)$. Thus we have
\beno \mathcal{A}_{2} &=& -\int B^{\epsilon,\gamma}(\nabla W_{l})(v)\cdot(v^{\prime}-v)\mu_{*}^{1/4}g_{*} h f d\sigma dv_{*} dv
\\&&-\frac{1}{2}\int B^{\epsilon,\gamma}(1-\kappa)(\nabla^{2}W_{l})(v(\kappa)):(v^{\prime}-v)\otimes(v^{\prime}-v)\mu_{*}^{1/4}g_{*} h f d\kappa d\sigma dv_{*} dv
\eqdefa \mathcal{A}_{2,1}+\mathcal{A}_{2,2}.\eeno
\underline{Estimate of $\mathcal{A}_{2,1}$.} Thanks to the fact that there exists a constant $C(\epsilon)$ with $|C(\epsilon)|\lesssim 1$ such that
\ben\label{canvvx} \int b^{\epsilon}(\cos\theta) (v^{\prime}-v) d\sigma = -(v-v_{*})\int b^{\epsilon}(\cos\theta)\sin^{2}(\theta/2)d\sigma = -(v-v_{*}) C(\epsilon),\een
we have \beno |\mathcal{A}_{2,1}| &\lesssim& \int |v-v_{*}|^{\gamma+1}\langle v \rangle^{l-1}\langle v_{*} \rangle^{l-1}\mu_{*}^{1/4}|g_{*} h f| d\sigma dv_{*} dv
\\&\lesssim&\int |v-v_{*}|^{\gamma+1}\mathbf{1}_{|v-v_{*}|\leq 1}\langle v \rangle^{l-1}\langle v_{*} \rangle^{l-1}\mu_{*}^{1/4}|g_{*} h f| dv_{*} dv
\\&&+ \int |v-v_{*}|^{\gamma+1}\mathbf{1}_{|v-v_{*}|\geq 1}\langle v \rangle^{l-1}\langle v_{*} \rangle^{l-1}\mu_{*}^{1/4}|g_{*} h f| dv_{*} dv
\eqdefa\mathcal{A}_{2,1,1}+\mathcal{A}_{2,1,2}.\eeno
When $|v-v_{*}|\leq 1$, as before, one has
$ \langle v \rangle^{l-1}\langle v_{*} \rangle^{l-1}\mu_{*}^{1/4} \lesssim \langle v_{*} \rangle^{2l-2}\mu_{*}^{1/8}\mu^{1/16}
\lesssim \mu_{*}^{1/16}\mu^{1/16}$.
Thus by the Cauchy-Schwarz inequality, we have
\beno \mathcal{A}_{2,1,1} &\lesssim& \int |v-v_{*}|^{\gamma+1}\mathbf{1}_{|v-v_{*}|\leq 1}\mu_{*}^{1/16}\mu^{1/16}|g_{*} h f| dv_{*} dv
\\&\leq& \{\int |v-v_{*}|^{\gamma+2}\mathbf{1}_{|v-v_{*}|\leq 1}\mu_{*}^{1/16}\mu^{1/16}g^{2}_{*} h^{2} dv_{*} dv\}^{\f12}
\{\int |v-v_{*}|^{\gamma}\mathbf{1}_{|v-v_{*}|\leq 1}\mu_{*}^{1/16}\mu^{1/16} f^{2} dv_{*} dv\}^{\f12}
\\&\lesssim& \{\int |v-v_{*}|^{\gamma+2}\mathbf{1}_{|v-v_{*}|\leq 1}\mu_{*}^{1/16}\mu^{1/16}g^{2}_{*} h^{2}dv_{*} dv\}^{1/2}|\mu^{1/32}f|_{L^{2}}. \eeno
Copying the argument for $\mathcal{A}_{1,2,1}$, we conclude that if $\gamma \geq -2$, then
$\mathcal{A}_{2,1,1} \lesssim |\mu^{1/32}g|_{L^{2}} |\mu^{1/32}h|_{L^{2}}|\mu^{1/32}f|_{L^{2}} $, and if
$-3 < \gamma < - 2$, then
$ \mathcal{A}_{2,1,1} \lesssim |\mu^{1/32}g|_{H^{s_{1}}} |\mu^{1/32}h|_{H^{s_{2}}}|\mu^{1/32}f|_{L^{2}}, $
with $s_{1}, s_{2} \in [0,-\gamma/2-1]$ satisfying $s_{1}+s_{2} = -\gamma/2 -1 $.
By nearly the same argument as that for $\mathcal{A}_{1,2,2}$, we have,
$ \mathcal{A}_{2,1,2} \lesssim |\mu^{1/32}g|_{L^{2}} |h|_{L^{2}_{l+\gamma/2}} |f|_{L^{2}_{\gamma/2}}.$
\underline{Estimate of $\mathcal{A}_{2,2}$.} Since $|(\nabla^{2}W_{l})(v(\kappa))| \lesssim \langle v(\kappa) \rangle^{l-2} \lesssim \langle v \rangle^{l-2}\langle v_{*} \rangle^{l-2}$ and $|v^{\prime}-v|^{2} \lesssim \theta^{2} |v-v_{*}|^{2}$, we have
\beno |\mathcal{A}_{2,2}| &\lesssim& \int b^{\epsilon}(\cos\theta)\theta^{2}|v-v_{*}|^{\gamma+2}\langle v \rangle^{l-2}\langle v_{*} \rangle^{l-2}\mu_{*}^{1/4}|g_{*} h f |d\sigma dv_{*} dv
\\&\lesssim& \int |v-v_{*}|^{\gamma+2}\langle v \rangle^{l-2}\mu_{*}^{1/8}|g_{*} h f | dv_{*} dv.
\eeno
Thanks to $\gamma+2 > -1$, we have
$\int |v-v_{*}|^{\gamma+2}\mu_{*}^{1/8}|g_{*}| dv_{*} \lesssim \langle v \rangle^{\gamma+2}|\mu^{1/16}g|_{L^{2}}, $
which implies $ |\mathcal{A}_{2,2}| \lesssim |\mu^{1/16}g|_{L^{2}} |h|_{L^{2}_{l+\gamma/2}} |f|_{L^{2}_{\gamma/2}}. $
Patching together the estimates of $\mathcal{A}_{2,1,1},\mathcal{A}_{2,1,2}$ and $\mathcal{A}_{2,2}$, we derive that if $\gamma \geq - 2$,
$ |\mathcal{A}_{2}| \lesssim |\mu^{1/32}g|_{L^{2}} |h|_{L^{2}_{l+\gamma/2}} |f|_{L^{2}_{\gamma/2}}$,
and if $-3 < \gamma < - 2$, $|\mathcal{A}_{2}| \lesssim (|\mu^{1/32}g|_{H^{s_{1}}} |\mu^{1/32}h|_{H^{s_{2}}}+|\mu^{1/32}g|_{L^{2}} |h|_{L^{2}_{l+\gamma/2}}) |f|_{L^{2}_{\gamma/2}}$.
The lemma follows by patching together the estimates of $\mathcal{A}_{1}$ and $\mathcal{A}_{2}$.
\end{proof}
The next lemma gives the commutator estimates for $\mathcal{I}(g,h,f)$.
\begin{lem}\label{commutatorforI}
When $l \geq 1$, there holds
\beno
|\mathcal{I}(g,W_{l}h,f)-\mathcal{I}(g,h,W_{l}f)| \lesssim |g|_{L^{2}}|W_{l+\gamma/2}h|_{L^{2}}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}.
\eeno
\end{lem}
\begin{proof}
By the definition of $\mathcal{I}(g,h,f)$ and the fact that $(\mu_{*}^{\prime 1/2} - \mu_{*}^{1/2}) =(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})^{2}+2\mu_{*}^{1/4}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})$, we have
\beno
\mathcal{I}(g,W_{l}h,f)-\mathcal{I}(g,h,W_{l}f) &=& \int B^{\epsilon,\gamma}(\mu_{*}^{\prime 1/2} - \mu_{*}^{1/2}) (W_{l}-W^{\prime}_{l})g_{*} h f^{\prime} d\sigma dv_{*} dv
\\&=& \int B^{\epsilon,\gamma}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})^{2}(W_{l}-W^{\prime}_{l})g_{*} h f^{\prime} d\sigma dv_{*} dv
\\&&
+ 2 \int B^{\epsilon,\gamma}\mu_{*}^{1/4}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})(W_{l}-W^{\prime}_{l})g_{*} h f^{\prime} d\sigma dv_{*} dv
\eqdefa \mathcal{A}_{1} + 2\mathcal{A}_{2}.
\eeno
{\it Step 1: Estimate of $\mathcal{A}_{1}$.}
By the Cauchy-Schwarz inequality, we have
\beno |\mathcal{A}_{1}| &\leq& \{\int B^{\epsilon,\gamma} (\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})^{2} f^{\prime 2} d\sigma dv_{*} dv\}^{1/2}
\\&&\times\{\int B^{\epsilon,\gamma}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})^{2}(W_{l}-W^{\prime}_{l})^{2}g^{2}_{*} h^{2} d\sigma dv_{*} dv\}^{1/2}
\eqdefa(\mathcal{A}_{1,1})^{1/2}(\mathcal{A}_{1,2})^{1/2}. \eeno
By the change of variables $(v,v_{*}) \rightarrow (v_{*}^{\prime},v^{\prime})$ and Lemma \ref{lowerboundpart1}, we have
\beno \mathcal{A}_{1,1} &=& \int B^{\epsilon,\gamma} (\mu^{\prime 1/4} - \mu^{1/4})^{2} f^{2}_{*} d\sigma dv_{*} dv
\lesssim |W^{\epsilon}f|^{2}_{L^{2}_{\gamma/2}}.\eeno
Thanks to $(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})^{2} = (\mu_{*}^{\prime 1/8} + \mu_{*}^{1/8})^{2}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2} \leq 2 (\mu_{*}^{\prime 1/4} + \mu_{*}^{1/4})(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2}$, we have
\beno \mathcal{A}_{1,2} &\lesssim& \int B^{\epsilon,\gamma}\mu_{*}^{1/4}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2}(W_{l}-W^{\prime}_{l})^{2}g^{2}_{*} h^{2} d\sigma dv_{*} dv \\&&+ \int B^{\epsilon,\gamma}\mu_{*}^{\prime 1/4}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})^{2}(W_{l}-W^{\prime}_{l})^{2}g^{2}_{*} h^{2} d\sigma dv_{*} dv
\eqdefa \mathcal{A}_{1,2,1} + \mathcal{A}_{1,2,2}.\eeno
Thanks to the facts $|v-v^{\prime}_{*}| \sim |v-v_{*}|$, and
\begin{eqnarray}\label{roughaboutwl}
(W_{l}-W^{\prime}_{l})^{2} \lesssim \min\{\theta^{2}|v-v_{*}^{\prime}|^{2}\langle v \rangle^{2l-2} \langle v_{*}^{\prime} \rangle^{2l-2}, \theta^{2}\langle v \rangle^{2l} \langle v_{*}^{\prime} \rangle^{2l}\},
\end{eqnarray}
\begin{eqnarray}\label{roughaboutmu}
(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2} \lesssim \min\{ \theta^{2}|v-v_{*}^{\prime}|^{2}, 1\},
\end{eqnarray}
we claim
\begin{eqnarray}\label{kernelestimate2}
\mathcal{B} \eqdefa \int B^{\epsilon,\gamma}\mu_{*}^{\prime 1/4}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})^{2}(W_{l}-W^{\prime}_{l})^{2} d\sigma \lesssim \langle v \rangle^{2l+\gamma},
\end{eqnarray}
which implies $ \mathcal{A}_{1,2,2} \lesssim |g|^{2}_{L^{2}}|h|^{2}_{L^{2}_{l+\gamma/2}}.$
In fact, by (\ref{roughaboutwl}) and (\ref{roughaboutmu}), on one hand, there holds
\beno \mathcal{B} \lesssim \int b^{\epsilon}(\cos\theta)\theta^{4}|v-v^{\prime}_{*}|^{\gamma+4}\mu_{*}^{\prime 1/4} \langle v \rangle^{2l-2} \langle v_{*}^{\prime} \rangle^{2l-2} d\sigma. \eeno
When $|v-v_{*}|\leq 1$, there holds $|v-v_{*}^{\prime}|\leq 1$, $|v-v^{\prime}_{*}|^{\gamma+4}\leq 1$ and $\langle v \rangle \sim \langle v^{\prime}_{*} \rangle$, thus $\langle v \rangle^{2l-2} \lesssim \langle v \rangle^{2l+\gamma} \langle v^{\prime}_{*} \rangle^{-2-\gamma}$, which implies
\beno \mathcal{B} \lesssim \int b^{\epsilon}(\cos\theta)\theta^{4}\mu_{*}^{\prime 1/4} \langle v \rangle^{2l+\gamma} \langle v_{*}^{\prime} \rangle^{2l-4-\gamma} d\sigma \lesssim \int b^{\epsilon}(\cos\theta)\theta^{4} \langle v \rangle^{2l+\gamma} d\sigma \lesssim \langle v \rangle^{2l+\gamma}. \eeno
By (\ref{roughaboutwl}) and (\ref{roughaboutmu}), on the other hand, there holds
$\mathcal{B} \lesssim \int b^{\epsilon}(\cos\theta)\theta^{2}|v-v^{\prime}_{*}|^{\gamma}\mu_{*}^{\prime 1/4} \langle v \rangle^{2l} \langle v_{*}^{\prime} \rangle^{2l} d\sigma. $
When $|v-v_{*}|\geq 1$, there holds $|v-v^{\prime}_{*}|^{\gamma}\sim \langle v-v^{\prime}_{*} \rangle^{\gamma} \lesssim \langle v \rangle^{\gamma}\langle v^{\prime}_{*} \rangle^{|\gamma|}$, which implies
\beno \mathcal{B} \lesssim \int b^{\epsilon}(\cos\theta)\theta^{2}\mu_{*}^{\prime 1/4} \langle v \rangle^{2l+\gamma} \langle v_{*}^{\prime} \rangle^{2l+|\gamma|} d\sigma \lesssim \int b^{\epsilon}(\cos\theta)\theta^{2} \langle v \rangle^{2l+\gamma} d\sigma \lesssim \langle v \rangle^{2l+\gamma}. \eeno
Now the claim (\ref{kernelestimate2}) is proved. Thanks to
$
(W_{l}-W^{\prime}_{l})^{2} \lesssim \min\{\theta^{2}|v-v_{*}|^{2}\langle v \rangle^{2l-2} \langle v_{*} \rangle^{2l-2}, \theta^{2}\langle v \rangle^{2l} \langle v_{*} \rangle^{2l}\},
$
and $
(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2} \lesssim \min\{ \theta^{2}|v-v_{*}|^{2}, 1\},
$
similar to (\ref{kernelestimate2}), we can prove
\begin{eqnarray}\label{kernelestimate1}
\int B^{\epsilon,\gamma}\mu_{*}^{1/4}(\mu_{*}^{\prime 1/8} - \mu_{*}^{1/8})^{2}(W_{l}-W^{\prime}_{l})^{2} d\sigma \lesssim \langle v \rangle^{2l+\gamma}\mu_{*}^{1/8},
\end{eqnarray}
which implies $ \mathcal{A}_{1,2,1} \lesssim |\mu^{1/16}g|^{2}_{L^{2}}|h|^{2}_{L^{2}_{l+\gamma/2}}.$
Patching together the upper bound estimates of $\mathcal{A}_{1,2,1}$ and $\mathcal{A}_{1,2,2}$, we arrive at
$\mathcal{A}_{1,2} \lesssim |g|^{2}_{L^{2}}|h|^{2}_{L^{2}_{l+\gamma/2}}. $
From this together with the estimates for $\mathcal{A}_{1,1}$, we conclude that
$|\mathcal{A}_{1}| \lesssim |g|_{L^{2}}|h|_{L^{2}_{l+\gamma/2}}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}.$
{\it Step 2: Estimate of $\mathcal{A}_{2}$.} By the Cauchy-Schwarz inequality, we have
\beno |\mathcal{A}_{2}| &\leq& \{\int B^{\epsilon,\gamma}\mu_{*}^{1/4}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})^{2}g_{*} f^{\prime 2} d\sigma dv_{*} dv\}^{1/2}
\\&\times&\{\int B^{\epsilon,\gamma}\mu_{*}^{1/4}(W_{l}-W^{\prime}_{l})^{2}g_{*} h^{2} d\sigma dv_{*} dv\}^{1/2}
\eqdefa(\mathcal{A}_{2,1})^{1/2}(\mathcal{A}_{2,2})^{1/2}. \eeno
\underline{Estimate of $\mathcal{A}_{2,1}$.} By the change of variable $v \rightarrow v^{\prime}$, we have
\beno \mathcal{A}_{2,1} \lesssim \int b^{\epsilon}(\cos\theta)|v^{\prime}-v_{*}|^{\gamma}\mu_{*}^{1/4}(\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})^{2}g_{*} f^{\prime 2} d\sigma dv_{*} dv^{\prime}.\eeno
By Proposition \ref{symbol}, one has
$ \int b^{\epsilon}(\cos\theta) (\mu_{*}^{\prime 1/4} - \mu_{*}^{1/4})^{2} d\sigma \lesssim |v'-v_*|^2\mathrm{1}_{|v'-v_*|\le2}+(W^{\epsilon})^{2}(v^{\prime}-v_{*})\mathrm{1}_{|v'-v_*|\ge2},$
which implies that
\beno\mathcal{A}_{2,1} &\lesssim& \int |v^{\prime}-v_{*}|^{\gamma}\mu_{*}^{1/4} (|v'-v_*|^2\mathrm{1}_{|v'-v_*|\le2}+(W^{\epsilon})^{2}(v^{\prime}-v_{*})\mathrm{1}_{|v'-v_*|\ge2}) g_{*} f^{\prime 2} dv_{*} dv^{\prime}\\
&\lesssim& |\mu^{1/8}g|_{L^2}|W^\epsilon f|_{L^2_{\gamma/2}}^2. \eeno
\underline{Estimate of $\mathcal{A}_{2,2}$.}
By Taylor expansion, when $l \geq 1$, it is easy to check that
\beno (W_{l}-W^{\prime}_{l})^{2} \lesssim \theta^{2}|v-v_{*}|^{2}(\langle v \rangle^{2l-2} + \langle v_{*} \rangle^{2l-2}) \lesssim \theta^{2}|v-v_{*}|^{2}\langle v \rangle^{2l-2} \langle v_{*} \rangle^{2l-2}.\eeno
Thus we have
\beno \mathcal{A}_{2,2} &\lesssim& \int b^{\epsilon}(\cos\theta) \theta^{2}|v-v_{*}|^{\gamma+2}\langle v \rangle^{2l-2} \langle v_{*} \rangle^{2l-2}\mu_{*}^{1/4} g_{*} h^{ 2} d\sigma dv_{*} dv
\\&\lesssim& \int |v-v_{*}|^{\gamma+2}\langle v \rangle^{2l-2} \langle v_{*} \rangle^{2l-2}\mu_{*}^{1/4} g_{*} h^{ 2} dv_{*} dv.
\eeno
Note that \beno \int |v-v_{*}|^{\gamma+2} \langle v_{*} \rangle^{2l-2}\mu_{*}^{1/4} g_{*} dv_{*} &\leq& \{\int |v-v_{*}|^{2\gamma+4} \mu_{*}^{1/4} dv_{*}\}^{1/2}
\{\int \langle v_{*} \rangle^{4l-4}\mu_{*}^{1/4} g^{2}_{*} dv_{*} \}^{1/2}
\\&\lesssim& \langle v \rangle^{\gamma+2} |\mu^{1/16}g|_{L^{2}}, \eeno
which implies
$ \mathcal{A}_{2,2} \lesssim |\mu^{1/16}g|_{L^{2}}|h|^{2}_{L^{2}_{l+\gamma/2}}. $
Putting together the estimates of $\mathcal{A}_{2,1}$ and $\mathcal{A}_{2,2}$, we arrive at
\beno |\mathcal{A}_{2}| \lesssim |\mu^{1/16}g|_{L^{2}}|h|_{L^{2}_{l+\gamma/2}}|W^{\epsilon}f|_{L^{2}_{\gamma/2}}.\eeno
The lemma follows from the estimates of $\mathcal{A}_{1}$ and $\mathcal{A}_{2}$.
\end{proof}
\section{Diversity of longtime behavior of $e^{-\mathcal{L}^\epsilon t}$}
In this section, we will give the proof to Theorem \ref{main2}. Throughout this section, we will set $f=e^{-t\mathcal{L}^\epsilon }f_0$ with $f_0\in {\mathcal{N}(\mathcal{L}^\epsilon)}^{\perp}$. Then $f$ verifies that $f\in {\mathcal{N}(\mathcal{L}^\epsilon)}^{\perp}$ and \begin{equation}\label{linlinBo}
\left\{ \begin{aligned}
&\pa_t f+\mathcal{L}^\epsilon f=0 ;\\
&f|_{t=0} = f_{0}.
\end{aligned} \right.
\end{equation}
We begin with a technical lemma for a commutator estimate.
\begin{lem}\label{CommSemi} Let $\chi_M(v)\eqdefa\chi(v/M)$ with $(\chi, M)=(\phi, 1/\epsilon)$, $(\chi, M)=(1-\phi, 1/\epsilon)$ or $(\chi, M)=(\psi, 2^{j})$. Here $\phi$ and $\psi$ are defined in \eqref{function-phi-psi} and $j$ verifies $2^{j\gamma}\ll \epsilon^{2s}$ with $\gamma+2s\ge0$. Suppose that the support of $\na \chi$ is contained in the ring $\mathcal{C}=\{v\in\R^3|c_1\le |v|\le c_2\}$ for some universal constants $0<c_{1}<c_{2}$. Then we have
\beno |\langle [\mathcal{L}^\epsilon ,\chi_M]f, f\chi_M\rangle_v |\lesssim C_\eta\epsilon^{2s}|f|_{\epsilon,\gamma/2}^2+\eta|f\chi_M|_{\epsilon,\gamma/2}^2. \eeno
As a result, for $\gamma\in(-\f32,0)\cap[-2s,0)$, we have
\beno |\langle [\Gamma^\epsilon(g,h), \varphi_j], f\varphi_j\rangle_v |\lesssim C_\eta\epsilon^{2s}(|g|_{L^2}^2|h|_{\epsilon,\gamma/2}^2+|g|_{\epsilon,\gamma/2}^2|h|_{L^2}^2)+\eta|f\varphi_j|_{\epsilon,\gamma/2}^2. \eeno
\end{lem}
\begin{proof} We first note that $2^j\ge 1/\epsilon$. By the definition of $\mathcal{L}^\epsilon$ (see \eqref{DefLep}), the desired result can be reduced to estimating the quantity $I\eqdefa \langle \Gamma^\epsilon(g,h\chi_M)-\Gamma^\epsilon(g,h)\chi_M, f\chi_M\rangle_v$, where $(g,h)=(\mu^{\f12}, f)$ or $(g,h)=(f,
\mu^{\f12})$.
A direct calculation gives
\beno I=\int B^\epsilon\big[(g\mu^{\f12})_*h(f\chi_M)'\big(-(\chi_M)'+\chi_M\big)+g_*\big((\mu^{\f12})_*'
-(\mu^{\f12})_*\big)h(f\chi_M)\big(-(\chi_M)'+\chi_M\big)\big]d\sigma dv_*dv.\eeno
By the Cauchy-Schwarz inequality, we get that
\beno &&|I|\lesssim \bigg(\int B^\epsilon g_*^2h^2(\mu_*^{\f12}+(\mu^{\f12})_*') \big((\chi_M)'-\chi_M\big)^2d\sigma dv_*dv\bigg)^{\f12} \bigg(\int B^\epsilon \big[\mu^\f12_* \big((f\chi_M)'-f\chi_M\big)^2\\&&\qquad+(f\chi_M)^2\big((\mu^{\f14})_*'-(\mu^{\f14})_*\big)^2\big]d\sigma dv_*dv\bigg)^{\f12}+\big|\int B^\epsilon(g\mu^{\f12})_*hf\chi_M\big((\chi_M)'-\chi_M\big)d\sigma dv_*dv\big|\\
&&\qquad\lesssim \eta|f\chi_M|_{\epsilon,\gamma/2}^2+C_\eta II+III,\eeno
where $II\eqdefa \int B^\epsilon g_*^2h^2 (\mu_*^{\f12}+(\mu^{\f12})_*')\big((\chi_M)'-\chi_M\big)^2d\sigma dv_*dv$ and $ III\eqdefa
\big|\int B^\epsilon(g\mu^{\f12})_*hf\chi_M\big((\chi_M)'-\chi_M\big)d\sigma dv_*dv\big|$.
We will give the estimates term by term.
{\it \underline{Estimate of $II$}.} We separate the integration domain of $II$ into three regions: $\{|v_*|\le \eta M\}$, $\{|v_*|\ge \eta M; |v|\le \eta|v_*|\}$ and $\{|v_*|\ge \eta M; |v|\ge \eta|v_*|\}$ where $\eta$ is sufficiently small. In the region of $\{|v_*|\le \eta M\}$, we notice that
\beno (\chi_M)(v')-\chi_M(v)=\int_0^1 (\na \chi_M)(\kappa(v))\cdot (v'-v)d\kappa,\eeno where $\kappa(v)=v+\kappa(v'-v)$.
Thanks to the assumption on $\na \chi$, this implies that $|\kappa(v)|\sim M$. Therefore we have $|v|\sim|\kappa(v)|\sim M$. In the region $\{|v_*|\ge \eta M; |v|\le \eta|v_*|\}$, we deduce that $|v_*|\sim |v-v_*|\sim |v-v_*'|\sim |v_*'|$. And in the region $\{|v_*|\ge \eta M; |v|\ge \eta|v_*|\}$, there holds $|v|\ge \eta^2M$. Putting together all the facts, we get
\beno |(\chi_M)(v')-\chi_M(v)|^2&\lesssim& \mathrm{1}_{|v_*|\le \eta M}\mathrm{1}_{|v|\sim M} M^{-2}\theta^2 |v-v_*|^2+(\mathrm{1}_{|v_*|\ge \eta M}\mathrm{1}_{|v_*'|\sim |v_*|}\mathrm{1}_{|v|\le \eta|v_*|}\\&&+\mathrm{1}_{|v_*|\ge \eta M}\mathrm{1}_{|v|\ge \eta^2 M}\mathrm{1}_{|v|\ge \eta|v_*|} )\min\{1,M^{-2}|v-v_*|^2\theta^2\},\eeno
which together with Proposition \ref{symbol} yields that for $a\ge0$,
\beno II&\lesssim& |g\mathrm{1}_{|\cdot|\le \eta M}|_{L^2_{-a}}^2|h\mathrm{1}_{|\cdot|\sim M}|^2_{L^2_{\gamma/2+a}}+e^{-C\eta^2M^2}|W^\epsilon\mathrm{1}_{|\cdot|\ge \eta M}g|^2_{L^2_{\gamma/2+a}}| h|^2_{L^2_{-a}}\\&&+|W^\epsilon\mathrm{1}_{|\cdot|\ge \eta M}g|^2_{L^2_{-a}}|W^\epsilon \mathrm{1}_{|\cdot|\ge \eta^2 M}h|^2_{L^2_{a+\gamma/2}}.\eeno
{\it \underline{Estimate of $III$}.} We decompose the integration domain of $III$ into two regions: $\{|v_*|\le \eta M\}$ and $\{|v_*|\ge \eta M\}$. Correspondingly, $III$ can be split into two parts which are denoted by $III_1$ and $III_2$.
We first deal with $III_1$ whose integration domain is $\{|v_*|\le \eta M\}$. In this case, if $|v|\sim M$ or $|\kappa(v)|\sim M$, then $|v|\sim |v-v_*|\sim M$. By \eqref{canvvx} and Taylor's expansion
\ben\label{taylorchi} (\chi_M)(v')-\chi_M(v)=(\na \chi_M)(v)\cdot (v-v')+\f12\int_0^1 (\na^2 \chi_M)(\kappa(v)): (v'-v)\otimes (v'-v)d\kappa, \een
we infer that $|\int B^\epsilon (\chi_M(v')-\chi_M(v))d\sigma|\lesssim \mathrm{1}_{|v_*|\le \eta M} \mathrm{1}_{|v|\sim |v-v_*|\sim M} \langle v \rangle^{\gamma}$, which implies that
\beno |III_1|
&\lesssim& |g\mu^{\f12}\mathrm{1}_{|\cdot|\le \eta M}|_{L^1}
|h\mathrm{1}_{|\cdot|\sim M}|_{L^2_{\gamma/2}}|f\chi_M|_{L^2_{\gamma/2}}. \eeno
We turn to the estimate of $III_2$. Due to the definition of $\chi_M$, the support of $\chi_M$ is in the ball $B_{\eta^{-1} M}$ or outside of the ball $B_{\eta M}$. We first handle the latter, which corresponds to the cases $(\chi, M)=(1-\phi, 1/\epsilon)$ and $(\chi, M)=(\psi, 2^{j})$. If $(g,h)=(\mu^{\f12}, f)$, then
\beno
|III_2|&\lesssim&
\int B^\epsilon \mu_*\mathrm{1}_{|v_*|\ge \eta M}\mathrm{1}_{|v|\ge \eta M} |f||f\chi_M|d\sigma dv_*dv\\
&\lesssim&\epsilon^{-2s}(|\mu\mathrm{1}_{|\cdot|\ge \eta M}|_{L^\infty_{|\gamma|}}+|\mu\mathrm{1}_{|\cdot|\ge \eta M}|_{L^1_{|\gamma|}})
|f\mathrm{1}_{|\cdot|\ge \eta M}|_{L^2_{\gamma/2}}|f\chi_M|_{L^2_{\gamma/2}}.
\eeno
If $(g, h)=(f, \mu^{\f12})$, thanks to Lemma \ref{aftercancellation}, we have
\beno
|III_2|\lesssim \epsilon^{-2s}(|f\mu^{\f12}\mathrm{1}_{|\cdot|\ge \eta M}|_{L^2_{|\gamma|}}+|f\mu^{\f12}
\mathrm{1}_{|\cdot|\ge \eta M}|_{L^1_{|\gamma|}})
(\mathrm{1}_{\gamma\le -\f32}|\mu^{\f12}|_{H^{-\f32-\gamma+\eta}}+|\mu^{\f12}|_{L^2_{\gamma/2}})|f\chi_M|_{L^2_{\gamma/2}}.
\eeno
When the support of $\chi_M$ is in the ball $B_{\eta^{-1}M}$, it corresponds to the case that $\chi=\phi$ and $M=1/\epsilon$. In this case, we have
\beno
|III_2|&\lesssim& \big|\int B^\epsilon \mathrm{1}_{|v_*|\ge \eta M}\mathrm{1}_{\theta\le |v-v_*|^{-1}M} (g\mu^{\f12})_*\mathrm{1}_{|v|\le \eta^{-1} M}hf\chi_M\big((\chi_M)'-\chi_M\big)d\sigma dv_*dv\big|\\&&+\big|\int B^\epsilon \mathrm{1}_{|v_*|\ge \eta M}\mathrm{1}_{\theta\ge |v-v_*|^{-1}M} (g\mu^{\f12})_*\mathrm{1}_{|v|\le \eta^{-1} M}hf\chi_M\big((\chi_M)'-\chi_M\big)d\sigma dv_*dv\big|\\&\eqdefa& III_{2,1}+III_{2,2}. \eeno
By Taylor expansion \eqref{taylorchi} and \eqref{canvvx}, one has
\beno |III_{2,1}|&\lesssim&
\big|\int |v-v_*|^\gamma \mathrm{1}_{|v_*|\ge \eta M}\mathrm{1}_{|v|\le \eta^{-1} M} |(g\mu^{\f12})_*hf\chi_M|(|v-v_*|^{2s}M^{-2s}+ |v-v_*|^{2s-1}M^{1-2s})dv_*dv\big|\\
&\lesssim& M^{-2s} (|g\mu^{\f12}\mathrm{1}_{|\cdot|\ge \eta M}|_{L^1_{1+\gamma+2s}}+|g\mu^{\f12}\mathrm{1}_{|\cdot|\ge \eta M}|_{L^2_{1+\gamma+2s}})|h\mathrm{1}_{|\cdot|\le \eta^{-1}M}|_{L^2_{\gamma/2+s}}|f\chi_M|_{L^2_{\gamma/2+s}}. \eeno
For $III_{2,2}$, thanks to the fact that $\gamma+2s\ge0$, it is not difficult to check that
\beno |III_{2,2}|&\lesssim&M^{-2s} |g\mu^{\f12}\mathrm{1}_{|\cdot|\ge \eta M}|_{L^1 } |h\mathrm{1}_{|\cdot|\le \eta^{-1}M}|_{L^2_{\gamma/2+s}}|f\chi_M|_{L^2_{\gamma/2+s}}.
\eeno
We conclude that \beno
|III_{2}|
&\lesssim& M^{-2s} (|g\mu^{\f12}\mathrm{1}_{|\cdot|\ge \eta M}|_{L^1_{1+\gamma+2s}}+|g\mu^{\f12}\mathrm{1}_{|\cdot|\ge \eta M}|_{L^2_{1+\gamma+2s}})|h\mathrm{1}_{|\cdot|\le \eta^{-1}M}|_{L^2_{\gamma/2+s}}|f\chi_M|_{L^2_{\gamma/2+s}}. \eeno
Then the first result follows from the estimates for $II$ and $III$ and our assumptions for $\chi$ and $M$.
Next we turn to the proof of the second result. Observe that the support of $\varphi_j$ is located in the ring $\{|v|\sim 2^j\}$. We use the same notations as above. Note that $(\chi, M)=(\psi, 2^{j})$ since $\varphi_j=\psi(2^{-j}\cdot)$. Then we may rewrite the term $II$ as follows
\beno II=\int B^\epsilon g_*^2h^2\mathrm{1}_{|v|\sim 2^j} (\mu_*^{\f12}+(\mu^{\f12})_*')\big((\varphi_j)'-\varphi_j\big)^2d\sigma dv_*dv.\eeno
We follow the argument used before, except in the region $\{|v_*|\ge \eta 2^j; |v|\ge \eta|v_*|\}$. Since now $|v|\sim 2^j$, we deduce that $|v|\sim |v_*|\sim 2^j$ in this region. We obtain that \beno |(\varphi_j)(v')-\varphi_j(v)|^2&\lesssim& \mathrm{1}_{|v_*|\le \eta M}\mathrm{1}_{|v|\sim 2^j} 2^{-2j}\theta^2 |v-v_*|^2+\mathrm{1}_{|v_*|\ge \eta 2^{j}}\mathrm{1}_{|v_*'|\sim |v_*|}\mathrm{1}_{|v|\le \eta|v_*|}\\&&\times\min\{1,2^{-2j}|v-v_*|^2\theta^2\}+\mathrm{1}_{|v_*|\sim |v|\sim 2^j}|v-v_*|^22^{-2j}\theta^2,\eeno
which together with Proposition \ref{symbol} yields that
\beno II&\lesssim& e^{-C\eta^22^{2j}}|W^\epsilon g|^2_{L^2_{\gamma/2}}|h|^2_{L^2} +\epsilon^{2s}|g|^2_{L^2}|W^\epsilon h|^2_{L^2_{\gamma/2}}.\eeno
Following the same argument for $III_1$ as before, we derive that
\beno |III_1|
&\lesssim& \epsilon^{s}|g |_{L^2}
|W^\epsilon h |_{L^2_{\gamma/2}}|f\varphi_j|_{L^2_{\gamma/2}}. \eeno
As for the term $III_2$, we have
\beno III_2\lesssim\big|\int B^\epsilon( |g|\mu^{\f12})_*|h||\varphi_jf| \mathrm{1}_{|v_*|\ge \eta 2^j,|v|\sim 2^j}d\sigma dv_*dv\big|\lesssim e^{-C\eta^22^{2j}} |g|_{L^2}|W^{\epsilon}h|_{L^2_{\gamma/2}}|W^{\epsilon}\varphi_jf|_{L^2_{\gamma/2}}. \eeno
Patching together all the estimates yields the result. This completes the proof of the lemma.
\end{proof}
Now we are in a position to prove Theorem \ref{main2}.
\begin{proof}[Proof of Theorem \ref{main2} {\bf (Part I)}] We first prove \eqref{semigroupLe1}. Recall that $f^l(v)=\phi(\epsilon v)f(v)$ and $f^h=f-f^l$. Then we have
\beno
\pa_t f^l+\mathcal{L}^\epsilon f^l=[\mathcal{L}^\epsilon,\phi(\epsilon \cdot)]f;\quad \pa_t f^h+\mathcal{L}^\epsilon f^h=[\mathcal{L}^\epsilon,1-\phi(\epsilon \cdot)]f.
\eeno
Thanks to Theorem \ref{main1} and the fact that $|f^h|_{\epsilon,\gamma/2}\ge \epsilon^{-2s}|f^h|_{L^2_{\gamma/2}}$, we have
\beno
\langle \mathcal{L}^\epsilon f^l, f^l\rangle_v\gtrsim |f^l|^2_{\epsilon,\gamma/2}-|f^l|^2_{L^2_{\gamma/2}};\\
\langle \mathcal{L}^\epsilon f^l, f^l\rangle_v \gtrsim |(I-\mathbb{P})f^l|^2_{\epsilon,\gamma/2};\\
\langle \mathcal{L}^\epsilon f^h, f^h\rangle_v\sim |f^h|_{\epsilon,\gamma/2}^2. \eeno
Notice that $\mathbb{P}(f^l+f^h)=\mathbb{P}f=0$. We derive that
\beno \langle \mathcal{L}^\epsilon f^l, f^l\rangle_v&\gtrsim &|f^l|^2_{\epsilon,\gamma/2}-|\mathbb{P}f^l|_{L^2_{\gamma/2}}^2 = |f^l|^2_{\epsilon,\gamma/2}-|\mathbb{P}f^h|_{L^2_{\gamma/2}}^2
\\&\ge& |f^l|^2_{\epsilon,\gamma/2}-\epsilon^{2s} |f^h|_{\epsilon,\gamma/2}^2. \eeno
Thanks to Lemma \ref{CommSemi} and the above inequalities, one has
\beno &&\f{d}{dt}|f^l|_{L^2}^2+C|f^l|_{\epsilon,\gamma/2}^2\lesssim \epsilon^{2s}(|f^h|_{\epsilon,\gamma/2}^2+|f|_{\epsilon,\gamma/2}^2);\\
&&\f{d}{dt}|f^h|_{L^2}^2+C|f^h|_{\epsilon,\gamma/2}^2\lesssim \epsilon^{2s} |f|_{\epsilon,\gamma/2}^2,
\eeno which, together with the fact $\int_0^\infty|f|_{\epsilon,\gamma/2}^2dt\lesssim |f_0|_{L^2}^2$, gives \eqref{semigroupLe1}.
Next we want to prove \eqref{semigroupLe3}. It is easy to check that
\beno \pa_t \mathcal{P}_jf+\mathcal{L}^\epsilon \mathcal{P}_jf=[\mathcal{L}^\epsilon, \psi(2^{-j}\cdot)]f. \eeno Recall that
$2^j\ge 1/\epsilon$.
Thanks to Theorem \ref{main1} and Lemma \ref{CommSemi}, we obtain that
\beno \f{d}{dt}|\mathcal{P}_j f(t)|_{L^2}^2+C|\mathcal{P}_jf|_{\epsilon,\gamma/2}^2\gtrsim -\epsilon^{2s}|f|_{\epsilon,\gamma/2}^2. \eeno Observe that
$|W^\epsilon\mathcal{P}_jf|^2_{L^2_{\gamma/2}}\sim \epsilon^{-2s}2^{j\gamma}|\mathcal{P}_jf|_{L^2}^2$ and
$|W^{\epsilon}(D)W_{\gamma/2}\mathcal{P}_jf|_{L^2}^2+|W^\epsilon((-\Delta)^{\f12})W_{\gamma/2}\mathcal{P}_jf|_{L^2}^2\lesssim \epsilon^{-2s}2^{j\gamma}|\mathcal{P}_jf|_{L^2}^2$.
We are led to
\beno \f{d}{dt}|\mathcal{P}_j f(t)|_{L^2}^2 \gtrsim -\epsilon^{2s}|f|^{2}_{\epsilon,\gamma/2}-\epsilon^{-2s}2^{j\gamma}|\mathcal{P}_j f|_{L^2}^2,\eeno which implies
$|\mathcal{P}_jf(t)|_{L^2}^2\ge |\mathcal{P}_jf_0|_{L^2}^2-C\epsilon^{-2s}2^{j\gamma}t-C\epsilon^{2s} $. From this, we conclude the result \eqref{semigroupLe3}.
\end{proof}
To complete the proof of Theorem \ref{main2}, we need the following proposition.
\begin{prop}\label{propODE} Let $c_1, c_2$ and $p$ be three universal and positive constants. Consider the ordinary differential inequality
\begin{equation}\label{ODEdecay}
\left\{ \begin{aligned}
&\f{d}{dt} Y+c_1Y_1+c_2\epsilon^{-2s}Y_2^{1+\f1{p}}\le 0 ;\\
&Y|_{t=0} = Y_{0},
\end{aligned} \right.
\end{equation}
where $Y=Y_1+Y_2$ and $Y,Y_1,Y_2\ge0$. Then there exists a critical time $t_*=O(-C(c_1,c_2,p)2s\ln\epsilon)$ such that
\ben\label{decayY} Y(t)\lesssim Y(0)\big(e^{-c_1t/8}\mathrm{1}_{t\le t_*}+C(c_2Y(0)^{\f1{p}},p)\epsilon^{2sp}(1+t)^{-p}\mathrm{1}_{t\ge t_*}\big). \een
\end{prop}
\begin{proof} Without loss of generality, we assume that $Y(0)=1$, otherwise we may let $Y(t):=Y(t)/Y(0)$ and $c_2:=c_2Y(0)^{\f1{p}}$. It is easy to check that $Y(t)$ is a strictly decreasing function before it vanishes. Assume that there exists a time $t_j$ with $j\in \N$ such that $Y(t_j)=2^{-j}$.
To obtain the desired result, the key point is to give the estimate for $t_j$. Since $Y=Y_1+Y_2$, one has $Y_1\ge \f12 Y$ or $Y_2\ge\f12 Y$. Then for $t\in[t_j,t_{j+1}]$, $(c_1Y_1+c_2\epsilon^{-2s}Y_2^{1+\f1{p}})(t)\ge \min\{\f{c_{1}}{2}Y(t_{j+1}), c_2\epsilon^{-2s}(Y/2)^{1+\f1{p}}(t_{j+1})\}$. By \eqref{ODEdecay}, we obtain that for $t\in[t_j,t_{j+1}]$, $(-Y'(t))^{-1}\le (c_1\f12Y(t_{j+1}))^{-1}+ (c_2\epsilon^{-2s}(Y(t_{j+1})/2)^{1+\f1{p}})^{-1}$.
By the mean value theorem, there exists a $\tilde{t}\in[t_j,t_{j+1}]$ such that
\beno Y(t_j)-Y(t_{j+1})=Y'(\tilde{t})(t_j-t_{j+1}), \eeno
which implies that
$ t_{j+1}-t_j\le 4c_1^{-1}+\epsilon^{2s}4^{1+\f1{p}}c_2^{-1}2^{\f{j}{p}}.$
From this, we obtain that
\beno t_N\le 4c_1^{-1}N+\epsilon^{2s} 4^{1+\f1{p}}c_2^{-1}2^{\f{N}{p}}(1-2^{-\f1{p}})^{-1}=4c_1^{-1}N+C(c_2,p)\epsilon^{2s}2^{\f{N}{p}}.\eeno
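The geometric-sum step above is elementary; as an illustrative sanity check (not part of the proof, with ad hoc ranges of $p$ and $N$), the bound $\sum_{j=0}^{N-1}2^{j/p}\le 2^{N/p}\,(1-2^{-1/p})^{-1}$ can be verified numerically:

```python
# Sanity check of the geometric-sum bound: the sum of 2^{j/p} over
# j = 0..N-1 is dominated by 2^{N/p} * (1 - 2^{-1/p})^{-1}.
ok = all(
    sum(2 ** (j / p) for j in range(N)) <= 2 ** (N / p) / (1 - 2 ** (-1 / p))
    for p in (0.5, 1.0, 2.0, 5.0)
    for N in range(1, 40)
)
print(ok)  # True
```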
Let $H(x)=C(c_2,p)\epsilon^{2s}2^{\f{x}{p}}-4c_1^{-1}x$. Then for $x\ge1$, there exists a unique $x_*$ such that if $x\le x_*$, $H(x)\le 0$ and if $x\ge x_*$, $H(x)\ge0$. Moreover, there exist two constants $C_1$ and $C_2$ depending only on $c_1, c_2$ and $p$ such that $-C_1(c_1,c_2,p)2s\ln \epsilon\le x_*\le -C_2(c_1,c_2,p)2s\ln \epsilon$.
From the above argument, we get that
if $1\le N\le N_*\eqdefa [x_*]$, $t_N\le 8c_1^{-1}N$ and if $N\ge N_*+1$,
$t_N\le 2\epsilon^{2s}C(c_2,p)2^{\f{N}{p}}$.
For $N_*-1\le N\le N_*+1$, we have
\beno t_N&\le& 4c_1^{-1}(N_*+1)+C(c_2,p)\epsilon^{2s}2^{\f{N_*+1}{p}}\\&\le& 2C(c_2,p)\epsilon^{2s}2^{\f{N_*+1}{p}}\le 2C(c_2,p)\epsilon^{2s}2^{\f{N+2}{p}},
\eeno which yields that for $N\ge N_*-1$, $t_N\le (1+2^{\f{2}{p}})C(c_2,p)\epsilon^{2s}2^{\f{N}{p}}$.
Thanks to the fact that $Y(t)$ is a strictly decreasing function before it vanishes, we obtain that if $t\le t_{N_*}$, $Y(t)\lesssim 2^{-c_1t/8}$ and if $t\ge t_{N_*-1}$, $Y(t)\lesssim \epsilon^{2sp}C(c_2,p)^p(1+t)^{-p}$.
We conclude that for $t\ge 0$, \beno Y(t)\lesssim 2^{- c_1 t/8}\mathrm{1}_{t\le t_{N_*}}+\epsilon^{2sp}C(c_2,p)^p(1+t)^{-p}\mathrm{1}_{t\ge t_{N_*}},\eeno
where $t_{N_*}\le 8c_1^{-1}N_*=O(-C_2(c_1,c_2,p)2s\ln\epsilon)$. On the other hand, for $t\le -2sp8(c_1\ln 2)^{-1}\ln\epsilon-p\ln C(c_2,p)$, there holds
$2^{-c_1 t/8}\ge \epsilon^{2sp}C(c_2,p)^p(1+t)^{-p}$. Therefore we deduce that there exists a time $t_*=O(-C(c_1,c_2,p)2s\ln\epsilon)$ such that \eqref{decayY} holds. This completes the proof of the proposition.
\end{proof}
\begin{rmk}\label{OptimalODEdecay} To show that estimate (\ref{decayY}) is sharp for \eqref{ODEdecay}, we consider the following special case:
\begin{equation}\label{ODEdecay-specialcase}
\left\{ \begin{aligned}
&\f{d}{dt} Y+Y_1+\epsilon^{-2s}Y_2^{2} = 0 ;\\
&Y|_{t=0} = 1,
\end{aligned} \right.
\end{equation}
where $c_1=c_2=p=Y(0)=1$.
Let us impose $Y_1=\epsilon^{-2s}Y_2^{2}$. Since $Y_{1}+Y_{2}=Y$, we get
$Y_{2}=\frac{-1+\sqrt{1+4\epsilon^{-2s}Y}}{2\epsilon^{-2s}}$,
which implies
\beno Y_1+\epsilon^{-2s}Y_2^{2}=2\epsilon^{-2s}Y_2^{2}=\frac{1+2\epsilon^{-2s}Y-\sqrt{1+4\epsilon^{-2s}Y}}{\epsilon^{-2s}}.\eeno
Now let $X=\epsilon^{-2s}Y$. Then we have the following ODE
\begin{equation}\label{ODEdecay-specialcase-addicon}
\left\{ \begin{aligned}
&\f{d}{dt} X+1+2X- \sqrt{1+4X}= 0 ;\\
&X|_{t=0} = \epsilon^{-2s}.
\end{aligned} \right.
\end{equation}
If we set $f(x)=1+2x- \sqrt{1+4x}$, then one has
$f^{\prime}(x)=2-2(1+4x)^{-1/2}, ~~f^{\prime \prime}(x)=4(1+4x)^{-3/2},~~f^{(3)}(x)=-24(1+4x)^{-5/2},~~f^{(4)}(x)=240(1+4x)^{-7/2}$.
By Taylor expansion, one has
\beno f(x) &=& f(0)+f^{\prime}(0)x+\frac{f^{\prime\prime}(0)}{2}x^{2}+\frac{f^{(3)}(0)}{6}x^{3}+\frac{1}{6}\int_{0}^{x}(x-t)^{3}f^{(4)}(t)dt\\
&=&2x^{2}-4x^{3}+\frac{1}{6}\int_{0}^{x}(x-t)^{3}f^{(4)}(t)dt.\eeno
Since $0\leq f^{(4)}(t) \leq 240$, we have
$ 2x^{2}-4x^{3} \leq f(x) \leq 2x^{2}-4x^{3}+10x^{4}.$
If $x\leq 1/4$, then $4x^{3}\leq x^{2}$ and $10x^{4}\leq x^{2},$
which gives
\ben \label{small-value-square}x^{2} \leq 1+2x- \sqrt{1+4x} \leq 3x^{2}, ~~ x \leq 1/4.\een
Let $g(x)=f(x)-x/4$, if $x\geq 1/4$, then
$ g^{\prime}(x)=7/4-2(1+4x)^{-1/2} \geq 7/4-\sqrt{2}> 0,$
which implies
\beno g(x)\geq g(1/4)=\frac{3}{2}-\sqrt{2}-\frac{1}{16}>0.\eeno
On the other hand, if $x\geq 1/4$, then $1+2x \leq 6x$.
Therefore we have
\ben \label{large-value-linear}x/4 \leq 1+2x- \sqrt{1+4x} \leq 6x,~~ x \geq 1/4.\een
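The two elementary bounds \eqref{small-value-square} and \eqref{large-value-linear} are easy to verify numerically. The following sketch (plain Python; the sampling grids are illustrative assumptions) checks both regimes of $f(x)=1+2x-\sqrt{1+4x}$:

```python
import math

def f(x):
    # f(x) = 1 + 2x - sqrt(1 + 4x): the dissipation term of the rescaled ODE.
    return 1.0 + 2.0 * x - math.sqrt(1.0 + 4.0 * x)

# Small-value regime (small-value-square): x^2 <= f(x) <= 3x^2 on (0, 1/4].
for i in range(1, 1001):
    x = 0.25 * i / 1000.0
    assert x ** 2 <= f(x) <= 3.0 * x ** 2, x

# Large-value regime (large-value-linear): x/4 <= f(x) <= 6x on [1/4, 10].
for i in range(1001):
    x = 0.25 + (10.0 - 0.25) * i / 1000.0
    assert x / 4.0 <= f(x) <= 6.0 * x, x
```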
Let $t_{*}$ be the critical time such that $X(t_{*})=1/4$. Then by (\ref{large-value-linear}), we get
\beno \frac{d}{dt} X+X/4 \leq \frac{d}{dt} X+1+2X- \sqrt{1+4X}= 0 \leq \frac{d}{dt} X+6X, ~~t \leq t_{*},\eeno
which yields
$ -6 \leq \frac{d}{dt} \ln X \leq -1/4, ~~t \leq t_{*}$.
Integrating over $[0,t]$, we have
\ben\label{exp1-exponetial-decay}\epsilon^{-2s}\exp(-6t)\leq X(t)\leq \epsilon^{-2s}\exp(-t/4),~~t \leq t_{*}.\een
By (\ref{small-value-square}), we get
\beno \frac{d}{dt} X+X^{2} \leq \frac{d}{dt} X+1+2X- \sqrt{1+4X}= 0 \leq \frac{d}{dt} X+3X^{2},~~t \geq t_{*}\eeno
which indicates
\beno-3 \leq \frac{d}{dt} (-\frac{1}{X}) \leq -1, ~~t \geq t_{*}.\eeno
Integrating over $[t_{*},t]$, we have
\ben\label{exp1-polynomial-decay}\frac{1}{4+3(t-t_{*})}\leq X(t)\leq \frac{1}{4+(t-t_{*})},~~t \geq t_{*}.\een
By (\ref{exp1-exponetial-decay}), recalling $X(t_{*})=1/4$, we have
\beno \frac{-2s\ln\epsilon-\ln 1/4}{6}\leq t_{*}\leq 4(-2s\ln\epsilon-\ln 1/4),\eeno
which implies $t_{*}\sim -s\ln\epsilon$.
Recalling $X=\epsilon^{-2s}Y$, we have
\ben\label{ep1-decayY} e^{-6t}\mathrm{1}_{t\le t_*}+\epsilon^{2s}\frac{1}{4+3(t-t_{*})}\mathrm{1}_{t> t_*}\leq Y(t)\leq e^{-t/4}\mathrm{1}_{t\le t_*}+\epsilon^{2s}\frac{1}{4+(t-t_{*})}\mathrm{1}_{t> t_*}. \een
\end{rmk}
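To illustrate the two-regime decay \eqref{ep1-decayY}, one can integrate \eqref{ODEdecay-specialcase-addicon} numerically. The sketch below (a minimal Python script; the value $\epsilon^{-2s}=100$, the time horizon and the step size are illustrative assumptions) uses a classical Runge--Kutta scheme and checks the exponential bound up to the critical time $t_*$ and the polynomial bound afterwards:

```python
import math

def f(x):
    # f(x) = 1 + 2x - sqrt(1 + 4x), so the ODE reads dX/dt = -f(X).
    return 1.0 + 2.0 * x - math.sqrt(1.0 + 4.0 * x)

def integrate(x0, t_end, dt):
    # Classical RK4 for dX/dt = -f(X); returns the sampled trajectory.
    x, traj = x0, [(0.0, x0)]
    for k in range(1, int(t_end / dt) + 1):
        k1 = -f(x)
        k2 = -f(x + 0.5 * dt * k1)
        k3 = -f(x + 0.5 * dt * k2)
        k4 = -f(x + dt * k3)
        x += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        traj.append((k * dt, x))
    return traj

eps_m2s = 100.0                     # plays the role of epsilon^{-2s}
traj = integrate(eps_m2s, 40.0, 1e-3)
t_star = next(t for t, x in traj if x <= 0.25)  # first time X drops to 1/4

slack = 1.01                        # tolerance for time discretization
for t, x in traj:
    if t <= t_star:                 # exponential regime
        assert eps_m2s * math.exp(-6.0 * t) <= x * slack
        assert x <= slack * eps_m2s * math.exp(-t / 4.0)
    else:                           # polynomial regime
        assert 1.0 / (4.0 + 3.0 * (t - t_star)) <= x * slack
        assert x <= slack / (4.0 + (t - t_star))
```

The small multiplicative slack only accounts for the time discretization and for anchoring $t_*$ at a grid point.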
\medskip
We are in a position to complete the proof of Theorem \ref{main2}.
\begin{proof}[Proof of Theorem \ref{main2} {\bf (Part II)}] By the basic energy estimate, for $l\ge 2$, one has \beno \f{d}{dt}|f|_{L^2_l}^2+c_1|f|_{\epsilon,\gamma/2+l}^2\lesssim |f|_{L^2_{l+\gamma/2}}^2, \eeno
where we have used Lemma \ref{commutatorgamma} and Theorem \ref{main1}. Observing that
\beno |f|_{L^2_{l+\gamma/2}}^2\lesssim |f^h|_{L^2_{l+\gamma/2}}^2+\eta |f^l|^2_{L^2_{\gamma/2+l+s}}+C_\eta |f^l|^2_{L^2_{\gamma/2+s}}.\eeno
we infer that
$\f{d}{dt}|f|_{L^2_l}^2\lesssim |f^l|^2_{L^2_{\gamma/2+s}}$,
which implies that for any $t\ge0$, $|f(t)|_{L^2_l}^2\lesssim |f_0|_{L^2_l}^2$. Thanks to the facts
$\f{d}{dt}|f|_{L^2}^2+c_1|f|_{\epsilon,\gamma/2}^2\le 0 $
and $|f|_{L^2}\le |f|_{L^2_{\gamma/2}}^{\f{p}{p+1}}|f|_{L^2_{-\gamma p/2 }}^{\f1{p+1}}$, we get
\beno \f{d}{dt}|f|_{L^2}^2+c_1|f^l|_{L^2}^2+c_2|f_0|_{L^2_{-\gamma p/2}}^{-2/p}\epsilon^{-2s}|f^h|_{L^2}^{2+\f{2}{p}}\lesssim 0.\eeno
From this together with Proposition \ref{propODE}, we obtain \eqref{semigroupLe2} which completes the proof of Theorem \ref{main2}.
\end{proof}
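The weighted interpolation inequality $|f|_{L^2}\le |f|_{L^2_{\gamma/2}}^{\f{p}{p+1}}|f|_{L^2_{-\gamma p/2}}^{\f1{p+1}}$ used above is a direct consequence of H\"older's inequality, which also holds for discrete sums. A quick sanity check (plain Python; the Gaussian test function and the values of $\gamma$ and $p$ are illustrative assumptions):

```python
import math

# Discrete analogue of |f|_{L^2} <= |f|_{L^2_{g/2}}^{p/(p+1)} |f|_{L^2_{-gp/2}}^{1/(p+1)}.
gamma, p = -1.0, 2.0
vs = [-8.0 + 16.0 * i / 2000.0 for i in range(2001)]
f = [math.exp(-v * v / 2.0) for v in vs]
w = [math.sqrt(1.0 + v * v) for v in vs]        # the weight <v>

l2 = math.sqrt(sum(fi * fi for fi in f))
l2_g = math.sqrt(sum(fi * fi * wi ** gamma for fi, wi in zip(f, w)))
l2_mg = math.sqrt(sum(fi * fi * wi ** (-gamma * p) for fi, wi in zip(f, w)))

# Holder's inequality is exact here; the tiny slack covers float rounding.
assert l2 <= l2_g ** (p / (p + 1)) * l2_mg ** (1.0 / (p + 1)) * (1.0 + 1e-12)
```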
\section{Nonlinear Boltzmann equation in the perturbation framework}
In this section, we will give the proof of Theorem \ref{main3}, that is, the global well-posedness, global dynamics and global asymptotic formula for the Boltzmann equation with and without angular cutoff in the close-to-equilibrium setting. We divide the proof into three parts. The first two parts consider the global well-posedness and dynamics for the equation. The third part deals with the global asymptotic formula which describes the limit as $\epsilon$ goes to zero.
\subsection{Global well-posedness of the Boltzmann equation \eqref{linearizedBE}}
We only provide the {\it a priori} estimates for the equation.
\subsubsection{Estimate for the linear equation}
Suppose $f$ is a solution to
\ben \label{lBE}\partial_{t}f + v\cdot \nabla_{x} f + \mathcal{L}^{\epsilon}f= g. \een
We set $f_{1} \eqdefa\mathbb{P} f$ and $f_{2} \eqdefa f - \mathbb{P} f$.
The estimate for solutions to the linear equation \eqref{lBE} can be stated as follows:
\begin{prop}\label{essential-estimate-of-micro-macro} Suppose $f$ is a smooth solution to \eqref{lBE}.
Then for $M$ large enough, there holds
\ben \label{essential-micro-macro-result} &&\frac{d}{dt}(M\|f\|^{2}_{H^{N}_{x}L^{2}}+\mathcal{I}_{N}(f))+ \frac{1}{2}(|\nabla_{x}(a,b,c)|^{2}_{H^{N-1}_{x}}+\|f_{2}\|^{2}_{H^{N}_{x}L^2_{\epsilon,\gamma/2}}) \\&\lesssim& \sum_{|\alpha| \leq N}
|(\pa^{\alpha}g, \pa^{\alpha}f)|+ \sum_{|\alpha| \leq N}\sum_{j=1}^{13}
\int_{\mathbb{T}^{3}}|\langle \pa^{\alpha}g, e_j\rangle|^{2} dx, \nonumber \een where $M\|f\|^{2}_{H^{N}_{x}L^{2}}+\mathcal{I}_{N}(f)\sim \|f\|^{2}_{H^{N}_{x}L^{2}}$,
$\mathcal{I}_{N}(f)$ is a functional defined in \eqref{interactive-INf} and $\{e_{j}\}_{1\leq j \leq 13}$ is defined explicitly as
\beno e_{1} = \mu^{1/2}, e_{2} = v_{1}\mu^{1/2}, e_{3} = v_{2}\mu^{1/2},e_{4} = v_{3}\mu^{1/2}, e_{5} = v_{1}^{2}\mu^{1/2}, e_{6} = v_{2}^{2}\mu^{1/2},e_{7} = v_{3}^{2}\mu^{1/2},\\ e_{8} = v_{1}v_{2}\mu^{1/2}, e_{9} = v_{2}v_{3}\mu^{1/2},e_{10} = v_{3}v_{1}\mu^{1/2}, e_{11} = |v|^{2}v_{1}\mu^{1/2}, e_{12} = |v|^{2}v_{2}\mu^{1/2},e_{13} = |v|^{2}v_{3}\mu^{1/2}. \eeno
\end{prop}
The proof of Proposition \ref{essential-estimate-of-micro-macro} will be postponed for the moment. By \eqref{DefProj}, we first observe that,
\ben \label{definition-f-1} f_{1}(t,x,v) = \{a(t,x) + b(t,x) \cdot v + c(t,x)|v|^{2}\}\mu^{1/2},\een which solves
\ben \label{macro-micro-LBE-2} \partial_{t}f_{1} + v\cdot \nabla_{x} f_{1} = r + l + g,\een
where $r = - \partial_{t}f_{2}$ and $ l = - v\cdot \nabla_{x} f_{2} - \mathcal{L}^{\epsilon}f_{2}$.
Let $A = (a_{ij})_{1\leq i \leq 13, 1\leq j \leq 13}$ be the matrix defined by $a_{ij} = \langle e_{i}, e_{j} \rangle $ and $y$ be the 13-dimensional vector with components $\partial_{t} a, \{\partial_{t}b_{i}+ \partial_{i} a \}_{1\leq i \leq 3}, \{\partial_{t}c+ \partial_{i} b_{i} \}_{1\leq i \leq 3}, \{\partial_{i}b_{j}+ \partial_{j} b_{i} \}_{1\leq i < j \leq 3}, \{\partial_{i}c \}_{1\leq i \leq 3}$. If $z = (z_i)_{i=1}^{13}=(\langle r+l+g, e_i\rangle)_{i=1}^{13}$, then by \eqref{definition-f-1} and taking inner products in the space $L^{2}(\R^{3}_{v})$, one has $Ay = z$, which implies
$ y = A^{-1}z $.
For simplicity, we define $z^{r}=(z^r_i)_{i=1}^{13}=( \langle r, e_i\rangle)_{i=1}^{13}$. Similarly we can define $ z^{l}$ and $z^g$. If we suppose that
\beno \tilde{r}= (r^{(0)}, \{r^{(1)}_{i}\}_{1\leq i \leq 3}, \{r^{(2)}_{i}\}_{1\leq i \leq 3}, \{r^{(2)}_{ij}\}_{1\leq i < j \leq 3}, \{r^{(3)}_{i}\}_{1\leq i \leq 3})^{T} = A^{-1} z^{r}, \eeno
\beno \tilde{l}= (l^{(0)}, \{l^{(1)}_{i}\}_{1\leq i \leq 3}, \{l^{(2)}_{i}\}_{1\leq i \leq 3}, \{l^{(2)}_{ij}\}_{1\leq i < j \leq 3}, \{l^{(3)}_{i}\}_{1\leq i \leq 3})^{T} = A^{-1} z^{l}, \eeno
\beno \tilde{g}=(g^{(0)}, \{g^{(1)}_{i}\}_{1\leq i \leq 3}, \{g^{(2)}_{i}\}_{1\leq i \leq 3}, \{g^{(2)}_{ij}\}_{1\leq i < j \leq 3}, \{g^{(3)}_{i}\}_{1\leq i \leq 3})^{T} = A^{-1} z^{g}, \eeno
then by denoting
\beno \tilde{f}= (\tilde{f}^{(0)}, \{\tilde{f}^{(1)}_{i}\}_{1\leq i \leq 3}, \{\tilde{f}^{(2)}_{i}\}_{1\leq i \leq 3}, \{\tilde{f}^{(2)}_{ij}\}_{1\leq i < j \leq 3}, \{\tilde{f}^{(3)}_{i}\}_{1\leq i \leq 3})^{T} = A^{-1} (\langle f_{2}, e_i\rangle)_{i=1}^{13}, \eeno
one has
$ \tilde{r} = - \partial_{t}\tilde{f} $, which yields
\ben \label{linear-equation-abc-3} y = -\partial_{t}\tilde{f} + \tilde{l} + \tilde{g}.\een
We are in a position to state a lemma to capture the dissipation of $(a,b,c)$.
\begin{lem}\label{estimate-for-highorder-abc} Let us define the
temporal energy functional $\mathcal{I}_{N}(f)$ as
\ben \label{interactive-INf} \mathcal{I}_{N}(f) \eqdefa \sum_{|\alpha|\leq N-1}\sum_{i=1}^{3}( \mathcal{I}^{a}_{\alpha,i}(f)+\mathcal{I}^{b}_{\alpha,i}(f)+\mathcal{I}^{c}_{\alpha,i}(f)+\mathcal{I}^{ab}_{\alpha,i}(f)), \een
where
$\mathcal{I}^{a}_{\alpha,i}(f)= \langle \partial^{\alpha} \tilde{f}^{(1)}_{i}, \partial_{i}\partial^{\alpha} a\rangle, \mathcal{I}^{b}_{\alpha,i}(f)= -\sum_{j \neq i}\langle \partial^{\alpha} \tilde{f}^{(2)}_{j}, \partial_{i}\partial^{\alpha} b_{i}\rangle + \sum_{j \neq i}\langle \partial^{\alpha} \tilde{f}^{(2)}_{ji}, \partial_{j}\partial^{\alpha} b_{i}\rangle + 2 \langle \partial^{\alpha} \tilde{f}^{(2)}_{i}, \\\partial_{i}\partial^{\alpha} b_{i}\rangle, \mathcal{I}^{c}_{\alpha,i}(f)= \langle \partial^{\alpha} \tilde{f}^{(3)}_{i}, \partial_{i}\partial^{\alpha} c\rangle$ and $ \mathcal{I}^{ab}_{\alpha,i}(f)= \langle \partial_{i}\partial^{\alpha} a, \partial^{\alpha} b_{i}\rangle$.
There exists a constant $C > 0$ such that
\ben \label{solution-property-part2} \frac{d}{dt}\mathcal{I}_{N}(f) + \frac{1}{2}|\nabla_{x}(a,b,c)|^{2}_{H^{N-1}_{x}} \leq C(\|f_{2}\|^{2}_{H^{N}_{x}L^2_{\epsilon,\gamma/2}} + \sum_{|\alpha|\leq N-1}\sum_{j=1}^{13}\int_{\mathbb{T}^{3}}|\langle \pa^{\alpha}g, e_j\rangle|^{2} dx).\een
\end{lem}
The proof of Lemma \ref{estimate-for-highorder-abc} will be given in the Appendix. Now we are able to prove Proposition \ref{essential-estimate-of-micro-macro}.
\begin{proof}[Proof of Proposition \ref{essential-estimate-of-micro-macro}.] Applying $\partial^{\alpha}$ to equation \eqref{lBE} and taking the inner product with $\partial^{\alpha}f$, we have
\beno \frac{1}{2} \frac{d}{dt} \|\partial^{\alpha}f\|^{2}_{L^{2}} + (\mathcal{L}^{\epsilon}\partial^{\alpha}f, \partial^{\alpha}f) = (\partial^{\alpha}g, \partial^{\alpha}f).\eeno
Thanks to Theorem \ref{main1}, we have
\ben \label{solution-property-part-g}\frac{1}{2}\frac{d}{dt}\|f\|^{2}_{H^{N}_{x}L^{2}} + c_0\|f_{2}\|^{2}_{H^{N}_{x}L^2_{\epsilon,\gamma/2}} \lesssim \sum_{|\alpha| \leq N}
|(\pa^{\alpha}g, \pa^{\alpha}f)|.\een
Then \eqref{essential-micro-macro-result} follows from \eqref{solution-property-part-g} and Lemma \ref{estimate-for-highorder-abc}.
\end{proof}
\subsubsection{Global well-posedness of the Boltzmann equation \eqref{linearizedBE} in the space $H^{N}_{x}L^{2}$.}
In this subsection, we derive the estimate in $H^{N}_{x}L^{2}$ for solutions to the Cauchy problem \eqref{linearizedBE}. We apply Proposition \ref{essential-estimate-of-micro-macro} with $g = \Gamma^{\epsilon}(f,f)$. For ease of notation, let us define the energy and the dissipation functionals as
\beno \mathcal{E}_{N}(f(t)) = \|f(t)\|^{2}_{H^{N}_{x}L^{2}};\quad \mathcal{D}_{N}(f(t)) = \|(a,b,c)\|^{2}_{H^{N}_{x}L^{2}} + \|f_{2}(t)\|^{2}_{H^{N}_{x}L^2_{\epsilon,\gamma/2}}.\eeno The result can be stated as follows:
\begin{thm}\label{a-priori-estimate-LBE}
For $\gamma > -3/2$ and $N \geq 2$, there exists a sufficiently small constant $\delta_{2}$ which is independent of $\epsilon$, such that if
$\mathcal{E}_{2}(f_{0})\le \delta_2$, then the solution
$f^{\epsilon}$ to the Cauchy problem \eqref{linearizedBE} satisfies
\beno \sup_{t\in[0,\infty)}\mathcal{E}_{N}(f^{\epsilon}(t)) + \int_{0}^{\infty}\mathcal{D}_{N}(f^{\epsilon}(s))ds \leq C(\mathcal{E}_{N}(f_{0})).\eeno
\end{thm}
\begin{proof} Thanks to \eqref{conserveq} and \eqref{Nuspace}, we can apply the Poincar\'e inequality to $(a,b,c)$ to obtain that $|(a,b,c)|_{H^N_x}\sim |\na_x(a,b,c)|_{H^{N-1}_x}$. By Proposition \ref{essential-estimate-of-micro-macro}, we need to estimate $
|(\pa^{\alpha}\Gamma^{\epsilon}(f,f), \pa^{\alpha}f)|$ and $\int_{\mathbb{T}^{3}}|\langle \pa^{\alpha}\Gamma^{\epsilon}(f,f), e_j\rangle|^{2} dx$ for $|\alpha| \leq N$.
If we denote the Fourier transform of $f$ with respect to $x$ variable by $\hat{f}$, then we have
\beno (\Gamma^\epsilon(g, h), f)=\sum_{k,m\in\Z^3} \langle \Gamma^\epsilon (\hat{g}(k), \hat{h}(m-k)), \hat{f}(m)\rangle_v, \eeno
from which together with Theorem \ref{upGammagh}, we get
\beno |(\Gamma^\epsilon(\pa_x^\alpha g, \pa_x^\beta h), f)|\lesssim \sum_{k,m\in\Z^3} |k|^{|\alpha|}|m-k|^{|\beta|}|\hat{g}(k)|_{L^2}|\hat{h}(m-k)|_{L^2_{\epsilon,\gamma/2}}|\hat{f}(m)|_{L^2_{\epsilon,\gamma/2}}.
\eeno
From this, we derive that for $a, b\ge0$ with $a+b>\f32$,
\ben\label{HNGamma1}
|( \Gamma^\epsilon(\pa_x^\alpha g, \pa_x^\beta h), f)|\lesssim \|g\|_{H^{|\alpha|+a}_xL^2}\|h\|_{H^{|\beta|+b}_xL^2_{\epsilon,\gamma/2}}\|f\|_{L^2_{\epsilon,\gamma/2}}.\een
As a result, for $|\alpha|\leq N$,
\ben\label{HNGamma} |(\pa^\alpha \Gamma^\epsilon(g, h), f)|\lesssim \|g\|_{H^2_xL^2}\|h\|_{H^N_xL^2_{\epsilon,\gamma/2}}\|f\|_{L^2_{\epsilon,\gamma/2}}+
\mathrm{1}_{N\ge3}\|g\|_{H^N_xL^2}\|h\|_{H^{N-1}_xL^2_{\epsilon,\gamma/2}}\|f\|_{L^2_{\epsilon,\gamma/2}}. \een
Observing that $\|f\|_{H^N_xL^2_{\epsilon,\gamma/2}}\lesssim |(a,b,c)|_{H^N_x}+\|f_2\|_{H^N_xL^2_{\epsilon,\gamma/2}}$, we obtain that
\ben\label{solution-property-part2-2} \sum_{|\alpha| \leq N}|(\partial^{\alpha}\Gamma^{\epsilon}(f,f), \partial^{\alpha}f)| \leq C \mathcal{E}_{2}^{1/2}(f)\mathcal{D}_{N}(f)+\mathrm{1}_{N\ge3}\mathcal{E}_{N}^{1/2}(f)\mathcal{D}_{N-1}^{1/2}(f)\mathcal{D}_{N}^{1/2}(f).\een
Thanks to Theorem \ref{upGammagh} and estimate \eqref{HNGamma1}, arguing similarly to \eqref{HNGamma}, we have that if $|\alpha|\leq N$,
\ben \label{solution-property-part1}\int_{\mathbb{T}^{3}}|\langle \pa^{\alpha}\Gamma^{\epsilon}(f,f), e_j\rangle|^{2} dx \leq C \mathcal{E}_{2}(f)\mathcal{D}_{N}(f)+\mathrm{1}_{N\ge3}\mathcal{E}_{N}(f)\mathcal{D}_{N-1}(f).\een
If we choose $M$ large enough such that $\mathcal{E}^M_{N}(f^\epsilon)\eqdefa M\mathcal{E}_{N}(f^\epsilon)+\mathcal{I}_{N}(f^\epsilon)\sim \mathcal{E}_{N}(f^\epsilon)$, then by the estimates \eqref{essential-micro-macro-result}, \eqref{solution-property-part2-2} and \eqref{solution-property-part1}, we arrive at
\ben\label{ENERG1} \frac{d}{dt}\mathcal{E}^M_{N}(f^\epsilon) + \frac{1}{2}\mathcal{D}_{N}(f^\epsilon) &\lesssim& C(\mathcal{E}_{2}^{1/2}(f^\epsilon)+\mathcal{E}_{2}(f^\epsilon))\mathcal{D}_{N}(f^\epsilon)+\mathrm{1}_{N\ge3}\mathcal{E}_{N}(f^\epsilon)\mathcal{D}_{N-1}(f^\epsilon). \een
For $N=2$, thanks to the condition that $\mathcal{E}_{2}(f_0)\le \delta_2$ with $\delta_2$ sufficiently small, the continuity argument will yield that
\beno \frac{d}{dt}\mathcal{E}^M_{N}(f^\epsilon) + \frac{1}{4}\mathcal{D}_{N}(f^\epsilon) \le0, \quad \sup_{t\in[0,\infty)}\mathcal{E}_{2}(f^\epsilon(t))+\int_0^\infty \mathcal{D}_2(f^\epsilon(s))ds\lesssim \mathcal{E}_{2}(f_0)\le \delta_2. \eeno
For $N\ge3$, \eqref{ENERG1} can be rewritten as
\beno \frac{d}{dt}\mathcal{E}^M_{N}(f^\epsilon)+ \frac{1}{4}\mathcal{D}_{N}(f^\epsilon)\lesssim \mathcal{E}_{N}(f^\epsilon)\mathcal{D}_{N-1}(f^\epsilon). \eeno
Then the inductive method will yield the desired result.
\end{proof}
\subsubsection{Propagation of the weighted Sobolev regularity $H^{N}_{x}L^{2}_{l}$.} We aim to prove:
\begin{prop}\label{a-priori-estimate-LBE-HNL2l}
For $-3/2< \gamma<0$, $l \geq 2, N \geq 2$, there exists $\delta_{2}$ which is independent of $\epsilon$ such that if $\|f_0\|_{H^2_xL^2}\le \delta_2$, then the solution $f^{\epsilon}$ to the Cauchy problem \eqref{linearizedBE} satisfies
\beno \sup_{t\in [0,\infty)}\|f^{\epsilon}(t)\|^{2}_{H^{N}_{x}L^{2}_{l}} + \int_{0}^{\infty}\|f^{\epsilon}(s)\|^{2}_{H^{N}_{x}L^{2}_{\epsilon,l+\gamma/2}}ds \lesssim\|f_{0}\|^{2}_{H^{N}_{x}L^{2}_{l}}.\eeno
\end{prop}
\begin{proof}
We omit the superscript $\epsilon$ in $f^{\epsilon}$ and write the equation as
\ben \label{linearized-Boltzamann} \partial_{t}f + v\cdot \nabla_{x} f + \mathcal{L}^{\epsilon}f= \Gamma^{\epsilon}(f,f).\een
Applying $W_{l}\pa^{\alpha}$ to both sides of \eqref{linearized-Boltzamann}, we have
\ben \label{weight-l-LBE-2} \partial_{t}W_{l}\pa^{\alpha}f + v\cdot \nabla_{x} W_{l}\pa^{\alpha}f + W_{l}\mathcal{L}^{\epsilon}\pa^{\alpha}f = W_{l}\pa^{\alpha}\Gamma^{\epsilon}(f,f). \een
Taking the inner product with $\pa^{\alpha} W_{l}f$ over $(x,v)$ and summing over $|\alpha| \leq N$, we get
\beno&& \frac{1}{2}\frac{d}{dt}\|f\|^{2}_{H^{N}_{x}L^{2}_{l}} + \sum_{|\alpha| \leq N} (W_{l}\mathcal{L}^{\epsilon}\pa^{\alpha}f,\pa^{\alpha}W_{l}f) = \sum_{|\alpha| \leq N}( W_{l}\pa^{\alpha}\Gamma^{\epsilon}(f,f),\pa^{\alpha}W_{l}f). \nonumber \eeno
By Theorem \ref{main1}, Lemma \ref{commutatorgamma} and the condition that $\gamma/2+l\ge0$, we have
\beno\sum_{|\alpha| \leq N} (W_{l}\mathcal{L}^{\epsilon}\pa^{\alpha}f,\pa^{\alpha}W_{l}f) \geq \frac{\eta_{0}}{2} \mathcal{D}_N(W_lf)- C \|f\|^{2}_{H^{N}_{x}L^{2}_{l+\gamma/2}}-\|f\|_{H^{N}_{x}L^{2}_{l+\gamma/2}}\mathcal{D}^{1/2}_N(W_l f).\eeno
Observe that
\beno\sum_{|\alpha| \leq N}(W_{l}\pa^{\alpha}\Gamma^{\epsilon}(f,f),\pa^{\alpha}W_{l}f) &=& \sum_{|\alpha| \leq N}(W_{l}\pa^{\alpha}\Gamma^{\epsilon}(f,f)-\pa^{\alpha}\Gamma^{\epsilon}(f,W_{l}f),\pa^{\alpha}W_{l}f)
\nonumber \\&&+\sum_{|\alpha| \leq N}(\pa^{\alpha}\Gamma^{\epsilon}(f,W_{l}f),\pa^{\alpha}W_{l}f).\eeno
With the help of the proof of \eqref{HNGamma}, Theorem \ref{upGammagh} and Lemma \ref{commutatorgamma} imply that
\beno \sum_{|\alpha| \leq N}|(W_{l}\pa^{\alpha}\Gamma^{\epsilon}(f,f),\pa^{\alpha}W_{l}f)|\lesssim \mathcal{E}^{1/2}_{2}(f)\mathcal{D}_N(W_l f)+ \mathrm{1}_{N\ge3}\mathcal{E}^{1/2}_{N}(f)\mathcal{D}_{N-1}^{1/2}(W_l f)\mathcal{D}^{1/2}_N(W_l f).\eeno
Putting together the above results and using the facts that $ \mathcal{E}_{2}(f)\lesssim \delta_2$ and $ \mathcal{E}_{N}(f)\lesssim 1$, we arrive at
\beno \frac{d}{dt} \mathcal{E}_N(W_l f) + \frac{\eta_{0}}{4}\mathcal{D}_N(W_l f) \leq C \|f\|^{2}_{H^{N}_{x}L^{2}_{l+\gamma/2}}+ \mathrm{1}_{N\ge3} \mathcal{D}_{N-1}(W_l f). \eeno
It is not difficult to check that $\|f\|^{2}_{H^{N}_{x}L^{2}_{l+\gamma/2}}\le \|f^l\|^{2}_{H^{N}_{x}L^{2}_{l+\gamma/2}}+\|f^h\|^{2}_{H^{N}_{x}L^{2}_{l+\gamma/2}}\le \eta \|f^l\|^{2}_{H^{N}_{x}L^{2}_{l+\gamma/2+s}}+C_\eta \|f^l\|^{2}_{H^{N}_{x}L^{2}_{\gamma/2+s}} +\|f^h\|^{2}_{H^{N}_{x}L^{2}_{l+\gamma/2}}$. Then we derive that
\beno \frac{d}{dt} \mathcal{E}_N(W_l f) + \frac{\eta_{0}}{8}\mathcal{D}_N(W_l f) \leq C \mathcal{D}_N(f)+ \mathrm{1}_{N\ge3} \mathcal{D}_{N-1}(W_l f). \eeno
From this, the desired result is easily concluded for $N=2$. For $N\ge 3$, the inductive method can be applied to get the desired result. This ends the proof of the proposition.
\end{proof}
\subsubsection{Propagation of the full regularity} We begin with a useful proposition:
\begin{prop}\label{interWsd} Suppose $f$ is a smooth function. Then if $l_1\le l_2$,
\beno |f|_{H^m_l}^2\lesssim (\eta+\epsilon^{2s})|W^\epsilon(D)f|_{H^m_l}^2+C(\eta)|f|_{L^2_l}^2;\quad |f|_{L^2_{\epsilon,l_1}}\lesssim |f|_{L^2_{\epsilon,l_2}} . \eeno
\end{prop}
\begin{proof} By interpolation inequality, it is easy to check that
\beno |f|_{H^m_l}^2\lesssim |f^\phi|_{H^m_l}^2+|f_\phi|_{H^m_l}^2\lesssim |f^\phi|_{H^m_l}^2+\eta |f_\phi|_{H^{m+s}_l}^2+C_\eta|f_\phi|_{L^2_l}^2.\eeno
Then the first result follows from Lemma \ref{func}. The second result follows directly from the definition of $|\cdot|_{L^2_{\epsilon,l_1}}$.
\end{proof}
We aim to prove:
\begin{prop}\label{a-priori-estimate-LBE-HNlL2q}
Suppose $-3/2< \gamma<0 $ and $\mathcal{E}_2(f_0)\le \delta_0$. Then for $N\ge2$,
\beno \sup_{t\in[0,\infty)}\mathcal{E}^{N,J}(f(t))+\int_{0}^{\infty}\mathcal{D}^{N,J}(f(s))ds\lesssim C(\mathcal{E}^{N,J}(f_0)). \eeno
\end{prop}
\begin{proof} Since we have the control for $\dot{\mathcal{E}}^{N,0}(f)$, we will focus on the estimate of $\dot{\mathcal{E}}^{N-j,j}(f)$ with $1\le j\le N$.
We denote
\beno \Gamma^{\epsilon}(g,h;\beta)(v)\eqdefa
\int_{\R^3}\int_{\SS^{2}}B^{\epsilon}(v-v_*,\sigma)(\pa_{\beta}\mu^{1/2})_{*}(g'_*h'-g_*h)d\sigma dv_*.
\eeno
With this notation, one has
\ben \label{alpha-beta-on-Gamma} \pa^{\alpha}_{\beta}\Gamma^{\epsilon}(g,h) = \sum _{\beta_{0}+\beta_{1}+\beta_{2}= \beta,\alpha_{1}+\alpha_{2}=\alpha} C^{\beta_{0},\beta_{1},\beta_{2}}_{\beta} C^{\alpha_{1},\alpha_{2}}_{\alpha} \Gamma^{\epsilon}(\pa^{\alpha_{1}}_{\beta_{1}}g,\pa^{\alpha_{2}}_{\beta_{2}}h;\beta_{0}).\een
It is easy to check that for any fixed $\beta$, $\Gamma^{\epsilon}(g,h;\beta)$ shares the same upper bound and commutator estimates as those for $\Gamma^{\epsilon}(g,h)$.
Recall that
$ \mathcal{L}^{\epsilon}g = -\Gamma^{\epsilon}(\mu^{1/2},g) - \Gamma^{\epsilon}(g, \mu^{1/2}). $
Thus
\ben \label{alpha-beta-Lep} &&\pa^{\alpha}_{\beta}\mathcal{L}^{\epsilon}g \\&=& \mathcal{L}^{\epsilon}\pa^{\alpha}_{\beta}g
-\sum_{\beta_{0}+\beta_{1}+\beta_{2}= \beta, \beta_{2} < \beta} C^{\beta_{0},\beta_{1},\beta_{2}}_{\beta}
[\Gamma^{\epsilon}(\pa_{\beta_{1}}\mu^{1/2}, \pa^{\alpha}_{\beta_{2}}g;\beta_{0}) + \Gamma^{\epsilon}(\pa^{\alpha}_{\beta_{1}}g, \pa_{\beta_{2}}\mu^{1/2};\beta_{0})]. \nonumber\een
Take two indices $\alpha$ and $\beta$ such that $|\alpha|= N-j$ and $|\beta|= j$ and apply $W_{q}\pa^{\alpha}_{\beta}$ to both sides of \eqref{linearized-Boltzamann}; then we obtain that
\ben \label{weight-q-LBE-2} \partial_{t}W_{q}\pa^{\alpha}_{\beta}f + v\cdot \nabla_{x} W_{q}\pa^{\alpha}_{\beta}f +\sum_{\beta_{1}\leq \beta,|\beta_{1}|=1}W_{q}\pa^{\alpha+\beta_{1}}_{\beta-\beta_{1}}f + W_{q}\pa^{\alpha}_{\beta}\mathcal{L}^{\epsilon}f = W_{q}\pa^{\alpha}_{\beta}\Gamma^{\epsilon}(f,f). \een
Let $W_q=W_{N-j,j}$. Taking inner product with $W_{q}\pa^{\alpha}_{\beta} f$ over $(x,v)$, one has
\beno \frac{1}{2}\frac{d}{dt}\|\pa^{\alpha}_{\beta}f \|^{2}_{L^{2}_{q}} + \sum_{\beta_{1}\leq \beta,|\beta_{1}|=1}(W_{q}\pa^{\alpha+\beta_{1}}_{\beta-\beta_{1}}f,W_{q}\pa^{\alpha}_{\beta}f) + (W_{q}\pa^{\alpha}_{\beta}\mathcal{L}^{\epsilon}f,W_{q}\pa^{\alpha}_{\beta}f) = (W_{q}\pa^{\alpha}_{\beta}\Gamma^{\epsilon}(f,f),W_{q}\pa^{\alpha}_{\beta}f). \eeno
Let us give the estimates term by term.
\underline{(i). The estimate of $(W_{q}\pa^{\alpha+\beta_{1}}_{\beta-\beta_{1}}f,W_{q}\pa^{\alpha}_{\beta}f) $.} It is not difficult to check that
\beno |(W_{q}\pa^{\alpha+\beta_{1}}_{\beta-\beta_{1}}f,W_{q}\pa^{\alpha}_{\beta}f) |\lesssim \|W_{q}W_{-\gamma/2}\pa^{\alpha+\beta_{1}}_{\beta-\beta_{1}}f\|_{L^2}\|W_{q}W_{\gamma/2}\pa^{\alpha}_{\beta}f\|_{L^2}
\lesssim \eta \dot{\mathcal{D}}^{N-j,j}(f)+C_\eta\dot{\mathcal{D}}^{N-j+1,j-1}(f), \eeno
where we have used \eqref{AsuWf}.
\smallskip
\underline{(ii). The estimate of $(W_{q}\pa^{\alpha}_{\beta}\mathcal{L}^{\epsilon}f,W_{q}\pa^{\alpha}_{\beta}f) $.} Thanks to \eqref{alpha-beta-Lep}, Theorem \ref{main1}, Theorem \ref{upGammagh} and Lemma \ref{commutatorgamma}, we have
\beno (W_{q}\pa^{\alpha}_{\beta}\mathcal{L}^{\epsilon}f,W_{q}\pa^{\alpha}_{\beta}f) &\geq& \frac{\eta_{0}}{4}\|W_{q}\pa^{\alpha}_{\beta}f\|^{2}_{\epsilon,\gamma/2} - C\|W_{q}\pa^{\alpha}_{\beta}f\|^{2}_{L^2_{\gamma/2}} - C \|f\|^{2}_{H^{N-j}_{x}H^{j-1}_{\epsilon,q+\gamma/2}}.
\eeno
Due to Proposition \ref{interWsd} and our assumption for $W_{m,j}$, the above inequality can be rewritten as follows
\beno (W_{q}\pa^{\alpha}_{\beta}\mathcal{L}^{\epsilon}f,W_{q}\pa^{\alpha}_{\beta}f) &\geq& \frac{\eta_{0}}{4}\|W_{q}\pa^{\alpha}_{\beta}f\|^{2}_{\epsilon,\gamma/2} -\eta \dot{\mathcal{D}}^{N-j,j}(f)-C_\eta \dot{\mathcal{D}}^{N-j,0}(f)- C\mathcal{D}^{N-1}(f).\eeno
\underline{(iii). The estimate of $(W_{q}\pa^{\alpha}_{\beta}\Gamma^{\epsilon}(f,f),W_{q}\pa^{\alpha}_{\beta}f) $.}
It is easy to check that
\beno
&&(W_{q}\pa^{\alpha}_{\beta}\Gamma^{\epsilon}(f,f),W_{q}\pa^{\alpha}_{\beta}f)\\&=&(W_{q}\Gamma^{\epsilon}(f,\pa^{\alpha}_{\beta}f),W_{q}\pa^{\alpha}_{\beta}f)\\&&+\sum_{\beta_{0}+\beta_{1}+\beta_{2}= \beta, \alpha_1+\alpha_2=\alpha, |\alpha_2|+|\beta_{2}|\le N-1} C^{\beta_{0},\beta_{1},\beta_{2}}_{\beta}C^{\alpha_{1},\alpha_{2}}_{\alpha}(W_{q}\Gamma^{\epsilon}(\pa^{\alpha_1}_{\beta_1}f,\pa^{\alpha_2}_{\beta_2}f;\beta_0),W_{q}\pa^{\alpha}_{\beta}f)
\eeno
Set $A\eqdefa (W_{q}\Gamma^{\epsilon}(\pa^{\alpha_1}_{\beta_1}f,\pa^{\alpha_2}_{\beta_2}f;\beta_0),W_{q}\pa^{\alpha}_{\beta}f)$. We will give the estimate for $A$ case by case.
{\it Case 1: $N=1$.} This yields that $(|\alpha_1|,|\beta_1|)=(0,0)$ or $(0,1)$. Then we have
\beno |A|\lesssim (\|\pa_{\beta}f\|_{L^2}+\|f\|_{L^2}) \mathcal{D}_2(f)^{\f12} \|\pa_{\beta}f\|_{L^2_{\epsilon,\gamma/2+q}}.\eeno
{\it Case 2: $N=2$.} We divide the estimate into two cases:
$|\alpha_2|+|\beta_2|=1$ and $|\alpha_2|+|\beta_2|=0$.
In the case of $|\alpha_2|+|\beta_2|=1$, we have $(|\alpha_2|, |\beta_2|)=(1,0)$ or $(|\alpha_2|, |\beta_2|)=(0,1)$. If $(|\alpha_2|, |\beta_2|)=(1,0)$, we get that $j=1$ and $(|\alpha_1|, |\beta_1|)=(0,1)$ or $(0,0)$. Then we have
\beno |A|\lesssim \mathcal{E}_2(f)\|W_q f\|_{H^1_xL^2_{\epsilon,\gamma/2}}^2+\|f\|_{H^1_x\dot{H}^1_v}^2\|W_q f\|_{H^2_xL^2_{\epsilon,\gamma/2}}^2+\eta\|W_q\pa^\alpha_\beta f\|_{L^2_{\epsilon,\gamma/2}}^2. \eeno
If $(|\alpha_2|, |\beta_2|)=(0,1)$, then we have $(|\alpha_1|, |\beta_1|)=(2-j,j-1)$ or $(2-j,j-2)$ if $j\ge2$. These imply that
\beno |A|\lesssim (\mathcal{E}_2(f)+\dot{\mathcal{E}}^{2-j+1,j-1}(f))\|W_q f\|_{H^1_x\dot{H}^1_{\epsilon,\gamma/2}}^2 +\eta\|W_q\pa^\alpha_\beta f\|_{L^2_{\epsilon,\gamma/2}}^2. \eeno
In the case of $|\alpha_2|+|\beta_2|=0$, we deduce that
$(|\alpha_1|, |\beta_1|)=(2-j,j)$ or $(2-j,j-1)$ or $(2-j,j-2)$ if $j\ge2$. Then we arrive at
\beno |A|\lesssim C_\eta(\|f\|_{\dot{H}^{2-j}_x\dot{H}^j}^2+\mathcal{E}^1(f))\mathcal{D}_2(f) +
\eta\|W_q\pa^\alpha_\beta f\|_{L^2_{\epsilon,q+\gamma/2}}^2.\eeno
{\it Case 3: $N\ge3$.} We separate several cases to give the estimates.
{\quad \it Case 3.1: $|\alpha_2|+|\beta_2|=N-1$.} In this case, we have $(|\alpha_2|,|\beta_2|)=(N-j-1,j)$ or $(N-j,j-1)$. We have
\beno |A|&\lesssim& \mathcal{E}_2(f)(\|W_qf\|_{\dot{H}^{N-j}_x\dot{H}^j_{\epsilon,\gamma/2}}^2+\|W_qf\|_{\dot{H}^{N-j-1}\dot{H}^j_{\epsilon,\gamma/2}}^2)+\|f\|^2_{H^2_xH^1}\|W_qf\|^2_{H^{N-j}_xH^{j-1}_{\epsilon,\gamma/2}}+ \eta\|W_q\pa^\alpha_\beta f\|_{L^2_{\epsilon,\gamma/2}}^2\\&\lesssim& (\mathcal{E}_2(f)+\eta) \dot{\mathcal{D}}^{N-j,j}(f)+ \mathcal{D}^{N-1}(f)(\dot{\mathcal{E}}^{2,1}(f)+\mathcal{E}^2(f))+\eta \dot{\mathcal{D}}^{N-j,j}(f)
.\eeno
{\quad\it Case 3.2: $|\alpha_2|+|\beta_2|\le N-2$ and $|\beta_2|=j$.} We first have $j\le N-2$. It is easy to check that
$(|\alpha_2|,|\beta_2|)=(N-j-2,j)$ or $|\alpha_2|\le N-j-3$ if $N\ge4$. We get that
\beno |A|\lesssim (\dot{\mathcal{E}}^{3,0}(f)+\mathcal{E}^2(f))\mathcal{D}^{N-1}(f)+\mathrm{1}_{N\ge 4}\mathcal{E}^{N-j}(f)\mathcal{D}^{N-1}(f)+\eta \dot{\mathcal{D}}^{N-j,j}(f). \eeno
{\quad\it Case 3.3: $|\alpha_2|+|\beta_2|= N-2$ and $|\beta_2|\le j-1$.} We first get that $|\alpha_1|+|\beta_1|\le 2$ and $|\beta_0|+|\beta_1|\ge 1$. We obtain that
\beno |A|\lesssim (\dot{\mathcal{E}}^{3,0}(f)+\dot{\mathcal{E}}^{2,1}(f)+\dot{\mathcal{E}}^{1,2}(f)\mathrm{1}_{j\ge2}+
\mathcal{E}^2(f))\mathcal{D}^{N-1}(f) +\eta \dot{\mathcal{D}}^{N-j,j}(f). \eeno
{\quad\it Case 3.4: $|\alpha_2|+|\beta_2|\le N-3$.}
It is not difficult to see that
\beno |A|\lesssim (\dot{\mathcal{E}}^{N-j,j}(f)+\mathcal{E}^{N-1}(f))\mathcal{D}^{N-1}(f)+\eta \dot{\mathcal{D}}^{N-j,j}(f).\eeno
Now we patch the above estimates together to derive that
\begin{enumerate}
\item if $N=1$, $|(W_{q}\pa^{\alpha}_{\beta}\Gamma^{\epsilon}(f,f),W_{q}\pa^{\alpha}_{\beta}f)|\lesssim
(\dot{\mathcal{E}}^{0,1}(f)+1)\mathcal{D}_2(f)+ (\eta+\mathcal{E}_2(f))\dot{\mathcal{D}}^{0,1}(f)$;
\item if $N=2$, $|(W_{q}\pa^{\alpha}_{\beta}\Gamma^{\epsilon}(f,f),W_{q}\pa^{\alpha}_{\beta}f)|\lesssim
(\mathcal{E}_2(f)+\dot{\mathcal{E}}^{2-j+1,j-1}(f))( \dot{\mathcal{D}}^{1,1}(f)+\mathcal{D}^1(f))+(\dot{\mathcal{E}}^{2-j,j}(f)+\dot{\mathcal{E}}^{1,1}(f)+\mathcal{E}^{1}(f))\mathcal{D}_2(f)+(\eta+\mathcal{E}_2(f)) \dot{\mathcal{D}}^{2-j,j}(f)$;
\item if $N\geq3$, $|(W_{q}\pa^{\alpha}_{\beta}\Gamma^{\epsilon}(f,f),W_{q}\pa^{\alpha}_{\beta}f)|\lesssim
(\mathcal{E}_2(f)+\eta) \dot{\mathcal{D}}^{N-j,j}(f)+ \mathcal{D}^{N-1}(f)(\dot{\mathcal{E}}^{2,1}(f)+\dot{\mathcal{E}}^{3,0}(f)+\dot{\mathcal{E}}^{1,2}(f)\mathrm{1}_{j\ge2}+\mathcal{E}^{N-1}(f)+\dot{\mathcal{E}}^{N-j,j}(f)).$
\end{enumerate}
\smallskip
Now we are in a position to prove the proposition. To get the estimate of $\mathcal{E}^1(f)$, we only need to bound $\dot{\mathcal{E}}^{0,1}$. From the above estimates, we have
\beno \f{d}{dt} \dot{\mathcal{E}}^{0,1}(f(t))+\f18\eta_0 \dot{\mathcal{D}}^{0,1}(f(t))\lesssim (\dot{\mathcal{E}}^{0,1}(f)+1)\mathcal{D}_2(f)+\dot{\mathcal{D}}^{0,1}(f).\eeno
By Gronwall's inequality, we conclude that
\beno \sup_{t\in[0,\infty)}\mathcal{E}^1(f(t))+\int_0^\infty \mathcal{D}^1(f(\tau))d\tau\lesssim C(\mathcal{E}_2(f_0),\mathcal{E}^{1}(f_0)). \eeno
To prove the propagation of $\mathcal{E}^2(f)$, we need to consider the energy $\dot{\mathcal{E}}^{2-j,j}$ with $j=1,2$. It is not difficult to conclude from the above estimates that
\beno
\f{d}{dt} \dot{\mathcal{E}}^{1,1}(f(t))+\f18\eta_0 \dot{\mathcal{D}}^{1,1}(f(t))\lesssim \mathcal{D}^1(f)+\mathcal{D}_2(f)+\dot{\mathcal{E}}^{1,1}\mathcal{D}_2(f)+\dot{\mathcal{D}}^{0,2}(f),\eeno
which gives
\beno \sup_{t\in[0,\infty)}\dot{\mathcal{E}}^{1,1}(f(t))+\int_0^\infty \dot{\mathcal{D}}^{1,1}(f(\tau))d\tau\lesssim C(\mathcal{E}^{2,1}(f_0)). \eeno
Next we have \beno
\f{d}{dt} \dot{\mathcal{E}}^{0,2}(f(t))+\f18\eta_0 \dot{\mathcal{D}}^{0,2}(f(t))\lesssim ( 1+\dot{\mathcal{E}}^{0,2})\mathcal{D}^1(f)+\dot{\mathcal{D}}^{1,1}(f(t))+\mathcal{D}^1(f),
\eeno which implies
\beno \sup_{t\in[0,\infty)}\dot{\mathcal{E}}^{0,2}(f(t))+\int_0^\infty \dot{\mathcal{D}}^{0,2}(f(\tau))d\tau\lesssim C( \mathcal{E}^{2,2}(f_0)). \eeno
In other words, for $J\le2$, we have
$ \sup_{t\in[0,\infty)}\mathcal{E}^{2,J}(f(t))+\int_0^\infty \mathcal{D}^{2,J}(f(\tau))d\tau\lesssim C(\mathcal{E}^{2,J}(f_0))$.
Now we shall use the inductive method to complete the proof. We assume that the result in the proposition holds for $J\le N\le n$ with $n\ge2$. For $J\le N=n+1$, we begin with the propagation of $\dot{\mathcal{E}}^{n,1}(f(t))$. From the above inequalities, we have
\beno \f{d}{dt} \dot{\mathcal{E}}^{n,1}(f(t))+\f18\eta_0 \dot{\mathcal{D}}^{n,1}(f(t))\lesssim ( 1+\dot{\mathcal{E}}^{n,1}(f)+\mathcal{E}^{n}(f)+\dot{\mathcal{E}}^{3,0}(f))\mathcal{D}^{n}(f)+\dot{\mathcal{D}}^{n+1,0}(f(t)), \eeno
which implies that $\sup_{t\in[0,\infty)} \mathcal{E}^{n+1,1}(f(t))+\int_0^\infty \mathcal{D}^{n+1,1}(f(\tau))d\tau\lesssim C(\mathcal{E}^{n+1,1}(f_0))$ thanks to Gronwall's inequality. For $j\ge2$, we derive that
\beno \f{d}{dt} \dot{\mathcal{E}}^{n+1-j,j}(f(t))+\f18\eta_0 \dot{\mathcal{D}}^{n+1-j,j}(f(t))&\lesssim& ( 1+\dot{\mathcal{E}}^{n+1-j,j}(f)+\mathcal{E}^{n}(f)+\dot{\mathcal{E}}^{3,0}(f)\\&&+\dot{\mathcal{E}}^{2,1}(f))\mathcal{D}^{n}(f)+\dot{\mathcal{D}}^{n+2-j,j-1}(f(t)). \eeno
The inductive method applied to $j$ will yield that for $2\le j\le J$, \beno \sup_{t\in[0,\infty)}\dot{\mathcal{E}}^{n+1-j,j}(f(t))+\int_0^\infty \dot{\mathcal{D}}^{n+1-j,j}(f(\tau)))d\tau\lesssim C(\mathcal{E}^{n+1,J}(f_0)),\eeno
which completes the inductive argument for $n$. We end the proof of the proposition.
\end{proof}
\begin{proof}[Proof of Theorem \ref{main3} (Part I: Global Well-posedness)] The results follow directly from Theorem \ref{a-priori-estimate-LBE}, Proposition \ref{a-priori-estimate-LBE-HNL2l} and Proposition \ref{a-priori-estimate-LBE-HNlL2q}.
\end{proof}
\subsection{Global dynamics of the Boltzmann equation \eqref{linearizedBE}} We now give the proof of the second part of Theorem \ref{main3}.
\begin{proof}[Proof of Theorem \ref{main3} (Part II: Global dynamics)] We first give the proof of \eqref{localizedenergy}. It is easy to check that $\mathcal{P}_jf$ verifies
\beno \pa_t\mathcal{P}_jf+v\cdot\na_x \mathcal{P}_jf+\mathcal{L^\epsilon}\mathcal{P}_jf=[\mathcal{L^\epsilon}, \mathcal{P}_j]f+\mathcal{P}_j\Gamma^\epsilon(f, f). \eeno
Thanks to Theorem \ref{main1}, Lemma \ref{CommSemi} and \eqref{HNGamma1}, one has
\beno \f{d}{dt}\|\mathcal{P}_jf\|_{L^2}^2+\eta_0\|\mathcal{P}_jf\|_{L^2_{\epsilon,
\gamma/2}}^2\ge -\epsilon^{2s}\|f\|_{L^2_{\epsilon,\gamma/2}}^2-\epsilon^{2s}\|f\|_{H^2_xL^2}^2\|f\|_{L^2_{\epsilon,\gamma/2}}^2 -\|f\|_{H^2_xL^2}^2\|\mathcal{P}_jf\|_{L^2_{\epsilon,\gamma/2}}^2.\eeno
Recalling that $\|\mathcal{P}_jf\|_{L^2_{\epsilon,
\gamma/2}}^2\le C\epsilon^{-2s}2^{j\gamma}\|\mathcal{P}_jf\|_{L^2}^2$, we obtain that
\beno \|\mathcal{P}_jf(t)\|_{L^2}^2 \ge \|\mathcal{P}_jf_0\|_{L^2}^2- C\epsilon^{-2s}2^{j\gamma}\delta_0t-C\epsilon^{2s}, \eeno
which yields the desired result.
We turn to the proof of \eqref{decay-uniform-formula}. By the interpolation inequality $|f|_{L^2}\lesssim |f|_{L^2_{\gamma/2}}^{\f{p}{p+1}}|f|_{L^2_{-p\gamma/2}}^{\f1{p+1}}$ and the facts $\mathcal{E}_N(W_lf^\epsilon)\lesssim 1$ and $\f{d}{dt}\mathcal{E}^M_N(f^\epsilon)+\f14\mathcal{D}_N(f^\epsilon)\le 0$, we obtain that
\beno \f{d}{dt}\mathcal{E}^M_N+\f14(c_1\|f^l\|_{H^N_xL^2}^2+ C(\|f_0\|_{H^N_xL^2_{-p\gamma/2}})\epsilon^{-2s}\|f^h\|_{H^N_xL^2}^{2(1+\f1{p})})\le 0.\eeno
Then \eqref{decay-uniform-formula} follows from Proposition \ref{propODE}.
\end{proof}
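The interpolation inequality $|f|_{L^2}\lesssim |f|_{L^2_{\gamma/2}}^{\f{p}{p+1}}|f|_{L^2_{-p\gamma/2}}^{\f1{p+1}}$ used above is a direct instance of H\"older's inequality, with constant $1$. The following is a quick numerical sanity check, not part of the proof; the one-dimensional grid, the test function and the parameter values are illustrative assumptions:

```python
import numpy as np

# Check |f|_{L^2} <= |f|_{L^2_{g/2}}^{p/(p+1)} |f|_{L^2_{-p*g/2}}^{1/(p+1)}
# on a 1-D grid, with the Japanese bracket weight <v> = sqrt(1+v^2).
v = np.linspace(-8.0, 8.0, 4001)
dv = v[1] - v[0]
bracket = np.sqrt(1.0 + v**2)
f = np.exp(-v**2 / 2) * (1 + np.sin(3 * v))   # arbitrary test function

gamma, p = -1.5, 2.0                          # sample parameters, gamma < 0
L2 = np.sqrt(np.sum(f**2) * dv)
L2_w = np.sqrt(np.sum(f**2 * bracket**gamma) * dv)          # |f|_{L^2_{g/2}}
L2_nw = np.sqrt(np.sum(f**2 * bracket**(-p * gamma)) * dv)  # |f|_{L^2_{-p*g/2}}

# Hoelder with exponents (p+1)/p and p+1 gives the bound with constant 1,
# and this holds exactly for the discrete sums as well.
rhs = L2_w ** (p / (p + 1)) * L2_nw ** (1 / (p + 1))
assert L2 <= rhs * (1 + 1e-12)
```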
\subsection{Asymptotic formula for the limit} We want to prove \eqref{error-function-uniform-estimate}. Let $f^{\epsilon}$ and $f$ be the solutions to \eqref{linearizedBE} and \eqref{linearizedNBE} respectively with the data $f_0$. Set $F^{\epsilon}_{R} \eqdefa \epsilon^{2-2s}(f^{\epsilon}-f)$, then it solves
\beno \partial_{t}F^{\epsilon}_{R} + v \cdot \nabla_{x} F^{\epsilon}_{R} + \mathcal{L}F^{\epsilon}_{R}=\frac{1}{\epsilon^{2-2s}}[(\mathcal{L}-\mathcal{L}^{\epsilon})f^{\epsilon}+(\Gamma^{\epsilon}-\Gamma)(f^{\epsilon},f)]
+\Gamma^{\epsilon}(f^{\epsilon},F^{\epsilon}_{R})+\Gamma(F^{\epsilon}_{R},f). \eeno
We first derive the estimate on the operator $\Gamma-\Gamma^{\epsilon}$.
\begin{lem}\label{estimate-operator-difference} If $\gamma>-3$, there holds
\beno|\langle (\Gamma-\Gamma^{\epsilon})(g,h), f \rangle_v| \lesssim \epsilon^{2-2s}|g|_{L^{2}}|h|_{H^{2}_{\gamma/2+2}}|f|_{L^{2}_{\gamma/2}}.\eeno
\end{lem}
\begin{proof} By direct calculation, we have
\beno\langle (\Gamma-\Gamma^{\epsilon})(g,h), f \rangle_v &\eqdefa& \mathcal{A}_{1}+\mathcal{A}_{2}+\mathcal{A}_{3}+\mathcal{A}_{4},\eeno
where $\mathcal{A}_{1}=\int (b-b^{\epsilon})(\cos\theta)|v-v_{*}|^{\gamma} (\mu_{*}^{\prime 1/2}-\mu_{*}^{1/2}) g_{*}h^{\prime}f^{\prime} d\sigma dv_{*} dv$,
$\mathcal{A}_{2}= \int (b-b^{\epsilon})(\cos\theta)|v-v_{*}|^{\gamma} (\mu_{*}^{\prime 1/2}-\mu_{*}^{1/2}) g_{*}( h -h^{\prime})f^{\prime} d\sigma dv_{*} dv$,
$\mathcal{A}_{3}=\int (b-b^{\epsilon})(\cos\theta)|v-v_{*}|^{\gamma} \mu_{*}^{1/2} g_{*}( h -h^{\prime})f^{\prime} d\sigma dv_{*} dv$ and
$\mathcal{A}_{4}= \int (b-b^{\epsilon})(\cos\theta)|v-v_{*}|^{\gamma} \mu_{*}^{1/2} g_{*}(h^{\prime}f^{\prime}- h f) d\sigma dv_{*} dv$.
\underline{Estimate of $\mathcal{A}_{1}$.} By change of variables, we have
\beno
\mathcal{A}_{1} = \int (b-b^{\epsilon})(\cos\theta)|v-v_{*}|^{\gamma}(\mu^{1/2} - \mu^{\prime 1/2}) g^{\prime} h_{*}f_{*} d\sigma dv_{*}dv.
\eeno
By Taylor expansion, one has
\beno
\mu^{1/2} - \mu^{\prime 1/2} = (\nabla \mu^{1/2})(v^{\prime})\cdot(v-v^{\prime}) + \frac{1}{2}\int_{0}^{1} (1-\kappa) [(\nabla^{2} \mu^{1/2})(v(\kappa)):(v-v^{\prime})\otimes(v-v^{\prime})] d\kappa,
\eeno
where $v(\kappa) = v^{\prime} + \kappa(v-v^{\prime})$.
Observe that, for any fixed $v_{*}$, there holds
\beno
\int (b-b^{\epsilon})(\cos\theta)|v-v_{*}|^{\gamma} (\nabla \mu^{1/2})(v^{\prime})\cdot(v-v^{\prime}) g^{\prime} d\sigma dv = 0.
\eeno
Thus we have
\beno
|\mathcal{A}_{1}| &=& \frac{1}{2}|\int (b-b^{\epsilon})(\cos\theta)|v-v_{*}|^{\gamma} (1-\kappa) [(\nabla^{2} \mu^{1/2})(v(\kappa)):(v-v^{\prime})\otimes(v-v^{\prime})] g^{\prime} h_{*}f_{*} d \kappa d\sigma dv_{*} dv|
\\&\lesssim&\epsilon^{2-2s}\{\int \langle v_{*}\rangle^{\gamma+4} |g^{\prime}|^{2} |h_{*}|^{2} dv_{*} dv^{\prime} \}^{1/2}
\{\int |v(\kappa)-v_{*}|^{\gamma} \mu^{1/8}(v(\kappa))|f_{*}|^{2} d \kappa dv_{*} dv(\kappa) \}^{1/2}
\\&\lesssim& \epsilon^{2-2s}|g|_{L^{2}}|h|_{L^{2}_{\gamma/2+2}}|f|_{L^{2}_{\gamma/2}}.
\eeno
\underline{Estimate of $\mathcal{A}_{2}$.}
By the Cauchy-Schwarz inequality, we have
\beno
\mathcal{A}_{2} &\leq& \{\int (b-b^{\epsilon})(\cos\theta)|v-v_{*}|^{\gamma+2} g^{2}_{*}(h -h^{\prime})^{2} (\mu_{*}^{\prime 1/4}+\mu_{*}^{1/4})^{2}d\sigma dv_{*} dv \}^{1/2}
\\&&\times\{\int (b-b^{\epsilon})(\cos\theta)|v-v_{*}|^{\gamma-2}(\mu_{*}^{\prime 1/4}-\mu_{*}^{1/4})^{2}|f^{\prime}|^{2} d\sigma dv_{*} dv \}^{1/2}
\eqdefa \{\mathcal{A}_{2,1}\}^{1/2} \times \{\mathcal{A}_{2,2}\}^{1/2}.
\eeno
By Taylor expansion,
$
h -h^{\prime} = \int_{0}^{1} (\nabla h)(v(\kappa))\cdot(v-v^{\prime}) d\kappa,
$
where $v(\kappa) = v^{\prime} + \kappa(v-v^{\prime})$. By the change of variable $v\rightarrow v(\kappa)$, we get
\beno
\mathcal{A}_{2,1} \leq \epsilon^{2-2s} \int \langle v(\kappa)\rangle^{\gamma+4} g^{2}_{*} |(\nabla h)(v(\kappa))|^{2} dv_{*} dv(\kappa) d\kappa
\lesssim \epsilon^{2-2s}|g|^{2}_{L^{2}}|h|^{2}_{H^{1}_{\gamma/2+2}}.
\eeno
Note that $(\mu_{*}^{\prime 1/4}-\mu_{*}^{1/4})^{2} \lesssim (\mu_{*}^{\prime 1/4}+\mu_{*}^{1/4}) \theta^{2}|v-v_{*}|^{2}$, thus we have
\beno
\mathcal{A}_{2,2} \lesssim \epsilon^{2-2s}\int |v-v_{*}|^{\gamma} \mu_{*}^{1/2} |f|^{2} dv_{*} dv \lesssim \epsilon^{2-2s}|f|^{2}_{L^{2}_{\gamma/2}}.
\eeno
Therefore, we have $\mathcal{A}_{2} \lesssim \epsilon^{2-2s}|g|_{L^{2}}|h|_{H^{1}_{\gamma/2+2}}|f|_{L^{2}_{\gamma/2}}$.
\underline{Estimate of $\mathcal{A}_{3}$.}
By Taylor expansion, one has
\beno
h -h^{\prime} = (\nabla h)(v^{\prime})\cdot(v-v^{\prime}) + \frac{1}{2}\int_{0}^{1} (1-\kappa) [(\nabla^{2} h)(v(\kappa)):(v-v^{\prime})\otimes(v-v^{\prime})] d\kappa,
\eeno
where $v(\kappa) = v^{\prime} + \kappa(v-v^{\prime})$.
Observe that, for any fixed $v_{*}$, there holds
\beno
\int (b-b^{\epsilon})(\cos\theta)|v-v_{*}|^{\gamma} (\nabla h)(v^{\prime})\cdot(v-v^{\prime}) f^{\prime} d\sigma dv = 0.
\eeno
Thus we have
\beno
\mathcal{A}_{3} &=& \frac{1}{2}\int (b-b^{\epsilon})(\cos\theta)|v-v_{*}|^{\gamma} \mu_{*}^{1/2} g_{*} (1-\kappa) [(\nabla^{2} h)(v(\kappa)):(v-v^{\prime})\otimes(v-v^{\prime})] f^{\prime} d \kappa d\sigma dv_{*} dv
\\&\lesssim&\epsilon^{2-2s}\{\int |v(\kappa)-v_{*}|^{\gamma+4} \mu_{*}^{1/2} g^{2}_{*}|(\nabla^{2} h)(v(\kappa))|^{2} d \kappa dv_{*} dv(\kappa)\}^{1/2}
\\&&\times\{\int |v-v_{*}|^{\gamma} \mu_{*}^{1/2} |f^{\prime}|^{2} dv_{*} dv^{\prime} \}^{1/2}
\lesssim \epsilon^{2-2s}|g|_{L^{2}}|h|_{H^{2}_{\gamma/2+2}}|f|_{L^{2}_{\gamma/2}}.
\eeno
\underline{Estimate of $\mathcal{A}_{4}$.} By the cancellation lemma and Lemma \ref{aftercancellation}, we have
\beno |\mathcal{A}_{4}| \lesssim \epsilon^{2-2s} \int |v-v_{*}|^{\gamma} \mu_{*}^{1/2} g_{*} h f dv_{*} dv \lesssim \epsilon^{2-2s} |g|_{L^{2}}|h|_{L^2_{\gamma/2}}|f|_{L^{2}_{\gamma/2}}. \eeno
The lemma then follows by patching together the above estimates.
\end{proof}
We are ready to prove \eqref{error-function-uniform-estimate}.
\begin{proof}[Proof of Theorem \ref{main3}(Part III: Asymptotic formula)]
Set \beno g=\frac{1}{\epsilon^{2-2s}}[(\mathcal{L}-\mathcal{L}^{\epsilon})f^{\epsilon}+(\Gamma^{\epsilon}-\Gamma)(f^{\epsilon},f)]
+\Gamma^{\epsilon}(f^{\epsilon},F^{\epsilon}_{R})+\Gamma(F^{\epsilon}_{R},f).\eeno
By applying Lemma \ref{essential-estimate-of-micro-macro}, we have
\beno &&\frac{d}{dt}(M\|F^{\epsilon}_{R}\|^{2}_{H^{N-2}_{x}L^{2}}+\mathcal{I}_{N-2}(F^{\epsilon}_{R}))+ \frac{1}{2}(| (F^{\epsilon}_{R})_{1}|^{2}_{H^{N-2}_{x}}+\|(F^{\epsilon}_{R})_{2}\|^{2}_{H^{N-2}_{x}L^{2}_{0,\gamma/2}}) \\&\lesssim& \sum_{|\alpha| \leq N-2 }
|(\pa^{\alpha}g, \pa^{\alpha}F^{\epsilon}_{R})|+ \sum_{|\alpha| \leq N-2}\sum_{j=1}^{13}
\int_{\mathbb{T}^{3}}|\langle \pa^{\alpha}g, e_j\rangle|^{2} dx. \eeno
Thanks to $|\langle \Gamma^{\epsilon}(g,h), e_j\rangle| \lesssim |g|_{L^{2}_{\gamma/2}}|h|_{L^{2}_{\gamma/2}}$, for any $|\alpha|\leq N-2$, we have
\beno &&\int_{\mathbb{T}^{3}} \big(|\langle \pa^{\alpha}\Gamma^{\epsilon}(f^{\epsilon},F^{\epsilon}_{R}), e\rangle|^{2}+ |\langle \pa^{\alpha}\Gamma(F^{\epsilon}_{R},f), e\rangle|^{2}\big) dx \\ &&\lesssim \|f^{\epsilon}\|^{2}_{H^{2}_{x}L^{2} }\|F^{\epsilon}_{R}\|^{2}_{H^{N-2}_{x}L^{2}_{0,\gamma/2}} +
\mathrm{1}_{N\ge3}\|f^{\epsilon}\|^{2}_{H^{N-2}_{x}L^{2}}\|F^{\epsilon}_{R}\|^{2}_{H^{N-3}_{x}L^{2}_{0,\gamma/2}}+
\|f \|^{2}_{H^{N-2}_{x}L^{2}_{0,\gamma/2}}\|F^{\epsilon}_{R}\|^{2}_{H^{N-2}_{x}L^{2}}.\eeno
By Lemma \ref{estimate-operator-difference}, we get that
$ \epsilon^{2s-2}\int_{\mathbb{T}^{3}}|\langle \pa^{\alpha} (\Gamma^{\epsilon}-\Gamma)(f^{\epsilon},f), e\rangle|^{2} dx \lesssim \|f^{\epsilon}\|^{2}_{H^{N-2}_{x}L^{2}}\|f\|^{2}_{H^{N-2}_{x}H^{2}_{\gamma/2+2}},$
and\\
$ \epsilon^{2s-2}\int_{\mathbb{T}^{3}}|\langle \pa^{\alpha}(\mathcal{L}-\mathcal{L}^{\epsilon})f^{\epsilon}, e\rangle|^{2} dx \lesssim \|f^{\epsilon}\|^{2}_{H^{N-2}_{x}H^{2}_{\gamma/2+2}}.$
By Theorem \ref{upGammagh} with $\epsilon=0$ and \eqref{HNGamma}, we have
\beno &&|(\pa^{\alpha}\Gamma(F^{\epsilon}_{R},f), \pa^{\alpha}F^{\epsilon}_{R})|+|(\pa^{\alpha}\Gamma^{\epsilon}(f^{\epsilon},F^{\epsilon}_{R}), \pa^{\alpha}F^{\epsilon}_{R})|\\&& \lesssim (\|F^{\epsilon}_{R}\|_{H^{N-2}_{x}L^{2}}\|f\|_{H^{N}_{x}L^{2}_{0,\gamma/2}}+\|f^{\epsilon}\|_{H^{2}_{x}L^{2}}\|F^{\epsilon}_{R}\|_{H^{N-2}_{x}L^{2}_{0,\gamma/2}}\\&& +\mathrm{1}_{N\ge3}\|f^{\epsilon}\|_{H^{N}_{x}L^{2}}\|F^{\epsilon}_{R}\|_{H^{N-3}_{x}L^{2}_{0,\gamma/2}})\|F^{\epsilon}_{R}\|_{H^{N-2}_{x}L^{2}_{0,\gamma/2}}.\eeno
By Lemma \ref{estimate-operator-difference}, we have
\beno&&
|(\pa^{\alpha}\frac{1}{\epsilon^{2-2s}}(\Gamma^{\epsilon}-\Gamma)(f^{\epsilon},f), \pa^{\alpha}F^{\epsilon}_{R})| +|(\pa^{\alpha}\frac{1}{\epsilon^{2-2s}}(\mathcal{L}-\mathcal{L}^{\epsilon})f^{\epsilon}, \pa^{\alpha}F^{\epsilon}_{R})|\\&& \lesssim (\|f^{\epsilon}\|_{H^{N}_{x}L^{2}_{}} \|f\|_{H^{N-2}_{x}H^{2}_{\gamma/2+2}}+\|f^{\epsilon}\|_{H^{N-2}_{x}H^{2}_{\gamma/2+2}})
\|F^{\epsilon}_{R}\|_{H^{N-2}_{x}L^{2}_{0,\gamma/2}}.\eeno
Patching together the above results, we arrive at
\beno &&\frac{d}{dt}(M\|F^{\epsilon}_{R}\|^{2}_{H^{N-2}_{x}L^{2}}+\mathcal{I}_{N-2}(F^{\epsilon}_{R}))+ \frac{1}{4}(| (F^{\epsilon}_{R})_{1}|^{2}_{H^{N -2}_{x}}+\|(F^{\epsilon}_{R})_{2}\|^{2}_{H^{N-2}_{x}L^{2}_{0,\gamma/2}}) \\&\lesssim&
\mathrm{1}_{N\ge3}\|f^{\epsilon}\|^{2}_{H^{N}_{x}L^{2}}\|F^{\epsilon}_{R}\|^{2}_{H^{N-3}_{x}L^{2}_{0,\gamma/2}}+
\|f \|^{2}_{H^{N}_{x}L^{2}_{0,\gamma/2}}\|F^{\epsilon}_{R}\|^{2}_{H^{N-2}_{x}L^{2}}+\|f^{\epsilon}\|_{H^{N}_{x}L^{2}_{}}^2 \|f\|_{H^{N-2}_{x}H^{2}_{\gamma/2+2}}^2\\&&+\|f^{\epsilon}\|_{H^{N-2}_{x}H^{2}_{\gamma/2+2}}^2.
\eeno
Thanks to Proposition \ref{a-priori-estimate-LBE-HNlL2q}, we derive that
\beno \int_{0}^\infty (\|f^{\epsilon}\|_{H^{N-2}_{x}H^{2}_{\gamma/2+2}}^2+\|f\|_{H^{N-2}_{x}H^{2}_{\gamma/2+2}}^2+\|f \|^{2}_{H^{N}_{x}L^{2}_{0,\gamma/2}}) ds \lesssim C(\mathcal{E}^{N,2}(f_0)), \eeno
which implies that
\beno \sup_{t\ge0} \|F^{\epsilon}_{R}\|^{2}_{L^2}+\int_0^\infty \|F^{\epsilon}_{R}\|^{2}_{L^2_{x} L^{2}_{0,\gamma/2}}ds\lesssim C(\mathcal{E}^{2,2}(f_0)).\eeno
Combining this with an induction on $N$, for $N\geq3$ we arrive at
\beno \sup_{t\ge0} \|F^{\epsilon}_{R}\|^{2}_{H^{N-2}_{x}L^{2}}+\int_0^\infty \|F^{\epsilon}_{R}\|^{2}_{H^{N-2}_{x}L^{2}_{0,\gamma/2}}ds\lesssim C(\mathcal{E}^{N,2}(f_0)).\eeno
This ends the proof of \eqref{error-function-uniform-estimate} and thus completes the proof of Theorem \ref{main3}.
\end{proof}
\section{Appendix}
We first give the definition on the symbol $S^{m}_{1,0}$.
\begin{defi}\label{psuopde} A smooth function $a(v,\xi)$ is said to be a symbol of type $S^{m}_{1,0}$ if $a(v,\xi)$ verifies, for any multi-indices $\alpha$ and $\beta$,
\beno |(\pa^\alpha_\xi\pa^\beta_v a)(v,\xi)|\le C_{\alpha,\beta} \langle \xi\rangle^{m-|\alpha|}, \eeno
where $C_{\alpha,\beta}$ is a constant depending only on $\alpha$ and $\beta$.
\end{defi}
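As a concrete instance, the Japanese bracket $\langle\xi\rangle^m$ itself (taken independent of $v$) is a symbol of type $S^m_{1,0}$. The following is a numerical spot check of the first few derivative bounds, not part of the text; the order $m$ and the sample points are illustrative assumptions:

```python
import numpy as np

m = 1.5
t = np.array([-100.0, -7.0, 0.0, 0.5, 1.0, 10.0, 100.0])
br = np.sqrt(1 + t**2)                      # the bracket <xi>

# Closed-form derivatives of a(xi) = <xi>^m in one variable:
a0 = br**m
a1 = m * t * br**(m - 2)
a2 = m * br**(m - 2) + m * (m - 2) * t**2 * br**(m - 4)

# The definition of S^m_{1,0} requires |d^k a| <= C_k <xi>^{m-k};
# here the bounds hold with explicit constants since |t| <= <t>.
assert np.all(np.abs(a0) <= br**m + 1e-12)
assert np.all(np.abs(a1) <= m * br**(m - 1) + 1e-12)
assert np.all(np.abs(a2) <= (m + abs(m * (m - 2))) * br**(m - 2) + 1e-12)
```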
\begin{lem}(\cite{he2})\label{operatorcommutator1}
Let $l, s, r \in \R, M \in S^{r}_{1,0}$ and $\Phi \in S^{l}_{1,0}$. Then there exists a constant $C$ such that
\beno
|[M(D), \Phi]f|_{H^{s}} \leq C|f|_{H^{r+s-1}_{l-1}}.
\eeno
\end{lem}
As a consequence, if $W^{\epsilon}\in S^{s}_{1,0}$ and $2^{k}\varphi_{k} \in S^{1}_{1,0}$ with $s<1$, then we have
\begin{eqnarray}\label{decompostionpacth}
\sum_{k \geq -1}|W^{\epsilon}(D)\varphi_{k}f|^{2}_{L^{2}} &=& \sum_{k \geq -1}2^{-2k}|W^{\epsilon}(D)2^{k}\varphi_{k}f|^{2}_{L^{2}}
\nonumber \\&\lesssim&\sum_{k \geq -1}2^{-2k}(|2^{k}\varphi_{k}W^{\epsilon}(D)f|^{2}_{L^{2}}+|f|^{2}_{H^{s-1}})
\lesssim |W^{\epsilon}(D)f|^{2}_{L^{2}}.
\end{eqnarray}
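The patching estimate \eqref{decompostionpacth} ultimately rests on the fact that frequency-localized pieces of $f$ recombine at the level of $L^2$ norms. The following toy discrete analogue uses sharp (rather than smooth) dyadic cut-offs, an assumption made purely for simplicity of the sketch, in which case Parseval gives exact equality:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
f = rng.standard_normal(n)
fhat = np.fft.fft(f)
freq = np.abs(np.fft.fftfreq(n, d=1.0 / n))   # integer frequency magnitudes

# Sharp dyadic annuli: the low block |freq| < 1, then |freq| in [2^k, 2^{k+1}).
pieces = [fhat * (freq < 1)]
k = 0
while 2**k < n:
    pieces.append(fhat * ((freq >= 2**k) & (freq < 2**(k + 1))))
    k += 1

# The annuli are disjoint and cover every frequency, so the localized
# L^2 masses add up exactly to the total one.
total = sum(np.sum(np.abs(p) ** 2) for p in pieces)
assert np.isclose(total, np.sum(np.abs(fhat) ** 2))
```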
\begin{lem}(\cite{He-Jiang}) \label{func} Let $f$ be a smooth function defined in $\R^3$ and $W_q^\epsilon(v)\eqdefa \phi(\epsilon v)\langle v\rangle^q+\epsilon^{-q}(1-\phi(\epsilon v))$. Then for $l \in \R$ and $m,q\ge0$, there holds
\beno |f|_{H^m_l}&\sim& |f^\phi|_{H^m_l}+|f_\phi|_{H^m_l},\quad
|W^\epsilon_q(D)(W_l f )|_{H^m} \sim|W^\epsilon_q(D)f |_{H^m_l}.
\eeno
Suppose $\Phi(v)\in S^l_{1,0}$. For $q\ge0$, $B^\epsilon(\xi)$ verifies $|B^\epsilon(\xi)|\le W^\epsilon_{q}(\xi)$ and
$ |(\pa^\alpha B^\epsilon)(\xi)|\le W^\epsilon_{(q-|\alpha|)^+}(\xi).$
Then we have
\ben\label{func5} |\Phi B^\epsilon(D)f|_{H^m}+| B^\epsilon(D)\Phi f|_{H^m}\lesssim |W^\epsilon_q(D)W_lf|_{H^m}.
\een
\end{lem}
\begin{prop}\label{symbol} Suppose $ A^\epsilon(\xi)\eqdefa \int_{\sigma\in\SS^2} b^\epsilon(\f{\xi}{|\xi|}\cdot \sigma)\min\{ |\xi|^2\sin^2(\theta/2),1\} d\sigma$. Then we have
$A^\epsilon(\xi)\sim |\xi|^2\mathrm{1}_{|\xi|\le2}+\mathrm{1}_{|\xi|\ge2}(W^\epsilon(\xi))^2$.
\end{prop}
\begin{proof} By definition, we first get
$A^\epsilon(\xi)=2\pi\int_0^{\pi/2} \sin\theta b(\cos\theta)\phi(\sin\f{\theta}2/\epsilon)\min\{|\xi|^2\sin^2(\theta/2),1\} d\theta. $
By the change of variable: $t=\sin(\theta/2)$, we have
\beno A^\epsilon(\xi)&\sim& \int_0^\f12 t^{-1-2s}\phi(t/\epsilon)\min\{ |\xi|^2t^2,1\}dt
= |\xi|^{2s} \int_0^{|\xi|/2} t^{-1-2s}\phi(\epsilon^{-1}t|\xi|^{-1})\min\{ t^2,1\}dt.
\eeno
It is easy to check that there exist constants $\bar{c}_1$ and $\bar{c}_2$ such that $\bar{c}_1<\bar{c}_2$ and
\beno
|\xi|^{2s}\int_{\bar{c}_2\epsilon|\xi|}^{|\xi|/2} t^{-1-2s} \min\{ t^2,1\}dt\lesssim A^\epsilon(\xi)\lesssim |\xi|^{2s}\int_{\bar{c}_1\epsilon|\xi|}^{|\xi|/2} t^{-1-2s} \min\{ t^2,1\} dt.
\eeno
Now we focus on the quantity
$ I(\xi)\eqdefa |\xi|^{2s}\int_{c\epsilon|\xi|}^{|\xi|/2} t^{-1-2s} \min\{ t^2,1\} dt.$
\begin{enumerate}
\item For the case of $|\xi|\le2$, we have
$I(\xi)=|\xi|^{2s} \int_{c\epsilon|\xi|}^{|\xi|/2} t^{1-2s} dt\sim (1-s)^{-1}|\xi|^2.$
\item For the case of $2<|\xi|\le (c\epsilon)^{-1}$, we have \beno
I(\xi)&=&|\xi|^{2s} \big(\int_{c\epsilon|\xi|}^{1} t^{1-2s} dt+ \int_{1}^{|\xi|/2} t^{-1-2s} dt\big)\\
&\sim& (1-s)^{-1}|\xi|^{2s}(1-(c\epsilon |\xi|)^{2-2s})+|\xi|^{2s}(1-(2|\xi|^{-1})^{2s}). \eeno
\item For the case of $|\xi|\ge (c\epsilon)^{-1}$,
we have
$ I(\xi)=|\xi|^{2s} \int_{c\epsilon |\xi|}^{|\xi|/2} t^{-1-2s} dt \sim \epsilon^{-2s}. $
\end{enumerate}
The desired result follows from all the above estimates.
\end{proof}
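The three regimes in the proof can be checked numerically. The sketch below evaluates $I(\xi)$ through the exact antiderivatives of $t^{1-2s}$ and $t^{-1-2s}$, splitting at $t=1$; the parameter values $s$, $\epsilon$ and $c$ are illustrative assumptions:

```python
def I(xi, eps, s, c=1.0):
    """I(xi) = |xi|^{2s} * int_{c*eps*|xi|}^{|xi|/2} t^{-1-2s} min(t^2, 1) dt."""
    a, b = c * eps * xi, xi / 2.0
    def prim_low(t):   # antiderivative of t^{1-2s}   (region t <= 1)
        return t ** (2 - 2 * s) / (2 - 2 * s)
    def prim_high(t):  # antiderivative of t^{-1-2s}  (region t >= 1)
        return -t ** (-2 * s) / (2 * s)
    if b <= 1:
        val = prim_low(b) - prim_low(a)
    elif a >= 1:
        val = prim_high(b) - prim_high(a)
    else:
        val = (prim_low(1.0) - prim_low(a)) + (prim_high(b) - prim_high(1.0))
    return xi ** (2 * s) * val

eps, s = 1e-3, 0.5
# Regime |xi| <= 2:  I(xi) is comparable to |xi|^2.
assert 0.1 < I(1.0, eps, s) / 1.0**2 < 10.0
# Regime |xi| >> 1/eps:  I(xi) is comparable to eps^{-2s}.
assert 0.1 < I(1e6, eps, s) / eps ** (-2 * s) < 10.0
```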
\begin{prop} \label{fourier-transform-cross-term} We have
\beno
\int_{\R^3\times\SS^2} b(\f{u}{|u|}\cdot \sigma) h(u)(f(u^+)-f(\f{|u|}{|u^+|}u^+)) d\sigma du
=\int_{\R^3\times\SS^2} b(\f{\xi}{|\xi|}\cdot \sigma) (\hat{h}(\xi^+)-\hat{h}(\f{|\xi|}{|\xi^+|}\xi^+))\bar{\hat{f}}(\xi) d\sigma d\xi.
\eeno
\end{prop}
\begin{proof} By Plancherel equality, first we have
\beno &&\int_{\R^3\times\SS^2} b(\f{u}{|u|}\cdot \sigma) h(u) f(\f{|u|}{|u^+|}u^+) d\sigma du\\
&=&\int_{\R^3} h(u) \underbrace{\bigg(\int_{\SS^2} b(\f{u}{|u|}\cdot \sigma) f(\f{|u|}{|u^+|}u^+)d\sigma\bigg)}_{\eqdefa F(u)} du
=\int_{\R^3} \hat{h}(\xi) \bar{\hat{F}}(\xi)d\xi.\eeno
Next, we compute the Fourier transform $\hat{F}$ of $F$. By definition, we have
\beno
\hat{F}(\xi)=\int_{\R^3} e^{-iu\cdot \xi} F(u)du
=\f1{(2\pi)^{3/2}}\int_{\R^3} \int_{\SS^2} \int_{\R^3} e^{-iu\cdot \xi}e^{i\f{|u|}{|u^+|}u^+\cdot\eta} b(\f{u}{|u|}\cdot \sigma) \hat{f}(\eta) d\sigma d\eta du.
\eeno
Notice that $\f{|u|}{|u^+|}u^+\cdot\eta= \f12 \big((\f{u}{|u|}\cdot \sigma +1)/2\big)^{-\f12}(u\cdot \eta+|u||\eta| \f{\eta}{|\eta|}\cdot\sigma),$
then by the fact $\int_{\SS^2} b(\kappa\cdot \sigma) d(\tau\cdot \sigma)d\sigma=\int_{\SS^2} b(\tau\cdot \sigma) d(\kappa\cdot \sigma)d\sigma$, one has
\beno
\hat{F}(\xi)
&=&\f1{(2\pi)^{3/2}}\int_{\R^3} \int_{\SS^2} \int_{\R^3} e^{-iu\cdot \xi}e^{i\f{|\eta|}{|\eta^+|}\eta^+\cdot u} b(\f{u}{|u|}\cdot \sigma) \hat{f}(\eta) d\sigma d\eta du\\
&=&\f1{(2\pi)^{3/2}}\int_{\R^3} \int_{\SS^2} b(\f{\eta}{|\eta|}\cdot \sigma) \hat{f}(\eta)\delta [\xi=\f{|\eta|}{|\eta^+|}\eta^+] d\sigma d\eta,
\eeno
which yields that
\beno \int_{\R^3\times\SS^2} b(\f{u}{|u|}\cdot \sigma) h(u) f(\f{|u|}{|u^+|}u^+) d\sigma du
=\int_{\R^3\times\SS^2} b(\f{\xi}{|\xi|}\cdot \sigma) \hat{h}(\f{|\xi|}{|\xi^+|}\xi^+)\bar{\hat{f}}(\xi) d\sigma d\xi.\eeno
A similar argument can be applied to the remaining term, which gives the desired result.
\end{proof}
\begin{lem}\label{comWep} Let $\mathcal{F}$ denote the Fourier transform. Then we have $\mathcal{F}W^\epsilon((-\triangle_{\SS^2})^{1/2})=W^\epsilon((-\triangle_{\SS^2})^{1/2})\mathcal{F}$.
\end{lem}
\begin{proof} By definition of \eqref{DeltaWe}, if $\xi=\rho \tau$, we have \beno \mathcal{F}\big(W^\epsilon((-\triangle_{\SS^2})^{1/2})f\big)(\xi)&=&\sum_{l=0}^\infty\sum_{m=-l}^l W^\epsilon((l(l+1))^{1/2}) \mathcal{F}(Y^m_l f^m_l)(\xi)\\
&=&\sum_{l=0}^\infty\sum_{m=-l}^l W^\epsilon((l(l+1))^{1/2}) Y_l^m(\tau)W_l^m(\rho),
\eeno where we have used the fact that $\mathcal{F}(Y^m_l f^m_l)(\xi)=Y_l^m(\tau)W_l^m(\rho)$.
On the other hand, using the same notation, we have
$(\mathcal{F}f)(\xi)=\sum_{l=0}^\infty\sum_{m=-l}^l Y_l^m(\tau)W_l^m(\rho), $ which implies
\beno W^\epsilon((-\triangle_{\SS^2})^{1/2})(\mathcal{F}f)(\xi)=\sum_{l=0}^\infty\sum_{m=-l}^l W^\epsilon((l(l+1))^{1/2}) Y_l^m(\tau)W_l^m(\rho)= \mathcal{F}\big(W^\epsilon((-\triangle_{\SS^2})^{1/2})f\big)(\xi).\eeno This completes the proof of the lemma. \end{proof}
In the rest of this appendix, we aim to prove Lemma \ref{estimate-for-highorder-abc}.
Note that (\ref{linear-equation-abc-3}) is equivalent to
\ben \label{equation-a}\partial_{t} a = -\partial_{t}\tilde{f}^{(0)} + l^{(0)} + g^{(0)},\een
\ben \label{equation-ba}\partial_{t}b_{i}+ \partial_{i} a = -\partial_{t}\tilde{f}^{(1)}_{i} + l^{(1)}_{i} + g^{(1)}_{i}, ~~~~~~~~ 1\leq i \leq 3,\een
\ben \label{equation-cb}\partial_{t}c+ \partial_{i} b_{i} = -\partial_{t}\tilde{f}^{(2)}_{i} + l^{(2)}_{i} + g^{(2)}_{i}, ~~~~~~~~ 1\leq i \leq 3,\een
\ben \label{equation-b}\partial_{i}b_{j}+ \partial_{j} b_{i} = -\partial_{t}\tilde{f}^{(2)}_{ij} + l^{(2)}_{ij} + g^{(2)}_{ij},~~~~~~~~1\leq i < j \leq 3,\een
\ben \label{equation-c}\partial_{i}c = -\partial_{t}\tilde{f}^{(3)}_{i} + l^{(3)}_{i} + g^{(3)}_{i}, ~~~~~~~~ 1\leq i \leq 3.\een
Based on equations \eqref{equation-cb} and \eqref{equation-b}, it is easy to derive:
\begin{prop} \label{equation-b-itself}For $j = 1,2,3$, the macroscopic $b_{j}$ satisfies
\ben \label{equation-b-itself-2}-\triangle_{x}b_{j}-\partial^{2}_{j}b_{j} &=& \sum_{i\neq j} \partial_{j}[-\partial_{t}\tilde{f}^{(2)}_{i} + l^{(2)}_{i} + g^{(2)}_{i}] - \sum_{i \neq j} \partial_{i}[-\partial_{t}\tilde{f}^{(2)}_{ij} + l^{(2)}_{ij} + g^{(2)}_{ij}] \\&&- 2 \partial_{j}[-\partial_{t}\tilde{f}^{(2)}_{j} + l^{(2)}_{j} + g^{(2)}_{j}]. \nonumber \een
\end{prop}
The functions $\tilde{f}, \tilde{l}, \tilde{g}$ can be controlled as follows:
\begin{prop} \label{estimate-on-fln-tilde} There holds
\beno \sum_{|\alpha|\leq N}|\partial^{\alpha}\tilde{f}|^{2}_{L^{2}_{x}} \leq C \|f_{2}\|^{2}_{H^{N}_{x}L^2_{\epsilon,\gamma/2}}, \sum_{|\alpha|\leq N-1}|\partial^{\alpha}\tilde{l}|^{2}_{L^{2}_{x}} \leq C \|f_{2}\|^{2}_{H^{N}_{x}L^2_{\epsilon,\gamma/2}}, \\
\sum_{|\alpha|\leq N-1}|\partial^{\alpha}\tilde{g}|^{2}_{L^{2}_{x}} \leq C \sum_{|\alpha| \leq N}
\int_{\mathbb{T}^{3}}|\langle \pa^{\alpha}g, e\rangle|^{2} dx. \eeno
\end{prop}
\begin{proof}
The first one easily follows. The second one is proved by transferring some weight to $\mathcal{L}e_{j}$, and using $|\cdot|_{\epsilon,\gamma/2} \geq |W^{\epsilon}W_{\gamma/2}|_{L^{2}}$. The third is obvious by the fact $\tilde{g}= A^{-1} \langle g, e\rangle$.
\end{proof}
The next lemma, on the dynamics of $(a,b,c)$, follows from the macroscopic conservation laws.
\begin{lem}\label{equation-for-ptabc}
The macroscopic components $(a,b,c)$ satisfy the following system of equations:
\ben \label{equation-for-pta}\partial_{t}a - \frac{1}{2} \nabla_{x} \cdot \langle \mu^{1/2}|v|^{2}v, f_{2}\rangle=\frac{1}{2}\langle (5-|v|^{2})\mu^{1/2}, g\rangle.\een
\ben \label{equation-for-ptb}\partial_{t}b + \nabla_{x}(a+5c) + \nabla_{x} \cdot \langle \mu^{1/2}v \otimes v, f_{2}\rangle=\langle v \mu^{1/2}, g\rangle.\een
\ben \label{equation-for-ptc}\partial_{t}c + \frac{1}{3} \nabla_{x} \cdot b + \frac{1}{6}\nabla_{x} \cdot \langle \mu^{1/2}|v|^{2}v, f_{2}\rangle=\frac{1}{6}\langle (3|v|^{2}-1)\mu^{1/2}, g\rangle.\een
\end{lem}
\begin{proof}
Multiply both sides of the equation \eqref{lBE} by the collision invariants $\mu^{1/2}\{1, v_{i}, |v|^{2}\}$, and integrate over $\R^{3}_{v}$
to get equations for the inner products $\langle \mu^{1/2}, f\rangle_v,\langle \mu^{1/2}v_{i}, f\rangle_v, \langle \mu^{1/2}|v|^{2}, f\rangle_v$. Then express each term in these equations in terms of $(a,b,c)$ as far as possible. Finally, take suitable combinations to obtain the desired equations.
\end{proof}
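The numerical coefficients in Lemma \ref{equation-for-ptabc} come from standard moments of the global Maxwellian $\mu=(2\pi)^{-3/2}e^{-|v|^2/2}$. The following is a quick numerical confirmation of the one-dimensional moments that enter, using the product structure of $\mu$ in three dimensions; the quadrature grid is an illustrative assumption:

```python
import numpy as np

v = np.linspace(-10, 10, 2001)
dv = v[1] - v[0]
mu1 = np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)   # one Gaussian factor of mu

m0 = np.sum(mu1) * dv              # int mu = 1 (per factor)
m2 = np.sum(v**2 * mu1) * dv       # int v_1^2 mu = 1
m4 = np.sum(v**4 * mu1) * dv       # int v_1^4 mu = 3

assert np.isclose(m0, 1.0)
assert np.isclose(m2, 1.0)
assert np.isclose(m4, 3.0)

# By the product structure of mu in 3-D:
#   int |v|^2 mu dv = 3,  int |v|^4 mu dv = 3*m4 + 6*m2^2 = 15,
# so e.g. int (5 - |v|^2) mu dv = 2, the kind of normalisation behind (a,b,c).
assert np.isclose(3 * m4 + 6 * m2**2, 15.0)
```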
The previous lemma implies:
\begin{lem}\label{estimate-for-ptabc}
There holds:
$$\sum_{|\alpha|\leq N-1}|\partial^{\alpha}\partial_{t}(a,b,c)|^{2}_{L^{2}_{x}} \leq C (\sum_{0<|\alpha|\leq N}\|\mu^{1/4}\partial^{\alpha}f_{2}\|^{2}_{L^{2}}+|\nabla_{x}(a,b,c)|^{2}_{H^{N-1}_{x}})
+\sum_{|\alpha| \leq N-1} \int_{\mathbb{T}^{3}}|\langle \pa^{\alpha}g, e\rangle|^{2} dx.$$
\end{lem}
\begin{proof}
The lemma follows easily from Lemma \ref{equation-for-ptabc}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{estimate-for-highorder-abc}] For $|\alpha| \leq N-1$,
apply $\pa^{\alpha}$ to equation \eqref{equation-b-itself-2} for $b_{j}$, then by taking inner product with $\pa^{\alpha} b_{j}$, one has \beno |\nabla_{x} \pa^{\alpha}b_{j}|^{2}_{L^{2}_{x}} + |\pa_{j} \pa^{\alpha}b_{j}|^{2}_{L^{2}_{x}} &=& \langle \sum_{i\neq j} \partial_{j}\pa^{\alpha}[-\partial_{t}\tilde{f}^{(2)}_{i} + l^{(2)}_{i} + g^{(2)}_{i}], \pa^{\alpha} b_{j}\rangle \\&&- \langle \sum_{i \neq j} \partial_{i}\pa^{\alpha}[-\partial_{t}\tilde{f}^{(2)}_{ij} + l^{(2)}_{ij} + g^{(2)}_{ij}] , \pa^{\alpha} b_{j}\rangle \\&&- 2 \langle \partial_{j}\pa^{\alpha}[-\partial_{t}\tilde{f}^{(2)}_{j} + l^{(2)}_{j} + g^{(2)}_{j}] , \pa^{\alpha} b_{j}\rangle. \eeno
By integration by parts in time, the time derivative can be transferred to $\pa^{\alpha} b_{j}$, and one has
\beno |\nabla_{x} \pa^{\alpha}b_{j}|^{2}_{L^{2}_{x}} + |\pa_{j} \pa^{\alpha}b_{j}|^{2}_{L^{2}_{x}} &=& \frac{d}{dt} \mathcal{I}^{b}_{\alpha,j}(f) +
\langle \sum_{i\neq j} \partial_{j}\pa^{\alpha}\tilde{f}^{(2)}_{i}, \partial_{t}\pa^{\alpha} b_{j}\rangle
+ \langle \sum_{i\neq j}\partial_{i}\pa^{\alpha}\tilde{f}^{(2)}_{ij}, \partial_{t}\pa^{\alpha} b_{j}\rangle\\&&- 2 \langle \partial_{j}\pa^{\alpha}\tilde{f}^{(2)}_{j}, \partial_{t}\pa^{\alpha} b_{j}\rangle
+\langle \sum_{i\neq j} \partial_{j}\pa^{\alpha}[ l^{(2)}_{i} + g^{(2)}_{i}] , \pa^{\alpha} b_{j}\rangle
\\&&- \langle \sum_{i\neq j} \partial_{i}\pa^{\alpha}[ l^{(2)}_{ij} +g^{(2)}_{ij}], \pa^{\alpha} b_{j}\rangle - 2 \langle \partial_{j}\pa^{\alpha}[l^{(2)}_{j} + g^{(2)}_{j}] , \pa^{\alpha} b_{j}\rangle.\eeno
By the Cauchy-Schwarz inequality, one has
\beno &&\langle \sum_{i\neq j} \partial_{j}\pa^{\alpha}\tilde{f}^{(2)}_{i}, \partial_{t}\pa^{\alpha} b_{j}\rangle
+ \langle \sum_{i\neq j}\partial_{i}\pa^{\alpha}\tilde{f}^{(2)}_{ij}, \partial_{t}\pa^{\alpha} b_{j}\rangle- 2 \langle \partial_{j}\pa^{\alpha}\tilde{f}^{(2)}_{j}, \partial_{t}\pa^{\alpha} b_{j}\rangle
\\&\leq& \eta \sum_{|\alpha|\leq N-1}|\partial^{\alpha}\partial_{t}(a,b,c)|^{2}_{L^{2}_{x}} + \frac{1}{4\eta}\sum_{|\alpha|\leq N}|\partial^{\alpha}\tilde{f}|^{2}_{L^{2}_{x}}. \eeno
Integrating by parts and using the Cauchy-Schwarz inequality, one has
\beno &&\langle \sum_{i\neq j} \partial_{j}\pa^{\alpha}[ l^{(2)}_{i} + g^{(2)}_{i}] , \pa^{\alpha} b_{j}\rangle -
\langle \sum_{i\neq j} \partial_{i}\pa^{\alpha}[ l^{(2)}_{ij} +g^{(2)}_{ij}], \pa^{\alpha} b_{j}\rangle - 2 \langle \partial_{j}\pa^{\alpha}[l^{(2)}_{j} + g^{(2)}_{j}] , \pa^{\alpha} b_{j}\rangle
\\&=&\langle \sum_{i\neq j} \pa^{\alpha}[ l^{(2)}_{i} + g^{(2)}_{i}] , \partial_{j}\pa^{\alpha} b_{j}\rangle +
\langle \sum_{i\neq j} \pa^{\alpha}[ l^{(2)}_{ij} +g^{(2)}_{ij}], \partial_{i}\pa^{\alpha} b_{j}\rangle + 2 \langle \pa^{\alpha}[l^{(2)}_{j} + g^{(2)}_{j}] , \partial_{j}\pa^{\alpha} b_{j}\rangle
\\&\leq& \eta |\nabla_{x}(a,b,c)|^{2}_{H^{N-1}_{x}} + \frac{1}{\eta}\sum_{|\alpha|\leq N-1}|\partial^{\alpha}\tilde{l}|^{2}_{L^{2}_{x}}+\frac{1}{\eta}\sum_{|\alpha|\leq N-1}|\partial^{\alpha}\tilde{g}|^{2}_{L^{2}_{x}}. \eeno
Taking sum over $1 \leq j \leq 3$, by Proposition \ref{estimate-on-fln-tilde} and Lemma \ref{estimate-for-ptabc}, we get
\beno |\nabla_{x} \pa^{\alpha}b|^{2}_{L^{2}_{x}}+\frac{d}{dt}\sum_{j=1}^{3}\mathcal{I}^{b}_{\alpha,j}(f)
\le \eta |\nabla_{x}(a,b,c)|^{2}_{H^{N-1}_{x}} + C(\eta)(\|f_{2}\|^{2}_{H^{N}_{x}L^2_{\epsilon,\gamma/2}} + \sum_{|\alpha|\leq N-1}\int_{\mathbb{T}^{3}}|\langle \pa^{\alpha}g, e\rangle|^{2} dx).\eeno
Similar techniques can be used to deal with $|\nabla_{x}\pa^{\alpha}c|^{2}_{L^{2}_{x}}$ and $|\nabla_{x}\pa^{\alpha}a|^{2}_{L^{2}_{x}}$, and we have
\beno |\nabla_{x} \pa^{\alpha}c|^{2}_{L^{2}_{x}}+\frac{d}{dt}\sum_{j=1}^{3}\mathcal{I}^{c}_{\alpha,j}(f)
\leq \eta |\nabla_{x}(a,b,c)|^{2}_{H^{N-1}_{x}} + C(\eta)(\|f_{2}\|^{2}_{H^{N}_{x}L^2_{\epsilon,\gamma/2}} + \sum_{|\alpha|\leq N-1}\int_{\mathbb{T}^{3}}|\langle \pa^{\alpha}g, e\rangle|^{2} dx),\eeno
and \beno &&|\nabla_{x} \pa^{\alpha}a|^{2}_{L^{2}_{x}}+\frac{d}{dt}\sum_{j=1}^{3}(\mathcal{I}^{a}_{\alpha,j}(f)+\mathcal{I}^{ab}_{\alpha,j}(f))
\\&\leq& \eta |\nabla_{x}(a,b,c)|^{2}_{H^{N-1}_{x}} + C(\eta)(\|f_{2}\|^{2}_{H^{N}_{x}L^2_{\epsilon,\gamma/2}} + \sum_{|\alpha|\leq N-1}\int_{\mathbb{T}^{3}}|\langle \pa^{\alpha}g, e\rangle|^{2} dx).\eeno
Patching together the above estimates and taking sum over $|\alpha|\leq N-1$, we have
\beno \frac{d}{dt}\mathcal{I}_{N}(f) + |\nabla_{x}(a,b,c)|^{2}_{H^{N-1}_{x}}
\le \eta |\nabla_{x}(a,b,c)|^{2}_{H^{N-1}_{x}} + C(\eta)(\|f_{2}\|^{2}_{H^{N}_{x}L^2_{\epsilon,\gamma/2}} + \sum_{|\alpha|\leq N-1}\int_{\mathbb{T}^{3}}|\langle \pa^{\alpha}g, e\rangle|^{2} dx).\eeno
Taking $\eta=1/2$, the lemma then follows.
\end{proof}
{\bf Acknowledgments.} Ling-Bing He is supported by NSF of China under the grant 11771236.
\section{Introduction}\label{sec.introduction}
The work of this paper can be viewed in three ways:
\begin{itemize}
\item as a relationship between Boolean functions with small spectral norm and certain decision trees;
\item as a description of integer-valued functions in the Fourier algebra of a finite group;
\item as an example of how to extend certain additive combinatorial results to a non-abelian setting.
\end{itemize}
The books \cite{odo::1}, \cite{rud::1} and \cite{taovu::} provide background for each of these three perspectives respectively.
We discuss the first two in order in the introduction, and those interested in the third can move to \S\ref{sec.over} (though it may be worth noting a few basic definitions from \S\ref{sec.not} first).
The purpose of the paper is to develop various arguments which currently exist for abelian groups in a more general setting. It may well be worth first consulting some of the abelian material, for example the paper \cite{shptalvol::1}, which gives a good introduction from the perspective of computer science, or \cite{gresan::0} for the additive combinatorial viewpoint.
Given a finite abelian group $G$ we write $\wh{G}$ for its dual group, that is, the set of homomorphisms $G \rightarrow S^1$ where $S^1:=\{z \in \C: |z|=1\}$. The \textbf{Fourier transform} and \textbf{algebra norm} (sometimes called the \textbf{spectral norm}) of $f:G \rightarrow \C$ are defined by
\begin{equation}\label{eqn.sn}
\wh{f}:\wh{G} \rightarrow \C; \gamma\mapsto \E_{x \in G}{f(x)\overline{\gamma(x)}} \text{ and } \|f\|_{A(G)}:=\|\wh{f}\|_{\ell_1(\wh{G})}=\sum_\gamma{|\wh{f}(\gamma)|}.
\end{equation}
The paper \cite{kusman::} was one of the early papers to study the class of Boolean functions with small algebra norm, and amongst other things they showed that such functions can be efficiently learnt both randomly \cite[Theorem 4.2]{kusman::} and deterministically \cite[Theorem 4.12]{kusman::}. Their arguments are based around algorithms for finding significant Fourier coefficients, and these have been generalised (from their setting of $G=(\Z/k\Z)^n$) to general finite abelian groups in \cite{aka::0}, leading to analogous learning results. (The case of cyclic groups is already rather different to the situation of cubes considered in \cite{kusman::}. In this case a deterministic learning algorithm complementing \cite[Theorem 4.12]{kusman::} is given in \cite[Theorem 1]{aka::3}, using explicit constructions developed in \cite{aka::2} and \cite{aka::1}.) There have also been extensions to (possibly non-abelian) $p$-groups\footnote{Boneh notes that all such groups are nilpotent and hence monomial, which helps with their representation theory.} by Boneh in \cite{bon::0}, which can be used to produce an efficient randomised learning algorithm in that setting.
In potentially non-abelian groups we need a slightly different definition of the algebra norm. One can replace $\wh{G}$ above by the set of irreducible representations and extend the definitions in (\ref{eqn.sn}). This is the approach taken in \cite[\S3]{bon::0}, but we shall proceed slightly differently, avoiding any representation theory.
Given a finite group $G$, any $f:G \rightarrow \C$ naturally induces a linear operator
\begin{equation*}
L_2(G) \rightarrow L_2(G); g \mapsto \left(x\mapsto f \ast g(x):=\E_{y\in G}{f(y)g(y^{-1}x)}\right),
\end{equation*}
and we define the \textbf{algebra norm} of $f$, written $\|f\|_{A(G)}$, to be the trace norm of this operator or, equivalently, the sum of its singular values\footnote{For a definition (of what are there called singular numbers) see \cite[Chapter III.G, \S6]{woj::}.}. This norm makes the space of complex-valued functions on $G$ into a complex Banach algebra.
When $G$ is abelian the Fourier transform gives a unitary map $L_2(G) \rightarrow \ell_2(\wh{G})$ (simultaneously) diagonalising these operators. The values on the diagonal are just the Fourier coefficients $\wh{f}(\gamma)$ and so the trace norm is exactly $\|\wh{f}\|_{\ell_1(\wh{G})}$ and our new definition agrees with (\ref{eqn.sn}).
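For a cyclic group this agreement can be verified directly: the matrix of $g\mapsto f\ast g$ on $\Z/n\Z$ is a normalised circulant whose singular values are exactly the $|\wh{f}(\gamma)|$. The following small numerical sketch checks this (the test function is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Matrix of g |-> f*g with f*g(x) = (1/n) sum_y f(y) g(x-y):  M[x,z] = f(x-z)/n.
x = np.arange(n)
M = f[(x[:, None] - x[None, :]) % n] / n

trace_norm = np.sum(np.linalg.svd(M, compute_uv=False))  # sum of singular values
fhat = np.fft.fft(f) / n          # fhat(k) = E_x f(x) exp(-2*pi*i*k*x/n)
assert np.isclose(trace_norm, np.sum(np.abs(fhat)))
```

In particular one recovers in this way the classical fact that the indicator function of a subgroup has algebra norm exactly $1$.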
Returning to \cite{kusman::}, Kushilevitz and Mansour also showed that Boolean functions that can be computed by small decision trees have small algebra norm. To explain this it will be helpful to have a little more notation.
Suppose that we have a set $\mathcal{W}$ of subsets of $X$ where for each $W \in \mathcal{W}$ and $x \in X$ it is cheap to determine whether $x \in W$. We define a \textbf{(binary) $\mathcal{W}$-decision tree} to be a rooted binary tree in which each leaf is labelled with $0$ or $1$; each internal vertex $v$ is labelled with some $W_v\in \mathcal{W}$; and the two outgoing edges of $v$ with $0$ and $1$. Given $x \in X$, the decision tree $T$ constructs a computation path from the root to a leaf: when the path reaches vertex $v$ it follows the outgoing edge labelled $1$ if $x\in W_v$ and $0$ otherwise. The output of $T$ on input $x$ is the value of the leaf. (\emph{c.f.} \cite[Definition 3.13]{odo::1}.)
An example of a $\mathcal{W}$-decision tree computing the Boolean function
\begin{equation*}
f=1_{W_0}1_{W_1}1_{W_3} + 1_{W_0}(1-1_{W_1}) + (1-1_{W_0})(1-1_{W_2})1_{W_5} \text{ where } W_0,\dots,W_5 \in \mathcal{W}
\end{equation*}
is given in Figure \ref{fig.example}. (Of course it would be more efficient to replace the $W_4$ node since both its out-edges are leaves with value $0$; we leave it in for illustrative purposes.)
\begin{figure}
\centering
\begin{tikzpicture}[->,level/.style={sibling distance=70mm/#1},level distance=50pt]
\node [circle,draw] {$W_0$}
child {node [circle,draw] {$W_1$}
child {node [circle,draw] {$W_3$}
child {node [rectangle,draw] {1}
edge from parent
node[left, yshift=5pt] {\scriptsize{$x \in W_3$}}
edge from parent
node[right] {\scriptsize{$1$}}}
child {node [rectangle,draw] {0}
edge from parent
node[right, yshift=5pt] {\scriptsize{$x \in G\setminus W_3$}}
edge from parent
node[left] {\scriptsize{$0$}}}
edge from parent
node[left, yshift=5pt] {\scriptsize{$x \in W_1$}}
edge from parent
node[right] {\scriptsize{$1$}}
}
child {node [rectangle,draw] {$1$}
edge from parent
node[right, yshift=5pt] {\scriptsize{$x \in G\setminus W_1$}}
edge from parent
node[left] {\scriptsize{$0$}}
}
edge from parent
node[left, yshift=5pt] {\scriptsize{$x \in W_0$}}
edge from parent
node[right,yshift=-5pt] {\scriptsize{$1$}}
}
child {node [circle,draw] {$W_2$}
child {node [circle,draw] {$W_4$}
child {node [rectangle,draw] {$0$}
edge from parent
node[left, yshift=5pt] {\scriptsize{$x \in W_4$}}
edge from parent
node[right] {\scriptsize{$1$}}}
child {node [rectangle,draw] {$0$}
edge from parent
node[right, yshift=10pt] {\scriptsize{$x \in G\setminus W_4$}}
edge from parent
node[left] {\scriptsize{$0$}}}
edge from parent
node[left, yshift=5pt] {\scriptsize{$x \in W_2$}}
edge from parent
node[right] {\scriptsize{$1$}}
}
child {node [circle,draw] {$W_5$}
child {node [rectangle,draw] {1}
edge from parent
node[left] {\scriptsize{$x \in W_5$}}
edge from parent
node[right] {\scriptsize{$1$}}}
child {node [rectangle,draw] {0}
edge from parent
node[right,yshift=5pt] {\scriptsize{$x \in G\setminus W_5$}}
edge from parent
node[left] {\scriptsize{$0$}}}
edge from parent
node[right, yshift=5pt] {\scriptsize{$x \in G\setminus W_2$}}
edge from parent
node[left] {\scriptsize{$0$}}
}
edge from parent
node[right, yshift=5pt] {\scriptsize{$x \in G\setminus W_0$}}
edge from parent
node[left,yshift=-5pt] {\scriptsize{$0$}}
};
\end{tikzpicture}
\caption{Example of a $\mathcal{W}$-decision tree.} \label{fig.example}
\end{figure}
The idea is, of course, that if the functions in $\mathcal{W}$ are easy to compute and $f$ can be computed by a small $\mathcal{W}$-decision tree, then $f$ is easy to compute.
Suppose that $T$ is a $\mathcal{W}$-decision tree computing $f:G \rightarrow \{0,1\}$. If $P=v_0\cdots v_r$ is a maximal path in $T$ then we write $z_P$ for the value of the label on the leaf in $P$; and for $0 \leq i <r$ we write $g_i:=1_{W_{v_i}}$ if the edge $v_iv_{i+1}$ is labelled with a $1$ and $g_i:=1-1_{W_{v_i}}$ if it is labelled by a $0$; we define $g_P$ to be the product $g_0\cdots g_{r-1}$. Then (\emph{c.f.} \cite[Fact 3.15]{odo::1})
\begin{equation}\label{eqn.id}
f(x)=\sum_{P\text{ is a maximal path in }T}{z_Pg_P(x)} \text{ for all }x \in G.
\end{equation}
A \textbf{parity decision tree}\footnote{See \cite[Exercise 3.26]{odo::1}.} corresponds to the case
\begin{equation*}
X=(\Z/2\Z)^n \text{ and }\mathcal{W}=\{H \leq X: |X:H|=2\},
\end{equation*}
and a \textbf{decision tree}\footnote{See \cite[Definition 3.13]{odo::1}.} corresponds to the case
\begin{equation*}
X=(\Z/2\Z)^n\text{ and }\mathcal{W}=\{\{x:x_i=0\}: 1 \leq i \leq n\}.
\end{equation*}
There are also notions of $k$-ary decision trees which are natural when $X=(\Z/k\Z)^n$, and \cite[Definition 5.1]{bon::0} defines a $G$-decision tree in which each internal vertex has a normal subgroup $H \lhd G$ associated with it and an out-edge for each coset of $H$.
In \cite[Lemma 5.1]{kusman::} the authors show that if a Boolean function can be computed by a parity decision tree with $m$ leaves then it has algebra norm at most $m$. Given this, it is natural to ask whether every Boolean function on $(\Z/2\Z)^n$ with small spectral norm can be computed by a parity decision tree with a small number of leaves. Unfortunately this is not quite true: one can check that if $f$ is a Boolean function that is not identically $0$ and can be computed by a parity decision tree with $m$ leaves then\footnote{See \cite[Exercise 3.30]{odo::1} for the decision tree version of this.} $\E{f} \geq 2^{-m}$. But if $f$ is the indicator function of a singleton then $\|f\|_{A(G)}=1$, so it follows that $f$ is a Boolean function with algebra norm $1$ that cannot be computed by a parity decision tree with $o(n)$ leaves.
The singleton example in the previous paragraph extends to a general class of examples for finite groups: if $G$ is a finite group then we write $\mathcal{W}(G)$ for the set of cosets of subgroups of $G$. A short calculation shows that $\|1_W\|_{A(G)}=1$.
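For the reader's convenience we sketch one way this short calculation can go (using the convolution convention of \S\ref{sec.not}). If $W=xH$ for some $H \leq G$ then $1_{xH} \ast g(t)=1_H\ast g(x^{-1}t)$, so the operators $g \mapsto 1_{xH} \ast g$ and $g \mapsto 1_H \ast g$ have the same singular values, and
\begin{equation*}
1_H \ast g(t) = \E_y{1_H(ty^{-1})g(y)} = \frac{|H|}{|G|}\E_{h \in H}{g(h^{-1}t)}.
\end{equation*}
The operator $g \mapsto \E_{h \in H}{g(h^{-1}\cdot)}$ is the orthogonal projection onto the space of functions constant on right cosets of $H$, a space of dimension $|G:H|$, so the trace-norm of $g \mapsto 1_H \ast g$ is $\frac{|H|}{|G|}\cdot|G:H|=1$.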
We call a $\mathcal{W}(G)$-decision tree a \textbf{coset decision tree}, and then in view of (\ref{eqn.id}) and the fact that the algebra norm really is an algebra norm we see that if $f:G \rightarrow \{0,1\}$ can be computed by a coset decision tree with $m$ leaves then
\begin{equation*}
\|f\|_{A(G)} \leq m2^m.
\end{equation*}
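For completeness, this bound can be seen as follows (a sketch using (\ref{eqn.id}), the triangle inequality, and the fact that the algebra norm is submultiplicative with $\|1_W\|_{A(G)}=1$, so that $\|1-1_W\|_{A(G)} \leq 2$): a tree with $m$ leaves has $m$ maximal paths, each of length at most $m-1$, whence
\begin{equation*}
\|f\|_{A(G)} \leq \sum_{P\text{ is a maximal path in }T}{\|g_P\|_{A(G)}} \leq m\max_P{\prod_{i}{\|g_i\|_{A(G)}}} \leq m2^{m-1} \leq m2^m.
\end{equation*}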
In this paper we shall show the following converse.
\begin{theorem}\label{thm.mn}
Suppose that $G$ is a finite group and $f:G \rightarrow \{0,1\}$ has $\|f\|_{A(G)}\leq M$. Then there is a coset decision tree with $\exp(\exp(\exp(O(M^{2}))))$ leaves computing $f$.
\end{theorem}
We remark that if $G$ is abelian then better results are available. Indeed, if $G=(\Z/p\Z)^n$, Shpilka, Tal and Volk \cite[Theorem 1.2]{shptalvol::1} identify the stronger structure of a parity decision tree with far better bounds. (Of course their bounds necessarily depend on the size of $G$, but they do so in a very mild way, and their theorem comes along with a host of other results.)
We now turn to our second, harmonic analytic, motivation. In \cite[Definition 3.5]{eym::} Eymard extended the classical definition\footnote{See, for example, \cite[\S1.2.3]{rud::1}.} of $A(G)$ for locally compact abelian groups to locally compact groups. This is done first by extending the Fourier-Stieltjes algebra $B(G)$ \cite[Definition 2.2]{eym::}.
Given a locally compact group $G$ and function $f:G \rightarrow \C$ define
\begin{equation}\label{eqn.norm}
\|f\|_{B(G)}:=\inf\{\|x\|_H\|y\|_H:f(t)=\langle \pi(t)x,y\rangle_H \text{ for all }t \in G\},
\end{equation}
where the infimum is over all Hilbert spaces $H$; elements $x,y \in H$; and strongly continuous representations $\pi:G \rightarrow \Aut(H)$ -- that is, strongly continuous unitary representations of $G$ on $H$. The set $B(G)$ is then the set of functions for which this quantity is finite.
\cite[Proposition 2.16]{eym::} shows that $B(G)$ equipped with $\|\cdot\|_{B(G)}$ is a complex Banach algebra and \cite[Lemme 2.14]{eym::} that the infimum in (\ref{eqn.norm}) is attained. The space $A(G)$ can be defined as the closure of $B(G)\cap C_c(G)$ in $B(G)$ where $C_c(G)$ is the set of continuous compactly supported functions, and we write $\|f\|_{A(G)}:=\|f\|_{B(G)}$ for $f \in A(G)$. If $G$ is finite then this definition agrees with the one in the previous section.
We write $\mathcal{W}(G)$ for the set of cosets of open subgroups of $G$ and a short calculation confirms that if $W \in \mathcal{W}(G)$ then $\|1_W\|_{B(G)} =1$. Moreover, if $z \in \ell_1(\mathcal{W}(G))$ is integer-valued then
\begin{equation*}
f:=\sum_{W \in \mathcal{W}(G)}{z_W1_W} \text{ is integer-valued and has }\|f\|_{B(G)} \leq \|z\|_{\ell_1(\mathcal{W}(G))}.
\end{equation*}
In \cite{lef::}, Lefranc announced the following converse.
\begin{theorem}\label{thm.lf}
Suppose that $G$ is a locally compact group and $f \in B(G)$ is integer-valued. Then there is some integer-valued $z \in \ell_1(\mathcal{W}(G))$ such that
\begin{equation*}
f=\sum_{W \in \mathcal{W}(G)}{z_W1_W}.
\end{equation*}
\end{theorem}
It seems Lefranc's proof of Theorem \ref{thm.lf} never appeared in print, but happily a beautiful argument of Host did in \cite{hos::}, and Theorem \ref{thm.lf} is sometimes (see \emph{e.g.} \cite{run::}) called the Cohen-Host theorem since Cohen \cite{coh::} proved it in the case when $G$ is abelian.
It has been used for a number of endeavours in harmonic analysis, for example characterising the locally compact groups $G$ for which $A(G)$ is amenable \cite{forrun::}; and characterising the ideals of $A(G)$ with bounded approximate identities \cite{forkanlauspr::}. Both of these and more are discussed in \cite{run::}.
Host's argument actually shows the following stronger result.
\begin{theorem}\label{thm.host}
Suppose that $G$ is a locally compact group and $f \in B(G)$ is integer-valued with $\|f\|_{B(G)}\leq M$. Then there is an integer $L \leq M$, open subgroups $K_1,\dots,K_L$ of $G$, and integer-valued functions $z^{(i)} \in \ell_1(G/K_i)$ such that
\begin{equation*}
f=\sum_{i=1}^L{\sum_{W \in G/K_i}{z_W^{(i)}1_W}}.
\end{equation*}
\end{theorem}
The bound on $L$ makes this result partially quantitative, but it still tells us nothing if $G$ is a finite group, and we prove the following.
\begin{theorem}\label{thm.mn2}
Suppose that $G$ is a finite group and $f:G\rightarrow \Z$ has $\|f\|_{A(G)} \leq M$. Then there is some $L=O(M)$, subgroups $K_1,\dots,K_L \leq G$, and integer-valued functions $z^{(i)} \in \ell_1(G/K_i)$ (for $1\leq i \leq L$) such that
\begin{equation*}
f=\sum_{i=1}^L{\sum_{W \in G/K_i}{z_W^{(i)}1_W}} \text{ and } \|z^{(i)}\|_{\ell_1(G/K_i)} \leq \exp(\exp(\exp(O(M^2)))).
\end{equation*}
\end{theorem}
This improves on the bounds in \cite[Theorem 1.2]{san::9} (which are triply tower in nature), and we also hope it provides a more easily digested proof.
If $f$ is the indicator function of an arithmetic progression of integers with size $N$ then its algebra norm was computed classically (see \emph{e.g.} \cite[(17.)]{fej::0}) and shown to be $\frac{4}{\pi^2}\log N +O(1)$. Embedding this progression in $\Z/p\Z$ for a sufficiently large prime $p$ yields a subset of a finite group of size $N$ with algebra norm $\frac{4}{\pi^2}\log N+O(1)$. Since the only subgroups of $\Z/p\Z$ are the whole group and the trivial group we see that we cannot hope to beat $L =\Omega(\exp(\frac{\pi^2}{4}M))$ in the bounds in Theorem \ref{thm.mn2}.
In the setting above -- small sets in cyclic groups of prime order -- Theorem \ref{thm.mn2} is known with bounds of the form $L=\exp(O(M))$ (see \cite[Theorem 2]{konshk::1}), so in some sense the example above is tight. It seems plausible that arithmetic progressions are the worst examples more generally. In the dyadic groups -- groups of the form $\F_2^n$ -- we do not even have arithmetic progressions and one might hope there that a polynomial bound for $L$ holds. We do not know of examples that make fundamental use of the non-abelian structure of more general groups.
As a final remark it might be of interest to combine the arguments here with Host's to prove a fully quantitative version of Theorem \ref{thm.host}. Indeed, when $G$ is abelian this was done in \cite[Theorem 1.2]{gresan::0} and that result has since been applied in \cite{woj::0} and \cite{czuwoj::}.
\section{Notation and basic facts}\label{sec.not}
It is convenient to gather together a few definitions and standard lemmas along with some alternative ways of defining the algebra norm. We assume from now on that $G$ is a finite group.
We write $C(G)$ for the complex-valued functions on $G$ and $\rho$ for the right regular representation so that
\begin{equation*}
\rho_y(f)(x):=f(xy) \text{ for all }x,y \in G.
\end{equation*}
We write $M(G)$ for the complex-valued measures on $G$ and extend $\rho$ to $M(G)$ in the natural way. Moreover we put
\begin{equation*}
f\ast \mu:=\int{\rho_{y^{-1}}(f)(x)d\mu(y)} \text{ for all }f \in C(G) \text{ and }\mu \in M(G),
\end{equation*}
so that
\begin{equation*}
\rho_z(f \ast \mu) = \int{\rho_{zy^{-1}}(f)d\mu(y)} = \int{\rho_{u^{-1}}(f)d\rho_z(\mu)(u)} = f \ast \rho_z(\mu) \text{ for all }z \in G.
\end{equation*}
We define convolution of measures similarly.
If $\mu \in M(G)$ is non-negative and $g \in L_1(\mu)$ then we write $gd\mu$ for the element of $M(G)$ induced by
\begin{equation*}
C(G) \rightarrow C(G); f \mapsto \int{fgd\mu}.
\end{equation*}
If $S \subset G$ is non-empty then we write $m_S$ for the uniform probability measure on $G$ supported on $S$, and $\delta_S$ for the counting measure supported on $S$; we write $\ell_p(S)$ for $L_p(\delta_S)$.
If $\mu$ is a Haar measure on $G$ then we define
\begin{equation*}
f \ast g:=f \ast (gd\mu) \text{ for all } f \in L_1(\mu) \text{ and }g \in L_1(\mu).
\end{equation*}
The two examples we use are when $f,g \in L_1(m_G)$ and when $h,k \in \ell_1(G)$, in which case
\begin{equation*}
f \ast g=\E_y{\rho_{y^{-1}}(f)g(y)} \text{ and } h \ast k = \sum_y{\rho_{y^{-1}}(h)k(y)}.
\end{equation*}
We also put
\begin{equation*}
\wt{f}(x):=\overline{f(x^{-1})} \text{ for all }x \in G, f \in C(G),
\end{equation*}
so that $\wt{1_A} = 1_{A^{-1}}$ for any $A \subset G$, and make a similar definition for $\wt{\mu}$ where $\mu \in M(G)$.
For $f \in C(G)$ and $\mu \in M(G)$ we write
\begin{equation*}
\langle f,\mu\rangle:=\int{f\overline{d\mu}} \text{ and } \langle \mu,f\rangle:=\int{\overline{f}d\mu}.
\end{equation*}
A short calculation verifies that
\begin{equation*}
\langle f,\nu\ast \wt{\mu}\rangle=\langle f\ast \mu,\nu\rangle = \langle \mu, \wt{f}\ast \nu\rangle \text{ for all }f \in C(G), \mu,\nu \in M(G).
\end{equation*}
Given a homomorphism $\pi:G \rightarrow \Aut(H)$ we write
\begin{equation*}
\wh{\mu}(\pi):=\int{\pi(t^{-1})d\mu(t)}.
\end{equation*}
This is the analogue of the Fourier transform.
\begin{lemma}\label{lem.split}
Suppose that $H \leq G$ and $f \in A(G)$. Then
\begin{equation*}
\|f\|_{A(G)} = \|f-f\ast m_H\|_{A(G)} + \|f\ast m_H\|_{A(G)}.
\end{equation*}
\end{lemma}
\begin{proof}
This is a routine calculation: write $M$ for the operator $g \mapsto f \ast g -f \ast m_H \ast g$ and $N$ for the operator $g \mapsto f \ast m_H \ast g$. Then $N^*g=m_H \ast \wt{f}\ast g$ and so $MN^*g = 0$. But then $M^*MN^*N = 0=N^*NM^*M$ and so $M^*M$ and $N^*N$ commute. By the spectral theorem they can be simultaneously diagonalised and any eigenvector for $M^*M$ with non-zero eigenvalue is in the kernel of $N^*N$ and vice versa. The result follows from the definition of the $A(G)$-norm.
\end{proof}
We now turn to two useful equivalent definitions for the algebra norm. The first can be compared with the definition of the uniform almost periodicity norms \cite[Definition 11.15]{taovu::}, but see also \cite[Th{\'e}or{\`e}me, p218]{eym::}.
\begin{lemma}\label{lem.various}
Suppose that $f \in A(G)$. Then there is a constant $M \leq \|f\|_{A(G)}$; a finite probability space $(\Omega,\P)$; and functions $g_\omega, h_\omega \in L_2(m_G)$ of unit $L_2(m_G)$-norm for all $\omega \in \Omega$ such that
\begin{equation*}
f(x)=M\E_\omega{\wt{h_\omega} \ast g_\omega(x)} \text{ for all }x \in G.
\end{equation*}
There is also a finite dimensional Hilbert space $H$; a homomorphism $\pi:G \rightarrow \Aut(H)$; and elements $v,w \in H$ such that $\|v\|\|w\| \leq \|f\|_{A(G)}$ and
\begin{equation*}
f(x)=\langle \pi(x)v,w\rangle \text{ for all }x \in G.
\end{equation*}
\end{lemma}
\begin{proof}
Let $(\lambda_\omega)_{\omega}$ be the singular values of $g\mapsto f \ast g$ with corresponding bases $(v_\omega)_{\omega}$ and $(w_\omega)_{\omega}$ so that $f\ast v_\omega = \lambda_\omega w_\omega$. Put $u:=|G|1_{\{1_G\}}$, whence
\begin{equation*}
f(x)=\int{\rho_{t^{-1}}(f \ast \rho_t(u)) (x)dm_G(t)} = \sum_\omega{\lambda_\omega\int{\rho_{t^{-1}}(w_\omega)\langle \rho_t(u),v_\omega\rangle dm_G(t)}}.
\end{equation*}
Let $\P(\omega)=\lambda_\omega\|f\|_{A(G)}^{-1}$ (which is a probability measure by definition of $\|\cdot \|_{A(G)}$) and put $h_\omega:=\wt{w_\omega}$ and $g_\omega(t):=\langle \rho_t(u),v_\omega\rangle$. It is then easy to check that $\|h_\omega\|_{L_2(m_G)} = \|w_\omega\|_{L_2(m_G)} = 1$ for all $\omega$; and furthermore
\begin{equation*}
\|g_\omega\|_{L_2(m_G)}^2 = \int{|\langle \rho_t(u),v_\omega\rangle|^2 dm_G(t)}=\|v_\omega\|_{L_2(m_G)}^2 = 1
\end{equation*}
for all $\omega$. The first part follows.
Given the first representation note that
\begin{equation*}
\wt{h_\omega} \ast g_\omega(x) = \langle g_\omega,\rho_x(h_\omega)\rangle_{L_2(m_G)}.
\end{equation*}
Let $H:=L_2(m_G)^{\Omega}$ and $\rho$ the right regular representation in the natural way. The result then follows from a calculation.
\end{proof}
It is useful to make a further definition: we write $\mathcal{N}(G)$ for the set of symmetric neighbourhoods of $G$ and for $Z,Z^+,Z^- \subset G$ non-empty and $X \in \mathcal{N}(G)$ we say that $(Z,X;Z^+,Z^-)$ is \textbf{$\eta$-closed} if
\begin{equation*}
Z^-X \subset Z , ZX^{-1} \subset Z^+ \text{ and } \frac{|Z^+\setminus Z^-|}{|Z|} \leq \eta.
\end{equation*}
Frequently we shall simply say $(Z,X)$ is $\eta$-closed and introduce $Z^+$ and $Z^-$ as needed.
Such sets support an approximate invariant measure captured by the following lemma. The proof is immediate.
\begin{lemma}\label{lem.inv}
Suppose that $(Z,X)$ is $\eta$-closed. Then
\begin{equation*}
\sup{\{|\rho_x(f\ast m_Z)(t) - f \ast m_Z(t)|: x \in X\}}=\|f\ast m_Z-f \ast m_Z(t)\|_{L_\infty(tX)} \leq \eta \|f\|_{L_\infty(G)}.
\end{equation*}
\end{lemma}
The representation $\rho$ is isometric on $L_p(G)$ and we preserve an approximate version of this in the following facts, valid whenever $X \in \mathcal{N}(G)$ and $x \in X$:
\begin{equation*}
\|\rho_x(f)\|_{L_p(m_{BX})} = \|f\|_{L_p(m_{BX})} \text{ whenever }f \in L_p(m_B),
\end{equation*}
and
\begin{equation*}
\|\rho_x(f)\|_{L_p(m_B)}^p\leq \frac{|BX|}{|B|}\|f\|_{L_p(m_{BX})}^p \text{ whenever } f \in L_p(m_{BX}).
\end{equation*}
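Both facts follow from the change of variables $s=tx$; we sketch this for $X \in \mathcal{N}(G)$ and $x \in X$. For the first, $f$ is supported on $B$, so the non-zero terms below have $tx \in B$, that is $t \in Bx^{-1} \subset BX$, and every $s \in B$ arises as $s=tx$ for some $t \in BX$; hence
\begin{equation*}
\|\rho_x(f)\|_{L_p(m_{BX})}^p = \frac{1}{|BX|}\sum_{t \in BX}{|f(tx)|^p} = \frac{1}{|BX|}\sum_{s \in B}{|f(s)|^p} = \|f\|_{L_p(m_{BX})}^p.
\end{equation*}
For the second, $t \mapsto tx$ maps $B$ injectively into $BX$, so
\begin{equation*}
\|\rho_x(f)\|_{L_p(m_B)}^p = \frac{1}{|B|}\sum_{t \in B}{|f(tx)|^p} \leq \frac{1}{|B|}\sum_{s \in BX}{|f(s)|^p} = \frac{|BX|}{|B|}\|f\|_{L_p(m_{BX})}^p.
\end{equation*}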
Finally we shall need the following version of Ruzsa's covering lemma; the argument is the same as \cite[Lemma 2.14]{taovu::}.
\begin{lemma}[Ruzsa's covering lemma]\label{lem.rcl}
Suppose that $X,W \subset G$ are non-empty and have $D|W|\geq |WX|$. Then there is a set $T \subset X$ of size at most $D$ such that $\{W^{-1}Wt:t \in T\}$ is a cover of $X$.
\end{lemma}
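For completeness we sketch that argument: let $T \subset X$ be maximal such that the sets $(Wt)_{t \in T}$ are pairwise disjoint. Then
\begin{equation*}
|T||W| = |WT| \leq |WX| \leq D|W|,
\end{equation*}
so $|T| \leq D$. By maximality, for every $x \in X$ there is some $t \in T$ with $Wx \cap Wt \neq \emptyset$, which gives $x \in W^{-1}Wt$, so that $\{W^{-1}Wt:t \in T\}$ covers $X$.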
\section{Overview and conditional proof}\label{sec.over}
In the introduction we discussed two ways to view this work. In this section we give an overview of the proof of our results and also highlight a third way to view our arguments, this time through an additive combinatorial lens.
The overall structure of our argument is fairly typical for additive combinatorics, translating the work of \cite{gresan::0} to the non-abelian setting. This can be an involved affair as seen in \cite{san::9}, and part of our hope here is that the shorter and sharper arguments we now give will be more illuminating and useful.
The main approach is inductive but works over the class of functions whose values are \emph{almost} integers -- we say $f:G \rightarrow \C$ is \textbf{$\epsilon$-almost integer-valued} if there is some $f_\Z:G \rightarrow \Z$ such that $\|f-f_\Z\|_{L_\infty(G)} <\epsilon$. If $\epsilon \in \left(0,\frac{1}{2}\right]$ then $f_\Z$ is uniquely defined and we shall always assume it is.
We shall prove the following result.
\begin{theorem}\label{thm.ky}
There is an absolute constant $C>0$ such that if $f$ is $\epsilon$-almost integer-valued, $\|f\|_{A(G)} \leq M$, and $\epsilon \leq \exp(-CM)$, then there is some $L=O(M)$, subgroups $H_1,\dots,H_L \leq G$, and integer-valued functions $z^{(i)} \in \ell_1(G/H_i)$ (for $1\leq i \leq L$) such that
\begin{equation*}
f_\Z=\sum_{i=1}^L{\sum_{W \in G/H_i}{z_W^{(i)}1_W}} \text{ and } \|z^{(i)}\|_{\ell_1(G/H_i)} \leq \exp(\exp(\exp(O(M^2)))).
\end{equation*}
\end{theorem}
First note that Theorem \ref{thm.mn2} follows immediately.
In thinking about this statement it may help to note that since $z^{(i)}$ is integer-valued the upper bound on $\|z^{(i)}\|_{\ell_1(G/H_i)}$ yields an upper bound on the size of the support of $z^{(i)}$ -- that is on the number of cosets of $H_i$ where $f_{\Z}$ can have support.
\begin{proof}[Proof of Theorem \ref{thm.mn}]
The argument is purely a manipulation of notation and is best conveyed through a picture.
Since $f$ is Boolean we have $f=f_\Z$ and so $f$ is certainly $\exp(-CM)$-almost integer-valued; we can apply Theorem \ref{thm.ky} to get subgroups $H_1,\dots,H_L \leq G$ and integer-valued functions $z^{(i)} \in \ell_1(G/H_i)$ such that
\begin{equation*}
f=\sum_{i=1}^L{\sum_{W \in G/H_i}{z_W^{(i)}1_W}} \text{ and } \|z^{(i)}\|_{\ell_1(G/H_i)} \leq \exp(\exp(\exp(O(M^2)))).
\end{equation*}
Let $g_1^{(i)}H_i,\dots,g_{R_i}^{(i)}H_i$ be an arbitrary ordering of the $R_i=\exp(\exp(\exp(O(M^2))))$ cosets of $H_i$ in the support of $z^{(i)}$. Construct a coset decision tree $T^{(i)}$ with a root at $g_1^{(i)}H_i$; an edge labelled $0$ from $g_j^{(i)}H_i$ to $g_{j+1}^{(i)}H_i$ for all $j<R_i$; a leaf attached to every vertex by an edge labelled $1$; and a final leaf attached to $g_{R_i}^{(i)}H_i$ by an edge labelled $0$. (The tree is really a decision list\footnote{See \cite[Exercise 3.23]{odo::1}.} -- see Figure \ref{fig.2}.)
\begin{figure}
\centering
\begin{tikzpicture}[level/.style={level distance=95pt, sibling distance=50mm/#1}]
\node [circle,draw] {$g_1^{(i)}H_i$}
child[grow=right] {node [circle,draw] {$g_2^{(i)}H_i$}
child[grow=down] {node [rectangle,draw,yshift=30pt] {$*$}
edge from parent
node[right, yshift=5pt] {\scriptsize{$x \in g_2^{(i)}H_i$}}
node[left,yshift=5pt] {\scriptsize{$1$}}
}
child[grow=right] {node {$\cdots$}
child[grow=right] {node [circle,draw] {$g_{R_i}^{(i)}H_i$}
child[grow=down] {node [rectangle,draw,yshift=30pt] {$*$}
edge from parent
node[right, yshift=5pt] {\scriptsize{$x \in g_{R_i}^{(i)}H_i$}}
node[left,yshift=5pt] {\scriptsize{$1$}}}
child[grow=right] {node [rectangle,draw] {$*$}
edge from parent
node[above, yshift=5pt] {\scriptsize{$x \in G\setminus g_{R_i}^{(i)}H_i$}}
node[below] {\scriptsize{$0$}}
}
edge from parent
node[above, yshift=5pt] {\scriptsize{$x \in G\setminus g_{R_i-1}^{(i)}H_i$}}
node[below] {\scriptsize{$0$}}
}edge from parent
node[above, yshift=5pt] {\scriptsize{$x \in G\setminus g_2^{(i)}H_i$}}
node[below] {\scriptsize{$0$}}
}
edge from parent
node[below] {\scriptsize{$0$}}
node[above, yshift=5pt] {\scriptsize{$x \in G\setminus g_1^{(i)}H_i$}}
}
child[grow=down] {node [rectangle,draw,yshift=30pt] {$*$}
edge from parent
node[right, yshift=5pt] {\scriptsize{$x \in g_1^{(i)}H_i$}}
node[left,yshift=5pt] {\scriptsize{$1$}}
};
\end{tikzpicture}
\caption{The coset decision tree $T^{(i)}$, with asterisks marking where the roots of the $R_i+1$ copies of $T^{(i+1)}$ go.}\label{fig.2}
\end{figure}
We produce $T$ iteratively and it is convenient to enlarge our class of coset decision trees to allow integer values on the leaves, not just values in $\{0,1\}$. (The final tree we produce has values in $\{0,1\}$.) We start with $T_1:=T^{(1)}$ and write $l_i$ for the leaf-value function on $T_i$. At stage $i<L$ take every vertex $v$ of degree one and append a copy of $T^{(i+1)}$ such that $v$ is the root of $T^{(i+1)}$, copying all the edge values from $T^{(i+1)}$ in the obvious way. Given a leaf $w$ in the new graph, lying in the copy of $T^{(i+1)}$ appended at $v$, let $w'$ be the vertex it is connected to and define
\begin{equation*}
l_{i+1}(w):=\begin{cases} l_i(v) + z_{g_j^{(i+1)}H_{i+1}}^{(i+1)}& \text{ if }w' \text{ is labelled }g_j^{(i+1)}H_{i+1} \text{ and }ww' \text{ has value }1\\
l_i(v) & \text{ otherwise.}
\end{cases}
\end{equation*}
We terminate with $T:=T_L$. At the end of this process, suppose that we look at a computation path for $x \in G$. This gives us a unique path from the root to some vertex $v$, say
\begin{equation}\label{eqn.path}
v_{1,1},\dots,v_{r_1,1},v_{1,2},\dots,v_{r_2,2},v_{1,3},\dots,v_{r_{L-1},L-1},v_{1,L},\dots,v_{r_L,L},v_{1,L+1}:=v,
\end{equation}
such that $v_{j,i}$ is labelled $g_j^{(i)}H_i$ for $1 \leq j \leq r_i$ and $1\leq i \leq L$. The fact that all the cosets of $H_1$ occur before those of $H_2$ \emph{etc.} simply reflects the order we built up $T$.
Write $I \subset \{1,\dots,L\}$ for the set of indices such that the edge between $v_{r_i,i}$ and $v_{1,i+1}$ is a $1$. If the value of the edge between $v_{r_i,i}$ and $v_{1,i+1}$ is $0$ then $r_i=R_i$ and $x \in G\setminus g_j^{(i)}H_i$ for all $1 \leq j \leq R_i$ and so $x \not \in \bigcup{\supp z^{(i)}}$. If the value of the edge between $v_{r_i,i}$ and $v_{1,i+1}$ is $1$ then $x \in G\setminus g_j^{(i)}H_i$ for all $1 \leq j < r_i$ and $x \in g_{r_i}^{(i)}H_i$. It follows that
\begin{equation*}
f(x)=\sum_{i=1}^L{\sum_{W \in G/H_i}{z_W^{(i)}1_W(x)}}=\sum_{i \in I}{z^{(i)}_{g_{r_i}^{(i)}H_i}}=l_L(v)
\end{equation*}
as required. The total number of leaves of the resulting coset decision tree is at most $(R_1+1)\cdots (R_L+1) \leq \exp(\exp(\exp(O(M^2))))$ as claimed.
\end{proof}
To prove Theorem \ref{thm.ky} we need two key ingredients. The first shows us how to find structure in the support of $f_\Z$ when $f$ has small algebra norm and is almost integer-valued. We shall prove this in \S\ref{sec.ac}.
\begin{proposition}\label{prop.infstruct}
There is an absolute $C>0$ such that if $f$ is $\epsilon$-almost integer-valued with $\|f\|_{A(G)} \leq M$ and $\epsilon \leq \exp(-CM)$, then there is some $S \subset \supp f_\Z$ such that $|SS^{-1}| \leq M^{O(M)}|S|$ and $|S| = M^{-O(M)}|\supp f_\Z|$.
\end{proposition}
When $G$ is abelian the conclusion $|SS^{-1}| \leq K|S|$ (in additive notation $|S-S| \leq K|S|$) output above is the input for Fre{\u\i}man-type theorems. The output of these, \emph{e.g.} \cite[Theorem 5.46]{taovu::}, is a set of the form $P(d_1,N_1)+\dots+P(d_r,N_r) +H$ where $H \leq G$ and $P(d_i,N_i)$ is an arithmetic progression of length $2N_i+1$ and common difference $d_i$ centred at $0_G$. For us the important feature is that the sets $B_i:=P(d_1,2^{-i}N_1)+\dots+P(d_r,2^{-i}N_r) +H$ (for $i \in \N_0$) form a base for a topology on $G$ with some nice properties. In particular, all the sets $B_i$ are symmetric neighbourhoods of the identity with
\begin{equation}\label{eqn.K}
B_{i+1} + B_{i+1} \subset B_i \text{ and } |B_{i+1}| = \Omega_K(|B_i|) \text{ for all }i \in\N_0
\end{equation}
and
\begin{equation*}
|B_0| = \Omega_K(|A|) \text{ and }B_0 \subset 2A-2A.
\end{equation*}
The fact that the $\Omega$-term in (\ref{eqn.K}) does not depend on $i$ captures the linear structure of characters, and replicating this is a key hurdle in proving a Fre{\u\i}man-type theorem in general groups. This (and much more) has now been achieved by Breuillard, Green and Tao in \cite[Theorem 1.6]{bregretao::0}, but at the cost of weaker dependencies. We take a different approach and accept some (relatively) mild $i$ dependence in exchange for better $K$-dependence.
\begin{lemma}\label{lem.fr}
Suppose that $A$ is non-empty and $|AA^{-1}| \leq K|A|$ and $\eta \in (0,1]$. Then there are $Z,Y \in \mathcal{N}(G)$ such that $(Z,Y^4)$ is $\eta$-closed with
\begin{equation*}
|Y| \geq \exp(-O(\eta^{-2}\log^{O(1)}2K))|A|\text{ and } m_{A^{-1}} \ast 1_{AA^{-1}} \ast m_{A}(x) > \frac{1}{2} \text{ for all }x \in (Z^+)^4.
\end{equation*}
In particular, $(Z^+)^4 \subset A^{-1}AA^{-1}A$.
\end{lemma}
We prove this in \S\ref{sec.tools}.
This lemma is designed to be used iteratively and the conclusion in terms of the convolution is there to deal with the first step when we may know that $|AA^{-1}| \leq K|A|$ but not that $|A^{-1}AA^{-1}A| =O_K(|A|)$. (In the abelian setting Pl{\"u}nnecke's inequality \cite[Corollary 6.26]{taovu::} gives the latter as a consequence of the former but in non-abelian groups this need not be the case. See the discussion after \cite[Proposition 2.38]{taovu::}.)
As well as providing us with a way of dealing with the output of Proposition \ref{prop.infstruct}, Lemma \ref{lem.fr} also provides us with a general way of replacing level sets of characters -- Bohr sets\footnote{See \cite[\S4.4]{taovu::}.} -- in the abelian setting. (The obvious analogue of using characters or representations of non-abelian groups runs into complications with controlling the dimension of the representation.) We use this to prove a sort of quantitative continuity result:
\begin{proposition}\label{prop.inv}
Suppose that $A \in \mathcal{N}(G)$ has $|A^2| \leq K|A|$; $f \in A(G)$ has $\|f\|_{A(G)} \leq M$; and $\epsilon,\eta \in (0,1]$ and $p \geq 2$ are parameters. Then there are sets $X,B \in \mathcal{N}(G)$ such that $(X,B)$ is an $\eta$-closed pair with $(X^+)^4 \subset A^4$,
\begin{equation*}
|B| \geq \exp(-(\eta^{-1}MK)^{p\exp(O(\epsilon^{-2}))})|A| \text{ and } \sup_t{\|f-f \ast m_X\|_{L_p(m_{tB})}} \leq \epsilon M.
\end{equation*}
\end{proposition}
We prove this result in \S\ref{sec.is}; the key tool is Corollary \ref{cor.keyt}, recorded and proved in \S\ref{sec.tools}.
The fact that the argument gives a triply exponential bound is a result of iterative application of Lemma \ref{lem.fr}, but the fact that it is triply-exponential in $O(\epsilon^{-2})$ (rather than, say, $O(\epsilon^{-1})$) comes from the application of the Cotlar-Stein lemma at the end of the proof of Proposition \ref{prop.inv}. It seems conceivable that this might be improved.
With these tools recorded we are ready to stitch them together to give our main iteration lemma.
\begin{lemma}\label{lem.itlem}
There is an absolute constant $C>0$ such that if $f$ is $\epsilon$-almost integer-valued, $\|f\|_{A(G)} \leq M$, $\eta>0$ is a parameter and $\epsilon \leq \exp(-CM)$, then there is some $H \leq G$ such that $f\ast m_H$ is $(\epsilon + \eta)$-almost integer-valued, $(f\ast m_H)_\Z \not \equiv 0$ and $|H| \geq \exp(-\exp(\exp(O(M^{2}+\log \log \eta^{-1}))))|\supp f_\Z|$.
\end{lemma}
\begin{proof}
Apply Proposition \ref{prop.infstruct} (possible provided $\epsilon \leq \exp(-CM)$) to get $S \subset \supp f_\Z$ such that
\begin{equation*}
|S| = M^{-O(M)}|\supp f_\Z| \text{ and }|SS^{-1}| \leq M^{O(M)}|S|.
\end{equation*}
By Lemma \ref{lem.fr} (applied to $S$ with parameter $1$) there is some $A \in \mathcal{N}(G)$ such that
\begin{equation*}
|A| \geq \exp(-M^{O(1)})|S|, \text{ and }m_{S^{-1}}\ast 1_{SS^{-1}}\ast m_S(x)>\frac{1}{2} \text{ for all }x \in A^4.
\end{equation*}
It follows that
\begin{equation*}
\frac{1}{2}|A^2|\leq \sum_{x \in A^2}{m_{S^{-1}}\ast 1_{SS^{-1}}\ast m_S(x)} \leq |SS^{-1}| \leq M^{O(M)}|S|,
\end{equation*}
so
\begin{equation*}
|A| \geq \exp(-M^{O(1)})|\supp f_\Z| \text{ and }|A^2| \leq \exp(M^{O(1)})|A|.
\end{equation*}
Moreover, if $Z\subset A^4$ then
\begin{equation*}
\frac{1}{2}\leq \langle m_{S^{-1}} \ast 1_{SS^{-1}}\ast m_S,m_Z\rangle = \langle m_S \ast \wt{m_Z},1_{SS^{-1}} \ast m_S\rangle_{\ell_2(G)} \leq \|1_S \ast \wt{m_Z}\|_{\ell_\infty(G)}|SS^{-1}||S|^{-1},
\end{equation*}
so
\begin{equation}\label{eqn.uq}
\sup_t{m_{tZ}(S)}=\|1_S \ast \wt{m_{Z}}\|_{\ell_\infty(G)} \geq M^{-O(M)} \text{ for all }\emptyset \neq Z \subset A^4.
\end{equation}
Apply Proposition \ref{prop.inv} with parameters $2^{-6}M^{-1}$, $2^{-6}M^{-1}$, and $p$ (the last of which is to be optimised) to get $X,B \in \mathcal{N}(G)$ such that $(X,B)$ is $2^{-6}M^{-1}$-closed, $(X^+)^4 \subset A^4$ and
\begin{equation*}
|B| \geq \exp(-\exp(\exp(O(M^2 +\log p))))|\supp f_\Z| \text{ and }\sup_t{\|f-f\ast m_X\|_{L_p(m_{tB})}} \leq 2^{-6}.
\end{equation*}
We may assume that $\epsilon \leq 2^{-5}$ and hence by the triangle inequality and Lemma \ref{lem.inv} we have that
\begin{equation*}
\|f_\Z - f\ast m_X(t)\|_{L_p(m_{tB})} \leq 2^{-4} \text{ for all }t \in G.
\end{equation*}
It follows that $f\ast m_X$ is $2^{-4}$-almost integer-valued. By Lemma \ref{lem.inv} for all $t\in G$ and $b \in B^{-1}=B$ we also have
\begin{align*}
|(f \ast m_X)_\Z(tb) - (f \ast m_X)_\Z(t)| & \leq |(f \ast m_X)_\Z(tb)-f \ast m_X(tb)|\\ & \qquad + |f \ast m_X(tb)-f \ast m_X(t)|\\ & \qquad \qquad + |f \ast m_X(t)-(f \ast m_X)_\Z(t)| < \frac{1}{2}.
\end{align*}
It follows that $(f \ast m_X)_\Z$ is constant on left cosets of $H$, the group generated by $B$.
Now suppose $t \in G$ so that
\begin{align*}
m_{tB}\left(\left\{x \in tB: (f \ast m_X)_\Z(x)\neq f_\Z(x)\right\}\right) &\leq \|(f\ast m_X)_\Z - f_\Z\|_{L_p(m_{tB})}^p\\
& \leq \big(\|(f\ast m_X)_\Z - f\ast m_X\|_{L_p(m_{tB})}\\
& \qquad \qquad + \|f\ast m_X - f\|_{L_p(m_{tB})}\\
& \qquad \qquad \qquad \qquad + \|f - f_\Z\|_{L_p(m_{tB})}\big)^p\\
&\leq \left(2^{-4} + 2^{-6}+\epsilon\right)^p\leq 4^{-p},
\end{align*}
and since $H=HB$ it follows that
\begin{align*}
&m_{tH}\left(\left\{x \in tH: (f \ast m_X)_\Z(x)\neq f_\Z(x)\right\}\right)\\
&\qquad \qquad = \frac{1}{|B|}\sum_{b \in B}{\frac{\left|\left\{x \in tHb: (f \ast m_X)_\Z(x)\neq f_\Z(x)\right\}\right|}{|H|}}\\
&\qquad \qquad \qquad \qquad = \E_{h \in H}{\frac{\left|\left\{x \in thB: (f \ast m_X)_\Z(x)\neq f_\Z(x)\right\}\right|}{|B|}}\\
&\qquad \qquad \qquad \qquad \qquad \qquad = \E_{h \in H}{m_{thB}\left(\left\{x \in thB: (f \ast m_X)_\Z(x)\neq f_\Z(x)\right\}\right)}\leq 4^{-p}.
\end{align*}
We showed earlier that $(f \ast m_X)_\Z$ is constant on left cosets of $H$ so
\begin{equation*}
(f\ast m_X)_\Z\ast \wt{m_H}(t) = \int{(f\ast m_X)_\Z dm_{tH}} \in \Z.
\end{equation*}
But
\begin{equation*}
|f_\Z\ast \wt{m_H}(t) - (f\ast m_X)_\Z\ast \wt{m_H}(t)| \leq \int{|f_\Z - (f \ast m_X)_\Z|dm_{tH}} =O(M4^{-p}),
\end{equation*}
and so
\begin{equation*}
|f \ast \wt{m_H}(t) - (f\ast m_X)_\Z\ast \wt{m_H}(t)| \leq \epsilon + O(M4^{-p}).
\end{equation*}
Since $H=H^{-1}$ the function $f \ast m_H$ is $(\epsilon + O(M4^{-p}))$-almost integer-valued.
Finally, if $(f\ast m_{H})_\Z \equiv 0$ then $(f\ast m_X)_\Z\equiv 0$ and we have
\begin{equation}\label{eqn.con}
m_{tB}\left(\left\{x \in tB: 0\neq f_\Z(x)\right\}\right) \leq 4^{-p} \text{ for all }t \in G,
\end{equation}
but $B \subset A^4$ and so by (\ref{eqn.uq}) we have $\sup_t{m_{tB}(\supp f_\Z)} \geq M^{-O(M)}$. It follows that we may take $p=O(\max\{M\log M,\log (M\eta^{-1})\})$ such that $f \ast m_H$ is $(\epsilon+\eta)$-almost integer-valued and (\ref{eqn.con}) contradicts this lower bound, so that $(f\ast m_{H})_\Z \not \equiv 0$. The lemma follows.
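To spell out the choice of $p$ (with the implied constants as in the argument above): we require both
\begin{equation*}
O(M4^{-p}) \leq \eta \quad \text{and} \quad 4^{-p} < M^{-O(M)},
\end{equation*}
and the first holds for some $p = O(\log(M\eta^{-1}))$ while the second holds for some $p = O(M\log M)$, whence the stated maximum accommodates both.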
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm.ky}]
Let $C>0$ be the absolute constant in the statement of Lemma \ref{lem.itlem}. Let $\epsilon_i:=2^i\epsilon + 4^{i-2M-4}\exp(-CM)$. We shall define functions $f_i$ such that
\begin{equation*}
f_i \text{ is $\epsilon_i$-almost integer-valued, } \|f_{i+1}\|_{A(G)} \leq\|f_i\|_{A(G)} - \frac{1}{2},
\end{equation*}
and so that there is a group $H_i \leq G$ and an integer-valued function $z^{(i)} \in \ell_1(G/H_i)$ with
\begin{equation*}
(f_i-f_{i+1})_\Z = \sum_{W \in G/H_i}{z_W^{(i)}1_W} \text{ and }\|z^{(i)}\|_{\ell_1(G/H_i)} \leq \exp(\exp(\exp(O(M^{2})))).
\end{equation*}
We set $f_0:=f$ which is certainly $\epsilon_0$-almost integer-valued (for $\epsilon$ sufficiently small). At stage $i \leq 2M+1$ apply Lemma \ref{lem.itlem} with $\eta=4^{-2M-3}\exp(-CM)$ which is possible provided $\epsilon \leq \exp(-C'M)$. We get $H_{i+1} \leq G$ with
\begin{equation*}
|H_{i+1}| \geq \exp(-\exp(\exp(O(M^2))))|\supp (f_i)_\Z|
\end{equation*}
and
\begin{equation*}
f_i \ast m_{H_{i+1}} \text{ is $(\epsilon_i+\eta)$-almost integer-valued}.
\end{equation*}
Put $f_{i+1}:=f_i - f_i \ast m_{H_{i+1}}$. Then $f_{i+1}$ is $(2\epsilon_i + \eta)$-almost integer-valued, and $2\epsilon_i + \eta \leq \epsilon_{i+1}$. For $\epsilon$ sufficiently small we have $\epsilon_i+\eta \leq \frac{1}{4}$ and so if $(f_i \ast m_{H_{i+1}})_\Z(x) \neq 0$ then $|f_i \ast m_{H_{i+1}}(x)| \geq \frac{3}{4}$ by the triangle inequality. Similarly if $(f_i)_{\Z}(y)=0$ then $|f_i(y)|<\frac{1}{4}$, and so $|f_i(y)| < M\cdot 1_{\supp (f_i)_\Z}(y) + \frac{1}{4}$. Hence
\begin{equation*}
M\frac{|\supp (f_i)_\Z \cap (xH_{i+1})|}{|H_{i+1}|} +\frac{1}{4}=M1_{\supp(f_i)_\Z} \ast m_{H_{i+1}}(x) +\frac{1}{4} > |f_i \ast m_{H_{i+1}}(x)| \geq \frac{3}{4}
\end{equation*}
by the triangle inequality. Since the left cosets of $H_{i+1}$ partition $G$ and $(f_i\ast m_{H_{i+1}})_\Z$ is invariant on left cosets of $H_{i+1}$ it follows that
\begin{align}\label{eqn.ineq}
|\supp (f_i\ast m_{H_{i+1}})_\Z|&=\sum_{xH_{i+1} \in G/H_{i+1}}{|H_{i+1}|1_{\supp(f_i\ast m_{H_{i+1}})_\Z}(x)}\\ \nonumber
& \leq \sum_{xH_{i+1} \in G/H_{i+1}}{2\cdot M|\supp (f_i)_\Z \cap (xH_{i+1})|}=2M|\supp (f_i)_\Z|.
\end{align}
Again, since the function $(f_i\ast m_{H_{i+1}})_\Z$ is invariant on left cosets of $H_{i+1}$, it follows from (\ref{eqn.ineq}) and the lower bound on $|H_{i+1}|$ that $(f_i\ast m_{H_{i+1}})_\Z$ takes non-zero integer values on at most $ \exp(\exp(\exp(O(M^{2}))))$ left cosets of $H_{i+1}$. Added to this, the value of $(f_i\ast m_{H_{i+1}})_\Z$ on each of these is an integer between $-(M+1)$ and $(M+1)$. It follows that $(f_i-f_{i+1})_\Z=(f_i\ast m_{H_{i+1}})_\Z$ has the claimed form.
Finally, since $(f_i\ast m_{H_{i+1}})_\Z$ is not identically $0$ it follows that $\|f_i\ast m_{H_{i+1}}\|_{A(G)} \geq 1-\epsilon_{i+1} \geq \frac{1}{2}$ and hence $ \|f_{i+1}\|_{A(G)} =\|f_i\|_{A(G)} -\|f_i\ast m_{H_{i+1}}\|_{A(G)} \leq\|f_i\|_{A(G)} - \frac{1}{2}$ (by Lemma \ref{lem.split}). In view of this the iteration terminates in $2M$ steps and unpacking what that means we have the result.
\end{proof}
\section{Croot-Sisask lemmas}\label{sec.tools}
The basic tool we need is a slightly adjusted version of \cite[Lemma 3.2]{croabasis::} (the proof of which is the same). Before beginning we set some standard notation: if $g:\Omega \times Z \rightarrow \C$ and $\omega \in \Omega$ define
\begin{equation*}
g_\omega:Z \rightarrow \C; z \mapsto g(\omega,z).
\end{equation*}
\begin{lemma}\label{lem.cas}
Suppose that $\mu$ is a non-negative measure on a finite set; $\nu$ is a complex-valued measure on a finite set $\Omega$; $p \geq 2$ and $\epsilon \in (0,1]$ are parameters; and $g \in L_p(|\nu| \times \mu)$. Then there is a function $h$ with $|h(\omega)|=\|\nu\|$ for all $\omega \in \Omega$ and a positive integer $r=O(p\epsilon^{-2})$ such that
\begin{equation*}
|\nu|^r\left(\left\{ \omega \in \Omega^r: \left\| \int{g_{\omega'} d\nu(\omega')} - \frac{1}{r}\left(\sum_{i=1}^r{h(\omega_i)g_{\omega_i}}\right)\right\|_{L_p(\mu)} \leq \epsilon \|g\|_{L_p(|\nu|\times \mu)}\right\}\right) \geq \frac{1}{2}\|\nu\|^r.
\end{equation*}
\end{lemma}
We also need \cite[Proposition 4.2]{san::00} with $S$ replaced by $S^{-1}$ and $T$ replaced by $T^{-1}$.
\begin{lemma}\label{lem.br}
Suppose that $A,S,T \subset G$ are finite and non-empty with $|AS^{-1}| \leq K|A|$ and $|ST| \leq L|S|$, and $k \in \N$ and $\epsilon \in (0,1]$ are parameters. Then there is some $X \in \mathcal{N}(G)$ such that
\begin{equation*}
|X| \geq \exp(-O(\epsilon^{-2}k^2(\log 2K)( \log 2L)))|T|
\end{equation*}
such that
\begin{equation*}
|m_{A^{-1}} \ast 1_{AS^{-1}}\ast m_S(x)-1| \leq \epsilon \text{ for all }x \in X^k.
\end{equation*}
\end{lemma}
With these recorded we turn to developing the consequences we need.
\begin{lemma*}[Lemma \ref{lem.fr}]
Suppose that $A$ is non-empty and $|AA^{-1}| \leq K|A|$ and $\eta \in (0,1]$. Then there are $Z,Z^+,Z^-,Y \in \mathcal{N}(G)$ such that $(Z,Y^4;Z^+,Z^-)$ is $\eta$-closed with
\begin{equation*}
|Y| \geq \exp(-O(\eta^{-2}\log^{O(1)}2K))|A|\text{ and } m_{A^{-1}} \ast 1_{AA^{-1}} \ast m_{A}(x) > \frac{1}{2} \text{ for all }x \in (Z^+)^4.
\end{equation*}
In particular, $(Z^+)^4 \subset A^{-1}AA^{-1}A$.
\end{lemma*}
\begin{proof}
Apply Lemma \ref{lem.br} with $T=A^{-1}$, $S=A$ and $k=48$ to get $W \in \mathcal{N}(G)$ with
\begin{equation*}
|W| \geq \exp(-O(\log^2 2K))|A| \text{ and }|m_{A^{-1}} \ast 1_{AA^{-1}}\ast m_A(x)-1| <\frac{1}{2} \text{ for all }x \in W^{48}.
\end{equation*}
Apply Lemma \ref{lem.br} with all sets equal to $W$ and a parameter $8r$ to get $Y \in \mathcal{N}(G)$ such that
\begin{equation*}
Y^{8r} \subset W^4 \text{ and } |Y| \geq \exp(-O(r^2\log^4K))|A|.
\end{equation*}
Now
\begin{equation*}
\prod_{i=0}^{r-1}{\frac{|Y^8Y^{8i}W^4Y^{8i}Y^8|}{|Y^{8i}W^4Y^{8i}|}} \leq \frac{|W^{12}|}{|W^4|} \leq \exp(O(\log^2 2K)) ,
\end{equation*}
and so there is some $0 \leq i <r$ such that
\begin{equation*}
|Y^8Y^{8i}W^4Y^{8i}Y^8| \leq \left(\frac{|W^{12}|}{|W^4|}\right)^{\frac{1}{r}}|Y^{8i}W^{4}Y^{8i}|.
\end{equation*}
Let $r=O(\eta^{-1}\log^2 2K)$ be such that $(|W^{12}|/|W^4|)^{1/r}$ is at most $1+\eta$. Set $Z:=Y^4Y^{8i}W^{4}Y^{8i}Y^4$, $Z^-:=Y^{8i}W^{4}Y^{8i}$ and $Z^+:=Y^8Y^{8i}W^{4}Y^{8i}Y^8$ which are all elements of $\mathcal{N}(G)$ and $Z^-Y^4 \subset Z$ and $Z(Y^4)^{-1} \subset Z^+$ so $(Z,Y^4)$ is $\eta$-closed. Finally, $(Z^+)^4 \subset W^{48}$ from which the result follows.
\end{proof}
The following lemma is a version of the Croot-Sisask lemma \cite{crosis::} but set up to deal with signed-measures and to produce covers. The covers make iterative applications of the lemma easier, essentially because the meet of a cover of size $K$ and a cover of size $L$ is a cover of size at most $KL$. We shall see this benefit explicitly in the proof of Lemma \ref{lem.int}.
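For concreteness, the meet referred to here is the common refinement: given covers $\mathcal{P}$ and $\mathcal{Q}$ of $S$ we take
\begin{equation*}
\mathcal{P} \wedge \mathcal{Q} := \left\{P \cap Q : P \in \mathcal{P}, Q \in \mathcal{Q}, P \cap Q \neq \emptyset\right\},
\end{equation*}
which is again a cover of $S$ and has size at most $|\mathcal{P}||\mathcal{Q}|$.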
\begin{corollary}\label{cor.keyt}
Suppose that $X,A,S \subset G$ are non-empty with $D|A|\geq |AS|$; $\nu$ is a complex-valued measure, absolutely continuous w.r.t. $m_A$, with
\begin{equation*}
\left\|\frac{d\nu}{dm_A}\right\|_{L_2(m_A)}^2 \leq E\left\|\frac{d\nu}{dm_A}\right\|_{L_1(m_A)};
\end{equation*}
and $f \in L_p(m_X)$ for some $p \in [2,\infty)$; and $\epsilon \in (0,1]$ is a parameter. Then there is a cover $\mathcal{P}$ of $S$ having size at most $\exp(O(\epsilon^{-2}p \log 2DE\epsilon^{-1}))$ such that
\begin{equation*}
\|\rho_{s^{-1}}(f\ast \nu)- \rho_{t^{-1}}(f \ast \nu) \|_{L_p(m_{XAS})} \leq \epsilon \|\nu\| \|f\|_{L_p(m_{XAS})} \text{ for all }s,t \in P \in \mathcal{P}.
\end{equation*}
\end{corollary}
\begin{proof}
Put $g(x,y):=\rho_{x^{-1}}(f)(y)$ for $|\nu|\times m_{XA}$-a.e. $(x,y) \in G^2$ and note that
\begin{equation*}
\|g\|_{L_p(|\nu| \times m_{XA})}^p = \int{\int{|\rho_{x^{-1}}(f)(y)|^pdm_{XA}(y)}d|\nu|(x)}= \|\nu\|\|f \|_{L_p(m_{XA})}^p
\end{equation*}
since $\int{g_xd\nu(x)} = f \ast \nu$. Apply Lemma \ref{lem.cas} with $m_{XA}$ on $G$, the complex-valued measure $\nu$ on $G$, the function $g$, and parameter $\frac{1}{6}\epsilon$ to get a function $h:G \rightarrow \C$ with $|h(x)| = \|\nu\|$ for all $x \in G$ such that the set
\begin{equation*}
\mathcal{L}:=\left\{x \in G^r: \left\| f \ast \nu - \frac{1}{r}\left(\sum_{i=1}^r{h(x_i)\rho_{x_i^{-1}}(f)}\right)\right\|_{L_p(m_{XA})} \leq \frac{1}{6}\epsilon \|\nu\| \|f\|_{L_p(m_{XA})}\right\}
\end{equation*}
has $|\nu|^r(\mathcal{L}) \geq \frac{1}{2}\|\nu\|^r$. By the Cauchy-Schwarz inequality (recalling our rescaling) we have
\begin{align*}
m_A^r(\mathcal{L})& = \left\|\frac{d\nu}{dm_A}\right\|_{L_2(m_A)}^{-2r}m_A^r(\mathcal{L})\left(\int{\prod_{i=1}^r{\left|\frac{d\nu}{dm_A}(a_i)\right|^{2}}dm_A^r(a)}\right)\\ & \geq \left\|\frac{d\nu}{dm_A}\right\|_{L_2(m_A)}^{-2r}\left|\int{1_\mathcal{L}(a)\prod_{i=1}^r{\left|\frac{d\nu}{dm_A}(a_i)\right|}dm_A^r(a)}\right|^2 =\left\|\frac{d\nu}{dm_A}\right\|_{L_2(m_A)}^{-2r} |\nu|^r(\mathcal{L}) \geq \frac{1}{2}E^{-2r}.
\end{align*}
We may certainly assume that $\mathcal{L} \subset \supp |\nu|^r \subset A^r$ and so it follows (since $\supp f \ast \nu \subset XA$) that for all $s \in S$ and $x \in \mathcal{L}$ we have
\begin{equation*}
\left\| \rho_{s^{-1}}( f \ast \nu) - \frac{1}{r}\left(\sum_{i=1}^r{h(x_i)\rho_{(x_is)^{-1}}(f)}\right)\right\|_{L_p(m_{XAS})} \leq \frac{1}{6}\epsilon \|\nu\| \|f\|_{L_p(m_{XAS})}.
\end{equation*}
Let $\mathcal{Q}$ be a partition of $(S^1)^r$ of size $\epsilon^{-O(r)}$ such that $\|z-z'\|_{\ell_\infty^r} \leq \frac{1}{6}\epsilon$ for all $z,z' \in Q \in \mathcal{Q}$. Pulling back this partition along the map $\mathcal{L} \rightarrow (S^1)^r; x \mapsto (\|\nu\|^{-1}h(x_1),\dots,\|\nu\|^{-1}h(x_r))$ it follows that there is some $\mathcal{M} \subset \mathcal{L}$ with
\begin{equation*}
m_A^r(\mathcal{M}) \geq \frac{1}{|\mathcal{Q}|} m_A^r(\mathcal{L}) \text{ s.t. } |h(x_i)-h(x_i')| \leq \frac{1}{6}\epsilon \|\nu\|\text{ for all }x,x' \in \mathcal{M}, 1 \leq i \leq r.
\end{equation*}
In particular, if $s,t \in S$ and $x,y\in \mathcal{M}$ are such that $x_is=y_it$ for all $1\leq i \leq r$ then
\begin{align*}
& \left\| \frac{1}{r}\left(\sum_{i=1}^r{h(x_i)\rho_{(x_is)^{-1}}(f)}\right) - \frac{1}{r}\left(\sum_{i=1}^r{h(y_i)\rho_{(y_it)^{-1}}(f)}\right)\right\|_{L_p(m_{XAS})}\\ & \qquad \qquad \qquad \leq \frac{1}{6}\epsilon \|\nu\|\sup{\left\{\left\|\rho_{(as')^{-1}}(f)\right\|_{L_p(m_{XAS})} : s' \in S, a \in A\right\}} \leq \frac{1}{6}\epsilon \|\nu\|\|f\|_{L_p(m_{XAS})}
\end{align*}
by the triangle inequality. Write $\Delta:G \rightarrow G^r; t \mapsto (t,\dots,t)$. Then
\begin{equation*}
|\mathcal{M}\Delta(S)| \leq |\mathcal{M}(S\times \cdots \times S)| \leq |AS|^r \leq D^r|A|^r \leq 2D^rE^{2r}\epsilon^{-O(r)}|\mathcal{M}|.
\end{equation*}
By Ruzsa's covering lemma (Lemma \ref{lem.rcl}) we see that there is a set $T\subset S$ of size at most $2D^rE^{2r}\epsilon^{-O(r)}$ such that $\{
\mathcal{M}^{-1}\mathcal{M}\Delta(t) : t \in T\}$ covers $\Delta(S)$; let
\begin{equation*}
\mathcal{P}:=\{\{s: \Delta(s) \in \mathcal{M}^{-1}\mathcal{M}\Delta(t)\}: t \in T\}
\end{equation*}
so that $\mathcal{P}$ is a cover of $S$ of the claimed size. Finally, if $s_0,s_1 \in P \in \mathcal{P}$ then there is some $t \in T$ and elements $x^{(0)},x^{(1)},y^{(0)},y^{(1)} \in \mathcal{M}$ such that
\begin{equation*}
x^{(0)}_is_0 = y^{(0)}_it \text{ and } x^{(1)}_is_1 = y^{(1)}_it \text{ for all }1 \leq i \leq r.
\end{equation*}
For $j \in \{0,1\}$ we then have
\begin{align*}
\left\|\rho_{s_j^{-1}}( f \ast \nu) - \rho_{t^{-1}}( f \ast \nu)\right\|_{L_p(m_{XAS})} \leq &\left\|\rho_{s_j^{-1}}( f \ast \nu) - \frac{1}{r}\left(\sum_{i=1}^r{h(x_i^{(j)})\rho_{(x_i^{(j)}s_j)^{-1}}(f)}\right)\right\|_{L_p(m_{XAS})}\\
&+ \left\| \frac{1}{r}\left(\sum_{i=1}^r{h(x_i^{(j)})\rho_{(x_i^{(j)}s_j)^{-1}}(f)}\right)\right.\\
&\qquad\qquad\left.- \frac{1}{r}\left(\sum_{i=1}^r{h(y_i^{(j)})\rho_{(y_i^{(j)}t)^{-1}}(f)}\right)\right\|_{L_p(m_{XAS})}\\
& + \left\| \frac{1}{r}\left(\sum_{i=1}^r{h(y_i^{(j)})\rho_{(y_i^{(j)}t)^{-1}}(f)}\right)- \rho_{t^{-1}}( f \ast \nu)\right\|_{L_p(m_{XAS})}.
\end{align*}
It follows that
\begin{equation*}
\left\|\rho_{s_j^{-1}}( f \ast \nu) - \rho_{t^{-1}}( f \ast \nu)\right\|_{L_p(m_{XAS})} \leq \frac{1}{2}\epsilon \|\nu\|\|f\|_{L_p(m_{XAS})},
\end{equation*}
and we get the conclusion by the triangle inequality again.
\end{proof}
We do \emph{not} actually need the $p$-dependence in the above result. Although Proposition \ref{prop.inv} does use good $L_p$-control of the norm it achieves this by pigeon-holing the $A(G)$-mass on the spectral side since $L_p$ is dominated by $L_\infty$ which, in turn, is dominated by the $A(G)$ norm. Nevertheless the proof for general $p$ is not significantly more involved and may be of use elsewhere.
\section{Invariant sets}\label{sec.is}
In this section we shall prove the following proposition. The argument is not dissimilar to the basic scheme in \cite{grekon::} which, itself, is a sort of regularity argument.
\begin{proposition*}[Proposition \ref{prop.inv}]
Suppose that $A \in \mathcal{N}(G)$ has $|A^2| \leq K|A|$; $f \in A(G)$ has $\|f\|_{A(G)} \leq M$; and $\epsilon,\eta \in (0,1]$ and $p \geq 2$ are parameters. Then there are sets $X,X^+,X^-,B \in \mathcal{N}(G)$ such that $(X,B;X^+,X^-)$ is an $\eta$-closed pair with $(X^+)^4 \subset A^4$,
\begin{equation*}
|B| \geq \exp(-(\eta^{-1}MK)^{p\exp(O(\epsilon^{-2}))})|A| \text{ and } \sup_t{\|f-f \ast m_X\|_{L_p(m_{tB})}} \leq \epsilon M.
\end{equation*}
\end{proposition*}
The proof is iterative and based around the following lemma.
\begin{lemma}\label{lem.int}
Suppose that $f \in A(G)$ has $\|f\|_{A(G)} = M$; $X,B,T \subset G$ with $B \in \mathcal{N}(G)$, $K|X| \geq |XB|$, and $(B,T)$ $\eta$-closed; and $\epsilon \in (0,1]$ is a parameter. Then there is a set $W \subset T$ such that
\begin{equation*}
|W| \geq \exp(-(\epsilon^{-1} MK)^{O(1)})|T|
\end{equation*}
and
\begin{equation*}
\|\rho_{u^{-1}}(f)-\rho_{t^{-1}}(f) \|_{L_2(m_{B^-})} \leq \epsilon +\eta (MK)^{O(1)} \text{ for all }u,t\in W.
\end{equation*}
\end{lemma}
\begin{proof}
Since $f \in A(G)$, Lemma \ref{lem.various} tells us that there is a constant (which we may as well take to be $M$); a probability space $(\Omega,\P)$; and functions $h_\omega,g_\omega \in L_2(m_G)$ with
\begin{equation*}
\|h_\omega\|_{L_2(m_G)}\leq 1 \text{ and } \|g_{\omega}\|_{L_2(m_G)} \leq 1 \text{ for all } \omega \in \Omega,
\end{equation*}
such that
\begin{equation*}
f(x)=M\E_{\omega}{\wt{h_\omega} \ast g_\omega(x)} \text{ for all }x \in G.
\end{equation*}
For $z \in G$ we have
\begin{equation*}
\int{\frac{\wt{1_{sX}}(z)}{m_G(sX)}dm_G(s)}=\int{\frac{1_{z^{-1}X^{-1}}(s)}{m_G(X)}dm_G(s)}=1
\end{equation*}
and so if $x \in B$ then
\begin{align*}
f(x) & = M\E_{\omega}{\int{\wt{h_\omega}(xy^{-1})g_\omega(y)\frac{\wt{1_{sX}}(xy^{-1})}{m_G(sX)}dm_G(s)dm_G(y)}}\\
& = M\E_{\omega}{\int{(h_\omega dm_{sX})^\sim(xy^{-1})(g_\omega 1_{sXB})(y)dm_G(y)dm_G(s)}}.
\end{align*}
For each $y \in G$ we put
\begin{equation*}
h_{\omega,s}(y):=\frac{1}{\|h_\omega\|_{L_2(m_{sX})}}h_\omega(y)1_{sX}(y) \text{ and } g_{\omega,s}(y):=\frac{1}{\|g_\omega\|_{L_2(m_{sXB})}}g_\omega(y)1_{sXB}(y),
\end{equation*}
and with this notation we see that (for $x \in B$)
\begin{align*}
f(x) & = M\E_{\omega,s}{\|h_\omega\|_{L_2(m_{sX})}\|g_\omega\|_{L_2(m_{sXB})} (h_{\omega,s}dm_{sX})^\sim \ast g_{\omega,s}(x)}.
\end{align*}
By the Cauchy-Schwarz inequality we have (for each $\omega \in \Omega$)
\begin{align*}
\E_{s}{\|h_\omega\|_{L_2(m_{sX})}\|g_\omega\|_{L_2(m_{sXB})}} &\leq \left(\E_{s}{\|h_\omega\|_{L_2(m_{sX})}^2}\right)^{1/2}\left(\E_{s}{\|g_\omega\|_{L_2(m_{sXB})}^2}\right)^{1/2}\leq 1.
\end{align*}
It follows that
\begin{equation*}
\E_{\omega,s}{\|h_\omega\|_{L_2(m_{sX})}\|g_\omega\|_{L_2(m_{sXB})}} \leq 1,
\end{equation*}
and hence there is a probability measure $\P'$ on $\Omega \times G$ and a constant $M' \leq M$ such that
\begin{equation*}
f(x)=M'\E'{ (h_{\omega,s}dm_{sX})^\sim\ast g_{\omega,s}(x)} \text{ for all }x \in B.
\end{equation*}
Note that
\begin{align*}
\|(h_{\omega,s}dm_{sX})^\sim \ast g_{\omega,s}\|_{L_\infty(G)} \leq \sqrt{\frac{m_G(sXB)}{m_G(sX)}}\|h_{\omega,s}\|_{L_2(m_{sX})}\|g_{\omega,s}\|_{L_2(m_{sXB})} \leq \sqrt{K},
\end{align*}
and so
\begin{align*}
\left(\E'{\|M'(h_{\omega,s}dm_{sX})^\sim \ast g_{\omega,s}\|_{L_2(m_{B})}^2}\right)^{\frac{1}{2}} \leq M'\sqrt{K}.
\end{align*}
It follows by Lemma \ref{lem.cas} (applied to $((\omega,s),x) \mapsto M'(h_{\omega,s}dm_{sX})^\sim \ast g_{\omega,s}(x)$ with $p=2$, $\mu=m_{B}$, $\nu=\P'$ and with some parameter $\delta$ chosen later) that there is some $r=O(\delta^{-2})$, $\omega_1,\dots,\omega_r \in \Omega$ and $s_1,\dots,s_r \in G$ such that
\begin{equation*}
\left\| f - \frac{M'}{r}\left(\sum_{i=1}^r{(h_{\omega_i,s_i}dm_{s_iX})^\sim \ast g_{\omega_i,s_i} }\right)\right\|_{L_2(m_{B})} \leq \delta \sqrt{K}M'.
\end{equation*}
Write $\gamma_i:=(h_{\omega_i,s_i}dm_{s_iX})^\sim \ast g_{\omega_i,s_i}$ so that $\|\gamma_i\|_{L_\infty(G)} \leq \sqrt{K}$ and
\begin{equation}\label{eqn.inserty}
\left\| \rho_{t^{-1}}(f) - \frac{M'}{r}\left(\sum_{i=1}^r{\rho_{t^{-1}}(\gamma_i) }\right)\right\|_{L_2(m_{B^-})} \leq \delta \sqrt{K}M' \text{ for all }t \in T.
\end{equation}
For each $1\leq i\leq r$ apply Corollary \ref{cor.keyt} with sets $s_iX$, $B$ and $T$, and measure $\nu:=\gamma_idm_B$, which satisfies
\begin{equation*}
\left\|\frac{d\nu}{dm_B}\right\|_{L_2(m_B)}^2 = \|\gamma_i\|_{L_2(m_B)}^2 \leq \sqrt{K}\|\gamma_i\|_{L_1(m_B)}=\sqrt{K}\left\|\frac{d\nu}{dm_B}\right\|_{L_1(m_B)},
\end{equation*}
function $h_{\omega_i,s_i} \in L_2(m_{s_iX})$, and parameters $2$ and $\delta$ to get a partition $\mathcal{P}_{i}$ of $T$ of size $\exp(O(\delta^{-2}\log 2K\delta^{-1}))$ such that for all $t,u \in P \in \mathcal{P}_{i}$ we have
\begin{align*}
&\|\rho_{t^{-1}}(h_{\omega_i,s_i}\ast (\gamma_idm_{B})) - \rho_{u^{-1}}(h_{\omega_i,s_i}\ast ( \gamma_idm_{B}))\|_{L_2(m_{s_iXBT})}\\
&\qquad \qquad \qquad \qquad \qquad \leq \delta\|\gamma_i\|_{L_1(m_{B})}\| h_{\omega_i,s_i}\|_{L_2(m_{s_iXBT})} \leq \delta \sqrt{K}\| h_{\omega_i,s_i}\|_{L_2(m_{s_iXBT})}.
\end{align*}
Let $\mathcal{P}:=\bigwedge_{i=1}^r{\mathcal{P}_{i}}$, which is a partition of $T$ of size at most $\exp(O(r\delta^{-2}\log 2K\delta^{-1}))$, and let $W$ be its largest cell, so that we have the size bound for $W$ we require (once we have chosen $\delta$). If $t,u \in W$ and $x \in T$ then
\begin{align*}
& |\langle \rho_{t^{-1}}(h_{\omega_i,s_i}\ast (\gamma_idm_{B}))- \rho_{u^{-1}}(h_{\omega_i,s_i}\ast ( \gamma_idm_{B})), \rho_{x^{-1}}(g_{\omega_i,s_i})\rangle_{L_2(m_{s_iXBT})}|\\ &
\qquad \qquad \qquad \qquad \qquad \leq \delta\sqrt{K}\|\rho_{x^{-1}}(g_{\omega_i,s_i})\|_{L_2(m_{s_iXBT})}\| h_{\omega_i,s_i}\|_{L_2(m_{s_iXBT})}\\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad = \delta \sqrt{K}\|g_{\omega_i,s_i}\|_{L_2(m_{s_iXBT})}\| h_{\omega_i,s_i}\|_{L_2(m_{s_iXBT})} .
\end{align*}
On the other hand
\begin{align*}
&\langle \rho_{t^{-1}}(h_{\omega_i,s_i}\ast (\gamma_idm_{B}))- \rho_{u^{-1}}(h_{\omega_i,s_i}\ast ( \gamma_idm_{B})), \rho_{x^{-1}}(g_{\omega_i,s_i})\rangle_{L_2(m_{s_iXBT})}\\ &
\qquad \qquad =\frac{|X|}{|XBT|}\langle (h_{\omega_i,s_i}dm_{s_iX})\ast (\rho_{t^{-1}}(\gamma_idm_{B})- \rho_{u^{-1}}( \gamma_idm_{B})), \rho_{x^{-1}}(g_{\omega_i,s_i})\rangle\\
&
\qquad \qquad \qquad \qquad =\frac{|X|}{|XBT|}\langle \rho_{t^{-1}}(\gamma_idm_{B})- \rho_{u^{-1}}( \gamma_idm_{B}), \rho_{x^{-1}}((h_{\omega_i,s_i}dm_{s_iX})^\sim\ast g_{\omega_i,s_i})\rangle\\
& \qquad \qquad \qquad \qquad \qquad \qquad =\frac{|X|}{|XBT|}\frac{|BT|}{|B|}\langle \rho_{t^{-1}}(\gamma_i|_B)- \rho_{u^{-1}}( \gamma_i|_B), \rho_{x^{-1}}(\gamma_i)\rangle_{L_2(m_{BT})}.
\end{align*}
Combining the above we get
\begin{equation*}
|\langle \rho_{t^{-1}}(\gamma_i|_B) - \rho_{u^{-1}}(\gamma_i|_B),\rho_{x^{-1}}(\gamma_i)\rangle_{L_2(m_{BT})}| \leq \delta.
\end{equation*}
Since $(B,T)$ is $\eta$-closed we have
\begin{equation*}
\frac{|BT|}{|B|}\cdot \|\rho_{x^{-1}}(\gamma_i)-\rho_{x^{-1}}(\gamma_i|_B)\|_{L_1(m_{BT})} \leq 2\|\gamma_i\|_{L_\infty(G)}\frac{|B^+ \setminus B^-|}{|B|} = O(\eta \sqrt{K})
\end{equation*}
and it follows that
\begin{equation*}
|\langle \rho_{t^{-1}}(\gamma_i|_B) - \rho_{u^{-1}}(\gamma_i|_B),\rho_{x^{-1}}(\gamma_i|_B)\rangle_{L_2(m_{BT})}| \leq \delta +O(\eta \sqrt{K}).
\end{equation*}
Since $x \in T$ was arbitrary we conclude from the cases $x=u$ and $x=t$ that
\begin{align*}
\frac{|B^-|}{|B^+|}\left\|\rho_{t^{-1}}(\gamma_i) - \rho_{u^{-1}}(\gamma_i)\right\|_{L_2(m_{B^-})}& \leq \left\|\rho_{t^{-1}}(\gamma_i|_B) - \rho_{u^{-1}}(\gamma_i|_B)\right\|_{L_2(m_{BT})} \leq 2\delta+O(\eta \sqrt{K})
\end{align*}
for all $t,u \in W$. By the triangle inequality we then have
\begin{equation*}
\left\|\frac{M'}{r}\left(\sum_{i=1}^r{\rho_{t^{-1}}(\gamma_i) }\right)-\frac{M'}{r}\left(\sum_{i=1}^r{\rho_{u^{-1}}(\gamma_i) }\right)\right\|_{L_2(m_{B^-})} =O(M\delta + \eta M\sqrt{K}),
\end{equation*}
for all $t,u \in W$, and combining this with (\ref{eqn.inserty}) taking $\delta = \Omega(\epsilon M^{-1})$ gives the result.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop.inv}]
Let $H$ be a Hilbert space, $\pi:G \rightarrow \Aut(H)$ a homomorphism and $v,w\in H$ have $\|w\|=1$ and $\|v\| \leq M$ such that $f(x)=\langle\pi(x)v,w\rangle$ for all $x \in G$. (This is possible by Lemma \ref{lem.various}.)
We begin by fixing some auxiliary parameters. Let $\nu$ and $n$ be parameters (we will have $n=O(\epsilon^{-2})$ and $\nu=\Omega(\epsilon^{2})$) that will be optimised by later choices. Let $\delta = \epsilon^{O(p)} M^{-O(1)}$ be such that if $\eta \leq \delta$ in Lemma \ref{lem.int} then the $\eta (MK)^{O(1)}$ error term is at most $\frac{1}{4^p}\epsilon^{p/2}$ if $K\leq 2$.
We construct $X_i,Z_{i+1},B_i,T_i \in \mathcal{N}(G)$ for $0 \leq i \leq n-1$ such that
\begin{equation*}
X_i \subset A_0:=\left\{x:m_A \ast 1_{A^2}\ast m_A(x)>\frac{1}{2}\right\};
\end{equation*}
$(X_i,Z_{i+1})$ and $(Z_{i+1},X_{i+1})$ are $\nu$-closed; $(B_i,T_i^4)$ is $\delta$-closed; $(X_i,B_i^4)$ is $\min\{\eta,\nu\}$-closed; and $|A_0| \leq K_i|T_i|$. (We shall calculate how $K_i$ evolves later as it is a little more complicated.) Finally, we shall ensure
\begin{equation}\label{eqn.u}
\left\| \left(\wh{m_{Z_{i+1}}}(\pi)-\wh{m_{X_{i}}}(\pi)\right)v\right\|>\frac{1}{2}\epsilon M.
\end{equation}
Apply Lemma \ref{lem.fr} to $A$ with parameter $\min\{\eta,\nu\}$ to get sets $X_0,Y_0 \in \mathcal{N}(G)$ such that $X_0 \subset A_0$; $(X_0,Y_0^4)$ is $\min\{\eta,\nu\}$-closed; and
\begin{equation*}
|Y_0| \geq \exp(-O(\eta^{-2}\nu^{-2}\log^{O(1)}K))|A|.
\end{equation*}
Now $|A_0| \leq 2K|A|$, and since $Y_0^4 \subset X_0 \subset A_0$ we conclude that
\begin{equation*}
|Y_0Y_0^{-1}| = |Y_0^2| \leq |A_0| \leq \exp(O(\eta^{-2}\nu^{-2}\log^{O(1)}K))|Y_0|.
\end{equation*}
Apply Lemma \ref{lem.fr} again to $Y_0$ with parameter $\delta$ to get $B_0,T_0\in \mathcal{N}(G)$ such that $B_0^4 \subset Y_0^4$ (so $(X_0,B_0^4)$ is $\min\{\eta,\nu\}$-closed); $(B_0,T_0^4)$ is $\delta$-closed; and
\begin{equation*}
|T_0| \geq \exp(-(\delta^{-1}\eta^{-1}\nu^{-1}\log K)^{O(1)})|Y_0| \geq \exp(-(\delta^{-1}\eta^{-1}\nu^{-1}\log K)^{O(1)})|A_0|,
\end{equation*}
which gives our bound on $K_0$.
Suppose we are at step $i \leq n-1$ (so that $X_i$, $B_i$ and $T_i$, but \emph{not} $Z_{i+1}$, have been defined) and there is some $t \in G$ such that
\begin{equation}\label{eqn.boost}
\|f -f\ast m_{X_i}\|_{L_p(m_{tB_i^-})} > \epsilon M.
\end{equation}
Since $f(tb) = \langle \pi(t)\pi(b)v,w\rangle = \langle \pi(b)v,\pi(t)^*w\rangle$ we can replace $w$ by $\pi(t)^*w$ and assume $t=1_G$. Apply Lemma \ref{lem.int} with the set $X_i$ and the $\delta$-closed pair $(B_i,T_i;B_i^-T_i^3,B_iT_i)$ to get a set $W_{i} \subset T_i$ with
\begin{equation*}
|W_i| \geq \exp(-(2^p\epsilon^{-p}M)^{O(1)})|T_i|
\end{equation*}
such that
\begin{equation*}
\|\rho_{u^{-1}}(f) - \rho_{t^{-1}}(f)\|_{L_2(m_{B_i^-T_i^3})} \leq \frac{2}{4^p}\epsilon^{p/2} \text{ for all }u,t \in W_i,
\end{equation*}
from the choice of $\delta$. It follows that for all $r,s,t,u \in W_i$ we have
\begin{align}
\nonumber \|f - \rho_{rs^{-1}ut^{-1}}(f)\|_{L_2(m_{B_i^{-}})}& \leq \|f - \rho_{rs^{-1}}(f)\|_{L_2\left(m_{B_i^{-}}\right)}+\|\rho_{rs^{-1}}(f) - \rho_{rs^{-1}ut^{-1}}(f)\|_{L_2\left(m_{B_i^{-}}\right)}\\
\nonumber & \leq \sqrt{\frac{|B_i^-T_i|}{|B_i^-|}}\|\rho_{r^{-1}}(f) - \rho_{s^{-1}}(f)\|_{L_2\left(m_{B_i^-T_i}\right)}\\
\nonumber &\qquad \qquad \qquad + \sqrt{\frac{|B_i^-T_i^2|}{|B_i^-|}}\|f - \rho_{ut^{-1}}(f)\|_{L_2\left(m_{B_i^-T_i^2}\right)}\\
\nonumber & \leq \sqrt{\frac{|B_i^-T_i^3|}{|B_i^-|}}\|\rho_{r^{-1}}(f) - \rho_{s^{-1}}(f)\|_{L_2\left(m_{B_i^-T_i^3}\right)}\\
\nonumber & \qquad \qquad \qquad + \sqrt{\frac{|B_i^-T_i^3|}{|B_{i}^-T_i^2|}}\|\rho_{u^{-1}}(f) - \rho_{t^{-1}}(f)\|_{L_2\left(m_{B_i^-T_i^3}\right)}\\
\label{eqn.upit}& \leq 2^{3-2p}\epsilon^{p/2}.
\end{align}
Since $W_i^{-1} \subset T_i^{-1}=T_i$ and $T_i^2\subset B_i \subset X_i \subset A_0$ we see that
\begin{equation*}
|W_i^{-1}W_i| \leq |A_0| \leq K_i|T_i| \leq \exp((2^p\epsilon^{-p}M)^{O(1)})K_i|W_i|.
\end{equation*}
Apply Lemma \ref{lem.fr} to $W_i^{-1}$ with parameter $\nu$ to get $Z_{i+1},Y_i\in \mathcal{N}(G)$ such that $Z_{i+1}^4 \subset W_iW_i^{-1}W_iW_i^{-1}\subset T_i^4$; $(Z_{i+1},Y_{i}^4)$ is $\nu$-closed; and
\begin{equation*}
|Y_i| \geq \exp(-(\nu^{-1}2^p\epsilon^{-p}M\log K_i)^{O(1)})|W_i|.
\end{equation*}
In particular, since $Y_i^2\subset Z_{i+1}^2\subset T_i^4 \subset B_i\subset X_i \subset A_0$ we have
\begin{equation}\label{eqn.ydoub}
|Y_i^2| \leq |A_0| \leq K_i|T_i| \leq \exp((\nu^{-1}2^p\epsilon^{-p}M\log K_i)^{O(1)})|Y_i|.
\end{equation}
It follows from the triangle inequality applied to (\ref{eqn.upit}) that
\begin{equation*}
\|f - f\ast m_{Z_{i+1}}\|_{L_p(m_{B_i^{-}})}^p \leq (2M)^{p-2}\|f - f\ast m_{Z_{i+1}}\|_{L_2(m_{B_i^{-}})}^2 \leq \left(\frac{1}{2}\epsilon M\right)^p.
\end{equation*}
By the triangle inequality again and (\ref{eqn.boost}) (recalling that we have argued we may take $t=1_G$) we have
\begin{equation*}
\| f\ast m_{Z_{i+1}}-f \ast m_{X_i}\|_{A(G)}\geq \| f\ast m_{Z_{i+1}}-f \ast m_{X_i}\|_{L_p(m_{B_i^-})} >\frac{1}{2}\epsilon M,
\end{equation*}
and it follows from this that (\ref{eqn.u}) holds.
Apply Lemma \ref{lem.fr} to $Y_i$ (using the bound in (\ref{eqn.ydoub})) with parameter $\min\{\nu,\eta\}$ to get $X_{i+1},U_{i+1} \in \mathcal{N}(G)$, such that $(X_{i+1},U_{i+1}^4)$ is $\min\{\nu,\eta\}$-closed; $(X_{i+1}^+)^4 \subset Y_{i}^4$; and
\begin{equation*}
|U_{i+1}| \geq \exp(-(\eta^{-1}\nu^{-1}2^p\epsilon^{-p}M\log K_i)^{O(1)})|Y_i|.
\end{equation*}
Moreover, $X_{i+1}\subset Y_i^4 \subset Z_{i+1}\subset T_i^4\subset B_i\subset X_i \subset A_0$ and $U_{i+1}^2 \subset X_{i+1}$ so
\begin{equation*}
|U_{i+1}^2| \leq |A_0| \leq \exp((\eta^{-1}\nu^{-1}2^p\epsilon^{-p}M\log K_i)^{O(1)})|U_{i+1}|.
\end{equation*}
Finally, apply Lemma \ref{lem.fr} to $U_{i+1}$ with parameter $\delta$ to get $B_{i+1},T_{i+1}\in \mathcal{N}(G)$ such that $(B_{i+1},T_{i+1}^4)$ is $\delta$-closed; $(B_{i+1}^+)^4 \subset U_{i+1}^4$ and
\begin{equation*}
|T_{i+1}| \geq \exp(-(\delta^{-1}\eta^{-1}\nu^{-1}2^p\epsilon^{-p}M\log K_i)^{O(1)})|U_{i+1}|.
\end{equation*}
Since $B_{i+1}^4 \subset U_{i+1}^4$ and $(X_{i+1},U_{i+1}^4)$ is $\min\{\nu,\eta\}$-closed we conclude that $(X_{i+1},B_{i+1}^4)$ is $\min\{\nu,\eta\}$-closed. Since $Z_{i+1}^4 \subset T_i^4\subset B_i^4$ we conclude that $(X_i,Z_{i+1})$ is $\nu$-closed, and since $X_{i+1}\subset Y_i^4$ and $(Z_{i+1},Y_i^4)$ is $\nu$-closed we conclude that $(Z_{i+1},X_{i+1})$ is $\nu$-closed as required.
We repeat this provided $i \leq n-1$ and (\ref{eqn.boost}) holds for some $t\in G$, and we shall see for a suitable choice of $n$ that reaching step $n$ leads to a contradiction.
Since $(X_i,Z_{i+1})$ and $(Z_{i+1},X_{i+1})$ are $\nu$-closed for $0 \leq i \leq n-1$, if $j>i$ then $Z_j \subset X_j \subset X_{i+1}$ and so
\begin{equation*}
\|m_{Z_{i+1}} \ast m_{X_j} - m_{Z_{i+1}}\|,\|m_{Z_{i+1}} \ast m_{Z_{j+1}} - m_{Z_{i+1}}\| \leq \nu,
\end{equation*}
and
\begin{equation*}
\|m_{X_i} \ast m_{Z_{j+1}} - m_{X_i}\|,\|m_{X_i} \ast m_{X_j} - m_{X_i}\| \leq \nu.
\end{equation*}
Dealing with $i>j$ similarly we have
\begin{equation*}
\|\left(\wh{m_{Z_{i+1}}}(\pi)-\wh{m_{X_{i}}}(\pi)\right)\left(\wh{m_{Z_{j+1}}}(\pi)-\wh{m_{X_{j}}}(\pi)\right)\| \leq \begin{cases} 4 & \text{ if }i=j\\ 4\nu & \text{ if } i \neq j\end{cases}.
\end{equation*}
It follows from the (finite version of the) Cotlar-Stein lemma that for any signs $\sigma_1,\dots,\sigma_n \in \{-1,1\}$ we have
\begin{equation*}
\left\|\sum_{i=0}^{n-1}{\sigma_i\left(\wh{m_{Z_{i+1}}}(\pi)-\wh{m_{X_{i}}}(\pi)\right)}\right\| \leq O(1+\nu n).
\end{equation*}
We conclude (from (\ref{eqn.u})) that
\begin{align*}
n\left(\frac{1}{2}\epsilon M\right)^2 &\leq \sum_{i=0}^{n-1}{\left\|\left(\wh{m_{Z_{i+1}}}(\pi)-\wh{m_{X_{i}}}(\pi)\right)v\right\|^2}\\
& = \E_\sigma{\left\|\sum_{i=0}^{n-1}{\sigma_i\left(\wh{m_{Z_{i+1}}}(\pi)-\wh{m_{X_{i}}}(\pi)\right)v}\right\|^2} \leq O(1+\nu n)^2\|v\|^2,
\end{align*}
where the $\sigma_0,\dots,\sigma_{n-1} \in \{-1,1\}$ are chosen independently and uniformly. It follows that there is some $\nu = \Omega(\epsilon^2)$ and $n=O(\epsilon^{-2})$ leading to a contradiction as claimed.
It remains to note
\begin{equation*}
\log K_{i+1} \leq (\eta^{-1} \delta^{-1}\nu^{-1}2^p\epsilon^{-p}M\log K_i)^{O(1)}.
\end{equation*}
Since the iteration proceeds at most $i_0=O(\epsilon^{-2})$ times this gives us a bound on $|B_{i_0}|$ when the iteration terminates (since $|B_{i_0}| \geq |T_{i_0}|$). We set $X:=X_{i_0}$ and $B:=B_{i_0}$ which have the required properties.
\end{proof}
\section{Arithmetic connectivity}\label{sec.ac}
If $f$ is Boolean and has $\|f\|_{A(G)} \leq M$ then $\log$-convexity of $L_p$-norms can be used to show that $E(f) \geq M^{-2}|\supp f|^3$. However, when $f$ is only almost integer-valued then most of the mass of $E(f)$ may be supported outside $\supp f_\Z$. To deal with this in the abelian setting Green introduced the concept of arithmetic connectivity in \cite[Definition 5.2]{gresan::}. In this section we develop this in the setting of general groups and prove the following proposition.
\begin{proposition*}[Proposition \ref{prop.infstruct}]
There is an absolute $C>0$ such that if $f$ is $\epsilon$-almost integer-valued with $\|f\|_{A(G)} \leq M$ and $\epsilon \leq \exp(-CM)$, then there is some $S \subset \supp f_\Z$ such that $|SS^{-1}| \leq M^{O(M)}|S|$ and $|S| \geq M^{-O(M)}|\supp f_\Z|$.
\end{proposition*}
Given $k,t \in \N$ we are interested in the vectors in $[k]^{t}=\{1,\dots,k\}^t$. We say that $i \in [k]^{t}$ is \textbf{trivial} if
\begin{equation*}
|\{j \in [k]: |\{s \in [t]: i_s=j\}|=1\}| \leq 1;
\end{equation*}
we write $T_{k,t}$ for the set of trivial elements of $[k]^{t}$, and $N_{k,t}$ for the set of non-trivial elements, that is $N_{k,t}:=[k]^{t}\setminus T_{k,t}$. In words, $i \in [k]^t$ is trivial if at most one of its values occurs exactly once among its coordinates.
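For example, with $k=3$ and $t=5$ the vector $(1,1,2,2,3)$ is trivial (only the value $3$ occurs exactly once), while $(1,2,3,3,3)$ is non-trivial (both $1$ and $2$ occur exactly once).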
Given a group $G$ we say that $A \subset G$ is \textbf{$(k,l)$-arithmetically connected} if for every $x \in A^k$ there is some $1 \leq r \leq l$, some $i \in N_{k,2r+1}$, and some $\sigma \in \{-1,1\}^{2r+1}$ such that
\begin{equation*}
x_{i_1}^{\sigma_1}\cdots x_{i_{2r+1}}^{\sigma_{2r+1}} \in A.
\end{equation*}
It is useful to know roughly how many trivial elements there are, and we capture this in the following lemma.
\begin{lemma}\label{lem.ct}
Given $k,r \in \N$ we have
\begin{equation*}
|T_{k,2r+1}| \leq \exp(O(r))r^{r}k^{r+1}.
\end{equation*}
\end{lemma}
\begin{proof}
Writing
\begin{equation*}
R_{k,r}:=\{i:[2r] \rightarrow [k] \text{ s.t. } |i^{-1}(\{j\})|\neq 1 \text{ for all }j \in [k]\},
\end{equation*}
we see that there is a natural surjection
\begin{equation*}
[2r+1] \times [k] \times R_{k,r} \rightarrow T_{k,2r+1}
\end{equation*}
so that $|T_{k,2r+1}| \leq (2r+1)k|R_{k,r}|$. The result will follow from an upper bound on this last quantity.
Let $X_1,\dots,X_k$ be independent random variables with $\E{X_j}=0$ and $\E{X_j^n}=1$ for all $n \in \{2,3,\dots\}$ and $j \in \{1,\dots,k\}$. Then for any parameter $\eta\in (0,1]$ (to be optimised later) we have
\begin{align*}
|R_{k,r}| = \E{\left(\sum_{j=1}^k{X_j}\right)^{2r}} &\leq (2r)!\eta^{-2r}\E{\exp\left(\sum_{j=1}^k{\eta X_j}\right)}\\ & = (2r)!\eta^{-2r}\left(\E{\exp\left(\eta X_1\right)}\right)^k = (2r)!\eta^{-2r}\exp(O(\eta^2k)).
\end{align*}
Optimising with $\eta^2k =r$ we end up with $|R_{k,r}| \leq \exp(O(r))r^rk^r$ from which the result follows.
\end{proof}
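The count bounded in this proof can be checked by brute force for small parameters. The sketch below (our own illustration, not part of the argument; `count_R` is a hypothetical helper name) enumerates the maps $i:[2r]\to[k]$ in which no value is attained exactly once, which is the reading under which the moment computation above proceeds:

```python
from collections import Counter
from itertools import product

def count_R(k: int, r: int) -> int:
    """Count maps i: [2r] -> [k] in which no value is attained exactly once."""
    total = 0
    for i in product(range(k), repeat=2 * r):
        counts = Counter(i)
        if all(c != 1 for c in counts.values()):
            total += 1
    return total

# For k=2, r=2 the admissible value distributions over the four slots are
# (4,0), (0,4) and (2,2), giving 1 + 1 + C(4,2) = 8 maps.
print(count_R(2, 2))
```

For $k=2$, $r=2$ this gives $8$ maps, comfortably inside the bound $\exp(O(r))r^rk^r$.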
The proof of the next result is inspired by \cite[Lemma 1]{mel::} of M{\'e}la and uses Chebychev polynomials, the basic properties of which may be found in \cite[\S6.10.6]{zwikraros::}.
Keeping the second parameter small is what matters most in what follows, but if the first parameter is a concern then it may be useful to know that the auxiliary measures in \cite[Lemme 4]{mel::} can be used to show that $\supp f_\Z$ is $(O(M^2\log M),O(M\log M))$-arithmetically connected.
\begin{lemma}\label{lem.st}
There is an absolute $C>0$ such that if $f$ is $\epsilon$-almost integer-valued with $\|f\|_{A(G)} \leq M$ and $\epsilon \leq \exp(-CM)$, then $\supp f_\Z$ is $(O(M^3),O(M))$-arithmetically connected.
\end{lemma}
\begin{proof}
Since $f \in A(G)$ there is a Hilbert space $H$ with elements $v,w \in H$ and a homomorphism $\pi:G \rightarrow \Aut(H)$ such that $f(t)=\langle \pi(t)v,w\rangle$ for all $t \in G$, and $\|v\|\|w\|\leq M$.
Let $k$ and $l$ be natural numbers to be optimised later and suppose that $A:=\supp f_\Z$ is \emph{not} $(k,l)$-arithmetically connected. It follows that there is some $x \in A^k$ such that for every $1 \leq r \leq l$, $i \in N_{k,2r+1}$ and $\sigma \in \{-1,1\}^{2r+1}$ we have
\begin{equation*}
\left|f \left(x_{i_1}^{\sigma_1} \cdots x_{i_{2r+1}}^{\sigma_{2r+1}}\right)\right| \leq \epsilon.
\end{equation*}
We define an auxiliary vector $\omega \in (S^1)^k$ as follows. Since the function $f$ is real and since $x_j \in A$ we see that $|f(x_j)| \geq 1-\epsilon \geq \frac{1}{2}$ for all $j \in [k]$, so that $\sgn f(x_j) \neq 0$. For each $j \in [k]$, if $\sgn f(x_j^{-1})=0$ or $\sgn f(x_j)=\sgn f(x_j^{-1})$ then set $\omega_j=\sgn f(x_j)$; otherwise set $\omega_j=i\sgn f(x_j)$. It follows that
\begin{align*}
\left|\frac{1}{2k}\sum_{j=1}^k{(\omega_jf(x_j) + \omega_j^{-1}f(x_j^{-1}))}\right|& \geq \frac{1}{2k}\left(\left|\sum_{j:\omega_j \in \R}{|f(x_j)|}\right|^2+\left|\sum_{j:\omega_j \not\in \R}{|f(x_j)|}\right|^2 \right)^{\frac{1}{2}} \geq \frac{1}{8}.
\end{align*}
The operator
\begin{equation*}
R:=\frac{1}{2k}\sum_{j=1}^k{\left(\omega_j\pi(x_j) + \omega_j^{-1}\pi(x_j)^*\right)}
\end{equation*}
is hermitian and has eigenvalues in $[-1,1]$. Let $T_{2l+1}(X)=a_1X+a_3X^3+\cdots + a_{2l+1}X^{2l+1}$ be the Chebychev polynomial (of the first kind) of degree $2l+1$. Since $T_{2l+1}$ maps $[-1,1]$ to $[-1,1]$, the spectral radius of $T_{2l+1}(R)$ is at most $1$.
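As a concrete instance (for orientation; not in the original text), take $l=1$:

```latex
\begin{equation*}
T_3(X) = -3X + 4X^3, \qquad\text{so that}\qquad |a_1| = 3 = 2l+1 \quad\text{and}\quad |a_3| = 4.
\end{equation*}
```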
Now, for $1 \leq r \leq l$ we have
\begin{align*}
& \left\langle \left(\frac{1}{2k}\sum_{j=1}^k{\left(\omega_j\pi(x_j) + \omega_j^{-1}\pi(x_j)^*\right)}\right)^{2r+1}v,w\right\rangle\\
& \qquad \qquad= \frac{1}{(2k)^{2r+1}}\sum_{i \in N_{k,2r+1},\sigma \in \{-1,1\}^{2r+1}}{\omega_{i_1}^{\sigma_{1}}\cdots \omega_{i_{2r+1}}^{\sigma_{2r+1}}f \left(x_{i_1}^{\sigma_1} \cdots x_{i_{2r+1}}^{\sigma_{2r+1}}\right)}\\ &\qquad \qquad \qquad \qquad + \frac{1}{(2k)^{2r+1}}\sum_{i \in T_{k,2r+1},\sigma \in \{-1,1\}^{2r+1}}{\omega_{i_1}^{\sigma_{1}}\cdots \omega_{i_{2r+1}}^{\sigma_{2r+1}}f \left(x_{i_1}^{\sigma_1} \cdots x_{i_{2r+1}}^{\sigma_{2r+1}}\right)},
\end{align*}
from which it follows by Lemma \ref{lem.ct} that
\begin{align*}
\left|\left\langle \left(\frac{1}{2k}\sum_{j=1}^k{\left(\omega_j\pi(x_j) + \omega_j^{-1}\pi(x_j)^*\right)}\right)^{2r+1}v,w\right\rangle\right| & \leq \epsilon + MO\left(\frac{r}{k}\right)^r.
\end{align*}
Since $|a_1|=2l+1$, $|a_{2r+1}| = O(l/r)^{2r+1}$ for $0 \leq r \leq l$ and $\sum_{r=1}^l{|a_{2r+1}|}=\exp(O(l))$ we have
\begin{align*}
M \geq \|v\|\|w\| &\geq \left|\left\langle T_{2l+1}\left(\frac{1}{2k}\sum_{j=1}^k{\left(\omega_j\pi(x_j) +\omega_j^{-1}\pi(x_j)^*\right)}\right)v,w\right\rangle\right|\\
& = \left|\sum_{r=0}^l{a_{2r+1} \left\langle \left(\frac{1}{2k}\sum_{j=1}^k{\left(\omega_j\pi(x_j) + \omega_j^{-1}\pi(x_j)^*\right)}\right)^{2r+1}v,w\right\rangle}\right|\\
& \geq |a_1|\left|\frac{1}{2k}\sum_{j=1}^k{(\omega_jf(x_j)+\omega_j^{-1}f(x_j^{-1}))}\right| - \sum_{r=1}^l{|a_{2r+1}|\left(\epsilon + MO\left(\frac{r}{k}\right)^r\right)}\\
& \geq \frac{(2l+1)}{8} - \epsilon \exp(O(l)) - O(Ml^3/k)
\end{align*}
provided $l^2 \leq ck$ for some sufficiently small absolute $c>0$. Let $k=c'l^3$ and note that for $l=CM$ sufficiently large we arrive at a contradiction. The result is proved.
\end{proof}
The next lemma is the part of our argument that depends most on $G$ being finite -- we need some way to measure the size of the set $A$ so that it is non-trivial.
\begin{lemma}\label{lem.s}
Suppose that $A$ is $(k,l)$-arithmetically connected. Then there is some $S \subset A$ such that $|S| = k^{-O(l)}|A|$ and $|SS^{-1}| \leq k^{O(l)}|S|$.
\end{lemma}
\begin{proof}
Since
\begin{equation*}
\sum_{1 \leq r \leq l}{\sum_{i \in N_{k,2r+1}}{\sum_{\sigma \in\{-1,1\}^{2r+1}}{1}}} = \sum_{1 \leq r \leq l}{|N_{k,2r+1}|\left|\{-1,1\}^{2r+1}\right|} \leq \sum_{r=1}^l{(2k)^{2r+1}} \leq (2k)^{2l+3}
\end{equation*}
we conclude by averaging that there is some $1 \leq r \leq l$, $i \in N_{k,2r+1}$ and $\sigma \in \{-1,1\}^{2r+1}$ such that
\begin{equation*}
\sum_{x_1,\dots,x_k}{1_A(x_1)\dots1_A(x_k)1_A\left(x_{i_1}^{\sigma_1}\cdots x_{i_{2r+1}}^{\sigma_{2r+1}}\right)} \geq \frac{1}{(2k)^{2l+3}}|A|^k.
\end{equation*}
Since $i \in N_{k,2r+1}$ it follows that
\begin{equation*}
|\{j \in [k]: |\{s \in [2r+1]: i_s=j\}|=1\}| \geq 2,
\end{equation*}
and so there are elements $s_1<s_2 \in [2r+1]$ such that
\begin{equation}\label{eqn.preimageone}
|\{s \in [2r+1]: i_s=i_{s_1}\}|=1 \text{ and }|\{s \in [2r+1]: i_s=i_{s_2}\}|=1.
\end{equation}
Since the variables indexed by $[k] \setminus \{i_{s_1},i_{s_2}\}$ range freely over $A$, the sum
\begin{equation*}
\sum_{\substack{x_{i_1},\dots,x_{i_{s_1-1}},x_{i_{s_1+1}},\dots, \\ x_{i_{s_2-1}},x_{i_{s_2+1}},\dots,x_{i_{2r+1}}}}{1_A\left(x_{i_1}\right)\cdots 1_A\left(x_{i_{s_1-1}}\right)1_A\left(x_{i_{s_1+1}}\right)\cdots 1_A\left(x_{i_{s_2-1}}\right)1_A\left(x_{i_{s_2+1}}\right)\cdots 1_A\left(x_{i_{2r+1}}\right)}
\end{equation*}
equals $|A|^{k-2}$. Averaging gives $x_{i_1} ,\dots ,x_{i_{s_1-1}} ,x_{i_{s_1+1}} ,\dots ,x_{i_{s_2-1}} ,x_{i_{s_2+1}},\dots,x_{i_{2r+1}} \in A$ such that
\begin{align*}
& \sum_{x_{i_{s_1}},x_{i_{s_2}}}{1_A\left(x_{i_{s_1}}\right)1_A\left(x_{i_{s_2}}\right)}\\
& \qquad \qquad \qquad \times 1_A\left(x_{i_1}^{\sigma_1}\cdots x_{i_{s_1-1}}^{\sigma_{s_1-1}}\cdot x_{i_{s_1}}^{\sigma_{s_1}}\cdot x_{i_{s_1+1}}^{\sigma_{s_1+1}}\cdots x_{i_{s_2-1}}^{\sigma_{s_2-1}}\cdot x_{i_{s_2}}^{\sigma_{s_2}}\cdot x_{i_{s_2+1}}^{\sigma_{s_2+1}}\cdots x_{i_{2r+1}}^{\sigma_{2r+1}}\right)\\ & \qquad \qquad \qquad \qquad \qquad\geq \frac{1}{|A|^{k-2}}\cdot \frac{1}{(2k)^{2l+3}}|A|^k = \frac{|A|^2}{(2k)^{2l+3}}.
\end{align*}
Write
\begin{equation*}
w:=x_{i_1}^{\sigma_1}\cdots x_{i_{s_1-1}}^{\sigma_{s_1-1}}, y:=x_{i_{s_1+1}}^{\sigma_{s_1+1}}\cdots x_{i_{s_2-1}}^{\sigma_{s_2-1}} \text{ and } z:= x_{i_{s_2+1}}^{\sigma_{s_2+1}}\cdots x_{i_{2r+1}}^{\sigma_{2r+1}},
\end{equation*}
where each of these products may be empty and if so is the identity. In light of (\ref{eqn.preimageone}) none of $w,y$ or $z$ depends on $x_{i_{s_1}}$ or $x_{i_{s_2}}$. By the change of variables
\begin{equation*}
u:=wx_{i_{s_1}}^{\sigma_{s_1}} \text{ and } v^{-1}:=yx_{i_{s_2}}^{\sigma_{s_2}}
\end{equation*}
we have
\begin{align*}
\frac{|A|^2}{(2k)^{2l+3}} & \leq \sum_{u,v}{1_{wA^{\sigma_{s_1}}}(u) 1_{A^{-\sigma_{s_2}}y^{-1}}(v)1_{Az^{-1}}(uv^{-1})}\\
& = \langle 1_{wA^{\sigma_{s_1}}}, 1_{Az^{-1}} \ast 1_{A^{-\sigma_{s_2}}y^{-1}}\rangle_{\ell_2(G)} \leq \|1_{wA^{\sigma_{s_1}}}\|_{\ell_2(G)} \| 1_{Az^{-1}} \ast 1_{A^{-\sigma_{s_2}}y^{-1}}\|_{\ell_2(G)}.
\end{align*}
It follows that, in the notation of Tao and Vu \cite[(2.37)]{taovu::}, we have $E(Az^{-1},A^{-\sigma_{s_2}}y^{-1}) \geq k^{-O(l)}|A|^3$. Apply the Balog-Szemer{\'e}di-Gowers Lemma \cite[Corollary 2.40]{taovu::} to get $A' \subset Az^{-1}$ such that $|A'| = k^{-O(l)}|A|$ and $|(A')(A')^{-1}| \leq k^{O(l)}|A'|$; the result follows on taking $S:=A'z$ so that $S \subset A$, $|S| = k^{-O(l)}|A|$ and $|SS^{-1}| = |(A'z)(A'z)^{-1}| \leq k^{O(l)}|S|$.
The result is proved.
\end{proof}
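For orientation only: in additive notation over $\Z$, the energy $E(A,B)$ of Tao and Vu counts quadruples with $a+b=a'+b'$, and for tiny sets it can be computed by brute force. The helper below is our own illustrative sketch of this additive analogue, not the multiplicative energy used in the proof.

```python
from itertools import product

def additive_energy(A, B):
    """E(A, B) = number of quadruples (a, a', b, b') with a + b = a' + b'."""
    return sum(
        1
        for a, a2, b, b2 in product(A, A, B, B)
        if a + b == a2 + b2
    )

# An arithmetic progression has near-maximal energy E(A, A) ~ |A|^3,
# the regime in which Balog-Szemeredi-Gowers produces structure.
print(additive_energy([0, 1], [0, 1]))
```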
\begin{proof}[Proof of Proposition \ref{prop.infstruct}]
This follows immediately on combining Lemma \ref{lem.st} and Lemma \ref{lem.s}.
\end{proof}
\section*{Acknowledgement}
The author should like to thank the referee for thoughtful comments, and identifying an error in the proof of Theorem \ref{thm.ky}.
\bibliographystyle{halpha}
{
"timestamp": "2019-11-11T02:00:55",
"yymm": "1805",
"arxiv_id": "1805.02168",
"language": "en",
"url": "https://arxiv.org/abs/1805.02168"
}
\section{Introduction}
Extremely metal-poor (EMP; [Fe/H] $< -3.0$) stars of the Galactic halo
are thought to be the immediate successors of the first stars, and were
likely to have formed when the Universe was only a few hundred million
years old (e.g., \citealt{bromm2009}) -- their evolution and
explosion led to the first production of heavy elements.
These first supernovae had a considerable dynamical, thermal, and
chemical impact on the evolution of the surrounding interstellar medium,
including mini-halos that can be some distance away from the location of
the first-star explosion \citep{cooke2014,smith2015,chiaki2018}. Stars
(and their host galaxies) that formed thereafter are expected to carry
the imprints of the nucleosynthesis events from these Population III
stars \citep{beers2005,frebelandnorris,sharma2018}. Studies of such EMP
stars have greatly benefited from the large spectroscopic surveys that
were carried out in the past in order to identify them in significant
numbers, such as the HK survey of Beers and collaborators
\citep{beers85,beers92} and the Hamburg/ESO Survey of Christlieb and
colleagues \citep{christlieb2003}. More recent surveys, such as SDSS,
RAVE, APOGEE and LAMOST continue to expand the known members of this
rare class of stars (e.g., \citealt{rave2010fulbright};
\citealt{ivezic2012}; \citealt{aokibeers2013}; \citealt{zhao2012lamost};
\citealt{apogee2013}; \citealt{apogee2014}; \citealt{li2015lamost}).
High-resolution spectroscopic studies of metal-poor Galactic halo stars
have demonstrated diversity in their chemical compositions. For example,
on the order of 20\% of stars with [Fe/H] $< -2.0$ exhibit large
enhancements in their carbon-to-iron ratios ([C/Fe] $> +0.7$;
\citealt{aokibeers2007}; \citealt{lee2013}; \citealt{lee2017}). As shown
by numerous studies, the frequency of carbon-enhanced metal-poor (CEMP)
stars continues to increase with decreasing [Fe/H]. The fractions of
CEMP stars also increase with distance from the Galactic plane
\citep{frebel2006,beers2017}, and also between the inner-halo and
the outer-halo regions \citep{lee2017}.
CEMP stars can be separated into four sub-classes \citep{beers2005}: i)
CEMP-$s$ stars, which show enhancements of $s$-process elements, ii)
CEMP-$r$ stars, which exhibit enhancements of $r$-process elements, iii)
CEMP-$r/s$ stars, which show enhancements in both $r$- and $s$-process
elements\footnote{\cite{hampel2016} suggest that the observed heavy
element patterns of these stars are well accounted for by an
``intermediate neutron-capture process,'' (as first suggested by
\citealt{cowanandrose}), and should be referred to henceforth as CEMP-$i$
stars.}, and iv) CEMP-no stars, which exhibit no neutron-capture element
enhancements. Long-term radial-velocity monitoring studies have shown
that most ($> 80$\%, possibly all) CEMP-$s$ stars are members of binary
systems involving a (now extinct) asymptotic giant branch (AGB) star
that transferred carbon and $s$-process rich material to the presently
observed (lower-mass) star \citep{lucatello,starkenburg2014,
hansen2016a}, while CEMP-no stars exhibit observed binary frequencies
typical of non-carbon-enhanced halo giants, $\sim 18$\%
\citep{starkenburg2014,hansen2016b}.
\cite{jinmiyoon} have considered the rich morphology of the absolute
abundance of carbon, $A$(C)~$= \log(N_{\rm C}/N_{\rm H}) + 12$, as a function of [Fe/H], based on
high-resolution analyses of a large sample of CEMP stars (their Figure
1, the Yoon-Beers diagram). In addition to their Group~I stars, which
are dominated by CEMP-$s$ stars, they demonstrate that the CEMP-no stars
not only exhibit substantially lower $A$(C), but bifurcate into two
apparently different regions of the diagram, which they refer to as
Group~II and Group~III stars. This behavior immediately suggests that
these groups might be associated with different progenitors responsible
for the carbon production, a suggestion borne out by the modeling
carried out by \cite{placco2016}, and/or with the masses of the mini-halos
in which these stars formed. \citet{chiaki2017} have emphasized that
different cooling pathways, dependent on the formation of carbon or
silicate dust, may have applied to the Group~III and Group~II stars in
the Yoon-Beers diagram.
Multiple models for the production of CEMP-no stars have been considered
in the literature, such as the ``spinstar'' models (e.g.,
\citealt{meynet2006}; \citealt{meynet2010}; \citealt{chiappini2013}), and the ``mixing and
fallback'' models for faint SNe (e.g., \citealt{umedanomoto2003};
\citealt{umedanomoto2005}; \citealt{nomoto2013};
\citealt{tominaga2014}). Both processes may well play a role
\citep{maeder2015,choplin}.
Regardless of the complexity of the situation, additional detailed
observations of EMP stars with and without clear carbon enhancement,
such as those carried out here, are required for progress in
understanding. This paper is outlined as follows. In Section 2 we
describe our high-resolution observations. Consideration of possible
radial-velocity variations for our two targets is presented in Section 3.
Section 4 summarizes our estimates of stellar atmospheric parameters,
and describes our abundance analyses. Results of the abundance analysis
are reported in Section 5. We present a discussion of our results with a comparative study of
CEMP-no and EMP stars in
Section 6, along with a brief conclusion in Section 7.
\section{Observations and Analysis}
\subsection{Sample Selection}
MARVELS \citep{paegertmarvels}, a multi-object radial-velocity survey
designed for efficient exo-planet searches, was one of the three
sub-surveys carried out as part of SDSS-III \citep{eisenstein}. The
targets for the first two years of MARVELS were selected based on a
lower-resolution ($R \sim 1800$) spectroscopic pre-survey using the SDSS
spectrographs. Most of the pre-survey observations were carried out
during twilight, when the fields were at low elevation. Targets were
selected from these pre-survey fields for the MARVELS main radial velocity (RV) survey,
which were later observed at higher elevations. There were about 30,000
stars observed as part of the spectroscopic pre-survey of stars with
$B-V > 0.6$ and $8 < V < 13$. Target fields for the first two years of
the MARVELS survey were around known RV standards, and about 75\% of the
target fields were in the Galactic latitude range $2^{\circ} < |b| <
30^{\circ}$. Although not the ideal location to find metal-poor stars,
it does offer the chance to identify a small number of bright halo
targets, suitable for high-resolution spectroscopic follow-up with
moderate-aperture telescopes, that happen to fall into the MARVELS
pre-survey footprint during their orbits about the Galactic center. The
pre-survey also has simple magnitude and color cuts, which reduces potential
selection biases. As in our previously published work
\citep{susmitha}, we used synthetic spectral fitting of the pre-survey
data to identify new metal-poor candidates. Here, we present
high-resolution observations and analysis of two EMP stars,
SDSS J082625.70+612515.10\ (hereafter, SDSS~J0826+6125) and SDSS J134144.60+474128.90\ (hereafter,
SDSS~J1341+4741), with $V$ magnitudes of 11.44 and 12.38, respectively. These
two stars were selected for follow up, as they were found to be very
metal poor from spectral fitting of the pre-survey data, and were
also very bright. Results from the spectral fitting used to identify
metal-poor candidates from the MARVELS pre-survey will be discussed in a
separate paper.
\subsection{High-Resolution Observations}
High-resolution ($R \sim 30,000$) spectroscopic observations of the two EMP
stars were obtained with the Hanle Echelle Spectrograph (HESP) on the
2.3-m Himalayan Chandra telescope (HCT) at the Indian Astronomical
Observatory (IAO). The dates of observation, wavelength coverage, radial
velocities, and signal-to-noise ratios of the available spectra are
listed in Tables 1 and 2.
Data reduction was carried out using the IRAF\footnote{IRAF is
distributed by the National Optical Astronomy Observatory, which is
operated by the Association of Universities for Research in Astronomy
(AURA) under cooperative agreement with the National Science
Foundation.} echelle package. HESP has a dual-fiber mode available: one
fiber is placed on the target star, while the other can be fed either with a
calibration source, for precise RV measurements, or with the
night sky through a pinhole that has a separation of about 13\arcsec\
from the target. The sky fiber is used for background subtraction. All
the orders were normalized, corrected for radial velocity, and merged to
produce the final spectrum. The equivalent widths for individual species
are listed in the tables in the appendix. Recently, a custom data
reduction pipeline, more suitable for the crowded and curved orders of
the stellar spectra observed with HESP, has been developed by A. Surya
and is publicly available\footnote{https://www.iiap.res.in/hesp/}.
However, in the present paper we use IRAF, and proper care was taken to
avoid the spectral tracings drifting into adjacent orders.
\begin{table}
\begin{center}
\caption{Observation log and radial velocities for SDSS~J0826+6125} \label{tbl-1}
\begin{tabular}{ccccrrrrrrr}
\tableline\tableline
Date & MJD & $\lambda$ Coverage & SNR & Radial Velocity \\
 & & (\AA) & & (km~s$^{-1}$) \\
\tableline
2015-11-03 &57330.20903 &3600-10800 &51 &$-$110.4 \\
2015-11-29 &57356.36042 &3600-10800 &49 &$-$95.6 \\
2015-12-22 &57379.10417 &3600-10800 &47 &$-$80.3 \\
2016-01-27 &57415.09792 &3600-10800 &47 &$-$52.3 \\
2016-10-20 &57682.21667 &3600-10800 &50 &$-$108.9 \\
2016-11-16 &57709.13542 &3600-10800 &51 &$-$104.1 \\
\tableline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Observation log and radial velocities for SDSS~J1341+4741} \label{tbl-2}
\begin{tabular}{cccccrrrrrrr}
\tableline\tableline
Date & MJD & $\lambda$ Coverage & SNR & Radial Velocity \\
 & & (\AA) & & (km~s$^{-1}$) \\
\tableline
2016-01-27 &57415.24722 & 3600-10800 &43 &$-$240.1\\
2016-04-24 &57503.18819 & 3600-10800 &49 &$-$190.5\\
2016-04-26 &57505.06458 & 3600-10800 &47 &$-$192.1\\
2016-06-24 &57564.02361 & 3600-10800 &48 &$-$176.2\\
2016-06-25 &57565.20139 & 3600-10800 &47 &$-$174.5\\
\tableline
\end{tabular}
\end{center}
\end{table}
\section{Radial Velocities}
The HESP instrument is thermally controlled to ${\Delta}T$ = $\pm$ 0.1$^\circ$~C
at a set point of 16$^\circ$~C over the entire year, which is expected to
provide a long-term stability of $\sim 200$ m~s$^{-1}$ (Sivarani et al.
in preparation), substantially lowering systematic errors with respect
to a spectrograph that does not have such control.
RVs were calculated for SDSS~J0826+6125\ based on six epochs
of observations spread over a period of 12 months. For SDSS~J1341+4741, we
obtained five observations spread over six months. A cross-correlation
analysis was performed with a synthetic template spectrum suitable for
each star to obtain the RV measurement for each spectrum. We made use of
the software package RVLIN provided by \citet{rvlin}, which is a set of
IDL routines used to fit Keplerian orbits to derive the orbital
parameters from the RV data. The RV measurements exhibit peak-to-peak
variations of $\sim 60$ km~s$^{-1}$, with a period of 180 days for
SDSS~J0826+6125, and $\sim 110$ km~s$^{-1}$, with a period of 116 days for
SDSS~J1341+4741. The best-fit orbits for these stars, based on the data in hand,
are shown in Figures 1 and 2. Although the existence of RV variations is
secure, with such sparse coverage of the proposed orbits more data are
required to confirm the periods.
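The orbital fits themselves were obtained with the RVLIN IDL routines. Purely as an illustration of the underlying idea, a circular-orbit period search can be sketched with a linear least-squares fit at each trial period; the data, grid, and function name below are synthetic illustrative choices of ours, not the measurements of Tables 1 and 2 or the paper's pipeline.

```python
import numpy as np

def best_circular_period(t, rv, periods):
    """Grid search: for each trial period fit rv ~ g + A cos(wt) + B sin(wt)
    by linear least squares; return the period with the smallest residual."""
    best_p, best_rss = None, np.inf
    for p in periods:
        w = 2.0 * np.pi / p
        X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
        coef, *_ = np.linalg.lstsq(X, rv, rcond=None)
        rss = float(np.sum((rv - X @ coef) ** 2))
        if rss < best_rss:
            best_p, best_rss = p, rss
    return best_p

# Synthetic circular orbit: 180-day period, 30 km/s semi-amplitude,
# -80 km/s systemic velocity (illustrative numbers only).
t = np.linspace(0.0, 400.0, 60)
rv = -80.0 + 30.0 * np.sin(2.0 * np.pi * t / 180.0)
print(best_circular_period(t, rv, np.arange(100.0, 250.0, 0.5)))
```

A full Keplerian fit adds eccentricity and the argument of periastron, but the grid-search structure is the same.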
\begin{figure}[!tbp]
\centering
\begin{minipage}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{figure1.eps}
\caption{\normalfont \small Variation of radial velocity for SDSS~J0826+6125. The derived period is
180.4 days.}
\end{minipage}
\hfill
\begin{minipage}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{figure2.eps}
\caption{\normalfont \small Variation of radial velocity for SDSS~J1341+4741. The derived period is
116.0 days.}
\end{minipage}
\end{figure}
\section{Stellar Parameters}
Both photometric and spectroscopic data were used to derive estimates of
the stellar parameters for our program stars. The effective temperatures
were determined using various photometric observations in the literature
and the $T_{\rm eff}$-color relations derived by \citet{ramirez}. These
agreed closely (within ${\sim}40$\,K) with values obtained using the
relations of \citet{alonso1996} and \citet{alonso1999}. The
$V-K$ temperature estimate is expected to be superior, as it is least
affected by metallicity and the possible presence of molecular carbon
bands. We also employed VOSA (http://svo2.cab.inta-csic.es/), the online
SED fitter \citep{bayo2008vosa}, to derive the temperatures using all of
the available photometry (optical, 2MASS, and WISE). A Bayesian fit
using the Kurucz ODFNEW/NOVER model was used to obtain the SED
temperature. Final fits for the two stars are shown in Figures 3 and 4.
$T_{\rm eff}$ estimates have also been derived spectroscopically, by
demanding that there be no trend of Fe~I line abundances with excitation
potential (shown in the upper panels of Figures 5 and 6), as well as by
fitting the H$\alpha$ profiles. Estimates for the effective
temperatures of our target stars are listed in Table 3. For
SDSS~J1341+4741, we have adopted the temperature obtained from fitting
the H$\alpha$ wings, as the wings are highly sensitive to small
variations in temperature. For SDSS~J0826+6125, the H$\alpha$ profile
was asymmetric, and thus could not be used for an accurate temperature
measurement, so the temperature obtained from the Fe~I line abundances
was adopted.
Surface gravity, $\log (g)$, estimates for the two stars have been
determined by the usual technique that demands equality of the iron
abundances derived for the neutral (Fe~I) lines and singly ionized
(Fe~II) lines. We used 7 Fe~II lines and 82 Fe~I lines for SDSS~J0826+6125, and
5 Fe~II lines and 49 Fe~I lines for SDSS~J1341+4741; best-fit models for our
target stars are shown in the upper panels of Figures 5 and 6. The wings
of the Mg~I lines have also been fitted to obtain estimates for $\log
(g)$; best-fit models are shown in Figure 7.
The microturbulent velocity ($\xi$) estimates for each star have been
derived iteratively in this process, by demanding no trend of
Fe~I abundances with reduced equivalent width; these are plotted in
the lower panels of Figures 5 and 6. The
final adopted stellar atmospheric parameters are listed in Table 4.
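Both spectroscopic balance conditions amount to requiring that a fitted slope vanish: Fe~I abundance against excitation potential (fixing $T_{\rm eff}$) and against reduced equivalent width (fixing $\xi$). A minimal numerical sketch, with made-up line data and a helper name of our own choosing:

```python
import numpy as np

def trend_slope(x, abundances):
    """Least-squares slope of line-by-line abundances against x, where x is the
    lower excitation potential (for T_eff) or the reduced EW (for xi)."""
    slope, _intercept = np.polyfit(x, abundances, 1)
    return slope

# Made-up Fe I line measurements: a flat run of abundances means the adopted
# parameter satisfies the balance condition (no trend).
chi = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # lower excitation potential (eV)
a_fe = np.array([4.40, 4.42, 4.39, 4.41, 4.40])  # A(Fe) derived line by line
print(abs(trend_slope(chi, a_fe)) < 0.01)
```

In practice the parameter is iterated until the slope is consistent with zero given the line-to-line scatter.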
\begin{table}
\begin{center}
\caption{Estimates of Effective Temperature} \label{tbl-3}
\begin{tabular}{cccccrrrrrr}
\tableline\tableline
Method & $T_{\rm eff}$ (K) \\
\tableline
&SDSS~J0826+6125 &SDSS~J1341+4741\\
\tableline
$V-K$ &4453 &5827\\
SED &4500 &5500\\
H$\alpha$ &4400 &5450\\
Fe~I/Fe~II &4300 &5400\\
\tableline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Adopted Stellar Parameters} \label{tbl-4}
\begin{tabular}{crrrrrrrrrrr}
\tableline\tableline
Object &$T_{\rm eff}$ (K) & $\log (g)$ & $\xi$ &[Fe$/$H]\\
\tableline
SDSS~J0826+6125 &4300 &0.40 &1.80 &$-$3.10\\
SDSS~J1341+4741 &5450 &2.50 &1.80 &$-$3.20\\
\tableline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure3.eps}
\caption{ The SED obtained from VOSA for SDSS~J0826+6125 shows the
temperature to be $\sim$4500\,K.} \label{fig3}
\end{figure*}
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure4.eps}
\caption{The SED obtained from VOSA for SDSS~J1341+4741 shows
the temperature to be $\sim$5500\,K. } \label{fig4}
\end{figure*}
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure5.eps}
\caption{ Top panel: Fe abundances derived from all lines, as a function of the
lower excitation potential, for the adopted model for
SDSS~J0826+6125. Lower panel: Fe abundances, as a function of
reduced equivalent widths, for the measured lines.} \label{fig5}
\end{figure*}
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure6.eps}
\caption{Top panel: Fe abundances derived from all lines, as a function of the
lower excitation potential, for the adopted model for SDSS~J1341+4741.
Lower panel: Fe abundances, as a function of reduced equivalent widths,
for the measured lines.} \label{fig6}
\end{figure*}
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure7.eps}
\caption{High-resolution HESP spectra of SDSS~J0826+6125 (upper panel) and
SDSS~J1341+4741 (lower panel) in the region of the Mg~I triplet for
different values of $\log (g)$, in steps of 0.25 dex. The red solid line
indicates the best-fit synthetic spectrum. The adopted parameters for
SDSS~J0826+6125 are $T_{\rm eff}$ = 4300~K and $\log (g) = 0.40$, while
those for SDSS~J1341+4741 are $T_{\rm eff}$ = 5450~K and $\log (g)$ =
2.50.} \label{fig7}
\end{figure*}
\subsection{Abundance Analysis}
To determine abundance estimates for the various elements present in
our target stars, we have employed one-dimensional LTE stellar
atmospheric models (ATLAS9; \citealt{castellikurucz}) and the spectral
synthesis code turbospectrum \citep{alvarezplez1998}. We measured the
equivalent widths of the absorption lines present in the spectra, and
considered for the abundance analysis only those lines whose equivalent
widths are less than 120 m{\AA}, since they lie on the linear part of
the curve of growth and are relatively insensitive to the choice of
microturbulence. We measured the equivalent widths of 122 clean lines
present in the spectra of SDSS~J0826+6125, among which 82 are Fe~I lines,
and 53 clean lines for SDSS~J1341+4741, among which 49 are Fe~I lines. We
have adopted the solar abundances for each element from
\cite{asplund2009,scott2,scott1,scott3}, and solar isotopic fractions
were used for all the elements. Version 12 of the turbospectrum code
has been used for the spectrum synthesis and abundance estimates. We
have adopted the hyperfine splitting provided by \cite{mcwilliam1998}.
We have also used MARCS models~(\citealt{marcs2008}) to derive the
abundances, but no significant deviations were obtained; the abundances
differed by values ranging from 0.01 to 0.02 dex for individual species.
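The bracket quantities in Tables 5 and 6 follow the standard bookkeeping $[X/{\rm H}] = A(X) - A_\odot(X)$ and $[X/{\rm Fe}] = [X/{\rm H}] - [{\rm Fe}/{\rm H}]$. A minimal sketch (the helper names are ours), checked against the Mg entry of Table 5 for SDSS~J0826+6125:

```python
def x_over_h(a_x: float, a_solar: float) -> float:
    """[X/H] = A(X) - A_solar(X)."""
    return a_x - a_solar

def x_over_fe(a_x: float, a_solar: float, fe_over_h: float) -> float:
    """[X/Fe] = [X/H] - [Fe/H]."""
    return x_over_h(a_x, a_solar) - fe_over_h

# Mg in SDSS J0826+6125 (Table 5): A(Mg) = 5.05, solar 7.59, [Fe/H] = -3.10.
print(round(x_over_h(5.05, 7.59), 2))          # [Mg/H] = -2.54
print(round(x_over_fe(5.05, 7.59, -3.10), 2))  # [Mg/Fe] = +0.56
```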
\begin{table}
\begin{center}
\caption{Elemental Abundance Determinations for SDSS~J0826+6125} \label{tbl-5}
\begin{tabular}{crrrrrrrrrrr}
\tableline\tableline
Elements &Species & $N_{lines}$ & A(X) & Solar & [X/H] & [X/Fe] & $\sigma$\tablenotemark{*} \\
\tableline
C\tablenotemark{s} &CH & \dots &4.60 &8.43 &$-$3.92 &$-$0.82 &0.04 \\
N\tablenotemark{s} &CN & \dots &6.00 &7.83 &$-$1.83 &+1.27 &0.03 \\
O\tablenotemark{s} &O I & \dots &6.50 &8.69 &$-$2.19 &+0.91 &0.01 \\
Na\tablenotemark{s} &Na I &2 &3.30 &6.21 &$-$2.91 &+0.19\tablenotemark{b} &0.01 \\
Mg\tablenotemark{s} &Mg I &4 &5.05 &7.59 &$-$2.54 &+0.56 &0.01\\
Al\tablenotemark{s} &Al I &1 &3.40 &6.43 &$-$3.03 &+0.07\tablenotemark{b} &0.02 \\
Ca &Ca I &8 &3.68 &6.32 &$-$2.64 &+0.46 &0.06\\
Sc\tablenotemark{s} &Sc II &5 &$-$0.06 &3.15 &$-$3.21 &$-$0.11 &0.01\\
Ti &Ti I &7 &1.96 &4.93 &$-$2.97 &+0.13 &0.03\\
&Ti II &6 &2.06 &4.93 &$-$2.87 &+0.23 &0.04\\
Cr &Cr I &3 &2.10 &5.62 &$-$3.52 &$-$0.42 &0.05\\
&Cr II &2 &2.35 &5.62 &$-$3.27 &$-$0.17 &0.05\\
Mn\tablenotemark{s} &Mn I &4 &1.60 &5.42 &$-$3.82 &$-$0.72 &0.02\\
Co\tablenotemark{s} &Co I &2 &2.00 &4.93 &$-$2.93 &+0.17 &0.01\\
Ni &Ni I &3 &3.00 &6.20 &$-$3.20 &$-$0.10 &0.04\\
Zn &Zn I &2 &1.50 &4.56 &$-$2.96 &+0.14 &0.05\\
Sr\tablenotemark{s} &Sr II &2 &$-$0.90 &2.83 &$-$3.73 &$-$0.63 &0.01 \\
Y\tablenotemark{s} &Y II &1 &$-$1.47 &2.21 &$-$3.68 &$-$0.58 &0.01\\
Zr\tablenotemark{s} &Zr II &2 &$-$0.75 &2.59 &$-$3.34 &$-$0.24 &0.01\\
Ba\tablenotemark{s} &Ba II &2 &$-$1.80 &2.25 &$-$4.05 &$-$0.95 &0.01 \\
\tableline
\end{tabular}
\end{center}
\noindent $^{*}$\,$\sigma$ indicates the error.\\
$^{b}$\,Values obtained after applying NLTE corrections.\\
$^{s}$\,Abundances obtained using synthesis.
\end{table}
\begin{table}
\begin{center}
\caption{Elemental Abundance Determinations for SDSS~J1341+4741} \label{tbl-6}
\begin{tabular}{crrrrrrrrrrr}
\tableline\tableline
Elements &Species & $N_{lines}$ & A(X) & Solar & [X/H] & [X/Fe] & $\sigma$\tablenotemark{*} \\
\tableline
Li\tablenotemark{s} &Li I &1 &1.95 & \dots & \dots & \dots &0.01 \\
C\tablenotemark{s} &CH & \dots &6.22 &8.43 &$-$2.21 &+0.99 &0.04 \\
N\tablenotemark{s}{\textdagger} &CN & \dots &7.00 &7.83 &$-$0.83 &+2.37 &0.05 \\
Na\tablenotemark{s} &Na I &2 &2.80 &6.21 &$-$3.41 &$-$0.21\tablenotemark{b} &0.01 \\
Mg\tablenotemark{s} &Mg I &5 &5.10 &7.59 &$-$2.49 &+0.71 &0.01\\
Al\tablenotemark{s} &Al I &1 &3.20 &6.43 &$-$3.23 &$-$0.03\tablenotemark{b} &0.02 \\
Si &Si I &1 &5.33 &7.51 &$-$2.18 &+1.02 &0.07\\
Ca &Ca I &11 &3.60 &6.32 &$-$2.72 &+0.48 &0.05\\
Sc\tablenotemark{s} &Sc II &3 &$-$0.10 &3.16 &$-$3.26 &$-$0.06 &0.01\\
Ti &Ti I &4 &2.23 &4.93 &$-$2.70 &+0.50 &0.05\\
&Ti II &13 &1.89 &4.93 &$-$3.04 &+0.16 &0.04\\
Cr &Cr I &6 &2.31 &5.62 &$-$3.31 &$-$0.11 &0.04\\
&Cr II &1 &2.77 &5.62 &$-$2.85 &+0.35 &0.06\\
Mn &Mn I &5 &1.89 &5.42 &$-$3.53 &$-$0.33 &0.05\\
Co &Co I &2 &1.99 &4.93 &$-$2.96 &+0.24 &0.05\\
Ni &Ni I &4 &3.35 &6.20 &$-$2.85 &+0.35 &0.04\\
Sr\tablenotemark{s} &Sr II &2 &$-$0.88 &2.83 &$-$3.71 &$-$0.51 &0.01\\
Ba\tablenotemark{s} &Ba II &2 &$-$1.68 &2.25 &$-$3.93 &$-$0.73 &0.01 \\
\tableline
\end{tabular}
\end{center}
\noindent \textdagger\,Only upper limits could be derived.\\
$^{*}$\,$\sigma$ indicates the error.\\
$^{b}$\,Values obtained after applying NLTE corrections.\\
$^{s}$\,Abundances obtained using synthesis.
\end{table}
\section{Abundances}
\subsection{Carbon, Nitrogen, and Oxygen}
Carbon-abundance estimates for our stars were derived by iteratively
fitting the CH bandhead region with synthetic spectra, and adopting the
value that yields the best match. We have used the CH molecular line
list compiled by Bertrand Plez \citep{plez2005}. The CN and CH
molecular linelists are taken from the Kurucz database.
For SDSS~J0826+6125, the O~I line at 630 nm was used to measure the
oxygen abundance, which was found to be strongly enhanced, [O/Fe] =
+0.91. The chemical equilibrium of CO is taken into consideration in the
turbospectrum synthesis code \citep{laverny}. We also have CO spectra, and
though noisy, its oxygen abundance is consistent with the estimates from O~I. The carbon
abundance was obtained from the CH $G$-band region, which yielded a value
of [C/Fe] $= -0.82$. We have also checked the sensitivity of the CH band for
various O abundances, but no variation could be detected. The C$_{\rm 2}$
molecular band at 516.5 nm also could not be detected, which is
consistent with a low C abundance. We could also detect the bandhead in
the region of the CN band at 3884\, {\AA}, and obtain an enhancement in
nitrogen corresponding to a value of [N/Fe] = +1.27.
For SDSS~J1341+4741, the derived fit to the CH $G$-band yielded [C/Fe] = +0.99,
clear evidence for its enhancement. Using medium-resolution
spectroscopy from SDSS, \cite{fernandez} has previously reported a carbon
abundance ratio of [C/Fe] = +0.95. The O~I line at 630 nm is too weak to be detected,
hence no meaningful O abundance could be derived for this star. The
signal-to-noise ratio at the region of CN band is too low to confirm
enhancement in nitrogen for this star; so we could only obtain an upper
limit of [N/Fe] $< +2.37$.
Fits in the region of the CH $G$-band are shown for both stars
in Figure 8.
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure8.eps}
\caption{High-resolution HESP spectra in the CH $G-$band region for
SDSS~J0826+6125\ (upper panel) and SDSS~J1341+4741\ (lower panel). The red solid line
indicates the synthetic spectrum corresponding to the best fit,
overplotted with two synthetic spectra with carbon 0.20 dex higher and
lower than the adopted value.} \label{fig8}
\end{figure*}
\subsection{The $\alpha$-Elements}
Several magnesium lines were detected in the spectra of our target stars.
Two of the lines in the Mg triplet at 5172\,{\AA}, and three other
lines at 4167\,{\AA}, 4702\,{\AA}, and 5528\,{\AA}, were used to obtain
the abundances. The derived [Mg/Fe] ratios for SDSS~J0826+6125\ and SDSS~J1341+4741\ are
[Mg/Fe] = +0.56 and [Mg/Fe] = +0.71, respectively, values often found among
halo stars. The silicon lines at 5268\,{\AA} and 6237\,{\AA} were too
weak to be used for abundance estimates of SDSS~J0826+6125, but for SDSS~J1341+4741, we
obtain [Si/Fe] = +1.0. It should be noted that Si may appear
over-abundant for metal-poor stars because LTE results are known to
overestimate the true value \citep{shi2012}.
Eight and eleven Ca~I lines were detected in the spectra of SDSS~J0826+6125\ and
SDSS~J1341+4741, respectively, including the prominent lines at 4226.73\,{\AA},
4302.53\,{\AA}, and 4454.78\,{\AA}, and were used to measure the Ca abundance.
The measurements indicate slightly enhanced ratios of
[Ca/Fe] = +0.46 (for SDSS~J0826+6125) and [Ca/Fe] = +0.48 (for SDSS~J1341+4741). The overall abundance of
the $\alpha$-elements is consistent with the typical
halo enhancement of [$\alpha$/Fe] = +0.4.
\subsection{The Odd-Z Elements}
The sodium abundance is determined from the Na~$D_{1}$ and $D_{2}$
resonance lines at 5890\,{\AA} and 5896\,{\AA}. The aluminium abundance
is obtained from one of the resonance lines at 3961.5\,{\AA}. This line
is not the ideal indicator, as it can have large departures from LTE, as
discussed by \cite{baumuller}, who found it to be as large as +0.6 dex.
\cite{gratton2001} showed that incorporation of these corrections
improves the agreement between the values of aluminum abundances
obtained from this line and the high-excitation infrared doublet at
8773\,{\AA}, in the case of globular cluster dwarfs. Hence, we have
applied this non-LTE correction to Al in our abundance table. Aluminum
is slightly enhanced for SDSS~J0826+6125, while Na tracks the iron content
of the stars. The scandium content is also very similar to iron.
Na and Al are produced by the Ne-Na and Mg-Al cycles in intermediate and
massive stars during H-shell burning.
Sodium and aluminum in the two stars could be due to a well-mixed ISM,
and unlikely to have received direct contribution from intermediate-mass
or massive-star winds.
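The NLTE corrections discussed above are additive shifts in dex; a short sketch illustrates the operation (the input LTE value and the $+0.6$ dex shift below are hypothetical illustrations, not our measured numbers):

```python
# Illustrative sketch of applying an additive NLTE correction (in dex)
# to an LTE abundance ratio. The values are hypothetical.
def apply_nlte(x_fe_lte, delta_nlte):
    """Return the NLTE-corrected abundance ratio."""
    return x_fe_lte + delta_nlte

# e.g., a hypothetical LTE [Al/Fe] of -0.63 with the +0.6 dex
# correction discussed for the 3961.5 A resonance line
print(round(apply_nlte(-0.63, 0.60), 2))  # -0.03
```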
\subsection{The Iron-Peak Elements}
Iron abundances for SDSS~J0826+6125\ were calculated using 82 Fe~I lines and 7 Fe
II~lines found in the spectra; a difference of 0.3 dex was noted between
the derived abundances. This difference between Fe~I and Fe~II is in
agreement with the NLTE effects explored by \cite{asplund2005}. Iron
abundances for SDSS~J1341+4741\ were calculated using 49 Fe~I lines and 7 Fe~II
lines found in the spectra; a difference of $\sim$0.5 dex between the
abundance values obtained from these lines was found, which is rather
large.
We also detected the iron-peak elements Mn, Cr, Co, Ni, and Zn in our
target stars. Mn and Cr are products of incomplete explosive silicon
burning, and their abundances decrease with decreasing metallicity
\citep{mcwilliam1995, ryan1996, carretta2002}. For SDSS~J0826+6125, the abundance
of Mn was derived from the resonance Mn triplet at 4030\,{\AA} and
three weaker lines near 4780\,{\AA}. The Cr abundance was measured from 4 lines,
including the stronger ones at 4646\, {\AA} and 5206\,{\AA}. Products of
complete silicon burning, such as Co, Ni, and Zn, have also been found in this
star; all of these elements are found to track the iron content.
For SDSS~J1341+4741, the abundance of Mn was derived from the resonance
Mn triplet at 4030\,{\AA} and an additional line at 3823\,{\AA}. The
observed abundances of Mn and Cr are similar to other EMP stars. The
abundance derived for Ni using the 4 lines of this element present in
the spectrum of SDSS~J1341+4741\ is clearly higher relative to iron, [Ni/Fe] =
+0.35.
\subsection{The Neutron-Capture Elements}
Strontium and barium are the two neutron-capture elements detected in
the spectra of SDSS~J1341+4741. Resonance lines of Sr~II at 4077\,{\AA}
and 4215\, {\AA} are detected in both of our target stars. SDSS~J0826+6125\
is found to be under-abundant in both strontium and barium, with
abundances of [Sr/Fe] $ = -0.63$ and [Ba/Fe] $= -0.95$, respectively.
The other neutron-capture elements found in this star are Y and Zr,
which are under-abundant as well. SDSS~J1341+4741\ is also found to be
under-abundant in strontium compared to the solar ratio,
[Sr/Fe] $ = -0.51$. Ba~II resonance lines at 4554\, {\AA} and 4937\,{\AA} were also
measured, and exhibited a considerable barium depletion, [Ba/Fe] $ =
-0.73$. Based on the clear under-abundance of the neutron-capture
elements, along with its strong carbon over-abundance, this star can be
confidently classified as a CEMP-no star.
Best-fit spectra of the Sr and Ba syntheses for our two stars are shown
in Figures 9 and 10.
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure9.eps}
\caption{Synthesis in the Sr II region for SDSS~J0826+6125 (upper panel)
and SDSS~J1341+4741 (lower panel). The red line indicates the best-fit,
overplotted with two synthetic spectra with Sr abundance 0.20 dex higher and
lower than the adopted value.} \label{fig9}
\end{figure*}
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure10.eps}
\caption{Synthesis in the Ba II region for SDSS~J0826+6125 (upper panel)
and SDSS~J1341+4741 (lower panel). The red line indicates the best-fit,
overplotted with two synthetic spectra with Ba abundance 0.20 dex higher and
lower than the adopted value.} \label{fig10}
\end{figure*}
\subsection{Lithium}
Although lithium was not detected in SDSS~J0826+6125, there is a strong feature
observed in SDSS~J1341+4741\ at 6707\,{\AA}, the Li doublet, from which we
obtain an abundance $A$(Li) = 1.95, which is similar to some other
CEMP-no stars (e.g., \citealt{sivarani2006}, \citealt{matsuno2017}). The
detection of lithium indicates that this star is unlikely to have
experienced AGB binary mass transfer or direct winds from a massive
star. Mass transfer from a low-mass AGB would produce large amounts of
carbon and deplete lithium, along with the production of $s$-process
enhanced material. A $(4-7\,M_{\odot})$ AGB star that had experienced hot bottom
burning produces large amounts of nitrogen and very little carbon. There are some
models in which AGB stars could produce lithium through the
Cameron-Fowler mechanism \citep{cameronfowler1971}. It is unclear if an
AGB with mass 3-4$M_{\odot}$ could explain the observed C, N, low $s$-process
elements, and lithium. Evolutionary mixing inside the star in its
subgiant phase might deplete the original lithium abundance of the
star-forming cloud. The synthesis for this element is shown in Figure
11.
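The $A$(Li) values quoted here are on the standard absolute abundance scale, $A(X) = \log_{10}(N_X/N_{\rm H}) + 12$. A small sketch of this conversion (the number-density ratio below is back-computed from the measured $A$(Li) = 1.95, not independently known):

```python
import math

def a_x(n_ratio):
    """Absolute abundance A(X) = log10(N_X / N_H) + 12."""
    return math.log10(n_ratio) + 12.0

# Number-density ratio corresponding to the measured A(Li) = 1.95
n_li_over_n_h = 10.0 ** (1.95 - 12.0)
print(round(a_x(n_li_over_n_h), 2))  # 1.95
```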
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure11.eps}
\caption{Synthesis of lithium for SDSS~J1341+4741 at
6707\,{\AA}. The red line indicates the best-fit,
overplotted with two synthetic spectra with Li abundance 0.20 dex higher and
lower than the adopted value of A(Li) = 1.95.} \label{fig11}
\end{figure*}
\section{DISCUSSION}
\subsection{SDSS J082625.70+612515.10}
\subsubsection{Carbon, Nitrogen, and the Non-detection of Lithium}
In the {\it First Stars VI} paper, \citet{firststars6} found that carbon and
nitrogen were anti-correlated, and the faint halo stars
could be classified into two groups -- ``unmixed'' stars, which
exhibited C enhancement with N depletion, having $A$(Li) between 0.2 and
1.2, and ``mixed'' stars, which showed [C/Fe] $< 0.0$, [N/Fe] $> +0.5$,
and Li below the detection threshold. SDSS~J0826+6125\ clearly falls into the
second group. Lithium is a very fragile element, which is destroyed at
temperatures in excess of 2.5 million K. Evidence for this can be seen
in previous samples of metal-poor stars; the $A$(Li) = 2.3 observed for
metal-poor dwarfs starts decreasing as the star ascends the giant
branch, to $A$(Li) $< 1.2$ for giants ({\it First Stars VII};
\citealt{firststars7}). The non-detection of lithium for this star could
be understood in this way.
In {\it First Stars IX}, \cite{firststars9} argued that such destruction
could be taken as a signature of mixing, and placed this mixed group of
stars higher up in the giant-branch stage of evolution. Other scenarios
for depletion of lithium, such as binary mass transfer, can be
eliminated for SDSS~J0826+6125, as no such peculiar chemical imprints
have been found. During mixing, material from deeper layers where carbon
is converted to nitrogen is brought to the stellar surface. Figure 4 of
\cite{cayrel2004} shows the decline in the value of [C/Fe] for
temperatures below 4800~K in metal-poor stars, which is again attributed
to deep mixing at lower temperatures. With a $T_{\rm eff}$ of 4300~K and
a low $\log (g) = 0.4$, SDSS~J0826+6125\ can be placed in the mixed group of
stars close to the tip of the red giant branch (RGB). Figure 12 shows the
position of the star in the $\log (g)$-$T_{\rm eff}$ plane, compared
with other metal-poor halo stars compiled in the SAGA database
\citep{sudasaga}. It sits right at the tip of the RGB.
Figure 13 compares the [C/N] ratio with metallicity of the halo stars
having carbon deficiency (and for which both estimates of carbon and
nitrogen are available). The abundance ratio of [C/N] for SDSS~J0826+6125~ is
remarkably low compared to other stars at the tip of the RGB.
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics[width=\textwidth]{figure12.eps}
\caption{\normalfont \small The position of SDSS~J0826+6125 among other EMP halo stars in the $\log (g)$-$T_{\rm eff}$ plane.
The position of the star at the tip of the RGB is marked by the blue cross.} \label{fig12}
\end{figure*}
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics[width=\textwidth]{figure13.eps}
\caption{\normalfont \small The very low [C/N] abundance ratio of SDSS~J0826+6125, compared with other low-metallicity C-poor halo stars.
SDSS~J0826+6125 is marked by the blue cross. The red dots mark the stars at the tip of the RGB with $\log (g) < 1$.} \label{fig13}
\end{figure*}
\subsubsection{The Light Elements}
SDSS~J0826+6125\ exhibits a low Na, high Mg, and low Al content, consistent
with the odd-even pattern expected to occur during massive-star
nucleosynthesis at low metallicities. A slight
enhancement of Na is observed, which could be an imprint of the previous
generations of stars which underwent the Ne-Na cycle, as it is not
possible to produce these elements in the RGB phase. Such an anomaly
could be similar to that seen in globular cluster stars
\citep{gratton2001,gratton2004}, which have undergone the AGB
phase and passed on processed material to a subsequent generation of
star formation in a closed system. Unfortunately, other signatures seen
in globular cluster stars, such as the C-N-O and O-Na-Mg-Al correlations
and anti-correlations \citep{shetrone1996,gratton2004,carretta2010,
coelho2011,meszaros2015} were not observed in this star.
\subsubsection{The Iron-Peak Elements}
Abundances of Fe-peak elements (Cr, Mn, Co, and Ni) for metal-poor stars
from the SAGA database are plotted, as a function
of metallicity, in Figure 14, along with the position of SDSS~J0826+6125.
This star appears relatively rich in Co, but poor in Cr, Mn, and Ni,
consistent with \citet{mcwilliam1995} and \citet{andouze}, who showed the
same trends for several stars with metallicity below [Fe/H] $= -2.4$.
The relative abundances of the Fe-peak nuclei could be well-explained
by their dependence on the mass cut of the progenitor supernova with
temperature, which gives rise to a photo-disintegration process
\citep{woosleynweaver}.
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure14.eps}
\caption{Distribution of Fe-peak elements for Galactic halo stars. The
red dots represent the CEMP-no stars, while black dots represent C-normal halo stars.
The two program stars SDSS~J0826+6125 and SDSS~J1341+4741 are indicated
by blue and red crosses, respectively.} \label{fig14}
\end{figure*}
\subsubsection{The Neutron-Capture Elements}
Abundances of both the heavy and light $s$-process elements found in
SDSS~J0826+6125\ are low, which is again consistent with the lack of available neutron flux
\citep{andouze}. The abundance values are very similar to other EMP
giants.
\subsubsection{The Asymmetric H$\alpha$ Profile of SDSS~J0826+6125\ }
SDSS~J0826+6125\ was observed several times, and an asymmetry in the H$\alpha$
profile was noted for all of the spectra. The profile also could not be
well-fit with synthetic spectra. The H$\alpha$ profile and its fit with
the model spectrum is shown in Figure 15. This could be due to the
inadequacy of the 1D stellar models, or it may be due to an extended
atmosphere present in the star. The H$\alpha$ profile was also found not
to vary over several observation epochs, indicating no ongoing
mass transfer. The extended atmosphere could be the result of past mass
transfer from an intermediate-mass AGB companion, or mixing due to first
dredge up of the star in the RGB phase. It is also possible that the
star itself is an AGB star (e.g., \citealt{masseron2006}).
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure15.eps}
\caption{The asymmetric H$\alpha$ profile of SDSS~J0826+6125, for different values
of temperature from 4200\,K to 4600\,K, in steps of 100\,K.} \label{fig15}
\end{figure*}
\subsection{SDSS J134144.60+474128.90}
\subsubsection{Lithium}
We have obtained a measurement of $A$(Li) = 1.95 for SDSS~J1341+4741, which is
lower than the Spite Plateau \citep{spitenspite} value of $A$(Li) =
2.2$\pm$ 0.1 \citep{Pinsonneault}, and much lower than the predicted
amount of Li from Big Bang nucleosynthesis ($A$(Li) = 2.75; \citealt{steigman2005}). Our
limited RV information for this star indicates clear variation, from
which we derive a possible period of 116 days. However, we have no other
evidence that a mass-transfer event may have occurred. The distribution
of lithium for CEMP-no stars, along with other EMP stars, is shown in
Figure 16. According to the analysis of \cite{meynet2010} and
\cite{masseron2012}, this star falls close to the edge of Li-depleted
stars ($A$(Li) = 2.00 is adopted as the separation between Li-normal and
Li-depleted metal-poor stars). A slight depletion from the Spite Plateau
value could be attributed to internal mixing of the star, or
the observed value of lithium for SDSS~J1341+4741\ may be the result of several
concurrent phenomena.
\begin{itemize}
\item The ejected material from the progenitor SN will have depleted lithium
abundance along with other nucleosynthetic elements and enhanced carbon
(for the case of SDSS~J1341+4741) that is mixed with the primordial cloud.
Depending upon the dilution factor in the natal cloud, it may be
possible to achieve the necessary lithium value \citep{piau2006,
meynet2010, maedermeynet2015}.
\item A Spite Plateau value of Li was present in
the natal cloud of SDSS~J1341+4741, and it is depleted by thermohaline mixing or
meridional circulation \citep{masseron2012} in the star. If we consider
the current evolutionary state of the star to be in the RGB phase, this
could be a viable mechanism.
\item Enhanced rotationally-induced mixing in
the RGB phase (following \citealt{denherwig}) can lead to formation of
lithium in the star, following depletion of all the primordial lithium.
It is very difficult to differentiate between an AGB or a massive
rotating star as the precursor using Li as the sole yardstick, as both
result in almost the same nucleosynthetic yield of Li
\citep{meynet2006,masseron2012}.
\end{itemize}
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure16.eps}
\caption{Comparison of the observed lithium for CEMP-no stars, taken from the SAGA Database.
The blue dots mark the EMP giants while black dots are the EMP
dwarfs. Red points are the CEMP-no stars. The red cross marks the location of
SDSS~J1341+4741.} \label{fig16}
\end{figure*}
\subsubsection{Carbon}
According to \citet{spite2013} and \citet{bonifacio2015}, CEMP stars are
distributed along two bands in the $A$(C) vs. [Fe/H] plane. The upper
band is centered around $A$(C)$\sim 8.25$, and comprises relatively more
metal-rich CEMP-$s$ stars, while the lower band centered around $A$(C)
$\sim 6.50 $ comprises more metal-poor, and primarily CEMP-no, stars.
Further investigation by \cite{hansen2016b} also led to the result that the
majority of the stars that are known binaries lie close to the upper
band.
By expanding the list of CEMP stars with available high-resolution
spectroscopic analyses to include more evolved sub-giants and giants
(with the later giants having C abundances corrected for evolutionary
mixing effects; Placco et al. 2014), \citet{jinmiyoon} demonstrated that
the morphology of this abundance space is more complex, with three
prominent groups identified in the so-called Yoon-Beers diagram (their
Figure 1). They argued that a separation between CEMP-$s$ stars and
CEMP-no stars in their sample could be reasonably achieved by splitting
the sample at $A$(C) = 7.1, with the Group I CEMP-$s$ stars lying above
this level and the Group II and III CEMP-no stars lying below this
level. In this classification scheme, SDSS~J1341+4741, with $A$(C) $\sim 6.22$,
can be comfortably identified as a Group II CEMP-no star. Hence,
the enhancement of carbon in this star is most likely to be intrinsic to
the star (i.e., the C was present in its natal gas), and not the
result of mass transfer from an extinct AGB companion. Thus, the
elemental-abundance pattern observed from this star is associated with
nucleosynthesis from a core-collapse SN at early times, perhaps with
additional contributions from stars that formed and evolved within its
natal gas cloud.
\subsubsection{The Light Elements}
SDSS~J1341+4741\ exhibits the low [Na/Fe], high [Mg/Fe], and low [Al/Fe] ratios
expected from the odd-even pattern in massive-star nucleosynthesis
yields at low metallicities (e.g., \citealt{arnett1971};
\citealt{truran1971}; \citealt{peterson1976}; \citealt{umeda2000};
\citealt{hegerandwoosley2002}). The light elements closely follow the
overall halo population observed in the Galaxy as well
\citep{cayrel2004}. Following \cite{jinmiyoon},
SDSS~J1341+4741 is clearly a member of the Group II stars, and supports
a possible mixing and fallback SN as a likely progenitor.
\subsubsection{The Iron-Peak Elements}
Abundances of Fe-peak elements for SDSS~J1341+4741\ (Cr, Mn, Co, and Ni) are
shown in Figure 14, as a function of [Fe/H], compared with other CEMP-no
and C-normal EMP stars compiled from the SAGA database
\citep{sudasaga}. One feature that clearly stands out is the over-abundance
of Cr and Ni, and to some extent Mn. In the low-metallicity regime, the
stars are expected to show signatures of Type II SNe nucleosynthesis.
All three elements play key roles in determining the progenitor
population in the halo and the subsequent SNe yields. A decrease in
[Cr$/$Fe] and [Mn/Fe] with decreasing [Fe/H] should be accompanied with
enhancement in [Co/Fe], as a result of deeper mass cuts in the
progenitor SNe (refer to Figure 9 of \citealt{nakamura1999}). However,
enhancement in both [Cr/Fe] and [Mn/Fe] can be explained by an excess of
neutrons as well. Since neutron excess is a function of metallicity, we
have plotted [Cr/Fe] vs. [Mn/Fe] in Figure 17 to eliminate the trend
with Fe abundance (following, e.g., \citealt{carretta2002}). In this
plot, our program star occupies a relatively higher position amidst the
population of CEMP-no stars. From \citet{hegerandwoosley2002},
\citet{hegerandwoosley2008}, and \citet{qian2002}, it is known that very massive stars ($80 <
M/M_{\odot} < 240$) belonging to Population III explode as
pair-instability SNe, which should not produce a correlation between
[Cr/Fe] and [Mn/Fe]. Thus, the presence of this correlation points us
towards Type II SNe associated with a relatively high-mass
($M/M_{\odot}< 80$), but not extremely high-mass, progenitor.
Nickel is an extremely important element to gain further insight into
the nature of the progenitor of SDSS~J1341+4741. The depth of the gravitational
potential and amount of neutrino-absorbing material in the models are
the two factors that compete for the production of Ni in Type II SNe. In
very massive ($M/M_{\odot}> 30$) stars the deeper gravitational
potential restricts nickel from being ejected due to fallback, while
intermediate-mass ($10 < M/M_{\odot}< 20$) stars eject large amounts of
Ni because of a large neutrino-absorbing region
\citep{nakamura1999}. Thus, enhancement of Ni also points in the same
direction, that the progenitor is likely to be a massive ($20 <
M/M_{\odot} < 30$) star exploding as a Type II SN in the early Galaxy. The
observations support the hypothesis of a mixing and fallback model
\citep{nomoto2013} with a lower degree of fallback, so as to eject a
larger mass of $^{56}$Ni.
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure17.eps}
\caption{The relative enhancement of Cr and Mn for
SDSS~J1341+4741, shown as a red cross, in the [Cr/Fe] vs. [Mn/Fe] space.
Red dots mark the CEMP-no stars while the black dots mark the EMP stars.} \label{fig17}
\end{figure*}
\subsubsection{The Neutron-Capture Elements}
The first $s$-process peak element Sr and the second $s$-process peak
element Ba have been detected in both SDSS~J0826+6125\ and SDSS~J1341+4741, and they
exhibit under-abundances. The ratio of light to heavier neutron-capture
elements are sensitive to the nature of the progenitors. Neutron star
mergers are expected to produce heavy neutron-capture elements (e.g.,
\citealt{argast2004}) -- and have been observed to do so in the kilonova
SSS17a associated with GW170817 \citep{kilpatrick2017}, which
exhibited clear evidence for the presence of unstable isotopes created by the
$r$-process \citep{drout2017,shappee2017}. SNe with jets (e.g.,
\citealt{winteler2012}; \citealt{nishimura2015}) may also produce heavy
neutron-capture elements. Formation of these systems may depend on the
environment as well.
\subsubsection{Nature of the Binary Companions of SDSS~J0826+6125\ and SDSS~J1341+4741}
Both of the program stars exhibit clear RV variations,
indicating the likely presence of a binary companion. In the case of
SDSS~J0826+6125, the enhanced abundances of N and under-abundance of
C indicates possible mixing of the atmosphere with CN-cycle products.
This can result from first dredge-up mixing in the star, which is
currently in the RGB, following mass transfer from an intermediate-mass AGB
star that might have gone through hot bottom burning \citep{lau2007,suda2012sf}.
The low $\log (g)$ value of the star supports RGB mixing, although AGB
mass transfer cannot be ruled out. The non-detection of Li and peculiar
H$\alpha$ profiles could indicate either internal mixing or binary mass
transfer as well. In the case of an intermediate-mass AGB that goes
through hot bottom burning, the temperatures are sufficiently high for
the star to operate the CNO cycle. In that case, SDSS~J0826+6125\ may be a
true nitrogen-enhanced metal-poor (NEMP; see \citealt{johnson2007};
\citealt{pols2009}; \citealt {pols2012}) star, which are known to exist, but are relatively rare.
In the case of SDSS~J1341+4741, the binary companion did not likely contribute
through a mass-transfer event, since the Li abundance in the star is
similar to other EMP stars, although it is lower than the Spite Plateau
value. The mild depletion of Li could be due to binary-induced mixing or
internal mixing of the star during its sub-giant phase. It may well be
worthwhile to mount a RV-monitoring campaign for this and other
Li-depleted EMP stars to test for a possible binary-star origin to the
declining lithium abundance problem for stars with [Fe/H] $< -3.0$.
\subsection{CEMP-no and EMP Stars}
From the above discussion, and based on previous studies, it is evident that
CEMP-no and C-normal EMP stars have very different origins. Even
within the sub-class of CEMP-no stars, there may well be different types
of progenitors. As discussed by \citet{jinmiyoon}, the Group II CEMP-no
stars could be associated with the faint mixing and fallback SNe,
whereas the Group III CEMP-no stars can be attributed to the spinstar
models, with a number of exceptions for both the classes
\citep{meynet2006,nomoto2013}. See also the discussion of the
progenitors for CEMP-no stars by \citet{placco2016}. Some of the CEMP-no
stars lying in the low $A$(C) region may have a binary component, but no
mass transfer is supposed to have taken place \citep{starkenburg2014,
bonifacio2015, jinmiyoon}, which is further strengthened by the only
``slight'' depletion of Li in SDSS~J1341+4741, as described in the previous
section.
Iron-peak elements can provide valuable insights regarding the
nucleosynthetic yields of their progenitor supernovae, as these elements
cannot be produced or modified during the post main-sequence
evolutionary stages of the star. Figure 14 shows the distribution of
some key Fe-peak elements for both CEMP-no and C-normal EMP stars.
Visual inspection suggests that Cr and Co are enhanced for the CEMP-no
population. We have compiled data from the SAGA database to see if there is
an enhancement of Cr in CEMP-no stars. The fit is given in Figure 18
for [Cr/Fe]. There is a slight offset between the EMP and CEMP-no stars,
but they exhibit similar increasing trends of [Cr/Fe] with [Fe/H]. We
have checked, and these behaviors apply to both dwarfs and giants. A similar
offset could also be noted for Co.
\citet{lai2008} and \citet{bonifacio2012} have considered the
discrepancies in the behavior of Cr between giants and dwarfs, since
Cr~II could be measured only in giants, while Cr~I is a resonance line,
and could suffer substantial NLTE effects. However, such issues are not
expected to play a substantial role when we compare only giants with
giants or dwarfs with dwarfs. Temperature and gravity do not play a
major role in deviations from LTE abundances \citep{bergemann},
so we have not used them to further refine our sample
from the archival data.
Enhancement in [Cr/Fe] for CEMP-no stars with respect to C-normal EMP stars can play a
key role in understanding the SN ejecta and the relevant mass cuts. It
would be very interesting to investigate the origin for this
discrepancy.
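The fits in Figure 18 amount to least-squares linear fits of [Cr/Fe] against [Fe/H] for each population. The sketch below reproduces the procedure on synthetic data; the slope, offset, and scatter are invented stand-ins for the actual SAGA compilation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the SAGA compilation: an increasing [Cr/Fe]
# trend with [Fe/H], plus Gaussian scatter (all values assumed)
fe_h = rng.uniform(-4.0, -2.5, 200)
cr_fe = 0.25 * fe_h + 0.55 + rng.normal(0.0, 0.05, fe_h.size)

# Least-squares linear fit, as plotted in Figure 18, with the residual
# scatter sigma about the fitted line
slope, intercept = np.polyfit(fe_h, cr_fe, 1)
sigma = np.std(cr_fe - (slope * fe_h + intercept))
print(f"slope={slope:.2f}, sigma={sigma:.2f}")
```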
\begin{figure*}[h!]
\epsscale{.80}
\includegraphics{figure18.eps}
\caption{Linear fits of [Cr/Fe] vs. [Fe/H] for
CEMP-no and C-normal EMP stars. Red is used for CEMP-no stars, while
black is used for EMP stars. The slope and $\sigma$ are shown for each
fit in the corresponding color.} \label{fig18}
\end{figure*}
\section{Conclusion}
We have derived LTE abundances for SDSS J082625.70+612515.10; it is mostly
consistent with the behavior of other halo stars. The depletion in carbon
and enhancement in nitrogen could be due to internal mixing within the
star. It is unlikely that self-enrichment similar to that seen in
globular clusters has occurred, given the over-abundance in oxygen. The
peculiar H$\alpha$ profile of SDSS~J0826+6125 also supports the
possibility of mixing that might occur in an extended atmosphere. The
radial-velocity variation strongly suggests this star is a member of a
binary system, but there is likely no ongoing mass transfer, as the
peculiar H$\alpha$ profile did not vary over a period of a year.
SDSS J134144.60+474128.90\ is a CEMP-no star, and likely a member of a
binary system. Lithium is detected and mildly depleted, similar to
other EMP stars. Lithium in EMP dwarfs and CEMP-no stars exhibits similar
trends at different metallicities. For [Fe/H] $< -3.0$, EMP and
CEMP-no stars often have lithium abundances below the Spite Plateau. We
also studied the trends of heavy elements among EMP and CEMP stars. At a
given metallicity, CEMP-no stars appear to have larger abundances of Cr.
This might provide important clues to the nature of the
progenitors that contributed to the origin of carbon.
\section{Acknowledgement}
We would like to take this opportunity to thank the HESP team for the
successful installation and maintenance of the high-resolution
spectrograph at the Hanle telescope, despite the numerous challenges and
hurdles that were encountered along the way. We would especially like to
thank Amit Kumar, M. N. Anand, Sriram, and Kathiravan for their efforts. We
also extend our gratitude towards the observation team, namely Kiran B. S., Pramod Kumar, Lakshmi Prasad, and
Venkatesh Shankar, for their tireless efforts.
T.C.B. acknowledges partial support from grant PHY 14-30152 (Physics
Frontier Center/JINA-CEE), awarded by the U.S. National Science
Foundation (NSF). T.M. acknowledges support provided by the Spanish
Ministry of Economy and Competitiveness (MINECO) under grants
AYA-2014-58082-P and AYA-2017-88254-P.
\clearpage
\newpage
{
"timestamp": "2018-05-08T02:13:01",
"yymm": "1805",
"arxiv_id": "1805.02280",
"language": "en",
"url": "https://arxiv.org/abs/1805.02280"
}
\section{Introduction}
Magnetic resonance imaging (MRI) is an important technique for visualizing human tissue. The raw measurements come in the form of Fourier transform coefficients in ``k-space,'' and the image can be viewed after an inverse 2D Fourier transform of the fully sampled k-space. Conventionally, radiologists view MRI for diagnosis. However, in areas where medical expertise is lacking or insufficient to meet demand, automated methods may also be useful. To this end, automatic MR image segmentation is essential because it allows for finer localization of regions of interest. In brain segmentation, for example, four structures are typically distinguished: background, gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). Lesions appearing in white matter are closely associated with various conditions, such as strokes and Alzheimer's disease \cite{6}.
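The k-space/image relationship described above can be sketched in a few lines of numpy; the random array below is a synthetic stand-in for an MR slice, not real scanner data:

```python
import numpy as np

# Synthetic "image" standing in for an MR slice
image = np.random.default_rng(0).random((64, 64))

# Fully sampled k-space: the 2D Fourier transform of the image
kspace = np.fft.fft2(image)

# Viewing the MRI = inverse 2D Fourier transform of full k-space
reconstructed = np.fft.ifft2(kspace).real

print(np.allclose(reconstructed, image))  # True
```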
Although rich anatomical information can be provided by MRI, it is limited by long acquisition times. These can introduce motion artifacts caused by movement of the patient \cite{2} or induce psychological stress brought on by claustrophobia \cite{1}. Thus, accelerating imaging speed while maintaining high imaging quality is key for MRI. Compressed sensing (CS) theory \cite{3,4}, which shows the possibility of recovering signals from sub-Nyquist sampling rates, has been introduced to the field of MRI to accelerate imaging. In 2017, the US Food and Drug Administration (FDA) approved CS-MRI techniques for use by two major MRI vendors, Siemens and GE \cite{5}. Thus, one can expect increasing deployment of CS-MRI techniques in the future for real-world applications.
To our knowledge, current segmentation algorithms for MRI assume a ``clean'' (i.e., fully-sampled) image as input and do not take CS-MRI scenarios into consideration. Likewise, CS-MRI reconstruction methods do not consider their output's potential downstream segmentation. Although experienced human experts can make relatively robust decisions with CS-reconstructed images, the anticipated increase in the number of CS-reconstructed MRI for clinical application will call for automatic segmentation algorithms optimized for this type of data. Therefore, a unified approach to MRI reconstruction/segmentation under the compressed sensing framework is worthy of exploration.
In this paper, we develop a unified deep neural network called SegNetMRI for joint MRI reconstruction and segmentation from compressive measurements. We build SegNetMRI on two networks: an MRI reconstruction network (MRN) and an MRI segmentation network (MSN). The MSN is an encoder-decoder network, and the MRN is made up of basic blocks, each consisting of an encoder-decoder unit and a data fidelity unit. The MRN is pre-trained with pairs of artificially under-sampled and corresponding fully-sampled MRI, and the MSN with fully-sampled MRI and corresponding segmentation labels. We then fine-tune the resulting unified network, with the MSN and MRN sharing the encoder component.
Because the fine-tuned encoders produce features used by both tasks, reconstruction and segmentation can regularize each other.
The outputs are merged by a $1\times1$ convolution.
\section{Background and Related Work}
\paragraph{MRI segmentation}
Broadly speaking, research in MRI segmentation can be categorized into three classes: (1) atlas-based segmentation with registration; (2) machine learning models with hand-crafted features; (3) deep learning models. Atlas-based segmentation \cite{20,21} requires accurate registration and is time-consuming, making it impractical in applications that require fast processing. In the second class, manually designed features are fed into classifiers, e.g., 3D Haar/spatial features into random forests \cite{35}. Such hand-crafted features are not flexible enough to encode the diverse patterns within MRI data. Recently, deep learning based models have been proposed, such as 2D convolutional neural networks \cite{23,24}, 3D convolutional neural networks \cite{25,26}, and parallelized long short-term memory (LSTM) \cite{27}. They learn semantic image features from data, leading to state-of-the-art performance in MRI segmentation. However, these MRI segmentation models do not take the input quality into consideration, but assume full measurements.
\paragraph{Compressed Sensing MRI}
We denote the underlying vectorized MRI by $x \in {\mathbb C^{P \times 1}}$, which we seek to reconstruct from the sub-sampled vectorized k-space data $y\in {\mathbb C^{Q \times 1}}$ ($Q \ll P$). CS-MRI is then typically formulated as
\begin{equation}\label{eq1}
x = \arg \min_x \left\| {{F_u}x - y} \right\|_2^2 + {f_\theta }\left( x \right),
\end{equation}
where $F_u\in {\mathbb C^{Q \times P}}$ denotes the under-sampled Fourier matrix. The $\ell_2$ term enforces data fidelity, and ${f_\theta }\left( \cdot \right)$ represents a regularization with parameters $\theta$ that constrains the solution space.
The main research focus of CS-MRI has been designing better regularizations $f_\theta$ and efficient optimization techniques. In the first CS-MRI work, SparseMRI \cite{8}, wavelet-domain $\ell_1$ sparsity plus image total variation are imposed as regularizations. More sophisticated wavelet variants are designed for CS-MRI in PANO \cite{12} and GBRWT \cite{13}. Dictionary learning techniques have also been introduced to CS-MRI, e.g., DLMRI \cite{14} and BPTV \cite{16}. These works can all be categorized as sparsity-based CS-MRI methods; they model the MRI with a ``shallow'' sparse prior, which tends to over-smooth the image.
Recently, deep neural networks have been introduced to CS-MRI. Researchers have directly applied convolutional neural networks (CNN) to learn a direct mapping from the zero-filled MRI $F_u^Hy$ (obtained by zero-padding the unsampled positions in k-space) to the true MRI \cite{17}. A deep residual architecture was also proposed for this same mapping \cite{18}. Data fidelity terms have been incorporated into the deep neural network by \cite{19} to add more guidance. These deep learning based CS-MRI models have achieved higher reconstruction quality and faster reconstruction speed.
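The zero-filled baseline $F_u^H y$ can be illustrated on a toy 1D signal: compute its Fourier coefficients, zero the un-sampled ones, and invert. The sketch below uses a naive DFT and invented sampling positions; it illustrates the idea only and is not any reconstruction code from the cited works.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a 1D signal."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def zero_filled_recon(x, sampled):
    """Zero-fill the un-sampled k-space positions, then invert (F_u^H y)."""
    X = dft(x)
    X_zf = [X[k] if k in sampled else 0.0 for k in range(len(X))]
    return [v.real for v in idft(X_zf)]

signal = [0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0]
full = zero_filled_recon(signal, set(range(8)))    # fully sampled: exact recovery
sub = zero_filled_recon(signal, {0, 1, 2, 6, 7})   # 5/8 low-frequency samples

err_full = max(abs(a - b) for a, b in zip(full, signal))
err_sub = max(abs(a - b) for a, b in zip(sub, signal))
```

Full sampling reconstructs the signal exactly (up to floating-point error), while dropping the high-frequency coefficients leaves a visible residual, mirroring the artifacts of zero-filled MRI.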
\paragraph{Combining visual tasks}
The combination of different visual tasks in a unified model has been investigated in computer vision. For example, a joint blind image restoration and recognition model based on sparse coding \cite{28} was proposed for face recognition with low-quality images. In the image dehazing model AOD-Net \cite{32}, the researchers address detection in the presence of haze by performing dehazing jointly with detection. In the MRI field, 3T-acquired images have been used for joint segmentation and super-resolution (7T) image generation \cite{37}.
\begin{figure}
\begin{center}
{\label {figure2} \includegraphics[width=1\columnwidth]{U-Net.pdf}}
\caption{The MSN architecture composed of an encoder and decoder. It is used to assess the segmentation accuracy on different reconstructed MRI produced by various CS-MRI methods.}
\label {figure2}
\end{center}
\end{figure}
\section{Methodology}
In this section, we give a detailed description of the proposed SegNetMRI model. First, we propose a segmentation network baseline and test popular CS-MRI methods on the well-trained model. Next, we propose an MRI reconstruction network formed by cascading basic blocks. We show that the proposed reconstruction network yields better segmentation than conventional sparsity-based CS-MRI models, but still inferior to that obtained from the fully-sampled MR image. We then propose the SegNetMRI model to merge MRI reconstruction and segmentation into a single model.
\begin{figure*}
\centering
\subfigure[ ZF] {\label {figure3a} \includegraphics[width=0.187\textwidth]{im_zf_20.png}}
\subfigure[ PANO] {\label {figure3b} \includegraphics[width=0.187\textwidth]{PANO_20.png}}
\subfigure[ MRN$_{5}$] {\label {figure3c} \includegraphics[width=0.187\textwidth]{MReconNet4B_20.png}}
\subfigure[ Full-sample (GT) ] {\label {figure3d} \includegraphics[width=0.187\textwidth]{im_ori_20.png}}
\subfigure[ Mask] {\label {figure3e} \includegraphics[width=0.187\textwidth]{mask_1D_20.png}}\\
\subfigure[ ZF Seg] {\label {figure3f} \includegraphics[width=0.187\textwidth]{seg_zf.png}}
\subfigure[ PANO Seg] {\label {figure3g} \includegraphics[width=0.187\textwidth]{seg_PANO.png}}
\subfigure[ MRN$_{5}$ Seg] {\label {figure3h} \includegraphics[width=0.187\textwidth]{seg_MReconNet4B.png}}
\subfigure[ Full-sample Seg] {\label {figure3i} \includegraphics[width=0.187\textwidth]{seg_fs.png}}
\subfigure[ Manual Seg] {\label {figure3j} \includegraphics[width=0.187\textwidth]{seg_label.png}}
\caption{First row: the MRI reconstructed by different CS-MRI methods under the $20\%$ sampling mask shown in (e). Second row: the corresponding segmentations produced by an independently-trained segmentation model based on the state-of-the-art U-Net (referred to as MSN in this paper for its MRI application). }
\label {figure3}
\end{figure*}
\subsection{Illustration: Segmentation after CS-MRI}
We first test several popular CS-MRI outputs on an automatic MRI segmentation model to assess the impact of compressed sensing on this task. Inspired by the state-of-the-art medical image segmentation model U-Net \cite{34}, we propose the MRI segmentation network (MSN) shown in Figure \ref{figure2}. The segmentation encoder (SE) component, using convolution and pooling, extracts features from the input image at different scales, and the segmentation decoder (SD) component, using deconvolution operations, predicts the pixel-wise segmentation class from these features. Shortcut connections directly pass lower-layer features to higher layers by concatenation.
After training this MSN model using fully-sampled MRI and their segmentation labels, we test the model's performance on reconstructed MRI produced by various CS-MRI methods. We use a $20\%$ Cartesian under-sampling mask as shown in Figure \ref{figure3e}. The tested methods include the degraded zero-filled (ZF) reconstruction as a baseline, PANO \cite{12}, and the proposed MRN model discussed in the following section\footnote{We adjust the parameters of PANO for this problem.}.
We observe that the zero-filled (ZF) reconstruction in Figure \ref{figure3a} produces a low-quality MR image, which leads to the poor segmentation shown in Figure \ref{figure3f}. The PANO-reconstructed MRI in Figure \ref{figure3b} is segmented better in Figure \ref{figure3g}, but the result is still far from satisfactory because of the loss of structural details in the reconstruction. The segmentation using the fully-sampled (FS) MRI in Figure \ref{figure3d} is shown in Figure \ref{figure3i}. Though this is not the ground truth segmentation, it is the segmentation performed on the ground truth MRI, and so represents an upper bound for CS-MRI on this segmentation task. The manually labeled segmentation is shown in Figure \ref{figure3j}. This experiment shows that while CS-MRI can substantially improve the reconstruction quality visually, the fine structural details important for segmentation can still be missing, leaving much room for improvement.
\begin{figure*}
\begin{center}
{\includegraphics[width=1\textwidth]{UniNet_new3.png}}
\caption{The SegNetMRI structure, formed by connecting the discussed MRN (top) for reconstruction, with MSN (bottom) for segmentation.}
\label {figure4}
\end{center}
\end{figure*}
\subsection{An MRI Reconstruction Network}
Deep learning for CS-MRI has the advantages of large modeling capacity, fast inference, and high-level semantic modeling ability, which eases the integration of high-level task information compared with traditional sparsity-based CS-MRI models. We therefore adopt the same encoder-decoder architecture as the MSN in Figure \ref{figure2} as a basic unit, with a global residual shortcut, which has been shown to help network training. The encoder-decoder unit produces a reconstructed MRI. Since information loss can become severe in deep neural networks, we introduce a data fidelity (DF) unit to correct the Fourier coefficients of the reconstructed MRI at the sampled positions in k-space. This takes advantage of the fact that we have accurate measurements at the sampled k-space locations, so the layers of the network should not override this information. (Details of the data fidelity unit can be found in \cite{19}.)
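In the noiseless setting, the data fidelity correction amounts to overwriting the network's k-space coefficients with the measured values wherever a sample was actually acquired (a weighted, noise-aware variant is given in \cite{19}). A minimal sketch with invented values:

```python
def data_fidelity(pred_kspace, measured, sampled):
    """Keep the network's k-space prediction at un-sampled positions,
    but restore the measured coefficients at the sampled positions."""
    return [measured[k] if k in sampled else pred_kspace[k]
            for k in range(len(pred_kspace))]

# Toy example: 6 k-space coefficients, positions 0, 2 and 5 were measured.
pred = [0.9, 0.4, 1.8, 0.3, 0.1, 0.7]        # network output (imperfect)
measured = [1.0, None, 2.0, None, None, 0.5]  # None = never acquired
corrected = data_fidelity(pred, measured, {0, 2, 5})
```

After the correction, `corrected` agrees with the measurements at sampled positions and with the network everywhere else.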
The encoder-decoder architecture and data fidelity unit make up a basic block. As more blocks are stacked in a cascade, the quality of the reconstructed MRI can be gradually improved \cite{19}. We therefore cascade $N$ such basic blocks to form the MRI reconstruction network (MRN$_N$) in Figure \ref{figure4}. The reconstruction encoders in different blocks extract features at different depths. As observed in Figure \ref{figure3}, MRN$_5$ achieves better reconstruction in Figure \ref{figure3c} than the non-deep PANO method, but the segmentation output in Figure \ref{figure3h} (MRN$\rightarrow$MSN) is still inferior to the fully-sampled segmentation. This motivates the following joint framework.
\subsection{SegNetMRI: A Unified Deep Neural Network}
Based on these observations, we propose a joint framework for CS-MRI reconstruction/segmentation that uses a deep neural network. We call this joint network SegNetMRI, which is shown in Figure \ref{figure4}.
To learn this model, we first pre-train the separate components. We pre-train the MRN$_N$ with under-sampled and fully-sampled MRI training pairs, and the MSN with fully-sampled MRI and their corresponding segmentation labels. After pre-training the MSN, we discard its encoder component (SE) and keep the decoder component (SD). We then connect a copy of the MSN decoder (SD) to the encoder component of each MRN block (called RE$_n$). The resulting $N$ outputs of the MSN decoders are concatenated and merged into the final segmentation via a $1\times1$ convolution. After separate pre-training and initialization of the remaining parts, the parameters of SegNetMRI$_N$ (with $N$ blocks in the MRN portion and the segmentation decoder duplicated $N$ times) are fine-tuned. Thus, the reconstruction and segmentation tasks share the \textit{same} encoders, but have \textit{separate} decoders for their respective tasks.
The rationale for this architecture is the following:
\begin{enumerate}
\item With the pre-training of MRN$_N$, the reconstruction encoder extracts basic features in different blocks. In SegNetMRI, the sharing of the reconstruction encoders between MRN and MSN means that these same features are used for both reconstruction and segmentation, which can help the two problems regularize each other.
\item The segmentation component uses information at various depths in the cascaded MRN, and combines this information in the decoder. The $1\times1$ convolution used to merge the outputs of the segmentation decoder at each layer can be viewed as ensemble learning.
\end{enumerate}
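The $1\times1$ convolution used for merging is simply a learned per-pixel weighted sum across the concatenated channels. A toy sketch (the weights below are invented, not learned):

```python
def merge_1x1(channel_maps, weights, bias=0.0):
    """1x1 convolution: at every pixel, a weighted sum across channels."""
    h, w = len(channel_maps[0]), len(channel_maps[0][0])
    return [[sum(wt * cm[i][j] for wt, cm in zip(weights, channel_maps)) + bias
             for j in range(w)] for i in range(h)]

# Three 2x2 "decoder outputs" for one class, merged into one score map.
outs = [
    [[0.2, 0.8], [0.5, 0.1]],
    [[0.4, 0.6], [0.7, 0.3]],
    [[0.9, 0.5], [0.2, 0.6]],
]
merged = merge_1x1(outs, weights=[0.5, 0.3, 0.2])
# merged[0][0] = 0.5*0.2 + 0.3*0.4 + 0.2*0.9, i.e. approximately 0.4
```

During fine-tuning the weights of this convolution are learned, so the merge acts as a trainable ensemble over the per-block predictions.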
\paragraph{Loss function}
We adopt the $\ell_2$ Euclidean distance as the loss function for pre-training the MRN,
\begin{equation}\label{eq2}
{\mathcal{L}^{{\rm{MRN}}}} = \frac{1}{L}\sum\nolimits_{i = 1}^L {\left\| {x_i^{fs} - {x_i}} \right\|_2^2},
\end{equation}
where $x_i^{fs}$ and $x_i$ are the $i^{th}$ full-sampled training image and the output of the MRN, respectively. To pre-train the MSN, we adopt the pixel-wise cross-entropy loss function
\begin{equation}\label{eq3}
{\mathcal{L}^{{\rm{MSN}}}} = - \sum\nolimits_{i = 1}^L {\sum\nolimits_{j = 1}^P {\sum\nolimits_{c = 1}^C {t_{ijc}^{gt}} } } \ln {t_{ijc}},
\end{equation}
where $C$ tissue classes are to be classified, and $t^{gt}$ and $t$ are the pixel-wise ground-truth labels and the MSN-predicted labels, respectively.
Pre-training the MRN and MSN this way, we then construct and fine-tune SegNetMRI with the following loss function
\begin{equation}\label{eq4}
{{\cal L}^{{\rm{SegNetMRI}}}} = {{\cal L}^{{\rm{MRN}}}} + \lambda {{\cal L}^{{\rm{OMSN}}}}.
\end{equation}
We set $\lambda = 0.01$ in our experiments. Note that the overall MSN (OMSN) loss contains $N+1$ terms when SegNetMRI contains $N$ blocks:
\begin{equation}\label{eq5}
{\mathcal{L}^{{\rm{OMSN}}}} = \frac{1}{{N + 1}}\left( {{\mathcal{L}^{{\rm{MMSN}}}} + \sum\nolimits_{i = 1}^N {{\mathcal{L}_i}^{{\rm{SMSN}}}} } \right),
\end{equation}
where ${\mathcal{L}}^{{\rm{MMSN}}}$ is the loss for the merged prediction and ${\mathcal{L}_i}^{{\rm{SMSN}}}$ is the loss for the $i$th sub-MSN decoder prediction.
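As a sanity check of how these losses compose, the sketch below assembles the pixel-wise cross-entropy, the OMSN average over the merged and per-block predictions, and the total fine-tuning loss with $\lambda=0.01$. All numbers are invented for illustration:

```python
import math

def cross_entropy(pred_probs, gt_onehot):
    """Pixel-wise cross-entropy, summed over pixels and classes."""
    return -sum(g * math.log(p)
                for ps, gs in zip(pred_probs, gt_onehot)
                for p, g in zip(ps, gs))

# Two pixels, three tissue classes; one merged prediction, N=2 block predictions.
gt = [[1, 0, 0], [0, 1, 0]]
merged_pred = [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]]
block_preds = [
    [[0.6, 0.2, 0.2], [0.3, 0.5, 0.2]],
    [[0.7, 0.2, 0.1], [0.2, 0.6, 0.2]],
]

N = len(block_preds)
l_mmsn = cross_entropy(merged_pred, gt)
l_smsn = [cross_entropy(bp, gt) for bp in block_preds]
l_omsn = (l_mmsn + sum(l_smsn)) / (N + 1)   # OMSN: average of N+1 terms

l_mrn = 0.04                                # pretend reconstruction MSE
total = l_mrn + 0.01 * l_omsn               # total loss with lambda = 0.01
```

The OMSN average keeps each block's decoder supervised while the merged prediction dominates the final output.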
\section{Experiments and Discussions}
\begin{figure*}
\begin{center}
\subfigure[ GBRWT+MSN] {\label {figure5a} \includegraphics[width=0.155\textwidth]{seg_GBRWT_exp.png}}
\subfigure[ MRN$_5$+MSN] {\label {figure5b} \includegraphics[width=0.155\textwidth]{seg_MRN_exp.png}}
\subfigure[ \protect\cite{33}] {\label {figure5c} \includegraphics[width=0.155\textwidth]{seg_Huang_exp.png}}
\subfigure[ SegNetMRI$_5$] {\label {figure5d} \includegraphics[width=0.155\textwidth]{seg_UniNet_exp.png}}
\subfigure[ FS+MSN] {\label {figure5e} \includegraphics[width=0.155\textwidth]{seg_FS_exp.png}}
\subfigure[ GroundTruth] {\label {figure5f} \includegraphics[width=0.155\textwidth]{seg_label_exp.png}}
\caption{The segmentations of the compared methods. }
\label {figure5}
\end{center}
\end{figure*}
\begin{table*}[]
\centering
\begin{tabular}{|l|c|l|l|c|l|l|c|c|l|}
\hline
\multirow{2}{*}{Methods} & \multicolumn{3}{c|}{GM} & \multicolumn{3}{c|}{WM} & \multicolumn{3}{c|}{CSF} \\ \cline{2-10}
& DC & \multicolumn{1}{c|}{HD} & \multicolumn{1}{c|}{AVD} & DC & \multicolumn{1}{c|}{HD} & \multicolumn{1}{c|}{AVD} & DC & HD & \multicolumn{1}{c|}{AVD} \\ \hline
GBRWT+MSN & 75.55 & 2.24 & 4.21 & 65.56 & 1.90 & 3.10 & 76.50 & 1.77 & 2.69 \\ \hline
MRN$_5$+MSN & 79.36 & 2.06 & 3.57 & 65.76 & \multicolumn{1}{c|}{1.88} & 2.96 & 78.43 & \multicolumn{1}{l|}{1.64} & \multicolumn{1}{c|}{2.33} \\ \hline
\cite{33} & 83.41 & 1.81 & 2.96 & 78.05 & 1.24 & 1.61 & 77.81 & 1.76 & 2.58 \\ \hline
SegNetMRI$_5$ & \textbf{86.38} & \multicolumn{1}{c|}{\textbf{1.66}} & \textbf{2.52} & \textbf{81.49} & \textbf{1.08} & \textbf{1.34} & \textbf{79.23} & \multicolumn{1}{l|}{\textbf{1.61}} & \textbf{2.23} \\ \hline
FS+MSN & 87.36 & 1.60 & 2.33 & 85.94 & 1.00 & 1.14 & 81.01 & 1.61 & 2.18 \\ \hline
\end{tabular}
\caption{The segmentation comparison of the different models using DC (\%), HD and AVD (\%) as metrics. FS+MSN is the segmentation performance when the ground truth MRI is known.}
\label{SegTable}
\end{table*}
\subsection{Implementation Details}
\paragraph{Setup}
We implement all deep models in TensorFlow using an NVIDIA GeForce GTX 1080Ti with 11GB of GPU memory and an Intel Xeon CPU E5-2683 at 2.00GHz. The hyperparameter settings of the encoder-decoder architecture used for both MRN and MSN are shown in Figure \ref{figure2}. We use batch normalization to stabilize training. ReLU is used as the activation function, except for the last convolution layer of the encoder-decoder unit within each MRN block, where an identity map is applied for residual learning. We apply Xavier initialization for pre-training MRN and MSN. The MSN is pre-trained for $60,000$ iterations using randomly cropped $64\times64$ fully-sampled MRI patches ($16$ patches per batch), and the MRN is pre-trained for $30,000$ iterations using entire training images ($4$ images per batch). We then fine-tune the SegNetMRI model for $8,000$ further iterations using entire images ($4$ images per batch). ADAM is used as the optimizer, with an initial learning rate of $0.0005$, first-order momentum of $0.9$, and second-order momentum of $0.999$.
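Xavier (Glorot) uniform initialization draws weights from $U(-l, l)$ with $l=\sqrt{6/(\mathrm{fan\_in}+\mathrm{fan\_out})}$. A quick generic re-implementation for reference (not the TensorFlow initializer itself):

```python
import math
import random

def xavier_uniform(fan_in, fan_out, seed=0):
    """Xavier/Glorot uniform init: U(-limit, limit),
    limit = sqrt(6 / (fan_in + fan_out))."""
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    rng = random.Random(seed)
    return [[rng.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

w = xavier_uniform(64, 64)
limit = math.sqrt(6.0 / 128)
```

The bound keeps the variance of activations roughly constant across layers at the start of training.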
\paragraph{Data}
We test our SegNetMRI architecture on the MRBrainS dataset from the Grand Challenge on MR Brain Image Segmentation (MRBrainS) Workshop \cite{7}. The data are acquired with 3T MRI scans. Five datasets are provided containing T1-1mm, T1, T1-IR and T2-FLAIR imaging modalities, already registered and with manual segmentations. We use the T1 MRI data of size $240\times240$ throughout the paper: four datasets for training ($192$ slices in total) and one for testing ($48$ slices in total). We adopt the data augmentation technique discussed in \cite{36}.
\begin{figure}[ht!]
\begin{center}
\subfigure[ ZF] {\label {figure6a} \includegraphics[width=0.15\textwidth]{recon_ZF_exp.png}}
\subfigure[ GBRWT] {\label {figure6b} \includegraphics[width=0.15\textwidth]{recon_GBRWT_exp.png}}
\subfigure[ MRN$_5$] {\label {figure6c} \includegraphics[width=0.15\textwidth]{recon_MRN_exp.png}}
\subfigure[ Huang] {\label {figure6d} \includegraphics[width=0.15\textwidth]{recon_Huang_exp.png}}
\subfigure[ SegNetMRI$_5$] {\label {figure6e} \includegraphics[width=0.15\textwidth]{recon_UniNet_exp.png}}
\subfigure[ FS] {\label {figure6f} \includegraphics[width=0.15\textwidth]{recon_Label_exp.png}}
\subfigure[\tiny GBRWT Error] {\label {figure6g} \includegraphics[width=0.11\textwidth]{error_GBRWT.png}}
\subfigure[\tiny MRN$_5$ Error] {\label {figure6h} \includegraphics[width=0.11\textwidth]{error_MRN.png}}
\subfigure[\tiny Huang Error] {\label {figure6i} \includegraphics[width=0.11\textwidth]{error_Huang.png}}
\subfigure[\tiny SegNetMRI$_5$ Error] {\label {figure6j} \includegraphics[width=0.11\textwidth]{error_UniNet.png}}
\caption{The reconstructed MR images with different CS-MRI methods in the first two rows and the residual maps in the third row. }
\label {figure6}
\end{center}
\end{figure}
\subsection{Experimental Results}
To evaluate segmentation performance, we compare the proposed SegNetMRI$_5$ ($5$ blocks) with: feeding a fully-sampled MRI into the MSN (FS+MSN), which serves as ground-truth-level performance; feeding the MRN$_5$ reconstruction into the MSN (MRN$_5$+MSN, i.e., no fine-tuning); and feeding the GBRWT reconstruction into the MSN (GBRWT+MSN). GBRWT \cite{13} represents the state of the art among conventional sparsity-based CS-MRI methods. Finally, we also compare with the joint framework proposed by \cite{33}, where only the reconstruction network in MRN$_5$+MSN is fine-tuned, using the loss function ${\mathcal{L}^{{\rm{MRN}}}} + \lambda{\mathcal{L}^{{\rm{MSN}}}}$.
The same under-sampling mask shown in Figure \ref{figure3e} is again used.
We show a qualitative comparison in Figure \ref{figure5}, colorizing white matter, gray matter and cerebrospinal fluid in yellow, green and red, respectively. We observe that the proposed SegNetMRI$_5$ model provides better segmentation and most closely approximates the ideal FS+MSN, though neither is perfect compared with the human labeling. For quantitative evaluation, we use the Dice Coefficient (DC), the $95$th percentile of the Hausdorff distance (HD) and the absolute volume difference (AVD), as also used in the MRBrainS challenge \cite{7}. Larger DC, and smaller HD and AVD, indicate better segmentation. The results, shown in Table \ref{SegTable}, are consistent with the qualitative evaluation.
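The Dice coefficient is straightforward to compute from binary masks, as sketched below on invented toy masks (HD and AVD involve surface distances and volume counts and are omitted here):

```python
def dice(pred, gt):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    inter = sum(p and g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return 2.0 * inter / total if total else 1.0

# Flattened toy masks for one tissue class.
gt = [1, 1, 1, 1, 0, 0, 0, 0]
pred = [1, 1, 1, 0, 1, 0, 0, 0]
score = dice(pred, gt)  # 2*3 / (4 + 4) = 0.75
```

A perfect prediction gives a Dice score of 1, and disjoint masks give 0, matching the "larger DC is better" convention above.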
In addition to the improvement in segmentation accuracy, we also evaluate the reconstruction quality of the SegNetMRI model. We show the reconstructed MRI from SegNetMRI$_5$, the model in \cite{33}, MRN$_5$ and GBRWT in Figure \ref{figure6}, along with their corresponding residuals. (Error maps are displayed on the range $[0, 0.1]$ for images pre-scaled to $[0, 1]$.) SegNetMRI achieves the smallest reconstruction error, especially in the meaningful tissue regions. We also report averaged reconstruction performance in Table \ref{ReconTable}, using peak signal-to-noise ratio (PSNR) and the corresponding normalized mean squared error (NMSE) over all 37 test MRI.
\begin{table}[t]
\centering
\small
\begin{tabular}{|l|c|c|c|c|}
\hline
& GBRWT & MRN$_5$ & \cite{33} & SegNetMRI$_5$ \\ \hline
PSNR & 31.80 & 33.94 & 33.47 & \textbf{34.27} \\ \hline
NMSE & 0.0584 & 0.0361 & 0.0388 & \textbf{0.0333} \\ \hline
\end{tabular}
\caption{The averaged PSNR (dB) and NMSE on 37 test MRI data.}
\label{ReconTable}
\end{table}
\paragraph{Discussion}
It is worth noting that the model in \cite{33} achieves better segmentation performance than the MRN$_5$ model, but its reconstruction quality is worse than MRN$_5$ both qualitatively and quantitatively. The original work in \cite{33} is devoted to joint natural image denoising and segmentation, while for medical image analysis, absolute reconstruction and segmentation accuracy are equally important. Segmented MRI are usually provided together with the reconstruction to radiologists for diagnosis, so reconstruction error can cause the loss of valuable diagnostic information. In contrast, the SegNetMRI model achieves better performance on both reconstruction and segmentation.
In SegNetMRI$_N$, the outputs of the $N$ MSN decoders are concatenated and merged into the final segmentation using a $1\times1$ convolution. This ensemble can make full use of information from different depths of the SegNetMRI and produce better segmentation accuracy. As an example, we evaluate SegNetMRI$_5$ on the gray matter tissue of all test MRI data. In Table \ref{Merge}, we show the segmentation performance of the output of each block in the SegNetMRI$_5$ model without the $1\times1$ convolution, and compare them with the segmentation produced after merging the outputs with the $1\times1$ convolution. The merged output achieves the best segmentation performance, showing the effectiveness of this ensemble structure.
In Figure \ref{figure7}, we show the segmentation accuracy (Dice Coefficient) and reconstruction quality (PSNR) of the SegNetMRI model as a function of the number of blocks $N$. Performance improves as the number of blocks increases, at the expense of longer training time.
\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
GM & B$_1$ & B$_2$ & B$_3$ & B$_4$ & B$_5$ & Merged \\ \hline
DC & 75.15 & 80.31 & 83.64 & 81.02 & 85.66 & \textbf{86.38} \\ \hline
HD & 2.15 & 1.95 & 1.77 & 1.90 & 1.68 & \textbf{1.66} \\ \hline
AVD & 4.35 & 3.58 & 2.90 & 3.43 & 2.60 & \textbf{2.52} \\ \hline
\end{tabular}
\caption{We compare the segmentation performance of the output of each block in the SegNetMRI$_5$ model without the $1\times1$ convolution against the merged segmentation output of SegNetMRI$_5$.}
\label{Merge}
\end{table}
\begin{figure}
\begin{center}
{\label {figure7} \includegraphics[width=0.45\textwidth]{Blocks.png}}
\caption{The model performance of SegNetMRI$_N$ architecture as a function of the number of blocks.}
\label {figure7}
\end{center}
\end{figure}
\section{Conclusion}
Automatic segmentation of MRI is an important problem in medical imaging, and with the recent adoption of CS-MRI by industry, segmentation techniques that take CS-MRI reconstruction into account are needed. After verifying that the two tasks suffer when performed independently, we proposed a deep neural network architecture called SegNetMRI that merges MRI reconstruction and segmentation into a joint framework. Our experiments show that simultaneous reconstruction and segmentation positively reinforce each other, significantly improving both tasks.
\section*{Acknowledgement}
\thanks{This work was supported in part by the National Natural Science Foundation of China under Grants 61571382, 81671766, 61571005, 81671674, U1605252, 61671309 in part by the Guangdong Natural Science Foundation under Grant 2015A030313007, in part by the Fundamental Research Funds for the Central Universities under Grant 20720160075, 20720180059, in part by the National Natural Science Foundation of Fujian Province, China under Grant 2017J01126. (Corresponding author: Xinghao Ding)}
\bibliographystyle{named}
\section{Introduction}
Neural models have become the dominant approach in the NLP literature.
Compared to hand-crafted indicator features, neural sentence representations are less sparse, and more flexible in encoding intricate syntactic and semantic information.
Among various neural networks for encoding sentences, bi-directional LSTMs (BiLSTM) \cite{hochreiter1997long} have been a dominant method, giving state-of-the-art results in language modelling \cite{sundermeyer2012lstm}, machine translation \cite{bahdanau2014neural}, syntactic parsing \cite{dozat2016deep} and question answering \cite{tan2015lstm}.
Despite their success, BiLSTMs have been shown to suffer from several limitations.
For example, their inherently sequential nature prevents parallel computation within a sentence \cite{vaswani2017attention}, which can create a computational bottleneck and hinder their use in industry.
In addition, local n-grams, which have been shown to be a highly useful source of contextual information for NLP, are not explicitly modelled \cite{wang2016combination}.
Finally, sequential information flow gives relatively weaker power in capturing long-range dependencies, which results in lower performance when encoding longer sentences \cite{koehn2017six}.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{slstm_fig.pdf}
\caption{Sentence-State LSTM}
\label{fig:mlstm}
\end{figure}
We investigate an alternative recurrent neural network structure for addressing these issues.
As shown in Figure \ref{fig:mlstm}, the main idea is to model the hidden states of all words simultaneously at each recurrent step, rather than one word at a time.
In particular, we view the whole sentence as a single state, which consists of sub-states for individual words and an overall sentence-level state.
To capture local and non-local contexts, states are updated recurrently by exchanging information between each other.
Consequently, we refer to our model as sentence-state LSTM, or S-LSTM in short.
Empirically, S-LSTM can give effective sentence encoding after 3 -- 6 recurrent steps.
In contrast, the number of recurrent steps necessary for BiLSTM scales with the size of the sentence.
At each recurrent step, information exchange is conducted between consecutive words in the sentence, and between the sentence-level state and each word.
In particular, each word receives information from its predecessor and successor simultaneously.
From an initial state without information exchange, each word-level state can obtain 3-gram, 5-gram and 7-gram information after 1, 2 and 3 recurrent steps, respectively.
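This n-gram growth of the word-level states is easy to verify by tracking which input positions can influence each word after $t$ rounds of neighbour exchange. The toy simulation below (ours, not the released S-LSTM code) deliberately ignores the sentence-level state, which additionally spreads non-local information:

```python
def influence_after(steps, n):
    """For each word in a length-n sentence, the set of input positions that
    can reach its state after `steps` rounds of exchanging information with
    the left and right neighbours (boundaries clipped)."""
    reach = [{i} for i in range(n)]
    for _ in range(steps):
        reach = [reach[max(i - 1, 0)] | reach[i] | reach[min(i + 1, n - 1)]
                 for i in range(n)]
    return reach

# A mid-sentence word sees a 3-gram, 5-gram, 7-gram after 1, 2, 3 steps.
r = influence_after(3, 11)
window = sorted(r[5])  # positions 2..8, a 7-gram around word 5
```

After $t$ steps the window around word $i$ spans positions $i-t$ to $i+t$, i.e., a $(2t+1)$-gram, matching the claim above.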
Being connected with every word, the sentence-level state vector serves to exchange non-local information with each word.
In addition, it can also be used as a global sentence-level representation for classification tasks.
Results on both classification and sequence labelling show that S-LSTM gives better accuracies compared to BiLSTM using the same number of parameters, while being faster. We release our code and models at \url{https://github.com/leuchine/S-LSTM}, which include all baselines and the final model.
\section{Related Work}
LSTM \cite{graves2005framewise} showed its early potential in NLP when a neural machine translation system leveraging LSTM source encoding gave highly competitive results compared to the best SMT models \cite{bahdanau2014neural}.
LSTM encoders have since been explored for other tasks, including syntactic parsing \cite{dyer2015transition}, text classification \cite{yang2016hierarchical} and machine reading \cite{hermann2015teaching}.
Bi-directional extensions have become a standard configuration for achieving state-of-the-art accuracies among various tasks \cite{wen2015semantically,ma2016end,dozat2016deep}.
S-LSTMs are similar to BiLSTMs in their recurrent bi-directional message flow between words, but different in the design of state transition.
CNNs \cite{krizhevsky2012imagenet} also allow better parallelisation compared to LSTMs for sentence encoding \cite{kim2014convolutional}, thanks to parallelism among convolution filters.
On the other hand, convolution features capture only fixed-size local n-gram information, and sentence-level feature aggregation via pooling can lead to loss of information \cite{sabour2017dynamic}.
In contrast, S-LSTM uses a global sentence-level node to assemble and back-distribute local information in the recurrent state transition process, suffering less information loss compared to pooling.
Attention \cite{bahdanau2014neural} has recently been explored as a standalone method for sentence encoding, giving competitive results compared to Bi-LSTM encoders for neural machine translation \cite{vaswani2017attention}.
The attention mechanism allows parallelisation, and can play a similar role to the sentence-level state in S-LSTMs; the latter uses neural gates, rather than hierarchical attention, to integrate word-level information.
S-LSTM further allows local communication between neighbouring words.
Hierarchical stacking of CNN layers \cite{lecun1995convolutional,kalchbrenner2014convolutional,papandreou2015weakly,dauphin2016language} allows better interaction between non-local components in a sentence via incremental levels of abstraction.
S-LSTM is similar to hierarchical attention and stacked CNN in this respect, incrementally refining sentence representations.
However, S-LSTM models hierarchical encoding of sentence structure as a \emph{recurrent} state transition process.
By nature, our work belongs to the family of LSTM sentence representations.
S-LSTM is inspired by message passing over graphs \cite{murphy1999loopy,scarselli2009graph}.
Graph-structured neural models have been used for computer program verification \cite{li2015gated} and image object detection \cite{liang2016semantic}.
The closest previous work in NLP includes the use of convolutional neural networks \cite{bastings-EtAl:2017:EMNLP2017,marcheggiani-titov:2017:EMNLP2017} and DAG LSTMs \cite{TACL1028} for modelling syntactic structures.
Compared to our work, their motivations and network structures are highly different.
In particular, the DAG LSTM of \newcite{TACL1028} is a natural extension of tree LSTM \cite{tai-socher-manning:2015:ACL-IJCNLP}, and is sequential rather than parallel in nature.
To our knowledge, we are the first to investigate a graph RNN for encoding sentences, proposing parallel graph states for integrating word-level and sentence-level information.
In this perspective, our contribution is similar to that of \newcite{kim2014convolutional} and \newcite{bahdanau2014neural} in introducing a neural representation to the NLP literature.
\section{Model}
Given a sentence $\boldsymbol{s}=w_1,\, w_2, \, \dots , \, w_n$, where $w_i$ represents the $i$th word and $n$ is the sentence length, our goal is to find a neural representation of $\boldsymbol{s}$, which consists of a hidden vector $\boldsymbol{h}_i$ for each input word $w_i$, and a global sentence-level hidden vector $\boldsymbol{g}$.
Here $\boldsymbol{h}_i$ represents syntactic and semantic features for $w_i$ under the sentential context, while $\boldsymbol{g}$ represents features for the whole sentence.
Following previous work, we additionally add $\langle s \rangle$ and $\langle /s \rangle$ to the two ends of the sentence as $w_0$ and $w_{n+1}$, respectively.
\subsection{Baseline BiLSTM}
\label{sec:bilstm}
The baseline BiLSTM model consists of two LSTM components, which process the input in the forward left-to-right and the backward right-to-left directions, respectively.
In each direction, the reading of input words is modelled as a recurrent process with a single hidden state.
Given an initial value, the state changes its value recurrently, each time consuming an incoming word.
Take the forward LSTM component for example. Denoting the initial state as $\boldsymbol{\overrightarrow{h}}^0$, which is a model parameter, the recurrent state transition step for calculating $\boldsymbol{\overrightarrow{h}}^1,\dots,\boldsymbol{\overrightarrow{h}}^{n+1}$ is defined as follows \cite{graves2005framewise}:
\begin{equation} \label{eq:gate_bilstm}
\begin{split}
\boldsymbol{\hat{i}}^t &= \sigma(\boldsymbol{W}_i \boldsymbol{x}_t + \boldsymbol{U}_i \boldsymbol{\overrightarrow{h}}^{t-1} + \boldsymbol{b}_i) \\
\boldsymbol{\hat{f}}^t &= \sigma(\boldsymbol{W}_f \boldsymbol{x}_t + \boldsymbol{U}_f \boldsymbol{\overrightarrow{h}}^{t-1} + \boldsymbol{b}_f) \\
\boldsymbol{o}^t &= \sigma(\boldsymbol{W}_o \boldsymbol{x}_t + \boldsymbol{U}_o \boldsymbol{\overrightarrow{h}}^{t-1} + \boldsymbol{b}_o) \\
\boldsymbol{u}^t &= {\it tanh}(\boldsymbol{W}_u \boldsymbol{x}_t + \boldsymbol{U}_u \boldsymbol{\overrightarrow{h}}^{t-1} + \boldsymbol{b}_u) \\
\boldsymbol{i}^t&, \boldsymbol{f}^t = {\it softmax}(\boldsymbol{\hat{i}}^t,\boldsymbol{\hat{f}}^t) \\
\boldsymbol{c}^t &= \boldsymbol{c}^{t-1} \odot \boldsymbol{f}^t + \boldsymbol{u}^t \odot \boldsymbol{i}^t \\
\boldsymbol{\overrightarrow{h}}^t &= \boldsymbol{o}^t \odot {\it tanh}(\boldsymbol{c}^t)
\end{split}
\end{equation}
where $\boldsymbol{x}_t$ denotes the word representation of $w_t$; $\boldsymbol{i}^t$, $\boldsymbol{o}^t$, $\boldsymbol{f}^t$ and $\boldsymbol{u}^t$ represent the values of an input gate, an output gate, a forget gate and an actual input at time step $t$, respectively, which control the information flow for the recurrent cell $\boldsymbol{c}^t$ and the state vector $\boldsymbol{\overrightarrow{h}}^t$; $\boldsymbol{W}_x$, $\boldsymbol{U}_x$ and $\boldsymbol{b}_x$ ($x \in \{i,o,f,u\}$) are model parameters.
$\sigma$ is the sigmoid function.
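For concreteness, one forward step of Eq~\ref{eq:gate_bilstm} can be sketched in NumPy as follows. This is an illustrative re-implementation with our own helper names, not the actual training code; the joint normalisation of the input and forget gates is rendered as a two-way softmax over the two gate activations:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, p):
    # Gate pre-activations of Eq. 1; p maps "W"/"U"/"b" to per-gate arrays.
    i_hat = sigmoid(p["W"]["i"] @ x + p["U"]["i"] @ h_prev + p["b"]["i"])
    f_hat = sigmoid(p["W"]["f"] @ x + p["U"]["f"] @ h_prev + p["b"]["f"])
    o = sigmoid(p["W"]["o"] @ x + p["U"]["o"] @ h_prev + p["b"]["o"])
    u = np.tanh(p["W"]["u"] @ x + p["U"]["u"] @ h_prev + p["b"]["u"])
    e_i, e_f = np.exp(i_hat), np.exp(f_hat)   # two-way softmax: i + f = 1
    i, f = e_i / (e_i + e_f), e_f / (e_i + e_f)
    c = c_prev * f + u * i                    # cell update
    h = o * np.tanh(c)                        # new hidden state
    return h, c

def init_lstm_params(d_in, d_h, rng):
    # One weight matrix and bias per gate; initialisation is arbitrary here.
    return {"W": {g: 0.1 * rng.standard_normal((d_h, d_in)) for g in "ifou"},
            "U": {g: 0.1 * rng.standard_normal((d_h, d_h)) for g in "ifou"},
            "b": {g: np.zeros(d_h) for g in "ifou"}}
```

Iterating this step over $\boldsymbol{x}_0, \dots, \boldsymbol{x}_{n+1}$ in both directions and concatenating the resulting states yields the BiLSTM representations.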
The backward LSTM component follows the same recurrent state transition process as described in Eq~\ref{eq:gate_bilstm}.
Starting from an initial state $\boldsymbol{\overleftarrow{h}}^{n+1}$, which is a model parameter, it reads the input $\boldsymbol{x}_n$, $\boldsymbol{x}_{n-1}$, $\dots$, $\boldsymbol{x}_0$, changing its value to $\boldsymbol{\overleftarrow{h}}^n$, $\boldsymbol{\overleftarrow{h}}^{n-1}$, $\dots$, $\boldsymbol{\overleftarrow{h}}^0$, respectively.
A separate set of parameters $\boldsymbol{\hat{W}}_x$, $\boldsymbol{\hat{U}}_x$ and $\boldsymbol{\hat{b}}_x$ ($x \in \{i,o,f,u\}$) are used for the backward component.
The BiLSTM model uses the concatenated value of $\boldsymbol{\overrightarrow{h}}^t$ and $\boldsymbol{\overleftarrow{h}}^t$ as the hidden vector for $w_t$:
\[
\boldsymbol{h}^t = [\boldsymbol{\overrightarrow{h}}^t; \boldsymbol{\overleftarrow{h}}^t]
\]
A single hidden vector representation $\boldsymbol{g}$ of the whole input sentence can be obtained using the final state values of the two LSTM components:
\[
\boldsymbol{g} = [\boldsymbol{\overrightarrow{h}}^{n+1}; \boldsymbol{\overleftarrow{h}}^0]
\]
\subparagraph{Stacked BiLSTM}
Multiple layers of BiLSTMs can be stacked for increased representation power, where the hidden vectors of a lower layer are used as inputs for an upper layer.
Different model parameters are used in each stacked BiLSTM layer.
\subsection{Sentence-State LSTM}
\label{sec:m-lstm}
Formally, an S-LSTM state at time step $t$ can be denoted by:
\[
\boldsymbol{H}^t = \langle \boldsymbol{h}_0^t, \boldsymbol{h}_1^t, \dots, \boldsymbol{h}_{n+1}^t, \boldsymbol{g}^t \rangle \textrm{,}
\]
which consists of a sub state $\boldsymbol{h}_i^t$ for each word $w_i$ and a sentence-level sub state $\boldsymbol{g}^t$.
S-LSTM uses a recurrent state transition process to model information exchange between sub states, which enriches state representations incrementally.
For the initial state $\boldsymbol{H}^0$, we set $\boldsymbol{h}_i^0 = \boldsymbol{g}^0 = \boldsymbol{h}^0$, where $\boldsymbol{h}^0$ is a parameter.
The state transition from $\boldsymbol{H}^{t-1}$ to $\boldsymbol{H}^t$ consists of sub state transitions from $\boldsymbol{h}_i^{t-1}$ to $\boldsymbol{h}_i^t$ and from $\boldsymbol{g}^{t-1}$ to $\boldsymbol{g}^t$.
We take an LSTM structure similar to the baseline BiLSTM for modelling state transition, using a recurrent cell $\boldsymbol{c}_i^t$ for each $w_i$ and a cell $\boldsymbol{c}_g^t$ for $\boldsymbol{g}$.
As shown in Figure~\ref{fig:mlstm}, the value of each $\boldsymbol{h}_i^{t}$ is computed based on the values of $\boldsymbol{x}_i$, $\boldsymbol{h}_{i-1}^{t-1}$, $\boldsymbol{h}_i^{t-1}$, $\boldsymbol{h}_{i+1}^{t-1}$ and $\boldsymbol{g}^{t-1}$, together with their corresponding cell values:
\begin{equation} \label{eq:mlstm}
\begin{split}
\boldsymbol{\xi}_i^t &= [\boldsymbol{h}_{i-1}^{t-1}, \boldsymbol{h}_i^{t-1}, \boldsymbol{h}_{i+1}^{t-1}] \\
\boldsymbol{\hat{i}}_i^t &= \sigma(\boldsymbol{W}_i \boldsymbol{\xi}_i^t + \boldsymbol{U}_i \boldsymbol{x}_{i} + \boldsymbol{V}_i \boldsymbol{g}^{t-1} + \boldsymbol{b}_i)\\
\boldsymbol{\hat{l}}_i^t &= \sigma(\boldsymbol{W}_l \boldsymbol{\xi}_i^t + \boldsymbol{U}_l \boldsymbol{x}_{i} + \boldsymbol{V}_l \boldsymbol{g}^{t-1} + \boldsymbol{b}_l)\\
\boldsymbol{\hat{r}}_i^t &= \sigma(\boldsymbol{W}_r \boldsymbol{\xi}_i^t + \boldsymbol{U}_r \boldsymbol{x}_{i} + \boldsymbol{V}_r \boldsymbol{g}^{t-1} + \boldsymbol{b}_r)\\
\boldsymbol{\hat{f}}_i^t &= \sigma(\boldsymbol{W}_f \boldsymbol{\xi}_i^t + \boldsymbol{U}_f \boldsymbol{x}_{i} + \boldsymbol{V}_f \boldsymbol{g}^{t-1} + \boldsymbol{b}_f)\\
\boldsymbol{\hat{s}}_i^t &= \sigma(\boldsymbol{W}_s \boldsymbol{\xi}_i^t + \boldsymbol{U}_s \boldsymbol{x}_{i} + \boldsymbol{V}_s \boldsymbol{g}^{t-1} + \boldsymbol{b}_s)\\
\boldsymbol{o}_i^t &= \sigma(\boldsymbol{W}_o \boldsymbol{\xi}_i^t + \boldsymbol{U}_o \boldsymbol{x}_{i} + \boldsymbol{V}_o \boldsymbol{g}^{t-1} + \boldsymbol{b}_o)\\
\boldsymbol{u}_i^t &= {\it tanh}(\boldsymbol{W}_u \boldsymbol{\xi}_i^t + \boldsymbol{U}_u \boldsymbol{x}_{i} + \boldsymbol{V}_u \boldsymbol{g}^{t-1} + \boldsymbol{b}_u)\\
\boldsymbol{i}_i^t &, \boldsymbol{l}_i^t, \boldsymbol{r}_i^t, \boldsymbol{f}_i^t, \boldsymbol{s}_i^t = {\it softmax}(\boldsymbol{\hat{i}}_i^t, \boldsymbol{\hat{l}}_i^t, \boldsymbol{\hat{r}}_i^t, \boldsymbol{\hat{f}}_i^t, \boldsymbol{\hat{s}}_i^t)\\
\boldsymbol{c}_i^t &= \boldsymbol{l}_i^t \odot \boldsymbol{c}_{i-1}^{t-1} + \boldsymbol{f}_i^t \odot \boldsymbol{c}_{i}^{t-1} + \boldsymbol{r}_i^t \odot \boldsymbol{c}_{i+1}^{t-1}\\
~~~~~~&~~~~~~~~~~~~~~~~~~~~ + \boldsymbol{s}_i^t \odot \boldsymbol{c}_g^{t-1} + \boldsymbol{i}_i^t \odot \boldsymbol{u}_i^t\\
\boldsymbol{h}_i^t &= \boldsymbol{o}_i^t \odot {\it tanh}(\boldsymbol{c}_i^t)
\end{split}
\end{equation}
where $\boldsymbol{\xi}_i^t$ is the concatenation of hidden vectors of a context window, and $\boldsymbol{l}_i^t$, $\boldsymbol{r}_i^t$, $\boldsymbol{f}_i^t$, $\boldsymbol{s}_i^t$ and $\boldsymbol{i}_i^t$ are gates that control information flow from $\boldsymbol{\xi}_i^t$ and $\boldsymbol{x}_i$ to $\boldsymbol{c}_i^t$.
In particular, $\boldsymbol{i}_i^t$ controls information from the input $\boldsymbol{x}_i$; $\boldsymbol{l}_i^t$, $\boldsymbol{r}_i^t$, $\boldsymbol{f}_i^t$ and $\boldsymbol{s}_i^t$ control information from the left context cell $\boldsymbol{c}_{i-1}^{t-1}$, the right context cell $\boldsymbol{c}_{i+1}^{t-1}$, the word's own cell $\boldsymbol{c}_{i}^{t-1}$ and the sentence context cell $\boldsymbol{c}_g^{t-1}$, respectively.
The values of $\boldsymbol{i}_i^t$, $\boldsymbol{l}_i^t$, $\boldsymbol{r}_i^t$, $\boldsymbol{f}_i^t$ and $\boldsymbol{s}_i^t$ are normalised such that they sum to $1$.
$\boldsymbol{o}_i^t$ is an output gate from the cell state $\boldsymbol{c}_i^t$ to the hidden state $\boldsymbol{h}_i^t$.
$\boldsymbol{W}_x$, $\boldsymbol{U}_x$, $\boldsymbol{V}_x$ and $\boldsymbol{b}_x$ ($x \in \{i,o,l,r,f,s,u\}$) are model parameters. $\sigma$ is the sigmoid function.
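The word-level transition of Eq~\ref{eq:mlstm} can be sketched in NumPy as follows. This is an illustration under simplifying assumptions of ours (boundary positions are left unchanged, and the joint gate softmax is applied to the sigmoid activations), not the actual implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def normalise(gates):
    # Joint softmax over the five gate activations, element-wise.
    e = [np.exp(g) for g in gates]
    z = sum(e)
    return [g / z for g in e]

def slstm_word_step(X, H, C, g, cg, P):
    # One transition h_i^{t-1} -> h_i^t for every interior word (Eq. 2).
    # X: (n, d) inputs; H, C: (n, d) sub-states/cells incl. <s>, </s> rows;
    # g, cg: (d,) sentence node state/cell.
    n, d = H.shape
    H_new, C_new = H.copy(), C.copy()
    for i in range(1, n - 1):
        xi = np.concatenate([H[i - 1], H[i], H[i + 1]])   # context window
        act = {k: P["W"][k] @ xi + P["U"][k] @ X[i] + P["V"][k] @ g + P["b"][k]
               for k in "ilrfsou"}
        o, u = sigmoid(act["o"]), np.tanh(act["u"])
        i_g, l, r, f, s = normalise([sigmoid(act[k]) for k in "ilrfs"])
        c = l * C[i - 1] + f * C[i] + r * C[i + 1] + s * cg + i_g * u
        H_new[i], C_new[i] = o * np.tanh(c), c
    return H_new, C_new

def init_slstm_params(d, rng):
    return {"W": {k: 0.1 * rng.standard_normal((d, 3 * d)) for k in "ilrfsou"},
            "U": {k: 0.1 * rng.standard_normal((d, d)) for k in "ilrfsou"},
            "V": {k: 0.1 * rng.standard_normal((d, d)) for k in "ilrfsou"},
            "b": {k: np.zeros(d) for k in "ilrfsou"}}
```

Note that all positions within one step are independent given $\boldsymbol{H}^{t-1}$, so the loop over $i$ can be parallelised in practice.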
The value of $\boldsymbol{g}^{t}$ is computed based on the values of $\boldsymbol{h}_i^{t-1}$ for all $i \in [0..n+1]$:
\begin{equation} \label{eq:mlstm_g}
\begin{aligned}
\boldsymbol{\bar{h}}&= {\it avg}(\boldsymbol{h}_0^{t-1}, \boldsymbol{h}_1^{t-1}, \dots, \boldsymbol{h}_{n+1}^{t-1})\\
\boldsymbol{\hat{f}}_g^{t} &= \sigma(\boldsymbol{W}_g \boldsymbol{g}^{t-1} + \boldsymbol{U}_g \boldsymbol{\bar{h}} + \boldsymbol{b}_g)\\
\boldsymbol{\hat{f}}_i^{t} &= \sigma(\boldsymbol{W}_f \boldsymbol{g}^{t-1} + \boldsymbol{U}_f \boldsymbol{h}_i^{t-1} + \boldsymbol{b}_f)\\
\boldsymbol{o}^{t} &= \sigma(\boldsymbol{W}_o \boldsymbol{g}^{t-1} + \boldsymbol{U}_o \boldsymbol{\bar{h}} + \boldsymbol{b}_o)\\
\boldsymbol{f}_0^t&, \dots, \boldsymbol{f}_{n+1}^t, \boldsymbol{f}_g^{t} = {\it softmax}(\boldsymbol{\hat{f}}_0^t, \dots, \boldsymbol{\hat{f}}_{n+1}^t, \boldsymbol{\hat{f}}_g^{t})\\
\boldsymbol{c}_g^{t} &= \boldsymbol{f}_g^{t} \odot \boldsymbol{c}_g^{t-1} + \sum_i \boldsymbol{f}_i^t \odot \boldsymbol{c}_i^{t-1} \\
\boldsymbol{g}^t &= \boldsymbol{o}^{t} \odot {\it tanh}(\boldsymbol{c}_g^{t})\\
\end{aligned}
\end{equation}
where $\boldsymbol{f}_0^t, \dots, \boldsymbol{f}_{n+1}^t$ and $\boldsymbol{f}_g^{t}$ are gates controlling information from $\boldsymbol{c}_0^{t-1}, \dots, \boldsymbol{c}_{n+1}^{t-1}$ and $\boldsymbol{c}_g^{t-1}$, respectively, which are normalised.
$\boldsymbol{o}^{t}$ is an output gate from the recurrent cell $\boldsymbol{c}_g^{t}$ to $\boldsymbol{g}^{t}$.
$\boldsymbol{W}_x$, $\boldsymbol{U}_x$ and $\boldsymbol{b}_x$ ($x \in \{g, f, o\}$) are model parameters.
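The sentence-node transition of Eq~\ref{eq:mlstm_g} can likewise be sketched in NumPy; parameter names mirror the equations, but the code is an illustration of ours rather than the actual implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def slstm_sentence_step(H, C, g, cg, P):
    # g^{t-1} -> g^t (Eq. 3). H, C: (n, d) word sub-states/cells at step t-1;
    # g, cg: (d,) sentence state and cell.
    h_bar = H.mean(axis=0)                                   # avg of word states
    f_g_hat = sigmoid(P["Wg"] @ g + P["Ug"] @ h_bar + P["bg"])
    f_hat = sigmoid(H @ P["Uf"].T + P["Wf"] @ g + P["bf"])   # one f̂_i per word
    o = sigmoid(P["Wo"] @ g + P["Uo"] @ h_bar + P["bo"])
    # Normalise the n word gates and the sentence gate so they sum to 1.
    e = np.exp(np.vstack([f_hat, f_g_hat]))
    gates = e / e.sum(axis=0)
    cg_new = gates[-1] * cg + (gates[:-1] * C).sum(axis=0)
    g_new = o * np.tanh(cg_new)
    return g_new, cg_new
```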
\subparagraph{Contrast with BiLSTM}
The difference between S-LSTM and BiLSTM can be understood with respect to their recurrent states.
While BiLSTM uses only one state in each direction to represent the subsequence from the beginning to a certain word, S-LSTM uses a structural state to represent the full sentence, which consists of a sentence-level sub state and $n+2$ word-level sub states, simultaneously.
Different from BiLSTMs, for which $\boldsymbol{h}^t$ at different time steps are used to represent $w_0, \dots, w_{n+1}$, respectively, the word-level states $\boldsymbol{h}_i^t$ and sentence-level state $\boldsymbol{g}^t$ of S-LSTMs directly correspond to the goal outputs $\boldsymbol{h}_i$ and $\boldsymbol{g}$, as introduced in the beginning of this section.
As $t$ increases from 0, $\boldsymbol{h}_i^t$ and $\boldsymbol{g}^t$ are enriched with increasingly deeper context information.
From the perspective of information flow, BiLSTM passes information from one end of the sentence to the other.
As a result, the number of time steps scales with the size of the input.
In contrast, S-LSTM allows bi-directional information flow at each word simultaneously, and additionally between the sentence-level state and every word-level state.
At each step, each $\boldsymbol{h}_i$ captures an increasingly larger n-gram context, while additionally communicating globally with all other $\boldsymbol{h}_j$ via $\boldsymbol{g}$.
The optimal number of recurrent steps is decided by the end-task performance, and does not necessarily scale with the sentence size.
As a result, S-LSTM can potentially be both more efficient and more accurate compared with BiLSTMs.
{\bf Increasing window size}.
By default S-LSTM exchanges information only between neighbouring words, which can be seen as adopting a 1-word window on each side.
The window size can be extended to 2, 3 or more words in order to allow more communication in a state transition, expediting information exchange.
To this end, we modify Eq~\ref{eq:mlstm}, integrating additional context words to $\boldsymbol{\xi}_i^t$, with extended gates and cells.
For example, with a window size of 2, $\boldsymbol{\xi}_i^t = [\boldsymbol{h}_{i-2}^{t-1}, \boldsymbol{h}_{i-1}^{t-1}, \boldsymbol{h}_{i}^{t-1}, \boldsymbol{h}_{i+1}^{t-1}, \boldsymbol{h}_{i+2}^{t-1}]$.
We study the effectiveness of window size in our experiments.
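The window extension amounts to concatenating more neighbouring hidden vectors when forming $\boldsymbol{\xi}_i^t$; a NumPy sketch with zero-padding at the sentence boundary (our own simplification for illustration):

```python
import numpy as np

def context_windows(H, w):
    # Build xi_i = [h_{i-w}; ...; h_{i+w}] for every position, zero-padding
    # positions beyond the sentence boundary.
    n, d = H.shape
    Hp = np.vstack([np.zeros((w, d)), H, np.zeros((w, d))])
    return np.stack([Hp[i:i + 2 * w + 1].reshape(-1) for i in range(n)])
```

With a window size of 2, each $\boldsymbol{\xi}_i^t$ thus concatenates five hidden vectors, matching the example above.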
{\bf Additional sentence-level nodes}. By default S-LSTM uses one sentence-level node.
One way of enriching the parameter space is to add more sentence-level nodes, each communicating with word-level nodes in the same way as described by Eq~\ref{eq:mlstm_g}.
In addition, different sentence-level nodes can communicate with each other during state transition.
When one sentence-level node is used for classification outputs, the other sentence-level node can serve as hidden memory units, or latent features. We study the effectiveness of multiple sentence-level nodes empirically.
\subsection{Task settings}
\label{sec:ext_attn}
We consider two task settings, namely classification and sequence labelling.
For \emph{classification}, $\boldsymbol{g}$ is fed to a {\it softmax} classification layer:
\[
\boldsymbol{y} = {\it softmax}(\boldsymbol{W}_c \boldsymbol{g} + \boldsymbol{b}_c)
\]
where $\boldsymbol{y}$ is the probability distribution of output class labels and $\boldsymbol{W}_c$ and $\boldsymbol{b}_c$ are model parameters.
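A direct NumPy rendering of this output layer (illustration only; variable names are ours):

```python
import numpy as np

def classify(g, Wc, bc):
    # Softmax classification layer over the sentence vector g.
    z = Wc @ g + bc
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()        # probability distribution over class labels
```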
For \emph{sequence labelling}, each $\boldsymbol{h}_i$ can be used as feature representation for a corresponding word $w_i$.
\subparagraph{External attention}
It has been shown that summation of hidden states using attention \cite{bahdanau2014neural,yang2016hierarchical} gives better accuracies compared to using the end states of BiLSTMs.
We study the influence of attention on both S-LSTM and BiLSTM for \emph{classification}.
In particular, additive attention \cite{bahdanau2014neural} is applied to the hidden states of input words for both BiLSTMs and S-LSTMs, calculating a weighted sum
\begin{equation*}
\boldsymbol{g} = \sum_t \alpha_t \boldsymbol{h}_t
\end{equation*}
where
\begin{equation*}
\alpha_t = \frac{\exp{\boldsymbol{u}^T \boldsymbol{\epsilon}_t}}{\sum_i \exp{\boldsymbol{u}^T\boldsymbol{\epsilon}_i}}
\end{equation*}
\begin{equation*}
\boldsymbol{\epsilon}_t= {\it tanh}(\boldsymbol{W}_\alpha \boldsymbol{h}_t+\boldsymbol{b}_\alpha)
\end{equation*}
Here $\boldsymbol{W}_\alpha$, $\boldsymbol{u}$ and $\boldsymbol{b}_\alpha$ are model parameters.
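The attention pooling above can be sketched in a few lines of NumPy (an illustrative helper with our own names, not the actual implementation):

```python
import numpy as np

def additive_attention_pool(H, W, b, u):
    # g = sum_t alpha_t h_t, with alpha_t a softmax over u^T tanh(W h_t + b).
    eps = np.tanh(H @ W.T + b)          # eps_t, shape (n, d_a)
    scores = eps @ u                    # u^T eps_t, shape (n,)
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()         # softmax over positions
    return alpha @ H, alpha             # pooled vector and attention weights
```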
\subparagraph{External CRF}
For \emph{sequence labelling}, we use a CRF layer on top of the hidden vectors $\boldsymbol{h}_1, \boldsymbol{h}_2, \dots, \boldsymbol{h}_n$ for calculating the conditional probabilities of label sequences \cite{huang2015bidirectional,ma2016end}:
\[
P(\boldsymbol{Y}_1^n|\boldsymbol{h}, \boldsymbol{W}_s, \boldsymbol{b}_s) = \frac{\prod_{i=1}^n \psi_i(y_{i-1}, y_i, \boldsymbol{h})}{\sum_{\boldsymbol{Y'}_1^{n}} \prod_{i=1}^n \psi_i(y'_{i-1}, y'_i, \boldsymbol{h})}
\]
\[
\psi_i(y_{i-1}, y_i, \boldsymbol{h})= {\it exp}(\boldsymbol{W}_{s}^{y_{i-1},y_i} \boldsymbol{h}_i + b_{s}^{y_{i-1},y_i})
\]
where $\boldsymbol{W}_{s}^{y_{i-1},y_i}$ and $b_{s}^{y_{i-1},y_i}$ are parameters specific to two consecutive labels $y_{i-1}$ and $y_i$.
For training, standard log-likelihood loss is used with $L_2$ regularization given a set of gold-standard instances.
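The log-likelihood can be computed with the standard forward algorithm over label paths. The NumPy sketch below uses a simplified per-word emission score matrix in place of $\boldsymbol{W}_{s}^{y_{i-1},y_i} \boldsymbol{h}_i$; it illustrates the computation rather than reproducing our exact parameterisation:

```python
import numpy as np

def crf_log_likelihood(emit, trans, tags):
    # emit: (n, k) per-word label scores; trans: (k, k) scores for consecutive
    # label pairs; tags: gold label ids. Returns log P(tags | scores).
    tags = np.asarray(tags)
    n, k = emit.shape
    # Score of the gold path: emission terms plus transition terms.
    gold = emit[np.arange(n), tags].sum() + trans[tags[:-1], tags[1:]].sum()
    # Log partition function via the forward recursion.
    log_alpha = emit[0].copy()
    for i in range(1, n):
        log_alpha = np.logaddexp.reduce(
            log_alpha[:, None] + trans + emit[i][None, :], axis=0)
    return gold - np.logaddexp.reduce(log_alpha)
```

Summing $\exp$ of this quantity over all $k^n$ label paths yields $1$, which is a convenient sanity check for the implementation.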
\section{Experiments}
We empirically compare S-LSTMs and BiLSTMs on different classification and sequence labelling tasks.
All experiments are conducted using a GeForce GTX 1080 GPU with 8GB memory.
\begin{table}[t]
\centering
\scriptsize
\tabcolsep=0.03cm
\begin{tabular}{|ccc|c|c|c|c|c|c|}
\hline
\multicolumn{3}{|c|}{\textbf{Dataset}} &\multicolumn{2}{|c|}{\textbf{Training}} &\multicolumn{2}{|c|}{\textbf{Development}}& \multicolumn{2}{|c|}{\textbf{Test}}\\
\hline
\multicolumn{3}{|c|}{} &\textbf{\#sent}&\textbf{\#words} &\textbf{\#sent}&\textbf{\#words}& \textbf{\#sent}&\textbf{\#words}\\
\hline
\multicolumn{3}{|c|}{Movie review \cite{pang2008opinion}} &8527&201137&1066&25026&1066&25260\\
\hline
& \multicolumn{2}{|c|}{Books} &1400 &297K&200&59K&400&68K\\
& \multicolumn{2}{|c|}{Electronics} &1398 &924K&200&184K&400&224K\\
& \multicolumn{2}{|c|}{DVD}&1400 &1,587K&200&317K&400&404K\\
& \multicolumn{2}{|c|}{Kitchen}&1400 &769K&200&153K&400&195K\\
& \multicolumn{2}{|c|}{Apparel}&1400 &525K&200&105K&400&128K\\
& \multicolumn{2}{|c|}{Camera}&1397 &1,084K&200&216K&400&260K\\
Text & \multicolumn{2}{|c|}{Health}&1400&742K&200&148K&400&175K\\
Classification & \multicolumn{2}{|c|}{Music}&1400 &1,176K&200&235K&400&276K\\
\cite{liu2017adversarial} & \multicolumn{2}{|c|}{Toys} &1400&792K&200&158K&400&196K\\
& \multicolumn{2}{|c|}{Video}& 1400 &1,311K&200&262K&400&342K\\
& \multicolumn{2}{|c|}{Baby}& 1300 &855K&200&171K&400&221K\\
& \multicolumn{2}{|c|}{Magazines}& 1370 &1,033K&200&206K&400&264K\\
& \multicolumn{2}{|c|}{Software}& 1315 &1,143K&200&228K&400&271K\\
& \multicolumn{2}{|c|}{Sports}& 1400 &833K&200&183K&400&218K\\
& \multicolumn{2}{|c|}{IMDB}& 1400 &2,205K&200&507K&400&475K\\
& \multicolumn{2}{|c|}{MR}& 1400 &196K&200&41K&400&48K\\
\hline
\multicolumn{3}{|c|}{POS tagging \cite{marcus1993building}}&39831&950011&1699&40068&2415&56671\\
\hline
\multicolumn{3}{|c|}{NER \cite{tjong2003introduction}}&14987&204567 &3466&51578&3684&46666\\
\hline
\end{tabular}
\caption{\label{dataset_statistics}Dataset statistics}
\end{table}
\subsection{Experimental Settings}
{\bf Datasets}. We choose the movie review dataset of \newcite{pang2008opinion}, and additionally the 16 datasets of \newcite{liu2017adversarial} for classification evaluation.
We randomly split the movie review dataset into training (80\%), development (10\%) and test (10\%) sections, and the original split of \newcite{liu2017adversarial} for the 16 classification datasets.
For sequence labelling, we choose the Penn Treebank \cite{marcus1993building} POS tagging task and the CoNLL \cite{tjong2003introduction} NER task as our benchmarks.
For POS tagging, we follow the standard split \cite{manning2011part}, using sections 0 -- 18 for training, 19 -- 21 for development and 22 -- 24 for test.
For NER, we follow the standard split, and use the BIOES tagging scheme \cite{ratinov2009design}.
Statistics of the four datasets are shown in Table \ref{dataset_statistics}.
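Since CoNLL03 is distributed with BIO tags, conversion to the BIOES scheme is a simple deterministic mapping; a sketch (helper written for illustration, not our preprocessing code):

```python
def bio_to_bioes(tags):
    # Single-token entities become S-, entity-final tokens become E-.
    out = []
    for i, t in enumerate(tags):
        if t == "O":
            out.append(t)
            continue
        prefix, label = t.split("-", 1)
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        continues = nxt == "I-" + label   # does the entity continue?
        if prefix == "B":
            out.append(("B-" if continues else "S-") + label)
        else:  # prefix == "I"
            out.append(("I-" if continues else "E-") + label)
    return out
```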
{\bf Hyperparameters}. We initialise word embeddings using GloVe \cite{pennington2014glove} 300 dimensional embeddings.\footnote{https://nlp.stanford.edu/projects/glove/}
Embeddings are fine-tuned during model training for all tasks.
Dropout \cite{srivastava2014dropout} is applied to embedding hidden states, with a rate of 0.5.
All models are optimised using the Adam optimizer \cite{kingma2014adam}, with an initial learning rate of 0.001 and a decay rate of 0.97. Gradients are clipped at 3 and a batch size of 10 is adopted.
Sentences with similar lengths are batched together.
The L2 regularization parameter is set to 0.001.
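The length-based batching above can be sketched as follows (an illustrative helper, not our training code): sorting by length before slicing into batches minimises padding within each batch.

```python
def length_bucketed_batches(sentences, batch_size=10):
    # Return batches of sentence indices, grouped so that each batch
    # contains sentences of similar length.
    order = sorted(range(len(sentences)), key=lambda i: len(sentences[i]))
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
```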
\begin{table}[t]
\centering
\tabcolsep=0.1cm
\begin{tabular}{|ccc|ccc|}
\hline
\multicolumn{3}{|c|}{\textbf{Model}}& \textbf{Time (s)} & \textbf{Acc} & \textbf{\# Param}\\
\hline
\multicolumn{3}{|c|}{+0 dummy node}&56&81.76 &7,216K\\
\multicolumn{3}{|c|}{+1 dummy node}&65&82.64&8,768K\\
\multicolumn{3}{|c|}{+2 dummy node}&76&82.24 &10,321K\\
\hline
\multicolumn{3}{|c|}{Hidden size 100 }&42&81.75 &4,891K\\
\multicolumn{3}{|c|}{Hidden size 200}&54&82.04 &6,002K\\
\multicolumn{3}{|c|}{Hidden size 300}&65&82.64 &8,768K\\
\multicolumn{3}{|c|}{Hidden size 600}&175&81.84&17,648K\\
\multicolumn{3}{|c|}{Hidden size 900}&235&81.66 &33,942K\\
\hline
\multicolumn{3}{|c|}{Without $\langle \textrm{s} \rangle$, $\langle \textrm{/s} \rangle$}&63&82.36&8,768K\\
\multicolumn{3}{|c|}{With $\langle \textrm{s} \rangle$, $\langle \textrm{/s} \rangle$}&65&82.64&8,768K \\
\hline
\end{tabular}
\caption{\label{tab:movie_dev}Movie review \textsc{Dev} results of S-LSTM}
\end{table}
\begin{figure}
\vspace{-0.4em}
\centering
\includegraphics[width=0.9\linewidth]{iteration_window.pdf}
\vspace{-0.8em}
\caption{Accuracies with various window sizes and time steps on movie review development set}
\label{fig:iteration_window}
\end{figure}
\subsection{Development Experiments}
We use the movie review development data to investigate different configurations of S-LSTMs and BiLSTMs.
For S-LSTMs, the default configuration uses $\langle s \rangle$ and $\langle /s \rangle$ words for augmenting words of a sentence.
A hidden layer size of 300 and one sentence-level node are used.
\textbf{Hyperparameters}: Table \ref{tab:movie_dev} shows the development results of various S-LSTM settings, where Time refers to training time per epoch.
Without the sentence-level node, the accuracy of S-LSTM drops to 81.76\%, demonstrating the necessity of global information exchange.
Adding one additional sentence-level node as described in Section~\ref{sec:m-lstm} does not lead to accuracy improvements, although the number of parameters and decoding time increase accordingly.
As a result, we use only 1 sentence-level node for the remaining experiments.
The accuracy of S-LSTM increases as the hidden layer size for each node increases from 100 to 300, but does not further increase when the size grows beyond 300.
We fix the hidden size to 300 accordingly.
Without using $\langle s \rangle$ and $\langle /s \rangle$, the performance of S-LSTM drops from 82.64\% to 82.36\%, showing the effectiveness of having these additional nodes.
Hyperparameters for BiLSTM models are also set according to the development data, which we omit here.
{\bf State transition}. In Table \ref{tab:movie_dev}, the number of recurrent state transition steps of S-LSTM is decided according to the best development performance.
Figure \ref{fig:iteration_window} draws the development accuracies of S-LSTMs with various window sizes against the number of recurrent steps.
As can be seen from the figure, when the number of time steps increases from 1 to 11, the accuracies generally increase, before reaching a maximum value.
This shows the effectiveness of recurrent information exchange in S-LSTM state transition.
On the other hand, no significant differences are observed on the peak accuracies given by different window sizes, although a larger window size (e.g. 4) generally results in faster plateauing.
This can be explained by the intuition that information exchange between distant nodes can be achieved using more recurrent steps under a smaller window size, just as it can be achieved using fewer steps under a larger window size.
Considering efficiency, we choose a window size of 1 for the remaining experiments, setting the number of recurrent steps to 9 according to Figure \ref{fig:iteration_window}.
\begin{table}[t]
\centering
\tabcolsep=0.1cm
\begin{tabular}{|ccc|ccc|}
\hline
\multicolumn{3}{|c|}{\textbf{Model}}& \textbf{Time (s)} & \textbf{Acc} & \textbf{\# Param}\\
\hline
\multicolumn{3}{|c|}{LSTM}&67&80.72 &5,977K\\
\multicolumn{3}{|c|}{BiLSTM}&106&81.73 &7,059K\\
\multicolumn{3}{|c|}{2 stacked BiLSTM}&207&81.97&9,221K\\
\multicolumn{3}{|c|}{3 stacked BiLSTM}&310&81.53 &11,383K\\
\multicolumn{3}{|c|}{4 stacked BiLSTM}&411&81.37 &13,546K\\
\multicolumn{3}{|c|}{S-LSTM}&65&82.64* &8,768K\\
\hline
\multicolumn{3}{|c|}{CNN}&34&80.35&5,637K\\
\multicolumn{3}{|c|}{2 stacked CNN}&40&80.97 &5,717K\\
\multicolumn{3}{|c|}{3 stacked CNN}&47&81.46 &5,808K\\
\multicolumn{3}{|c|}{4 stacked CNN}&51&81.39 &5,855K\\
\hline
\multicolumn{3}{|c|}{Transformer (N=6)}&138&81.03&7,234K\\
\multicolumn{3}{|c|}{Transformer (N=8)}&174&81.86 &7,615K\\
\multicolumn{3}{|c|}{Transformer (N=10)}&214&81.63 &8,004K\\
\hline
\multicolumn{3}{|c|}{BiLSTM+Attention}&126&82.37 &7,419K\\
\multicolumn{3}{|c|}{S-LSTM+Attention}&87&83.07* &8,858K\\
\hline
\end{tabular}
\caption{\label{tab:movie_dev_2}Movie review development results}
\end{table}
\textbf{S-LSTM vs BiLSTM}: As shown in Table \ref{tab:movie_dev_2}, BiLSTM gives significantly better accuracies compared to uni-directional LSTM\footnote{$p<0.01$ using t-test. For the remainder of this paper, we use the same measure for statistical significance.}, with the training time per epoch growing from 67 seconds to 106 seconds.
Stacking 2 layers of BiLSTM gives further improvements on the development results, with a larger training time of 207 seconds per epoch.
Stacking 3 layers of BiLSTM does not further improve the results.
In contrast, S-LSTM gives a development result of 82.64\%, which is significantly better compared to 2-layer stacked BiLSTM, with a smaller number of model parameters and a shorter time of 65 seconds.
We additionally make comparisons with stacked CNNs and hierarchical attention \cite{vaswani2017attention}, shown in Table \ref{tab:movie_dev_2} (the CNN and Transformer rows), where $N$ indicates the number of attention layers.
CNN is the most efficient among all models compared, with the smallest model size.
On the other hand, a 3-layer stacked CNN gives an accuracy of 81.46\%, which is lower than BiLSTM, hierarchical attention and S-LSTM.
The best performance of hierarchical attention is between single-layer and two-layer BiLSTMs in terms of both accuracy and efficiency.
S-LSTM gives significantly better accuracies compared with both CNN and hierarchical attention.
{\bf Influence of external attention mechanism}.
Table \ref{tab:movie_dev_2} additionally shows the results of BiLSTM and S-LSTM when external attention is used as described in Section~\ref{sec:ext_attn}.
Attention leads to improved accuracies for both BiLSTM and S-LSTM in classification, with S-LSTM still outperforming BiLSTM significantly.
The result suggests that external techniques such as attention can play orthogonal roles compared with internal recurrent structures, therefore benefiting both BiLSTMs and S-LSTMs.
Similar observations are found using external CRF layers for sequence labelling.
\begin{table}[t]
\centering
\tabcolsep=0.03cm
\begin{tabular}{ccccc|c|c|}
\hline
\multicolumn{4}{|l|}{\textbf{Model}}& \textbf{Accuracy} & \textbf{Train (s)}&\textbf{Test (s)}\\
\hline
\multicolumn{4}{|l|}{\,\,\newcite{socher2011semi}} &77.70& --& --\\
\multicolumn{4}{|l|}{\,\,\newcite{socher2012semantic}} &79.00& --& --\\
\multicolumn{4}{|l|}{\,\,\newcite{kim2014convolutional}} &81.50& --& --\\
\multicolumn{4}{|l|}{\,\,\newcite{qian2016linguistically}} &81.50& --& --\\
\hline
\multicolumn{4}{|l|}{\,\,BiLSTM}&81.61&51&1.62\\
\multicolumn{4}{|l|}{\,\,2 stacked BiLSTM}&81.94&98&3.18\\
\multicolumn{4}{|l|}{\,\,3 stacked BiLSTM}&81.71&137&4.67\\
\multicolumn{4}{|l|}{\,\,3 stacked CNN}&81.59&31&1.04\\
\multicolumn{4}{|l|}{\,\,Transformer (N=8)}&81.97&89&2.75\\
\multicolumn{4}{|l|}{\,\,S-LSTM}&\textbf{82.45}* & 41&1.53\\
\hline
\end{tabular}
\caption{\label{movie_review_classification}Test set results on movie review dataset (* denotes significance in all tables).}
\end{table}
\begin{table*}[t]
\centering
\begin{tabular}{cccc|c|c|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\textbf{Dataset}}& \textbf{S-LSTM}&\textbf{Time (s)}&\textbf{BiLSTM} &\textbf{Time (s)}&\textbf{2 BiLSTM} &\textbf{Time (s)}\\
\hline
\multicolumn{4}{|c|}{Camera}&\textbf{90.02}*&50 (2.85)&87.05&115 (8.37)&88.07&221 (16.1)\\
\multicolumn{4}{|c|}{Video}&\textbf{86.75}*&55 (3.95)&84.73&140 (12.59)&85.23&268 (25.86)\\
\multicolumn{4}{|c|}{Health}&\textbf{86.5}&37 (2.17)&85.52&118 (6.38)&85.89&227 (11.16)\\
\multicolumn{4}{|c|}{Music}&\textbf{82.04}*&52 (3.44)&78.74&185 (12.27)&80.45&268 (23.46)\\
\multicolumn{4}{|c|}{Kitchen}&\textbf{84.54}*&40 (2.50)&82.22&118 (10.18)&83.77&225 (19.77)\\
\multicolumn{4}{|c|}{DVD}&\textbf{85.52}*&63 (5.29)&83.71&166 (15.42)&84.77&217 (28.31)\\
\multicolumn{4}{|c|}{Toys}&85.25&39 (2.42)&85.72&119 (7.58)&\textbf{85.82}&231 (14.83)\\
\multicolumn{4}{|c|}{Baby}&\textbf{86.25}*&40 (2.63)&84.51&125 (8.50)&85.45&238 (17.73)\\
\multicolumn{4}{|c|}{Books}&\textbf{83.44}*&64 (3.64)&82.12&240 (13.59)&82.77&458 (28.82)\\
\multicolumn{4}{|c|}{IMDB}&\textbf{87.15}*&67 (3.69)&86.02&248 (13.33)&86.55&486 (26.22)\\
\multicolumn{4}{|c|}{MR}&\textbf{76.2}&27 (1.25)&75.73&39 (2.27)&75.98&72 (4.63)\\
\multicolumn{4}{|c|}{Apparel}&85.75&35 (2.83)&86.05&119 (11.98)&\textbf{86.35}*&229 (22.76)\\
\multicolumn{4}{|c|}{Magazines}&\textbf{93.75}*&51 (2.93)&92.52&214 (11.06)&92.89&417 (22.77)\\
\multicolumn{4}{|c|}{Electronics}&\textbf{83.25}*&47 (2.55)&82.51&195 (10.14)&82.33&356 (19.77)\\
\multicolumn{4}{|c|}{Sports}&\textbf{85.75}*&44 (2.64)&84.04&172 (8.64)&84.78&328 (16.34)\\
\multicolumn{4}{|c|}{Software}&\textbf{87.75}*&54 (2.98)&86.73&245 (12.38)&86.97&459 (24.68)\\
\hline
\multicolumn{4}{|c|}{\textbf{Average}} &\textbf{85.38}*&47.30 (2.98)&84.01&153.48 (10.29)&84.64&282.24 (20.2)\\
\hline
\end{tabular}
\caption{\label{classification}Results on the 16 datasets of \newcite{liu2017adversarial}. Time format: train (test)}
\end{table*}
\begin{figure}[t]
\vspace{-0.6em}
\begin{center}
\hspace{-.0em}
\subfigure[CoNLL03]{\includegraphics[width=0.9\linewidth]{conll_dev.pdf}}\hskip 0.1pt
\subfigure[WSJ]{\includegraphics[width=0.9\linewidth]{pos_dev}} \hskip 1pt
\end{center}
\vspace{-1em}
\caption{\label{sequence_dev}Sequence labelling development results.}
\end{figure}
\begin{table}[t]
\centering
\tabcolsep=0.03cm
\begin{tabular}{|l|c|c|c|}
\hline
\textbf{Model} & \textbf{Accuracy}&\textbf{Train (s)} &\textbf{Test (s)}\\
\hline
\newcite{manning2011part} & 97.28&--&--\\
\newcite{collobert2011natural} & 97.29&--&--\\
\newcite{sun2014structure} & 97.36&--&--\\
\newcite{sogaard2011semisupervised} & 97.50 &--&--\\
\newcite{huang2015bidirectional} & \textbf{97.55} &--&--\\
\newcite{ma2016end} & \textbf{97.55} &--&--\\
\newcite{yang2017transfer} & \textbf{97.55} &--&--\\
\hline
BiLSTM&97.35&254&22.50\\
2 stacked BiLSTM&97.41&501&43.99\\
3 stacked BiLSTM&97.40&746&64.96\\
S-LSTM&\textbf{97.55}&237&22.16\\
\hline
\end{tabular}
\caption{\label{wsj_test}Results on PTB (POS tagging)}
\end{table}
\begin{table}[t]
\centering
\tabcolsep=0.03cm
\begin{tabular}{|l|c|c|c|}
\hline
\textbf{Model}& \textbf{F1} & \textbf{Train (s)} & \textbf{Test (s)} \\
\hline
\newcite{collobert2011natural} & 89.59 &--&--\\
\newcite{passos2014lexicon} &90.90&--&--\\
\newcite{luo2015joint} & 91.20&--&--\\
\newcite{huang2015bidirectional} & 90.10&--&--\\
\newcite{lample2016neural} & 90.94&--&--\\
\newcite{ma2016end} & 91.21&--&--\\
\hline
\newcite{yang2017transfer} & 91.26 & -- & -- \\
\newcite{rei:2017:Long} & 86.26 & -- & -- \\
\newcite{peters2017semi} & \textbf{91.93} & -- & -- \\
\hline
BiLSTM&90.96&82&9.89\\
2 stacked BiLSTM&91.02&159&18.88\\
3 stacked BiLSTM&91.06&235&30.97\\
S-LSTM & \textbf{91.57}*&79&9.78\\
\hline
\end{tabular}
\caption{Results on CoNLL03 (NER)}
\label{conll_test}
\vspace{-1.0em}
\end{table}
\subsection{Final Results for Classification}
The final results on the movie review and rich text classification datasets are shown in Tables \ref{movie_review_classification} and \ref{classification}, respectively.
In addition to the training time per epoch, test times are also reported.
We use the best settings on the movie review development dataset for both S-LSTMs and BiLSTMs.
The step number for S-LSTMs is set to 9.
As shown in Table \ref{movie_review_classification}, the final results on the movie review dataset are consistent with the development results, where S-LSTM outperforms BiLSTM significantly, with a faster speed.
Observations on CNN and hierarchical attention are consistent with the development results.
S-LSTM also gives highly competitive results when compared with existing methods in the literature.
As shown in Table \ref{classification}, among the 16 datasets of \newcite{liu2017adversarial}, S-LSTM gives the best results on 12, compared with BiLSTM and 2 layered BiLSTM models.
The average accuracy of S-LSTM is 85.6\%, significantly higher than the 84.9\% of the 2-layer stacked BiLSTM.
The 3-layer stacked BiLSTM gives an average accuracy of 84.57\%, which is lower than that of the 2-layer stacked BiLSTM, at a training time of 423.6 seconds per epoch.
The relative speed advantage of S-LSTM over BiLSTM is larger on the 16 datasets as compared to the movie review test set.
This is because the average length of inputs is larger on the 16 datasets (see Section \ref{sec:analysis}).
\subsection{Final Results for Sequence Labelling}
Bi-directional RNN-CRF structures, and in particular BiLSTM-CRFs, have achieved the state of the art in the literature for sequence labelling tasks, including POS-tagging and NER.
We compare S-LSTM-CRF with BiLSTM-CRF for sequence labelling, using the same settings as decided on the movie review development experiments for both BiLSTMs and S-LSTMs.
For the latter, we decide the number of recurrent steps on the respective development sets for sequence labelling.
The POS accuracies and NER F1-scores against the number of recurrent steps are shown in Figure \ref{sequence_dev} (a) and (b), respectively.
For POS tagging, the best step number is set to 7, with a development accuracy of 97.58\%.
For NER, the step number is set to 9, with a development F1-score of 94.98\%.
As can be seen in Table \ref{wsj_test}, S-LSTM gives significantly better results compared with BiLSTM on the WSJ dataset.
It also gives competitive accuracies as compared with existing methods in the literature.
Stacking two layers of BiLSTMs improves over a one-layer BiLSTM, but the accuracy does not improve further with three stacked layers.
For NER (Table \ref{conll_test}), S-LSTM gives an F1-score of 91.57\% on the CoNLL test set, which is significantly better compared with BiLSTMs.
Stacking more layers of BiLSTMs leads to slightly better F1-scores compared with a single-layer BiLSTM.
Our BiLSTM results are comparable to the results reported by \newcite{ma2016end} and \newcite{lample2016neural}, who also use bidirectional RNN-CRF structures.
In contrast, S-LSTM gives the best reported results under the same settings.
In the second section of Table \ref{conll_test}, \newcite{yang2017transfer} use cross-domain data, obtaining an F-score of 91.26\%; \newcite{rei:2017:Long} perform multi-task learning using additional language model objectives, obtaining an F-score of 86.26\%; \newcite{peters2017semi} leverage character-level language models, obtaining an F-score of 91.93\%, which is the current best result on the dataset.
All three models are based on BiLSTM-CRF.
These semi-supervised learning techniques are, however, orthogonal to our work, and can potentially be applied to S-LSTM as well.
\subsection{Analysis}
\label{sec:analysis}
\begin{figure}[t]
\vspace{-0.6em}
\begin{center}
\hspace{-.0em}
\subfigure[Movie review]{\includegraphics[width=0.9\linewidth]{length.pdf}}\hskip 0.1pt
\subfigure[CoNLL03]{\includegraphics[width=0.9\linewidth]{conll_length.pdf}} \hskip 1pt
\end{center}
\vspace{-1em}
\caption{\label{fig:length_analysis}Accuracies against sentence length.}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{length_time.pdf}
\end{center}
\vspace{-1.5em}
\caption{\label{fig:length_time}Time against sentence length.}
\end{figure}
Figure \ref{fig:length_analysis} (a) and (b) show the accuracies against the sentence length on the movie review and CoNLL datasets, respectively, where test samples are binned in batches of 80.
We find that the performance of both S-LSTM and BiLSTM decreases as the sentence length increases.
S-LSTM nevertheless demonstrates relatively better robustness than BiLSTM.
This confirms our intuition that a sentence-level node can facilitate better non-local communication.
Figure \ref{fig:length_time} shows the training time per epoch of S-LSTM and BiLSTM on sentences with different lengths on the 16 classification datasets.
To make these comparisons, we mix all training instances, sort them by length, and divide them into 10 equal-sized groups, whose median sentence lengths are shown.
As can be seen from the figure, the speed advantage of S-LSTM is larger when the size of the input text increases, thanks to a fixed number of recurrent steps.
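The grouping procedure described above can be sketched as follows (a minimal illustration with synthetic sentence lengths; this is our own sketch, not the evaluation code used for the paper):

```python
def group_by_length(lengths, n_groups=10):
    """Sort lengths and split them into n_groups equal-sized groups;
    return the groups together with the median length of each group."""
    ordered = sorted(lengths)
    size = len(ordered) // n_groups
    groups = [ordered[i * size:(i + 1) * size] for i in range(n_groups)]
    medians = [g[len(g) // 2] for g in groups]
    return groups, medians

# synthetic sentence lengths, 1..100 tokens
groups, medians = group_by_length(list(range(1, 101)))
```

Timing each group separately then exposes how the per-sentence cost of each encoder scales with input length.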
Similar to hierarchical attention \citep{vaswani2017attention}, S-LSTM has a relative disadvantage compared with BiLSTM: its memory consumption is larger.
For example, over the movie review development set, the actual GPU memory consumption of S-LSTM, BiLSTM, 2-layer stacked BiLSTM and 4-layer stacked BiLSTM is 252M, 89M, 146M and 253M, respectively.
This is due to the fact that, as in hierarchical attention, computation in S-LSTM is performed in parallel.
\section{Conclusion}
We have investigated S-LSTM, a recurrent neural network for encoding sentences, which offers richer contextual information exchange with more parallelism compared to BiLSTMs.
Results on a range of classification and sequence labelling tasks show that S-LSTM outperforms BiLSTMs using the same number of parameters, demonstrating that S-LSTM can be a useful addition to the neural toolbox for encoding sentences.
The structured nature of S-LSTM states allows straightforward extension to tree structures, resulting in highly parallelisable tree LSTMs.
We leave such investigation to future work.
Future directions also include applying S-LSTM to more NLP tasks, such as machine translation.
\section*{Acknowledgements}
We thank the anonymous reviewers for their constructive and thoughtful comments.
\section{Estimating separations with a quantum pulse gate}\label{sec:QPGestimator}
In this section, we derive Eq.~\eqref{eq:QPGestimator} of the main text, which provides the separation estimators $\hat{\textgoth{s:}}_\nu$ and $\hat{\textgoth{s:}}_t$ for measurements made with a quantum pulse gate. The quantum pulse gate operation relies on a large discrepancy between the group velocities of the input and upconverted signals, which manifests in the energy picture as a much larger input acceptance bandwidth than output signal bandwidth. In practice, this discrepancy is of course finite, which places limitations on the achievable time and frequency resolutions. Here, starting from the basic nonlinear interaction, we outline these limitations.
In our treatment herein, we assume that the three-field interaction takes place inside a single-mode $\chi^{(2)}$ waveguide, such that we may neglect the spatial modes involved. We also assume that we are working in the low-efficiency regime, such that a first-order approach is sufficient~\cite{reddy2017engineering,ansari2018tailoring}. We label the modes as the input (``1''), the QPG pump (``2''), and the upconverted output (``3''), and denote detunings from the central frequencies as $\tilde\nu=\nu-\nu_0$. In this case, the upconverted spectral amplitude $\gamma(\tilde\nu_3)$ is related to the spectral amplitude of the QPG pump $\alpha(\tilde\nu_2)$ and the input signal $\psi(\tilde\nu_1)$ as \begin{equation}\gamma(\tilde\nu_3)=\theta\int\mathrm{d}\tilde\nu_1\,H(\tilde\nu_1,\tilde\nu_3)\alpha(\tilde\nu_3-\tilde\nu_1)\psi(\tilde\nu_1),\label{eq:threewavemixing}\end{equation} where energy conservation $\tilde\nu_2=\tilde\nu_3-\tilde\nu_1$ has been accounted for, $\theta$ is a coupling constant representing factors such as the material nonlinearity, and $H(\tilde\nu_1,\tilde\nu_3)$ is the phasematching function, characterized by the relationships between the wavenumbers $k_j(\tilde\nu_j)=\frac{2\pi\nu_jn_j(\tilde\nu_j)}{c}$ of the interacting fields.
If the process is phasematched at the central frequencies through periodic poling and chromatic dispersion within each field can be neglected, the phasematching function for an interaction length $L$ can be expressed as \begin{equation}H(\tilde\nu_1,\tilde\nu_3)\propto L\,\mathrm{sinc}\left(\frac{L\left[(k'_1-k'_2)\tilde\nu_1-(k'_3-k'_2)\tilde\nu_3\right]}{2}\right)\end{equation} where $k'_j=\frac{\partial k_j}{\partial\nu_j}\big\rvert_{\nu_{0,j}}=\frac{1}{2\pi u_{j}}$ is inversely proportional to the group velocity $u_{j}$. If the input signal and QPG pump are group-velocity matched $k'_1=k'_2$, the phasematching function simplifies to a function of only the output frequency $\tilde\nu_3$. If we use a bandpass filter to remove the side lobes of the sinc function, we can approximate the phasematching function as a Gaussian, \begin{equation}H(\tilde\nu_3)\approx L\,e^{-\eta\frac{(L(k'_3-k'_1)\tilde\nu_3)^2}{4}}\vcentcolon= L\,e^{-\frac{\tilde\nu_3^2}{4\sigma_{\mathrm{PM}}^2}},\end{equation} where $\sigma_{\mathrm{PM}}$ is the RMS phasematching bandwidth and $\eta\approx0.193$.
We assume that the input signal wavefunction is a Gaussian pulse with some offset $\delta\nu$ from the perfectly phasematched frequencies and a small time delay $\delta t$ relative to the QPG pump pulse, which we express as \begin{equation}\psi(\tilde\nu_1)=\frac{1}{(2\pi\sigma_\nu^2)^\frac{1}{4}}\exp\left[-\frac{(\tilde\nu+\delta\nu)^2}{4\sigma_\nu^2}-i2\pi\tilde\nu\delta t\right].\end{equation} Note that the RMS width of the pulse in time is $\sigma_t=1/(4\pi\sigma_\nu)$. The QPG pump pulse is shaped to the first two Hermite-Gauss temporal modes with bandwidth $\sigma_2$, given by \begin{equation}\begin{split} \alpha_{\scalerel*{\includegraphics{figures/hg0_small.pdf}}{p}}(\tilde\nu_2) &= \frac{1}{(2\pi\sigma_2^2)^\frac{1}{4}}\exp\left[-\frac{\tilde\nu_2^2}{4\sigma_2^2}\right] \\ \alpha_{\scalerel*{\includegraphics{figures/hg1_small.pdf}}{b}}(\tilde\nu_2) &= \frac{\tilde\nu_2}{(2\pi\sigma_2^6)^\frac{1}{4}}\exp\left[-\frac{\tilde\nu_2^2}{4\sigma_2^2}\right] .\end{split}\end{equation} Substituting these and the phasematching function into Eq.~\eqref{eq:threewavemixing} and finding the relative upconversion probability as $P=\int\mathrm{d}\nu_3\,|\gamma(\tilde\nu_3)|^2$, the ratio of the upconversion probabilities for the first two modes is found to be \begin{equation}\begin{array}{ccl} \frac{P_{\scalerel*{\includegraphics{figures/hg1_small.pdf}}{b}}}{P_{\scalerel*{\includegraphics{figures/hg0_small.pdf}}{p}}}&=& \sigma_2^2\left[\frac{\sigma_\nu^2+16\pi^2\delta t^2\sigma_\nu^2+\sigma_2^2}{(\sigma_\nu^2+\sigma_2^2)^2}+\frac{\delta\nu^2-\sigma_\nu^2-\sigma_2^2-\sigma_{\mathrm{PM}}^2}{(\sigma_\nu^2+\sigma_2^2+\sigma_{\mathrm{PM}}^2)^2}\right]\\ &\overset{\sigma_2=\sigma_\nu}{=}& \frac{\sigma_\mathrm{PM}^2}{2(2\sigma_\nu^2+\sigma_\mathrm{PM}^2)} + 4\pi^2\delta t^2\sigma_\nu^2+\frac{\delta\nu^2\sigma_\nu^2}{2(2\sigma_\nu^2+\sigma_\mathrm{PM}^2)^2}\\ &\overset{\sigma_\nu^2\gg\sigma_\mathrm{PM}^2}{\approx}&\frac{\sigma_\mathrm{PM}^2}{4\sigma_\nu^2} + \frac{\delta 
t^2}{4\sigma_t^2} + \frac{\delta\nu^2}{4\sigma_\nu^2}.\label{eq:QPGestimatorDerivation}\end{array}\end{equation} To get from the first line to the second, we have set the bandwidth of the QPG pump to be equal to that of the input signal, ensuring that the two pulses have matched temporal-mode bases. To get from the second line to the third, we have assumed that the phasematching bandwidth is much narrower than that of the input pulses, such that ${2\sigma_\nu^2+\sigma_\mathrm{PM}^2\approx2\sigma_\nu^2}$. Since $P_{\scalerel*{\includegraphics{figures/hg0_small.pdf}}{p}}$ and $P_{\scalerel*{\includegraphics{figures/hg1_small.pdf}}{b}}$ are both symmetric functions of $\delta\nu$ or $\delta t$, Eq.~\eqref{eq:QPGestimatorDerivation} holds for incoherent mixtures of positive and negative shifts, and Eq.~\eqref{eq:QPGestimator} can be retrieved by substituting $\delta\nu\mapsto\frac{\textgoth{s:}_\nu}{2}$ and $\delta t\mapsto \frac{\textgoth{s:}_t}{2}$. It is apparent that the minimum resolvable shift will be on the order of $\sigma_\mathrm{PM}$ in frequency and $\frac{\sigma_\mathrm{PM}}{\sigma_\nu}\sigma_t$ in time, and that any misalignment in frequency or time will adversely affect the resolution of measurements in the other setting.
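As a numerical sanity check on the final approximation, the matched-bandwidth ratio (second line of the derivation) and its narrow-phasematching limit (third line) can be compared directly. The sketch below is ours, with illustrative parameter values and restricted to $\delta\nu=0$ for simplicity:

```python
import math

def ratio_exact(d_t, d_nu, s_nu, s_pm):
    """P1/P0 for matched bandwidths (second line of the derivation):
    d_t, d_nu are the time/frequency offsets, s_nu the input RMS
    bandwidth, s_pm the RMS phasematching bandwidth."""
    denom = 2 * s_nu ** 2 + s_pm ** 2
    return (s_pm ** 2 / (2 * denom)
            + 4 * math.pi ** 2 * d_t ** 2 * s_nu ** 2
            + d_nu ** 2 * s_nu ** 2 / (2 * denom ** 2))

def ratio_approx(d_t, d_nu, s_nu, s_pm):
    """Narrow-phasematching limit (third line), with s_t = 1/(4 pi s_nu)."""
    s_t = 1.0 / (4 * math.pi * s_nu)
    return (s_pm ** 2 / (4 * s_nu ** 2)
            + d_t ** 2 / (4 * s_t ** 2)
            + d_nu ** 2 / (4 * s_nu ** 2))

# For s_nu >> s_pm and d_nu = 0 the two expressions agree closely.
r_exact = ratio_exact(1e-3, 0.0, 10.0, 0.1)
r_approx = ratio_approx(1e-3, 0.0, 10.0, 0.1)
```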
\section{Measurement tomography methods}\label{sec:MeasTomo}
In this section, we describe the measurement tomography method used to retrieve an accurate separation estimator from the directly measured data. To characterize the device, we implement projections onto the three lowest-order Hermite-Gauss modes HG$_0$, HG$_1$, and HG$_2$, which describe the ideal measurement of the input signal. We denote by $q_j(\textgoth{s:})$ the probability of the $j$th measurement output given the true separation is $\textgoth{s:}$. For a Gaussian point-spread function (PSF) of width $\sigma$, this probability reads
\begin{equation}
\label{basis}
q_j(\textgoth{s:})=\frac{1}{j!}\left(\frac{\textgoth{s:}}{4\sigma}\right)^{2j} e^{-\left(\frac{\textgoth{s:}}{4\sigma}\right)^2},\qquad j=0,1,2\,.
\end{equation}
Due to unavoidable imperfections, the actual detection probabilities $p_j(\textgoth{s:})$ differ slightly from $q_j(\textgoth{s:})$, and the measurement device needs to be characterized before use. Assuming the setup works well, that is, the differences between the actual and target distributions $p_j(\textgoth{s:})$ and $q_j(\textgoth{s:})$ are small, we expand the former using the latter as a basis as follows
\begin{equation}
p_j(\textgoth{s:})=\sum\limits_{k=0}^M c_{jk}\, q_k(\textgoth{s:}),\qquad j=0,1,2\,.
\end{equation}
Having repeatedly measured a set of known separations $\textgoth{s:}=\{\textgoth{s:}_1,\textgoth{s:}_2,\ldots,\textgoth{s:}_N\}$, the probabilities $p_j$ can be estimated by the corresponding relative frequencies $f_j=\langle n_j\rangle/\sum_j \langle n_j\rangle$. Denoting further $f^j_\alpha= f_j(\textgoth{s:}_\alpha)$, $q_{k\alpha}= q_k(\textgoth{s:}_\alpha)$, and $c^j_k=c_{jk}$,
we obtain three sets of linear equations to be solved for the set of unknown detection coefficients $c^j_k$
\begin{equation}
f^j_\alpha=\sum_{k=0}^{M} q_{k\alpha}\, c^j_k,\qquad j=0,1,2\,.
\end{equation}
The pseudo-inverse can be used to obtain standard solutions minimizing the $L_2$ norm,
\begin{equation}
\mathbf{c}^j=Q^{+} \mathbf{f}^j,\qquad j=0,1,2\,.
\end{equation}
It turns out that just a few ($M\approx 4$) basis functions in Eq.~\eqref{basis} are required to obtain excellent fits of the detected relative frequencies $f_j$ in terms of the corresponding theoretical models $p_j$ for all measured separations in the region of interest $\textgoth{s:}\in [0,2]$.
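The calibration above amounts to an ordinary least-squares fit per output channel. A minimal pure-Python sketch (our own illustrative code, using the normal equations in place of the pseudo-inverse and treating one channel at a time):

```python
import math

def q(j, s, sigma=1.0):
    """Ideal outcome probabilities for a Gaussian PSF of width sigma,
    q_j(s) = (1/j!) (s/(4 sigma))^(2j) exp(-(s/(4 sigma))^2)."""
    x = (s / (4.0 * sigma)) ** 2
    return x ** j / math.factorial(j) * math.exp(-x)

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small
    square system A c = b."""
    n = len(b)
    aug = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col:
                fac = aug[r][col] / aug[col][col]
                aug[r] = [a - fac * p for a, p in zip(aug[r], aug[col])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

def calibrate(seps, freqs, n_basis=5):
    """Least-squares fit of one channel's measured frequencies f(s_alpha)
    as a combination of the ideal basis q_k(s_alpha), via the normal
    equations c = (Q^T Q)^{-1} Q^T f."""
    Q = [[q(k, s) for k in range(n_basis)] for s in seps]
    QtQ = [[sum(row[i] * row[j] for row in Q) for j in range(n_basis)]
           for i in range(n_basis)]
    Qtf = [sum(row[i] * f for row, f in zip(Q, freqs)) for i in range(n_basis)]
    return solve(QtQ, Qtf)

# Synthetic check with three basis functions: an ideal channel
# f_0(s) = q_0(s) over s in [0, 2] is recovered as c = (1, 0, 0).
seps = [0.1 * i for i in range(21)]
coeffs = calibrate(seps, [q(0, s) for s in seps], n_basis=3)
```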
We next proceed to the parameter estimation step using our characterized measurement. Each measurement returns three numbers, $n_0$, $n_1$, and $n_2$. Assuming Poissonian statistics, the separation is estimated by maximizing the log-likelihood
\begin{equation}
\hat{\textgoth{s:}}=\arg\max\limits_{\textgoth{s:}}\left\{ \sum_j n_j \log\left[\frac{p_j(\textgoth{s:})}{\sum_{j'} p_{j'}(\textgoth{s:})}\right]\right\}\,
\end{equation}
subject to $\hat{\textgoth{s:}}\ge 0$ using a suitable optimization tool. Finally, for every true separation we calculate the statistics of the estimates and compare the measurement errors to the relevant classical and quantum resolution limits.
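A minimal sketch of this maximum-likelihood step, using a grid search over $\textgoth{s:}\ge 0$ in place of a dedicated optimization tool and the ideal $q_j$ as a stand-in for the calibrated $p_j$ (both simplifications are ours):

```python
import math

def p_model(j, s, sigma=1.0):
    """Response model of the characterized device; the ideal q_j(s)
    is used here as a stand-in for the calibrated p_j(s)."""
    x = (s / (4.0 * sigma)) ** 2
    return x ** j / math.factorial(j) * math.exp(-x)

def estimate(counts, grid_max=2.0, steps=2000):
    """Maximum-likelihood separation over a grid of s > 0:
    argmax_s sum_j n_j log[p_j(s) / sum_j' p_j'(s)]."""
    best_s, best_ll = 0.0, -float("inf")
    for i in range(1, steps + 1):  # skip s = 0, where p_1 and p_2 vanish
        s = grid_max * i / steps
        ps = [p_model(j, s) for j in range(3)]
        norm = sum(ps)
        ll = sum(n * math.log(p / norm) for n, p in zip(counts, ps))
        if ll > best_ll:
            best_s, best_ll = s, ll
    return best_s

# Counts drawn in proportion to the model at s = 1 are estimated
# back near s = 1.
counts = [round(1e6 * p_model(j, 1.0)) for j in range(3)]
s_hat = estimate(counts)
```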
\end{appendix}
\end{document}
\section{Introduction}
Current popular optimization methods for Deep Neural Networks (DNNs) are mainly first order methods with heuristics, e.g., SGD \cite{bottou2010large}, Adagrad \cite{duchi2011adaptive}, Adam \cite{kingma2014adam}, RMSProp \cite{tieleman2012lecture}, AdaDelta \cite{zeiler2012adadelta}, etc. Although second order methods converge much faster than first order methods, they are difficult to apply to DNNs: since DNNs are typically non-convex, the Hessian matrix might not be positive semi-definite (PSD).
There have been attempts to apply the second order method Limited memory BFGS (LBFGS), as shipped with the popular deep learning framework PyTorch, to optimizing DNNs. However, when optimizing DNNs, the Hessian approximation of LBFGS may lose positive semi-definiteness, and the objective loss can then increase to a very large value. Moreover, the optimization process of LBFGS is highly sensitive to the learning rate: if the learning rate is too large, training is unstable; otherwise, the objective function does not decrease.
Recently, Wang et al.~\cite{wang2017stochastic} proposed a second order algorithm, stochastic damped LBFGS (SdLBFGS), for optimizing non-convex functions. The Hessian approximation in each step of SdLBFGS is guaranteed to be PSD. However, the convergence is not fully proved, and we observed non-convergence and crashes in experiments.
Based on the original SdLBFGS algorithm, we modified two important steps in the algorithm:
\begin{itemize}
\item[1] Initialize the Hessian approximation at each step with the identity matrix.
\item[2] Normalize the direction computed at each step using the $l_2$ norm.
\end{itemize}
We implemented the original algorithm and our improved algorithm in PyTorch\footnote{Code is available at https://github.com/harryliew/SdLBFGS}. We denote the original algorithm as SdLBFGS0 and our improved algorithm as SdLBFGS.
By initializing the Hessian at each step with the identity matrix, SdLBFGS converges better than SdLBFGS0; by normalizing the direction, SdLBFGS is much more stable than SdLBFGS0, even without line search.
In the following sections, we will first briefly introduce the SdLBFGS0 algorithm. Then, we will show our improvements of the SdLBFGS algorithm.
Moreover, we will present experiments on a simple 2D non-convex function and on the CIFAR10 \cite{krizhevsky2009learning} and MNIST \cite{lecun1998gradient} datasets, comparing our implementation with SGD, Adagrad and LBFGS in PyTorch. Finally, we will conclude the paper.
\section{The Original SdLBFGS0 Algorithm}
Since SdLBFGS0 is based on LBFGS with minor modifications, we shall first introduce LBFGS.
\subsection{Limited memory BFGS (LBFGS)}
Denote by $d$ the dimension of the optimization variable. Since BFGS needs to store the Hessian approximation explicitly, its space complexity per step is $O(d^2)$, which is very expensive when optimizing DNNs. Limited memory BFGS (LBFGS) was proposed to resolve this issue. Instead of storing the Hessian, LBFGS computes the approximation of the inverse Hessian from the $p$ most recently stored pairs $s_j$ and $y_j$ by
\begin{eqnarray}
\label{eq: inv_H_lbfgs}
H_{k,i} &=& (I - \rho_j s_j y^T_j) H_{k, i-1} (I - \rho_j y_j s^T_j) + \rho_j s_j s^T_j,
j = k-(c-i+1), \forall 1\leq i \leq c.
\end{eqnarray}
where $c = \min\{k,p\}$, $s_j = x_j - x_{j-1}$ and $y_j = g_j - g_{j-1}$, where $g_j$ is the sub-gradient in step $j$. Therefore, the space complexity in each step is reduced from $O(d^2)$ to $O(pd)$.
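In practice, Eq. (\ref{eq: inv_H_lbfgs}) is not formed explicitly; the product $H_k g_k$ is evaluated with the standard two-loop recursion. A minimal pure-Python sketch (illustrative only, not the PyTorch implementation):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion evaluating H_k g from the p most recent
    pairs (s_j, y_j), using O(p d) memory instead of O(d^2)."""
    rho = [1.0 / dot(y, s) for s, y in zip(s_list, y_list)]
    q, alphas = list(g), []
    # first loop: newest pair to oldest
    for s, y, r in reversed(list(zip(s_list, y_list, rho))):
        a = r * dot(s, q)
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, y)]
    # common initial scaling H_{k,0} = (s^T y / y^T y) I
    gamma = dot(s_list[-1], y_list[-1]) / dot(y_list[-1], y_list[-1])
    z = [gamma * qi for qi in q]
    # second loop: oldest pair to newest
    for (s, y, r), a in zip(zip(s_list, y_list, rho), reversed(alphas)):
        b = r * dot(y, z)
        z = [zi + (a - b) * si for zi, si in zip(z, s)]
    return z  # the search direction is then -z
```

With a single pair and $s = y$ (unit curvature), the recursion correctly returns $H_k g = g$.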
\subsection{Original Stochastic damped LBFGS (SdLBFGS0)}
SdLBFGS0 adopts the LBFGS update, but substitutes $y_j$ in Eq. (\ref{eq: inv_H_lbfgs}) with $\bar{y}_j$, where $\bar{y}_j$ is computed as
\begin{eqnarray}
\label{eq: y_bar}
\bar{y}_{k-1} = \theta_{k-1} y_{k-1} + (1-\theta_{k-1}) H_{k,0}^{-1}s_{k-1}
\end{eqnarray}
where
$$
\theta_{k-1} = \left\{ \begin{array}{ll}
\frac{0.75 s_{k-1}^T H_{k,0}^{-1}s_{k-1} }{s_{k-1}^T H_{k,0}^{-1}s_{k-1} - s_{k-1}^T y_{k-1}} &\mbox{ if $s_{k-1}^T y_{k-1} < 0.25 s_{k-1}^T H_{k,0}^{-1}s_{k-1} $} \\
1 &\mbox{ otherwise}
\end{array} \right.
$$
In SdLBFGS0,
\begin{eqnarray}
\label{eq: H_0}
H_{k,0} = \gamma_{k} ^ {-1} I ~~~~~\mbox{where}~~\gamma_{k} = \max \left \{ \frac{ y_{k-1}^T y_{k-1}}{ s_{k-1}^T y_{k-1} }, \delta \right \} > 0
\end{eqnarray}
where $\delta > 0 $ is a given constant.
By substituting $y_j$ by $\bar{y}_j$, the Hessian computed by SdLBFGS0 is guaranteed to be PSD.
The step size in \cite{wang2017stochastic} is set to be
\begin{equation*}
\alpha_k = \frac{\underline{\kappa}}{L\bar{\kappa}^2} k^{-\beta},
\end{equation*}
where $\beta \in (0.5, 1)$, $L$ is the Lipschitz constant
and $\underline{\kappa}, \bar{\kappa}$ are the bounds on the Hessian.
The authors claim that the number of iterations required for convergence is $O(\epsilon^{-\frac{1}{1-\beta}})$.
The limited memory scheme for computing the approximation of the Hessian follows L-BFGS \cite{liu1989limited}.
\section{Our Improvement of the SdLBFGS Algorithm}
In this section, we will introduce our two important improvements over SdLBFGS0: 1) initializing the Hessian with the identity matrix, and 2) normalizing the direction in each step. The step computation of SdLBFGS is presented in Algorithm \ref{alg:SdLBFGS}. Note that in Procedure 3.1 of \cite{wang2017stochastic}, the subscripts in lines 9--12 are out of range.
\begin{algorithm}[htbp]
\caption{Step computation using SdLBFGS}
\label{alg:SdLBFGS}
\begin{algorithmic}[1]
\REQUIRE Let $x_k$ be a current iterate.
Given the stochastic gradient $g_{k-1}$ at iterate $x_{k-1}$,
the random variable $\xi_{k-1}$, the batch size $m_{k-1}$, $s_j$, $\bar{y}_j$
and $\rho_j$, $j = k-p, \dots, k-2$, and $u_0 = g_k$.
\ENSURE $H_kg_k = v_c$.
\STATE Set $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$.
\STATE Set $H_{k,0} = I$
\STATE Calculate $\bar{y}_{k-1}$ through Eq. (\ref{eq: y_bar}) and $\rho_{k-1} = (s_{k-1}^T \bar{y}_{k-1})^{-1}$
\STATE Set $c = \min\{p,k-1\}$
\FOR{$i = 0, \dots, c-1$}
\STATE Calculate $\mu_i = \rho_{k-i-1} u_i^T s_{k-i-1}$.
\STATE Calculate $u_{i+1} = u_i - \mu_i \bar{y}_{k-i-1}$.
\ENDFOR
\STATE Set $v_0 = u_c$
\FOR{$i = 0, \dots, c-1$}
\STATE Calculate $\nu_i = \rho_{k-c+i} v_i^T \bar{y}_{k-c+i}$.
\STATE Calculate $v_{i+1} = v_i + (\mu_{c-i-1} - \nu_i) s_{k-c+i}$.
\ENDFOR
\STATE $v_c = v_c / ||v_c||_2$
\end{algorithmic}
\end{algorithm}
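A minimal pure-Python sketch of the step computation in Algorithm \ref{alg:SdLBFGS}, combining the damping of Eq. (\ref{eq: y_bar}) with our identity initialization and direction normalization (function names are ours; this is an illustration, not our PyTorch implementation):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def damped_y(s, y):
    """Damping of Wang et al. with H_{k,0} = I, which guarantees
    s^T y_bar >= 0.25 s^T s > 0, keeping the implied Hessian PSD."""
    sty, sts = dot(s, y), dot(s, s)
    theta = 0.75 * sts / (sts - sty) if sty < 0.25 * sts else 1.0
    return [theta * yi + (1.0 - theta) * si for si, yi in zip(s, y)]

def sdlbfgs_direction(g, s_hist, y_hist):
    """Modified SdLBFGS step: identity initial Hessian, damped y's,
    and an l2-normalized output direction (lines 2 and 14 of Alg. 1)."""
    ys = [damped_y(s, y) for s, y in zip(s_hist, y_hist)]
    rho = [1.0 / dot(s, yb) for s, yb in zip(s_hist, ys)]
    q, alphas = list(g), []
    for s, yb, r in reversed(list(zip(s_hist, ys, rho))):
        a = r * dot(s, q)
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, yb)]
    z = q  # H_{k,0} = I, so no initial scaling
    for (s, yb, r), a in zip(zip(s_hist, ys, rho), reversed(alphas)):
        b = r * dot(yb, z)
        z = [zi + (a - b) * si for zi, zi_s in zip(z, s) for si in [zi_s]]
    n = math.sqrt(dot(z, z))
    return [zi / n for zi in z]  # direction normalization
```

The damping keeps $s^T\bar{y}$ strictly positive even under negative curvature, while the final normalization bounds the step length regardless of the conditioning of the history pairs.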
\subsection{Initialize the Hessian with an Identity Matrix}
A crucial difference between SdLBFGS and the original LBFGS is that the original LBFGS applies backtracking line search to guarantee convergence,
whereas SdLBFGS0 uses a diminishing step size instead of line search.
We prefer the diminishing step size, because line search is computationally very expensive when training DNNs.
Without line search, however, SdLBFGS0 as proposed in \cite{wang2017stochastic} cannot converge to a local minimum in practice.
The main reason is that, in SdLBFGS0, if the Hessian approximation is large, then at some point the algorithm will output a direction with a small norm.
With a diminishing step size, the movement along that direction is small,
which in turn causes the norm of the direction in the next step to be small as well.
Therefore, with a diminishing step size, the algorithm slowly converges to some point which is not a local minimum, and $\delta$ in Eq.~(\ref{eq: H_0}) needs to be chosen very carefully.
To rectify this situation, we set the initial approximation of the Hessian matrix $H_{k,0}$ in each step $k$ to the identity matrix $I$.
In this way, the norm of the direction does not shrink to zero
even with a diminishing step size, and
the objective still gradually converges to a local minimum. This step is line 2 in Algorithm \ref{alg:SdLBFGS}.
\subsection{Direction Normalization}
Another problem with SdLBFGS0 is that the norm of the direction may be too large in some cases.
Without line search, the movement along that direction may then be too large, especially in the first few steps, where the step size is still large. This makes the algorithm very unstable:
the objective function may grow ever larger and gradually approach infinity.
To address this issue,
we normalize the direction in each step using the~$l_2$ norm, so that the algorithm never moves too far away from the current point. This makes the algorithm stable. This step is stated in line 14 of Algorithm \ref{alg:SdLBFGS}.
\section{Experiments}
In the experiments, we analyze the performance of our algorithm on a simple 2D non-convex function and on the CIFAR10 \cite{krizhevsky2009learning} and MNIST \cite{lecun1998gradient} datasets. We denote the implementation of the original algorithm in \cite{wang2017stochastic} as SdLBFGS0, and our modification of SdLBFGS0 as SdLBFGS. We compare our algorithm with the built-in optimizers SGD, Adagrad and LBFGS in PyTorch, and we implement both SdLBFGS0 and SdLBFGS in PyTorch. For SdLBFGS0 and SdLBFGS, we set the step size to $1/\sqrt{k}$, where $k$ is the iteration number. The memory sizes of LBFGS, SdLBFGS0 and SdLBFGS are all set to~100 for a fair comparison. The batch size for all these methods is set to 64 on the CIFAR10 and MNIST datasets.
\subsection{Optimization of A simple 2D Non-Convex Function}
In this experiment, we compare SdLBFGS0 and SdLBFGS on a simple 2D non-convex function expressed as follows:
$$
f(x, y ) = 100(x^2-y)^2 + (x-1)^2
$$
For both methods, we start from $(-1.2, 1)$ and perform 1 million iterations. We show the number of iterations (in natural log) vs. the objective function value (in natural log) in Figure \ref{fig: simple_case}. From this figure we can see that the value of the objective function optimized by SdLBFGS0 is very large at the beginning, because the direction of SdLBFGS0 is not normalized and the iterate moves too far away from the optimum. In addition, after about 100 iterations, the value of the objective function no longer decreases even though the diminishing step size is applied. This means that SdLBFGS0 does not converge. By contrast, the value of the objective function optimized by SdLBFGS is well behaved at the beginning, and it decreases as the number of iterations increases. This demonstrates that our modification of the original algorithm is effective.
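For reference, the test function and its gradient can be written down directly. The normalized gradient descent below is only a crude stand-in to illustrate the diminishing step size $\propto 1/\sqrt{k}$ and the normalized direction, not SdLBFGS itself; the step-size constant 0.01 is our own illustrative choice:

```python
import math

def f(x, y):
    """The 2D test function (a Rosenbrock-type valley); its global
    minimum is f(1, 1) = 0."""
    return 100.0 * (x * x - y) ** 2 + (x - 1.0) ** 2

def grad(x, y):
    gx = 400.0 * x * (x * x - y) + 2.0 * (x - 1.0)
    gy = -200.0 * (x * x - y)
    return gx, gy

# Normalized gradient descent with diminishing step 0.01/sqrt(k),
# starting from (-1.2, 1) as in the experiment.
x, y = -1.2, 1.0
for k in range(1, 100001):
    gx, gy = grad(x, y)
    n = math.hypot(gx, gy) or 1.0  # guard against a zero gradient
    step = 0.01 / math.sqrt(k)
    x, y = x - step * gx / n, y - step * gy / n
```

Even this crude scheme descends into the curved valley and makes steady progress toward $(1, 1)$, illustrating why a bounded direction norm combined with a diminishing step size is stable.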
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=60mm]{simple_case_1M.png}
\caption{SdLBFGS0 and SdLBFGS on a simple 2D non-convex function.}
\label{fig: simple_case}
\end{center}
\vspace{-.5cm}
\end{figure}
\subsection{Results on the CIFAR10 dataset}
We apply different optimizers to classification on the CIFAR10 dataset. We use the convolutional neural network provided online\footnote{https://pytorch.org/tutorials/beginner/blitz/cifar10\_tutorial.html} for all the optimizers. For SGD, Adagrad and LBFGS we choose their learning rates (lr) from [1e-4, 1e-3, 1e-2, 1e-1]. Figure \ref{fig: cifar10_all4methods_acc} (a), (b) and (c) show the accuracies of different methods vs. the number of iterations under different learning rates. In Figure \ref{fig: cifar10_sdlbfgs}, SdLBFGS0 only works at the beginning. After a few epochs, it outputs NaN and stops working, whereas our improved SdLBFGS works normally, i.e., the accuracy increases and remains stable.
Figure \ref{fig: cifar10_all4methods_loss} (a), (b) and (c) show the losses of different methods vs. the number of iterations under different learning rates. From Figure \ref{fig: cifar10_all4methods_loss}(c), we can see that the built-in optimizer LBFGS in PyTorch does not converge. The loss of SdLBFGS0 in Figure \ref{fig: cifar10_all4methods_loss}(d)
is higher than that of SdLBFGS, and SdLBFGS0 crashes after a few iterations. This shows that SdLBFGS0 is not stable.
In Figure \ref{fig: cifar10_comp_all4methods}, we choose the best learning rate for each method and compare the methods in terms of accuracy (Figure \ref{fig:subfig:cifar10_comp_all4methods_acc}) and loss (Figure \ref{fig:subfig:cifar10_comp_all4methods_loss}). From Figure \ref{fig:subfig:cifar10_comp_all4methods_acc} we can see that the testing accuracy of our implementation SdLBFGS increases faster than that of all the other methods under their best learning rates. Besides, SdLBFGS achieves the highest final testing accuracy, of approximately 66\%. SGD and Adagrad achieve similar testing accuracies of approximately 62\%. LBFGS gets the lowest accuracy, since its loss does not decrease according to Figure \ref{fig:subfig:cifar10_comp_all4methods_loss}. The losses of SdLBFGS and Adagrad are similar in Figure \ref{fig:subfig:cifar10_comp_all4methods_loss}. Although the loss of SGD decreases to nearly 0 in Figure \ref{fig:subfig:cifar10_comp_all4methods_loss}, its testing accuracy is lower than that of SdLBFGS.
\begin{figure}[t!]
\centering
\subfigure[]{\label{fig:subfig:cifar10_sdlbfgs_all_acc}\includegraphics[width=60mm]{CIFAR10_SdLBFGS_all_acc.png}}
\subfigure[]{\label{fig:subfig:cifar10_sdlbfgs_all_loss}\includegraphics[width=60mm]{CIFAR10_SdLBFGS_all_loss.png}}
\caption{Comparison of SdLBFGS0 and SdLBFGS on the CIFAR10 dataset.}
\label{fig: cifar10_sdlbfgs}
\vspace{-.5cm}
\end{figure}
\begin{figure}
\centering
\subfigure[]{\label{fig:subfig:cifar10_sgd_acc}\includegraphics[width=60mm]{CIFAR10_SGD_all_acc.png}}
\subfigure[]{\label{fig:subfig:cifar10_adagrad_acc}\includegraphics[width=60mm]{CIFAR10_Adagrad_all_acc.png}}
\subfigure[]{\label{fig:subfig:cifar10_lbfgs_acc}\includegraphics[width=60mm]{CIFAR10_LBFGS_all_acc.png}}
\subfigure[]{\label{fig:subfig:cifar10_sdlbfgs_acc}\includegraphics[width=60mm]{CIFAR10_SdLBFGS_all_acc.png}}
\caption{Accuracies of SGD, Adagrad, LBFGS, SdLBFGS0 and SdLBFGS under different learning rates on CIFAR10 dataset. Note that in (c) LBFGS lr = 0.1 overlaps lr = 0.01, and lr = 0.001 overlaps lr = 0.0001.}
\label{fig: cifar10_all4methods_acc}
\vspace{-0.5cm}
\end{figure}
\begin{figure}
\vspace{-.5cm}
\centering
\subfigure[]{\label{fig:subfig:cifar10_sgd_loss}\includegraphics[width=60mm]{CIFAR10_SGD_all_loss.png}}
\subfigure[]{\label{fig:subfig:cifar10_adagrad_loss}\includegraphics[width=60mm]{CIFAR10_Adagrad_all_loss.png}}
\subfigure[]{\label{fig:subfig:cifar10_lbfgs_loss}\includegraphics[width=60mm]{CIFAR10_LBFGS_all_loss.png}}
\subfigure[]{\label{fig:subfig:cifar10_sdlbfgs_loss}\includegraphics[width=60mm]{CIFAR10_SdLBFGS_all_loss.png}}
\caption{Losses of SGD, Adagrad, LBFGS, SdLBFGS0 and SdLBFGS under different learning rates on the CIFAR10 dataset.}
\label{fig: cifar10_all4methods_loss}
\vspace{-.5cm}
\end{figure}
\begin{figure}
\centering
\subfigure[]{\label{fig:subfig:cifar10_comp_all4methods_acc}\includegraphics[width=60mm]{CIFAR10_all4methods_acc.png}}
\subfigure[]{\label{fig:subfig:cifar10_comp_all4methods_loss}\includegraphics[width=60mm]{CIFAR10_all4methods_loss.png}}
\caption{Comparison of different methods under their best learning rates on the CIFAR10 dataset.}
\label{fig: cifar10_comp_all4methods}
\vspace{-.5cm}
\end{figure}
\subsection{Results on the MNIST dataset}
Similar to the experiments on the CIFAR10 dataset, we apply different optimizers to classification on the MNIST dataset. We use the convolutional neural network provided online\footnote{https://github.com/pytorch/examples/tree/master/mnist} for all the optimizers.
The learning rates of SGD, Adagrad and LBFGS are chosen from [1e-4, 1e-3, 1e-2, 1e-1].
Figure~\ref{fig: mnist_all4methods_acc}(a), (b) and (c) show the accuracies of the different methods vs. the number of iterations under different learning rates.
From Figure~\ref{fig: mnist_all4methods_acc}(d) we can see that SdLBFGS0 works at the beginning but crashes after a few epochs, whereas SdLBFGS works stably. Figure~\ref{fig: mnist_all4methods_loss}(a), (b) and (c) show the losses of the different methods vs. the number of iterations under different learning rates. Figure~\ref{fig: mnist_all4methods_loss}(c) shows that LBFGS is very sensitive to the learning rate. Figure~\ref{fig: mnist_all4methods_loss}(d) shows that SdLBFGS0 stops working at the beginning of training and the loss does not decrease.
In Figure~\ref{fig: mnist_comp_all4methods}, SGD, Adagrad, LBFGS and SdLBFGS are compared under their best learning rates tuned in Figures~\ref{fig: mnist_all4methods_acc} and \ref{fig: mnist_all4methods_loss}. From Figure~\ref{fig:subfig:mnist_comp_all4methods_acc} we can see that the testing accuracies of SGD, Adagrad and SdLBFGS are comparable, at approximately 98\%, whereas LBFGS performs poorly at about~10\%. The losses plotted in Figure~\ref{fig:subfig:mnist_comp_all4methods_loss} show that SGD, LBFGS and SdLBFGS give comparable losses.
\begin{figure}
\centering
\subfigure[]{\label{fig:subfig:mnist_sdlbfgs_all_acc}\includegraphics[width=60mm]{MNIST_SdLBFGS_all_acc.png}}
\subfigure[]{\label{fig:subfig:mnist_sdlbfgs_all_loss}\includegraphics[width=60mm]{MNIST_SdLBFGS_all_loss.png}}
\caption{Comparison of SdLBFGS0 and SdLBFGS on the MNIST dataset.}
\label{fig: mnist_sdlbfgs}
\vspace{-.5cm}
\end{figure}
\begin{figure}
\centering
\subfigure[]{\label{fig:subfig:mnist_sgd_acc}\includegraphics[width=60mm]{MNIST_SGD_all_acc.png}}
\subfigure[]{\label{fig:subfig:mnist_adagrad_acc}\includegraphics[width=60mm]{MNIST_AdaGrad_all_acc.png}}
\subfigure[]{\label{fig:subfig:mnist_lbfgs_acc}\includegraphics[width=60mm]{MNIST_LBFGS_all_acc.png}}
\subfigure[]{\label{fig:subfig:mnist_sdlbfgs_acc}\includegraphics[width=60mm]{MNIST_SdLBFGS_all_acc.png}}
\caption{Accuracies of SGD, Adagrad, LBFGS, SdLBFGS0 and SdLBFGS under different learning rates on the MNIST dataset. Note that in (c) LBFGS lr = 0.01 overlaps lr = 0.001 and lr = 0.0001.}
\label{fig: mnist_all4methods_acc}
\vspace{-.5cm}
\end{figure}
\begin{figure}
\centering
\subfigure[]{\label{fig:subfig:mnist_sgd_loss}\includegraphics[width=60mm]{MNIST_SGD_all_loss.png}}
\subfigure[]{\label{fig:subfig:mnist_adagrad_loss}\includegraphics[width=60mm]{MNIST_AdaGrad_all_loss.png}}
\subfigure[]{\label{fig:subfig:mnist_lbfgs_loss}\includegraphics[width=60mm]{MNIST_LBFGS_all_loss.png}}
\subfigure[]{\label{fig:subfig:mnist_sdlbfgs_loss}\includegraphics[width=60mm]{MNIST_SdLBFGS_all_loss.png}}
\caption{Losses of SGD, Adagrad, LBFGS, SdLBFGS0 and SdLBFGS under different learning rates on the MNIST dataset.}
\label{fig: mnist_all4methods_loss}
\vspace{-.6cm}
\end{figure}
\begin{figure}
\centering
\subfigure[]{\label{fig:subfig:mnist_comp_all4methods_acc}\includegraphics[width=60mm]{MNIST_all4methods_acc.png}}
\subfigure[]{\label{fig:subfig:mnist_comp_all4methods_loss}\includegraphics[width=60mm]{MNIST_all4methods_loss.png}}
\caption{Comparison of different methods under their best learning rates on the MNIST dataset.}
\label{fig: mnist_comp_all4methods}
\end{figure}
\section{Conclusion}
This paper presents an implementation of the stochastic damped LBFGS (SdLBFGS) method in PyTorch for non-convex stochastic optimization problems. By initializing the Hessian approximation with the identity matrix at each step, the algorithm converges better than the original one. By performing direction normalization, we obtain a stable optimization procedure without a line search. Experiments on minimizing a 2D non-convex function show that our improved algorithm converges better than the original algorithm, and experiments on the CIFAR10 and MNIST datasets show that our implementation works stably and gives comparable or even better testing accuracies than the widely-used optimizers SGD and Adagrad, and the second-order optimizer LBFGS in PyTorch.
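The direction normalization mentioned above can be sketched in a few lines. This is a minimal NumPy illustration, not the PyTorch implementation itself: the update direction is rescaled to unit norm, so a fixed learning rate bounds the step length and no line search is required.

```python
import numpy as np

def normalize_direction(d, eps=1e-8):
    """Rescale a search direction to unit norm so that a fixed
    learning rate bounds the step length (no line search needed)."""
    return d / (np.linalg.norm(d) + eps)

# one normalized descent step on a toy parameter vector
p = np.array([3.0, 4.0])
d = np.array([6.0, 8.0])                   # unnormalized descent direction
p_new = p - 0.1 * normalize_direction(d)   # step of length 0.1
```

Whatever the magnitude of the quasi-Newton direction, each update moves the parameters by exactly the learning rate, which is what stabilizes training without a line search.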
\section*{Acknowledgement}
The authors would like to thank Professor Francesco Orabona for insightful discussions and helpful suggestions.
\medskip
\small
\bibliographystyle{abbrv}
\section{Introduction}
\label{sec:intro}
Identity verification plays an important role in our daily lives. For example, access control, physical security and international border crossing require us to verify our access (security) level and our identities. Although an ideal solution would be a digital biometric database of all citizens/users that can be accessed whenever user verification is needed, this is currently not feasible in most countries. A practical and common approach that has been deployed is to utilize an ID document containing the subject's photo, and to verify the identity by comparing the document image with the subject's live face. For example, immigration and customs officials look at passport photos and manually compare them with the faces of the subjects standing in front of them to confirm that each passenger is indeed the legitimate owner of the passport. Clerks at supermarkets look at driver licenses to check customers' ages. Such ID document photo matching is conducted in numerous scenarios, but it is primarily conducted by humans, which is time-consuming, costly and potentially error-prone. Therefore, an automatic system that matches ID document photos to selfies with high speed and low error rates is highly desirable in these applications. Here, we define ``selfies'' as any self-captured photos, including those from kiosks. In addition, automatic ID matching systems also enable remote applications not otherwise possible, such as onboarding new customers in a mobile app (verifying their identities for account creation), or account recovery. One application scenario of an ID document photo matching system is illustrated in Figure~\ref{fig:application}.
\begin{figure}
\center
\begin{subfigure}[b]{\linewidth}
\includegraphics[width=\linewidth]{fig/example.png}
\caption{General face matching}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\includegraphics[width=\linewidth]{fig/example2.png}
\caption{ID document photo matching}
\end{subfigure}
\caption{Example images from (a) LFW dataset~\cite{LFWTech} and (b) ID-Selfie-A dataset. Each row shows two pairs from the two datasets, respectively. Compared with the general unconstrained face recognition shown in (a), ID Document photo matching (b) does not need to consider large pose variations. Instead, it involves a number of other challenges such as aging and information loss via image compression.}
\label{fig:examples}
\end{figure}
\begin{figure*}
\center
\begin{subfigure}[b]{0.32\linewidth}
\includegraphics[width=\linewidth]{fig/gate_aus.jpg}
\caption{Australia SmartGate~\cite{gates_au}}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\includegraphics[width=\linewidth]{fig/gate_uk.jpg}
\caption{UK ePassport gates~\cite{gates_uk}}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\includegraphics[width=\linewidth]{fig/gate_us.jpg}
\caption{US Automated Passport Control~\cite{gates_us}}
\end{subfigure}
\caption{Example automatic ID document photo matching systems at international borders.}
\label{fig:gates}
\end{figure*}
A number of automatic systems for matching ID document photos to selfies have been deployed at international borders. The earliest such system is the SmartGate deployed in Australia~\cite{gates_au}; see Figure~\ref{fig:gates}. Due to the increasing number of travelers to Australia, the Australian government introduced SmartGate at most of its international airports for electronic passport control checks for ePassport holders. To use the SmartGate, travelers only need to let a machine read their ePassport chips containing their digital photos and then capture their face images with a camera mounted at the SmartGate. After verifying a traveler's identity by face comparison, the gate automatically opens for the traveler to enter Australia. Similar machines have also been installed in the UK (ePassport gates)~\cite{gates_uk}, the USA (US Automated Passport Control)~\cite{gates_us} and other countries. However, all of the above border crossing applications read the subject's face image from the ePassport's chip. If a traveler does not have an ePassport, he or she will still have to be processed by an inspector who performs the typical manual photo comparison. Besides international border crossing, some businesses provide face recognition solutions for ID document verification in online services~\cite{netverify}\cite{mitek}.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/docface2.png}
\vspace{-1.5em}
\caption{An application scenario of the ID document matching system. The kiosk scans the ID document or reads its chip for the face photo and the camera takes another photo of the holder's live face (selfie). Then, through face recognition, the system decides whether the holder is indeed the owner of the ID document.}
\label{fig:application}
\vspace{-0.5em}
\end{figure}
The problem of ID document face matching involves many difficulties that differ from those in general face recognition, i.e., image-to-image matching. For typical unconstrained face recognition tasks, the main difficulties lie in pose, illumination and expression (PIE) variations. But in document photo matching, we are comparing a scanned or digital document photo with a digital camera photo of a live face. Assuming that the user is cooperative, both images are captured under constrained conditions and there are no large PIE variations. Instead, the low quality of document photos due to image compression\footnote{Most chips in e-Passports have a memory from 8KB to 30KB; the face images have to be compressed to be stored in the chip. See \url{https://www.readid.com/blog/face-images-in-ePassports}} and the time gap between the document issue date and the verification time remain the main difficulties, as shown in Figure~\ref{fig:examples}. In addition, since virtually all current face recognition systems use neural networks, another difficulty in our problem is the lack of a large dataset (pairs of ID photos and selfies), which is crucial for training and evaluating deep neural networks.
In spite of these numerous applications and challenges, there is a paucity of research on this topic. On the one hand, only a few studies have been published on ID document matching~\cite{starovoitov2000matching}\cite{bourlai2009matching}\cite{bourlai2011restoring}\cite{starovoitov2002three}, all of which are over five years old. On the other hand, face recognition technology has made tremendous strides in the past five years, mainly due to the availability of large-scale face training data and deep neural network models for face recognition. The Verification Rate (VR) at a False Accept Rate (FAR) of $0.1\%$ on the Labeled Faces in the Wild (LFW) dataset, one of the first public-domain ``faces in the wild'' datasets, has increased from $41.66\%$ in 2014~\cite{liao2014benchmark} to $98.65\%$ in 2017~\cite{hasnat2017deepvisage}. Hence, the earlier published results on ID document photo to live face matching are now obsolete. Advances in face recognition algorithms allow us to build more robust and accurate matchers for ID document matching.
In this paper, we first briefly review existing studies on the ID document photo matching problem and state-of-the-art deep neural network-based face recognition methods. We then propose DocFace, which develops a domain-specific face matcher for ID document photos by exploiting transfer learning techniques. Our experiments use two datasets of Chinese Identity Cards with corresponding camera photos to evaluate the performance of a Commercial-Off-The-Shelf (COTS) face matcher, open-source deep network face matchers and the proposed method. The contributions of the paper are summarized below:
\begin{itemize}
\item An evaluation of published face matchers on the problem of ID document photo matching.
\item A new system and loss function for learning representations from heterogeneous face pairs.
\item A domain-specific matcher, namely DocFace, for ID document photo matching, which significantly improves the performance of existing general face matchers. The VR on a private Chinese Identity Card dataset is improved from $61.14\%$ to $92.77\%$ at FAR=$0.1\%$.
\end{itemize}
\section{Related Work}
\subsection{ID Document Photo Matching}
To the best of our knowledge, the first studies on ID document face photo matching are attributed to Starovoitov et al.~\cite{starovoitov2000matching}\cite{starovoitov2002three}. Assuming all face images are frontal without large expression variations, the authors first localize the eyes with the Hough Transform. Based on the eye locations, the face region is cropped and gradient maps are computed as feature maps. The algorithm is similar to a general constrained face matcher, except that it is developed for a document photo dataset. Bourlai et al.~\cite{bourlai2009matching}\cite{bourlai2011restoring} considered ID document face recognition as a comparison between degraded face images, obtained by scanning the document photo, and high-quality live face images. Because their dataset is composed of scanned document photos (i.e. passports), they categorized the degradation of scanned document photos into three types: (i) person-related, including hairstyle, makeup and aging; (ii) document-related, including image compression and watermarks; and (iii) scanning-device-related, such as operator variability. To eliminate the degradation caused by these factors, Bourlai et al. inserted an image restoration phase before comparing the photos with a general face matcher. In particular, they train a classifier to predict the degradation type of a given image, and then apply degradation-specific linear and nonlinear filters to restore the degraded images. In contrast to their work on scanned documents, the document photos in our dataset are read from the chips in Chinese Identity Cards. Additionally, our method is not designed for any specific degradation type but can be applied to any ID document photos.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.325\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{fig/dataset_ms.png}
\caption{MS-Celeb-1M}
\label{fig:datasets:ms}
\end{subfigure}
\begin{subfigure}[b]{0.325\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{fig/dataset_a.png}
\caption{ID-Selfie-A}
\label{fig:datasets:zk10k}
\end{subfigure}
\begin{subfigure}[b]{0.325\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{fig/dataset_b.png}
\caption{ID-Selfie-B}
\label{fig:datasets:zk1n}
\end{subfigure}
\vspace{-0.5em}
\caption{Example images in each dataset. The left image in each pair in (b) and each row in (c) is the ID photo and on its right are the corresponding selfies.}
\label{fig:datasets}
\vspace{-1.0em}
\end{figure*}
\subsection{Deep Face Recognition}
Since the success of deep neural networks in the ImageNet competition~\cite{krizhevsky2012imagenet}, virtually all ongoing research and development in face recognition utilizes deep neural networks to learn face representations~\cite{taigman2014deepface}\cite{deepid2plus}\cite{schroff2015facenet}\cite{hasnat2017deepvisage}. The popularity of deep neural networks can partially be attributed to a special property: low-level image features are transferable, i.e. they are not limited to a particular task but are applicable to many image analysis tasks. Given this property, one can first train a network on a large dataset to learn salient low-level features, and then train a domain-specific neural network by transfer learning on a relatively small dataset. For example, Sankaranarayanan et al.~\cite{sankaranarayanan2016triplet} proposed retraining networks with a triplet probability embedding (TPE) loss function and achieved good results on the IJB-A benchmark~\cite{klare2015pushing}. Xiong et al.~\cite{xiong2017good} proposed a framework named Transferred Deep Feature Fusion (TDFF) that fuses the features of two different networks trained on different datasets and learns face classifiers in the target domain, achieving state-of-the-art performance on the IJB-A dataset. Mittal et al.~\cite{mittal2015composite} developed a sketch-photo face matching system by fine-tuning the features of a stacked auto-encoder and a Deep Belief Network trained on a larger unconstrained face dataset.
\section{Datasets}
In this section we briefly introduce the datasets used in this paper. An overview of the datasets is given in Figure~\ref{fig:datasets}.
\subsection{MS-Celeb-1M}
The MS-Celeb-1M dataset~\cite{guo2016msceleb} is a public-domain face dataset facilitating the training of deep networks for face recognition. It contains $8,456,240$ images of $99,892$ subjects (mostly celebrities) downloaded from the internet. In our transfer learning framework, it is used as the source domain to train a very deep network with rich low-level features.
However, the dataset is known to contain many labeling errors. We use a cleaned version of MS-Celeb-1M with $5,041,527$ images of $98,687$ subjects. Some example images are shown in Figure~\ref{fig:datasets:ms}.
\subsection{ID-Selfie-A Dataset}
\label{sec:dataset_A}
Our first ID document-selfie dataset is a private dataset composed of $10,000$ pairs of ID card photos and selfies. The ID card photos are read from the chips in Chinese Resident Identity Cards\footnote{\url{https://en.wikipedia.org/wiki/Resident_Identity_Card}}. The selfies are captured by a stationary camera. Among the $10,000$ pairs, we were able to align only $9,915$ pairs, i.e. a total of $19,830$ images. Assuming that all participants are cooperative, and hence that there are no failure-to-enroll cases, we keep only these aligned pairs for our experiments. This dataset represents the target domain in our transfer learning framework. In the experiments, we further separate the dataset into two parts: one part to fine-tune the network trained on the source domain, and the other part for testing the performance. Some example pairs from this dataset are shown in Figure~\ref{fig:datasets:zk10k}.
\subsection{ID-Selfie-B Dataset}
Our second ID document-selfie dataset is a private dataset composed of $10,844$ images from $547$ subjects, each with one ID card image and a varying number of selfies from different devices, including mobile phones. Compared with ID-Selfie-A, the selfies in this dataset are less constrained, and some images have been warped or processed by image filters, as shown in Figure~\ref{fig:datasets:zk1n}. Among these $547$ subjects, some do not have any selfie photos. After cleaning and alignment, we retain $10,806$ images from $537$ subjects and use them for cross-dataset evaluation of the model trained on the ID-Selfie-A dataset. There is no overlap between the identities in the ID-Selfie-A and ID-Selfie-B datasets. See Figure~\ref{fig:datasets:zk1n} for example images from this dataset.
\section{Methodology}
\subsection{Notation}
Using a transfer learning framework, we first train a network as the \emph{base model} on the source domain, i.e. an unconstrained face dataset, and then transfer its features to the target domain, ID and selfie face images. Let $X^s=\{(x^s_i, y^s_i)|i=1,2,3,\cdots, N^s\}$ be the dataset of the source domain, where $x^s_i\in\IR^{h\times w}$ and $y^s_i\in \{1,2,3,\cdots,C\}$ are the $i^{th}$ image and label, respectively, $h$ and $w$ are the height and width of the images, $N^s$ is the number of images, and $C$ is the number of classes. The training dataset of the target domain is denoted by $X^t=\{(x^t_{i1}, x^t_{i2})|i=1,2,3,\cdots, N^t\}$, where $x^t_{i1}\in\IR^{h\times w}$ and $x^t_{i2}\in\IR^{h\times w}$ refer to the ID image and selfie of the $i^{th}$ subject in the target domain, respectively. Here, $N^t$ is the number of ID-selfie pairs rather than the number of images. The function $\mcF:\IR^{h\times w} \rightarrow \IR^{d}$ denotes the base model for the source domain, where $d$ is the dimensionality of the face representation. Similarly, $\mcG:\IR^{h\times w} \rightarrow \IR^{d}$ represents the face representation network for ID photos and $\mcH:\IR^{h\times w} \rightarrow \IR^{d}$ the one for selfies. An overview of the workflow is shown in Figure~\ref{fig:overview}.
\subsection{Training on source domain}
The source domain in our work is unconstrained face recognition, where we can train a very deep network on a large-scale dataset composed of different types of face images from a large number of subjects, i.e. MS-Celeb-1M. The objective is to train a base model $\mcF$ so that its face representations maximizes inter-subject separation and minimizes intra-subject variations. To guarantee its performance for better transfer learning, we utilize the popular Face-ResNet architecture~\cite{hasnat2017deepvisage} to build the convolutional neural network. We adopt the state-of-the-art \textit{Additive Max-margin Softmax} (AM-Softmax) loss function~\cite{wang2018additive}\cite{deng2018arcface}\cite{wang2018cosface} for training the base model. For each training sample in a mini-batch, the loss function is given by:
\begin{equation}
\EL_s = -\log\frac{ \exp(s\cos{\theta_{y_i,i}}-m) }{ \exp(s\cos{\theta_{y_i,i}}-m) + \sum_{j\neq y_i}{\exp(s\cos{\theta_{j,i}})} }
\label{eq:loss_additive}
\end{equation}
where
\begin{align*}
\cos{\theta_{j,i}} = W_j^Tf_i \\
W_j=\frac{W_j^*}{\|W_j^*\|_2} \\
f_i=\frac{\mcF(x^s_i)}{\|\mcF(x^s_i)\|_2}.
\end{align*}
$W^*\in\IR^{d\times C}$ is the weight matrix and $m$ is a hyper-parameter controlling the margin. The scale parameter $s$ can either be manually chosen or automatically learned~\cite{wang2017normface}; we let it be learned automatically for simplicity. During training, the loss in Equation~(\ref{eq:loss_additive}) is averaged over all images in the mini-batch.
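For concreteness, the per-batch AM-Softmax loss of Equation~(\ref{eq:loss_additive}) can be sketched in NumPy as follows. This is an illustrative sketch, not our TensorFlow implementation: the function name \texttt{am\_softmax\_loss} and the default values of $s$ and $m$ are ours, and $s$ is fixed here rather than learned.

```python
import numpy as np

def am_softmax_loss(features, weights, labels, s=30.0, m=0.35):
    """AM-Softmax sketch: cosine logits with an additive margin m
    subtracted from the target-class logit, scaled by s, followed by
    cross-entropy. features: (N, d) embeddings; weights: (d, C)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    W = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ W                                  # cos(theta_{j,i})
    rows = np.arange(len(labels))
    cos[rows, labels] -= m                       # margin on target class only
    logits = s * cos
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[rows, labels].mean()        # average over the batch
```

Subtracting $m$ from the target-class cosine before the softmax forces the genuine-class similarity to exceed all imposter similarities by at least the margin, which is what tightens intra-class variation.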
\subsection{Training on target domain}
\label{sec:method_target}
The target domain is a relatively small dataset composed of ID-selfie image pairs. The sources of these images are very different from those of the source domain, so directly applying $\mcF$ to them will not work well. Because the ID images and selfies come from different sources, the problem can also be regarded as a sub-problem of heterogeneous face recognition~\cite{klare2013heterogeneous}. A common approach in heterogeneous face recognition is to utilize two separate domain-specific models to map images from the different sources into a unified embedding space. Therefore, we use a pair of \textit{sibling networks} $\mcG$ and $\mcH$ for ID images and selfie images, respectively, which share the same architecture but may have different parameters. Both are initialized with the features transferred from $\mcF$. Notice that although this increases the model size, the inference speed remains unchanged, as each image is fed into only one of the sibling networks.
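The sibling-network setup can be sketched as follows. This toy NumPy version replaces the actual sibling CNNs with single linear maps (all names and dimensions here are ours for illustration), but it shows the two key points: $\mcG$ and $\mcH$ start from the same transferred weights, and each image passes through exactly one branch before cosine scoring.

```python
import numpy as np

rng = np.random.default_rng(0)
W_base = rng.normal(size=(8, 4))  # stand-in for the base model F's weights
W_G = W_base.copy()               # ID-photo branch G (same initialization)
W_H = W_base.copy()               # selfie branch H (same initialization)

def embed(x, W):
    """Map an input to a unit-norm embedding, ready for cosine scoring."""
    z = x @ W
    return z / np.linalg.norm(z)

# each image is fed into only one branch, so inference cost is the
# same as with a single shared network
x_id, x_selfie = rng.normal(size=8), rng.normal(size=8)
score = embed(x_id, W_G) @ embed(x_selfie, W_H)
```

During fine-tuning the two weight sets are updated independently, letting each branch adapt to the statistics of its own image source while the embeddings remain comparable in the shared space.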
Inspired by recent metric learning methods~\cite{schroff2015facenet}, we propose a \textit{Max-margin Pairwise Score} (MPS) loss for training on the heterogeneous face pair dataset. For each mini-batch of size $M$, $M/2$ ID-selfie pairs are randomly selected from all subjects. For each pair, the MPS loss is given by:
\begin{equation}
\EL_t = [\max_{j\neq i}(\max(\cos{\theta_{j,i}},\cos{\theta_{i,j}})) - \cos{\theta_{i,i}} + m']_+
\label{eq:loss_hetero}
\end{equation}
where
\begin{align*}
\cos{\theta_{i,j}} = g_i^Th_j \\
g_i=\frac{\mcG(x^t_{i1})}{\|\mcG(x^t_{i1})\|_2} \\
h_i=\frac{\mcH(x^t_{i2})}{\|\mcH(x^t_{i2})\|_2}.
\end{align*}
The loss is averaged over all $M/2$ pairs. Here, $j$ iterates over all the other subjects in the batch, $[x]_+=\max(0,x)$, and the hyper-parameter $m'$ plays a role similar to $m$ in the AM-Softmax loss. The idea of the MPS loss in Equation~(\ref{eq:loss_hetero}) is to learn representations by maximizing the margin between genuine pair similarities and imposter pair similarities. The MPS loss simulates the application scenario where the ID photos act as templates while selfies from different subjects act as probes trying to be verified, or conversely. Notice that, after the hardest imposter pair is chosen by the maximum score, the MPS loss is similar to the Triplet Loss~\cite{schroff2015facenet} with either the ID or the selfie image as the anchor.
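A batch-level sketch of the MPS loss in Equation~(\ref{eq:loss_hetero}) is shown below. This is illustrative NumPy code, not our training implementation; the function name is ours, and the inputs are assumed to be index-aligned ID and selfie embeddings.

```python
import numpy as np

def mps_loss(g, h, m_prime=0.5):
    """MPS loss sketch over a batch of ID embeddings g and selfie
    embeddings h, both (M/2, d), with row i of g paired to row i of h."""
    g = g / np.linalg.norm(g, axis=1, keepdims=True)
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    S = g @ h.T                      # S[i, j] = cos(theta_{i, j})
    genuine = np.diag(S).copy()      # genuine pair scores cos(theta_{i, i})
    n = len(S)
    off = ~np.eye(n, dtype=bool)     # mask out the genuine diagonal
    # hardest imposter for subject i: max over cos(theta_{i,j}) (row i)
    # and cos(theta_{j,i}) (column i), j != i
    hardest = np.maximum(
        np.where(off, S, -np.inf).max(axis=1),
        np.where(off, S, -np.inf).max(axis=0))
    return np.maximum(hardest - genuine + m_prime, 0.0).mean()
```

When every genuine score beats the hardest imposter score by at least $m'$, the hinge term is zero for every pair and the loss vanishes, which is exactly the margin condition the equation encodes.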
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/overview.pdf}
\vspace{-1.5em}
\caption{Overview of the workflow of the proposed method. We first train a base model $\mcF$ on a large-scale unconstrained face dataset. Then the features are transferred to the domain-specific models $\mcG$ and $\mcH$, which are trained on an ID-Selfie dataset using the proposed MPS loss function.}
\label{fig:overview}
\vspace{-0.5em}
\end{figure}
\section{Experiments}
\label{sec:exp}
\subsection{Experiment Settings}
\label{sec:exp_settings}
We conduct all of our experiments using Tensorflow r$1.2$. When training the base model on MS-Celeb-1M, we use a batch size of $256$ and train for $280$K steps. We start with a learning rate of $0.1$, which is decreased to $0.01$ and $0.001$ after $160$K and $240$K steps, respectively. When fine-tuning on the ID-Selfie-A dataset, we keep the batch size of $256$ and train the sibling networks for $800$ steps. We start with a lower learning rate of $0.01$ and decrease it to $0.001$ after $500$ steps. For both training stages, the model is optimized by the Stochastic Gradient Descent (SGD) optimizer with a momentum of $0.9$ and a weight decay of $5\times10^{-4}$. All images are aligned via a similarity transformation based on landmarks detected by MTCNN~\cite{zhang2016joint} and resized to $96\times112$. We set the margin parameters $m$ and $m'$ to $5.0$ and $0.5$, respectively. All training and testing are run on a single Nvidia Geforce GTX 1080Ti GPU with $11$GB of memory. The inference speed of our model on this GPU is $0.003$s per image.
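The piecewise-constant learning-rate schedule for the base model can be written as a small helper (a sketch of the schedule described above; the function name is ours):

```python
def base_model_lr(step):
    """Learning rate for base-model training: 0.1 for the first 160K
    steps, 0.01 until 240K steps, then 0.001 (280K steps in total)."""
    if step < 160_000:
        return 0.1
    if step < 240_000:
        return 0.01
    return 0.001
```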
By utilizing the MS-Celeb-1M dataset and the AM-Softmax loss function in Equation~(\ref{eq:loss_additive}), our Face-ResNet network achieves $99.67\%$ accuracy under the standard verification protocol of LFW and a Verification Rate (VR) of $99.60\%$ at a False Accept Rate (FAR) of $0.1\%$ under the BLUFR~\cite{liao2014benchmark} protocol.
We name our method DocFace. In Section~\ref{sec:exp_explore}, we first conduct a few exploratory experiments on the ID-Selfie-A dataset to compare different approaches for training an ID-Selfie matcher and to justify the efficacy of the proposed methods. Then, in Section~\ref{sec:exp_compare}, we compare the performance of DocFace with existing general face matchers on the ID-Selfie-A dataset. In Section~\ref{sec:exp_dataset}, by training the model on different subsets of the ID-Selfie-A dataset, we show that the performance increases steadily with the size of the target-domain dataset. Finally, using the model trained on the ID-Selfie-A dataset, we conduct a cross-dataset evaluation on the ID-Selfie-B dataset.
For all experiments on the ID-Selfie-A dataset, five-fold cross validation is conducted to evaluate the performance and robustness of the methods. The dataset is divided into five equal splits; in each fold, one split is used for testing while the remaining four are used for training. In particular, $7,932$ and $1,983$ pairs are used for training and testing, respectively, in each fold. We use the whole ID-Selfie-B dataset for cross-dataset evaluation. Cosine similarity is used as the comparison score in all experiments.
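The five-fold protocol can be sketched as follows (illustrative NumPy code; the helper name and the random seed are ours, and we assume a simple random partition of the $9,915$ aligned pairs):

```python
import numpy as np

def five_fold_splits(n_pairs, seed=0):
    """Yield (train, test) index arrays: the k-th of five equal splits
    is held out for testing; the other four are used for training."""
    idx = np.random.default_rng(seed).permutation(n_pairs)
    folds = np.array_split(idx, 5)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test

# the 9,915 aligned ID-selfie pairs give 7,932 training and
# 1,983 testing pairs per fold
splits = list(five_fold_splits(9915))
```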
\subsection{Exploratory Experiments}
\label{sec:exp_explore}
\begin{table}[t]
\scriptsize
\begin{center}
\begin{tabularx}{\linewidth}{ccccc}
\toprule
Model & Loss & Sibling & VR(\%) & VR(\%) \\
& & Networks & @FAR$=0.01\%$ & @FAR$=0.1\%$ \\
\midrule
FS & MPS & Yes & $0.03\pm0.04$ & $0.07\pm0.04$ \\
BM & - & No & $67.88\pm1.72$ & $82.06\pm1.40$ \\
TL & L2-Softmax & Yes & $70.53\pm1.73$ & $85.15\pm1.38$ \\
TL & AM-Softmax & Yes & $71.07\pm1.81$ & $85.24\pm1.52$ \\
TL & MPS & No & $85.71\pm1.29$ & $92.51\pm1.13$ \\
TL & MPS & Yes & $\mathbf{86.27\pm1.39}$ & $\mathbf{92.77\pm1.03}$ \\
\bottomrule
\end{tabularx}
\end{center}
\vspace{-1.5em}
\caption{\small Performance of different approaches for developing an ID face matcher on the ID-Selfie-A dataset. ``FS'', ``BM'' and ``TL'' refer to ``from scratch'', ``base model'' and ``transfer learning'', respectively. ``VR'' refers to Verification Rate. For the pre-trained base model, because no training on the target domain is involved, the loss function entry is left blank.}
\label{tab:training_strategies}
\vspace{-1.5em}
\end{table}
In this section, using the ID-Selfie-A dataset, we compare different ways to develop an ID-Selfie face matcher. First, we consider approaches without transfer learning: (1) a network trained from scratch with the same architecture and MPS loss function, and (2) the base model pre-trained on MS-Celeb-1M but not fine-tuned. To justify the efficacy of the proposed MPS loss function, we also fine-tune the base model on the ID-Selfie dataset using two other loss functions: L2-Softmax~\cite{ranjan2017l2} and AM-Softmax~\cite{wang2018additive}, both of which have achieved successful results in unconstrained face recognition. Finally, using the base model and the MPS loss function, we compare the performance of sibling networks, i.e. different parameters for $\mcG$ and $\mcH$, with that of a shared network, i.e. $\mcG=\mcH$. As mentioned in Section~\ref{sec:exp_settings}, all experiments are conducted with five-fold cross validation and we report the average performance and standard deviation.
The results are shown in Table~\ref{tab:training_strategies}. Because the ID-Selfie-A dataset is small, the model trained from scratch (FS) overfits heavily and performs very poorly. A similar result was observed even when we tried to train a smaller network from scratch. In comparison, the base model (BM) pre-trained on MS-Celeb-1M performs much better even before fine-tuning. This confirms that the features learned by the neural network are transferable and can be helpful for developing domain-specific matchers with a small dataset. The performance is further improved by transfer learning (TL). Although both L2-Softmax and AM-Softmax lead to an improvement in performance, our proposed MPS loss outperforms the pre-trained model even more significantly. This is because our loss function is specially designed for the problem and directly maximizes the margin of pairwise scores rather than classification probability. Finally, we find that using a pair of sibling networks $\mcG$ and $\mcH$ slightly outperforms using a shared network. This suggests that learning separate domain-specific models for ID photos and selfies helps the system learn more discriminative low-level features, leading to better face representations in the shared embedding space.
\begin{table}[t]
\scriptsize
\begin{center}
\begin{tabularx}{\linewidth}{Xccc}
\toprule
Method & \multicolumn{2}{c}{VR(\%) on ID-Selfie-A} & VR(\%) on LFW \\
\cmidrule(lr){2-3}\cmidrule(lr){4-4}
& @FAR$=0.01\%$ & @FAR$=0.1\%$ & @FAR$=0.1\%$ \\
\midrule
COTS & $27.32\pm1.46$ & $46.33\pm1.61$ & $92.01$ \\
CenterFace~\cite{wen2016discriminative} & $28.02\pm1.93$ & $60.10\pm1.68$ & $91.70$ \\
SphereFace~\cite{liu2017sphereface} & $34.76\pm0.88$ & $61.14\pm0.82$ & $96.74$ \\
\textit{DocFace} & $\mathbf{86.27\pm1.39}$ & $\mathbf{92.77\pm1.03}$ & - \\
\bottomrule
\end{tabularx}
\end{center}
\vspace{-1.5em}
\caption{\small Comparison of the proposed method with existing general face matchers on the ID-Selfie-A dataset under the five-fold cross validation protocol. ``VR'' refers to Verification Rate. For comparison, we report the performance of existing matchers on LFW according to the BLUFR protocol~\cite{liao2014benchmark}. The proposed model is shown in italics.}
\label{tab:existing_methods}
\vspace{-1.5em}
\end{table}
\subsection{Comparison with Existing Matchers}
\label{sec:exp_compare}
We evaluate the performance of existing general face matchers on the ID-Selfie-A dataset and compare them with the proposed method. To make our experiments comprehensive, we compare our method not only with a Commercial-Off-The-Shelf (COTS) matcher, but also with two open-source matchers representing state-of-the-art unconstrained face recognition: CenterFace\footnote{\url{https://github.com/ydwen/caffe-face}}~\cite{wen2016discriminative} and SphereFace\footnote{\url{https://github.com/wy1iu/sphereface}}~\cite{liu2017sphereface}. Because the existing methods do not involve training, only the test split of each fold is used during the five-fold cross validation. For comparison, we also report the performance of the existing matchers on the unconstrained face dataset LFW~\cite{LFWTech}, using the BLUFR protocol~\cite{liao2014benchmark}.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\linewidth]{fig/false_accept.png}
\vspace{-2.0em}\caption{False accept pairs}
\end{subfigure}\\
\vspace{0.5em}
\begin{subfigure}[b]{\linewidth}
\centering
\includegraphics[width=\linewidth]{fig/false_reject.png}
\vspace{-2.0em}\caption{False reject pairs}
\end{subfigure}
\vspace{-1.5em}
\caption{Examples of falsely classified image pairs by our model on the ID-Selfie-A dataset at FAR=$0.1\%$.}
\label{fig:false_classification_images}
\vspace{-1.0em}
\end{figure}
The results are shown in Table~\ref{tab:existing_methods}. Although all the existing methods perform well on the unconstrained face dataset, there is a large performance drop when they are tested on the ID-Selfie-A dataset. This is consistent with our observation in Section~\ref{sec:intro} that the image characteristics and difficulties of the two problems are very different. In comparison, the proposed method significantly improves the performance on the target problem.
Some false accept and false reject image pairs of our model are shown in Figure~\ref{fig:false_classification_images}. From the figure, we can see that in most of the falsely rejected genuine pairs, the selfies either show makeup or an appearance that has changed drastically with aging. Besides, many impostor pairs look surprisingly similar, and because of the low quality of the ID image, it is hard to find fine-grained clues to tell that they are actually different people.
\subsection{Effect of Dataset Size}
\label{sec:exp_dataset}
In the previous sections, we fixed the dataset size and conducted cross validation to test the performance of different matchers and training strategies. Here we explore how much the size of the training dataset affects our domain-specific network and whether there is potential for improvement by acquiring more training data. We conduct the same five-fold cross validation, where in each fold we keep the test split unchanged but randomly select a subset of the ID-selfie pairs in the training splits, and report the average performance across the five folds. In particular, we select $1,000$, $3,000$, $5,000$ and all ($7,932$) image pairs for training. The resulting TAR as a function of the dataset size is shown in Figure~\ref{fig:training_size}. For both FAR=$0.01\%$ and FAR=$0.1\%$, the performance keeps increasing as the training dataset becomes larger. Notice that we increase the size of the dataset linearly, which means the relative growth rate of the dataset size is decreasing, yet we still observe a trend of increasing performance for larger datasets. More performance gain could be expected if we could increase the size of the dataset by one or two orders of magnitude.
\begin{figure}
\center
\includegraphics[width=\linewidth]{fig/tar_data.pdf}
\caption{Performance when training on subsets of different sizes on ID-Selfie-A dataset. The subsets are randomly selected from the training splits. The performance is reported by taking the average of the five folds.}
\label{fig:training_size}
\end{figure}
\subsection{Cross-dataset Performance Evaluation}
\label{sec:exp_crosseval}
Although matching ID document photos to selfies from stationary cameras is an important application, in many other scenarios the selfies could be captured by different devices, including different mobile phones. An ideal model should perform robustly in all these cases. Therefore, we train a model on the entire ID-Selfie-A dataset and test it on the ID-Selfie-B dataset, whose selfies come from different sources. In testing, for subjects in the ID-Selfie-B dataset that have more than one selfie image, we fuse their feature vectors by taking the average vector. The results are shown in Table~\ref{tab:performance_dataset_b}. For comparison, we also show the performance of the existing methods. Our model performs best on ID-Selfie-B, also surpassing the base model that has not been fine-tuned on the ID-Selfie-A dataset. This indicates that the face representation learned from the ID-Selfie-A dataset is not only discriminative for ID vs. stationary camera pairs, but also useful for other ID-selfie datasets. It also suggests that training on a mixed dataset of images from different sources could benefit performance on all the sub-problems.
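The template-fusion step above (averaging a subject's selfie feature vectors before matching) can be sketched in a few lines. The function names and the cosine-similarity scoring are illustrative assumptions, not code from the paper.

```python
import numpy as np

def fuse_templates(features):
    """Fuse several selfie feature vectors of one subject into a single
    template by averaging, then re-normalize to unit length.
    `features` is a (k, d) array of k feature vectors."""
    fused = np.mean(np.asarray(features, dtype=np.float64), axis=0)
    return fused / np.linalg.norm(fused)

def cosine_score(a, b):
    """Cosine similarity between two templates (hypothetical scoring;
    the comparison metric is an assumption, not stated in the text)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A subject with a single selfie would be matched directly, while a subject with several selfies is matched through the fused template.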
\begin{table}[t]
\scriptsize
\begin{center}
\begin{tabularx}{\linewidth}{Xccc}
\toprule
Method & VR(\%)@FAR$=0.01\%$ & VR(\%)@FAR$=0.1\%$ \\
\midrule
COTS & $13.97$ & $30.91$ \\
CenterFace~\cite{wen2016discriminative} & $17.69$ & $35.20$ \\
SphereFace~\cite{liu2017sphereface} & $34.82$ & $54.19$ \\ \hline
\textit{Base model} & $70.87$ & $86.77$ \\
\textit{DocFace} & $\mathbf{78.40}$ & $\mathbf{90.32}$ \\
\bottomrule
\end{tabularx}
\end{center}
\vspace{-1.5em}
\caption{\small Cross-dataset evaluation on the ID-Selfie-B dataset. The ``base model'' is only trained on MS-Celeb-1M. The \textit{DocFace} model has been fine-tuned on ID-Selfie-A. Our models are shown in italics.}
\label{tab:performance_dataset_b}
\vspace{-1.5em}
\end{table}
\section{Conclusion}
In this paper, we propose a new method, DocFace, which uses transfer learning with a new loss function, the Max-margin Pairwise Score (MPS) loss, to fine-tune a pair of sibling networks for the ID document photo matching problem. Using two private datasets, we evaluate the performance of DocFace and existing unconstrained face matchers on the ID document matching problem. Experimental results show that general face matchers perform poorly on this problem because it involves difficulties absent from unconstrained face recognition, and that DocFace significantly improves the matching performance. We also show that testing performance increases steadily with the size of the training set, which implies that additional training data could lead to better recognition performance.
{\small
\bibliographystyle{ieee}
\section{Introduction}
Quantum key distribution (QKD)~\cite{QKD} is one of the most practical quantum information technologies; it allows two distant legitimate parties, named Alice and Bob, to generate unconditionally secure keys despite the presence of an eavesdropper Eve. According to how the key information is encoded, there are mainly two kinds of QKD systems: those that encode key information on discrete variables (DV)~\cite{BB84,E91} and those that encode it on continuous variables (CV)~\cite{GG02,2004NOSW,2014LZ,2015SP,2014ZYC}. We focus on CV-QKD systems because they have the advantage of using standard telecommunication technologies~\cite{2012RMP,Paul13,2015en,Zhang2017}. A practical CV-QKD system includes two parts: quantum communication and classical communication. In the first part, the sender Alice prepares quantum states and sends them to the receiver Bob through an untrusted quantum channel that can be eavesdropped by Eve; Bob then measures the quantum states with a homodyne or heterodyne detector. The second part is generally the postprocessing of CV-QKD systems, which mainly contains sifting, parameter estimation~\cite{PE1,PE2}, information reconciliation~\cite{IR1,WangRA,WangHSEC} and privacy amplification~\cite{PA1,PA2,PA3}, and is implemented through a classical authenticated channel.
Privacy amplification is an indispensable step in postprocessing, used to distill the final secret keys from the identical corrected keys shared by Alice and Bob. This process is implemented using universal hash families, which can compress keys~\cite{Hash1,Hash2}. The Toeplitz matrix is one such universal hash family~\cite{Toeplitz}. Its structure is simple and it can be implemented in parallel. For a high-speed real-time CV-QKD system, the implementation speed of privacy amplification is one of the limitations. Some approaches can speed up the process in software, such as the fast Fourier transform (FFT) and the number theoretic transform~\cite{NTT}. Other methods have also been used to accelerate the process. In Ref.~\cite{ZhangCM}, four basic multiplication algorithms are used at different input lengths to achieve a fast implementation. In addition to software implementation, the process can also be implemented in hardware, such as field-programmable gate arrays (FPGA)~\cite{FPGA}.
Currently, the main bottleneck of CV-QKD systems is the postprocessing of information, especially for high-efficiency and high-speed real-time secure implementation, and it strongly affects the secret key rate and transmission distance of CV-QKD systems~\cite{long11,Slice1}. We achieve high efficiency by combining multidimensional reconciliation~\cite{MR} and multi-edge type low density parity check (MET-LDPC) codes~\cite{METLDPC}. However, MET-LDPC codes require long block lengths (on the order of $10^6$), large iteration numbers and a soft-decision decoding algorithm to correct errors at low signal-to-noise ratios~\cite{WangRA}. Generally, this is more suitable for software implementation, which has the advantages of large memory and high floating-point precision compared to a hardware-based decoder. We obtain a high error correction speed for CV-QKD systems based on a graphics processing unit (GPU), because it provides parallel computation and is suitable for processing floating-point information~\cite{WangHSEC}. To obtain high-speed real-time CV-QKD systems and simplify the postprocessing operations, we also implement high-speed privacy amplification on GPU. Due to the high computational complexity of matrix multiplication, we convert the matrix multiplication into polynomial multiplication and accelerate it with the FFT, which reduces the computational complexity from O($n^2$) to O($n\log_{2}n$).
Furthermore, the input length of privacy amplification is required to be large enough to ensure the security of CV-QKD systems when considering the finite-size effect~\cite{PE1,ZhangXY}. Normally, the input length increases with the transmission distance when the security parameter is fixed. The security parameter represents the failure probability of the system, meaning that the eavesdropper Eve can obtain the secret keys with a probability no larger than the value of the security parameter. Typically, it is fixed at $10^{-10}$. In practice, when the transmission distances are about 50 km, 80 km, and 100 km, the input lengths are on the order of $10^8$, $10^9$, and $10^{10}$, respectively~\cite{Paul13,Zhang2017}. However, the computational power of common computers is limited by memory size and other factors, and cannot perform long-input-block-length privacy amplification directly. Thus, we propose a length-compatible method to satisfy the length requirements of different transmission distances. This method divides the long input into short blocks, performs privacy amplification on each block separately, and finally merges the corresponding results. The correctness of the FFT results depends on the input length and the computational precision. As the input length increases, the required computational precision of the FFT increases as well; otherwise the results will be wrong and privacy amplification will fail. Moreover, the higher the precision, the slower the computation. To solve this problem, we separate the long input into short blocks that can be correctly calculated at low precision. Meanwhile, in order to make full use of the computing and memory resources of the GPU and further accelerate privacy amplification, the short blocks can be calculated in parallel by setting batches.
In this paper, we propose a high-speed implementation method of length-compatible privacy amplification for CV-QKD systems. An early stage of this work was applied to the longest field test of a high-speed real-time CV-QKD system, and we further improve it in this work~\cite{Zhang2017}. The principle of privacy amplification for CV-QKD systems is described in Section 2. In Section 3, we present the high-speed implementation method of length-compatible privacy amplification based on GPU. The results under different input lengths are given in Section 4, and the paper is concluded in Section 5.
\section{The Principle of Privacy Amplification for CV-QKD system}
Privacy amplification is one of the important steps in symmetric-key quantum cryptography, providing an approach for Alice and Bob to distill unconditionally secure keys from shared corrected keys in order to securely encrypt/decrypt information. This step is realized over a classical public channel, which is assumed to be authenticated, so that the adversary Eve can eavesdrop on the messages exchanged between the legitimate parties Alice and Bob but cannot modify them without being discovered. Typically, the privacy amplification algorithm is realized by universal hash families, which provide information-theoretic security for CV-QKD systems. For a practical system, the finite-size effect on the security of privacy amplification cannot be ignored, and the required size is related to the transmission distance, the security parameter, and other factors. In this section, we first introduce the universal hash family used in our system, then describe the finite-size effect of privacy amplification, and finally present the implementation procedure of the privacy amplification algorithm.
\subsection{Universal Hash Families}
An important application of universal hashing is privacy amplification in quantum cryptography. A hash function is selected at random from a universal hash family to extract secure keys from corrected keys at a low collision probability. A collision means that different input data yield the same output keys after universal hashing. In other words, even if the adversary Eve does not have the same corrected keys as Alice and Bob, she may still obtain the same secret keys with a certain probability. Thus, the collision probability should be as small as possible; otherwise the security of the secret keys cannot be guaranteed. Its value is determined by three factors: the hash function selected from the universal family, the input length, and the output length of the hash function. Different hash functions have different implementation complexities. Thus, choosing a hash function with low collision probability and low complexity is of great significance for the security and the speed of privacy amplification in CV-QKD systems.
The Toeplitz matrix is one of the universal hash functions. It is also called a diagonal-constant matrix, meaning that the elements on each diagonal running from the upper left to the lower right of the matrix are constant. Therefore, all elements of the matrix are determined once the elements of its first column and first row are determined. For instance, Eq. (\ref{toeplitz}) is a Toeplitz matrix.
\begin{equation}
T={
\left[ \begin{array}{cccccc}
t_0 & t_n & t_{n+1} & \cdots & \cdots & t_{2n-2}\\
t_1 & t_0 & t_n & \ddots & & \vdots\\
t_2 & t_1 & \ddots & \ddots & \ddots & \vdots\\
\vdots & \ddots & \ddots & \ddots & t_n & t_{n+1}\\
\vdots & & \ddots & t_1 & t_0 & t_n\\
t_{n-1} & \cdots & \cdots & t_2 & t_1 & t_0
\end{array}
\right ].}
\label{toeplitz}
\end{equation}
Eq. (\ref{toeplitz}) is a square matrix of size $n\times n$. Denoting the $(i,j)$th element of $T$ by $T_{i,j}$, the diagonal-constant structure gives $T_{i,j}=T_{i+1,j+1}$; specifically, $T_{i,j}=t_{i-j}$ in the lower triangular part ($i\geq j$) and $T_{i,j}=t_{j-i+n-1}$ in the upper triangular part ($i<j$), for $0 \leq i,j \leq n-1$. The degrees of freedom of a Toeplitz matrix are therefore $2n-1$, rather than $n^2$. In fact, a Toeplitz matrix is not necessarily square. The collision probability of Toeplitz hashing is $n\cdot 2^{-m+1}$, where $n$ is the input length and $m$ is the output length. The Toeplitz matrix has low complexity, which supports a high-speed implementation of privacy amplification.
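As a concrete illustration of the diagonal-constant structure, a seed of $n+m-1$ random bits fully determines the hash. The following sketch (with the matrix in $m\times n$ orientation, the transpose of the $n\times l$ convention used later) is an illustrative toy, not the system's implementation.

```python
import numpy as np

def toeplitz_hash(seed, x, m):
    """Compress the n input bits x to m output bits with the Toeplitz
    matrix T[i, j] = seed[n - 1 + i - j] over GF(2); `seed` must hold
    n + m - 1 bits, i.e. the first row and first column of T."""
    n = len(x)
    assert len(seed) == n + m - 1
    seed = np.asarray(seed, dtype=np.uint8)
    x = np.asarray(x, dtype=np.uint8)
    # Row i of T is the reversed length-n window seed[i : i + n].
    T = np.stack([seed[i:i + n][::-1] for i in range(m)])
    return T.dot(x.astype(np.int64)) % 2
```

Building the explicit matrix already costs O($nm$); the point of the FFT method in Section 3 is to evaluate this product without materializing $T$.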
\subsection{Finite-Size Effect of Privacy Amplification}
The finite-size effect has to be considered to guarantee the security of the final secret keys obtained by CV-QKD systems. We mainly analyse the finite-size effect on the privacy amplification procedure. Assume that $x$ and $y$ are the classical data of Alice and Bob after they measure their quantum states, and $E$ is the quantum state of the eavesdropper Eve. The final secret key rate of a CV-QKD system is expressed as follows~\cite{PE1}:
\begin{equation}
k=\beta I(x:y)-S(y:E)-\Delta(n),
\label{keyrate}
\end{equation}
where $\beta$ is the reconciliation efficiency, $I(x:y)$ is the Shannon mutual information between Alice's and Bob's data, $S(y:E)$ is the quantum (von Neumann) mutual information between Bob and Eve for reverse reconciliation, and $\Delta(n)$ accounts for the effect of the finite size on the security of privacy amplification. Its value can be calculated by~\cite{PE1}:
\begin{equation}
\Delta(n)\equiv (2\dim\mathcal{H}_x+3)\sqrt{\frac{\log_2(2/\overline{\epsilon})}{n}}+\frac{2}{n}\log_2(1/\epsilon_{PA}),
\label{delta}
\end{equation}
where $\dim\mathcal{H}_x$ is the dimension of the Hilbert space corresponding to the raw key $x$, $\overline{\epsilon}$ is a smoothing parameter, $\epsilon_{PA}$ is the failure probability of privacy amplification, and $n$ is the input block length of privacy amplification. The parameter $\epsilon_{PA}$ corresponds to the collision probability of the selected universal hash function. The parameters $\overline{\epsilon}$ and $\epsilon_{PA}$ are intermediate parameters that can be optimized to small values satisfying the security requirement of CV-QKD systems.
As shown in Eq.(\ref{keyrate}), the term $\Delta(n)$ reduces the final secret key rate. Generally, the secret key rate decreases as the transmission distance increases for CV-QKD systems, so the impact is more pronounced for long-distance systems; in other words, it affects the maximum transmission distance of CV-QKD systems. To achieve a high secret key rate and support long-distance CV-QKD systems, the value of $\Delta(n)$ should be as small as possible. As can be seen from Eq.(\ref{delta}), the input block length has a great influence on the value of $\Delta(n)$ when the other factors are fixed. The value of $n$ should be as large as possible so that $\Delta(n)$ has a small impact on the final secret key rate. However, if the block length is too long, implementing privacy amplification directly becomes unrealistic, so a compromise block length must be selected. As in the previous demonstration, to satisfy the security requirement of privacy amplification when considering the finite-size effect, the block lengths are required to be on the order of $10^8$, $10^9$, and $10^{10}$ when the transmission distances are about 50 km, 80 km, and 100 km, respectively.
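A quick way to see the block-length requirement is to evaluate Eq.(\ref{delta}) numerically. The sketch below uses illustrative parameter values ($\dim\mathcal{H}_x=2$, $\overline{\epsilon}=\epsilon_{PA}=10^{-10}$), not the optimized choices of a deployed system.

```python
import math

def finite_size_delta(n, dim_Hx=2, eps_bar=1e-10, eps_pa=1e-10):
    """Finite-size correction Delta(n) for privacy amplification with
    input block length n (parameter defaults are illustrative)."""
    return (2 * dim_Hx + 3) * math.sqrt(math.log2(2 / eps_bar) / n) \
        + (2 / n) * math.log2(1 / eps_pa)
```

Since the first term dominates, $\Delta(n)$ shrinks roughly as $1/\sqrt{n}$, so larger blocks cost proportionally less key rate.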
\subsection{The Procedure of Privacy Amplification}
Privacy amplification is used to extract unconditionally secure keys from the shared weak keys between Alice and Bob and to eliminate the information that Eve may have obtained through side channels. The procedure is performed by choosing a hash function from a universal family to compress the shared weak keys, and requires a classical authenticated channel. As described above, we choose the Toeplitz matrix as the hash function and take the finite-size effect of privacy amplification into account. The detailed procedure is as follows.
Assume that $u_A$ and $u_B$ are the shared weak keys of Alice and Bob after error correction, with length $n$; $r_A$ and $r_B$ are their final secret keys, with length $l$; and $T$ is the Toeplitz matrix of size $n\times l$. The length of the strings $u_A$ and $u_B$ is chosen by considering the finite-size effect of privacy amplification. The length of the strings $r_A$ and $r_B$ is calculated by $l=\lfloor n\times k \rfloor$, where $k$ is the secret key rate.
{\it Step~1}: Alice randomly generates a uniform string of length $n+l-1$ to construct the Toeplitz matrix $T$. She sends $T$ to Bob. She calculates $r_A=u_AT$ to obtain her final secret keys.
{\it Step~2}: Bob receives $T$ from Alice. He performs the same calculation $r_B=u_BT$ to obtain his final keys.
The strings $u_A$ and $u_B$ are identical after the error correction procedure, except for a small probability that error correction fails without Alice and Bob detecting it. This probability can be made arbitrarily small by performing an error verification procedure. Thus, the final secret keys of Alice and Bob can be considered identical. In practice, an authentication step is performed to further ensure that the final secret keys of Alice and Bob are identical, before they are used to encrypt or decrypt information in quantum cryptography.
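The two steps above can be sketched end to end. The toy sizes are illustrative (a real system uses $n$ on the order of $10^8$ and the FFT-based product of Section 3 instead of an explicit matrix).

```python
import numpy as np

rng = np.random.default_rng(0)
n, l = 64, 16                       # toy sizes for illustration only

# Shared corrected keys, identical after error verification
u_A = rng.integers(0, 2, n, dtype=np.uint8)
u_B = u_A.copy()

# Step 1: Alice draws n + l - 1 uniform bits and builds the n x l
# Toeplitz matrix T (T[i, j] = seed[n - 1 + j - i]), then compresses.
seed = rng.integers(0, 2, n + l - 1, dtype=np.uint8)
T = np.stack([seed[j:j + n][::-1] for j in range(l)]).T
r_A = u_A.astype(np.int64).dot(T) % 2

# Step 2: Bob receives the same T and repeats the calculation.
r_B = u_B.astype(np.int64).dot(T) % 2

assert np.array_equal(r_A, r_B)     # identical final secret keys
```

Because the seed travels over the authenticated public channel, Eve learns $T$ but, by the universal-hashing guarantee, gains essentially no information about $r_A$.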
\section{High Speed Implementation of Length-Compatible Privacy Amplification}
High-speed privacy amplification is required to support real-time CV-QKD systems. The implementation speed is related to the input length, the universal hash function, the implementation platform, the calculation precision, and other factors. Because the complexity of the Toeplitz matrix is low, we choose it as the hash function. A GPU provides parallel computation and is a suitable platform for the postprocessing of CV-QKD systems; we have already achieved high-speed error correction on it. To simplify the postprocessing implementation, we also implement the privacy amplification process on the GPU. However, the computational complexity is extremely high when performing long-input-length privacy amplification directly in software. The FFT is used to accelerate the process in our system, reducing the complexity from O($n^2$) to O($n\log_2n$). It is difficult and impractical to perform the process directly when the block length is too long. On the other hand, the required calculation precision of the FFT increases with the block length, which affects the correctness and speed of the FFT. Thus, a length-compatible algorithm is proposed to solve this problem.
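The FFT shortcut works because a Toeplitz-matrix product is a slice of a linear convolution. The sketch below (NumPy, double precision, $m\times n$ orientation) illustrates the idea with a power-of-two FFT length of at least $2n+m-2$, matching the zero-extension used in Step 1 of this section; it is a model of the technique, not the GPU code.

```python
import numpy as np

def toeplitz_matvec_fft(seed, x, m):
    """Multiply the m x n Toeplitz matrix T[i, j] = seed[n - 1 + i - j]
    (seed of length n + m - 1) by the bit vector x via the FFT.
    y[i] = sum_j seed[n - 1 + i - j] * x[j] is entry n - 1 + i of the
    linear convolution seed * x, so the cost drops from O(n^2) to
    O(n log2 n). Rounding recovers exact integers as long as the
    floating-point precision suffices for the chosen FFT length."""
    seed = np.asarray(seed, dtype=np.float64)
    x = np.asarray(x, dtype=np.float64)
    n = len(x)
    size = 1 << (2 * n + m - 3).bit_length()   # power of two >= 2n + m - 2
    conv = np.fft.irfft(np.fft.rfft(seed, size) * np.fft.rfft(x, size), size)
    return np.rint(conv[n - 1:n - 1 + m]).astype(np.int64) % 2
```

The rounding step is exactly where the precision limit discussed later bites: with single precision, the accumulated FFT error exceeds 0.5 once the transform length passes roughly $2^{23}$ points, and the rounded bits become wrong.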
\begin{figure}[t]
\centering
\includegraphics[width=36pc]{PA}
\caption{The process of length-compatible privacy amplification algorithm. The shared weak keys and the Toeplitz matrix are divided into $p$ blocks. $SWK_i$ and $T_i$ ($i=1,2,\cdots,p$) represent the $i$th block of them. Each of the block individually performs privacy amplification and obtains the intermediate keys. The final secret keys can be obtained by modulo-2 addition among all the intermediate keys.}
\label{PA1}
\end{figure}
The process of the length-compatible privacy amplification algorithm is shown in Fig.~\ref{PA1}. The shared weak keys ({\it i.e.} corrected keys) are obtained from the error correction step, which has been implemented on GPU. After GPU-based decoding, the corrected keys would normally be copied from device to host; because we also perform the privacy amplification procedure on the GPU platform, this copy is no longer needed. In a practical CV-QKD system, to achieve unconditionally secure keys, the finite-size effect must be considered, so the input length of privacy amplification is long, especially in long-distance systems. In practice, the input lengths are required to be on the order of $10^8$, $10^9$, and $10^{10}$ when the transmission distances are about 50 km, 80 km, and 100 km, respectively. The length grows further with the distance, and it is unrealistic to complete the whole process at one time on a normal platform. Thus, we divide the input data, including the shared weak keys and the Toeplitz matrix (universal hash function), into many blocks, execute them separately to obtain the intermediate keys, and finally obtain the final secret keys by modulo-2 addition of all the intermediate keys. The shared weak keys are divided into $p$ blocks sequentially. As shown in Eq.(\ref{Ti}) (the symbol ``$\prime$" represents matrix transpose, here and in Eq.(\ref{Tij})), the Toeplitz matrix is divided into $p$ sub-matrices by rows, while the columns are not divided. If the length of the final secret keys is long, the columns may also need to be divided.
\begin{equation}
T=[T_0,T_1,\cdots,T_i,\cdots,T_{p-1}]^{\prime}.
\label{Ti}
\end{equation}
According to the above method, privacy amplification of arbitrary length can be realized. Since the FFT is used to accelerate the procedure, the calculation precision must match the length. For example, if the FFT length is below $2^{23}$, single-precision floating-point is enough to achieve correct results. However, if the length is larger than $2^{23}$, double precision or even long-double precision must be used; otherwise the results will be wrong. The probability of error increases with the input length, while the FFT speed decreases as the precision increases. Thus, to improve the implementation speed and decrease the required calculation precision, we further divide each block ($SWK_i$ and $T_i$, $i=1,2,\ldots,p$) into $q$ batches ($SWK_{ij}$ and $T_{ij}$, $i=1,2,\ldots,p$, $j=1,2,\ldots,q$). The division of the matrix $T_i$ is shown in Eq.(\ref{Tij}). The computing and memory resources of the GPU cannot be fully utilized if the FFT length is too short; to solve this problem, we use multiple batches to perform independent FFTs in parallel. The procedure of accelerating length-compatible privacy amplification with the FFT is shown in Fig.~\ref{FFT} and is described as follows.
\begin{equation}
T_i=[T_{i0},T_{i1},\cdots,T_{ij},\cdots,T_{i(q-1)}]^{\prime}.
\label{Tij}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=32pc]{FFT}
\caption{The procedure of accelerating length-compatible privacy amplification with FFT. The shared weak keys and the Toeplitz matrix are divided into $p$ blocks. Each block is further divided to $q$ batches. $SWK_{ij}$ and $T_{ij}$ ($i=1,2,\cdots,p$, $j=1,2,\cdots, q$) represent the $j$th batch in the $i$th block. IFFT is the inverse transformation of FFT. $K_{ij}$ refers to the intermediate keys of $j$th batch in the $i$th block. The blocks are serially performed. All the batches in a block are performed in parallel on GPU.}
\label{FFT}
\end{figure}
{\it Step~1}: The length of the shared weak keys is extended from $n$ to $2n+l-2$ by zero-padding at the end. Similarly, the length of the Toeplitz matrix is extended from $n+l-1$ to $2n+l-2$.
{\it Step~2}: According to the input length of privacy amplification (including the lengths of the extended shared weak keys and the extended Toeplitz matrix) and the resources of the GPU (mainly the memory size), calculate the number of blocks $p$, then the number of batches $q$.
{\it Step~3}: Perform the FFT for each $SWK_{ij}$ and $T_{ij}$, $i=1,2,\ldots,p$, $j=1,2,\ldots,q$, multiply the corresponding results element-wise, perform the IFFT on the products, and finally take the IFFT results from the $n$th to the $(n+l-1)$th positions as the intermediate keys $K_{ij}$ of the current batch. The purpose of this step is to calculate all the intermediate keys $K_{ij}$ by using the FFT and IFFT. The result of $K_{ij}$ is:
\begin{equation}
K_{ij} = SWK_{ij} \times T_{ij}.
\end{equation}
The final secret keys are obtained by modulo-2 addition of all the $K_{ij}$:
\begin{equation}
r=\sum_{i=1}^{p}\sum_{j=1}^{q}K_{ij} \bmod 2.
\label{finalkey}
\end{equation}
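The block splitting and the modulo-2 merge of Eq.(\ref{finalkey}) can be verified with a toy sketch. Here a generic binary matrix stands in for $T$ (each row-block $T_i$ of a Toeplitz matrix is itself Toeplitz, which is what allows the per-batch FFT of Step 3), and the sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
n, l, p = 96, 24, 4                 # toy sizes; block length n // p

u = rng.integers(0, 2, n, dtype=np.uint8)       # shared weak keys
T = rng.integers(0, 2, (n, l), dtype=np.uint8)  # stand-in hash matrix

# Direct product r = u T mod 2 (infeasible at n ~ 1e10)
r_direct = u.astype(np.int64).dot(T) % 2

# Length-compatible: hash each block separately, then XOR the
# intermediate keys together (modulo-2 accumulation).
step = n // p
r = np.zeros(l, dtype=np.int64)
for i in range(p):
    u_i = u[i * step:(i + 1) * step].astype(np.int64)
    T_i = T[i * step:(i + 1) * step, :]
    r = (r + u_i.dot(T_i)) % 2

assert np.array_equal(r, r_direct)
```

The identity holds because $uT=\sum_i u_iT_i$ distributes over the row blocks, so the blocks can be processed serially with bounded memory.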
\section{Results}
We implement high-speed length-compatible privacy amplification on GPU, which supports high-speed secure CV-QKD systems and can achieve a high secret key rate. The speed is related to the GPU resources, the input length, the calculation precision of the FFT, etc.; we analyze the effects of these factors in different circumstances. The finite-size effect of privacy amplification is considered to ensure the security of CV-QKD systems. The length-compatible method is used to support long-input-length privacy amplification and to decrease the required calculation precision of the FFT. Naturally, low-precision processing is faster than high-precision processing: low-precision calculation occupies fewer resources and has lower computational complexity. However, half-precision floating-point arithmetic is a recent development by the NVIDIA corporation; it requires a specific type of GPU to realize its advantages and is not universal to all GPUs. Although our GPU supports half-precision calculation, it cannot exploit these advantages: the processing speed actually declines rather than improves. Thus, we choose single precision to perform the calculation.
\begin{table}[!t]
\centering
\renewcommand\arraystretch{1.21}
\caption{The speed of privacy amplification under different input length.}
\label{tab1}
\begin{tabular}{|c|ccc|ccc|ccc|}
\hline
{\bf Input} & \multicolumn{3}{c|}{\bf Batch size 1Mbits} & \multicolumn{3}{c|}{\bf Batch size 2Mbits} &\multicolumn{3}{c|}{\bf Batch size 4Mbits}\\
\cline{2-10}
{\bf length} & {\bf Number of} & {\bf Time} & {\bf Speed} & {\bf Number of} & {\bf Time} & {\bf Speed} & {\bf Number of} & {\bf Time} & {\bf Speed} \\
{\bf (Mbits)} & {\bf batches} & {\bf (ms)} & {\bf (Gbps)} & {\bf batches} & {\bf (ms)} & {\bf (Gbps)} & {\bf batches} & {\bf (ms)} & {\bf (Gbps)}\\
\hline
4 & 4 & 3.2 & 1.25 & 2 & 3.2 & 1.25 & 1 & 3.2 & 1.25 \\
8 & 8 & 6.0 & 1.33 & 4 & 6.0 & 1.33 & 2 & 6.0 & 1.33 \\
16 & 16 & 11.7 & 1.37 & 8 & 11.7 & 1.37 & 4 & 11.7 & 1.37 \\
32 & 32 & 23.3 & 1.37 & 16 & 23.2 & 1.38 & 8 & 23.2 & 1.38 \\
64 & 64 & 46.4 & 1.38 & 32 & 46.4 & 1.38 & 16 & 46.5 & 1.38 \\
128 & 128 & 95.0 & 1.35 & 64 & 95.0 & 1.35 & 32 & 95.1 & 1.35 \\
\hline
\end{tabular}
\end{table}
Table~\ref{tab1} shows the speed of privacy amplification under different input lengths. The length of the secret keys is 10\% of that of the shared weak keys. In practical systems, the ratio may be a few orders of magnitude lower than 10\%, especially in the long-distance case. As shown in Table~\ref{tab1}, the speed is essentially independent of the batch size and the number of batches for a fixed input length. When using single-precision calculation, the FFT results will be wrong if the input length of a batch exceeds 8 Mbits, so we set the batch size below this limit. We achieve a privacy amplification speed of about 1.35 Gbps at different input lengths. The results were obtained on an NVIDIA TITAN Xp GPU. However, for practical CV-QKD systems, the finite-size effect of privacy amplification has to be considered to ensure security. The input length may then exceed the processing capacity of the GPU, in which case we cannot perform the procedure directly. Thus, we divide the input data into blocks that can be processed directly, and each block is further divided into smaller batches to decrease the calculation complexity and precision. The blocks are processed serially, while the batches are processed in parallel. In fact, if the number of batches is too large or the batch size is too long, the later batches are not executed until the earlier ones have finished.
\begin{figure}[t]
\centering
\includegraphics[width=32pc]{speed}
\caption{Comparison of privacy amplification speeds for the direct implementation and the length-compatible implementation. The red dots represent the speeds of the length-compatible implementation; the blue squares represent the speeds of the direct implementation. The results between 128 Mbits and 10 Gbits are obtained with the same batch size of 1 Mbit. We also show the results of previous implementations from other works. USTC: University of Science and Technology of China. SXU: Shanxi University.}
\label{speed}
\end{figure}
In Fig.~\ref{speed}, we compare the privacy amplification speeds of the direct implementation and the length-compatible implementation, together with the results of other works. The length-compatible implementation achieves high speed at any input length, with an average speed of about 1.35 Gbps. The direct implementation, by contrast, achieves high speed only when single-precision arithmetic yields correct FFT results. When the input length is between 4 Mbits and 80 Mbits, the direct implementation reaches about 0.3 Gbps using double precision. Due to the limitation of GPU resources, the privacy amplification procedure cannot be performed directly when the input length exceeds 80 Mbits. In Ref.~\cite{ZhangCM}, speeds of about 10 Mbps are achieved using a multiplication algorithm. In Ref.~\cite{NTT}, a speed of 108.77 Mbps is obtained for an input length of 100 Mbits by using the number theoretic transform to accelerate the privacy amplification procedure on a coprocessor. In Ref.~\cite{FPGA}, a speed of 65.443 Mbps is achieved on an FPGA. As shown in Fig.~\ref{speed}, the proposed length-compatible method obtains speeds above 1 Gbps at any input length, which ensures the security of the final secret keys when the finite-size effect of privacy amplification is considered and supports high-speed real-time CV-QKD systems.
\section{Conclusions}
We propose a high-speed GPU-based implementation of length-compatible privacy amplification for continuous-variable quantum key distribution systems. The finite-size effect of privacy amplification is considered to ensure secure secret-key extraction. When the GPU cannot perform the privacy amplification procedure directly, the long input data is divided into small blocks, and the fast Fourier transform is used to speed up the procedure. To further accelerate it, each block is divided into smaller batches, which reduces the required calculation precision. The batches are performed in parallel to make full use of the GPU resources, while the blocks are performed serially. The proposed length-compatible method can be applied to privacy amplification with arbitrary input length. It achieves an average speed of about 1.35 Gbps at arbitrary input length, which is one to two orders of magnitude faster than previous implementations. An early stage of this work has been applied in the longest field test of a continuous-variable quantum key distribution system~\cite{Zhang2017}.
\section*{Acknowledgements}
The authors wish to thank the anonymous reviewers for their valuable suggestions.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:1}
The emergence of high-throughput technologies has made it feasible to measure the activities of
thousands of genes simultaneously, which provides scientists with a major opportunity to infer gene
regulatory networks. Accurate inference of gene regulatory networks is pivotal to gaining a systematic understanding
of molecular mechanisms, to shedding light on the mechanisms of diseases that occur when
cellular processes are dysregulated, and to identifying potential therapeutic targets for those diseases.
Given the high dimensionality and complexity of high-throughput data, inference of gene regulatory networks largely
relies on the advance of statistical modeling and computation.
The Gaussian graphical model is a promising tool for addressing this challenge.
It uses a network to depict conditional independence relationships among a large set of Gaussian random
variables, where the absence of an edge between two variables indicates
that the two variables are independent conditioned on all other variables.
In the literature, a variety of methods have been proposed to learn
Gaussian graphical networks. To name a few, they include covariance selection (Dempster, 1972),
nodewise regression (Meinshausen and B\"uhlmann, 2006),
graphical Lasso (Yuan and Lin, 2007; Friedman et al., 2008), adaptive graphical Lasso (Fan et al., 2009), projected covariance matrix method (Fan et al., 2015),
and $\psi$-learning (Liang et al., 2015).
In general, these methods assume that the data are homogeneous, i.e., all samples are drawn from a single
Gaussian distribution. In practice, however, we often have
heterogeneous data, i.e., the samples are drawn from a mixture Gaussian distribution,
while a single Gaussian graphical network still needs to be learned for all the samples in a
fashion of data integration. Here are some examples:
\begin{itemize}
\item[1.] {\bf Data with hidden biological/clinical subtypes.}
It is known that complex diseases such as cancer can have significant heterogeneity
in response to treatments, and this heterogeneity is often reflected in gene expression.
For example, the gene expression patterns can vary with subtypes of the cancer.
Since for many types of cancers, the definition of subtypes is still unclear and
the number of samples from each subtype can be very small,
it is impractical to construct an individual gene regulatory
network for each subtype. In this case, we might still be interested in constructing a single
gene regulatory network for the heterogeneous data in a fashion of data integration.
Such an integrated gene regulatory network can facilitate us to identify fundamental patterns
common to the development and progression of the disease.
\item[2.] {\bf Data with hidden confounding factors.} In real-world applications, the gene expression
data may contain some systematic differences caused by known or unknown confounding factors,
such as study cohorts, sample collection, experimental batches, etc.
Due to the limited number of samples from each level of the confounding
factors, we also prefer to learn a single gene regulatory network for the
heterogeneous data in a fashion of data integration.
Moreover, for many problems, the confounding factors can be unknown.
\end{itemize}
In this paper, we develop a mixture model method to learn Gaussian graphical networks
for heterogeneous data with hidden clusters. The new method is developed based on
the imputation-consistency (IC) algorithm proposed by
Liang et al. (2018) and
the $\psi$-learning algorithm proposed by Liang et al. (2015).
The IC algorithm is a general algorithm for dealing with
high-dimensional missing data problems. Like the EM algorithm (Dempster et al., 1977),
the IC algorithm works in an iterative manner, iterating between
an I-step and a C-step.
The I-step is to impute the missing data conditioned
on the observed data and the current estimate of parameters, and the C-step is
to find a ``consistent'' estimator
for the minimizer of a Kullback-Leibler divergence defined on the pseudo-complete
data. For high-dimensional problems, the ``consistent'' estimate can be
found with sparsity constraints or screened data. Refer to Fan and Lv (2008) and Fan and Song (2010)
for variable screening methods.
Under quite general conditions,
Liang et al. (2018) showed that the average of the ``consistent'' estimators
across iterations is consistent to the true parameters.
The $\psi$-learning algorithm is originally designed for learning Gaussian graphical models
for homogeneous data. The proposed method can be viewed as a combination of the
IC algorithm and the $\psi$-learning algorithm, which simultaneously clusters samples
into different groups and learns an integrated network across all the groups.
When applying the IC algorithm to cluster samples, their cluster membership is
treated as missing data.
We note that the proposed mixture model method is different from the
methods for joint estimation of multiple Gaussian graphical models, such as
fused Lasso (Danaher et al., 2014), Bayesian nodewise regression (Lin et al., 2017), and
Bayesian integrated $\psi$-learning (Jia et al., 2018). For the latter methods,
the samples' cluster membership is known {\it a priori} and the goal is to learn an
individual network for each cluster of samples. In contrast, the proposed method
works for the case that the cluster membership is unknown and the goal is to learn
an integrated network across all hidden groups. The proposed method is also different from
the methods proposed by Ruan et al. (2011) and Lee et al. (2018).
For the former, the goal is to learn an individual
network for each cluster of samples, although it assumes that the cluster
membership is unknown. The latter is to first group samples to different clusters using
an eigen-analysis based approach and then
apply the $\psi$-learning algorithm to learn the network structure.
Since that method does not account for the uncertainty of sample clustering, it
often performs less well.
The rest of this paper is organized as follows. In Section 2, we
describe the proposed method. In Section 3, we illustrate the
performance of the proposed method using simulated examples.
In Section 4, we apply the proposed method to learn a gene regulatory network
for breast cancer with a heterogeneous gene expression dataset.
In Section 5, we conclude the paper with a brief discussion.
\section{Mixture Gaussian Graphical Models}
\label{sec:2}
\subsection{Algorithms for homogeneous data}
To have a better description for the proposed method, we first give a brief review
for the existing Gaussian graphical model algorithms for homogeneous data.
Let ${\boldsymbol V}=\{{\boldsymbol X}_1,\ldots, {\boldsymbol X}_p\}$ denote a set of $p$ Gaussian random variables,
where ${\boldsymbol X}_i=\{X_{i1}, \ldots, X_{in}\}$ denotes $n$ observations of variable $i$.
In the context of gene regulatory networks, $X_{ij}$ refers to the expression level of
gene $i$ measured in experiment $j$.
Let ${\boldsymbol X}^{(j)}=(X_{1j}, \ldots, X_{pj})^T$ denote the expression levels of
all $p$ genes measured in experiment $j$, which is assumed to follow a Gaussian
distribution $N_p({\boldsymbol \mu},{\boldsymbol \Sigma})$ with the mean vector ${\boldsymbol \mu}$ and covariance matrix ${\boldsymbol \Sigma}$.
Let ${\boldsymbol E} =(e_{ij})$ denote the adjacency matrix,
where $e_{ij}=1$ if the edge between variables $i$ and $j$ is present and 0 otherwise.
The adjacency matrix specifies the structure of the Gaussian graphical network.
Let $\rho_{ij|V\setminus \{i,j\}}$ denote the partial correlation coefficient of
variable $i$ and variable $j$ conditioned on all other variables.
Let ${\boldsymbol C}=(C_{ij})={\boldsymbol \Sigma}^{-1}$ denote the concentration matrix, also known as
the precision matrix.
Let $\beta_{i}^{(j)}$'s denote the coefficients of the regressions
\begin{equation} \label{GGMeq3}
{\boldsymbol X}_j=\beta_{i}^{(j)} {\boldsymbol X}_i+ \sum_{r \in V \setminus \{i,j\}} \beta_r^{(j)} {\boldsymbol X}_r+\epsilon^{(j)}, \quad j=1,2,\ldots,p,
\end{equation}
where $\epsilon^{(j)}$ is a zero-mean Gaussian random vector.
Since $\rho_{ij|V\setminus\{i,j\}}$ can be expressed as $\rho_{ij|V\setminus\{i,j\}}= -C_{ij}/\sqrt{C_{ii} C_{jj}}$
and $\beta_{i}^{(j)}$'s can be
expressed as $\beta_i^{(j)}=-C_{ji}/C_{jj}$ and $\beta_j^{(i)}=-C_{ji}/C_{ii}$,
the following relationship holds:
\begin{equation} \label{GGMeq4}
e_{ij}=e_{ji}=1 \Leftrightarrow \rho_{ij|V\setminus \{i,j\}} \ne 0 \Leftrightarrow C_{ij} \ne 0 \Leftrightarrow
\beta_i^{(j)} \ne 0 \ \mbox{and} \ \beta_j^{(i)} \ne 0.
\end{equation}
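The equivalences in (\ref{GGMeq4}) can be checked numerically. Below is a small NumPy sketch with a hypothetical toy precision matrix; it also verifies the identity $\rho_{ij|V\setminus\{i,j\}}^2 = \beta_i^{(j)}\beta_j^{(i)}$, which follows directly from the expressions above:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4
A = rng.standard_normal((p, p))
C = A @ A.T + p * np.eye(p)      # toy positive-definite precision matrix

i, j = 0, 1
# Partial correlation from the concentration matrix
rho_ij = -C[i, j] / np.sqrt(C[i, i] * C[j, j])
# Regression coefficients from the concentration matrix
beta_i_j = -C[j, i] / C[j, j]    # beta_i^{(j)}
beta_j_i = -C[j, i] / C[i, i]    # beta_j^{(i)}

# rho_ij, C_ij, beta_i^{(j)} and beta_j^{(i)} vanish together,
# and rho_ij^2 equals the product of the two coefficients.
assert np.isclose(rho_ij ** 2, beta_i_j * beta_j_i)
```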
Based on the relation between partial correlation coefficients and the concentration matrix,
Dempster (1972) proposed the covariance selection method,
which identifies the edges of the Gaussian graphical network by identifying
the nonzero elements of the concentration matrix.
However, this method cannot be applied to problems with
$p>n$, where the sample covariance matrix is singular and thus the concentration
matrix cannot be calculated. To tackle this difficulty,
Yuan and Lin (2007) proposed to estimate the concentration matrix with $l_1$-regularization.
Soon, this method was accelerated by Friedman et al. (2008) using the coordinate descent algorithm
in a similar way to Lasso regression (Tibshirani, 1996), which leads to the
so-called graphical Lasso algorithm.
Based on the relation between partial correlation coefficients and regression coefficients,
Meinshausen and B\"uhlmann (2006) proposed the nodewise regression method, which
is to learn Gaussian graphical networks by identifying nonzero regression coefficients of
the regressions given in (\ref{GGMeq3}) with a sparsity constraint.
Alternative to estimating the concentration matrix and regression coefficients,
the $\psi$-learning algorithm (Liang et al., 2015) is to provide an
equivalent measure for the partial correlation coefficient in the sense that
\begin{equation} \label{GGMeq2}
\psi_{ij} = 0 \Longleftrightarrow \rho_{ij|V\setminus \{i,j\}} =0,
\end{equation}
where $\psi_{ij}$ is the partial correlation coefficient
of variable $i$ and variable $j$ conditioned on a subset of $V\setminus \{i,j\}$
and the subset is obtained via correlation screening.
Since the $\psi$-learning algorithm is used as a component of the proposed
mixture model method for learning Gaussian graphical models with grouped samples,
the details of the algorithm are given below.
\vspace{3mm}
\noindent
\textbf{Algorithm 1.} ($\psi$-learning)
\begin{itemize}
\item[(a)] (Correlation screening) Determine the reduced neighborhood for each variable ${\boldsymbol X}_i$.
\begin{itemize}
\item[(i)] Conduct a multiple hypothesis test to identify the pairs of variables for which the empirical correlation coefficient
is significantly different from zero. This step results in a so-called empirical correlation network.
\item[(ii)] For each variable $X_i$, identify its neighborhood in the empirical correlation network, and reduce the size of the neighborhood to
$O(n/\log(n))$ by removing the variables having lower correlation (in absolute value) with $X_i$.
This step results in a so-called reduced correlation network.
\end{itemize}
\item[(b)] ($\psi$-calculation) For each pair of variables $i$ and $j$, identify a subset of nodes $S_{ij}$
based on the reduced correlation network resulted in step (a) and calculate $\psi_{ij}=\rho_{ij|S_{ij}}$,
where $\rho_{ij|S_{ij}}$ denotes the partial correlation coefficient
of $X_i$ and $X_j$ conditioned on the variables $\{X_k: k \in S_{ij} \}$.
In this paper, we set $S_{ij} = S_i\setminus\{j\}$ if $|S_i\setminus\{j\}| \leq |S_j\setminus\{i\}|$ and $S_{ij}=S_j\setminus\{i\}$
otherwise, where $S_i$ denotes the neighborhood of node $i$ in the reduced correlation network,
and $|\cdot|$ denotes the cardinality of a set.
\item[(c)] ($\psi$-screening) Conduct a multiple hypothesis test to identify the pairs of vertices for which $\psi_{ij}$
is significantly different from zero, and set the corresponding element of the adjacency matrix to 1.
\end{itemize}
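As a rough illustration, the screening and $\psi$-calculation steps of Algorithm 1 can be sketched in Python. Here a simple correlation threshold stands in for the multiple hypothesis test of step (a), so this is a toy sketch rather than the full algorithm:

```python
import numpy as np

def partial_corr(X, i, j, S):
    """Partial correlation of columns i, j of X given the columns in S,
    computed from the inverse covariance of the small block."""
    idx = [i, j] + list(S)
    C = np.linalg.inv(np.cov(X[:, idx], rowvar=False))
    return -C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])

def psi_scores(X, thresh=0.2):
    """Toy sketch of Algorithm 1: a correlation threshold replaces the
    multiple hypothesis test of step (a); step (b) computes psi_ij on
    the reduced neighborhoods."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    max_nb = max(1, int(n / np.log(n)))
    # Step (a): reduced neighborhoods S_i from the correlation network,
    # keeping at most O(n/log n) strongest neighbors per variable.
    S = {i: [k for k in np.argsort(-np.abs(R[i]))
             if k != i and abs(R[i, k]) > thresh][:max_nb]
         for i in range(p)}
    # Step (b): psi_ij conditioned on the smaller of S_i\{j} and S_j\{i}.
    psi = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            Si = [k for k in S[i] if k != j]
            Sj = [k for k in S[j] if k != i]
            cond = Si if len(Si) <= len(Sj) else Sj
            psi[i, j] = psi[j, i] = partial_corr(X, i, j, cond)
    return psi
```

Step (c) would then test the resulting $\psi_{ij}$ values for significance, which is omitted here.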
The multiple hypothesis tests involved in the algorithm can be done using the empirical Bayesian method
developed in Liang and Zhang (2008), which allows for the general dependence between test statistics.
Other multiple hypothesis testing procedures that account for the dependence
between test statistics, e.g., the two-stage procedure of Benjamini et al. (2006), can also be applied here.
The correlation screening step involves two procedures,
(i) multiple hypothesis test and (ii) sure independence screening, to control the neighborhood size for each variable.
The two procedures seem redundant, but actually they are not. Indeed, the multiple hypothesis test is able to
identify the pairs of independent variables, but it cannot guarantee that the size of each neighborhood is less
than $O(n/\log(n))$, as established in Liang et al. (2015).
We have tried using the sure independence screening procedure alone,
which results in the same neighborhood size $O(n/\log(n))$ for each variable.
However, in this case, the enlarged neighborhood may contain some variables that are independent of the
central one, and thus the power of the subsequent $\psi$-screening test is reduced.
The $\psi$-learning algorithm has two free parameters, namely $\alpha_1$ and $\alpha_2$, which
refer to the significance levels used in correlation
screening and $\psi$-screening, respectively. Following the suggestion of Liang et al. (2015), we specify their values
in terms of $q$-values (Storey, 2002), setting $\alpha_1=0.2$ and $\alpha_2=0.05$ or 0.1 in all computations. In particular,
we set $\alpha_2=0.05$ for the simulated examples and $\alpha_2=0.1$ for the real data example.
A larger value of $\alpha_2$ avoids losing potential interactions between different genes.
Under mild conditions, e.g., the joint Gaussian distribution of ${\boldsymbol X}_1,\ldots, {\boldsymbol X}_p$ satisfies the faithfulness condition,
Liang et al. (2015) showed that the $\psi$-partial correlation coefficient is equivalent
to the true partial correlation coefficient
in determining the structure of Gaussian graphical models in the sense of (\ref{GGMeq2}).
Compared to other Gaussian graphical model algorithms,
the $\psi$-learning algorithm has a significant advantage that
it has reduced the computation of partial correlation coefficients
from a high dimensional problem to a low dimensional problem via correlation screening
and thus can be used for very high-dimensional problems.
As shown in Liang et al. (2015), the $\psi$-learning algorithm is consistent; the resulting
network will converge to the true one in probability as the sample size becomes large.
The $\psi$-learning algorithm tends to produce better numerical performance
and cost less CPU time than the existing algorithms, such as gLasso and nodewise regression,
especially when $p$ is large.
\subsection{The Mixture Gaussian Graphical Model Method}
Let ${\cal X}=\{{\boldsymbol X}^{(1)}, \ldots, {\boldsymbol X}^{(n)}\}$ denote a set of $n$ independent samples which are drawn from a mixture
Gaussian distribution with $M$ components, where the sample size $n$ can be
much smaller than the dimension $p$. Suppose that $M$ is known.
Later, we will describe a Bayesian information criterion (BIC) to determine
the value of $M$. The log-likelihood function
of the samples is given by
\begin{equation}\label{llf}
\ell({\cal X}|\Theta)=\sum_{i=1}^n \log\left(\sum_{k=1}^M \pi_k\phi(X_i|\mu_k,\Sigma_k)\right),
\end{equation}
where $\Theta=\{(\pi_k,\mu_k,\Sigma_k): k=1,\ldots,M\}$ denotes the collection of
unknown parameters, $\pi_k$'s are mixture proportions,
$\mu_k$'s are mean vectors, and $\Sigma_k$'s are covariance matrices of the
$M$ Gaussian components, respectively; and $\phi(\cdot|\mu_k,\Sigma_k)$
denotes the density function of the multivariate Gaussian distribution.
Let $\tau_i$ denote
the indicator variable for the component/cluster membership of sample $i$,
for $i=1,2,\ldots,n$. That is,
$p(\tau_i=k)=\pi_k$ and
${\boldsymbol X}_i|\tau_i=k \sim N({\boldsymbol \mu}_k, \Sigma_k)$ for $k=1,\ldots, M$ and $i=1,2,\ldots,n$.
Henceforth, we will use ``cluster'' to denote the group of samples assigned to a
component of the mixture Gaussian graphical model; cluster and component are
used interchangeably in this paper.
If the sample size $n$ is greater than $p$, then the parameters $\Theta$ can be
estimated using the EM algorithm as described in what follows.
Let $\pi_k^{(t)}$, $\mu_k^{(t)}$ and $\Sigma_k^{(t)}$ denote, respectively,
the estimates of $\pi_k$, $\mu_k$ and $\Sigma_k$ obtained at iteration $t$.
Let $\Theta_k^{(t)}=(\pi_k^{(t)}, \mu_k^{(t)}, \Sigma_k^{(t)})$.
The $E$-step calculates the conditional expectation of $\tau_i$
given $X_i$ and the current estimate of $\Theta$, i.e.
\begin{equation}\label{pp}
\gamma_{ik}^{(t)} = P(\tau_i=k|X_i;\Theta^{(t)})=\frac{\pi_k^{(t)}\phi(X_i|\mu_k^{(t)},\Sigma_k^{(t)})}{\sum_{l=1}^M\pi_l^{(t)}\phi(X_i|\mu_l^{(t)},\Sigma_l^{(t)})},
\end{equation}
which leads to the so-called $Q$-function,
\begin{equation}
Q(\Theta,\Theta^{(t)})= \sum_{k=1}^M\left[\sum_{i=1}^n \gamma_{ik}^{(t)} \log\left(\pi_k\phi(X_i|\mu_k,\Sigma_k)\right)\right]
=\sum_{k=1}^M Q_k(\Theta,\Theta^{(t)}).
\end{equation}
The M-step updates $\Theta^{(t)}$ by maximizing the $Q$-function, which can be done by
maximizing $Q_k$ with respect to $\Theta_k=(\pi_k, \mu_k, \Sigma_k)$ for each $k$.
For each value of $k$, $\Theta_k^{(t)}$ can be updated by setting
\begin{equation}\label{mstep}
\begin{split}
\pi_k^{(t+1)}& =\frac{1}{n}\sum_{i=1}^n\gamma_{ik}^{(t)},\\
\mu_k^{(t+1)}&=\frac{\sum_{i=1}^n\gamma_{ik}^{(t)}X_i}{\sum_{i=1}^n\gamma_{ik}^{(t)}},\\
\Sigma_k^{(t+1)} &=\sum_{i=1}^n\left(\frac{\gamma_{ik}^{(t)}}{\sum_{j=1}^n\gamma_{jk}^{(t)}}
(X_i-\mu_k^{(t+1)})(X_i-\mu_k^{(t+1)})'\right). \\
\end{split}
\end{equation}
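For the $n>p$ case just described, the E-step (\ref{pp}) and M-step (\ref{mstep}) can be sketched as plain EM in Python. This is a minimal illustration with a small ridge term added for numerical stability, not the high-dimensional variant developed below:

```python
import numpy as np

def em_mixture(X, M, iters=50, seed=0):
    """Plain EM for a Gaussian mixture (the n > p case in the text)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pi = np.full(M, 1.0 / M)
    mu = X[rng.choice(n, M, replace=False)]          # init means at data points
    Sigma = np.array([np.cov(X, rowvar=False) + 1e-6 * np.eye(p)] * M)
    for _ in range(iters):
        # E-step: responsibilities gamma_ik, computed in log space
        log_phi = np.stack([
            -0.5 * (np.einsum('ij,jk,ik->i', X - mu[k],
                              np.linalg.inv(Sigma[k]), X - mu[k])
                    + np.linalg.slogdet(Sigma[k])[1] + p * np.log(2 * np.pi))
            for k in range(M)], axis=1)
        w = np.log(pi) + log_phi
        w -= w.max(axis=1, keepdims=True)            # log-sum-exp stabilization
        gamma = np.exp(w) / np.exp(w).sum(axis=1, keepdims=True)
        # M-step: closed-form updates of pi_k, mu_k, Sigma_k
        Nk = gamma.sum(axis=0)
        pi = Nk / n
        mu = (gamma.T @ X) / Nk[:, None]
        for k in range(M):
            D = X - mu[k]
            Sigma[k] = (gamma[:, k, None] * D).T @ D / Nk[k] + 1e-6 * np.eye(p)
    return pi, mu, Sigma, gamma
```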
However, this algorithm does not work when $n<p$, as $\Sigma_k^{(t)}$'s will be
singular in this case.
When $n<p$, to avoid the issues caused by the singularity of $\Sigma_k^{(t)}$'s,
we propose the following algorithm. For the proposed algorithm, we assume that
all components of the mixture Gaussian graphical model share a common adjacency matrix,
although their covariance and precision matrices can be different from each other.
The new algorithm consists of two stages.
The first stage is to apply the Imputation-Consistency (IC) algorithm to generate a series of estimates for the
common adjacency matrices, and the second stage is to
average the estimates to get a stable estimate for the common adjacency matrix.
Note that, as can be seen below, the IC algorithm generates
a Markov chain.
To learn the common adjacency matrix at each iteration, a $\psi$-integration
procedure is needed, which is to integrate the adjacency matrices learned for each component
into one adjacency matrix. This procedure can be described as follows. Let
$\psi_{kij}^{(t)}$ denote the $\psi$-partial correlation coefficient calculated for
the $k$-th cluster at iteration $t$, which can be transformed to a $z$-score via Fisher's transformation:
\begin{equation}\label{zscore}
Z_{kij}^{(t)}=\frac{\sqrt{n_k^{(t)}-|S_{kij}^{(t)}|-3}}{2} \log\left[\frac{1+\hat{\psi}_{kij}^{(t)}}{1-\hat{\psi}_{kij}^{(t)}}\right],
\quad i,j=1,\ldots, p, \ k=1,\ldots, M,
\end{equation}
where $|S_{kij}^{(t)}|$ denotes the size of the conditioning set used in calculating
$\psi_{kij}^{(t)}$,
and $n_k^{(t)}$ is the number of samples assigned to cluster $k$ at iteration $t$.
For convenience, we call the $z$-score a $\psi$-score.
The $\psi$-scores from different clusters can be combined using Stouffer's meta-analysis method (Stouffer et al., 1949)
by setting
\begin{equation}\label{czscore1}
Z_{ij}^{(t)}=\frac{\sum_{k=1}^M\omega_k^{(t)} Z_{kij}^{(t)}}{\sqrt{\sum_{k=1}^M (\omega_k^{(t)})^2}}, \quad i,j=1,\ldots, p,
\end{equation}
where $\omega_k^{(t)}$ is a nonnegative weight assigned to cluster $k$ at iteration $t$. In this paper, we set
$\omega_k^{(t)}=n_k^{(t)}/n$. Note that $Z_{ij}^{(t)}$ approximately follows a standard normal distribution under
the null hypothesis $H_0 : e_{ij} = 0$. Then a multiple hypothesis test can be conducted on
$Z_{ij}^{(t)}$'s to identify the pairs of nodes for which $Z_{ij}^{(t)}$ is
differentially distributed from the standard normal $N (0,1)$, and the adjacency
matrix common to all components of the mixture model can be determined thereby. In this paper, we
adopted the multiple hypothesis testing procedure of Liang and Zhang (2008) to
conduct the test. This testing procedure allows general dependence between
test statistics.
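The Fisher transformation (\ref{zscore}) and the weighted Stouffer combination (\ref{czscore1}) are straightforward to implement; a small sketch:

```python
import numpy as np

def psi_to_z(psi_hat, n_k, s_size):
    """Fisher transformation of a psi-partial correlation, as in the
    z-score formula: z = sqrt(n_k - |S| - 3)/2 * log((1+psi)/(1-psi))."""
    return 0.5 * np.sqrt(n_k - s_size - 3) * np.log((1 + psi_hat) / (1 - psi_hat))

def stouffer_combine(z_scores, weights):
    """Weighted Stouffer combination: Z = sum(w_k z_k) / sqrt(sum(w_k^2)).
    Under H0 each z_k ~ N(0,1) independently, so Z ~ N(0,1)."""
    z = np.asarray(z_scores, float)
    w = np.asarray(weights, float)
    return (w * z).sum() / np.sqrt((w ** 2).sum())

# Toy check: equal weights and equal z-scores give Z = z * sqrt(M)
Z = stouffer_combine([1.0, 1.0, 1.0], [1 / 3, 1 / 3, 1 / 3])
```

In the algorithm the weights are the cluster proportions $\omega_k = n_k/n$.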
Given the $\psi$-integration procedure, the first stage of the proposed method can be summarized as follows:
It starts with an initial estimate
$\Theta^{(0)}=\{(\pi_k^{(0)},\mu_k^{(0)},\Sigma_k^{(0)}): k=1,\ldots,M\}$, and then iterates
between the following steps:
\vspace{3mm}
\noindent
\textbf{Algorithm 2.} (IC estimation for mixture Gaussian graphical models)
\begin{itemize}
\item[(a)] (imputation) For each $i=1,2,\ldots,n$, impute the indicator variable $\tau_i^{(t+1)}$ by drawing from the multinomial distribution with probabilities given in (\ref{pp}).
\item[(b)] (consistency) Based on the imputed values of $\tau_i^{(t+1)}$'s, update the estimate $\Theta^{(t)}$ by
\begin{itemize}
\item[(i)] \hspace{2mm} setting $n_k^{(t+1)}=\sum_{i=1}^n I(\tau_i^{(t+1)}=k)$,
$\pi_k^{(t+1)}=n_k^{(t+1)}/n$, and \\
$\mu_k^{(t+1)}=\sum_{j \in\{i: \tau_i^{(t+1)}=k\}} {\boldsymbol X}_j/n_k^{(t+1)}$;
\item[(ii)]\hspace{2mm} applying the $\psi$-learning algorithm to learn an adjacency matrix
for each cluster of the samples;
\item[(iii)] \hspace{2mm} applying the $\psi$-integration procedure to integrate the
adjacency matrices learned in step (ii) into one.
\item[(iv)] \hspace{2mm} applying the algorithm given in Hastie et al. (2009, page 634) to recover the
covariance matrices for each cluster, given the common adjacency matrix
learned in step (iii).
\end{itemize}
\end{itemize}
Let ${\boldsymbol \tau}^{(t)}=\{\tau_1^{(t)}, \ldots,\tau_n^{(t)} \}$ for $t=1,2,\ldots$.
According to the theory of the IC algorithm (Liang et al., 2018), which holds under quite general
conditions, the sequence
$\Theta^{(0)} \to {\boldsymbol \tau}^{(1)} \to \Theta^{(1)} \to \ldots \to {\boldsymbol \tau}^{(t)}\to \Theta^{(t)} \to \cdots$
forms two interleaved Markov chains, and a consistent estimate of $\Theta$ can be
obtained by averaging $\Theta^{(t)}$'s. This is similar to the stochastic EM algorithm
(Celeux and Diebolt, 1995; Nielsen, 2000).
Further, by the theory of the IC algorithm,
a consistent estimate of the common adjacency matrix can be obtained by averaging the respective estimates
across iterations. More precisely, the adjacency matrix can be averaged in the following way
(second stage of the proposed method). Define
\[
Z_{ij}=\sum_{t=t_0+1}^T Z_{ij}^{(t)}/(T-t_0), \quad i,j=1,2,\ldots,p,
\]
where $t_0$ denotes the number of burn-in iterations of the IC algorithm, and then
the final estimate of the adjacency matrix can be obtained by conducting another multiple
hypothesis test for $Z_{ij}$'s. As before, under the null hypothesis $H_0$: $e_{ij}=0$,
$Z_{ij}$ follows the standard normal distribution.
Thus far, we have treated the number of clusters $M$ as known. In practice, $M$ can be
determined using an information criterion, such as AIC or BIC.
Following Ruan et al. (2011), we define the degree of freedom
for a model with $M$ components as
\begin{equation} \label{dfeq}
df(M)=M\left[ p+\sum_{i \leqslant j}\hat{e}_{ij} \right],
\end{equation}
where $p$ represents the dimension of the mean vector, and $\hat{e}_{ij}$ denotes the
$(i,j)$-th element of the estimated common adjacency matrix.
Although we have assumed that the mixture Gaussian graphical model has a common adjacency
matrix for all components, it can have a completely different concentration matrix
for each component. Hence, for each component, we count each nonzero entry of the concentration matrix as
a different parameter. The BIC score is then given by
\begin{equation}\label{bic}
BIC(M) = -2\ell({\cal X}|\hat{\Theta}(M))+\log(n) df(M),
\end{equation}
where $\ell({\cal X}|\hat{\Theta}(M))$ is the log-likelihood function given by equation (\ref{llf}),
and $M$ can be determined by minimizing $BIC(M)$.
In (\ref{dfeq}), we did not account for the parameters $\pi_1,\ldots, \pi_{M-1}$. This is due to two reasons.
First, the problem is considered under the high-dimensional scenario where $p$ is allowed to be
greater than and grow with $n$. However, $M$ is considered as fixed or to grow at a lower order than $\log(n)$.
Therefore, including $M-1$ in (\ref{dfeq}) or not will not much affect the performance of
the criterion when $n$ becomes large. Second, we ignore $M-1$ in (\ref{dfeq}) to make
the definition of the BIC score (\ref{bic}) consistent with the one used in
Ruan et al. (2011), which facilitates comparisons.
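The degree-of-freedom count (\ref{dfeq}) and BIC score (\ref{bic}) are simple to compute; a sketch follows, where the toy adjacency matrix carries ones on its diagonal so that the sum over $i \leq j$ counts both the diagonal and off-diagonal free entries of each precision matrix (an assumption made here for illustration):

```python
import numpy as np

def df_bic(loglik, M, p, adj, n):
    """df(M) = M * (p + sum_{i<=j} e_ij) and
    BIC(M) = -2*loglik + log(n)*df(M), following the formulas in the text."""
    upper = np.triu(adj)                 # entries e_ij with i <= j
    df = M * (p + int(upper.sum()))
    return df, -2.0 * loglik + np.log(n) * df

adj = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]])              # toy common adjacency, diagonal included
df, bic = df_bic(loglik=-120.0, M=2, p=3, adj=adj, n=100)
```

In practice one evaluates $BIC(M)$ over a small grid of $M$ values and keeps the minimizer.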
\section{Simulation Studies}
We compare the performance of the proposed method with some methods developed
for homogeneous data such as gLasso (Friedman et al., 2008), nodewise regression (Meinshausen and B\"uhlmann, 2006),
and $\psi$-learning (Liang et al., 2015), as well as the EM-regularization method developed by Ruan et al. (2011)
for mixture Gaussian graphical models. As aforementioned, the method of Ruan et al. (2011) differs from
the proposed one in that its goal is to estimate an individual Gaussian graphical network for
each cluster. Moreover, since Ruan et al. (2011) applied the gLasso algorithm to learn an individual
Gaussian graphical network for each cluster, it will be very hard to integrate those
networks into a common one.
\subsection{Example 1}
We began with the case where the number of clusters $M$ of the mixture model is known and
the components differ in their means.
For this simulation study, we fixed $M=3$ and the total number of samples $n=300$,
and varied the dimension $p$ between $100$ and $200$. We set the component means as
${\boldsymbol \mu}_1=0$, ${\boldsymbol \mu}_2=m {\bf 1}_p$ and ${\boldsymbol \mu}_3=-m {\bf 1}_p$,
where ${\bf 1}_p$ denotes a $p$-dimensional vector of ones.
We let all the three components share the same precision matrix $C$:
\begin{equation}\label{plugin}
C_{ij}=\left\{\begin{array}{ll}
0.5,&\textrm{if $\left| j-i \right|=1, i=2,...,(p-1),$}\\
0.25,&\textrm{if $\left| j-i \right|=2, i=3,...,(p-2),$}\\
1,&\textrm{if $i=j, i=1,...,p,$}\\
0,&\textrm{otherwise,}
\end{array}\right.
\end{equation}
and generated $100$ samples from each component of the mixture model.
The samples from different components were combined and shuffled.
Three different values of $m$ were considered,
namely $m=0$, 0.3 and 0.5. Under each setting of $m$ and $p$,
50 independent datasets were generated.
The proposed method was applied to this heterogeneous dataset.
To initialize $\pi_k$'s and ${\boldsymbol \mu}_k$'s, we randomly grouped the samples into three clusters and calculated
their respective proportions and means.
To initialize the covariance matrices, we first applied the $\psi$-learning algorithm to the whole dataset
to obtain a common adjacency matrix, and then applied the algorithm by Hastie et al. (2009, page 634) to
estimate the covariance matrix for each cluster with the common adjacency matrix.
The IC algorithm converges very fast, usually in
about 10 iterations. For this example, the algorithm was run for 20 iterations for each dataset.
To assess the performance of the proposed method, Figure \ref{fig_1} shows the precision-recall curves obtained
under different settings, where each curve was drawn based on the average over the 50 simulated datasets.
The precision and recall are defined by
\begin{equation*}
\mbox{precision}=\frac{TP}{TP+FP}, \qquad \mbox{recall}=\frac{TP}{TP+FN},
\end{equation*}
where $TP$, $FP$ and $FN$ denote true positives, false positives and false negatives, respectively,
and they are defined via a binary decision table (see Table \ref{Binarytab}).
In general, the method producing a larger area under the precision-recall curve is
considered as a better method. The area under the precision-recall curve is often denoted
by AUC (Area Under Curve) in the literature.
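For a single estimated adjacency matrix, precision and recall can be computed as follows (a sketch counting each undirected edge once via the upper triangle):

```python
import numpy as np

def precision_recall(true_adj, est_adj):
    """Precision and recall for edge recovery. Only the strict upper
    triangle is used, so each undirected edge is counted once."""
    iu = np.triu_indices_from(true_adj, k=1)
    t, e = true_adj[iu].astype(bool), est_adj[iu].astype(bool)
    tp = np.sum(t & e)       # true positives
    fp = np.sum(~t & e)      # false positives
    fn = np.sum(t & ~e)      # false negatives
    prec = tp / (tp + fp) if tp + fp else 1.0
    rec = tp / (tp + fn) if tp + fn else 1.0
    return prec, rec

true_adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
est_adj  = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
prec, rec = precision_recall(true_adj, est_adj)
```

Sweeping a threshold on the test statistics and recording (recall, precision) pairs traces out the precision-recall curve whose area gives the AUC.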
\begin{table}[htbp]
\tabcolsep=3pt\fontsize{8}{14}
\selectfont
\begin{center}
\caption{Outcomes of binary decision.}
\label{Binarytab}
\vspace{0cm}
\begin{tabular}{ccc} \hline
&$A_{ij}=1$&$A_{ij}=0$\\\hline
$\hat{A}_{ij}=1$&True Positive (TP)& False Positive (FP) \\\hline
$\hat{A}_{ij}=0$&False Negative (FN) & True Negative (TN) \\\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[]
\centering
\subfigure[ $m=0$ and $p=100$]{
\label{fig11a}
\includegraphics[width=1.5in,angle=270]{555_0.eps}}
\hspace{0.03in}
\subfigure[ $m=0$ and $p=200$]{
\label{fig_12a}
\includegraphics[width=1.5in,angle=270]{555_0pre.eps}}
\\
\subfigure[ $m=0.3$ and $p=100$]{
\label{fig21a}
\includegraphics[width=1.5in,angle=270]{555_3.eps}}
\hspace{0.03in}
\subfigure[ $m=0.3$ and $p=200$]{
\label{fig22a}
\includegraphics[width=1.5in,angle=270]{555_3pre.eps}}
\\
\subfigure[ $m=0.5$ and $p=100$]{
\label{fig31a}
\includegraphics[width=1.5in,angle=270]{555_5.eps}}
\hspace{0.03in}
\subfigure[ $m=0.5$ and $p=200$]{
\label{fig32a}
\includegraphics[width=1.5in,angle=270]{555_5pre.eps}} \\
\caption{Comparison of different methods for recovering underlying networks for
heterogeneous data with different cluster means: `gLasso' refers to the graphical Lasso method,
`NodeReg' refers to the nodewise regression method, `Penalized' refers to
the EM-regularization method of Ruan et al. (2011), `$\psi$-learning' refers to the
$\psi$-learning method, and `IC' refers to the proposed method.
The plots (a) and (b) represent the scenario of homogeneous data.}
\label{fig_1}
\end{figure}
For comparison, we have applied other methods, including the EM-regularization method of Ruan et al. (2011),
$\psi$-learning, gLasso and nodewise regression, to this example.
As shown in Figure \ref{fig_1}, under both scenarios with $m=0$ (representing homogeneous data)
and $m\ne 0$, the proposed method outperforms all others. Moreover, when the value of $m$ increases,
the performance of the $\psi$-learning, gLasso and nodewise regression methods deteriorates
as they are designed for homogeneous data.
The EM-regularization method is robust to the value of $m$,
and tends to outperform gLasso, nodewise regression and $\psi$-learning when $m$ becomes large.
Note that the EM-regularization method produces a different network for each cluster
and thus three precision-recall curves in total. Figure \ref{fig_1} shows only the best one,
i.e., the curve with the largest AUC.
Table \ref{AUC} summarizes the areas under the precision-recall curves
produced by the different methods, where the average areas (over 50 datasets) are reported with
the standard deviations given in parentheses. The comparison indicates that the proposed method
significantly outperforms the other methods for the heterogeneous data.
For the homogeneous data (i.e., $m=0$), the proposed method performs as well as the $\psi$-learning
method, while significantly outperforming the others. This indicates the generality
of the proposed method, which can be applied to homogeneous data without significant loss of performance.
For the EM-regularization method, its poor performance for the homogeneous data
may be due to two reasons. Firstly, the gLasso procedure employed there
tends to perform less well than the $\psi$-learning and nodewise regression methods
as shown in Figure \ref{fig_1}(a) \& \ref{fig_1}(b).
Secondly, the EM-regularization method produces three different networks,
which cannot be integrated under its current procedure.
For the purpose of comparison, we reported only the result for the network with the largest AUC.
However, this ``best'' network may still be worse than the properly
integrated one for the homogeneous data.
\begin{table}
\tabcolsep=2pt\fontsize{9}{14}
\selectfont
\begin{center}
\caption{Average AUCs produced by different methods for
the heterogeneous data with different cluster means. }
\vspace{0cm}
\label{AUC}
\begin{tabular}{ccccccc}
&m & gLasso & NodeReg & $\psi$-learning & Penalized & IC \\ \hline
\multirow{3}{2cm}{\centering $p=100$} & 0 & 0.696(0.002) & 0.765(0.003) & 0.859(0.003) & 0.662(0.003) & 0.888(0.004) \\ \cline{2-7}
& 0.3 & 0.437(0.002) & 0.453(0.003) & 0.602(0.003) & 0.634(0.003) & 0.892(0.004) \\ \cline{2-7}
& 0.5 & 0.084(0.001) & 0.112(0.001) & 0.095(0.003) & 0.459(0.020) & 0.876(0.004) \\ \hline
\multirow{3}{2cm}{\centering $p=200$} & 0 & 0.658(0.002) & 0.731(0.002) & 0.834(0.002) & 0.654(0.002) & 0.855(0.004) \\ \cline{2-7}
&0.3 & 0.402(0.002) & 0.421(0.002) & 0.585(0.002) & 0.597(0.008) & 0.857(0.004)\\ \cline{2-7}
&0.5 & 0.059(0.001) & 0.084(0.001) & 0.051(0.002) & 0.439(0.015) & 0.829(0.004)\\ \hline
\end{tabular}
\end{center}
\end{table}
In addition to underlying networks, we are interested in parameter estimation
and cluster identification for the mixture Gaussian graphical model.
To assess the accuracy of parameter estimation, we adopt the criteria used by Ruan et al. (2011),
which include the averaged spectral norm defined by
\begin{equation}
SL=\frac{1}{M}\sum_{k=1}^M\lVert\hat{\Sigma}_k^{-1}-\Sigma_k^{-1}\rVert,
\end{equation}
where $\lVert A\rVert$ is the largest singular value of matrix $A$;
the averaged Frobenius norm defined by
\begin{eqnarray}
FL&=&\frac{1}{M}\sum_{k=1}^M\lVert\hat{\Sigma}_k^{-1}-\Sigma_k^{-1}\rVert_F\\
&=&\frac{1}{M}\sum_{k=1}^M\sqrt{\sum_{i,j}(\hat{\Sigma}_k^{-1}(i,j)-\Sigma_k^{-1}(i,j))^2},
\end{eqnarray}
and the averaged Kullback-Leibler (KL) loss defined by
\begin{equation}
KL=\frac{1}{M}\sum_{k=1}^MKL(\Sigma_k,\hat{\Sigma}_k),
\end{equation}
where
\begin{equation}
KL(\Sigma,\hat{\Sigma})=\mathrm{tr}(\Sigma\hat{\Sigma}^{-1})-\log|\Sigma\hat{\Sigma}^{-1}|-p.
\end{equation}
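The three losses above translate directly into code. The sketch below computes them for a single cluster (the averaged versions just average over the $M$ clusters); `losses` is a hypothetical helper name.

```python
import numpy as np

def losses(Sigma, Sigma_hat):
    """Spectral (SL), Frobenius (FL) and KL losses for one cluster,
    following the definitions in the text (inputs are covariance matrices)."""
    D = np.linalg.inv(Sigma_hat) - np.linalg.inv(Sigma)
    sl = np.linalg.norm(D, 2)       # largest singular value of the difference
    fl = np.linalg.norm(D, 'fro')   # Frobenius norm of the difference
    p = Sigma.shape[0]
    M = Sigma @ np.linalg.inv(Sigma_hat)
    kl = np.trace(M) - np.log(np.linalg.det(M)) - p
    return sl, fl, kl

# sanity check: identical matrices give zero loss
S = np.array([[2.0, 0.5], [0.5, 1.0]])
sl, fl, kl = losses(S, S)   # all three losses are (numerically) zero
```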
To assess the accuracy of cluster identification, we calculated the averaged false selection rate and negative selection rate over
the clusters. Let $\bm{s_k}$ denote the index set of observations for cluster $k$,
and let $\bm{\hat{s}_k}$ denote its estimate. Define
\begin{equation}
fsr=\frac{1}{M}\sum_{k=1}^M\frac{|\bm{\hat{s}_k}\backslash\bm{s_k}|}{|\bm{\hat{s}_k}|}, \qquad nsr=\frac{1}{M}\sum_{k=1}^M\frac{|\bm{s_k}\backslash\bm{\hat{s}_k}|}{|\bm{s_k}|}
\end{equation}
where $|\cdot|$ denotes the set cardinality. The smaller the values of fsr and nsr, the better the performance of
the method. The comparison is summarized in Table \ref{tab1}, where, for each setting of $m$ and $p$,
each method was evaluated based on 50 datasets with the averaged evaluation results reported.
The numbers in the parentheses represent the standard deviations
of the corresponding averages. The comparison indicates that the proposed method
significantly outperforms the other methods in both parameter estimation and
cluster identification.
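The fsr and nsr definitions can be sketched with plain set operations. Note that, in practice, estimated cluster labels must first be matched to the true ones (label switching); the toy example below assumes the matching is already done, and all names are ours.

```python
def fsr_nsr(s_hat_list, s_list):
    """Averaged false/negative selection rates over M clusters; each element
    is the index set (a Python set) of observations assigned to a cluster."""
    M = len(s_list)
    fsr = sum(len(sh - s) / len(sh) for sh, s in zip(s_hat_list, s_list)) / M
    nsr = sum(len(s - sh) / len(s) for sh, s in zip(s_hat_list, s_list)) / M
    return fsr, nsr

# toy example with M = 2 clusters of 4 observations each
true = [{0, 1, 2, 3}, {4, 5, 6, 7}]
est  = [{0, 1, 2, 4}, {3, 5, 6, 7}]
fsr, nsr = fsr_nsr(est, true)   # fsr = nsr = 0.25
```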
\begin{table}
\tabcolsep=2pt\fontsize{9}{14}
\selectfont
\begin{center}
\caption{Comparison of different methods in parameter estimation and cluster identification
for the heterogeneous data with different cluster means.}
\vspace{0cm}
\label{tab1}
\begin{tabular}{cccccccc}
&&m & SL & FL & KL & fsr & nsr \\ \hline
\multirow{6}{2cm}{\centering $p=100$} &\multirow{3}{2cm}{\centering penalized} & 0 & 3.642(0.015) & 22.309(0.018)
& 149.701(1.217) & --- & --- \\ \cline{3-8}
&& 0.3 & 3.618(0.008) & 22.231(0.004) & 149.058(0.569) & 0.453(0.004) & 0.393(0.005) \\ \cline{3-8}
&&0.5 & 3.488(0.027)& 22.414(0.056) & 160.736(1.540) & 0.014(0.002) & 0.016(0.002)\\ \cline{2-8}
& \multirow{3}{2cm}{\centering IC} & 0 & 3.261(0.045) & 11.222(0.072) & 24.619(0.281) & --- & --- \\ \cline{3-8}
&&0.3 &2.984(0.037) & 10.508(0.084) & 21.439(0.251) & 0.008(0.001) & 0.008(0.001)\\ \cline{3-8}
&& 0.5 & 3.025(0.035) & 10.635(0.081) & 21.701(0.276) & 0(0) & 0(0)\\ \hline
\multirow{6}{2cm}{\centering $p=200$} &\multirow{3}{2cm}{\centering penalized} & 0 & 3.644(0.009)
& 31.480(0.007) & 296.131(1.483) & --- & --- \\ \cline{3-8}
&&0.3 & 3.578(0.022) & 31.529(0.056) & 304.948(4.098) & 0.512(0.004) & 0.533(0.010) \\ \cline{3-8}
&&0.5 & 3.143(0.033) & 32.712(0.124) & 388.672(5.041)& 0.015(0.002) & 0.021(0.007)\\ \cline{2-8}
& \multirow{3}{2cm}{\centering IC} & 0 & 3.437(0.042) & 16.102(0.107) & 51.161(0.413) & --- & --- \\ \cline{3-8}
&&0.3 & 3.350(0.042) & 15.800(0.039) & 49.069(0.311) & 0(0) & 0(0)\\ \cline{3-8}
&& 0.5 & 2.732(0.036)& 16.312(0.010) &50.177(0.246) & 0(0) & 0(0)\\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Example 2}
To make the problem harder, we consider a model in which each component has a different mean vector as well as a
different concentration matrix, although the adjacency matrix is still the same for all components.
As in Example 1, we fixed $M=3$ and the total sample size $n=300$,
varied the dimension $p$ between $100$ and $200$, and set the cluster mean vectors as
${\boldsymbol \mu}_1=0$, ${\boldsymbol \mu}_2=m {\bf 1}_p$ and ${\boldsymbol \mu}_3=-m {\bf 1}_p$,
where ${\bf 1}_p$ denotes a $p$-dimensional vector of ones.
The common pattern of the concentration matrix is given by
\begin{equation}\label{plugin2}
C_{ij}^{(k)}=\left\{\begin{array}{ll}
c_k,&\textrm{if $\left| j-i \right|=1, i=2,...,(p-1),$}\\
c_k/2,&\textrm{if $\left| j-i \right|=2, i=3,...,(p-2),$}\\
1,&\textrm{if $i=j, i=1,...,p,$}\\
0,&\textrm{otherwise,}
\end{array}\right.
\end{equation}
for $k=1,2,3$. We set $c_1=0.6$, $c_2=0.5$ and $c_3=0.4$ for the three components, respectively.
From each component, we generated 100 samples.
Three different values of $m$ were considered: 0, 0.3 and 0.5. Under each setting of $m$ and $p$,
50 independent datasets were generated.
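The banded concentration matrix in the display above, and the sampling of one component, can be sketched as follows. This sketch ignores the boundary index restrictions in the display and uses our own names; it is illustrative, not the authors' simulation code.

```python
import numpy as np

def concentration(p, c):
    """Banded concentration matrix: 1 on the diagonal, c on the first
    off-diagonals, c/2 on the second off-diagonals, 0 elsewhere."""
    C = np.eye(p)
    C += c * (np.eye(p, k=1) + np.eye(p, k=-1))
    C += (c / 2) * (np.eye(p, k=2) + np.eye(p, k=-2))
    return C

C1 = concentration(100, 0.6)   # component 1 (c_1 = 0.6)
# 100 samples for one component: mean m * 1_p (here m = 0.3), covariance C^{-1}
rng = np.random.default_rng(0)
X1 = rng.multivariate_normal(0.3 * np.ones(100), np.linalg.inv(C1), size=100)
```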
Figure \ref{fig_2} shows the precision-recall curves produced by gLasso, nodewise regression,
$\psi$-learning, EM-regularization, and the proposed method. It indicates that the proposed method
outperforms the others. The two plots in the first row of Figure \ref{fig_2}
compare the performance of the different methods when $m=0$, which represents a very difficult
scenario in which the clusters differ only slightly in their precision matrices and thus the
samples are extremely difficult to cluster.
However, the proposed method still outperforms the others under this scenario.
\begin{figure}[]
\centering
\subfigure[$m=0$ and $p=100$]{
\label{fig11b}
\includegraphics[width=1.5in,angle=270]{654_0.eps}}
\hspace{0.03in}
\subfigure[$m=0$ and $p=200$]{
\label{fig_12b}
\includegraphics[width=1.5in,angle=270]{654_0pre.eps}}
\\
\subfigure[$m=0.3$ and $p=100$]{
\label{fig21b}
\includegraphics[width=1.5in,angle=270]{654_3.eps}}
\hspace{0.03in}
\subfigure[$m=0.3$ and $p=200$]{
\label{fig22b}
\includegraphics[width=1.5in,angle=270]{654_3pre.eps}}
\\
\subfigure[$m=0.5$ and $p=100$]{
\label{fig31b}
\includegraphics[width=1.5in,angle=270]{654_5.eps}}
\hspace{0.03in}
\subfigure[$m=0.5$ and $p=200$]{
\label{fig32b}
\includegraphics[width=1.5in,angle=270]{654_5pre.eps}} \\
\caption{Comparison of different methods for recovering underlying networks for heterogeneous data with
different cluster means as well as different cluster precision matrices: `gLasso' refers to the graphical Lasso method,
`NodeReg' refers to the nodewise regression method, `$\psi$-learning' refers to the $\psi$-learning method,
`Penalized' refers to the EM-regularization method, and `IC' refers to the proposed method.}
\label{fig_2}
\end{figure}
Table \ref{AUC2} compares the areas under the precision-recall curves produced by the different methods,
and Table \ref{ex2tab2} compares the performance of the different methods in parameter estimation and
cluster identification. For each setting of $m$ and $p$, each method was evaluated based on 50
datasets, with the averaged evaluation results reported.
The numbers in the parentheses of the two tables represent the standard deviations
of the corresponding averages. The comparison indicates that the proposed method outperforms the others
in both parameter estimation and cluster identification.
\begin{table}
\tabcolsep=2pt\fontsize{9}{13}
\selectfont
\begin{center}
\caption{Comparison of average AUCs produced by different methods for the heterogeneous data
with different cluster means as well as different cluster precision matrices. }
\vspace{0cm}
\label{AUC2}
\begin{tabular}{ccccccc}
&m & gLasso & NodeReg & $\psi$-learning & Penalized & IC \\ \hline
\multirow{3}{2cm}{\centering $p=100$} & 0 & 0.653(0.002) & 0.732(0.002) & 0.888(0.003) & 0.595(0.003) & 0.927(0.003)\\ \cline{2-7}
&0.3 & 0.416(0.003) & 0.429(0.003) & 0.624(0.002) & 0.571(0.003) & 0.926(0.003)\\ \cline{2-7}
& 0.5& 0.162(0.001) & 0.184(0.001) & 0.434(0.005) & 0.460(0.003) & 0.914(0.003)\\ \hline
\multirow{3}{2cm}{\centering $p=200$} & 0 & 0.625(0.002) & 0.711(0.002) & 0.858(0.001) & 0.573(0.002) & 0.896(0.002)\\ \cline{2-7}
&0.3 & 0.388(0.002) & 0.401(0.003) & 0.615(0.002) & 0.555(0.002) & 0.898(0.003)\\ \cline{2-7}
& 0.5& 0.136(0.001) & 0.161(0.001) & 0.380(0.004) & 0.358(0.018) & 0.878(0.003)\\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\tabcolsep=2pt\fontsize{9}{14}
\selectfont
\begin{center}
\caption{Comparison of different methods in parameter estimation and cluster identification for
the heterogeneous data with different cluster means as well as different cluster precision matrices. }
\vspace{0cm}
\label{ex2tab2}
\begin{tabular}{cccccccc}
&&m & SL & FL & KL & fsr & nsr \\ \hline
\multirow{6}{2cm}{\centering $p=100$} &\multirow{3}{2cm}{\centering penalized} & $0$ & 3.745(0.019) & 22.952(0.033) & 258.397(3.042) & 0.633(0.121) & 0.651(0.106) \\ \cline{3-8}
&&0.3 & 3.723(0.022)& 22.902(0.038) &257.245(3.395) & 0.174(0.009) & 0.196(0.013)\\ \cline{3-8}
&&0.5 & 3.749(0.019)& 22.973(0.032) & 257.954(2.943) & 0.159(0.035) & 0.177(0.100)\\ \cline{2-8}
& \multirow{3}{2cm}{\centering IC} &0 & 3.453(0.049) & 11.782(0.069) & 42.363(0.369) & 0.103(0.017) & 0.094(0.011)\\ \cline{3-8}
&&0.3 & 3.387(0.042)& 11.572(0.060) & 41.161(0.284) & 0.010(0.003) & 0.010(0.003)\\ \cline{3-8}
&& 0.5 & 3.292(0.041) & 11.521(0.069) & 42.098(0.419) & 0(0) & 0(0)\\ \hline
\multirow{6}{2cm}{\centering $p=200$} &\multirow{3}{2cm}{\centering penalized} & $0$ & 3.797(0.002) & 32.411(0.004) & 505.035(3.343) & 0.597(0.231) & 0.678(0.429) \\ \cline{3-8}
&&0.3 & 3.740(0.020)& 32.336(0.045) & 510.729(4.079) & 0.479(0.104) & 0.345(0.085)\\ \cline{3-8}
&&0.5 & 3.752(0.012)& 32.333(0.034)& 508.792(1.768) & 0.267(0.009) & 0.108(0.005)\\ \cline{2-8}
& \multirow{3}{2cm}{\centering IC} &0 & 3.405(0.014) & 17.045(0.080) & 92.140(0.709) & 0.212(0.010) & 0.209(0.009)\\ \cline{3-8}
&&0.3 & 3.533(0.043)& 16.989(0.075) & 91.503(0.746) & 0.005(0.001) & 0.004(0.001)\\ \cline{3-8}
&& 0.5 & 3.573(0.041) & 17.194(0.090) & 94.059(0.701) & 0(0) & 0(0)\\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Identification of cluster numbers}
When the number of clusters $M$ is unknown, we propose to determine its value according to the
BIC criterion given in (\ref{bic}). In what follows, we illustrate the performance of the proposed method under this
scenario using some simulated examples. We considered the cases with $M=2$ and $3$ and $p=100$ and 200.
For each combination of $(M,p)$, we simulated $100$ samples from each cluster with the same precision
matrix as defined in (\ref{plugin}). For the cluster means, we set ${\boldsymbol \mu}_1=0.5 {\bf 1}_p$ and
${\boldsymbol \mu}_2=-0.5 {\bf 1}_p$ for $M=2$, and set ${\boldsymbol \mu}_1=0$, ${\boldsymbol \mu}_2=0.5 {\bf 1}_p$ and
${\boldsymbol \mu}_3=-0.5 {\bf 1}_p$ for $M=3$.
Figure \ref{BICfig} compares the performance of the EM-regularization method and the proposed
method in identifying the number of clusters. It indicates that, for the simulated examples,
the proposed method was able to
correctly identify the true value of $M$ according to the BIC criterion, while
the EM-regularization method could not.
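The selection step can be sketched generically: fit the model for each candidate $M$, compute a BIC score, and pick the minimizer. The exact criterion in (\ref{bic}) refines the parameter count for the sparse mixture model and is not reproduced here; the log-likelihoods and parameter counts below are purely illustrative numbers shaped like the figure, not results from the paper.

```python
import numpy as np

def bic(loglik, n_params, n):
    """Generic BIC; the criterion in the text adjusts the parameter
    count for the sparse mixture model (not reproduced here)."""
    return -2.0 * loglik + n_params * np.log(n)

# hypothetical fitted log-likelihoods for M = 1,...,5 (n = 300 samples)
logliks = {1: -3000.0, 2: -2500.0, 3: -2100.0, 4: -2050.0, 5: -2020.0}
n_params = {M: M * 120 for M in (1, 2, 3, 4, 5)}   # illustrative counts
scores = {M: bic(logliks[M], n_params[M], 300) for M in logliks}
M_best = min(scores, key=scores.get)   # M = 3 for these numbers
```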
\begin{figure}
\centering
\subfigure[ $M=2$ and $p=100$]{
\label{fig1}
\includegraphics[width=2.2in]{M2p100.pdf}}
\hspace{0.05in}
\subfigure[$M=2$ and $p=200$]{
\label{fig2}
\includegraphics[width=2.2in]{M2p200.pdf}}
\\
\noindent
\subfigure[$M=3$ and $p=100$]{
\label{fig3}
\includegraphics[width=2.2in]{M3p100.pdf}}
\hspace{0.05in}
\subfigure[$M=3$ and $p=200$]{
\label{fig4}
\includegraphics[width=2.2in]{M3p200.pdf}}
\\
\caption{ BIC scores produced by the EM-regularization (Penalized) method and the proposed method (IC)
for different settings of $(M,p)$.}
\label{BICfig}
\end{figure}
\section{A Real Data Example}
Breast cancer, one of the most prevalent types of cancer, can be classified into four molecular
subtypes, namely, luminal A, basal-like, HER2-enriched,
and luminal B, based on tumor expression profiles (Haque et al., 2012).
In this study, we aim to construct a single gene regulatory network
across the four subtypes to discover the overall gene regulation mechanism in breast cancer.
The gene expression data for breast cancer are available at The Cancer Genome Atlas (TCGA),
which contains 768 patients and 20502 genes.
For each patient, some clinical information, such as survival time, age, gender and tumor stage,
is also available, but
the cancer subtypes are unknown. Since the data might be heterogeneous given the existence
of breast cancer subtypes, the proposed method can be applied here.
For this study, we are interested in learning a gene regulatory network related to the
survival time of patients. For this reason, we first applied a
marginal screening method to select the survival time-related genes. For each gene, we calculated its $p$-value
using the marginal Cox regression after adjusting for the effects of age, gender and tumor stage,
and then selected 592 genes according to a multiple hypothesis test at a false discovery rate (FDR) level
of 0.05. We used the empirical Bayes method of Liang and Zhang (2008) to conduct the test.
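The screening step selects genes whose $p$-values pass a multiple-testing threshold at FDR level 0.05. The text uses the empirical Bayes method of Liang and Zhang (2008); as an illustration, the sketch below implements the more common Benjamini-Hochberg procedure instead, with toy $p$-values.

```python
import numpy as np

def bh_select(pvals, fdr=0.05):
    """Benjamini-Hochberg selection at a given FDR level (illustrative
    alternative to the empirical Bayes test used in the text)."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = fdr * np.arange(1, m + 1) / m       # step-up thresholds k/m * alpha
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    return np.sort(order[:k])                     # indices of selected genes

pv = [0.001, 0.2, 0.004, 0.03, 0.8]
sel = bh_select(pv, fdr=0.05)   # selects genes 0, 2 and 3
```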
\begin{figure}[htbp]
\centering
\includegraphics[width=3.0in,angle=270]{BIC_plot.eps}
\caption{BIC scores produced by the proposed method for breast cancer data.}
\label{realbic_plot}
\end{figure}
To determine the number of components for the mixture model, we calculated BIC scores for $M=1,2,\ldots, 5$ with
the results shown in Figure \ref{realbic_plot}. According to the BIC scores, we set $M=3$.
The resulting three clusters consist of 338, 191 and 238 patients, respectively.
Figure \ref{KM_plot} shows the Kaplan-Meier curves of the three clusters.
A log-rank test for the three curves
produced a $p$-value of 3.89$\times 10^{-5}$, which indicates that
the patients in different clusters have different survival probabilities.
Further, for each gene, we conducted an ANOVA test for its mean expression level across
the three clusters. The resulting $p$-values are shown in Figure \ref{pvalue},
where most $p$-values are very close to 0.
This implies that the three clusters have different means and thus the data
are heterogeneous.
We note that the clustering results produced by the proposed method are biologically meaningful,
which is likely due to the existence of hidden subtypes of breast cancer.
As stated in Haque et al (2012), women with luminal A tumors had the longest survival time,
women with HER2-enriched and luminal B tumors had a much shorter survival time,
and women with basal-like tumors had an intermediate survival time
with deaths occurring earlier than those with luminal A tumors.
\begin{figure}
\centering
\subfigure[Kaplan-Meier curves for three patient groups.]{
\label{KM_plot}
\includegraphics[width=1.5in,angle=270]{KM_plot.eps}}
\hspace{0.03in}
\subfigure[Histogram of log $p$-values.]{
\label{pvalue}
\includegraphics[width=1.5in,angle=270]{pvalue.eps}} \\
\caption{The left panel shows the Kaplan-Meier curves for three different patient groups, and the right panel
shows the histogram of the logarithm of $p$-values obtained for each gene in the ANOVA test. }
\label{Clust}
\end{figure}
With $M$ being set to 3, the proposed method produced a gene regulatory network
(shown in Figure \ref{brca_plot}), from which some hub genes can be identified. The hub genes
refer to those with high connectivity in the gene regulatory network, and they
tend to play important roles in gene regulation.
To identify hub genes in a stable way, we consider a cross-validation-like method.
We divided the dataset into five equal subsets and then ran the proposed method five times,
each time applying it to four of the five subsets. In each run, we identified
ten hub genes according to their connectivity. The results are summarized in
Table \ref{hub2}, where the genes are ranked by
the frequency with which they were selected as hub genes among the five runs.
The results indicate that the performance of the proposed method is quite stable: quite a
few genes are frequently selected as hub genes in different runs.
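The hub-counting step can be sketched as: compute node degrees in each run's network, take the top-$k$ most connected genes, and count how often each gene recurs across runs. All names and the toy networks below are ours.

```python
import numpy as np
from collections import Counter

def hub_frequencies(networks, genes, top=10):
    """Count how often each gene appears among the `top` most connected
    nodes across the subset runs (networks are 0/1 adjacency matrices)."""
    counts = Counter()
    for A in networks:
        degree = A.sum(axis=0)                    # connectivity of each node
        hubs = np.argsort(degree)[::-1][:top]     # indices of top hubs
        counts.update(genes[i] for i in hubs)
    return counts.most_common()                   # (gene, frequency) pairs

# toy example: 3 runs, 5 genes, top-2 hubs per run
rng = np.random.default_rng(0)
genes = ["G1", "G2", "G3", "G4", "G5"]
nets = []
for _ in range(3):
    A = rng.integers(0, 2, size=(5, 5))
    A = np.triu(A, 1); A = A + A.T                # symmetric 0/1 adjacency
    nets.append(A)
freq = hub_frequencies(nets, genes, top=2)
```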
\begin{table}[!h]
\tabcolsep=3pt\fontsize{10}{14}
\selectfont
\begin{center}
\caption{Top ten hub genes identified by the proposed method, where
`Freq' denotes the number of times that the gene was selected as a hub gene
in the five subset runs, `Links' denotes
the average number of edges connected to the gene in the five networks with
its standard deviation given in the parentheses,
and the superscript * indicates that this gene has been verified in the literature to be related with
breast cancer.}
\vskip 0.3cm
\label{hub2}
\vspace{0cm}
\begin{tabular}{ccccccccc}
\hline\hline
Rank &Gene & Freq & Links & &Rank &Gene & Freq & Links\\ \hline
1 & LHFPL3$^*$ & 4 &49.2 (9.6) & & 6 & KRT12 & 3& 13.4 (5.1) \\ \cline{1-4} \cline{6-9}
2 & SEPP1$^*$ & 4 & 8.4 (1.4) & & 7 & FXYD1$^*$ & 2& 5.4 (2.3)\\ \cline{1-4} \cline{6-9}
3 & MYH11 & 4 & 8.6 (1.4) & & 8 & SCARA5$^*$ & 2& 6.4 (1.6) \\ \cline{1-4} \cline{6-9}
4 & F13A1$^*$ & 3 &12.2 (3.8) & & 9 & CLEC3B$^*$ & 2& 7.8 (2.8) \\ \cline{1-4} \cline{6-9}
5 & MAMDC2$^*$ & 3 & 5.4 (1.0) & & 10& LRRC70$^*$ & 2& 5.8 (1.7) \\\hline\hline
\end{tabular}
\end{center}
\end{table}
Our findings of hub genes are largely consistent with existing knowledge.
Among the top 10 hub genes, 8 have been verified in the literature to be related to breast cancer.
For example, LHFPL3, the gene with the highest connectivity in the networks,
is characteristic of primary glioblastoma, which involves processes important for
cancer development and progression (Milinkovic et al., 2013). The gene SEPP1 is significantly associated
with breast cancer risk among women (Mohammaddoust et al., 2018). The gene F13A1 is known
as a thrombotic factor that plays a major role in tumor formation (Ahmadi et al., 2016).
In the cancer coexpression network developed by Meng et al. (2016), MAMDC2 was found to play a key role
in the development of breast invasive ductal carcinoma. Our results also reveal some new findings, such as the gene MYH11.
Li et al. (2016) reported that MYH11 plays a role in tumor formation by disturbing stem cell differentiation
or affecting cellular energy balance, and it has been identified as a driver gene in human colorectal cancer,
although few studies have identified its function in breast cancer.
\begin{figure}[htbp]
\centering
\includegraphics[width=3.5in,angle=270]{brca_plot.eps}
\caption{The gene regulatory network constructed by the proposed method for breast cancer.}\label{brca_plot}
\end{figure}
\section{Discussion}
In this paper, we have proposed a new method for constructing gene regulatory networks for
heterogeneous data, which is able to simultaneously cluster samples into different groups
and learn an integrated network across the groups. The proposed method was illustrated using
some simulated examples and a real-world gene expression data example. The numerical results indicate
that the proposed method significantly outperforms the existing ones, such as graphical Lasso,
nodewise regression, $\psi$-learning, and EM-regularization. For the real-world gene expression
data example, we conducted a detailed post-clustering analysis, which indicates the heterogeneity
of the data and justifies the importance of the proposed method for real problems.
In addition to microarray gene expression data, the proposed method can be applied
to next generation sequencing (NGS) data based on the transformations developed
in Jia et al. (2017). To learn gene regulatory networks from NGS data, which are
often assumed to follow a Poisson or negative binomial distribution,
Jia et al. (2017) developed a random effect model-based transformation to continuize
the NGS data. Further, the continuized data can be transformed to Gaussian using
the nonparanormal transformation (Liu et al., 2012), and then the
proposed method can be applied. We expect that the proposed method can also
find wide applications in other scientific fields.
\section*{Acknowledgment}
The authors thank the book editor Dr. Yichuan Zhao and two referees for their constructive comments which have
led to significant improvement of this paper.
Liang's research was supported in part by the grants DMS-1612924 and DMS/NIGMS R01-GM117597.
\section*{References}
\begin{description}%
\item[]
Ahmadi, M., Nasiri, M., and Ebrahimi, A. (2016). Thrombosis-Related Factors FV and F13A1 Mutations in Uterine Myomas. {\it Zahedan Journal of Research in Medical Sciences}, 18(10).
\item[]
Benjamini, Y., Krieger, A. M., and Yekutieli, D. (2006). Adaptive linear step-up procedures that control the false discovery rate. {\it Biometrika}, 93(3), 491-507.
\item[]
Celeux, G., and Govaert, G. (1995). Gaussian parsimonious clustering models. {\it Pattern Recognition}, 28(5), 781-793.
\item[]
Danaher, P., Wang, P., and Witten, D. M. (2014). The joint graphical lasso for inverse covariance estimation across multiple classes. {\it Journal of the Royal Statistical Society, Series B}, 76(2), 373-397.
\item[]
Dempster, A. P. (1972). Covariance selection. {\it Biometrics}, 157-175.
\item[]
Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. {\it Journal of the Royal Statistical Society, Series B}, 1-38.
\item[]
Fan, J., Feng, Y., and Wu, Y. (2009). Network exploration via the adaptive LASSO and SCAD penalties. {\it The Annals of Applied Statistics}, 3(2), 521.
\item[]
Fan, J., Feng, Y., and Xia, L. (2015). A projection based conditional dependence measure with applications to high-dimensional undirected graphical models. arXiv preprint arXiv:1501.01617.
\item[]
Fan, J. and Lv, J. (2008). Sure Independence Screening for Ultrahigh Dimensional Feature Space. {\it Journal of the Royal Statistical Society, Series B}, 70, 849-911.
\item[]
Fan, J. and Song, R. (2010). Sure independence screening in generalized linear model with NP-dimensionality. {\it Annals of Statistics}, 38, 3567-3604.
\item[]
Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation
with the graphical lasso. {\it Biostatistics}, {\bf 9}, 432-441.
\item[]
Hastie, T., Tibshirani, R. and Friedman, J. (2009).
The elements of statistical learning (Second Edition). Springer-Verlag, 763 pages.
\item[]
Jia, B., Tseng, G., and Liang, F. (2018). Fast Bayesian Integrative Analysis for Joint Estimation of Multiple Gaussian Graphical Models. Submitted to {\it Journal of the American Statistical Association}.
\item[]
Jia, B., Xu, S., Xiao, G., Lamba, V., and Liang, F. (2017). Learning gene regulatory networks from next generation sequencing data. {\it Biometrics}.
\item[]
Haque, R., Ahmed, S. A., Inzhakova, G., Shi, J., Avila, C., Polikoff, J., ... and Press, M. F. (2012). Impact of breast cancer subtypes and treatment on survival: an analysis spanning two decades. {\it Cancer Epidemiology and Prevention Biomarkers}, 21(10), 1848-1855.
\item[]
Lee, S., Liang, F., Cai, L., and Xiao, G. (2018). A two-stage approach of gene network analysis for high-dimensional heterogeneous data. {\it Biostatistics}, 19(2), 216-232.
\item[]
Li, Y., Tang, X. Q., Bai, Z., and Dai, X. (2016). Exploring the intrinsic differences among breast tumor subtypes defined using immunohistochemistry markers based on the decision tree. {\it Scientific reports}, 6, 35773.
\item[]
Liang, F., Jia, B., Xue, J., Li, Q., and Luo, Y. (2018). An imputation-consistency algorithm for high-dimensional missing data problems and beyond. arXiv preprint arXiv:1802.02251.
\item[]
Liang, F., Song, Q. and Qiu, P. (2015). An Equivalent Measure of Partial Correlation Coefficients for High Dimensional Gaussian Graphical Models. {\it Journal of the American Statistical Association}, {\bf 110}, 1248-1265.
\item[]
Liang, F. and Zhang, J. (2008). Estimating the false discovery rate using the
stochastic approximation algorithm. {\it Biometrika}, {\bf 95}, 961-977.
\item[]
Liu, H., Han, F., Yuan, M., Lafferty, J., and Wasserman, L. (2012). High-dimensional semiparametric Gaussian copula graphical models. {\it The Annals of Statistics}, 40(4), 2293-2326.
\item[]
Meinshausen, N. and B\"uhlmann, P. (2006). High-dimensional graphs and variable selection
with the Lasso. {\it Annals of Statistics}, {\bf 34}, 1436-1462.
\item[]
Meng, L., Xu, Y., Xu, C., and Zhang, W. (2016). Biomarker discovery to improve prediction of breast cancer survival: using gene expression profiling, meta-analysis, and tissue validation. {\it OncoTargets and therapy}, 9, 6177.
\item[]
Milinkovic, V., Bankovic, J., Rakic, M., Stankovic, T., Skender-Gazibara, M., Ruzdijic, S., and Tanic, N. (2013). Identification of novel genetic alterations in samples of malignant glioma patients. {\it PLoS One}, 8(12), e82108.
\item[]
Mohammaddoust, S., Salehi, Z., and Saeidi Saedi, H. (2018). SEPP1 and SEP15 gene polymorphisms and susceptibility to breast cancer. {\it British Journal of Biomedical Science}, 1-11.
\item[]
Nielsen, S. F. (2000). The stochastic EM algorithm: estimation and asymptotic results. {\it Bernoulli}, 6(3), 457-489.
\item[]
Ruan, L., Yuan, M., and Zou, H. (2011). Regularized parameter estimation in high-dimensional Gaussian mixture models. {\it Neural Computation}, 23(6), 1605-1622.
\item[]
Storey, J. D. (2002). A direct approach to false discovery rates. {\it Journal of the Royal Statistical Society: Series B (Statistical Methodology)}, 64(3), 479-498.
\item[]
Stouffer, S. A., Suchman, E. A., DeVinney, L. C., Star, S. A., Jr Williams, R. M., (1949), {\it The American Soldier, Vol. 1: Adjustment During Army Life}. Princeton, NJ: Princeton University Press.
\item[]
Tibshirani, R. (1996). Regression analysis and selection via the Lasso. {\it Journal of the Royal Statistical Society, Series B}, {\bf 58}, 267-288.
\item[]
Yuan, M. and Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model.
{\it Biometrika}, {\bf 94}, 19-35.
\end{description}
\end{document}
\section{Introduction}
\label{sec:intro}
As written by Daniel Kahneman in the article based on his Nobel Prize lecture: ``\textit{A theory of choice that completely ignores feelings such as the pain of losses and the regret of mistakes is not just descriptively unrealistic. It also leads to prescriptions that do not maximize the utility of outcomes as they are actually experienced (...).}'' \cite{Kah:03}. This means that individuals' memories influence the judgments and choices that people make. Yet, most opinion dynamics models, which can be treated as a zero-level approach to various more complex social processes such as political voting, marketing choices, diffusion of innovation, etc., initially did not include memory on the microscopic level. This applies in particular to binary-opinion models, such as: the voter model (VM) \cite{Cli:Sud:73,Sen:Bik:13}, models of social impact \cite{Now:Sza:Lat:90,Hol:Kac:Sch:00}, the Galam (majority) model \cite{Gal:86,Gal:90,Gal:12}, the Sznajd model \cite{Szn:Szn:00,Szn:05a}, the threshold model \cite{Wat:02}, or the $q$-voter model \cite{Cas:Mun:Pas:09}.
To the best of our knowledge, the idea of memory was introduced for the first time into the voter model by Dall'Asta and Castellano to reduce the noise of the VM, which resulted in the appearance of an effective surface tension \cite{Dal:Cas:07}. They endowed each $i$-th voter with a pair of counters: $C^+_i$ for being in state '$+$' and $C^-_i$ for being in state '$-$'. At each time step the corresponding counter was updated, and a voter could change its state only when the counter reached a given threshold value. A similar idea was used later in a model describing consumer decisions regarding switching to dynamic electricity tariffs \cite{Kow:etal:14}. However, in the latter paper only one counter was introduced, measuring for how many steps the same state (opinion) had been kept, which reflected the assumption that the decision is based on the unanimity of past opinions. Since 2007, the concept of memory has been incorporated into the voter model in several ways \cite{Sen:Bik:13}. However, all proposed approaches included one form or another of waiting time for the opinion change \cite{Sta:Tes:Sch:08,Xio:Liu:11,Tak:Mas:11}. For example, Stark \textit{et al.} studied the effect of a memory-dependent transition rate in the voter model, assuming that the flip rate decreases with the time the voter has been in its current state \cite{Sta:Tes:Sch:08}. Xiong \textit{et al.} assumed that an individual's inclination increases with the number of times the voter has held its most frequent opinion in past interactions \cite{Xio:Liu:11}. A different idea of memory was proposed by Takaguchi and Masuda, who endowed the links between nodes, instead of the voters, with a random variable representing the time until the initial update event occurs on the link \cite{Tak:Mas:11}. Zhong \textit{et al.} introduced a generalized voter model with a time-decaying rate of the influence of peer pressure, which incorporates a multilayer network and memory of past influences \cite{Zho:etal:16}.
Recently, yet another idea of memory was introduced into the voter model by Woolcock \textit{et al.} \cite{Woo:etal:17}. They studied a heterogeneous voter model in which each agent was endowed with a fitness parameter. During pairwise interactions the agents' fitnesses are compared, and with probability $p$ the agent with the lower fitness adopts the opinion of the one with the higher value, whereas with complementary probability the opposite happens. The winning agent (the one that keeps its opinion) increases its fitness. This idea is strongly reminiscent of the Bonabeau model, in which agents also fight when they meet. Each fight influences an agent's power: a winner becomes stronger and a loser becomes weaker, which alters the probabilities of winning future fights \cite{Bon:The:Den:95,Mal:Sta:Kul:06}.
In this work, we propose to introduce memory into opinion dynamics in yet another way, inspired by a paper on random walkers with extreme value memory \cite{Har:15}. We will study the $q$-voter model, which has turned out to be particularly interesting from both theoretical and applicative points of view \cite{Cas:Mun:Pas:09,Mor:etal:13,Tim:Pra:14,Jav:Squ:15,Mob:15,Tim:Gal:15,Chm:Szn:15,Sie:Szw:Wer:16,Mel:Mob:Zia:16, Kru:Szw:Wer:17,Jed:17,Jed:Szn:17}. We will focus on the $q$-voter model with independence, introduced in \cite{Nyc:Szn:Cis:12}. This version of the model has turned out to be particularly useful in modeling the diffusion of eco-innovations \cite{Kow:etal:14,Kow:etal:16,Byr:etal:16,Kow:17}. Until now, the level of independence $p$ was an external parameter of the model, and two different approaches have been used: the person-oriented and the situation-oriented one \cite{Szn:Szw:Wer:14,Jed:Szn:17}. Within the situation-oriented approach the system is homogeneous, and each of $N$ agents can behave independently with probability $p$ and conform to the group with complementary probability $1-p$. On the other hand, within the person-oriented approach $pN$ agents are permanently independent, and $(1-p)N$ of them always conform to the source of influence. In this paper we will show that, without assuming the person-oriented approach a priori and setting a particular value of the parameter $p$, we can obtain two regimes (a situation or a person state) depending on the external noise $T$, which can be interpreted as the social temperature \cite{Kac:Hol:96,Bah:Pas:98}. For low $T$, the system will self-organize from the situation state, in which the population is homogeneous, to a state in which agents acquire personal traits. This self-organization will appear as a result of bad and good memories related to the type of social response.
There are many social benefits of conformity, like a sense of security, fraternity, or convenience. It is easier to cooperate with people who follow social norms. On the other hand, we often think of conformity as a bad thing, meaning that people who conform are weak and dependent. Especially in western individualistic cultures, the word ``conformity'' tends to carry a negative value judgment. Moreover, there are many examples that conformity can lead to disaster (e.g., suicides, fraternity hazing, sexual assault). Nevertheless, conformity can also be used for many worthwhile purposes like stimulating pro-ecological or anti-racist behavior \cite{San:10}. Therefore, the question ``is conformity good or bad?'' has no scientific answer. Assuming the values most of us share, we can say that conformity is at times bad (e.g., when it leads someone to drive drunk), at times good (e.g., when it inhibits people from cutting into a theater line), and at times inconsequential (when it disposes tennis players to wear white) \cite{Mye:10}. The assumption that we make in this paper is that all these bad and good experiences related to an individual's behavior, conformist or independent, are collected in personal memory and somehow influence future behavior. So, if someone gained more benefit from being independent in the past, they will be more likely to behave independently in the future, and vice versa. This means that we do not assume a priori that agents have personalities. At the beginning the system is homogeneous, i.e., all agents are the same. However, as time goes on, their individual situational experiences may lead to heterogeneity.
To build the model based on the above assumption, we need to know how a person's memories of the past may influence their decisions about the future. The issue of how memory impacts various decisions has been investigated in a number of psychological experiments related to political voting \cite{Mic:14}, evaluations of pleasurable experiences \cite{Do:Rup:Wol:08}, or episodes of pain \cite{Kah:Fre:Sch:Red:93,Red:Kat:Kah:03}.
For example, it has been shown empirically that patients' memories of the past may influence their decisions about future medical treatment, yet memories are imperfect and susceptible to bias. In particular, the duration of an episode of pain has relatively little effect on subsequent evaluations, whereas the worst part of the experience and the amount of pain just before the episode ends are weighted heavily in the final impression. This observation, now known as the \textit{peak-end rule}, has recently been applied to a random walk model where the probability of moving left or right depends on the maximum value of a random variable associated with each time step \cite{Har:15}.
Here, we will use the same idea to model how agents decide to behave, independently or conform to social norms.
\section{Model description}
\label{sec:model}
We consider a system composed of $N$ mutually connected agents so that each agent is a neighbor of everyone else. Such a structure corresponds to a social network represented by a complete graph where each node is associated with one agent, and links indicate possible interactions between individuals.
Networks like this one are frequently used to model relations in small groups or cliques where everybody knows each other \cite{Byr:etal:16, Kar:Sri:Cha:17}.
In order to mimic interactions in larger societies, where the social structure is more complicated, complex networks are employed, and they \textcolor{mycolor}{form} a framework for agent-based models \cite{Jed:17,Suc:Egu:Mig:05,Jav:Squ:15}.
An agent in $i$-th vertex is characterized by a two-state variable $s_i=\pm 1$, $i\in\{1,2,...,N\}$, which can be interpreted as a binary opinion on a given issue, e.g., an agent agrees or disagrees with something.
Of course, the interpretation depends strictly on the phenomenon being modeled.
Since the variable $s_i$ takes only two values, just like a spin in the famous Ising model, agents are often called spins. Herein, we will use the terms agent, spin, voter, and individual interchangeably.
Agents in the system are subjected to two types of interactions that may change their opinions.
From a psychological point of view, these factors are recognized as two different responses to social pressure: conformity and independence \cite{Nai:Szn:16}.
The first one tends to order the system whereas the second one tries to disturb it and thus is regarded as stochastic noise.
In the model, a random sequential updating scheme is used. Therefore, at each elementary time step, we choose randomly one agent that can reconsider its opinion. Additionally, we choose at random a group of influence comprised of $q$ of its nearest neighbors.
The group is called $q$-panel, and it attempts to impact upon a state of the chosen agent.
With probability $1-p$, the agent behaves as a conformist and yields to the group pressure by adopting the opinion of the panel whenever the group is unanimous -- all $q$ individuals are in the same state. Otherwise, when there is no consensus in the panel, nothing happens, and the state of the chosen \textcolor{mycolor}{agent} remains unchanged.
With complementary probability $p$, the agent acts independently of its neighbors.
It decides on its own whether to change its opinion, so with probability $1/2$, the agent's state is changed to the opposite one.
In previous studies, the parameter $p$, which controls the level of independence in the system, did not change in time. It was considered an external parameter established at the beginning of the simulation, so that all agents were equally likely to be independent in a given time step. Such an approach is called the situation-oriented one because the \textcolor{mycolor}{agent}'s behavior (i.e., acting as a conformist or as an independent individual) may change from one time step to another during the simulation \cite{Nyc:Szn:13,Nyc:Szn:Cis:12}. Such a probabilistic situational approach had already been used earlier within Galam's majority model with contrarians \cite{Gal:04, Gal:07}.
From a physical point of view, one can also think of these fluctuations in agents' attitudes as an annealed disorder introduced into the system.
A competing approach, the person-oriented one, has been studied as well. It assumes that the behavior of agents arises from their personal traits and therefore does not evolve in time. So at the beginning of the simulation, a fraction $p$ of all individuals are set to always act independently, and the rest of the agents are always conformists \cite{Szn:Szw:Wer:14, Jed:Szn:17}.
This approach, on the other hand, can be related to the quenched disorder since the behavior of agents is frozen in time.
As it turned out, there are qualitative differences between these two approaches applied to the $q$-voter model with independence described above -- discontinuous phase transitions are present only in the situation-oriented model \cite{Jed:Szn:17}. Moreover, within the quenched approach the critical value of independence, below which an ordered phase with the majority-minority broken symmetry is observed, increases with the size of the influence group $q$. The same result has been obtained also for the $q$-voter model with anticonformity \cite{Nyc:Szn:Cis:12} and the Galam's majority model with contrarians \cite{Gal:04, Gal:07}.
In the annealed version of the $q$-voter model with independence the opposite relation is observed -- the critical value of
independence decreases with the size of the influence group $q$.
In this work, on the other hand, $p$ is an internal parameter, which is not fixed and can alter in time depending on the past experiences of a given agent.
In particular, we incorporate the idea of the extreme value memory \cite{Har:15} into the $q$-voter model.
With each time step, we associate a random variable $U_t$ that represents the utility, \textcolor{mycolor}{i.e.}, the amount of satisfaction that an agent receives from its choice.
We assume that these random variables $U_t$ are independent and identically distributed for different times.
Now, \textcolor{mycolor}{agents} remember the maximum value of $U_t$ over all their independent behaviors, denoted $U^I$, and separately over all their conformist behaviors, denoted $U^C$.
The next choice of attitude is then made based on these remembered maximum utilities, in such a way that the behavior that brought the most satisfaction in the past is the more probable one.
Therefore, \textcolor{mycolor}{we choose} the level of independence $p$ \textcolor{mycolor}{to be} given by a logistic function commonly used in discrete choice models in economics. For specific values of random variates $u^I$ and $u^C$, outcomes of random variables $U^I$ and $U^C$, the level of independence can be computed by the following formula:
\begin{equation}
\label{eq:independence}
p=\frac{e^{u^I/T}}{e^{u^I/T}+e^{u^C/T}},
\end{equation}
where $T$ is a positive external parameter, and it represents the noise level in the decision making process.
The maximum utilities $U^I$ and $U^C$ at the time $t$ are given by
\begin{align}
U^I &=\max_{0< \textcolor{mycolor}{j}< t}\{\textcolor{mycolor}{u^I_0, U_j}: \textrm{an agent is independent at \textcolor{mycolor}{$j$-th} step}\},\\
U^C &=\max_{0< \textcolor{mycolor}{j}< t}\{\textcolor{mycolor}{u^C_0, U_j}: \textrm{an agent is a conformist at \textcolor{mycolor}{$j$-th} step}\},
\end{align}
where \textcolor{mycolor}{$u^I_0$ and $u^C_0$} are the initial values of the utilities assigned to all agents. Note that when \textcolor{mycolor}{$u^I_0=u^C_0$}, the level of independence $p$ is equal to $1/2$ at the beginning of simulation, and initially, there is no bias in the attitude of agents, see Fig.~\ref{fig:independenceF}.
Additionally, the relation between $p$ and the noise level, that is, Eq.~(\ref{eq:independence}) is exemplified in Fig.~\ref{fig:independenceF} for three different pairs of utilities.
We should emphasize that $p$ not only changes in time but also differs from agent to agent, since its value depends on the history of the specific individual.
In general, $U_t$ can come from any distribution. However, throughout the work, we use an exponentially distributed random variable with parameter $\lambda>0$. Thus, the probability density function of $U_t$ has the following form:
\begin{equation}
\label{eq:pdfU}
f_{U_t}(u)=\mathbbm{1}_{\{u\ge 0\}}\lambda e^{-\lambda u},
\end{equation}
where $\mathbbm{1}_{\{u\ge 0\}}$ is the indicator function of the interval $[0,\infty)$.
This particular choice of a distribution is motivated by the emergence of an interesting transition between two stable states of the system \cite{Har:15}, which is described in the next section.
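The choice rule of Eq.~(\ref{eq:independence}) is straightforward to state in code. The following is a minimal sketch (the function name and its interface are our own, not part of the model specification) of the logistic rule, stabilized numerically by shifting both exponents by the larger remembered utility:

```python
import math

def independence_probability(u_max_I, u_max_C, T):
    """Level of independence p from Eq. (1): a logistic (softmax) choice
    between the maximum remembered utilities at noise level T > 0."""
    # Shifting both exponents by the larger utility leaves the ratio,
    # and hence p, unchanged while avoiding overflow for small T.
    m = max(u_max_I, u_max_C)
    e_I = math.exp((u_max_I - m) / T)
    e_C = math.exp((u_max_C - m) / T)
    return e_I / (e_I + e_C)
```

For equal memories the rule returns $p=1/2$, whereas for a fixed utility gap the low-$T$ limit freezes the agent into the better-remembered behavior, in line with Fig.~\ref{fig:independenceF}(a).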
\begin{figure}[!t]
\centerline{\epsfig{file=fig1.eps}}
\caption{\label{fig:independenceF} (a) Levels of independence $p$ as functions of the noise strength $T$ for different fixed values of the utilities \textcolor{mycolor}{$(u^I;u^C)$}. (b) Time evolution of the average levels of independence in the system of the size $N=10^6$. The initial utilities are as follows: \textcolor{mycolor}{$u^I_0=0.4$ and $u^C_0=0$}. Markers represent the outcome of Monte Carlo simulations and are connected just to guide the eye. Each line refers to a different value of $T$. The noise increases from the top to the bottom. Note that the final values of $\langle p\rangle$ depend on the noise level, in contrast to the case with the same initial utilities where $\langle p\rangle=0.5$ independently of the noise.}
\end{figure}
The simulation algorithm is as follows:
\begin{enumerate}
\item Initialize the simulation ($t=0$): choose an initial concentration of up-spins $c(0)$, then assign an initial opinion to each agent -- for all vertices numbered by $i\in\{1,...,N\}$ draw a random number $r_i$ from a uniform distribution supported on the interval $[0,1]$.
If $r_i<c(0)$, set $s_i(0)=1$. Otherwise, set $s_i(0)=-1$. Select any two real numbers as the initial values of utilities \textcolor{mycolor}{$u_0^I$ and $u_0^C$}. Note that all agents have the same initial values.
\item Set $t=t+\Delta t$, and choose randomly $i$-th agent from the system.
\item Draw two random numbers $r$ and \textcolor{mycolor}{$u_t$}, the first from the uniform distribution on $[0,1]$ and the second from the utility distribution given by Eq.~(\ref{eq:pdfU}).
\item Calculate $p$ for the chosen agent based on its extreme value memory from Eq.~(\ref{eq:independence}).
\item If $r<p$, the agent acts independently -- flip the spin with probability $1/2$. Relate \textcolor{mycolor}{$u_t$} with independence; go to 2.
\item If $r\ge p$, the agent acts as a conformist -- choose randomly a group of $q$ distinct neighbors of the chosen $i$-th agent: $n_1, n_2,...,n_q\in\{1,2,...,N\}$. If all $q$ \textcolor{mycolor}{agents} are in the same state, the chosen agent takes the same opinion as the group. Relate \textcolor{mycolor}{$u_t$} with conformity; go to 2.
\end{enumerate}
As usual, one Monte Carlo step (MCS) corresponds to $N$ elementary time steps $\Delta t=1/N$.
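The steps above can be gathered into a short Monte Carlo routine. The following sketch is a direct transcription of the stated rules on a complete graph (the function name, its defaults, and the seeding convention are our own choices), returning the final up-spin concentration:

```python
import math
import random

def simulate(N=1000, q=4, T=0.5, lam=1.0, c0=1.0,
             u0_I=0.0, u0_C=0.0, mcs=100, seed=0):
    """One run of the q-voter model with independence and extreme value
    memory on a complete graph; returns the up-spin concentration c."""
    rng = random.Random(seed)
    # Step 1: initial opinions drawn with concentration c0; all agents
    # start with the same remembered utilities.
    s = [1 if rng.random() < c0 else -1 for _ in range(N)]
    uI = [float(u0_I)] * N  # max utility seen while acting independently
    uC = [float(u0_C)] * N  # max utility seen while acting as a conformist
    for _ in range(mcs * N):          # one MCS = N elementary updates
        i = rng.randrange(N)          # step 2: pick an agent at random
        u_t = rng.expovariate(lam)    # step 3: draw this step's utility
        # Step 4: independence level from Eq. (1), stabilized against
        # overflow by subtracting the larger exponent.
        m = max(uI[i], uC[i])
        e_I = math.exp((uI[i] - m) / T)
        e_C = math.exp((uC[i] - m) / T)
        p = e_I / (e_I + e_C)
        if rng.random() < p:
            # Step 5: act independently -- flip with probability 1/2
            # and relate u_t with independence.
            if rng.random() < 0.5:
                s[i] = -s[i]
            uI[i] = max(uI[i], u_t)
        else:
            # Step 6: act as a conformist -- draw q distinct neighbors,
            # adopt their opinion only if the panel is unanimous,
            # and relate u_t with conformity.
            panel = set()
            while len(panel) < q:
                j = rng.randrange(N)
                if j != i:
                    panel.add(j)
            panel = list(panel)
            if all(s[j] == s[panel[0]] for j in panel):
                s[i] = s[panel[0]]
            uC[i] = max(uC[i], u_t)
    return sum(1 for x in s if x == 1) / N
```

Starting from consensus, a run at low $T$ should relax towards an ordered value of $c$, while a run at large $T$ should drive the system towards the disordered value $c=0.5$.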
\section{Results}
\label{sec:results}
In Ref.~\cite{Har:15}, a similar decision making mechanism was applied to a single one-dimensional random walker that, based on its past experiences, had to choose the direction of its next step.
It turns out that the walker's long-time behavior depends on the utility distribution.
In particular, when $U_t$ comes from an exponential distribution with parameter $\lambda$, there is a transition at
\begin{equation}
\label{eq:temperature}
T=T_c=\frac{1}{\lambda }
\end{equation}
\begin{figure}[!b]
\centerline{\epsfig{file=fig2.eps}}
\caption{\label{fig:histogramSym} Normalized histograms of the independence level $p$ in the system comprised of $N=10^6$ agents after $10^3$ MCS for the same initial utilities \textcolor{mycolor}{$u_0^I=u_0^C=0$} at five different noise levels increasing from the left to the right. The group of influence contains $q=4$ agents. Note that although the distribution changes with $T$, it stays symmetric all the time. Moreover, the average value of independence in the system $\langle p\rangle$ is not affected by the noise. It remains unchanged for different $T$ and equals $\langle p\rangle=0.5$.}
\end{figure}
\begin{figure}[!b]
\centerline{\epsfig{file=fig3a.eps}}
\centerline{\epsfig{file=fig3b.eps}}
\caption{\label{fig:histogramPanel} Normalized histograms of the independence level $p$ in the system comprised of $N=10^6$ agents with the bias in the initial values of utilities at five different noise levels increasing from the first column on the left to the last column on the right. The top row corresponds to the following initial values \textcolor{mycolor}{$u^I_0=0.4$ and $u^C_0=0$} while the bottom row refers to the values \textcolor{mycolor}{$u^I_0=0$ and $u^C_0=0.4$}. The data are collected after $10^3$ Monte Carlo steps.}
\end{figure}
between a stable state in which the walker is frozen into motion along one direction, which happens for $\lambda T<1$, and a stable state in which it samples both directions roughly equally, for $\lambda T>1$.
In terms of our model, the motion in one direction would correspond to an agent that does not change its behavior, i.e., it is permanently a conformist or an independent individual. On the other hand, the state with the equal sampling of both directions would refer to an agent that constantly changes its attitude from being independent to being a conformist.
In fact, these states may be thought of as two separate regimes with quenched (person-oriented approach) and annealed (situation-oriented approach) disorder \cite{Jed:Szn:17,Szn:Szw:Wer:14}.
Now, the question arises whether the same transition will be observed in the system of interacting agents defined in the previous section, and how memory will impact the collective behavior of the system.
In order to answer the above questions, we carry out Monte Carlo simulations of the $q$-voter model with independence and extreme value memory, where the utilities are drawn from the exponential distribution with $\lambda=1$.
In our model, the level of independence $p$ is responsible for the behavior of agents.
Since $p$ is the property of every individual, we can examine the distribution of this feature in the system.
Figure~\ref{fig:histogramSym} presents the empirical density function \textcolor{mycolor}{$f(p)$} of the independence level after $10^3$ Monte Carlo steps for the system comprised of $N=10^6$ agents for several values of $T$.
In this case, the initial utility values are set to zero, i.e., \textcolor{mycolor}{$u_0^I=u_0^C=0$}.
Indeed, when $T$ is low, agents quickly promote one attitude over the other, and just as the walker becomes frozen into motion along one direction, our agents become frozen into one type of behavior. Moreover, we can see from Fig.~\ref{fig:histogramSym} that approximately half of all agents are constantly independent, that is, their $p$ is close to 1, while the other half are conformists since their $p$ is close to 0.
On the other hand, when $T$ is high, the distribution of $p$ concentrates around the value $p=0.5$, so agents constantly change their behavior from one time step to the other.
On average, half of their choices involve conformity and the other half independence; see the right panel of Fig.~\ref{fig:histogramSym}.
Similar symmetric functions \textcolor{mycolor}{$f(p)$} are obtained for initial utilities that are different from 0 but of the same value, i.e., \textcolor{mycolor}{$u_0^I=u_0^C$}.
In all these cases, the average value of independence $\langle p\rangle$ in the system is not influenced by the noise, and it remains at the same level $\langle p\rangle=0.5$ for different $T$.
In contrast, when \textcolor{mycolor}{$u_0^I\neq u_0^C$}, the distribution of the level of independence becomes skewed for low values of $T$, and only strong noise makes it symmetric, as we can see from Fig.~\ref{fig:histogramPanel}.
Moreover, the average value of independence in the system now depends on $T$; see Fig.~\ref{fig:independenceF}.
In general, we can obtain diverse stationary probability density functions of the level of independence. Their shape depends on the initial values of the utilities and the noise level $T$.
Although we are not able to derive the exact formula for \textcolor{mycolor}{$f(p)$}, let us assume that we already know the form of this density function.
In that case, we can calculate the stationary value of the up-spin concentration in the system, defined as the fraction of agents with a positive opinion, i.e., $s_i=1$. If there are $N_\uparrow$ \textcolor{mycolor}{agents} with $s_i=1$ in a given time step, the up-spin concentration reads
\begin{equation}
c=\frac{N_\uparrow}{N}.
\end{equation}
In every elementary time step, the total number of agents with a positive opinion may increase by one, decrease by one, or remain at the same level since we use a random sequential updating. This corresponds to the following transition rates:
\begin{equation}
\begin{split}
\gamma^+ &=\text{P}\left(c\rightarrow c+\frac{1}{N}\right),\\
\gamma^- &=\text{P}\left(c\rightarrow c-\frac{1}{N}\right),\\
\end{split}
\end{equation}
so that the time evolution of the concentration is given by the rate equation
\begin{equation}
\label{eq:rateEq}
\frac{\partial c}{\partial t}=\gamma^+-\gamma^-,
\end{equation}
in the limit of $N\rightarrow\infty$. Note that for the stationary value of the up-spin concentration, Eq.~(\ref{eq:rateEq}) gives $\gamma^+=\gamma^-$.
Similarly, in the quenched region, we can introduce the concentration of up-spins only among agents whose independence level lies in the range $(p,p+dp)$. We denote this quantity by $c_p$\textcolor{mycolor}{:
\begin{equation}
c_p=\frac{N^\uparrow_p}{N_p},
\end{equation}
where $N_p$ is the number of agents with the specific independence level, and $N_p^\uparrow$ indicates how many of them have a positive opinion.}
Now, thanks to the above partition, we can write down explicitly the transition rates $\gamma_p^+$, $\gamma_p^-$ for all values of independence, and consequently, find the stationary levels of the concentrations $c_p$.
Based on the model description and the mean-field approximation \cite{Mor:etal:13,Nyc:Szn:13}, the transition rates have the following forms:
\begin{equation}
\begin{split}
\gamma^+_p&=(1-c_p)\left[(1-p)c^q+\frac{p}{2}\right],\\
\gamma^-_p&=c_p\left[(1-p)(1-c)^q+\frac{p}{2}\right].
\end{split}
\end{equation}
By equating the above expressions, we obtain the stationary concentration of agents with a positive opinion and with a level of independence close to $p$:
\begin{equation}
\label{eq:cp}
c_p=\frac{(1-p)c^q+\frac{p}{2}}{(1-p)\left[c^q+(1-c)^q\right]+p}.
\end{equation}
Having $c_p$, we are able to determine the overall up-spin concentration in the quenched case by solving the following self-consistent equation
\begin{equation}
\label{eq:selfConsistent}
c=\int_{-\infty}^\infty c_pf(p)dp.
\end{equation}
In the quenched region, since the probability density function \textcolor{mycolor}{$f(p)$} concentrates around the points 0 and 1, as seen in Figs.~\ref{fig:histogramSym} and \ref{fig:histogramPanel}, it can be approximated by a discrete distribution represented as a sum of two Dirac delta functions at these points with masses $1-\bar{p}$ and $\bar{p}$, respectively:
\begin{equation}
\label{eq:quenchedDistribution}
f(p)=\bar{p}\delta(p-1)+(1-\bar{p})\delta(p),
\end{equation}
where $\bar{p}$ corresponds to the average level of independence in the system because
\begin{equation}
\langle p\rangle=\int_{-\infty}^\infty pf(p)dp=\bar{p}.
\end{equation}
Note that $\bar{p}$ can also be interpreted as the fraction of independent individuals in the quenched system. For $\bar{p}=0.5$, we get the symmetric distribution obtained when the initial utilities are equal, \textcolor{mycolor}{$u_0^I=u_0^C$}. In contrast, if at the beginning the utilities are not equal and \textcolor{mycolor}{$u_0^I<u_0^C$}, then we have $\bar{p}<0.5$. The opposite situation takes place for \textcolor{mycolor}{$u_0^I>u_0^C$}; see the first column of Fig.~\ref{fig:histogramPanel}.
Now, let us perform the integration in Eq.~(\ref{eq:selfConsistent}) after inserting Eqs.~(\ref{eq:cp}) and (\ref{eq:quenchedDistribution}) into it. The following implicit expression is obtained for the stationary up-spin concentration at low noise values $T$ where we can approximate \textcolor{mycolor}{$f(p)$} by two Dirac delta functions Eq.~(\ref{eq:quenchedDistribution}):
\begin{equation}
\label{eq:qurnchedc}
c=\frac{1}{2}\bar{p}+\frac{c^q}{c^q+(1-c)^q}(1-\bar{p}).
\end{equation}
Figure~\ref{fig:phaseDiagramsQVoter} illustrates graphically the above dependency for a few values of $q$.
Continuous and dashed lines refer to stable and unstable states, respectively.
Now, if we knew the connection between the average independence $\bar{p}$, the initial utilities \textcolor{mycolor}{$u_0^I$ and $u_0^C$}, and the noise level $T$, we would read from Fig.~\ref{fig:phaseDiagramsQVoter} or directly from Eq.~(\ref{eq:qurnchedc}) the stationary value of the up-spin concentration for these parameters in the quenched regime, i.e., for small $T$.
In general, this relation is unknown; however, we already know that $\langle p\rangle=0.5$ for all values of $T$ in the special case when \textcolor{mycolor}{$u_0^I=u_0^C$}.
Thus, when the initial utilities are the same, in order to establish the stationary concentrations for small noises, we look at Fig.~\ref{fig:phaseDiagramsQVoter} and the intersections of solid lines representing the stable solutions of Eq.~(\ref{eq:qurnchedc}) with the dotted vertical line at $\bar{p}=0.5$.
We see that there are only two such intersections. Moreover, we expect that $q$ should not strongly influence the level of the stationary up-spin concentration in the quenched region, since changing $q$ does not shift the intersection points much.
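Equation~(\ref{eq:qurnchedc}) is implicit in $c$, but for the stable upper branch it can be solved by simple fixed-point iteration, since the map is contracting there. A minimal sketch (the function name and the starting point are our own choices):

```python
def stationary_c_quenched(q, p_bar, c_start=0.99, n_iter=2000):
    """Iterate c -> p_bar/2 + (1 - p_bar) * c^q / (c^q + (1-c)^q),
    the implicit relation for the stationary up-spin concentration
    in the quenched regime, starting near consensus."""
    c = c_start
    for _ in range(n_iter):
        c = 0.5 * p_bar + (1.0 - p_bar) * c**q / (c**q + (1.0 - c)**q)
    return c
```

For $\bar{p}=0.5$ and $q=4$ the iteration settles near $c\approx 0.74$, and moving to $q=5$ shifts it only slightly, consistent with the weak $q$-dependence of the intersection points noted above.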
In the asymmetric case, that is to say, when \textcolor{mycolor}{$u_0^I\neq u_0^C$}, the situation is different. As seen in Fig.~\ref{fig:histogramPanel}, we start from the skewed distribution, i.e., $\bar{p}\neq 0.5$ for small $T$, but along with the increasing level of noise, \textcolor{mycolor}{$f(p)$} becomes more symmetric, and $\bar{p}$ approaches 0.5.
Of course, this happens only to some extent, since for higher $T$, Eq.~(\ref{eq:quenchedDistribution}) no longer approximates the real distribution well.
The direction from which we approach this point depends on the initial utilities and is illustrated by an arrow in Fig.~\ref{fig:phaseDiagramsQVoter}. If \textcolor{mycolor}{$u_0^I<u_0^C$}, the average value of independence will increase to $\bar{p}= 0.5$.
On the other hand, when \textcolor{mycolor}{$u_0^I>u_0^C$}, it will decrease to that value.
Therefore, raising the level of noise should decrease the order for \textcolor{mycolor}{$u_0^I<u_0^C$}, that is, the stationary values of the up-spin concentration tend towards the disordered phase $c=0.5$, and at the same time increase it for \textcolor{mycolor}{$u_0^I>u_0^C$}; see Fig.~\ref{fig:phaseDiagramsQVoter}.
The second case is particularly interesting because greater noise enhances the ordering in the system, which is atypical.
\begin{figure}[!t]
\centerline{\epsfig{file=fig4.eps}}
\caption{\label{fig:phaseDiagramsQVoter} Phase diagrams for the $q$-voter model in the (a) quenched and (b) annealed approaches. Gray lines indicate the stationary values of the up-spin concentration given by Eqs.~(\ref{eq:qurnchedc}) and (\ref{eq:annealedc}); continuous and dashed ones refer to stable and unstable states, respectively.
(a) For the quenched case, when the initial utilities are equal, we have $\bar{p}=0.5$ (dotted vertical line) for all $T$. Lower values of the average level of independence are related to the situation in which \textcolor{mycolor}{$u_0^I<u_0^C$}. Higher values, on the other hand, correspond to the utilities \textcolor{mycolor}{$u_0^I>u_0^C$}. For these asymmetric initial conditions, arrows illustrate how $\bar{p}$ will change when we increase the noise $T$ in the system.
(b) For the annealed case, the average level of independence in the system amounts to $\bar{p}=0.5$ (dotted vertical line), and the only stable value of the stationary up-spin concentration is $c=0.5$.
Note that for the annealed case, $\bar{p}$ corresponds to the probability of an agent being independent in a given time step whereas for the quenched one, $\bar{p}$ refers to the fraction of agents in the system that are independent all the time.}
\end{figure}
In the annealed region, the probability density function \textcolor{mycolor}{$f(p)$} concentrates around one point $\bar{p}$; see Figs.~\ref{fig:histogramSym} and \ref{fig:histogramPanel}. Therefore, it can be approximated by a single Dirac delta function at this point:
\begin{equation}
\label{eq:annaledDistribution}
f(p)=\delta(p-\bar{p}).
\end{equation}
Once again, $\bar{p}$ refers to the average level of independence in the system. We know that in our case $\bar{p}=0.5$, but we can consider a more general value.
Such an annealed version of the $q$-voter model was already considered in \cite{Nyc:Szn:Cis:12}, and in this case the stationary up-spin concentration takes the following form:
\begin{equation}
\label{eq:annealedc}
c=\frac{(1-\bar{p})c^q+\frac{\bar{p}}{2}}{(1-\bar{p})\left[c^q+(1-c)^q\right]+\bar{p}}.
\end{equation}
We present the above dependency in Fig.~\ref{fig:phaseDiagramsQVoter} for different values of $q$.
As seen, the only stationary value of the up-spin concentration that can be attained for $\bar{p}=0.5$ is $c=0.5$.
This means that for large values of $T$, in the annealed regime, our system is disordered.
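The annealed condition, Eq.~(\ref{eq:annealedc}), can be solved numerically in the same fixed-point fashion. In a minimal sketch (the function name and starting point are our own choices), an iteration started near consensus collapses to the disordered value $c=0.5$ for $\bar{p}=0.5$, while for a sufficiently small $\bar{p}$ an ordered branch survives:

```python
def stationary_c_annealed(q, p_bar, c_start=0.99, n_iter=2000):
    """Iterate the annealed stationary condition
    c -> [(1-p_bar) c^q + p_bar/2] / [(1-p_bar)(c^q + (1-c)^q) + p_bar]."""
    c = c_start
    for _ in range(n_iter):
        num = (1.0 - p_bar) * c**q + 0.5 * p_bar
        den = (1.0 - p_bar) * (c**q + (1.0 - c)**q) + p_bar
        c = num / den
    return c
```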
\begin{figure}[!t]
\centerline{\epsfig{file=fig5.eps}}
\caption{\label{fig:phaseDiagrams} Phase diagrams for the $q$-voter model with extreme value memory and exponentially distributed utilities with parameter $\lambda=1$. \textcolor{mycolor}{Initially, all spins are up, so $c(0)=1$.} Dots represent the outcome of Monte Carlo simulations of the system containing $N=10^4$ agents. The results are averaged over $10^2$ runs and collected after $10^3$ MCS. Dotted lines are drawn just to guide the eye. (a) The model with no bias in the initial values of the utilities, i.e., \textcolor{mycolor}{$u_0^I=u_0^C=0$} for different $q$-panel sizes. (b) The model with the group of influence comprised of $q=4$ agents and different initial conditions for the utilities \textcolor{mycolor}{$(u_0^I; u_0^C)$}.}
\end{figure}
Our theoretical predictions agree with the Monte Carlo simulations of the system. The stationary value of the up-spin concentration as a function of the noise level $T$ is presented in Fig.~\ref{fig:phaseDiagrams} \textcolor{mycolor}{ for the initial value of the up-spin concentration $c(0)=1$}. Phase diagrams in the left panel correspond to the model with the same initial values of both utilities \textcolor{mycolor}{$u_0^I=u_0^C=0$} but different sizes of the group of influence. In the right panel, on the other hand, we see how the results change if we introduce a bias in these initial conditions for the fixed group size $q=4$.
For the cases where \textcolor{mycolor}{$u_0^I=u_0^C$}, a transition between an ordered phase at low noise levels and a disordered phase at high noise levels is observed.
Moreover, the number of agents in the $q$-panel does not much affect the up-spin concentration in the quenched regime, i.e., in the region where the noise $T$ is sufficiently small.
When \textcolor{mycolor}{$u_0^I\neq u_0^C$}, depending on which utility is \textcolor{mycolor}{greater}, we can control the value of the up-spin concentration for small $T$.
For \textcolor{mycolor}{$u_0^I< u_0^C$}, the concentration of agents with a positive opinion decreases with increasing noise. For the opposite case, \textcolor{mycolor}{$u_0^I> u_0^C$}, the situation is very different. At first, the concentration counterintuitively rises with the rising level of noise, but only up to a point, after which it starts decreasing. As predicted, the first phase corresponds to the quenched regime, where larger noise makes the bimodal distribution of $p$ more symmetric, so $c$ rises. Nonetheless, for large enough $T$ the distribution eventually changes its shape to unimodal in the annealed regime, and $c$ decreases.
\section{Conclusions}
\label{sec:conclusion}
Which of these two approaches, situation-oriented or person-oriented, is more reasonable? The answer to this question was the subject of a long-standing person--situation debate \cite{Don:Luc:Fle:09}. A particularly illustrative metaphor related to this issue has been given by one of the most creative social psychologists, Richard E. Nisbett, who claims that behavior can be predicted much better from the social setting than from personality traits \cite{Nis:80}. However, the metaphor that he used, as well as modern personality psychology, suggests that both approaches are reasonable depending on the investigated problem. Let us quote here Nisbett's metaphor, which was aimed to convince people why believing that personal traits can predict an individual's behavior is naive and resembles ancient beliefs about the physical world: ``\textit{(...) in ancient physics, the behavior of objects was understood exclusively in terms of the object itself: A stone sinks when placed in water because it has the property of ``gravity''; a piece of wood floats because it has the property of ``levity''. In modern physics, in contrast, an understanding of the behavior of objects requires simultaneous specification of the environmental forces, the properties of the object, and the relation between the environmental forces and the properties of the object.}''. As physicists, we know that a stone and a piece of wood indeed fall differently in water because of their individual traits, such as the density of the material and the size of the object. However, in air these differences will be less visible, and in vacuum both objects will fall identically. Such a metaphor may seem too far-reaching, yet social psychologists have shown that the situation may often completely overcome personality \cite{Mye:10,San:10,Nis:80}. However, it all depends on the external conditions.
In this paper we have shown that, indeed, depending on the external factor related to the social temperature $T$, we can observe one of two types of behavior: person- or situation-oriented. We have developed the $q$-voter model with independence, incorporating the idea of memory in the spirit of extreme value memory \cite{Har:15}. Initially the system is homogeneous, which means that all agents have the same tendencies \textcolor{mycolor}{$u_0^I$} for being independent and \textcolor{mycolor}{$u_0^C$} for being conformists. However, due to the memories of past experiences related to each type of social response, they may acquire personal traits if the level of social temperature is below a critical value. We may relate the ratio between \textcolor{mycolor}{$u_0^I$ and $u_0^C$} to one of Hofstede's most important cultural dimensions \cite{Hof:Hof:Min:10}, namely individualism vs. collectivism (IDV). A particularly interesting relation between public opinion and social temperature $T$ has been found for initially individualistic societies \textcolor{mycolor}{($u_0^I>u_0^C$)}. It turns out that in such a society there is an optimal $T$ at which the agreement in the society is highest. For low $T$, there is complete disagreement, i.e., the number of positive and negative opinions is the same ($c=1/2$). Above a certain threshold a majority opinion appears ($c\neq1/2$), and above the critical social temperature $T_c$ the agreement again decreases to $c=1/2$ (a stalemate situation). For initially collective societies \textcolor{mycolor}{($u_0^I<u_0^C$)}, \textcolor{mycolor}{the agreement in the society} decreases monotonically and reaches a stalemate above the critical social temperature $T_c$.
Spontaneous appearance of heterogeneity related to different types of social response, rational and inflexible, has already been observed in \cite{Mar:Gal:13}. The authors combined the Galam unifying frame (GUF) \cite{Gal:05} with the CODA formalism proposed by Martins \cite{Mar:08}, in which each agent is described by two variables: a continuous private opinion and a related discrete choice. In the CODA model an individual's continuous opinion was used to measure how certain each agent was about its decision, and therefore inflexibility could naturally occur as a consequence of a very strong private opinion. In our model agents are described by a single binary variable, and the heterogeneity appears as a consequence of past experiences related to the type of behavior, analogously to \cite{Har:15}.
The extreme value memory is certainly not the only way to incorporate the idea of memory into the $q$-voter model, although it is particularly interesting because of the empirical justification underlying the assumption, as well as the interesting outcome, which can also be nicely interpreted in terms of social psychology. Nevertheless, it would be worth checking how other types of memory would influence opinion formation under the $q$-voter model, and we believe that this is an interesting task for future studies.
\section*{Acknowledgements}
This work was supported by funds from the National Science Center (NCN, Poland) through grants no. 2016/23/N/ST2/00729 and no. 2016/21/B/HS6/01256.
\section*{References}
\bibliographystyle{elsarticle-num}
\section{Introduction}
Data-driven machine translation (MT) systems depend on the text domain of their training data. In a typical in-domain MT scenario the amount of parallel text from a single domain is not enough to train a good translation system, even more so for neural machine translation \nocite{bahdanau} (NMT; Bahdanau et al., 2014); thus models are commonly trained on a mixture of parallel texts from different domains and then fine-tuned on in-domain texts \cite{finetuning}.
In-domain fine-tuning has two main shortcomings: it depends on the availability of sufficient amounts of in-domain data in order to avoid overfitting and it results in degraded performance for all other domains. The latter means that for translating multiple domains one has to run an individual NMT system for each domain.
In this work we treat text domains as distinct languages: for example, instead of English-to-Estonian translation we see it as translating English news to Estonian news. We test two multilingual NMT approaches \cite{Johnson:17,Tiedemann:17} in a bilingual multi-domain setting and show that both outperform single-domain fine-tuning on all the text domains in our experiments.
However, this only works when the text domain is known both at training and at translation time. In some cases the text domain of the input segment is unknown -- for example, web MT systems have to cope with a variety of text domains. Also, some parallel texts do not have a single domain, as they are either a mix of texts from different sources (like crawled corpora) or naturally constitute a highly heterogeneous mix of texts (like subtitles or Wikipedia articles).
We address these issues by replacing known domains with automatically derived ones. At training time we cluster parallel sentences and then apply the multi-domain approach to these clusters. At translation time, the input segments are classified as belonging to one of these clusters and translated with this automatically derived information.
In the following we review related work in Section~\ref{sctRelWork}, then present our methodology of multi-domain NMT and sentence clustering in Section~\ref{sctModels}. After that, we describe our experiments in Sections~\ref{sctExp1} and \ref{sctExp2} and discuss the results in Section~\ref{sctAnalysis}. Section~\ref{sctConclusions} concludes the paper.
\section{Related Work}
\label{sctRelWork}
The baseline to which we compare our work is fine-tuning NMT systems to a single text domain \cite{finetuning}. There, the NMT system is first trained on a mix of parallel texts from different domains and then fine-tuned via continued training on just the in-domain texts. The method shows improved performance on in-domain test data but degrades performance on other domains.
In \cite{politeness_sennrich} the NMT system is parametrized with one additional input feature (politeness), which is included as part of the input sequence, similarly to one of our two approaches (the domain tag approach). However, their goal is different from ours.
In \cite{domain_control_kobus} additional word features are used for specifying the text domain, together with the same approach as \cite{politeness_sennrich}. Although both methods overlap with the first part of our work (domain features and domain tags), they only test these methods on pre-specified domains, while we include automatic domain clustering and identification. Also, they use in-domain trained NMT systems as baselines even for small parallel corpora and experiment with a different NMT architecture. Finally, their results show very modest improvements, while in our case the improvements are much greater.
Other approaches also define a mixture of domains, for example \cite{effective_mixing_britz,topic_aware_chen}. However, both define custom NMT methods and also limit the experiments to the cases where the text domain is known.
\section{Methodology}
\label{sctModels}
In the following we describe two different approaches to treating text domains as distinct languages and using multilingual methods, resulting in multi-domain NMT models. The first approach is inspired by Google's multilingual NMT~\cite{Johnson:17} and the second one by cross-lingual language models~\cite{Tiedemann:17}.
Then we describe our methods of unsupervised domain segmentation used in our experiments in comparison with the pre-specified text domains.
\subsection{Domain as a Tag}
The first approach is based on \cite{Johnson:17}. Their method of multilingual translation is based on
training the NMT model on data from multiple language pairs, while appending a token specifying the target language to the beginning of the source sequence. No changes to the NMT architecture are required with this approach. They show that the method improves NMT for all languages involved; as an additional benefit, there is no increase in the number of parameters, since all language pairs are included in the same model.
We adapt the language tag approach to text domains, appending the domain ID to each source sentence; thus, for instance, ``How you doin' ?'' from OpenSubtitles2016
\cite{opsubs}
becomes ``\_\_OpenSubs How you doin' ?''.
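This preprocessing step can be sketched in a few lines of Python (the helper name is ours):

```python
def prepend_domain_tag(sentence, domain):
    """Prepend a pseudo-token identifying the text domain,
    in the style of the multilingual target-language token."""
    return "__{} {}".format(domain, sentence)

print(prepend_domain_tag("How you doin' ?", "OpenSubs"))
# __OpenSubs How you doin' ?
```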
The described method has two advantages. Firstly, it is independent of the NMT architecture, and scaling to more domains means simply adding data for those domains. We can assign a domain to each sentence pair of the training set, or set the domain to ``other'' for sentences whose domain we cannot or do not want to identify.
Secondly, in a multilingual NMT model, all parameters are implicitly shared by all the language pairs being modeled. This forces the model to generalize across language boundaries during training. It has been observed that when language pairs with little available data and language pairs with abundant data are mixed into a single model, translation quality on the low-resource language pair improves significantly.
We expect this to be even more useful for text domains. Traditional tuning to a low-resource domain, or to any specific domain for that matter, is likely to over-fit to that domain. Our approach, where all parameters are shared, learns target-domain representations without harming the results on other domains, while maintaining the ability to generalize on in-domain translation, because little to no over-fitting occurs. Furthermore, since domains are much more similar than languages, we expect the parameter sharing to have a stronger effect.
\subsection{Domain as a Feature}
The second approach is based on \cite{Tiedemann:17} for continuous multilingual language models. The authors propose to use a single RNN model with language vectors that indicate which language is used. As a result, each language gets its own embedding, yielding a language model with a predictive distribution $p(x_t\vert x_{1...t-1}, l)$ that is a continuous function of the language vector $l$.
In our approach the same idea is implemented via word features of Nematus \cite{Sennrich:17},
with their learned embeddings replacing the language vector of \cite{Tiedemann:17}. For example, translating ``This is a sentence .'' into the Estonian Wikipedia domain would mean an input of ``This$\vert$2wi is$\vert$2wi a$\vert$2wi sentence$\vert$2wi .$\vert$2wi''.\footnote{The ``$\vert$'' is a special symbol in Nematus for delimiting input features.}
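Formatting the input with a per-word domain feature can be sketched as follows (the helper name is ours; Nematus parses the `|`-separated factors internally):

```python
def add_domain_feature(sentence, domain_feature):
    """Attach the same domain feature to every token,
    using the Nematus word-feature syntax token|feature."""
    return " ".join(tok + "|" + domain_feature for tok in sentence.split())

print(add_domain_feature("This is a sentence .", "2wi"))
# This|2wi is|2wi a|2wi sentence|2wi .|2wi
```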
Having a single language model learn several languages helps similar languages improve each other's representations \cite{Tiedemann:17}. The authors also point out that this greatly alleviates the problem of sparse data for smaller languages. We expect the same effect for text domains, especially since the similarity between different domains of the same language is higher than between different languages. Moreover, similarly to the domain tag approach, the usage of many domains in one model helps bypass the over-fitting problem of smaller domains.
\subsection{Automatic domain tags}
Here we define the domain of each source--target sentence pair automatically. We take two different approaches to achieve the annotation.
\paragraph{Supervised approach} is used only in the single-domain setting. It involves assigning categories to roughly 10,000 Wikipedia articles for which this could be done with high certainty. Assigning categories to more articles is problematic, because the categories assigned in Wikipedia can often be misleading in terms of content. Next, we tag each sentence with the article category.
After tagging the sentences, we train a FastText~\cite{fasttext1,fasttext2} classification model with default settings and apply it to classify the rest of the sentences that were not covered by the article categories. Test/dev set sentences are tagged using the same FastText model that is used to classify the training data.
\paragraph{Unsupervised approach} is applied to sentence-split data. In the case of multi-domain data we still treat it as single-domain data with no knowledge of its domain structure. In this approach, we train a model and calculate sentence vectors in an unsupervised manner using sent2vec~\cite{sent2vec}. After that, we apply KMeans clustering to identify the clusters in the set of calculated sentence vectors. Finally, we tag each sentence with the label assigned to it by KMeans. To find the optimal number of clusters, we create several versions with different numbers of clusters.
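The clustering step can be sketched with a minimal Lloyd's k-means over sentence vectors (in our experiments the vectors come from sent2vec and the clustering from KMeans; the toy 2-D vectors and the function below are purely illustrative):

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal Lloyd's k-means: returns one cluster label per vector."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        labels = [min(range(k),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(v, centroids[j])))
                  for v in vectors]
        # update step: move each centroid to the mean of its members
        for j in range(k):
            members = [v for v, l in zip(vectors, labels) if l == j]
            if members:
                centroids[j] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return labels

# Toy "sentence vectors": two well-separated groups
vecs = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
labels = kmeans(vecs, k=2)
# Each sentence is then tagged with its cluster label before NMT training.
```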
To tag the test/dev set sentences, we train a FastText~\cite{fasttext1,fasttext2} supervised classification model on the tagged training set. For each of the cluster versions and for each language pair, we train a separate FastText model. An additional benefit of this kind of clustering is that each new input sentence can be efficiently assigned to its cluster. Also, because the training-set clusters are potentially more homogeneous, a new sentence is likely to be assigned a more appropriate domain than it would be with the pre-defined domains.
The potential benefit of the unsupervised approach over the supervised approach is that it does not assume any prior knowledge of the data, and thus the domain structure does not rely on a potentially faulty pre-defined domain structure. This in turn allows the multi-domain translation approach to be applied to any data without knowledge of its domain structure.
\section{Experiments with Known Domains}
\label{sctExp1}
In the experiments we use mixed-domain parallel data consisting of Europarl \cite{euparl}, OpenSubtitles2016 \cite{opsubs}, parallel data extracted from English-Estonian Wikipedia articles and some more mixed parallel corpora from the OPUS collection \cite{opsubs}. The size of the corpora is shown in Table \ref{fig:data_size}. For each corpus we use a randomly chosen and held-out test set of 3000 parallel sentences.
\begin{table} [htbp]
\centering
\begin{tabular}{| l | c | c | c |}
\hline
\textbf{Corpus} & \textbf{Sents} & \textbf{EN tok} & \textbf{ET tok} \\ \hline \hline
\textbf{Opensubs} & 10.32 & 83.57 & 67.56 \\ \hline
\textbf{Europarl} & 0.644 & 17.18 & 12.82 \\ \hline
\textbf{Wiki} & 0.135 & 2.281 & 2.089 \\ \hline
\textbf{Other} & 7.972 & 169.9 & 143.5 \\ \hline \hline
\textbf{Total} & 19.07 & 272.9 & 225.9\\ \hline
\end{tabular}
\caption{Data sizes for the training data. The number of tokens (tok) is given pre-BPE. All numbers are in millions.}
\label{fig:data_size}
\end{table}
\subsection{Mining Wikipedia for Translations}
Wikipedia\footnote{\url{http://www.wikipedia.org/}} itself is a big set of articles. The articles have two properties that are extremely useful for our task. Firstly, articles link to articles on the same topic in other languages, which makes it easier to find comparable data from which to extract parallel data. Secondly, each article has one or several categories attached to it. This means that, hypothetically, we can assign domain(s) to at least some of the articles based on these categories.
To extract meaningful text from the Wikipedia XML dumps, we used the WikiExtractor tool\footnote{\url{https://github.com/attardi/wikiextractor}}. The data is extracted in a way that preserves article and paragraph boundaries. The extraction is done separately for the English and Estonian versions.
After extracting text from the dumps, another custom-made solution is applied to detect parallel articles. The number of Wikipedia articles in English is well over 5 million, whereas for Estonian it is just over 100 thousand. We keep all Estonian articles and only those English articles that have a parallel article in Estonian. This leaves us with roughly 70 thousand English articles.
The parallel articles form a comparable corpus: we know that the articles are parallel in terms of topics but not in sentences.
To extract parallel sentences from parallel articles, we used the LEXACC tool~\cite{LEXACC}, which is a part of the ACCURAT toolkit~\cite{ACCURAT1,ACCURAT2}.
Parallel sentence identification also preserves the information about the article of origin, which means that direct domain assignment is possible. The identification process also assigns a score to each sentence pair, which allows us to create parallel sets with different grades of purity. The optimal grade of purity produced 340 thousand parallel sentences. The size of the Estonian Wikipedia in total is 2.8 million sentences. Back-translation is applied to the remaining 2.5 million sentences to extend the Wikipedia dataset for the EN--ET direction; the back-translated sentences are also filtered based on attention weights~\cite{attfilter} with a 50\% threshold.
\subsection{Technical Settings}
We apply BPE segmentation \cite{bpe} in a joint learning scenario, learning from both the input and the output,
limiting the vocabulary to 65,000 entries. The acquired segmentation mostly corresponds to linguistic intuition for frequent tokens (which are left intact) and medium-frequency tokens (which are split into compound parts or into stems and endings); low-frequency tokens (including names and numeric tokens) are split into letters and letter pairs.
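The merge-learning core of BPE can be sketched as follows (a simplified illustration of the reference algorithm of \cite{bpe}; the toy vocabulary is the standard textbook example, not our data):

```python
import re
import collections

def get_pair_stats(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = collections.Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[symbols[i], symbols[i + 1]] += freq
    return pairs

def apply_merge(pair, vocab):
    """Replace every occurrence of the symbol pair with its merge."""
    pattern = re.compile(r'(?<!\S)' + re.escape(' '.join(pair)) + r'(?!\S)')
    return {pattern.sub(''.join(pair), w): f for w, f in vocab.items()}

# Words are pre-split into characters, with an end-of-word marker.
vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
         'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
merges = []
for _ in range(5):
    stats = get_pair_stats(vocab)
    best = max(stats, key=stats.get)   # most frequent pair
    vocab = apply_merge(best, vocab)
    merges.append(best)
```

On this toy vocabulary the first learned merges are `e+s` and `es+t`, i.e., the frequent subword `est` is built up greedily.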
The NMT model we use is encoder-decoder with an attention mechanism \cite{bahdanau},
implemented in Nematus \cite{Sennrich:17}.
All settings (like embedding size, number of recurrent layers in encoder and decoder, etc.) are kept at their default values. Batch size in experiments is 50 sequences.
\subsection{Results}
For the \textbf{Baseline} experiment we first train a baseline model on all the datasets combined, and use it for translation. Then, in the \textbf{Tuned} approach, we fine-tune the \textbf{Baseline} model to each corpus separately.
For the comparability of the results, the number of iterations during training (800,000) and the input parameters are kept equal for \textbf{Baseline}, \textbf{Tag}, and \textbf{Feat}. The tuning of \textbf{Baseline} is done for an additional 60,000 iterations. One iteration means one batch seen during training.
Tables~\ref{fig:ET-EN_btrans} and \ref{fig:EN-ET_btrans} show the BLEU scores \cite{bleu} and the p-values of the statistical significance of their difference for Baseline, fine-tuned baseline, domain tags, and domain feature approaches.
\begin{table*} [htbp]
\centering
\begin{tabular}{| l | c | c | c | c |}
\hline
\textbf{Corp} & \textbf{Baseline} & \textbf{Tuned} & \textbf{Tag} & \textbf{Feat} \\ \hline
\textbf{Eu} & 33.0$\pm$0.3 & 35.4$\pm$0.3 & 36.2$\pm$0.3 & 37.3$\pm$0.3 \\ \hline
\textbf{Op} & 27.9$\pm$0.6 & 28.1$\pm$0.6 & 30.5$\pm$0.6 & 30.3$\pm$0.6 \\ \hline
\textbf{Wi} & 15.3$\pm$0.4 & 15.4$\pm$0.4 & 16.9$\pm$0.4 & 17.7$\pm$0.4 \\ \hline
\hline
\textbf{Corp} & \textbf{Baseline} & \textbf{Tuned} & \textbf{Tag} & \textbf{Feat} \\ \hline
\textbf{Eu} & 0.0001 \textbf{/} 0.0001 & 0.009 \textbf{/} 0.0001 & - \textbf{/} 0.0001 & 0.0001 \textbf{/} - \\ \hline
\textbf{Op} & 0.0001 \textbf{/} 0.0001 & 0.0001 \textbf{/} 0.0001 & - \textbf{/} 0.1 & 0.1 \textbf{/} - \\ \hline
\textbf{Wi} & 0.0001 \textbf{/} 0.0001 & 0.0001 \textbf{/} 0.0001 & - \textbf{/} 0.001 & 0.001 \textbf{/} - \\ \hline
\end{tabular}
\caption{BLEU scores and p-values for Estonian-English direction. \textbf{Baseline} model is trained without domain tags. \textbf{Tuned} is achieved by tuning these models with the specific corpus. \textbf{Tag} is trained with data that has domain tag prepended to each source sentence. \textbf{Feat} is trained with data that has domain embedding added as a feature to each source sequence word. p-values are given for significance against \textbf{Tag} and \textbf{Feat} respectively, separated with \textbf{/}. }
\label{fig:ET-EN_btrans}
\end{table*}
\begin{table*} [htbp]
\centering
\begin{tabular}{| l | c | c | c | c |}
\hline
\textbf{Corp} & \textbf{Baseline} & \textbf{Tuned} & \textbf{Tag} & \textbf{Feat} \\ \hline
\textbf{Eu} & 22.5$\pm$0.3 & 25.3$\pm$0.3 & 25.4$\pm$0.3 & 24.9$\pm$0.3 \\ \hline
\textbf{Op} & 24.2$\pm$0.6 & 24.5$\pm$0.6 & 24.8$\pm$0.6 & 25.3$\pm$0.6 \\ \hline
\textbf{Wi} & 11.8$\pm$0.4 & 12.1$\pm$0.4 & 12.5$\pm$0.3 & 12.8$\pm$0.4 \\ \hline
\hline
\textbf{Corp} & \textbf{Baseline} & \textbf{Tuned} & \textbf{Tag} & \textbf{Feat} \\ \hline
\textbf{Eu} & 0.0001 \textbf{/} 0.0001 & 0.3 \textbf{/} 0.04 & - \textbf{/} 0.04 & 0.04 \textbf{/} - \\ \hline
\textbf{Op} & 0.01 \textbf{/} 0.001 & 0.09 \textbf{/} 0.03 & - \textbf{/} 0.06 & 0.06 \textbf{/} - \\ \hline
\textbf{Wi} & 0.01 \textbf{/} 0.001 & 0.06 \textbf{/} 0.03 & - \textbf{/} 0.14 & 0.14 \textbf{/} - \\ \hline
\end{tabular}
\caption{BLEU scores and p-values for English-Estonian direction. \textbf{Baseline} model is trained without domain tags. \textbf{Tuned} is achieved by tuning these models with the specific corpus. \textbf{Tag} is trained with data that has domain tag prepended to each source sentence. \textbf{Feat} is trained with data that has domain embedding added as a feature to each source sequence word. p-values are given for significance against \textbf{Tag} and \textbf{Feat} respectively, separated with \textbf{/}. }
\label{fig:EN-ET_btrans}
\end{table*}
As we can see from the results, both models with additional domain information perform very well. The domain tag (\textbf{Tag}) model outperforms both its baseline (\textbf{Baseline}) and tuned (\textbf{Tuned}) counterparts in the ET--EN direction. It even exceeds the \textbf{Tuned} approach by more than 1.0 BLEU in all domains. The same holds, even more strongly, for the version where we add the domain embedding as an input feature for each word (\textbf{Feat}).
For the EN--ET direction the results do not show such strong improvements. In this direction both \textbf{Tag} and \textbf{Feat} outperform \textbf{Baseline} for all domains. However, the scores are quite close to the \textbf{Tuned} approach, with the results of \textbf{Tag} and \textbf{Feat} also being closer than in the ET--EN case. All in all, the fact that the domain tagging results are essentially on par with the \textbf{Tuned} approach makes domain tagging superior in practice, because it requires only one model rather than three.
Table~\ref{fig:ET-EN_ex1} shows an example of the \textbf{ET--EN} translations highlighting some improvements. Since the quality of \textbf{Tuned} is close to \textbf{Tag} and \textbf{Feat}, we omit it from the comparison: the differences would be highly circumstantial and would not carry much information at this small scale.
\begin{table} [htbp]
\centering
\begin{tabular}{| l | l |}
\hline
\textbf{Src} & \textit{vastuseid saab muidugi olla ainult üks} \\
(ET) & \textit{: lõpetada kohe igasugused} \\
& \textit{läbirääkimised Türgiga .}\\ \hline
\textbf{Ref} & \textit{there is , of course , only one possible} \\
(EN) & \textit{response : to immediately cease all} \\
& \textit{negotiations with Turkey .}\\ \hline
\textbf{Base} & \textit{only one can only be one : stop any} \\
(EN)& \textit{negotiations with Turkey immediately .}\\ \hline
\textbf{Tag} & \textit{the answer , of course , can only be} \\
(EN) & \textit{one : stop all the negotiations with} \\
& \textit{Turkey immediately .}\\ \hline
\textbf{Feat} & \textit{there is , of course , only one answer :} \\
(EN) & \textit{to put an end to all negotiations with} \\
& \textit{Turkey immediately .}\\ \hline
\end{tabular}
\caption{An example of Europarl corpus translations from Estonian to English using the Baseline, \textbf{Tag} and \textbf{Feat} methods.}
\label{fig:ET-EN_ex1}
\end{table}
\section{Experiments with Automatic Domains}
\label{sctExp2}
Since the results on the full parallel data show that both multi-domain approaches are on par with, or superior to, the single-domain baseline, we apply the methods in a setting where we do not assume prior knowledge of the domain of origin of the source sentences. Here we take the domain tagging approach: even though domain features show better results, domain tags are more generic and compatible with any NMT architecture.
We experiment with two data settings. In the first one, we have a single heterogeneous text domain. We explore both supervised and unsupervised tagging of single text domain based on sentence vectors.
In the second one, we have texts from several domains but we ignore the pre-specified text domains and replace them with automatic clustering based on sentence embeddings.
\subsection{Automatic single-domain tagging}
To choose the best setting for the unsupervised approach, we do a small sweep over input data versions. We search for the best number of clusters by training a model for each number of clusters. The input data for this is the whole Wikipedia corpus. The models are trained for 12 hours, which should be sufficient for them to differentiate enough to choose the best number of clusters. We also train a regular model without data clustering for reference.
It is important to note that for this experiment a different test set was used than in the full data experiments. Thus the scores in Table~\ref{fig:Wiki_sweep} are not comparable to the scores presented earlier.
\begin{table*} [htbp]
\centering
\begin{tabular}{| l | c | c | c | c | c | c |}
\hline
\textbf{NClust} & \textbf{C4} & \textbf{C5} & \textbf{C6} & \textbf{C8} & \textbf{C12} & \textbf{Ref}\\ \hline
\textbf{BLEU} & 19.7 & 19.5 & 19.6 & 19.5 & 20.0 & 17.9 \\ \hline
\end{tabular}
\caption{BLEU scores for Unsupervised Wikipedia parameter setting.}
\label{fig:Wiki_sweep}
\end{table*}
The initial sweep indicates that the best option for the unsupervised classification is 12 clusters. Also, after the 12 hours (roughly 100,000 iterations) the effect that domain tagging has over the regular reference approach is already visible, making the other clusterings viable choices as well.
\subsubsection*{Wikipedia Translation Results}
In the final experiment, four models were trained:
\begin{itemize}
\item Supervised 5-domain source tag model
\item Unsupervised 5-domain source tag model
\item Unsupervised 12-domain source tag model
\item Regular not domain-tagged model
\end{itemize}
The Unsupervised 5-domain model was included to compare the performance of the supervised and unsupervised approaches with the same number of domains, giving an indication of the ``goodness'' of these cluster assignments. The Unsupervised 12-domain model was included to compare the performance of the best unsupervised clustering with the intuitively optimal supervised clustering. A supervised 12-domain model is not presented because we were not able to produce such a reasonable structure from the Estonian Wikipedia. The results are presented in Table~\ref{fig:Wiki_final}. The models were trained for 48 hours.
\begin{table*} [htbp]
\centering
\begin{tabular}{| l | c | c | c | c |}
\hline
\textbf{NClust} & \textbf{Usup12} & \textbf{Usup5} & \textbf{Super} & \textbf{Ref}\\ \hline
\textbf{BLEU} & 26.0$\pm$0.4 & 25.2$\pm$0.4 & 25.8$\pm$0.4 & 23.6$\pm$0.4 \\ \hline
\textbf{pU12} & - & 0.01 & 0.1 & 0.0001 \\ \hline
\textbf{pU5} & 0.01 & - & 0.03 & 0.0001 \\ \hline
\textbf{pSup} & 0.1 & 0.03 & - & 0.0001 \\ \hline
\textbf{pRef} & 0.0001 & 0.0001 & 0.0001 & - \\ \hline
\end{tabular}
\caption{BLEU scores and p-values for test on Wikipedia-only data to compare the effect of Unsupervised clustering (\textbf{Usup12, Usup5}), supervised clustering (\textbf{Super}) and no-clustering approach (\textbf{Ref}). The p-values are shown in respect to the version where the value is \textbf{-}.}
\label{fig:Wiki_final}
\end{table*}
As we see in Table~\ref{fig:Wiki_final}, the supervised approach (\textbf{Super}) with five clusters slightly outperforms the unsupervised 5-cluster approach (\textbf{Usup5}). The best option for unsupervised clustering (\textbf{Usup12}) performs as well as the supervised approach. The results show that the unsupervised approach is comparable in performance to the supervised approach, which means that, at least in this setting, both approaches are viable. Moreover, since obtaining labelled data for supervised clustering can often require a lot of additional effort, the unsupervised approach is not constrained by the lack of pre-existing knowledge about the data.
Most important is the fact that both unsupervised cluster versions outperform the regular reference (\textbf{Ref}) version, where sentence cluster tags were not used. This shows that the unsupervised clustering approach can potentially be used in settings that were previously viewed as single domains. For example, the OpenSubtitles corpus could be clustered further to improve the translations.
\subsection{Unsupervised multi-domain tagging}
\label{sctExp3}
Hinging on the fact that domain tagging approach outperformed the traditional tuning approach and on the results that unsupervised Wikipedia dataset clustering produced, the "traditional" approach of text domains should be given another look. One possible action is to cluster or sub-cluster the existing parallel data to restructure it from the domain point of view.
Beyond the results on the Wikipedia dataset, our hypothesis for why this should work is that large text domains are probably not very homogeneous, and that different domains likely have a substantial overlap of similar sentences. This would mean that the usual approach of domain tuning or domain tagging does not achieve its full potential, because each predefined domain is \textit{de facto} several domains, and the same underlying domains are also present in other predefined domains.
To check for this property and its potential benefit for NMT, we cluster existing parallel sentences to $n$ clusters in the previously described unsupervised manner, train NMT models with domain tagged sentences, and finally, cluster test set sentences in a supervised manner with a supervised clustering model that is trained on the data that was obtained from unsupervised clustering.
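As a rough illustration of this pipeline, the following minimal NumPy sketch clusters toy sentence vectors with a small k-means loop (standing in for the sent2vec + KMeans setup used in our experiments) and prepends the resulting cluster tag to each source sentence. The `<cluster_k>` token format and all function names here are illustrative assumptions, not the exact implementation:

```python
import numpy as np

def kmeans(vectors, n_clusters, n_iters=20, seed=0):
    # toy k-means: random initial centroids, then alternate
    # assignment and centroid update for a fixed number of iterations
    rng = np.random.default_rng(seed)
    centroids = vectors[rng.choice(len(vectors), n_clusters, replace=False)]
    for _ in range(n_iters):
        dists = np.linalg.norm(vectors[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centroids[k] = vectors[labels == k].mean(axis=0)
    return centroids, labels

def tag_sentences(sentences, labels):
    # prepend the assigned cluster tag token to each source sentence
    return [f"<cluster_{k}> {s}" for s, k in zip(sentences, labels)]
```

The tagged sentences can then be fed to NMT training exactly like pre-defined domain tags.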
The training is done using Nematus with the same settings as in the initial experiment with domain tags. First, we sweep over the number of clusters by training 4, 8, 16, and 32 cluster versions for both the EN--ET and ET--EN directions. We then choose the version that achieves the best BLEU scores on the dev sets for both directions and train it for the same time as in the initial domain tag experiment with full data.
\subsubsection*{Results of unsupervised multi-domain tagging}
To evaluate the model performance, we train supervised FastText classification models on the tagged training data. We apply these models to the test/dev sets to classify the sentences. As a result, each of the sets -- OpenSubs, Europarl, and Wiki -- actually receives tags from several clusters, depending on which cluster the FastText model assigns to each sentence. For each source test set we thus create four different versions, one for each cluster count (4, 8, 16, and 32).
The initial parameter sweep shows that 16 clusters is the best option for both the EN--ET (Table~\ref{fig:Full_sweep_ENET}) and ET--EN (Table~\ref{fig:Full_sweep_ETEN}) directions across all test sets. Hence both final models were trained with 16 clusters.
\begin{table} [htbp]
\centering
\begin{tabular}{| l | c | c | c | c |}
\hline
\textbf{Corp} & \textbf{C4} & \textbf{C8} & \textbf{C16} & \textbf{C32} \\ \hline
\textbf{Eu} & 4.13 & 3.19 & \textbf{5.94} & 4.17 \\ \hline
\textbf{Op} & 9.41 & 9.36 & \textbf{10.80} & 10.62 \\ \hline
\textbf{Wi} & 1.09 & 0.94 & \textbf{1.31} & 0.81 \\ \hline
\end{tabular}
\caption{BLEU scores for the English-Estonian direction sweep. The model is trained on parallel data that is tagged in an unsupervised manner using sent2vec + KMeans clustering. The dev sets are clustered based on this tagged data using FastText. The best scores for each corpus are presented in \textbf{bold}.}
\label{fig:Full_sweep_ENET}
\end{table}
\begin{table} [htbp]
\centering
\begin{tabular}{| l | c | c | c | c |}
\hline
\textbf{Corp} & \textbf{C4} & \textbf{C8} & \textbf{C16} & \textbf{C32}\\ \hline
\textbf{Eu} & 20.48 & 19.88 & \textbf{20.82} & 18.43 \\ \hline
\textbf{Op} & 20.05 & 19.54 & \textbf{20.17} & 20.01 \\ \hline
\textbf{Wi} & 4.61 & 4.38 & \textbf{5.50} & 4.32 \\ \hline
\end{tabular}
\caption{Test set BLEU scores for the Estonian-English direction sweep. The model is trained on parallel data that is tagged in an unsupervised manner using sent2vec + KMeans clustering. The dev sets are clustered based on this tagged data using FastText. The best scores for each corpus are presented in \textbf{bold}.}
\label{fig:Full_sweep_ETEN}
\end{table}
Table~\ref{fig:ClustrStructEN} shows the cluster structure of the OpenSubs test sets, tagged using FastText models trained on the tagged train set. We can see that different train set clusterings also produce different granularity in the test sets. For \textbf{C4} and \textbf{C8} the OpenSubs structure is similar, and the same holds for the other test sets. \textbf{C16}, however, shows a significant difference in test set clustering compared to \textbf{C8}. Here we see that OpenSubs, which based on its content is probably not a homogeneous domain, is separated quite granularly in \textbf{C16}, producing 3--4 main sub-domains. In \textbf{C32} the test set is clustered even further, but judging by the sweep scores, this clustering is already too granular.
\begin{table} [htbp]
\centering
\begin{tabular}{| l | c | c | c | c | c | c |}
\hline
\textbf{Corp} & \textbf{N1} & \textbf{N2} & \textbf{N3} & \textbf{N4} & \textbf{N5} & \textbf{N6}\\ \hline
\textbf{C4} & 2921 & 29 & - & - & - & - \\ \hline
\textbf{C8} & 2907 & 43 & - & - & - & - \\ \hline
\textbf{C16} & 1331 & 1015 & 398 & 181 & 18 & 7 \\ \hline
\textbf{C32} & 1137 & 828 & 356 & 293 & 241 & 71 \\ \hline
\end{tabular}
\caption{Cluster structure of FastText tagged English OpenSubs test sets. The test sets are clustered based on tagged train data. The clusters are numbered left to right based on size. Here only top 6 clusters are shown. For C32 $N7=11$, $N8=6$, $N9=3$, $N10=2$, $N11=2$. Test set structures for Estonian sets are similar.}
\label{fig:ClustrStructEN}
\end{table}
Considering that our OpenSubs corpus is 10 million sentence pairs in size, we can say that \textbf{C16} finds five significant sub-domains and one less significant sub-domain inside it. This shows that, at least from the sentence vectorizing point of view, there exists more than one domain inside OpenSubs, and similar structure likely exists inside other domains.
\begin{table*} [htbp]
\centering
\begin{tabular}{| l | c | c | c | c | c | c | c | c |}
\hline
\textbf{Corp} & \textbf{N1} & \textbf{N2} & \textbf{N3} & \textbf{N4} & \textbf{N5} & \textbf{N6} & \textbf{N7} & \textbf{N8} \\ \hline
\textbf{Train} & 4859672 & 4444177 & 3704753 & 2767889 & 1407228 & 822225 & 711526 & 134301 \\ \hline
\textbf{Corp} & \textbf{N9} & \textbf{N10} & \textbf{N11} & \textbf{N12} & \textbf{N13} & \textbf{N14} & \textbf{N15} & \textbf{N16}\\ \hline
\textbf{Train} & 114778 & 40260 & 22004 & 18165 & 10298 & 9585 & 2492 & 646 \\ \hline
\end{tabular}
\caption{Cluster structure of KMeans tagged English train set for \textbf{C16}. The clusters are numbered left to right based on size. Train set structure for Estonian is similar.}
\label{fig:ClustrTrainEN}
\end{table*}
Looking at the number of clusters present in Table~\ref{fig:ClustrStructEN}, one might notice that fewer clusters are present than were defined. It should be kept in mind that we have three main text sources in the training set plus a fourth, mixed corpus that could be divided into 5--6 parts, so 8--9 text domains in total. Also, as the train set structure of \textbf{C16} in Table~\ref{fig:ClustrTrainEN} shows, some sentences are quite distinct from the rest of the training data. The clustering and its structure are an interesting aspect to examine in future work.
The final results, where the 16-cluster models were trained for the same number of iterations as in the initial full-data experiments, are presented in Tables~\ref{fig:EN-ET_full} and \ref{fig:ET-EN_full} for the EN--ET and ET--EN directions respectively.
\begin{table*} [htbp]
\centering
\begin{tabular}{| l | c | c | c | c |}
\hline
\textbf{Corp} & \textbf{Baseline} & \textbf{Tuned} & \textbf{Tag} & \textbf{Unsup} \\ \hline
\textbf{Eu} & 22.5$\pm$0.3 & 25.3$\pm$0.3 & 25.4$\pm$0.3 & 24.5$\pm$0.3 \\ \hline
\textbf{Op} & 24.2$\pm$0.6 & 24.5$\pm$0.6 & 24.8$\pm$0.6 & 24.6$\pm$0.6 \\ \hline
\textbf{Wi} & 11.8$\pm$0.4 & 12.1$\pm$0.4 & 12.5$\pm$0.3 & 11.1$\pm$0.4 \\ \hline
\hline
\textbf{Corp} & \textbf{Baseline} & \textbf{Tuned} & \textbf{Tag} & \textbf{Unsup} \\ \hline
\textbf{Eu} & 0.0001 \textbf{/} 0.0001 & 0.3 \textbf{/} 0.03 & - \textbf{/} 0.004 & 0.004 \textbf{/} - \\ \hline
\textbf{Op} & 0.01 \textbf{/} 0.03 & 0.09 \textbf{/} 0.4 & - \textbf{/} 0.2 & 0.2 \textbf{/} - \\ \hline
\textbf{Wi} & 0.01 \textbf{/} 0.01 & 0.06 \textbf{/} 0.005 & - \textbf{/} 0.0001 & 0.0001 \textbf{/} - \\ \hline
\end{tabular}
\caption{Test set BLEU scores and p-values for the English-Estonian direction. The \textbf{Baseline} model is trained without domain tags. \textbf{Tuned} is achieved by tuning these models with the specific corpus. \textbf{Tag} is trained with data that has a domain tag prepended to each source sentence. \textbf{Unsup} is trained with data that has domain tags assigned to each sentence in the previously described unsupervised manner. p-values are given for significance against \textbf{Tag} and \textbf{Unsup} respectively, separated with \textbf{/}.}
\label{fig:EN-ET_full}
\end{table*}
\begin{table*} [htbp]
\centering
\begin{tabular}{| l | c | c | c | c |}
\hline
\textbf{Corp} & \textbf{Baseline} & \textbf{Tuned} & \textbf{Tag} & \textbf{Unsup} \\ \hline
\textbf{Eu} & 33.0$\pm$0.3 & 35.4$\pm$0.3 & 36.2$\pm$0.3 & 36.0$\pm$0.3 \\ \hline
\textbf{Op} & 27.9$\pm$0.6 & 28.1$\pm$0.6 & 30.5$\pm$0.6 & 30.2$\pm$0.6 \\ \hline
\textbf{Wi} & 15.3$\pm$0.4 & 15.4$\pm$0.4 & 16.9$\pm$0.4 & 16.0$\pm$0.4 \\ \hline
\hline
\textbf{Corp} & \textbf{Baseline} & \textbf{Tuned} & \textbf{Tag} & \textbf{Unsup} \\ \hline
\textbf{Eu} & 0.0001 \textbf{/} 0.0001 & 0.009 \textbf{/} 0.01 & - \textbf{/} 0.3 & 0.3 \textbf{/} - \\ \hline
\textbf{Op} & 0.0001 \textbf{/} 0.0001 & 0.0001 \textbf{/} 0.0001 & - \textbf{/} 0.1 & 0.1 \textbf{/} - \\ \hline
\textbf{Wi} & 0.0001 \textbf{/} 0.004 & 0.0001 \textbf{/} 0.009 & - \textbf{/} 0.01 & 0.01 \textbf{/} - \\ \hline
\end{tabular}
\caption{Test set BLEU scores and p-values for the Estonian-English direction. The \textbf{Baseline} model is trained without domain tags. \textbf{Tuned} is achieved by tuning these models with the specific corpus. \textbf{Tag} is trained with data that has a domain tag prepended to each source sentence. \textbf{Unsup} is trained with data that has domain tags assigned to each sentence in the previously described unsupervised manner. p-values are given for significance against \textbf{Tag} and \textbf{Unsup} respectively, separated with \textbf{/}.}
\label{fig:ET-EN_full}
\end{table*}
The results show that the unsupervised clustering approach performs similarly to the pre-defined tag version, which is evidence that unsupervised tagging can serve as a viable alternative to the traditional pre-defined domain approach. Our hypothesis is that this is because the pre-defined domains are less homogeneous in content than the unsupervised clustered "domains". However, this hypothesis should be investigated further to establish its validity and magnitude. Moreover, since the clustering approach is applied essentially out-of-the-box, improved clustering could provide considerable further gains.
All in all, considering that the unsupervised approach allows new sentences to be translated with a potentially more appropriate domain assigned to them, unsupervised tagging can be seriously considered as the go-to approach for multi-domain translation models.
\section{Discussion}
\label{sctAnalysis}
The results from the experiments (EN--ET and ET--EN parallel translation, Wikipedia data translation, and unsupervised sentence tagging) show that both of the chosen multi-domain approaches outperform the regular approaches of uniform translation and domain tuning.
This supports the hypothesis that the parameter sharing effect discussed in Google's zero-shot article also benefits domain translation. The translation scores even outperform the domain-tuning approach, which can be explained by the same parameter sharing: in tuning, we adapt the model to translate sentences characteristic of the target domain, so domain-characteristic sentences get translated very well while less characteristic sentences are neglected. The parameter sharing of the multi-domain approach counters this negative effect through the support of other domains, while the additional domain information still lets the model represent each domain more effectively.
Furthermore, the results indicate that adding the domain as an input feature can have an even stronger effect on translation scores. This shows that concatenating the domain feature embedding with the word embedding at each timestep (essentially remembering the source domain equally throughout the sequence) improves model performance. A possible explanation is that in the tag-prepending case the neural network may "forget" the input tag over longer sequences, weakening its effect.
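The feature concatenation just described can be sketched as follows; this is a minimal NumPy illustration of the idea, not the actual Nematus input-feature implementation:

```python
import numpy as np

def concat_domain_feature(word_embs, domain_emb):
    # word_embs: (T, d_word) word embeddings of a source sentence
    # domain_emb: (d_dom,) embedding of the sentence's domain
    # The domain embedding is repeated at every timestep and
    # concatenated with each word embedding, so the domain signal
    # is equally present throughout the sequence.
    T = word_embs.shape[0]
    tiled = np.tile(domain_emb, (T, 1))
    return np.concatenate([word_embs, tiled], axis=1)
```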
The results also show that for settings where quality matters most, concatenating the domain feature with the word embedding is the more suitable option. However, the score differences from domain tag prepending are not drastic, which means that for the sake of data simplicity, model simplicity and efficiency, the tag-prepending approach may prove the more reasonable of the two for production settings.
Finally, the performance of the unsupervised domain-tagged model indicates that there are grounds to substitute the pre-defined domain approach with automatically assigned domains. The unsupervised approach certainly serves as an improvement in less homogeneous single-domain settings, where the effect of detecting underlying "domains" was shown on the example of Wikipedia.
No less important, the unsupervised tagging approach ensures a better domain assignment for each new sentence and can efficiently incorporate new data from various small domains to fortify each of the learned "domains" (clusters).
It has to be taken into account that the unsupervised clustering in these experiments is applied essentially out-of-the-box, which means that the domain assignments can be improved and the translation scores should improve accordingly.
\section{Conclusions}
\label{sctConclusions}
In this article we tested two approaches to improving multi-domain neural translation: prepending domain tags to source sentences, and adding domain embeddings as an input feature to each source sentence word. We showed that both ways of adding domain information to source sentences in bilingual neural translation improve translation scores considerably compared to both the regular baseline and fine-tuning. In the source sentence tagging case, these improvements can be obtained with mere data manipulation.
We also showed that the domain tagging approach can be successfully coupled with unsupervised sentence clustering to add a "domain dimension" to a previously single-domain corpus. This approach produces better results than using the corpus as a single domain. The results indicate that unsupervised or semi-supervised clustering of training data can be used effectively to improve neural machine translation.
Finally, to bring the two experiments together, we applied unsupervised domain tagging to the full parallel data and showed that it can serve as a viable alternative to the pre-defined domain approach.
For future work the clustering in fully unsupervised tagging approach should be improved to see if this gives a visible improvement in translation scores.
Secondly, a more comprehensive sweep over the number of clusters should be done; it would be interesting to see up to how many clusters the effect persists. This, however, would need more extensive computational resources and should probably be done on a model dataset.
The differences between the two approaches (source sentence tagging and adding domain information as an input feature) deserve a deeper look, in particular their result profiles under different degrees of domain granularity.
Finally, in this work domains are still treated as nominal values; it would be interesting to explore the estimation of domain embeddings at translation time as continuous values.
\section*{Acknowledgements}
This work was supported by the Estonian Research Council grant no. 1226. We are also grateful to the anonymous reviewers for their help in improving this article.
\bibliographystyle{eamt18}
\section{Introduction}
Human action recognition is an important and challenging problem in computer vision research. It plays an important role in many applications, such as intelligent video surveillance, sports analysis and video retrieval. Human action recognition can also help robots to have a better understanding of human behaviors, thus robots can interact with people much better \cite{Poppe2010survey,Weinland2011survey,Aggarwal2011Human}.
Recently, many approaches have been proposed to recognize human actions; their input data types can be broadly divided into two categories: RGB videos \cite{Simonyan2014Two-stream} and 3D skeleton sequences \cite{Du2015Hierarchical}. For RGB videos, spatial appearance and temporal optical flow are generally applied to model the motion dynamics. However, the spatial appearance only contains 2D information, which makes it hard to capture all the motion information, and optical flow generally incurs high computational costs. Compared to RGB videos, Johansson et al.~\cite{johansson1973visual} have shown that 3D skeleton sequences can effectively represent the dynamics of human actions. Furthermore, skeleton sequences can be obtained with the Microsoft Kinect \cite{zhang2012microsoft} and advanced human pose estimation algorithms \cite{cao2017realtime}. Over the years, skeleton-based human action recognition has attracted more and more attention \cite{aggarwal2014human,Du2015Hierarchical,Song2017Attention}. In this paper, we focus on recognizing human actions from 3D skeleton sequences.
For sequential data, recurrent neural networks (RNNs) are powerful at learning temporal dependencies, and much work has successfully applied RNNs to skeleton-based action recognition. A hierarchical RNN \cite{Du2015Hierarchical} is proposed to learn motion representations from skeleton sequences. Shahroudy et al.~\cite{Shahroudy2016NTU} introduce a part-aware LSTM network to further improve the performance of the LSTM framework. To model discriminative features, a spatial-temporal attention model \cite{Song2017Attention} based on LSTM is proposed that focuses on discriminative joints and pays different attention to different frames. Despite the great improvement in performance, two urgent problems remain. First, human behavior is accomplished in coordination with each part of the body. For example, walking requires the legs to step, and it also needs the swing of the arms to keep the body balanced. It is very difficult to capture this high-level spatial structural information within each frame if the concatenation of all body joints is directly fed into the network. Second, these methods utilize RNNs to directly model the overall temporal dynamics of skeleton sequences, and the hidden representation of the final RNN step is used to recognize the action. For long-term sequences, this last hidden representation cannot completely contain the detailed temporal dynamics of the sequence.
In this paper, we propose a novel model with spatial reasoning and temporal stack learning (SR-TSL) for this task, which can effectively address the above challenges. Fig.~\ref{model_pipeline} shows the overall pipeline of our model, which contains a spatial reasoning network (SRN) and a temporal stack learning network (TSLN). First, we propose a spatial reasoning network to capture the high-level spatial structural features within each frame. The body can be decomposed into different parts, e.g. two arms, two legs and one trunk. The concatenation of the joints of each part is transformed into an individual spatial feature with a linear layer. These individual spatial features of the body parts are fed into a residual graph neural network (RGNN) to capture the high-level structural features between the different body parts, where each node corresponds to a body part. Second, we propose a temporal stack learning network to model the detailed temporal dynamics of the sequences, which consists of three skip-clip LSTMs. A long-term sequence is divided into multiple clips. The short-term temporal information of each clip is modeled with an LSTM layer shared among the clips in a skip-clip LSTM layer. When a clip is fed into the shared LSTM, its initial hidden state is initialized with the sum of the final hidden states of all previous clips, which inherits the previous dynamics and maintains the dependency between clips. We further propose a clip-based incremental loss to improve the ability of stack learning, so our model can also effectively handle long-term sequence optimization. Experimental results show that the proposed SR-TSL speeds up model convergence and improves performance.
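The skip-clip initialization described above can be sketched as follows, with a generic `step_fn(x, h)` standing in for the shared LSTM cell (a simplifying assumption for illustration):

```python
import numpy as np

def skip_clip_pass(clips, step_fn, hidden_dim):
    # Shared recurrent layer over M clips: the initial hidden state of
    # each clip is the sum of the final hidden states of all previous
    # clips, so later clips inherit the earlier dynamics.
    finals = []
    for clip in clips:
        h = np.sum(finals, axis=0) if finals else np.zeros(hidden_dim)
        for x in clip:
            h = step_fn(x, h)
        finals.append(h)
    return finals
```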
The main contributions of this paper are summarized as follows:
\begin{enumerate}
\item We propose a spatial reasoning network for each skeleton frame, which can effectively capture the high-level spatial structural information between the different body parts using a residual graph neural network.
\item We propose a temporal stack learning network to model the detailed temporal dynamics of skeleton sequences by a composition of multiple skip-clip LSTMs.
\item The proposed clip-based incremental loss further improves the ability of temporal stack learning, which can effectively speed up convergence and obviously improve the performance.
\item Our method obtains the state-of-the-art results on the SYSU 3D Human-Object Interaction dataset and NTU RGB+D dataset.
\end{enumerate}
\begin{figure}[!t]
\centering
\includegraphics[width=1.\linewidth,height=0.38\linewidth]{./model.eps}
\caption{The overall pipeline of our model which contains a spatial reasoning network and a temporal stack learning network. In the spatial reasoning network, a residual graph neural network (RGNN) is used to capture the high-level spatial structural information between the different body parts. The temporal stack learning network can model the detailed temporal dynamics for skeleton sequence. During training, the proposed model is efficiently optimized with the clip-based incremental losses (CIloss) }
\label{model_pipeline}
\end{figure}
\section{Related Work}
\label{Related Work}
In this section, we briefly review the existing literature that closely relates to the proposed method.
\textbf{\emph{Skeleton based action recognition}} \hspace{3mm}
There have been amounts of work proposed for skeleton-based action recognition, which can be divided into two classes. The first class is to focus on designing handcrafted features to represent the information of skeleton motion. Wang et al.~\cite{Wang2012Mining} exploit a new feature called local occupancy pattern, which can be treated as the depth appearance of joints, and propose an actionlet ensemble model to represent each action. Hussein et al.~\cite{Hussein2013Human} use the covariance matrix for skeleton joint locations over time as a discriminative descriptor for a sequence. Vemulapalli et al.~\cite{Raviteja2014Human} utilize rotations and translations to represent the 3D geometric relationships of body parts in Lie group.
The second class uses deep neural networks to recognize human actions. \cite{Ke2017A,Kim2017Interpretable} exploit Convolutional Neural Networks (CNNs) for skeleton-based action recognition. Recently, most methods utilize Recurrent Neural Networks (RNNs) for this task. Du et al.~\cite{Du2015Hierarchical} first propose an end-to-end hierarchical RNN for skeleton-based action recognition. Zhu et al.~\cite{Zhu2016Co-Occurrence} design a fully connected deep LSTM network with a regularization scheme to learn the co-occurrence features of skeleton joints. An end-to-end spatial and temporal attention model \cite{Song2017Attention} learns to selectively focus on discriminative joints of the skeleton within each input frame and pays different levels of attention to the outputs of different frames. Zhang et al.~\cite{Zhang2017View} exploit a view-adaptive model with an LSTM architecture, which enables the network to adapt to the most suitable observation viewpoints from end to end. A two-stream RNN architecture is proposed in \cite{Wang2017Modeling} to model both temporal dynamics and spatial configurations for skeleton-based action recognition. The most similar work to ours is \cite{Inwoong2017Ensemble}, which proposes ensemble temporal sliding LSTM (TS-LSTM) networks for skeleton-based action recognition, utilizing an ensemble of multi-term temporal sliding LSTM networks to capture short-term, medium-term and long-term temporal dependencies, and even spatial skeleton pose dependency. In this paper, we design a spatial reasoning network and a temporal stack learning network, which capture the high-level spatial structural information and the detailed temporal dynamics of skeleton sequences, respectively.
\textbf{\emph{Graph neural networks}} \hspace{3mm}
Recently, more and more works have applied graph neural networks (GNNs) to graph-structured data; these can be categorized into two broad classes. The first class applies Convolutional Neural Networks (CNNs) to graphs, generalizing the traditional convolution operation. \cite{henaff2015deep,NIPS2015_5954} utilize CNNs in the spectral domain relying on the graph Laplacian, while \cite{Bruna2014Spectral,niepert2016learning} apply the convolution directly on the graph nodes and their neighbors, constructing the graph filters in the spatial domain. Yan et al.~\cite{Yan2018Spatial} are the first to apply graph convolutional neural networks to skeleton-based action recognition. The second class applies recurrent neural networks to every node of the graph. \cite{Scarselli2009graph} proposes to recurrently update the hidden state of each node of the graph. Li et al.~\cite{Li_2017_ICCV} propose a model based on graph neural networks for situation recognition, which can efficiently capture joint dependencies between roles using neural networks defined on a graph. Qi et al.~\cite{Qi_2017_ICCV} use 3D graph neural networks for RGBD semantic segmentation. In this paper, a residual graph neural network is utilized to model the high-level spatial structural information between different body parts.
\section{Overview}
\label{Overview}
In this section, we briefly review the Graph Neural Networks (GNNs), the Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM), which are utilized in our framework.
\subsection{Graph Neural Network}
The Graph Neural Network (GNN) is introduced in \cite{Scarselli2009graph} as a generalization of recursive neural networks that can deal with a more general class of graphs. A GNN can be defined as an ordered pair $G$ = \{$V, E$\}, where $V$ is the set of nodes and $E$ is the set of edges. At time step $t$, the hidden state of the $i$-th ($i \in \{ 1,...,\left| V \right| \}$) node is $\vec{s}_i^t$, and its output is $\vec{o}_i^t$. The set of nodes $\Omega_v$ stands for the neighbors of node $v$.
For a GNN, the input vector of each node $v \in V$ is based on the information contained in the neighborhood of node $v$, and the hidden state of each node is updated recurrently. At time step $t$, the received messages of a node are calculated with the hidden states of its neighbors. Then the received messages and previous state $\vec{s}_i^{t-1}$ are utilized to update the hidden state $\vec{s}_i^t$. Finally, the output $\vec{o}_i^t$ is computed with $\vec{s}_i^t$. The GNN formulation at time step $t$ is defined as follows:
\begin{align}
\label{formu_m} \vec{m}_i^t & = f_m \left( \{ \vec{s}_{\hat i}^{t-1} \mid \hat i \in \Omega_{v_i} \} \right) \\
\label{formu_s} \vec{s}_i^t & = f_s \left( \vec{m}_i^t, \vec{s}_i^{t-1} \right) \\
\label{formu_0} \vec{o}_i^t & = f_o \left( \vec{s}_i^{t} \right)
\end{align}
where $\vec{m}_i^t$ is the sum of all the messages that the neighbors $\Omega_{v_i}$ send to node $v_i$, $f_m$ is the function to compute the incoming messages, $f_s$ is the function that expresses the state of a node and $f_o$ is the function to produce the output. Similar to RNNs, these functions are the learned neural networks and are shared among different time steps.
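A minimal NumPy sketch of one such update follows, with a sum of shared linear maps for $f_m$, a tanh layer standing in for $f_s$, and a linear readout for $f_o$ (all simplifying assumptions about the learned functions):

```python
import numpy as np

def gnn_step(states, adjacency, W_m, b_m, W_s, W_o):
    # One synchronous GNN update over all |V| nodes:
    #   m_i: sum over neighbours of a shared linear map of their states
    #   s_i: new state from (m_i, s_i); tanh layer stands in for f_s
    #   o_i: linear readout of the new state, standing in for f_o
    messages = adjacency @ (states @ W_m.T + b_m)
    new_states = np.tanh(np.concatenate([messages, states], axis=1) @ W_s.T)
    outputs = new_states @ W_o.T
    return new_states, outputs
```

Here `adjacency` is the binary |V|x|V| neighbourhood matrix, so the matrix product realizes the neighbour sum.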
\subsection{RNN and LSTM}
Recurrent Neural Networks (RNNs) are powerful models for capturing the dependencies of sequences via cycles in the network of nodes, which makes them suitable for sequence tasks. However, the standard RNN suffers from vanishing and exploding gradients when applied to long-term sequences.
The advanced RNN architecture of Long Short-Term Memory (LSTM) is proposed by Hochreiter et al.~\cite{Hochreiter1997lstm}. An LSTM neuron contains an input gate, a forget gate, an output gate and a cell, which improve its ability to learn long-term dependencies.
\section{Model Architecture}
\label{Model Architecture}
In this paper, we propose an effective model for skeleton-based action recognition, which contains a spatial reasoning network and a temporal stack learning network. The overall pipeline of our model is shown in Fig.~\ref{model_pipeline}. In this section, we will introduce these networks in detail.
\subsection{Spatial Reasoning Network}
\begin{figure}[!b]
\centering
\subfigure[]{
\label{RGNN:a}
\includegraphics[width=1.5in]{./skeleton_gnn.eps}}
\hspace{8mm}
\subfigure[]{
\label{RGNN:b}
\includegraphics[width=1.65in]{./RGNN.eps}}
\caption{The architecture of residual graph neural network (RGNN). (a) illustrates five human pose parts and a corresponding RGNN. (b) shows the principle of a RGNN with three nodes}
\label{RGNN}
\end{figure}
The rich inherent structure of the human body involved in the action recognition task motivates us to design an effective architecture, called the spatial reasoning network, to model the high-level spatial structural information within each frame. Following common knowledge, the body can be decomposed into $K$ parts, e.g. two arms, two legs and one trunk (shown in Fig. \ref{RGNN:a}), which express the knowledge of human body configuration.
For spatial structures, the spatial reasoning network encodes the coordinate vectors via two steps (see Fig. \ref{model_pipeline}) to capture the high-level spatial features of skeleton structural relationships. First, the preliminary encoding process maps the coordinate vector of each part into the individual part feature $\vec{e}_k$, $k \in \{1,...,K \}$ with a linear layer that is shared among different body parts. Second, all part features $\vec{e}_k$ are fed into the proposed residual graph neural network (RGNN) to model the structural relationships between these body parts. Fig. \ref{RGNN:b} shows a RGNN with three nodes.
For a RGNN, there are $K$ nodes that correspond to the human body parts. At time step $t$, each node has a relation feature vector $\vec{r}_k^t \in R^t$, where $R^t = \{ \vec{r}_1^t,...,\vec{r}_K^t\}$ and $\vec{r}_k^t$ denotes the spatial structural relationships of part $k$ with the other parts. We initialize $\vec{r}_k^t$ with the individual part feature $\vec{e}_k$, such that $\vec{r}_k^0 = \vec{e}_k$. We use $\vec{m}_{ik}^t$ to denote the message that node $k$ receives from node $i$ at time step $t$, where $i \in \{1,...,K \}$. The aggregate message $\vec{m}_{k}^t$ that node $k$ receives from all its neighbors $\Omega_{v_k}$ at time step $t$ is defined as follows:
\begin{align}
\label{rgnn_message} \vec{m}_{k}^t & = \sum\limits_{i \in \Omega_{v_k}} \vec{m}_{ik}^{t} \nonumber\\
& = \sum\limits_{i \in \Omega_{v_k}} \vec{W}_m \vec{s}_{i}^{t-1} + \vec{b}_m
\end{align}
where $\vec{s}_{i}^{t-1}$ is the state of node $i$ at time step $t-1$, and a shared linear layer of weights $\vec{W}_m$ and biases $\vec{b}_m$ will be used to compute the messages for all nodes. After aggregating the messages, updating function of the node hidden state can be defined as follows:
\begin{align}
\label{rgnn_state} \vec{s}_k^t & = f_{lstm} \left( \vec{r}_k^{t-1}, \vec{m}_k^t, \vec{s}_k^{t-1} \right)
\end{align}
where $f_{lstm} \left( \cdot \right)$ denotes the LSTM cell function. Then, we calculate the relation representation $\vec{r}_k^{t}$ at time step $t$ via:
\begin{align}
\label{rgnn_relation} \vec{r}_k^{t} & = \vec{r}_k^{t-1} + \vec{s}_k^{t}
\end{align}
The residual design of Eqn.~\ref{rgnn_relation} adds the relationship features on top of the individual part features, so that the representations fuse both kinds of information.
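One RGNN update step (message aggregation, state update, and residual) can be sketched in plain Python. All names (`rgnn_step`, `lstm_cell`) and dimensions are illustrative rather than taken from the paper, the graph is assumed fully connected (every other node is a neighbor), and the LSTM cell is abstracted as a pluggable callable:

```python
def rgnn_step(r_prev, s_prev, W_m, b_m, lstm_cell):
    """One RGNN time step over K nodes.
    r_prev, s_prev: lists of K feature vectors (lists of floats).
    W_m, b_m: shared linear layer (weight rows, bias vector).
    lstm_cell(r, m, s) -> new state; stands in for the f_lstm cell.
    """
    K = len(r_prev)

    def linear(x):  # shared message transform W_m x + b_m
        return [sum(w * xi for w, xi in zip(row, x)) + b
                for row, b in zip(W_m, b_m)]

    r_new, s_new = [], []
    for k in range(K):
        # m_k^t: sum of messages from all neighbors (here: all other nodes)
        msg = [0.0] * len(b_m)
        for i in range(K):
            if i != k:
                for d, v in enumerate(linear(s_prev[i])):
                    msg[d] += v
        s_k = lstm_cell(r_prev[k], msg, s_prev[k])          # s_k^t
        r_k = [a + b for a, b in zip(r_prev[k], s_k)]       # r_k^t = r_k^{t-1} + s_k^t
        s_new.append(s_k)
        r_new.append(r_k)
    return r_new, s_new
```

With a toy cell that simply returns the message, the residual accumulation of relation features can be followed by hand.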
After the RGNN is updated $T$ times, we extract node-level output as the spatial structural relationships $\vec{r}_k^T$ of each part within each frame. Finally, the high-level spatial structural information $\vec{q}$ of human body for a frame can be computed as follows:
\begin{align}
\label{rgnn_global1} \vec{r}^{T} & = concat \left( [\vec{r}_1^T, \vec{r}_2^T,...,\vec{r}_K^T] \right) \\
\label{rgnn_global2} \vec{q} & = f_r \left( \vec{r}^{T} \right)
\end{align}
where $f_{r} \left( \cdot \right)$ is a linear layer.
\subsection{Temporal Stack Learning Network}
To further exploit the discriminative features of various actions, the proposed temporal stack learning network focuses on modeling detailed temporal dynamics. A skeleton sequence contains rich and detailed temporal dynamics in its short-term clips, so to capture this detailed temporal information we decompose the long-term sequence into multiple continuous clips. A skeleton sequence consists of $N$ frames and is divided into $M$ clips at intervals of $d$ frames. The high-level spatial structural features $\{Q_1, Q_2,...,Q_M \}$ of the skeleton sequence are extracted by the spatial reasoning network. $Q_m = \{\vec{q}_{md+1}, \vec{q}_{md+2},..., \vec{q}_{(m+1)d} \}$ is the set of features of clip $m$, and $\vec{q}_n$ denotes the high-level spatial structural features of skeleton frame $n$, $n \in \{1,...,N\}$.
\begin{figure}[!b]
\centering
\includegraphics[width=0.9\linewidth,height=0.4\linewidth]{./clip_lstm.eps}
\caption{The architecture of three skip-clip LSTM layers}
\label{clip_lstm}
\end{figure}
Our proposed temporal stack learning network is a two-stream network consisting of a position network and a velocity network (see Fig.~\ref{model_pipeline}). The two networks have the same architecture, composed of three skip-clip LSTM layers (shown in Fig.~\ref{clip_lstm}). The inputs of the position network are the high-level spatial structural features $\{Q_1, Q_2,...,Q_M \}$. The inputs of the velocity network are the temporal differences $\{V_1, V_2,...,V_M \}$ of the spatial features between two consecutive frames, where $V_m = \{\vec{v}_{md+1}, \vec{v}_{md+2},..., \vec{v}_{(m+1)d} \}$ and $\vec{v}_n = \vec{q}_n - \vec{q}_{n-1}$ denotes the temporal difference of high-level spatial features for skeleton frame $n$.
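The clip decomposition and velocity inputs can be sketched as follows. This is a minimal illustration in plain Python; the function name and the zero initialization of the first velocity frame are our assumptions, not taken from the paper:

```python
def make_clips_and_velocity(q, d):
    """Split per-frame features q (list of N vectors) into M = N // d clips
    of length d, and build velocity features v_n = q_n - q_{n-1}
    (the first velocity frame is set to zeros, an assumption here)."""
    v = [[0.0] * len(q[0])]
    for n in range(1, len(q)):
        v.append([a - b for a, b in zip(q[n], q[n - 1])])
    M = len(q) // d
    Q = [q[m * d:(m + 1) * d] for m in range(M)]   # position clips
    V = [v[m * d:(m + 1) * d] for m in range(M)]   # velocity clips
    return Q, V
```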
\textbf{\emph{Skip-Clip LSTM Layer}} \hspace{3mm}
In the skip-clip LSTM layer, an LSTM layer is shared among the continuous clips (see Fig.~\ref{clip_lstm}). For the position network, the spatial features of continuous skeleton frames in clip $m$ are fed into the shared LSTM to capture the short-term temporal dynamics in the first skip-clip LSTM layer:
\begin{align}
\label{clip_last_state}
\vec{h}_m^{'} & = f_{LSTM} \left( Q_m \right) \nonumber\\
& = f_{LSTM} \left( \{\vec{q}_{md+1}, \vec{q}_{md+2},..., \vec{q}_{(m+1)d} \} \right)
\end{align}
where $\vec{h}_m^{'}$ is the last hidden state of the shared LSTM for clip $m$, and $f_{LSTM} \left( \cdot \right)$ denotes the shared LSTM in the skip-clip LSTM layer.
Note that the inputs of the LSTM cell differ between the first skip-clip LSTM layer and the subsequent layers (see Fig.~\ref{clip_lstm}). In order to strengthen the dependency between two adjacent frames, the input $\vec{x}_t^l$ of the LSTM cell for the $l$-th layer ($l \geq 2$) at time step $t$ is defined as follows:
\begin{align}
\label{cell_input}
\vec{x}_t^l & = concat \left( \vec{h}_{t-1}^{l-1}, \vec{h}_{t}^{l-1} \right)
\end{align}
where $\vec{h}_{t}^{l-1}$ is the hidden state of the $(l-1)$-th LSTM layer at time step $t$.
Then the representation of clip dynamics can be calculated as follows:
\begin{align}
\label{clip_state}
\vec{H}_m & = \vec{H}_{m-1} + \vec{h}_m^{'} \nonumber\\
& = \sum_{i=1}^m \vec{h}_i^{'}
\end{align}
where $\vec{H}_{m-1}$ and $\vec{H}_{m}$ denote the representations of clips $m-1$ and $m$, respectively. The representation $\vec{H}_{m}$ aggregates the detailed temporal dynamics of the $m$-th clip and all previous clips to represent the long-term sequence. When feeding clip $m$ into the shared LSTM layer, we initialize its initial hidden state $\vec{h}_m^0$ with $\vec{H}_{m-1}$, such that $\vec{h}_m^0 = \vec{H}_{m-1}$; this inherits the previous dynamics while learning the short-term dynamics of the $m$-th clip, maintaining the dependency between clips.
The skip-clip LSTM layer can thus capture the temporal dynamics of a short-term clip on top of the temporal information of previous clips, and the larger $m$ is, the richer the temporal dynamics contained in $\vec{H}_m$.
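The skip-clip recursion of Eqn.~\ref{clip_state}, including the initialization $\vec{h}_m^0 = \vec{H}_{m-1}$, can be sketched as follows. The shared LSTM is abstracted as a callable returning the last hidden state of a clip, and all names are illustrative:

```python
def temporal_stack(clips, shared_lstm, dim):
    """Skip-clip recursion: H_m = H_{m-1} + h'_m, where h'_m is the last
    hidden state of the shared LSTM run over clip m with its initial
    hidden state set to H_{m-1} (H_0 is the zero vector)."""
    H = [0.0] * dim
    reps = []
    for clip in clips:
        h_last = shared_lstm(clip, H)              # h0 = H_{m-1}
        H = [a + b for a, b in zip(H, h_last)]     # H_m = H_{m-1} + h'_m
        reps.append(H)
    return reps                                    # [H_1, ..., H_M]
```

With a toy "LSTM" that adds its initial state to the clip sum, one can see how each $\vec{H}_m$ accumulates the dynamics of all preceding clips.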
\textbf{\emph{Learning the Classifier}} \hspace{3mm}
Finally, two linear layers are used to compute the scores for $C$ classes:
\begin{align}
\label{clip_output}
\vec{O}_m & = F_o \left( \vec{H}_m \right)
\end{align}
where $\vec{O}_m$ is the score of clip $m$ with $\vec{O}_m = \left(o_{m1}, o_{m2},...,o_{mC}\right)$, and $F_o$ denotes the two linear layers. The output is fed to a softmax classifier to predict the probability of the $i^{th}$ class:
\begin{align}
\label{clip_softmaxt}
{\hat y}_{mi} & = { {e^{o_{mi}}} \over { \sum_{j=1}^C e^{o_{mj}} }}, i = 1,...,C
\end{align}
where ${\hat y}_{mi}$ indicates the probability that clip $m$ is predicted as the $i^{th}$ class, and $\vec{{\hat y}}_{m} = \left({\hat y}_{m1},...,{\hat y}_{mC} \right)$ denotes the probability vector of clip $m$.
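The score and softmax computations (Eqns.~\ref{clip_output} and \ref{clip_softmaxt}) amount to the following sketch, with $F_o$ passed in as a callable; the max-subtraction for numerical stability is our choice, not stated in the paper:

```python
import math

def clip_probabilities(H_m, F_o):
    """Scores O_m = F_o(H_m), then softmax over the C classes."""
    o = F_o(H_m)
    mx = max(o)                       # subtract max for numerical stability
    exps = [math.exp(x - mx) for x in o]
    Z = sum(exps)
    return [e / Z for e in exps]      # (yhat_m1, ..., yhat_mC)
```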
Our proposed temporal stack learning network is a two-stream network, so clip dynamic representations of three modes ($\vec{H}_m^p$, $\vec{H}_m^v$ and $\vec{H}_m^s$) are captured. $\vec{H}_m^p$ and $\vec{H}_m^v$ denote the dynamic representations extracted from the position and velocity streams for clip $m$, respectively, and $\vec{H}_m^s$ is the sum of $\vec{H}_m^p$ and $\vec{H}_m^v$. From these, the network predicts the probability vectors $\vec{{\hat y}}_{m}^p$, $\vec{{\hat y}}_{m}^v$ and $\vec{{\hat y}}_{m}^s$.
In order to optimize the model, we propose the clip based incremental losses for a skeleton sequence:
\begin{align}
\label{clip_output_p} \mathcal{L}_p & = - \sum_{m=1}^M {m \over M} \sum_{i=1}^C y_i \log {\hat y}_{mi}^p \\
\label{clip_output_v} \mathcal{L}_v & = - \sum_{m=1}^M {m \over M} \sum_{i=1}^C y_i \log {\hat y}_{mi}^v \\
\label{clip_output_s} \mathcal{L}_s & = - \sum_{m=1}^M {m \over M} \sum_{i=1}^C y_i \log {\hat y}_{mi}^s
\end{align}
where $\vec{y} = \left( y_1,...,y_C \right)$ denotes the ground-truth label. The richer the temporal information a clip contains, the greater its coefficient ${m \over M}$. The clip-based incremental loss promotes the ability to model the detailed temporal dynamics of long-term skeleton sequences. Finally, the training loss of our model is defined as follows:
\begin{align}
\label{loss} \mathcal{L} & = \mathcal{L}_p + \mathcal{L}_v + \mathcal{L}_s
\end{align}
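The clip-based incremental loss for a single stream (Eqn.~\ref{clip_output_p}) can be sketched as follows; the function name is illustrative:

```python
import math

def clip_incremental_loss(probs_per_clip, y):
    """L = - sum_m (m/M) * sum_i y_i * log(yhat_mi), for one stream.
    probs_per_clip: list of M probability vectors; y: one-hot label."""
    M = len(probs_per_clip)
    loss = 0.0
    for m, probs in enumerate(probs_per_clip, start=1):
        ce = -sum(yi * math.log(p) for yi, p in zip(y, probs))
        loss += (m / M) * ce          # later clips weighted more heavily
    return loss
```

The total loss is then the sum of this quantity over the position, velocity, and summed streams, as in Eqn.~\ref{loss}.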
Due to the mechanism of the skip-clip LSTM (see Eqn.~\ref{clip_state}), the representation $\vec{{ H}}_M^s$ of clip $M$ aggregates all the detailed temporal dynamics of the continuous clips from both the position and velocity sequences. In the testing process, we only use the probability vector $\vec{{\hat y}}_{M}^s$ to predict the class of the skeleton sequence.
\section{Experiments}
\label{Experiments}
To verify the effectiveness of our proposed model for skeleton-based action recognition, we perform extensive experiments on the NTU RGB+D dataset \cite{Shahroudy2016NTU} and the SYSU 3D Human-Object Interaction dataset \cite{Hu2015Jointly}. We also analyze the performance of our model with several variants.
\subsection{Datasets and Experimental Settings}
\textbf{\emph{NTU RGB+D Dataset (NTU)}} \hspace{3mm}
This is currently the largest action recognition dataset with joint annotations, collected with Microsoft Kinect v2. It has 56880 video samples and contains 60 action classes in total. These actions are performed by 40 distinct subjects and recorded simultaneously by three cameras at different horizontal views. The joint annotations consist of 3D locations of 25 major body joints. \cite{Shahroudy2016NTU} defines two standard evaluation protocols for this dataset: Cross-Subject and Cross-View.
For Cross-Subject evaluation, the 40 subjects are split into training and testing groups. Each group consists of 20 subjects. For Cross-View evaluation, all the samples of camera 2 and 3 are used for training while the samples of camera 1 are used for testing.
\textbf{\emph{SYSU 3D Human-Object Interaction dataset (SYSU)}} \hspace{3mm}
This dataset contains 480 video samples in 12 action classes. These actions are performed by 40 subjects. There are 20 joints for each subject in the 3D skeleton sequences. There are two standard evaluation protocols \cite{Hu2015Jointly} for this dataset. In the first setting (setting-1), for each activity class, half of the samples are used for training and the rest for testing. In the second setting (setting-2), half of the subjects are used for training and the rest for testing. For each setting, 30-fold cross-validation is performed.
\textbf{\emph{Experimental Settings}} \hspace{3mm}
In all our experiments, we set the hidden state dimension of the RGNN to 256. For the NTU dataset, the human body is decomposed into $K$ = 8 parts: two arms, two hands, two legs, one trunk and one head. For the SYSU dataset, there are $K$ = 5 parts: two arms, two legs, and one trunk. We set the length of skeleton sequences to $N$ = 100 for both datasets. The hidden size of the LSTM cell in the skip-clip LSTM layer is 512. The learning rate, initialized to 0.0001, is reduced by multiplying it by 0.1 every 30 epochs. The batch sizes for the NTU dataset and the SYSU dataset are 64 and 10, respectively. The network is optimized with the ADAM optimizer \cite{kingma2015adam}. Dropout with a probability of 0.5 is used to alleviate overfitting during training.
\subsection{Experimental Results}
We compare the performance of our proposed model against several state-of-the-art approaches on the NTU dataset and SYSU dataset in Table \ref{NTU_comparison} and Table \ref{SYSU_comparison}. These methods for skeleton-based action recognition can be divided into two categories: CNN-based methods \cite{liu2017enhanced,Yan2018Spatial} and LSTM-based methods \cite{Zhang2017View,Inwoong2017Ensemble,Song2017Attention}.
\setlength{\tabcolsep}{8pt}
\begin{table}[!t]
\fontsize{8pt}{0.85\baselineskip}\selectfont
\begin{center}
\caption{The comparison results on NTU RGB+D dataset with Cross-Subject and Cross-View settings in accuracy (\%) }
\label{NTU_comparison}
\begin{tabular}{r|cc}
\hline\noalign{\smallskip}
\multicolumn{1}{c}{Methods} & Cross-Subject & Cross-View \\
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
HBRNN-L \cite{Du2015Hierarchical} (2015) & 59.1 & 64.0 \\
Part-aware LSTM \cite{Shahroudy2016NTU} (2016) & 62.9 & 70.3 \\
Trust Gate ST-LSTM \cite{Liu2016Spatio-temporal} (2016) & 69.2 & 77.7 \\
Two-stream RNN \cite{Wang2017Modeling} (2017) & 71.3 & 79.5 \\
STA-LSTM \cite{Song2017Attention} (2017) & 73.4 & 81.2 \\
Ensemble TS-LSTM \cite{Inwoong2017Ensemble} (2017) & 74.6 & 81.3 \\
Visualization CNN \cite{liu2017enhanced} (2017) & 76.0 & 82.6 \\
VA-LSTM \cite{Zhang2017View} (2017) & 79.4 & 87.6 \\
ST-GCN \cite{Yan2018Spatial} (2018) & 81.5 & 88.3 \\
\hline
SR-TSL (Ours) & \textbf{84.8} & \textbf{92.4}\\
\hline
\end{tabular}
\end{center}
\end{table}
As shown in Table \ref{NTU_comparison}, our proposed model achieves the best performances of 84.8\% and 92.4\% on the current largest NTU dataset, significantly outperforming the state-of-the-art CNN-based method \cite{Yan2018Spatial} by about 3.3\% and 4.1\% for cross-subject and cross-view evaluation, respectively. Our model belongs to the LSTM-based methods. Compared with VA-LSTM \cite{Zhang2017View}, the current best LSTM-based method for action recognition, our results are about 5.4\% and 4.8\% better on the NTU dataset. Ensemble TS-LSTM \cite{Inwoong2017Ensemble} is the work most similar to ours; our model outperforms it by 10.2\% and 11.1\% in cross-subject and cross-view evaluation, respectively. As shown in Table \ref{SYSU_comparison}, our proposed model achieves the best performances of 80.7\% and 81.9\% on the SYSU dataset, significantly outperforming the state-of-the-art approach \cite{Zhang2017View} by about 3.8\% and 4.4\% for setting-1 and setting-2, respectively.
\setlength{\tabcolsep}{8pt}
\begin{table}[!b]
\fontsize{8pt}{0.85\baselineskip}\selectfont
\begin{center}
\caption{The comparison results on SYSU dataset in accuracy (\%) }
\label{SYSU_comparison}
\begin{tabular}{r|cc}
\hline\noalign{\smallskip}
\multicolumn{1}{c|}{Methods} & Setting-1 & Setting-2 \\
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
LAFF \cite{Hu2016Real} (2016) & - & 54.2 \\
Dynamic Skeletons \cite{Hu2015Jointly} (2015) & 75.5 & 76.9 \\
VA-LSTM \cite{Zhang2017View} (2017) & 76.9 & 77.5 \\
\hline
SR-TSL (Ours) & \textbf{80.7} & \textbf{81.9}\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Model Analysis}
We analyze the proposed model by comparing it with several baselines. The comparison results demonstrate the effectiveness of our model. There are two key ingredients in the proposed model: spatial reasoning network (SRN) and temporal stack learning network (TSLN). To analyze the role of each component, we compare our model with several combinations of these components. Each variant is evaluated on NTU dataset.
\setlength{\tabcolsep}{8pt}
\begin{table}[!t]
\fontsize{8pt}{0.85\baselineskip}\selectfont
\begin{center}
\caption{The comparison results on NTU and SYSU dataset in accuracy (\%). We compare the performances of several variants and our proposed model to verify the effectiveness of our model}
\label{baselines_comparison}
\begin{tabular}{l|cc|cc}
\hline\noalign{\smallskip}
\multirow{2}{*}{Methods} & \multicolumn{2}{c|}{NTU} & \multicolumn{2}{c}{SYSU} \\
\cline{2-5}
&Cross-Subject & Cross-View & Setting-1 & Setting-2\\
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
FC + LSTM & 77.0 & 84.7 & 39.9 & 40.7 \\
SRN + LSTM & 78.7 & 87.3 & 42.1 & 44.4 \\
FC + TSLN & 83.8 & 91.6 & 77.3 & 77.4 \\
SR-TSL(Position) & 78.8 & 88.2 & 77.1 & 76.9 \\
SR-TSL(Velocity) & 82.2 & 90.6 & 71.7 & 71.8 \\
\hline
SR-TSL (Ours) & \textbf{84.8} & \textbf{92.4} & \textbf{80.7} & \textbf{81.9}\\
\hline
\end{tabular}
\end{center}
\end{table}
\textbf{FC+LSTM}\hspace{2mm} For this model, the coordinate vectors of each body part are encoded with the linear layer and three LSTM layers are used to model the sequence dynamics. It is also a two stream network to learn the temporal dynamics from position and velocity.
\textbf{SRN+LSTM}\hspace{2mm} Compared with FC+LSTM, this model uses spatial reasoning network to capture the high-level spatial structural features of skeleton sequences within each frame.
\textbf{FC+TSLN}\hspace{2mm} Compared with FC+LSTM, this model replaces the three LSTM layers with the temporal stack learning network to learn the detailed sequence dynamics of skeleton sequences.
\textbf{SR-TSL (Position)}\hspace{2mm} Compared with our proposed model, the temporal stack learning network of this model only contains the position network.
\textbf{SR-TSL (Velocity)}\hspace{2mm} Compared with our proposed model, the temporal stack learning network of this model only contains the velocity network.
\textbf{SR-TSL }\hspace{2mm} It denotes our proposed model.
\begin{figure}[!t]
\centering
\subfigure[Cross-Subject]{
\label{acc:a}
\includegraphics[width=2.3in]{./cs.eps}}
\subfigure[Cross-View]{
\label{acc:b}
\includegraphics[width=2.3in]{./cv.eps}}
\caption{The accuracy of the baselines and our model on the testing set of NTU RGB+D dataset during learning phase. (a) shows the comparison results for cross-subject evaluation, and (b) is for cross-view evaluation}
\label{acc}
\end{figure}
\begin{figure}[!b]
\centering
\includegraphics[width=0.7\linewidth,height=0.38\linewidth]{./clip_acc.eps}
\caption{The accuracy of the increasing clips on the testing set of NTU RGB+D dataset}
\label{clip_acc}
\end{figure}
Table \ref{baselines_comparison} shows the comparison results of the variants and our proposed model on the NTU and SYSU datasets. Our model clearly improves the performance on both datasets. The gains reported in Table \ref{baselines_comparison} show that the spatial reasoning network and the temporal stack learning network are both effective for skeleton-based action recognition, especially the latter. Furthermore, the two-stream architecture of the temporal stack learning network efficiently learns the temporal dynamics from the velocity and position sequences. Fig.~\ref{acc} shows the accuracy of the baselines and our model on the testing set of the NTU RGB+D dataset during the learning phase; our proposed model speeds up convergence and clearly improves performance. We also show the process of temporal stack learning in Fig.~\ref{clip_acc}. As $m$ increases, richer temporal information is contained in the representation of a sequence, and the network can exploit more of the detailed temporal dynamics to recognize human actions, improving the accuracy. These results illustrate that the proposed SR-TSL effectively speeds up convergence and improves performance.
\setlength{\tabcolsep}{6pt}
\begin{table}[t]
\begin{floatrow}
\begin{minipage}{0.4\linewidth}
\centering
\ttabbox{\caption{The comparison results on NTU dataset in accuracy (\%). We compare several models that have different time steps for the RGNN to show the improvements achieved at every step }}
\label{RGNN_comparison}
\begin{tabular}{l|cc}
\hline\noalign{\smallskip}
RGNN & Cross-Subject & Cross-View \\
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
$T$ = 1 & 84.1 & 92.0 \\
$T$ = 2 & 84.4 & 92.2 \\
$T$ = 3 & 84.5 & \textbf{92.4} \\
$T$ = 4 & 84.7 & 92.3 \\
$T$ = 5 & \textbf{84.8} & 92.3 \\
$T$ = 6 & 84.7 & 92.2 \\
\hline
\end{tabular}}
\end{minipage}
\hfil
\begin{minipage}{0.4\linewidth}
\centering
\ttabbox{\caption{The comparison results on NTU dataset in accuracy (\%). We compare the performances of several proposed models with different clip lengths $d$}}
\label{clip_comparison}
\begin{tabular}{l|cc}
\hline\noalign{\smallskip}
TSLN & Cross-Subject & Cross-View \\
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
$d$ = 2 & 81.6 & 90.6 \\
$d$ = 4 & 84.1 & 91.4 \\
$d$ = 6 & 84.5 & \textbf{92.4} \\
$d$ = 8 & 84.5 & 92.3 \\
$d$ = 10 & \textbf{84.8} & 92.1 \\
$d$ = 15 & 84.7 & 92.2 \\
$d$ = 20 & 84.4 & 92.1 \\
\hline
\end{tabular} }
\end{minipage}
\end{floatrow}
\end{table}
We also discuss the effect of two important hyper-parameters: the time step $T$ of the RGNN and the length $d$ of clips. The comparison results are shown in Table \ref{RGNN_comparison} and Table \ref{clip_comparison}. For the time step $T$, the performance increases by a small amount as $T$ grows, and soon saturates; we attribute this to the high-level spatial structural features between a small number of body parts being learned quickly. For the clip length $d$, the performance improves significantly as $d$ increases and then saturates, because learning short-term dynamics does not require too many frames. These results illustrate that our proposed model is effective for skeleton-based action recognition.
\section{Conclusions}
\label{Conclusions}
In this paper, we propose a novel model with spatial reasoning and temporal stack learning for long-term skeleton based action recognition, which achieves much better results than the state-of-the-art methods. The spatial reasoning network can capture the high-level spatial structural information within each frame, while the temporal stack learning network can model the detailed temporal dynamics of skeleton sequences. We also propose a clip-based incremental loss to further improve the ability of stack learning, which provides an effective way to solve long-term sequence optimization. With extensive experiments on the current largest NTU RGB+D dataset and SYSU dataset, we verify the effectiveness of our model for the skeleton based action recognition. In the future, we will further analyze the error samples to improve the model, and consider more contextual information, such as interactions, to aid action recognition.
\section*{Acknowledgements}
\label{Acknowledgements}
This work is jointly supported by the National Key Research and Development Program of China (2016YFB1001000), the National Natural Science Foundation of China (61525306, 61633021, 61721004, 61420106015, 61572504), and the Scientific Foundation of State Grid Corporation of China.
\clearpage
\bibliographystyle{splncs04}
\section{Introduction}\label{Intro}
\label{intro}
The one-dimensional [$1$D] harmonic oscillator is one of the simplest and most fundamental classical
as well as quantum systems studied in the literature.
However, the study of the two-dimensional [$2$D] harmonic oscillator in quantum mechanics, in the case of
the rotationally symmetric oscillator, turns out to be interesting and less explored.
In fact, it is more difficult to solve when the problem involves time-dependent parameters.
In the last few decades the problem of time-dependent quantum systems has received
great interest since Lewis and Riesenfeld introduced an elegant method of invariants to solve the time-dependent
Schr\"{o}dinger equation \cite{1}.
This method stimulated some interest in using the invariants for solving $1$D and $2$D time-dependent harmonic oscillators problems
\cite{2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,16',17,18,19,20,21,22}.
The $1$D damped harmonic oscillator has been extensively studied in the literature \cite{23,24,25,26,27}, while its generalization
to two dimensions is, as far as we know, less explored.
We discuss a system of two-non-interacting damped oscillators with equal time-dependent coefficients of friction and equal time-dependent
frequencies.
In section \ref{sec2}, we study the system at the classical level and formulate the corresponding quantum system. We solve the classical equations
of motion for a constant coefficient of friction and for some particular cases of frequencies.
In section \ref{sec3}, we use the Lewis-Riesenfeld method to construct the invariant operator $\hat I(t)$. The eigenvalues and eigenfunctions
of the invariant are calculated explicitly by operator methods, the key element being the introduction of an appropriate unitary
operator. We then derive a conserved angular momentum $\hat L_z$ that commutes with both the invariant operator
$\hat I(t)$ and the Hamiltonian $\hat H(t)$. However, the three operators cannot be simultaneously diagonalized at this stage
of the problem.
In section \ref{sec4}, we introduce the helicity Fock basis in order to simultaneously diagonalize the operators
$\hat I(t), \hat H(t)$ and $\hat L_z$. The rotational symmetry of the system is useful in determining an orthogonal
basis of the Hilbert space for the simultaneous diagonalization. Then we derive the exact solution of the Schr\"{o}dinger
equations in terms of generalized Laguerre polynomials.
In section \ref{sec5}, we use the eigenfunctions of the Hamiltonian to verify a generalized version of the Heisenberg uncertainty relations,
formulated following the standard arguments as follows: for the simultaneous measurement of two observables $\hat A$ and $\hat B$ in the
state $|\psi\rangle$, the uncertainties satisfy the inequality
\begin{equation}\label{He}
\Delta \hat A\, \Delta \hat B\geq\frac{1}{2}\big |\langle\psi|[\hat A,\hat B]|\psi\rangle\big|,
\end{equation}
where $ \Delta \hat A $ and $ \Delta\hat B $ are respectively the dispersions defined as
\begin{eqnarray} \label{v3}
\Delta \hat A=\sqrt{\langle\psi|\hat A^2|\psi\rangle-\langle\psi|\hat A|\psi\rangle^2},\,\,
\Delta \hat B=\sqrt{\langle\psi|\hat B^2|\psi\rangle-\langle\psi|\hat B|\psi\rangle^2}.
\end{eqnarray}
Similar discussions can be also read in \cite{10}.
In section \ref{sec6}, we derive from the solution of the system the hidden generators of the $su(1,1)$ Lie algebra.
We proceed by the factorization method as developed in \cite{28,29} to find the hidden symmetry of the system and derive
from the eigenfunctions the related raising and lowering operators, which generate the $su(1,1)$ Lie algebra.
In section \ref{sec7}, we discuss the $SU(1,1)$ coherent states $\grave{\textrm{a}}$ la Barut-Girardello \cite{30} and $\grave{\textrm{a}}$ la Perelomov \cite{31}.
Briefly, in $1926$ Schr\"{o}dinger introduced into quantum mechanics for the first time the semiclassical states, defined
as the minimum-uncertainty Gaussian states whose dynamics has
maximum similarity to the classical oscillator \cite{32}. These states were rediscovered by
Glauber in the framework of quantum optics in the early $1960$s \cite{33}. They are defined as eigenstates of the annihilation
operator of the harmonic oscillator and are obtained by the action of the Weyl-Heisenberg operator on the ground state.
The coherent states introduced by Glauber inspired Barut-Girardello \cite{30}
and Perelomov \cite{31} in constructing coherent states
for the $SU (1, 1)$ Lie group through different approaches. The Barut-Girardello and Perelomov coherent states
have found many applications, for instance in quantum optics \cite{34,35}, quantum computation \cite{36,37}
and quantum mechanics \cite{38,39,40}.
The conclusion is given in section \ref{sec8}.
\section{The Model}\label{sec2}
We consider, in two-dimensional configuration space, the system of two non-interacting damped oscillators with equal time-dependent
coefficients of friction and equal time-dependent frequencies. The equations of motion are given by
\begin{equation}\label{z}
\left\{
\begin{array}{rcr}
\ddot{x}_1 +\eta(t)\dot{x}_1+\omega^2(t) x_1=0, \\
\ddot{x}_2 +\eta(t)\dot{x}_2+\omega^2(t) x_2=0,
\end{array}
\right.
\end{equation}
where $\eta(t)$ is the time-dependent coefficient of friction, $\omega(t)$ is the time-dependent frequency and the dot represents
time-derivative.\\
These equations of motion may be derived from the Lagrangian
\begin{equation}\label{eq1}
L(x_1,x_2,\dot{x}_1,\dot{x}_2,t)=f^{-1}(t)\left[\frac{m}{2}(\dot{x}_1^2+ \dot{x}_2^2) -\frac{m\omega^2(t)}{2}(x_1^2+x_2^2)\right],
\end{equation}
where $f$ is an arbitrary function such that $f(t)=e^{-\int_0^t\eta(t')dt'}$ or $\eta(t)=-\frac{d}{dt}[\ln f(t)]$.\\
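As a quick consistency check, the Euler-Lagrange equation for $x_1$ indeed reproduces Eq. (\ref{z}): using $\frac{d}{dt}f^{-1}=-f^{-2}\dot f=f^{-1}\eta$, one finds
\begin{align*}
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}_1}-\frac{\partial L}{\partial x_1}
&= f^{-1} m \ddot{x}_1 + f^{-1} m \eta \dot{x}_1 + f^{-1} m \omega^2 x_1
 = f^{-1} m \left( \ddot{x}_1 + \eta \dot{x}_1 + \omega^2 x_1 \right) = 0,
\end{align*}
and similarly for $x_2$.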
Let us consider $R(\vartheta)$, the rotation matrix in the plane, which transforms coordinates $x (x_1,x_2)$ into new coordinates $x'(x_1',x_2')$ such that
\begin{eqnarray}
x'=R(\vartheta)x,\,\,\,\,
R(\vartheta)=
\left (
\begin{array}{cc}
\cos\vartheta&-\sin \vartheta\\\sin\vartheta &\cos \vartheta
\end{array}
\right ), \,\,\,\, \,\,\,\vartheta\in \mathbb R.
\end{eqnarray}
This transformation leaves the Lagrangian invariant. This rotational invariance in the plane manifests the presence of
a Noether charge, which corresponds to the angular momentum of the system.
The canonical momenta associated with the variables $x_1$ and $x_2$ are
\begin{equation}\label{w}
\left\{
\begin{array}{rcr}
p_1=\frac{\partial L}{\partial \dot{x}_1}=f^{-1}(t) m \dot{x}_1, \\
p_2=\frac{\partial L}{\partial \dot{x}_2}=f^{-1}(t) m \dot{x}_2.
\end{array}
\right.
\end{equation}
The Hamiltonian is given by
\begin{eqnarray} \label{x1}
H(x_1,x_2,p_1,p_2,t)&=& \dot{x}_1 p_1+\dot{x}_2 p_2-L\cr
&=&\frac{f(t)}{2m}\left(p_1^2+p_2^2\right)+ f^{-1}(t)\frac{ m\omega^2(t)}{2} \left (x_1^2+x_2^2\right).
\end{eqnarray}
We recover the $2$D Hamiltonian that describes the dissipative system previously introduced in one dimension
by Pedrosa \cite{26,27}.
For $f(t)=1$ and $ f(t)=\exp\left(-\gamma t\right)$ with $\omega(t)=\omega_0$ where $\gamma,\,\omega_0$ are positive constants, the Hamiltonian
(\ref{x1}) is respectively reduced to the ordinary $2$D harmonic oscillator and the $2$D Caldirola and Kanai Hamiltonian \cite{41,42}.
Since we are in a two-dimensional configuration space, we can look for the solutions of the classical equations in complex form by
setting $z=x_2+ix_1$. The classical equation of motion in terms of the coordinate $z$ is
\begin{equation}\label{x2}
\ddot{z} +\eta(t) \dot{z}+\omega^2(t) z=0.
\end{equation}
For $\eta(t)=\gamma$ and $\omega(t)=\omega_0$, the equation (\ref{x2}) takes the form
\begin{equation}
\ddot{z} +\gamma \dot{z}+\omega_0^2 z=0,
\end{equation}
and the classical solutions are \cite{43}
\begin{eqnarray}
z(t)= \left\{
\begin{array}{l}
e^{-\frac{1}{2}\gamma t}\left[ A_1\exp\left(\frac{1}{2}\tau t\right)+
A_2\exp\left(-\frac{1}{2}\tau t\right)\right] \,\,\mbox{if} \,\,\tau^2=\gamma^2-4\omega_0^2>0, \\
e^{-\frac{1}{2}\gamma t}\left[ A_1\sin\left(\frac{1}{2}\tau t\right)+
A_2\cos\left(\frac{1}{2}\tau t\right)\right]\,\,\,\mbox{if} \,\,\,\tau^2=4\omega_0^2-\gamma^2>0,\\
e^{-\frac{1}{2}\gamma t} (A_1+ A_2 t) \,\,\,\mbox{if} \,\,\gamma^2=4\omega_0^2,
\end{array}
\right.
\end{eqnarray}
where $ A_1$ and $A_2$ are constants.
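As a numerical sanity check of the underdamped branch (our addition, not part of the original derivation), one can integrate the constant-coefficient equation with a standard RK4 scheme and compare against the closed form; the parameter values below are arbitrary illustrative choices:

```python
import math

def analytic_underdamped(t, gamma, w0, A1, A2):
    # closed-form underdamped solution, tau^2 = 4*w0^2 - gamma^2 > 0
    tau = math.sqrt(4.0 * w0**2 - gamma**2)
    return math.exp(-0.5 * gamma * t) * (
        A1 * math.sin(0.5 * tau * t) + A2 * math.cos(0.5 * tau * t))

def integrate(gamma, w0, z0, v0, t_end, n_steps):
    """Classic RK4 for z'' + gamma*z' + w0^2*z = 0; returns z(t_end)."""
    dt = t_end / n_steps
    z, v = z0, v0
    def f(zz, vv):
        return vv, -gamma * vv - w0**2 * zz
    for _ in range(n_steps):
        k1z, k1v = f(z, v)
        k2z, k2v = f(z + 0.5*dt*k1z, v + 0.5*dt*k1v)
        k3z, k3v = f(z + 0.5*dt*k2z, v + 0.5*dt*k2v)
        k4z, k4v = f(z + dt*k3z, v + dt*k3v)
        z += dt * (k1z + 2*k2z + 2*k3z + k4z) / 6.0
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
    return z
```

With $A_1=0$, $A_2=1$, the initial conditions are $z(0)=1$, $\dot z(0)=-\gamma/2$, and the numerical and analytic values agree to high accuracy.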
For $\omega(t)= \omega_0 e^{-\frac{1}{2}\gamma t}$, the equation can be rewritten as follows
\begin{equation}
\ddot{z} +\gamma \dot{z}+\omega_0^2 e^{-\gamma t} z=0.
\end{equation}
The solution is given by \cite{24,43}
\begin{equation}
z(t)= e^{-\frac{1}{2}\gamma t}\left [ B_1 J_1\left(\frac{2\omega_0}{\gamma}e^{-\frac{1}{2}\gamma t}\right)+ B_2Y_1\left(
\frac{2\omega_0}{\gamma}e^{-\frac{1}{2}\gamma t}\right)\right],
\end{equation}
where $J_k$ and $Y_k$ are respectively Bessel functions of first and second kind, $B_1$ and $B_2$ are constants.
For $\omega(t)= \omega_0 e^{-\gamma t}$,
the solution is known to be \cite{24,43}
\begin{equation}
z(t)= C_1 \cos\left(\frac{\omega_0e^{-\gamma t}}{\gamma}\right)+ C_2\sin\left(\frac{\omega_0 e^{-\gamma t}}{\gamma}\right),
\end{equation}
where $C_1$ and $C_2$ are constants.
At the quantum level, the corresponding Hamiltonian operator describing the system reads
\begin{eqnarray}\label{H1}
\hat H(t) = \frac{f(t)}{2m}\left(\hat p_1^2+\hat p_2^2\right)+ f^{-1}(t)\frac{ m\omega^2(t)}{2} \left (\hat x_1^2+\hat x_2^2\right),
\end{eqnarray}
where the position operators $\hat x_1, \hat x_2$ and the momentum operators $\hat p_1,\hat p_2$ satisfy the canonical commutation relations
\begin{eqnarray}
[\hat x_i,\hat p_j]=i\hbar\mathbf{I}\delta_{ij},\,\,\,\,\,[\hat x_i,\hat x_j]=0=[\hat p_i,\hat p_j],\,\,\, i,j=1,2.
\end{eqnarray}
Many methods can be considered for diagonalizing this Hamiltonian \cite{1,6,44,45,46,47,48}.
Among them is the Lewis-Riesenfeld method, based on the construction of a Hermitian invariant operator \cite{1}.
\section{Construction and eigensystems of the invariant operator}\label{sec3}
To construct the exact invariant operator for the quantum system described by the time-dependent Hamiltonian (\ref{H1}),
we use the dynamic invariant method formulated by Lewis and Riesenfeld \cite{1}.
Now, we look for the invariant in the form
\begin{equation}\label{x4}
\hat I(t)=\alpha (t)\hat J_++\beta(t)\hat J_- +\delta (t) \hat J_0,
\end{equation}
where $\alpha,\beta,\delta$ are time-dependent real coefficients and
$\hat J_+=\frac{1}{2}(\hat x_1^2+\hat x_2^2)$, $\hat J_-=\frac{1}{2}(\hat p_1^2+\hat p_2^2)$,
$\hat J_0=\frac{1}{2}(\hat x_1\hat p_1+\hat p_1\hat x_1+\hat x_2\hat p_2+\hat p_2\hat x_2)$ satisfy the following commutation relations
\begin{eqnarray}
[\hat J_+,\hat J_-]=i\hat J_0; \,\, [\hat J_0,\hat J_\pm]=\pm2i\hat J_\pm.
\end{eqnarray}
The Hamiltonian (\ref{H1}) is rewritten in terms of the latter operators as follows
\begin{equation}
\hat H(t)=\frac{f(t)}{m}\hat J_-+ f^{-1}(t) m\omega^2(t) \hat J_+.
\end{equation}
To determine an explicit form of the Hermitian invariant (\ref{x4}), one solves
the following equation
\begin{equation}\label{C1}
\frac{d\hat I (t)}{dt}=\frac{\partial \hat I(t)}{\partial t}+\frac{1}{i}[\hat I(t),\hat H(t)]\equiv0,
\end{equation}
where we set $\hbar=1$. Expanding equation (\ref{C1}),
we obtain the first-order linear differential equations for the unknown coefficient functions
\begin{eqnarray}
\dot{\alpha}-2f^{-1}m\omega^2\delta=0,\\
\dot{\beta}+\frac{2f}{m}\delta=0,\\
\dot{\delta}+\frac{f}{m}\alpha-f^{-1}m\omega^2\beta=0.
\end{eqnarray}
As in \cite{1,14},
it is convenient to introduce another real function $\rho(t)$ defined by
\begin{equation}\label{x5}
\beta(t)=\rho^2(t).
\end{equation}
For an arbitrary positive constant $\nu$, the other coefficients are
\begin{eqnarray}\label{x6}
\delta (t)=-mf^{-1}\dot{\rho}\rho,\,\,\,\,\,\alpha(t)=\frac{\nu^2}{\rho^2}+m^2f^{-2}{\dot{\rho}}^2.
\end{eqnarray}
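Indeed, with $\beta=\rho^2$ the second of the above equations gives
\begin{equation}
\delta=-\frac{m}{2f}\dot{\beta}=-mf^{-1}\dot{\rho}\rho,
\end{equation}
and inserting $\alpha$ and $\delta$ into the first equation, with $\dot{f}=-\eta f$, leads precisely to the Ermakov-Pinney equation for $\rho$ given below.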
Replacing (\ref{x5}), (\ref{x6}) in (\ref{x4}), the Hermitian invariant acquires the form
\begin{eqnarray}
\hat I(t)=\frac{1}{2}\left[\left(mf^{-1}\dot{\rho}\hat x_1-\rho \hat p_1\right)^2 +\frac{\nu^2}{\rho^2}\hat x_1^2+
\left(mf^{-1}\dot{\rho}\hat x_2-\rho \hat p_2\right)^2 +\frac{\nu^2}{\rho^2}\hat x_2^2\right],
\end{eqnarray}
where the function $\rho$ is the solution of the so-called Ermakov-Pinney equation \cite{49}
\begin{equation}\label{e4}
\ddot{\rho}+\eta \dot{\rho}+\omega^2\rho=\frac{\nu^2 f^2}{m^2\rho^3}.
\end{equation}
Next we determine the spectrum of the invariant operator by solving the eigenvalue equation
\begin{equation}\label{I1}
\hat I(t)\phi( x_1, x_2,t)=E \phi( x_1, x_2,t),
\end{equation}
where $E$ is a constant and $\phi(x_1,x_2,t)$ is an element of the Hilbert space
$\mathcal{H}$ on which this operator is defined.
In order to solve equation (\ref{I1}), we introduce the unitary operator
\begin{equation}
\hat U=\exp\left[-\frac{imf^{-1}\dot{\rho}}{2\rho}(\hat x_1^2+\hat x_2^2)\right],\,\,\,\,\, \hat U^\dag \hat U= \hat U\hat U^\dag =\mathbf{I}.
\end{equation}
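The usefulness of $\hat U$ stems from the way it shifts the momenta while leaving the positions unchanged; since the argument of the exponential is quadratic in $\hat x_j$, the Baker-Campbell-Hausdorff expansion terminates after the first commutator:
\begin{equation}
\hat U \hat x_j \hat U^\dag=\hat x_j,\qquad \hat U \hat p_j \hat U^\dag=\hat p_j+mf^{-1}\frac{\dot{\rho}}{\rho}\hat x_j,\qquad j=1,2,
\end{equation}
so that $\hat U\left(mf^{-1}\dot{\rho}\hat x_j-\rho\hat p_j\right)\hat U^\dag=-\rho\hat p_j$, which removes the cross terms from the invariant.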
Setting
\begin{equation}
\hat U\phi(x_1,x_2,t)=\phi'(x_1,x_2,t),
\end{equation}
and
\begin{equation}
\hat I'(t)=\hat U \hat I\hat U^\dag= \frac{1}{2}\left[\rho^2(\hat p_1^2+\hat p_2^2)+\frac{\nu^2}{\rho^2}(\hat x_1^2+\hat x_2^2)\right],
\end{equation}
it is easy to verify that
\begin{equation}\label{I2}
\hat I'(t)\phi'( x_1, x_2,t)=E \phi'( x_1, x_2,t),
\end{equation}
where $\phi'( x_1, x_2,t)\in\mathcal{H}$.
To make the diagonalization of equation (\ref{I2}) as transparent as possible, we introduce the lowering
and raising operators given by
\begin{eqnarray}
a_1'&=&\frac{1}{\sqrt{2\nu}}\left(\frac{\nu}{\rho}\hat x_1+i\rho \hat p_1\right),\,\,\,\,
{a'}_1^\dag=\frac{1}{\sqrt{2\nu}}\left(\frac{\nu}{\rho }\hat x_1-i\rho \hat p_1\right),\\
a_2'&=&\frac{1}{\sqrt{2\nu}}\left(\frac{\nu}{\rho}\hat x_2+i\rho \hat p_2\right),\,\,\,\,
{a'}_2^\dag=\frac{1}{\sqrt{2\nu}}\left(\frac{\nu}{\rho}\hat x_2-i\rho \hat p_2\right),
\end{eqnarray}
which satisfy the following commutation relations
\begin{eqnarray}
[a_1',{a'}_1^\dag]=\mathbf{I}= [a_2',{a'}_2^\dag],\,\,\,\,\,[a_1',a_2']=0= [{a'}_1^\dag,{a'}_2^\dag].
\end{eqnarray}
Let $n_1,n_2$ be arbitrary nonnegative integers and let $|\phi'_{n_1,n_2}(t)\rangle$ denote the orthonormalized
Fock basis states such that
\begin{eqnarray}
|\phi'_{n_1,n_2}(t)\rangle&=&\frac{1}{\sqrt{n_1!n_2!}}\left({a'_1}^\dag\right)^{n_1} \left({a'_2}^\dag\right)^{n_2}|\phi'_{0,0}(t)\rangle,\label{to}\\
\langle \phi'_{n_1,n_2}(t)|\phi'_{m_1,m_2}(t)\rangle&=&\delta_{n_1,m_1}\delta_{n_2,m_2},
\end{eqnarray}
where $|\phi'_{0,0}(t)\rangle$ is a normalized state annihilated by $a'_1$ and $a'_2$.
In order to determine the exact eigenfunctions $\phi_{n_1,n_2}(x_1,x_2,t)$ of the invariant operator $\hat I(t)$, we first express the
ground state $|\phi_{0,0}(t)\rangle$ in the configuration-space basis as follows
\begin{eqnarray}
\phi_{0,0}(x_1,x_2,t)&=&U^\dag\langle x_1|\phi_0'(t)\rangle\langle x_2|\phi_0'(t)\rangle\cr
&=&\left(\frac{\nu}{\pi\rho^2}\right)^{\frac{1}{2}}\exp\left[ \left(imf^{-1}\frac{\dot{\rho}}{\rho}-\frac{\nu}{\rho^2}\right)
\left (\frac{x_1^2+x_2^2}{2}\right)\right].
\end{eqnarray}
Then, the excited-state eigenfunctions are obtained from (\ref{to}) as
\begin{eqnarray}\label{ajou}
\phi_{n_1,n_2}(x_1,x_2,t)&=& U^\dag \phi_{n_1,n_2}'(x_1,x_2,t)\cr
&=&\frac{1}{\rho}\left(\frac{\nu}{2^{n_1+n_2}\pi\, n_1!n_2!}\right)^{\frac{1}{2}}
H_{n_1}\left(x_1\frac{\sqrt{\nu}}{\rho}\right)
H_{n_2}\left(x_2\frac{\sqrt{\nu}}{\rho}\right)\cr&&\times\exp\left[\left(imf^{-1}\frac{\dot{\rho}}{\rho}-\frac{\nu}{\rho^2}\right)\left(\frac{x_1^2}{2}
+\frac{x_2^2}{2}\right)\right],
\end{eqnarray}
where $H_{n_1}$ and $H_{n_2}$ are the Hermite polynomials of order $n_1$ and $n_2$.
To obtain the eigenvalues $E_{n_1,n_2}$ of the invariant operator $\hat I(t)$, let us introduce a
new pair of raising and lowering operators defined as
\begin{eqnarray}
a_j&=&U^\dag a'_j U=\frac{1}{\sqrt{2\nu}}\left(mf^{-1}\dot{\rho}\hat x_j-\rho \hat p_j+i\frac{\nu}{\rho}\hat x_j\right)\label{x7},\\
a_j^\dag&=&U^\dag {a'}_j^\dag U=\frac{1}{\sqrt{2\nu}}\left(mf^{-1}\dot{\rho}\hat x_j-\rho \hat p_j-i\frac{\nu}{\rho}\hat x_j\right) \label{x8}.
\end{eqnarray}
with $j=1,2$. In terms of these operators the invariant operator $\hat I(t)$ takes the form
\begin{eqnarray}
\hat I(t)&=& \nu\left(a_1^\dag a_1+a_2^\dag a_2+\mathbf{I}\right).
\end{eqnarray}
The action of $a_j$ and $a_j^\dag$ on $ |\phi_{n_j}(t)\rangle$ is given by
\begin{eqnarray}
a_j^\dag|\phi_{n_j}(t)\rangle&=&\sqrt{n_j+1}|\phi_{n_j+1}(t)\rangle,\\
a_j|\phi_{n_j}(t)\rangle&=& \sqrt{n_j}|\phi_{n_j-1}(t)\rangle,\\
a_j^\dag a_j|\phi_{n_j}(t)\rangle&=& n_j|\phi_{n_j}(t)\rangle.
\end{eqnarray}
Based on these definitions, the invariant is diagonalized as follows
\begin{eqnarray}
\hat I(t)|\phi_{n_1,n_2}(t)\rangle&=& \nu\left(n_1+n_2+1\right)|\phi_{n_1,n_2}(t)\rangle.
\end{eqnarray}
Since the Hamiltonian is time-dependent, the Schr\"odinger equation of the system reads
\begin{equation}\label{x9}
i\frac{\partial }{\partial t}\psi(x_1,x_2,t)
=\hat H(t)\psi(x_1, x_2,t),\,\,\,\,\,\, \psi(x_1,x_2,t)\in \mathcal{H}
\end{equation}
where the eigenfunction $\psi(x_1,x_2,t)$ is related to $\phi (x_1,x_2,t)$ by
\begin{equation}\label{x10}
\psi_{n_1,n_2} (x_1,x_2,t)=e^{i\theta_{n_1,n_2}(t)}\phi_{n_1,n_2}(x_1,x_2,t).
\end{equation}
Inserting this equation in (\ref{x9}), one determines the phase function $\theta_{n_1,n_2}(t)$
in the form
\begin{equation}\label{w3}
\theta_{n_1,n_2}(t)=\int_0^t \langle \phi_{n_1,n_2} (t')| i\frac{\partial}{\partial t'}-\hat H(t')|\phi_{n_1,n_2}(t')\rangle dt'.
\end{equation}
However, as we pointed out in the previous section, this system possesses a conserved angular momentum
\begin{eqnarray}\label{w2}
\hat L_z&=&\hat x_1\hat p_2-\hat x_2\hat p_1\cr
&=&i(a_2^\dag a_1-a_1^\dag a_2),
\end{eqnarray}
which commutes with the invariant operator and with the Hamiltonian
\begin{eqnarray}
[\hat L_z,\hat I(t)]=0,\,\,\,\,\,\,\,\,\,[\hat L_z,\hat H(t)]=0.
\end{eqnarray}
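This is expected, since $\hat L_z$ generates rotations in the $(x_1,x_2)$ plane, while both $\hat H(t)$ and $\hat I(t)$ are built solely from the rotationally invariant combinations $\hat J_\pm$ and $\hat J_0$, each of which commutes with $\hat L_z$:
\begin{equation}
[\hat L_z,\hat J_\pm]=0,\qquad [\hat L_z,\hat J_0]=0.
\end{equation}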
Although the operator $\hat L_z$ commutes with both $ \hat I(t)$ and $ \hat H(t)$, the basis $|\phi_{n_1,n_2}(t)\rangle$ cannot diagonalize them simultaneously.
Therefore, it is convenient to find another basis of Hilbert space that diagonalizes these operators.
\section{ Eigensystems of the Hamiltonian operator}\label{sec4}
To find a basis of the Hilbert space that simultaneously diagonalizes the invariant operator, the angular momentum
and the Hamiltonian of the system, let us consider the helicity Fock algebra generators defined as follows
\begin{eqnarray}\label{v1}
a_{\pm}'&=&\frac{1}{\sqrt{2}}\left(a_1'\pm ia_2'\right),\,\,\, a_{\pm}'^\dag=\frac{1}{\sqrt{2}}\left(a_1'^\dag\mp ia_2'^\dag\right),
\end{eqnarray}
with
\begin{eqnarray}
[a_\pm',a_\pm'^\dag]=\mathbf{I},\,\,\,\,\,\, [a_\pm',a_\mp'^\dag]=0,
\end{eqnarray}
where $a_1', a_2',{a_1'}^\dag, {a_2'}^\dag$ are the operators defined above.
The associated helicity-like basis states $|\phi_{n_+,n_-}'(t)\rangle$ are defined as follows
\begin{eqnarray}\label{v2}
|\phi_{n_+,n_-}'(t)\rangle&=&\frac{1}{\sqrt{n_+!n_-!}}\left(a_+'^\dag\right)^{n_+} \left(a_-'^\dag\right)^{n_-}|\phi_{0,0}'(t)\rangle,\\
\langle \phi_{n_+,n_-}'(t)|\phi_{m_+,m_-}'(t)\rangle&=&\delta_{n_+,m_+}\delta_{n_-,m_-},
\end{eqnarray}
where $|\phi_{0,0}'(t)\rangle$ is a normalized state annihilated by $a_\pm'$, as well as by $a_1'$ and $a_2'$.
In order to find the exact expression of the joint eigenfunction of the invariant operator and the angular momentum, we introduce the polar coordinates
through the following canonical transformation $\hat x_1=r\cos\alpha,\,\,\hat x_2=r\sin\alpha,\,
\hat p_1=-i(\cos\alpha \partial_r-\frac{\sin\alpha}{r}\partial_\alpha)$ and
$\hat p_2=-i(\sin\alpha \partial_r+\frac{\cos\alpha}{r}\partial_\alpha)$.
In terms of these coordinates the operators in equation (\ref{v1}) can be written as
\begin{eqnarray}
{a'_\pm}^\dag&=&\frac{1}{2}e^{\mp i\alpha}\left[\left(\frac{\nu}{\rho}r-\rho\partial_r\right)\pm i\frac{\rho}{r}\partial_\alpha\right],\\
a'_\pm&=& \frac{1}{2}e^{\pm i\alpha}\left[\left(\frac{\nu}{\rho}r+\rho\partial_r\right)\mp i\frac{\rho}{r}\partial_\alpha\right].
\end{eqnarray}
From the relation (\ref{v2}) we construct the eigenfunction for the invariant operator of the system according to \cite{50}. One finds
\begin{eqnarray}\label{v}
\phi_{n_+,n_-}(x_1,x_2,t)&=&U^\dag \phi_{n_+,n_-}'(x_1,x_2,t),
\end{eqnarray}
that is
\begin{eqnarray}
\phi_{n_+,n_-}(x_1,x_2,t)&=&(-)^n\frac{(\nu)^{\frac{1+|\ell|}{2}}}{\rho^{1+|\ell|}\sqrt{\pi}}\sqrt{\frac{n!}{\Gamma(n+|\ell|+1)}}r^{|\ell|}e^{\left(imf^{-1}
\frac{\dot{\rho}}{\rho}-\frac{\nu}{\rho^2}\right)\frac{r^2}{2}} \cr&&\times L_n^{|\ell|}\left(\frac{\nu}{\rho^2}r^2\right)e^{i\ell\alpha},
\end{eqnarray}
where $\ell=n_+-n_-$,\,\,\,$n=\min(n_+,n_-)=\frac{1}{2}(n_++n_--|\ell|)$, $\Gamma(u)$ is the Gamma function and $L_n^{|\ell|}\left(u\right)$
are the generalised Laguerre polynomials.
To obtain the expectation values $E_{n_\pm},l_{n_\pm},\mathcal{E}_{n_\pm}$ of the operators $\hat I(t), \hat L_z, \hat H(t)$, respectively, we introduce a new
pair of raising and lowering helicity operators defined as
\begin{eqnarray}
a_\pm&=&U^\dag a_\pm' U= \frac{1}{2\sqrt{\nu}}\left[\left(mf^{-1}\dot{\rho}+i\frac{\nu}{\rho}\right)
\left(\hat x_1\pm i\hat x_2\right)-\rho\left(\hat p_1\pm i\hat p_2\right)\right],\\
a_\pm^\dag&=&U^\dag a_\pm'^\dag U=\frac{1}{2\sqrt{\nu}}\left[\left(mf^{-1}\dot{\rho}-i\frac{\nu}{\rho}\right)
\left(\hat x_1\mp i\hat x_2\right)-\rho\left(\hat p_1\mp i\hat p_2\right)\right].
\end{eqnarray}
In terms of these operators we have
\begin{eqnarray}
\hat I(t)&=& \nu\left(a_+^\dag a_++a_-^\dag a_-+\mathbf{I}\right),\\
\hat L_z&=&\left(a_-^\dag a_--a_+^\dag a_+\right),\\
\hat H(t)&=&\frac{1}{2\nu}\left(mf^{-1}\dot{\rho}^2+\frac{f\nu^2}{m\rho^2}+m\omega^2f^{-1}\rho^2\right)\left(a_+^\dag a_+ +a_-^\dag a_-+\mathbf{I}\right)+\cr&&
\left(-\frac{mf^{-1}\dot{\rho}}{2\nu}+i\frac{\dot{\rho}^2}{\rho}+\frac{f\nu}{2m\rho^2}-\frac{m\omega^2f^{-1}\rho^2}{2\nu}\right)a_-a_++\cr&&
\left(-\frac{mf^{-1}\dot{\rho}}{2\nu}-i\frac{\dot{\rho}^2}{\rho}+\frac{f\nu}{2m\rho^2}-\frac{m\omega^2f^{-1}\rho^2}{2\nu}\right)a_-^\dag a_+^\dag.
\end{eqnarray}
The expectation values of the above operators read
\begin{eqnarray}
E_{n_\pm}&=&\langle \phi_{n_+,n_-}(t)|\hat I(t)|\phi_{n_+,n_-}(t)\rangle=\nu\left(n_++n_-+1\right), \\
l_{n_\pm}&=&\langle \phi_{n_+,n_-}(t)|\hat L_z|\phi_{n_+,n_-}(t)\rangle=n_--n_+,\\
\mathcal{E}_{n_\pm}&=&\langle \phi_{n_+,n_-}(t)|\hat H(t)|\phi_{n_+,n_-}(t)\rangle=
\frac{1}{2\nu}\left(mf^{-1}\dot{\rho}^2+\frac{f\nu^2}{m\rho^2}+m\omega^2f^{-1}\rho^2\right) \cr&&\times
\left(n_+ +n_-+1\right),\label{af}
\end{eqnarray}
where the action of $a_\pm$ and $a_\pm^\dag$ on $ |\phi_{n_\pm}(t)\rangle$ is given by
\begin{eqnarray}
a_\pm^\dag|\phi_{n_\pm,n_\mp}(t)\rangle&=&\sqrt{n_\pm+1}|\phi_{n_\pm+1,n_\mp}(t)\rangle,\\
a_\pm|\phi_{n_\pm,n_\mp}(t)\rangle&=& \sqrt{n_\pm}|\phi_{n_\pm-1,n_\mp}(t)\rangle,\\
a_\pm^\dag a_\pm|\phi_{n_\pm,n_\mp}(t)\rangle&=& n_\pm|\phi_{n_\pm,n_\mp}(t)\rangle.
\end{eqnarray}
To determine the exact solution of the Schr\"odinger equation (\ref{x9}), we have to find the exact
expression of the phase function in equation (\ref{w3}) such that
\begin{eqnarray}\label{t7}
\frac{d}{dt}\theta_{n_1,n_2}(t)&=&\langle \phi_{n_+,n_-}(t)|i\frac{\partial }{\partial t}-\hat H(t)|\phi_{n_+,n_-}(t)\rangle\cr
&=& \langle \phi_{n_+,n_-}(t)|i\frac{\partial }{\partial t}|\phi_{n_+,n_-}(t)\rangle-
\langle \phi_{n_+,n_-}(t)|\hat H(t)|\phi_{n_+,n_-}(t)\rangle.
\end{eqnarray}
Let us evaluate the following expression
\begin{eqnarray}\label{w4}
\langle \phi_{n_+,n_-}(t)|\frac{\partial}{\partial t}|\phi_{n_+,n_-}(t)\rangle&=&\frac{1}{\sqrt{n_+!n_-!}}
\langle \phi_{n_+,n_-}(t)|\frac{\partial}{\partial t}\left[\left(a_+^\dag\right)^{n_+} \left(a_-^\dag\right)^{n_-}|\phi_{0,0}(t)\rangle\right]\cr
&=&\langle \phi_{0,0}(t)|\frac{\partial}{\partial t}|\phi_{0,0}(t)\rangle+\frac{1}{\sqrt{n_+!n_-!}}\cr&&\times
\langle \phi_{n_+,n_-}(t)|\frac{\partial}{\partial t}\left[\left(a_+^\dag\right)^{n_+} \left(a_-^\dag\right)^{n_-}\right]|\phi_{0,0}(t)\rangle.
\end{eqnarray}
We have
\begin{eqnarray}\label{w5}
\langle \phi_{0,0}(t)|\frac{\partial}{\partial t}|\phi_{0,0}(t)\rangle=\frac{imf^{-1}}{2\nu}(\ddot{\rho}\rho+\eta\dot{\rho}\rho-\dot{\rho}^2),
\end{eqnarray}
and
\begin{eqnarray}\label{w6}
\frac{1}{\sqrt{n_+!n_-!}}
\langle \phi_{n_+,n_-}(t)|\frac{\partial}{\partial t}\left[\left(a_+^\dag\right)^{n_+} \left(a_-^\dag\right)^{n_-}\right]|\phi_{0,0}(t)\rangle&=&
\frac{imf^{-1}}{2\nu}\left(\ddot{\rho}\rho+\eta\dot{\rho}\rho-\dot{\rho}^2\right)\cr&&\times(n_++n_-),
\end{eqnarray}
where the expressions of $\frac{\partial a_+^\dag}{\partial t}$ and $\frac{\partial a_-^\dag}{\partial t}$ in terms of $a_\pm$ and $a_\pm^\dag$ are
\begin{eqnarray}
\frac{\partial a_+^\dag}{\partial t}&=& \frac{1}{2\sqrt{\nu}}\left[\left(m f^{-1}\eta\dot{\rho}+m f^{-1}\ddot{\rho}+i\nu\frac{\dot{\rho}}{\rho^2}
\right)(\hat x_1-i\hat x_2)-\dot{\rho}(\hat p_1-i\hat p_2)\right]\cr
&=&\frac{imf^{-1}}{2\nu}\left(\ddot{\rho}\rho+\eta\dot{\rho}\rho-\dot{\rho}^2\right)a_+^\dag+\left[\frac{\dot{\rho}}{\rho}-\frac{imf^{-1}}{2\nu}
\left(\ddot{\rho}\rho+\eta\dot{\rho}\rho-\dot{\rho}^2\right)\right]a_-,\\
\frac{\partial a_-^\dag}{\partial t}&=& \frac{1}{2\sqrt{\nu}}\left[\left(m f^{-1}\eta\dot{\rho}+m f^{-1}\ddot{\rho}+i\nu\frac{\dot{\rho}}{\rho^2}
\right)(\hat x_1+i\hat x_2)-\dot{\rho}(\hat p_1+i\hat p_2)\right]\cr
&=&\frac{imf^{-1}}{2\nu}\left(\ddot{\rho}\rho+\eta\dot{\rho}\rho-\dot{\rho}^2\right)a_-^\dag+\left[\frac{\dot{\rho}}{\rho}-\frac{imf^{-1}}{2\nu}
\left(\ddot{\rho}\rho+\eta\dot{\rho}\rho-\dot{\rho}^2\right)\right]a_+.
\end{eqnarray}
We then find
\begin{eqnarray}\label{w7}
\langle \phi_{n_+,n_-}(t)|\frac{\partial}{\partial t}|\phi_{n_+,n_-}(t)\rangle&=&
\frac{imf^{-1}}{2\nu}\left(\ddot{\rho}\rho+\eta\dot{\rho}\rho-\dot{\rho}^2\right)(n_++n_-+1)\cr
&=&\frac{imf^{-1}}{2\nu}\left(\frac{\nu^2f^2}{m^2\rho^2}-\omega^2\rho^2-\dot{\rho}^2\right)(n_++n_-+1).
\end{eqnarray}
Finally, taking into account (\ref{af}) and (\ref{w7}), we find that the phase function in (\ref{t7}) is given
by
\begin{equation}
\theta_{n_+,n_-}(t)=-\frac{\nu}{m}(n_++n_-+1)\int_0^t\frac{f(t')}{\rho^2(t')}dt'.
\end{equation}
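As a consistency check, consider the time-independent limit $f=1$, $\omega=\omega_0$ constant, for which $\eta=0$ and the Ermakov-Pinney equation (\ref{e4}) admits the static solution $\rho^2=\nu/(m\omega_0)$. In this limit $\langle \phi_{n_+,n_-}|\partial_t|\phi_{n_+,n_-}\rangle=0$, so equation (\ref{t7}) gives $\dot{\theta}_{n_+,n_-}=-\mathcal{E}_{n_\pm}$ and hence
\begin{equation}
\theta_{n_+,n_-}(t)=-\omega_0\left(n_++n_-+1\right)t,
\end{equation}
which is the standard dynamical phase of the two-dimensional harmonic oscillator.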
Our result for $\theta_{n_+,n_-}(t)$ confirms the $2$D result of \cite{26}, differs slightly from the one calculated in \cite{14}, and
differs markedly from our previous result \cite{51}. In fact, in the presence of an external electromagnetic field $[\vec B(\vec x,t),
\vec E(\vec x,t)]$, we recover the phase function of \cite{26} through the contribution of
the magnetic field $\vec B(\vec x,t)$, which induces a minimal coupling in the Hamiltonian $\hat H(\vec x,\vec p,t)$.
Moreover, in addition to the magnetic-field contribution, applying appropriate canonical and gauge transformations associated with the
electric field $\vec E(\vec x,t)$ extends this phase function to the one obtained in \cite{51}.
The solution of the Schr\"odinger equation is given by
\begin{eqnarray}\label{vv}
\psi_{n, \ell}(x_1,x_2,t)&=& (-)^n\frac{(\nu)^{\frac{1+|\ell|}{2}}}{\rho^{1+|\ell|}\sqrt{\pi}}
\sqrt{\frac{n!}{\Gamma(n+|\ell|+1)}}r^{|\ell|}e^{\left(imf^{-1}
\frac{\dot{\rho}}{\rho}-\frac{\nu}{\rho^2}\right)\frac{r^2}{2}}\cr&& \times L_n^{|\ell|}\left(\frac{\nu}{\rho^2}r^2\right)e^{i\ell\alpha}e^{i\theta_{n,\ell}(t)}.
\end{eqnarray}
Furthermore, one can deduce from the Lagrangian (\ref{eq1}) the kinetic momentum $p_{k_j}$ such that
\begin{eqnarray}
p_{k_j}=\frac{\partial L}{\partial \dot{x}_j}=f(t)p_j,\,\,\,\,\,\,\,j=1,2,
\end{eqnarray}
where $p_j$ is the canonical momentum and $ p_{k_j}=m\dot{x}_j$. The mechanical energy of the system, in terms of the Hamiltonian (\ref{x1}), reads
\begin{eqnarray}\label{E_m}
E_m&=&\frac{m}{2}\dot{x}_j^2+\frac{m\omega^2(t)}{2}x_j^2\cr
&=&f(t) H(t).
\end{eqnarray}
As pointed out by several authors in the literature \cite{52,53,54,55,56},
the quantization of this dissipative system for the particular choice
$f(t)=e^{-\gamma t}$ through a non-inertial canonical transformation is inconsistent with the laws of quantum theory:
the expectation value of the energy tends to zero instead of the quantum ground-state energy,
and the Heisenberg uncertainty relations are violated as time goes to infinity
($t\rightarrow\infty$). In the present approach, the expectation value of the mechanical energy (\ref{E_m}) is given by
\begin{eqnarray}
\langle\psi_{n,\ell}| E_m|\psi_{n,\ell}\rangle&=&\frac{1}{2\nu}\left(m\dot{\rho}^2+
\frac{f^2\nu^2}{m\rho^2}+m\omega^2\rho^2\right)\left(2n+|\ell|+1\right)
\end{eqnarray}
and
\begin{eqnarray}
\lim_{t\rightarrow\infty}\langle\psi_{n,\ell}| E_m|\psi_{n,\ell}\rangle\neq0
\end{eqnarray}
for any real function $f(t)$.
One infers that the zero-point energy problem caused by the use of the non-inertial canonical transformation
is resolved by the Lewis-Riesenfeld method. In the next section, we check the validity
of the generalized version of the Heisenberg uncertainty relations.
\section{Heisenberg's uncertainty relations}\label{sec5}
To prove the validity of the generalized uncertainty relations (\ref{He}) with $\hbar=1$, we start
with the determination of the standard expectation values of the operators $\hat x_1,\hat x_2,\hat p_1,\hat p_2$ and $\hat p_{k_j}$
\begin{eqnarray}
\langle \psi_{n,\ell}|\hat x_1|\psi_{n,\ell}\rangle&=& \langle \psi_{n,\ell}|\hat x_2|\psi_{n,\ell}\rangle=0,\\
\langle \psi_{n,\ell}|\hat p_1|\psi_{n,\ell}\rangle&=& \langle \psi_{n,\ell}|\hat p_2|\psi_{n,\ell}\rangle=0,\\
\langle \psi_{n,\ell}|\hat x_1^2|\psi_{n,\ell}\rangle&=&\langle \psi_{n,\ell}|\hat x_2^2|\psi_{n,\ell}\rangle= \frac{\rho^2}{2\nu}\left(2n+|\ell|+1\right),\\
\langle \psi_{n,\ell}|\hat p_1^2|\psi_{n,\ell}\rangle&=&\langle \psi_{n,\ell}|\hat p_2^2|\psi_{n,\ell}\rangle=
\left(2n+|\ell|+1\right) \left(\frac{m^2f^{-2}\dot{\rho}^2}{2\nu}+\frac{\nu}{2\rho^2}\right),\\
\langle \psi_{n,\ell}|[\hat x_1,\hat p_1]|\psi_{n,\ell}\rangle&=& \langle \psi_{n,\ell}|[\hat x_2,\hat p_2]|\psi_{n,\ell}\rangle= i,\\
\langle \psi_{n,\ell}|[\hat x_1,\hat p_{k_1}]|\psi_{n,\ell}\rangle&=& \langle \psi_{n,\ell}|[\hat x_2,\hat p_{k_2}]|\psi_{n,\ell}\rangle= if(t).
\end{eqnarray}
The dispersions of these operators are computed to be
\begin{eqnarray}
\Delta x_1=\Delta x_2&=&\sqrt{\frac{\rho^2}{2\nu}\left(2n+|\ell|+1\right)},\\
\Delta p_1=\Delta p_2&=&\sqrt{\frac{1}{2}\left(2n+|\ell|+1\right) \left(\frac{m^2f^{-2}\dot{\rho}^2}{\nu}+\frac{\nu}{\rho^2}\right)},\\
\Delta p_{k_1}=\Delta p_{k_2}&=& f(t)\sqrt{\frac{1}{2}\left(2n+|\ell|+1\right) \left(\frac{m^2f^{-2}\dot{\rho}^2}{\nu}+\frac{\nu}{\rho^2}\right)}.
\end{eqnarray}
The Heisenberg uncertainty relations can then be inferred:
\begin{eqnarray}
\Delta x_1 \Delta p_1&=&\Delta x_2 \Delta p_2=\frac{1}{2}\left(2n+|\ell|+1\right)\sqrt{1+\frac{m^2f^{-2}\dot{\rho}^2\rho^2}{\nu^2}}\geq \frac{1}{2},\\
\Delta x_1 \Delta p_{k_1}&=&\Delta x_2 \Delta p_{k_2}=\frac{f(t)}{2}\left(2n+|\ell|+1\right)
\sqrt{1+\frac{m^2f^{-2}\dot{\rho}^2\rho^2}{\nu^2}}\geq \frac{f(t)}{2},\label{p1}\\
\Delta x_1\Delta x_2&=& \frac{\rho^2}{2\nu}\left(2n+|\ell|+1\right)\geq 0,\\
\Delta p_1\Delta p_2&=& \left(2n+|\ell|+1\right) \left(\frac{m^2f^{-2}\dot{\rho}^2}{2\nu}+\frac{\nu}{2\rho^2}\right)\geq 0,\\
\Delta p_{k_1}\Delta p_{k_2}&=& f^2(t)\left(2n+|\ell|+1\right) \left(\frac{m^2f^{-2}\dot{\rho}^2}{2\nu}+\frac{\nu}{2\rho^2}\right)\geq 0.
\end{eqnarray}
These results are related to similar discussions in \cite{10}. In the present case
the uncertainty relations are satisfied, except for the relation in equation (\ref{p1}). In fact,
this uncertainty product may tend to zero if $\lim_{t\rightarrow \infty}f(t)= 0$ (for instance $f(t)=e^{-\gamma t}$).
This result seems to violate the Heisenberg uncertainty relations but,
as observed by the authors of \cite{52,53,54,55,56}, it does not contradict quantum theory,
because the uncertainty relations hold only for the canonically conjugate operators $\hat x_j$ and $\hat p_j$.
Accordingly, the Lewis-Riesenfeld approach removes all
the major objections related to this model.
We can also remark that, with this approach, the determination of the spectrum led to the introduction of
a nonstationary discrete eigenbasis. To convert this spectrum into a nonstationary
continuous one, it is useful to introduce a continuous basis in which the diagonalization is possible. In this sense, coherent
states are the best candidates for this purpose. In the literature, various coherent states \cite{57,58,59} have been constructed for different Lie algebras.
To construct appropriate coherent states for this system, whose
eigenfunctions are expressed in terms of generalized Laguerre functions as in \cite{61,62,63,64,65,66}, we factorize these eigenfunctions
to find the hidden symmetry of the
system through the establishment of an appropriate Lie algebra.
\section{The hidden dynamical Lie algebra}\label{sec6}
In this section we construct, from the Hamiltonian's eigenfunctions, the raising and lowering operators that generate the hidden Lie algebra.
Since the eigenfunctions of the invariant operator and the Hamiltonian are expressed in terms of the generalized Laguerre
functions $L_n^{\ell}(u)$ with $\ell>0$,
it is important to review some useful properties of this special function that will be used to generate the symmetry operators.
The generalized Laguerre polynomials $L_n^{\ell}(u)$ are defined as \cite{67}
\begin{eqnarray}
L_n^{\ell}(u)&=&\frac{1}{n!}e^u u^{-\ell}\frac{d^n}{du^n}(e^{-u}u^{n+\ell}).
\end{eqnarray}
For $\ell=0$, $L_n^0(u)=L_n(u)$, and for $n=0$, $L_0^{\ell}(u)=1$. The generating functions of the associated
Laguerre polynomials are
\begin{eqnarray}
\frac{e^{\frac{uz}{z-1}}}{(1-z)^{1+\ell}}&=&\sum_{n=0}^\infty L_n^{\ell}(u)z^n,\,\,\,\,\,\,|z|<1,\label{g}\\
J_{\ell}\left(2\sqrt{uz}\right)e^z(uz)^{-\frac{\ell}{2}}&=&\sum_{n=0}^\infty\frac{z^n}{\Gamma(n+\ell+1)}L_n^{\ell}(u)\label{j},
\end{eqnarray}
where $J_\kappa(x)$ is the ordinary Bessel function of order $\kappa$.
The orthogonality relation is
\begin{eqnarray}
\int_0^{\infty}du e^{-u}u^{\ell}L_n^{\ell}(u)L_m^{\ell}(u)= \frac{\Gamma(\ell+n+1)}{n!} \delta_{nm}.
\end{eqnarray}
The generalised Laguerre polynomials satisfy the following differential equation
\begin{equation}
\left[u\frac{d^2}{du^2}+(\ell-u+1)\frac{d}{du}+n\right]L_n^{\ell}(u)=0,
\end{equation}
and the recurrence relations
\begin{eqnarray}
(n+1)L_{n+1}^{\ell}(u)-\left(2n+\ell+1-u\right)L_n^{\ell}(u)+\left(n+\ell\right)L_{n-1}^{\ell}(u)&=&0,\label{r1}\\
u\frac{d}{du}L_n^{\ell}(u)-nL_n^{\ell}(u)+(n+\ell)L_{n-1}^{\ell}(u)&=&0.\label{r2}
\end{eqnarray}
Using these relations, we rewrite the eigenfunction of the invariant operator in equation (\ref{ajou}) in the form
\begin{equation}
\phi_n^{\ell}(u)=N(\rho,\alpha)\sqrt{\frac{n!}{\Gamma(n+\ell+1)}}u^{\frac{\ell}{2}}e^{-\frac{\varpi}{2}u}L_n^{\ell}(u),
\end{equation}
where $u=\frac{\nu}{\rho^2}r^2$, $N(\rho,\alpha)=(-)^n\sqrt{\frac{\nu}{\pi\rho^2}}e^{i\ell\alpha}$,\,\,\,\,
$\varpi=1-imf^{-1}\frac{\rho\dot{\rho}}{\nu}$ and $\Gamma(n)=(n-1)!$.\\
Based on the recurrence relations (\ref{r1}) and (\ref{r2}), we obtain the following equations
\begin{eqnarray}
\left(-u\frac{d}{du}+\frac{\ell}{2}+n-\frac{\varpi}{2}u\right)\phi_n^{\ell}(u)&=&\sqrt{n(n+\ell)}\phi_{n-1}^{\ell}(u),\\
\left(u\frac{d}{du}+\frac{\ell}{2}+n-\frac{\tilde \varpi}{2}u+1\right)\phi_n^{\ell}(u)&=&\sqrt{(n+1)(n+\ell+1)}\phi_{n+1}^{\ell}(u),
\end{eqnarray}
where $\tilde \varpi=2-\varpi$.
For the sake of simplicity we define the raising operator $K_+$ and the lowering operator $K_-$ acting on the wave function $ \phi_n^{\ell}(u)$ as
\begin{eqnarray}
K_-=\left(-u\frac{d}{du}+\frac{\ell}{2}+n-\frac{\varpi}{2}u\right),\\
K_+=\left(u\frac{d}{du}+\frac{\ell}{2}+n-\frac{\tilde \varpi}{2}u+1\right),
\end{eqnarray}
and hence obtain
\begin{eqnarray}
K_-\phi_n^{\ell}(u)&=&\sqrt{n(n+\ell)}\phi_{n-1}^{\ell}(u),\\
K_+\phi_n^{\ell}(u)&=&\sqrt{(n+1)(n+\ell+1)}\phi_{n+1}^{\ell}(u).
\end{eqnarray}
By multiplying both sides of the latter equations by the factor $e^{i\theta_{n,\ell}(t)}$, we obtain
\begin{eqnarray}
K_-\psi_n^{\ell}(u)&=& \sqrt{n(n+\ell)}\psi_{n-1}^{\ell}(u),\label{s}\\
K_+\psi_n^{\ell}(u)&=&\sqrt{(n+1)(n+\ell+1)}\psi_{n+1}^{\ell}(u).
\end{eqnarray}
By successively applying $K_+$ to the ground state $\psi_0^{\ell} (u)$, we generate the eigenfunctions $\psi_n^{\ell} (u)$ of the system
as follows
\begin{eqnarray}
\psi_n^{\ell}(u)&=&\sqrt{\frac{\Gamma(1+\ell)}{n!\Gamma(n+\ell+1)}}(K_+)^n\psi_0^{\ell} (u),
\end{eqnarray}
where
\begin{eqnarray}
\psi_0^{\ell}(u)&=&\frac{N(\rho,\alpha)}{\sqrt{\Gamma(\ell+1)}}u^{\frac{\ell}{2}}e^{-\frac{\varpi}{2}u}e^{i\theta_{n,\ell}(t)},\\
K_- \psi_0^{\ell}(u)&=&0.
\end{eqnarray}
One can also observe that the following relations are satisfied
\begin{eqnarray}
K_+K_-\psi_n^{\ell} (u)&=&n(n+\ell)\psi_n^{\ell} (u),\\
K_-K_+\psi_n^{\ell} (u)&=&(n+1)(n+\ell+1)\psi_n^{\ell} (u).
\end{eqnarray}
Now, to establish the dynamical Lie algebra associated with the ladder operators $K_\pm$, we calculate the
commutator
\begin{equation}
[K_-,K_+]\psi_n^{\ell}(u)=(2n+\ell+1)\psi_n^{\ell}(u).
\end{equation}
As a consequence, we can introduce the operator $K_0$ defined to satisfy
\begin{eqnarray}
K_0\psi_n^{\ell}(u)=\frac{1}{2}(2n+\ell+1)\psi_n^{\ell}(u).
\end{eqnarray}
The operators $K_\pm$ and $K_0$ satisfy the following commutation relations
\begin{eqnarray}
[K_-,K_+]=2K_0,\,\,[K_0,K_\pm]=\pm K_\pm,
\end{eqnarray}
which can be recognized as the commutation relations of the generators of the non-compact Lie
algebra $su(1,1)$. The corresponding Casimir operator, which for any irreducible representation is a multiple of the identity, is
\begin{eqnarray}
K^2=K_0^2-\frac{1}{2}(K_+K_-+K_-K_+)=\frac{1}{4}(\ell+1)(\ell-1).
\end{eqnarray}
It satisfies
\begin{equation}
[K^2,K_\pm]=0=[K^2,K_0].
\end{equation}
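The value of the Casimir operator can be checked directly on the basis $\psi_n^{\ell}(u)$, using the actions of $K_0$ and $K_\pm$ given above:
\begin{eqnarray}
K^2\psi_n^{\ell}(u)&=&\left[\frac{(2n+\ell+1)^2}{4}-\frac{n(n+\ell)+(n+1)(n+\ell+1)}{2}\right]\psi_n^{\ell}(u)\cr
&=&\frac{1}{4}(\ell+1)(\ell-1)\psi_n^{\ell}(u),
\end{eqnarray}
independently of $n$, as required for an irreducible representation.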
If we make the following connection between the physical quantum numbers $(n,\ell)$ and the ordinary $SU(1,1)$ group numbers $(n,k)$, namely
\begin{eqnarray}
\ell=2k-1,
\end{eqnarray}
then we recover the ordinary discrete representations of the $su(1,1)$ Lie algebra
\begin{eqnarray}
K^2\psi_n^{k}(u)&=&k(k-1)\psi_n^{k}(u),\\
K_-\psi_n^{k}(u)&=& \sqrt{n(n+2k-1)}\psi_{n-1}^{k}(u),\\
K_+\psi_n^{k}(u)&=&\sqrt{(n+1)(n+2k)}\psi_{n+1}^{k}(u),\\
K_0\psi_n^{k}(u)&=& (n+k)\psi_n^{k}(u).
\end{eqnarray}
Thus, in what follows we use the index $\ell$ instead of the ordinary Bargmann index $k$ in the representation of the $su(1, 1)$ algebra.
Now, with the properties of the generators $K_\pm$ and $K_0$ of the $su(1,1)$ algebra, we are in a position to construct the corresponding
coherent states for this system.
\section{SU(1,1) coherent states}\label{sec7}
In this section we investigate the $SU(1,1)$ coherent states by adopting the Barut-Girardello \cite{30} and Perelomov \cite{31} approaches. For each
approach, we examine the resolution of the identity and the overlap properties.
\subsection{Barut-Girardello coherent states}
\subsubsection{Construction}
Following the Barut-Girardello approach \cite{30}, the $SU(1,1)$ coherent states are defined as
the eigenstates of the lowering generator $K_-$
\begin{equation}\label{Bar1}
K_-|\psi_z^\ell\rangle=z |\psi_z^\ell\rangle,
\end{equation}
where $z$ is an arbitrary complex number. Based on the completeness relation
$\sum_{n=0}^\infty|\psi_n^{\ell}\rangle\langle \psi_n^{\ell}|=\mathbf{I}$, one
can represent the coherent states $|\psi_z^\ell\rangle$ as follows
\begin{eqnarray}\label{Bar2}
|\psi_z^\ell\rangle&=& \sum_{n=0}^\infty \langle \psi_n^{\ell}|\psi_z^\ell\rangle|\psi_n^{\ell}\rangle.
\end{eqnarray}
Acting with the operator $K_-$ on equation (\ref{Bar2}) and using equations (\ref{Bar1}) and (\ref{s}),
we obtain the following result
\begin{eqnarray}
\langle \psi_n^{\ell}|\psi_z^\ell\rangle=\frac{z}{\sqrt{n(n+\ell)}}\langle \psi_{n-1}^{\ell}|\psi_z^\ell\rangle.
\end{eqnarray}
Iterating this recurrence relation, we obtain
\begin{equation}
\langle \psi_n^{\ell}|\psi_z^\ell\rangle=z^n\sqrt{\frac{\Gamma(1+\ell)}{n!\Gamma(n+\ell+1)}}\langle \psi_0^{\ell}|\psi_z^\ell\rangle.
\end{equation}
Referring to \cite{67}, the Gamma function is linked to the modified Bessel function $I_\mu(x)$ of order $\mu$ through the relation
\begin{equation}
\sum_{n=0}^\infty\frac{x^{2n}}{n!\Gamma(n+\mu+1)}=\frac{I_\mu(2x)}{x^\mu}.
\end{equation}
Therefrom, by setting $x=|z|$ and $\mu=\ell$, we deduce the Barut-Girardello coherent states as follows
\begin{eqnarray}
|\psi_z^\ell\rangle&=&
\sqrt{\frac{|z|^{\ell}}{I_{\ell}(2|z|)}}\sum_{n=0}^\infty\frac{z^n}{\sqrt{n!\Gamma(n+\ell+1)}}|\psi_n^{\ell}\rangle,\label{cbar}\\
\psi_z^{\ell}(u)&=&\frac{|z|^{\frac{\ell}{2}} N(\rho,\alpha)}{\sqrt{I_{\ell}(2|z|)}}
\sum_{n=0}^\infty \frac{z^n}{\Gamma(n+\ell+1)}u^{\frac{\ell}{2}}e^{-\frac{\varpi}{2}u}L_n^{\ell}(u)e^{i\theta_{n,\ell}(t)}.
\end{eqnarray}
Moreover, in terms of the generating function (\ref{j}), the Barut-Girardello coherent states can be written as follows
\begin{eqnarray}
\psi_z^{\ell}(u)&=&\left(\frac{z}{|z|}\right)^{-\frac{\ell}{2}}\frac{ N(\rho,\alpha) e^{z-\frac{\varpi}{2}u} }{\sqrt{I_{\ell}(2|z|)}}
J_{\ell}\left(2\sqrt{uz}\right)e^{i\theta_{n,\ell}(t)}.
\end{eqnarray}
\subsubsection{Properties}
It is well known that the states (\ref{cbar}) are normalized but not orthogonal, and that they satisfy the resolution of the identity. Indeed, the
scalar product of two coherent states does not vanish:
\begin{equation}
\langle\psi_{z_1}^\ell|\psi_{z_2}^\ell\rangle=\frac{I_{\ell}(2\sqrt{ z_1^*z_2})}{\sqrt{I_{\ell}(2|z_1|)I_{\ell}(2|z_2|)}}.
\end{equation}
The overcompleteness relation reads as follows
\begin{equation}
\int d\mu(z,\ell)|\psi_z^\ell\rangle\langle \psi_z^\ell|=\sum_{n=0}^\infty|\psi_n^{\ell}\rangle\langle \psi_n^{\ell}|=\mathbf{I},
\end{equation}
with the measure
\begin{equation}
d\mu(z,\ell)=\frac{2}{\pi}K_{\ell}(2|z|)I_{\ell}(2|z|)d^2z,
\end{equation}
where $d^2z=d(Re z)d(Im z)$ and $K_\upsilon(x)$ is the $\upsilon$-order modified Bessel function of the second kind.\\
For an arbitrary state $|\Phi\rangle=\sum_{n=0}^\infty c_n|\psi_n^{\ell}\rangle$ in the Hilbert space, one can construct the analytic function $f(z)$
such that
\begin{eqnarray}
f(z)=\sqrt{\frac{I_{\ell}(2|z|)}{|z|^{\ell}}}\langle \psi_z^\ell|\Phi\rangle=\sum_{m=0}^\infty\frac{c_m}{\sqrt{m!\Gamma(m+\ell+1)}}z^m.
\end{eqnarray}
In terms of the Barut-Girardello coherent states (\ref{cbar}), one can explicitly express the state $|\Phi\rangle$ as follows
\begin{equation}
|\Phi\rangle=\int d\mu(z,\ell)\frac{({z^*})^{\frac{\ell}{2}}}{\sqrt{I_{\ell}(2|z|)}}f(z)|\psi_z^\ell\rangle,
\end{equation}
and we have
\begin{equation}
||\Phi||^2=\int d\mu(z,\ell)\frac{|z|^{\ell}}{I_{\ell}(2|z|)}|f(z)|^2<\infty.
\end{equation}
\subsection{Perelomov coherent states}
\subsubsection{Construction}
In analogy with the construction of canonical coherent states, the Perelomov $SU(1,1)$
coherent states $|\psi_\eta^\ell\rangle$ are obtained by acting with the displacement operator $S(\xi)$ on the ground state
$|\psi_0^{\ell}\rangle$ \cite{31}
\begin{eqnarray}
|\psi_\eta^\ell\rangle&=& S(\xi)|\psi_0^{\ell}\rangle,\cr
&=& \exp\left(\xi K_+-\xi^*K_-\right)|\psi_0^{\ell}\rangle,
\end{eqnarray}
where $\xi\in\mathbb C$ is parametrized as $\xi= -\frac{\theta}{2}e^{-i\varphi}$, with $-\infty<\theta<+\infty$ and $0\leq\varphi\leq 2\pi$.
Using the Baker-Campbell-Hausdorff relation, the displacement operator can be written in the following normal form \cite{68}
\begin{equation}\label{d}
S(\xi)= \exp(\eta K_+)\exp(\zeta K_0)\exp(-\eta^* K_-),
\end{equation}
where $\eta=-\tanh(\frac{\theta}{2})e^{-i\varphi}$ and $\zeta=-2\ln\cosh|\xi|=\ln(1-|\eta|^2)$. By using this normal
form of the displacement operator (\ref{d}), the standard Perelomov $SU(1,1)$ coherent states are found to be
\begin{eqnarray}
|\psi_\eta^\ell\rangle&=&(1-|\eta|^2)^{\frac{\ell+1}{2}}\sum_{n=0}^\infty\sqrt{\frac{\Gamma(n+\ell+1)}{n!\Gamma(\ell+1)}}\eta^n|\psi_n^{\ell}\rangle,\label{P}\\
\psi_\eta^{\ell}(u)&=&N(\rho,\alpha)\frac{(1-|\eta|^2)^{\frac{\ell+1}{2}}}{\sqrt{\Gamma(\ell+1)}}u^{\frac{\ell}{2}}e^{-\frac{\varpi}{2}u}
\sum_{n=0}^\infty \eta^n L_n^{\ell}(u) e^{i\theta_{n,\ell}(t)}.
\end{eqnarray}
In terms of the generating function (\ref{g}), the Perelomov coherent states can be written as follows
\begin{equation}
\psi_\eta^{\ell}(u)=N(\rho,\alpha)\frac{(1-|\eta|^2)^{\frac{\ell+1}{2}}}{\sqrt{\Gamma(\ell+1)}}u^{\frac{\ell}{2}}e^{-\frac{\varpi}{2}u}
\frac{e^{\frac{u\eta}{\eta-1}}}{(1-\eta)^{1+\ell}}e^{i\theta_{n,\ell}(t)}.
\end{equation}
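The closed form relies on the generating function $\sum_{n\geq 0}\eta^n L_n^{\ell}(u)=(1-\eta)^{-\ell-1}e^{u\eta/(\eta-1)}$, valid for $|\eta|<1$. A small stdlib-only sanity check (illustrative helpers \texttt{glaguerre}, \texttt{laguerre\_sum}, \texttt{closed\_form}; the Laguerre polynomials are generated by their three-term recurrence):

```python
import math

def glaguerre(nmax, alpha, x):
    """List [L_0^a(x), ..., L_nmax^a(x)] via the three-term recurrence
    (n+1) L_{n+1} = (2n+1+a-x) L_n - (n+a) L_{n-1}."""
    L = [1.0, 1.0 + alpha - x]
    for n in range(1, nmax):
        L.append(((2 * n + 1 + alpha - x) * L[n] - (n + alpha) * L[n - 1]) / (n + 1))
    return L[:nmax + 1]

def laguerre_sum(eta, ell, u, nmax=200):
    # truncated generating-function series, converges for |eta| < 1
    L = glaguerre(nmax, ell, u)
    return sum(eta ** n * L[n] for n in range(nmax + 1))

def closed_form(eta, ell, u):
    return math.exp(u * eta / (eta - 1)) / (1 - eta) ** (ell + 1)
```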
\subsubsection{Properties}
The Perelomov $SU(1,1)$ coherent states, like the Barut-Girardello coherent states, are normalized but not orthogonal:
\begin{equation}
\langle \psi_{\eta_1}^\ell|\psi_{\eta_2}^\ell\rangle=\left[(1-|\eta_1|^2)(1-|\eta_2|^2)\right]^{\frac{\ell+1}{2}}(1-\eta_1^*\eta_2)^{-\ell-1},
\end{equation}
and satisfy the completeness relation
\begin{eqnarray}
\int |\psi_\eta^\ell\rangle\langle \psi_\eta^\ell|d\mu(\eta,\ell)=\sum_{n=0}^\infty|\psi_n^{\ell}\rangle\langle \psi_n^{\ell}|=\mathbf{I},
\end{eqnarray}
where the measure $d\mu(\eta,\ell)=\frac{\ell}{\pi}\frac{d^2\eta}{(1-|\eta|^2)^2}$.\\
As we noted for the Barut-Girardello coherent states, for any $|\Psi\rangle=\sum_{n=0}^\infty c_n|\psi_n^{\ell}\rangle$
in the Hilbert space, one can construct an analytic function
\begin{eqnarray}
f(\eta)=(1-|\eta|^2)^{-\frac{\ell+1}{2}}\langle \psi_\eta^\ell|\Psi\rangle=\sum_{n=0}^\infty c_n
\sqrt{\frac{\Gamma(n+\ell+1)}{n!\Gamma(\ell+1)}}(\eta^*)^n.
\end{eqnarray}
The expansion of $|\Psi\rangle$ on the basis of coherent states (\ref{P}) can be written as
\begin{eqnarray}
|\Psi\rangle &=&\int d\mu(\eta,\ell)(1-|\eta|^2)^{\frac{\ell+1}{2}}f(\eta)|\psi_\eta^\ell\rangle,\\
||\Psi||^2&=&\int d\mu(\eta,\ell)(1-|\eta|^2)^{\ell+1}|f(\eta)|^2<\infty.
\end{eqnarray}
\section{Conclusion}\label{sec8}
In this paper we have investigated the system of a nonrelativistic particle of mass $m$ with time-dependent harmonic frequency $\omega(t)$,
rotationally symmetric in the plane, under the influence of a time-dependent friction force. At the classical level we
solved the equations of motion, which describe three particular physical systems. At the quantum level, we used the
Lewis-Riesenfeld's method to construct the spectra of the invariant operator $\hat I(t)$ and the Hamiltonian $\hat H(t)$ on the
helicity-like basis $|\phi_{n_\pm}(t)\rangle$. The configuration space wave functions of both operators are expressed in terms of the
generalized Laguerre polynomials.
This system, previously introduced in the one-dimensional case \cite{26,27} as a generalization of the Kanai Hamiltonian \cite{46},
has been criticized for violating certain laws of quantum theory. Nevertheless, since several approaches
have been proposed to resolve these controversies
\cite{52,53,54,55,56}, we used the invariant method of Lewis-Riesenfeld to confirm the preservation of those laws by
investigating the validity of the
Heisenberg uncertainty relations and the expectation values of the mechanical energy.
This model not only generalizes the $1$D damped systems studied in the literature \cite{26,27} but also improves
the technique of quantization of those models achieved in the framework of the Lewis-Riesenfeld method \cite{23,24,25,26,27}.
By analogy with the work of Pedrosa \cite{27}, who constructed the canonical coherent states for the $1$D case of this system,
we constructed the system of $SU(1,1)$ coherent states based on the eigenfunction of the Hamiltonian.
For these states the resolution of the identity and some further properties were examined. Referring to the original paper of Lewis and Riesenfeld \cite{1},
it would also be interesting to determine the transition amplitude connecting any initial state in the remote past to any final state
in the remote future in the case of a constant frequency; we hope to report on these aspects elsewhere.
\section*{Appendix: Expressions of the phase space operators in terms of the helicity Fock algebra generators}
In this appendix we explicitly develop some intermediate calculations used to determine the expressions of the operators $\hat I (t)$, $\hat L_z$ and
$\hat H(t)$ of Section \ref{sec4}, and the Heisenberg uncertainty relations of Section \ref{sec5}:
\begin{eqnarray}
a_j&=&\hat U^\dag a'_j \hat U=\frac{1}{\sqrt{2\nu}}\left(mf^{-1}\dot{\rho}\hat x_j-\rho \hat p_j+i\frac{\nu}{\rho}\hat x_j\right)\label{xx7},\\
a_j^\dag&=& \hat U^\dag {a'}_j^\dag \hat U=\frac{1}{\sqrt{2\nu}}\left(mf^{-1}\dot{\rho}\hat x_j-\rho \hat p_j-i\frac{\nu}{\rho}\hat x_j\right) \label{xx8},
\end{eqnarray}
with $j=1,2$. Conversely,
\begin{eqnarray}
\hat x_j=\frac{i\rho}{\sqrt{2\nu}}\left(a_j^\dag-a_j\right), \,\,\, \hat p_j=\frac{imf^{-1}\dot{\rho}}{\sqrt{2\nu}}\left(a_j^\dag-a_j\right)
-\frac{\sqrt{2\nu}}{2\rho}\left(a_j^\dag+a_j\right).
\end{eqnarray}
The helicity Fock algebra generators in terms of generators $a_j$ and $a_j^\dag$ are given as follows
\begin{eqnarray}\label{vv1}
a_{\pm}&=&\frac{1}{\sqrt{2}}\left(a_1\pm ia_2\right),\,\,\, a_{\pm}^\dag=\frac{1}{\sqrt{2}}\left(a_1^\dag\mp ia_2^\dag\right).
\end{eqnarray}
The inverse relations are,
\begin{eqnarray}
a_1&=& \frac{1}{\sqrt{2}}\left(a_++a_-\right) ,\,\,\, a_1^\dag=\frac{1}{\sqrt{2}}\left(a_+^\dag+ a_-^\dag\right),\cr
a_2&=& -\frac{i}{\sqrt{2}}\left(a_+-a_-\right) ,\,\,\, a_2^\dag=\frac{i}{\sqrt{2}}\left(a_+^\dag- a_-^\dag\right).
\end{eqnarray}
In terms of helicity generators, the phase space operators read
\begin{eqnarray}
\hat x_1&=&-\frac{i\rho}{2\sqrt{\nu}}\left(a_--a_+^\dag+a_+-a_-^\dag\right),\\
\hat p_1&=&-\frac{imf^{-1}\dot{\rho}}{2\sqrt{\nu}}\left(a_--a_+^\dag+a_+
-a_-^\dag\right)-\frac{\sqrt{\nu}}{2\rho}\left(a_-+a_+^\dag+a_++a_-^\dag\right),\\
\hat x_2&=&\frac{\rho}{2\sqrt{\nu}}\left(a_--a_+^\dag-a_++a_-^\dag\right),\\
\hat p_2&=&\frac{mf^{-1}\dot{\rho}}{2\sqrt{\nu}}\left(a_--a_+^\dag-a_+
+a_-^\dag\right)-i\frac{\sqrt{\nu}}{2\rho}\left(a_-+a_+^\dag-a_+-a_-^\dag\right).
\end{eqnarray}
In particular,
\begin{eqnarray}
\hat x_1-i\hat x_2&=&\frac{i\rho}{\sqrt{\nu}}\left(a_+^\dag-a_-\right),\,\,\, \hat x_1+i\hat x_2=\frac{i\rho}{\sqrt{\nu}}\left(a_-^\dag-a_+\right),\\
\hat p_1+i\hat p_2&=&\frac{imf^{-1}\dot{\rho}}{\sqrt{\nu}}\left(a_-^\dag-a_+\right)
-\frac{\sqrt{\nu}}{\rho}\left(a_-^\dag+a_+\right), \\
\hat p_1-i\hat p_2&=&\frac{imf^{-1}\dot{\rho}}{\sqrt{\nu}}\left(a_+^\dag-a_-\right)
-\frac{\sqrt{\nu}}{\rho}\left(a_+^\dag+a_-\right).
\end{eqnarray}
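Since the relations above are linear in the generators, they can be sanity-checked by substituting independent formal complex values for $a_\pm$, $a_\pm^\dag$ (a purely algebraic check, not a representation of the operators); the values chosen for $\rho$ and $\nu$ below are arbitrary.

```python
import random

random.seed(0)
rnd = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))
s2 = 2 ** 0.5

# independent formal values standing in for a_+, a_-, a_+^dag, a_-^dag;
# the identities checked are linear, so this substitution is a valid test
ap, am, apd, amd = rnd(), rnd(), rnd(), rnd()
rho, nu = 0.8, 1.3  # arbitrary positive parameters (illustrative)

# inverse relations of the appendix
a1 = (ap + am) / s2
a2 = -1j * (ap - am) / s2
a1d = (apd + amd) / s2
a2d = 1j * (apd - amd) / s2

# phase-space combinations built from the displayed formulas
x1 = -1j * rho / (2 * nu ** 0.5) * (am - apd + ap - amd)
x2 = rho / (2 * nu ** 0.5) * (am - apd - ap + amd)
```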
\section*{Acknowledgments}
L. M. Lawson acknowledges grant support from the
Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste,
Italy, and from the German Academic Exchange Service (DAAD).
\section{Introduction}
We consider the problem of enumerating all inclusion-wise minimal dominating sets of a given graph, denoted by \textsc{Dom-Enum}.
A {\em dominating set} in a graph $G$ is a set of vertices $D$ such that every vertex of $G$ is either in $D$ or is adjacent to some vertex of $D$.
It is said to be {\em minimal} if it does not contain any dominating set as a proper subset.
To date, it is open whether \textsc{Dom-Enum} admits an output-polynomial time algorithm.
An enumeration algorithm is said to be running in {\em output-polynomial} time if its running time is bounded by a polynomial in the combined size of the input and the output.
It is said to be running in {\em incremental-polynomial} time if the running times between two consecutive outputs and after the last output are bounded by a polynomial in the combined size of the input and already output solutions.
If the running times between two consecutive outputs and after the last output are bounded by a polynomial in the size of the input alone, then the algorithm is said to be running with {\em polynomial delay}; see~\cite{johnson1988generating,creignou2019complexity}.
Recently, it has been proved in~\cite{kante2014enumeration} that \textsc{Dom-Enum} is equivalent to the problem of enumerating all inclusion-wise minimal transversals of a hypergraph, denoted by \textsc{Trans-Enum}.
The best known algorithm for this problem is due to Fredman and Khachiyan~\cite{fredman1996complexity} and runs in incremental quasi-polynomial time.
Nevertheless, several classes of graphs were shown to admit output-polynomial time algorithms.
For example, it has been shown that there exist output-polynomial time algorithms for $\log(n)$-degenerate graphs~\cite{eiter2003new}, triangle-free graphs~\cite{bonamy2019enumerating}, and recently for $K_t$-free graphs for any fixed $t\in \mathbb{N}$, as well as for diamond-free and paw-free graphs~\cite{bonamy2019enumeratingkt}.
Incremental-polynomial time algorithms are known for chordal bipartite graphs~\cite{golovach2016enumerating} and graphs of bounded conformality~\cite{boros2004generating}.
Polynomial-delay algorithms are known for degenerate graphs~\cite{eiter2003new}, line graphs~\cite{kante2015polynomial}, and chordal graphs~\cite{kante2015chordal}.
Linear-delay algorithms are known for permutation and interval graphs~\cite{kante2012neighbourhood}, graphs with bounded clique width~\cite{courcelle2009linear}, split and $P_6$-free chordal graphs~\cite{kante2014enumeration}.
In this paper, we investigate the enumeration of minimal dominating sets from their intersection with redundant vertices, i.e., vertices that have an inclusion-wise non-minimal neighborhood in the graph.
This technique was first introduced in~\cite{kante2014enumeration} for the enumeration of minimal dominating sets in split and $P_6$-free chordal graphs.
We investigate generalizations of this technique to $P_k$-free chordal graphs for larger integers $k$.
In particular, we give $O(n+m)$ and $O(n^3\cdot m)$ delay algorithms in the classes of $P_7$-free and $P_8$-free chordal graphs, where $n$ and $m$ respectively denote the number of vertices and edges in the graph.
Our algorithms rely on two main properties.
The first one is that the intersections of minimal dominating sets with redundant vertices form an independence system in $P_7$-free chordal graphs, and an accessible set system in $P_8$-free chordal graphs.
The second is that the connected components obtained after removing redundant vertices in $P_7$-free and $P_8$-free chordal graphs are respectively $P_3$-free and $P_4$-free chordal.
As for $P_k$-free chordal graphs for $k\geq 9$, we give evidence that such a technique is inefficient as a key step of the algorithm, namely the irredundant extension problem, becomes {\sf NP}-complete.
The rest of the paper is organized as follows.
In Section~\ref{sec:preliminaries} we introduce definitions and preliminary notions.
In Section~\ref{sec:algorithm} we describe the general algorithm that we consider throughout the paper and that can be decomposed into two distinct parts: redundant parts enumeration, and irredundant extensions enumeration.
In Section~\ref{sec:properties} we prove properties on chordal graphs that depend on the size of a longest induced path in the graph.
Section~\ref{sec:redundant} is devoted to the complexity analysis of the first part of the algorithm, while Section~\ref{sec:irredundant} considers the second.
We conclude in Section~\ref{sec:conclusion} by discussing the outlook of this technique.
\section{Preliminaries}\label{sec:preliminaries}
In this paper, all graphs are considered finite, undirected, simple, and loopless.
For a graph $G=(V(G),E(G))$, $V(G)$ is its set of vertices and $E(G)\subseteq \{\{x,y\} \mid x,y\in V(G),\ x\neq y\}$ is its set of edges.
Edges may be denoted by $xy$ (or $yx$) instead of $\{x,y\}$.
Two vertices $x,y$ of $G$ are called {\em adjacent} if $xy\in E(G)$.
A {\em clique} in a graph $G$ is a set of pairwise adjacent vertices.
An {\em independent set} in a graph $G$ is a set of pairwise non-adjacent vertices.
The subgraph of $G$ {\em induced} by $X\subseteq V(G)$, denoted by $G[X]$, is the graph $(X,E(G)\cap \{\{x,y\} \mid x,y\in X,\ x\neq y\})$; $G-X$ is the graph $G[V(G)\setminus X]$.
An {\em induced path} (resp.~{\em induced cycle}) in $G$ is a path (resp.~cycle) that is an induced subgraph of~$G$.
We denote by $P_k$ an induced path on $k$ vertices.
We call {\em hole} (or {\em chordless cycle}) an induced cycle of size at least four.
A~graph $G$ is {\em split} if its vertex set can be partitioned into a clique and an independent set.
It~is {\em chordal} if it has no chordless cycle.
It~is called $P_k$-free if it has no induced path on $k$ vertices.
Let $G$ be a graph and $x\in V(G)$ be a vertex of $G$.
The {\em neighborhood} of $x$ is the set $N(x)=\{y\in V(G) \mid xy\in E(G)\}$.
The {\em closed neighborhood} of $x$ is the set $N[x]= N(x)\cup\{x\}$.
For a subset $X\subseteq V(G)$ we define $N[X]=\bigcup_{x\in X} N[x]$ and $N(X)=N[X]\setminus X$.
In case of ambiguity or when several graphs are considered, we shall note $N_G[x]$ the neighborhood of $x$ in $G$.
The {\em degree} of $x$ is defined by $deg(x)=|N(x)|$.
We say that $x$ is {\em complete} to $X$ if $X\subseteq N(x)$, and that it is {\em partially adjacent} to $X$ if it is adjacent to an element of $X$ but not complete to $X$.
Let $D,X\subseteq V(G)$ be two subsets of vertices of $G$.
We say that $D$ {\em dominates} $X$ if $X\subseteq N[D]$.
Such a set $D$ is inclusion-wise {\em minimal} if $X\not\subseteq N[D\setminus \{x\}]$ for any $x\in D$.
We say that $D$ dominates $x$ if it dominates $\{x\}$.
A (minimal) {\em dominating set} of $G$ is a (minimal) dominating set of $V(G)$.
The set of all minimal dominating sets of $G$ is denoted by $\D(G)$, and the problem of enumerating $\D(G)$ given $G$ by \textsc{Dom-Enum}.
Let $x$ be a vertex of~$D$.
A~{\em private neighbor} of $x$ w.r.t.~$D$ in $G$ is a vertex $u$ of $G$ that is only adjacent to $x$ in~$D$, that is, such that $N[u]\cap D=\{x\}$.
Note that $x$ can be its own private neighbor (in that case we say that $x$ is {\em self-private}).
The set of all private neighbors of $x$ w.r.t.~$D$ is denoted by $Priv(D,x)$.
It is well known that a subset $D\subseteq V(G)$ is a minimal dominating set of $G$ if and only if it dominates $G$, and for every $x\in D$, $Priv(D,x)\neq \emptyset$.
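This characterization is straightforward to implement; the sketch below (plain Python, adjacency given as a dict of neighbor sets, helper names illustrative) tests domination and the private-neighbor condition on a path on six vertices.

```python
def closed_nbhd(G, x):
    # G is an adjacency map: vertex -> set of neighbors; N[x] = N(x) + {x}
    return G[x] | {x}

def dominates(G, D):
    return set(G) <= set().union(*(closed_nbhd(G, x) for x in D)) if D else not G

def priv(G, D, x):
    # private neighbors of x w.r.t. D: vertices u with N[u] intersect D = {x}
    return {u for u in G if closed_nbhd(G, u) & set(D) == {x}}

def is_minimal_dominating(G, D):
    # D is a minimal dominating set iff it dominates G and
    # every x in D keeps at least one private neighbor
    return dominates(G, D) and all(priv(G, D, x) for x in D)

# path on six vertices 0-1-2-3-4-5
p6 = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 5} for i in range(6)}
```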
Let $x$ be a vertex of $G$.
We say that $x$ is {\em irredundant} if it is minimal with respect to neighborhood inclusion.
In case of equality between minimal neighborhoods, exactly one vertex is considered as irredundant.
We say that $x$ is {\em redundant} if it is not irredundant.
Then to every redundant vertex $y$ corresponds at least one irredundant vertex $x$ such that $N[x]\subseteq N[y]$, and no vertex $y$ is such that $N[y]\subset N[x]$ whenever $x$ is irredundant.
The set of irredundant vertices of $G$ is denoted by $IR(G)$, and the set of redundant vertices by $RN(G)$.
We call {\em irredundant component} a connected component of $G[IR(G)]$.
For a subset $D$ of vertices of $G$ we note $D_{RN}=D\cap RN(G)$ its intersection with redundant vertices, and $D_{IR}=D\cap IR(G)$ its intersection with irredundant vertices.
Then $D_{RN}$ and $D_{IR}$ form a bipartition of $D$.
For a subset $D$ and a vertex $x\in D$, we call irredundant private neighbors of $x$ w.r.t.~$D$ the elements of the set $Priv_{IR}(D,x)=Priv(D,x)\cap IR(G)$.
In the remainder of the paper we shall write $\D_{RN}(G)=\{D_{RN} \mid D\in \D(G)\}$ and refer to this set as the {\em redundant parts} of minimal dominating sets of $G$.
We call {\em irredundant extension} of $A\in \D_{RN}(G)$ a set $I\subseteq IR(G)$ such that $A\cup I\in \D(G)$, and denote by $DIR(A)$ the set of all such sets.
Observe that $|\D_{RN}(G)|\leq |\D(G)|$, and that this inequality can be an equality (take a star graph) or strict (take a path on six vertices).
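Both examples can be checked by brute force; in the sketch below (exponential-time, for illustration only), ties between equal closed neighborhoods are broken by keeping the smallest-index vertex irredundant, one concrete instance of the convention above.

```python
from itertools import combinations

def N(G, x):
    return G[x] | {x}

def minimal_dominating_sets(G):
    """Brute-force enumeration of D(G); fine only for tiny examples."""
    V = sorted(G)
    found = set()
    for k in range(len(V) + 1):
        for D in combinations(V, k):
            S = set(D)
            if not set(V) <= set().union(set(), *(N(G, x) for x in S)):
                continue  # not dominating
            if all(any(N(G, u) & S == {x} for u in V) for x in S):
                found.add(frozenset(S))  # dominating + all privates: minimal
    return found

def redundant_vertices(G):
    """RN(G); equal closed neighborhoods keep the smallest index irredundant."""
    V = sorted(G)
    return {y for y in V if any(x != y and (N(G, x) < N(G, y)
            or (N(G, x) == N(G, y) and x < y)) for x in V)}

def drn(G):
    RN = redundant_vertices(G)
    return {frozenset(D & RN) for D in minimal_dominating_sets(G)}

star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
p6 = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 5} for i in range(6)}
```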
We end the preliminaries by stating general properties that will be used throughout the paper.
\begin{proposition}\label{prop:ACS-emptyset}
Let $G$ be a graph.
Then $IR(G)$ dominates $G$, hence $\emptyset\in \D_{RN}(G)$.
\end{proposition}
\begin{proof}
Take any vertex $x$ of $G$.
Either it is irredundant, or not.
If it is then it is dominated by $IR(G)$.
If not then by definition there exists $y\in IR(G)$ such that $N[y]\subseteq N[x]$, and it is dominated by $IR(G)$.
Consequently, $IR(G)$ dominates $G$ and thus there exists $D\subseteq IR(G)$ such that $D\in \D(G)$ and $D_{RN}=\emptyset$.
Hence $\emptyset\in \D_{RN}(G)$.
\end{proof}
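This proposition can also be observed empirically; the sketch below (plain Python, illustrative helper names, smallest-index tie-breaking for equal closed neighborhoods) verifies on random graphs that $IR(G)$ dominates $G$.

```python
import random

def N(G, x):
    return G[x] | {x}

def irredundant_vertices(G):
    """IR(G); equal closed neighborhoods keep the smallest index irredundant."""
    V = sorted(G)
    return [y for y in V if not any(x != y and (N(G, x) < N(G, y)
            or (N(G, x) == N(G, y) and x < y)) for x in V)]

def ir_dominates(G):
    IR = irredundant_vertices(G)
    return set(G) <= set().union(set(), *(N(G, x) for x in IR))

def random_graph(n, p, rng):
    G = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                G[i].add(j)
                G[j].add(i)
    return G

rng = random.Random(1)
checks = [ir_dominates(random_graph(8, 0.3, rng)) for _ in range(100)]
```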
\begin{proposition}\label{prop:IR-private}
Let $G$ be a graph and $D\subseteq V(G)$.
Then $D$ is a minimal dominating set of $G$ if and only if it dominates $IR(G)$ and $Priv_{IR}(D,x)\neq\emptyset$ for every $x\in D$.
\end{proposition}
\begin{proof}
We prove the first implication.
Let $D\in \D(G)$.
Clearly $D$ dominates $IR(G)$.
Let us assume for contradiction that $Priv_{IR}(D,x)=\emptyset$ for some $x\in D$.
We first exclude the case where $x$ is self-private.
If $x$ is self-private then it is redundant and it has a neighbor $y\in IR(G)$ such that $N[y]\subseteq N[x]$.
Since by hypothesis $Priv_{IR}(D,x)=\emptyset$, $y$ is dominated by some $z\in D$, $x\neq z$.
However, since $N[y]\subseteq N[x]$ then $zx\in E(G)$ and $x$ is not self-private, a contradiction.
Consequently $x$ has a neighbor $u\in D$, and a private neighbor $v$ in $RN(G)$.
Let $w\in IR(G)$ such that $N[w]\subseteq N[v]$.
Such a vertex exists since $v$ is redundant.
Two cases arise depending on whether $w=x$ or $w\neq x$.
In the first case we conclude that $uv\in E(G)$, hence that $v$ is not a private neighbor of $x$, a contradiction.
In the other case, observe that since $w$ is irredundant it cannot be a private neighbor of $x$ (if ever it was adjacent to $x$).
Hence it must be dominated by some $z\in D$, $z\neq x$ (possibly $z=w$).
Since $N[w]\subseteq N[v]$, $z$ is adjacent to $v$, hence $v$ is not a private neighbor of $x$, a contradiction.
As for the other implication, observe that if an irredundant neighborhood $N[x]$, $x\in IR(G)$ is intersected by some set $D\subseteq V(G)$, then every neighborhood $N[y]$ such that $N[x]\subseteq N[y]$ is also intersected by $D$.
Now if $D$ dominates $IR(G)$, then it intersects every irredundant neighborhood.
As for every $y\in RN(G)$ there exists $x\in IR(G)$ such that $N[x]\subseteq N[y]$ we conclude that $D$ dominates $G$ whenever it dominates $IR(G)$.
Minimality follows from the inclusion $Priv_{IR}(D,x)\subseteq Priv(D,x)$, recalling that a dominating set $D$ is minimal if and only if $Priv(D,x)\neq\emptyset$ for every $x\in D$.
\end{proof}
A corollary of Proposition~\ref{prop:IR-private} is the following, observing for $A\subseteq RN(G)$ and $I\subseteq IR(G)$ that if $I$ dominates $IR(G)\setminus N(A)$ but not $Priv_{IR}(A,a)$ for any $a\in A$, then $I$ can be arbitrarily reduced into a minimal such set.
\begin{corollary}\label{cor:IR-private}
Let $G$ be a graph and $A\subseteq RN(G)$.
Then $A\in \D_{RN}(G)$ if and only if every $a\in A$ has an irredundant private neighbor, and there exists $I\subseteq IR(G)$ such that $I$ dominates $IR(G)\setminus N(A)$ but does not dominate $Priv_{IR}(A,a)$ for any $a\in A$.
Furthermore, $I\in DIR(A)$ whenever it is minimal with this property.
\end{corollary}
\section{The algorithm}\label{sec:algorithm}
We describe a general algorithm enumerating the minimal dominating sets of a graph from their intersection with redundant vertices.
See Algorithm~\ref{algo:main}.
The first step is the enumeration of such intersections, Line~\ref{line:main-forallRN}.
The second step is the enumeration of their irredundant extensions, Line~\ref{line:main-forallIR}.
The correctness of the algorithm follows from the bipartition induced by $RN(G)$ and $IR(G)$ in $G$.
The next sections are devoted to the complexity analysis of these two steps in the restricted case of $P_7$-free and $P_8$-free chordal graphs.
\begin{algorithm}
\SetAlgoLined
\SetKwProg{myproc}{Procedure}{}{}
\myproc{{\em \texttt{DOM}($G$)}}{
\For{{\bf all} $A\subseteq RN(G)$ {\bf such that} $A\in \D_{RN}(G)$\label{line:main-forallRN}}
{
\For{{\bf all} $I\subseteq IR(G)$ {\bf such that} $I\in DIR(A)$\label{line:main-forallIR}}
{
{\bf output} $A\cup I$\;\label{line:main-output}
}
}
}
\caption{An algorithm enumerating the minimal dominating sets of a graph $G$ from their intersection with the set $RN(G)$ of redundant vertices of $G$.}\label{algo:main}
\end{algorithm}
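A brute-force instantiation of this two-phase structure may help fix ideas; in the sketch below (exponential time, illustrative only) both loops are realized by subset enumeration and membership tests, so each minimal dominating set is output exactly once, from the unique pair $(A,I)=(D_{RN},D_{IR})$.

```python
from itertools import combinations

def N(G, x):
    return G[x] | {x}

def is_mds(G, S):
    """Is S a minimal dominating set? (domination + private-neighbor test)"""
    if not set(G) <= set().union(set(), *(N(G, x) for x in S)):
        return False
    return all(any(N(G, u) & S == {x} for u in G) for x in S)

def subsets(X):
    X = sorted(X)
    for k in range(len(X) + 1):
        for c in combinations(X, k):
            yield set(c)

def dom(G):
    """Brute-force version of Algorithm DOM: the outer loop ranges over
    candidate redundant parts A, the inner one over irredundant extensions."""
    V = sorted(G)
    RN = {y for y in V if any(x != y and (N(G, x) < N(G, y)
          or (N(G, x) == N(G, y) and x < y)) for x in V)}
    IR = set(V) - RN
    for A in subsets(RN):       # Line 2: A in D_RN(G), tested implicitly below
        for I in subsets(IR):   # Line 3: I in DIR(A)
            if is_mds(G, A | I):
                yield frozenset(A | I)

p6 = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 5} for i in range(6)}
out = list(dom(p6))
```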
\section{Properties on $P_k$-free chordal graphs}\label{sec:properties}
We give structural properties on redundant vertices and irredundant components of $G$ whenever $G$ is chordal, and depending on the size of a longest induced path in $G$.
\begin{proposition}\label{prop:IR-path}
Let $G$ be a graph and $u,v$ be two adjacent irredundant vertices of $G$.
Then there exist $u'\in N[u]\setminus N[v]$, $u''\in N[u']\setminus N[u]$, $v'\in N[v]\setminus N[u]$ and $v''\in N[v']\setminus N[v]$.
In particular if $G$ is chordal, then $u''u'uvv'v''$ induces a $P_6$.
\end{proposition}
\begin{proof}
Let us assume for contradiction that no such $u'$ exists.
Then either $N[u]\subset N[v]$, or $N[u]=N[v]$.
In the first case $v$ is redundant, a contradiction.
In the other case only one of $u$ and $v$ should be irredundant by definition, a contradiction.
Hence $u'$ exists.
By symmetry, $v'$ exists.
Let us now assume for contradiction that no such $u''$ exists.
Then either $N[u']\subset N[u]$, or $N[u']=N[u]$.
In the first case $u$ is redundant, a contradiction.
In the other case $vu'\in E(G)$, a contradiction.
Hence $u''$ exists.
By symmetry, $v''$ exists.
Now if $G$ is chordal, $u''u'uvv'v''$ induces a $P_6$.
\end{proof}
An {\em accessible set system} is a family of sets in which every non-empty set $X$ contains an element $x$ such that $X\setminus\{x\}$ belongs to the family.
If $x$ is of largest index in $X$ such that $X\setminus\{x\}$ belongs to the family, then it is called {\em maximal generator} of $X$.
An \emph{independence system} is a family of sets such that for every non-empty set $X$ of the family, and every element $x\in X$, $X\setminus\{x\}$ belongs to the family.
In particular, every independence system is an accessible set system.
Note that the maximal generator of $X$ in that case is always the vertex of maximal index in $X$.
Accessible set systems and independence systems play an important role in the design of efficient enumeration algorithms~\cite{arimura2009polynomial,kante2014enumeration}.
The next theorem suggests that the enumeration of $\D_{RN}(G)$ is tractable in $P_7$-free and $P_8$-free chordal graphs.
\begin{figure}
\centering
\includegraphics{DRN.pdf}
\caption{The situation of Proposition~\ref{prop:ASC-IS}, case one. Circles denote private neighborhoods.}\label{fig:P8ASC}
\end{figure}
\begin{proposition}\label{prop:ASC-IS}
Let $G$ be a chordal graph.
Then $\D_{RN}(G)$ is an independence system whenever $G$ is $P_7$-free, and it is an accessible set system whenever $G$ is $P_8$-free.
\end{proposition}
\begin{proof}
Let $G$ be a chordal graph.
We first assume that $\D_{RN}(G)$ is not an independence system to exhibit a $P_7$, and then assume that $\D_{RN}(G)$ is not an accessible set system to exhibit a $P_8$.
%
So suppose that $\D_{RN}(G)$ is not an independence system and let $A\in\D_{RN}(G)$ and $a\in A$ such that $A\setminus \{a\}\not\in \D_{RN}(G)$.
By Proposition~\ref{prop:ACS-emptyset}, $|A|\geq 2$.
Let $I\in DIR(A)$.
Clearly $Priv_{IR}(A,a)\not\subseteq I$.
Let $A'=A\setminus \{a\}$ and $I'=I\cup Priv_{IR}(A,a)$.
Then $I'$ dominates $IR(G)\setminus N(A')$.
By Corollary~\ref{cor:IR-private} there must be some $b\in A'$ such that $I'$ dominates $Priv_{IR}(A',b)$, hence $Priv_{IR}(A,b)$ as $Priv_{IR}(A,b)\subseteq Priv_{IR}(A',b)$.
Let $b$ be one such vertex.
%
We put $U=Priv_{IR}(A,a)\setminus N[I]$ and $V=Priv_{IR}(A,b)\setminus N[I]$.
Then neither $U$ nor $V$ is empty, $U\cap V=\emptyset$, and $U$ dominates~$V$.
Let $u\in U$ and $v\in V$ be such that $uv\in E(G)$ (such $u$ and $v$ exist since $U$ dominates $V$).
Since $u$ and $v$ are private neighbors of $a$ and $b$, $av,bu\not\in E(G)$.
Since $G$ is chordal, $ab\not\in E(G)$.
Then $auvb$ induces a $P_4$.
By Proposition~\ref{prop:IR-path} since $u,v$ are irredundant, there exists $u''$ and $u'$ such that $u''u'uvb$ induces a $P_5$.
%
Consider an irredundant vertex $w$ such that $N[w]\subseteq N[b]$.
Such a vertex exists since~$b$ is redundant.
Two cases arise depending on whether $w\in Priv_{IR}(A,b)$ or $w\not\in Priv_{IR}(A,b)$.
Let us consider the case $w\in Priv_{IR}(A,b)$.
It is illustrated in Figure~\ref{fig:P8ASC}.
Since $U$ dominates $V$ and $N[w]\subseteq N[b]$ we know that $w\not\in V$ (as otherwise $b$ is adjacent to a vertex of $U$, i.e., a private neighbor of~$a$).
Hence $w\in Priv_{IR}(A,b)\cap N[I]$.
Note that $w\not\in I$ as $N[w]\subseteq N[b]$ ($w$ cannot be part of an irredundant extension if it has no private neighbors).
Accordingly, consider $x\in I$ such that $wx\in E(G)$.
Since $N[w]\subseteq N[b]$, $xb\in E(G)$.
Since $v\not\in N[I]$, $xv\not\in E(G)$.
Now, since $x$ belongs to $I$ it has a private neighbor $y\in N[x]\setminus N[w]$.
As $G$ is chordal, $u''u'uvbxy$ induces a $P_7$, concluding the first part of the proposition in this case.
Let us now assume that $\D_{RN}(G)$ is not an accessible set system, that is $A\setminus \{c\}\not\in \D_{RN}(G)$ for any $c\in A$.
Observe that if the set obtained from $I'$ by replacing $x$ with $Priv_{IR}(A'\cup I', x)$ for all $x\in N(w)\cap I'$ does not dominate $Priv_{IR}(A',c)$ for any $c\in A'\setminus \{b\}$, then $w$ becomes a private neighbor of~$b$, and $A\setminus \{a\} \in \D_{RN}(G)$, a contradiction.
Consequently there must exist $x\in I$ such that $wx\in E(G)$ and $y\in Priv_{IR}(A'\cup I', x)$, $c\in A$ and $z\in Priv_{IR}(A',c)$ such that $yz\in E(G)$.
Also $yb,zx,cy\not\in E(G)$ as $y$ and $z$ are private neighbors of $x$ and~$c$.
Since $G$ is chordal, $u''u'uvbxyzc$ induces a $P_9$, concluding the second part of the proposition in this case.
Let us now consider the other case $w\not\in Priv_{IR}(A,b)$.
Then there must exist $c\in A\setminus \{b\}$ such that $wc\in E(G)$.
Since $N[w]\subseteq N[b]$, we have $bc\in E(G)$.
Consequently $a\neq c$.
Furthermore since $v$ is a private neighbor of $b$, $cv\not\in E(G)$.
Since $c\in A$ it has a private neighbor $z$, and $bz\not\in E(G)$.
As $G$ is chordal $u''u'uvbcz$ induces a $P_7$, concluding the first part of the proposition in this second case.
Let us now assume that $\D_{RN}(G)$ is not an accessible set system.
Then $A\setminus \{c\}\not\in \D_{RN}(G)$ for any $c\in A$.
Observe that if every private neighbor $z$ of $c$ is such that $N[z]\subseteq N[c]$, then replacing $c$ by every such private neighbors in $A\cup I$ yields a minimal dominating set $D$ of $G$ such that $D_{RN}=A\setminus \{c\}$, a contradiction.
Hence there exist $z\in N[c]\setminus N[b]$ and $z'\in N[z]\setminus N[c]$.
As $G$ is chordal $u''u'uvbczz'$ induces a $P_8$, concluding the second part of the proposition in this case, and the proof.
\end{proof}
\begin{proposition}\label{prop:Pk-Pk-4}
Let $G$ be a chordal graph and $C$ be an irredundant component of~$G$.
Then the graph $G[C]$ is $P_{k-4}$-free chordal whenever $G$ is $P_k$-free, $k\geq 6$.
\end{proposition}
\begin{proof}
We proceed by contradiction.
Let $G$ be a $P_k$-free graph, $k\geq 6$ and $C$ be an irredundant component of $G$.
Suppose that $G[C]$ is not $P_{k-4}$-free, and let $P_{uv}$ be an induced path on at least $k-4$ vertices in $G[C]$ with endpoints $u$ and~$v$.
Let $u^*$ and $v^*$ be the neighbors of $u$ and $v$ in $P_{uv}$ (possibly $u^*=v$ and $v^*=u$, or $u^*=v^*$).
By Proposition~\ref{prop:IR-path}, since $u,u^*$ and $v,v^*$ are irredundant and adjacent, there exist $u'',u',v',v''$ such that $u''u'P_{uv}v'v''$ induces a path on at least $k$ vertices in $G$, a contradiction.
\end{proof}
\begin{proposition}\label{prop:Na-connected}
Let $G$ be a chordal graph, $a\in RN(G)$, $C$ be an irredundant component of~$G$, and $u,v$ be two vertices in $C\cap N(a)$.
Then $N(a)$ contains every induced path from $u$ to~$v$.
In particular $G[N(a)\cap C]$ is connected.
\end{proposition}
\begin{proof}
Clearly the proposition holds if $uv\in E(G)$.
Let $u,v$ be two non-adjacent vertices in $C\cap N(a)$.
Let $P_{uv}$ be an induced path from $u$ to $v$ in $G[C]$.
One such path exists since $G[C]$ is connected.
Let us assume for contradiction that there exists $x\in P_{uv}$ such that $x\not\in N(a)$.
Consider $u^*$ and $v^*$ to be the first elements of $P_{uv}$ on the way from $x$ to $u$ and from $x$ to $v$, respectively, such that $u^*,v^*\in N(a)$ (possibly $u^*=u$ and $v^*=v$).
Consider the path $P_{u^*v^*}$ obtained from $P_{uv}$ and shortened at endpoints $u^*$ and $v^*$.
Then $P_{u^*v^*}$ is an induced path whose only vertices adjacent to $a$ are its endpoints, so that $P_{u^*v^*}$ together with $a$ induces a hole in $G$, a contradiction.
\end{proof}
\begin{proposition}\label{prop:Na-complete}
Let $G$ be a chordal graph and $a\in RN(G)$.
Then $a$ is partially adjacent to at most one irredundant component of $G$ (it is either disconnected from, or complete to, every other irredundant component of $G$) whenever $G$ is $P_9$-free chordal.
\end{proposition}
\begin{proof}
We proceed by contradiction.
Let us assume that $G$ is $P_9$-free chordal and that there exist two irredundant components $C_1,C_2$ such that $C_1\cap N(a)\neq \emptyset$, $C_2\cap N(a)\neq \emptyset$, and $C_1,C_2\not\subseteq N(a)$.
Let $u\in C_1\cap N(a)$, $u'\in C_1\setminus N(a)$, $v\in C_2\cap N(a)$ and $v'\in C_2\setminus N(a)$.
Consider a shortest path $P_{u'u}$ in $G[C_1]$ from $u'$ to $u$, and one $P_{vv'}$ in $G[C_2]$ from $v$ to $v'$.
These paths are induced.
Let $u^*$ and $v^*$ be the neighbors of $u'$ and $v'$ in $P_{u'u}$ and $P_{vv'}$, respectively (possibly $u^*=u$ and $v^*=v$).
By Proposition~\ref{prop:IR-path}, since $u',u^*$ and $v',v^*$ are irredundant and adjacent, there exist $u''$, $u'''$, $v''$ and $v'''$ such that $u'''u''u'u^*$ and $v^*v'v''v'''$ induce paths on four vertices in $G$.
Consider $x$ the last vertex in $P_{u'u}$ starting from $u$ which is adjacent to $a$, and $y$ the last vertex in $P_{vv'}$ starting from $v$ which is adjacent to $a$ (possibly $x=u^*$ and $y=v^*$ but $x\neq u'$, $y\neq v'$).
Consider the paths $P_{u'x}$ and $P_{yv'}$ obtained from $P_{u'u}$ and $P_{vv'}$ and shortened at endpoints $x$ and $y$.
Then $u'''u''P_{u'x}aP_{yv'}v''v'''$ induces a path on at least nine vertices in $G$, a contradiction.
\end{proof}
In the following, for a set $A\subseteq RN(G)$ we consider the following bipartition.
The part $B(A)$ contains the elements of $A$ having an irredundant private neighbor in some irredundant component $C$ such that $C\subseteq N(A)$.
Observe that no irredundant extension of $A$ can steal these private neighbors, as only $IR(G)\setminus N(A)$ has to be dominated by such extensions, and $C$ is disconnected from $IR(G)\setminus N(A)$.
The part $R(A)$ contains all other elements of~$A$.
We call {\em red} and {\em blue} vertices the elements of $R(A)$ and $B(A)$, respectively.
If $C_i$ is an irredundant component of $G$, then $R_i(A)$ denotes the red elements of $A$ having at least one private neighbor in $C_i$.
Recall that by Proposition~\ref{prop:Na-complete}, the elements of $A$ are partially adjacent to at most one irredundant component whenever $G$ is $P_9$-free chordal.
In particular, in this class, the red elements have their private neighbors in at most one irredundant component.
The next theorem follows.
\begin{theorem}\label{thm:P9-IR-characterization}
Let $G$ be a $P_9$-free chordal graph, $A\in \D_{RN}(G)$ and $I\subseteq IR(G)$.
%
Then $I$ is an irredundant extension of $A$ if and only if for every irredundant component $C_i$ of $G$, $D_i=I\cap C_i$ is minimal such that
\begin{itemize}
\item $D_i$ dominates $C_i\setminus N(A)$, but
\item $D_i$ does not dominate $Priv_{IR}(A,x)$ for any $x\in R_i(A)$.
\end{itemize}
\end{theorem}
We immediately derive the next two corollaries, observing for the first one that a minimal set $I$ as described in Theorem~\ref{thm:P9-IR-characterization} can be greedily obtained from a non-minimal such set, and for the second that by Proposition~\ref{prop:Pk-Pk-4}, every irredundant component $C$ of $G$ is a clique whenever $G$ is $P_7$-free chordal.
\begin{corollary}\label{cor:P9-IR-characterization}
Let $G$ be a $P_9$-free chordal graph and $A\subseteq RN(G)$.
%
Then $A\in \D_{RN}(G)$ if and only if every $a\in A$ has an irredundant private neighbor, and, for every irredundant component $C_i$ of $G$ there exists $D_i\subseteq C_i$ such that
\begin{itemize}
\item $D_i$ dominates $C_i\setminus N(A)$, but
\item $D_i$ does not dominate $Priv_{IR}(A,x)$ for any $x\in R_i(A)$.
\end{itemize}
\end{corollary}
\begin{corollary}\label{cor:P7-IR-characterization}
Let $G$ be a $P_7$-free chordal graph.
Then $\D_{RN}(G)=\{A\subseteq RN(G) \mid$ every $x\in A$ has a private neighbor in some irredundant component $C\subseteq N(A)$, i.e., $R(A)=\emptyset\}$.
\end{corollary}
\section{Enumerating the redundant part of minimal dominating sets}\label{sec:redundant}
This section is devoted to the complexity analysis of Line~\ref{line:main-forallRN} of Algorithm~\ref{algo:main}.
More precisely, we show that enumerating the redundant part of minimal dominating sets can be done with linear and polynomial delays in $P_7$-free and $P_8$-free chordal graphs.
Recall that by Proposition~\ref{prop:ASC-IS}, the set $\D_{RN}(G)$ is an accessible set system whenever $G$ is $P_8$-free chordal.
Hence, it is sufficient to be able to decide whether (i) a given set $A\subseteq RN(G)$ belongs to $\D_{RN}(G)$, and (ii) a given vertex $c$ of $A\in\D_{RN}(G)$ is a maximal generator of~$A$, in order to get an algorithm enumerating $\D_{RN}(G)$ without repetitions in this class.
We~call the first decision problem the {\em irredundant extension problem} (denoted by \textsc{IEP}), and the second the {\em maximal generator problem} (denoted by \textsc{MGP}).
The algorithm, given as Algorithm~\ref{algo:RN}, proceeds as follows.
Given $A\in \D_{RN}(G)$ (starting with $A=\emptyset$ according to Proposition~\ref{prop:ACS-emptyset}) it checks for every candidate vertex $c\in RN(G)\setminus A$ whether $A\cup \{c\}$ belongs to $\D_{RN}(G)$, whether $c$ is a maximal generator of $A\cup \{c\}$, and if so, makes a recursive call on such a set.
The correctness of the algorithm follows from the fact that $\D_{RN}(G)$ being an accessible set system, every set in $\D_{RN}(G)$ is accessible by such a procedure.
In particular, every set $A$ received by the algorithm belongs to $\D_{RN}(G)$.
Repetitions are avoided by the choice of $c$.
\begin{algorithm}
\SetAlgoLined
\SetKwProg{myproc}{Procedure}{}{}
\myproc{{\em \texttt{RNDom}($G$)}}{
\texttt{RecRNDom}($G, \emptyset$)\;
}
\SetKwProg{myproc}{Procedure}{}{}
\myproc{{\em \texttt{RecRNDom}($G, A$)}}{
{\bf output} $A$\;\label{line:RN-output}
\For{{\bf all} $c\in RN(G)\setminus A$\label{line:RN-forall}}
{
\If{$A\cup \{c\}\in \D_{RN}(G)$ {\bf and} $c$ {is a maximal generator of} $A\cup \{c\}$\label{line:RN-condition}}{
\texttt{RecRNDom}($G,A\cup\{c\}$)\;\label{line:RN-recursivecall}
}
}\label{line:RN-afterloop}
}
\caption{An algorithm enumerating the set $\D_{RN}(G)$ of a $P_8$-free chordal graph~$G$, relying on the fact that $\D_{RN}(G)$ is an accessible set system in this class.}
\label{algo:RN}
\end{algorithm}
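The generic traversal behind Algorithm~\ref{algo:RN} can be sketched as follows. This is a hypothetical Python illustration, where \texttt{in\_family} and \texttt{is\_max\_generator} are placeholder oracles for the two tests of Line~\ref{line:RN-condition}, not the actual implementation developed below.

```python
# A hypothetical sketch of RNDom/RecRNDom over an abstract accessible
# set system.  `in_family(B)` and `is_max_generator(B, c)` are
# placeholders for the two tests of the condition line: membership of
# B = A ∪ {c} in D_RN(G), and c being a maximal generator of B.
def rec_rn_dom(ground_set, A, in_family, is_max_generator, out):
    out.append(frozenset(A))                      # output A
    for c in sorted(ground_set - A):              # candidates c in RN(G) \ A
        B = A | {c}
        if in_family(B) and is_max_generator(B, c):
            rec_rn_dom(ground_set, B, in_family, is_max_generator, out)

def rn_dom(ground_set, in_family, is_max_generator):
    out = []
    rec_rn_dom(ground_set, frozenset(), in_family, is_max_generator, out)
    return out

# Toy run: with a trivial membership test and the index-based maximal
# generator test, every subset is produced exactly once.
subsets = rn_dom(frozenset({0, 1, 2}),
                 lambda B: True,
                 lambda B, c: c > max(B - {c}, default=-1))
```

As in the algorithm, repetitions are avoided solely by the choice of the candidate $c$.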
\subsection{Linear delay implementation in $P_7$-free chordal graphs.}
We show that there is a linear-delay implementation of Algorithm~\ref{algo:RN} in $P_7$-free chordal graphs.
The proof is technically involved and makes use of preprocessed arrays that are maintained throughout the computation.
\begin{theorem}\label{thm:P7-RN}
There is an $O(n+m)$ delay, $O(n^2)$ space and $O(n^2)$ preprocessing-time implementation of Algorithm~\ref{algo:RN} whenever $G$ is $P_7$-free chordal, where $n$ and $m$ respectively denote the number of vertices and edges in $G$.
\end{theorem}
\begin{proof}
Let $C_1,\dots,C_\ell$ denote the $\ell$ irredundant components of $G$.
For every $a\in RN(G)$, and according to Proposition~\ref{prop:Na-complete}, we write $C^a=C_i$ for the unique irredundant component $C_i$ to which $a$ is partially adjacent, if it exists, and set $C^a=\emptyset$ otherwise.
Note that the computation of such components, and the identification of $C^a$ for every $a\in RN(G)$ can be done in $O(n^2)$ preprocessing time and takes $O(n^2)$ space.
Consider $A\in \D_{RN}(G)$ as received by the algorithm.
Let $c\in RN(G)\setminus A$.
%
First observe that the condition of $c$ being a maximal generator of $A\cup \{c\}$ at Line~\ref{line:RN-condition} can be implicitly verified by selecting $c$ of index greater than those in $A$.
This can be done by computing the maximum index $\rho$ in $A$ before the loop in $O(n)$ time, and iterating on $c$ such that $c>\rho$ with no extra cost on the complexity of the loop.
%
We shall show, using preprocessed arrays maintained at each step of the loop, that testing whether $A\cup \{c\}\in \D_{RN}(G)$ takes $O(\deg(c))$ time.
Note that by Corollary~\ref{cor:P7-IR-characterization}, $A\cup \{c\}$ belongs to $\D_{RN}(G)$ if and only if (i) every $a\in A$ has a private neighbor in some irredundant component $C_j\subseteq N(A\cup \{c\})$, $j\in \intv{1}{\ell}$, and (ii) there exists $C_i$, $i\in \intv{1}{\ell}$ such that $C_i\not\subseteq N(A)$ and $C_i\subseteq N(A\cup\{c\})$.
%
Also, recall that by Proposition~\ref{prop:Pk-Pk-4} every component $C_1,\dots,C_\ell$ is a clique.
%
Let $T_1$ be an array of size $\ell$ such that $T_1[i]=|C_i|$ for every $i\in\intv{1}{\ell}$.
This array records, for every component, the number of vertices that are yet to be dominated.
Let $T_2$ be an array of size $n$ such that $T_2[y]=i$ if $y\in C_i$, and $T_2[y]=0$ otherwise (if $y$ is redundant).
Using these two arrays, one can access in constant time the number of vertices that are yet to be dominated in the unique clique $C_i$ to which $y$ belongs, by checking $T_1[T_2[y]]$.
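As a toy illustration, the arrays $T_1$ and $T_2$ and the constant-time lookup can be set up as follows. This is a hypothetical Python sketch, assuming vertices are labeled $0,\dots,n-1$ and \texttt{components} is a list of the cliques $C_i$.

```python
# Toy illustration of the arrays T_1 and T_2: for a vertex y, the value
# T1[T2[y]] is the number of not-yet-dominated vertices of the clique
# C_i containing y (index 0 is reserved for redundant vertices).
def build_tables(n, components):
    # components: hypothetical list of the cliques C_1, ..., C_l
    T1 = [0] * (len(components) + 1)
    T2 = [0] * n                      # T2[y] = 0 for redundant vertices
    for i, C in enumerate(components, start=1):
        T1[i] = len(C)                # initially no vertex is dominated
        for y in C:
            T2[y] = i
    return T1, T2

# Toy run on 6 vertices: two cliques, vertex 2 being redundant.
T1, T2 = build_tables(6, [[0, 1], [3, 4, 5]])
```

Dominating a vertex $y$ then amounts to decrementing \texttt{T1[T2[y]]}, which reaches zero exactly when the whole clique is dominated.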
%
Let $M_1$ be a two dimensional array of size $n\times 2$ such that $M_1[a][0]=|Priv_{IR}(A,a)\cap C^a|$ if $C^a\neq \emptyset$, $M_1[a][0]=-1$ otherwise, and $M_1[a][1]=|Priv_{IR}(A,a)\setminus C^a|$.
Let $M_2$ be an array of size $n$ such that $M_2[y]=a$ if $y\in Priv_{IR}(A,a)$, and $M_2[y]=0$ otherwise.
Let $M_3$ be an array of size $n$ such that, whenever $y\in Priv_{IR}(A,a)$ for some $a$, $M_3[y]=0$ if $y\in C^a$ and $M_3[y]=1$ otherwise.
Using these three arrays, one can access in constant time the number of irredundant private neighbors of the vertex $a$ such that $y\in Priv_{IR}(A,a)$, by checking $M_1[M_2[y]][0]$ and $M_1[M_2[y]][1]$.
The size of $Priv_{IR}(A,a)\cap C^a$ in the case where $y\in C^a$, and of $Priv_{IR}(A,a)\setminus C^a$ in the case where $y\not\in C^a$, is given by $M_1[M_2[y]][M_3[y]]$.
%
Finally, consider an array $W$ of size $n$ initialized to zero.
This array records whether a vertex $y$ is dominated by $A\cup \{c\}$: we set $W[y]=x$ if $y$ is adjacent to some $x\in A\cup \{c\}$, and $W[y]=0$ otherwise.
%
Note that these six arrays can be computed in $O(n^2)$ preprocessing time and $O(n^2)$ space.
We are now ready to detail each iteration of the loop at Line~\ref{line:RN-forall}.
When considering a new candidate vertex $c\in RN(G)\setminus A$, we do the following.
For each $y\in N(c)\cap IR(G)$, we set $W[y]:=c$, $M_2[y]:=c$ and $T_1[T_2[y]]:=T_1[T_2[y]]-1$ whenever $W[y]=0$ (i.e., if $y$ is not dominated by $A$).
Note that $T_1[T_2[y]]$ is decreased to zero if and only if $c$ verifies $C_i\not\subseteq N(A)$ and $C_i\subseteq N(A\cup\{c\})$ for $i=T_2[y]$.
The next claim follows.
\begin{claim}\label{claim:P7-1}
Deciding whether there exists $C_i$, $i\in \intv{1}{\ell}$ such that $C_i\not\subseteq N(A)$ and $C_i\subseteq N(A\cup\{c\})$ takes $O(\deg(c))$ time.
\end{claim}
If~$W[y]\neq 0$ then $y$ was already dominated by $A$, and in particular it might have been the private neighbor of some $a\in A$ given by both $M_2[y]$ and $W[y]$.
In that case (i.e., whenever $M_2[y]\neq 0$) we set $M_1[M_2[y]][M_3[y]]:=M_1[M_2[y]][M_3[y]]-1$, and $M_2[y]:=n+1$ (this value is set temporarily).
Note that $M_1[M_2[y]][0]$ (resp.~$M_1[M_2[y]][1]$) is decreased to zero if and only if $c$ steals all the private neighbors of $a\in A$ that are in irredundant components that are partially adjacent (resp.~complete) to $a$.
Also, observe that we still have $W[y]=a$ for all such $y$ in that case.
We prove the following claim.
\begin{claim}\label{claim:P7-2}
Deciding whether every $a\in A$ has a private neighbor in some irredundant component $C_j\subseteq N(A\cup \{c\})$, $j\in \intv{1}{\ell}$ takes $O(\deg(c))$ time.
\end{claim}
\begin{claimproof}
Consider some $y\in N(c)\cap IR(G)$ and let $a=M_2[y]$, $j=M_3[y]$.
Observe that if both $M_1[a][0]$ and $M_1[a][1]$ have value zero after updating $M_1[a][j]:=M_1[a][j]-1$, then we answer negatively ($a$ lost all its private neighbors).
If $M_1[a][1]$ does not equal zero, then we answer positively ($a$ has a private neighbor in a dominated irredundant component).
If $M_1[a][1]$ equals zero and $M_1[a][0]$ does not equal zero, then we need to check whether $C^a$ is dominated or not, that is whether $T_1[T_2[y]]$ equals zero or not.
We answer positively if it is the case, and negatively otherwise.
This covers all possibilities and the claim follows.
\end{claimproof}
A consequence of Claims~\ref{claim:P7-1} and~\ref{claim:P7-2} is that $A\cup \{c\}\in \D_{RN}(G)$ can be decided in $O(\deg(c))$ time in the condition of Line~\ref{line:RN-condition}.
%
Now, if $A\cup \{c\}\in \D_{RN}(G)$ then, for every $y\in N(c)\cap IR(G)$, we set $M_2[y]=0$ whenever $M_2[y]=n+1$ ($y$ is adjacent to both $c$ and some $a\in A$, so it is no longer a private neighbor), in time which is again bounded by $O(\deg(c))$.
Let us now consider the case where $c$ does not satisfy the conditions of Claims~\ref{claim:P7-1} and~\ref{claim:P7-2}, or where a backtrack is executed.
First, we undo the changes by setting $W[y]=0$ and $T_1[T_2[y]]:=T_1[T_2[y]]+1$ for every $y\in N(c)\cap IR(G)$ such that $W[y]=c$ (such a $y$ was not adjacent to any $a\in A$ and no other modifications occurred).
If $W[y]\neq c$ and $M_2[y]= n+1$ (in that case $y$ was the private neighbor of some $a\in A$, $a=W[y]$) then we set $M_2[y]=W[y]$ and $M_1[M_2[y]][M_3[y]]=M_1[M_2[y]][M_3[y]]+1$.
If $W[y]\neq c$ and $M_2[y]\neq n+1$ then $y$ was not adjacent to $c$ and no modification occurred.
This undo process also takes $O(\deg(c))$ time.
Since the sum of degrees of $G$ is bounded by $O(n+m)$, the time spent in the loop Line~\ref{line:RN-forall} is bounded by $O(n+m)$.
Let us finally consider the case where consecutive backtracks are executed.
Observe that in that case, up to $n$ consecutive steps of $O(n+m)$ time each may be computed without producing any output.
In order to avoid this, a common trick is to output half of the solutions while going down the recursion tree, and the other half when going up the tree.
This is done by moving the output of Line~\ref{line:RN-output} after the loop at Line~\ref{line:RN-afterloop} on odd depths of the recursion tree.
\end{proof}
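The alternating-output trick used at the end of the proof can be sketched generically. This is a hypothetical Python illustration over an abstract recursion tree, where \texttt{children} is an assumed successor function.

```python
# Sketch of the alternating-output trick: solutions are output before
# the recursive calls on even depths and after them on odd depths, so
# that consecutive outputs are separated by a bounded number of
# recursion-tree edges even when long backtracks occur.
def enumerate_alternating(node, children, depth, out):
    if depth % 2 == 0:
        out.append(node)                  # output on the way down
    for child in children(node):
        enumerate_alternating(child, children, depth + 1, out)
    if depth % 2 == 1:
        out.append(node)                  # output on the way up

# Toy run: the recursion tree of all subsets of {0, 1} as sorted tuples.
def children(t):
    start = t[-1] + 1 if t else 0
    return [t + (x,) for x in range(start, 2)]

out = []
enumerate_alternating((), children, 0, out)
```

Every node of the recursion tree is output exactly once; only the position of the output within the traversal changes.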
\subsection{Polynomial delay implementation in $P_8$-free chordal graphs.}\label{subsec:RN-P8}
We show that \textsc{IEP} and \textsc{MGP} can be solved in polynomial time in $P_8$-free chordal graphs.
This yields a polynomial-delay implementation of Algorithm~\ref{algo:RN} in the same class.
From now on and until the end of the section, let $G$ be a $P_8$-free chordal graph.
Recall that by Proposition~\ref{prop:Pk-Pk-4}, every irredundant component $C$ of $G$ induces a graph $H=G[C]$ that is $P_4$-free chordal.
It is known that every $P_4$-free chordal graph is the comparability graph of a tree poset~\cite{wolk1962comparability}, where two vertices of the graph are made adjacent if they are comparable in the poset.
To $H$ we associate its tree poset $T(H)$.
Note that in particular, the root of $T(H)$ is universal in $H$, and that $x\leq y$ implies $N_H[y]\subseteq N_H[x]$.
An example of a $P_4$-free chordal graph and its tree poset is given in Figure~\ref{fig:P4}.
\begin{figure}
\center
\caption{A $P_4$-free chordal graph $H$ and the Hasse diagram of its tree poset.
On such instance $p=4$, $X_1=\{t_1,1,4\}$, $X_2=\{5, t_3,t_4\}$, $X_3=\{t_6\}$, $X_4=\{t_7\}$ and $Y=\{3\}$.
Then $F=\{t_2,2,t_5\}$ and a set $D$ such that $D$ dominates $C\setminus (X\cup Y)$ but not $X_1,\dots,X_4$ is given by $D=\{t_2,t_3,t_5\}$.}\label{fig:P4}
\includegraphics{P4.pdf}
\end{figure}
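Since every connected $P_4$-free chordal (trivially perfect) graph has a universal vertex, the tree poset $T(H)$ can be recovered greedily. The following is a minimal sketch, in hypothetical Python, assuming \texttt{adj} maps each vertex to its set of neighbors; it returns parent pointers, the root having parent \texttt{None}.

```python
# Sketch: recover the tree poset T(H) of a connected P4-free chordal
# (trivially perfect) graph by repeatedly extracting a universal
# vertex, which becomes the root of the current subtree.
def tree_poset(adj):
    parent = {}

    def components(verts):
        # connected components of the subgraph induced by `verts`
        seen, comps = set(), []
        for s in verts:
            if s in seen:
                continue
            comp, stack = set(), [s]
            while stack:
                v = stack.pop()
                if v not in comp:
                    comp.add(v)
                    stack.extend((adj[v] & verts) - comp)
            seen |= comp
            comps.append(comp)
        return comps

    def build(verts, par):
        # a universal vertex of the induced subgraph (it always exists)
        r = next(v for v in verts if adj[v] >= verts - {v})
        parent[r] = par
        for comp in components(verts - {r}):
            build(comp, r)

    build(set(adj), None)
    return parent

# Toy run: vertex 0 is universal, {1, 2} induces an edge, 3 is pendant.
parent = tree_poset({0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}})
```

The parent pointers realize the order of $T(H)$: $x\leq y$ exactly when $x$ lies on the path from the root to $y$.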
In the following, let $A\subseteq RN(G)$ and $C$ be an irredundant component of $G$.
Let $x_1,\dots,x_p$ denote the $p$ elements of $R(A)$ having a private neighbor in $C$, $X_j=Priv_{IR}(A, x_j)$ for every $j\in \intv{1}{p}$, $X=X_1\cup\dots\cup X_p$ and $Y=(N(A)\cap C)\setminus X$ (this last set corresponds to the vertices in $C$ that are already dominated by $A$, but that are not private for any $x_i$, $i\in \intv{1}{p}$).
By Corollary~\ref{cor:P9-IR-characterization}, \textsc{IEP} can be tested independently on every such component by checking whether $X_1,\dots,X_p$ are non-empty, and, whether there exists $D\subseteq C$ such that
\begin{itemize}
\item $D$ dominates $C\setminus (X\cup Y)$, but
\item $D$ does not dominate any of $X_1,\dots,X_p$.
\end{itemize}
We will show that such a test can be conducted in linear time whenever $X_1,\dots,X_p$ and $Y$ are given by lists and arrays, a condition that can be fulfilled at low cost as in the implementation of Theorem~\ref{thm:P7-RN}.
In the remainder of this section, we write $r$ for the root of $T(H)$, and $F$ for the set of maximal elements of $T(H)$ which are neither in $X_1,\dots,X_p$ nor in $Y$ (hence no two elements of $F$ are comparable in $T(H)$).
One such instance is given in Figure~\ref{fig:P4}.
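The set $F$ can be extracted from $T(H)$ given parent pointers; a minimal sketch follows, in hypothetical Python, where \texttt{parent} maps each vertex to its predecessor in $T(H)$ (the root to \texttt{None}) and \texttt{Z} stands for $C\setminus (X\cup Y)$.

```python
# Sketch: F is the set of maximal elements of Z in the tree poset,
# i.e. the vertices of Z having no strict successor in Z.  A vertex
# of Z fails to be maximal exactly when it is a strict ancestor of
# some other element of Z.
def maximal_elements(parent, Z):
    Z = set(Z)
    not_maximal = set()
    for y in Z:
        p = parent[y]
        while p is not None:
            if p in Z:
                not_maximal.add(p)    # p lies strictly below y
            p = parent[p]
    return Z - not_maximal

# Toy run on the chain 0 < 1 < 2 with an extra child 3 of the root.
F = maximal_elements({0: None, 1: 0, 2: 1, 3: 0}, {0, 1, 3})
```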
\begin{lemma}\label{lem:P4-F}
Let $D$ be a subset of vertices of $H$.
Then $D$ dominates $C\setminus (X\cup Y)$ if and only if it dominates $F$.
\end{lemma}
\begin{proof}
The forward implication trivially holds since $F\subseteq C\setminus (X\cup Y)$.
For the converse, first observe that every vertex of $C\setminus (X\cup Y)$ belongs to the path in $T(H)$ from $r$ to some $x\in F$.
Consider any $x\in F$ and some set $D$ dominating $F$.
Then either $x\in D$, or there exists $y\in D$ such that $x<y$ or $y<x$.
In all such cases, every vertex of the unique path from $r$ to $x$ in $T(H)$ is comparable to $x$ or to $y$, hence dominated.
Consequently, $D$ dominates $C\setminus (X\cup Y)$.
\end{proof}
\begin{lemma}\label{lem:IEP-characterization}
There exists a set $D$ dominating $C\setminus (X\cup Y)$ but none of $X_1,\dots,X_p$ if and only if for every $x\in F$ there exists a leaf $t$ of $T(H)$ such that $x\leq t$ and $X_i\not\subseteq N_H[t]$ for every $i\in \intv{1}{p}$.
\end{lemma}
\begin{proof}
We show the first implication.
Let $D$ be a dominating set of $C\setminus (X\cup Y)$ which does not dominate $X_1,\dots,X_p$.
By Lemma~\ref{lem:P4-F}, every $x\in F$ is dominated by some $y\in D$.
Consider some such $x$ and $y$.
Then one of $x\leq y$ or $y<x$ holds.
Let $t$ be a leaf of $T(H)$ such that $y\leq t$ and $x\leq t$.
Then $t$ does not dominate any of $X_1,\dots,X_p$ as $y$ does not, and $y\leq t$ hence $N_H[y]\supseteq N_H[t]$.
Since $x\leq t$ the first implication follows.
As for the other implication, it is a consequence of Lemma~\ref{lem:P4-F}, observing that every leaf $t$ such that $x\leq t$ for some $x\in F$ dominates $x$.
\end{proof}
The next lemma shows that the characterization of Lemma~\ref{lem:IEP-characterization} can be checked for each element $x\in F$ independently.
\begin{lemma}\label{lem:IEP-independently}
Consider $X_j$, $j\in \intv{1}{p}$.
Then, either
\begin{itemize}
\item $X_j\subseteq \{y\in C \mid x<y\}$ for some unique $x\in F$, or
\item $X_j\subseteq \{y\in C \mid y<x\ \text{for some}\ x\in F\}$ and \textsc{IEP} can be answered negatively, or
\item $X_j\not\subseteq N_H[F]$ and it can be ignored when checking the characterization of Lemma~\ref{lem:IEP-characterization}.
\end{itemize}
\end{lemma}
\begin{proof}
Let $X_j$, $j\in \intv{1}{p}$.
First note that if $X_j\subseteq \{y\in C \mid x<y\}$ then such an $x$ is unique, or else $T(H)$ is not a tree.
Let us assume that $X_j\subseteq \{y\in C \mid y<x\ \text{for some}\ x\in F\}$.
Observe that these two cases are disjoint as otherwise there exist two elements of $F$ that are comparable, a contradiction.
Recall that by Lemma~\ref{lem:P4-F}, every dominating set $D$ of $C\setminus (X\cup Y)$ dominates $F$.
Now if $D$ dominates $F$ then it dominates every $y\in C$ such that $y<x$ for some $x\in F$, and $X_j$ consequently.
In that case \textsc{IEP} can be answered negatively.
Let us now assume that $X_j$ is not of the first two cases.
We first show by contradiction that there is no $x\in F$ such that $x<y$ for some $y\in X_j$.
%
Suppose that there exist two such $x$ and $y$.
Since $X_j$ is not of the first case there must be some $y'$, $y'\neq y$ such that $x\not<y'$.
Consider $a\in R(A)$ such that $X_j=Priv_{IR}(A,a)$.
Note that $x$ belongs to a shortest path from $y$ to $y'$ in $C$ (the common ancestor of $y$ and $y'$ in $T(H)$ must be smaller than $x$).
By Proposition~\ref{prop:Na-connected}, $x$ belongs to $N(a)\cap C$.
Hence it either belongs to $X_j$, or $Y$, contradicting the fact that $x\in F$.
Consequently, and since $X_j$ is not of the second case, $X_j\not\subseteq N_H[F]$.
Now, since the leaves of $T(H)$ selected in Lemma~\ref{lem:IEP-characterization} have their neighborhood included in that of $F$, $X_j$ can be ignored.
\end{proof}
\begin{lemma}\label{lem:IEP-algorithm}
There is an $O(n+m)$ time algorithm solving \textsc{IEP} whenever $G$ is $P_8$-free chordal, where $n$ and $m$ respectively denote the number of vertices and edges in $G$, provided that
\begin{itemize}
\item the leaves of $T(H=G[C])$,
\item the predecessors and the successors of every $x\in T(H)$, and
\item each of the sets $X_1,\dots,X_p$ and $Y$
\end{itemize}
are given by lists and arrays for every irredundant component $C$ of $G$.
\end{lemma}
\begin{proof}
Let us first focus on an irredundant component $C$ of $G$, and $H=G[C]$ its induced subgraph.
We want to decide whether $F$ can be dominated without dominating $X_1,\dots,X_p$.
Note that by assumption, the leaves of $T(H)$, the predecessors and the successors of every $x\in T(H)$, and each of the sets $X_1,\dots,X_p$ and $Y$ can be iterated in a time which is bounded by their size.
Furthermore, deciding whether a vertex belongs to one given set takes constant time.
The same assumptions hold for the set $Z=C\setminus (X\cup Y)$, which can be computed in $O(n_H)$ time by iterating on $X_1,\dots,X_p$ and $Y$.
The algorithm proceeds as follows.
First it computes $F$ by checking for every $x\in Z$ whether it has a successor in $Z$.
It computes the set $F^-=\{y\in C \mid y<x\ \text{for some}\ x\in F\}$ in an $n_H$-element array by adding the predecessors of every $x\in F$ one at a time.
Since the sum of degrees of $H$ is bounded by $O(n_H+m_H)$, this takes $O(n_H+m_H)$ time.
Then it tests for every set $X_1,\dots,X_p$ whether it is included in $F^-$ within the same time.
At this stage if we find an inclusion then we can answer negatively according to the second item of Lemma~\ref{lem:IEP-independently}, and can consider $X_1,\dots,X_p$ to be of the first type in the following.
For every set $X_j$, $j\in \intv{1}{p}$ we check whether it has two non-adjacent vertices.
This is done in $O(n_H+m_H)$ time by testing for every vertex in $X_j$ whether it has a neighbor in $X_j$, recalling that the sets $X_j$ are pairwise disjoint (we iterate through vertices and their neighborhoods only once).
If $X_j$ has no two non-adjacent vertices, then it is a path in $T(H)$ and we mark in an $n_H$-element array the indexes of the leaves that are greater than its maximal element (each of these leaves dominates $X_j$).
Similarly, computing the maximal element of every such $X_j$, and the indexes of the leaves that are greater than these maximal elements, can be done in $O(n_H+m_H)$ time.
%
If a set $X_j$, $j\in \intv{1}{p}$ has two non-adjacent vertices then no leaf $t$ of $T(H)$ dominates $X_j$ and it can be ignored for the next step.
We now proceed as follows according to Lemma~\ref{lem:IEP-independently}.
We check independently for every $x\in F$ whether it has a descendant leaf $t$ (to each $x\in F$ correspond pairwise disjoint sets of such leaves) which was not marked previously.
If it has one, then we answer positively.
If not, then $x$ (hence $F$) cannot be dominated without dominating one of $X_1,\dots,X_p$ and we answer negatively.
We now need to conduct this test for every irredundant component $C$ of $G$ independently.
Since irredundant components are subgraphs of $G$ we have that $n$ and $m$ are respectively bounded by the sums of $n_H$'s and $m_H$'s for every irredundant component $C$ of $G$ where $H=G[C]$, and the complexity follows.
\end{proof}
A corollary of Lemma~\ref{lem:IEP-algorithm} is the following, observing that $x$ is a maximal generator of $A$ if and only if $A\setminus \{y\}\not\in \D_{RN}(G)$ for any $y\in A$ of index greater than $x$, and that $n$ times $O(n+m)$ is bounded by $O(n\cdot m)$ since $G$ is connected.
\begin{corollary}\label{cor:MGP-algorithm}
There is an $O(n\cdot m)$ time algorithm solving \textsc{MGP} whenever $G$ is $P_8$-free chordal, where $n$ and $m$ respectively denote the number of vertices and edges in $G$, and assuming the conditions of Lemma~\ref{lem:IEP-algorithm}.
\end{corollary}
We can thus conclude the section with the following result.
\begin{theorem}\label{thm:P8-RN}
There is an $O(n^2\cdot m)$ delay and $O(n^2)$ space implementation of Algorithm~\ref{algo:RN} whenever $G$ is $P_8$-free chordal, where $n$ and $m$ respectively denote the number of vertices and edges in $G$.
\end{theorem}
\begin{proof}
By~Lemma~\ref{lem:IEP-algorithm} and Corollary~\ref{cor:MGP-algorithm}, there is an $O(n^2\cdot m)$ delay implementation of Algorithm~\ref{algo:RN} whenever the assumptions of Lemma~\ref{lem:IEP-algorithm} can be fulfilled at every step of the loop Line~\ref{line:RN-forall}.
Clearly, the representation $T(H)$ of $H=G[C]$ can be computed for every irredundant component $C$ of $G$ in $O(n^2)$ preprocessing time, and $O(n^2)$ space.
The lists and arrays containing the leaves of $T(H)$, the predecessors and the successors of every $x\in T(H)$, and that will contain the sets $X_1,\dots,X_p$, $Y$ and $Z=C\setminus (X\cup Y)$ at each step of loop Line~\ref{line:RN-forall} can also be computed within these preprocessing-time and space complexities.
Furthermore, the sets $X_1,\dots,X_p$ and $Y$ can be maintained at each step of loop as in the proof of Theorem~\ref{thm:P7-RN}, and in a time which is clearly upper-bounded by $O(n\cdot m)$ for each $c\in RN(G)\setminus A$.
We proceed as in the proof of Theorem~\ref{thm:P7-RN} to maintain an $O(n^2\cdot m)$ delay in case of consecutive backtracks.
The theorem follows.
\end{proof}
\section{Enumerating irredundant extensions}\label{sec:irredundant}
This section is devoted to the complexity analysis of Line~\ref{line:main-forallIR} of Algorithm~\ref{algo:main}.
More precisely, we show that irredundant extensions can be enumerated with linear and polynomial delays in $P_7$-free and $P_8$-free chordal graphs.
This allows us to conclude with the two main results of this paper.
\subsection{Irredundant extensions in $P_7$-free chordal graphs.}
Let $G$ be a $P_7$-free chordal graph and $A\in \D_{RN}(G)$.
Recall that by Corollary~\ref{cor:P7-IR-characterization}, every $x\in A$ has a private neighbor in some irredundant component $C\subseteq N(A)$.
Let $C_1,\dots,C_k$ denote the $k$ irredundant components of $G$ that are not dominated by $A$.
By Proposition~\ref{prop:Pk-Pk-4}, every such component is a clique.
Consequently we have that
\[
DIR(A)= \{\{x_1,\dots,x_k\} \mid x_i\in C_i,\ i\in\intv{1}{k}\}.
\]
Now, such a set can clearly be enumerated with $O(n+m)$ delay given $A$ and $C_1,\dots,C_k$.
Furthermore, these irredundant components are tracked at each step of the loop at Line~\ref{line:RN-forall} of Algorithm~\ref{algo:RN} in the implementation of Theorem~\ref{thm:P7-RN}.
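Since each undominated component is a clique, $DIR(A)$ is a cartesian product of the components and can be enumerated by a straightforward recursion. The following is a minimal sketch in hypothetical Python, with \texttt{components} an assumed list of the cliques $C_1,\dots,C_k$.

```python
# Sketch: DIR(A) consists of all transversals picking exactly one
# vertex x_i in each undominated clique C_i, i.e. the cartesian
# product of the components.
def enumerate_dir(components):
    def rec(i, partial):
        if i == len(components):
            yield frozenset(partial)
            return
        for x in components[i]:
            yield from rec(i + 1, partial + [x])
    yield from rec(0, [])

# Toy run with three cliques on disjoint vertex sets.
solutions = list(enumerate_dir([[1, 2], [3], [4, 5]]))
```

Each solution is produced after at most $k$ nested choices, hence with a delay linear in the instance.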
We conclude with the following theorem which improves a previous result of Kant\'e et al.~in~\cite{kante2014enumeration} on $P_6$-free chordal graphs.
\begin{theorem}\label{thm:P7}
There is an $O(n+m)$ delay, $O(n^2)$ space and $O(n^2)$ preprocessing-time algorithm enumerating $\D(G)$ whenever $G$ is $P_7$-free chordal, where $n$ and $m$ respectively denote the number of vertices and edges in $G$.
\end{theorem}
\subsection{Irredundant extensions in $P_8$-free chordal graphs.}
Let $G$ be a $P_8$-free chordal graph and $A\in \D_{RN}(G)$.
By Theorem~\ref{thm:P9-IR-characterization}, the intersection of an irredundant extension of $A$ with an irredundant component $C$ of $G$ is a minimal set $D\subseteq C$ such that
\begin{itemize}
\item $D$ dominates $C\setminus (X\cup Y)$, and
\item $D$ does not dominate any of $X_1,\dots,X_p$,
\end{itemize}
where $x_1,\dots,x_p$ denote the $p$ elements of $R(A)$ having a private neighbor in $C$, $X_j=Priv_{IR}(A, x_j)$ for every $j\in \intv{1}{p}$, $X=X_1\cup\dots\cup X_p$ and $Y=(N(A)\cap C)\setminus X$.
In the following, we associate with $A$ and $C$ the set $DIR(A,C)$ of all such minimal sets.
Then, if $C_1,\dots,C_k$ denote the $k$ irredundant components of $G$ that are not dominated by $A$, we have that
\[
DIR(A)= \{D_1\cup \dots \cup D_k \mid D_i\in DIR(A,C_i),\ i\in\intv{1}{k}\}.
\]
Clearly, such a set can be enumerated with $O(n^3\cdot m)$ delay given an algorithm enumerating $DIR(A,C_i)$ with $O(n^2\cdot m)$ delay for every irredundant component $C_1,\dots,C_k$, where $n$ and $m$ respectively denote the number of vertices and edges in $G$.
We shall show that such an algorithm exists.
Consider $C$, $X_1,\dots,X_p$ and $Y$ as described above.
Let $H=G[C]$.
Recall that by Proposition~\ref{prop:Pk-Pk-4}, $H$ is $P_4$-free chordal.
In the remainder of this section, we rely on the notation of Section~\ref{sec:redundant} and write $r$ for the root of $T(H)$ and $F$ for the set of maximal elements of $T(H)$ which are neither in $X_1,\dots,X_p$ nor in $Y$.
One such instance is given in Figure~\ref{fig:P4}.
We call {\em irredundant component extension problem}, denoted by \textsc{ICEP}, the following decision problem.
Given $S,Q\subseteq C$, is there a solution $D\in DIR(A,C)$ such that $S\subseteq D$ and $D\cap Q=\emptyset$?
We shall show that this problem can be solved in $O(n\cdot m)$ time, which, using the {\em backtrack search} technique, leads to an $O(n^2\cdot m)$ delay algorithm enumerating $DIR(A,C)$ in $P_8$-free chordal graphs.
\begin{lemma}\label{lem:ICEP}
There is an algorithm solving \textsc{ICEP} in $O(n\cdot m)$ time, assuming the conditions of Lemma~\ref{lem:IEP-algorithm}.
\end{lemma}
\begin{proof}
Observe that \textsc{IEP} restricted to a single component and \textsc{ICEP} only differ on the fact that the set $D\subseteq C$ should in addition satisfy $D\cap Q=\emptyset$ and should not dominate $Priv_H(S,s)\setminus (X\cup Y)$ for any $s\in S$, where $Priv_H(S,s)$ denotes the private neighborhood of $s\in S$ in $H$.
In that case, $D$ can be reduced to a minimal set $D^*$ such that $S\subseteq D^*$ and $D^*\cap Q=\emptyset$.
We show that these additional conditions can be handled at the cost of an increased complexity, relying on the proof of Lemma~\ref{lem:IEP-algorithm}.
%
Clearly, we can first answer negatively if $Priv_H(S,s)\setminus (X\cup Y)$ is empty for some $s\in S$.
Otherwise, the condition that $D$ does not dominate $Priv_H(S,s)\setminus (X\cup Y)$ for any $s\in S$ can be handled by adding extra sets $X_{p+i}=Priv_H(S,s_i)\setminus (X\cup Y)$ for every $i\in\intv{1}{q}$, where $s_1,\dots,s_q$ denote the elements of $S$, and updating $X:=X\cup X_{p+1}\cup \dots \cup X_{p+q}$.
Since these sets are connected, Lemma~\ref{lem:IEP-independently} still applies.
%
As for $D$ satisfying $D\cap Q=\emptyset$, we proceed as follows.
For every $x\in F$, and instead of only checking the descendant leaves of $x$, we iterate through all the descendants of $x$ and check whether $x$ has a successor $y$ such that $y\not\in Q$ and such that $y$ does not dominate any of $X_1,\dots,X_{p+q}$.
This can be done in $O(n\cdot m)$ time as we iterate through every such $y$ in $O(n+m)$ time, and check for each of these $y$'s whether it has some $X_j$ in its neighborhood in $O(n)$ time.
At this stage, and according to Lemmas~\ref{lem:IEP-characterization},~\ref{lem:IEP-independently} and~\ref{lem:IEP-algorithm}, we can answer yes if and only if every $x\in F$ has such a successor $y$.
\end{proof}
We can now conclude using the backtrack search technique that we briefly recall now.
Formal proofs are omitted.
The enumeration is a depth-first search of a tree whose nodes are partial solutions and leaves are solutions.
The algorithm constructs partial solutions by considering one vertex $x_i$ at a time (following some linear ordering $x_1,\dots,x_n$ of the vertices), checking at each step whether there is a final solution $D_1\in DIR(A,C)$ containing $S\cup \{x_i\}$ and not intersecting $Q$, and one $D_2\in DIR(A,C)$ containing $S$ and not intersecting $Q\cup \{x_i\}$.
This step is called the extension problem.
The algorithm recursively calls on such sets each time the extension is possible.
At first, $S$ and $Q$ are empty.
The delay time complexity is bounded by the depth of the tree (the number of vertices) times the time complexity of solving the extension problem, i.e., \textsc{ICEP}.
For further details on this technique, see for instance~\cite{read1975bounds,strozecki2019efficient}.
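The backtrack search scheme can be sketched abstractly as follows. This is a hypothetical Python illustration where \texttt{extends(S, Q)} stands for the extension oracle (here, \textsc{ICEP}); the toy oracle below is an assumption chosen only to make the run self-contained.

```python
# Sketch of backtrack search: branch on "x_i in the solution" versus
# "x_i excluded", guarded by an extension oracle extends(S, Q) that
# decides whether some solution contains S and avoids Q.
def backtrack_search(ground, extends, i=0, S=frozenset(), Q=frozenset()):
    if i == len(ground):
        yield S
        return
    x = ground[i]
    if extends(S | {x}, Q):              # extension problem, x forced in
        yield from backtrack_search(ground, extends, i + 1, S | {x}, Q)
    if extends(S, Q | {x}):              # extension problem, x forced out
        yield from backtrack_search(ground, extends, i + 1, S, Q | {x})

# Toy oracle: the solutions are exactly the 2-element subsets of the
# ground set, so extends(S, Q) checks that some 2-subset contains S
# and avoids Q.
ground = [0, 1, 2]
def extends(S, Q):
    free = set(ground) - S - Q
    return len(S) <= 2 and len(S) + len(free) >= 2

solutions = list(backtrack_search(ground, extends))
```

The delay is the depth of the tree times the cost of one oracle call, as stated above.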
We conclude with the next lemma and theorem, noting that the conditions of Lemma~\ref{lem:IEP-algorithm} can be fulfilled as in the proofs of Theorems~\ref{thm:P7-RN} and~\ref{thm:P8-RN}.
\begin{lemma}\label{lem:P8-IR}
There is an algorithm enumerating $DIR(A,C)$ with $O(n^2\cdot m)$ delay, assuming the conditions of Lemma~\ref{lem:IEP-algorithm}.
\end{lemma}
\begin{theorem}\label{thm:P8}
There is an $O(n^3\cdot m)$ delay and $O(n^2)$ space algorithm enumerating $\D(G)$ whenever $G$ is $P_8$-free chordal, where $n$ and $m$ denote the number of vertices and edges in $G$.
\end{theorem}
\section{Discussion}\label{sec:conclusion}
We investigated the enumeration of minimal dominating sets from their intersection with redundant vertices.
This technique was first introduced in~\cite{kante2014enumeration} and led to linear-delay algorithms in split and $P_6$-free chordal graphs.
We investigated generalizations of this technique to $P_k$-free chordal graphs for larger integers $k$.
In particular, we gave $O(n+m)$ and $O(n^3\cdot m)$ delay algorithms in the classes of $P_7$-free and $P_8$-free chordal graphs, where $n$ and $m$ respectively denote the number of vertices and edges in the graph.
As for $P_k$-free chordal graphs with $k\geq 9$, we now give evidence that the enumeration of $\D_{RN}(G)$ might require other techniques, as \textsc{IEP} becomes {\sf NP}-complete.
\begin{theorem}\label{thm:NPC}
\textsc{IEP} is {\sf NP}-complete even when restricted to $P_9$-free chordal graphs.
\end{theorem}
\begin{proof}
First notice that \textsc{IEP} belongs to {\sf NP}: a polynomial certificate is given by an irredundant set $I\subseteq IR(G)$ such that $A\cup I\in \D(G)$; such a certificate can be verified in polynomial time.
Given an instance $\varphi$ of \textsc{3SAT} with variables $x_1,\dots,x_n$ and clauses $C_1,\dots,C_m$, we construct a $P_9$-free chordal graph $G$ and a set $A\subseteq RN(G)$ such that $A$ admits an irredundant extension if and only if there exists a truth assignment of the variables of $\varphi$ that satisfies all the clauses.
In the following, we assume that the degenerate cases where a literal intersects every clause, where two clauses are equal, or where the number of variables or the number of clauses is less than three are excluded.
Then, the construction is the following.
The first part concerns the construction of a split graph $H$ which contains one vertex for each of the literals $x_i$ and $\neg x_i$, copies $u_i$ and $\neg u_i$ of these literals, and one vertex $c_j$ per clause $C_j$.
The graph induced by the $u_i$'s, $\neg u_i$'s and $c_j$'s is completed into a clique, while an edge is added between $u_i$ and $x_i$, between $\neg u_i$ and $\neg x_i$, and between a literal $x_i$ (resp.~$\neg x_i$) and a clause $c_j$ whenever the literal is contained into that clause.
%
As for the second part, it consists of a pendant path $x_iy_iz_i$ and $\neg x_i\neg y_i\neg z_i$ rooted at every literal $x_i$ and $\neg x_i$, and of a paw $a_ib_iv_iw_i$ (a triangle $a_iv_iw_i$ with a pendant edge $a_ib_i$) made adjacent to both $u_i$ and $\neg u_i$ only through $v_i$, for every $i\in \intv{1}{n}$.
%
The construction is illustrated in Figure~\ref{fig:NPC}.
It can easily be seen that the obtained graph $G$ is $P_9$-free chordal. Note, however, that a $P_8$ is induced by $b_ia_iv_iu_iu_jv_ja_jb_j$ for $i\neq j\in\intv{1}{n}$.
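The construction above can be sketched programmatically. In the sketch below the vertex names and the literal encoding are our own (clauses are tuples of nonzero integers, with $i$ encoding $x_i$ and $-i$ encoding $\neg x_i$); it is an illustration of the reduction, not an implementation claimed by the paper.

```python
def build_reduction_graph(n_vars, clauses):
    """Build the graph G of the reduction as an adjacency dict.

    Vertices are tagged tuples: ('x', i)/('nx', i) for literals,
    ('u', i)/('nu', i) for their copies, ('c', j) for clauses, and
    ('y', i), ('z', i), ('ny', i), ('nz', i), ('a', i), ('b', i),
    ('v', i), ('w', i) for the pendant paths and paws."""
    adj = {}
    def add_edge(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # The u_i's, neg u_i's and c_j's are completed into a clique.
    clique = ([('u', i) for i in range(1, n_vars + 1)]
              + [('nu', i) for i in range(1, n_vars + 1)]
              + [('c', j) for j in range(len(clauses))])
    for a in clique:
        for b in clique:
            if a != b:
                add_edge(a, b)
    for i in range(1, n_vars + 1):
        # Each literal is attached to its copy in the clique.
        add_edge(('x', i), ('u', i))
        add_edge(('nx', i), ('nu', i))
        # Pendant paths x_i y_i z_i and neg x_i neg y_i neg z_i.
        add_edge(('x', i), ('y', i)); add_edge(('y', i), ('z', i))
        add_edge(('nx', i), ('ny', i)); add_edge(('ny', i), ('nz', i))
        # Paw: triangle a_i v_i w_i with pendant edge a_i b_i,
        # attached to u_i and neg u_i only through v_i.
        add_edge(('a', i), ('v', i)); add_edge(('v', i), ('w', i))
        add_edge(('a', i), ('w', i)); add_edge(('a', i), ('b', i))
        add_edge(('v', i), ('u', i)); add_edge(('v', i), ('nu', i))
    # A literal is adjacent to every clause vertex containing it.
    for j, clause in enumerate(clauses):
        for lit in clause:
            v = ('x', lit) if lit > 0 else ('nx', -lit)
            add_edge(v, ('c', j))
    return adj
```

For a formula on three variables and three clauses the resulting graph has $3n_{\mathrm{vars}}$ clique copies and clause vertices plus ten gadget vertices per variable.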
\begin{figure}
\centering
\caption{The construction of $G$ in Theorem~\ref{thm:NPC}. Irredundant vertices are represented in black while redundant ones are in white. The vertices in the rectangle induce a clique, and $H$ induces a split graph.}
\includegraphics{NPC.pdf}
\label{fig:NPC}
\end{figure}
Let us show that $H=G[C]$ for an irredundant component $C$ of $G$.
First note that every vertex outside of $H$ has a neighbor that is not adjacent to $H$, so it cannot make a vertex from $H$ redundant.
Now if a vertex of $H$ makes another one of $H$ redundant then it cannot be a literal or some $u_i$, $\neg u_i$ (as it has either $y_i$, $\neg y_i$ or $v_i$ as a neighbor outside of $H$).
Also, it cannot be a clause as by assumption every two clauses differ on a literal, and no literal is complete to the clique.
Hence vertices of $H$ are all irredundant.
It is easily seen that the irredundant components of $G$ include $\{z_i\}$, $\{\neg z_i\}$, $\{b_i\}$ and $\{w_i\}$ for all $i\in\intv{1}{n}$, and that the redundant vertices of $G$ are the $a_i$'s, $v_i$'s, $y_i$'s and $\neg y_i$'s.
We conclude that $H$ cannot be extended, hence that $C$ is indeed an irredundant component of $G$, as claimed.
Now, let us set $A=RN(G)$ and show that $A\in \D_{RN}(G)$ if and only if there exists a truth assignment of the variables of $\varphi$ that satisfies all the clauses.
If $A\in \D_{RN}(G)$ then there exists an irredundant extension $D\in DIR(A)$.
Observe that only the $c_j$'s are to be dominated by $D$, i.e., $IR(G)\setminus N(A)=\{c_1,\dots,c_m\}$.
However, $D$ does not intersect any element of the clique of $H$ as otherwise it would dominate $\{u_i,\neg u_i\}$ and thus steal all the private neighbors of the $v_i$'s.
For the same reason, it cannot contain one literal and its negation.
Consequently $D$ corresponds to a truth assignment of the variables of $\varphi$ that satisfies all the clauses.
Consider now any truth assignment of the variables of $\varphi$ that satisfies all the clauses, and $D$ the associated set of vertices in $G$.
By construction, $D$ dominates all the $c_j$'s.
Furthermore, it does not steal any private neighbor from any vertex of $A$.
By Corollary~\ref{cor:IR-private}, we conclude that $A\in \D_{RN}(G)$, which completes the proof.
\end{proof}